智能控制例题汇编 (Intelligent Control Worked Examples)
% Compute the output of each hidden-layer neuron
neti = wij*X;
for j = 1:1
    for i = 1:q
        Oi(i,j) = 2/(1+exp(-neti(i,j))) - 1;
    end
end
% Compute the output of each output-layer neuron
netk = wki*Oi;
for i = 1:1
    for k = 1:l
        Ok(k,i) = 2/(1+exp(-netk(k,i))) - 1;
    end
end
% Compute the error function
E = (D-Ok)'*(D-Ok)/2;
if E < err_goal
    break
end
% Adjust the output-layer weights (gradient step plus momentum)
deltak = Ok.*(1-Ok).*(D-Ok);
w = wki;
wki = wki + lr*deltak*Oi' + a*(wki-wki0);
wki0 = w;
% Adjust the hidden-layer weights
deltai = Oi.*(1-Oi).*(deltak'*wki)';
w = wij;
wij = wij + lr*deltai*X' + a*(wij-wij0);
wij0 = w;
% Display the iteration count and the error
epoch
E
% Second stage of BP network operation (recall)
X1 = X;
neti = wij*X1;
% Display the results
Ok

epoch = 3
E = 2.2905e-011
Ok = 1.0000
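The update rules above can be checked against a toolbox-free reimplementation. Below is a minimal NumPy sketch; the network sizes, the single training sample, and the learning parameters are placeholder values chosen for this sketch, not taken from the source. One caveat: the true derivative of the bipolar sigmoid f(x) = 2/(1+e^(-x)) - 1 is (1-f^2)/2, which this sketch uses, whereas the MATLAB listing uses the logistic-style O.*(1-O).

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Bipolar sigmoid, as in the listing: 2/(1+exp(-x)) - 1."""
    return 2.0 / (1.0 + np.exp(-x)) - 1.0

# Hypothetical sizes and data (placeholders, not from the source)
X = np.array([[1.0], [0.5], [-0.5]])   # one 3-dimensional input sample
D = np.array([[0.5]])                  # desired output
q, l = 4, 1                            # hidden / output layer sizes
lr, a, err_goal = 0.3, 0.3, 1e-6       # learning rate, momentum, error goal

wij = rng.uniform(-0.5, 0.5, (q, X.shape[0])); wij0 = wij.copy()
wki = rng.uniform(-0.5, 0.5, (l, q));          wki0 = wki.copy()

for epoch in range(1, 20001):
    Oi = f(wij @ X)                        # hidden-layer outputs
    Ok = f(wki @ Oi)                       # output-layer outputs
    E = float((D - Ok).T @ (D - Ok)) / 2   # error function
    if E < err_goal:
        break
    # Output-layer update with momentum (true bipolar-sigmoid derivative)
    deltak = 0.5 * (1 - Ok**2) * (D - Ok)
    w = wki.copy()
    wki = wki + lr * deltak @ Oi.T + a * (wki - wki0)
    wki0 = w
    # Hidden-layer update with momentum
    deltai = 0.5 * (1 - Oi**2) * (wki.T @ deltak)
    w = wij.copy()
    wij = wij + lr * deltai @ X.T + a * (wij - wij0)
    wij0 = w
```

As in the listing, the momentum term adds a fraction `a` of the previous weight change, which smooths the descent on this single-sample problem.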
2. Abstracting the BP network into a single neuron:
nntwarnoff                          % temporarily disable Neural Network Toolbox warnings
p = [-3.0 2.0];                     % input vector
d = [0.4 0.8];                      % desired output values
[w,b] = initff(p,d,'logsig');       % initialize a feedforward network (up to three layers); w: weights, b: biases
df = 10;                            % display interval
max_epoch = 1000;                   % maximum number of training epochs
err_goal = 0.0001;                  % desired minimum error
lr = 1;                             % learning rate for the weight updates
tp = [df max_epoch err_goal lr];    % training control parameters
[w,b,epoch,tr] = trainbp(w,b,'logsig',p,d,tp);  % gradient-descent training; epoch: number of training steps
ploterr(tr,err_goal)                % tr: row vector of the training SSE; plot the error curve
pause
p = -3.0;
a = simuff(p,w,b,'logsig')          % simulate the network to test its output
TRAINBP:
0/1000 epochs, SSE = 0.103917.
10/1000 epochs, SSE = 0.0198673.
20/1000 epochs, SSE = 0.0164732.
30/1000 epochs, SSE = 0.0125821.
40/1000 epochs, SSE = 0.0085481.
50/1000 epochs, SSE = 0.00499354.
60/1000 epochs, SSE = 0.00246948.
70/1000 epochs, SSE = 0.00104823.
80/1000 epochs, SSE = 0.000395431.
90/1000 epochs, SSE = 0.000137702.
93/1000 epochs, SSE = 9.92659e-005.
a =
    0.4033
Conclusion: the training converges; at epoch 93 the SSE drops below the error goal, and the network returns 0.4033, close to the desired value 0.4.
p = 2.0;
0/1000 epochs, SSE = 0.139374.
10/1000 epochs, SSE = 0.000103699.
11/1000 epochs, SSE = 6.61035e-005.
a =
    0.7934
The training converges; at epoch 11 the SSE drops below the error goal, and the network returns 0.7934, close to the desired value 0.8.
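The old-toolbox calls above (initff/trainbp/simuff) are no longer available in modern MATLAB. As a hedged illustration of what trainbp does here, the following sketch fits a single logsig neuron to the same two points by plain batch gradient descent; the learning rate and epoch cap are choices made for this sketch, not the toolbox defaults.

```python
import numpy as np

def logsig(n):
    return 1.0 / (1.0 + np.exp(-n))

p = np.array([-3.0, 2.0])     # input samples
d = np.array([0.4, 0.8])      # desired outputs
w, b = 0.0, 0.0               # single neuron: y = logsig(w*p + b)
lr, err_goal = 0.5, 1e-4      # sketch values

for epoch in range(50000):
    y = logsig(w * p + b)
    e = d - y
    sse = float(e @ e)        # sum-squared error, as in the TRAINBP log
    if sse < err_goal:
        break
    grad = e * y * (1 - y)    # delta rule for a logsig unit
    w += lr * float(grad @ p)
    b += lr * float(grad.sum())
```

Since logsig can hit both targets exactly for some (w, b), the SSE can be driven below the error goal, mirroring the convergence seen in the log above.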
3. Function approximation with a three-layer BP network
clc
nntwarnoff
p = 0:0.1:2;          % start 0, end 2, step 0.1
t = sin(p*pi);
figure(1);
plot(p,t);
r = 1;
s1 = 5;
s2 = 1;
[w1,b1,w2,b2] = initff(p,s1,'tansig',t,'purelin');  % tan-sigmoid hidden layer, linear output layer
max_epoch = 3000;     % maximum number of training epochs
err_goal = 0.01;
lr = 0.01             % learning rate
[w1,b1,w2,b2] = trainbp(w1,b1,'tansig',w2,b2,'purelin',p,t,tp);  % gradient-descent training (tp as in the previous example)
t = simuff(p,w1,b1,'tansig',w2,b2,'purelin')  % simulate the trained network
figure(1)
plot(p,t)
lr =
    0.0100
0/5000 epochs, SSE = 20.2993.
10/5000 epochs, SSE = 7.38075.
20/5000 epochs, SSE = 3.09758.
30/5000 epochs, SSE = 1.54046.
40/5000 epochs, SSE = 0.908906.
50/5000 epochs, SSE = 0.622736.
60/5000 epochs, SSE = 0.478537.
70/5000 epochs, SSE = 0.396625.
80/5000 epochs, SSE = 0.343376.
90/5000 epochs, SSE = 0.304016.
100/5000 epochs, SSE = 0.271945.
110/5000 epochs, SSE = 0.244218.
120/5000 epochs, SSE = 0.219537.
130/5000 epochs, SSE = 0.197333.
140/5000 epochs, SSE = 0.177339.
150/5000 epochs, SSE = 0.159399.
160/5000 epochs, SSE = 0.143381.
170/5000 epochs, SSE = 0.12915.
180/5000 epochs, SSE = 0.116557.
190/5000 epochs, SSE = 0.10545.
200/5000 epochs, SSE = 0.095671.
210/5000 epochs, SSE = 0.0870709.
220/5000 epochs, SSE = 0.0795077.
230/5000 epochs, SSE = 0.0728516.
240/5000 epochs, SSE = 0.0669861.
250/5000 epochs, SSE = 0.0618077.
260/5000 epochs, SSE = 0.0572259.
270/5000 epochs, SSE = 0.0531618.
280/5000 epochs, SSE = 0.0495471.
290/5000 epochs, SSE = 0.0463231.
300/5000 epochs, SSE = 0.0434392.
310/5000 epochs, SSE = 0.040852.
320/5000 epochs, SSE = 0.0385242.
330/5000 epochs, SSE = 0.0364238.
340/5000 epochs, SSE = 0.0345232.
350/5000 epochs, SSE = 0.0327988.
360/5000 epochs, SSE = 0.03123.
370/5000 epochs, SSE = 0.0297991.
380/5000 epochs, SSE = 0.0284909.
390/5000 epochs, SSE = 0.0272919.
400/5000 epochs, SSE = 0.0261906.
410/5000 epochs, SSE = 0.0251768.
420/5000 epochs, SSE = 0.0242416.
430/5000 epochs, SSE = 0.023377.
440/5000 epochs, SSE = 0.0225763.
450/5000 epochs, SSE = 0.0218334.
460/5000 epochs, SSE = 0.0211427.
470/5000 epochs, SSE = 0.0204995.
480/5000 epochs, SSE = 0.0198995.
490/5000 epochs, SSE = 0.0193389.
500/5000 epochs, SSE = 0.0188141.
510/5000 epochs, SSE = 0.0183222.
520/5000 epochs, SSE = 0.0178603.
530/5000 epochs, SSE = 0.017426.
540/5000 epochs, SSE = 0.017017.
550/5000 epochs, SSE = 0.0166312.
560/5000 epochs, SSE = 0.0162667.
570/5000 epochs, SSE = 0.015922.
580/5000 epochs, SSE = 0.0155955.
590/5000 epochs, SSE = 0.0152857.
600/5000 epochs, SSE = 0.0149914.
610/5000 epochs, SSE = 0.0147115.
620/5000 epochs, SSE = 0.0144449.
630/5000 epochs, SSE = 0.0141906.
640/5000 epochs, SSE = 0.0139478.
650/5000 epochs, SSE = 0.0137156.
660/5000 epochs, SSE = 0.0134933.
670/5000 epochs, SSE = 0.0132802.
680/5000 epochs, SSE = 0.0130758.
690/5000 epochs, SSE = 0.0128793.
700/5000 epochs, SSE = 0.0126904.
710/5000 epochs, SSE = 0.0125084.
720/5000 epochs, SSE = 0.0123331.
730/5000 epochs, SSE = 0.0121638.
740/5000 epochs, SSE = 0.0120004.
750/5000 epochs, SSE = 0.0118424.
760/5000 epochs, SSE = 0.0116895.
770/5000 epochs, SSE = 0.0115413.
780/5000 epochs, SSE = 0.0113978.
790/5000 epochs, SSE = 0.0112584.
800/5000 epochs, SSE = 0.0111232.
810/5000 epochs, SSE = 0.0109917.
820/5000 epochs, SSE = 0.0108639.
830/5000 epochs, SSE = 0.0107395.
840/5000 epochs, SSE = 0.0106184.
850/5000 epochs, SSE = 0.0105004.
860/5000 epochs, SSE = 0.0103853.
870/5000 epochs, SSE = 0.0102731.
880/5000 epochs, SSE = 0.0101635.
890/5000 epochs, SSE = 0.0100565.
896/5000 epochs, SSE = 0.00999345.
t =
  Columns 1 through 7
    0.0260    0.2942    0.5742    0.7981    0.9354    0.9889    0.9628
  Columns 8 through 14
    0.8432    0.6121    0.2931   -0.0345   -0.3151   -0.5604   -0.7865
  Columns 15 through 21
   -0.9596   -1.0264   -0.9673   -0.7989   -0.5575   -0.2890   -0.0365
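The same function-approximation setup can be reproduced without the legacy toolbox. Below is a minimal NumPy sketch of a 1-5-1 network (tanh hidden layer, linear output) trained by batch gradient descent on sin(pi*x) over [0, 2]; the random seed, initialization range, and epoch cap are choices made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
p = np.linspace(0, 2, 21).reshape(1, -1)   # 21 samples on [0, 2], step 0.1
t = np.sin(np.pi * p)                      # target function
s1 = 5                                     # hidden-layer size, as above
W1 = rng.uniform(-0.5, 0.5, (s1, 1)); b1 = rng.uniform(-0.5, 0.5, (s1, 1))
W2 = rng.uniform(-0.5, 0.5, (1, s1)); b2 = rng.uniform(-0.5, 0.5, (1, 1))
lr, err_goal = 0.01, 0.01

h = np.tanh(W1 @ p + b1)
sse0 = float(((t - (W2 @ h + b2))**2).sum())   # initial error, for reference

for epoch in range(5000):
    h = np.tanh(W1 @ p + b1)          # tansig hidden layer
    y = W2 @ h + b2                   # purelin output layer
    e = t - y
    sse = float((e * e).sum())        # sum-squared error over all samples
    if sse < err_goal:
        break
    d2 = e                            # linear output: delta = error
    d1 = (1 - h * h) * (W2.T @ d2)    # tanh derivative: 1 - h^2
    W2 += lr * d2 @ h.T; b2 += lr * d2.sum(axis=1, keepdims=True)
    W1 += lr * d1 @ p.T; b1 += lr * d1.sum(axis=1, keepdims=True)
```

With a different random initialization the exact epoch count differs from the log above, but the SSE falls by orders of magnitude in the same way.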
4. Learning in a linear threshold unit
There are four single-variable training samples. Assume the initial weight is -2.5 and the initial threshold is 1.75. A MATLAB implementation of the problem is as follows:
%% Learning in a linear threshold unit
x = [1 -0.5 3 -2];
d = [1 -1 1 -1];
y = [0 0 0 0];
w0(1) = 1.75;
w1(1) = -2.5;
lr = 0.8;                % learning rate
i = 1;
k = 1;
for n = 1:1000
    y(1,i) = sign(w0(k)+w1(k)*x(1,i));
    if y(1,i) ~= d(1,i)
        w0(k+1) = w0(k)+lr*(d(1,i)-y(1,i));
        w1(k+1) = w1(k)+lr*(d(1,i)-y(1,i))*x(1,i);
    else
        w0(k+1) = w0(k);
        w1(k+1) = w1(k);
    end
    i = i+1;
    k = k+1;
    if i > 4
        i = 1;
        if y == d
            break;
        end
    end
end
w0
w1
figure, plot(w0)
figure, plot(w1)
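The same learning run can be traced in plain Python. The sketch below reimplements the loop above directly, taking sign(0) as +1 so the unit is a true threshold element:

```python
import numpy as np

x = np.array([1.0, -0.5, 3.0, -2.0])   # the four single-variable samples
d = np.array([1.0, -1.0, 1.0, -1.0])   # desired outputs
w0, w1, lr = 1.75, -2.5, 0.8           # initial threshold, weight, learning rate

for n in range(1000):
    y = np.zeros(4)
    for i in range(4):
        y[i] = 1.0 if w0 + w1 * x[i] >= 0 else -1.0
        if y[i] != d[i]:               # update only on a misclassification
            w0 += lr * (d[i] - y[i])
            w1 += lr * (d[i] - y[i]) * x[i]
    if np.array_equal(y, d):           # stop once a full pass is error-free
        break
```

With these initial values the run terminates after two passes, with w0 ≈ 0.15 and w1 ≈ 3.1, which classifies all four samples correctly.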
5. Build a perceptron network that implements the logical AND function, and implement it in MATLAB.
The MATLAB (*.m) file implementing the AND logic is as follows:
close all
clear, clc
% Define the variables
p = [0 0 1 1;
     0 1 0 1];
d = [0 0 0 1];
lr = maxlinlr(p,'bias')
%% Linear-network implementation
net1 = linearlayer(0,lr);
net1 = train(net1,p,d);
%% Perceptron implementation
net2 = newp([-1,1; -1,1],1,'hardlim');
net2 = train(net2,p,d);
%% Display the results
disp('Linear network output:')
Y1 = sim(net1,p)
disp('Linear network binary output:')
YY1 = Y1 >= 0.5
disp('Linear network final weights:')
w1 = [net1.iw{1,1}, net1.b{1,1}]
disp('Perceptron output:')
Y2 = sim(net2,p)
disp('Perceptron binary output:')
YY2 = Y2 >= 0.5
disp('Perceptron final weights:')
w2 = [net2.iw{1,1}, net2.b{1,1}]
plot([0 0 1],[0 1 0],'o')
hold on;
plot(1,1,'d')
x = -2:.2:2;
y1 = 1/2/w1(2) - w1(1)/w1(2)*x - w1(3)/w1(2);
plot(x,y1,'-')
y2 = -w2(1)/w2(2)*x - w2(3)/w2(2);
plot(x,y2,'--')
axis([-0.5,2,-0.5,2])
xlabel('x')
ylabel('y')
title('Linear network and perceptron solving the AND logic')
legend('0','1','linear-network decision boundary','perceptron decision boundary')
Running the script gives the following results:
Iteration 199:
Linear-network binary output:
yy =
    0  0  0  1
Mean squared error:
    0.062500
Weight vector:
w =
   -0.2500    0.5000    0.5000
Iteration 200:
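For comparison with the newp-based script, the perceptron learning rule for AND can also be written out directly. A minimal sketch, where the zero initial weights and unit learning rate are choices made for this sketch:

```python
import numpy as np

P = np.array([[0, 0, 1, 1],
              [0, 1, 0, 1]], dtype=float)   # one column per input sample
d = np.array([0, 0, 0, 1], dtype=float)     # AND truth table
w = np.zeros(2)
b = 0.0

def hardlim(n):
    """Hard-limit transfer function: 1 if n >= 0, else 0."""
    return (np.asarray(n) >= 0).astype(float)

for epoch in range(100):
    if np.array_equal(hardlim(w @ P + b), d):
        break                               # all four samples classified
    for i in range(4):
        e = d[i] - float(hardlim(w @ P[:, i] + b))
        w += e * P[:, i]                    # perceptron rule: dw = e*x
        b += e                              # bias update: db = e
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop terminates with a separating line.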
6. Implement the XOR function with a multilayer perceptron
if neti21(i,j) >= 0
    y(i,j) = 1;
else
    y(i,j) = 0;
end
% First stage of perceptron learning: learning the weights
clear
err = 0.001;          % desired minimum error
lr = 0.9;             % learning rate
max = 1000;           % maximum number of training iterations
x = [0 0 1 1;
     0 1 0 1;
     1 1 1 1];        % input training samples
T1 = [0 1 0 0];
T2 = [1 1 0 1];
xx = [T1; T2];
T = [0 1 1 0];        % desired outputs
[M,N] = size(x);      % M: number of input nodes, N: number of training samples
[L,N] = size(T);      % L: number of output nodes
Wij11 = rand(L,M);    % initialization: set all weights to small random numbers
Wij12 = rand(L,M);
Wij
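The listing above breaks off before the training loop. As an illustration of why XOR needs the hidden layer (a single perceptron cannot separate it), here is a hand-wired two-layer threshold network that computes XOR; the weights below are chosen by hand for this sketch, not learned:

```python
import numpy as np

def step(n):
    """Hard-limit threshold unit: 1 if n >= 0, else 0."""
    return (n >= 0).astype(float)

X = np.array([[0, 0, 1, 1],
              [0, 1, 0, 1]], dtype=float)   # one column per input pair
W1 = np.array([[1.0, 1.0],                  # hidden unit 1: fires for OR
               [1.0, 1.0]])                 # hidden unit 2: fires for AND
b1 = np.array([[-0.5], [-1.5]])
W2 = np.array([[1.0, -1.0]])                # output: OR and not AND
b2 = np.array([[-0.5]])

H = step(W1 @ X + b1)                       # hidden-layer outputs
Y = step(W2 @ H + b2)                       # XOR of the two inputs
```

The output unit computes "OR but not AND", which is exactly XOR, so Y reproduces the desired outputs T = [0 1 1 0] above.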