Shanghai Jiao Tong University — Principles and Applications of Neural Networks, Homework 2
- Document ID: 16119821
- Upload date: 2022-11-20
- Format: PDF
- Pages: 11
- Size: 676.53 KB
$$
\frac{\partial n_{k+1,j}}{\partial n_{k,i}}
= u_{k+1,ji}\,\frac{\partial x_{k,i}^{2}}{\partial n_{k,i}}
+ v_{k+1,ji}\,\frac{\partial x_{k,i}}{\partial n_{k,i}}
= u_{k+1,ji}\,\frac{\partial f^{2}(n_{k,i})}{\partial n_{k,i}}
+ v_{k+1,ji}\,\frac{\partial f(n_{k,i})}{\partial n_{k,i}}
= \bigl(2u_{k+1,ji}\,x_{k,i} + v_{k+1,ji}\bigr)\,f'(n_{k,i})
= \bigl(2u_{k+1,ji}\,x_{k,i} + v_{k+1,ji}\bigr)\,(1 - x_{k,i})\,x_{k,i}
$$

where the last step uses the sigmoid derivative $f'(n) = f(n)\,(1 - f(n))$. In matrix form, with $X^{k} = \operatorname{diag}(\mathbf{x}^{k})$,

$$
\frac{\partial \mathbf{n}^{k+1}}{\partial \mathbf{n}^{k}}
= \bigl(2U^{k+1}X^{k} + V^{k+1}\bigr)\,\dot{F}(\mathbf{n}^{k}).
$$

We can now write out the recurrence relation for the sensitivity by using the chain rule in matrix form:

$$
\mathbf{s}^{k} = \frac{\partial \hat{F}}{\partial \mathbf{n}^{k}}
= \left(\frac{\partial \mathbf{n}^{k+1}}{\partial \mathbf{n}^{k}}\right)^{\!T}
\frac{\partial \hat{F}}{\partial \mathbf{n}^{k+1}}
\tag{7}
$$

$$
\phantom{\mathbf{s}^{k}} = \dot{F}(\mathbf{n}^{k})\,\bigl(2U^{k+1}X^{k} + V^{k+1}\bigr)^{T}\,\mathbf{s}^{k+1}
\tag{8}
$$

The starting point, $\mathbf{s}^{M}$, for the recurrence relation has been derived:

$$
\mathbf{s}^{M} = -2\,\dot{F}(\mathbf{n}^{M})\,(\mathbf{t} - \mathbf{x}^{M})
\tag{9}
$$

2. Write a program to realize it (3 layers).

Ans. cf. src/mlqp.m

3. Run your program for pattern classification on the two-spiral dataset, which is the same as in homework one. You can choose 10 hidden units in this problem.

Ans. We first discuss the process of convergence, with the learning rate set to 0.1 and the initial values chosen at random. We use the MSE (Mean Squared Error) to measure the learning quality of our NN; since the final outputs are classifications, we also report the Error Rate (Figure 1). Figure 2 presents the process of convergence. From Figure 1b we notice that the Error Rate changes sharply when the iteration count is between 200 and 250, so we snapshot additional decision boundaries in that range.

Trying 3 different learning rates. We choose three learning rates, 0.1, 0.5, and 1, while fixing the initial values to the same random values. Results are shown in Figure 3.

Trying 3 different initial values. Three kinds of initial values are chosen: random values, all 0.01, and all 0. Here we fix the learning rate to 0.1. Results are shown in Figure 4.

[Figure 1: Results of Ex. 2.3 (learning rate: 0.1, initial values: random). (a) Mean Squared Error, (b) Error Rate; each panel plots Train and Test curves over 800 iterations.]

[Figure 2: Process of convergence. Decision-boundary snapshots at iterations 100, 200, 214, 220, 250, and 500.]

[Figure 3: Results of Ex. 2.3 (initial values: random). (a) MSE, learning rate 0.5; (b) MSE, learning rate 1; (c) ER, learning rate 0.5; (d) ER, learning rate 1.]

[Figure 4: Results of Ex. 2.3 (learning rate: 0.1). (a) MSE, initial values all 0; (b) MSE, initial values all 0.01; (c) ER, initial values all 0; (d) ER, initial values all 0.01.]

Discussing the algorithm convergence

A. Different Learning Rates. Comparing Figure 3 with Figure 1, we can see that the larger the learning rate, the faster the algorithm converges, but the rougher the MSE and ER curves become. When the learning rate is 0.5, the correct decision boundary can still be learned. But when the learning rate is as large as 1, the NN can hardly find the correct final decision boundary, and the algorithm converges to a local minimum (the ER does not reach 0).

B. Different Initial Values. Comparing Figure 4 with Figure 1, we find that setting the initial values to all 0s or all 0.01s does not affect the smoothness of the MSE and ER curves, but both runs converge to a local minimum. To understand these local minima intuitively, we draw the decision boundaries for initial values of all 0s and all 0.01s (Figure 5).

[Figure 5: Decision boundaries for initial values of all 0s and all 0.01s; (a) …]
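As a sanity check on the problem 1 derivation, the recurrence of eqs. (7)–(9) can be sketched in NumPy and validated against finite differences. The graded implementation is src/mlqp.m; the function names, shapes, and the small 2-3-1 test network below are illustrative assumptions, not the original code. Since $\mathbf{n}^{k}$ depends on the bias $\mathbf{b}^{k}$ additively, $\mathbf{s}^{k} = \partial e/\partial \mathbf{n}^{k}$ equals $\partial e/\partial \mathbf{b}^{k}$ exactly, which gives a direct numerical check.

```python
import numpy as np

def sigmoid(n):
    return 1.0 / (1.0 + np.exp(-n))

def forward(params, x0):
    """MLQP forward pass: n^k = U^k (x^{k-1})^2 + V^k x^{k-1} + b^k, x^k = f(n^k).
    params is a list of (U, V, b) per layer; all vectors are column vectors."""
    xs = [x0]
    for U, V, b in params:
        xs.append(sigmoid(U @ xs[-1] ** 2 + V @ xs[-1] + b))
    return xs

def sensitivities(params, xs, t):
    """Backward recurrence of eqs. (8)-(9), using f'(n) = x (1 - x) for the sigmoid:
    s^M = -2 F'(n^M)(t - x^M),  s^k = F'(n^k)(2 U^{k+1} diag(x^k) + V^{k+1})^T s^{k+1}."""
    M = len(params)
    s = -2 * xs[M] * (1 - xs[M]) * (t - xs[M])   # eq. (9)
    ss = [s]
    for k in range(M - 1, 0, -1):
        U, V, _ = params[k]                      # weights of layer k+1
        J = 2 * U * xs[k].T + V                  # scales column i of U by x^k_i: 2 U diag(x^k) + V
        s = xs[k] * (1 - xs[k]) * (J.T @ s)      # eq. (8)
        ss.append(s)
    return ss[::-1]                              # ss[k-1] holds s^k

def loss(params, x0, t):
    """Squared-error criterion e = (t - x^M)^T (t - x^M)."""
    return float(np.sum((t - forward(params, x0)[-1]) ** 2))
```

Because $\mathbf{s}^{k} = \partial e/\partial \mathbf{b}^{k}$, perturbing each bias entry by $\pm\varepsilon$ and differencing the loss should reproduce the sensitivities to roughly $\varepsilon^{2}$ accuracy.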
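The two-spiral data from homework one is not reproduced in this document; for readers who want to rerun the experiments, one common parametric construction of such a set is sketched below. The function name, the 1.5-turn extent, and the radius scale are assumptions chosen to roughly match the plot ranges in the figures, not the original dataset.

```python
import numpy as np

def two_spirals(n=100, turns=1.5, noise=0.0, seed=0):
    """Two interleaved spirals, one per class; the second spiral is the
    first rotated by pi. Returns X of shape (2n, 2) and labels y in {0, 1}."""
    rng = np.random.default_rng(seed)
    theta = np.linspace(0.0, turns * 2.0 * np.pi, n)
    r = 0.5 + theta / np.pi                 # radius grows linearly with the angle
    spiral = np.c_[r * np.cos(theta), r * np.sin(theta)]
    X = np.vstack([spiral + rng.normal(0.0, noise, (n, 2)),
                   -spiral + rng.normal(0.0, noise, (n, 2))])
    y = np.r_[np.zeros(n), np.ones(n)]
    return X, y
```

With noise=0 the two classes are exact point reflections of each other through the origin, which matches the symmetry visible in the decision-boundary plots.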