python - How to serialize/deserialize PyBrain networks?


The PyBrain Python library provides (among other things) easy-to-use artificial neural networks.

I fail to serialize/deserialize PyBrain networks using either pickle or cPickle.

See the following example:

    from pybrain.datasets            import SupervisedDataSet
    from pybrain.tools.shortcuts     import buildNetwork
    from pybrain.supervised.trainers import BackpropTrainer
    import cPickle as pickle
    import numpy as np

    # generate some data
    np.random.seed(93939393)
    data = SupervisedDataSet(2, 1)
    for x in xrange(10):
        y = x * 3
        z = x + y + 0.2 * np.random.randn()
        data.addSample((x, y), (z,))

    # build a network and train it
    net1 = buildNetwork(data.indim, 2, data.outdim)
    trainer1 = BackpropTrainer(net1, dataset=data, verbose=True)
    for i in xrange(4):
        trainer1.trainEpochs(1)
        print '\tvalue after %d epochs: %.2f' % (i, net1.activate((1, 4))[0])

This is the output of the above code:

    total error: 201.501998476
        value after 0 epochs: 2.79
    total error: 152.487616382
        value after 1 epochs: 5.44
    total error: 120.48092561
        value after 2 epochs: 7.56
    total error: 97.9884043452
        value after 3 epochs: 8.41

As you can see, the network's total error decreases as training progresses, and the predicted value approaches the expected value of 12.

Now for a similar exercise, but one that includes serialization/deserialization:

    print 'creating net2'
    net2 = buildNetwork(data.indim, 2, data.outdim)
    trainer2 = BackpropTrainer(net2, dataset=data, verbose=True)
    trainer2.trainEpochs(1)
    print '\tvalue after %d epochs: %.2f' % (1, net2.activate((1, 4))[0])

    # so far, so good. let's test pickle
    pickle.dump(net2, open('testNetwork.dump', 'w'))
    net2 = pickle.load(open('testNetwork.dump'))
    trainer2 = BackpropTrainer(net2, dataset=data, verbose=True)
    print 'loaded net2 using pickle, continue training'
    for i in xrange(1, 4):
        trainer2.trainEpochs(1)
        print '\tvalue after %d epochs: %.2f' % (i, net2.activate((1, 4))[0])

This is the output of that block:

    creating net2
    total error: 176.339378639
        value after 1 epochs: 5.45
    loaded net2 using pickle, continue training
    total error: 123.392181859
        value after 1 epochs: 5.45
    total error: 94.2867637623
        value after 2 epochs: 5.45
    total error: 78.076711114
        value after 3 epochs: 5.45

As you can see, training still seems to have an effect on the network (the reported total error value continues to decrease), but the output value of the network freezes on the value from the first training iteration.

Is there a caching mechanism I need to be aware of that causes this erroneous behaviour? Are there better ways to serialize/deserialize PyBrain networks?

Relevant version numbers:

  • Python 2.6.5 (r265:79096, Mar 19 2010, 21:48:26) [MSC v.1500 32 bit (Intel)]
  • NumPy 1.5.1
  • cPickle 1.71
  • PyBrain 0.3

P.S. I have created a bug report on the project's site and will keep both this question and the bug tracker updated.

Cause

The mechanism that causes this behavior is the handling of parameters (.params) and derivatives (.derivs) in PyBrain's modules: the network's parameters are stored in one flat array, but the individual module and connection objects have access to "their own" .params, which are merely views on slices of that total array. This allows both local and network-wide writes and read-outs on the same data structure.

Apparently, this slice-view link gets lost during pickling and unpickling.
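The loss of the slice-view link can be illustrated with plain NumPy, independently of PyBrain (this is an illustrative sketch, not PyBrain's actual code): a view writes through to the shared array, but a pickled-and-restored view is an independent copy.

```python
import pickle
import numpy as np

total = np.zeros(4)        # "network-wide" flat parameter array
view = total[1:3]          # a "module-local" view sharing memory with `total`

view[:] = 7.0
assert total[1] == 7.0     # writes through the view are visible globally

# after a pickle round-trip, the view becomes an independent array
restored = pickle.loads(pickle.dumps(view))
restored[:] = 9.0
assert total[1] == 7.0     # the original array no longer sees the write
assert not np.shares_memory(total, restored)
```

This is exactly the failure mode described above: the restored "module" arrays keep being updated by training, but the array the network reads from during activation is no longer the same memory.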

Solution

Insert

    net2.sorted = False
    net2.sortModules()

after loading from the file (which recreates the sharing), and it should work.

