Brian Blais wrote:
> Hello,
>
> I am trying to translate some Matlab/mex code to Python, for doing neural
> simulations. This application is definitely computing-time limited, and I
> need to optimize at least one inner loop of the code, or perhaps even
> rethink the algorithm. The procedure is very simple, after initializing
> any variables:
>
> 1) select a random input vector, which I will call "x". Right now I have
>    it as an array, and I choose columns from that array randomly. In
>    other cases, I may need to take an image, select a patch, and then
>    make that a column vector.
>
> 2) calculate an output value, which is the dot product of "x" and a
>    weight vector, "w", so
>
>        y=dot(x,w)
>
> 3) modify the weight vector based on a matrix equation, like:
>
>        w=w+ eta * (y*x - y**2*w)
>                 ^
>                 |
>                 +---- learning rate constant
>
> 4) repeat steps 1-3 many times
>
> Brian
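In outline, steps 1-4 amount to a loop like the following (a minimal
sketch using modern NumPy, Numeric's successor; the array sizes and the
plain 1-D shapes are assumptions for illustration, and the learning rate
and iteration count are taken from the code quoted further down):

    import numpy as np

    eta = 0.001                                 # learning rate from the post
    n_inputs, n_patterns = 100, 1000            # assumed sizes
    x = np.random.rand(n_inputs, n_patterns)    # one input vector per column
    w = np.random.rand(n_inputs)                # weight vector

    for _ in range(250000):                     # step 4: repeat many times
        pat = np.random.randint(n_patterns)     # step 1: pick a random column
        xi = x[:, pat]
        y = np.dot(xi, w)                       # step 2: output value
        w += eta * (y * xi - y**2 * w)          # step 3: weight update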
ETA = 0.001
INV_TAU = 1.0/100.0

rather than:

> params={}
> params['eta']=0.001;
> params['tau']=100.0;

i.e., take the dictionary lookup (and the repeated division) out?

If you have a collection of random vectors, do you gain anything by
*choosing* them randomly?

I'm not exactly sure if it's equivalent to your existing code, but how
about:

    x=random((100,250000))
    w=random((100,1)); th=random((1,1))
    for vector in x:
        ...
        w=w+ ETA * (y*vector - y**2*w)
        th = th + INV_TAU *(y**2-th)
        ...

? (I'm not familiar with Numeric)

Gerard

> old_mx=0;
> for e in range(100):
>
>     rnd=randint(0,numpats,250000)
>     t1=time.time()
>     if 0: # straight python
>         for i in range(len(rnd)):
>             pat=rnd[i]
>             xx=reshape(x[:,pat],(1,-1))
>             y=matrixmultiply(xx,w)
>             w=w+params['eta']*(y*transpose(xx)-y**2*w);
>             th=th+(1.0/params['tau'])*(y**2-th);
>     else: # pyrex
>         dohebb(params,w,th,x,rnd)
>     print time.time()-t1
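For concreteness, here is the "straight python" branch quoted above with
the dictionary lookups and the repeated division hoisted out of the inner
loop, again sketched with modern NumPy. I've assumed w and each input are
plain 1-D arrays rather than the (100,1) column shapes above, and the
Pyrex dohebb branch is omitted, so treat this as an illustration rather
than a drop-in replacement:

    import numpy as np

    def run_epoch(w, th, x, rnd, eta=0.001, tau=100.0):
        # Hoist the constants out of the inner loop, per the suggestion above.
        inv_tau = 1.0 / tau
        for pat in rnd:                 # rnd holds random column indices
            xi = x[:, pat]              # step 1: the chosen input vector
            y = np.dot(xi, w)           # step 2: output value
            w = w + eta * (y * xi - y**2 * w)   # step 3: weight update
            th = th + inv_tau * (y**2 - th)     # threshold update
        return w, th

On CPython the hoisting saves only a little per iteration; at these sizes
the column slicing and the temporary arrays built by each expression tend
to dominate, which is presumably what the Pyrex dohebb version avoids.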