If I run it from the shell (unix) I get: Segmentation fault and see a
core dump in my processes. If I run it in the Python shell I get as
above:

  File "D:\Python24\Lib\site-packages\numpy\core\defmatrix.py", line 149, in
That's a Windows path... Does Windows even make full use of
4GB?
I'm afraid you're using terminology (and abbreviations!) that I can't follow.
Let me try to restate what's going on and you can correct me as I screw up. You
have a neural net that has 80 output units. You have 25,000 observations that
you are using to train the neural net. Each observation vector
I'm still not sure what was stopping the inner loop from working
earlier, but removing the redundancy in j=0 and so on seems to have
solved it.
Call me crazy, but be careful when programming Python in different text
editors and in general, i.e. cutting and pasting, tabbing and spacing.
Loops can
I'm running operations on large arrays of floats, approx. 25,000 x 80.
Python (scipy) does not seem to come close to using 4GB of wired memory,
but segfaults at around a gig. Everything works fine on smaller batches
of data around 10,000 x 80 and uses a max of ~600MB of memory. Any
ideas? Is this just too
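The sizes involved can be checked with a little arithmetic. A minimal sketch (shapes taken from the post; the square 25,000 x 25,000 intermediate is an assumption about what a full matrix product of column vectors could materialize, which would explain hitting a memory wall long before 4GB of raw data):

```python
# Back-of-envelope memory check for the shapes mentioned in the thread.
rows, cols = 25_000, 80
bytes_per_float = 8  # float64

# The raw 25,000 x 80 batch itself is tiny:
data_mb = rows * cols * bytes_per_float / 2**20
print(f"25,000 x 80 float64 array: {data_mb:.1f} MB")

# But a (25,000 x 1) column times its (1 x 25,000) transpose
# materializes a full 25,000 x 25,000 matrix:
outer_gb = rows * rows * bytes_per_float / 2**30
print(f"25,000 x 25,000 float64 matrix: {outer_gb:.1f} GB")
```

So the batch is on the order of 15 MB, while a single square intermediate of that row count is nearly 5 GB, consistent with a crash around the 1GB mark once a few copies are in flight.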
If I run it from the shell (unix) I get: Segmentation fault and see a
core dump in my processes. If I run it in the Python shell I get as
above:

  File "D:\Python24\Lib\site-packages\numpy\core\defmatrix.py", line 149, in __mul__
    return N.dot(self, other)
MemoryError
In your experience as one of
Good point. Finding the SSE using an absolute error matrix of (25000 x
1) is insane. I pulled out the error function (for now) and I'm back
in business. Thanks for all the great advice.
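A minimal sketch of the distinction (`err` here is a hypothetical stand-in for the (25000 x 1) error column; with the matrix class, `err * err.T` would materialize the huge square product from the traceback above, while either form below never builds anything bigger than the column itself):

```python
import numpy as np

# Hypothetical stand-in for the (25000, 1) error column from the thread.
err = np.arange(5, dtype=float).reshape(-1, 1)  # tiny example column

# Memory-safe SSE: square elementwise, then sum. No large intermediate.
sse = (err ** 2).sum()

# Equivalent inner product: (1 x n) @ (n x 1) yields a 1x1 result.
sse2 = (err.T @ err).item()

print(sse, sse2)  # both 30.0 for the column [0, 1, 2, 3, 4]
```

The dangerous variant is the outer product `err @ err.T` (or `matrix(err) * matrix(err).T`), which is n x n; for n = 25,000 that alone is ~4.7 GB of float64.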
--
http://mail.python.org/mailman/listinfo/python-list
Using large arrays of data I found it is MUCH faster to cast arrays to
matrices and then multiply the two matrices together
(scipy.matrix(ARRAY1)*scipy.matrix(ARRAY2)) in order to do a matrix
multiply of two arrays vs. scipy.matrixmultiply(ARRAY1, ARRAY2).
Are there any logical/efficiency errors
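For reference, the two styles being compared can be sketched as follows (modern `numpy` names assumed; `matrixmultiply` was the old Numeric-era spelling of `dot`, and the small shapes here are stand-ins for the 25,000 x 80 batches):

```python
import numpy as np

# Small stand-in arrays for the large batches in the post.
a = np.random.rand(100, 80)
b = np.random.rand(80, 50)

# Style 1: cast to the matrix class, which overloads * as matrix multiply.
via_matrix = np.matrix(a) * np.matrix(b)

# Style 2: plain-array matrix product (successor of matrixmultiply).
via_dot = np.dot(a, b)

# Both routes compute the same (100 x 50) product.
print(np.allclose(via_matrix, via_dot))  # True
```

Both call the same underlying dot routine, so any large timing gap would more likely come from surrounding copies or dtype conversions than from the multiply itself.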
I'm using rprop (not dependent on the error function in this case, i.e.
standard rprop vs. irprop or arprop) for an MLP tanh, sigmoid nnet as
part of a hybrid model. I guess I was using a little Matlab thinking
when I wrote the SSE function. My batches are about 25,000 x 80 so my
absolute error (diff
Ok, so I found out that even though mylist[] and all objects in it were
fine, i.e. id(mylist[i]) != id(mylist[all others]), what was happening is
that during a reproduction function shallow copies were being made,
making all offspring (genetic algorithm) have different
id(mylist[0..n]); however the
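A minimal sketch of that aliasing bug, assuming a hypothetical `Individual` class: a reproduction step that shallow-copies a parent produces offspring with distinct ids, yet they still share the same mutable attribute objects underneath, so mutating one mutates them all.

```python
import copy

# Hypothetical stand-in for a GA population member.
class Individual:
    def __init__(self, genes):
        self.genes = genes

parent = Individual([5, 10, 15])

shallow_child = copy.copy(parent)   # new object, but the SAME genes list
deep_child = copy.deepcopy(parent)  # new object with an independent genes list

shallow_child.genes[0] = 99
print(id(shallow_child) != id(parent))  # True -- ids differ, looks "fine"
print(parent.genes[0])                  # 99 -- parent corrupted via shared list
print(deep_child.genes[0])              # 5  -- deep copy is unaffected
```

This matches the symptom described: distinct `id()` values at the top level while changes still propagate across offspring. `copy.deepcopy` (or constructing fresh gene lists per child) breaks the sharing.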
The Problem (very basic, but strange):
I have a list holding a population of objects; each object has 5 vars
and appropriate functions to get or modify the vars. When objects in
the list have identical vars (like all = 5 for var a and all = 10 for
var b across all vars and objects) and I change