Hi,
How do I find the maximum possible array size for a given data type on a given
architecture?
For example, if I do the following on a 32-bit Windows machine:
matrix = np.zeros((8873,9400),np.dtype('f8'))
I get:
Traceback (most recent call last):
  File "<pyshell#115>", line 1, in <module>
    matrix = np.zeros((8873,9400),np.dtype('f8'))
MemoryError
If I reduce the matrix size, it works.
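For reference, the nominal footprint of the array is easy to work out from the
dtype's itemsize, and it comes to roughly 636 MiB, well under the ~2 GB of user
address space a 32-bit Windows process normally gets:

import numpy as np

shape = (8873, 9400)
dtype = np.dtype('f8')   # 8 bytes per element

# Total bytes the array needs as one contiguous block:
nbytes = dtype.itemsize * shape[0] * shape[1]
print(nbytes)            # 667249600 bytes, i.e. ~636 MiB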
However, the same command works fine on an equivalent 32-bit Linux machine
(presumably this is some limit on memory allocation in the Windows kernel?
Increasing the available RAM does not solve the problem).
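I can of course wrap the call and catch the MemoryError after the fact; a
minimal sketch:

import numpy as np

def try_alloc(shape, dtype='f8'):
    # Attempt the allocation; return None instead of raising on failure.
    try:
        return np.zeros(shape, np.dtype(dtype))
    except MemoryError:
        return None

matrix = try_alloc((8873, 9400))
if matrix is None:
    print("allocation failed")

But that only detects the problem once the allocation has already been
attempted.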
Is there a way to find this limit? When distributing software to users (who
run a variety of architectures), it would be great to check this before
running the process and catch the error before the user hits "run". The
closest I have come is the rough pre-flight check sketched below.
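Here is what I mean, assuming the third-party psutil package (the name
can_allocate and the use of "available" as a bound are my own guesses, not an
established recipe):

import numpy as np
import psutil  # third-party; assumed to be installable on the target machines

def can_allocate(shape, dtype='f8'):
    # Compare the array's nominal footprint with what the OS reports as
    # currently available memory.  NOTE: this ignores address-space
    # fragmentation, so a True result is no guarantee on 32-bit Windows.
    needed = np.dtype(dtype).itemsize * int(np.prod(shape))
    return needed <= psutil.virtual_memory().available

print(can_allocate((8873, 9400)))

Is something along these lines the right approach, or is there a more direct
way to query the real per-process limit?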
Many thanks in advance,
Tom