On 4/23/2011 2:47 PM, Charles R Harris wrote:
> On Sat, Apr 23, 2011 at 3:38 PM, Christoph Gohlke <cgoh...@uci.edu> wrote:
>
> > On 4/23/2011 10:41 AM, Charles R Harris wrote:
> > >
> > > On Sat, Apr 23, 2011 at 11:09 AM, Bruce Southey <bsout...@gmail.com> wrote:
> > >
> > > > On Sat, Apr 23, 2011 at 9:58 AM, Till Stensitzki <mail.t...@gmx.de> wrote:
> > > > >
> > > > > > Do you also have an earlier version of numpy installed? As David
> > > > > > says, this should raise an error for recent numpy and I'm wondering
> > > > > > if you are inadvertently running an earlier version.
> > > > > >
> > > > > > Chuck
> > > > >
> > > > > I only have one python installation and numpy.__version__ shows 1.6b.
> > > > > I could reinstall numpy, if it would help.
> > > >
> > > > Hi,
> > > > I can get this with 64-bit Win 7, 32-bit Python 2.6, 2.7 (below) and
> > > > 3.1, and numpy 1.6b (fresh install), in IDLE and on the command line.
> > > > I can also confirm the 'ValueError' with Python 2.6 and numpy 1.5.1 on
> > > > the same system.
> > > >
> > > > Actually this is 'weird' when printing, and it crashed with the range -
> > > > accessing unassigned memory?
> > > > A smaller array gives a numpy error or memory error in IDLE.
> > > >
> > > > Bruce
> > > >
> > > > >>> import numpy as np
> > > > >>> x=np.zeros((262144, 262144))
> > > > >>> x
> > > > array([], shape=(262144, 262144), dtype=float64)
> > > > >>> x[0,0]
> > > > 2.1453735050108555e-314
> > > > >>> x[1:10,1:10]
> > > >
> > > > ================================ RESTART ================================
> > > > >>> import numpy as np
> > > > >>> x=np.zeros((26214, 26214))
> > > > Traceback (most recent call last):
> > > >   File "<pyshell#7>", line 1, in <module>
> > > >     x=np.zeros((26214, 26214))
> > > > ValueError: array is too big.
> > > > >>>
> > > > >>> x=np.zeros((262144, 26214))
> > > > Traceback (most recent call last):
> > > >   File "<pyshell#8>", line 1, in <module>
> > > >     x=np.zeros((262144, 26214))
> > > > MemoryError
> > >
> > > This was fixed before, maybe it got broken again. Since this looks
> > > Windows specific, I'm guessing it has something to do with the size of
> > > long being 32 bits.
> > >
> > > The previous problem was integer overflow when multiplying the
> > > dimensions together to get the array size, when repeated divisions of
> > > the maximum size should have been used instead.
> > >
> > > Chuck
> >
> > Could be related to this change:
> > <https://github.com/numpy/numpy/commit/fcc6cc73ddcb1fc85446ba9256ac24ecdda6c6d8#L1L1121>
>
> My, that does look suspicious ;) Could you revert that loop and test it out?
> There was also a function for doing that check, I don't recall which, and it
> should probably be checked to make sure it remains as was.
>
> Chuck
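To make the suspected failure mode concrete before getting to the patch, here is a
minimal sketch (plain C, not the NumPy source; the 32-bit size type is an assumption
about the 32-bit Windows build, and the variable names are only illustrative) of how
the product of the reported dimensions wraps around and slips past a check that
multiplies before comparing:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const int32_t largest = INT32_MAX;     /* stand-in for the 32-bit size limit */
    const int64_t dim = 262144;            /* 2**18, the dimension from the report */

    /* The true number of elements, computed in 64 bits. */
    int64_t true_size = dim * dim;         /* 2**36 = 68719476736 */

    /* What a 32-bit "size *= dim" ends up holding after wrapping
     * (reduced modulo 2**32, which is 0 here). */
    int32_t wrapped = (int32_t)(uint32_t)true_size;

    printf("true size    : %lld\n", (long long)true_size);
    printf("wrapped size : %d\n", wrapped);
    printf("wrapped > largest? %s\n",
           wrapped > largest ? "yes" : "no -- the size check passes and zeros() succeeds");
    return 0;
}

With the wrapped size passing the check, the allocation asks for far less memory than
the shape implies, which would explain the empty-looking array and the crash on slicing
that Bruce saw.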
Reverting the change worked. A patch is attached.
>>> import numpy as np
>>> np.__version__
'1.6.0b3'
>>> x=np.zeros((262144, 262144))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: array is too big.

Christoph
diff --git a/numpy/core/src/multiarray/ctors.c b/numpy/core/src/multiarray/ctors.c
index 261e675..a026ecb 100644
--- a/numpy/core/src/multiarray/ctors.c
+++ b/numpy/core/src/multiarray/ctors.c
@@ -978,14 +978,14 @@ PyArray_NewFromDescr(PyTypeObject *subtype, PyArray_Descr *descr, int nd,
             return NULL;
         }
 
-        size *= dim;
-
-        if (size > largest) {
+        if (dim > largest) {
             PyErr_SetString(PyExc_ValueError,
                             "array is too big.");
             Py_DECREF(descr);
             return NULL;
         }
+        size *= dim;
+        largest /= dim;
     }
 
     self = (PyArrayObject *) subtype->tp_alloc(subtype, 0);
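For anyone who wants to poke at the reverted logic outside of NumPy, here is a small
standalone sketch (not part of the patch; the element budget and the zero-dimension
handling are my own assumptions, and dims_fit is only a hypothetical helper) of the
same check-then-multiply loop, which rejects the failing shape before any
multiplication can overflow:

#include <stdio.h>
#include <stdint.h>

/*
 * Returns 1 if an array with the given dimensions fits within 'largest'
 * elements, 0 otherwise.  Mirrors the patched loop: compare the dimension
 * against the remaining headroom first, then multiply and shrink the
 * headroom, so 'size' itself can never overflow.
 */
static int dims_fit(const int32_t *dims, int nd, int32_t largest)
{
    int32_t size = 1;
    int i;

    for (i = 0; i < nd; i++) {
        int32_t dim = dims[i];

        if (dim == 0) {
            continue;              /* a zero-sized array always fits */
        }
        if (dim > largest) {
            return 0;              /* "array is too big." */
        }
        size *= dim;               /* safe: dim <= remaining headroom */
        largest /= dim;
    }
    return 1;
}

int main(void)
{
    /* Assume float64 elements on a 32-bit build: the element budget is
     * roughly INT32_MAX / 8 (this mirrors, but is not, NumPy's own limit). */
    const int32_t largest = INT32_MAX / 8;

    int32_t huge[2]  = { 262144, 262144 };  /* slipped through the 1.6.0b check */
    int32_t big[2]   = { 26214, 26214 };    /* rejected on 32 bits, as in the thread */
    int32_t small[2] = { 1000, 1000 };

    printf("262144 x 262144 fits: %s\n", dims_fit(huge, 2, largest) ? "yes" : "no");
    printf("26214 x 26214 fits:   %s\n", dims_fit(big, 2, largest) ? "yes" : "no");
    printf("1000 x 1000 fits:     %s\n", dims_fit(small, 2, largest) ? "yes" : "no");
    return 0;
}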
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion