On Thu, 2010-10-07 at 12:08 -0400, Andrew P. Mullhaupt wrote:
[clip]
> No. You can define the arrays as backed by mapped files with the real
> and imaginary parts separated. Then the imaginary part, being
> initially zero, is a sparse part of the file: it takes only a
> fraction of the space (and, on a decent machine, doesn't incur memory
> bandwidth costs either). You can then slipstream the cost of testing
> whether the imaginary part has subsequently been assigned zero (so
> you can re-sparsify the representation of a page) with any operation
> that examines all the values on that page. Consistency would be
> provided by the OS, so there wouldn't really be much NumPy-specific
> code involved.
>
> So there is at least one efficient way to implement my suggestion.
Interesting idea. Most OSes also offer page-allocated memory not backed
by files; in fact, glibc's malloc works just like this on Linux for
large memory blocks. Complex arrays would get this behavior
automatically if the imaginary part were stored after the real part,
and additional branches were added to avoid writing zeros to memory.

But to implement this, you'd have to rewrite large parts of NumPy,
since the separated storage of the real and imaginary parts conflicts
with its memory model. I believe this will simply not be done, since
there seems to be little need for such a feature.

-- 
Pauli Virtanen

_______________________________________________
NumPy-Discussion mailing list
[email protected]
http://mail.scipy.org/mailman/listinfo/numpy-discussion
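[Editorial note: the mapped-file scheme discussed in this thread can be sketched in a few lines of NumPy using `np.memmap`. This is an illustration only, not code from the thread; the file names and array size are made up, and whether the untouched imaginary-part file actually stays sparse on disk depends on the filesystem supporting holes.]

```python
import os
import tempfile
import numpy as np

# Store the real and imaginary parts of a logically complex array in two
# separate memory-mapped files. np.memmap creates each file at its full
# size without writing data, so on filesystems that support holes the
# untouched (all-zero) imaginary file occupies almost no disk blocks
# until its pages are actually written.
n = 1_000_000
tmpdir = tempfile.mkdtemp()
re_part = np.memmap(os.path.join(tmpdir, "real.dat"),
                    dtype=np.float64, mode="w+", shape=(n,))
im_part = np.memmap(os.path.join(tmpdir, "imag.dat"),
                    dtype=np.float64, mode="w+", shape=(n,))

# Writing the real part dirties its pages; the imaginary part is left
# untouched, so it is logically zero but (mostly) physically a hole.
re_part[:] = np.arange(n, dtype=np.float64)

# Materialize a complex view on demand by combining the two halves.
z = re_part + 1j * im_part
print(z[3])  # (3+0j)
```

Note that `z` here is an ordinary in-memory complex array built by copying; making NumPy operate directly on the split representation is exactly the part that would require rewriting its memory model, as discussed above.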
