I think you'll find the algorithm below to be a lot faster, especially if the
arrays are big. Checking each array index against the list of included or
excluded elements is much slower than simply creating a secondary array and
looking up whether the elements are included or not.
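A sketch of the lookup-array idea described above (the names `data` and `excluded` are illustrative, not from the original post): build a boolean mask once, then index with it, instead of testing every index against the exclusion list.

```python
import numpy as np

# Illustrative inputs; any 1-D array plus a list of excluded indices works.
data = np.arange(10)
excluded = np.array([2, 5, 7])

# Secondary lookup array: one O(n) mask build instead of an
# O(n*m) membership test per index.
mask = np.ones(len(data), dtype=bool)
mask[excluded] = False

result = data[mask]   # elements whose indices were not excluded
```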
The difference between IDL and numpy is that IDL uses single precision
floats by default while numpy uses doubles. If you try it with
doubles in IDL, you will see that it also returns false.
As David Cournapeau said, you should not expect different floating
point arithmetic operations (here, single versus double precision) to
produce identical results.
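A minimal demonstration of the precision point above: 0.1 is not exactly representable in binary, and rounding it to a 24-bit (float32) versus a 53-bit (float64) significand gives two different values, so comparing them returns false.

```python
import numpy as np

a = np.float32(0.1)   # IDL-style single precision
b = np.float64(0.1)   # numpy's default double precision

# The float32 value, widened to double for the comparison, carries
# its own rounding error and does not equal the float64 value.
print(a == b)         # False
print(float(a))       # shows the float32 rounding of 0.1
```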
You're not supposed to write to the locals() dictionary. Sometimes
it works, but sometimes it doesn't. From the Python library docs:
locals()
Update and return a dictionary representing the current local symbol
table.
Note: The contents of this dictionary should not be modified; changes may
not affect the values of local and free variables used by the interpreter.
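A quick CPython demonstration of that warning: inside a function, locals() returns a snapshot dictionary, and assigning through it silently fails to change the real local variable.

```python
# Writing to locals() inside a function does not stick.
def f():
    x = 1
    locals()['x'] = 2   # modifies only a snapshot, not x itself
    return x

print(f())   # 1, not 2
```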
Here's a technique that works:
Python 2.4.2 (#5, Nov 21 2005, 23:08:11)
[GCC 4.0.0 20041026 (Apple Computer, Inc. build 4061)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> a = np.array([0, 4, 0, 11])
>>> b = np.array([-1, 11, 4, 15])
>>> rangelen = b - a + 1
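The session is cut off here, so the rest of the technique is an assumption; one plausible continuation is to expand every range a[i]..b[i] into a single flat array without a Python loop, using repeat and cumsum over the range lengths:

```python
import numpy as np

a = np.array([0, 4, 0, 11])
b = np.array([-1, 11, 4, 15])
rangelen = np.maximum(b - a + 1, 0)   # b < a means an empty range

# Keep only the non-empty ranges.
valid = rangelen > 0
starts, lens = a[valid], rangelen[valid]

# For each output position, subtract the offset at which its range
# begins, then add that range's start value.
offsets = np.concatenate(([0], np.cumsum(lens)[:-1]))
idx = np.arange(lens.sum())
result = idx - np.repeat(offsets, lens) + np.repeat(starts, lens)
# result is 4..11, 0..4, 11..15 concatenated
```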
On Dec 14, 2006, at 2:56 AM, Cameron Walsh wrote:
At some point I might try and test
different cache sizes for different data-set sizes and see what the
effect is. For now, 65536 seems a good number and I would be happy to
see this replace the current numpy.histogram.
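A sketch of the block-wise approach under discussion (the block size 65536 comes from the thread; the data and bins here are made up): accumulate np.histogram counts chunk by chunk, which yields exactly the same totals as one pass over the whole array while keeping each chunk cache-sized.

```python
import numpy as np

BLOCK = 65536   # cache-friendly chunk size suggested in the thread
data = np.random.default_rng(0).normal(size=1_000_000)
bins = np.linspace(-5.0, 5.0, 101)

# Accumulate integer counts over fixed bin edges, one chunk at a time.
counts = np.zeros(len(bins) - 1, dtype=np.int64)
for start in range(0, len(data), BLOCK):
    c, _ = np.histogram(data[start:start + BLOCK], bins=bins)
    counts += c
# counts matches np.histogram(data, bins=bins)[0] exactly
```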
I experimented a
On Dec 12, 2006, at 10:27 PM, Cameron Walsh wrote:
I'm trying to generate histograms of extremely large datasets. I've
tried a few methods, listed below, all with their own shortcomings.
Mailing-list archive and google searches have not revealed any
solutions.
The numpy.histogram function