Eli Bressert wrote:

Hi,
I'm using masked arrays to compute large-scale standard deviations,
multiplications, Gaussians, and weighted averages. At first I thought
using masked arrays would be a great way to sidestep looping
(which it is), but it's still slower than expected. Here's a snippet
of the code that I'm
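The snippet itself did not survive in the archive. As a rough, hypothetical stand-in for the kind of computation described (the array sizes, mask threshold, and variable names below are my own, not Eli's), a masked weighted average might look like:

```python
import numpy as np

# Hypothetical example: a weighted average over a masked 2-D array.
rng = np.random.default_rng(0)
data = rng.random((1000, 1000))
mask = data > 0.95                       # mask out ~5% of the values
marr = np.ma.masked_array(data, mask=mask)
weights = rng.random((1000, 1000))

# np.ma.average honors the mask: masked elements contribute nothing
# to either the numerator or the weight sum.
wavg = np.ma.average(marr, weights=weights)
print(float(wavg))
```

Each `np.ma` operation like this goes through the masked-array machinery rather than plain ndarray ufunc loops, which is where the unexpected slowness discussed below comes from.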
Short answer to the subject: Oh yes.
Basically, MaskedArray in its current implementation is more of a
convenience class than anything else. Most of the functions that
manipulate masked arrays create a lot of temporaries. When performance
is needed, I must advise you to work directly on the data
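That suggestion can be sketched as follows (my own illustration, not Pierre's code): do the arithmetic on the raw ndarray plus a boolean mask, and only use `np.ma` for the bookkeeping you actually need. The result matches the masked-array version while avoiding its temporaries.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.random(1_000_000)
mask = data > 0.95                  # True = invalid, same convention as np.ma

# Convenient route: masked-array std, which allocates temporaries internally.
marr = np.ma.masked_array(data, mask=mask)
ma_std = marr.std()

# "Work directly on the data" route: one fancy-indexing pass, then
# plain ndarray operations on the valid values only.
valid = data[~mask]
raw_std = valid.std()

print(np.allclose(ma_std, raw_std))  # → True: same answer, fewer temporaries
```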
On May 9, 2009, at 8:17 PM, Eric Firing wrote:
A part of the slowdown is what looks to me like unnecessary copying
in _MaskedBinaryOperation.__call__. It is using getdata, which
applies numpy.array to its input, forcing a copy. I think the copy
is actually
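The copy Eric describes comes from `numpy.array`'s default `copy=True` semantics: passing an existing ndarray through `numpy.array` duplicates its buffer, whereas `numpy.asarray` returns the input unchanged. A small illustration of that difference (not the `numpy.ma` source itself):

```python
import numpy as np

a = np.arange(5)

forced = np.array(a)    # copies by default, even for an existing ndarray
view = np.asarray(a)    # no copy: returns the very same object

print(forced is a, np.shares_memory(forced, a))  # False False
print(view is a)                                 # True
```

So a `getdata` built on `numpy.array` pays for a full buffer copy on every binary operation, which is exactly the unnecessary work being pointed out here.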