On May 31, 2011, at 8:08 PM, Charles R Harris wrote:
> Hi All,
>
> I've been contemplating new functions that could be added to numpy and
> thought I'd run them by folks to see if there is any interest.
>
> 1) Modified sort/argsort functions that return the maximum k values.
> This is easy
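[Editor's note, not from the thread: a later NumPy release (1.8) added np.partition/np.argpartition, which implement exactly this "maximum k values" idea. A hedged sketch:]

```python
import numpy as np

a = np.array([9, 1, 7, 3, 8, 2])
k = 3

# np.argpartition places the indices of the k largest values in the last
# k slots, in no particular order; sorting just those k is then cheap.
idx = np.argpartition(a, -k)[-k:]
topk = np.sort(a[idx])
print(topk)  # the 3 largest values, ascending: [7 8 9]
```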
On Thu, Jun 2, 2011 at 1:49 AM, Mark Miller wrote:
> Not quite. Bincount is fine if you have a set of approximately
> sequential numbers. But if you don't
Even worse, it fails miserably if you have sequential numbers but with a high shift:
np.bincount([10001, 10002])  # will take a lot of memory
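[Editor's note: the problem is that bincount allocates one counter per integer up to the maximum value, however few distinct values occur; the np.unique workaround below is my own suggestion, not from the thread.]

```python
import numpy as np

a = np.array([10001, 10002])

# bincount allocates max(a)+1 counters even though only two values occur
counts = np.bincount(a)
print(counts.size)  # 10003

# one workaround: compress the values to a dense 0..k-1 range first
uniq, inv = np.unique(a, return_inverse=True)
dense_counts = np.bincount(inv)
print(uniq, dense_counts)  # [10001 10002] [1 1]
```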
On Wed, Jun 1, 2011 at 3:52 PM, Charles R Harris wrote:
> Just a quick comment, as this really needs more thought, but time is a bag
> of worms.
Certainly a bag of worms, I agree.
> Trying to represent some standard -- say seconds at the solar system
> barycenter to account for general r
Hey all,
So I'm doing a summer internship at Enthought, and the first thing they
asked me to look into is finishing the datetime type in numpy. It turns out
that the estimates of how complete the type was weren't accurate, and to
support what the NEP describes required generalizing the ufunc type
My favorite missing extension to numpy functions
np.bincount with 2 (or more) dimensional weights for fast calculation
of group statistics.
Josef
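[Editor's note: Josef's missing feature can be emulated today with one bincount call per column; the helper name group_mean is mine, and the per-column loop is exactly what a 2-D-weights bincount would eliminate.]

```python
import numpy as np

def group_mean(labels, x):
    """Mean of each column of x within each group label (hypothetical helper)."""
    labels = np.asarray(labels)
    x = np.asarray(x, dtype=np.float64)
    counts = np.bincount(labels)
    # bincount only accepts 1-D weights, hence the per-column loop
    sums = np.column_stack([np.bincount(labels, weights=x[:, j])
                            for j in range(x.shape[1])])
    return sums / counts[:, None]

labels = np.array([0, 0, 1, 1])
x = np.array([[1.0, 10.0],
              [3.0, 30.0],
              [5.0, 50.0],
              [7.0, 70.0]])
print(group_mean(labels, x))  # group 0 -> [2., 20.], group 1 -> [6., 60.]
```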
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-d
On 5/31/11 6:08 PM, Charles R Harris wrote:
> 2) Ufunc fadd (nanadd?) Treats nan as zero in addition.
so:
In [53]: a
Out[53]: array([ 1., 2., nan])
In [54]: b
Out[54]: array([0, 1, 2])
In [55]: a + b
Out[55]: array([ 1., 3., nan])
and nanadd(a,b) would yield:
array([ 1., 3., 2.])
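[Editor's note: no such ufunc was ever added as far as I know; the proposed semantics can be emulated with np.where. A sketch, where nanadd is the hypothetical name from the thread:]

```python
import numpy as np

def nanadd(a, b):
    # treat NaN as zero on either side; two NaNs would sum to 0
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return np.where(np.isnan(a), 0.0, a) + np.where(np.isnan(b), 0.0, b)

a = np.array([1.0, 2.0, np.nan])
b = np.array([0, 1, 2])
print(nanadd(a, b))  # [1. 3. 2.]
```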
Not quite. Bincount is fine if you have a set of approximately
sequential numbers. But if you don't
>>> a = numpy.array((1,500,1000))
>>> a
array([ 1, 500, 1000])
>>> b = numpy.bincount(a)
>>> b
array([0, 1, 0, ..., 0, 0, 1])
>>> len(b)
1001
-Mark
On Wed, Jun 1, 2011 at 11:31 AM, Mark Miller wrote:
> I'd love to see something like a "count_unique" function included. The
> numpy.unique function is handy, but it can be a little awkward to
> efficiently go back and get counts of each unique value after the
> fact.
>
Does bincount do what you're looking for?
yes, and it's probably slower to boot. A quick benchmark on my computer shows
that:
a = np.zeros([4000,4000],'f4')+500
np.mean(a)
takes 0.02 secs
np.mean(a,dtype=np.float64)
takes 0.1 secs
np.mean(a.astype(np.float64))
takes 0.06 secs
so casting the whole array is almost 40% faster than setting the dtype
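[Editor's note: these timings are machine- and NumPy-version-dependent; a sketch of how to reproduce the comparison with timeit:]

```python
import timeit

setup = "import numpy as np; a = np.zeros([4000, 4000], 'f4') + 500"
for stmt in ["np.mean(a)",
             "np.mean(a, dtype=np.float64)",
             "np.mean(a.astype(np.float64))"]:
    # best of 3 repeats, 3 calls each, to damp scheduling noise
    t = min(timeit.repeat(stmt, setup=setup, number=3, repeat=3))
    print(f"{stmt:35s} {t:.4f} s")
```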
Short-circuiting find would be nice. Right now, to 'find' something you first
make a bool array, then iterate over it. If all you want is the first index
where x[i] == e, that's not very efficient.
What I just described is a find with a '==' predicate. Not sure if it's
worthwhile to consider other predicates.
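[Editor's note: NumPy still has no short-circuiting find; a common emulation is to scan in chunks so the boolean temporary stays small and the scan can stop early. The helper name and chunk size below are my own choices.]

```python
import numpy as np

def find_first(x, e, chunk=4096):
    """Return the first index i with x[i] == e, or -1 if absent."""
    for start in range(0, len(x), chunk):
        block = x[start:start + chunk]
        hits = np.flatnonzero(block == e)
        if hits.size:
            return start + hits[0]  # stop as soon as a chunk has a match
    return -1

x = np.arange(100_000)
print(find_first(x, 123))  # 123
print(find_first(x, -1))   # -1
```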
I have a bit of code that performs multi-taper power spectra using numpy
and a C extension module. The C portion consists of an interface file
and a python-unaware computational file. The latter invokes fftpack.
The straightforward setup.py appended below works fine on Linux. On
Windows using M
would anyone object to fixing the numpy mean and stdv functions, so that they
always used a 64-bit value to track sums, or so that they used a running
calculation. That way
np.mean(np.zeros([4000,4000],'f4')+500)
would not equal 511.493408?
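[Editor's note: the accuracy Craig asks for is available today by passing an accumulator dtype explicitly; note also that later NumPy versions switched to pairwise summation, so a modern install may not reproduce the 511.49 figure at all.]

```python
import numpy as np

a = np.zeros([4000, 4000], 'f4') + 500

# forcing a float64 accumulator gives the exact answer, independent of
# how a naive float32 running sum would round
m = np.mean(a, dtype=np.float64)
print(m)  # 500.0
```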
I'd love to see something like a "count_unique" function included. The
numpy.unique function is handy, but it can be a little awkward to
efficiently go back and get counts of each unique value after the
fact.
-Mark
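[Editor's note: with the NumPy of that era, the counts can be recovered by combining np.unique's return_inverse with bincount (newer versions offer np.unique(..., return_counts=True) directly); a sketch of the count_unique idea:]

```python
import numpy as np

def count_unique(a):
    # unique values plus, via the inverse mapping, how often each occurs
    vals, inv = np.unique(a, return_inverse=True)
    return vals, np.bincount(inv)

a = np.array([3, 1, 3, 2, 1, 3])
vals, counts = count_unique(a)
print(vals)    # [1 2 3]
print(counts)  # [2 1 3]
```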
On Wed, Jun 1, 2011 at 8:17 AM, Keith Goodman wrote:
> On Tue, May 31, 2011 at
On Tue, May 31, 2011 at 8:41 PM, Charles R Harris
wrote:
> On Tue, May 31, 2011 at 8:50 PM, Bruce Southey wrote:
>> How about including all or some of Keith's Bottleneck package?
>> He has tried to include some of the discussed functions and tried to
>> make them very fast.
>
> I don't think the
When the dimensionality gets high, grid methods like you're describing
start to be a problem ("the curse of dimensionality"). The standard
approaches are simple Monte Carlo integration or its refinements
(Metropolis-Hastings, for example). These converge somewhat slowly, but
are not much affected by dimensionality.
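[Editor's note: a minimal illustration of plain Monte Carlo integration in d dimensions; the toy integrand is my own. The error shrinks like 1/sqrt(N) regardless of d, which is why it escapes the curse of dimensionality.]

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_integrate(f, d, n=200_000):
    """Estimate the integral of f over the unit hypercube [0, 1]^d."""
    pts = rng.random((n, d))
    return f(pts).mean()  # the cube has volume 1, so the mean is the integral

# toy integrand: sum of coordinates; the exact integral over [0,1]^d is d/2
est = mc_integrate(lambda p: p.sum(axis=1), d=6)
print(est)  # close to 3.0
```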
Hello everyone,
I am currently tackling the problem of numerically solving an integral in
higher dimensions. I am comparing models
whose dimensionality increases on the order of 2^n.
Taking a closer look at its projections along the axes, down to a
two-dimensional picture, the projections are
Hi,
This is a response to
http://mail.scipy.org/pipermail/numpy-discussion/2011-April/055908.html
A cleaner workaround that doesn't mess with your system Python (see
https://github.com/pypa/virtualenv/issues/118)
Activate the virtualenv
mkdir $VIRTUAL_ENV/local
ln -s $VIRTUAL_ENV/lib $VIRTUAL_EN