Hi David
On Fri, Mar 14, 2008 at 9:19 AM, David Huard [EMAIL PROTECTED] wrote:
I added a test for ticket 691. Problem is, there seems to be a new bug. I
don't know if it's related to the change or if it was there before. Please
check this out.
Fantastic, thanks for jumping in and addressing
Hi,
Numpy is great: I can see several IDL/matlab projects switching to numpy :)
However, it would be so nice to be able to put some OpenMP into the
numpy code.
It would be nice to be able to use several CPUs while keeping the
numpy syntax, i.e. A = sqrt(B).
Ok, we can use some inline C/C++
On Sat, Mar 15, 2008 at 2:48 PM, Gnata Xavier [EMAIL PROTECTED] wrote:
> Hi,
> Numpy is great: I can see several IDL/matlab projects switching to numpy :)
> However, it would be so nice to be able to put some OpenMP into the
> numpy code.
> It would be nice to be able to use
On 15/03/2008, Damian Eads [EMAIL PROTECTED] wrote:
> Robert Kern wrote:
>> Eric Jones tried to use multithreading to split the computation of
>> ufuncs across CPUs. Ultimately, the overhead of locking and unlocking
>> made it prohibitive for medium-sized arrays and only somewhat
>> disappointing
On Sat, Mar 15, 2008 at 07:33:51PM -0400, Anne Archibald wrote:
> ...
> To answer the OP's question, there is a relatively small number of C
> inner loops that could be marked up with OpenMP #pragmas to cover most
> matrix operations. Matrix linear algebra is a separate question, since
> numpy/scipy
Scott Ransom wrote:
On Sat, Mar 15, 2008 at 07:33:51PM -0400, Anne Archibald wrote:
> ...
> To answer the OP's question, there is a relatively small number of C
> inner loops that could be marked up with OpenMP #pragmas to cover most
> matrix operations. Matrix linear algebra is a separate
Hi,
I want to fix up the average function. I note that the return dtype is not
specified, nor is the precision of the accumulator. Both of these can be
specified for the mean method and I wonder what should be the case for
average. Or should we just use double precision? That would seem
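For context on why the accumulator's precision matters here, a small illustrative comparison (the helper names are hypothetical, not numpy functions): averaging float32 data with a single-precision accumulator drifts as the running sum grows, while a double accumulator stays essentially exact.

```c
/* Illustrative only: the effect of accumulator precision on a mean of
 * float32 data. These helpers are hypothetical, not numpy code. */
#include <stddef.h>

/* Single-precision accumulator: precision is lost once the running sum
 * is large relative to each addend. */
double mean_float_acc(const float *x, ptrdiff_t n)
{
    float acc = 0.0f;
    for (ptrdiff_t i = 0; i < n; i++)
        acc += x[i];
    return (double)acc / (double)n;
}

/* Double-precision accumulator, the kind of thing mean()'s dtype
 * argument lets the caller request. */
double mean_double_acc(const float *x, ptrdiff_t n)
{
    double acc = 0.0;
    for (ptrdiff_t i = 0; i < n; i++)
        acc += x[i];
    return acc / (double)n;
}
```

On a million float32 values of 0.1, the single-precision accumulator drifts visibly from the true mean while the double accumulator does not, which is one argument for letting average accept a dtype the way mean does.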
Anne,
Sure. I've found multi-threaded scientific computation to give mixed
results. For some things, it results in very significant performance
gains, and for other things, it's not worth the trouble at all. It really
does depend on what you're doing. But I don't think it's fair to paint
On Sat, Mar 15, 2008 at 8:25 PM, Damian Eads [EMAIL PROTECTED] wrote:
Robert: what benchmarks were performed showing less than pleasing
performance gains?
The implementation is in the multicore branch. This particular file is
the main benchmark Eric was using.