as soon as you do it, I'd like to compare them with the benchmarks I posted
here a few days ago (compiled with gcc):
http://lbolla.wordpress.com/2007/04/11/numerical-computing-matlab-vs-pythonnumpyweave/
lorenzo.
On 4/17/07, rex [EMAIL PROTECTED] wrote:
I'm about to build numpy using Intel's
rex [EMAIL PROTECTED] [2007-04-16 15:53]:
I'm about to build numpy using Intel's MKL 9.1 beta and want to compare
it with the version I built using MKL 8.1. Is the LINPACK
benchmark the most appropriate?
I'm buried in responses. Not.
A well-known benchmark (Scimark?) coded using NumPy/SciPy
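A minimal LINPACK-style timing sketch, assuming only NumPy is installed (the function name and matrix sizes here are illustrative, not from the thread):

```python
import numpy as np
from time import perf_counter

def linpack_like(n=500, repeats=3):
    """Time a dense solve A x = b, the kernel the LINPACK benchmark measures."""
    rng = np.random.RandomState(0)
    A = rng.rand(n, n) + n * np.eye(n)   # diagonally dominant: well conditioned
    b = rng.rand(n)
    best = float("inf")
    for _ in range(repeats):
        t0 = perf_counter()
        x = np.linalg.solve(A, b)
        best = min(best, perf_counter() - t0)
    flops = 2.0 / 3.0 * n**3 + 2.0 * n**2   # LINPACK's nominal flop count
    return x, flops / best / 1e9            # solution and GFLOP/s

x, gflops = linpack_like(300)
```

Running this against builds linked to MKL 8.1, MKL 9.1, or ATLAS gives a rough apples-to-apples number for the comparison being asked about.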
Christian K wrote:
David Cournapeau wrote:
On Ubuntu and debian, you do NOT need any site.cfg to compile numpy with
atlas support. Just install the package atlas3-base-dev, and you are
done. The reason is that when *compiling* a software which needs atlas,
the linker will try to find
Matthieu Brucher wrote:
you can probably use numpy.hypot(v-y) to speed this up more...
Tried it today, hypot takes two arguments :(
Is there a function that does the square root of the sum of squares ?
then maybe you want:
numpy.hypot(v-y,v-y), though you should probably make a
Hello
- Original Message -
From: Ray Schumacher [EMAIL PROTECTED]
To: numpy-discussion@scipy.org
Sent: Tuesday, April 17, 2007 4:56 PM
Subject: Re: [Numpy-discussion] NumPy benchmark
I'm still curious about the licensing aspects of using Intel's
compiler and libs. Is the compiled
I tried to use the expression as you said, but I'm not getting the desired
result.
My text file look like this:
# num rows=115 num columns=2634
AbassiM.txt 0.033023 0.033023 0.033023 0.165115 0.462321 0.00
AgricoleW.txt 0.038691 0.038691 0.038691 0.232147 0.541676 0.215300
AliR.txt
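A hedged sketch of reading a file in that format, assuming whitespace-separated columns with a filename label first and floats after it (the sample data is taken from the lines quoted above; `data.txt` is a placeholder name):

```python
import numpy as np

# Recreate a small sample of the posted file so the sketch is self-contained.
sample = """\
# num rows=2 num columns=6
AbassiM.txt 0.033023 0.033023 0.033023 0.165115 0.462321 0.00
AgricoleW.txt 0.038691 0.038691 0.038691 0.232147 0.541676 0.215300
"""
with open("data.txt", "w") as fh:
    fh.write(sample)

labels = []
rows = []
with open("data.txt") as fh:
    for line in fh:
        if line.startswith("#") or not line.strip():
            continue                          # skip the header comment
        parts = line.split()
        labels.append(parts[0])               # filename label, e.g. 'AbassiM.txt'
        rows.append([float(v) for v in parts[1:]])

data = np.array(rows)                         # numeric part as a 2-D array
```

The string column is kept in a plain list because a NumPy array wants a homogeneous dtype.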
I recently made the switch from Matlab to Python and am very
interested in optimizing certain routines that I find too slow in
python/numpy (long loops).
I have looked into and learned about the different methods used for such
problems, such as blitz, weave and pyrex, but had a question for more
I'm still curious about the licensing aspects of using Intel's
compiler and libs. Is the compiled Python/numpy result distributable,
like any other compiled program?
Ray
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
Or
f = sqrt(dot(x,x))
On 17.04.2007, at 16:12, Sturla Molden wrote:
f = lambda x : sqrt(sum(x**2))
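Both suggestions compute the Euclidean norm and agree with `numpy.linalg.norm`; a quick check (the vector here is illustrative):

```python
import numpy as np

x = np.array([3.0, 4.0, 12.0])

a = np.sqrt(np.dot(x, x))    # sqrt of the dot product with itself
b = np.sqrt(np.sum(x**2))    # sqrt of the sum of squares
c = np.linalg.norm(x)        # library routine for the same quantity
```

For large vectors the `dot` form avoids the temporary array that `x**2` allocates.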
Hi,
You can find various suggestions to improve performance like Tim
Hochberg's list:
0. Think about your algorithm.
1. Vectorize your inner loop.
2. Eliminate temporaries.
3. Ask for help
4. Recode in C.
5. Accept that your code will never be fast.
Step zero should probably be repeated after
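Steps 1 and 2 of the list above can be sketched as follows (a toy example with made-up arrays):

```python
import numpy as np

a = np.arange(100000.0)
b = np.arange(100000.0)

# Step 1: vectorize the inner loop -- replace a Python-level loop like
#   out = [a[i] * b[i] + 1.0 for i in range(len(a))]
# with a single array expression:
out = a * b + 1.0

# Step 2: eliminate temporaries -- the expression above allocates an
# intermediate array for a * b; an in-place operation reuses it instead:
out2 = a * b       # one temporary...
out2 += 1.0        # ...updated in place rather than allocating another
```

The in-place form matters most when the arrays are large enough that the extra allocation and memory traffic dominate.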
Hi,
I've found the following expression written in Matlab:
Rtx = sqrt(Rt);
Rtx is a matrix, and that's why I need sqrt() to operate elementwise. I've
read the NumPy tutorial, and I know it's possible,
A set of these functions has been provided which optimizes certain kinds of
calculations on arrays.
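In NumPy, `sqrt` is a ufunc and already operates elementwise on an array, so the Matlab line carries over directly (the array contents here are illustrative):

```python
import numpy as np

Rt = np.array([[4.0, 9.0],
               [16.0, 25.0]])

Rtx = np.sqrt(Rt)   # elementwise, like Matlab's sqrt on a matrix
```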
Miquel Poch wrote:
Hi,
I've found the following expression written in Matlab:
Rtx = sqrt(Rt);
Rtx is a matrix, and that's why I need sqrt() to operate elementwise.
I've read the NumPy tutorial, and I know it's possible,
A set of these functions has been provided which optimizes certain kinds
lorenzo bolla [EMAIL PROTECTED] [2007-04-17 00:37]:
as soon as you do it, I'd like to compare them with the benchmarks I posted
here a few days ago (compiled with gcc):
http://lbolla.wordpress.com/2007/04/11/numerical-computing-matlab-vs-pythonnumpyweave/
Thanks for the link.
I haven't built
On Tue, 17 Apr 2007 at 20:58 +0200, Miquel Poch
wrote:
Hi,
I've found the following expression written in Matlab:
Rtx = sqrt(Rt);
Rtx is a matrix, and that's why I need sqrt() to operate elementwise.
I've read the NumPy tutorial, and I know it's possible,
A set of these
Now, I didn't know that. That's cool because I have a
new dual core Intel Mac Pro. I see I have some
learning to do with multithreading. Thanks.
--- Anne Archibald [EMAIL PROTECTED] wrote:
On 17/04/07, Lou Pecora [EMAIL PROTECTED]
wrote:
You should probably look over your code and see if
On 17/04/07, Francesc Altet [EMAIL PROTECTED] wrote:
Finally, don't let benchmarks fool you. If you can, it is always better
to run your own benchmarks made of your own problems. A tool that can be
killer for one application can be just mediocre for another (that's
somewhat extreme, but I
On 17/04/07, Lou Pecora [EMAIL PROTECTED] wrote:
Now, I didn't know that. That's cool because I have a
new dual core Intel Mac Pro. I see I have some
learning to do with multithreading. Thanks.
No problem. I had completely forgotten about the global interpreter
lock, wrote a little
I get what you are saying, but I'm not even at the
Stupidly Easy Parallel level, yet. Eventually.
Thanks.
--- Anne Archibald [EMAIL PROTECTED] wrote:
On 17/04/07, Lou Pecora [EMAIL PROTECTED]
wrote:
Now, I didn't know that. That's cool because I have a
new dual core Intel Mac Pro. I
Hi Anne,
Your reply to Lou raises a naive follow-up question of my own...
Normally, python's multithreading is effectively cooperative, because
the interpreter's data structures are all stored under the same lock,
so only one thread can be executing python bytecode at a time.
However, many
On 17/04/07, Lou Pecora [EMAIL PROTECTED] wrote:
I get what you are saying, but I'm not even at the
Stupidly Easy Parallel level, yet. Eventually.
Well, it's hardly wonderful, but I wrote a little package to make idioms like:
d = {}
def work(f):
    d[f] = sum(exp(2.j*pi*f*times))
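Anne's package itself isn't shown in the thread, but the idiom can be sketched with the standard library alone (`times` and the frequency list are made up here). Because NumPy releases the GIL inside its C loops, plain threads can genuinely overlap this kind of work:

```python
from math import pi
from threading import Thread
import numpy as np

times = np.linspace(0.0, 1.0, 10000)
freqs = [1.0, 2.0, 3.0, 4.0]

d = {}
def work(f):
    # The heavy lifting happens in NumPy C code, which drops the GIL.
    d[f] = np.sum(np.exp(2.0j * pi * f * times))

threads = [Thread(target=work, args=(f,)) for f in freqs]
for t in threads:
    t.start()
for t in threads:
    t.join()
# d now maps each frequency to its summed complex exponential
```

Writing to distinct dict keys from each thread is safe here because each `work(f)` touches only its own key.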
On 17/04/07, James Turner [EMAIL PROTECTED] wrote:
Hi Anne,
Your reply to Lou raises a naive follow-up question of my own...
Normally, python's multithreading is effectively cooperative, because
the interpreter's data structures are all stored under the same lock,
so only one thread can
I would say that if the underlying atlas library is multithreaded, numpy
operations will be as well. Then, at the Python level, even if the
operations take a lot of time, the interpreter will still be able to run
other threads, as the lock is released during the numpy operations - as I understood
for the
Oops. Looks like I forgot to attach the test program that generated
that output so you can tell what dist2g actually does.
Funny thing is -- despite being written in C, hypot doesn't actually
win any of the test cases for which it's applicable.
--bb
On 4/17/07, Bill Baxter [EMAIL PROTECTED]
Be sure to check out the numpy examples page too.
http://www.scipy.org/Numpy_Example_List
Always a good resource if you're not sure how to call a particular command.
--bb
On 4/18/07, Miquel Poch [EMAIL PROTECTED] wrote:
Hi,
I've found the following expression written in Matlab,
Rtx =
Very nice. Thanks. Examples are welcome since they
are usually the best to get up to speed with
programming concepts.
--- Anne Archibald [EMAIL PROTECTED] wrote:
On 17/04/07, Lou Pecora [EMAIL PROTECTED]
wrote:
I get what you are saying, but I'm not even at the
Stupidly Easy Parallel
Using MKL 9.1_beta made no difference in the prior benchmark, but it
does improve speed in an earlier benchmark I posted.
From:
http://projects.scipy.org/pipermail/numpy-discussion/2007-January/025673.html
''' A
Andrew Straw wrote:
Christian K wrote:
David Cournapeau wrote:
On Ubuntu and debian, you do NOT need any site.cfg to compile numpy with
atlas support. Just install the package atlas3-base-dev, and you are
done. The reason is that when *compiling* a software which needs atlas,
Hi Anne,
I'm just starting to look into your code (sounds very interesting -
should probably be put onto the wiki)
-- quick note:
you are mixing tabs and spaces :-(
what editor are you using !?
-Sebastian
On 4/17/07, Anne Archibald [EMAIL PROTECTED] wrote:
On 17/04/07, Lou Pecora [EMAIL
On 4/17/07, Robert Kern [EMAIL PROTECTED] wrote:
Matthieu Brucher wrote:
I would say that if the underlying atlas library is multithreaded, numpy
operations will be as well. Then, at the Python level, even if the
operations take a lot of time, the interpreter will be able to process
Sebastian Haase wrote:
Hi,
I don't know much about ATLAS -- would there be other numpy functions
that *could* or *should* be implemented using ATLAS !?
Any ?
Not really, no.
--
Robert Kern
I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible
On 18/04/07, Robert Kern [EMAIL PROTECTED] wrote:
Sebastian Haase wrote:
Hi,
I don't know much about ATLAS -- would there be other numpy functions
that *could* or *should* be implemented using ATLAS !?
Any ?
Not really, no.
ATLAS is a library designed to implement linear algebra
On 18/04/07, Sebastian Haase [EMAIL PROTECTED] wrote:
Hi Anne,
I'm just starting to look into your code (sounds very interesting -
should probably be put onto the wiki)
-- quick note:
you are mixing tabs and spaces :-(
what editor are you using !?
Agh. vim is misbehaving. Sorry about that.
Anne Archibald wrote:
And the scope of improvement would be very limited; an
expression like A*B+C*D would be much more efficient, probably, if the
whole expression were evaluated at once for each element (due to
memory locality and temporary allocation). But it is impossible for
numpy,
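The temporary-allocation cost described above can be partly avoided in plain NumPy with the `out=` arguments of ufuncs (a sketch with illustrative arrays; note it is still multiple passes over memory rather than the single fused pass the message describes as impossible for numpy):

```python
import numpy as np

A, B, C, D = (np.random.rand(500, 500) for _ in range(4))

# Naive: A*B + C*D allocates three temporary arrays before the result.
naive = A * B + C * D

# Reusing buffers cuts the allocations, at the cost of clobbering inputs.
tmp = np.multiply(A, B)       # first product, kept as the accumulator
np.multiply(C, D, out=C)      # second product written over C (destroys C!)
np.add(tmp, C, out=tmp)       # accumulate into tmp; tmp now holds A*B + C*D
```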