Anne Archibald wrote:
It would be perfectly possible, in principle, to implement an
ATLAS-like library that handled a variety (perhaps all) of numpy's
basic operations in platform-optimized fashion. But implementing ATLAS
is not a simple task.
--- Anne Archibald [EMAIL PROTECTED] wrote:
I just took another look at that code and added a parallel_map I
hadn't got around to writing before, too. I'd be happy to stick it
(and test file) on the wiki under some open license or other (do what
thou wilt shall be the whole of the law).
On 4/17/07, Anne Archibald [EMAIL PROTECTED] wrote:
On 18/04/07, Robert Kern [EMAIL PROTECTED] wrote:
Sebastian Haase wrote:
Hi,
I don't know much about ATLAS -- would there be other numpy functions
that *could* or *should* be implemented using ATLAS !?
Any ?
Not really, no.
Sebastian Haase wrote:
On 4/17/07, Anne Archibald [EMAIL PROTECTED] wrote:
On 18/04/07, Robert Kern [EMAIL PROTECTED] wrote:
Sebastian Haase wrote:
Hi,
I don't know much about ATLAS -- would there be other numpy functions
that *could* or *should* be implemented using ATLAS !?
Any ?
Not really, no.
I recently made the switch from Matlab to Python and am very
interested in optimizing certain routines that I find too slow in
python/numpy (long loops).
I have looked and learned about the different methods used for such
problems, such as blitz, weave and pyrex, but had a question for more
experienced users.
Hi,
You can find various suggestions to improve performance, like Tim
Hochberg's list:
0. Think about your algorithm.
1. Vectorize your inner loop.
2. Eliminate temporaries.
3. Ask for help.
4. Recode in C.
5. Accept that your code will never be fast.
Step zero should probably be repeated after each of the other steps.
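As a minimal sketch of steps 1 and 2 (the arrays here are invented for
illustration, not taken from anyone's actual code):

import numpy as np

x = np.linspace(0.0, 1.0, 1000000)

# Step 1: vectorize the inner loop. Instead of
#   total = 0.0
#   for v in x:
#       total += v * v
# let a single call into compiled code do the whole loop:
total = np.dot(x, x)

# Step 2: eliminate temporaries. "y = 2.0 * x + 1.0" allocates an
# intermediate array for 2.0 * x; in-place operations reuse one buffer:
y = 2.0 * x
y += 1.0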
Now, I didn't know that. That's cool because I have a
new dual core Intel Mac Pro. I see I have some
learning to do with multithreading. Thanks.
--- Anne Archibald [EMAIL PROTECTED] wrote:
On 17/04/07, Lou Pecora [EMAIL PROTECTED] wrote:
You should probably look over your code and see if
On 17/04/07, Francesc Altet [EMAIL PROTECTED] wrote:
Finally, don't let benchmarks fool you. If you can, it is always better
to run your own benchmarks made of your own problems. A tool that can be
killer for one application can be just mediocre for another (that's
somewhat extreme, but I hope you get the idea).
On 17/04/07, Lou Pecora [EMAIL PROTECTED] wrote:
Now, I didn't know that. That's cool because I have a
new dual core Intel Mac Pro. I see I have some
learning to do with multithreading. Thanks.
No problem. I had completely forgotten about the global interpreter
lock, wrote a little
I get what you are saying, but I'm not even at the
Stupidly Easy Parallel level, yet. Eventually.
Thanks.
--- Anne Archibald [EMAIL PROTECTED] wrote:
On 17/04/07, Lou Pecora [EMAIL PROTECTED] wrote:
Now, I didn't know that. That's cool because I have a
new dual core Intel Mac Pro. I
Hi Anne,
Your reply to Lou raises a naive follow-up question of my own...
Normally, python's multithreading is effectively cooperative, because
the interpreter's data structures are all stored under the same lock,
so only one thread can be executing python bytecode at a time.
However, many numpy operations run in compiled C code -- does the lock
still apply while they execute?
On 17/04/07, Lou Pecora [EMAIL PROTECTED] wrote:
I get what you are saying, but I'm not even at the
Stupidly Easy Parallel level, yet. Eventually.
Well, it's hardly wonderful, but I wrote a little package to make idioms like:
d = {}
def work(f):
    d[f] = sum(exp(2.j*pi*f*times))
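Anne's helper itself isn't shown in this snippet, so here is a minimal
sketch of the same idiom driven by the standard threading module (the
times and frequencies arrays are invented for illustration):

import threading
import numpy as np

times = np.linspace(0.0, 1.0, 10000)        # invented sample times
frequencies = np.linspace(1.0, 10.0, 8)     # invented frequencies

d = {}

def work(f):
    # numpy does the heavy lifting in compiled code, releasing the
    # interpreter lock, so several of these threads can make progress
    # at once
    d[f] = np.sum(np.exp(2.0j * np.pi * f * times))

threads = [threading.Thread(target=work, args=(f,)) for f in frequencies]
for t in threads:
    t.start()
for t in threads:
    t.join()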
On 17/04/07, James Turner [EMAIL PROTECTED] wrote:
Hi Anne,
Your reply to Lou raises a naive follow-up question of my own...
Normally, python's multithreading is effectively cooperative, because
the interpreter's data structures are all stored under the same lock,
so only one thread can
I would say that if the underlying atlas library is multithreaded, numpy
operations will be as well. Then, at the Python level, even if the
operations take a lot of time, the interpreter will be able to process
other threads, as the lock is freed during the numpy operations - as I
understood it.
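A rough way to observe this, assuming a numpy whose dot() goes through
BLAS and releases the interpreter lock while it runs (the sizes and
timings here are only illustrative):

import threading
import time
import numpy as np

a = np.random.rand(1000, 1000)

def matmul():
    np.dot(a, a)  # runs in BLAS; the interpreter lock is released meanwhile

start = time.time()
matmul()
matmul()
print("serial: %.3fs" % (time.time() - start))

start = time.time()
threads = [threading.Thread(target=matmul) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("threaded: %.3fs" % (time.time() - start))

On a dual-core machine the threaded version should take noticeably less
than the serial one if the lock really is released.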
Very nice. Thanks. Examples are welcome since they
are usually the best way to get up to speed with
programming concepts.
--- Anne Archibald [EMAIL PROTECTED] wrote:
On 17/04/07, Lou Pecora [EMAIL PROTECTED] wrote:
I get what you are saying, but I'm not even at the
Stupidly Easy Parallel
Hi Anne,
I'm just starting to look into your code (sounds very interesting -
it should probably be put onto the wiki)
-- quick note:
you are mixing tabs and spaces :-(
what editor are you using !?
-Sebastian
On 4/17/07, Anne Archibald [EMAIL PROTECTED] wrote:
On 17/04/07, Lou Pecora [EMAIL
On 4/17/07, Robert Kern [EMAIL PROTECTED] wrote:
Matthieu Brucher wrote:
I would say that if the underlying atlas library is multithreaded, numpy
operations will be as well. Then, at the Python level, even if the
operations take a lot of time, the interpreter will be able to process
Sebastian Haase wrote:
Hi,
I don't know much about ATLAS -- would there be other numpy functions
that *could* or *should* be implemented using ATLAS !?
Any ?
Not really, no.
--
Robert Kern
I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth.
On 18/04/07, Robert Kern [EMAIL PROTECTED] wrote:
Sebastian Haase wrote:
Hi,
I don't know much about ATLAS -- would there be other numpy functions
that *could* or *should* be implemented using ATLAS !?
Any ?
Not really, no.
ATLAS is a library designed to implement linear algebra
operations; most of numpy's basic functions are not linear algebra.
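A sketch of the distinction, assuming a numpy built against ATLAS (the
arrays are made up for illustration): the first two calls dispatch to
BLAS/LAPACK and so benefit from an optimized ATLAS, while the
element-wise operations are plain C loops inside numpy and never touch
it.

import numpy as np

a = np.random.rand(500, 500)
b = np.random.rand(500, 500)

# Dispatched to BLAS/LAPACK; an optimized ATLAS speeds these up:
c = np.dot(a, b)
x = np.linalg.solve(a, b[:, 0])

# Element-wise ufuncs; ATLAS is never involved:
s = a + b
e = np.exp(a)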
On 18/04/07, Sebastian Haase [EMAIL PROTECTED] wrote:
Hi Anne,
I'm just starting to look into your code (sounds very interesting -
it should probably be put onto the wiki)
-- quick note:
you are mixing tabs and spaces :-(
what editor are you using !?
Agh. vim is misbehaving. Sorry about that.
Anne Archibald wrote:
And the scope of improvement would be very limited; an
expression like A*B+C*D would be much more efficient, probably, if the
whole expression were evaluated at once for each element (due to
memory locality and temporary allocation). But it is impossible for
numpy, which sees only one operation at a time.
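For comparison, the separate numexpr package (not part of numpy) takes
roughly this approach: it compiles the whole expression and evaluates it
over the operands in cache-sized blocks, avoiding full-size temporaries.
A minimal sketch, with invented arrays:

import numpy as np
import numexpr as ne

a, b, c, d = (np.random.rand(1000000) for _ in range(4))

# Plain numpy: three separate passes, with temporaries for a*b and c*d.
r1 = a * b + c * d

# numexpr: one blockwise pass over the operands, no full-size temporaries.
r2 = ne.evaluate("a * b + c * d")

assert np.allclose(r1, r2)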