On Thu, Jan 22, 2009 at 17:09, Wes McKinney wesmck...@gmail.com wrote:
Windows XP, Pentium D, Python 2.5.2
I can replicate the negative numbers on my Windows VM. I'll take a look at it.
Wrote profile results to foo.py.lprof
Timer unit: 4.17601e-010 s
File: foo.py
Function: f at line 1
Total
import cProfile

def f():
    pass

def g():
    for i in xrange(100):
        f()

cProfile.run('g()')

test.py
103 function calls in 1.225 CPU seconds
Ordered by: standard name
ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
     1    0.000    0.000
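Note that cProfile.run takes a string (or code object) to exec, not the result of calling g() first. A self-contained variant of the snippet, using the stdlib pstats module to sort the output (written for Python 3, hence range instead of xrange):

```python
import cProfile
import io
import pstats

def f():
    pass

def g():
    for i in range(100):
        f()

# Profile g() directly with a Profile object, avoiding cProfile.run's
# reliance on exec'ing a string in the __main__ namespace.
prof = cProfile.Profile()
prof.enable()
g()
prof.disable()

buf = io.StringIO()
stats = pstats.Stats(prof, stream=buf)
stats.sort_stats('cumulative').print_stats(5)
print(buf.getvalue())
```

The f row should show ncalls of 100, matching the loop count.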
On Tue, Jan 20, 2009 at 20:57, Neal Becker ndbeck...@gmail.com wrote:
I see the problem. Thanks for the great profiler! You ought to make this
more widely known.
http://pypi.python.org/pypi/line_profiler
--
Robert Kern
I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
-- Umberto Eco
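A minimal way to exercise the package from the PyPI link above (the kernprof script and its injected @profile decorator are the usual entry points, but the LineProfiler class can also be driven directly; guarded here in case the package is not installed — the function below is just a toy workload, not from the thread):

```python
def slow(n):
    # Toy workload: sum of squares below n.
    total = 0
    for i in range(n):
        total += i * i
    return total

try:
    from line_profiler import LineProfiler
    lp = LineProfiler()
    result = lp(slow)(10000)   # wrap and call; per-line timings recorded
    lp.print_stats()           # prints the line-by-line breakdown
except ImportError:
    # line_profiler not installed; fall back to a plain call
    result = slow(10000)

print(result)
```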
T J wrote:
On Tue, Jan 20, 2009 at 6:57 PM, Neal Becker ndbeck...@gmail.com wrote:
It seems the big chunks of time are used in data conversion between numpy
and my own vectors classes. Mine are wrappers around boost::ublas. The
conversion must be falling back on a very inefficient method
Robert Kern wrote:
On Tue, Jan 20, 2009 at 20:57, Neal Becker ndbeck...@gmail.com wrote:
I see the problem. Thanks for the great profiler! You ought to make
this more widely known.
I'll be making a release shortly.
It seems the big chunks of time are used in data conversion between
On 1/21/2009 1:27 PM, Neal Becker wrote:
It might if I had used this for all of my c++ code, but I have a big library
of c++ wrapped code that doesn't use pyublas. Pyublas takes numpy objects
from python and allows the use of c++ ublas on it (without conversion).
If you can get a pointer
On 1/21/2009 2:38 PM, Sturla Molden wrote:
If you can get a pointer (as integer) to your C++ data, and the shape
and dtype is known, you may use this (rather unsafe) 'fromaddress' hack:
And opposite, if you need to get the address referenced to by an
ndarray, you can do this:
def
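Sturla's fromaddress definition is cut off above. A minimal sketch of the same (deliberately unsafe) idea using ctypes and np.frombuffer — the name and signature here are a reconstruction, not his exact code:

```python
import ctypes
import numpy as np

def fromaddress(address, dtype, shape):
    # Wrap an existing memory block (e.g. one owned by C++ code) in an
    # ndarray WITHOUT copying. Unsafe: the underlying buffer must stay
    # alive for as long as the returned array is used.
    dtype = np.dtype(dtype)
    nbytes = dtype.itemsize * int(np.prod(shape))
    buf = (ctypes.c_char * nbytes).from_address(address)
    return np.frombuffer(buf, dtype=dtype).reshape(shape)

# The opposite direction: the address referenced by an ndarray.
a = np.arange(6, dtype=np.float64)
addr = a.__array_interface__['data'][0]

b = fromaddress(addr, np.float64, (2, 3))
b[0, 0] = 42.0        # writes through to a's buffer: no copy was made
```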
Hi Neal,
On Wednesday 21 January 2009 07:27:04 Neal Becker wrote:
It might if I had used this for all of my c++ code, but I have a big
library of c++ wrapped code that doesn't use pyublas. Pyublas takes numpy
objects from python and allows the use of c++ ublas on it (without
conversion).
On Wednesday 21 January 2009 10:22:36 Neal Becker wrote:
[http://mail.python.org/pipermail/cplusplus-sig/2008-October/013825.html
Thanks for reminding me about this!
Do you have a current version of the code? I grabbed the files from the
above message, but I see some additional subsequent
Neal Becker wrote:
I tried a little experiment, implementing some code in numpy
It sounds like you've found your core issue, but a couple comments:
from numpy import *
I'm convinced that import * is a bad idea. I think the standard
syntax is now import numpy as np
from math import pi
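To illustrate the point above (a generic check, not code from the thread): a star import silently rebinds builtin names such as sum, any, and all to numpy's versions, which behave differently on non-array inputs.

```python
import builtins
import numpy as np
from numpy import *   # discouraged: shadows several builtins
from math import pi

# After the star import, the bare name `sum` is numpy's, not Python's.
print(sum is np.sum)
print(sum is builtins.sum)
```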
Robert-- this is a great little piece of code, I already think it will be a
part of my workflow. However, I seem to be getting negative % time taken on
the more time consuming lines, perhaps getting some overflow?
Thanks a lot,
Wes
On Wed, Jan 21, 2009 at 3:23 AM, Robert Kern
On Wednesday 21 January 2009 13:55:49 Neal Becker wrote:
I'm only interested in simple strided 1-d vectors. In that case, I think
your code already works. If you have c++ code using the iterator
interface, the iterator's dereference will use (*array)[index]. This will
use operator[], which
On Wednesday 21 January 2009 14:57:59 Neal Becker wrote:
ublas::vector<T> func (numpy::array_from_py<T>::type const &)
But not for a function that modifies its arg in-place (& instead of const &):
void func (numpy::array_from_py<T>::type &)
Use void func
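The distinction above — returning a converted copy versus mutating the caller's buffer through a reference — has a direct numpy-level analogue. A generic sketch (not pyublas code):

```python
import numpy as np

def scaled_copy(a):
    # by-value style: allocates and returns a new array;
    # the caller's data is untouched
    return a * 2.0

def scale_inplace(a):
    # by-reference style: writes into the caller's buffer,
    # no temporary copy is created
    a *= 2.0

x = np.ones(4)
y = scaled_copy(x)    # x is still all 1.0 here
scale_inplace(x)      # x is now all 2.0
```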
On Wed, Jan 21, 2009 at 12:13, Wes McKinney wesmck...@gmail.com wrote:
Robert-- this is a great little piece of code, I already think it will be a
part of my workflow. However, I seem to be getting negative % time taken on
the more time consuming lines, perhaps getting some overflow?
That's
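Robert's explanation is truncated above. One plausible mechanism for negative times (a hypothetical illustration, not confirmed by the thread) is per-line tick totals wrapping a signed 32-bit integer: at the timer unit reported earlier (~0.42 ns/tick), any line consuming more than about 0.9 s of ticks exceeds 2**31 - 1.

```python
# Hypothetical illustration: tick counts accumulated in a signed 32-bit
# integer wrap to negative once they exceed 2**31 - 1 ticks.
TIMER_UNIT = 4.17601e-10      # seconds per tick, from the report above

def to_int32(n):
    # Emulate C signed 32-bit wraparound semantics.
    n &= 0xFFFFFFFF
    return n - 2**32 if n >= 2**31 else n

ticks = int(1.5 / TIMER_UNIT)            # ticks for a 1.5 s line: > 2**31
print(to_int32(ticks) * TIMER_UNIT)      # the wrapped "time" is negative
```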
I have been using your profiler extensively and it has contributed to
significant improvements in the application I work on, largely thanks to
the line-by-line breakdown, which makes it easy to pick the next piece of
code to optimize. So firstly, many thanks for
2009/1/20 Neal Becker ndbeck...@gmail.com:
I tried a little experiment, implementing some code in numpy (usually I
build modules in c++ to interface to python). Since these operations are
all large vectors, I hoped it would be reasonably efficient.
The code in question is simple. It is a