Hi,

I am working on a script that reads rather large amounts of data in a
binary format and then processes it through different test functions.
I have optimized the beast as much as I possibly could: using tuples
instead of lists, then moving to Cython and declaring types, and
optimizing the calls to NumPy functions with the buffer notation...

All in all I gained a factor of 10 in speed. Not bad, but still not really enough...

What I still see as factors slowing me down (see my code in the attachment):
- the use of Python's file.read() to get a string which I then process
(would an fread call from C be faster, and how would I implement it?)
- the use of struct.unpack
- the bit-masking technique I use (is it good or bad?)
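On the read() and struct.unpack points, one pattern that often helps is to read the file in large chunks and unpack with a precompiled struct.Struct, instead of one read() plus one struct.unpack per event. A minimal sketch, assuming a purely hypothetical 12-byte event layout (little-endian uint32 header word followed by a double; the field names and bit positions are illustrative, not your format):

```python
import io
import struct

# Precompiling the format avoids re-parsing the format string on every call.
# Hypothetical 12-byte event: little-endian uint32 header word + one double.
EVENT = struct.Struct("<Id")

def parse_event(buf, offset=0):
    """Unpack one event and split the header word with bit masks."""
    word, payload = EVENT.unpack_from(buf, offset)
    channel = word & 0xFF          # low 8 bits: channel id (assumed layout)
    flags = (word >> 8) & 0xFFF    # next 12 bits: status flags (assumed)
    return channel, flags, payload

def iter_events(f, events_per_chunk=65536):
    """Read many events per file.read() call instead of one at a time."""
    chunk_size = EVENT.size * events_per_chunk
    while True:
        buf = f.read(chunk_size)
        if len(buf) < EVENT.size:
            break
        usable = len(buf) - len(buf) % EVENT.size
        for offset in range(0, usable, EVENT.size):
            yield parse_event(buf, offset)
```

Reading a few megabytes per read() call amortizes the Python call overhead much like a buffered fread would; mmap is another option worth trying before dropping to C.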

The above might seem irrelevant but I have millions of events to process...

One more question related to this... how do I profile a Cython file?
(The info from the Python profiler is no longer split into the
different subfunctions...)
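On the profiling question: Cython can emit profile hooks so that cProfile sees each compiled function again. Putting the directive `# cython: profile=True` at the top of the .pyx file (or decorating selected functions with `@cython.profile(True)`) enables this; the module is then profiled from plain Python as usual. A sketch, where `hot_loop` is just a hypothetical stand-in for the per-event work:

```python
# In the .pyx file, add at the very top:
#   # cython: profile=True
# then rebuild, and profile from Python:
import cProfile
import io
import pstats

def hot_loop(n):
    # Hypothetical stand-in for the per-event processing.
    total = 0
    for i in range(n):
        total += i & 0x7
    return total

profiler = cProfile.Profile()
profiler.enable()
hot_loop(100000)
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
report = stream.getvalue()  # per-function breakdown, including Cython functions
```

Note that the profile hooks themselves add overhead, so the directive should be switched off again for production builds.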


Thanks in advance for your tips,
JF

Attachment: next_ev.pyx
Description: Binary data

_______________________________________________
Cython-dev mailing list
[email protected]
http://codespeak.net/mailman/listinfo/cython-dev
