I should add that over the past couple of days we've done more testing to 
try to isolate the issue. In particular, I created a simple script that 
built a 100-million-character string one byte at a time (just an example 
of a memory- and CPU-intensive task) and compared the execution times of 
that script in source form versus as a PyInstaller exe. There was 
practically no difference, so the issue isn't generic or widespread; it's 
caused by something more specific that our application is doing. 
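The string-building test was roughly along these lines (a reconstruction, 
not the exact script, and using Python 3 syntax; the size is scaled down 
here so it runs quickly):

```python
import time

def build_string(n):
    # Append one character at a time, then join: a deliberately naive,
    # memory- and CPU-intensive workload.
    chunks = []
    for _ in range(n):
        chunks.append("a")
    return "".join(chunks)

if __name__ == "__main__":
    start = time.time()
    s = build_string(1_000_000)  # the real test used 100 million
    print(f"built {len(s):,} chars in {time.time() - start:.2f}s")
```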

Since our program involves a lot of file I/O (reading, seeking), I tested 
another small script that wrote 100 million characters one byte at a time 
to a file and then used random.randrange(0, 100000000) to seek to a random 
location in that same file and read 1 byte (100 million times). Once 
again, there was no significant difference in execution time between the 
source and exe versions. 
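The I/O test was shaped like this (again a scaled-down reconstruction, 
not the exact script):

```python
import os
import random
import tempfile
import time

def io_benchmark(size, reads):
    # Write `size` bytes one at a time, then perform `reads` random
    # seek-and-read-one-byte operations over the same file.
    fd, path = tempfile.mkstemp()
    try:
        with os.fdopen(fd, "wb") as f:
            for _ in range(size):
                f.write(b"x")
        start = time.time()
        with open(path, "rb") as f:
            for _ in range(reads):
                f.seek(random.randrange(0, size))
                f.read(1)
        return time.time() - start
    finally:
        os.remove(path)

if __name__ == "__main__":
    # The real test used 100 million for both parameters.
    print(f"random reads took {io_benchmark(1_000_000, 100_000):.3f}s")
```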

Next I tried importing a number of modules (PyQt, SQLAlchemy, PyCrypto, 
etc.) into the simple test script until the size of the exe file was 
about 40 MB. That also did not noticeably slow down execution. 
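The bloated test binary was built from a script shaped like this (module 
names taken from the post; the try/except just tolerates packages that 
aren't installed locally):

```python
import time

# Heavy third-party imports included solely to inflate the frozen exe;
# the modules themselves are never used.
for _mod in ("PyQt4", "sqlalchemy", "Crypto"):
    try:
        __import__(_mod)
    except ImportError:
        pass  # fine for this test: we only care about bundle size

def timed_work(n):
    # Same simple CPU-bound workload as the earlier string test.
    start = time.time()
    s = "".join("a" for _ in range(n))
    return len(s), time.time() - start

if __name__ == "__main__":
    length, elapsed = timed_work(1_000_000)
    print(f"{length:,} chars in {elapsed:.2f}s")
```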

I did manage to find an older post here with a similar-sounding issue: 
https://groups.google.com/forum/#!topic/pyinstaller/_iX2NjXckRI. The 
suggestion was to look at pyi_importers.py and to pass -v to Python to 
trace imports. I'm not quite sure how that would help diagnose the issue. 
What should we be looking for, specifically, in that file and/or output? 
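For reference, -v makes the interpreter print one line per import as it 
happens, which appears to be the trace that thread suggests comparing 
between the source run and the frozen run. A minimal way to capture it 
(pointed at a throwaway one-liner here instead of vol.py; for a frozen 
exe you would presumably rebuild with debug output enabled in the spec 
instead):

```python
import subprocess
import sys

# Run a fresh interpreter with -v and collect its import trace.
# -v writes the trace to stderr, one "import ..." line per module.
out = subprocess.run(
    [sys.executable, "-v", "-c", "import json"],
    capture_output=True,
    text=True,
).stderr
import_lines = [ln for ln in out.splitlines() if ln.startswith("import ")]
print(f"{len(import_lines)} import events traced")
```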

Thanks,
MHL

On Wednesday, June 3, 2015 at 2:19:31 AM UTC-5, Michael Hale Ligh wrote:
>
> Hello, 
>
> I was wondering if anyone has experienced significant performance 
> degradation after compiling a program with PyInstaller. When I run a 
> program from source, it's approximately 10 times faster than the exact 
> same code in binary form. I'm aware of the minimal startup delay that is 
> known/expected with PyInstaller binaries, and I don't believe that is 
> the issue here. 
>
> Here's an example: 
>
> 1) check out the volatility source 
>
> $ git clone https://github.com/volatilityfoundation/volatility.git
>
> 2) run the "filescan" plugin with volatility as source
>
> $ time python volatility/vol.py -f memdump.mem filescan > /dev/null 
>
> real    1m31.799s
> user    1m25.953s
> sys    0m5.660s
>
> 3) check out the latest dev pyinstaller 
>
> $ git clone https://github.com/pyinstaller/pyinstaller.git
>
> 4) compile volatility (this is being done on a 64-bit Debian Linux system)
>
> $ python pyinstaller/pyinstaller.py -F volatility/pyinstaller.spec 
>
> 5) run the "filescan" plugin with volatility as a pyinstaller binary
>
> $ time ./dist/volatility -f memdump.mem filescan > /dev/null
>
> real    14m31.405s
> user    14m23.970s
> sys    0m5.700s
>
> As you can see, the exact same code took 14m31s after being compiled, 
> but only 1m31s in source form. If you're not familiar with volatility 
> (or the filescan plugin), it essentially scans through a large memory 
> dump file (several GB in size) looking for specific signatures/byte 
> patterns and then interprets the data at the matching addresses as C 
> data structures (in short, memory forensics). 
>
> What would you suggest for troubleshooting this type of performance 
> problem? Also, are there any known types of activities (i.e. disk I/O, 
> network I/O, GUI interactions) or specific modules/APIs that result in 
> severe slowdowns when compiled or should speeds theoretically be pretty 
> similar between the source and binary versions (minus the tiny initial 
> startup delay)?
>
> Thank you! 
> MHL
>

-- 
You received this message because you are subscribed to the Google Groups 
"PyInstaller" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/pyinstaller.
For more options, visit https://groups.google.com/d/optout.
