Re: [web2py] Re: Python Performance Issue

2014-03-18 Thread Massimo Di Pierro
People have found lots of variability in performance with apache+mod_wsgi. 
Performance is very sensitive to memory, etc.

This is because Apache is not async (like nginx is) and it uses either threads 
or processes. Both have issues with Python. Threads slow you down because 
of the GIL. Parallel processes may consume lots of memory, which may also 
cause performance issues. Things get worse and worse if processes hang 
(think of clients sending requests but not loading because of slow 
connections).
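
To make the GIL point concrete, here is a minimal illustrative sketch (plain 
CPython, no web server involved) that times the same CPU-bound loop once in the 
main thread and once split across two threads; under the GIL the threaded run is 
typically no faster, and often slower.

# Illustration only: CPU-bound work does not speed up with threads under the GIL.
import time
import threading

def burn(n):
    x = 0.0
    for i in range(n):
        x += (float(i + 10) * (i + 25) + 175.0) / 3.14
    return x

N = 2000000

start = time.time()
burn(N)
print("single thread: %.3fs" % (time.time() - start))

start = time.time()
threads = [threading.Thread(target=burn, args=(N // 2,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("two threads:   %.3fs" % (time.time() - start))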

Apache is fine for static files. gunicorn and nginx are known to have much 
better performance with Python web apps.

Massimo


On Monday, 17 March 2014 21:19:11 UTC-5, horridohobbyist wrote:

 I'm disturbed by the fact that the defaults are sensible. That suggests 
 there is no way to improve the performance. A 2x-10x performance hit is 
 very serious.

 I was considering dropping Apache and going with nginx/gunicorn in my 
 Linux server, but I'm not sure that's a good idea. Apache is a nearly 
 universal web server and one cannot simply ignore it.

 Also, I'm not sure I can duplicate the functionality in my current Apache 
 setup in nginx/gunicorn.


 On Monday, 17 March 2014 21:15:12 UTC-4, Tim Richardson wrote:


 (I am the furthest thing from being an Apache expert as you can find.)


 Well, wherever that puts you, I'll be in shouting distance. 

 I guess this means you are using defaults. The defaults are sensible for 
 small loads, so I don't think you would get better performance from 
 tweaking. These default settings should set you up with 15 threads running 
 under one process, which for a small load should be optimal; that is, it's as 
 good as it's going to get. You get these sensible defaults if you used the 
 deployment script mentioned in the web2py book (the settings are in the 
 /etc/apache2/sites-available/default file).

 Threads are faster than processes, but gunicorn and nginx don't even use 
 threads. They manage their workloads inside a single thread, which makes 
 them fast as long as nothing CPU-intensive is happening.


 Thanks.


 On Monday, 17 March 2014 20:20:00 UTC-4, Tim Richardson wrote:



 There is no question that the fault lies with Apache.


 Perhaps it is fairer to say the fault lies with mod_wsgi ?

 What are the mod_wsgi settings in your apache config? 



-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
--- 
You received this message because you are subscribed to the Google Groups 
web2py-users group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to web2py+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [web2py] Re: Python Performance Issue

2014-03-18 Thread Niphlod
Apache isn't fine for static files either. 
The move of practically all tech-savvy people who need performance to 
evented webservers is a good indication of how much the uber-standard Apache 
falls short in easy-to-debug scenarios (I won't even start on the know-how 
needed to make its configuration syntax do what you want).
It grew big with CGI, PHP and Java, and with practically every shared hosting 
out there, back in the days when no alternatives were available. It shows all 
of its age ^__^

BTW: nginx doesn't run Python the way Apache does. Usually you have something 
to manage the Python processes (gunicorn or uwsgi) and nginx just buffers 
in/out requests (and being evented, it is a perfect candidate for that).
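
For anyone unfamiliar with the plumbing: gunicorn or uwsgi only needs a WSGI 
callable to invoke for each request (in web2py's case that callable is 
gluon.main:wsgibase), while nginx proxies and buffers the traffic in front of 
it. A minimal generic sketch of such a callable, using a hypothetical module 
name myapp, looks like this:

# Minimal WSGI application; gunicorn would serve it with something like
#   gunicorn -w 4 myapp:application
# and nginx would sit in front, buffering slow clients in and out.
def application(environ, start_response):
    body = b"hello from a WSGI worker\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]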

On Tuesday, March 18, 2014 7:21:29 AM UTC+1, Massimo Di Pierro wrote:

 People have found lots of variability in performance with apache+mod_wsgi. 
 Performance is very sensitive to memory, etc.

 This is because Apache is not async (like nginx is) and it uses either 
 threads or processes. Both have issues with Python. Threads slow you down 
 because of the GIL. Parallel processes may consume lots of memory, which may 
 also cause performance issues. Things get worse and worse if processes hang 
 (think of clients sending requests but not loading because of slow 
 connections).

 Apache is fine for static files. gunicorn and nginx are known to have much 
 better performance with Python web apps.

 Massimo





Re: [web2py] Re: Python Performance Issue

2014-03-18 Thread Niphlod
BTW: Apache still suffers from the Slowloris attack if not carefully configured. 
At the moment there are only workarounds to mitigate the issue, not a definitive 
solution.

On Tuesday, March 18, 2014 9:46:38 PM UTC+1, Niphlod wrote:

 Apache isn't fine for static files either. 
 The move of practically all tech-savvy people who need performance to 
 evented webservers is a good indication of how much the uber-standard Apache 
 falls short in easy-to-debug scenarios (I won't even start on the know-how 
 needed to make its configuration syntax do what you want).
 It grew big with CGI, PHP and Java, and with practically every shared hosting 
 out there, back in the days when no alternatives were available. It shows 
 all of its age ^__^

 BTW: nginx doesn't run Python the way Apache does. Usually you have something 
 to manage the Python processes (gunicorn or uwsgi) and nginx just buffers 
 in/out requests (and being evented, it is a perfect candidate for that).

 On Tuesday, March 18, 2014 7:21:29 AM UTC+1, Massimo Di Pierro wrote:

 People have found lots of variability in performance with 
 apache+mod_wsgi. Performance is very sensitive to memory, etc.

 This is because Apache is not async (like nginx is) and it uses either 
 threads or processes. Both have issues with Python. Threads slow you down 
 because of the GIL. Parallel processes may consume lots of memory, which may 
 also cause performance issues. Things get worse and worse if processes hang 
 (think of clients sending requests but not loading because of slow 
 connections).

 Apache is fine for static files. gunicorn and nginx are known to have 
 much better performance with Python web apps.

 Massimo





Re: [web2py] Re: Python Performance Issue

2014-03-17 Thread Michele Comitini
@Massimo, @hh

 python anyserver.py -s gunicorn -i 127.0.0.1 -p 8000

with the above, just one worker is started, hence requests are serialized.



2014-03-17 1:03 GMT+01:00 Massimo Di Pierro massimo.dipie...@gmail.com:
 easy_install gunicorn
 cd web2py
 python anyserver.py -s gunicorn -i 127.0.0.1 -p 8000

 Anyway, you first need to run a test that does not include the import of 
 Package, because web2py definitely treats imports differently. That must be 
 tested separately.
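
To time the import cost in isolation, a rough sketch (assuming a module named 
shippackage is importable; any module works) is to compare a cold import against 
repeated imports that hit Python's sys.modules cache:

# Rough sketch: time a cold import, cached re-imports, and a forced re-import.
import sys
import time

def timed_import(name):
    start = time.time()
    __import__(name)
    return time.time() - start

print("cold import:   %.6fs" % timed_import("shippackage"))
for _ in range(3):
    print("cached import: %.6fs" % timed_import("shippackage"))

# Dropping the cached entry forces the module file to be executed again,
# which is closer to what an uncached per-request import would cost.
sys.modules.pop("shippackage", None)
print("re-import:     %.6fs" % timed_import("shippackage"))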

 Massimo



 On Sunday, 16 March 2014 15:31:17 UTC-5, horridohobbyist wrote:

 Well, I managed to get gunicorn working in a roundabout way. Here are my
 findings for the fred.py/hello.py test:

 Elapsed time: 0.028
 Elapsed time: 0.068

 Basically, it's as fast as the command line test!

 I'm not sure this tells us much. Is it Apache's fault? Is it web2py's
 fault? The test is run without the full web2py scaffolding. I don't know how
 to run web2py on gunicorn, unless someone can tell me.


 On Sunday, 16 March 2014 16:21:00 UTC-4, Michele Comitini wrote:

 gunicorn instructions:

 $ pip install gunicorn
 $ cd <root dir of web2py>
 $ gunicorn -w 4 gluon.main:wsgibase



 2014-03-16 14:47 GMT+01:00 horridohobbyist horrido...@gmail.com:
  I've conducted a test with Flask.
 
  fred.py is the command line program.
  hello.py is the Flask program.
  default.py is the Welcome controller.
  testdata.txt is the test data.
  shippackage.py is a required module.
 
  fred.py:
  0.024 second
  0.067 second
 
  hello.py:
  0.029 second
  0.073 second
 
  default.py:
  0.27 second
  0.78 second
 
  The Flask program is slightly slower than the command line. However,
  the
  Welcome app is about 10x slower!
 
  Web2py is much, much slower than Flask.
 
  I conducted the test in a Parallels VM running Ubuntu Server 12.04 (1GB
  memory allocated). I have a 2.5GHz dual-core Mac mini with 8GB.
 
 
  I can't quite figure out how to use gunicorn.
 
 
  On Saturday, 15 March 2014 23:41:49 UTC-4, horridohobbyist wrote:
 
  I'll see what I can do. It will take time for me to learn how to use
  another framework.
 
  As for trying a different web server, my (production) Linux server is
  intimately reliant on Apache. I'd have to learn how to use another web
  server, and then try it in my Linux VM.
 
 
  On Saturday, 15 March 2014 22:45:27 UTC-4, Anthony wrote:
 
  Are you able to replicate the exact task in another web framework,
  such
  as Flask (with the same server setup)?
 
  On Saturday, March 15, 2014 10:34:56 PM UTC-4, horridohobbyist wrote:
 
  Well, putting back all my apps hasn't widened the discrepancy. So I
  don't know why my previous web2py installation was so slow.
 
  While the Welcome app with the calculations test shows a 2x
  discrepancy,
  the original app that initiated this thread now shows a 13x
  discrepancy
  instead of 100x. That's certainly an improvement, but it's still too
  slow.
 
  The size of the discrepancy depends on the code that is executed.
  Clearly, what I'm doing in the original app (performing
  permutations) is
  more demanding than mere arithmetical operations. Hence, 13x vs 2x.
 
  I anxiously await any resolution to this performance issue, whether
  it
  be in WSGI or in web2py. I'll check in on this thread
  periodically...
 
 
  On Saturday, 15 March 2014 16:19:12 UTC-4, horridohobbyist wrote:
 
  Interestingly, now that I've got a fresh install of web2py with
  only
  the Welcome app, my Welcome vs command line test shows a consistent
  2x
  discrepancy, just as you had observed.
 
  My next step is to gradually add back all the other apps I had in
  web2py (I had 8 of them!) and see whether the discrepancy grows
  with the
  number of apps. That's the theory I'm working on.
 
  Yes, yes, I know, according to the Book, I shouldn't have so many
  apps
  installed in web2py. This apparently affects performance. But the
  truth is,
  most of those apps are hardly ever executed, so their existence
  merely
  represents a static overhead in web2py. In my mind, this shouldn't
  widen the
  discrepancy, but you never know.
 
 
  On Saturday, 15 March 2014 11:19:06 UTC-4, Niphlod wrote:
 
  @mcm: you got me worried. Your test function was clocking a hell of a lot 
  lower than the original script. But then I found out why; one order of 
  magnitude less (5000 vs 5). Once that was corrected, you got the exact 
  same clock times as my app (i.e. the function directly in the controller). I 
  also stripped out the logging part, making the app just return the result, 
  and no visible changes to the timings happened.
 
  @hh: glad at least we got some grounds to hold on.
  @mariano: compiled or not, it doesn't seem to change the mean. a
  compiled app has just lower variance.
 
  @all: jlundell definitely hit on something. Times are much lower 
  when threads are 1.
 
  BTW: if I change originalscript.py to
 
  # -*- coding: utf-8 -*-
  import time
  import threading
 
  def test():
  start = time.time()
  x = 0.0
  for i in 

Re: [web2py] Re: Python Performance Issue

2014-03-17 Thread horridohobbyist
Parallels VM running on a 2.5GHz dual-core Mac mini. I really don't know 
what Parallels uses.


On Monday, 17 March 2014 00:05:58 UTC-4, Massimo Di Pierro wrote:

 What kind of VM is this? What is the host platform? How many CPU cores? Is 
 VM using all the cores? The only thing I can think of is the GIL and the 
 fact that multithreaded code in python gets slower and slower the more 
 cores I have. On my laptop, with two cores, I do not see any slow down. 
 Rocket preallocate a thread pool. The rationale is that it decreases the 
 latency time. Perhaps you can also try rocket in this way:

 web2py.py --minthreads=1 --maxthreads=1

 This will reduce the number of worker threads to 1. Rocket also runs a 
 background non-worker thread that monitors worker threads and kills them if 
 they get stuck.

 On Sunday, 16 March 2014 20:22:45 UTC-5, horridohobbyist wrote:

 Using gunicorn (Thanks, Massimo), I ran the full web2py Welcome code:

 Welcome: elapsed time: 0.0511929988861
 Welcome: elapsed time: 0.0024790763855
 Welcome: elapsed time: 0.00262713432312
 Welcome: elapsed time: 0.00224614143372
 Welcome: elapsed time: 0.00218415260315
 Welcome: elapsed time: 0.00213503837585

 Oddly enough, it's slightly faster! But still 37% slower than the command 
 line execution.

 I'd really, really, **really** like to know why the shipping code is 10x 
 slower...


 On Sunday, 16 March 2014 21:13:56 UTC-4, horridohobbyist wrote:

 Okay, I did the calculations test in my Linux VM using command line 
 (fred0), Flask (hello0), and web2py (Welcome).

 fred0: elapsed time: 0.00159001350403

 fred0: elapsed time: 0.0015709400177

 fred0: elapsed time: 0.00156021118164

 fred0: elapsed time: 0.0015971660614

 fred0: elapsed time: 0.0031584741

 hello0: elapsed time: 0.00271105766296

 hello0: elapsed time: 0.00213503837585

 hello0: elapsed time: 0.00195693969727

 hello0: elapsed time: 0.00224900245667

 hello0: elapsed time: 0.00205492973328
 Welcome: elapsed time: 0.0484869480133

 Welcome: elapsed time: 0.00296783447266

 Welcome: elapsed time: 0.00293898582458

 Welcome: elapsed time: 0.00300216674805

 Welcome: elapsed time: 0.00312614440918

 The Welcome discrepancy is just under 2x, not nearly as bad as 10x in my 
 shipping code.


 On Sunday, 16 March 2014 17:52:00 UTC-4, Massimo Di Pierro wrote:

 In order to isolate the problem one must take it in steps. This is a 
 good test but you must first perform this test with the code you proposed 
 before:

 def test():
     t = time.time
     start = t()
     x = 0.0
     for i in range(1,5000):
         x += (float(i+10)*(i+25)+175.0)/3.14
     debug("elapsed time: " + str(t()-start))
     return

 I would like to know the results about this test code first.

 The other code you are using performs an import:

 from shippackage import Package


 Now that is something that is very different in web2py and flask for 
 example. In web2py the import is executed at every request (although it 
 should be cached by Python) while in flask it is executed only once.  This 
 should also not cause a performance difference but it is a different test 
 than the one above.

 TL;DR: we should test separately python code execution (which may be 
 affected by threading) and import statements (which may be affected by 
 web2py custom_import and/or module weird behavior).



 On Sunday, 16 March 2014 08:47:13 UTC-5, horridohobbyist wrote:

 I've conducted a test with Flask.

 fred.py is the command line program.
 hello.py is the Flask program.
 default.py is the Welcome controller.
 testdata.txt is the test data.
 shippackage.py is a required module.

 fred.py:
 0.024 second
 0.067 second

 hello.py:
 0.029 second
 0.073 second

 default.py:
 0.27 second
 0.78 second

 The Flask program is slightly slower than the command line. However, 
 the Welcome app is about 10x slower!

 *Web2py is much, much slower than Flask.*

 I conducted the test in a Parallels VM running Ubuntu Server 12.04 
 (1GB memory allocated). I have a 2.5GHz dual-core Mac mini with 8GB.


 I can't quite figure out how to use gunicorn.


 On Saturday, 15 March 2014 23:41:49 UTC-4, horridohobbyist wrote:

 I'll see what I can do. It will take time for me to learn how to use 
 another framework.

 As for trying a different web server, my (production) Linux server is 
 intimately reliant on Apache. I'd have to learn how to use another web 
 server, and then try it in my Linux VM.


 On Saturday, 15 March 2014 22:45:27 UTC-4, Anthony wrote:

 Are you able to replicate the exact task in another web framework, 
 such as Flask (with the same server setup)?

 On Saturday, March 15, 2014 10:34:56 PM UTC-4, horridohobbyist wrote:

 Well, putting back all my apps hasn't widened the discrepancy. So I 
 don't know why my previous web2py installation was so slow.

 While the Welcome app with the calculations test shows a 2x 
 discrepancy, the original app that initiated this thread now shows a 
 13x 
 discrepancy instead of 100x. 

Re: [web2py] Re: Python Performance Issue

2014-03-17 Thread Cliff Kachinske
Apparently the number of cores is adjustable. Try this link.

http://download.parallels.com/desktop/v5/docs/en/Parallels_Desktop_Users_Guide/23076.htm

On Monday, March 17, 2014 10:02:13 AM UTC-4, horridohobbyist wrote:

 Parallels VM running on a 2.5GHz dual-core Mac mini. I really don't know 
 what Parallels uses.


 On Monday, 17 March 2014 00:05:58 UTC-4, Massimo Di Pierro wrote:

 What kind of VM is this? What is the host platform? How many CPU cores? 
 Is VM using all the cores? The only thing I can think of is the GIL and the 
 fact that multithreaded code in python gets slower and slower the more 
 cores I have. On my laptop, with two cores, I do not see any slow down. 
 Rocket preallocate a thread pool. The rationale is that it decreases the 
 latency time. Perhaps you can also try rocket in this way:

 web2py.py --minthreads=1 --maxthreads=1

 This will reduce the number of worker threads to 1. Rocket also runs a 
 background non-worker thread that monitors worker threads and kills them if 
 they get stuck.

 On Sunday, 16 March 2014 20:22:45 UTC-5, horridohobbyist wrote:

 Using gunicorn (Thanks, Massimo), I ran the full web2py Welcome code:

 Welcome: elapsed time: 0.0511929988861
 Welcome: elapsed time: 0.0024790763855
 Welcome: elapsed time: 0.00262713432312
 Welcome: elapsed time: 0.00224614143372
 Welcome: elapsed time: 0.00218415260315
 Welcome: elapsed time: 0.00213503837585

 Oddly enough, it's slightly faster! But still 37% slower than the 
 command line execution.

 I'd really, really, **really** like to know why the shipping code is 10x 
 slower...


 On Sunday, 16 March 2014 21:13:56 UTC-4, horridohobbyist wrote:

 Okay, I did the calculations test in my Linux VM using command line 
 (fred0), Flask (hello0), and web2py (Welcome).

 fred0: elapsed time: 0.00159001350403

 fred0: elapsed time: 0.0015709400177

 fred0: elapsed time: 0.00156021118164

 fred0: elapsed time: 0.0015971660614

 fred0: elapsed time: 0.0031584741

 hello0: elapsed time: 0.00271105766296

 hello0: elapsed time: 0.00213503837585

 hello0: elapsed time: 0.00195693969727

 hello0: elapsed time: 0.00224900245667

 hello0: elapsed time: 0.00205492973328
 Welcome: elapsed time: 0.0484869480133

 Welcome: elapsed time: 0.00296783447266

 Welcome: elapsed time: 0.00293898582458

 Welcome: elapsed time: 0.00300216674805

 Welcome: elapsed time: 0.00312614440918

 The Welcome discrepancy is just under 2x, not nearly as bad as 10x in 
 my shipping code.


 On Sunday, 16 March 2014 17:52:00 UTC-4, Massimo Di Pierro wrote:

 In order to isolate the problem one must take it in steps. This is a 
 good test but you must first perform this test with the code you proposed 
 before:

 def test():
     t = time.time
     start = t()
     x = 0.0
     for i in range(1,5000):
         x += (float(i+10)*(i+25)+175.0)/3.14
     debug("elapsed time: " + str(t()-start))
     return

 I would like to know the results about this test code first.

 The other code you are using performs an import:

 from shippackage import Package


 Now that is something that is very different in web2py and flask for 
 example. In web2py the import is executed at every request (although it 
 should be cached by Python) while in flask it is executed only once.  
 This 
 should also not cause a performance difference but it is a different test 
 than the one above.

 TL;DR: we should test separately python code execution (which may be 
 affected by threading) and import statements (which may be affected by 
 web2py custom_import and/or module weird behavior).



 On Sunday, 16 March 2014 08:47:13 UTC-5, horridohobbyist wrote:

 I've conducted a test with Flask.

 fred.py is the command line program.
 hello.py is the Flask program.
 default.py is the Welcome controller.
 testdata.txt is the test data.
 shippackage.py is a required module.

 fred.py:
 0.024 second
 0.067 second

 hello.py:
 0.029 second
 0.073 second

 default.py:
 0.27 second
 0.78 second

 The Flask program is slightly slower than the command line. However, 
 the Welcome app is about 10x slower!

 *Web2py is much, much slower than Flask.*

 I conducted the test in a Parallels VM running Ubuntu Server 12.04 
 (1GB memory allocated). I have a 2.5GHz dual-core Mac mini with 8GB.


  I can't quite figure out how to use gunicorn.


 On Saturday, 15 March 2014 23:41:49 UTC-4, horridohobbyist wrote:

 I'll see what I can do. It will take time for me to learn how to use 
 another framework.

 As for trying a different web server, my (production) Linux server 
 is intimately reliant on Apache. I'd have to learn how to use another 
 web 
 server, and then try it in my Linux VM.


 On Saturday, 15 March 2014 22:45:27 UTC-4, Anthony wrote:

 Are you able to replicate the exact task in another web framework, 
 such as Flask (with the same server setup)?

 On Saturday, March 15, 2014 10:34:56 PM UTC-4, horridohobbyist 
 wrote:

 Well, putting back all my apps hasn't widened the discrepancy. So 
 I don't 

Re: [web2py] Re: Python Performance Issue

2014-03-17 Thread horridohobbyist
Anyway, I ran the shipping code Welcome test with both Apache2 and 
Gunicorn. Here are the results:

Apache:Begin...
Apache:Elapsed time: 0.28248500824
Apache:Elapsed time: 0.805250167847
Apache:Percentage fill: 60.0
Apache:Begin...
Apache:Elapsed time: 0.284092903137
Apache:Elapsed time: 0.797535896301
Apache:Percentage fill: 60.0
Apache:Begin...
Apache:Elapsed time: 0.266696929932
Apache:Elapsed time: 0.793596029282
Apache:Percentage fill: 60.0
Apache:Begin...
Apache:Elapsed time: 0.271706104279
Apache:Elapsed time: 0.770045042038
Apache:Percentage fill: 60.0
Apache:Begin...
Apache:Elapsed time: 0.26541185379
Apache:Elapsed time: 0.798058986664
Apache:Percentage fill: 60.0
Gunicorn:Begin...
Gunicorn:Elapsed time: 0.0273849964142
Gunicorn:Elapsed time: 0.0717470645905
Gunicorn:Percentage fill: 60.0
Gunicorn:Begin...
Gunicorn:Elapsed time: 0.0259709358215
Gunicorn:Elapsed time: 0.0712919235229
Gunicorn:Percentage fill: 60.0
Gunicorn:Begin...
Gunicorn:Elapsed time: 0.0273978710175
Gunicorn:Elapsed time: 0.0727338790894
Gunicorn:Percentage fill: 60.0
Gunicorn:Begin...
Gunicorn:Elapsed time: 0.0260291099548
Gunicorn:Elapsed time: 0.0724799633026
Gunicorn:Percentage fill: 60.0
Gunicorn:Begin...
Gunicorn:Elapsed time: 0.0249080657959
Gunicorn:Elapsed time: 0.0711901187897
Gunicorn:Percentage fill: 60.0

There is no question that the fault lies with Apache.


On Monday, 17 March 2014 00:05:58 UTC-4, Massimo Di Pierro wrote:

 What kind of VM is this? What is the host platform? How many CPU cores? Is 
 VM using all the cores? The only thing I can think of is the GIL and the 
 fact that multithreaded code in python gets slower and slower the more 
 cores I have. On my laptop, with two cores, I do not see any slow down. 
 Rocket preallocate a thread pool. The rationale is that it decreases the 
 latency time. Perhaps you can also try rocket in this way:

 web2py.py --minthreads=1 --maxthreads=1

 This will reduce the number of worker threads to 1. Rocket also runs a 
 background non-worker thread that monitors worker threads and kills them if 
 they get stuck.

 On Sunday, 16 March 2014 20:22:45 UTC-5, horridohobbyist wrote:

 Using gunicorn (Thanks, Massimo), I ran the full web2py Welcome code:

 Welcome: elapsed time: 0.0511929988861
 Welcome: elapsed time: 0.0024790763855
 Welcome: elapsed time: 0.00262713432312
 Welcome: elapsed time: 0.00224614143372
 Welcome: elapsed time: 0.00218415260315
 Welcome: elapsed time: 0.00213503837585

 Oddly enough, it's slightly faster! But still 37% slower than the command 
 line execution.

 I'd really, really, **really** like to know why the shipping code is 10x 
 slower...


 On Sunday, 16 March 2014 21:13:56 UTC-4, horridohobbyist wrote:

 Okay, I did the calculations test in my Linux VM using command line 
 (fred0), Flask (hello0), and web2py (Welcome).

 fred0: elapsed time: 0.00159001350403

 fred0: elapsed time: 0.0015709400177

 fred0: elapsed time: 0.00156021118164

 fred0: elapsed time: 0.0015971660614

 fred0: elapsed time: 0.0031584741

 hello0: elapsed time: 0.00271105766296

 hello0: elapsed time: 0.00213503837585

 hello0: elapsed time: 0.00195693969727

 hello0: elapsed time: 0.00224900245667

 hello0: elapsed time: 0.00205492973328
 Welcome: elapsed time: 0.0484869480133

 Welcome: elapsed time: 0.00296783447266

 Welcome: elapsed time: 0.00293898582458

 Welcome: elapsed time: 0.00300216674805

 Welcome: elapsed time: 0.00312614440918

 The Welcome discrepancy is just under 2x, not nearly as bad as 10x in my 
 shipping code.


 On Sunday, 16 March 2014 17:52:00 UTC-4, Massimo Di Pierro wrote:

 In order to isolate the problem one must take it in steps. This is a 
 good test but you must first perform this test with the code you proposed 
 before:

 def test():
     t = time.time
     start = t()
     x = 0.0
     for i in range(1,5000):
         x += (float(i+10)*(i+25)+175.0)/3.14
     debug("elapsed time: " + str(t()-start))
     return

 I would like to know the results about this test code first.

 The other code you are using performs an import:

 from shippackage import Package


 Now that is something that is very different in web2py and flask for 
 example. In web2py the import is executed at every request (although it 
 should be cached by Python) while in flask it is executed only once.  This 
 should also not cause a performance difference but it is a different test 
 than the one above.

 TL;DR: we should test separately python code execution (which may be 
 affected by threading) and import statements (which may be affected by 
 web2py custom_import and/or module weird behavior).



 On Sunday, 16 March 2014 08:47:13 UTC-5, horridohobbyist wrote:

 I've conducted a test with Flask.

 fred.py is the command line program.
 hello.py is the Flask program.
 default.py is the Welcome controller.
 testdata.txt is the test data.
 shippackage.py is a required module.

 fred.py:
 0.024 second
 0.067 second

 hello.py:
 0.029 second
 0.073 second

 

Re: [web2py] Re: Python Performance Issue

2014-03-17 Thread Massimo Di Pierro
Very interesting. What is the code being benchmarked here?
Could you post your Apache configuration?

On Monday, 17 March 2014 11:08:53 UTC-5, horridohobbyist wrote:

 Anyway, I ran the shipping code Welcome test with both Apache2 and 
 Gunicorn. Here are the results:

 Apache:Begin...
 Apache:Elapsed time: 0.28248500824
 Apache:Elapsed time: 0.805250167847
 Apache:Percentage fill: 60.0
 Apache:Begin...
 Apache:Elapsed time: 0.284092903137
 Apache:Elapsed time: 0.797535896301
 Apache:Percentage fill: 60.0
 Apache:Begin...
 Apache:Elapsed time: 0.266696929932
 Apache:Elapsed time: 0.793596029282
 Apache:Percentage fill: 60.0
 Apache:Begin...
 Apache:Elapsed time: 0.271706104279
 Apache:Elapsed time: 0.770045042038
 Apache:Percentage fill: 60.0
 Apache:Begin...
 Apache:Elapsed time: 0.26541185379
 Apache:Elapsed time: 0.798058986664
 Apache:Percentage fill: 60.0
 Gunicorn:Begin...
 Gunicorn:Elapsed time: 0.0273849964142
 Gunicorn:Elapsed time: 0.0717470645905
 Gunicorn:Percentage fill: 60.0
 Gunicorn:Begin...
 Gunicorn:Elapsed time: 0.0259709358215
 Gunicorn:Elapsed time: 0.0712919235229
 Gunicorn:Percentage fill: 60.0
 Gunicorn:Begin...
 Gunicorn:Elapsed time: 0.0273978710175
 Gunicorn:Elapsed time: 0.0727338790894
 Gunicorn:Percentage fill: 60.0
 Gunicorn:Begin...
 Gunicorn:Elapsed time: 0.0260291099548
 Gunicorn:Elapsed time: 0.0724799633026
 Gunicorn:Percentage fill: 60.0
 Gunicorn:Begin...
 Gunicorn:Elapsed time: 0.0249080657959
 Gunicorn:Elapsed time: 0.0711901187897
 Gunicorn:Percentage fill: 60.0

 There is no question that the fault lies with Apache.


 On Monday, 17 March 2014 00:05:58 UTC-4, Massimo Di Pierro wrote:

 What kind of VM is this? What is the host platform? How many CPU cores? 
 Is VM using all the cores? The only thing I can think of is the GIL and the 
 fact that multithreaded code in python gets slower and slower the more 
 cores I have. On my laptop, with two cores, I do not see any slow down. 
 Rocket preallocate a thread pool. The rationale is that it decreases the 
 latency time. Perhaps you can also try rocket in this way:

 web2py.py --minthreads=1 --maxthreads=1

 This will reduce the number of worker threads to 1. Rocket also runs a 
 background non-worker thread that monitors worker threads and kills them if 
 they get stuck.

 On Sunday, 16 March 2014 20:22:45 UTC-5, horridohobbyist wrote:

 Using gunicorn (Thanks, Massimo), I ran the full web2py Welcome code:

 Welcome: elapsed time: 0.0511929988861
 Welcome: elapsed time: 0.0024790763855
 Welcome: elapsed time: 0.00262713432312
 Welcome: elapsed time: 0.00224614143372
 Welcome: elapsed time: 0.00218415260315
 Welcome: elapsed time: 0.00213503837585

 Oddly enough, it's slightly faster! But still 37% slower than the 
 command line execution.

 I'd really, really, **really** like to know why the shipping code is 10x 
 slower...


 On Sunday, 16 March 2014 21:13:56 UTC-4, horridohobbyist wrote:

 Okay, I did the calculations test in my Linux VM using command line 
 (fred0), Flask (hello0), and web2py (Welcome).

 fred0: elapsed time: 0.00159001350403

 fred0: elapsed time: 0.0015709400177

 fred0: elapsed time: 0.00156021118164

 fred0: elapsed time: 0.0015971660614

 fred0: elapsed time: 0.0031584741

 hello0: elapsed time: 0.00271105766296

 hello0: elapsed time: 0.00213503837585

 hello0: elapsed time: 0.00195693969727

 hello0: elapsed time: 0.00224900245667

 hello0: elapsed time: 0.00205492973328
 Welcome: elapsed time: 0.0484869480133

 Welcome: elapsed time: 0.00296783447266

 Welcome: elapsed time: 0.00293898582458

 Welcome: elapsed time: 0.00300216674805

 Welcome: elapsed time: 0.00312614440918

 The Welcome discrepancy is just under 2x, not nearly as bad as 10x in 
 my shipping code.


 On Sunday, 16 March 2014 17:52:00 UTC-4, Massimo Di Pierro wrote:

 In order to isolate the problem one must take it in steps. This is a 
 good test but you must first perform this test with the code you proposed 
 before:

 def test():
     t = time.time
     start = t()
     x = 0.0
     for i in range(1,5000):
         x += (float(i+10)*(i+25)+175.0)/3.14
     debug("elapsed time: " + str(t()-start))
     return

 I would like to know the results about this test code first.

 The other code you are using performs an import:

 from shippackage import Package


 Now that is something that is very different in web2py and flask for 
 example. In web2py the import is executed at every request (although it 
 should be cached by Python) while in flask it is executed only once.  
 This 
 should also not cause a performance difference but it is a different test 
 than the one above.

 TL;DR: we should test separately python code execution (which may be 
 affected by threading) and import statements (which may be affected by 
 web2py custom_import and/or module weird behavior).



 On Sunday, 16 March 2014 08:47:13 UTC-5, horridohobbyist wrote:

 I've conducted a test with Flask.

 fred.py is the command line program.
 hello.py is the 

Re: [web2py] Re: Python Performance Issue

2014-03-17 Thread horridohobbyist
I bumped up the number of processors from 1 to 4. Here are the results:

Apache:Begin...
Apache:Elapsed time: 2.31899785995
Apache:Elapsed time: 6.31404495239
Apache:Percentage fill: 60.0
Apache:Begin...
Apache:Elapsed time: 0.274327039719
Apache:Elapsed time: 0.832695960999
Apache:Percentage fill: 60.0
Apache:Begin...
Apache:Elapsed time: 0.277992010117
Apache:Elapsed time: 0.875190019608
Apache:Percentage fill: 60.0
Apache:Begin...
Apache:Elapsed time: 0.284713983536
Apache:Elapsed time: 0.82108092308
Apache:Percentage fill: 60.0
Apache:Begin...
Apache:Elapsed time: 0.289800882339
Apache:Elapsed time: 0.850221157074
Apache:Percentage fill: 60.0
Apache:Begin...
Apache:Elapsed time: 0.287453889847
Apache:Elapsed time: 0.822550058365
Apache:Percentage fill: 60.0
Gunicorn:Begin...
Gunicorn:Elapsed time: 1.9300968647
Gunicorn:Elapsed time: 5.28614592552
Gunicorn:Percentage fill: 60.0
Gunicorn:Begin...
Gunicorn:Elapsed time: 0.315547943115
Gunicorn:Elapsed time: 0.944733142853
Gunicorn:Percentage fill: 60.0
Gunicorn:Begin...
Gunicorn:Elapsed time: 0.321009159088
Gunicorn:Elapsed time: 0.95100903511
Gunicorn:Percentage fill: 60.0
Gunicorn:Begin...
Gunicorn:Elapsed time: 0.310179948807
Gunicorn:Elapsed time: 0.930527925491
Gunicorn:Percentage fill: 60.0
Gunicorn:Begin...
Gunicorn:Elapsed time: 0.311529874802
Gunicorn:Elapsed time: 0.939922809601
Gunicorn:Percentage fill: 60.0
Gunicorn:Begin...
Gunicorn:Elapsed time: 0.308799028397
Gunicorn:Elapsed time: 0.932448863983
Gunicorn:Percentage fill: 60.0

WTF. Now, both Apache and Gunicorn are slow. *Equally slow!*

I am befuddled. I think I'll go get stinking drunk...


On Monday, 17 March 2014 11:58:07 UTC-4, Cliff Kachinske wrote:

 Apparently the number of cores is adjustable. Try this link.


 http://download.parallels.com/desktop/v5/docs/en/Parallels_Desktop_Users_Guide/23076.htm

 On Monday, March 17, 2014 10:02:13 AM UTC-4, horridohobbyist wrote:

 Parallels VM running on a 2.5GHz dual-core Mac mini. I really don't know 
 what Parallels uses.


 On Monday, 17 March 2014 00:05:58 UTC-4, Massimo Di Pierro wrote:

 What kind of VM is this? What is the host platform? How many CPU cores? 
 Is VM using all the cores? The only thing I can think of is the GIL and the 
 fact that multithreaded code in python gets slower and slower the more 
 cores I have. On my laptop, with two cores, I do not see any slow down. 
 Rocket preallocate a thread pool. The rationale is that it decreases the 
 latency time. Perhaps you can also try rocket in this way:

 web2py.py --minthreads=1 --maxthreads=1

 This will reduce the number of worker threads to 1. Rocket also runs a 
 background non-worker thread that monitors worker threads and kills them if 
 they get stuck.

 On Sunday, 16 March 2014 20:22:45 UTC-5, horridohobbyist wrote:

 Using gunicorn (Thanks, Massimo), I ran the full web2py Welcome code:

 Welcome: elapsed time: 0.0511929988861
 Welcome: elapsed time: 0.0024790763855
 Welcome: elapsed time: 0.00262713432312
 Welcome: elapsed time: 0.00224614143372
 Welcome: elapsed time: 0.00218415260315
 Welcome: elapsed time: 0.00213503837585

 Oddly enough, it's slightly faster! But still 37% slower than the 
 command line execution.

 I'd really, really, **really** like to know why the shipping code is 
 10x slower...


 On Sunday, 16 March 2014 21:13:56 UTC-4, horridohobbyist wrote:

 Okay, I did the calculations test in my Linux VM using command line 
 (fred0), Flask (hello0), and web2py (Welcome).

 fred0: elapsed time: 0.00159001350403

 fred0: elapsed time: 0.0015709400177

 fred0: elapsed time: 0.00156021118164

 fred0: elapsed time: 0.0015971660614

 fred0: elapsed time: 0.0031584741

 hello0: elapsed time: 0.00271105766296

 hello0: elapsed time: 0.00213503837585

 hello0: elapsed time: 0.00195693969727

 hello0: elapsed time: 0.00224900245667

 hello0: elapsed time: 0.00205492973328
 Welcome: elapsed time: 0.0484869480133

 Welcome: elapsed time: 0.00296783447266

 Welcome: elapsed time: 0.00293898582458

 Welcome: elapsed time: 0.00300216674805

 Welcome: elapsed time: 0.00312614440918

 The Welcome discrepancy is just under 2x, not nearly as bad as 10x in 
 my shipping code.


 On Sunday, 16 March 2014 17:52:00 UTC-4, Massimo Di Pierro wrote:

 In order to isolate the problem one must take it in steps. This is a 
 good test but you must first perform this test with the code you 
 proposed 
 before:

 def test():
     t = time.time
     start = t()
     x = 0.0
     for i in range(1,5000):
         x += (float(i+10)*(i+25)+175.0)/3.14
     debug("elapsed time: " + str(t()-start))
     return

 I would like to know the results about this test code first.

 The other code you are using performs an import:

 from shippackage import Package


 Now that is something that is very different in web2py and flask for 
 example. In web2py the import is executed at every request (although it 
 should be cached by Python) while in flask it is executed only once.  
 This 
 should 

Re: [web2py] Re: Python Performance Issue

2014-03-17 Thread horridohobbyist
I don't know if bumping up the number of processors from 1 to 4 makes 
sense. I have a dual-core Mac mini. The VM may be doing something funny.

I changed to 2 processors and we're back to the 10x performance 
discrepancy. So whether it's 1 or 2 processors makes very little difference.

Apache:Elapsed time: 2.27643203735
Apache:Elapsed time: 6.1853530407
Apache:Percentage fill: 60.0
Apache:Begin...
Apache:Elapsed time: 0.270731925964
Apache:Elapsed time: 0.80504989624
Apache:Percentage fill: 60.0
Apache:Begin...
Apache:Elapsed time: 0.292776823044
Apache:Elapsed time: 0.856013059616
Apache:Percentage fill: 60.0
Apache:Begin...
Apache:Elapsed time: 0.28355884552
Apache:Elapsed time: 0.832424879074
Apache:Percentage fill: 60.0
Apache:Begin...
Apache:Elapsed time: 0.310907125473
Apache:Elapsed time: 0.810643911362
Apache:Percentage fill: 60.0
Apache:Begin...
Apache:Elapsed time: 0.282160043716
Apache:Elapsed time: 0.809345960617
Apache:Percentage fill: 60.0
Gunicorn:Begin...
Gunicorn:Elapsed time: 0.0269491672516
Gunicorn:Elapsed time: 0.0727801322937
Gunicorn:Percentage fill: 60.0
Gunicorn:Begin...
Gunicorn:Elapsed time: 0.0269680023193
Gunicorn:Elapsed time: 0.0745708942413
Gunicorn:Percentage fill: 60.0
Gunicorn:Begin...
Gunicorn:Elapsed time: 0.0281398296356
Gunicorn:Elapsed time: 0.0747048854828
Gunicorn:Percentage fill: 60.0
Gunicorn:Begin...
Gunicorn:Elapsed time: 0.0501861572266
Gunicorn:Elapsed time: 0.0854380130768
Gunicorn:Percentage fill: 60.0
Gunicorn:Begin...
Gunicorn:Elapsed time: 0.0284719467163
Gunicorn:Elapsed time: 0.0778048038483
Gunicorn:Percentage fill: 60.0
Gunicorn:Begin...
Gunicorn:Elapsed time: 0.026153087616
Gunicorn:Elapsed time: 0.0714471340179
Gunicorn:Percentage fill: 60.0


On Monday, 17 March 2014 12:21:33 UTC-4, horridohobbyist wrote:

 I bumped up the number of processors from 1 to 4. Here are the results:

 Apache:Begin...
 Apache:Elapsed time: 2.31899785995
 Apache:Elapsed time: 6.31404495239
 Apache:Percentage fill: 60.0
 Apache:Begin...
 Apache:Elapsed time: 0.274327039719
 Apache:Elapsed time: 0.832695960999
 Apache:Percentage fill: 60.0
 Apache:Begin...
 Apache:Elapsed time: 0.277992010117
 Apache:Elapsed time: 0.875190019608
 Apache:Percentage fill: 60.0
 Apache:Begin...
 Apache:Elapsed time: 0.284713983536
 Apache:Elapsed time: 0.82108092308
 Apache:Percentage fill: 60.0
 Apache:Begin...
 Apache:Elapsed time: 0.289800882339
 Apache:Elapsed time: 0.850221157074
 Apache:Percentage fill: 60.0
 Apache:Begin...
 Apache:Elapsed time: 0.287453889847
 Apache:Elapsed time: 0.822550058365
 Apache:Percentage fill: 60.0
 Gunicorn:Begin...
 Gunicorn:Elapsed time: 1.9300968647
 Gunicorn:Elapsed time: 5.28614592552
 Gunicorn:Percentage fill: 60.0
 Gunicorn:Begin...
 Gunicorn:Elapsed time: 0.315547943115
 Gunicorn:Elapsed time: 0.944733142853
 Gunicorn:Percentage fill: 60.0
 Gunicorn:Begin...
 Gunicorn:Elapsed time: 0.321009159088
 Gunicorn:Elapsed time: 0.95100903511
 Gunicorn:Percentage fill: 60.0
 Gunicorn:Begin...
 Gunicorn:Elapsed time: 0.310179948807
 Gunicorn:Elapsed time: 0.930527925491
 Gunicorn:Percentage fill: 60.0
 Gunicorn:Begin...
 Gunicorn:Elapsed time: 0.311529874802
 Gunicorn:Elapsed time: 0.939922809601
 Gunicorn:Percentage fill: 60.0
 Gunicorn:Begin...
 Gunicorn:Elapsed time: 0.308799028397
 Gunicorn:Elapsed time: 0.932448863983
 Gunicorn:Percentage fill: 60.0

 WTF. Now, both Apache and Gunicorn are slow. *Equally slow!*

 I am befuddled. I think I'll go get stinking drunk...


 On Monday, 17 March 2014 11:58:07 UTC-4, Cliff Kachinske wrote:

 Apparently the number of cores is adjustable. Try this link.


 http://download.parallels.com/desktop/v5/docs/en/Parallels_Desktop_Users_Guide/23076.htm

 On Monday, March 17, 2014 10:02:13 AM UTC-4, horridohobbyist wrote:

 Parallels VM running on a 2.5GHz dual-core Mac mini. I really don't know 
 what Parallels uses.


 On Monday, 17 March 2014 00:05:58 UTC-4, Massimo Di Pierro wrote:

 What kind of VM is this? What is the host platform? How many CPU cores? 
 Is VM using all the cores? The only thing I can think of is the GIL and 
 the 
 fact that multithreaded code in python gets slower and slower the more 
 cores I have. On my laptop, with two cores, I do not see any slow down. 
 Rocket preallocate a thread pool. The rationale is that it decreases the 
 latency time. Perhaps you can also try rocket in this way:

 web2py.py --minthreads=1 --maxthreads=1

 This will reduce the number of worker threads to 1. Rocket also runs a 
 background non-worker thread that monitors worker threads and kills them 
 if 
 they get stuck.

 On Sunday, 16 March 2014 20:22:45 UTC-5, horridohobbyist wrote:

 Using gunicorn (Thanks, Massimo), I ran the full web2py Welcome code:

 Welcome: elapsed time: 0.0511929988861
 Welcome: elapsed time: 0.0024790763855
 Welcome: elapsed time: 0.00262713432312
 Welcome: elapsed time: 0.00224614143372
 Welcome: elapsed time: 0.00218415260315
 Welcome: elapsed time: 0.00213503837585

 Oddly enough, it's 

Re: [web2py] Re: Python Performance Issue

2014-03-17 Thread Tim Richardson



 There is no question that the fault lies with Apache.


Perhaps it is fairer to say the fault lies with mod_wsgi ?

What are the mod_wsgi settings in your apache config? 



Re: [web2py] Re: Python Performance Issue

2014-03-17 Thread horridohobbyist
How or where do I locate the mod_wsgi settings? (I am the furthest thing 
from being an Apache expert as you can find.)

Thanks.


On Monday, 17 March 2014 20:20:00 UTC-4, Tim Richardson wrote:



 There is no question that the fault lies with Apache.


 Perhaps it is fairer to say the fault lies with mod_wsgi ?

 What are the mod_wsgi settings in your apache config? 





Re: [web2py] Re: Python Performance Issue

2014-03-17 Thread Tim Richardson

(I am the furthest thing from being an Apache expert as you can find.)


Well, wherever that puts you, I'll be in shouting distance. 

I guess this means you are using defaults. The defaults are sensible for 
small loads, so I don't think you would get better performance from 
tweaking. These default settings should set you up with 15 threads running 
under one process, which for a small load should be optimal; that is, it's as 
good as it's going to get. You get these sensible defaults if you used the 
deployment script mentioned in the web2py book (the settings are in the 
/etc/apache2/sites-available/default file).

Threads are faster than processes, but gunicorn and nginx don't even use 
threads. They manage their workloads inside a single thread, which makes 
them fast as long as nothing CPU-intensive is happening.
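
To illustrate what handling the workload inside a single thread means, here is a 
small sketch using Python's asyncio (a modern stand-in for the evented model, not 
what gunicorn's default workers or nginx literally run): ten slow, I/O-bound 
requests finish in roughly the time of one, because the single thread interleaves 
them while they wait; a CPU-intensive request would block the loop and stall all 
of them.

# Illustration only: one thread, many concurrent "requests".
import asyncio
import time

async def handle_request(i):
    await asyncio.sleep(0.5)  # simulated slow I/O (slow client, database, ...)
    return "response %d" % i

async def main():
    start = time.time()
    results = await asyncio.gather(*(handle_request(i) for i in range(10)))
    print("%d responses in %.2fs on a single thread"
          % (len(results), time.time() - start))

asyncio.run(main())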


 Thanks.


 On Monday, 17 March 2014 20:20:00 UTC-4, Tim Richardson wrote:



 There is no question that the fault lies with Apache.


 Perhaps it is fairer to say the fault lies with mod_wsgi ?

 What are the mod_wsgi settings in your apache config? 





Re: [web2py] Re: Python Performance Issue

2014-03-17 Thread horridohobbyist
I don't know if this is relevant, but in apache2.conf, there is a 
MaxClients parameter for the prefork MPM and it's set to 150. This is the 
default.

I changed it to 15, but it made no difference in the test.


On Monday, 17 March 2014 21:15:12 UTC-4, Tim Richardson wrote:


 (I am the furthest thing from being an Apache expert as you can find.)


 Well, wherever that puts you, I'll be in shouting distance. 

 I guess this means you are using defaults. The defaults are sensible for 
 small loads, so I don't think you would get better performance from 
 tweaking. These default settings should set you up with 15 threads running 
 under one process, which for a small load should be optimal; that is, it's as 
 good as it's going to get. You get these sensible defaults if you used the 
 deployment script mentioned in the web2py book (the settings are in the 
 /etc/apache2/sites-available/default file).

 Threads are faster than processes, but gunicorn and nginx don't even use 
 threads. They manage their workloads inside a single thread, which makes 
 them fast as long as nothing CPU-intensive is happening.


 Thanks.


 On Monday, 17 March 2014 20:20:00 UTC-4, Tim Richardson wrote:



 There is no question that the fault lies with Apache.


 Perhaps it is fairer to say the fault lies with mod_wsgi ?

 What are the mod_wsgi settings in your apache config? 





Re: [web2py] Re: Python Performance Issue

2014-03-17 Thread horridohobbyist
I'm disturbed by the fact that the defaults are sensible. That suggests 
there is no way to improve the performance. A 2x-10x performance hit is 
very serious.

I was considering dropping Apache and going with nginx/gunicorn in my Linux 
server, but I'm not sure that's a good idea. Apache is a nearly universal 
web server and one cannot simply ignore it.

Also, I'm not sure I can duplicate the functionality in my current Apache 
setup in nginx/gunicorn.


On Monday, 17 March 2014 21:15:12 UTC-4, Tim Richardson wrote:


 (I am the furthest thing from being an Apache expert as you can find.)


 Well, wherever that puts you, I'll be in shouting distance. 

 I guess this means you are using defaults. The defaults are sensible for 
 small loads, so I don't think you would get better performance from 
 tweaking. These default settings should set you up with 15 threads running 
 under one process, which for a small load should be optimal; that is, it's as 
 good as it's going to get. You get these sensible defaults if you used the 
 deployment script mentioned in the web2py book (the settings are in the 
 /etc/apache2/sites-available/default file).

 Threads are faster than processes, but gunicorn and nginx don't even use 
 threads. They manage their workloads inside a single thread, which makes 
 them fast as long as nothing CPU-intensive is happening.


 Thanks.


 On Monday, 17 March 2014 20:20:00 UTC-4, Tim Richardson wrote:



 There is no question that the fault lies with Apache.


 Perhaps it is fairer to say the fault lies with mod_wsgi ?

 What are the mod_wsgi settings in your apache config? 





Re: [web2py] Re: Python Performance Issue

2014-03-17 Thread Tim Richardson
I use apache. I think that while your results are precise and interesting, the 
real-world experience of site visitors is very different. nginx met the 10K 
challenge, i.e. 10,000 simultaneous requests. That's the kind of load that 
gives Apache problems. But under lower loads, there are many other factors 
which influence performance from your visitors' point of view.


On Tuesday, 18 March 2014 13:19:11 UTC+11, horridohobbyist wrote:

 I'm disturbed by the fact that the defaults are sensible. That suggests 
 there is no way to improve the performance. A 2x-10x performance hit is 
 very serious.

 I was considering dropping Apache and going with nginx/gunicorn in my 
 Linux server, but I'm not sure that's a good idea. Apache is a nearly 
 universal web server and one cannot simply ignore it.

 Also, I'm not sure I can duplicate the functionality in my current Apache 
 setup in nginx/gunicorn.


 On Monday, 17 March 2014 21:15:12 UTC-4, Tim Richardson wrote:


 (I am the furthest thing from being an Apache expert as you can find.)


 Well, wherever that puts you, I'll be in shouting distance. 

 I guess this means you are using defaults. The defaults are sensible for 
 small loads, so I don't think you would get better performance from 
 tweaking. These default settings should set you up with 15 threads running 
 under one process, which for a small load should be optimal; that is, it's as 
 good as it's going to get. You get these sensible defaults if you used the 
 deployment script mentioned in the web2py book (the settings are in the 
 /etc/apache2/sites-available/default file).

 Threads are faster than processes, but gunicorn and nginx don't even use 
 threads. They manage their workloads inside a single thread, which makes 
 them fast as long as nothing CPU-intensive is happening.


 Thanks.


 On Monday, 17 March 2014 20:20:00 UTC-4, Tim Richardson wrote:



 There is no question that the fault lies with Apache.


 Perhaps it is fairer to say the fault lies with mod_wsgi ?

 What are the mod_wsgi settings in your apache config? 





Re: [web2py] Re: Python Performance Issue

2014-03-16 Thread Tim Richardson
Apache on Linux can run WSGI in multi-process mode as well as multi-threaded 
mode, according to the docs. This would eliminate the GIL as a factor.
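
A minimal sketch of why separate processes sidestep the GIL for CPU-bound work 
(illustration only, not how mod_wsgi itself is implemented): the same kind of 
arithmetic loop, run twice serially and then in a two-process pool, scales with 
the number of cores in the process case.

# Illustration only: CPU-bound work split across processes can use both cores,
# because each worker process has its own interpreter and its own GIL.
import time
from multiprocessing import Pool

def burn(n):
    x = 0.0
    for i in range(n):
        x += (float(i + 10) * (i + 25) + 175.0) / 3.14
    return x

if __name__ == "__main__":
    N = 2000000

    start = time.time()
    burn(N)
    burn(N)
    print("serial:      %.3fs" % (time.time() - start))

    start = time.time()
    with Pool(processes=2) as pool:
        pool.map(burn, [N, N])
    print("2 processes: %.3fs" % (time.time() - start))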



Re: [web2py] Re: Python Performance Issue

2014-03-16 Thread horridohobbyist
I've conducted a test with Flask.

fred.py is the command line program.
hello.py is the Flask program.
default.py is the Welcome controller.
testdata.txt is the test data.
shippackage.py is a required module.

fred.py:
0.024 second
0.067 second

hello.py:
0.029 second
0.073 second

default.py:
0.27 second
0.78 second

The Flask program is slightly slower than the command line. However, the 
Welcome app is about 10x slower!

*Web2py is much, much slower than Flask.*

I conducted the test in a Parallels VM running Ubuntu Server 12.04 (1GB 
memory allocated). I have a 2.5GHz dual-core Mac mini with 8GB.


I can't quite figure out how to use gunicorn.


On Saturday, 15 March 2014 23:41:49 UTC-4, horridohobbyist wrote:

 I'll see what I can do. It will take time for me to learn how to use 
 another framework.

 As for trying a different web server, my (production) Linux server is 
 intimately reliant on Apache. I'd have to learn how to use another web 
 server, and then try it in my Linux VM.


 On Saturday, 15 March 2014 22:45:27 UTC-4, Anthony wrote:

 Are you able to replicate the exact task in another web framework, such 
 as Flask (with the same server setup)?

 On Saturday, March 15, 2014 10:34:56 PM UTC-4, horridohobbyist wrote:

 Well, putting back all my apps hasn't widened the discrepancy. So I 
 don't know why my previous web2py installation was so slow.

 While the Welcome app with the calculations test shows a 2x discrepancy, 
 the original app that initiated this thread now shows a 13x discrepancy 
 instead of 100x. That's certainly an improvement, but it's still too slow.

 The size of the discrepancy depends on the code that is executed. 
 Clearly, what I'm doing in the original app (performing permutations) is 
 more demanding than mere arithmetical operations. Hence, 13x vs 2x.

 I anxiously await any resolution to this performance issue, whether it 
 be in WSGI or in web2py. I'll check in on this thread periodically...


 On Saturday, 15 March 2014 16:19:12 UTC-4, horridohobbyist wrote:

 Interestingly, now that I've got a fresh install of web2py with only 
 the Welcome app, my Welcome vs command line test shows a consistent 2x 
 discrepancy, just as you had observed.

 My next step is to gradually add back all the other apps I had in 
 web2py (I had 8 of them!) and see whether the discrepancy grows with the 
 number of apps. That's the theory I'm working on.

 Yes, yes, I know, according to the Book, I shouldn't have so many apps 
 installed in web2py. This apparently affects performance. But the truth 
 is, 
 most of those apps are hardly ever executed, so their existence merely 
 represents a static overhead in web2py. In my mind, this shouldn't widen 
 the discrepancy, but you never know.


 On Saturday, 15 March 2014 11:19:06 UTC-4, Niphlod wrote:

  @mcm: you got me worried. Your test function was clocking a hell of a lot 
  lower than the original script. But then I found out why; one order of 
  magnitude less (5000 vs 5). Once that was corrected, you got the exact 
  same clock times as my app (i.e. the function directly in the controller). I 
  also stripped out the logging part, making the app just return the result, 
  and no visible changes to the timings happened.

 @hh: glad at least we got some grounds to hold on. 
 @mariano: compiled or not, it doesn't seem to change the mean. a 
 compiled app has just lower variance. 

  @all: jlundell definitely hit on something. Times are much lower 
  when threads are 1.

 BTW: if I change originalscript.py to 

  # -*- coding: utf-8 -*-
  import time
  import threading

  def test():
      start = time.time()
      x = 0.0
      for i in range(1,5):
          x += (float(i+10)*(i+25)+175.0)/3.14
      res = str(time.time()-start)
      print "elapsed time: " + res + '\n'

  if __name__ == '__main__':
      t = threading.Thread(target=test)
      t.start()
      t.join()

 I'm getting really close timings to wsgi environment, 1 thread only 
 tests, i.e. 
 0.23 min, 0.26 max, ~0.24 mean



[attachment excerpts, truncated by the archive]

testdata.txt (sample rows):
430x300x200 430x300x200 400x370x330 390x285x140 585x285x200
430x300x200 400x370x330 553x261x152 290x210x160 390x285x140

default.py (start of the controller):
import time
import sys
import os

debug_path = os.path.join(request.folder, 'static/debug.out')

def debug(str):
    f = open(debug_path, 'a')
    f.write(str + '\n')
    f.close()
    return

#
# pyShipping 1.8a
#
import time
import random
from shippackage import Package

def packstrip(bin, p):
    """Creates a Strip which fits into bin.

    Returns the Packages to be used in the strip, the dimensions of

Re: [web2py] Re: Python Performance Issue

2014-03-16 Thread Michele Comitini
gunicorn instructions:

$ pip install gunicorn
$ cd <root dir of web2py>
$ gunicorn -w 4 gluon.main:wsgibase



2014-03-16 14:47 GMT+01:00 horridohobbyist horrido.hobb...@gmail.com:
 I've conducted a test with Flask.

 fred.py is the command line program.
 hello.py is the Flask program.
 default.py is the Welcome controller.
 testdata.txt is the test data.
 shippackage.py is a required module.

 fred.py:
 0.024 second
 0.067 second

 hello.py:
 0.029 second
 0.073 second

 default.py:
 0.27 second
 0.78 second

 The Flask program is slightly slower than the command line. However, the
 Welcome app is about 10x slower!

 Web2py is much, much slower than Flask.

 I conducted the test in a Parallels VM running Ubuntu Server 12.04 (1GB
 memory allocated). I have a 2.5GHz dual-core Mac mini with 8GB.


 I can't quite figure out how to use gunicom.


 On Saturday, 15 March 2014 23:41:49 UTC-4, horridohobbyist wrote:

 I'll see what I can do. It will take time for me to learn how to use
 another framework.

 As for trying a different web server, my (production) Linux server is
 intimately reliant on Apache. I'd have to learn how to use another web
 server, and then try it in my Linux VM.


 On Saturday, 15 March 2014 22:45:27 UTC-4, Anthony wrote:

 Are you able to replicate the exact task in another web framework, such
 as Flask (with the same server setup)?

 On Saturday, March 15, 2014 10:34:56 PM UTC-4, horridohobbyist wrote:

 Well, putting back all my apps hasn't widened the discrepancy. So I
 don't know why my previous web2py installation was so slow.

 While the Welcome app with the calculations test shows a 2x discrepancy,
 the original app that initiated this thread now shows a 13x discrepancy
 instead of 100x. That's certainly an improvement, but it's still too slow.

 The size of the discrepancy depends on the code that is executed.
 Clearly, what I'm doing in the original app (performing permutations) is
 more demanding than mere arithmetical operations. Hence, 13x vs 2x.

 I anxiously await any resolution to this performance issue, whether it
 be in WSGI or in web2py. I'll check in on this thread periodically...


 On Saturday, 15 March 2014 16:19:12 UTC-4, horridohobbyist wrote:

 Interestingly, now that I've got a fresh install of web2py with only
 the Welcome app, my Welcome vs command line test shows a consistent 2x
 discrepancy, just as you had observed.

 My next step is to gradually add back all the other apps I had in
 web2py (I had 8 of them!) and see whether the discrepancy grows with the
 number of apps. That's the theory I'm working on.

 Yes, yes, I know, according to the Book, I shouldn't have so many apps
 installed in web2py. This apparently affects performance. But the truth 
 is,
 most of those apps are hardly ever executed, so their existence merely
 represents a static overhead in web2py. In my mind, this shouldn't widen 
 the
 discrepancy, but you never know.


 On Saturday, 15 March 2014 11:19:06 UTC-4, Niphlod wrote:

 @mcm: you got me worried. Your test function was clocking a hell lower
 than the original script. But then I found out why; one order of 
 magnitude
 less (5000 vs 5). Once that was corrected, you got the exact same 
 clock
 times as my app (i.e. function directly in the controller). I also
 stripped out the logging part making the app just return the result and 
 no
 visible changes to the timings happened.

 @hh: glad at least we got some grounds to hold on.
 @mariano: compiled or not, it doesn't seem to change the mean. a
 compiled app has just lower variance.

 @all: jlundell definitively hit something. Times are much more lower
 when threads are 1.

 BTW: if I change originalscript.py to

 # -*- coding: utf-8 -*-
 import time
 import threading

 def test():
 start = time.time()
 x = 0.0
 for i in range(1,5):
 x += (float(i+10)*(i+25)+175.0)/3.14
 res = str(time.time()-start)
 print 'elapsed time: ' + res + '\n'

 if __name__ == '__main__':
 t = threading.Thread(target=test)
 t.start()
 t.join()

 I'm getting really close timings to wsgi environment, 1 thread only
 tests, i.e.
 0.23 min, 0.26 max, ~0.24 mean



Re: [web2py] Re: Python Performance Issue

2014-03-16 Thread horridohobbyist
Well, I managed to get *gunicorn* working in a roundabout way. Here are my 
findings for the fred.py/hello.py test:

Elapsed time: 0.028
Elapsed time: 0.068

Basically, it's as fast as the command line test!

I'm not sure this tells us much. Is it Apache's fault? Is it web2py's 
fault? The test is run without the full web2py scaffolding. I don't know 
how to run web2py on gunicorn, unless someone can tell me.


On Sunday, 16 March 2014 16:21:00 UTC-4, Michele Comitini wrote:

 gunicorn instructions: 

 $ pip install gunicorn 
 $ cd root dir of web2py 
 $ gunicorn -w 4 gluon.main:wsgibase 



 2014-03-16 14:47 GMT+01:00 horridohobbyist 
 horrido...@gmail.comjavascript:: 

  I've conducted a test with Flask. 
  
  fred.py is the command line program. 
  hello.py is the Flask program. 
  default.py is the Welcome controller. 
  testdata.txt is the test data. 
  shippackage.py is a required module. 
  
  fred.py: 
  0.024 second 
  0.067 second 
  
  hello.py: 
  0.029 second 
  0.073 second 
  
  default.py: 
  0.27 second 
  0.78 second 
  
  The Flask program is slightly slower than the command line. However, the 
  Welcome app is about 10x slower! 
  
  Web2py is much, much slower than Flask. 
  
  I conducted the test in a Parallels VM running Ubuntu Server 12.04 (1GB 
  memory allocated). I have a 2.5GHz dual-core Mac mini with 8GB. 
  
  
  I can't quite figure out how to use gunicom. 
  
  
  On Saturday, 15 March 2014 23:41:49 UTC-4, horridohobbyist wrote: 
  
  I'll see what I can do. It will take time for me to learn how to use 
  another framework. 
  
  As for trying a different web server, my (production) Linux server is 
  intimately reliant on Apache. I'd have to learn how to use another web 
  server, and then try it in my Linux VM. 
  
  
  On Saturday, 15 March 2014 22:45:27 UTC-4, Anthony wrote: 
  
  Are you able to replicate the exact task in another web framework, 
 such 
  as Flask (with the same server setup)? 
  
  On Saturday, March 15, 2014 10:34:56 PM UTC-4, horridohobbyist wrote: 
  
  Well, putting back all my apps hasn't widened the discrepancy. So I 
  don't know why my previous web2py installation was so slow. 
  
  While the Welcome app with the calculations test shows a 2x 
 discrepancy, 
  the original app that initiated this thread now shows a 13x 
 discrepancy 
  instead of 100x. That's certainly an improvement, but it's still too 
 slow. 
  
  The size of the discrepancy depends on the code that is executed. 
  Clearly, what I'm doing in the original app (performing permutations) 
 is 
  more demanding than mere arithmetical operations. Hence, 13x vs 2x. 
  
  I anxiously await any resolution to this performance issue, whether 
 it 
  be in WSGI or in web2py. I'll check in on this thread periodically... 
  
  
  On Saturday, 15 March 2014 16:19:12 UTC-4, horridohobbyist wrote: 
  
  Interestingly, now that I've got a fresh install of web2py with only 
  the Welcome app, my Welcome vs command line test shows a consistent 
 2x 
  discrepancy, just as you had observed. 
  
  My next step is to gradually add back all the other apps I had in 
  web2py (I had 8 of them!) and see whether the discrepancy grows with 
 the 
  number of apps. That's the theory I'm working on. 
  
  Yes, yes, I know, according to the Book, I shouldn't have so many 
 apps 
  installed in web2py. This apparently affects performance. But the 
 truth is, 
  most of those apps are hardly ever executed, so their existence 
 merely 
  represents a static overhead in web2py. In my mind, this shouldn't 
 widen the 
  discrepancy, but you never know. 
  
  
  On Saturday, 15 March 2014 11:19:06 UTC-4, Niphlod wrote: 
  
  @mcm: you got me worried. Your test function was clocking a hell 
 lower 
  than the original script. But then I found out why; one order of 
 magnitude 
  less (5000 vs 5). Once that was corrected, you got the exact 
 same clock 
  times as my app (i.e. function directly in the controller). I 
 also 
  stripped out the logging part making the app just return the result 
 and no 
  visible changes to the timings happened. 
  
  @hh: glad at least we got some grounds to hold on. 
  @mariano: compiled or not, it doesn't seem to change the mean. a 
  compiled app has just lower variance. 
  
  @all: jlundell definitively hit something. Times are much more 
 lower 
  when threads are 1. 
  
  BTW: if I change originalscript.py to 
  
  # -*- coding: utf-8 -*- 
  import time 
  import threading 
  
  def test(): 
  start = time.time() 
  x = 0.0 
  for i in range(1,5): 
  x += (float(i+10)*(i+25)+175.0)/3.14 
  res = str(time.time()-start) 
  print 'elapsed time: ' + res + '\n'
  
  if __name__ == '__main__': 
  t = threading.Thread(target=test) 
  t.start() 
  t.join() 
  
  I'm getting really close timings to wsgi environment, 1 thread 
 only 
  tests, i.e. 
  0.23 min, 0.26 max, ~0.24 mean 
  

Re: [web2py] Re: Python Performance Issue

2014-03-16 Thread Michele Comitini
You basically need to cd into the directory where you have unzipped
web2py.  Then run gunicorn like the following:
gunicorn -w 4 gluon.main:wsgibase


There you have web2py reachable on http://localhost:8000

Which part does not work for you?
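
If gunicorn complains that it cannot find the application, a quick sanity check
(just a sketch) is to confirm that the import gunicorn performs actually works
from the directory you launched it in:

# run from the web2py root; if this import fails here,
# 'gunicorn gluon.main:wsgibase' will fail the same way
import gluon.main
print gluon.main.wsgibase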

2014-03-16 21:31 GMT+01:00 horridohobbyist horrido.hobb...@gmail.com:
 Well, I managed to get gunicorn working in a roundabout way. Here are my
 findings for the fred.py/hello.py test:

 Elapsed time: 0.028
 Elapsed time: 0.068

 Basically, it's as fast as the command line test!

 I'm not sure this tells us much. Is it Apache's fault? Is it web2py's fault?
 The test is run without the full web2py scaffolding. I don't know how to run
 web2py on gunicorn, unless someone can tell me.


 On Sunday, 16 March 2014 16:21:00 UTC-4, Michele Comitini wrote:

 gunicorn instructions:

 $ pip install gunicorn
 $ cd root dir of web2py
 $ gunicorn -w 4 gluon.main:wsgibase



 2014-03-16 14:47 GMT+01:00 horridohobbyist horrido...@gmail.com:
  I've conducted a test with Flask.
 
  fred.py is the command line program.
  hello.py is the Flask program.
  default.py is the Welcome controller.
  testdata.txt is the test data.
  shippackage.py is a required module.
 
  fred.py:
  0.024 second
  0.067 second
 
  hello.py:
  0.029 second
  0.073 second
 
  default.py:
  0.27 second
  0.78 second
 
  The Flask program is slightly slower than the command line. However, the
  Welcome app is about 10x slower!
 
  Web2py is much, much slower than Flask.
 
  I conducted the test in a Parallels VM running Ubuntu Server 12.04 (1GB
  memory allocated). I have a 2.5GHz dual-core Mac mini with 8GB.
 
 
  I can't quite figure out how to use gunicom.
 
 
  On Saturday, 15 March 2014 23:41:49 UTC-4, horridohobbyist wrote:
 
  I'll see what I can do. It will take time for me to learn how to use
  another framework.
 
  As for trying a different web server, my (production) Linux server is
  intimately reliant on Apache. I'd have to learn how to use another web
  server, and then try it in my Linux VM.
 
 
  On Saturday, 15 March 2014 22:45:27 UTC-4, Anthony wrote:
 
  Are you able to replicate the exact task in another web framework,
  such
  as Flask (with the same server setup)?
 
  On Saturday, March 15, 2014 10:34:56 PM UTC-4, horridohobbyist wrote:
 
  Well, putting back all my apps hasn't widened the discrepancy. So I
  don't know why my previous web2py installation was so slow.
 
  While the Welcome app with the calculations test shows a 2x
  discrepancy,
  the original app that initiated this thread now shows a 13x
  discrepancy
  instead of 100x. That's certainly an improvement, but it's still too
  slow.
 
  The size of the discrepancy depends on the code that is executed.
  Clearly, what I'm doing in the original app (performing permutations)
  is
  more demanding than mere arithmetical operations. Hence, 13x vs 2x.
 
  I anxiously await any resolution to this performance issue, whether
  it
  be in WSGI or in web2py. I'll check in on this thread periodically...
 
 
  On Saturday, 15 March 2014 16:19:12 UTC-4, horridohobbyist wrote:
 
  Interestingly, now that I've got a fresh install of web2py with only
  the Welcome app, my Welcome vs command line test shows a consistent
  2x
  discrepancy, just as you had observed.
 
  My next step is to gradually add back all the other apps I had in
  web2py (I had 8 of them!) and see whether the discrepancy grows with
  the
  number of apps. That's the theory I'm working on.
 
  Yes, yes, I know, according to the Book, I shouldn't have so many
  apps
  installed in web2py. This apparently affects performance. But the
  truth is,
  most of those apps are hardly ever executed, so their existence
  merely
  represents a static overhead in web2py. In my mind, this shouldn't
  widen the
  discrepancy, but you never know.
 
 
  On Saturday, 15 March 2014 11:19:06 UTC-4, Niphlod wrote:
 
  @mcm: you got me worried. Your test function was clocking a hell
  lower
  than the original script. But then I found out why; one order of
  magnitude
  less (5000 vs 5). Once that was corrected, you got the exact
  same clock
  times as my app (i.e. function directly in the controller). I
  also
  stripped out the logging part making the app just return the result
  and no
  visible changes to the timings happened.
 
  @hh: glad at least we got some grounds to hold on.
  @mariano: compiled or not, it doesn't seem to change the mean. a
  compiled app has just lower variance.
 
  @all: jlundell definitively hit something. Times are much more
  lower
  when threads are 1.
 
  BTW: if I change originalscript.py to
 
  # -*- coding: utf-8 -*-
  import time
  import threading
 
  def test():
  start = time.time()
  x = 0.0
  for i in range(1,5):
  x += (float(i+10)*(i+25)+175.0)/3.14
  res = str(time.time()-start)
  print 'elapsed time: ' + res + '\n'
 
  if __name__ == '__main__':
  t = threading.Thread(target=test)
  t.start()
  

Re: [web2py] Re: Python Performance Issue

2014-03-16 Thread Jonathan Lundell
On 16 Mar 2014, at 1:31 PM, horridohobbyist horrido.hobb...@gmail.com wrote:
 Well, I managed to get gunicorn working in a roundabout way. Here are my 
 findings for the fred.py/hello.py test:
 
 Elapsed time: 0.028
 Elapsed time: 0.068
 
 Basically, it's as fast as the command line test!
 
 I'm not sure this tells us much. Is it Apache's fault? Is it web2py's fault? 
 The test is run without the full web2py scaffolding. I don't know how to run 
 web2py on gunicorn, unless someone can tell me.
 

The point of gunicorn (in this context) is to run requests on separate worker 
processes, one thread per process. It tends to confirm the idea that the 
underlying problem is related to having other outstanding Python threads in the 
process that's running your request, as is typical with Apache (and Rocket).

You're almost certainly seeing the effect of the GIL. Have a look at slide 2-7: 
http://www.dabeaz.com/python/NewGIL.pdf
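
If you want to see the effect outside of any web server, here is a minimal
sketch (CPython 2.x; the iteration count is arbitrary) that times the same kind
of arithmetic loop on its own and then with a competing pure-Python thread
fighting for the GIL:

# -*- coding: utf-8 -*-
import time
import threading

def work():
    x = 0.0
    for i in range(1, 5000000):   # arbitrary iteration count
        x += (float(i + 10) * (i + 25) + 175.0) / 3.14

def timed(label):
    start = time.time()
    work()
    print label + ': ' + str(time.time() - start)

timed('no contention')

stop = threading.Event()

def spin():
    # pure-Python busy loop; it competes for the GIL with work()
    while not stop.is_set():
        pass

t = threading.Thread(target=spin)
t.start()
timed('with busy background thread')
stop.set()
t.join()

On a multi-core box the second timing is typically much worse, which is the
same pattern as the Rocket numbers earlier in this thread.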



Re: [web2py] Re: Python Performance Issue

2014-03-16 Thread Massimo Di Pierro
In order to isolate the problem one must take it in steps. This is a good 
test but you must first perform this test with the code you proposed before:

def test():
    t = time.time
    start = t()
    x = 0.0
    for i in range(1,5000):
        x += (float(i+10)*(i+25)+175.0)/3.14
    debug('elapsed time: ' + str(t()-start))
    return

I would like to know the results of this test code first.

The other code you are using performs an import:

from shippackage import Package


Now that is something that is very different in web2py and Flask, for example. 
In web2py the import is executed at every request (although it should be cached 
by Python), while in Flask it is executed only once. This should also not cause 
a performance difference, but it is a different test than the one above.

TL;DR: we should separately test Python code execution (which may be affected 
by threading) and import statements (which may be affected by web2py's 
custom_import and/or weird module behavior).
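
As a concrete way to keep the two measurements apart, something along these
lines in the controller would do (a sketch only; the return values stand in for
a view, and shippackage is the module already used in this thread):

import time

def test_loop():
    # only the arithmetic, nothing but the stdlib
    t = time.time
    start = t()
    x = 0.0
    for i in range(1, 5000):
        x += (float(i + 10) * (i + 25) + 175.0) / 3.14
    return 'loop: ' + str(t() - start)

def test_import():
    # only the import, which goes through web2py's custom importer on each request
    t = time.time
    start = t()
    from shippackage import Package
    return 'import: ' + str(t() - start)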



On Sunday, 16 March 2014 08:47:13 UTC-5, horridohobbyist wrote:

 I've conducted a test with Flask.

 fred.py is the command line program.
 hello.py is the Flask program.
 default.py is the Welcome controller.
 testdata.txt is the test data.
 shippackage.py is a required module.

 fred.py:
 0.024 second
 0.067 second

 hello.py:
 0.029 second
 0.073 second

 default.py:
 0.27 second
 0.78 second

 The Flask program is slightly slower than the command line. However, the 
 Welcome app is about 10x slower!

 *Web2py is much, much slower than Flask.*

 I conducted the test in a Parallels VM running Ubuntu Server 12.04 (1GB 
 memory allocated). I have a 2.5GHz dual-core Mac mini with 8GB.


 I can't quite figure out how to use gunicom.


 On Saturday, 15 March 2014 23:41:49 UTC-4, horridohobbyist wrote:

 I'll see what I can do. It will take time for me to learn how to use 
 another framework.

 As for trying a different web server, my (production) Linux server is 
 intimately reliant on Apache. I'd have to learn how to use another web 
 server, and then try it in my Linux VM.


 On Saturday, 15 March 2014 22:45:27 UTC-4, Anthony wrote:

 Are you able to replicate the exact task in another web framework, such 
 as Flask (with the same server setup)?

 On Saturday, March 15, 2014 10:34:56 PM UTC-4, horridohobbyist wrote:

 Well, putting back all my apps hasn't widened the discrepancy. So I 
 don't know why my previous web2py installation was so slow.

 While the Welcome app with the calculations test shows a 2x 
 discrepancy, the original app that initiated this thread now shows a 13x 
 discrepancy instead of 100x. That's certainly an improvement, but it's 
 still too slow.

 The size of the discrepancy depends on the code that is executed. 
 Clearly, what I'm doing in the original app (performing permutations) is 
 more demanding than mere arithmetical operations. Hence, 13x vs 2x.

 I anxiously await any resolution to this performance issue, whether it 
 be in WSGI or in web2py. I'll check in on this thread periodically...


 On Saturday, 15 March 2014 16:19:12 UTC-4, horridohobbyist wrote:

 Interestingly, now that I've got a fresh install of web2py with only 
 the Welcome app, my Welcome vs command line test shows a consistent 2x 
 discrepancy, just as you had observed.

 My next step is to gradually add back all the other apps I had in 
 web2py (I had 8 of them!) and see whether the discrepancy grows with the 
 number of apps. That's the theory I'm working on.

 Yes, yes, I know, according to the Book, I shouldn't have so many apps 
 installed in web2py. This apparently affects performance. But the truth 
 is, 
 most of those apps are hardly ever executed, so their existence merely 
 represents a static overhead in web2py. In my mind, this shouldn't widen 
 the discrepancy, but you never know.


 On Saturday, 15 March 2014 11:19:06 UTC-4, Niphlod wrote:

 @mcm: you got me worried. Your test function was clocking a hell 
 lower than the original script. But then I found out why; one order of 
 magnitude less (5000 vs 5). Once that was corrected, you got the 
 exact 
 same clock times as my app (i.e. function directly in the controller). 
 I 
 also stripped out the logging part making the app just return the result 
 and no visible changes to the timings happened.

 @hh: glad at least we got some grounds to hold on. 
 @mariano: compiled or not, it doesn't seem to change the mean. a 
 compiled app has just lower variance. 

 @all: jlundell definitively hit something. Times are much more lower 
 when threads are 1.

 BTW: if I change originalscript.py to 

 # -*- coding: utf-8 -*-
 import time
 import threading

 def test():
 start = time.time()
 x = 0.0
 for i in range(1,5):
 x += (float(i+10)*(i+25)+175.0)/3.14
 res = str(time.time()-start)
 print 'elapsed time: ' + res + '\n'

 if __name__ == '__main__':
 t = threading.Thread(target=test)
 t.start()
 t.join()

 I'm getting really 

Re: [web2py] Re: Python Performance Issue

2014-03-16 Thread horridohobbyist
Failed to find application: 'gluon.main'
2014-03-15 02:23:51 [22339] [INFO] Worker exiting (pid: 22339)
...
Traceback (most recent call last):
  File "/usr/local/bin/gunicorn", line 9, in <module>
    load_entry_point('gunicorn==18.0', 'console_scripts', 'gunicorn')()
...
gunicorn.errors.HaltServer: HaltServer 'App failed to load.' 4



On Sunday, 16 March 2014 16:39:28 UTC-4, Michele Comitini wrote:

 You basically need to cd into the directory where you have unzipped 
 web2py.  Then run gunicorn like the following: 
 gunicorn -w 4 gluon.main:wsgibase 


 There you have web2py reachable on http://localhost:8000 

 Which part does not work for you? 

 2014-03-16 21:31 GMT+01:00 horridohobbyist 
 horrido...@gmail.comjavascript:: 

  Well, I managed to get gunicorn working in a roundabout way. Here are my 
  findings for the fred.py/hello.py test: 
  
  Elapsed time: 0.028 
  Elapsed time: 0.068 
  
  Basically, it's as fast as the command line test! 
  
  I'm not sure this tells us much. Is it Apache's fault? Is it web2py's 
 fault? 
  The test is run without the full web2py scaffolding. I don't know how to 
 run 
  web2py on gunicorn, unless someone can tell me. 
  
  
  On Sunday, 16 March 2014 16:21:00 UTC-4, Michele Comitini wrote: 
  
  gunicorn instructions: 
  
  $ pip install gunicorn 
  $ cd root dir of web2py 
  $ gunicorn -w 4 gluon.main:wsgibase 
  
  
  
  2014-03-16 14:47 GMT+01:00 horridohobbyist horrido...@gmail.com: 
   I've conducted a test with Flask. 
   
   fred.py is the command line program. 
   hello.py is the Flask program. 
   default.py is the Welcome controller. 
   testdata.txt is the test data. 
   shippackage.py is a required module. 
   
   fred.py: 
   0.024 second 
   0.067 second 
   
   hello.py: 
   0.029 second 
   0.073 second 
   
   default.py: 
   0.27 second 
   0.78 second 
   
   The Flask program is slightly slower than the command line. However, 
 the 
   Welcome app is about 10x slower! 
   
   Web2py is much, much slower than Flask. 
   
   I conducted the test in a Parallels VM running Ubuntu Server 12.04 
 (1GB 
   memory allocated). I have a 2.5GHz dual-core Mac mini with 8GB. 
   
   
   I can't quite figure out how to use gunicom. 
   
   
   On Saturday, 15 March 2014 23:41:49 UTC-4, horridohobbyist wrote: 
   
   I'll see what I can do. It will take time for me to learn how to use 
   another framework. 
   
   As for trying a different web server, my (production) Linux server 
 is 
   intimately reliant on Apache. I'd have to learn how to use another 
 web 
   server, and then try it in my Linux VM. 
   
   
   On Saturday, 15 March 2014 22:45:27 UTC-4, Anthony wrote: 
   
   Are you able to replicate the exact task in another web framework, 
   such 
   as Flask (with the same server setup)? 
   
   On Saturday, March 15, 2014 10:34:56 PM UTC-4, horridohobbyist 
 wrote: 
   
   Well, putting back all my apps hasn't widened the discrepancy. So 
 I 
   don't know why my previous web2py installation was so slow. 
   
   While the Welcome app with the calculations test shows a 2x 
   discrepancy, 
   the original app that initiated this thread now shows a 13x 
   discrepancy 
   instead of 100x. That's certainly an improvement, but it's still 
 too 
   slow. 
   
   The size of the discrepancy depends on the code that is executed. 
   Clearly, what I'm doing in the original app (performing 
 permutations) 
   is 
   more demanding than mere arithmetical operations. Hence, 13x vs 
 2x. 
   
   I anxiously await any resolution to this performance issue, 
 whether 
   it 
   be in WSGI or in web2py. I'll check in on this thread 
 periodically... 
   
   
   On Saturday, 15 March 2014 16:19:12 UTC-4, horridohobbyist wrote: 
   
   Interestingly, now that I've got a fresh install of web2py with 
 only 
   the Welcome app, my Welcome vs command line test shows a 
 consistent 
   2x 
   discrepancy, just as you had observed. 
   
   My next step is to gradually add back all the other apps I had in 
   web2py (I had 8 of them!) and see whether the discrepancy grows 
 with 
   the 
   number of apps. That's the theory I'm working on. 
   
   Yes, yes, I know, according to the Book, I shouldn't have so many 
   apps 
   installed in web2py. This apparently affects performance. But the 
   truth is, 
   most of those apps are hardly ever executed, so their existence 
   merely 
   represents a static overhead in web2py. In my mind, this 
 shouldn't 
   widen the 
   discrepancy, but you never know. 
   
   
   On Saturday, 15 March 2014 11:19:06 UTC-4, Niphlod wrote: 
   
   @mcm: you got me worried. Your test function was clocking a hell 
   lower 
   than the original script. But then I found out why; one order of 
   magnitude 
   less (5000 vs 5). Once that was corrected, you got the exact 
   same clock 
   times as my app (i.e. function directly in the controller). I 
   also 
   stripped out the logging part making the app just return the 
 result 
   and 

Re: [web2py] Re: Python Performance Issue

2014-03-16 Thread Massimo Di Pierro
web2py comes with anyserver.py

you just do:

python anyserver.py -H

for help. One of the command line options is to run with gunicorn. You can 
try tornado, and any other server out there.

On Sunday, 16 March 2014 17:09:40 UTC-5, horridohobbyist wrote:

 Failed to find application: 'gluon.main'
 2014-03-15 02:23:51 [22339] [INFO] Worker exiting (pid: 22339)
 ...
 Traceback (most recent call last):
   File "/usr/local/bin/gunicorn", line 9, in <module>
     load_entry_point('gunicorn==18.0', 'console_scripts', 'gunicorn')()
 ...
 gunicorn.errors.HaltServer: HaltServer 'App failed to load.' 4



 On Sunday, 16 March 2014 16:39:28 UTC-4, Michele Comitini wrote:

 You basically need to cd into the directory where you have unzipped 
 web2py.  Then run gunicorn like the following: 
 gunicorn -w 4 gluon.main:wsgibase 


 There you have web2py reachable on http://localhost:8000 

 Which part does not work for you? 

 2014-03-16 21:31 GMT+01:00 horridohobbyist horrido...@gmail.com: 
  Well, I managed to get gunicorn working in a roundabout way. Here are 
 my 
  findings for the fred.py/hello.py test: 
  
  Elapsed time: 0.028 
  Elapsed time: 0.068 
  
  Basically, it's as fast as the command line test! 
  
  I'm not sure this tells us much. Is it Apache's fault? Is it web2py's 
 fault? 
  The test is run without the full web2py scaffolding. I don't know how 
 to run 
  web2py on gunicorn, unless someone can tell me. 
  
  
  On Sunday, 16 March 2014 16:21:00 UTC-4, Michele Comitini wrote: 
  
  gunicorn instructions: 
  
  $ pip install gunicorn 
  $ cd root dir of web2py 
  $ gunicorn -w 4 gluon.main:wsgibase 
  
  
  
  2014-03-16 14:47 GMT+01:00 horridohobbyist horrido...@gmail.com: 
   I've conducted a test with Flask. 
   
   fred.py is the command line program. 
   hello.py is the Flask program. 
   default.py is the Welcome controller. 
   testdata.txt is the test data. 
   shippackage.py is a required module. 
   
   fred.py: 
   0.024 second 
   0.067 second 
   
   hello.py: 
   0.029 second 
   0.073 second 
   
   default.py: 
   0.27 second 
   0.78 second 
   
   The Flask program is slightly slower than the command line. However, 
 the 
   Welcome app is about 10x slower! 
   
   Web2py is much, much slower than Flask. 
   
   I conducted the test in a Parallels VM running Ubuntu Server 12.04 
 (1GB 
   memory allocated). I have a 2.5GHz dual-core Mac mini with 8GB. 
   
   
   I can't quite figure out how to use gunicom. 
   
   
   On Saturday, 15 March 2014 23:41:49 UTC-4, horridohobbyist wrote: 
   
   I'll see what I can do. It will take time for me to learn how to 
 use 
   another framework. 
   
   As for trying a different web server, my (production) Linux server 
 is 
   intimately reliant on Apache. I'd have to learn how to use another 
 web 
   server, and then try it in my Linux VM. 
   
   
   On Saturday, 15 March 2014 22:45:27 UTC-4, Anthony wrote: 
   
   Are you able to replicate the exact task in another web framework, 
   such 
   as Flask (with the same server setup)? 
   
   On Saturday, March 15, 2014 10:34:56 PM UTC-4, horridohobbyist 
 wrote: 
   
   Well, putting back all my apps hasn't widened the discrepancy. So 
 I 
   don't know why my previous web2py installation was so slow. 
   
   While the Welcome app with the calculations test shows a 2x 
   discrepancy, 
   the original app that initiated this thread now shows a 13x 
   discrepancy 
   instead of 100x. That's certainly an improvement, but it's still 
 too 
   slow. 
   
   The size of the discrepancy depends on the code that is executed. 
   Clearly, what I'm doing in the original app (performing 
 permutations) 
   is 
   more demanding than mere arithmetical operations. Hence, 13x vs 
 2x. 
   
   I anxiously await any resolution to this performance issue, 
 whether 
   it 
   be in WSGI or in web2py. I'll check in on this thread 
 periodically... 
   
   
   On Saturday, 15 March 2014 16:19:12 UTC-4, horridohobbyist wrote: 
   
   Interestingly, now that I've got a fresh install of web2py with 
 only 
   the Welcome app, my Welcome vs command line test shows a 
 consistent 
   2x 
   discrepancy, just as you had observed. 
   
   My next step is to gradually add back all the other apps I had 
 in 
   web2py (I had 8 of them!) and see whether the discrepancy grows 
 with 
   the 
   number of apps. That's the theory I'm working on. 
   
   Yes, yes, I know, according to the Book, I shouldn't have so 
 many 
   apps 
   installed in web2py. This apparently affects performance. But 
 the 
   truth is, 
   most of those apps are hardly ever executed, so their existence 
   merely 
   represents a static overhead in web2py. In my mind, this 
 shouldn't 
   widen the 
   discrepancy, but you never know. 
   
   
   On Saturday, 15 March 2014 11:19:06 UTC-4, Niphlod wrote: 
   
   @mcm: you got me worried. Your test function was clocking a 
 hell 
   lower 
   than the original script. But then I found out why; one 

Re: [web2py] Re: Python Performance Issue

2014-03-16 Thread Massimo Di Pierro
easy_install gunicorn
cd web2py
python anyserver.py -s gunicorn -i 127.0.0.1 -p 8000 

Anyway, you need to run a test that does not include the import of Package 
first, because web2py definitely treats imports differently. That must be 
tested separately.

Massimo


On Sunday, 16 March 2014 15:31:17 UTC-5, horridohobbyist wrote:

 Well, I managed to get *gunicorn* working in a roundabout way. Here are 
 my findings for the fred.py/hello.py test:

 Elapsed time: 0.028
 Elapsed time: 0.068

 Basically, it's as fast as the command line test!

 I'm not sure this tells us much. Is it Apache's fault? Is it web2py's 
 fault? The test is run without the full web2py scaffolding. I don't know 
 how to run web2py on gunicorn, unless someone can tell me.


 On Sunday, 16 March 2014 16:21:00 UTC-4, Michele Comitini wrote:

 gunicorn instructions: 

 $ pip install gunicorn 
 $ cd root dir of web2py 
 $ gunicorn -w 4 gluon.main:wsgibase 



 2014-03-16 14:47 GMT+01:00 horridohobbyist horrido...@gmail.com: 
  I've conducted a test with Flask. 
  
  fred.py is the command line program. 
  hello.py is the Flask program. 
  default.py is the Welcome controller. 
  testdata.txt is the test data. 
  shippackage.py is a required module. 
  
  fred.py: 
  0.024 second 
  0.067 second 
  
  hello.py: 
  0.029 second 
  0.073 second 
  
  default.py: 
  0.27 second 
  0.78 second 
  
  The Flask program is slightly slower than the command line. However, 
 the 
  Welcome app is about 10x slower! 
  
  Web2py is much, much slower than Flask. 
  
  I conducted the test in a Parallels VM running Ubuntu Server 12.04 (1GB 
  memory allocated). I have a 2.5GHz dual-core Mac mini with 8GB. 
  
  
  I can't quite figure out how to use gunicom. 
  
  
  On Saturday, 15 March 2014 23:41:49 UTC-4, horridohobbyist wrote: 
  
  I'll see what I can do. It will take time for me to learn how to use 
  another framework. 
  
  As for trying a different web server, my (production) Linux server is 
  intimately reliant on Apache. I'd have to learn how to use another web 
  server, and then try it in my Linux VM. 
  
  
  On Saturday, 15 March 2014 22:45:27 UTC-4, Anthony wrote: 
  
  Are you able to replicate the exact task in another web framework, 
 such 
  as Flask (with the same server setup)? 
  
  On Saturday, March 15, 2014 10:34:56 PM UTC-4, horridohobbyist wrote: 
  
  Well, putting back all my apps hasn't widened the discrepancy. So I 
  don't know why my previous web2py installation was so slow. 
  
  While the Welcome app with the calculations test shows a 2x 
 discrepancy, 
  the original app that initiated this thread now shows a 13x 
 discrepancy 
  instead of 100x. That's certainly an improvement, but it's still too 
 slow. 
  
  The size of the discrepancy depends on the code that is executed. 
  Clearly, what I'm doing in the original app (performing 
 permutations) is 
  more demanding than mere arithmetical operations. Hence, 13x vs 2x. 
  
  I anxiously await any resolution to this performance issue, whether 
 it 
  be in WSGI or in web2py. I'll check in on this thread 
 periodically... 
  
  
  On Saturday, 15 March 2014 16:19:12 UTC-4, horridohobbyist wrote: 
  
  Interestingly, now that I've got a fresh install of web2py with 
 only 
  the Welcome app, my Welcome vs command line test shows a consistent 
 2x 
  discrepancy, just as you had observed. 
  
  My next step is to gradually add back all the other apps I had in 
  web2py (I had 8 of them!) and see whether the discrepancy grows 
 with the 
  number of apps. That's the theory I'm working on. 
  
  Yes, yes, I know, according to the Book, I shouldn't have so many 
 apps 
  installed in web2py. This apparently affects performance. But the 
 truth is, 
  most of those apps are hardly ever executed, so their existence 
 merely 
  represents a static overhead in web2py. In my mind, this shouldn't 
 widen the 
  discrepancy, but you never know. 
  
  
  On Saturday, 15 March 2014 11:19:06 UTC-4, Niphlod wrote: 
  
  @mcm: you got me worried. Your test function was clocking a hell 
 lower 
  than the original script. But then I found out why; one order of 
 magnitude 
  less (5000 vs 5). Once that was corrected, you got the exact 
 same clock 
  times as my app (i.e. function directly in the controller). I 
 also 
  stripped out the logging part making the app just return the 
 result and no 
  visible changes to the timings happened. 
  
  @hh: glad at least we got some grounds to hold on. 
  @mariano: compiled or not, it doesn't seem to change the mean. a 
  compiled app has just lower variance. 
  
  @all: jlundell definitively hit something. Times are much more 
 lower 
  when threads are 1. 
  
  BTW: if I change originalscript.py to 
  
  # -*- coding: utf-8 -*- 
  import time 
  import threading 
  
  def test(): 
  start = time.time() 
  x = 0.0 
  for i in range(1,5): 
  x += (float(i+10)*(i+25)+175.0)/3.14 
  res = str(time.time()-start) 
  

Re: [web2py] Re: Python Performance Issue

2014-03-16 Thread horridohobbyist
Okay, I did the calculations test in my Linux VM using command line 
(fred0), Flask (hello0), and web2py (Welcome).

fred0: elapsed time: 0.00159001350403
fred0: elapsed time: 0.0015709400177
fred0: elapsed time: 0.00156021118164
fred0: elapsed time: 0.0015971660614
fred0: elapsed time: 0.0031584741

hello0: elapsed time: 0.00271105766296
hello0: elapsed time: 0.00213503837585
hello0: elapsed time: 0.00195693969727
hello0: elapsed time: 0.00224900245667
hello0: elapsed time: 0.00205492973328

Welcome: elapsed time: 0.0484869480133
Welcome: elapsed time: 0.00296783447266
Welcome: elapsed time: 0.00293898582458
Welcome: elapsed time: 0.00300216674805
Welcome: elapsed time: 0.00312614440918

The Welcome discrepancy is just under 2x, not nearly as bad as 10x in my 
shipping code.


On Sunday, 16 March 2014 17:52:00 UTC-4, Massimo Di Pierro wrote:

 In order to isolate the problem one must take it in steps. This is a good 
 test but you must first perform this test with the code you proposed before:

 def test():
 t = time.time
 start = t()
 x = 0.0
 for i in range(1,5000):
 x += (float(i+10)*(i+25)+175.0)/3.14
 debug('elapsed time: ' + str(t()-start))
 return

 I would like to know the results about this test code first.

 The other code you are using performs an import:

 from shippackage import Package


 Now that is something that is very different in web2py and flask for 
 example. In web2py the import is executed at every request (although it 
 should be cached by Python) while in flask it is executed only once.  This 
 should also not cause a performance difference but it is a different test 
 than the one above.

 TLTR: we should test separately python code execution (which may be 
 affected by threading) and import statements (which may be affected by 
 web2py custom_import and/or module weird behavior).



 On Sunday, 16 March 2014 08:47:13 UTC-5, horridohobbyist wrote:

 I've conducted a test with Flask.

 fred.py is the command line program.
 hello.py is the Flask program.
 default.py is the Welcome controller.
 testdata.txt is the test data.
 shippackage.py is a required module.

 fred.py:
 0.024 second
 0.067 second

 hello.py:
 0.029 second
 0.073 second

 default.py:
 0.27 second
 0.78 second

 The Flask program is slightly slower than the command line. However, the 
 Welcome app is about 10x slower!

 *Web2py is much, much slower than Flask.*

 I conducted the test in a Parallels VM running Ubuntu Server 12.04 (1GB 
 memory allocated). I have a 2.5GHz dual-core Mac mini with 8GB.


 I can't quite figure out how to use gunicom.


 On Saturday, 15 March 2014 23:41:49 UTC-4, horridohobbyist wrote:

 I'll see what I can do. It will take time for me to learn how to use 
 another framework.

 As for trying a different web server, my (production) Linux server is 
 intimately reliant on Apache. I'd have to learn how to use another web 
 server, and then try it in my Linux VM.


 On Saturday, 15 March 2014 22:45:27 UTC-4, Anthony wrote:

 Are you able to replicate the exact task in another web framework, such 
 as Flask (with the same server setup)?

 On Saturday, March 15, 2014 10:34:56 PM UTC-4, horridohobbyist wrote:

 Well, putting back all my apps hasn't widened the discrepancy. So I 
 don't know why my previous web2py installation was so slow.

 While the Welcome app with the calculations test shows a 2x 
 discrepancy, the original app that initiated this thread now shows a 13x 
 discrepancy instead of 100x. That's certainly an improvement, but it's 
 still too slow.

 The size of the discrepancy depends on the code that is executed. 
 Clearly, what I'm doing in the original app (performing permutations) is 
 more demanding than mere arithmetical operations. Hence, 13x vs 2x.

 I anxiously await any resolution to this performance issue, whether it 
 be in WSGI or in web2py. I'll check in on this thread periodically...


 On Saturday, 15 March 2014 16:19:12 UTC-4, horridohobbyist wrote:

 Interestingly, now that I've got a fresh install of web2py with only 
 the Welcome app, my Welcome vs command line test shows a consistent 2x 
 discrepancy, just as you had observed.

 My next step is to gradually add back all the other apps I had in 
 web2py (I had 8 of them!) and see whether the discrepancy grows with the 
 number of apps. That's the theory I'm working on.

 Yes, yes, I know, according to the Book, I shouldn't have so many 
 apps installed in web2py. This apparently affects performance. But the 
 truth is, most of those apps are hardly ever executed, so their 
 existence 
 merely represents a static overhead in web2py. In my mind, this 
 shouldn't 
 widen the discrepancy, but you never know.


 On Saturday, 15 March 2014 11:19:06 UTC-4, Niphlod wrote:

 @mcm: you got me worried. Your test function was clocking a hell 
 lower than the original script. But then I found out why; one order of 
 magnitude less (5000 vs 5). Once that was corrected, 

Re: [web2py] Re: Python Performance Issue

2014-03-16 Thread horridohobbyist
Using gunicorn (Thanks, Massimo), I ran the full web2py Welcome code:

Welcome: elapsed time: 0.0511929988861
Welcome: elapsed time: 0.0024790763855
Welcome: elapsed time: 0.00262713432312
Welcome: elapsed time: 0.00224614143372
Welcome: elapsed time: 0.00218415260315
Welcome: elapsed time: 0.00213503837585

Oddly enough, it's slightly faster! But still 37% slower than the command 
line execution.

I'd really, really, **really** like to know why the shipping code is 10x 
slower...


On Sunday, 16 March 2014 21:13:56 UTC-4, horridohobbyist wrote:

 Okay, I did the calculations test in my Linux VM using command line 
 (fred0), Flask (hello0), and web2py (Welcome).

 fred0: elapsed time: 0.00159001350403

 fred0: elapsed time: 0.0015709400177

 fred0: elapsed time: 0.00156021118164

 fred0: elapsed time: 0.0015971660614

 fred0: elapsed time: 0.0031584741

 hello0: elapsed time: 0.00271105766296

 hello0: elapsed time: 0.00213503837585

 hello0: elapsed time: 0.00195693969727

 hello0: elapsed time: 0.00224900245667

 hello0: elapsed time: 0.00205492973328
 Welcome: elapsed time: 0.0484869480133

 Welcome: elapsed time: 0.00296783447266

 Welcome: elapsed time: 0.00293898582458

 Welcome: elapsed time: 0.00300216674805

 Welcome: elapsed time: 0.00312614440918

 The Welcome discrepancy is just under 2x, not nearly as bad as 10x in my 
 shipping code.


 On Sunday, 16 March 2014 17:52:00 UTC-4, Massimo Di Pierro wrote:

 In order to isolate the problem one must take it in steps. This is a good 
 test but you must first perform this test with the code you proposed before:

 def test():
 t = time.time
 start = t()
 x = 0.0
 for i in range(1,5000):
 x += (float(i+10)*(i+25)+175.0)/3.14
 debug('elapsed time: ' + str(t()-start))
 return

 I would like to know the results about this test code first.

 The other code you are using performs an import:

 from shippackage import Package


 Now that is something that is very different in web2py and flask for 
 example. In web2py the import is executed at every request (although it 
 should be cached by Python) while in flask it is executed only once.  This 
 should also not cause a performance difference but it is a different test 
 than the one above.

 TLTR: we should test separately python code execution (which may be 
 affected by threading) and import statements (which may be affected by 
 web2py custom_import and/or module weird behavior).



 On Sunday, 16 March 2014 08:47:13 UTC-5, horridohobbyist wrote:

 I've conducted a test with Flask.

 fred.py is the command line program.
 hello.py is the Flask program.
 default.py is the Welcome controller.
 testdata.txt is the test data.
 shippackage.py is a required module.

 fred.py:
 0.024 second
 0.067 second

 hello.py:
 0.029 second
 0.073 second

 default.py:
 0.27 second
 0.78 second

 The Flask program is slightly slower than the command line. However, the 
 Welcome app is about 10x slower!

 *Web2py is much, much slower than Flask.*

 I conducted the test in a Parallels VM running Ubuntu Server 12.04 (1GB 
 memory allocated). I have a 2.5GHz dual-core Mac mini with 8GB.


 I can't quite figure out how to use gunicom.


 On Saturday, 15 March 2014 23:41:49 UTC-4, horridohobbyist wrote:

 I'll see what I can do. It will take time for me to learn how to use 
 another framework.

 As for trying a different web server, my (production) Linux server is 
 intimately reliant on Apache. I'd have to learn how to use another web 
 server, and then try it in my Linux VM.


 On Saturday, 15 March 2014 22:45:27 UTC-4, Anthony wrote:

 Are you able to replicate the exact task in another web framework, 
 such as Flask (with the same server setup)?

 On Saturday, March 15, 2014 10:34:56 PM UTC-4, horridohobbyist wrote:

 Well, putting back all my apps hasn't widened the discrepancy. So I 
 don't know why my previous web2py installation was so slow.

 While the Welcome app with the calculations test shows a 2x 
 discrepancy, the original app that initiated this thread now shows a 13x 
 discrepancy instead of 100x. That's certainly an improvement, but it's 
 still too slow.

 The size of the discrepancy depends on the code that is executed. 
 Clearly, what I'm doing in the original app (performing permutations) is 
 more demanding than mere arithmetical operations. Hence, 13x vs 2x.

 I anxiously await any resolution to this performance issue, whether 
 it be in WSGI or in web2py. I'll check in on this thread periodically...


 On Saturday, 15 March 2014 16:19:12 UTC-4, horridohobbyist wrote:

 Interestingly, now that I've got a fresh install of web2py with only 
 the Welcome app, my Welcome vs command line test shows a consistent 2x 
 discrepancy, just as you had observed.

 My next step is to gradually add back all the other apps I had in 
 web2py (I had 8 of them!) and see whether the discrepancy grows with 
 the 
 number of apps. That's the theory I'm working on.

 Yes, yes, I know, 

Re: [web2py] Re: Python Performance Issue

2014-03-16 Thread Massimo Di Pierro
What kind of VM is this? What is the host platform? How many CPU cores? Is the 
VM using all the cores? The only thing I can think of is the GIL and the fact 
that multithreaded code in Python gets slower and slower the more cores you 
have. On my laptop, with two cores, I do not see any slowdown. Rocket 
preallocates a thread pool. The rationale is that it decreases the latency 
time. Perhaps you can also try Rocket in this way:

web2py.py --minthreads=1 --maxthreads=1

This will reduce the number of worker threads to 1. Rocket also runs a 
background non-worker thread that monitors worker threads and kills them if 
they get stuck.
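
Another knob worth knowing about while chasing this (purely a sketch, not a
production recommendation): on CPython 2.x the interpreter considers a thread
switch every sys.getcheckinterval() bytecodes (default 100), so raising it can
reduce the switching overhead for CPU-bound code:

import sys
import time

print 'old check interval:', sys.getcheckinterval()   # default is 100
sys.setcheckinterval(10000)

start = time.time()
x = 0.0
for i in range(1, 5000000):   # arbitrary iteration count
    x += (float(i + 10) * (i + 25) + 175.0) / 3.14
print 'elapsed time: ' + str(time.time() - start)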

On Sunday, 16 March 2014 20:22:45 UTC-5, horridohobbyist wrote:

 Using gunicorn (Thanks, Massimo), I ran the full web2py Welcome code:

 Welcome: elapsed time: 0.0511929988861
 Welcome: elapsed time: 0.0024790763855
 Welcome: elapsed time: 0.00262713432312
 Welcome: elapsed time: 0.00224614143372
 Welcome: elapsed time: 0.00218415260315
 Welcome: elapsed time: 0.00213503837585

 Oddly enough, it's slightly faster! But still 37% slower than the command 
 line execution.

 I'd really, really, **really** like to know why the shipping code is 10x 
 slower...


 On Sunday, 16 March 2014 21:13:56 UTC-4, horridohobbyist wrote:

 Okay, I did the calculations test in my Linux VM using command line 
 (fred0), Flask (hello0), and web2py (Welcome).

 fred0: elapsed time: 0.00159001350403

 fred0: elapsed time: 0.0015709400177

 fred0: elapsed time: 0.00156021118164

 fred0: elapsed time: 0.0015971660614

 fred0: elapsed time: 0.0031584741

 hello0: elapsed time: 0.00271105766296

 hello0: elapsed time: 0.00213503837585

 hello0: elapsed time: 0.00195693969727

 hello0: elapsed time: 0.00224900245667

 hello0: elapsed time: 0.00205492973328
 Welcome: elapsed time: 0.0484869480133

 Welcome: elapsed time: 0.00296783447266

 Welcome: elapsed time: 0.00293898582458

 Welcome: elapsed time: 0.00300216674805

 Welcome: elapsed time: 0.00312614440918

 The Welcome discrepancy is just under 2x, not nearly as bad as 10x in my 
 shipping code.


 On Sunday, 16 March 2014 17:52:00 UTC-4, Massimo Di Pierro wrote:

 In order to isolate the problem one must take it in steps. This is a 
 good test but you must first perform this test with the code you proposed 
 before:

 def test():
 t = time.time
 start = t()
 x = 0.0
 for i in range(1,5000):
 x += (float(i+10)*(i+25)+175.0)/3.14
 debug('elapsed time: ' + str(t()-start))
 return

 I would like to know the results about this test code first.

 The other code you are using performs an import:

 from shippackage import Package


 Now that is something that is very different in web2py and flask for 
 example. In web2py the import is executed at every request (although it 
 should be cached by Python) while in flask it is executed only once.  This 
 should also not cause a performance difference but it is a different test 
 than the one above.

 TLTR: we should test separately python code execution (which may be 
 affected by threading) and import statements (which may be affected by 
 web2py custom_import and/or module weird behavior).



 On Sunday, 16 March 2014 08:47:13 UTC-5, horridohobbyist wrote:

 I've conducted a test with Flask.

 fred.py is the command line program.
 hello.py is the Flask program.
 default.py is the Welcome controller.
 testdata.txt is the test data.
 shippackage.py is a required module.

 fred.py:
 0.024 second
 0.067 second

 hello.py:
 0.029 second
 0.073 second

 default.py:
 0.27 second
 0.78 second

 The Flask program is slightly slower than the command line. However, 
 the Welcome app is about 10x slower!

 *Web2py is much, much slower than Flask.*

 I conducted the test in a Parallels VM running Ubuntu Server 12.04 (1GB 
 memory allocated). I have a 2.5GHz dual-core Mac mini with 8GB.


 I can't quite figure out how to use gunicom.


 On Saturday, 15 March 2014 23:41:49 UTC-4, horridohobbyist wrote:

 I'll see what I can do. It will take time for me to learn how to use 
 another framework.

 As for trying a different web server, my (production) Linux server is 
 intimately reliant on Apache. I'd have to learn how to use another web 
 server, and then try it in my Linux VM.


 On Saturday, 15 March 2014 22:45:27 UTC-4, Anthony wrote:

 Are you able to replicate the exact task in another web framework, 
 such as Flask (with the same server setup)?

 On Saturday, March 15, 2014 10:34:56 PM UTC-4, horridohobbyist wrote:

 Well, putting back all my apps hasn't widened the discrepancy. So I 
 don't know why my previous web2py installation was so slow.

 While the Welcome app with the calculations test shows a 2x 
 discrepancy, the original app that initiated this thread now shows a 
 13x 
 discrepancy instead of 100x. That's certainly an improvement, but it's 
 still too slow.

 The size of the discrepancy depends on the code that is executed. 
 Clearly, what I'm doing in the original app 

Re: [web2py] Re: Python Performance Issue

2014-03-15 Thread Michele Comitini
About the logging:
http://web2py.com/books/default/chapter/29/04/the-core?search=logger#Logging
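
A minimal version of what that chapter describes, adapted to the test() used in
this thread (a sketch; the logger name is arbitrary):

import logging
import time

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger('web2py.app.shipping')   # arbitrary logger name

def test():
    start = time.time()
    x = 0.0
    for i in range(1, 5000):
        x += (float(i + 10) * (i + 25) + 175.0) / 3.14
    logger.debug('elapsed time: %s', time.time() - start)

if __name__ == '__main__':
    test()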



2014-03-15 2:32 GMT+01:00 horridohobbyist horrido.hobb...@gmail.com:
 I don't understand logging. How do I examine the log? Where is it??


 On Friday, 14 March 2014 18:29:15 UTC-4, Michele Comitini wrote:

 Can you try with the following?

 note: no DAL, no sessions

 2014-03-14 22:23 GMT+01:00 Niphlod nip...@gmail.com:
 
  On Friday, March 14, 2014 10:17:40 PM UTC+1, Jonathan Lundell wrote:
 
  On 14 Mar 2014, at 2:16 PM, Jonathan Lundell jlun...@pobox.com wrote:
 
  Setting aside that your 2x is a lot better than HH's, what's been
  bothering me (assuming the effect is real) is: what could possibly be
  the
  mechanism?
 
 
  I'm always luckier than users. What can I say ? I love my computer ^__^
 
 
 
  Running it with web2py -S eliminates some possibilities, too, relating
  to
  the restricted environment stuff.
 
  That's what I thought
 
  So I'm thinking it must be thread activity. Yappi comes to mind, but
  not
  sure how to invoke it in a wsgi environment.
 
  How about Rocket with min  max threads set to 1?
 
 
  ykes!
 
  0.23 min, 0.27 max, ~0.25 mean
 




Re: [web2py] Re: Python Performance Issue

2014-03-15 Thread Niphlod
@mcm: you got me worried. Your test function was clocking in a hell of a lot 
lower than the original script. But then I found out why: one order of 
magnitude less (5000 vs 5). Once that was corrected, you got the exact same 
clock times as my app (i.e. the function directly in the controller). I also 
stripped out the logging part, making the app just return the result, and no 
visible changes to the timings happened.

@hh: glad at least we got some ground to hold on to. 
@mariano: compiled or not, it doesn't seem to change the mean; a compiled 
app just has lower variance. 

@all: jlundell definitely hit something. Times are much lower when 
threads are 1.

BTW: if I change originalscript.py to 

# -*- coding: utf-8 -*-
import time
import threading

def test():
    start = time.time()
    x = 0.0
    for i in range(1,5):
        x += (float(i+10)*(i+25)+175.0)/3.14
    res = str(time.time()-start)
    print 'elapsed time: ' + res + '\n'

if __name__ == '__main__':
    t = threading.Thread(target=test)
    t.start()
    t.join()

I'm getting timings really close to the wsgi-environment, 1-thread-only tests, 
i.e. 0.23 min, 0.26 max, ~0.24 mean



Re: [web2py] Re: Python Performance Issue

2014-03-15 Thread horridohobbyist
Interestingly, now that I've got a fresh install of web2py with only the 
Welcome app, my Welcome vs command line test shows a consistent 2x 
discrepancy, just as you had observed.

My next step is to gradually add back all the other apps I had in web2py (I 
had 8 of them!) and see whether the discrepancy grows with the number of 
apps. That's the theory I'm working on.

Yes, yes, I know, according to the Book, I shouldn't have so many apps 
installed in web2py. This apparently affects performance. But the truth is, 
most of those apps are hardly ever executed, so their existence merely 
represents a static overhead in web2py. In my mind, this shouldn't widen 
the discrepancy, but you never know.


On Saturday, 15 March 2014 11:19:06 UTC-4, Niphlod wrote:

 @mcm: you got me worried. Your test function was clocking a hell lower 
 than the original script. But then I found out why; one order of magnitude 
 less (5000 vs 5). Once that was corrected, you got the exact same clock 
 times as my app (i.e. function directly in the controller). I also 
 stripped out the logging part making the app just return the result and no 
 visible changes to the timings happened.

 @hh: glad at least we got some grounds to hold on. 
 @mariano: compiled or not, it doesn't seem to change the mean. a 
 compiled app has just lower variance. 

 @all: jlundell definitively hit something. Times are much more lower when 
 threads are 1.

 BTW: if I change originalscript.py to 

 # -*- coding: utf-8 -*-
 import time
 import threading

 def test():
 start = time.time()
 x = 0.0
 for i in range(1,5):
 x += (float(i+10)*(i+25)+175.0)/3.14
 res = str(time.time()-start)
  print "elapsed time: " + res + '\n'

 if __name__ == '__main__':
 t = threading.Thread(target=test)
 t.start()
 t.join()

 I'm getting really close timings to wsgi environment, 1 thread only 
 tests, i.e. 
 0.23 min, 0.26 max, ~0.24 mean





Re: [web2py] Re: Python Performance Issue

2014-03-15 Thread horridohobbyist
Well, putting back all my apps hasn't widened the discrepancy. So I don't 
know why my previous web2py installation was so slow.

While the Welcome app with the calculations test shows a 2x discrepancy, 
the original app that initiated this thread now shows a 13x discrepancy 
instead of 100x. That's certainly an improvement, but it's still too slow.

The size of the discrepancy depends on the code that is executed. Clearly, 
what I'm doing in the original app (performing permutations) is more 
demanding than mere arithmetical operations. Hence, 13x vs 2x.

I anxiously await any resolution to this performance issue, whether it be 
in WSGI or in web2py. I'll check in on this thread periodically...


On Saturday, 15 March 2014 16:19:12 UTC-4, horridohobbyist wrote:

 Interestingly, now that I've got a fresh install of web2py with only the 
 Welcome app, my Welcome vs command line test shows a consistent 2x 
 discrepancy, just as you had observed.

 My next step is to gradually add back all the other apps I had in web2py 
 (I had 8 of them!) and see whether the discrepancy grows with the number of 
 apps. That's the theory I'm working on.

 Yes, yes, I know, according to the Book, I shouldn't have so many apps 
 installed in web2py. This apparently affects performance. But the truth is, 
 most of those apps are hardly ever executed, so their existence merely 
 represents a static overhead in web2py. In my mind, this shouldn't widen 
 the discrepancy, but you never know.


 On Saturday, 15 March 2014 11:19:06 UTC-4, Niphlod wrote:

 @mcm: you got me worried. Your test function was clocking a hell lower 
 than the original script. But then I found out why; one order of magnitude 
 less (5000 vs 5). Once that was corrected, you got the exact same clock 
 times as my app (i.e. function directly in the controller). I also 
 stripped out the logging part making the app just return the result and no 
 visible changes to the timings happened.

 @hh: glad at least we got some grounds to hold on. 
 @mariano: compiled or not, it doesn't seem to change the mean. a 
 compiled app has just lower variance. 

 @all: jlundell definitively hit something. Times are much more lower when 
 threads are 1.

 BTW: if I change originalscript.py to 

 # -*- coding: utf-8 -*-
 import time
 import threading

 def test():
 start = time.time()
 x = 0.0
 for i in range(1,5):
 x += (float(i+10)*(i+25)+175.0)/3.14
 res = str(time.time()-start)
  print "elapsed time: " + res + '\n'

 if __name__ == '__main__':
 t = threading.Thread(target=test)
 t.start()
 t.join()

 I'm getting really close timings to wsgi environment, 1 thread only 
 tests, i.e. 
 0.23 min, 0.26 max, ~0.24 mean





Re: [web2py] Re: Python Performance Issue

2014-03-15 Thread Massimo Di Pierro
Could it be the GIL? web2py is a multi-threaded app. Are the threads 
created by the web server doing anything?
What if you use a non-threaded server like gunicorn instead?
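
For instance, something like this (untested sketch; it assumes the wsgihandler.py 
shipped in the web2py root exposes a WSGI callable named "application" -- check 
your copy):

 pip install gunicorn
 # run from the web2py directory, one synchronous (non-threaded) worker
 gunicorn -w 1 -b 127.0.0.1:8000 wsgihandler:application
 curl http://127.0.0.1:8000/welcome/default/index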


On Saturday, 15 March 2014 21:34:56 UTC-5, horridohobbyist wrote:

 Well, putting back all my apps hasn't widened the discrepancy. So I don't 
 know why my previous web2py installation was so slow.

 While the Welcome app with the calculations test shows a 2x discrepancy, 
 the original app that initiated this thread now shows a 13x discrepancy 
 instead of 100x. That's certainly an improvement, but it's still too slow.

 The size of the discrepancy depends on the code that is executed. Clearly, 
 what I'm doing in the original app (performing permutations) is more 
 demanding than mere arithmetical operations. Hence, 13x vs 2x.

 I anxiously await any resolution to this performance issue, whether it be 
 in WSGI or in web2py. I'll check in on this thread periodically...


 On Saturday, 15 March 2014 16:19:12 UTC-4, horridohobbyist wrote:

 Interestingly, now that I've got a fresh install of web2py with only the 
 Welcome app, my Welcome vs command line test shows a consistent 2x 
 discrepancy, just as you had observed.

 My next step is to gradually add back all the other apps I had in web2py 
 (I had 8 of them!) and see whether the discrepancy grows with the number of 
 apps. That's the theory I'm working on.

 Yes, yes, I know, according to the Book, I shouldn't have so many apps 
 installed in web2py. This apparently affects performance. But the truth is, 
 most of those apps are hardly ever executed, so their existence merely 
 represents a static overhead in web2py. In my mind, this shouldn't widen 
 the discrepancy, but you never know.


 On Saturday, 15 March 2014 11:19:06 UTC-4, Niphlod wrote:

 @mcm: you got me worried. Your test function was clocking a hell lower 
 than the original script. But then I found out why; one order of magnitude 
 less (5000 vs 5). Once that was corrected, you got the exact same clock 
 times as my app (i.e. function directly in the controller). I also 
 stripped out the logging part making the app just return the result and no 
 visible changes to the timings happened.

 @hh: glad at least we got some grounds to hold on. 
 @mariano: compiled or not, it doesn't seem to change the mean. a 
 compiled app has just lower variance. 

 @all: jlundell definitively hit something. Times are much more lower 
 when threads are 1.

 BTW: if I change originalscript.py to 

 # -*- coding: utf-8 -*-
 import time
 import threading

 def test():
 start = time.time()
 x = 0.0
 for i in range(1,5):
 x += (float(i+10)*(i+25)+175.0)/3.14
 res = str(time.time()-start)
  print "elapsed time: " + res + '\n'

 if __name__ == '__main__':
 t = threading.Thread(target=test)
 t.start()
 t.join()

 I'm getting really close timings to wsgi environment, 1 thread only 
 tests, i.e. 
 0.23 min, 0.26 max, ~0.24 mean





Re: [web2py] Re: Python Performance Issue

2014-03-15 Thread Anthony
Are you able to replicate the exact task in another web framework, such as 
Flask (with the same server setup)?

On Saturday, March 15, 2014 10:34:56 PM UTC-4, horridohobbyist wrote:

 Well, putting back all my apps hasn't widened the discrepancy. So I don't 
 know why my previous web2py installation was so slow.

 While the Welcome app with the calculations test shows a 2x discrepancy, 
 the original app that initiated this thread now shows a 13x discrepancy 
 instead of 100x. That's certainly an improvement, but it's still too slow.

 The size of the discrepancy depends on the code that is executed. Clearly, 
 what I'm doing in the original app (performing permutations) is more 
 demanding than mere arithmetical operations. Hence, 13x vs 2x.

 I anxiously await any resolution to this performance issue, whether it be 
 in WSGI or in web2py. I'll check in on this thread periodically...


 On Saturday, 15 March 2014 16:19:12 UTC-4, horridohobbyist wrote:

 Interestingly, now that I've got a fresh install of web2py with only the 
 Welcome app, my Welcome vs command line test shows a consistent 2x 
 discrepancy, just as you had observed.

 My next step is to gradually add back all the other apps I had in web2py 
 (I had 8 of them!) and see whether the discrepancy grows with the number of 
 apps. That's the theory I'm working on.

 Yes, yes, I know, according to the Book, I shouldn't have so many apps 
 installed in web2py. This apparently affects performance. But the truth is, 
 most of those apps are hardly ever executed, so their existence merely 
 represents a static overhead in web2py. In my mind, this shouldn't widen 
 the discrepancy, but you never know.


 On Saturday, 15 March 2014 11:19:06 UTC-4, Niphlod wrote:

 @mcm: you got me worried. Your test function was clocking a hell lower 
 than the original script. But then I found out why; one order of magnitude 
 less (5000 vs 5). Once that was corrected, you got the exact same clock 
 times as my app (i.e. function directly in the controller). I also 
 stripped out the logging part making the app just return the result and no 
 visible changes to the timings happened.

 @hh: glad at least we got some grounds to hold on. 
 @mariano: compiled or not, it doesn't seem to change the mean. a 
 compiled app has just lower variance. 

 @all: jlundell definitively hit something. Times are much more lower 
 when threads are 1.

 BTW: if I change originalscript.py to 

 # -*- coding: utf-8 -*-
 import time
 import threading

 def test():
 start = time.time()
 x = 0.0
 for i in range(1,5):
 x += (float(i+10)*(i+25)+175.0)/3.14
 res = str(time.time()-start)
  print "elapsed time: " + res + '\n'

 if __name__ == '__main__':
 t = threading.Thread(target=test)
 t.start()
 t.join()

 I'm getting really close timings to wsgi environment, 1 thread only 
 tests, i.e. 
 0.23 min, 0.26 max, ~0.24 mean





Re: [web2py] Re: Python Performance Issue

2014-03-15 Thread horridohobbyist
I'll see what I can do. It will take time for me to learn how to use 
another framework.

As for trying a different web server, my (production) Linux server is 
intimately reliant on Apache. I'd have to learn how to use another web 
server, and then try it in my Linux VM.


On Saturday, 15 March 2014 22:45:27 UTC-4, Anthony wrote:

 Are you able to replicate the exact task in another web framework, such as 
 Flask (with the same server setup)?

 On Saturday, March 15, 2014 10:34:56 PM UTC-4, horridohobbyist wrote:

 Well, putting back all my apps hasn't widened the discrepancy. So I don't 
 know why my previous web2py installation was so slow.

 While the Welcome app with the calculations test shows a 2x discrepancy, 
 the original app that initiated this thread now shows a 13x discrepancy 
 instead of 100x. That's certainly an improvement, but it's still too slow.

 The size of the discrepancy depends on the code that is executed. 
 Clearly, what I'm doing in the original app (performing permutations) is 
 more demanding than mere arithmetical operations. Hence, 13x vs 2x.

 I anxiously await any resolution to this performance issue, whether it be 
 in WSGI or in web2py. I'll check in on this thread periodically...


 On Saturday, 15 March 2014 16:19:12 UTC-4, horridohobbyist wrote:

 Interestingly, now that I've got a fresh install of web2py with only the 
 Welcome app, my Welcome vs command line test shows a consistent 2x 
 discrepancy, just as you had observed.

 My next step is to gradually add back all the other apps I had in web2py 
 (I had 8 of them!) and see whether the discrepancy grows with the number of 
 apps. That's the theory I'm working on.

 Yes, yes, I know, according to the Book, I shouldn't have so many apps 
 installed in web2py. This apparently affects performance. But the truth is, 
 most of those apps are hardly ever executed, so their existence merely 
 represents a static overhead in web2py. In my mind, this shouldn't widen 
 the discrepancy, but you never know.


 On Saturday, 15 March 2014 11:19:06 UTC-4, Niphlod wrote:

 @mcm: you got me worried. Your test function was clocking a hell lower 
 than the original script. But then I found out why; one order of magnitude 
 less (5000 vs 5). Once that was corrected, you got the exact same 
 clock 
 times as my app (i.e. function directly in the controller). I also 
 stripped out the logging part making the app just return the result and no 
 visible changes to the timings happened.

 @hh: glad at least we got some grounds to hold on. 
 @mariano: compiled or not, it doesn't seem to change the mean. a 
 compiled app has just lower variance. 

 @all: jlundell definitively hit something. Times are much more lower 
 when threads are 1.

 BTW: if I change originalscript.py to 

 # -*- coding: utf-8 -*-
 import time
 import threading

 def test():
 start = time.time()
 x = 0.0
 for i in range(1,5):
 x += (float(i+10)*(i+25)+175.0)/3.14
 res = str(time.time()-start)
  print "elapsed time: " + res + '\n'

 if __name__ == '__main__':
 t = threading.Thread(target=test)
 t.start()
 t.join()

 I'm getting really close timings to wsgi environment, 1 thread only 
 tests, i.e. 
 0.23 min, 0.26 max, ~0.24 mean





Re: [web2py] Re: Python Performance Issue

2014-03-15 Thread Jonathan Lundell
On 15 Mar 2014, at 7:45 PM, Massimo Di Pierro massimo.dipie...@gmail.com 
wrote:
 Could it be the GIIL. web2py is a multi-threaded app. Are the threads created 
 by the web server doing anything?
 What if you use a non-threaded server like gunicorn instead?
 

I believe that Niphlod reproduced the problem with Rocket, in which case the 
other threads must be waiting for IO.

It's suggestive that he did not see the performance degradation when Rocket was 
restricted to a single thread. I proposed a couple of lines of attack in an 
earlier message today on this thread. (I regret that I'm hip-deep in getting a 
beta release out, with no time to spend on this interesting problem.)



Re: [web2py] Re: Python Performance Issue

2014-03-15 Thread Massimo Di Pierro
It is a well-known problem of the Python GIL that multithreaded apps on 
multicore CPUs run slower than non-threaded apps.

http://stackoverflow.com/questions/3121109/python-threading-unexpectedly-slower

The solution is not to use threads but non-threaded servers, like gunicorn 
or nginx. That is the first test I would do to isolate the problem.
BTW, as I said, I cannot reproduce the problem on a MacBook Air, Lion & 
Python 2.7.3. How many cores do you have? Usually the more cores, the worse 
the GIL problem.
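
A cheap way to test that hypothesis (untested sketch, Linux only) is to pin the 
whole server process to a single core and re-run the Welcome test; if the gap is 
GIL contention across cores, the pinned run should look much closer to the 
command line:

 python -c "import multiprocessing; print multiprocessing.cpu_count()"
 # pin web2py's built-in Rocket server to CPU 0 (taskset ships with util-linux)
 taskset -c 0 python web2py.py -a password -i 127.0.0.1 -p 8000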

Massimo


On Saturday, 15 March 2014 22:52:49 UTC-5, Jonathan Lundell wrote:

 On 15 Mar 2014, at 7:45 PM, Massimo Di Pierro 
 massimo@gmail.com 
 wrote:

 Could it be the GIL? web2py is a multi-threaded app. Are the threads 
 created by the web server doing anything?
 What if you use a non-threaded server like gunicorn instead?


 I believe that Niphlod reproduced the problem with Rocket, in which case 
 the other threads must be waiting for IO.

 It's suggestive that he did not see the performance degradation when 
 Rocket was restricted to a single thread. I proposed a couple of lines of 
 attack in an earlier message today on this thread. (I regret that I'm 
 hip-deep in getting a beta release out, with no time to spend on this 
 interesting problem.)





Re: [web2py] Re: Python Performance Issue

2014-03-14 Thread Jonathan Lundell
On 14 Mar 2014, at 6:28 AM, horridohobbyist horrido.hobb...@gmail.com wrote:
 I conducted a simple experiment. I took the Welcome app, surely the 
 simplest you can have (no databases, no concurrency, etc.), and added the 
 following to the index page:
 
 def test():
 start = time.time()
 x = 0.0
 for i in range(1,5000):
 x += (float(i+10)*(i+25)+175.0)/3.14
  debug("elapsed time: " + str(time.time()-start))
 return
 
 I get an elapsed time of 0.103 seconds.
 
 The same exact code in a command line program...
 
 if __name__ == '__main__':
 test()
 
 gives an elapsed time of 0.003 seconds. That's 35 times faster! It's not the 
 2 orders of magnitude I'm seeing in the pyShipping code, but my point is 
 proven. There is something hinky about web2py that makes Python code execute 
 much more slowly. Is web2py using a different Python version? As far as I can 
 tell, I only have Python 2.6.5 installed on my Linux server.
 

Easy enough to find out: print sys.version.
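
For instance (throwaway action, the name is made up):

# controllers/default.py -- report which interpreter the WSGI process is using
import sys

def which_python():
    return sys.version + '\n'

# compare against the command line:  python -c "import sys; print sys.version"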



Re: [web2py] Re: Python Performance Issue

2014-03-14 Thread horridohobbyist
Okay, version 2.6.5 is verified. No difference in the Python version.

So how to explain the performance difference?


On Friday, 14 March 2014 09:36:29 UTC-4, Jonathan Lundell wrote:

 On 14 Mar 2014, at 6:28 AM, horridohobbyist 
 horrido...@gmail.com 
 wrote:

 I conducted a simple experiment. I took the Welcome app, surely the 
 simplest you can have (no databases, no concurrency, etc.), and added the 
 following to the index page:

 def test():
 start = time.time()
 x = 0.0
 for i in range(1,5000):
 x += (float(i+10)*(i+25)+175.0)/3.14
  debug("elapsed time: " + str(time.time()-start))
 return

 I get an elapsed time of 0.103 seconds.

 The same exact code in a command line program...

 if __name__ == '__main__':
 test()

 gives an elapsed time of 0.003 seconds. *That's 35 times faster!* It's 
 not the 2 orders of magnitude I'm seeing in the pyShipping code, but my 
 point is proven. There is something hinky about web2py that makes Python 
 code execute much more slowly. Is web2py using a different Python version? 
 As far as I can tell, I only have Python 2.6.5 installed on my Linux server.


 Easy enough to find out: print sys.version.




Re: [web2py] Re: Python Performance Issue

2014-03-14 Thread Jonathan Lundell
On 14 Mar 2014, at 7:39 AM, horridohobbyist horrido.hobb...@gmail.com wrote:
 Okay, version 2.6.5 is verified. No difference in the Python version.
 
 So how to explain the performance difference?

It's getting to be interesting.

To make the result more robust, I'd try it with a much bigger range, maybe 
100x, to be sure that the per-loop time is dominating the report. And just for 
the heck of it I'd replace range with xrange to see if it makes any difference 
at all.

Something else to keep in mind, especially if you're running this on a shared 
VM, is that time.time() is giving you clock time, and that can lead to very 
random results in a shared-hardware environment. Or even in a non-shared one, 
if there's any other system activity going on. The only way around that is to 
repeat the experiment a lot (which you're doing, sounds like).
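
Something like this takes most of the scheduling noise out of the numbers 
(sketch; keep whatever iteration count you are already using):

# best-of-5 timing of the same loop with timeit (Python 2)
import timeit

def work():
    x = 0.0
    for i in range(1, 5000):
        x += (float(i+10)*(i+25)+175.0)/3.14
    return x

print min(timeit.repeat(work, number=10, repeat=5))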

 
 
 On Friday, 14 March 2014 09:36:29 UTC-4, Jonathan Lundell wrote:
 On 14 Mar 2014, at 6:28 AM, horridohobbyist horrido...@gmail.com wrote:
 I conducted a simple experiment. I took the Welcome app, surely the 
 simplest you can have (no databases, no concurrency, etc.), and added the 
 following to the index page:
 
 def test():
 start = time.time()
 x = 0.0
 for i in range(1,5000):
 x += (float(i+10)*(i+25)+175.0)/3.14
  debug("elapsed time: " + str(time.time()-start))
 return
 
 I get an elapsed time of 0.103 seconds.
 
 The same exact code in a command line program...
 
 if __name__ == '__main__':
 test()
 
 gives an elapsed time of 0.003 seconds. That's 35 times faster! It's not the 
 2 orders of magnitude I'm seeing in the pyShipping code, but my point is 
 proven. There is something hinky about web2py that makes Python code execute 
 much more slowly. Is web2py using a different Python version? As far as I 
 can tell, I only have Python 2.6.5 installed on my Linux server.
 
 
 Easy enough to find out: print sys.version.




Re: [web2py] Re: Python Performance Issue

2014-03-14 Thread horridohobbyist
xrange makes no difference. And, yes, I've run the Welcome program dozens 
of times and the results are very consistent. There is little randomness in 
time.time().

My Linux server is a dedicated machine at the datacentre. I have it all to 
myself. Not much else is running on it. Apache2, web2py.


On Friday, 14 March 2014 10:53:38 UTC-4, Jonathan Lundell wrote:

 On 14 Mar 2014, at 7:39 AM, horridohobbyist 
 horrido...@gmail.com 
 wrote: 
  Okay, version 2.6.5 is verified. No difference in the Python version. 
  
  So how to explain the performance difference? 

 It's getting to be interesting. 

 To make the result more robust, I'd try it with a much bigger range, maybe 
 100x, to be sure that the per-loop time is dominating the report. And just 
 for the heck of it I'd replace range with xrange to see if it makes any 
 difference at all. 

 Something else to keep in mind, especially if you're running this on a 
 shared VM, is that time.time() is giving you clock time, and that can lead 
 to very random results in a shared-hardware environment. Or even in a 
 non-shared one, if there's any other system activity going on. The only way 
 around that is to repeat the experiment a lot (which you're doing, sounds 
 like). 

  
  
  On Friday, 14 March 2014 09:36:29 UTC-4, Jonathan Lundell wrote: 
  On 14 Mar 2014, at 6:28 AM, horridohobbyist horrido...@gmail.com 
 wrote: 
  I conducted a simple experiment. I took the Welcome app, surely the 
 simplest you can have (no databases, no concurrency, etc.), and added the 
 following to the index page: 
  
  def test(): 
  start = time.time() 
  x = 0.0 
  for i in range(1,5000): 
  x += (float(i+10)*(i+25)+175.0)/3.14 
   debug("elapsed time: " + str(time.time()-start)) 
  return 
  
  I get an elapsed time of 0.103 seconds. 
  
  The same exact code in a command line program... 
  
  if __name__ == '__main__': 
  test() 
  
  gives an elapsed time of 0.003 seconds. That's 35 times faster! It's 
 not the 2 orders of magnitude I'm seeing in the pyShipping code, but my 
 point is proven. There is something hinky about web2py that makes Python 
 code execute much more slowly. Is web2py using a different Python version? 
 As far as I can tell, I only have Python 2.6.5 installed on my Linux 
 server. 
  
  
  Easy enough to find out: print sys.version. 






Re: [web2py] Re: Python Performance Issue

2014-03-14 Thread Jonathan Lundell
On 14 Mar 2014, at 8:59 AM, horridohobbyist horrido.hobb...@gmail.com wrote:
 I disagree. I'm getting very consistent results with time.time().

Right, I see no problem with the experiment. And the arguments to debug() must 
be computed before debug() gets called, so no problem there either.

 
 With a print statement, Welcome yields 0.587778091431 second, while the 
 command line execution gives 0.0202300548553 second. Again, that's 29 times 
 faster.
 
 
 On Friday, 14 March 2014 11:51:04 UTC-4, Leonel Câmara wrote:
 Time is still a bad way to measure as the web2py version process may be 
  getting preempted and not getting as much CPU time. Although I would agree 
 there seems to be something odd going on here. Possibly dead code 
 elimination. What happens with the time if you add a print x after the 
 for to both versions?
 
 




Re: [web2py] Re: Python Performance Issue

2014-03-14 Thread Michele Comitini
Please try to profile as suggested; we need more info.
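
Roughly like this (sketch; /tmp/w2p_profile is a made-up folder, pass your own 
to -F, and I'm assuming the profiler writes standard cProfile stat files):

# start web2py with:  python web2py.py -F /tmp/w2p_profile
# hit the page a few times, then read the dumps:
import glob
import pstats

for fname in glob.glob('/tmp/w2p_profile/*'):
    print fname
    pstats.Stats(fname).sort_stats('cumulative').print_stats(20)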

2014-03-14 18:18 GMT+01:00 horridohobbyist horrido.hobb...@gmail.com:
 I originally installed web2py according to the Book. This was several years
 ago.

 I recently upgraded to the latest version, but I had to do it manually, as
 the administrative interface had all kinds of permission problems with the
 upgrade.

 I have a Dell server box, 2.4GHz quad-core Xeon with 4GB of RAM and 500GB
 hard drive. It's running Ubuntu Server 10.04.


 On Friday, 14 March 2014 12:26:44 UTC-4, Massimo Di Pierro wrote:

 Just adding one datapoint. I am trying this with my mac. In both cases I
 see 0.002xx seconds. Therefore I cannot reproduce the discrepancy.
 Are you using web2py from source? What kind of machine do you have?

 Massimo

 On Friday, 14 March 2014 08:28:48 UTC-5, horridohobbyist wrote:

 I conducted a simple experiment. I took the Welcome app, surely the
 simplest you can have (no databases, no concurrency, etc.), and added the
 following to the index page:

 def test():
 start = time.time()
 x = 0.0
 for i in range(1,5000):
 x += (float(i+10)*(i+25)+175.0)/3.14
  debug("elapsed time: " + str(time.time()-start))
 return

 I get an elapsed time of 0.103 seconds.

 The same exact code in a command line program...

 if __name__ == '__main__':
 test()

 gives an elapsed time of 0.003 seconds. That's 35 times faster! It's not
 the 2 orders of magnitude I'm seeing in the pyShipping code, but my point is
 proven. There is something hinky about web2py that makes Python code execute
 much more slowly. Is web2py using a different Python version? As far as I
 can tell, I only have Python 2.6.5 installed on my Linux server.


 On Friday, 14 March 2014 08:17:00 UTC-4, Leonel Câmara wrote:

 If you have a performance issue why haven't you used a profiler yet? No
 one is going to guess it,

 web2py.py -F foldername

 Then use something like runsnakerun or pstats.



Re: [web2py] Re: Python Performance Issue

2014-03-14 Thread horridohobbyist
First, I don't know how to use the profiler.

Second, for something as trivially simple as the Welcome app with the 
calculation loop, what is the profiler going to tell us? That simple 
multiplication and division are too slow? That the for loop is somehow 
broken?

Should I try to profile the entirety of the web2py framework?

Clearly, the Welcome app is pointing to a fundamental issue with my 
Ubuntu/Apache2/Python/web2py installation (assuming no one else can 
replicate the problem). As the Linux server is a production system, I am 
limited to how much tinkering I can actually do on it.

BTW, how does one actually shutdown web2py once it's installed and running 
via Apache?


On Friday, 14 March 2014 14:00:35 UTC-4, Michele Comitini wrote:

 Please try to profile as suggested we need more info. 

 2014-03-14 18:18 GMT+01:00 horridohobbyist 
 horrido...@gmail.com: 

  I originally installed web2py according to the Book. This was several 
 years 
  ago. 
  
  I recently upgraded to the latest version, but I had to do it manually, 
 as 
  the administrative interface had all kinds of permission problems with 
 the 
  upgrade. 
  
  I have a Dell server box, 2.4GHz quad-core Xeon with 4GB of RAM and 
 500GB 
  hard drive. It's running Ubuntu Server 10.04. 
  
  
  On Friday, 14 March 2014 12:26:44 UTC-4, Massimo Di Pierro wrote: 
  
  Just adding one datapoint. I am trying this with my mac. In both cases 
 I 
  see 0.002xx seconds. Therefore I cannot reproduce the discrepancy. 
  Are you using web2py from source? What kind of machine do you have? 
  
  Massimo 
  
  On Friday, 14 March 2014 08:28:48 UTC-5, horridohobbyist wrote: 
  
  I conducted a simple experiment. I took the Welcome app, surely the 
  simplest you can have (no databases, no concurrency, etc.), and added 
 the 
  following to the index page: 
  
  def test(): 
  start = time.time() 
  x = 0.0 
  for i in range(1,5000): 
  x += (float(i+10)*(i+25)+175.0)/3.14 
   debug("elapsed time: " + str(time.time()-start)) 
  return 
  
  I get an elapsed time of 0.103 seconds. 
  
  The same exact code in a command line program... 
  
  if __name__ == '__main__': 
  test() 
  
  gives an elapsed time of 0.003 seconds. That's 35 times faster! It's 
 not 
  the 2 orders of magnitude I'm seeing in the pyShipping code, but my 
 point is 
  proven. There is something hinky about web2py that makes Python code 
 execute 
  much more slowly. Is web2py using a different Python version? As far 
 as I 
  can tell, I only have Python 2.6.5 installed on my Linux server. 
  
  
  On Friday, 14 March 2014 08:17:00 UTC-4, Leonel Câmara wrote: 
  
  If you have a performance issue why haven't you used a profiler yet? 
 No 
  one is going to guess it, 
  
  web2py.py -F foldername 
  
  Then use something like runsnakerun or pstats. 
  


Re: [web2py] Re: Python Performance Issue

2014-03-14 Thread Jonathan Lundell
On 14 Mar 2014, at 11:28 AM, horridohobbyist horrido.hobb...@gmail.com wrote:
 First, I don't know how to use the profiler.
 
 Second, for something as trivially simple as the Welcome app with the 
 calculation loop, what is the profiler going to tell us? That simple 
 multiplication and division are too slow? That the for loop is somehow broken?
 
 Should I try to profile the entirety of the web2py framework?

I doubt that the profile would tell you much about the loop itself, but it 
might show work going on elsewhere, which might be instructive.

 
 Clearly, the Welcome app is pointing to a fundamental issue with my 
 Ubuntu/Apache2/Python/web2py installation (assuming no one else can replicate 
 the problem). As the Linux server is a production system, I am limited to how 
 much tinkering I can actually do on it.
 
 BTW, how does one actually shutdown web2py once it's installed and running 
 via Apache?


It's running as a wsgi process under Apache, so you really need to shut down 
Apache, or at least reconfigure it to not run web2py and then do a graceful 
restart.

For this kind of testing (not production), it might be easier to run web2py 
directly and use Rocket.
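
Roughly like this on an Ubuntu box (untested sketch; the site name and install 
path come from the one-step deployment script, adjust to your own config):

 sudo a2dissite default
 sudo apache2ctl graceful
 # then run web2py on Rocket just for the tests
 cd /home/www-data/web2py
 sudo -u www-data python web2py.py -a 'testpassword' -i 127.0.0.1 -p 8000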



Re: [web2py] Re: Python Performance Issue

2014-03-14 Thread horridohobbyist
Okay, I have some excellent news to report. Well, excellent for me, not so 
much for you guys...

I can reproduce the problem on another system. Here's what I did:

My Mac has Parallels installed. I created a new VM, downloaded Ubuntu 
Server 12.04, and installed it. Then I updated it with the latest patches.

Then, following the recipe from the Book for One step production 
deployment, I installed web2py 2.9.4.

I then ran the same Welcome vs command line test. The result?

Welcome:
elapsed time: 0.0491468906403

command line:
elapsed time: 0.00160121917725

Again, the command line is 30.6 times faster!!!

What more evidence do you need? Sorry to say, but there is something wrong 
with web2py.


On Friday, 14 March 2014 14:44:58 UTC-4, Jonathan Lundell wrote:

 On 14 Mar 2014, at 11:28 AM, horridohobbyist 
 horrido...@gmail.com 
 wrote:

 First, I don't know how to use the profiler.

 Second, for something as trivially simple as the Welcome app with the 
 calculation loop, what is the profiler going to tell us? That simple 
 multiplication and division are too slow? That the for loop is somehow 
 broken?

 Should I try to profile the entirety of the web2py framework?


 I doubt that the profile would tell you much about the loop itself, but it 
 might show work going on elsewhere, which might be instructive.


 Clearly, the Welcome app is pointing to a fundamental issue with my 
 Ubuntu/Apache2/Python/web2py installation (assuming no one else can 
 replicate the problem). As the Linux server is a production system, I am 
 limited to how much tinkering I can actually do on it.

 BTW, how does one actually shutdown web2py once it's installed and running 
 via Apache?


 It's running as a wsgi process under Apache, so you really need to shut 
 down Apache, or at least reconfigure it to not run web2py and then do a 
 graceful restart.

 For this kind of testing (not production), it might be easier to run 
 web2py directly and use Rocket.




Re: [web2py] Re: Python Performance Issue

2014-03-14 Thread Niphlod
Well well, it has been some time since someone last started a "web2py is slow(er)" 
thread. 

start mode=resuming old dir of all kinds of tests to verify that the user 
is wrong

let's start by clearing the field

ubuntu 12.10 64 bit, python 2.7.3. Original script:

# -*- coding: utf-8 -*-
import time

def test():
    start = time.time()
    x = 0.0
    for i in range(1,5):
        x += (float(i+10)*(i+25)+175.0)/3.14
    res = str(time.time()-start)
    return "elapsed time: " + res + '\n'

if __name__ == '__main__':
    print test()

vs the attached app (one controller only, same function, without the 
__main__ logic). 
# commented lines are my brain working

 ./web2py.py -a password ...
 curl http://127.0.0.1:8000/pyperf/default/test
0.27 min, 0.32 max, ~0.29 mean
#kill web2py, we got a baseline
#let's execute the original script
 python originalscript.py
0.17 min, 0.25 max, ~0.20 mean
#oh gods. User is right... Something is slower. Ok, let's test web2py 
real overhead using shell mode
 ./web2py.py -M -S pyperf/default/test
0.17 min, 0.25 max, ~0.20 mean
#roger... something in the web world is making things slower. Maybe 
rocket or the infamous GIL ?!?!
#let's go with uwsgi!!!
 uwsgi -i web2py.ini
 curl http://127.0.0.1:8000/pyperf/default/test
0.25 min, 0.30 max, ~0.27 mean
# just kidding! even uwsgi is slower than originalscript.py. so it's web2py.
# wait a sec. I know a tonload of webframeworks, but I'll pick one of the 
smallest to introduce as less overhead as I can
# let's go with web.py (app_perf.py) 
 curl http://127.0.0.1:8000/pyperf
0.27 min, 0.31 max, ~0.29 mean
#gotta be kidding me. No taste whatsoever in choosing frameworks. All my 
choices are taking a 2x hit.
#let's go with the superduper flask (often recognized as one of the best 
performance-wise, without going full-blown esoteric)
# attached app_flask_perf.py
 curl http://127.0.0.1:8000/pyperf
0.27 min, 0.31 max, ~0.29 mean
# OMG! not frameworks' fault. this is wsgi fault.


So it seems that the web2py shell and the python script behave exactly the same 
(if web2py itself were introducing the overhead, it should show up right there).
The same code served by rocket or uwsgi shows a gap (roughly 2x the 
time).
uwsgi behaves a little better, but not by much (it surely wins on concurrency, but 
that's another deal).
Every wsgi implementation takes the hit, micro or macro framework.

So maybe the wsgi environment is posing some limits?
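
One more data point that would help (untested sketch on my side): the same loop 
behind a bare WSGI callable, served by the stdlib wsgiref server with no framework 
at all, to see whether the gap is still there.

# bare_wsgi_perf.py -- no framework, stdlib only (Python 2)
import time
from wsgiref.simple_server import make_server

N = 5000000   # placeholder, use the same iteration count as the other tests

def app(environ, start_response):
    start = time.time()
    x = 0.0
    for i in range(1, N):
        x += (float(i+10)*(i+25)+175.0)/3.14
    body = "elapsed time: " + str(time.time() - start) + '\n'
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]

if __name__ == '__main__':
    make_server('127.0.0.1', 8000, app).serve_forever()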



[attachment: web2py.app.pyperf.tar.gz (binary data)]

app_perf.py (web.py version):

import web
import time

urls = ("/pyperf", "welcome")
app = web.application(urls, globals())


class welcome:

    def GET(self):
        start = time.time()
        x = 0.0
        for i in range(1,5):
            x += (float(i+10)*(i+25)+175.0)/3.14
        res = str(time.time()-start)
        return "elapsed time: " + res + '\n'

if __name__ == "__main__":
    web.httpserver.runsimple(app.wsgifunc(), ("127.0.0.1", 8000))

app_flask_perf.py (Flask version):

import time
from flask import Flask
app = Flask(__name__)

@app.route("/pyperf")
def hello():
    start = time.time()
    x = 0.0
    for i in range(1,5):
        x += (float(i+10)*(i+25)+175.0)/3.14
    res = str(time.time()-start)
    return "elapsed time: " + res + '\n'

if __name__ == "__main__":
    app.run(port=8000)

Re: [web2py] Re: Python Performance Issue

2014-03-14 Thread Jonathan Lundell
On 14 Mar 2014, at 2:03 PM, Niphlod niph...@gmail.com wrote:
 So seems that web2py shell and python script behaves exactly the same (if 
 web2py was introducing complexity it should show right there).
 The same environment executed by rocket or uwsgi gets some gap (roughly 2x 
 time).
 uwsgi behaves a little better, but not that much (surely in concurrency but 
 that's another deal).
 Every wsgi implementation takes the hit, micro or macro framework.
 
 S maybe the wsgi environment is posing some limits ?

Setting aside that your 2x is a lot better than HH's, what's been bothering me 
(assuming the effect is real) is: what could possibly be the mechanism? 

Running it with web2py -S eliminates some possibilities, too, relating to the 
restricted environment stuff. 

So I'm thinking it must be thread activity. Yappi comes to mind, but not sure 
how to invoke it in a wsgi environment.
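
For what it's worth, yappi can be started from anywhere inside the process, so 
one way to try it (untested sketch; file and action names are mine, and the stats 
API differs between yappi versions -- older ones use print_stats() instead):

# models/0_profiler.py -- start the thread-aware profiler once per process
import yappi
try:
    yappi.start()
except Exception:
    pass   # guard in case it is already running in this process

# controllers/profiler.py -- dump whatever has been collected so far
import yappi
def dump():
    yappi.get_func_stats().print_all()
    yappi.get_thread_stats().print_all()
    return 'stats printed to the server log\n'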



Re: [web2py] Re: Python Performance Issue

2014-03-14 Thread Jonathan Lundell
On 14 Mar 2014, at 2:16 PM, Jonathan Lundell jlund...@pobox.com wrote:
 On 14 Mar 2014, at 2:03 PM, Niphlod niph...@gmail.com wrote:
 So seems that web2py shell and python script behaves exactly the same 
 (if web2py was introducing complexity it should show right there).
 The same environment executed by rocket or uwsgi gets some gap (roughly 2x 
 time).
 uwsgi behaves a little better, but not that much (surely in concurrency but 
 that's another deal).
 Every wsgi implementation takes the hit, micro or macro framework.
 
 S maybe the wsgi environment is posing some limits ?
 
 Setting aside that your 2x is a lot better than HH's, what's been bothering 
 me (assuming the effect is real) is: what could possibly be the mechanism? 
 
 Running it with web2py -S eliminates some possibilities, too, relating to the 
 restricted environment stuff. 
 
 So I'm thinking it must be thread activity. Yappi comes to mind, but not sure 
 how to invoke it in a wsgi environment.
 
 

How about Rocket with min & max threads set to 1?
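
(If your web2py build exposes Rocket's thread-pool options on the command line, 
it should be as simple as the following -- flags from memory, check 
python web2py.py --help:)

 python web2py.py -a password -i 127.0.0.1 -p 8000 --minthreads=1 --maxthreads=1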



Re: [web2py] Re: Python Performance Issue

2014-03-14 Thread Niphlod

On Friday, March 14, 2014 10:17:40 PM UTC+1, Jonathan Lundell wrote:

 On 14 Mar 2014, at 2:16 PM, Jonathan Lundell jlun...@pobox.com 
 wrote:

 Setting aside that your 2x is a lot better than HH's, what's been 
 bothering me (assuming the effect is real) is: what could possibly be the 
 mechanism? 


I'm always luckier than users. What can I say ? I love my computer ^__^
 


 Running it with web2py -S eliminates some possibilities, too, relating to 
 the restricted environment stuff. 

 That's what I thought

 So I'm thinking it must be thread activity. Yappi comes to mind, but not 
 sure how to invoke it in a wsgi environment.

  How about Rocket with min & max threads set to 1?


yikes!

0.23 min, 0.27 max, ~0.25 mean



Re: [web2py] Re: Python Performance Issue

2014-03-14 Thread horridohobbyist
I don't understand logging. How do I examine the log? Where is it??


On Friday, 14 March 2014 18:29:15 UTC-4, Michele Comitini wrote:

 Can you try with the following? 

 note: no DAL, no sessions 

  2014-03-14 22:23 GMT+01:00 Niphlod nip...@gmail.com: 
  
  On Friday, March 14, 2014 10:17:40 PM UTC+1, Jonathan Lundell wrote: 
  
  On 14 Mar 2014, at 2:16 PM, Jonathan Lundell jlun...@pobox.com 
 wrote: 
  
  Setting aside that your 2x is a lot better than HH's, what's been 
  bothering me (assuming the effect is real) is: what could possibly be 
 the 
  mechanism? 
  
  
  I'm always luckier than users. What can I say ? I love my computer ^__^ 
  
  
  
  Running it with web2py -S eliminates some possibilities, too, relating 
 to 
  the restricted environment stuff. 
  
  That's what I thought 
  
  So I'm thinking it must be thread activity. Yappi comes to mind, but 
 not 
  sure how to invoke it in a wsgi environment. 
  
   How about Rocket with min & max threads set to 1? 
  
  
   yikes! 
  
  0.23 min, 0.27 max, ~0.25 mean 
  


Re: [web2py] Re: Python Performance Issue

2014-03-14 Thread horridohobbyist
Astonishingly, I've discovered something else...

When I ran the test in my newly-created VM, I only ran it once. Later, I 
noticed I wasn't getting the 30x ratio anymore; I was only getting 2x, like 
Niphlod did.

Luckily, I had taken a snapshot of the VM before running the test, so I 
reverted back to it. This time, I ran the test repeatedly. Here are the 
results:

elapsed time: 0.0515658855438
elapsed time: 0.00306177139282
elapsed time: 0.00300478935242
elapsed time: 0.00301694869995
elapsed time: 0.00319504737854

Note that it is only *the first run* that shows the 30x ratio. Thereafter, 
I'm only getting the 2x ratio. *This pattern is repeatable*.

I wish I could get 2x ratio on my production server; I could live with 
that. However, I'm still getting 30x. For some reason, it's not settling 
down to 2x like in my VM. Go figure.
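
(The warm-up pattern itself is easy to check by hitting the same action a few 
times in a row -- rough sketch, the URL is just an example:)

# Python 2: request the test action ten times and print each reported time
import urllib2
for _ in range(10):
    print urllib2.urlopen('http://127.0.0.1:8000/welcome/default/test').read().strip()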


On Friday, 14 March 2014 15:21:12 UTC-4, horridohobbyist wrote:

 Okay, I have some excellent news to report. Well, excellent for me, not so 
 much for you guys...

 I can reproduce the problem on another system. Here's what I did:

 My Mac has Parallels installed. I created a new VM, downloaded Ubuntu 
 Server 12.04, and installed it. Then I updated it with the latest patches.

 Then, following the recipe from the Book for One step production 
 deployment, I installed web2py 2.9.4.

 I then ran the same Welcome vs command line test. The result?

 Welcome:
 elapsed time: 0.0491468906403

 command line:
 elapsed time: 0.00160121917725

 Again, the command line is 30.6 times faster!!!

 What more evidence do you need? Sorry to say, but there is something wrong 
 with web2py.


 On Friday, 14 March 2014 14:44:58 UTC-4, Jonathan Lundell wrote:

 On 14 Mar 2014, at 11:28 AM, horridohobbyist horrido...@gmail.com 
 wrote:

 First, I don't know how to use the profiler.

 Second, for something as trivially simple as the Welcome app with the 
 calculation loop, what is the profiler going to tell us? That simple 
 multiplication and division are too slow? That the for loop is somehow 
 broken?

 Should I try to profile the entirety of the web2py framework?


 I doubt that the profile would tell you much about the loop itself, but 
 it might show work going on elsewhere, which might be instructive.


 Clearly, the Welcome app is pointing to a fundamental issue with my 
 Ubuntu/Apache2/Python/web2py installation (assuming no one else can 
 replicate the problem). As the Linux server is a production system, I am 
 limited to how much tinkering I can actually do on it.

 BTW, how does one actually shutdown web2py once it's installed and 
 running via Apache?


 It's running as a wsgi process under Apache, so you really need to shut 
 down Apache, or at least reconfigure it to not run web2py and then do a 
 graceful restart.

 For this kind of testing (not production), it might be easier to run 
 web2py directly and use Rocket.





Re: [web2py] Re: Python Performance Issue

2014-03-14 Thread Mariano Reingart
Is web2py bytecode compiled?
Do .pyo or .pyc files appear in the gluon folder?
Maybe on your production server there is some permission/date issue and
.pyc files cannot be saved, so they are compiled on each run (that takes
time).

Just an idea

Best regards,



Mariano Reingart
http://www.sistemasagiles.com.ar
http://reingart.blogspot.com


On Fri, Mar 14, 2014 at 11:33 PM, horridohobbyist horrido.hobb...@gmail.com
 wrote:

 Astonishingly, I've discovered something else...

 When I ran the test in my newly-created VM, I only ran it once. Later, I
 noticed I wasn't getting the 30x ratio anymore; I was only getting 2x, like
 Niphlod did.

 Luckily, I had taken a snapshot of the VM before running the test, so I
 reverted back to it. This time, I ran the test repeatedly. Here are the
 results:

 elapsed time: 0.0515658855438
 elapsed time: 0.00306177139282
 elapsed time: 0.00300478935242
 elapsed time: 0.00301694869995
 elapsed time: 0.00319504737854

 Note that it is only *the first run* that shows the 30x ratio.
 Thereafter, I'm only getting the 2x ratio. *This pattern is repeatable*.

 I wish I could get 2x ratio on my production server; I could live with
 that. However, I'm still getting 30x. For some reason, it's not settling
 down to 2x like in my VM. Go figure.


 On Friday, 14 March 2014 15:21:12 UTC-4, horridohobbyist wrote:

 Okay, I have some excellent news to report. Well, excellent for me, not
 so much for you guys...

 I can reproduce the problem on another system. Here's what I did:

 My Mac has Parallels installed. I created a new VM, downloaded Ubuntu
 Server 12.04, and installed it. Then I updated it with the latest patches.

 Then, following the recipe from the Book for One step production
 deployment, I installed web2py 2.9.4.

 I then ran the same Welcome vs command line test. The result?

 Welcome:
 elapsed time: 0.0491468906403

 command line:
 elapsed time: 0.00160121917725

 Again, the command line is 30.6 times faster!!!

 What more evidence do you need? Sorry to say, but there is something
 wrong with web2py.


 On Friday, 14 March 2014 14:44:58 UTC-4, Jonathan Lundell wrote:

 On 14 Mar 2014, at 11:28 AM, horridohobbyist horrido...@gmail.com
 wrote:

 First, I don't know how to use the profiler.

 Second, for something as trivially simple as the Welcome app with the
 calculation loop, what is the profiler going to tell us? That simple
 multiplication and division are too slow? That the for loop is somehow
 broken?

 Should I try to profile the entirety of the web2py framework?


 I doubt that the profile would tell you much about the loop itself, but
 it might show work going on elsewhere, which might be instructive.


 Clearly, the Welcome app is pointing to a fundamental issue with my
 Ubuntu/Apache2/Python/web2py installation (assuming no one else can
 replicate the problem). As the Linux server is a production system, I am
 limited to how much tinkering I can actually do on it.

 BTW, how does one actually shutdown web2py once it's installed and
 running via Apache?


 It's running as a wsgi process under Apache, so you really need to shut
 down Apache, or at least reconfigure it to not run web2py and then do a
 graceful restart.

 For this kind of testing (not production), it might be easier to run
 web2py directly and use Rocket.



Re: [web2py] Re: Python Performance Issue

2014-03-14 Thread Jonathan Lundell
On 14 Mar 2014, at 9:13 PM, Mariano Reingart reing...@gmail.com wrote:
 Is web2py bytecode compiled?
 .pyo or .pyc appears in gluon folder?
 Maybe in tour production server there is some permission/date issue and .pyc 
 files cannot be saved, so they are compiled on each run (that takes time).
 

But the compilation is O(1), and shouldn't be affected by the iteration count.
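
Easy to check either way, though (quick check, run from the web2py directory; in 
Python 2, __file__ ends in .pyc when the module was loaded from cached bytecode):

import gluon.main
print gluon.main.__file__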
