Very interesting. What is the code being benchmarked here?
Could you post your Apache configuration?

On Monday, 17 March 2014 11:08:53 UTC-5, horridohobbyist wrote:
>
> Anyway, I ran the shipping code Welcome test with both Apache2 and 
> Gunicorn. Here are the results:
>
> Apache:Begin...
> Apache:Elapsed time: 0.28248500824
> Apache:Elapsed time: 0.805250167847
> Apache:Percentage fill: 60.0
> Apache:Begin...
> Apache:Elapsed time: 0.284092903137
> Apache:Elapsed time: 0.797535896301
> Apache:Percentage fill: 60.0
> Apache:Begin...
> Apache:Elapsed time: 0.266696929932
> Apache:Elapsed time: 0.793596029282
> Apache:Percentage fill: 60.0
> Apache:Begin...
> Apache:Elapsed time: 0.271706104279
> Apache:Elapsed time: 0.770045042038
> Apache:Percentage fill: 60.0
> Apache:Begin...
> Apache:Elapsed time: 0.26541185379
> Apache:Elapsed time: 0.798058986664
> Apache:Percentage fill: 60.0
> Gunicorn:Begin...
> Gunicorn:Elapsed time: 0.0273849964142
> Gunicorn:Elapsed time: 0.0717470645905
> Gunicorn:Percentage fill: 60.0
> Gunicorn:Begin...
> Gunicorn:Elapsed time: 0.0259709358215
> Gunicorn:Elapsed time: 0.0712919235229
> Gunicorn:Percentage fill: 60.0
> Gunicorn:Begin...
> Gunicorn:Elapsed time: 0.0273978710175
> Gunicorn:Elapsed time: 0.0727338790894
> Gunicorn:Percentage fill: 60.0
> Gunicorn:Begin...
> Gunicorn:Elapsed time: 0.0260291099548
> Gunicorn:Elapsed time: 0.0724799633026
> Gunicorn:Percentage fill: 60.0
> Gunicorn:Begin...
> Gunicorn:Elapsed time: 0.0249080657959
> Gunicorn:Elapsed time: 0.0711901187897
> Gunicorn:Percentage fill: 60.0
>
> There is no question that the fault lies with Apache.
>
>
> On Monday, 17 March 2014 00:05:58 UTC-4, Massimo Di Pierro wrote:
>>
>> What kind of VM is this? What is the host platform? How many CPU cores? 
>> Is the VM using all the cores? The only thing I can think of is the GIL 
>> and the fact that multithreaded Python code gets slower and slower the 
>> more cores you have. On my laptop, with two cores, I do not see any 
>> slowdown. Rocket preallocates a thread pool. The rationale is that it 
>> decreases latency. Perhaps you can also try Rocket in this way:
>>
>> web2py.py --minthreads=1 --maxthreads=1
>>
>> This will reduce the number of worker threads to 1. Rocket also runs a 
>> background non-worker thread that monitors worker threads and kills them if 
>> they get stuck.
>>
>> On Sunday, 16 March 2014 20:22:45 UTC-5, horridohobbyist wrote:
>>>
>>> Using gunicorn (Thanks, Massimo), I ran the full web2py Welcome code:
>>>
>>> Welcome: elapsed time: 0.0511929988861
>>> Welcome: elapsed time: 0.0024790763855
>>> Welcome: elapsed time: 0.00262713432312
>>> Welcome: elapsed time: 0.00224614143372
>>> Welcome: elapsed time: 0.00218415260315
>>> Welcome: elapsed time: 0.00213503837585
>>>
>>> Oddly enough, it's slightly faster! But still 37% slower than the 
>>> command line execution.
>>>
>>> I'd really, really, **really** like to know why the shipping code is 10x 
>>> slower...
>>>
>>>
>>> On Sunday, 16 March 2014 21:13:56 UTC-4, horridohobbyist wrote:
>>>>
>>>> Okay, I did the calculations test in my Linux VM using command line 
>>>> (fred0), Flask (hello0), and web2py (Welcome).
>>>>
>>>> fred0: elapsed time: 0.00159001350403
>>>>
>>>> fred0: elapsed time: 0.0015709400177
>>>>
>>>> fred0: elapsed time: 0.00156021118164
>>>>
>>>> fred0: elapsed time: 0.0015971660614
>>>>
>>>> fred0: elapsed time: 0.00315999984741
>>>>
>>>> hello0: elapsed time: 0.00271105766296
>>>>
>>>> hello0: elapsed time: 0.00213503837585
>>>>
>>>> hello0: elapsed time: 0.00195693969727
>>>>
>>>> hello0: elapsed time: 0.00224900245667
>>>>
>>>> hello0: elapsed time: 0.00205492973328
>>>> Welcome: elapsed time: 0.0484869480133
>>>>
>>>> Welcome: elapsed time: 0.00296783447266
>>>>
>>>> Welcome: elapsed time: 0.00293898582458
>>>>
>>>> Welcome: elapsed time: 0.00300216674805
>>>>
>>>> Welcome: elapsed time: 0.00312614440918
>>>>
>>>> The Welcome discrepancy is just under 2x, not nearly as bad as 10x in 
>>>> my shipping code.
>>>>
>>>>
>>>> On Sunday, 16 March 2014 17:52:00 UTC-4, Massimo Di Pierro wrote:
>>>>>
>>>>> In order to isolate the problem one must take it in steps. This is a 
>>>>> good test but you must first perform this test with the code you proposed 
>>>>> before:
>>>>>
>>>>> import time
>>>>>
>>>>> def test():
>>>>>     t = time.time
>>>>>     start = t()
>>>>>     x = 0.0
>>>>>     for i in range(1, 5000):
>>>>>         x += (float(i+10)*(i+25)+175.0)/3.14
>>>>>     debug("elapsed time: " + str(t()-start))  # debug() is the app's logging helper
>>>>>     return
>>>>>
>>>>> I would like to know the results about this test code first.
>>>>>
>>>>> The other code you are using performs an import:
>>>>>
>>>>>     from shippackage import Package
>>>>>
>>>>>
>>>>> Now that is something that is very different between web2py and Flask, 
>>>>> for example. In web2py the import is executed at every request (although 
>>>>> it should be cached by Python), while in Flask it is executed only once. 
>>>>> This should also not cause a performance difference, but it is a 
>>>>> different test than the one above.
>>>>>
>>>>> TL;DR: we should separately test Python code execution (which may be 
>>>>> affected by threading) and import statements (which may be affected by 
>>>>> web2py's custom_import and/or weird module behavior).
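The import-caching half of this can be sanity-checked with a small sketch. Since shippackage isn't available outside the app, json is used here as a hypothetical stand-in module; the point is that re-executing an import statement after the first time is essentially a sys.modules dictionary lookup:

```python
import sys
import time
import timeit

# The first import pays the disk-read and compile cost.
start = time.time()
import json  # hypothetical stand-in for `from shippackage import Package`
first = time.time() - start

# Re-executing the same import statement afterwards only hits the
# sys.modules cache, so even 10000 repetitions stay cheap.
repeat = timeit.timeit('import json', number=10000)

print("first import:          %.6fs" % first)
print("10000 cached imports:  %.6fs total" % repeat)
```

If plain CPython's cached import path is this cheap, a per-request import alone shouldn't explain a 10x gap; any extra cost would have to come from what web2py's custom_import adds on top, which is exactly the separation of tests being proposed.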
>>>>>
>>>>>
>>>>>
>>>>> On Sunday, 16 March 2014 08:47:13 UTC-5, horridohobbyist wrote:
>>>>>>
>>>>>> I've conducted a test with Flask.
>>>>>>
>>>>>> fred.py is the command line program.
>>>>>> hello.py is the Flask program.
>>>>>> default.py is the Welcome controller.
>>>>>> testdata.txt is the test data.
>>>>>> shippackage.py is a required module.
>>>>>>
>>>>>> fred.py:
>>>>>> 0.024 second
>>>>>> 0.067 second
>>>>>>
>>>>>> hello.py:
>>>>>> 0.029 second
>>>>>> 0.073 second
>>>>>>
>>>>>> default.py:
>>>>>> 0.27 second
>>>>>> 0.78 second
>>>>>>
>>>>>> The Flask program is slightly slower than the command line. However, 
>>>>>> the Welcome app is about 10x slower!
>>>>>>
>>>>>> *Web2py is much, much slower than Flask.*
>>>>>>
>>>>>> I conducted the test in a Parallels VM running Ubuntu Server 12.04 
>>>>>> (1GB memory allocated). I have a 2.5GHz dual-core Mac mini with 8GB.
>>>>>>
>>>>>>
>>>>>> I can't quite figure out how to use gunicorn.
>>>>>>
>>>>>>
>>>>>> On Saturday, 15 March 2014 23:41:49 UTC-4, horridohobbyist wrote:
>>>>>>>
>>>>>>> I'll see what I can do. It will take time for me to learn how to use 
>>>>>>> another framework.
>>>>>>>
>>>>>>> As for trying a different web server, my (production) Linux server 
>>>>>>> is intimately reliant on Apache. I'd have to learn how to use another 
>>>>>>> web 
>>>>>>> server, and then try it in my Linux VM.
>>>>>>>
>>>>>>>
>>>>>>> On Saturday, 15 March 2014 22:45:27 UTC-4, Anthony wrote:
>>>>>>>>
>>>>>>>> Are you able to replicate the exact task in another web framework, 
>>>>>>>> such as Flask (with the same server setup)?
>>>>>>>>
>>>>>>>> On Saturday, March 15, 2014 10:34:56 PM UTC-4, horridohobbyist 
>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> Well, putting back all my apps hasn't widened the discrepancy. So 
>>>>>>>>> I don't know why my previous web2py installation was so slow.
>>>>>>>>>
>>>>>>>>> While the Welcome app with the calculations test shows a 2x 
>>>>>>>>> discrepancy, the original app that initiated this thread now shows a 
>>>>>>>>> 13x 
>>>>>>>>> discrepancy instead of 100x. That's certainly an improvement, but 
>>>>>>>>> it's 
>>>>>>>>> still too slow.
>>>>>>>>>
>>>>>>>>> The size of the discrepancy depends on the code that is executed. 
>>>>>>>>> Clearly, what I'm doing in the original app (performing permutations) 
>>>>>>>>> is 
>>>>>>>>> more demanding than mere arithmetical operations. Hence, 13x vs 2x.
>>>>>>>>>
>>>>>>>>> I anxiously await any resolution to this performance issue, 
>>>>>>>>> whether it be in WSGI or in web2py. I'll check in on this thread 
>>>>>>>>> periodically...
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Saturday, 15 March 2014 16:19:12 UTC-4, horridohobbyist wrote:
>>>>>>>>>>
>>>>>>>>>> Interestingly, now that I've got a fresh install of web2py with 
>>>>>>>>>> only the Welcome app, my Welcome vs command line test shows a 
>>>>>>>>>> consistent 2x 
>>>>>>>>>> discrepancy, just as you had observed.
>>>>>>>>>>
>>>>>>>>>> My next step is to gradually add back all the other apps I had in 
>>>>>>>>>> web2py (I had 8 of them!) and see whether the discrepancy grows with 
>>>>>>>>>> the 
>>>>>>>>>> number of apps. That's the theory I'm working on.
>>>>>>>>>>
>>>>>>>>>> Yes, yes, I know, according to the Book, I shouldn't have so many 
>>>>>>>>>> apps installed in web2py. This apparently affects performance. But 
>>>>>>>>>> the 
>>>>>>>>>> truth is, most of those apps are hardly ever executed, so their 
>>>>>>>>>> existence 
>>>>>>>>>> merely represents a static overhead in web2py. In my mind, this 
>>>>>>>>>> shouldn't 
>>>>>>>>>> widen the discrepancy, but you never know.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Saturday, 15 March 2014 11:19:06 UTC-4, Niphlod wrote:
>>>>>>>>>>>
>>>>>>>>>>> @mcm: you got me worried. Your test function was clocking far 
>>>>>>>>>>> lower than the original script. But then I found out why: one 
>>>>>>>>>>> order of magnitude fewer iterations (5000 vs 50000). Once that 
>>>>>>>>>>> was corrected, you got the exact same clock times as "my app" 
>>>>>>>>>>> (i.e. the function directly in the controller). I also stripped 
>>>>>>>>>>> out the logging part, making the app just return the result, and 
>>>>>>>>>>> there was no visible change in the timings.
>>>>>>>>>>>
>>>>>>>>>>> @hh: glad at least we got some ground to hold on to. 
>>>>>>>>>>> @mariano: compiled or not, it doesn't seem to change the mean; a 
>>>>>>>>>>> compiled app just has lower variance. 
>>>>>>>>>>>
>>>>>>>>>>> @all: jlundell definitely hit something. Times are much lower 
>>>>>>>>>>> when threads are limited to 1.
>>>>>>>>>>>
>>>>>>>>>>> BTW: if I change "originalscript.py" to 
>>>>>>>>>>>
>>>>>>>>>>> # -*- coding: utf-8 -*-
>>>>>>>>>>> import time
>>>>>>>>>>> import threading
>>>>>>>>>>>
>>>>>>>>>>> def test():
>>>>>>>>>>>     start = time.time()
>>>>>>>>>>>     x = 0.0
>>>>>>>>>>>     for i in range(1,50000):
>>>>>>>>>>>         x += (float(i+10)*(i+25)+175.0)/3.14
>>>>>>>>>>>     res = str(time.time()-start)
>>>>>>>>>>>     print "elapsed time: "+ res + '\n'
>>>>>>>>>>>
>>>>>>>>>>> if __name__ == '__main__':
>>>>>>>>>>>     t = threading.Thread(target=test)
>>>>>>>>>>>     t.start()
>>>>>>>>>>>     t.join()
>>>>>>>>>>>
>>>>>>>>>>> I'm getting timings really close to the "wsgi environment, 1 
>>>>>>>>>>> thread only" tests, i.e. 
>>>>>>>>>>> 0.23 min, 0.26 max, ~0.24 mean
>>>>>>>>>>>
>>>>>>>>>>>

-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to web2py+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.