BTW: Apache still suffers from the Slowloris attack if not carefully
configured. ATM there are only workarounds to mitigate the issue, not a
definitive solution.
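For reference, the usual mitigation is a request-timeout module rather than a real fix; a hypothetical mod_reqtimeout snippet (the values shown are illustrative, not a definitive solution):

```apache
# Slowloris mitigation sketch: drop connections whose request headers
# or body arrive too slowly. Requires mod_reqtimeout to be enabled.
<IfModule reqtimeout_module>
    RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
</IfModule>
```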
On Tuesday, March 18, 2014 9:46:38 PM UTC+1, Niphlod wrote:
apache isn't fine for static files either.
The "move" to evented-like webservers by practically all tech-savvy people
in need is a good estimate of how much the uber-standard Apache lacks in
easy-to-debug scenarios (I won't even start with the know-how of the
syntax to make it work as you'd
People have found lots of variability in performance with Apache+mod_wsgi.
Performance is very sensitive to memory/etc.
This is because Apache is not async (like nginx) and it either uses threads
or processes. Both have issues with Python. Threads slow you down because
of the GIL. Parallel pro
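To make the threads point concrete, here is a hypothetical stand-alone sketch (not from this thread) of CPU-bound work run sequentially and then in two threads; because of the GIL, the threaded run is typically no faster, and often slower on multi-core machines:

```python
import threading
import time

def burn(n=500000):
    # CPU-bound loop, similar in spirit to the test used in this thread
    x = 0.0
    for i in range(1, n):
        x += (float(i + 10) * (i + 25) + 175.0) / 3.14
    return x

start = time.time()
burn()
burn()
sequential = time.time() - start

start = time.time()
threads = [threading.Thread(target=burn) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.time() - start

# Timings vary by machine, so no expected ratio is claimed here; the
# point is that the threaded run does not get the naive ~2x speedup.
print("sequential: %.3fs  threaded: %.3fs" % (sequential, threaded))
```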
I use Apache. I think that while your results are precise and interesting,
the real-world experience of site visitors is very different. nginx met the
"10K" challenge: i.e., 10,000 simultaneous connections. That's the kind of
load that gives Apache problems. But under lower loads, there are many
other factor
I'm disturbed by the fact that the defaults are "sensible". That suggests
there is no way to improve the performance. A 2x-10x performance hit is
very serious.
I was considering dropping Apache and going with nginx/gunicorn in my Linux
server, but I'm not sure that's a good idea. Apache is a ne
I don't know if this is relevant, but in apache2.conf, there is a
MaxClients parameter for the "prefork" MPM and it's set to 150. This is the
default.
I changed it to 15, but it made no difference in the test.
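For context, the prefork section being described usually looks something like this (the values shown are the common Debian defaults; treat them as illustrative):

```apache
# prefork MPM section of apache2.conf (illustrative defaults)
<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients          150
    MaxRequestsPerChild   0
</IfModule>
```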
On Monday, 17 March 2014 21:15:12 UTC-4, Tim Richardson wrote:
(I am the furthest thing from being an Apache expert as you can find.)
Well, wherever that puts you, I'll be in shouting distance.
I guess this means you are using defaults. The defaults are sensible for
small loads, so I don't think you would get better performance from
tweaking. These def
How or where do I locate the mod_wsgi settings? (I am the furthest thing
from being an Apache expert as you can find.)
Thanks.
On Monday, 17 March 2014 20:20:00 UTC-4, Tim Richardson wrote:
>
> There is no question that the fault lies with Apache.
>
>
Perhaps it is fairer to say the fault lies with mod_wsgi?
What are the mod_wsgi settings in your apache config?
--
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
I don't know if bumping up the number of processors from 1 to 4 makes
sense. I have a dual-core Mac mini. The VM may be doing something funny.
I changed to 2 processors and we're back to the 10x performance
discrepancy. So whether it's 1 or 2 processors makes very little difference.
Apache:Elap
I bumped up the number of processors from 1 to 4. Here are the results:
Apache:Begin...
Apache:Elapsed time: 2.31899785995
Apache:Elapsed time: 6.31404495239
Apache:Percentage fill: 60.0
Apache:Begin...
Apache:Elapsed time: 0.274327039719
Apache:Elapsed time: 0.832695960999
Apache:Percentage fill:
Very interesting. What is the code being benchmarked here?
could you post your apache configuration?
On Monday, 17 March 2014 11:08:53 UTC-5, horridohobbyist wrote:
Anyway, I ran the shipping code Welcome test with both Apache2 and
Gunicorn. Here are the results:
Apache:Begin...
Apache:Elapsed time: 0.28248500824
Apache:Elapsed time: 0.805250167847
Apache:Percentage fill: 60.0
Apache:Begin...
Apache:Elapsed time: 0.284092903137
Apache:Elapsed time: 0.7975358
Apparently the number of cores is adjustable. Try this link.
http://download.parallels.com/desktop/v5/docs/en/Parallels_Desktop_Users_Guide/23076.htm
On Monday, March 17, 2014 10:02:13 AM UTC-4, horridohobbyist wrote:
Parallels VM running on a 2.5GHz dual-core Mac mini. I really don't know
what Parallels uses.
On Monday, 17 March 2014 00:05:58 UTC-4, Massimo Di Pierro wrote:
@Massimo, @hb
> python anyserver.py -s gunicorn -i 127.0.0.1 -p 8000
with the above just one worker is started, hence requests are serialized.
2014-03-17 1:03 GMT+01:00 Massimo Di Pierro :
What kind of VM is this? What is the host platform? How many CPU cores? Is
VM using all the cores? The only thing I can think of is the GIL and the
fact that multithreaded code in python gets slower and slower the more
cores I have. On my laptop, with two cores, I do not see any slow down.
Rock
Using gunicorn (Thanks, Massimo), I ran the full web2py Welcome code:
Welcome: elapsed time: 0.0511929988861
Welcome: elapsed time: 0.0024790763855
Welcome: elapsed time: 0.00262713432312
Welcome: elapsed time: 0.00224614143372
Welcome: elapsed time: 0.00218415260315
Welcome: elapsed time: 0.00213
Okay, I did the calculations test in my Linux VM using command line
(fred0), Flask (hello0), and web2py (Welcome).
fred0: elapsed time: 0.00159001350403
fred0: elapsed time: 0.0015709400177
fred0: elapsed time: 0.00156021118164
fred0: elapsed time: 0.0015971660614
fred0: elapsed time: 0.00315
easy_install gunicorn
cd web2py
python anyserver.py -s gunicorn -i 127.0.0.1 -p 8000
Anyway, you need to run a test that does not include "import Package"
first, because imports are definitely treated differently. That must be
tested separately.
Massimo
On Sunday, 16 March 2014 15:31:17 UTC-5, h
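A minimal sketch of what testing the imports separately could look like (the imported package here is a stand-in, not anything named in the thread):

```python
import time

start = time.time()
import json  # stand-in for whatever heavy package the app imports
import_elapsed = time.time() - start

start = time.time()
x = 0.0
for i in range(1, 5000):
    x += (float(i + 10) * (i + 25) + 175.0) / 3.14
compute_elapsed = time.time() - start

# Report the two costs separately so one-off import time is not
# mistaken for per-request overhead.
print("import: %.6fs  compute: %.6fs" % (import_elapsed, compute_elapsed))
```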
web2py comes with anyserver.py
you just do:
python anyserver.py -H
for help. One of the command line options is to run with gunicorn. You can
try tornado, and any other server out there.
On Sunday, 16 March 2014 17:09:40 UTC-5, horridohobbyist wrote:
Failed to find application: 'gluon.main'
2014-03-15 02:23:51 [22339] [INFO] Worker exiting (pid: 22339)
...
Traceback (most recent call last):
File "/usr/local/bin/gunicorn", line 9, in <module>
load_entry_point('gunicorn==18.0', 'console_scripts', 'gunicorn')()
...
gunicorn.errors.HaltServer:
In order to isolate the problem one must take it in steps. This is a good
test but you must first perform this test with the code you proposed before:
def test():
    t = time.time
    start = t()
    x = 0.0
    for i in range(1, 5000):
        x += (float(i+10)*(i+25)+175.0)/3.14
    debug("elapsed time: %s" % (t() - start))
On 16 Mar 2014, at 1:31 PM, horridohobbyist wrote:
You basically need to cd into the directory where you have unzipped
web2py. Then run gunicorn like the following:
gunicorn -w 4 gluon.main:wsgibase
There you have web2py reachable on http://localhost:8000
Which part does not work for you?
2014-03-16 21:31 GMT+01:00 horridohobbyist :
Well, I managed to get *gunicorn* working in a roundabout way. Here are my
findings for the fred.py/hello.py test:
Elapsed time: 0.028
Elapsed time: 0.068
Basically, it's as fast as the command line test!
I'm not sure this tells us much. Is it Apache's fault? Is it web2py's
fault? The test is
gunicorn instructions:
$ pip install gunicorn
$ cd
$ gunicorn -w 4 gluon.main:wsgibase
2014-03-16 14:47 GMT+01:00 horridohobbyist :
I've conducted a test with Flask.
fred.py is the command line program.
hello.py is the Flask program.
default.py is the Welcome controller.
testdata.txt is the test data.
shippackage.py is a required module.
fred.py:
0.024 second
0.067 second
hello.py:
0.029 second
0.073 second
default.py:
0.27
Apache on linux can run WSGI in multi process mode as well as multithreaded
mode according to the docs. This would eliminate the GIL as a factor.
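A sketch of what that multi-process, single-threaded setup might look like in mod_wsgi daemon mode (the process group name, process count, and script path are placeholders, not taken from anyone's actual config in this thread):

```apache
# Hypothetical daemon-mode config: four single-threaded processes, so
# CPU-bound Python code is not serialized by a single GIL.
WSGIDaemonProcess web2py processes=4 threads=1
WSGIProcessGroup web2py
WSGIScriptAlias / /home/www-data/web2py/wsgihandler.py
```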
It is a well-known problem of the Python GIL that multithreaded apps on
multicore CPUs run slower than non-threaded apps.
http://stackoverflow.com/questions/3121109/python-threading-unexpectedly-slower
The solution is not to use threads but non-threaded servers, like gunicorn
or nginx. That is t
On 15 Mar 2014, at 7:45 PM, Massimo Di Pierro wrote:
I believe that Niphlod reproduced the problem with Rocket, i
I'll see what I can do. It will take time for me to learn how to use
another framework.
As for trying a different web server, my (production) Linux server is
intimately reliant on Apache. I'd have to learn how to use another web
server, and then try it in my Linux VM.
On Saturday, 15 March 20
Are you able to replicate the exact task in another web framework, such as
Flask (with the same server setup)?
On Saturday, March 15, 2014 10:34:56 PM UTC-4, horridohobbyist wrote:
Could it be the GIL? web2py is a multi-threaded app. Are the threads
created by the web server doing anything?
What if you use a non-threaded server like gunicorn instead?
On Saturday, 15 March 2014 21:34:56 UTC-5, horridohobbyist wrote:
Well, putting back all my apps hasn't widened the discrepancy. So I don't
know why my previous web2py installation was so slow.
While the Welcome app with the calculations test shows a 2x discrepancy,
the original app that initiated this thread now shows a 13x discrepancy
instead of 100x. That'
Interestingly, now that I've got a fresh install of web2py with only the
Welcome app, my Welcome vs command line test shows a consistent 2x
discrepancy, just as you had observed.
My next step is to gradually add back all the other apps I had in web2py (I
had 8 of them!) and see whether the disc
@mcm: you got me worried. Your test function was clocking a hell of a lot
lower than the original script. But then I found out why: three orders of
magnitude fewer iterations (5000 vs 5). Once that was corrected, you got
the exact same clock times as "my app" (i.e. the function directly in the
controller). I also stri
About the logging:
http://web2py.com/books/default/chapter/29/04/the-core?search=logger#Logging
2014-03-15 2:32 GMT+01:00 horridohobbyist :
On 14 Mar 2014, at 9:13 PM, Mariano Reingart wrote:
But the compil
Is web2py bytecode compiled?
.pyo or .pyc appears in gluon folder?
Maybe in your production server there is some permission/date issue and
.pyc files cannot be saved, so they are compiled on each run (that takes
time).
Just an idea.
Best regards,
Mariano Reingart
http://www.sistemasagiles.com.
Astonishingly, I've discovered something else...
When I ran the test in my newly-created VM, I only ran it once. Later, I
noticed I wasn't getting the 30x ratio anymore; I was only getting 2x, like
Niphlod did.
Luckily, I had taken a snapshot of the VM before running the test, so I
reverted ba
I don't understand logging. How do I examine the log? Where is it??
On Friday, 14 March 2014 18:29:15 UTC-4, Michele Comitini wrote:
>
> Can you try with the following?
>
> note: no DAL, no sessions
>
On Friday, March 14, 2014 10:17:40 PM UTC+1, Jonathan Lundell wrote:
>
> On 14 Mar 2014, at 2:16 PM, Jonathan Lundell wrote:
>
> Setting aside that your 2x is a lot better than HH's, what's been
> bothering me (assuming the effect is real) is: what could possibly be the
> mechanism?
>
>
I'
On 14 Mar 2014, at 2:03 PM, Niphlod wrote:
> So it seems that the web2py shell and the python script behave exactly
> the same (if web2py was introducing complexity it would show right there).
> The same environment executed by rocket or uwsgi gets some gap (roughly 2x
> time).
> uwsgi behaves a litt
Well well, it's been a while since someone started a "web2py is slow(er)"
thread.
Let's start by clearing the field:
ubuntu 12.10 64 bit, python 2.7.3. Original script:
# -*- coding: utf-8 -*-
import time

def test():
    start = time.time()
    x = 0.0
    for i in range(1,5):
Okay, I have some excellent news to report. Well, excellent for me, not so
much for you guys...
I can reproduce the problem on another system. Here's what I did:
My Mac has Parallels installed. I created a new VM, downloaded Ubuntu
Server 12.04, and installed it. Then I updated it with the late
First, I don't know how to use the profiler.
Second, for something as trivially simple as the Welcome app with the
calculation loop, what is the profiler going to tell us? That simple
multiplication and division are too slow? That the for loop is somehow
broken?
Should I try to profile the ent
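For what it's worth, the stdlib profiler can answer that without learning a framework; a minimal sketch profiling just the loop (nothing web2py-specific here):

```python
import cProfile
import io
import pstats

def test():
    x = 0.0
    for i in range(1, 5000):
        x += (float(i + 10) * (i + 25) + 175.0) / 3.14
    return x

pr = cProfile.Profile()
pr.enable()
test()
pr.disable()

# Print the five most expensive entries by cumulative time
buf = io.StringIO()
pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
print(report)
```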
Please try to profile as suggested; we need more info.
2014-03-14 18:18 GMT+01:00 horridohobbyist :
> I originally installed web2py according to the Book. This was several years
> ago.
>
> I recently upgraded to the latest version, but I had to do it manually, as
> the administrative interface had
On 14 Mar 2014, at 8:59 AM, horridohobbyist wrote:
> I disagree. I'm getting very consistent results with time.time().
Right, I see no problem with the experiment. And the arguments to debug() must
be computed before debug() gets called, so no problem there either.
>
> With a print statement,
xrange makes no difference. And, yes, I've run the Welcome program dozens
of times and the results are very consistent. There is little randomness in
time.time().
My Linux server is a dedicated machine at the datacentre. I have it all to
myself. Not much else is running on it. Apache2, web2py.
On 14 Mar 2014, at 7:39 AM, horridohobbyist wrote:
It's getting to be interesting.
To make the result more robust, I'd try it with a much bigger range, maybe
100x, to be su
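Along the same lines, the stdlib timeit module is a sketch of how to repeat the measurement and look at the spread instead of trusting a single run:

```python
import timeit

def test():
    x = 0.0
    for i in range(1, 5000):
        x += (float(i + 10) * (i + 25) + 175.0) / 3.14
    return x

# Five independent timings of 100 calls each; the minimum is the most
# stable estimate, and the spread shows run-to-run noise.
times = timeit.repeat(test, number=100, repeat=5)
print("min: %.4fs  max: %.4fs" % (min(times), max(times)))
```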
Okay, version 2.6.5 is verified. No difference in the Python version.
So how to explain the performance difference?
On Friday, 14 March 2014 09:36:29 UTC-4, Jonathan Lundell wrote:
On 14 Mar 2014, at 6:28 AM, horridohobbyist wrote:
> I conducted a simple experiment. I took the "Welcome" app, surely the
> simplest you can have (no databases, no concurrency, etc.), and added the
> following to the index page:
>
> def test():
>     start = time.time()
>     x = 0.0
>     for