Something very strange is going on. After I've run the Welcome test where
the results are consistently fast (i.e., ~1.6 seconds), if I wait an hour or
so and run the test again, I get something like the following:
Begin...
Elapsed time: 97.1873888969
Percentage fill: 41.9664268585
Begin...
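For anyone trying to reproduce this, here is a minimal sketch of the kind of timing harness that would produce output like the above. The actual Welcome-test code isn't shown in this thread, so the workload (`do_work`) and the "percentage fill" computation are assumptions:

```python
import time

def do_work(n=200000):
    """Hypothetical CPU-bound stand-in for the real Welcome-test workload."""
    filled = 0
    for i in range(n):
        if i % 7:  # arbitrary condition, just to produce a "fill" ratio
            filled += 1
    return filled, n

print("Begin...")
start = time.time()
filled, total = do_work()
elapsed = time.time() - start
print("Elapsed time:", elapsed)
print("Percentage fill:", 100.0 * filled / total)
```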
Scratch my solution. It's not correct. My test results are all over the
place. You don't even have to wait an hour. Within the span of 15 minutes,
I've gone from fast, fast, fast, fast, fast, fast to super-slow (90+
seconds), super-slow to slow, slow, slow, slow. The variability seems to be
Have you checked memory consumption?
On Saturday, 22 March 2014 10:15:59 UTC-5, horridohobbyist wrote:
Scratch my solution. It's not correct. My test results are all over the
place. You don't even have to wait an hour. Within the span of 15 minutes,
I've gone from fast, fast, fast, fast,
Well, according to the 'free' command, even when I'm getting these
slowdowns, I'm nowhere close to the memory limits:
             total       used       free     shared    buffers     cached
Mem:       3925244     392900    3532344          0      23608     123856
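When hunting for slowdowns it can help to log memory alongside each test run rather than eyeballing `free` once. A minimal sketch that parses `free`-style output (a hard-coded sample is used here so it runs anywhere; on the server you would feed it the live command output instead):

```python
SAMPLE = """\
             total       used       free     shared    buffers     cached
Mem:       3925244     392900    3532344          0      23608     123856
"""

def parse_free(text):
    """Return the Mem: row of `free` output as a dict of ints (KiB)."""
    header, mem_row = None, None
    for line in text.splitlines():
        if line.startswith("Mem:"):
            mem_row = line.split()[1:]
        elif "total" in line:
            header = line.split()
    return dict(zip(header, (int(v) for v in mem_row)))

mem = parse_free(SAMPLE)
print("free KiB:", mem["free"], "of", mem["total"])
```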
Like I said, my Linux server
I'm considering delving into DTrace to find out what's going on, but any
such instrumentation is apparently very problematic in Linux (eg, poor
support, poor documentation, etc.). Is there any other way to find out what
the hell is going on?
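Short of DTrace/SystemTap, plain `cProfile` from the standard library will often localize a slowdown well enough. A sketch profiling an arbitrary placeholder function (you would substitute the real request handler or test body):

```python
import cProfile
import io
import pstats

def suspect():
    """Placeholder for the code path that intermittently runs slow."""
    return sum(i * i for i in range(100000))

profiler = cProfile.Profile()
profiler.enable()
result = suspect()
profiler.disable()

# Dump the five most expensive entries, sorted by cumulative time.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
print(report)
```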
On Saturday, 22 March 2014 16:24:20 UTC-4,
I don't understand why the Flask version of the Welcome test doesn't
exhibit this slowdown under Apache. It's executing the same application
code. It's configured with the same processes=1 and threads=1 WSGI
parameters. It's running the same Python interpreter (and presumably using
the same
processes=1 and threads=30 also seems to solve the performance problem.
BTW, I'm having a dickens of a time reproducing the problem in my servers
(either the actual server or the VM). I have not been able to discover how
to reset the state of my tests, so I have to blindly go around trying to
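One documented way to reset a mod_wsgi daemon process without a full Apache restart is to touch the WSGI script file: in daemon mode, mod_wsgi reloads the daemon process on the next request after the file's timestamp changes. The path here is an assumption; substitute your actual web2py `wsgihandler.py` location:

```shell
# Assumed location of the WSGI entry point; adjust to your install.
WSGI_SCRIPT=${WSGI_SCRIPT:-./wsgihandler.py}

# Touching the script makes mod_wsgi (daemon mode) restart the daemon
# process on the next request, resetting interpreter state.
touch "$WSGI_SCRIPT"
ls -l "$WSGI_SCRIPT"
```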
I think it would be clearer for other users if you could also provide the
configuration and the procedure for what you are doing.
best regards,
stifan
--
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
If threads=0 does not work, use threads=1 to keep mod_wsgi happy. If
you omit threads, it defaults to 15.
2014-03-19 4:34 GMT+01:00 horridohobbyist horrido.hobb...@gmail.com:
threads=0 is no good–Apache restart upchucks on this.
BTW, I haven't experimented with the threads value. Might this
Did you explicitly set the number of threads as well? By default you get 15
threads per process. The documentation implies that this is a hard limit,
but I'm not sure.
Maybe you have simply found a bottleneck in threads. Did you also try
increasing the number of threads instead of adding more
Multi-threaded Apache is supposed to be faster than multi-process Apache
under real load (i.e. multiple users) because starting processes is expensive
in time and memory.
IMHO under Linux the difference is really negligible. The popularity of
threads rose in the mid-'90s because a very popular OS
Yes, processes=3 and threads=1.
I tried processes=1 and threads=3, and performance was still 10x worse.
So I guess that answers my question: the threads parameter is not helpful.
On Wednesday, 19 March 2014 05:24:01 UTC-4, Tim Richardson wrote:
Did you explicitly set the number of threads as
In 2007, I wrote my first web application using Smalltalk/Seaside. I chose
Seaside because it was a very easy-to-learn, easy-to-program,
easy-to-deploy, highly productive, self-contained all-in-one web framework.
(It still is, today.) Unfortunately, web2py hadn't been born yet, but
clearly the
Try threads = 30 or 50 or 100; that would be interesting.
I took the shipping code that I ran in Flask (without Apache) and adapted
it to run under Apache as a Flask app. That way, I'm comparing apples to
apples. I'm comparing the performance of the shipping code between Flask
and web2py.
Below, I've included the 'default' file from
WSGIDaemonProcess hello user=www-data group=www-data threads=5
with web2py try the following instead:
WSGIDaemonProcess hello user=www-data group=www-data processes=<number of cores + 1> threads=<0 or 1>
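Spelled out for a quad-core box, that suggestion would look like the following (the daemon name and user/group are carried over from the example above; whether you also need a matching WSGIProcessGroup depends on your existing config):

```apache
# 4 cores + 1 = 5 daemon processes, single-threaded to sidestep the GIL
WSGIDaemonProcess hello user=www-data group=www-data processes=5 threads=1
WSGIProcessGroup hello
```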
If it's faster, then the GIL must be the cause. Flask by default has
far fewer features.
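The GIL hypothesis is easy to sanity-check outside Apache: CPU-bound work gains nothing from extra Python threads, because only one thread executes bytecode at a time. A small sketch (timings are printed for inspection rather than asserted, since they vary by machine):

```python
import threading
import time

def burn(n, out, idx):
    """CPU-bound loop; stores its result so we can verify correctness."""
    total = 0
    for i in range(n):
        total += i * i
    out[idx] = total

N = 500000

# Serial: run the workload twice in the main thread.
start = time.time()
serial = [None, None]
burn(N, serial, 0)
burn(N, serial, 1)
serial_time = time.time() - start

# Threaded: the same two workloads in two threads; the GIL serializes them,
# so on CPython the wall time is typically no better than the serial run.
start = time.time()
threaded = [None, None]
threads = [threading.Thread(target=burn, args=(N, threaded, i)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded_time = time.time() - start

print("serial:   %.3fs" % serial_time)
print("threaded: %.3fs" % threaded_time)
```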
Done. With processes=3, the 10x discrepancy is eliminated! (And this is in
a Linux VM configured for 1 CPU.)
On Tuesday, 18 March 2014 16:26:24 UTC-4, Michele Comitini wrote:
WSGIDaemonProcess hello user=www-data group=www-data threads=5
with web2py try the following instead:
Thank you for all your tests. You should write a summary of your results
with recommendations for Apache users.
On Tuesday, 18 March 2014 19:44:29 UTC-5, horridohobbyist wrote:
Done. With processes=3, the 10x discrepancy is eliminated! (And this is in
a Linux VM configured for 1 CPU.)
On
I shall do that. Thanks.
With the knowledge about processes=, I've tuned my actual Linux server to
eliminate the 10x slowdown. As it turns out, for my 2.4GHz quad-core Xeon
with 4GB RAM, processes=2 works best. I found that any other value (3, 4,
5) gave very inconsistent results–sometimes I
threads=0 is no good–Apache restart upchucks on this.
BTW, I haven't experimented with the threads value. Might this also improve
performance (with respect to GIL)?
Also, I was wondering. Is the processes= solution related to whether you
are using the prefork MPM or the worker MPM? I know that