On May 25, 3:06 pm, Tim <[email protected]> wrote:
> The webpy hello world application is like this:
>
> import web
>
> urls = (
>     '/(.*)', 'hello'
> )
> app = web.application(urls, globals())
>
> class hello:
>     def GET(self, *args):
>         return 'Hello, World!'
>
> if __name__ == "__main__":
>     app.run()
>
> And it served <300 requests per second under lighttpd.
>
> With "max-procs" => 2 set on the 2-core box, the performance could be
> improved to about 500 requests per second though. The other thing is
> that I would prefer to run the script as a single process while
> increasing webpy's thread limit. Do you know how to do that?
I don't know enough about web.py to say. It perhaps hides the ability
to override the flup settings, so you probably need to create a WSGI
application object and invoke flup yourself to override it. At a
guess, start with:
import web

urls = (
    '/(.*)', 'hello'
)

class hello:
    def GET(self, name):
        i = web.input(times=1)
        if not name: name = 'world'
        for c in xrange(int(i.times)): print 'Hello,', name + '!'

application = web.application(urls, globals()).wsgifunc()

if __name__ == "__main__":
    import flup.server.fcgi as flups
    flups.WSGIServer(application, multiplexed=True,
                     bindAddress=('localhost', 8000)).run()
Then look at what additional options the flup WSGIServer object takes.
This is presuming that web.py doesn't give you a way of passing such
options itself.
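As for the thread limit specifically: my understanding is that flup's threaded fcgi server hands extra keyword arguments through to its internal thread pool, so something along these lines may work. This is an untested sketch, and the `minSpare`/`maxSpare`/`maxThreads` parameter names are assumptions that should be checked against the flup version you have installed:

```python
# Untested sketch: plain WSGI app under flup's threaded fcgi server with
# an enlarged thread pool. The pool-sizing keyword arguments (minSpare,
# maxSpare, maxThreads) are assumptions -- check flup's ThreadPool
# source for the exact names in your version.

def application(environ, start_response):
    # Minimal WSGI callable: one plain-text response.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['Hello, World!']

if __name__ == '__main__':
    import flup.server.fcgi as flups
    flups.WSGIServer(application,
                     bindAddress=('localhost', 8000),
                     minSpare=4,       # idle threads kept ready
                     maxSpare=16,      # cap on idle threads
                     maxThreads=200    # overall concurrency limit
                     ).run()
```

You could then re-run your 120+ concurrent user test (e.g. `ab -n 10000 -c 150` against the lighttpd front end) and see whether the "backend is overloaded" errors go away.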
Graham
> On May 25, 12:51 pm, Graham Dumpleton <[email protected]>
> wrote:
>
> > On May 25, 2:46 pm, 柯甫敬 <[email protected]> wrote:
>
> > > Again, I tested
>
> > > def application(environ, start_response):
> > >     status = '200 OK'
> > >     output = 'Hello World!'
> > >     response_headers = [('Content-type', 'text/plain'),
> > >                         ('Content-Length', str(len(output)))]
> > >     start_response(status, response_headers)
> > >     return [output]
>
> > > But not webpy, and got 1,500 requests per second under Apache/mod_wsgi.
> > > Does that make sense to you?
>
> > If your web.py hello world result is similar to the plain WSGI hello
> > world result, then that would confirm that the overhead of web.py for
> > simple stuff is quite small.
>
> > One of the things web.py is known for is being lightweight, although
> > I am surprised it would be so close to the raw hello world. :-)
>
> > Graham
>
> > > 2009/5/25 Graham Dumpleton <[email protected]>
>
> > > > On May 25, 2:17 pm, Tim <[email protected]> wrote:
> > > > > As I talked in the last posts, the hello world application was run
> > > > > under apache/mod_wsgi. "I just tested with mod_wsgi under apache and
> > > > > the requests/second is about 1,500 with a hello world application. "
>
> > > > > Please refer to the 3rd post for the details.
>
> > > > I originally suggested a plain WSGI hello world, not a web.py hello
> > > > world. There is a difference, as plain WSGI hello world doesn't depend
> > > > on web.py. If what you did was a web.py hello world test then fine.
> > > > One reason for suggesting a plain WSGI hello world test was that it
> > > > would also show you what overhead there was in web.py dispatching.
> > > > Depending on the complexity of your application, if the difference
> > > > between the web.py and plain WSGI hello worlds is marked, it might
> > > > show you that using WSGI directly may yield better performance. For
> > > > something as simple as serving advertisements, where URL routing is
> > > > probably not needed, there may not even be a need for web.py.
>
> > > > So, if you haven't done a plain WSGI hello world, I would be curious
> > > > to see the results of that relative to web.py. The WSGI hello world
> > > > program is below.
>
> > > > def application(environ, start_response):
> > > >     status = '200 OK'
> > > >     output = 'Hello World!'
> > > >
> > > >     response_headers = [('Content-type', 'text/plain'),
> > > >                         ('Content-Length', str(len(output)))]
> > > >     start_response(status, response_headers)
> > > >
> > > >     return [output]
>
> > > > Graham
>
> > > > > On May 25, 12:02 pm, Graham Dumpleton <[email protected]>
> > > > > wrote:
>
> > > > > > On May 25, 1:41 pm, 柯甫敬 <[email protected]> wrote:
>
> > > > > > > For the performance bottlenecks on my application side, I think
> > > > > > > I could have a couple of ways to optimize. You could regard it
> > > > > > > as a simple hello world application for now. My real question
> > > > > > > actually was "is it possible for lighttpd/fastcgi/webpy to
> > > > > > > support up to 1,000 requests/second for a simple hello world
> > > > > > > application"? I think increasing the threads limit in webpy
> > > > > > > could help, but I didn't find where to set or change it in
> > > > > > > webpy. Can anyone help on this?
>
> > > > > > What does a hello world program using web.py on Apache/mod_wsgi
> > > > > > yield?
> > > > > > If you can't even get 1,000 requests per second on Apache/mod_wsgi,
> > > > > > which is already proving to yield better performance than your fcgi
> > > > > > setup, then it is unlikely that, even if fcgi is tuned, you will
> > > > > > achieve anything better.
>
> > > > > > Even though lighttpd or nginx may be faster than Apache for serving
> > > > > > static files, flup when used with fcgi adds a fair bit more
> > > > > > overhead. The end result is that, more often than not, a fcgi/flup
> > > > > > application will run slower than Apache/mod_wsgi. That is why
> > > > > > Apache/mod_wsgi is a good reference here, as it is close to the
> > > > > > maximum you can expect to get without starting to load balance
> > > > > > across multiple boxes etc.
>
> > > > > > Graham
>
> > > > > > > Thanks in advance.
> > > > > > > 2009/5/25 Tim <[email protected]>
>
> > > > > > > > Thanks Graham.
> > > > > > > > My application is actually an ad serving application. I just
> > > > > > > > tested with mod_wsgi under apache and the requests/second
> > > > > > > > figure is about 1,500 with a hello world application.
> > > > > > > > It looks like a problem with fcgi then. What do you think?
> > > > > > > > 2009/5/25 Graham Dumpleton <[email protected]>
>
> > > > > > > >> On May 25, 12:08 pm, Tim <[email protected]> wrote:
> > > > > > > >> > I'm recently porting an apache module to be a
> > > > > > > >> > webpy/fcgi/lighttpd based application, but the performance
> > > > > > > >> > doesn't look good.
>
> > > > > > > >> > With the apache module written in C, it was able to process
> > > > > > > >> > up to 2,000 requests per second; however, when I ran a
> > > > > > > >> > simple helloworld application with webpy/fcgi/lighttpd, the
> > > > > > > >> > server was processing about 300 requests per second.
>
> > > > > > > >> > I was following http://webpy.org/install for the
> > > > > > > >> > installation and default lighttpd configurations. I'm new to
> > > > > > > >> > lighttpd and webpy, so I'm wondering if there are any
> > > > > > > >> > configuration options in lighttpd/webpy to tune the
> > > > > > > >> > performance to make the requests/second more than 1,000?
> > > > > > > >> > Also, is there any way to increase the threads limit in
> > > > > > > >> > webpy, since when I tested with more than 120 concurrent
> > > > > > > >> > users I would get "backend is overloaded" errors?
>
> > > > > > > >> What does your application do? Without knowing that, it is
> > > > > > > >> very hard to comment on whether your problems may actually be
> > > > > > > >> in your application, the fastcgi bridge, or your web server.
>
> > > > > > > >> You should perhaps create a simple WSGI hello world program
> > > > > > > >> that doesn't even use web.py and determine how many requests
> > > > > > > >> it can handle. As a comparison, to gauge whether the web
> > > > > > > >> server and/or fastcgi is the issue, you might also run the
> > > > > > > >> WSGI hello world program under Apache/mod_wsgi on the same
> > > > > > > >> system.
>
> > > > > > > >> Graham
You received this message because you are subscribed to the Google Groups
"web.py" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [email protected]
For more options, visit this group at http://groups.google.com/group/webpy?hl=en