Robert Leftwich wrote:
> Ian Bicking wrote:
>> Cliff Wells wrote:
>>> I'll probably do
>>> some better tests later, but my initial suspicion is that the default
>>> "paster serve" isn't as fast as CherryPy (both are proxied to via
>>> Nginx).
>> In my simplistic tests CherryPy 3 is about 50% faster than 
>> paste.httpserver.  That's keeping everything else equivalent, and not 
>> including any framework.  I don't know how other factors would affect that.
> 
> I'm going to be doing some performance tests on my setup in the next few 
> days, 
> but one thing I've noticed in preliminary playing is that using fastcgi/flup 
> with nginx is noticeably faster than a straight proxy.

FastCGI doesn't seem substantially easier to parse than HTTP, so I'm not 
sure why that'd be.  Maybe flup is just faster than paste.httpserver. 
Or maybe there's something different about the way connections are 
handled (are FastCGI connections persistent in any way?).

>> Pylons has some performance gotchas if you use threadlocals in certain 
>> ways.  
> 
> Care to elaborate?

Every time you access an attribute on c or g (or pylons.request/response) 
there is a threadlocal lookup, which adds some overhead.  If you do it in 
a loop (e.g., "for c.name in big_list") it can become a big hit.  If you 
have just a handful of such lookups it's not a big deal.
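To make the loop case concrete, here is a minimal sketch.  The Proxy class below is a hypothetical stand-in for Pylons' threadlocal-backed objects (Pylons actually uses StackedObjectProxy, which is not reproduced here); it just routes every attribute access through a threading.local, so each iteration of the "slow" loop pays that lookup cost, while the "fast" version hoists the work into a plain local variable and touches the proxy once:

```python
import threading
import timeit

# Per-thread storage standing in for Pylons' request-local state.
_local = threading.local()

class Proxy:
    """Hypothetical stand-in for a threadlocal proxy like Pylons' c.

    Every attribute get/set goes through a threading.local lookup,
    mimicking the per-access overhead described above.
    """
    def __setattr__(self, attr, value):
        setattr(_local, attr, value)

    def __getattr__(self, attr):
        return getattr(_local, attr)

c = Proxy()
big_list = list(range(100_000))

def slow():
    # A threadlocal set on every single iteration.
    for c.name in big_list:
        pass

def fast():
    # Hoist into a plain local; assign to the proxy once at the end.
    name = None
    for name in big_list:
        pass
    c.name = name

t_slow = timeit.timeit(slow, number=10)
t_fast = timeit.timeit(fast, number=10)
print("per-iteration proxy access: %.3fs" % t_slow)
print("hoisted into a local:       %.3fs" % t_fast)
```

Both versions leave c.name bound to the last element; the difference is purely how many times the threadlocal machinery runs.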

-- 
Ian Bicking | [EMAIL PROTECTED] | http://blog.ianbicking.org

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"pylons-discuss" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/pylons-discuss?hl=en
-~----------~----~----~----~------~----~------~--~---