On Jan 18, 1:38 pm, GSP <[EMAIL PROTECTED]> wrote:
> I stumbled upon a thread on the django group recently where questions
> about Turbogears performance were raised:
>
> http://groups.google.ca/group/django-users/browse_thread/thread/ab111...
>
> "TurboGears would be a terrible choice. Python does not do well on threads
> and has been known to lock up solid when executing a fork() out of a
> thread. Also, unless you feel your webserver should use very little of
> your computers resources, the threaded approach of TurboGears may not
> give you what you want. Python folk made a design decision way back to
> implement a Global Interpreter Lock that means one thread runs at a time
> in any process, even if you have 100 threads and 32 processor cores, one
> thread will be running on one processor. So while TurboGears has a very
> short learning curve, it is not really for production performance."
>
> I have a feeling that this poster is perhaps misinformed but I don't
> have enough Turbogears specific knowledge to comment. Can anyone else
> offer an informed opinion about this poster's comment?
Yes, what was posted was in part FUD, inasmuch as deployment options exist which counter the issue. One could just as easily deploy Django in a single process and it would have exactly the same issues they say are a problem with TurboGears.

The reason they claim TurboGears is no good is that when using the internal CherryPy web server, i.e. its default configuration, everything runs in a single process, and as a result of the Python GIL one can't properly utilise a multi-core or multi-CPU system.

If using the CherryPy server inside of TurboGears, the only option is to run multiple instances of the TurboGears application as separate processes and then load balance across them using mod_proxy. The only problem with this is that older versions of TurboGears which use SQLObject have problems when run in a multi-process configuration, due to caching issues within the database layer: different processes may see different copies of cached data. As a result you can't actually do that, and a single process is the only option.

In newer TurboGears versions, the multi-process problem goes away if using SQLAlchemy instead of SQLObject, as it doesn't have the caching issue. Thus, when SQLAlchemy is used, you can do the mod_proxy load balancing thing. Alternatively, you can use mod_wsgi and thereby get parity with Django, in the sense of being able to deploy it in the same Apache configuration.

The reason they probably don't believe that Django has the problem is that their recommended configuration is Apache on UNIX with the prefork MPM, so their preferred configuration is multi-process to begin with. This doesn't mean that Django couldn't be run on top of the CherryPy WSGI server or the Paste server in a single process, or even under Apache on Windows, at which point it would also be affected by the same problems they are attributing to TurboGears.

So, the FUD is the suggestion that it is the TurboGears application itself that is the problem, when it isn't really. Instead it is how TurboGears is deployed.
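For what it's worth, the mod_proxy load balancing approach amounts to something like the following Apache fragment. This is only a sketch: the instance ports (8080-8082) and the balancer name are made up, it assumes you have started one TurboGears/CherryPy process per port yourself, and it requires mod_proxy and mod_proxy_balancer to be loaded.

```apache
# Sketch only: load-balance several single-process TurboGears/CherryPy
# instances behind Apache. Ports and balancer name are illustrative.

<Proxy balancer://tgcluster>
    # One member per TurboGears process listening on its own port.
    BalancerMember http://127.0.0.1:8080
    BalancerMember http://127.0.0.1:8081
    BalancerMember http://127.0.0.1:8082
</Proxy>

ProxyPass        / balancer://tgcluster/
ProxyPassReverse / balancer://tgcluster/
```

Remember though that, per the above, this is only safe once you are on SQLAlchemy; with SQLObject's caching the separate processes can see stale data.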
It just so happens that TurboGears' standard deployment option does have the problems described, whereas Django's recommended deployment option doesn't.

Note that I don't use TurboGears, but understand what I have said about database caching to be correct. Someone please correct me if the situation has changed since I last investigated those issues through the list.

In terms of deployment options and why the Python GIL is not an issue when using Apache, read:

http://blog.dscpl.com.au/2007/09/parallel-python-discussion-and-modwsgi.html
http://blog.dscpl.com.au/2007/07/web-hosting-landscape-and-modwsgi.html

Graham

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups "TurboGears" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/turbogears?hl=en
-~----------~----~----~----~------~----~------~--~---

