Whether your application will work fine in a single process depends on the application, and you will only know by doing proper, broad load testing across the range of functionality the application provides.
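The kind of check Graham describes can be sketched in a few lines — though only as an illustration: this uses modern stdlib Python (anachronistic for the TG1 era), and the trivial `app` and `ThreadingWSGIServer` below are stand-ins of my own, not anything from this thread. The idea is simply to stand up one multithreaded process and hit it with concurrent requests:

```python
# Minimal sketch: serve a WSGI app from a single multithreaded process
# and fire concurrent requests at it, as a crude smoke-level load test.
import threading
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from socketserver import ThreadingMixIn
from wsgiref.simple_server import make_server, WSGIServer

class ThreadingWSGIServer(ThreadingMixIn, WSGIServer):
    daemon_threads = True  # worker threads won't block interpreter exit

def app(environ, start_response):
    # stand-in for a real TurboGears/CherryPy application
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

# port 0 asks the OS for a free ephemeral port
server = make_server("127.0.0.1", 0, app, server_class=ThreadingWSGIServer)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

def fetch(_):
    with urllib.request.urlopen("http://127.0.0.1:%d/" % port) as resp:
        return resp.read()

# 50 requests from 8 concurrent client threads
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(fetch, range(50)))

server.shutdown()
print(len(results), all(body == b"ok" for body in results))  # prints "50 True"
```

A real test would of course exercise the application's actual functionality and measure latency under sustained load, not just count successful responses.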
Many people just get too carried away up front with trying to put together a super-scalable solution when, a lot of the time, their applications will never see enough load for it to be an issue. So, start out with a single process, whether that be mod_proxy to TG/CherryPy or a mod_wsgi daemon with a single multithreaded process, and do the necessary testing to scope out whether it can handle the load or not.

Graham

On Jan 18, 5:19 pm, GSP <[EMAIL PROTECTED]> wrote:
> Thanks for clearing that up. I am, however, in the process of converting
> an existing app over to TG1, and this existing app uses SQLObject. It
> isn't a massive app (the domain model comprises approximately 60
> classes), but I don't think it would be feasible to switch to SA (I am
> not super familiar with SA, so perhaps a switch wouldn't be as time
> consuming as I think). Given that fact, I am a bit concerned by Graham's
> comments regarding performance with TG and SO.
>
> I am aware of the caching issue with SQLObject, having dealt with that
> issue because the existing app is based on mod_python (and the multiple
> processes lead to stale data). Since a multi-process deployment is not
> a reliable configuration in this scenario, are there any suggestions
> about how to approach this situation (if in fact TG1/SO is a
> suboptimal combination)?
>
> Thanks in advance!
>
> On Jan 17, 7:44 pm, "Mark Ramm" <[EMAIL PROTECTED]> wrote:
> > On Jan 18, 1:38 pm, GSP <[EMAIL PROTECTED]> wrote:
> > > I stumbled upon a thread on the django group recently where
> > > questions about TurboGears performance were raised:
> > >
> > > http://groups.google.ca/group/django-users/browse_thread/thread/ab111...
> > >
> > > "TurboGears would be a terrible choice. Python does not do well on
> > > threads and has been known to lock up solid when executing a fork()
> > > out of a thread."
> >
> > TurboGears, even TurboGears 1, works very well in a multi-process
> > deployment with load balancing between processes behind a reverse
> > proxy server.
> > Multithreaded Python web servers do perform better than
> > single-threaded servers, particularly per unit of memory consumed,
> > but multi-process configurations allow you to scale across multiple
> > processors better. In my experience a mix of the two provides the
> > best of both worlds, and that's exactly the kind of deployment
> > scenario that high-traffic TurboGears sites use.
> >
> > If you want to scale, I think TurboGears + SQLAlchemy will do very
> > well, and we are definitely working to add lots of scalability
> > features to TG2, so that it should do even better.
> >
> > Django works, and is great. But whoever said TurboGears will not
> > scale has clearly not tried. We may need to highlight multi-process
> > deployments more in our documentation, but since the single-process
> > way can handle lots and lots of scenarios, we've made that the first
> > thing we show people because it is simple to set up.
> >
> > --Mark Ramm

You received this message because you are subscribed to the Google Groups "TurboGears" group. To post to this group, send email to [email protected]. For more options, visit this group at http://groups.google.com/group/turbogears?hl=en
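Both deployment shapes discussed in the thread can be sketched as Apache configuration. This is an illustration only, under assumptions not taken from the thread: the daemon name `myapp`, the paths, the ports, and the process/thread counts are all invented placeholders.

```apache
# Option 1 (Graham's starting point): mod_wsgi daemon mode with a single
# multithreaded process. Names and paths are hypothetical.
WSGIDaemonProcess myapp threads=15
WSGIProcessGroup myapp
WSGIScriptAlias / /srv/myapp/app.wsgi

# If load testing shows one process is not enough, moving to the mixed
# multi-process/multithreaded deployment Mark describes is a small change:
#   WSGIDaemonProcess myapp processes=4 threads=15

# Option 2 (Mark's multi-process deployment): mod_proxy load balancing
# across several standalone CherryPy/TurboGears processes, each listening
# on its own local port behind the reverse proxy.
<Proxy balancer://tgcluster>
    BalancerMember http://127.0.0.1:8080
    BalancerMember http://127.0.0.1:8081
    BalancerMember http://127.0.0.1:8082
</Proxy>
ProxyPass / balancer://tgcluster/
```

Note that Option 2 is exactly the configuration where GSP's SQLObject concern applies: each backend process keeps its own in-memory cache, so stale reads are possible unless caching is disabled or the processes otherwise coordinate.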

