Hello,

In our new project we want to design the web application from the
beginning to scale easily to 1) multiple cores (on the same server) and
2) multiple separate servers.

Because the infamous GIL ruins Python's multithreading scalability, the
only sensible way to implement both 1) and 2) seems to be running
multiple instances of the server (we plan to use Paster to serve the
app) and putting a separate load balancer in front (possibly some
Apache module... any recommendations?) to distribute requests across
the server instances, which run either on the same machine (to take
advantage of 1) or on separate servers (to implement 2).

Of course, in this setup there's no real difference between 1) and 2),
which is kind of nice.
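To make the front-end part concrete, here's the kind of thing I was imagining with Apache's mod_proxy_balancer (untested sketch; the balancer name, addresses, and ports are all made up for illustration):

```apache
# Hypothetical Apache front end with mod_proxy_balancer.
<Proxy balancer://pylonscluster>
    # Two Paster instances on the same machine (case 1)...
    BalancerMember http://127.0.0.1:5000
    BalancerMember http://127.0.0.1:5001
    # ...and one on a separate server (case 2).
    BalancerMember http://192.168.0.10:5000
</Proxy>

ProxyPass        / balancer://pylonscluster/
ProxyPassReverse / balancer://pylonscluster/
```

Since the session data would be shared by all instances (see below), I don't think we'd even need sticky sessions here.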

However, we started to think about the practical issues this raises in
Pylons. In principle, making this work reliably means distributing the
session data so that all server processes can access every session's
data. For this we plan to store the session data in the database and
reduce the overhead with memcached.
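As far as I can tell, Beaker (the session library Pylons uses) already ships backends for this, so maybe it's just configuration. Something like this in the ini file is what I had in mind (untested, keys from memory):

```ini
# Hypothetical development.ini fragment: keep sessions in memcached
# so every server process sees the same data.
beaker.session.type = ext:memcached
beaker.session.url = 127.0.0.1:11211
beaker.session.lock_dir = %(here)s/data/sessions/lock
```

There also seems to be an ext:database session type, but I don't know of a built-in way to combine the two (DB for durability, memcached as a cache) out of the box.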

How would one implement this reliably in Pylons? The first thing that
pops into my mind is adding code to __call__() of the base controller
to load the session data (from memcached or from the DB). But what
about saving? This would best be implemented in session.save() so that
there's no useless saving (which would invalidate the memcached entry)
if nothing has changed. Is there a way to do this nicely without poking
at the Pylons code itself?
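To be clearer about the load/save logic I mean, here's a rough sketch. The dicts standing in for memcached and the database, and all class and variable names, are made up for illustration; the point is the dirty check in save():

```python
# Hypothetical write-through session store with a dirty check,
# so save() is a no-op (and the cache entry stays valid) when
# nothing has changed.
import pickle

memcached = {}  # stand-in for a real memcache client
database = {}   # stand-in for a DB table keyed by session id

class DistributedSession(object):
    def __init__(self, session_id):
        self.id = session_id
        # Try the cache first, fall back to the database.
        blob = memcached.get(session_id) or database.get(session_id)
        self.data = pickle.loads(blob) if blob else {}
        # Remember what was loaded so save() can skip no-op writes.
        self._loaded = pickle.dumps(self.data)

    def save(self):
        blob = pickle.dumps(self.data)
        if blob == self._loaded:
            return False           # nothing changed: don't touch cache or DB
        database[self.id] = blob   # durable copy
        memcached[self.id] = blob  # refresh the cache entry
        self._loaded = blob
        return True
```

The base controller's __call__() would construct this before dispatching to the action, and save() would run afterwards (or be called explicitly, like the current session.save()). Does something like this fit into Pylons cleanly?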

Any ideas and comments concerning this kind of scalable Pylons
implementation are welcome.

Thanks,

-- 
--PJ

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"pylons-discuss" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/pylons-discuss?hl=en
-~----------~----~----~----~------~----~------~--~---
