On Wed, 2008-04-23 at 12:51 -0500, Devin Torres wrote:
> So we're using Pylons and Python in general for our new company
> platform. 

I'm reading this as meaning intranet, not internet?

> Given this situation, I believe that despite paste making an effort to
> be multithreaded, it would still be advantageous to run a cluster of
> four Pylons instances and proxy to these using nginx.
> 
> Using our setup we'd have four pylons instances being proxied to by
> four nginx worker threads.

Unless you are expecting hundreds of req/s or are serving large static
files, I'd suggest just one or maybe two nginx workers. Nginx will not
be the bottleneck in this situation.  One Nginx worker can easily handle
proxying four Pylons apps (or a hundred, for that matter).
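For reference, the proxy side can be tiny. A minimal sketch (ports, the upstream name, and paths are placeholders, not anything Pylons-specific):

```nginx
# minimal sketch -- one worker proxying four local Pylons instances
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    upstream pylons {
        server 127.0.0.1:5000;
        server 127.0.0.1:5001;
        server 127.0.0.1:5002;
        server 127.0.0.1:5003;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://pylons;
            proxy_set_header Host $host;
        }
    }
}
```

Nginx round-robins across the upstream servers by default, so the four instances share load without any extra controller logic.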

> In nginx you can set the processor affinity for each worker thread,
> thus placing each worker on a different core 0..3.

Not true on Linux.  This has been broken for some time.

> Here's where things get tricky:
> I've found a Python package that apparently allows Python applications
> to set their processor affinity (I'm afraid it doesn't work on OS X):
> http://pypi.python.org/pypi/affinity/0.1.0
> 
> Using this, what do you guys think of my idea to write a custom
> cluster controller, perhaps using supervisord, that will start nginx
> and the four worker processes, and then fork()'s my Pylons app into
> a cluster of four?
> 
> Is this overkill? Is Paste more multithreaded than I'm giving it credit
> for? Is there a better way to go about this? Does an alternative to
> the 'affinity' package exist?

I think it's overkill, but not for the reasons you seem to think.  Much
easier is to simply run four Pylons processes from the command line,
each with a custom .ini file. Just use a shell script.  
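Something along these lines would do it (the ini file names are illustrative; --daemon, --pid-file, and --log-file are standard `paster serve` options from Paste Script):

```shell
#!/bin/sh
# Hypothetical launcher: assumes development-0.ini .. development-3.ini
# exist, each configured with a distinct server port.
for i in 0 1 2 3; do
    CMD="paster serve development-$i.ini --daemon \
         --pid-file=pylons-$i.pid --log-file=pylons-$i.log"
    echo "starting instance $i: $CMD"
    # Only launch if Paste Script is actually on PATH.
    command -v paster >/dev/null 2>&1 && $CMD
done
```

The pid files also make it trivial to stop or restart individual instances later.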

You can also set the CPU affinity for Pylons (or more specifically,
Python) from the command line using a small C program (and I'm sure
there are pre-written utilities or recipes you could follow).
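On Linux the relevant call is sched_setaffinity(2), and since affinity is inherited across fork()/exec(), pinning the launcher before it exec's `paster` pins the whole Pylons instance. A sketch of the helper (Linux-only; the function name is mine, and taskset(1) from util-linux does the same job pre-packaged):

```c
/* Hypothetical helper: pin the calling process to a single CPU core.
 * Affinity survives fork() and exec(), so calling this before exec'ing
 * the Pylons process pins the whole instance.  Linux-only. */
#define _GNU_SOURCE
#include <sched.h>

int pin_to_cpu(int cpu)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    /* pid 0 means "the calling process"; returns 0 on success */
    return sched_setaffinity(0, sizeof(set), &set);
}
```

You'd call pin_to_cpu(i) in the child after fork() and before execvp()'ing paster, one core per instance.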

You'll want to follow a shared-nothing approach or use something like
Memcached to share data between processes.  It's probably also possible
to use Memcached as a secondary cache for SQLAlchemy (although I haven't
tried it).  There's a thread about someone doing this here:

http://www.mail-archive.com/[EMAIL PROTECTED]/msg02499.html


Regards,
Cliff



--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"pylons-discuss" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/pylons-discuss?hl=en
-~----------~----~----~----~------~----~------~--~---