On Thu, Oct 22, 2009 at 1:37 AM, Grzegorz Nosek <[email protected]> wrote:
> On Wed, Oct 21, 2009 at 05:15:58 -0700, Roger Hoover wrote:
> > Hi Grzegorz,
> >
> > The other way to group identical programs is to set numprocs > 1. Also,
> > you have some control over the name of the processes with the
> > process_name config option. For example,
> >
> > [program:cat]
> > command = /bin/cat
> > process_name=%(program_name)s_%(process_num)s
> > numprocs = 2
> >
> > Then you would start them as cat:cat_0 and cat:cat_1
>
> Yes. And I'd like to call them simply "cat_0" and "cat_1". Is there any
> deeper reason it's not possible?
>
For a config like this,

[fcgi-program:test]
command=/foo/bar/test.fcgi
socket=unix:///var/run/fcgi/test.socket
process_name=foo_%(process_num)s
numprocs=2

[program:test2]
command=/foo/bar/test.pl
process_name=foo_%(process_num)s
numprocs=2
There has to be a way to disambiguate between test:foo_0 and test2:foo_0.
The issue seems to be that nginx will not allow backend names to contain
":". Is there no way to translate between the name used in nginx and the
name used in supervisord? Maybe the nginx backend name could be
"test__foo_0", which gets mapped to "test:foo_0" when making supervisord
XML-RPC calls.
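Something along these lines, say (the "__" separator, the port, and the
helper name are all made up here; supervisor.startProcess is the standard
supervisord XML-RPC call):

import xmlrpclib

def backend_to_process(name):
    # Hypothetical mapping: nginx backend names may not contain ":",
    # so use "__" on the nginx side and translate back to supervisord's
    # "group:process" form when making the XML-RPC call.
    return name.replace('__', ':', 1)

proxy = xmlrpclib.ServerProxy('http://127.0.0.1:9001/RPC2')
proxy.supervisor.startProcess(backend_to_process('test__foo_0'))
# -> equivalent to startProcess('test:foo_0')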
>
> > On the nginx mailing list, you mentioned that your module was not using
> > fcgi-program. Maybe you could explain more about the reasons? I tend to
> > think that it should use fcgi-program because it allows all the
> > processes in the fcgi pool to share a single socket. I would expect that
> > to make the nginx module simpler, since it does not have to load balance
> > over multiple sockets, one for each fcgi process. Also, it makes for
> > much better load balancing: when all the processes share a socket, the
> > kernel distributes connection requests to each process as soon as the
> > process is ready to accept(), and it guarantees that the connections are
> > delegated in the order they were received. If connection requests start
> > queueing up on the shared socket, the nginx module can trigger more
> > processes to be spawned, and they can immediately start accepting from
> > the existing connection queue on the shared socket.
> >
> > Without a shared socket, the nginx module has to do its best to load
> > balance across all the sockets, but it may not end up sending the
> > connection to the socket with the smallest queue. Also, if one socket's
> > connection queue starts backing up and the module triggers another
> > process to be started, the new process will create its own socket and be
> > able to take new requests, but it cannot accept connections that are
> > already queued on the existing sockets.
> >
> > Please let me know if I'm mistaken here. Thanks,
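To make the shared-socket behavior concrete: it's just plain accept()
semantics. A minimal sketch (the socket path is made up):

import os, socket

# One listening socket, created before fork(); every child inherits it
# and blocks in accept() on the same queue.  The kernel hands each
# queued connection to exactly one ready worker, in arrival order --
# and a worker spawned later can accept() from the same backlog
# immediately.
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.bind('/tmp/shared.sock')   # made-up path
sock.listen(128)

for i in range(2):
    if os.fork() == 0:          # child: serve from the shared queue
        while True:
            conn, _ = sock.accept()
            conn.sendall('handled by worker %d\n' % i)
            conn.close()

os.wait()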
>
> QFT. You're basically 100% correct here. The problem is purely one of
> implementation. To keep things simple, we generate program names from the
> Nginx upstream name, which may not contain a colon. So we cannot refer to
> programs in groups at all, including pools of FastCGI servers. This
> forces us to either:
> a) implement explicit support for program groups in our module (which I
> believe is a supervisord implementation detail that the outside world
> need not know about), or
> b) go the easy route and spawn a pool of independent servers on different
> sockets.
>
> If we could "start cat_1" given the config you posted, we'd be set, as
> in:
>
> upstream cat_ {
>     supervisord http://127.0.0.1:9000 user pass;
>     server unix:/tmp/sock;
>     server unix:/tmp/sock;
>     fair;
> }
>
> The load balancing logic would be mostly useless here (and mistaken
> about individual backends' load), but we don't care, as the real load
> balancing is done in the kernel and the overhead is minimal. Also, we
> could easily implement a dummy "load balancer" (more like a process
> manager interface) for multiple servers on a single socket.
>
> BTW, and straying a bit from the original topic, it's a personal pet
> peeve of mine -- why did FastCGI do the right thing and standardise on
> receiving the listening socket from the parent process, while no
> application server speaking HTTP that I've seen can do the same? It
> solves so many problems cleanly.
>
Hmm... great question. I've never looked at Apache internals, but I
assumed it worked that way.
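In principle, any HTTP application server could do what the FastCGI spec
standardizes: inherit the already-bound listening socket as fd 0
(FCGI_LISTENSOCK_FILENO). A minimal sketch, assuming the parent process
has put a listening TCP socket on fd 0 before exec():

import socket

# The parent (web server / process manager) binds and listens, then
# exec()s this process with the listening socket on fd 0 -- exactly
# the FastCGI convention.
sock = socket.fromfd(0, socket.AF_INET, socket.SOCK_STREAM)

while True:
    conn, addr = sock.accept()
    conn.sendall('HTTP/1.0 200 OK\r\nContent-Length: 6\r\n\r\nhello\n')
    conn.close()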
>
> Best regards,
> Grzegorz Nosek
>
_______________________________________________
Supervisor-users mailing list
[email protected]
http://lists.supervisord.org/mailman/listinfo/supervisor-users