Going back to logging with zeromq: I've since tried to set this up on other
machines and can't get it to work, even though I installed uWSGI identically
on both (sudo pip install uwsgi==1.4.6).  I checked --logger-list, and
zeromq wasn't listed on the machine that isn't working.  I then downloaded
the LTS and latest stable tarballs from
http://uwsgi-docs.readthedocs.org/en/latest/Download.html; the zeromq
logger doesn't appear in either tarball's plugins folder, though I'm not
sure whether that means anything.

When I do:

logger = zeromq:tcp://127.0.0.1:9191

(exactly what I have on my dev server), I get something like "logger zeromq
not found", which I guess makes sense since it's not in the logger list.

When I do:

log-zeromq = tcp://54.224.57.80:9191

nothing happens at all.  Does that mean the option isn't recognized?

I think a lot of this could be avoided by doing custom builds, but I still
find what happened odd (I'm probably just missing something).
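For reference, here are the two spellings I've tried, collected into one
config fragment.  My working assumption (which may well be wrong) is that
pip builds uWSGI from source and only compiles the zeromq logger in when
the libzmq development headers are present at build time, so a rebuild
after installing them might be the fix:

```ini
[uwsgi]
; attempt 1 -- works on my dev server, fails with
; "logger zeromq not found" on the new machine:
logger = zeromq:tcp://127.0.0.1:9191

; attempt 2 -- silently ignored on the new machine:
; log-zeromq = tcp://54.224.57.80:9191
```

If the missing-headers theory is right, something like
`sudo apt-get install libzmq-dev && sudo pip install --ignore-installed uwsgi==1.4.6`
should pull the logger in (the package name is my guess for Debian/Ubuntu).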

Any guidance would be much appreciated

Thanks,
Tony



On Wed, Jun 5, 2013 at 9:15 AM, <[email protected]> wrote:

> Send uWSGI mailing list submissions to
>         [email protected]
>
> To subscribe or unsubscribe via the World Wide Web, visit
>         http://lists.unbit.it/cgi-bin/mailman/listinfo/uwsgi
> or, via email, send a message with subject or body 'help' to
>         [email protected]
>
> You can reach the person managing the list at
>         [email protected]
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of uWSGI digest..."
>
>
> Today's Topics:
>
>    1. Re: Offloading big responses (Łukasz Mierzwa)
>    2. Re: Offloading big responses (Roberto De Ioris)
>    3. emperor dies on SIGINT/SIGQUIT (Damjan)
>    4. Re: emperor dies on SIGINT/SIGQUIT (Roberto De Ioris)
>    5. Re: emperor dies on SIGINT/SIGQUIT (Damjan)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Wed, 5 Jun 2013 12:40:07 +0200
> From: Łukasz Mierzwa <[email protected]>
> To: uWSGI developers and users list <[email protected]>
> Subject: Re: [uWSGI] Offloading big responses
> Message-ID:
>         <
> cafcbevujrj2fmy-tj9xxyq-m84p-3opnchy0k-9z+c1n3jg...@mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-2"
>
> EDIT: it seems that bug #305 is fixed in latest 1.9.12 tarball so no need
> to patch it.
>
>
> 2013/6/5 Łukasz Mierzwa <[email protected]>
>
> > I did some tests with 1.9.12 simulating big responses and enough
> > concurrent connections to fill the queue. It clearly shows the advantage
> of
> > offload transformation. Workers push responses to offload engine and can
> > process new requests right away, this keeps backlog queue low at all
> times
> > (without offload I get a lot of "*** uWSGI listen queue of socket 3 full
> > !!! (11/10) ***" messages), response times are also lower since client
> > request doesn't need to wait before being processed.
> >
> > https://gist.github.com/prymitive/5712706
> >
> > If anyone wants to try this be sure to use current master or apply
> >
> > https://github.com/unbit/uwsgi/commit/0f397eeb55532296b3bcddb9148bc5b6fa8346f2
> > on top of 1.9.12 or it will eat up disk space (see
> > https://github.com/unbit/uwsgi/issues/305).
> >
> >
> > 2013/6/3 Łukasz Mierzwa <[email protected]>
> >
> >> I've just tested to see if offload threads are really async as
> advertised
> >> and it seems they are, great ;)
> >>
> >> What I've done (zero is 1GB file with zeros):
> >>
> >> $ uwsgi --http :8080 --static-map="/zero=zero" --stats :4444
> >> --offload-threads 2
> >> $ ab -c 10 -n 100 http://localhost:8080/zero
> >>
> >> With only 1 worker and 2 offload threads I had 10 concurrent connections
> >> (not queued but running).
> >>
> >>
> >> 2013/6/3 Roberto De Ioris <[email protected]>
> >>
> >>>
> >>> > I'll give it a try once 1.9.12 is out.
> >>> >
> >>> > AFAIK uWSGI is blocking and this is the cause of offload threads,
> this
> >>> is
> >>> > fine for dynamic requests that needs to run app code, but it also
> means
> >>> > that uWSGI will probably do worse that lighttpd or nginx in real
> world
> >>> > contest with serving static files under a lot of load and few
> thousands
> >>> > client connections. AFAIK both lighttpd and nginx are asynchronous.
> It
> >>> > isn't big issue since we can put uWSGI behind nginx and use it only
> for
> >>> > non-static requests, but since HTTP frontend is getting more
> features I
> >>> > wonder what's the goal here, is uWSGI intended to be as fast as
> others
> >>> (or
> >>> > maybe it is already), or nginx will always be required when maximum
> >>> > possible performance is required?
> >>>
> >>>
> >>> Since 1.9 it is no more blocking, each write must end in
> --socket-timeout
> >>> and if you enable a coroutine engine (like ugreen or coroae or gevent
> or
> >>> ...) it will automatically start to manage another request.
> >>>
> >>> Offloading is a way to free "cores" (it could be a worker, a thread a
> >>> coroutine...) delegating common task to a pool of threads that can
> manage
> >>> them without using too much resources (even coroutines are a finite
> >>> resource while offloading is only limited by file descriptor and
> memory,
> >>> and each offload task consume only 128 bytes)
> >>>
> >>> So speaking at higher level, offload threads can be seen as a little
> >>> nginx/lighttpd embedded in uWSGI that can do simple task using all of
> >>> your
> >>> cpu cores)
> >>>
> >>> I like to compare offload threads with hardware DMA, it can do only
> >>> simple
> >>> tasks (transfer memory) freeing the CPU from them.
> >>>
> >>> Having said that, speed in serving static is better (even if probably
> we
> >>> talk about microseconds difference) in pure-webservers as uWSGI need to
> >>> be
> >>> customizable for very specific uses (and this has a cost).
> >>>
> >>> I suppose when you start adding caching of path resolutions and similar
> >>> micro-optimizations you can build a uWSGI file-server faster than the
> >>> others, but it requires a very deep knowledge of your specific case, so
> >>> for "general-purpose" serving, a pure-webserver is a better bet.
> >>>
> >>>
> >>> Obviously this is the current status, i do not know what will happen
> >>> next :)
> >>>
> >>> --
> >>> Roberto De Ioris
> >>> http://unbit.it
> >>> _______________________________________________
> >>> uWSGI mailing list
> >>> [email protected]
> >>> http://lists.unbit.it/cgi-bin/mailman/listinfo/uwsgi
> >>>
> >>
> >>
> >>
> >> --
> >> Łukasz Mierzwa
> >>
> >
> >
> >
> > --
> > Łukasz Mierzwa
> >
>
>
>
> --
> Łukasz Mierzwa
>
> ------------------------------
>
> Message: 2
> Date: Wed, 5 Jun 2013 12:42:09 +0200
> From: "Roberto De Ioris" <[email protected]>
> To: "uWSGI developers and users list" <[email protected]>
> Subject: Re: [uWSGI] Offloading big responses
> Message-ID:
>         <[email protected]>
> Content-Type: text/plain;charset=utf-8
>
>
> > EDIT: it seems that bug #305 is fixed in latest 1.9.12 tarball so no need
> > to patch it.
> >
>
> Yes, i forgot to upload the tarball before the announce so i was able to
> rebuild it with the patch applied (and i have retagged the repository)
>
>
> >
> > 2013/6/5 Łukasz Mierzwa <[email protected]>
> >
> >> I did some tests with 1.9.12 simulating big responses and enough
> >> concurrent connections to fill the queue. It clearly shows the advantage
> >> of
> >> offload transformation. Workers push responses to offload engine and can
> >> process new requests right away, this keeps backlog queue low at all
> >> times
> >> (without offload I get a lot of "*** uWSGI listen queue of socket 3 full
> >> !!! (11/10) ***" messages), response times are also lower since client
> >> request doesn't need to wait before being processed.
> >>
> >> https://gist.github.com/prymitive/5712706
> >>
> >> If anyone wants to try this be sure to use current master or apply
> >>
> https://github.com/unbit/uwsgi/commit/0f397eeb55532296b3bcddb9148bc5b6fa8346f2
> >> on
> >> top of 1.9.12 or it will eat up disk space (see
> >> https://github.com/unbit/uwsgi/issues/305).
> >>
> >>
> >> 2013/6/3 Łukasz Mierzwa <[email protected]>
> >>
> >>> I've just tested to see if offload threads are really async as
> >>> advertised
> >>> and it seems they are, great ;)
> >>>
> >>> What I've done (zero is 1GB file with zeros):
> >>>
> >>> $ uwsgi --http :8080 --static-map="/zero=zero" --stats :4444
> >>> --offload-threads 2
> >>> $ ab -c 10 -n 100 http://localhost:8080/zero
> >>>
> >>> With only 1 worker and 2 offload threads I had 10 concurrent
> >>> connections
> >>> (not queued but running).
> >>>
> >>>
> >>> 2013/6/3 Roberto De Ioris <[email protected]>
> >>>
> >>>>
> >>>> > I'll give it a try once 1.9.12 is out.
> >>>> >
> >>>> > AFAIK uWSGI is blocking and this is the cause of offload threads,
> >>>> this
> >>>> is
> >>>> > fine for dynamic requests that needs to run app code, but it also
> >>>> means
> >>>> > that uWSGI will probably do worse that lighttpd or nginx in real
> >>>> world
> >>>> > contest with serving static files under a lot of load and few
> >>>> thousands
> >>>> > client connections. AFAIK both lighttpd and nginx are asynchronous.
> >>>> It
> >>>> > isn't big issue since we can put uWSGI behind nginx and use it only
> >>>> for
> >>>> > non-static requests, but since HTTP frontend is getting more
> >>>> features I
> >>>> > wonder what's the goal here, is uWSGI intended to be as fast as
> >>>> others
> >>>> (or
> >>>> > maybe it is already), or nginx will always be required when maximum
> >>>> > possible performance is required?
> >>>>
> >>>>
> >>>> Since 1.9 it is no more blocking, each write must end in
> >>>> --socket-timeout
> >>>> and if you enable a coroutine engine (like ugreen or coroae or gevent
> >>>> or
> >>>> ...) it will automatically start to manage another request.
> >>>>
> >>>> Offloading is a way to free "cores" (it could be a worker, a thread a
> >>>> coroutine...) delegating common task to a pool of threads that can
> >>>> manage
> >>>> them without using too much resources (even coroutines are a finite
> >>>> resource while offloading is only limited by file descriptor and
> >>>> memory,
> >>>> and each offload task consume only 128 bytes)
> >>>>
> >>>> So speaking at higher level, offload threads can be seen as a little
> >>>> nginx/lighttpd embedded in uWSGI that can do simple task using all of
> >>>> your
> >>>> cpu cores)
> >>>>
> >>>> I like to compare offload threads with hardware DMA, it can do only
> >>>> simple
> >>>> tasks (transfer memory) freeing the CPU from them.
> >>>>
> >>>> Having said that, speed in serving static is better (even if probably
> >>>> we
> >>>> talk about microseconds difference) in pure-webservers as uWSGI need
> >>>> to
> >>>> be
> >>>> customizable for very specific uses (and this has a cost).
> >>>>
> >>>> I suppose when you start adding caching of path resolutions and
> >>>> similar
> >>>> micro-optimizations you can build a uWSGI file-server faster than the
> >>>> others, but it requires a very deep knowledge of your specific case,
> >>>> so
> >>>> for "general-purpose" serving, a pure-webserver is a better bet.
> >>>>
> >>>>
> >>>> Obviously this is the current status, i do not know what will happen
> >>>> next :)
> >>>>
> >>>> --
> >>>> Roberto De Ioris
> >>>> http://unbit.it
> >>>> _______________________________________________
> >>>> uWSGI mailing list
> >>>> [email protected]
> >>>> http://lists.unbit.it/cgi-bin/mailman/listinfo/uwsgi
> >>>>
> >>>
> >>>
> >>>
> >>> --
> >>> Łukasz Mierzwa
> >>>
> >>
> >>
> >>
> >> --
> >> Łukasz Mierzwa
> >>
> >
> >
> >
> > --
> > Łukasz Mierzwa
> > _______________________________________________
> > uWSGI mailing list
> > [email protected]
> > http://lists.unbit.it/cgi-bin/mailman/listinfo/uwsgi
> >
>
>
> --
> Roberto De Ioris
> http://unbit.it
>
>
> ------------------------------
>
> Message: 3
> Date: Wed, 05 Jun 2013 17:32:17 +0200
> From: Damjan <[email protected]>
> To: uWSGI developers and users list <[email protected]>
> Subject: [uWSGI] emperor dies on SIGINT/SIGQUIT
> Message-ID: <[email protected]>
> Content-Type: text/plain; charset=UTF-8; format=flowed
>
> There's a strange thing going on with the emperor. When I send a HUP
> signal to the master of the emperor, it reloads and then decides to die.
>
> I do use a master process for the emperor (to re-exec it since I upgrade
> uwsgi often), and use the emperor-on-demand-extension=.socket option.
>
> It seems it sends itself the SIGINT/SIGQUIT signal
>
>
> This is my emperor.ini config file:
>
> [uwsgi]
> master    =  true
> daemonize =  %d/run/uwsgi.log
> pidfile   =  %d/run/uwsgi.pid
> emperor   =  %d/*/uwsgi.ini
> emperor-on-demand-extension=.socket
>
> # env = PYTHONUSERBASE=%d/py-env
> # vassal-logto = %d/run/uwsgi.log
> env   = UWSGI_IDLE=60
> env   = UWSGI_DIE_ON_IDLE=1
>
>
> this is the log
>
> The Emperor has been buried (pid: 31351)
> ...gracefully killing workers...
> binary reloading uWSGI...
> chdir() to /home/damjan
> closing all non-uwsgi socket fds > 2 (max_fd = 1024)...
> running /opt/nginx/sbin/uwsgi
> [uWSGI] getting INI configuration from /home/damjan/web-apps/emperor.ini
> *** Starting uWSGI 1.9.12 (64bit) on [Wed Jun  5 12:53:21 2013] ***
> compiled with version: 4.4.5 on 05 June 2013 12:50:23
> os: Linux-2.6.32-042stab065.3 #1 SMP Mon Nov 12 21:59:14 MSK 2012
> nodename: lists
> machine: x86_64
> clock source: unix
> pcre jit disabled
> detected number of CPU cores: 8
> current working directory: /home/damjan
> detected binary path: /opt/nginx/sbin/uwsgi
> your memory page size is 4096 bytes
> detected max file descriptor number: 1024
> lock engine: pthread robust mutexes
> *** starting uWSGI Emperor ***
> your mercy for graceful operations on workers is 60 seconds
> *** Operational MODE: no-workers ***
> !!!!!!!!!!!!!! WARNING !!!!!!!!!!!!!!
> no request plugin is loaded, you will not be able to manage requests.
> you may need to install the package for your language of choice, or
> simply load it with --plugin.
> !!!!!!!!!!! END OF WARNING !!!!!!!!!!
> gracefully (RE)spawned uWSGI master process (pid: 31349)
> [uwsgi-emperor] /home/damjan/web-apps//blog/uwsgi.ini -> "on demand"
> instance detected, waiting for connections on socket "127.0.0.1:4005" ...
> [uwsgi-emperor] /home/damjan/web-apps//cgi-bin/uwsgi.ini -> "on demand"
> instance detected, waiting for connections on socket "127.0.0.1:4003" ...
> [uwsgi-emperor] /home/damjan/web-apps//froide/uwsgi.ini -> "on demand"
> instance detected, waiting for connections on socket "127.0.0.1:4006" ...
> [uwsgi-emperor] /home/damjan/web-apps//loadavg-sse/uwsgi.ini -> "on
> demand" instance detected, waiting for connections on socket
> "127.0.0.1:4004" ...
> [uwsgi-emperor] /home/damjan/web-apps//test/uwsgi.ini -> "on demand"
> instance detected, waiting for connections on socket "127.0.0.1:4002" ...
> [uwsgi-emperor] /home/damjan/web-apps//wiki/uwsgi.ini -> "on demand"
> instance detected, waiting for connections on socket "127.0.0.1:4001" ...
> workers have been inactive for more than 60 seconds (1370436863-1370436802)
> SIGINT/SIGQUIT received...killing workers...
> The Emperor has been buried (pid: 25251)
> goodbye to uWSGI.
>
>
> --
> ??????
>
>
> ------------------------------
>
> Message: 4
> Date: Wed, 5 Jun 2013 17:44:38 +0200
> From: "Roberto De Ioris" <[email protected]>
> To: "uWSGI developers and users list" <[email protected]>
> Subject: Re: [uWSGI] emperor dies on SIGINT/SIGQUIT
> Message-ID:
>         <[email protected]>
> Content-Type: text/plain;charset=utf-8
>
>
> > There's a strange thing going on with the emperor. When I send a HUP
> > signal to the master of the emperor, it reloads and then decides to die.
> >
> > I do use a master process for the emperor (to re-exec it since I upgrade
> > uwsgi often), and use the emperor-on-demand-extension=.socket option.
> >
> > It seems it sends itself the SIGINT/SIGQUIT signal
> >
> >
> > This is my emperor.ini config file:
> >
> > [uwsgi]
> > master    =  true
> > daemonize =  %d/run/uwsgi.log
> > pidfile   =  %d/run/uwsgi.pid
> > emperor   =  %d/*/uwsgi.ini
> > emperor-on-demand-extension=.socket
> >
> > # env = PYTHONUSERBASE=%d/py-env
> > # vassal-logto = %d/run/uwsgi.log
> > env   = UWSGI_IDLE=60
> > env   = UWSGI_DIE_ON_IDLE=1
>
>
> Why these two envs?
>
> you are telling the master to die when workers are inactive for more than
> 60 seconds but your instance has no workers (obviously it is something we
> should manage better ;)
>
> If you want to pass this two options to vassals you have to use
>
> UWSGI_VASSAL_IDLE=60
> UWSGI_VASSAL_DIE_ON_IDLE=1
>
>
>
>
> --
> Roberto De Ioris
> http://unbit.it
>
>
> ------------------------------
>
> Message: 5
> Date: Wed, 05 Jun 2013 18:18:22 +0200
> From: Damjan <[email protected]>
> To: [email protected]
> Subject: Re: [uWSGI] emperor dies on SIGINT/SIGQUIT
> Message-ID: <[email protected]>
> Content-Type: text/plain; charset=UTF-8; format=flowed
>
> >> env   = UWSGI_IDLE=60
> >> env   = UWSGI_DIE_ON_IDLE=1
> >
> >
> > Why these two envs?
> >
> > you are telling the master to die when workers are inactive for more than
> > 60 seconds but your instance has no workers (obviously it is something we
> > should manage better ;)
> >
> > If you want to pass this two options to vassals you have to use
> >
> > UWSGI_VASSAL_IDLE=60
> > UWSGI_VASSAL_DIE_ON_IDLE=1
>
> I might have had problems with UWSGI_VASSAL_ for these options, I'll try
> it again now.
>
> It's interesting that when initially started everything works fine. The
> vassals are auto spawned, and do die on 60 seconds of idle. And the
> master and the emperor do not die.
>
> But only when I send a HUP to the master, then it dies with the message
> I sent before.
>
>
>
> --
> ??????
>
>
> ------------------------------
>
> _______________________________________________
> uWSGI mailing list
> [email protected]
> http://lists.unbit.it/cgi-bin/mailman/listinfo/uwsgi
>
>
> End of uWSGI Digest, Vol 45, Issue 7
> ************************************
>
_______________________________________________
uWSGI mailing list
[email protected]
http://lists.unbit.it/cgi-bin/mailman/listinfo/uwsgi
