> Roberto
>
> Thanks for these rapid fixes and your comment in the other thread.
> 1.4.9 is working very well for us.
>
> We run a couple of applications; two of them are fairly large. Their
> load profiles don't match, so we're able to run them on the same
> servers as they scale up at different times. We configure uWSGI to
> have a handful of workers at any time, but allow each site to summon
> zergs on demand. A single emperor runs per box, and it picks up and
> loads app vassal files when we symlink them to a certain dir. Each
> vassal file defines an app and whether or not that app will act as a
> broodlord. The config is nice and easy, and I imagine that this is a
> fairly common setup for uWSGI.
>
> So... I'd love to be able to easily get stats on what uWSGI is doing.
> With your fix, as of 1.4.9, I could have a script which polled the
> sockets of the workers and zergs for a given app, mashed the data
> together and proxied it on. But it seems like a pain to have so many
> sockets floating around for what is essentially the same app,
> especially where the workers and zergs are running under the same
> emperor. I can only run `uwsgitop` on one of the sockets at a time.
>
> Before I started looking into this, I assumed I could monitor a single
> socket and get info on everything. Making the stats server or
> `uwsgitop` work usefully in production feels difficult and a bit
> hacky, which makes me think that I might be approaching it the wrong
> way. How would you expect users to do it?

I suppose that in your case it would be better to "reverse" the situation.

Have your instances push their stats out themselves:

http://uwsgi-docs.readthedocs.org/en/latest/PushingStats.html

Do not use the "file" plugin, as it is only a proof of concept.

The mongodb one is best suited for large deployments and is already used in
production.

The problem is that you lose real-time visibility (data are generated at a
fixed rate) and that you need to clear the collection periodically (imagine
that uWSGI 1.9 generates the content of the whole set of request vars, so
when you upgrade to 2.0 you could end up with a huge amount of data).
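As a sketch, a vassal could push its stats to MongoDB with a config along
these lines (the address, collection name, and frequency here are just
placeholder values; check the PushingStats page above for the exact options
your build supports, and note that monolithic builds may not need the
explicit plugin line):

```ini
[uwsgi]
; load the mongodb stats pusher (skip if built in monolithically)
plugin = stats_pusher_mongodb
; push the stats JSON to MongoDB every 10 seconds
stats-push = mongodb:addr=127.0.0.1:27017,collection=uwsgi.statistics,freq=10
```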

Another approach is hacking uwsgitop (it is a pure Python app) to read from
multiple sockets, appending the values one after another (I suppose a simple
for loop would be enough without breaking the curses UI).
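A minimal sketch of that for-loop idea, outside of curses: the stats server
writes one JSON document per connection, so a poller can read each socket to
EOF, parse it, and concatenate the worker lists. The socket paths and the
shape of the merged document below are assumptions for illustration, not
anything uwsgitop does today:

```python
import json
import socket


def read_stats(path):
    """Read one JSON stats document from a uWSGI stats UNIX socket.

    The stats server writes a single JSON blob and then closes the
    connection, so we simply read until EOF.
    """
    data = b""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(path)
        while chunk := s.recv(4096):
            data += chunk
    return json.loads(data)


def merge_stats(stats_docs):
    """Concatenate the 'workers' lists of several stats documents.

    stats_docs maps a label (e.g. "master", "zerg1") to a parsed stats
    dict; each worker gets a 'source' tag so master workers and zergs
    stay distinguishable in the merged view.
    """
    merged = {"workers": []}
    for source, doc in stats_docs.items():
        for worker in doc.get("workers", []):
            merged["workers"].append(dict(worker, source=source))
    return merged


if __name__ == "__main__":
    # Hypothetical socket paths for one app's master and its zergs.
    paths = {
        "master": "/run/uwsgi/app.stats",
        "zerg1": "/run/uwsgi/app-zerg1.stats",
    }
    docs = {name: read_stats(path) for name, path in paths.items()}
    print(json.dumps(merge_stats(docs), indent=2))
```

From there, feeding the merged document into uwsgitop's display loop should
mostly be a matter of replacing its single-socket read with a loop like this.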

Another area of interest (I know of only one person using this technique) is
using the subscription system to send the names of the stats sockets to a
central server, so you do not need to poll each one. There was a Perl server
on CPAN (sorry, I've completely forgotten its name) that can receive uWSGI
subscription requests and send back a hash of the subscribed nodes (very
simple, but very useful for getting an automatic list generated from the
instances).

So, there is still nothing "automagic", but you have all of the pieces.

Some examples:

https://github.com/unbit/uwsgi/blob/master/contrib/uwsgisubscribers.ru
https://github.com/unbit/uwsgi/blob/master/contrib/subscribe.pl

Finally, what about exporting the BROODLORD_NUM value of zerg instances to
the Emperor stats? That way you could automate your tools to connect to the
right stats sockets.

-- 
Roberto De Ioris
http://unbit.it
_______________________________________________
uWSGI mailing list
[email protected]
http://lists.unbit.it/cgi-bin/mailman/listinfo/uwsgi
