On November 13, 2015 at 5:04:28 AM, Michael Fischer ([email protected]) wrote:

> On Fri, Nov 13, 2015 at 8:51 AM, Jeff Utter wrote:
> 
> > I was wondering if anyone can offer any advice in handling stats
> > collections between worker processes in forking servers (like unicorn).
> > Specifically, I am attempting to work on a solution for the Prometheus ruby
> > gem. Some details are in this issue here:
> > https://github.com/prometheus/client_ruby/issues/9
> 
> We run a statsd server on our application servers, and our
> applications invoke statsd operations against various counters and
> gauges. The statsd protocol is UDP based and very fast. statsd
> itself keeps all data in memory and flushes it to its backend every
> few seconds.
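
For reference, a statsd counter increment really is just a tiny UDP datagram in the `name:value|c` wire format, which is why it's so cheap for the application. A minimal sketch in Ruby (hypothetical client; a real app would use an existing statsd gem):

```ruby
require "socket"

# Minimal statsd client sketch. Counters use the "name:value|c" wire
# format and gauges use "name:value|g"; each call is one fire-and-forget
# UDP datagram, so the application never blocks on the stats server.
class MiniStatsd
  def initialize(host = "127.0.0.1", port = 8125)
    @host = host
    @port = port
    @socket = UDPSocket.new
  end

  def increment(metric, by = 1)
    @socket.send("#{metric}:#{by}|c", 0, @host, @port)
  end

  def gauge(metric, value)
    @socket.send("#{metric}:#{value}|g", 0, @host, @port)
  end
end
```

Because it's UDP, a dropped datagram is simply a lost sample; statsd trades delivery guarantees for minimal overhead in the hot path.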

Yeah, this does seem simpler in the case of forking servers. Part of 
Prometheus' ethos, however, is that metrics are scraped. I suppose it might be 
possible to have each worker push (with statsd) to a locally running collector 
that then exposes a scraping endpoint. This, however, creates additional load 
on the server to handle all the incoming stats, load that would not be needed 
if the workers could just increment their own counts.
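
One way workers could "increment their own counts" without a statsd hop is for each forked worker to keep its counter in a file keyed by its PID, with the scrape endpoint summing across all worker files at read time. A hedged sketch of that idea (the names `WorkerCounter` and `scrape_total` are hypothetical, not part of the client_ruby API, and this ignores atomicity concerns a real implementation would need to handle):

```ruby
require "json"
require "tmpdir"

# Sketch: each forked worker owns one file (keyed by PID) holding its
# counter value; the scrape endpoint aggregates across every worker's
# file. No cross-process locking is needed for writes because no two
# workers ever touch the same file.
class WorkerCounter
  def initialize(dir, metric)
    @path = File.join(dir, "#{metric}_#{Process.pid}.json")
    @metric = metric
    @value = 0
  end

  def increment(by = 1)
    @value += by
    # Rewrite this worker's current total on each increment.
    File.write(@path, JSON.dump({ @metric => @value }))
  end

  # Called from the scrape endpoint in the master (or any process):
  # sum the metric across all worker files.
  def self.scrape_total(dir, metric)
    Dir.glob(File.join(dir, "#{metric}_*.json")).sum do |f|
      JSON.parse(File.read(f)).fetch(metric, 0)
    end
  end
end
```

The writes stay local to each worker, so this keeps the pull model intact without funneling every increment through a shared collector process.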

For this specific project I may look into statsd instead of Prometheus, since 
the latter doesn't seem to play well with forking servers at the moment. I 
would really prefer, however, to find a way to make Prometheus work well with 
forking servers.
