Hello, I was wondering if anyone can offer advice on handling stats collection across worker processes in forking servers (like unicorn). Specifically, I am attempting to work on a solution for the Prometheus ruby gem. Some details are in this issue here: https://github.com/prometheus/client_ruby/issues/9
Prometheus works with a "scrape" model: every few seconds a Prometheus server hits an HTTP endpoint that exposes the current stats. With the current middleware, the stats only represent whichever worker happens to serve the scrape request.

I have read through the documentation for unicorn and poked around the source code some, and also searched similar projects for inspiration. The most promising solution I've considered so far is raindrops, but it looks as though you need to know all of the possible metrics up front, which won't necessarily work here, since Prometheus metrics can be created with label values that vary at runtime. Does anyone have any experience working with something like this? Thanks for any suggestions.
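To make the problem concrete, here is a minimal sketch (plain Ruby, no gems involved) of why in-memory counters diverge in a forking server: each worker gets a copy-on-write copy of the parent's memory at fork time, so any increments a worker makes afterwards are invisible to the master and to its sibling workers. The `counter` hash here stands in for an in-process metrics registry; the pipe is just used to report the child's view back for comparison.

```ruby
# A hypothetical in-process "registry": one counter, like a request count.
counter = { requests: 0 }

reader, writer = IO.pipe
pid = fork do
  reader.close
  # The "worker" handles 3 requests and bumps its own copy of the counter.
  counter[:requests] += 3
  writer.puts counter[:requests]
  writer.close
end
writer.close
Process.wait(pid)

child_count = reader.read.to_i
reader.close

puts "parent sees: #{counter[:requests]}"  # still 0 -- increments stayed in the child
puts "worker saw:  #{child_count}"         # 3
```

A scrape that lands on the parent (or any other worker) would report 0 here, while the worker that did the work would report 3 — which is exactly the inconsistency the middleware shows today.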
