On Sep 27, 2016, at 2:46 PM, Israel Brewster wrote:

> I do have those on, and I could write a parser that scans through the logs 
> counting connections and disconnections to give a number of current 
> connections at any given time. Trying to make it operate "in real time" would 
> be interesting, though, as PG logs into different files by day-of-the-week 
> (at least, with the settings I have), rather than into a single file that 
> gets rotated out. I was kind of hoping such a tool, such as pgbadger (which, 
> unfortunately, only seems to track connections per second and not concurrent 
> connections), already existed, or that there was some way to have the 
> database itself track this metric. If not, well, I guess that's another 
> project :)
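
For what it's worth, the log-scanning approach you describe could be pretty 
small. Here's a rough sketch in Python -- it assumes the default message text 
emitted by log_connections / log_disconnections, and it ignores the 
day-of-week file rotation you mentioned:

import sys

def concurrent_connections(log_path):
    """Replay a postgres log and track the running count of open connections."""
    current = peak = 0
    with open(log_path) as log:
        for line in log:
            # log_connections writes "connection authorized: ..." once per session,
            # log_disconnections writes "disconnection: session time: ..." on close
            if 'connection authorized:' in line:
                current += 1
                peak = max(peak, current)
            elif 'disconnection: session time:' in line:
                current = max(current - 1, 0)
    return current, peak

if __name__ == '__main__':
    now, peak = concurrent_connections(sys.argv[1])
    print('open at end of log: %d, peak: %d' % (now, peak))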

There are a lot of postgres configs and server-specific tools... but on the 
application side and for general debugging, have you looked at statsd?  
https://github.com/etsy/statsd

It's a lightweight Node.js app that runs on your server and listens for UDP 
packets, which your apps can emit for counting or timing.  We have a ton of 
Python apps logging to it, including every postgres connection open/close and 
error.  The overhead of both the clients and the server is negligible.  When 
combined with Graphite for browsing the data as charts, it becomes really 
useful for detecting load or error issues stemming from a deployment -- you 
just look for spikes and cliffs.  We even use it to log the volume of INSERTS 
vs SELECTS vs UPDATES being sent to postgres.
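
The client side can be as small as something like this -- a minimal sketch, 
assuming the third-party "statsd" and psycopg2 Python packages; the metric 
names and the run_query() wrapper are just illustrative:

import psycopg2
import statsd

# statsd listens on UDP port 8125 by default
metrics = statsd.StatsClient('localhost', 8125)

def run_query(dsn, sql, params=None):
    conn = psycopg2.connect(dsn)
    metrics.incr('postgres.connection.open')          # one count per connection opened
    try:
        with metrics.timer('postgres.query.time'):    # time the round trip
            with conn.cursor() as cur:
                cur.execute(sql, params)
                # bucket by verb so you can chart INSERT vs SELECT vs UPDATE volume
                metrics.incr('postgres.query.' + sql.split(None, 1)[0].lower())
                conn.commit()
                return cur.fetchall() if cur.description else None
    except psycopg2.Error:
        metrics.incr('postgres.error')                # errors show up as spikes in the charts
        raise
    finally:
        conn.close()
        metrics.incr('postgres.connection.close')     # one count per connection closed

Because the metrics go out as fire-and-forget UDP, an unreachable statsd 
server doesn't slow down or break the app, which is part of why the overhead 
stays negligible.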

The more services/apps you run, the more useful it gets, as you can figure out 
which apps/deployments are screwing up postgres and the exact moment things 
went wrong.
