On Mon, Feb 28, 2011 at 6:31 PM, Josh Berkus <j...@agliodbs.com> wrote:
> Like replacing each statistic with a series of time-based buckets, which
> would then increase the size of the table by 5X to 10X. That was the
> first solution I thought of, and rejected.
I don't understand what you're talking about at all here. There are a lot of unsolved problems in monitoring, but the one thing I think everyone is pretty clear on is that the right way to export metrics like these is to export a counter, and then have some external component periodically copy the counter into a history table and calculate the derivative, second derivative, running average of the first derivative, and so on.

What's needed here is for someone to write a good mrtg/rrd/whatever replacement using Postgres as its data store. If you're monitoring something sensitive then you would store the data in a *different* Postgres server to avoid Tom's complaint. There may be aspects of the job that Postgres does poorly, but we can focus on improving those parts of Postgres rather than looking for another database. And frankly Postgres isn't that bad a tool for it -- when I did some performance analysis recently I actually ended up loading the data into Postgres so I could do some of the aggregations using window functions anyway.

-- 
greg

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
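[Editor's note: the counter-plus-external-derivative scheme described above can be sketched as follows. This is a minimal illustration, not anything from the thread: the `samples` table, its contents, and the column names are hypothetical, and Python's stdlib sqlite3 stands in for the separate Postgres history server Greg describes. The `lag()` window-function query itself is standard SQL and would run unchanged on Postgres.]

```python
import sqlite3

# Hypothetical history table: periodic snapshots of a monotonically
# increasing counter (e.g. a pg_stat value), copied in by an external
# collector. SQLite here is a stand-in for a separate Postgres server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (ts INTEGER, counter INTEGER)")
conn.executemany(
    "INSERT INTO samples VALUES (?, ?)",
    [(0, 100), (60, 160), (120, 280), (180, 340)],  # one sample per minute
)

# First derivative: rate of change between consecutive snapshots,
# computed with the lag() window function over the sample timestamps.
rows = conn.execute("""
    SELECT ts,
           (counter - lag(counter) OVER w) * 1.0
           / (ts - lag(ts) OVER w) AS rate_per_sec
    FROM samples
    WINDOW w AS (ORDER BY ts)
""").fetchall()

for ts, rate in rows:
    print(ts, rate)  # first row has no predecessor, so its rate is NULL
```

Second derivatives and running averages follow the same pattern: apply `lag()` or `avg(...) OVER (...)` to the `rate_per_sec` column in an outer query.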