On 6/26/15 6:09 PM, Joel Jacobson wrote:
> Can't we just use the infrastructure of PostgreSQL to handle the few
> megabytes of data we are talking about here? Why not just store the data
> in a regular table? Why bother with special files and special data
> structures? If it's just a table we want to produce as output, why can't
> we just store it in a regular table, in the pg_catalog schema?

The problem is the update rate. I've never tried measuring it, but I'd bet the stats collector can end up with tens of thousands of updates per second. MVCC would collapse under that kind of load: every UPDATE leaves the old row version behind as a dead tuple, so at that rate dead tuples would pile up far faster than vacuum could reclaim them.
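To make that concrete, here's a rough sketch of how you could watch the bloat happen (the table and workload are made up for illustration; the n_dead_tup counter in pg_stat_user_tables is real):

    -- Stand-in for a per-table stats row that gets updated in place
    CREATE TABLE stats_demo (relid oid PRIMARY KEY, n_tup_ins bigint DEFAULT 0);
    INSERT INTO stats_demo VALUES (12345, 0);

    -- Under MVCC every UPDATE writes a new row version and leaves the
    -- old one behind as a dead tuple until vacuum gets to it
    UPDATE stats_demo SET n_tup_ins = n_tup_ins + 1 WHERE relid = 12345;

    -- Hammer that UPDATE in a loop (e.g. pgbench -f update.sql) and
    -- watch the dead tuple count climb:
    SELECT n_live_tup, n_dead_tup
      FROM pg_stat_user_tables
     WHERE relname = 'stats_demo';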

What might be interesting is setting things up so the collector simply inserts into history tables every X seconds, with a separate process pruning that data. The big problem with that is I see no easy way for it to allow access to real-time data (which is certainly necessary sometimes).
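Something like this is what I have in mind (all names hypothetical, just a sketch):

    -- The collector appends a snapshot every X seconds instead of
    -- updating counters in place; appends don't churn out dead tuples
    -- the way in-place updates do
    CREATE TABLE stats_history (
        captured_at timestamptz NOT NULL DEFAULT now(),
        relid       oid         NOT NULL,
        n_tup_ins   bigint      NOT NULL,
        n_tup_upd   bigint      NOT NULL
    );

    -- A separate process prunes old snapshots on its own schedule
    DELETE FROM stats_history
     WHERE captured_at < now() - interval '1 hour';

Reading the "current" values then becomes something like

    SELECT DISTINCT ON (relid) *
      FROM stats_history
     ORDER BY relid, captured_at DESC;

but of course that's always up to X seconds stale, which is exactly the real-time problem.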
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Data in Trouble? Get it in Treble! http://BlueTreble.com

