On Mon, Jun 29, 2015 at 11:14 PM, Jim Nasby <jim.na...@bluetreble.com>
wrote:

> What might be interesting is setting things up so the collector simply
> inserted into history tables every X seconds and then had a separate
> process to prune that data. The big problem with that is I see no way for
> that to easily allow access to real-time data (which is certainly necessary
> sometimes)


I think the idea sounds promising. If near real-time data is required, we
could just update once every second, which should be often enough for
everybody.

Each backend process could then simply INSERT the stats for each txn that
committed or rolled back into an UNLOGGED table. The collector would then
do one single UPDATE of the collected stats based on the aggregate of the
rows inserted since the previous update a second earlier, and delete the
processed rows in the same operation (using DELETE FROM .. RETURNING *).
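
To make that concrete, here is a rough SQL sketch of what I have in mind.
The table and column names (pgstat_txn_queue, pgstat_tables, and the
counter columns) are hypothetical, just for illustration — the real stats
schema would obviously look different:

```sql
-- Hypothetical unlogged queue table each backend INSERTs into per txn:
CREATE UNLOGGED TABLE pgstat_txn_queue (
    dbid        oid    NOT NULL,
    relid       oid    NOT NULL,
    n_tup_ins   bigint NOT NULL DEFAULT 0,
    n_tup_upd   bigint NOT NULL DEFAULT 0,
    n_tup_del   bigint NOT NULL DEFAULT 0
);

-- Once a second, the collector drains the queue and folds the aggregate
-- into the (hypothetical) persistent stats table in a single statement,
-- using a writable CTE so the DELETE and UPDATE happen together:
WITH drained AS (
    DELETE FROM pgstat_txn_queue RETURNING *
), agg AS (
    SELECT dbid, relid,
           sum(n_tup_ins) AS ins,
           sum(n_tup_upd) AS upd,
           sum(n_tup_del) AS del
    FROM drained
    GROUP BY dbid, relid
)
UPDATE pgstat_tables s
   SET n_tup_ins = s.n_tup_ins + a.ins,
       n_tup_upd = s.n_tup_upd + a.upd,
       n_tup_del = s.n_tup_del + a.del
  FROM agg a
 WHERE s.dbid = a.dbid AND s.relid = a.relid;
```

Since the DELETE .. RETURNING feeds the UPDATE inside one statement, rows
are either aggregated or still queued — nothing is lost or double-counted
if the collector is interrupted between runs.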

That way we could get rid of the legacy communication protocol between the
backends and the collector and instead rely on unlogged tables for the
submission of data from the backends to the collector.

INSERTing 100 000 rows into an unlogged table takes 70 ms on my laptop, so
this should be fast enough to handle the tens of thousands of updates per
second we need to handle.