Hi all,

We are currently "stuck" on a performance bottleneck in our server using PG,
and we are considering two potential solutions, about which I would be happy
to hear your opinions.

Our system has a couple of tables that hold client-generated information.  The 
clients communicate with the server every minute, so we update these two 
tables every minute.  We are talking about ~50K clients (and therefore ~50K 
records).

These constant updates have caused the tables to grow drastically and the 
indexes to bloat.  The two solutions we are considering are:

 1.  Configure autovacuum to work more intensively in both time and cost 
parameters.
Pros:
Not a major architectural change.
Cons:
Autovacuum does not shrink bloated indexes, so we would still need to 
periodically reindex the tables.
We may also need to run VACUUM FULL periodically if autovacuum cannot clean 
at the required pace and the tables therefore need to be compacted?
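For solution #1, what we have in mind is roughly per-table autovacuum settings like the following (the table name and values are just placeholders, not tuned figures):

```sql
-- Sketch: more aggressive autovacuum on one of the hot tables
-- (table name and values are illustrative, not tested settings)
ALTER TABLE client_status SET (
    autovacuum_vacuum_scale_factor = 0.01,  -- trigger at ~1% dead rows
    autovacuum_vacuum_cost_delay   = 0,     -- do not throttle vacuum I/O
    autovacuum_vacuum_cost_limit   = 10000
);
```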


 2.  Create a new table every minute and insert the data into this new 
table (inserts only).  This process would happen every minute.  Note that in 
this process we would also need to copy over, from the previous table, the 
rows of clients that didn't communicate during that minute.
Pros:
Tables are always compact.
We will not depend on autovacuum keeping up.
Cons:
Major architectural change.
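To make solution #2 concrete, one rotation step might look roughly like this (table and column names are illustrative, and we assume a staging table "incoming" holding one row per client that reported this minute):

```sql
-- Sketch of one per-minute rotation step (names are illustrative)
BEGIN;
CREATE TABLE client_status_new (LIKE client_status INCLUDING ALL);

-- rows from clients that reported this minute
INSERT INTO client_status_new SELECT * FROM incoming;

-- carry over clients that did not report this minute
INSERT INTO client_status_new
SELECT o.*
FROM client_status o
WHERE NOT EXISTS (SELECT 1 FROM incoming i
                  WHERE i.client_id = o.client_id);

DROP TABLE client_status;
ALTER TABLE client_status_new RENAME TO client_status;
COMMIT;
```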

So to sum it up: we would prefer to refrain from making a major change to the 
system (solution #2), but we are not certain what the correct way to work in 
our situation (constant updates of records) is.  Is it to configure an 
aggressive autovacuum, or is the "known methodology" to work with insert-only 
tables that are rotated?


Thank you,
Ofer
