From: Tomas Vondra [mailto:tomas.von...@2ndquadrant.com]
> Years ago I implemented an optimization for many DROP TABLE commands
> in a single transaction - instead of scanning the buffers for each relation,
> the code now accumulates a small number of relations into an array, and
> then does a bsearch for each buffer.
> 
> Would something like that be applicable/useful here? That is, if we do
> multiple TRUNCATE commands in a single transaction, can we optimize it
> like this?

Unfortunately not.  VACUUM and autovacuum handle each table in a different 
transaction.
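
For context, here is a minimal, self-contained sketch of the approach described 
above: collect the identifiers of the dropped relations into a sorted array and 
then make a single pass over the buffers, probing the array with bsearch for each 
buffer.  All names and types below (RelId, BufferEntry, drop_buffers_for_rels) 
are invented for illustration; this is not the actual PostgreSQL code, which has 
to deal with buffer headers, locking, and pinning that the sketch ignores.

#include <stdio.h>
#include <stdlib.h>

typedef unsigned int RelId;     /* stand-in for a relation's file identifier */

typedef struct BufferEntry
{
    RelId   rel;                /* which relation this buffer belongs to */
    int     blocknum;           /* block number within the relation */
} BufferEntry;

static int
relid_cmp(const void *a, const void *b)
{
    RelId   x = *(const RelId *) a;
    RelId   y = *(const RelId *) b;

    return (x > y) - (x < y);
}

/*
 * One pass over all buffers; each buffer is checked against the sorted array
 * of dropped relations with bsearch, i.e. O(NBuffers * log(ndropped)) instead
 * of one full buffer scan per dropped relation.
 */
static void
drop_buffers_for_rels(BufferEntry *buffers, int nbuffers,
                      RelId *dropped, int ndropped)
{
    qsort(dropped, ndropped, sizeof(RelId), relid_cmp);

    for (int i = 0; i < nbuffers; i++)
    {
        if (bsearch(&buffers[i].rel, dropped, ndropped,
                    sizeof(RelId), relid_cmp) != NULL)
            buffers[i].rel = 0;     /* pretend 0 means "invalidated" */
    }
}

int
main(void)
{
    BufferEntry buffers[] = {{10, 0}, {11, 3}, {12, 1}, {10, 7}, {13, 2}};
    RelId       dropped[] = {12, 10};

    drop_buffers_for_rels(buffers, 5, dropped, 2);

    for (int i = 0; i < 5; i++)
        printf("buffer %d -> rel %u\n", i, buffers[i].rel);
    return 0;
}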

BTW, what we really want to do is keep the failover time within 10 seconds.  
The customer periodically TRUNCATEs tens of thousands of tables.  If a failover 
happens to occur immediately after those TRUNCATEs, the recovery on the 
standby could take much longer than that.  But your past improvement seems likely 
to prevent that problem, as long as the customer TRUNCATEs the tables in the same 
transaction.

On the other hand, it is quite possible that the customer can only TRUNCATE 
a single table per transaction, thus running as many transactions as there are 
TRUNCATEd tables.  So, we also want to speed up each TRUNCATE by touching only the 
buffers that belong to the table, instead of scanning the whole of shared buffers.  
Andres proposed a method that uses a radix tree, but we don't yet have a concrete 
idea of how to implement it.
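
To make the problem concrete, here is a toy sketch of the kind of auxiliary 
structure that would let a TRUNCATE find only its own buffers.  Andres's 
radix-tree proposal is not spelled out here, so the sketch uses a trivial 
per-relation chain instead of a radix tree, purely to show the lookup pattern; 
all names are invented, and it ignores the hard part, which is maintaining such 
a structure correctly under concurrent access to shared buffers.

#include <stdio.h>

#define NBUFFERS 8
#define MAX_RELS 4

typedef struct Buffer
{
    int rel;        /* relation id owning this buffer, -1 if free */
    int next;       /* next buffer of the same relation, -1 at end */
} Buffer;

static Buffer buffers[NBUFFERS];
static int    rel_head[MAX_RELS];   /* first buffer of each relation, -1 if none */

/* Attach buffer b to relation r, maintaining the per-relation chain. */
static void
assign_buffer(int b, int r)
{
    buffers[b].rel = r;
    buffers[b].next = rel_head[r];
    rel_head[r] = b;
}

/* Invalidate only the buffers of relation r, without a full buffer scan. */
static void
truncate_relation(int r)
{
    for (int b = rel_head[r]; b != -1;)
    {
        int next = buffers[b].next;

        buffers[b].rel = -1;
        buffers[b].next = -1;
        b = next;
    }
    rel_head[r] = -1;
}

int
main(void)
{
    for (int i = 0; i < NBUFFERS; i++)
        buffers[i] = (Buffer) {-1, -1};
    for (int r = 0; r < MAX_RELS; r++)
        rel_head[r] = -1;

    assign_buffer(0, 2);
    assign_buffer(3, 2);
    assign_buffer(5, 1);

    truncate_relation(2);       /* touches buffers 0 and 3 only */

    for (int i = 0; i < NBUFFERS; i++)
        printf("buffer %d -> rel %d\n", i, buffers[i].rel);
    return 0;
}

With something along these lines, invalidating a relation's buffers costs time 
proportional to the number of buffers the relation actually occupies, rather 
than to the total size of shared buffers.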

Speeding up each TRUNCATE and its recovery is a different topic.  The patch 
proposed here is one possible improvement to shorten the failover time.


Regards
Takayuki Tsunakawa
