I have a million-row table (two text columns of ~25 characters each plus two
integers, one of which is the PK) that is replaced every week.  Since I'm doing
it on a live system, the replacement runs inside a transaction.  This is the
only time the table is modified; all other access is read-only.

I wanted to use TRUNCATE for efficiency, to avoid vacuum and index bloat, etc.
But when I do TRUNCATE inside a transaction, it takes an exclusive lock, so all
clients are blocked from reading until the entire transaction completes.  If I
switch to DELETE, it's slower, but other clients can continue to read the old
data until the transaction commits.
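For reference, a minimal sketch of the two patterns, assuming a table named weekly_data loaded from a CSV file (both names are hypothetical):

BEGIN;
-- TRUNCATE takes an ACCESS EXCLUSIVE lock, so readers block until COMMIT
TRUNCATE TABLE weekly_data;
COPY weekly_data FROM '/path/to/new_data.csv' WITH (FORMAT csv);
COMMIT;

BEGIN;
-- DELETE takes only a ROW EXCLUSIVE lock; under MVCC, readers keep seeing
-- the old rows until COMMIT, but the dead tuples then need a VACUUM
DELETE FROM weekly_data;
COPY weekly_data FROM '/path/to/new_data.csv' WITH (FORMAT csv);
COMMIT;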

The only work-around I've thought of is to create a brand-new table, populate
and index it outside the transaction, then start a transaction that drops the
old table and renames the new one into its place.
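Something along these lines (a sketch only; weekly_data and the file path are hypothetical, and the final DROP/RENAME still takes a brief exclusive lock, though only for the instant of the swap):

-- Build and populate the replacement outside any long transaction;
-- INCLUDING ALL copies column definitions, defaults, and indexes
CREATE TABLE weekly_data_new (LIKE weekly_data INCLUDING ALL);
COPY weekly_data_new FROM '/path/to/new_data.csv' WITH (FORMAT csv);
ANALYZE weekly_data_new;

-- The swap itself is quick: readers block only for these two statements
BEGIN;
DROP TABLE weekly_data;
ALTER TABLE weekly_data_new RENAME TO weekly_data;
COMMIT;

One caveat I'd expect: anything that depends on the old table by OID (views, foreign keys, prepared plans) would need to be recreated after the swap, since the rename doesn't carry dependencies over from the dropped table.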

Any thoughts?

Thanks,
Craig


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
