On Mon, 2009-11-30 at 10:50 -0800, Craig James wrote:
> I have a million-row table (two text columns of ~25 characters each plus two 
> integers, one of which is PK) that is replaced every week.  Since I'm doing 
> it on a live system, it's run inside a transaction.  This is the only time 
> the table is modified; all other access is read-only.
> 
> I wanted to use "truncate table" for efficiency, to avoid vacuum and index 
> bloat, etc.  But when I do "truncate" inside a transaction, all clients are 
> blocked from reading until the entire transaction completes.  If I switch to 
> "delete from ...", it's slower, but other clients can continue to use the old 
> data until the transaction commits.
> 
> The only work-around I've thought of is to create a brand new table, populate 
> it and index it, then start a transaction that drops the old table and 
> renames the new one.
> 
> Any thoughts?
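
The swap work-around described above can be sketched roughly like this (table, column, and file names here are hypothetical; the point is that the expensive load and index build happen outside any long transaction, and only the final drop/rename needs to run atomically):

```sql
-- Hypothetical schema: two ~25-char text columns plus two integers,
-- one of which is the primary key.
CREATE TABLE lookup_new (
    id     integer PRIMARY KEY,
    extra  integer,
    name   text,
    value  text
);

-- Populate and index while readers continue using the old table.
COPY lookup_new FROM '/path/to/weekly_data.csv';
CREATE INDEX lookup_new_name_idx ON lookup_new (name);

-- Atomic swap: readers are blocked only for this short transaction.
BEGIN;
DROP TABLE lookup;
ALTER TABLE lookup_new RENAME TO lookup;
ALTER INDEX lookup_new_name_idx RENAME TO lookup_name_idx;
COMMIT;
```

One caveat: any views, foreign keys, or prepared statements referencing the old table are invalidated or dropped along with it, so this only works cleanly for a standalone lookup table like the one described.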

Use partitioning so you can roll off data.

http://www.postgresql.org/docs/8.3/interactive/ddl-partitioning.html
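
In 8.3 that means table inheritance; a minimal sketch of the weekly roll-off, with hypothetical names, might look like:

```sql
-- Parent table that readers query; it never holds rows itself.
CREATE TABLE lookup (
    id     integer,
    extra  integer,
    name   text,
    value  text
);

-- Each weekly load goes into its own child table.
-- Note: indexes and primary keys are not inherited, so create them
-- on each child explicitly.
CREATE TABLE lookup_w49 () INHERITS (lookup);
COPY lookup_w49 FROM '/path/to/weekly_data.csv';
CREATE INDEX lookup_w49_name_idx ON lookup_w49 (name);

-- Rolling off the previous week is just dropping that child,
-- which sidesteps DELETE bloat and vacuum work entirely.
DROP TABLE lookup_w48;
```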

Joshua D. Drake


> 
> Thanks,
> Craig
> 
> 


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering
If the world pushes look it in the eye and GRR. Then push back harder. - 
Salamander


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
