Sven Willenberger wrote:
> Trying to determine the best overall approach for the following
> scenario:
> 
> Each month our primary table accumulates some 30 million rows (which
> could very well hit 60+ million rows per month by year's end). Basically
> there will end up being a lot of historical data with little value
> beyond archival.
> 
> The question arises then as to the best approach, of which I have
> enumerated three:
> 
> 1) Just allow the records to accumulate and maintain constant vacuuming,
> etc., allowing for the fact that most queries will only touch a recent
> subset of the data and should be mostly cached.
> 
> 2) Each month:
> SELECT * INTO 3monthsago_dynamically_named_table FROM bigtable WHERE
> targetdate < $3monthsago;
> DELETE FROM bigtable where targetdate < $3monthsago;
> VACUUM ANALYZE bigtable;
> pg_dump 3monthsago_dynamically_named_table for archiving;


In my experience, copying and deleting 60+ million rows in a single
transaction is not feasible, at least not on my 1 GB RAM, 2-way CPU box.
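
Doing the move in smaller slices, each in its own transaction, should be
far more manageable. A rough sketch of what I mean (the archive table
name and the date bounds are only placeholders for one slice):

    -- archive_2004q4 is assumed to have the same columns as bigtable
    BEGIN;
    INSERT INTO archive_2004q4
        SELECT * FROM bigtable
        WHERE targetdate >= '2004-10-01' AND targetdate < '2004-10-08';
    DELETE FROM bigtable
        WHERE targetdate >= '2004-10-01' AND targetdate < '2004-10-08';
    COMMIT;
    -- repeat for the next slice, and run VACUUM ANALYZE bigtable every
    -- few slices rather than once at the very end

That keeps any single transaction, and the dead space it generates, to a
reasonable size.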



Regards
Gaetano Mendola

