On Tue, Mar 29, 2011 at 9:26 PM, Nic Chidu <n...@chidu.net> wrote:

> Got a situation where a 130 mil rows (137GB) table needs to be brought down
> in size to  10 mil records (most recent)
> with the least amount of downtime.
>
> Doing a full vacuum would be faster on:
>  - 120 mil rows deleted and 10 mil active (delete most of them then full
> vacuum)
>  - 10 mil deleted and 120 mil active. (delete small batches and full vacuum
> after each delete).
>
> Any other suggestions?
>


The recommended way is to delete the unused rows, take a dump of the table,
and restore it back into the database. A dump and reload will be faster than a
VACUUM FULL.
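
For example, something along these lines (a rough sketch; "big_table",
"created_at", "yourdb" and the cutoff value are placeholders for your actual
table, column, database and "most recent" criterion):

    DELETE FROM big_table WHERE created_at < '<cutoff>';  -- keep only the recent ~10 mil rows

    pg_dump -t big_table -f big_table.sql yourdb   # only the live rows end up in the dump
    psql yourdb -c 'DROP TABLE big_table;'
    psql yourdb -f big_table.sql                   # reload the compact table and its indexes

The table is unavailable while it is being dropped and reloaded, but reloading
10 mil rows should be much quicker than a VACUUM FULL over the 137GB table.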

--Raghu Ram

>
> Thanks,
>
> Nic
>
