I think I see a fatal flaw (mine) that will cause the CLUSTER to fail.


From the info in the previous replies, I am going to change my
game plan. If anyone has thoughts on a different process, or can
confirm that I am on the right track, I would appreciate your
input.

1. I am going to run a CLUSTER on the table instead of a VACUUM
FULL.
Kevin Grittner stated:
If you have room for a second copy of your data, that is almost
always much faster, and less prone to problems.
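
For reference, the command I have in mind is something like the following (table and index names are placeholders, not my actual schema):

```sql
-- CLUSTER rewrites the table in index order into a new file, so it
-- needs roughly enough free space for a second copy of the table.
CLUSTER my_big_table USING my_big_table_pkey;  -- PostgreSQL 8.4+ syntax
-- On older releases the syntax is reversed:
-- CLUSTER my_big_table_pkey ON my_big_table;
```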

I looked at the table sizes in the database. The table I am trying to run the CLUSTER on is 275G, and I only have 57G free. I don't know how much of that 275G is live data and how much is empty (bloat), so I can't tell whether there is room for a second copy. My guess is the CLUSTER would fail for lack of space.
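In case it helps anyone advise me, here is roughly how I checked the sizes, plus a dead-row estimate from the statistics collector (table name is a placeholder):

```sql
-- On-disk size including indexes and TOAST:
SELECT pg_size_pretty(pg_total_relation_size('my_big_table'));
-- Heap only:
SELECT pg_size_pretty(pg_relation_size('my_big_table'));
-- Rough live vs. dead row counts, to estimate how much is bloat:
SELECT n_live_tup, n_dead_tup
  FROM pg_stat_user_tables
 WHERE relname = 'my_big_table';
```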

Are there any other options?

If I unload the table to a flat file, drop the table, recreate it, and finally reload the data - will that reclaim the space?
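What I mean is something along these lines (sketch only; the file path and table name are placeholders, and I'd still need somewhere to put the flat file):

```sql
-- Unload the data; the output file needs space on some filesystem.
COPY my_big_table TO '/some/path/my_big_table.dat';
-- Dropping the table frees its disk space immediately.
DROP TABLE my_big_table;
-- Recreate with the original column definitions, then reload;
-- the new table is written densely, with no dead space.
CREATE TABLE my_big_table ( /* original column definitions */ );
COPY my_big_table FROM '/some/path/my_big_table.dat';
-- Recreate indexes and constraints afterwards; building them after
-- the load is faster than maintaining them during it.
```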

Kevin - thanks for the book recommendation.  Will order it tomorrow.

Thanks again for all the technical help!

Dave


-- 
Sent via pgsql-admin mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin