Rajesh Kumar Mallah wrote:
Bill Moran wrote:

Rajesh Kumar Mallah wrote:

Hi,

The problem was solved by reloading the table.
The query now takes only 3 seconds. But that is
not a solution.


If dropping/recreating the table improves things, then we can reasonably assume that the table is pretty active with updates/inserts. Correct?

Yes, the table results from an import process and undergoes lots of inserts and updates, but that's before the vacuum full operation. The table is not accessed during the vacuum. What I want to know is: is there any way to automate the dumping and reloading of a table individually? Will the below be safe and effective:

The CLUSTER command I described is one way of doing this. It essentially automates the task of copying the table, dropping the old one, and recreating it.
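A minimal sketch of what that looks like (the table name "sometable" and the index name "sometable_pkey" here are made up for illustration; this uses the 7.x-era syntax, newer releases spell it CLUSTER sometable USING sometable_pkey):

    -- Rewrites sometable in sometable_pkey order, reclaiming dead space.
    -- Note: CLUSTER holds an exclusive lock on the table while it runs.
    CLUSTER sometable_pkey ON sometable;
    ANALYZE sometable;

Once a table has been clustered, a bare "CLUSTER sometable;" re-clusters it on the same index, which makes it easy to script as periodic maintenance.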

If the data gets too fragmented, a vacuum may not be enough.  Also, read
up on the recommendations _against_ vacuum full (recommending plain
vacuum for routine use).  With full, vacuum condenses the database, which
may actually hurt performance.  A regular vacuum just fixes things up and
may leave unused space lying around, but that space gets reused, so it
should achieve a balance between disk usage and vacuum overhead.  See the
docs, they are much better at describing this than I am.

I understand that running vacuum and normal usage simultaneously mostly deteriorates performance, but this case is different.

Just want to make sure we're on the same page here. I'm not talking about vacuuming simultaneously with anything. I'm simply saying that "vacuum full" isn't always the best choice; you should probably only be doing plain "vacuum". The reasoning and details for this are in the admin docs.
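For what it's worth, a rough sketch of the difference (the table name is just an example):

    -- Plain vacuum: marks dead rows as reusable without an exclusive lock,
    -- so it can run alongside normal reads and writes.
    VACUUM ANALYZE sometable;

    -- Vacuum full: compacts the table but takes an exclusive lock for the
    -- duration, which is why the docs advise against running it routinely.
    VACUUM FULL sometable;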

--
Bill Moran
Potential Technologies
http://www.potentialtech.com


