There were definitely 219,177,133 deletions.
The deletions are most likely from the beginning of the table, since they were
based on the reception_time of the data.
I would rather not use re-index, unless it is faster than using vacuum.
What do you think would be the best way to get around this?
Should I increase vacuum_mem to a higher amount, 1.5 to 2 GB, or try a re-index
(I would rather not re-index, so that the data can still be queried without
doing a seqscan)?
Once the index is cleaned up, how does vacuum handle the table?
Does it take as long as the index or is it faster?
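A rough back-of-envelope sketch (assuming the vacuum_mem formula Manfred gives below, vacuum_mem-in-KB * 1024 / 6 tuple ids per heap pass, and his figure of roughly 60,000 seconds per full index scan) suggests how much raising vacuum_mem would help for these 219,177,133 deletions:

```python
# Rough estimate only; assumes ~60,000 s per index scan and that VACUUM
# remembers vacuum_mem_kb * 1024 / 6 dead tuple ids before each bulk delete.
DEAD_TUPLES = 219_177_133
INDEX_SCAN_SECONDS = 60_000  # ~16 hours per full index scan

def index_passes(vacuum_mem_kb, dead=DEAD_TUPLES):
    capacity = vacuum_mem_kb * 1024 // 6  # tuple ids remembered per pass
    return -(-dead // capacity)           # ceiling division

for mem_kb in (196_608, 1_572_864, 2_097_152):  # 192 MB, 1.5 GB, 2 GB
    passes = index_passes(mem_kb)
    hours = passes * INDEX_SCAN_SECONDS / 3600
    print(f"vacuum_mem={mem_kb} KB -> {passes} index scan(s), ~{hours:.0f} h")
```

By this estimate the current 192 MB setting needs 7 index scans, while 1.5 GB (268 million tuple ids) would hold all 219 million dead tuples and cut it to a single scan.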
From: Manfred Koizar [mailto:[EMAIL PROTECTED]
Sent: Saturday, April 24, 2004 1:57 PM
To: Shea,Dan [CIS]
Cc: 'Josh Berkus'; [EMAIL PROTECTED]
Subject: Re: [PERFORM] Why will vacuum not end?
On Sat, 24 Apr 2004 10:45:40 -0400, "Shea,Dan [CIS]" <[EMAIL PROTECTED]>
>[...] 87 GB table with a 39 GB index?
>The vacuum keeps redoing the index, but there is no indication as to why it
>is doing this.
If VACUUM finds a dead tuple, it does not immediately remove index
entries pointing to that tuple. It instead collects such tuple ids and
later does a bulk delete, i.e. scans the whole index and removes all
index items pointing to one of those tuples. The number of tuple ids
that can be remembered is controlled by vacuum_mem: it is
VacuumMem * 1024 / 6
Whenever this number of dead tuples has been found, VACUUM scans the
index (which takes ca. 60000 seconds, more than 16 hours), empties the
list and continues to scan the heap ...
From the number of dead tuples you can estimate how often your index
will be scanned. If dead tuples are evenly distributed, expect there to
be 15 index scans with your current vacuum_mem setting of 196608. So
your VACUUM will run for 11 days :-(
OTOH this would mean that there are 500 million dead tuples. Do you
think this is possible?
---------------------------(end of broadcast)---------------------------
TIP 8: explain analyze is your friend