Josh, how long should a vacuum take on an 87 GB table with a 39 GB index?
I do not think that the verbose option of vacuum is verbose enough.
The vacuum keeps redoing the index, but there is no indication as to why it
is doing this.
I see a lot of activity with transaction logs being recycled.
Dan,
> Josh, how long should a vacuum take on an 87 GB table with a 39 GB index?
Depends:
-- What's your disk support?
-- VACUUM, VACUUM ANALYZE, or VACUUM FULL?
-- What's your vacuum_mem setting?
-- What are checkpoint and wal settings?
> I see a lot of activity with transaction logs being
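For context, the knobs Josh lists all live in postgresql.conf. A 7.4-era fragment might look like this (the values here are illustrative only, not recommendations from the thread):

```
# postgresql.conf (7.4-era parameter names; values are illustrative)
vacuum_mem = 65536            # KB of memory for VACUUM's dead-tuple list
checkpoint_segments = 16      # WAL segments between automatic checkpoints
checkpoint_timeout = 300      # seconds between forced checkpoints
wal_buffers = 8               # disk-page buffers for WAL in shared memory
```

A too-small checkpoint_segments would explain the visible transaction-log recycling during a big delete/vacuum cycle.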
Tom,
> It's possible that Jan's recent buffer-management improvements will
> change the story as of 7.5. I kinda doubt it myself, but it'd be worth
> re-running any experiments you've done when you start working with 7.5.
Yes, Jan has indicated to me that he expects to make much heavier use of
On Sat, 24 Apr 2004 10:45:40 -0400, Shea,Dan [CIS] [EMAIL PROTECTED]
wrote:
> [...] 87 GB table with a 39 GB index?
> The vacuum keeps redoing the index, but there is no indication as to why it
> is doing this.
If VACUUM finds a dead tuple, it does not immediately remove index
entries pointing to
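Manfred's point can be made concrete with a quick estimate: VACUUM buffers dead-tuple pointers (TIDs, about 6 bytes each in the 7.x series) in vacuum_mem, and must rescan every index each time that buffer fills before it can resume the table scan. A sketch of the arithmetic, assuming those figures:

```python
# Rough estimate of how many times VACUUM must scan each index.
# Assumption: each dead-tuple pointer (TID) costs about 6 bytes of
# vacuum_mem, as in PostgreSQL 7.x; vacuum_mem is given in kilobytes.

TID_BYTES = 6

def index_passes(dead_tuples, vacuum_mem_kb):
    """VACUUM collects dead-tuple TIDs until vacuum_mem is full,
    cleans all indexes, then resumes the table scan -- so the number
    of index passes is the ceiling of dead tuples over buffer capacity."""
    tids_per_pass = vacuum_mem_kb * 1024 // TID_BYTES
    return -(-dead_tuples // tids_per_pass)  # ceiling division

# Dan's 219,177,133 deletions with the default 8 MB vacuum_mem:
print(index_passes(219_177_133, 8192))    # prints 157

# The same delete with vacuum_mem raised to 128 MB:
print(index_passes(219_177_133, 131072))  # prints 10
```

That order-of-magnitude difference is why the vacuum_mem setting dominates the runtime here: the 39 GB index gets walked on every pass.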
Manfred is indicating the reason it is taking so long is due to the number
of dead tuples in my index and the vacuum_mem setting.
The last delete that I did before starting a vacuum had 219,177,133
deletions.
Dan.
Dan,
> Josh, how long should a vacuum take on an 87 GB table with a 39 GB index?
> There were definitely 219,177,133 deletions.
> The deletions are most likely from the beginning; it was based on the
> reception_time of the data.
> I would rather not use re-index, unless it is faster than using vacuum.
> What do you think would be the best way to get around this?
Increase vacuum_mem to
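The message cuts off before the suggested value, but a per-session override in the 7.x series would look like this (192 MB is an illustrative figure, not the one from the mail, and the table name is hypothetical):

```sql
-- Raise vacuum_mem for this session only (value in KB; illustrative)
SET vacuum_mem = 196608;
-- Then run the vacuum; "big_table" is a hypothetical name
VACUUM VERBOSE big_table;
```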
On Sat, 24 Apr 2004 15:48:19 -0400, Shea,Dan [CIS] [EMAIL PROTECTED]
wrote:
> Manfred is indicating the reason it is taking so long is due to the number
> of dead tuples in my index and the vacuum_mem setting.
<nitpicking>
Not dead tuples in the index, but dead tuples in the table.
</nitpicking>
The
On Sat, 24 Apr 2004 15:58:08 -0400, Shea,Dan [CIS] [EMAIL PROTECTED]
wrote:
> There were definitely 219,177,133 deletions.
> The deletions are most likely from the beginning; it was based on the
> reception_time of the data.
> I would rather not use re-index, unless it is faster than using vacuum.
I
Dan,
> There were definitely 219,177,133 deletions.
> The deletions are most likely from the beginning; it was based on the
> reception_time of the data.
You need to run VACUUM more often, I think. Vacuuming out 219 million dead
tuples is going to take a long time no matter how you look at it.
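One concrete way to run VACUUM more often is a scheduled job with the bundled vacuumdb utility (the schedule, database name, and table name below are assumptions, not from the thread):

```
# crontab entry: nightly plain VACUUM ANALYZE of the hot table,
# so dead tuples never pile up into the hundreds of millions
0 2 * * *  vacuumdb --analyze --quiet --table big_table mydb
```

Frequent plain VACUUMs keep each run's dead-tuple count small enough to fit in vacuum_mem, so every index gets scanned only once per run.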