Bruce Momjian <[EMAIL PROTECTED]> writes:
> Agreed.  What I am wondering is with our system where every update gets
> a new row, how would this help us?  I know we try to keep an update on
> the same row as the original, but is there any significant performance
> benefit to doing that which would offset the compaction advantage?
Hm. Posit a system where all transactions are short updates executed in
autocommit mode. In such a system, as soon as a transaction commits it would
be only a very short time before the previous version of the record became a
dead tuple. If every backend kept a small list of the tuples it had marked
deleted, and whenever it was idle checked whether they had become dead yet,
it might avoid much of the need for vacuum. And in such a circumstance I
think you wouldn't need a pctfree of more than 50%, even on a busy table:
every tuple would need about one extra slot.

This would only be a reasonable idea if a) the list of potentially dead
tuples is short, and if it overflows the backend simply forgets the excess
entries, leaving them for vacuum to deal with; and b) the backend only
checks its potentially dead tuples when it is otherwise idle.

Even so it would be less efficient than a batch vacuum, and it would consume
i/o bandwidth (to maintain the indexes, even if the heap buffer is in RAM);
just because that backend is idle doesn't mean other backends couldn't have
used that bandwidth. But I think it would address a lot of the complaints
about vacuum, and it would make it more feasible to use a pctfree parameter
to keep clustering effective.

-- 
greg

---------------------------(end of broadcast)---------------------------
TIP 9: the planner will ignore your desire to choose an index scan if
       your joining column's datatypes do not match
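The scheme above can be sketched as a toy model. This is purely illustrative
(the names `Backend`, `mark_deleted`, `reclaim_when_idle`, and the simplified
rule "deleting xid older than the oldest active xid means the tuple is dead"
are my inventions, not PostgreSQL internals), but it shows the two required
properties: a small bounded list whose overflow is silently left for vacuum,
and reclamation that happens only at an explicit idle point.

```python
from collections import deque

MAX_PENDING = 8  # small bound; entries beyond it are simply forgotten


class Backend:
    """Toy model of a per-backend potentially-dead-tuple list (hypothetical;
    not actual PostgreSQL code)."""

    def __init__(self):
        # deque with maxlen: appending past the bound drops the oldest
        # entry, which is exactly "forget it and leave it for vacuum"
        self.pending = deque(maxlen=MAX_PENDING)

    def mark_deleted(self, tuple_id, deleting_xid):
        """Remember a tuple this backend deleted, with the deleting xid."""
        self.pending.append((tuple_id, deleting_xid))

    def reclaim_when_idle(self, oldest_active_xid, free_slot):
        """Called only when the backend is otherwise idle: any entry whose
        deleting transaction committed before every active snapshot began
        is dead, so its slot can be handed back for reuse."""
        still_pending = deque(maxlen=MAX_PENDING)
        while self.pending:
            tid, xid = self.pending.popleft()
            if xid < oldest_active_xid:  # no snapshot can still see it
                free_slot(tid)
            else:
                still_pending.append((tid, xid))
        self.pending = still_pending


# Hypothetical usage: two deletes by xids 5 and 12; oldest active xid is 10,
# so only the first tuple is certainly dead and reclaimable.
freed = []
b = Backend()
b.mark_deleted("t1", 5)
b.mark_deleted("t2", 12)
b.reclaim_when_idle(10, freed.append)
```

After the idle-time pass, `freed` holds only `"t1"`; `"t2"` stays on the
pending list because some snapshot may still see it.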