Patch applied.  Thanks.

---------------------------------------------------------------------------
Gregory Stark wrote:
> 
> Tom Lane <[EMAIL PROTECTED]> writes:
> 
> > The reason the patch is so short is that it's a kluge.  If we really
> > cared about supporting this case, more wide-ranging changes would be
> > needed (eg, there's no need to eat maintenance_work_mem worth of RAM
> > for the dead-TIDs array); and a decent respect to the opinions of
> > mankind would require some attention to updating the header comments
> > and function descriptions, too.
> 
> The only part that seems klugy to me is how it releases the lock and
> reacquires it rather than waiting in the first place until it can
> acquire the lock.  Fixed that, and changed lazy_space_alloc to allocate
> only as much space as is really necessary.
> 
> Gosh, I've never been accused of offending all mankind before.
> 
> 
> --- vacuumlazy.c	31 Jul 2006 21:09:00 +0100	1.76
> +++ vacuumlazy.c	28 Aug 2006 09:58:41 +0100
> @@ -16,6 +16,10 @@
>   * perform a pass of index cleanup and page compaction, then resume the heap
>   * scan with an empty TID array.
>   *
> + * As a special exception, if we're processing a table with no indexes we can
> + * vacuum each page as we go, so we don't need to allocate more space than
> + * enough to hold as many heap tuples as fit on one page.
> + *
>   * We can limit the storage for page free space to MaxFSMPages entries,
>   * since that's the most the free space map will be willing to remember
>   * anyway.  If the relation has fewer than that many pages with free space,
> @@ -106,7 +110,7 @@
>  					 TransactionId OldestXmin);
>  static BlockNumber count_nondeletable_pages(Relation onerel,
>  					 LVRelStats *vacrelstats,
>  					 TransactionId OldestXmin);
> -static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks);
> +static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks, unsigned nindexes);
>  static void lazy_record_dead_tuple(LVRelStats *vacrelstats,
>  					 ItemPointer itemptr);
>  static void lazy_record_free_space(LVRelStats *vacrelstats,
> @@ -206,7 +210,8 @@
>   *		This routine sets commit status bits, builds lists of dead tuples
>   *		and pages with free space, and calculates statistics on the number
>   *		of live tuples in the heap.  When done, or when we run low on space
> - *		for dead-tuple TIDs, invoke vacuuming of indexes and heap.
> + *		for dead-tuple TIDs, or after every page if the table has no indexes,
> + *		invoke vacuuming of indexes and heap.
>   *
>   *		It also updates the minimum Xid found anywhere on the table in
>   *		vacrelstats->minxid, for later storing it in pg_class.relminxid.
> @@ -247,7 +252,7 @@
>  	vacrelstats->rel_pages = nblocks;
>  	vacrelstats->nonempty_pages = 0;
> 
> -	lazy_space_alloc(vacrelstats, nblocks);
> +	lazy_space_alloc(vacrelstats, nblocks, nindexes);
> 
>  	for (blkno = 0; blkno < nblocks; blkno++)
>  	{
> @@ -282,8 +287,14 @@
> 
>  		buf = ReadBuffer(onerel, blkno);
> 
> -		/* In this phase we only need shared access to the buffer */
> -		LockBuffer(buf, BUFFER_LOCK_SHARE);
> +		/* In this phase we only need shared access to the buffer, unless
> +		 * we're going to do the vacuuming now, which we do if there are no
> +		 * indexes
> +		 */
> +
> +		if (nindexes)
> +			LockBuffer(buf, BUFFER_LOCK_SHARE);
> +		else
> +			LockBufferForCleanup(buf);
> 
>  		page = BufferGetPage(buf);
> 
> @@ -450,6 +461,12 @@
>  		{
>  			lazy_record_free_space(vacrelstats, blkno,
>  								   PageGetFreeSpace(page));
> +		} else if (!nindexes) {
> +			/* If there are no indexes we can vacuum the page right now
> +			 * instead of doing a second scan */
> +			lazy_vacuum_page(onerel, blkno, buf, 0, vacrelstats);
> +			lazy_record_free_space(vacrelstats, blkno, PageGetFreeSpace(BufferGetPage(buf)));
> +			vacrelstats->num_dead_tuples = 0;
>  		}
> 
>  		/* Remember the location of the last page with nonremovable tuples */
> @@ -891,16 +908,20 @@
>   *		See the comments at the head of this file for rationale.
>   */
>  static void
> -lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
> +lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks, unsigned nindexes)
>  {
>  	long		maxtuples;
>  	int			maxpages;
> 
> -	maxtuples = (maintenance_work_mem * 1024L) / sizeof(ItemPointerData);
> -	maxtuples = Min(maxtuples, INT_MAX);
> -	maxtuples = Min(maxtuples, MaxAllocSize / sizeof(ItemPointerData));
> -	/* stay sane if small maintenance_work_mem */
> -	maxtuples = Max(maxtuples, MaxHeapTuplesPerPage);
> +	if (nindexes) {
> +		maxtuples = (maintenance_work_mem * 1024L) / sizeof(ItemPointerData);
> +		maxtuples = Min(maxtuples, INT_MAX);
> +		maxtuples = Min(maxtuples, MaxAllocSize / sizeof(ItemPointerData));
> +		/* stay sane if small maintenance_work_mem */
> +		maxtuples = Max(maxtuples, MaxHeapTuplesPerPage);
> +	} else {
> +		maxtuples = MaxHeapTuplesPerPage;
> +	}
> 
>  	vacrelstats->num_dead_tuples = 0;
>  	vacrelstats->max_dead_tuples = (int) maxtuples;
> 
> -- 
> Gregory Stark
> EnterpriseDB                          http://www.enterprisedb.com
> 
> ---------------------------(end of broadcast)---------------------------
> TIP 4: Have you searched our list archives?
> 
>                http://archives.postgresql.org

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDB    http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---------------------------(end of broadcast)---------------------------
TIP 9: In versions below 8.0, the planner will ignore your desire to
       choose an index scan if your joining column's datatypes do not
       match