On Sun, Mar 17, 2024 at 2:53 AM Thomas Munro <thomas.mu...@gmail.com> wrote:
>
> On Tue, Mar 12, 2024 at 10:03 AM Melanie Plageman
> <melanieplage...@gmail.com> wrote:
> > I've rebased the attached v10 over top of the changes to
> > lazy_scan_heap() Heikki just committed and over the v6 streaming read
> > patch set. I started testing them and see that you are right, we no
> > longer pin too many buffers. However, the uncached example below is
> > now slower with streaming read than on master -- it looks to be
> > because it is doing twice as many WAL writes and syncs. I'm still
> > investigating why that is.
--snip--

> 4. For learning/exploration only, I rebased my experimental vectored
> FlushBuffers() patch, which teaches the checkpointer to write relation
> data out using smgrwritev(). The checkpointer explicitly sorts
> blocks, but I think ring buffers should naturally often contain
> consecutive blocks in ring order. Highly experimental POC code pushed
> to a public branch[2], but I am not proposing anything here, just
> trying to understand things. The nicest looking system call trace was
> with BUFFER_USAGE_LIMIT set to 512kB, so it could do its writes, reads
> and WAL writes 128kB at a time:
>
> pwrite(32,...,131072,0xfc6000) = 131072 (0x20000)
> fdatasync(32) = 0 (0x0)
> pwrite(27,...,131072,0x6c0000) = 131072 (0x20000)
> pread(27,...,131072,0x73e000) = 131072 (0x20000)
> pwrite(27,...,131072,0x6e0000) = 131072 (0x20000)
> pread(27,...,131072,0x75e000) = 131072 (0x20000)
> pwritev(27,[...],3,0x77e000) = 131072 (0x20000)
> preadv(27,[...],3,0x77e000) = 131072 (0x20000)
>
> That was a fun experiment, but... I recognise that efficient cleaning
> of ring buffers is a Hard Problem requiring more concurrency: it's
> just too late to be flushing that WAL. But we also don't want to
> start writing back data immediately after dirtying pages (cf. OS
> write-behind for big sequential writes in traditional Unixes), because
> we're not allowed to write data out without writing the WAL first and
> we currently need to build up bigger WAL writes to do so efficiently
> (cf. some other systems that can write out fragments of WAL
> concurrently so the latency-vs-throughput trade-off doesn't have to be
> so extreme). So we want to defer writing it, but not too long. We
> need something cleaning our buffers (or at least flushing the
> associated WAL, but preferably also writing the data) not too late and
> not too early, and more in sync with our scan than the WAL writer is.
> What that machinery should look like I don't know (but I believe
> Andres has ideas).

I've attached a WIP v11 streaming vacuum patch set, rebased over
master (by Thomas), so that I could add a CF entry for it. It still
has the problem with the extra WAL write and fsync calls investigated
by Thomas above.

Thomas has some work in progress doing streaming write-behind to
alleviate the issues with the buffer access strategy and streaming
reads. When he gets a version of that ready to share, he will start a
new "Streaming Vacuum" thread.

- Melanie
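P.S. For anyone following along, here is a rough standalone sketch of
the vectored-write shape visible in Thomas's trace above: consecutive
blocks coalesced into a single pwritev() call instead of one pwrite()
per block. This is a plain POSIX illustration only, not the POC code;
the file descriptor, block size, and page array are hypothetical
stand-ins.

#include <sys/uio.h>
#include <unistd.h>

#define BLOCK_SIZE 8192			/* stand-in for BLCKSZ */

/*
 * Write nblocks in-memory pages destined for consecutive block numbers
 * starting at first_block, using one vectored system call instead of
 * nblocks separate pwrite() calls.
 */
static ssize_t
write_consecutive_blocks(int fd, unsigned first_block,
						 char *pages[], int nblocks)
{
	struct iovec iov[16];		/* caller limits nblocks to 16 */
	off_t		offset = (off_t) first_block * BLOCK_SIZE;

	for (int i = 0; i < nblocks; i++)
	{
		iov[i].iov_base = pages[i];
		iov[i].iov_len = BLOCK_SIZE;
	}

	return pwritev(fd, iov, nblocks, offset);
}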
From 050b6c3fa73c9153aeef58fcd306533c1008802e Mon Sep 17 00:00:00 2001
From: Thomas Munro <thomas.mu...@gmail.com>
Date: Fri, 26 Apr 2024 08:32:44 +1200
Subject: [PATCH v11 2/3] Refactor tidstore.c memory management.

Previously, TidStoreIterateNext() would expand the set of offsets for
each block into a buffer that it overwrote each time.  In order to be
able to collect the offsets for multiple blocks before working with
them, change the contract.  Now, the offsets are obtained by a separate
call to TidStoreGetBlockOffsets(), which can be called at a later time,
and TidStoreIterResult objects are safe to copy and store in a queue.

This will be used by a later patch, to avoid the need for expensive
extra copies of the offset array and associated memory management.
---
 src/backend/access/common/tidstore.c          | 68 +++++++++----------
 src/backend/access/heap/vacuumlazy.c          |  9 ++-
 src/include/access/tidstore.h                 | 12 ++--
 .../modules/test_tidstore/test_tidstore.c     |  9 ++-
 4 files changed, 53 insertions(+), 45 deletions(-)

diff --git a/src/backend/access/common/tidstore.c b/src/backend/access/common/tidstore.c
index fb3949d69f6..c3c1987204b 100644
--- a/src/backend/access/common/tidstore.c
+++ b/src/backend/access/common/tidstore.c
@@ -147,9 +147,6 @@ struct TidStoreIter
 	TidStoreIterResult output;
 };
 
-static void tidstore_iter_extract_tids(TidStoreIter *iter, BlockNumber blkno,
-									   BlocktableEntry *page);
-
 /*
  * Create a TidStore. The TidStore will live in the memory context that is
  * CurrentMemoryContext at the time of this call. The TID storage, backed
@@ -486,13 +483,6 @@ TidStoreBeginIterate(TidStore *ts)
 	iter = palloc0(sizeof(TidStoreIter));
 	iter->ts = ts;
 
-	/*
-	 * We start with an array large enough to contain at least the offsets
-	 * from one completely full bitmap element.
-	 */
-	iter->output.max_offset = 2 * BITS_PER_BITMAPWORD;
-	iter->output.offsets = palloc(sizeof(OffsetNumber) * iter->output.max_offset);
-
 	if (TidStoreIsShared(ts))
 		iter->tree_iter.shared = shared_ts_begin_iterate(ts->tree.shared);
 	else
@@ -503,9 +493,9 @@ TidStoreBeginIterate(TidStore *ts)
 
 /*
- * Scan the TidStore and return the TIDs of the next block. The offsets in
- * each iteration result are ordered, as are the block numbers over all
- * iterations.
+ * Return a result that contains the next block number and that can be used to
+ * obtain the set of offsets by calling TidStoreGetBlockOffsets().  The result
+ * is copyable.
  */
 TidStoreIterResult *
 TidStoreIterateNext(TidStoreIter *iter)
@@ -521,10 +511,10 @@ TidStoreIterateNext(TidStoreIter *iter)
 	if (page == NULL)
 		return NULL;
 
-	/* Collect TIDs from the key-value pair */
-	tidstore_iter_extract_tids(iter, (BlockNumber) key, page);
+	iter->output.blkno = key;
+	iter->output.internal_page = page;
 
-	return &(iter->output);
+	return &iter->output;
 }
 
 /*
@@ -540,7 +530,6 @@ TidStoreEndIterate(TidStoreIter *iter)
 	else
 		local_ts_end_iterate(iter->tree_iter.local);
 
-	pfree(iter->output.offsets);
 	pfree(iter);
 }
 
@@ -575,16 +564,19 @@ TidStoreGetHandle(TidStore *ts)
 	return (dsa_pointer) shared_ts_get_handle(ts->tree.shared);
 }
 
-/* Extract TIDs from the given key-value pair */
-static void
-tidstore_iter_extract_tids(TidStoreIter *iter, BlockNumber blkno,
-						   BlocktableEntry *page)
+/*
+ * Given a TidStoreIterResult returned by TidStoreIterateNext(), extract the
+ * offset numbers.  Returns the number of offsets filled in, if <=
+ * max_offsets.  Otherwise, fills in as much as it can in the given space, and
+ * returns the size of the buffer that would be needed.
+ */
+int
+TidStoreGetBlockOffsets(TidStoreIterResult *result,
+						OffsetNumber *offsets,
+						int max_offsets)
 {
-	TidStoreIterResult *result = (&iter->output);
-	int			wordnum;
-
-	result->num_offsets = 0;
-	result->blkno = blkno;
+	BlocktableEntry *page = result->internal_page;
+	int			num_offsets = 0;
 
 	if (page->header.nwords == 0)
 	{
@@ -592,31 +584,33 @@ tidstore_iter_extract_tids(TidStoreIter *iter, BlockNumber blkno,
 		for (int i = 0; i < NUM_FULL_OFFSETS; i++)
 		{
 			if (page->header.full_offsets[i] != InvalidOffsetNumber)
-				result->offsets[result->num_offsets++] = page->header.full_offsets[i];
+			{
+				if (num_offsets < max_offsets)
+					offsets[num_offsets] = page->header.full_offsets[i];
+				num_offsets++;
+			}
 		}
 	}
 	else
 	{
-		for (wordnum = 0; wordnum < page->header.nwords; wordnum++)
+		for (int wordnum = 0; wordnum < page->header.nwords; wordnum++)
 		{
 			bitmapword	w = page->words[wordnum];
 			int			off = wordnum * BITS_PER_BITMAPWORD;
 
-			/* Make sure there is enough space to add offsets */
-			if ((result->num_offsets + BITS_PER_BITMAPWORD) > result->max_offset)
-			{
-				result->max_offset *= 2;
-				result->offsets = repalloc(result->offsets,
-										   sizeof(OffsetNumber) * result->max_offset);
-			}
-
 			while (w != 0)
 			{
 				if (w & 1)
-					result->offsets[result->num_offsets++] = (OffsetNumber) off;
+				{
+					if (num_offsets < max_offsets)
+						offsets[num_offsets] = (OffsetNumber) off;
+					num_offsets++;
+				}
 				off++;
 				w >>= 1;
 			}
 		}
 	}
+
+	return num_offsets;
 }
diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index f76ef2e7c63..19c13671666 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -2144,12 +2144,17 @@ lazy_vacuum_heap_rel(LVRelState *vacrel)
 		Buffer		buf;
 		Page		page;
 		Size		freespace;
+		OffsetNumber offsets[MaxOffsetNumber];
+		int			num_offsets;
 
 		vacuum_delay_point();
 
 		blkno = iter_result->blkno;
 		vacrel->blkno = blkno;
 
+		num_offsets = TidStoreGetBlockOffsets(iter_result, offsets, lengthof(offsets));
+		Assert(num_offsets <= lengthof(offsets));
+
 		/*
 		 * Pin the visibility map page in case we need to mark the page
 		 * all-visible. In most cases this will be very cheap, because we'll
@@ -2161,8 +2166,8 @@ lazy_vacuum_heap_rel(LVRelState *vacrel)
 		buf = ReadBufferExtended(vacrel->rel, MAIN_FORKNUM, blkno, RBM_NORMAL,
 								 vacrel->bstrategy);
 		LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);
-		lazy_vacuum_heap_page(vacrel, blkno, buf, iter_result->offsets,
-							  iter_result->num_offsets, vmbuffer);
+		lazy_vacuum_heap_page(vacrel, blkno, buf, offsets,
+							  num_offsets, vmbuffer);
 
 		/* Now that we've vacuumed the page, record its available space */
 		page = BufferGetPage(buf);
diff --git a/src/include/access/tidstore.h b/src/include/access/tidstore.h
index 32aa9995193..d95cabd7b5e 100644
--- a/src/include/access/tidstore.h
+++ b/src/include/access/tidstore.h
@@ -20,13 +20,14 @@ typedef struct TidStore TidStore;
 typedef struct TidStoreIter TidStoreIter;
 
-/* Result struct for TidStoreIterateNext */
+/*
+ * Result struct for TidStoreIterateNext.  This is copyable, but should be
+ * treated as opaque.  Call TidStoreGetBlockOffsets() to obtain the offsets.
+ */
 typedef struct TidStoreIterResult
 {
 	BlockNumber blkno;
-	int			max_offset;
-	int			num_offsets;
-	OffsetNumber *offsets;
+	void	   *internal_page;
 } TidStoreIterResult;
 
 extern TidStore *TidStoreCreateLocal(size_t max_bytes, bool insert_only);
@@ -42,6 +43,9 @@ extern void TidStoreSetBlockOffsets(TidStore *ts, BlockNumber blkno, OffsetNumbe
 extern bool TidStoreIsMember(TidStore *ts, ItemPointer tid);
 extern TidStoreIter *TidStoreBeginIterate(TidStore *ts);
 extern TidStoreIterResult *TidStoreIterateNext(TidStoreIter *iter);
+extern int	TidStoreGetBlockOffsets(TidStoreIterResult *result,
+									OffsetNumber *offsets,
+									int max_offsets);
 extern void TidStoreEndIterate(TidStoreIter *iter);
 extern size_t TidStoreMemoryUsage(TidStore *ts);
 extern dsa_pointer TidStoreGetHandle(TidStore *ts);
diff --git a/src/test/modules/test_tidstore/test_tidstore.c b/src/test/modules/test_tidstore/test_tidstore.c
index 3f6a11bf21c..94ddcf1de82 100644
--- a/src/test/modules/test_tidstore/test_tidstore.c
+++ b/src/test/modules/test_tidstore/test_tidstore.c
@@ -267,9 +267,14 @@ check_set_block_offsets(PG_FUNCTION_ARGS)
 	iter = TidStoreBeginIterate(tidstore);
 	while ((iter_result = TidStoreIterateNext(iter)) != NULL)
 	{
-		for (int i = 0; i < iter_result->num_offsets; i++)
+		OffsetNumber offsets[MaxOffsetNumber];
+		int			num_offsets;
+
+		num_offsets = TidStoreGetBlockOffsets(iter_result, offsets, lengthof(offsets));
+		Assert(num_offsets <= lengthof(offsets));
+
+		for (int i = 0; i < num_offsets; i++)
 			ItemPointerSet(&(items.iter_tids[num_iter_tids++]), iter_result->blkno,
-						   iter_result->offsets[i]);
+						   offsets[i]);
 	}
 	TidStoreEndIterate(iter);
 	TidStoreUnlock(tidstore);
-- 
2.34.1
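In case it helps review, the two-step iteration contract introduced by
the patch above reduces to the following usage pattern. This is a
sketch only, distilled from the patch itself: the walk_dead_items()
helper and the DEBUG1 logging are illustrative, not part of the patch,
and locking and error handling are elided.

#include "postgres.h"
#include "access/tidstore.h"
#include "storage/off.h"

static void
walk_dead_items(TidStore *ts)
{
	TidStoreIter *iter = TidStoreBeginIterate(ts);
	TidStoreIterResult *res;

	while ((res = TidStoreIterateNext(iter)) != NULL)
	{
		OffsetNumber offsets[MaxOffsetNumber];
		int			num_offsets;

		/*
		 * res is small and copyable, so it could be queued and decoded
		 * later; here we decode the offsets immediately.
		 */
		num_offsets = TidStoreGetBlockOffsets(res, offsets, lengthof(offsets));

		for (int i = 0; i < num_offsets; i++)
			elog(DEBUG1, "dead item at (%u,%u)",
				 res->blkno, (unsigned) offsets[i]);
	}
	TidStoreEndIterate(iter);
}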
From fe4ab2059580d52c2855a6ea2c6bac80d06970c4 Mon Sep 17 00:00:00 2001
From: Melanie Plageman <melanieplage...@gmail.com>
Date: Mon, 11 Mar 2024 16:19:56 -0400
Subject: [PATCH v11 1/3] Use streaming I/O in VACUUM first pass.

Now vacuum's first pass, which HOT-prunes and records the TIDs of
non-removable dead tuples, uses the streaming read API, by converting
heap_vac_scan_next_block() to a read stream callback.

Author: Melanie Plageman <melanieplage...@gmail.com>
---
 src/backend/access/heap/vacuumlazy.c | 80 +++++++++++++++++-----------
 1 file changed, 49 insertions(+), 31 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 3f88cf1e8ef..f76ef2e7c63 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -55,6 +55,7 @@
 #include "storage/bufmgr.h"
 #include "storage/freespace.h"
 #include "storage/lmgr.h"
+#include "storage/read_stream.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/pg_rusage.h"
@@ -229,8 +230,9 @@ typedef struct LVSavedErrInfo
 
 /* non-export function prototypes */
 static void lazy_scan_heap(LVRelState *vacrel);
-static bool heap_vac_scan_next_block(LVRelState *vacrel, BlockNumber *blkno,
-									 bool *all_visible_according_to_vm);
+static BlockNumber heap_vac_scan_next_block(ReadStream *stream,
+											void *callback_private_data,
+											void *per_buffer_data);
 static void find_next_unskippable_block(LVRelState *vacrel, bool *skipsallvis);
 static bool lazy_scan_new_or_empty(LVRelState *vacrel,
 								   Buffer buf, BlockNumber blkno, Page page,
@@ -815,10 +817,11 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,
 static void
 lazy_scan_heap(LVRelState *vacrel)
 {
+	Buffer		buf;
+	ReadStream *stream;
 	BlockNumber rel_pages = vacrel->rel_pages,
-				blkno,
 				next_fsm_block_to_vacuum = 0;
-	bool		all_visible_according_to_vm;
+	bool	   *all_visible_according_to_vm;
 
 	TidStore   *dead_items = vacrel->dead_items;
 	VacDeadItemsInfo *dead_items_info = vacrel->dead_items_info;
@@ -836,19 +839,33 @@ lazy_scan_heap(LVRelState *vacrel)
 	initprog_val[2] = dead_items_info->max_bytes;
 	pgstat_progress_update_multi_param(3, initprog_index, initprog_val);
 
+	stream = read_stream_begin_relation(READ_STREAM_MAINTENANCE,
+										vacrel->bstrategy,
+										vacrel->rel,
+										MAIN_FORKNUM,
+										heap_vac_scan_next_block,
+										vacrel,
+										sizeof(bool));
+
 	/* Initialize for the first heap_vac_scan_next_block() call */
 	vacrel->current_block = InvalidBlockNumber;
 	vacrel->next_unskippable_block = InvalidBlockNumber;
 	vacrel->next_unskippable_allvis = false;
 	vacrel->next_unskippable_vmbuffer = InvalidBuffer;
 
-	while (heap_vac_scan_next_block(vacrel, &blkno, &all_visible_according_to_vm))
+	while (BufferIsValid(buf = read_stream_next_buffer(stream,
+													   (void **) &all_visible_according_to_vm)))
 	{
-		Buffer		buf;
+		BlockNumber blkno;
 		Page		page;
 		bool		has_lpdead_items;
 		bool		got_cleanup_lock = false;
 
+		vacrel->blkno = blkno = BufferGetBlockNumber(buf);
+
+		CheckBufferIsPinnedOnce(buf);
+		page = BufferGetPage(buf);
+
 		vacrel->scanned_pages++;
 
 		/* Report as block scanned, update error traceback information */
@@ -914,10 +931,6 @@ lazy_scan_heap(LVRelState *vacrel)
 		 */
 		visibilitymap_pin(vacrel->rel, blkno, &vmbuffer);
 
-		buf = ReadBufferExtended(vacrel->rel, MAIN_FORKNUM, blkno, RBM_NORMAL,
-								 vacrel->bstrategy);
-		page = BufferGetPage(buf);
-
 		/*
 		 * We need a buffer cleanup lock to prune HOT chains and defragment
 		 * the page in lazy_scan_prune.  But when it's not possible to acquire
@@ -973,7 +986,7 @@ lazy_scan_heap(LVRelState *vacrel)
 		 */
 		if (got_cleanup_lock)
 			lazy_scan_prune(vacrel, buf, blkno, page,
-							vmbuffer, all_visible_according_to_vm,
+							vmbuffer, *all_visible_according_to_vm,
 							&has_lpdead_items);
 
 		/*
@@ -1027,7 +1040,7 @@ lazy_scan_heap(LVRelState *vacrel)
 		ReleaseBuffer(vmbuffer);
 
 	/* report that everything is now scanned */
-	pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_SCANNED, blkno);
+	pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_SCANNED, rel_pages);
 
 	/* now we can compute the new value for pg_class.reltuples */
 	vacrel->new_live_tuples = vac_estimate_reltuples(vacrel->rel, rel_pages,
@@ -1042,6 +1055,8 @@ lazy_scan_heap(LVRelState *vacrel)
 		Max(vacrel->new_live_tuples, 0) +
 		vacrel->recently_dead_tuples +
 		vacrel->missed_dead_tuples;
 
+	read_stream_end(stream);
+
 	/*
 	 * Do index vacuuming (call each index's ambulkdelete routine), then do
 	 * related heap vacuuming
@@ -1053,11 +1068,11 @@ lazy_scan_heap(LVRelState *vacrel)
 	 * Vacuum the remainder of the Free Space Map.  We must do this whether or
 	 * not there were indexes, and whether or not we bypassed index vacuuming.
 	 */
-	if (blkno > next_fsm_block_to_vacuum)
-		FreeSpaceMapVacuumRange(vacrel->rel, next_fsm_block_to_vacuum, blkno);
+	if (rel_pages > next_fsm_block_to_vacuum)
+		FreeSpaceMapVacuumRange(vacrel->rel, next_fsm_block_to_vacuum, rel_pages);
 
 	/* report all blocks vacuumed */
-	pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_VACUUMED, blkno);
+	pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_VACUUMED, rel_pages);
 
 	/* Do final index cleanup (call each index's amvacuumcleanup routine) */
 	if (vacrel->nindexes > 0 && vacrel->do_index_cleanup)
@@ -1067,14 +1082,14 @@ lazy_scan_heap(LVRelState *vacrel)
 /*
  *	heap_vac_scan_next_block() -- get next block for vacuum to process
  *
- * lazy_scan_heap() calls here every time it needs to get the next block to
- * prune and vacuum.  The function uses the visibility map, vacuum options,
- * and various thresholds to skip blocks which do not need to be processed and
- * sets blkno to the next block to process.
+ * The read stream invokes this callback every time lazy_scan_heap() needs
+ * the next block to prune and vacuum.  The function uses the visibility map,
+ * vacuum options, and various thresholds to skip blocks which do not need to
+ * be processed, and returns the next block to process or InvalidBlockNumber
+ * if there are none remaining.
  *
- * The block number and visibility status of the next block to process are set
- * in *blkno and *all_visible_according_to_vm.  The return value is false if
- * there are no further blocks to process.
+ * The visibility status of the next block to process is set in
+ * per_buffer_data.
 *
 * vacrel is an in/out parameter here.  Vacuum options and information about
 * the relation are read.  vacrel->skippedallvis is set if we skip a block
@@ -1081,11 +1096,14 @@
 * relfrozenxid in that case.  vacrel also holds information about the next
 * unskippable block, as bookkeeping for this function.
 */
-static bool
-heap_vac_scan_next_block(LVRelState *vacrel, BlockNumber *blkno,
-						 bool *all_visible_according_to_vm)
+static BlockNumber
+heap_vac_scan_next_block(ReadStream *stream,
+						 void *callback_private_data,
+						 void *per_buffer_data)
 {
 	BlockNumber next_block;
+	LVRelState *vacrel = callback_private_data;
+	bool	   *all_visible_according_to_vm = per_buffer_data;
 
 	/* relies on InvalidBlockNumber + 1 overflowing to 0 on first call */
 	next_block = vacrel->current_block + 1;
@@ -1099,8 +1117,8 @@ heap_vac_scan_next_block(LVRelState *vacrel, BlockNumber *blkno,
 			ReleaseBuffer(vacrel->next_unskippable_vmbuffer);
 			vacrel->next_unskippable_vmbuffer = InvalidBuffer;
 		}
-		*blkno = vacrel->rel_pages;
-		return false;
+		vacrel->current_block = vacrel->rel_pages;
+		return InvalidBlockNumber;
 	}
 
 	/*
@@ -1149,9 +1167,9 @@ heap_vac_scan_next_block(LVRelState *vacrel, BlockNumber *blkno,
 		 * but chose not to.  We know that they are all-visible in the VM,
 		 * otherwise they would've been unskippable.
 		 */
-		*blkno = vacrel->current_block = next_block;
+		vacrel->current_block = next_block;
 		*all_visible_according_to_vm = true;
-		return true;
+		return vacrel->current_block;
 	}
 	else
 	{
		/*
		 * 2. We reached the next unskippable block.
		 */
 		Assert(next_block == vacrel->next_unskippable_block);
 
-		*blkno = vacrel->current_block = next_block;
+		vacrel->current_block = next_block;
 		*all_visible_according_to_vm = vacrel->next_unskippable_allvis;
-		return true;
+		return vacrel->current_block;
 	}
 }
-- 
2.34.1
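For reviewers new to the read stream API, the conversion in the patch
above boils down to the pattern below: a callback supplies block
numbers (and may fill in per-buffer data that travels with the I/O),
and the consumer loop replaces explicit ReadBufferExtended() calls.
This is a simplified sketch only -- ScanState and both function names
are hypothetical, and error handling and the real work are elided.

#include "postgres.h"
#include "common/relpath.h"
#include "storage/bufmgr.h"
#include "storage/read_stream.h"
#include "utils/rel.h"

typedef struct ScanState
{
	Relation	rel;
	BlockNumber next;			/* next block to return */
	BlockNumber nblocks;
} ScanState;

/* Callback: produce the next block number, or end the stream. */
static BlockNumber
scan_next_block(ReadStream *stream, void *callback_private_data,
				void *per_buffer_data)
{
	ScanState  *scan = callback_private_data;
	bool	   *flag = per_buffer_data;

	if (scan->next >= scan->nblocks)
		return InvalidBlockNumber;	/* end of stream */

	*flag = false;				/* per-buffer payload, delivered with the buffer */
	return scan->next++;
}

/* Consumer: pull pinned buffers from the stream instead of reading blocks. */
static void
scan_relation(ScanState *scan, BufferAccessStrategy strategy)
{
	ReadStream *stream;
	Buffer		buf;
	bool	   *flag;

	stream = read_stream_begin_relation(READ_STREAM_MAINTENANCE,
										strategy,
										scan->rel,
										MAIN_FORKNUM,
										scan_next_block,
										scan,
										sizeof(bool));

	while (BufferIsValid(buf = read_stream_next_buffer(stream,
													   (void **) &flag)))
	{
		/* ... work on BufferGetBlockNumber(buf), consult *flag ... */
		ReleaseBuffer(buf);
	}
	read_stream_end(stream);
}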
From 5c74eccade69374a449f0b0fd4003545863c9538 Mon Sep 17 00:00:00 2001
From: Melanie Plageman <melanieplage...@gmail.com>
Date: Tue, 27 Feb 2024 14:35:36 -0500
Subject: [PATCH v11 3/3] Use streaming I/O in VACUUM second pass.

Now vacuum's second pass, which removes dead items referring to dead
tuples collected in the first pass, uses a read stream that looks
ahead in the TidStore.

Originally developed by Melanie, refactored to work with the new
TidStore by Thomas.

Author: Melanie Plageman <melanieplage...@gmail.com>
Author: Thomas Munro <thomas.mu...@gmail.com>
---
 src/backend/access/heap/vacuumlazy.c | 38 +++++++++++++++++++++++-----
 1 file changed, 32 insertions(+), 6 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 19c13671666..14eee89af83 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -2098,6 +2098,24 @@ lazy_vacuum_all_indexes(LVRelState *vacrel)
 	return allindexes;
 }
 
+static BlockNumber
+vacuum_reap_lp_read_stream_next(ReadStream *stream,
+								void *callback_private_data,
+								void *per_buffer_data)
+{
+	TidStoreIter *iter = callback_private_data;
+	TidStoreIterResult *iter_result;
+
+	iter_result = TidStoreIterateNext(iter);
+	if (iter_result == NULL)
+		return InvalidBlockNumber;
+
+	/* Save the TidStoreIterResult for later, so we can extract the offsets. */
+	memcpy(per_buffer_data, iter_result, sizeof(*iter_result));
+
+	return iter_result->blkno;
+}
+
 /*
  *	lazy_vacuum_heap_rel() -- second pass over the heap for two pass strategy
  *
@@ -2118,6 +2136,8 @@ lazy_vacuum_all_indexes(LVRelState *vacrel)
 static void
 lazy_vacuum_heap_rel(LVRelState *vacrel)
 {
+	Buffer		buf;
+	ReadStream *stream;
 	BlockNumber vacuumed_pages = 0;
 	Buffer		vmbuffer = InvalidBuffer;
 	LVSavedErrInfo saved_err_info;
@@ -2138,10 +2158,18 @@ lazy_vacuum_heap_rel(LVRelState *vacrel)
 							 InvalidBlockNumber, InvalidOffsetNumber);
 
 	iter = TidStoreBeginIterate(vacrel->dead_items);
-	while ((iter_result = TidStoreIterateNext(iter)) != NULL)
+	stream = read_stream_begin_relation(READ_STREAM_MAINTENANCE,
+										vacrel->bstrategy,
+										vacrel->rel,
+										MAIN_FORKNUM,
+										vacuum_reap_lp_read_stream_next,
+										iter,
+										sizeof(TidStoreIterResult));
+
+	while (BufferIsValid(buf = read_stream_next_buffer(stream,
+													   (void **) &iter_result)))
 	{
 		BlockNumber blkno;
-		Buffer		buf;
 		Page		page;
 		Size		freespace;
 		OffsetNumber offsets[MaxOffsetNumber];
@@ -2149,8 +2177,7 @@ lazy_vacuum_heap_rel(LVRelState *vacrel)
 
 		vacuum_delay_point();
 
-		blkno = iter_result->blkno;
-		vacrel->blkno = blkno;
+		vacrel->blkno = blkno = BufferGetBlockNumber(buf);
 
 		num_offsets = TidStoreGetBlockOffsets(iter_result, offsets, lengthof(offsets));
 		Assert(num_offsets <= lengthof(offsets));
@@ -2163,8 +2190,6 @@ lazy_vacuum_heap_rel(LVRelState *vacrel)
 		visibilitymap_pin(vacrel->rel, blkno, &vmbuffer);
 
 		/* We need a non-cleanup exclusive lock to mark dead_items unused */
-		buf = ReadBufferExtended(vacrel->rel, MAIN_FORKNUM, blkno, RBM_NORMAL,
-								 vacrel->bstrategy);
 		LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);
 		lazy_vacuum_heap_page(vacrel, blkno, buf, offsets,
 							  num_offsets, vmbuffer);
@@ -2177,6 +2202,7 @@ lazy_vacuum_heap_rel(LVRelState *vacrel)
 		RecordPageWithFreeSpace(vacrel->rel, blkno, freespace);
 		vacuumed_pages++;
 	}
+	read_stream_end(stream);
 	TidStoreEndIterate(iter);
 
 	vacrel->blkno = InvalidBlockNumber;
-- 
2.34.1