On 27/02/2024 21:47, Melanie Plageman wrote:
> The attached v5 has some simplifications when compared to v4 but takes
> largely the same approach.
> 0001-0004 are refactoring
I'm looking at just these 0001-0004 patches for now. I like those
changes a lot for the sake of readability even without any of the later
patches.
I made some further changes. I kept them as separate commits for easier
review; see the commit messages for details. Any thoughts on those changes?
I feel the heap_vac_scan_get_next_block() function could use some love.
Maybe just some rewording of the comments, or maybe some other
refactoring; not sure. But I'm pretty happy with the function signature
and how it's called.
BTW, do we have tests that would fail if we botched up
heap_vac_scan_get_next_block() so that it would skip pages incorrectly,
for example? Not asking you to write them for this patch, but I'm just
wondering.
--
Heikki Linnakangas
Neon (https://neon.tech)
From 6e0258c3e31e85526475f46b2e14cbbcbb861909 Mon Sep 17 00:00:00 2001
From: Melanie Plageman <melanieplage...@gmail.com>
Date: Sat, 30 Dec 2023 16:30:59 -0500
Subject: [PATCH v6 1/9] lazy_scan_skip: remove unneeded local var
nskippable_blocks
nskippable_blocks can easily be derived from next_unskippable_block's
progress relative to the passed-in next_block.
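Purely for illustration (not part of the patch), here is a minimal
standalone C model of the loop showing the invariant this relies on: the
counter always equals the difference between the two block numbers. The
skippable() predicate is a made-up stand-in for the visibility-map check.

#include <assert.h>
#include <stdint.h>

typedef uint32_t BlockNumber;

/* Made-up stand-in for "this block is all-visible per the VM". */
static int
skippable(BlockNumber blk)
{
	return blk % 7 != 0;
}

int
main(void)
{
	BlockNumber next_block = 3;
	BlockNumber next_unskippable_block = next_block;
	BlockNumber nskippable_blocks = 0;

	while (skippable(next_unskippable_block))
	{
		next_unskippable_block++;
		nskippable_blocks++;
		/* The invariant the patch exploits: the counter is redundant. */
		assert(nskippable_blocks == next_unskippable_block - next_block);
	}
	return 0;
}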
---
src/backend/access/heap/vacuumlazy.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 8b320c3f89a..1dc6cc8e4db 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -1103,8 +1103,7 @@ lazy_scan_skip(LVRelState *vacrel, Buffer *vmbuffer, BlockNumber next_block,
bool *next_unskippable_allvis, bool *skipping_current_range)
{
BlockNumber rel_pages = vacrel->rel_pages,
- next_unskippable_block = next_block,
- nskippable_blocks = 0;
+ next_unskippable_block = next_block;
bool skipsallvis = false;
*next_unskippable_allvis = true;
@@ -1161,7 +1160,6 @@ lazy_scan_skip(LVRelState *vacrel, Buffer *vmbuffer, BlockNumber next_block,
vacuum_delay_point();
next_unskippable_block++;
- nskippable_blocks++;
}
/*
@@ -1174,7 +1172,7 @@ lazy_scan_skip(LVRelState *vacrel, Buffer *vmbuffer, BlockNumber next_block,
* non-aggressive VACUUMs. If the range has any all-visible pages then
* skipping makes updating relfrozenxid unsafe, which is a real downside.
*/
- if (nskippable_blocks < SKIP_PAGES_THRESHOLD)
+ if (next_unskippable_block - next_block < SKIP_PAGES_THRESHOLD)
*skipping_current_range = false;
else
{
--
2.39.2
From 258bce44e3275bc628bf984892797eecaebf0404 Mon Sep 17 00:00:00 2001
From: Melanie Plageman <melanieplage...@gmail.com>
Date: Sat, 30 Dec 2023 16:22:12 -0500
Subject: [PATCH v6 2/9] Add lazy_scan_skip unskippable state to LVRelState
Future commits will remove all skipping logic from lazy_scan_heap() and
confine it to lazy_scan_skip(). To make those commits clearer, first add
a struct to LVRelState containing the variables needed to manage
skipping ranges of pages greater than SKIP_PAGES_THRESHOLD.
lazy_scan_prune() and lazy_scan_new_or_empty() can now access the buffer
containing the relevant block of the visibility map through
LVRelState.skip, so it no longer needs to be passed as a separate
function parameter.
While we are at it, add additional information to the lazy_scan_skip()
comment, including descriptions of the role of, and expectations for,
its function parameters.
---
src/backend/access/heap/vacuumlazy.c | 154 ++++++++++++++++-----------
1 file changed, 90 insertions(+), 64 deletions(-)
diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 1dc6cc8e4db..0ddb986bc03 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -204,6 +204,22 @@ typedef struct LVRelState
int64 live_tuples; /* # live tuples remaining */
int64 recently_dead_tuples; /* # dead, but not yet removable */
int64 missed_dead_tuples; /* # removable, but not removed */
+
+ /*
+ * Parameters maintained by lazy_scan_skip() to manage skipping ranges of
+ * pages greater than SKIP_PAGES_THRESHOLD.
+ */
+ struct
+ {
+ /* Next unskippable block */
+ BlockNumber next_unskippable_block;
+ /* Buffer containing next unskippable block's visibility info */
+ Buffer vmbuffer;
+ /* Next unskippable block's visibility status */
+ bool next_unskippable_allvis;
+ /* Whether or not skippable blocks should be skipped */
+ bool skipping_current_range;
+ } skip;
} LVRelState;
/* Struct for saving and restoring vacuum error information. */
@@ -214,19 +230,15 @@ typedef struct LVSavedErrInfo
VacErrPhase phase;
} LVSavedErrInfo;
-
/* non-export function prototypes */
static void lazy_scan_heap(LVRelState *vacrel);
-static BlockNumber lazy_scan_skip(LVRelState *vacrel, Buffer *vmbuffer,
- BlockNumber next_block,
- bool *next_unskippable_allvis,
- bool *skipping_current_range);
+static void lazy_scan_skip(LVRelState *vacrel, BlockNumber next_block);
static bool lazy_scan_new_or_empty(LVRelState *vacrel, Buffer buf,
BlockNumber blkno, Page page,
- bool sharelock, Buffer vmbuffer);
+ bool sharelock);
static void lazy_scan_prune(LVRelState *vacrel, Buffer buf,
BlockNumber blkno, Page page,
- Buffer vmbuffer, bool all_visible_according_to_vm,
+ bool all_visible_according_to_vm,
bool *has_lpdead_items);
static bool lazy_scan_noprune(LVRelState *vacrel, Buffer buf,
BlockNumber blkno, Page page,
@@ -803,12 +815,8 @@ lazy_scan_heap(LVRelState *vacrel)
{
BlockNumber rel_pages = vacrel->rel_pages,
blkno,
- next_unskippable_block,
next_fsm_block_to_vacuum = 0;
VacDeadItems *dead_items = vacrel->dead_items;
- Buffer vmbuffer = InvalidBuffer;
- bool next_unskippable_allvis,
- skipping_current_range;
const int initprog_index[] = {
PROGRESS_VACUUM_PHASE,
PROGRESS_VACUUM_TOTAL_HEAP_BLKS,
@@ -822,10 +830,9 @@ lazy_scan_heap(LVRelState *vacrel)
initprog_val[2] = dead_items->max_items;
pgstat_progress_update_multi_param(3, initprog_index, initprog_val);
+ vacrel->skip.vmbuffer = InvalidBuffer;
/* Set up an initial range of skippable blocks using the visibility map */
- next_unskippable_block = lazy_scan_skip(vacrel, &vmbuffer, 0,
- &next_unskippable_allvis,
- &skipping_current_range);
+ lazy_scan_skip(vacrel, 0);
for (blkno = 0; blkno < rel_pages; blkno++)
{
Buffer buf;
@@ -834,26 +841,23 @@ lazy_scan_heap(LVRelState *vacrel)
bool has_lpdead_items;
bool got_cleanup_lock = false;
- if (blkno == next_unskippable_block)
+ if (blkno == vacrel->skip.next_unskippable_block)
{
/*
* Can't skip this page safely. Must scan the page. But
* determine the next skippable range after the page first.
*/
- all_visible_according_to_vm = next_unskippable_allvis;
- next_unskippable_block = lazy_scan_skip(vacrel, &vmbuffer,
- blkno + 1,
- &next_unskippable_allvis,
- &skipping_current_range);
+ all_visible_according_to_vm = vacrel->skip.next_unskippable_allvis;
+ lazy_scan_skip(vacrel, blkno + 1);
- Assert(next_unskippable_block >= blkno + 1);
+ Assert(vacrel->skip.next_unskippable_block >= blkno + 1);
}
else
{
/* Last page always scanned (may need to set nonempty_pages) */
Assert(blkno < rel_pages - 1);
- if (skipping_current_range)
+ if (vacrel->skip.skipping_current_range)
continue;
/* Current range is too small to skip -- just scan the page */
@@ -896,10 +900,10 @@ lazy_scan_heap(LVRelState *vacrel)
* correctness, but we do it anyway to avoid holding the pin
* across a lengthy, unrelated operation.
*/
- if (BufferIsValid(vmbuffer))
+ if (BufferIsValid(vacrel->skip.vmbuffer))
{
- ReleaseBuffer(vmbuffer);
- vmbuffer = InvalidBuffer;
+ ReleaseBuffer(vacrel->skip.vmbuffer);
+ vacrel->skip.vmbuffer = InvalidBuffer;
}
/* Perform a round of index and heap vacuuming */
@@ -924,7 +928,7 @@ lazy_scan_heap(LVRelState *vacrel)
* all-visible. In most cases this will be very cheap, because we'll
* already have the correct page pinned anyway.
*/
- visibilitymap_pin(vacrel->rel, blkno, &vmbuffer);
+ visibilitymap_pin(vacrel->rel, blkno, &vacrel->skip.vmbuffer);
buf = ReadBufferExtended(vacrel->rel, MAIN_FORKNUM, blkno, RBM_NORMAL,
vacrel->bstrategy);
@@ -942,8 +946,7 @@ lazy_scan_heap(LVRelState *vacrel)
LockBuffer(buf, BUFFER_LOCK_SHARE);
/* Check for new or empty pages before lazy_scan_[no]prune call */
- if (lazy_scan_new_or_empty(vacrel, buf, blkno, page, !got_cleanup_lock,
- vmbuffer))
+ if (lazy_scan_new_or_empty(vacrel, buf, blkno, page, !got_cleanup_lock))
{
/* Processed as new/empty page (lock and pin released) */
continue;
@@ -985,7 +988,7 @@ lazy_scan_heap(LVRelState *vacrel)
*/
if (got_cleanup_lock)
lazy_scan_prune(vacrel, buf, blkno, page,
- vmbuffer, all_visible_according_to_vm,
+ all_visible_according_to_vm,
&has_lpdead_items);
/*
@@ -1035,8 +1038,11 @@ lazy_scan_heap(LVRelState *vacrel)
}
vacrel->blkno = InvalidBlockNumber;
- if (BufferIsValid(vmbuffer))
- ReleaseBuffer(vmbuffer);
+ if (BufferIsValid(vacrel->skip.vmbuffer))
+ {
+ ReleaseBuffer(vacrel->skip.vmbuffer);
+ vacrel->skip.vmbuffer = InvalidBuffer;
+ }
/* report that everything is now scanned */
pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_SCANNED, blkno);
@@ -1080,15 +1086,34 @@ lazy_scan_heap(LVRelState *vacrel)
* lazy_scan_skip() -- set up range of skippable blocks using visibility map.
*
* lazy_scan_heap() calls here every time it needs to set up a new range of
- * blocks to skip via the visibility map. Caller passes the next block in
- * line. We return a next_unskippable_block for this range. When there are
- * no skippable blocks we just return caller's next_block. The all-visible
- * status of the returned block is set in *next_unskippable_allvis for caller,
- * too. Block usually won't be all-visible (since it's unskippable), but it
- * can be during aggressive VACUUMs (as well as in certain edge cases).
+ * blocks to skip via the visibility map. Caller passes next_block, the next
+ * block in line. The parameters of the skipped range are recorded in skip.
+ * vacrel is an in/out parameter here; vacuum options and information about the
+ * relation are read and vacrel->skippedallvis is set to ensure we don't
+ * advance relfrozenxid when we have skipped vacuuming all visible blocks.
+ *
+ * skip->vmbuffer will contain the block from the VM containing visibility
+ * information for the next unskippable heap block. We may end up needed a
+ * different block from the VM (if we decide not to skip a skippable block).
+ * This is okay; visibilitymap_pin() will take care of this while processing
+ * the block.
+ *
+ * A block is unskippable if it is not all visible according to the visibility
+ * map. It is also unskippable if it is the last block in the relation, if the
+ * vacuum is an aggressive vacuum, or if DISABLE_PAGE_SKIPPING was passed to
+ * vacuum.
*
- * Sets *skipping_current_range to indicate if caller should skip this range.
- * Costs and benefits drive our decision. Very small ranges won't be skipped.
+ * Even if a block is skippable, we may choose not to skip it if the range of
+ * skippable blocks is too small (below SKIP_PAGES_THRESHOLD). As a
+ * consequence, we must keep track of the next truly unskippable block and its
+ * visibility status along with whether or not we are skipping the current
+ * range of skippable blocks. This can be used to derive the next block
+ * lazy_scan_heap() must process and its visibility status.
+ *
+ * The block number and visibility status of the next unskippable block are set
+ * in skip->next_unskippable_block and next_unskippable_allvis.
+ * skip->skipping_current_range indicates to the caller whether or not it is
+ * processing a skippable (and thus all-visible) block.
*
* Note: our opinion of which blocks can be skipped can go stale immediately.
* It's okay if caller "misses" a page whose all-visible or all-frozen marking
@@ -1098,25 +1123,26 @@ lazy_scan_heap(LVRelState *vacrel)
* older XIDs/MXIDs. The vacrel->skippedallvis flag will be set here when the
* choice to skip such a range is actually made, making everything safe.)
*/
-static BlockNumber
-lazy_scan_skip(LVRelState *vacrel, Buffer *vmbuffer, BlockNumber next_block,
- bool *next_unskippable_allvis, bool *skipping_current_range)
+static void
+lazy_scan_skip(LVRelState *vacrel, BlockNumber next_block)
{
+ /* Use local variables for better optimized loop code */
BlockNumber rel_pages = vacrel->rel_pages,
next_unskippable_block = next_block;
+
bool skipsallvis = false;
- *next_unskippable_allvis = true;
+ vacrel->skip.next_unskippable_allvis = true;
while (next_unskippable_block < rel_pages)
{
uint8 mapbits = visibilitymap_get_status(vacrel->rel,
next_unskippable_block,
- vmbuffer);
+ &vacrel->skip.vmbuffer);
if ((mapbits & VISIBILITYMAP_ALL_VISIBLE) == 0)
{
Assert((mapbits & VISIBILITYMAP_ALL_FROZEN) == 0);
- *next_unskippable_allvis = false;
+ vacrel->skip.next_unskippable_allvis = false;
break;
}
@@ -1137,7 +1163,7 @@ lazy_scan_skip(LVRelState *vacrel, Buffer *vmbuffer, BlockNumber next_block,
if (!vacrel->skipwithvm)
{
/* Caller shouldn't rely on all_visible_according_to_vm */
- *next_unskippable_allvis = false;
+ vacrel->skip.next_unskippable_allvis = false;
break;
}
@@ -1162,6 +1188,8 @@ lazy_scan_skip(LVRelState *vacrel, Buffer *vmbuffer, BlockNumber next_block,
next_unskippable_block++;
}
+ vacrel->skip.next_unskippable_block = next_unskippable_block;
+
/*
* We only skip a range with at least SKIP_PAGES_THRESHOLD consecutive
* pages. Since we're reading sequentially, the OS should be doing
@@ -1172,16 +1200,14 @@ lazy_scan_skip(LVRelState *vacrel, Buffer *vmbuffer, BlockNumber next_block,
* non-aggressive VACUUMs. If the range has any all-visible pages then
* skipping makes updating relfrozenxid unsafe, which is a real downside.
*/
- if (next_unskippable_block - next_block < SKIP_PAGES_THRESHOLD)
- *skipping_current_range = false;
+ if (vacrel->skip.next_unskippable_block - next_block < SKIP_PAGES_THRESHOLD)
+ vacrel->skip.skipping_current_range = false;
else
{
- *skipping_current_range = true;
+ vacrel->skip.skipping_current_range = true;
if (skipsallvis)
vacrel->skippedallvis = true;
}
-
- return next_unskippable_block;
}
/*
@@ -1214,7 +1240,7 @@ lazy_scan_skip(LVRelState *vacrel, Buffer *vmbuffer, BlockNumber next_block,
*/
static bool
lazy_scan_new_or_empty(LVRelState *vacrel, Buffer buf, BlockNumber blkno,
- Page page, bool sharelock, Buffer vmbuffer)
+ Page page, bool sharelock)
{
Size freespace;
@@ -1300,7 +1326,7 @@ lazy_scan_new_or_empty(LVRelState *vacrel, Buffer buf, BlockNumber blkno,
PageSetAllVisible(page);
visibilitymap_set(vacrel->rel, blkno, buf, InvalidXLogRecPtr,
- vmbuffer, InvalidTransactionId,
+ vacrel->skip.vmbuffer, InvalidTransactionId,
VISIBILITYMAP_ALL_VISIBLE | VISIBILITYMAP_ALL_FROZEN);
END_CRIT_SECTION();
}
@@ -1336,10 +1362,11 @@ lazy_scan_new_or_empty(LVRelState *vacrel, Buffer buf, BlockNumber blkno,
* any tuple that becomes dead after the call to heap_page_prune() can't need to
* be frozen, because it was visible to another session when vacuum started.
*
- * vmbuffer is the buffer containing the VM block with visibility information
- * for the heap block, blkno. all_visible_according_to_vm is the saved
- * visibility status of the heap block looked up earlier by the caller. We
- * won't rely entirely on this status, as it may be out of date.
+ * vacrel->skipstate.vmbuffer is the buffer containing the VM block with
+ * visibility information for the heap block, blkno.
+ * all_visible_according_to_vm is the saved visibility status of the heap block
+ * looked up earlier by the caller. We won't rely entirely on this status, as
+ * it may be out of date.
*
* *has_lpdead_items is set to true or false depending on whether, upon return
* from this function, any LP_DEAD items are still present on the page.
@@ -1349,7 +1376,6 @@ lazy_scan_prune(LVRelState *vacrel,
Buffer buf,
BlockNumber blkno,
Page page,
- Buffer vmbuffer,
bool all_visible_according_to_vm,
bool *has_lpdead_items)
{
@@ -1783,7 +1809,7 @@ lazy_scan_prune(LVRelState *vacrel,
PageSetAllVisible(page);
MarkBufferDirty(buf);
visibilitymap_set(vacrel->rel, blkno, buf, InvalidXLogRecPtr,
- vmbuffer, visibility_cutoff_xid,
+ vacrel->skip.vmbuffer, visibility_cutoff_xid,
flags);
}
@@ -1794,11 +1820,11 @@ lazy_scan_prune(LVRelState *vacrel,
* buffer lock before concluding that the VM is corrupt.
*/
else if (all_visible_according_to_vm && !PageIsAllVisible(page) &&
- visibilitymap_get_status(vacrel->rel, blkno, &vmbuffer) != 0)
+ visibilitymap_get_status(vacrel->rel, blkno, &vacrel->skip.vmbuffer) != 0)
{
elog(WARNING, "page is not marked all-visible but visibility map bit is set in relation \"%s\" page %u",
vacrel->relname, blkno);
- visibilitymap_clear(vacrel->rel, blkno, vmbuffer,
+ visibilitymap_clear(vacrel->rel, blkno, vacrel->skip.vmbuffer,
VISIBILITYMAP_VALID_BITS);
}
@@ -1822,7 +1848,7 @@ lazy_scan_prune(LVRelState *vacrel,
vacrel->relname, blkno);
PageClearAllVisible(page);
MarkBufferDirty(buf);
- visibilitymap_clear(vacrel->rel, blkno, vmbuffer,
+ visibilitymap_clear(vacrel->rel, blkno, vacrel->skip.vmbuffer,
VISIBILITYMAP_VALID_BITS);
}
@@ -1832,7 +1858,7 @@ lazy_scan_prune(LVRelState *vacrel,
* true, so we must check both all_visible and all_frozen.
*/
else if (all_visible_according_to_vm && all_visible &&
- all_frozen && !VM_ALL_FROZEN(vacrel->rel, blkno, &vmbuffer))
+ all_frozen && !VM_ALL_FROZEN(vacrel->rel, blkno, &vacrel->skip.vmbuffer))
{
/*
* Avoid relying on all_visible_according_to_vm as a proxy for the
@@ -1854,7 +1880,7 @@ lazy_scan_prune(LVRelState *vacrel,
*/
Assert(!TransactionIdIsValid(visibility_cutoff_xid));
visibilitymap_set(vacrel->rel, blkno, buf, InvalidXLogRecPtr,
- vmbuffer, InvalidTransactionId,
+ vacrel->skip.vmbuffer, InvalidTransactionId,
VISIBILITYMAP_ALL_VISIBLE |
VISIBILITYMAP_ALL_FROZEN);
}
--
2.39.2
From 6b2b0313ecf5a8f35a3fbab9e3ba817a5cff2c73 Mon Sep 17 00:00:00 2001
From: Melanie Plageman <melanieplage...@gmail.com>
Date: Sat, 30 Dec 2023 16:59:27 -0500
Subject: [PATCH v6 3/9] Confine vacuum skip logic to lazy_scan_skip
In preparation for vacuum to use the streaming read interface (and
eventually AIO), refactor vacuum's logic for skipping blocks such that
it is entirely confined to lazy_scan_skip(). This turns lazy_scan_skip()
and the skip state it uses in LVRelState into an iterator which yields
blocks to lazy_scan_heap(). Such a structure is conducive to an async
interface. While we are at it, rename lazy_scan_skip() to
heap_vac_scan_get_next_block(), which more accurately describes what it
now does.
By always calling heap_vac_scan_get_next_block() -- instead of only when
we have reached the next unskippable block -- we no longer need the
skipping_current_range variable. lazy_scan_heap() no longer has to
manage the skipped range itself by checking whether it has reached the
end of the range before calling heap_vac_scan_get_next_block() again.
And heap_vac_scan_get_next_block() can derive the visibility status of a
block from whether or not we are in a skippable range -- that is,
whether or not next_block is equal to the next unskippable block.
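To illustrate the iterator shape this creates (a toy model only --
REL_PAGES and the "skippable" pattern are invented here, and the real
function consults the visibility map and SKIP_PAGES_THRESHOLD), the
caller loop reduces to:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint32_t BlockNumber;
#define InvalidBlockNumber ((BlockNumber) 0xFFFFFFFF)
#define REL_PAGES 10

/* Yields the next block to process; false once the relation is done. */
static bool
get_next_block(BlockNumber next_block, BlockNumber *blkno)
{
	while (next_block < REL_PAGES && next_block % 3 == 1)	/* "skippable" */
		next_block++;
	if (next_block >= REL_PAGES)
		return false;
	*blkno = next_block;
	return true;
}

int
main(void)
{
	/* Relies on InvalidBlockNumber + 1 wrapping to 0, as in the patch. */
	BlockNumber blkno = InvalidBlockNumber;

	while (get_next_block(blkno + 1, &blkno))
		printf("process block %u\n", blkno);
	return 0;
}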
---
src/backend/access/heap/vacuumlazy.c | 243 ++++++++++++++-------------
1 file changed, 126 insertions(+), 117 deletions(-)
diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 0ddb986bc03..99d160335e1 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -206,8 +206,8 @@ typedef struct LVRelState
int64 missed_dead_tuples; /* # removable, but not removed */
/*
- * Parameters maintained by lazy_scan_skip() to manage skipping ranges of
- * pages greater than SKIP_PAGES_THRESHOLD.
+ * Parameters maintained by heap_vac_scan_get_next_block() to manage
+ * skipping ranges of pages greater than SKIP_PAGES_THRESHOLD.
*/
struct
{
@@ -232,7 +232,9 @@ typedef struct LVSavedErrInfo
/* non-export function prototypes */
static void lazy_scan_heap(LVRelState *vacrel);
-static void lazy_scan_skip(LVRelState *vacrel, BlockNumber next_block);
+static bool heap_vac_scan_get_next_block(LVRelState *vacrel, BlockNumber next_block,
+ BlockNumber *blkno,
+ bool *all_visible_according_to_vm);
static bool lazy_scan_new_or_empty(LVRelState *vacrel, Buffer buf,
BlockNumber blkno, Page page,
bool sharelock);
@@ -814,8 +816,11 @@ static void
lazy_scan_heap(LVRelState *vacrel)
{
BlockNumber rel_pages = vacrel->rel_pages,
- blkno,
next_fsm_block_to_vacuum = 0;
+ bool all_visible_according_to_vm;
+
+ /* relies on InvalidBlockNumber overflowing to 0 */
+ BlockNumber blkno = InvalidBlockNumber;
VacDeadItems *dead_items = vacrel->dead_items;
const int initprog_index[] = {
PROGRESS_VACUUM_PHASE,
@@ -830,40 +835,17 @@ lazy_scan_heap(LVRelState *vacrel)
initprog_val[2] = dead_items->max_items;
pgstat_progress_update_multi_param(3, initprog_index, initprog_val);
+ vacrel->skip.next_unskippable_block = InvalidBlockNumber;
vacrel->skip.vmbuffer = InvalidBuffer;
- /* Set up an initial range of skippable blocks using the visibility map */
- lazy_scan_skip(vacrel, 0);
- for (blkno = 0; blkno < rel_pages; blkno++)
+
+ while (heap_vac_scan_get_next_block(vacrel, blkno + 1,
+ &blkno, &all_visible_according_to_vm))
{
Buffer buf;
Page page;
- bool all_visible_according_to_vm;
bool has_lpdead_items;
bool got_cleanup_lock = false;
- if (blkno == vacrel->skip.next_unskippable_block)
- {
- /*
- * Can't skip this page safely. Must scan the page. But
- * determine the next skippable range after the page first.
- */
- all_visible_according_to_vm = vacrel->skip.next_unskippable_allvis;
- lazy_scan_skip(vacrel, blkno + 1);
-
- Assert(vacrel->skip.next_unskippable_block >= blkno + 1);
- }
- else
- {
- /* Last page always scanned (may need to set nonempty_pages) */
- Assert(blkno < rel_pages - 1);
-
- if (vacrel->skip.skipping_current_range)
- continue;
-
- /* Current range is too small to skip -- just scan the page */
- all_visible_according_to_vm = true;
- }
-
vacrel->scanned_pages++;
/* Report as block scanned, update error traceback information */
@@ -1083,20 +1065,14 @@ lazy_scan_heap(LVRelState *vacrel)
}
/*
- * lazy_scan_skip() -- set up range of skippable blocks using visibility map.
- *
- * lazy_scan_heap() calls here every time it needs to set up a new range of
- * blocks to skip via the visibility map. Caller passes next_block, the next
- * block in line. The parameters of the skipped range are recorded in skip.
- * vacrel is an in/out parameter here; vacuum options and information about the
- * relation are read and vacrel->skippedallvis is set to ensure we don't
- * advance relfrozenxid when we have skipped vacuuming all visible blocks.
+ * heap_vac_scan_get_next_block() -- get next block for vacuum to process
*
- * skip->vmbuffer will contain the block from the VM containing visibility
- * information for the next unskippable heap block. We may end up needed a
- * different block from the VM (if we decide not to skip a skippable block).
- * This is okay; visibilitymap_pin() will take care of this while processing
- * the block.
+ * lazy_scan_heap() calls here every time it needs to get the next block to
+ * prune and vacuum, using the visibility map, vacuum options, and various
+ * thresholds to skip blocks which do not need to be processed. Caller passes
+ * next_block, the next block in line. This block may end up being skipped.
+ * heap_vac_scan_get_next_block() sets blkno to next block that actually needs
+ * to be processed.
*
* A block is unskippable if it is not all visible according to the visibility
* map. It is also unskippable if it is the last block in the relation, if the
@@ -1106,14 +1082,25 @@ lazy_scan_heap(LVRelState *vacrel)
* Even if a block is skippable, we may choose not to skip it if the range of
* skippable blocks is too small (below SKIP_PAGES_THRESHOLD). As a
* consequence, we must keep track of the next truly unskippable block and its
- * visibility status along with whether or not we are skipping the current
- * range of skippable blocks. This can be used to derive the next block
- * lazy_scan_heap() must process and its visibility status.
+ * visibility status separate from the next block lazy_scan_heap() should
+ * process (and its visibility status).
*
* The block number and visibility status of the next unskippable block are set
- * in skip->next_unskippable_block and next_unskippable_allvis.
- * skip->skipping_current_range indicates to the caller whether or not it is
- * processing a skippable (and thus all-visible) block.
+ * in vacrel->skip->next_unskippable_block and next_unskippable_allvis.
+ *
+ * The block number and visibility status of the next block to process are set
+ * in blkno and all_visible_according_to_vm. heap_vac_scan_get_next_block()
+ * returns false if there are no further blocks to process.
+ *
+ * vacrel is an in/out parameter here; vacuum options and information about the
+ * relation are read and vacrel->skippedallvis is set to ensure we don't
+ * advance relfrozenxid when we have skipped vacuuming all visible blocks.
+ *
+ * skip->vmbuffer will contain the block from the VM containing visibility
+ * information for the next unskippable heap block. We may end up needed a
+ * different block from the VM (if we decide not to skip a skippable block).
+ * This is okay; visibilitymap_pin() will take care of this while processing
+ * the block.
*
* Note: our opinion of which blocks can be skipped can go stale immediately.
* It's okay if caller "misses" a page whose all-visible or all-frozen marking
@@ -1123,91 +1110,113 @@ lazy_scan_heap(LVRelState *vacrel)
* older XIDs/MXIDs. The vacrel->skippedallvis flag will be set here when the
* choice to skip such a range is actually made, making everything safe.)
*/
-static void
-lazy_scan_skip(LVRelState *vacrel, BlockNumber next_block)
+static bool
+heap_vac_scan_get_next_block(LVRelState *vacrel, BlockNumber next_block,
+ BlockNumber *blkno, bool *all_visible_according_to_vm)
{
- /* Use local variables for better optimized loop code */
- BlockNumber rel_pages = vacrel->rel_pages,
- next_unskippable_block = next_block;
-
bool skipsallvis = false;
- vacrel->skip.next_unskippable_allvis = true;
- while (next_unskippable_block < rel_pages)
+ if (next_block >= vacrel->rel_pages)
{
- uint8 mapbits = visibilitymap_get_status(vacrel->rel,
- next_unskippable_block,
- &vacrel->skip.vmbuffer);
+ *blkno = InvalidBlockNumber;
+ return false;
+ }
- if ((mapbits & VISIBILITYMAP_ALL_VISIBLE) == 0)
+ if (vacrel->skip.next_unskippable_block == InvalidBlockNumber ||
+ next_block > vacrel->skip.next_unskippable_block)
+ {
+ /* Use local variables for better optimized loop code */
+ BlockNumber rel_pages = vacrel->rel_pages;
+ BlockNumber next_unskippable_block = vacrel->skip.next_unskippable_block;
+
+ while (++next_unskippable_block < rel_pages)
{
- Assert((mapbits & VISIBILITYMAP_ALL_FROZEN) == 0);
- vacrel->skip.next_unskippable_allvis = false;
- break;
- }
+ uint8 mapbits = visibilitymap_get_status(vacrel->rel,
+ next_unskippable_block,
+ &vacrel->skip.vmbuffer);
- /*
- * Caller must scan the last page to determine whether it has tuples
- * (caller must have the opportunity to set vacrel->nonempty_pages).
- * This rule avoids having lazy_truncate_heap() take access-exclusive
- * lock on rel to attempt a truncation that fails anyway, just because
- * there are tuples on the last page (it is likely that there will be
- * tuples on other nearby pages as well, but those can be skipped).
- *
- * Implement this by always treating the last block as unsafe to skip.
- */
- if (next_unskippable_block == rel_pages - 1)
- break;
+ vacrel->skip.next_unskippable_allvis = mapbits & VISIBILITYMAP_ALL_VISIBLE;
- /* DISABLE_PAGE_SKIPPING makes all skipping unsafe */
- if (!vacrel->skipwithvm)
- {
- /* Caller shouldn't rely on all_visible_according_to_vm */
- vacrel->skip.next_unskippable_allvis = false;
- break;
- }
+ if (!vacrel->skip.next_unskippable_allvis)
+ {
+ Assert((mapbits & VISIBILITYMAP_ALL_FROZEN) == 0);
+ break;
+ }
- /*
- * Aggressive VACUUM caller can't skip pages just because they are
- * all-visible. They may still skip all-frozen pages, which can't
- * contain XIDs < OldestXmin (XIDs that aren't already frozen by now).
- */
- if ((mapbits & VISIBILITYMAP_ALL_FROZEN) == 0)
- {
- if (vacrel->aggressive)
+ /*
+ * Caller must scan the last page to determine whether it has
+ * tuples (caller must have the opportunity to set
+ * vacrel->nonempty_pages). This rule avoids having
+ * lazy_truncate_heap() take access-exclusive lock on rel to
+ * attempt a truncation that fails anyway, just because there are
+ * tuples on the last page (it is likely that there will be tuples
+ * on other nearby pages as well, but those can be skipped).
+ *
+ * Implement this by always treating the last block as unsafe to
+ * skip.
+ */
+ if (next_unskippable_block == rel_pages - 1)
break;
+ /* DISABLE_PAGE_SKIPPING makes all skipping unsafe */
+ if (!vacrel->skipwithvm)
+ {
+ /* Caller shouldn't rely on all_visible_according_to_vm */
+ vacrel->skip.next_unskippable_allvis = false;
+ break;
+ }
+
/*
- * All-visible block is safe to skip in non-aggressive case. But
- * remember that the final range contains such a block for later.
+ * Aggressive VACUUM caller can't skip pages just because they are
+ * all-visible. They may still skip all-frozen pages, which can't
+ * contain XIDs < OldestXmin (XIDs that aren't already frozen by
+ * now).
*/
- skipsallvis = true;
+ if ((mapbits & VISIBILITYMAP_ALL_FROZEN) == 0)
+ {
+ if (vacrel->aggressive)
+ break;
+
+ /*
+ * All-visible block is safe to skip in non-aggressive case.
+ * But remember that the final range contains such a block for
+ * later.
+ */
+ skipsallvis = true;
+ }
+
+ vacuum_delay_point();
}
- vacuum_delay_point();
- next_unskippable_block++;
- }
+ vacrel->skip.next_unskippable_block = next_unskippable_block;
- vacrel->skip.next_unskippable_block = next_unskippable_block;
+ /*
+ * We only skip a range with at least SKIP_PAGES_THRESHOLD consecutive
+ * pages. Since we're reading sequentially, the OS should be doing
+ * readahead for us, so there's no gain in skipping a page now and
+ * then. Skipping such a range might even discourage sequential
+ * detection.
+ *
+ * This test also enables more frequent relfrozenxid advancement
+ * during non-aggressive VACUUMs. If the range has any all-visible
+ * pages then skipping makes updating relfrozenxid unsafe, which is a
+ * real downside.
+ */
+ if (vacrel->skip.next_unskippable_block - next_block >= SKIP_PAGES_THRESHOLD)
+ {
+ next_block = vacrel->skip.next_unskippable_block;
+ if (skipsallvis)
+ vacrel->skippedallvis = true;
+ }
+ }
- /*
- * We only skip a range with at least SKIP_PAGES_THRESHOLD consecutive
- * pages. Since we're reading sequentially, the OS should be doing
- * readahead for us, so there's no gain in skipping a page now and then.
- * Skipping such a range might even discourage sequential detection.
- *
- * This test also enables more frequent relfrozenxid advancement during
- * non-aggressive VACUUMs. If the range has any all-visible pages then
- * skipping makes updating relfrozenxid unsafe, which is a real downside.
- */
- if (vacrel->skip.next_unskippable_block - next_block < SKIP_PAGES_THRESHOLD)
- vacrel->skip.skipping_current_range = false;
+ if (next_block == vacrel->skip.next_unskippable_block)
+ *all_visible_according_to_vm = vacrel->skip.next_unskippable_allvis;
else
- {
- vacrel->skip.skipping_current_range = true;
- if (skipsallvis)
- vacrel->skippedallvis = true;
- }
+ *all_visible_according_to_vm = true;
+
+ *blkno = next_block;
+ return true;
}
/*
--
2.39.2
From 026af3d4b19feabea8ed78795c64b3278fa93d7f Mon Sep 17 00:00:00 2001
From: Melanie Plageman <melanieplage...@gmail.com>
Date: Sun, 31 Dec 2023 12:49:56 -0500
Subject: [PATCH v6 4/9] Remove unneeded vacuum_delay_point from
heap_vac_scan_get_next_block
heap_vac_scan_get_next_block() does relatively little work, so there is
no need to call vacuum_delay_point(). A future commit will call
heap_vac_scan_get_next_block() from a callback, and we would like to
avoid calling vacuum_delay_point() in that callback.
---
src/backend/access/heap/vacuumlazy.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 99d160335e1..65d257aab83 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -1184,8 +1184,6 @@ heap_vac_scan_get_next_block(LVRelState *vacrel, BlockNumber next_block,
*/
skipsallvis = true;
}
-
- vacuum_delay_point();
}
vacrel->skip.next_unskippable_block = next_unskippable_block;
--
2.39.2
From b4047b941182af0643838fde056c298d5cc3ae32 Mon Sep 17 00:00:00 2001
From: Heikki Linnakangas <heikki.linnakan...@iki.fi>
Date: Wed, 6 Mar 2024 20:13:42 +0200
Subject: [PATCH v6 5/9] Remove unused 'skipping_current_range' field
---
src/backend/access/heap/vacuumlazy.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 65d257aab83..51391870bf3 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -217,8 +217,6 @@ typedef struct LVRelState
Buffer vmbuffer;
/* Next unskippable block's visibility status */
bool next_unskippable_allvis;
- /* Whether or not skippable blocks should be skipped */
- bool skipping_current_range;
} skip;
} LVRelState;
--
2.39.2
From 27e431e8dc69bbf09d831cb1cf2903d16f177d74 Mon Sep 17 00:00:00 2001
From: Heikki Linnakangas <heikki.linnakan...@iki.fi>
Date: Wed, 6 Mar 2024 20:58:57 +0200
Subject: [PATCH v6 6/9] Move vmbuffer back to a local variable in
lazy_scan_heap()
It felt confusing that we passed around the current block, 'blkno', as
an argument to lazy_scan_new_or_empty() and lazy_scan_prune(), but
'vmbuffer' was accessed directly through the 'skip' state.
It was also a bit vague when exactly 'vmbuffer' was valid. Calling
heap_vac_scan_get_next_block() set it, sometimes, to a buffer that
might or might not contain the VM bit for 'blkno'. But other
functions, like lazy_scan_prune(), assumed it to contain the correct
buffer. That was fixed up by visibilitymap_pin(). But clearly it was not
"owned" by heap_vac_scan_get_next_block(), like the other 'skip' state
fields.
I moved it back to a local variable, like it was. Maybe there would be
even better ways to handle it, but at least this is not worse than
what we have in master currently.
---
src/backend/access/heap/vacuumlazy.c | 72 +++++++++++++---------------
1 file changed, 33 insertions(+), 39 deletions(-)
diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 51391870bf3..3f1661cea61 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -213,8 +213,6 @@ typedef struct LVRelState
{
/* Next unskippable block */
BlockNumber next_unskippable_block;
- /* Buffer containing next unskippable block's visibility info */
- Buffer vmbuffer;
/* Next unskippable block's visibility status */
bool next_unskippable_allvis;
} skip;
@@ -232,13 +230,14 @@ typedef struct LVSavedErrInfo
static void lazy_scan_heap(LVRelState *vacrel);
static bool heap_vac_scan_get_next_block(LVRelState *vacrel, BlockNumber next_block,
BlockNumber *blkno,
- bool *all_visible_according_to_vm);
+ bool *all_visible_according_to_vm,
+ Buffer *vmbuffer);
static bool lazy_scan_new_or_empty(LVRelState *vacrel, Buffer buf,
BlockNumber blkno, Page page,
- bool sharelock);
+ bool sharelock, Buffer vmbuffer);
static void lazy_scan_prune(LVRelState *vacrel, Buffer buf,
BlockNumber blkno, Page page,
- bool all_visible_according_to_vm,
+ Buffer vmbuffer, bool all_visible_according_to_vm,
bool *has_lpdead_items);
static bool lazy_scan_noprune(LVRelState *vacrel, Buffer buf,
BlockNumber blkno, Page page,
@@ -815,11 +814,10 @@ lazy_scan_heap(LVRelState *vacrel)
{
BlockNumber rel_pages = vacrel->rel_pages,
next_fsm_block_to_vacuum = 0;
- bool all_visible_according_to_vm;
-
- /* relies on InvalidBlockNumber overflowing to 0 */
- BlockNumber blkno = InvalidBlockNumber;
VacDeadItems *dead_items = vacrel->dead_items;
+ BlockNumber blkno;
+ bool all_visible_according_to_vm;
+ Buffer vmbuffer = InvalidBuffer;
const int initprog_index[] = {
PROGRESS_VACUUM_PHASE,
PROGRESS_VACUUM_TOTAL_HEAP_BLKS,
@@ -834,10 +832,9 @@ lazy_scan_heap(LVRelState *vacrel)
pgstat_progress_update_multi_param(3, initprog_index, initprog_val);
vacrel->skip.next_unskippable_block = InvalidBlockNumber;
- vacrel->skip.vmbuffer = InvalidBuffer;
while (heap_vac_scan_get_next_block(vacrel, blkno + 1,
- &blkno, &all_visible_according_to_vm))
+ &blkno, &all_visible_according_to_vm, &vmbuffer))
{
Buffer buf;
Page page;
@@ -880,10 +877,10 @@ lazy_scan_heap(LVRelState *vacrel)
* correctness, but we do it anyway to avoid holding the pin
* across a lengthy, unrelated operation.
*/
- if (BufferIsValid(vacrel->skip.vmbuffer))
+ if (BufferIsValid(vmbuffer))
{
- ReleaseBuffer(vacrel->skip.vmbuffer);
- vacrel->skip.vmbuffer = InvalidBuffer;
+ ReleaseBuffer(vmbuffer);
+ vmbuffer = InvalidBuffer;
}
/* Perform a round of index and heap vacuuming */
@@ -908,7 +905,7 @@ lazy_scan_heap(LVRelState *vacrel)
* all-visible. In most cases this will be very cheap, because we'll
* already have the correct page pinned anyway.
*/
- visibilitymap_pin(vacrel->rel, blkno, &vacrel->skip.vmbuffer);
+ visibilitymap_pin(vacrel->rel, blkno, &vmbuffer);
buf = ReadBufferExtended(vacrel->rel, MAIN_FORKNUM, blkno, RBM_NORMAL,
vacrel->bstrategy);
@@ -926,7 +923,8 @@ lazy_scan_heap(LVRelState *vacrel)
LockBuffer(buf, BUFFER_LOCK_SHARE);
/* Check for new or empty pages before lazy_scan_[no]prune call */
- if (lazy_scan_new_or_empty(vacrel, buf, blkno, page, !got_cleanup_lock))
+ if (lazy_scan_new_or_empty(vacrel, buf, blkno, page, !got_cleanup_lock,
+ vmbuffer))
{
/* Processed as new/empty page (lock and pin released) */
continue;
@@ -968,7 +966,7 @@ lazy_scan_heap(LVRelState *vacrel)
*/
if (got_cleanup_lock)
lazy_scan_prune(vacrel, buf, blkno, page,
- all_visible_according_to_vm,
+ vmbuffer, all_visible_according_to_vm,
&has_lpdead_items);
/*
@@ -1018,11 +1016,8 @@ lazy_scan_heap(LVRelState *vacrel)
}
vacrel->blkno = InvalidBlockNumber;
- if (BufferIsValid(vacrel->skip.vmbuffer))
- {
- ReleaseBuffer(vacrel->skip.vmbuffer);
- vacrel->skip.vmbuffer = InvalidBuffer;
- }
+ if (BufferIsValid(vmbuffer))
+ ReleaseBuffer(vmbuffer);
/* report that everything is now scanned */
pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_SCANNED, blkno);
@@ -1094,7 +1089,7 @@ lazy_scan_heap(LVRelState *vacrel)
* relation are read and vacrel->skippedallvis is set to ensure we don't
* advance relfrozenxid when we have skipped vacuuming all visible blocks.
*
- * skip->vmbuffer will contain the block from the VM containing visibility
+ * vmbuffer will contain the block from the VM containing visibility
* information for the next unskippable heap block. We may end up needed a
* different block from the VM (if we decide not to skip a skippable block).
* This is okay; visibilitymap_pin() will take care of this while processing
@@ -1110,7 +1105,7 @@ lazy_scan_heap(LVRelState *vacrel)
*/
static bool
heap_vac_scan_get_next_block(LVRelState *vacrel, BlockNumber next_block,
- BlockNumber *blkno, bool *all_visible_according_to_vm)
+ BlockNumber *blkno, bool *all_visible_according_to_vm, Buffer *vmbuffer)
{
bool skipsallvis = false;
@@ -1131,7 +1126,7 @@ heap_vac_scan_get_next_block(LVRelState *vacrel, BlockNumber next_block,
{
uint8 mapbits = visibilitymap_get_status(vacrel->rel,
next_unskippable_block,
- &vacrel->skip.vmbuffer);
+ vmbuffer);
vacrel->skip.next_unskippable_allvis = mapbits & VISIBILITYMAP_ALL_VISIBLE;
@@ -1245,7 +1240,7 @@ heap_vac_scan_get_next_block(LVRelState *vacrel, BlockNumber next_block,
*/
static bool
lazy_scan_new_or_empty(LVRelState *vacrel, Buffer buf, BlockNumber blkno,
- Page page, bool sharelock)
+ Page page, bool sharelock, Buffer vmbuffer)
{
Size freespace;
@@ -1331,7 +1326,7 @@ lazy_scan_new_or_empty(LVRelState *vacrel, Buffer buf, BlockNumber blkno,
PageSetAllVisible(page);
visibilitymap_set(vacrel->rel, blkno, buf, InvalidXLogRecPtr,
- vacrel->skip.vmbuffer, InvalidTransactionId,
+ vmbuffer, InvalidTransactionId,
VISIBILITYMAP_ALL_VISIBLE | VISIBILITYMAP_ALL_FROZEN);
END_CRIT_SECTION();
}
@@ -1367,11 +1362,10 @@ lazy_scan_new_or_empty(LVRelState *vacrel, Buffer buf, BlockNumber blkno,
* any tuple that becomes dead after the call to heap_page_prune() can't need to
* be frozen, because it was visible to another session when vacuum started.
*
- * vacrel->skipstate.vmbuffer is the buffer containing the VM block with
- * visibility information for the heap block, blkno.
- * all_visible_according_to_vm is the saved visibility status of the heap block
- * looked up earlier by the caller. We won't rely entirely on this status, as
- * it may be out of date.
+ * vmbuffer is the buffer containing the VM block with visibility information
+ * for the heap block, blkno. all_visible_according_to_vm is the saved
+ * visibility status of the heap block looked up earlier by the caller. We
+ * won't rely entirely on this status, as it may be out of date.
*
* *has_lpdead_items is set to true or false depending on whether, upon return
* from this function, any LP_DEAD items are still present on the page.
@@ -1381,6 +1375,7 @@ lazy_scan_prune(LVRelState *vacrel,
Buffer buf,
BlockNumber blkno,
Page page,
+ Buffer vmbuffer,
bool all_visible_according_to_vm,
bool *has_lpdead_items)
{
@@ -1814,8 +1809,7 @@ lazy_scan_prune(LVRelState *vacrel,
PageSetAllVisible(page);
MarkBufferDirty(buf);
visibilitymap_set(vacrel->rel, blkno, buf, InvalidXLogRecPtr,
- vacrel->skip.vmbuffer, visibility_cutoff_xid,
- flags);
+ vmbuffer, visibility_cutoff_xid, flags);
}
/*
@@ -1825,11 +1819,11 @@ lazy_scan_prune(LVRelState *vacrel,
* buffer lock before concluding that the VM is corrupt.
*/
else if (all_visible_according_to_vm && !PageIsAllVisible(page) &&
- visibilitymap_get_status(vacrel->rel, blkno, &vacrel->skip.vmbuffer) != 0)
+ visibilitymap_get_status(vacrel->rel, blkno, &vmbuffer) != 0)
{
elog(WARNING, "page is not marked all-visible but visibility map bit is set in relation \"%s\" page %u",
vacrel->relname, blkno);
- visibilitymap_clear(vacrel->rel, blkno, vacrel->skip.vmbuffer,
+ visibilitymap_clear(vacrel->rel, blkno, vmbuffer,
VISIBILITYMAP_VALID_BITS);
}
@@ -1853,7 +1847,7 @@ lazy_scan_prune(LVRelState *vacrel,
vacrel->relname, blkno);
PageClearAllVisible(page);
MarkBufferDirty(buf);
- visibilitymap_clear(vacrel->rel, blkno, vacrel->skip.vmbuffer,
+ visibilitymap_clear(vacrel->rel, blkno, vmbuffer,
VISIBILITYMAP_VALID_BITS);
}
@@ -1863,7 +1857,7 @@ lazy_scan_prune(LVRelState *vacrel,
* true, so we must check both all_visible and all_frozen.
*/
else if (all_visible_according_to_vm && all_visible &&
- all_frozen && !VM_ALL_FROZEN(vacrel->rel, blkno, &vacrel->skip.vmbuffer))
+ all_frozen && !VM_ALL_FROZEN(vacrel->rel, blkno, &vmbuffer))
{
/*
* Avoid relying on all_visible_according_to_vm as a proxy for the
@@ -1885,7 +1879,7 @@ lazy_scan_prune(LVRelState *vacrel,
*/
Assert(!TransactionIdIsValid(visibility_cutoff_xid));
visibilitymap_set(vacrel->rel, blkno, buf, InvalidXLogRecPtr,
- vacrel->skip.vmbuffer, InvalidTransactionId,
+ vmbuffer, InvalidTransactionId,
VISIBILITYMAP_ALL_VISIBLE |
VISIBILITYMAP_ALL_FROZEN);
}
--
2.39.2
From 519e26a01b6e6974f9e0edb94b00756af053f7ee Mon Sep 17 00:00:00 2001
From: Heikki Linnakangas <heikki.linnakan...@iki.fi>
Date: Wed, 6 Mar 2024 20:27:57 +0200
Subject: [PATCH v6 7/9] Rename skip_state
I don't want to emphasize the "skipping" part. Rather, it's the state
owned by the heap_vac_scan_get_next_block() function.
---
src/backend/access/heap/vacuumlazy.c | 32 ++++++++++++++--------------
1 file changed, 16 insertions(+), 16 deletions(-)
diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 3f1661cea61..17e06065f7e 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -206,8 +206,8 @@ typedef struct LVRelState
int64 missed_dead_tuples; /* # removable, but not removed */
/*
- * Parameters maintained by heap_vac_scan_get_next_block() to manage
- * skipping ranges of pages greater than SKIP_PAGES_THRESHOLD.
+ * State maintained by heap_vac_scan_get_next_block() to manage skipping
+ * ranges of pages greater than SKIP_PAGES_THRESHOLD.
*/
struct
{
@@ -215,7 +215,7 @@ typedef struct LVRelState
BlockNumber next_unskippable_block;
/* Next unskippable block's visibility status */
bool next_unskippable_allvis;
- } skip;
+ } get_next_block_state;
} LVRelState;
/* Struct for saving and restoring vacuum error information. */
@@ -831,7 +831,7 @@ lazy_scan_heap(LVRelState *vacrel)
initprog_val[2] = dead_items->max_items;
pgstat_progress_update_multi_param(3, initprog_index, initprog_val);
- vacrel->skip.next_unskippable_block = InvalidBlockNumber;
+ vacrel->get_next_block_state.next_unskippable_block = InvalidBlockNumber;
while (heap_vac_scan_get_next_block(vacrel, blkno + 1,
&blkno, &all_visible_according_to_vm, &vmbuffer))
@@ -1079,7 +1079,7 @@ lazy_scan_heap(LVRelState *vacrel)
* process (and its visibility status).
*
* The block number and visibility status of the next unskippable block are set
- * in vacrel->skip->next_unskippable_block and next_unskippable_allvis.
+ * in vacrel->get_next_block_state.next_unskippable_block and next_unskippable_allvis.
*
* The block number and visibility status of the next block to process are set
* in blkno and all_visible_according_to_vm. heap_vac_scan_get_next_block()
@@ -1115,12 +1115,12 @@ heap_vac_scan_get_next_block(LVRelState *vacrel, BlockNumber next_block,
return false;
}
- if (vacrel->skip.next_unskippable_block == InvalidBlockNumber ||
- next_block > vacrel->skip.next_unskippable_block)
+ if (vacrel->get_next_block_state.next_unskippable_block == InvalidBlockNumber ||
+ next_block > vacrel->get_next_block_state.next_unskippable_block)
{
/* Use local variables for better optimized loop code */
BlockNumber rel_pages = vacrel->rel_pages;
- BlockNumber next_unskippable_block = vacrel->skip.next_unskippable_block;
+ BlockNumber next_unskippable_block = vacrel->get_next_block_state.next_unskippable_block;
while (++next_unskippable_block < rel_pages)
{
@@ -1128,9 +1128,9 @@ heap_vac_scan_get_next_block(LVRelState *vacrel, BlockNumber next_block,
next_unskippable_block,
vmbuffer);
- vacrel->skip.next_unskippable_allvis = mapbits & VISIBILITYMAP_ALL_VISIBLE;
+ vacrel->get_next_block_state.next_unskippable_allvis = mapbits & VISIBILITYMAP_ALL_VISIBLE;
- if (!vacrel->skip.next_unskippable_allvis)
+ if (!vacrel->get_next_block_state.next_unskippable_allvis)
{
Assert((mapbits & VISIBILITYMAP_ALL_FROZEN) == 0);
break;
@@ -1155,7 +1155,7 @@ heap_vac_scan_get_next_block(LVRelState *vacrel, BlockNumber next_block,
if (!vacrel->skipwithvm)
{
/* Caller shouldn't rely on all_visible_according_to_vm */
- vacrel->skip.next_unskippable_allvis = false;
+ vacrel->get_next_block_state.next_unskippable_allvis = false;
break;
}
@@ -1179,7 +1179,7 @@ heap_vac_scan_get_next_block(LVRelState *vacrel, BlockNumber next_block,
}
}
- vacrel->skip.next_unskippable_block = next_unskippable_block;
+ vacrel->get_next_block_state.next_unskippable_block = next_unskippable_block;
/*
* We only skip a range with at least SKIP_PAGES_THRESHOLD consecutive
@@ -1193,16 +1193,16 @@ heap_vac_scan_get_next_block(LVRelState *vacrel, BlockNumber next_block,
* pages then skipping makes updating relfrozenxid unsafe, which is a
* real downside.
*/
- if (vacrel->skip.next_unskippable_block - next_block >= SKIP_PAGES_THRESHOLD)
+ if (vacrel->get_next_block_state.next_unskippable_block - next_block >= SKIP_PAGES_THRESHOLD)
{
- next_block = vacrel->skip.next_unskippable_block;
+ next_block = vacrel->get_next_block_state.next_unskippable_block;
if (skipsallvis)
vacrel->skippedallvis = true;
}
}
- if (next_block == vacrel->skip.next_unskippable_block)
- *all_visible_according_to_vm = vacrel->skip.next_unskippable_allvis;
+ if (next_block == vacrel->get_next_block_state.next_unskippable_block)
+ *all_visible_according_to_vm = vacrel->get_next_block_state.next_unskippable_allvis;
else
*all_visible_according_to_vm = true;
--
2.39.2
From 6dfae936a29e2d3479273f8ab47778a596258b16 Mon Sep 17 00:00:00 2001
From: Heikki Linnakangas <heikki.linnakan...@iki.fi>
Date: Wed, 6 Mar 2024 21:03:19 +0200
Subject: [PATCH v6 8/9] Track 'current_block' in the skip state
The caller was expected to always pass the last returned block + 1. It's
not clear if the next_unskippable_block accounting would work correctly
if you passed something else. So rather than expecting the caller to do
that, have heap_vac_scan_get_next_block() keep track of the last
returned block itself, in the 'get_next_block_state' struct.
This is largely redundant with the LVRelState->blkno field. But that
one is currently only used for error reporting, so it feels best to
give heap_vac_scan_get_next_block() its own field that it owns.
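As a sanity check of the wraparound this depends on: InvalidBlockNumber
is 0xFFFFFFFF, and unsigned overflow is well-defined in C, so the first
"current_block + 1" is portably block 0. A trivial standalone check:

#include <assert.h>
#include <stdint.h>

typedef uint32_t BlockNumber;
#define InvalidBlockNumber ((BlockNumber) 0xFFFFFFFF)

int
main(void)
{
	BlockNumber current_block = InvalidBlockNumber;

	/* First heap_vac_scan_get_next_block() call computes block 0. */
	assert(current_block + 1 == 0);
	return 0;
}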
---
src/backend/access/heap/vacuumlazy.c | 27 +++++++++++++++------------
1 file changed, 15 insertions(+), 12 deletions(-)
diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 17e06065f7e..535d70b71c3 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -211,6 +211,8 @@ typedef struct LVRelState
*/
struct
{
+ BlockNumber current_block;
+
/* Next unskippable block */
BlockNumber next_unskippable_block;
/* Next unskippable block's visibility status */
@@ -228,7 +230,7 @@ typedef struct LVSavedErrInfo
/* non-export function prototypes */
static void lazy_scan_heap(LVRelState *vacrel);
-static bool heap_vac_scan_get_next_block(LVRelState *vacrel, BlockNumber next_block,
+static bool heap_vac_scan_get_next_block(LVRelState *vacrel,
BlockNumber *blkno,
bool *all_visible_according_to_vm,
Buffer *vmbuffer);
@@ -831,10 +833,11 @@ lazy_scan_heap(LVRelState *vacrel)
initprog_val[2] = dead_items->max_items;
pgstat_progress_update_multi_param(3, initprog_index, initprog_val);
+ /* initialize for first heap_vac_scan_get_next_block() call */
+ vacrel->get_next_block_state.current_block = InvalidBlockNumber;
vacrel->get_next_block_state.next_unskippable_block = InvalidBlockNumber;
- while (heap_vac_scan_get_next_block(vacrel, blkno + 1,
- &blkno, &all_visible_according_to_vm, &vmbuffer))
+ while (heap_vac_scan_get_next_block(vacrel, &blkno, &all_visible_according_to_vm, &vmbuffer))
{
Buffer buf;
Page page;
@@ -1061,11 +1064,9 @@ lazy_scan_heap(LVRelState *vacrel)
* heap_vac_scan_get_next_block() -- get next block for vacuum to process
*
* lazy_scan_heap() calls here every time it needs to get the next block to
- * prune and vacuum, using the visibility map, vacuum options, and various
- * thresholds to skip blocks which do not need to be processed. Caller passes
- * next_block, the next block in line. This block may end up being skipped.
- * heap_vac_scan_get_next_block() sets blkno to next block that actually needs
- * to be processed.
+ * prune and vacuum. We use the visibility map, vacuum options, and various
+ * thresholds to skip blocks which do not need to be processed, and set blkno
+ * to next block that actually needs to be processed.
*
* A block is unskippable if it is not all visible according to the visibility
* map. It is also unskippable if it is the last block in the relation, if the
@@ -1104,14 +1105,16 @@ lazy_scan_heap(LVRelState *vacrel)
* choice to skip such a range is actually made, making everything safe.)
*/
static bool
-heap_vac_scan_get_next_block(LVRelState *vacrel, BlockNumber next_block,
- BlockNumber *blkno, bool *all_visible_according_to_vm, Buffer *vmbuffer)
+heap_vac_scan_get_next_block(LVRelState *vacrel, BlockNumber *blkno,
+ bool *all_visible_according_to_vm, Buffer *vmbuffer)
{
+ /* Relies on InvalidBlockNumber + 1 == 0 */
+ BlockNumber next_block = vacrel->get_next_block_state.current_block + 1;
bool skipsallvis = false;
if (next_block >= vacrel->rel_pages)
{
- *blkno = InvalidBlockNumber;
+ vacrel->get_next_block_state.current_block = *blkno = InvalidBlockNumber;
return false;
}
@@ -1206,7 +1209,7 @@ heap_vac_scan_get_next_block(LVRelState *vacrel, BlockNumber next_block,
else
*all_visible_according_to_vm = true;
- *blkno = next_block;
+ vacrel->get_next_block_state.current_block = *blkno = next_block;
return true;
}
--
2.39.2
From 619556cad4aad68d1711c12b962e9002e56d8db2 Mon Sep 17 00:00:00 2001
From: Heikki Linnakangas <heikki.linnakan...@iki.fi>
Date: Wed, 6 Mar 2024 21:35:11 +0200
Subject: [PATCH v6 9/9] Comment & whitespace cleanup
I moved some of the paragraphs to inside the
heap_vac_scan_get_next_block() function. I found the explanation in
the function comment at the old place too detailed: someone looking at
the function signature and how to call it would not care about all the
details of what can or cannot be skipped.
The new place isn't great either, but it will do for now.
---
src/backend/access/heap/vacuumlazy.c | 41 ++++++++++++++++------------
1 file changed, 23 insertions(+), 18 deletions(-)
diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 535d70b71c3..b8a2dcfbbac 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -228,6 +228,7 @@ typedef struct LVSavedErrInfo
VacErrPhase phase;
} LVSavedErrInfo;
+
/* non-export function prototypes */
static void lazy_scan_heap(LVRelState *vacrel);
static bool heap_vac_scan_get_next_block(LVRelState *vacrel,
@@ -1068,30 +1069,17 @@ lazy_scan_heap(LVRelState *vacrel)
* thresholds to skip blocks which do not need to be processed, and set blkno
* to next block that actually needs to be processed.
*
- * A block is unskippable if it is not all visible according to the visibility
- * map. It is also unskippable if it is the last block in the relation, if the
- * vacuum is an aggressive vacuum, or if DISABLE_PAGE_SKIPPING was passed to
- * vacuum.
- *
- * Even if a block is skippable, we may choose not to skip it if the range of
- * skippable blocks is too small (below SKIP_PAGES_THRESHOLD). As a
- * consequence, we must keep track of the next truly unskippable block and its
- * visibility status separate from the next block lazy_scan_heap() should
- * process (and its visibility status).
- *
- * The block number and visibility status of the next unskippable block are set
- * in vacrel->get_next_block_state.next_unskippable_block and next_unskippable_allvis.
- *
- * The block number and visibility status of the next block to process are set
- * in blkno and all_visible_according_to_vm. heap_vac_scan_get_next_block()
- * returns false if there are no further blocks to process.
+ * The block number and visibility status of the next block to process are
+ * returned in blkno and all_visible_according_to_vm.
+ * heap_vac_scan_get_next_block() returns false if there are no further blocks
+ * to process.
*
* vacrel is an in/out parameter here; vacuum options and information about the
* relation are read and vacrel->skippedallvis is set to ensure we don't
* advance relfrozenxid when we have skipped vacuuming all visible blocks.
*
* vmbuffer will contain the block from the VM containing visibility
- * information for the next unskippable heap block. We may end up needed a
+ * information for the next unskippable heap block. We may end up needing a
* different block from the VM (if we decide not to skip a skippable block).
* This is okay; visibilitymap_pin() will take care of this while processing
* the block.
@@ -1125,6 +1113,23 @@ heap_vac_scan_get_next_block(LVRelState *vacrel, BlockNumber *blkno,
BlockNumber rel_pages = vacrel->rel_pages;
BlockNumber next_unskippable_block = vacrel->get_next_block_state.next_unskippable_block;
+ /*
+ * A block is unskippable if it is not all visible according to the
+ * visibility map. It is also unskippable if it is the last block in
+ * the relation, if the vacuum is an aggressive vacuum, or if
+ * DISABLE_PAGE_SKIPPING was passed to vacuum.
+ *
+ * Even if a block is skippable, we may choose not to skip it if the
+ * range of skippable blocks is too small (below
+ * SKIP_PAGES_THRESHOLD). As a consequence, we must keep track of the
+ * next truly unskippable block and its visibility status separate
+ * from the next block lazy_scan_heap() should process (and its
+ * visibility status).
+ *
+ * The block number and visibility status of the next unskippable
+ * block are set in vacrel->get_next_block_state.next_unskippable_block and
+ * next_unskippable_allvis.
+ */
while (++next_unskippable_block < rel_pages)
{
uint8 mapbits = visibilitymap_get_status(vacrel->rel,
--
2.39.2