On Mon, Mar 1, 2021 at 1:40 PM Peter Geoghegan <p...@bowt.ie> wrote: > > Since it seems not a bug I personally think we don't need to do > > anything for back branches. But if we want not to trigger an index > > scan by vacuum_cleanup_index_scale_factor, we could change the default > > value to a high value (say, to 10000) so that it can skip an index > > scan in most cases. > > One reason to remove vacuum_cleanup_index_scale_factor in the back > branches is that it removes any need to fix the > "IndexVacuumInfo.num_heap_tuples is inaccurate outside of > btvacuumcleanup-only VACUUMs" bug -- it just won't matter if > btm_last_cleanup_num_heap_tuples is inaccurate anymore. (I am still > not sure about backpatch being a good idea, though.)
Attached is v8 of the patch series, which has new patches. There are no real changes to the first patch compared to v7, though. There are now two additional prototype patches to remove the vacuum_cleanup_index_scale_factor GUC/param along the lines we've discussed. This requires teaching VACUUM ANALYZE about when to trust VACUUM cleanup to set the statistics (that's what v8-0002* does).

The general idea for VACUUM ANALYZE in v8-0002* is to assume that cleanup-only VACUUMs won't set the statistics accurately -- so we need to keep track of this during VACUUM (in case it's a VACUUM ANALYZE, which now needs to know whether index vacuuming was "cleanup only" or not). This is not a new thing for hash indexes -- they never did anything in the cleanup-only case (hashvacuumcleanup() just returns NULL). And now nbtree does the same thing (usually). Not all AMs will, but the new assumption is much better than the one it replaces.

I thought of another existing case that violated the faulty assumption made by VACUUM ANALYZE (which v8-0002* fixes): VACUUM's INDEX_CLEANUP feature (which was added to Postgres 12 by commit a96c41feec6) is another case where VACUUM does nothing with indexes. VACUUM ANALYZE mistakenly assumes that index vacuuming must have run and set the pg_class statistics to an accurate value (a value more accurate than ANALYZE itself could produce), and so declines to update them. But with INDEX_CLEANUP set to off, we won't even call amvacuumcleanup().

--
Peter Geoghegan
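PS: To make the INDEX_CLEANUP case concrete, a sequence like the following illustrates the shape of the problem (throwaway table, purely for illustration). The ANALYZE part of the last command won't update the index's pg_class statistics, even though INDEX_CLEANUP prevented any index AM callbacks from running:

create table foo (id int primary key);
insert into foo select generate_series(1, 100000);
delete from foo where id % 10 = 0;
-- index_cleanup off means VACUUM skips the index entirely (no
-- ambulkdelete()/amvacuumcleanup() calls), yet the ANALYZE part still
-- trusts VACUUM to have left accurate index statistics behind:
vacuum (index_cleanup off, analyze) foo;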
From 967a057607ce2d0b648e324a9085ab4ccecd828e Mon Sep 17 00:00:00 2001 From: Peter Geoghegan <p...@bowt.ie> Date: Thu, 25 Feb 2021 15:17:22 -0800 Subject: [PATCH v8 1/3] Recycle pages deleted during same VACUUM. Author: Peter Geoghegan <p...@bowt.ie> Discussion: https://postgr.es/m/CAH2-Wzk76_P=67iuscb1un44-gyzl-kgpsxbsxq_bdcma7q...@mail.gmail.com --- src/include/access/nbtree.h | 22 ++++++- src/backend/access/nbtree/README | 31 +++++++++ src/backend/access/nbtree/nbtpage.c | 40 ++++++++++++ src/backend/access/nbtree/nbtree.c | 97 +++++++++++++++++++++++++++++ src/backend/access/nbtree/nbtxlog.c | 22 +++++++ 5 files changed, 211 insertions(+), 1 deletion(-) diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h index b56b7b7868..876b8f3437 100644 --- a/src/include/access/nbtree.h +++ b/src/include/access/nbtree.h @@ -279,7 +279,8 @@ BTPageGetDeleteXid(Page page) * Is an existing page recyclable? * * This exists to centralize the policy on which deleted pages are now safe to - * re-use. + * re-use. The _bt_newly_deleted_pages_recycle() optimization behaves more + * aggressively, though that has certain known limitations. * * Note: PageIsNew() pages are always safe to recycle, but we can't deal with * them here (caller is responsible for that case themselves). Caller might @@ -316,14 +317,33 @@ BTPageIsRecyclable(Page page) * BTVacState is private nbtree.c state used during VACUUM. It is exported * for use by page deletion related code in nbtpage.c. */ +typedef struct BTPendingRecycle +{ + BlockNumber blkno; + FullTransactionId safexid; +} BTPendingRecycle; + typedef struct BTVacState { + /* + * VACUUM operation state + */ IndexVacuumInfo *info; IndexBulkDeleteResult *stats; IndexBulkDeleteCallback callback; void *callback_state; BTCycleId cycleid; + + /* + * Page deletion state for VACUUM + */ MemoryContext pagedelcontext; + BTPendingRecycle *deleted; + bool grow; + bool full; + uint32 ndeletedspace; + uint64 maxndeletedspace; + uint32 ndeleted; } BTVacState; /* diff --git a/src/backend/access/nbtree/README b/src/backend/access/nbtree/README index 46d49bf025..265814ea46 100644 --- a/src/backend/access/nbtree/README +++ b/src/backend/access/nbtree/README @@ -430,6 +430,37 @@ whenever it is subsequently taken from the FSM for reuse. The deleted page's contents will be overwritten by the split operation (it will become the new right sibling page). +Prior to PostgreSQL 14, VACUUM was only able to recycle pages that were +deleted by a previous VACUUM operation (VACUUM typically placed all pages +deleted by the last VACUUM into the FSM, though there were and are no +certainties here). This had the obvious disadvantage of creating +uncertainty about when and how pages get recycled, especially with bursty +workloads. It was naive, even within the constraints of the design, since +there is no reason to think that it will take long for a deleted page to +become recyclable. It's convenient to use XIDs to implement the drain +technique, but that is totally unrelated to any of the other things that +VACUUM needs to do with XIDs. + +VACUUM operations now consider if it's possible to recycle any pages that +the same operation deleted after the physical scan of the index, the last +point it's convenient to do one last check. This changes nothing about +the basic design, and so it might still not be possible to recycle any +pages at that time (e.g., there might not even be one single new +transactions after an index page deletion, but before VACUUM ends). 
But +we have little to lose and plenty to gain by trying. We only need to keep +around a little information about recently deleted pages in local memory. +We don't even have to access the deleted pages a second time. + +Currently VACUUM delays considering the possibility of recycling its own +recently deleted page until the end of its btbulkdelete scan (or until the +end of btvacuumcleanup in cases where there were no tuples to delete in +the index). It would be slightly more effective if btbulkdelete page +deletions were deferred until btvacuumcleanup, simply because more time +will have passed. Our current approach works well enough in practice, +especially in cases where it really matters: cases where we're vacuuming a +large index, where recycling pages sooner rather than later is +particularly likely to matter. + Fastpath For Index Insertion ---------------------------- diff --git a/src/backend/access/nbtree/nbtpage.c b/src/backend/access/nbtree/nbtpage.c index 629a23628e..9d7d0186d0 100644 --- a/src/backend/access/nbtree/nbtpage.c +++ b/src/backend/access/nbtree/nbtpage.c @@ -2687,6 +2687,46 @@ _bt_unlink_halfdead_page(Relation rel, Buffer leafbuf, BlockNumber scanblkno, if (target <= scanblkno) stats->pages_deleted++; + /* + * Maintain array of pages that were deleted during current btvacuumscan() + * call. We may well be able to recycle them in a separate pass at the + * end of the current btvacuumscan(). + * + * Need to respect work_mem/maxndeletedspace limitation on size of deleted + * array. Our strategy when the array can no longer grow within the + * bounds of work_mem is simple: keep earlier entries (which are likelier + * to be recyclable in the end), but stop saving new entries. + */ + if (vstate->full) + return true; + + if (vstate->ndeleted >= vstate->ndeletedspace) + { + uint64 newndeletedspace; + + if (!vstate->grow) + { + vstate->full = true; + return true; + } + + newndeletedspace = vstate->ndeletedspace * 2; + if (newndeletedspace > vstate->maxndeletedspace) + { + newndeletedspace = vstate->maxndeletedspace; + vstate->grow = false; + } + vstate->ndeletedspace = newndeletedspace; + + vstate->deleted = + repalloc(vstate->deleted, + sizeof(BTPendingRecycle) * vstate->ndeletedspace); + } + + vstate->deleted[vstate->ndeleted].blkno = target; + vstate->deleted[vstate->ndeleted].safexid = safexid; + vstate->ndeleted++; + return true; } diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c index 504f5bef17..8aed93ff0a 100644 --- a/src/backend/access/nbtree/nbtree.c +++ b/src/backend/access/nbtree/nbtree.c @@ -21,7 +21,9 @@ #include "access/nbtree.h" #include "access/nbtxlog.h" #include "access/relscan.h" +#include "access/table.h" #include "access/xlog.h" +#include "catalog/index.h" #include "commands/progress.h" #include "commands/vacuum.h" #include "miscadmin.h" @@ -32,6 +34,7 @@ #include "storage/indexfsm.h" #include "storage/ipc.h" #include "storage/lmgr.h" +#include "storage/procarray.h" #include "storage/smgr.h" #include "utils/builtins.h" #include "utils/index_selfuncs.h" @@ -860,6 +863,71 @@ _bt_vacuum_needs_cleanup(IndexVacuumInfo *info) return false; } +/* + * _bt_newly_deleted_pages_recycle() -- Are _bt_pagedel pages recyclable now? + * + * Note that we assume that the array is ordered by safexid. No further + * entries can be safe to recycle once we encounter the first non-recyclable + * entry in the deleted array. 
+ */ +static inline void +_bt_newly_deleted_pages_recycle(Relation rel, BTVacState *vstate) +{ + IndexBulkDeleteResult *stats = vstate->stats; + Relation heapRel; + + Assert(vstate->ndeleted > 0); + Assert(stats->pages_newly_deleted >= vstate->ndeleted); + + /* + * Recompute VACUUM XID boundaries. + * + * We don't actually care about the oldest non-removable XID. Computing + * the oldest such XID has a useful side-effect: It updates the procarray + * state that tracks XID horizon. This is not just an optimization; it's + * essential. It allows the GlobalVisCheckRemovableFullXid() calls we + * make here to notice if and when safexid values from pages this same + * VACUUM operation deleted are sufficiently old to allow recycling to + * take place safely. + */ + GetOldestNonRemovableTransactionId(NULL); + + /* + * Use the heap relation for GlobalVisCheckRemovableFullXid() calls (don't + * pass NULL rel argument). + * + * This is an optimization; it allows us to be much more aggressive in + * cases involving logical decoding (unless this happens to be a system + * catalog). We don't simply use BTPageIsRecyclable(). + * + * XXX: The BTPageIsRecyclable() criteria creates problems for this + * optimization. Its safexid test is applied in a redundant manner within + * _bt_getbuf() (via its BTPageIsRecyclable() call). Consequently, + * _bt_getbuf() may believe that it is still unsafe to recycle a page that + * we know to be recycle safe -- in which case it is unnecessarily + * discarded. + * + * We should get around to fixing this _bt_getbuf() issue some day. For + * now we can still proceed in the hopes that BTPageIsRecyclable() will + * catch up with us before _bt_getbuf() ever reaches the page. + */ + heapRel = table_open(IndexGetRelation(RelationGetRelid(rel), false), + AccessShareLock); + for (int i = 0; i < vstate->ndeleted; i++) + { + BlockNumber blkno = vstate->deleted[i].blkno; + FullTransactionId safexid = vstate->deleted[i].safexid; + + if (!GlobalVisCheckRemovableFullXid(heapRel, safexid)) + break; + + RecordFreeIndexPage(rel, blkno); + stats->pages_free++; + } + + table_close(heapRel, AccessShareLock); +} + /* * Bulk deletion of all index entries pointing to a set of heap tuples. * The set of target tuples is specified via a callback routine that tells @@ -945,6 +1013,14 @@ btvacuumcleanup(IndexVacuumInfo *info, IndexBulkDeleteResult *stats) * _bt_vacuum_needs_cleanup() to force the next VACUUM to proceed with a * btvacuumscan() call. * + * Note: Prior to PostgreSQL 14, we were completely reliant on the next + * VACUUM operation taking care of recycling whatever pages the current + * VACUUM operation found to be empty and then deleted. It is now usually + * possible for _bt_newly_deleted_pages_recycle() to recycle all of the + * pages that any given VACUUM operation deletes, as part of the same + * VACUUM operation. As a result, it is rare for num_delpages to actually + * exceed 0, including with indexes where page deletions are frequent. + * * Note: We must delay the _bt_set_cleanup_info() call until this late * stage of VACUUM (the btvacuumcleanup() phase), to keep num_heap_tuples * accurate. 
The btbulkdelete()-time num_heap_tuples value is generally @@ -1033,6 +1109,16 @@ btvacuumscan(IndexVacuumInfo *info, IndexBulkDeleteResult *stats, "_bt_pagedel", ALLOCSET_DEFAULT_SIZES); + /* Allocate _bt_newly_deleted_pages_recycle related information */ + vstate.ndeletedspace = 512; + vstate.grow = true; + vstate.full = false; + vstate.maxndeletedspace = ((work_mem * 1024L) / sizeof(BTPendingRecycle)); + vstate.maxndeletedspace = Min(vstate.maxndeletedspace, MaxBlockNumber); + vstate.maxndeletedspace = Max(vstate.maxndeletedspace, vstate.ndeletedspace); + vstate.ndeleted = 0; + vstate.deleted = palloc(sizeof(BTPendingRecycle) * vstate.ndeletedspace); + /* * The outer loop iterates over all index pages except the metapage, in * physical order (we hope the kernel will cooperate in providing @@ -1101,7 +1187,18 @@ btvacuumscan(IndexVacuumInfo *info, IndexBulkDeleteResult *stats, * * Note that if no recyclable pages exist, we don't bother vacuuming the * FSM at all. + * + * Before vacuuming the FSM, try to make the most of the pages we + * ourselves deleted: see if they can be recycled already (try to avoid + * waiting until the next VACUUM operation to recycle). Our approach is + * to check the local array of pages that were newly deleted during this + * VACUUM. */ + if (vstate.ndeleted > 0) + _bt_newly_deleted_pages_recycle(rel, &vstate); + + pfree(vstate.deleted); + if (stats->pages_free > 0) IndexFreeSpaceMapVacuum(rel); } diff --git a/src/backend/access/nbtree/nbtxlog.c b/src/backend/access/nbtree/nbtxlog.c index 8b7c143db4..6ab9af4a43 100644 --- a/src/backend/access/nbtree/nbtxlog.c +++ b/src/backend/access/nbtree/nbtxlog.c @@ -999,6 +999,28 @@ btree_xlog_newroot(XLogReaderState *record) * the PGPROC->xmin > limitXmin test inside GetConflictingVirtualXIDs(). * Consequently, one XID value achieves the same exclusion effect on primary * and standby. + * + * XXX It would make a great deal more sense if each nbtree index's FSM (or + * some equivalent structure) was completely crash-safe. Importantly, this + * would enable page recycling's REDO side to work in a way that naturally + * matches original execution. + * + * Page deletion has to be crash safe already, plus xl_btree_reuse_page + * records are logged any time a backend has to recycle -- full crash safety + * is unlikely to add much overhead, and has clear efficiency benefits. It + * would also simplify things by more explicitly decoupling page deletion and + * page recycling. The benefits for REDO all follow from that. + * + * Under this scheme, the whole question of recycle safety could be moved from + * VACUUM to the consumer side. That is, VACUUM would no longer have to defer + * placing a page that it deletes in the FSM until BTPageIsRecyclable() starts + * to return true -- _bt_getbuf() would handle all details of safely deferring + * recycling instead. _bt_getbuf() would use the improved/crash-safe FSM to + * explicitly find a free page whose safexid is sufficiently old for recycling + * to be safe from the point of view of backends that run during original + * execution. That just leaves the REDO side. Instead of xl_btree_reuse_page + * records, we'd have FSM "consume/recycle page from the FSM" records that are + * associated with FSM page buffers/blocks. */ static void btree_xlog_reuse_page(XLogReaderState *record) -- 2.27.0
From 304839183156a11dbb33812ef040e0317f9d614b Mon Sep 17 00:00:00 2001 From: Peter Geoghegan <p...@bowt.ie> Date: Mon, 1 Mar 2021 14:40:57 -0800 Subject: [PATCH v8 2/3] VACUUM ANALYZE: Distrust cleanup-only stats. Distrust the stats from VACUUM within VACUUM ANALYZE when we know that index AMs must only have had amvacuumcleanup() calls, without any calls to ambulkdelete(). This establishes the convention that amvacuumcleanup() usually only gives an estimate for num_index_tuples. --- src/include/commands/vacuum.h | 6 +++++ src/backend/access/heap/vacuumlazy.c | 15 +++++++++++- src/backend/commands/analyze.c | 34 ++++++++++++++++++++++++---- src/backend/commands/vacuum.c | 1 + src/backend/postmaster/autovacuum.c | 1 + 5 files changed, 51 insertions(+), 6 deletions(-) diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h index d029da5ac0..efacaf758a 100644 --- a/src/include/commands/vacuum.h +++ b/src/include/commands/vacuum.h @@ -221,6 +221,12 @@ typedef struct VacuumParams VacOptTernaryValue truncate; /* Truncate empty pages at the end, * default value depends on reloptions */ + /* XXX: output param approach is grotty, breaks backbranch ABI */ + + bool indexvacuuming; /* Output param: VACUUM took place and + * performed ambulkdelete calls for + * indexes? */ + /* * The number of parallel vacuum workers. 0 by default which means choose * based on the number of indexes. -1 indicates parallel vacuum is diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c index d8f847b0e6..8716d305d0 100644 --- a/src/backend/access/heap/vacuumlazy.c +++ b/src/backend/access/heap/vacuumlazy.c @@ -1054,6 +1054,12 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats, lazy_vacuum_all_indexes(onerel, Irel, indstats, vacrelstats, lps, nindexes); + /* + * Remember index VACUUMing (not just cleanup) having taken + * place + */ + params->indexvacuuming = true; + /* Remove tuples from heap */ lazy_vacuum_heap(onerel, vacrelstats); @@ -1711,6 +1717,12 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats, lazy_vacuum_all_indexes(onerel, Irel, indstats, vacrelstats, lps, nindexes); + /* + * Remember index VACUUMing (not just cleanup) having taken + * place + */ + params->indexvacuuming = true; + /* Remove tuples from heap */ lazy_vacuum_heap(onerel, vacrelstats); } @@ -1737,7 +1749,8 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats, end_parallel_vacuum(indstats, lps, nindexes); /* Update index statistics */ - update_index_statistics(Irel, indstats, nindexes); + if (vacrelstats->useindex) + update_index_statistics(Irel, indstats, nindexes); /* If no indexes, make log report that lazy_vacuum_heap would've made */ if (vacuumed_pages) diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c index 7295cf0215..d28febdd4b 100644 --- a/src/backend/commands/analyze.c +++ b/src/backend/commands/analyze.c @@ -620,11 +620,21 @@ do_analyze_rel(Relation onerel, VacuumParams *params, } /* - * Same for indexes. Vacuum always scans all indexes, so if we're part of - * VACUUM ANALYZE, don't overwrite the accurate count already inserted by - * VACUUM. + * Same for indexes, at least in most cases. + * + * VACUUM usually scans all indexes. When we're part of VACUUM ANALYZE, + * and when VACUUM is known to have actually deleted index tuples, index + * AMs will generally give accurate reltuples -- so don't overwrite the + * accurate count already inserted by VACUUM. 
+ * + * Most individual index AMs only give an estimate in the event of a + * cleanup-only VACUUM, though -- update stats in these cases, since our + * estimate will be at least as good anyway. (It's possible that + * individual index AMs will have accurate num_index_tuples statistics + * even for a cleanup-only VACUUM. We don't bother recognizing that; it's + * pretty rare.) */ - if (!inh && !(params->options & VACOPT_VACUUM)) + if (!inh && !params->indexvacuuming) { for (ind = 0; ind < nindexes; ind++) { @@ -654,9 +664,23 @@ do_analyze_rel(Relation onerel, VacuumParams *params, pgstat_report_analyze(onerel, totalrows, totaldeadrows, (va_cols == NIL)); - /* If this isn't part of VACUUM ANALYZE, let index AMs do cleanup */ + /* + * If this isn't part of VACUUM ANALYZE, let index AMs do cleanup. + * + * Note that most index AMs perform a no-op as a matter of policy for + * amvacuumcleanup() when called in ANALYZE-only mode, so in practice this + * usually does no work (GIN indexes rely on ANALYZE cleanup calls). + * + * Do not confuse this no-op case with the !indexvacuuming VACUUM ANALYZE + * case, which is the case where ambulkdelete() wasn't called for any + * indexes during a VACUUM or a VACUUM ANALYZE. There probably _were_ + * amvacuumcleanup() calls for VACUUM ANALYZE -- they probably did very + * little work, but they're not no-ops to the index AM generally. + */ if (!(params->options & VACOPT_VACUUM)) { + Assert(!params->indexvacuuming); + for (ind = 0; ind < nindexes; ind++) { IndexBulkDeleteResult *stats; diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c index c064352e23..8e98ae98cd 100644 --- a/src/backend/commands/vacuum.c +++ b/src/backend/commands/vacuum.c @@ -110,6 +110,7 @@ ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel) /* Set default value */ params.index_cleanup = VACOPT_TERNARY_DEFAULT; params.truncate = VACOPT_TERNARY_DEFAULT; + params.indexvacuuming = false; /* For now */ /* By default parallel vacuum is enabled */ params.nworkers = 0; diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c index 23ef23c13e..27d87bac34 100644 --- a/src/backend/postmaster/autovacuum.c +++ b/src/backend/postmaster/autovacuum.c @@ -2925,6 +2925,7 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map, (!wraparound ? VACOPT_SKIP_LOCKED : 0); tab->at_params.index_cleanup = VACOPT_TERNARY_DEFAULT; tab->at_params.truncate = VACOPT_TERNARY_DEFAULT; + tab->at_params.indexvacuuming = false; /* for now */ /* As of now, we don't support parallel vacuum for autovacuum */ tab->at_params.nworkers = -1; tab->at_params.freeze_min_age = freeze_min_age; -- 2.27.0
From b357737d75b4e0827be987e6290292e4f912942e Mon Sep 17 00:00:00 2001 From: Peter Geoghegan <p...@bowt.ie> Date: Mon, 1 Mar 2021 15:58:56 -0800 Subject: [PATCH v8 3/3] Remove vacuum_cleanup_index_scale_factor GUC + param. Always skip full index scan during a VACUUM for nbtree indexes in the case where VACUUM never called btbulkdelete(), even when pg_class stats for the index relation would be considered "stale" by criteria applied using vacuum_cleanup_index_scale_factor (remove the GUC and storage param entirely). It should be fine to rely on ANALYZE to keep pg_class.reltuples up to date for nbtree indexes, which is the behavior of hashvacuumcleanup()/hash indexes. This still means that we can do a cleanup-only scan of the index for the one remaining case where that makes sense: to recycle pages known to be deleted but not yet recycled following a previous VACUUM. However, cleanup-only nbtree VACUUMS that scan the index will now be very rare. --- src/include/access/nbtree.h | 5 +- src/include/access/nbtxlog.h | 1 - src/include/miscadmin.h | 2 - src/backend/access/common/reloptions.c | 9 --- src/backend/access/nbtree/nbtinsert.c | 3 - src/backend/access/nbtree/nbtpage.c | 40 ++++------ src/backend/access/nbtree/nbtree.c | 75 ++++++------------- src/backend/access/nbtree/nbtutils.c | 2 - src/backend/access/nbtree/nbtxlog.c | 2 +- src/backend/access/rmgrdesc/nbtdesc.c | 5 +- src/backend/utils/init/globals.c | 2 - src/backend/utils/misc/guc.c | 10 --- src/backend/utils/misc/postgresql.conf.sample | 3 - src/bin/psql/tab-complete.c | 4 +- doc/src/sgml/config.sgml | 40 ---------- doc/src/sgml/ref/create_index.sgml | 14 ---- src/test/regress/expected/btree_index.out | 29 ------- src/test/regress/sql/btree_index.sql | 19 ----- 18 files changed, 43 insertions(+), 222 deletions(-) diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h index 876b8f3437..0f1692fd07 100644 --- a/src/include/access/nbtree.h +++ b/src/include/access/nbtree.h @@ -1087,8 +1087,6 @@ typedef struct BTOptions { int32 varlena_header_; /* varlena header (do not touch directly!) */ int fillfactor; /* page fill factor in percent (0..100) */ - /* fraction of newly inserted tuples needed to trigger index cleanup */ - float8 vacuum_cleanup_index_scale_factor; bool deduplicate_items; /* Try to deduplicate items? 
*/ } BTOptions; @@ -1191,8 +1189,7 @@ extern OffsetNumber _bt_findsplitloc(Relation rel, Page origpage, */ extern void _bt_initmetapage(Page page, BlockNumber rootbknum, uint32 level, bool allequalimage); -extern void _bt_set_cleanup_info(Relation rel, BlockNumber num_delpages, - float8 num_heap_tuples); +extern void _bt_set_cleanup_info(Relation rel, BlockNumber num_delpages); extern void _bt_upgrademetapage(Page page); extern Buffer _bt_getroot(Relation rel, int access); extern Buffer _bt_gettrueroot(Relation rel); diff --git a/src/include/access/nbtxlog.h b/src/include/access/nbtxlog.h index 3df34fcda2..0f7731856b 100644 --- a/src/include/access/nbtxlog.h +++ b/src/include/access/nbtxlog.h @@ -54,7 +54,6 @@ typedef struct xl_btree_metadata BlockNumber fastroot; uint32 fastlevel; uint32 last_cleanup_num_delpages; - float8 last_cleanup_num_heap_tuples; bool allequalimage; } xl_btree_metadata; diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h index 1bdc97e308..54693e047a 100644 --- a/src/include/miscadmin.h +++ b/src/include/miscadmin.h @@ -261,8 +261,6 @@ extern int64 VacuumPageDirty; extern int VacuumCostBalance; extern bool VacuumCostActive; -extern double vacuum_cleanup_index_scale_factor; - /* in tcop/postgres.c */ diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c index c687d3ee9e..433e236722 100644 --- a/src/backend/access/common/reloptions.c +++ b/src/backend/access/common/reloptions.c @@ -461,15 +461,6 @@ static relopt_real realRelOpts[] = }, 0, -1.0, DBL_MAX }, - { - { - "vacuum_cleanup_index_scale_factor", - "Number of tuple inserts prior to index cleanup as a fraction of reltuples.", - RELOPT_KIND_BTREE, - ShareUpdateExclusiveLock - }, - -1, 0.0, 1e10 - }, /* list terminator */ {{NULL}} }; diff --git a/src/backend/access/nbtree/nbtinsert.c b/src/backend/access/nbtree/nbtinsert.c index 1edb9f9579..0bc86943eb 100644 --- a/src/backend/access/nbtree/nbtinsert.c +++ b/src/backend/access/nbtree/nbtinsert.c @@ -1332,8 +1332,6 @@ _bt_insertonpg(Relation rel, xlmeta.fastroot = metad->btm_fastroot; xlmeta.fastlevel = metad->btm_fastlevel; xlmeta.last_cleanup_num_delpages = metad->btm_last_cleanup_num_delpages; - xlmeta.last_cleanup_num_heap_tuples = - metad->btm_last_cleanup_num_heap_tuples; xlmeta.allequalimage = metad->btm_allequalimage; XLogRegisterBuffer(2, metabuf, @@ -2549,7 +2547,6 @@ _bt_newroot(Relation rel, Buffer lbuf, Buffer rbuf) md.fastroot = rootblknum; md.fastlevel = metad->btm_level; md.last_cleanup_num_delpages = metad->btm_last_cleanup_num_delpages; - md.last_cleanup_num_heap_tuples = metad->btm_last_cleanup_num_heap_tuples; md.allequalimage = metad->btm_allequalimage; XLogRegisterBufData(2, (char *) &md, sizeof(xl_btree_metadata)); diff --git a/src/backend/access/nbtree/nbtpage.c b/src/backend/access/nbtree/nbtpage.c index 9d7d0186d0..97f6e39ab6 100644 --- a/src/backend/access/nbtree/nbtpage.c +++ b/src/backend/access/nbtree/nbtpage.c @@ -175,26 +175,15 @@ _bt_getmeta(Relation rel, Buffer metabuf) * _bt_vacuum_needs_cleanup() to decide whether or not a btvacuumscan() * call should go ahead for an entire VACUUM operation. * - * See btvacuumcleanup() and _bt_vacuum_needs_cleanup() for details of - * the two fields that we maintain here. - * - * The information that we maintain for btvacuumcleanup() describes the - * state of the index (as well as the table it indexes) just _after_ the - * ongoing VACUUM operation. 
The next _bt_vacuum_needs_cleanup() call - * will consider the information we saved for it during the next VACUUM - * operation (assuming that there will be no btbulkdelete() call during - * the next VACUUM operation -- if there is then the question of skipping - * btvacuumscan() doesn't even arise). + * See btvacuumcleanup() and _bt_vacuum_needs_cleanup() for the + * definition of num_delpages. */ void -_bt_set_cleanup_info(Relation rel, BlockNumber num_delpages, - float8 num_heap_tuples) +_bt_set_cleanup_info(Relation rel, BlockNumber num_delpages) { Buffer metabuf; Page metapg; BTMetaPageData *metad; - bool rewrite = false; - XLogRecPtr recptr; /* * On-disk compatibility note: The btm_last_cleanup_num_delpages metapage @@ -209,21 +198,20 @@ _bt_set_cleanup_info(Relation rel, BlockNumber num_delpages, * in reality there are only one or two. The worst that can happen is * that there will be a call to btvacuumscan a little earlier, which will * set btm_last_cleanup_num_delpages to a sane value when we're called. + * + * Note also that the metapage's btm_last_cleanup_num_heap_tuples field is + * no longer used as of PostgreSQL 14. We set it to -1.0 on rewrite, just + * to be consistent. */ metabuf = _bt_getbuf(rel, BTREE_METAPAGE, BT_READ); metapg = BufferGetPage(metabuf); metad = BTPageGetMeta(metapg); - /* Always dynamically upgrade index/metapage when BTREE_MIN_VERSION */ - if (metad->btm_version < BTREE_NOVAC_VERSION) - rewrite = true; - else if (metad->btm_last_cleanup_num_delpages != num_delpages) - rewrite = true; - else if (metad->btm_last_cleanup_num_heap_tuples != num_heap_tuples) - rewrite = true; - - if (!rewrite) + /* Don't miss chance to upgrade index/metapage when BTREE_MIN_VERSION */ + if (metad->btm_version >= BTREE_NOVAC_VERSION && + metad->btm_last_cleanup_num_delpages == num_delpages) { + /* Usually means index continues to have num_delpages of 0 */ _bt_relbuf(rel, metabuf); return; } @@ -240,13 +228,14 @@ _bt_set_cleanup_info(Relation rel, BlockNumber num_delpages, /* update cleanup-related information */ metad->btm_last_cleanup_num_delpages = num_delpages; - metad->btm_last_cleanup_num_heap_tuples = num_heap_tuples; + metad->btm_last_cleanup_num_heap_tuples = -1.0; MarkBufferDirty(metabuf); /* write wal record if needed */ if (RelationNeedsWAL(rel)) { xl_btree_metadata md; + XLogRecPtr recptr; XLogBeginInsert(); XLogRegisterBuffer(0, metabuf, REGBUF_WILL_INIT | REGBUF_STANDARD); @@ -258,7 +247,6 @@ _bt_set_cleanup_info(Relation rel, BlockNumber num_delpages, md.fastroot = metad->btm_fastroot; md.fastlevel = metad->btm_fastlevel; md.last_cleanup_num_delpages = num_delpages; - md.last_cleanup_num_heap_tuples = num_heap_tuples; md.allequalimage = metad->btm_allequalimage; XLogRegisterBufData(0, (char *) &md, sizeof(xl_btree_metadata)); @@ -443,7 +431,6 @@ _bt_getroot(Relation rel, int access) md.fastroot = rootblkno; md.fastlevel = 0; md.last_cleanup_num_delpages = 0; - md.last_cleanup_num_heap_tuples = -1.0; md.allequalimage = metad->btm_allequalimage; XLogRegisterBufData(2, (char *) &md, sizeof(xl_btree_metadata)); @@ -2628,7 +2615,6 @@ _bt_unlink_halfdead_page(Relation rel, Buffer leafbuf, BlockNumber scanblkno, xlmeta.fastroot = metad->btm_fastroot; xlmeta.fastlevel = metad->btm_fastlevel; xlmeta.last_cleanup_num_delpages = metad->btm_last_cleanup_num_delpages; - xlmeta.last_cleanup_num_heap_tuples = metad->btm_last_cleanup_num_heap_tuples; xlmeta.allequalimage = metad->btm_allequalimage; XLogRegisterBufData(4, (char *) &xlmeta, sizeof(xl_btree_metadata)); diff --git 
a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c index 8aed93ff0a..89dfa005f0 100644 --- a/src/backend/access/nbtree/nbtree.c +++ b/src/backend/access/nbtree/nbtree.c @@ -792,11 +792,8 @@ _bt_vacuum_needs_cleanup(IndexVacuumInfo *info) Buffer metabuf; Page metapg; BTMetaPageData *metad; - BTOptions *relopts; - float8 cleanup_scale_factor; uint32 btm_version; BlockNumber prev_num_delpages; - float8 prev_num_heap_tuples; /* * Copy details from metapage to local variables quickly. @@ -819,32 +816,8 @@ _bt_vacuum_needs_cleanup(IndexVacuumInfo *info) } prev_num_delpages = metad->btm_last_cleanup_num_delpages; - prev_num_heap_tuples = metad->btm_last_cleanup_num_heap_tuples; _bt_relbuf(info->index, metabuf); - /* - * If the underlying table has received a sufficiently high number of - * insertions since the last VACUUM operation that called btvacuumscan(), - * then have the current VACUUM operation call btvacuumscan() now. This - * happens when the statistics are deemed stale. - * - * XXX: We should have a more principled way of determining what - * "staleness" means. The vacuum_cleanup_index_scale_factor GUC (and the - * index-level storage param) seem hard to tune in a principled way. - */ - relopts = (BTOptions *) info->index->rd_options; - cleanup_scale_factor = (relopts && - relopts->vacuum_cleanup_index_scale_factor >= 0) - ? relopts->vacuum_cleanup_index_scale_factor - : vacuum_cleanup_index_scale_factor; - - if (cleanup_scale_factor <= 0 || - info->num_heap_tuples < 0 || - prev_num_heap_tuples <= 0 || - (info->num_heap_tuples - prev_num_heap_tuples) / - prev_num_heap_tuples >= cleanup_scale_factor) - return true; - /* * Trigger cleanup in rare cases where prev_num_delpages exceeds 5% of the * total size of the index. We can reasonably expect (though are not @@ -993,25 +966,36 @@ btvacuumcleanup(IndexVacuumInfo *info, IndexBulkDeleteResult *stats) /* * Since we aren't going to actually delete any leaf items, there's no - * need to go through all the vacuum-cycle-ID pushups here + * need to go through all the vacuum-cycle-ID pushups here. + * + * Posting list tuples are a source of inaccuracy for cleanup-only + * scans. btvacuumscan() will assume that the number of index tuples + * from each page can be used as num_index_tuples, even though + * num_index_tuples is supposed to represent the number of TIDs in the + * index. This naive approach can underestimate the number of tuples + * in the index significantly. + * + * We handle the problem by making num_index_tuples an estimate in + * cleanup-only case. */ stats = (IndexBulkDeleteResult *) palloc0(sizeof(IndexBulkDeleteResult)); + stats->estimated_count = true; btvacuumscan(info, stats, NULL, NULL, 0); } /* * By here, we know for sure that this VACUUM operation won't be skipping - * its btvacuumscan() call. Maintain the count of the current number of - * heap tuples in the metapage. Also maintain the num_delpages value. - * This information will be used by _bt_vacuum_needs_cleanup() during - * future VACUUM operations that don't need to call btbulkdelete(). + * its btvacuumscan() call. Maintain the num_delpages value. This + * information will be used by _bt_vacuum_needs_cleanup() during future + * VACUUM operations that don't need to call btbulkdelete(). * * num_delpages is the number of deleted pages now in the index that were * not safe to place in the FSM to be recycled just yet. We expect that * it will almost certainly be possible to place all of these pages in the - * FSM during the next VACUUM operation. 
That factor alone might cause - * _bt_vacuum_needs_cleanup() to force the next VACUUM to proceed with a - * btvacuumscan() call. + * FSM during the next VACUUM operation. _bt_vacuum_needs_cleanup() will + * force the next VACUUM to consider this before allowing btvacuumscan() + * to be skipped entirely. This should be rare -- cleanup-only VACUUMs + * almost always manage to skip btvacuumscan() in practice. * * Note: Prior to PostgreSQL 14, we were completely reliant on the next * VACUUM operation taking care of recycling whatever pages the current @@ -1020,29 +1004,16 @@ btvacuumcleanup(IndexVacuumInfo *info, IndexBulkDeleteResult *stats) * pages that any given VACUUM operation deletes, as part of the same * VACUUM operation. As a result, it is rare for num_delpages to actually * exceed 0, including with indexes where page deletions are frequent. - * - * Note: We must delay the _bt_set_cleanup_info() call until this late - * stage of VACUUM (the btvacuumcleanup() phase), to keep num_heap_tuples - * accurate. The btbulkdelete()-time num_heap_tuples value is generally - * just pg_class.reltuples for the heap relation _before_ VACUUM began. - * In general cleanup info should describe the state of the index/table - * _after_ VACUUM finishes. */ Assert(stats->pages_deleted >= stats->pages_free); num_delpages = stats->pages_deleted - stats->pages_free; - _bt_set_cleanup_info(info->index, num_delpages, info->num_heap_tuples); + _bt_set_cleanup_info(info->index, num_delpages); /* * It's quite possible for us to be fooled by concurrent page splits into * double-counting some index tuples, so disbelieve any total that exceeds * the underlying heap's count ... if we know that accurately. Otherwise * this might just make matters worse. - * - * Posting list tuples are another source of inaccuracy. Cleanup-only - * btvacuumscan calls assume that the number of index tuples can be used - * as num_index_tuples, even though num_index_tuples is supposed to - * represent the number of TIDs in the index. This naive approach can - * underestimate the number of tuples in the index. */ if (!info->estimated_count) { @@ -1092,7 +1063,6 @@ btvacuumscan(IndexVacuumInfo *info, IndexBulkDeleteResult *stats, * pages in the index at the end of the VACUUM command.) */ stats->num_pages = 0; - stats->estimated_count = false; stats->num_index_tuples = 0; stats->pages_deleted = 0; stats->pages_free = 0; @@ -1518,7 +1488,10 @@ backtrack: * We don't count the number of live TIDs during cleanup-only calls to * btvacuumscan (i.e. when callback is not set). We count the number * of index tuples directly instead. This avoids the expense of - * directly examining all of the tuples on each page. + * directly examining all of the tuples on each page. VACUUM will + * treat num_index_tuples as an estimate in cleanup-only case, so it + * doesn't matter that this underestimates num_index_tuples + * significantly in some cases. 
*/ if (minoff > maxoff) attempt_pagedel = (blkno == scanblkno); diff --git a/src/backend/access/nbtree/nbtutils.c b/src/backend/access/nbtree/nbtutils.c index d524310723..fdbe0da472 100644 --- a/src/backend/access/nbtree/nbtutils.c +++ b/src/backend/access/nbtree/nbtutils.c @@ -2105,8 +2105,6 @@ btoptions(Datum reloptions, bool validate) { static const relopt_parse_elt tab[] = { {"fillfactor", RELOPT_TYPE_INT, offsetof(BTOptions, fillfactor)}, - {"vacuum_cleanup_index_scale_factor", RELOPT_TYPE_REAL, - offsetof(BTOptions, vacuum_cleanup_index_scale_factor)}, {"deduplicate_items", RELOPT_TYPE_BOOL, offsetof(BTOptions, deduplicate_items)} diff --git a/src/backend/access/nbtree/nbtxlog.c b/src/backend/access/nbtree/nbtxlog.c index 6ab9af4a43..8ccf1be061 100644 --- a/src/backend/access/nbtree/nbtxlog.c +++ b/src/backend/access/nbtree/nbtxlog.c @@ -113,7 +113,7 @@ _bt_restore_meta(XLogReaderState *record, uint8 block_id) /* Cannot log BTREE_MIN_VERSION index metapage without upgrade */ Assert(md->btm_version >= BTREE_NOVAC_VERSION); md->btm_last_cleanup_num_delpages = xlrec->last_cleanup_num_delpages; - md->btm_last_cleanup_num_heap_tuples = xlrec->last_cleanup_num_heap_tuples; + md->btm_last_cleanup_num_heap_tuples = -1.0; md->btm_allequalimage = xlrec->allequalimage; pageop = (BTPageOpaque) PageGetSpecialPointer(metapg); diff --git a/src/backend/access/rmgrdesc/nbtdesc.c b/src/backend/access/rmgrdesc/nbtdesc.c index f7cc4dd3e6..710efbd36a 100644 --- a/src/backend/access/rmgrdesc/nbtdesc.c +++ b/src/backend/access/rmgrdesc/nbtdesc.c @@ -113,9 +113,8 @@ btree_desc(StringInfo buf, XLogReaderState *record) xlrec = (xl_btree_metadata *) XLogRecGetBlockData(record, 0, NULL); - appendStringInfo(buf, "last_cleanup_num_delpages %u; last_cleanup_num_heap_tuples: %f", - xlrec->last_cleanup_num_delpages, - xlrec->last_cleanup_num_heap_tuples); + appendStringInfo(buf, "last_cleanup_num_delpages %u", + xlrec->last_cleanup_num_delpages); break; } } diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c index a5976ad5b1..73e0a672ae 100644 --- a/src/backend/utils/init/globals.c +++ b/src/backend/utils/init/globals.c @@ -148,5 +148,3 @@ int64 VacuumPageDirty = 0; int VacuumCostBalance = 0; /* working state for vacuum */ bool VacuumCostActive = false; - -double vacuum_cleanup_index_scale_factor; diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c index d626731723..783e2b0fc2 100644 --- a/src/backend/utils/misc/guc.c +++ b/src/backend/utils/misc/guc.c @@ -3693,16 +3693,6 @@ static struct config_real ConfigureNamesReal[] = NULL, NULL, NULL }, - { - {"vacuum_cleanup_index_scale_factor", PGC_USERSET, CLIENT_CONN_STATEMENT, - gettext_noop("Number of tuple inserts prior to index cleanup as a fraction of reltuples."), - NULL - }, - &vacuum_cleanup_index_scale_factor, - 0.1, 0.0, 1e10, - NULL, NULL, NULL - }, - { {"log_statement_sample_rate", PGC_SUSET, LOGGING_WHEN, gettext_noop("Fraction of statements exceeding log_min_duration_sample to be logged."), diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample index ee06528bb0..3736c972a8 100644 --- a/src/backend/utils/misc/postgresql.conf.sample +++ b/src/backend/utils/misc/postgresql.conf.sample @@ -671,9 +671,6 @@ #vacuum_freeze_table_age = 150000000 #vacuum_multixact_freeze_min_age = 5000000 #vacuum_multixact_freeze_table_age = 150000000 -#vacuum_cleanup_index_scale_factor = 0.1 # fraction of total number of tuples - # before index cleanup, 0 always performs - # index 
cleanup #bytea_output = 'hex' # hex, escape #xmlbinary = 'base64' #xmloption = 'content' diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c index 9f0208ac49..ecdb8d752b 100644 --- a/src/bin/psql/tab-complete.c +++ b/src/bin/psql/tab-complete.c @@ -1789,14 +1789,14 @@ psql_completion(const char *text, int start, int end) /* ALTER INDEX <foo> SET|RESET ( */ else if (Matches("ALTER", "INDEX", MatchAny, "RESET", "(")) COMPLETE_WITH("fillfactor", - "vacuum_cleanup_index_scale_factor", "deduplicate_items", /* BTREE */ + "deduplicate_items", /* BTREE */ "fastupdate", "gin_pending_list_limit", /* GIN */ "buffering", /* GiST */ "pages_per_range", "autosummarize" /* BRIN */ ); else if (Matches("ALTER", "INDEX", MatchAny, "SET", "(")) COMPLETE_WITH("fillfactor =", - "vacuum_cleanup_index_scale_factor =", "deduplicate_items =", /* BTREE */ + "deduplicate_items =", /* BTREE */ "fastupdate =", "gin_pending_list_limit =", /* GIN */ "buffering =", /* GiST */ "pages_per_range =", "autosummarize =" /* BRIN */ diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml index b5718fc136..3cf754a236 100644 --- a/doc/src/sgml/config.sgml +++ b/doc/src/sgml/config.sgml @@ -8512,46 +8512,6 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; </listitem> </varlistentry> - <varlistentry id="guc-vacuum-cleanup-index-scale-factor" xreflabel="vacuum_cleanup_index_scale_factor"> - <term><varname>vacuum_cleanup_index_scale_factor</varname> (<type>floating point</type>) - <indexterm> - <primary><varname>vacuum_cleanup_index_scale_factor</varname></primary> - <secondary>configuration parameter</secondary> - </indexterm> - </term> - <listitem> - <para> - Specifies the fraction of the total number of heap tuples counted in - the previous statistics collection that can be inserted without - incurring an index scan at the <command>VACUUM</command> cleanup stage. - This setting currently applies to B-tree indexes only. - </para> - - <para> - If no tuples were deleted from the heap, B-tree indexes are still - scanned at the <command>VACUUM</command> cleanup stage when the - index's statistics are stale. Index statistics are considered - stale if the number of newly inserted tuples exceeds the - <varname>vacuum_cleanup_index_scale_factor</varname> - fraction of the total number of heap tuples detected by the previous - statistics collection. The total number of heap tuples is stored in - the index meta-page. Note that the meta-page does not include this data - until <command>VACUUM</command> finds no dead tuples, so B-tree index - scan at the cleanup stage can only be skipped if the second and - subsequent <command>VACUUM</command> cycles detect no dead tuples. - </para> - - <para> - The value can range from <literal>0</literal> to - <literal>10000000000</literal>. - When <varname>vacuum_cleanup_index_scale_factor</varname> is set to - <literal>0</literal>, index scans are never skipped during - <command>VACUUM</command> cleanup. The default value is <literal>0.1</literal>. 
- </para> - - </listitem> - </varlistentry> - <varlistentry id="guc-bytea-output" xreflabel="bytea_output"> <term><varname>bytea_output</varname> (<type>enum</type>) <indexterm> diff --git a/doc/src/sgml/ref/create_index.sgml b/doc/src/sgml/ref/create_index.sgml index 965dcf472c..b291b4dbc0 100644 --- a/doc/src/sgml/ref/create_index.sgml +++ b/doc/src/sgml/ref/create_index.sgml @@ -456,20 +456,6 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] <replaceable class= </note> </listitem> </varlistentry> - - <varlistentry id="index-reloption-vacuum-cleanup-index-scale-factor" xreflabel="vacuum_cleanup_index_scale_factor"> - <term><literal>vacuum_cleanup_index_scale_factor</literal> (<type>floating point</type>) - <indexterm> - <primary><varname>vacuum_cleanup_index_scale_factor</varname></primary> - <secondary>storage parameter</secondary> - </indexterm> - </term> - <listitem> - <para> - Per-index value for <xref linkend="guc-vacuum-cleanup-index-scale-factor"/>. - </para> - </listitem> - </varlistentry> </variablelist> <para> diff --git a/src/test/regress/expected/btree_index.out b/src/test/regress/expected/btree_index.out index cfd4338e36..bc113a70b4 100644 --- a/src/test/regress/expected/btree_index.out +++ b/src/test/regress/expected/btree_index.out @@ -308,35 +308,6 @@ alter table btree_tall_tbl alter COLUMN t set storage plain; create index btree_tall_idx on btree_tall_tbl (t, id) with (fillfactor = 10); insert into btree_tall_tbl select g, repeat('x', 250) from generate_series(1, 130) g; --- --- Test vacuum_cleanup_index_scale_factor --- --- Simple create -create table btree_test(a int); -create index btree_idx1 on btree_test(a) with (vacuum_cleanup_index_scale_factor = 40.0); -select reloptions from pg_class WHERE oid = 'btree_idx1'::regclass; - reloptions ------------------------------------------- - {vacuum_cleanup_index_scale_factor=40.0} -(1 row) - --- Fail while setting improper values -create index btree_idx_err on btree_test(a) with (vacuum_cleanup_index_scale_factor = -10.0); -ERROR: value -10.0 out of bounds for option "vacuum_cleanup_index_scale_factor" -DETAIL: Valid values are between "0.000000" and "10000000000.000000". 
-create index btree_idx_err on btree_test(a) with (vacuum_cleanup_index_scale_factor = 100.0); -create index btree_idx_err on btree_test(a) with (vacuum_cleanup_index_scale_factor = 'string'); -ERROR: invalid value for floating point option "vacuum_cleanup_index_scale_factor": string -create index btree_idx_err on btree_test(a) with (vacuum_cleanup_index_scale_factor = true); -ERROR: invalid value for floating point option "vacuum_cleanup_index_scale_factor": true --- Simple ALTER INDEX -alter index btree_idx1 set (vacuum_cleanup_index_scale_factor = 70.0); -select reloptions from pg_class WHERE oid = 'btree_idx1'::regclass; - reloptions ------------------------------------------- - {vacuum_cleanup_index_scale_factor=70.0} -(1 row) - -- -- Test for multilevel page deletion -- diff --git a/src/test/regress/sql/btree_index.sql b/src/test/regress/sql/btree_index.sql index 96f53818ff..c60312db2d 100644 --- a/src/test/regress/sql/btree_index.sql +++ b/src/test/regress/sql/btree_index.sql @@ -150,25 +150,6 @@ create index btree_tall_idx on btree_tall_tbl (t, id) with (fillfactor = 10); insert into btree_tall_tbl select g, repeat('x', 250) from generate_series(1, 130) g; --- --- Test vacuum_cleanup_index_scale_factor --- - --- Simple create -create table btree_test(a int); -create index btree_idx1 on btree_test(a) with (vacuum_cleanup_index_scale_factor = 40.0); -select reloptions from pg_class WHERE oid = 'btree_idx1'::regclass; - --- Fail while setting improper values -create index btree_idx_err on btree_test(a) with (vacuum_cleanup_index_scale_factor = -10.0); -create index btree_idx_err on btree_test(a) with (vacuum_cleanup_index_scale_factor = 100.0); -create index btree_idx_err on btree_test(a) with (vacuum_cleanup_index_scale_factor = 'string'); -create index btree_idx_err on btree_test(a) with (vacuum_cleanup_index_scale_factor = true); - --- Simple ALTER INDEX -alter index btree_idx1 set (vacuum_cleanup_index_scale_factor = 70.0); -select reloptions from pg_class WHERE oid = 'btree_idx1'::regclass; - -- -- Test for multilevel page deletion -- -- 2.27.0