Hi,

Attached is a rough implementation of both the autovacuum failsafe revert to shared buffers and the VACUUM option (no tests or docs or anything yet).
The first three patches in the set are just for enabling use of shared buffers in failsafe mode for autovacuum. I haven't actually ensured it works (i.e. triggering failsafe mode and checking the stats for whether or not shared buffers were used).

I was wondering about the status of the autovacuum wraparound failsafe test suggested in [1]. I don't see it registered for the March commitfest. I'll probably review it, since it will be useful for this patch set.

The first patch in the set is to free the BufferAccessStrategy object that is made in do_autovacuum() -- I don't see when the memory context it is allocated in is destroyed, so it seems like it might be a leak?

The last patch in the set is a trial implementation of the VACUUM option suggested -- BUFFER_USAGE_LIMIT. More on that below.

On Wed, Jan 11, 2023 at 4:39 PM Andres Freund <and...@anarazel.de> wrote:
> Hi,
>
> On 2023-01-11 16:18:34 -0500, Tom Lane wrote:
> > Peter Geoghegan <p...@bowt.ie> writes:
> > > On Wed, Jan 11, 2023 at 11:18 AM Andres Freund <and...@anarazel.de> wrote:
> > >> I don't like that - it's also quite useful to disable use of ringbuffers when
> > >> you actually need to clean up indexes. Especially when we have a lot of dead
> > >> tuples we'll rescan indexes over and over...
> >
> > > That's a fair point.
> >
> > > My vote goes to "REUSE_BUFFERS", then.
> >
> > I wonder whether it could make sense to allow a larger ringbuffer size,
> > rather than just the limit cases of "on" and "off".
>
> I can see that making sense, particularly if we were to later extend this to
> other users of ringbuffers. E.g. COPY's use of the ringbuffer makes loading of
> data > 16MB but also << s_b vastly slower, but it can still be very important
> to use if there's lots of parallel processes loading data.
>
> Maybe BUFFER_USAGE_LIMIT, with a value from -1 to N, with -1 indicating the
> default value, 0 preventing use of a buffer access strategy, and 1..N
> indicating the size in blocks?
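As an editorial aside, the -1/0/1..N semantics proposed above can be sketched as a small standalone function. This is an illustrative sketch only, not code from the patch set; the function name is invented, and an 8kB BLCKSZ is assumed:

```c
#include <stdbool.h>

#define BLCKSZ 8192				/* assumed 8kB block size */

/*
 * Illustrative sketch (not PostgreSQL code) of the proposed
 * BUFFER_USAGE_LIMIT semantics: -1 selects the built-in default ring
 * size, 0 disables the ring entirely (plain shared buffers), and 1..N
 * is a ring size in blocks.
 */
static int
buffer_usage_limit_to_ring_size(int value, bool *use_ring)
{
	int			default_ring_size = 256 * 1024 / BLCKSZ;	/* 32 blocks */

	if (value == 0)
	{
		*use_ring = false;		/* no strategy: use shared buffers */
		return 0;
	}

	*use_ring = true;
	return (value == -1) ? default_ring_size : value;
}
```

Note that with these semantics the caller must already know a sensible N in blocks, which is the usability point raised below.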
I have found the implementation you suggested very hard to use. The attached fourth patch in the set implements it the way you suggest. I had to figure out what number to set BUFFER_USAGE_LIMIT to -- and, since I don't specify shared buffers in units of nbuffers, it's pretty annoying to have to figure out a valid number.

I think that it would be better to have it be either a percentage of shared buffers or a size in units of bytes/kB/MB like that of shared_buffers. Using a fraction or percentage appeals to me because you don't need to reference your shared_buffers setting and calculate what size you want to set it to. Also, parsing the size in different units sounds like more work. Unfortunately, the fraction doesn't really work if we cap the ring size of a buffer access strategy to NBuffers / 8. Also, there are other issues, like what 0% and 100% would mean.

I have a list of other questions, issues, and TODOs related to the code I wrote to implement BUFFER_USAGE_LIMIT, but I'm not sure those are worth discussing until we shape up the interface.

> Would we want to set an upper limit lower than implied by the memory limit for
> the BufferAccessStrategy allocation?

So, I was wondering what you thought about NBuffers / 8 (the current limit). Does it make sense? If we clamp the user-specified value to this, I think we definitely need to inform them through some kind of logging or message. I am sure there are lots of other GUCs doing this -- do you know any off the top of your head?

- Melanie

[1] https://www.postgresql.org/message-id/flat/CAB8KJ%3Dj1b3kscX8Cg5G%3DQ39ZQsv2x4URXsuTueJLz%3DfcvJ3eoQ%40mail.gmail.com#ee67664e85c4d11596a92cc71780d29c
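To illustrate the interaction between a percentage interface and the NBuffers / 8 cap mentioned above, here is a hypothetical sketch (names are invented, not from the patches): with the clamp in place, every percentage above 12.5% collapses to the same ring size, so most of the 0-100 range would be dead.

```c
#define MIN(a, b)	((a) < (b) ? (a) : (b))

/*
 * Hypothetical sketch: size the ring as a percentage of shared buffers,
 * then apply the existing NBuffers / 8 clamp. Any percentage above 12.5
 * produces the same clamped result, which is one argument against a
 * percentage-based interface while the clamp exists.
 */
static int
ring_size_from_percent(int nbuffers, int percent)
{
	int			requested = (int) ((long long) nbuffers * percent / 100);

	return MIN(nbuffers / 8, requested);
}
```

For example, with 16384 shared buffers (128MB at 8kB blocks), 10% yields 1638 buffers, but 25% and 100% both yield the clamped 2048.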
From 313deb07f729924da650eb01d31cdb722950a5b1 Mon Sep 17 00:00:00 2001
From: Melanie Plageman <melanieplage...@gmail.com>
Date: Wed, 22 Feb 2023 12:26:01 -0500
Subject: [PATCH v1 3/4] use shared buffers when failsafe active

---
 src/backend/access/heap/vacuumlazy.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 8f14cf85f3..b319a244d5 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -2622,6 +2622,11 @@ lazy_check_wraparound_failsafe(LVRelState *vacrel)
 	if (unlikely(vacuum_xid_failsafe_check(&vacrel->cutoffs)))
 	{
 		vacrel->failsafe_active = true;
+		/*
+		 * Assume the caller who allocated the memory for the
+		 * BufferAccessStrategy object will free it.
+		 */
+		vacrel->bstrategy = NULL;
 
 		/* Disable index vacuuming, index cleanup, and heap rel truncation */
 		vacrel->do_index_vacuuming = false;
-- 
2.37.2
From 49552b63a87365d9ea7c4b9d74f77007e08b54fc Mon Sep 17 00:00:00 2001
From: Melanie Plageman <melanieplage...@gmail.com>
Date: Wed, 22 Feb 2023 11:59:53 -0500
Subject: [PATCH v1 1/4] dont leak strategy object

---
 src/backend/postmaster/autovacuum.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index ff6149a179..673ff99767 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -2607,6 +2607,8 @@ deleted:
 	if (did_vacuum || !found_concurrent_worker)
 		vac_update_datfrozenxid();
 
+	FreeAccessStrategy(bstrategy);
+
 	/* Finally close out the last transaction. */
 	CommitTransactionCommand();
 }
-- 
2.37.2
From e93496939bbda8fabbd8bac9920d5ec18ff959c4 Mon Sep 17 00:00:00 2001
From: Melanie Plageman <melanieplage...@gmail.com>
Date: Wed, 22 Feb 2023 12:06:41 -0500
Subject: [PATCH v1 2/4] remove global variable vac_strategy

---
 src/backend/commands/vacuum.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index aa79d9de4d..78980cf19c 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -74,7 +74,6 @@ int			vacuum_multixact_failsafe_age;
 
 /* A few variables that don't seem worth passing around as parameters */
 static MemoryContext vac_context = NULL;
-static BufferAccessStrategy vac_strategy;
 
 
 /*
@@ -93,7 +92,7 @@ static void vac_truncate_clog(TransactionId frozenXID,
 							  TransactionId lastSaneFrozenXid,
 							  MultiXactId lastSaneMinMulti);
 static bool vacuum_rel(Oid relid, RangeVar *relation, VacuumParams *params,
-					   bool skip_privs);
+					   bool skip_privs, BufferAccessStrategy bstrategy);
 static double compute_parallel_delay(void);
 static VacOptValue get_vacoptval_from_boolean(DefElem *def);
 static bool vac_tid_reaped(ItemPointer itemptr, void *state);
@@ -333,7 +332,7 @@ vacuum(List *relations, VacuumParams *params,
 	in_outer_xact = IsInTransactionBlock(isTopLevel);
 
 	/*
-	 * Due to static variables vac_context, anl_context and vac_strategy,
+	 * Due to static variable vac_context,
 	 * vacuum() is not reentrant. This matters when VACUUM FULL or ANALYZE
 	 * calls a hostile index expression that itself calls ANALYZE.
 	 */
@@ -398,7 +397,6 @@ vacuum(List *relations, VacuumParams *params,
 		bstrategy = GetAccessStrategy(BAS_VACUUM);
 		MemoryContextSwitchTo(old_context);
 	}
-	vac_strategy = bstrategy;
 
 	/*
 	 * Build list of relation(s) to process, putting any new data in
@@ -503,7 +501,7 @@ vacuum(List *relations, VacuumParams *params,
 
 		if (params->options & VACOPT_VACUUM)
 		{
-			if (!vacuum_rel(vrel->oid, vrel->relation, params, false))
+			if (!vacuum_rel(vrel->oid, vrel->relation, params, false, bstrategy))
 				continue;
 		}
 
@@ -521,7 +519,7 @@ vacuum(List *relations, VacuumParams *params,
 			}
 
 			analyze_rel(vrel->oid, vrel->relation, params,
-						vrel->va_cols, in_outer_xact, vac_strategy);
+						vrel->va_cols, in_outer_xact, bstrategy);
 
 			if (use_own_xacts)
 			{
@@ -1832,7 +1830,7 @@ vac_truncate_clog(TransactionId frozenXID,
  * At entry and exit, we are not inside a transaction.
  */
 static bool
-vacuum_rel(Oid relid, RangeVar *relation, VacuumParams *params, bool skip_privs)
+vacuum_rel(Oid relid, RangeVar *relation, VacuumParams *params, bool skip_privs, BufferAccessStrategy bstrategy)
 {
 	LOCKMODE	lmode;
 	Relation	rel;
@@ -2068,7 +2066,7 @@ vacuum_rel(Oid relid, RangeVar *relation, VacuumParams *params, bool skip_privs)
 		cluster_rel(relid, InvalidOid, &cluster_params);
 	}
 	else
-		table_relation_vacuum(rel, params, vac_strategy);
+		table_relation_vacuum(rel, params, bstrategy);
 
 	/* Roll back any GUC changes executed by index functions */
 	AtEOXact_GUC(false, save_nestlevel);
@@ -2094,7 +2092,7 @@ vacuum_rel(Oid relid, RangeVar *relation, VacuumParams *params, bool skip_privs)
 	 * totally unimportant for toast relations.
 	 */
 	if (toast_relid != InvalidOid)
-		vacuum_rel(toast_relid, NULL, params, true);
+		vacuum_rel(toast_relid, NULL, params, true, bstrategy);
 
 	/*
 	 * Now release the session-level lock on the main table.
-- 
2.37.2
From 61f3e0513efdc26625820a04ec5af185f3049398 Mon Sep 17 00:00:00 2001
From: Melanie Plageman <melanieplage...@gmail.com>
Date: Wed, 22 Feb 2023 15:28:34 -0500
Subject: [PATCH v1 4/4] add vacuum option to specify nbuffers

---
 contrib/amcheck/verify_heapam.c       |  2 +-
 contrib/amcheck/verify_nbtree.c       |  2 +-
 contrib/bloom/blscan.c                |  2 +-
 contrib/pg_visibility/pg_visibility.c |  4 +-
 contrib/pgstattuple/pgstatapprox.c    |  2 +-
 contrib/pgstattuple/pgstatindex.c     |  4 +-
 contrib/pgstattuple/pgstattuple.c     |  2 +-
 src/backend/access/heap/heapam.c      |  4 +-
 src/backend/commands/dbcommands.c     |  2 +-
 src/backend/commands/vacuum.c         | 50 ++++++++++++++++++++++-
 src/backend/commands/vacuumparallel.c |  8 +++-
 src/backend/postmaster/autovacuum.c   |  5 ++-
 src/backend/storage/buffer/bufmgr.c   |  4 +-
 src/backend/storage/buffer/freelist.c | 58 ++++++++++++++++++++++++---
 src/include/commands/vacuum.h         |  1 +
 src/include/storage/bufmgr.h          |  2 +-
 16 files changed, 127 insertions(+), 25 deletions(-)

diff --git a/contrib/amcheck/verify_heapam.c b/contrib/amcheck/verify_heapam.c
index 4fcfd6df72..e3aa89975d 100644
--- a/contrib/amcheck/verify_heapam.c
+++ b/contrib/amcheck/verify_heapam.c
@@ -330,7 +330,7 @@ verify_heapam(PG_FUNCTION_ARGS)
 		PG_RETURN_NULL();
 	}
 
-	ctx.bstrategy = GetAccessStrategy(BAS_BULKREAD);
+	ctx.bstrategy = GetAccessStrategy(BAS_BULKREAD, -1);
 	ctx.buffer = InvalidBuffer;
 	ctx.page = NULL;
 
diff --git a/contrib/amcheck/verify_nbtree.c b/contrib/amcheck/verify_nbtree.c
index 257cff671b..b0552a94ef 100644
--- a/contrib/amcheck/verify_nbtree.c
+++ b/contrib/amcheck/verify_nbtree.c
@@ -547,7 +547,7 @@ bt_check_every_level(Relation rel, Relation heaprel, bool heapkeyspace,
 	state->targetcontext = AllocSetContextCreate(CurrentMemoryContext,
 												 "amcheck context",
 												 ALLOCSET_DEFAULT_SIZES);
-	state->checkstrategy = GetAccessStrategy(BAS_BULKREAD);
+	state->checkstrategy = GetAccessStrategy(BAS_BULKREAD, -1);
 
 	/* Get true root block from meta-page */
 	metapage = palloc_btree_page(state, BTREE_METAPAGE);
diff --git a/contrib/bloom/blscan.c b/contrib/bloom/blscan.c
index 6cc7d07164..10e3086e39 100644
--- a/contrib/bloom/blscan.c
+++ b/contrib/bloom/blscan.c
@@ -119,7 +119,7 @@ blgetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
 	 * We're going to read the whole index. This is why we use appropriate
 	 * buffer access strategy.
 	 */
-	bas = GetAccessStrategy(BAS_BULKREAD);
+	bas = GetAccessStrategy(BAS_BULKREAD, -1);
 	npages = RelationGetNumberOfBlocks(scan->indexRelation);
 
 	for (blkno = BLOOM_HEAD_BLKNO; blkno < npages; blkno++)
diff --git a/contrib/pg_visibility/pg_visibility.c b/contrib/pg_visibility/pg_visibility.c
index 2a4acfd1ee..47a113db67 100644
--- a/contrib/pg_visibility/pg_visibility.c
+++ b/contrib/pg_visibility/pg_visibility.c
@@ -476,7 +476,7 @@ collect_visibility_data(Oid relid, bool include_pd)
 	vbits	   *info;
 	BlockNumber blkno;
 	Buffer		vmbuffer = InvalidBuffer;
-	BufferAccessStrategy bstrategy = GetAccessStrategy(BAS_BULKREAD);
+	BufferAccessStrategy bstrategy = GetAccessStrategy(BAS_BULKREAD, -1);
 
 	rel = relation_open(relid, AccessShareLock);
 
@@ -554,7 +554,7 @@ collect_corrupt_items(Oid relid, bool all_visible, bool all_frozen)
 	corrupt_items *items;
 	BlockNumber blkno;
 	Buffer		vmbuffer = InvalidBuffer;
-	BufferAccessStrategy bstrategy = GetAccessStrategy(BAS_BULKREAD);
+	BufferAccessStrategy bstrategy = GetAccessStrategy(BAS_BULKREAD, -1);
 	TransactionId OldestXmin = InvalidTransactionId;
 
 	rel = relation_open(relid, AccessShareLock);
diff --git a/contrib/pgstattuple/pgstatapprox.c b/contrib/pgstattuple/pgstatapprox.c
index f601dc6121..e71a468988 100644
--- a/contrib/pgstattuple/pgstatapprox.c
+++ b/contrib/pgstattuple/pgstatapprox.c
@@ -72,7 +72,7 @@ statapprox_heap(Relation rel, output_type *stat)
 	TransactionId OldestXmin;
 
 	OldestXmin = GetOldestNonRemovableTransactionId(rel);
-	bstrategy = GetAccessStrategy(BAS_BULKREAD);
+	bstrategy = GetAccessStrategy(BAS_BULKREAD, -1);
 
 	nblocks = RelationGetNumberOfBlocks(rel);
 	scanned = 0;
diff --git a/contrib/pgstattuple/pgstatindex.c b/contrib/pgstattuple/pgstatindex.c
index d69ac1c93d..53c6e99bfa 100644
--- a/contrib/pgstattuple/pgstatindex.c
+++ b/contrib/pgstattuple/pgstatindex.c
@@ -219,7 +219,7 @@ pgstatindex_impl(Relation rel, FunctionCallInfo fcinfo)
 	BlockNumber nblocks;
 	BlockNumber blkno;
 	BTIndexStat indexStat;
-	BufferAccessStrategy bstrategy = GetAccessStrategy(BAS_BULKREAD);
+	BufferAccessStrategy bstrategy = GetAccessStrategy(BAS_BULKREAD, -1);
 
 	if (!IS_INDEX(rel) || !IS_BTREE(rel))
 		ereport(ERROR,
@@ -612,7 +612,7 @@ pgstathashindex(PG_FUNCTION_ARGS)
 	nblocks = RelationGetNumberOfBlocks(rel);
 
 	/* prepare access strategy for this index */
-	bstrategy = GetAccessStrategy(BAS_BULKREAD);
+	bstrategy = GetAccessStrategy(BAS_BULKREAD, -1);
 
 	/* Start from blkno 1 as 0th block is metapage */
 	for (blkno = 1; blkno < nblocks; blkno++)
diff --git a/contrib/pgstattuple/pgstattuple.c b/contrib/pgstattuple/pgstattuple.c
index 93b7834b77..587d79bb38 100644
--- a/contrib/pgstattuple/pgstattuple.c
+++ b/contrib/pgstattuple/pgstattuple.c
@@ -518,7 +518,7 @@ pgstat_index(Relation rel, BlockNumber start, pgstat_page pagefn,
 	pgstattuple_type stat = {0};
 
 	/* prepare access strategy for this index */
-	bstrategy = GetAccessStrategy(BAS_BULKREAD);
+	bstrategy = GetAccessStrategy(BAS_BULKREAD, -1);
 
 	blkno = start;
 	for (;;)
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 7eb79cee58..6a3efe3d2d 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -280,7 +280,7 @@ initscan(HeapScanDesc scan, ScanKey key, bool keep_startblock)
 	{
 		/* During a rescan, keep the previous strategy object. */
 		if (scan->rs_strategy == NULL)
-			scan->rs_strategy = GetAccessStrategy(BAS_BULKREAD);
+			scan->rs_strategy = GetAccessStrategy(BAS_BULKREAD, -1);
 	}
 	else
 	{
@@ -1772,7 +1772,7 @@ GetBulkInsertState(void)
 	BulkInsertState bistate;
 
 	bistate = (BulkInsertState) palloc(sizeof(BulkInsertStateData));
-	bistate->strategy = GetAccessStrategy(BAS_BULKWRITE);
+	bistate->strategy = GetAccessStrategy(BAS_BULKWRITE, -1);
 	bistate->current_buf = InvalidBuffer;
 	return bistate;
 }
diff --git a/src/backend/commands/dbcommands.c b/src/backend/commands/dbcommands.c
index a0259cc593..51f3d64de7 100644
--- a/src/backend/commands/dbcommands.c
+++ b/src/backend/commands/dbcommands.c
@@ -280,7 +280,7 @@ ScanSourceDatabasePgClass(Oid tbid, Oid dbid, char *srcpath)
 	smgrclose(smgr);
 
 	/* Use a buffer access strategy since this is a bulk read operation. */
-	bstrategy = GetAccessStrategy(BAS_BULKREAD);
+	bstrategy = GetAccessStrategy(BAS_BULKREAD, -1);
 
 	/*
 	 * As explained in the function header comments, we need a snapshot that
diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index 78980cf19c..b3fded878b 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -126,6 +126,10 @@ ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)
 	/* By default parallel vacuum is enabled */
 	params.nworkers = 0;
 
+	/* default value of buffers buffer access strategy */
+	params.buffers = -1;
+
+
 	/* Parse options list */
 	foreach(lc, vacstmt->options)
 	{
@@ -207,6 +211,45 @@ ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)
 			skip_database_stats = defGetBoolean(opt);
 		else if (strcmp(opt->defname, "only_database_stats") == 0)
 			only_database_stats = defGetBoolean(opt);
+		else if (strcmp(opt->defname, "buffer_usage_limit") == 0)
+		{
+			// TODO: default_ring_size calculated here and GetAccessStrategy() == bad
+			int			vac_buffers;
+			int			default_ring_size = 256 * 1024 / BLCKSZ;
+			int			max_ring_size = NBuffers / 8;
+
+			if (opt->arg == NULL)
+			{
+				ereport(ERROR,
+						(errcode(ERRCODE_SYNTAX_ERROR),
+						 errmsg("buffer_usage_limit option requires an integer value"),
+						 parser_errposition(pstate, opt->location)));
+			}
+
+			vac_buffers = defGetInt64(opt);
+
+			/* out-of-bounds cases */
+			if (vac_buffers < -1 || vac_buffers > max_ring_size)
+			{
+				ereport(ERROR,
+						(errcode(ERRCODE_SYNTAX_ERROR),
+						 errmsg("buffer_usage_limit for a vacuum must be between -1 and %d",
+								max_ring_size),
+						 parser_errposition(pstate, opt->location)));
+			}
+
+			if (vac_buffers == -1)
+				vac_buffers = default_ring_size;
+			else if (vac_buffers < default_ring_size)
+			{
+				elog(WARNING, "buffer_usage_limit %d is below the minimum buffer_usage_limit of %d. setting it to %d",
+					 vac_buffers, default_ring_size, default_ring_size);
+
+				vac_buffers = default_ring_size;
+			}
+
+			params.buffers = vac_buffers;
+		}
 		else
 			ereport(ERROR,
 					(errcode(ERRCODE_SYNTAX_ERROR),
@@ -393,8 +436,7 @@ vacuum(List *relations, VacuumParams *params,
 	if (bstrategy == NULL)
 	{
 		MemoryContext old_context = MemoryContextSwitchTo(vac_context);
-
-		bstrategy = GetAccessStrategy(BAS_VACUUM);
+		bstrategy = GetAccessStrategy(BAS_VACUUM, params->buffers);
 		MemoryContextSwitchTo(old_context);
 	}
 
@@ -2066,7 +2108,11 @@ vacuum_rel(Oid relid, RangeVar *relation, VacuumParams *params, bool skip_privs,
 		cluster_rel(relid, InvalidOid, &cluster_params);
 	}
 	else
+	{
+		if (params->buffers == 0)
+			bstrategy = NULL;
 		table_relation_vacuum(rel, params, bstrategy);
+	}
 
 	/* Roll back any GUC changes executed by index functions */
 	AtEOXact_GUC(false, save_nestlevel);
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index bcd40c80a1..2c8380bd2e 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -1012,8 +1012,12 @@ parallel_vacuum_main(dsm_segment *seg, shm_toc *toc)
 	pvs.indname = NULL;
 	pvs.status = PARALLEL_INDVAC_STATUS_INITIAL;
 
-	/* Each parallel VACUUM worker gets its own access strategy */
-	pvs.bstrategy = GetAccessStrategy(BAS_VACUUM);
+	/*
+	 * Each parallel VACUUM worker gets its own access strategy.
+	 * For now, use the default buffer access strategy ring size.
+	 * TODO: should this work differently even though they are only for indexes
+	 */
+	pvs.bstrategy = GetAccessStrategy(BAS_VACUUM, -1);
 
 	/* Setup error traceback support for ereport() */
 	errcallback.callback = parallel_vacuum_error_callback;
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index 673ff99767..3d88d72973 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -2291,8 +2291,11 @@ do_autovacuum(void)
 	 * Create a buffer access strategy object for VACUUM to use. We want to
 	 * use the same one across all the vacuum operations we perform, since the
 	 * point is for VACUUM not to blow out the shared cache.
+	 * If we later enter failsafe mode, we will cease use of the
+	 * BufferAccessStrategy. Either way, we clean up the BufferAccessStrategy
+	 * object at the end of this function.
 	 */
-	bstrategy = GetAccessStrategy(BAS_VACUUM);
+	bstrategy = GetAccessStrategy(BAS_VACUUM, -1);
 
 	/*
 	 * create a memory context to act as fake PortalContext, so that the
diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c
index 98904a7c05..62ef088306 100644
--- a/src/backend/storage/buffer/bufmgr.c
+++ b/src/backend/storage/buffer/bufmgr.c
@@ -3818,8 +3818,8 @@ RelationCopyStorageUsingBuffer(RelFileLocator srclocator,
 					   buf.data, true);
 
 	/* This is a bulk operation, so use buffer access strategies. */
-	bstrategy_src = GetAccessStrategy(BAS_BULKREAD);
-	bstrategy_dst = GetAccessStrategy(BAS_BULKWRITE);
+	bstrategy_src = GetAccessStrategy(BAS_BULKREAD, -1);
+	bstrategy_dst = GetAccessStrategy(BAS_BULKWRITE, -1);
 
 	/* Iterate over each block of the source relation file. */
 	for (blkno = 0; blkno < nblocks; blkno++)
diff --git a/src/backend/storage/buffer/freelist.c b/src/backend/storage/buffer/freelist.c
index c690d5f15f..24d21cc4f1 100644
--- a/src/backend/storage/buffer/freelist.c
+++ b/src/backend/storage/buffer/freelist.c
@@ -531,23 +531,61 @@ StrategyInitialize(bool init)
  * ----------------------------------------------------------------
 */
 
+static const char *
+btype_get_name(BufferAccessStrategyType btype)
+{
+	switch (btype)
+	{
+		case BAS_NORMAL:
+			return "normal";
+		case BAS_BULKREAD:
+			return "bulkread";
+		case BAS_BULKWRITE:
+			return "bulkwrite";
+		case BAS_VACUUM:
+			return "vacuum";
+	}
+
+	elog(ERROR, "unrecognized BufferAccessStrategy: %d", btype);
+	pg_unreachable();
+}
+
+
 /*
  * GetAccessStrategy -- create a BufferAccessStrategy object
  *
  * The object is allocated in the current memory context.
 */
+// TODO: make a macro or something for nbuffers = -1
 BufferAccessStrategy
-GetAccessStrategy(BufferAccessStrategyType btype)
+GetAccessStrategy(BufferAccessStrategyType btype, int buffers)
 {
 	BufferAccessStrategy strategy;
 	int			ring_size;
 
+	const char *strategy_name = btype_get_name(btype);
+
+	if (btype != BAS_VACUUM)
+	{
+		if (buffers == 0)
+			elog(ERROR, "Use of shared buffers unsupported for buffer access strategy: %s. nbuffers must be -1.",
+				 strategy_name);
+
+		if (buffers > 0)
+			elog(ERROR, "Specification of ring size in buffers unsupported for buffer access strategy: %s. nbuffers must be -1.",
+				 strategy_name);
+	}
+
+	// TODO: DEBUG logging message for dev?
+	if (buffers == 0)
+		btype = BAS_NORMAL;
+
 	/*
 	 * Select ring size to use. See buffer/README for rationales.
 	 *
 	 * Note: if you change the ring size for BAS_BULKREAD, see also
 	 * SYNC_SCAN_REPORT_INTERVAL in access/heap/syncscan.c.
+	 * TODO: update README
 	 */
 	switch (btype)
 	{
@@ -562,16 +600,26 @@ GetAccessStrategy(BufferAccessStrategyType btype)
 			ring_size = 16 * 1024 * 1024 / BLCKSZ;
 			break;
 		case BAS_VACUUM:
-			ring_size = 256 * 1024 / BLCKSZ;
-			break;
+			{
+				int			default_ring_size = 256 * 1024 / BLCKSZ;
+
+				if (buffers == -1)
+					ring_size = default_ring_size;
+				else
+					ring_size = Max(buffers, default_ring_size);
+				// TODO: log if we end up setting ring_size bigger than nbuffers because it would have been smaller than the default
+				break;
+			}
 
+			/* unreachable as long as we get the string representation of the type above */
 		default:
 			elog(ERROR, "unrecognized buffer access strategy: %d",
-				 (int) btype);
-			return NULL;		/* keep compiler quiet */
+				 (int) btype);
+
+			pg_unreachable();
 	}
 
 	/* Make sure ring isn't an undue fraction of shared buffers */
+	// TODO: log message that tells you if your specified number of buffers is clamped
 	ring_size = Min(NBuffers / 8, ring_size);
 
 	/* Allocate the object and initialize all elements to zeroes */
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index 689dbb7702..341f1e2525 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -235,6 +235,7 @@ typedef struct VacuumParams
 								 * disabled.
 								 */
 	int			nworkers;
+	int			buffers;
 } VacuumParams;
 
 /*
diff --git a/src/include/storage/bufmgr.h b/src/include/storage/bufmgr.h
index b8a18b8081..2157200973 100644
--- a/src/include/storage/bufmgr.h
+++ b/src/include/storage/bufmgr.h
@@ -195,7 +195,7 @@ extern Size BufferShmemSize(void);
 extern void AtProcExit_LocalBuffers(void);
 
 /* in freelist.c */
-extern BufferAccessStrategy GetAccessStrategy(BufferAccessStrategyType btype);
+extern BufferAccessStrategy GetAccessStrategy(BufferAccessStrategyType btype, int nbuffers);
 extern void FreeAccessStrategy(BufferAccessStrategy strategy);
 
-- 
2.37.2
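As a closing editorial note, the ring behavior that the freelist.c changes above parameterize can be illustrated standalone. This sketch is not PostgreSQL code; the types and names are invented, and it only shows the round-robin reuse of a fixed set of slots that keeps a scan from flooding the whole buffer cache:

```c
#include <stdlib.h>

/*
 * Standalone illustration of a buffer ring: a fixed-size array of slots
 * is cycled round-robin, so the caller keeps reusing the same ring_size
 * buffers instead of claiming ever more of the shared cache.
 */
typedef struct
{
	int			ring_size;
	int			current;		/* index of most recently used slot */
	int		   *buffers;		/* cached buffer ids; 0 means empty slot */
} Ring;

static Ring *
ring_create(int ring_size)
{
	Ring	   *r = malloc(sizeof(Ring));

	r->ring_size = ring_size;
	r->current = -1;
	r->buffers = calloc(ring_size, sizeof(int));
	return r;
}

/* Advance to the next slot and return the buffer id cached there (0 if none) */
static int
ring_next(Ring *r)
{
	if (++r->current >= r->ring_size)
		r->current = 0;			/* wrap around: reuse the oldest slot */
	return r->buffers[r->current];
}
```

With this framing, BUFFER_USAGE_LIMIT is simply the ring_size above, and 0 corresponds to not creating a ring at all.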