Hello! I have made a patch; with it, the autoprewarm_dump_now function can allocate more than 1 GB of memory, which is needed when shared_buffers is very large.
Best regards,
Daria Shanina

On Fri, Apr 4, 2025 at 19:36, Robert Haas <robertmh...@gmail.com> wrote:
> On Fri, Apr 4, 2025 at 12:17 PM Melanie Plageman
> <melanieplage...@gmail.com> wrote:
> > Unrelated to this problem, but I wondered why autoprewarm doesn't
> > launch background workers for each database simultaneously instead of
> > waiting for each one to finish a db before moving on to the next one.
> > Is it simply to limit the number of bgworkers taking up resources?
>
> That's probably part of it, but also (1) a system that allowed for
> multiple workers would be somewhat more complex to implement and (2)
> I'm not sure how beneficial it would be. We go to some trouble to make
> the I/O as sequential as possible, and this would detract from that. I
> also don't know how long prewarming normally takes -- if it's fast
> enough already, then maybe this doesn't matter. But if somebody is
> having a problem with autoprewarm being slow and wants to implement a
> multi-worker system to make it faster, cool.
>
> --
> Robert Haas
> EDB: http://www.enterprisedb.com
From 7b8af18231b539378ec4fc432186b551c08a7774 Mon Sep 17 00:00:00 2001
From: Darya Shanina <d.shan...@postgrespro.ru>
Date: Tue, 22 Apr 2025 11:27:01 +0300
Subject: [PATCH v1] [PGPRO-9971] Allocate enough memory with huge
 shared_buffers.

Tags: commitfest_hotfix
---
 contrib/pg_prewarm/autoprewarm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/contrib/pg_prewarm/autoprewarm.c b/contrib/pg_prewarm/autoprewarm.c
index d061731706a..ea5f7bc49c9 100644
--- a/contrib/pg_prewarm/autoprewarm.c
+++ b/contrib/pg_prewarm/autoprewarm.c
@@ -598,7 +598,7 @@ apw_dump_now(bool is_bgworker, bool dump_unlogged)
 	}
 
 	block_info_array =
-		(BlockInfoRecord *) palloc(sizeof(BlockInfoRecord) * NBuffers);
+		(BlockInfoRecord *) palloc_extended((sizeof(BlockInfoRecord) * NBuffers), MCXT_ALLOC_HUGE);
 
 	for (num_blocks = 0, i = 0; i < NBuffers; i++)
 	{
-- 
2.43.0