On Wed, Jul 24, 2024 at 8:19 AM John Naylor <johncnaylo...@gmail.com> wrote:
>
> On Wed, Jul 24, 2024 at 2:42 PM John Naylor <johncnaylo...@gmail.com> wrote:
> > As for lowering the limit, we've experimented with 256kB here:
> >
> > https://www.postgresql.org/message-id/canwcazzutvz3lsypauyqvzcezxz7qe+9ntnhgyzdtwxpul+...@mail.gmail.com
> >
> > As I mention there, going lower than that would need a small amount of
> > reorganization in the radix tree. Not difficult -- the thing I'm
> > concerned about is that we'd likely need to document a separate
> > minimum for DSA, since that behaves strangely with 256kB and might not
> > work at all lower than that.
>
> For experimentation, here's a rough patch (really two, squashed
> together for now) that allows m_w_m to go down to 64kB.
Oh, great, thanks! I didn't read this closely enough before I posed my
upthread question about how small we should make the minimum. It sounds
like you've thought a lot about this.

I ran my test with your patch (on my 64-bit system, non-assert build)
and the result is great:

master with my test (slightly modified to now use DELETE instead of
UPDATE as mentioned upthread)
    3.09s

master with your patch applied, MWM set to 64kB and 9000 rows instead
of 800000
    1.06s

> drop table if exists test;
> create table test (a int) with (autovacuum_enabled=false, fillfactor=10);
> insert into test (a) select i from generate_series(1,2000) i;
> create index on test (a);
> update test set a = a + 1;
>
> set maintenance_work_mem = '64kB';
> vacuum (verbose) test;
>
> INFO: vacuuming "john.public.test"
> INFO: finished vacuuming "john.public.test": index scans: 3
> pages: 0 removed, 91 remain, 91 scanned (100.00% of total)
>
> The advantage with this is that we don't need to care about
> MEMORY_CONTEXT_CHECKING or 32/64 bit-ness, since allocating a single
> large node will immediately blow the limit, and that will happen
> fairly quickly regardless. I suspect going this low will not work with
> dynamic shared memory and if so would need a warning comment.

I took a look at the patch, but I can't say I know enough about the
memory allocation subsystems and how TIDStore works to meaningfully
review it -- nor enough about DSM to comment about the interactions.

I suspect 256kB would also be fast enough to avoid my test timing out
on the buildfarm, but it is appealing to have a minimum for
maintenance_work_mem that is the same as work_mem.

- Melanie
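
For reference, a rough sketch of what the DELETE-based variant of the
test could look like, adapting John's quoted script above. The 9000-row
count comes from the description; the table definition and exact
statements are assumptions, and the real modified test is upthread.

drop table if exists test;
create table test (a int) with (autovacuum_enabled=false, fillfactor=10);
-- assumption: same table shape as the quoted test, scaled to 9000 rows
insert into test (a) select i from generate_series(1,9000) i;
create index on test (a);
-- DELETE instead of UPDATE, as described above, so every heap page has
-- dead tuples for vacuum to collect
delete from test;

set maintenance_work_mem = '64kB';
vacuum (verbose) test;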