On Sun, 2020-04-05 at 16:48 -0700, Andres Freund wrote:
> > I still think we should do something for v13, such as the
> > originally-proposed patch[1]. It's not critical, but it simply
> > reports a better number for memory consumption. Currently, the
> > memory usage appears to jump, often right past work_mem (by a
> > reasonable but noticeable amount), which could be confusing.
>
> Is that really a significant issue for most work_mem sizes? Shouldn't
> the way we increase sizes lead to the max difference between the
> measurements to be somewhat limited?
For work_mem less than 16MB, it's essentially spilling when the table
context is about 75% of what it could be (as bad as 50% and as good as
100%, depending on a number of factors). That's not terrible, but it is
significant. It also means that the reported memory peak jumps up rather
than going up gradually, so it ends up surpassing work_mem (e.g. 4MB of
work_mem might show a memory peak of 5MB). So it's a weird combination
of under-utilizing and over-reporting.

> I'd spec it so that context implementations are allowed to
> unconditionally fill fields, even when the flag isn't specified. The
> branches quoted don't buy us anything...

Done.

> I'd add some reasoning as to why this is useful.

Done.

> s/MCXT_STAT/MCXT_STAT_NEED/?

I changed it to MCXT_COUNT_. MCXT_STAT_NEED seemed slightly verbose to
me.

> > +#define MCXT_STAT_ALL ((1 << 6) - 1)
>
> Hm, why not go for ~0 or such?

Done.

Regards,
	Jeff Davis