On Thu, Jun 2, 2022 at 5:37 PM David Rowley <dgrowle...@gmail.com> wrote:
> I had a quick look at that for the problem case and we're very close
> in terms of work_mem size to better performance. A work_mem of just
> 64.3MB brings the performance back to better than PG14.
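(For anyone following along, the cliff David describes can be probed with something like the sketch below. The table `t` and column `b` are hypothetical stand-ins; the idea is just to run the same sort on either side of the crossover point he reported and compare the plans.)

```sql
-- Hypothetical sketch: time the same sort just below and just above
-- the work_mem crossover. Below it, EXPLAIN ANALYZE should show
-- "Sort Method: external merge"; above it, "quicksort".
SET work_mem = '64MB';
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM t ORDER BY b;

SET work_mem = '65MB';
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM t ORDER BY b;
```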
This is one of the things that I find super-frustrating about work_mem
and sorting. I mean, we all know that work_mem is hard to tune because
it's per-node rather than per-query or per-backend, but on top of
that, sort performance doesn't change smoothly as you vary it. I've
seen really different work_mem settings produce only slightly
different performance, and here you have the opposite: only slightly
different work_mem settings produce significantly different
performance. It's not even the case that more memory is necessarily
better than less.

I have no idea what to do about this, and even if I did, it's too late
to redesign v15. But I somehow feel like the whole model is just
wrong. Sorting shouldn't use more memory unless it's actually going to
speed things up -- and not just any speed-up, but one that's
significant compared to the additional expenditure of memory. But the
fact that the sorting code just treats the memory budget as an input,
and is not adaptive in any way, seems pretty bad. It means we use up
all that memory even if a much smaller amount of memory would deliver
the same performance, or even better performance.

--
Robert Haas
EDB: http://www.enterprisedb.com