Hi,

On 2025-11-23 19:03:44 -0500, Peter Geoghegan wrote:
> On Fri, Nov 21, 2025 at 6:31 PM Andres Freund <[email protected]> wrote:
> > On 2025-11-21 18:14:56 -0500, Peter Geoghegan wrote:
> > > On Fri, Nov 21, 2025 at 5:38 PM Andres Freund <[email protected]> wrote:
> > > > Another benefit is that it helps even more when there are multiple
> > > > queries running concurrently - the high rate of lock/unlock on the
> > > > buffer rather badly hurts scalability.
> > >
> > > I haven't noticed that effect myself. In fact, it seemed to be the
> > > other way around; it looked like it helped most with very low client
> > > count workloads.
> >
> > It's possible that that effect is more visible on larger machines - I did
> > test that on a 2x 24cores/48 threads machine. I do see a smaller effect
> > on a 2x10c/20t machine.
>
> Update: I find that when I build Postgres with -march=native, I see
> performance characteristics that are much more in line with what you
> saw when you ran your own experiments (experiments with minimizing the
> number of heap buffer locks acquired during index scans).
Huh. I wouldn't have expected -march=native to make a huge difference...


> Are you in the habit of using -march=native? I'm not.

I occasionally use it, but not regularly - I do however use -O3, as I found
that to actually improve performance sufficiently in plenty of cases. And it's
something that's much more generally applicable than -march=native.

I don't think the precise gains here, particularly based on quick prototypes,
make that much of a difference. There's so much more optimization potential
other than the amortization of locking costs...
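For concreteness, roughly how I'd pass those flags to an autoconf build from a
source tree - the flag combinations below are just illustrative, not a
recommendation:

  # plain -O3: applicable on basically any build host
  ./configure CFLAGS='-O3'

  # -O3 plus -march=native: ties the binary to the build host's ISA
  ./configure CFLAGS='-O3 -march=native'

I'd keep any -march=native build separate from binaries that might ever be run
on a different machine.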
Greetings,

Andres Freund