On Sat, 18 Apr 2020 at 03:30, Thomas Munro <thomas.mu...@gmail.com> wrote:
>
> On Sat, Apr 18, 2020 at 2:00 AM Ants Aasma <a...@cybertec.at> wrote:
> > On Thu, 16 Apr 2020 at 10:33, Pavel Stehule <pavel.steh...@gmail.com> wrote:
> > > what I know, pgbench cannot be used for testing spinlocks problems.
> > >
> > > Maybe you can see this issue when a) use higher number clients -
> > > hundreds, thousands. Decrease share memory, so there will be press on
> > > related spin lock.
> >
> > There really aren't many spinlocks left that could be tickled by a
> > normal workload. I looked for a way to trigger spinlock contention
> > when I prototyped a patch to replace spinlocks with futexes. The only
> > one that I could figure out a way to make contended was the lock
> > protecting parallel btree scan. A highly parallel index only scan on a
> > fully cached index should create at least some spinlock contention.
>
> I suspect the snapshot-too-old "mutex_threshold" spinlock can become
> contended under workloads that generate a high rate of
> heap_page_prune_opt() calls with old_snapshot_threshold enabled. One
> way to do that is with a bunch of concurrent index scans that hit the
> heap in random order. Some notes about that:
>
> https://www.postgresql.org/message-id/flat/CA%2BhUKGKT8oTkp5jw_U4p0S-7UG9zsvtw_M47Y285bER6a2gD%2Bg%40mail.gmail.com
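For the first scenario quoted above (a highly parallel index-only scan on a fully cached index), a test harness might be sketched roughly as below. This is only an illustrative assumption of how one could set it up, not something tested in the thread; the database name, table name, row count, and tuning values are all hypothetical, and it needs a running PostgreSQL server:

```shell
# Hypothetical setup sketch: try to provoke contention on the spinlock
# protecting parallel btree scan state, per Ants' suggestion above.
createdb spintest
psql spintest -c "CREATE TABLE t (i int);
                  INSERT INTO t SELECT generate_series(1, 10000000);
                  CREATE INDEX ON t (i);
                  VACUUM ANALYZE t;"

# Per-client script: disable seq scans so the planner chooses a
# parallel index-only scan; worker count is an illustrative guess.
cat > ios.sql <<'EOF'
SET enable_seqscan = off;
SET max_parallel_workers_per_gather = 8;
SELECT count(i) FROM t;
EOF

# Drive it with many concurrent clients (counts are arbitrary).
pgbench -n -c 64 -j 8 -T 60 -f ios.sql spintest
```

For Thomas' second scenario, one would additionally set old_snapshot_threshold in postgresql.conf and use concurrent index scans touching the heap in random order, as described in the linked thread.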
Thanks all for the inputs. Will keep these two particular scenarios in
mind, and try to get some bandwidth on this soon.

-- 
Thanks,
-Amit Khandekar
Huawei Technologies