Thanks for all the help, Tom!
On 4/6/22, 6:07 PM, "Tom Lane" wrote:
"Blake, Geoff" writes:
> Hi Tom, Andres,
"Blake, Geoff" writes:
> Hi Tom, Andres,
> Any additional feedback for this patch?
I did some more research and testing:
* Using a Mac with the M1 Pro chip (marginally beefier than the M1
I was testing on before), I think I can see some benefit in the
test case I proposed upthread. It's
Hi Tom, Andres,
Any additional feedback for this patch?
Thanks,
Geoff Blake
As promised, here is the remaining data:
1 worker, w/o patch: 5236 ms +/- 252 ms
1 worker, w/ patch: 5529 ms +/- 168 ms
2 workers, w/o patch: 4917 ms +/- 180 ms
2 workers, w/ patch: 4745 ms +/- 169 ms
4 workers, w/o patch: 6564 ms +/- 336 ms
4 workers, w/ patch: 6105 ms +/- 177 ms
8 workers, w/o
Tom, Andres,
I spun up a 64-core Graviton2 instance (where I reported seeing improvement
with this patch) and ran the provided regression test with and without my
proposed patch on top of mainline PG. I did 4 runs each with 63 workers, where
we should see the most contention and the most impact from the
On 2022-01-06 22:23:38 -0500, Tom Lane wrote:
> No; there's just one spinlock. I'm re-purposing the spinlock that
> test_shm_mq uses to protect its setup operations (and thereafter
> ignores).
Oh, sorry, misread :(
> AFAICS the N+1 shm_mq instances don't internally contain
> spinlocks; they
Andres Freund writes:
> These separate shm_mq instances forward messages in a circle,
> "leader"->worker_1->worker_2->...->"leader". So there isn't a single contended
> spinlock, but a bunch of different spinlocks, each with at most two backends
> accessing it?
No; there's just one spinlock.
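For reference, the shared-memory header that test_shm_mq sets up looks roughly
like this (a sketch based on src/test/modules/test_shm_mq/test_shm_mq.h; treat
the field names as approximate):

    /* Shared state; the mutex ordinarily guards only the setup counters. */
    typedef struct
    {
        slock_t     mutex;          /* the one spinlock in question */
        int         workers_total;
        int         workers_attached;
        int         workers_ready;
    } test_shm_mq_header;

That single mutex is what the quick-hack patch grabs and releases once per
message, so every backend in the ring contends on the same lock.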
Hi,
On 2022-01-06 21:39:57 -0500, Tom Lane wrote:
> Andres Freund writes:
> > I wonder if this will show the full set of spinlock contention issues -
> > isn't this only causing contention for one spinlock between two processes?
>
> I don't think so -- the point of using the "pipelined"
Andres Freund writes:
>> I landed on the idea of adding some intentional spinlock
>> contention to src/test/modules/test_shm_mq, which is a prefab test
>> framework for passing data among multiple worker processes. The
>> attached quick-hack patch makes it grab and release a spinlock once
>> per
Hi,
> I landed on the idea of adding some intentional spinlock
> contention to src/test/modules/test_shm_mq, which is a prefab test
> framework for passing data among multiple worker processes. The
> attached quick-hack patch makes it grab and release a spinlock once
> per passed message.
I
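To make that concrete, the shape of the hack would be something like the
following (a sketch, not the actual posted patch; the function name
copy_messages_contended and the hdr argument are assumptions modeled on
test_shm_mq's worker code):

    #include "postgres.h"
    #include "storage/shm_mq.h"
    #include "storage/spin.h"

    /* Forward messages from inqh to outqh, touching the shared spinlock
     * once per message to generate cross-backend contention on purpose. */
    static void
    copy_messages_contended(shm_mq_handle *inqh, shm_mq_handle *outqh,
                            test_shm_mq_header *hdr)
    {
        for (;;)
        {
            Size        len;
            void       *data;
            shm_mq_result res;

            /* nowait = false, so this blocks until a message arrives */
            res = shm_mq_receive(inqh, &len, &data, false);
            if (res != SHM_MQ_SUCCESS)
                break;

            /* the quick hack: one acquire/release of the shared lock */
            SpinLockAcquire(&hdr->mutex);
            SpinLockRelease(&hdr->mutex);

            /* nowait = false, force_flush = true */
            res = shm_mq_send(outqh, len, data, false, true);
            if (res != SHM_MQ_SUCCESS)
                break;
        }
    }

With N+1 backends forwarding in a circle, each message now costs one
acquire/release of the same lock in every process it passes through.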
"Blake, Geoff" writes:
> Hope everything is well going into the new year. I'd like to pick this
> discussion back up and get your thoughts on the patch, given the data I
> posted 2 weeks prior. Is there more data that would be helpful? Different
> setup? Data on older versions of PostgreSQL to
Tom,
Hope everything is well going into the new year. I'd like to pick this
discussion back up and get your thoughts on the patch, given the data I posted
2 weeks prior. Is there more data that would be helpful? Different setup? Data
on older versions of PostgreSQL to ascertain if it makes more
Hi Tom,
> What did you test exactly?
Tested 3 benchmark configurations on an m6g.16xlarge (Graviton2, 64 CPUs,
256 GB RAM). I set the scale factor to consume about 1/3 of the 256 GB of RAM;
the other parameters are given on the next line.
pgbench setup: -F 90 -s 5622 -c 256
pgbench select-only w/ patch: 662804
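For anyone reproducing this, those parameters should translate into roughly
the following invocations (the database name, thread count, and run length
below are assumptions; only -F 90, -s 5622, and -c 256 come from the setup
described above):

    pgbench -i -s 5622 -F 90 bench
    pgbench -c 256 -j 64 -S -T 300 bench

where -i/-F initialize the tables with fillfactor 90 and -S selects the
built-in select-only script.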
"Blake, Geoff" writes:
> Have a tiny patch to add an implementation of spin_delay() for Arm64
> processors to match behavior with x86's PAUSE instruction. See negligible
> benefit on the pgbench tpcb-like workload, so at worst it appears to do no
> harm, but it should help some workloads that
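For context, the change under discussion is tiny; a sketch of the s_lock.h
addition (illustrative only -- the exact delay instruction, ISB here, is part
of what this thread is evaluating, so don't read this as the final form):

    /*
     * On Arm64, ISB (instruction synchronization barrier) stalls the
     * pipeline briefly, filling the role x86's PAUSE plays inside
     * spinlock retry loops.
     */
    #if defined(__aarch64__)
    #define SPIN_DELAY() spin_delay()

    static __inline__ void
    spin_delay(void)
    {
        __asm__ __volatile__(" isb; \n");
    }
    #endif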