On Tue, Apr 5, 2016 at 8:15 PM, Andres Freund <and...@anarazel.de> wrote:
> On 2016-04-05 17:36:49 +0300, Alexander Korotkov wrote:
> > Could the reason be that we're increasing concurrency for LWLock state
> > atomic variable by placing queue spinlock there?
> Don't think so, it's the same cache-line either way.
> > But I wonder why this could happen during "pgbench -S", because it
> > seems to have high traffic of exclusive LWLocks.
> Yea, that confuses me too. I suspect there's some mis-aligned
> datastructures somewhere. It's hard to investigate such things without
> access to hardware.

This fluctuation started appearing after commit 6150a1b0, which we have
discussed in another thread [1].  A colleague of mine is working on a patch
to revert it on current HEAD so that we can then compare the results.

> (FWIW, I'm working on getting pinunpin committed)

Good to know, but I am slightly worried that it will make the problem
harder to detect, as it will reduce the reproducibility.  I understand that
we are running short of time and committing this patch is important, so we
should proceed with it, since the fluctuation is not a problem of this
patch.  After this patch gets committed, we will just need to revert it
locally while narrowing down the problem caused by commit 6150a1b0.

[1] -

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
