Re: Fw: [HACKERS] HACKERS[PATCH] split ProcArrayLock into multiple parts -- follow-up

2017-09-21 Thread Jim Van Fleet
> On 2017-09-21 15:51:54 -0500, Jim Van Fleet wrote:
> > Not to beat on a dead horse, or anything, but this fix was frowned upon
> > because in one environment (one socket) it was 6% down and over 15% up in
> > the right environment (two sockets).
> 
> > So, why not add a configuration parameter which specifies the number of
> > parts? Default is 1 which would be "exactly" the same as no parts and
> > hence no degradation in the single socket environment -- and with 2, you
> > get some positive performance.
> 
> Several reasons:
> 
> - You'd either add a bunch of branches into performance-critical
>   paths, or you'd add a compile-time flag, which most people would be
>   unable to toggle.
I agree -- no compile-time flags. But there need be no extra testing in the 
main path either -- the value gets set at init time and is not changed from 
there.
> - It'd be something hard to tune, because even on multi-socket machines
>   it'll be highly load-dependent. E.g. workloads that are largely
>   bottlenecked in a single backend / few backends will probably regress
>   as well.
Workloads are hard to tune -- with the default, you have what you have 
today. If you "know" the issue is ProcArrayLock, then you have an 
alternative to try.
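In PostgreSQL terms, a knob that "gets set at init and not changed from there" would be a PGC_POSTMASTER parameter: its value is read once at postmaster start, so the hot path sees a constant rather than a re-checked setting. A hypothetical entry for the ConfigureNamesInt[] table in guc.c might look like the following sketch -- the name `procarray_lock_parts`, the LOCK_MANAGEMENT group, and the 1..16 range are my inventions, not taken from the patch:

```c
/* Hypothetical entry in ConfigureNamesInt[] (src/backend/utils/misc/guc.c).
 * PGC_POSTMASTER means the value can only change with a server restart,
 * so no per-acquisition branch ever has to re-check it. */
{
    {"procarray_lock_parts", PGC_POSTMASTER, LOCK_MANAGEMENT,
        gettext_noop("Number of parts ProcArrayLock is split into."),
        NULL
    },
    &procarray_lock_parts,
    1, 1, 16,           /* boot value 1 = today's single-lock behavior */
    NULL, NULL, NULL
},
```

With the default of 1, every partition-selection computation yields the same lock, so a single-socket install would keep its current behavior.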
> 
> FWIW, you started a new thread with this message; that doesn't seem
> helpful.

Sorry about that -- my mistake.

Jim



Re: Fw: [HACKERS] HACKERS[PATCH] split ProcArrayLock into multiple parts -- follow-up

2017-09-21 Thread Andres Freund
On 2017-09-21 15:51:54 -0500, Jim Van Fleet wrote:
> Not to beat on a dead horse, or anything, but this fix was frowned upon 
> because in one environment (one socket) it was 6% down and over 15% up in 
> the right environment (two sockets).

> So, why not add a configuration parameter which specifies the number of 
> parts? Default is 1 which would be "exactly" the same as no parts and 
> hence no degradation in the single socket environment -- and with 2, you 
> get some positive performance.

Several reasons:

- You'd either add a bunch of branches into performance-critical
  paths, or you'd add a compile-time flag, which most people would be
  unable to toggle.
- It'd be something hard to tune, because even on multi-socket machines
  it'll be highly load-dependent. E.g. workloads that are largely
  bottlenecked in a single backend / few backends will probably regress
  as well.

FWIW, you started a new thread with this message; that doesn't seem
helpful.

Greetings,

Andres Freund


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Fw: [HACKERS] HACKERS[PATCH] split ProcArrayLock into multiple parts -- follow-up

2017-09-21 Thread Jim Van Fleet
Howdy --

Not to beat on a dead horse, or anything, but this fix was frowned upon 
because in one environment (one socket) it was 6% down and over 15% up in 
the right environment (two sockets).

So, why not add a configuration parameter which specifies the number of 
parts? Default is 1 which would be "exactly" the same as no parts and 
hence no degradation in the single socket environment -- and with 2, you 
get some positive performance.

Jim
- Forwarded by Jim Van Fleet/Austin/Contr/IBM on 09/21/2017 03:37 PM -

pgsql-hackers-ow...@postgresql.org wrote on 06/09/2017 01:39:35 PM:

> From: "Jim Van Fleet" <vanfl...@us.ibm.com>
> To: "Pgsql Hackers" <pgsql-hackers@postgresql.org>
> Date: 06/09/2017 01:41 PM
> Subject: [HACKERS] HACKERS[PATCH] split ProcArrayLock into multiple parts
> Sent by: pgsql-hackers-ow...@postgresql.org
> 
> I left out the retry in LWLockAcquire.
> 
> [attachment "ProcArrayLock_part.patch" deleted by Jim Van Fleet/Austin/Contr/IBM]




Re: [HACKERS] HACKERS[PATCH] split ProcArrayLock into multiple parts

2017-07-26 Thread Robert Haas
On Fri, Jun 9, 2017 at 2:39 PM, Jim Van Fleet  wrote:
> I left out the retry in LWLockAcquire.

If you want this to be considered, you should add it to the next
CommitFest, currently https://commitfest.postgresql.org/14/

However, I can't see this being acceptable in the current form:

1. I'm pretty sure that there will be no appetite for introducing
special cases for ProcArrayLock into lwlock.c directly; such logic
should be handled by the code that calls lwlock.c.  It's very
unpalatable to have LWLockConditionalAcquire() hardcoded to always fail
for ProcArrayLock, and it's also adding overhead for every caller that
reaches those "if" statements and has to branch or not.

2. Always using the group-clearing approach instead of only when the
lock is uncontended seems like it will lead to a loss of performance
in some situations.

3. This adds a good amount of complexity to the code but it's not
clearly better overall.  Your own results in
http://postgr.es/m/ofbab24999.8db8c8de-on86258136.006aeb24-86258136.006b3...@notes.na.collabserv.com
show that some workloads benefit and others are harmed.  I don't think
a 6% loss on single-socket machines is acceptable; there are still
plenty of those out there.

I don't think the idea of partitioning ProcArrayLock is necessarily
bad -- Heikki tried it before -- but I don't think it's necessarily
easy to work out all the kinks.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] HACKERS[PATCH] split ProcArrayLock into multiple parts

2017-06-09 Thread Jim Van Fleet
I left out the retry in LWLockAcquire.





ProcArrayLock_part.patch
Description: Binary data
