On 09/29/2014 11:41 PM, Andres Freund wrote:
> On 2014-09-29 16:35:12 -0400, Tom Lane wrote:
>> Andres Freund <and...@2ndquadrant.com> writes:
>>> On 2014-09-29 16:16:38 -0400, Tom Lane wrote:
>>>> I wonder why it's a fixed constant at all, and not something like
>>>> "wal_buffers / 8".
>>>
>>> Because that'd be horrible performance-wise on a system with many
>>> wal_buffers. There are several operations where all the locks are
>>> checked in sequence (to see whether there are any stragglers that
>>> need to finish inserting), and even some where they are all acquired
>>> concurrently (e.g. for xlog switch, checkpoint and such).

>> Hm.  Well, if there are countervailing considerations as to how large
>> a good value is, that makes it even less likely that it's sensible to
>> expose it as a user tunable.
>
> Aren't there such considerations for most of the performance-critical
>
>> A relevant analogy is that we don't expose a way
>> to adjust the number of lock table partitions at runtime.
>
> Which has worked out badly for e.g. the number of buffer partitions...

The number of buffer partitions is the analogy I also had in mind, and it has worked out pretty well, IMHO. Sure, on a system with many CPUs you sometimes want to increase the hardwired default, but most configurations are not too sensitive to it.

No-one has stepped up to do any testing of the effects of the GUC during the beta period, even though it has been there all along. Somehow I don't think anyone is going to do any rigorous tuning of it before commissioning a system into production, either.

This might come up again in the 9.5 cycle, once we get the improvements in LWLock and buffer cache scalability; those could make the scalability of WAL insertion more visible again.

There seems to be no decisive consensus here. I'm going to put my foot down and remove it, as I'm leaning towards that option and we need to get the release out. But if someone objects loudly enough to actually write the documentation and commit it that way, I'm happy with that too.

- Heikki

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)