On Fri, Jun 3, 2016 at 12:39 PM, Andres Freund <and...@anarazel.de> wrote:
> On 2016-06-03 12:31:58 -0400, Robert Haas wrote:
>> Now, what varies IME is how much total RAM there is in the system and
>> how frequently they write that data, as opposed to reading it.  If
>> they are on a tightly RAM-constrained system, then this situation
>> won't arise because they won't be under the dirty background limit.
>> And if they aren't writing that much data then they'll be fine too.
>> But even putting all of that together I really don't see why you're
>> trying to suggest that this is some bizarre set of circumstances that
>> should only rarely happen in the real world.
> I'm saying that if that happens constantly, you're better off adjusting
> shared_buffers, because you're likely already suffering from latency
> spikes and other issues. Optimizing for massive random write throughput
> in a system that's not configured appropriately, at the cost of making
> well-configured systems suffer, doesn't seem like a good tradeoff to me.

I really don't get it.  There's nothing in any set of guidelines for
setting shared_buffers that I've ever seen which would cause people to
avoid this scenario.  You're the first person I've ever heard describe
this as a misconfiguration.
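
For reference, the Linux dirty-writeback thresholds being discussed can be
inspected directly via /proc. This is just a sketch for readers following
along; the values vary by distribution and kernel, and assume a Linux system
with /proc mounted:

```shell
# Background writeback kicks in once dirty pages exceed this % of RAM:
cat /proc/sys/vm/dirty_background_ratio
# Writers start blocking once dirty pages exceed this %:
cat /proc/sys/vm/dirty_ratio
# Absolute-byte variants (0 means the ratio forms above are in effect):
cat /proc/sys/vm/dirty_background_bytes
cat /proc/sys/vm/dirty_bytes
```

A workload whose dirty data stays under dirty_background_ratio sees no
background writeback at all, which is the "they won't be under the dirty
background limit" case above.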

> Note that other operating systems like Windows and FreeBSD *already*
> write back much more aggressively (independent of this change). I seem
> to recall you yourself arguing quite passionately that the Linux
> behaviour around this is broken.

Sure, but being unhappy about the Linux behavior doesn't mean that I
want our TPS on Linux to go down.  Whether I like the behavior or not,
we have to live with it.

Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)