Hi,

On 12/23/2015 03:38 PM, Robert Haas wrote:

> I think one thing that this conversation exposes is that the size of
> the working set matters a lot. For example, if the workload is
> pgbench, you're going to see a relatively short FPW-related spike at
> scale factor 100, but at scale factor 3000 it's going to be longer
> and at some larger scale factor it will be longer still. Therefore
> you're probably right that 1.5 is unlikely to be optimal for
> everyone.

Right.

Also, when you say "pgbench" you probably mean the default uniform distribution. But we now also have Gaussian and exponential distributions, which might be handy for simulating other types of workloads.
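
For example, a skewed update workload can be simulated with a custom script, roughly like this (the script name and the Gaussian threshold 5.0 are just illustrative, not a recommendation):

    $ cat skewed.sql
    \set naccounts 100000 * :scale
    \setrandom aid 1 :naccounts gaussian 5.0
    \setrandom delta -5000 5000
    UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;

    $ pgbench -n -f skewed.sql -c 16 -T 300 pgbench

The larger the threshold, the more the accesses concentrate around the middle of the key range, so you can vary how big the "hot" part of the working set is.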


> Another point (which Jan Wieck made me think of) is that the optimal
> behavior here likely depends on whether xlog and data are on the same
> disk controller. If they aren't, the FPW spike and background writes
> may not interact as much.

I'm not sure what exactly you mean by "optimal behavior" here. Surely if you want to minimize interference between WAL and regular I/O, you'll put them on separate devices anyway.

But I don't see what that has to do with the writes generated by the checkpoint. If we issue many more writes at the beginning of the checkpoint (because the FPWs confuse the scheduling), and the OS then starts flushing them to disk because we exceed dirty_background_bytes (or dirty_bytes), that surely interferes with reads, which is a major issue for queries.
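
Just to illustrate which kernel thresholds I mean (the values below are purely illustrative, not tuning advice):

    # current writeback thresholds (0 means the *_ratio variants apply instead)
    $ sysctl vm.dirty_background_bytes vm.dirty_bytes vm.dirty_background_ratio vm.dirty_ratio

    # lower the background threshold so the kernel starts writeback sooner,
    # in smaller batches, instead of accumulating one big burst of dirty data
    $ sysctl -w vm.dirty_background_bytes=67108864

With large thresholds, the burst of FPW-heavy writes at the start of the checkpoint can sit in the page cache and then get flushed all at once, which is exactly when the reads start to suffer.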

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

