On 02/02/2015 04:21 PM, Andres Freund wrote:
> Hi,
>
> On 2015-02-02 08:36:41 -0500, Robert Haas wrote:
>> Also, I'd like to propose that we set the default value of
>> max_checkpoint_segments/checkpoint_wal_size to something at least an
>> order of magnitude larger than the current default setting.
>
> +1

I don't agree with that principle. I wouldn't mind increasing it a little bit, but not by an order of magnitude. For better or worse, *all* our defaults are tuned toward small systems, so that PostgreSQL doesn't hog all the resources. We shouldn't make an exception for this.
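
For reference, a minimal sketch of where the defaults stand today (stock 9.4), i.e. the small-system baseline under discussion; checkpoint_wal_size is still only a proposed name here, not an existing GUC:

    -- Current defaults, easy to check from psql:
    SHOW checkpoint_segments;           -- 3: a checkpoint is forced after 3 x 16MB = 48MB of new WAL
    SHOW checkpoint_timeout;            -- 5min
    SHOW checkpoint_completion_target;  -- 0.5
    -- An "order of magnitude larger" default would put the proposed
    -- checkpoint_wal_size somewhere in the hundreds of megabytes to low
    -- gigabytes, rather than tens of megabytes.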

> I think we need to increase checkpoint_timeout too - that's actually
> just as important for the default experience from my pov. 5 minutes
> often just unnecessarily generates FPWs en masse.
>
>> I'll open the bidding at 1600MB (aka 100).
>
> Fine with me.
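
To put rough numbers on the FPW point (back-of-the-envelope only, with a made-up workload size): after each checkpoint, the first change to any given 8kB page is logged as a full page image, so a workload that keeps, say, 1000 pages hot pays that cost once per checkpoint cycle:

    1000 hot pages x 8kB     =~ 8MB of full-page images per checkpoint cycle
    checkpoint_timeout 5min  -> 12 cycles/hour -> ~96MB/hour of FPW overhead
    checkpoint_timeout 30min ->  2 cycles/hour -> ~16MB/hour of FPW overhead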

I wouldn't object to raising it a little bit, but that's way too high. It's entirely possible to have a small database that generates a lot of WAL: a table with only a few rows that is updated very, very frequently, for example. And checkpointing such a database is quick, too, so frequent checkpoints are not a problem. You don't want to end up with 1.5 GB of WAL on a 100 MB database.
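
To make that scenario concrete, here is a hypothetical session (table name and sizes made up; xlog function names as of 9.4) in which a database of a few hundred kB happily generates WAL much larger than itself (for comparison, Robert's 1600MB bid is 100 segments of 16MB):

    -- A tiny but very hot table: a handful of rows, updated constantly.
    CREATE TABLE counters (id int PRIMARY KEY, n bigint NOT NULL DEFAULT 0);
    INSERT INTO counters SELECT g, 0 FROM generate_series(1, 10) g;

    -- Note the WAL insert position, run the UPDATE below in a tight loop
    -- for a while (e.g. from a custom pgbench script), then note it again:
    SELECT pg_current_xlog_insert_location();

    UPDATE counters SET n = n + 1 WHERE id = 1 + (random() * 9)::int;

    SELECT pg_current_xlog_insert_location();

    -- pg_xlog_location_diff() over the two positions gives the WAL volume
    -- generated, while the table itself stays tiny:
    SELECT pg_size_pretty(pg_relation_size('counters'));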

- Heikki



