On Fri, Jan 30, 2015 at 3:58 AM, Heikki Linnakangas
<hlinnakan...@vmware.com> wrote:
>> During my tests, I did not observe the significance of the
>> min_recycle_wal_size parameter yet. Of course, I had sufficient disk
>> space for pg_xlog. I would like to understand more about the
>> "min_recycle_wal_size" parameter. In theory, I only understand from
>> the note in the patch that if the disk space usage falls below a
>> certain threshold, min_recycle_wal_size number of WALs will be
>> removed to accommodate future pg_xlog segments. I will try to test
>> this out. Please let me know if there is any specific test to
>> understand min_recycle_wal_size behaviour.
> min_recycle_wal_size comes into play when you have only light load, so that
> checkpoints are triggered by checkpoint_timeout rather than
> checkpoint_wal_size. In that scenario, the WAL usage will shrink down to
> min_recycle_wal_size, but not below that. Did that explanation help? Can you
> suggest changes to the docs to make it more clear?
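To restate Heikki's explanation in pseudocode form: this is a simplified model of my reading of the patch's retention behavior, not the actual implementation, and the function name and MB-based arguments are my own invention for illustration.

```python
# Simplified model (my reading of the patch, not the actual code):
# the amount of WAL kept after a checkpoint tracks recent usage, but
# never shrinks below min_recycle_wal_size and never exceeds the
# checkpoint_wal_size ceiling.

def wal_kept_after_checkpoint(recent_wal_usage_mb,
                              min_recycle_wal_size_mb,
                              checkpoint_wal_size_mb):
    """Return the amount of WAL (in MB) retained for recycling."""
    # Never keep more than the configured maximum...
    kept = min(recent_wal_usage_mb, checkpoint_wal_size_mb)
    # ...and never shrink below the configured minimum.
    return max(kept, min_recycle_wal_size_mb)

# Light load: checkpoints are timeout-driven, little WAL is generated,
# so retention bottoms out at min_recycle_wal_size.
print(wal_kept_after_checkpoint(10, 80, 1600))    # 80

# Heavy load: retention tracks actual usage, up to the maximum.
print(wal_kept_after_checkpoint(2000, 80, 1600))  # 1600
```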

First, as a general comment, I think we could do little that would
improve the experience of tuning PostgreSQL as much as getting this
patch committed with some reasonable default values for the settings
in question.  Shipping with checkpoint_segments=3 is a huge obstacle
to good performance.  It might be a reasonable value for
min_recycle_wal_size, but it's not a remotely reasonable upper bound
on WAL generated between checkpoints.  We haven't increased that limit
even once in the 14 years we've had it (cf.
4d14fe0048cf80052a3ba2053560f8aab1bb1b22) and typical disk sizes have
grown by an order of magnitude since then.

Second, I *think* that these settings are symmetric and, if that's
right, then I suggest that they ought to be named symmetrically.
Basically, I think you've got min_checkpoint_segments (the number of
recycled segments we keep around always) and max_checkpoint_segments
(the maximum number of segments we can have between checkpoints),
essentially splitting the current role of checkpoint_segments in half.
I'd go so far as to suggest we use exactly that naming.  It would be
reasonable to allow the value to be specified in MB rather than in
16MB units, and to specify it that way by default, but maybe a
unit-less value should have the old interpretation since everybody's
used to it.  That would require adding GUC_UNIT_XSEG or similar, but
that seems OK.
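Concretely, the proposal above might look like this in postgresql.conf. To be clear, these GUC names don't exist yet (they are the names I'm proposing), and the sizes are just illustrative:

```ini
# Hypothetical postgresql.conf fragment using the proposed naming.

# Floor: recycled segments kept around even when the system is idle.
min_checkpoint_segments = 48MB     # i.e. 3 x 16MB segments

# Ceiling: maximum WAL generated between checkpoints.
max_checkpoint_segments = 1600MB   # i.e. 100 x 16MB segments

# A bare, unit-less value would keep the old interpretation,
# counting 16MB segments: max_checkpoint_segments = 100
```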

Also, I'd like to propose that we set the default value of
max_checkpoint_segments/checkpoint_wal_size to something at least an
order of magnitude larger than the current default setting.  I'll open
the bidding at 1600MB (aka 100 segments).  I expect some pushback
here, but I
don't think this is unreasonable; some people will need to raise it
further.  If you're generating 1600MB of WAL in 5 minutes, you're
either making the database bigger very quickly (in which case the
extra disk space that is consumed by the WAL will quickly blend into
the background) or you are updating the data already in the database
at a tremendous rate (in which case you are probably willing to burn
some disk space to have that go fast).  Right now, it's impractical to
ship something like checkpoint_segments=100 because we'd eat all that
space even on tiny databases with no activity.  But this patch fixes
that, so we might as well try to ship a default that's large enough to
use the database as something other than a toy.
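As a back-of-the-envelope check on that figure (illustrative only; it assumes the stock 16MB WAL segment size and the default 5-minute checkpoint_timeout):

```python
# Sanity-check the proposed 1600MB default.
SEGMENT_MB = 16                 # stock WAL segment size
proposed_default_segments = 100
proposed_default_mb = proposed_default_segments * SEGMENT_MB
print(proposed_default_mb)      # 1600

# Sustained WAL rate needed to fill that before a 5-minute
# checkpoint_timeout fires -- i.e. to make the limit matter at all:
checkpoint_timeout_s = 5 * 60
rate_mb_per_s = proposed_default_mb / checkpoint_timeout_s
print(round(rate_mb_per_s, 1))  # 5.3 MB/s -- a genuinely busy server
```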

Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
