On 01/24/2018 12:48 PM, Stefan Petrea wrote:
> This email is structured in sections as follows:
> 1 - Estimating the size of pg_xlog depending on postgresql.conf parameters
> 2 - Cleaning up pg_xlog using a watchdog script
> 3 - Mailing list survey of related bugs
> 4 - Thoughts
> We're using PostgreSQL 9.6.6 on a Ubuntu 16.04.3 LTS.
> During some database imports(using pg_restore), we're noticing fast
> and unbounded growth of pg_xlog up to the point where the
> partition(280G in size for us) that stores it fills up and PostgreSQL
> shuts down. The error seen in the logs:
> 2018-01-17 01:46:23.035 CST  LOG: database system was shut down
> at 2018-01-16 15:49:26 CST
> 2018-01-17 01:46:23.038 CST  FATAL: could not write to file
> "pg_xlog/xlogtemp.41671": No space left on device
> 2018-01-17 01:46:23.039 CST  LOG: startup process (PID 41671)
> exited with exit code 1
> 2018-01-17 01:46:23.039 CST  LOG: aborting startup due to startup
> process failure
> 2018-01-17 01:46:23.078 CST  LOG: database system is shut down
> The config settings I thought were relevant are these ones (but I'm
> also attaching the entire postgresql.conf if there are other ones that
> I missed):
> archive_command='exit 0;'
> checkpoint_completion_target = 0.7
> wal_keep_segments = 8
Those are values from the config file, right? What values are currently
used by the processes? That is, when you do
SELECT * FROM pg_settings
what values does that show? Perhaps someone modified the config file and
forgot to reload it / restart the server?
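A quick way to check is to compare the values the running server actually uses with where they came from - a sketch, assuming a 9.6 server (the pending_restart column flags settings changed in the file but not yet applied):

```sql
-- settings as the running server sees them, and their origin
SELECT name, setting, unit, source, pending_restart
FROM pg_settings
WHERE name IN ('max_wal_size', 'wal_keep_segments',
               'checkpoint_completion_target',
               'archive_mode', 'archive_command', 'wal_level');
```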
BTW there's a mistake in the settings, it should be max_wal_size=2GB
(it's just a typo in the message, it's set correctly in the config).
Another thought is that the log file you provided is full of warnings
about checkpoints happening less than 30 seconds apart. That means you
need to bump the max_wal_size value up - a lot, perhaps to 16-32GB - to
make checkpoints less frequent. That is basic checkpoint tuning.
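You can see how many checkpoints were triggered by WAL volume rather than by timeout from pg_stat_bgwriter - roughly, if checkpoints_req dominates checkpoints_timed, max_wal_size is too small:

```sql
-- timed vs. requested (WAL-volume-triggered) checkpoints since stats reset
SELECT checkpoints_timed, checkpoints_req, stats_reset
FROM pg_stat_bgwriter;
```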
> So currently the pg_xlog is growing a lot, and there doesn't seem to
> be any way to stop it.
> There are some formulas I came across that allow one to compute the
> maximum number of WAL allowed in pg_xlog as a function of the
> PostgreSQL config parameters.
> 1.1) Method from 2012 found in 
> The formula for the upper bound for WAL files in pg_xlog is
> (2 + checkpoint_completion_target) * checkpoint_segments + 1
> which is
> ( (2 + 0.7) * (2048/16 * 1/3 ) ) + 1 ~ 116 WAL files
> I used the 1/3 because of  the shift from checkpoint_segments to
> max_wal_size in 9.5 , the relevant quote from the release notes being:
> If you previously adjusted checkpoint_segments, the following formula
> will give you an approximately equivalent setting:
> max_wal_size = (3 * checkpoint_segments) * 16MB
> Another way of computing it, also according to  is the following
> 2 * checkpoint_segments + wal_keep_segments + 1
> which is (2048/16) + 8 + 1 = 137 WAL files
> So far we have two answers, in practice none of them check out, since
> pg_xlog grows indefinitely.
> 1.2) Method from the PostgreSQL internals book
> The book  says the following:
> it could temporarily become up to "3 * checkpoint_segments + 1"
> Ok, let's compute this too, it's 3 * (128/3) + 1 = 129 WAL files
> This doesn't check out either.
I don't quite understand the logic in the first formula - why you first
divide by 3 and then multiply by 2.7. But that does not really matter:
the amount of WAL kept in pg_xlog should be about 2GB, give or take. If
you have much more WAL than that, the segments are being kept because
something is preventing their removal.
And if I understand it correctly, you have about ~200GB of them, right?
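One way to see how much WAL the last checkpoint actually pins - a sketch, using the 9.6 function names (these were renamed in PostgreSQL 10):

```sql
-- everything from redo_location onwards must be kept; the gap to the
-- current write position shows how much WAL has accumulated since
SELECT redo_location, checkpoint_location
FROM pg_control_checkpoint();

SELECT pg_current_xlog_location(),
       pg_xlog_location_diff(pg_current_xlog_location(),
                             (SELECT redo_location
                              FROM pg_control_checkpoint())) AS bytes_since_redo;
```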
> 1.3) On the mailing list  , I found similar formulas that were seen
> 1.4) The post at  says max_wal_size is as soft limit and also sets
> wal_keep_segments = 0 in order to enforce keeping as little WAL as
> possible around. Would this work?
Yes, max_wal_size is a soft limit, which means it can be temporarily
exceeded. But 2GB vs. 200GB is a hell of a difference, far beyond what
would be reasonable with max_wal_size=2GB.
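To put an exact number on it, you can count the segments from SQL as well (needs superuser; every 9.6 segment is 16MB, and segment file names are 24 hex digits):

```sql
-- number and total size of WAL segments currently in pg_xlog
SELECT count(*) AS segments,
       pg_size_pretty(count(*) * 16 * 1024 * 1024) AS total_size
FROM pg_ls_dir('pg_xlog') AS t(fname)
WHERE t.fname ~ '^[0-9A-F]{24}$';
```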
Regarding wal_keep_segments=0 - considering you currently have this set
to 8, which is a whopping 128MB, I very much doubt setting it to 0 will
make any difference. The segments are kept around for some other reason.
There are cases where wal_keep_segments is set to a high value (like
5000 or so), to allow replicas to temporarily fall behind without having
to set up a WAL archive. But that is not the case here. Honestly, I
doubt setting this to 8 makes practical sense ... that value is so low
it does not guarantee anything.
> Does wal_keep_segments = 0 turn off WAL recycling? Frankly, I would
> rather have WAL not be recycled/reused, and just deleted to keep
> pg_xlog below expected size.
No, it doesn't. Why would it disable that? It simply means the segments
may need to be kept around for longer before getting recycled.
> Another question is, does wal_level = replica affect the size of
> pg_xlog in any way? We have an archive_command that just exits with
> exit code 0, so I don't see any reason for the pg_xlog files to not be
> cleaned up.
No, it shouldn't. A replica should stream the WAL (if it's close to the
current position), or fetch the older WAL segments if it falls behind.
But if it falls behind too much, it may not be able to catch up as the
WAL segments get removed.
wal_keep_segments is a protection against that, but it won't keep
segments indefinitely, and you have it set to just 8. So this is not
the cause here.
Another option is that you created a replication slot. That is actually
the only thing I can think of that could cause this issue. Perhaps
there is a replication slot that is not used anymore and is preventing
removal of WAL segments? Because that's the whole point of replication
slots. What does
SELECT * FROM pg_replication_slots;
say on the master node?
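If a slot does show up there, the interesting columns are active and restart_lsn - everything after restart_lsn must be retained. A stale slot can then be dropped, which releases the WAL. A sketch (the slot name here is a placeholder - make very sure the slot really is unused before dropping it):

```sql
-- how much WAL is each slot holding back?
SELECT slot_name, active, restart_lsn,
       pg_xlog_location_diff(pg_current_xlog_location(),
                             restart_lsn) AS retained_bytes
FROM pg_replication_slots;

-- drop a slot that is definitely no longer needed
SELECT pg_drop_replication_slot('stale_slot');
```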
> 2) Cleaning up pg_xlog using a watchdog script
> To get the import done I wrote a script that's actually inspired from
> a blog post where the pg_xlog out of disk space problem is
> addressed . It periodically reads the last checkpoint's REDO WAL
> file, and deletes all WAL in pg_xlog before that one.
This makes no sense. The database should be able to remove unnecessary
WAL segments automatically. There's a pretty small chance you'll get it
right in an external script - i.e. not removing WAL segments that are
still needed.
Find the actual root cause and fix it. Don't invent custom scripts
messing with a critical part of the database.
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services