This email is structured in sections as follows:

1 - Estimating the size of pg_xlog depending on postgresql.conf parameters
2 - Cleaning up pg_xlog using a watchdog script
3 - Mailing list survey of related bugs
4 - Thoughts

We're using PostgreSQL 9.6.6 on Ubuntu 16.04.3 LTS.
During some database imports (using pg_restore), we're noticing fast
and unbounded growth of pg_xlog, up to the point where the
partition that stores it (280G in our case) fills up and PostgreSQL
shuts down. The error seen in the logs:

    2018-01-17 01:46:23.035 CST [41671] LOG:  database system was shut down at 
2018-01-16 15:49:26 CST
    2018-01-17 01:46:23.038 CST [41671] FATAL:  could not write to file 
"pg_xlog/xlogtemp.41671": No space left on device
    2018-01-17 01:46:23.039 CST [41662] LOG:  startup process (PID 41671) 
exited with exit code 1
    2018-01-17 01:46:23.039 CST [41662] LOG:  aborting startup due to startup 
process failure
    2018-01-17 01:46:23.078 CST [41662] LOG:  database system is shut down

The config settings I thought were relevant are the following (but I'm
also attaching the entire postgresql.conf in case I missed any):

    archive_command='exit 0;'
    checkpoint_completion_target = 0.7
    wal_keep_segments = 8

So currently pg_xlog keeps growing, and there doesn't seem to be any
way to stop it.

There are some formulas I came across that allow one to compute the
maximum number of WAL segment files allowed in pg_xlog as a function
of the PostgreSQL config parameters.

1.1) Method from 2012 found in [2]

The formula for the upper bound on WAL files in pg_xlog is

    (2 + checkpoint_completion_target) * checkpoint_segments + 1

which is

    ( (2 + 0.7) * (2048/16 * 1/3) ) + 1 ~ 116 WAL files

I used the 1/3 factor because of [6], the shift from
checkpoint_segments to max_wal_size in 9.5; the relevant quote from
the release notes being:

    If you previously adjusted checkpoint_segments, the following formula
    will give you an approximately equivalent setting:
    max_wal_size = (3 * checkpoint_segments) * 16MB

Another way of computing it, also according to [2], is

    2 * checkpoint_segments + wal_keep_segments + 1

which is (2048/16) + 8 + 1 = 137 WAL files

So far we have two estimates; in practice neither checks out, since
pg_xlog grows indefinitely.

1.2) Method from the PostgreSQL internals book 

The book [4] says the following:

    it could temporarily become up to "3 * checkpoint_segments + 1"

OK, let's compute this too: 3 * (128/3) + 1 = 129 WAL files.

This doesn't check out either.
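Putting the three estimates side by side, here is a quick sanity check
using our values (max_wal_size = 2048MB, 16MB segments,
checkpoint_completion_target = 0.7, wal_keep_segments = 8, and the
checkpoint_segments conversion from [6]):

```shell
# Recompute the three upper-bound estimates from sections 1.1 and 1.2:
awk 'BEGIN {
    seg = 2048 / 16 / 3          # checkpoint_segments equivalent, ~42.7
    printf "1.1 first formula:  %.0f WAL files\n", (2 + 0.7) * seg + 1
    printf "1.1 second formula: %.0f WAL files\n", 2048/16 + 8 + 1
    printf "1.2 internals book: %.0f WAL files\n", 3 * seg + 1
}'
```

Even the largest estimate, 137 segments, is only about 2GB of WAL,
nowhere near the 280G partition that fills up.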

1.3) On the mailing list [3], I found similar formulas being discussed.

1.4) The post at [5] says max_wal_size is a soft limit, and also sets
wal_keep_segments = 0 in order to keep as little WAL around as
possible. Would this work?

Does wal_keep_segments = 0 turn off WAL recycling? Frankly, I would
rather have WAL segments deleted instead of recycled/reused, just to
keep pg_xlog below the expected size.

Another question is: does wal_level = replica affect the size of
pg_xlog in any way? We have an archive_command that just exits with
code 0, so I don't see any reason for the pg_xlog files not to be
cleaned up.
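For completeness, this is roughly how I verified those settings (a
sketch only; psql connection options are omitted and depend on the
setup):

```shell
# Small wrapper that skips quietly when psql isn't available/reachable:
check() { command -v psql >/dev/null || return 0; psql -At -c "$1" || true; }

check "SHOW wal_level;"
check "SHOW archive_command;"
# pg_stat_archiver (available since 9.4) shows whether archiving keeps up:
check "SELECT archived_count, failed_count FROM pg_stat_archiver;"
```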

2) Cleaning up pg_xlog using a watchdog script

To get the import done, I wrote a script inspired by a blog post that
addresses the pg_xlog out-of-disk-space problem [1]. It periodically
reads the last checkpoint's REDO WAL file and deletes all WAL in
pg_xlog older than that file.

The intended usage is for this script to run alongside the imports
in order for pg_xlog to be cleaned up gradually and prevent the disk
from filling up.

Unlike the blog post, and probably slightly wrong, I used
lexicographic ordering rather than ordering by date. I guess it worked
only because the checks were frequent enough that no WAL ever got
recycled. In retrospect, I should have used date ordering.
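In hindsight, the same idea can be expressed without manual ordering
at all, by leaning on pg_controldata and pg_archivecleanup (both ship
with 9.6). This is a hypothetical sketch, not the attached
cleanup-wal.sh:

```shell
#!/bin/sh
# Hypothetical watchdog sketch. Assumes pg_controldata and
# pg_archivecleanup are on PATH and PGDATA points at the data directory.
PGDATA=${PGDATA:-/var/lib/postgresql/9.6/main}

cleanup_once() {
    # "Latest checkpoint's REDO WAL file" names the oldest segment the
    # server still needs; everything older is removable.
    redo=$(pg_controldata "$PGDATA" | awk -F': *' '/REDO WAL file/ {print $2}')
    [ -n "$redo" ] || return 1
    # pg_archivecleanup deletes segments older (by WAL position) than $redo.
    pg_archivecleanup "$PGDATA/pg_xlog" "$redo"
}

# Run alongside the import, e.g.:  while sleep 60; do cleanup_once; done
```

pg_archivecleanup compares segment names by WAL position, which
sidesteps the lexicographic-vs-date question entirely.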

Does this script have the same effect as checkpoint_completion_target=0 ?

At the end of the day, this script seems to have allowed the import we
needed to get done, but I acknowledge it was a stop-gap measure and
not a long-term solution, hence my posting here to find a better one.

3) Mailing list survey of related bugs

On the mailing lists, in the past, there have been bugs around pg_xlog
growing out of control:

BUG 7902 [7] - Discusses a situation where WAL is produced faster than
checkpoints can be completed (written to disk), and therefore the WAL
files in pg_xlog cannot be recycled/deleted. The status of this bug
report is unclear. I have a feeling it's still open. Is that the case?

BUG 14340 [9] - A user (Sonu Gupta) reports unbounded pg_xlog growth,
is asked to do some checks, and is then directed to the pgsql-general
mailing list, where he did not follow up. I quote the checks that were
suggested:

    Check that your archive_command is functioning correctly, and that you
    don't have any inactive replication slots (select * from
    pg_replication_slots where not active).  Also check the server logs if
    both those things are okay.

I have done these checks: the archive_command we have returns zero,
and we do not have any inactive replication slots.
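Concretely, the slot check from the quote above amounts to something
like this (a sketch; connection options omitted):

```shell
# Inactive replication slots pin WAL and prevent its removal:
command -v psql >/dev/null &&
    psql -At -c "SELECT slot_name FROM pg_replication_slots WHERE NOT active;"

# Our archive_command is literally 'exit 0;', so it trivially succeeds:
sh -c 'exit 0;'
echo "archive_command exit status: $?"
```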

BUG 10013 [12] - A user reports that initdb fills up the disk once
BLCKSZ and/or XLOG_BLCKSZ are changed to non-standard values. The bug
seems to be

BUG 11989 [8] - A user reports unbounded pg_xlog growth that ends in a
disk outage. No further replies after the bug report.

BUG 2104 [10] - A user reports PostgreSQL not recycling pg_xlog
files. It's suggested that this happened because checkpoints were
failing, so WAL segments could not be recycled.

BUG 7801 [11] - This is a bit off-topic for our problem (since we
don't have replication set up yet for the server with unbounded
pg_xlog growth), but still an interesting read.

A slave falls too far behind its master, which leads to pg_xlog growth
on the slave. The user says that setting checkpoint_completion_target=0,
or manually running CHECKPOINT on the slave, immediately frees up
space in the slave's pg_xlog.

I also learned here that a CHECKPOINT occurs approximately every
checkpoint_completion_target * checkpoint_timeout. Is this correct?

Should I set checkpoint_completion_target=0? 

4) Thoughts

In the logs, there are lines like the following one:

    2018-01-17 02:34:39.407 CST [59922] HINT:  Consider increasing the 
configuration parameter "max_wal_size".
    2018-01-17 02:35:02.513 CST [59922] LOG:  checkpoints are occurring too 
frequently (23 seconds apart)

This looks very similar to BUG 7902 [7]. Is there any rule of thumb,
guideline, or technique that can be used when checkpoints cannot be
completed fast enough?
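For what it's worth, if the HINT were followed literally, the knobs
involved would look something like this (values purely illustrative,
not something we have tried):

```
# postgresql.conf fragment (illustrative values, not a recommendation):
max_wal_size = 16GB          # allow more WAL to accrue between checkpoints
checkpoint_timeout = 15min   # make checkpoints time-driven again
checkpoint_completion_target = 0.7
```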

I'm not sure whether this is a misconfiguration problem or a bug.
Which would be the more appropriate characterization?


[4] http://www.interdb.jp/blog/pgsql/pg95walsegments/
[6] https://www.postgresql.org/docs/9.5/static/release-9-5.html#AEN128150

Stefan Petrea
System Engineer, Network Engineering



Attachment: postgresql.conf
Description: postgresql.conf

Attachment: postgresql-logs.txt.gz
Description: postgresql-logs.txt.gz

Attachment: cleanup-wal.sh
Description: cleanup-wal.sh
