On 3 January 2017 at 15:44, Robert Haas <robertmh...@gmail.com> wrote:

> Yeah.  I don't think there's any way to get around the fact that there
> will be bigger latency spikes in some cases with larger WAL files.

One way would be for the WALwriter to zero-fill new files ahead of
time, thus avoiding the latency spike.
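
To illustrate the idea, here is a minimal standalone sketch of the kind
of work the WALwriter could do during idle cycles; the file name, the
64 MB segment size, and the overall structure are assumptions for this
example, not actual server code:

/*
 * Sketch only: pre-create and zero-fill one WAL-sized segment in the
 * background, so a later segment switch only has to rename the file
 * into place instead of paying the fill cost on the commit path.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define SEG_SIZE   (64 * 1024 * 1024)   /* assumed 64 MB segment */
#define BLK_SIZE   (8 * 1024)           /* 8 kB blocks, like dd bs=8k */

int
main(void)
{
    char        buf[BLK_SIZE];
    const char *path = "prealloc_segment.tmp";  /* hypothetical name */
    int         fd;
    size_t      written = 0;

    memset(buf, 0, sizeof(buf));

    fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0600);
    if (fd < 0)
    {
        perror("open");
        return 1;
    }

    /* Zero-fill the whole segment up front, off the commit path. */
    while (written < SEG_SIZE)
    {
        if (write(fd, buf, BLK_SIZE) != BLK_SIZE)
        {
            perror("write");
            close(fd);
            return 1;
        }
        written += BLK_SIZE;
    }

    /* Make the zeros durable before the segment is handed out. */
    if (fsync(fd) != 0)
    {
        perror("fsync");
        close(fd);
        return 1;
    }
    close(fd);
    return 0;
}

The cost of filling the segment doesn't go away, but it is paid by a
background process while it is otherwise idle rather than by a backend
at the moment it needs the next segment.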

> For example, in a quick test on my laptop,
> zero-filling a 16 megabyte file using "dd if=/dev/zero of=x bs=8k
> count=2048" takes about 11 milliseconds, and zero-filling a 64
> megabyte file with a count of 8192 increases the time to almost 50
> milliseconds.  That's something, but I wouldn't rate it as concerning.

I would rate that as concerning, especially if we allow much larger sizes.

> But the flip side is that it's wrong to imagine that there's no harm
> in leaving the situation as it is.

The case for change has been made; the only remaining discussion is
what goes into the new patch.

-- 
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

