On Tue, Jan 3, 2017 at 11:16 AM, Simon Riggs <si...@2ndquadrant.com> wrote:
> On 3 January 2017 at 15:44, Robert Haas <robertmh...@gmail.com> wrote:
>> Yeah. I don't think there's any way to get around the fact that there
>> will be bigger latency spikes in some cases with larger WAL files.
> One way would be for the WALwriter to zerofill new files ahead of
> time, thus avoiding the latency spike.
Sure, we could do that. I think it's an independent improvement,
though: it is beneficial with or without this patch.
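
To make the idea concrete, the background pre-allocation Simon is
describing would amount to roughly this (file names hypothetical; this
is just a sketch of the idea, not the patch):

    # zero-fill the next segment before any backend needs it, so the
    # foreground WAL write never waits for file creation
    dd if=/dev/zero of=pg_wal/xlogtemp.prealloc bs=8k count=2048 conv=fsync
    mv pg_wal/xlogtemp.prealloc pg_wal/000000010000000000000043

Today that zero-filling happens in the foreground when a backend first
needs the new segment, which is where the latency spike comes from.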
>> For example, in a quick test on my laptop,
>> zero-filling a 16 megabyte file using "dd if=/dev/zero of=x bs=8k
>> count=2048" takes about 11 milliseconds, and zero-filling a 64
>> megabyte file with a count of 8192 increases the time to almost 50
>> milliseconds. That's something, but I wouldn't rate it as concerning.
> I would rate that as concerning, especially if we allow much larger sizes.
I don't really understand the concern. If we allow large sizes but
they are not the default, people can make a throughput-vs-latency
trade-off when choosing a value for their installation. Those kinds of
trade-offs are common and unavoidable. If we raise the default, then
it's more of a concern, but I'm not sure those numbers are big enough
to worry about. I'm not sure how to decide which numbers are big
enough to worry about, either.
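
Assuming the patch ends up exposing this as an initdb-time switch along
the lines of what's been discussed (option name hypothetical here), the
trade-off is made once per cluster:

    # throughput-oriented installation: fewer end-of-segment fsyncs,
    # at the cost of a larger zero-fill when a new segment is created
    initdb --wal-segsize=64 -D /path/to/data

    # latency-sensitive installation: stick with the 16MB default
    initdb -D /path/to/data

Anyone who cares more about tail latency than throughput just leaves
the default alone.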
I guess we need some test results showing what happens with this patch
in the real world before we go further. I agree that there's a
possible downside to raising the segment size, but my suspicion is
that the results are going to be better, not worse, because of
reducing the number of end-of-segment fsyncs. There's no point
worrying too much about how we're going to mitigate the negative
impact until we know for sure that there is one.
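
Something along these lines would do as a first cut, running the same
workload against clusters built with different segment sizes (pgbench
flags here are just the obvious ones, nothing patch-specific):

    # initialize, then watch tps and latency at 10-second intervals
    pgbench -i -s 100
    pgbench -c 16 -j 16 -T 600 -P 10

If the reduced number of end-of-segment fsyncs helps, it should show up
in the tps numbers; the zero-fill cost, if it matters, should show up
as periodic latency spikes in the -P output.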