Hi,

Following are the performance results for the read-write test, observed with
different values of backend_flush_after:

1) backend_flush_after = 256kB (32 * 8kB), tps = 10841.178815
2) backend_flush_after = 512kB (64 * 8kB), tps = 11098.702707
3) backend_flush_after = 1MB (128 * 8kB), tps = 11434.964545
4) backend_flush_after = 2MB (256 * 8kB), tps = 13477.089417
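
Since backend_flush_after is configured as a count of 8kB pages, the kB/MB
figures above follow directly from the page counts; as a quick sanity check
(assuming the default 8kB block size):

```python
PAGE_SIZE_KB = 8  # PostgreSQL's default block size

def flush_after_kb(pages: int) -> int:
    """Convert a backend_flush_after page count to kilobytes."""
    return pages * PAGE_SIZE_KB

# The four settings tested above, expressed as page counts:
for pages in (32, 64, 128, 256):
    print(f"{pages} pages = {flush_after_kb(pages)} kB")
# -> 256, 512, 1024 and 2048 kB, i.e. 256kB, 512kB, 1MB and 2MB
```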


Note: The above test was performed on unpatched master with default
values for checkpoint_flush_after, bgwriter_flush_after
and wal_writer_flush_after.
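
For anyone wanting to reproduce this, a sketch of the relevant setting (the
*_flush_after GUCs accept either a raw 8kB-page count or a size with units;
the value below is just the largest configuration tested here):

```
# postgresql.conf sketch for the 2MB run; the other *_flush_after
# parameters were left at their defaults
backend_flush_after = 2MB    # equivalently 256 (8kB pages)
```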

With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com

On Thu, May 12, 2016 at 9:20 PM, Andres Freund <and...@anarazel.de> wrote:

> On 2016-05-12 11:27:31 -0400, Robert Haas wrote:
> > On Thu, May 12, 2016 at 11:13 AM, Andres Freund <and...@anarazel.de>
> wrote:
> > > Could you run this one with a number of different backend_flush_after
> > > settings?  I'm suspecting the primary issue is that the default is
> > > too low.
> >
> > What values do you think would be good to test?  Maybe provide 3 or 4
> > suggested values to try?
>
> 0 (disabled), 16 (current default), 32, 64, 128, 256?
>
> I'm suspecting that only backend_flush_after has these negative
> performance implications at this point.  One path is to increase that
> option's default value; another is to disable only backend-guided
> flushing, and add a strong hint that if you care about predictable
> throughput you might want to enable it.
>
> Greetings,
>
> Andres Freund
>
