Peter Geoghegan <pe...@2ndquadrant.com> writes:
> I'm not quite comfortable recommending a switch to milliseconds if
> that implies a loss of sub-millisecond granularity. I know that
> someone is going to point out that in some particular benchmark,
> they can get another relatively modest increase in throughput (perhaps
> 2%-3%) by splitting the difference between two adjoining millisecond
> integer values. In that scenario, I'd be tempted to point out that
> that increase is quite unlikely to carry over to real-world benefits,
> because the setting is then right on the cusp of where increasing
> commit_delay stops helping throughput and starts hurting it. The
> improvement is likely to get lost in the noise in the context of a
> real-world application, where for example the actual cost of an
> fsync is more variable. I'm just not sure that that's the right
> attitude.

To me it's more about future-proofing.  commit_delay is the only
time-interval setting we've got where reasonable values today are in the
single-digit-millisecond range.  So it seems to me not hard to infer
that in a few years sub-millisecond values will be important, whether or
not there's any real argument for them today.
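
For concreteness, here's a minimal sketch of the granularity in question,
assuming the current integer-GUC semantics (commit_delay specified in
microseconds, so values between adjoining milliseconds are already
expressible; the particular values are illustrative only):

    -- today: microsecond granularity
    SET commit_delay = 2500;   -- 2.5 ms, splitting the difference between 2 ms and 3 ms
    SET commit_siblings = 5;   -- only delay when at least 5 other transactions are active

    -- under a milliseconds-only unit, the nearest choices would collapse to
    -- commit_delay = 2 or commit_delay = 3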

                        regards, tom lane

