On Thu, Jan 5, 2017 at 7:32 PM, Jonathon Nelson <jdnel...@dyn.com> wrote:
> On Thu, Jan 5, 2017 at 1:01 PM, Andres Freund <and...@anarazel.de> wrote:
>> On 2017-01-05 12:55:44 -0600, Jonathon Nelson wrote:
>>> In our lab environment and with a 16MiB setting, we saw substantially
>>> better network utilization (almost 2x!), primarily over high bandwidth
>>> delay product links.
>>
>> That's a bit odd - shouldn't the OS network stack take care of this in
>> both cases? I mean either is too big for TCP packets (including jumbo
>> frames). What type of OS and network is involved here?
>
> In our test lab, we make use of multiple flavors of Linux. No jumbo frames.
> We simulated anything from 0 to 160ms RTT (with varying degrees of jitter,
> packet loss, etc.) using tc. Even with everything fairly clean, at 80ms RTT
> there was a 2x improvement in performance.

Is there compression and/or encryption being performed by the network
layers? My experience with both is that they run faster on bigger chunks
of data, and that might happen before the data is broken into packets.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
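
To put a rough number on that intuition, here is a trivial microbenchmark
showing how per-call overhead in a compressor shrinks as the chunk size
grows. It uses userspace zlib purely as a stand-in for whatever compression
(or encryption) the network path might be doing; the chunk sizes, data
pattern, and totals are arbitrary assumptions on my part, not taken from the
tests described above.

/*
 * chunk_bench.c - compress the same data in small vs. large chunks.
 *
 * Only an illustration of the chunk-size effect, not anything from the
 * patch under discussion.  Each compress2() call sets up and tears down
 * its own deflate state, so small chunks pay that cost many more times
 * per byte of input.
 *
 * Build: cc -O2 chunk_bench.c -lz -o chunk_bench
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <zlib.h>

#define TOTAL_BYTES (64 * 1024 * 1024)

static double
compress_in_chunks(const unsigned char *src, size_t total, size_t chunk)
{
	uLong		bound = compressBound(chunk);
	Bytef	   *dst = malloc(bound);
	struct timespec t0, t1;
	size_t		off;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (off = 0; off < total; off += chunk)
	{
		uLongf		dstlen = bound;
		size_t		n = (total - off < chunk) ? total - off : chunk;

		if (compress2(dst, &dstlen, src + off, n, Z_DEFAULT_COMPRESSION) != Z_OK)
		{
			fprintf(stderr, "compress2 failed\n");
			exit(1);
		}
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);
	free(dst);
	return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int
main(void)
{
	unsigned char *buf = malloc(TOTAL_BYTES);
	size_t		i;

	/* mildly compressible filler data */
	for (i = 0; i < TOTAL_BYTES; i++)
		buf[i] = (unsigned char) (i % 251);

	printf("8kB chunks:  %.3f s\n",
		   compress_in_chunks(buf, TOTAL_BYTES, 8 * 1024));
	printf("16MB chunks: %.3f s\n",
		   compress_in_chunks(buf, TOTAL_BYTES, 16 * 1024 * 1024));

	free(buf);
	return 0;
}

If something like that is happening per-write somewhere in the path (a
compressing VPN, encrypted tunnel, etc.), larger sends could plausibly get
an outsized benefit, which is why I ask.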