On Fri, Jan 6, 2017 at 8:52 AM, Kevin Grittner <kgri...@gmail.com> wrote:
> On Thu, Jan 5, 2017 at 7:32 PM, Jonathon Nelson <jdnel...@dyn.com> wrote:
> > On Thu, Jan 5, 2017 at 1:01 PM, Andres Freund <and...@anarazel.de> wrote:
> >> On 2017-01-05 12:55:44 -0600, Jonathon Nelson wrote:
> >>> In our lab environment and with a 16MiB setting, we saw substantially
> >>> better network utilization (almost 2x!), primarily over high bandwidth
> >>> delay product links.
> >> That's a bit odd - shouldn't the OS network stack take care of this in
> >> both cases? I mean either is too big for TCP packets (including jumbo
> >> frames). What type of OS and network is involved here?
> > In our test lab, we make use of multiple flavors of Linux. No jumbo
> > frames are in use.
> > We simulated anything from 0 to 160ms RTT (with varying degrees of
> > packet loss, etc.) using tc. Even with everything fairly clean, at 80ms
> > there was a 2x improvement in performance.
> Is there compression and/or encryption being performed by the
> network layers? My experience with both is that they run faster on
> bigger chunks of data, and that might happen before the data is
> broken into packets.
There is no compression or encryption. The testing was done with and without
various forms of hardware offload, etc., but otherwise there is no magic up
our sleeves.
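For context on the high bandwidth-delay-product point, here is a rough
illustrative calculation. The 1 Gbit/s link speed is an assumption for the
sake of the example; the thread itself only gives the RTT range (0-160 ms)
and the 16 MiB send-size setting:

```python
# Illustrative bandwidth-delay product (BDP) arithmetic.
# ASSUMPTION: a 1 Gbit/s link; the thread only states RTTs and
# the 16 MiB walsender send size, not the link speed.

def bdp_bytes(bandwidth_bits_per_sec: float, rtt_seconds: float) -> float:
    """Bytes that must be in flight to keep the link fully utilized."""
    return bandwidth_bits_per_sec / 8 * rtt_seconds

# At 80 ms RTT on a 1 Gbit/s link:
bdp = bdp_bytes(1e9, 0.080)  # 10,000,000 bytes, i.e. ~9.5 MiB
print(f"BDP at 80 ms: {bdp / 2**20:.1f} MiB")
```

The sketch just shows that at 80 ms RTT even a modest 1 Gbit/s link wants
roughly 10 MB in flight, which is the ballpark where a 16 MiB per-send chunk
would plausibly keep the pipe fuller than a much smaller one, unless socket
buffers and offloads fully compensate.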
Dyn / Principal Software Engineer