In particular, check out:
http://fasterdata.es.net/host-tuning/100g-tuning/
Your results are pretty typical.
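The short version of that page is raising the kernel's socket-buffer limits so a
single TCP stream can keep the pipe full, roughly along these lines (the exact
values below are only illustrative; take current numbers from the page itself):

    sysctl -w net.core.rmem_max=268435456
    sysctl -w net.core.wmem_max=268435456
    sysctl -w net.ipv4.tcp_rmem="4096 87380 134217728"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 134217728"
    sysctl -w net.core.netdev_max_backlog=250000
    sysctl -w net.ipv4.tcp_mtu_probing=1

plus jumbo frames and checking the NIC's PCIe slot and NUMA placement, which it
sounds like you've already been through.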
On Tue, Apr 17, 2018 at 14:16 Jeffrey Lane <j...@canonical.com> wrote:
> On Tue, Apr 17, 2018 at 3:09 PM, Bruce A. Mah <b...@es.net> wrote:
> > If memory serves me right, Jeffrey Lane wrote:
> >> Hi all,
> >>
> >> I have what is kind of a silly question, but who has some experience
> >> testing 100Gb with iperf3?
> >>
> >> I just wanted to validate something with iperf3 to see if it is reasonable.
> >>
> >> With a single process running, the most I've been able to get out of a
> >> 100Gb network port is a burst of about 65Gb/s with sustained averages
> >> of around 50-55Gb/s.
> >>
> >> This is after a LOT of kernel tweaks, PCIe tweaks, and network config tweaks.
> >>
> >> So at this point, I'm thinking that what I'm seeing is a hardware
> >> bottleneck, since iperf3 isn't multi-threaded.
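Since each iperf3 test does run in a single thread, one thing that sometimes
helps is pinning the process to a core on the same NUMA node as the NIC with
-A. A minimal sketch, with a made-up address and core numbers:

    iperf3 -s -A 4               # server pinned to CPU core 4
    iperf3 -c 10.0.0.1 -A 4,4    # client on core 4, ask the server to use core 4

Pick cores local to the NIC; the numbers above are placeholders.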
> >>
> >> What I wanted to validate, to get around that, is this:
> >>
> >> On the target side, I've kicked off four iperf3 server processes, all bound
> >> to the same IP but each listening on a different port. Now, on the client
> >> side, I kick off four iperf3 instances, one per remote port. After 30
> >> minutes of testing, each instance returns an average throughput of
> >> about 23Gb/s.
> >>
> >> So in that scenario, is it reasonable to add up the four parallel processes
> >> reporting 23Gb/s each and conclude we're actually seeing throughput of about
> >> 92Gb/s on the 100Gb port (i.e., nearly saturated)?
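For anyone reading along later, a minimal sketch of that setup (the address and
ports are made up; -t 1800 matches the 30-minute runs):

    # target side: four servers on one IP, each on its own port
    for p in 5201 5202 5203 5204; do iperf3 -s -B 10.0.0.1 -p $p & done
    # client side: one client per server port, all started together
    for p in 5201 5202 5203 5204; do iperf3 -c 10.0.0.1 -p $p -t 1800 & done

Note that iperf3's -P option also opens parallel streams, but at least in
current versions they all run in the one thread, which is why separate
processes are the usual workaround at these speeds.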
> >
> > That all seems reasonable. We have some experience with iperf3 on very
> > high-speed links, and yes, some amount of tuning is usually involved.
> >
> > https://fasterdata.es.net/ has some information on tuning hardware and
> > software.
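On the iperf3 side, the knobs people usually reach for at these rates are -Z
(zero-copy send), -w (socket buffer / window size), and -l (per-call read/write
length). Purely as an illustration, with placeholder address and sizes:

    iperf3 -c 10.0.0.1 -Z -w 128M -l 1M -t 60

The right sizes depend on the path; the kernel limits above cap what -w can get.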
>
> Thanks, I'll look into it.
>
> > Note that the reported throughput is application-level throughput
> > (payload only) and doesn't include protocol overheads.
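As a back-of-the-envelope example of what that overhead costs: with a 1500-byte
MTU, each segment carries about 1448 bytes of payload (1500 minus 20 bytes of IP
header and 32 bytes of TCP header with timestamps) but occupies roughly 1538
bytes on the wire once the Ethernet header, FCS, preamble, and inter-frame gap
are counted. So the ceiling for iperf3-reported goodput on a 100Gb/s port is
about 1448/1538, or roughly 94Gb/s; with 9000-byte jumbo frames it's closer to
99Gb/s.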
>
> Yeah, I get that, which is why I don't expect to see numbers approaching
> 100% of the theoretical limit; I assumed some of it was lost to protocol
> overhead somewhere.
>
> Glad to know my math adds up, though. Many thanks, everyone!
>
> >
> > Bruce.
>
> --
> Jeff Lane
> TPP / Server Certification Lead
>
> "Entropy isn't what it used to be."
>
--
Sent from Gmail Mobile
_______________________________________________
Iperf-users mailing list
Iperf-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/iperf-users