On Thu, May 30, 2019 at 3:23 PM Bob McMahon <bob.mcma...@broadcom.com> wrote:
>
> hmm, I'm confused.  Did you run multiple iperf 3 sessions, iperf 2 with the 
> -P 8,10 option, or possibly both?  Your previous response said the only way to 
> get this was with multiple iperf 3 sessions and didn't mention iperf 2 or 
> the use of -P.

Multiple iperf3 instances, similar to what Chris Preimesberger
demonstrates in his video, only we use upwards of 10 instances (it
works out to about one instance per 10Gb of bandwidth).

You mentioned iperf2 would be interesting to try; I was just
commenting that we went with multiple iperf3 instances instead of a
single "iperf2 -P" because iperf3 does some things we wanted (like CPU
measurements) but lacks true multithreading.  "Use multiple instances
of iperf3" is a workaround for that lack of proper multithreading.
Also, note that I'm not an expert; we came to this by way of a lot of
internet reading and trial and error.
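
For reference, here's a minimal sketch of the workaround (the host
address, port range, and instance count are illustrative, and the
aggregation step assumes jq is available):

  # on the target: start 8 iperf3 servers, one per port, daemonized
  for i in $(seq 0 7); do iperf3 -s -p $((5201 + i)) -D; done

  # on the client: one instance per port, pinned to separate CPUs,
  # with JSON output so the per-instance results can be summed later
  for i in $(seq 0 7); do
    iperf3 -c 10.0.0.1 -p $((5201 + i)) -A $i -t 30 -J > "iperf3_$i.json" &
  done
  wait

  # sum the receiver bitrates across all instances, in Gbit/s
  jq -s 'map(.end.sum_received.bits_per_second) | add / 1e9' iperf3_*.json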

> In theory, iperf 2 could outperform iperf 3 through its use of threads, e.g. 
> separating the traffic from the accounting and reporting.  I'm curious about 
> actual experimental results.
> Note:  iperf 2.0.13 is really required for this class of testing, as older 
> iperf versions (e.g. 2.0.5) have performance-related bugs.

It may.  If one of my machines frees up, I may try this to see how it
works out.  No promises, though; I've already got too much work as it
is :(
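
If I do get to it, I'd expect the comparison to look roughly like this
(iperf 2.0.13 syntax; the host address, thread count, and duration are
illustrative):

  # server, with -e enhanced reporting
  iperf -s -e

  # client: 8 parallel traffic threads inside a single process
  iperf -c 10.0.0.1 -P 8 -e -t 30

The difference being, per your note above, that iperf 2 keeps traffic
separate from accounting and reporting via threads within one process,
rather than N independent processes.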

>
> Bob
>
> On Thu, May 30, 2019 at 11:49 AM Jeffrey Lane <j...@canonical.com> wrote:
>>
>> For my needs (very simple testing), yes. We had to do that because
>> iperf3 doesn't multi-thread like iperf 2 did, unfortunately.
>>
>> On Thu, May 30, 2019 at 1:37 PM Bob McMahon <bob.mcma...@broadcom.com> wrote:
>> >
>> > Is it just multiple threads?  It might be interesting to try iperf 2.0.13 
>> > and the -P 8 option.
>> >
>> > Bob
>> >
>> > On Thu, May 30, 2019 at 10:04 AM Jeffrey Lane <j...@canonical.com> wrote:
>> >>
>> >> I've been working on this a bit, and the only way to get it was to
>> >> run multiple iperf3 instances. To do this, you have to set up several
>> >> servers on the target (we do about 8 for 100Gb, possibly 10), each
>> >> listening on a different port, then run client instances (one for
>> >> each port), then aggregate the results from each; that nets out in
>> >> the 92-97Gb/s range overall.
>> >>
>> >> Additionally, in some cases tweaks are necessary (jumbo frames, some
>> >> kernel tweaks, driver tweaks, etc.), but that's all case-by-case.
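>> >>
>> >> As a rough illustration only (the interface name and buffer values
>> >> below are hypothetical; check your NIC and driver docs for what
>> >> actually applies):
>> >>
>> >>   ip link set dev eth0 mtu 9000            # enable jumbo frames
>> >>   sysctl -w net.core.rmem_max=268435456    # raise socket read buffer cap
>> >>   sysctl -w net.core.wmem_max=268435456    # raise socket write buffer cap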
>> >>
>> >> And it is very much constrained by CPU and PCIe bandwidth.
>> >>
>> >>
>> >> On Thu, May 30, 2019 at 12:38 PM Chris Preimesberger <ccpi...@gmail.com> 
>> >> wrote:
>> >> >
>> >> > I tried and got up to 87Gbps throughput.  The results were CPU-bound.  
>> >> > I want to build new i7 9900K PCs and re-test.  Here's a video of my 
>> >> > attempt:
>> >> >
>> >> > https://youtu.be/uh2zvaaH0hc
>> >> >
>> >> >
>> >> >
>> >> > On Thu, May 30, 2019, 3:08 AM Ashwajit Bhoutkar <bhout...@gmail.com> 
>> >> > wrote:
>> >> >>
>> >> >> Hi,
>> >> >>
>> >> >> Just wanted to check whether it is possible to test the throughput of 
>> >> >> a 100G link using iPerf.
>> >> >>
>> >> >>
>> >> >> Thank You,
>> >> >>
>> >> >> Kind Regards,
>> >> >> Ashwajit



-- 
Jeff Lane
Engineering Manager
IHV/OEM Alliances and Server Certification

"Entropy isn't what it used to be."


_______________________________________________
Iperf-users mailing list
Iperf-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/iperf-users
