Sven will tell you: "RPC isn't streaming" and that may account for the 
discrepancy.  If the tests are doing any "fan-in" where multiple nodes are 
sending to a single node, then it's also possible that you are exhausting switch 
buffer memory in a way that a 1:1 iperf wouldn't.
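To make the fan-in point concrete, here's a rough back-of-envelope sketch (all numbers are illustrative assumptions, not measurements): with N senders bursting at line rate into a single egress port, the switch has to buffer everything arriving faster than the one port can drain.

```python
def incast_buffer_bytes(n_senders, line_rate_gbps, burst_ms):
    """Bytes the egress port must buffer during a synchronized burst.

    Arrival rate is n_senders * line_rate; drain rate is one line_rate.
    All inputs here are illustrative assumptions.
    """
    bytes_per_ms = line_rate_gbps * 1e9 / 8 / 1000  # bytes/ms at line rate
    # Excess arrival over drain, accumulated for the burst duration.
    return (n_senders - 1) * bytes_per_ms * burst_ms

# E.g. 8 NSD servers bursting to one client for 1 ms on 10 GbE:
print(incast_buffer_bytes(8, 10, 1) / 1e6, "MB")  # ~8.75 MB
```

A few MB of shared buffer is common on datacenter switches, so a synchronized burst like that can overflow it and cause drops, which a 1:1 iperf would never trigger.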

For our internal benchmarking we've used /usr/lpp/mmfs/samples/net/nsdperf to 
more closely estimate the real performance.  I haven't played with mmnetverify 
yet though.
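In case it helps, roughly how we've driven nsdperf (from memory -- check the comments at the top of nsdperf.C for the exact build line for your release; node names below are placeholders):

```shell
# nsdperf ships as source; build it first
cd /usr/lpp/mmfs/samples/net
g++ -O2 -o nsdperf -lpthread -lrt nsdperf.C

# Start it in server mode on every node under test:
./nsdperf -s

# Then drive the test interactively from one node:
./nsdperf
> server nsd01 nsd02
> client client01
> ttime 30
> test write read
> quit
```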

-Paul

-----Original Message-----
From: [email protected] 
[mailto:[email protected]] On Behalf Of Simon Thompson 
(Research Computing - IT Services)
Sent: Friday, March 17, 2017 2:50 PM
To: [email protected]
Subject: [gpfsug-discuss] mmnetverify

Hi all,

Just wondering if anyone has used the mmnetverify tool at all?

Having made some changes to our internal L3 routing this week, I was interested 
to see what it claimed.

As a side-note, it picked up some DNS resolution issues, though I'm not clear 
why it claimed this for some of them, as doing a "dig" from the node resolved 
fine (adding the NSD servers to the hosts file cleared the error).

It's actually the bandwidth tests that I'm interested in hearing other people's 
experience with, as the numbers that come out of it are very different from 
(lower than) what we see using iperf to test performance between two nodes.
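For reference, the sort of comparison I mean is something like the following (node names are placeholders; the operation names are from the mmnetverify man page for our release, so check yours):

```shell
# mmnetverify's large-transfer test between a server and a client:
mmnetverify data-large -N nsd01 --target-nodes client01 --verbose

# versus a plain point-to-point iperf between the same pair:
iperf -s                 # on client01
iperf -c client01 -t 30  # on nsd01
```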

Anyone any thoughts at all on this?

Thanks
Simon

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss