Sure.

lctl list_nids output for test client:

172.25.103.23@o2ib
172.25.104.23@o2ib1

lctl list_nids output for MDS:

172.25.103.1@o2ib
172.25.103.1@o2ib1

lctl list_nids output for each OSS:

172.25.103.x@o2ib
172.25.104.x@o2ib1

modprobe.d/lustre.conf contents:
MDS: options lnet networks=o2ib0(ib0),o2ib1(ib0)
OSS: options lnet networks=o2ib0(ib0),o2ib1(ib2)
Client: options lnet networks=o2ib0(ib0),o2ib1(ib1)
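
As a quick cross-check of the configuration above, each fabric can be exercised
directly with lctl ping (a diagnostic sketch; run on the test client against
the MDS NIDs from the listing above — these commands need a node with Lustre's
lctl installed):

```shell
# Ping the MDS over each LNET network; lctl ping prints the
# peer's NIDs on success and fails if that path is down.
lctl ping 172.25.103.1@o2ib     # o2ib0 network
lctl ping 172.25.103.1@o2ib1    # o2ib1 network

# Confirm which NIDs this node has actually brought up:
lctl list_nids
```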

Carl


From: Kevin Van Maren [mailto:[email protected]]
Sent: Wednesday, February 15, 2012 3:25 PM
To: Barberi, Carl E
Cc: [email protected]
Subject: EXTERNAL: Re: [Lustre-discuss] LNET Performance Issue

Perhaps someone else here has a thought, but it does not make sense to me that 
loading SDP (which accelerates TCP traffic by bypassing the kernel TCP stack) 
would make LNET faster if you are using <ip>@o2ib, and _not_ <ip>@tcp0, for 
your NIDs.

Any chance you've configured both TCP and O2IB nids on the machine, and it is 
somehow picking the TCP nids to use?

Can you confirm the "lctl list_nids" output, and the lustre/lnet sections of 
your modprobe.conf?

Kevin


On Feb 15, 2012, at 12:30 PM, Barberi, Carl E wrote:


We are having issues with LNET performance over InfiniBand.  We have a 
configuration with a single MDT and six (6) OSTs.  The Lustre client I am using 
to test is configured to use a stripe count of 6 (lfs setstripe -c 6 
/mnt/lustre).  When I perform a test using the following command:

                dd if=/dev/zero of=/mnt/lustre/test.dat bs=1M count=2000
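
As a rough sanity check on the rates quoted below (an arithmetic sketch,
assuming dd reports decimal MB/s), the wall time for this 2000 MiB transfer
works out to:

```shell
# 2000 MiB in bytes, and the implied wall time in milliseconds
# at the observed (~815 MB/s) and target (1 GB/s) rates.
bytes=$((2000 * 1024 * 1024))
echo "ms at 815 MB/s: $(( bytes * 1000 / 815000000 ))"   # ~2573 ms
echo "ms at 1 GB/s:   $(( bytes * 1000 / 1000000000 ))"  # ~2097 ms
```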

I typically get a write rate of about 815 MB/s, and we never exceed 848 MB/s.  
When I run obdfilter-survey, we easily get about 3-4 GB/s write speed, but when 
I run a series of lnet-selftests, the read and write rates range from 850 MB/s 
to 875 MB/s at most.  I have performed the following optimizations to increase 
the data rate:

On the Client:
lctl set_param osc.*.checksums=0
lctl set_param osc.*.max_dirty_mb=256

On the OSTs:
lctl set_param obdfilter.*.writethrough_cache_enable=0
lctl set_param obdfilter.*.read_cache_enable=0

echo 4096 > /sys/block/<devices>/queue/nr_requests
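
For reference, an annotated copy of the tunings above (a sketch; the
nr_requests line keeps <device> as a placeholder to fill in for each OST
backing device):

```shell
# On the client: disable wire checksums and raise the per-OSC
# dirty-page cache to 256 MB.
lctl set_param osc.*.checksums=0
lctl set_param osc.*.max_dirty_mb=256

# On each OSS: bypass the read and writethrough caches so large
# streaming writes go straight to disk.
lctl set_param obdfilter.*.writethrough_cache_enable=0
lctl set_param obdfilter.*.read_cache_enable=0

# On each OSS: deepen the block-layer request queue for every
# OST backing device (<device> is a placeholder).
echo 4096 > /sys/block/<device>/queue/nr_requests
```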

I have also loaded the ib_sdp module, which brought a further increase in 
speed.  However, we need to be able to record at no less than 1 GB/s, which we 
cannot achieve right now.  Any thoughts on how I can optimize LNET, which 
clearly seems to be the bottleneck?
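
For reference, lnet_selftest runs of this kind are typically driven with the
lst utility; a sketch of a bulk-write test between the client and one OSS (the
NIDs come from the listings earlier in the thread, with 'x' left as the per-OSS
placeholder, and size=1M mirrors the dd block size):

```shell
# Requires the lnet_selftest module loaded on both nodes.
export LST_SESSION=$$
lst new_session brw_write
lst add_group clients 172.25.103.23@o2ib
lst add_group servers 172.25.103.x@o2ib    # one OSS; fill in x
lst add_batch bulk
lst add_test --batch bulk --from clients --to servers \
    brw write size=1M
lst run bulk
lst stat clients servers &   # prints bandwidth/RPC rates
sleep 30; kill $!
lst end_session
```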

Thank you for any help you can provide,
Carl Barberi



_______________________________________________
Lustre-discuss mailing list
[email protected]
http://lists.lustre.org/mailman/listinfo/lustre-discuss