I'm sorry, but I think I will not be able to help you, because I don't have 
access to our cluster now, and my experience is about two years old.

We are using CentOS 6.x with drivers distributed by Intel. Part of these 
"proprietary" drivers are commands to manage switch firmware etc. (They aren't 
part of the OFED driver pack and probably will not be in Ubuntu.)
The command was:

iba_manage_switch -t 0xGUID_OF_SWITCH showFwVersion

where GUID_OF_SWITCH can be found with the command:

ibswitches 

or:

ibnodes
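
If I remember correctly, ibswitches prints the switch GUID directly. The 
output looked roughly like this (the GUID and switch name below are only an 
example, not values from our cluster):

ibswitches
Switch : 0x00066a00e3001234 ports 36 "QLogic 12300 switch" base port 0 lid 5 lmc 0

iba_manage_switch -t 0x00066a00e3001234 showFwVersion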



Vitek



---------- Original message ----------
From: Nicholas Mills <[email protected]>
To: vithanousek <[email protected]>
Date: 9. 10. 2016 23:08:04
Subject: Re: [Pvfs2-users] Slow performance with InfiniBand

"

I installed the default QLogic drivers in Ubuntu 14.04.1 LTS. The kernel 
module is qla4xxx.ko version 5.04.00-k1. I wasn't able to find a firmware 
version with ibstat or ibv_devinfo (pasted below). I'm not sure what you 
mean by topology. All the nodes should be connected to the same switch at 
QDR speeds.
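
As a sanity check (assuming the in-kernel qib driver and the standard sysfs 
layout; the device name qib0 is taken from the ibv_devinfo output below), the 
module version, firmware string and negotiated link rate should be visible 
with:

$ modinfo ib_qib | grep -i version
$ cat /sys/class/infiniband/qib0/fw_ver
$ ibstat qib0 1 | grep -i rate   # 40 Gb/s here would correspond to a 4x QDR link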



I didn't mention this in my original email, but I installed a different file 
system (BeeGFS) on the same InfiniBand hardware and was able to write an 863 
GB file at 1750.052 MB/s.





$ ibv_devinfo
hca_id: qib0
        transport:                      InfiniBand (0)
        fw_ver:                         0.0.0
        node_guid:                      0011:7500:0070:5ed4
        sys_image_guid:                 0011:7500:0070:5ed4
        vendor_id:                      0x1175
        vendor_part_id:                 29474
        hw_ver:                         0x2
        board_id:                       InfiniPath_QLE7340
        phys_port_cnt:                  1
                port:   1
                        state:                  PORT_ACTIVE (4)
                        max_mtu:                4096 (5)
                        active_mtu:             2048 (4)
                        sm_lid:                 812
                        port_lid:               718
                        port_lmc:               0x00
                        link_layer:             InfiniBand






-Nick





On Fri, Oct 7, 2016 at 5:43 PM, vithanousek <[email protected]> wrote:
"
Hi,



I was solving a similar problem about two years ago, and I think our problem 
was with the InfiniBand drivers or the InfiniBand switch firmware (some driver 
versions created unstable connections at "full" speed, others were very slow, 
but all were tested with the same switch firmware version).




But I didn't test it after the IB switch firmware update.




What is the topology of your IB connection, and what firmware and driver 
versions are you using?
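
Assuming the usual infiniband-diags tools are installed, something like this 
should show both:

ibnetdiscover      # dumps the fabric topology: switches, HCAs and links
ibstat             # per-port state, link rate and HCA firmware version
ofed_info -s       # OFED stack version, if the OFED meta-package is present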




V.

---------- Original message ----------
From: Nicholas Mills <[email protected]>
To: [email protected]
Date: 7. 10. 2016 22:17:07
Subject: [Pvfs2-users] Slow performance with InfiniBand

"



All,



I'm having an issue with OrangeFS (trunk and v.2.9.5) performance on a 
cluster with QLogic QLE7340 HCAs. I set up a file system on 8 nodes after 
configuring --with-openib and --without-bmi-tcp. I mounted this file system 
on a 9th node (the client) and wrote a 1 GB file, but the speed was only 
13.028 MB/s. If I re-install OrangeFS with TCP I can get 1175.388 MB/s when 
transferring an 863 GB file on the same nodes.
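
For reference, the servers were configured roughly like this (the prefix and 
OFED path below are placeholders, not the exact values used):

./configure --prefix=/opt/orangefs --with-openib=/usr --without-bmi-tcp
make && make install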




I also have access to another cluster with Mellanox MX354A HCAs. On this 
cluster I could get 492.629 MB/s when writing a 1 GB file with OrangeFS/
InfiniBand. I'm wondering if there is an issue with BMI on QLogic HCAs.




Thanks,




Nick Mills

Graduate Research Assistant

Clemson University




"

"



"
_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
