Hello...
I searched through the mailing list archives and didn't see any mention
of this. I have a RHEL 5.1 x86_64 server connected to a NetApp via two
QLogic HBAs (1). I can get 300-350 MBytes/sec using a single controller.
However, when I configure multipath (2) and use two FC connections (one
from each FC card), I do not see twice the performance (3), but the same
performance as with a single card. I believe I have multibus enabled
(4).
Should I see twice the performance? Or are two FC cards only good for
failover?
thanks
(1)
qla2xxx 0000:0f:00.0:
QLogic Fibre Channel HBA Driver: 8.01.07-k7
QLogic QLE2462 - PCI-Express to 4Gb FC, Dual Channel
ISP2432: PCIe (2.5Gb/s x4) @ 0000:0f:00.0 hdma+, host#=1, fw=4.00.150 [IP]
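For what it's worth, I believe the negotiated link speed of each port shows
up under /sys/class/fc_host; host1 and host3 here are the SCSI host numbers
from the multipath output in (4), so adjust if yours differ:

$ cat /sys/class/fc_host/host1/speed
$ cat /sys/class/fc_host/host3/speed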
(2)
/etc/multipath.conf:
device {
        vendor                  "NETAPP"
        product                 "LUN"
        getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
        prio_callout            "/sbin/mpath_prio_netapp /dev/%n"
        no_path_retry           fail
        hardware_handler        "0"
        path_grouping_policy    multibus
        failback                immediate
        rr_weight               uniform
        rr_min_io               128
        path_checker            readsector0
        selector                "round-robin 0"
}
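One thing I have not tried yet is lowering rr_min_io, so the round-robin
selector switches paths more often; 128 I/Os per path before switching may
be coarse for a streaming iozone run. Something like this (the value 16 is
just a guess on my part, not tested):

device {
        # ... rest of the NETAPP stanza unchanged ...
        rr_min_io               16
}

followed by a map reload (multipath -r) before re-running iozone.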
(3)
$ iostat -m 2      # while running iozone
Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sdd            4068.84         0.00       184.48          0        367
sdf            3952.26         0.00       184.14          0        366
dm-1          94526.13         0.00       369.24          0        734
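(If I am reading that right, each path is carrying roughly 184 MB/s of
writes, and dm-1 totals about 184 + 184 = 369 MB/s, which is in the same
ballpark as the 300-350 MB/s I get through a single controller.)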
(4)
$ sudo multipath -ll
mpath2 (360a98000486e57735634466a70494859) dm-1 NETAPP,LUN
[size=150G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=8][active]
 \_ 1:0:0:1 sdd 8:48 [active][ready]
 \_ 3:0:0:1 sdf 8:80 [active][ready]
mpath1 (360a98000486e577356344669746f5841) dm-0 NETAPP,LUN
[size=100G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=8][active]
 \_ 1:0:0:0 sdc 8:32 [active][ready]
 \_ 3:0:0:0 sde 8:64 [active][ready]
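To double check that the "round-robin 0" selector and the rr_min_io value
are what the kernel is actually using, I believe the raw device-mapper
table can be dumped with dmsetup (mpath2 being the map name from above):

$ sudo dmsetup table mpath2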
--
Christopher McCrory
"The guy that keeps the servers running"
To the optimist, the glass is half full.
To the pessimist, the glass is half empty.
To the engineer, the glass is twice as big as it needs to be.