***** FINAL RESULTS *****
First of all, I'd like to thank Mike Christie for all his help. Mike, I'll
be tapping your brain again for some read performance help.

This is for the benefit of anyone using the Dell EqualLogic PS5000XV or
PS5000E with SLES 10 SP2 / Red Hat 5.3 / CentOS 5.3 / Oracle Linux plus
multipath (MPIO) and open-iscsi (iSCSI).  Sorry about the weird
formatting; I'm making sure this gets found by people who were in
my predicament.

As described earlier in this thread, my issue was amazingly slow
sequential-write performance with my multipath configuration, around
35 MB/s, when measured with IOMETER.  First things first... THROW OUT
IOMETER FOR LINUX, it has problems with queue depth.  With that said,
with the default iscsi and multipath setup we saw between 60-80 MB/s
with multipath.  In essence it was slower than a single interface at
certain block sizes.  When I was done, my write performance was pushing
180-190 MB/s with blocks as small as 4k (sequential write test
using "dt").
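
For the curious, the dt runs were along these lines (just a sketch; the
device name and size are placeholders, and option spellings can differ
between dt builds, so check yours):

dt of=/dev/mapper/mpath0 bs=4k limit=2g    # sequential write, 4k blocks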

Here are my tweaks:

After making any multipath changes, run "multipath -F" and then
"multipath"; otherwise your changes won't take effect.
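
In other words ("multipath -F" flushes the unused maps, a bare
"multipath" rebuilds them from /etc/multipath.conf, and "multipath -ll"
is just there to verify):

multipath -F     # flush unused multipath device maps
multipath        # re-read /etc/multipath.conf and recreate the maps
multipath -ll    # verify the new path groups and selector settings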

/etc/multipath.conf

device {
        vendor "EQLOGIC"
        product "100E-00"
        path_grouping_policy multibus
        getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
        features "1 queue_if_no_path"   < --- important
        path_checker readsector0
        failback immediate
        path_selector "round-robin 0"
        rr_min_io 512 <---- important; only works with a large queue
depth and cmds_max in iscsid.conf
        rr_weight priorities
}
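
A quick way to confirm the selector settings took (this is my
understanding of the dm table format; the map name below is a
placeholder) is to dump the device-mapper table and look for the
per-path repeat count, which should match rr_min_io:

multipath -ll           # both paths should sit in one round-robin group
dmsetup table mpath0    # each path entry should end in 512, e.g. "8:16 512"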


/etc/iscsi/iscsid.conf   (restarting iscsi seems to apply the configs
fine)

# To control how many commands the session will queue set
# node.session.cmds_max to an integer between 2 and 2048 that is also
# a power of 2. The default is 128.
node.session.cmds_max = 1024

# To control the device's queue depth set node.session.queue_depth
# to a value between 1 and 128. The default is 32.
node.session.queue_depth = 128
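
One gotcha, at least on my reading of how open-iscsi works: node
records discovered before the change keep the old values, so if a
restart doesn't pick them up, update the records directly and re-login.
The target name here is a placeholder:

iscsiadm -m node -T <target-iqn> -o update -n node.session.cmds_max -v 1024
iscsiadm -m node -T <target-iqn> -o update -n node.session.queue_depth -v 128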

Other changes I've made are basic gigabit network tuning for large
transfers and turning off some congestion functions, plus some I/O
scheduler changes (noop is amazing for sub-4k blocks but awful for 4 MB
chunks or larger). I've also turned off TSO on the network cards;
apparently it's not supported with jumbo frames and actually slows down
performance.
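
To give a flavor of the tuning (illustrative values, not gospel; the
sysctl picks are my interpretation of "congestion functions" and should
be tuned per setup):

# noop elevator on the iscsi disks (not persistent; script it at boot)
echo noop > /sys/block/sdb/queue/scheduler
echo noop > /sys/block/sdc/queue/scheduler

# bigger socket buffers for gigabit with jumbo frames
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"

# the sort of TCP options I disabled
sysctl -w net.ipv4.tcp_timestamps=0
sysctl -w net.ipv4.tcp_sack=0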


dc1stgdb14:~ # ethtool -k eth7
Offload parameters for eth7:
rx-checksumming: off
tx-checksumming: off
scatter-gather: off
tcp segmentation offload: off
dc1stgdb14:~ # ethtool -k eth10
Offload parameters for eth10:
rx-checksumming: off
tx-checksumming: off
scatter-gather: off
tcp segmentation offload: off
dc1stgdb14:~ #
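
For reference, the state above corresponds to ethtool -K commands along
these lines (one per interface; they don't survive a reboot, so they
belong in an init script):

ethtool -K eth7 rx off tx off sg off tso off
ethtool -K eth10 rx off tx off sg off tso off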


On Apr 13, 4:36 pm, jnantel <nan...@hotmail.com> wrote:
> I am having a major issue with multipath + iscsi write performance
> with anything random or any sequential write with data sizes smaller
> than 4meg  (128k 64k 32k 16k 8k).  With 32k block size, I am able to
> get a maximum throughput of 33meg/s write.  My performance gets cut by
> a third with each smaller size, with 4k blocks giving me a whopping
> 4meg/s combined throughput.  Now bumping the data size up to 32meg
> gets me 160meg/sec throughput, and 64 gives me 190meg/s and finally to
> top it out 128meg gives me 210megabytes/sec.  My question is what
> factors would limit my performance in the 4-128k range?
>
> Some basics about my performance lab:
>
> 2 identical 1-gigabit paths (2 dual-port Intel Pro 1000 MTs) in
> separate PCIe slots.
>
> Hardware:
> 2 x Dell R900 6 quad core, 128gig ram, 2 x Dual port Intel Pro MT
> Cisco 3750s with 32-gigabit StackWise interconnect
> 2 x Dell EqualLogic PS5000XV arrays
> 1 x Dell EqualLogic PS5000E array
>
> Operating systems
> SLES 10 SP2, RHEL 5 Update 3, Oracle Linux 5 Update 3
>
> /etc/multipath.conf
>
> defaults {
>         udev_dir                /dev
>         polling_interval        10
>         selector                "round-robin 0"
>         path_grouping_policy    multibus
>         getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
>         prio_callout            /bin/true
>         path_checker            readsector0
>         features "1 queue_if_no_path"
>         rr_min_io               10
>         max_fds                 8192
> #       rr_weight               priorities
>         failback                immediate
> #       no_path_retry           fail
> #       user_friendly_names     yes
> }
>
> /etc/iscsi/iscsid.conf   (non-default values)
>
> node.session.timeo.replacement_timeout = 15
> node.conn[0].timeo.noop_out_interval = 5
> node.conn[0].timeo.noop_out_timeout = 30
> node.session.cmds_max = 128
> node.session.queue_depth = 32
> node.session.iscsi.FirstBurstLength = 262144
> node.session.iscsi.MaxBurstLength = 16776192
> node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
> node.conn[0].iscsi.MaxXmitDataSegmentLength = 262144
>
> discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 65536
>
> Scheduler:
>
> cat /sys/block/sdb/queue/scheduler
> [noop] anticipatory deadline cfq
> cat /sys/block/sdc/queue/scheduler
> [noop] anticipatory deadline cfq
>
> Command outputs:
>
> iscsiadm -m session -P 3
> iSCSI Transport Class version 2.0-724
> iscsiadm version 2.0-868
> Target: iqn.2001-05.com.equallogic:0-8a0906-2c82dfd03-64c000cfe2249e37-
> dc1stgdb15-sas-raid6
>         Current Portal: 10.1.253.13:3260,1
>         Persistent Portal: 10.1.253.10:3260,1
>                 **********
>                 Interface:
>                 **********
>                 Iface Name: ieth1
>                 Iface Transport: tcp
>                 Iface Initiatorname: iqn.2005-04.com.linux:dc1stgdb15
>                 Iface IPaddress: 10.1.253.148
>                 Iface HWaddress: default
>                 Iface Netdev: eth1
>                 SID: 3
>                 iSCSI Connection State: LOGGED IN
>                 iSCSI Session State: Unknown
>                 Internal iscsid Session State: NO CHANGE
>                 ************************
>                 Negotiated iSCSI params:
>                 ************************
>                 HeaderDigest: None
>                 DataDigest: None
>                 MaxRecvDataSegmentLength: 262144
>                 MaxXmitDataSegmentLength: 65536
>                 FirstBurstLength: 65536
>                 MaxBurstLength: 262144
>                 ImmediateData: Yes
>                 InitialR2T: No
>                 MaxOutstandingR2T: 1
>                 ************************
>                 Attached SCSI devices:
>                 ************************
>                 Host Number: 5  State: running
>                 scsi5 Channel 00 Id 0 Lun: 0
>                         Attached scsi disk sdb          State: running
>         Current Portal: 10.1.253.12:3260,1
>         Persistent Portal: 10.1.253.10:3260,1
>                 **********
>                 Interface:
>                 **********
>                 Iface Name: ieth2
>                 Iface Transport: tcp
>                 Iface Initiatorname: iqn.2005-04.com.linux:dc1stgdb15
>                 Iface IPaddress: 10.1.253.48
>                 Iface HWaddress: default
>                 Iface Netdev: eth2
>                 SID: 4
>                 iSCSI Connection State: LOGGED IN
>                 iSCSI Session State: Unknown
>                 Internal iscsid Session State: NO CHANGE
>                 ************************
>                 Negotiated iSCSI params:
>                 ************************
>                 HeaderDigest: None
>                 DataDigest: None
>                 MaxRecvDataSegmentLength: 262144
>                 MaxXmitDataSegmentLength: 65536
>                 FirstBurstLength: 65536
>                 MaxBurstLength: 262144
>                 ImmediateData: Yes
>                 InitialR2T: No
>                 MaxOutstandingR2T: 1
>                 ************************
>                 Attached SCSI devices:
>                 ************************
>                 Host Number: 6  State: running
>                 scsi6 Channel 00 Id 0 Lun: 0
>                         Attached scsi disk sdc          State: running
>
> Jonathan Nantel