jnantel wrote:
> Well I've got some disconcerting news on this issue.  No changes at
> any level alter the 34meg/sec throughput I get. I flushed multipath and
> blew away /var/lib/iscsi just in case, and I verified in /var/lib/iscsi
> that the options got set. RHEL53 took my renice with no problem.
> 


What were you using for the io test tool, and how did you run it?
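
For example, if you were using dd, a direct IO run against the dm device
would look something like this (the /dev/mapper/mpath0 name below is just a
placeholder for your multipath device, and note this overwrites the disk):

dd if=/dev/zero of=/dev/mapper/mpath0 bs=64k count=16384 oflag=direct

Whether the tool does buffered or direct IO, and how many IOs it keeps in
flight, makes a big difference to the numbers, so the exact command matters.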

> Some observations:
> Single interface iscsi gives me the exact same 34meg/sec.
> Going with 2 interfaces gives me 17meg/sec per interface.
> Going with 4 interfaces gives me 8meg/sec per interface...etc..etc..etc.
> I can't seem to set node.conn[0].iscsi.MaxXmitDataSegmentLength =
> 262144 in a way that actually gets used.


The initiator will always take what the target wants to use for
MaxXmitDataSegmentLength, so you have to increase it on the target side.


> node.session.iscsi.MaxConnections = 1    I can't find any docs on this,
> and I doubt it is relevant.
> 
> iscsiadm -m session -P 3  still gives me the default 65536 for xmit
> segment.
> 
> The Equallogic has all its interfaces on the same SAN network, which is
> contrary to most multipath implementations I've done, but it is the
> vendor-recommended deployment.
> 
> Whatever is choking performance, it's consistently choking it down to
> the same level.
> 
> 
> 
> 
> On Apr 13, 5:33 pm, Mike Christie <micha...@cs.wisc.edu> wrote:
>> jnantel wrote:
>>
>>> I am having a major issue with multipath + iscsi write performance
>>> with anything random or any sequential write with data sizes smaller
>>> than 4meg  (128k 64k 32k 16k 8k).  With a 32k block size, I am able to
>>> get a maximum write throughput of 33meg/s.  My performance gets cut by
>>> a third with each smaller size, with 4k blocks giving me a whopping
>>> 4meg/s combined throughput.  Now bumping the data size up to 32meg
>>> gets me 160meg/sec throughput, 64meg gives me 190meg/sec, and to top
>>> it out, 128meg gives me 210meg/sec.  My question is what
>>> factors would limit my performance in the 4-128k range?
>> I think Linux is just not so good with smaller IO sizes like 4K. I do
>> not see good performance with either Fibre Channel or iscsi at that size.
>>
>> 64K+ should be fine, but you want to get lots of 64K+ IOs in flight. If
>> you run iostat or blktrace you should see more than 1 IO in flight.
>> While the test is running, if you
>> cat /sys/class/scsi_host/hostX/host_busy
>> you should also see lots of IO running.
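
While the test runs you can also watch those counters. For example (host
number 5 is just taken from the iscsiadm -m session -P 3 output below, and
sdb is one of the iscsi disks, adjust to your setup):

watch -n 1 cat /sys/class/scsi_host/host5/host_busy
iostat -x sdb 1

If host_busy sits at 1 or 2 and iostat's avgqu-sz stays near 1, the test is
only keeping one IO in flight and throughput will be low no matter what the
transport settings are.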
>>
>> What limits the number of IOs? On the iscsi initiator side, it could be
>> params like node.session.cmds_max or node.session.queue_depth. For a
>> decent target like the ones you have, I would increase
>> node.session.cmds_max to 1024 and node.session.queue_depth to 128.
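
As a rough sketch of how to set those (the target iqn and portal below are
placeholders, use the ones from your session output), either put the new
defaults in iscsid.conf before discovery:

node.session.cmds_max = 1024
node.session.queue_depth = 128

or update the existing node records and then log out/in again so they take
effect:

iscsiadm -m node -T <target iqn> -p <portal> -o update -n node.session.cmds_max -v 1024
iscsiadm -m node -T <target iqn> -p <portal> -o update -n node.session.queue_depth -v 128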
>>
>> What IO tool are you using? Are you doing direct IO or are you doing
>> file system IO? If you just use something like dd with bs=64K then you
>> are not going to get lots of IO running. I think you will get 1 64K IO
>> in flight, so throughput is not going to be high. If you use something
>> like disktest
>> disktest -PT -T30 -h1 -K128 -B64k -ID /dev/sdb
>>
>> you should see a lot of IOs (depends on merging).
>>
>> If you were using dd with bs=128m then that IO is going to get broken
>> down into lots of smaller IOs (probably around 256K), and so the pipe is
>> nice and full.
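
You can get an idea of where that split happens by looking at the request
size limits the block layer is using (values vary by kernel and device, sdb
here is just one of the iscsi disks):

cat /sys/block/sdb/queue/max_sectors_kb
cat /sys/block/sdb/queue/max_hw_sectors_kb

max_sectors_kb is in kilobytes, so a value of 256 there means requests get
capped at around 256K before they are sent down to the iscsi layer.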
>>
>> Another thing I noticed in RHEL is that if you increase the priority of
>> the iscsi threads it will sometimes increase write performance. So for
>> RHEL or Oracle do
>>
>> ps -u root | grep scsi_wq
>>
>> Then match the scsi_wq_%HOST_ID against the Host Number from iscsiadm -m
>> session -P 3, and renice that thread to -20.
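
For example (the PID and host number below are made up, take the real ones
from the ps output and the Host Number in iscsiadm -m session -P 3):

ps -u root | grep scsi_wq
# suppose this shows: 3412 ?  00:00:01 scsi_wq_5
renice -20 -p 3412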
>>
>> Also check the logs and make sure you do not see any conn error messages.
>>
>> And then what do you get when running the IO test to the individual
>> iscsi disks instead of the dm one? Is there any difference? You might
>> want to change the rr_min_io. If you are sending smaller IOs then a
>> rr_min_io of 10 is probably too small: each path is not going to get
>> lots of nice large merged IOs like you would want.
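
As a rough sketch, you could try bumping it in the defaults section of
multipath.conf and reloading the maps (the value is just a starting point to
experiment with, not a vendor recommendation):

        rr_min_io               100

multipath -r

and then rerun the same test against the dm device to compare.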
>>
>>
>>
>>> Some basics about my performance lab:
>>> 2 identical 1 gigabit paths (2  dual port intel pro 1000 MTs) in
>>> separate pcie slots.
>>> Hardware:
>>> 2 x Dell R900 6 quad core, 128gig ram, 2 x Dual port Intel Pro MT
>>> Cisco 3750s with 32gigabit stackwise interconnect
>>> 2 x Dell Equallogic PS5000XV arrays
>>> 1 x Dell Equallogic PS5000E arrays
>>> Operating system
>>> SLES 10 SP2 , RHEL5 Update 3, Oracle Linux 5 update 3
>>> /etc/multipath.conf
>>> defaults {
>>>         udev_dir                /dev
>>>         polling_interval        10
>>>         selector                "round-robin 0"
>>>         path_grouping_policy    multibus
>>>         getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
>>>         prio_callout            /bin/true
>>>         path_checker            readsector0
>>>         features "1 queue_if_no_path"
>>>         rr_min_io               10
>>>         max_fds                 8192
>>> #       rr_weight               priorities
>>>         failback                immediate
>>> #       no_path_retry           fail
>>> #       user_friendly_names     yes
>>> }
>>> /etc/iscsi/iscsid.conf   (non-default values)
>>> node.session.timeo.replacement_timeout = 15
>>> node.conn[0].timeo.noop_out_interval = 5
>>> node.conn[0].timeo.noop_out_timeout = 30
>>> node.session.cmds_max = 128
>>> node.session.queue_depth = 32
>>> node.session.iscsi.FirstBurstLength = 262144
>>> node.session.iscsi.MaxBurstLength = 16776192
>>> node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
>>> node.conn[0].iscsi.MaxXmitDataSegmentLength = 262144
>>> discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 65536
>>> Scheduler:
>>> cat /sys/block/sdb/queue/scheduler
>>> [noop] anticipatory deadline cfq
>>> cat /sys/block/sdc/queue/scheduler
>>> [noop] anticipatory deadline cfq
>>> Command outputs:
>>> iscsiadm -m session -P 3
>>> iSCSI Transport Class version 2.0-724
>>> iscsiadm version 2.0-868
>>> Target: iqn.2001-05.com.equallogic:0-8a0906-2c82dfd03-64c000cfe2249e37-
>>> dc1stgdb15-sas-raid6
>>>         Current Portal: 10.1.253.13:3260,1
>>>         Persistent Portal: 10.1.253.10:3260,1
>>>                 **********
>>>                 Interface:
>>>                 **********
>>>                 Iface Name: ieth1
>>>                 Iface Transport: tcp
>>>                 Iface Initiatorname: iqn.2005-04.com.linux:dc1stgdb15
>>>                 Iface IPaddress: 10.1.253.148
>>>                 Iface HWaddress: default
>>>                 Iface Netdev: eth1
>>>                 SID: 3
>>>                 iSCSI Connection State: LOGGED IN
>>>                 iSCSI Session State: Unknown
>>>                 Internal iscsid Session State: NO CHANGE
>>>                 ************************
>>>                 Negotiated iSCSI params:
>>>                 ************************
>>>                 HeaderDigest: None
>>>                 DataDigest: None
>>>                 MaxRecvDataSegmentLength: 262144
>>>                 MaxXmitDataSegmentLength: 65536
>>>                 FirstBurstLength: 65536
>>>                 MaxBurstLength: 262144
>>>                 ImmediateData: Yes
>>>                 InitialR2T: No
>>>                 MaxOutstandingR2T: 1
>>>                 ************************
>>>                 Attached SCSI devices:
>>>                 ************************
>>>                 Host Number: 5  State: running
>>>                 scsi5 Channel 00 Id 0 Lun: 0
>>>                         Attached scsi disk sdb          State: running
>>>         Current Portal: 10.1.253.12:3260,1
>>>         Persistent Portal: 10.1.253.10:3260,1
>>>                 **********
>>>                 Interface:
>>>                 **********
>>>                 Iface Name: ieth2
>>>                 Iface Transport: tcp
>>>                 Iface Initiatorname: iqn.2005-04.com.linux:dc1stgdb15
>>>                 Iface IPaddress: 10.1.253.48
>>>                 Iface HWaddress: default
>>>                 Iface Netdev: eth2
>>>                 SID: 4
>>>                 iSCSI Connection State: LOGGED IN
>>>                 iSCSI Session State: Unknown
>>>                 Internal iscsid Session State: NO CHANGE
>>>                 ************************
>>>                 Negotiated iSCSI params:
>>>                 ************************
>>>                 HeaderDigest: None
>>>                 DataDigest: None
>>>                 MaxRecvDataSegmentLength: 262144
>>>                 MaxXmitDataSegmentLength: 65536
>>>                 FirstBurstLength: 65536
>>>                 MaxBurstLength: 262144
>>>                 ImmediateData: Yes
>>>                 InitialR2T: No
>>>                 MaxOutstandingR2T: 1
>>>                 ************************
>>>                 Attached SCSI devices:
>>>                 ************************
>>>                 Host Number: 6  State: running
>>>                 scsi6 Channel 00 Id 0 Lun: 0
>>>                         Attached scsi disk sdc          State: running
>>> Jonathan Nantel
>>

