On Thu, Jan 20, 2011 at 02:22:24PM -0600, Mike Christie wrote:
> On 01/19/2011 01:13 PM, Joe Hoot wrote:
>> # To control how many commands the session will queue set
>> # node.session.cmds_max to an integer between 2 and 2048 that is also
>> # a power of 2. The default is 128.
>> node.session.cmds_max = 128
>> # To control the device's queue depth set node.session.queue_depth
>> # to a value between 1 and 1024. The default is 32.
>> node.session.queue_depth = 32
>>
>
> Hey, I am not sure if we are hitting a bug in the kernel, but another
> user reported that if they increase cmds_max and queue_depth to
> something like cmds_max=1024 and queue_depth=128, then they can use
> the default noop settings and not see any timeouts.
>
> If that helps you too, then we might have a bug in the kernel fifo
> code or in our use of it.
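>
> As a sketch, if you want to try those values yourself, the iscsid.conf
> entries would be (these numbers are just what that user reported, not
> a tested recommendation):
>
>   node.session.cmds_max = 1024
>   node.session.queue_depth = 128
>
> or, to update an existing node record (the target name here is a
> placeholder), something like:
>
>   iscsiadm -m node -T <target> -o update \
>       -n node.session.cmds_max -v 1024
>   iscsiadm -m node -T <target> -o update \
>       -n node.session.queue_depth -v 128
>
> followed by logging the session out and back in so the new values
> take effect.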
>
>
>>
>> 2) *cmds_max and queue_depth* - I haven't adjusted those settings yet.
>> What is the advantage and disadvantage of raising them?  I am using
>> dm-multipath with rr_min_io currently set to something like 200, so
>> every 200 I/Os the traffic switches to the other path at the
>> dm-multipath layer.  I am also using a 9000-byte MTU, and I'm not sure
>> how that plays into this -- specific to these queue depths and
>> cmds_max, that is.  Also, how do cmds_max and queue_depth relate?
>> From what I'm reading, it seems like queue_depth is each iface's
>> buffer and cmds_max is specific to each session?
>>
>
> cmds_max is the max number of commands the initiator will send down in  
> each session. queue_depth is the max number of commands it will send  
> down to a device/LU.
>
> So if you had the settings above and 5 devices/LUs on a target, the
> initiator could end up sending 32 cmds to 4 of the devices (because
> 32 * 4 = 128, which hits the cmds_max setting), and the fifth device
> would have to wait for some of those commands to finish before the
> initiator would send it any.
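>
> To make that arithmetic concrete with the defaults above (cmds_max=128,
> queue_depth=32, 5 LUs on one target):
>
>   LU 0-3: 32 outstanding commands each (4 * 32 = 128, session limit hit)
>   LU 4:    0 outstanding; its commands queue until one of the other
>            LUs completes something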
>
> The target also has its own limit that it tells us about, and we will
> not send more commands than it says it can take. So if cmds_max is
> larger than the target's limit, we will obey the target's limit.
>
> For EQL boxes, you always get one device/LU per target, and you end
> up with lots of targets. So your instinct might be to just set them
> to the same value. However, you would still want to set cmds_max a
> little higher than queue_depth, because cmds_max covers not only
> scsi/block commands but also internal iSCSI commands and scsi eh
> tasks like nops or task management commands like aborts.
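>
> So a sketch of settings for that kind of setup (example values only;
> remember cmds_max has to be a power of 2 between 2 and 2048):
>
>   node.session.queue_depth = 128
>   node.session.cmds_max = 256
>
> which leaves headroom above queue_depth for the nops and task
> management commands.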
>
> I am not sure what to set rr_min_io to. It depends on whether you are
> using bio-based or request-based multipath. For bio-based, if you are
> sending lots of small IO then you would want to set rr_min_io higher,
> to make sure lots of small bios are sent down the same path so that
> they get merged into one nice big command/request. If you are sending
> lots of large IOs then you could set rr_min_io closer to queue_depth.
> For request-based multipath you could set rr_min_io closer to
> queue_depth, because the requests should already be merged, so each
> request is going to go out as one command.
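>
> As an illustration only, rr_min_io lives in the multipath.conf device
> (or defaults) section; the vendor/product strings below are the usual
> EQL ones, but check them against your array, and the value is a
> placeholder to tune:
>
>   device {
>           vendor               "EQLOGIC"
>           product              "100E-00"
>           path_grouping_policy multibus
>           rr_min_io            32
>   }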
>

For VMware ESX/ESXi, EQL recommends an rr_min_io value of 3,
to utilize all the paths simultaneously.

-- Pasi
