Ok.  A couple of new questions regarding this:

Here are my current settings:

#node.session.timeo.replacement_timeout = 120
node.session.timeo.replacement_timeout = 15
node.conn[0].timeo.login_timeout = 15
node.conn[0].timeo.logout_timeout = 15
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 15
node.session.err_timeo.lu_reset_timeout = 20
# To control how many commands the session will queue set
# node.session.cmds_max to an integer between 2 and 2048 that is also
# a power of 2. The default is 128.
node.session.cmds_max = 128
# To control the device's queue depth set node.session.queue_depth
# to a value between 1 and 1024. The default is 32.
node.session.queue_depth = 32
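
In case it matters, am I right that the way to push these into the existing
node records (rather than just editing iscsid.conf and re-discovering) is
something like the following?  The target IQN and portal below are just
placeholders for mine:

  iscsiadm -m node -T iqn.2001-04.com.example:storage -p 192.168.1.10:3260 \
      -o update -n node.conn[0].timeo.noop_out_timeout -v 15
  iscsiadm -m node -T iqn.2001-04.com.example:storage -p 192.168.1.10:3260 \
      -o update -n node.session.timeo.replacement_timeout -v 15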

QUESTIONS:
==========

1) *noop_out_timeout* - I found that if I raise my noop_out_timeout to 15 
seconds, I'm not seeing nearly as many timeouts (actually, I haven't seen any 
in the last hour I've been testing).  I previously had this set to 10.  
Coming from the network world, 15 seconds just seems like a really, really 
long time to wait for a noop ping response (I'm relating this to ICMP pings, 
but still...).  Is it common for users to set noop_out_timeout this high?
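
As a sanity check, is dumping the node record and the running session the 
right way to confirm which values are actually in effect?  I.e. something 
like this (output layout probably varies by open-iscsi version; the IQN and 
portal are placeholders again):

  # stored node record -- should show the node.conn[0].timeo.* values above
  iscsiadm -m node -T iqn.2001-04.com.example:storage -p 192.168.1.10:3260
  # timeouts on the live session
  iscsiadm -m session -P 2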

2) *cmds_max and queue_depth* - I haven't adjusted those settings yet.  What 
are the advantages and disadvantages of raising them?  I am using dm-multipath 
with rr_min_io currently set to something like 200, so every 200 I/Os 
dm-multipath switches to the other path.  I am also using a 9000-byte MTU, and 
I'm not sure how that plays into this -- specific to queue_depth and cmds_max, 
that is.  Also, how do cmds_max and queue_depth relate?  From what I'm 
reading, it seems like queue_depth is each iface's buffer and cmds_max is 
specific to each session?
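
In case it's useful, this is roughly what I'd look at to see the per-device 
numbers on my end (sdX is just a placeholder for one of my iSCSI disks):

  # per-LUN queue depth the SCSI layer is using for this disk
  cat /sys/block/sdX/device/queue_depth
  # block-layer request queue size sitting above it
  cat /sys/block/sdX/queue/nr_requests
  # the rr_min_io value dm-multipath is configured with
  grep rr_min_io /etc/multipath.conf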

3)  *iSCSI/SCSI/dm-multipath failures* - With regard to timeouts, thresholds, 
queue_depth, and such: if an iSCSI session hits its timeout values and there 
is a queue_depth of 1024, will the entire queue fail back to dm-multipath?  
Will I need to verify that dm-multipath's queue is >= 1024?  If dm-multipath 
doesn't have that big a queue, which I/Os will go back to dm-multipath?  I'm 
guessing the first in, first out get bounced back to dm-multipath, while the 
remaining queued I/Os stay stuck in that session's queue until the session 
comes back online?  Once it comes back online, I'm assuming those get 
flushed.  But if that session doesn't come back and retry_max gets hit, will 
those queued items then get bounced back to dm-multipath?
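
Related to that: is the features line of the multipath map the right place to 
look to see whether dm-multipath will queue or fail those I/Os?  I.e. 
something like the following, where a features field of "1 queue_if_no_path" 
would mean the map keeps queueing I/O instead of failing it upward when all 
paths are down:

  multipath -ll
  dmsetup table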


