Mike Christie wrote:
> Mike Christie wrote:
>> We were discussing this on the fcoe list, and I mentioned it here on the 
>> scsi cmd timer patch. What I thought we might want to do is, instead of 
>> doing it based on cmdsn window rejects, which can sometimes be static, 
>> take into account how often commands are timing out while they are still 
>> pending. For example, if a cmd is still pending when the scsi command 
>> timer fires, then can_queue is probably too high at that time.
>>
>> But we probably also need to check the cmdsn window rejects, because I 
>> think that can also change on some targets. So this can be done in 
>> separate patches, and I am not against what you are trying to do.
> 
> Actually, I am not sure what it is buying us except extra complexity 
> just so we can avoid a debug printk. What is the reason for the 
> threshold code? In your previous patch you just set can_queue to 
> the cmdsn window size, which seemed fine, except that the window size 
> could be larger than the number of commands the user had set for 
> node.session.cmds_max.
> 
Ah, did I? Totally forgot about this.

No, the main reason here was that under heavy load we're hitting
the cmdsn-window-closed case quite often (the count went into the
millions, literally). So I thought it might be better to dynamically
modify some setting so that the scsi request_fn could take care of this
situation. That would avoid calling ->queuecommand() unnecessarily,
with all the session-lock contention and that sort of thing it entails.

You're right that we shouldn't increase above the overall number
of commands, lest we run into memory problems.
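
To illustrate what I have in mind (just a sketch; the function name is
made up, but the iscsi_session fields are the real ones):

#include <scsi/scsi_host.h>
#include <scsi/libiscsi.h>

static void iscsi_window_can_queue(struct iscsi_session *session)
{
	struct Scsi_Host *shost = session->host;
	int window = session->max_cmdsn - session->exp_cmdsn + 1;

	/*
	 * Never raise can_queue beyond what the user configured via
	 * node.session.cmds_max; those tasks are preallocated.
	 * (CmdSN serial-number wraparound is glossed over here.)
	 */
	if (window > session->cmds_max)
		window = session->cmds_max;
	if (window < 1)
		window = 1;

	shost->can_queue = window;
}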

> 
>> We could also then add some code to try and dynamically increase it like 
>> Vasu is doing for the device queue_depth. This can_queue rampup would be 
>> for the target port resources/limitations instead of Vasu's device one.
>>
> 
> Oh yeah, just to be clear: when I say can_queue I sometimes mean both 
> and sometimes just one or the other. The host port can be connected 
> to multiple target ports. Then the target port could be connected to 
> multiple initiator ports. Some of the ports can be on the same target or 
> initiator, but they can also be on completely different boxes, so you 
> end up with both the host and target can_queue limits fluctuating for 
> different reasons. In the end I think you would want to handle both.
> 
Yes, definitely. It's just so bloody difficult to get at the starget
structure. One has to do some traversing starting at the scsi_host,
and there are always some locks involved ... bah.
A clean pointer from iscsi_session to starget would've been nice.
So to avoid all this I settled on the host's can_queue, which works
just as well for open-iscsi. But in general, yes, of course, we
should be using the starget's can_queue.
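
For the record, the traversal looks roughly like this
(shost_for_each_device() and scsi_target() are the real interfaces;
the wrapper around them is illustration only):

#include <scsi/scsi_host.h>
#include <scsi/scsi_device.h>

static void iscsi_visit_stargets(struct Scsi_Host *shost)
{
	struct scsi_device *sdev;

	shost_for_each_device(sdev, shost) {
		struct scsi_target *starget = scsi_target(sdev);

		/*
		 * starget->can_queue could be adjusted here.  Note the
		 * iterator takes the host lock and a device reference
		 * internally, and each target is visited once per LUN.
		 */
	}
}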

> For the target can_queue you could do something where we never send more 
> than cmdsn window commands. Then you could time commands and track whether 
> they were taking longer or shorter with different numbers of commands, and 
> you could track if cmds were timing out, and depending on the results you 
> could increase/decrease the target can_queue, similar to what is done 
> with the fc drivers and the device queue_depth.
> 
The advantage of the simple tracking I've implemented is that it's
... well ... simple. I don't really feel comfortable with tracking
the latency of a command, as that depends on too many factors.
Tracking CmdSN window full rejects relies solely on target
properties, which is kinda neat, I thought.
And I surely do _NOT_ want to track SCSI command timeouts.
In most cases these indicate a transport error of some kind
and not just a queue overrun.
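
In essence the tracking is about as dumb as it sounds, roughly this
shape (the counter field and the threshold are invented names for
illustration, not lifted from the actual patch):

#include <scsi/scsi_host.h>
#include <scsi/libiscsi.h>

#define ISCSI_WINDOW_CLOSED_MAX	8	/* arbitrary threshold */

static void iscsi_note_window_closed(struct iscsi_session *session)
{
	struct Scsi_Host *shost = session->host;

	/* window_closed_cnt is a hypothetical per-session counter */
	if (++session->window_closed_cnt < ISCSI_WINDOW_CLOSED_MAX)
		return;
	session->window_closed_cnt = 0;

	/* the target keeps slamming the window shut: back off */
	if (shost->can_queue > 1)
		shost->can_queue--;
}

Ramping back up again (Vasu-style) would be the inverse of this,
clamped at cmds_max as above.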

> We would want to do something similar for the host can_queue too, and 
> then we will also want to hook into Vasu's device queue_depth 
> rampup/down code. Also, did you try the QUEUE_FULL patch I sent 
> with the MSA target?
> 
Yes, I did. But it didn't trigger :-(

Not sure what trigger we should latch host->can_queue onto.
By rights we should be modifying it based on host parameters,
e.g. when sendpage() starts returning EAGAIN or somesuch.
But currently the target-side adjustments outweigh any command
retries by about two orders of magnitude, so it's hardly
worth it.
Well, for me :-)
Might be different when using iSCSI HBAs.
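
If one were to latch onto that, it would presumably look something
like this (kernel_sendpage() is real; the wrapper and its use of
can_queue are hypothetical):

#include <linux/net.h>
#include <scsi/libiscsi.h>

static int iscsi_sendpage_watch(struct iscsi_session *session,
				struct socket *sock, struct page *page,
				int offset, size_t size, int flags)
{
	int rc = kernel_sendpage(sock, page, offset, size, flags);

	/* the socket pushing back would be a host-side limit */
	if (rc == -EAGAIN && session->host->can_queue > 1)
		session->host->can_queue--;
	return rc;
}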

Anyway, I'll update the patch.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                   zSeries & Storage
h...@suse.de                          +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Markus Rex, HRB 16746 (AG Nürnberg)
