On Tuesday, January 21, 2020 at 1:15:29 AM UTC-8, Bobby wrote:
>
> Hi all,
>
> I have a question, please. Are these todos finally part of the 
> Open-iSCSI initiator? 
>
> Thanks
>

No, not really. It's a "hard problem", and offload cards have somewhat 
worked around it by doing all of the work in the card. 

>
> On Wednesday, January 7, 2015 at 5:57:14 PM UTC+1, hare wrote:
>>
>> On 01/07/2015 05:25 PM, Sagi Grimberg wrote: 
>> > Hi everyone, 
>> > 
>> > Now that scsi-mq is fully included, we need an iSCSI initiator that 
>> > would use it to achieve scalable performance. The need is even greater 
>> > for iSCSI offload devices and transports that support multiple HW 
>> > queues. As iSER maintainer I'd like to discuss the way we would choose 
>> > to implement that in iSCSI. 
>> > 
>> > My measurements show that the iSER initiator can scale up to ~2.1M 
>> > IOPS with multiple sessions, but only ~630K IOPS with a single 
>> > session, where the most significant bottleneck is the (single) core 
>> > processing completions. 
>> > 
>> > In the existing single-connection-per-session model, given that 
>> > command ordering must be preserved session-wide, we end up with 
>> > serial command execution over a single connection, which is 
>> > basically a single-queue model. The best fit seems to be plugging 
>> > iSCSI MCS in as a multi-queued SCSI LLDD. In this model, a hardware 
>> > context will have a 1x1 mapping with an iSCSI connection (TCP 
>> > socket or a HW queue). 
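>> > 
>> > Roughly, this is the model I have in mind (an untested sketch; the 
>> > struct and field names are invented, not existing open-iscsi code): 
>> > 
>> > struct iscsi_mq_session { 
>> >         unsigned int      nr_conns;  /* == shost->nr_hw_queues */ 
>> >         struct iscsi_conn *conns[];  /* conns[i] serves hctx i, 1x1 */ 
>> > }; 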
>> > 
>> > iSCSI MCS and its role in the presence of the dm-multipath layer 
>> > was discussed several times in the past decade(s). The basic need 
>> > for MCS is implementing a multi-queue data path, so perhaps we may 
>> > want to avoid doing any type of link aggregation or load balancing, 
>> > so as not to overlap with dm-multipath. For example, we can 
>> > implement ERL=0 (which is basically the scsi-mq ERL) and/or 
>> > restrict a session to a single portal. 
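>> > 
>> > (open-iscsi already exposes ERL as a per-node setting; if I 
>> > remember the iscsid.conf key correctly it is: 
>> > 
>> > node.session.iscsi.ERL = 0 
>> > 
>> > so a first prototype could simply require that setting.) 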
>> > 
>> > As I see it, the todos are (rough sketch after the list): 
>> > 1. Getting MCS to work (kernel + user-space) with ERL=0 and a 
>> >    round-robin connection selection (per scsi command execution). 
>> > 2. Plug into scsi-mq - exposing num_connections as nr_hw_queues and 
>> >    using blk-mq based queue (conn) selection. 
>> > 3. Rework iSCSI core locking scheme to avoid session-wide locking 
>> >    as much as possible. 
>> > 4. Use blk-mq pre-allocation and tagging facilities. 
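>> > 
>> > A very rough, untested sketch of 1, 2 and 4 together (all names 
>> > below are invented except the blk-mq/SCSI helpers): set 
>> > shost->nr_hw_queues = nr_conns at session creation, and then in 
>> > queuecommand derive the connection from the command's blk-mq tag: 
>> > 
>> > static int iscsi_mq_queuecommand(struct Scsi_Host *shost, 
>> >                                  struct scsi_cmnd *sc) 
>> > { 
>> >         struct iscsi_mq_session *session = shost_priv(shost); 
>> >         u32 tag = blk_mq_unique_tag(scsi_cmd_to_rq(sc)); 
>> >         u16 hwq = blk_mq_unique_tag_to_hwq(tag); 
>> >         /* blk-mq already spread commands across hw contexts, so 
>> >          * the hctx index is our round-robin connection choice */ 
>> >         struct iscsi_conn *conn = session->conns[hwq]; 
>> > 
>> >         /* todo 4: reuse the pre-allocated blk-mq tag as the ITT 
>> >          * instead of a session-wide task allocation + lookup; 
>> >          * iscsi_conn_queue_cmd() is a made-up name */ 
>> >         return iscsi_conn_queue_cmd(conn, sc, tag); 
>> > } 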
>> > 
>> > I've recently started looking into this. I would like the community 
>> > to agree on (or debate) this scheme, and also to talk about the 
>> > implementation with anyone else who is interested. 
>> > 
>> Yes, that's a really good topic. 
>>
>> I've pondered implementing MC/S for iscsi/TCP, but then figured my 
>> network implementation knowledge doesn't stretch that far. 
>> So yeah, a discussion here would be good. 
>>
>> Mike? Any comments? 
>>
>> Cheers, 
>>
>> Hannes 
>> -- 
>> Dr. Hannes Reinecke                      zSeries & Storage 
>> [email protected]                              +49 911 74053 688 
>> SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg 
>> GF: J. Hawn, J. Guild, F. Imendörffer, HRB 16746 (AG Nürnberg) 
>>
>
