Version v0.100 of open-isns released

2020-01-23 Thread The Lee-Man
Hello:

I've released version v0.100 of open-isns. This version includes:

* fixes to IPv6 handling
* fixes to the existing test suite for openssl
* new python3-based unit tests, replacing the deprecated perl-based tests

Please help yourself at https://github.com/open-iscsi/open-isns.




Re: [LSF/MM TOPIC] iSCSI MQ adoption via MCS discussion

2020-01-23 Thread The Lee-Man


On Tuesday, January 21, 2020 at 1:15:29 AM UTC-8, Bobby wrote:
>
> Hi all,
>
> I have a question, please: are these todos finally part of the Open-iSCSI 
> initiator?
>
> Thanks
>

No, not really. It's a "hard problem", and offload cards have somewhat 
worked around it by doing all of the work in the card. 

>
> On Wednesday, January 7, 2015 at 5:57:14 PM UTC+1, hare wrote:
>>
>> On 01/07/2015 05:25 PM, Sagi Grimberg wrote: 
>> > Hi everyone, 
>> > 
>> > Now that scsi-mq is fully included, we need an iSCSI initiator that 
>> > would use it to achieve scalable performance. The need is even greater 
>> > for iSCSI offload devices and transports that support multiple HW 
>> > queues. As iSER maintainer I'd like to discuss the way we would choose 
>> > to implement that in iSCSI. 
>> > 
>> > My measurements show that the iSER initiator can scale up to ~2.1M IOPs 
>> > with multiple sessions but only ~630K IOPs with a single session, where 
>> > the most significant bottleneck is the (single) core processing 
>> > completions. 
>> > 
>> > In the existing single-connection-per-session model, given that command 
>> > ordering must be preserved session-wide, we end up with serial command 
>> > execution over a single connection, which is basically a single-queue 
>> > model. The best fit seems to be plugging in iSCSI MCS as a multi-queued 
>> > scsi LLDD. In this model, a hardware context will have a 1x1 mapping 
>> > with an iSCSI connection (TCP socket or a HW queue). 
>> > 
>> > iSCSI MCS and its role in the presence of the dm-multipath layer have 
>> > been discussed several times in the past decade(s). The basic need for 
>> > MCS is implementing a multi-queue data path, so perhaps we may want to 
>> > avoid doing any type of link aggregation or load balancing, so as not to 
>> > overlap dm-multipath. For example, we can implement ERL=0 (which is 
>> > basically the scsi-mq ERL) and/or restrict a session to a single portal. 
>> > 
>> > As I see it, the todo's are: 
>> > 1. Getting MCS to work (kernel + user-space) with ERL=0 and a 
>> >    round-robin connection selection (per scsi command execution). 
>> > 2. Plug into scsi-mq - exposing num_connections as nr_hw_queues and 
>> >    using blk-mq based queue (conn) selection. 
>> > 3. Rework iSCSI core locking scheme to avoid session-wide locking 
>> >    as much as possible. 
>> > 4. Use blk-mq pre-allocation and tagging facilities. 
>> > 
>> > I've recently started looking into this. I would like the community to 
>> > agree (or debate) on this scheme and also talk about implementation 
>> > with anyone who is also interested in this. 
>> > 
>> Yes, that's a really good topic. 
>>
>> I've pondered implementing MC/S for iscsi/TCP, but then I figured my 
>> network implementation knowledge doesn't stretch that far. 
>> So yeah, a discussion here would be good. 
>>
>> Mike? Any comments? 
>>
>> Cheers, 
>>
>> Hannes 
>> -- 
>> Dr. Hannes Reinecke  zSeries & Storage 
>> ha...@suse.de  +49 911 74053 688 
>> SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg 
>> GF: J. Hawn, J. Guild, F. Imendörffer, HRB 16746 (AG Nürnberg) 
>>
>
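
To make Sagi's todo #2 above concrete: the idea is that an MCS-aware
initiator would advertise one blk-mq hardware queue per iSCSI connection,
so the block layer picks the queue -- and therefore the connection -- for
each command. Here is a minimal conceptual sketch in C; it is not
open-iscsi code, and all of the mcs_* names are hypothetical:

    #include <linux/blk-mq.h>
    #include <scsi/scsi_cmnd.h>
    #include <scsi/scsi_host.h>

    struct mcs_conn;                        /* one TCP socket or HW queue */

    struct mcs_session {
            unsigned int num_connections;
            struct mcs_conn **conns;        /* conns[i] serves hw queue i */
    };

    /* hypothetical helper that builds and sends the iSCSI PDU */
    static int mcs_xmit_cmd(struct mcs_conn *conn, struct scsi_cmnd *sc);

    /* at session creation: expose num_connections as nr_hw_queues */
    static void mcs_setup_queues(struct Scsi_Host *shost,
                                 struct mcs_session *sess)
    {
            shost->nr_hw_queues = sess->num_connections;
    }

    /* per command: map the hw queue blk-mq chose 1:1 to a connection */
    static int mcs_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *sc)
    {
            struct mcs_session *sess = shost_priv(shost);
            u16 hwq = blk_mq_unique_tag_to_hwq(blk_mq_unique_tag(sc->request));

            /* send the command on the connection owned by this hw queue */
            return mcs_xmit_cmd(sess->conns[hwq], sc);
    }

With ERL=0 and a session restricted to a single portal, as suggested
above, there is no cross-connection recovery to worry about, and
dm-multipath remains the layer responsible for path failover.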



Re: [PATCH] iscsi: Add support for asynchronous iSCSI session destruction

2020-01-23 Thread The Lee-Man
On Friday, January 17, 2020 at 3:33:35 PM UTC-8, Gabriel Krisman Bertazi 
wrote:
>
> From: Frank Mayhar  
>
> iSCSI session destruction can be arbitrarily slow, since it might 
> require network operations and serialization inside the scsi layer. 
> This patch adds a new user event to trigger the destruction work 
> asynchronously, releasing the rx_queue_mutex as soon as the operation is 
> queued and before it is performed.  This change allows other operations 
> to run in other sessions in the meantime, removing one of the major 
> iSCSI bottlenecks for us. 
>
> To prevent the session from being used after the destruction request, we 
> remove it immediately from the sesslist. This simplifies the locking 
> required during the asynchronous removal. 
>
> Co-developed-by: Khazhismel Kumykov  
> Signed-off-by: Khazhismel Kumykov  
> Signed-off-by: Frank Mayhar  
> Co-developed-by: Gabriel Krisman Bertazi  
> Signed-off-by: Gabriel Krisman Bertazi  
> --- 
>
> This patch requires a patch that just went upstream in order to apply 
> cleanly. That patch is ("iscsi: Don't destroy session if there are 
> outstanding connections"), which was just merged by Martin into 
> 5.6/scsi-queue. 
> Please make sure you have it in your tree, otherwise this one won't 
> apply. 
>
>  drivers/scsi/scsi_transport_iscsi.c | 36 ++++++++++++++++++++++++++++++++
>  include/scsi/iscsi_if.h             |  1 +
>  include/scsi/scsi_transport_iscsi.h |  1 +
>  3 files changed, 38 insertions(+)
>
> diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
> index ba6cfaf71aef..e9a8e0317b0d 100644
> --- a/drivers/scsi/scsi_transport_iscsi.c
> +++ b/drivers/scsi/scsi_transport_iscsi.c
> @@ -95,6 +95,8 @@ static DECLARE_WORK(stop_conn_work, stop_conn_work_fn);
>  static atomic_t iscsi_session_nr; /* sysfs session id for next new session */
>  static struct workqueue_struct *iscsi_eh_timer_workq;
>
> +static struct workqueue_struct *iscsi_destroy_workq;
> +
>  static DEFINE_IDA(iscsi_sess_ida);
>  /*
>   * list of registered transports and lock that must
> @@ -1615,6 +1617,7 @@ static struct sock *nls;
>  static DEFINE_MUTEX(rx_queue_mutex);
>
>  static LIST_HEAD(sesslist);
> +static LIST_HEAD(sessdestroylist);
>  static DEFINE_SPINLOCK(sesslock);
>  static LIST_HEAD(connlist);
>  static LIST_HEAD(connlist_err);
> @@ -2035,6 +2038,14 @@ static void __iscsi_unbind_session(struct work_struct *work)
>          ISCSI_DBG_TRANS_SESSION(session, "Completed target removal\n");
>  }
>
> +static void __iscsi_destroy_session(struct work_struct *work)
> +{
> +        struct iscsi_cls_session *session =
> +                container_of(work, struct iscsi_cls_session, destroy_work);
> +
> +        session->transport->destroy_session(session);
> +}
> +
>  struct iscsi_cls_session *
>  iscsi_alloc_session(struct Scsi_Host *shost, struct iscsi_transport *transport,
>                      int dd_size)
> @@ -2057,6 +2068,7 @@ iscsi_alloc_session(struct Scsi_Host *shost, struct iscsi_transport *transport,
>          INIT_WORK(&session->block_work, __iscsi_block_session);
>          INIT_WORK(&session->unbind_work, __iscsi_unbind_session);
>          INIT_WORK(&session->scan_work, iscsi_scan_session);
> +        INIT_WORK(&session->destroy_work, __iscsi_destroy_session);
>          spin_lock_init(&session->lock);
>
>          /* this is released in the dev's release function */
> @@ -3617,6 +3629,23 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
>                  else
>                          transport->destroy_session(session);
>                  break;
> +        case ISCSI_UEVENT_DESTROY_SESSION_ASYNC:
> +                session = iscsi_session_lookup(ev->u.d_session.sid);
> +                if (!session)
> +                        err = -EINVAL;
> +                else if (iscsi_session_has_conns(ev->u.d_session.sid))
> +                        err = -EBUSY;
> +                else {
> +                        unsigned long flags;
> +
> +                        /* Prevent this session from being found again */
> +                        spin_lock_irqsave(&sesslock, flags);
> +                        list_move(&session->sess_list, &sessdestroylist);
> +                        spin_unlock_irqrestore(&sesslock, flags);
> +
> +                        queue_work(iscsi_destroy_workq, &session->destroy_work);
> +                }
> +                break;
>          case ISCSI_UEVENT_UNBIND_SESSION:
>                  session = iscsi_session_lookup(ev->u.d_session.sid);
>                  if (session)
> @@ -4662,8 +4691,14 @@ static __init int iscsi_transport_init(void)
>                  goto release_nls;
>          }
>
> +        iscsi_destroy_workq = create_singlethread_workqueue("iscsi_destroy");
> +        if (!iscsi_destroy_workq)
> +                goto destroy_wq;
> +
>          return 0;
>
> +destroy_wq:
> +        destroy_workqueue(iscsi_eh_timer_workq);
>  release_nls:
>

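The core trick in the patch above is the usual kernel deferral idiom:
unlink the object so it can no longer be looked up, then push the slow
teardown onto a dedicated workqueue so the caller can release its mutex
immediately. Reduced to a self-contained sketch (the my_* names are
hypothetical, not the iSCSI code):

    #include <linux/kernel.h>
    #include <linux/slab.h>
    #include <linux/workqueue.h>

    static struct workqueue_struct *destroy_wq;  /* created at module init */

    struct my_obj {
            struct work_struct destroy_work;
            /* ... payload ... */
    };

    /* runs later in workqueue context, with no caller locks held */
    static void my_obj_destroy_fn(struct work_struct *work)
    {
            struct my_obj *obj =
                    container_of(work, struct my_obj, destroy_work);

            /* arbitrarily slow teardown (network I/O, SCSI teardown, ...) */
            kfree(obj);
    }

    /* caller: unlink obj from all lookup structures first, then defer */
    static void my_obj_destroy_async(struct my_obj *obj)
    {
            INIT_WORK(&obj->destroy_work, my_obj_destroy_fn);
            queue_work(destroy_wq, &obj->destroy_work);
    }

The list_move() onto sessdestroylist in the patch is what plays the
"unlink so it can't be found again" role for iSCSI sessions.
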
Re: [PATCH v4] iscsi: Perform connection failure entirely in kernel space

2020-01-23 Thread The Lee-Man
On Wednesday, January 15, 2020 at 7:52:39 PM UTC-8, Martin K. Petersen 
wrote:
>
>
> > Please consider the v4 below with the lock added. 
>
> Lee: Please re-review this given the code change. 
>

Martin:

The recent change makes sense, so please still include my:

Reviewed-by: Lee Duncan 

>
> > From: Bharath Ravi  
> > 
> > Connection failure processing depends on a daemon being present to (at 
> > least) stop the connection and start recovery.  This is a problem in a 
> > multipath scenario: if the daemon fails for whatever reason, the 
> > SCSI path is never marked as down, multipath won't perform the 
> > failover, and IO to the device will wait forever for that 
> > connection to come back. 
> > 
> > This patch performs the connection failure entirely inside the kernel. 
> > This way, the failover can happen and pending IO can continue even if 
> > the daemon is dead. Once the daemon comes alive again, it can execute 
> > recovery procedures if applicable. 
>
> -- 
> Martin K. Petersen      Oracle Linux Engineering 
>



Re: iSCSI Multiqueue

2020-01-23 Thread The Lee-Man
On Wednesday, January 15, 2020 at 7:16:48 AM UTC-8, Bobby wrote:
>
>
> Hi all,
>
> I have a question regarding multi-queue in iSCSI. AFAIK, *scsi-mq* has 
> been functional in the kernel since 3.17, after the block layer was 
> updated from single-queue to multi-queue (*blk-mq*). So the current 
> kernel has full-fledged *multi-queue* support.
>
> The question is:
>
> How does an iSCSI initiator use multi-queue? Does it mean having multiple 
> connections? I would like to see where exactly that is achieved in the 
> code, if someone can please give me a hint. Thanks in advance :)
>
> Regards
>

open-iscsi does not use multi-queue specifically, though the block layer 
is now entirely converted to multi-queue. If I understand correctly, 
there is no single-queue code path any more, but there is glue that 
allows existing single-queue drivers to continue on, mapping their use 
onto multi-queue. (Someone please correct me if I'm wrong.)

The only time multi-queue might be useful for open-iscsi would be for 
MCS -- multiple connections per session. But the implementation of 
multi-queue makes using it for MCS problematic: because each queue is on 
a different CPU, open-iscsi would have to coordinate the multiple 
connections across multiple CPUs, making things like ensuring correct 
sequence numbers difficult (see the sketch below).

Hope that helps. I _believe_ there is still an effort to map open-iscsi MCS 
to multi-queue, but nobody has tried to actually do it yet that I know of. 
The goal, of course, is better throughput using MCS.
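
To see the sequence-number problem in miniature: CmdSN is session-wide,
so even with one connection pinned per CPU, every submission still has
to serialize on the session just to take its number -- and the numbers
must additionally match the order PDUs actually go out on each socket,
so even a lock-free counter would not be enough. A sketch of just the
allocation half (hypothetical names, not open-iscsi code):

    #include <linux/spinlock.h>
    #include <linux/types.h>

    struct mcs_session {
            spinlock_t cmdsn_lock;  /* shared by all connections/CPUs */
            u32 cmdsn;              /* session-wide command sequence number */
    };

    static u32 mcs_next_cmdsn(struct mcs_session *sess)
    {
            u32 sn;

            spin_lock(&sess->cmdsn_lock);   /* cross-CPU serialization */
            sn = sess->cmdsn++;
            spin_unlock(&sess->cmdsn_lock);
            return sn;
    }

That one shared lock is exactly the session-wide serialization that the
per-CPU design of blk-mq tries to eliminate, which is why mapping MCS
onto multi-queue is harder than it first looks.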
