On Tue, May 15, 2018 at 01:25:34PM -0700, The Lee-Man wrote:
> Hi All:
> 
> I have been thinking that, with all the new development going on in Linux 
> and in our own open-iscsi community, that a summit would be a good idea.
> 
> It turns out both Chris and I live in the Northwest US (near Portland, OR), 
> and it would be easy for us to get together and talk.
> 
> But it occurs to me there are a lot of other players in the open-iscsi 
> space that might like to get together.

Oops, actually I thought I had responded to this already. Lee knows I'm
up for some face-to-face discussion, and I would be happy to see other
people involved. I'm not sure who's actively watching this list; we
might need to reach out directly to some of the industry partners with a
stake in this.
 
> We could have discussions and talks. e.g.:
> 
> * recent changes in open-iscsi (e.g. libopeniscsiusr)
> * handling containers
> * booting issues
> * scaling open-iscsi -- thousands of LUNs?
> * network interface: why not use a tap interface?
> * sysfs -- what a mess. How to synchronize with the asynchronous
> * where next?
> 
> (I'm sure there are a lot of other topics I can't think of right now)

This is a good list. A few comments and other ideas.

> booting issues

I think booting issues take up more of my time than any other class of
problem, so I'm in favor of rethinking things here.

iscsistart is a mess. We could clean it up, but at one point someone
working on dracut asked why it couldn't just take the kernel command
line and do what was needed. That would avoid having to script a
translation from the command line into iscsistart (or iscsiadm)
invocations, and would move more control of the boot process into
Open-iSCSI. Maybe not a bad idea?
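To make the idea concrete, here's a rough sketch of pulling the target
description straight from a dracut-style kernel command line. Only the
simplest `netroot=iscsi:<server>:<proto>:<port>:<lun>:<target>` shape is
handled; the full grammar (CHAP auth, iface and netdev fields) is more
involved, and the helper name is made up for illustration.

```python
def parse_iscsi_netroot(cmdline):
    """Return a dict for the first iSCSI netroot argument, or None.
    Hypothetical helper, sketching what iscsistart could do itself."""
    for word in cmdline.split():
        if word.startswith("netroot=iscsi:"):
            body = word[len("netroot=iscsi:"):]
            # server:protocol:port:lun:target -- the target (an IQN)
            # may itself contain colons, so cap the split at 4 fields.
            parts = body.split(":", 4)
            if len(parts) != 5:
                return None
            server, _proto, port, lun, target = parts
            return {
                "server": server,
                "port": port or "3260",   # iSCSI default port
                "lun": lun or "0",
                "target": target,
            }
    return None

cmdline = "ro quiet netroot=iscsi:192.168.50.1::::iqn.2009-06.dracut:target0"
print(parse_iscsi_netroot(cmdline))
```

On a real boot this would read /proc/cmdline and then log in with the
parsed values instead of round-tripping through generated flags.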

> network interface: why not use a tap interface?

I'm thinking you mean network-configuration support for offloading
HBAs, where iscsiuio is being used with a userspace IP stack. If so,
then putting an encapsulation interface into the drivers and using a
tap device with the standard IP stack and tools sounds interesting.
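The tap half of that is just the stock /dev/net/tun API. A minimal
sketch (the shuttling of frames to the HBA's encapsulation channel is
the hypothetical part; the setup below is the standard TUNSETIFF dance):

```python
import fcntl, os, struct

TUNSETIFF = 0x400454CA   # from <linux/if_tun.h>
IFF_TAP = 0x0002         # L2 (ethernet-frame) device
IFF_NO_PI = 0x1000       # no extra packet-information header

def build_ifreq(name, flags):
    # struct ifreq for TUNSETIFF: 16-byte interface name + short flags
    return struct.pack("16sH", name.encode(), flags)

def open_tap(name="iscsi0"):
    """Create/attach a tap device; needs CAP_NET_ADMIN to actually run.
    read()/write() on the returned fd then move whole ethernet frames,
    which iscsiuio could forward to the offload HBA."""
    fd = os.open("/dev/net/tun", os.O_RDWR)
    fcntl.ioctl(fd, TUNSETIFF, build_ifreq(name, IFF_TAP | IFF_NO_PI))
    return fd
```

The win is that ip/ethtool/tcpdump and the kernel IP stack all work on
the tap unchanged, instead of iscsiuio reimplementing them.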

As an additional incentive, iscsiuio currently maps the bnx2x and qed
register spaces directly and programs part of the adapter from
userspace. This actually breaks with CONFIG_IO_STRICT_DEVMEM and is
something we should be changing, probably by encapsulating ethernet
frames in netlink or something similar.
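On the wire, "encapsulate frames in netlink" could be as simple as a
standard struct nlmsghdr in front of the raw frame, sent to the driver
instead of poking registers from userspace. The message type below is
invented for illustration; the header layout is the real netlink one:

```python
import struct

NLMSG_HDRLEN = 16
ISCSI_UIO_MSG_FRAME = 0x20   # hypothetical private message type

def nlmsg_frame(frame, seq, pid=0):
    """Wrap a raw ethernet frame in a netlink message header.
    struct nlmsghdr: u32 len, u16 type, u16 flags, u32 seq, u32 pid."""
    length = NLMSG_HDRLEN + len(frame)
    return struct.pack("=IHHII", length, ISCSI_UIO_MSG_FRAME, 0,
                       seq, pid) + frame
```

That keeps the adapter programming in the kernel driver, where
CONFIG_IO_STRICT_DEVMEM no longer matters.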

Other big stuff to look at on the kernel side:

The kernel session locking in general. The frwd_lock/back_lock split
made the task-list corruption problem a while back painful to figure
out, and it looks like it still might have issues. Reported problems
need looking at sooner, but can we do better with task allocation and
locking?

Is there something we can do for iscsi_tcp with scsi_mq? I don't know
who proposed it (or how long ago), but it's been stuck in my mind to
look at a limited implementation of MC/S: multiple TCP connections run
as multiple queues per host. Not to compete with dm-multipath, but to
scale out performance within a single path on the system.
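A toy model of the shape I have in mind: one session, several TCP
connections, each backing one blk-mq hardware queue. All names here are
made up, and this only shows the queue-to-connection mapping; keeping
MC/S CmdSN ordering correct across connections is the real work.

```python
class MultiConnSession:
    """Illustrative only: one iSCSI session with N TCP connections,
    one per blk-mq hardware queue (e.g. pinned per CPU group)."""

    def __init__(self, nr_hw_queues):
        self.conns = [f"tcp-conn-{i}" for i in range(nr_hw_queues)]

    def conn_for_queue(self, hw_queue):
        # Requests submitted on a given hw queue always ride the same
        # connection, so per-connection state needs no cross-CPU locks.
        return self.conns[hw_queue % len(self.conns)]

sess = MultiConnSession(nr_hw_queues=4)
print(sess.conn_for_queue(2))   # tcp-conn-2
```

The queue-affinity is what would give the scale-out, without any of the
path-failover semantics dm-multipath already handles.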

- Chris

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
Visit this group at https://groups.google.com/group/open-iscsi.