On 13/08, Nir Soffer wrote:
> On Thu, Aug 13, 2020 at 1:32 AM The Lee-Man <[email protected]> wrote:
> > On Sunday, August 9, 2020 at 11:08:50 AM UTC-7, Amit Bawer wrote:
> >>
> >> ...
> >>
> >>>
> >>>> The other option is to use one login-all call without parallelism, but
> >>>> that would have other implications on our system to consider.
> >>>>
> >>>
> >>> Such as?
> >>>
> >> As mentioned above, unless there is a way to specify a list of targets
> >> and portals for a single login (all) command.
> >>
> >>>
> >>>> Your answers would be helpful once again.
> >>>>
> >>>> Thanks,
> >>>> - Amit
> >>>>
> >>>
> >>> You might be interested in a new feature I'm considering adding to
> >>> iscsiadm to do asynchronous logins. In other words, when asked to log
> >>> in to one or more targets, iscsiadm would send the login request to
> >>> the targets, then return success immediately. It is then up to the
> >>> end-user (you in this case) to poll for when the target actually
> >>> shows up.
> >>>
> >> This sounds very interesting, but it will probably be available to us
> >> only in later RHEL releases, if it is chosen to be delivered downstream.
> >> At present it seems we can only use the login-all way or logins in
> >> dedicated threads per target-portal.
> >>
> >>>
> >>> ...
> >>>
> >>
> > So you can only use RH-released packages?
>
> Yes, we support RHEL and CentOS now.
>
> > That's fine with me, but I'm asking you to test a new feature and see if
> > it fixes your problems. If it helped, I would add it up here in this
> > repo, and Red Hat would get it by default when they updated, which they
> > do regularly, as does my company (SUSE).
>
> Sure, this is how we do things. But using async login is something we can
> use only in a future version, maybe RHEL/CentOS 8.4, since it is probably
> too late for 8.3.
>
> > Just as a "side" point, I wouldn't attack your problem by manually
> > listing nodes to login to.
> >
> > It does seem as if you assume you are the only iscsi user on the system.
> > In that case, you have complete control of the node database. Assuming
> > your targets do not change, you can set up your node database once and
> > never have to discover iscsi targets again. Of course if targets change,
> > you can update your node database, but only as needed, i.e. full
> > discovery shouldn't be needed each time you start up, unless targets are
> > really changing all the time in your environment.
>
> This is partly true. In oVirt, there is the vdsm daemon managing iSCSI
> connections, so usually only vdsm manipulates the database.
>
> However, even in vdsm we have an issue when we attach a Cinder based
> volume. In this case we use os-brick
> (https://github.com/openstack/os-brick) to attach the volume, and it will
> discover and log in to the volume.
>
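The asynchronous login feature discussed above shifts the waiting to the caller: iscsiadm would return immediately, and the consumer polls until the sessions actually appear. A minimal sketch of that polling side, assuming logins were already initiated; `wait_for_sessions` and its parameters are illustrative names, and in practice `list_sessions` would wrap something like parsing `iscsiadm -m session` output:

```python
import time


def wait_for_sessions(expected_targets, list_sessions, timeout=30.0, interval=1.0):
    """Poll until every expected target shows up as an active session.

    expected_targets: iterable of target IQNs for which async logins were
                      already initiated.
    list_sessions:    callable returning the set of currently logged-in
                      IQNs (e.g. by parsing `iscsiadm -m session`).
    Returns (logged_in, pending) sets, pending being non-empty if the
    timeout expired before all targets appeared.
    """
    pending = set(expected_targets)
    deadline = time.monotonic() + timeout
    while pending and time.monotonic() < deadline:
        # Remove any targets that have shown up since the last poll.
        pending -= set(list_sessions())
        if pending:
            time.sleep(interval)
    logged_in = set(expected_targets) - pending
    return logged_in, pending
```

Anything left in `pending` after the deadline can be reported upward as a timeout error, which is one way to address the error-reporting concern raised later in the thread.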
Hi,

For os-brick we would have to modify the library to use the async login
mechanism, because right now it's serializing iSCSI connections using an
in-process lock.

There are at least two reasons why we are serializing iSCSI logins/logouts:

- It's easier: We don't have to be careful with race conditions between
  attach/detach/cleanup on failed attach on the same targets.

- It's more robust: This is the main reason. I don't remember exactly
  when/where it happened, but concurrently creating nodes and logging in
  could lead to a program (iscsiadm or iscsid, I don't remember which)
  getting stuck forever.

It is on my TODO list to improve the connection speed by reducing the
critical section we are locking, but it's not something I'm currently
working on.

> And of course we cannot prevent an admin from changing the database for
> their valid reasons.
>
> So being able to login/logout to specific nodes is very attractive for us.
>
> > If you do discovery and have nodes in your node database you don't like,
> > just remove them.
>
> We can do this, adding and removing nodes we added, but we cannot remove
> nodes we did not add. It may be something added by os-brick or an
> administrator.
>
> > Another point about your scheme: you are setting each node's 'startup'
> > to 'manual', but manual is the default, and since you seem to own the
> > open-iscsi code on this system, you can ensure the default is manual.
> > Perhaps because this is a test?
>
> No, this is our production setup. I don't know why we specify manual;
> maybe this was not the default in 2009 when this code was written, or
> maybe the intent was to be explicit about it, in case the default would
> change?

Yes, that's the reason. The os-brick library doesn't know if the system has
customized defaults, so it sets every configuration option that is
necessary for its correct operation explicitly.

> Do you see a problem with explicit node.startup=manual?
The only downside I can think of is the time spent setting it.

Cheers,
Gorka.

> > So, again, I ask you if you will test the async login code? It's really
> > not much extra work -- just a "git clone" and a "make install" (mostly).
> > If not, the async feature may make it into iscsiadm anyway, some time
> > soon, but I'd really prefer other testers for this feature before that.
>
> Sure, we will test this.
>
> Having an async login API sounds great, but my concern is how we wait for
> the login result. For example, with systemd many things became
> asynchronous, but there is no good way to wait for things. A few examples
> are mounts that can fail after the mount command completes, because after
> the completion udev changes permissions on the mount, or multipath
> devices, which may not be ready after connecting to a target.
>
> Can you elaborate on how you would wait for the login result, and how you
> would get a login error for reporting up the stack? How can you handle
> timeouts? This is easy to do when using a synchronous API with threads.
>
> From our point of view we want to be able to:
>
>     start async login process
>     for result in login results:
>         add result to response
>     return response with connection details
>
> This runs on every host in a cluster, and the results are returned to the
> oVirt engine, which manages the cluster.
>
> Cheers,
> Nir
>
> --
> You received this message because you are subscribed to the Google Groups
> "open-iscsi" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/open-iscsi/CAMr-obscx-wmXs8Y2Y1NzWjcgc_vY-hOaYho50hhQiaJVeN9Qw%40mail.gmail.com.

--
You received this message because you are subscribed to the Google Groups
"open-iscsi" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to [email protected].
To view this discussion on the web visit https://groups.google.com/d/msgid/open-iscsi/20200922121520.76vlvwnx26d7hg7t%40localhost.
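Nir's desired flow, start logins, iterate over results, return a response with per-target status, is straightforward with the synchronous-API-plus-threads approach he mentions. A sketch under those assumptions; `login_all` and `login` are illustrative names, with `login` standing in for whatever wraps a synchronous `iscsiadm --login` call:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed


def login_all(targets, login, timeout=30.0, max_workers=10):
    """Log in to every (iqn, portal) pair concurrently, collecting results.

    targets: iterable of (iqn, portal) tuples.
    login:   callable performing one synchronous login; raises on failure.
    Returns a list with one status dict per target, so both successes and
    errors can be reported up the stack.  If the overall timeout expires,
    as_completed() raises TimeoutError; a production version would catch
    it and report the still-pending targets as timed out.
    """
    response = []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(login, iqn, portal): (iqn, portal)
                   for iqn, portal in targets}
        for fut in as_completed(futures, timeout=timeout):
            iqn, portal = futures[fut]
            try:
                fut.result()
                response.append({"target": iqn, "portal": portal,
                                 "status": "ok"})
            except Exception as exc:
                response.append({"target": iqn, "portal": portal,
                                 "status": "error", "error": str(exc)})
    return response
```

Each host in the cluster could run this and return the resulting response to the engine, matching the pseudocode in the message above.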
