On Wed, Feb 10, 2016 at 5:12 PM, Fox, Kevin M <kevin....@pnnl.gov> wrote:

> But the issue is that, when told to detach, some of the drivers do bad
> things. The question, then, is whether it's the driver's job to refcount to
> fix the issue, or Nova's to refcount so that it doesn't call the release
> before all users are done with it. I think Cinder, in the middle, is
> probably not the right place to track it; but if it's to be solved on
> Nova's side, Nova needs to know when it needs to do it, and Cinder might
> have to relay some extra info from the backend.
>
> Either way, on the driver side there probably needs to be a mechanism for
> the driver to say either that it can refcount properly, so it's multiattach
> compatible (or that Nova should refcount), or to default to never allowing
> multiattach, so existing drivers don't break.
>
> Thanks,
> Kevin
> ________________________________________
> From: Sean McGinnis [sean.mcgin...@gmx.com]
> Sent: Wednesday, February 10, 2016 3:25 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining when
> to call os-brick's connector.disconnect_volume
>
> On Wed, Feb 10, 2016 at 11:16:28PM +0000, Fox, Kevin M wrote:
> > I think part of the issue is that whether to count or not is Cinder
> > driver specific, and only Cinder knows if it should be done or not.
> >
> > But if Cinder told Nova that particular multiattach endpoints must be
> > refcounted, might that resolve the issue?
> >
> > Thanks,
> > Kevin
>
> In this case (the point John and I were making, at least) it doesn't
> matter. Nothing is driver specific, so it wouldn't matter which backend
> is being used.
>
> If a volume is needed, request it to be attached. When it is no longer
> needed, tell Cinder to take it away. Simple as that.
>

Hey Kevin,

So I think what Sean M pointed out is still valid in your case. It's not
really that some drivers do bad things; the problem is the way attach/detach
works in OpenStack as a whole. The original design (which we haven't strayed
very far from) was that you could only attach a single resource to a single
compute node. That was it; there was no concept of multi-attach at all.

Now, however, folks want to introduce multi-attach, which means all of the
old assumptions the code was written on and designed around are "bad
assumptions" now. It's true, as you pointed out, that there are some drivers
that deal with targets in a way that makes things complicated, but they're
completely in line with the SCSI standards and aren't doing anything
*wrong*.

The point Sean M and I were trying to make is that for the specific use
case of a single volume being attached to a compute node, BUT being passed
through to more than one instance, it might be worth looking at just
ensuring that the compute node doesn't call detach until it's *done* with
all of the instances it was passing that volume through to; see the sketch
below.
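
A minimal sketch of that idea, assuming a per-compute-node tracker (the
class and method names here are hypothetical, not actual Nova code):

    # Hypothetical sketch: refcount the instances sharing one volume
    # connection on a compute node, and only tear the connection down
    # when the last user detaches.
    import collections
    import threading

    class ConnectionTracker(object):
        def __init__(self):
            self._lock = threading.Lock()
            self._counts = collections.Counter()

        def add_user(self, volume_id):
            # True means this is the first user, so the connection
            # actually needs to be established via os-brick.
            with self._lock:
                self._counts[volume_id] += 1
                return self._counts[volume_id] == 1

        def remove_user(self, volume_id):
            # True means this was the last user, so it's now safe to
            # call connector.disconnect_volume() and tell Cinder to
            # terminate the connection.
            with self._lock:
                self._counts[volume_id] -= 1
                if self._counts[volume_id] <= 0:
                    del self._counts[volume_id]
                    return True
                return False

Nova would only call os-brick's disconnect_volume (and Cinder's
terminate_connection) when remove_user() returns True; the single-attach
path behaves exactly as it does today, since the count just goes 1 -> 0.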

You're absolutely right that there are some *weird* things a couple of
vendors do with targets, in cases like replication where they may actually
create a new target and attach; those sorts of things are ABSOLUTELY
Cinder's problem, and Nova should not have to know anything about them as a
consumer of the target.

My view is that maybe we should look at addressing the case of multiple
uses of a single target in Nova, and then absolutely figure out how to make
things work correctly on the Cinder side for all the different behaviors
the various vendors' backends may exhibit; one way Cinder could relay that
to Nova is sketched below.
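
Purely as an illustration of Kevin's capability-flag idea (none of these
field names exist in Cinder today; they're hypothetical), a backend could
advertise whether its targets are shared across attachments, so Nova knows
when it must refcount before disconnecting:

    # Hypothetical illustration, not real Cinder driver API: a backend
    # advertises whether it hands out one shared target per volume or
    # a fresh target per attachment.
    class ExampleDriver(object):
        # True: one target per volume, shared by every attachment on a
        # host, so Nova must refcount and disconnect only after the
        # last user is gone.  False: a unique target per attachment.
        shared_targets = True

        def initialize_connection(self, volume_id, connector):
            return {
                'driver_volume_type': 'iscsi',
                'data': {
                    'target_iqn': 'iqn.2016-02.org.example:%s' % volume_id,
                    'shared_target': self.shared_targets,
                },
            }

Drivers that don't report the flag could simply default to not allowing
multiattach at all, as Kevin suggested, so nothing existing breaks.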

Make sense?

John
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
