Re: [openstack-dev] [cinder][nova] Proper behavior for os-force_detach

2018-07-24 Thread Lee Yarwood
On 20-07-18 08:10:37, Erlon Cruz wrote:
> Nice, good to know. Thanks all for the feedback. We will fix that in our
> drivers.

FWIW Nova does not and AFAICT never has called os-force_detach.

We previously used os-terminate_connection with v2 where the connector
was optional. Even then we always provided one, even providing the
destination connector during an evacuation when the source connector
wasn't stashed in connection_info.
 
> @Walter, so, in this case, if Cinder has the connector, it should not need
> to call the driver passing a None object right?

Yeah, I don't think this is an issue with v3 given the connector is
stashed with the attachment, so all we require is a reference to the
attachment to clean up the connection during evacuations etc.
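As a toy illustration of that flow (plain Python, not Cinder code; all
names here are made up): the connector is persisted with the attachment
record at attach time, so later cleanup needs only the attachment ID:

```python
# Toy model of the v3 attachments flow (illustration only, not Cinder
# code): the connector is stored with the attachment record when the
# volume is attached, so cleanup only needs the attachment ID -- the
# caller never has to supply (or reconstruct) a connector.
import itertools

_ids = itertools.count(1)
attachments = {}

def attachment_create(volume_id, connector):
    """Persist the connector alongside the attachment record."""
    attachment_id = "att-%d" % next(_ids)
    attachments[attachment_id] = {"volume_id": volume_id,
                                  "connector": connector}
    return attachment_id

def attachment_delete(attachment_id):
    """Clean up using the stashed connector; none is passed in."""
    record = attachments.pop(attachment_id)
    return record["connector"]

att = attachment_create("vol-1", {"host": "compute-1",
                                  "initiator": "iqn.example"})
stashed = attachment_delete(att)   # connector recovered from the record
```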

Lee
 

Re: [openstack-dev] [cinder][nova] Proper behavior for os-force_detach

2018-07-20 Thread Erlon Cruz
Nice, good to know. Thanks all for the feedback. We will fix that in our
drivers.

@Walter, so, in this case, if Cinder has the connector, it should not need
to call the driver passing a None object right?

Erlon

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Proper behavior for os-force_detach

2018-07-18 Thread Walter Boring
The whole purpose of this test is to simulate the case where Nova no
longer knows where the VM is, or the VM may simply not exist anymore,
but we still need to clean up the Cinder side of things. That being
said, with the new attach API the connector is saved in the Cinder
database for each volume attachment.

Walt



Re: [openstack-dev] [cinder][nova] Proper behavior for os-force_detach

2018-07-18 Thread Gorka Eguileor
Hi,

I agree with Sean, drivers should remove all connections for the volume.

Even without multiattach there are cases where you'll have multiple
connections to the same volume, such as during a live migration.

It's also very useful when Nova and Cinder get out of sync and your
volume has leftover connections. In that case, if you try to delete the
volume, some drivers return a "volume in use" error.
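A toy sketch of that failure mode (made-up names, not a real driver): a
leftover export makes the delete fail until every connection has been
removed, which a connector-less force detach can do:

```python
# Toy backend (illustration only): a volume with leftover exports
# cannot be deleted until every connection is torn down.
volume = {"name": "vol-1", "exports": {"compute-1", "compute-2"}}

def delete_volume(vol):
    if vol["exports"]:
        raise RuntimeError("volume in use")
    vol["deleted"] = True

def force_detach(vol, connector=None):
    # A None connector means: remove every connection for the volume.
    if connector is None:
        vol["exports"].clear()
    else:
        vol["exports"].discard(connector["host"])

try:
    delete_volume(volume)          # leftover exports block the delete
except RuntimeError as exc:
    error = str(exc)

force_detach(volume, connector=None)   # clean up all leftover connections
delete_volume(volume)                  # now succeeds
```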

Cheers,
Gorka.




Re: [openstack-dev] [cinder][nova] Proper behavior for os-force_detach

2018-07-17 Thread Sean McGinnis
On Tue, Jul 17, 2018 at 04:06:29PM -0300, Erlon Cruz wrote:
> Hi Cinder and Nova folks,
>
> While working on some tests for our drivers, I stumbled upon the
> tempest test 'force_detach_volume', which calls the Cinder API passing
> a 'None' connector. At the time this test was added, several CIs went
> down, and people started discussing whether this (accepting/sending a
> None connector) is the proper behavior and what a driver is expected
> to do[1]. So, some CIs started simply skipping that test[2][3][4],
> while others implemented fixes that made the driver disconnect the
> volume from all hosts when a None connector is received[5][6][7].

Right, it was determined the correct behavior for this was to disconnect
the volume from all hosts. The CIs that are skipping this test should
stop doing so (once their drivers are fixed, of course).

>
> While implementing this fix seems straightforward, I feel that just
> removing the volume from all hosts is not the correct thing to do,
> mainly considering that we can have multi-attach.
>

I don't think multiattach makes a difference here. Someone is forcibly
detaching the volume and not specifying an individual connection. So based on
that, Cinder should be removing any connections, whether that is to one or
several hosts.
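Driver-side, a minimal sketch of that behavior could look like this
(hypothetical helper structure; real Cinder drivers differ):

```python
# Sketch of a driver's terminate_connection handling a None connector
# (hypothetical structure; real Cinder drivers vary).
def terminate_connection(volume, connector, exports):
    """exports maps volume name -> set of hosts with live connections."""
    hosts = exports.get(volume, set())
    if connector is None:
        # Force detach with no connector: tear everything down,
        # whether the volume is exported to one host or several.
        targets = set(hosts)
    else:
        targets = {connector["host"]} & hosts
    for host in targets:
        hosts.discard(host)   # stand-in for the backend unexport call
    return targets

exports = {"vol-1": {"compute-1", "compute-2"}}
removed = terminate_connection("vol-1", None, exports)
```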

> So, my questions are: what is the best way to fix this problem?
> Should the Cinder API continue to accept detachments with None
> connectors? If so, what would be the effects on other Nova attachments
> to the same volume? Is there any side effect if the volume is not
> multi-attached?
>
> In addition to this thread, I will also bring this topic to
> tomorrow's Cinder meeting, so please join if you have something to
> share.
>

+1 - good plan.




[openstack-dev] [cinder][nova] Proper behavior for os-force_detach

2018-07-17 Thread Erlon Cruz
Hi Cinder and Nova folks,

While working on some tests for our drivers, I stumbled upon the
tempest test 'force_detach_volume', which calls the Cinder API passing
a 'None' connector. At the time this test was added, several CIs went
down, and people started discussing whether this (accepting/sending a
None connector) is the proper behavior and what a driver is expected
to do[1]. So, some CIs started simply skipping that test[2][3][4],
while others implemented fixes that made the driver disconnect the
volume from all hosts when a None connector is received[5][6][7].
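For reference, the volume action in question looks roughly like this on
the wire (a sketch only; the attachment_id shown is a placeholder and
the accepted fields depend on the API version):

```
POST /v3/{project_id}/volumes/{volume_id}/action

{
    "os-force_detach": {
        "attachment_id": null,
        "connector": null
    }
}
```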

While implementing this fix seems straightforward, I feel that just
removing the volume from all hosts is not the correct thing to do,
mainly considering that we can have multi-attach.

So, my questions are: what is the best way to fix this problem?
Should the Cinder API continue to accept detachments with None
connectors? If so, what would be the effects on other Nova attachments
to the same volume? Is there any side effect if the volume is not
multi-attached?

In addition to this thread, I will also bring this topic to
tomorrow's Cinder meeting, so please join if you have something to
share.

Erlon

___
[1] https://bugs.launchpad.net/cinder/+bug/1686278
[2]
https://openstack-ci-logs.aws.infinidat.com/14/578114/2/check/dsvm-tempest-infinibox-fc/14fa930/console.html
[3]
http://54.209.116.144/14/578114/2/check/kaminario-dsvm-tempest-full-iscsi/ce750c8/console.html
[4]
http://logs.openstack.netapp.com/logs/14/578114/2/upstream-check/cinder-cDOT-iSCSI/8e2c549/console.html#_2018-07-16_20_06_16_937286
[5]
https://review.openstack.org/#/c/551832/1/cinder/volume/drivers/dell_emc/vnx/adapter.py
[6]
https://review.openstack.org/#/c/550324/2/cinder/volume/drivers/hpe/hpe_3par_common.py
[7]
https://review.openstack.org/#/c/536778/2/cinder/volume/drivers/infinidat.py