Re: [openstack-dev] [Nova][Cinder] Cleanly detaching volumes from failed nodes

2016-02-06 Thread Avishay Traeger
On Thu, Feb 4, 2016 at 6:38 PM, Walter A. Boring IV wrote:

> My plan was to store the connector object at attach_volume time.   I was
> going to add an additional column to the cinder volume attachment table
> that stores the connector that came from nova.   The problem is live
> migration.   After live migration the connector is out of date.  Cinder
> doesn't have an existing API to update attachment.  That will have to be
> added, so that the connector info can be updated.
> We have needed this for force detach for some time now.
>
> It's on my list, but most likely not until N, or at least not until the
> microversions land in Cinder.
> Walt
>

I think live migration should probably just be a second attachment - during
the migration you have two attachments, then you detach the first.  I think
this is correct because, as far as Cinder and the storage are concerned,
there are two attachments.  I think most of this mess started because we
were trying to make the volume status reflect the status in both Cinder and
Nova.  If the status reflects only Cinder's (and the storage backends')
status, things become simpler.  (Might need to pass an extra flag on the
second attach to override any "no multiattach" policies that exist.)
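
To make that concrete, the flow could look roughly like this from a
client's point of view (a sketch using python-cinderclient volume actions;
the bookkeeping of the two attachment records and the "override
multiattach" flag are deliberately left out, and this is not how Nova does
it today):

    def migrate_volume_connection(cinder, volume_id, src_connector,
                                  dst_connector):
        """cinder: an authenticated cinderclient.v2 Client."""
        volume = cinder.volumes.get(volume_id)

        # 1. Create a second export, for the destination host, while the
        #    source host is still attached and doing I/O.
        dst_conn_info = cinder.volumes.initialize_connection(volume,
                                                             dst_connector)

        # ... the hypervisor live-migrates the guest; once it is running
        #     on the destination, all I/O goes through dst_conn_info ...

        # 2. Only now tear down the source export.  Each attachment kept
        #    its own connector, so nothing about the source host has to
        #    be reconstructed after the fact.
        cinder.volumes.terminate_connection(volume, src_connector)
        return dst_conn_info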



Re: [openstack-dev] [Nova][Cinder] Cleanly detaching volumes from failed nodes

2016-02-04 Thread Walter A. Boring IV
My plan was to store the connector object at attach_volume time.   I was 
going to add an additional column to the cinder volume attachment table 
that stores the connector that came from nova.   The problem is live 
migration. After live migration the connector is out of date.  Cinder 
doesn't have an existing API to update attachment.  That will have to be 
added, so that the connector info can be updated.
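
As a rough illustration of the kind of change I mean (a hypothetical
sketch in the sqlalchemy-migrate style Cinder's DB migrations use; the
column name, type, and JSON serialization are guesses, not an actual
patch):

    from oslo_serialization import jsonutils
    from sqlalchemy import Column, MetaData, Table, Text


    def upgrade(migrate_engine):
        """Add a column to volume_attachment for the Nova-supplied connector."""
        meta = MetaData()
        meta.bind = migrate_engine
        volume_attachment = Table('volume_attachment', meta, autoload=True)
        connector = Column('connector', Text, nullable=True)
        volume_attachment.create_column(connector)


    def serialize_connector(connector):
        """Serialize the connector dict at attach_volume time so it can be
        replayed later for force detach (and refreshed after live migration
        once an update-attachment API exists)."""
        return jsonutils.dumps(connector) if connector else None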

We have needed this for force detach for some time now.

It's on my list, but most likely not until N, or at least not until the 
microversions land in Cinder.

Walt



Hi all,
I was wondering if there was any way to cleanly detach volumes from 
failed nodes.  In the case where the node is up nova-compute will call 
Cinder's terminate_connection API with a "connector" that includes 
information about the node - e.g., hostname, IP, iSCSI initiator name, 
FC WWPNs, etc.
If the node has died, this information is no longer available, and so 
the attachment cannot be cleaned up properly.  Is there any way to 
handle this today?  If not, does it make sense to save the connector 
elsewhere (e.g., DB) for cases like these?


Thanks,
Avishay



Re: [openstack-dev] [Nova][Cinder] Cleanly detaching volumes from failed nodes

2016-01-27 Thread Matt Riedemann



On 1/26/2016 5:55 AM, Avishay Traeger wrote:

OK great, thanks!  I added a suggestion to the etherpad as well, and
found this link helpful: https://review.openstack.org/#/c/266095/

On Tue, Jan 26, 2016 at 1:37 AM, D'Angelo, Scott wrote:

There is currently no simple way to clean up Cinder attachments if
the Nova node (or the instance) has gone away. We’ve put this topic
on the agenda for the Cinder mid-cycle this week:

https://etherpad.openstack.org/p/mitaka-cinder-midcycle L#113



From: Avishay Traeger [mailto:avis...@stratoscale.com]
Sent: Monday, January 25, 2016 7:21 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Nova][Cinder] Cleanly detaching volumes from failed nodes

Hi all,

I was wondering if there was any way to cleanly detach volumes from
failed nodes.  In the case where the node is up nova-compute will
call Cinder's terminate_connection API with a "connector" that
includes information about the node - e.g., hostname, IP, iSCSI
initiator name, FC WWPNs, etc.

If the node has died, this information is no longer available, and
so the attachment cannot be cleaned up properly.  Is there any way
to handle this today?  If not, does it make sense to save the
connector elsewhere (e.g., DB) for cases like these?


Thanks,

Avishay




I've replied on https://review.openstack.org/#/c/266095/ and the related
cinder change https://review.openstack.org/#/c/272899/, which are adding
a new key to the volume connector dict being passed around between nova
and cinder, and that is not ideal.


I'd really like to see us start modeling the volume connector with
versioned objects so we can (1) tell what's actually in this mystery
connector dict in the nova virt driver interface and (2) handle version
compatibility when adding new keys to it.
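
For illustration only, a minimal oslo.versionedobjects sketch of such a
connector object (field names are pulled from the connector keys discussed
in this thread; this is not a proposed interface):

    from oslo_versionedobjects import base
    from oslo_versionedobjects import fields


    @base.VersionedObjectRegistry.register
    class VolumeConnector(base.VersionedObject):
        # Version 1.0: initial version
        VERSION = '1.0'

        fields = {
            'host': fields.StringField(),
            'ip': fields.StringField(nullable=True),
            'initiator': fields.StringField(nullable=True),     # iSCSI IQN
            'wwpns': fields.ListOfStringsField(nullable=True),  # FC port WWNs
            'multipath': fields.BooleanField(default=False),
        }

        def obj_make_compatible(self, primitive, target_version):
            # When a 1.1 adds a new key, strip it here for 1.0 peers
            # instead of silently passing an unknown dict key between
            # services.
            super(VolumeConnector, self).obj_make_compatible(
                primitive, target_version)

That would give us a documented schema for (1) and a place to hang the
version compatibility logic for (2).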


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Nova][Cinder] Cleanly detaching volumes from failed nodes

2016-01-27 Thread Avishay Traeger
On Wed, Jan 27, 2016 at 1:01 PM, Matt Riedemann wrote:


> I've replied on https://review.openstack.org/#/c/266095/ and the related
> cinder change https://review.openstack.org/#/c/272899/ which are adding a
> new key to the volume connector dict being passed around between nova and
> cinder, which is not ideal.
>
> I'd really like to see us start modeling the volume connector with
> versioned objects so we can (1) tell what's actually in this mystery
> connector dict in the nova virt driver interface and (2) handle version
> compat with adding new keys to it.
>

I agree with you.  Actually, I think it would be more correct to have
Cinder store it, and not pass it at all to terminate_connection().
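
Purely as an illustration of what I mean (the DB helper name is made up
and this is not actual Cinder code):

    from oslo_serialization import jsonutils


    def terminate_connection(self, context, volume, connector=None):
        # If the caller (e.g. a dead or evacuated Nova node) cannot supply
        # a connector, fall back to the copy Cinder persisted at attach
        # time instead of failing the detach.
        if connector is None:
            attachment = self._get_attachment(context, volume)  # hypothetical
            connector = jsonutils.loads(attachment['connector'])
        self.driver.terminate_connection(volume, connector)

Nova could then stop shipping the connector back to Cinder entirely.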




Re: [openstack-dev] [Nova][Cinder] Cleanly detaching volumes from failed nodes

2016-01-27 Thread Matt Riedemann



On 1/27/2016 11:22 AM, Avishay Traeger wrote:

On Wed, Jan 27, 2016 at 1:01 PM, Matt Riedemann wrote:


I've replied on https://review.openstack.org/#/c/266095/ and the
related cinder change https://review.openstack.org/#/c/272899/ which
are adding a new key to the volume connector dict being passed
around between nova and cinder, which is not ideal.

I'd really like to see us start modeling the volume connector with
versioned objects so we can (1) tell what's actually in this mystery
connector dict in the nova virt driver interface and (2) handle
version compat with adding new keys to it.


I agree with you.  Actually, I think it would be more correct to have
Cinder store it, and not pass it at all to terminate_connection().





That would be ideal, but I don't know if cinder is storing this
information in the database the way nova does in its
block_device_mappings.connection_info column.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Nova][Cinder] Cleanly detaching volumes from failed nodes

2016-01-27 Thread Duncan Thomas
On 27 January 2016 at 06:40, Matt Riedemann wrote:

> On 1/27/2016 11:22 AM, Avishay Traeger wrote:
>
>>
>> I agree with you.  Actually, I think it would be more correct to have
>> Cinder store it, and not pass it at all to terminate_connection().
>>
>>
> That would be ideal but I don't know if cinder is storing this information
> in the database like nova is in the nova
> block_device_mappings.connection_info column.
>


This is being discussed for cinder, since it is useful for implementing
force detach / cleanup in cinder.

-- 
Duncan Thomas


Re: [openstack-dev] [Nova][Cinder] Cleanly detaching volumes from failed nodes

2016-01-25 Thread Avishay Traeger
OK great, thanks!  I added a suggestion to the etherpad as well, and found
this link helpful: https://review.openstack.org/#/c/266095/

On Tue, Jan 26, 2016 at 1:37 AM, D'Angelo, Scott <scott.dang...@hpe.com> wrote:

> There is currently no simple way to clean up Cinder attachments if the
> Nova node (or the instance) has gone away. We’ve put this topic on the
> agenda for the Cinder mid-cycle this week:
>
> https://etherpad.openstack.org/p/mitaka-cinder-midcycle L#113
>
>
>
> *From:* Avishay Traeger [mailto:avis...@stratoscale.com]
> *Sent:* Monday, January 25, 2016 7:21 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* [openstack-dev] [Nova][Cinder] Cleanly detaching volumes from
> failed nodes
>
>
>
> Hi all,
>
> I was wondering if there was any way to cleanly detach volumes from failed
> nodes.  In the case where the node is up nova-compute will call Cinder's
> terminate_connection API with a "connector" that includes information about
> the node - e.g., hostname, IP, iSCSI initiator name, FC WWPNs, etc.
>
> If the node has died, this information is no longer available, and so the
> attachment cannot be cleaned up properly.  Is there any way to handle this
> today?  If not, does it make sense to save the connector elsewhere (e.g.,
> DB) for cases like these?
>
>
>
> Thanks,
>
> Avishay
>




[openstack-dev] [Nova][Cinder] Cleanly detaching volumes from failed nodes

2016-01-25 Thread Avishay Traeger
Hi all,
I was wondering if there was any way to cleanly detach volumes from failed
nodes.  In the case where the node is up nova-compute will call Cinder's
terminate_connection API with a "connector" that includes information about
the node - e.g., hostname, IP, iSCSI initiator name, FC WWPNs, etc.
If the node has died, this information is no longer available, and so the
attachment cannot be cleaned up properly.  Is there any way to handle this
today?  If not, does it make sense to save the connector elsewhere (e.g.,
DB) for cases like these?
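
For concreteness, the connector is roughly a dict along these lines (the
exact keys vary by backend and driver, and the values here are made up):

    connector = {
        'host': 'compute-01',            # hostname of the Nova node
        'ip': '192.0.2.10',              # its storage-network IP
        'initiator': 'iqn.1993-08.org.debian:01:abcdef123456',  # iSCSI IQN
        'wwpns': ['21000024ff30441c'],   # FC port WWNs, if any
        'wwnns': ['20000024ff30441c'],   # FC node WWNs, if any
        'multipath': False,
        'platform': 'x86_64',
        'os_type': 'linux2',
    }

Once the node is gone, none of this can be regenerated on the fly.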

Thanks,
Avishay



Re: [openstack-dev] [Nova][Cinder] Cleanly detaching volumes from failed nodes

2016-01-25 Thread D'Angelo, Scott
There is currently no simple way to clean up Cinder attachments if the Nova 
node (or the instance) has gone away. We’ve put this topic on the agenda for 
the Cinder mid-cycle this week:
https://etherpad.openstack.org/p/mitaka-cinder-midcycle L#113

From: Avishay Traeger [mailto:avis...@stratoscale.com]
Sent: Monday, January 25, 2016 7:21 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Nova][Cinder] Cleanly detaching volumes from failed 
nodes

Hi all,
I was wondering if there was any way to cleanly detach volumes from failed 
nodes.  In the case where the node is up nova-compute will call Cinder's 
terminate_connection API with a "connector" that includes information about the 
node - e.g., hostname, IP, iSCSI initiator name, FC WWPNs, etc.
If the node has died, this information is no longer available, and so the 
attachment cannot be cleaned up properly.  Is there any way to handle this 
today?  If not, does it make sense to save the connector elsewhere (e.g., DB) 
for cases like these?

Thanks,
Avishay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev