Has anyone seen this issue in Folsom?

-----Original Message-----
From: B, Vivekanandan (HP Software (BLR)) 
[mailto:vivekanandan.bangarus...@hp.com] 
Sent: Thursday, May 02, 2013 8:59 AM
To: Smith, Eric E; openstack@lists.launchpad.net
Subject: RE: Cinder - attach / detach / reattach fails


Hi,

I tried the same sequence on my Grizzly setup and was able to attach the same volume back with the same device file name.
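
Roughly the sequence I ran (the <server-id> and <volume-id> below are just placeholders for my test instance and volume):

$ nova volume-attach <server-id> <volume-id> /dev/vdc    # initial attach
$ nova volume-detach <server-id> <volume-id>             # detach the volume
$ nova volume-attach <server-id> <volume-id> /dev/vdc    # reattach to the same device name; works here on Grizzly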


~Vivek

-----Original Message-----
From: Openstack [mailto:openstack-bounces+bvivek=hp....@lists.launchpad.net] On 
Behalf Of eric_e_sm...@dell.com
Sent: Thursday, May 02, 2013 6:29 PM
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] Cinder - attach / detach / reattach fails

A little more information: I've discovered that this only happens if I reuse the previous device (/dev/vdc in this case). If I use the next device (/dev/vdd), the attachment works fine.
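
In other words (with the IDs replaced by placeholders), this is what fails for me:

$ nova volume-attach <server-id> <volume-id> /dev/vdc    # reusing the old device name -> fails

while this works:

$ nova volume-attach <server-id> <volume-id> /dev/vdd    # next free device name -> attaches fine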

-----Original Message-----
From: Openstack 
[mailto:openstack-bounces+eric_e_smith=dell....@lists.launchpad.net] On Behalf 
Of Smith, Eric E
Sent: Thursday, May 02, 2013 7:42 AM
To: openstack@lists.launchpad.net
Subject: [Openstack] Cinder - attach / detach / reattach fails

Is this a bug, perhaps? I have a volume and a VM. I can attach the volume to the VM and see the disk with fdisk -l, and I can detach the volume and see that it is gone. When I try to reattach the volume, I get the following error:

May  2 12:18:01 compute-4 ERROR nova.compute.manager [instance: 4fcd874f-f8e5-4a08-9745-838b0636bd83] Failed to attach volume 2e307a3d-3635-432d-99ed-6e8ddddf5792 at /dev/vdc
2013-05-02 12:18:01,209 30214 TRACE nova.compute.manager [instance: 4fcd874f-f8e5-4a08-9745-838b0636bd83] Traceback (most recent call last):
2013-05-02 12:18:01,209 30214 TRACE nova.compute.manager [instance: 4fcd874f-f8e5-4a08-9745-838b0636bd83]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2039, in _attach_volume
2013-05-02 12:18:01,209 30214 TRACE nova.compute.manager [instance: 4fcd874f-f8e5-4a08-9745-838b0636bd83]     mountpoint)
2013-05-02 12:18:01,209 30214 TRACE nova.compute.manager [instance: 4fcd874f-f8e5-4a08-9745-838b0636bd83]   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 104, in wrapped
2013-05-02 12:18:01,209 30214 TRACE nova.compute.manager [instance: 4fcd874f-f8e5-4a08-9745-838b0636bd83]     temp_level, payload)
2013-05-02 12:18:01,209 30214 TRACE nova.compute.manager [instance: 4fcd874f-f8e5-4a08-9745-838b0636bd83]   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2013-05-02 12:18:01,209 30214 TRACE nova.compute.manager [instance: 4fcd874f-f8e5-4a08-9745-838b0636bd83]     self.gen.next()
2013-05-02 12:18:01,209 30214 TRACE nova.compute.manager [instance: 4fcd874f-f8e5-4a08-9745-838b0636bd83]   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 79, in wrapped
2013-05-02 12:18:01,209 30214 TRACE nova.compute.manager [instance: 4fcd874f-f8e5-4a08-9745-838b0636bd83]     return f(*args, **kw)
2013-05-02 12:18:01,209 30214 TRACE nova.compute.manager [instance: 4fcd874f-f8e5-4a08-9745-838b0636bd83]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 673, in attach_volume
2013-05-02 12:18:01,209 30214 TRACE nova.compute.manager [instance: 4fcd874f-f8e5-4a08-9745-838b0636bd83]     mount_device)
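
Since the failure is coming from the libvirt driver's attach_volume, one thing I still plan to check on the compute node (the domain name below is only an example; virsh list shows the real name for the instance) is whether the old vdc target is somehow still present in the libvirt domain after the detach:

$ virsh list --all                              # find the libvirt domain backing the instance
$ virsh dumpxml instance-0000001a | grep -C3 "vdc"   # example domain name; look for a leftover vdc disk entry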

Does anyone else have this issue?

_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to     : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
