[Yahoo-eng-team] [Bug 1374999] [NEW] iSCSI volume detach does not correctly remove the multipath device descriptors

2014-09-28 Thread Sampath Priyankara
._remove_multipath_device_descriptor(multipath_device)
return
Therefore, the first two volumes detached fine. However, for the last 
device (the 3rd one in this case), the method returns without calling 
_remove_multipath_device_descriptor because of the following if statement:
 
if not in_use:
    # disconnect if no other multipath devices with same iqn
    self._disconnect_mpath(iscsi_properties, ips_iqns)
    return
It just disconnects the paths but does not remove the multipath device descriptor.
One of the reasons we have to remove it is:
https://bugs.launchpad.net/nova/+bug/1223975

IMO, we should call _remove_multipath_device_descriptor in the above if
statement before the return.
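The proposed ordering can be sketched as a small stand-alone illustration. This is not nova's actual code: the class below is a stub invented for the sketch, and only the helper names _disconnect_mpath and _remove_multipath_device_descriptor come from the driver discussed above.

```python
class MultipathDetachSketch:
    """Stub illustrating the proposed detach order; not nova's real driver."""

    def __init__(self):
        self.removed_descriptors = []   # device maps flushed (cf. 'multipath -f')
        self.disconnected = []          # iSCSI sessions logged out

    def _remove_multipath_device_descriptor(self, multipath_device):
        # In nova this flushes the multipath device map for the device.
        self.removed_descriptors.append(multipath_device)

    def _disconnect_mpath(self, iscsi_properties, ips_iqns):
        # In nova this logs out of the iSCSI sessions for these portals/IQNs.
        self.disconnected.append(iscsi_properties)

    def disconnect_volume(self, multipath_device, iscsi_properties,
                          ips_iqns, in_use):
        if not in_use:
            # Proposed fix: remove the descriptor *before* returning, so the
            # last volume does not leave a stale multipath map behind.
            self._remove_multipath_device_descriptor(multipath_device)
            # disconnect if no other multipath devices with same iqn
            self._disconnect_mpath(iscsi_properties, ips_iqns)
            return
        self._remove_multipath_device_descriptor(multipath_device)
```

With this ordering, detaching the last volume flushes its device map before the iSCSI sessions are torn down, instead of leaving failed paths in multipath -l.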

** Affects: nova
 Importance: Undecided
 Assignee: Sampath Priyankara (sampath-priyankara)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Sampath Priyankara (sampath-priyankara)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1374999

Title:
  iSCSI volume detach does not correctly remove the multipath device
  descriptors

Status in OpenStack Compute (Nova):
  New

Bug description:
  iSCSI volume detach does not correctly remove the multipath device
  descriptors

  Tested environment:
  nova-compute on Ubuntu 14.04.1, iscsi_use_multipath=True, and the iSCSI 
volume backend is EMC VNX 5300.

  I created 3 cinder volumes and attached them to a nova instance. Then I 
detached them one by one. The first 2 volumes detached successfully. The 3rd 
volume also detached successfully, but it left failed multipaths behind.
  Here is the terminal log for the last volume detach.

  openstack@W1DEV103:~/devstack$ cinder list
  
  +--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
  |                  ID                  | Status | Name | Size | Volume Type | Bootable |             Attached to              |
  +--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
  | 56a63288-5cc0-4f5c-9197-cde731172dd8 | in-use | None |  1   |     None    |  false   | 5bd68785-4acf-43ab-ae13-11b1edc3a62e |
  +--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
  openstack@W1CN103:/etc/iscsi$ date;sudo multipath -l
  Fri Sep 19 21:38:13 JST 2014
  360060160cf0036002d1475f6e73fe411 dm-2 DGC,VRAID
  size=1.0G features='1 queue_if_no_path' hwhandler='1 emc' wp=rw
  |-+- policy='round-robin 0' prio=-1 status=active
  | |- 4:0:0:42 sdb 8:16 active undef running
  | |- 5:0:0:42 sdd 8:48 active undef running
  | |- 6:0:0:42 sdf 8:80 active undef running
  | `- 7:0:0:42 sdh 8:112 active undef running
  `-+- policy='round-robin 0' prio=-1 status=enabled
    |- 11:0:0:42 sdp 8:240 active undef running
    |- 8:0:0:42  sdj 8:144 active undef running
    |- 9:0:0:42  sdl 8:176 active undef running
    `- 10:0:0:42 sdn 8:208 active undef running
  openstack@W1CN103:/etc/iscsi$ date;sudo iscsiadm -m session
  Fri Sep 19 21:38:19 JST 2014
  tcp: [10] 172.23.58.228:3260,4 iqn.1992-04.com.emc:cx.fcn00133400150.a7
  tcp: [3] 172.23.58.238:3260,8 iqn.1992-04.com.emc:cx.fcn00133400150.b7
  tcp: [4] 172.23.58.235:3260,20 iqn.1992-04.com.emc:cx.fcn00133400150.b4
  tcp: [5] 172.23.58.236:3260,6 iqn.1992-04.com.emc:cx.fcn00133400150.b5
  tcp: [6] 172.23.58.237:3260,19 iqn.1992-04.com.emc:cx.fcn00133400150.b6
  tcp: [7] 172.23.58.225:3260,16 iqn.1992-04.com.emc:cx.fcn00133400150.a4
  tcp: [8] 172.23.58.226:3260,2 iqn.1992-04.com.emc:cx.fcn00133400150.a5
  tcp: [9] 172.23.58.227:3260,17 iqn.1992-04.com.emc:cx.fcn00133400150.a6

  openstack@W1DEV103:~/devstack$ nova volume-detach 5bd68785-4acf-43ab-ae13-11b1edc3a62e 56a63288-5cc0-4f5c-9197-cde731172dd8
  openstack@W1DEV103:~/devstack$
  openstack@W1DEV103:~/devstack$ cinder list
  
  +--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
  |                  ID                  |   Status  | Name | Size | Volume Type | Bootable |             Attached to              |
  +--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
  | 56a63288-5cc0-4f5c-9197-cde731172dd8 | detaching | None |  1   |     None    |  false   | 5bd68785-4acf-43ab-ae13-11b1edc3a62e |
  +--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
  openstack@W1DEV103:~/devstack$
  openstack@W1DEV103:~/devstack$ cinder list
  
  +--------------------------------------+-----------+------+------+-------------+----------+-------------+
  |                  ID                  |   Status  | Name | Size | Volume Type | Bootable | Attached to |
  +--------------------------------------+-----------+------+------+-------------+----------+-------------+
  | 56a63288-5cc0-4f5c-9197-cde731172dd8 | available | None |  1   |     None    |  false   |             |
  +--------------------------------------+-----------+------+------+-------------+----------+-------------+
[Yahoo-eng-team] [Bug 1340552] [NEW] Volume detach error when use NFS as the cinder backend

2014-07-11 Thread Sampath Priyankara
Public bug reported:

Tested Environment
--
OS: Ubuntu 14.04 LTS
Cinder NFS driver: 
volume_driver=cinder.volume.drivers.nfs.NfsDriver

Error description
--
I used NFS as the cinder storage backend and successfully attached multiple 
volumes to nova instances.
However, when I tried to detach one of them, I found the following error in 
nova-compute.log.

2014-07-07 17:48:46.175 3195 ERROR nova.virt.libvirt.volume [req-a07d077f-2ad1-4558-91fa-ab1895ca4914 c8ac60023a794aed8cec8552110d5f12 fdd538eb5dbf48a98d08e6d64def73d7] Couldn't unmount the NFS share 172.23.58.245:/NFSThinLun2
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume Traceback (most recent call last):
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume   File "/usr/local/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py", line 675, in disconnect_volume
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume     utils.execute('umount', mount_path, run_as_root=True)
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume   File "/usr/local/lib/python2.7/dist-packages/nova/utils.py", line 164, in execute
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume     return processutils.execute(*cmd, **kwargs)
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume   File "/usr/local/lib/python2.7/dist-packages/nova/openstack/common/processutils.py", line 193, in execute
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume     cmd=' '.join(cmd))
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume ProcessExecutionError: Unexpected error while running command.
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume Command: sudo nova-rootwrap /etc/nova/rootwrap.conf umount /var/lib/nova/mnt/16a381ac60f3e130cf26e7d6eb832cb6
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume Exit code: 16
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume Stdout: ''
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume Stderr: 'umount.nfs: /var/lib/nova/mnt/16a381ac60f3e130cf26e7d6eb832cb6: device is busy\numount.nfs: /var/lib/nova/mnt/16a381ac60f3e130cf26e7d6eb832cb6: device is busy\n'
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume

For NFS volumes, every time you detach a volume, nova tries to umount the 
device path.
In nova/virt/libvirt/volume.py:
Line 632: class LibvirtNFSVolumeDriver(LibvirtBaseVolumeDriver):
Line 653:     def disconnect_volume(self, connection_info, disk_dev):
Line 661:         utils.execute('umount', mount_path, run_as_root=True)
 
This works when the device path is not busy.
If the device path is busy (or in use), it should write a message to the log 
and continue.
The problem is that, instead of logging a message, it raises an exception, 
which causes the above error.

I think the reason is that the ‘if’ statement at Line 663 fails to catch the 
device-busy message in exc.message. It looks for ‘target is busy’ in 
exc.message, but umount.nfs returns ‘device is busy’.
Therefore, the current code skips the ‘if’ branch, runs the ‘else’, and raises 
the exception.

How to reproduce
--
(1) Prepare an NFS share and set it as the storage backend of your cinder
(see 
http://docs.openstack.org/grizzly/openstack-block-storage/admin/content/NFS-driver.html)
In cinder.conf
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=path to your nfs share list file
(2) Create 2 empty volumes from cinder
(3) Create a nova instance and attach the above 2 volumes
(4) Then, try to detach one of them.
You will get the error in nova-compute.log: “Couldn't unmount the NFS share 
<your NFS mount path on nova-compute>”

Proposed Fix
--
I’m not sure whether any other OSs report ‘target is busy’ in their umount 
error output.
Therefore, the first fix that comes to mind is to change the ‘if’ statement:
Before fix:
if 'target is busy' in exc.message:
After fix:
if 'device is busy' in exc.message:
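Since the busy message differs between umount implementations (util-linux prints ‘target is busy’, while umount.nfs as seen in the log above prints ‘device is busy’), a more robust variant would accept either string. The helper below is a hypothetical sketch for illustration; is_umount_busy_error is an invented name, not a nova function:

```python
def is_umount_busy_error(error_message):
    """Return True if an umount failure message indicates the mount is in use.

    util-linux umount reports 'target is busy', while umount.nfs (as seen in
    the nova-compute log above) reports 'device is busy'; check for both.
    """
    busy_markers = ('target is busy', 'device is busy')
    return any(marker in error_message for marker in busy_markers)
```

In disconnect_volume, the caught exception's message could be passed to such a check, logging and continuing when it returns True and re-raising otherwise.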

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340552

Title:
  Volume detach error when use NFS as the cinder backend

Status in OpenStack Compute (Nova):
  New
