[Yahoo-eng-team] [Bug 1752115] Re: detach multiattach volume disconnects innocent bystander

2018-03-02, Matt Riedemann
** Changed in: nova
   Importance: Undecided => High

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Tags added: multiattach volumes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1752115

Title:
  detach multiattach volume disconnects innocent bystander

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) queens series:
  New

Bug description:
  Detaching a multi-attached lvm volume from one server causes the
  other server to lose connectivity to the volume. I found this while
  developing a new tempest test for this scenario.

  - create 2 instances on the same host, both simple instances with
    ephemeral disks
  - create a multi-attach lvm volume and attach it to both instances
  - check that you can re-read the partition table from inside each
    instance (via ssh):

 $ sudo blockdev --rereadpt /dev/vdb

  This succeeds on both instances (no output or error message is
  returned).

  - detach the volume from one of the instances
  - recheck connectivity. The expected result is that the command now fails
    in the instance where the volume was detached, but it also fails on the
    instance where the volume is still supposedly attached:

 $ sudo blockdev --rereadpt /dev/vdb
 BLKRRPART: Input/output error
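
  For context, the connectivity check above amounts to something like the
  following tempest-style sketch. This is illustrative only: ssh_vm1 and
  ssh_vm2 are assumed to be tempest RemoteClient objects already connected
  to the two guests, and exec_command() raises when the command exits
  non-zero, which is how the BLKRRPART I/O error would surface.

      # Illustrative sketch: re-read the partition table on both guests
      # over ssh and expect success while the volume is attached.
      # exec_command() raises on a non-zero exit status, so the I/O
      # error after the bad detach fails the check.
      for ssh in (ssh_vm1, ssh_vm2):
          ssh.exec_command('sudo blockdev --rereadpt /dev/vdb')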

  cinder & nova still think that the volume is attached correctly:

  $ cinder show 2cf26a15-8937-4654-ba81-70cbcb97a238 | grep attachment
  | attachment_ids | ['f5876aff-5b5b-45a0-a020-515ca339eae4'] |

  $ nova show vm1 | grep attached
  | os-extended-volumes:volumes_attached | [{"id": "2cf26a15-8937-4654-ba81-70cbcb97a238", "delete_on_termination": false}] |

  cinder version:

  :/opt/stack/cinder$ git show
  commit 015b1053990f00d1522c1074bcd160b4b57a5801
  Merge: 856e636 481535e
  Author: Zuul 
  Date:   Thu Feb 22 14:00:17 2018 +

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1752115/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1752115] Re: detach multiattach volume disconnects innocent bystander

2018-02-27, John Griffith
So looking into this, the problem appears to be that Nova calls the brick
initiator's disconnect_volume method indiscriminately. Brick currently has
no way to interrogate whether a connection is still in use by someone
else, and I'm not sure that something like that could be added in this
case.
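
For readers unfamiliar with the call in question, the teardown looks
roughly like the sketch below. This is a simplified illustration, not the
actual nova call site; the connection_properties and device_info values
here are placeholders for the real connection data nova gets back from
cinder's initialize_connection.

    # Simplified sketch of the os-brick teardown nova performs on detach.
    # disconnect_volume() tears down the host-level connection (here the
    # iSCSI session to the LVM target), which is shared by every instance
    # on this host that uses the volume -- hence the collateral disconnect.
    from os_brick.initiator import connector

    # Placeholder connection data; real values come from cinder.
    connection_properties = {'target_iqn': '...', 'target_portal': '...',
                             'target_lun': 0}
    device_info = None

    conn = connector.InitiatorConnector.factory('ISCSI', root_helper='sudo')
    conn.disconnect_volume(connection_properties, device_info)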

My first thought was that it would be logical to check for multiattach on
the same host here:

  https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L1249

by using objects.BlockDeviceMapping.get_by_volume(). HOWEVER, it turns out
that's another special thing that isn't allowed when a volume is
multiattach=True (I haven't figured out why that restriction is there yet,
but I'm looking).
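
Purely to illustrate the kind of check I mean, something along the lines
of the sketch below. This is hypothetical, not a proposed patch: it
assumes the list-level BlockDeviceMappingList lookup behaves as its name
suggests (rather than the blocked BlockDeviceMapping.get_by_volume()
call), and it glosses over filtering the remaining BDMs down to this
host, which a real check would need to do via each instance's host field.

    # Hypothetical guard: only tear down the host-level connection when
    # no other instance still has the volume attached. Filtering the
    # remaining BDMs to this host is omitted for brevity.
    from nova import objects

    def other_attachments_exist(context, volume_id, instance_uuid):
        bdms = objects.BlockDeviceMappingList.get_by_volume(
            context, volume_id)
        return any(bdm.instance_uuid != instance_uuid for bdm in bdms)

    # In the detach path the idea would be:
    # if not other_attachments_exist(ctxt, volume_id, instance.uuid):
    #     <perform the os-brick disconnect as today>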

** Also affects: nova
   Importance: Undecided
   Status: New

** No longer affects: cinder
