Thanks to Tzach I was able to get access to a downstream environment and
confirm what's going on here.

c-vol appears to be creating a fresh secret for the new volume, and that
secret is not capable of unlocking the volume. IMHO, when creating a volume
from an image that already has a secret associated with it, c-vol should
simply copy the associated secret across during the creation process; a rough
sketch of that idea is below.
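
A minimal sketch of that idea (not the actual c-vol code path, just the shape
of the fix; it assumes castellan's generic key manager API, and the helper
name is hypothetical):

# Sketch only: reuse the key material already associated with the image
# instead of generating a fresh key that cannot unlock the LUKS payload.
# _copy_image_encryption_key is a hypothetical helper name.
from castellan import key_manager

def _copy_image_encryption_key(context, image_encryption_key_id):
    """Return a new secret UUID holding the same key material."""
    key_mgr = key_manager.API()
    # Fetch the symmetric key already associated with the source image ...
    key = key_mgr.get(context, image_encryption_key_id)
    # ... and store a copy for the new volume so the original LUKS
    # passphrase still works; the returned UUID would be recorded as the
    # new volume's encryption_key_id.
    return key_mgr.store(context, key)

Copying the key material (rather than sharing the image's secret UUID) keeps
the volume's secret independently deletable, but either approach would leave
the passphrase able to unlock the LUKS payload.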

Additionally, the create flow here is really odd: I can see that we download
the image twice and try to import it into rbd twice. The first import appears
to produce a fresh LUKS-encrypted image, while the second is a raw-to-raw
conversion that does nothing to the original LUKS encryption of the image.

Anyway, I'm removing nova from this bug and adding cinder. More detailed
notes can be found below.

[ notes ]

I can see multiple keys being used by n-cpu:

2018-05-17 11:47:47.382 1 DEBUG barbicanclient.v1.secrets [req-
6c45d622-ecf1-4cbb-a038-b8eaaf776818 ea26e0f59cf44f909a0dbe86f1f21078
3d16a4daf99042d5adbc4f0d55dbf322 - default default] Getting secret -
Secret href: http://172.17.1.12:9311/v1/secrets/a3c400ce-
8b94-4ee5-90e9-564bab6c823b get /usr/lib/python2.7/site-
packages/barbicanclient/v1/secrets.py:457

2018-05-17 11:52:26.413 1 DEBUG barbicanclient.v1.secrets [req-dfe882de-
0b11-4a70-b527-78b47a7faf2e ea26e0f59cf44f909a0dbe86f1f21078
3d16a4daf99042d5adbc4f0d55dbf322 - default default] Getting secret -
Secret href: http://172.17.1.12:9311/v1/secrets/3b88eedc-813e-
4e01-bec7-d8d2b7d2ef42 get /usr/lib/python2.7/site-
packages/barbicanclient/v1/secrets.py:457

Fetching these, we can see that they are not the same:

$ curl -vv -H "X-Auth-Token: $TOKEN" -H 'Accept: application/octet-stream' \
    -o a3c400ce-8b94-4ee5-90e9-564bab6c823b \
    http://10.0.0.106:9311/v1/secrets/a3c400ce-8b94-4ee5-90e9-564bab6c823b
* About to connect() to 10.0.0.106 port 9311 (#0)
*   Trying 10.0.0.106...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Connected to 10.0.0.106 (10.0.0.106) port 9311 (#0)
> GET /v1/secrets/a3c400ce-8b94-4ee5-90e9-564bab6c823b HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 10.0.0.106:9311
> X-Auth-Token: 
> gAAAAABa_W8gLDSMIfr7hzDC385Qpjewpy2awYIrqyO0O8U5VceB4YX_xyDlH7zBBPyR68L5krAEvCzkJq-b335TbGGeqQ_EDFNa9pclZo7Qm3m0_E8ofv0W9Ny8XWwhKERNK-3BxuUUMf1N7CgexHnkIgFye23EpzZF8lcxAKWmNCIiY_p2h9g
> Accept: application/octet-stream
> 
< HTTP/1.1 200 OK
< Date: Thu, 17 May 2018 12:12:16 GMT
< Server: Apache
< x-openstack-request-id: req-e32e0e58-8234-4fd3-90d8-50f9f72d617c
< Content-Length: 32
< Content-Type: application/octet-stream
< 
{ [data not shown]
100    32  100    32    0     0    115      0 --:--:-- --:--:-- --:--:--   115
* Connection #0 to host 10.0.0.106 left intact

$ curl -vv -H "X-Auth-Token: $TOKEN" -H 'Accept: application/octet-stream' \
    -o 3b88eedc-813e-4e01-bec7-d8d2b7d2ef42 \
    http://10.0.0.106:9311/v1/secrets/3b88eedc-813e-4e01-bec7-d8d2b7d2ef42
* About to connect() to 10.0.0.106 port 9311 (#0)
*   Trying 10.0.0.106...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Connected to 10.0.0.106 (10.0.0.106) port 9311 (#0)
> GET /v1/secrets/3b88eedc-813e-4e01-bec7-d8d2b7d2ef42 HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 10.0.0.106:9311
> X-Auth-Token: 
> gAAAAABa_W8gLDSMIfr7hzDC385Qpjewpy2awYIrqyO0O8U5VceB4YX_xyDlH7zBBPyR68L5krAEvCzkJq-b335TbGGeqQ_EDFNa9pclZo7Qm3m0_E8ofv0W9Ny8XWwhKERNK-3BxuUUMf1N7CgexHnkIgFye23EpzZF8lcxAKWmNCIiY_p2h9g
> Accept: application/octet-stream
> 
< HTTP/1.1 200 OK
< Date: Thu, 17 May 2018 12:12:33 GMT
< Server: Apache
< x-openstack-request-id: req-cd6964b5-eaac-4d97-b3a5-ae5dc2d2e474
< Content-Length: 32
< Content-Type: application/octet-stream
< 
{ [data not shown]
100    32  100    32    0     0    112      0 --:--:-- --:--:-- --:--:--   112
* Connection #0 to host 10.0.0.106 left intact

Decoding as n-cpu does to set the passphrase (urgh!):

$ python
[..]
>>> import binascii
>>> for key in ['3b88eedc-813e-4e01-bec7-d8d2b7d2ef42',
...             'a3c400ce-8b94-4ee5-90e9-564bab6c823b']:
...     # Read each 32 byte symmetric key and hexlify it, as n-cpu does
...     # when building the LUKS passphrase from the barbican secret.
...     with open(key) as f:
...         binascii.hexlify(f.read()).decode('utf-8')
...
u'08293dc9fc1dbbb34fa3b464a851db40b3bd4f819b96ababff9dbc13eb9ebe05'
u'a01c2b1721ce9e893ee7856aa7b9c6252d3d6ff77c9ab20c4e83e7355958352b'

$ sudo rbd -k /etc/ceph/ceph.client.openstack.keyring --user openstack export \
    volumes/volume-17276f8b-c21d-479e-a58a-522319d01ff8 \
    17276f8b-c21d-479e-a58a-522319d01ff8.img
Exporting image: 100% complete...done.

$ sudo rbd -k /etc/ceph/ceph.client.openstack.keyring --user openstack export \
    volumes/volume-50fb8cc6-fd69-4a7e-844e-e0629d81a3b7 \
    50fb8cc6-fd69-4a7e-844e-e0629d81a3b7.img
Exporting image: 100% complete...done.

We can see the same LUKS header in both volumes:

$ qemu-img info 17276f8b-c21d-479e-a58a-522319d01ff8.img
image: 17276f8b-c21d-479e-a58a-522319d01ff8.img
file format: luks
virtual size: 1.0G (1073741824 bytes)
disk size: 134M
encrypted: yes
Format specific information:
    ivgen alg: plain64
    hash alg: sha256
    cipher alg: aes-256
    uuid: 8d41fc3e-d134-480c-a35e-a42bf8a5763b
    cipher mode: xts
    slots:
        [0]:
            active: true
            iters: 1038604
            key offset: 4096
            stripes: 4000
        [1]:
            active: false
            key offset: 262144
        [2]:
            active: false
            key offset: 520192
        [3]:
            active: false
            key offset: 778240
        [4]:
            active: false
            key offset: 1036288
        [5]:
            active: false
            key offset: 1294336
        [6]:
            active: false
            key offset: 1552384
        [7]:
            active: false
            key offset: 1810432
    payload offset: 2068480
    master key iters: 259425

$ qemu-img info 50fb8cc6-fd69-4a7e-844e-e0629d81a3b7.img 
image: 50fb8cc6-fd69-4a7e-844e-e0629d81a3b7.img
file format: luks
virtual size: 2.0G (2145415168 bytes)
disk size: 112M
encrypted: yes
Format specific information:
    ivgen alg: plain64
    hash alg: sha256
    cipher alg: aes-256
    uuid: 8d41fc3e-d134-480c-a35e-a42bf8a5763b
    cipher mode: xts
    slots:
        [0]:
            active: true
            iters: 1038604
            key offset: 4096
            stripes: 4000
        [1]:
            active: false
            key offset: 262144
        [2]:
            active: false
            key offset: 520192
        [3]:
            active: false
            key offset: 778240
        [4]:
            active: false
            key offset: 1036288
        [5]:
            active: false
            key offset: 1294336
        [6]:
            active: false
            key offset: 1552384
        [7]:
            active: false
            key offset: 1810432
    payload offset: 2068480
    master key iters: 259425

However, only one key works here:

$ sudo losetup -f 17276f8b-c21d-479e-a58a-522319d01ff8.img
$ sudo losetup -f 50fb8cc6-fd69-4a7e-844e-e0629d81a3b7.img

$ sudo cryptsetup luksOpen /dev/loop0 17276f8b-c21d-479e-a58a-522319d01ff8
Enter passphrase for /home/heat-admin/17276f8b-c21d-479e-a58a-522319d01ff8.img: 
a01c2b1721ce9e893ee7856aa7b9c6252d3d6ff77c9ab20c4e83e7355958352b

$ sudo cryptsetup luksOpen /dev/loop1 50fb8cc6-fd69-4a7e-844e-e0629d81a3b7
Enter passphrase for /home/heat-admin/50fb8cc6-fd69-4a7e-844e-e0629d81a3b7.img: 
a01c2b1721ce9e893ee7856aa7b9c6252d3d6ff77c9ab20c4e83e7355958352b

$ sudo cryptsetup close 17276f8b-c21d-479e-a58a-522319d01ff8
$ sudo cryptsetup close 50fb8cc6-fd69-4a7e-844e-e0629d81a3b7
$ sudo cryptsetup luksOpen /dev/loop0 17276f8b-c21d-479e-a58a-522319d01ff8
Enter passphrase for /home/heat-admin/17276f8b-c21d-479e-a58a-522319d01ff8.img: 
08293dc9fc1dbbb34fa3b464a851db40b3bd4f819b96ababff9dbc13eb9ebe05
No key available with this passphrase.
Enter passphrase for /home/heat-admin/17276f8b-c21d-479e-a58a-522319d01ff8.img: 
Error reading passphrase from terminal.
$ sudo cryptsetup luksOpen /dev/loop1 50fb8cc6-fd69-4a7e-844e-e0629d81a3b7
Enter passphrase for /home/heat-admin/50fb8cc6-fd69-4a7e-844e-e0629d81a3b7.img: 
08293dc9fc1dbbb34fa3b464a851db40b3bd4f819b96ababff9dbc13eb9ebe05
No key available with this passphrase.
Enter passphrase for /home/heat-admin/50fb8cc6-fd69-4a7e-844e-e0629d81a3b7.img: 
Error reading passphrase from terminal.
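
For completeness, one way to cross-check which barbican secret each volume
references (an assumption about where to look, since volume-show doesn't
expose it here; this also assumes direct DB access on the controller and the
default cinder schema) is to query the encryption_key_id column:

$ mysql cinder -e "SELECT id, encryption_key_id FROM volumes \
    WHERE id IN ('17276f8b-c21d-479e-a58a-522319d01ff8', \
                 '50fb8cc6-fd69-4a7e-844e-e0629d81a3b7');"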


** Project changed: nova => cinder

https://bugs.launchpad.net/bugs/1764125

Title:
  Re-attaching an encrypted(Barbican) Cinder (RBD) volume to an instance
  fails

Status in Cinder:
  New

Bug description:
  Description of problem:
  An encrypted (Barbican) RBD Cinder volume was attached to an instance and
  data was written to it. The volume was then detached; when trying to
  reattach it to the same instance, the attach fails, with odd errors in
  nova-compute.log:

  2018-04-15 13:14:06.274 1 ERROR nova.compute.manager [instance: 
923c5318-8502-4f85-a215-78afc4fd641b]     uuid=managed_object_id)
  2018-04-15 13:14:06.274 1 ERROR nova.compute.manager [instance: 
923c5318-8502-4f85-a215-78afc4fd641b] ManagedObjectNotFoundError: Key not 
found, uuid: 7912eac8-2652-4c92-b53f-3db4ecca7bc7

  2018-04-15 13:14:06.523 1 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/cinderclient/client.py", line 177, in request
  2018-04-15 13:14:06.523 1 ERROR oslo_messaging.rpc.server     raise 
exceptions.from_response(resp, body)
  2018-04-15 13:14:06.523 1 ERROR oslo_messaging.rpc.server 
VolumeAttachmentNotFound: Volume attachment 
c17e2b89-5a36-4e7e-8c71-b975f2f5ccb3 could not be found.
  2018-04-15 13:14:06.523 1 ERROR oslo_messaging.rpc.server 

  
  How reproducible:
  Unsure; it looks like it happens every time I try to re-attach.

  Steps to Reproduce:
  1. Boot an instance
  2. Create an encrypted (Barbican-backed) Cinder (RBD) volume, attach it to
     the instance and write data to it.
  3. Detach the volume from the instance.
  4. Try to reattach the same volume to the same instance.

  $nova volume-attach 923c5318-8502-4f85-a215-78afc4fd641b
  16584072-ef78-4a80-91ab-cbd47e9bc70d auto

  5. Volume fails to attach.
     No error; the volume simply remains unattached:

  $ cinder list
  +--------------------------------------+-----------+-------------+------+----------------------------+----------+--------------------------------------+
  | ID                                   | Status    | Name        | Size | Volume Type                | Bootable | Attached to                          |
  +--------------------------------------+-----------+-------------+------+----------------------------+----------+--------------------------------------+
  | 16584072-ef78-4a80-91ab-cbd47e9bc70d | available | 2-Encrypted | 1    | LuksEncryptor-Template-256 | false    |                                      |
  +--------------------------------------+-----------+-------------+------+----------------------------+----------+--------------------------------------+

  
  Actual results:
  Volume fails to attach. 

  
  Expected results:
  Volume should successfully reattach. 

  
  Environment / Version-Release number of selected component (if applicable):
  rhel7.5 
  openstack-nova-conductor-17.0.2-0.20180323024604.0390d5f.el7ost.noarch
  python-nova-17.0.2-0.20180323024604.0390d5f.el7ost.noarch
  python-novaclient-9.1.1-1.el7ost.noarch
  openstack-cinder-12.0.1-0.20180326201852.46c4ec1.el7ost.noarch
  openstack-nova-scheduler-17.0.2-0.20180323024604.0390d5f.el7ost.noarch
  openstack-nova-console-17.0.2-0.20180323024604.0390d5f.el7ost.noarch
  puppet-cinder-12.3.1-0.20180222074326.18152ac.el7ost.noarch
  openstack-nova-compute-17.0.2-0.20180323024604.0390d5f.el7ost.noarch
  python2-cinderclient-3.5.0-1.el7ost.noarch
  python-cinder-12.0.1-0.20180326201852.46c4ec1.el7ost.noarch
  openstack-nova-api-17.0.2-0.20180323024604.0390d5f.el7ost.noarch
  openstack-nova-novncproxy-17.0.2-0.20180323024604.0390d5f.el7ost.noarch
  puppet-nova-12.3.1-0.20180319062741.9db79a6.el7ost.noarch
  openstack-nova-common-17.0.2-0.20180323024604.0390d5f.el7ost.noarch
  openstack-nova-migration-17.0.2-0.20180323024604.0390d5f.el7ost.noarch
  openstack-nova-placement-api-17.0.2-0.20180323024604.0390d5f.el7ost.noarch

  Libvirt + KVM
  Neutron networking
  Cinder volume is RBD backed and encrypted via Barbican.

