[openstack-dev] [cinder][nova] - Barbican w/Live Migration in DevStack Multinode

2018-07-30 Thread Walsh, Helen
Hi OpenStack Community,

I am having some issues with key management in a multinode devstack (from 
master branch 27th July '18) environment where Barbican is the configured 
key_manager.  I have followed setup instructions from the following pages:

  *   https://docs.openstack.org/barbican/latest/contributor/devstack.html 
(manual configuration)
  *   https://docs.openstack.org/cinder/latest/configuration/block-storage/volume-encryption.html
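
For reference, the encrypted volume type was created as described on the cinder
volume-encryption page above, with a command along these lines (the option
values match the encryption metadata in the nova logs further down; the type
name LUKS is just what I called it):

openstack volume type create LUKS \
  --encryption-provider luks \
  --encryption-cipher aes-xts-plain64 \
  --encryption-key-size 256 \
  --encryption-control-location front-end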

So far:

  *   Unencrypted block volumes can be attached to instances on any compute node
  *   Instances with unencrypted volumes can also be live migrated to other 
compute nodes
  *   Encrypted bootable volumes can be created successfully
  *   Instances can be launched using these encrypted volumes when the instance 
is spawned on demo_machine1 (controller & compute node)
  *   Instances cannot be launched using encrypted volumes when the instance is 
spawned on demo_machine2 or demo_machine3 (compute only); the same failure can 
be seen in the nova logs on both compute nodes:

Jul 30 14:35:18 demo_machine2 nova-compute[25686]: DEBUG cinderclient.v3.client 
[None req-3c977faa-a64c-4536-82c8-d1dbaf856b99 admin admin] GET call to 
cinderv3 for 
http://10.0.0.63/volume/v3/3f22a0262a7b4832a08c24ac0295cbd9/volumes/296148bf-edb8-4c9f-88c2-44464907f7e7/encryption
 used request id req-71fa7f20-c0bc-46c3-9f07-5866344d31a1 {{(pid=25686) request 
/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:844}}

Jul 30 14:35:18 demo_machine2 nova-compute[25686]: DEBUG os_brick.encryptors 
[None req-3c977faa-a64c-4536-82c8-d1dbaf856b99 admin admin] Using volume 
encryption metadata '{u'cipher': u'aes-xts-plain64', u'encryption_key_id': 
u'da7ee21c-67ff-4d74-95a0-18ee6c25d85a', u'provider': u'luks', u'key_size': 
256, u'control_location': u'front-end'}' for connection: {'status': 
u'attaching', 'detached_at': u'', u'volume_id': 
u'296148bf-edb8-4c9f-88c2-44464907f7e7', 'attach_mode': u'null', 
'driver_volume_type': u'iscsi', 'instance': 
u'e0dc6eac-09bb-4232-bea7-7b8b161cfa31', 'attached_at': 
u'2018-07-30T13:35:17.00', 'serial': 
u'296148bf-edb8-4c9f-88c2-44464907f7e7', 'data': {'device_path': 
'/dev/disk/by-id/scsi-SEMC_SYMMETRIX_900049_wy000', u'target_discovered': True, 
u'encrypted': True, u'qos_specs': None, u'target_iqn': 
u'iqn.1992-04.com.emc:69700bcbb7112504018f', u'target_portal': 
u'192.168.0.60:3260', u'volume_id': u'296148bf-edb8-4c9f-88c2-44464907f7e7', 
u'target_lun': 1, u'access_mode': u'rw'}} {{(pid=25686) get_encryption_metadata 
/usr/local/lib/python2.7/dist-packages/os_brick/encryptors/__init__.py:125}}

Jul 30 14:35:18 demo_machine2 nova-compute[25686]: WARNING 
keystoneauth.identity.generic.base [None 
req-3c977faa-a64c-4536-82c8-d1dbaf856b99 admin admin] Failed to discover 
available identity versions when contacting http://localhost/identity/v3. 
Attempting to parse version from URL.: NotFound: Not Found (HTTP 404)

Jul 30 14:35:18 demo_machine2 nova-compute[25686]: ERROR 
castellan.key_manager.barbican_key_manager [None 
req-3c977faa-a64c-4536-82c8-d1dbaf856b99 admin admin] Error creating Barbican 
client: Could not find versioned identity endpoints when attempting to 
authenticate. Please check that your auth_url is correct. Not Found (HTTP 404): 
DiscoveryFailure: Could not find versioned identity endpoints when attempting 
to authenticate. Please check that your auth_url is correct. Not Found (HTTP 
404)

Nova on every node has [key_manager] configured as follows:
[key_manager]
backend = barbican
auth_url = http://10.0.0.63/identity/
### Tried with and without the below config options, same result
# auth_type = password
# password = devstack
# username = barbican
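
One thing that stands out to me: even with auth_url set as above, the failing
request goes to http://localhost/identity/v3. Below is a minimal sketch of the
configuration I suspect castellan is actually reading; the [barbican] section
and its auth_endpoint option are my assumption, inferred from that localhost
default:

[key_manager]
backend = barbican

# Assumption: castellan's Barbican backend takes its Keystone endpoint from
# auth_endpoint in the [barbican] section, which appears to default to
# http://localhost/identity/v3 -- the exact URL in the DiscoveryFailure above.
[barbican]
auth_endpoint = http://10.0.0.63/identity/v3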

Any assistance here would be greatly appreciated. I have spent a lot of time 
looking for additional information on using Barbican in multinode devstack 
environments or with live migration, but there is nothing out there; everything 
is for all-in-one environments, and I'm not having any issues when everything 
is on one node. I am wondering whether there is something I am missing in terms 
of services in a multinode devstack environment. Qualification of Barbican in a 
multinode environment is outside the recommended test configuration, but 
following the docs it looks very straightforward.

Some information on the three nodes in my environment is below; if there is 
any other information I can provide, let me know. Thanks for the help!

Node & Service Breakdown
Node 1 (Controller & Compute)
stack@demo_machine1:~$ openstack service list
+----------------------------------+-------------+----------------+
| ID                               | Name        | Type           |
+----------------------------------+-------------+----------------+
| 43a1334c755c4c81969565097cc9c30c | cinder      | volume         |
| 52a8927c09154e33900f24c7c95a9f8b | cinderv2    | volumev2       |
| 5427a9dff3b6477197062e1747843c4d | nova_legacy | compute_legacy |
| ... (remainder of output truncated)

[openstack-dev] FW: [cinder]

2018-05-24 Thread Walsh, Helen
Sending on Michael's behalf...


From: McAleer, Michael
Sent: Monday 21 May 2018 15:18
To: 'openstack-dev@lists.openstack.org'
Subject: FW: [openstack-dev] [cinder]

Hi Cinder Devs,

I would like to ask a question concerning Cinder CLI commands in DevStack 
13.0.0.0b2.dev167.

I stacked a clean environment this morning to run through some sanity tests of 
new features, two of which are listing manageable volumes and listing 
manageable snapshots. When I attempt to run the volume command using the Cinder 
CLI I get an invalid choice error in response:

stack@openstack-dev:~/devstack$ cinder manageable-list 
openstack-dev@VMAX_ISCSI_DIAMOND#Diamond+DSS+SRP_1+000297000333
[usage output omitted]
error: argument : invalid choice: u'manageable-list'

The same behaviour can be seen when listing manageable snapshots: also an 
invalid choice error. I looked for an equivalent command in the OpenStack 
volume CLI, but there are no similar commands that would return a list of 
manageable volumes or snapshots.
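
One thing I have not yet ruled out is the API microversion. As far as I know 
the manageable listings were added in version 3.8, and the cinder client hides 
commands that the requested API version does not support, so something along 
these lines might behave differently (3.8 as the minimum is my assumption):

cinder --os-volume-api-version 3.8 manageable-list \
  openstack-dev@VMAX_ISCSI_DIAMOND#Diamond+DSS+SRP_1+000297000333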

I didn't see any deprecation notices for the commands, and they worked fine in 
earlier DevStack environments in this Rocky dev cycle, so I am just wondering 
what the status of these commands is and whether this is possibly an oversight.

Thanks!
Michael

Michael McAleer
Software Engineer 1, Core Technologies
Dell EMC | Enterprise Storage Division
Phone: +353 21 428 1729
michael.mcal...@dell.com
Ireland COE, Ovens, Co. Cork, Ireland


[openstack-dev] Recall: [Cinder] string freeze exception for VMAX driver

2017-08-03 Thread Walsh, Helen
Walsh, Helen would like to recall the message, "[openstack-dev] [Cinder] 
string freeze exception for VMAX driver".


[openstack-dev] [Cinder] string freeze exception for VMAX driver

2017-08-03 Thread Walsh, Helen

To whom it may concern,

I would like to request a string freeze exception for 2 patches that are on the 
merge queue for Pike.



1. VMAX driver - align VMAX QOS settings with front end  (CI Passed)
https://review.openstack.org/#/c/484885/7/cinder/volume/drivers/dell_emc/vmax/rest.py
  line 800 (removal of exception message)

Although its primary aim is to align QoS with the front-end settings, it 
indirectly fixes a lazy loading error we were seeing around QoS which 
occasionally broke CI on previous patches.



2. VMAX driver - seamless upgrade from SMI-S to REST (CI Pending)
https://review.openstack.org/#/c/482138/19/cinder/volume/drivers/dell_emc/vmax/common.py
  lines 1400, 1455 (message changes)

This is vital for the reuse of volumes from Ocata to Pike.  In Ocata we used 
SMI-S to interface with the VMAX; in Pike we are using REST.  A few changes 
were needed to make this transition as seamless as possible.

Thank you,
Helen




[openstack-dev] string freeze exception for VMAX driver

2017-08-02 Thread Walsh, Helen


To whom it may concern,

I would like to request a string freeze exception for 2 patches that are on the 
merge queue for Pike.



1. VMAX driver - align VMAX QOS settings with front end  (CI Passed)
https://review.openstack.org/#/c/484885/7/cinder/volume/drivers/dell_emc/vmax/rest.py
  line 800 (removal of exception message)

Although its primary aim is to align QoS with the front-end settings, it 
indirectly fixes a lazy loading error we were seeing around QoS which 
occasionally broke CI on previous patches.



2. VMAX driver - seamless upgrade from SMI-S to REST (CI Pending)
https://review.openstack.org/#/c/482138/19/cinder/volume/drivers/dell_emc/vmax/common.py
  lines 1400, 1455 (message changes)

This is vital for the reuse of volumes from Ocata to Pike.  In Ocata we used 
SMI-S to interface with the VMAX; in Pike we are using REST.  A few changes 
were needed to make this transition as seamless as possible.

Thank you,
Helen

