This is still showing up in logstash; there are 18 failures in the last 10 days:
http://logstash.openstack.org/#eyJmaWVsZHMiOiBbXSwgInNlYXJjaCI6ICJtZXNzYWdlOlwiVGVzdEVuY3J5cHRlZENpbmRlclZvbHVtZXNcIiBBTkQgKCAobWVzc2FnZTpcImRldGFjaF92b2x1bWVcIiBBTkRcbiBtZXNzYWdlOlwiVHlwZUVycm9yOiA8dHlwZSAnTm9uZVR5cGUnPiBjYW4ndCBiZSBkZWNvZGVkXCIpIE9SXG4obWVzc2FnZTpcImF0dGFjaF92b2x1bWVcIiBBTkRcbiBtZXNzYWdlOlwiRGV2aWNlSXNCdXN5OiBUaGUgc3VwcGxpZWQgZGV2aWNlICh2ZGIpIGlzIGJ1c3lcIikpIEFORFxudGFnczpcInNjcmVlbi1uLWNwdS50eHRcIlxuIiwgInRpbWVmcmFtZSI6ICI4NjQwMDAiLCAiZ3JhcGhtb2RlIjogImNvdW50IiwgIm9mZnNldCI6IDB9

** Changed in: nova
   Milestone: juno-3 => None

** No longer affects: cinder

** Changed in: nova
       Status: Fix Released => New

** Changed in: nova
       Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348204

Title:
  test_encrypted_cinder_volumes_cryptsetup times out waiting for volume
  to be available

Status in OpenStack Compute (Nova):
  Confirmed
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  http://logs.openstack.org/15/109115/1/check/check-tempest-dsvm-full/168a5dd/console.html#_2014-07-24_01_07_09_115

  2014-07-24 01:07:09.116 | tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes.test_encrypted_cinder_volumes_cryptsetup[compute,image,volume]
  2014-07-24 01:07:09.116 | ----------------------------------------------------------------------------------------------------------------------------------------
  2014-07-24 01:07:09.116 | 
  2014-07-24 01:07:09.116 | Captured traceback:
  2014-07-24 01:07:09.117 | ~~~~~~~~~~~~~~~~~~~
  2014-07-24 01:07:09.117 |     Traceback (most recent call last):
  2014-07-24 01:07:09.117 |       File "tempest/test.py", line 128, in wrapper
  2014-07-24 01:07:09.117 |         return f(self, *func_args, **func_kwargs)
  2014-07-24 01:07:09.117 |       File "tempest/scenario/test_encrypted_cinder_volumes.py", line 63, in test_encrypted_cinder_volumes_cryptsetup
  2014-07-24 01:07:09.117 |         self.attach_detach_volume()
  2014-07-24 01:07:09.117 |       File "tempest/scenario/test_encrypted_cinder_volumes.py", line 49, in attach_detach_volume
  2014-07-24 01:07:09.117 |         self.nova_volume_detach()
  2014-07-24 01:07:09.117 |       File "tempest/scenario/manager.py", line 757, in nova_volume_detach
  2014-07-24 01:07:09.117 |         self._wait_for_volume_status('available')
  2014-07-24 01:07:09.117 |       File "tempest/scenario/manager.py", line 710, in _wait_for_volume_status
  2014-07-24 01:07:09.117 |         self.volume_client.volumes, self.volume.id, status)
  2014-07-24 01:07:09.118 |       File "tempest/scenario/manager.py", line 230, in status_timeout
  2014-07-24 01:07:09.118 |         not_found_exception=not_found_exception)
  2014-07-24 01:07:09.118 |       File "tempest/scenario/manager.py", line 296, in _status_timeout
  2014-07-24 01:07:09.118 |         raise exceptions.TimeoutException(message)
  2014-07-24 01:07:09.118 |     TimeoutException: Request timed out
  2014-07-24 01:07:09.118 |     Details: Timed out waiting for thing 4ef6a14a-3fce-417f-aa13-5aab1789436e to become available

  I've actually been seeing this out of tree in our internal CI as well, but I
  thought it was just us or our slow VMs; this is the first I've seen it
  upstream.

  From the traceback in the console log, it looks like the volume does get to
  available status, because it doesn't get out of that state when tempest is
  trying to delete the volume on tear down.

  To manage notifications about this bug go to:
  https://bugs.launchpad.net/nova/+bug/1348204/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : [email protected]
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
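For context on how the timeout above arises: tempest's `_status_timeout` polls the volume's status until it matches the target or the deadline passes. A minimal sketch of that polling pattern (hypothetical helper names, not tempest's actual implementation; the real code also checks for error states and not-found exceptions):

```python
import time


def wait_for_status(get_status, target, timeout=196.0, interval=1.0):
    """Poll get_status() until it returns target, or raise on timeout.

    Simplified illustration of a tempest-style status wait loop;
    get_status would wrap a call like volumes.get(volume_id).status.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if get_status() == target:
            return True
        time.sleep(interval)
    # Mirrors the "Timed out waiting for thing ... to become available" failure.
    raise TimeoutError("Timed out waiting for status %r" % (target,))
```

If the detach completes just after the deadline, the test fails here even though the volume later reaches `available`, which matches the observation that teardown can still delete it.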

