** No longer affects: neutron

https://bugs.launchpad.net/bugs/1664347

Title:
  test_volume_boot_pattern failed to get an instance into ACTIVE state

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  http://logs.openstack.org/53/431753/2/check/gate-grenade-dsvm-neutron-dvr-multinode-ubuntu-xenial/8cf8895/logs/grenade.sh.txt.gz#_2017-02-11_01_24_50_334

  2017-02-11 01:24:50.334 | Captured traceback:
  2017-02-11 01:24:50.334 | ~~~~~~~~~~~~~~~~~~~
  2017-02-11 01:24:50.334 |     Traceback (most recent call last):
  2017-02-11 01:24:50.334 |       File "tempest/test.py", line 99, in wrapper
  2017-02-11 01:24:50.334 |         return f(self, *func_args, **func_kwargs)
  2017-02-11 01:24:50.334 |       File "tempest/scenario/test_volume_boot_pattern.py", line 168, in test_volume_boot_pattern
  2017-02-11 01:24:50.334 |         security_group=security_group))
  2017-02-11 01:24:50.334 |       File "tempest/scenario/test_volume_boot_pattern.py", line 72, in _boot_instance_from_resource
  2017-02-11 01:24:50.334 |         return self.create_server(image_id='', **create_kwargs)
  2017-02-11 01:24:50.334 |       File "tempest/scenario/manager.py", line 208, in create_server
  2017-02-11 01:24:50.334 |         image_id=image_id, **kwargs)
  2017-02-11 01:24:50.334 |       File "tempest/common/compute.py", line 182, in create_test_server
  2017-02-11 01:24:50.334 |         server['id'])
  2017-02-11 01:24:50.334 |       File "/opt/stack/old/tempest/.tox/tempest/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2017-02-11 01:24:50.334 |         self.force_reraise()
  2017-02-11 01:24:50.334 |       File "/opt/stack/old/tempest/.tox/tempest/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
  2017-02-11 01:24:50.334 |         six.reraise(self.type_, self.value, self.tb)
  2017-02-11 01:24:50.334 |       File "tempest/common/compute.py", line 164, in create_test_server
  2017-02-11 01:24:50.334 |         clients.servers_client, server['id'], wait_until)
  2017-02-11 01:24:50.334 |       File "tempest/common/waiters.py", line 96, in wait_for_server_status
  2017-02-11 01:24:50.334 |         raise lib_exc.TimeoutException(message)
  2017-02-11 01:24:50.334 |     tempest.lib.exceptions.TimeoutException: Request timed out
  2017-02-11 01:24:50.334 |     Details: (TestVolumeBootPatternV2:test_volume_boot_pattern) Server 8a628dec-ebe5-401a-9ba9-72358eabca06 failed to reach ACTIVE status and task state "None" within the required time (196 s). Current status: BUILD. Current task state: spawning.

  The instance was created at:
  2017-02-11 01:24:50.304 |     2017-02-11 01:17:58,462 26423 INFO     [tempest.lib.common.rest_client] Request (TestVolumeBootPatternV2:test_volume_boot_pattern): 202 POST http://158.69.83.100:8774/v2.1/servers 2.300s
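
  For reference, a boot-from-volume create is just a POST with an empty
  imageRef plus a block_device_mapping_v2 entry. A minimal sketch of the
  request above, assuming python-requests and a pre-created volume (TOKEN
  and VOLUME_ID are placeholders; this is not the tempest code itself):

    import requests

    NOVA = 'http://158.69.83.100:8774/v2.1'
    TOKEN = '<keystone-token>'    # placeholder
    VOLUME_ID = '<volume-uuid>'   # placeholder for the pre-created volume

    body = {'server': {
        'name': 'bfv-server',
        'flavorRef': '42',
        'imageRef': '',  # empty on purpose: the root disk is the volume
        'block_device_mapping_v2': [{
            'uuid': VOLUME_ID,
            'source_type': 'volume',
            'destination_type': 'volume',
            'boot_index': 0,
            'delete_on_termination': True,
        }],
    }}
    resp = requests.post(NOVA + '/servers', json=body,
                         headers={'X-Auth-Token': TOKEN})
    assert resp.status_code == 202  # nova only *accepts* the build here
    server_id = resp.json()['server']['id']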

  Then the test spins on the instance, waiting for it to transition into ACTIVE (a sketch of this polling loop follows the log):
  2017-02-11 01:24:50.305 |     2017-02-11 01:17:58,802 26423 INFO     [tempest.lib.common.rest_client] Request (TestVolumeBootPatternV2:test_volume_boot_pattern): 200 GET http://158.69.83.100:8774/v2.1/servers/8a628dec-ebe5-401a-9ba9-72358eabca06 0.333s
  2017-02-11 01:24:50.305 |     2017-02-11 01:17:58,802 26423 DEBUG    [tempest.lib.common.rest_client] Request - Headers: {'X-Auth-Token': '<omitted>', 'Accept': 'application/json', 'Content-Type': 'application/json'}
  2017-02-11 01:24:50.305 |             Body: None
  2017-02-11 01:24:50.305 |         Response - Headers: {u'connection': 'close', u'vary': 'X-OpenStack-Nova-API-Version', u'content-length': '1093', u'x-openstack-nova-api-version': '2.1', u'openstack-api-version': 'compute 2.1', 'status': '200', 'content-location': 'http://158.69.83.100:8774/v2.1/servers/8a628dec-ebe5-401a-9ba9-72358eabca06', u'content-type': 'application/json', u'x-compute-request-id': 'req-3abcfd5a-7e3d-4333-b66b-547e4ce79ce7', u'date': 'Sat, 11 Feb 2017 01:17:58 GMT'}
  2017-02-11 01:24:50.305 |             Body: {"server": {"OS-EXT-STS:task_state": "scheduling", "addresses": {}, "links": [{"href": "http://158.69.83.100:8774/v2.1/servers/8a628dec-ebe5-401a-9ba9-72358eabca06", "rel": "self"}, {"href": "http://158.69.83.100:8774/servers/8a628dec-ebe5-401a-9ba9-72358eabca06", "rel": "bookmark"}], "image": "", "OS-EXT-STS:vm_state": "building", "OS-SRV-USG:launched_at": null, "flavor": {"id": "42", "links": [{"href": "http://158.69.83.100:8774/flavors/42", "rel": "bookmark"}]}, "id": "8a628dec-ebe5-401a-9ba9-72358eabca06", "user_id": "f4e2fa64f9c54cadb1cac2642a2d757f", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "progress": 0, "OS-EXT-STS:power_state": 0, "OS-EXT-AZ:availability_zone": "", "metadata": {}, "status": "BUILD", "updated": "2017-02-11T01:17:58Z", "hostId": "", "OS-SRV-USG:terminated_at": null, "key_name": "tempest-TestVolumeBootPatternV2-228135911", "name": "tempest-TestVolumeBootPatternV2-server-1766399508", "created": "2017-02-11T01:17:58Z", "tenant_id": "c30e6127a8104ae3aad60827bbe07a78", "os-extended-volumes:volumes_attached": [], "config_drive": ""}}

  The attempt to clean up the instance is at:
  2017-02-11 01:24:50.316 |     2017-02-11 01:21:15,384 26423 INFO     [tempest.lib.common.rest_client] Request (TestVolumeBootPatternV2:test_volume_boot_pattern): 204 DELETE http://158.69.83.100:8774/v2.1/servers/8a628dec-ebe5-401a-9ba9-72358eabca06 0.296s

  If we look into the nova-api log for the initial VM creation request, it's here:
  http://logs.openstack.org/53/431753/2/check/gate-grenade-dsvm-neutron-dvr-multinode-ubuntu-xenial/8cf8895/logs/old/screen-n-api.txt.gz#_2017-02-11_01_17_56_280

  It seems everything went fine: the DB record was created and all calls
  to neutron and cinder succeeded, so we would expect the nova-compute
  service to pick up the new instance. But when we search for the
  instance UUID in the nova-cpu log, we can't find any occurrence of it.
  The conductor and scheduler logs don't reveal anything wrong either,
  though the debug output is hard to read, so I may be missing something.
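
  (For anyone re-checking this: a trivial way to scan downloaded copies
  of the service logs for the UUID; the file names below are illustrative.)

    UUID = '8a628dec-ebe5-401a-9ba9-72358eabca06'
    for log in ('screen-n-cpu.txt', 'screen-n-cond.txt', 'screen-n-sch.txt'):
        with open(log) as f:
            hits = sum(1 for line in f if UUID in line)
        print('%s: %d lines mention the instance' % (log, hits))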

  It looks like, for some reason, the instance build request was never
  dispatched to nova-compute, so the instance never progressed past the
  building stage.
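
  That would be consistent with how the build is handed off: the last hop
  from conductor to the compute node is an oslo.messaging cast, which is
  fire-and-forget, so a lost message produces no error anywhere; the
  instance just sits in BUILD. A schematic sketch of that kind of cast
  (not nova's actual code; the transport URL, topic, server name, and
  payload below are illustrative):

    import oslo_messaging as messaging
    from oslo_config import cfg

    # assumes a configured transport, e.g. transport_url = rabbit://...
    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='compute', server='subnode-2')
    client = messaging.RPCClient(transport, target)

    # cast() returns as soon as the message is queued; if no consumer
    # ever picks it up, the caller never finds out.
    client.cast({}, 'build_and_run_instance',
                instance_uuid='8a628dec-ebe5-401a-9ba9-72358eabca06')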

  Note that the failure is in the old part of grenade, meaning it
  occurred with Ocata code.

