[Yahoo-eng-team] [Bug 2028895] [NEW] Interoperable Image Import in glance documented format for inject not working as expected

2023-07-27 Thread Rafael Lopez
Public bug reported:

According to the documentation, the correct way to specify custom import image 
metadata properties is:
"inject is a comma-separated list of properties and values that will be 
injected into the image record for the imported image. Each property and value 
should be quoted and separated by a colon (‘:’) as shown in the example above."

With the example being:
inject = "property1":"value1","property2":"value2",...

When specifying properties this way, the resulting properties on the imported 
image look like this:
properties   | "property2"='"value2"', "property3"='"value3', 
os_glance_failed_import='', os_glance_importing_to_stores='', 
os_hash_algo='sha512', 
os_hash_value='cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e',
 os_hidden='False', owner_specified.openstack.md5='', 
owner_specified.openstack.object='images/proptest1', 
owner_specified.openstack.sha256='', property1"='"value1"', stores='local'

If you look closely at each of the properties, the quotes are inconsistent:
"property2"='"value2"'
"property3"='"value3
property1"='"value1"'

Conversely, if you use the following (no quotes):
inject = property1:value1,property2:value2,property3:value3

properties   | os_glance_failed_import='',
os_glance_importing_to_stores='', os_hash_algo='sha512',
os_hash_value='cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e',
os_hidden='False', owner_specified.openstack.md5='',
owner_specified.openstack.object='images/proptest2',
owner_specified.openstack.sha256='', property1='value1',
property2='value2', property3='value3', stores='local'

Now it looks better:
property1='value1'
property2='value2'
property3='value3'

The quoting produced by this format matches the other standard properties,
i.e. key='value', which I suspect is what we are going for. I'm unclear
whether this is a parser issue or a documentation issue.
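
For reference, a minimal glance-image-import.conf sketch using the working
(unquoted) format above. This assumes the inject_image_metadata plugin is
enabled as described in the interoperable image import documentation; the
property names are just the test values from this report:

    [image_import_opts]
    image_import_plugins = ['inject_image_metadata']

    [inject_metadata_properties]
    ignore_user_roles = admin
    inject = property1:value1,property2:value2,property3:value3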

---
Release: 27.0.0.0b3.dev5 on 2022-08-30 13:35:51
SHA: 46c30f0b6db6ed6a86b1b84e69748025ad9050c6
Source: https://opendev.org/openstack/glance/src/doc/source/admin/interoperable-image-import.rst
URL: https://docs.openstack.org/glance/latest/admin/interoperable-image-import.html

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: documentation

** Summary changed:

- Interoperable Image Import in glance documented format for inject not working
+ Interoperable Image Import in glance documented format for inject not working as expected


[Yahoo-eng-team] [Bug 2028851] Re: Console output was empty in test_get_console_output_server_id_in_shutoff_status

2023-07-27 Thread Ghanshyam Mann
The test_get_console_output_server_id_in_shutoff_status test has been wrong
from the start: it fetched the console of the server created in the setup
method, which is in ACTIVE state. It never actually tried to get the console
of a shutoff server.

It is still unknown why this broken test started failing after the test
refactoring in https://review.opendev.org/c/openstack/tempest/+/889109,
which uncovered the issue.

The Tempest fix corrects the test, which otherwise always fails because it
also expects console output for a *shutoff* guest, which is not something
Nova returns: Nova does not return the server console for a shutoff server.

There are open questions on Nova's behaviour:

1. What should Nova return for the console output of a shutoff guest?
2. What status code should Nova return when the "console is not available"? Currently, it returns 404.

There is some discussion of this topic in IRC:
- https://meetings.opendev.org/irclogs/%23openstack-nova/%23openstack-nova.2023-07-27.log.html#t2023-07-27T18:50:03


** Changed in: nova
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2028851

Title:
   Console output was empty in
  test_get_console_output_server_id_in_shutoff_status

Status in OpenStack Compute (nova):
  New
Status in tempest:
  Confirmed

Bug description:
  test_get_console_output_server_id_in_shutoff_status

  
https://github.com/openstack/tempest/blob/04cb0adc822ffea6c7bfccce8fa08b03739894b7/tempest/api/compute/servers/test_server_actions.py#L713

  is failing consistently in the nova-lvm job starting on July 24 with
  132 failures in the last 3 days. https://tinyurl.com/kvcc9289

  
  Traceback (most recent call last):
File 
"/opt/stack/tempest/tempest/api/compute/servers/test_server_actions.py", line 
728, in test_get_console_output_server_id_in_shutoff_status
  self.wait_for(self._get_output)
File "/opt/stack/tempest/tempest/api/compute/base.py", line 340, in wait_for
  condition()
File 
"/opt/stack/tempest/tempest/api/compute/servers/test_server_actions.py", line 
213, in _get_output
  self.assertTrue(output, "Console output was empty.")
File "/usr/lib/python3.10/unittest/case.py", line 687, in assertTrue
  raise self.failureException(msg)
  AssertionError: '' is not true : Console output was empty.

  it's not clear why this has started failing. it may be a regression or
  a latent race in the test that we are now hitting.

  def test_get_console_output_server_id_in_shutoff_status(self):
  """Test getting console output for a server in SHUTOFF status

  Should be able to GET the console output for a given server_id
  in SHUTOFF status.
  """

  # NOTE: SHUTOFF is irregular status. To avoid test instability,
  #   one server is created only for this test without using
  #   the server that was created in setUpClass.
  server = self.create_test_server(wait_until='ACTIVE')
  temp_server_id = server['id']

  self.client.stop_server(temp_server_id)
  waiters.wait_for_server_status(self.client, temp_server_id, 'SHUTOFF')
  self.wait_for(self._get_output)

  the test does not wait for the VM to be sshable, so it's possible that
  we are shutting off the VM before it is fully booted and no output has
  been written to the console.

  this failure has happened on multiple providers but only in the nova-lvm job.
  the console behavior is unrelated to the storage backend, but the lvm job I
  believe is using lvm on a loopback file, so the storage performance is likely
  slower than raw/qcow.

  so perhaps the boot is taking longer and no output is being written.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2028851/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1990257] Re: [OpenStack Yoga] Creating a VM fails when only one rabbitmq is stopped

2023-07-27 Thread Mesut Muhammet Şahin
** Also affects: kolla
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1990257

Title:
  [OpenStack Yoga] Creating a VM fails when only one rabbitmq is stopped

Status in kolla:
  New
Status in kolla-ansible:
  New
Status in OpenStack Compute (nova):
  Opinion
Status in RabbitMQ:
  New

Bug description:
  Hi, I deployed a new OpenStack cluster (OpenStack Yoga) with kolla-ansible. Everything works fine.
  Then I stopped only one rabbitmq-server in the cluster; after that, I can't create a new VM.

  Reproduce:
  - Deploy a new OpenStack Yoga cluster with kolla-ansible
  - Stop rabbitmq on one random node (docker stop rabbitmq)
  - Try to create a new server

To manage notifications about this bug go to:
https://bugs.launchpad.net/kolla/+bug/1990257/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2028851] Re: Console output was empty in test_get_console_output_server_id_in_shutoff_status

2023-07-27 Thread Sylvain Bauza
Seems to be a regression coming from the automatic rebase of
https://github.com/openstack/tempest/commit/eea2c1cfac1e5d240cad4f8be68cff7d72f220a8

** Also affects: tempest
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2028851

Title:
   Console output was empty in
  test_get_console_output_server_id_in_shutoff_status

Status in OpenStack Compute (nova):
  Invalid
Status in tempest:
  New

Bug description:
  test_get_console_output_server_id_in_shutoff_status

  
https://github.com/openstack/tempest/blob/04cb0adc822ffea6c7bfccce8fa08b03739894b7/tempest/api/compute/servers/test_server_actions.py#L713

  is failing consistently in the nova-lvm job starting on July 24 with
  132 failures in the last 3 days. https://tinyurl.com/kvcc9289

  
  Traceback (most recent call last):
File 
"/opt/stack/tempest/tempest/api/compute/servers/test_server_actions.py", line 
728, in test_get_console_output_server_id_in_shutoff_status
  self.wait_for(self._get_output)
File "/opt/stack/tempest/tempest/api/compute/base.py", line 340, in wait_for
  condition()
File 
"/opt/stack/tempest/tempest/api/compute/servers/test_server_actions.py", line 
213, in _get_output
  self.assertTrue(output, "Console output was empty.")
File "/usr/lib/python3.10/unittest/case.py", line 687, in assertTrue
  raise self.failureException(msg)
  AssertionError: '' is not true : Console output was empty.

  it's not clear why this has started failing. it may be a regression or
  a latent race in the test that we are now hitting.

  def test_get_console_output_server_id_in_shutoff_status(self):
  """Test getting console output for a server in SHUTOFF status

  Should be able to GET the console output for a given server_id
  in SHUTOFF status.
  """

  # NOTE: SHUTOFF is irregular status. To avoid test instability,
  #   one server is created only for this test without using
  #   the server that was created in setUpClass.
  server = self.create_test_server(wait_until='ACTIVE')
  temp_server_id = server['id']

  self.client.stop_server(temp_server_id)
  waiters.wait_for_server_status(self.client, temp_server_id, 'SHUTOFF')
  self.wait_for(self._get_output)

  the test does not wait for the VM to be sshable, so it's possible that
  we are shutting off the VM before it is fully booted and no output has
  been written to the console.

  this failure has happened on multiple providers but only in the nova-lvm job.
  the console behavior is unrelated to the storage backend, but the lvm job I
  believe is using lvm on a loopback file, so the storage performance is likely
  slower than raw/qcow.

  so perhaps the boot is taking longer and no output is being written.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2028851/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2004641] Re: ImageLocationsTest.test_replace_location fails intermittently

2023-07-27 Thread Martin Kopec
This hasn't occurred again for some time now.
I'll close this; it seems it got fixed by the tempest change
https://review.opendev.org/c/openstack/tempest/+/872982
Feel free to reopen and retriage if you feel otherwise.

** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/2004641

Title:
  ImageLocationsTest.test_replace_location fails intermittently

Status in Glance:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in tempest:
  Fix Released

Bug description:
  Saw a new gate failure happening a couple of times:

  
https://opensearch.logs.openstack.org/_dashboards/app/discover?security_tenant=global#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-7d,to:now))&_a=(columns:!(filename),filters:!(('$state':(store:appState),meta:(alias:!n,disabled:!f,index:'94869730-aea8-11ec-9e6a-83741af3fdcd',key:filename,negate:!f,params:(query:job-
  output.txt),type:phrase),query:(match_phrase:(filename:job-
  
output.txt,index:'94869730-aea8-11ec-9e6a-83741af3fdcd',interval:auto,query:(language:kuery,query:test_replace_location),sort:!())

  
  Example of a failed run :
  2023-02-02 22:20:18.197006 | controller | ==
  2023-02-02 22:20:18.197030 | controller | Failed 1 tests - output below:
  2023-02-02 22:20:18.197050 | controller | ==
  2023-02-02 22:20:18.197071 | controller |
  2023-02-02 22:20:18.197095 | controller | 
tempest.api.image.v2.test_images.ImageLocationsTest.test_replace_location[id-bf6e0009-c039-4884-b498-db074caadb10]
  2023-02-02 22:20:18.197115 | controller | 
--
  2023-02-02 22:20:18.197134 | controller |
  2023-02-02 22:20:18.197152 | controller | Captured traceback:
  2023-02-02 22:20:18.197171 | controller | ~~~
  2023-02-02 22:20:18.197190 | controller | Traceback (most recent call 
last):
  2023-02-02 22:20:18.197212 | controller |
  2023-02-02 22:20:18.197234 | controller |   File 
"/opt/stack/tempest/tempest/api/image/v2/test_images.py", line 875, in 
test_replace_location
  2023-02-02 22:20:18.197254 | controller | image = 
self._check_set_multiple_locations()
  2023-02-02 22:20:18.197273 | controller |
  2023-02-02 22:20:18.197292 | controller |   File 
"/opt/stack/tempest/tempest/api/image/v2/test_images.py", line 847, in 
_check_set_multiple_locations
  2023-02-02 22:20:18.197311 | controller | image = 
self._check_set_location()
  2023-02-02 22:20:18.197329 | controller |
  2023-02-02 22:20:18.197351 | controller |   File 
"/opt/stack/tempest/tempest/api/image/v2/test_images.py", line 820, in 
_check_set_location
  2023-02-02 22:20:18.197372 | controller | 
self.client.update_image(image['id'], [
  2023-02-02 22:20:18.197391 | controller |
  2023-02-02 22:20:18.197410 | controller |   File 
"/opt/stack/tempest/tempest/lib/services/image/v2/images_client.py", line 40, 
in update_image
  2023-02-02 22:20:18.197429 | controller | resp, body = 
self.patch('images/%s' % image_id, data, headers)
  2023-02-02 22:20:18.197447 | controller |
  2023-02-02 22:20:18.197465 | controller |   File 
"/opt/stack/tempest/tempest/lib/common/rest_client.py", line 346, in patch
  2023-02-02 22:20:18.197490 | controller | return self.request('PATCH', 
url, extra_headers, headers, body)
  2023-02-02 22:20:18.197513 | controller |
  2023-02-02 22:20:18.197533 | controller |   File 
"/opt/stack/tempest/tempest/lib/common/rest_client.py", line 720, in request
  2023-02-02 22:20:18.197552 | controller | self._error_checker(resp, 
resp_body)
  2023-02-02 22:20:18.197571 | controller |
  2023-02-02 22:20:18.197590 | controller |   File 
"/opt/stack/tempest/tempest/lib/common/rest_client.py", line 831, in 
_error_checker
  2023-02-02 22:20:18.197612 | controller | raise 
exceptions.BadRequest(resp_body, resp=resp)
  2023-02-02 22:20:18.197633 | controller |
  2023-02-02 22:20:18.197655 | controller | 
tempest.lib.exceptions.BadRequest: Bad request
  2023-02-02 22:20:18.197674 | controller | Details: b'400 Bad Request\n\nThe 
Store URI was malformed.\n\n   '
  2023-02-02 22:20:18.197692 | controller |
  2023-02-02 22:20:18.197711 | controller |
  2023-02-02 22:20:18.197729 | controller | Captured pythonlogging:
  2023-02-02 22:20:18.197748 | controller | ~~~
  2023-02-02 22:20:18.197774 | controller | 2023-02-02 22:01:06,773 114933 
INFO [tempest.lib.common.rest_client] Request 
(ImageLocationsTest:test_replace_location): 201 POST 
https://10.210.193.38/image/v2/images 1.036s
  2023-02-02 22:20:18.197798 | controller | 2023-02-02 22:01:06,774 114933 
DEBUG[tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 
'applica

[Yahoo-eng-team] [Bug 2004641] Re: ImageLocationsTest.test_replace_location fails intermittently

2023-07-27 Thread Martin Kopec
This seems to be a duplicate of https://bugs.launchpad.net/glance/+bug/1999800.
I'll close this; it seems it got fixed by the tempest change
https://review.opendev.org/c/openstack/tempest/+/872982
Feel free to reopen and retriage if you feel otherwise.

** Changed in: glance
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/2004641

Title:
  ImageLocationsTest.test_replace_location fails intermittently

Status in Glance:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in tempest:
  Fix Released

Bug description:
  Saw a new gate failure happening a couple of times:

  
https://opensearch.logs.openstack.org/_dashboards/app/discover?security_tenant=global#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-7d,to:now))&_a=(columns:!(filename),filters:!(('$state':(store:appState),meta:(alias:!n,disabled:!f,index:'94869730-aea8-11ec-9e6a-83741af3fdcd',key:filename,negate:!f,params:(query:job-
  output.txt),type:phrase),query:(match_phrase:(filename:job-
  
output.txt,index:'94869730-aea8-11ec-9e6a-83741af3fdcd',interval:auto,query:(language:kuery,query:test_replace_location),sort:!())

  
  Example of a failed run :
  2023-02-02 22:20:18.197006 | controller | ==
  2023-02-02 22:20:18.197030 | controller | Failed 1 tests - output below:
  2023-02-02 22:20:18.197050 | controller | ==
  2023-02-02 22:20:18.197071 | controller |
  2023-02-02 22:20:18.197095 | controller | 
tempest.api.image.v2.test_images.ImageLocationsTest.test_replace_location[id-bf6e0009-c039-4884-b498-db074caadb10]
  2023-02-02 22:20:18.197115 | controller | 
--
  2023-02-02 22:20:18.197134 | controller |
  2023-02-02 22:20:18.197152 | controller | Captured traceback:
  2023-02-02 22:20:18.197171 | controller | ~~~
  2023-02-02 22:20:18.197190 | controller | Traceback (most recent call 
last):
  2023-02-02 22:20:18.197212 | controller |
  2023-02-02 22:20:18.197234 | controller |   File 
"/opt/stack/tempest/tempest/api/image/v2/test_images.py", line 875, in 
test_replace_location
  2023-02-02 22:20:18.197254 | controller | image = 
self._check_set_multiple_locations()
  2023-02-02 22:20:18.197273 | controller |
  2023-02-02 22:20:18.197292 | controller |   File 
"/opt/stack/tempest/tempest/api/image/v2/test_images.py", line 847, in 
_check_set_multiple_locations
  2023-02-02 22:20:18.197311 | controller | image = 
self._check_set_location()
  2023-02-02 22:20:18.197329 | controller |
  2023-02-02 22:20:18.197351 | controller |   File 
"/opt/stack/tempest/tempest/api/image/v2/test_images.py", line 820, in 
_check_set_location
  2023-02-02 22:20:18.197372 | controller | 
self.client.update_image(image['id'], [
  2023-02-02 22:20:18.197391 | controller |
  2023-02-02 22:20:18.197410 | controller |   File 
"/opt/stack/tempest/tempest/lib/services/image/v2/images_client.py", line 40, 
in update_image
  2023-02-02 22:20:18.197429 | controller | resp, body = 
self.patch('images/%s' % image_id, data, headers)
  2023-02-02 22:20:18.197447 | controller |
  2023-02-02 22:20:18.197465 | controller |   File 
"/opt/stack/tempest/tempest/lib/common/rest_client.py", line 346, in patch
  2023-02-02 22:20:18.197490 | controller | return self.request('PATCH', 
url, extra_headers, headers, body)
  2023-02-02 22:20:18.197513 | controller |
  2023-02-02 22:20:18.197533 | controller |   File 
"/opt/stack/tempest/tempest/lib/common/rest_client.py", line 720, in request
  2023-02-02 22:20:18.197552 | controller | self._error_checker(resp, 
resp_body)
  2023-02-02 22:20:18.197571 | controller |
  2023-02-02 22:20:18.197590 | controller |   File 
"/opt/stack/tempest/tempest/lib/common/rest_client.py", line 831, in 
_error_checker
  2023-02-02 22:20:18.197612 | controller | raise 
exceptions.BadRequest(resp_body, resp=resp)
  2023-02-02 22:20:18.197633 | controller |
  2023-02-02 22:20:18.197655 | controller | 
tempest.lib.exceptions.BadRequest: Bad request
  2023-02-02 22:20:18.197674 | controller | Details: b'400 Bad Request\n\nThe 
Store URI was malformed.\n\n   '
  2023-02-02 22:20:18.197692 | controller |
  2023-02-02 22:20:18.197711 | controller |
  2023-02-02 22:20:18.197729 | controller | Captured pythonlogging:
  2023-02-02 22:20:18.197748 | controller | ~~~
  2023-02-02 22:20:18.197774 | controller | 2023-02-02 22:01:06,773 114933 
INFO [tempest.lib.common.rest_client] Request 
(ImageLocationsTest:test_replace_location): 201 POST 
https://10.210.193.38/image/v2/images 1.036s
  2023-02-02 22:20:18.197798 | controller | 2023-02-02 22:01:06,774 114933 
DEBUG[tempest.lib.common.rest_client] Request - Headers:

[Yahoo-eng-team] [Bug 2004641] Re: ImageLocationsTest.test_replace_location fails intermittently

2023-07-27 Thread Martin Kopec
I haven't seen the job fail because of the mentioned test for the whole
month of July. I consider this fixed by
https://review.opendev.org/c/openstack/tempest/+/872982

** Changed in: tempest
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/2004641

Title:
  ImageLocationsTest.test_replace_location fails intermittently

Status in Glance:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in tempest:
  Fix Released

Bug description:
  Saw a new gate failure happening a couple of times:

  
https://opensearch.logs.openstack.org/_dashboards/app/discover?security_tenant=global#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-7d,to:now))&_a=(columns:!(filename),filters:!(('$state':(store:appState),meta:(alias:!n,disabled:!f,index:'94869730-aea8-11ec-9e6a-83741af3fdcd',key:filename,negate:!f,params:(query:job-
  output.txt),type:phrase),query:(match_phrase:(filename:job-
  
output.txt,index:'94869730-aea8-11ec-9e6a-83741af3fdcd',interval:auto,query:(language:kuery,query:test_replace_location),sort:!())

  
  Example of a failed run :
  2023-02-02 22:20:18.197006 | controller | ==
  2023-02-02 22:20:18.197030 | controller | Failed 1 tests - output below:
  2023-02-02 22:20:18.197050 | controller | ==
  2023-02-02 22:20:18.197071 | controller |
  2023-02-02 22:20:18.197095 | controller | 
tempest.api.image.v2.test_images.ImageLocationsTest.test_replace_location[id-bf6e0009-c039-4884-b498-db074caadb10]
  2023-02-02 22:20:18.197115 | controller | 
--
  2023-02-02 22:20:18.197134 | controller |
  2023-02-02 22:20:18.197152 | controller | Captured traceback:
  2023-02-02 22:20:18.197171 | controller | ~~~
  2023-02-02 22:20:18.197190 | controller | Traceback (most recent call 
last):
  2023-02-02 22:20:18.197212 | controller |
  2023-02-02 22:20:18.197234 | controller |   File 
"/opt/stack/tempest/tempest/api/image/v2/test_images.py", line 875, in 
test_replace_location
  2023-02-02 22:20:18.197254 | controller | image = 
self._check_set_multiple_locations()
  2023-02-02 22:20:18.197273 | controller |
  2023-02-02 22:20:18.197292 | controller |   File 
"/opt/stack/tempest/tempest/api/image/v2/test_images.py", line 847, in 
_check_set_multiple_locations
  2023-02-02 22:20:18.197311 | controller | image = 
self._check_set_location()
  2023-02-02 22:20:18.197329 | controller |
  2023-02-02 22:20:18.197351 | controller |   File 
"/opt/stack/tempest/tempest/api/image/v2/test_images.py", line 820, in 
_check_set_location
  2023-02-02 22:20:18.197372 | controller | 
self.client.update_image(image['id'], [
  2023-02-02 22:20:18.197391 | controller |
  2023-02-02 22:20:18.197410 | controller |   File 
"/opt/stack/tempest/tempest/lib/services/image/v2/images_client.py", line 40, 
in update_image
  2023-02-02 22:20:18.197429 | controller | resp, body = 
self.patch('images/%s' % image_id, data, headers)
  2023-02-02 22:20:18.197447 | controller |
  2023-02-02 22:20:18.197465 | controller |   File 
"/opt/stack/tempest/tempest/lib/common/rest_client.py", line 346, in patch
  2023-02-02 22:20:18.197490 | controller | return self.request('PATCH', 
url, extra_headers, headers, body)
  2023-02-02 22:20:18.197513 | controller |
  2023-02-02 22:20:18.197533 | controller |   File 
"/opt/stack/tempest/tempest/lib/common/rest_client.py", line 720, in request
  2023-02-02 22:20:18.197552 | controller | self._error_checker(resp, 
resp_body)
  2023-02-02 22:20:18.197571 | controller |
  2023-02-02 22:20:18.197590 | controller |   File 
"/opt/stack/tempest/tempest/lib/common/rest_client.py", line 831, in 
_error_checker
  2023-02-02 22:20:18.197612 | controller | raise 
exceptions.BadRequest(resp_body, resp=resp)
  2023-02-02 22:20:18.197633 | controller |
  2023-02-02 22:20:18.197655 | controller | 
tempest.lib.exceptions.BadRequest: Bad request
  2023-02-02 22:20:18.197674 | controller | Details: b'400 Bad Request\n\nThe 
Store URI was malformed.\n\n   '
  2023-02-02 22:20:18.197692 | controller |
  2023-02-02 22:20:18.197711 | controller |
  2023-02-02 22:20:18.197729 | controller | Captured pythonlogging:
  2023-02-02 22:20:18.197748 | controller | ~~~
  2023-02-02 22:20:18.197774 | controller | 2023-02-02 22:01:06,773 114933 
INFO [tempest.lib.common.rest_client] Request 
(ImageLocationsTest:test_replace_location): 201 POST 
https://10.210.193.38/image/v2/images 1.036s
  2023-02-02 22:20:18.197798 | controller | 2023-02-02 22:01:06,774 114933 
DEBUG[tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 
'application/json', 'Accept': 'application/json', 'X-A

[Yahoo-eng-team] [Bug 2028851] [NEW] Console output was empty in test_get_console_output_server_id_in_shutoff_status

2023-07-27 Thread sean mooney
Public bug reported:

test_get_console_output_server_id_in_shutoff_status

https://github.com/openstack/tempest/blob/04cb0adc822ffea6c7bfccce8fa08b03739894b7/tempest/api/compute/servers/test_server_actions.py#L713

is failing consistently in the nova-lvm job starting on July 24 with 132
failures in the last 3 days. https://tinyurl.com/kvcc9289


Traceback (most recent call last):
  File "/opt/stack/tempest/tempest/api/compute/servers/test_server_actions.py", 
line 728, in test_get_console_output_server_id_in_shutoff_status
self.wait_for(self._get_output)
  File "/opt/stack/tempest/tempest/api/compute/base.py", line 340, in wait_for
condition()
  File "/opt/stack/tempest/tempest/api/compute/servers/test_server_actions.py", 
line 213, in _get_output
self.assertTrue(output, "Console output was empty.")
  File "/usr/lib/python3.10/unittest/case.py", line 687, in assertTrue
raise self.failureException(msg)
AssertionError: '' is not true : Console output was empty.

it's not clear why this has started failing. it may be a regression or a
latent race in the test that we are now hitting.

def test_get_console_output_server_id_in_shutoff_status(self):
"""Test getting console output for a server in SHUTOFF status

Should be able to GET the console output for a given server_id
in SHUTOFF status.
"""

# NOTE: SHUTOFF is irregular status. To avoid test instability,
#   one server is created only for this test without using
#   the server that was created in setUpClass.
server = self.create_test_server(wait_until='ACTIVE')
temp_server_id = server['id']

self.client.stop_server(temp_server_id)
waiters.wait_for_server_status(self.client, temp_server_id, 'SHUTOFF')
self.wait_for(self._get_output)

the test does not wait for the VM to be sshable, so it's possible that we
are shutting off the VM before it is fully booted and no output has been
written to the console.

this failure has happened on multiple providers but only in the nova-lvm job.
the console behavior is unrelated to the storage backend, but the lvm job I
believe is using lvm on a loopback file, so the storage performance is likely
slower than raw/qcow.

so perhaps the boot is taking longer and no output is being written.
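
A minimal sketch of one possible way to stabilize the test, using only the
tempest helpers already shown above (this is an assumption, not a confirmed
fix, and _wait_for_console_output is a hypothetical helper): wait for
non-empty console output from the freshly created server *before* stopping
it, so the guest has written something by the time it reaches SHUTOFF.

    def _wait_for_console_output(self, server_id):
        # Hypothetical helper: poll until this server has written console output.
        def _check():
            output = self.client.get_console_output(server_id)['output']
            self.assertTrue(output, "Console output was empty.")
        self.wait_for(_check)

    def test_get_console_output_server_id_in_shutoff_status(self):
        server = self.create_test_server(wait_until='ACTIVE')
        temp_server_id = server['id']

        # Let the guest boot far enough to emit console output before SHUTOFF.
        self._wait_for_console_output(temp_server_id)

        self.client.stop_server(temp_server_id)
        waiters.wait_for_server_status(self.client, temp_server_id, 'SHUTOFF')
        self._wait_for_console_output(temp_server_id)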

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2028851

Title:
   Console output was empty in
  test_get_console_output_server_id_in_shutoff_status

Status in OpenStack Compute (nova):
  New

Bug description:
  test_get_console_output_server_id_in_shutoff_status

  
https://github.com/openstack/tempest/blob/04cb0adc822ffea6c7bfccce8fa08b03739894b7/tempest/api/compute/servers/test_server_actions.py#L713

  is failing consistently in the nova-lvm job starting on July 24 with
  132 failures in the last 3 days. https://tinyurl.com/kvcc9289

  
  Traceback (most recent call last):
File 
"/opt/stack/tempest/tempest/api/compute/servers/test_server_actions.py", line 
728, in test_get_console_output_server_id_in_shutoff_status
  self.wait_for(self._get_output)
File "/opt/stack/tempest/tempest/api/compute/base.py", line 340, in wait_for
  condition()
File 
"/opt/stack/tempest/tempest/api/compute/servers/test_server_actions.py", line 
213, in _get_output
  self.assertTrue(output, "Console output was empty.")
File "/usr/lib/python3.10/unittest/case.py", line 687, in assertTrue
  raise self.failureException(msg)
  AssertionError: '' is not true : Console output was empty.

  it's not clear why this has started failing. it may be a regression or
  a latent race in the test that we are now hitting.

  def test_get_console_output_server_id_in_shutoff_status(self):
  """Test getting console output for a server in SHUTOFF status

  Should be able to GET the console output for a given server_id
  in SHUTOFF status.
  """

  # NOTE: SHUTOFF is irregular status. To avoid test instability,
  #   one server is created only for this test without using
  #   the server that was created in setUpClass.
  server = self.create_test_server(wait_until='ACTIVE')
  temp_server_id = server['id']

  self.client.stop_server(temp_server_id)
  waiters.wait_for_server_status(self.client, temp_server_id, 'SHUTOFF')
  self.wait_for(self._get_output)

  the test does not wait for the VM to be sshable, so it's possible that
  we are shutting off the VM before it is fully booted and no output has
  been written to the console.

  this failure has happened on multiple providers but only in the nova-lvm job.
  the console behavior is unrelated to the storage backend, but the lvm job I
  believe is using lvm on a loopback fil

[Yahoo-eng-team] [Bug 2028846] [NEW] FIP PF doesn't work with vlan tenant network and ovn backend

2023-07-27 Thread Slawek Kaplonski
Public bug reported:

After patch https://review.opendev.org/c/openstack/neutron/+/878450 was merged, 
Neutron sets "reside-on-redirect-chassis=False" on the Logical Router Ports in 
the OVN NB database for vlan tenant networks. This is done to make sure that 
such traffic is not centralized.
But the problem is with port forwardings associated with VMs connected to ports 
in vlan tenant networks: PFs are implemented in the OVN backend as OVN load 
balancers, which are centralized. So in that case we should probably still 
centralize traffic from such networks to keep the PFs working.

** Affects: neutron
 Importance: Undecided
 Assignee: Slawek Kaplonski (slaweq)
 Status: Confirmed


** Tags: l3-dvr-backlog ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2028846

Title:
  FIP PF doesn't work with vlan tenant network and ovn backend

Status in neutron:
  Confirmed

Bug description:
  After patch https://review.opendev.org/c/openstack/neutron/+/878450 was 
merged, Neutron sets "reside-on-redirect-chassis=False" on the Logical Router 
Ports in the OVN NB database for vlan tenant networks. This is done to make 
sure that such traffic is not centralized.
  But the problem is with port forwardings associated with VMs connected to 
ports in vlan tenant networks: PFs are implemented in the OVN backend as OVN 
load balancers, which are centralized. So in that case we should probably still 
centralize traffic from such networks to keep the PFs working.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2028846/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp