[Yahoo-eng-team] [Bug 1930653] [NEW] test_refresh_associations_time fails intermittently

2021-06-02 Thread melanie witt
Public bug reported:

Noticed in the gate, test_refresh_associations_time failed with the
following trace:

ft1.1: nova.tests.unit.scheduler.client.test_report.TestAssociations.test_refresh_associations_time
testtools.testresult.real._StringException: pythonlogging:'': {{{
2021-06-02 18:09:44,145 WARNING [oslo_policy.policy] JSON formatted policy_file 
support is deprecated since Victoria release. You need to use YAML format which 
will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool 
to convert existing JSON-formatted policy file to YAML-formatted in backward 
compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
2021-06-02 18:09:44,172 WARNING [oslo_policy.policy] JSON formatted policy_file 
support is deprecated since Victoria release. You need to use YAML format which 
will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool 
to convert existing JSON-formatted policy file to YAML-formatted in backward 
compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
2021-06-02 18:09:44,190 WARNING [oslo_policy.policy] Policy Rules 
['os_compute_api:extensions', 'os_compute_api:os-floating-ip-pools', 
'os_compute_api:os-quota-sets:defaults', 
'os_compute_api:os-availability-zone:list', 'os_compute_api:limits', 
'system_admin_api', 'system_reader_api', 'project_admin_api', 
'project_member_api', 'project_reader_api', 'system_admin_or_owner', 
'system_or_project_reader', 'os_compute_api:limits:other_project', 
'os_compute_api:os-lock-server:unlock:unlock_override', 
'os_compute_api:servers:create:zero_disk_flavor', 
'compute:servers:resize:cross_cell'] specified in policy files are the same as 
the defaults provided by the service. You can remove these rules from policy 
files which will make maintenance easier. You can detect these redundant rules 
by ``oslopolicy-list-redundant`` tool also.
}}}

Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/nova/.tox/py38/lib/python3.8/site-packages/mock/mock.py", line 1346, in patched
    return func(*newargs, **newkeywargs)
  File "/home/zuul/src/opendev.org/openstack/nova/nova/tests/unit/scheduler/client/test_report.py", line 3146, in test_refresh_associations_time
    self.assert_getters_were_called(uuid)
  File "/home/zuul/src/opendev.org/openstack/nova/nova/tests/unit/scheduler/client/test_report.py", line 3047, in assert_getters_were_called
    self.mock_get_inv.assert_called_once_with(self.context, uuid)
  File "/home/zuul/src/opendev.org/openstack/nova/.tox/py38/lib/python3.8/site-packages/mock/mock.py", line 925, in assert_called_once_with
    raise AssertionError(msg)
AssertionError: Expected '_get_inventory' to be called once. Called 0 times.


I found this happens because, although most uses of time.time() in the refresh
associations code are mocked in the test, one spot is not mocked. If the test
happens to run slowly, the delta between the recorded "now" and the refresh run
timestamp becomes larger than expected and the test assertion fails.

I think this can be fixed by mocking the remaining call to time.time().
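
For illustration only, here is a minimal sketch of the kind of fix suggested
above; it is not the actual nova test code, and refresh_needed() is a
hypothetical stand-in for the production "is a refresh due?" check:

from unittest import mock
import time

def refresh_needed(last_refresh, interval=300):
    # hypothetical stand-in for the production refresh-interval check
    return time.time() - last_refresh > interval

@mock.patch('time.time', return_value=1000000.0)
def test_refresh_not_needed_when_recent(mock_time):
    # with time.time() mocked, a slow test run can no longer widen the delta
    assert refresh_needed(last_refresh=1000000.0 - 10) is False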

** Affects: nova
 Importance: Undecided
 Assignee: melanie witt (melwitt)
 Status: Triaged


** Tags: testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1930653

Title:
  test_refresh_associations_time fails intermittently

Status in OpenStack Compute (nova):
  Triaged


[Yahoo-eng-team] [Bug 1821755] Re: live migration break the anti-affinity policy of server group simultaneously

2021-06-02 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/784166
Committed: 
https://opendev.org/openstack/nova/commit/33c8af1f8c46c9c37fcc28fb3409fbd3a78ae39f
Submitter: "Zuul (22348)"
Branch: master

commit 33c8af1f8c46c9c37fcc28fb3409fbd3a78ae39f
Author: Rodrigo Barbieri 
Date:   Wed Mar 31 11:06:49 2021 -0300

Error anti-affinity violation on migrations

Error-out the migrations (cold and live) whenever the
anti-affinity policy is violated. This addresses
violations when multiple concurrent migrations are
requested.

Added detection on:
- prep_resize
- check_can_live_migrate_destination
- pre_live_migration

The improved method of detection now locks based on group_id
and considers other migrations in-progress as well.

Closes-bug: #1821755
Change-Id: I32e6214568bb57f7613ddeba2c2c46da0320fabc
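
As a rough illustration of the detection described in the commit message above
(this is not the nova implementation; all names below are hypothetical), the
idea is to serialize the check per server group and treat the destinations of
in-progress migrations as already-occupied hosts:

import threading
from collections import defaultdict

_group_locks = defaultdict(threading.Lock)

def check_anti_affinity(group_id, dest_host, member_hosts, migration_dests):
    # member_hosts: hosts currently holding members of the server group
    # migration_dests: destination hosts of the group's in-progress migrations
    with _group_locks[group_id]:
        occupied = set(member_hosts) | set(migration_dests)
        if dest_host in occupied:
            raise RuntimeError('Anti-affinity policy of group %s would be '
                               'violated on host %s' % (group_id, dest_host))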


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1821755

Title:
  live migration break the anti-affinity policy of server group
  simultaneously

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===========
  If we live-migrate two instances simultaneously, they can end up violating
  the server group's anti-affinity policy.

  Steps to reproduce
  ==================
  An OpenStack environment with three compute nodes (node1, node2 and node3).
  Create two VMs (vm1, vm2) in a server group with the anti-affinity policy,
  then live-migrate both VMs simultaneously.

  Before live-migration, the VMs are located as followed:
  node1  ->  vm1
  node2  ->  vm2
  node3

  * nova live-migration vm1
  * nova live-migration vm2

  Expected result
  ===============
  Live migration of vm1 and vm2 fails.

  Actual result
  =============
  node1
  node2
  node3  ->  vm1,vm2

  Environment
  ===========
  master branch of OpenStack

  As described above, live migration does not take other in-progress live
  migrations into account and simply selects a host via the scheduler filters,
  so both VMs can end up migrated to the same host.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1821755/+subscriptions



[Yahoo-eng-team] [Bug 1929886] Re: [neutron]http_retries config option not used when calling the port binding API

2021-06-02 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/793512
Committed: 
https://opendev.org/openstack/nova/commit/56eb253e9febccf721df6bca4eb851ad26cb70a6
Submitter: "Zuul (22348)"
Branch: master

commit 56eb253e9febccf721df6bca4eb851ad26cb70a6
Author: melanie witt 
Date:   Fri May 28 00:26:04 2021 +

Honor [neutron]http_retries in the manual client

Change Ifb3afb13aff7e103c2e80ade817d0e63b624604a added a nova side
config option for specifying neutron client retries that maps to the
ksa connect_retries config option to provide parity with the cinder and
glance clients that have nova side config options.

That change missed passing CONF.neutron.http_retries to the manual
client used for calling the port binding API. This sets the
connect_retries attribute on the manual ksa client so http_retries
will be honored.

Closes-Bug: #1929886
Related-Bug: #1866937

Change-Id: I8296e4be9f0706fab043451b856efadbb7bd41f6
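
As a hedged sketch of the approach (this is not nova's code; the function and
parameter names below are made up), a manually constructed keystoneauth1
Adapter can be given a connect_retries value taken from configuration, which
is the gist of honoring [neutron]http_retries in the manual port-binding
client:

from keystoneauth1 import adapter, session
from keystoneauth1.identity import v3

def make_port_binding_client(auth_url, username, password, project_name,
                             http_retries):
    auth = v3.Password(auth_url=auth_url, username=username,
                       password=password, project_name=project_name,
                       user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)
    # connect_retries tells keystoneauth how many times to retry a failed
    # connection attempt before giving up
    return adapter.Adapter(session=sess, service_type='network',
                           connect_retries=http_retries)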


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1929886

Title:
  [neutron]http_retries config option not used when calling the port
  binding API

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) rocky series:
  New
Status in OpenStack Compute (nova) stein series:
  New
Status in OpenStack Compute (nova) train series:
  New
Status in OpenStack Compute (nova) ussuri series:
  New
Status in OpenStack Compute (nova) victoria series:
  New
Status in OpenStack Compute (nova) wallaby series:
  New

Bug description:
  We discovered downstream [1] that the [neutron]http_retries config option
  has no effect on calls to the port binding API, so those calls are not
  resilient like our other calls to the neutron API.

  The neutronclient didn't have support for the port binding API at the
  time when nova needed to call it, so a manual ksa client was created
  in the interim.

  When the [neutron]http_retries config option was added [2], this
  manual client was missed and thus never does any retries.

  The backportable fix for this passes the CONF.neutron.http_retries option
  to the ksa client. The following proposed change, which uses the now
  existing neutronclient support, should also be reviewed and merged to
  prevent similar bugs in the future:

  https://review.opendev.org/c/openstack/nova/+/706295

  [1] https://bugzilla.redhat.com/show_bug.cgi?id=1955031
  [2] https://review.opendev.org/c/openstack/nova/+/712226

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1929886/+subscriptions



[Yahoo-eng-team] [Bug 1930597] [NEW] Doc for "Configuring SSL Support" outdated in glance

2021-06-02 Thread Jean Chorin
Public bug reported:

The "Configuring SSL Support" states that `cert_file`, `key_file` and
`ca_file` can be set to enable TLS.

But in the [Ussuri release
notes](https://docs.openstack.org/releasenotes/glance/ussuri.html) it
is mentioned that:

> If upgrade is conducted from PY27 where ssl connections has been
terminated into glance-api, the termination needs to happen externally
from now on.

So the `cert_file`, `key_file` and `ca_file` configuration options
should be removed from the documentation.

---
Release: 22.0.0.0rc2.dev2 on 2020-05-24 10:41:41
SHA: b5437773b20db3d6ef20d449a8a43171c8fc7f69
Source: 
https://opendev.org/openstack/glance/src/doc/source/configuration/configuring.rst
URL: https://docs.openstack.org/glance/wallaby/configuration/configuring.html

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: documentation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1930597

Title:
  Doc for "Configuring SSL Support" outdated in glance

Status in Glance:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1930597/+subscriptions



[Yahoo-eng-team] [Bug 1930401] Re: Fullstack l3 agent tests failing due to timeout waiting until port is active

2021-06-02 Thread Oleg Bondarev
** Also affects: oslo.privsep
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1930401

Title:
  Fullstack l3 agent tests failing due to timeout waiting until port is
  active

Status in neutron:
  Confirmed
Status in oslo.privsep:
  New

Bug description:
  Many fullstack L3 agent related tests have been failing recently, and the
  common factor for many of them is that they fail while waiting for the port
  status to become ACTIVE. For example:

  
https://9cec50bd524f94a2df4c-c6273b9a7cf594e42eb2c4e7f818.ssl.cf5.rackcdn.com/791365/6/check/neutron-fullstack-with-uwsgi/6fc0704/testr_results.html
  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_73b/793141/2/check/neutron-fullstack-with-uwsgi/73b08ae/testr_results.html
  
https://b87ba208d44b7f1356ad-f27c11edabee52a7804784593cf2712d.ssl.cf5.rackcdn.com/791365/5/check/neutron-fullstack-with-uwsgi/634ccb1/testr_results.html
  
https://dd43e0f9601da5e2e650-51b18fcc89837fbadd0245724df9c686.ssl.cf1.rackcdn.com/791365/6/check/neutron-fullstack-with-uwsgi/5413cd9/testr_results.html
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_8d0/791365/5/check/neutron-fullstack-with-uwsgi/8d024fb/testr_results.html
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_188/791365/5/check/neutron-fullstack-with-uwsgi/188aa48/testr_results.html
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_9a3/792998/2/check/neutron-fullstack-with-uwsgi/9a3b5a2/testr_results.html
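
For context, the wait these tests time out in is essentially the following
polling pattern (a generic illustration only, not the actual fullstack
framework code; the names below are hypothetical):

import time

def wait_for_port_active(get_port_status, port_id, timeout=60, interval=2):
    # poll the port status until it becomes ACTIVE or the timeout expires
    deadline = time.time() + timeout
    while time.time() < deadline:
        if get_port_status(port_id) == 'ACTIVE':
            return
        time.sleep(interval)
    raise TimeoutError('Port %s did not become ACTIVE within %s seconds'
                       % (port_id, timeout))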

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1930401/+subscriptions



[Yahoo-eng-team] [Bug 1929998] Re: VXLAN interface cannot source from lo with multiple IPs

2021-06-02 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/793500
Committed: 
https://opendev.org/openstack/neutron/commit/4cd11f4dee494c5851318410c523dfcfbce4c824
Submitter: "Zuul (22348)"
Branch: master

commit 4cd11f4dee494c5851318410c523dfcfbce4c824
Author: Anthony Timmins 
Date:   Thu May 27 16:11:04 2021 -0400

Use local and ip address to create vxlan interface

Attempting to terminate a vxlan on the lo interface with
multiple ip addresses fails. This seems to be because only
the first ip address on the interface is used. If this address
is invalid for vxlan creation (ie. 127.0.0.1), the vxlan
interface will be created, but will not have a source ip
address, and will therefore be non-functional. To remedy this
issue, when L2population is used, we can set the local
argument to the local_ip, thus ensuring the intended ip
address is configured.

Closes-Bug: 1929998
Change-Id: I9c54a268fc4ef9705637556ecba161bd6523a047
Signed-off-by: Anthony Timmins 
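
For illustration (this is not neutron's ip_lib code; the helper below is a
hypothetical sketch that shells out to iproute2), passing "local" explicitly
pins the tunnel source address instead of letting the kernel pick the first
address on the underlying device:

import subprocess

def add_vxlan(name, vni, dev, local_ip, dstport=8472):
    # "local" pins the VXLAN source address; without it the kernel may use
    # the first address on "dev" (e.g. 127.0.0.1 on lo), which is unusable
    subprocess.run(
        ['ip', 'link', 'add', name, 'type', 'vxlan',
         'id', str(vni), 'dev', dev, 'local', local_ip,
         'dstport', str(dstport)],
        check=True)

# e.g. add_vxlan('vxlan-62', 62, 'lo', '10.1.1.1')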


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1929998

Title:
  VXLAN interface cannot source from lo with multiple IPs

Status in neutron:
  Fix Released

Bug description:
  Attempting to terminate a vxlan on the lo interface with
  multiple ip addresses fails. This seems to be because only
  the first ip address on the interface is used. If this address
  is invalid for vxlan creation (ie. 127.0.0.1), the vxlan
  interface will be created, but will not have a source ip
  address, and will therefore be non-functional. By setting and
  including the local argument during vxlan interface creation,
  we can ensure that the intended ip address is used.

  Linux version: Ubuntu 20.04.1 LTS
  Neutron version: neutron-21.0.0.0rc3.dev1

  Current behavior:

  #No source IP address is found on the vxlan interface

  1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
  inet 10.1.1.1/32 scope global lo
 valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host 
 valid_lft forever preferred_lft forever

  # ip a show dev vxlan-62
  <...>
  vxlan id 62 dev lo srcport 0 0 dstport 8472 ttl 32 ageing 300 udpcsum 
noudp6zerocsumtx noudp6zerocsumrx

  Expected behavior:

  #The source IP address is configured correctly

  1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
  inet 10.1.1.1/32 scope global lo
 valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host 
 valid_lft forever preferred_lft forever

  # ip a show dev vxlan-62
  <...>
  vxlan id 62 local 10.1.1.1 dev lo srcport 0 0 dstport 8472 ttl 32 ageing 300 
udpcsum noudp6zerocsumtx noudp6zerocsumrx

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1929998/+subscriptions



[Yahoo-eng-team] [Bug 1930566] [NEW] Octavia tempest "octavia-v2-dsvm-scenario" gate fails

2021-06-02 Thread Arkady Shtempler
Public bug reported:

Zuul fails for octavia-tempest-plugin test patches.
It seems that the "octavia-v2-dsvm-scenario" gate consistently fails with
either a test failure or a timeout.

Here are a few test patches:
https://review.opendev.org/c/openstack/octavia-tempest-plugin/+/755281
https://review.opendev.org/c/openstack/octavia-tempest-plugin/+/777412
https://review.opendev.org/c/openstack/octavia-tempest-plugin/+/755050

The error (in the failure case) is:
2021-06-02 07:22:31.787811 | controller | 2021-06-02 06:56:01,451 144223 INFO   
  [octavia_tempest_plugin.services.load_balancer.v2.base_client] Cleanup 
complete for member ade7500e-480c-4597-a53f-45518f2e477f...
2021-06-02 07:22:31.787814 | controller |
2021-06-02 07:22:31.787817 | controller |
2021-06-02 07:22:31.787821 | controller | 
octavia_tempest_plugin.tests.scenario.v2.test_ipv6_traffic_ops.IPv6TrafficOperationsScenarioTest.test_tcp_ipv6_vip_mixed_ipv4_ipv6_members_traffic[id-a4e8d5d1-03d5-4252-9300-e89b9b2bdafc]
2021-06-02 07:22:31.787824 | controller | 
---
2021-06-02 07:22:31.787827 | controller |
2021-06-02 07:22:31.787831 | controller | Captured traceback:
2021-06-02 07:22:31.787834 | controller | ~~~
2021-06-02 07:22:31.787837 | controller | Traceback (most recent call last):
2021-06-02 07:22:31.787841 | controller |
2021-06-02 07:22:31.787844 | controller |   File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/octavia_tempest_plugin/tests/scenario/v2/test_ipv6_traffic_ops.py",
 line 194, in test_tcp_ipv6_vip_mixed_ipv4_ipv6_members_traffic
2021-06-02 07:22:31.787849 | controller | 
self._test_ipv6_vip_mixed_ipv4_ipv6_members_traffic(const.TCP, 81,
2021-06-02 07:22:31.787853 | controller |
2021-06-02 07:22:31.787856 | controller |   File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/octavia_tempest_plugin/tests/scenario/v2/test_ipv6_traffic_ops.py",
 line 184, in _test_ipv6_vip_mixed_ipv4_ipv6_members_traffic
2021-06-02 07:22:31.787860 | controller | 
self.check_members_balanced(self.lb_vip_address,
2021-06-02 07:22:31.787863 | controller |
2021-06-02 07:22:31.787870 | controller |   File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/octavia_tempest_plugin/tests/validators.py",
 line 282, in check_members_balanced
2021-06-02 07:22:31.787874 | controller | self._wait_for_lb_functional(
2021-06-02 07:22:31.787877 | controller |
2021-06-02 07:22:31.787880 | controller |   File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/octavia_tempest_plugin/tests/validators.py",
 line 423, in _wait_for_lb_functional
2021-06-02 07:22:31.787884 | controller | raise Exception(message)
2021-06-02 07:22:31.787887 | controller |
2021-06-02 07:22:31.787890 | controller | Exception: Server 
[fd9f:a880:3b61:1::3da] on port 81 did not begin passing traffic within the 
timeout period. Failing test.
2021-06-02 07:22:31.787894 | controller |
2021-06-02 07:22:31.787897 | controller |
2021-06-02 07:22:31.787900 | controller | Captured pythonlogging:

** Affects: octavia
 Importance: Undecided
 Status: New


** Tags: octavia

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1930566

Title:
  Octavia tempest "octavia-v2-dsvm-scenario" gate fails

Status in octavia:
  New


[Yahoo-eng-team] [Bug 1930563] [NEW] oslopolicy-convert-json-to-yaml

2021-06-02 Thread hy
Public bug reported:

The error in node1 is as follows:
[Wed Jun 02 16:00:32.837954 2021] [wsgi:error] [pid 361523:tid 140707810875136] 
[remote 10.80.0.229:61350] WARNING oslo_policy.policy JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
[Wed Jun 02 16:00:33.096518 2021] [wsgi:error] [pid 361523:tid 140707810875136] 
[remote 10.80.0.229:61350] WARNING oslo_policy.policy JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
[Wed Jun 02 16:00:33.109075 2021] [wsgi:error] [pid 361523:tid 140707810875136] 
[remote 10.80.0.229:61350] WARNING oslo_policy.policy JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
[Wed Jun 02 16:00:33.116617 2021] [wsgi:error] [pid 361523:tid 140707810875136] 
[remote 10.80.0.229:61350] WARNING oslo_policy.policy JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
[Wed Jun 02 16:00:33.140800 2021] [wsgi:error] [pid 361523:tid 140707810875136] 
[remote 10.80.0.229:61350] WARNING oslo_policy.policy JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
[Wed Jun 02 16:00:33.147785 2021] [wsgi:error] [pid 361523:tid 140707810875136] 
[remote 10.80.0.229:61350] WARNING oslo_policy.policy JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
[Wed Jun 02 16:00:33.168505 2021] [wsgi:error] [pid 361523:tid 140707810875136] 
[remote 10.80.0.229:61350] WARNING oslo_policy.policy JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
[Wed Jun 02 16:00:33.169357 2021] [wsgi:error] [pid 361523:tid 140707810875136] 
[remote 10.80.0.229:61350] WARNING oslo_policy.policy JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
[Wed Jun 02 16:00:33.175743 2021] [wsgi:error] [pid 361523:tid 140707810875136] 
[remote 10.80.0.229:61350] WARNING oslo_policy.policy JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
[Wed Jun 02 16:00:33.189434 2021] [wsgi:error] [pid 361523:tid 140707810875136] 
[remote 10.80.0.229:61350] WARNING oslo_policy.policy JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy 

[Yahoo-eng-team] [Bug 1930551] [NEW] Instance image assumes that volume's image metadata exists

2021-06-02 Thread Bui Hong Ha
Public bug reported:

When I access an instance through /project/instances/${ID}, the dashboard
always shows an error notification with the message: Failed to get attached
volume.

The instance is working fine and the volume can be verified through the
openstack CLI.
A quick grep for the error message shows the code path below, in which the
instance.image processing in the try block assumes that the volume's image
metadata always exists. This is not the case when the volume was created with
"no source, empty source" as the Volume Source.

Also, there is no logging in this code path, so although we see the error in
the dashboard, we don't find any log message that indicates the issue.


class OverviewTab(tabs.Tab):
    name = _("Overview")
    slug = "overview"
    template_name = ("project/instances/"
                     "_detail_overview.html")

    def get_context_data(self, request):
        instance = self.tab_group.kwargs['instance']
        if instance.volumes and not instance.image:
            try:
                volume = api.cinder.volume_get(
                    self.request, volume_id=instance.volumes[0].volumeId)
                instance.image = {
                    'id': volume.volume_image_metadata['image_id'],
                    'name': volume.volume_image_metadata['image_name']}
            except Exception:
                exceptions.handle(self.request,
                                  _('Failed to get attached volume.'))
        return {"instance": instance}

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1930551

Title:
  Instance image assumes that volume's image metadata exists

Status in OpenStack Dashboard (Horizon):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1930551/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp