[Yahoo-eng-team] [Bug 1865117] [NEW] fatal=False in context.can() impacts whether policy-defaults-refresh tests get the expected results

2020-02-27 Thread Brin Zhang
Public bug reported:

While working on the policy-defaults-refresh feature for the os-instance-actions show
API [1], we should test both the authorized and the unauthorized contexts, and check
for PolicyNotAuthorized [2].
If fatal=False is set in context.can(), the PolicyNotAuthorized exception is never
raised (e.g. [1]) [3], so the test strategy needs to be adjusted when context.can()
is used as a condition; a rough sketch follows the references below.

[1]https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/instance_actions.py#L161
[2]https://github.com/openstack/nova/blob/master/nova/tests/unit/policies/base.py#L96-L101
[3]https://review.opendev.org/#/c/70/2/nova/tests/unit/policies/test_instance_actions.py@131
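
For illustration, a minimal sketch of the pattern (the policy name and surrounding
variables are assumptions, not the exact nova code): with fatal=False the check
returns a boolean instead of raising, so the policy test has to assert on the
response contents rather than on the exception:

    # Hypothetical sketch of a controller using context.can() as a condition.
    show_events = context.can(
        'os_compute_api:os-instance-actions:events',  # assumed policy name
        fatal=False)  # returns True/False instead of raising PolicyNotAuthorized
    if show_events:
        action['events'] = events  # extra details only for authorized contexts

    # In the policy unit test, the unauthorized contexts therefore cannot use a
    # helper that expects PolicyNotAuthorized; they have to call the API and
    # assert that 'events' is absent from the returned instance action.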

** Affects: nova
 Importance: Undecided
 Assignee: Brin Zhang (zhangbailin)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Brin Zhang (zhangbailin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1865117

Title:
  fatal=False in context.can() impacts whether policy-defaults-refresh
  tests get the expected results

Status in OpenStack Compute (nova):
  New

Bug description:
  While working on the policy-defaults-refresh feature for the os-instance-actions
  show API [1], we should test both the authorized and the unauthorized contexts,
  and check for PolicyNotAuthorized [2].
  If fatal=False is set in context.can(), the PolicyNotAuthorized exception is never
  raised (e.g. [1]) [3], so the test strategy needs to be adjusted when
  context.can() is used as a condition.

  
  [1] https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/instance_actions.py#L161
  [2] https://github.com/openstack/nova/blob/master/nova/tests/unit/policies/base.py#L96-L101
  [3] https://review.opendev.org/#/c/70/2/nova/tests/unit/policies/test_instance_actions.py@131

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1865117/+subscriptions



[Yahoo-eng-team] [Bug 1864133] Re: cinder service isn't working via horizon after doing a Rocky->Stein->Train upgrade

2020-02-27 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/709954
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=4b31ae5063e4392ecf964ffaae15a30bf97ddedc
Submitter: Zuul
Branch: master

commit 4b31ae5063e4392ecf964ffaae15a30bf97ddedc
Author: Akihiro Motoki 
Date:   Wed Feb 26 02:26:29 2020 +0900

Check volume endpoint availability in the same order

cinder microversion check (api.cinder.get_microversion) checks
volume endpoint availability in a different order than cinderclient()
does, and it does not honor the API_VERSIONS setting in horizon.
As a result, when multiple volume endpoints are configured,
get_microversion() may access a volume endpoint with a different API
version. At the moment the cinder v2 and v3 APIs return the same info,
so this only matters when a cinder v1 endpoint is configured.

This commit introduces a new function _find_cinder_url() to
retrieve a volume endpoint considering API_VERSIONS.

get_auth_params_from_request() is no longer needed and the variable
substitutions are now back in cinderclient(). It was introduced so that
the memoized decorator worked, but the decorator has since been improved
and we no longer need it.

Change-Id: I69b1fc11caf8a78c49e98aba6c538e5f344b14f2
Closes-Bug: #1864133
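
For illustration, a minimal sketch of the idea (helper names other than
API_VERSIONS are assumptions, not the exact Horizon code): walk the volume
service types in the order implied by the configured API version, so that the
microversion probe and cinderclient() end up on the same endpoint:

    # Hypothetical sketch; the import paths and helpers are assumed to behave
    # like their Horizon counterparts.
    from horizon import exceptions
    from openstack_dashboard.api import base

    CINDER_SERVICE_TYPES = {2: 'volumev2', 3: 'volumev3'}  # assumed mapping

    def _find_cinder_url(request, version):
        candidates = [CINDER_SERVICE_TYPES[version]] + [
            t for v, t in sorted(CINDER_SERVICE_TYPES.items()) if v != version]
        for service_type in candidates:
            try:
                return base.url_for(request, service_type)
            except exceptions.ServiceCatalogException:
                continue
        raise exceptions.ServiceCatalogException('no volume endpoint found')

get_microversion() and cinderclient() would then both call _find_cinder_url()
instead of picking an endpoint independently.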


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1864133

Title:
  cinder service isn't working via horizon after doing a
  Rocky->Stein->Train upgrade

Status in Cinder:
  Incomplete
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in kolla-ansible:
  Triaged

Bug description:
  Description:

  cinder service isn't working via horizon after doing a Rocky->Stein->Train
  upgrade. We used kolla-ansible to do the upgrade, with LVM as the volume
  backend.

  The Horizon dashboard page http://HORIZON-DASHBOARD-HOST/project/volumes/
  displays empty and just says:
  "Error: Unable to retrieve volume list."
  "Error: Unable to retrieve snapshot list."

  whereas it works via the openstack client tool; e.g. "openstack volume
  list" output is OK.

  Other info:
  ===> cinder-api-access.log via horizon <===
  - - - [21/Feb/2020:01:44:25 +] "GET /v1/3f1cbbce59704574b423b5baaafaaae0 
HTTP/1.1" 404 154 2068 "-" "python-requests/2.22.0"
  - - - [21/Feb/2020:01:44:25 +] "GET /v1/3f1cbbce59704574b423b5baaafaaae0 
HTTP/1.1" 404 154 1926 "-" "python-requests/2.22.0"

  ===> cinder-api-access.log via openstack client <===
  - - - [21/Feb/2020:01:43:40 +] "GET 
/v3/3f1cbbce59704574b423b5baaafaaae0/volumes/detail HTTP/1.1" 200 3321 90044 
"-" "python-cinderclient"

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1864133/+subscriptions



[Yahoo-eng-team] [Bug 1864020] Re: libvirt.libvirtError: Requested operation is not valid: format of backing image %s of image %s was not specified in the image metadata (See https://libvirt.org/kbase

2020-02-27 Thread Launchpad Bug Tracker
This bug was fixed in the package nova - 2:21.0.0~b2~git2020021008.1fcd74730d-0ubuntu4

---
nova (2:21.0.0~b2~git2020021008.1fcd74730d-0ubuntu4) focal; urgency=medium

  * d/p/libvirt-provide-backing-file-format-creating-qcow2.patch: Without this
patch, domains (instances) can't be launched with libvirt 6.0.0. Picked
from https://review.opendev.org/#/c/708745/ (LP: #1864020).

 -- Corey Bryant   Thu, 27 Feb 2020 09:05:50 -0500
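
For context, a minimal standalone sketch of what "providing the backing file
format" means when creating a qcow2 overlay (the actual nova change differs in
detail; this only illustrates the qemu-img option involved):

    # Hypothetical sketch: record the backing format in the overlay's metadata
    # so libvirt >= 6.0.0 does not refuse to start the domain.
    import subprocess

    def create_overlay(base_path, overlay_path, backing_fmt='raw'):
        subprocess.run(
            ['qemu-img', 'create', '-f', 'qcow2',
             '-o', 'backing_file=%s,backing_fmt=%s' % (base_path, backing_fmt),
             overlay_path],
            check=True)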

** Changed in: nova (Ubuntu)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1864020

Title:
  libvirt.libvirtError: Requested operation is not valid: format of
  backing image %s of image %s was not specified in the image metadata
  (See https://libvirt.org/kbase/backing_chains.html for
  troubleshooting)

Status in OpenStack Compute (nova):
  In Progress
Status in nova package in Ubuntu:
  Fix Released

Bug description:
  The following was discovered using Fedora 30 and a virt-preview job in
  the below change:

  zuul: Add the fedora-latest-virt-preview job to the experimental queue
  https://review.opendev.org/#/c/704573/

  Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR 
nova.compute.manager [None req-7efa9e8b-3c21-4787-8b47-54cab5fe3756 
tempest-AggregatesAdminTestJSON-76056319 
tempest-AggregatesAdminTestJSON-76056319] [instance: 
543723fb-3afc-460c-9139-809bcacd1840] Instance failed to spawn: 
libvirt.libvirtError: Requested operation is not valid: format of backing image 
'/opt/stack/data/nova/instances/_base/8e0569aaf1cbdb522514c3dc9d0fa8fad6f78c50' 
of image 
'/opt/stack/data/nova/instances/543723fb-3afc-460c-9139-809bcacd1840/disk' was 
not specified in the image metadata (See 
https://libvirt.org/kbase/backing_chains.html for troubleshooting)
  Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR 
nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840] Traceback 
(most recent call last):
  Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR 
nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2604, in _build_resources
  Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR 
nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840] yield 
resources
  Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR 
nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2377, in _build_and_run_instance
  Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR 
nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840] 
block_device_info=block_device_info)
  Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR 
nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 3399, in spawn
  Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR 
nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840] 
power_on=power_on)
  Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR 
nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 6193, in 
_create_domain_and_network
  Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR 
nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840] 
destroy_disks_on_failure)
  Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR 
nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840]   File 
"/usr/local/lib/python3.7/site-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR 
nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840] 
self.force_reraise()
  Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR 
nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840]   File 
"/usr/local/lib/python3.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR 
nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840] 
six.reraise(self.type_, self.value, self.tb)
  Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR 
nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840]   File 
"/usr/local/lib/python3.7/site-packages/six.py", line 703, in reraise
  Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR 
nova.compute.manager [instance:

[Yahoo-eng-team] [Bug 1859496] Re: Deleting stuck build instance may leak allocations

2020-02-27 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/702368
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=f35930eef8fa27ee972e87366abb38596839fdba
Submitter: Zuul
Branch: master

commit f35930eef8fa27ee972e87366abb38596839fdba
Author: Alexandre Arents 
Date:   Mon Jan 13 15:53:24 2020 +

Avoid allocation leak when deleting instance stuck in BUILD

During instance build, the conductor claims resources via the scheduler
and creates the instance DB entry in a cell.

If for any reason the conductor is not able to complete a build after
the instance claim (e.g. AMQP issues, conductor restart before the build
completes) and in the meantime the user requests deletion of the stuck
BUILD instance, nova-api deletes the build_request but leaves the
allocation in place, resulting in a leak.

This change proposes that nova-api ensure the allocation cleanup is done
in the case of an ongoing/incomplete build.
Note that because the build did not reach a cell, the compute is not able
to heal the allocation during its periodic update_available_resource task.
Furthermore, it ensures that the instance mapping is also queued for
deletion.

Change-Id: I4d3193d8401614311010ed0e055fcb3aaeeebaed
Closes-Bug: #1859496
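
For illustration, a rough sketch of the cleanup idea in the API delete path
(object and method names are assumptions, not the exact nova code):

    # Hypothetical sketch: when the instance only exists as a build request
    # (it never reached a cell), also drop its placement allocations and queue
    # the instance mapping for deletion so nothing leaks.
    report_client.delete_allocation_for_instance(context, instance.uuid)
    instance_mapping.queued_for_delete = True
    instance_mapping.save()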


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1859496

Title:
  Deleting stuck build instance may leak allocations

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===

  After issues in the control plane during instance creation,
  instances may stay stuck in the BUILD state.

  Even after deleting them, placement allocations may remain,
  and the compute host log complains that:
  Instance eba20a0f-5856-4600-bcaa-7b758d04b5c5 has allocations against this
  compute host but is not found in the database.

  
  Steps to reproduce
  ==

  On a fresh devstack master install

  
  1) open a terminal that displays the entries in placement.allocations and
  nova_cell1.instances every second:
  while true ; do date ; mysql -e "select * from placement.allocations" ;
  mysql -e "select * from nova_cell1.instances where deleted=0" ; sleep 1 ; done

  2) Trigger a spawn of 50 instances and kill rabbit after 5 sec to simulate an
  issue in the control plane:
  openstack server create --flavor m1.tiny --image cirros-0.4.0-x86_64-disk
  --nic net-id=private alex --min 50 --max 50 & sleep 5 ; sudo pkill
  rabbitmq-server

  Note: to reach the bug, the goal is to get instances allocated by the
  scheduler, but not give the conductor time to create the entries in
  nova_cell1.instances.

  You should see entries appearing in placement.allocations:
  
  +---------------------+------------+------+----------------------+--------------------------------------+-------------------+------+
  | created_at          | updated_at | id   | resource_provider_id | consumer_id                          | resource_class_id | used |
  +---------------------+------------+------+----------------------+--------------------------------------+-------------------+------+
  | 2020-01-13 11:02:51 | NULL       | 1727 |                    1 | 8d0a42fe-922b-4c08-afe3-65d65893d355 |                 2 |    1 |
  | 2020-01-13 11:02:51 | NULL       | 1728 |                    1 | 8d0a42fe-922b-4c08-afe3-65d65893d355 |                 1 |  512 |
  | 2020-01-13 11:02:51 | NULL       | 1729 |                    1 | 8d0a42fe-922b-4c08-afe3-65d65893d355 |                 0 |    1 |
  | 2020-01-13 11:02:51 | NULL       | 1730 |                    1 | 3cd1b8be-6997-452e-86e0-5013c9ab6bda |                 2 |    1 |
  | 2020-01-13 11:02:51 | NULL       | 1731 |                    1 | 3cd1b8be-6997-452e-86e0-5013c9ab6bda |                 1 |  512 |
  ...

  instances are all stuck in BUILD at this stage

  3) delete instances:
  openstack server list | awk '/m1.tiny/ {print $2}' | xargs openstack server 
delete
  4) service rabbitmq-server start
  5) openstack server list 
  
  6)  mysql -e "select count(*) from placement.allocations"
  +--+
  | count(*) |
  +--+
  |  150 |
  +--+
  Allocations remain
  7) nova-compute logs complain that:
  Instance eba20a0f-5856-4600-bcaa-7b758d04b5c5 has allocations against this
  compute host but is not found in the database.

  Expected result
  ===
  the placement allocations of the instance have to be cleaned up after deletion

  Actual result
  =
  the placement allocations of the instance are leaked.

  
  Environment
  ===
  At least Stein to master seems impacted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1859496/+subscriptions


[Yahoo-eng-team] [Bug 1865098] [NEW] logging error in _update_routed_network_host_routes

2020-02-27 Thread Harald Jensås
Public bug reported:

File "/opt/stack/neutron/neutron/services/segments/plugin.py", line 556, in _update_routed_network_host_routes
    (subnet.id, subnet.network_id))

Message: "Error formatting log line msg='Updating host routes for subnet %s on routed network %s' err=TypeError('not enough arguments for format string',)"

Arguments: (('ebae0025-a701-4b22-aa66-00ec48025b30', '306a1697-03a2-4445-bc78-c874fe226ffd'),)

--- Logging error ---
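
For reference, a standalone illustration of the logging mistake and the fix
(runnable on its own; not the exact plugin code):

    import logging

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    subnet_id = 'ebae0025-a701-4b22-aa66-00ec48025b30'
    network_id = '306a1697-03a2-4445-bc78-c874fe226ffd'

    # Buggy: the tuple is passed as a single argument, so the second %s has
    # nothing to format -> "not enough arguments for format string".
    LOG.debug('Updating host routes for subnet %s on routed network %s',
              (subnet_id, network_id))

    # Fixed: pass the values as separate arguments.
    LOG.debug('Updating host routes for subnet %s on routed network %s',
              subnet_id, network_id)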

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: logging

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1865098

Title:
  logging error in _update_routed_network_host_routes

Status in neutron:
  New

Bug description:
  File "/opt/stack/neutron/neutron/services/segments/plugin.py", line 556, in _update_routed_network_host_routes
      (subnet.id, subnet.network_id))

  Message: "Error formatting log line msg='Updating host routes for subnet %s on routed network %s' err=TypeError('not enough arguments for format string',)"

  Arguments: (('ebae0025-a701-4b22-aa66-00ec48025b30', '306a1697-03a2-4445-bc78-c874fe226ffd'),)

  --- Logging error ---

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1865098/+subscriptions



[Yahoo-eng-team] [Bug 1865082] [NEW] ssh_pwauth: no disables PasswordAuthentication in MatchUsers block as well as globally

2020-02-27 Thread Eric Swanson
Public bug reported:

Cloud Provider: DigitalOcean
Expected: ssh_pwauth: no will only disable PasswordAuthentication globally in 
/etc/ssh/sshd_config
Actual: ssh_pwauth also disables PasswordAuthentication under a MatchUsers 
block where I'd like it to remain enabled

Complicating factor: I am actually not passing `ssh_pwauth: no`
explicitly anywhere. DigitalOcean seems to be passing it themselves
because I am providing an SSH key. I'd actually be fine with totally
disabling the `ssh_pwauth` feature in my image, as I have already passed
a fully-configured sshd_config.
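
For illustration, a standalone sketch (not cloud-init's actual implementation)
of the behaviour the reporter expects: only the global PasswordAuthentication
line is rewritten, while anything inside a Match block is left alone:

    def set_global_pwauth(sshd_config_text, value='no'):
        out, in_match = [], False
        for line in sshd_config_text.splitlines():
            stripped = line.strip().lower()
            if stripped.startswith('match '):
                in_match = True  # everything after the first Match line is conditional
            if not in_match and stripped.startswith('passwordauthentication'):
                out.append('PasswordAuthentication %s' % value)
            else:
                out.append(line)
        return '\n'.join(out)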

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Attachment added: "cloud-init.tar.gz"
   
https://bugs.launchpad.net/bugs/1865082/+attachment/5331716/+files/cloud-init.tar.gz

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1865082

Title:
  ssh_pwauth: no disables PasswordAuthentication in MatchUsers block as
  well as globally

Status in cloud-init:
  New

Bug description:
  Cloud Provider: DigitalOcean
  Expected: ssh_pwauth: no will only disable PasswordAuthentication globally in 
/etc/ssh/sshd_config
  Actual: ssh_pwauth also disables PasswordAuthentication under a MatchUsers 
block where I'd like it to remain enabled

  Complicating factor: I am actually not passing `ssh_pwauth: no`
  explicitly anywhere. DigitalOcean seems to be passing it themselves
  because I am providing an SSH key. I'd actually be fine with totally
  disabling the `ssh_pwauth` feature in my image, as I have already
  passed a fully-configured sshd_config.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1865082/+subscriptions



[Yahoo-eng-team] [Bug 1865061] [NEW] When neutron does a switch-over between router1 and router2, the router1 conntrack flows should be deleted

2020-02-27 Thread Slawek Kaplonski
Public bug reported:

Originally reported for RHOSP-14 by Candido Campos.

Description of problem:

When neutron does a switch-over between router1 and router2, the
conntrack flows of router1 should be deleted.

How reproducible:


Steps to Reproduce:
1. Deploy OpenStack with 3 controllers
2. Create a network with a router and at least one VM
3. Create a FIP and assign it to the VM
4. SSH to the VM FIP: ssh -vvv cirros@X.X.X.X
5. In the controller with the active router: ip netns exec qrouter-XX ip link set
ha-XXX down ; ip netns exec qrouter-XX ip link set ha-XXX up
6. Check that the conntrack flows are not deleted: docker exec -t -i -u root
neutron_l3_agent ip netns exec qrouter-XXX conntrack -L
7. Again, in the controller with the active router: ip netns exec qrouter-XX ip link set
ha-XXX down ; ip netns exec qrouter-XX ip link set ha-XXX up
8. When the active router switches back to the previous router, the SSH connection
is broken.


Actual results:

conntrack flows are reused. SSH connection is broken.

Expected results:

conntrack flows are recreated. SSH connection isn't broken.

The problem exists only if the second failover happens within a short time,
before the conntrack table on the first controller is cleared. So it's probably
not a very serious problem for real L3HA deployments, but it would be nice
to have it fixed.

** Affects: neutron
 Importance: Low
 Status: Confirmed


** Tags: l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1865061

Title:
  When neutron does a switch-over between router1 and router2, the
  router1 conntrack flows should be deleted

Status in neutron:
  Confirmed

Bug description:
  Originally reported for RHOSP-14 by Candido Campos.

  Description of problem:

  When neutron does a switch-over between router1 and router2, the
  conntrack flows of router1 should be deleted.

  How reproducible:

  
  Steps to Reproduce:
  1. Deploy OpenStack with 3 controllers
  2. Create a network with a router and at least one VM
  3. Create a FIP and assign it to the VM
  4. SSH to the VM FIP: ssh -vvv cirros@X.X.X.X
  5. In the controller with the active router: ip netns exec qrouter-XX ip link set
  ha-XXX down ; ip netns exec qrouter-XX ip link set ha-XXX up
  6. Check that the conntrack flows are not deleted: docker exec -t -i -u root
  neutron_l3_agent ip netns exec qrouter-XXX conntrack -L
  7. Again, in the controller with the active router: ip netns exec qrouter-XX ip link
  set ha-XXX down ; ip netns exec qrouter-XX ip link set ha-XXX up
  8. When the active router switches back to the previous router, the SSH
  connection is broken.


  Actual results:

  conntrack flows are reused. SSH connection is broken.

  Expected results:

  conntrack flows are recreated. SSH connection isn't broken.

  The problem exists only if the second failover happens within a short time,
  before the conntrack table on the first controller is cleared. So it's
  probably not a very serious problem for real L3HA deployments, but it
  would be nice to have it fixed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1865061/+subscriptions



[Yahoo-eng-team] [Bug 1865058] [NEW] Add functional tests for arp_responder feature

2020-02-27 Thread Slawek Kaplonski
Public bug reported:

The ARP responder has been in Neutron for a long time but it seems it's not well
tested. As it is something that is hidden from the user, I think the easiest way to
test whether the proper OpenFlow rules are really created by the OVS agent on the
node is to add some functional tests.
Or maybe a fullstack test would be good too.
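
For illustration, a rough sketch of the kind of check such a functional test
could do (the bridge name and table number are assumptions that depend on the
agent configuration):

    import subprocess

    def arp_responder_flows(bridge='br-tun', table=21):
        # Dump the OpenFlow rules of the ARP responder table and keep the ARP
        # entries; a test would assert one exists for the port's IP/MAC.
        out = subprocess.run(
            ['ovs-ofctl', 'dump-flows', bridge, 'table=%d' % table],
            capture_output=True, text=True, check=True).stdout
        return [line for line in out.splitlines() if 'arp' in line]

A functional test could start the OVS agent with arp_responder=True, plug a
port, and assert that arp_responder_flows() returns an entry for that port.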

** Affects: neutron
 Importance: Low
 Status: Confirmed


** Tags: fullstack functional-tests l3-dvr-backlog low-hanging-fruit

** Tags added: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1865058

Title:
  Add functional tests for arp_responder feature

Status in neutron:
  Confirmed

Bug description:
  The ARP responder has been in Neutron for a long time but it seems it's not
  well tested. As it is something that is hidden from the user, I think the easiest
  way to test whether the proper OpenFlow rules are really created by the OVS
  agent on the node is to add some functional tests.
  Or maybe a fullstack test would be good too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1865058/+subscriptions



[Yahoo-eng-team] [Bug 1865040] [NEW] Able to show, update and delete aggregate with invalid id

2020-02-27 Thread GEET JAIN
Public bug reported:

Description
===
Able to show, update and delete an aggregate with an invalid id. Invalid id means
it starts with the actual id but has an alphanumeric string appended (e.g.
actual_id: 5, invalid_id: 5abcd or invalid_id: 5abcd123).

This issue only occurs when the appended alphanumeric value starts with
letters, not with digits.

The aggregate id received on the route is not converted to an integer
anywhere in the code and is later passed to the DB, which truncates the
appended string back to the original id -

e.g. the warning below -

/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py:166: Warning:
(1292, u"Truncated incorrect DOUBLE value: '6abcd123'")

There are ways to change the MySQL settings to turn the warning into an error,
but the code should handle this situation itself and raise an exception with a
proper error message; a rough sketch is below.
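
For illustration, a minimal sketch of the missing validation (not the actual
nova code): reject ids that are not purely numeric before querying the DB,
instead of letting MySQL silently truncate '5abcd' to 5:

    from webob import exc

    def _validated_aggregate_id(id_str):
        try:
            return int(id_str)
        except ValueError:
            raise exc.HTTPBadRequest(
                explanation="Aggregate id '%s' is not an integer" % id_str)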

Steps to reproduce
==

1. Create an aggregate -

++--+---+
| ID | Name | Availability Zone |
++--+---+
|  5 | new_name | None  |
++--+---+

2. Get the above created aggregate with a wrong id, e.g. 5abcd (starting
with the correct id but with some letters appended)

curl -g -i -X GET http://192.168.56.5:8774/v2.1/os-aggregates/5abcd -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: $TOKEN"

HTTP/1.1 200 OK
Content-Length: 226
Content-Type: application/json
Openstack-Api-Version: compute 2.1
X-Openstack-Nova-Api-Version: 2.1
Vary: OpenStack-API-Version
Vary: X-OpenStack-Nova-API-Version
X-Compute-Request-Id: req-c76d66ad-c4ce-430a-bcd5-a5ec5e962d2e
Date: Thu, 27 Feb 2020 13:44:07 GMT

{"aggregate": {"name": "new_name", "availability_zone": null, "deleted":
false, "created_at": "2020-02-27T13:34:00.00", "updated_at":
"2020-02-27T13:41:14.00", "hosts": [], "deleted_at": null, "id": 5,
"metadata": {}}}stack@a:~/nova/nova/api/openstack/compute$

3. Update the above created aggregate with a wrong id, e.g. 5abcd
(starting with the correct id but with some letters appended) -

Response (0.169s) - http://192.168.56.5:8774/v2.1/os-aggregates/5abcd
200 OK

{
  "aggregate": {
"name": "new_updated",
"availability_zone": null,
"deleted": false,
"created_at": "2020-02-27T13:34:00.00",
"updated_at": "2020-02-27T13:45:17.542075",
"hosts": [],
"deleted_at": null,
"id": 5,
"metadata": {}
  }
}

4. Delete the above created aggregate with a wrong id, e.g. 5abcd
(starting with the correct id but with some letters appended) -

curl -g -i -X DELETE http://192.168.56.5:8774/v2.1/os-aggregates/5abcd -H 
"User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: 
$TOKEN"
HTTP/1.1 200 OK
Content-Length: 0
Content-Type: application/json
Openstack-Api-Version: compute 2.1
X-Openstack-Nova-Api-Version: 2.1
Vary: OpenStack-API-Version
Vary: X-OpenStack-Nova-API-Version
X-Compute-Request-Id: req-8d4a2d57-934b-4c66-9a48-9e114b1b4e9f
Date: Thu, 27 Feb 2020 13:46:10 GMT

Expected result
===
Show, update and delete should not work for an invalid id (as described above).

Actual result
=
Show, update and delete are working for an invalid id (as described above).

Environment
===
1. Openstack Release - Ocata
2. Hypervisor - QEMU

** Affects: nova
 Importance: Undecided
 Assignee: GEET JAIN (geet123jain)
 Status: In Progress

** Attachment added: "Screenshot from 2020-02-27 16-51-33.png"
   
https://bugs.launchpad.net/bugs/1865040/+attachment/5331627/+files/Screenshot%20from%202020-02-27%2016-51-33.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1865040

Title:
  Able to show, update and delete aggregate with invalid id

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===
  Able to show, update and delete an aggregate with an invalid id. Invalid id
  means it starts with the actual id but has an alphanumeric string appended
  (e.g. actual_id: 5, invalid_id: 5abcd or invalid_id: 5abcd123).

  This issue only occurs when the appended alphanumeric value starts with
  letters, not with digits.

  The aggregate id received on the route is not converted to an integer
  anywhere in the code and is later passed to the DB, which truncates the
  appended string back to the original id -

  e.g. the warning below -

  /usr/local/lib/python2.7/dist-packages/pymysql/cursors.py:166:
  Warning: (1292, u"Truncated incorrect DOUBLE value: '6abcd123'")

  There are ways to change the MySQL settings to turn the warning into an
  error, but the code should handle this situation itself and raise an
  exception with a proper error message.

  Steps to reproduce
  ==

  1. Create an aggregate -

  ++--+---+
  | ID | Name | Availability Zone |
  +---

[Yahoo-eng-team] [Bug 1864841] Re: Neutron -> Designate integration does not consider the dns pool for zone

2020-02-27 Thread YAMAMOTO Takashi
** Tags added: dns rfe

** Changed in: neutron
   Importance: Undecided => Wishlist

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1864841

Title:
  Neutron -> Designate integration does not consider the dns pool for
  zone

Status in neutron:
  Opinion

Bug description:
  Hi,

  We're using the Neutron -> Designate "Reverse" Integration
  (https://docs.openstack.org/designate/latest/contributor/integrations.html)
  for creating the PTR records automatically for FloatingIPs.

  The reverse dns zone for PTR gets created always in the default
  Designate server pool.

  We have multiple dns server pools and the expectation is that if the
  forward DNS zone (where the A record is auto-created) is hosted on a
  non-default Designate pool, then the reverse dns zone should also be
  created on the same Designate pool.

  This is the code which handles zone creation:
  
https://github.com/openstack/neutron/blob/master/neutron/services/externaldns/drivers/designate/driver.py#L121-L126

  It can be improved as follows:

  1. Ask Designate on which dns pool the forward dns zone is located
  
(https://github.com/openstack/neutron/blob/master/neutron/services/externaldns/drivers/designate/driver.py#L93
  -> this is where A is created)

  2. If that is not the default dns pool -> specify that pool uuid as an
  attribute when creating the reverse dns zone for PTR records (a rough
  sketch follows).
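
  For illustration, a rough sketch of that flow (the attributes parameter,
  field names and constants are assumptions about the Designate API/client,
  not verified):

      fwd_zone = designate_client.zones.get(forward_zone_id)
      pool_id = fwd_zone.get('pool_id')
      if pool_id != DEFAULT_POOL_ID:  # assumed constant
          designate_client.zones.create(
              in_addr_zone_name, email=ptr_zone_email,
              attributes={'pool_id': pool_id})  # assumed scheduler hint
      else:
          designate_client.zones.create(in_addr_zone_name, email=ptr_zone_email)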

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1864841/+subscriptions



[Yahoo-eng-team] [Bug 1864996] [NEW] Notification about approaching session timeout

2020-02-27 Thread Mateusz Pawlowski
Public bug reported:

This is a feature request raised by the UAI customer.

SF case ref. 00267788

The customer complains that Horizon doesn't notify the user that the login
session will shortly expire. Horizon should render a notification with an
option to reset the session timeout counter so the user can avoid the timeout
and doesn't have to re-login. They explained that increasing the session
timeout is not a valid solution.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: sts wishlist

** Description changed:

  This is a feature request raised by the UAI customer.
  
  SF case ref. 00267788
  
  Customer complains that Horizon doesn't notify user that login session
- will shortly expire. Customer would like get notification with option to
- reset session timeout counter. He explained that increasing session
- timeout is not an valid solution.
+ will shortly expire. Horizon should render notification with option to
+ reset session timeout counter so user can avoid timeout and doesn't have
+ to re-login. He explained that increasing session timeout is not an
+ valid solution.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1864996

Title:
  Notification about approaching session timeout

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  This is a feature request raised by the UAI customer.

  SF case ref. 00267788

  The customer complains that Horizon doesn't notify the user that the login
  session will shortly expire. Horizon should render a notification with an
  option to reset the session timeout counter so the user can avoid the
  timeout and doesn't have to re-login. They explained that increasing the
  session timeout is not a valid solution.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1864996/+subscriptions



[Yahoo-eng-team] [Bug 1860140] Re: [OVN] Provider driver sends malformed update to Octavia

2020-02-27 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/708590
Committed: 
https://git.openstack.org/cgit/openstack/ovn-octavia-provider/commit/?id=439fc8f4b0fa52d358cdf12208476b56d79e6891
Submitter: Zuul
Branch: master

commit 439fc8f4b0fa52d358cdf12208476b56d79e6891
Author: Maciej Józefczyk 
Date:   Fri Jan 17 14:38:26 2020 +

Don't send malformed status update to Octavia

In some corner cases while updating a member, the OVN
provider driver sends a malformed status update
to Octavia that breaks the update operation and
leaves resources stuck in the PENDING_UPDATE state.

In OVN, while a resource is administratively disabled,
we suffix its ID with the string ':D'.
This patch strips this string from resource IDs
before sending Octavia a status update.

Because for now we don't have a running master branch
for the OVN provider driver, this change will be applied
first on stable/train and then cherry-picked.

Change-Id: Ib2ce04625faddd6ed263678bad2f4eb10929a520
Closes-Bug: #1860140
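
For illustration, a minimal standalone sketch of the filtering (the suffix
value is taken from the description above; the surrounding driver code is
assumed):

    DISABLED_RESOURCE_SUFFIX = ':D'

    def _clean_resource_id(resource_id):
        # OVN stores administratively disabled resources with a ':D' suffix on
        # the ID; strip it before reporting the status back to Octavia.
        if resource_id.endswith(DISABLED_RESOURCE_SUFFIX):
            return resource_id[:-len(DISABLED_RESOURCE_SUFFIX)]
        return resource_id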


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1860140

Title:
  [OVN] Provider driver sends malformed update to Octavia

Status in neutron:
  Fix Released

Bug description:
  In some corner cases while updating a member, the OVN provider driver sends
  a malformed status update to Octavia that breaks the update operation
  and leaves resources stuck in the PENDING_UPDATE state.

  Example error below:

  2020-01-17 09:04:28.792 27 ERROR networking_ovn.octavia.ovn_driver [-] 
Unexpected exception in request_handler: 
octavia_lib.api.drivers.exceptions.UpdateStatusError: ('The status update had 
an unknown error.', "E
  rror while updating the load balancer status: 'NoneType' object has no 
attribute 'update'")
 
  2020-01-17 09:04:28.792 27 ERROR networking_ovn.octavia.ovn_driver Traceback 
(most recent call last):
   
  2020-01-17 09:04:28.792 27 ERROR networking_ovn.octavia.ovn_driver   File 
"/usr/lib/python3.6/site-packages/networking_ovn/octavia/ovn_driver.py", line 
442, in _update_status_to_octavia   
  2020-01-17 09:04:28.792 27 ERROR networking_ovn.octavia.ovn_driver 
self._octavia_driver_lib.update_loadbalancer_status(status) 
 
  2020-01-17 09:04:28.792 27 ERROR networking_ovn.octavia.ovn_driver   File 
"/usr/lib/python3.6/site-packages/octavia_lib/api/drivers/driver_lib.py", line 
126, in update_loadbalancer_status 
  2020-01-17 09:04:28.792 27 ERROR networking_ovn.octavia.ovn_driver 
status_record=response.pop(constants.STATUS_RECORD, None))  
 
  2020-01-17 09:04:28.792 27 ERROR networking_ovn.octavia.ovn_driver 
octavia_lib.api.drivers.exceptions.UpdateStatusError: 'NoneType' object has no 
attribute 'update'
  2020-01-17 09:04:28.792 27 ERROR networking_ovn.octavia.ovn_driver

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1860140/+subscriptions



[Yahoo-eng-team] [Bug 1860141] Re: [OVN] Provider driver fails while LB VIP port already created

2020-02-27 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/708591
Committed: 
https://git.openstack.org/cgit/openstack/ovn-octavia-provider/commit/?id=42106d4696aaa8e91e458783b15c90335ff5f7e3
Submitter: Zuul
Branch: master

commit 42106d4696aaa8e91e458783b15c90335ff5f7e3
Author: Maciej Józefczyk 
Date:   Fri Jan 17 15:08:27 2020 +

Don't fail if the VIP already exists or has been deleted before

Sometimes there is a race condition when creating or deleting the VIP
port that ends with an exception and blocks LB stack creation/deletion.

Because for now we don't have a running master branch
for the OVN provider driver, this change will be applied
first on stable/train in the networking-ovn tree and then cherry-picked.

Change-Id: I2aaae7c407caba7a57e2ca2d4ed524f3bb63953f
Closes-Bug: #1860141
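
For illustration, a rough sketch of the idempotent handling (the exception
class comes from the traceback below; the helper for reusing an existing port
is an assumption):

    from neutronclient.common import exceptions as n_exc

    def create_vip_port(neutron, port_body):
        try:
            return neutron.create_port(port_body)['port']
        except n_exc.IpAddressAlreadyAllocatedClient:
            # Another worker already created the VIP port; reuse it instead of
            # failing the whole load balancer creation.
            return _find_existing_vip_port(neutron, port_body)  # assumed helper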


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1860141

Title:
  [OVN] Provider driver fails while LB VIP port already created

Status in neutron:
  Fix Released

Bug description:
  Sometimes there is a race condition on creation of the VIP port that ends
  with an exception and blocks LB stack creation.

  
  Example error:

  2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils 
[req-a1e64a8e-5971-4b2f-afdf-33e026f193d6 - 968cd882ee5145d4a3e30b9612b0cae0 - 
default default] Provider 'ovn' raised a driver error: An unknown driver e
  rror occurred.: octavia_lib.api.drivers.exceptions.DriverError: ('An unknown 
driver error occurred.', IpAddressAlreadyAllocatedClient())
  2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils Traceback (most 
recent call last):
  2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils   File 
"/usr/lib/python3.6/site-packages/networking_ovn/octavia/ovn_driver.py", line 
1843, in create_vip_port
  2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils project_id, 
lb_id, vip_dict)['port']
  2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils   File 
"/usr/lib/python3.6/site-packages/networking_ovn/octavia/ovn_driver.py", line 
1523, in create_vip_port
  2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils return 
network_driver.neutron_client.create_port(port)
  2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils   File 
"/usr/lib/python3.6/site-packages/neutronclient/v2_0/client.py", line 803, in 
create_port
  2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils return 
self.post(self.ports_path, body=body)
  2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils   File 
"/usr/lib/python3.6/site-packages/neutronclient/v2_0/client.py", line 359, in 
post
  2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils 
headers=headers, params=params)
  2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils   File 
"/usr/lib/python3.6/site-packages/neutronclient/v2_0/client.py", line 294, in 
do_request
  2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils 
self._handle_fault_response(status_code, replybody, resp)
  2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils   File 
"/usr/lib/python3.6/site-packages/neutronclient/v2_0/client.py", line 269, in 
_handle_fault_response
  2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils 
exception_handler_v20(status_code, error_body)
  2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils   File 
"/usr/lib/python3.6/site-packages/neutronclient/v2_0/client.py", line 93, in 
exception_handler_v20
  2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils 
request_ids=request_ids)
  2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils 
neutronclient.common.exceptions.IpAddressAlreadyAllocatedClient: IP address 
172.30.188.22 already allocated in subnet 6cdca17f-c896-4684-9feb-6d0aa4aa3cb

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1860141/+subscriptions
