[Yahoo-eng-team] [Bug 1995031] [NEW] [CI][periodic] neutron-functional-with-uwsgi-fips job failing

2022-10-27 Thread Mamatisa Nurmatov
Public bug reported:

Sometimes the periodic job neutron-functional-with-uwsgi-fips fails on the
following two tests:

neutron.tests.functional.agent.l3.test_dvr_router.TestDvrRouter
test_dvr_router_with_centralized_fip_calls_keepalived_cidr [1]
test_dvr_router_snat_namespace_with_interface_remove [2]


Latest builds: 
https://zuul.openstack.org/builds?job_name=neutron-functional-with-uwsgi-fips&project=openstack%2Fneutron&branch=master&pipeline=periodic&skip=0


1) 
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_108/periodic/opendev.org/openstack/neutron/master/neutron-functional-with-uwsgi-fips/10804fa/testr_results.html

2)
https://662fbc83c91c32a8789e-45518917cf8baf33fe991d0324b9a061.ssl.cf2.rackcdn.com/periodic/opendev.org/openstack/neutron/master/neutron-functional-with-uwsgi-fips/4cad1c3/testr_results.html

** Affects: neutron
 Importance: Medium
 Status: Confirmed


** Tags: functional-tests gate-failure

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Importance: High => Medium

** Tags added: functional-tests gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1995031

Title:
  [CI][periodic] neutron-functional-with-uwsgi-fips job failing

Status in neutron:
  Confirmed

Bug description:
  Sometimes the periodic job neutron-functional-with-uwsgi-fips fails on
  the following two tests:

  neutron.tests.functional.agent.l3.test_dvr_router.TestDvrRouter
  test_dvr_router_with_centralized_fip_calls_keepalived_cidr [1]
  test_dvr_router_snat_namespace_with_interface_remove [2]

  
  Latest builds: 
https://zuul.openstack.org/builds?job_name=neutron-functional-with-uwsgi-fips&project=openstack%2Fneutron&branch=master&pipeline=periodic&skip=0

  
  1) 
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_108/periodic/opendev.org/openstack/neutron/master/neutron-functional-with-uwsgi-fips/10804fa/testr_results.html

  2)
  
https://662fbc83c91c32a8789e-45518917cf8baf33fe991d0324b9a061.ssl.cf2.rackcdn.com/periodic/opendev.org/openstack/neutron/master/neutron-functional-with-uwsgi-fips/4cad1c3/testr_results.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1995031/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1995028] [NEW] list os-service causing reconnects to memcached all the time

2022-10-27 Thread norman shen
Public bug reported:

Description
===

We are running a Victoria OpenStack cluster (Python 3), and I observe
that every time an 'openstack compute service list' is executed, nova-api
creates a new connection to memcached. There are several reasons for this
behavior:

1. When running natively with eventlet's WSGI server, a new coroutine is 
created to host every web request, which causes keystonemiddleware's 
auth_token (which uses python-memcached) to reconnect to memcached all the time.
2. os-services triggers nova.availability_zones.set_availability_zones, which 
updates the cache every time; since cellv2 is enabled, this method also runs 
in a coroutine.
3. python-memcached's Client inherits from threading.local, which is 
monkey-patched to use eventlet's implementation, so every coroutine context 
creates a new connection.

Steps to reproduce
==

1. Patch def _get_socket and print the connection (see the sketch below).
2. Execute 'openstack compute service list'.
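
A minimal sketch of step 1 (monkey-patching python-memcached's private
memcache._Host._get_socket to log new connections; attribute names may
differ between versions, so treat this as illustrative, not a tested patch):

    import memcache

    _orig_get_socket = memcache._Host._get_socket

    def _get_socket(self):
        # Only report when no cached socket existed, i.e. a real reconnect.
        had_socket = self.socket is not None
        sock = _orig_get_socket(self)
        if sock is not None and not had_socket:
            print('new memcached connection for %r' % (self.address,))
        return sock

    memcache._Host._get_socket = _get_socket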

Expected result
===

Maintain stable connections to memcached

Actual result
=

Reconnects

Environment
===

1. devstack victoria openstack

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1995028

Title:
  list os-service causing reconnects to memcached all the time

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===

  We are running a Victoria OpenStack cluster (Python 3), and I observe
  that every time an 'openstack compute service list' is executed, nova-
  api creates a new connection to memcached. There are several reasons
  for this behavior:

  1. When running natively with eventlet's WSGI server, a new coroutine is 
created to host every web request, which causes keystonemiddleware's 
auth_token (which uses python-memcached) to reconnect to memcached all the time.
  2. os-services triggers nova.availability_zones.set_availability_zones, 
which updates the cache every time; since cellv2 is enabled, this method also 
runs in a coroutine.
  3. python-memcached's Client inherits from threading.local, which is 
monkey-patched to use eventlet's implementation, so every coroutine context 
creates a new connection.

  Steps to reproduce
  ==

  1. Patch def _get_socket and print the connection.
  2. Execute 'openstack compute service list'.

  Expected result
  ===

  Maintain stable connections to memcached

  Actual result
  =

  Reconnects

  Environment
  ===

  1. devstack victoria openstack

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1995028/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1995029] [NEW] list os-service causing reconnects to memcached all the time

2022-10-27 Thread norman shen
Public bug reported:

Description
===

We are running a Victoria OpenStack cluster (Python 3), and I observe
that every time an 'openstack compute service list' is executed, nova-api
creates a new connection to memcached. There are several reasons for this
behavior:

1. When running natively with eventlet's WSGI server, a new coroutine is 
created to host every web request, which causes keystonemiddleware's 
auth_token (which uses python-memcached) to reconnect to memcached all the time.
2. os-services triggers nova.availability_zones.set_availability_zones, which 
updates the cache every time; since cellv2 is enabled, this method also runs 
in a coroutine.
3. python-memcached's Client inherits from threading.local, which is 
monkey-patched to use eventlet's implementation, so every coroutine context 
creates a new connection.

Steps to reproduce
==

1. Patch def _get_socket and print the connection.
2. Execute 'openstack compute service list'.

Expected result
===

Maintain stable connections to memcached

Actual result
=

Reconnects

Environment
===

1. devstack victoria openstack

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1995029

Title:
  list os-service causing reconnects to memcached all the time

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===

  We are running a Victoria OpenStack cluster (Python 3), and I observe
  that every time an 'openstack compute service list' is executed, nova-
  api creates a new connection to memcached. There are several reasons
  for this behavior:

  1. When running natively with eventlet's WSGI server, a new coroutine is 
created to host every web request, which causes keystonemiddleware's 
auth_token (which uses python-memcached) to reconnect to memcached all the time.
  2. os-services triggers nova.availability_zones.set_availability_zones, 
which updates the cache every time; since cellv2 is enabled, this method also 
runs in a coroutine.
  3. python-memcached's Client inherits from threading.local, which is 
monkey-patched to use eventlet's implementation, so every coroutine context 
creates a new connection.

  Steps to reproduce
  ==

  1. Patch def _get_socket and print the connection.
  2. Execute 'openstack compute service list'.

  Expected result
  ===

  Maintain stable connections to memcached

  Actual result
  =

  Reconnects

  Environment
  ===

  1. devstack victoria openstack

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1995029/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1977524] Re: Wrong redirect after deleting zone from Zone Overview pane

2022-10-27 Thread Tatiana Ovchinnikova
Thanks for the link, Michael.
Fixed in https://review.opendev.org/c/openstack/horizon/+/857496
I'm closing this bug.

** Changed in: horizon
   Status: New => Fix Released

** Changed in: horizon
 Assignee: (unassigned) => Tatiana Ovchinnikova (tmazur)

** Changed in: designate-dashboard
 Assignee: (unassigned) => Tatiana Ovchinnikova (tmazur)

** Changed in: designate-dashboard
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1977524

Title:
  Wrong redirect after deleting zone from Zone Overview pane

Status in Designate Dashboard:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When deleting a zone from Zones -> specific zone -> Overview pane, I get a 
page-does-not-exist error.
  After the notification that the zone is being removed, the website redirects 
to /dashboard/dashboard/project/dnszones, which has a duplicated dashboard 
path segment.
  When deleting from the zones list view, everything works fine.

  
  Tested on an Ussuri environment, but the code seems to be unchanged in newer 
releases.
  I've tried to apply the bugfixes for reloading the zones/floating-ip panes, 
but with no effect for this case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate-dashboard/+bug/1977524/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1994983] [NEW] Instance is stopped on destination node after evacuation if stop call was issued before evacuation

2022-10-27 Thread Alexey Stupnikov
Public bug reported:

Steps to reproduce:

1. Start an instance on a compute node (src compute).
2. Destroy src compute. Wait until the nova-compute service goes down.
3. Run 'openstack server stop' for the instance.
4. Run 'server set --state error' on the instance to be able to evacuate it 
(no longer needed once the fix for bug #1978983 is present).
5. Evacuate the instance to another compute node (dst compute).
6. Start src compute.
7. Confirm that the evacuated instance is stopped after src compute comes online.

This behavior is a bug because src compute could come up after a few
months and shut down some important instance. It looks like this behavior
is caused by the cast RPC call telling the src compute to stop the
instance, which sits in the queue until the compute is back online and
then causes a DB state change.
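
For illustration, the fire-and-forget semantics look roughly like this
with oslo.messaging (a sketch only; the topic, server and method names are
placeholders, not nova's actual values):

    import oslo_messaging as messaging
    from oslo_config import cfg

    transport = messaging.get_rpc_transport(cfg.CONF)
    target = messaging.Target(topic='compute', server='src-compute')
    client = messaging.RPCClient(transport, target)

    # cast() returns immediately; the message waits in the server's queue
    # until 'src-compute' comes back online, even if that is months later.
    client.cast({}, 'stop_instance', instance_uuid='...')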

RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=2130112

** Affects: nova
 Importance: Undecided
 Assignee: Alexey Stupnikov (astupnikov)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1994983

Title:
  Instance is stopped on destination node after evacuation if stop call
  was issued before evacuation

Status in OpenStack Compute (nova):
  New

Bug description:
  Steps to reproduce:

  1. Start an instance on a compute node (src compute).
  2. Destroy src compute. Wait until the nova-compute service goes down.
  3. Run 'openstack server stop' for the instance.
  4. Run 'server set --state error' on the instance to be able to evacuate it 
(no longer needed once the fix for bug #1978983 is present).
  5. Evacuate the instance to another compute node (dst compute).
  6. Start src compute.
  7. Confirm that the evacuated instance is stopped after src compute comes 
online.

  This behavior is a bug because src compute could come up after a few
  months and shut down some important instance. It looks like this
  behavior is caused by the cast RPC call telling the src compute to stop
  the instance, which sits in the queue until the compute is back online
  and then causes a DB state change.

  RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=2130112

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1994983/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1994980] [NEW] FR for variable substitution in nocloud-net urls (eg system serial number)

2022-10-27 Thread Jamie Murphy
Public bug reported:

It would be really useful if there was variable substitution in nocloud-
net ds config.

I think an example would explain this best.

ds=nocloud-net;s=http://10.10.0.1:8000/$sysserial/

So in the above, it would be great if cloud-init converted $sysserial
into the system's serial number, thus polling the HTTP datasource server
for a config specific to that system by serial number.
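
A rough sketch of what the substitution could look like (hypothetical:
cloud-init does not implement this today; /sys/class/dmi/id/product_serial
is the standard Linux location for the system serial):

    from string import Template

    def expand_seed_url(url):
        with open('/sys/class/dmi/id/product_serial') as f:
            serial = f.read().strip()
        # safe_substitute leaves unknown $placeholders untouched.
        return Template(url).safe_substitute(sysserial=serial)

    print(expand_seed_url('http://10.10.0.1:8000/$sysserial/'))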

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1994980

Title:
  FR for variable substitution in nocloud-net urls (eg system serial
  number)

Status in cloud-init:
  New

Bug description:
  It would be really useful if there was variable substitution in
  nocloud-net ds config.

  I think an example would explain this best.

  ds=nocloud-net;s=http://10.10.0.1:8000/$sysserial/

  So in the above, it would be great if cloud-init converted $sysserial
  into the system's serial number, thus polling the HTTP datasource server
  for a config specific to that system by serial number.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1994980/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1912320] Re: TestTimer breaks VPNaaS functional tests

2022-10-27 Thread Bernard Cafarelli
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1912320

Title:
  TestTimer breaks VPNaaS functional tests

Status in neutron:
  Fix Released

Bug description:
  Some functional tests for neutron-vpnaas make use of the NamespaceFixture.
  If the tests are run in combination with a recent neutron version some tests
  fail because the TestTimer raises a TestTimerTimeout even if the namespace
  cleanup finishes before the timeout.

  In the test setup the tox env for dsvm-functional-sswan will normally
  install neutron 17.0.0 (victoria), but for my tests I needed a recent
  neutron, so I installed it as an additional step in the setup of the tox env.

  The test setup steps are like these, on an Ubuntu 20.04 host:

  git clone https://git.openstack.org/openstack-dev/devstack
  git clone https://opendev.org/openstack/neutron
  git clone https://opendev.org/openstack/neutron-vpnaas
  cd neutron-vpnaas
  VENV=dsvm-functional-sswan ./tools/configure_for_vpn_func_testing.sh 
../devstack -i
  tox -e dsvm-functional-sswan --notest
  source .tox/dsvm-functional-sswan/bin/activate
  python -m pip install ../neutron
  deactivate

  Then run the neutron-vpnaas functional tests:

  tox -e dsvm-functional-sswan

  Some tests fail and you see the TestTimerTimeout exception.

  The tests were fine with neutron 17.0.0.
  The TestTimer was introduced later.
  See
  Change set https://review.opendev.org/c/openstack/neutron/+/754938/
  Related bug https://bugs.launchpad.net/neutron/+bug/1838793

  I could narrow the problem with the TestTimer down.
  In at least one neutron-vpnaas test
  
(neutron_vpnaas.tests.functional.strongswan.test_netns_wrapper.TestNetnsWrapper.test_netns_wrap_success)
  the NamespaceFixture is used.
  The TestTimer is set up, the test completes and the namespace is deleted
  successfully before the 5 seconds of the timer are over. But shortly after
  that the timer still fires.

  The problem is the following: on timer start the old signal handler is
  stored (Handler.SIG_DFL in my case) and the remaining time of any existing
  alarm (in my case 0). On exit the signal handler is supposed to be reset
  and the alarm too. But neither happens.
  The signal handler is not set back, because Handler.SIG_DFL is falsy.
  The alarm is not stopped because the old value was 0 (there was no ongoing
  alarm). So in the end the alarm started by TestTimer will eventually be
  signalled.
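
  The pitfall can be reproduced in isolation (a sketch, not the actual
  TestTimer code):

      import signal

      old_handler = signal.signal(signal.SIGALRM, lambda signum, frame: None)
      old_alarm = signal.alarm(5)  # returns 0 when no alarm was pending

      # Buggy restore: signal.SIG_DFL compares equal to 0, so both checks
      # are falsy and nothing is restored; the alarm keeps running.
      if old_handler:
          signal.signal(signal.SIGALRM, old_handler)
      if old_alarm:
          signal.alarm(old_alarm)

      # Correct restore: do it unconditionally; SIG_DFL is a valid handler
      # and alarm(0) cancels any pending alarm.
      signal.signal(signal.SIGALRM, old_handler)
      signal.alarm(old_alarm)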

  References:
  Change set where the TestTimer was introduced:
  https://review.opendev.org/c/openstack/neutron/+/754938/
  That related to bug #1838793

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1912320/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1915341] Re: neutron-linuxbridge-agent not starting due to nf_tables rules

2022-10-27 Thread Bernard Cafarelli
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1915341

Title:
  neutron-linuxbridge-agent not starting due to nf_tables rules

Status in neutron:
  Fix Released

Bug description:
  * Description
  When restarting neutron-linuxbridge-agent, it fails because it cannot remove 
nf_tables chains.

  * Pre-conditions
  Openstack Ussuri on Ubuntu 20.04 installed as described in 
https://docs.openstack.org/install-guide on real hardware.

  * Reproduction steps
  When you remove an instance, some rules seem to remain in the neutronARP-* 
and neutronMAC-* chains. When neutron-linuxbridge-agent is then restarted, it 
fails:

  neutron_lib.exceptions.ProcessExecutionError: Exit code: 4; Stdin: ; Stdout: 
; Stderr: ebtables v1.8.4 (nf_tables):  CHAIN_USER_DEL failed (Device or 
resource busy): chain neutronARP-tap0a9b5e3a-21
  INFO neutron.plugins.ml2.drivers.agent._common_agent [-] Stopping Linux 
bridge agent agent.

  After flushing these chains the agent can be started.
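
  For reference, the manual workaround amounts to flushing (and, if
  needed, deleting) the leftover chains before starting the agent; a
  hedged sketch, with the chain name taken from the error message above:

      ebtables -t filter -F neutronARP-tap0a9b5e3a-21
      ebtables -t filter -X neutronARP-tap0a9b5e3a-21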

  # openstack --version
  openstack 5.2.0

  * severity: blocker

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1915341/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1940312] Re: Ussuri-only: Network segments are not tried out for allocation

2022-10-27 Thread Bernard Cafarelli
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1940312

Title:
  Ussuri-only: Network segments are not tried out for allocation

Status in neutron:
  Fix Released

Bug description:
  * High level description: When we get a list of segments to choose
  from and the first segment is already allocated, it fails right away,
  raising a RetryRequest exception, and the other segments are never
  tried.

  I explain it a little further on the comments of PatchSet 1 here:
  https://review.opendev.org/c/openstack/neutron/+/803986/1

  This actually works at master due to a side effect of a refactoring
  that was done on
  
https://opendev.org/openstack/neutron/commit/6eaa6d83d7c7f07fd4bf04879c91582de504eff4
  to randomize the selection of segments, but on stable/ussuri, when not
  specifying the provider_network_type, we got into a situation where we
  had segments to allocate in vlan but neutron was allocating vxlan
  instead.
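
  The fix amounts to trying every candidate segment instead of giving up
  on the first allocation failure, roughly like this (a generic sketch,
  not neutron's actual allocation code):

      import random

      class NoSegmentAvailable(Exception):
          pass

      def allocate_segment(candidates, try_allocate):
          candidates = list(candidates)
          random.shuffle(candidates)  # what the master refactoring effectively does
          for segment in candidates:
              # try_allocate() may fail when a concurrent request already
              # took this segment; keep trying the remaining candidates.
              if try_allocate(segment):
                  return segment
          raise NoSegmentAvailable('no tenant network is available for allocation')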

  * Pre-conditions: network_segment_range plugin enabled and several
  vlan project networks created on the system

  * Step-by-step reproduction steps: openstack --os-username
  'project1_admin' --os-password '**' --os-project-name project1
  --os-auth-url http://keystone.openstack.svc.cluster.local/v3 --os-
  user-domain-name Default --os-project-domain-name Default --os-
  identity-api-version 3 --os-interface internal --os-region-name
  RegionOne network create network11

  * Expected output: network created successfully (there was available
  ranges)

  * Actual output: HttpException: 503, Unable to create the network. No
  tenant network is available for allocation.

  * Version:
    ** OpenStack version: stable/ussuri
    ** Linux distro: Centos 7
    ** Deployment: StarlingX Openstack

  * Perceived severity: Major - System is usable in some circumstances

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1940312/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1892861] Re: [neutron-tempest-plugin] If paramiko SSH client connection fails because of authentication, cannot reconnect

2022-10-27 Thread Bernard Cafarelli
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1892861

Title:
  [neutron-tempest-plugin] If paramiko SSH client connection fails
  because of authentication, cannot reconnect

Status in neutron:
  Fix Released

Bug description:
  In the VM boot process, cloud-init copies the SSH keys.

  If the tempest test tries to connect to the VM before the SSH keys are
  copied, the SSH client will raise a
  paramiko.ssh_exception.AuthenticationException. From this point on, even
  when the SSH keys are copied into the VM, the SSH client can no longer
  reconnect to the VM using the pkey.

  If a bigger sleep time is added manually (to avoid this race
  condition: trying to connect when the IP is available on the port but
  the SSH keys are still not present in the VM), the SSH client connects
  without any problem.
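
  A sketch of the workaround: build a fresh paramiko client per attempt
  instead of reusing one that has already failed authentication (host and
  credential values are placeholders):

      import time

      import paramiko

      def connect_with_retry(host, username, pkey, attempts=10, delay=5):
          for _ in range(attempts):
              client = paramiko.SSHClient()
              client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
              try:
                  client.connect(host, username=username, pkey=pkey, timeout=10)
                  return client
              except paramiko.ssh_exception.AuthenticationException:
                  # cloud-init may not have copied the keys yet; retry with
                  # a brand-new client instead of reusing this one.
                  client.close()
                  time.sleep(delay)
          raise RuntimeError('SSH authentication never succeeded')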

  [1]http://paste.openstack.org/show/797127/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1892861/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1905568] Re: Sanity checks missing port_name while adding tunnel port

2022-10-27 Thread Bernard Cafarelli
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1905568

Title:
  Sanity checks missing port_name while adding tunnel port

Status in neutron:
  Fix Released

Bug description:
  The functions ovs_vxlan_supported and ovs_geneve_supported from the
  neutron.cmd.sanity.checks module create a tunnel port using the
  neutron.agent.common.ovs_lib.OVSBridge.add_tunnel_port() method, but
  they do not pass a port name as the first argument. That argument is
  mandatory, so it should be passed there.
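
  The corrected call would look roughly like this (a sketch; the bridge
  and port names are illustrative):

      from neutron.agent.common import ovs_lib

      br = ovs_lib.OVSBridge('br-sanity-check')
      # port_name is the mandatory first positional argument that the
      # sanity checks were omitting.
      br.add_tunnel_port('vxlan-sanity', '192.0.2.2', '192.0.2.1',
                         tunnel_type='vxlan')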

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1905568/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1988549] Re: Trunk status is down after a live-migration

2022-10-27 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/853779
Committed: 
https://opendev.org/openstack/neutron/commit/178ee6fd3d76802cd7f577ad3d0d190117e78962
Submitter: "Zuul (22348)"
Branch:master

commit 178ee6fd3d76802cd7f577ad3d0d190117e78962
Author: Arnau Verdaguer 
Date:   Fri Aug 19 16:40:50 2022 +0200

[Trunk] Update the trunk status with the parent status

After a trunk VM has been migrated the trunk status remains
DOWN, After the parent port is back to active modify the trunk
status.

Closes-Bug: #1988549
Change-Id: Ia0f7a6e8510af2c3545993e0d0d4bb06a9b70b79


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1988549

Title:
  Trunk status is down after a live-migration

Status in neutron:
  Fix Released

Bug description:
  After live-migrating a trunk VM the network trunk status remains in a
  DOWN state even if the subports and parent port status is ACTIVE and
  there is connectivity between subports and to the trunk port.

  Steps to reproduce:
  1. Create a trunk with some subport.
  2. Spawn a VM plugged into the trunk's parent port.
  3. Check the status of the trunk's parent port, the subport's port and the 
trunk itself - all ACTIVE.
  4. Live-migrate the VM.
  5. Check the status of the trunk's parent port, the subport's port and the 
trunk itself - the subport and parent port status is UP but the trunk itself 
has DOWN status.

  Info:
  (overcloud) [stack@undercloud-0 ~]$ openstack server event list 
ovn-migration-server-trunk-ext-pinger-1
  
+--+--+++
  | Request ID   | Server ID
| Action | Start Time |
  
+--+--+++
  | req-93485625-4e4f-40f0-89a6-104853eceee5 | 
8ded64a4-6675-462c-8276-663a46f8efa9 | create | 2022-09-01T20:51:14.00 |
  
+--+--+++
  (overcloud) [stack@undercloud-0 ~]$ openstack port list | grep -ext-pinger-1
  | de281f6b-afab-424f-a05e-058cabc42b53 | 
ovn-migration-port-trunk-ext-pinger-1-subport | fa:16:3e:bc:b6:96 | 
ip_address='192.168.200.178', subnet_id='e54a235d-1337-4f78-ba64-1c3d1f21c97e'  
   | ACTIVE |
  | fca6bb27-a0a1-47fb-bb37-b6aa53121248 | 
ovn-migration-port-trunk-ext-pinger-1 | fa:16:3e:bc:b6:96 | 
ip_address='10.0.0.218', subnet_id='f0ef20d3-5769-478c-bbfd-7384d5ffb284'   
   | ACTIVE |
  (overcloud) [stack@undercloud-0 ~]$ openstack network trunk show 
ovn-migration-trunk-pinger-1 -f value -c status
  ACTIVE
  (overcloud) [stack@undercloud-0 ~]$ openstack server migrate --live-migration 
--host compute-0.redhat.local --block-migration --wait 
ovn-migration-server-trunk-ext-pinger-1
  Complete
  (overcloud) [stack@undercloud-0 ~]$ openstack port list | grep -ext-pinger-1
  | de281f6b-afab-424f-a05e-058cabc42b53 | 
ovn-migration-port-trunk-ext-pinger-1-subport | fa:16:3e:bc:b6:96 | 
ip_address='192.168.200.178', subnet_id='e54a235d-1337-4f78-ba64-1c3d1f21c97e'  
   | ACTIVE |
  | fca6bb27-a0a1-47fb-bb37-b6aa53121248 | 
ovn-migration-port-trunk-ext-pinger-1 | fa:16:3e:bc:b6:96 | 
ip_address='10.0.0.218', subnet_id='f0ef20d3-5769-478c-bbfd-7384d5ffb284'   
   | ACTIVE |
  (overcloud) [stack@undercloud-0 ~]$ openstack network trunk show 
ovn-migration-trunk-pinger-1 -f value -c status
  DOWN

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1988549/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1438320] Re: Subnet pool created should be blocked when allow_overlapping_ips=False

2022-10-27 Thread Lajos Katona
Closing this now, as it has been inactive for years; feel free to reopen
and propose patches if you need this feature.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1438320

Title:
  Subnet pool created should be blocked when allow_overlapping_ips=False

Status in neutron:
  Won't Fix

Bug description:
  Creation of subnet pools should be blocked when
  allow_overlapping_ips=False. This conflicts with the notion of subnet
  pools and causes allocation of overlapping prefixes to be blocked,
  even when allocating across different pools.  The simplest solution is
  to declare subnet pools incompatible with allow_overlapping_ips=False
  and block creation of subnet pools.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1438320/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459427] Re: VPNaaS: Certificate support for IPSec

2022-10-27 Thread Lajos Katona
Closing this now, as it has been inactive for years; feel free to reopen
and propose patches if you need this feature.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1459427

Title:
  VPNaaS: Certificate support for IPSec

Status in neutron:
  Won't Fix

Bug description:
  Problem: Currently, when creating VPN IPSec site-to-site connections,
  the end user can only create tunnels using pre-shared keys for
  authentication. There is no way to use (the far superior)
  certificates, which are preferred for production environments.

  Solution: We can leverage off of Barbican to add certificate support
  for VPNaaS IPSec connections.

  Importance: Adding support for specifying certificates, will help with
  the acceptance and deployment of the VPNaaS feature.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1459427/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460499] Re: Instance cannot get IP address in Tacker by using nova's driver

2022-10-27 Thread Lajos Katona
Closing this now, as it has been inactive for years; feel free to reopen
and propose patches if you need this feature.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460499

Title:
  Instance cannot get IP address in Tacker by using nova's driver

Status in neutron:
  Won't Fix

Bug description:
  An instance cannot get an IP address in Tacker when using nova's driver,
  because the instance port's admin_state_up is False when the port is
  created. I think the port's admin_state_up should be True on creation.
  Bug fix in: https://review.openstack.org/#/c/187039/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460499/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1471032] Re: [api-ref]Support Basic Address Scope CRUD as extensions

2022-10-27 Thread Lajos Katona
** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1471032

Title:
  [api-ref]Support Basic Address Scope CRUD as extensions

Status in neutron:
  Fix Released

Bug description:
  https://review.openstack.org/189741
  commit cbd95318ad6c44e72a3aa163f7a399353c8b4458
  Author: vikram.choudhary 
  Date:   Tue Jun 9 19:55:59 2015 +0530

  Support Basic Address Scope CRUD as extensions
  
  This patch adds the support for basic address scope CRUD.
  Subsequent patches will be added to use this address scope
  on subnet pools.
  
  DocImpact
  APIImpact
  
  Co-Authored-By: Ryan Tidwell 
  Co-Authored-By: Numan Siddique 
  Change-Id: Icabdd22577cfda0e1fbf6042e4b05b8080e54fdb
  Partially-implements:  blueprint address-scopes

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1471032/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1437496] Re: port-update --fixed-ips doesn't work for routers

2022-10-27 Thread Lajos Katona
** Changed in: neutron
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1437496

Title:
  port-update --fixed-ips doesn't work for routers

Status in neutron:
  Opinion

Bug description:
  Performing a port-update with a different set of fixed-ips than are
  currently on the port will be reported as a success by Neutron;
  however, the actual addresses will not be updated in the Linux network
  namespace. This now has more functional implications as a result of
  multiple subnets being allowed on the external router interface
  (https://review.openstack.org/#/c/149068). If the interface has two
  subnets and the user wishes to remove one, they will have to clear the
  gateway interface first, removing both (causing traffic disruption),
  delete the subnet, and re-set the gateway on the router to re-add the
  remaining subnet. If port-update were functional for router addresses,
  this command could be used to remove a second subnet without causing
  disruption to the first.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1437496/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1994967] [NEW] Evacuating instances should be stopped at virt-driver level

2022-10-27 Thread Sahid Orentino
Public bug reported:

The current behavior for an evacuated instance at the destination node is
to have the virt driver start the virtual machine, then make a compute API
call, if needed, to stop the instance.

A cleaner solution would be to have the virt driver API handle an expected
state when spawning on the host.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1994967

Title:
  Evacuating instances should be stopped at virt-driver level

Status in OpenStack Compute (nova):
  New

Bug description:
  The current behavior for an evacuated instance at the destination node
  is to have the virt driver start the virtual machine, then make a
  compute API call, if needed, to stop the instance.

  A cleaner solution would be to have the virt driver API handle an
  expected state when spawning on the host.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1994967/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1989232] Re: MultiAttachVolumeSwap fails or takes a long time to detach volume

2022-10-27 Thread Balazs Gibizer
Thanks Aboubacar for the well-written bug report.

I agree that we have a race between the disconnect and the swap
operation. Both use a lock, but they use different locks, so they can
overlap.
Failed case:

Sep 30 02:51:47  Lock "connect_volume" "released" by 
"os_brick.initiator.connectors.iscsi.ISCSIConnector.disconnect_volume" :: held 
0.156s {{(pid=2571444) inner 
/usr/local/lib/python3.8/dist-packages/oslo_concurrency/lockutils.py:400}}
Sep 30 02:51:55  Lock "0d711b7b-4693-4a7e-9a94-ca4186b4a670" "released" by 
"nova.compute.manager.ComputeManager.swap_volume.<locals>._do_locked_swap_volume"
 :: held 153.400s {{(pid=2571444) inner 
/usr/local/lib/python3.8/dist-packages/oslo_concurrency/lockutils.py:400}}

Successful case:

Sep 29 17:12:21 Lock "4aeaeb5d-295f-4149-9330-a016d9da1730" "released" by 
"nova.compute.manager.ComputeManager.swap_volume.<locals>._do_locked_swap_volume"
 :: held 632.783s {{(pid=2571444) inner 
/usr/local/lib/python3.8/dist-packages/oslo_concurrency/lockutils.py:400}}
Sep 29 17:12:25 Lock "connect_volume" "released" by 
"os_brick.initiator.connectors.iscsi.ISCSIConnector.disconnect_volume" :: held 
0.142s {{(pid=2571444) inner 
/usr/local/lib/python3.8/dist-packages/oslo_concurrency/lockutils.py:400}}


** Also affects: os-brick
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
   Importance: Undecided => Medium

** Tags added: volumes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1989232

Title:
  MultiAttachVolumeSwap fails or takes a long time to detach volume

Status in OpenStack Compute (nova):
  Triaged
Status in os-brick:
  New
Status in tempest:
  New

Bug description:
  ERROR:
  
tempest.api.compute.admin.test_volume_swap.TestMultiAttachVolumeSwap.test_volume_swap_with_multiattach
  fails during tempest iSCSI tests due to volume taking a long time to
  detach or failing to detach from instance.  The logs herein show an
  example of a failure to detach.

  EXPECTED BEHAVIOR: Volume successfully detaches and test passes.

  HOW TO DUPLICATE:
  Run: tox -e all -- 
tempest.api.compute.admin.test_volume_swap.TestMultiAttachVolumeSwap.test_volume_swap_with_multiattach
 | tee -a console.log.out

  CONFIG:
  - DevStack Zed Release
  - Single node using iSCSI
  - Host OS: Ubuntu 20.04
  Distributor ID: Ubuntu
  Description:Ubuntu 20.04.3 LTS
  Release:20.04
  Codename:   focal

  From tempest console.log:

  
  tempest.api.compute.admin.test_volume_swap.TestMultiAttachVolumeSwap.test_volume_swap_with_multiattach[id-e8f8f9d1-d7b7-4cd2-8213-ab85ef697b6e,slow,volume]
  ----------------------------------------------------------------------

  Captured traceback:
  ~~~
  Traceback (most recent call last):

    File "/opt/stack/tempest/tempest/lib/decorators.py", line 81, in wrapper
  return f(*func_args, **func_kwargs)

    File "/opt/stack/tempest/tempest/common/utils/__init__.py", line 70, in 
wrapper
  return f(*func_args, **func_kwargs)

    File "/opt/stack/tempest/tempest/api/compute/admin/test_volume_swap.py", 
line 245, in test_volume_swap_with_multiattach
  waiters.wait_for_volume_resource_status(self.volumes_client,

    File "/opt/stack/tempest/tempest/common/waiters.py", line 301, in 
wait_for_volume_resource_status
  time.sleep(client.build_interval)

    File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/fixtures/_fixtures/timeout.py",
 line 52, in signal_handler
  raise TimeoutException()

  fixtures._fixtures.timeout.TimeoutException

  Captured traceback-1:
  ~
  Traceback (most recent call last):

    File "/opt/stack/tempest/tempest/common/waiters.py", line 385, in 
wait_for_volume_attachment_remove_from_server
  raise lib_exc.TimeoutException(message)

  tempest.lib.exceptions.TimeoutException: Request timed out
  Details: Volume a54c67b7-786e-4ba7-94ea-d1e0a722424a failed to detach from 
server 986b2dd5-542a-4344-a929-9ac7bbf35d7c within the required time (3600 s) 
from the compute API perspective

  In waiters.py:

  373     while any(volume for volume in volumes if volume['volumeId'] == volume_id):
  374         time.sleep(client.build_interval)
  375
  376     timed_out = int(time.time()) - start >= client.build_timeout
  377     if timed_out:
  378         console_output = client.get_console_output(server_id)['output']
  379         LOG.debug('Console output for %s\nbody=\n%s',
  380                   server_id, console_output)
  381         message = ('Volume %s failed to detach from server %s within '
  382                    'the required time (%s s) from the compute API '
  383

[Yahoo-eng-team] [Bug 1994087] Re: Unexpected API Error.

2022-10-27 Thread Balazs Gibizer
From the logs you provided:

2022-10-25 08:41:27.138 30 ERROR nova.api.openstack.wsgi raise 
exception.NeutronAdminCredentialConfigurationInvalid()
2022-10-25 08:41:27.138 30 ERROR nova.api.openstack.wsgi 
NeutronAdminCredentialConfigurationInvalid: Networking client is experiencing 
an unauthorized exception.

So you have to check the [neutron] section of the config file of the
nova-api service and verify that the credentials provided there are
valid and can be used to query neutron.
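
For reference, a typical [neutron] section uses Keystone password
credentials like the following (illustrative values only; use the service
credentials valid in your deployment):

    [neutron]
    auth_type = password
    auth_url = http://controller:5000/v3
    username = neutron
    password = NEUTRON_SERVICE_PASSWORD
    project_name = service
    user_domain_name = Default
    project_domain_name = Default
    region_name = RegionOne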


** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1994087

Title:
  Unexpected API Error.

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I got a nova-api ERROR when I create an instance using the command below
  in my Rocky cluster.

  [root@zlmanager40 ~]# openstack server create --image 
6939705e-47fb-4406-9aed-e1f106a31739   --flavor c1.2c4g  --availability-zone 
nova:zlstorage58 --network office_56 zl-os-checker02
  Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ 
and attach the Nova API log if possible.
  (HTTP 500) (Request-ID: req-c78a192b-4a8f-47d6-8570-2d9cf1dd66ff)

  
  The nova-api log looks like this. What's wrong with my cluster? The cluster 
version is Rocky, deployed with kolla-ansible.

  2022-10-25 08:41:27.137 30 ERROR nova.network.neutronv2.api 
[req-c78a192b-4a8f-47d6-8570-2d9cf1dd66ff 22c0da6816464768a96ec9957df76af5 
7c60a604a0df49469f5c689a577b6d99 - default default] Neutron client was not able 
to generate a valid admin token, please verify Neutron admin credential located 
in nova.conf: Unauthorized: 401-{u'error': {u'message': u'The request you have 
made requires authentication.', u'code': 401, u'title': u'Unauthorized'}}
  2022-10-25 08:41:27.138 30 ERROR nova.api.openstack.wsgi 
[req-c78a192b-4a8f-47d6-8570-2d9cf1dd66ff 22c0da6816464768a96ec9957df76af5 
7c60a604a0df49469f5c689a577b6d99 - default default] Unexpected exception in API 
method: NeutronAdminCredentialConfigurationInvalid: Networking client is 
experiencing an unauthorized exception.
  2022-10-25 08:41:27.138 30 ERROR nova.api.openstack.wsgi Traceback (most 
recent call last):
  2022-10-25 08:41:27.138 30 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py", line 801, in 
wrapped
  2022-10-25 08:41:27.138 30 ERROR nova.api.openstack.wsgi return f(*args, 
**kwargs)
  2022-10-25 08:41:27.138 30 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 110, 
in wrapper
  2022-10-25 08:41:27.138 30 ERROR nova.api.openstack.wsgi return 
func(*args, **kwargs)
  2022-10-25 08:41:27.138 30 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 110, 
in wrapper
  2022-10-25 08:41:27.138 30 ERROR nova.api.openstack.wsgi return 
func(*args, **kwargs)
  2022-10-25 08:41:27.138 30 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 110, 
in wrapper
  2022-10-25 08:41:27.138 30 ERROR nova.api.openstack.wsgi return 
func(*args, **kwargs)
  2022-10-25 08:41:27.138 30 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 110, 
in wrapper
  2022-10-25 08:41:27.138 30 ERROR nova.api.openstack.wsgi return 
func(*args, **kwargs)
  2022-10-25 08:41:27.138 30 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 110, 
in wrapper
  2022-10-25 08:41:27.138 30 ERROR nova.api.openstack.wsgi return 
func(*args, **kwargs)
  2022-10-25 08:41:27.138 30 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 110, 
in wrapper
  2022-10-25 08:41:27.138 30 ERROR nova.api.openstack.wsgi return 
func(*args, **kwargs)
  2022-10-25 08:41:27.138 30 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 110, 
in wrapper
  2022-10-25 08:41:27.138 30 ERROR nova.api.openstack.wsgi return 
func(*args, **kwargs)
  2022-10-25 08:41:27.138 30 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 110, 
in wrapper
  2022-10-25 08:41:27.138 30 ERROR nova.api.openstack.wsgi return 
func(*args, **kwargs)
  2022-10-25 08:41:27.138 30 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 110, 
in wrapper
  2022-10-25 08:41:27.138 30 ERROR nova.api.openstack.wsgi return 
func(*args, **kwargs)
  2022-10-25 08:41:27.138 30 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 110, 
in wrapper
  2022-10-25 08:41:27.138 30 ERROR 

[Yahoo-eng-team] [Bug 1381562] Re: Add functional tests for metadata agent

2022-10-27 Thread Lajos Katona
** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1381562

Title:
  Add functional tests for metadata agent

Status in neutron:
  Fix Released

Bug description:
  As per discussion on
  
https://review.openstack.org/#/c/121782/8/neutron/tests/unit/test_metadata_agent.py:

  Tests could do something like sending an HTTP request to a proxy,
  while mocking the API (and then potentially RPC, if RPC is merged into
  the metadata agent) response, then asserting that the agent forwarded
  the correct HTTP request to Nova.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1381562/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1405057] Re: Filter port-list based on security_groups associated not working

2022-10-27 Thread Lajos Katona
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1405057

Title:
  Filter port-list based on security_groups associated not working

Status in neutron:
  Fix Released

Bug description:
  Sample Usecases:

  1. neutron port-list --security_groups=6f3d9d9d-e84d-437c-ac40-82ce3196230c
  Invalid input for operation: '6' is not an integer or uuid.

  2. neutron port-list --security_groups list=true 
6f3d9d9d-e84d-437c-ac40-82ce3196230c
  Invalid input for operation: '6' is not an integer or uuid.

  Since the security groups associated with a port are referenced from the
  securitygroups DB table, we can't just filter ports based on
  security_groups directly the way it works for other parameters.

  Example:
  neutron port-list --mac_address list=true fa:16:3e:40:2b:cc fa:16:3e:8e:32:3e
  
+--+--+---+---+
  | id   | name | mac_address   | fixed_ips 
|
  
+--+--+---+---+
  | 1cecec78-226f-4379-b5ad-c145e2e14048 |  | fa:16:3e:40:2b:cc | 
{"subnet_id": "af938c1c-e2d7-47a0-954a-ec8524677486", "ip_address": 
"50.10.10.2"} |
  | eec24494-09a8-4fa8-885d-e3fda37fe756 |  | fa:16:3e:8e:32:3e | 
{"subnet_id": "af938c1c-e2d7-47a0-954a-ec8524677486", "ip_address": 
"50.10.10.3"} |
  
+--+--+---+---+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1405057/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371435] Re: Remove unnecessary iptables reload when L2 agent enable ipset

2022-10-27 Thread Lajos Katona
** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1371435

Title:
  Remove unnecessary iptables reload when L2 agent enable ipset

Status in neutron:
  Won't Fix

Bug description:
  When the L2 agent enables ipset and a security group only updates its 
members, iptables should not be reloaded; the agent just needs to add the 
members to the ipset chain. There is room to improve!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1371435/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp