[Yahoo-eng-team] [Bug 1798475] Re: Fullstack test test_ha_router_restart_agents_no_packet_lost failing

2024-04-24 Thread Lajos Katona
I am closing this for now; the test
test_ha_router_restart_agents_no_packet_lost is still marked as
unstable. Feel free to reopen it.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1798475

Title:
  Fullstack test test_ha_router_restart_agents_no_packet_lost failing

Status in neutron:
  Won't Fix

Bug description:
  Found at least 4 times recently:

  http://logs.openstack.org/97/602497/5/gate/neutron-fullstack/b8ba2f9/logs/testr_results.html.gz
  http://logs.openstack.org/90/610190/2/gate/neutron-fullstack/1f633ed/logs/testr_results.html.gz
  http://logs.openstack.org/52/608052/1/gate/neutron-fullstack/6d36706/logs/testr_results.html.gz
  http://logs.openstack.org/48/609748/1/gate/neutron-fullstack/f74a133/logs/testr_results.html.gz

  
  It looks like some packet loss is sometimes observed during the L3
  agent restart, and that causes the failure. We need to investigate this.
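For reference, the fullstack test decides pass/fail from ping statistics. A minimal, hypothetical helper (not the actual neutron test code) that extracts the loss percentage from an iputils ping summary line could look like:

```python
import re

def packet_loss(ping_output):
    """Return the packet-loss percentage from ping's summary line, or None.

    Assumes the iputils format: 'N packets transmitted, M received,
    X% packet loss, time Tms'.
    """
    m = re.search(r'(\d+(?:\.\d+)?)% packet loss', ping_output)
    return float(m.group(1)) if m else None

summary = ('10 packets transmitted, 9 received, '
           '10% packet loss, time 9012ms')
assert packet_loss(summary) == 10.0
assert packet_loss('10 packets transmitted, 10 received, 0% packet loss') == 0.0
```

A restart test would then assert that the parsed loss is exactly 0.0 while the agents are restarting.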

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1798475/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1775220] Re: Unit test neutron.tests.unit.objects.test_ports.PortBindingLevelDbObjectTestCase.test_get_objects_queries_constant fails often

2024-04-24 Thread Lajos Katona
The test
(neutron.tests.unit.objects.test_base.BaseDbObjectTestCase.test_get_objects_queries_constant)
is still marked as unstable, but my query has not found any failures of the test.
I am closing this now, but feel free to reopen it if you encounter the issue again.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1775220

Title:
  Unit test
  neutron.tests.unit.objects.test_ports.PortBindingLevelDbObjectTestCase.
  test_get_objects_queries_constant fails often

Status in neutron:
  Won't Fix

Bug description:
  For some time we have had a frequent issue with the unit test
  neutron.tests.unit.objects.test_ports.PortBindingLevelDbObjectTestCase
  .test_get_objects_queries_constant

  It also happens in periodic jobs. Examples of failures from last
  week:

  http://logs.openstack.org/periodic/git.openstack.org/openstack/neutron/master/openstack-tox-py27-with-oslo-master/031dc64/testr_results.html.gz

  http://logs.openstack.org/periodic/git.openstack.org/openstack/neutron/master/openstack-tox-py35-with-neutron-lib-master/4f4b599/testr_results.html.gz

  http://logs.openstack.org/periodic/git.openstack.org/openstack/neutron/master/openstack-tox-py35-with-oslo-master/348faa8/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1775220/+subscriptions




[Yahoo-eng-team] [Bug 1774463] Re: RFE: Add support for IPv6 on DVR Routers for the Fast-path exit

2024-04-24 Thread Lajos Katona
I am closing this bug due to long inactivity. Please reopen it if you
wish to work on it or if you see the issue in your environment.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1774463

Title:
  RFE: Add support for IPv6 on DVR Routers for the Fast-path exit

Status in neutron:
  Won't Fix

Bug description:
  This RFE is to add support for IPv6 on DVR Routers for the Fast-Path-Exit.
  Today DVR supports Fast-Path-Exit through the FIP namespace, but the FIP
  namespace does not support IPv6 link-local addresses, and no RA proxy is
  enabled in the FIP namespace.
  This RFE should address those issues.

  1. Update the link-local addresses of the 'rfp' and 'fpr' ports to support
  both IPv4 and IPv6.
  2. Enable an RA proxy in the FIP namespace and assign an IPv6 address to
  the FIP gateway port.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1774463/+subscriptions




[Yahoo-eng-team] [Bug 1744402] Re: fullstack security groups test fails because ncat process doesn't start

2024-04-24 Thread Lajos Katona
Since https://review.opendev.org/c/openstack/neutron/+/830374 the
fullstack unstable decorator has been removed from test_security_groups
(that is ~unmaintained/yoga at least, since the tag is removed).

** Changed in: neutron
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1744402

Title:
  fullstack security groups test fails because ncat process doesn't start

Status in neutron:
  Fix Released

Bug description:
  Sometimes the fullstack test
  neutron.tests.fullstack.test_securitygroup.TestSecurityGroupsSameNetwork
  fails because the "ncat" process doesn't start properly:

  Captured traceback:
  ~~~
  Traceback (most recent call last):
    File "neutron/tests/base.py", line 132, in func
      return f(self, *args, **kwargs)
    File "neutron/tests/fullstack/test_securitygroup.py", line 163, in test_securitygroup
      net_helpers.NetcatTester.TCP)
    File "neutron/tests/fullstack/test_securitygroup.py", line 68, in assert_connection
      self.assertTrue(netcat.test_connectivity())
    File "neutron/tests/common/net_helpers.py", line 509, in test_connectivity
      self.client_process.writeline(testing_string)
    File "neutron/tests/common/net_helpers.py", line 459, in client_process
      self.establish_connection()
    File "neutron/tests/common/net_helpers.py", line 489, in establish_connection
      address=self.address)
    File "neutron/tests/common/net_helpers.py", line 537, in _spawn_nc_in_namespace
      proc = RootHelperProcess(cmd, namespace=namespace)
    File "neutron/tests/common/net_helpers.py", line 288, in __init__
      self._wait_for_child_process()
    File "neutron/tests/common/net_helpers.py", line 321, in _wait_for_child_process
      "in %d seconds" % (self.cmd, timeout)))
    File "neutron/common/utils.py", line 649, in wait_until_true
      raise exception
  RuntimeError: Process ['ncat', u'20.0.0.5', '', '-w', '20'] hasn't been spawned in 20 seconds
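The wait_until_true() call at the bottom of the traceback is a simple poll-until-timeout idiom. A hedged sketch of it (simplified, not the exact neutron.common.utils implementation) is:

```python
import time

def wait_until_true(predicate, timeout=20, sleep=0.1, exception=None):
    """Poll predicate() until it returns True or timeout seconds pass.

    On timeout, raise the caller-supplied exception (this is how the test
    produces the "hasn't been spawned in 20 seconds" RuntimeError above).
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return
        time.sleep(sleep)
    raise exception or RuntimeError('timed out after %s seconds' % timeout)

# A predicate that is already true returns immediately without raising.
wait_until_true(lambda: True, timeout=1)
try:
    wait_until_true(lambda: False, timeout=0.2,
                    exception=RuntimeError("process hasn't been spawned"))
except RuntimeError as e:
    assert "spawned" in str(e)
```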

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1744402/+subscriptions




[Yahoo-eng-team] [Bug 1675910] Re: segment event transaction semantics are wrong

2024-04-24 Thread Lajos Katona
I am closing this now, as I understand more things work if we keep
_delete_segments_for_network in PRECOMMIT_DELETE (the revert mentioned
by Ihar lists some of them, see:
https://review.opendev.org/c/openstack/neutron/+/475955 )

** Changed in: neutron
   Status: Confirmed => Invalid

** Changed in: neutron
   Status: Invalid => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1675910

Title:
  segment event transaction semantics are wrong

Status in neutron:
  Won't Fix

Bug description:
  _delete_segments_for_network is currently being called inside of a
  transaction, which results in all of the BEFORE/PRECOMMIT/AFTER events
  for the segments themselves being inside of a transaction. This makes
  them all effectively PRECOMMIT in the database lifecycle which
  violates the semantics we've assigned to them.
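A toy illustration (hypothetical code, not neutron's actual callback registry) of why emitting AFTER events inside an open transaction makes them effectively PRECOMMIT: the AFTER callback observes a transaction that has not committed yet.

```python
# Record which events fire while the (fake) transaction is still open.
fired = []

class FakeTransaction:
    """Stand-in for a SQLAlchemy session transaction."""
    open = False

    def __enter__(self):
        self.open = True
        return self

    def __exit__(self, *exc):
        self.open = False  # COMMIT happens here

def notify(event, txn):
    # Capture whether the transaction was still open when the event fired.
    fired.append((event, txn.open))

txn = FakeTransaction()
with txn:
    notify('BEFORE_DELETE', txn)
    notify('PRECOMMIT_DELETE', txn)
    # The bug: AFTER_DELETE emitted inside the transaction...
    notify('AFTER_DELETE', txn)

# AFTER_DELETE fired while the transaction was open -> it behaved like
# just another PRECOMMIT callback, violating the assigned semantics.
assert fired == [('BEFORE_DELETE', True),
                 ('PRECOMMIT_DELETE', True),
                 ('AFTER_DELETE', True)]
```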

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1675910/+subscriptions




[Yahoo-eng-team] [Bug 2061883] [NEW] [fwaas] Duplicate entry for key default_firewall_groups.PRIMARY'

2024-04-16 Thread Lajos Katona
Public bug reported:

In the periodic neutron-tempest-plugin-fwaas job there are sporadic failures with an internal server error (see [1]):
Apr 13 08:51:56.167863 np0037278106 neutron-server[59018]: ERROR neutron.api.v2.resource oslo_db.exception.DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, "Duplicate entry '802cc07da18040609dc5772f1d4149b9' for key 'default_firewall_groups.PRIMARY'")

802cc07da18040609dc5772f1d4149b9 is the uuid of the project/tenant in
the above exception.


Opensearch link:
https://opensearch.logs.openstack.org/_dashboards/app/discover/?security_tenant=global#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-30d,to:now))&_a=(columns:!(_source),filters:!(),index:'94869730-aea8-11ec-9e6a-83741af3fdcd',interval:auto,query:(language:kuery,query:'message:%22line%20269,%20in%20test_create_show_delete_firewall_group%22'),sort:!())

[1]: https://paste.opendev.org/show/bIwAbuJ88F8IPdTCJjYN/
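A plausible shape of the race (a sketch only; the real table and retry logic live in neutron-fwaas and oslo.db) is two workers concurrently inserting the per-project default group, where the loser of the race must treat the duplicate-key error as "already exists" rather than surface a 500:

```python
import sqlite3

# sqlite3 stands in for the MySQL table from the error message.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE default_firewall_groups '
             '(project_id TEXT PRIMARY KEY, fwg_id TEXT)')

def ensure_default_group(project_id, fwg_id):
    """Insert the project's default firewall group, tolerating a lost race."""
    try:
        conn.execute('INSERT INTO default_firewall_groups VALUES (?, ?)',
                     (project_id, fwg_id))
    except sqlite3.IntegrityError:
        # Another worker created it first -> reuse theirs, do not error out.
        pass

ensure_default_group('802cc07da18040609dc5772f1d4149b9', 'fwg-1')
ensure_default_group('802cc07da18040609dc5772f1d4149b9', 'fwg-2')  # no error
rows = conn.execute('SELECT COUNT(*) FROM default_firewall_groups').fetchone()
assert rows[0] == 1
```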

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2061883

Title:
  [fwaas] Duplicate entry  for key
  default_firewall_groups.PRIMARY'

Status in neutron:
  New

Bug description:
  In the periodic neutron-tempest-plugin-fwaas job there are sporadic failures with an internal server error (see [1]):
  Apr 13 08:51:56.167863 np0037278106 neutron-server[59018]: ERROR neutron.api.v2.resource oslo_db.exception.DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, "Duplicate entry '802cc07da18040609dc5772f1d4149b9' for key 'default_firewall_groups.PRIMARY'")

  802cc07da18040609dc5772f1d4149b9 is the uuid of the project/tenant in
  the above exception.

  
  Opensearch link:
  https://opensearch.logs.openstack.org/_dashboards/app/discover/?security_tenant=global#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-30d,to:now))&_a=(columns:!(_source),filters:!(),index:'94869730-aea8-11ec-9e6a-83741af3fdcd',interval:auto,query:(language:kuery,query:'message:%22line%20269,%20in%20test_create_show_delete_firewall_group%22'),sort:!())

  [1]: https://paste.opendev.org/show/bIwAbuJ88F8IPdTCJjYN/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2061883/+subscriptions




[Yahoo-eng-team] [Bug 1629097] Re: neutron-rootwrap processes not getting cleaned up

2024-02-22 Thread Lajos Katona
Neutron changed to use privsep; if you still see similar issues, please
reopen this bug report or open a new one for privsep.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1629097

Title:
  neutron-rootwrap processes not getting cleaned up

Status in neutron:
  Invalid

Bug description:
  neutron-rootwrap processes aren't getting cleaned up on Newton.  I'm
  testing with Newton rc3.

  I was noticing memory exhaustion on my neutron gateway units, which turned out to be due to compounding neutron-rootwrap processes:
  sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf ovsdb-client monitor Interface name,ofport,external_ids --format=json

  $ top -n1 -b -o VIRT
  http://paste.ubuntu.com/23252407/

  $ ps aux|grep ovsdb-client
  http://paste.ubuntu.com/23252658/

  Restarting openvswitch cleans up the processes, but they just start piling up again soon after:
  sudo systemctl restart openvswitch-switch

  At first I thought this was an openvswitch issue, however I reverted
  the code in get_root_helper_child_pid() and neutron-rootwrap processes
  started getting cleaned up. See corresponding commit for code that
  possibly introduced this at [1].

  This can be recreated with the openstack charms using xenial-newton-
  staging.  On newton deploys, neutron-gateway and nova-compute units
  will exhaust memory due to compounding ovsdb-client processes.

  [1]
  commit fd93e19f2a415b3803700fc491749daba01a4390
  Author: Assaf Muller 
  Date:   Fri Mar 18 16:29:26 2016 -0400

  Change get_root_helper_child_pid to stop when it finds cmd

  get_root_helper_child_pid recursively finds the child of pid,
  until it can no longer find a child. However, the intention is
  not to find the deepest child, but to strip away root helpers.
  For example 'sudo neutron-rootwrap x' is supposed to find the
  pid of x. However, in cases 'x' spawned quick lived children of
  its own (For example: ip / brctl / ovs invocations),
  get_root_helper_child_pid returned those pids if called in
  the wrong time.

  Change-Id: I582aa5c931c8bfe57f49df6899445698270bb33e
  Closes-Bug: #1558819
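The commit message above can be illustrated with a toy process tree (hypothetical code, not the real get_root_helper_child_pid): descending to the deepest child can land on a transient grandchild such as an ip/ovs invocation, while stopping at the expected command does not.

```python
# Maps each process to its (single) child; 'ip' is a short-lived
# grandchild spawned by ovsdb-client, the process we actually want.
proc_tree = {'sudo': 'rootwrap',
             'rootwrap': 'ovsdb-client',
             'ovsdb-client': 'ip'}

def deepest_child(pid):
    """Old behaviour: recurse until there is no child at all."""
    while pid in proc_tree:
        pid = proc_tree[pid]
    return pid

def child_matching(pid, cmd):
    """Fixed behaviour: stop as soon as the expected command is found."""
    while pid in proc_tree:
        pid = proc_tree[pid]
        if pid == cmd:
            return pid
    return pid

assert deepest_child('sudo') == 'ip'                       # wrong target
assert child_matching('sudo', 'ovsdb-client') == 'ovsdb-client'
```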

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1629097/+subscriptions




[Yahoo-eng-team] [Bug 1639272] Re: LB agent not updating port status upon port misconfiguration

2024-02-22 Thread Lajos Katona
The Linux bridge agent is experimental now.

** Tags added: linuxbridge

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1639272

Title:
  LB agent not updating port status upon port misconfiguration

Status in neutron:
  Won't Fix

Bug description:
  The Linux bridge agent does not update the status of a port once it is
  no longer configured correctly. Nova or operator manually deleting
  ports from bridges under the control of the agent is one example.

  See: https://review.openstack.org/#/c/351675/4/specs/ocata/port-data-plane-status.rst L230

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1639272/+subscriptions




[Yahoo-eng-team] [Bug 1618244] Re: Possible scale issues with neutron-fwaas requesting all tenants with firewalls after RPC failures

2024-02-22 Thread Lajos Katona
If I understand correctly, the above patch fixed the issue. Please don't
hesitate to reopen this bug if you still see the scale issue.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1618244

Title:
  Possible scale issues with neutron-fwaas requesting all tenants with
  firewalls after RPC failures

Status in neutron:
  Fix Released

Bug description:
  Information from zzelle in conversation with njohnston

  An overload was caused first by some neutron-servers crashing, and
  secondly by every l3-agent trying to perform a "full"
  process_services_sync. After we restarted every crashed neutron-server
  and purged the neutron queues, the RPC workers were still overloaded
  because of the full syncs.

  About 60 l3-agents, with one router per l3-agent.

  Key question: I don't understand why, during a full sync, an l3-agent
  requests all tenants with firewalls instead of requesting only its own
  tenants with firewalls.

  https://github.com/openstack/neutron-fwaas/blob/master/neutron_fwaas/services/firewall/agents/l3reference/firewall_l3_agent.py#L224
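A schematic of the scale concern (hypothetical code, not the linked firewall_l3_agent.py): fetching firewalls for all tenants during a full sync is proportional to the whole cloud, while fetching only the agent's own tenants stays proportional to the routers that agent hosts.

```python
def full_sync_all(firewalls_by_tenant):
    # What the report says happens: every tenant with a firewall.
    return [fw for fws in firewalls_by_tenant.values() for fw in fws]

def full_sync_scoped(firewalls_by_tenant, my_tenants):
    # What the reporter expects: only this agent's tenants.
    return [fw for t in my_tenants for fw in firewalls_by_tenant.get(t, [])]

fws = {'t1': ['fw1'], 't2': ['fw2'], 't3': ['fw3']}
assert full_sync_all(fws) == ['fw1', 'fw2', 'fw3']   # all 60 agents pay this
assert full_sync_scoped(fws, ['t2']) == ['fw2']      # per-agent cost only
```

With ~60 agents all performing the unscoped variant after an RPC failure, the RPC workers effectively serve the whole firewall dataset 60 times over, which matches the overload described above.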

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1618244/+subscriptions




[Yahoo-eng-team] [Bug 1914757] Re: [ovn] add ovn driver for security-group-logging

2024-02-19 Thread Lajos Katona
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1914757

Title:
  [ovn] add ovn driver for security-group-logging

Status in neutron:
  Fix Released

Bug description:
  
  The request is to have a log file where security group events are
  logged, to be consumed by the security department, like any other
  commercial firewall vendor provides.

  This is a follow up to:
  https://bugs.launchpad.net/neutron/+bug/1468366

  ml2/OVN has a functionality gap related to the support for security-group-logging:
  https://blueprints.launchpad.net/neutron/+spec/security-group-logging

  This work is also tracked under Bugzilla:
  https://bugzilla.redhat.com/show_bug.cgi?id=1619266

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1914757/+subscriptions




[Yahoo-eng-team] [Bug 1913621] Re: Permanent ARP entries not added to DVR qrouter when connected to two Networks

2024-02-19 Thread Lajos Katona
** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1913621

Title:
  Permanent ARP entries not added to DVR qrouter when connected to two
  Networks

Status in neutron:
  Fix Released

Bug description:
  Hi,
  I am running openstack ussuri with ovs and DVR routers.

  I'm facing a problem with communication between two networks connected
  to the same router. The issue is caused because there are no permanent
  ARP entries added to the qrouter when a new instance is created on one
  of the networks. This means that when traffic reaches the router, it
  does not know how to reach the destination MAC address of the new
  instance. Below is an example.

  I created two Networks each with its own subnet.
  NetworkA/SubnetA: 172.18.18.0/24
  NetworkB/SubnetB: 172.19.19.0/24

  I created one router and connected both networks to it.
  The qrouter has a port with IP 172.18.18.1 and another port with IP 172.19.19.1

  Then I created multiple instances on NetworkA, which were spawned on different computes.
  Here is the ARP table from the DVR router on one of the computes:
  root@compute004[SRV][PRD001][LAT]:~# ip netns exec qrouter-3fe791ef-8432-41c3-a4ac-28ae741b533f arp -a | grep 18.18
  ? (172.18.18.2) at fa:16:3e:13:7b:bd [ether] PERM on qr-e68fe2ed-2a
  ? (172.18.18.78) at fa:16:3e:66:bf:8b [ether] PERM on qr-e68fe2ed-2a
  ? (172.18.18.27) at fa:16:3e:85:bd:e2 [ether] PERM on qr-e68fe2ed-2a
  ? (172.18.18.161) at fa:16:3e:43:07:b2 [ether] PERM on qr-e68fe2ed-2a
  ? (172.18.18.66) at fa:16:3e:85:75:cb [ether] PERM on qr-e68fe2ed-2a
  ? (172.18.18.3) at fa:16:3e:7b:32:0d [ether] PERM on qr-e68fe2ed-2a
  ? (172.18.18.21) at fa:16:3e:05:c7:ef [ether] PERM on qr-e68fe2ed-2a
  ? (172.18.18.4) at fa:16:3e:02:3d:1a [ether] PERM on qr-e68fe2ed-2a

  The permanent ARPs exist for DHCP (.2, .3, .4), snat (.27) and 4 instances (.78, .161, .66, .21).
  No problem so far.
  Then I created an instance on NetworkB. When I check the ARP table, there are no permanent entries for my new instance.
  root@compute004[SRV][PRD001][LAT]:~# ip netns exec qrouter-3fe791ef-8432-41c3-a4ac-28ae741b533f arp -a | grep 19.19
  ? (172.19.19.3) at fa:16:3e:b4:16:3e [ether] PERM on qr-6d2d939d-1e
  ? (172.19.19.138) at fa:16:3e:fa:f7:f1 [ether] PERM on qr-6d2d939d-1e
  ? (172.19.19.4) at fa:16:3e:0c:84:53 [ether] PERM on qr-6d2d939d-1e
  ? (172.19.19.2) at fa:16:3e:e4:44:e3 [ether] PERM on qr-6d2d939d-1e

  The only entries are for DHCP (.2, .3, .4) and the SNAT (.138).
  My instance IP on NetworkB is 172.19.19.56.

  Then I added a new instance, but in NetworkA. The instance has IP 172.18.18.230.
  This time no permanent ARP entry is added! The original instances' ARP entries exist, but not one for the new instance.

  So now, if I add any new instances on either NetworkA or NetworkB, no new permanent ARP entry is added to the DVR qrouter. It is the same on all computes on which this qrouter exists.
  So it seems that as soon as there are instances on both networks connected to the same router, permanent ARP entries cease to be created.

  I don't believe this is normal, and it is affecting communication
  between both networks via the router. Can someone confirm this issue?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1913621/+subscriptions




[Yahoo-eng-team] [Bug 1913646] Re: DVR router ARP traffic broken for networks containing multiple subnets

2024-02-19 Thread Lajos Katona
** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1913646

Title:
  DVR router ARP traffic broken for networks containing multiple subnets

Status in neutron:
  Fix Released

Bug description:
  Hi,
  I am running openstack ussuri with ovs and DVR routers.

  When there are multiple subnets in one network, Neutron does not
  consider the possibility that the subnets could be connected to
  different routers. This is a problem when a DVR router is expecting to
  receive an ARP reply. In OVS br-int, table 3 contains only one rule
  per network which applies to traffic destined to the DVR MAC. This
  rule translates the DVR MAC to the MAC of the newest router on the
  network but does not take into consideration that a network could have
  multiple subnets connected to different routers.

  The use case where I am facing this issue is with manila. Manila
  defines one network object in the service project but each time a user
  creates a new "Share Network", the Manila service creates a new subnet
  within the network. So you can end up with many subnets and routers
  within a network.

  It is a bit confusing, so below are more details, taking my use case with manila as an example.
  Manila has a network called manila_service_network.
  In manila.conf a CIDR and mask are configured, and the subnets created by the manila service are allocated within that CIDR using the configured mask.

  On a user project I create NetworkX/SubnetX (172.20.20.0/24) and connect it to routerX. I also have instanceX on this network.
  Then I create a Share Network. This creates a subnet within network manila_service_network with IP 10.128.16.0/20.
  Manila_subnet1 (10.128.16.0/20) is connected to routerX, which is already connected to SubnetX (172.20.20.0/24).
  A ShareInstance is created on manila_subnet1 and has IP 10.128.19.189.
  It is important that the ShareInstance and InstanceX be located on different computes.

  Now communication between InstanceX and the ShareInstance should work
  but it does not. Here's why.

  InstanceX wants to communicate with the ShareInstance, so it sends a packet to its gateway, RouterX.
  RouterX needs to route the packet to the ShareInstance, but it does not have the MAC address in its ARP table.
  RouterX sends an ARP request -> ARP, Request who-has 10.128.19.189 tell 10.128.16.1, length 28
  RouterX never receives an ARP reply.
  I followed the flows in br-int and br-tun.

  Since traffic is coming from a DVR router, OVS br-tun changes the router's source MAC to the compute's DVR MAC. fa:16:3e:80:4c:3a is the MAC of the router with IP 10.128.16.1:
   cookie=0x7027c9402a453a34, duration=411942.542s, table=1, n_packets=434, n_bytes=33812, idle_age=589, hard_age=65534, priority=1,dl_vlan=31,dl_src=fa:16:3e:80:4c:3a actions=mod_dl_src:fa:16:3f:67:83:30,resubmit(,2)

  Then the packet reaches the ARP responder, table 21. There is an entry in table 21 for the ShareInstance MAC, so it modifies the packet and sends it back to br-int.
  cookie=0x7027c9402a453a34, duration=11769.612s, table=21, n_packets=23, n_bytes=966, idle_age=2, priority=1,arp,dl_vlan=31,arp_tpa=10.128.19.189 actions=load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0xfa163ee3273f->NXM_NX_ARP_SHA[],load:0xa8013bd->NXM_OF_ARP_SPA[],move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],mod_dl_src:fa:16:3e:e3:27:3f,IN_PORT

  But remember that the router source MAC became the DVR MAC and,
  because of table 21, it is now the destination MAC.

  This br-int rule sends us to table 3 because the destination MAC is the DVR MAC.
  cookie=0xe728ac45412eb352, duration=4676042.487s, table=0, n_packets=10695122, n_bytes=449195124, idle_age=0, hard_age=65534, priority=5,in_port=2,dl_dst=fa:16:3f:67:83:30 actions=resubmit(,3)

  In table 3 there is a rule that changes the destination MAC from the DVR MAC to the router MAC based on the VLAN (network). In our case, vlan 31.
  cookie=0xe728ac45412eb352, duration=23642.517s, table=3, n_packets=10626537, n_bytes=446314554, idle_age=0, priority=5,dl_vlan=31,dl_dst=fa:16:3f:67:83:30 actions=mod_dl_dst:fa:16:3e:4d:d0:f9,strip_vlan,output:725

  You can see that the mod_dl_dst MAC (fa:16:3e:4d:d0:f9) is not the original source MAC of my router (fa:16:3e:80:4c:3a).
  Why?
  Because there are multiple subnets in the network manila_service_network, each connected to a different router.
  fa:16:3e:4d:d0:f9 belongs to a router connected to Manila_subnet2 (10.128.48.0/20), which is within manila_service_network.
  This means the ARP reply is sent to the wrong router.
  All the subnets in manila_service_network use vlan 31, so having one rule in table 3 for vlan 31 causes all traffic to be sent to one router (usually the
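The per-network clobbering described above can be shown with a toy mapping (hypothetical code, not actual OVS flow programming): keying the DVR-MAC-to-router-MAC translation only by VLAN lets the newest router on the network silently overwrite every other router's entry.

```python
# One rule per network: the VLAN is the only key, so each new router
# on the shared manila_service_network clobbers the previous mapping.
table3_per_network = {}

def add_rule_per_network(vlan, router_mac):
    table3_per_network[vlan] = router_mac

add_rule_per_network(31, 'fa:16:3e:80:4c:3a')  # routerX (10.128.16.1)
add_rule_per_network(31, 'fa:16:3e:4d:d0:f9')  # newer router, same vlan 31
# routerX's mapping is gone -> its ARP replies reach the wrong router.
assert table3_per_network[31] == 'fa:16:3e:4d:d0:f9'

# Keying by (vlan, subnet gateway) keeps every router reachable.
table3_per_subnet = {
    (31, '10.128.16.1'): 'fa:16:3e:80:4c:3a',
    (31, '10.128.48.1'): 'fa:16:3e:4d:d0:f9',
}
assert table3_per_subnet[(31, '10.128.16.1')] == 'fa:16:3e:80:4c:3a'
```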

[Yahoo-eng-team] [Bug 1626642] Re: Cleanup and add more UT for FWaaS v2 plugin

2023-11-15 Thread Lajos Katona
We don't need to track such improvements as bugs; any test coverage
improvements are welcome.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1626642

Title:
  Cleanup and add more UT for FWaaS v2 plugin

Status in neutron:
  Invalid

Bug description:
  Add more UT without overlapping the db UT. Add a helper function, or
  move into setUp much of the common code around creating a router and
  subnet, attaching them, etc., to eliminate redundant code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1626642/+subscriptions




[Yahoo-eng-team] [Bug 1612050] Re: Need more data added for RBAC policy notifications

2023-11-15 Thread Lajos Katona
We can close this as far as I can see; the notifications have been much
more detailed since then. For rbac create:

INFO oslo.messaging.notification.rbac_policy.create.end [None
req-b98f02a2-b65c-4331-ab51-966186dc7fd0 None admin] {"message_id":
"15f5fe6b-5254-4b67-9303-6250119376d1", "publisher_id":
"network.newtaas", "event_type": "rbac_policy.create.end", "priority":
"INFO", "payload": {"rbac_policy": {"id":
"4f11ca6e-9c98-4dcf-8797-cd8ce13103d0", "project_id":
"6fa72026f37a480d8727409aa7b3f7b6", "action": "access_as_shared",
"object_id": "5c93e716-b195-4f91-915a-7120bcddec39", "target_tenant":
"*", "object_type": "network", "tenant_id":
"6fa72026f37a480d8727409aa7b3f7b6"}}, "timestamp": "2023-11-15
16:19:36.764328"}


and for rbac delete:

INFO oslo.messaging.notification.rbac_policy.delete.end [None
req-6a29b809-9c27-4577-86e9-9a486178b49d None admin] {"message_id":
"1a8ff78c-4be5-4582-82b4-0518434053b7", "publisher_id":
"network.newtaas", "event_type": "rbac_policy.delete.end",
"priority": "INFO", "payload": {"rbac_policy_id":
"4f11ca6e-9c98-4dcf-8797-cd8ce13103d0", "rbac_policy": {"id":
"4f11ca6e-9c98-4dcf-8797-cd8ce13103d0", "project_id":
"6fa72026f37a480d8727409aa7b3f7b6", "action": "access_as_shared",
"object_id": "5c93e716-b195-4f91-915a-7120bcddec39", "target_tenant":
"*", "object_type": "network", "tenant_id":
"6fa72026f37a480d8727409aa7b3f7b6"}}, "timestamp": "2023-11-15
16:20:30.891196"}

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1612050

Title:
  Need more data added for RBAC policy notifications

Status in neutron:
  Fix Released

Bug description:
  For the Searchlight project, we are receiving notifications for the
  RBAC policy commands.

  rbac-create
  rbac-delete

  The payload for rbac_policy.create.end is complete and allows
  Searchlight to update our state to reflect the policy changes.

  The payload for rbac_policy.delete.end is not as complete. The payload
  we receive is:

  {
  "event_type": "rbac_policy.delete.end",
  "payload":
  { "rbac_policy_id": "d7491be9-ee3d-40d7-9880-0ce82c7c12f6" }

  }

  Since the RBAC policy is being deleted, we cannot query the details of
  the policy through the Neutron API using the policy ID. Doing so
  results in a race condition where the majority of the time the policy
  has already been deleted.

  This means we need to store the details of the policy upon
  rbac_policy.create.end time, which requires extraneous state in
  Searchlight.

  We would like a change to the rbac_policy.delete.end payload to
  include all policy's details. Mirroring the same information provided
  by the rbac_policy.create.end notification:

  {
  "event_type": "rbac_policy.delete.end",
  "payload":
  { "target_tenant": "admin", "tenant_id": "c4b424b17cc04cefa7211b40c5c893c2", "object_type": "network", "object_id": "64f00d1c-a6b6-4c00-a800-10eb9360a976", "action": "access_as_shared", "id": "d7491be9-ee3d-40d7-9880-0ce82c7c12f6" }

  }

  At a bare minimum, we would need "tenant_id", "object_id" and "id" to
  be returned.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1612050/+subscriptions




[Yahoo-eng-team] [Bug 2041609] [NEW] FIP update removes QoS policy

2023-10-27 Thread Lajos Katona
Public bug reported:

Updating a FIP that has a QoS policy (even just updating the
description) overwrites the QoS policy id with None.

$ openstack floating ip create public --qos-policy foo_qos_policy_0
+-+--+
| Field   | Value|
+-+--+
| created_at  | 2023-10-27T10:00:51Z |
| description |  |
.
| id  | bd2639aa-34a2-4d81-b655-24ca2106cac4 |

| qos_policy_id   | 6396b46c-0a6f-4dd0-a916-e1607573a614 |
...
+-+--+
$ openstack floating ip set bd2639aa-34a2-4d81-b655-24ca2106cac4 --description 
"my floatin ip with QoS"

$ openstack floating ip show bd2639aa-34a2-4d81-b655-24ca2106cac4
+-+--+
| Field   | Value|
+-+--+
| created_at  | 2023-10-27T10:00:51Z |
| description |  |
.
| id  | bd2639aa-34a2-4d81-b655-24ca2106cac4 |

| qos_policy_id   | None |
...
+-+--+

The issue is on master and seems to have been introduced by this patch [1]:
[1]: https://review.opendev.org/c/openstack/neutron/+/833667

It looks like an extra condition (as existed before [1]) is necessary here:
https://opendev.org/openstack/neutron/src/commit/53f4fd6b9fcb4f8ba907bfbace342bf902fc55f7/neutron/db/l3_db.py#L1610-L1611
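A minimal sketch of the kind of guard that seems to be missing: only touch
the QoS policy binding when the update request actually carries a
qos_policy_id key. The helper and field names below are illustrative, not
Neutron's real l3_db internals.

```python
# Sketch of the missing guard: update the QoS binding only when the
# request body explicitly contains 'qos_policy_id'. Without the
# membership check, a description-only update would implicitly reset
# qos_policy_id to None. Names are illustrative, not Neutron code.

def update_floatingip(fip, request_body):
    """Apply an update request dict to a floating-IP dict in place."""
    if 'qos_policy_id' in request_body:
        # Only here may the binding change (including an explicit None
        # to detach the policy).
        fip['qos_policy_id'] = request_body['qos_policy_id']
    for key in ('description', 'port_id'):
        if key in request_body:
            fip[key] = request_body[key]
    return fip
```

With this check, an update that only changes the description leaves the
existing qos_policy_id untouched, while an explicit
`{'qos_policy_id': None}` still detaches the policy.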

** Affects: neutron
 Importance: Undecided
 Assignee: Lajos Katona (lajos-katona)
 Status: In Progress


** Tags: low-hanging-fruit qos

** Changed in: neutron
 Assignee: (unassigned) => Lajos Katona (lajos-katona)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2041609

Title:
  FIP update removes QoS policy

Status in neutron:
  In Progress

Bug description:
  Updating a FIP that has a QoS policy (even just updating the
  description) overwrites the QoS policy id with None.

  $ openstack floating ip create public --qos-policy foo_qos_policy_0
  +-+--+
  | Field   | Value|
  +-+--+
  | created_at  | 2023-10-27T10:00:51Z |
  | description |  |
  .
  | id  | bd2639aa-34a2-4d81-b655-24ca2106cac4 |
  
  | qos_policy_id   | 6396b46c-0a6f-4dd0-a916-e1607573a614 |
  ...
  +-+--+
  $ openstack floating ip set bd2639aa-34a2-4d81-b655-24ca2106cac4 
--description "my floatin ip with QoS"

  $ openstack floating ip show bd2639aa-34a2-4d81-b655-24ca2106cac4
  +-+--+
  | Field   | Value|
  +-+--+
  | created_at  | 2023-10-27T10:00:51Z |
  | description |  |
  .
  | id  | bd2639aa-34a2-4d81-b655-24ca2106cac4 |
  
  | qos_policy_id   | None |
  ...
  +-+--+

  The issue is on master and seems to have been introduced by this patch [1]:
  [1]: https://review.opendev.org/c/openstack/neutron/+/833667

  It looks like an extra condition (as existed before [1]) is necessary here:
  
https://opendev.org/openstack/neutron/src/commit/53f4fd6b9fcb4f8ba907bfbace342bf902fc55f7/neutron/db/l3_db.py#L1610-L1611

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2041609/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2023634] [NEW] Gate: Functional test_virtual_port_host_update fails recently

2023-06-13 Thread Lajos Katona
Public bug reported:

neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovsdb_monitor.TestNBDbMonitorOverTcp.test_virtual_port_host_update
fails quite often recently; an example failure (see [1]):

ft1.2: neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovsdb_monitor.TestNBDbMonitorOverTcp.test_virtual_port_host_update
testtools.testresult.real._StringException:
Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", line 178, in func
    return f(self, *args, **kwargs)
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", line 178, in func
    return f(self, *args, **kwargs)
  File "/usr/lib/python3.10/unittest/mock.py", line 1369, in patched
    return func(*newargs, **newkeywargs)
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/plugins/ml2/drivers/ovn/mech_driver/ovsdb/test_ovsdb_monitor.py", line 403, in test_virtual_port_host_update
    mock_update_vip_host.assert_called_once_with(vip['id'], None)
  File "/usr/lib/python3.10/unittest/mock.py", line 930, in assert_called_once_with
    raise AssertionError(msg)
AssertionError: Expected 'update_virtual_port_host' to be called once. Called 2 times.
Calls: [call('18c414c7-f897-4b39-bdb4-d35953df1d2f', None),
        call('18c414c7-f897-4b39-bdb4-d35953df1d2f', None)].
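The failure mode is easy to reproduce with a minimal unittest.mock
snippet: assert_called_once_with raises as soon as the mock has been
invoked twice, even when both calls used identical arguments.

```python
# Minimal reproduction of the assertion seen in the traceback above:
# assert_called_once_with() fails when the mock was invoked twice,
# even if both calls used exactly the same arguments.
from unittest import mock

m = mock.Mock(name='update_virtual_port_host')
m('18c414c7-f897-4b39-bdb4-d35953df1d2f', None)
m('18c414c7-f897-4b39-bdb4-d35953df1d2f', None)

try:
    m.assert_called_once_with('18c414c7-f897-4b39-bdb4-d35953df1d2f', None)
    raised = False
except AssertionError:
    raised = True  # raised: the mock's call_count is 2, not 1
```

Depending on the root cause, a fix could either avoid the duplicate
update on the Neutron side or make the test tolerant of idempotent
repeated calls.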

Opensearch link:
https://opensearch.logs.openstack.org/_dashboards/app/discover/?security_tenant=global#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-30d,to:now))&_a=(columns:!(build_status,build_name),filters:!(),index:'94869730-aea8-11ec-9e6a-83741af3fdcd',interval:auto,query:(language:kuery,query:'message:%22Expected%20update_virtual_port_host%20to%20be%20called%22'),sort:!())

[1]:
https://872de5c590dd926ff0db-30e72828a36544d0c7466f2989d78bfe.ssl.cf1.rackcdn.com/885341/2/check/neutron-
functional-with-uwsgi/3079c44/testr_results.html

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2023634

Title:
  Gate: Functional test_virtual_port_host_update fails recently

Status in neutron:
  New

Bug description:
  
neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovsdb_monitor.TestNBDbMonitorOverTcp.test_virtual_port_host_update
  fails quite often recently; an example failure (see [1]):

  ft1.2: neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovsdb_monitor.TestNBDbMonitorOverTcp.test_virtual_port_host_update
  testtools.testresult.real._StringException:
  Traceback (most recent call last):
    File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", line 178, in func
      return f(self, *args, **kwargs)
    File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", line 178, in func
      return f(self, *args, **kwargs)
    File "/usr/lib/python3.10/unittest/mock.py", line 1369, in patched
      return func(*newargs, **newkeywargs)
    File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/plugins/ml2/drivers/ovn/mech_driver/ovsdb/test_ovsdb_monitor.py", line 403, in test_virtual_port_host_update
      mock_update_vip_host.assert_called_once_with(vip['id'], None)
    File "/usr/lib/python3.10/unittest/mock.py", line 930, in assert_called_once_with
      raise AssertionError(msg)
  AssertionError: Expected 'update_virtual_port_host' to be called once. Called 2 times.
  Calls: [call('18c414c7-f897-4b39-bdb4-d35953df1d2f', None),
          call('18c414c7-f897-4b39-bdb4-d35953df1d2f', None)].

  Opensearch link:
  
https://opensearch.logs.openstack.org/_dashboards/app/discover/?security_tenant=global#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-30d,to:now))&_a=(columns:!(build_status,build_name),filters:!(),index:'94869730-aea8-11ec-9e6a-83741af3fdcd',interval:auto,query:(language:kuery,query:'message:%22Expected%20update_virtual_port_host%20to%20be%20called%22'),sort:!())

  [1]:
  
https://872de5c590dd926ff0db-30e72828a36544d0c7466f2989d78bfe.ssl.cf1.rackcdn.com/885341/2/check/neutron-
  functional-with-uwsgi/3079c44/testr_results.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2023634/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2017023] [NEW] Tempest: remove test duplication for Compute legacy networking API and Neutron API calls

2023-04-19 Thread Lajos Katona
Public bug reported:

In Tempest there are many tests under tempest.api.compute which call the
Nova legacy API to create security groups, FIPs and similar resources.
These APIs are legacy in Nova, and the calls are only proxied toward Neutron
(see [1] as an example).
There are similar tests under tempest.api.network and under tempest.scenario.
I suggest removing these calls, checking whether any redundant tests can be
removed, or moving them to the scenario group and changing them to use the
Neutron API.


[1]: 
https://opendev.org/openstack/nova/src/branch/master/nova/network/security_group_api.py#L370-L401

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: tempest
 Importance: Undecided
 Status: New

** Also affects: tempest
   Importance: Undecided
   Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2017023

Title:
  Tempest: remove test duplication for Compute legacy networking API and
  Neutron API calls

Status in neutron:
  New
Status in OpenStack Compute (nova):
  New
Status in tempest:
  New

Bug description:
  In Tempest there are many tests under tempest.api.compute which call the
  Nova legacy API to create security groups, FIPs and similar resources.
  These APIs are legacy in Nova, and the calls are only proxied toward
  Neutron (see [1] as an example).
  There are similar tests under tempest.api.network and under tempest.scenario.
  I suggest removing these calls, checking whether any redundant tests can be
  removed, or moving them to the scenario group and changing them to use the
  Neutron API.

  
  [1]: 
https://opendev.org/openstack/nova/src/branch/master/nova/network/security_group_api.py#L370-L401

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2017023/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2015471] [NEW] [RFE] Add ERSPAN for tap-as-a-service with OVS and OVN

2023-04-06 Thread Lajos Katona
Public bug reported:

ERSPAN (Encapsulated Remote Switch Port Analyzer) is a widely used tool
for analysing the traffic of switch ports. The concept was first used
widely in Cisco switches.

The ERSPAN protocol is used in 2 versions: version 1 (Type II) and
version 2 (Type III); Type I was not widely used. Version 1 adds an
extra ERSPAN header over GRE, and a similar but more flexible ERSPAN
header is used for version 2 (see [1]).

Since OVS 2.10 it is possible to use ERSPAN with OVS (see [2], and [3]) both 
ERSPAN v1 and v2.
Since OVN v22.12.0 it is possible to create mirrors with OVN (see [4], I can't 
find it in the release-notes or in any OVN docs, I suppose that is my lack of 
experience with OVN).
NOTE: OVN only supports ERSPAN v1, and with OVN it is also possible to create a 
clean GRE type mirror.

There are a few things to consider; for now I only raise the question of
what the API should look like.

The current TAAS API deals with 2 high level objects:
* The Tap Service identifies the destination of the mirroring, which is a 
Neutron port (see [5])
* The Tap Flow identifies the source of the mirroring, which is again a Neutron 
port (see [6]). There is a N:1 relationship between tap-flows and tap-services, 
so multiple tap-flows can be the source of one tap-service.

With ERSPAN this model is not that useful:

* one way forward can be to keep the current API with extra fields for both Tap 
Service and for Tap flow:
Tap Service: new field that mark the tap-service as ERSPAN destination port (in 
this case the port field should not be obligatory)
Tap Flow: new fields: erspan_dst_ip and erspan_idx.

* Another option is to encode this in the Tap Service, so we could keep
at least the Tap Flow unchanged. This would mean that for "legacy"
mirroring with OVS or SRIOV the API behaves differently, or is used
differently.

* Yet another option is to introduce a new API for ERSPAN to make it as
simple as possible.
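As an illustration of the first option, a tap-flow create request could
carry the two proposed fields. This is a hypothetical payload for this
RFE; the erspan_dst_ip and erspan_idx fields (and the example ids) are
not part of any released Neutron API.

```python
# Hypothetical tap-flow create payload for the first option above:
# the existing tap-flow fields are kept, and two ERSPAN-specific
# fields are added. Field names follow this RFE's proposal only.
import json

tap_flow_request = {
    "tap_flow": {
        "name": "erspan-flow-1",
        "tap_service_id": "0e82c7c1-2f60-4d81-b655-24ca2106cac4",  # example id
        "source_port": "512e6c8e-3829-4bbd-8731-c03e5d7f7639",     # example id
        "direction": "BOTH",
        # Proposed new fields for ERSPAN mirroring:
        "erspan_dst_ip": "192.0.2.10",
        "erspan_idx": 1,
    }
}

body = json.dumps(tap_flow_request)
```

Keeping the fields on the Tap Flow (rather than the Tap Service) matches
the N:1 flow-to-service relationship described above, since each source
could then target a different ERSPAN destination.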

[1]: https://datatracker.ietf.org/doc/id/draft-foschiano-erspan-02.txt => Note 
this is a draft, and I think ERSPAN was not standardized.
[2]: https://docs.openvswitch.org/en/latest/faq/configuration/
[3]: http://www.openvswitch.org//support/dist-docs/ovs-fields.7.txt
[4]: 
https://github.com/ovn-org/ovn/commit/323f978cbf4599568fcca9edec8ed53c076d2664
[5]: https://docs.openstack.org/api-ref/network/v2/index.html#create-tap-service
[6]: https://docs.openstack.org/api-ref/network/v2/index.html#create-tap-flow

** Affects: neutron
 Importance: Wishlist
 Status: New


** Tags: rfe

** Changed in: neutron
   Importance: Undecided => Wishlist

** Description changed:

  ERSPAN (Encapsulated Remote Switch Port Analyzer) is a videly used tool
  to analyse traffic of switch ports. The whole concept first was used
  widely in Cisco switches.
  
  ERSPAN protocol is used in 2 versions, version 1 (Type II), and version
  2 (Type III) (Note: Type I was not widely used) ERSPAN version 2 adds an
  extra ERSPAN header over GRE, and a similar but more flexible extra
  ERSPAN header is used for version 3 (see [1]).
  
  Since OVS 2.10 it is possible to use ERSPAN with OVS (see [2], and [3]) both 
ERSPAN v1 and v2.
- Since OVN v22.12.0 it is possible to create mirrors with OVN (see [4], I 
can't find it in the release-notes or in any OVN docs, I suppose that is my 
lack of experience with OVN). 
+ Since OVN v22.12.0 it is possible to create mirrors with OVN (see [4], I 
can't find it in the release-notes or in any OVN docs, I suppose that is my 
lack of experience with OVN).
  NOTE: OVN only supports ERSPAN v1, and with OVN it is also possible to create 
a clean GRE type mirror.
  
  There's a few things to consider, I add here now only the question of
  how the API should look like.
  
  The current TAAS API deals with 2 high level objects:
  * The Tap Service identifies the destination of the mirroring, which is a 
Neutron port (see [5])
  * The Tap Flow identifies the source of the mirroring, which is again a 
Neutron port (see [6]). There is a N:1 relationship between tap-flows and 
tap-services, so multiple tap-flows can be the source of one tap-service.
  
  With ERSPAN this model is not that useful:
  
  * one way forward can be to keep the current API with extra fields for both 
Tap Service and for Tap flow:
  Tap Service: new field that mark the tap-service as ERSPAN destination port 
(in this case the port field should not be obligatory)
  Tap Flow: new fields: erspan_dst_ip and erspan_idx.
  
  * Another option is to encode this in the Tap Service and we could keep
  at least the Tap Flow unchanged. This would mean that for "legacy"
  mirroring with OVS or SRIOV the API behaves differently, or used
  differenltly.
  
  * Yet another option is to introduce a new API for ERSPAN to make as
  simple as possible.
  
- 
  [1]: https://datatracker.ietf.org/doc/id/draft-foschiano-erspan-02.txt => 
Note this is a draft, and I think ERSPAN was not standardized.
  [2]: 

[Yahoo-eng-team] [Bug 1597132] Re: FWaaS: Create Firewall fails with "NoSuchOptError: no such option: router_distributed"

2023-01-31 Thread Lajos Katona
** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1597132

Title:
  FWaaS: Create Firewall fails with "NoSuchOptError: no such option:
  router_distributed"

Status in neutron:
  Won't Fix

Bug description:
  This is seen in a setup where the stock L3 plugin
  (neutron.services.l3_router.l3_router_plugin:L3RouterPlugin) is not
  configured, but instead, a different L3 plugin is used. The create
  firewall operation fails with the following exception:

  2016-06-28 15:24:46.940 12176 DEBUG routes.middleware 
[req-d159be6a-85cd-44a7-a44e-d50168022948 80852c691f3448a0b536c7f573a53d02 
917cc98b9116461b9c36ba7aa3a7cdc7 - - -] Matched POST /fw/firewalls.json 
__call__ /usr/lib/python2.7/site-packages/routes/middleware.py:100
  2016-06-28 15:24:46.941 12176 DEBUG routes.middleware 
[req-d159be6a-85cd-44a7-a44e-d50168022948 80852c691f3448a0b536c7f573a53d02 
917cc98b9116461b9c36ba7aa3a7cdc7 - - -] Route path: '/fw/firewalls.:(format)', 
defaults: {'action': u'create', 'controller': >} __call__ 
/usr/lib/python2.7/site-packages/routes/middleware.py:102
  2016-06-28 15:24:46.941 12176 DEBUG routes.middleware 
[req-d159be6a-85cd-44a7-a44e-d50168022948 80852c691f3448a0b536c7f573a53d02 
917cc98b9116461b9c36ba7aa3a7cdc7 - - -] Match dict: {'action': u'create', 
'controller': >, 
'format': u'json'} __call__ 
/usr/lib/python2.7/site-packages/routes/middleware.py:103
  2016-06-28 15:24:46.956 12176 DEBUG neutron.api.v2.base 
[req-d159be6a-85cd-44a7-a44e-d50168022948 80852c691f3448a0b536c7f573a53d02 
917cc98b9116461b9c36ba7aa3a7cdc7 - - -] Request body: {u'firewall': {u'shared': 
False, u'description': u"{'network_function_id': 
'b875efff-8fd5-4a9a-92e6-19a74c528f7f'}", u'firewall_policy_id': 
u'35d8b1f9-c0aa-478d-806b-7904e80f13fc', u'name': u'FWaaS-provider', 
u'admin_state_up': True}} prepare_request_body 
/usr/lib/python2.7/site-packages/neutron/api/v2/base.py:656
  2016-06-28 15:24:46.957 12176 DEBUG neutron.api.v2.base 
[req-d159be6a-85cd-44a7-a44e-d50168022948 80852c691f3448a0b536c7f573a53d02 
917cc98b9116461b9c36ba7aa3a7cdc7 - - -] Unknown quota resources ['firewall']. 
_create /usr/lib/python2.7/site-packages/neutron/api/v2/base.py:458
  2016-06-28 15:24:46.957 12176 DEBUG 
neutron_fwaas.services.firewall.fwaas_plugin 
[req-d159be6a-85cd-44a7-a44e-d50168022948 80852c691f3448a0b536c7f573a53d02 
917cc98b9116461b9c36ba7aa3a7cdc7 - - -] create_firewall() called 
create_firewall 
/usr/lib/python2.7/site-packages/neutron_fwaas/services/firewall/fwaas_plugin.py:230
  2016-06-28 15:24:46.958 12176 DEBUG neutron_fwaas.db.firewall.firewall_db 
[req-d159be6a-85cd-44a7-a44e-d50168022948 80852c691f3448a0b536c7f573a53d02 
917cc98b9116461b9c36ba7aa3a7cdc7 - - -] create_firewall() called 
create_firewall 
/usr/lib/python2.7/site-packages/neutron_fwaas/db/firewall/firewall_db.py:302
  2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource 
[req-d159be6a-85cd-44a7-a44e-d50168022948 80852c691f3448a0b536c7f573a53d02 
917cc98b9116461b9c36ba7aa3a7cdc7 - - -] create failed
  2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
  2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 83, in 
resource
  2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
  2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/api/v2/base.py", line 410, in create
  2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource return 
self._create(request, body, **kwargs)
  2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 146, in wrapper
  2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 204, in __exit__
  2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 136, in wrapper
  2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource return 
f(*args, **kwargs)
  2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/api/v2/base.py", line 521, in _create
  2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource obj = 
do_create(body)
  2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/api/v2/base.py", line 503, in 
do_create
  2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource 

[Yahoo-eng-team] [Bug 1598078] Re: dnsmasq replies are incorrect after multiple simultaneously reloads

2023-01-31 Thread Lajos Katona
As this bug is inactive for years I close it now, feel free to reopen it

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1598078

Title:
  dnsmasq replies are incorrect after multiple simultaneously reloads

Status in neutron:
  Won't Fix

Bug description:
  When booting a lot of instances with a single request (nova boot
  --min-count 90), some instances do not receive a DHCP reply.
  After investigation we found that the DHCP server answers some requests
  to the correct address and others to broadcast.

  Packets captured by tcpdump:
  13:37:44.298533 3c:fd:fe:9c:62:c4 > ff:ff:ff:ff:ff:ff, ethertype IPv4 
(0x0800), length 590: (tos 0x0, ttl 20, id 0, offset 0, flags [none], proto UDP 
(17), length 576)
 0.0.0.0.68 > 255.255.255.255.67: [udp sum ok] BOOTP/DHCP, Request from 
3c:fd:fe:9c:62:c4, length 548, xid 0xfe9c62c4, Flags [Broadcast] (0x8000)
Client-Ethernet-Address 3c:fd:fe:9c:62:c4
Vendor-rfc1048 Extensions
  Magic Cookie 0x63825363
  DHCP-Message Option 53, length 1: Discover
  Parameter-Request Option 55, length 36:
Subnet-Mask, Time-Zone, Default-Gateway, Time-Server
IEN-Name-Server, Domain-Name-Server, RL, Hostname
BS, Domain-Name, SS, RP
EP, RSZ, TTL, BR
YD, YS, NTP, Vendor-Option
Requested-IP, Lease-Time, Server-ID, RN
RB, Vendor-Class, TFTP, BF
Option 128, Option 129, Option 130, Option 131
Option 132, Option 133, Option 134, Option 135
  MSZ Option 57, length 2: 1260
  GUID Option 97, length 17: 
0.55.49.57.48.54.49.85.83.69.53.51.55.87.78.89.54
  ARCH Option 93, length 2: 0
  NDI Option 94, length 3: 1.2.1
  Vendor-Class Option 60, length 32: "PXEClient:Arch:0:UNDI:002001"
  END Option 255, length 0
  PAD Option 0, length 0, occurs 200
  13:37:44.298819 fa:16:3e:f0:b1:23 > ff:ff:ff:ff:ff:ff, ethertype IPv4 
(0x0800), length 401: (tos 0xc0, ttl 64, id 30122, offset 0, flags [none], 
proto UDP (17), length 387)
 10.51.1.1.67 > 255.255.255.255.68: [udp sum ok] BOOTP/DHCP, Reply, length 
359, xid 0xfe9c62c4, Flags [Broadcast] (0x8000)
Your-IP 10.51.5.125
Server-IP 10.51.0.4
Client-Ethernet-Address 3c:fd:fe:9c:62:c4
Vendor-rfc1048 Extensions
  Magic Cookie 0x63825363
  DHCP-Message Option 53, length 1: Offer
  Server-ID Option 54, length 4: 10.51.1.1
  Lease-Time Option 51, length 4: 600
  RN Option 58, length 4: 300
  RB Option 59, length 4: 525
  Subnet-Mask Option 1, length 4: 255.255.0.0
  BR Option 28, length 4: 10.51.255.255
  Domain-Name Option 15, length 14: "openstacklocal"
  Hostname Option 12, length 16: "host-10-51-5-125"
  TFTP Option 66, length 10: "10.51.0.4^@"
  BF Option 67, length 11: "pxelinux.0^@"
  Default-Gateway Option 3, length

  After a restart of neutron-dhcp-agent this issue is gone.
  It looks like, during a burst of port-create and port-update operations,
  neutron-dhcp-agent sends the HUP reload signal to dnsmasq too frequently.
  dnsmasq clears its cache and re-reads its files asynchronously on the
  signal, which causes errors in the loaded data.
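  One way to mitigate such signal bursts is to coalesce reload requests
  so that at most one HUP is sent once the burst settles. The sketch
  below is a hypothetical helper for illustration, not the actual
  neutron-dhcp-agent logic.

```python
# Coalesce rapid reload requests: many request_reload() calls within
# `interval` seconds result in a single send_hup() call after things
# settle. Illustrative sketch only, not the real agent code.
import threading

class ReloadCoalescer:
    def __init__(self, send_hup, interval=1.0):
        self._send_hup = send_hup    # callable that signals dnsmasq
        self._interval = interval    # quiet period before firing
        self._timer = None
        self._lock = threading.Lock()

    def request_reload(self):
        with self._lock:
            # Cancel any pending reload and re-arm the timer, so only
            # the last request in a burst actually fires.
            if self._timer is not None:
                self._timer.cancel()
            self._timer = threading.Timer(self._interval, self._send_hup)
            self._timer.start()
```

  With this, a burst of port-create/port-update events would translate
  into a single dnsmasq reload instead of one HUP per event.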

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1598078/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1598735] Re: subnetpool address scope change event should include the old address scope id

2023-01-31 Thread Lajos Katona
** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1598735

Title:
  subnetpool address scope change event should include the old address
  scope id

Status in neutron:
  Won't Fix

Bug description:
  in neutron.db.db_base_plugin_v2, 
  def update_subnetpool(self, context, id, subnetpool):
  
  if address_scope_changed:
  # Notify about the update of subnetpool's address scope
  kwargs = {'context': context, 'subnetpool_id': id}
  registry.notify(resources.SUBNETPOOL_ADDRESS_SCOPE,
  events.AFTER_UPDATE,
  self.update_subnetpool,
  **kwargs)

  These kwargs include ONLY subnetpool_id; in some cases we want to
  know the subnetpool's old address scope id.

  Here is the use case:
  To develop BGP VPN in neutron-dynamic-routing, each bgpvpn is associated
  with an address scope, and bgpvpn routes only include the subnets of the
  same address scope.
  If the subnetpool changes its address scope, the bgpvpns associated with
  the old address scope should delete their routes, and the bgpvpns
  associated with the new address scope should add new routes.

  If this event does not include the old address_scope_id, bgpvpn cannot
  delete the old routes.
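  The requested change amounts to carrying the old scope id in the event
  kwargs, roughly as below. The `old_address_scope_id` field name is this
  report's suggestion, and the helper is illustrative, not existing
  Neutron code.

```python
# Sketch of the notification kwargs this report asks for: alongside
# the subnetpool id, include the address scope id the pool had before
# the update, so listeners can clean up state tied to the old scope.
# Names are illustrative, not Neutron's actual code.

def build_scope_change_kwargs(context, subnetpool_id,
                              old_scope_id, new_scope_id):
    return {
        'context': context,
        'subnetpool_id': subnetpool_id,
        # The missing piece: without this, a listener such as the
        # bgpvpn code cannot tell which scope the pool left.
        'old_address_scope_id': old_scope_id,
        'address_scope_id': new_scope_id,
    }
```

  A bgpvpn listener could then delete routes keyed on
  old_address_scope_id and add routes for address_scope_id in one pass.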

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1598735/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1605264] Re: Return data lacks gw_port_id when setting router gateway port.

2023-01-31 Thread Lajos Katona
This bug is inactive for years, I close it now, please reopen it if you
think the issue can be fixed in Neutron

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1605264

Title:
  Return data lacks gw_port_id when setting router gateway port.

Status in neutron:
  Won't Fix

Bug description:
  Return data lacks gw_port_id when setting the router gateway port.
  This column is defined in the db code, but missing from the API
  extension attributes file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1605264/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1606229] Re: vif_port_id of ironic port is not updating after neutron port-delete

2023-01-31 Thread Lajos Katona
** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1606229

Title:
  vif_port_id of ironic port is not updating after neutron port-delete

Status in Ironic:
  Won't Fix
Status in neutron:
  Won't Fix

Bug description:
  Steps to reproduce:
  1. Get list of attached ports of instance:
  nova interface-list 42dd8b8b-b2bc-420e-96b6-958e9295b2d4
  
++--+--+---+---+
  | Port State | Port ID  | Net ID  
 | IP addresses  | MAC Addr 
 |
  
++--+--+---+---+
  | ACTIVE | 512e6c8e-3829-4bbd-8731-c03e5d7f7639 | 
ccd0fd43-9cc3-4544-b17c-dfacd8fa4d14 | 
10.1.0.6,fdea:fd32:11ff:0:f816:3eff:fed1:8a7c | 52:54:00:85:19:89 |
  
++--+--+---+---+
  2. Show ironic port. it has vif_port_id in extra with id of neutron port:
  ironic port-show 735fcaf5-145d-4125-8701-365c58c6b796
  
+---+---+
  | Property  | Value   
  |
  
+---+---+
  | address   | 52:54:00:85:19:89   
  |
  | created_at| 2016-07-20T13:15:23+00:00   
  |
  | extra | {u'vif_port_id': 
u'512e6c8e-3829-4bbd-8731-c03e5d7f7639'} |
  | local_link_connection | 
  |
  | node_uuid | 679fa8a9-066e-4166-ac1e-6e77af83e741
  |
  | pxe_enabled   | 
  |
  | updated_at| 2016-07-22T13:31:29+00:00   
  |
  | uuid  | 735fcaf5-145d-4125-8701-365c58c6b796
  |
  
+---+---+
  3. Delete neutron port:
  neutron port-delete 512e6c8e-3829-4bbd-8731-c03e5d7f7639
  Deleted port: 512e6c8e-3829-4bbd-8731-c03e5d7f7639
  4. It is gone from the interface list:
  nova interface-list 42dd8b8b-b2bc-420e-96b6-958e9295b2d4
  ++-++--+--+
  | Port State | Port ID | Net ID | IP addresses | MAC Addr |
  ++-++--+--+
  ++-++--+--+
  5. ironic port still has vif_port_id with neutron's port id:
  ironic port-show 735fcaf5-145d-4125-8701-365c58c6b796
  
+---+---+
  | Property  | Value   
  |
  
+---+---+
  | address   | 52:54:00:85:19:89   
  |
  | created_at| 2016-07-20T13:15:23+00:00   
  |
  | extra | {u'vif_port_id': 
u'512e6c8e-3829-4bbd-8731-c03e5d7f7639'} |
  | local_link_connection | 
  |
  | node_uuid | 679fa8a9-066e-4166-ac1e-6e77af83e741
  |
  | pxe_enabled   | 
  |
  | updated_at| 2016-07-22T13:31:29+00:00   
  |
  | uuid  | 735fcaf5-145d-4125-8701-365c58c6b796
  |
  
+---+---+

  This can be confusing when a user wants to get the list of unused ports
  of an ironic node.
  vif_port_id should be removed after a neutron port-delete.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1606229/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1607369] Re: In case of PCI-PT the mac address of the port should be flushed when the Vm attached to it is deleted

2023-01-31 Thread Lajos Katona
** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1607369

Title:
  In case of PCI-PT the mac address of the port should be flushed when
  the Vm attached  to it is deleted

Status in neutron:
  Won't Fix

Bug description:
  1. Bring up a PCI-PT setup.
  2. Create a PCI-PT port (it is assigned a MAC starting with fa:).
  3. Boot a VM with the port.
  4. On successful boot, the port created in step 2 gets the MAC of the
  compute's NIC.
  5. Delete the VM; we see that even though the VM is deleted, the port
  still contains the MAC of the compute NIC.
  6. To boot a new VM on the same compute, we need to either reuse the
  same port or first delete the port created in step 2 and create a new
  port.

  Ideally, once the VM is deleted, the MAC associated with the port (the
  compute NIC MAC) should be released.
  stack@hlm:~$ neutron port-list
  
+--+--+---+---+
  | id   | name | mac_address   | fixed_ips 
|
  
+--+--+---+---+
  | 6354907d-47bb-4a9f-b68a-1079d7d36a77 |  | 14:02:ec:6d:6e:98 | 
{"subnet_id": "f77cc897-3168-4be2-a0e3-b36597e77177", "ip_address": "7.7.7.3"}  
  |
  | 7cd5cef5-af68-464b-9fc3-34aa6a0889a2 |  | fa:16:3e:29:52:3a | 
{"subnet_id": "f77cc897-3168-4be2-a0e3-b36597e77177", "ip_address": "7.7.7.2"}  
  |
  | 8e69dfc9-1f5b-4a9b-8e25-8d941841ae0b |  | 14:02:ec:6d:6e:99 | 
{"subnet_id": "715baf00-765a-469b-8850-3bf2321d8ea5", "ip_address": 
"17.17.17.3"} |
  | a88d264e-a35a-4027-b975-29631b629232 |  | fa:16:3e:34:c9:cf | 
{"subnet_id": "715baf00-765a-469b-8850-3bf2321d8ea5", "ip_address": 
"17.17.17.2"} |
  
+--+--+---+---+

  
  stack@hlm:~$ nova list
  
+--+--+++-+---+
  | ID   | Name | Status | Task State | Power 
State | Networks  |
  
+--+--+++-+---+
  | 61fe0ea1-6364-469b-ae6f-a2255268f8c5 | VM   | ACTIVE | -  | Running 
| n5=7.7.7.3; n6=17.17.17.3 |
  
+--+--+++-+---+

  
  stack@hlm:~$ neutron port-create n6 --vnic-type=direct-physical
  Created a new port:
  +-----------------------+-----------------------------------------------------------------------------------+
  | Field                 | Value                                                                             |
  +-----------------------+-----------------------------------------------------------------------------------+
  | admin_state_up        | True                                                                              |
  | allowed_address_pairs |                                                                                   |
  | binding:host_id       |                                                                                   |
  | binding:profile       | {}                                                                                |
  | binding:vif_details   | {}                                                                                |
  | binding:vif_type      | unbound                                                                           |
  | binding:vnic_type     | direct-physical                                                                   |
  | created_at            | 2016-07-28T06:35:17                                                               |
  | description           |                                                                                   |
  | device_id             |                                                                                   |
  | device_owner          |                                                                                   |
  | dns_name              |                                                                                   |
  | extra_dhcp_opts       |                                                                                   |
  | fixed_ips             | {"subnet_id": "715baf00-765a-469b-8850-3bf2321d8ea5", "ip_address": "17.17.17.4"} |
  | id                    | 1769331d-0c5c-46ff-957e-a538a84b5095                                              |

[Yahoo-eng-team] [Bug 1612403] Re: Cannot filter OVSDB columns, only tables

2023-01-31 Thread Lajos Katona
** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1612403

Title:
  Cannot filter OVSDB columns, only tables

Status in neutron:
  Fix Released

Bug description:
  The current ovsdb connection class
  (neutron.agent.ovsdb.native.connection.Connection) allows filtering
  OVSDB tables, but not columns. Filtering columns may allow a
  performance gain when only specific columns in a table are accessed.

  Specifically, this is a feature we are trying to use in Dragonflow[1],
  in class DFConnection and table columns
  ovsdb_monitor_table_filter_default.

  [1]
  
https://github.com/openstack/dragonflow/blob/master/dragonflow/db/drivers/ovsdb_vswitch_impl.py
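The column filtering requested above can be sketched roughly as follows. This is an illustrative stand-in, not the actual neutron.agent.ovsdb API: the table and column names follow the OVS schema, but the helper and the update format are assumptions for the sake of the example.

```python
def filter_update(update, column_filter):
    """Drop tables and columns the caller did not ask to monitor.

    update: {table: {row_uuid: {column: value}}}
    column_filter: {table: set of column names to keep}
    """
    filtered = {}
    for table, rows in update.items():
        wanted = column_filter.get(table)
        if wanted is None:  # table not monitored at all
            continue
        filtered[table] = {
            uuid: {col: val for col, val in row.items() if col in wanted}
            for uuid, row in rows.items()
        }
    return filtered


# A monitor update carrying more than we need: skipping the unused
# "statistics" column (and the whole Port table) is where the
# performance gain would come from.
update = {
    "Interface": {
        "u1": {"name": "tap0", "ofport": 1, "statistics": {"rx": 10}},
    },
    "Port": {"u2": {"name": "br-int"}},
}
print(filter_update(update, {"Interface": {"name", "ofport"}}))
```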

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1612403/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1612433] Re: neutron-db-manage autogenerate is generating empty upgrades

2023-01-31 Thread Lajos Katona
As this bug has been inactive for years I am closing it now; feel free to reopen it.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1612433

Title:
  neutron-db-manage autogenerate is generating empty upgrades

Status in networking-arista:
  New
Status in neutron:
  Invalid

Bug description:
  The alembic autogenerate wrapper,

neutron-db-manage revision -m "description" --[contract|expand]

  is no longer collecting model/migration diffs and is generating empty
  upgrade scripts.

  Not sure when this broke.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-arista/+bug/1612433/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1614680] Re: In FWaaS v2 cross-tenant assignment of policies is inconsistent

2023-01-31 Thread Lajos Katona
As this bug has been inactive for years I changed it to "won't fix"; feel
free to reopen it if you would like to work on it.

** Changed in: neutron
   Importance: Undecided => Wishlist

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1614680

Title:
  In FWaaS v2 cross-tenant assignment of policies is inconsistent

Status in neutron:
  Won't Fix

Bug description:
  In the unit tests associated with the FWaaS v2 DB
  (neutron_fwaas/tests/unit/db/firewall/v2/test_firewall_db_v2.py),
  there are two that demonstrate improper handling of cross-tenant
  firewall policy assignment.

  First, the logic tested in
  test_update_firewall_rule_associated_with_other_tenant_policy
  succeeds, but it should not.

  Second, the logic tested in test_update_firewall_group_with_public_fwp
  fails, but it should succeed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1614680/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1553332] Re: API documentation missing for neutron rbac-policies

2023-01-05 Thread Lajos Katona
The API ref has entries for RBAC policies now; if you still miss
something, please reopen specifically for the missing part

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1553332

Title:
  API documentation missing for neutron rbac-policies

Status in neutron:
  Fix Released

Bug description:
  https://specs.openstack.org/openstack/neutron-
  specs/specs/liberty/rbac-networks.html#rest-api-impact doesn't appear
  to be reflected in the neutron API documentation, despite the fact
  that the spec was implemented (see neutron/extensions/rbac.py). I
  believe this should be included on http://developer.openstack.org/api-
  ref-networking-v2-ext.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1553332/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1577410] Re: Neutron needs a test for GET /

2023-01-05 Thread Lajos Katona
** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1577410

Title:
  Neutron needs a test for GET /

Status in neutron:
  Fix Released

Bug description:
  A fundamental operation for most OpenStack services is providing
  information about what versions of an API are available to clients.  A
  version document can be retrieved by sending an unauthenticated GET
  request to the root URL ("/") for most services, including Neutron.
  This capability is important for discovery in that clients can learn
  how to interact with the cloud in question, and DefCore considers it
  an important capability for interoperability and has added similar
  capabilities to its Guidelines for other services.[1][2]  As Neutron
  moves toward microversioning [3], being able to retrieve a version
  document will be increasingly important for clients.  However, there
  are currently no tests for GET /, so DefCore cannot make this a
  required Capability.  We should add a simple smoke test or two for GET /.

  [1] http://git.openstack.org/cgit/openstack/defcore/tree/2016.01.json#n117
  [2] http://git.openstack.org/cgit/openstack/defcore/tree/2016.01.json#n1379
  [3] https://etherpad.openstack.org/p/newton-neutron-future-neutron-api
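A smoke test for GET / would mostly assert on the shape of the version document. The sketch below parses a sample payload in the typical OpenStack version-discovery format; the exact fields and the endpoint port are illustrative assumptions, not taken from Neutron's actual response.

```python
import json

# Sample body mimicking a typical OpenStack GET / response
# (illustrative -- the real document should come from the API itself).
sample_body = json.dumps({
    "versions": [
        {"id": "v2.0", "status": "CURRENT",
         "links": [{"href": "http://controller:9696/v2.0/", "rel": "self"}]}
    ]
})


def current_version_ids(body):
    """Return the ids of all versions marked CURRENT."""
    doc = json.loads(body)
    return [v["id"] for v in doc.get("versions", [])
            if v.get("status") == "CURRENT"]


print(current_version_ids(sample_body))
```

A test like this would let DefCore require the capability without pinning the full document layout.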

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1577410/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1577543] Re: Barbican Scenario Test (TLS with Intermediates)

2023-01-05 Thread Lajos Katona
All patches are abandoned and LBaaS is deprecated; please reopen if you
think we need such scenario tests with Octavia

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1577543

Title:
   Barbican Scenario Test (TLS with Intermediates)

Status in neutron:
  Won't Fix

Bug description:
  * scenario test that does the following:
   1.  Create 1 compute server and run 2 web servers on it
   2.  Create SSL Cert, Private Key with intermediates
   3.  Upload SSL Cert and Private Key to Barbican secrets and get their reference
   4.  Create Barbican TLS container with secrets reference
   5.  Create Load Balancer and pass in Barbican TLS container reference
   6.  Pass SSL traffic to load balancer
   7.  Verify SSL traffic to node

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1577543/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1584922] Re: Add OSprofiler support

2023-01-05 Thread Lajos Katona
Neutron already has osprofiler support (e.g.:
https://review.opendev.org/c/openstack/neutron/+/615350? )

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1584922

Title:
  Add OSprofiler support

Status in neutron:
  Fix Released
Status in openstack-manuals:
  Won't Fix

Bug description:
  https://review.openstack.org/273951
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 9a43f58f4df85adc2029c33ba000ca17b746a6eb
  Author: Dina Belova 
  Date:   Fri Jan 29 11:54:14 2016 +0300

  Add OSprofiler support
  
  * Add osprofiler wsgi middleware. This middleware is used for 2 things:
1) It checks that person who wants to trace is trusted and knows
   secret HMAC key.
2) It starts tracing in case of proper trace headers
   and adds first wsgi trace point, with info about HTTP request
  
  * Add initialization of osprofiler at start of service
Currently that includes oslo.messaging notifer instance creation
to send Ceilometer backend notifications.
  
  Neutron client change: Ic11796889075b2a0e589b70398fc4d4ed6f3ef7c
  
  Co-authored-by: Ryan Moats 
  Depends-On: I5102eb46a7a377eca31375a0d64951ba1fdd035d
  Closes-Bug: #1335640
  DocImpact Add devref and operator documentation on how to use this
  APIImpact
  Change-Id: I7fa2ad57dc5763ce72cba6945ebcadef2188e8bd

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1584922/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597219] Re: default value display error in rbac-networks.rst

2023-01-05 Thread Lajos Katona
** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1597219

Title:
  default value display error in rbac-networks.rst

Status in neutron:
  Won't Fix

Bug description:
  In the below document:
  
https://github.com/openstack/neutron-specs/blob/master/specs/liberty/rbac-networks.rst

  In the Table of The Section "REST API Impact", the default value of
  the 'target_tenant' should be '*', but the displayed value is a black
  dot '●'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1597219/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1610038] Re: neutron policy file missing load balance related rules

2023-01-05 Thread Lajos Katona
** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1610038

Title:
  neutron policy file missing load balance related rules

Status in neutron:
  Won't Fix

Bug description:
  the following load-balancer-related rules used by Horizon are missing
  from the neutron policy file.

  "create_pool": "rule:admin_or_owner",
  "update_pool": "rule:admin_or_owner",
  "delete_pool": "rule:admin_or_owner",

  "create_vip": "rule:admin_or_owner",
  "update_vip": "rule:admin_or_owner",
  "delete_vip": "rule:admin_or_owner",

  "create_member": "rule:admin_or_owner",
  "update_member": "rule:admin_or_owner",
  "delete_member": "rule:admin_or_owner",

  "create_health_monitor": "rule:admin_or_owner",
  "update_health_monitor": "rule:admin_or_owner",
  "delete_health_monitor": "rule:admin_or_owner",

  "create_pool_health_monitor": "rule:admin_or_owner",
  "delete_pool_health_monitor": "rule:admin_or_owner",

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1610038/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526673] Re: [api-ref]Need to write "update agent" on Networking API

2023-01-05 Thread Lajos Katona
If I understand correctly, this is already done

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1526673

Title:
  [api-ref]Need to write "update agent" on Networking API

Status in neutron:
  Fix Released

Bug description:
  Neutron supports "update agent" in the Networking API, and Tempest is also 
testing the API.
  However, the api-site doesn't contain the API description.
  So we need to document the API for API users.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1526673/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543379] Re: Neutron *.delete.end notification payload do not contain metadata other than id of the entity being deleted

2023-01-05 Thread Lajos Katona
** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1543379

Title:
  Neutron *.delete.end notification payload do not contain metadata
  other than id of the entity being deleted

Status in neutron:
  Won't Fix

Bug description:
  When Neutron emits notification for objects like subnet, port, router
  and network being deleted, the notification payload only contain the
  id of the entity being deleted.

  Eg - RECEIVED MESSAGE: {u'_context_domain': None, u'_context_request_id': 
u'req-82232bf3-5032-4351-b8d6-71028cfe24eb', u'event_type': u'port.delete.end', 
u'_context_auth_token': u'682e4fec9d584d29b1f3a1a803a2560c', 
u'_context_resource_uuid': None, u'_context_tenant_name': u'admin', 
u'_context_user_id': u'a0934b6ddd264d619a6aba59b978cabc', u'payload':
  {u'port_id': u'ce56ff00-5af0-45a0-af33-061c2d8a64c5'}

  , u'_context_show_deleted': False, u'priority': u'INFO',
  u'_context_is_admin': True, u'_context_project_domain': None,
  u'_context_user': u'a0934b6ddd264d619a6aba59b978cabc',
  u'publisher_id': u'network.padawan-ccp-c1-m1-mgmt', u'message_id':
  u'2b8a5fa3-968c-4808-a235-bf06ecdba412', u'_context_roles':
  [u'monasca-user', u'admin', u'key-manager:admin', u'key-
  manager:service-admin'], u'timestamp': u'2016-02-08 20:47:02.026986',
  u'_context_timestamp': u'2016-02-08 20:47:01.178041', u'_unique_id':
  u'1248f6703a0f41bfb40d0f7cd6407371', u'_context_tenant_id':
  u'a5b63ca418bf45bc9f2cfc14c0c3c59e', u'_context_project_name':
  u'admin', u'_context_user_identity':
  u'a0934b6ddd264d619a6aba59b978cabc a5b63ca418bf45bc9f2cfc14c0c3c59e -
  - -', u'_context_tenant': u'a5b63ca418bf45bc9f2cfc14c0c3c59e',
  u'_context_project_id': u'a5b63ca418bf45bc9f2cfc14c0c3c59e',
  u'_context_read_only': False, u'_context_user_domain': None,
  u'_context_user_name': u'admin'}

  Compare that to the metadata obtained when a port is created:

  RECEIVED MESSAGE: {u'_context_domain': None, u'_context_request_id': 
u'req-89234e73-0294-4a29-bada-d0daa1e66b70', u'event_type': u'port.create.end', 
u'_context_auth_token': u'318cca31e08d4ecc8cf48e33f3c661f6', 
u'_context_resource_uuid': None, u'_context_tenant_name': u'admin', 
u'_context_user_id': u'a0934b6ddd264d619a6aba59b978cabc',
  u'payload': {u'port': {u'status': u'DOWN', u'binding:host_id': u'', u'name': 
u'', u'allowed_address_pairs': [], u'admin_state_up': True, u'network_id': 
u'ecbcd2ac-e066-4bc7-8f65-d4cf182677b9', u'dns_name': u'', 
u'binding:vif_details': {}, u'mac_address': u'fa:16:3e:8e:df:e9', 
u'dns_assignment': [
  {u'hostname': u'host-192-168-1-6', u'ip_address': u'192.168.1.6', u'fqdn': 
u'host-192-168-1-6.openstacklocal.'}

  ], u'binding:vnic_type': u'normal', u'binding:vif_type': u'unbound', 
u'device_owner': u'', u'tenant_id': u'a5b63ca418bf45bc9f2cfc14c0c3c59e', 
u'binding:profile': {}, u'fixed_ips': [
  {u'subnet_id': u'a650342c-5db4-4f37-aecb-4eb723355176', u'ip_address': 
u'192.168.1.6'}

  ], u'id': u'4adbe0de-6f27-4745-9c36-e56ee43a6ea3', u'security_groups': 
[u'02d614cd-053d-485f-aadf-83fc5409d111'], u'device_id': u''}},
  u'_context_show_deleted': False, u'priority': u'INFO', u'_context_is_admin': 
True, u'_context_project_domain': None, u'_context_user': 
u'a0934b6ddd264d619a6aba59b978cabc', u'publisher_id': 
u'network.padawan-ccp-c1-m3-mgmt', u'message_id': 
u'f418fa6c-5059-450b-9e0b-f7bf6970a24c', u'_context_roles': [u'monasca-user', 
u'admin', u'key-manager:admin', u'key-manager:service-admin'], u'timestamp': 
u'2016-02-08 20:48:00.589837', u'_context_timestamp': u'2016-02-08 
20:48:00.069004', u'_unique_id': u'bc1cf24aed20440c8d80f09914eaabf2', 
u'_context_tenant_id': u'a5b63ca418bf45bc9f2cfc14c0c3c59e', 
u'_context_project_name': u'admin', u'_context_user_identity': 
u'a0934b6ddd264d619a6aba59b978cabc a5b63ca418bf45bc9f2cfc14c0c3c59e - - -', 
u'_context_tenant': u'a5b63ca418bf45bc9f2cfc14c0c3c59e', 
u'_context_project_id': u'a5b63ca418bf45bc9f2cfc14c0c3c59e', 
u'_context_read_only': False, u'_context_user_domain': None, 
u'_context_user_name': u'admin'}

  The metadata is much richer for a *.create.end event compared to the
  *.delete.end event above. Ceilometer needs the metadata for the
  *.delete.end events.

  For accurate billing, Ceilometer needs to handle the network related 
*.delete.end events, so this change is needed for accurate billing use cases.
  Refer: 
https://github.com/openstack/ceilometer/blob/master/ceilometer/network/notifications.py#L50
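Until the delete.end payload is enriched server-side, a consumer such as Ceilometer can only work around this by caching resource metadata from the richer create/update notifications and merging it into the sparse delete payload. A minimal sketch of that workaround follows; the event names and payload keys match the messages quoted above, but the cache class itself is hypothetical.

```python
class DeleteEnricher:
    """Cache full port dicts from *.create.end / *.update.end payloads
    and merge them into the id-only *.delete.end payload."""

    def __init__(self):
        self._ports = {}  # port id -> last seen full port dict

    def on_event(self, event_type, payload):
        if event_type in ("port.create.end", "port.update.end"):
            port = payload.get("port", {})
            if "id" in port:
                self._ports[port["id"]] = port
            return payload
        if event_type == "port.delete.end":
            cached = self._ports.pop(payload.get("port_id"), {})
            # keep the delete payload's own keys authoritative
            return {**cached, **payload}
        return payload


enricher = DeleteEnricher()
enricher.on_event("port.create.end",
                  {"port": {"id": "ce56ff00",
                            "mac_address": "fa:16:3e:8e:df:e9"}})
enriched = enricher.on_event("port.delete.end", {"port_id": "ce56ff00"})
print(enriched)
```

The obvious drawback is that the cache only covers resources created or updated while the consumer was running, which is why richer delete.end payloads were requested here.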

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1543379/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1999774] [NEW] SDK: Neutron stadiums use python bindings from python-neutronclient which will be deprecated

2022-12-15 Thread Lajos Katona
Public bug reported:

As we discussed during the Antelope PTG (see [1]), the python binding code in 
python-neutronclient will be deprecated and the bindings from openstacksdk 
could be used.
Neutronclient has printed a warning about the upcoming deprecation since [2].

This bug is to track the efforts in Neutron stadium projects to change
the code to use bindings from openstacksdk.


[1]: https://etherpad.opendev.org/p/neutron-antelope-ptg#L163
[2]: 
https://review.opendev.org/c/openstack/python-neutronclient/+/862371?forceReload=true

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1999774

Title:
  SDK: Neutron stadiums use python bindings from python-neutronclient
  which will be deprecated

Status in neutron:
  New

Bug description:
  As we discussed during the Antelope PTG (see [1]), the python binding code in 
python-neutronclient will be deprecated and the bindings from openstacksdk 
could be used.
  Neutronclient has printed a warning about the upcoming deprecation since [2].

  This bug is to track the efforts in Neutron stadium projects to change
  the code to use bindings from openstacksdk.

  
  [1]: https://etherpad.opendev.org/p/neutron-antelope-ptg#L163
  [2]: 
https://review.opendev.org/c/openstack/python-neutronclient/+/862371?forceReload=true

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1999774/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1511134] Re: Batch DVR ARP updates

2022-12-01 Thread Lajos Katona
Based on the bug's history we can close this as Won't Fix; if you think
otherwise, please reopen this bug

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1511134

Title:
  Batch DVR ARP updates

Status in neutron:
  Won't Fix

Bug description:
  The L3 agent currently issues ARP updates one at a time while
  processing a DVR router. Each ARP update creates an external process
  which has to call the neutron-rootwrap helper while also "ip netns
  exec " -ing each time.

  The ip command contains a "-batch " option which would be
  able to batch all of the "ip neigh replace" commands into one external
  process per qrouter namespace. This would greatly reduce the amount of
  time it takes the L3 agent to update large numbers of ARP entries,
  particularly as the number of VMs in a deployment rises.

  The benefit of batching ip commands can be seen in this simple bash
  example:

  $ time for i in {0..50}; do sudo ip netns exec qrouter-bc38451e-0c2f-4ad2-b76b-daa84066fefb ip a > /dev/null; done

  real  0m2.437s
  user  0m0.183s
  sys   0m0.359s
  $ for i in {0..50}; do echo a >> /tmp/ip_batch_test; done
  $ time sudo ip netns exec qrouter-bc38451e-0c2f-4ad2-b76b-daa84066fefb ip -b /tmp/ip_batch_test > /dev/null

  real  0m0.046s
  user  0m0.003s
  sys   0m0.007s

  If just 50 arp updates are batched together, there is about a 50x
  speedup. Repeating this test with 500 commands showed a speedup of
  250x (disclaimer: this was a rudimentary test just to get a rough
  estimate of the performance benefit).

  Note: see comments #1-3 for less-artificial performance data.
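The batching described above amounts to writing all the neighbour updates into one file and handing it to a single `ip -batch` invocation per namespace. A small sketch of the file-building side; the entries and device name are invented, while the `neigh replace <ip> lladdr <mac> dev <dev>` line format (with the leading "ip" omitted inside a batch file) is standard iproute2.

```python
import tempfile


def write_arp_batch(entries, device, path):
    """Write one 'ip -batch' file; entries is an iterable of (ip, mac)."""
    with open(path, "w") as f:
        for ip, mac in entries:
            # batch-file lines omit the leading "ip"
            f.write(f"neigh replace {ip} lladdr {mac} dev {device}\n")
    return path


batch = write_arp_batch(
    [("10.0.0.5", "fa:16:3e:aa:bb:01"), ("10.0.0.6", "fa:16:3e:aa:bb:02")],
    "qr-1234", tempfile.NamedTemporaryFile(delete=False).name)
print(open(batch).read(), end="")
```

The agent would then spawn a single `sudo ip netns exec <qrouter-ns> ip -batch <file>` per router namespace instead of one rootwrapped process per ARP entry.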

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1511134/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1509924] Re: Tempest needs to test DHCPv6 stateful

2022-12-01 Thread Lajos Katona
Based on the bug's history we can close this as Won't Fix; if you think
otherwise, please reopen this bug.
As far as I can see we still have tests for SLAAC, but not for stateful.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1509924

Title:
  Tempest needs to test DHCPv6 stateful

Status in neutron:
  Won't Fix

Bug description:
  Currently there are no tests for DHCPv6 stateful IPv6 configurations,
  due to a bug in Cirros, which does not have support for DHCPv6

  https://bugs.launchpad.net/cirros/+bug/1487041

  Work needs to be done in Tempest to select an image that has DHCPv6
  stateful support.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1509924/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526587] Re: Neutron doesn't have a command to show the available IP addresses for one subnet

2022-12-01 Thread Lajos Katona
Based on the bug's history we can close this as Won't Fix; if you think
otherwise, please reopen this bug

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1526587

Title:
  Neutron doesn't have a command to show the available IP addresses for
  one subnet

Status in neutron:
  Won't Fix
Status in python-neutronclient:
  Won't Fix
Status in python-openstackclient:
  In Progress

Bug description:
  Neutron doesn't have a command to show the available IP addresses for
  one subnet.

  We can get the allocated IP list with this command:
  [root@cts-orch ~]# neutron port-list | grep `neutron subnet-show 110-OAM2 | awk '/ id / {print $4}'` | cut -d"|" -f5 | cut -d":" -f3 | sort
   "135.111.122.97"}
   "135.111.122.98"}

  But we don't have a command to show the available IPs for one subnet.
  I wrote a shell script to show the available IPs as below, but it
  would be helpful if we could provide such a neutron command.

  [root@cts-orch ~]# ./show_available_ip.sh 110-OAM2
  135.111.122.99
  135.111.122.100
  135.111.122.101
  135.111.122.102
  135.111.122.103
  135.111.122.104
  135.111.122.105
  135.111.122.106
  135.111.122.107
  135.111.122.108
  135.111.122.109
  135.111.122.110
  135.111.122.111
  135.111.122.112
  135.111.122.113
  135.111.122.114
  135.111.122.115
  135.111.122.116
  135.111.122.117
  135.111.122.118
  135.111.122.119
  135.111.122.120
  135.111.122.121
  135.111.122.122
  135.111.122.123
  135.111.122.124
  Total Count: 26
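Server-side, such a command would essentially subtract the allocated fixed IPs from the subnet's allocation pool. A minimal sketch with Python's ipaddress module; the pool boundaries and allocations below are taken loosely from the output above, purely for illustration.

```python
import ipaddress


def available_ips(pool_start, pool_end, allocated):
    """List addresses in [pool_start, pool_end] not present in allocated."""
    start = ipaddress.ip_address(pool_start)
    end = ipaddress.ip_address(pool_end)
    used = {ipaddress.ip_address(a) for a in allocated}
    free = []
    addr = start
    while addr <= end:          # IPv4Address supports ordering and +int
        if addr not in used:
            free.append(str(addr))
        addr += 1
    return free


# Shrunken pool for the example: .97-.101 with .97 and .98 allocated.
free = available_ips("135.111.122.97", "135.111.122.101",
                     ["135.111.122.97", "135.111.122.98"])
print(free)
```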

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1526587/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505627] Re: [RFE] QoS Explicit Congestion Notification (ECN) Support

2022-12-01 Thread Lajos Katona
Based on the RFE's history we can close this as Won't Fix; if you think
otherwise, please reopen this RFE

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505627

Title:
  [RFE] QoS Explicit Congestion Notification (ECN) Support

Status in neutron:
  Won't Fix

Bug description:
  [Existing problem]
  Network congestion can be very common in large data centers generating huge 
traffic from multiple hosts. Though each host can use the IP header TOS ECN bit 
functionality to implement explicit congestion notification [1]_, this will be 
a redundant effort.

  [Proposal]
  This proposal is about achieving ECN on behalf of each host. This helps make 
the solution centralized and can be applied at the per-tenant level. In 
addition, traffic classification for applying ECN functionality can also be 
achieved via specific filtering rules, if required. Almost all the leading 
vendors support this option for better QoS [2]_.

  The existing QoS framework is limited to bandwidth rate limiting and
  should be extended to support explicit congestion notification (RFC 3168
  [3]_).

  [Benefits]
  - Enhancement to the existing QoS functionality.

  [What is the enhancement?]
  - Add ECN support to the QoS extension.
  - Add additional command lines for realizing ECN functionality.
  - Add OVS support.

  [Related information]
  [1] ECN Wiki
     http://en.wikipedia.org/wiki/Explicit_Congestion_Notification
  [2] QoS
     https://review.openstack.org/#/c/88599/
  [3] RFC 3168
     https://tools.ietf.org/html/rfc3168
  [4] Specification
  
https://blueprints.launchpad.net/neutron/+spec/explicit-congestion-notification
  [5] Specification Discussion: https://etherpad.openstack.org/p/QoS_ECN
  [6] OpenVSwitch support for ECN : 
http://openvswitch.org/support/dist-docs/ovs-ofctl.8.txt
  [7] Etherpad Link : https://etherpad.openstack.org/p/QoS_ECN
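For reference, RFC 3168 places the ECN field in the two least-significant bits of the IP TOS/traffic-class byte, with the DSCP in the upper six bits. The helper below only illustrates that bit layout (the codepoints are from the RFC); it is not part of the proposed Neutron API.

```python
# ECN codepoints per RFC 3168, section 5
NOT_ECT, ECT_1, ECT_0, CE = 0b00, 0b01, 0b10, 0b11


def set_ecn(tos, codepoint):
    """Return the TOS byte with its 2-bit ECN field replaced."""
    return (tos & 0xFC) | (codepoint & 0x03)


def get_ecn(tos):
    """Extract the 2-bit ECN field from a TOS byte."""
    return tos & 0x03


# DSCP AF11 (10 << 2 == 0x28) combined with ECT(0)
tos = set_ecn(0x28, ECT_0)
print(hex(tos), get_ecn(tos) == ECT_0)
```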

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505627/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506076] Re: Allow connection tracking to be disabled per-port

2022-12-01 Thread Lajos Katona
Based on the bug's history we can close this as Won't Fix; if you think
otherwise, please reopen this bug report

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506076

Title:
  Allow connection tracking to be disabled per-port

Status in neutron:
  Won't Fix

Bug description:
  This RFE is being raised in the context of this use case
  https://review.openstack.org/#/c/176301/ from the TelcoWG.

  OpenStack implements levels of per-VM security protection (security
  groups, anti-spoofing rules).  If you want to deploy a trusted VM
  which itself is providing network security functions, as with the
  above use case, then it is often necessary to disable some of the
  native OpenStack protection so as not to interfere with the protection
  offered by the VM or use excessive host resources.

  Neutron already allows you to disable security groups on a per-port
  basis.  However, the Linux kernel will still perform connection
  tracking on those ports.  With default Linux config, VMs will be
  severely scale limited without specific host configuration of
  connection tracking limits - for example, a Session Border Controller
  VM may be capable of handling millions of concurrent TCP connections,
  but a default host won't support anything like that.  This bug is
  therefore an RFE to request that disabling the security group function
  for a port further disables kernel connection tracking for IP addresses
  associated with that port.
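The kernel mechanism usually used for this is a raw-table rule that marks traffic NOTRACK before conntrack sees it. A sketch that only assembles such commands for a port's fixed IPs; the `-t raw ... -j CT --notrack` rule form is standard iptables, while the helper and the chain choices here are illustrative assumptions, not the implementation this RFE would produce.

```python
def notrack_rules(fixed_ips):
    """Assemble hypothetical iptables commands that exempt a port's
    fixed IPs from connection tracking (raw table runs before conntrack)."""
    rules = []
    for ip in fixed_ips:
        rules.append(f"iptables -t raw -A PREROUTING -d {ip} -j CT --notrack")
        rules.append(f"iptables -t raw -A OUTPUT -s {ip} -j CT --notrack")
    return rules


# Fixed IP borrowed from the port payload quoted earlier in this digest.
for rule in notrack_rules(["192.168.1.6"]):
    print(rule)
```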

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1506076/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1507499] Re: [RFE] Centralized Management System for testing the environment

2022-12-01 Thread Lajos Katona
Based on the RFE's history we can close this as Won't Fix; if you think
otherwise, please reopen this RFE

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507499

Title:
  [RFE] Centralized Management System for testing the environment

Status in neutron:
  Won't Fix

Bug description:
  To enable operators to reduce manual work upon experiencing a networking
  issue, and to quickly pinpoint the cause of a failure, there is a need for
  neutron to provide real-time diagnostics of its resources. This way, the
  current need for manual checks, often requiring root access, would be
  gradually replaced by API queries. Providing diagnostics options in the
  neutron API would also open space for development of specialized tools
  that would solve particular types of issues, e.g. inability to ping a VM’s
  interface.

  Note: The description of this RFE was changed to cover previous RFEs
  related to diagnostics (namely bug 1563538, bug 1537686, bug 1519537
  and the original of this bug).

  Problem Description
  ===

  One of the common questions seen at ask.openstack.org and mailing lists
  is "Why can't I ping my floating IP address?". Usually, there are common
  steps in the diagnostics required to answer the question, involving
  determination of the relevant namespaces, pinging the instance from
  those namespaces, etc. Currently, these steps need to be performed
  manually, often by crawling the relevant hosts and running tools that
  require root access.

  Neutron currently provides data on how the resources *should* be
  configured. It however provides only very little diagnostics
  information reflecting the *actual* resource state. Hence if an issue
  occurs, the user is often left with only few details of what works and
  what does not, and has to manually crawl the affected hosts to
  troubleshoot the issue.

  Proposed Change
  ===

  This RFE requests an extension of the current API that exposes
  diagnostics for neutron resources so that they are accessible via API
  calls, reducing the amount of manual work needed. Further, it describes
  the additions to the Neutron CLI necessary to call the newly added API.

  Spec
  
  https://review.openstack.org/#/c/308973/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507499/+subscriptions




[Yahoo-eng-team] [Bug 1498987] Re: [RFE] DHCP agent should provide ipv6 RAs for isolated networks with ipv6 subnets

2022-12-01 Thread Lajos Katona
Based on the history we can close this as Won't Fix; if you think
otherwise, please reopen this RFE.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498987

Title:
  [RFE] DHCP agent should provide ipv6 RAs for isolated networks with
  ipv6 subnets

Status in neutron:
  Won't Fix

Bug description:
  Currently, if there is no router attached to a subnet, then instances
  cannot walk through IPv6 address assignment, because there is nothing
  on the network that multicasts RAs that would provide basic info about
  how ipv6 addressing is handled there. We can have the DHCP agent run
  radvd in that case. Then instances would be able to receive IPv6
  addresses on isolated networks too.

  We could try to rely on https://tools.ietf.org/html/rfc4191.
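
For illustration, the kind of radvd configuration the DHCP agent could render for an isolated network might look like the following (the interface name and prefix are hypothetical):

```
interface tap-isolated0
{
    AdvSendAdvert on;
    prefix 2001:db8:0:1::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};
```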

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1498987/+subscriptions




[Yahoo-eng-team] [Bug 1998353] [NEW] Fullstack: test_packet_rate_limit_qos_policy_rule_lifecycle failing

2022-11-30 Thread Lajos Katona
Public bug reported:

neutron.tests.fullstack.test_qos.TestPacketRateLimitQoSOvs.test_packet_rate_limit_qos_policy_rule_lifecycle
(both egress and ingress directions) is failing in neutron-fullstack-
with-uwsgi (perhaps in other fullstack jobs also, but I checked this one):

https://zuul.opendev.org/t/openstack/builds?job_name=neutron-fullstack-
with-uwsgi=master=0

** Affects: neutron
 Importance: Critical
 Status: New


** Tags: gate-failure

** Changed in: neutron
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1998353

Title:
  Fullstack: test_packet_rate_limit_qos_policy_rule_lifecycle failing

Status in neutron:
  New

Bug description:
  
neutron.tests.fullstack.test_qos.TestPacketRateLimitQoSOvs.test_packet_rate_limit_qos_policy_rule_lifecycle
  (both egress and ingress directions) is failing in neutron-fullstack-
  with-uwsgi (perhaps in other fullstack jobs also, but I checked this one):

  https://zuul.opendev.org/t/openstack/builds?job_name=neutron-
  fullstack-with-uwsgi=master=0

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1998353/+subscriptions




[Yahoo-eng-team] [Bug 1501380] Re: Evolution of options and features in neutron-db-manage for Newton

2022-11-08 Thread Lajos Katona
** Changed in: neutron
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501380

Title:
  Evolution of options and features in neutron-db-manage for Newton

Status in kolla:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  Neutron's DB schema and alembic revisions management is evolving.
  In the process, we have deprecated and dropped features and options.

  This bug will be used to track evolutionary updates:
  - Better help and documentation
  - Better formatting and more useful output from commands
  - Additional options for more insight into alembic branches/heads/history
  - etc.

  This bug will also be used to track the deprecations and removals:
  - Branchless migrations
  - split_branches option
  - core_plugin option
  - service option
  - quota_driver option
  - etc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/kolla/+bug/1501380/+subscriptions




[Yahoo-eng-team] [Bug 1505631] Re: [RFE] QoS VLAN 802.1p Support

2022-11-08 Thread Lajos Katona
This bug has been inactive for ~5 years, so I close it now.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505631

Title:
  [RFE] QoS VLAN 802.1p Support

Status in neutron:
  Won't Fix

Bug description:
  [Overview]
  The IEEE 802.1p signaling standard defines traffic prioritization at Layer 2 
of the OSI model. Layer 2 network devices, such as switches, that adhere to 
this standard can group incoming packets into separate traffic classes. The 
802.1p standard is used to prioritize packets as they traverse a network 
segment (subnet). When a subnet becomes congested, causing a Layer 2 network 
device to drop packets, the packets marked for higher priority receive 
preferential treatment and are serviced before packets with lower priorities.

  The 802.1p priority markings for a packet are appended to the MAC
  header. On Ethernet networks, 802.1p priority markings are carried in
  Virtual Local Area Network (VLAN) tags. The IEEE 802.1q standard
  defines VLANs and VLAN tags. This standard specifies a 3-bit field for
  priority in the VLAN tag, but it does not define the values for the
  field. The 802.1p standard defines the values for the priority field.
  This standard defines eight priority classes (0 - 7). Network
  administrators can determine the actual mappings, but the standard
  makes general recommendations. The VLAN tag is placed inside the
  Ethernet header, between the source address and either the Length
  field (for an IEEE 802.3 frame) or the EtherType field (for an
  Ethernet II frame) in the MAC header. The 802.1p marking determines
  the service level that a packet receives when it crosses an
  802.1p-enabled network segment.

  [Proposal]
  The existing QoS [1]_ framework can be extended to support VLAN priority.
  This requirement mainly focuses on provider networks.
  Note: For tenant networks this can only work if the traffic is segmented via 
VLANs, and it does nothing for other types of segmentation.

  [Benefits]
  - Enhancement to the existing QoS functionality.

  [What is the enhancement?]
  - Add VLAN tagging support to the QoS extension for provider networks.
  - Add additional command-line options for realizing VLAN tag support.
  - Add OVS support.

  [Related information]
  [1] QoS
     https://review.openstack.org/#/c/88599/
  [2] Specification
  https://blueprints.launchpad.net/neutron/+spec/vlan-802.1p-qos

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505631/+subscriptions




[Yahoo-eng-team] [Bug 1492714] Re: RFE: Pure Python driven Linux network configuration

2022-11-08 Thread Lajos Katona
pyroute2 migration is finished (mostly I suppose)

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1492714

Title:
  RFE: Pure Python driven Linux network configuration

Status in neutron:
  Fix Released

Bug description:
  [Problem]
  Currently, Linux network configuration in Neutron heavily relies on shell 
commands, like ip, brctl, ipset, iptables, etc. Shell commands make Neutron 
agents inefficient and really hard to operate in high-load environments. In our 
production deployment scaling from 50 - 500 physical machines per region, 50+ 
virtual instances per machine, the Neutron agents run extremely slowly and are 
sometimes unresponsive.

  There is a blueprint that switches OpenFlow operations from shell to a
  ryu-based pure Python library, but it is not sufficient.

  [Solution-1]
  I'd like to introduce a pure Python netlink library: pyroute2. It supports 
network configuration including ip-link, ip-route, ip-netns and tc, with ipset 
and iptables on the roadmap, and it is also compatible with Python 3. It only 
requires the standard library, which is also awesome, because you don't need to 
rely on other unstable third-party libraries that make dependencies hard to 
maintain. Moreover, it supports transactional local DB operations for network 
configuration, called IPDB.

  Doc Link: http://docs.pyroute2.org/general.html
  Pypi Link: https://pypi.python.org/pypi/pyroute2

  I should first issue an RFE for discussion. Forgot it. :-)
  Blueprint Link: 
https://blueprints.launchpad.net/neutron/+spec/pure-python-linuxnet-conf

  [Solution-2]
  Currently pyroute2 still doesn't support the whole functionality of ipset and 
iptables, but they are definitely on the roadmap. I'm not sure about its 
progress. I've forked this project and will try to get involved if possible, to 
make sure it evolves as expected. What I suggest is that, if possible, we open 
a new project, pyosnetconf or networking-linuxnet-conf, whatever, that 
implements OpenStack's own Python library for Linux network configuration. It 
may be much more aggressive, but still meaningful to neutron.

  I'm OK with the two solutions mentioned above. I'd like to get
  feedback as much as possible to move forward. Anyway, I strongly
  suggest to make it work.
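
To make the contrast concrete, here is a sketch of the shell-out pattern versus an in-process query. The stdlib socket call stands in for the pyroute2 side; pyroute2's real API (see the docs linked above) covers far more than link names:

```python
import socket
import subprocess

def link_names_via_shell():
    """The fork-and-parse pattern this RFE wants to move away from."""
    out = subprocess.run(["ip", "-o", "link", "show"],
                         capture_output=True, text=True, check=True).stdout
    return [line.split(":")[1].strip().split("@")[0]
            for line in out.splitlines()]

def link_names_in_process():
    """In-process equivalent: no fork/exec, no output parsing.

    (stdlib stand-in; pyroute2 exposes links, routes, namespaces, etc.
    over netlink in the same in-process fashion)
    """
    return [name for _, name in socket.if_nameindex()]
```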

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1492714/+subscriptions




[Yahoo-eng-team] [Bug 1497830] Re: Neutron subprojects cannot depend on alembic revisions from other neutron subprojects

2022-11-08 Thread Lajos Katona
This bug has been inactive for ~5 years, so I close it now.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497830

Title:
  Neutron subprojects cannot depend on alembic revisions from other
  neutron subprojects

Status in neutron:
  Won't Fix

Bug description:
  If networking-foo depends on networking-bar, then a requirement may
  arise for an alembic revision in networking-foo to depend on an
  alembic revision in networking-bar. Currently this cannot be
  accommodated because each subproject has its own alembic environment.

  To solve this issue we need to switch to one alembic environment
  (neutron's) for all neutron subprojects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497830/+subscriptions




[Yahoo-eng-team] [Bug 1481138] Re: l2population configuration is inconsistent for different agents

2022-11-08 Thread Lajos Katona
No activity for ~6 years on this bug report, so I close it now

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1481138

Title:
  l2population configuration is inconsistent for different agents

Status in neutron:
  Won't Fix

Bug description:
  I run devstack for dev. I noticed that l2population configuration is
  not consistent for lb-agent and ovs-agent.

  For lb-agent, l2population is in the section [vxlan], while for ovs-
  agent, it is in the section [agent].

  Moreover, devstack sets this configuration in [agent], which causes
  the lb-agent to malfunction.

  It is easy to fix it in devstack but, in the long run, I think it is
  better to deal with it on the agent side. I think for both agents, a
  common configuration option like l2population should be in the [agent]
  section.
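
As described, the same flag lives in different sections for the two agents; a sketch of the relevant config fragments (file names as commonly shipped; verify against your deployment):

```
# linuxbridge_agent.ini -- the lb-agent reads the flag here
[vxlan]
l2_population = True

# openvswitch_agent.ini -- the ovs-agent reads the same flag here
[agent]
l2_population = True
```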

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1481138/+subscriptions




[Yahoo-eng-team] [Bug 1487548] Re: fullstack infrastructure tears down processes via kill -9

2022-11-08 Thread Lajos Katona
No activity for ~6 years on this bug report, so I close it now

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1487548

Title:
  fullstack infrastructure tears down processes via kill -9

Status in neutron:
  Won't Fix

Bug description:
  I can't imagine this has good implications. Distros typically kill
  neutron processes via kill -15, so this should definitely be doable
  here as well.
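
A minimal sketch of the graceful teardown the report asks for: SIGTERM (kill -15) first, falling back to SIGKILL (kill -9) only on timeout. Plain subprocess is used here as a stand-in for the fullstack process management:

```python
import signal
import subprocess

# Stand-in for a long-running agent process managed by the tests.
proc = subprocess.Popen(["sleep", "60"])

# Graceful teardown: ask politely with SIGTERM (kill -15)...
proc.send_signal(signal.SIGTERM)
try:
    proc.wait(timeout=5)
except subprocess.TimeoutExpired:
    # ...and only resort to SIGKILL (kill -9) if the process ignores it.
    proc.kill()
    proc.wait()
```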

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1487548/+subscriptions




[Yahoo-eng-team] [Bug 1438320] Re: Subnet pool created should be blocked when allow_overlapping_ips=False

2022-10-27 Thread Lajos Katona
Closing this now, as it was inactive for years; feel free to reopen and
propose patches if you need this feature.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1438320

Title:
  Subnet pool created should be blocked when allow_overlapping_ips=False

Status in neutron:
  Won't Fix

Bug description:
  Creation of subnet pools should be blocked when
  allow_overlapping_ips=False. This conflicts with the notion of subnet
  pools and causes allocation of overlapping prefixes to be blocked,
  even when allocating across different pools.  The simplest solution is
  to declare subnet pools incompatible with allow_overlapping_ips=False
  and block creation of subnet pools.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1438320/+subscriptions




[Yahoo-eng-team] [Bug 1459427] Re: VPNaaS: Certificate support for IPSec

2022-10-27 Thread Lajos Katona
Closing this now, as it was inactive for years; feel free to reopen and
propose patches if you need this feature.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1459427

Title:
  VPNaaS: Certificate support for IPSec

Status in neutron:
  Won't Fix

Bug description:
  Problem: Currently, when creating VPN IPSec site-to-site connections,
  the end user can only create tunnels using pre-shared keys for
  authentication. There is no way to use (the far superior)
  certificates, which are preferred for production environments.

  Solution: We can leverage Barbican to add certificate support for
  VPNaaS IPSec connections.

  Importance: Adding support for specifying certificates will help with
  the acceptance and deployment of the VPNaaS feature.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1459427/+subscriptions




[Yahoo-eng-team] [Bug 1460499] Re: Instance can not get IP address in tacker by using nova's driver

2022-10-27 Thread Lajos Katona
Closing this now, as it was inactive for years; feel free to reopen and
propose patches if you need this feature.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460499

Title:
  Instance can not get IP address in tacker by using nova's driver

Status in neutron:
  Won't Fix

Bug description:
  An instance cannot get an IP address in Tacker when using nova's
  driver, because the instance's port's admin_state_up is False when the
  port is created. I think the port's admin_state_up should be True on
  creation. Bug fix in:
  https://review.openstack.org/#/c/187039/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460499/+subscriptions




[Yahoo-eng-team] [Bug 1471032] Re: [api-ref]Support Basic Address Scope CRUD as extensions

2022-10-27 Thread Lajos Katona
** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1471032

Title:
  [api-ref]Support Basic Address Scope CRUD as extensions

Status in neutron:
  Fix Released

Bug description:
  https://review.openstack.org/189741
  commit cbd95318ad6c44e72a3aa163f7a399353c8b4458
  Author: vikram.choudhary 
  Date:   Tue Jun 9 19:55:59 2015 +0530

  Support Basic Address Scope CRUD as extensions
  
  This patch adds the support for basic address scope CRUD.
  Subsequent patches will be added to use this address scope
  on subnet pools.
  
  DocImpact
  APIImpact
  
  Co-Authored-By: Ryan Tidwell 
  Co-Authored-By: Numan Siddique 
  Change-Id: Icabdd22577cfda0e1fbf6042e4b05b8080e54fdb
  Partially-implements:  blueprint address-scopes

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1471032/+subscriptions




[Yahoo-eng-team] [Bug 1437496] Re: port-update --fixed-ips doesn't work for routers

2022-10-27 Thread Lajos Katona
** Changed in: neutron
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1437496

Title:
  port-update --fixed-ips doesn't work for routers

Status in neutron:
  Opinion

Bug description:
  Performing a port-update with a different set of fixed-ips than are
  currently on the port will be reported as a success by Neutron,
  however the actual addresses will not be updated in the Linux network
  namespace. This now has more functional implications as a result of
  multiple subnets being allowed on the external router interface
  (https://review.openstack.org/#/c/149068). If the interface has two
  subnets and the user wishes to remove one, they will have to clear the
  gateway interface first, removing both (causing traffic disruption),
  delete the subnet, and re-set the gateway on the router to re-add the
  remaining subnet. If port-update were functional for router addresses,
  this command could be used to remove a second subnet without causing
  disruption to the first.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1437496/+subscriptions




[Yahoo-eng-team] [Bug 1381562] Re: Add functional tests for metadata agent

2022-10-27 Thread Lajos Katona
** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1381562

Title:
  Add functional tests for metadata agent

Status in neutron:
  Fix Released

Bug description:
  As per discussion on
  
https://review.openstack.org/#/c/121782/8/neutron/tests/unit/test_metadata_agent.py:

  Tests could do something like sending an HTTP request to a proxy,
  while mocking the API response (and later the RPC response, if RPC is
  merged into the metadata agent), then asserting that the agent
  forwarded the correct HTTP request to Nova.
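
A sketch of that test shape using only the standard library — a fake Nova metadata backend records the request so the test can assert what was forwarded (the handler class and header name here are illustrative, not Neutron's actual harness):

```python
import http.server
import threading
import urllib.request

class FakeNovaMetadata(http.server.BaseHTTPRequestHandler):
    """Stands in for the nova metadata API; records what it receives."""
    seen = []

    def do_GET(self):
        FakeNovaMetadata.seen.append(
            (self.path, self.headers.get("X-Instance-ID")))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep test output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), FakeNovaMetadata)
threading.Thread(target=server.serve_forever, daemon=True).start()

# In a real functional test this request would go through the metadata
# proxy; here it hits the fake backend directly to show the assertion style.
port = server.server_address[1]
req = urllib.request.Request(
    f"http://127.0.0.1:{port}/latest/meta-data/",
    headers={"X-Instance-ID": "abc-123"})
body = urllib.request.urlopen(req).read()
server.shutdown()
```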

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1381562/+subscriptions




[Yahoo-eng-team] [Bug 1405057] Re: Filter port-list based on security_groups associated not working

2022-10-27 Thread Lajos Katona
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1405057

Title:
  Filter port-list based on security_groups associated not working

Status in neutron:
  Fix Released

Bug description:
  Sample Usecases:

  1. neutron port-list --security_groups=6f3d9d9d-e84d-437c-ac40-82ce3196230c
  Invalid input for operation: '6' is not an integer or uuid.

  2.neutron port-list --security_groups list=true 
6f3d9d9d-e84d-437c-ac40-82ce3196230c
  Invalid input for operation: '6' is not an integer or uuid.

  Since the security_groups associated with a port are referenced from the 
securitygroups db table, we can't just filter ports based on security_groups 
directly the way it works for other parameters.

  Example:
  neutron port-list --mac_address list=true fa:16:3e:40:2b:cc fa:16:3e:8e:32:3e
  
+--+--+---+---+
  | id   | name | mac_address   | fixed_ips 
|
  
+--+--+---+---+
  | 1cecec78-226f-4379-b5ad-c145e2e14048 |  | fa:16:3e:40:2b:cc | 
{"subnet_id": "af938c1c-e2d7-47a0-954a-ec8524677486", "ip_address": 
"50.10.10.2"} |
  | eec24494-09a8-4fa8-885d-e3fda37fe756 |  | fa:16:3e:8e:32:3e | 
{"subnet_id": "af938c1c-e2d7-47a0-954a-ec8524677486", "ip_address": 
"50.10.10.3"} |
  
+--+--+---+---+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1405057/+subscriptions




[Yahoo-eng-team] [Bug 1371435] Re: Remove unnecessary iptables reload when L2 agent enable ipset

2022-10-27 Thread Lajos Katona
** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1371435

Title:
  Remove unnecessary iptables reload when L2 agent enable ipset

Status in neutron:
  Won't Fix

Bug description:
  When the L2 agent enables ipset, if a security group just updates its 
members, iptables should not be reloaded; it just needs to add the members to 
the ipset chain. There is room to improve!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1371435/+subscriptions




[Yahoo-eng-team] [Bug 1599936] Re: l2gw provider config prevents *aas provider config from being loaded

2022-10-18 Thread Lajos Katona
Implicit provider loading was merged a long time ago.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1599936

Title:
  l2gw provider config prevents *aas provider config from being loaded

Status in devstack:
  Invalid
Status in networking-l2gw:
  Invalid
Status in neutron:
  Fix Released

Bug description:
  The networking-l2gw devstack plugin stores its service_providers config in 
/etc/l2gw_plugin.ini and adds a --config-file option for it.
  As a result, neutron-server is invoked like the following.

  /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf
  --config-file /etc/neutron/plugins/midonet/midonet.ini --config-file
  /etc/neutron/l2gw_plugin.ini

  It breaks *aas service providers because NeutronModule.service_providers 
finds the l2gw providers in cfg.CONF.service_providers.service_provider and 
thus doesn't look at the *aas service_providers config, which is in 
/etc/neutron/neutron_*aas.conf.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1599936/+subscriptions




[Yahoo-eng-team] [Bug 1993288] [NEW] RFE: Adopt Keystone unified limits as quota driver for Neutron

2022-10-18 Thread Lajos Katona
Public bug reported:

Keystone has the ability to store and relay project-specific limits (see [1]). 
The API (see [2]) provides a way for the admin to create limits for each 
project for the resources.
The feature is considered ready, but even the API (via oslo_limit) can still 
be changed as more and more projects adopt it and based on user feedback.

For how to use unified limits and adopt in a project a nice guideline is
under [3].

Currently Nova (see [4]) and Glance (see [5]) partly implemented the
usage of unified limits. It is still experimental.

Cinder checked this option but decided to wait till unified limits is
more mature (see [7])

Pros (as I see them):
* A common OpenStack-wide API for admins to define limits for projects.
* Long-term support for other enforcement models like hierarchies (as I see 
it, this is still not supported in oslo_limit, see [8]).

Cons (as I see them):
* Keystone as a bottleneck: for every operation an API request is needed 
(there is some caching in oslo_limit).
* How do we solve the concurrency issue? It is now not a db_lock, but we have 
to be sure that at the API level we handle concurrent resource usage.
* All resources must first be registered on the Keystone API, otherwise the 
quota check/enforcement will fail.
* It is not yet ready (see the big warning on top of [1]).


[1]: https://docs.openstack.org/keystone/latest/admin/unified-limits.html
[2]: https://docs.openstack.org/api-ref/identity/v3/#unified-limits
[3]: 
https://docs.openstack.org/project-team-guide/technical-guides/unified-limits.html
[4]: https://review.opendev.org/q/topic:bp%252Funified-limits-nova
[5]: https://review.opendev.org/q/topic:bp%252Fglance-unified-quotas
[6]: 
https://docs.openstack.org/keystone/latest/admin/unified-limits.html#strict-two-level
[7]: 
https://specs.openstack.org/openstack/cinder-specs/specs/zed/quota-system.html#unified-limits
[8]: 
https://opendev.org/openstack/oslo.limit/src/branch/master/oslo_limit/limit.py#L223-L240
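
For illustration, the enforcement pattern oslo.limit builds on can be sketched with a stdlib-only stand-in (the real Enforcer lives in oslo_limit and fetches registered limits from Keystone, with usage supplied by a callback; all names and numbers below are hypothetical):

```python
# Registered limits normally live in Keystone's unified limits API.
registered_limits = {"network": 10, "port": 50}

def usage_callback(project_id, resource_names):
    """Return current usage; in Neutron this would be a DB count."""
    current = {"network": 9, "port": 12}
    return {name: current[name] for name in resource_names}

def enforce(project_id, deltas):
    """Fail if the requested deltas would exceed the registered limits."""
    usage = usage_callback(project_id, deltas)
    over = [name for name, delta in deltas.items()
            if usage[name] + delta > registered_limits[name]]
    if over:
        raise RuntimeError(f"over limit: {over}")

enforce("proj-a", {"network": 1})  # 9 + 1 <= 10: allowed
```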

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1993288

Title:
  RFE: Adopt Keystone unified limits as quota driver for Neutron

Status in neutron:
  New

Bug description:
  Keystone has the ability to store and relay project-specific limits (see 
[1]). The API (see [2]) provides a way for the admin to create limits for each 
project for the resources.
  The feature is considered ready, but even the API (via oslo_limit) can still 
be changed as more and more projects adopt it and based on user feedback.

  For how to use unified limits and adopt in a project a nice guideline
  is under [3].

  Currently Nova (see [4]) and Glance (see [5]) partly implemented the
  usage of unified limits. It is still experimental.

  Cinder checked this option but decided to wait till unified limits is
  more mature (see [7])

  Pros (as I see them):
  * A common OpenStack-wide API for admins to define limits for projects.
  * Long-term support for other enforcement models like hierarchies (as I see 
it, this is still not supported in oslo_limit, see [8]).

  Cons (as I see them):
  * Keystone as a bottleneck: for every operation an API request is needed 
(there is some caching in oslo_limit).
  * How do we solve the concurrency issue? It is now not a db_lock, but we have 
to be sure that at the API level we handle concurrent resource usage.
  * All resources must first be registered on the Keystone API, otherwise the 
quota check/enforcement will fail.
  * It is not yet ready (see the big warning on top of [1]).

  
  [1]: https://docs.openstack.org/keystone/latest/admin/unified-limits.html
  [2]: https://docs.openstack.org/api-ref/identity/v3/#unified-limits
  [3]: 
https://docs.openstack.org/project-team-guide/technical-guides/unified-limits.html
  [4]: https://review.opendev.org/q/topic:bp%252Funified-limits-nova
  [5]: https://review.opendev.org/q/topic:bp%252Fglance-unified-quotas
  [6]: 
https://docs.openstack.org/keystone/latest/admin/unified-limits.html#strict-two-level
  [7]: 
https://specs.openstack.org/openstack/cinder-specs/specs/zed/quota-system.html#unified-limits
  [8]: 
https://opendev.org/openstack/oslo.limit/src/branch/master/oslo_limit/limit.py#L223-L240

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1993288/+subscriptions




[Yahoo-eng-team] [Bug 1989155] Re: Neutron 20.2 Unknown Chassis Issue

2022-09-15 Thread Lajos Katona
I changed it to Invalid; feel free to reopen it if you have more details
that point in that direction.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1989155

Title:
  Neutron 20.2 Unknown Chassis Issue

Status in neutron:
  Invalid

Bug description:
  Hi,

  Today I upgraded my OpenStack Neutron to 20.2 and OVN to 22.03 on
  Ubuntu 20.04. After the upgrade, I am getting the below logs in the
  northd logs.

  2022-09-08T23:08:15.791Z|00064|northd|WARN|Dropped 41 log messages in last 
206 seconds (most recently, 201 seconds ago) due to excessive rate
  2022-09-08T23:08:15.791Z|00065|northd|WARN|Unknown chassis '' set as 
options:requested-chassis on LSP '1332fae5-747f-46bd-bd2a-6c3dd47c5b82'.
  2022-09-08T23:10:40.927Z|00066|northd|WARN|Dropped 20 log messages in last 
145 seconds (most recently, 145 seconds ago) due to excessive rate
  2022-09-08T23:10:40.928Z|00067|northd|WARN|Unknown chassis '' set as 
options:requested-chassis on LSP '1332fae5-747f-46bd-bd2a-6c3dd47c5b82'.
  2022-09-08T23:13:12.395Z|00068|northd|WARN|Dropped 62 log messages in last 
151 seconds (most recently, 136 seconds ago) due to excessive rate
  2022-09-08T23:13:12.395Z|00069|northd|WARN|Unknown chassis '' set as 
options:requested-chassis on LSP '1332fae5-747f-46bd-bd2a-6c3dd47c5b82'.
  2022-09-08T23:15:23.025Z|00070|northd|WARN|Dropped 272 log messages in last 
131 seconds (most recently, 125 seconds ago) due to excessive rate
  2022-09-08T23:15:23.026Z|00071|northd|WARN|Unknown chassis '' set as 
options:requested-chassis on LSP '1332fae5-747f-46bd-bd2a-6c3dd47c5b82'.
  2022-09-08T23:24:46.723Z|00072|northd|WARN|Dropped 83 log messages in last 
564 seconds (most recently, 564 seconds ago) due to excessive rate
  2022-09-08T23:24:46.723Z|00073|northd|WARN|Unknown chassis '' set as 
options:requested-chassis on LSP '1332fae5-747f-46bd-bd2a-6c3dd47c5b82'.

  Digging further, these LSPs belong to an unbound port, i.e. the address
  x.x.x.2 reserved for that port.

  I have checked this port in OVN and it is not showing the chassis
  binding. I think that is why it is generating those logs.

  
+-++
  | Field   | Value 
 |
  
+-++
  | admin_state_up  | UP
 |
  | allowed_address_pairs   |   
 |
  | binding_host_id |   
 |
  | binding_profile |   
 |
  | binding_vif_details |   
 |
  | binding_vif_type| unbound   
 |
  | binding_vnic_type   | normal
 |
  | created_at  | 2022-09-08T22:13:44Z  
 |
  | data_plane_status   | None  
 |
  | description |   
 |
  | device_id   | ovnmeta-e08fb6c2-4a69-4eab-8a04-4be0906d7a69  
 |
  | device_owner| network:distributed   
 |
  | device_profile  | None  
 |
  | dns_assignment  | None  
 |
  | dns_domain  | None  
 |
  | dns_name| None  
 |
  | extra_dhcp_opts |   
 |
  | fixed_ips   | ip_address='172.16.10.2', 
subnet_id='2f0e63d0-4e91-4353-82e9-f075e443' |
  | id  | d0990b61-8bf8-4ad6-a0a8-a3ccb406ed5f  
 |
  | ip_allocation   | immediate 
 |
  | mac_address | fa:16:3e:bc:41:12 
 |
  | name|   
 |
  | network_id  | 

[Yahoo-eng-team] [Bug 1982206] [NEW] stable: Neutron unit tests timeout on stable/ussuri and stable/victoria (perhaps wallaby also)

2022-07-19 Thread Lajos Katona
Public bug reported:

On stable branches the unit tests fail with a timeout on the Ussuri and
Victoria branches (on Wallaby there is also a timeout, but it has a
different log, so it is perhaps just a coincidence; see [3]).

The periodic jobs show the issue, and so does my test patch (see [1] and
[2])


The timeout happens with this pattern, always after the test 
test_two_member_trailing_chain:
2022-07-19 02:48:57.591352 | ubuntu-focal | {1} 
neutron.tests.unit.test_wsgi.TestWSGIServer.test_disable_ssl [0.039549s] ... ok
2022-07-19 02:48:57.613094 | ubuntu-focal | {1} 
neutron.tests.unit.tests.test_post_mortem_debug.TestGetIgnoredTraceback.test_two_member_trailing_chain
 [0.021062s] ... ok
2022-07-19 03:05:57.910628 | RUN END RESULT_TIMED_OUT: [untrusted : 
opendev.org/zuul/zuul-jobs/playbooks/tox/run.yaml@master]
2022-07-19 03:05:57.926082 | POST-RUN START: [untrusted : 
opendev.org/zuul/zuul-jobs/playbooks/tox/post.yaml@master]


[1]: 
https://lists.openstack.org/pipermail/openstack-stable-maint/2022-July/094216.html
[2]: 
https://72340191a2149eb73d49-2d27e805ff456f7d0a585c8eee091a7d.ssl.cf2.rackcdn.com/850391/1/check/openstack-tox-py37/9ab5c3a/job-output.txt
[3]: 
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_f2a/periodic-stable/opendev.org/openstack/neutron/stable/wallaby/openstack-tox-py38/f2a3976/job-output.txt

** Affects: neutron
 Importance: Critical
 Status: New


** Tags: gate-failure

** Changed in: neutron
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1982206

Title:
  stable: Neutron unit tests timeout on stable/ussuri and
  stable/victoria (perhaps wallaby also)

Status in neutron:
  New

Bug description:
  On stable branches the unit tests fail with a timeout on the Ussuri and
  Victoria branches (on Wallaby there is also a timeout, but it has a
  different log, so it is perhaps just a coincidence; see [3]).

  The periodic jobs show the issue, and so does my test patch (see [1]
  and [2])

  
  The timeout happens with this pattern, always after the test 
test_two_member_trailing_chain:
  2022-07-19 02:48:57.591352 | ubuntu-focal | {1} 
neutron.tests.unit.test_wsgi.TestWSGIServer.test_disable_ssl [0.039549s] ... ok
  2022-07-19 02:48:57.613094 | ubuntu-focal | {1} 
neutron.tests.unit.tests.test_post_mortem_debug.TestGetIgnoredTraceback.test_two_member_trailing_chain
 [0.021062s] ... ok
  2022-07-19 03:05:57.910628 | RUN END RESULT_TIMED_OUT: [untrusted : 
opendev.org/zuul/zuul-jobs/playbooks/tox/run.yaml@master]
  2022-07-19 03:05:57.926082 | POST-RUN START: [untrusted : 
opendev.org/zuul/zuul-jobs/playbooks/tox/post.yaml@master]

  
  [1]: 
https://lists.openstack.org/pipermail/openstack-stable-maint/2022-July/094216.html
  [2]: 
https://72340191a2149eb73d49-2d27e805ff456f7d0a585c8eee091a7d.ssl.cf2.rackcdn.com/850391/1/check/openstack-tox-py37/9ab5c3a/job-output.txt
  [3]: 
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_f2a/periodic-stable/opendev.org/openstack/neutron/stable/wallaby/openstack-tox-py38/f2a3976/job-output.txt

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1982206/+subscriptions




[Yahoo-eng-team] [Bug 1762414] Re: Unable to create a FIP when L2GW is enabled

2022-07-08 Thread Lajos Katona
As l2gw was removed from under the stadium umbrella and moved under the
x/ namespace, we can close this as Won't Fix

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1762414

Title:
  Unable to create a FIP when L2GW is enabled

Status in neutron:
  Won't Fix

Bug description:
File "/usr/share/openstack-dashboard/horizon/tables/base.py", line 1389, in 
_filter_action
  [Thu Mar 29 09:11:07.992793 2018] [wsgi:error] [pid 22987:tid 
140314318284544] return action._allowed(request, datum) and row_matched
  [Thu Mar 29 09:11:07.992800 2018] [wsgi:error] [pid 22987:tid 
140314318284544] File 
"/usr/share/openstack-dashboard/horizon/tables/actions.py", line 140, in 
_allowed
  [Thu Mar 29 09:11:07.992806 2018] [wsgi:error] [pid 22987:tid 
140314318284544] return self.allowed(request, datum)
  [Thu Mar 29 09:11:07.992813 2018] [wsgi:error] [pid 22987:tid 
140314318284544] File 
"/usr/share/openstack-dashboard/openstack_dashboard/dashboards/project/floating_ips/tables.py",
 line 51, in allowed
  [Thu Mar 29 09:11:07.992820 2018] [wsgi:error] [pid 22987:tid 
140314318284544] targets=('floatingip', ))
  [Thu Mar 29 09:11:07.992826 2018] [wsgi:error] [pid 22987:tid 
140314318284544] File 
"/usr/share/openstack-dashboard/horizon/utils/memoized.py", line 95, in wrapped
  [Thu Mar 29 09:11:07.992833 2018] [wsgi:error] [pid 22987:tid 
140314318284544] value = cache[key] = func(*args, **kwargs)
  [Thu Mar 29 09:11:07.992840 2018] [wsgi:error] [pid 22987:tid 
140314318284544] File 
"/usr/share/openstack-dashboard/openstack_dashboard/usage/quotas.py", line 419, 
in tenant_quota_usages
  [Thu Mar 29 09:11:07.992846 2018] [wsgi:error] [pid 22987:tid 
140314318284544] _get_tenant_network_usages(request, usages, disabled_quotas, 
tenant_id)
  [Thu Mar 29 09:11:07.992853 2018] [wsgi:error] [pid 22987:tid 
140314318284544] File 
"/usr/share/openstack-dashboard/openstack_dashboard/usage/quotas.py", line 320, 
in _get_tenant_network_usages
  [Thu Mar 29 09:11:07.992866 2018] [wsgi:error] [pid 22987:tid 
140314318284544] details = neutron.tenant_quota_detail_get(request, tenant_id)
  [Thu Mar 29 09:11:07.992873 2018] [wsgi:error] [pid 22987:tid 
140314318284544] File 
"/usr/share/openstack-dashboard/openstack_dashboard/api/neutron.py", line 1477, 
in tenant_quota_detail_get
  [Thu Mar 29 09:11:07.992880 2018] [wsgi:error] [pid 22987:tid 
140314318284544] response = neutronclient(request).get('/quotas/%s/details' % 
tenant_id)
  [Thu Mar 29 09:11:07.992886 2018] [wsgi:error] [pid 22987:tid 
140314318284544] File 
"/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 354, in 
get
  [Thu Mar 29 09:11:07.992893 2018] [wsgi:error] [pid 22987:tid 
140314318284544] headers=headers, params=params)
  [Thu Mar 29 09:11:07.992899 2018] [wsgi:error] [pid 22987:tid 
140314318284544] File 
"/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 331, in 
retry_request
  [Thu Mar 29 09:11:07.992906 2018] [wsgi:error] [pid 22987:tid 
140314318284544] headers=headers, params=params)
  [Thu Mar 29 09:11:07.992912 2018] [wsgi:error] [pid 22987:tid 
140314318284544] File 
"/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 294, in 
do_request
  [Thu Mar 29 09:11:07.992918 2018] [wsgi:error] [pid 22987:tid 
140314318284544] self._handle_fault_response(status_code, replybody, resp)
  [Thu Mar 29 09:11:07.992925 2018] [wsgi:error] [pid 22987:tid 
140314318284544] File 
"/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 269, in 
_handle_fault_response
  [Thu Mar 29 09:11:07.992932 2018] [wsgi:error] [pid 22987:tid 
140314318284544] exception_handler_v20(status_code, error_body)
  [Thu Mar 29 09:11:07.992938 2018] [wsgi:error] [pid 22987:tid 
140314318284544] File 
"/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 93, in 
exception_handler_v20
  [Thu Mar 29 09:11:07.992944 2018] [wsgi:error] [pid 22987:tid 
140314318284544] request_ids=request_ids)
  [Thu Mar 29 09:11:07.992951 2018] [wsgi:error] [pid 22987:tid 
140314318284544] Forbidden: User does not have admin privileges: Cannot GET 
resource for non admin tenant.

  This is due to:
  From Queens onwards we have an issue with Horizon and L2GW. We are unable to 
create a floating IP. This does not occur when using the CLI, only via Horizon. 
The error received is
  ‘Error: User does not have admin privileges: Cannot GET resource for non 
admin tenant. Neutron server returns request_ids: 
['req-f07a3aac-0994-4d3a-8409-1e55b374af9d']’
  This is due to: 
https://github.com/openstack/networking-l2gw/blob/master/networking_l2gw/db/l2gateway/l2gateway_db.py#L316

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1762414/+subscriptions



[Yahoo-eng-team] [Bug 1973487] Re: [RFE] Allow setting --dst-port for all port based protocols at once

2022-05-27 Thread Lajos Katona
We discussed the proposal today on the drivers meeting, see the logs:
https://meetings.opendev.org/meetings/neutron_drivers/2022/neutron_drivers.2022-05-27-14.00.log.html#l-14

The decision was to keep this functionality on the client side, as there
can be complications if it is implemented in Neutron; e.g. iptables adds
such rules one by one anyway.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1973487

Title:
  [RFE] Allow setting --dst-port for all port based protocols at once

Status in neutron:
  Won't Fix

Bug description:
  Currently, creating a security rule [0] with an --dst-port argument
  requires specifying a protocol which supports ports [1]. If a user
  wants to set a security rule for another protocol in this group, the
  same command has to be issued again. This RFE is a simple "ask":
  would it be worth adding a new --protocol argument which would
  apply to all L4 protocols at once? For example, a CLI command could
  look something like this:

  openstack security group rule create --ingress --dst-port 53:53
  --protocol all_L4_protocols 

  As a side note, specifying "--protocol any" does not work, but that is
  expected.

  The only benefit of this RFE would be to reduce the number of commands
  needed to open up ports across different L4 protocols.

  
  [0] 
https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/security-group-rule.html#security-group-rule-create
  [1] 
https://github.com/openstack/neutron/blob/master/neutron/common/_constants.py#L23-L29
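  Given the decision to keep this on the client side, a client could
  expand such a pseudo-protocol into one rule-create call per port-based
  protocol. A minimal sketch follows; the protocol tuple is an assumption
  here (see the constant in [1] for the authoritative list Neutron uses),
  and the rule dicts are illustrative rather than the exact API payload:

```python
# Client-side expansion sketch: turn one "all port-based protocols"
# request into one security-group rule per protocol, keeping the logic
# out of Neutron itself as the drivers meeting decided.

# Port-based L4 protocols (an assumption; see the constant in [1] for
# the list Neutron actually recognises as port-based).
PORT_PROTOCOLS = ("tcp", "udp", "sctp", "dccp", "udplite")

def expand_rule(direction, port_range, protocols=PORT_PROTOCOLS):
    """Return one illustrative rule dict per port-based protocol."""
    port_min, port_max = port_range
    return [
        {
            "direction": direction,
            "protocol": proto,
            "port_range_min": port_min,
            "port_range_max": port_max,
        }
        for proto in protocols
    ]

rules = expand_rule("ingress", (53, 53))
for rule in rules:
    # A real client would POST each rule to the Neutron API here.
    print(rule["protocol"], rule["port_range_min"], rule["port_range_max"])
```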

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1973487/+subscriptions




[Yahoo-eng-team] [Bug 1964940] Re: Compute tests are failing with failed to reach ACTIVE status and task state "None" within the required time.

2022-05-27 Thread Lajos Katona
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => yatin (yatinkarel)

** Changed in: neutron
   Status: New => In Progress

** Changed in: neutron
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1964940

Title:
  Compute tests are failing with failed to reach ACTIVE status and task
  state "None" within the required time.

Status in neutron:
  In Progress
Status in tripleo:
  In Progress

Bug description:
  On Fs001 CentOS Stream 9 wallaby, multiple compute server tempest tests are 
failing with the following error [1][2]:
  ```
  {1} 
tempest.api.compute.images.test_images.ImagesTestJSON.test_create_image_from_paused_server
 [335.060967s] ... FAILED

  Captured traceback:
  ~~~
  Traceback (most recent call last):
    File 
"/usr/lib/python3.9/site-packages/tempest/api/compute/images/test_images.py", 
line 99, in test_create_image_from_paused_server
  server = self.create_test_server(wait_until='ACTIVE')
    File "/usr/lib/python3.9/site-packages/tempest/api/compute/base.py", 
line 270, in create_test_server
  body, servers = compute.create_test_server(
    File "/usr/lib/python3.9/site-packages/tempest/common/compute.py", line 
267, in create_test_server
  LOG.exception('Server %s failed to delete in time',
    File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 
227, in __exit__
  self.force_reraise()
    File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 
200, in force_reraise
  raise self.value
    File "/usr/lib/python3.9/site-packages/tempest/common/compute.py", line 
237, in create_test_server
  waiters.wait_for_server_status(
    File "/usr/lib/python3.9/site-packages/tempest/common/waiters.py", line 
100, in wait_for_server_status
  raise lib_exc.TimeoutException(message)
  tempest.lib.exceptions.TimeoutException: Request timed out
  Details: (ImagesTestJSON:test_create_image_from_paused_server) Server 
6d1d8906-46fd-42ad-8b4e-0f89adb25ed1 failed to reach ACTIVE status and task 
state "None" within the required time (300 s). Server boot request ID: 
req-4930f047-7f5f-4d08-9ebb-8ac99b29ad7b. Current status: BUILD. Current task 
state: spawning.
  ```

  Below is the list of other tempest tests failing on the same job.[2]
  ```
  
tempest.api.compute.images.test_images.ImagesTestJSON.test_create_image_from_paused_server[id-71bcb732-0261-11e7-9086-fa163e4fa634]
  
tempest.api.compute.admin.test_volume.AttachSCSIVolumeTestJSON.test_attach_scsi_disk_with_config_drive[id-777e468f-17ca-4da4-b93d-b7dbf56c0494]
  
tempest.api.compute.servers.test_delete_server.DeleteServersTestJSON.test_delete_server_while_in_attached_volume[id-d0f3f0d6-d9b6-4a32-8da4-23015dcab23c,volume]
  
tempest.api.compute.servers.test_attach_interfaces.AttachInterfacesV270Test.test_create_get_list_interfaces[id-2853f095-8277-4067-92bd-9f10bd4f8e0c,network]
  
tempest.api.compute.servers.test_delete_server.DeleteServersTestJSON.test_delete_server_while_in_shelved_state[id-bb0cb402-09dd-4947-b6e5-5e7e1cfa61ad]
  setUpClass 
(tempest.api.compute.images.test_images_oneserver_negative.ImagesOneServerNegativeTestJSON)
  
tempest.api.compute.servers.test_device_tagging.TaggedBootDevicesTest_v242.test_tagged_boot_devices[id-a2e65a6c-66f1-4442-aaa8-498c31778d96,image,network,slow,volume]
  
tempest.api.compute.servers.test_delete_server.DeleteServersTestJSON.test_delete_server_while_in_suspended_state[id-1f82ebd3-8253-4f4e-b93f-de9b7df56d8b]
  
tempest.api.compute.servers.test_attach_interfaces.AttachInterfacesTestJSON.test_create_list_show_delete_interfaces_by_network_port[id-73fe8f02-590d-4bf1-b184-e9ca81065051,network]
  setUpClass 
(tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSONUnderV235)
  ```

  Here is the traceback from nova-compute logs [3],
  ```
  2022-03-15 09:05:39.011 2 ERROR nova.compute.manager 
[req-4930f047-7f5f-4d08-9ebb-8ac99b29ad7b d5ea6c724785473b8ea1104d70fb0d14 
64c7d31d84284a28bc9aaa4eaad2b9fb - default default] [instance: 
6d1d8906-46fd-42ad-8b4e-0f89adb25ed1] Instance failed to spawn: 
nova.exception.VirtualInterfaceCreateException: Virtual Interface creation 
failed
  2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 
6d1d8906-46fd-42ad-8b4e-0f89adb25ed1] Traceback (most recent call last):
  2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 
6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]   File 
"/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 7231, in 
_create_guest_with_network
  2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 
6d1d8906-46fd-42ad-8b4e-0f89adb25ed1] guest = self._create_guest(
  2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 

[Yahoo-eng-team] [Bug 1975828] [NEW] difference in execution time between admin/non-admin call

2022-05-26 Thread Lajos Katona
Public bug reported:

Part of https://bugs.launchpad.net/neutron/+bug/1973349 :
Another interesting thing is the difference in execution time between an
admin and a non-admin call:
(openstack) dmitriy@6BT6XT2:~$ . Documents/openrc/admin.rc
(openstack) dmitriy@6BT6XT2:~$ time openstack port list --project  | 
wc -l
2142

real 0m5,401s
user 0m1,565s
sys 0m0,086s
(openstack) dmitriy@6BT6XT2:~$ . Documents/openrc/.rc
(openstack) dmitriy@6BT6XT2:~$ time openstack port list | wc -l
2142

real 2m38,101s
user 0m1,626s
sys 0m0,083s
(openstack) dmitriy@6BT6XT2:~$
(openstack) dmitriy@6BT6XT2:~$ time openstack port list --project  | 
wc -l
2142

real 1m17,029s
user 0m1,541s
sys 0m0,085s
(openstack) dmitriy@6BT6XT2:~$

So basically, if you provide the tenant_id to the query, it executes
twice as fast. But it won't look through networks owned by the tenant
(which would partly explain the difference in speed).

Environment:
Neutron SHA: 97180b01837638bd0476c28bdda2340eccd649af
Backend: ovs
OS: Ubuntu 20.04
Mariadb: 10.6.5
SQLalchemy: 1.4.23
Backend: openvswitch
Plugins: router vpnaas metering 
neutron_dynamic_routing.services.bgp.bgp_plugin.BgpPlugin

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1975828

Title:
  difference in execution time between admin/non-admin call

Status in neutron:
  New

Bug description:
  Part of https://bugs.launchpad.net/neutron/+bug/1973349 :
  Another interesting thing is the difference in execution time between an
  admin and a non-admin call:
  (openstack) dmitriy@6BT6XT2:~$ . Documents/openrc/admin.rc
  (openstack) dmitriy@6BT6XT2:~$ time openstack port list --project  | 
wc -l
  2142

  real 0m5,401s
  user 0m1,565s
  sys 0m0,086s
  (openstack) dmitriy@6BT6XT2:~$ . Documents/openrc/.rc
  (openstack) dmitriy@6BT6XT2:~$ time openstack port list | wc -l
  2142

  real 2m38,101s
  user 0m1,626s
  sys 0m0,083s
  (openstack) dmitriy@6BT6XT2:~$
  (openstack) dmitriy@6BT6XT2:~$ time openstack port list --project  | 
wc -l
  2142

  real 1m17,029s
  user 0m1,541s
  sys 0m0,085s
  (openstack) dmitriy@6BT6XT2:~$

  So basically, if you provide the tenant_id to the query, it executes
  twice as fast. But it won't look through networks owned by the tenant
  (which would partly explain the difference in speed).

  Environment:
  Neutron SHA: 97180b01837638bd0476c28bdda2340eccd649af
  Backend: ovs
  OS: Ubuntu 20.04
  Mariadb: 10.6.5
  SQLalchemy: 1.4.23
  Backend: openvswitch
  Plugins: router vpnaas metering 
neutron_dynamic_routing.services.bgp.bgp_plugin.BgpPlugin

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1975828/+subscriptions




[Yahoo-eng-team] [Bug 1973049] Re: Skip DB retry when update on "standardattributes" fails

2022-05-16 Thread Lajos Katona
We discussed this during the last drivers meeting (see [1]) and we agreed
to keep the current behaviour with DB retries, as the above proposal
could introduce more problems.


[1]: 
https://meetings.opendev.org/meetings/neutron_drivers/2022/neutron_drivers.2022-05-13-14.01.log.html#l-37

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1973049

Title:
  Skip DB retry when update on "standardattributes" fails

Status in neutron:
  Won't Fix

Bug description:
  This is a recurrent problem in the Neutron server when updating a
  resource that has "standardattributes". If a concurrent update is
  done, the DB (SQLAlchemy) will return a "StaleDataError" exception.
  E.g.: [1]

  """
  UPDATE statement on table 'standardattributes' expected to update 1 row(s); 0 
were matched.
  """

  In this case, the whole transaction is retried. We should avoid
  retrying any DB operation if this error happens. This retry decorator
  is affecting some operations with [2] in place, as reported in [3].

  This is very frequent when updating the port status
  (``neutron.plugins.ml2.plugin.Ml2Plugin.update_port_statuses``) or the
  FIP status
  (``neutron.db.l3_db.L3_NAT_dbonly_mixin.update_floatingip_status``).
  Check [4]. This is a CI execution using [2], now released in neutron-
  lib 2.21.0. Those methods are concurrently called from the agents
  (ML2, L3) to set the port/FIP status (UP/DOWN).

  This bug proposes to remove this check when updating the
  "standardattributes" table. If the resource "standardattributes" is
  not updated correctly, don't raise a
  ``sqlalchemy.orm.exc.StaleDataError`` exception.

  NOTE: check the ``StandardAttribute.__mapper_args__`` parameters,
  probably deleting "version_id_col".

  [1]https://paste.opendev.org/show/b6xIzuXLgswCpuQeEr6i/
  [2]https://review.opendev.org/c/openstack/neutron-lib/+/828738
  [3]https://review.opendev.org/c/openstack/neutron/+/841246
  
[4]https://31025e2d1118fe413f77-2d2bdd86d83b89e6c319788cb06ef691.ssl.cf1.rackcdn.com/841396/1/check/neutron-tempest-plugin-scenario-openvswitch/5b532c4/controller/logs/screen-q-svc.txt
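  The optimistic-locking behaviour at the heart of this report can be
  illustrated without SQLAlchemy: a row carries a version counter (the
  role of "version_id_col" in ``StandardAttribute.__mapper_args__``), an
  UPDATE only matches rows whose version is unchanged, and the caller
  retries the whole operation on a mismatch, which is the behaviour the
  bug proposed to drop and Neutron decided to keep. All names below are
  illustrative, stdlib-only stand-ins:

```python
import threading

class StaleDataError(Exception):
    """Update matched 0 rows: another writer bumped the version first."""

class Row:
    # Stand-in for a standardattributes row with a version_id_col.
    def __init__(self):
        self.version = 0
        self.status = "DOWN"
        self._lock = threading.Lock()

    def update(self, expected_version, status):
        # "UPDATE ... WHERE version = :expected_version" semantics: if the
        # version has moved on, 0 rows match, which SQLAlchemy surfaces as
        # a StaleDataError ("expected to update 1 row(s); 0 were matched").
        with self._lock:
            if self.version != expected_version:
                raise StaleDataError(
                    "expected to update 1 row(s); 0 were matched")
            self.version += 1
            self.status = status

def set_status_with_retry(row, status, max_retries=3):
    # The behaviour Neutron keeps: re-read the version and retry the
    # whole operation when a concurrent writer won the race.
    for _ in range(max_retries):
        seen = row.version
        try:
            row.update(seen, status)
            return True
        except StaleDataError:
            continue
    return False

row = Row()
row.update(0, "BUILD")            # a concurrent writer gets in first
assert set_status_with_retry(row, "ACTIVE")
print(row.status, row.version)    # ACTIVE 2
```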

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1973049/+subscriptions




[Yahoo-eng-team] [Bug 1972854] Re: [neutron-dynamic-routing] Train CI is broken

2022-05-16 Thread Lajos Katona
** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1972854

Title:
  [neutron-dynamic-routing] Train CI is broken

Status in neutron:
  Won't Fix

Bug description:
  "neutron-dynamic-routing-dsvm-tempest*" jobs are not working in
  stable/train. During the module installation, the "PyNaCl" library
  fails to install.

  Example patch: 
https://review.opendev.org/c/openstack/neutron-dynamic-routing/+/841270
  Example log: 
https://zuul.opendev.org/t/openstack/build/1e7cdaf9bd53422d8638f7c1d67d0ced/logs
  Snippet: https://paste.opendev.org/show/bTyt47UYVVle5W8R4HEZ/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1972854/+subscriptions




[Yahoo-eng-team] [Bug 1973035] Re: FWaaS rules lost on l3 agent restart

2022-05-16 Thread Lajos Katona
I set it to Invalid since, as I understand it, the issue was in your
config; please reopen it if you see more issues.

** Tags added: fwaas

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1973035

Title:
  FWaaS rules lost on l3 agent restart

Status in neutron:
  Invalid

Bug description:
  Iptables rules are lost in the router namespace on restart of the L3 agent.

  
  Rules before restarting the L3 agent:
  ip netns exec qrouter-b764e745-adfe-4f31-b0f7-dc68e4468b37 iptables -S
  -P INPUT ACCEPT
  -P FORWARD ACCEPT
  -P OUTPUT ACCEPT
  -N neutron-filter-top
  -N neutron-l3-agent-FORWARD
  -N neutron-l3-agent-INPUT
  -N neutron-l3-agent-OUTPUT
  -N neutron-l3-agent-accepted
  -N neutron-l3-agent-dropped
  -N neutron-l3-agent-fwaas-defau
  -N neutron-l3-agent-iv4d0588aa2
  -N neutron-l3-agent-local
  -N neutron-l3-agent-ov4d0588aa2
  -N neutron-l3-agent-rejected
  -N neutron-l3-agent-scope
  -A INPUT -j neutron-l3-agent-INPUT
  -A FORWARD -j neutron-filter-top
  -A FORWARD -j neutron-l3-agent-FORWARD
  -A OUTPUT -j neutron-filter-top
  -A OUTPUT -j neutron-l3-agent-OUTPUT
  -A neutron-filter-top -j neutron-l3-agent-local
  -A neutron-l3-agent-FORWARD -j neutron-l3-agent-scope
  -A neutron-l3-agent-FORWARD -o qr-e3cb6269-3b -j neutron-l3-agent-iv4d0588aa2
  -A neutron-l3-agent-FORWARD -i qr-e3cb6269-3b -j neutron-l3-agent-ov4d0588aa2
  -A neutron-l3-agent-FORWARD -o qr-e3cb6269-3b -j neutron-l3-agent-fwaas-defau
  -A neutron-l3-agent-FORWARD -i qr-e3cb6269-3b -j neutron-l3-agent-fwaas-defau
  -A neutron-l3-agent-INPUT -m mark --mark 0x1/0x -j ACCEPT
  -A neutron-l3-agent-INPUT -p tcp -m tcp --dport 9697 -j DROP
  -A neutron-l3-agent-accepted -j ACCEPT
  -A neutron-l3-agent-dropped -j DROP
  -A neutron-l3-agent-fwaas-defau -j neutron-l3-agent-dropped
  -A neutron-l3-agent-iv4d0588aa2 -m state --state INVALID -j 
neutron-l3-agent-dropped
  -A neutron-l3-agent-iv4d0588aa2 -m state --state RELATED,ESTABLISHED -j ACCEPT
  -A neutron-l3-agent-iv4d0588aa2 -p tcp -m tcp --dport 22 -j 
neutron-l3-agent-accepted
  -A neutron-l3-agent-ov4d0588aa2 -m state --state INVALID -j 
neutron-l3-agent-dropped
  -A neutron-l3-agent-ov4d0588aa2 -m state --state RELATED,ESTABLISHED -j ACCEPT
  -A neutron-l3-agent-ov4d0588aa2 -p icmp -j neutron-l3-agent-accepted
  -A neutron-l3-agent-ov4d0588aa2 -d 10.40.95.125/32 -p tcp -m tcp --dport 53 
-j neutron-l3-agent-accepted
  -A neutron-l3-agent-ov4d0588aa2 -d 10.40.95.125/32 -p udp -m udp --dport 53 
-j neutron-l3-agent-accepted
  -A neutron-l3-agent-ov4d0588aa2 -d 10.0.0.0/8 -j neutron-l3-agent-dropped
  -A neutron-l3-agent-ov4d0588aa2 -d 172.16.0.0/12 -j neutron-l3-agent-dropped
  -A neutron-l3-agent-ov4d0588aa2 -d 192.168.0.0/16 -j neutron-l3-agent-dropped
  -A neutron-l3-agent-rejected -j REJECT --reject-with icmp-port-unreachable
  -A neutron-l3-agent-scope -o qr-e3cb6269-3b -m mark ! --mark 
0x400/0x -j DROP

  
  Rules after restart.

  ip netns exec qrouter-b764e745-adfe-4f31-b0f7-dc68e4468b37 iptables -S
  -P INPUT ACCEPT
  -P FORWARD ACCEPT
  -P OUTPUT ACCEPT
  -N neutron-filter-top
  -N neutron-l3-agent-FORWARD
  -N neutron-l3-agent-INPUT
  -N neutron-l3-agent-OUTPUT
  -N neutron-l3-agent-local
  -N neutron-l3-agent-scope
  -A INPUT -j neutron-l3-agent-INPUT
  -A FORWARD -j neutron-filter-top
  -A FORWARD -j neutron-l3-agent-FORWARD
  -A OUTPUT -j neutron-filter-top
  -A OUTPUT -j neutron-l3-agent-OUTPUT
  -A neutron-filter-top -j neutron-l3-agent-local
  -A neutron-l3-agent-FORWARD -j neutron-l3-agent-scope
  -A neutron-l3-agent-INPUT -m mark --mark 0x1/0x -j ACCEPT
  -A neutron-l3-agent-INPUT -p tcp -m tcp --dport 9697 -j DROP
  -A neutron-l3-agent-scope -o qr-e3cb6269-3b -m mark ! --mark 
0x400/0x -j DROP


  Name: neutron-fwaas
  Version: 16.0.1.dev3
  Summary: OpenStack Networking FWaaS
  Home-page: https://docs.openstack.org/neutron-fwaas/latest/
  Author: OpenStack
  Author-email: openstack-disc...@lists.openstack.org
  License: UNKNOWN
  Location: /openstack/venvs/neutron-21.2.9/lib/python3.8/site-packages
  Requires: neutron-lib, neutron, eventlet, oslo.config, pyroute2, os-ken, 
netaddr, six, oslo.db, oslo.log, oslo.utils, oslo.privsep, pyzmq, pbr, alembic, 
SQLAlchemy, oslo.messaging, oslo.service
  Required-by:

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1973035/+subscriptions




[Yahoo-eng-team] [Bug 1719806] Re: IPv4 subnets added when VM is already up on an IPv6 subnet on the same network, does not enable VM ports to get IPv4 address

2022-05-09 Thread Lajos Katona
** Changed in: neutron
 Assignee: (unassigned) => Lajos Katona (lajos-katona)

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1719806

Title:
  IPv4 subnets added when VM is already up on an IPv6 subnet on the same
  network, does not enable VM ports to get IPv4 address

Status in neutron:
  Invalid

Bug description:
  On both stable/pike and stable/ocata, we performed the following
  steps:

  1. Create a network
  2. Create an IPv6 subnet in SLAAC Mode (both RA mode and Address mode)
  3. Create a router
  4. Attach the IPv6 subnet to the router
  5. Now boot VMs with the network-id.
  6. Make sure VMs are up and able to communicate via their Global and 
Link-Local IPv6 addresses.
  7. Create an IPv4 subnet on the same network.

  After step 5, you will notice that the booted VM neutron ports fixed-
  ips are not updated with IPv4 subnets automatically.

  The user has to manually update the VM Neutron ports via the
  port-update command with the IPv4 subnet-id, then go back to the VM
  and recycle eth0; only then will the VMs get the IPv4 address.

  The DHCP Neutron port alone got updated automatically with the IPv4
  address in addition to IPv6 address with the above steps.

  Any new VMs spawned after both IPv4 and IPv6 subnets are available on
  the network are able to get both addresses, and their Neutron ports
  in the control plane also reflect the same.

  BTW, if the above steps are followed with the order swapped (create
  the IPv4 subnet first, then boot VMs, then create an IPv6 subnet on
  the same network), the VM Neutron ports' fixed-ips get updated
  automatically with the newly assigned IPv6 Global addresses on the
  IPv6 subnet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1719806/+subscriptions




[Yahoo-eng-team] [Bug 1719806] Re: IPv4 subnets added when VM is already up on an IPv6 subnet on the same network, does not enable VM ports to get IPv4 address

2022-04-29 Thread Lajos Katona
** Changed in: neutron
   Status: Expired => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1719806

Title:
  IPv4 subnets added when VM is already up on an IPv6 subnet on the same
  network, does not enable VM ports to get IPv4 address

Status in neutron:
  New

Bug description:
  On both stable/pike and stable/ocata, we performed the following
  steps:

  1. Create a network
  2. Create an IPv6 subnet in SLAAC Mode (both RA mode and Address mode)
  3. Create a router
  4. Attach the IPv6 subnet to the router
  5. Now boot VMs with the network-id.
  6. Make sure VMs are up and able to communicate via their Global and 
Link-Local IPv6 addresses.
  7. Create an IPv4 subnet on the same network.

  After step 5, you will notice that the booted VM neutron ports fixed-
  ips are not updated with IPv4 subnets automatically.

  The user has to manually update the VM Neutron ports via the
  port-update command with the IPv4 subnet-id, then go back to the VM
  and recycle eth0; only then will the VMs get the IPv4 address.

  The DHCP Neutron port alone got updated automatically with the IPv4
  address in addition to IPv6 address with the above steps.

  Any new VMs spawned after both IPv4 and IPv6 subnets are available on
  the network are able to get both addresses, and their Neutron ports
  in the control plane also reflect the same.

  BTW, if the above steps are followed with the order swapped (create
  the IPv4 subnet first, then boot VMs, then create an IPv6 subnet on
  the same network), the VM Neutron ports' fixed-ips get updated
  automatically with the newly assigned IPv6 Global addresses on the
  IPv6 subnet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1719806/+subscriptions




[Yahoo-eng-team] [Bug 1967893] Re: [stable/yoga] tempest.scenario.test_network_qos_placement.MinBwAllocationPlacementTest fails in neutron-ovs-tempest-multinode-full job

2022-04-11 Thread Lajos Katona
The port-resource-request-groups extension was missing from the devstack extension list; fix:
https://review.opendev.org/c/openstack/devstack/+/836671

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1967893

Title:
  [stable/yoga]
  tempest.scenario.test_network_qos_placement.MinBwAllocationPlacementTest
  fails in neutron-ovs-tempest-multinode-full job

Status in neutron:
  Fix Released

Bug description:
  tempest.scenario.test_network_qos_placement.MinBwAllocationPlacementTest.*
  test fail in a reproducible way in neutron-ovs-tempest-multinode-full
  job (only for yoga branch).

  Sample log failure:
  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_0ea/835863/1/check/neutron-ovs-tempest-multinode-full/0ea66ae/testr_results.html
  from:
  https://review.opendev.org/c/openstack/neutron/+/835863/

  From Lajos' review, the port-resource-request-groups extension is
  loaded, but it is missing from the api_extensions list.

  These tests in this job worked in the first days after yoga branching,
  but have been failing since around 2022-03-31:
  
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-ovs-tempest-multinode-full=openstack%2Fneutron=stable%2Fyoga

  At first glance I did not see any potential culprit in recent neutron
  backports, or tempest/neutron-tempest-plugin merged changes
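A minimal sketch (not tempest's actual code) of the check that decides whether extension-dependent tests may run: the extension alias must appear in the configured api_extensions list (or the list must be "all"), which is exactly what the missing devstack entry broke.

```python
def is_extension_enabled(extension, api_extensions):
    # Tests relying on an API extension are skipped (or misbehave) when
    # its alias is absent from the configured list.
    return "all" in api_extensions or extension in api_extensions

# Before the devstack fix, the alias was missing from the list:
assert not is_extension_enabled(
    "port-resource-request-groups", ["qos", "port-resource-request"])
# After the fix it is present:
assert is_extension_enabled(
    "port-resource-request-groups",
    ["qos", "port-resource-request", "port-resource-request-groups"])
```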

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1967893/+subscriptions




[Yahoo-eng-team] [Bug 1968206] [NEW] [sriov]: set_device_rate() takes 3 positional arguments but 4 were given

2022-04-07 Thread Lajos Katona
Public bug reported:

https://review.opendev.org/q/Ibbb6d938355440c42850812e368224b76b1fce19
([SR-IOV] Fix QoS extension to set min/max values) seems to have
introduced an issue with SR-IOV ports; see the traceback below:
https://paste.opendev.org/show/bhQw8reMUZIQNSiWmDHc/

The original bug: https://bugs.launchpad.net/neutron/+bug/1962844

The issue seems to be that the signature was not changed everywhere to
the new dict format.
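The mismatch can be illustrated with a hypothetical wrapper mirroring the new dict-based signature (the class and argument names here are illustrative, not neutron's exact SR-IOV code):

```python
class PciDeviceIPWrapper:
    def set_device_rate(self, vf_index, rates):
        """New-style signature: all rates arrive in one dict."""
        for rate_type, value in rates.items():
            print("vf %s: %s=%s" % (vf_index, rate_type, value))

wrapper = PciDeviceIPWrapper()
try:
    # An un-migrated caller still using the old three-argument form:
    wrapper.set_device_rate(0, "min_tx_rate", 1000)
except TypeError as exc:
    print(exc)  # takes 3 positional arguments but 4 were given
# The migrated calling convention works:
wrapper.set_device_rate(0, {"min_tx_rate": 1000, "max_tx_rate": 2000})
```

Any call site left on the old positional convention produces exactly the error in the bug title.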

** Affects: neutron
 Importance: Undecided
 Assignee: Lajos Katona (lajos-katona)
 Status: New


** Tags: sriov-pci-pt

** Changed in: neutron
 Assignee: (unassigned) => Lajos Katona (lajos-katona)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1968206

Title:
  [sriov]: set_device_rate() takes 3 positional arguments but 4 were
  given

Status in neutron:
  New

Bug description:
  https://review.opendev.org/q/Ibbb6d938355440c42850812e368224b76b1fce19
  ([SR-IOV] Fix QoS extension to set min/max values) seems to have
  introduced an issue with SR-IOV ports; see the traceback below:
  https://paste.opendev.org/show/bhQw8reMUZIQNSiWmDHc/

  The original bug: https://bugs.launchpad.net/neutron/+bug/1962844

  The issue seems to be that the signature was not changed everywhere to
  the new dict format.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1968206/+subscriptions




[Yahoo-eng-team] [Bug 1961906] Re: generate_config_file_samples.sh fails with invalid literal for int() with base 10

2022-02-28 Thread Lajos Katona
** Changed in: neutron
   Importance: Undecided => Low

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1961906

Title:
  generate_config_file_samples.sh fails with invalid literal for int()
  with base 10

Status in neutron:
  Invalid

Bug description:
  Following [1], pyroute2 now includes [2], which expects the kernel
  release string to have a specific format; if this format is not
  followed, the package fails.

  Our CIs run on non-inbox kernels which do not follow the inbox naming
  convention. The neutron ./tools/generate_config_file_samples.sh script
  now fails on our CIs with the following error:

  exec ./tools/generate_config_file_samples.sh
  07:01:54  Traceback (most recent call last):
  07:01:54File "/usr/local/bin/oslo-config-generator", line 8, in <module>
  07:01:54  sys.exit(main())
  07:01:54File "/usr/local/lib/python3.8/site-packages/oslo_config/generator.py", line 836, in main
  07:01:54  generate(conf)
  07:01:54File "/usr/local/lib/python3.8/site-packages/oslo_config/generator.py", line 797, in generate
  07:01:54  groups = _get_groups(_list_opts(conf.namespace))
  07:01:54File "/usr/local/lib/python3.8/site-packages/oslo_config/generator.py", line 524, in _list_opts
  07:01:54  loaders = _get_raw_opts_loaders(namespaces)
  07:01:54File "/usr/local/lib/python3.8/site-packages/oslo_config/generator.py", line 464, in _get_raw_opts_loaders
  07:01:54  mgr = stevedore.named.NamedExtensionManager(
  07:01:54File "/usr/local/lib/python3.8/site-packages/stevedore/named.py", line 78, in __init__
  07:01:54  extensions = self._load_plugins(invoke_on_load,
  07:01:54File "/usr/local/lib/python3.8/site-packages/stevedore/extension.py", line 233, in _load_plugins
  07:01:54  self._on_load_failure_callback(self, ep, err)
  07:01:54File "/usr/local/lib/python3.8/site-packages/stevedore/extension.py", line 221, in _load_plugins
  07:01:54  ext = self._load_one_plugin(ep,
  07:01:54File "/usr/local/lib/python3.8/site-packages/stevedore/named.py", line 156, in _load_one_plugin
  07:01:54  return super(NamedExtensionManager, self)._load_one_plugin(
  07:01:54File "/usr/local/lib/python3.8/site-packages/stevedore/extension.py", line 255, in _load_one_plugin
  07:01:54  plugin = ep.load()
  07:01:54File "/usr/lib64/python3.8/importlib/metadata.py", line 77, in load
  07:01:54  module = import_module(match.group('module'))
  07:01:54File "/usr/lib64/python3.8/importlib/__init__.py", line 127, in import_module
  07:01:54  return _bootstrap._gcd_import(name[level:], package, level)
  07:01:54File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  07:01:54File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  07:01:54File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  07:01:54File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  07:01:54File "<frozen importlib._bootstrap_external>", line 848, in exec_module
  07:01:54File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  07:01:54File "/opt/stack/neutron/neutron/opts.py", line 28, in <module>
  07:01:54  import neutron.conf.agent.l3.ha
  07:01:54File "/opt/stack/neutron/neutron/conf/agent/l3/ha.py", line 20, in <module>
  07:01:54  from neutron.agent.linux import keepalived
  07:01:54File "/opt/stack/neutron/neutron/agent/linux/keepalived.py", line 29, in <module>
  07:01:54  from neutron.agent.linux import external_process
  07:01:54File "/opt/stack/neutron/neutron/agent/linux/external_process.py", line 26, in <module>
  07:01:54  from neutron.agent.linux import ip_lib
  07:01:54File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 29, in <module>
  07:01:54  from pyroute2.netlink import exceptions \
  07:01:54File "/usr/local/lib/python3.8/site-packages/pyroute2/__init__.py", line 14, in <module>
  07:01:54  from pr2modules.config.version import __version__
  07:01:54File "/usr/local/lib/python3.8/site-packages/pr2modules/config/__init__.py", line 25, in <module>
  07:01:54  kernel = [int(x) for x in uname[2].split('-')[0].split('.')]
  07:01:54File "/usr/local/lib/python3.8/site-packages/pr2modules/config/__init__.py", line 25, in <listcomp>
  07:01:54  kernel = [int(x) for x in uname[2].split('-')[0].split('.')]
  07:01:54  ValueError: invalid literal for int() with base 10: '0_for_upstream_perf_2022_01_10_23_12'

  Can you guys help us to fix the issue?

  Thanks.

  [1] 
https://github.com/openstack/requirements/commit/812b39ddef60d76c037311144465714929a92041
  [2] 
https://github.com/svinota/pyroute2/commit/12a1aa8530540ad644e3098b08859b9e31321500
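The failing list comprehension from pr2modules/config/__init__.py can be reproduced, and a tolerant variant sketched, as follows (the stop-at-first-nondigit fallback is an assumption for illustration, not pyroute2's actual fix):

```python
def parse_kernel_version(release):
    # Mimic pyroute2's parsing: take the part before the first '-' and
    # split on '.', but stop at the first component that is not a pure
    # integer instead of raising ValueError.
    parsed = []
    for component in release.split('-')[0].split('.'):
        if not component.isdigit():
            break
        parsed.append(int(component))
    return parsed

# An inbox kernel release string parses fully:
assert parse_kernel_version('5.4.0-80-generic') == [5, 4, 0]
# A custom release like the one in the bug no longer raises ValueError:
assert parse_kernel_version('5.15.0_for_upstream_perf_2022_01_10_23_12') == [5, 15]
```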

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1961906/+subscriptions



[Yahoo-eng-team] [Bug 1954663] [NEW] [linuxbridge][CI]: test_floatingip_port_details fails sometimes with FIP port transition to DOWN timeout

2021-12-13 Thread Lajos Katona
Public bug reported:

test_floatingip_port_details fails intermittently with traceback like
this:

  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/test_floatingip.py",
 line 252, in test_floatingip_port_details
fip = self._wait_for_fip_port_down(fip['id'])
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/test_floatingip.py",
 line 324, in _wait_for_fip_port_down
raise exceptions.TimeoutException(message)
tempest.lib.exceptions.TimeoutException: Request timed out
Details: Floating IP 203f3321-32f1-4eef-87fb-3f09874043bb attached port status 
failed to transition to DOWN (current status BUILD) within the required time 
(120 s). Port details: {'id': 'b255de67-798e-4d0b-b2a4-d6ddeef84a40', 'name': 
'', 'network_id': '7e49a218-d2ca-4df6-b2c6-c60c4ee6a89e', 'tenant_id': 
'e0aafaef16d247c4bc69d2a55138fdae', 'mac_address': 'fa:16:3e:92:7c:dd', 
'admin_state_up': True, 'status': 'BUILD', 'device_id': '', 'device_owner': '', 
'fixed_ips': [{'subnet_id': 'e07ee6ab-7210-4fbb-9990-406d6646f976', 
'ip_address': '10.1.0.6'}], 'allowed_address_pairs': [], 'extra_dhcp_opts': [], 
'security_groups': ['866b17a5-afc2-4eb5-9dd7-0dc7755145eb'], 'description': '', 
'binding:vnic_type': 'normal', 'binding:profile': {}, 'binding:host_id': '', 
'binding:vif_type': 'unbound', 'binding:vif_details': {}, 
'port_security_enabled': True, 'qos_policy_id': None, 'qos_network_policy_id': 
None, 'propagate_uplink_status': True, 'dns_name': '', 'dns_assignment': 
[{'ip_address': '10.1.0.6', 'hostname': 'host-10-1-0-6', 'fqdn': 
'host-10-1-0-6.openstackgate.local.'}], 'dns_domain': '', 'resource_request': 
None, 'ip_allocation': 'immediate', 'tags': [], 'created_at': 
'2021-11-19T10:06:27Z', 'updated_at': '2021-11-19T10:08:10Z', 
'revision_number': 7, 'project_id': 'e0aafaef16d247c4bc69d2a55138fdae'}


Example job log:
https://08d14f4ddffb82b199e7-61a732188f1643f755755f84f6310584.ssl.cf1.rackcdn.com/817525/7/check/neutron-tempest-plugin-scenario-linuxbridge/e4b6cfb
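The _wait_for_fip_port_down helper behind the timeout boils down to a status-polling loop; here is a generic, hedged sketch (show_port is an assumed callable returning the port dict, not tempest's real client):

```python
import time

def wait_for_port_status(show_port, port_id, expected="DOWN",
                         timeout=120, interval=5):
    # Poll the port until its status matches, as the tempest waiter does;
    # raise on timeout with the last observed status.
    deadline = time.time() + timeout
    port = show_port(port_id)
    while port["status"] != expected:
        if time.time() > deadline:
            raise TimeoutError(
                "port %s stuck in %s, expected %s within %ss"
                % (port_id, port["status"], expected, timeout))
        time.sleep(interval)
        port = show_port(port_id)
    return port
```

In the failure above the port never leaves BUILD, so this kind of loop exhausts its 120 s budget and raises.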

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fip l3 linuxbridge

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1954663

Title:
  [linuxbridge][CI]: test_floatingip_port_details fails sometimes with
  FIP port transition to DOWN timeout

Status in neutron:
  New

Bug description:
  test_floatingip_port_details fails intermittently with traceback like
  this:

File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/test_floatingip.py",
 line 252, in test_floatingip_port_details
  fip = self._wait_for_fip_port_down(fip['id'])
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/test_floatingip.py",
 line 324, in _wait_for_fip_port_down
  raise exceptions.TimeoutException(message)
  tempest.lib.exceptions.TimeoutException: Request timed out
  Details: Floating IP 203f3321-32f1-4eef-87fb-3f09874043bb attached port 
status failed to transition to DOWN (current status BUILD) within the required 
time (120 s). Port details: {'id': 'b255de67-798e-4d0b-b2a4-d6ddeef84a40', 
'name': '', 'network_id': '7e49a218-d2ca-4df6-b2c6-c60c4ee6a89e', 'tenant_id': 
'e0aafaef16d247c4bc69d2a55138fdae', 'mac_address': 'fa:16:3e:92:7c:dd', 
'admin_state_up': True, 'status': 'BUILD', 'device_id': '', 'device_owner': '', 
'fixed_ips': [{'subnet_id': 'e07ee6ab-7210-4fbb-9990-406d6646f976', 
'ip_address': '10.1.0.6'}], 'allowed_address_pairs': [], 'extra_dhcp_opts': [], 
'security_groups': ['866b17a5-afc2-4eb5-9dd7-0dc7755145eb'], 'description': '', 
'binding:vnic_type': 'normal', 'binding:profile': {}, 'binding:host_id': '', 
'binding:vif_type': 'unbound', 'binding:vif_details': {}, 
'port_security_enabled': True, 'qos_policy_id': None, 'qos_network_policy_id': 
None, 'propagate_uplink_status': True, 'dns_name': '', 'dns_assignment': 
[{'ip_address': '10.1.0.6', 'hostname': 'host-10-1-0-6', 'fqdn': 
'host-10-1-0-6.openstackgate.local.'}], 'dns_domain': '', 'resource_request': 
None, 'ip_allocation': 'immediate', 'tags': [], 'created_at': 
'2021-11-19T10:06:27Z', 'updated_at': '2021-11-19T10:08:10Z', 
'revision_number': 7, 'project_id': 'e0aafaef16d247c4bc69d2a55138fdae'}

  
  Example job log:
  
https://08d14f4ddffb82b199e7-61a732188f1643f755755f84f6310584.ssl.cf1.rackcdn.com/817525/7/check/neutron-tempest-plugin-scenario-linuxbridge/e4b6cfb

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1954663/+subscriptions




[Yahoo-eng-team] [Bug 1952867] Re: [ml2][ovs] allow multiple physical networks map to one physical ovs bridge

2021-12-03 Thread Lajos Katona
At today's drivers meeting we discussed this bug:
https://meetings.opendev.org/meetings/neutron_drivers/2021/neutron_drivers.2021-12-03-14.01.log.html#l-15

The conclusion was to keep the current 1 NIC - 1 bridge - 1 physnet
mapping/relation.

** Tags added: rfe

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1952867

Title:
  [ml2][ovs] allow multiple physical networks map to one physical ovs
  bridge

Status in neutron:
  Won't Fix

Bug description:
  In a real production cloud environment there are many hosts: some can
  access the external network, some cannot. Some have enough NICs to
  serve different networks, while others lack NICs.

  For instance, an external network, provider:network_type is ``vlan``,
  provider:physical_network is ``external``, provider:segmentation_id is
  ``4000``.

  While tenant network, provider:network_type is ``vlan``,
  provider:physical_network is ``user``, provider:segmentation_id is
  ``1000-3000``.

  Due to Neutron's limitation, on a single host you have to add two OVS
  bridges with the mappings external->br-ex and user->br-usr, where
  br-ex holds physical port eth0 and br-usr holds eth1.

  But in the real world these VLANs can run on the same physical NIC,
  and physical hosts may lack NICs, which means Neutron should allow
  setting a bridge mapping like this:
  {"external": br-vlan, "user": br-vlan}

  Then hosts with only one NIC (or one bonded NIC) could serve both
  physical network types.

  You may say that maybe we could set one network with two values of
  "provider:physical_network"; currently a network's physical network is
  unique. That would be a bit more complicated than the former solution:
  it needs not only neutron-server side DB model changes but also
  agent-side changes, while the former may only need an agent-side
  change to allow such mappings.

  Any ideas?
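The rejected relaxation can be pictured against today's constraint with a small validator sketch (illustrative only, not neutron's actual bridge_mappings parsing):

```python
def validate_bridge_mappings(mappings):
    # Today's constraint: each bridge serves exactly one physnet, so a
    # mapping that reuses one bridge for two physnets is rejected.
    seen = {}
    for physnet, bridge in mappings.items():
        if bridge in seen:
            raise ValueError(
                "bridge %s already mapped to physnet %s; one bridge per "
                "physnet is required" % (bridge, seen[bridge]))
        seen[bridge] = physnet
    return mappings

validate_bridge_mappings({"external": "br-ex", "user": "br-usr"})  # accepted
try:
    # The mapping proposed in this bug is exactly what gets rejected:
    validate_bridge_mappings({"external": "br-vlan", "user": "br-vlan"})
except ValueError as exc:
    print(exc)
```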

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1952867/+subscriptions




[Yahoo-eng-team] [Bug 1952055] Re: native firewall driver - conntrack marks too much traffic as invalid

2021-11-30 Thread Lajos Katona
Hi Yusuf, I am closing this bug report now, but if you have any news from
the k8s team, please feel free to reopen it. If we need to change how the
firewall driver(s) work, perhaps we should open an RFE and do more
analysis (neutron has multiple firewall drivers, and we have to keep them
providing the same user experience).

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1952055

Title:
  native firewall driver - conntrack marks too much traffic as invalid

Status in neutron:
  Invalid

Bug description:
  Hi, we are seeing strange behaviour on our Victoria cluster after
  switching from the hybrid firewall driver to the native Open vSwitch
  firewall driver.

  We have to use the native Open vSwitch firewall driver to get
  firewall logs. After enabling security group logging we observed many
  DROP actions even though any-any ingress/egress rules for all
  protocols exist in the security groups. This seems normal according to
  the [Native Open vSwitch firewall
  driver](https://docs.openstack.org/neutron/latest/admin/config-
  ovsfwdriver.html#differences-between-ovs-and-iptables-firewall-
  drivers) document.

  But we do not understand why the traffic is marked invalid by
  conntrack. We are seeing a lot of traffic marked as INVALID by
  conntrack, especially for services that generate a lot of traffic, for
  example the etcd heartbeat, which is sent to cluster members every
  100 ms (TCP port 2380).

  conntrack statistics also show high counts for "insert_failed" and
  "search_restart". nf_conntrack_buckets=65536 and
  nf_conntrack_max=262144. We also do not see nf_conntrack_count reach
  the maximum.

  We are seeing random and frequent timeouts on the Kubernetes clusters
  installed on OpenStack instances in this cluster, and we believe that
  situation is related. In particular, the calico-node pod on the k8s
  clusters gets timeouts on liveness probe checks. We tested Calico with
  both IPIP and VXLAN mode but saw no change, and tested k8s clusters
  installed on different OSes, still with no change. (centos 7, debian
  etcd)

  Environment Details:
   OpenStack Victoria Cluster installed via kolla-ansible to Ubuntu 20.04.2 LTS 
Hosts. (Kernel:5.4.0-80-generic)
   There are 5 controller+network nodes.
   "neutron-openvswitch-agent", "neutron-l3-agent" and "neutron-server" version 
is "17.2.2.dev46"
   OpenvSwitch used in DVR mode with router HA configured. (l3_ha = true)
   We are using a single centralized neutron router for connecting all tenant 
networks to provider network.
   We are using bgp_dragent to announce unique tenant networks.
   Tenant network type: vxlan
   External network type: vlan

  Conntrack Invalid Logs (After enabling nf_conntrack_log_invalid logging)
  ...
  ... For etcd port 2380
  ...
  Nov 24 10:45:47 test-compute-07 kernel: [9666429.466072] nf_ct_proto_6: 
invalid rst IN= OUT= SRC=10.211.2.168 DST=10.211.2.98 LEN=40 TOS=0x00 PREC=0x00 
TTL=64 ID=52384 DF PROTO=TCP SPT=33726 DPT=2380 SEQ=1503741580 ACK=0 WINDOW=0 
RES=0x00 RST URGP=0
  Nov 24 10:46:01 test-compute-07 kernel: [9666444.248252] nf_ct_proto_6: 
invalid packet ignored in state ESTABLISHED  IN= OUT= SRC=10.168.112.39 
DST=10.211.2.97 LEN=60 TOS=0x00 PREC=0x00 TTL=59 ID=0 DF PROTO=TCP SPT=6533 
DPT=45832 SEQ=2345805154 ACK=1982320186 WINDOW=28960 RES=0x00 ACK SYN URGP=0 
OPT (020405B40402080A1ACBE8A518E611E801030309) MARK=0x401
  Nov 24 10:46:02 test-compute-07 kernel: [9666444.490741] nf_ct_proto_6: 
invalid packet ignored in state ESTABLISHED  IN= OUT= SRC=10.168.112.39 
DST=10.211.2.97 LEN=60 TOS=0x00 PREC=0x00 TTL=59 ID=0 DF PROTO=TCP SPT=6533 
DPT=59862 SEQ=3082071853 ACK=2961225592 WINDOW=28960 RES=0x00 ACK SYN URGP=0 
OPT (020405B40402080A1ACBE8E218E612DA01030309) MARK=0x401
  Nov 24 10:46:06 test-compute-07 kernel: [9666448.362730] nf_ct_proto_6: 
invalid rst IN= OUT= SRC=10.211.2.139 DST=10.211.2.98 LEN=40 TOS=0x00 PREC=0x00 
TTL=64 ID=42180 DF PROTO=TCP SPT=42286 DPT=2380 SEQ=3794545871 ACK=0 WINDOW=0 
RES=0x00 RST URGP=0
  Nov 24 10:46:11 test-compute-07 kernel: [9666453.465972] nf_ct_proto_6: 
invalid rst IN= OUT= SRC=10.211.2.168 DST=10.211.2.98 LEN=40 TOS=0x00 PREC=0x00 
TTL=64 ID=62831 DF PROTO=TCP SPT=33954 DPT=2380 SEQ=935403626 ACK=0 WINDOW=0 
RES=0x00 RST URGP=0
  Nov 24 10:46:19 test-compute-07 kernel: [9666461.590026] nf_ct_proto_6: 
invalid packet ignored in state SYN_SENT  IN= OUT= SRC=162.247.243.149 
DST=10.211.2.121 LEN=40 TOS=0x00 PREC=0x00 TTL=49 ID=0 DF PROTO=TCP SPT=443 
DPT=56158 SEQ=1845326009 ACK=4146250693 WINDOW=1198 RES=0x00 ACK URGP=0 
MARK=0x401
  Nov 24 10:46:22 test-compute-07 kernel: [9666464.365487] nf_ct_proto_6: 
invalid rst IN= OUT= SRC=10.211.2.139 DST=10.211.2.168 LEN=40 TOS=0x00 
PREC=0x00 TTL=64 ID=47797 DF PROTO=TCP SPT=46064 DPT=2380 SEQ=4079966865 ACK=0 
WINDOW=0 RES=0x00 RST URGP=0
  Nov 24 10:47:07 

[Yahoo-eng-team] [Bug 1951083] Re: AttributeError: 'tenant_id' during _setup_new_dhcp_port

2021-11-16 Thread Lajos Katona
Perhaps what I saw was noise; let's see after merging
https://review.opendev.org/c/openstack/neutron/+/815814

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1951083

Title:
  AttributeError: 'tenant_id' during _setup_new_dhcp_port

Status in neutron:
  Invalid

Bug description:
  In recent logs there's a lot of stacktrace like this in q-dhcp.log:
  Nov 12 13:24:07.313051 ubuntu-focal-iweb-mtl01-0027325688 
neutron-dhcp-agent[84125]: ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 1641, in setup_dhcp_port
  Nov 12 13:24:07.313051 ubuntu-focal-iweb-mtl01-0027325688 
neutron-dhcp-agent[84125]: ERROR neutron.agent.dhcp.agent dhcp_port = 
setup_method(network, device_id, dhcp_subnets)
  Nov 12 13:24:07.313051 ubuntu-focal-iweb-mtl01-0027325688 
neutron-dhcp-agent[84125]: ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 1587, in 
_setup_new_dhcp_port
  Nov 12 13:24:07.313051 ubuntu-focal-iweb-mtl01-0027325688 
neutron-dhcp-agent[84125]: ERROR neutron.agent.dhcp.agent 
tenant_id=network.tenant_id,
  Nov 12 13:24:07.313051 ubuntu-focal-iweb-mtl01-0027325688 
neutron-dhcp-agent[84125]: ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 118, in __getattr__
  Nov 12 13:24:07.313051 ubuntu-focal-iweb-mtl01-0027325688 
neutron-dhcp-agent[84125]: ERROR neutron.agent.dhcp.agent raise 
AttributeError(e)
  Nov 12 13:24:07.313051 ubuntu-focal-iweb-mtl01-0027325688 
neutron-dhcp-agent[84125]: ERROR neutron.agent.dhcp.agent AttributeError: 
'tenant_id'

  example:
  
https://3efaac4a1fc190950255-3b7af1770e83762d2e3e96952ca9b2d3.ssl.cf5.rackcdn.com/816850/4/check/neutron-
  ovs-tempest-slow/66aadfc/controller/logs/screen-q-dhcp.txt

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1951083/+subscriptions




[Yahoo-eng-team] [Bug 1951083] [NEW] AttributeError: 'tenant_id' during _setup_new_dhcp_port

2021-11-16 Thread Lajos Katona
Public bug reported:

In recent logs there's a lot of stacktrace like this in q-dhcp.log:
Nov 12 13:24:07.313051 ubuntu-focal-iweb-mtl01-0027325688 
neutron-dhcp-agent[84125]: ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 1641, in setup_dhcp_port
Nov 12 13:24:07.313051 ubuntu-focal-iweb-mtl01-0027325688 
neutron-dhcp-agent[84125]: ERROR neutron.agent.dhcp.agent dhcp_port = 
setup_method(network, device_id, dhcp_subnets)
Nov 12 13:24:07.313051 ubuntu-focal-iweb-mtl01-0027325688 
neutron-dhcp-agent[84125]: ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 1587, in 
_setup_new_dhcp_port
Nov 12 13:24:07.313051 ubuntu-focal-iweb-mtl01-0027325688 
neutron-dhcp-agent[84125]: ERROR neutron.agent.dhcp.agent 
tenant_id=network.tenant_id,
Nov 12 13:24:07.313051 ubuntu-focal-iweb-mtl01-0027325688 
neutron-dhcp-agent[84125]: ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 118, in __getattr__
Nov 12 13:24:07.313051 ubuntu-focal-iweb-mtl01-0027325688 
neutron-dhcp-agent[84125]: ERROR neutron.agent.dhcp.agent raise 
AttributeError(e)
Nov 12 13:24:07.313051 ubuntu-focal-iweb-mtl01-0027325688 
neutron-dhcp-agent[84125]: ERROR neutron.agent.dhcp.agent AttributeError: 
'tenant_id'

example:
https://3efaac4a1fc190950255-3b7af1770e83762d2e3e96952ca9b2d3.ssl.cf5.rackcdn.com/816850/4/check/neutron-
ovs-tempest-slow/66aadfc/controller/logs/screen-q-dhcp.txt
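For context, the __getattr__ at dhcp.py line 118 delegates attribute access to an underlying dict and converts a missing key into an AttributeError, which is what produces the bare 'tenant_id' message. A minimal, illustrative sketch (class and field names are not neutron's exact code):

```python
class NetModel:
    # Dict-backed model: unknown attribute access falls through to the
    # payload dict.
    def __init__(self, payload):
        self.payload = payload

    def __getattr__(self, name):
        try:
            return self.payload[name]
        except KeyError as e:
            # This re-raise yields "AttributeError: 'tenant_id'" when the
            # RPC payload arrives without the tenant_id field.
            raise AttributeError(e)

net = NetModel({"id": "7e49a218"})
try:
    net.tenant_id
except AttributeError as exc:
    print(exc)  # 'tenant_id'
```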

** Affects: neutron
 Importance: Undecided
 Status: Invalid


** Tags: keystone-v3 l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1951083

Title:
  AttributeError: 'tenant_id' during _setup_new_dhcp_port

Status in neutron:
  Invalid

Bug description:
  In recent logs there's a lot of stacktrace like this in q-dhcp.log:
  Nov 12 13:24:07.313051 ubuntu-focal-iweb-mtl01-0027325688 
neutron-dhcp-agent[84125]: ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 1641, in setup_dhcp_port
  Nov 12 13:24:07.313051 ubuntu-focal-iweb-mtl01-0027325688 
neutron-dhcp-agent[84125]: ERROR neutron.agent.dhcp.agent dhcp_port = 
setup_method(network, device_id, dhcp_subnets)
  Nov 12 13:24:07.313051 ubuntu-focal-iweb-mtl01-0027325688 
neutron-dhcp-agent[84125]: ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 1587, in 
_setup_new_dhcp_port
  Nov 12 13:24:07.313051 ubuntu-focal-iweb-mtl01-0027325688 
neutron-dhcp-agent[84125]: ERROR neutron.agent.dhcp.agent 
tenant_id=network.tenant_id,
  Nov 12 13:24:07.313051 ubuntu-focal-iweb-mtl01-0027325688 
neutron-dhcp-agent[84125]: ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 118, in __getattr__
  Nov 12 13:24:07.313051 ubuntu-focal-iweb-mtl01-0027325688 
neutron-dhcp-agent[84125]: ERROR neutron.agent.dhcp.agent raise 
AttributeError(e)
  Nov 12 13:24:07.313051 ubuntu-focal-iweb-mtl01-0027325688 
neutron-dhcp-agent[84125]: ERROR neutron.agent.dhcp.agent AttributeError: 
'tenant_id'

  example:
  
https://3efaac4a1fc190950255-3b7af1770e83762d2e3e96952ca9b2d3.ssl.cf5.rackcdn.com/816850/4/check/neutron-
  ovs-tempest-slow/66aadfc/controller/logs/screen-q-dhcp.txt

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1951083/+subscriptions




[Yahoo-eng-team] [Bug 1943112] Re: openstack rocky version, neutron is throwing the following error : ovs|00002|ovsdb_idl|WARN|transaction error: {"details":"Transaction causes multiple rows in \"Mana

2021-09-15 Thread Lajos Katona
Did you try to restart the related services on the host?
As this seems more like a support request or deployment issue, I am
marking this bug as invalid.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1943112

Title:
  openstack rocky version, neutron is throwing the following error :
  ovs|00002|ovsdb_idl|WARN|transaction error: {"details":"Transaction
  causes multiple rows in \"Manager\" table to have identi

Status in neutron:
  Invalid

Bug description:
  I am using the OpenStack Rocky version; neutron throws the following error 
when this command is executed - 
   " systemctl status neutron-openvswitch-agent "

  the error is as follows:

  Sep 07 09:48:38 compute-I ovs-vsctl[5369]:
  ovs|00002|ovsdb_idl|WARN|transaction error: {"details":"Transaction
  causes multiple rows in \"Manager\" table to have identi

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1943112/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1943679] [NEW] test_trunk_subport_lifecycle scenario fails with subport status never reaches DOWN state

2021-09-15 Thread Lajos Katona
Public bug reported:

neutron_tempest_plugin.scenario.test_trunk.TrunkTest.test_trunk_subport_lifecycle
fails in the step where it expects the port to be DOWN after the subport was
removed from the VM, see [1].

example:
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_640/805366/1/gate/neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid/640d9ab/testr_results.html

Seems like a race between update_device_down and unbound, and neutron-server 
fails to set port status:
https://opendev.org/openstack/neutron/src/branch/master/neutron/plugins/ml2/rpc.py#L267-L271

[1]: https://opendev.org/openstack/neutron-tempest-
plugin/src/branch/master/neutron_tempest_plugin/scenario/test_trunk.py#L260-L264

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: trunk

** Tags added: trunk

** Summary changed:

- test_trunk_subport_lifecycle scenario fails with subport status never rich 
DOWN state
+ test_trunk_subport_lifecycle scenario fails with subport status never reach 
DOWN state

** Summary changed:

- test_trunk_subport_lifecycle scenario fails with subport status never reach 
DOWN state
+ test_trunk_subport_lifecycle scenario fails with subport status never reaches 
DOWN state

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1943679

Title:
  test_trunk_subport_lifecycle scenario fails with subport status never
  reaches DOWN state

Status in neutron:
  New

Bug description:
  
neutron_tempest_plugin.scenario.test_trunk.TrunkTest.test_trunk_subport_lifecycle
  fails in the step where it expects the port to be DOWN after the subport was
  removed from the VM, see [1].

  example:
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_640/805366/1/gate/neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid/640d9ab/testr_results.html

  Seems like a race between update_device_down and unbound, and neutron-server 
fails to set port status:
  
https://opendev.org/openstack/neutron/src/branch/master/neutron/plugins/ml2/rpc.py#L267-L271

  [1]: https://opendev.org/openstack/neutron-tempest-
  
plugin/src/branch/master/neutron_tempest_plugin/scenario/test_trunk.py#L260-L264

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1943679/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1923449] [NEW] test_security_group_recreated_on_port_update fails with not yet created default group

2021-04-12 Thread Lajos Katona
Public bug reported:

test_security_group_recreated_on_port_update from neutron-tempest-plugin seems 
to be failing sporadically with no default group present after port update, 
example:
https://40d1580bb656fd0ed240-3f272db0dacf207a646e9867f60c7e03.ssl.cf1.rackcdn.com/785830/1/check/neutron-tempest-plugin-api/f16b2f0/testr_results.html

Logstash query:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22testtools.matchers._impl.MismatchError%3A%20'default'%20not%20in%20%5B%5D%5C%22

** Affects: neutron
 Importance: Medium
 Assignee: Lajos Katona (lajos-katona)
 Status: New


** Tags: tempest

** Tags added: tempest

** Changed in: neutron
 Assignee: (unassigned) => Lajos Katona (lajos-katona)

** Changed in: neutron
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1923449

Title:
  test_security_group_recreated_on_port_update fails with not yet
  created default group

Status in neutron:
  New

Bug description:
  test_security_group_recreated_on_port_update from neutron-tempest-plugin 
seems to be failing sporadically with no default group present after port 
update, example:
  
https://40d1580bb656fd0ed240-3f272db0dacf207a646e9867f60c7e03.ssl.cf1.rackcdn.com/785830/1/check/neutron-tempest-plugin-api/f16b2f0/testr_results.html

  Logstash query:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22testtools.matchers._impl.MismatchError%3A%20'default'%20not%20in%20%5B%5D%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1923449/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1923423] [NEW] l3_router.service_providers DriverController's _attrs_to_driver is not py3 compatible

2021-04-12 Thread Lajos Katona
Public bug reported:

Currently l3_router.service_providers.DriverController._attrs_to_driver has the 
following:
...
drivers = self.drivers.values()
# make sure default is tried before the rest if defined
if self.default_provider:
drivers.insert(0, self.drivers[self.default_provider])

As in python3 dict.values() returns a "dict_values" view instead of a list, 
insert will fail with:
"AttributeError: 'dict_values' object has no attribute 'insert'"
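A minimal reproduction, plus one possible fix (materializing the view into a list before mutating it); the driver names below are made up for illustration:

```python
drivers_map = {"dvr": "DvrDriver", "ha": "HaDriver"}
default_provider = "ha"

# Buggy py3 pattern: dict.values() returns a view, which has no insert()
drivers = drivers_map.values()
try:
    drivers.insert(0, drivers_map[default_provider])
except AttributeError as e:
    print(e)  # 'dict_values' object has no attribute 'insert'

# Fix: convert the view to a list before inserting the default driver first
drivers = list(drivers_map.values())
drivers.insert(0, drivers_map[default_provider])
print(drivers[0])  # HaDriver
```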

** Affects: neutron
 Importance: Undecided
 Assignee: Lajos Katona (lajos-katona)
 Status: New


** Tags: trivial

** Tags added: trivial

** Changed in: neutron
 Assignee: (unassigned) => Lajos Katona (lajos-katona)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1923423

Title:
  l3_router.service_providers DriverController's _attrs_to_driver is not
  py3 compatible

Status in neutron:
  New

Bug description:
  Currently l3_router.service_providers.DriverController._attrs_to_driver has 
the following:
  ...
  drivers = self.drivers.values()
  # make sure default is tried before the rest if defined
  if self.default_provider:
  drivers.insert(0, self.drivers[self.default_provider])

  As in python3 dict.values() returns a "dict_values" view instead of a list, 
insert will fail with:
  "AttributeError: 'dict_values' object has no attribute 'insert'"

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1923423/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1921085] Re: neutron-server ovsdbapp timeout exceptions after intermittent connectivity issues

2021-03-29 Thread Lajos Katona
thanks for the info

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1921085

Title:
  neutron-server ovsdbapp timeout exceptions after intermittent
  connectivity issues

Status in neutron:
  Invalid

Bug description:
  Cloud environment: bionic-ussuri with 3 neutron-server and 3 ovn-
  central components each running on separate rack. (ovn-central runs
  ovn-northd, ovsdb-nb, ovsdb-sb services)

  There was a network glitch between rack3 and the other racks for about a 
minute, so neutron-server/2 was not able to communicate with ovn-central/0 and 
ovn-central/1. The ovsdb-nb and ovsdb-sb leaders are on one of ovn-central/0 
or ovn-central/1.
  However, neutron-server/2 was able to connect to the ovsdb-sb on 
ovn-central/2, but it is not the leader. 

  Logs from neutron-server on neutron-server/2 unit
  2021-02-15 14:20:08.119 15554 INFO ovsdbapp.backend.ovs_idl.vlog 
[req-a3778f18-7b4d-4739-b20a-bff355fed9b0 - - - - -] ssl:10.216.241.118:6641: 
clustered database server is disconnected from cluster; trying another server
  2021-02-15 14:20:08.121 15554 INFO ovsdbapp.backend.ovs_idl.vlog 
[req-a3778f18-7b4d-4739-b20a-bff355fed9b0 - - - - -] ssl:10.216.241.118:6641: 
connection closed by client
  2021-02-15 14:20:08.121 15554 INFO ovsdbapp.backend.ovs_idl.vlog 
[req-a3778f18-7b4d-4739-b20a-bff355fed9b0 - - - - -] ssl:10.216.241.118:6641: 
continuing to reconnect in the background but suppressing further logging
  2021-02-15 14:20:08.853 15553 INFO ovsdbapp.backend.ovs_idl.vlog [-] 
ssl:10.216.241.118:16642: connected
  2021-02-15 14:20:08.864 15563 INFO ovsdbapp.backend.ovs_idl.vlog [-] 
ssl:10.216.241.251:16642: connecting...
  2021-02-15 14:20:08.869 15542 INFO ovsdbapp.backend.ovs_idl.vlog [-] 
ssl:10.216.241.251:16642: connecting...
  2021-02-15 14:20:08.872 15558 INFO ovsdbapp.backend.ovs_idl.vlog 
[req-c047c84e-8fdc-404c-8284-bba80c34fe90 - - - - -] ssl:10.216.241.251:16642: 
connecting...
  2021-02-15 14:20:08.877 15553 INFO ovsdbapp.backend.ovs_idl.vlog [-] 
ssl:10.216.241.118:16642: clustered database server is disconnected from 
cluster; trying another server
  2021-02-15 14:20:08.879 15553 INFO ovsdbapp.backend.ovs_idl.vlog [-] 
ssl:10.216.241.118:16642: connection closed by client
  2021-02-15 14:20:08.879 15553 INFO ovsdbapp.backend.ovs_idl.vlog [-] 
ssl:10.216.241.118:16642: continuing to reconnect in the background but 
suppressing further logging
  2021-02-15 14:20:09.093 15548 INFO ovsdbapp.backend.ovs_idl.vlog 
[req-b3fb3d36-3477-454e-97e0-11673e64eff5 - - - - -] ssl:10.216.241.251:6641: 
connecting...
  2021-02-15 14:20:09.126 15558 INFO ovsdbapp.backend.ovs_idl.vlog 
[req-3de7f22d-c26c-493b-9463-3140898e35f0 - - - - -] ssl:10.216.241.251:6641: 
connecting...
  2021-02-15 14:20:09.129 15557 INFO ovsdbapp.backend.ovs_idl.vlog 
[req-89da4e64-10f9-45c1-ba11-c0ff429961c9 - - - - -] ssl:10.216.241.251:6641: 
connecting...
  2021-02-15 14:20:09.129 15571 INFO ovsdbapp.backend.ovs_idl.vlog 
[req-68cd67e7-592c-4869-bc11-2d18fc070c12 - - - - -] ssl:10.216.241.251:6641: 
connecting...
  2021-02-15 14:20:09.132 15563 INFO ovsdbapp.backend.ovs_idl.vlog [-] 
ssl:10.216.241.251:6641: connecting...
  2021-02-15 14:20:10.284 15546 ERROR ovsdbapp.backend.ovs_idl.connection [-] 
(113, 'EHOSTUNREACH'): OpenSSL.SSL.SysCallError: (113, 'EHOSTUNREACH')
  ... (and more EHOSTUNREACH messages probably from each thread) 

  I believe network connectivity was then restored, after which the
  Timeout exceptions to ovsdb started appearing. Any *_postcommit
  operations on neutron-server/2 timed out.

  2021-02-15 15:17:21.163 15554 ERROR neutron.api.v2.resource 
[req-6b3381c3-69ac-44fc-b71d-a3110714f32e 84fca387fca043b984358c34174e1070 
24471fcdff7e4cac9f7fe7b4ec0d04e3 - cb47060fffe34ed0a8913db979e06523 
cb47060fffe34ed0a8913db979e06523] index failed: No details.: 
ovsdbapp.exceptions.TimeoutException: Commands 
[] exceeded timeout 180 seconds
  2021-02-15 16:03:18.018 15554 ERROR neutron.plugins.ml2.managers 
[req-3c4f2b06-2be3-4ccc-a00e-a91bf61b8473 - 6e3dac6cf8f14582be2c8a6fdc0a7458 - 
- -] Mechanism driver 'ovn' failed in create_port_postcommit: 
ovsdbapp.exceptions.TimeoutException: Commands 
[, 
, , 
, , 
, , 
, , 
, , ] exceeded timeout 180 seconds
  ...

  One reference of complete Timeout exception (points to Queue Full):
  2021-02-15 14:58:44.610 15554 ERROR neutron.api.v2.resource 
[req-9fa36c11-fcaf-4716-8371-3d4e357b5154 2ae54808a32e4ba6baec08cbc3df6cec 
64f175c521c847c5a7d31a7443a861f2 - 8b226be7ba0a4e62a16072c0c08c6d8f 
8b226be7ba0a4e62a16072c0c08c6d8f] index failed: No details.: 
ovsdbapp.exceptions.TimeoutException: Commands 
[] exceeded timeout 180 seconds
  2021-02-15 14:58:44.610 15554 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
  2021-02-15 14:58:44.610 15554 ERROR neutron.api.v2.resource   File 

[Yahoo-eng-team] [Bug 1920976] Re: ovn: DVR on VLAN networks does not work

2021-03-29 Thread Lajos Katona
** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1920976

Title:
  ovn: DVR on VLAN networks does not work

Status in neutron:
  Fix Released

Bug description:
  OVN deployment with DVR router, two networks attached:
  - external (type VLAN)
  - internal (type VLAN)

  OVN configured to use external_ids:ovn-chassis-mac-mappings - to allow
  VLAN backed DVR:

  https://github.com/ovn-
  org/ovn/commit/1fed74cfc1a1e3b29cf86eab2e96048813019b57

  Neutron is setting reside-on-redirect-chassis on the router ports, which 
essentially makes the router centralized on the network nodes and disables 
DVR.
  With the neutron/ml2 config set to distribute floating IPs, this makes them 
inaccessible; clearing reside-on-redirect-chassis from the lrp options makes 
the functionality work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1920976/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1921577] Re: 'Table 'ovn_revision_numbers' is already defined for this MetaData instance

2021-03-29 Thread Lajos Katona
Thanks for the comment

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1921577

Title:
  'Table 'ovn_revision_numbers' is already defined for this MetaData
  instance

Status in neutron:
  Invalid

Bug description:
  neutron-server logs an error when starting with OVN plugin enabled:

  2021-03-27 11:53:02.660 828835 CRITICAL neutron.plugins.ml2.managers
  [-] The 'EntryPoint(name='ovn',
  
value='neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver:OVNMechanismDriver',
  group='neutron.ml2.mechanism_drivers')' entrypoint could not be loaded
  for the following reason: 'Table 'ovn_revision_numbers' is already
  defined for this MetaData instance.  Specify 'extend_existing=True' to
  redefine options and columns on an existing Table object.'.:
  sqlalchemy.exc.InvalidRequestError: Table 'ovn_revision_numbers' is
  already defined for this MetaData instance.  Specify
  'extend_existing=True' to redefine options and columns on an existing
  Table object.

  Fresh install of Victoria openstack using Ubuntu Focal packages.
  Installed OVN:

  apt install ovn-central
  ovn-nbctl set-connection ptcp:6641:10.230.185.137 -- set connection . 
inactivity_probe=6
  ovn-sbctl set-connection ptcp:6642:10.230.185.137 -- set connection . 
inactivity_probe=6
  service ovn-northd restart
  service ovn-ovsdb-server-nb.service start
  service ovn-ovsdb-server-sb.service start

  ovn-sbctl show

  Chassis "3c63ed00-56ea-403e-8dad-adee06b2315f"
  hostname: o3p-os-compute-1.oppp.lab
  Encap geneve
  ip: "10.230.185.136"
  options: {csum="true"}

  Configured neutron with ovn plugin:

  vi /etc/neutron/neutron.conf
  [DEFAULT]
  core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
  service_plugins = networking_ovn.l3.l3_ovn.OVNL3RouterPlugin

  vi /etc/neutron/plugins/ml2/ml2_conf.ini
  [ml2]
  type_drivers = local,flat,vlan,geneve
  tenant_network_types = geneve
  overlay_ip_version = 4
  mechanism_drivers = ovn
  extension_drivers = port_security
  
  [ml2_type_geneve]
  vni_ranges = 1:65536
  max_header_size = 38

  [securitygroup]
  enable_ipset = true
  enable_security_group = true

  [ovn]
  ovn_nb_connection = tcp:10.230.185.137:6641
  ovn_sb_connection = tcp:10.230.185.137:6642
  ovn_l3_scheduler = leastloaded #other option is chance

  service neutron-server restart

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1921577/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1907089] [NEW] [RFE] Add BFD support for Neutron

2020-12-07 Thread Lajos Katona
Public bug reported:

I plan to open a spec with more details as gerrit is more suitable for
discussions.

# Problem description

BFD (Bidirectional Forwarding Detection) is used to detect link failures 
between routers.
It can be helpful to
* detect if an extra route (a nexthop-destination pair) is alive or not, and 
change routes accordingly.
* help routing protocols like ECMP or BGP to change routing decisions based 
on link status.

# Proposed change (more details are coming in the spec)

* Add the following new APIs to Neutron:
** Handle (Create, list, show, update, delete) bfd_monitors:
POST /v2.0/bfd_monitors
GET /v2.0/bfd_monitors
GET /v2.0/bfd_monitors/{monitor_uuid}
DELETE /v2.0/bfd_monitors/{monitor_uuid}
PUT /v2.0/bfd_monitors/{monitor_uuid}

** Get the current status of a bfd_monitor (as the current status is fetched 
from the backend, it can be an expensive operation, so it is better not to mix 
it with the show bfd_monitors operation)
GET /v2.0/bfd_monitors/{monitor_uuid}/monitor_status

* Change the existing router API
** Associate a bfd_monitor to an extra route
PUT /v2.0/routers/{router_uuid}/add_extraroutes OR PUT /v2.0/routers/{router_id}
{"router" : {"routes" : [{ "destination" : "10.0.3.0/24", "nexthop" : 
"10.0.0.13" , "bfd": }]}}

** show routes status for a given router:
GET /v2.0/routers/{router_id}/routes_status

BFD not only provides monitoring, but is generally used to allow a quick 
response to link status changes. 
In Neutron's case this can be the removal of a dead route from the routing 
table, and adding it back if the monitor status goes UP again. Other backends 
and switch/routing implementations can have more sophisticated solutions of 
course.

A simple opensource backend can be OVS, as OVS is capable of BFD
monitoring.
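As a rough illustration of the proposed API, a create request body for POST /v2.0/bfd_monitors could look like the sketch below. Every field name (dst_ip, mode, min_rx, min_tx) is an assumption for illustration only; the real schema would be defined in the spec.

```python
import json

# Hypothetical request body for POST /v2.0/bfd_monitors -- field names
# are assumptions, not the final API schema.
bfd_monitor = {
    "bfd_monitor": {
        "name": "monitor-to-upstream",
        "dst_ip": "10.0.0.13",   # the nexthop whose liveness is monitored
        "mode": "asynchronous",
        "min_rx": 1000,          # minimum receive interval, milliseconds
        "min_tx": 1000,          # minimum transmit interval, milliseconds
    }
}
print(json.dumps(bfd_monitor, indent=2))
```

The monitor's UUID would then be referenced from the "bfd" key of an extra route, as in the router API change above.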

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1907089

Title:
  [RFE] Add BFD support for Neutron

Status in neutron:
  New

Bug description:
  I plan to open a spec with more details as gerrit is more suitable for
  discussions.

  # Problem description

  BFD (Bidirectional Forwarding Detection) is used to detect link failures 
between routers.
  It can be helpful to
  * detect if an extra route (a nexthop-destination pair) is alive or not, and 
change routes accordingly.
  * help routing protocols like ECMP or BGP to change routing decisions 
based on link status.

  # Proposed change (more details are coming in the spec)

  * Add the following new APIs to Neutron:
  ** Handle (Create, list, show, update, delete) bfd_monitors:
  POST /v2.0/bfd_monitors
  GET /v2.0/bfd_monitors
  GET /v2.0/bfd_monitors/{monitor_uuid}
  DELETE /v2.0/bfd_monitors/{monitor_uuid}
  PUT /v2.0/bfd_monitors/{monitor_uuid}

  ** Get the current status of a bfd_monitor (as the current status is fetched 
from the backend, it can be an expensive operation, so it is better not to mix 
it with the show bfd_monitors operation)
  GET /v2.0/bfd_monitors/{monitor_uuid}/monitor_status

  * Change the existing router API
  ** Associate a bfd_monitor to an extra route
  PUT /v2.0/routers/{router_uuid}/add_extraroutes OR PUT 
/v2.0/routers/{router_id}
  {"router" : {"routes" : [{ "destination" : "10.0.3.0/24", "nexthop" : 
"10.0.0.13" , "bfd": }]}}

  ** show routes status for a given router:
  GET /v2.0/routers/{router_id}/routes_status

  BFD not only provides monitoring, but is generally used to allow a quick 
response to link status changes. 
  In Neutron's case this can be the removal of a dead route from the routing 
table, and adding it back if the monitor status goes UP again. Other backends 
and switch/routing implementations can have more sophisticated solutions of 
course.

  A simple opensource backend can be OVS, as OVS is capable of BFD
  monitoring.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1907089/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1894864] [NEW] ContextualVersionConflict with pecan 1.3.3 in networking-midonet

2020-09-08 Thread Lajos Katona
Public bug reported:

In networking-midonet the following error appeared:
2020-09-08 02:03:36.953257 | ubuntu-bionic | 
pkg_resources.ContextualVersionConflict: (pecan 1.3.3 
(/home/zuul/src/opendev.org/openstack/networking-midonet/.tox/pep8/lib/python3.6/site-packages),
 Requirement.parse('pecan>=1.4.0'), {'neutron'})

example:
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_e94/749857/18/check/openstack-tox-pep8/e94e873/job-output.txt

For neutron pecan version was lifted by this fix:
https://review.opendev.org/744035

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1894864

Title:
  ContextualVersionConflict with pecan 1.3.3 in networking-midonet

Status in neutron:
  New

Bug description:
  In networking-midonet the following error appeared:
  2020-09-08 02:03:36.953257 | ubuntu-bionic | 
pkg_resources.ContextualVersionConflict: (pecan 1.3.3 
(/home/zuul/src/opendev.org/openstack/networking-midonet/.tox/pep8/lib/python3.6/site-packages),
 Requirement.parse('pecan>=1.4.0'), {'neutron'})

  example:
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_e94/749857/18/check/openstack-tox-pep8/e94e873/job-output.txt

  For neutron pecan version was lifted by this fix:
  https://review.opendev.org/744035

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1894864/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1894825] [NEW] placement allocation update accepts only integers from 1

2020-09-08 Thread Lajos Katona
Public bug reported:

update_qos_minbw_allocation (see: 
https://opendev.org/openstack/neutron-lib/src/commit/245e005d1bbb9af5e57ff600fb97b2a13c85c83b/neutron_lib/placement/client.py
 ) assumes that placement allocation records can be updated to 0, like: 
{
  "allocations": {
    "4e061c03-611e-4caa-bf26-999dcff4284e": {
      "resources": {
        "NET_BW_EGR_KILOBIT_PER_SEC": 0
      }
    }
  }
}

but that is not possible now; the way to do this is to delete the resource 
class.
See the storyboard story for this:
https://storyboard.openstack.org/#!/story/2008111
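A sketch of the workaround in plain Python: instead of writing a 0 value (which placement rejects), drop the resource class from the allocation body before the PUT. The helper name is made up for illustration:

```python
def zero_out_allocation(allocations, rp_uuid, resource_class):
    """Remove a resource class from an allocation body instead of writing 0.

    Placement rejects 0-valued allocations, so "update to 0" must be
    expressed as deleting the class (and the whole provider entry if it
    becomes empty).
    """
    resources = allocations[rp_uuid]["resources"]
    resources.pop(resource_class, None)
    if not resources:
        del allocations[rp_uuid]
    return allocations


body = {
    "4e061c03-611e-4caa-bf26-999dcff4284e": {
        "resources": {"NET_BW_EGR_KILOBIT_PER_SEC": 1000}
    }
}
print(zero_out_allocation(body, "4e061c03-611e-4caa-bf26-999dcff4284e",
                          "NET_BW_EGR_KILOBIT_PER_SEC"))  # {}
```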

** Affects: neutron
     Importance: Undecided
 Assignee: Lajos Katona (lajos-katona)
 Status: New


** Tags: placement qos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1894825

Title:
  placement allocation update accepts only integers from 1

Status in neutron:
  New

Bug description:
  update_qos_minbw_allocation (see: 
https://opendev.org/openstack/neutron-lib/src/commit/245e005d1bbb9af5e57ff600fb97b2a13c85c83b/neutron_lib/placement/client.py
 ) assumes that placement allocation records can be updated to 0, like: 
  {
    "allocations": {
      "4e061c03-611e-4caa-bf26-999dcff4284e": {
        "resources": {
          "NET_BW_EGR_KILOBIT_PER_SEC": 0
        }
      }
    }
  }

  but that is not possible now; the way to do this is to delete the resource 
class.
  See the storyboard story for this:
  https://storyboard.openstack.org/#!/story/2008111

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1894825/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1887992] Re: [neutron-tempest-plugin] glance service failing during the installation

2020-07-19 Thread Lajos Katona
As https://review.opendev.org/741687 is merged shall we make this
committed?

** Changed in: neutron
   Status: New => Fix Released

** Changed in: neutron
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1887992

Title:
  [neutron-tempest-plugin] glance service failing during the
  installation

Status in neutron:
  Fix Committed

Bug description:
  In some neutron-tempest-plugin CI jobs, "glance" service is not starting:
  
https://133eaf5aa1613130d74d-11eee2f9a8da0aa272aa38b865c1ef08.ssl.cf5.rackcdn.com/715482/15/check/neutron-tempest-plugin-api/9c57856/job-output.txt

  Error:
  [ERROR] /opt/stack/devstack/lib/glance:361 g-api did not start

  That could be caused by a recent patch in devstack:
  https://review.opendev.org/#/c/741258

  According to the owner, that should be fixed with
  https://review.opendev.org/#/c/741687

  In those jobs, the service "tls-proxy" is disabled:
  https://github.com/openstack/neutron-tempest-
  plugin/blob/master/zuul.d/base.yaml

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1887992/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1887497] Re: Cleanup stale flows by cookie and table_id instead of just by cookie

2020-07-14 Thread Lajos Katona
Hi, thanks for the bug report, and thanks Liu for checking.
I feel that this is for now more an opinion; perhaps with some details it can 
become an RFE, which can be discussed at the drivers meeting with the rest of 
the team. What do you think?

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1887497

Title:
  Cleanup stale flows by cookie and table_id instead of just by cookie

Status in neutron:
  Opinion

Bug description:
  Pre-conditions: After restart neutron-ovs-agent.

  After I restarted neutron-ovs-agent, I found that neutron cleans up stale
  flows only by cookie, and the cookies in different tables are always the
  same; that means flows in table 20 can be cleaned up by cookies from
  table 0! I think the safer way is to clean up stale flows by cookie and
  table_id instead of just by cookie.
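  The suggested matching rule can be sketched as a filter over (cookie,
  table_id) pairs; the flow representation below is simplified for
  illustration and is not the agent's real data structure:

```python
# Simplified flow records; in the agent these come from dumped OVS flows
flows = [
    {"table_id": 0, "cookie": 0x1, "match": "in_port=1"},
    {"table_id": 20, "cookie": 0x1, "match": "dl_dst=fa:16:3e:00:00:01"},
    {"table_id": 20, "cookie": 0x2, "match": "dl_dst=fa:16:3e:00:00:02"},
]

def stale_flows(flows, old_cookie, table_id):
    """Select stale flows by BOTH cookie and table_id, not cookie alone."""
    return [f for f in flows
            if f["cookie"] == old_cookie and f["table_id"] == table_id]

# Cleaning table 20 no longer touches the table 0 entry with the same cookie
print(len(stale_flows(flows, 0x1, 20)))  # 1
```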

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1887497/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1869862] [NEW] neutron-tempest-plugin-designate-scenario fails frequently with image service doesn't have supported version

2020-03-31 Thread Lajos Katona
Public bug reported:

neutron-tempest-plugin-designate-scenario job fails frequently with the 
following error:
...
2020-03-30 18:49:44.170062 | controller | + 
/opt/stack/neutron-tempest-plugin/devstack/functions.sh:overridden_upload_image:256
 :   openstack --os-cloud=devstack-admin --os-region-name=RegionOne image 
create ubuntu-16.04-server-cloudimg-amd64-disk1 --property hw_rng_model=virtio 
--public --container-format=bare --disk-format qcow2
2020-03-30 18:49:46.242923 | controller | Failed to contact the endpoint at 
http://10.209.38.120/image for discovery. Fallback to using that endpoint as 
the base url.
2020-03-30 18:49:46.247351 | controller | Failed to contact the endpoint at 
http://10.209.38.120/image for discovery. Fallback to using that endpoint as 
the base url.
2020-03-30 18:49:46.247894 | controller | The image service for 
devstack-admin:RegionOne exists but does not have any supported versions.
2020-03-30 18:49:46.384047 | controller | + 
/opt/stack/neutron-tempest-plugin/devstack/customize_image.sh:upload_image:1 :  
 exit_trap
...

Example is here:
https://94d5d118ec3db75721c2-a00e37315b6784119b950c4b112ef30c.ssl.cf2.rackcdn.com/711610/13/check/neutron-tempest-plugin-designate-scenario/b23bb46/job-output.txt

Logstash query:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22The%20image%20service%20for%20devstack-admin%3ARegionOne%20exists%20but%20does%20not%20have%20any%20supported%20versions.%5C%22

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1869862

Title:
  neutron-tempest-plugin-designate-scenario fails frequently with
  image service doesn't have supported version

Status in neutron:
  New

Bug description:
  neutron-tempest-plugin-designate-scenario job fails frequently with the 
following error:
  ...
  2020-03-30 18:49:44.170062 | controller | + 
/opt/stack/neutron-tempest-plugin/devstack/functions.sh:overridden_upload_image:256
 :   openstack --os-cloud=devstack-admin --os-region-name=RegionOne image 
create ubuntu-16.04-server-cloudimg-amd64-disk1 --property hw_rng_model=virtio 
--public --container-format=bare --disk-format qcow2
  2020-03-30 18:49:46.242923 | controller | Failed to contact the endpoint at 
http://10.209.38.120/image for discovery. Fallback to using that endpoint as 
the base url.
  2020-03-30 18:49:46.247351 | controller | Failed to contact the endpoint at 
http://10.209.38.120/image for discovery. Fallback to using that endpoint as 
the base url.
  2020-03-30 18:49:46.247894 | controller | The image service for 
devstack-admin:RegionOne exists but does not have any supported versions.
  2020-03-30 18:49:46.384047 | controller | + 
/opt/stack/neutron-tempest-plugin/devstack/customize_image.sh:upload_image:1 :  
 exit_trap
  ...

  Example is here:
  
https://94d5d118ec3db75721c2-a00e37315b6784119b950c4b112ef30c.ssl.cf2.rackcdn.com/711610/13/check/neutron-tempest-plugin-designate-scenario/b23bb46/job-output.txt

  Logstash query:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22The%20image%20service%20for%20devstack-admin%3ARegionOne%20exists%20but%20does%20not%20have%20any%20supported%20versions.%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1869862/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1856156] [NEW] Constraints file is not considered during doc build

2019-12-12 Thread Lajos Katona
Public bug reported:

During the doc build the constraints file is not considered, which
causes the stable/stein docs job to fail since neutron-lib 1.30.0 was
released.

The traceback:
2019-12-11 13:17:24.407556 | ubuntu-bionic |   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/db/test_migrations.py",
 line 34, in <module>
2019-12-11 13:17:24.407622 | ubuntu-bionic | from 
neutron.db.migration.models import head as head_models
2019-12-11 13:17:24.407718 | ubuntu-bionic |   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/db/migration/models/head.py",
 line 29, in <module>
2019-12-11 13:17:24.407770 | ubuntu-bionic | from neutron.db import 
agentschedulers_db  # noqa
2019-12-11 13:17:24.407883 | ubuntu-bionic |   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/db/agentschedulers_db.py",
 line 47, in <module>
2019-12-11 13:17:24.407941 | ubuntu-bionic | class 
AgentSchedulerDbMixin(agents_db.AgentDbMixin):
2019-12-11 13:17:24.408050 | ubuntu-bionic |   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/db/agentschedulers_db.py",
 line 55, in AgentSchedulerDbMixin
2019-12-11 13:17:24.408095 | ubuntu-bionic | 
constants.AGENT_TYPE_LOADBALANCER: None,
2019-12-11 13:17:24.408181 | ubuntu-bionic | AttributeError: module 
'neutron_lib.constants' has no attribute 'AGENT_TYPE_LOADBALANCER'

The issue is present on master as well.
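A common remedy for this class of failure is to make the docs tox environment
install its dependencies under the upper-constraints file, so that a freshly
released neutron-lib cannot be picked up during the doc build. A minimal sketch
of the relevant tox.ini stanza (the constraints URL and the requirements path
are assumptions and must match the branch being built):

```ini
# Hypothetical tox.ini fragment: build docs with constrained dependencies
# instead of installing them unconstrained.
[testenv:docs]
deps =
  -c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}
  -r{toxinidir}/doc/requirements.txt
commands = sphinx-build -W -b html doc/source doc/build/html
```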

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc gate-failure in-stable-stein

** Tags removed: sta
** Tags added: in-stable-stein

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1856156

Title:
  Constraints file is not considered during doc build

Status in neutron:
  New

Bug description:
  During the doc build the constraints file is not considered, which
  causes the stable/stein docs job to fail since neutron-lib 1.30.0
  was released.

  The traceback:
  2019-12-11 13:17:24.407556 | ubuntu-bionic |   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/db/test_migrations.py",
 line 34, in <module>
  2019-12-11 13:17:24.407622 | ubuntu-bionic | from 
neutron.db.migration.models import head as head_models
  2019-12-11 13:17:24.407718 | ubuntu-bionic |   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/db/migration/models/head.py",
 line 29, in <module>
  2019-12-11 13:17:24.407770 | ubuntu-bionic | from neutron.db import 
agentschedulers_db  # noqa
  2019-12-11 13:17:24.407883 | ubuntu-bionic |   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/db/agentschedulers_db.py",
 line 47, in <module>
  2019-12-11 13:17:24.407941 | ubuntu-bionic | class 
AgentSchedulerDbMixin(agents_db.AgentDbMixin):
  2019-12-11 13:17:24.408050 | ubuntu-bionic |   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/db/agentschedulers_db.py",
 line 55, in AgentSchedulerDbMixin
  2019-12-11 13:17:24.408095 | ubuntu-bionic | 
constants.AGENT_TYPE_LOADBALANCER: None,
  2019-12-11 13:17:24.408181 | ubuntu-bionic | AttributeError: module 
'neutron_lib.constants' has no attribute 'AGENT_TYPE_LOADBALANCER'

  The issue is present on master as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1856156/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1841741] Re: Port-Forwarding can't be set to different protocol in the same IP and Port

2019-08-28 Thread Lajos Katona
*** This bug is a duplicate of bug 1799155 ***
https://bugs.launchpad.net/bugs/1799155

Hi, I marked this as a duplicate of the original bug report #1799155.
Unfortunately, as I see it, by policy it can't be backported, as the fix you 
referenced (https://review.opendev.org/613549) contains a DB schema change.

** This bug has been marked a duplicate of bug 1799155
   [L3][port_forwarding] two different protocols can not have the same 
internal/external port number at the same time

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1841741

Title:
  Port-Forwarding can't be set to different protocol in the same IP and
  Port

Status in neutron:
  New

Bug description:
  In the Rocky release, port forwarding can't be set to different
  protocols on the same IP and port.

  For example:

  Floating IP (IP: 140.128.100.100, Port: 100) mapping to private IP (IP:
  10.0.0.1, Port: 100, Protocol: tcp and udp)

  I got these error messages:

  {
  "NeutronError": {
  "message": "Bad port_forwarding request: A duplicate port forwarding 
entry with same attributes already exists, conflicting values are 
{'floatingip_id': u'cdbe8e24-f0dc-45c9-aec7-9609faf4234c', 'external_port': 
1000}.",
  "type": "BadRequest",
  "detail": ""
  }
  }

  I found that this bug has been solved and merged in the Stein release.
  Commit: 
https://github.com/openstack/neutron/commit/4b7a070b3f1c66afc48d1290ec31c840e367d8ee#diff-02580f01f1bafd7855f5f78a84ce1d5d
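  The behaviour can be illustrated with a small sketch of the duplicate
check: in Rocky the conflict key effectively ignores the protocol, so a tcp
and a udp mapping on the same external port collide, while the Stein fix
widens the key to include it. The key tuples below are illustrative, not the
actual database constraint:

```python
# Sketch of the duplicate check. With include_protocol=False (Rocky-like
# behaviour) the conflict key omits the protocol; with True (Stein-like
# behaviour) the same external port may be reused for another protocol.
def conflicts(existing, new, include_protocol):
    def key(pf):
        if include_protocol:
            return (pf["floatingip_id"], pf["external_port"], pf["protocol"])
        return (pf["floatingip_id"], pf["external_port"])
    return any(key(pf) == key(new) for pf in existing)

existing = [{"floatingip_id": "fip-1", "external_port": 100, "protocol": "tcp"}]
new = {"floatingip_id": "fip-1", "external_port": 100, "protocol": "udp"}

print(conflicts(existing, new, include_protocol=False))  # True  -> rejected
print(conflicts(existing, new, include_protocol=True))   # False -> allowed
```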

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1841741/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1841700] Re: instance ingress bandwidth limiting doesn't work in Ocata.

2019-08-28 Thread Lajos Katona
Due to the above, I am marking this bug as invalid. If you have further
questions, don't hesitate to ask on the #openstack-neutron IRC channel or
on the openstack-discuss mailing list.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1841700

Title:
  instance ingress bandwidth limiting doesn't work in Ocata.

Status in neutron:
  Invalid

Bug description:
  [Environment]

  Xenial-Ocata deployment

  [Description]

  The instance ingress bandwidth limit implementation was targeted for
  Ocata [0], but the full ingress/egress implementation was done during
  the Pike [1] cycle.

  However, it isn't reported or made explicit that the ingress direction
  isn't supported in Ocata, which causes the following exception when
  --ingress is specified.

  $ openstack network qos rule create --type bandwidth-limit --max-kbps 300 
--max-burst-kbits 300 --ingress bw-limiter
  Failed to create Network QoS rule: BadRequestException: 400: Client Error for 
url: https://openstack:9696/v2.0/qos/policies//bandwidth_limit_rules, 
Unrecognized attribute(s) 'direction'

  It would be desirable for this feature to be available in Ocata, to be able
  to set ingress/egress bandwidth limits on the ports.

  [0] https://blueprints.launchpad.net/neutron/+spec/instance-ingress-bw-limit
  [1] https://bugs.launchpad.net/neutron/+bug/1560961

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1841700/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1836037] [NEW] Routed provider networks nova inventory update fails

2019-07-10 Thread Lajos Katona
Public bug reported:

The patch https://review.opendev.org/663980 introduced a serious misreading of 
the placement API.
The lines 
https://review.opendev.org/#/c/663980/2/neutron/services/segments/plugin.py@220 
assume that "Show resource provider inventory" (see: 
https://developer.openstack.org/api-ref/placement/?expanded=show-resource-provider-inventory-detail#show-resource-provider-inventory)
 returns a dict like 
{'IPV4_ADDRESS': {'allocation_ratio': 42}},
but if we read the documentation, the truth is that the response is a dict like:
{'allocation_ratio': 42}

The other fix in that patch is good as it is
(https://review.opendev.org/#/c/663980/2/neutron/services/segments/plugin.py@255)
for "Update resource provider inventories" (see:
https://developer.openstack.org/api-ref/placement/?expanded=update-
resource-provider-inventories-detail#update-resource-provider-
inventories)
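The shape difference can be shown with a small sketch; the payloads below are
abbreviated stand-ins for real placement responses (which carry further
fields, e.g. resource_provider_generation):

```python
# "Show resource provider inventory" names a single resource class in the
# URL, so it returns the inventory fields directly, not keyed by class:
show_inventory = {"allocation_ratio": 42, "total": 254}

# "List resource provider inventories" is the call that nests the fields
# under the resource class name:
list_inventories = {
    "inventories": {"IPV4_ADDRESS": {"allocation_ratio": 42, "total": 254}}
}

# The buggy code read the "show" response as if it were the nested form,
# i.e. show_inventory["IPV4_ADDRESS"]["allocation_ratio"] -> KeyError.
print(show_inventory["allocation_ratio"])                        # 42
print(list_inventories["inventories"]["IPV4_ADDRESS"]["total"])  # 254
```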

** Affects: neutron
 Importance: Undecided
 Assignee: Lajos Katona (lajos-katona)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Lajos Katona (lajos-katona)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1836037

Title:
  Routed provider networks nova inventory update fails

Status in neutron:
  New

Bug description:
  The patch https://review.opendev.org/663980 introduced a serious misreading 
of the placement API.
  The lines 
https://review.opendev.org/#/c/663980/2/neutron/services/segments/plugin.py@220 
assume that "Show resource provider inventory" (see: 
https://developer.openstack.org/api-ref/placement/?expanded=show-resource-provider-inventory-detail#show-resource-provider-inventory)
 returns a dict like 
  {'IPV4_ADDRESS': {'allocation_ratio': 42}},
  but if we read the documentation, the truth is that the response is a dict 
like:
  {'allocation_ratio': 42}

  The other fix in that patch is good as it is
  
(https://review.opendev.org/#/c/663980/2/neutron/services/segments/plugin.py@255)
  for "Update resource provider inventories" (see:
  https://developer.openstack.org/api-ref/placement/?expanded=update-
  resource-provider-inventories-detail#update-resource-provider-
  inventories)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1836037/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1828543] [NEW] Routed provider networks: placement API handling errors

2019-05-10 Thread Lajos Katona
Public bug reported:

Routed provider networks is a feature which uses placement to store information 
about segments and the subnets in segments, and makes it possible for nova to 
use this information in scheduling.
On master the placement API calls are failing, first at the get_inventory call:

May 09 14:15:26 multicont neutron-server[31232]: DEBUG 
oslo_concurrency.lockutils [-] Lock 
"notifier-a76cce90-7366-495e-9784-9ddef689bc71" released by 
"neutron.notifiers.batch_notifier.BatchNotifier.queue_event.<locals>.synced_send"
 :: held 0.112s {{(pid=31252) inner 
/usr/local/lib/python3.6/dist-packages/oslo_concurrency/lockutils.py:339}}
May 09 14:15:26 multicont neutron-server[31232]: Traceback (most recent call 
last):
May 09 14:15:26 multicont neutron-server[31232]:   File 
"/opt/stack/neutron-lib/neutron_lib/placement/client.py", line 433, in 
get_inventory
May 09 14:15:26 multicont neutron-server[31232]: return 
self._get(url).json()
May 09 14:15:26 multicont neutron-server[31232]:   File 
"/opt/stack/neutron-lib/neutron_lib/placement/client.py", line 178, in _get
May 09 14:15:26 multicont neutron-server[31232]: **kwargs)
May 09 14:15:26 multicont neutron-server[31232]:   File 
"/usr/local/lib/python3.6/dist-packages/keystoneauth1/session.py", line 1037, 
in get
May 09 14:15:26 multicont neutron-server[31232]: return self.request(url, 
'GET', **kwargs)
May 09 14:15:26 multicont neutron-server[31232]:   File 
"/usr/local/lib/python3.6/dist-packages/keystoneauth1/session.py", line 890, in 
request
May 09 14:15:26 multicont neutron-server[31232]: raise 
exceptions.from_response(resp, method, url)
May 09 14:15:26 multicont neutron-server[31232]: 
keystoneauth1.exceptions.http.NotFound: Not Found (HTTP 404) (Request-ID: 
req-4133f4c6-df6c-467f-9d15-e8532fc6504b)
May 09 14:15:26 multicont neutron-server[31232]: During handling of the above 
exception, another exception occurred:
...
May 09 14:15:26 multicont neutron-server[31232]:   File 
"/opt/stack/neutron/neutron/services/segments/plugin.py", line 229, in 
_update_nova_inventory
May 09 14:15:26 multicont neutron-server[31232]: IPV4_RESOURCE_CLASS)
May 09 14:15:26 multicont neutron-server[31232]:   File 
"/opt/stack/neutron-lib/neutron_lib/placement/client.py", line 53, in wrapper
May 09 14:15:26 multicont neutron-server[31232]: return f(self, *a, **k)
May 09 14:15:26 multicont neutron-server[31232]:   File 
"/opt/stack/neutron-lib/neutron_lib/placement/client.py", line 444, in 
get_inventory
May 09 14:15:26 multicont neutron-server[31232]: if "No resource provider 
with uuid" in e.details:
May 09 14:15:26 multicont neutron-server[31232]: TypeError: argument of type 
'NoneType' is not iterable

Using stable/pike (not just for neutron) the syncing is OK.
I suppose that when the placement client code was moved to neutron-lib and 
changed to work with placement 1.20, something happened that makes routed 
networks placement calls fail.
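The final TypeError (as opposed to the underlying 404) comes from
substring-testing e.details when it is None. A minimal sketch of a defensive
guard, using a stand-in exception class so the snippet is self-contained (the
real one is keystoneauth1.exceptions.http.NotFound):

```python
# Stand-in for keystoneauth1's NotFound; its .details attribute can be
# None when the response body carries no parseable error message.
class NotFound(Exception):
    def __init__(self, details=None):
        super().__init__(details)
        self.details = details

def is_missing_provider(exc):
    # '"..." in None' raises TypeError, which is the crash in the
    # traceback above; check for None before the substring test.
    return exc.details is not None and "No resource provider with uuid" in exc.details

print(is_missing_provider(NotFound()))                                    # False
print(is_missing_provider(NotFound("No resource provider with uuid X")))  # True
```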

Some details:
Used reproduction steps: 
https://docs.openstack.org/neutron/latest/admin/config-routed-networks.html (of 
course the pike one for stable/pike deployment)
neutron: d0e64c61835801ad8fdc707fc123cfd2a65ffdd9
neutron-lib: bcd898220ff53b3fed46cef8c460269dd6af3492
placement: 57026255615679122e6f305dfa3520c012f57ca7
nova: 56fef7c0e74d7512f062c4046def10401df16565
Ubuntu 18.04.2 LTS based multihost devstack

** Affects: neutron
 Importance: Medium
 Assignee: Lajos Katona (lajos-katona)
 Status: New


** Tags: placement segments

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1828543

Title:
  Routed provider networks: placement API handling errors

Status in neutron:
  New

Bug description:
  Routed provider networks is a feature which uses placement to store 
information about segments and the subnets in segments, and makes it possible 
for nova to use this information in scheduling.
  On master the placement API calls are failing, first at the get_inventory call:

  May 09 14:15:26 multicont neutron-server[31232]: DEBUG 
oslo_concurrency.lockutils [-] Lock 
"notifier-a76cce90-7366-495e-9784-9ddef689bc71" released by 
"neutron.notifiers.batch_notifier.BatchNotifier.queue_event.<locals>.synced_send"
 :: held 0.112s {{(pid=31252) inner 
/usr/local/lib/python3.6/dist-packages/oslo_concurrency/lockutils.py:339}}
  May 09 14:15:26 multicont neutron-server[31232]: Traceback (most recent call 
last):
  May 09 14:15:26 multicont neutron-server[31232]:   File 
"/opt/stack/neutron-lib/neutron_lib/placement/client.py", line 433, in 
get_inventory
  May 09 14:15:26 multicont neutron-server[31232]: return 
self._get(url).json()
  May 09 14:15:26 multicont neutron-server[31232]:   File 
"/opt/stack/neutron-lib/neutron_lib/placement/client.py", line 178, in _get
  May 09 14:15

[Yahoo-eng-team] [Bug 1795824] [NEW] Functional tests neutron.tests.functional.agent.l3.test_legacy_router.L3AgentTestCase are sporadically failing

2018-10-03 Thread Lajos Katona
Public bug reported:

Functional tests 
neutron.tests.functional.agent.l3.test_legacy_router.L3AgentTestCase.test_legacy_router_lifecycle
 and
neutron.tests.functional.agent.l3.test_legacy_router.L3AgentTestCase.test_legacy_router_lifecycle
 (perhaps others as well) are failing in the gate; see the logstash query:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22test_legacy_router_lifecycle%5C%22%20AND%20build_name%3A%5C%22neutron-functional%5C%22%20AND%20build_status%3A%5C%22FAILURE%5C%22%20AND%20project%3A%5C%22openstack%2Fneutron%5C%22%20AND%20tags%3A%5C%22console%5C%22

Example traceback:
2018-10-02 17:33:48.845077 | primary | 2018-10-02 17:33:48.844 | 
neutron.tests.functional.agent.l3.test_legacy_router.L3AgentTestCase.test_legacy_router_lifecycle_with_no_gateway_subnet
2018-10-02 17:33:48.848853 | primary | 2018-10-02 17:33:48.848 | 

2018-10-02 17:33:48.850703 | primary | 2018-10-02 17:33:48.850 |
2018-10-02 17:33:48.852435 | primary | 2018-10-02 17:33:48.852 | Captured 
traceback:
2018-10-02 17:33:48.854060 | primary | 2018-10-02 17:33:48.853 | 
~~~
2018-10-02 17:33:48.855863 | primary | 2018-10-02 17:33:48.855 | Traceback 
(most recent call last):
2018-10-02 17:33:48.857769 | primary | 2018-10-02 17:33:48.857 |   File 
"neutron/tests/base.py", line 137, in func
2018-10-02 17:33:48.859708 | primary | 2018-10-02 17:33:48.859 | return 
f(self, *args, **kwargs)
2018-10-02 17:33:48.861760 | primary | 2018-10-02 17:33:48.861 |   File 
"neutron/tests/functional/agent/l3/test_legacy_router.py", line 91, in 
test_legacy_router_lifecycle_with_no_gateway_subnet
2018-10-02 17:33:48.863480 | primary | 2018-10-02 17:33:48.863 | 
v6_ext_gw_with_sub=False)
2018-10-02 17:33:48.865400 | primary | 2018-10-02 17:33:48.864 |   File 
"neutron/tests/functional/agent/l3/framework.py", line 302, in _router_lifecycle
2018-10-02 17:33:48.867545 | primary | 2018-10-02 17:33:48.867 | 
self._assert_onlink_subnet_routes(router, ip_versions)
2018-10-02 17:33:48.869522 | primary | 2018-10-02 17:33:48.869 |   File 
"neutron/tests/functional/agent/l3/framework.py", line 526, in 
_assert_onlink_subnet_routes
2018-10-02 17:33:48.871655 | primary | 2018-10-02 17:33:48.871 | 
namespace=ns_name)
2018-10-02 17:33:48.873748 | primary | 2018-10-02 17:33:48.873 |   File 
"neutron/agent/linux/ip_lib.py", line 1030, in get_routing_table
2018-10-02 17:33:48.875665 | primary | 2018-10-02 17:33:48.875 | return 
list(privileged.get_routing_table(ip_version, namespace))
2018-10-02 17:33:48.877614 | primary | 2018-10-02 17:33:48.877 |   File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/oslo_privsep/priv_context.py",
 line 207, in _wrap
2018-10-02 17:33:48.879571 | primary | 2018-10-02 17:33:48.879 | return 
self.channel.remote_call(name, args, kwargs)
2018-10-02 17:33:48.881395 | primary | 2018-10-02 17:33:48.881 |   File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/oslo_privsep/daemon.py",
 line 202, in remote_call
2018-10-02 17:33:48.883137 | primary | 2018-10-02 17:33:48.882 | raise 
exc_type(*result[2])
2018-10-02 17:33:48.884767 | primary | 2018-10-02 17:33:48.884 | KeyError


By manually executing the same tests (tox -e dsvm-functional-python35 -- 
neutron.tests.functional.agent.l3.test_legacy_router) I got the same error.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: functional-tests legacy-router-lifecycle neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1795824

Title:
  Functional tests
  neutron.tests.functional.agent.l3.test_legacy_router.L3AgentTestCase
  are sporadically failing

Status in neutron:
  New

Bug description:
  Functional tests 
neutron.tests.functional.agent.l3.test_legacy_router.L3AgentTestCase.test_legacy_router_lifecycle
 and
  
neutron.tests.functional.agent.l3.test_legacy_router.L3AgentTestCase.test_legacy_router_lifecycle
 (perhaps others as well) are failing in the gate; see the logstash query:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22test_legacy_router_lifecycle%5C%22%20AND%20build_name%3A%5C%22neutron-functional%5C%22%20AND%20build_status%3A%5C%22FAILURE%5C%22%20AND%20project%3A%5C%22openstack%2Fneutron%5C%22%20AND%20tags%3A%5C%22console%5C%22

  Example traceback:
  2018-10-02 17:33:48.845077 | primary | 2018-10-02 17:33:48.844 | 
neutron.tests.functional.agent.l3.test_legacy_router.L3AgentTestCase.test_legacy_router_lifecycle_with_no_gateway_subnet
  2018-10-02 17:33:48.848853 | primary | 2018-10-02 17:33:48.848 | 

[Yahoo-eng-team] [Bug 1778666] Re: QoS - “port” parameter is required in CLI in order to set/unset QoS policy to floating IP

2018-08-21 Thread Lajos Katona
I pushed the cherry-picked patch:
https://review.openstack.org/594244

** Project changed: neutron => python-openstackclient

** Changed in: python-openstackclient
 Assignee: (unassigned) => Lajos Katona (lajos-katona)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1778666

Title:
  QoS - “port” parameter is required in CLI in order to set/unset QoS
  policy to floating IP

Status in python-openstackclient:
  Confirmed

Bug description:
  ### User Workflow (Documentation) ###
  According to the documentation 
https://docs.openstack.org/neutron/queens/admin/config-qos.html in order to: 

  1) Set QoS to Floating IP:
  openstack floating ip set --qos-policy bw-limiter 
d0ed7491-3eb7-4c4f-a0f0-df04f10a067c

  2) Remove QoS from Floating IP (option 1):
  openstack floating ip set --no-qos-policy d0ed7491-3eb7-4c4f-a0f0-df04f10a067c

  3) Remove QoS from Floating IP (option 2):
  openstack floating ip unset --qos-policy d0ed7491-3eb7-4c4f-a0f0-df04f10a067c

  
  ### Testing Result ###

  1) Set QoS to floating IP
  # Scenario #
  openstack floating ip set --no-qos-policy b2895ad6-0e17-4be9-bf03-c11010c81584

  # Actual Result #
  Fails with:
  (openstack) floating ip set --no-qos-policy 
b2895ad6-0e17-4be9-bf03-c11010c81584
  usage: floating ip set [-h] --port <port> [--fixed-ip-address <ip-address>]
 [--qos-policy <qos-policy> | --no-qos-policy]
 <floating-ip>
  floating ip set: error: argument --port is required

  There is no match between the “set” command in the documentation and the 
implemented code.
  Once the “port” parameter is provided, QoS is set as expected.
  The “set” command provided in the documentation is absolutely OK 
(“floating-ip” should be the single mandatory parameter), so the problem here 
is our implementation.
  I think that we have to look up the “Port” value in our code, based on the 
user-provided “Floating-IP”, and then use it. This would be a kind of 
workaround and would be transparent to the user.
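  The suggested workaround can be sketched as follows; fetch_floating_ip is a
hypothetical stand-in for the Neutron "show floating IP" call (real code would
go through the SDK or neutronclient):

```python
# Sketch of the proposed workaround: when the user supplies only the
# floating IP, resolve its associated port server-side instead of
# demanding --port on the command line.
def fetch_floating_ip(fip_id):
    # Hypothetical lookup; a real implementation would call the API.
    return {"id": fip_id, "port_id": "port-1234", "qos_policy_id": None}

def build_fip_qos_update(fip_id, qos_policy_id):
    fip = fetch_floating_ip(fip_id)
    if fip["port_id"] is None:
        raise ValueError("floating IP %s has no associated port" % fip_id)
    return {"floatingip": {"port_id": fip["port_id"],
                           "qos_policy_id": qos_policy_id}}

update = build_fip_qos_update("b2895ad6", "bw-limiter")
print(update["floatingip"]["port_id"])  # port-1234
```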

  
  2) Unset QoS from Floating IP with “set –no-qos-policy” option
  # Scenario #
  openstack floating ip set --no-qos-policy b2895ad6-0e17-4be9-bf03-c11010c81584

  # Actual Result #
  Fails with:
  (openstack) floating ip set --no-qos-policy 
b2895ad6-0e17-4be9-bf03-c11010c81584
  usage: floating ip set [-h] --port <port> [--fixed-ip-address <ip-address>]
 [--qos-policy <qos-policy> | --no-qos-policy]
 <floating-ip>
  floating ip set: error: argument --port is required

  Similar to the previous “Set QoS” case, again there is no match between the 
command provided in the documentation and our current implementation.
  And again, the documentation command is OK; the problem is in our 
implementation. My suggestion is the same workaround, meaning getting the 
“Port” from code, based on the provided “Floating IP”.

  3) Unset QoS from Floating IP with “unset” option
  # Scenario #
  openstack floating ip unset --qos-policy b2895ad6-0e17-4be9-bf03-c11010c81584

  # Actual Result #
  Works as expected and no “port” value is needed; maybe this info will help 
the developers understand why scenarios #1 and #2 behave differently.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-openstackclient/+bug/1778666/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1746082] [NEW] Hard to navigate from trunk panel to the parent or subports details in network panel

2018-01-29 Thread Lajos Katona
Public bug reported:

On the trunk panel only the UUID of the parent port is visible, and on the 
trunk details page only the UUIDs of the subports. To find out the details 
of these ports the user has to navigate to the networks panel.
The more user-friendly approach is to provide a direct link from the trunk 
panel to the port details page.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: angularjs trunk

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1746082

Title:
  Hard to navigate from trunk panel to the parent or subports details in
  network panel

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  On the trunk panel only the UUID of the parent port is visible, and on the 
trunk details page only the UUIDs of the subports. To find out the details 
of these ports the user has to navigate to the networks panel.
  The more user-friendly approach is to provide a direct link from the trunk 
panel to the port details page.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1746082/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

