[Yahoo-eng-team] [Bug 1885921] Re: [RFE][floatingip port_forwarding] Add port ranges

2022-10-18 Thread Rodolfo Alonso
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1885921

Title:
  [RFE][floatingip port_forwarding] Add port ranges

Status in neutron:
  Fix Released

Bug description:
  Problem Description
  ===================

  Currently, if a user wants to create NAT rules that cover multiple
  ports, they need to create the rules one by one, which is cumbersome in
  some use cases. To address that, we suggest changing the Floating IP
  port forwarding API to allow the use of port ranges when creating NAT
  rules. The changes are presented as follows.

  Proposed Change
  ===============

  ### API JSON

  We propose to extend the current floating IP port forwarding API to
  handle a range of ports instead of a one-to-one mapping. We have
  different alternatives to implement such a feature. We can change the
  JSON the API receives, adding new attributes, such as
  `internal_port_range` or `internal_port_beg` and `internal_port_end`,
  or we can just send a String in the attribute `internal_port` such as
  `80-83` and the same to the `external_port`.

  For example:

  Current JSON:

  {
    "port_forwarding": {
      "protocol": "tcp",
      "internal_ip_address": "172.16.0.7",
      "internal_port": 80,
      "internal_port_id": "b67a7746-dc69-45b4-9b84-bb229fe198a0",
      "external_port": 8080,
      "description": "desc"
    }
  }

  Adding new attributes:

  {
    "port_forwarding": {
      "protocol": "tcp",
      "internal_ip_address": "172.16.0.7",
      "internal_port_beg": 80,
      "internal_port_end": 83,
      "internal_port_id": "b67a7746-dc69-45b4-9b84-bb229fe198a0",
      "external_port_beg": 8080,
      "external_port_end": 8083,
      "description": "desc"
    }
  }

  Or, the alternative, changing the content of the attributes:

  {
    "port_forwarding": {
      "protocol": "tcp",
      "internal_ip_address": "172.16.0.7",
      "internal_port": "80-83",
      "internal_port_id": "b67a7746-dc69-45b4-9b84-bb229fe198a0",
      "external_port": "8080-8083",
      "description": "desc"
    }
  }

  We believe that the last JSON format is the best one, because it
  requires fewer changes in API usage and is simple for the user to
  understand and use. Therefore, these are the changes we are proposing
  to implement.
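
  As an illustration, a request using the proposed range syntax could
  look like the python-requests sketch below. The endpoint path follows
  the existing floating IP port forwarding API; the range strings in
  "internal_port"/"external_port" are the proposal here, and the URL,
  token and floating IP UUID are placeholders:

    import requests

    NEUTRON_URL = "http://controller:9696/v2.0"
    TOKEN = "gAAAA..."  # a valid Keystone token (placeholder)
    FIP_ID = "a7536be0-95a8-4404-b9c9-8c321df50407"  # hypothetical FIP UUID

    payload = {
        "port_forwarding": {
            "protocol": "tcp",
            "internal_ip_address": "172.16.0.7",
            "internal_port": "80-83",      # proposed range string
            "internal_port_id": "b67a7746-dc69-45b4-9b84-bb229fe198a0",
            "external_port": "8080-8083",  # proposed range string
            "description": "desc",
        }
    }

    resp = requests.post(
        "%s/floatingips/%s/port_forwardings" % (NEUTRON_URL, FIP_ID),
        json=payload,
        headers={"X-Auth-Token": TOKEN},
    )
    resp.raise_for_status()
    print(resp.json())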

  ### Database persistence

  Besides the JSON (rest API), we will need to change the way that the
  application persists the port forwarding rules in the database. We
  have two different alternatives to change the database schema.

  Using the current database schema sample:

  
  +--------------------------+-----------------+-----------------+
  | id                       | A1              | B4              |
  +--------------------------+-----------------+-----------------+
  | floatingip_id            | C2              | C2              |
  +--------------------------+-----------------+-----------------+
  | external_port            | 80              | 81              |
  +--------------------------+-----------------+-----------------+
  | internal_neutron_port_id | D3              | D3              |
  +--------------------------+-----------------+-----------------+
  | protocol                 | tcp             | tcp             |
  +--------------------------+-----------------+-----------------+
  | socket                   | 172.16.0.7:8080 | 172.16.0.7:8081 |
  +--------------------------+-----------------+-----------------+

  The above DB dump shows the scenario of a user creating floating IP
  port forwarding rules with the port range 80-81 mapped to the VM
  port range 8080-8081. Using this method, we delegate all the
  responsibility for maintaining the ranges to the application level,
  since the port range is not represented in the DB schema.
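
  Under this first option, expanding a range into per-port rows would
  live entirely in application code. A rough Python illustration of the
  kind of bookkeeping that implies (parse_range/build_rows are
  hypothetical helpers, not Neutron code):

    def parse_range(spec):
        """Turn '80-83' (or a bare '80') into an inclusive list of ports."""
        text = str(spec)
        if "-" in text:
            beg, end = (int(p) for p in text.split("-"))
        else:
            beg = end = int(text)
        if beg > end:
            raise ValueError("invalid port range: %s" % spec)
        return list(range(beg, end + 1))

    def build_rows(external_spec, internal_ip, internal_spec):
        """Build one DB row per port pair, e.g. 80->8080, 81->8081."""
        external = parse_range(external_spec)
        internal = parse_range(internal_spec)
        if len(external) != len(internal):
            raise ValueError("ranges must have equal length")
        return [
            {"external_port": e, "socket": "%s:%s" % (internal_ip, i)}
            for e, i in zip(external, internal)
        ]

    # build_rows("80-81", "172.16.0.7", "8080-8081") yields exactly the
    # two rows shown in the dump above.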

  In a different approach, we can work with an extended database
  schema:

  +--------------------------+------------+
  | id                       | A1         |
  +--------------------------+------------+
  | floatingip_id            | C2         |
  +--------------------------+------------+
  | external_port            | 80-81      |
  +--------------------------+------------+
  | internal_neutron_port_id | D3         |
  +--------------------------+------------+
  | protocol                 | tcp        |
  +--------------------------+------------+
  | internal_ip_address      | 172.16.0.7 |
  +--------------------------+------------+
  | internal_port            | 8080-8081  |
  +--------------------------+------------+

  Using the above proposal, we will reduce the number of entries in the
  database.

[Yahoo-eng-team] [Bug 1891334] Re: [RFE] Enable change of CIDR on a subnet

2022-10-18 Thread Rodolfo Alonso
RFE implementation not attended, reopen if needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1891334

Title:
  [RFE] Enable change of CIDR on a subnet

Status in neutron:
  Won't Fix

Bug description:
  [Request]
  Reporting this RFE on behalf of a customer, who would like to inquire
  about the possibility of changing the CIDR of a subnet in Neutron.

  As of today, the only alternative for expansion is to create new
  subnets/gateways to accommodate more hosts. The customer's desire is
  to avoid adding subnets and, instead, to change the existing one.

  
  [Env]
  This is a Bionic/Stein Juju-based deployment, but would apply to any 
supported OpenStack version.

  
  [Workaround]
  Create more subnets.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1891334/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1921461] Re: [RFE] Enhancement to Neutron BGPaaS to directly support Neutron Routers & bgp-peering from such routers over internal & external Neutron Networks

2022-10-18 Thread Rodolfo Alonso
RFE implementation not attended, reopen if needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1921461

Title:
  [RFE] Enhancement to Neutron BGPaaS to directly support Neutron
  Routers & bgp-peering from such routers over internal & external
  Neutron Networks

Status in neutron:
  Won't Fix

Bug description:
  #Problem Description

  There are good foundation APIs in Neutron BGPaaS that brought BGP
  service functionality into Neutron through Neutron Dynamic Routing.

  However, there are telco use cases which require “multiple service-
  addresses hosted by a VNF” to be advertised via the BGP control plane
  towards peers which are ISP-PE-Routers. These “service-addresses” are
  typically non-Neutron IP networks and/or prefixes that are used
  internally inside the VNF applications. This advertisement enables the
  ISP-PE-Routers to learn such “service-addresses hosted by the VNF”,
  thus enabling L3 connectivity from ISP networks towards such
  service endpoints hosted by the VNF.

  The above requires BGPaaS APIs to support BGP-Peering directly towards
  the VNFs from a Neutron Router hosting the internal-networks of the
  VNF. In addition, we also require the BGPaaS API to support BGP-
  Peering towards the ISP-PE-Routers directly over the Neutron External
  Networks.

  Both of the above are not feasible today within the existing BGPaaS, because:
  a. The existing BGPaaS supports peering only over special networks which
  are not managed via Neutron.
  b. Similarly, there are no APIs to make the BGPSpeaker peer directly with
  VNFs over Neutron internal networks.

  There is another use case where we want to automate multiple BGP
  peerings towards VNFs from a given BGPSpeaker, as and when a VNF
  cluster is scaled out/in. For this we will bring in the
  bgp-peer-group concept and API for use with Neutron BGPaaS.

  Through this specification we want to address the above 3 use cases by:
  a. Enhancing BGPaaS API support within the “bgp” extension under
  neutron-dynamic-routing.
  b. Enhancing the BGPaaS reference implementations to support the
  enhanced APIs.

  #Proposed Change

  The proposal is to enhance the existing BGPaaS to allow a Neutron
  router to be associated with a BGP Speaker, and to allow the BGP
  Speaker to peer over both the internal and external networks present
  on that Neutron router. This will be implemented through enhancements
  to the neutron service and neutron-dynamic-routing. A BGP speaker will
  be associated with a router. The BGP speaker will run inside the L3
  router namespace, which enables access to all the Neutron router
  interfaces, i.e. both internal and external interfaces. The BGP
  functionality provided by OS-Ken will be reused to execute the BGP
  speaker functionality only within the Neutron router namespace.

  An “Enhanced-L3-Plugin” will run in the Neutron server on the
  controller host and an “Enhanced-L3-Agent” on the compute host. Once a
  router is associated to a bgpspeaker, the ‘Enhanced BGP Service
  Plugin’ will schedule the request to create a BGPSpeaker towards the
  ‘Enhanced-L3-Plugin’. The ‘Enhanced-L3-Plugin’ in turn will realize
  the scheduling of the BGP Speaker towards the ‘Enhanced-L3-Agent’ that
  is already hosting the router. The ‘Enhanced-L3-Agent’ realizes the
  bgpspeaker inside the router namespace, and the bgpspeaker can then
  peer with anything reachable from the router, through the router's
  interface IP addresses.

  The proposal is to provide the below functionalities.

  Use-case 1.a)
  ~~~~~~~~~~~~~
    1. Provide the ability to associate a single neutron router to a
       BGP Speaker (along with an optional address-scope).
    2. Provide the ability to disassociate that single neutron router
       from a BGP Speaker.
    3. Provide the ability to implicitly make a bgp-speaker highly
       available whenever the bgp-speaker is associated with an
       HA-capable neutron-router.
    4. Provide the ability for the BGP Speaker to expose the entire
       list of routes it is currently managing (across multiple
       bgp-peers).

  Use-case 1.b)
  ~~~~~~~~~~~~~
    1. Provide the ability to create a BGP Peer Group with BFD and
       other parameters.
    2. Provide the ability to delete a BGP Peer Group (when it is not
       in use by any BGPPeer).
    3. Provide the ability to create a bgp-peer using an existing
       bgp-peer-group.
    4. Provide the ability to create BGP peers with update-source
       and next-hop-self parameters.

  * Add the following new APIs to Neutron (more details in the spec):
  PUT /v2.0/bgp-speakers/{bgp_speaker_id}/add_router
  PUT /v2.0/bgp-speakers/{bgp_speaker_id}/remove_router
  GET /v2.0/bgp-speakers/{bgp_speaker_id}/get_routes
  POST /v2.0/bgp-peer-groups
  DELETE /v2.0/bgp-peer-groups/{bgp_peer_group_id}
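
  For illustration, driving the proposed endpoints might look like the
  sketch below. These APIs are this RFE's proposal, not an existing
  Neutron extension, and the URL, token, payload key and UUIDs are
  illustrative placeholders:

    import requests

    NEUTRON_URL = "http://controller:9696/v2.0"
    HEADERS = {"X-Auth-Token": "gAAAA..."}  # Keystone token (placeholder)
    SPEAKER_ID = "6c3e5f9a-0000-0000-0000-000000000000"  # hypothetical
    ROUTER_ID = "9b1dd2c4-0000-0000-0000-000000000000"   # hypothetical

    # Associate the router with the speaker (proposed add_router action).
    resp = requests.put(
        "%s/bgp-speakers/%s/add_router" % (NEUTRON_URL, SPEAKER_ID),
        json={"router_id": ROUTER_ID},
        headers=HEADERS,
    )
    resp.raise_for_status()

    # List every route the speaker currently manages (proposed action).
    routes = requests.get(
        "%s/bgp-speakers/%s/get_routes" % (NEUTRON_URL, SPEAKER_ID),
        headers=HEADERS,
    ).json()
    print(routes)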

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1825345] Re: [RFE] admin-state-down doesn't evacuate bindings in the dhcp_agent_id column

2022-10-18 Thread Rodolfo Alonso
RFE not attended, reopen if needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1825345

Title:
  [RFE] admin-state-down doesn't evacuate bindings in the dhcp_agent_id
  column

Status in neutron:
  Won't Fix

Bug description:
  Hi,

  This is a real report from the production front, with a deployment
  causing us a lot of head-scratching because of somehow-broken hardware.

  If, for some reason, a node running the neutron-dhcp-agent has some
  hardware issue, then an admin will probably want to disable the agent
  there. This is done with, for example:

  neutron agent-update --admin-state-down
  e865d619-b122-4234-aebb-3f5c24df1c8e

  or something like this too:

  openstack network agent set --disable
  e865d619-b122-4234-aebb-3f5c24df1c8e

  This works, and no new networks will be assigned to this agent in the
  future. However, if some networks were already assigned to this agent,
  they won't be evacuated.

  What needs to be done is:

  1/ Perform an update of the networkdhcpagentbindings table, and remove
  all instances of e865d619-b122-4234-aebb-3f5c24df1c8e that we see in
  dhcp_agent_id. The networks should be reassigned to other agents. Best
  would be to spread the load over many, if possible; otherwise
  reassigning all networks to a single agent would be ok-ish.
  2/ Restart the neutron-dhcp-agent process where the networks have been
  moved, so that new dnsmasq processes start for these networks.
  3/ Attempt to restart the disabled agent as well, knowing that reaching
  it may fail (since it has been disabled, that's probably because it's
  broken somehow...).

  Currently, one needs to do all of this by hand. I've done that, and
  restored connectivity to a working DHCP server, as our user expected.
  This is kind of painful and boring to do, plus it's not really what an
  OpenStack user expects.
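
  For what it's worth, step 1 can be scripted; a rough sketch with
  openstacksdk is below. It rehomes every network bound to the disabled
  agent onto the remaining alive, enabled DHCP agents in round-robin
  fashion. It assumes admin credentials in the environment and is not
  production-hardened:

    import itertools

    import openstack

    conn = openstack.connect()
    bad_agent = conn.network.get_agent(
        "e865d619-b122-4234-aebb-3f5c24df1c8e")

    # Every other DHCP agent that is alive and enabled is a target.
    targets = [
        a for a in conn.network.agents(agent_type="DHCP agent")
        if a.id != bad_agent.id and a.is_alive and a.is_admin_state_up
    ]
    rr = itertools.cycle(targets)

    for net in conn.network.dhcp_agent_hosting_networks(bad_agent):
        conn.network.remove_dhcp_agent_from_network(bad_agent, net)
        conn.network.add_dhcp_agent_to_network(next(rr), net)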

  In fact, if we could also provide something like this, it'd be super
  nice:

  openstack network agent evacuate e865d619-b122-4234-aebb-3f5c24df1c8e

  then we'd be using it during the "set --disable" process.

  Cheers,

  Thomas Goirand (zigo)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1825345/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1905295] Re: [RFE] Allow multiple external gateways on a router

2022-10-18 Thread Rodolfo Alonso
RFE implementation not attended, reopen if needed.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1905295

Title:
  [RFE] Allow multiple external gateways on a router

Status in neutron:
  Won't Fix

Bug description:
  I'd like to bring the following idea to the drivers' meeting. If this
  still looks like a good idea after that discussion, I'll open a spec
  so this can be properly commented on in gerrit. Until then feel free
  to comment here of course.

  # Problem Description

  A general router can be configured to connect and route to multiple
  external networks for higher availability and/or to balance the load.
  However the current Neutron API syntax allows exactly one external
  gateway for a router.

  https://docs.openstack.org/api-ref/network/v2/?expanded=create-router-detail#create-router

  {
    "router": {
      "name": "router1",
      "external_gateway_info": {
        "network_id": "ae34051f-aa6c-4c75-abf5-50dc9ac99ef3",
        "enable_snat": true,
        "external_fixed_ips": [
          {
            "ip_address": "172.24.4.6",
            "subnet_id": "b930d7f6-ceb7-40a0-8b81-a425dd994ccf"
          }
        ]
      },
      "admin_state_up": true
    }
  }

  However consider the following (simplified) network architecture as an
  example:

  R3 R4
   |X|
  R1 R2
   |X|
  C1 C2 ...

  (Sorry, my original, nice ascii art was eaten by launchpad. I hope
  this still conveys what I mean.)

  Where C1, C2, ... are compute nodes, R1 and R2 are OpenStack-managed
  routers, while R3 and R4 are provider edge routers. Between R1-R2 and
  R3-R4 Equal Cost Multipath (ECMP) routing is used to utilize all links
  in an active-active manner. In such an architecture it makes sense to
  represent R1 and R2 as 2 logical routers with 2 external gateways each,
  or in some cases (depending on other architectural choices) even as 1
  logical router with 4 external gateways. But with the current API that
  is not possible.

  # Proposed Change

  Extend the router API object with a new attribute
  'additional_external_gateways', for example:

  {
    "router": {
      "name": "router1",
      "admin_state_up": true,
      "external_gateway_info": {
        "enable_snat": false,
        "external_fixed_ips": [
          {
            "ip_address": "172.24.4.6",
            "subnet_id": "b930d7f6-ceb7-40a0-8b81-a425dd994ccf"
          }
        ],
        "network_id": "ae34051f-aa6c-4c75-abf5-50dc9ac99ef3"
      },
      "additional_external_gateways": [
        {
          "enable_snat": false,
          "external_fixed_ips": [
            {
              "ip_address": "172.24.5.6",
              "subnet_id": "62da64b0-29ab-11eb-9ed9-3b1175418487"
            }
          ],
          "network_id": "592d4716-29ab-11eb-a7dd-4f4b5e319915"
        },
        ...
      ]
    }
  }

  Edited via the following HTTP PUT methods with diff semantics:

  PUT /v2.0/routers/{router_id}/add_additional_external_gateways
  PUT /v2.0/routers/{router_id}/remove_additional_external_gateways

  We keep 'external_gateway_info' for backwards compatibility. When
  additional_external_gateways is an empty list, everything behaves as
  before. When additional_external_gateways are given, then the actual
  list of external gateways is (in Python-like pseudo-code):
  [external_gateway_info] + additional_external_gateways.
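
  Spelled out as a runnable sketch of the proposed semantics (the
  function name is only illustrative):

    def effective_external_gateways(router):
        """Legacy single gateway first (if set), then the additional ones."""
        gateways = []
        if router.get("external_gateway_info"):
            gateways.append(router["external_gateway_info"])
        gateways.extend(router.get("additional_external_gateways", []))
        return gateways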

  Unless otherwise specified, all non-directly-connected external IPs are
  routed towards the original external_gateway_info. However, this
  behavior may be overridden either by using (static) extraroutes, or by
  running routing protocols and routing towards the external gateway
  where a particular route was learned from.

  # Alternatives

  1) Using 4 logical routers with 1 external gateway each. However, in
  this case the API misses the information about which (2 or 4) logical
  routers represent the same backend router.

  2) Using a VRRP HA router. However this provides a different level of
  High Availability plus it is active-passive instead of active-active.

  3) Adding router interfaces (since their number is not limited in the
  API) instead of external gateways. However this creates confusion by
  blurring the line of what is internal and what is external to the
  cloud deployment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1905295/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1907089] Re: [RFE] Add BFD support for Neutron

2022-10-18 Thread Rodolfo Alonso
RFE implementation not attended, reopen if needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1907089

Title:
  [RFE] Add BFD support for Neutron

Status in neutron:
  Won't Fix

Bug description:
  I plan to open a spec with more details as gerrit is more suitable for
  discussions.

  # Problem description

  BFD (Bidirectional Forwarding Detection) is used to detect link
  failures between routers. It can be helpful to
  * detect whether an extra route (a nexthop-destination pair) is alive
  or not, and change routes accordingly;
  * help routing protocols like ECMP or BGP to change routing decisions
  based on link status.

  # Proposed change (more details are coming in the spec)

  * Add the following new APIs to Neutron:
  ** Handle (create, list, show, update, delete) bfd_monitors:
  POST /v2.0/bfd_monitors
  GET /v2.0/bfd_monitors
  GET /v2.0/bfd_monitors/{monitor_uuid}
  DELETE /v2.0/bfd_monitors/{monitor_uuid}
  PUT /v2.0/bfd_monitors/{monitor_uuid}

  ** Get the current status of a bfd_monitor (as the current status is
  fetched from the backend, this can be an expensive operation, so it is
  better not to mix it with the show bfd_monitors operation):
  GET /v2.0/bfd_monitors/{monitor_uuid}/monitor_status
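
  As a sketch only, creating a monitor through the proposed endpoint
  could look like the snippet below. The endpoint is from the list
  above, but every payload field is a guess at what a BFD monitor might
  carry; the real schema belongs to the spec:

    import requests

    NEUTRON_URL = "http://controller:9696/v2.0"
    HEADERS = {"X-Auth-Token": "gAAAA..."}  # Keystone token (placeholder)

    monitor = {
        "bfd_monitor": {
            "name": "uplink-probe",  # hypothetical field
            "min_rx": 300,           # hypothetical field, milliseconds
            "min_tx": 300,           # hypothetical field, milliseconds
            "multiplier": 3,         # hypothetical detect multiplier
        }
    }
    resp = requests.post(
        "%s/bfd_monitors" % NEUTRON_URL, json=monitor, headers=HEADERS)
    resp.raise_for_status()
    print(resp.json())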

  * Change the existing router API
  ** Associate a bfd_monitor to an extra route:
  PUT /v2.0/routers/{router_uuid}/add_extraroutes OR PUT /v2.0/routers/{router_id}
  {"router": {"routes": [{"destination": "10.0.3.0/24", "nexthop": "10.0.0.13", "bfd": <bfd_monitor_uuid>}]}}

  ** show routes status for a given router:
  GET /v2.0/routers/{router_id}/routes_status

  BFD not only gives a monitoring option, but is generally used to allow
  a quick response to link status changes. In Neutron's case this can be
  the removal of a dead route from the routing table, and adding it back
  if the monitor status goes to UP again. Other backends and
  switch/routing implementations can have more sophisticated solutions,
  of course.

  A simple opensource backend can be OVS, as OVS is capable of BFD
  monitoring.
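
  For reference, OVS exposes BFD per interface through ovsdb columns; a
  minimal sketch of toggling it from Python follows ("patch-to-peer" is
  a placeholder interface name, and error handling is omitted):

    import subprocess

    iface = "patch-to-peer"  # placeholder interface name
    # Enable BFD on the interface.
    subprocess.run(
        ["ovs-vsctl", "set", "Interface", iface, "bfd:enable=true"],
        check=True,
    )
    # Read back the BFD status column,
    # e.g. {forwarding="true", state=up, ...}
    status = subprocess.run(
        ["ovs-vsctl", "get", "Interface", iface, "bfd_status"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    print(status)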

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1907089/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1921126] Re: [RFE] Allow explicit management of default routes

2022-10-18 Thread Rodolfo Alonso
RFE implementation not attended, reopen if needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1921126

Title:
  [RFE] Allow explicit management of default routes

Status in neutron:
  Won't Fix

Bug description:
  This RFE proposes to allow explicit management of the default route(s)
  of a Neutron router.  This is mostly useful for a user to install
  multiple default routes for Equal Cost Multipath (ECMP) and treat all
  these routes uniformly.

  Since I have already written a spec proposal for this, please see the
  details there:

  https://review.opendev.org/c/openstack/neutron-specs/+/781475

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1921126/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1828543] Re: Routed provider networks: placement API handling errors

2022-10-18 Thread Rodolfo Alonso
RFE implementation not attended, reopen if needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1828543

Title:
  Routed provider networks: placement API handling errors

Status in neutron:
  Won't Fix

Bug description:
  Routed provider networks is a feature which uses placement to store
  information about segments and the subnets in segments, making it
  possible for nova to use this information in scheduling.
  On master the placement API calls are failing, first at the
  get_inventory call:

  May 09 14:15:26 multicont neutron-server[31232]: DEBUG 
oslo_concurrency.lockutils [-] Lock 
"notifier-a76cce90-7366-495e-9784-9ddef689bc71" released by 
"neutron.notifiers.batch_notifier.BatchNotifier.queue_event..synced_send"
 :: held 0.112s {{(pid=31252) inner 
/usr/local/lib/python3.6/dist-packages/oslo_concurrency/lockutils.py:339}}
  May 09 14:15:26 multicont neutron-server[31232]: Traceback (most recent call 
last):
  May 09 14:15:26 multicont neutron-server[31232]:   File 
"/opt/stack/neutron-lib/neutron_lib/placement/client.py", line 433, in 
get_inventory
  May 09 14:15:26 multicont neutron-server[31232]: return 
self._get(url).json()
  May 09 14:15:26 multicont neutron-server[31232]:   File 
"/opt/stack/neutron-lib/neutron_lib/placement/client.py", line 178, in _get
  May 09 14:15:26 multicont neutron-server[31232]: **kwargs)
  May 09 14:15:26 multicont neutron-server[31232]:   File 
"/usr/local/lib/python3.6/dist-packages/keystoneauth1/session.py", line 1037, 
in get
  May 09 14:15:26 multicont neutron-server[31232]: return self.request(url, 
'GET', **kwargs)
  May 09 14:15:26 multicont neutron-server[31232]:   File 
"/usr/local/lib/python3.6/dist-packages/keystoneauth1/session.py", line 890, in 
request
  May 09 14:15:26 multicont neutron-server[31232]: raise 
exceptions.from_response(resp, method, url)
  May 09 14:15:26 multicont neutron-server[31232]: 
keystoneauth1.exceptions.http.NotFound: Not Found (HTTP 404) (Request-ID: 
req-4133f4c6-df6c-467f-9d15-e8532fc6504b)
  May 09 14:15:26 multicont neutron-server[31232]: During handling of the above 
exception, another exception occurred:
  ...
  May 09 14:15:26 multicont neutron-server[31232]:   File 
"/opt/stack/neutron/neutron/services/segments/plugin.py", line 229, in 
_update_nova_inventory
  May 09 14:15:26 multicont neutron-server[31232]: IPV4_RESOURCE_CLASS)
  May 09 14:15:26 multicont neutron-server[31232]:   File 
"/opt/stack/neutron-lib/neutron_lib/placement/client.py", line 53, in wrapper
  May 09 14:15:26 multicont neutron-server[31232]: return f(self, *a, **k)
  May 09 14:15:26 multicont neutron-server[31232]:   File 
"/opt/stack/neutron-lib/neutron_lib/placement/client.py", line 444, in 
get_inventory
  May 09 14:15:26 multicont neutron-server[31232]: if "No resource provider 
with uuid" in e.details:
  May 09 14:15:26 multicont neutron-server[31232]: TypeError: argument of type 
'NoneType' is not iterable

  Using stable/pike (not just for neutron) the syncing is OK.
  I suppose that, as the placement client code was moved to neutron-lib
  and changed to work with placement 1.20, something happened that makes
  the routed networks placement calls fail.
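
  The TypeError at the bottom of the traceback suggests e.details can be
  None. A sketch of the kind of defensive guard that would avoid it is
  below; this is illustrative only, not the actual neutron-lib fix, and
  ResourceProviderNotFound is a hypothetical stand-in exception:

    from keystoneauth1.exceptions import http as ks_http


    class ResourceProviderNotFound(Exception):
        """Hypothetical stand-in for a dedicated 'provider missing' error."""


    def get_inventory(session, url, resource_provider_uuid):
        try:
            return session.get(url).json()
        except ks_http.NotFound as e:
            # e.details may be None (exactly the TypeError above), so
            # test it before doing substring matching.
            if e.details and "No resource provider with uuid" in e.details:
                raise ResourceProviderNotFound(resource_provider_uuid)
            raise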

  Some details:
  Used reproduction steps: 
https://docs.openstack.org/neutron/latest/admin/config-routed-networks.html (of 
course the pike one for stable/pike deployment)
  neutron: d0e64c61835801ad8fdc707fc123cfd2a65ffdd9
  neutron-lib: bcd898220ff53b3fed46cef8c460269dd6af3492
  placement: 57026255615679122e6f305dfa3520c012f57ca7
  nova: 56fef7c0e74d7512f062c4046def10401df16565
  Ubuntu 18.04.2 LTS based multihost devstack

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1828543/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1875516] Re: [RFE] Allow sharing security groups as read-only

2022-10-18 Thread Rodolfo Alonso
RFE implementation not attended, reopen if needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1875516

Title:
  [RFE] Allow sharing security groups as read-only

Status in neutron:
  Won't Fix

Bug description:
  Currently, security groups can be shared with the rbac system, but the
  only valid action is `access_as_shared`, which allows the target
  tenant to create/delete (only) new rules on the security group. This
  works fine for use-cases where the group should be shared in a nearly
  equal way.

  [Problem description]
  Some users/services may want a security group to be visible, but read-only. A 
prime example of this would be to enable ProjectB to add a security group owned 
by ProjectA as a remotely trusted group on their own security group.
  The immediate need for this is found in the following Octavia patch:
  https://review.opendev.org/723735

  Octavia would like to share the security group it creates for each
  load-balancer with the load-balancer's owner, so they can open access
  to their backend members for only a specific load-balancer.

  [Proposed solution]
  Add a new action type for security group RBAC: `access_as_readonly` (or 
similar, name up for debate). This action would allow the target tenant to see 
the shared security group with Show/List, but not create/delete new rules for 
it or change it in any way.
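
  A sketch of what sharing could look like from the owner's side with
  openstacksdk, assuming the proposed (not yet existing) action name;
  the UUIDs are placeholders, and older SDK releases name the target
  field target_tenant instead of target_project_id:

    import openstack

    conn = openstack.connect()
    policy = conn.network.create_rbac_policy(
        object_type="security_group",
        object_id="0ec41b25-0000-0000-0000-000000000000",  # ProjectA's SG
        action="access_as_readonly",  # the action this RFE proposes
        target_project_id="b1f2b9d8-0000-0000-0000-000000000000",  # ProjectB
    )
    print(policy.id)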

  [Alternatives]
  Overload `access_as_external` to be valid for security groups as well, and 
define it to mean the same as above (entirely read-only access). This makes 
some sense, but it is probably cleaner to simply add a new action.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1875516/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1599936] Re: l2gw provider config prevents *aas provider config from being loaded

2022-10-18 Thread Lajos Katona
Implicit provider loading was merged a long time ago.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1599936

Title:
  l2gw provider config prevents *aas provider config from being loaded

Status in devstack:
  Invalid
Status in networking-l2gw:
  Invalid
Status in neutron:
  Fix Released

Bug description:
  The networking-l2gw devstack plugin stores its service_providers config
  in /etc/l2gw_plugin.ini and adds a --config-file option for it.
  As a result, neutron-server is invoked like the following:

  /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf
  --config-file /etc/neutron/plugins/midonet/midonet.ini --config-file
  /etc/neutron/l2gw_plugin.ini

  This breaks *aas service providers because NeutronModule.service_providers
  finds l2gw providers in cfg.CONF.service_providers.service_provider and
  thus doesn't look at the *aas service_providers config, which is in
  /etc/neutron/neutron_*aas.conf.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1599936/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1993288] [NEW] RFE: Adopt Keystone unified limits as quota driver for Neutron

2022-10-18 Thread Lajos Katona
Public bug reported:

Keystone has the ability to store and relay project-specific limits (see [1]).
The API (see [2]) provides a way for the admin to create limits for each
project's resources.
The feature is considered ready, but even the API (via oslo_limit) can still
be changed as more and more projects adopt it, based on user feedback.

A nice guideline on how to use unified limits and how to adopt them in a
project is under [3].

Currently Nova (see [4]) and Glance (see [5]) have partly implemented the
usage of unified limits. It is still experimental.

Cinder checked this option but decided to wait until unified limits is more
mature (see [7]).

Pros (as I see them):
* A common OpenStack-wide API for admins to define limits for projects.
* Long-term support for other enforcement models like hierarchies (as I see
it, this is still not supported in oslo_limit, see [8]).

Cons (as I see them):
* Keystone becomes a bottleneck: every operation needs an API request (there
is some caching in oslo_limit).
* The concurrency issue: there is no db_lock now, so we have to be sure that
we handle concurrent resource usage at the API level.
* All resources must first be registered in the Keystone API, otherwise the
quota check/enforcement will fail.
* It is not yet ready (see the big warning on top of [1]).


[1]: https://docs.openstack.org/keystone/latest/admin/unified-limits.html
[2]: https://docs.openstack.org/api-ref/identity/v3/#unified-limits
[3]: 
https://docs.openstack.org/project-team-guide/technical-guides/unified-limits.html
[4]: https://review.opendev.org/q/topic:bp%252Funified-limits-nova
[5]: https://review.opendev.org/q/topic:bp%252Fglance-unified-quotas
[6]: 
https://docs.openstack.org/keystone/latest/admin/unified-limits.html#strict-two-level
[7]: 
https://specs.openstack.org/openstack/cinder-specs/specs/zed/quota-system.html#unified-limits
[8]: 
https://opendev.org/openstack/oslo.limit/src/branch/master/oslo_limit/limit.py#L223-L240
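
As a minimal sketch of what enforcement through oslo.limit looks like for a
hypothetical "network" resource (it assumes the [oslo_limit] auth options are
configured and the registered limit already exists in Keystone, otherwise
enforce() fails, as noted in the cons above):

    from oslo_limit import exception as limit_exc
    from oslo_limit import limit


    def count_usage(project_id, resource_names):
        # Callback oslo.limit invokes to learn current consumption; a real
        # driver would count rows in the Neutron DB here.
        return {name: 0 for name in resource_names}


    enforcer = limit.Enforcer(count_usage)
    try:
        # Would creating one more network exceed the project's limit?
        enforcer.enforce("project-uuid", {"network": 1})
    except limit_exc.ProjectOverLimit:
        print("quota exceeded")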

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1993288

Title:
  RFE: Adopt Keystone unified limits as quota driver for Neutron

Status in neutron:
  New

Bug description:
  Keystone has the ability to store and relay project-specific limits
  (see [1]). The API (see [2]) provides a way for the admin to create
  limits for each project's resources.
  The feature is considered ready, but even the API (via oslo_limit) can
  still be changed as more and more projects adopt it, based on user
  feedback.

  A nice guideline on how to use unified limits and how to adopt them in
  a project is under [3].

  Currently Nova (see [4]) and Glance (see [5]) have partly implemented
  the usage of unified limits. It is still experimental.

  Cinder checked this option but decided to wait until unified limits is
  more mature (see [7]).

  Pros (as I see them):
  * A common OpenStack-wide API for admins to define limits for projects.
  * Long-term support for other enforcement models like hierarchies (as I
  see it, this is still not supported in oslo_limit, see [8]).

  Cons (as I see them):
  * Keystone becomes a bottleneck: every operation needs an API request
  (there is some caching in oslo_limit).
  * The concurrency issue: there is no db_lock now, so we have to be sure
  that we handle concurrent resource usage at the API level.
  * All resources must first be registered in the Keystone API, otherwise
  the quota check/enforcement will fail.
  * It is not yet ready (see the big warning on top of [1]).

  
  [1]: https://docs.openstack.org/keystone/latest/admin/unified-limits.html
  [2]: https://docs.openstack.org/api-ref/identity/v3/#unified-limits
  [3]: 
https://docs.openstack.org/project-team-guide/technical-guides/unified-limits.html
  [4]: https://review.opendev.org/q/topic:bp%252Funified-limits-nova
  [5]: https://review.opendev.org/q/topic:bp%252Fglance-unified-quotas
  [6]: 
https://docs.openstack.org/keystone/latest/admin/unified-limits.html#strict-two-level
  [7]: 
https://specs.openstack.org/openstack/cinder-specs/specs/zed/quota-system.html#unified-limits
  [8]: 
https://opendev.org/openstack/oslo.limit/src/branch/master/oslo_limit/limit.py#L223-L240

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1993288/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1993181] Re: [OVN] OVN metadata "MetadataProxyHandler" not working if workers=0

2022-10-18 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/861649
Committed: 
https://opendev.org/openstack/neutron/commit/f43891bf866b65ceef0e51633afbbf57ee2a6be8
Submitter: "Zuul (22348)"
Branch:master

commit f43891bf866b65ceef0e51633afbbf57ee2a6be8
Author: Rodolfo Alonso Hernandez 
Date:   Wed Oct 5 13:22:29 2022 +0200

[OVN] Allow to execute ``MetadataProxyHandler`` in a local thread

If configuration option "metadata_workers=0", the OVN metadata agent
will try to spawn the ``MetadataProxyHandler`` instance inside a local
thread, instead of creating a new process. In this case, the method
``MetadataProxyHandler.post_fork_initialize`` is never called and the
SB IDL is never created.

This patch passes the OVN metadata agent SB IDL instance to the proxy
handler instance. This also reduces the number of OVN database active
connections.

Closes-Bug: #1993181
Change-Id: If9d827228002de7e3a55be660da266b60b0dfb79


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1993181

Title:
  [OVN] OVN metadata "MetadataProxyHandler" not working if workers=0

Status in neutron:
  Fix Released

Bug description:
  The OVN metadata service can spawn several "MetadataProxyHandler"
  instances in separate processes. The number of workers
  ("metadata_workers") will define how many processes will be created.

  If this configuration variable is set to zero, the parent process (in
  this case the OVN metadata agent) won't create new processes but will
  start the service itself. The "wsgi.Server" instance will then use the
  thread pool "self.pool = eventlet.GreenPool(1)" to execute the
  application. The server application is an instance of
  "neutron.agent.ovn.metadata.server.MetadataProxyHandler".

  This instance of "MetadataProxyHandler" has a SB IDL connection. The
  IDL connection is initialized in the "post_fork_initialize" call,
  which is executed when the (resources.PROCESS, events.AFTER_INIT)
  event is received. The problem is that when the "MetadataProxyHandler"
  instance runs inside a thread, not a separate process, this method is
  never called and the IDL connection is never initialized.

  In order to solve this issue, and at the same time save OVN DB
  connections, the proposal is to reuse the OVN metadata agent's IDL
  connection in the proxy thread.
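
  The pattern, reduced to a sketch (class and helper names here are
  illustrative, not the actual Neutron code):

    class MetadataProxyHandler(object):
        def __init__(self, conf, sb_idl=None):
            self.conf = conf
            # Reuse the agent's already-open connection when running as a
            # local thread (metadata_workers=0); otherwise it is created
            # in the post-fork hook below.
            self.sb_idl = sb_idl

        def post_fork_initialize(self, resource, event, trigger, **kwargs):
            if self.sb_idl is None:
                self.sb_idl = connect_to_ovn_sb(self.conf)


    def connect_to_ovn_sb(conf):
        """Hypothetical stand-in for building an OVN southbound IDL."""
        raise NotImplementedError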

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1993181/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp