[Yahoo-eng-team] [Bug 1993503] Re: grub_dpkg writes wrong device into debconf

2022-10-20 Thread Brett Holman
** Also affects: subiquity
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1993503

Title:
  grub_dpkg writes wrong device into debconf

Status in cloud-init:
  New
Status in subiquity:
  New

Bug description:
  After auto-installing Ubuntu 22.04 onto an LV on an mdraid 1 with two
  disks, cc_grub_dpkg overrides the correct `grub-pc/install_devices`
  debconf entry with a wrong one on first boot:

  ```
  ~# debconf-show grub-pc | grep grub-pc/install_devices:
  * grub-pc/install_devices: /dev/disk/by-id/dm-name-vg0-lv_root
  ```

  This breaks grub updates.
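
  A minimal sketch of why detection can land on the wrong device in this
  layout (illustrative paths and code, not cloud-init's actual
  cc_grub_dpkg logic): resolving the root filesystem's source on
  LV-on-mdraid yields a device-mapper node, which is not a valid
  grub-install target.

  ```python
  # Hypothetical illustration: following the by-id symlink for the root
  # source returns the dm-* node (e.g. /dev/dm-0), not the underlying
  # md member disks that grub-pc/install_devices should list.
  import os

  def probed_install_device(root_source="/dev/disk/by-id/dm-name-vg0-lv_root"):
      return os.path.realpath(root_source)

  print(probed_install_device())
  ```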

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1993503/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1993742] [NEW] Foreign key constraint fails with federated LDAP backed domain

2022-10-20 Thread Andy Gomez
Public bug reported:

We have an LDAP-backed federated domain with OIDC via Okta.
When trying to log in, we see the error below.

CRITICAL keystone [req-f29ebb11-6626-4a70-99b8-966dd06f2409 - - - - -] 
Unhandled error: oslo_db.exception.DBReferenceError: 
(pymysql.err.IntegrityError) (1452, 'Cannot add or update a child row: a 
foreign key constraint fails (`keystone`.`expiring_user_group_membership`, 
CONSTRAINT `expiring_user_group_membership_ibfk_2` FOREIGN KEY (`group_id`) 
REFERENCES `group` (`id`))')
 [SQL: 
INSERT INTO expiring_user_group_membership (user_id, group_id, idp_id, 
last_verified) VALUES (%(user_id)s, %(group_id)s, %(idp_id)s, 
%(last_verified)s)]

Login works if we disable foreign key checks.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1993742

Title:
  Foreign key constraint fails with federated LDAP backed domain

Status in OpenStack Identity (keystone):
  New

Bug description:
  We have an LDAP-backed federated domain with OIDC via Okta.
  When trying to log in, we see the error below.

  CRITICAL keystone [req-f29ebb11-6626-4a70-99b8-966dd06f2409 - - - - -] 
Unhandled error: oslo_db.exception.DBReferenceError: 
(pymysql.err.IntegrityError) (1452, 'Cannot add or update a child row: a 
foreign key constraint fails (`keystone`.`expiring_user_group_membership`, 
CONSTRAINT `expiring_user_group_membership_ibfk_2` FOREIGN KEY (`group_id`) 
REFERENCES `group` (`id`))')
   
[SQL: INSERT INTO expiring_user_group_membership (user_id, group_id, idp_id, 
last_verified) VALUES (%(user_id)s, %(group_id)s, %(idp_id)s, 
%(last_verified)s)]

  Login works if we disable foreign key checks.
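
  A hedged diagnostic sketch (connection parameters and the group id are
  placeholders): check whether the group_id keystone tries to insert
  actually exists in the local `group` table. With an LDAP-backed domain
  the group may exist only in LDAP, so the foreign key on
  expiring_user_group_membership cannot be satisfied.

    import pymysql

    conn = pymysql.connect(host="localhost", user="keystone",
                           password="secret", database="keystone")
    with conn.cursor() as cur:
        # `group` is a reserved word in MySQL, hence the backticks.
        cur.execute("SELECT COUNT(*) FROM `group` WHERE id = %s",
                    ("0123456789abcdef0123456789abcdef",))
        (count,) = cur.fetchone()
        print("group row present:", bool(count))
    conn.close()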

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1993742/+subscriptions




[Yahoo-eng-team] [Bug 1993736] [NEW] nova does not log delete action when deleting shelved_offloaded instances

2022-10-20 Thread Alex Chan
Public bug reported:

Description
===========
When deleting an instance that is shelved, the delete action is not
added to the instance action list.

Steps to reproduce
==================
$ nova shelve <server>
$ nova delete <server>
$ nova instance-action-list <server>

Expected result
===============
The delete action should show up in the action list.

Actual result
=============
The delete action does not show up in the action list.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1993736

Title:
  nova does not log delete action when deleting shelved_offloaded
  instances

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===========
  When deleting an instance that is shelved, the delete action is not
  added to the instance action list.

  Steps to reproduce
  ==================
  $ nova shelve <server>
  $ nova delete <server>
  $ nova instance-action-list <server>

  Expected result
  ===============
  The delete action should show up in the action list.

  Actual result
  =============
  The delete action does not show up in the action list.
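
  A hedged sketch of the kind of change this implies: record the delete
  action in the "local delete" path nova uses for shelved_offloaded
  instances. objects.InstanceAction.action_start and
  instance_actions.DELETE exist in nova, but the placement below is an
  assumption, not a verified patch.

    from nova.compute import instance_actions
    from nova import objects

    def _local_delete(context, instance):
        # Record the action so "nova instance-action-list" shows the
        # delete even when no compute host processes it.
        objects.InstanceAction.action_start(
            context, instance.uuid, instance_actions.DELETE,
            want_result=False)
        # ... proceed with destroying the instance record ...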

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1993736/+subscriptions




[Yahoo-eng-team] [Bug 1389772] Re: Glance image hash use MD5

2022-10-20 Thread Brian Rosmaita
Glance uses os_hash_algo and os_hash_value since Rocky (default
os_hash_algo is sha512).  Legacy 'checksum' field is populated for
backward compatibility.
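
A minimal sketch of verifying a downloaded image against the multihash
fields instead of the legacy checksum (the expected digest below is a
placeholder):

    import hashlib

    def verify_image(path, os_hash_algo="sha512",
                     os_hash_value="<expected hex digest>"):
        h = hashlib.new(os_hash_algo)
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest() == os_hash_value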

** Changed in: glance
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1389772

Title:
  Glance image hash use MD5

Status in Glance:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Apparently, Glance still uses MD5 to hash images. Considering the
  recently disclosed attack[1] (which supposedly allows generating
  chosen collisions in a practical amount of time), it's safe to assume
  MD5 is broken for verifying anything...

  If someone is able to generate another image with the same hash, I
  guess it will appear as another entry in "glance list", but then
  besides the glance uuid, there is no other way to identify the
  malicious one, right?

  I guess it would be a nice security hardening change to, at least,
  allow the hash algorithm to be configured.

  [1]: http://natmchugh.blogspot.co.uk/2014/10/how-i-created-two-images-with-same-md5.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1389772/+subscriptions




[Yahoo-eng-team] [Bug 1858610] Re: [RFE] Qos policy not supporting sharing bandwidth between several nics of the same vm.

2022-10-20 Thread Rodolfo Alonso
Bug closed due to the lack of activity. Please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1858610

Title:
  [RFE] Qos policy not supporting sharing bandwidth between several nics
  of the same vm.

Status in neutron:
  Won't Fix

Bug description:
  Currently, a network interface can be associated with a single QoS
  policy, and the several NICs of the same VM work with their own
  associated policies independently. But in our production environment,
  considering resource usage, we want to limit the total bandwidth of
  the VM (with multiple NICs). In other words, we want the sum of the
  bandwidth of the VM's NICs to be limited to a specified value. So
  far, it seems that neutron doesn't have this feature. Is there any
  blueprint working on this feature?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1858610/+subscriptions




[Yahoo-eng-team] [Bug 1492021] Re: bagpipe: do not overload the ovs agent

2022-10-20 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1492021

Title:
  bagpipe: do not overload the ovs agent

Status in networking-bgpvpn:
  Fix Released
Status in neutron:
  Invalid

Bug description:
  The bagpipe driver works with its own agent, which is an overload of
  the OVS agent.

  With the neutron Liberty release, the OVS agent is extendable, thanks to this
change:
  https://review.openstack.org/#/c/195439/

  bagpipe should be able to leverage this extension framework to avoid
  the use of its own l2 ovs agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/bgpvpn/+bug/1492021/+subscriptions




[Yahoo-eng-team] [Bug 1507651] Re: MidoNet Neutron Plugin upgrade from kilo stable 2015.1.0 to kilo unstable 2015.1.1.2.0-1~rc0 (MNv5.0) not supported

2022-10-20 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507651

Title:
  MidoNet Neutron Plugin upgrade from kilo stable 2015.1.0 to kilo
  unstable 2015.1.1.2.0-1~rc0 (MNv5.0) not supported

Status in networking-midonet:
  Fix Released
Status in neutron:
  Invalid

Bug description:
  New supported features in last unstable version of the kilo plugin
  2015.1.1.2.0-1~rc0 such as port_security cause backwards
  incompatibility with stable version of kilo plugin 2015.1.0.

  E.g. neutron-server logs:

  2015-10-19 11:23:23.722 29190 ERROR neutron.api.v2.resource [req-007bd588-78a5-4cdd-a893-7522c1820edc ] index failed
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource Traceback (most recent call last):
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 83, in resource
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource     result = method(request=request, **args)
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 319, in index
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource     return self._items(request, True, parent_id)
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 249, in _items
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource     obj_list = obj_getter(request.context, **kwargs)
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/db/db_base_plugin_v2.py", line 1970, in get_ports
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource     items = [self._make_port_dict(c, fields) for c in query]
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/db/db_base_plugin_v2.py", line 936, in _make_port_dict
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource     attributes.PORTS, res, port)
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/db/common_db_mixin.py", line 162, in _apply_dict_extend_functions
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource     func(*args)
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/db/portsecurity_db.py", line 31, in _extend_port_security_dict
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource     psec_value = db_data['port_security'][psec.PORTSECURITY]
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource TypeError: 'NoneType' object has no attribute '__getitem__'
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource
  2015-10-19 11:23:24.283 29190 ERROR oslo_messaging.rpc.dispatcher [req-21c014b0-c418-4ebe-822f-3789fc680af6 ] Exception during message handling: 'NoneType' object has no attribute '__getitem__'
  2015-10-19 11:23:24.283 29190 TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):
  2015-10-19 11:23:24.283 29190 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
  2015-10-19 11:23:24.283 29190 TRACE oslo_messaging.rpc.dispatcher     executor_callback))
  2015-10-19 11:23:24.283 29190 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
  2015-10-19 11:23:24.283 29190 TRACE oslo_messaging.rpc.dispatcher     executor_callback)
  2015-10-19 11:23:24.283 29190 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 130, in _do_dispatch
  2015-10-19 11:23:24.283 29190 TRACE oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
  2015-10-19 11:23:24.283 29190 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 120, in get_active_networks_info
  2015-10-19 11:23:24.283 29190 TRACE oslo_messaging.rpc.dispatcher     networks = self._get_active_networks(context, **kwargs)
  2015-10-19 11:23:24.283 29190 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 63, in _get_active_networks
  2015-10-19 11:23:24.283 29190 TRACE oslo_messaging.rpc.dispatcher     context, host)
  2015-10-19 11:23:24.283 29190 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/neutron/db/agentschedulers_db.py", line 420, in list_active_networks_on_active_dhcp_agent
  2015-10-19 11:23:24.283 29190 TRACE oslo_messaging.rpc.dispatcher     filters={'id': net_ids, 

[Yahoo-eng-team] [Bug 1511578] Re: Remove deprecated external_network_bridge option

2022-10-20 Thread Rodolfo Alonso
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1511578

Title:
   Remove deprecated external_network_bridge option

Status in neutron:
  Fix Released

Bug description:
  The l3-agent option external_network_bridge was deprecated in
  Liberty[1] because, when non-empty, the l3-agent ignores network
  provider properties. Removing it also ensures that internal and
  external networks are handled in the same way.

  
  [1] https://bugs.launchpad.net/neutron/+bug/1491668

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1511578/+subscriptions




[Yahoo-eng-team] [Bug 1521117] Re: fixed_ip of floatingip is not updated.

2022-10-20 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1521117

Title:
  fixed_ip of floatingip is not updated.

Status in neutron:
  Won't Fix

Bug description:
  When the fixed_ip of a port that is associated with a floating IP is
  updated, the fixed_ip recorded on the floating IP is not updated.

  How to reproduce:

  (1) create a port

  $ neutron port-show 9b11ce55-a404-4ae4-941d-ea0db555c331
  +-----------------------+------------------------------------------------------------------------------------------+
  | Field                 | Value                                                                                    |
  +-----------------------+------------------------------------------------------------------------------------------+
  | admin_state_up        | True                                                                                     |
  | allowed_address_pairs |                                                                                          |
  | binding:host_id       |                                                                                          |
  | binding:profile       | {}                                                                                       |
  | binding:vif_details   | {}                                                                                       |
  | binding:vif_type      | unbound                                                                                  |
  | binding:vnic_type     | normal                                                                                   |
  | device_id             |                                                                                          |
  | device_owner          |                                                                                          |
  | dns_assignment        | {"hostname": "host-10-0-0-11", "ip_address": "10.0.0.11", "fqdn": "host-10-0-0-11.openstacklocal."} |
  |                       | {"hostname": "host-fdcb-a2eb-ab3e-0-f816-3eff-fe90-aa8e", "ip_address": "fdcb:a2eb:ab3e:0:f816:3eff:fe90:aa8e", "fqdn": "host-fdcb-a2eb-ab3e-0-f816-3eff-fe90-aa8e.openstacklocal."} |
  | dns_name              |                                                                                          |
  | extra_dhcp_opts       |                                                                                          |
  | fixed_ips             | {"subnet_id": "fedeed6a-b2fc-41f8-8382-6a7ebc34ff1a", "ip_address": "10.0.0.11"}         |
  |                       | {"subnet_id": "68cbe554-0b26-4d6c-992d-4d8889cfb49f", "ip_address": "fdcb:a2eb:ab3e:0:f816:3eff:fe90:aa8e"} |
  | id                    | 9b11ce55-a404-4ae4-941d-ea0db555c331                                                     |
  | mac_address           | fa:16:3e:90:aa:8e                                                                        |
  | name                  | test-port

[Yahoo-eng-team] [Bug 1526818] Re: Install guide: Add arp_ignore (sysctl.conf) to the other IP options

2022-10-20 Thread Rodolfo Alonso
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1526818

Title:
  Install guide: Add arp_ignore (sysctl.conf) to the other IP options

Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  Invalid
Status in openstack-manuals:
  Fix Released

Bug description:
  We are facing very strange ARP behaviour in tenant networks, causing
  Windows guests to incorrectly decline DHCP addresses. These VMs
  apparently do an ARP request for the address they have been offered,
  discarding the offer if a different MAC already claims to own that
  IP.

  We are using openvswitch-agent with ml2 plugin.

  Investigating this issue using Linux guests. Please look at the
  following example. A VM with the fixed-ip 192.168.1.15 reports the
  following ARP cache:

 root@michael-test2:~# arp
 Address                  HWtype  HWaddress           Flags Mask  Iface
 host-192-168-1-2.openst  ether   fa:16:3e:de:ab:ea   C           eth0
 192.168.1.13             ether   a6:b2:dc:d8:39:c1   C           eth0
 192.168.1.119                    (incomplete)                    eth0
 host-192-168-1-20.opens  ether   fa:16:3e:76:43:ce   C           eth0
 host-192-168-1-19.opens  ether   fa:16:3e:0d:a6:0b   C           eth0
 host-192-168-1-1.openst  ether   fa:16:3e:2a:81:ff   C           eth0
 192.168.1.14             ether   0e:bf:04:b7:ed:52   C           eth0
 
  Both 192.168.1.13 and 192.168.1.14 do not exist in this subnet, and their MAC 
addresses a6:b2:dc:d8:39:c1 and 0e:bf:04:b7:ed:52 actually belong to other 
instance qbr* and qvb* devices, living on their respective hypervisor hosts!

  Looking at 0e:bf:04:b7:ed:52, for example, yields

 # ip link list | grep -C1 -e 0e:bf:04:b7:ed:52
 59: qbr9ac24ac1-e1: mtu 1500 qdisc noqueue state UP mode DEFAULT group default
     link/ether 0e:bf:04:b7:ed:52 brd ff:ff:ff:ff:ff:ff
 60: qvo9ac24ac1-e1: mtu 1500 qdisc pfifo_fast master ovs-system state UP mode DEFAULT group default qlen 1000
 --
 61: qvb9ac24ac1-e1: mtu 1500 qdisc pfifo_fast master qbr9ac24ac1-e1 state UP mode DEFAULT group default qlen 1000
     link/ether 0e:bf:04:b7:ed:52 brd ff:ff:ff:ff:ff:ff
 62: tap9ac24ac1-e1: mtu 1500 qdisc pfifo_fast master qbr9ac24ac1-e1 state UNKNOWN mode DEFAULT group default qlen 500

  on the compute node. Using tcpdump on qbr9ac24ac1-e1 on the host and
  triggering a fresh ARP lookup from the guest results in

 # tcpdump -i qbr9ac24ac1-e1 -vv -l | grep ARP
 tcpdump: WARNING: qbr9ac24ac1-e1: no IPv4 address assigned
 tcpdump: listening on qbr9ac24ac1-e1, link-type EN10MB (Ethernet), capture size 65535 bytes
 14:00:32.089726 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.1.14 tell 192.168.1.15, length 28
 14:00:32.089740 ARP, Ethernet (len 6), IPv4 (len 4), Reply 192.168.1.14 is-at 0e:bf:04:b7:ed:52 (oui Unknown), length 28
 14:00:32.090141 ARP, Ethernet (len 6), IPv4 (len 4), Reply 192.168.1.14 is-at 7a:a5:71:63:47:94 (oui Unknown), length 28
 14:00:32.090160 ARP, Ethernet (len 6), IPv4 (len 4), Reply 192.168.1.14 is-at 02:f9:33:d5:04:0d (oui Unknown), length 28
 14:00:32.090168 ARP, Ethernet (len 6), IPv4 (len 4), Reply 192.168.1.14 is-at 9a:a0:46:e4:03:06 (oui Unknown), length 28

  Four different devices are claiming to own the non-existing IP
  address! Looking them up in neutron shows they are all related to
  existing ports on the subnet, but different ones:

 # neutron port-list | grep -e 47fbb8b5-55 -e 46647cca-32 -e e9e2d7c3-7e -e 9ac24ac1-e1
 | 46647cca-3293-42ea-8ec2-0834e19422fa | | fa:16:3e:7d:9c:45 | {"subnet_id": "25dbbdc0-f438-4f89-8663-1772f9c7ef36", "ip_address": "192.168.1.8"}  |
 | 47fbb8b5-5549-46e4-850e-bd382375e0f8 | | fa:16:3e:fa:df:32 | {"subnet_id": "25dbbdc0-f438-4f89-8663-1772f9c7ef36", "ip_address": "192.168.1.7"}  |
 | 9ac24ac1-e157-484e-b6a2-a1dded4731ac | | fa:16:3e:2a:80:6b | {"subnet_id": "25dbbdc0-f438-4f89-8663-1772f9c7ef36", "ip_address": "192.168.1.15"} |
 | e9e2d7c3-7e58-4bc2-a25f-d48e658b2d56 | | fa:16:3e:0d:a6:0b | {"subnet_id": "25dbbdc0-f438-4f89-8663-1772f9c7ef36", "ip_address": "192.168.1.19"} |

  Environment:

  Host: Ubuntu server 14.04
  Kernel: linux-image-generic-lts-vivid, 3.19.0-39-generic #44~14.04.1-Ubuntu 
SMP Wed Dec 2 10:00:35 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
  OpenStack Kilo:
  # dpkg -l | grep -e nova -e neutron
  ii  neutron-common  1:2015.1.2-0ubuntu2~cloud0
all  Neutron is a virtual network 

[Yahoo-eng-team] [Bug 1528758] Re: SIGUSR1 is deprecated in Guru mediation

2022-10-20 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1528758

Title:
  SIGUSR1 is deprecated in Guru mediation

Status in Magnum:
  Fix Released
Status in neutron:
  Won't Fix

Bug description:
  Guru Meditation now registers SIGUSR1 and SIGUSR2 by default for
  backward compatibility. SIGUSR1 will no longer be registered in a
  future release, so please use SIGUSR2 to generate reports[1].

  [1]http://docs.openstack.org/developer/oslo.reports/usage.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/magnum/+bug/1528758/+subscriptions




[Yahoo-eng-team] [Bug 1540555] Re: Nobody listens to network delete notifications (OVS specific)

2022-10-20 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540555

Title:
  Nobody listens to network delete notifications (OVS specific)

Status in neutron:
  Won't Fix

Bug description:
  Here it can be seen that agents are notified of network delete event:
  
https://github.com/openstack/neutron/blob/8.0.0.0b2/neutron/plugins/ml2/rpc.py#L304

  But on the agent side, only network update events are listened for:
  
https://github.com/openstack/neutron/blob/8.0.0.0b2/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L374

  That was uncovered during testing of the Pika driver, because it does
  not allow sending messages to queues that do not exist, unlike the
  current Rabbit (Kombu) driver. That behaviour will probably be changed
  in the Pika driver, but it is still worthwhile to get rid of the
  unnecessary notifications on the Neutron side.
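
  A hedged sketch of the missing agent-side handler; the attribute and
  method names mirror the OVS agent's conventions but are assumptions
  here, not merged neutron code:

    class OVSAgentSketch(object):
        def __init__(self):
            self.local_vlan_map = {}  # network_id -> local VLAN state

        def reclaim_local_vlan(self, network_id):
            self.local_vlan_map.pop(network_id, None)

        def network_delete(self, context, **kwargs):
            # RPC endpoint for [topics.NETWORK, topics.DELETE]; today
            # the server-side notification linked above has no
            # listener on the OVS agent.
            network_id = kwargs.get('network_id')
            if network_id in self.local_vlan_map:
                self.reclaim_local_vlan(network_id)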

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1540555/+subscriptions




[Yahoo-eng-team] [Bug 1547412] Re: Router interfaces exists on all the controllers after making L3 agent down on active controller i.e duplicate ip's present with duplication of ping

2022-10-20 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1547412

Title:
  Router interfaces exists on all the controllers after making L3 agent
  down on active controller i.e duplicate ip's present with duplication
  of ping

Status in neutron:
  Won't Fix

Bug description:
  Steps to reproduce:
  1)Create a router
  2)Add router interface to subnet
  3)boot a vm
  4)create a floating ip and associate it to vm port id
  5)Make l3 agent down on active controller
  stack@padawan-ccp-c1-m2-mgmt:~$ neutron l3-agent-list-hosting-router 23ec6a25-b578-4630-86ba-3a544293385b
  +--------------------------------------+------------------------+----------------+-------+----------+
  | id                                   | host                   | admin_state_up | alive | ha_state |
  +--------------------------------------+------------------------+----------------+-------+----------+
  | 5d47d95e-3750-4007-86b3-a724092cfd85 | padawan-ccp-c1-m3-mgmt | True           |       | active   |
  | 24dd994c-5e67-41b7-acd6-8766b50f838d | padawan-ccp-c1-m2-mgmt | True           | xxx   | standby  |
  +--------------------------------------+------------------------+----------------+-------+----------+
  stack@padawan-ccp-c1-m2-mgmt:~$ sudo ip netns exec 
qrouter-23ec6a25-b578-4630-86ba-3a544293385b ifconfig
  ha-b6dcf491-e6 Link encap:Ethernet HWaddr fa:16:3e:ce:ff:13
  inet addr:169.254.192.5 Bcast:169.254.255.255 Mask:255.255.192.0
  inet6 addr: fe80::f816:3eff:fece:ff13/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
  RX packets:3650 errors:0 dropped:391 overruns:0 frame:0
  TX packets:404 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:229496 (224.1 KiB) TX bytes:22476 (21.9 KiB)
  lo Link encap:Local Loopback
  inet addr:127.0.0.1 Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING MTU:65536 Metric:1
  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
  qg-3d627c6b-86 Link encap:Ethernet HWaddr fa:16:3e:70:95:f3
  inet addr:10.36.0.24 Bcast:0.0.0.0 Mask:255.255.0.0
  inet6 addr: fe80::f816:3eff:fe70:95f3/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
  RX packets:8961 errors:0 dropped:8 overruns:0 frame:0
  TX packets:249 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:600352 (586.2 KiB) TX bytes:20626 (20.1 KiB)
  qr-ba518ea4-f7 Link encap:Ethernet HWaddr fa:16:3e:9f:89:aa
  inet addr:7.7.7.1 Bcast:0.0.0.0 Mask:255.255.255.0
  inet6 addr: fe80::f816:3eff:fe9f:89aa/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
  RX packets:923 errors:0 dropped:1 overruns:0 frame:0
  TX packets:395 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:79946 (78.0 KiB) TX bytes:38625 (37.7 KiB)
  stack@padawan-ccp-c1-m3-mgmt:~$ sudo ip netns exec 
qrouter-23ec6a25-b578-4630-86ba-3a544293385b ifconfig
  ha-47cfce0e-c5 Link encap:Ethernet HWaddr fa:16:3e:db:39:24
  inet addr:169.254.192.6 Bcast:169.254.255.255 Mask:255.255.192.0
  inet6 addr: fe80::f816:3eff:fedb:3924/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
  RX packets:3223 errors:0 dropped:581 overruns:0 frame:0
  TX packets:833 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:196426 (191.8 KiB) TX bytes:46062 (44.9 KiB)
  lo Link encap:Local Loopback
  inet addr:127.0.0.1 Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING MTU:65536 Metric:1
  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
  qg-3d627c6b-86 Link encap:Ethernet HWaddr fa:16:3e:70:95:f3
  inet addr:10.36.0.24 Bcast:0.0.0.0 Mask:255.255.0.0
  inet6 addr: fe80::f816:3eff:fe70:95f3/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
  RX packets:2522 errors:0 dropped:0 overruns:0 frame:0
  TX packets:1059 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:213140 (208.1 KiB) TX bytes:98562 (96.2 KiB)
  qr-ba518ea4-f7 Link encap:Ethernet HWaddr fa:16:3e:9f:89:aa
  inet addr:7.7.7.1 Bcast:0.0.0.0 Mask:255.255.255.0
  inet6 addr: fe80::f816:3eff:fe9f:89aa/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
  RX packets:1462 errors:0 dropped:0 overruns:0 frame:0
  TX packets:1035 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:132248 (129.1 KiB) TX bytes:97302 (95.0 KiB)

  ==
  Checking if there is any loss in network traffic when the L3 agent
comes back, and verifying the master ownership of the HA router.

  There is a duplicate IP in the network.
  Three things to discuss here,
  1) Before making l3 agent 

[Yahoo-eng-team] [Bug 1592028] Re: [RFE] Support security-group-rule creation with address-groups

2022-10-20 Thread Rodolfo Alonso
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1592028

Title:
  [RFE] Support security-group-rule creation with address-groups

Status in neutron:
  Fix Released

Bug description:
  Currently, security-group rules can be created with the remote-ip-
  prefix attribute to specify origin (if ingress) or destination (if
  egress) address filter, this RFE suggests the use of address-groups
  (group of IP CIDR blocks, as defined for FWaaS v2) to support multiple
  remote address/es in one security-group rule.

  [Problem description]
  An OpenStack cloud may require connectivity between instances and external
services which are not provisioned by OpenStack; each service may also have
multiple endpoints. In order for tenant instances to be able to access these
external hosts (and only them), it is required to define a security group with
rules that allow traffic to these specific services, one rule per service
endpoint (assuming endpoint addresses aren't contiguous).
  This process can easily become cumbersome - for each new service endpoint it
is required to create a specific rule for each tenant.

  To overcome this usability issue, it is suggested that Neutron support an
API to group IP CIDR blocks in an object which could later be referenced when
creating a security-group rule - the user would pass the AddressGroup object
id as the ‘remote-ip-prefix’ attribute or as another new attribute.
  Whenever it's required to add a service endpoint, the new IP address would
be added to the relevant AddressGroup - as a side effect, the changes will be
reflected in the underlying security-group rules.

  NOTE: For the purpose of the use-case above, the default allow-egress
  rules are removed ("zero trust" model) once the default sg is created.

  
  A possible example of use in the CLI:

  $ neutron address-group-create --cidrs 1.1.1.1,2.2.2.2 "External Services"
  $ neutron security-group-rule-create --direction egress 
--remote-address-group 

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1592028/+subscriptions




[Yahoo-eng-team] [Bug 1597233] Re: rbac-create should return an duplicated error when use same 'object_id', 'object_type' and 'target_tenant'

2022-10-20 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1597233

Title:
  rbac-create should return an duplicated error when use same
  'object_id','object_type' and 'target_tenant'

Status in neutron:
  Won't Fix

Bug description:
  An RBAC entry should be unique by the combination of 'object_id',
'object_type' and 'target_tenant'.
  But in fact, if we only change the 'action' value, we can get another entry
with the same 'object_id', 'object_type' and 'target_tenant'.

  the process is:

  [root@localhost devstack]# neutron rbac-create a539e28b-5e6c-4436-b44f-e1f966b6a6a4 --type network --target_tenant tenant_id --action access_as_shared
  Created a new rbac_policy:
  +---------------+--------------------------------------+
  | Field         | Value                                |
  +---------------+--------------------------------------+
  | action        | access_as_shared                     |
  | id            | 0897f09b-1799-416e-9b5d-99d0e153a1b1 |
  | object_id     | a539e28b-5e6c-4436-b44f-e1f966b6a6a4 |
  | object_type   | network                              |
  | target_tenant | tenant_id                            |
  | tenant_id     | aced7a29bb134dec82307a880d1cc542     |
  +---------------+--------------------------------------+
  [root@localhost devstack]# neutron rbac-create a539e28b-5e6c-4436-b44f-e1f966b6a6a4 --type network --target_tenant tenant_id --action access_as_external
  Created a new rbac_policy:
  +---------------+--------------------------------------+
  | Field         | Value                                |
  +---------------+--------------------------------------+
  | action        | access_as_external                   |
  | id            | 2c12609e-7878-4161-b533-17b6413bcf0b |
  | object_id     | a539e28b-5e6c-4436-b44f-e1f966b6a6a4 |
  | object_type   | network                              |
  | target_tenant | tenant_id                            |
  | tenant_id     | aced7a29bb134dec82307a880d1cc542     |
  +---------------+--------------------------------------+
  [root@localhost devstack]#
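
  A hedged sketch of the uniqueness constraint this asks for, expressed
  with SQLAlchemy; the table name mirrors neutron's network RBAC table,
  but the exact column set is the reporter's proposal, not merged code:

    from sqlalchemy import Column, String, UniqueConstraint
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class NetworkRBAC(Base):
        __tablename__ = 'networkrbacs'
        id = Column(String(36), primary_key=True)
        object_id = Column(String(36), nullable=False)
        target_tenant = Column(String(255), nullable=False)
        action = Column(String(255), nullable=False)
        # Unique regardless of 'action', so the second rbac-create in
        # the transcript above would fail with a duplicate error.
        __table_args__ = (
            UniqueConstraint('object_id', 'target_tenant',
                             name='uniq_networkrbacs0object0target'),
        )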

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1597233/+subscriptions




[Yahoo-eng-team] [Bug 1598081] Re: [RFE] Port status update

2022-10-20 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1598081

Title:
  [RFE] Port status update

Status in neutron:
  Fix Released

Bug description:
  Neutron port status field represents the current status of a port in
  the cloud infrastructure. The field can take one of the following
  values: 'ACTIVE', 'DOWN', 'BUILD' and 'ERROR'.

  At present, if a network event occurs in the data-plane (e.g. virtual
  or physical switch fails or one of its ports, cable gets pulled
  unintentionally, infrastructure topology changes, etc.), connectivity
  to logical ports may be affected and tenants' services interrupted.
  When tenants/cloud administrators are looking up their resources'
  status (e.g. Nova instances and services running in them, network
  ports, etc.), they will wrongly see everything looks fine. The problem
  is that Neutron will continue reporting port 'status' as 'ACTIVE'.

  Many SDN Controllers managing network elements have the ability to
  detect and report network events to upper layers. This allows SDN
  Controllers' users to be notified of changes and react accordingly.
  Such information could be consumed by Neutron so that Neutron could
  update the 'status' field of those logical ports, and additionally
  generate a notification message to the message bus.

  However, Neutron misses a way to be able to receive such information
  through e.g. ML2 driver or the REST API ('status' field is read-only).
  There are pros and cons on both of these approaches as well as other
  possible approaches. This RFE intends to trigger a discussion on how
  Neutron could be improved to receive fault/change events from SDN
  Controllers or even also from 3rd parties not in charge of controlling
  the network (e.g. monitoring systems, human admins).
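
  A hedged sketch of one of the approaches this RFE hints at: an
  out-of-band notifier (an SDN controller callback, say) asking ML2 to
  flip the port status. update_port_status is an existing ML2 plugin
  method; the surrounding wiring is an assumption:

    from neutron_lib import constants
    from neutron_lib.plugins import directory

    def on_dataplane_fault(context, port_id):
        # Mark the logical port DOWN so tenants see the fault instead
        # of a stale ACTIVE status.
        plugin = directory.get_plugin()
        plugin.update_port_status(context, port_id,
                                  constants.PORT_STATUS_DOWN)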

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1598081/+subscriptions




[Yahoo-eng-team] [Bug 1598219] Re: Networking API v2.0 (CURRENT): Create subnet Request parameters missing the 'no-gateway' option.

2022-10-20 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1598219

Title:
  Networking API v2.0 (CURRENT): Create subnet Request parameters
  missing the 'no-gateway' option.

Status in neutron:
  Won't Fix

Bug description:
  http://developer.openstack.org/api-ref-networking-v2.html

  Create subnet Request parameters missing the 'no-gateway' option.

  localadmin@qa4:~/devstack$ neutron subnet-create --help
  usage: neutron subnet-create [-h] [-f {html,json,shell,table,value,yaml}]
   [-c COLUMN] [--max-width ] [--noindent]
   [--prefix PREFIX] [--request-format {json,xml}]
   [--tenant-id TENANT_ID] [--name NAME]
   [--gateway GATEWAY_IP | --no-gateway] 
<
   [--allocation-pool start=IP_ADDR,end=IP_ADDR]
   [--host-route destination=CIDR,nexthop=IP_ADDR]
   [--dns-nameserver DNS_NAMESERVER]
   [--disable-dhcp] [--enable-dhcp]
   [--ip-version {4,6}]
   [--ipv6-ra-mode 
{dhcpv6-stateful,dhcpv6-stateless,slaac}]
   [--ipv6-address-mode 
{dhcpv6-stateful,dhcpv6-stateless,slaac}]
   [--subnetpool SUBNETPOOL]
   [--prefixlen PREFIX_LENGTH]
   NETWORK [CIDR]

  Create a subnet for a given tenant.

  positional arguments:
NETWORK   Network ID or name this subnet belongs to.
CIDR  CIDR of subnet to create.

  optional arguments:
-h, --helpshow this help message and exit
--request-format {json,xml}
  The XML or JSON request format.
--tenant-id TENANT_ID
  The owner tenant ID.
--name NAME   Name of this subnet.
--gateway GATEWAY_IP  Gateway IP of this subnet.
--no-gateway  No distribution of gateway. 

  
  localadmin@qa4:~/devstack$ neutron --debug subnet-create --name my-subnet 
--no-gateway <<< my-net 1.1.1.0/24
  DEBUG: keystoneclient.session REQ: curl -g -i -X GET 
http://172.29.85.228:5000/v2.0 -H "Accept: application/json" -H "User-Agent: 
python-keystoneclient"
  DEBUG: keystoneclient.session RESP: [200] Content-Length: 339 Vary: 
X-Auth-Token Keep-Alive: timeout=5, max=100 Server: Apache/2.4.7 (Ubuntu) 
Connection: Keep-Alive Date: Fri, 01 Jul 2016 14:57:31 GMT Content-Type: 
application/json x-openstack-request-id: 
req-47f66134-62d8-4cf8-a979-99eacbdb1069 
  RESP BODY: {"version": {"status": "stable", "updated": 
"2014-04-17T00:00:00Z", "media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": 
[{"href": "http://172.29.85.228:5000/v2.0/", "rel": "self"}, {"href": 
"http://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}}

  DEBUG: stevedore.extension found extension EntryPoint.parse('table = 
cliff.formatters.table:TableFormatter')
  DEBUG: stevedore.extension found extension EntryPoint.parse('json = 
cliff.formatters.json_format:JSONFormatter')
  DEBUG: stevedore.extension found extension EntryPoint.parse('shell = 
cliff.formatters.shell:ShellFormatter')
  DEBUG: stevedore.extension found extension EntryPoint.parse('value = 
cliff.formatters.value:ValueFormatter')
  DEBUG: stevedore.extension found extension EntryPoint.parse('yaml = 
cliff.formatters.yaml_format:YAMLFormatter')
  DEBUG: stevedore.extension found extension EntryPoint.parse('html = 
clifftablib.formatters:HtmlFormatter')
  DEBUG: neutronclient.neutron.v2_0.subnet.CreateSubnet 
get_data(Namespace(allocation_pools=None, cidr=u'1.1.1.0/24', columns=[], 
disable_dhcp=False, dns_nameservers=None, enable_dhcp=False, formatter='table', 
gateway=None, host_routes=None, ip_version=4, ipv6_address_mode=None, 
ipv6_ra_mode=None, max_width=0, name=u'my-subnet', network_id=u'my-net', 
no_gateway=True, noindent=False, prefix='', prefixlen=None, 
request_format='json', subnetpool=None, tenant_id=None, variables=[]))
  DEBUG: keystoneclient.auth.identity.v2 Making authentication request to 
http://172.29.85.228:5000/v2.0/tokens
  DEBUG: stevedore.extension found extension EntryPoint.parse('router_scheduler 
= neutronclient.neutron.v2_0.cisco.routerscheduler')
  DEBUG: stevedore.extension found extension EntryPoint.parse('hosting_devices 
= neutronclient.neutron.v2_0.cisco.hostingdevice')
  DEBUG: stevedore.extension found extension EntryPoint.parse('router_types = 
neutronclient.neutron.v2_0.cisco.routertype')
  DEBUG: stevedore.extension found extension 
EntryPoint.parse('hosting_device_scheduler = 
neutronclient.neutron.v2_0.cisco.hostingdevicescheduler')
  

[Yahoo-eng-team] [Bug 1598254] Re: Networking API v2.0 (CURRENT): Create Network Request and Response missing the 'description' parameter.

2022-10-20 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1598254

Title:
  Networking API v2.0 (CURRENT): Create Network Request and Response
  missing the 'description' parameter.

Status in neutron:
  Won't Fix

Bug description:
  http://developer.openstack.org/api-ref-networking-v2.html

  Create Network Request and Response missing the 'description'
  parameter.

  localadmin@qa4:~/devstack$ neutron --debug net-create my-net 
--description="My test network"
  DEBUG: keystoneclient.session REQ: curl -g -i -X GET 
http://172.29.85.228:5000/v2.0 -H "Accept: application/json" -H "User-Agent: 
python-keystoneclient"
  DEBUG: keystoneclient.session RESP: [200] Content-Length: 339 Vary: 
X-Auth-Token Keep-Alive: timeout=5, max=100 Server: Apache/2.4.7 (Ubuntu) 
Connection: Keep-Alive Date: Fri, 01 Jul 2016 14:20:33 GMT Content-Type: 
application/json x-openstack-request-id: 
req-137144d2-eb76-4eb7-aa4b-808f2d1c69d9 
  RESP BODY: {"version": {"status": "stable", "updated": 
"2014-04-17T00:00:00Z", "media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": 
[{"href": "http://172.29.85.228:5000/v2.0/", "rel": "self"}, {"href": 
"http://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}}

  DEBUG: stevedore.extension found extension EntryPoint.parse('table = 
cliff.formatters.table:TableFormatter')
  DEBUG: stevedore.extension found extension EntryPoint.parse('json = 
cliff.formatters.json_format:JSONFormatter')
  DEBUG: stevedore.extension found extension EntryPoint.parse('shell = 
cliff.formatters.shell:ShellFormatter')
  DEBUG: stevedore.extension found extension EntryPoint.parse('value = 
cliff.formatters.value:ValueFormatter')
  DEBUG: stevedore.extension found extension EntryPoint.parse('yaml = 
cliff.formatters.yaml_format:YAMLFormatter')
  DEBUG: stevedore.extension found extension EntryPoint.parse('html = 
clifftablib.formatters:HtmlFormatter')
  DEBUG: neutronclient.neutron.v2_0.network.CreateNetwork 
get_data(Namespace(admin_state=True, columns=[], formatter='table', 
max_width=0, name=u'my-net', noindent=False, prefix='', 
provider:network_type=None, provider:physical_network=None, 
provider:segmentation_id=None, qos_policy=None, request_format='json', 
tenant_id=None, variables=[]))
  DEBUG: keystoneclient.auth.identity.v2 Making authentication request to 
http://172.29.85.228:5000/v2.0/tokens
  DEBUG: stevedore.extension found extension EntryPoint.parse('router_scheduler 
= neutronclient.neutron.v2_0.cisco.routerscheduler')
  DEBUG: stevedore.extension found extension EntryPoint.parse('hosting_devices 
= neutronclient.neutron.v2_0.cisco.hostingdevice')
  DEBUG: stevedore.extension found extension EntryPoint.parse('router_types = 
neutronclient.neutron.v2_0.cisco.routertype')
  DEBUG: stevedore.extension found extension 
EntryPoint.parse('hosting_device_scheduler = 
neutronclient.neutron.v2_0.cisco.hostingdevicescheduler')
  DEBUG: stevedore.extension found extension 
EntryPoint.parse('hosting_device_templates = 
neutronclient.neutron.v2_0.cisco.hostingdevicetemplate')
  DEBUG: stevedore.extension found extension EntryPoint.parse('hosting_devices 
= networking_cisco.neutronclient.hostingdevice')
  DEBUG: stevedore.extension found extension EntryPoint.parse('router_types = 
networking_cisco.neutronclient.routertype')
  DEBUG: stevedore.extension found extension EntryPoint.parse('policy_profile = 
networking_cisco.neutronclient.policyprofile')
  DEBUG: stevedore.extension found extension 
EntryPoint.parse('hosting_device_templates = 
networking_cisco.neutronclient.hostingdevicetemplate')
  DEBUG: stevedore.extension found extension EntryPoint.parse('router_scheduler 
= networking_cisco.neutronclient.routerscheduler')
  DEBUG: stevedore.extension found extension EntryPoint.parse('network_profile 
= networking_cisco.neutronclient.networkprofile')
  DEBUG: stevedore.extension found extension 
EntryPoint.parse('hosting_device_scheduler = 
networking_cisco.neutronclient.hostingdevicescheduler')
  DEBUG: keystoneclient.session REQ: curl -g -i -X POST 
http://172.29.85.228:9696/v2.0/networks.json -H "User-Agent: 
python-neutronclient" -H "Content-Type: application/json" -H "Accept: 
application/json" -H "X-Auth-Token: 
{SHA1}a7c5442d73f1424784bea97819b8d2b7d8295257" -d '{"network": {"description": 
"My test network", "name": "my-net", "admin_state_up": true}}'
  DEBUG: keystoneclient.session RESP: [201] Date: Fri, 01 Jul 2016 14:20:34 GMT 
Connection: keep-alive Content-Type: application/json; charset=UTF-8 
Content-Length: 613 X-Openstack-Request-Id: 
req-1046640c-3a4f-4fb1-b03d-ed53ad812049 
  RESP BODY: {"network": {"status": "ACTIVE", "router:external": false, 
"availability_zone_hints": [], "availability_zones": [], "description": "My 
test network", 

[Yahoo-eng-team] [Bug 1607979] Re: many tests directly modify RESOURCE_ATTRIBUTES_MAP and leave it modified

2022-10-20 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1607979

Title:
  many tests directly modify RESOURCE_ATTRIBUTES_MAP and leave it
  modified

Status in neutron:
  Won't Fix

Bug description:
  Many tests directly modify the global variable
  RESOURCE_ATTRIBUTES_MAP and leave it modified.

  As a result, its value isn't predictable, so test results can be
non-deterministic depending on the execution order. In particular, the result
can differ between running a single test alone and running the whole suite.
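
  A minimal sketch of the usual remedy: snapshot the global map in
  setUp and restore it on cleanup, so mutations cannot leak between
  tests (the import path below is illustrative):

    import copy
    import unittest

    from neutron.api.v2 import attributes  # assumed home of the map

    class MapIsolationTestCase(unittest.TestCase):
        def setUp(self):
            super(MapIsolationTestCase, self).setUp()
            backup = copy.deepcopy(attributes.RESOURCE_ATTRIBUTES_MAP)

            def restore():
                attributes.RESOURCE_ATTRIBUTES_MAP.clear()
                attributes.RESOURCE_ATTRIBUTES_MAP.update(backup)

            self.addCleanup(restore)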

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1607979/+subscriptions




[Yahoo-eng-team] [Bug 1608347] Re: Neutron-server can't clean obsolete tunnel info

2022-10-20 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1608347

Title:
  Neutron-server can't clean obsolete tunnel info

Status in neutron:
  Won't Fix

Bug description:
  For tunnel networks like VXLAN or GRE in neutron, if we change a
  compute node's tunnel IP, the obsolete tunnel info is still stored by
  neutron-server, and other compute nodes still establish tunnels to
  the obsolete IP. We should provide an approach to clean up the
  obsolete tunnel info, similar to ovs_cleanup.py[1].

  [1] https://github.com/openstack/neutron/blob/master/setup.cfg#L56
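
  A hedged sketch of such a clean-up: drop the stale row from ML2's
  VXLAN endpoint table. The table name matches ML2's vxlan type driver
  schema; the credentials and IP are placeholders, and a real tool
  would go through neutron's DB API rather than raw SQL:

    import pymysql

    conn = pymysql.connect(host="localhost", user="neutron",
                           password="secret", database="neutron")
    with conn.cursor() as cur:
        # Remove the endpoint registered under the node's old IP.
        cur.execute(
            "DELETE FROM ml2_vxlan_endpoints WHERE ip_address = %s",
            ("192.0.2.10",))
    conn.commit()
    conn.close()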

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1608347/+subscriptions




[Yahoo-eng-team] [Bug 968696] Re: "admin"-ness not properly scoped

2022-10-20 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/968696

Title:
  "admin"-ness not properly scoped

Status in Cinder:
  Fix Released
Status in Glance:
  In Progress
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Committed
Status in puppet-keystone:
  Invalid

Bug description:
  Fact: Keystone's rbac model grants roles to users on specific tenants,
  and post-keystone redux, there are no longer "global" roles.

  Problem: Granting a user an "admin" role on ANY tenant grants them
  unlimited "admin"-ness throughout the system because there is no
  differentiation between a scoped "admin"-ness and a global
  "admin"-ness.

  I don't have a specific solution to advocate, but being an admin on
  *any* tenant simply *cannot* allow you to administer all of keystone.

  Steps to reproduce (from Horizon, though you could do this with the
  CLI, too):

  1. User A (existing admin) creates Project B and User B.
  2. User A adds User B to Project B with the admin role on Project B.
  3. User B logs in and now has unlimited admin rights not only to view things 
in the dashboard, but to take actions like creating new projects and users, 
managing existing projects and users, etc.

  
  Note: See the ongoing changes under
https://bugs.launchpad.net/neutron/+bug/1602081, which are required before
policy changes can be enforced.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/968696/+subscriptions




[Yahoo-eng-team] [Bug 1376316] Re: nova absolute-limits floating ip count is incorrect in a neutron based deployment

2022-10-20 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1376316

Title:
  nova absolute-limits floating ip count is incorrect in a neutron based
  deployment

Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  Invalid
Status in nova package in Ubuntu:
  In Progress

Bug description:
  1.
  $ lsb_release -rd
  Description:  Ubuntu 14.04 LTS
  Release:  14.04

  2.
  $ apt-cache policy python-novaclient 
  python-novaclient:
Installed: 1:2.17.0-0ubuntu1
Candidate: 1:2.17.0-0ubuntu1
Version table:
   *** 1:2.17.0-0ubuntu1 0
  500 http://nova.clouds.archive.ubuntu.com/ubuntu/ trusty/main amd64 
Packages
  100 /var/lib/dpkg/status

  3. nova absolute-limits should report the correct value of allocated floating 
ips
  4. nova absolute-limits shows 0 floating ips when I have 5 allocated

  $ nova absolute-limits | grep Floating
  | totalFloatingIpsUsed| 0  |
  | maxTotalFloatingIps | 10 |

  $ nova floating-ip-list
  +---+---++-+
  | Ip| Server Id | Fixed Ip   | Pool|
  +---+---++-+
  | 10.98.191.146 |   | -  | ext_net |
  | 10.98.191.100 |   | 10.5.0.242 | ext_net |
  | 10.98.191.138 |   | 10.5.0.2   | ext_net |
  | 10.98.191.147 |   | -  | ext_net |
  | 10.98.191.102 |   | -  | ext_net |
  +---+---++-+

  ProblemType: Bug
  DistroRelease: Ubuntu 14.04
  Package: python-novaclient 1:2.17.0-0ubuntu1
  ProcVersionSignature: Ubuntu 3.13.0-24.47-generic 3.13.9
  Uname: Linux 3.13.0-24-generic x86_64
  ApportVersion: 2.14.1-0ubuntu3.2
  Architecture: amd64
  Date: Wed Oct  1 15:19:08 2014
  Ec2AMI: ami-0001
  Ec2AMIManifest: FIXME
  Ec2AvailabilityZone: nova
  Ec2InstanceType: m1.small
  Ec2Kernel: aki-0002
  Ec2Ramdisk: ari-0002
  PackageArchitecture: all
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: python-novaclient
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1376316/+subscriptions




[Yahoo-eng-team] [Bug 1384235] Re: Nova raises exception about existing libvirt filter

2022-10-20 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1384235

Title:
  Nova raises exception about existing libvirt filter

Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Sometimes, when I start an instance, nova raises an exception that a
  filter like nova-instance-instance-000b-52540039740a already exists.

  So I have to execute `virsh nwfilter-undefine` and try to boot the
  instance again:

  In libvirt logs I can see the following:

  2014-10-22 12:20:13.816+: 4693: error : virNWFilterObjAssignDef:3068 : 
operation failed: filter 'nova-instance-instance-000b-52540039740a' already 
exists with uuid af47118d-1934-4ca7-8a71-c6ae9a6499aa
  2014-10-22 12:20:13.930+: 4688: error : virNetSocketReadWire:1523 : End 
of file while reading data: Input/output error

  I use libvirt 1.2.8-3 ( Debian )

  I have the following services defined:

  service_plugins =
  
neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.firewall.fwaas_plugin.FirewallPlugin

  
  I use Icehouse.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1384235/+subscriptions




[Yahoo-eng-team] [Bug 1463777] Re: Cisco CSR1kv device driver should set 'protocol' value to 'all' or to any valid protocol name.

2022-10-20 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463777

Title:
  Cisco CSR1kv device driver should set 'protocol' value to 'all' or to
  any valid protocol name.

Status in networking-cisco:
  New
Status in neutron:
  Won't Fix

Bug description:
  If the user sets protocol to 'any' (using the neutron client or
  horizon), the CSR1kv device driver sends the following request:

  {"rules": [{"L4-options": {"dest-port-start": "22"}, "protocol": "tcp", "sequence": "100", "destination": "any", "source": "any", "action": "permit"}, {"action": "permit", "source": "any", "destination": "any", "protocol": null, "sequence": "101"}]}

  Notice: "protocol": null,

  'null' is not a valid value. According to Reference Guide: A protocol
  number or any of the keywords "all", "tcp", "udp", "icmp","ip"
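
  A one-line normalisation sketch of what this asks of the driver:
  never emit null; map a missing or 'any' protocol to a keyword the
  device accepts:

    def csr_protocol(rule):
        proto = rule.get('protocol')
        return 'all' if proto in (None, 'any') else proto

    assert csr_protocol({'protocol': None}) == 'all'
    assert csr_protocol({'protocol': 'tcp'}) == 'tcp'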

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1463777/+subscriptions




[Yahoo-eng-team] [Bug 1483099] Re: subnetpool can create the cidr like 0.0.0.0/0

2022-10-20 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1483099

Title:
  subnetpool can create the cidr like 0.0.0.0/0

Status in neutron:
  Won't Fix

Bug description:
  neutron version : 2.6.0

  I create a subnetpool like:
  neutron subnetpool-create --pool-prefix 6.6.66.6/24 --pool-prefix 99.9.9.9/30 --pool-prefix 8.9.8.8/20 --pool-prefix 0.0.0.0/0 --pool-prefix 0.0.0.0/16 test3
  Created a new subnetpool:
  +-------------------+--------------------------------------+
  | Field             | Value                                |
  +-------------------+--------------------------------------+
  | default_prefixlen | 8                                    |
  | default_quota     |                                      |
  | id                | b1bc9e6f-94fc-4d43-bfcb-1098524092cd |
  | ip_version        | 4                                    |
  | max_prefixlen     | 32                                   |
  | min_prefixlen     | 8                                    |
  | name              | test3                                |
  | prefixes          | 0.0.0.0/0                            |
  | shared            | False                                |
  | tenant_id         | 2b47a754532a48a9a553964eb435cf0f     |
  +-------------------+--------------------------------------+

  And then I create a subnet on the network with this subnetpool:
  neutron subnet-create kl --subnetpool test3
  Created a new subnet:
  +---+--+
  | Field | Value|
  +---+--+
  | allocation_pools  | {"start": "0.0.0.2", "end": "0.255.255.254"} |
  | cidr  | 0.0.0.0/8|
  | dns_nameservers   |  |
  | enable_dhcp   | True |
  | gateway_ip| 0.0.0.1  |
  | host_routes   |  |
  | id| 17b680d2-ec29-4221-9298-ce00ae276be4 |
  | ip_version| 4|
  | ipv6_address_mode |  |
  | ipv6_ra_mode  |  |
  | name  |  |
  | network_id| 507aeb92-9b30-46ab-b2aa-da2ad13517e3 |
  | subnetpool_id | b1bc9e6f-94fc-4d43-bfcb-1098524092cd |
  | tenant_id | 2b47a754532a48a9a553964eb435cf0f |
  +---+--+

  So the subnetpool should not allow this special type of CIDR to be
  created.
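
  A minimal validation sketch, assuming netaddr and a hypothetical hook
  in the subnetpool create path (names are illustrative, not the actual
  neutron API):

  ```
  # Hedged sketch: reject catch-all prefixes such as 0.0.0.0/0 when a
  # subnetpool is created.
  import netaddr

  def validate_pool_prefixes(prefixes, min_prefixlen=8):
      for prefix in prefixes:
          cidr = netaddr.IPNetwork(prefix)
          if cidr.prefixlen < min_prefixlen:
              raise ValueError(
                  'prefix %s is broader than the minimum /%d' %
                  (prefix, min_prefixlen))
  ```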

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1483099/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499177] Re: Performance: L2 agent takes too much time to refresh sg rules

2022-10-20 Thread Rodolfo Alonso
Bug closed due to lack of activity; please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1499177

Title:
  Performance: L2 agent takes too much time to refresh sg rules

Status in neutron:
  Won't Fix

Bug description:
  This issue introduces a performance problem for the L2 agents (both
  the LinuxBridge and the OVS agent) on a compute node when that node
  hosts many networks and instances (e.g. 500 instances).

  The performance problem shows up in two aspects:

  1. When the LinuxBridge agent service starts up (this seems to happen
  only in the LinuxBridge agent, not the OVS agent), two methods take
  too much time:

     1.1 get_interface_by_ip(): we need to find the interface assigned
  the "local ip" defined in the configuration file and check whether it
  supports "vxlan". This method iterates over every interface on the
  compute node and executes "ip link show [interface] to [local ip]" to
  judge the result. There should be a faster way (see the sketch after
  this list).

     1.2 prepare_port_filter(): in this method we have to make sure the
  ipsets are created correctly, but it executes too many "ipset"
  commands and takes too much time.

  2. When devices' sg rules are changed, the L2 agent has to refresh
  the firewalls.

  2.1 refresh_firewall(): this method calls "modify_rules" to make
  the rules predictable, but it also takes too much time.

  Fixing or optimizing this would be very beneficial for large-scale
  networks.
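
  A minimal sketch of a faster interface lookup, assuming the iproute2
  "addr show to" filter can replace the per-interface probing (the
  helper name mirrors the agent method but is illustrative):

  ```
  # Hedged sketch: ask the kernel directly which interface carries
  # local_ip, with a single 'ip -o addr show to <ip>' invocation.
  import subprocess

  def get_interface_by_ip(local_ip):
      out = subprocess.check_output(
          ['ip', '-o', 'addr', 'show', 'to', local_ip], text=True)
      for line in out.splitlines():
          # One-line format: "<idx>: <ifname>    inet <ip>/<len> ..."
          return line.split()[1]
      return None
  ```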

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1499177/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499893] Re: Native OVSDB transaction commit shows O(n) performance

2022-10-20 Thread Rodolfo Alonso
Bug closed due to lack of activity; please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1499893

Title:
  Native OVSDB transaction commit shows O(n) performance

Status in neutron:
  Won't Fix

Bug description:
  Create 100 tenants, each with the following setup, where each router
  is scheduled to the same legacy node that has the L3 agent configured
  to use the native OVSDB interface.

  tenant network --- router -- external network

  See http://ibin.co/2GuI6plJvngR for a graph of performance during the
  setup of the 100 routers. In that graph, the y-axis is time in seconds
  and the x-axis is the pass through _ovs_add_port (two per router add).

  DbSetCommand's performance increases with each router add.  To support
  scale, this needs to be closer to O(1) and perform significantly
  better than using ovs-vsctl via the rootwrap daemon.
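
  A minimal timing-harness sketch for reproducing the growth (the
  callable and counts are placeholders, not the actual agent code):

  ```
  # Hedged sketch: time each port-add style operation so per-operation
  # latency growth (O(n) behaviour) becomes visible.
  import time

  def time_ops(do_op, n=200):
      samples = []
      for i in range(n):
          start = time.monotonic()
          do_op(i)
          samples.append(time.monotonic() - start)
      return samples
  ```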

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1499893/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260440] Re: nova-compute host is added to scheduling pool before Neutron can bind network ports on said host

2022-10-20 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1260440

Title:
  nova-compute host is added to scheduling pool before Neutron can bind
  network ports on said host

Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  Expired
Status in tripleo:
  Fix Released

Bug description:
  This is a race condition.

  Given a cloud with 0 compute nodes available, on a compute node:
  * Start up neutron-openvswitch-agent
  * Start up nova-compute
  * nova boot an instance

  Scenario 1:
  * neutron-openvswitch-agent registers with Neutron before nova tries to boot 
instance
  * port is bound to agent
  * instance boots with correct networking

  Scenario 2:
  * nova schedules instance to host before neutron-openvswitch-agent is 
registered with Neutron
  * nova instance fails with vif_type=binding_failed
  * instance is in ERROR state

  I would expect that Nova would not try to schedule instances on
  compute hosts that are not ready.
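
  A minimal guard sketch, assuming python-neutronclient, that a deploy
  tool could use to hold back a new host until its L2 agent has
  registered (the function name and timeout are illustrative):

  ```
  # Hedged sketch: poll neutron until the host's openvswitch agent is
  # registered and alive before letting nova schedule to it.
  import time

  def wait_for_l2_agent(neutron, host, timeout=120):
      deadline = time.monotonic() + timeout
      while time.monotonic() < deadline:
          agents = neutron.list_agents(
              host=host, binary='neutron-openvswitch-agent')['agents']
          if any(agent.get('alive') for agent in agents):
              return True
          time.sleep(5)
      return False
  ```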

  Please also see this mailing list thread for more info:

  http://lists.openstack.org/pipermail/openstack-
  dev/2013-December/022084.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1260440/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1339028] Re: Update custom route on Router does not take effect

2022-10-20 Thread Rodolfo Alonso
Bug closed due to lack of activity; please feel free to reopen if
needed.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1339028

Title:
  Update custom route on Router does not take effect

Status in neutron:
  Won't Fix

Bug description:
  1. create a router
  2. create a network with subnet 4.6.72.0/23
  3. attach the above subnet to the router
  4. update the router with route {destination: 4.6.72.0/23, nexthop: 
4.6.72.10}, success
  5. remove the above route from the router, success
  6. update the router with the same route again; the operation succeeds, but 
the route isn't added to the router namespace, so it does not take effect

  This problem is caused by removing the connected route, so when adding
  the route the second time, the "ip route replace" command fails.

  I think we need to restrict the modification of connected routes.
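
  A minimal validation sketch, assuming netaddr and a hypothetical list
  of the router's directly connected CIDRs:

  ```
  # Hedged sketch: refuse user-supplied router routes whose destination
  # matches a subnet directly connected to the router.
  import netaddr

  def reject_connected_routes(routes, connected_cidrs):
      connected = {netaddr.IPNetwork(c) for c in connected_cidrs}
      for route in routes:
          if netaddr.IPNetwork(route['destination']) in connected:
              raise ValueError(
                  'route to %s duplicates a connected subnet' %
                  route['destination'])
  ```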

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1339028/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367391] Re: ML2 DVR port binding implementation unnecessarily duplicates schema and logic

2022-10-20 Thread Rodolfo Alonso
Bug closed due to lack of activity; please feel free to reopen if
needed.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1367391

Title:
  ML2 DVR port binding implementation unnecessarily duplicates schema
  and logic

Status in neutron:
  Won't Fix

Bug description:
  Support for distributed port bindings was added to ML2 in order to
  enable the same DVR port to be bound simultaneously on multiple hosts.
  This was implemented by:

  * Adding a new ml2_dvr_port_bindings table similar to the ml2_port_bindings 
table, but with the host column as part of the primary key.
  * Adding a new DvrPortContext class that overrides several functions in 
PortContext.
  * Adding DVR-specific internal functions to Ml2Plugin, 
_process_dvr_port_binding and _commit_dvr_port_binding, that are modified 
copies of existing functions.
  * In about 8 places, making code conditional on "port['device_owner'] == 
const.DEVICE_OWNER_DVR_INTERFACE" to handle DVR ports using the above models, 
classes and functions instead of the normal ones.

  This duplication of schema and code adds significant technical debt to
  the ML2 plugin implementation, requiring developers and reviewers to
  evaluate for all changes whether they need to apply to both the normal
  and DVR-specific copies. In addition, copied code is certain to
  diverge over time, making the effort to keep the copies as
  synchronized as possible become more and more difficult.

  This unnecessary duplication of schema and code should be
  significantly reduced or completely eliminated by treating a normal
  non-distributed port as a special case of a distributed port that
  happens to only bind on a single host.

  The schema would be unified by replacing the existing
  ml2_port_bindings and ml2_dvr_port_bindings tables with two new non-
  overlapping tables. One would contain the port state that is the same
  for all hosts on which the port binds, including the values of the
  binding:host, binding:vnic_type, and binding:profile attributes. The
  other would contain the port state that differs among host-specific
  bindings, such as the binding:vif_type and binding:vif_details
  attribute values, and the bound driver and segment (until these two
  move to a separate table for hierarchical port binding).
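
  A minimal SQLAlchemy sketch of that split (table and column names are
  illustrative, not the actual migration):

  ```
  # Hedged sketch: host-independent binding state in one table, and one
  # row per (port, host) for the host-specific state.
  import sqlalchemy as sa
  from sqlalchemy.orm import declarative_base

  Base = declarative_base()

  class PortBinding(Base):
      __tablename__ = 'ml2_port_bindings_common'
      port_id = sa.Column(sa.String(36), primary_key=True)
      host = sa.Column(sa.String(255))      # requested host, if any
      vnic_type = sa.Column(sa.String(64))
      profile = sa.Column(sa.String(4095))

  class PortBindingHost(Base):
      __tablename__ = 'ml2_port_binding_hosts'
      port_id = sa.Column(sa.String(36), primary_key=True)
      host = sa.Column(sa.String(255), primary_key=True)
      vif_type = sa.Column(sa.String(64))
      vif_details = sa.Column(sa.String(4095))
      driver = sa.Column(sa.String(64))
      segment_id = sa.Column(sa.String(36))
  ```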

  Also, the basic idea of distributed port bindings is not specific to
  DVR, and could be used for DHCP and other services, so the schema and
  code could be made more generic as the distributed and normal schema
  and code are unified.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1367391/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1972764] Re: [Wallaby] OVNPortForwarding._handle_lb_on_ls fails with TypeError: _handle_lb_on_ls() got an unexpected keyword argument 'context'

2022-10-20 Thread yatin
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1972764

Title:
  [Wallaby] OVNPortForwarding._handle_lb_on_ls fails with TypeError:
  _handle_lb_on_ls() got an unexpected keyword argument 'context'

Status in neutron:
  Fix Released

Bug description:
  It's failing with the following Traceback:

  2022-05-10 00:38:36.539 ERROR /var/log/containers/neutron/server.log: 15 
ERROR neutron_lib.callbacks.manager [req-2e5ef575-b18c-4091-8c15-b37f6bcf0fdd 
f1840520501c41b2a6a534525f0f90a4 bf49659cd4cb40edb393b914198ce3c9 - default 
default] Error during notification for 
neutron.services.portforwarding.drivers.ovn.driver.OVNPortForwarding._handle_lb_on_ls-4305748
 router_interface, after_create: TypeError: _handle_lb_on_ls() got an 
unexpected keyword argument 'context'
  2022-05-10 00:38:36.539 ERROR /var/log/containers/neutron/server.log: 15 
ERROR neutron_lib.callbacks.manager Traceback (most recent call last):
  2022-05-10 00:38:36.539 ERROR /var/log/containers/neutron/server.log: 15 
ERROR neutron_lib.callbacks.manager   File 
"/usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py", line 197, 
in _notify_loop
  2022-05-10 00:38:36.539 ERROR /var/log/containers/neutron/server.log: 15 
ERROR neutron_lib.callbacks.manager callback(resource, event, trigger, 
**kwargs)
  2022-05-10 00:38:36.539 ERROR /var/log/containers/neutron/server.log: 15 
ERROR neutron_lib.callbacks.manager TypeError: _handle_lb_on_ls() got an 
unexpected keyword argument 'context'
  2022-05-10 00:38:36.539 ERROR /var/log/containers/neutron/server.log: 15 
ERROR neutron_lib.callbacks.manager 

  
  Was noticed in a TripleO job https://bugs.launchpad.net/tripleo/+bug/1972660.

  This method was added in
  https://review.opendev.org/q/I0c4d492887216cad7a8155dceb738389f2886376
  and backported down to wallaby. Xena and later are OK; only wallaby is
  impacted because, before Xena, the old notification format is used,
  where arguments are passed as kwargs.
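
  A minimal compatibility sketch for a callback that has to work with
  both calling conventions (the signature details are illustrative):

  ```
  # Hedged sketch: accept the legacy kwargs-style callback invocation
  # used before Xena as well as the newer payload-style one.
  def _handle_lb_on_ls(resource, event, trigger, payload=None, **kwargs):
      if payload is not None:          # Xena+: payload object
          context = payload.context
      else:                            # pre-Xena: keyword arguments
          context = kwargs.get('context')
      # ... proceed with the load-balancer handling using context ...
  ```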

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1972764/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1973783] Re: [devstack] Segment plugin reports Traceback as placement client not configured

2022-10-20 Thread yatin
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1973783

Title:
  [devstack] Segment plugin reports Traceback as placement client not
  configured

Status in neutron:
  Fix Released

Bug description:
  The following Traceback is reported; although the job passes, it
  creates noise in the logs, so it should be cleaned up:

  
  May 17 12:08:25.056617 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: DEBUG neutron_lib.callbacks.manager [None 
req-01d22b64-1fbc-4578-8cc2-c6565188c424 admin admin] Publish callbacks 
['neutron.plugins.ml2.plugin.Ml2Plugin._handle_segment_change-1048155', 
'neutron.services.segments.plugin.NovaSegmentNotifier._notify_segment_deleted-1983453',
 
'neutron.services.segments.plugin.NovaSegmentNotifier._notify_segment_deleted-495']
 for segment (45896f0b-13b1-4cfc-ab32-297a8d8dae05), after_delete {{(pid=72995) 
_notify_loop 
/usr/local/lib/python3.8/dist-packages/neutron_lib/callbacks/manager.py:176}}
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: Traceback (most recent call last):
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/usr/local/lib/python3.8/dist-packages/eventlet/hubs/hub.py", line 476, in 
fire_timers
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: timer()
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/usr/local/lib/python3.8/dist-packages/eventlet/hubs/timer.py", line 59, in 
__call__
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: cb(*args, **kw)
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File "/opt/stack/neutron/neutron/common/utils.py", 
line 922, in wrapper
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: return func(*args, **kwargs)
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/opt/stack/neutron/neutron/notifiers/batch_notifier.py", line 58, in 
synced_send
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: self._notify()
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/opt/stack/neutron/neutron/notifiers/batch_notifier.py", line 69, in _notify
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: self.callback(batched_events)
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/opt/stack/neutron/neutron/services/segments/plugin.py", line 211, in 
_send_notifications
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: event.method(event)
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/opt/stack/neutron/neutron/services/segments/plugin.py", line 383, in 
_delete_nova_inventory
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: aggregate_id = 
self._get_aggregate_id(event.segment_id)
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/opt/stack/neutron/neutron/services/segments/plugin.py", line 370, in 
_get_aggregate_id
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: aggregate_uuid = self.p_client.list_aggregates(
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/usr/local/lib/python3.8/dist-packages/neutron_lib/placement/client.py", line 
58, in wrapper
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: return f(self, *a, **k)
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/usr/local/lib/python3.8/dist-packages/neutron_lib/placement/client.py", line 
554, in list_aggregates
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: return self._get(url).json()
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/usr/local/lib/python3.8/dist-packages/neutron_lib/placement/client.py", line 
190, in _get
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: return self._client.get(url, 
endpoint_filter=self._ks_filter,
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/usr/local/lib/python3.8/dist-packages/keystoneauth1/session.py", line 1141, 
in get
  May 17 12:08:25.069453 

[Yahoo-eng-team] [Bug 1979047] Re: Interface attach fails with libvirt.libvirtError: internal error: unable to execute QEMU command 'netdev_add': File descriptor named '(null)' has not been found

2022-10-20 Thread yatin
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1979047

Title:
  Interface attach fails with libvirt.libvirtError: internal error:
  unable to execute QEMU command 'netdev_add': File
  descriptor named '(null)' has not been found

Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The tempest-integrated-compute-centos-9-stream job is broken since
  2022-06-16 02:26:37 [1]. The multiple interface attach tempest test
  fails with:

  libvirt.libvirtError: internal error: unable to execute QEMU command
  'netdev_add': File descriptor named '(null)' has not been found

  Full exception stack trace:

  [None req-6f95599e-022a-42ab-a4de-07c7b8f73daf tempest-
  AttachInterfacesTestJSON-2035965943 tempest-
  AttachInterfacesTestJSON-2035965943-project] [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb] attaching network adapter
  failed.: libvirt.libvirtError: internal error: unable to execute QEMU
  command 'netdev_add': File descriptor named '(null)' has not been
  found

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb] Traceback (most recent call
  last):

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb]   File
  "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2850, in
  attach_interface

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb] guest.attach_device(cfg,
  persistent=True, live=live)

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb]   File
  "/opt/stack/nova/nova/virt/libvirt/guest.py", line 321, in
  attach_device

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb]
  self._domain.attachDeviceFlags(device_xml, flags=flags)

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb]   File
  "/usr/local/lib/python3.9/site-packages/eventlet/tpool.py", line 193,
  in doit

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb] result =
  proxy_call(self._autowrap, f, *args, **kwargs)

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb]   File
  "/usr/local/lib/python3.9/site-packages/eventlet/tpool.py", line 151,
  in proxy_call

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb] rv = execute(f, *args,
  **kwargs)

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb]   File
  "/usr/local/lib/python3.9/site-packages/eventlet/tpool.py", line 132,
  in execute

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb] six.reraise(c, e, tb)

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb]   File
  "/usr/local/lib/python3.9/site-packages/six.py", line 719, in reraise

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb] raise value

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb]   File
  "/usr/local/lib/python3.9/site-packages/eventlet/tpool.py", line 86,
  in tworker

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb] rv = meth(*args, **kwargs)

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb]   File
  "/usr/lib64/python3.9/site-packages/libvirt.py", line 706, in
  attachDeviceFlags

  Jun 17 08:25:43.545335 

[Yahoo-eng-team] [Bug 1970679] Re: neutron-tempest-plugin-designate-scenario cross project job is failing on OVN

2022-10-20 Thread yatin
Based on https://bugs.launchpad.net/neutron/+bug/1970679/comments/5 and
https://review.opendev.org/c/openstack/devstack/+/848548, closing the
issue.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1970679

Title:
  neutron-tempest-plugin-designate-scenario cross project job is failing
  on OVN

Status in neutron:
  Fix Released

Bug description:
  The cross-project neutron-tempest-plugin-designate-scenario job is
  failing during the Designate gate runs due to an OVN failure.

  + lib/neutron_plugins/ovn_agent:start_ovn:698 :   wait_for_sock_file 
/var/run/openvswitch/ovnnb_db.sock
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:173 :   local count=0
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 1 -gt 5 ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=2
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 2 -gt 5 ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=3
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 3 -gt 5 ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=4
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 4 -gt 5 ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=5
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 5 -gt 5 ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=6
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 6 -gt 5 ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:178 :   die 178 'Socket 
/var/run/openvswitch/ovnnb_db.sock not found'
  + functions-common:die:264 :   local exitcode=0
  [Call Trace]
  ./stack.sh:1284:start_ovn_services
  /opt/stack/devstack/lib/neutron-legacy:516:start_ovn
  /opt/stack/devstack/lib/neutron_plugins/ovn_agent:698:wait_for_sock_file
  /opt/stack/devstack/lib/neutron_plugins/ovn_agent:178:die
  [ERROR] /opt/stack/devstack/lib/neutron_plugins/ovn_agent:178 Socket 
/var/run/openvswitch/ovnnb_db.sock not found
  exit_trap: cleaning up child processes

  An example job run is here:
  https://zuul.opendev.org/t/openstack/build/b014e50e018d426b9367fd3219ed489e
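
  For reference, a minimal Python equivalent of the wait_for_sock_file
  helper shown in the trace above (the retry count and delay mirror the
  devstack defaults visible in the log):

  ```
  # Hedged sketch: poll for a unix socket file, as the devstack helper
  # does, and fail once the retries are exhausted.
  import os
  import stat
  import time

  def wait_for_sock_file(path, retries=5, delay=1.0):
      for _ in range(retries + 1):
          try:
              if stat.S_ISSOCK(os.stat(path).st_mode):
                  return
          except FileNotFoundError:
              pass
          time.sleep(delay)
      raise RuntimeError('Socket %s not found' % path)
  ```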

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1970679/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp