[Yahoo-eng-team] [Bug 1832817] Re: Multiple entries for the same host allowed in the services table

2019-08-22 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1832817

Title:
  Multiple entries for the same host allowed in the services table

Status in OpenStack Compute (nova):
  Expired

Bug description:
  It has been noticed that it is possible to have multiple entries in
  the services table for the same host with different versions, leading
  to API failures:

  MariaDB [nova]> select host, services.binary, version from services where host="cc-compute01-kna1";
  +-------------------+--------------+---------+
  | host              | binary       | version |
  +-------------------+--------------+---------+
  | cc-compute01-kna1 | nova-compute | 35      |
  | cc-compute01-kna1 | nova-compute | 0       |
  +-------------------+--------------+---------+

  Although this is arguably a general code issue, the current table schema is shown below:

  MariaDB [nova]> desc services;
  +-----------------+--------------+------+-----+---------+----------------+
  | Field           | Type         | Null | Key | Default | Extra          |
  +-----------------+--------------+------+-----+---------+----------------+
  | created_at      | datetime     | YES  |     | NULL    |                |
  | updated_at      | datetime     | YES  |     | NULL    |                |
  | deleted_at      | datetime     | YES  |     | NULL    |                |
  | id              | int(11)      | NO   | PRI | NULL    | auto_increment |
  | host            | varchar(255) | YES  | MUL | NULL    |                |
  | binary          | varchar(255) | YES  |     | NULL    |                |
  | topic           | varchar(255) | YES  |     | NULL    |                |
  | report_count    | int(11)      | NO   |     | NULL    |                |
  | disabled        | tinyint(1)   | YES  |     | NULL    |                |
  | deleted         | int(11)      | YES  |     | NULL    |                |
  | disabled_reason | varchar(255) | YES  |     | NULL    |                |
  | last_seen_up    | datetime     | YES  |     | NULL    |                |
  | forced_down     | tinyint(1)   | YES  |     | NULL    |                |
  | version         | int(11)      | YES  |     | NULL    |                |
  | uuid            | varchar(36)  | YES  | UNI | NULL    |                |
  +-----------------+--------------+------+-----+---------+----------------+

  As for the host column, it has a MUL key; since multiple services can
  run per host, this is OK. However, could a unique constraint on the
  combination of host and binary be considered?
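
  A minimal sketch of the kind of constraint being suggested (a
  SQLAlchemy model fragment; the column list and constraint name here
  are illustrative, not necessarily nova's actual schema):

    from sqlalchemy import Column, Integer, String, UniqueConstraint
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Service(Base):
        __tablename__ = 'services'
        # Only the columns relevant to the suggestion are shown.
        id = Column(Integer, primary_key=True)
        host = Column(String(255))
        binary = Column(String(255))
        deleted = Column(Integer)
        # One live row per (host, binary); 'deleted' is included so
        # soft-deleted rows do not block a service re-registering.
        __table_args__ = (
            UniqueConstraint('host', 'binary', 'deleted',
                             name='uniq_services0host0binary0deleted'),
        )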

  Anecdotally I am informed that when multiple service versions exist,
  the lowest is chosen - is it anticipated that multiple service
  versions could be in place (and in use) simultaneously?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1832817/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1841128] [NEW] Smaller project quota than user quota value can be set

2019-08-22 Thread mitsuhiro tanino
Public bug reported:

Description
===========
Smaller project quota than user quota value can be set.
This problem only happens when the project quota value is unlimited with the DbQuotaDriver driver.
In the quota calculation steps, _process_quotas() and get_settable_quotas() are called to calculate the quota modification. In these methods, "remains", which indicates the amount of a resource still available to users, is set to a wrong value when the project quota is unlimited. As a result, this problem happens.
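
For illustration, a simplified sketch of where such a calculation can go wrong when the project limit is -1 (unlimited); the names are illustrative, not nova's actual implementation:

    def remains_for_user(project_limit, user_limits_total):
        # With an unlimited (-1) project quota, treating the sentinel
        # as an ordinary integer makes 'remains' nonsensical, which
        # later defeats the "project >= per-user" validation.
        if project_limit == -1:
            return -1  # unlimited: nothing to subtract from
        return project_limit - user_limits_total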


Steps to reproduce
==================
$ openstack user create test --domain default --password password
…
| id  | a22c08f37f34447d81f597144b2b8831 |
…
$ openstack project create test-project --domain default 
--property=user_id=$(openstack user show test -f value -c id)
…
| id  | c9a3b873f10947fea01c833e25884b3e |
…
$ openstack role add --user test --project test-project member

(a) Confirm the 'instances' quota of the project and the user. Initial values are both 10.
$ nova quota-show --tenant c9a3b873f10947fea01c833e25884b3e | grep instances
| instances| 10|
$ nova quota-show --user a22c08f37f34447d81f597144b2b8831 --tenant 
c9a3b873f10947fea01c833e25884b3e | grep instances
| instances| 10|

(b) Update project quota to unlimited
$ nova quota-update --instances -1 c9a3b873f10947fea01c833e25884b3e; nova 
quota-show --tenant c9a3b873f10947fea01c833e25884b3e | grep instances
| instances| -1|
(c) Update user quota to 20.
$ nova quota-update --user a22c08f37f34447d81f597144b2b8831 --instances 20 
c9a3b873f10947fea01c833e25884b3e; nova quota-show --user 
a22c08f37f34447d81f597144b2b8831 --tenant c9a3b873f10947fea01c833e25884b3e | 
grep instances
| instances| 20|

(d) Update project quota to 10 which is smaller than user quota.
$ nova quota-update --instances 10 c9a3b873f10947fea01c833e25884b3e; nova 
quota-show --tenant c9a3b873f10947fea01c833e25884b3e | grep instances
| instances| 10|

Normally, the project quota (=10) cannot be set to a smaller value than
the user quota (=20); however, this update succeeds when the previous
project quota value is unlimited.
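
A sketch of the check that should reject step (d), under assumed names (this is not the DbQuotaDriver code):

    def validate_project_limit(new_project_limit, max_user_limit):
        # -1 means unlimited and is always acceptable.
        if new_project_limit == -1:
            return
        if max_user_limit != -1 and new_project_limit < max_user_limit:
            raise ValueError(
                "project quota %d is below an existing user quota %d"
                % (new_project_limit, max_user_limit))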


Expected result
===============
Updating the project quota fails when the new value is smaller than the user quota.

Actual result
=============
The quota update succeeded even though the new project quota is smaller than the user quota.

Environment
===========
1. Exact version of OpenStack you are running:
Devstack environment with the latest master branch
$ git log -1
commit 0c861c29c12c2092c95ac45988ce2793e4aea20f
Merge: 170fd5a 791fa59
Author: Zuul 
Date:   Thu Aug 22 23:49:25 2019 +

Merge "Handle websockify v0.9.0 in console proxy"

2. Which hypervisor did you use?
  KVM, CentOS 7.6
 $ uname -a
 Linux devstack 3.10.0-957.27.2.el7.x86_64 #1 SMP Mon Jul 29 17:46:05 UTC 2019 
x86_64 x86_64 x86_64 GNU/Linux

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1841128

Title:
  Smaller project quota than user quota value can be set

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===========
  Smaller project quota than user quota value can be set.
  This problem only happens when the project quota value is unlimited with the DbQuotaDriver driver.
  In the quota calculation steps, _process_quotas() and get_settable_quotas() are called to calculate the quota modification. In these methods, "remains", which indicates the amount of a resource still available to users, is set to a wrong value when the project quota is unlimited. As a result, this problem happens.

  
  Steps to reproduce
  ==================
  $ openstack user create test --domain default --password password
  …
  | id  | a22c08f37f34447d81f597144b2b8831 |
  …
  $ openstack project create test-project --domain default 
--property=user_id=$(openstack user show test -f value -c id)
  …
  | id  | c9a3b873f10947fea01c833e25884b3e |
  …
  $ openstack role add --user test --project test-project member

  (a) Confirm the 'instances' quota of the project and the user. Initial values are both 10.
  $ nova quota-show --tenant c9a3b873f10947fea01c833e25884b3e | grep instances
  | instances| 10|
  $ nova quota-show --user a22c08f37f34447d81f597144b2b8831 --tenant 
c9a3b873f10947fea01c833e25884b3e | grep instances
  | instances| 10|

  (b) Update project quota to unlimited
  $ nova quota-update --instances -1 c9a3b873f10947fea01c833e25884b3e; nova 
quota-show --tenant c9a3b873f10947fea01c833e25884b3e | grep instances
  | instances| -1|
  (c) Update user quota to 20.
  $ nova quota-update --user a22c08f37f34447d81f597144b2b8831 --instances 20 
c9a3b873f10947fea01c833e25884b3e; nova quota-show --user 
a22c08f37f34447d81f597144b2b8831 --tenant c9a3b873f10947fea01c

[Yahoo-eng-team] [Bug 1837252] Re: IFLA_BR_AGEING_TIME of 0 causes flooding across bridges

2019-08-22 Thread Jeremy Stanley
For a while I've been meaning to raise the topic of dropping requirement
#5 from
https://governance.openstack.org/tc/reference/tags/vulnerability_managed.html#requirements
since it was a high bar to clear and even projects which were previously
under vulnerability management before the tag existed did not
retroactively undergo threat analysis. While I still think it would be
swell to have architectural info on critical OpenStack components, the
volume of vulnerability reports we've received in recent years is low
enough that I think we could cover more projects even without that. I
did bring this up with the other members of the OpenStack VMT and there
was no disagreement, so I'll start a thread about that on the ML.

I'll go ahead and draft an impact description since it looks like the
stable/stein change is passing and likely to merge, and then request a
CVE assignment and prepare to issue an advisory.

** Changed in: ossa
   Status: Won't Fix => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1837252

Title:
  IFLA_BR_AGEING_TIME of 0 causes flooding across bridges

Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  Invalid
Status in os-vif:
  Fix Released
Status in os-vif stein series:
  In Progress
Status in os-vif trunk series:
  Fix Released
Status in OpenStack Security Advisory:
  Confirmed

Bug description:
  Release: OpenStack Stein
  Driver: LinuxBridge

  Using Stein w/ the LinuxBridge mech driver/agent, we have found that
  traffic is being flooded across bridges. Using tcpdump inside an
  instance, you can see unicast traffic for other instances.

  We have confirmed the macs table shows the aging timer set to 0 for
  permanent entries, and the bridge is NOT learning new MACs:

  root@lab-compute01:~# brctl showmacs brqd0084ac0-f7
  port no  mac addr            is local?   ageing timer
5   24:be:05:a3:1f:e1   yes0.00
5   24:be:05:a3:1f:e1   yes0.00
1   fe:16:3e:02:62:18   yes0.00
1   fe:16:3e:02:62:18   yes0.00
7   fe:16:3e:07:65:47   yes0.00
7   fe:16:3e:07:65:47   yes0.00
4   fe:16:3e:1d:d6:33   yes0.00
4   fe:16:3e:1d:d6:33   yes0.00
9   fe:16:3e:2b:2f:f0   yes0.00
9   fe:16:3e:2b:2f:f0   yes0.00
8   fe:16:3e:3c:42:64   yes0.00
8   fe:16:3e:3c:42:64   yes0.00
   10   fe:16:3e:5c:a6:6c   yes0.00
   10   fe:16:3e:5c:a6:6c   yes0.00
2   fe:16:3e:86:9c:dd   yes0.00
2   fe:16:3e:86:9c:dd   yes0.00
6   fe:16:3e:91:9b:45   yes0.00
6   fe:16:3e:91:9b:45   yes0.00
   11   fe:16:3e:b3:30:00   yes0.00
   11   fe:16:3e:b3:30:00   yes0.00
3   fe:16:3e:dc:c3:3e   yes0.00
3   fe:16:3e:dc:c3:3e   yes0.00

  root@lab-compute01:~# bridge fdb show | grep brqd0084ac0-f7
  01:00:5e:00:00:01 dev brqd0084ac0-f7 self permanent
  fe:16:3e:02:62:18 dev tap74af38f9-2e master brqd0084ac0-f7 permanent
  fe:16:3e:02:62:18 dev tap74af38f9-2e vlan 1 master brqd0084ac0-f7 permanent
  fe:16:3e:86:9c:dd dev tapb00b3c18-b3 master brqd0084ac0-f7 permanent
  fe:16:3e:86:9c:dd dev tapb00b3c18-b3 vlan 1 master brqd0084ac0-f7 permanent
  fe:16:3e:dc:c3:3e dev tap7284d235-2b master brqd0084ac0-f7 permanent
  fe:16:3e:dc:c3:3e dev tap7284d235-2b vlan 1 master brqd0084ac0-f7 permanent
  fe:16:3e:1d:d6:33 dev tapbeb9441a-99 vlan 1 master brqd0084ac0-f7 permanent
  fe:16:3e:1d:d6:33 dev tapbeb9441a-99 master brqd0084ac0-f7 permanent
  24:be:05:a3:1f:e1 dev eno1.102 vlan 1 master brqd0084ac0-f7 permanent
  24:be:05:a3:1f:e1 dev eno1.102 master brqd0084ac0-f7 permanent
  fe:16:3e:91:9b:45 dev tapc8ad2cec-90 master brqd0084ac0-f7 permanent
  fe:16:3e:91:9b:45 dev tapc8ad2cec-90 vlan 1 master brqd0084ac0-f7 permanent
  fe:16:3e:07:65:47 dev tap86e2c412-24 master brqd0084ac0-f7 permanent
  fe:16:3e:07:65:47 dev tap86e2c412-24 vlan 1 master brqd0084ac0-f7 permanent
  fe:16:3e:3c:42:64 dev tap37bcb70e-9e master brqd0084ac0-f7 permanent
  fe:16:3e:3c:42:64 dev tap37bcb70e-9e vlan 1 master brqd0084ac0-f7 permanent
  fe:16:3e:2b:2f:f0 dev tap40f6be7c-2d vlan 1 master brqd0084ac0-f7 permanent
  fe:16:3e:2b:2f:f0 dev tap40f6be7c-2d master brqd0084ac0-f7 permanent
  fe:16:3e:b3:30:00 dev tap6548bacb-c0 vlan 1 master brqd0084ac0-f7 permanent
  fe:16:3e:b3:30:00 dev tap6548bacb-c0 master brqd0084ac0-f7 permanent
  fe:16:3e:5c:a6:6c dev tap61107236-1e vlan 1 master brqd0084ac0-f7 permanent
  fe:16:3e:5c:a6:6c dev tap61107236-1e master brqd0084ac0-f7 permanent

  The ageing time for 
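
  For reference, a minimal sketch (illustrative only, not the os-vif
  fix) of restoring a non-zero ageing time on a bridge via the iproute2
  CLI from Python:

    import subprocess

    def restore_ageing(bridge, centisecs=30000):
        # 30000 centiseconds = 300 s, the kernel's default ageing time.
        # With a non-zero ageing time the bridge learns MACs again
        # instead of flooding unicast traffic out of every port.
        subprocess.check_call(
            ["ip", "link", "set", "dev", bridge,
             "type", "bridge", "ageing_time", str(centisecs)])

    restore_ageing("brqd0084ac0-f7")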

[Yahoo-eng-team] [Bug 1840979] Re: [L2] [opinion] update the port DB status directly in agent-side

2019-08-22 Thread Swaminathan Vasudevan
** Changed in: neutron
   Status: New => Opinion

** Changed in: neutron
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1840979

Title:
  [L2] [opinion] update the port DB status directly in agent-side

Status in neutron:
  Opinion

Bug description:
  When the ovs-agent is done processing a port, it calls neutron-server to make some DB updates.
  Especially when the ovs-agent restarts, all ports on that agent do such RPC and DB updates again to make the port status consistent. When a large number of concurrent agent restarts happen, neutron-server may not work fine.
  So how about making the following DB updates locally on the neutron agent side directly? This may involve some mechanism driver notifications; IMO, those can also be done on the agent side.

  def update_device_down(self, context, device, agent_id, host=None):
      cctxt = self.client.prepare()
      return cctxt.call(context, 'update_device_down', device=device,
                        agent_id=agent_id, host=host)

  def update_device_up(self, context, device, agent_id, host=None):
      cctxt = self.client.prepare()
      return cctxt.call(context, 'update_device_up', device=device,
                        agent_id=agent_id, host=host)

  def update_device_list(self, context, devices_up, devices_down,
                         agent_id, host):
      cctxt = self.client.prepare()
      ret = cctxt.call(context, 'update_device_list',
                       devices_up=devices_up, devices_down=devices_down,
                       agent_id=agent_id, host=host)
      return ret

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1840979/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1841104] [NEW] Openstack's vendor_data2.json is not handled

2019-08-22 Thread Marius L
Public bug reported:

Starting with Newton, OpenStack adds a vendor_data2.json in the metadata (based on the DynamicJSON vendor data provider).
However, cloud-init does not seem to process it.
According to the code (https://git.launchpad.net/cloud-init/tree/cloudinit/sources/helpers/openstack.py#n247), only "vendor_data.json" is taken into account.
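
A minimal standalone illustration of consuming both files (a sketch, not cloud-init's internal helper API):

    import json
    import os

    def read_vendor_data(base="openstack/latest"):
        # Read the classic vendor_data.json and, when present, the
        # Newton-era vendor_data2.json (DynamicJSON provider).
        data = {}
        for name in ("vendor_data.json", "vendor_data2.json"):
            path = os.path.join(base, name)
            if os.path.exists(path):
                with open(path) as fh:
                    data[name] = json.load(fh)
        return data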

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1841104

Title:
  Openstack's vendor_data2.json is not handled

Status in cloud-init:
  New

Bug description:
  Starting with Newton, OpenStack adds a vendor_data2.json in the metadata (based on the DynamicJSON vendor data provider).
  However, cloud-init does not seem to process it.
  According to the code (https://git.launchpad.net/cloud-init/tree/cloudinit/sources/helpers/openstack.py#n247), only "vendor_data.json" is taken into account.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1841104/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1840788] Re: websockify-0.9.0 breaks tempest tests

2019-08-22 Thread melanie witt
Tempest patch is at https://review.opendev.org/674364

** Also affects: tempest
   Importance: Undecided
   Status: New

** Changed in: tempest
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1840788

Title:
  websockify-0.9.0 breaks tempest tests

Status in OpenStack Compute (nova):
  In Progress
Status in tempest:
  In Progress

Bug description:
  see https://review.opendev.org/677479 for a test review

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1840788/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1834875] Re: cloud-init growpart race with udev

2019-08-22 Thread Dan Watkins
So, to reset on status: I believe that we've narrowed this down to a
systemd/udev problem, as it goes away when a different udevadm binary is
used.  I don't have a good sense of how to go about digging into that
issue, so I don't think I can take it much further.

(Also, as it appears to be a systemd issue, I'm going to mark the cloud-
init task Invalid.  Feel free to move it back to New if you believe this
is a mistake!)

** Changed in: cloud-init
   Status: In Progress => Invalid

** Changed in: cloud-init
 Assignee: Dan Watkins (daniel-thewatkins) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1834875

Title:
  cloud-init growpart race with udev

Status in cloud-init:
  Invalid
Status in systemd package in Ubuntu:
  New

Bug description:
  On Azure, it happens regularly (20-30%), that cloud-init's growpart
  module fails to extend the partition to full size.

  Such as in this example:

  

  2019-06-28 12:24:18,666 - util.py[DEBUG]: Running command ['growpart', 
'--dry-run', '/dev/sda', '1'] with allowed return codes [0] (shell=False, 
capture=True)
  2019-06-28 12:24:19,157 - util.py[DEBUG]: Running command ['growpart', 
'/dev/sda', '1'] with allowed return codes [0] (shell=False, capture=True)
  2019-06-28 12:24:19,726 - util.py[DEBUG]: resize_devices took 1.075 seconds
  2019-06-28 12:24:19,726 - handlers.py[DEBUG]: finish: 
init-network/config-growpart: FAIL: running config-growpart with frequency 
always
  2019-06-28 12:24:19,727 - util.py[WARNING]: Running module growpart () failed
  2019-06-28 12:24:19,727 - util.py[DEBUG]: Running module growpart () failed
  Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 812, in 
_run_modules
  freq=freq)
File "/usr/lib/python3/dist-packages/cloudinit/cloud.py", line 54, in run
  return self._runners.run(name, functor, args, freq, clear_on_fail)
File "/usr/lib/python3/dist-packages/cloudinit/helpers.py", line 187, in run
  results = functor(*args)
File "/usr/lib/python3/dist-packages/cloudinit/config/cc_growpart.py", line 
351, in handle
  func=resize_devices, args=(resizer, devices))
File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 2521, in 
log_time
  ret = func(*args, **kwargs)
File "/usr/lib/python3/dist-packages/cloudinit/config/cc_growpart.py", line 
298, in resize_devices
  (old, new) = resizer.resize(disk, ptnum, blockdev)
File "/usr/lib/python3/dist-packages/cloudinit/config/cc_growpart.py", line 
159, in resize
  return (before, get_size(partdev))
File "/usr/lib/python3/dist-packages/cloudinit/config/cc_growpart.py", line 
198, in get_size
  fd = os.open(filename, os.O_RDONLY)
  FileNotFoundError: [Errno 2] No such file or directory: 
'/dev/disk/by-partuuid/a5f2b49f-abd6-427f-bbc4-ba5559235cf3'

  

  @rcj suggested this is a race with udev. This seems to only happen on
  Cosmic and later.
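
  To make the race concrete, a sketch of a retry-with-settle workaround
  (illustrative only, not the eventual fix; "udevadm settle" blocks
  until the udev event queue drains):

    import os
    import subprocess
    import time

    def open_when_ready(filename, retries=10, delay=0.5):
        # Give udev a chance to recreate the by-partuuid symlink after
        # the partition table changes, instead of failing on the first
        # os.open() as in the traceback above.
        for _ in range(retries):
            try:
                return os.open(filename, os.O_RDONLY)
            except FileNotFoundError:
                subprocess.call(["udevadm", "settle"])
                time.sleep(delay)
        return os.open(filename, os.O_RDONLY)  # final attempt, may raise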

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1834875/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1841090] [NEW] dhclient exit hook could be hiding non-zero return code

2019-08-22 Thread Thomas Stringer
Public bug reported:

I'm seeing an empty dhcp lease file. Digging through the code we are
passing the -1 opt to dhclient in cloudinit.net.dhcp.dhcp_discovery.
This would allow a non-zero return code if dhclient has an issue.

I see though that there is an exit hook for dhclient:
https://git.launchpad.net/cloud-init/tree/tools/hook-dhclient. It seems
as though this hook could be hiding a non-zero return code from dhclient
main lease implementation (unless $reason is in one of those options
_and_ `cloud-init dhclient-hook` fails).

This does not seem like the desired behavior, as we get beyond the
dhclient call then only to fail on a zero-byte dhcp lease file.
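
A sketch of surfacing dhclient's exit status (the function and argument names are assumptions, not cloud-init's actual dhcp_discovery signature):

    import subprocess

    def run_dhclient(interface, lease_file, pid_file):
        # With -1, dhclient exits non-zero when it cannot get a lease,
        # so the failure can be raised instead of silently producing an
        # empty lease file.
        proc = subprocess.run(
            ["dhclient", "-1", "-v", "-lf", lease_file,
             "-pf", pid_file, interface],
            capture_output=True)
        if proc.returncode != 0:
            raise RuntimeError(
                "dhclient exited %d: %s"
                % (proc.returncode, proc.stderr.decode("utf-8", "replace")))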

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1841090

Title:
  dhclient exit hook could be hiding non-zero return code

Status in cloud-init:
  New

Bug description:
  I'm seeing an empty dhcp lease file. Digging through the code we are
  passing the -1 opt to dhclient in cloudinit.net.dhcp.dhcp_discovery.
  This would allow a non-zero return code if dhclient has an issue.

  I see though that there is an exit hook for dhclient:
  https://git.launchpad.net/cloud-init/tree/tools/hook-dhclient. It
  seems as though this hook could be hiding a non-zero return code from
  dhclient main lease implementation (unless $reason is in one of those
  options _and_ `cloud-init dhclient-hook` fails).

  This does not seem like the desired behavior, as we get beyond the
  dhclient call then only to fail on a zero-byte dhcp lease file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1841090/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1816468] Re: [SRU] Acceleration cinder - glance with ceph not working

2019-08-22 Thread Corey Bryant
** Changed in: cloud-archive
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1816468

Title:
  [SRU] Acceleration cinder - glance with ceph not working

Status in Cinder:
  Fix Released
Status in Ubuntu Cloud Archive:
  Fix Committed
Status in Ubuntu Cloud Archive rocky series:
  Fix Released
Status in Ubuntu Cloud Archive stein series:
  Fix Committed
Status in Ubuntu Cloud Archive train series:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in cinder package in Ubuntu:
  Fix Released
Status in nova package in Ubuntu:
  Fix Committed
Status in cinder source package in Cosmic:
  Won't Fix
Status in nova source package in Cosmic:
  Won't Fix
Status in cinder source package in Disco:
  Fix Released
Status in nova source package in Disco:
  Fix Released
Status in nova source package in Eoan:
  Fix Committed

Bug description:
  [Impact]
  For >= rocky (i.e. if using py3 packages) librados.cluster.get_fsid() is 
returning a binary string which means that the fsid can't be matched against a 
string version of the same value from glance when deciding whether to use an 
image that is stored in Ceph.

  [Test Case]
  * deploy openstack rocky (using py3 packages)
  * deploy ceph and use for glance backend
  * set
  /etc/glance/glance-api.conf:show_multiple_locations = True
  /etc/glance/glance-api.conf:show_image_direct_url = True
  * upload image to glance
  * attempt to boot an instance using this image
  * confirm that instance booted properly and check that the image it booted 
from is a cow clone of the glance image by doing the following in ceph:

  rbd -p nova info <instance_uuid>_disk | grep parent:

  * confirm that you see "parent: glance/<image_id>@snap"

  [Regression Potential]
  None expected

  [Other Info]
  None expected.

  
  When using cinder and glance with ceph, the code supports creating volumes from images INSIDE the ceph environment as copy-on-write volumes. This option saves space in the ceph cluster and increases the speed of instance spawning, because the volume is created directly in ceph.   <= THIS IS NOT WORKING IN PY3

  If this function is not enabled, the image is copied to the compute
  host, converted, a volume is created, and uploaded to ceph (which is
  time consuming, of course).

  The problem is that even if glance-cinder acceleration is turned on,
  the code is executed as when it is disabled, so ... the same as above:
  copy image, create volume, upload to ceph ... BUT it should create a
  copy-on-write volume inside ceph internally. <= THIS IS A BUG IN PY3

  Glance config ( controller ):

  [DEFAULT]
  show_image_direct_url = true   <= this has to be set to true to 
reproduce issue
  workers = 7
  transport_url = rabbit://openstack:openstack@openstack-db
  [cors]
  [database]
  connection = mysql+pymysql://glance:Eew7shai@openstack-db:3306/glance
  [glance_store]
  stores = file,rbd
  default_store = rbd
  filesystem_store_datadir = /var/lib/glance/images
  rbd_store_pool = images
  rbd_store_user = images
  rbd_store_ceph_conf = /etc/ceph/ceph.conf
  [image_format]
  [keystone_authtoken]
  auth_url = http://openstack-ctrl:35357
  project_name = service
  project_domain_name = default
  username = glance
  user_domain_name = default
  password = Eew7shai
  www_authenticate_uri = http://openstack-ctrl:5000
  auth_uri = http://openstack-ctrl:35357
  cache = swift.cache
  region_name = RegionOne
  auth_type = password
  [matchmaker_redis]
  [oslo_concurrency]
  lock_path = /var/lock/glance
  [oslo_messaging_amqp]
  [oslo_messaging_kafka]
  [oslo_messaging_notifications]
  [oslo_messaging_rabbit]
  [oslo_messaging_zmq]
  [oslo_middleware]
  [oslo_policy]
  [paste_deploy]
  flavor = keystone
  [store_type_location_strategy]
  [task]
  [taskflow_executor]
  [profiler]
  enabled = true
  trace_sqlalchemy = true
  hmac_keys = secret
  connection_string = redis://127.0.0.1:6379
  trace_wsgi_transport = True
  trace_message_store = True
  trace_management_store = True

  Cinder conf (controller) :
  root@openstack-controller:/tmp# cat /etc/cinder/cinder.conf | grep -v '^#' | 
awk NF
  [DEFAULT]
  my_ip = 192.168.10.15
  glance_api_servers = http://openstack-ctrl:9292
  auth_strategy = keystone
  enabled_backends = rbd
  osapi_volume_workers = 7
  debug = true
  transport_url = rabbit://openstack:openstack@openstack-db
  [backend]
  [backend_defaults]
  rbd_pool = volumes
  rbd_user = volumes1
  rbd_secret_uuid = b2efeb49-9844-475b-92ad-5df4a3e1300e
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  [barbican]
  [brcd_fabric_example]
  [cisco_fabric_example]
  [coordination]
  [cors]
  [database]
  connection = mysql+pymysql://cinder:EeRe3ahx@openstack-db:3306/cinder
  [fc-zone-manager]
  [healthcheck]
  [key_manager]
  [keystone_authtoken]
  aut

[Yahoo-eng-team] [Bug 1841067] [NEW] SR-IOV agent depends on mac addresses for getting bound ports

2019-08-22 Thread Adrian Chiris
Public bug reported:

SR-IOV agent depends on the administrative MAC address of the VF to determine which ports are managed by it.
This dependency should be removed, as it relies on nova virt drivers to set the administrative MAC address.
For macvtap ports, setting the administrative MAC address is not necessary, but neutron requires it.

A recent cleanup in Nova[1] caused VMs with macvtap ports to not spawn,
as the SR-IOV agent did not recognize the port as being configured.

A revert was proposed[2], but the long-term solution should be in
Neutron.

It should be noted that this is not a new issue, but rather a historic
design/implementation decision of the SR-IOV agent.

This can be regarded as a (very old) bug exposed by the recent cleanup
in nova, or as an enhancement to the existing code that removes the
dependency on MAC addresses from the agent.


[1]https://review.opendev.org/#/c/31/
[2]https://review.opendev.org/#/c/675776/
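
A rough illustration of the dependency described above (not the agent's actual code): the administrative VF MACs are what "ip link show" on the PF reports, e.g. "vf 0 MAC fa:16:3e:11:22:33, ...":

    import re
    import subprocess

    def vf_admin_macs(pf_device):
        # Map VF index -> administrative MAC as reported on the PF.
        out = subprocess.check_output(
            ["ip", "link", "show", pf_device]).decode()
        return dict(re.findall(r"vf (\d+)\s+MAC\s+([0-9a-f:]+)", out))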

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1841067

Title:
  SR-IOV agent depends on mac addresses for getting bound ports

Status in neutron:
  New

Bug description:
  SR-IOV agent depends on the administrative MAC address of the VF to determine which ports are managed by it.
  This dependency should be removed, as it relies on nova virt drivers to set the administrative MAC address.
  For macvtap ports, setting the administrative MAC address is not necessary, but neutron requires it.

  A recent cleanup in Nova[1] caused VMs with macvtap ports to not spawn,
  as the SR-IOV agent did not recognize the port as being configured.

  A revert was proposed[2], but the long-term solution should be in
  Neutron.

  It should be noted that this is not a new issue, but rather a historic
  design/implementation decision of the SR-IOV agent.

  This can be regarded as a (very old) bug exposed by the recent cleanup
  in nova, or as an enhancement to the existing code that removes the
  dependency on MAC addresses from the agent.

  
  [1]https://review.opendev.org/#/c/31/
  [2]https://review.opendev.org/#/c/675776/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1841067/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1841050] [NEW] Horizon network port create panel shows "port security" checkbox that breaks port creation for non-admin users

2019-08-22 Thread Radomir Dopieralski
Public bug reported:

When creating a network port, we display the "port security" checkbox
even when the user has no right to set it. That results in a policy
error when the form is submitted.

We should check the user's rights, and neither display that checkbox
nor pass the related parameter in the API call when those rights are
insufficient for setting it.
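
A sketch of the suggested guard (the policy target name is an assumption, not taken from an actual Horizon patch):

    from openstack_dashboard import policy

    def allowed_to_set_port_security(request):
        return policy.check(
            (("network", "create_port:port_security_enabled"),), request)

    # In the form, the checkbox would be dropped when this returns
    # False, and the parameter omitted from the neutron API call.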

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: neutron

** Tags added: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1841050

Title:
  Horizon network port create panel shows "port security" checkbox that
  breaks port creation for non-admin users

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When creating a network port, we display the "port security" checkbox
  even when the user has no right to set it. That results in a policy
  error when the form is submitted.

  We should check the user's rights, and neither display that checkbox
  nor pass the related parameter in the API call when those rights are
  insufficient for setting it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1841050/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1841049] [NEW] Angularized images overview ignores i18n for dates

2019-08-22 Thread Radomir Dopieralski
Public bug reported:

The image overview view (the one that appears when we click on an
image) displays two dates, created_at and updated_at. The Angular
version of that view (which is now the default) always displays the
dates in some weird format that is different from Horizon's default and
ignores the user's internationalization settings.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1841049

Title:
  Angularized images overview ignores i18n for dates

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The image overview view (the one that appears when we click on an
  image) displays two dates, created_at and updated_at. The Angular
  version of that view (which is now the default) always displays the
  dates in some weird format that is different from Horizon's default
  and ignores the user's internationalization settings.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1841049/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1840967] Re: nova-next job does not fail when 'nova-manage db purge' fails

2019-08-22 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/677806
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=f32671359edc7f87c9f77e58d81e0b4d88bffdbe
Submitter: Zuul
Branch:master

commit f32671359edc7f87c9f77e58d81e0b4d88bffdbe
Author: melanie witt 
Date:   Wed Aug 21 19:15:59 2019 +

Make a failure to purge_db fail in post_test_hook.sh

Currently, the 'purge_db' call occurs before 'set -e', so if and when
the database purge fails (return non-zero) it does not cause the script
to exit with a failure.

This moves the call after 'set -e' to make the script exit with a
failure if the database purge step fails.

Closes-Bug: #1840967

Change-Id: I6ae27c4e11acafdc0bba8813f47059d084758b4e


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1840967

Title:
  nova-next job does not fail when 'nova-manage db purge' fails

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Happened upon this while working on another patch to add more testing
  to our post_test_hook.sh script, excerpt from the log [1]:

  + /usr/local/bin/nova-manage db purge --all --verbose --all-cells
  + RET=3
  + [[ 3 -eq 0 ]]
  + echo Purge failed with result 3
  Purge failed with result 3
  + return 3
  + set -e
  + set +x
  WARNING: setting legacy OS_TENANT_NAME to support cli tools.
  + /opt/stack/nova/gate/post_test_hook.sh:main:54 :   echo 'Verifying that 
instances were archived from all cells'
  Verifying that instances were archived from all cells
  ++ /opt/stack/nova/gate/post_test_hook.sh:main:55 :   openstack server list 
--deleted --all-projects -c ID -f value
  + /opt/stack/nova/gate/post_test_hook.sh:main:55 :   
deleted_servers='e4727a33-796e-4173-b369-24d7ee45d7fd
  b213a354-0830-4cc3-abf7-e9dd068cefa9
  33569d93-d7b6-4a92-825e-f36e972722db
  521e4a84-c313-433e-8cc7-6d66c821d78c

  Because of a bug in my WIP patch, the purge command failed, but the
  job continued to run and didn't fail at that point because the 'nova-
  manage db purge' command comes before the 'set -e' command [that makes
  the script exit with any non-zero return value].

  So, we need to move the purge command after 'set -e'. Note that we
  should *not* move the archive command though, because during its
  intermediate runs, it is expected to return 1, and we don't want to
  fail the job when that happens. The archive_deleted_rows function does
  its own explicit exiting in the case of actual failures.

  [1] https://object-storage-ca-ymq-1.vexxhost.net/v1/86bbbcfa8ad043109d2d7af530225c72/logs_40/672840/8/check/nova-next/9d13cfb/ara-report/result/d13f888f-d187-4c3b-b5ab-9326f611e534/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1840967/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp