[Yahoo-eng-team] [Bug 1818292] Re: POLICY_CHECK_FUNCTION as string change incomplete

2019-03-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/640520
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=78de547871aedb799b9d60c051103c35357fe0bc
Submitter: Zuul
Branch: master

commit 78de547871aedb799b9d60c051103c35357fe0bc
Author: David Lyle 
Date:   Fri Mar 1 14:15:21 2019 -0700

Fix policy function check error

Change in I8a346e55bb98e4e22e0c14a614c45d493d20feb4 to make
POLICY_CHECK_FUNCTION a string rather than a function was incomplete.

The case in horizon/tables/base.py is particularly problematic and results
in raising the "TypeError: 'str' object is not callable" error.

There is another instance that is not problematic, but it is changed for
consistency's sake.

Change-Id: Ifc616e322eb38ec7e5ac218f7f3c5ccec52e40f4
Closes-Bug: #1818292


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1818292

Title:
  POLICY_CHECK_FUNCTION as string change incomplete

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Change in I8a346e55bb98e4e22e0c14a614c45d493d20feb4 to make
  POLICY_CHECK_FUNCTION a string rather than a function was incomplete.

  The case in horizon/tables/base.py is particularly problematic and
  results in raising the "TypeError: 'str' object is not callable" error.

  There is another instance that is not problematic, but it should be
  changed for consistency's sake.
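
  For context, a minimal sketch of what the fix implies, assuming the
  setting now holds a dotted import path. The resolution helper shown is
  Django's real import_string; whether Horizon resolves the setting exactly
  this way is an assumption.

```python
# Hedged sketch: resolve a string POLICY_CHECK_FUNCTION before calling it.
from django.conf import settings
from django.utils.module_loading import import_string


def policy_check(actions, request):
    check = settings.POLICY_CHECK_FUNCTION
    if isinstance(check, str):
        # After the change the setting holds a dotted path, e.g.
        # "openstack_auth.policy.check"; calling the raw string raises
        # TypeError: 'str' object is not callable.
        check = import_string(check)
    return check(actions, request)
```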

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1818292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818607] [NEW] Networking Option 2: Self-service networks in neutron

2019-03-04 Thread winmasta
Public bug reported:

[x] This doc is inaccurate in this way:

There is no:

auth_url = http://controller:5000
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS

under the [keystone_authtoken] section.

---
Release: 12.0.6.dev61 on 2019-02-23 04:01
SHA: 1139299dd03239d48186d07d1eff8cbf2c460299
Source: 
https://git.openstack.org/cgit/openstack/neutron/tree/doc/source/install/controller-install-option2-ubuntu.rst
URL: 
https://docs.openstack.org/neutron/queens/install/controller-install-option2-ubuntu.html

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc

** Description changed:

- 
- This bug tracker is for errors with the documentation, use the following
- as a template and remove or add fields as you see fit. Convert [ ] into
- [x] to check boxes:
- 
- - [x] This doc is inaccurate in this way:
+ [x] This doc is inaccurate in this way:
  
  There is no:
  
  auth_url = http://controller:5000
  project_domain_name = default
  user_domain_name = default
  project_name = service
  username = neutron
  password = NEUTRON_PASS
  
- under [keystone_authtoken] section/
- 
- - [ ] This is a doc addition request.
- - [ ] I have a fix to the document that I can paste below including example: 
input and output. 
- 
- If you have a troubleshooting or support issue, use the following
- resources:
- 
-  - Ask OpenStack: http://ask.openstack.org
-  - The mailing list: http://lists.openstack.org
-  - IRC: 'openstack' channel on Freenode
+ under [keystone_authtoken] section.
  
  ---
  Release: 12.0.6.dev61 on 2019-02-23 04:01
  SHA: 1139299dd03239d48186d07d1eff8cbf2c460299
  Source: 
https://git.openstack.org/cgit/openstack/neutron/tree/doc/source/install/controller-install-option2-ubuntu.rst
  URL: 
https://docs.openstack.org/neutron/queens/install/controller-install-option2-ubuntu.html

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1818607

Title:
  Networking Option 2: Self-service networks in neutron

Status in neutron:
  New

Bug description:
  [x] This doc is inaccurate in this way:

  There is no:

  auth_url = http://controller:5000
  project_domain_name = default
  user_domain_name = default
  project_name = service
  username = neutron
  password = NEUTRON_PASS

  under the [keystone_authtoken] section.
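
  For reference, a hedged sketch of the full section as it could appear in
  /etc/neutron/neutron.conf. The six options quoted above come from this
  report; auth_uri, memcached_servers, and auth_type are assumptions based
  on the usual install-guide template:

    [keystone_authtoken]
    auth_uri = http://controller:5000
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = neutron
    password = NEUTRON_PASS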

  ---
  Release: 12.0.6.dev61 on 2019-02-23 04:01
  SHA: 1139299dd03239d48186d07d1eff8cbf2c460299
  Source: 
https://git.openstack.org/cgit/openstack/neutron/tree/doc/source/install/controller-install-option2-ubuntu.rst
  URL: 
https://docs.openstack.org/neutron/queens/install/controller-install-option2-ubuntu.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1818607/+subscriptions



[Yahoo-eng-team] [Bug 1818560] Re: Nova test_report_client uses nova conf when starting placement intercept, causing missing config opts

2019-03-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/640853
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=09090c8277284848007a8de1187b5cdc68c37d09
Submitter: Zuul
Branch: master

commit 09090c8277284848007a8de1187b5cdc68c37d09
Author: Chris Dent 
Date:   Mon Mar 4 20:08:28 2019 +

Use a placement conf when testing report client

It turns out that the independent wsgi interceptors in
test_report_client were using nova's global configuration
when creating the intercepts using code from placement.
This was working because until [1] placement's set of conf
options had not diverged from nova's and nova still has
placement_database config settings.

This change takes advantage of new functionality in the
PlacementFixture to allow the fixture to manage config and
database, but _not_ run the interceptor. This means it
can set up a config that is later used by the independent
interceptors that are used in the report client tests.

[1] Ie43a69be8b75250d9deca6a911eda7b722ef8648

Change-Id: I05326e0f917ca1b9a6ef8d3bd463f68bd00e217e
Closes-Bug: #1818560
Depends-On: I8c36f35dbe85b0c0db1a5b6b5389b160b68ca488


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1818560

Title:
  Nova test_report_client uses nova conf when starting placement
  intercept, causing missing config opts

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  See: http://logs.openstack.org/98/538498/22/gate/nova-tox-functional-
  py35/7673d3e/testr_results.html.gz

  The failing tests there are failing because the Database fixture from
  placement is used directly, and configuration opts are not being
  registered properly. This was an oversight when adding a new
  configuration setting.

  The fix is to register the missing opt when requested to do so.

  This is blocking the gate.

  LATER, to clarify:

  The root cause of this is that in test_report_client, a global CONF
  from nova was being used to create the placement wsgi-intercepts. When
  a new config was added on the placement side, that global CONF was no
  longer in sync.
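
  A self-contained oslo.config toy example (not nova's or placement's
  actual code) of why reading a placement-only option through a conf
  object that never registered it fails once the option sets diverge:

```python
# Hedged toy example with oslo.config, illustrating diverged option sets.
from oslo_config import cfg

nova_conf = cfg.ConfigOpts()       # stands in for nova's global CONF
placement_conf = cfg.ConfigOpts()  # stands in for placement's own conf

# A placement-only option, registered only on placement's conf.
placement_conf.register_opts(
    [cfg.StrOpt("connection", default="sqlite://")],
    group="placement_database")

print(placement_conf.placement_database.connection)  # works: sqlite://

try:
    nova_conf.placement_database.connection  # the old test behavior
except cfg.NoSuchOptError as exc:
    print("fails once the option sets diverge:", exc)
```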

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1818560/+subscriptions



[Yahoo-eng-team] [Bug 1788936] Re: Network address translation in Neutron wrong RFC in documentation

2019-03-04 Thread André Luis Penteado
** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1788936

Title:
  Network address translation in Neutron wrong RFC in documentation

Status in neutron:
  Fix Released
Status in openstack-manuals:
  New

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [ x] This doc is inaccurate in this way:

  Shouldn't it be RFC 1918, which defines private IP address ranges on
  networks, instead of RFC 5737, which defines address ranges reserved for
  use in documentation?

  RFC 5737 reserves the following three subnets as private addresses:

  192.0.2.0/24
  198.51.100.0/24
  203.0.113.0/24

  
  - [ ] This is a doc addition request.
  - [x] I have a fix to the document that I can paste below including example:

  RFC 1918 reserves the following three subnets as private addresses:

   10.0.0.0/8 
   172.16.0.0/12
   192.168.0.0/16
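
  A quick self-contained illustration of the distinction (an editor's
  sketch, not part of the proposed doc fix):

```python
# RFC 1918 private ranges vs. RFC 5737 documentation ranges.
import ipaddress

RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918)

assert is_rfc1918("192.168.0.10")    # RFC 1918: private address space
assert not is_rfc1918("192.0.2.10")  # RFC 5737 TEST-NET-1: documentation
```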

  
  If you have a troubleshooting or support issue, use the following  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 11.0.6.dev66 on 2018-08-13 11:52
  SHA: b87eb4814a1a936844a0dbd726e7cd9a0de5b492
  Source: 
https://git.openstack.org/cgit/openstack/neutron/tree/doc/source/admin/intro-nat.rst
  URL: https://docs.openstack.org/neutron/pike/admin/intro-nat.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1788936/+subscriptions



[Yahoo-eng-team] [Bug 1804462] Re: Remove obsolete service policies from policy.v3cloudsample.json

2019-03-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/619282
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=c83fcbc42aac247789c9a53abfbe237fa9640d38
Submitter: Zuul
Branch: master

commit c83fcbc42aac247789c9a53abfbe237fa9640d38
Author: Lance Bragstad 
Date:   Wed Nov 21 15:45:50 2018 +

Remove service policies from policy.v3cloudsample.json

By incorporating system-scope and default roles, we've effectively
made these policies obsolete. We can simplify what we maintain and
provide a more consistent, unified view of default service behavior by
removing them.

Change-Id: Ifa2282481ee3fc544c1d50ac8e8972b0d3a5332e
Closes-Bug: 1804462


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1804462

Title:
  Remove obsolete service policies from policy.v3cloudsample.json

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Once support for scope types landed in the service API policies, the
  policies in policy.v3cloudsample.json became obsolete [0][1].

  We should add formal protection for the policies with enforce_scope =
  True in keystone.tests.unit.protection.v3 and remove the old policies
  from the v3 sample policy file.

  This will reduce confusion by having a true default policy for
  services.

  [0] https://review.openstack.org/#/c/525696/
  [1] 
http://git.openstack.org/cgit/openstack/keystone/tree/etc/policy.v3cloudsample.json?id=fb73912d87b61c419a86c0a9415ebdcf1e186927#n19

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1804462/+subscriptions



[Yahoo-eng-team] [Bug 1818571] [NEW] cloud-init clean removes seed directory even when --seed is not specified

2019-03-04 Thread Dan Watkins
Public bug reported:

```
./packages/bddeb
lxc launch ubuntu-daily:d reproducer
lxc file push cloud-init_all.deb reproducer/tmp/
lxc exec reproducer -- find /var/lib/cloud/seed  # Produces output
lxc exec reproducer -- cloud-init clean --logs
lxc exec reproducer -- find /var/lib/cloud/seed  # Still produces output
lxc exec reproducer -- dpkg -i /tmp/cloud-init_all.deb
lxc exec reproducer -- cloud-init clean --logs
lxc exec reproducer -- find /var/lib/cloud/seed  # RUH ROH
```
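
A minimal sketch of the guard the fix presumably needs, assuming a
remove_artifacts-style helper (illustrative, not cloud-init's actual
code):

```python
# Hedged sketch: only remove /var/lib/cloud/seed when --seed was passed.
import os
import shutil

CLOUD_DIR = "/var/lib/cloud"

def remove_artifacts(remove_seed: bool = False) -> None:
    for name in os.listdir(CLOUD_DIR):
        if name == "seed" and not remove_seed:
            continue  # --seed not given: leave the seed directory alone
        path = os.path.join(CLOUD_DIR, name)
        if os.path.isdir(path):
            shutil.rmtree(path)
        else:
            os.unlink(path)
```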

** Affects: cloud-init
 Importance: High
 Assignee: Dan Watkins (daniel-thewatkins)
 Status: In Progress

** Changed in: cloud-init
 Assignee: (unassigned) => Dan Watkins (daniel-thewatkins)

** Changed in: cloud-init
   Status: New => In Progress

** Changed in: cloud-init
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1818571

Title:
  cloud-init clean removes seed directory even when --seed is not
  specified

Status in cloud-init:
  In Progress

Bug description:
  ```
  ./packages/bddeb
  lxc launch ubuntu-daily:d reproducer
  lxc file push cloud-init_all.deb reproducer/tmp/
  lxc exec reproducer -- find /var/lib/cloud/seed  # Produces output
  lxc exec reproducer -- cloud-init clean --logs
  lxc exec reproducer -- find /var/lib/cloud/seed  # Still produces output
  lxc exec reproducer -- dpkg -i /tmp/cloud-init_all.deb
  lxc exec reproducer -- cloud-init clean --logs
  lxc exec reproducer -- find /var/lib/cloud/seed  # RUH ROH
  ```

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1818571/+subscriptions



[Yahoo-eng-team] [Bug 1037753] Re: implement cloud-init query

2019-03-04 Thread Dan Watkins
We now have `cloud-init query`.

** Changed in: cloud-init
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1037753

Title:
  implement cloud-init query

Status in cloud-init:
  Fix Released

Bug description:
  at one point there was a 'cloud-init-query' tool that would just
  report data from the datasource.

  This wasn't that useful though, because it only would work as root.
  That was because it read the pickled /var/lib/cloud/instance/obj.pkl
  and because that can contain sensitive information it was made 600 and
  root:root.

  It'd be nice if we could have the datasources save off a clean version
  of the data somewhere world-readable, and then have a tool that could
  read that.
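
  A hedged sketch of the split being asked for; the paths and the
  redaction key below are illustrative assumptions:

```python
# Write a redacted, world-readable copy alongside the root-only full data.
import json
import os

def save_instance_data(data: dict, redact=("security-credentials",)):
    public = {k: v for k, v in data.items() if k not in redact}
    with open("/run/cloud-init/instance-data.json", "w") as f:
        json.dump(public, f)
    os.chmod("/run/cloud-init/instance-data.json", 0o644)  # world-readable

    with open("/run/cloud-init/instance-data-sensitive.json", "w") as f:
        json.dump(data, f)
    os.chmod("/run/cloud-init/instance-data-sensitive.json", 0o600)  # root-only
```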

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1037753/+subscriptions



[Yahoo-eng-team] [Bug 1818566] Re: If ipv6 is disabled through the kernel, neutron-dhcp-agent fails to create the tap devices due to error regarding ipv6

2019-03-04 Thread Brian Haley
*** This bug is a duplicate of bug 1618878 ***
https://bugs.launchpad.net/bugs/1618878

** This bug has been marked a duplicate of bug 1618878
   Disabling IPv6 on an interface fails if IPv6 is completely disabled in the 
kernel

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1818566

Title:
  If ipv6 is disabled through the kernel, neutron-dhcp-agent fails to
  create the tap devices due to error regarding ipv6

Status in neutron:
  New

Bug description:
  If we disable ipv6 using ipv6.disable=1 on the kernel command line,
  neutron-dhcp-agent stops creating the tap devices and fails here:

  
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.linux.utils 
[-] Exit code: 255; Stdin: ; Stdout: ; Stderr: sysctl: cannot stat 
/proc/sys/net/ipv6/conf/default/accept_ra: No such file or directory
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent 
[-] Unable to enable dhcp for 310b9752-06a5-4d7b-98ae-1ba8536e22fa.
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent 
Traceback (most recent call last):
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent   
File "/usr/lib/python2.7/site-packages/neutron/agent/dhcp/agent.py", line 140, 
in call_driver
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent
 getattr(driver, action)(**action_kwargs)
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent   
File "/usr/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py", line 213, 
in enable
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent
 interface_name = self.device_manager.setup(self.network)
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent   
File "/usr/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py", line 1441, 
in setup
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent
 n_const.ACCEPT_RA_DISABLED)
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent   
File "/usr/lib/python2.7/site-packages/neutron/agent/linux/interface.py", line 
260, in configure_ipv6_ra
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent
 'value': value}])
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent   
File "/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 
912, in execute
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent
 log_fail_as_error=log_fail_as_error, **kwargs)
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent   
File "/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 148, 
in execute
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent
 raise ProcessExecutionError(msg, returncode=returncode)
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent 
ProcessExecutionError: Exit code: 255; Stdin: ; Stdout: ; Stderr: sysctl: 
cannot stat /proc/sys/net/ipv6/conf/default/accept_ra: No such file or directory
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent
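
  A hedged sketch of the kind of guard that avoids this failure; the
  function names are illustrative, not neutron's actual code:

```python
# Skip IPv6 sysctls when the kernel was booted with ipv6.disable=1,
# in which case the whole /proc/sys/net/ipv6 tree is absent.
import os

def ipv6_supported() -> bool:
    return os.path.exists("/proc/sys/net/ipv6")

def set_accept_ra(dev: str, value: int) -> None:
    if not ipv6_supported():
        return  # avoids "sysctl: cannot stat .../accept_ra"
    path = "/proc/sys/net/ipv6/conf/%s/accept_ra" % dev
    with open(path, "w") as f:
        f.write(str(value))
```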

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1818566/+subscriptions



[Yahoo-eng-team] [Bug 1818566] [NEW] If ipv6 is disabled through the kernel, neutron-dhcp-agent fails to create the tap devices due to error regarding ipv6

2019-03-04 Thread David Hill
*** This bug is a duplicate of bug 1618878 ***
https://bugs.launchpad.net/bugs/1618878

Public bug reported:

If we disable ipv6 using ipv6.disable=1 on the kernel command line,
neutron-dhcp-agent stops creating the tap devices and fails here:


dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.linux.utils [-] 
Exit code: 255; Stdin: ; Stdout: ; Stderr: sysctl: cannot stat 
/proc/sys/net/ipv6/conf/default/accept_ra: No such file or directory
dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent [-] 
Unable to enable dhcp for 310b9752-06a5-4d7b-98ae-1ba8536e22fa.
dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent 
Traceback (most recent call last):
dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent   
File "/usr/lib/python2.7/site-packages/neutron/agent/dhcp/agent.py", line 140, 
in call_driver
dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent 
getattr(driver, action)(**action_kwargs)
dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent   
File "/usr/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py", line 213, 
in enable
dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent 
interface_name = self.device_manager.setup(self.network)
dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent   
File "/usr/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py", line 1441, 
in setup
dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent 
n_const.ACCEPT_RA_DISABLED)
dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent   
File "/usr/lib/python2.7/site-packages/neutron/agent/linux/interface.py", line 
260, in configure_ipv6_ra
dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent 
'value': value}])
dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent   
File "/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 
912, in execute
dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent 
log_fail_as_error=log_fail_as_error, **kwargs)
dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent   
File "/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 148, 
in execute
dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent 
raise ProcessExecutionError(msg, returncode=returncode)
dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent 
ProcessExecutionError: Exit code: 255; Stdin: ; Stdout: ; Stderr: sysctl: 
cannot stat /proc/sys/net/ipv6/conf/default/accept_ra: No such file or directory
dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

- I see the same behavior here though by setting that in
- /boot/grub/grub.cfg and rebooting.  I lost all my taps.  Investigating.
+ If we disable ipv6 using ipv6.disable=1 at the kernel runtime, neutron-
+ dhcp-agent stops creating the tap devices and fails here:
  
- BTW, I'm not sure totally disabling ipv6 from the kernel is the best
- method to disable ipv6 ... In this KCS[ [1] , we notify the customers
- that this might break SSH XFowarding.   In this case, the taps are no
- longer created but according to this KCS [2], it should still be created
- with ipv4 links.  That could be a bug.  Perhaps open a BZ for this
- issue.
- 
- It looks like the problem is here:
  
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.linux.utils 
[-] Exit code: 255; Stdin: ; Stdout: ; Stderr: sysctl: cannot stat 
/proc/sys/net/ipv6/conf/default/accept_ra: No such file or directory
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent 
[-] Unable to enable dhcp for 310b9752-06a5-4d7b-98ae-1ba8536e22fa.
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent 
Traceback (most recent call last):
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent   
File "/usr/lib/python2.7/site-packages/neutron/agent/dhcp/agent.py", line 140, 
in call_driver
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent
 getattr(driver, action)(**action_kwargs)
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent   
File "/usr/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py", line 213, 
in enable
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent
 interface_name = self.device_manager.setup(self.network)
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent   
File "/usr/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py", line 1441, 
in setup
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent
 n_const.ACCEPT_RA_DISABLED)
  dhcp-agent.log:2019-03-04 19:29:13.342 5869 ERROR neutron.agent.dhcp.agent   

[Yahoo-eng-team] [Bug 1818560] [NEW] Nova's use of the placement database fixture from test_report_client doesn't register opts

2019-03-04 Thread Chris Dent
Public bug reported:

See: http://logs.openstack.org/98/538498/22/gate/nova-tox-functional-
py35/7673d3e/testr_results.html.gz

The failing tests there are failing because the Database fixture from
placement is used directly, and configuration opts are not being
registered properly. This was an oversight when adding a new
configuration setting.

The fix is to register the missing opt when requested to do so.

This is blocking the gate.

** Affects: nova
 Importance: Critical
 Status: Confirmed


** Tags: placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1818560

Title:
  Nova's use of the placement database fixture from test_report_client
  doesn't register opts

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  See: http://logs.openstack.org/98/538498/22/gate/nova-tox-functional-
  py35/7673d3e/testr_results.html.gz

  The failing tests there are failing because the Database fixture from
  placement is used directly, and configuration opts are not being
  registered properly. This was an oversight when adding a new
  configuration setting.

  The fix is to register the missing opt when requested to do so.

  This is blocking the gate.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1818560/+subscriptions



[Yahoo-eng-team] [Bug 1818383] Re: neutron not allowing access to external network

2019-03-04 Thread Manjeet Singh Bhatia
I think you're missing an iptables masquerade rule?
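
One way to check, reusing the router namespace name quoted in the report
below (this assumes the standard iptables tooling on the network node):

    ip netns exec qrouter-c2d1460b-3585-4d37-a782-0ae4a713738b \
    iptables -t nat -S | grep -E 'SNAT|MASQUERADE'

If no SNAT/MASQUERADE rule shows up there, outbound traffic from
10.10.1.0/24 is never translated onto the external network.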

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1818383

Title:
  neutron not allowing access to external network

Status in neutron:
  Invalid

Bug description:
  We did a 4-node bare-metal OpenStack Queens install. After setting up
  networking, adding eth0 to br-ex, and restarting the network service, we
  cannot ping from the qrouter to the external floating IP network. Below
  is the layout of the 4-node setup and our OVS DB info.

  This was a fresh install using a PackStack script modified to prep all
  nodes except the storage node.

  CentOS 7
  OpenStack Queens release

  static hostname: controller01
   Icon name: computer
  Machine ID: 0f62242dd7f04961b2fa64208526
 Boot ID: 1bf746fe751f4e58902431573696f31e
Operating System: CentOS Linux 7 (Core)
 CPE OS Name: cpe:/o:centos:centos:7
  Kernel: Linux 3.10.0-957.5.1.el7.x86_64
Architecture: x86-64

  node 1 controller/network
  node 2 compute01
  node 3 compute02
  node 4 cinder storage

  
  [root@controller01 neutron(keystone_admin)]# neutron-server --version
  neutron-server 12.0.5

  root@controller01 neutron(keystone_admin)]# ovs-vsctl show
  96de914b-630f-4014-b738-e149ee385b15
  Manager "ptcp:6640:127.0.0.1"
  is_connected: true
  Bridge "br-eth1"
  Controller "tcp:127.0.0.1:6633"
  is_connected: true
  fail_mode: secure
  Port "br-eth1"
  Interface "br-eth1"
  type: internal
  Port "eth1"
  Interface "eth1"
  Port "phy-br-eth1"
  Interface "phy-br-eth1"
  type: patch
  options: {peer="int-br-eth1"}
  Bridge br-int
  Controller "tcp:127.0.0.1:6633"
  is_connected: true
  fail_mode: secure
  Port "int-br-eth1"
  Interface "int-br-eth1"
  type: patch
  options: {peer="phy-br-eth1"}
  Port patch-tun
  Interface patch-tun
  type: patch
  options: {peer=patch-int}
  Port "tap5d460e96-f2"
  tag: 1
  Interface "tap5d460e96-f2"
  type: internal
  Port br-int
  Interface br-int
  type: internal
  Port "qg-96178c89-7a"
  tag: 1
  Interface "qg-96178c89-7a"
  type: internal
  Port "qr-232080af-bb"
  tag: 2
  Interface "qr-232080af-bb"
  type: internal
  Port "tap31ad97cd-15"
  tag: 2
  Interface "tap31ad97cd-15"
  type: internal
  Bridge br-ex
  Port "eth0"
  Interface "eth0"
  Port br-ex
  Interface br-ex
  type: internal
  Bridge br-tun
  Controller "tcp:127.0.0.1:6633"
  is_connected: true
  fail_mode: secure
  Port patch-int
  Interface patch-int
  type: patch
  options: {peer=patch-tun}
  Port br-tun
  Interface br-tun
  type: internal
  Port "vxlan-c0a8015c"
  Interface "vxlan-c0a8015c"
  type: vxlan
  options: {df_default="true", in_key=flow, 
local_ip="192.168.1.90", out_key=flow, remote_ip="192.168.1.92"}
  Port "vxlan-c0a8015b"
  Interface "vxlan-c0a8015b"
  type: vxlan
  options: {df_default="true", in_key=flow, 
local_ip="192.168.1.90", out_key=flow, remote_ip="192.168.1.91"}
  ovs_version: "2.9.0"


  floating IP network = 192.168.30.0/24

  management interface network on all nodes = 192.168.1.0/24

  tenant network = 10.10.1.0/24

  [root@controller01 neutron(keystone_admin)]# ip netns exec 
qrouter-c2d1460b-3585-4d37-a782-0ae4a713738b route -n
  Kernel IP routing table
  Destination Gateway Genmask Flags Metric RefUse Iface
  0.0.0.0 192.168.1.1 0.0.0.0 UG0  00 
qg-96178c89-7a
  10.10.1.0   0.0.0.0 255.255.255.0   U 0  00 
qr-232080af-bb
  192.168.1.1 0.0.0.0 255.255.255.255 UH0  00 
qg-96178c89-7a
  192.168.30.00.0.0.0 255.255.255.0   U 0  00 
qg-96178c89-7a


  [root@controller01 neutron(keystone_admin)]# openstack server list
  
+--+-++--+--+-+
  | ID   | Name| Status | Networks  
   | Image| Flavor  |
  

[Yahoo-eng-team] [Bug 1818544] Re: openstack instance not able to scp files to outside network

2019-03-04 Thread Sohny
** Also affects: centos
   Importance: Undecided
   Status: New

** No longer affects: centos

** Description changed:

  I have a openstack setup which has newton version installed on CentOS
  7.3. I am able to successfuly create VMs and associate floating Ips.
  Successfuly able to ssh into and out of VM from external network but I
  am not able to SCP any file out of the VM instance . SCP into the VM
  instance is fine. SCP btw VM instances are also fine
  
  Below is a 70kb file for which transfer is still going on after 2 hours.
  At the destination only 10kb has been copied.
  
  [root@test-server ~]# scp  /var/cache/jenkins/war/WEB-
  INF/lib/remoting-3.29.jar
  dscadmin@1.20.28.146:/home/dscadmin/jenkins/remoting-3.29.jar
  100%  771KB  35.0KB/s   00:22[
  
+ System SPECs:
  
- System SPECs:
+ [root@newton-1 neutron]# cat /etc/redhat-release
+ CentOS Linux release 7.3.1611 (Core)
  
  [root@newton-1 neutron]# openstack --version
  openstack 3.2.1
  
  [root@newton-1 neutron]# rpm -qa|grep neutron
  openstack-neutron-common-9.4.1-1.el7.noarch
  openstack-neutron-openvswitch-9.4.1-1.el7.noarch
  puppet-neutron-9.5.0-1.el7.noarch
  openstack-neutron-ml2-9.4.1-1.el7.noarch
  python-neutron-9.4.1-1.el7.noarch
  openstack-neutron-metering-agent-9.4.1-1.el7.noarch
  python-neutron-lib-0.4.0-1.el7.noarch
  openstack-neutron-9.4.1-1.el7.noarch
  python2-neutronclient-6.0.0-2.el7.noarch
  
  [root@newton-1 neutron]# uname -r
  3.10.0-514.26.2.el7.x86_64
- 
  
  Following are some log files i tracked for this at /var/log/neutron
  
  [root@newton-1 neutron]# tail -f server.log
  2019-03-04 11:54:28.737 17152 INFO neutron.wsgi 
[req-de76d2a0-72c1-4d90-abf9-42e03aa9c76c bee9b87b7aa24677b3c536f7906fbf83 
d1b8bebf20644e27b69e194b644d1154 - - -] 10.1.31.142 - - [04/Mar/2019 11:54:28] 
"GET 
/v2.0/ports.json?network_id=113e2ce7-475b-43ea-9765-7cce7565e639_owner=network%3Adhcp
 HTTP/1.1" 200 1119 0.035162
  2019-03-04 11:54:28.749 17152 INFO neutron.wsgi 
[req-af4b081a-c900-4cd4-ad34-f64d19b57048 bee9b87b7aa24677b3c536f7906fbf83 
d1b8bebf20644e27b69e194b644d1154 - - -] 10.1.31.142 - - [04/Mar/2019 11:54:28] 
"GET 
/v2.0/floatingips.json?fixed_ip_address=10.62.60.8_id=171c329c-e945-4bf4-bc05-82177a776c72
 HTTP/1.1" 200 217 0.009857
  2019-03-04 11:54:28.801 17152 INFO neutron.wsgi 
[req-44ab23c8-84dd-4176-865b-c47a1bc288c0 bee9b87b7aa24677b3c536f7906fbf83 
d1b8bebf20644e27b69e194b644d1154 - - -] 10.1.31.142 - - [04/Mar/2019 11:54:28] 
"GET /v2.0/subnets.json?id=83a71783-dcc3-4d8a-8560-89972e03bab5 HTTP/1.1" 200 
863 0.049811
  2019-03-04 11:54:28.837 17152 INFO neutron.wsgi 
[req-bb98b71c-f5ca-4be1-bff9-bd7795c38ffa bee9b87b7aa24677b3c536f7906fbf83 
d1b8bebf20644e27b69e194b644d1154 - - -] 10.1.31.142 - - [04/Mar/2019 11:54:28] 
"GET 
/v2.0/ports.json?network_id=bab474bb-88b7-490f-8c5c-19b08d758a02_owner=network%3Adhcp
 HTTP/1.1" 200 1119 0.034647
  2019-03-04 11:54:28.849 17152 INFO neutron.wsgi 
[req-ce60d323-075b-4b30-9104-e708c22c44b8 bee9b87b7aa24677b3c536f7906fbf83 
d1b8bebf20644e27b69e194b644d1154 - - -] 10.1.31.142 - - [04/Mar/2019 11:54:28] 
"GET 
/v2.0/floatingips.json?fixed_ip_address=10.62.61.9_id=5864f529-8cb6-4d30-b3a3-c76be8750d3a
 HTTP/1.1" 200 217 0.009498
  2019-03-04 11:54:28.901 17152 INFO neutron.wsgi 
[req-384ae67c-c463-4815-be96-8b1710d88d21 bee9b87b7aa24677b3c536f7906fbf83 
d1b8bebf20644e27b69e194b644d1154 - - -] 10.1.31.142 - - [04/Mar/2019 11:54:28] 
"GET /v2.0/subnets.json?id=95e5bc60-0309-401c-87a9-b6596957f8a6 HTTP/1.1" 200 
863 0.050315
  2019-03-04 11:54:28.940 17152 INFO neutron.wsgi 
[req-6fb418f2-1640-4daf-ad98-a9a8e870f6bd bee9b87b7aa24677b3c536f7906fbf83 
d1b8bebf20644e27b69e194b644d1154 - - -] 10.1.31.142 - - [04/Mar/2019 11:54:28] 
"GET 
/v2.0/ports.json?network_id=3950ea82-c375-4fcc-bc19-9e01d92681fd_owner=network%3Adhcp
 HTTP/1.1" 200 1119 0.037263
  2019-03-04 11:54:28.951 17152 INFO neutron.wsgi 
[req-5e7b3c25-3c32-4b97-a595-3b548daec6ae bee9b87b7aa24677b3c536f7906fbf83 
d1b8bebf20644e27b69e194b644d1154 - - -] 10.1.31.142 - - [04/Mar/2019 11:54:28] 
"GET 
/v2.0/floatingips.json?fixed_ip_address=10.62.63.5_id=983c2442-ecab-49aa-988a-775fc70a9d10
 HTTP/1.1" 200 217 0.009744
  2019-03-04 11:54:29.000 17152 INFO neutron.wsgi 
[req-3e7fd50b-9fb1-4c72-9072-a0aa027f547c bee9b87b7aa24677b3c536f7906fbf83 
d1b8bebf20644e27b69e194b644d1154 - - -] 10.1.31.142 - - [04/Mar/2019 11:54:29] 
"GET /v2.0/subnets.json?id=24c11aff-9785-4ed2-b33c-b98b037c24dc HTTP/1.1" 200 
863 0.046999
  2019-03-04 11:54:29.037 17152 INFO neutron.wsgi 
[req-2d71b123-b2aa-47d6-95db-bb254032fd7f bee9b87b7aa24677b3c536f7906fbf83 
d1b8bebf20644e27b69e194b644d1154 - - -] 10.1.31.142 - - [04/Mar/2019 11:54:29] 
"GET 
/v2.0/ports.json?network_id=bcfd0431-aec1-42b0-98ad-b7155c836b97_owner=network%3Adhcp
 HTTP/1.1" 200 1118 0.035438
  
  [root@newton-1 neutron]# tail -f dhcp-agent.log
  2019-03-04 11:55:58.955 14992 ERROR neutron.agent.dhcp.agent 'value': 
value}])
  2019-03-04 11:55:58.955 14992 ERROR 

[Yahoo-eng-team] [Bug 1818544] [NEW] openstack instance not able to scp files to outside network

2019-03-04 Thread Sohny
Public bug reported:

I have an OpenStack setup with the Newton release installed on CentOS
7.3. I am able to successfully create VMs and associate floating IPs. I
can successfully SSH into and out of a VM from the external network, but
I am not able to SCP any file out of a VM instance. SCP into the VM
instance is fine, and SCP between VM instances is also fine.

Below is a 70 KB file whose transfer is still going on after 2 hours.
At the destination only 10 KB has been copied.

[root@test-server ~]# scp  /var/cache/jenkins/war/WEB-
INF/lib/remoting-3.29.jar
dscadmin@1.20.28.146:/home/dscadmin/jenkins/remoting-3.29.jar
100%  771KB  35.0KB/s   00:22[


System SPECs:

[root@newton-1 neutron]# openstack --version
openstack 3.2.1

[root@newton-1 neutron]# rpm -qa|grep neutron
openstack-neutron-common-9.4.1-1.el7.noarch
openstack-neutron-openvswitch-9.4.1-1.el7.noarch
puppet-neutron-9.5.0-1.el7.noarch
openstack-neutron-ml2-9.4.1-1.el7.noarch
python-neutron-9.4.1-1.el7.noarch
openstack-neutron-metering-agent-9.4.1-1.el7.noarch
python-neutron-lib-0.4.0-1.el7.noarch
openstack-neutron-9.4.1-1.el7.noarch
python2-neutronclient-6.0.0-2.el7.noarch

[root@newton-1 neutron]# uname -r
3.10.0-514.26.2.el7.x86_64


Following are some log files i tracked for this at /var/log/neutron

[root@newton-1 neutron]# tail -f server.log
2019-03-04 11:54:28.737 17152 INFO neutron.wsgi 
[req-de76d2a0-72c1-4d90-abf9-42e03aa9c76c bee9b87b7aa24677b3c536f7906fbf83 
d1b8bebf20644e27b69e194b644d1154 - - -] 10.1.31.142 - - [04/Mar/2019 11:54:28] 
"GET 
/v2.0/ports.json?network_id=113e2ce7-475b-43ea-9765-7cce7565e639_owner=network%3Adhcp
 HTTP/1.1" 200 1119 0.035162
2019-03-04 11:54:28.749 17152 INFO neutron.wsgi 
[req-af4b081a-c900-4cd4-ad34-f64d19b57048 bee9b87b7aa24677b3c536f7906fbf83 
d1b8bebf20644e27b69e194b644d1154 - - -] 10.1.31.142 - - [04/Mar/2019 11:54:28] 
"GET 
/v2.0/floatingips.json?fixed_ip_address=10.62.60.8_id=171c329c-e945-4bf4-bc05-82177a776c72
 HTTP/1.1" 200 217 0.009857
2019-03-04 11:54:28.801 17152 INFO neutron.wsgi 
[req-44ab23c8-84dd-4176-865b-c47a1bc288c0 bee9b87b7aa24677b3c536f7906fbf83 
d1b8bebf20644e27b69e194b644d1154 - - -] 10.1.31.142 - - [04/Mar/2019 11:54:28] 
"GET /v2.0/subnets.json?id=83a71783-dcc3-4d8a-8560-89972e03bab5 HTTP/1.1" 200 
863 0.049811
2019-03-04 11:54:28.837 17152 INFO neutron.wsgi 
[req-bb98b71c-f5ca-4be1-bff9-bd7795c38ffa bee9b87b7aa24677b3c536f7906fbf83 
d1b8bebf20644e27b69e194b644d1154 - - -] 10.1.31.142 - - [04/Mar/2019 11:54:28] 
"GET 
/v2.0/ports.json?network_id=bab474bb-88b7-490f-8c5c-19b08d758a02_owner=network%3Adhcp
 HTTP/1.1" 200 1119 0.034647
2019-03-04 11:54:28.849 17152 INFO neutron.wsgi 
[req-ce60d323-075b-4b30-9104-e708c22c44b8 bee9b87b7aa24677b3c536f7906fbf83 
d1b8bebf20644e27b69e194b644d1154 - - -] 10.1.31.142 - - [04/Mar/2019 11:54:28] 
"GET 
/v2.0/floatingips.json?fixed_ip_address=10.62.61.9_id=5864f529-8cb6-4d30-b3a3-c76be8750d3a
 HTTP/1.1" 200 217 0.009498
2019-03-04 11:54:28.901 17152 INFO neutron.wsgi 
[req-384ae67c-c463-4815-be96-8b1710d88d21 bee9b87b7aa24677b3c536f7906fbf83 
d1b8bebf20644e27b69e194b644d1154 - - -] 10.1.31.142 - - [04/Mar/2019 11:54:28] 
"GET /v2.0/subnets.json?id=95e5bc60-0309-401c-87a9-b6596957f8a6 HTTP/1.1" 200 
863 0.050315
2019-03-04 11:54:28.940 17152 INFO neutron.wsgi 
[req-6fb418f2-1640-4daf-ad98-a9a8e870f6bd bee9b87b7aa24677b3c536f7906fbf83 
d1b8bebf20644e27b69e194b644d1154 - - -] 10.1.31.142 - - [04/Mar/2019 11:54:28] 
"GET 
/v2.0/ports.json?network_id=3950ea82-c375-4fcc-bc19-9e01d92681fd_owner=network%3Adhcp
 HTTP/1.1" 200 1119 0.037263
2019-03-04 11:54:28.951 17152 INFO neutron.wsgi 
[req-5e7b3c25-3c32-4b97-a595-3b548daec6ae bee9b87b7aa24677b3c536f7906fbf83 
d1b8bebf20644e27b69e194b644d1154 - - -] 10.1.31.142 - - [04/Mar/2019 11:54:28] 
"GET 
/v2.0/floatingips.json?fixed_ip_address=10.62.63.5_id=983c2442-ecab-49aa-988a-775fc70a9d10
 HTTP/1.1" 200 217 0.009744
2019-03-04 11:54:29.000 17152 INFO neutron.wsgi 
[req-3e7fd50b-9fb1-4c72-9072-a0aa027f547c bee9b87b7aa24677b3c536f7906fbf83 
d1b8bebf20644e27b69e194b644d1154 - - -] 10.1.31.142 - - [04/Mar/2019 11:54:29] 
"GET /v2.0/subnets.json?id=24c11aff-9785-4ed2-b33c-b98b037c24dc HTTP/1.1" 200 
863 0.046999
2019-03-04 11:54:29.037 17152 INFO neutron.wsgi 
[req-2d71b123-b2aa-47d6-95db-bb254032fd7f bee9b87b7aa24677b3c536f7906fbf83 
d1b8bebf20644e27b69e194b644d1154 - - -] 10.1.31.142 - - [04/Mar/2019 11:54:29] 
"GET 
/v2.0/ports.json?network_id=bcfd0431-aec1-42b0-98ad-b7155c836b97_owner=network%3Adhcp
 HTTP/1.1" 200 1118 0.035438

[root@newton-1 neutron]# tail -f dhcp-agent.log
2019-03-04 11:55:58.955 14992 ERROR neutron.agent.dhcp.agent 'value': 
value}])
2019-03-04 11:55:58.955 14992 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 912, in 
execute
2019-03-04 11:55:58.955 14992 ERROR neutron.agent.dhcp.agent 
log_fail_as_error=log_fail_as_error, **kwargs)
2019-03-04 11:55:58.955 14992 ERROR neutron.agent.dhcp.agent   File 

[Yahoo-eng-team] [Bug 1809123] Re: OSError failure to read when creating multiple instances with NFS

2019-03-04 Thread Matt Riedemann
** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Changed in: nova/queens
   Status: New => Confirmed

** Changed in: nova/queens
   Importance: Undecided => Medium

** Changed in: nova/rocky
   Importance: Undecided => Medium

** Changed in: nova/rocky
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1809123

Title:
  OSError failure to read when creating multiple instances with NFS

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  Confirmed
Status in OpenStack Compute (nova) rocky series:
  Confirmed

Bug description:
  There is a race condition when launching multiple instances over NFS
  simultaneously that can end up causing the os.utime function to fail
  when updating the mtime for the image base:

  2018-12-15 14:22:38.740 7 INFO nova.virt.libvirt.driver 
[req-d33edf35-733b-4591-831c-666cd159cee1 8965b22a11c44875a90fe88f50769a5a 
b9644067db0d44789e19d9d032287ada - default default] [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552] Creating image
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager 
[req-d33edf35-733b-4591-831c-666cd159cee1 8965b22a11c44875a90fe88f50769a5a 
b9644067db0d44789e19d9d032287ada - default default] [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552] Instance failed to spawn: OSError: [Errno 
13] Permission denied
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552] Traceback (most recent call last):
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2252, in 
_build_resources
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552] yield resources
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2032, in 
_build_and_run_instance
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552] block_device_info=block_device_info)
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3091, in 
spawn
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552] block_device_info=block_device_info)
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3469, in 
_create_image
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552] fallback_from_host)
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3560, in 
_create_and_inject_local_root
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552] instance, size, fallback_from_host)
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 7634, in 
_try_fetch_image_cache
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552] size=size)
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 243, 
in cache
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552] *args, **kwargs)
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 601, 
in create_image
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552] nova.privsep.path.utime(base)
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552]   File 
"/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 207, in 
_wrap
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552] return self.channel.remote_call(name, 
args, kwargs)
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552]   File 

[Yahoo-eng-team] [Bug 1809123] Re: OSError failure to read when creating multiple instances with NFS

2019-03-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/625741
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=525631d8dc058910728e55def616358b0e7f2f69
Submitter: Zuul
Branch: master

commit 525631d8dc058910728e55def616358b0e7f2f69
Author: Tim Rozet 
Date:   Mon Dec 17 19:44:54 2018 -0500

Fixes race condition with privsep utime

There is a race condition that occurs over NFS when multiple instances
are being created, where utime fails due to some other process
modifying the file path. This patch ensures the path exists and is
readable before attempting to modify it with utime.

Closes-Bug: 1809123

Change-Id: Id68aa27a8ab08d9c00655e5ed6b48d194aa8e6f6
Signed-off-by: Tim Rozet 
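
A minimal sketch of the guard the commit message describes, using plain
os calls (an illustration; the real change goes through nova.privsep):

```python
import os

def refresh_base_mtime(base: str) -> None:
    # Only touch the cached base image once it exists and is readable;
    # over NFS another host may still be creating or locking the file.
    if os.path.exists(base) and os.access(base, os.R_OK):
        os.utime(base, None)
```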


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1809123

Title:
  OSError failure to read when creating multiple instances with NFS

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  There is a race condition when launching multiple instances over NFS
  simultaneously that can end up causing the os.utime function to fail
  when updating the mtime for the image base:

  2018-12-15 14:22:38.740 7 INFO nova.virt.libvirt.driver 
[req-d33edf35-733b-4591-831c-666cd159cee1 8965b22a11c44875a90fe88f50769a5a 
b9644067db0d44789e19d9d032287ada - default default] [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552] Creating image
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager 
[req-d33edf35-733b-4591-831c-666cd159cee1 8965b22a11c44875a90fe88f50769a5a 
b9644067db0d44789e19d9d032287ada - default default] [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552] Instance failed to spawn: OSError: [Errno 
13] Permission denied
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552] Traceback (most recent call last):
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2252, in 
_build_resources
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552] yield resources
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2032, in 
_build_and_run_instance
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552] block_device_info=block_device_info)
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3091, in 
spawn
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552] block_device_info=block_device_info)
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3469, in 
_create_image
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552] fallback_from_host)
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3560, in 
_create_and_inject_local_root
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552] instance, size, fallback_from_host)
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 7634, in 
_try_fetch_image_cache
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552] size=size)
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 243, 
in cache
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552] *args, **kwargs)
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 601, 
in create_image
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552] nova.privsep.path.utime(base)
  2018-12-15 14:22:38.747 7 ERROR nova.compute.manager [instance: 
6fec5d88-09ab-4ecc-815d-c08c298fe552]   File 
"/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 207, in 
_wrap
  2018-12-15 14:22:38.747 7 ERROR 

[Yahoo-eng-team] [Bug 1816859] Re: Server concepts in nova - automatic resize confirm is wrong in docs

2019-03-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/638357
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=5fdcb2ca4913f9813b50162131188a7520b41bd6
Submitter: Zuul
Branch: master

commit 5fdcb2ca4913f9813b50162131188a7520b41bd6
Author: Takashi NATSUME 
Date:   Thu Feb 21 15:45:34 2019 +0900

Remove wrong description for auto resize confirm

Remove wrong description for auto resize confirm
in the API guide.
Move a description of a configuration option
'resize_confirm_window' from the API guide
to the admin configuration guide.
Add a description of automatic resize confirm
in the user guide.

Change-Id: If739877422d5743e221c57be53ed877475db0647
Closes-Bug: #1816859


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1816859

Title:
  Server concepts in nova - automatic resize confirm is wrong in docs

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  - [x] This doc is inaccurate in this way:

  The section on resize:

  https://developer.openstack.org/api-guide/compute/server_concepts.html
  #server-actions

  says:

  "All resizes are automatically confirmed after 24 hours if you do not
  confirm or revert them."

  This is not true, because automatic confirmation is based on the
  "resize_confirm_window" configuration option, which is disabled by
  default:

  
https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.resize_confirm_window

  Since the guide already mentions this configuration option it's
  probably best to just remove the sentence about it.

  While we're fixing this, we should probably also avoid calling out the
  config option specifically since it's up to the operator / cloud and
  not the end user about whether or not the resized server is
  automatically confirmed and how long the window is. So we could just
  say, "The resized server may be automatically confirmed based on the
  administrator's configuration of the deployment".

  The place to mention automatically confirming a resized server should
  live in the admin docs:

  https://docs.openstack.org/nova/latest/admin/configuration/resize.html
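
  For illustration, the operator-side knob under discussion as it would
  appear in nova.conf; the value 300 is an example, and per the config
  reference above the option takes seconds, with the default 0 leaving
  auto-confirmation disabled:

    [DEFAULT]
    resize_confirm_window = 300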

  ---
  Release: 18.1.0.dev1308 on 2019-02-20 16:34:47.409737
  SHA: af78b13c24d4abf393d17ac57e9135204ef12b73
  Source: 
https://git.openstack.org/cgit/openstack/nova/tree/api-guide/source/server_concepts.rst
  URL: https://developer.openstack.org/api-guide/compute/server_concepts.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1816859/+subscriptions



[Yahoo-eng-team] [Bug 1817961] Re: populate_queued_for_delete queries the cell database for instances even if there are no instance mappings to migrate in that cell

2019-03-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/639840
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=47061e699b9dbf6fdeea572a5abeaa72e499ec87
Submitter: Zuul
Branch: master

commit 47061e699b9dbf6fdeea572a5abeaa72e499ec87
Author: Matt Riedemann 
Date:   Wed Feb 27 16:24:24 2019 -0500

Optimize populate_queued_for_delete online data migration

The data migration was needlessly querying the cell database
for instances even if there were no instance mappings in that
database that needed to be migrated. This simply continues to
the next cell if the instance mappings in the current cell are
migrated.

While we're in here, the joinedload on 'cell_mapping' can be
removed since it's not used.

Closes-Bug: #1817961

Change-Id: Idf35ed9d57945bc80fbd47393b7de076330160e6


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1817961

Title:
  populate_queued_for_delete queries the cell database for instances
  even if there are no instance mappings to migrate in that cell

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) rocky series:
  New

Bug description:
  If we get here:

  
https://github.com/openstack/nova/blob/eb93d0cffd11fcfca97b3d4679a0043142a5d998/nova/objects/instance_mapping.py#L169

  And the results are empty we can move on to the next cell without
  querying the cell database since we have nothing to migrate.

  Also, the joinedload on cell_mapping here:

  
https://github.com/openstack/nova/blob/eb93d0cffd11fcfca97b3d4679a0043142a5d998/nova/objects/instance_mapping.py#L164

  Is not used so could also be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1817961/+subscriptions



[Yahoo-eng-team] [Bug 1792503] Re: allocation candidates "?member_of=" doesn't work with nested providers

2019-03-04 Thread Chris Dent
** Changed in: nova/rocky
   Status: In Progress => Won't Fix

** Changed in: nova
   Status: In Progress => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1792503

Title:
  allocation candidates "?member_of=" doesn't work with nested providers

Status in OpenStack Compute (nova):
  Fix Committed
Status in OpenStack Compute (nova) rocky series:
  Won't Fix

Bug description:
  "GET /allocation_candidates" now supports "member_of" parameter.
  With nested providers present, this should work with the following 
constraints.

  -
  (a)  With "member_of" qparam, aggregates on the root should span on the whole 
tree

  If a root provider is in an aggregate that has been specified by the
  "member_of" qparam, the resource providers under that root can appear in
  allocation candidates even if the root itself is absent.

  (b) Without "member_of" qparam, sharing resource provider should be
  shared with the whole tree

  If a sharing provider is in the same aggregate as a resource provider
  (rpA), and "member_of" has not been specified by the user, the sharing
  provider can appear in allocation candidates with any of the resource
  providers in the same tree as rpA.

  (c) With the "member_of" qparam, the reach of a sharing resource
  provider should shrink to the resource providers "under the specified
  aggregates" in a tree.

  Here, whether an rp is "under the specified aggregates" is determined by
  the constraint in (a): not only rps that belong to the aggregates
  directly are "under the aggregates", but so are rps whose root is under
  the aggregates.
  -

  As of the Stein PTG (Sep. 13th, 2018), these constraints are broken in
  that when placement picks up allocation candidates, the aggregates of
  nested providers are assumed to be the same as those of their root
  providers. In other words, placement ignores the aggregates of the
  nested provider itself. This can result in missing allocation candidates
  when an aggregate that is on a nested provider, but not on its root, is
  specified in the `member_of` query parameter.
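
  As context, the kind of request affected looks like the following sketch
  (endpoint, token and aggregate uuid are placeholders; if memory serves,
  "member_of" on GET /allocation_candidates arrived in microversion 1.21):

    import requests

    PLACEMENT = "http://placement.example.com"      # assumed endpoint
    HEADERS = {
        "X-Auth-Token": "ADMIN_TOKEN",              # placeholder
        "OpenStack-API-Version": "placement 1.21",
    }

    resp = requests.get(
        PLACEMENT + "/allocation_candidates",
        params={"resources": "SRIOV_NET_VF:1",
                "member_of": "AGG_UUID"},           # placeholder aggregate
        headers=HEADERS,
    )
    # With the bug, candidates backed by a nested provider (e.g. a VF
    # device rp) that is in AGG_UUID while its root compute node is not
    # could be missing from resp.json()["allocation_requests"].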

  This bug is demonstrated by a test case which will be submitted shortly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1792503/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1809401] Re: os-resource-classes: Could not satisfy constraints for 'os-resource-classes': installation from path or url cannot be constrained to a version

2019-03-04 Thread Chris Dent
** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1809401

Title:
  os-resource-classes: Could not satisfy constraints for 'os-resource-
  classes': installation from path or url cannot be constrained to a
  version

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  http://logs.openstack.org/85/624885/2/check/openstack-tox-pep8/a18a925
  /job-output.txt.gz#_2018-12-21_01_42_38_612849

  
  2018-12-21 01:42:19.753292 | ubuntu-xenial | pep8 create: 
/home/zuul/src/git.openstack.org/openstack/os-resource-classes/.tox/pep8
  2018-12-21 01:42:22.895392 | ubuntu-xenial | pep8 installdeps: 
-r/home/zuul/src/git.openstack.org/openstack/os-resource-classes/test-requirements.txt
  2018-12-21 01:42:35.891695 | ubuntu-xenial | pep8 develop-inst: 
/home/zuul/src/git.openstack.org/openstack/os-resource-classes
  2018-12-21 01:42:38.606499 | ubuntu-xenial | ERROR: invocation failed (exit 
code 1), logfile: 
/home/zuul/src/git.openstack.org/openstack/os-resource-classes/.tox/pep8/log/pep8-2.log
  2018-12-21 01:42:38.606661 | ubuntu-xenial | ERROR: actionid: pep8
  2018-12-21 01:42:38.606722 | ubuntu-xenial | msg: developpkg
  2018-12-21 01:42:38.607197 | ubuntu-xenial | cmdargs: 
'/home/zuul/src/git.openstack.org/openstack/os-resource-classes/.tox/pep8/bin/pip
 install 
-c/home/zuul/src/git.openstack.org/openstack/requirements/upper-constraints.txt 
--exists-action w -e 
/home/zuul/src/git.openstack.org/openstack/os-resource-classes'
  2018-12-21 01:42:38.607230 | ubuntu-xenial |
  2018-12-21 01:42:38.607406 | ubuntu-xenial | Ignoring mypy-extensions: 
markers 'python_version == "3.4"' don't match your environment
  2018-12-21 01:42:38.607586 | ubuntu-xenial | Ignoring mypy-extensions: 
markers 'python_version == "3.5"' don't match your environment
  2018-12-21 01:42:38.607761 | ubuntu-xenial | Ignoring mypy-extensions: 
markers 'python_version == "3.6"' don't match your environment
  2018-12-21 01:42:38.607923 | ubuntu-xenial | Ignoring asyncio: markers 
'python_version == "3.4"' don't match your environment
  2018-12-21 01:42:38.608086 | ubuntu-xenial | Ignoring asyncio: markers 
'python_version == "3.5"' don't match your environment
  2018-12-21 01:42:38.608248 | ubuntu-xenial | Ignoring asyncio: markers 
'python_version == "3.6"' don't match your environment
  2018-12-21 01:42:38.608418 | ubuntu-xenial | Ignoring dnspython3: markers 
'python_version == "3.4"' don't match your environment
  2018-12-21 01:42:38.608585 | ubuntu-xenial | Ignoring dnspython3: markers 
'python_version == "3.5"' don't match your environment
  2018-12-21 01:42:38.608751 | ubuntu-xenial | Ignoring dnspython3: markers 
'python_version == "3.6"' don't match your environment
  2018-12-21 01:42:38.608908 | ubuntu-xenial | Ignoring mypy: markers 
'python_version == "3.4"' don't match your environment
  2018-12-21 01:42:38.609093 | ubuntu-xenial | Ignoring mypy: markers 
'python_version == "3.5"' don't match your environment
  2018-12-21 01:42:38.609255 | ubuntu-xenial | Ignoring mypy: markers 
'python_version == "3.6"' don't match your environment
  2018-12-21 01:42:38.609417 | ubuntu-xenial | Ignoring jeepney: markers 
'python_version == "3.4"' don't match your environment
  2018-12-21 01:42:38.609577 | ubuntu-xenial | Ignoring jeepney: markers 
'python_version == "3.5"' don't match your environment
  2018-12-21 01:42:38.609739 | ubuntu-xenial | Ignoring jeepney: markers 
'python_version == "3.6"' don't match your environment
  2018-12-21 01:42:38.609910 | ubuntu-xenial | Ignoring SecretStorage: markers 
'python_version == "3.4"' don't match your environment
  2018-12-21 01:42:38.610081 | ubuntu-xenial | Ignoring SecretStorage: markers 
'python_version == "3.5"' don't match your environment
  2018-12-21 01:42:38.610253 | ubuntu-xenial | Ignoring SecretStorage: markers 
'python_version == "3.6"' don't match your environment
  2018-12-21 01:42:38.610413 | ubuntu-xenial | Ignoring Django: markers 
'python_version == "3.4"' don't match your environment
  2018-12-21 01:42:38.610573 | ubuntu-xenial | Ignoring Django: markers 
'python_version == "3.5"' don't match your environment
  2018-12-21 01:42:38.610733 | ubuntu-xenial | Ignoring Django: markers 
'python_version == "3.6"' don't match your environment
  2018-12-21 01:42:38.610889 | ubuntu-xenial | Ignoring cmd2: markers 
'python_version == "3.4"' don't match your environment
  2018-12-21 01:42:38.611044 | ubuntu-xenial | Ignoring cmd2: markers 
'python_version == "3.5"' don't match your environment
  2018-12-21 01:42:38.611200 | ubuntu-xenial | Ignoring cmd2: markers 
'python_version == "3.6"' don't match your environment
  2018-12-21 01:42:38.611364 | ubuntu-xenial | Ignoring typed-ast: markers 
'python_version == "3.4"' don't match your environment
  2018-12-21 01:42:38.611528 | ubuntu-xenial 
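
  For reference, pip refuses to apply a version constraint to a project
  installed from a path or URL (here, the tox develop-install of the
  repository itself while os-resource-classes is pinned in
  upper-constraints.txt). A common workaround in such gates is to filter
  the package under test out of the constraints file first; a rough,
  purely illustrative sketch of that step, not necessarily the fix that
  merged:

    def filter_constraints(text, package):
        kept = []
        for line in text.splitlines():
            # Constraint lines look like "name===1.2.3"; drop the line
            # naming the package about to be installed from source.
            name = line.split("===")[0].split("==")[0].strip().lower()
            if name != package.lower():
                kept.append(line)
        return "\n".join(kept)

    print(filter_constraints("os-resource-classes===0.1.0\nrequests===2.21.0",
                             "os-resource-classes"))
    # -> requests===2.21.0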

[Yahoo-eng-team] [Bug 1818508] [NEW] Image source failure

2019-03-04 Thread Marek Lyčka
Public bug reported:

When launching instances through the NG dialog, Images can't be selected
as a source.

To reproduce:
1) Open the launch instance dialog
2) Open the "Source" tab/step
3) Select "Image" in the "Select Boot Source" dropdown
=> No options for Images are displayed in the bottom portion of the dialog
=> The browser JS console displays:

Error: [ngRepeat:dupes] Duplicates in a repeater are not allowed. Use
'track by' expression to specify unique keys. Repeater: row in
ctrl.tableData.displayedAvailable track by row.id...

** Affects: horizon
 Importance: Undecided
 Assignee: Marek Lyčka (mareklycka)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Marek Lyčka (mareklycka)

** Changed in: horizon
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1818508

Title:
  Image source failure

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When launching instances through the NG dialog, Images can't be
  selected as a source.

  To reproduce:
  1) Open the launch instance dialog
  2) Open the "Source" tab/step
  3) Select "Image" in the "Select Boot Source" dropdown
  => No options for Images are displayed in the bottom portion of the dialog
  => The browser JS console displays:

  Error: [ngRepeat:dupes] Duplicates in a repeater are not allowed. Use
  'track by' expression to specify unique keys. Repeater: row in
  ctrl.tableData.displayedAvailable track by row.id...

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1818508/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818498] [NEW] Placement aggregate creation continues to be unstable under very high load

2019-03-04 Thread Chris Dent
Public bug reported:

See: http://logs.openstack.org/89/639889/3/check/placement-
perfload/e56f0a0/logs/placement-api.log (or any other recent perfload
run) where there are multiple errors when trying to create aggregates.

Various bits of work have been done to try to fix that up, but
apparently none of them have fully worked.

Tetsuro had some ideas on using better transaction defaults in mysql's
configs, but I was reluctant to do that because presumably a lot of
people install and use the defaults and ideally our solution would "just
work" with the defaults.

Perhaps I'm completely wrong about that. In a very high concurrency
situation (which is what's happening in the perfload job) tweaks of the
db may be required.

In any case, this probably needs more attention: whatever the solution,
it shouldn't be this easy to trigger 500s. And the solution is not simply
to turn them into 4xx responses; we want the problem not to happen at all.
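
For what it's worth, one generic mitigation for this class of failure,
independent of MySQL configuration, is to retry the conflicting
transaction with jittered backoff. A sketch of the pattern only
(ConflictError and set_aggregates are placeholders; this is not what
placement does today):

    import random
    import time

    class ConflictError(Exception):
        """Placeholder for the error raised when concurrent writes collide."""

    def with_retries(fn, attempts=5, base_delay=0.05):
        for attempt in range(attempts):
            try:
                return fn()
            except ConflictError:
                if attempt == attempts - 1:
                    raise
                # Jittered exponential backoff so colliding writers do
                # not retry in lockstep and collide again.
                time.sleep(base_delay * (2 ** attempt) * random.random())

    # Hypothetical usage around the aggregate write:
    # with_retries(lambda: set_aggregates(resource_provider, aggregates))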

** Affects: nova
 Importance: Low
 Status: New


** Tags: placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1818498

Title:
  Placement aggregate creation continues to be unstable under very high
  load

Status in OpenStack Compute (nova):
  New

Bug description:
  See: http://logs.openstack.org/89/639889/3/check/placement-
  perfload/e56f0a0/logs/placement-api.log (or any other recent perfload
  run) where there are multiple errors when trying to create aggregates.

  Various bits of work have been done to try to fix that up, but
  apparently none of them have fully worked.

  Tetsuro had some ideas on using better transaction defaults in mysql's
  configs, but I was reluctant to do that because presumably a lot of
  people install and use the defaults and ideally our solution would
  "just work" with the defaults.

  Perhaps I'm completely wrong about that. In a very high concurrency
  situation (which is what's happening in the perfload job) tweaks of
  the db may be required.

  In any case, this probably needs more attention: whatever the solution,
  it shouldn't be this easy to trigger 500s. And the solution is not
  simply to turn them into 4xx responses; we want the problem not to
  happen at all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1818498/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1816230] Re: allocations by consumer and resource provider use wrong timestamp

2019-03-04 Thread Takashi NATSUME
The fix has been merged.

https://review.openstack.org/#/c/638344/

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1816230

Title:
  allocations by consumer and resource provider use wrong timestamp

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When listing allocations by resource provider or consumer uuid, the
  updated_at and created_at fields in the database are not loaded into
  the object, so they default to now when those times are used to
  generate last-modified headers in HTTP responses.

  This isn't a huge problem because we tend not to care about those
  times (at the moment), but it would be a useful thing to clean up.

  The issues are in AllocationList.get_all_by_resource_provider and
  .get_all_by_consumer_id
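
  Illustratively, with plain dicts rather than the actual versioned
  objects, the shape of the cleanup is to carry the row timestamps through
  when loading, so the last-modified computation never has to fall back to
  the current time:

    import datetime

    def object_from_row(row):
        # The point of the bug: created_at/updated_at must be copied from
        # the DB row into the loaded object, otherwise they stay unset.
        return {
            "consumer_id": row["consumer_id"],
            "used": row["used"],
            "created_at": row["created_at"],
            "updated_at": row["updated_at"],
        }

    def last_modified(objects):
        # Prefer the real timestamps; fall back to "now" only when absent.
        times = [o.get("updated_at") or o.get("created_at") for o in objects]
        times = [t for t in times if t is not None]
        return max(times) if times else datetime.datetime.utcnow()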

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1816230/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp