[Yahoo-eng-team] [Bug 1828641] Re: Xenial, Bionic, Cosmic revert ubuntu-advantage-tools config module changes from tip

2019-05-10 Thread Chad Smith
** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Cosmic)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Bionic)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1828641

Title:
  Xenial, Bionic, Cosmic revert ubuntu-advantage-tools config module
  changes from tip

Status in cloud-init:
  New
Status in cloud-init package in Ubuntu:
  New
Status in cloud-init source package in Xenial:
  New
Status in cloud-init source package in Bionic:
  New
Status in cloud-init source package in Cosmic:
  New

Bug description:
  Xenial, Bionic and Cosmic are currently running earlier versions of
  ubuntu-advantage-tools (10 for Xenial and 17 for Bionic/Cosmic).

  ubuntu-advantage-tools 19 and later is a completely rewritten CLI that
  is backwards incompatible.

  Until ubuntu-advantage-tools >= 19.1 is released into Xenial, Bionic and
  Cosmic, carry a Debian patch file that reverts the upstream cloud-init
  config module changes for cc_ubuntu_advantage.py.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1828641/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1828641] [NEW] Xenial, Bionic, Cosmic revert ubuntu-advantage-tools config module changes from tip

2019-05-10 Thread Chad Smith
Public bug reported:

Xenial, Bionic and Cosmic are currently running earlier versions of
ubuntu-advantage-tools (10 for Xenial and 17 for Bionic/Cosmic).

ubuntu-advantage-tools 19 and later is a completely rewritten CLI that
is backwards incompatible.

Until ubuntu-advantage-tools >= 19.1 is released into Xenial, Bionic and
Cosmic, carry a Debian patch file that reverts the upstream cloud-init config
module changes for cc_ubuntu_advantage.py.

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Summary changed:

- xenial, Bionic, Cosmic revert ubuntu-advantage-tools config module changes 
from tip
+ Xenial, Bionic, Cosmic revert ubuntu-advantage-tools config module changes 
from tip

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1828641

Title:
  Xenial, Bionic, Cosmic revert ubuntu-advantage-tools config module
  changes from tip

Status in cloud-init:
  New

Bug description:
  Xenial, Bionic and Cosmic are currently running earlier versions of
  ubuntu-advantage-tools (10 for Xenial and 17 for Bionic/Cosmic).

  ubuntu-advantage-tools 19 and later is a completely rewritten CLI that
  is backwards incompatible.

  Until ubuntu-advantage-tools >= 19.1 is released into Xenial, Bionic and
  Cosmic, carry a Debian patch file that reverts the upstream cloud-init
  config module changes for cc_ubuntu_advantage.py.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1828641/+subscriptions



[Yahoo-eng-team] [Bug 1824911] Re: [scale issue] the bottleneck lock will multiply increase processing time of agent resources

2019-05-10 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/656164
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=0f471a47c073eb0cf2ed68c30482e1ae71ff6927
Submitter: Zuul
Branch:master

commit 0f471a47c073eb0cf2ed68c30482e1ae71ff6927
Author: LIU Yulong 
Date:   Fri Apr 26 15:42:32 2019 +0800

Async notify neutron-server for HA states

The RPC notifier method can sometimes be time-consuming;
this causes other resources being processed in parallel to
fail to send their notifications in time. This patch makes
the notification asynchronous.

Closes-Bug: #1824911
Change-Id: I3f555a0c78fbc02d8214f12b62c37d140bc71da1
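
A minimal illustration of the pattern the commit message describes (not the
neutron implementation itself): notifications are queued and delivered from a
background worker thread, so the caller never blocks on the slow RPC call.
The send_func argument is a stand-in for the real notifier method.

    import queue
    import threading

    class AsyncNotifier(object):
        """Queue HA-state notifications and deliver them off the caller's thread."""

        def __init__(self, send_func):
            self._send = send_func                    # the (possibly slow) RPC call
            self._queue = queue.Queue()
            threading.Thread(target=self._run, daemon=True).start()

        def notify(self, router_id, state):
            # Returns immediately; delivery happens on the worker thread.
            self._queue.put((router_id, state))

        def _run(self):
            while True:
                router_id, state = self._queue.get()
                try:
                    self._send(router_id, state)
                except Exception:
                    pass   # a real implementation would log and retry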


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1824911

Title:
  [scale issue] the bottleneck lock will multiply increase processing
  time of agent resources

Status in neutron:
  Fix Released

Bug description:
  Env: stable/queens
  CentOS 7.5
  kernel-3.10.0-862.11.6.el7.x86_64

  There are many bottleneck locks in the agent extensions. For instance, the L3
agent extensions now have the locks 'qos-fip' [1], 'qos-gateway-ip' [2],
'port-forwarding' [3] and 'log-port' [4]. For the L2 agent, it is the 'qos-port' lock [5].
  When a large number of resources need to be processed in parallel by these
agent extensions, the processing time can grow steadily. Take the L3 agent
extensions as an example: first, the longer one router takes to process, the
longer every other router waits on the lock. Then, if every router has a large
set of resources to handle in the extension (for example, floating IP QoS tc
rules), each router holds the lock for a long time and the waiting time for all
the others increases further. The result is a multiplicative growth in total
processing time (a minimal sketch of this effect follows the reference links
below).

  [1] 
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/extensions/qos/fip.py#L283
  [2] 
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/extensions/qos/gateway_ip.py#L84
  [3] 
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/extensions/port_forwarding.py#L426
  [4] 
https://github.com/openstack/neutron/blob/master/neutron/services/logapi/agent/l3/base.py#L96
  [5] 
https://github.com/openstack/neutron/blob/master/neutron/agent/l2/extensions/qos.py#L241
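
  A minimal sketch (plain Python, not neutron code) of why one shared lock makes
  per-router processing time compound: with N routers each holding the lock for
  T seconds, the last router waits roughly (N-1)*T seconds before it can even
  start its own work.

    import threading
    import time

    resource_lock = threading.Lock()      # stands in for a lock such as 'qos-fip'
    waits = {}

    def process_router(router_id, hold_time):
        start = time.monotonic()
        with resource_lock:                # every router serializes here
            waits[router_id] = time.monotonic() - start
            time.sleep(hold_time)          # stands in for applying tc rules, etc.

    threads = [threading.Thread(target=process_router, args=(i, 0.2))
               for i in range(5)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # The unluckiest router waited ~0.8s for 0.2s of actual work; with hundreds
    # of routers the waiting time dominates completely.
    print(max(waits.values()))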

  Here are some logs from the L3 agent; you can see 'add_router' calls that
  hold the lock for 16.271s while another waits on the lock for 32.547s.

  2019-04-12 09:38:31.526 40627 DEBUG oslo_concurrency.lockutils [-] Lock 
"notifier-58a2b315-d411-4aa0-bb23-7c0da0b57a70" released by 
"neutron.notifiers.batch_notifier.synced_send" :: held 9.697s inner 
/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
  2019-04-12 09:38:31.526 40627 DEBUG oslo_concurrency.lockutils [-] Lock 
"notifier-58a2b315-d411-4aa0-bb23-7c0da0b57a70" acquired by 
"neutron.notifiers.batch_notifier.synced_send" :: waited 12.753s inner 
/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
  2019-04-12 09:38:35.435 40627 DEBUG oslo_concurrency.lockutils [-] Lock 
"notifier-58a2b315-d411-4aa0-bb23-7c0da0b57a70" released by 
"neutron.notifiers.batch_notifier.synced_send" :: held 3.909s inner 
/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
  2019-04-12 09:38:35.435 40627 DEBUG oslo_concurrency.lockutils [-] Lock 
"notifier-58a2b315-d411-4aa0-bb23-7c0da0b57a70" acquired by 
"neutron.notifiers.batch_notifier.synced_send" :: waited 16.216s inner 
/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
  2019-04-12 09:38:37.435 40627 DEBUG oslo_concurrency.lockutils [-] Lock 
"notifier-58a2b315-d411-4aa0-bb23-7c0da0b57a70" released by 
"neutron.notifiers.batch_notifier.synced_send" :: held 2.000s inner 
/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
  2019-04-12 09:38:37.436 40627 DEBUG oslo_concurrency.lockutils [-] Lock 
"notifier-58a2b315-d411-4aa0-bb23-7c0da0b57a70" acquired by 
"neutron.notifiers.batch_notifier.synced_send" :: waited 16.639s inner 
/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
  2019-04-12 09:38:39.436 40627 DEBUG oslo_concurrency.lockutils [-] Lock 
"notifier-58a2b315-d411-4aa0-bb23-7c0da0b57a70" released by 
"neutron.notifiers.batch_notifier.synced_send" :: held 2.000s inner 
/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
  2019-04-12 09:38:39.436 40627 DEBUG oslo_concurrency.lockutils [-] Lock 
"notifier-58a2b315-d411-4aa0-bb23-7c0da0b57a70" acquired by 
"neutron.notifiers.batch_notifier.synced_send" :: waited 16.376s inner 
/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
  2019-04-12 09:38:41.437 40627 DEBUG oslo_concurrency.lockutils [-] Lock 
"notifier-58a2b315-d411-4aa0-bb23-7c0da0b57a70" released by 
"neutron.notifiers.batch_notifier.synced_send" :: held 2.000s inner 

[Yahoo-eng-team] [Bug 1177924] Re: Use testr instead of nose as the unittest runner.

2019-05-10 Thread Jason Grosso
https://bugs.launchpad.net/manila-ui/+bug/1177924/comments/34

** Changed in: manila-ui
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1177924

Title:
  Use testr instead of nose as the unittest runner.

Status in Ceilometer:
  Fix Released
Status in Cinder:
  Fix Released
Status in django-openstack-auth:
  New
Status in Glance:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Won't Fix
Status in OpenStack Identity (keystone):
  Fix Released
Status in manila-ui:
  Won't Fix
Status in python-ceilometerclient:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-heatclient:
  Fix Released
Status in python-keystoneclient:
  Fix Released
Status in OpenStack Object Storage (swift):
  Fix Released
Status in OpenStack DBaaS (Trove):
  Triaged

Bug description:
  We want to start using testr as our test runner instead of nose so
  that we can start running tests in parallel. For the projects that
  have switched we have seen improvements to test speed and quality.

  As part of getting ready for that, we need to start using testtools and
  fixtures to provide the plumbing and test isolation needed for automatic
  parallelization. The work can be done piecemeal, with testtools and fixtures
  being added first, and then tox/run_tests being ported to use testr/subunit
  instead of nose.

  This work was semi-tracked during Grizzly with the
  https://blueprints.launchpad.net/openstack-ci/+spec/grizzly-testtools
  blueprint. I am opening this bug so that we can track the migration to
  testr on a per-project basis.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1177924/+subscriptions



[Yahoo-eng-team] [Bug 1812117] Re: route files are not written on SUSE distros

2019-05-10 Thread Chad Smith
This bug is believed to be fixed in cloud-init in version 19.1. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1812117

Title:
  route files are not written on SUSE distros

Status in cloud-init:
  Fix Released

Bug description:
  On SUSE distros the routes need to be written to ifroute-* files.

  At present the sysconfig renderer does not write the default routes to
  ifroute-* files; instead, the default route information is set in
  ifcfg-*. However, the values DEFROUTE=yes and IPV6_DEFAULTGW have no
  meaning in SUSE ifcfg-* files and are ignored. The routes for an
  interface are loaded from the ifroute-* file.

  The file content is expected to be in the format

  Destination Gateway Netmask Interface Options

  
  The following config shown at https://pastebin.ubuntu.com/p/jjMKVTSK9v/ 
should produce 3 ifroute-* files

  ifroute-eth1
  default 10.80.124.81

  ifroute-eth2
  default 192.168.1.254

  ifroute-eth3
  default fe80::10:80:124:81
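
  A minimal sketch (plain Python, not the actual sysconfig renderer) that emits
  the expected ifroute-* content above from a per-interface default-gateway map,
  following the 'Destination Gateway Netmask Interface Options' format (trailing
  fields may be omitted):

    def render_ifroute(default_gateways):
        """default_gateways maps interface name -> default gateway address."""
        files = {}
        for iface, gateway in default_gateways.items():
            files['ifroute-%s' % iface] = 'default %s\n' % gateway
        return files

    for name, content in sorted(render_ifroute({
            'eth1': '10.80.124.81',
            'eth2': '192.168.1.254',
            'eth3': 'fe80::10:80:124:81'}).items()):
        print(name)
        print(content)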

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1812117/+subscriptions



[Yahoo-eng-team] [Bug 1818032] Re: sysconfig renders BOOTPROTO=dhcp even if dhcp=false in v2 network-config

2019-05-10 Thread Chad Smith
This bug is believed to be fixed in cloud-init in version 19.1. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1818032

Title:
  sysconfig renders BOOTPROTO=dhcp even if dhcp=false in v2 network-
  config

Status in cloud-init:
  Fix Released

Bug description:
  Distribution: Fedora 29
  Cloud Provider: None, NoCloud
  Cloud-Init Version: 18.5 (also 17.1)

  Network Config V2:
  version: 2
  ethernets:
ens3:
  match:
macaddress: 52:54:00:ab:cd:ef
  dhcp4: false
  dhcp6: false
  addresses:
- 192.168.42.100/24
- 2001:db8::100/32
  gateway4: 192.168.42.1
  gateway6: 2001:db8::1
  nameservers:
search: [example.com]
addresses: [192.168.42.53, 1.1.1.1]

  Renders to /etc/sysconfig/network-scripts/ifcfg-ens3:

  # Created by cloud-init on instance boot automatically, do not edit.
  #
  BOOTPROTO=dhcp
  DEFROUTE=yes
  DEVICE=ens3
  DHCPV6C=yes
  DNS1=192.168.42.53
  DNS2=1.1.1.1
  DOMAIN=example.com
  GATEWAY=192.168.42.1
  HWADDR=52:54:00:ab:cd:ef
  IPADDR=192.168.42.101
  IPV6ADDR=2001:db8::101/32
  IPV6INIT=yes
  IPV6_DEFAULTGW=2001:db8::1
  NETMASK=255.255.255.0
  NM_CONTROLLED=no
  ONBOOT=yes
  STARTMODE=auto
  TYPE=Ethernet
  USERCTL=no

  
  But 'BOOTPROTO=dhcp' should be 'BOOTPROTO=none', and 'DHCPV6C=yes' should be
  'DHCPV6C=no' or absent.

  A fix has already been proposed here:
  https://code.launchpad.net/~kurt-easygo/cloud-init/+git/cloud-init/+merge/363732
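
  A minimal sketch (plain Python over a v2 ethernet dict, not the real renderer)
  of the mapping being asked for: dhcp4/dhcp6 set to false must produce
  BOOTPROTO=none and no DHCPV6C key.

    def sysconfig_dhcp_keys(eth):
        """eth is a network-config v2 ethernet dict, e.g. {'dhcp4': False}."""
        cfg = {'BOOTPROTO': 'dhcp' if eth.get('dhcp4') else 'none'}
        if eth.get('dhcp6'):
            cfg['DHCPV6C'] = 'yes'     # only emitted when DHCPv6 is requested
        return cfg

    print(sysconfig_dhcp_keys({'dhcp4': False, 'dhcp6': False}))
    # -> {'BOOTPROTO': 'none'}   (no DHCPV6C, matching the expected output)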

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1818032/+subscriptions



[Yahoo-eng-team] [Bug 1779672] Re: netdev_pformat key error on FreeBSD with 18.3

2019-05-10 Thread Chad Smith
This bug is believed to be fixed in cloud-init in version 19.1. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1779672

Title:
  netdev_pformat key error on FreeBSD with 18.3

Status in cloud-init:
  Fix Released

Bug description:
  I am running cloud-init at commit
  c42a926ae730994f66fe87c264b65f6e4dca69a1 against a FreeBSD 10.4 host
  and getting the following stack trace:

  2018-07-02 11:40:18,158 - util.py[DEBUG]: Cloud-init v. 18.3 running 'init' 
at Mon, 02 Jul 2018 11:40:18 +. Up 20.11459589 seconds.
  2018-07-02 11:40:18,159 - main.py[DEBUG]: No kernel command line url found.
  2018-07-02 11:40:18,159 - main.py[DEBUG]: Closing stdin.
  2018-07-02 11:40:18,172 - util.py[DEBUG]: Writing to /var/log/cloud-init.log 
- ab: [644] 0 bytes
  2018-07-02 11:40:18,175 - util.py[DEBUG]: Changing the ownership of 
/var/log/cloud-init.log to 0:0
  2018-07-02 11:40:18,175 - util.py[DEBUG]: Running command ['ifconfig', '-a'] 
with allowed return codes [0, 1] (shell=False, capture=True)
  2018-07-02 11:40:18,195 - util.py[WARNING]: failed stage init
  2018-07-02 11:40:18,196 - util.py[DEBUG]: failed stage init
  Traceback (most recent call last):
File 
"/usr/local/lib/python2.7/site-packages/cloud_init-18.3-py2.7.egg/cloudinit/cmd/main.py",
 line 655, in status_wrapper
  ret = functor(name, args)
File 
"/usr/local/lib/python2.7/site-packages/cloud_init-18.3-py2.7.egg/cloudinit/cmd/main.py",
 line 284, in main_init
  sys.stderr.write("%s\n" % (netinfo.debug_info()))
File 
"/usr/local/lib/python2.7/site-packages/cloud_init-18.3-py2.7.egg/cloudinit/netinfo.py",
 line 447, in debug_info
  netdev_lines = netdev_pformat().splitlines()
File 
"/usr/local/lib/python2.7/site-packages/cloud_init-18.3-py2.7.egg/cloudinit/netinfo.py",
 line 392, in netdev_pformat
  (dev, data["up"], addr["ip"], empty, addr["scope6"],
  KeyError: 'scope6'
  2018-07-02 11:40:18,204 - util.py[DEBUG]: cloud-init mode 'init' took 0.142 
seconds (0.14)
  2018-07-02 11:40:18,205 - handlers.py[DEBUG]: finish: init-network: SUCCESS: 
searching for network datasources

  
  The interface setup on the host looks like this:

  root@host-10-1-80-61:~ # ifconfig -a
  vtnet0: flags=8843 metric 0 mtu 1500

options=6c07bb
ether fa:16:3e:14:1f:99
hwaddr fa:16:3e:14:1f:99
inet 10.1.80.61 netmask 0xf000 broadcast 10.1.95.255 
nd6 options=29
media: Ethernet 10Gbase-T 
status: active
  pflog0: flags=0<> metric 0 mtu 33160
  pfsync0: flags=0<> metric 0 mtu 1500
syncpeer: 0.0.0.0 maxupd: 128 defer: off
  lo0: flags=8049 metric 0 mtu 16384
options=63
inet6 ::1 prefixlen 128 
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x4 
inet 127.0.0.1 netmask 0xff00 
nd6 options=21

  With the previous 18.2 release I did not have any problems.
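
  A minimal sketch of the failure mode: on FreeBSD the parsed address dict has
  no 'scope6' key for IPv4-only interfaces, so formatting code that indexes the
  key directly raises the KeyError seen in the traceback; using .get() with a
  placeholder default avoids it. (The function and field names here are
  illustrative, not the cloudinit.netinfo internals.)

    def format_netdev_row(dev, data, addr):
        # Default missing fields so interfaces without IPv6 scope info
        # (as on FreeBSD) do not break the table formatting.
        return (dev, data.get("up", False),
                addr.get("ip", "."), addr.get("scope6", "."))

    print(format_netdev_row("vtnet0", {"up": True}, {"ip": "10.1.80.61"}))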

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1779672/+subscriptions



[Yahoo-eng-team] [Bug 1825596] Re: Azure reboot with unformatted ephemeral drive won't mount reformatted volume

2019-05-10 Thread Chad Smith
This bug is believed to be fixed in cloud-init in version 19.1. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1825596

Title:
  Azure reboot with unformatted ephemeral drive won't mount reformatted
  volume

Status in cloud-init:
  Fix Released

Bug description:
  If an Azure VM is rebooted after being moved to a different host (e.g.
  after a deallocate operation or after a service-heal to remove a bad
  host from service), the ephemeral drive exposed to the VM is reset to
  the default state (NTFS format). The Azure data source detects this
  and marks cc_disk_setup and cc_mounts to be run. While cc_disk_setup
  reformats the volume as desired, cc_mounts determines that the
  appropriate mount request was already in /etc/fstab (as set up during
  initial provisioning). Since the normal boot process would already
  have mounted everything according to fstab, the cc_mounts logic is "no
  mount -a is required". This is not true in this scenario.
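
  A minimal sketch (not the actual cc_mounts change) of the check that is
  missing: even when /etc/fstab is unchanged, 'mount -a' is still required if an
  fstab entry is not currently mounted, which is exactly the post-reformat state
  described above (assuming the ephemeral resource disk is mounted at /mnt).

    def needs_mount_a(fstab_mount_points, currently_mounted):
        """Return True if any fstab mount point is not actually mounted."""
        return any(mp not in currently_mounted for mp in fstab_mount_points)

    # After the disk is reformatted on the new host, /mnt is still listed in
    # fstab but is no longer mounted, so mount -a must be run again.
    print(needs_mount_a({"/", "/mnt"}, {"/"}))   # True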

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1825596/+subscriptions



[Yahoo-eng-team] [Bug 1819222] Re: cloud-init-per no longer works due to bashisms

2019-05-10 Thread Chad Smith
This bug is believed to be fixed in cloud-init in version 19.1. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1819222

Title:
  cloud-init-per no longer works due to bashisms

Status in cloud-init:
  Fix Released

Bug description:
  Due to https://code.launchpad.net/~vkuznets/cloud-init/+git/cloud-
  init/+merge/362024, cloud-init-per now gives:

  /usr/bin/cloud-init-per: 41: /usr/bin/cloud-init-per: Bad substitution

  when it runs, as it has a shebang pointing at /bin/sh.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1819222/+subscriptions



[Yahoo-eng-team] [Bug 1799540] Re: ONBOOT not supported in SUSE distros

2019-05-10 Thread Chad Smith
This bug is believed to be fixed in cloud-init in version 19.1. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1799540

Title:
  ONBOOT not supported in SUSE distros

Status in cloud-init:
  Fix Released

Bug description:
  With db50bc0d9 the sysconfig renderer was enabled for openSUSE and
  SUSE Linux Enterprise. The current implementation renders ONBOOT=yes
  into ifcfg-*, but this setting is not recognized by wicked and the device
  is not started. ONBOOT=yes should instead be rendered as STARTMODE=auto,
  and ONBOOT=no as STARTMODE=manual (see the sketch below).
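
  A minimal sketch of the substitution being asked for (illustrative only, not
  the renderer itself):

    def suse_start_key(onboot):
        # wicked ignores ONBOOT; STARTMODE controls whether the device comes up.
        return ('STARTMODE', 'auto' if onboot else 'manual')

    print(suse_start_key(True))    # ('STARTMODE', 'auto')
    print(suse_start_key(False))   # ('STARTMODE', 'manual')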

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1799540/+subscriptions



[Yahoo-eng-team] [Bug 1813361] Re: disco: python37 unittest/tox support

2019-05-10 Thread Chad Smith
This bug is believed to be fixed in cloud-init in version 19.1. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1813361

Title:
  disco: python37 unittest/tox support

Status in cloud-init:
  Fix Released

Bug description:
  cloud-init's xenial toxenv falls over on tip of master
  7a4696596bbcccfedf5c6b6e25ad684ef30d9cea in Ubuntu Disco python37
  environments. Some of the tox dependencies, like httpretty, are exhibiting
  issues with python3.7.

  The goal is to make "all the tox things" work on Disco.

  The type of errors seen when running tox -r -e xenial on Disco:

  ==
  ERROR: 
tests.unittests.test_datasource.test_openstack.TestOpenStackDataSource.test_wb__crawl_metadata_does_not_persist
  --
  Traceback (most recent call last):
    File 
"/home/ubuntu/cloud-init/.tox/xenial/lib/python3.7/site-packages/httpretty/core.py",
 line 1055, in wrapper
  return test(*args, **kw)
    File 
"/home/ubuntu/cloud-init/tests/unittests/test_datasource/test_openstack.py", 
line 395, in test_wb__crawl_metadata_does_not_persist
  _register_uris(self.VERSION, EC2_FILES, EC2_META, OS_FILES)
    File 
"/home/ubuntu/cloud-init/tests/unittests/test_datasource/test_openstack.py", 
line 126, in _register_uris
  body=get_request_callback)
    File 
"/home/ubuntu/cloud-init/.tox/xenial/lib/python3.7/site-packages/httpretty/core.py",
 line 938, in register_uri
  match_querystring)
    File 
"/home/ubuntu/cloud-init/.tox/xenial/lib/python3.7/site-packages/httpretty/core.py",
 line 760, in __init__
  self.info = URIInfo.from_uri(uri, entries)
    File 
"/home/ubuntu/cloud-init/.tox/xenial/lib/python3.7/site-packages/httpretty/core.py",
 line 730, in from_uri
  result = urlsplit(uri)
    File "/usr/lib/python3.7/urllib/parse.py", line 400, in urlsplit
  url, scheme, _coerce_result = _coerce_args(url, scheme)
    File "/usr/lib/python3.7/urllib/parse.py", line 123, in _coerce_args
  return _decode_args(args) + (_encode_result,)
    File "/usr/lib/python3.7/urllib/parse.py", line 107, in _decode_args
  return tuple(x.decode(encoding, errors) if x else '' for x in args)
    File "/usr/lib/python3.7/urllib/parse.py", line 107, in 
  return tuple(x.decode(encoding, errors) if x else '' for x in args)
  AttributeError: 're.Pattern' object has no attribute 'decode'

  --

  Also of note: the python27 interpreter isn't available on Disco, so we
  need to allow tox to skip the py27 env when it is not present.
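
  A minimal reproduction of the underlying failure, assuming (based on the
  traceback above) that httpretty hands its registered regex object straight to
  urlsplit, which tries to .decode() anything that is not a str:

    import re
    from urllib.parse import urlsplit

    pattern = re.compile(r"http://169.254.169.254/.*")   # hypothetical registered URI
    try:
        urlsplit(pattern)
    except AttributeError as exc:
        print(exc)   # 're.Pattern' object has no attribute 'decode'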

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1813361/+subscriptions



[Yahoo-eng-team] [Bug 1813641] Re: cloud-init on Disco, opennebula will intermittently fail unittests

2019-05-10 Thread Chad Smith
This bug is believed to be fixed in cloud-init in version 19.1. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1813641

Title:
  cloud-init on Disco, opennebula will intermittently fail unittests

Status in cloud-init:
  Fix Released

Bug description:
  Just like in the recent bug #1813383, bash on Disco now exposes the
  EPOCHSECONDS environment variable as well as EPOCHREALTIME.

  The OpenNebula datasource inspects all bash environment variables in order
  to surface variables which have changed across bash invocations. Since
  EPOCHREALTIME and EPOCHSECONDS are both known to increase across environment
  queries, OpenNebula needs to exclude both of these values from the list of
  changed environment variables to avoid false positives.

  This intermittent bug caused a FTBFS for cloud-init in disco:
  
https://launchpadlibrarian.net/408452856/buildlog_ubuntu-disco-amd64.cloud-init_18.5-17-gd1a2fe73-0ubuntu1_BUILDING.txt.gz
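
  A minimal sketch (not the OpenNebula datasource code) of the comparison it
  performs, with the volatile bash variables excluded so they never show up as
  false positives:

    # Variables that legitimately change between two bash invocations; this bug
    # is about adding EPOCHSECONDS alongside EPOCHREALTIME.
    VOLATILE = {"EPOCHREALTIME", "EPOCHSECONDS"}

    def changed_vars(before, after):
        """Return env vars whose values differ, ignoring the volatile ones."""
        return {k: v for k, v in after.items()
                if k not in VOLATILE and before.get(k) != v}

    before = {"VAR1": "single", "EPOCHSECONDS": "1548476675"}
    after = {"VAR1": "single", "VAR2": "double word", "EPOCHSECONDS": "1548476699"}
    print(changed_vars(before, after))   # {'VAR2': 'double word'} only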

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1813641/+subscriptions



[Yahoo-eng-team] [Bug 1819913] Re: cloud-init on xenial may generate network config on every boot

2019-05-10 Thread Chad Smith
This bug is believed to be fixed in cloud-init in version 19.1. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1819913

Title:
  cloud-init on xenial may generate network config on every boot

Status in cloud-init:
  Fix Released

Bug description:
  On at least EC2, with the cloud-init Xenial release the Ec2 datasource allows
  the EventType.BOOT event to update metadata and will regenerate network
  configuration on each boot. Cloud-init releases newer than Xenial are not
  affected, since cloud-init detects which datasource to use and does not
  perform searching.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1819913/+subscriptions



[Yahoo-eng-team] [Bug 1828479] Re: Release 19.1

2019-05-10 Thread Chad Smith
This bug is believed to be fixed in cloud-init in version 19.1. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1828479

Title:
  Release 19.1

Status in cloud-init:
  Fix Released

Bug description:
  This bug tracks cloud-init upstream release of version 19.1.

  == Release Notes ==
  Hello All,

  Cloud-init release 19.1 is now available

  The 19.1 release:
   * spanned just under 5 months in length
   * had 24 contributors from 20 domains
   * fixed 30 launchpad issues

  Highlights:
   - Azure datasource telemetry, network configuration and ssh key hardening 
   - new config module for interacting with third party drivers on Ubuntu
   - EC2 Classic instance support for network config changes across reboot
   - Add support for the com.vmware.guestInfo OVF transport.
   - Scaleway: Support ssh keys provided inside an instance tag.
   - Better NoCloud support for case-insensitive fs labels.


  == Changelog ==

- tests: add Eoan release [Paride Legovini]
- cc_mounts: check if mount -a on no-change fstab path
  [Jason Zions (MSFT)] (LP: #1825596)
- replace remaining occurrences of LOG.warn [Daniel Watkins]
- DataSourceAzure: Adjust timeout for polling IMDS [Anh Vo]
- Azure: Changes to the Hyper-V KVP Reporter [Anh Vo]
- git tests: no longer show warning about safe yaml.
- tools/read-version: handle errors [Chad Miller]
- net/sysconfig: only indicate available on known sysconfig distros
  (LP: #1819994)
- packages: update rpm specs for new bash completion path
  [Daniel Watkins] (LP: #1825444)
- test_azure: mock util.SeLinuxGuard where needed
  [Jason Zions (MSFT)] (LP: #1825253)
- setup.py: install bash completion script in new location [Daniel Watkins]
- mount_cb: do not pass sync and rw options to mount
  [Gonéri Le Bouder] (LP: #1645824)
- cc_apt_configure: fix typo in apt documentation [Dominic Schlegel]
- Revert "DataSource: move update_events from a class to an instance..."
  [Daniel Watkins]
- Change DataSourceNoCloud to ignore file system label's case.
  [Risto Oikarinen]
- cmd:main.py: Fix missing 'modules-init' key in modes dict
  [Antonio Romito] (LP: #1815109)
- ubuntu_advantage: rewrite cloud-config module
- Azure: Treat _unset network configuration as if it were absent
  [Jason Zions (MSFT)] (LP: #1823084)
- DatasourceAzure: add additional logging for azure datasource [Anh Vo]
- cloud_tests: fix apt_pipelining test-cases
- Azure: Ensure platform random_seed is always serializable as JSON.
  [Jason Zions (MSFT)]
- net/sysconfig: write out SUSE-compatible IPv6 config [Robert Schweikert]
- tox: Update testenv for openSUSE Leap to 15.0 [Thomas Bechtold]
- net: Fix ipv6 static routes when using eni renderer
  [Raphael Glon] (LP: #1818669)
- Add ubuntu_drivers config module [Daniel Watkins]
- doc: Refresh Azure walinuxagent docs [Daniel Watkins]
- tox: bump pylint version to latest (2.3.1) [Daniel Watkins]
- DataSource: move update_events from a class to an instance attribute
  [Daniel Watkins] (LP: #1819913)
- net/sysconfig: Handle default route setup for dhcp configured NICs
  [Robert Schweikert] (LP: #1812117)
- DataSourceEc2: update RELEASE_BLOCKER to be more accurate
  [Daniel Watkins]
- cloud-init-per: POSIX sh does not support string subst, use sed
  (LP: #1819222)
- Support locking user with usermod if passwd is not available.
- Example for Microsoft Azure data disk added. [Anton Olifir]
- clean: correctly determine the path for excluding seed directory
  [Daniel Watkins] (LP: #1818571)
- helpers/openstack: Treat unknown link types as physical
  [Daniel Watkins] (LP: #1639263)
- drop Python 2.6 support and our NIH version detection [Daniel Watkins]
- tip-pylint: Fix assignment-from-return-none errors
- net: append type:dhcp[46] only if dhcp[46] is True in v2 netconfig
  [Kurt Stieger] (LP: #1818032)
- cc_apt_pipelining: stop disabling pipelining by default
  [Daniel Watkins] (LP: #1794982)
- tests: fix some slow tests and some leaking state [Daniel Watkins]
- util: don't determine string_types ourselves [Daniel Watkins]
- cc_rsyslog: Escape possible nested set [Daniel Watkins] (LP: #1816967)
- Enable encrypted_data_bag_secret support for Chef
  [Eric Williams] (LP: #1817082)
- azure: Filter list of ssh keys pulled from fabric [Jason Zions (MSFT)]
- doc: update merging doc with fixes and some additional details/examples
- tests: integration test failure summary to use traceback if empty error
- This is to fix 

[Yahoo-eng-team] [Bug 1815109] Re: cloud-final.service: "cloud-init modules --mode final" exit with "KeyError: 'modules-init'" after upgrade to version 18.2

2019-05-10 Thread Chad Smith
This bug is believed to be fixed in cloud-init in version 19.1. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1815109

Title:
  cloud-final.service: "cloud-init modules --mode final" exit with
  "KeyError: 'modules-init'" after upgrade to version 18.2

Status in cloud-init:
  Fix Released

Bug description:
  Description of problem:

  After the upgrade of cloud-init to version 18.2, cloud-final.service does
  not start due to the following error, and the service remains in a failed
  (not running) state.

  -
  # service cloud-final status
  Redirecting to /bin/systemctl status cloud-final.service
  ● cloud-final.service - Execute cloud user/final scripts
 Loaded: loaded (/usr/lib/systemd/system/cloud-final.service; enabled; 
vendor preset: disabled)
 Active: failed (Result: exit-code) since Fri 2019-02-01 13:14:31 CET; 
28min ago
Process: 21927 ExecStart=/usr/bin/cloud-init modules --mode=final 
(code=exited, status=1/FAILURE)
   Main PID: 21927 (code=exited, status=1/FAILURE)
  -

  Version-Release number of selected component (if applicable):

  Red Hat Enterprise Linux Server release 7.6 (Maipo)
  cloud-init-18.2-1.el7_6.1.x86_64

  How reproducible:

  Steps to Reproduce:
  1. [root@rhvm ~]# cloud-init modules --mode=final

  Actual results:

  [root@rhvm ~]# cloud-init modules --mode final
  Cloud-init v. 18.2 running 'modules:final' at Wed, 06 Feb 2019 20:00:14 
+. Up 10634.29 seconds.
  Cloud-init v. 18.2 finished at Wed, 06 Feb 2019 20:00:15 +. Datasource 
DataSourceNoCloud [seed=/dev/sr0][dsmode=net].  Up 10634.40 seconds
  Traceback (most recent call last):
File "/usr/bin/cloud-init", line 9, in 
  load_entry_point('cloud-init==18.2', 'console_scripts', 'cloud-init')()
File "/usr/lib/python2.7/site-packages/cloudinit/cmd/main.py", line 882, in 
main
  get_uptime=True, func=functor, args=(name, args))
File "/usr/lib/python2.7/site-packages/cloudinit/util.py", line 2388, in 
log_time
  ret = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/cloudinit/cmd/main.py", line 679, in 
status_wrapper
  if v1[m]['errors']:
  KeyError: 'modules-init'

  
  Expected results:

  [root@rhvm ~]# cloud-init modules --mode final
  Cloud-init v. 18.2 running 'modules:final' at Wed, 06 Feb 2019 19:41:50 
+. Up 9530.23 seconds.
  Cloud-init v. 18.2 finished at Wed, 06 Feb 2019 19:41:50 +. Datasource 
DataSourceNoCloud [seed=/dev/sr0][dsmode=net].  Up 9530.34 seconds

  
  Additional info:

  This problem do not happens with previous cloud-init version:

  cloud-init.x86_64 0:0.7.9-24.el7_5.1 will be updated
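
  The eventual fix adds the missing 'modules-init' key to the modes dict (see
  the 19.1 changelog entry for LP: #1815109); a minimal sketch of why indexing
  an absent mode blows up, and how defensive access avoids it (mode names here
  are illustrative):

    v1 = {"init": {"errors": []}, "modules-final": {"errors": []}}

    modes = ("init", "modules-init", "modules-final")
    # v1[m]['errors'] raises KeyError for the absent 'modules-init' entry;
    # .get() with a default does not.
    errors = [e for m in modes for e in v1.get(m, {}).get("errors", [])]
    print(errors)   # [] instead of KeyError: 'modules-init'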

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1815109/+subscriptions



[Yahoo-eng-team] [Bug 1639263] Re: cloud-init Unknown network_data link type: macvtap

2019-05-10 Thread Chad Smith
This bug is believed to be fixed in cloud-init in version 19.1. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1639263

Title:
   cloud-init Unknown network_data link type: macvtap

Status in cloud-init:
  Fix Released

Bug description:
  Cloud-init fails due to an unknown network_data link type: macvtap.

  Sample cloud-init.log output from a launched instance on a compute node
  running neutron-macvtap-agent:

  [CLOUDINIT] util.py[WARNING]: failed stage init-local
  [CLOUDINIT] util.py[DEBUG]: failed stage init-local#012Traceback (most recent 
call last):#012  File "/usr/lib/python3/dist-packages/cloudinit/cmd/main.py", 
line 521, in status_wrapper#012ret = functor(name, args)#012  File 
"/usr/lib/python3/dist-packages/cloudinit/cmd/main.py", line 280, in 
main_init#012init.apply_network_config(bring_up=bool(mode != 
sources.DSMODE_LOCAL))#012  File 
"/usr/lib/python3/dist-packages/cloudinit/stages.py", line 631, in 
apply_network_config#012netcfg, src = self._find_networking_config()#012  
File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 618, in 
_find_networking_config#012if self.datasource and hasattr(self.datasource, 
'network_config'):#012  File 
"/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceConfigDrive.py", 
line 159, in network_config#012self.network_json, 
known_macs=self.known_macs)#012  File 
"/usr/lib/python3/dist-packages/cloudinit/sources/helpers/openstack.py", line 
627, in convert_net_json#012'Unknown network_data link type: %s' % 
link['type'])#012ValueError: Unknown network_data link type: macvtap
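
  The 19.1 changelog entry "helpers/openstack: Treat unknown link types as
  physical" describes the eventual behavior; a minimal sketch of that fallback
  (the known-types tuple here is illustrative, not the full cloud-init list):

    KNOWN_PHYSICAL_TYPES = ("phy", "ethernet", "vif", "tap")

    def effective_link_type(link):
        ltype = link.get("type")
        if ltype in KNOWN_PHYSICAL_TYPES:
            return ltype
        # macvtap (and similar) look like a normal NIC from inside the guest,
        # so fall back to treating the link as physical instead of raising.
        return "phy"

    print(effective_link_type({"type": "macvtap"}))   # 'phy'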

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1639263/+subscriptions



[Yahoo-eng-team] [Bug 1645824] Re: NoCloud source doesn't work on FreeBSD

2019-05-10 Thread Chad Smith
This bug is believed to be fixed in cloud-init in version 19.1. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1645824

Title:
  NoCloud source doesn't work on FreeBSD

Status in cloud-init:
  Fix Released

Bug description:
  Hey guys,

  I'm trying to use cloud-init on FreeBSD using a CD to seed metadata; it had
  some issues:

  - The mount option 'sync' is not allowed for the cd9660 filesystem.
  - I optimized the list of filesystems that need to be scanned for metadata
by keeping three lists (vfat, iso9660, and a label list) and then checking
against them to see which filesystem type needs to be passed to the mount
command (see the sketch below).
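
  A minimal sketch (assumed structure, not the actual patch) of choosing the
  filesystem type per candidate device and dropping options such as 'sync' that
  cd9660 rejects; the device lists are hypothetical:

    ISO9660_DEVS = ("/dev/cd0",)
    VFAT_DEVS = ("/dev/da0s1",)

    def mount_command(device, mountpoint):
        if device in ISO9660_DEVS:
            fstype, options = "cd9660", ["ro"]            # no 'sync' allowed here
        elif device in VFAT_DEVS:
            fstype, options = "msdosfs", ["ro", "sync"]
        else:
            fstype, options = "auto", ["ro"]
        return ["mount", "-t", fstype, "-o", ",".join(options), device, mountpoint]

    print(mount_command("/dev/cd0", "/mnt/seed"))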

  Additionally, I'm going to push some changes to the FreeBSD cloud-init
  package so it can build the latest version. I will open another ticket for
  fixing networking on FreeBSD, as it doesn't support sysfs
  (/sys/class/net/) by default.

  Thanks!

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1645824/+subscriptions



[Yahoo-eng-team] [Bug 1813383] Re: opennebula: fail to sbuild, bash environment var failure EPOCHREALTIME

2019-05-10 Thread Chad Smith
This bug is believed to be fixed in cloud-init in version 19.1. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1813383

Title:
  opennebula: fail to sbuild, bash environment var failure EPOCHREALTIME

Status in cloud-init:
  Fix Released

Bug description:
  unittests are failing during packaging of cloud-init on disco during
  an sbuild due to failures in OpenNebula datasource unit tests.

  
  Unit tests are now seeing EPOCHREALTIME values returned because that env
  value changes across the unit test run.

  The OpenNebula datasource tries to exclude bash -e env values that are
  known to change, and EPOCHREALTIME is one of the env variables expected to
  continue showing a value delta.


  ==
  FAIL: 
tests.unittests.test_datasource.test_opennebula.TestOpenNebulaDataSource.test_context_parser
  --
  Traceback (most recent call last):
File "/<>/tests/unittests/test_datasource/test_opennebula.py", 
line 161, in test_context_parser
  self.assertEqual(TEST_VARS, results['metadata'])
  AssertionError: {'VAR1': 'single', 'VAR2': 'double word', '[207 chars] '$'} 
!= {'EPOCHREALTIME': '1548476675.477863', 'VAR[245 chars]e\n'}
  + {'EPOCHREALTIME': '1548476675.477863',
  - {'VAR1': 'single',
  ? ^

  +  'VAR1': 'single',
  ? ^

 'VAR10': '\\',
 'VAR11': "'",
 'VAR12': '$',
 'VAR2': 'double word',
 'VAR3': 'multi\nline\n',
 'VAR4': "'single'",
 'VAR5': "'double word'",
 'VAR6': "'multi\nline\n'",
 'VAR7': 'single\\t',
 'VAR8': 'double\\tword',
 'VAR9': 'multi\\t\nline\n'}
   >> begin captured logging << 
  cloudinit.util: DEBUG: Reading from 
/tmp/ci-TestOpenNebulaDataSource.ms6gmudd/seed/opennebula/context.sh 
(quiet=False)
  cloudinit.util: DEBUG: Read 262 bytes from 
/tmp/ci-TestOpenNebulaDataSource.ms6gmudd/seed/opennebula/context.sh
  cloudinit.util: DEBUG: Running command ['bash', '-e'] with allowed return 
codes [0] (shell=False, capture=True)
  - >> end captured logging << -

  ==
  FAIL: 
tests.unittests.test_datasource.test_opennebula.TestOpenNebulaDataSource.test_seed_dir_empty1_context
  --
  Traceback (most recent call last):
File "/<>/tests/unittests/test_datasource/test_opennebula.py", 
line 140, in test_seed_dir_empty1_context
  self.assertEqual(results['metadata'], {})
  AssertionError: {'EPOCHREALTIME': '1548476675.848343'} != {}
  - {'EPOCHREALTIME': '1548476675.848343'}
  + {}
   >> begin captured logging << 
  cloudinit.util: DEBUG: Reading from 
/tmp/ci-TestOpenNebulaDataSource.gu1w3vu_/seed/opennebula/context.sh 
(quiet=False)
  cloudinit.util: DEBUG: Read 0 bytes from 
/tmp/ci-TestOpenNebulaDataSource.gu1w3vu_/seed/opennebula/context.sh
  cloudinit.util: DEBUG: Running command ['bash', '-e'] with allowed return 
codes [0] (shell=False, capture=True)
  - >> end captured logging << -

  ==
  FAIL: 
tests.unittests.test_datasource.test_opennebula.TestOpenNebulaDataSource.test_seed_dir_empty2_context
  --
  Traceback (most recent call last):
File "/<>/tests/unittests/test_datasource/test_opennebula.py", 
line 147, in test_seed_dir_empty2_context
  self.assertEqual(results['metadata'], {})
  AssertionError: {'EPOCHREALTIME': '1548476675.863058'} != {}
  - {'EPOCHREALTIME': '1548476675.863058'}
  + {}
   >> begin captured logging << 
  cloudinit.util: DEBUG: Reading from 
/tmp/ci-TestOpenNebulaDataSource.b3f_3ztm/seed/opennebula/context.sh 
(quiet=False)
  cloudinit.util: DEBUG: Read 44 bytes from 
/tmp/ci-TestOpenNebulaDataSource.b3f_3ztm/seed/opennebula/context.sh
  cloudinit.util: DEBUG: Running command ['bash', '-e'] with allowed return 
codes [0] (shell=False, capture=True)
  - >> end captured logging << -

  ==
  FAIL: test_no_seconds 
(tests.unittests.test_datasource.test_opennebula.TestParseShellConfig)
  --
  Traceback (most recent call last):
File "/<>/tests/unittests/test_datasource/test_opennebula.py", 
line 921, in test_no_seconds
  

[Yahoo-eng-team] [Bug 1816967] Re: cc_rsyslog.py:205: FutureWarning: Possible nested set at position 23

2019-05-10 Thread Chad Smith
This bug is believed to be fixed in cloud-init in version 19.1. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1816967

Title:
  cc_rsyslog.py:205: FutureWarning: Possible nested set at position 23

Status in cloud-init:
  Fix Released

Bug description:
  With Python 3.7 this FutureWarning is seen e.g. in VM serial console:

  [4.321959] cloud-init[728]: 
/usr/lib/python3.7/site-packages/cloudinit/config/cc_rsyslog.py:205: 
FutureWarning: Possible nested set at position 23
  [4.323230] cloud-init[728]:   r'^(?P[@]{0,2})'

  I think it's fixable by changing [[] to [\[] in the HOST_PORT_RE regex
  in cc_rsyslog.py.

  https://docs.python.org/dev/whatsnew/3.7.html#re
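
  A minimal demonstration of the warning and of the suggested escape, run on
  Python 3.7+ with warnings promoted to errors so the difference is visible:

    import re
    import warnings

    warnings.simplefilter("error", FutureWarning)

    try:
        re.compile(r'[[]')     # unescaped '[' inside a set -> "Possible nested set"
    except FutureWarning as exc:
        print("old form warns:", exc)

    re.compile(r'[\[]')        # escaping the bracket compiles cleanly
    print("escaped form is fine")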

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1816967/+subscriptions



[Yahoo-eng-team] [Bug 1811446] Re: chpasswd: is mangling certain password hashes

2019-05-10 Thread Chad Smith
This bug is believed to be fixed in cloud-init in version 19.1. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1811446

Title:
  chpasswd: is mangling certain password hashes

Status in cloud-init:
  Fix Released
Status in cloud-init package in Ubuntu:
  Fix Released

Bug description:
  #cloud-config

  # from 1 files
  # part-001

  ---
  chpasswd:
  expire: false
  list: 'root:$2y$10$8BQjxjVByHA/Ee.O1bCXtO8S7Y5WojbXWqnqYpUW.BrPx/Dlew1Va

  '

  
  From #cloud-init

   Hey there, I'm not sure whether I'm running into a bug or not
   I'm trying to set the password hash for the root user on a system 
using the chpasswd module
 It should match the new hash at this line in the module but it doesn't
seem to match
   
https://github.com/cloud-init/cloud-init/blame/master/cloudinit/config/cc_set_passwords.py#L163
   I can confirm this when running it through 
https://regex101.com/r/Nj7VTZ/1
   Then I was thinking, isn't [] for lists of characters rather than 
lists of strings
   Changing it to \$(1|2a|2y|5|6)(\$.+){2} does work
   At least in regex101
 smoser, you any idea, I saw you committed the change: 
https://github.com/cloud-init/cloud-init/commit/21632972df034c200578e1fbc121a07f20bb8774
   marlinc_: i'd think yes. that is a bug for the '2a' and '2y'
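
  A minimal check of the point made above (assumption: the pre-fix pattern used
  a character class around the prefixes; the exact original may differ). A
  character class matches a single character, so the '2y' bcrypt prefix is never
  recognised and the hash ends up being treated as plain text:

    import re

    hashed = "$2y$10$8BQjxjVByHA/Ee.O1bCXtO8S7Y5WojbXWqnqYpUW.BrPx/Dlew1Va"

    char_class = re.compile(r'\$[1|2a|2y|5|6](\$.+){2}')    # assumed pre-fix form
    alternation = re.compile(r'\$(1|2a|2y|5|6)(\$.+){2}')   # form suggested on IRC

    print(bool(char_class.match(hashed)))    # False: '$2' matches, then 'y' breaks it
    print(bool(alternation.match(hashed)))   # True: '2y' is a recognised prefix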

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1811446/+subscriptions



[Yahoo-eng-team] [Bug 1819994] Re: cloud-init selects sysconfig netconfig renderer if network-manager is installed on Ubuntu

2019-05-10 Thread Chad Smith
This bug is believed to be fixed in cloud-init in version 19.1. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1819994

Title:
  cloud-init selects sysconfig netconfig renderer if network-manager is
  installed on Ubuntu

Status in cloud-init:
  Fix Released
Status in MAAS:
  Invalid
Status in Provider for Plainbox - Canonical Certification Server:
  Confirmed

Bug description:
  Configuration:
  UEFI/BIOS:TEE136S
  IMM/BMC:  CDI333V
  CPU:  Intel(R) Xeon(R) Platinum 8253 CPU @ 2.20GHz
  Memory:   16G DIMM * 12
  Raid card:ThinkSystem RAID 530-8i 
  NIC Card: Intel X722 LOM

  Reproduce steps:
  1. Configure "network" as first boot
  2. Power on the machine
  3. Visit TC through a web browser and commission the machine
  4. When commissioning completes, deploy Ubuntu 18.04 LTS on the SUT
  5. The error appeared during OS deploy.

  Deploy errors look like the following (see the attachment for details):

  cloud-init[] Date_and_time - handlers.py[WARNING]: failed posting
  event: start: modules-final/config-: running config-

  cloud-init[] Date_and_time - handlers.py[WARNING]: failed posting
  event: finish: modules-final: SUCCESS: running modules for final

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1819994/+subscriptions



[Yahoo-eng-team] [Bug 1823084] Re: DataSourceAzure doesn't rebuild network-config after reboot

2019-05-10 Thread Chad Smith
This bug is believed to be fixed in cloud-init in version 19.1. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1823084

Title:
  DataSourceAzure doesn't rebuild network-config after reboot

Status in cloud-init:
  Fix Released

Bug description:
  After merge 365065 (commit 0dc3a77f4), when an Azure VM (previously
  provisioned via cloud-init) is rebooted, DataSourceAzure fails to
  recreate a NetworkConfig, with multiple exceptions raised and caught.

  When the ds is restored from obj.pkl in the instance directory,
  self._network_config is reloaded as the string "_unset" rather than as
  a dictionary. Comments in the datasource indicate this was a
  deliberate decision; the intent was to force the datasource to rebuild
  the network configuration at each boot based on information fetched
  from the Azure control plane. The self._network_config dict is
  overwritten very quickly after it is generated and used; the net
  result is that the "_unset" string is deliberately saved as
  obj['ds']['network_config']
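
  A minimal sketch of the guard a datasource needs after unpickling, assuming
  '_unset' is the sentinel string described above:

    UNSET = "_unset"

    def effective_network_config(restored_value, regenerate):
        # After restore from obj.pkl the attribute may be the sentinel string
        # rather than a dict; treat that as absent and rebuild from the platform.
        if restored_value in (None, UNSET):
            return regenerate()
        return restored_value

    print(effective_network_config(UNSET, lambda: {"version": 2}))   # {'version': 2}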

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1823084/+subscriptions



[Yahoo-eng-team] [Bug 1795508] Re: cloud-init clean from within /var/lib/cloud-init/instance

2019-05-10 Thread Chad Smith
This bug is believed to be fixed in cloud-init in version 19.1. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1795508

Title:
  cloud-init clean from within /var/lib/cloud-init/instance

Status in cloud-init:
  Fix Released

Bug description:
  Attempted to cloud-init clean from a directory clean will remove:

  /var/lib/cloud/instance# cloud-init clean --logs --reboot 
  Traceback (most recent call last):
File "/usr/bin/cloud-init", line 9, in 
  load_entry_point('cloud-init==18.3', 'console_scripts', 'cloud-init')()
File "/usr/lib/python3/dist-packages/cloudinit/cmd/main.py", line 904, in 
main
  get_uptime=True, func=functor, args=(name, args))
File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 2514, in 
log_time
  ret = func(*args, **kwargs)
File "/usr/lib/python3/dist-packages/cloudinit/cmd/clean.py", line 81, in 
handle_clean_args
  exit_code = remove_artifacts(args.remove_logs, args.remove_seed)
File "/usr/lib/python3/dist-packages/cloudinit/cmd/clean.py", line 75, in 
remove_artifacts
  return 1
File "/usr/lib/python3.6/contextlib.py", line 88, in __exit__
  next(self.gen)
File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 832, in chdir
  os.chdir(curr)
  FileNotFoundError: [Errno 2] No such file or directory: 
'/var/lib/cloud/instances/ce3aca12-4e37-4ef9-bc51-170db3d25881'
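
  A minimal sketch of a chdir helper that tolerates the starting directory being
  deleted while inside the context, which is exactly what happens when
  'cloud-init clean' removes the instance directory the command was run from:

    import contextlib
    import os

    @contextlib.contextmanager
    def chdir(path):
        curr = os.getcwd()
        os.chdir(path)
        try:
            yield
        finally:
            try:
                os.chdir(curr)
            except FileNotFoundError:
                # The directory we started in was removed (e.g. the instance
                # dir during 'clean'); fall back to one that always exists.
                os.chdir("/")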

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1795508/+subscriptions



[Yahoo-eng-team] [Bug 1799779] Re: LXD module installs the wrong ZFS package if it's missing

2019-05-10 Thread Chad Smith
This bug is believed to be fixed in cloud-init in version 19.1. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1799779

Title:
  LXD module installs the wrong ZFS package if it's missing

Status in cloud-init:
  Fix Released
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Fix Released
Status in cloud-init source package in Bionic:
  Fix Committed
Status in cloud-init source package in Cosmic:
  Fix Committed
Status in cloud-init source package in Disco:
  Fix Released

Bug description:
  When using the LXD module, cloud-init will attempt to install ZFS if it
  does not exist on the target system. However, instead of installing the
  `zfsutils-linux` package it attempts to install `zfs`, resulting in an
  error.

  This was captured from a MAAS deployed server however the bug is
  platform agnostic.

  ###
  ubuntu@node10ob68:~$ cloud-init --version
  /usr/bin/cloud-init 18.3-9-g2e62cb8a-0ubuntu1~18.04.2

  ### 
  less /var/log/cloud-init.log
  ...
  2018-10-24 19:23:54,255 - util.py[DEBUG]: apt-install [eatmydata apt-get 
--option=Dpkg::Options::=--force-confold 
--option=Dpkg::options::=--force-unsafe-io --assume-yes --quiet install zfs] 
took 0.302 seconds
  2018-10-24 19:23:54,255 - cc_lxd.py[WARNING]: failed to install packages 
['zfs']: Unexpected error while running command.
  Command: ['eatmydata', 'apt-get', '--option=Dpkg::Options::=--force-confold', 
'--option=Dpkg::options::=--force-unsafe-io', '--assume-yes', '--quiet', 
'install', 'zfs']
  Exit code: 100
  ...

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1799779/+subscriptions



[Yahoo-eng-team] [Bug 1794399] Re: cloud-init dhcp_discovery() crashes on preprovisioned RHEL 7.6 VM in Azure

2019-05-10 Thread Chad Smith
This bug is believed to be fixed in cloud-init in version 19.1. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1794399

Title:
  cloud-init dhcp_discovery() crashes on preprovisioned RHEL 7.6 VM in
  Azure

Status in cloud-init:
  Fix Released

Bug description:
  Azure, creating a RHEL 7.6 VM from a pool of preprovisioned VM

  In /usr/lib/python2.7/site-packages/cloudinit/net/dhcp.py,
  dhcp_discovery() starts dhclient specifically so it will capture the
  DHCP leases in dhcp.leases. The function copies the dhclient binary
  and starts it with options naming unique lease and pid files. The
  function then waits for both the lease and pid files to appear before
  using the contents of the pid file to kill the dhclient instance.

  There’s a behavior difference between the Ubuntu and RHEL versions of 
dhclient:
  • On Ubuntu, dhclient writes the DHCP lease response, forks/daemonizes, 
then writes the pid file with the daemonized process ID.
  • On RHEL, dhclient writes a pid file with the pre-daemon pid, writes the 
DHCP lease response, forks/daemonizes, then overwrites the pid file with the 
new (daemonized) pid.

  On RHEL, there’s a race between dhcp_discovery() and dhclient:
  1. dhclient writes the pid file and lease file
  2. dhclient forks; the parent process exits
  3. dhcp_discovery() sees that the pid file and lease file exist
  4. dhcp_discovery() tries to kill the process named in the pid file, but it
already exited in step 2
  5. dhclient child starts, daemonizes, and writes its pid in the pid file

  When cloud-init runs on a preprovisioned RHEL 7.6 VM in Azure, dhcp.py
  dhcp_discovery() throws an error when it tries to send SIGKILL to a
  process that does not exist.

  We have a patch that makes dhcp_discovery() wait until the pid in the
  pid file represents a daemon process (parent pid is 1) before killing
  the process. With this change, the issue is resolved.
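
  A rough sketch of that waiting logic, assuming the check is done against
  /proc (illustrative only, not the exact upstream patch):

  import time

  def wait_for_daemonized_pid(pid_path, timeout=5.0, interval=0.1):
      """Only return the pid once its parent is init (ppid == 1), i.e. once
      dhclient has daemonized and the pid file no longer names a dead parent."""
      deadline = time.time() + timeout
      while time.time() < deadline:
          try:
              with open(pid_path) as f:
                  pid = int(f.read().strip())
              with open('/proc/%d/stat' % pid) as f:
                  stat = f.read()
              # /proc/<pid>/stat is "pid (comm) state ppid ..."; split after
              # the closing paren so spaces in comm cannot shift the fields.
              ppid = int(stat.rsplit(')', 1)[1].split()[1])
              if ppid == 1:
                  return pid
          except (OSError, IOError, ValueError, IndexError):
              pass  # pid file not written yet, or the process already exited
          time.sleep(interval)
      return None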

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1794399/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1807466] Re: add support for ovf transport com.vmware.guestInfo

2019-05-10 Thread Chad Smith
This bug is believed to be fixed in cloud-init in version 19.1. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1807466

Title:
  add support for ovf transport com.vmware.guestInfo

Status in cloud-images:
  New
Status in cloud-init:
  Fix Released

Bug description:
  cloud-init OVF datasource currently supports the OVF "ISO" transport 
(attached cdrom).
  It should be updated to also support the com.vmware.guestInfo transport.

  In this transport the ovf environment file can be read with:
   vmtoolsd "--cmd=info-get guestinfo.ovfEnv"

  Things to note:
  a.) I recently modified ds-identify to invoke the vmtoolsd command above
  in order to check the presence of the transport.  It seemed to work
  fine, running even before open-vm-tools.service or vgauth.service was
  up.  See http://paste.ubuntu.com/p/Kb9RrjnMjN/ for those changes.
  I think this can be made acceptable if we do so only when on VMware (see
  the sketch after this list).

  b.) You can deploy a VM like this using OVFtool and the official Ubuntu OVA 
files. You simply need to modify the .ovf file inside the .ova to contain 
 
  Having both listed will "attach" both when deployed.

  c.) after doing this and getting the changes into released ubuntu
  we should change the official OVA on cloud-images.ubuntu.com to have
  the com.vmware.guestInfo listed as a supported transport.
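
  As a rough illustration of the probe in (a), reading the transport from
  Python rather than from ds-identify's shell (a sketch only, assuming
  open-vm-tools' vmtoolsd is available, as in the command quoted above):

  import subprocess

  def read_guestinfo_ovf_env():
      try:
          out = subprocess.check_output(
              ["vmtoolsd", "--cmd=info-get guestinfo.ovfEnv"],
              stderr=subprocess.STDOUT)
          return out.decode("utf-8", "replace").strip() or None
      except (OSError, subprocess.CalledProcessError):
          return None  # not on VMware, or the tools daemon is not running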

  
  Example ovftool command to deploy:
ovftool --datastore=SpindleDisks1 \
   --name=sm-tmpl-ref \
   modified-bionic-server-cloudimg-amd64.ovf \
   
"vi://administrator@vsphere.local:$PASSWORD@10.245.200.22/Datacenter1/host/Autopilot/"

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-images/+bug/1807466/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818669] Re: ipv6 static routes configured for eni are incorrect

2019-05-10 Thread Chad Smith
This bug is believed to be fixed in cloud-init in version 19.1. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1818669

Title:
  ipv6 static routes configured for eni are incorrect

Status in cloud-init:
  Fix Released

Bug description:
  static routes rendered for eni configuration are not correct

  example:

  config:
  - mac_address: aa:12:bc:34:ee:ac
    name: eno3
    subnets:
    - address: fd00::12/64
  dns_nameservers: ['fd00:2::15']
  gateway: fd00::1
  ipv6: true
  routes:
  - netmask: '32'
    network: 'fd00:12::'
    gateway: fd00::2
  type: static
    type: physical
  version: 1

  Cloud init renders:
  """
  auto lo
  iface lo inet loopback

  auto eno3
  iface eno3 inet6 static
  address fd00::12/64
  dns-nameservers fd00:2::15
  gateway fd00::1
  post-up route add -net fd00:12:: netmask 32 gw fd00::2 || true
  pre-down route del -net fd00:12:: netmask 32 gw fd00::2 || true
  """

  but the post-up/pre-down commands are incorrect (tested, even when
  replacing the 32 netmask by :::)

  One working version
  """
  post-up route add -A inet6 fd00:12::/32 gw fd00::2 || true
  pre-down route del -A inet6 fd00:12::/32 gw fd00::2 || true
  """

  Fix proposal available here
  
https://code.launchpad.net/~raphael-glon/cloud-init/+git/cloud-init/+merge/363970
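
  A minimal sketch of the corrected rendering (an illustrative helper, not
  the actual cloud-init renderer code): use "route -A inet6" with CIDR
  notation instead of a numeric netmask.

  def eni_ipv6_route_lines(network, prefix, gateway):
      target = "%s/%s" % (network, prefix)
      return [
          "post-up route add -A inet6 %s gw %s || true" % (target, gateway),
          "pre-down route del -A inet6 %s gw %s || true" % (target, gateway),
      ]

  # eni_ipv6_route_lines("fd00:12::", 32, "fd00::2") reproduces the two
  # working lines quoted above.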

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1818669/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1794982] Re: drop /etc/apt/apt.conf.d/90cloud-init-pipelining in 16.04+

2019-05-10 Thread Chad Smith
This bug is believed to be fixed in cloud-init in version 19.1. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1794982

Title:
  drop /etc/apt/apt.conf.d/90cloud-init-pipelining in 16.04+

Status in cloud-init:
  Fix Released

Bug description:
  /etc/apt/apt.conf.d/90cloud-init-pipelining disables pipelining which
  causes a significant performance reduction in apt downloads. This
  should not be necessary in 16.04, as apt can detect broken pipeline
  responses, fix it, and disable pipelining for the next connection (it
  can also match the response based on the hashes, rather than just
  complaining the hashes are wrong).

  This is causing a significant performance decrease, as a small sample,
  firefox in a fresh lxd container:

  without pipelining: Fetched 81.1 MB in 6s (13.2 MB/s)
  with pipelining: Fetched 81.1 MB in 2s (32.2 MB/s)

  (400 Mbit/s connection, 25-30ms RTT, xenial LXD container)
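
  For reference, a sketch of what dropping the override amounts to; the
  exact contents of the file are an assumption here, shown only for
  illustration:

  import os

  PIPELINING_CONF = "/etc/apt/apt.conf.d/90cloud-init-pipelining"
  # Assumed content of the override being dropped:
  PIPELINING_DISABLED = 'Acquire::http::Pipeline-Depth "0";\n'

  def drop_pipelining_override(path=PIPELINING_CONF):
      """Remove the override so apt falls back to its own pipelining default."""
      if os.path.exists(path):
          os.unlink(path)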

  Related bugs:
   * bug 948461: apt-get hashsum/size mismatch because s3 mirrors don't support 
http pipelining correctly

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1794982/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1825253] Re: Unit tests with filesystem-related mocks fail in SeLinuxGuard when run on RHEL or CentOS

2019-05-10 Thread Chad Smith
This bug is believed to be fixed in cloud-init in version 19.1. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1825253

Title:
  Unit tests with filesystem-related mocks fail in SeLinuxGuard when run
  on RHEL or CentOS

Status in cloud-init:
  Fix Released

Bug description:
  When the unit tests are run on RHEL or CentOS, some tests which mock
  filesystem directories so as to "lie" about things can cause
  util.SeLinuxGuard to fail. The SeLinuxGuard does nothing in
  environments which lack the selinux python module or when that module
  reports that selinux is not enabled. When the guard is functional,
  though, it can be confused by some mocks used in various tests.

  
tests.unittests.test_datasource.test_azure.TestCanDevBeReformatted.test_one_partition_ntfs_empty_with_dataloss_file_is_true

  
tests.unittests.test_datasource.test_azure.TestCanDevBeReformatted.test_one_partition_ntfs_populated_false

  
tests.unittests.test_datasource.test_azure.TestCanDevBeReformatted.test_one_partition_through_realpath_is_true

  
tests.unittests.test_datasource.test_azure.TestCanDevBeReformatted.test_two_partitions_ntfs_populated_false

  tests.unittests.test_net.TestNetplanPostcommands.test_netplan_postcmds

  In the first four cases, the tests mock os.path.realpath by remapping
  path prefixes to point to a temporary directory, but SeLinuxGuard
  doesn't see the mapping. In the last case, the test case mocks
  os.path.islink to lie and claim a directory is actually a symlink, but
  code invoked by SeLinuxGuard gets very confused when it tries to treat
  the (quite real) directory as if it were a symlink.
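
  A sketch of the kind of guard that sidesteps this (illustrative only, not
  the actual cloud-init patch): only hand a path to the selinux machinery
  when it really exists outside the test's mocks.

  import os

  def selinux_should_restore(path, selinux_enabled):
      # lexists() is false for the remapped/mocked paths described above,
      # so the guard becomes a no-op instead of tripping over them.
      return bool(selinux_enabled) and os.path.lexists(path)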

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1825253/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1817082] Re: [RFE] Please add encrypted_data_bag_secret to client.rb.tmpl in cc_chef

2019-05-10 Thread Chad Smith
This bug is believed to be fixed in cloud-init in version 19.1. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1817082

Title:
  [RFE] Please add encrypted_data_bag_secret to client.rb.tmpl in
  cc_chef

Status in cloud-init:
  Fix Released

Bug description:
  This is a request to add support for the client configuration option
  "encrypted_data_bag_secret" in `chef_client.rb.tmpl` and the `chef`
  configuration block.

  Use Case:

  Enable cloud-init to manage Chef deployments where encrypted data bags
  are in use. The path to the secrets can be configured with Cloud init,
  while the secrets files themselves can be supplied via an external
  facility (e.g., Barbican, Vault).

  Example:

  # cloud-init
  chef:
     install_type: "packages"
     server_url: https://api.opscode.com/organizations/myorg
     environment: dev
     validation_name: dev-validator
     validation_cert: dev-validator.pem
     run_list: role[db]
     encrypted_data_bag_secret: /etc/chef/encrypted_data_bag_secret

  =>

  # /etc/chef/client.rb
  log_level  :info
  log_location   "/var/log/chef/client.log"
  ssl_verify_mode :verify_none
  validation_client_name "dev-validator"
  validation_key "/etc/chef/validation.pem"
  client_key "/etc/chef/client.pem"
  chef_server_url "https://api.opscode.com/organizations/myorg"
  environment "dev"
  node_name  "5a2f89c3-da3a-4c83-85d8-cbc8fa63f429"
  json_attribs   "/etc/chef/firstboot.json"
  file_cache_path "/var/cache/chef"
  file_backup_path   "/var/backups/chef"
  pid_file   "/var/run/chef/client.pid"
  Chef::Log::Formatter.show_time = true
  encrypted_data_bag_secret "/etc/chef/encrypted_data_bag_secret"
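
  A minimal sketch of the requested rendering (an illustrative helper; the
  real change would go through cc_chef's template handling):

  def render_encrypted_data_bag_secret(chef_cfg):
      """Emit the client.rb directive only when the cloud-config 'chef'
      block sets encrypted_data_bag_secret."""
      secret_path = chef_cfg.get("encrypted_data_bag_secret")
      if not secret_path:
          return []
      return ['encrypted_data_bag_secret "%s"' % secret_path]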

  Thanks,
  Eric

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1817082/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1802188] Re: Documentation grammar issue

2019-05-10 Thread Chad Smith
This bug is believed to be fixed in cloud-init in version 19.1. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1802188

Title:
  Documentation grammar issue

Status in cloud-init:
  Fix Released

Bug description:
  This is just a small unimportant bug in documentation that's driving
  me up the wall.

  https://cloudinit.readthedocs.io/en/latest/topics/instancedata.html
  #instance-metadata

  "What is a instance data"

  Should be "What is an instance data," or more preferably "What is
  instance data." Words that begin with a vowel sound (A, E, I, O, U,
  and sometimes Y and W) are preceded by "an" when given a singular
  definitive.

  Sorry for the pedantry, I'm a technical writer and this just rubbed me
  the wrong way for some reason.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1802188/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818571] Re: cloud-init clean removes seed directory even when --seed is not specified

2019-05-10 Thread Chad Smith
This bug is believed to be fixed in cloud-init in version 19.1. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1818571

Title:
  cloud-init clean removes seed directory even when --seed is not
  specified

Status in cloud-init:
  Fix Released

Bug description:
  ```
  ./packages/bddeb
  lxc launch ubuntu-daily:d reproducer
  lxc file push cloud-init_all.deb reproducer/tmp/
  lxc exec reproducer -- find /var/lib/cloud/seed  # Produces output
  lxc exec reproducer -- cloud-init clean --logs
  lxc exec reproducer -- find /var/lib/cloud/seed  # Still produces output
  lxc exec reproducer -- dpkg -i /tmp/cloud-init_all.deb
  lxc exec reproducer -- cloud-init clean --logs
  lxc exec reproducer -- find /var/lib/cloud/seed  # RUH ROH
  ```
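
  A sketch of the intended behaviour (illustrative, not the upstream
  implementation): wipe /var/lib/cloud but keep seed/ unless --seed was
  explicitly requested.

  import os
  import shutil

  def remove_artifacts(cloud_dir="/var/lib/cloud", remove_seed=False):
      if not os.path.isdir(cloud_dir):
          return
      for name in os.listdir(cloud_dir):
          if name == "seed" and not remove_seed:
              continue  # this is the check the buggy package skips
          path = os.path.join(cloud_dir, name)
          if os.path.isdir(path):
              shutil.rmtree(path)
          else:
              os.unlink(path)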

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1818571/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1826570] Re: Error in "tc_lib._handle_from_hex_to_string" formatting

2019-05-10 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/655929
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=9bc45d70c6ba6e6ab02db30a843004cbb57ddc3f
Submitter: Zuul
Branch:master

commit 9bc45d70c6ba6e6ab02db30a843004cbb57ddc3f
Author: Rodolfo Alonso Hernandez 
Date:   Fri Apr 26 15:28:45 2019 +

Error in "tc_lib._handle_from_hex_to_string" formatting

"tc_lib._handle_from_hex_to_string" should print major and minor values
in hex format, not in decimal format:
  0x -> "M:m"
  0x123A456B -> "123A:456B"

Change-Id: I91eb5d9fc58e8233c48b6aabba772cd6ff65a156
Closes-Bug: #1826570


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1826570

Title:
  Error in "tc_lib._handle_from_hex_to_string" formatting

Status in neutron:
  Fix Released

Bug description:
  "tc_lib._handle_from_hex_to_string" should print major and minor values in 
hex format, not in decimal format:
0x -> "M:m"
0x123A456B -> "123A:456B"
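
  A sketch of the corrected formatting (illustrative, not the neutron code
  verbatim; whether the real helper upper- or lower-cases the digits is not
  shown here):

  def handle_from_hex_to_string(handle):
      major = (handle & 0xFFFF0000) >> 16
      minor = handle & 0x0000FFFF
      return "%x:%x" % (major, minor)

  # handle_from_hex_to_string(0x123A456B) -> "123a:456b"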

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1826570/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1828611] [NEW] disk_setup could deal with non-deterministic device naming on EC2 nitro instances

2019-05-10 Thread Nils Meyer
Public bug reported:

Cloud Provider: Amazon Web Services

As is documented in [1], instances on the nitro type hypervisor don't
attach the NVME disks in deterministic order, yielding a different order
of disks, example [2]. This makes it somewhat difficult to format and
partition volumes since you don't know the volume ids beforehand when
creating an instance (in an Autoscaling group for example).

My current thinking is that maybe a sort of special device name (much
like swap / ephemeralX) could be used to locate a device, for example
ebs:root for the root drive - which is easy to detect, ebs:size=12G[0]
for the first volume found with 12GiB size. With an appropriate instance
profile and boto3 more elaborate selectors are conceivable (for example
based on tags).

Further complicating things is that the metadata endpoint doesn't expose
the correct device names, opting instead for fantasy names (sda1 for the
root volume, sdX for other volumes).

My Workaround for a 2 volume instance: Try and format both devices, then mount 
the disk by label:
#cloud-config
fs_setup:
  - label: mylabel
device: /dev/nvme0n1
filesystem: xfs
partition: none
overwrite: false
  - label: mylabel
device: /dev/nvme1n1
filesystem: xfs
partition: none
overwrite: false

mounts:
- ["/dev/disk/by-label/mylabel", "/mnt/label", "xfs", "defaults"]

[1] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/nvme-ebs-volumes.html
[2] https://gist.github.com/nilsmeyer/eddcfa4b7fc5b04ebc0be9eaa3c7b7dd

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1828611

Title:
  disk_setup could deal with non-deterministic device naming on EC2
  nitro instances

Status in cloud-init:
  New

Bug description:
  Cloud Provider: Amazon Web Services

  As is documented in [1], instances on the nitro type hypervisor don't
  attach the NVME disks in deterministic order, yielding a different
  order of disks, example [2]. This makes it somewhat difficult to
  format and partition volumes since you don't know the volume ids
  beforehand when creating an instance (in an Autoscaling group for
  example).

  My current thinking is that maybe a sort of special device name (much
  like swap / ephemeralX) could be used to locate a device, for example
  ebs:root for the root drive - which is easy to detect, ebs:size=12G[0]
  for the first volume found with 12GiB size. With an appropriate
  instance profile and boto3 more elaborate selectors are conceivable
  (for example based on tags).

  Further complicating things is that the metadata endpoint doesn't
  expose the correct device names, opting instead for fantasy names
  (sda1 for the root volume, sdX for other volumes).

  My Workaround for a 2 volume instance: Try and format both devices, then 
mount the disk by label:
  #cloud-config
  fs_setup:
- label: mylabel
  device: /dev/nvme0n1
  filesystem: xfs
  partition: none
  overwrite: false
- label: mylabel
  device: /dev/nvme1n1
  filesystem: xfs
  partition: none
  overwrite: false

  mounts:
  - ["/dev/disk/by-label/mylabel", "/mnt/label", "xfs", "defaults"]

  [1] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/nvme-ebs-volumes.html
  [2] https://gist.github.com/nilsmeyer/eddcfa4b7fc5b04ebc0be9eaa3c7b7dd
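
  One possible selector along those lines, sketched here only for
  illustration (it assumes udev exposes the EBS volume id under
  /dev/disk/by-id, as stock Ubuntu images do):

  import glob
  import os

  def nvme_device_for_ebs_volume(volume_id):
      """Map an EBS volume id such as 'vol-0123abcd...' to whatever
      non-deterministic /dev/nvmeXn1 node it got this boot."""
      serial = volume_id.replace("-", "")  # the NVMe serial drops the dash
      pattern = "/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_%s*" % serial
      for link in sorted(glob.glob(pattern)):
          return os.path.realpath(link)
      return None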

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1828611/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1606741] Re: [SRU] Metadata service for instances is unavailable when the l3-agent on the compute host is dvr_snat mode

2019-05-10 Thread Corey Bryant
** Also affects: neutron (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Disco)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Eoan)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Cosmic)
   Importance: Undecided
   Status: New

** Changed in: neutron (Ubuntu Eoan)
   Importance: Undecided => High

** Changed in: neutron (Ubuntu Eoan)
   Status: New => Fix Released

** Changed in: neutron (Ubuntu Disco)
   Importance: Undecided => High

** Changed in: neutron (Ubuntu Disco)
   Status: New => Fix Released

** Changed in: neutron (Ubuntu Cosmic)
   Importance: Undecided => High

** Changed in: neutron (Ubuntu Cosmic)
   Status: New => Triaged

** Changed in: neutron (Ubuntu Bionic)
   Importance: Undecided => High

** Changed in: neutron (Ubuntu Bionic)
   Status: New => Triaged

** Changed in: cloud-archive/rocky
   Importance: Undecided => High

** Changed in: cloud-archive/rocky
   Status: New => Triaged

** Changed in: cloud-archive/queens
   Importance: Undecided => High

** Changed in: cloud-archive/queens
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1606741

Title:
  [SRU] Metadata service for instances is unavailable when the l3-agent
  on the compute host  is dvr_snat mode

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive queens series:
  Triaged
Status in Ubuntu Cloud Archive rocky series:
  Triaged
Status in Ubuntu Cloud Archive stein series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Bionic:
  Triaged
Status in neutron source package in Cosmic:
  Triaged
Status in neutron source package in Disco:
  Fix Released
Status in neutron source package in Eoan:
  Fix Released

Bug description:
  [Impact] 
  Currently if you deploy Openstack with dvr and l3ha enabled (and > 1 compute 
host) only instances that are booted on the compute host that is running the VR 
master will have access to metadata. This patch ensures that both master and 
slave VRs have an associated haproxy ns-metadata process running local to the 
compute host.

  [Test Case]
  * deploy Openstack with dvr and l3ha enabled with 2 compute hosts
  * create an ubuntu instance on each compute host
  * check that both are able to access the metadata api (i.e. cloud-init 
completes successfully)
  * verify that there is an ns-metadata haproxy process running on each compute 
host

  [Regression Potential] 
  None anticipated
   
  =

  In my mitaka environment, there are five nodes here, including
  controller, network1, network2, computer1, computer2 node. I start
  l3-agents with dvr_snat mode in all network and compute nodes and set
  enable_metadata_proxy to true in l3-agent.ini. It works well for most
  neutron services except the metadata proxy service. When I run the command
  "curl http://169.254.169.254" in an instance booting from cirros, it
  returns "curl: couldn't connect to host" and the instance can't fetch
  metadata in its first booting.

  * Pre-conditions: start l3-agent with dvr_snat mode in all computer
  and network nodes and set enable_metadata_proxy to true in
  l3-agent.ini.

  * Step-by-step reproduction steps:
  1.create a network and a subnet under this network;
  2.create a router;
  3.add the subnet to the router
  4.create an instance with cirros (or other images) on this subnet
  5.open the console for this instance and run command 'curl 
http://169.254.169.254' in bash, waiting for result.

  * Expected output: this command should return the true metadata info
  with the command  'curl http://169.254.169.254'

  * Actual output:  the command actually returns "curl: couldn't connect
  to host"

  * Version:
    ** Mitaka
    ** All hosts are centos7

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1606741/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1606741] Re: Metadata service for instances is unavailable when the l3-agent on the compute host is dvr_snat mode

2019-05-10 Thread Edward Hope-Morley
** Changed in: cloud-archive/stein
   Status: New => Fix Released

** Description changed:

+ [Impact] 
+ Currently if you deploy Openstack with dvr and l3ha enabled (and > 1 compute 
host) only instances that are booted on the compute host that is running the VR 
master will have access to metadata. This patch ensures that both master and 
slave VRs have an associated haproxy ns-metadata process running local to the 
compute host.
+ 
+ [Test Case]
+ * deploy Openstack with dvr and l3ha enabled with 2 compute hosts
+ * create an ubuntu instance on each compute host
+ * check that both are able to access the metadata api (i.e. cloud-init 
completes successfully)
+ * verify that there is an ns-metadata haproxy process running on each compute 
host
+ 
+ [Regression Potential] 
+ None anticipated
+  
+ =
+ 
  In my mitaka environment, there are five nodes here, including
  controller, network1, network2, computer1, computer2 node. I start
  l3-agents with dvr_snat mode in all network and compute nodes and set
  enable_metadata_proxy to true in l3-agent.ini. It works well for most
  neutron services except the metadata proxy service. When I run the command
  "curl http://169.254.169.254" in an instance booting from cirros, it
  returns "curl: couldn't connect to host" and the instance can't fetch
  metadata in its first booting.
  
  * Pre-conditions: start l3-agent with dvr_snat mode in all computer and
  network nodes and set enable_metadata_proxy to true in l3-agent.ini.
  
  * Step-by-step reproduction steps:
  1.create a network and a subnet under this network;
  2.create a router;
  3.add the subnet to the router
  4.create an instance with cirros (or other images) on this subnet
  5.open the console for this instance and run command 'curl 
http://169.254.169.254' in bash, waiting for result.
  
  * Expected output: this command should return the true metadata info
  with the command  'curl http://169.254.169.254'
  
  * Actual output:  the command actually returns "curl: couldn't connect
  to host"
  
  * Version:
    ** Mitaka
    ** All hosts are centos7

** Tags added: sts-sru-needed

** Summary changed:

- Metadata service for instances is unavailable when the l3-agent on the 
compute host  is dvr_snat mode
+ [SRU] Metadata service for instances is unavailable when the l3-agent on the 
compute host  is dvr_snat mode

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1606741

Title:
  [SRU] Metadata service for instances is unavailable when the l3-agent
  on the compute host  is dvr_snat mode

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive queens series:
  New
Status in Ubuntu Cloud Archive rocky series:
  New
Status in Ubuntu Cloud Archive stein series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Bionic:
  New
Status in neutron source package in Cosmic:
  New
Status in neutron source package in Disco:
  New
Status in neutron source package in Eoan:
  Fix Released

Bug description:
  [Impact] 
  Currently if you deploy Openstack with dvr and l3ha enabled (and > 1 compute 
host) only instances that are booted on the compute host that is running the VR 
master will have access to metadata. This patch ensures that both master and 
slave VRs have an associated haproxy ns-metadata process running local to the 
compute host.

  [Test Case]
  * deploy Openstack with dvr and l3ha enabled with 2 compute hosts
  * create an ubuntu instance on each compute host
  * check that both are able to access the metadata api (i.e. cloud-init 
completes successfully)
  * verify that there is an ns-metadata haproxy process running on each compute 
host

  [Regression Potential] 
  None anticipated
   
  =

  In my mitaka environment, there are five nodes here, including
  controller, network1, network2, computer1, computer2 node. I start
  l3-agents with dvr_snat mode in all network and compute nodes and set
  enable_metadata_proxy to true in l3-agent.ini. It works well for most
  neutron services except the metadata proxy service. When I run the command
  "curl http://169.254.169.254" in an instance booting from cirros, it
  returns "curl: couldn't connect to host" and the instance can't fetch
  metadata in its first booting.

  * Pre-conditions: start l3-agent with dvr_snat mode in all computer
  and network nodes and set enable_metadata_proxy to true in
  l3-agent.ini.

  * Step-by-step reproduction steps:
  1.create a network and a subnet under this network;
  2.create a router;
  3.add the subnet to the router
  4.create an instance with cirros (or other images) on this subnet
  5.open the console for 

[Yahoo-eng-team] [Bug 1828607] [NEW] [RFE] DVR Enhancements

2019-05-10 Thread Ryan Tidwell
Public bug reported:

This involves the following items:

- Support for distributed ingress and egress for IPv6
- Support for running without network nodes. This implies
  * Support for distributed DHCP
  * Support for distributed SNAT
- Ensuring an OpenFlow-based DVR implementation is written in a way that can be 
offloaded to a smart-NIC as hardware support comes online.

Due to the broad nature of these changes, I will propose a spec in
neutron-specs to elaborate on these items.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1828607

Title:
  [RFE] DVR Enhancements

Status in neutron:
  New

Bug description:
  This involves the following items:

  - Support for distributed ingress and egress for IPv6
  - Support for running without network nodes. This implies
* Support for distributed DHCP
* Support for distributed SNAT
  - Ensuring an OpenFlow-based DVR implementation is written in a way that can 
be offloaded to a smart-NIC as hardware support comes online.

  Due to the broad nature of these changes, I will propose a spec in
  neutron-specs to elaborate on these items.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1828607/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1828605] [NEW] [l3][scale issue] unrestricted hosting routers in network node increase service operating pressure

2019-05-10 Thread LIU Yulong
Public bug reported:

[l3][scale issue] unrestricted hosting routers in network node increase
service operating pressure


Related problem was reported here: 
https://bugs.launchpad.net/neutron/+bug/1828494
These issues have same background, unlimited router creation for entire cluster,
"""
Every tenant may create free routers for doing nothing. But neutron will create 
many resource for it, especially the HA scenario, there will be namespaces, 
keepalived processes, and monitor processes. It will absolutely increase the 
failure risk, especially for agent restart.
"""

So this bug aims to add a config option or mechanism which will give the
l3-agent conditions for deciding whether to create resources in
network nodes. For now, the condition is whether there are resources
under the router, i.e. if there is no compute (virtual machine or
baremetal) resource under the router, the l3-agent in the network node will
host nothing for this router.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1828605

Title:
  [l3][scale issue] unrestricted hosting routers in network node
  increase service operating pressure

Status in neutron:
  New

Bug description:
  [l3][scale issue] unrestricted hosting routers in network node
  increase service operating pressure

  
  Related problem was reported here: 
https://bugs.launchpad.net/neutron/+bug/1828494
  These issues have same background, unlimited router creation for entire 
cluster,
  """
  Every tenant may create free routers for doing nothing. But neutron will 
create many resource for it, especially the HA scenario, there will be 
namespaces, keepalived processes, and monitor processes. It will absolutely 
increase the failure risk, especially for agent restart.
  """

  So this bug aims to add a config option or mechanism which will give the
  l3-agent conditions for deciding whether to create resources in
  network nodes. For now, the condition is whether there are resources
  under the router, i.e. if there is no compute (virtual machine or
  baremetal) resource under the router, the l3-agent in the network node
  will host nothing for this router.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1828605/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1606741] Re: Metadata service for instances is unavailable when the l3-agent on the compute host is dvr_snat mode

2019-05-10 Thread Corey Bryant
** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1606741

Title:
  [SRU] Metadata service for instances is unavailable when the l3-agent
  on the compute host  is dvr_snat mode

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive queens series:
  New
Status in Ubuntu Cloud Archive rocky series:
  New
Status in Ubuntu Cloud Archive stein series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  New

Bug description:
  [Impact] 
  Currently if you deploy Openstack with dvr and l3ha enabled (and > 1 compute 
host) only instances that are booted on the compute host that is running the VR 
master will have access to metadata. This patch ensures that both master and 
slave VRs have an associated haproxy ns-metadata process running local to the 
compute host.

  [Test Case]
  * deploy Openstack with dvr and l3ha enabled with 2 compute hosts
  * create an ubuntu instance on each compute host
  * check that both are able to access the metadata api (i.e. cloud-init 
completes successfully)
  * verify that there is an ns-metadata haproxy process running on each compute 
host

  [Regression Potential] 
  None anticipated
   
  =

  In my mitaka environment, there are five nodes here, including
  controller, network1, network2, computer1, computer2 node. I start
  l3-agents with dvr_snat mode in all network and compute nodes and set
  enable_metadata_proxy to true in l3-agent.ini. It works well for most
  neutron services except the metadata proxy service. When I run the command
  "curl http://169.254.169.254" in an instance booting from cirros, it
  returns "curl: couldn't connect to host" and the instance can't fetch
  metadata in its first booting.

  * Pre-conditions: start l3-agent with dvr_snat mode in all computer
  and network nodes and set enable_metadata_proxy to true in
  l3-agent.ini.

  * Step-by-step reproduction steps:
  1.create a network and a subnet under this network;
  2.create a router;
  3.add the subnet to the router
  4.create an instance with cirros (or other images) on this subnet
  5.open the console for this instance and run command 'curl 
http://169.254.169.254' in bash, waiting for result.

  * Expected output: this command should return the true metadata info
  with the command  'curl http://169.254.169.254'

  * Actual output:  the command actually returns "curl: couldn't connect
  to host"

  * Version:
    ** Mitaka
    ** All hosts are centos7

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1606741/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1606741] Re: Metadata service for instances is unavailable when the l3-agent on the compute host is dvr_snat mode

2019-05-10 Thread Edward Hope-Morley
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/stein
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/queens
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/rocky
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1606741

Title:
  [SRU] Metadata service for instances is unavailable when the l3-agent
  on the compute host  is dvr_snat mode

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive queens series:
  New
Status in Ubuntu Cloud Archive rocky series:
  New
Status in Ubuntu Cloud Archive stein series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  New

Bug description:
  [Impact] 
  Currently if you deploy Openstack with dvr and l3ha enabled (and > 1 compute 
host) only instances that are booted on the compute host that is running the VR 
master will have access to metadata. This patch ensures that both master and 
slave VRs have an associated haproxy ns-metadata process running local to the 
compute host.

  [Test Case]
  * deploy Openstack with dvr and l3ha enabled with 2 compute hosts
  * create an ubuntu instance on each compute host
  * check that both are able to access the metadata api (i.e. cloud-init 
completes successfully)
  * verify that there is an ns-metadata haproxy process running on each compute 
host

  [Regression Potential] 
  None anticipated
   
  =

  In my mitaka environment, there are five nodes here, including
  controller, network1, network2, computer1, computer2 node. I start
  l3-agents with dvr_snat mode in all network and compute nodes and set
  enable_metadata_proxy to true in l3-agent.ini. It works well for most
  neutron services except the metadata proxy service. When I run the command
  "curl http://169.254.169.254" in an instance booting from cirros, it
  returns "curl: couldn't connect to host" and the instance can't fetch
  metadata in its first booting.

  * Pre-conditions: start l3-agent with dvr_snat mode in all computer
  and network nodes and set enable_metadata_proxy to true in
  l3-agent.ini.

  * Step-by-step reproduction steps:
  1.create a network and a subnet under this network;
  2.create a router;
  3.add the subnet to the router
  4.create an instance with cirros (or other images) on this subnet
  5.open the console for this instance and run command 'curl 
http://169.254.169.254' in bash, waiting for result.

  * Expected output: this command should return the true metadata info
  with the command  'curl http://169.254.169.254'

  * Actual output:  the command actually returns "curl: couldn't connect
  to host"

  * Version:
    ** Mitaka
    ** All hosts are centos7

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1606741/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1806053] Re: APITestCase still needs to be patched utils.patch_middleware_get_user()

2019-05-10 Thread Launchpad Bug Tracker
This bug was fixed in the package neutron-fwaas-dashboard -
2.0.1-0ubuntu2

---
neutron-fwaas-dashboard (2.0.1-0ubuntu2) eoan; urgency=medium

  * d/control: Add Breaks/Replaces to python3-neutron-fwaas-dashboard to
ensure python-neutron-fwaas-dashboard is removed on upgrade (LP: #1828293).
  * d/control: (Build-)Depends on latest python3-django-horizon and
openstack-dashboard to ensure unit tests are successful (LP: #1806053).

 -- Corey Bryant   Thu, 09 May 2019 22:05:46
-0400

** Changed in: neutron-fwaas-dashboard (Ubuntu Eoan)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1806053

Title:
  APITestCase still needs to be patched
  utils.patch_middleware_get_user()

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive queens series:
  Invalid
Status in Ubuntu Cloud Archive rocky series:
  Triaged
Status in Ubuntu Cloud Archive stein series:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in horizon package in Ubuntu:
  Fix Released
Status in neutron-fwaas-dashboard package in Ubuntu:
  Fix Released
Status in horizon source package in Bionic:
  Invalid
Status in neutron-fwaas-dashboard source package in Bionic:
  Invalid
Status in horizon source package in Cosmic:
  Triaged
Status in neutron-fwaas-dashboard source package in Cosmic:
  Triaged
Status in horizon source package in Disco:
  Fix Released
Status in neutron-fwaas-dashboard source package in Disco:
  Triaged
Status in horizon source package in Eoan:
  Fix Released
Status in neutron-fwaas-dashboard source package in Eoan:
  Fix Released

Bug description:
  After merging commit 0d163613265e036818fe567793a4fc88fe140d4a, we see
  some UT breakage in horizon plugins.

  bgpvpn-dashboard https://bugs.launchpad.net/bgpvpn/+bug/1805240
  neutron-fwaas-dashboard https://review.openstack.org/621155
  neutron-vpnaas-dashboard https://review.openstack.org/621152

  Previously APITestCase called patch_middleware_get_user explicitly,
  but the commit above dropped it. This seems to trigger the above UT
  failures.

  
  -

  
  SRU Details for Ubuntu
  --

  [Impact]
  See above.

  Also, building a horizon dashboard package, such as neutron-fwaas-
  dashboard, will result in several unit test failures due to
  "AttributeError: 'AnonymousUser' object has no attribute 'token'".

  [Test Case]
  Build the horizon and neutron-fwaas-dashboard packages.

  [Regression Potential]
  Very minimal as the changes update test code only.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1806053/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1828565] [NEW] OSC cannot list endpoint groups by name

2019-05-10 Thread Jose Castro Leon
Public bug reported:

While using the openstack CLI and searching by name, as with any other
resource, the client sends the name of the resource to search for as a
query parameter.

This value is ignored by keystone, resulting in the whole list of
endpoint groups being returned.

On the openstack cli side, as it returns more than one entry, it throws
an error like:


$ openstack endpoint group show eg_name
More than one endpointgroup exists with the name 'eg_name'.


This is due to a missing filter in the API for listing endpoint_groups by name.
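
A sketch of the missing server-side behaviour (an illustrative helper, not
keystone's actual code): honour a ?name= query filter instead of returning
the whole collection.

  def filter_endpoint_groups_by_name(endpoint_groups, name=None):
      if name is None:
          return endpoint_groups
      return [eg for eg in endpoint_groups if eg.get("name") == name]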

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1828565

Title:
  OSC cannot list endpoint groups by name

Status in OpenStack Identity (keystone):
  New

Bug description:
  While using the openstack CLI and searching by name, as with any other
  resource, the client sends the name of the resource to search for as a
  query parameter.

  This value is ignored by keystone, resulting in the whole list of
  endpoint groups being returned.

  On the openstack cli side, as it returns more than one entry, it
  throws an error like:

  
  $ openstack endpoint group show eg_name
  More than one endpointgroup exists with the name 'eg_name'.

  
  This is due to a missing filter in the API for listing endpoint_groups by 
name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1828565/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1828547] [NEW] neutron-dynamic-routing TypeError: argument of type 'NoneType' is not iterable

2019-05-10 Thread Tobias Urdin
Public bug reported:

Rocky with Ryu; I don't have a reproducer for this one and don't know what
caused it in the first place.

python-neutron-13.0.3-1.el7.noarch
openstack-neutron-openvswitch-13.0.3-1.el7.noarch
python2-neutron-dynamic-routing-13.0.1-1.el7.noarch
openstack-neutron-bgp-dragent-13.0.1-1.el7.noarch
openstack-neutron-common-13.0.3-1.el7.noarch
openstack-neutron-ml2-13.0.3-1.el7.noarch
python2-neutronclient-6.9.0-1.el7.noarch
openstack-neutron-13.0.3-1.el7.noarch
openstack-neutron-dynamic-routing-common-13.0.1-1.el7.noarch
python2-neutron-lib-1.18.0-1.el7.noarch


python-ryu-common-4.26-1.el7.noarch
python2-ryu-4.26-1.el7.noarch


2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 163, in 
_process_incoming
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 265, 
in dispatch
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 194, 
in _do_dispatch
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 159, in wrapper
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server result = 
f(*args, **kwargs)
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in 
inner
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server return 
f(*args, **kwargs)
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/neutron_dynamic_routing/services/bgp/agent/bgp_dragent.py",
 line 185, in bgp_speaker_create_end
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server 
self.add_bgp_speaker_helper(bgp_speaker_id)
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 159, in wrapper
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server result = 
f(*args, **kwargs)
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/neutron_dynamic_routing/services/bgp/agent/bgp_dragent.py",
 line 249, in add_bgp_speaker_helper
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server 
self.add_bgp_speaker_on_dragent(bgp_speaker)
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 159, in wrapper
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server result = 
f(*args, **kwargs)
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/neutron_dynamic_routing/services/bgp/agent/bgp_dragent.py",
 line 359, in add_bgp_speaker_on_dragent
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server 
self.add_bgp_peers_to_bgp_speaker(bgp_speaker)
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 159, in wrapper
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server result = 
f(*args, **kwargs)
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/neutron_dynamic_routing/services/bgp/agent/bgp_dragent.py",
 line 390, in add_bgp_peers_to_bgp_speaker
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server bgp_peer)
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 159, in wrapper
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server result = 
f(*args, **kwargs)
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/neutron_dynamic_routing/services/bgp/agent/bgp_dragent.py",
 line 399, in add_bgp_peer_to_bgp_speaker
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server 
self.cache.put_bgp_peer(bgp_speaker_id, bgp_peer)
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/neutron_dynamic_routing/services/bgp/agent/bgp_dragent.py",
 line 604, in put_bgp_peer
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server if 
bgp_peer['peer_ip'] in self.get_bgp_peer_ips(bgp_speaker_id):
2019-05-09 16:52:41.970 1659 ERROR oslo_messaging.rpc.server TypeError: 
argument of type 'NoneType' is not 

[Yahoo-eng-team] [Bug 1828543] [NEW] Routed provider networks: placement API handling errors

2019-05-10 Thread Lajos Katona
Public bug reported:

Routed provider networks is a feature which uses placement to store information 
about segments and the subnets in segments, and makes it possible for nova to use 
this information in scheduling.
On master the placement API calls are failing, first at the get_inventory call:

May 09 14:15:26 multicont neutron-server[31232]: DEBUG 
oslo_concurrency.lockutils [-] Lock 
"notifier-a76cce90-7366-495e-9784-9ddef689bc71" released by 
"neutron.notifiers.batch_notifier.BatchNotifier.queue_event..synced_send"
 :: held 0.112s {{(pid=31252) inner 
/usr/local/lib/python3.6/dist-packages/oslo_concurrency/lockutils.py:339}}
May 09 14:15:26 multicont neutron-server[31232]: Traceback (most recent call 
last):
May 09 14:15:26 multicont neutron-server[31232]:   File 
"/opt/stack/neutron-lib/neutron_lib/placement/client.py", line 433, in 
get_inventory
May 09 14:15:26 multicont neutron-server[31232]: return 
self._get(url).json()
May 09 14:15:26 multicont neutron-server[31232]:   File 
"/opt/stack/neutron-lib/neutron_lib/placement/client.py", line 178, in _get
May 09 14:15:26 multicont neutron-server[31232]: **kwargs)
May 09 14:15:26 multicont neutron-server[31232]:   File 
"/usr/local/lib/python3.6/dist-packages/keystoneauth1/session.py", line 1037, 
in get
May 09 14:15:26 multicont neutron-server[31232]: return self.request(url, 
'GET', **kwargs)
May 09 14:15:26 multicont neutron-server[31232]:   File 
"/usr/local/lib/python3.6/dist-packages/keystoneauth1/session.py", line 890, in 
request
May 09 14:15:26 multicont neutron-server[31232]: raise 
exceptions.from_response(resp, method, url)
May 09 14:15:26 multicont neutron-server[31232]: 
keystoneauth1.exceptions.http.NotFound: Not Found (HTTP 404) (Request-ID: 
req-4133f4c6-df6c-467f-9d15-e8532fc6504b)
May 09 14:15:26 multicont neutron-server[31232]: During handling of the above 
exception, another exception occurred:
...
May 09 14:15:26 multicont neutron-server[31232]:   File 
"/opt/stack/neutron/neutron/services/segments/plugin.py", line 229, in 
_update_nova_inventory
May 09 14:15:26 multicont neutron-server[31232]: IPV4_RESOURCE_CLASS)
May 09 14:15:26 multicont neutron-server[31232]:   File 
"/opt/stack/neutron-lib/neutron_lib/placement/client.py", line 53, in wrapper
May 09 14:15:26 multicont neutron-server[31232]: return f(self, *a, **k)
May 09 14:15:26 multicont neutron-server[31232]:   File 
"/opt/stack/neutron-lib/neutron_lib/placement/client.py", line 444, in 
get_inventory
May 09 14:15:26 multicont neutron-server[31232]: if "No resource provider 
with uuid" in e.details:
May 09 14:15:26 multicont neutron-server[31232]: TypeError: argument of type 
'NoneType' is not iterable

Using stable/pike (not just for neutron) the syncing is OK.
I suppose that when the placement client code was moved to neutron-lib and changed 
to work with placement 1.20, something happened that makes the routed networks 
placement calls fail.

Some details:
Used reproduction steps: 
https://docs.openstack.org/neutron/latest/admin/config-routed-networks.html (of 
course the pike one for stable/pike deployment)
neutron: d0e64c61835801ad8fdc707fc123cfd2a65ffdd9
neutron-lib: bcd898220ff53b3fed46cef8c460269dd6af3492
placement: 57026255615679122e6f305dfa3520c012f57ca7
nova: 56fef7c0e74d7512f062c4046def10401df16565
Ubuntu 18.04.2 LTS based multihost devstack
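
A defensive variant of the failing check in the traceback above (a sketch,
not the upstream fix): the NotFound raised by keystoneauth may carry
details=None, so guard before substring-matching it.

  def is_missing_resource_provider(exc):
      details = getattr(exc, "details", None) or ""
      return "No resource provider with uuid" in details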

** Affects: neutron
 Importance: Medium
 Assignee: Lajos Katona (lajos-katona)
 Status: New


** Tags: placement segments

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1828543

Title:
  Routed provider networks: placement API handling errors

Status in neutron:
  New

Bug description:
  Routed provider networks is a feature which uses placement to store 
information about segments and the subnets in segments, and makes it possible for 
nova to use this information in scheduling.
  On master the placement API calls are failing, first at the get_inventory call:

  May 09 14:15:26 multicont neutron-server[31232]: DEBUG 
oslo_concurrency.lockutils [-] Lock 
"notifier-a76cce90-7366-495e-9784-9ddef689bc71" released by 
"neutron.notifiers.batch_notifier.BatchNotifier.queue_event..synced_send"
 :: held 0.112s {{(pid=31252) inner 
/usr/local/lib/python3.6/dist-packages/oslo_concurrency/lockutils.py:339}}
  May 09 14:15:26 multicont neutron-server[31232]: Traceback (most recent call 
last):
  May 09 14:15:26 multicont neutron-server[31232]:   File 
"/opt/stack/neutron-lib/neutron_lib/placement/client.py", line 433, in 
get_inventory
  May 09 14:15:26 multicont neutron-server[31232]: return 
self._get(url).json()
  May 09 14:15:26 multicont neutron-server[31232]:   File 
"/opt/stack/neutron-lib/neutron_lib/placement/client.py", line 178, in _get
  May 09 14:15:26 multicont neutron-server[31232]: **kwargs)
  May 09 14:15:26 multicont neutron-server[31232]:   File 

[Yahoo-eng-team] [Bug 1803189] Re: Error during login 'bool' object is not callable

2019-05-10 Thread Akihiro Motoki
Launchpad does not expire a bug if the bug is associated with multiple
projects, so (as horizon upstream) I am removing 'horizon' from the
affected projects. If this can be reproduced, please re-add 'horizon'.

** No longer affects: horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1803189

Title:
  Error during login 'bool' object is not callable

Status in horizon package in Ubuntu:
  Incomplete

Bug description:
  I have installed OpenStack Horizon (Rocky) using the Ubuntu apt package
  manager. But as soon as I give my login credentials, I hit
  this error.

  Environment:

  
  Request Method: GET
  Request URL: http://10.10.20.10:10443/auth/login/?next=/

  Django Version: 1.11.15
  Python Version: 2.7.15
  Installed Applications:
  ['openstack_dashboard.dashboards.project',
   'heat_dashboard',
   'openstack_dashboard.dashboards.admin',
   'openstack_dashboard.dashboards.identity',
   'openstack_dashboard.dashboards.settings',
   'openstack_dashboard',
   'django.contrib.contenttypes',
   'django.contrib.auth',
   'django.contrib.sessions',
   'django.contrib.messages',
   'django.contrib.staticfiles',
   'django.contrib.humanize',
   'django_pyscss',
   'openstack_dashboard.django_pyscss_fix',
   'compressor',
   'horizon',
   'openstack_auth']
  Installed Middleware:
  ('django.middleware.common.CommonMiddleware',
   'django.middleware.csrf.CsrfViewMiddleware',
   'django.contrib.sessions.middleware.SessionMiddleware',
   'django.contrib.auth.middleware.AuthenticationMiddleware',
   'horizon.middleware.OperationLogMiddleware',
   'django.contrib.messages.middleware.MessageMiddleware',
   'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
   'horizon.middleware.HorizonMiddleware',
   'horizon.themes.ThemeMiddleware',
   'django.middleware.locale.LocaleMiddleware',
   'django.middleware.clickjacking.XFrameOptionsMiddleware',
   
'openstack_dashboard.contrib.developer.profiler.middleware.ProfilerClientMiddleware',
   
'openstack_dashboard.contrib.developer.profiler.middleware.ProfilerMiddleware')


  Traceback:

  File 
"/usr/local/lib/python2.7/dist-packages/django/core/handlers/exception.py" in 
inner
41. response = get_response(request)

  File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py" in 
_legacy_get_response
244. response = middleware_method(request)

  File "/usr/share/openstack-dashboard/horizon/middleware/base.py" in 
process_request
52. if not hasattr(request, "user") or not 
request.user.is_authenticated():

  Exception Type: TypeError at /auth/login/
  Exception Value: 'bool' object is not callable


  Any idea about the error?
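
  A plausible cause (not confirmed in this report): newer Django versions
  expose request.user.is_authenticated as a property rather than a method, so
  middleware written for the old callable form breaks once it receives a
  plain bool. A hedged sketch of a check that tolerates both forms (the
  helper name is hypothetical):

  def user_is_authenticated(request):
      if not hasattr(request, "user"):
          return False
      is_auth = request.user.is_authenticated
      # Older Django returns a callable; newer Django returns a plain bool.
      return is_auth() if callable(is_auth) else bool(is_auth)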

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/horizon/+bug/1803189/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1828535] [NEW] [Perf] Archival job destroying records from api_database in bulk causes orphaning of records

2019-05-10 Thread Surya Seetharaman
Public bug reported:

When the nova-manage db archive_deleted_rows cron jobs from several
cells run in parallel (which can happen even if they are spread out over
the day when there are many cells), they all try to destroy the
instance_mappings/instance_group_members/request_specs records in the
nova_api database in bulk. Each cell then holds a lock on the
api_database during which another cell cannot reap its records from the
nova_api database. We have a patch making this command multi-cell aware
(https://review.opendev.org/#/c/507486/); however, we are wondering
whether destroying the records one after the other in a loop is better
than destroying them in bulk.
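
As a hedged sketch of the trade-off being raised (not nova's actual
implementation; the engine setup, table and column names are illustrative),
deleting in small batches keeps each transaction, and therefore each lock,
short, so concurrent per-cell archive jobs can interleave:

import sqlalchemy as sa

def reap_in_batches(engine, instance_uuids, batch_size=50):
    # Reflect the API-database table instead of hard-coding its schema.
    meta = sa.MetaData()
    mappings = sa.Table("instance_mappings", meta, autoload_with=engine)
    for i in range(0, len(instance_uuids), batch_size):
        chunk = instance_uuids[i:i + batch_size]
        # Each small transaction commits quickly, releasing locks between
        # batches instead of blocking other cells' archive jobs.
        with engine.begin() as conn:
            conn.execute(
                sa.delete(mappings).where(
                    mappings.c.instance_uuid.in_(chunk)))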

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: cells nova-manage performance

** Tags added: cells nova-manage performance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1828535

Title:
  [Perf] Archival job destroying records from api_database in bulk
  causes orphaning of records

Status in OpenStack Compute (nova):
  New

Bug description:
  When the nova-manage db archive_deleted_rows cron jobs from several
  cells run in parallel (which can happen even if they are spread out over
  the day when there are many cells), they all try to destroy the
  instance_mappings/instance_group_members/request_specs records in the
  nova_api database in bulk. Each cell then holds a lock on the
  api_database during which another cell cannot reap its records from the
  nova_api database. We have a patch making this command multi-cell aware
  (https://review.opendev.org/#/c/507486/); however, we are wondering
  whether destroying the records one after the other in a loop is better
  than destroying them in bulk.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1828535/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1828512] [NEW] The allocation table has residual records when instance is evacuated and the source node is removed

2019-05-10 Thread Sun Mengyun
Public bug reported:

Description
===
When a compute service is down, we may choose to evacuate the instances of
the corresponding node. If the evacuation succeeds, there will be two records
in the allocations table. Normally we try to restore the nova-compute service,
and one of the records is deleted when nova-compute restarts. However, if the
node fails unrecoverably and becomes unavailable, the compute service will
never restart, and the stale record remains in the table.

Furthermore, if we delete the compute service of the failed node, the
corresponding record in the resource_providers table is not deleted
either, because the remaining record in the allocations table still
references it.

Ultimately, we will have at least two residual records.

Steps to reproduce
==

1. Bring the compute service down
2. Evacuate the instance on the node with the command:
nova evacuate instance_uuid
3. Delete the service:
nova service-delete service_id

Expected result
===
The corresponding records in the allocations and resource_providers tables are deleted

Actual result
=
You will find two records in the allocations table, one of which is invalid,
and likewise an invalid record in the resource_providers table
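
A hedged sketch of how such leftovers could be cleaned up by hand through the
placement REST API (the endpoint, token, microversion and function name are
assumptions for illustration, not an official procedure): drop only the stale
allocation held against the dead source node, then delete its now-unreferenced
resource provider.

import requests

PLACEMENT = "http://controller:8778/placement"    # assumed endpoint
HEADERS = {"X-Auth-Token": "ADMIN_TOKEN",         # assumed admin token
           "OpenStack-API-Version": "placement 1.28"}

def drop_stale_allocation(instance_uuid, dead_provider_uuid):
    # Read the instance's current allocations (destination + stale source).
    url = "%s/allocations/%s" % (PLACEMENT, instance_uuid)
    current = requests.get(url, headers=HEADERS).json()
    # Keep every allocation except the one against the removed node.
    kept = {rp: {"resources": alloc["resources"]}
            for rp, alloc in current["allocations"].items()
            if rp != dead_provider_uuid}
    payload = {"allocations": kept,
               "consumer_generation": current["consumer_generation"],
               "project_id": current["project_id"],
               "user_id": current["user_id"]}
    requests.put(url, json=payload, headers=HEADERS).raise_for_status()
    # With nothing referencing it, the orphaned provider can be deleted.
    requests.delete("%s/resource_providers/%s" % (PLACEMENT, dead_provider_uuid),
                    headers=HEADERS).raise_for_status()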

Environment
===
[root@nail1 ~]# rpm -qa | grep nova
openstack-nova-api-18.0.2-1.el7.noarch
openstack-nova-common-18.0.2-1.el7.noarch
python2-novaclient-11.0.0-1.el7.noarch
openstack-nova-placement-api-18.0.2-1.el7.noarch
openstack-nova-scheduler-18.0.2-1.el7.noarch
openstack-nova-conductor-18.0.2-1.el7.noarch
openstack-nova-novncproxy-18.0.2-1.el7.noarch
python-nova-18.0.2-1.el7.noarch
openstack-nova-compute-18.0.2-1.el7.noarch
openstack-nova-console-18.0.2-1.el7.noarch

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1828512

Title:
  The allocation table has residual records when instance is evacuated
  and the source node is removed

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  When a compute service is down, we may choose to evacuate the instances of
the corresponding node. If the evacuation succeeds, there will be two records
in the allocations table. Normally we try to restore the nova-compute service,
and one of the records is deleted when nova-compute restarts. However, if the
node fails unrecoverably and becomes unavailable, the compute service will
never restart, and the stale record remains in the table.

  Furthermore, if we delete the compute service of the failed node, the
  corresponding record in the resource_providers table is not deleted
  either, because the remaining record in the allocations table still
  references it.

  Ultimately, we will have at least two residual records.

  Steps to reproduce
  ==

  1. Bring the compute service down
  2. Evacuate the instance on the node with the command:
  nova evacuate instance_uuid
  3. Delete the service:
  nova service-delete service_id

  Expected result
  ===
  The corresponding records in the allocations and resource_providers tables are deleted

  Actual result
  =
  You will find two records in the allocations table, one of which is invalid,
and likewise an invalid record in the resource_providers table

  Environment
  ===
  [root@nail1 ~]# rpm -qa | grep nova
  openstack-nova-api-18.0.2-1.el7.noarch
  openstack-nova-common-18.0.2-1.el7.noarch
  python2-novaclient-11.0.0-1.el7.noarch
  openstack-nova-placement-api-18.0.2-1.el7.noarch
  openstack-nova-scheduler-18.0.2-1.el7.noarch
  openstack-nova-conductor-18.0.2-1.el7.noarch
  openstack-nova-novncproxy-18.0.2-1.el7.noarch
  python-nova-18.0.2-1.el7.noarch
  openstack-nova-compute-18.0.2-1.el7.noarch
  openstack-nova-console-18.0.2-1.el7.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1828512/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1828364] Re: DVR: Fip namespaces are created in all the controllers and computes with vm only when an external interface to the router is added

2019-05-10 Thread Slawek Kaplonski
OK, thanks Brian and Liu for the comments on this. Based on your comments it
looks like this isn't a bug after all.
I'm therefore closing this bug, as there is nothing to do here :)

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1828364

Title:
  DVR: Fip namespaces are created in all the controllers and computes
  with vm only when an external interface to the router is added

Status in neutron:
  Invalid

Bug description:
  Even if no floating IP has been created in the external network, a "fip"
namespace is created on all compute and network nodes for a DVR router with an
external gateway set.
  When a user does not want to use floating IPs but only SNAT, this wastes
potentially many external IPs on "network:floatingip_agent_gateway" devices.

  Also, the fip namespace should probably be created only on compute nodes,
  as it is probably not necessary on network nodes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1828364/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp