[Yahoo-eng-team] [Bug 1736669] [NEW] designate driver should autodetect API version
Public bug reported:

Currently the designate driver assumes that the URL it is given points to the v2 API endpoint, i.e. that it includes the "/v2" suffix. When this suffix is omitted, the driver makes broken calls to designate. For more stable operation and simplified configuration, the driver should be able to handle being given the unversioned endpoint and find its way by doing version discovery.

** Affects: neutron
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1736669

Title: designate driver should autodetect API version
Status in neutron: New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1736669/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
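Version discovery along the lines the report asks for could look like this minimal sketch. The function name and the response shape are assumptions, modeled on the version document OpenStack services typically return from a GET on their unversioned endpoint; this is not designate-driver code.

```python
def pick_v2_endpoint(base_url, versions_doc):
    """Select the v2 endpoint from a version discovery document.

    `versions_doc` is the parsed JSON body returned by a GET on the
    unversioned endpoint, e.g.:
      {"versions": [{"id": "v2",
                     "links": [{"rel": "self",
                                "href": "http://dns.example/v2"}]}]}
    """
    # If the operator already configured a versioned URL, keep it.
    if base_url.rstrip("/").endswith("/v2"):
        return base_url.rstrip("/")
    # Otherwise walk the advertised versions and take the v2 self link.
    for version in versions_doc.get("versions", []):
        if version.get("id", "").startswith("v2"):
            for link in version.get("links", []):
                if link.get("rel") == "self":
                    return link["href"].rstrip("/")
    raise ValueError("no v2 endpoint advertised at %s" % base_url)
```

With this, the driver would accept either the versioned or the unversioned endpoint in its configuration.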
[Yahoo-eng-team] [Bug 1631193] Re: "Volume Transfer" rendered as page instead of modal
Reviewed: https://review.openstack.org/383883
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=8aca02a7328b7cde320de719eea668f88e7aca35
Submitter: Zuul
Branch: master

commit 8aca02a7328b7cde320de719eea668f88e7aca35
Author: Daniel Park
Date: Fri Oct 7 12:10:38 2016 -0700

    Render 'Volume Transfer' as modal instead of page

    Change-Id: Iaa26bacb6ef369145345615821cbe4c3dc24f83f
    Closes-bug: #1631193

** Changed in: horizon
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1631193

Title: "Volume Transfer" rendered as page instead of modal
Status in OpenStack Dashboard (Horizon): Fix Released

Bug description: After creating a volume transfer via modal, the user is redirected to a page to be shown information about the volume transfer.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1631193/+subscriptions
[Yahoo-eng-team] [Bug 1736101] Re: nova placement resource_providers DBDuplicateEntry when name repeat
In master and stable/pike, a 409 error is returned in that case:

HTTP/1.1 409 Conflict
{"errors": [{"status": 409, "request_id": "req-f34c6774-0d39-4deb-8d51-cda4819387f6", "detail": "There was a conflict when trying to complete your request.\n\n Conflicting resource provider test already exists. ", "title": "Conflict"}]}

And no uncaught exception is logged in the log file:

12月 06 15:26:19 devstack-master devstack@placement-api.service[16874]: INFO nova.api.openstack.placement.requestlog [None req-f34c6774-0d39-4deb-8d51-cda4819387f6 admin admin] 10.0.2.15 "POST /placement/resource_providers" status: 409 len: 237 microversion: 1.10
12月 06 15:26:19 devstack-master devstack@placement-api.service[16874]: [pid: 16876|app: 0|req: 2/4] 10.0.2.15 () {58 vars in 1006 bytes} [Wed Dec 6 15:26:19 2017] POST /placement/resource_providers => generated 237 bytes in 10 msecs (HTTP/1.1 409) 6 headers in 231 bytes (1 switches on core 0)

This is the right behavior.

Environment
---
master: commit 9f46043f2f2463695385a6a14634664be4833e8e
stable/pike: commit 8f7f4b3ba6bb17e39fd3f2d22ed2457311988692

** Changed in: nova
   Status: In Progress => Invalid

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1736101

Title: nova placement resource_providers DBDuplicateEntry when name repeat
Status in OpenStack Compute (nova): Invalid

Bug description:
OpenStack Version: Pike

I have two compute nodes with the same name, but only one record can be successfully created in the resource_providers table. When resource_providers.name repeats, the record cannot be inserted and we get this error message:

Uncaught exception: DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, u"Duplicate entry 'cvk17(CVM172.25.19.80)' for key 'uniq_resource_providers0name'") [SQL: u'INSERT INTO resource_providers (created_at, updated_at, uuid, name, generation) VALUES...
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1736101/+subscriptions
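As a toy illustration of the fixed behavior (class and method names are hypothetical, not nova's actual code): the duplicate name is detected against the same uniqueness constraint and surfaced as a clean conflict, which the API layer can map to HTTP 409, instead of an uncaught DBDuplicateEntry:

```python
class ConflictError(Exception):
    """Stand-in for the 409 Conflict the placement API returns."""


class ResourceProviderStore:
    """Toy store with the same name-uniqueness rule as the
    resource_providers table (uniq_resource_providers0name)."""

    def __init__(self):
        self._by_name = {}

    def create(self, name, uuid):
        # Detect the duplicate and raise a typed conflict, rather than
        # letting a database IntegrityError escape uncaught.
        if name in self._by_name:
            raise ConflictError(
                "Conflicting resource provider %s already exists." % name)
        self._by_name[name] = uuid
        return {"name": name, "uuid": uuid, "generation": 0}
```

A caller that hits ConflictError would translate it into the 409 response body shown in the comment above.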
[Yahoo-eng-team] [Bug 1736367] Re: Create Instance failed (Internal error)
** Changed in: nova
   Assignee: (unassigned) => vinlin126com (vinlin126com)

** Information type changed from Public to Public Security

** Changed in: nova
   Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1736367

Title: Create Instance failed (Internal error)
Status in OpenStack Compute (nova): Invalid

Bug description:
Hi, I have installed OpenStack on my CentOS 7 host, which runs in a VMware Workstation machine. glance images-list is fine, but nova image-list fails, and creating a VM instance fails. The log file (/var/log/nova/nova-api.conf) follows:

2017-12-05 17:26:38.052 2353 INFO nova.osapi_compute.wsgi.server [req-aa4a2b3c-d623-4a4e-bce2-59328d7154ce a423338af20648268007ce109c139069 f1df077dd06349b8a919e189441b736d - default default] 192.168.30.130 "GET /v2.1/flavors/f31fa21a-9730-4b5d-be40-b94cca89f576/os-extra_specs HTTP/1.1" status: 200 len: 351 time: 0.0245221
2017-12-05 17:26:38.101 2350 INFO nova.osapi_compute.wsgi.server [req-11bf5ef8-7dae-4a78-b795-e53f27e2ea52 a423338af20648268007ce109c139069 f1df077dd06349b8a919e189441b736d - default default] 192.168.30.130 "GET /v2.1/limits?reserved=1 HTTP/1.1" status: 200 len: 844 time: 2.2066791
2017-12-05 17:26:38.456 2350 INFO nova.osapi_compute.wsgi.server [req-494b7dfb-fce1-4b9f-b292-fe42fc04464d a423338af20648268007ce109c139069 f1df077dd06349b8a919e189441b736d - default default] 192.168.30.130 "GET /v2.1/os-server-groups HTTP/1.1" status: 200 len: 353 time: 0.2469988
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions [req-d7e00de6-0ddd-44e9-a0b7-245e1de4fca5 a423338af20648268007ce109c139069 f1df077dd06349b8a919e189441b736d - default default] Unexpected exception in API method
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions Traceback (most recent call last):
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions   File
"/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 338, in wrapped
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions     return f(*args, **kwargs)
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, in wrapper
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, in wrapper
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, in wrapper
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, in wrapper
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, in wrapper
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, in wrapper
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/openstack/compute/servers.py", line 642, in create
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions     **create_kwargs)
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/hooks.py", line 154, in inner
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions     rv = f(*args, **kwargs)
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/compute/api.py", line 1620, in create
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions     check_server_group_quota=check_server_group_quota)
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/compute/api.py", line 1170, in _create_instance
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions     image_id, boot_meta = self._get_image(context, image_href)
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions   File
[Yahoo-eng-team] [Bug 1736650] [NEW] linuxbrige manages non linuxbridge ports
Public bug reported:

In our ML2 environment we have two mechanism drivers, linuxbridge and midonet. We have linuxbridge and midonet networks bound to instances on the same compute nodes. All works well except that the midonet ports get marked as DOWN. I've traced this back to the linuxbridge agent: it seems to mark the midonet ports as DOWN, and I can see the midonet port IDs in the linuxbridge logs.

Steps to reproduce:

Config:
[ml2]
type_drivers=flat,midonet,uplink
path_mtu=0
tenant_network_types=midonet
mechanism_drivers=linuxbridge,midonet

1. Boot an instance with a midonet NIC; you will note the port is DOWN.
2. Stop the linuxbridge agent and repeat; the port will be ACTIVE.
3. Start the linuxbridge agent and existing midonet ports will change to DOWN.

This is running stable/ocata.

** Affects: neutron
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1736650

Title: linuxbrige manages non linuxbridge ports
Status in neutron: New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1736650/+subscriptions
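One way to frame the expected fix (purely illustrative, not the agent's actual code): the agent should only report status for devices whose ports are actually bound to it, e.g. by the port's vif_type, leaving midonet-bound ports untouched instead of marking them DOWN:

```python
def devices_to_manage(all_devices, port_bindings):
    """Filter scanned devices down to the ones this agent owns.

    `all_devices` is the set of device names the agent noticed on the
    host; `port_bindings` maps device name -> vif_type as bound by the
    server. Only 'bridge'-bound ports belong to the linuxbridge agent;
    ports bound by other mech drivers (e.g. midonet) are skipped, so
    their status is never touched.
    """
    return {dev for dev in all_devices
            if port_bindings.get(dev) == "bridge"}
```

With this kind of filter, a midonet port visible on the same compute node would simply never enter the linuxbridge agent's up/down reporting loop.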
[Yahoo-eng-team] [Bug 1718401] Re: ubuntu install doc name for linuxbridge agent package
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1718401

Title: ubuntu install doc name for linuxbridge agent package
Status in neutron: Expired

Bug description:
- [X] This doc is inaccurate in this way: the package named neutron-linuxbridge-agent is unavailable, whereas neutron-plugin-linuxbridge-agent is.
---
Release: 11.0.1.dev75 on 2017-09-14 04:10
SHA: 8baed13677c70cf4f1a17c5cc457c3b65bfead1b
Source: https://git.openstack.org/cgit/openstack/neutron/tree/doc/source/install/controller-install-option1-ubuntu.rst
URL: https://docs.openstack.org/neutron/pike/install/controller-install-option1-ubuntu.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1718401/+subscriptions
[Yahoo-eng-team] [Bug 1736636] [NEW] When launch an instance, selecting the source gets the volume snapshot list incorrectly
Public bug reported:

On the dashboard, the steps are as follows:
1. Open the OpenStack Horizon dashboard and select Project --> Compute --> Instances --> Launch Instance.
2. Input the instance name, click Next, and select Boot Source --> Volume Snapshot.
3. A list of available volume snapshots is shown. The snapshot list obtained here is incorrect: the volume on which a usable snapshot depends must be bootable, but all available volume snapshots are listed, regardless of whether the volume the snapshot is based on is bootable.
4. The next step is OK.

** Affects: horizon
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1736636

Title: When launch an instance, selecting the source gets the volume snapshot list incorrectly
Status in OpenStack Dashboard (Horizon): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1736636/+subscriptions
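The filtering the reporter asks for could be sketched like this (function and field names are assumptions, not Horizon's actual code; Cinder reports the 'bootable' flag as the string 'true'/'false'):

```python
def bootable_snapshots(snapshots, volumes):
    """Keep only snapshots whose source volume is bootable.

    `snapshots` items carry a `volume_id`; `volumes` maps volume id ->
    volume dict with a 'bootable' flag (a 'true'/'false' string in the
    Cinder API). Snapshots of non-bootable or unknown volumes are
    dropped, so they never appear as a boot source.
    """
    def is_bootable(volume_id):
        vol = volumes.get(volume_id)
        return vol is not None and str(vol.get("bootable")).lower() == "true"

    return [snap for snap in snapshots if is_bootable(snap["volume_id"])]
```

The Launch Instance source step would then only offer snapshots that can actually boot an instance.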
[Yahoo-eng-team] [Bug 1735950] Re: ValueError: Old and New apt format defined with unequal values True vs False @ apt_preserve_sources_list
Thanks for the clarification.

** Changed in: maas
   Status: Incomplete => Triaged
** Changed in: maas
   Importance: Undecided => Critical
** Also affects: maas/2.3
   Importance: Undecided
   Status: New
** Changed in: maas/2.3
   Status: New => Triaged
** Changed in: maas/2.3
   Importance: Undecided => Critical
** Changed in: maas/2.3
   Milestone: None => 2.3.x
** Changed in: maas
   Milestone: None => 2.4.0alpha1

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1735950

Title: ValueError: Old and New apt format defined with unequal values True vs False @ apt_preserve_sources_list
Status in cloud-init: Incomplete
Status in MAAS: Triaged
Status in MAAS 2.3 series: Triaged

Bug description:
All nodes have these same failed events:

Node post-installation failure - 'cloudinit' running modules for config
Node post-installation failure - 'cloudinit' running config-apt-configure with frequency once-per-instance

Experiencing odd issues with the squid proxy not being reachable. From a deployed node that had the event errors:

$ sudo cat /var/log/cloud-init.log | http://paste.ubuntu.com/26098787/
$ sudo cat /var/log/cloud-init-output.log | http://paste.ubuntu.com/26098802/

ubuntu@os-util-00:~$ sudo apt install htop
sudo: unable to resolve host os-util-00
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed: htop
0 upgraded, 1 newly installed, 0 to remove and 14 not upgraded.
Need to get 76.4 kB of archives. After this operation, 215 kB of additional disk space will be used.
Err:1 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 htop amd64 2.0.1-1ubuntu1
  Could not connect to 10.10.0.110:8000 (10.10.0.110).
E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/universe/h/htop/htop_2.0.1-1ubuntu1_amd64.deb Could not connect to 10.10.0.110:8000 (10.10.0.110).
- connect (113: No route to host)
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

Not sure if these things are related (the proxy not being reachable, and the node event errors); something is not right.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1735950/+subscriptions
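The ValueError in the bug title is raised when cloud-init receives both the legacy apt key and its new-style equivalent with disagreeing values. A minimal, hypothetical #cloud-config illustrating the conflict (assuming the deployment renders both forms at once):

```yaml
#cloud-config
# Legacy (old-format) key:
apt_preserve_sources_list: true

# New-format equivalent, disagreeing with the legacy key above,
# which triggers:
#   ValueError: Old and New apt format defined with unequal values
#   True vs False @ apt_preserve_sources_list
apt:
  preserve_sources_list: false
```

Only one of the two forms should be emitted; when both are present they must agree.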
[Yahoo-eng-team] [Bug 1715569] Re: Live migration fails with an attached non-bootable Cinder volume (Pike)
** Also affects: nova/ocata
   Importance: Undecided
   Status: New
** Also affects: nova/pike
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1715569

Title: Live migration fails with an attached non-bootable Cinder volume (Pike)
Status in OpenStack Compute (nova): In Progress
Status in OpenStack Compute (nova) ocata series: New
Status in OpenStack Compute (nova) pike series: New

Bug description:
I have set up a fresh OpenStack HA Pike environment on Ubuntu 16.04.3. Live migration has been enabled and works so far. As storage backend I am using Ceph v12.2.0 (Luminous). When you attach a secondary volume to a VM and try to live migrate the VM to another host, it fails with the following exception:

2017-09-07 08:30:46.621 3246 ERROR nova.virt.libvirt.driver [req-832fc6c9-4d8e-46f8-94be-95cb5c7a4114 dddfba8e02f746799a6408a523e6cd25 ed2d2efd86dd40e7a45491d8502318d3 - default default] [instance: 414e8bc1-0b85-4e7f-8897-74b416b9caf8] Live Migration failure: unsupported configuration: Target device drive address 0:0:0 does not match source 0:0:1: libvirtError: unsupported configuration: Target device drive address 0:0:0 does not match source 0:0:1
2017-09-07 08:30:47.293 3246 ERROR nova.virt.libvirt.driver [req-832fc6c9-4d8e-46f8-94be-95cb5c7a4114 dddfba8e02f746799a6408a523e6cd25 ed2d2efd86dd40e7a45491d8502318d3 - default default] [instance: 414e8bc1-0b85-4e7f-8897-74b416b9caf8] Migration operation has aborted

When you (not live) migrate the corresponding VM with the attached volume, migration succeeds. When you launch the VM from a bootable volume, migration and live migration succeed in both cases. Only live migration with an additionally attached volume fails. Because these are Ceph RBD volumes, migration does not require "Block migration".
Compute node:
$ pip list | grep -E 'nova|cinder'
nova (16.0.0)
python-cinderclient (3.1.0)
python-novaclient (9.1.0)

Controller node:
$ pip list | grep -E 'nova|cinder'
cinder (11.0.0)
nova (16.0.0)
python-cinderclient (3.1.0)
python-novaclient (9.1.0)

$ ceph --version
ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)

Is this normal behaviour?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1715569/+subscriptions
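The "drive address 0:0:1" in the libvirt error refers to the <address type='drive'> element of the attached disk in the domain XML. As a purely illustrative, hypothetical fragment (device names and the volume name are invented for this sketch), the secondary volume on the source side would carry an address like:

```xml
<disk type='network' device='disk'>
  <source protocol='rbd' name='volumes/volume-example'/>
  <target dev='sdb' bus='scsi'/>
  <!-- controller:bus:target:unit -> the "0:0:1" in the error -->
  <address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
```

The migration aborts because the XML generated for the destination assigns a different unit (the "0:0:0" side of the error) to the same disk.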
[Yahoo-eng-team] [Bug 1708005] Re: 6 out 10 keystone.tests.unit.test_cert_setup.* unit test cases failed in stable/newton branch
The Newton release EOL'd on October 13th [0]. Marking this as invalid since the release is no longer supported.

[0] https://releases.openstack.org/queens/schedule.html

** Changed in: keystone
   Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1708005

Title: 6 out 10 keystone.tests.unit.test_cert_setup.* unit test cases failed in stable/newton branch
Status in OpenStack Identity (keystone): Invalid

Bug description:
The failures were caused by the formatting string for the openssl command. Here is the diff to fix the issue:

$ git diff keystone/common/openssl.py
diff --git a/keystone/common/openssl.py b/keystone/common/openssl.py
index c581e8d..4ea2410 100644
--- a/keystone/common/openssl.py
+++ b/keystone/common/openssl.py
@@ -217,7 +217,7 @@ class BaseCertificateConfigure(object):
         self.exec_command(['openssl', 'ca', '-batch',
                            '-out', '%(signing_cert)s',
                            '-config', '%(ssl_config)s',
-                           '-days', '%(valid_days)dd',
+                           '-days', '%(valid_days)d',
                            '-cert', '%(ca_cert)s',
                            '-keyfile', '%(ca_private_key)s',
                            '-infiles', '%(request_file)s'])

$ uname -a
Linux os-cs-g3w-31.dft.twosigma.com 4.9.0-2-amd64 #1 SMP Debian 4.9.18-1 (2017-03-30) x86_64 GNU/Linux

$ git branch
  master
* stable/newton

$ git log | head -4
commit 05a129e54573b6cbda1ec095f4526f2b9ba90a90
Author: Boris Bobrov
Date: Tue Apr 25 14:20:36 2017 +

{0} keystone.tests.unit.test_cert_setup.CertSetupTestCase.test_create_pki_certs_twice_without_rebuild [0.670882s] ... FAILED

Captured pythonlogging:
~~~
Adding cache-proxy 'oslo_cache.testing.CacheIsolatingProxy' to backend.
Adding cache-proxy 'oslo_cache.testing.CacheIsolatingProxy' to backend.
Adding cache-proxy 'oslo_cache.testing.CacheIsolatingProxy' to backend.
NeedRegenerationException
no value, waiting for create lock
value creation lock acquired
Calling creation function
Released creation lock
The admin_token_auth middleware presents a security risk and should be removed from the [pipeline:api_v3], [pipeline:admin_api], and [pipeline:public_api] sections of your paste ini file.
The admin_token_auth middleware presents a security risk and should be removed from the [pipeline:api_v3], [pipeline:admin_api], and [pipeline:public_api] sections of your paste ini file.
The admin_token_auth middleware presents a security risk and should be removed from the [pipeline:api_v3], [pipeline:admin_api], and [pipeline:public_api] sections of your paste ini file.
The admin_token_auth middleware presents a security risk and should be removed from the [pipeline:api_v3], [pipeline:admin_api], and [pipeline:public_api] sections of your paste ini file.
make_dirs path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs' mode=0755 user=None group=None
set_permissions: path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs' mode=0755 user=None(None) group=None(None)
set_permissions: path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs/openssl.conf' mode=0640 user=None(None) group=None(None)
set_permissions: path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs/index.txt' mode=0640 user=None(None) group=None(None)
set_permissions: path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs/serial' mode=0640 user=None(None) group=None(None)
make_dirs path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs' mode=0750 user=None group=None
set_permissions: path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs' mode=0750 user=None(None) group=None(None)
Running command - openssl genrsa -out /home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs/cakey.pem 2048
set_permissions: path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs/cakey.pem' mode=0640 user=None(None) group=None(None)
make_dirs path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs' mode=0755 user=None group=None
set_permissions: path='/home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs' mode=0755 user=None(None) group=None(None)
Running command - openssl req -new -x509 -extensions v3_ca -key /home/tsstack/openstack/keystone/keystone/tests/unit/tmp/40309/ssl/certs/cakey.pem -out
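The one-character fix in the diff above can be seen directly in Python's %-formatting: '%(valid_days)dd' substitutes the integer and then appends a literal 'd', so openssl's -days option received a value like "3650d" and rejected it:

```python
# '%(valid_days)dd' is parsed as the conversion '%(valid_days)d'
# followed by a literal character 'd', so the substituted string
# carries a trailing 'd' that openssl cannot parse as a day count.
broken = '%(valid_days)dd' % {'valid_days': 3650}
fixed = '%(valid_days)d' % {'valid_days': 3650}

assert broken == '3650d'   # what the failing tests fed to openssl
assert fixed == '3650'     # what the fixed template produces
```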
[Yahoo-eng-team] [Bug 1736600] [NEW] cloud-init modules -h documents unsupported --mode init
Public bug reported:

$ sudo python3 -m cloudinit.cmd.main modules -h
[sudo] password for csmith:
usage: /home/csmith/src/cloud-init/cloudinit/cmd/main.py modules [-h] [--mode {init,config,final}]

optional arguments:
  -h, --help            show this help message and exit
  --mode {init,config,final}, -m {init,config,final}
                        module configuration name to use (default: config)

csmith@uptown:~/src/cloud-init (master)$ sudo python3 -m cloudinit.cmd.main modules --mode init
Traceback (most recent call last):
  File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/csmith/src/cloud-init/cloudinit/cmd/main.py", line 835, in <module>
    main(sys.argv)
  File "/home/csmith/src/cloud-init/cloudinit/cmd/main.py", line 829, in main
    get_uptime=True, func=functor, args=(name, args))
  File "/home/csmith/src/cloud-init/cloudinit/util.py", line 2238, in log_time
    ret = func(*args, **kwargs)
  File "/home/csmith/src/cloud-init/cloudinit/cmd/main.py", line 631, in status_wrapper
    v1[mode]['start'] = time.time()
KeyError: 'modules-init'

We need to limit the options to those listed in cloudinit.cmd.main line 606:

modes = ('init', 'init-local', 'modules-config', 'modules-final')

** Affects: cloud-init
   Importance: Low
   Assignee: Chad Smith (chad.smith)
   Status: Triaged

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1736600

Title: cloud-init modules -h documents unsupported --mode init
Status in cloud-init: Triaged

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1736600/+subscriptions
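A sketch of the proposed fix with argparse (the parser and option names mirror the help output above, but this is illustrative code, not cloud-init's): offer only the stages that expand to a valid 'modules-*' mode, so an unsupported value is rejected by the parser instead of crashing later with a KeyError:

```python
import argparse

# The four values actually accepted by status_wrapper in
# cloudinit/cmd/main.py, as quoted in the report:
MODES = ('init', 'init-local', 'modules-config', 'modules-final')

# For the `modules` subcommand, only 'config' and 'final' make sense;
# each expands to an internal 'modules-*' mode.
parser = argparse.ArgumentParser(prog='cloud-init modules')
parser.add_argument('--mode', '-m', choices=('config', 'final'),
                    default='config',
                    help='module configuration name to use')

args = parser.parse_args(['--mode', 'final'])
internal = 'modules-%s' % args.mode
assert internal in MODES

# '--mode init' is now rejected up front by argparse (it exits with
# an "invalid choice" error) rather than raising KeyError later.
try:
    parser.parse_args(['--mode', 'init'])
    rejected = False
except SystemExit:
    rejected = True
assert rejected
```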
[Yahoo-eng-team] [Bug 1735950] Re: ValueError: Old and New apt format defined with unequal values True vs False @ apt_preserve_sources_list
Setting this to Incomplete for MAAS, since it's not clear whether bad input data from MAAS is causing this traceback in cloud-init, or whether the bug is only in cloud-init. @jamesbeedy, can you tell us how you have configured/customized your apt sources in MAAS?

** Also affects: cloud-init
   Importance: Undecided
   Status: New
** Changed in: maas
   Status: New => Incomplete

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1735950

Title: ValueError: Old and New apt format defined with unequal values True vs False @ apt_preserve_sources_list
Status in cloud-init: New
Status in MAAS: Incomplete

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1735950/+subscriptions
[Yahoo-eng-team] [Bug 1735724] Re: Metadata iptables rules never inserted upon exception on router creation
Since this report concerns a possible security risk, an incomplete security advisory task has been added while the core security reviewers for the affected project or projects confirm the bug and discuss the scope of any vulnerability along with potential solutions.

** Also affects: ossa
   Importance: Undecided
   Status: New
** Changed in: ossa
   Status: New => Incomplete

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1735724

Title: Metadata iptables rules never inserted upon exception on router creation
Status in neutron: Confirmed
Status in OpenStack Security Advisory: Incomplete

Bug description:
We've been debugging some issues seen lately [0] and found out that there's a bug in the l3 agent when creating routers (or during the initial sync). Jakub Libosvar and I spent some time recreating the issue and this is what we got:

Especially since we bumped to ovsdbapp 0.8.0, we've seen some jobs failing due to errors when authenticating to a VM using a public key. The TCP connection to the SSH port was successfully established, but the authentication failed. After debugging further, we found out that the metadata rules in the qrouter namespace which redirect traffic to haproxy (which replaced the old neutron-ns-metadata-proxy) were missing, so VMs weren't fetching metadata (hence, the public key).

These rules are installed by the metadata driver after a router is created [1], on the AFTER_CREATE notification. They will also get created during the initial sync of the l3 agent (since the router is still unknown to the agent) [2]. Here, if we don't know the router yet, we'll call _process_added_router(), and if it's a known router we'll call _process_updated_router(). After our tests, we've seen that the iptables rules are never restored if we simulate an exception inside ri.process() at [3], even though the router is scheduled for resync [4].
This happens because we've already added the router to our router info [5], so even though ri.process() fails at L481 and the router is scheduled for resync, the next time around _process_updated_router() gets called instead of _process_added_router(); the notification is never pushed to the metadata driver, and the iptables rules never get installed. In conclusion, if an error occurs during _process_added_router() we might end up losing metadata forever, until we restart the agent and this call succeeds. Worse, we will be forwarding metadata requests via br-ex, which could lead to security issues (i.e. wrong metadata could be injected from the outside, or the metadata server running in the underlying cloud may respond). With ovsdbapp 0.9.0 we're minimizing this, because if a port fails to be added to br-int, ovsdbapp will enqueue the transaction instead of throwing an exception; but there could still be other exceptions that reproduce this scenario outside of ovsdbapp, so we need to fix it in Neutron. 
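The failure mode described above can be illustrated with a minimal sketch (hypothetical class and method names, not the actual neutron code): the router is registered in router_info before ri.process() runs, so after a failure and resync the agent takes the "updated" path and the AFTER_CREATE notification that installs the metadata iptables rules is never emitted.

```python
# Sketch only: names mirror the bug description, not neutron's real API.
class L3AgentSketch:
    def __init__(self):
        self.router_info = {}     # known routers
        self.notifications = []   # stands in for the callback registry

    def _process_added_router(self, rid, fail=False):
        self.router_info[rid] = 'router-info'   # registered *before* process()
        if fail:
            raise RuntimeError('simulated ri.process() failure')
        self.notifications.append(('AFTER_CREATE', rid))  # installs iptables rules

    def _process_updated_router(self, rid):
        self.notifications.append(('AFTER_UPDATE', rid))

    def process_router(self, rid, fail=False):
        if rid not in self.router_info:
            self._process_added_router(rid, fail=fail)
        else:
            self._process_updated_router(rid)

agent = L3AgentSketch()
try:
    agent.process_router('r1', fail=True)   # first attempt fails mid-process
except RuntimeError:
    pass
agent.process_router('r1')                  # resync: router is now "known"
# Only AFTER_UPDATE fired; the AFTER_CREATE notification was lost for good.
```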
Thanks Daniel Alvarez --- [0] https://bugs.launchpad.net/tripleo/+bug/1731063 [1] https://github.com/openstack/neutron/blob/02fa049c5f5a38a276bec6e55c68ac19cd08c59f/neutron/agent/metadata/driver.py#L288 [2] https://github.com/openstack/neutron/blob/02fa049c5f5a38a276bec6e55c68ac19cd08c59f/neutron/agent/l3/agent.py#L472 [3] https://github.com/openstack/neutron/blob/02fa049c5f5a38a276bec6e55c68ac19cd08c59f/neutron/agent/l3/agent.py#L481 [4] https://github.com/openstack/neutron/blob/02fa049c5f5a38a276bec6e55c68ac19cd08c59f/neutron/agent/l3/agent.py#L565 [5] https://github.com/openstack/neutron/blob/02fa049c5f5a38a276bec6e55c68ac19cd08c59f/neutron/agent/l3/agent.py#L478 To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1735724/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1735407] Re: [Nova] Evacuation doesn't respect anti-affinity rules
** Changed in: mos Status: New => Won't Fix ** Changed in: mos/9.x Status: New => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1735407 Title: [Nova] Evacuation doesn't respect anti-affinity rules Status in Mirantis OpenStack: Won't Fix Status in Mirantis OpenStack 9.x series: Won't Fix Status in OpenStack Compute (nova): In Progress Bug description: --- Environment --- MOS: 9.2 Nova: 13.1.1-7~u14.04+mos20 3 compute nodes --- Steps to reproduce --- 1. Create a new server group: nova server-group-create anti anti-affinity 2. Launch 2 VMs in this server group: nova boot --image TestVM --flavor m1.tiny --nic net-id=889e4e01-9b38-4007-829d-b69d53269874 --hint group=def58398-4a00-4066-a2aa-13f1b6e7e327 vm-1 nova boot --image TestVM --flavor m1.tiny --nic net-id=889e4e01-9b38-4007-829d-b69d53269874 --hint group=def58398-4a00-4066-a2aa-13f1b6e7e327 vm-2 3. Stop nova-compute on the nodes where these 2 VMs are running: nova show vm-1 | grep "hypervisor" OS-EXT-SRV-ATTR:hypervisor_hostname | node-12.domain.tld nova show vm-2 | grep "hypervisor" OS-EXT-SRV-ATTR:hypervisor_hostname | node-13.domain.tld [root@node-12 ~]$ service nova-compute stop nova-compute stop/waiting [root@node-13 ~]$ service nova-compute stop nova-compute stop/waiting 4. Evacuate both VMs almost at once: nova evacuate vm-1 nova evacuate vm-2 5. Check where these 2 VMs are running: nova show vm-1 | grep "hypervisor" nova show vm-2 | grep "hypervisor" --- Actual behavior --- Both VMs have been evacuated onto the same node: [root@node-11 ~]$ virsh list Id Name State 2 instance-0001 running 3 instance-0002 running --- Expected behavior --- According to the anti-affinity rule, only 1 VM should be evacuated; the other should fail to evacuate with an appropriate message. 
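The race implied by "evacuate both VMs almost at once" can be sketched as follows (illustrative data, not nova's scheduler code): both evacuation requests snapshot the anti-affinity group's host list before either placement is recorded, so the filter passes for both and both instances land on the same surviving node.

```python
# Hedged illustration of a classic check-then-act race in group scheduling.
group_hosts = {'node-12.domain.tld', 'node-13.domain.tld'}  # failed hosts
candidates = ['node-11.domain.tld']                         # the one healthy compute

def passes_anti_affinity(host, observed_group_hosts):
    # The filter only rejects hosts already known to hold a group member.
    return host not in observed_group_hosts

# Two near-simultaneous evacuations each read the same stale snapshot:
snapshot_vm1 = set(group_hosts)
snapshot_vm2 = set(group_hosts)
target_vm1 = [h for h in candidates if passes_anti_affinity(h, snapshot_vm1)]
target_vm2 = [h for h in candidates if passes_anti_affinity(h, snapshot_vm2)]
# Both requests pick node-11, violating the anti-affinity rule; a late
# re-check on the compute host would be needed to catch this.
```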
To manage notifications about this bug go to: https://bugs.launchpad.net/mos/+bug/1735407/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1689644] Re: Keystone does not report microversion headers
Colleen brought up a good point in office hours today about this [0]. If/when keystone supports microversions officially, this will be a hard requirement of that implementation. Otherwise it doesn't make a whole lot of sense to give the appearance that keystone supports microversions when it doesn't. [0] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2017-12-05.log.html#t2017-12-05T19:57:37 ** Changed in: keystone Status: In Progress => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1689644 Title: Keystone does not report microversion headers Status in OpenStack Identity (keystone): Invalid Bug description: Keystone is now behind the other projects in reporting the microversions in the microversion header To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1689644/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1732976] Re: [OSSA-2017-006] Potential DoS by rebuilding the same instance with a new image multiple times (CVE-2017-17051)
** Summary changed: - Potential DoS by rebuilding the same instance with a new image multiple times (CVE-2017-17051) + [OSSA-2017-006] Potential DoS by rebuilding the same instance with a new image multiple times (CVE-2017-17051) ** Changed in: ossa Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1732976 Title: [OSSA-2017-006] Potential DoS by rebuilding the same instance with a new image multiple times (CVE-2017-17051) Status in OpenStack Compute (nova): In Progress Status in OpenStack Compute (nova) pike series: In Progress Status in OpenStack Security Advisory: Fix Released Bug description: As of the fix for bug 1664931 (OSSA-2017-005, CVE-2017-16239), a regression was introduced which allows a potential denial of service. Once all computes are upgraded to >=Pike and using the (default) FilterScheduler, a rebuild with a new image will go through the scheduler. The FilterScheduler doesn't know that this is a rebuild on the same host and creates VCPU/MEMORY_MB/DISK_GB allocations in Placement against the compute node that the instance is running on. The ResourceTracker in the nova-compute service will not adjust the allocations after the rebuild, so what can happen is over multiple rebuilds of the same instance with a new image, the Placement service will report the compute node as not having any capacity left and will take it out of scheduling consideration. Eventually the rebuild would fail once the compute node is at capacity, but an attacker could then simply create a new instance (on a new host) and start the process all over again. 
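The capacity-exhaustion mechanism described above is simple accounting, sketched here with made-up numbers (this is not nova code): every rebuild-with-a-new-image writes a fresh allocation against the same compute node and nothing reclaims the previous one, so repeated rebuilds of one instance fill the node on paper.

```python
# Illustrative arithmetic for the leak: one instance, one compute node.
capacity = {'VCPU': 8, 'MEMORY_MB': 8192}
allocations = []   # one record per scheduled rebuild of the same instance

def rebuild(flavor):
    used = {rc: sum(a[rc] for a in allocations) for rc in capacity}
    if any(used[rc] + flavor[rc] > capacity[rc] for rc in capacity):
        raise RuntimeError('Placement reports the node as full')
    allocations.append(dict(flavor))   # old allocation is never adjusted

flavor = {'VCPU': 2, 'MEMORY_MB': 2048}
for _ in range(4):
    rebuild(flavor)        # four rebuilds "use up" the 8-VCPU node
try:
    rebuild(flavor)        # the fifth is rejected: the denial of service
    node_full = False
except RuntimeError:
    node_full = True
```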
I have a recreate of the bug here: https://review.openstack.org/#/c/521153/ This would not be a problem for anyone using another scheduler driver since only FilterScheduler uses Placement, and it wouldn't be a problem for any deployment that still has at least one compute service running Ocata code, because the ResourceTracker in the nova-compute service will adjust the allocations every 60 seconds. Beyond this issue, however, there are other problems with the fix for bug 1664931: 1. Even if you're not using the FilterScheduler, e.g. using CachingScheduler, with the RamFilter or DiskFilter or CoreFilter enabled, if the compute node that the instance is running on is at capacity, a rebuild with a new image may still fail whereas before it wouldn't. This is a regression in behavior and the user would have to delete and recreate the instance with the new image. 2. Before the fix for bug 1664931, one could rebuild an instance on a disabled compute service, but now they cannot if the ComputeFilter is enabled (which it is by default and presumably enabled in all deployments). 3. Because of the way instance.image_ref is used with volume-backed instances, we are now *always* going through the scheduler during rebuild of a volume-backed instance, regardless of whether or not the image ref provided to the rebuild API is the same as the original in the root disk. I've already reported bug 1732947 for this. -- The nova team has looked at some potential solutions, but at this point none of them are straightforward, and some involve using scheduler hints which are tied to filters that are not enabled by default (e.g. using the same_host scheduler hint which requires that the SameHostFilter is enabled). Hacking a fix in would likely result in more bugs in subtle or unforeseen ways not caught during testing. 
Long-term we think a better way to fix the rebuild + new image validation is to categorize each scheduler filter as being a 'resource' or 'policy' filter, and with a rebuild + new image, we only run filters that are for policy constraints (like ImagePropertiesFilter) and not run RamFilter/DiskFilter/CoreFilter (or Placement for that matter). This would likely require an internal RPC API version change on the nova-scheduler interface, which is something we wouldn't want to backport to stable branches because of upgrade implications with the RPC API version bump. At this point it might be best to just revert the fix for bug 1664931. We can still revert that through all of the upstream branches that the fix was applied to (newton is not EOL yet). This is obviously a pain for downstream consumers that have picked up and put out fixes for the CVE already. It would also mean publishing an errata for CVE-2017-16239 (we have to do that anyway probably) and saying it's now no longer fixed but is a publicly known issue. Another possible alternative is shipping a new policy rule in nova that allows operators to disable rebuilding an instance with a new image, so they could decide based on the types of images and scheduler configuration they have if rebuilding with a
[Yahoo-eng-team] [Bug 1721796] Re: wait_until_true is not rootwrap daemon friendly
Bug https://bugs.launchpad.net/neutron/+bug/1654287 was fixed and released. We'll be monitoring this one but closing for now. ** Changed in: neutron Status: In Progress => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1721796 Title: wait_until_true is not rootwrap daemon friendly Status in neutron: Invalid Bug description: As wait_until_true uses an eventlet timeout, it can happen that the timeout exception is raised while waiting for output from the rootwrap daemon. This leaves data in the rwd socket, and the next attempt to run a command via the rootwrap daemon returns the previous data. There is a bug reported on oslo.rootwrap - https://bugs.launchpad.net/neutron/+bug/1654287 With the switch of fullstack tests to use rootwrap, it happens a lot that timeouts are raised while running commands in rwd (e.g. http://logs.openstack.org/67/488567/2/check/gate-neutron-dsvm- fullstack-ubuntu-xenial/eb8f9a3/testr_results.html.gz) This bug is to track down an eventlet-free solution - we can generalize the approach here https://review.openstack.org/#/c/421325/3/neutron/cmd/netns_cleanup.py and since we'll get rid of the eventlet dependency, we can move wait_until_true to neutron-lib. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1721796/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
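An eventlet-free wait_until_true along the lines the bug above suggests could look like this sketch (names are illustrative, not neutron's actual helper): poll the predicate against a monotonic deadline instead of arming an eventlet.Timeout that can fire while the rootwrap daemon's socket is mid-read.

```python
import time

class WaitTimeout(Exception):
    """Raised when the predicate is not satisfied in time."""

def wait_until_true(predicate, timeout=60, sleep=1):
    # No green-thread timeout: the check happens only between predicate
    # calls, so a blocking read on the rootwrap socket is never interrupted.
    deadline = time.monotonic() + timeout
    while not predicate():
        if time.monotonic() > deadline:
            raise WaitTimeout('predicate not satisfied after %ss' % timeout)
        time.sleep(sleep)

# Example: wait for a condition that becomes true on the third poll.
state = {'calls': 0}
def becomes_true():
    state['calls'] += 1
    return state['calls'] >= 3

wait_until_true(becomes_true, timeout=5, sleep=0.01)
```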
[Yahoo-eng-team] [Bug 1736502] [NEW] Compute API: Invalid options for guest_format in block_device_mapping_v2
Public bug reported: This bug tracker is for errors with the documentation, use the following as a template and remove or add fields as you see fit. Convert [ ] into [x] to check boxes: - [X] This doc is inaccurate in this way: The description of the guest_format field of block_device_mapping_v2 says ephemeral is a valid option, but it is not - [ ] This is a doc addition request. - [X] I have a fix to the document that I can paste below including example: input and output. If you have a troubleshooting or support issue, use the following resources: - Ask OpenStack: http://ask.openstack.org - The mailing list: http://lists.openstack.org - IRC: 'openstack' channel on Freenode --- Release: 17.0.0.0b2.dev472 on 'Tue Dec 5 13:32:26 2017, commit 6ecb535' SHA: Source: Can't derive source file URL URL: https://developer.openstack.org/api-ref/compute/ --- The documentation for the Compute API method "POST /servers" has bad examples for the block_device_mapping_v2.guest_format field. In the description of this field it says ephemeral is a valid option, but using it results in an error that says ephemeral is not a valid format. Current: "Specifies the guest server disk file system format, such as ephemeral or swap." Change to something like: "Specifies the guest server disk file system format. A valid value is ext2, ext3, ext4, or xfs" I got those valid values from here: https://github.com/openstack/nova/blob/faacfeb07646115e7c8c91584d0e467a3e0064d7/nova/virt/libvirt/driver.py#L8102 It may not be a complete list though, as I'm not sure where else the guest_format value is used. This file was just where the traceback I got when I used "ephemeral" led. ** Affects: nova Importance: Undecided Status: New ** Tags: api-ref -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). 
https://bugs.launchpad.net/bugs/1736502 Title: Compute API: Invalid options for guest_format in block_device_mapping_v2 Status in OpenStack Compute (nova): New Bug description: This bug tracker is for errors with the documentation, use the following as a template and remove or add fields as you see fit. Convert [ ] into [x] to check boxes: - [X] This doc is inaccurate in this way: The description of the guest_format field of block_device_mapping_v2 says ephemeral is a valid option, but it is not - [ ] This is a doc addition request. - [X] I have a fix to the document that I can paste below including example: input and output. If you have a troubleshooting or support issue, use the following resources: - Ask OpenStack: http://ask.openstack.org - The mailing list: http://lists.openstack.org - IRC: 'openstack' channel on Freenode --- Release: 17.0.0.0b2.dev472 on 'Tue Dec 5 13:32:26 2017, commit 6ecb535' SHA: Source: Can't derive source file URL URL: https://developer.openstack.org/api-ref/compute/ --- The documentation for the Compute API method "POST /servers" has bad examples for the block_device_mapping_v2.guest_format field. In the description of this field it says ephemeral is a valid option, but using it results in an error that says ephemeral is not a valid format. Current: "Specifies the guest server disk file system format, such as ephemeral or swap." Change to something like: "Specifies the guest server disk file system format. A valid value is ext2, ext3, ext4, or xfs" I got those valid values from here: https://github.com/openstack/nova/blob/faacfeb07646115e7c8c91584d0e467a3e0064d7/nova/virt/libvirt/driver.py#L8102 It may not be a complete list though, as I'm not sure where else the guest_format value is used. This file was just where the traceback I got when I used "ephemeral" led. 
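For concreteness, here is an illustrative (non-authoritative) POST /servers body showing how guest_format is meant to be used on block_device_mapping_v2 entries: a filesystem format such as 'ext4' for a blank local disk, or 'swap' for a swap disk, but never the literal string 'ephemeral' that the docs wrongly suggest. The IDs are placeholders.

```python
# Example request body as a Python dict; 'FLAVOR_ID'/'IMAGE_ID' are placeholders.
server_request = {
    'server': {
        'name': 'bdm-example',
        'flavorRef': 'FLAVOR_ID',
        'imageRef': 'IMAGE_ID',
        'block_device_mapping_v2': [
            # boot disk from an image: no guest_format needed
            {'boot_index': 0, 'uuid': 'IMAGE_ID', 'source_type': 'image',
             'destination_type': 'local', 'delete_on_termination': True},
            # blank ephemeral-style disk: a filesystem format, not 'ephemeral'
            {'source_type': 'blank', 'destination_type': 'local',
             'guest_format': 'ext4', 'volume_size': 1,
             'delete_on_termination': True},
            # swap disk: guest_format is the literal 'swap'
            {'source_type': 'blank', 'destination_type': 'local',
             'guest_format': 'swap', 'volume_size': 1,
             'delete_on_termination': True},
        ],
    }
}
formats_in_use = {bdm['guest_format']
                  for bdm in server_request['server']['block_device_mapping_v2']
                  if 'guest_format' in bdm}
```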
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1736502/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1731595] Please test proposed package
Hello Xav, or anyone else affected, Accepted neutron into queens-proposed. The package will build now and be available in the Ubuntu Cloud Archive in a few hours, and then in the -proposed repository. Please help us by testing this new package. To enable the -proposed repository: sudo add-apt-repository cloud-archive:queens-proposed sudo apt-get update Your feedback will aid us getting this update out to other Ubuntu users. If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-queens-needed to verification-queens-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-queens-failed. In either case, details of your testing will help us make a better decision. Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance! ** Changed in: cloud-archive/queens Status: Fix Released => Fix Committed ** Tags added: verification-queens-needed -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1731595 Title: L3 HA: multiple agents are active at the same time Status in Ubuntu Cloud Archive: Fix Committed Status in Ubuntu Cloud Archive ocata series: Triaged Status in Ubuntu Cloud Archive pike series: Fix Committed Status in Ubuntu Cloud Archive queens series: Fix Committed Status in neutron: Fix Released Status in neutron package in Ubuntu: Fix Released Status in neutron source package in Zesty: Triaged Status in neutron source package in Artful: Fix Committed Status in neutron source package in Bionic: Fix Released Bug description: OS: Xenial, Ocata from Ubuntu Cloud Archive We have three neutron-gateway hosts, with L3 HA enabled and a min of 2, max of 3. There are approx. 400 routers defined. 
At some point (we weren't monitoring exactly) a number of the routers changed from being one active, and 1+ others standby, to >1 active. This included each of the 'active' namespaces having the same IP addresses allocated, and therefore traffic problems reaching instances. Removing the routers from all but one agent, and re-adding, resolved the issue. Restarting one l3 agent also appeared to resolve the issue, but very slowly, to the point where we needed the system alive again faster and reverted to removing/re-adding. At the same time, a number of routers were listed without any agents active at all. This situation appears to have been resolved by adding routers to agents, after several minutes downtime. I'm finding it very difficult to find relevant keepalived messages to indicate what's going on, but what I do notice is that all the agents have equal priority and are configured as 'backup'. I am trying to figure out a way to get a reproducer of this, it might be that we need to have a large number of routers configured on a small number of gateways. To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1731595/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1736468] [NEW] glance-scrubber does not work using SSL
Public bug reported: Using glance registry and glance api over SSL, glance-scrubber does not work in Ocata. This is the error I'm getting: glance-scrubber --config-file /etc/glance/glance-scrubber.conf Option "verbose" from group "DEFAULT" is deprecated for removal. Its value may be silently ignored in the future. 2017-12-05 10:17:15.870 950 DEBUG glance_store.backend [-] Attempting to import store cinder _load_store /usr/lib/python2.7/site-packages/glance_store/backend.py:231 2017-12-05 10:17:15.871 950 DEBUG glance_store.backend [-] Attempting to import store file _load_store /usr/lib/python2.7/site-packages/glance_store/backend.py:231 2017-12-05 10:17:15.871 950 DEBUG glance_store.backend [-] Attempting to import store glance.store.cinder.Store _load_store /usr/lib/python2.7/site-packages/glance_store/backend.py:231 2017-12-05 10:17:15.872 950 DEBUG glance_store.backend [-] Attempting to import store glance.store.filesystem.Store _load_store /usr/lib/python2.7/site-packages/glance_store/backend.py:231 2017-12-05 10:17:15.872 950 DEBUG glance_store.backend [-] Attempting to import store glance.store.http.Store _load_store /usr/lib/python2.7/site-packages/glance_store/backend.py:231 2017-12-05 10:17:15.872 950 DEBUG glance_store.backend [-] Attempting to import store glance.store.rbd.Store _load_store /usr/lib/python2.7/site-packages/glance_store/backend.py:231 2017-12-05 10:17:15.873 950 DEBUG glance_store.backend [-] Attempting to import store glance.store.sheepdog.Store _load_store /usr/lib/python2.7/site-packages/glance_store/backend.py:231 2017-12-05 10:17:15.873 950 DEBUG glance_store.backend [-] Attempting to import store glance.store.swift.Store _load_store /usr/lib/python2.7/site-packages/glance_store/backend.py:231 2017-12-05 10:17:15.873 950 DEBUG glance_store.backend [-] Attempting to import store glance.store.vmware_datastore.Store _load_store /usr/lib/python2.7/site-packages/glance_store/backend.py:231 2017-12-05 10:17:15.874 950 DEBUG 
glance_store.backend [-] Attempting to import store http _load_store /usr/lib/python2.7/site-packages/glance_store/backend.py:231 2017-12-05 10:17:15.874 950 DEBUG glance_store.backend [-] Attempting to import store no_conf _load_store /usr/lib/python2.7/site-packages/glance_store/backend.py:231 2017-12-05 10:17:15.874 950 DEBUG glance_store.backend [-] Attempting to import store rbd _load_store /usr/lib/python2.7/site-packages/glance_store/backend.py:231 2017-12-05 10:17:15.875 950 DEBUG glance_store.backend [-] Attempting to import store sheepdog _load_store /usr/lib/python2.7/site-packages/glance_store/backend.py:231 2017-12-05 10:17:15.875 950 DEBUG glance_store.backend [-] Attempting to import store swift _load_store /usr/lib/python2.7/site-packages/glance_store/backend.py:231 2017-12-05 10:17:15.876 950 DEBUG glance_store.backend [-] Attempting to import store vmware _load_store /usr/lib/python2.7/site-packages/glance_store/backend.py:231 2017-12-05 10:17:15.876 950 DEBUG glance_store.backend [-] Registering options for group glance_store register_opts /usr/lib/python2.7/site-packages/glance_store/backend.py:160 2017-12-05 10:17:15.876 950 DEBUG glance_store.backend [-] Registering options for group glance_store register_opts /usr/lib/python2.7/site-packages/glance_store/backend.py:160 2017-12-05 10:17:15.877 950 DEBUG glance_store.backend [-] Attempting to import store glance.store.filesystem.Store _load_store /usr/lib/python2.7/site-packages/glance_store/backend.py:231 2017-12-05 10:17:15.878 950 DEBUG glance_store.capabilities [-] Store glance_store._drivers.filesystem.Store doesn't support updating dynamic storage capabilities. Please overwrite 'update_capabilities' method of the store to implement updating logics if needed. 
update_capabilities /usr/lib/python2.7/site-packages/glance_store/capabilities.py:97 2017-12-05 10:17:15.878 950 DEBUG glance_store.backend [-] Registering store glance.store.filesystem.Store with schemes ('file', 'filesystem') create_stores /usr/lib/python2.7/site-packages/glance_store/backend.py:278 2017-12-05 10:17:15.879 950 DEBUG glance_store.driver [-] Late loading location class glance_store._drivers.filesystem.StoreLocation get_store_location_class /usr/lib/python2.7/site-packages/glance_store/driver.py:89 2017-12-05 10:17:15.879 950 DEBUG glance_store.location [-] Registering scheme file with {'location_class': , 'store': , 'store_entry': 'glance.store.filesystem.Store'} register_scheme_map /usr/lib/python2.7/site-packages/glance_store/location.py:88 2017-12-05 10:17:15.879 950 DEBUG glance_store.location [-] Registering scheme filesystem with {'location_class': , 'store': , 'store_entry': 'glance.store.filesystem.Store'} register_scheme_map /usr/lib/python2.7/site-packages/glance_store/location.py:88 2017-12-05 10:17:15.880 950 DEBUG glance_store.backend [-] Attempting to import store glance.store.rbd.Store _load_store
[Yahoo-eng-team] [Bug 1736457] [NEW] resume_guests_state_on_host_boot = True creates libvirt.xml with empty nova:owner
Public bug reported: When instances are booted automatically after a compute reboot, the libvirt.xml files are recreated without the nova:owner data, and Ceilometer likes to use it. The relevant part of the xml after the reboot: . . . . At first sight, the owner metadata is created in _get_guest_config_meta in virt/libvirt/driver.py, but only when context is not None. But when nova starts the guests on its own, probably it is None? ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1736457 Title: resume_guests_state_on_host_boot = True creates libvirt.xml with empty nova:owner Status in OpenStack Compute (nova): New Bug description: When instances are booted automatically after a compute reboot, the libvirt.xml files are recreated without the nova:owner data, and Ceilometer likes to use it. The relevant part of the xml after the reboot: . . . . At first sight, the owner metadata is created in _get_guest_config_meta in virt/libvirt/driver.py, but only when context is not None. But when nova starts the guests on its own, probably it is None? To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1736457/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
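A minimal sketch of the conditional described in the bug above, with one possible fix (the helper name and fields are hypothetical, not nova's actual code): when the request context is None, as during a host-boot resume, fall back to the identifiers already stored on the instance instead of emitting an empty nova:owner.

```python
# Hypothetical helper illustrating the context-dependent owner metadata.
def build_owner_meta(context, instance):
    if context is not None:
        # normal API-driven path: identity comes from the request context
        return {'user': context['user_name'], 'project': context['project_name']}
    # host-boot resume (context=None): use what the instance record knows
    return {'user': instance.get('user_id'), 'project': instance.get('project_id')}

instance = {'user_id': 'u-123', 'project_id': 'p-456'}
meta = build_owner_meta(None, instance)   # resume on host boot, no context
```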
[Yahoo-eng-team] [Bug 1501206] Re: router:dhcp ports are open resolvers
Marking Ubuntu task as Invalid; Ubuntu will pick up whatever ends up being landed into Neutron itself via Queens and other stable point releases. ** Changed in: neutron (Ubuntu) Status: New => Triaged ** Changed in: neutron (Ubuntu) Importance: Undecided => High ** Changed in: neutron (Ubuntu) Status: Triaged => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1501206 Title: router:dhcp ports are open resolvers Status in neutron: In Progress Status in OpenStack Security Advisory: Won't Fix Status in neutron package in Ubuntu: Invalid Bug description: When configuring an public IPv4 subnet with DHCP enabled inside Neutron (and attaching it to an Internet-connected router), the DNS recursive resolver service provided by dnsmasq inside the qdhcp network namespace will respond to DNS queries from the entire Internet. This is a huge problem from a security standpoint, as open resolvers are very likely to be abused for DDoS purposes. This does not only cause significant damage to third parties (i.e., the true destination of the DDoS attack and every network in between), but also on the local network or servers (due to saturation of all the available network bandwidth and/or the processing capacity of the node running the dnsmasq instance). Quoting from http://openresolverproject.org/: «Open Resolvers pose a significant threat to the global network infrastructure by answering recursive queries for hosts outside of its domain. They are utilized in DNS Amplification attacks and pose a similar threat as those from Smurf attacks commonly seen in the late 1990s. [...] What can I do? If you operate a DNS server, please check the settings. Recursive servers should be restricted to your enterprise or customer IP ranges to prevent abuse. 
Directions on securing BIND and Microsoft nameservers can be found on the Team CYMRU Website - If you operate BIND, you can deploy the TCP-ANY patch» It seems reasonable to expect that the dnsmasq instance within Neutron would only respond to DNS queries from the subnet prefixes it is associated with and ignore all others. Note that this only occurs for IPv4. That is however likely just a symptom of bug #1499170, which breaks all IPv6 DNS queries (external as well as internal). I would assume that when bug #1499170 is fixed, the router:dhcp ports will immediately start being open resolvers over IPv6 too. For what it's worth, the reason I noticed this issue in the first place was that NorCERT (the national Norwegian Computer Emergency Response Team - http://www.cert.no/) got in touch with us, notifying us about the open resolvers they had observed in our network and insisted that we lock them down ASAP. It only took NorCERT a couple of days after the subnet was first created to do so. Tore To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1501206/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
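The policy the reporter of the bug above asks for, sketched as a standalone check (this is not how dnsmasq or neutron implement it): only answer recursive DNS queries whose source address lies within a subnet attached to the router. The prefixes below are documentation ranges used as stand-ins.

```python
import ipaddress

# Subnets that stand in for the prefixes attached to the qrouter.
allowed_subnets = [ipaddress.ip_network('192.0.2.0/24'),
                   ipaddress.ip_network('2001:db8::/64')]

def should_answer(src_ip):
    """Answer only if the query's source is on an attached subnet."""
    addr = ipaddress.ip_address(src_ip)
    # membership tests across IP versions simply return False
    return any(addr in net for net in allowed_subnets)

tenant_ok = should_answer('192.0.2.10')       # on an attached subnet
internet_no = should_answer('198.51.100.1')   # arbitrary Internet source
```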
[Yahoo-eng-team] [Bug 1736385] Re: placement is not being properly restarted in grenade (pike to master)
Not sure it's something needing a change in Nova... ** Changed in: nova Status: New => Opinion -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1736385 Title: placement is not being properly restarted in grenade (pike to master) Status in devstack: In Progress Status in grenade: In Progress Status in OpenStack Compute (nova): Opinion Bug description: When the placement service is supposed to restart in grenade (pike to master) it doesn't actually restart: http://logs.openstack.org/93/385693/84/check/legacy-grenade-dsvm- neutron-multinode-live- migration/9fa93e0/logs/grenade.sh.txt.gz#_2017-12-05_00_08_01_111 This leads to issues with new microversions not being available: http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Unacceptable%20version%20header%3A%201.14%5C%22 This is a latent bug that was revealed, at least in part, by efried's (correct) changes in https://review.openstack.org/#/c/524263/ It looks like a bad assumption is being made somewhere in the handling of the systemd unit files: a 'start' when the unit is already started succeeds, but does not restart (thus new code is not loaded). We can probably fix this by using the 'restart' command instead of 'start': restart PATTERN... Restart one or more units specified on the command line. If the units are not running yet, they will be started. Adding grenade and devstack as related projects, as the fix is presumably in devstack itself. To manage notifications about this bug go to: https://bugs.launchpad.net/devstack/+bug/1736385/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1732593] Re: TypeError: archive_deleted_rows got multiple values for keyword arguments 'max_rows'
Reviewed: https://review.openstack.org/520765 Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=0f464e55a856477a5b77ce8ce7efacf9bc139735 Submitter: Zuul Branch: master commit 0f464e55a856477a5b77ce8ce7efacf9bc139735 Author: Matt Riedemann Date: Thu Nov 16 15:34:32 2017 -0500 Fix TypeError in nova-manage db archive_deleted_rows If you run the archive_deleted_rows command without the --max_rows option name but just pass in a value, it will fail with a TypeError: venv runtests: commands[0] | nova-manage db archive_deleted_rows 1000 An error has occurred: Traceback (most recent call last): File "/home/user/git/nova/nova/cmd/manage.py", line 1924, in main ret = fn(*fn_args, **fn_kwargs) TypeError: archive_deleted_rows() got multiple values for \ keyword argument 'max_rows' This fixes it by setting a dest and moving the default to the kwarg. Change-Id: I1e60c571a8e9b875f89af6695f5427c801c8c53b Closes-Bug: #1732593 ** Changed in: nova Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1732593 Title: TypeError: archive_deleted_rows got multiple values for keyword arguments 'max_rows' Status in OpenStack Compute (nova): Fix Released Status in OpenStack Compute (nova) ocata series: New Status in OpenStack Compute (nova) pike series: New Bug description: If you run archive_deleted_rows subcommand with following parameters, it raises TypeError exception. 
$ nova-manage db archive_deleted_rows 1000 1000

Output:

An error has occurred:
Traceback (most recent call last):
  File "/opt/stack/nova/nova/cmd/manage.py", line 1924, in main
    ret = fn(*fn_args, **fn_kwargs)
TypeError: archive_deleted_rows() got multiple values for argument 'max_rows'

Environment details:

commit 91f62818c3ab5f7f7cee11df7a7b7d3ce290ecb8
Merge: a6280e5 c09eaf8
Author: Jenkins
Date: Sun Sep 10 18:31:13 2017 +
    Merge "Update OS_AUTH_URL in Configuration.rst"

nova:
commit b7f53a33faf6187ad0b16e45cb14ece07892f7bc
Merge: e48db05 a9d9255
Author: Zuul
Date: Wed Nov 8 07:16:14 2017 +
    Merge "Fix return type in FilterScheduler._legacy_find_hosts"

Reason: fn_kwargs carries the default value of max_rows (1000) rather than the value passed on the command line, so the call supplies 'max_rows' twice and raises the TypeError above.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1732593/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
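The failure mode reduces to a plain Python calling-convention clash. A minimal sketch (the fn_args/fn_kwargs plumbing mimics manage.py's dispatch; the function body is a stand-in, not nova's actual code):

```python
def archive_deleted_rows(max_rows=1000):
    # Stand-in for the real command handler; only the signature matters.
    return max_rows

# The bug's shape: the dispatcher collected the positional CLI value into
# fn_args while a 'max_rows' default also sat in fn_kwargs, so the
# function received the argument twice.
fn_args, fn_kwargs = [500], {'max_rows': 1000}
try:
    archive_deleted_rows(*fn_args, **fn_kwargs)
except TypeError as exc:
    print(exc)  # ... got multiple values for argument 'max_rows'

# The fix keeps the value in exactly one place; here, the kwargs dict
# (mirroring "setting a dest and moving the default to the kwarg").
fn_args, fn_kwargs = [], {'max_rows': 500}
print(archive_deleted_rows(*fn_args, **fn_kwargs))  # 500
```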
[Yahoo-eng-team] [Bug 1736220] Re: TypeError: __init__() got an unexpected keyword argument 'topics'
Reviewed: https://review.openstack.org/525290
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=13a65cf8a8cf550ee6df7ac35c4a63857e7b4eeb
Submitter: Zuul
Branch: master

commit 13a65cf8a8cf550ee6df7ac35c4a63857e7b4eeb
Author: Harald Jensas
Date: Mon Dec 4 20:10:03 2017 +0100

    FakeNotifier class 'topic' argument change to 'topics'.

    Oslo.messaging commit: 2d53db6c51c2ac2ccddda210906c1e6418557470 changed topic to be a list.

    Change-Id: I24032c91d2f01687009d6e32a972d34b248962c4
    Closes-Bug: #1736220

** Changed in: neutron
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1736220

Title: TypeError: __init__() got an unexpected keyword argument 'topics'

Status in neutron: Fix Released

Bug description:
The FakeNotifier class 'topic' argument needs to be updated to 'topics'. The Notifier class in oslo.messaging was changed in Change-Id: Id89957411aa219cff92fafec2f448c81cb57b3ca

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1736220/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
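The breakage is a simple keyword rename. A stand-in sketch (this FakeNotifier is illustrative, not neutron's actual test double):

```python
class FakeNotifier:
    # After the oslo.messaging change, the notifier takes a list-valued
    # 'topics' keyword instead of a single 'topic' string; test doubles
    # must mirror the new signature or instantiation fails with
    # "unexpected keyword argument 'topics'".
    def __init__(self, transport, publisher_id=None, topics=None):
        self.transport = transport
        self.publisher_id = publisher_id
        self.topics = topics or []

notifier = FakeNotifier(transport=None, topics=['notifications'])
print(notifier.topics)  # ['notifications']
```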
[Yahoo-eng-team] [Bug 1736395] [NEW] Unit tests unable to run on Windows
Public bug reported:

The libvirt module cannot be imported on Windows due to some platform-specific dependencies (e.g. 'pwd'). The 'PoisonFunctions' fixture attempts to import 'nova.virt.libvirt', which is why all the unit tests fail to run on Windows. We would like to at least have the Hyper-V unit tests running on Windows.

Trace: http://paste.openstack.org/raw/628147/

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1736395

Title: Unit tests unable to run on Windows

Status in OpenStack Compute (nova): New

Bug description:
The libvirt module cannot be imported on Windows due to some platform-specific dependencies (e.g. 'pwd'). The 'PoisonFunctions' fixture attempts to import 'nova.virt.libvirt', which is why all the unit tests fail to run on Windows. We would like to at least have the Hyper-V unit tests running on Windows.

Trace: http://paste.openstack.org/raw/628147/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1736395/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
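The platform gap can be demonstrated without nova at all (a minimal illustration; the fixture fix itself would guard the nova.virt.libvirt import):

```python
import importlib.util
import sys

def importable(name):
    """Return True if the named module can be found on this platform."""
    return importlib.util.find_spec(name) is not None

# 'pwd' is a POSIX-only stdlib module: on Windows find_spec() returns
# None, which is why importing nova.virt.libvirt (which pulls it in)
# fails there. A fixture could use a check like this to skip poisoning
# libvirt-only modules on win32.
print(sys.platform, importable('pwd'))
```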
[Yahoo-eng-team] [Bug 1733317] Re: placement: the note of aggregates link is missing in resource provider APIs of API reference
Reviewed: https://review.openstack.org/521502
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=adac748c9ea5939095ad9406244185baefd0e622
Submitter: Zuul
Branch: master

commit adac748c9ea5939095ad9406244185baefd0e622
Author: Takashi NATSUME
Date: Mon Nov 20 20:45:24 2017 +0900

    [placement] Add aggregate link note in API ref

    The aggregate link has been added in resource provider APIs since microversion 1.1, but the note is missing in the Placement API reference. Add the note.

    The allocations link is missing in the response example of the "Update resource provider" API. Add it as well.

    Change-Id: I325ff34c8b436429c2a2623cf1fb16b368807d29
    Closes-Bug: #1733317

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1733317

Title: placement: the note of aggregates link is missing in resource provider APIs of API reference

Status in OpenStack Compute (nova): Fix Released

Bug description:
The aggregate link has been added in resource provider APIs since microversion 1.1, but the note about it is missing in the Placement API reference. The allocations link is also missing in the response example of the "Update resource provider" API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1733317/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1727719] Re: Placement: resource provider can't be deleted if there is trait associated with it
Reviewed: https://review.openstack.org/516880
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=5de3317d68df8fdc6fdaff62baa62822c669bb93
Submitter: Zuul
Branch: master

commit 5de3317d68df8fdc6fdaff62baa62822c669bb93
Author: Takashi NATSUME
Date: Wed Nov 1 14:03:12 2017 +0900

    [placement] Fix foreign key constraint error

    When deleting a resource provider, if there are traits on the resource provider, a foreign key constraint error occurs. So delete the trait associations for the resource provider (records in the 'resource_provider_traits' table) when deleting the resource provider.

    Change-Id: I6874567a14beb9b029765bf49067af6de17f2bd2
    Closes-Bug: #1727719

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1727719

Title: Placement: resource provider can't be deleted if there is trait associated with it

Status in OpenStack Compute (nova): Fix Released
Status in OpenStack Compute (nova) pike series: New

Bug description:
nova: 6d61c61a32

To reproduce:
1. Create a new RP.
2. Associate a trait with it.
3. Try to delete the RP.
Error:

Response: {"computeFault": {"message": "The server has either erred or is incapable of performing the requested operation.", "code": 500}}

(Pdb) __exception__
(, DBReferenceError(u"(pymysql.err.IntegrityError) (1451, u'Cannot delete or update a parent row: a foreign key constraint fails (`nova_api`.`resource_provider_traits`, CONSTRAINT `resource_provider_traits_ibfk_1` FOREIGN KEY (`resource_provider_id`) REFERENCES `resource_providers` (`id`))') [SQL: u'DELETE FROM resource_providers WHERE resource_providers.id = %(id_1)s'] [parameters: {u'id_1': 17}]",))

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1727719/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
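The constraint failure and the fix are easy to reproduce in miniature with sqlite3 standing in for the nova_api MySQL schema (table names kept, everything else simplified):

```python
import sqlite3

# Miniature reproduction of the bug and the fix. sqlite3 stands in for
# MySQL; the tables are stripped-down versions of resource_providers
# and resource_provider_traits.
conn = sqlite3.connect(':memory:')
conn.execute('PRAGMA foreign_keys = ON')
conn.execute('CREATE TABLE resource_providers (id INTEGER PRIMARY KEY)')
conn.execute('''CREATE TABLE resource_provider_traits (
    resource_provider_id INTEGER REFERENCES resource_providers (id))''')
conn.execute('INSERT INTO resource_providers VALUES (17)')
conn.execute('INSERT INTO resource_provider_traits VALUES (17)')

try:
    # Deleting the parent while trait rows still reference it fails.
    conn.execute('DELETE FROM resource_providers WHERE id = 17')
except sqlite3.IntegrityError as exc:
    print(exc)  # FOREIGN KEY constraint failed

# The fix: remove the trait associations first, then the provider.
conn.execute('DELETE FROM resource_provider_traits '
             'WHERE resource_provider_id = 17')
conn.execute('DELETE FROM resource_providers WHERE id = 17')
```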
[Yahoo-eng-team] [Bug 1732947] Re: volume-backed instance rebuild with no image change is still going through scheduler
Reviewed: https://review.openstack.org/521391
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=54407afef3b446e34c44ba5392ccbf121e266de2
Submitter: Zuul
Branch: master

commit 54407afef3b446e34c44ba5392ccbf121e266de2
Author: Matt Riedemann
Date: Sun Nov 19 17:25:56 2017 -0500

    Get original image_id from volume for volume-backed instance rebuild

    A volume-backed instance will not have the instance.image_ref attribute set, to indicate to the API user that it is a volume-backed instance. Commit 984dd8ad6add4523d93c7ce5a666a32233e02e34 missed this subtle difference in how instance.image_ref is used, which means a rebuild of any volume-backed instance will now run through the scheduler, even if the image_href passed to rebuild is the same image ID as for the root volume.

    This fixes that case in rebuild by getting the image metadata off the root volume for a volume-backed instance and comparing that to the image_href passed to rebuild.

    Change-Id: I48cda813b9effa37f6c3e0cd2e8a22bb78c79d72
    Closes-Bug: #1732947

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1732947 Title: volume-backed instance rebuild with no image change is still going through scheduler Status in OpenStack Compute (nova): Fix Released Status in OpenStack Compute (nova) newton series: Confirmed Status in OpenStack Compute (nova) ocata series: Confirmed Status in OpenStack Compute (nova) pike series: Confirmed Bug description: Due to this recent change: https://github.com/openstack/nova/commit/984dd8ad6add4523d93c7ce5a666a32233e02e34 And the fact that volume-backed instances don't store the instance.image_ref value to indicate they are volume-backed, this condition will always evaluate to True: https://github.com/openstack/nova/blob/984dd8ad6add4523d93c7ce5a666a32233e02e34/nova/compute/api.py#L2959 And we'll unnecessarily call through the scheduler again to validate the instance on the original host during the rebuild even if the image isn't changing. For a volume-backed instance, we have to get the original image_id from the volume's volume_image_metadata field: https://review.openstack.org/#/c/520686/ To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1732947/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
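The comparison the fix adds can be sketched as follows ('volume_image_metadata' and 'image_id' are the Cinder fields named in the bug text; the helper itself is hypothetical, not nova's code):

```python
def needs_scheduler_run(instance_image_ref, root_volume, rebuild_image_href):
    """Decide whether a rebuild changes the image (sketch, not nova code).

    Volume-backed instances leave instance.image_ref unset, so the
    original image must come from the root volume's
    volume_image_metadata instead of the instance record.
    """
    if instance_image_ref:
        orig_image = instance_image_ref
    else:
        # Cinder stores the source image's metadata on the volume.
        orig_image = root_volume['volume_image_metadata']['image_id']
    return orig_image != rebuild_image_href

vol = {'volume_image_metadata': {'image_id': 'abc'}}
print(needs_scheduler_run('', vol, 'abc'))  # False: same image, skip scheduler
```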
[Yahoo-eng-team] [Bug 1736385] [NEW] placement is not being properly restarted in grenade (pike to master)
Public bug reported:

When the placement service is supposed to restart in grenade (pike to master) it doesn't actually restart:
http://logs.openstack.org/93/385693/84/check/legacy-grenade-dsvm-neutron-multinode-live-migration/9fa93e0/logs/grenade.sh.txt.gz#_2017-12-05_00_08_01_111

This leads to issues with new microversions not being available:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Unacceptable%20version%20header%3A%201.14%5C%22

This is a latent bug that was revealed, at least in part, by efried's (correct) changes in https://review.openstack.org/#/c/524263/

It looks like a bad assumption is being made somewhere in the handling of the systemd unit files: a 'start' on an already-started unit succeeds but does not restart it, so new code is not loaded. We can probably fix this by using the 'restart' command instead of 'start':

    restart PATTERN...
        Restart one or more units specified on the command line. If the units are not running yet, they will be started.

Adding grenade and devstack as related projects, as the fix is presumably in devstack itself.

** Affects: devstack
   Importance: Undecided
   Status: New

** Affects: grenade
   Importance: Undecided
   Status: New

** Affects: nova
   Importance: Undecided
   Status: New

** Also affects: grenade
   Importance: Undecided
   Status: New

** Also affects: devstack
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1736385

Title: placement is not being properly restarted in grenade (pike to master)

Status in devstack: New
Status in grenade: New
Status in OpenStack Compute (nova): New

Bug description:
When the placement service is supposed to restart in grenade (pike to master) it doesn't actually restart:
http://logs.openstack.org/93/385693/84/check/legacy-grenade-dsvm-neutron-multinode-live-migration/9fa93e0/logs/grenade.sh.txt.gz#_2017-12-05_00_08_01_111

This leads to issues with new microversions not being available:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Unacceptable%20version%20header%3A%201.14%5C%22

This is a latent bug that was revealed, at least in part, by efried's (correct) changes in https://review.openstack.org/#/c/524263/

It looks like a bad assumption is being made somewhere in the handling of the systemd unit files: a 'start' on an already-started unit succeeds but does not restart it, so new code is not loaded. We can probably fix this by using the 'restart' command instead of 'start':

    restart PATTERN...
        Restart one or more units specified on the command line. If the units are not running yet, they will be started.

Adding grenade and devstack as related projects, as the fix is presumably in devstack itself.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1736385/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
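The 'start' vs 'restart' semantics described above can be modeled with a toy unit (illustrative only; not the systemd API):

```python
class Unit:
    """Toy model of a systemd service unit."""
    def __init__(self):
        self.active = False
        self.loaded_code = None

    def start(self, code):
        # 'start' on an already-active unit reports success but is a
        # no-op: the old code keeps running. This is the bad assumption
        # grenade tripped over.
        if not self.active:
            self.active = True
            self.loaded_code = code

    def restart(self, code):
        # 'restart' always (re)loads, and starts the unit if it was not
        # running, so it is safe to use unconditionally.
        self.active = True
        self.loaded_code = code

placement = Unit()
placement.start('pike')       # service running the old code
placement.start('master')     # succeeds, but still runs 'pike'
print(placement.loaded_code)  # pike
placement.restart('master')   # the proposed fix
print(placement.loaded_code)  # master
```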
[Yahoo-eng-team] [Bug 1736114] Re: metadata_agent.ini cannot be built reproducibly
Reviewed: https://review.openstack.org/525159
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=a20845048a87b99e17f3caf204e4b88199442d8d
Submitter: Zuul
Branch: master

commit a20845048a87b99e17f3caf204e4b88199442d8d
Author: Thomas Goirand
Date: Mon Dec 4 12:28:35 2017 +0100

    Build metadata_agent.ini reproducibly

    Currently, when metadata_agent.ini is built, the default value for the metadata_workers directive is the build host's number of CPUs. This means metadata_agent.ini cannot be built reproducibly, which distributions consider a bug. See for Debian: https://wiki.debian.org/ReproducibleBuilds/About

    This patch therefore uses the oslo.config sample_default directive to hardcode a value in the generated configuration file that does not depend on the build environment.

    Change-Id: I7292d09b96f90d0477dd4b59766854a733e1da38
    Closes-Bug: #1736114

** Changed in: neutron
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1736114

Title: metadata_agent.ini cannot be built reproducibly

Status in neutron: Fix Released

Bug description:
Hi,

When generating metadata_agent.ini, the metadata_workers directive's default value is filled with the number of CPUs of the host used when building the file. This makes the whole Neutron package not reproducible.
The config code is like this (from neutron/conf/agent/metadata/config.py):

    cfg.IntOpt('metadata_workers',
               default=host.cpu_count() // 2,
               help=_('Number of separate worker processes for metadata '
                      'server (defaults to half of the number of CPUs)')),

Instead of writing this, the default value should be set to None, and whenever something fetches the metadata_workers value, something like this should be used (probably with a // 2 added if we want to retain the behavior above):

    def get_num_metadata_workers():
        """Return the configured number of workers."""
        if CONF.metadata_workers is None:
            # None implies the number of CPUs
            return processutils.get_worker_count()
        return CONF.metadata_workers

This way, the value really is taken at runtime, not at build time, which is probably what the original author intended. Note that this type of fix has already been written in Glance and many other OpenStack packages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1736114/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
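The suggested runtime fallback is straightforward to express without oslo.config (multiprocessing stands in for oslo's processutils; the function name follows the snippet above, the 'configured' parameter is a stand-in for CONF.metadata_workers):

```python
import multiprocessing

def get_num_metadata_workers(configured=None):
    """Return the worker count, deferring the CPU lookup to runtime.

    The generated sample file records None, so builds are reproducible,
    and the CPU count is consulted only on the machine actually running
    the agent.
    """
    if configured is None:
        # None implies half the number of CPUs, per the option's help text.
        return max(1, multiprocessing.cpu_count() // 2)
    return configured

print(get_num_metadata_workers(4))  # explicit setting wins: 4
print(get_num_metadata_workers())   # runtime-derived default
```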
[Yahoo-eng-team] [Bug 1736367] [NEW] Create Instance failed (Internal error)
Public bug reported:

Hi, I have installed OpenStack on my CentOS 7 machine, which is hosted on a VMware Workstation VM. glance images-list works fine, but nova image-list fails and creating a VM instance fails. The log file is the following:

/var/log/nova/nova-api.conf

2017-12-05 17:26:38.052 2353 INFO nova.osapi_compute.wsgi.server [req-aa4a2b3c-d623-4a4e-bce2-59328d7154ce a423338af20648268007ce109c139069 f1df077dd06349b8a919e189441b736d - default default] 192.168.30.130 "GET /v2.1/flavors/f31fa21a-9730-4b5d-be40-b94cca89f576/os-extra_specs HTTP/1.1" status: 200 len: 351 time: 0.0245221
2017-12-05 17:26:38.101 2350 INFO nova.osapi_compute.wsgi.server [req-11bf5ef8-7dae-4a78-b795-e53f27e2ea52 a423338af20648268007ce109c139069 f1df077dd06349b8a919e189441b736d - default default] 192.168.30.130 "GET /v2.1/limits?reserved=1 HTTP/1.1" status: 200 len: 844 time: 2.2066791
2017-12-05 17:26:38.456 2350 INFO nova.osapi_compute.wsgi.server [req-494b7dfb-fce1-4b9f-b292-fe42fc04464d a423338af20648268007ce109c139069 f1df077dd06349b8a919e189441b736d - default default] 192.168.30.130 "GET /v2.1/os-server-groups HTTP/1.1" status: 200 len: 353 time: 0.2469988
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions [req-d7e00de6-0ddd-44e9-a0b7-245e1de4fca5 a423338af20648268007ce109c139069 f1df077dd06349b8a919e189441b736d - default default] Unexpected exception in API method
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions Traceback (most recent call last):
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 338, in wrapped
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions     return f(*args, **kwargs)
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, in wrapper
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, in wrapper
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, in wrapper
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, in wrapper
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, in wrapper
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, in wrapper
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/openstack/compute/servers.py", line 642, in create
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions     **create_kwargs)
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/hooks.py", line 154, in inner
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions     rv = f(*args, **kwargs)
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/compute/api.py", line 1620, in create
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions     check_server_group_quota=check_server_group_quota)
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/compute/api.py", line 1170, in _create_instance
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions     image_id, boot_meta = self._get_image(context, image_href)
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/compute/api.py", line 806, in _get_image
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions     image = self.image_api.get(context, image_href)
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/image/api.py", line 95, in get
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions     show_deleted=show_deleted)
2017-12-05 17:26:50.482 2350 ERROR nova.api.openstack.extensions   File
[Yahoo-eng-team] [Bug 1736351] [NEW] VIP port is created with admin_state_up: False
Public bug reported:

The VIP port is created with admin_state_up: False, i.e. the port is disabled. SDN controllers and Neutron backends which respect this field will not set up the port to receive connections. It seems this field is modified in the HAProxy agent, but the agent appears not to be used with e.g. octavia.

** Affects: neutron
   Importance: Undecided
   Assignee: Omer Anson (omer-anson)
   Status: New

** Tags: lbaas

** Changed in: neutron
   Assignee: (unassigned) => Omer Anson (omer-anson)

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1736351

Title: VIP port is created with admin_state_up: False

Status in neutron: New

Bug description:
The VIP port is created with admin_state_up: False, i.e. the port is disabled. SDN controllers and Neutron backends which respect this field will not set up the port to receive connections. It seems this field is modified in the HAProxy agent, but the agent appears not to be used with e.g. octavia.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1736351/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
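What backends expect can be shown with the port-create body ('admin_state_up' is the real Neutron port attribute; the surrounding dict shape follows the port-create request format, and the values are placeholders):

```python
# Request body a LBaaS driver would send so that backends honoring
# admin_state_up actually wire up the VIP. Today the port is created
# with admin_state_up False, i.e. administratively down.
vip_port_body = {
    'port': {
        'name': 'loadbalancer-vip',
        'network_id': 'NETWORK_UUID',  # placeholder
        'admin_state_up': True,        # currently False, which is the bug
    }
}
print(vip_port_body['port']['admin_state_up'])  # True
```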