[Yahoo-eng-team] [Bug 1615577] Re: fwaas db migration failure with postgres
** Changed in: networking-midonet
       Status: New => Fix Released

** Changed in: networking-midonet
     Assignee: (unassigned) => YAMAMOTO Takashi (yamamoto)

** Changed in: networking-midonet
    Milestone: None => 3.0.0

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1615577

Title:
  fwaas db migration failure with postgres

Status in networking-midonet:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  Traceback (most recent call last):
    File "/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/test_migrations.py", line 602, in test_models_sync
      self.db_sync(self.get_engine())
    File "midonet/neutron/tests/unit/db/test_migrations.py", line 102, in db_sync
      migration.do_alembic_command(conf, 'upgrade', 'heads')
    File "/opt/stack/networking-midonet/.tox/py27/src/neutron/neutron/db/migration/cli.py", line 108, in do_alembic_command
      getattr(alembic_command, cmd)(config, *args, **kwargs)
    File "/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/alembic/command.py", line 174, in upgrade
      script.run_env()
    File "/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/alembic/script/base.py", line 407, in run_env
      util.load_python_file(self.dir, 'env.py')
    File "/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/alembic/util/pyfiles.py", line 93, in load_python_file
      module = load_module_py(module_id, path)
    File "/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/alembic/util/compat.py", line 79, in load_module_py
      mod = imp.load_source(module_id, path, fp)
    File "/opt/stack/networking-midonet/.tox/py27/src/neutron-fwaas/neutron_fwaas/db/migration/alembic_migrations/env.py", line 86, in <module>
      run_migrations_online()
    File "/opt/stack/networking-midonet/.tox/py27/src/neutron-fwaas/neutron_fwaas/db/migration/alembic_migrations/env.py", line 77, in run_migrations_online
      context.run_migrations()
    File "<string>", line 8, in run_migrations
    File "/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/alembic/runtime/environment.py", line 797, in run_migrations
      self.get_context().run_migrations(**kw)
    File "/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/alembic/runtime/migration.py", line 312, in run_migrations
      step.migration_fn(**kw)
    File "/opt/stack/networking-midonet/.tox/py27/src/neutron-fwaas/neutron_fwaas/db/migration/alembic_migrations/versions/d6a12e637e28_neutron_fwaas_v2_0.py", line 61, in upgrade
      sa.Column('enabled', sa.Boolean))
    File "<string>", line 8, in create_table
    File "<string>", line 3, in create_table
    File "/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/alembic/operations/ops.py", line 1098, in create_table
      return operations.invoke(op)
    File "/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/alembic/operations/base.py", line 318, in invoke
      return fn(self, operation)
    File "/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/alembic/operations/toimpl.py", line 101, in create_table
      operations.impl.create_table(table)
    File "/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/alembic/ddl/impl.py", line 193, in create_table
      _ddl_runner=self)
    File "/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/event/attr.py", line 256, in __call__
      fn(*args, **kw)
    File "/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 546, in __call__
      return getattr(self.target, self.name)(*arg, **kw)
    File "/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/sql/sqltypes.py", line 1030, in _on_table_create
      t._on_table_create(target, bind, **kw)
    File "/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/dialects/postgresql/base.py", line 1369, in _on_table_create
      self.create(bind=bind, checkfirst=checkfirst)
    File "/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/dialects/postgresql/base.py", line 1317, in create
      bind.execute(CreateEnumType(self))
    File "/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 914, in execute
      return meth(self, multiparams, params)
    File "/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/sql/ddl.py", line 68,
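The `CreateEnumType` frames above are the giveaway: SQLAlchemy's PostgreSQL dialect emits a separate CREATE TYPE for enum columns at table-creation time, a step MySQL never needs because it inlines the allowed values in the column definition. A toy stand-in for that dialect difference (illustrative names only, not SQLAlchemy's real code):

```python
def enum_column_ddl(dialect, type_name, values):
    """Toy rendering of an enum column per backend.

    Returns (statements_to_run_first, column_type_sql).  MySQL inlines
    the allowed values; PostgreSQL requires a named type created by a
    separate CREATE TYPE statement before the CREATE TABLE -- the extra
    statement whose execution fails in the traceback above.
    """
    rendered = ", ".join("'%s'" % v for v in values)
    if dialect == "mysql":
        return [], "ENUM(%s)" % rendered
    if dialect == "postgresql":
        return (["CREATE TYPE %s AS ENUM (%s)" % (type_name, rendered)],
                type_name)
    raise ValueError("unknown dialect: %s" % dialect)
```

A migration that runs cleanly on MySQL can therefore still fail on PostgreSQL at exactly this CREATE TYPE step, which is why gate runs against both backends catch it.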
[Yahoo-eng-team] [Bug 1622833] [NEW] timestamp mechanism in linux bridge false positives
Public bug reported:

The linux bridge agent is picking up too many false positives in its
detection mechanism for when devices have been modified locally. In the
following, the 4 tap devices attached to a particular bridge had
timestamps that jumped forward even though none of the interfaces
actually changed:

2016-09-13 00:13:38.744 14179 DEBUG neutron.plugins.ml2.drivers.agent._common_agent [req-82c02245-80fd-4712-baa6-cdd4033315d1 - -] Adding locally changed devices to updated set: set(['tap422b85d9-95', 'tap9b365584-34', 'tapee2684f8-51', 'tap66ef2d8e-3b']) scan_devices /opt/stack/new/neutron/neutron/plugins/ml2/drivers/agent/_common_agent.py:397
2016-09-13 00:13:38.744 14179 DEBUG neutron.plugins.ml2.drivers.agent._common_agent [req-82c02245-80fd-4712-baa6-cdd4033315d1 - -] Agent loop found changes! {'current': set(['tap422b85d9-95', 'tapee2684f8-51', 'tap6028e7a2-c0', 'tap9b365584-34', 'tap0960ffac-f9', 'tap7ba5f865-54', 'tap66ef2d8e-3b', 'tapfe427ba3-63', 'tap475f33ef-c3']), 'timestamps': {'tap422b85d9-95': 1473725618.73996, 'tapee2684f8-51': 1473725618.73996, 'tap6028e7a2-c0': None, 'tap9b365584-34': 1473725618.73996, 'tap0960ffac-f9': 1473725618.73996, 'tap7ba5f865-54': 1473725616.7399597, 'tap66ef2d8e-3b': 1473725618.73996, 'tapfe427ba3-63': 1473725616.7399597, 'tap475f33ef-c3': None}, 'removed': set([]), 'added': set([]), 'updated': set(['tap422b85d9-95', 'tap9b365584-34', 'tapee2684f8-51', 'tap66ef2d8e-3b'])} daemon_loop /opt/stack/new/neutron/neutron/plugins/ml2/drivers/agent/_common_agent.py:448

This leads to the agent refetching the details, which puts the port in
BUILD and then back to ACTIVE, causing sporadic failures when tempest
tests assert that a port should be in the ACTIVE status.
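The detection step that misfires can be sketched as follows (a minimal stand-in with a hypothetical function name, not the real neutron agent code): a device lands in the updated set whenever its recorded timestamp differs from the previous loop's, so any spurious timestamp jump, as in the log above, triggers a refetch.

```python
def locally_changed_devices(previous_timestamps, current_timestamps):
    """Devices whose timestamp changed between two agent loops.

    Both arguments map device name -> timestamp (None when the
    timestamp could not be read).  A device is flagged only when both
    loops produced a timestamp and the values differ -- which is
    exactly why a spurious forward jump yields a false positive.
    """
    return {dev for dev, ts in current_timestamps.items()
            if previous_timestamps.get(dev) is not None
            and ts is not None
            and previous_timestamps[dev] != ts}
```

With this logic, any mechanism that rewrites the timestamps in bulk (without a real interface change) makes every affected device look locally modified at once, matching the four simultaneous false positives in the log.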
** Affects: neutron
     Importance: Undecided
       Assignee: Kevin Benton (kevinbenton)
         Status: New

** Changed in: neutron
     Assignee: (unassigned) => Kevin Benton (kevinbenton)

** Changed in: neutron
    Milestone: None => newton-rc1

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622833

Title:
  timestamp mechanism in linux bridge false positives

Status in neutron:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1622833/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1622831] [NEW] The allocation record can't be removed after migration/failure
Public bug reported:

Currently update_available_resource only updates the instances that are
still located on the host. As a result, after a resize there is still
an allocation record on the old host. Also, when an instance is removed
locally from the db while the compute node is offline, the allocation
record won't be cleaned up after the compute node starts up again.

** Affects: nova
     Importance: Undecided
       Assignee: Alex Xu (xuhj)
         Status: Invalid

** Changed in: nova
     Assignee: (unassigned) => Alex Xu (xuhj)

** Changed in: nova
       Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1622831

Title:
  The allocation record can't be removed after migration/failure

Status in OpenStack Compute (nova):
  Invalid

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1622831/+subscriptions
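The cleanup pass the report asks for can be sketched like this (hypothetical names, not nova's actual resource-tracker code): compare the allocations recorded against a host with the set of instances actually living there, and flag the leftovers.

```python
def stale_allocations(allocations, host, instances_on_host):
    """Allocation records held against `host` whose consumer instance
    no longer lives there (e.g. resized away, or deleted while the
    compute node was offline).

    allocations: dict mapping instance uuid -> host the allocation is
    recorded against.
    """
    return {uuid for uuid, alloc_host in allocations.items()
            if alloc_host == host and uuid not in instances_on_host}
```

A periodic task running this comparison on startup and after each resource-audit pass would catch both failure modes described above.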
[Yahoo-eng-team] [Bug 1503179] Re: nic ordering is inconsistent between hard reboots
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
       Status: Incomplete => Expired

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1503179

Title:
  nic ordering is inconsistent between hard reboots

Status in OpenStack Compute (nova):
  Expired

Bug description:
  If an instance is assigned to several networks, the nic ordering is
  inconsistent between hard reboots (for neutron). This information can
  be found in the interfaces section of the instance XML file.

  Related bug (for nova-network): https://bugs.launchpad.net/nova/+bug/1405271

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1503179/+subscriptions
[Yahoo-eng-team] [Bug 1599381] Re: [RFE] Add bandwidth quota for a tenant's floating-ips
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
       Status: Incomplete => Expired

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1599381

Title:
  [RFE] Add bandwidth quota for a tenant's floating-ips

Status in neutron:
  Expired

Bug description:
  In certain scenarios, a tenant's maximum available floating-ip
  bandwidth should be added to the quota list.

  Use cases
  =========
  In some commercial deployments of neutron, such as VNF cases using
  TECS of ZTE, the administrator needs the privilege to limit the total
  egress bandwidth of all floating IPs belonging to a tenant, because
  the physical network is expensive. For now, however, it is not
  possible to limit bandwidth as a per-tenant resource:

  [root@localhost devstack]# neutron quota-update --tenant-id d9cc08fe87ee49f08020baa95893e2ef --floatingips-bandwidth 10
  Unrecognized attribute(s) 'floatingips_bandwidth'
  Neutron server returns request_ids: ['req-2b534556-5c97-4c40-9f0e-5e488403cd1b']

  I think it should be:
  =====================
  While the bandwidth quota is set, first the tenant's floating IPs
  should be limited by a QoS policy with a qos-bandwidth-limit-rule,
  referring to https://bugs.launchpad.net/neutron/+bug/1596611.
  Secondly, the sum of all floating-ip bandwidths should not be greater
  than the quota configuration. This implementation would bring more
  usability in commercial scenarios for OpenStack.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1599381/+subscriptions
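The proposed quota check reduces to a simple sum comparison. A sketch under assumed semantics (hypothetical function name, and the usual "negative quota means unlimited" neutron convention; this is not an implemented API):

```python
def within_fip_bandwidth_quota(existing_limits_mbps, requested_mbps,
                               quota_mbps):
    """True when adding a floating IP with the requested bandwidth
    limit keeps the tenant's total within the proposed
    floatingips-bandwidth quota.  A negative quota is treated as
    unlimited, following the usual neutron quota convention.
    """
    if quota_mbps < 0:
        return True
    return sum(existing_limits_mbps) + requested_mbps <= quota_mbps
```

The per-FIP limits themselves would still be enforced by QoS bandwidth-limit rules, as the RFE suggests; the quota check only gates whether a new rule may be created.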
[Yahoo-eng-team] [Bug 1451506] Re: spawn failed with "libvirtError: internal error: received hangup / error event on socket" in the gate
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
       Status: Incomplete => Expired

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1451506

Title:
  spawn failed with "libvirtError: internal error: received hangup /
  error event on socket" in the gate

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Looks like libvirt was temporarily disconnected, which caused the
  spawn failure:

  http://logs.openstack.org/80/170780/11/gate/gate-tempest-dsvm-postgres-full/b2e3fd4/logs/screen-n-cpu.txt.gz?level=TRACE#_2015-05-04_16_49_01_948

  2015-05-04 16:49:01.948 ERROR nova.compute.manager [req-e9676551-d3ed-4f85-ab2d-34ce9e1b1446 TestServerAdvancedOps-1668477466 TestServerAdvancedOps-559926363] [instance: 2d8249a5-df5a-4c8b-b41b-1c07f27ddb96] Instance failed to spawn
  2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 2d8249a5-df5a-4c8b-b41b-1c07f27ddb96] Traceback (most recent call last):
  2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]   File "/opt/stack/new/nova/nova/compute/manager.py", line 2475, in _build_resources
  2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]     yield resources
  2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]   File "/opt/stack/new/nova/nova/compute/manager.py", line 2347, in _build_and_run_instance
  2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]     block_device_info=block_device_info)
  2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2355, in spawn
  2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]     block_device_info=block_device_info)
  2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 4393, in _create_domain_and_network
  2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]     power_on=power_on)
  2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 4324, in _create_domain
  2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]     LOG.error(err)
  2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in __exit__
  2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]     six.reraise(self.type_, self.value, self.tb)
  2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 4308, in _create_domain
  2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]     domain = self._conn.defineXML(xml)
  2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, in doit
  2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]     result = proxy_call(self._autowrap, f, *args, **kwargs)
  2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, in proxy_call
  2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]     rv = execute(f, *args, **kwargs)
  2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, in execute
  2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]     six.reraise(c, e, tb)
  2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, in tworker
  2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance: 2d8249a5-df5a-4c8b-b41b-1c07f27ddb96]     rv = meth(*args, **kwargs)
  2015-05-04 16:49:01.948 13039 TRACE nova.compute.manager [instance:
[Yahoo-eng-team] [Bug 1515457] Re: Delete an instance created from volume would be unsuccessful if the connection_info has no volume_id
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
       Status: Incomplete => Expired

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1515457

Title:
  Delete an instance created from volume would be unsuccessful if the
  connection_info has no volume_id

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Description: when deleting an instance created from a volume, nova
  gets the volume metadata like this:

      volume_id = connection_info['data']['volume_id']

  But the connection_info has no volume_id, so this raises an error.

  Version: 2014.1

  Relevant log:

  2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 4d6213cb-4761-49fb-a993-37833f5a6add]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2587, in do_terminate_instance
  2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 4d6213cb-4761-49fb-a993-37833f5a6add]     self._delete_instance(context, instance, bdms, quotas)
  2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 4d6213cb-4761-49fb-a993-37833f5a6add]   File "/usr/lib/python2.7/site-packages/nova/hooks.py", line 103, in inner
  2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 4d6213cb-4761-49fb-a993-37833f5a6add]     rv = f(*args, **kwargs)
  2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 4d6213cb-4761-49fb-a993-37833f5a6add]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2556, in _delete_instance
  2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 4d6213cb-4761-49fb-a993-37833f5a6add]     quotas.rollback()
  2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 4d6213cb-4761-49fb-a993-37833f5a6add]   File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
  2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 4d6213cb-4761-49fb-a993-37833f5a6add]     six.reraise(self.type_, self.value, self.tb)
  2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 4d6213cb-4761-49fb-a993-37833f5a6add]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2528, in _delete_instance
  2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 4d6213cb-4761-49fb-a993-37833f5a6add]     self._shutdown_instance(context, db_inst, bdms)
  2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 4d6213cb-4761-49fb-a993-37833f5a6add]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2463, in _shutdown_instance
  2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 4d6213cb-4761-49fb-a993-37833f5a6add]     requested_networks)
  2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 4d6213cb-4761-49fb-a993-37833f5a6add]   File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
  2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 4d6213cb-4761-49fb-a993-37833f5a6add]     six.reraise(self.type_, self.value, self.tb)
  2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 4d6213cb-4761-49fb-a993-37833f5a6add]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2453, in _shutdown_instance
  2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 4d6213cb-4761-49fb-a993-37833f5a6add]     block_device_info)
  2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 4d6213cb-4761-49fb-a993-37833f5a6add]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1009, in destroy
  2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 4d6213cb-4761-49fb-a993-37833f5a6add]     destroy_disks)
  2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 4d6213cb-4761-49fb-a993-37833f5a6add]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1098, in cleanup
  2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 4d6213cb-4761-49fb-a993-37833f5a6add]     volume_meta = self._get_volume_metadata(context, connection_info)
  2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 4d6213cb-4761-49fb-a993-37833f5a6add]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3642, in _get_volume_metadata
  2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 4d6213cb-4761-49fb-a993-37833f5a6add]     raise exception.InvalidBDMVolume(id=volume_id)
  2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 4d6213cb-4761-49fb-a993-37833f5a6add] UnboundLocalError: local variable 'volume_id' referenced before assignment

  Reproduce steps:
  1. create a volume named a from an image
  2. create an instance from the volume a
  3. delete the instance

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1515457/+subscriptions
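The final frame is a textbook UnboundLocalError: the lookup of `connection_info['data']['volume_id']` fails before `volume_id` is ever bound, so the error path's `raise exception.InvalidBDMVolume(id=volume_id)` crashes with a different exception than intended. A defensive sketch of the lookup (a hypothetical helper, not nova's actual fix):

```python
class InvalidBDMVolume(Exception):
    def __init__(self, id):
        super(InvalidBDMVolume, self).__init__(
            "Invalid volume in block device mapping: %s" % id)


def get_volume_id(connection_info):
    """Look the id up with .get() first so the failure path never
    references an unbound name; raise the intended exception instead
    of UnboundLocalError when the key is missing."""
    volume_id = connection_info.get('data', {}).get('volume_id')
    if volume_id is None:
        raise InvalidBDMVolume(id='<missing volume_id>')
    return volume_id
```

With this shape, deleting the instance fails with a meaningful `InvalidBDMVolume` (or simply skips the metadata step) instead of aborting the teardown with an unrelated `UnboundLocalError`.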
[Yahoo-eng-team] [Bug 1482392] Re: libvirt: Make nova destroy wait for vif unplugged events
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
       Status: Incomplete => Expired

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1482392

Title:
  libvirt: Make nova destroy wait for vif unplugged events

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Nova destroy doesn't wait for vif unplugged events from neutron when
  destroying an instance. Neutron won't send a vif-plugged event if the
  port doesn't change. This can break operations such as rebuild if
  neutron fails to unplug interfaces and nova doesn't know about it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1482392/+subscriptions
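The behaviour the report asks for, holding teardown until every expected "vif unplugged" event arrives or a timeout expires, can be sketched with plain threading primitives (hypothetical class names; nova's real mechanism is its external-event API, not this code):

```python
import threading


class VifUnplugWaiter:
    """Track one event per VIF and report which ones never arrived."""

    def __init__(self):
        self._events = {}

    def expect(self, vif_id):
        """Register a VIF whose unplug notification we will wait for."""
        self._events[vif_id] = threading.Event()

    def deliver(self, vif_id):
        """Called when the network backend reports the VIF unplugged."""
        event = self._events.get(vif_id)
        if event is not None:
            event.set()

    def wait_all(self, timeout):
        """Return the VIFs whose unplug event did not arrive in time;
        the caller can then log them and decide whether to proceed
        with the destroy anyway."""
        return [vif for vif, event in self._events.items()
                if not event.wait(timeout)]
```

The key design point is the timeout: teardown must eventually proceed even if the backend never confirms, but operations like rebuild gain a window in which a missed unplug is at least detected and logged.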
[Yahoo-eng-team] [Bug 1600500] Re: NeutronClientException 404 listing floating IPs for a server (liberty)
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
       Status: Incomplete => Expired

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1600500

Title:
  NeutronClientException 404 listing floating IPs for a server (liberty)

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Hello

  Description
  ===========
  Getting a neutronclient exception error when trying to connect
  CloudForms 4.1 with OpenStack Liberty. The OpenStack environment is
  not getting populated in CloudForms because of this error.

  Environment
  ===========
  1. [root@rhlospctl01 nova]# rpm -qa | grep nova
     openstack-nova-cert-12.0.3-1.el7ost.noarch
     openstack-nova-novncproxy-12.0.3-1.el7ost.noarch
     python-nova-12.0.3-1.el7ost.noarch
     openstack-nova-scheduler-12.0.3-1.el7ost.noarch
     openstack-nova-console-12.0.3-1.el7ost.noarch
     python-novaclient-3.1.0-2.el7ost.noarch
     openstack-nova-common-12.0.3-1.el7ost.noarch
     openstack-nova-conductor-12.0.3-1.el7ost.noarch
     openstack-nova-api-12.0.3-1.el7ost.noarch

     [root@rhlospctl01 nova]# rpm -qa | grep neutron
     python-neutronclient-3.1.0-1.el7ost.noarch
     openstack-neutron-ml2-7.0.4-2.el7ost.noarch
     openstack-neutron-7.0.4-2.el7ost.noarch
     openstack-neutron-openvswitch-7.0.4-2.el7ost.noarch
     openstack-neutron-common-7.0.4-2.el7ost.noarch
     python-neutron-7.0.4-2.el7ost.noarch

  2. Which hypervisor did you use? Libvirt + KVM
     What's the version of that?
     [root@rhlospctl01 nova]# rpm -qa | grep libvirt
     libvirt-client-1.2.17-13.el7_2.4.x86_64
     libvirt-python-1.2.17-2.el7.x86_64

  2. Which storage type did you use? CEPH

  3. Which networking type did you use? Neutron with Open vSwitch, VLAN
     provider network (no L3).

  I am getting the following error:

  4347b744780670409b50/os-keypairs.json?limit=1000=cloud-ctrl1-key HTTP/1.1" status: 200 len: 1803 time: 0.0075929
  2016-07-09 17:52:07.143 35655 INFO nova.osapi_compute.wsgi.server [req-cee3f742-ccd5-4b5a-8c74-5cc684b09d5a 9ec40e8aaf3641e695c4c57040832aba 54a4f3dc50904dff9d5ec7703e27ce0b - - -] 10.81.61.211 "GET /v2/54a4f3dc50904dff9d5ec7703e27ce0b/os-keypairs.json?limit=1000 HTTP/1.1" status: 200 len: 1803 time: 0.0080750
  2016-07-09 17:52:07.159 35631 INFO nova.osapi_compute.wsgi.server [req-1c172786-3c20-473e-b167-9fa84db542b0 9ec40e8aaf3641e695c4c57040832aba 54a4f3dc50904dff9d5ec7703e27ce0b - - -] 10.81.61.211 "GET /v2/54a4f3dc50904dff9d5ec7703e27ce0b/os-keypairs.json?limit=1000=cloud-ctrl1-key HTTP/1.1" status: 200 len: 1803 time: 0.0106990
  2016-07-09 17:52:08.567 35615 INFO nova.osapi_compute.wsgi.server [-] 10.81.61.239 "GET /" status: 200 len: 501 time: 0.0012670
  2016-07-09 17:52:11.469 35700 INFO nova.api.ec2 [-] 0.126s 10.81.61.239 GET / None:None 200 [None] text/plain text/plain
  2016-07-09 17:52:11.470 35700 INFO nova.metadata.wsgi.server [-] 10.81.61.239 "GET /" status: 200 len: 234 time: 0.0009701
  2016-07-09 17:52:12.164 35615 INFO nova.osapi_compute.wsgi.server [req-28bb8960-d265-402e-894e-1465dbf36b51 9ec40e8aaf3641e695c4c57040832aba 100da705306c4347b744780670409b50 - - -] 10.81.61.211 "GET /v2/100da705306c4347b744780670409b50/servers/detail.json?limit=1000 HTTP/1.1" status: 200 len: 187 time: 0.0434470
  2016-07-09 17:52:12.431 35617 INFO nova.osapi_compute.wsgi.server [req-7862ed42-31d1-4d24-9c97-7f40ddbad713 9ec40e8aaf3641e695c4c57040832aba 54a4f3dc50904dff9d5ec7703e27ce0b - - -] 10.81.61.211 "GET /v2/54a4f3dc50904dff9d5ec7703e27ce0b/servers/detail.json?limit=1000 HTTP/1.1" status: 200 len: 39905 time: 0.2637389
  2016-07-09 17:52:12.591 35617 INFO nova.osapi_compute.wsgi.server [req-7be49304-9f1d-4554-93d1-915dbd76bc0e 9ec40e8aaf3641e695c4c57040832aba 54a4f3dc50904dff9d5ec7703e27ce0b - - -] 10.81.61.211 "GET /v2/54a4f3dc50904dff9d5ec7703e27ce0b/servers/detail.json?limit=1000=f68af8ce-8873-470e-837a-7e4de6b1f945 HTTP/1.1" status: 200 len: 187 time: 0.0676081
  2016-07-09 17:52:12.603 35617 ERROR nova.api.openstack.extensions [req-9bec83aa-572b-4237-8171-e32f8a398d86 9ec40e8aaf3641e695c4c57040832aba 54a4f3dc50904dff9d5ec7703e27ce0b - - -] Unexpected exception in API method
  2016-07-09 17:52:12.603 35617 ERROR nova.api.openstack.extensions Traceback (most recent call last):
  2016-07-09 17:52:12.603 35617 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 478, in wrapped
  2016-07-09 17:52:12.603 35617 ERROR nova.api.openstack.extensions     return f(*args, **kwargs)
  2016-07-09 17:52:12.603 35617 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/openstack/compute/floating_ips.py", line 117, in index
  2016-07-09 17:52:12.603 35617 ERROR nova.api.openstack.extensions     floating_ips = self.network_api.get_floating_ips_by_project(context)
[Yahoo-eng-team] [Bug 1622824] [NEW] l3 dvr code passing ip allocation objects to update_port
Public bug reported: The l3 dvr code is passing IP allocation objects to update_port, which is not supported by the retry decorator protecting update_port. This results in the following exception:

2016-09-13 00:26:29.206 13801 ERROR neutron.api.v2.resource [req-347d1015-bdce-4e58-8179-68ff758b62f4 tempest-TestGettingAddress-1311327307 -] add_router_interface failed: No details.
Traceback (most recent call last):
  File "/opt/stack/new/neutron/neutron/api/v2/resource.py", line 79, in resource
    result = method(request=request, **args)
  File "/opt/stack/new/neutron/neutron/db/api.py", line 87, in wrapped
    setattr(e, '_RETRY_EXCEEDED', True)
  File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/new/neutron/neutron/db/api.py", line 83, in wrapped
    return f(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 151, in wrapper
    ectxt.value = e.inner_exc
  File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 139, in wrapper
    return f(*args, **kwargs)
  File "/opt/stack/new/neutron/neutron/db/api.py", line 123, in wrapped
    traceback.format_exc())
  File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/new/neutron/neutron/db/api.py", line 118, in wrapped
    return f(*dup_args, **dup_kwargs)
  File "/opt/stack/new/neutron/neutron/api/v2/base.py", line 221, in _handle_action
    ret_value = getattr(self._plugin, name)(*arg_list, **kwargs)
  File "/opt/stack/new/neutron/neutron/db/api.py", line 155, in wrapped
    return method(*args, **kwargs)
  File "/opt/stack/new/neutron/neutron/db/api.py", line 87, in wrapped
    setattr(e, '_RETRY_EXCEEDED', True)
  File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File
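The traceback above runs through neutron's retry decorator, which re-invokes the decorated function with duplicated arguments (note the `f(*dup_args, **dup_kwargs)` frame). The contract it implies can be sketched as follows; the decorator, exception class, and `update_port` body here are illustrative stand-ins, not neutron's actual code. The key point is that arguments must be safely copyable primitives, which is exactly what the IP allocation objects mentioned in the report are not:

```python
import copy
import functools


class RetriableError(Exception):
    pass


def retry(times=3):
    """Sketch of a retrying decorator: each attempt gets fresh deep
    copies of the arguments, so state mutated by a failed attempt does
    not leak into the next attempt. Deep-copying only works if the
    arguments are plain, copyable data -- passing live DB/IP-allocation
    objects breaks this contract."""
    def deco(f):
        @functools.wraps(f)
        def wrapped(*args, **kwargs):
            for attempt in range(times):
                dup_args = copy.deepcopy(args)
                dup_kwargs = copy.deepcopy(kwargs)
                try:
                    return f(*dup_args, **dup_kwargs)
                except RetriableError:
                    if attempt == times - 1:
                        raise
        return wrapped
    return deco


calls = []


@retry(times=3)
def update_port(port):
    # Record what each attempt saw, then mutate only the copy.
    calls.append(port['name'])
    port['name'] = 'mutated'
    if len(calls) < 3:
        raise RetriableError()
    return port['name']


result = update_port({'name': 'original'})
# Every retry saw the pristine 'original' value, because each attempt
# worked on its own deep copy of the caller's arguments.
```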
[Yahoo-eng-team] [Bug 1611171] Re: re-runs self via sudo
Reviewed: https://review.openstack.org/368319
Committed: https://git.openstack.org/cgit/openstack/masakari/commit/?id=53d9c2613d734a48b0f0b30944bfd47ef5c1b06f
Submitter: Jenkins
Branch: master

commit 53d9c2613d734a48b0f0b30944bfd47ef5c1b06f
Author: Takashi Kajinami
Date: Tue Sep 6 11:07:23 2016 +0900

    Don't attempt to escalate masakari-manage privileges

    Remove code which allowed masakari-manage to attempt to escalate
    privileges so that configuration files can be read by users who
    normally wouldn't have access, but do have sudo access.

    NOTE: This change is created based on the change with change id
    I03063d2af14015e6506f1b6e958f5ff219aa4a87 from Kiall Mac Innes in
    the designate project.

    Change-Id: Icba07a4bac4f41b921984204b32ad73fdbae4097
    Co-Authored-By: Kiall Mac Innes
    Closes-Bug: 1611171

** Changed in: masakari Status: In Progress => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1611171

Title: re-runs self via sudo

Status in Cinder: New
Status in Designate: In Progress
Status in ec2-api: New
Status in gce-api: New
Status in Manila: New
Status in masakari: Fix Released
Status in OpenStack Compute (nova): In Progress
Status in OpenStack Security Advisory: Incomplete
Status in Rally: New

Bug description: Hello, I'm looking through Designate source code to determine if it is appropriate to include in Ubuntu Main. This isn't a full security audit. This looks like trouble:

./designate/cmd/manage.py

    def main():
        CONF.register_cli_opt(category_opt)
        try:
            utils.read_config('designate', sys.argv)
            logging.setup(CONF, 'designate')
        except cfg.ConfigFilesNotFoundError:
            cfgfile = CONF.config_file[-1] if CONF.config_file else None
            if cfgfile and not os.access(cfgfile, os.R_OK):
                st = os.stat(cfgfile)
                print(_("Could not read %s. Re-running with sudo") % cfgfile)
                try:
                    os.execvp('sudo', ['sudo', '-u', '#%s' % st.st_uid] + sys.argv)
                except Exception:
                    print(_('sudo failed, continuing as if nothing happened'))
            print(_('Please re-run designate-manage as root.'))
            sys.exit(2)

This is an interesting decision -- if the configuration file is _not_ readable by the user in question, give the executing user complete privileges of the user that owns the unreadable file. I'm not a fan of hiding privilege escalation / modifications in programs -- if a user had recently used sudo and thus had the authentication token already stored for their terminal, this 'hidden' use of sudo may be unexpected and unwelcome, especially since it appears that argv from the first call leaks through to the sudo call. Is this intentional OpenStack style? Or unexpected for you guys too? (Feel free to make this public at your convenience.) Thanks

To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1611171/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1622806] Re: v3 Credential APIs return credential in blob attribute as string instead of json object
The way the bug is initially written is very much an opinion; I don't see a bug in the message, just a preference that the "blob" attribute be turned into a proper JSON dict. Unfortunately, this isn't possible due to breaking API contracts and backwards compatibility. Further, this is as designed, dating way back to the first v3 implementation: https://github.com/openstack/keystone/commit/ddc8c833684ff0db65553b09b87eed7b80c7075d#diff-d36d696b43e6991c5e56b7085e3ca411R56 The credentials backend saves a variety of different credentials (ec2, totp, and generic keys), and the "blob" is used because of that, so users can always look at the blob attribute rather than a credential-specific one. Why is calling json.loads() not an option? ** Changed in: keystone Status: New => Opinion -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1622806 Title: v3 Credential APIs return credential in blob attribute as string instead of json object Status in OpenStack Identity (keystone): Opinion Bug description: v3 Credential APIs return the credential in the blob attribute, but it is returned as a string instead of a JSON object. Returning an actual dict object as a string in the API is not very convenient for users and seems like the wrong type in the response. The credential in the 'blob' attribute should be deserialized before returning, so users can get it as a JSON object. ref- http://developer.openstack.org/api-ref/identity/v3/?expanded=create-credential-detail To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1622806/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
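The json.loads() workaround suggested in the triage comment is one line on the client side. The credential payload below is invented for illustration; only the shape (a "blob" attribute carrying serialized JSON, as the v3 API returns it) comes from the report:

```python
import json

# A v3 credential response in the shape the bug describes: "blob" is a
# JSON document serialized as a string (the payload values are made up).
credential = {
    'id': 'abc123',
    'type': 'ec2',
    'blob': '{"access": "AKIA...", "secret": "shh"}',
}

# Client-side deserialization -- the workaround the triager suggests:
blob = json.loads(credential['blob'])
```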
[Yahoo-eng-team] [Bug 1617409] Re: fixed ip gets assigned when a port is created with no fixed ip
Reviewed: https://review.openstack.org/369062
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=ab4ee76c8bdacb0f272328e0b4efefc3b820e5ba
Submitter: Jenkins
Branch: master

commit ab4ee76c8bdacb0f272328e0b4efefc3b820e5ba
Author: Carl Baldwin
Date: Mon Sep 12 16:09:50 2016 -0600

    Only do deferred ip allocation on deferred ports

    Only do deferred ip allocation on ports that were originally marked
    as deferred IP ports on port create.

    Change-Id: Ia34bc2617f99cca73f58c9e615a8ba78e667c9b3
    Closes-Bug: #1617409

** Changed in: neutron Status: In Progress => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1617409

Title: fixed ip gets assigned when a port is created with no fixed ip

Status in neutron: Fix Released
Status in OpenStack Compute (nova): Invalid

Bug description: When a neutron port is created with no fixed ip and bound to a nova instance, a random IP is automatically picked up from the subnet pool and assigned. Port request body: port_req_body {'port': {'network_id': u'b3bfb646-5794-44e6-86eb-0f90b18c6d78', 'tenant_id': u'ccfdd1bc5f894ad0a085e3e1d53f9329', 'mac_address': u'fa:f8:86:36:34:20', 'fixed_ips': [], 'admin_state_up': True}} After it is bound to a nova instance, it gets an ip associated with it. From neutron port-show: | fixed_ips | {"subnet_id": "90f2afbc-bc64-4a0c-823a-a39d71f1fadd", "ip_address": "10.0.0.5"} | This appears to be a problem due to changes in newton. This behavior was not present in mitaka.

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1617409/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
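The distinction the reporter expects can be caricatured in a few lines. This is not neutron's actual allocation logic (the real fix involves the deferred-allocation marker on routed networks); the helper name and the default address are made up purely to show the two request shapes:

```python
def expected_allocation(port_request, subnet_ip='10.0.0.5'):
    """Caricature of the expected create-time behaviour: omitting
    fixed_ips lets Neutron pick an address automatically, while an
    explicit empty list asks for no address at all. Helper name and
    the default subnet_ip are illustrative, not neutron code."""
    if 'fixed_ips' not in port_request:
        return [subnet_ip]  # normal port: an address is chosen for it
    return list(port_request['fixed_ips'])  # honour the explicit (empty) list


# The request body from the report: fixed_ips is explicitly empty.
no_ip_request = {'network_id': 'b3bfb646-5794-44e6-86eb-0f90b18c6d78',
                 'fixed_ips': []}
```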
[Yahoo-eng-team] [Bug 1622806] [NEW] v3 Credential APIs return credential in blob attribute as string instead of json object
Public bug reported: v3 Credential APIs return the credential in the blob attribute, but it is returned as a string instead of a JSON object. Returning an actual dict object as a string in the API is not very convenient for users and seems like the wrong type in the response. The credential in the 'blob' attribute should be deserialized before returning, so users can get it as a JSON object. ref- http://developer.openstack.org/api-ref/identity/v3/?expanded=create-credential-detail ** Affects: keystone Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1622806 Title: v3 Credential APIs return credential in blob attribute as string instead of json object Status in OpenStack Identity (keystone): New Bug description: v3 Credential APIs return the credential in the blob attribute, but it is returned as a string instead of a JSON object. Returning an actual dict object as a string in the API is not very convenient for users and seems like the wrong type in the response. The credential in the 'blob' attribute should be deserialized before returning, so users can get it as a JSON object. ref- http://developer.openstack.org/api-ref/identity/v3/?expanded=create-credential-detail To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1622806/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1539735] Re: DB migrations are not tested with data
** Changed in: neutron Milestone: newton-rc1 => ocata-1 ** Changed in: neutron Milestone: ocata-1 => mitaka-3 ** Changed in: neutron Milestone: mitaka-3 => None ** Changed in: neutron Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1539735 Title: DB migrations are not tested with data Status in neutron: Fix Released Bug description: Currently the DB migration tests check only the schema. In several cases migration changes should be checked with data records to fully verify that they work. We need a framework for writing migration tests with data. We should leverage from or contribute to oslo.db appropriately for this. Using the framework, develop test cases. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1539735/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
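The shape of a data-aware migration test described in this report can be sketched with stdlib sqlite3: seed rows under the old schema, run the migration, then assert on the migrated rows rather than just the resulting schema. Everything here is a toy stand-in (made-up tables and migration function), not neutron's alembic framework:

```python
import sqlite3


def migrate_ports_to_v2(conn):
    # Toy "migration" under test: create the new table and copy the
    # existing rows into it. A real neutron migration would be an
    # alembic script; this just illustrates the test pattern.
    conn.execute('CREATE TABLE ports_v2 (id INTEGER, name TEXT)')
    conn.execute('INSERT INTO ports_v2 SELECT id, name FROM ports')


# The data-aware test: seed records BEFORE migrating, so the test can
# verify the migration handled real rows, not just an empty schema.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE ports (id INTEGER, name TEXT)')
conn.execute("INSERT INTO ports VALUES (1, 'p1')")

migrate_ports_to_v2(conn)

rows = conn.execute('SELECT id, name FROM ports_v2').fetchall()
```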
[Yahoo-eng-team] [Bug 1560963] Re: [RFE] Minimum bandwidth support (egress)
** Changed in: neutron Status: In Progress => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1560963

Title: [RFE] Minimum bandwidth support (egress)

Status in neutron: Fix Released

Bug description: Minimum bandwidth support (as opposed to bandwidth limiting) guarantees a port a minimum bandwidth when its neighbours are consuming egress traffic, since they can be throttled in favor of the guaranteed port. Strict minimum bandwidth support requires scheduling cooperation, to avoid overcommitting physical interfaces. This RFE addresses only the hypervisor side of it. Scheduling cooperation will be addressed in a separate RFE [2]; this work is a prerequisite for the 2nd step.

Use cases
=========
NFV/telcos are interested in this type of rule to make sure functions don't overcommit computes, and that any spawn of the same architecture will perform exactly as expected. This RFE is a prerequisite for [1], which in the meantime will provide a best-effort guarantee on minimum bandwidth. CSPs could make use of it to provide guaranteed bandwidth for streaming, etc...

Notes
=====
Technologies like SR-IOV support this, and OVS & Linux bridge can be configured to support this type of service. In OvS it requires using veth ports between bridges instead of patch ports, which introduces a performance overhead of ~20%. Supporting this kind of rule for OvS agents must be made optional, so administrators can choose it only when they really need it. SR-IOV seems not to incur any performance penalty.
This RFE's title has been corrected to cover only instance-egress traffic: as per comments #1 and #2 of this rfe/bug, ingress is problematic, and even if it can be tackled, it's a much more complex beast; @armax knows about it [1]

[1] https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/supporting-network-bandwidth-guarantees-with-openstack-an-implementation-perspective
[2] https://bugs.launchpad.net/neutron/+bug/1578989

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1560963/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
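The "scheduling cooperation" deferred to [2] boils down to an admission check: a new guaranteed port only fits on a host if the sum of the minimum-bandwidth guarantees stays within the physical NIC's capacity. A sketch of that invariant, with all names, units, and the function itself being illustrative rather than any actual neutron/nova scheduler code:

```python
def can_place(new_min_kbps, existing_min_kbps, nic_capacity_kbps):
    """Illustrative admission check for strict minimum-bandwidth
    scheduling: accept the new guaranteed port only if the total of
    all guarantees does not overcommit the physical interface."""
    return sum(existing_min_kbps) + new_min_kbps <= nic_capacity_kbps


# A 1 Gbps NIC already carrying 300 + 200 Mbps of guarantees can take
# another 400 Mbps guarantee, but not a 600 Mbps one.
fits = can_place(400_000, [300_000, 200_000], 1_000_000)
overcommits = can_place(600_000, [300_000, 200_000], 1_000_000)
```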
[Yahoo-eng-team] [Bug 1461133] Re: [RFE] Supporting multiple L3 backends
** Changed in: neutron Status: Triaged => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1461133

Title: [RFE] Supporting multiple L3 backends

Status in neutron: Fix Released

Bug description: There are a variety of hardware and software options available to handle layer 3 (routing) in Neutron environments, with various tradeoffs. Currently a single Neutron instance can be configured to support only one routing mechanism at a time, and this leads to a need to build multiple OpenStack zones based on different requirements. This RFE is analogous to the ML2 framework. I would like to see a standard, vendor-neutral framework/API for creating/maintaining L3 routing constructs, with a standard way for vendors/developers to build mechanism drivers to effect the desired routing on a variety of hardware and software platforms. In terms of broader scope (perhaps not the initial implementation), there are a number of L3-related developments taking place that could benefit from separating the logical (aka "type") constructs from the implementation (aka "mechanism") constructs, e.g. BGP VPNs, IPSec/SSL VPNs, Service Chaining, QoS. The vision here is that the OpenStack community would standardize on what virtual routers can do, then individual companies/people with an interest in specific L3 implementations would build mechanism drivers to do those things. An essential criterion is that it should be possible to mix mechanisms within a single OpenStack zone rather than building entirely separate Nova/Neutron/compute-node environments based on a single L3 mechanism. Some examples of ways to handle L3 currently: the L3 agent on x86; SDN software such as Contrail, Nuage, NSX, OVN, Plumgrid, and others; and in hardware on a variety of vendors' switch/router platforms (Arista, Cisco, others).
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1461133/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1619833] Re: api-ref for create server block_device_mapping_v2 is wrong type
Reviewed: https://review.openstack.org/365270
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=2ad6aa911640f506b884d7142a6d4df93059e41b
Submitter: Jenkins
Branch: master

commit 2ad6aa911640f506b884d7142a6d4df93059e41b
Author: tamilhce
Date: Sat Sep 3 16:04:38 2016 +

    fixing block_device_mapping_v2 data_type

    'block_device_mapping_v2' should be a list type

    Closes-Bug: #1619833
    Change-Id: Id7fa0e1dc2cff6438e82ad83b2087f67e0fa628b

** Changed in: nova Status: In Progress => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1619833

Title: api-ref for create server block_device_mapping_v2 is wrong type

Status in OpenStack Compute (nova): Fix Released

Bug description: The current api-ref for create server shows the 'block_device_mapping_v2' request parameter as:

    "block_device_mapping_v2": {
        "boot_index": "0",
        "uuid": "ac408821-c95a-448f-9292-73986c790911",
        "source_type": "image",
        "volume_size": "25",
        "destination_type": "volume",
        "delete_on_termination": true
    }

but specifying it this way raises an error:

    DEBUG [nova.api.openstack.wsgi] Returning 400 to user: Invalid input for field/attribute block_device_mapping_v2. Value: {u'uuid': u'76fa36fc-c930-4bf3-8c8a-ea2a2420deb6', u'volume_size': 8192, u'boot_index': 0, u'delete_on_termination': True, u'destination_type': u'volume', u'source_type': u'image'}. {u'uuid': u'76fa36fc-c930-4bf3-8c8a-ea2a2420deb6', u'volume_size': 8192, u'boot_index': 0, u'delete_on_termination': True, u'destination_type': u'volume', u'source_type': u'image'} is not of type 'array'

so it should be more like:

    "block_device_mapping_v2": [{
        "boot_index": "0",
        "uuid": "ac408821-c95a-448f-9292-73986c790911",
        "source_type": "image",
        "volume_size": "25",
        "destination_type": "volume",
        "delete_on_termination": true
    }]

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1619833/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
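Assembled into a create-server request body, the corrected shape looks like the following (the surrounding 'server' fields are minimal illustrative values; the mapping entries are the ones from the api-ref example quoted in the report). The key point is the list wrapper, even for a single boot device:

```python
# Correct shape per the fix: block_device_mapping_v2 is a LIST of
# mapping dicts, even when there is only one entry.
server_req = {
    'server': {
        'name': 'test-server',  # illustrative
        'flavorRef': '1',       # illustrative
        'block_device_mapping_v2': [{
            'boot_index': '0',
            'uuid': 'ac408821-c95a-448f-9292-73986c790911',
            'source_type': 'image',
            'volume_size': '25',
            'destination_type': 'volume',
            'delete_on_termination': True,
        }],
    }
}
```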
[Yahoo-eng-team] [Bug 1620629] Re: Octavia should filter an Amphora image from a specific tenant
** Information type changed from Private Security to Public Security ** Project changed: neutron => octavia ** Tags removed: lbaas -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1620629 Title: Octavia should filter an Amphora image from a specific tenant Status in octavia: Triaged Bug description: _extract_amp_image_id_by_tag[1] list all images with the 'amphora' tag (or any other tag pre-defined in octavia.conf), sort by creation date and uses the newest one. Side note: at the time of filing this bug, it does not sort properly due to bug 1618921 , but when the fix for bug 1618921 gets merged, this will be the case. For security reasons, _extract_amp_image_id_by_tag should also filter the images and use images owned by pre-defined tenant. Currently, any non-admin tenant can tag an image with the 'amphora' tag and set it to public=True. By doing that, Octavia will now use that newly added image starting from the next time a loadbalancer gets created for any tenant in that openstack setup. Now, if for example the newly created image contains some pre-defined credentials and/or ssh keys so it is accessible via ssh, and if we take into account that each amphora is also connected to the lb-mgmt network. That is exposing that mgmt network for unauthorized access. [1] https://github.com/openstack/octavia/blob/08570831754d9671fbd1756d668f55f191e47ca4/octavia/compute/drivers/nova_driver.py#L35 To manage notifications about this bug go to: https://bugs.launchpad.net/octavia/+bug/1620629/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1622793] [NEW] LBaaS back-end pool connection limit is 10% of listener connection limit for reference and namespace drivers
Public bug reported: Both the reference Octavia driver and the namespace driver use haproxy to deliver load balancing services with LBaaSv2. When closely looking at the operation of the haproxy daemons with a utility like hatop ( https://github.com/feurix/hatop ), one can see that the connection limit for back-ends is exactly 10% of whatever connection limit is set for the pool's listener front-ends. This behavior could cause an unexpectedly low effective connection limit if the user has a small number of back-end servers in the pool. From the haproxy documentation, this is because the default value of a backend's "fullconn" parameter is set to 10% of the sum of all front-ends referencing it. Specifically: "Since it's hard to get this value right, haproxy automatically sets it to 10% of the sum of the maxconns of all frontends that may branch to this backend (based on "use_backend" and "default_backend" rules). That way it's safe to leave it unset." (Source: https://cbonte.github.io/haproxy-dconv/configuration-1.6.html#fullconn ) The point of this calculation (according to the haproxy documentation) is to protect fragile back-end servers from spikes in load that might reach the front-ends' connection limits. However, for long-lasting but low-load connections to a small number of back-end servers through the load balancer, this means that the haproxy-based back-ends have an effective connection limit that is much smaller than what the user expects it to be. ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1622793 Title: LBaaS back-end pool connection limit is 10% of listener connection limit for reference and namespace drivers Status in neutron: New Bug description: Both the reference Octavia driver and the namespace driver use haproxy to deliver load balancing services with LBaaSv2.
When closely looking at the operation of the haproxy daemons with a utility like hatop ( https://github.com/feurix/hatop ), one can see that the connection limit for back-ends is exactly 10% of whatever connection limit is set for the pool's listener front-ends. This behavior could cause an unexpectedly low effective connection limit if the user has a small number of back-end servers in the pool. From the haproxy documentation, this is because the default value of a backend's "fullconn" parameter is set to 10% of the sum of all front-ends referencing it. Specifically: "Since it's hard to get this value right, haproxy automatically sets it to 10% of the sum of the maxconns of all frontends that may branch to this backend (based on "use_backend" and "default_backend" rules). That way it's safe to leave it unset." (Source: https://cbonte.github.io/haproxy-dconv/configuration-1.6.html#fullconn ) The point of this calculation (according to the haproxy documentation) is to protect fragile back-end servers from spikes in load that might reach the front-ends' connection limits. However, for long-lasting but low-load connections to a small number of back-end servers through the load balancer, this means that the haproxy-based back-ends have an effective connection limit that is much smaller than what the user expects it to be. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1622793/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
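The arithmetic behind the quoted haproxy documentation is simple enough to state exactly. The 10% figure is haproxy's documented default for "fullconn"; the function below merely restates it so the effect on a single-listener LBaaS pool is concrete:

```python
def default_fullconn(frontend_maxconns):
    """haproxy's documented default for a backend's 'fullconn':
    10% of the summed maxconn of all frontends that can reach the
    backend. Below this threshold, per-server 'minconn' scaling
    kicks in, so this acts as the backend's effective saturation
    point."""
    return sum(frontend_maxconns) // 10


# A pool behind a single listener with connection_limit 5000 gets a
# back-end saturation point of only 500 concurrent connections --
# the surprise this report describes.
effective_limit = default_fullconn([5000])
```

Setting "fullconn" explicitly in the rendered backend section would be the obvious knob to change this behaviour, though the report stops short of prescribing a fix.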
[Yahoo-eng-team] [Bug 1622783] [NEW] unnecessary DHCP provisioning block added when IP doesn't change
Public bug reported: Right now we insert a DHCP provisioning block on any port update. However, this isn't necessary if the IP address of the port hasn't actually changed. This can be problematic particularly if a port is updated via the internal core plugin API and the DHCP agent doesn't get notified of the change so the block doesn't get cleared. ** Affects: neutron Importance: Undecided Assignee: Kevin Benton (kevinbenton) Status: New ** Changed in: neutron Assignee: (unassigned) => Kevin Benton (kevinbenton) ** Changed in: neutron Milestone: None => newton-rc1 -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1622783 Title: unnecessary DHCP provisioning block added when IP doesn't change Status in neutron: New Bug description: Right now we insert a DHCP provisioning block on any port update. However, this isn't necessary if the IP address of the port hasn't actually changed. This can be problematic particularly if a port is updated via the internal core plugin API and the DHCP agent doesn't get notified of the change so the block doesn't get cleared. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1622783/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
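The condition the report implies (only add a DHCP provisioning block when the port's IP allocation actually changed) can be sketched as a comparison of fixed-IP sets. The function name and port dicts are illustrative, not neutron's actual code:

```python
def ip_allocation_changed(original_port, updated_port):
    """Illustrative check: compare fixed IPs as sets so ordering
    differences don't count as a change; only a real difference in
    (subnet, address) pairs should trigger a provisioning block."""
    def ips(port):
        return {(f['subnet_id'], f['ip_address']) for f in port['fixed_ips']}
    return ips(original_port) != ips(updated_port)


before = {'fixed_ips': [{'subnet_id': 's1', 'ip_address': '10.0.0.5'}]}
after_same = {'fixed_ips': [{'subnet_id': 's1', 'ip_address': '10.0.0.5'}]}
after_new = {'fixed_ips': [{'subnet_id': 's1', 'ip_address': '10.0.0.7'}]}
```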
[Yahoo-eng-team] [Bug 1622779] [NEW] Tasks doc links to image statuses instead of task statuses
Public bug reported: The link to "Task Statuses" at the end of Tasks Conceptual Details section (http://docs.openstack.org/developer/glance/tasks.html #conceptual-details) currently takes users to the combined Task/Image statuses page: http://docs.openstack.org/developer/glance/statuses.html, where Image Statuses are described first. That page has an anchor allowing a link to jump directly to the Task Statuses section, so it would be nice to link to that directly. ** Affects: glance Importance: Undecided Assignee: Alexander Bashmakov (abashmak) Status: New ** Tags: dev-docs documentation ** Changed in: glance Assignee: (unassigned) => Alexander Bashmakov (abashmak) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1622779 Title: Tasks doc links to image statuses instead of task statuses Status in Glance: New Bug description: The link to "Task Statuses" at the end of Tasks Conceptual Details section (http://docs.openstack.org/developer/glance/tasks.html #conceptual-details) currently takes users to the combined Task/Image statuses page: http://docs.openstack.org/developer/glance/statuses.html, where Image Statuses are described first. That page has an anchor allowing a link to jump directly to the Task Statuses section, so it would be nice to link to that directly. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1622779/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1590164] Re: Can't initialize magic-search query programatically
Reviewed: https://review.openstack.org/321132
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=c7df8a90b06deed1a06638efed599227cc6786fe
Submitter: Jenkins
Branch: master

commit c7df8a90b06deed1a06638efed599227cc6786fe
Author: Tyr Johanson
Date: Wed May 25 11:15:31 2016 -0600

    Allow magic search to be initialized by an event

    This patch adds support into magic search to initialize the search
    bar to a predefined search query using an event. This allows a prior
    search to be repeated, or to pre-populate the search field with an
    initial filter.

    Closes-Bug: #1590164
    Change-Id: I24b4fcb3df87f018d9d73aa9d1526d7b8c6026bb

** Changed in: horizon Status: In Progress => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1590164

Title: Can't initialize magic-search query programatically

Status in OpenStack Dashboard (Horizon): Fix Released

Bug description: The current magic search query can't be initialized from Angular. This is needed in order to pre-filter the search results.

To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1590164/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1622632] Re: Circular dependency in neutron.services.trunk.utils
Reviewed: https://review.openstack.org/368882
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=68d13b92a51b73b158b2446366e67233cc4279d5
Submitter: Jenkins
Branch: master

commit 68d13b92a51b73b158b2446366e67233cc4279d5
Author: Jakub Libosvar
Date: Mon Sep 12 16:37:37 2016 +0200

    trunk: Remove ovs constants from trunk utils module

    Trunk utils should be driver agnostic.

    Change-Id: Iec646b3b11b03687013db5af6afda3a21c03acb6
    Closes-Bug: 1622632

** Changed in: neutron Status: In Progress => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1622632

Title: Circular dependency in neutron.services.trunk.utils

Status in neutron: Fix Released

Bug description: There is a circular dependency when importing neutron.services.trunk.utils:

    In [1]: from neutron.services.trunk import utils
    ---------------------------------------------------------------------------
    ImportError                               Traceback (most recent call last)
    in <module>()
    ----> 1 from neutron.services.trunk import utils

    /opt/stack/neutron/neutron/services/trunk/utils.py in <module>()
         17 from neutron.common import utils
         18 from neutron import manager
    ---> 19 from neutron.services.trunk.drivers.openvswitch import constants as ovs_const
         20
         21

    /opt/stack/neutron/neutron/services/trunk/drivers/__init__.py in <module>()
         15
         16 from neutron.services.trunk.drivers.linuxbridge import driver as lxb_driver
    ---> 17 from neutron.services.trunk.drivers.openvswitch import driver as ovs_driver
         18
         19

    /opt/stack/neutron/neutron/services/trunk/drivers/openvswitch/driver.py in <module>()
         23 from neutron.services.trunk import constants as trunk_consts
         24 from neutron.services.trunk.drivers import base
    ---> 25 from neutron.services.trunk import utils
         26
         27 LOG = logging.getLogger(__name__)

    ImportError: cannot import name utils

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1622632/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team Post to :
yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
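The committed fix breaks the cycle by moving the shared constants out of the driver package. When that kind of restructuring is not an option, the standard fallback is to defer the import to call time, as sketched below. Here `hashlib` merely stands in for the module that would otherwise be imported circularly, and the function and bridge-name prefix are illustrative, not neutron's actual trunk utilities:

```python
def gen_trunk_bridge_name(trunk_id):
    """Illustrative utility showing the deferred-import pattern:
    because the import happens at call time rather than module scope,
    merely importing the module containing this function can never
    re-enter the package that imports it back -- the cycle only forms
    at module-import time."""
    import hashlib  # deferred: stands in for the circularly-imported module
    digest = hashlib.sha1(trunk_id.encode()).hexdigest()
    return 'tbr-' + digest[:8]  # 'tbr-' prefix is made up for this sketch


name = gen_trunk_bridge_name('68d13b92-a51b-73b1-58b2-446366e67233')
```

Deferred imports trade a small per-call cost (the `sys.modules` lookup) for import-time safety, which is why moving the constants, as the actual fix did, is the cleaner resolution.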
[Yahoo-eng-team] [Bug 1402709] Re: Report an error when booting an instance with a flavor which has NUMA nodes is set to 0 (hw:numa_nodes=0)
Reviewed: https://review.openstack.org/337380
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=47d8aa5e7fe0c82ed00017aa2185de1ed5e51a86
Submitter: Jenkins
Branch: master

commit 47d8aa5e7fe0c82ed00017aa2185de1ed5e51a86
Author: Gábor Antal
Date: Tue Jul 5 00:13:25 2016 +0200

    Throw exception if numa_nodes is not set to integer greater than 0

    As [1] is abandoned, I used that patchset to create a new one. This
    patchset is freshened to the current master branch. This patch
    introduces the InvalidNUMANodesNumber exception, which is thrown when
    trying to boot an instance with a flavor that has hw:numa_nodes=0
    extra spec set. That means that NUMA nodes is set to 0, which is
    incorrect.

    [1]: https://review.openstack.org/#/c/190267

    Change-Id: I6bd8f69e582c537a5fec40064638a8887a08cac4
    Co-Authored-By: Karim Boumedhel
    Closes-Bug: #1402709

** Changed in: nova Status: In Progress => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1402709

Title: Report an error when booting an instance with a flavor which has NUMA nodes is set to 0 (hw:numa_nodes=0)

Status in OpenStack Compute (nova): Fix Released

Bug description: Booting a Nova instance with hw:numa_nodes=0 succeeds, producing a Nova guest XML like the one at [*]. This bug came out of this RDO bug -- https://bugzilla.redhat.com/show_bug.cgi?id=1154152. But, talking with Daniel Berrange and Nikola Dipanov on IRC, they suggest we should default to 1 node if nothing is specified:

    . . .
    hmm, i'm not convinced we should allow nodes=0 at all
    we should default 1 node if nothing is specified
    We should refuse a value of 0, in case we wish to make use of that as a special value at a later date
    we don't want people relying on numa_nodes=0 accidentally working for them now
    danpb, correct
    . . .
Simple test - NUMA setup instructions here: https://review.openstack.org/#/c/131818/1/doc/source/devref/testing/libvirt-numa.rst

$ nova flavor-key m1.tiny set hw:numa_nodes=0
$ nova flavor-show m1.tiny
+----------------------------+------------------------+
| Property                   | Value                  |
+----------------------------+------------------------+
| OS-FLV-DISABLED:disabled   | False                  |
| OS-FLV-EXT-DATA:ephemeral  | 0                      |
| disk                       | 1                      |
| extra_specs                | {"hw:numa_nodes": "0"} |
| id                         | 1                      |
| name                       | m1.tiny                |
| os-flavor-access:is_public | True                   |
| ram                        | 512                    |
| rxtx_factor                | 1.0                    |
| swap                       |                        |
| vcpus                      | 1                      |
+----------------------------+------------------------+
$ nova boot --image cirros-0.3.1-x86_64-disk --flavor m1.tiny cirrvm4

[*] Nova guest XML for an instance booted with hw:numa_nodes=0 (instance-0004, uuid 593f8388-ac9c-4673-b1c7-aa49b1ce83f0): the XML markup was stripped in this archive; only the element text survives (OpenStack Nova 2015.1 metadata, memory 524288, 1 vCPU, hvm, /usr/bin/qemu-kvm emulator, svirt labels system_u:system_r:svirt_t:s0:c79,c658).

To manage notifications about this bug go to:
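The validation the fix introduces can be sketched as follows. The exception name InvalidNUMANodesNumber matches the commit message above, but the function body is an illustrative guess, not nova's actual code:

```python
class InvalidNUMANodesNumber(ValueError):
    """Raised when hw:numa_nodes is not an integer greater than 0."""


def parse_numa_nodes(extra_specs):
    # Hedged sketch: reject 0 (and anything non-numeric) instead of
    # silently booting a guest with a nonsensical NUMA topology.
    raw = extra_specs.get("hw:numa_nodes")
    if raw is None:
        return None  # no NUMA topology requested
    try:
        nodes = int(raw)
    except (TypeError, ValueError):
        raise InvalidNUMANodesNumber(raw)
    if nodes < 1:
        raise InvalidNUMANodesNumber(raw)
    return nodes
```

Per the IRC discussion, an alternative would be to map a missing value to 1 node; rejecting 0 outright keeps it available as a special value later.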
[Yahoo-eng-team] [Bug 1622758] [NEW] Doc fix in Nova API Guide: faults.rst, insert missing word "call"
Public bug reported: Description === Doc fix: Insert missing word "call" on line 7 of /nova/api- guide/source/faults.rst. The text currently reads: Every HTTP request has a status code. 2xx codes signify the API was a success. Suggest inserting the word "call" to improve readability: Every HTTP request has a status code. 2xx codes signify the API call was a success. ** Affects: nova Importance: Undecided Assignee: Kurtis Cobb (kvcobb) Status: New ** Changed in: nova Assignee: (unassigned) => Kurtis Cobb (kvcobb) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1622758 Title: Doc fix in Nova API Guide: faults.rst, insert missing word "call" Status in OpenStack Compute (nova): New Bug description: Description === Doc fix: Insert missing word "call" on line 7 of /nova/api- guide/source/faults.rst. The text currently reads: Every HTTP request has a status code. 2xx codes signify the API was a success. Suggest inserting the word "call" to improve readability: Every HTTP request has a status code. 2xx codes signify the API call was a success. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1622758/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1544861] Re: LBaaS: connection limit does not work with HA Proxy
Re-opened this bug as the patch had to be reverted. Revert patch with details: https://review.openstack.org/#/c/345444/

** Changed in: neutron Status: Fix Released => Confirmed
** Changed in: neutron Importance: Undecided => Medium

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1544861

Title: LBaaS: connection limit does not work with HA Proxy

Status in neutron: Confirmed

Bug description: The connection limit does not work with HAProxy. It is currently set in the frontend section, like:

frontend 75a12b66-9d2a-4a68-962e-ec9db8c3e2fb
    option httplog
    capture cookie JSESSIONID len 56
    bind 192.168.10.20:80
    mode http
    default_backend fb8ba6e3-71a4-47dd-8a83-2978bafea6e7
    maxconn 5
    option forwardfor

But the above configuration does not work. It should be set in the global section, like:

global
    daemon
    user nobody
    group haproxy
    log /dev/log local0
    log /dev/log local1 notice
    stats socket /var/lib/neutron/lbaas/fb8ba6e3-71a4-47dd-8a83-2978bafea6e7/sock mode 0666 level user
    maxconn 5

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1544861/+subscriptions
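The direction of the fix can be sketched with a hypothetical renderer (this is not neutron's actual LBaaS haproxy driver code): emit maxconn in the global section instead of the frontend.

```python
def render_haproxy_config(frontend_id, backend_id, bind, connection_limit=None):
    # Hypothetical sketch: place maxconn in the global section, where
    # HAProxy applies it as a process-wide connection limit, rather
    # than in the frontend section as the buggy driver did.
    global_lines = ["global", "    daemon"]
    if connection_limit and connection_limit > 0:
        global_lines.append("    maxconn %d" % connection_limit)
    frontend_lines = [
        "frontend %s" % frontend_id,
        "    bind %s" % bind,
        "    mode http",
        "    default_backend %s" % backend_id,
    ]
    return "\n".join(global_lines + [""] + frontend_lines)
```

All identifiers and the section layout here are illustrative; the real driver renders many more options.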
[Yahoo-eng-team] [Bug 1622753] [NEW] [RFE] Block non-IP traffic in security groups/firewall driver
Public bug reported: Presently the IPTables firewall driver (the reference security group implementation) permits all non-IP traffic to ingress and egress an instance port. This should be altered to block non-IP traffic by default. Security groups are a collection of rules which specify which traffic should be permitted into and out of an instance port. By only including allow rules, the order in which rules are enforced doesn't matter. Security groups are deny-all by default, except for non-IP traffic. This was largely an oversight, since the original implementation just used iptables, which doesn't filter non-IP traffic. Later, ebtables was employed to filter ARP messages (which are non-IP frames), but Ethertypes other than IPv4, IPv6 and ARP are unfiltered. Since non-IP traffic is not routed by Neutron, there is no Internet-facing security risk. In the case of a shared network, however, this is a cross-tenant/project security risk. Since this would significantly alter the behavior of security groups, I propose making this change in several stages:

1. Introduce a new configuration option to specify the firewall driver behavior for non-IP traffic. This should default to allow initially. Modify the IPTables firewall driver to honor this configuration.
2. Change the default of this new configuration option to deny.
3. Introduce an extension to security groups which permits arbitrary 16-bit values to be specified as Ethertypes, so tenants can use security groups to filter non-IP traffic.

** Affects: neutron Importance: Undecided Status: New
** Tags: rfe sg-fw
** Summary changed: - [RFE] Block non-IP traffic in security groups + [RFE] Block non-IP traffic in security groups/firewall driver

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622753

Title: [RFE] Block non-IP traffic in security groups/firewall driver

Status in neutron: New

Bug description: Presently the IPTables firewall driver (the reference security group implementation) permits all non-IP traffic to ingress and egress an instance port. This should be altered to block non-IP traffic by default. Security groups are a collection of rules which specify which traffic should be permitted into and out of an instance port. By only including allow rules, the order in which rules are enforced doesn't matter. Security groups are deny-all by default, except for non-IP traffic. This was largely an oversight, since the original implementation just used iptables, which doesn't filter non-IP traffic. Later, ebtables was employed to filter ARP messages (which are non-IP frames), but Ethertypes other than IPv4, IPv6 and ARP are unfiltered. Since non-IP traffic is not routed by Neutron, there is no Internet-facing security risk. In the case of a shared network, however, this is a cross-tenant/project security risk. Since this would significantly alter the behavior of security groups, I propose making this change in several stages:

1. Introduce a new configuration option to specify the firewall driver behavior for non-IP traffic. This should default to allow initially. Modify the IPTables firewall driver to honor this configuration.
2. Change the default of this new configuration option to deny.
3. Introduce an extension to security groups which permits arbitrary 16-bit values to be specified as Ethertypes, so tenants can use security groups to filter non-IP traffic.

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1622753/+subscriptions
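The staged proposal can be illustrated with a hedged sketch (not the actual firewall driver code); the rule strings follow ebtables syntax, and allow_non_ip stands in for the proposed configuration option:

```python
# Ethertypes that security groups already know how to filter.
ALLOWED_ETHERTYPES = ("IPv4", "IPv6", "ARP")


def non_ip_rules(chain, allow_non_ip=True):
    # Stage 1 default (allow) preserves today's behaviour; flipping the
    # option to False (stage 2) drops every Ethertype except IP and ARP.
    if allow_non_ip:
        return []
    rules = ["-A %s -p %s -j ACCEPT" % (chain, proto)
             for proto in ALLOWED_ETHERTYPES]
    rules.append("-A %s -j DROP" % chain)
    return rules
```

Stage 3 would extend ALLOWED_ETHERTYPES with tenant-specified 16-bit values.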
[Yahoo-eng-team] [Bug 1622559] Re: linux bridge jobs filled with binding warning noise
Reviewed: https://review.openstack.org/368730
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=7ff0a50ed9ea0de2b8bfb0821a4ae83ac6ac8ddd
Submitter: Jenkins
Branch: master

commit 7ff0a50ed9ea0de2b8bfb0821a4ae83ac6ac8ddd
Author: Kevin Benton
Date: Fri Sep 9 07:31:47 2016 -0700

    Downgrade binding skip in mech_agent

    When multiple mech drivers are loaded, it's very common to have one
    not find agents on a host; we shouldn't emit a warning in this
    condition. The mechanism manager will log an error if there is an
    actual failure to bind a port.

    Change-Id: Icb643f6ffc6699579394d1e5d42f7bbfdcad6de4
    Closes-Bug: #1622559

** Changed in: neutron Status: In Progress => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1622559

Title: linux bridge jobs filled with binding warning noise

Status in neutron: Fix Released

Bug description: The linux bridge job server logs are filled with these 'warnings', which are actually completely normal emissions from the openvswitch mech driver.
2016-09-10 01:19:08.071 14800 WARNING neutron.plugins.ml2.drivers.mech_agent [req-9205b398-aeef-4139-8cb1-a58eed94b92e - -] Port c7808ebf-1682-4b55-98e6-be1be73c4af7 on network ca5d8c9d-302d-47a8-8dbe-cc335d1534c9 not bound, no agent registered on host ubuntu-xenial-bluebox-sjc1-4207542
2016-09-10 01:19:27.668 14802 WARNING neutron.plugins.ml2.drivers.mech_agent [req-f55be949-722e-4551-9dcc-959f2eec4d38 - -] Port 4fc68275-d528-47dd-b88a-01523880c500 on network ca5d8c9d-302d-47a8-8dbe-cc335d1534c9 not bound, no agent registered on host ubuntu-xenial-bluebox-sjc1-4207542
2016-09-10 01:19:33.951 14800 WARNING neutron.plugins.ml2.drivers.mech_agent [req-584d52c3-6d1b-4274-9760-16faf5065290 - -] Port ef27d948-eaba-4af5-a4c0-5f201835ccaf on network 73365757-fe52-4dea-93a9-e5a68ac4f0a1 not bound, no agent registered on host ubuntu-xenial-bluebox-sjc1-4207542
2016-09-10 01:19:39.226 14802 WARNING neutron.plugins.ml2.drivers.mech_agent [req-5ea82d78-d622-49b0-ae77-9260e112de67 - -] Port bd7b108e-cac8-404f-a199-fdda7b0f6698 on network ca5d8c9d-302d-47a8-8dbe-cc335d1534c9 not bound, no agent registered on host ubuntu-xenial-bluebox-sjc1-4207542

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1622559/+subscriptions
[Yahoo-eng-team] [Bug 1616831] Re: cloud-init doesn't prefer new APT config format when old and new are provided
** Also affects: cloud-init (Ubuntu Xenial) Importance: Undecided Status: New
** Changed in: cloud-init (Ubuntu Xenial) Status: New => In Progress
** Changed in: cloud-init (Ubuntu Xenial) Importance: Undecided => Medium

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1616831

Title: cloud-init doesn't prefer new APT config format when old and new are provided

Status in cloud-init: Fix Released
Status in cloud-init package in Ubuntu: Fix Released
Status in cloud-init source package in Xenial: In Progress

Bug description: Trying to use the new APT configuration format while still providing the old format causes cloud-init to fail to configure APT. cloud-init should ignore the old format if the new format is provided, to ensure backwards compatibility. This is a problem for MAAS because we cannot safely determine which cloud-init version a specific release we are deploying uses, and as such we still need to send the old config while also providing the new one, because:

1. Yakkety uses a newer cloud-init with the new format above.
2. Xenial, Trusty and Precise use an older cloud-init that doesn't support the new format.

And this is a problem because:

1. MAAS won't be able to use derived repositories in Xenial, Trusty and Precise until this gets backported into cloud-init.
2. Commissioning is done in Xenial while deployment is done in Yakkety; both may require the same config, but it is only supported by Yakkety's cloud-init.
3. Users may be using old images that don't contain the new cloud-init at all; even though the release already supports it, the image they are using doesn't, so they have to continue using the old format.
4. MAAS cannot identify which cloud-init version is being used and, as such, needs to send both old and new config.
Aug 25 09:44:17 node02 [CLOUDINIT] cc_apt_configure.py[ERROR]: Error in apt configuration: old and new format of apt features are mutually exclusive ('apt':'{'primary': [{'arches': ['default'], 'uri': 'http://us.archive.ubuntu.com/ubuntu'}], 'preserve_sources_list': True, 'security': [{'arches': ['default'], 'uri': 'http://us.archive.ubuntu.com/ubuntu'}], 'sources': {'launchpad_3': {'source': 'deb http://ppa.launchpad.net/maas/next/ubuntu yakkety main'}}}' vs 'apt_proxy' key)

Aug 25 09:51:58 node02 [CLOUDINIT] util.py[DEBUG]: Running module apt-configure () failed
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 785, in _run_modules
    freq=freq)
  File "/usr/lib/python3/dist-packages/cloudinit/cloud.py", line 70, in run
    return self._runners.run(name, functor, args, freq, clear_on_fail)
  File "/usr/lib/python3/dist-packages/cloudinit/helpers.py", line 199, in run
    results = functor(*args)
  File "/usr/lib/python3/dist-packages/cloudinit/config/cc_apt_configure.py", line 77, in handle
    ocfg = convert_to_v3_apt_format(ocfg)
  File "/usr/lib/python3/dist-packages/cloudinit/config/cc_apt_configure.py", line 527, in convert_to_v3_apt_format
    cfg = convert_v2_to_v3_apt_format(cfg)
  File "/usr/lib/python3/dist-packages/cloudinit/config/cc_apt_configure.py", line 489, in convert_v2_to_v3_apt_format
    raise ValueError(msg)
ValueError: Error in apt configuration: old and new format of apt features are mutually exclusive ('apt':'{'preserve_sources_list': True, 'primary': [{'uri': 'http://us.archive.ubuntu.com/ubuntu', 'arches': ['default']}], 'security': [{'uri': 'http://us.archive.ubuntu.com/ubuntu', 'arches': ['default']}], 'sources': {'launchpad_3': {'source': 'deb http://ppa.launchpad.net/maas/next/ubuntu yakkety main'}}}' vs 'apt_proxy, apt_preserve_sources_list' key)

To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1616831/+subscriptions
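The intended precedence — the new top-level 'apt' dict wins and legacy apt_* keys are ignored, instead of raising the "mutually exclusive" ValueError above — can be sketched like this (illustrative only; cloud-init's real converter lives in cc_apt_configure.py and handles many more key spellings):

```python
def effective_apt_config(cloud_cfg):
    # If the new-format 'apt' key is present, silently ignore any
    # legacy apt_* keys rather than treating the mix as an error.
    if "apt" in cloud_cfg:
        return cloud_cfg["apt"]
    # Otherwise, fall back to collecting the legacy flat keys.
    legacy = {key[len("apt_"):]: value
              for key, value in cloud_cfg.items()
              if key.startswith("apt_")}
    return legacy or None
```

With this precedence, MAAS can keep sending both formats and newer cloud-init simply discards the old one.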
[Yahoo-eng-team] [Bug 1607810] Re: Wrong default key 'fdqn' in POST_LIST_ALL / cc_phone_home.py
** Also affects: cloud-init (Ubuntu) Importance: Undecided Status: New ** Changed in: cloud-init (Ubuntu) Status: New => Fix Released ** Changed in: cloud-init (Ubuntu) Importance: Undecided => Medium ** Also affects: cloud-init (Ubuntu Xenial) Importance: Undecided Status: New ** Changed in: cloud-init (Ubuntu Xenial) Status: New => In Progress ** Changed in: cloud-init (Ubuntu Xenial) Importance: Undecided => Medium -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1607810 Title: Wrong default key 'fdqn' in POST_LIST_ALL / cc_phone_home.py Status in cloud-init: Fix Released Status in cloud-init package in Ubuntu: Fix Released Status in cloud-init source package in Xenial: In Progress Bug description: The cloud-init phone_home default list of keys to post back includes 'fdqn' but it should be 'fqdn'. This is logged in the cloud-init logfiles, because the key 'fdqn' does not exist. To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1607810/+subscriptions
[Yahoo-eng-team] [Bug 1609899] Re: salt minion module writes minion keys to the wrong directory
** Also affects: cloud-init (Ubuntu) Importance: Undecided Status: New
** Changed in: cloud-init (Ubuntu) Status: New => Fix Released
** Changed in: cloud-init (Ubuntu) Importance: Undecided => Medium
** Also affects: cloud-init (Ubuntu Xenial) Importance: Undecided Status: New
** Changed in: cloud-init (Ubuntu Xenial) Status: New => In Progress
** Changed in: cloud-init (Ubuntu Xenial) Importance: Undecided => Medium

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1609899

Title: salt minion module writes minion keys to the wrong directory

Status in cloud-init: Fix Released
Status in cloud-init package in Ubuntu: Fix Released
Status in cloud-init source package in Xenial: In Progress

Bug description: Cloud-init's salt minion module writes minion.pem and minion.pub to the wrong directory. Salt-minion expects them in /etc/salt/pki/minion, but /etc/salt/pki is used by cloud-init's salt minion module. Somehow in the past this worked out, and the files would be moved to /etc/salt/pki/minion. This part I don't understand, but currently on Ubuntu 16.04 Xenial with cloud-init 0.7.7 it doesn't work out. What happens is cloud-init writes to /etc/salt/pki, and salt-minion ignores the /etc/salt/pki files and writes its own /etc/salt/pki/minion files. This results in the salt-minion-generated keys being rejected by the salt master.

Current: pki_dir = salt_cfg.get('pki_dir', '/etc/salt/pki')
Fixed: pki_dir = salt_cfg.get('pki_dir', '/etc/salt/pki/minion')

To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1609899/+subscriptions
[Yahoo-eng-team] [Bug 1621180] Re: specifying apt_mirror of '' renders empty entries in /etc/apt/sources.list for uri
** Also affects: cloud-init (Ubuntu Xenial) Importance: Undecided Status: New ** Also affects: juju-core (Ubuntu Xenial) Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1621180 Title: specifying apt_mirror of '' renders empty entries in /etc/apt/sources.list for uri Status in cloud-init: Fix Released Status in cloud-init package in Ubuntu: Fix Released Status in juju-core package in Ubuntu: New Status in cloud-init source package in Xenial: New Status in juju-core source package in Xenial: New Bug description: $ cat /tmp/foo.ud #cloud-config apt_mirror: '' $ lxc launch ubuntu-daily:yakkety sm-y0 "--config=user.user-data=$(cat /tmp/foo.ud)" $ sleep 10 $ lxc exec sm-y0 grep yakkety /etc/apt/sources.list | head -n 3 deb yakkety main restricted deb-src yakkety main restricted deb yakkety-updates main restricted basically if you provide an empty apt_mirror in the old format, then it is taken as providing an apt mirror. This non-true value should just be the same as not providing it. ProblemType: Bug DistroRelease: Ubuntu 16.10 Package: cloud-init 0.7.7-22-g763f403-0ubuntu1 ProcVersionSignature: Ubuntu 4.4.0-9136.55-generic 4.4.16 Uname: Linux 4.4.0-9136-generic x86_64 ApportVersion: 2.20.3-0ubuntu7 Architecture: amd64 Date: Wed Sep 7 17:12:11 2016 PackageArchitecture: all ProcEnviron: TERM=xterm-256color PATH=(custom, no user) SourcePackage: cloud-init UpgradeStatus: No upgrade log present (probably fresh install) To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1621180/+subscriptions
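The fix amounts to treating a falsy mirror value the same as an absent one; a minimal sketch, where the default URI and function name are illustrative rather than cloud-init's actual mirror-lookup chain:

```python
def effective_mirror(cfg, default="http://archive.ubuntu.com/ubuntu"):
    # '' is falsy in Python, so an empty apt_mirror falls through to the
    # default instead of rendering sources.list lines with a blank URI
    # ("deb  yakkety main restricted").
    return cfg.get("apt_mirror") or default
```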
[Yahoo-eng-team] [Bug 1619394] Re: OVF datasource broken
** Also affects: cloud-init (Ubuntu) Importance: Undecided Status: New
** Changed in: cloud-init (Ubuntu) Status: New => Fix Released
** Changed in: cloud-init (Ubuntu) Importance: Undecided => Medium
** Also affects: cloud-init (Ubuntu Xenial) Importance: Undecided Status: New
** Changed in: cloud-init (Ubuntu Xenial) Status: New => In Progress
** Changed in: cloud-init (Ubuntu Xenial) Importance: Undecided => Medium

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1619394

Title: OVF datasource broken

Status in cloud-init: Fix Released
Status in cloud-init package in Ubuntu: Fix Released
Status in cloud-init source package in Xenial: In Progress

Bug description: I am using cloud-init 0.7.7~bzr1256-0ubuntu1~16.04.1 on Ubuntu 16.04.1 LTS. When I pass YAML-formatted user-data over the OVF datasource via the ISO transport, the YAML string fails to be parsed. I tracked this to minidom's inability to handle newlines in an attribute. The XML below works for CoreOS but breaks under cloud-init:

If I use base64-encoded user-data I get this error:

Sep 01 12:07:43 sof2-lab3-dhcp371 cloud-init[3248]: 2016-09-01 12:07:43,854 - __init__.py[WARNING]: Unhandled non-multipart (text/x-not-multipart) userdata: 'b'I2Nsb3VkLWNvbmZpZwotLS0K'...'
Sep 01 12:07:43 sof2-lab3-dhcp371 cloud-init[3248]: [CLOUDINIT] __init__.py[WARNING]: Unhandled non-multipart (text/x-not-multipart) userdata: 'b'I2Nsb3VkLWNvbmZpZwotLS0K'...'

Is there a way to pass user-data as a single-line string that doesn't confuse minidom?

To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1619394/+subscriptions
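The base64 payload in the warning decodes cleanly, which shows the encoding side is fine (base64 text contains no newlines, so it is safe inside an XML attribute); the bug is that cloud-init fails to recognise and decode it:

```python
import base64

# The exact payload from the warning messages above.
payload = "I2Nsb3VkLWNvbmZpZwotLS0K"

# base64 alphabet has no newline, so the attribute survives minidom.
assert "\n" not in payload

# It decodes to a perfectly ordinary #cloud-config document.
decoded = base64.b64decode(payload).decode("utf-8")
assert decoded == "#cloud-config\n---\n"
```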
[Yahoo-eng-team] [Bug 1605749] Re: ConfigDrive: cloud-init fails to configure bond from network_data.json
** Also affects: cloud-init (Ubuntu) Importance: Undecided Status: New
** Changed in: cloud-init (Ubuntu) Status: New => Fix Released
** Also affects: cloud-init (Ubuntu Xenial) Importance: Undecided Status: New
** Changed in: cloud-init (Ubuntu Xenial) Status: New => In Progress
** Changed in: cloud-init (Ubuntu Xenial) Importance: Undecided => Medium
** Changed in: cloud-init (Ubuntu) Importance: Undecided => Medium

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1605749

Title: ConfigDrive: cloud-init fails to configure bond from network_data.json

Status in cloud-init: Fix Released
Status in cloud-init package in Ubuntu: Fix Released
Status in cloud-init source package in Xenial: In Progress

Bug description: cloud-init fails to configure bond interfaces from network_data.json. There are a couple of reasons: Bond links found in network_data.json do not have a name attribute. cloud-init doesn't require the name attribute to exist in links. [1] However, cloud-init later expects links to have a name attribute and crashes when they don't. [2] The name attribute is not part of the OpenStack network_data.json specification and will therefore never be provided. If a link name is provided, the generated ENI configuration has a couple of issues:

1) cloud-init currently treats the bond_links attribute found in a bond link as actual physical interface names rather than link ids as expected. This means you end up with 4 physical interfaces configured on the server: the 2 existing physical interfaces (e.g. eno1 and eno2) and 2 interfaces named after the entries in bond_links (in this case, eth0 and eth1). The latter don't exist on the server, so the configured bond interface tries to enslave non-existent links and fails to come up.

2) The "auto" stanza is missing from bond and bond slave interfaces, so the interfaces are never started/configured properly at boot.
3) Once 1) and 2) are fixed, it looks like cloud-init runs the network configuration again in dsmode=net and fails at multiple steps:

3.1) get_interfaces_by_mac is run once again and tries to detect all known MAC addresses by listing all entries found in /sys/class/net/. At this point the bonding is up and the file 'bond_masters' exists. This means '/sys/class/net/bond_masters/address' won't exist (because /sys/class/net/bond_masters is a file, not a directory), and get_interface_mac will throw an uncaught exception, aborting the configuration process.

3.2) Once 3.1) is fixed, configuration fails again, but for a different reason: once the bonding is configured, all slave interfaces have their MAC addresses updated so that they are identical. This means convert_net_json fails at the "need_names" step and throws "No mac_address or name entry for", because the MAC address of one of the physical interfaces is no longer found.

A network_data.json for test purposes is attached to this bug.
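The 3.1) failure can be avoided by skipping non-directory entries under /sys/class/net; a hedged sketch of the idea (this is not cloud-init's actual get_interfaces_by_mac, and the function name here is illustrative):

```python
import os


def interfaces_by_mac(sys_net="/sys/class/net"):
    """Map MAC address -> interface name, skipping plain-file entries
    such as 'bond_masters', which have no 'address' node."""
    macs = {}
    for name in sorted(os.listdir(sys_net)):
        dev = os.path.join(sys_net, name)
        if not os.path.isdir(dev):
            continue  # e.g. the bond_masters file created by bonding
        addr_path = os.path.join(dev, "address")
        if not os.path.isfile(addr_path):
            continue
        with open(addr_path) as fp:
            mac = fp.read().strip().lower()
        if mac:
            macs.setdefault(mac, name)
    return macs
```

The 3.2) failure (identical slave MACs after enslavement) needs a separate fix in the name-resolution step and is not covered by this sketch.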
For reference, here is the MAC address mapping on the server:
- eno1: 0c:c4:7a:34:6e:3c
- eno2: 0c:c4:7a:34:6e:3d

Current rendered ENI is:

auto lo
iface lo inet loopback
    dns-nameservers 1.1.1.191 1.1.1.4

iface eno1 inet manual
    mtu 1500

iface eno2 inet manual
    mtu 1500

iface bond0 inet manual
    bond_xmit_hash_policy layer3+4
    bond_miimon 100
    bond_mode 4
    bond-slaves none

auto eth0
iface eth0 inet manual
    bond_miimon 100
    bond-master bond0
    bond_mode 4
    bond_xmit_hash_policy layer3+4

auto eth1
iface eth1 inet manual
    bond_miimon 100
    bond-master bond0
    bond_mode 4
    bond_xmit_hash_policy layer3+4

auto bond0.602
iface bond0.602 inet static
    netmask 255.255.255.248
    address 2.2.2.13
    vlan-raw-device bond0
    hwaddress fa:16:3e:b3:72:30
    vlan_id 602
    post-up route add default gw 2.2.2.9 || true
    pre-down route del default gw 2.2.2.9 || true

auto bond0.612
iface bond0.612 inet static
    netmask 255.255.255.248
    address 10.0.1.5
    vlan-raw-device bond0
    hwaddress fa:16:3e:66:ab:a6
    vlan_id 612
    post-up route add -net 192.168.1.0 netmask 255.255.255.255 gw 10.0.1.1 || true
    pre-down route del -net 192.168.1.0 netmask 255.255.255.255 gw 10.0.1.1 || true

[1] http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/sources/helpers/openstack.py#L547
[2] http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/net/network_state.py#L284

To manage notifications about this bug go to:
[Yahoo-eng-team] [Bug 1576692] Re: fully support package installation in systemd
fixed in 0.7.8. ** Changed in: cloud-init Status: Confirmed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1576692 Title: fully support package installation in systemd Status in cloud-init: Fix Released Status in cloud-init package in Ubuntu: Fix Released Status in init-system-helpers package in Ubuntu: Fix Committed Status in cloud-init source package in Xenial: Confirmed Status in init-system-helpers source package in Xenial: In Progress Bug description: in cloud-init users can install packages via cloud-config: #cloud-config packages: [apache2] Due to some intricacies of systemd and service installation that doesn't work all that well. We fixed the issue for simple services that do not have any dependencies on other services, or at least don't check those dependencies well under bug 1575572. We'd like to have a way to fully support this in cloud-init. Related bugs: * bug 1575572: apache2 fails to start if installed via cloud config (on Xenial) * bug 1611973: postgresql@9.5-main service not started if postgres installed via cloud-init * bug 1621336: snapd.boot-ok.service hangs eternally on cloud image upgrades (snapd packaging bug, but this cloud-init fix will workaround it) * bug 1620780: dev-sda2.device job running and times out SRU INFORMATION === FIX for init-system-helpers: https://anonscm.debian.org/cgit/collab-maint/init-system-helpers.git/commit/?id=1460d6a02 REGRESSION POTENTIAL for init-system-helpers: This changes invoke-rc.d and service, two very central pieces of packaging infrastructure. Errors in it will break installation/upgrades of packages or /etc/network/if-up.d/ hooks and the like. This changes the condition when systemd units get started without their dependencies, and the condition gets weakened. 
This means that behaviour in a booted system is unchanged, but during boot this could change the behaviour of if-up.d/ hooks (although they have never been defined well during boot anyway). However, I tested this change extensively in cloud images and desktop installations (particularly I recreated https://bugs.debian.org/777113 and confirmed that this approach also fixes it) and could not find any regression.

TEST CASE (for both packages): Run

  lxc launch ubuntu-daily:x --config=user.user-data="$(printf "#cloud-config\npackages: [postgresql, samba, postfix]")" x1

This will install all three packages, but "systemctl status postgresql@9.5-main" will not be running. Now prepare a new image with the proposed cloud-init and init-system-helpers:

  lxc launch ubuntu-daily:x xprep
  lxc exec xprep bash   # enable -proposed and dist-upgrade, then poweroff
  lxc publish xprep x-proposed

Now run the initial lxc launch again, but against that new x-proposed image instead of the standard daily:

  lxc launch x-proposed --config=user.user-data="$(printf "#cloud-config\npackages: [postgresql, samba, postfix]")" x1

You should now have "systemctl status postgresql@9.5-main" running. Directly after rebooting the instance, check that there are no hanging jobs (systemctl list-jobs), particularly networking.service, to ensure that https://bugs.debian.org/777113 did not come back. Also test interactively installing a package that ships a service, like "apache2", and verify that it starts properly after installation.

To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1576692/+subscriptions
[Yahoo-eng-team] [Bug 1619394] Re: OVF datasource broken
Fixed in 0.7.8.

** Changed in: cloud-init Status: Confirmed => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1619394

Title: OVF datasource broken

Status in cloud-init: Fix Released

Bug description: I am using cloud-init 0.7.7~bzr1256-0ubuntu1~16.04.1 on Ubuntu 16.04.1 LTS. When I pass YAML-formatted user-data over the OVF datasource via the ISO transport, the YAML string fails to be parsed. I tracked this to minidom's inability to handle newlines in an attribute. The XML below works for CoreOS but breaks under cloud-init:

If I use base64-encoded user-data I get this error:

Sep 01 12:07:43 sof2-lab3-dhcp371 cloud-init[3248]: 2016-09-01 12:07:43,854 - __init__.py[WARNING]: Unhandled non-multipart (text/x-not-multipart) userdata: 'b'I2Nsb3VkLWNvbmZpZwotLS0K'...'
Sep 01 12:07:43 sof2-lab3-dhcp371 cloud-init[3248]: [CLOUDINIT] __init__.py[WARNING]: Unhandled non-multipart (text/x-not-multipart) userdata: 'b'I2Nsb3VkLWNvbmZpZwotLS0K'...'

Is there a way to pass user-data as a single-line string that doesn't confuse minidom?

To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1619394/+subscriptions
[Yahoo-eng-team] [Bug 1607810] Re: Wrong default key 'fdqn' in POST_LIST_ALL / cc_phone_home.py
fixed in 0.7.8.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1607810

Title:
  Wrong default key 'fdqn' in POST_LIST_ALL / cc_phone_home.py

Status in cloud-init:
  Fix Released

Bug description:
  The cloud-init phone_home default list of keys to post back includes 'fdqn' but it should be 'fqdn'. This is logged in the cloud-init logfiles, because the key 'fdqn' does not exist.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1607810/+subscriptions
[Yahoo-eng-team] [Bug 1609899] Re: salt minion module writes minion keys to the wrong directory
fixed in 0.7.8.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1609899

Title:
  salt minion module writes minion keys to the wrong directory

Status in cloud-init:
  Fix Released

Bug description:
  cloud-init's salt minion module writes minion.pem and minion.pub to the wrong directory. salt-minion expects them in /etc/salt/pki/minion, but cloud-init's salt minion module uses /etc/salt/pki. Somehow in the past this worked out, and the files would be moved to /etc/salt/pki/minion. This part I don't understand, but currently on Ubuntu 16.04 Xenial with cloud-init 0.7.7 it doesn't work out. What happens is that cloud-init writes to /etc/salt/pki, and salt-minion ignores the /etc/salt/pki files and writes its own /etc/salt/pki/minion files. This results in the salt-minion-generated keys being rejected by the salt master.

  Current: pki_dir = salt_cfg.get('pki_dir', '/etc/salt/pki')
  Fixed:   pki_dir = salt_cfg.get('pki_dir', '/etc/salt/pki/minion')

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1609899/+subscriptions
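The one-line fix quoted above only changes the fallback of a dict lookup; a minimal sketch of its effect, where `salt_cfg` stands in for the parsed salt-minion section of the user's cloud-config (the helper name is illustrative, not cloud-init's actual API):

```python
def minion_pki_dir(salt_cfg):
    # Fall back to the directory salt-minion actually reads,
    # /etc/salt/pki/minion, instead of the old /etc/salt/pki default.
    return salt_cfg.get('pki_dir', '/etc/salt/pki/minion')

print(minion_pki_dir({}))                            # default applies
print(minion_pki_dir({'pki_dir': '/srv/salt/pki'}))  # explicit value wins
```

Users who set `pki_dir` explicitly in their cloud-config were never affected; only the default changed.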
[Yahoo-eng-team] [Bug 1621180] Re: specifying apt_mirror of '' renders empty entries in /etc/apt/sources.list for uri
fixed in 0.7.8.

** Changed in: cloud-init
   Status: Confirmed => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1621180

Title:
  specifying apt_mirror of '' renders empty entries in /etc/apt/sources.list for uri

Status in cloud-init:
  Fix Released
Status in cloud-init package in Ubuntu:
  Fix Released
Status in juju-core package in Ubuntu:
  New

Bug description:
  $ cat /tmp/foo.ud
  #cloud-config
  apt_mirror: ''
  $ lxc launch ubuntu-daily:yakkety sm-y0 "--config=user.user-data=$(cat /tmp/foo.ud)"
  $ sleep 10
  $ lxc exec sm-y0 grep yakkety /etc/apt/sources.list | head -n 3
  deb yakkety main restricted
  deb-src yakkety main restricted
  deb yakkety-updates main restricted

  Basically, if you provide an empty apt_mirror in the old format, it is taken as providing an apt mirror. This non-true value should just be treated the same as not providing it.

  ProblemType: Bug
  DistroRelease: Ubuntu 16.10
  Package: cloud-init 0.7.7-22-g763f403-0ubuntu1
  ProcVersionSignature: Ubuntu 4.4.0-9136.55-generic 4.4.16
  Uname: Linux 4.4.0-9136-generic x86_64
  ApportVersion: 2.20.3-0ubuntu7
  Architecture: amd64
  Date: Wed Sep 7 17:12:11 2016
  PackageArchitecture: all
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
  SourcePackage: cloud-init
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1621180/+subscriptions
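The expected behaviour amounts to a truthiness check rather than a key-presence check; a sketch of the idea (function name and default URI are illustrative, not cloud-init's actual code):

```python
def choose_mirror(cfg, default='http://archive.ubuntu.com/ubuntu'):
    # An empty or missing apt_mirror must fall back to the default
    # instead of rendering an empty uri field into sources.list.
    mirror = cfg.get('apt_mirror')
    return mirror if mirror else default
```

With a presence check (`'apt_mirror' in cfg`), the empty string slips through and produces the malformed `deb  yakkety main restricted` lines shown above.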
[Yahoo-eng-team] [Bug 1603238] Re: BOM error updating hostname on centos6.x
** Changed in: cloud-init
   Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1603238

Title:
  BOM error updating hostname on centos6.x

Status in cloud-init:
  Fix Released

Bug description:
  Seeing the following:

  Jul 14 15:42:38 cent6-example [CLOUDINIT] util.py[DEBUG]: Failed to update the hostname to cent6-example.cloud.phx3.gdg (cent6-example)
  Traceback (most recent call last):
    File "/usr/lib/python2.6/site-packages/cloudinit/config/cc_update_hostname.py", line 39, in handle
      cloud.distro.update_hostname(hostname, fqdn, prev_fn)
    File "/usr/lib/python2.6/site-packages/cloudinit/distros/__init__.py", line 214, in update_hostname
      prev_hostname = self._read_hostname(prev_hostname_fn)
    File "/usr/lib/python2.6/site-packages/cloudinit/distros/rhel.py", line 172, in _read_hostname
      (_exists, contents) = rhel_util.read_sysconfig_file(filename)
    File "/usr/lib/python2.6/site-packages/cloudinit/distros/rhel_util.py", line 64, in read_sysconfig_file
      return (exists, SysConf(contents))
    File "/usr/lib/python2.6/site-packages/cloudinit/distros/parsers/sys_conf.py", line 61, in __init__
      write_empty_values=True)
    File "/usr/lib/python2.6/site-packages/configobj.py", line 1219, in __init__
      self._load(infile, configspec)
    File "/usr/lib/python2.6/site-packages/configobj.py", line 1272, in _load
      infile = self._handle_bom(infile)
    File "/usr/lib/python2.6/site-packages/configobj.py", line 1422, in _handle_bom
      if not line.startswith(BOM):
  UnicodeDecodeError: 'ascii' codec can't decode byte 0xff in position 0: ordinal not in range(128)

  $ rpm -qa | grep configobj
  python-configobj-4.6.0-3.el6.noarch

  This might be fixed in a newer configobj (probably is) but just wanted to note this here.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1603238/+subscriptions
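One robust way to sidestep this class of failure is to detect and strip a byte-order mark before handing the contents to an ASCII-minded parser. The 0xff at position 0 in the traceback is consistent with a UTF-16-LE BOM. This helper is a sketch of the idea, not cloud-init's actual fix:

```python
import codecs

# Common BOMs and the encoding each one implies; checked longest-prefix
# semantics are safe here because the byte patterns don't overlap.
BOMS = (
    (codecs.BOM_UTF8, 'utf-8'),
    (codecs.BOM_UTF16_LE, 'utf-16-le'),
    (codecs.BOM_UTF16_BE, 'utf-16-be'),
)

def decode_sysconfig(raw: bytes) -> str:
    # Strip a leading BOM and decode with the matching codec, so a
    # downstream parser like old configobj never sees the BOM bytes.
    for bom, enc in BOMS:
        if raw.startswith(bom):
            return raw[len(bom):].decode(enc)
    return raw.decode('utf-8')
```

Newer configobj releases handle BOMs themselves, as the reporter suspects; the point of the sketch is only where the decode belongs.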
[Yahoo-eng-team] [Bug 1612313] Re: maas datasource needs support for vendor-data
fixed in 0.7.8.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

** Changed in: cloud-init
   Assignee: (unassigned) => Scott Moser (smoser)

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1612313

Title:
  maas datasource needs support for vendor-data

Status in cloud-init:
  Fix Released

Bug description:
  maas datasource does not support vendor-data. We would like to take advantage of vendordata in maas, and thus cloud-init needs it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1612313/+subscriptions
[Yahoo-eng-team] [Bug 1610784] Re: cloud-init openstack.py code does not recognize network type 'tap'
fixed in 0.7.8.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1610784

Title:
  cloud-init openstack.py code does not recognize network type 'tap'

Status in cloud-init:
  Fix Released
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Confirmed

Bug description:
  == System info ==
  Xenial 16.04.1, Mitaka, MAAS 2.0 (RC2 currently), Juju 2.0 (beta7 currently - awaiting beta14), 16.07 Charms, neutron-calico-11 (awaiting fix for another bug).

  == Config drive link ==
  https://gist.github.com/anonymous/ce20298b9e12e7fe77851552c2a91243

  == Error log ==
  From line 574 in https://git.launchpad.net/cloud-init/tree/cloudinit/sources/helpers/openstack.py:

    if link['type'] in ['ethernet', 'vif', 'ovs', 'phy', 'bridge']:

  Jul 28 10:31:38 ubuntu cloud-init[1209]: failed run of stage init-local
  Jul 28 10:31:38 ubuntu cloud-init[1209]: Traceback (most recent call last):
  Jul 28 10:31:38 ubuntu cloud-init[1209]:   File "/usr/lib/python3/dist-packages/cloudinit/cmd/main.py", line 530, in status_wrapper
  Jul 28 10:31:38 ubuntu cloud-init[1209]:     ret = functor(name, args)
  Jul 28 10:31:38 ubuntu cloud-init[1209]:   File "/usr/lib/python3/dist-packages/cloudinit/cmd/main.py", line 277, in main_init
  Jul 28 10:31:38 ubuntu cloud-init[1209]:     init.apply_network_config(bring_up=bool(mode != sources.DSMODE_LOCAL))
  Jul 28 10:31:38 ubuntu cloud-init[1209]:   File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 631, in apply_network_config
  Jul 28 10:31:38 ubuntu cloud-init[1209]:     netcfg, src = self._find_networking_config()
  Jul 28 10:31:38 ubuntu cloud-init[1209]:   File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 618, in _find_networking_config
  Jul 28 10:31:38 ubuntu cloud-init[1209]:     if self.datasource and hasattr(self.datasource, 'network_config'):
  Jul 28 10:31:38 ubuntu cloud-init[1209]:   File "/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceConfigDrive.py", line 159, in network_config
  Jul 28 10:31:38 ubuntu cloud-init[1209]:     self.network_json, known_macs=self.known_macs)
  Jul 28 10:31:38 ubuntu cloud-init[1209]:   File "/usr/lib/python3/dist-packages/cloudinit/sources/helpers/openstack.py", line 599, in convert_net_json
  Jul 28 10:31:38 ubuntu cloud-init[1209]:     'Unknown network_data link type: %s' % link['type'])
  Jul 28 10:31:38 ubuntu cloud-init[1209]: ValueError: Unknown network_data link type: tap

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1610784/+subscriptions
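The failing check quoted in the error log is a plain membership test, so the fix amounts to extending the set of link types treated as physical. A sketch of the idea (the constant and function names are illustrative, not cloud-init's actual identifiers):

```python
# Link types accepted as physical interfaces; 'tap' is the addition
# needed for the config drive above to pass the membership test.
PHYSICAL_LINK_TYPES = ('ethernet', 'vif', 'ovs', 'phy', 'bridge', 'tap')

def is_physical_link(link):
    # Mirrors the `if link['type'] in [...]` check from openstack.py,
    # with 'tap' included so these links no longer raise ValueError.
    return link['type'] in PHYSICAL_LINK_TYPES
```

Any link type outside the tuple would still fall through to the "Unknown network_data link type" error path, which is the desired behaviour for genuinely unknown types.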
[Yahoo-eng-team] [Bug 1605749] Re: ConfigDrive: cloud-init fails to configure bond from network_data.json
fixed in 0.7.8.

** Changed in: cloud-init
   Status: Confirmed => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1605749

Title:
  ConfigDrive: cloud-init fails to configure bond from network_data.json

Status in cloud-init:
  Fix Released

Bug description:
  cloud-init fails to configure bond interfaces from network_data.json. There are a couple of reasons:

  Bond links found in network_data.json do not have a name attribute. cloud-init doesn't require the name attribute to exist in links [1], but it later expects links to have a name attribute and crashes when they don't have one [2]. The name attribute is not part of the OpenStack network_data.json specification and will therefore never be provided.

  If a link name is provided, the generated ENI configuration has a couple of issues:

  1) cloud-init currently treats the entries of a bond link's bond_links attribute as actual physical interface names rather than as link ids, as expected. This means you end up with 4 physical interfaces configured on the server: the 2 existing physical interfaces (e.g. eno1 and eno2) and 2 physical interfaces based on the names found in bond_links (in this case, eth0 and eth1). The latter don't exist on the server, so the configured bond interface tries to enslave non-existent links and fails to come up.

  2) The "auto" stanza is missing from bond and bond slave interfaces, so the interfaces are never started/configured properly at boot.

  3) Once 1) and 2) are fixed, it looks like cloud-init runs the network configuration again in dsmode=net and fails at multiple steps:

  3.1) get_interfaces_by_mac is run once again and tries to detect all known MAC addresses by listing all entries found in /sys/class/net/. At this point the bonding is up and the file 'bond_masters' exists. This means '/sys/class/net/bond_masters/address' won't exist (because /sys/class/net/bond_masters is a file, not a directory), and get_interface_mac will throw an uncaught exception, aborting the configuration process.

  3.2) Once 3.1) is fixed, configuration fails again, but for a different reason: once the bonding is configured, all slave interfaces have their MAC addresses updated so they are all identical. This means convert_net_json fails at the "need_names" step and throws the exception "No mac_address or name entry for", because the MAC address of one of the physical interfaces is no longer found.

  Attached to this bug is a network_data.json for test purposes. For reference, here is the MAC address mapping on the server:

  - eno1: 0c:c4:7a:34:6e:3c
  - eno2: 0c:c4:7a:34:6e:3d

  The currently rendered ENI is:

  auto lo
  iface lo inet loopback
      dns-nameservers 1.1.1.191 1.1.1.4

  iface eno1 inet manual
      mtu 1500

  iface eno2 inet manual
      mtu 1500

  iface bond0 inet manual
      bond_xmit_hash_policy layer3+4
      bond_miimon 100
      bond_mode 4
      bond-slaves none

  auto eth0
  iface eth0 inet manual
      bond_miimon 100
      bond-master bond0
      bond_mode 4
      bond_xmit_hash_policy layer3+4

  auto eth1
  iface eth1 inet manual
      bond_miimon 100
      bond-master bond0
      bond_mode 4
      bond_xmit_hash_policy layer3+4

  auto bond0.602
  iface bond0.602 inet static
      netmask 255.255.255.248
      address 2.2.2.13
      vlan-raw-device bond0
      hwaddress fa:16:3e:b3:72:30
      vlan_id 602
      post-up route add default gw 2.2.2.9 || true
      pre-down route del default gw 2.2.2.9 || true

  auto bond0.612
  iface bond0.612 inet static
      netmask 255.255.255.248
      address 10.0.1.5
      vlan-raw-device bond0
      hwaddress fa:16:3e:66:ab:a6
      vlan_id 612
      post-up route add -net 192.168.1.0 netmask 255.255.255.255 gw 10.0.1.1 || true
      pre-down route del -net 192.168.1.0 netmask 255.255.255.255 gw 10.0.1.1 || true

  [1] http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/sources/helpers/openstack.py#L547
  [2] http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/net/network_state.py#L284

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1605749/+subscriptions
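Point 3.1 above can be addressed by only treating directory entries under /sys/class/net as interfaces, since plain files (such as the 'bond_masters' file the report mentions) have no 'address' attribute. A sketch of the idea, not cloud-init's actual implementation, written so it can be exercised against any directory:

```python
import os

def interface_names(sys_net='/sys/class/net'):
    # Each real interface is a directory containing an 'address' file;
    # plain files that appear once bonding is active must be skipped
    # rather than letting a later open() of '<file>/address' blow up.
    return sorted(
        name for name in os.listdir(sys_net)
        if os.path.isdir(os.path.join(sys_net, name))
    )
```

A caller enumerating MACs would then read `<entry>/address` only for the names this returns, so the uncaught exception from 3.1 cannot occur.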
[Yahoo-eng-team] [Bug 1616831] Re: cloud-init doesn't prefer new APT config format when old and new are provided
fixed in 0.7.8.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1616831

Title:
  cloud-init doesn't prefer new APT config format when old and new are provided

Status in cloud-init:
  Fix Released
Status in cloud-init package in Ubuntu:
  Fix Released

Bug description:
  Using the new APT configuration format while still providing the OLD format causes cloud-init to fail to configure APT. cloud-init should ignore the old format if the new format is provided, to ensure backwards compatibility.

  This is a problem for MAAS because we cannot safely differentiate/determine which cloud-init version we are using for a specific release we are deploying, and as such we still need to send the old config while still providing the new one, because:

  1. Yakkety uses a newer cloud-init with the new format above.
  2. Xenial, Trusty and Precise use an older cloud-init that doesn't support the new format.

  And this is a problem because:

  1. MAAS won't be able to use derived repositories in Xenial, Trusty and Precise until this gets backported into cloud-init.
  2. Commissioning is done in Xenial while deployment is in Yakkety; both may require the same config, but it is only supported by Yakkety's cloud-init.
  3. Users may be using old images that don't contain the new cloud-init at all; even though the release already supports it, the image they are using doesn't, so they have to continue to use the old format.
  4. MAAS cannot differentiate/identify which cloud-init version is being used, and as such needs to send both old and new config.

  Aug 25 09:44:17 node02 [CLOUDINIT] cc_apt_configure.py[ERROR]: Error in apt configuration: old and new format of apt features are mutually exclusive ('apt':'{'primary': [{'arches': ['default'], 'uri': 'http://us.archive.ubuntu.com/ubuntu'}], 'preserve_sources_list': True, 'security': [{'arches': ['default'], 'uri': 'http://us.archive.ubuntu.com/ubuntu'}], 'sources': {'launchpad_3': {'source': 'deb http://ppa.launchpad.net/maas/next/ubuntu yakkety main'}}}' vs 'apt_proxy' key)

  Aug 25 09:51:58 node02 [CLOUDINIT] util.py[DEBUG]: Running module apt-configure () failed
  Traceback (most recent call last):
    File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 785, in _run_modules
      freq=freq)
    File "/usr/lib/python3/dist-packages/cloudinit/cloud.py", line 70, in run
      return self._runners.run(name, functor, args, freq, clear_on_fail)
    File "/usr/lib/python3/dist-packages/cloudinit/helpers.py", line 199, in run
      results = functor(*args)
    File "/usr/lib/python3/dist-packages/cloudinit/config/cc_apt_configure.py", line 77, in handle
      ocfg = convert_to_v3_apt_format(ocfg)
    File "/usr/lib/python3/dist-packages/cloudinit/config/cc_apt_configure.py", line 527, in convert_to_v3_apt_format
      cfg = convert_v2_to_v3_apt_format(cfg)
    File "/usr/lib/python3/dist-packages/cloudinit/config/cc_apt_configure.py", line 489, in convert_v2_to_v3_apt_format
      raise ValueError(msg)
  ValueError: Error in apt configuration: old and new format of apt features are mutually exclusive ('apt':'{'preserve_sources_list': True, 'primary': [{'uri': 'http://us.archive.ubuntu.com/ubuntu', 'arches': ['default']}], 'security': [{'uri': 'http://us.archive.ubuntu.com/ubuntu', 'arches': ['default']}], 'sources': {'launchpad_3': {'source': 'deb http://ppa.launchpad.net/maas/next/ubuntu yakkety main'}}}' vs 'apt_proxy, apt_preserve_sources_list' key)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1616831/+subscriptions
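The precedence the bug asks for is easy to state: when the new-style `apt` key is present, legacy `apt_*` keys should be ignored rather than rejected. A sketch under that assumption (the function is illustrative, not the actual cc_apt_configure code):

```python
def effective_apt_config(cfg):
    # New format wins outright; silently ignore legacy apt_* keys so a
    # sender like MAAS can ship both formats without triggering the
    # "mutually exclusive" ValueError shown in the log above.
    if 'apt' in cfg:
        return cfg['apt']
    # Otherwise fall back to whatever legacy apt_* keys were provided.
    return {k: v for k, v in cfg.items() if k.startswith('apt_')}
```

This keeps old-only configs working on old releases while letting new-aware releases take the richer format, which is exactly the dual-send scenario described in the report.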
[Yahoo-eng-team] [Bug 1597521] Re: files should not be injected if config drive is configured
Reviewed:  https://review.openstack.org/335676
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=691eb01b5997e08932f386dc84009e4e1761
Submitter: Jenkins
Branch:    master

commit 691eb01b5997e08932f386dc84009e4e1761
Author: Vladik Romanovsky
Date:   Thu Sep 8 14:14:08 2016 -0400

    libvirt: inject files when config drive is not requested

    This patch fixes a regression introduced by [1], which made files to be injected into the root disk even if the config disk is not in use.

    [1] https://review.openstack.org/#/c/303335

    Closes-bug: #1597521
    Change-Id: I990f19943f36356760b7bf425bc59901c9cdc1de

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1597521

Title:
  files should not be injected if config drive is configured

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  A regression introduced by [1], which made files to be injected into the root disk even if the config disk is not in use.

  [1] https://review.openstack.org/#/c/303335

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1597521/+subscriptions
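The regression and its fix reduce to a guard condition: files should only be injected directly into the root disk when no config drive carries them. A sketch of that decision (function and parameter names are hypothetical, not nova's API):

```python
def should_inject_into_root_disk(injected_files, config_drive_requested):
    # When a config drive is requested, personality files travel on the
    # drive itself; writing into the root disk is only the fallback for
    # the no-config-drive case.
    return bool(injected_files) and not config_drive_requested
```

Getting this condition inverted, or dropping it entirely, produces exactly the behaviour reported: injection happening regardless of the config-drive setting.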
[Yahoo-eng-team] [Bug 1618666] Re: deprecated warning for SafeConfigParser
Reviewed:  https://review.openstack.org/368413
Committed: https://git.openstack.org/cgit/openstack/keystone/commit/?id=408820cbe360e94388d63ba1778f020e9894cdce
Submitter: Jenkins
Branch:    master

commit 408820cbe360e94388d63ba1778f020e9894cdce
Author: xianming mao
Date:   Sun Sep 11 16:58:13 2016 +0800

    Use ConfigParser instead of SafeConfigParser

    The SafeConfigParser class has been renamed to ConfigParser in Python 3.2 [1]. This alias will be removed in future versions, so we can use ConfigParser directly instead.

    [1] http://bugs.python.org/issue10627

    Closes-Bug: #1618666
    Change-Id: If01186cefad2149d65ffcc1fc6550d72d26f5b11

** Changed in: keystone
   Status: In Progress => Fix Released

** Bug watch added: Python Roundup #10627
   http://bugs.python.org/issue10627

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1618666

Title:
  deprecated warning for SafeConfigParser

Status in Glance:
  In Progress
Status in glance_store:
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in PBR:
  Fix Released
Status in python-swiftclient:
  In Progress
Status in OpenStack Object Storage (swift):
  In Progress
Status in OpenStack DBaaS (Trove):
  In Progress

Bug description:
  tox -e py34 is reporting a deprecation warning for SafeConfigParser:

  /octavia/.tox/py34/lib/python3.4/site-packages/pbr/util.py:207: DeprecationWarning: The SafeConfigParser class has been renamed to ConfigParser in Python 3.2. This alias will be removed in future versions. Use ConfigParser directly instead.
    parser = configparser.SafeConfigParser()

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1618666/+subscriptions
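The fix is mechanical: instantiate `ConfigParser` where `SafeConfigParser` was used. On Python 3, the deprecated name was just an alias for the same class (and has since been removed in newer Pythons, so the rename is not optional forever):

```python
import configparser

# ConfigParser is the non-deprecated name; the old SafeConfigParser
# alias emits (or no longer exists, on recent Pythons) the warning above.
parser = configparser.ConfigParser()
parser.read_string("[database]\nconnection = sqlite:///test.db\n")

print(parser.get("database", "connection"))  # sqlite:///test.db
```

The `[database]` section and values here are illustrative; any existing `SafeConfigParser` call site can be renamed in place without behavioural change on Python 3.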
[Yahoo-eng-team] [Bug 1278690] Re: VMware: Fetching images from Glance is slower than it should be
Reviewed:  https://review.openstack.org/281134
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=2df83abaa0a5c828421fc38602cc1e5145b46ff4
Submitter: Jenkins
Branch:    master

commit 2df83abaa0a5c828421fc38602cc1e5145b46ff4
Author: Radoslav Gerganov
Date:   Wed Feb 17 10:35:59 2016 +0200

    VMware: Refactor the image transfer

    The image transfer is unnecessarily complicated and buggy. When transferring streamOptimized images we have to update the progress of the NFC lease to prevent timeouts. This patch replaces the complex usage of blocking queues and threads with a simple read+write loop. It has the same performance and the code is much cleaner. The NFC lease is updated with the loopingcall utility.

    Closes-Bug: #1546454
    Closes-Bug: #1278690
    Related-Bug: #1495429
    Change-Id: I96e8e0682bcc642a2a5c4b7d2851812bef60d2ff

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1278690

Title:
  VMware: Fetching images from Glance is slower than it should be

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Ran an experiment with a 1.5GB file on a very basic devstack setup:

    glance download | curl PUT: 2.5min
    nova code: 4min

  Customers have reported similar concerns.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1278690/+subscriptions
[Yahoo-eng-team] [Bug 1546454] Re: VMware: NFC lease has to be updated when transferring streamOpt images
Reviewed:  https://review.openstack.org/281134
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=2df83abaa0a5c828421fc38602cc1e5145b46ff4
Submitter: Jenkins
Branch:    master

commit 2df83abaa0a5c828421fc38602cc1e5145b46ff4
Author: Radoslav Gerganov
Date:   Wed Feb 17 10:35:59 2016 +0200

    VMware: Refactor the image transfer

    The image transfer is unnecessarily complicated and buggy. When transferring streamOptimized images we have to update the progress of the NFC lease to prevent timeouts. This patch replaces the complex usage of blocking queues and threads with a simple read+write loop. It has the same performance and the code is much cleaner. The NFC lease is updated with the loopingcall utility.

    Closes-Bug: #1546454
    Closes-Bug: #1278690
    Related-Bug: #1495429
    Change-Id: I96e8e0682bcc642a2a5c4b7d2851812bef60d2ff

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1546454

Title:
  VMware: NFC lease has to be updated when transferring streamOpt images

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Booting large streamOptimized images (>2GB) fails because the NFC lease is not updated. This causes the lease to time out and kill the image transfer. The fix is to call the update_progress() method every 60 seconds. This is also an opportunity to refactor the image transfer code and make it simpler.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1546454/+subscriptions
[Yahoo-eng-team] [Bug 1622566] Re: logs from linuxbridge agent have "with host None"
Reviewed:  https://review.openstack.org/368750
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=2fe2efc55db4cffe90dd284cb074de81e0efbd4e
Submitter: Jenkins
Branch:    master

commit 2fe2efc55db4cffe90dd284cb074de81e0efbd4e
Author: Kevin Benton
Date:   Fri Sep 9 07:52:57 2016 -0700

    LinuxBridge: Pass host into get_devices_details_list

    Pass the host into get_devices_details_list on the linux bridge agent so the debug logs on the server side don't show "host None". This is mainly just for cosmetics and consistency with the OVS agent since the only thing the host is really used for on the server side is special treatment of DVR ports, which does not currently apply to linux bridge.

    Change-Id: I700fa26982bdb087cf7ea4b3eb69aec2f2e099c8
    Closes-Bug: #1622566

** Changed in: neutron
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622566

Title:
  logs from linuxbridge agent have "with host None"

Status in neutron:
  Fix Released

Bug description:
  The linux bridge agent does not pass the host into get_devices_details_list, which results in misleading debug logs on the server side that imply there may be missing data:

  2016-09-10 01:54:42.307 14800 DEBUG neutron.plugins.ml2.rpc [req-96973c7b-8753-40be-b52a-0ce4b250ffb9 - -] Device tap7495d684-71 details requested by agent lb6679bf6e7639 with host None get_device_details /opt/stack/new/neutron/neutron/plugins/ml2/rpc.py:71

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1622566/+subscriptions
[Yahoo-eng-team] [Bug 1622698] [NEW] Use unique names for event strings
Public bug reported:

Angular widgets that listen for or dispatch events have the possibility of accidentally picking up the wrong event unless the event string is unique. For example, see https://review.openstack.org/#/c/321132/8/horizon/static/framework/widgets/magic-search/magic-search.module.js

"textSearch" is fairly generic, and might collide with another component accidentally. Consider instead including the module name as part of the event string. For example: "horizon.framework.widgets.magic-search.events.textSearch"

** Affects: horizon
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1622698

Title:
  Use unique names for event strings

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Angular widgets that listen for or dispatch events have the possibility of accidentally picking up the wrong event unless the event string is unique. For example, see https://review.openstack.org/#/c/321132/8/horizon/static/framework/widgets/magic-search/magic-search.module.js

  "textSearch" is fairly generic, and might collide with another component accidentally. Consider instead including the module name as part of the event string. For example: "horizon.framework.widgets.magic-search.events.textSearch"

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1622698/+subscriptions
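The suggested convention is simply to build every event string from the owning module's name, so two modules can never collide. A language-neutral sketch of the naming scheme (the actual Horizon code would define such constants in the Angular module itself):

```python
# Namespace event strings under the owning module's dotted path so a
# generic name like 'textSearch' cannot collide across components.
MODULE = 'horizon.framework.widgets.magic-search'

EVENTS = {
    'TEXT_SEARCH': MODULE + '.events.textSearch',
}

print(EVENTS['TEXT_SEARCH'])
# horizon.framework.widgets.magic-search.events.textSearch
```

Listeners and dispatchers then share the constant instead of repeating the raw string, which also makes renames a one-line change.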
[Yahoo-eng-team] [Bug 1622694] [NEW] [FWaaS] Unit test race condition in creating/updating firewall
Public bug reported:

The FWaaS unit test neutron_fwaas.tests.unit.services.firewall.test_fwaas_plugin.TestFirewallPluginBase.test_update_firewall_shared_fails_for_non_admin creates a firewall, and then tries an update. If that update occurs before the creation is completed, then the router is still in PENDING_UPDATE state and a successful return code is returned. Since the test is expecting exc.HTTPForbidden.code (HTTP 403), this means the test fails. This looks like a race condition, but it should be handled properly.

** Affects: neutron
   Importance: Medium
   Status: Confirmed

** Tags: fwaas

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622694

Title:
  [FWaaS] Unit test race condition in creating/updating firewall

Status in neutron:
  Confirmed

Bug description:
  The FWaaS unit test neutron_fwaas.tests.unit.services.firewall.test_fwaas_plugin.TestFirewallPluginBase.test_update_firewall_shared_fails_for_non_admin creates a firewall, and then tries an update. If that update occurs before the creation is completed, then the router is still in PENDING_UPDATE state and a successful return code is returned. Since the test is expecting exc.HTTPForbidden.code (HTTP 403), this means the test fails. This looks like a race condition, but it should be handled properly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1622694/+subscriptions
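A common way to remove such a race in tests is to poll the resource's status until it leaves its PENDING state before issuing the update. A generic sketch of that pattern (not the actual FWaaS test code; a real test would also sleep between polls):

```python
def wait_for_status(get_status, target, attempts=10):
    """Poll get_status() until it returns target; True on success.

    get_status is any zero-argument callable returning the current
    status string, e.g. a closure around the plugin's show call.
    """
    for _ in range(attempts):
        if get_status() == target:
            return True
    return False
```

With the firewall known to be ACTIVE (rather than PENDING_CREATE/PENDING_UPDATE) before the update request is sent, the test deterministically exercises the authorization check it is actually asserting on.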
[Yahoo-eng-team] [Bug 1622630] Re: Failed to launch instance due to "Unexpected API error"
You're hitting messaging timeouts, which means there is something wrong with rabbitmq or with the configuration the services use to talk to rabbitmq; check your setup. This is not a nova bug, it's a problem in your deployment/configuration.

** Changed in: nova
   Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1622630

Title:
  Failed to launch instance due to "Unexpected API error"

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Installed OpenStack Mitaka on Ubuntu 16.04 (s390x) following http://docs.openstack.org/mitaka/install-guide-ubuntu/index.html, configured neutron and cinder according to my local environment, and created an image. Now attempting to start an instance fails with an error message in Horizon (in German: "der Server kann nicht erstellt werden", i.e. "the server cannot be created").

  Checked syslog and found:

  Sep 12 16:23:14 s42lp12 nova-compute[3644]: 2016-09-12 16:23:14.859 3644 INFO nova.compute.resource_tracker [req-1781920a-3d2c-4a77-bf96-3df4b579fd23 - - - - -] Compute_service record updated for s42lp12:s42lp12.boeblingen.de.ibm.com
  Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions [req-a5c08333-24fe-419c-869d-fdc374a444a3 8f3671f652524ce3862bfcc3840cb363 adf169e55d0d4949b2831809bf0f6900 - - -] Unexpected exception in API method
  Sep 12 16:23:44 s42lp12 rsyslogd-2007: action 'action 10' suspended, next retry is Mon Sep 12 16:25:14 2016 [v8.16.0 try http://www.rsyslog.com/e/2007 ]
  Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions Traceback (most recent call last):
  Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/extensions.py", line 478, in wrapped
  Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions     return f(*args, **kwargs)
  Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in wrapper
  Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
  Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in wrapper
  Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
  Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in wrapper
  Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
  Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/servers.py", line 629, in create
  Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions     **create_kwargs)
  Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/hooks.py", line 154, in inner
  Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions     rv = f(*args, **kwargs)
  Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 1562, in create
  Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions     check_server_group_quota=check_server_group_quota)
  Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 1145, in _create_instance
  Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions     reservation_id, max_count)
  Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 834, in _validate_and_build_base_options
  Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions     requested_networks, max_count)
  Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR
[Yahoo-eng-team] [Bug 1622684] [NEW] Keycode error using noVNC and Horizon console
Public bug reported: When using Newton or Mitaka versions of OpenStack Horizon, I am unable to talk to the VM in the Horizon console window. I am using noVNC and I see the following in the console whenever pressing any key on the keyboard: atkbd serio0: Use 'setkeycodes 00 ' to make it known. [ 41.750245] atkbd serio0: Unknown key released (translated set 2, code 0x0 on isa0060/serio0). [ 41.750591] atkbd serio0: Use 'setkeycodes 00 ' to make it known. [ 41.815590] atkbd serio0: Unknown key pressed (translated set 2, code 0x0 on isa0060/serio0). [ 41.816087] atkbd serio0: Use 'setkeycodes 00 ' to make it known. [ 41.945017] atkbd serio0: Unknown key released (translated set 2, code 0x0 on isa0060/serio0). [ 41.945848] atkbd serio0: Use 'setkeycodes 00 ' to make it known. [ 42.393227] atkbd serio0: Unknown key pressed (translated set 2, code 0x0 on isa0060/serio0). This appears to be related to recent code changes in noVNC. If I revert noVNC to the sha 4e0c36dda708628836dc6f5d68fc40d05c7716d9, everything works. This sha's commit date is August 26, 2016. Phil ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1622684 Title: Keycode error using noVNC and Horizon console Status in OpenStack Dashboard (Horizon): New Bug description: When using Newton or Mitaka versions of OpenStack Horizon, I am unable to talk to the VM in the Horizon console window. I am using noVNC and I see the following in the console whenever pressing any key on the keyboard: atkbd serio0: Use 'setkeycodes 00 ' to make it known. [ 41.750245] atkbd serio0: Unknown key released (translated set 2, code 0x0 on isa0060/serio0). [ 41.750591] atkbd serio0: Use 'setkeycodes 00 ' to make it known. [ 41.815590] atkbd serio0: Unknown key pressed (translated set 2, code 0x0 on isa0060/serio0). 
[ 41.816087] atkbd serio0: Use 'setkeycodes 00 ' to make it known. [ 41.945017] atkbd serio0: Unknown key released (translated set 2, code 0x0 on isa0060/serio0). [ 41.945848] atkbd serio0: Use 'setkeycodes 00 ' to make it known. [ 42.393227] atkbd serio0: Unknown key pressed (translated set 2, code 0x0 on isa0060/serio0). This appears to be related to recent code changes in noVNC. If I revert noVNC to the sha 4e0c36dda708628836dc6f5d68fc40d05c7716d9, everything works. This sha's commit date is August 26, 2016. Phil To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1622684/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1622672] [NEW] Unknown filters aren't validated by the API
Public bug reported: During the integration of the Subnet Oslo-Versioned Object, Artur discovered[1] that there are some cases where the API receives filters which are not defined in the model. It's necessary to modify the current implementation of OVO to support cases like: * Using 'admin_state_up' in Subnet model class. * Using 'network_id' and 'router:external' as filters for Network model class. [1] http://lists.openstack.org/pipermail/openstack-dev/2016-July/100286.html ** Affects: neutron Importance: Wishlist Assignee: Victor Morales (electrocucaracha) Status: In Progress ** Changed in: neutron Assignee: (unassigned) => Victor Morales (electrocucaracha) ** Changed in: neutron Status: New => In Progress ** Changed in: neutron Importance: Undecided => Wishlist -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1622672 Title: Unknown filters aren't validated by the API Status in neutron: In Progress Bug description: During the integration of the Subnet Oslo-Versioned Object, Artur discovered[1] that there are some cases where the API receives filters which are not defined in the model. It's necessary to modify the current implementation of OVO to support cases like: * Using 'admin_state_up' in Subnet model class. * Using 'network_id' and 'router:external' as filters for Network model class. [1] http://lists.openstack.org/pipermail/openstack-dev/2016-July/100286.html To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1622672/+subscriptions
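The check described above can be sketched in a few lines. This is a hypothetical illustration, not neutron's actual OVO code: the field set, the alias table, and the function name are invented for the example; the point is that filter keys must be validated against the model's known fields plus an explicit alias list (covering cases like 'router:external').

```python
# Hypothetical sketch of validating API filter keys against a model.
# Field names below are illustrative, not the real Subnet OVO fields.
SUBNET_FIELDS = {'id', 'name', 'network_id', 'cidr', 'ip_version'}
# Some API filters don't map 1:1 to a model field (e.g. 'router:external'
# on networks); an alias table covers those known exceptions.
FILTER_ALIASES = {'router:external': 'external'}

def unknown_filters(filters, model_fields, aliases=FILTER_ALIASES):
    """Return the filter keys the model doesn't know, so the API can reject them."""
    return sorted(k for k in filters
                  if k not in model_fields and k not in aliases)

# unknown_filters({'admin_state_up': True}, SUBNET_FIELDS)
# -> ['admin_state_up'], i.e. the Subnet case Artur hit.
```

With such a helper, the API layer could return a 400 for any non-empty result instead of silently ignoring (or crashing on) unknown keys.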
[Yahoo-eng-team] [Bug 1456073] Re: Connection to an instance with floating IP breaks during block migration when using DVR
** Tags removed: mitaka-backport-potential newton-rc-potential ** Also affects: nova/mitaka Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1456073 Title: Connection to an instance with floating IP breaks during block migration when using DVR Status in neutron: Fix Released Status in OpenStack Compute (nova): In Progress Status in OpenStack Compute (nova) mitaka series: New Bug description: During migration of an instance, using block migration with a floating IP when the router is DVR, the connection to the instance breaks (e.g. having an SSH connection to the instance). Reconnecting to the instance is successful. Version == RHEL 7.1 python-nova-2015.1.0-3.el7ost.noarch python-neutron-2015.1.0-1.el7ost.noarch How to reproduce == 1. Create a distributed router and attach an internal and an external network to it. # neutron router-create --distributed True router1 # neutron router-interface-add router1 # neutron router-gateway-set 2. Launch an instance and associate it with a floating IP. # nova boot --flavor m1.small --image fedora --nic net-id= vm1 3. SSH into the instance which will be migrated and run a command "while true; do echo "Hello"; sleep 1; done" 4. Migrate the instance using block migration # nova live-migration --block-migrate 5. Verify that the connection to the instance is lost. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1456073/+subscriptions
[Yahoo-eng-team] [Bug 1381961] Re: Keystone API GET 5000/v3 returns wrong endpoint URL in response body
** Also affects: tripleo Importance: Undecided Status: New ** Changed in: tripleo Status: New => Confirmed ** Changed in: keystone Status: Confirmed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1381961 Title: Keystone API GET 5000/v3 returns wrong endpoint URL in response body Status in OpenStack Identity (keystone): Fix Released Status in tripleo: Confirmed Bug description: When I was invoking a GET request to public endpoint of Keystone, I found the admin endpoint URL in response body, I assume it should be the public endpoint URL: GET https://192.168.101.10:5000/v3 { "version": { "status": "stable", "updated": "2013-03-06T00:00:00Z", "media-types": [ { "base": "application/json", "type": "application/vnd.openstack.identity-v3+json" }, { "base": "application/xml", "type": "application/vnd.openstack.identity-v3+xml" } ], "id": "v3.0", "links": [ { "href": "https://172.20.14.10:35357/v3/", "rel": "self" } ] } } === Btw, I can get the right URL for public endpoint in the response body of the versionless API call: GET https://192.168.101.10:5000 { "versions": { "values": [ { "status": "stable", "updated": "2013-03-06T00:00:00Z", "media-types": [ { "base": "application/json", "type": "application/vnd.openstack.identity-v3+json" }, { "base": "application/xml", "type": "application/vnd.openstack.identity-v3+xml" } ], "id": "v3.0", "links": [ { "href": "https://192.168.101.10:5000/v3/", "rel": "self" } ] }, { "status": "stable", "updated": "2014-04-17T00:00:00Z", "media-types": [ { "base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json" }, { "base": "application/xml", "type": "application/vnd.openstack.identity-v2.0+xml" } ], "id": "v2.0", "links": [ { "href": "https://192.168.101.10:5000/v2.0/", "rel": "self" }, { "href": "http://docs.openstack.org/api/openstack-identity-service/2.0/content/", 
"type": "text/html", "rel": "describedby" }, { "href": "http://docs.openstack.org/api/openstack-identity-service/2.0/identity-dev-guide-2.0.pdf;, "type": "application/pdf", "rel": "describedby" } ] } ] } } To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1381961/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1621987] Re: Power state synchronization in compute is too aggressive
Reviewed: https://review.openstack.org/367746 Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=386812e287198cd2d340d273753ef06075f7c05d Submitter: Jenkins Branch: master commit 386812e287198cd2d340d273753ef06075f7c05d Author: Mathieu Gagné Date: Fri Sep 9 00:35:50 2016 -0400 Add sync_power_state_pool_size option The sync_power_state_pool_size option allows to set the number of greenthreads available for use to sync power states. It can be used to reduce the number of concurrent requests made to the hypervisor or system with real instance power states for performance reasons. Closes-bug: #1621987 Change-Id: I9cf900314d71c44ec51eb190c102ddc60e13a767 ** Changed in: nova Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1621987 Title: Power state synchronization in compute is too aggressive Status in OpenStack Compute (nova): Fix Released Bug description: By default, 1000 greenthreads can be used to synchronize the power state of instances running on a compute node. In the Ironic context, this means 1000 simultaneous HTTP requests can be initiated to the Ironic API. Some of those requests can fail with the following error: ERROR nova.compute.manager [-] [instance: XX] Periodic sync_power_state task had an error while processing an instance. 
ERROR nova.compute.manager [instance: XX] Traceback (most recent call last): ERROR nova.compute.manager [instance: XX] File "/opt/nova/local/lib/python2.7/site-packages/nova/compute/manager.py", line 6083, in _sync ERROR nova.compute.manager [instance: XX] query_driver_power_state_and_sync() ERROR nova.compute.manager [instance: XX] File "/opt/nova/local/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 252, in inner ERROR nova.compute.manager [instance: XX] return f(*args, **kwargs) ERROR nova.compute.manager [instance: XX] File "/opt/nova/local/lib/python2.7/site-packages/nova/compute/manager.py", line 6080, in query_driver_power_state_and_sync ERROR nova.compute.manager [instance: XX] self._query_driver_power_state_and_sync(context, db_instance) ERROR nova.compute.manager [instance: XX] File "/opt/nova/local/lib/python2.7/site-packages/nova/compute/manager.py", line 6110, in _query_driver_power_state_and_sync ERROR nova.compute.manager [instance: XX] vm_instance = self.driver.get_info(db_instance) ERROR nova.compute.manager [instance: XX] File "/opt/nova/local/lib/python2.7/site-packages/nova/virt/ironic/driver.py", line 557, in get_info ERROR nova.compute.manager [instance: XX] node = _validate_instance_and_node(self.ironicclient, instance) ERROR nova.compute.manager [instance: XX] File "/opt/nova/local/lib/python2.7/site-packages/nova/virt/ironic/driver.py", line 126, in _validate_instance_and_node ERROR nova.compute.manager [instance: XX] return ironicclient.call("node.get_by_instance_uuid", instance.uuid) ERROR nova.compute.manager [instance: XX] File "/opt/nova/local/lib/python2.7/site-packages/nova/virt/ironic/client_wrapper.py", line 122, in call ERROR nova.compute.manager [instance: XX] return self._multi_getattr(client, method)(*args, **kwargs) ERROR nova.compute.manager [instance: XX] File "/opt/nova/local/lib/python2.7/site-packages/ironicclient/v1/node.py", line 151, in get_by_instance_uuid ERROR nova.compute.manager [instance: XX] 
nodes = self._list(self._path(path), 'nodes') ERROR nova.compute.manager [instance: XX] File "/opt/nova/local/lib/python2.7/site-packages/ironicclient/common/base.py", line 119, in _list ERROR nova.compute.manager [instance: XX] resp, body = self.api.json_request('GET', url) ERROR nova.compute.manager [instance: XX] File "/opt/nova/local/lib/python2.7/site-packages/ironicclient/common/http.py", line 351, in json_request ERROR nova.compute.manager [instance: XX] resp, body_iter = self._http_request(url, method, **kwargs) ERROR nova.compute.manager [instance: XX] File "/opt/nova/local/lib/python2.7/site-packages/ironicclient/common/http.py", line 160, in wrapper ERROR nova.compute.manager [instance: XX] return func(self, url, method, **kwargs) ERROR nova.compute.manager [instance: XX] File "/opt/nova/local/lib/python2.7/site-packages/ironicclient/common/http.py", line 296, in _http_request
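The idea behind the sync_power_state_pool_size fix can be illustrated with a bounded worker pool. This is a sketch, not nova's code: nova uses an eventlet GreenPool, while the example below substitutes a stdlib ThreadPoolExecutor, and the function name is invented. The shared point is capping concurrency so the hypervisor (or Ironic API) never sees 1000 simultaneous queries.

```python
from concurrent.futures import ThreadPoolExecutor

def sync_power_states(instances, query_fn, pool_size=10):
    """Query each instance's power state with at most pool_size in flight.

    Stand-in for nova's periodic _sync_power_states task: pool_size plays
    the role of the sync_power_state_pool_size config option.
    """
    with ThreadPoolExecutor(max_workers=pool_size) as pool:
        # map() preserves input order, like iterating instances serially,
        # but runs at most pool_size queries concurrently.
        return list(pool.map(query_fn, instances))

# sync_power_states(instance_uuids, driver_get_power_state, pool_size=8)
# would issue at most 8 concurrent driver calls regardless of fleet size.
```

Lowering the pool size trades sync latency for a gentler request rate on the backing system, which is exactly the knob the committed option exposes.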
[Yahoo-eng-team] [Bug 1621883] Re: PortNotFoundClient stacktrace in n-cpu when unbinding ports
Reviewed: https://review.openstack.org/368079 Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=6a2691cf6db1d1f4e4f5dd9e758c0d42f235478d Submitter: Jenkins Branch: master commit 6a2691cf6db1d1f4e4f5dd9e758c0d42f235478d Author: Matt Riedemann Date: Fri Sep 9 11:20:59 2016 -0400 neutron: don't trace on port not found when unbinding ports There is a race in the gate when deleting an instance and deallocating the network at the same time that preexisting ports attached to the instance are being deleted. So when nova goes to unbind the port it's already gone and we log an exception trace in the n-cpu logs. We shouldn't actually care if the port isn't found when unbinding it from an instance, so this change handles that case and logs it at debug rather than an exception trace. Change-Id: Ia95c626cefcb1e099e11d3bf5a651ad5d5f9406f Closes-Bug: #1621883 ** Changed in: nova Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1621883 Title: PortNotFoundClient stacktrace in n-cpu when unbinding ports Status in OpenStack Compute (nova): Fix Released Bug description: Saw this in a CI run today: http://logs.openstack.org/07/367307/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/cf54759/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-09-08_12_05_26_139 We shouldn't stacktrace on a 404 port not found when unbinding ports from an instance (that is probably being deleted). 
2016-09-08 12:05:26.139 17305 ERROR nova.network.neutronv2.api [req-aef4186a-7fdb-4e62-a4b1-ec7482db7e5b tempest-AttachInterfacesTestJSON-554004321 tempest-AttachInterfacesTestJSON-554004321] Unable to clear device ID for port '6dff1db9-e6e1-490b-9246-ea479281b3ff' 2016-09-08 12:05:26.139 17305 ERROR nova.network.neutronv2.api Traceback (most recent call last): 2016-09-08 12:05:26.139 17305 ERROR nova.network.neutronv2.api File "/opt/stack/new/nova/nova/network/neutronv2/api.py", line 434, in _unbind_ports 2016-09-08 12:05:26.139 17305 ERROR nova.network.neutronv2.api port_client.update_port(port_id, port_req_body) 2016-09-08 12:05:26.139 17305 ERROR nova.network.neutronv2.api File "/opt/stack/new/nova/nova/network/neutronv2/api.py", line 94, in wrapper 2016-09-08 12:05:26.139 17305 ERROR nova.network.neutronv2.api ret = obj(*args, **kwargs) 2016-09-08 12:05:26.139 17305 ERROR nova.network.neutronv2.api File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 754, in update_port 2016-09-08 12:05:26.139 17305 ERROR nova.network.neutronv2.api return self.put(self.port_path % (port), body=body) 2016-09-08 12:05:26.139 17305 ERROR nova.network.neutronv2.api File "/opt/stack/new/nova/nova/network/neutronv2/api.py", line 94, in wrapper 2016-09-08 12:05:26.139 17305 ERROR nova.network.neutronv2.api ret = obj(*args, **kwargs) 2016-09-08 12:05:26.139 17305 ERROR nova.network.neutronv2.api File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 369, in put 2016-09-08 12:05:26.139 17305 ERROR nova.network.neutronv2.api headers=headers, params=params) 2016-09-08 12:05:26.139 17305 ERROR nova.network.neutronv2.api File "/opt/stack/new/nova/nova/network/neutronv2/api.py", line 94, in wrapper 2016-09-08 12:05:26.139 17305 ERROR nova.network.neutronv2.api ret = obj(*args, **kwargs) 2016-09-08 12:05:26.139 17305 ERROR nova.network.neutronv2.api File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 337, in 
retry_request 2016-09-08 12:05:26.139 17305 ERROR nova.network.neutronv2.api headers=headers, params=params) 2016-09-08 12:05:26.139 17305 ERROR nova.network.neutronv2.api File "/opt/stack/new/nova/nova/network/neutronv2/api.py", line 94, in wrapper 2016-09-08 12:05:26.139 17305 ERROR nova.network.neutronv2.api ret = obj(*args, **kwargs) 2016-09-08 12:05:26.139 17305 ERROR nova.network.neutronv2.api File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 300, in do_request 2016-09-08 12:05:26.139 17305 ERROR nova.network.neutronv2.api self._handle_fault_response(status_code, replybody, resp) 2016-09-08 12:05:26.139 17305 ERROR nova.network.neutronv2.api File "/opt/stack/new/nova/nova/network/neutronv2/api.py", line 94, in wrapper 2016-09-08 12:05:26.139 17305 ERROR nova.network.neutronv2.api ret = obj(*args, **kwargs) 2016-09-08 12:05:26.139 17305 ERROR nova.network.neutronv2.api File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 275, in _handle_fault_response 2016-09-08 12:05:26.139 17305 ERROR nova.network.neutronv2.api exception_handler_v20(status_code, error_body) 2016-09-08 12:05:26.139 17305 ERROR
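The committed fix boils down to a standard pattern: treat "port already gone" as an expected race outcome, log it at debug, and move on. The sketch below is illustrative, not nova's actual code: the exception class is a local stand-in for neutronclient's PortNotFoundClient, and the helper names are invented.

```python
import logging

LOG = logging.getLogger(__name__)

class PortNotFoundClient(Exception):
    """Local stand-in for neutronclient's 404 port-not-found exception."""

def unbind_port(update_port, port_id):
    """Clear a port's binding; a vanished port is not an error.

    Returns True if the port was updated, False if it was already gone.
    """
    try:
        update_port(port_id, {'port': {'device_id': ''}})
        return True
    except PortNotFoundClient:
        # The instance is probably being deleted and neutron already
        # removed the port -- debug log instead of an exception trace.
        LOG.debug('Port %s not found while unbinding; skipping.', port_id)
        return False

def port_gone(port_id, body):
    """Fake update_port that simulates the race: the port is gone."""
    raise PortNotFoundClient(port_id)
```

With this shape, the periodic race in the gate produces one debug line instead of the multi-screen traceback quoted above.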
[Yahoo-eng-team] [Bug 1622654] [NEW] Security Group doesn't work if the specific allowed-address-pairs value is set
Public bug reported: Summary: Security Group doesn't work if the specific allowed-address-pairs value is set to the port associated with it. High level description: OpenStack user is allowed to specify arbitrary mac_address/ip_address pairs that are allowed to pass through a port. For some practical reasons, OpenStack users can specify huge subnets, and CIDRs provided there are not sanitized. If the CIDR provided with 'allowed-address-pairs' for any single port associated with Security Group overlaps with a subnet used by the VM, the VM is always accessible by any port and any protocol, despite the fact that its security group denies all ingress traffic. Step-by-step reproduction process: 1) Create a VM in OpenStack 2) Check that there are no rules allowing icmp (for instance) in the security group associated with the VM 3) perform: neutron port-update [any-port-associated-with-the-secgroup] --allowed-address-pairs type=dict list=true ip_address=[a-very-huge-cidr] if your VM uses a private IPv4 address from networks 192.168.x or 172.16.x, then 128.0.0.0/1 will work as "a-very-huge-cidr", if it uses 10.x network then 0.0.0.0/1 should. 4) ping all the VMs in this secgroup successfully (from router namespace, or from any host which is allowed to access floating IPs if floating IP is also assigned to the VM), as well as access it by any port and protocol on which the VM is listening. Version: All OpenStack releases up to Mitaka. Perceived severity: It's not a blocker as workarounds are pretty obvious, but it's a huge security bug: all the network security provided by Security Groups might be ruined easily, just by updating a single port in neutron. If we restrict the value of allowed-address-pairs in neutron to a single address (/32 or /128), might it break anything? ** Affects: neutron Importance: Undecided Status: Confirmed ** Changed in: neutron Status: New => Confirmed -- You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1622654 Title: Security Group doesn't work if the specific allowed-address-pairs value is set Status in neutron: Confirmed Bug description: Summary: Security Group doesn't work if the specific allowed-address-pairs value is set to the port associated with it. High level description: OpenStack user is allowed to specify arbitrary mac_address/ip_address pairs that are allowed to pass through a port. For some practical reasons, OpenStack users can specify huge subnets, and CIDRs provided there are not sanitized. If the CIDR provided with 'allowed-address-pairs' for any single port associated with Security Group overlaps with a subnet used by the VM, the VM is always accessible by any port and any protocol, despite the fact that its security group denies all ingress traffic. Step-by-step reproduction process: 1) Create a VM in OpenStack 2) Check that there are no rules allowing icmp (for instance) in the security group associated with the VM 3) perform: neutron port-update [any-port-associated-with-the-secgroup] --allowed-address-pairs type=dict list=true ip_address=[a-very-huge-cidr] if your VM uses a private IPv4 address from networks 192.168.x or 172.16.x, then 128.0.0.0/1 will work as "a-very-huge-cidr", if it uses 10.x network then 0.0.0.0/1 should. 4) ping all the VMs in this secgroup successfully (from router namespace, or from any host which is allowed to access floating IPs if floating IP is also assigned to the VM), as well as access it by any port and protocol on which the VM is listening. Version: All OpenStack releases up to Mitaka. Perceived severity: It's not a blocker as workarounds are pretty obvious, but it's a huge security bug: all the network security provided by Security Groups might be ruined easily, just by updating a single port in neutron. If we restrict the value of allowed-address-pairs in neutron to a single address (/32 or /128), might it break anything? 
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1622654/+subscriptions
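The sanity check the reporter implicitly asks for is an overlap test between the allowed-address-pair CIDR and the VM's own subnet. A minimal sketch (function name invented; this is not neutron's validation code) using the stdlib ipaddress module:

```python
import ipaddress

def pair_overlaps_subnet(pair_cidr, subnet_cidr):
    """True if an allowed-address-pair CIDR overlaps the VM's subnet.

    An overlapping pair effectively disables the port's security group,
    so a validator could reject it (or require a /32 or /128 host route,
    as the report suggests).
    """
    pair = ipaddress.ip_network(pair_cidr)
    subnet = ipaddress.ip_network(subnet_cidr)
    return pair.overlaps(subnet)

# The reproduction case from the report: 128.0.0.0/1 covers all of
# 192.168.0.0/16, so a VM on 192.168.1.0/24 is exposed.
# pair_overlaps_subnet('128.0.0.0/1', '192.168.1.0/24') is True.
```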
[Yahoo-eng-team] [Bug 1622644] [NEW] OVS agent ryu/native implementation breaks non-OF1.3 uses
Public bug reported: The ryu-based OVS agent variant forces the bridge OpenFlow version to 1.3 only [1], which breaks a few things: - troubleshooting tools relying on ovs-ofctl, unless they specify "-O OpenFlow13", will break: version negotiation failed (we support version 0x01, peer supports version 0x04) ovs-ofctl: br-tun: failed to connect to socket (Broken pipe) - calling add_flow on an OVSCookieBridge derived from a bridge that is a native.ovs_bridge.OVSAgentBridge will fail with the same error, because add_flow will call ovs-ofctl without specifying "-O OpenFlow13" (this issue is currently hitting networking-bgpvpn: [2]) It seems like a possible fix would be to not restrict the set of OpenFlow versions supported by the bridge to OpenFlow13, but to just *add* OpenFlow13 to the set of supported versions. [1] https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_bridge.py#L78 [2] https://github.com/openstack/networking-bagpipe/blob/master/networking_bagpipe/agent/bagpipe_bgp_agent.py#L512 ** Affects: bgpvpn Importance: Undecided Status: New ** Affects: networking-bagpipe Importance: Undecided Status: New ** Affects: neutron Importance: Undecided Status: New ** Also affects: neutron Importance: Undecided Status: New ** Also affects: networking-bagpipe Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. 
https://bugs.launchpad.net/bugs/1622644 Title: OVS agent ryu/native implementation breaks non-OF1.3 uses Status in networking-bgpvpn: New Status in BaGPipe: New Status in neutron: New Bug description: The ryu-based OVS agent variant forces the bridge OpenFlow version to 1.3 only [1], which breaks a few things: - troubleshooting tools relying on ovs-ofctl, unless they specify "-O OpenFlow13", will break: version negotiation failed (we support version 0x01, peer supports version 0x04) ovs-ofctl: br-tun: failed to connect to socket (Broken pipe) - calling add_flow on an OVSCookieBridge derived from a bridge that is a native.ovs_bridge.OVSAgentBridge will fail with the same error, because add_flow will call ovs-ofctl without specifying "-O OpenFlow13" (this issue is currently hitting networking-bgpvpn: [2]) It seems like a possible fix would be to not restrict the set of OpenFlow versions supported by the bridge to OpenFlow13, but to just *add* OpenFlow13 to the set of supported versions. [1] https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_bridge.py#L78 [2] https://github.com/openstack/networking-bagpipe/blob/master/networking_bagpipe/agent/bagpipe_bgp_agent.py#L512 To manage notifications about this bug go to: https://bugs.launchpad.net/bgpvpn/+bug/1622644/+subscriptions
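The suggested fix ("just *add* OpenFlow13") can be sketched as follows. This is an illustration, not the neutron patch: the function names are invented, and the ovs-vsctl invocation is only built as a list, never executed. The point is preserving the bridge's existing protocol list (so ovs-ofctl's OpenFlow10 default keeps working) while appending OpenFlow13 for the ryu/native agent.

```python
def updated_protocols(current):
    """Add OpenFlow13 to a bridge's protocol list without dropping others."""
    protocols = list(current)
    if 'OpenFlow13' not in protocols:
        protocols.append('OpenFlow13')
    return protocols

def set_protocols_cmd(bridge, protocols):
    """Build (but don't run) the ovs-vsctl command applying the list."""
    return ['ovs-vsctl', 'set', 'bridge', bridge,
            'protocols=' + ','.join(protocols)]

# For br-tun currently speaking only OpenFlow10, the resulting command
# keeps 1.0 and adds 1.3, instead of replacing the set with 1.3 only:
# set_protocols_cmd('br-tun', updated_protocols(['OpenFlow10']))
```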
[Yahoo-eng-team] [Bug 1622630] [NEW] Failed to launch instance due to "Unexpected API error"
Public bug reported: Installed OpenStack Mitaka on Ubuntu 16.04 (s390x) following http://docs.openstack.org/mitaka/install-guide-ubuntu/index.html Configured neutron and cinder according to my local environment, created an image - now attempting to start an instance - that fails with an error message in horizon (in German: "der Server kann nicht erstellt werden", i.e. "the server cannot be created") Checked syslog and found: Sep 12 16:23:14 s42lp12 nova-compute[3644]: 2016-09-12 16:23:14.859 3644 INFO nova.compute.resource_tracker [req-1781920a-3d2c-4a77-bf96-3df4b579fd23 - - - - -] Compute_service record updated for s42lp12:s42lp12.boeblingen.de.ibm.com Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions [req-a5c08333-24fe-419c-869d-fdc374a444a3 8f3671f652524ce3862bfcc3840cb363 adf169e55d0d4949b2831809bf0f6900 - - -] Unexpected exception in API method Sep 12 16:23:44 s42lp12 rsyslogd-2007: action 'action 10' suspended, next retry is Mon Sep 12 16:25:14 2016 [v8.16.0 try http://www.rsyslog.com/e/2007 ] Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions Traceback (most recent call last): Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions File "/usr/lib/python2.7/dist-packages/nova/api/openstack/extensions.py", line 478, in wrapped Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions return f(*args, **kwargs) Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions File "/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in wrapper Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions return func(*args, **kwargs) Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions File 
"/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in wrapper Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions return func(*args, **kwargs) Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions File "/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in wrapper Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions return func(*args, **kwargs) Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions File "/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/servers.py", line 629, in create Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions **create_kwargs) Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions File "/usr/lib/python2.7/dist-packages/nova/hooks.py", line 154, in inner Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions rv = f(*args, **kwargs) Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 1562, in create Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions check_server_group_quota=check_server_group_quota) Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 1145, in _create_instance Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions reservation_id, max_count) Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 834, in _validate_and_build_base_options Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions requested_networks, max_count) Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 448, in _check_requested_networks Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions max_count) Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions File "/usr/lib/python2.7/dist-packages/nova/network/api.py", line 49, in wrapped Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683 4407 ERROR nova.api.openstack.extensions return func(self, context, *args, **kwargs) Sep 12 16:23:44 s42lp12 nova-api[3695]: 2016-09-12 16:23:44.683
[Yahoo-eng-team] [Bug 1622632] [NEW] Circular dependency in neutron.services.trunk.utils
Public bug reported:

There is a circular dependency when importing neutron.services.trunk.utils:

In [1]: from neutron.services.trunk import utils
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
in <module>()
----> 1 from neutron.services.trunk import utils

/opt/stack/neutron/neutron/services/trunk/utils.py in <module>()
     17 from neutron.common import utils
     18 from neutron import manager
---> 19 from neutron.services.trunk.drivers.openvswitch import constants as ovs_const
     20
     21

/opt/stack/neutron/neutron/services/trunk/drivers/__init__.py in <module>()
     15
     16 from neutron.services.trunk.drivers.linuxbridge import driver as lxb_driver
---> 17 from neutron.services.trunk.drivers.openvswitch import driver as ovs_driver
     18
     19

/opt/stack/neutron/neutron/services/trunk/drivers/openvswitch/driver.py in <module>()
     23 from neutron.services.trunk import constants as trunk_consts
     24 from neutron.services.trunk.drivers import base
---> 25 from neutron.services.trunk import utils
     26
     27 LOG = logging.getLogger(__name__)

ImportError: cannot import name utils

** Affects: neutron
   Importance: Undecided
   Status: New

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622632

Title: Circular dependency in neutron.services.trunk.utils
Status in neutron: New
Bug description:
There is a circular dependency when importing neutron.services.trunk.utils:

In [1]: from neutron.services.trunk import utils
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
in <module>()
----> 1 from neutron.services.trunk import utils

/opt/stack/neutron/neutron/services/trunk/utils.py in <module>()
     17 from neutron.common import utils
     18 from neutron import manager
---> 19 from neutron.services.trunk.drivers.openvswitch import constants as ovs_const
     20
     21

/opt/stack/neutron/neutron/services/trunk/drivers/__init__.py in <module>()
     15
     16 from neutron.services.trunk.drivers.linuxbridge import driver as lxb_driver
---> 17 from neutron.services.trunk.drivers.openvswitch import driver as ovs_driver
     18
     19

/opt/stack/neutron/neutron/services/trunk/drivers/openvswitch/driver.py in <module>()
     23 from neutron.services.trunk import constants as trunk_consts
     24 from neutron.services.trunk.drivers import base
---> 25 from neutron.services.trunk import utils
     26
     27 LOG = logging.getLogger(__name__)

ImportError: cannot import name utils

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1622632/+subscriptions
--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
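The cycle above (utils imports the drivers package, whose openvswitch driver imports utils back) can be reproduced in miniature, along with the fix commonly applied in such cases: deferring one of the imports to call time. The module names below are made up for illustration, not neutron's:

```python
import os
import sys
import tempfile

# Two hypothetical modules mirroring the traceback: mod_a imports mod_b at
# module scope, and mod_b imports a name back out of mod_a while mod_a is
# still only partially initialized.
workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "mod_a.py"), "w") as f:
    f.write("import mod_b\nVALUE = 'a'\n")
with open(os.path.join(workdir, "mod_b.py"), "w") as f:
    f.write("from mod_a import VALUE\n")
sys.path.insert(0, workdir)

try:
    import mod_a  # noqa: F401
    cycle_failed = False
except ImportError:
    # Same "cannot import name ..." failure as in the bug report.
    cycle_failed = True
print(cycle_failed)  # True

# One common fix: move the import inside the function that needs it, so it
# runs only after both modules have finished initializing.
with open(os.path.join(workdir, "mod_a2.py"), "w") as f:
    f.write("import mod_b2\nVALUE = 'a'\n")
with open(os.path.join(workdir, "mod_b2.py"), "w") as f:
    f.write("def get_value():\n    from mod_a2 import VALUE\n    return VALUE\n")

import mod_a2  # noqa: F401
import mod_b2
print(mod_b2.get_value())  # 'a'
```

Other options with the same effect are moving the shared names into a third module that neither side imports lazily, or having the driver stop importing utils at module scope.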
[Yahoo-eng-team] [Bug 1592270] Re: can get shared network/subnet, but fail to create port when fixed_ip is specified
Discussed on IRC; this is expected behavior.

** Changed in: neutron Status: New => Invalid

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1592270

Title: can get shared network/subnet, but fail to create port when fixed_ip is specified
Status in neutron: Invalid
Bug description:
A user who has neither the admin role nor ownership of a shared network can see the shared network and its related subnets, but fails to create a port on it when specifying fixed_ips. The policy allows GET but disallows creating a port when fixed_ips is specified:

# user can see shared networks
"get_network": "rule:admin_or_owner or rule:shared or rule:external or rule:context_is_advsvc",
# user can see shared subnets
"get_subnet": "rule:admin_or_owner or rule:shared",
# user won't be able to create a port when specifying fixed_ips
"create_port:fixed_ips": "rule:context_is_advsvc or rule:admin_or_network_owner",

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1592270/+subscriptions
--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1622624] [NEW] Message missing for Not Implemented IPv6 Combos
Public bug reported:

As per [1], there are certain combinations of IPv6 Address Mode and IPv6 Router Advertisement mode which are not currently implemented.

Expected output: a proper message should be raised to inform API consumers about the missing/incorrect combination of IPv6 addressing options.

Actual output: currently no message is raised for them to the API consumers.

[1]: http://docs.openstack.org/mitaka/networking-guide/config-ipv6.html#ipv6-ra-mode-and-ipv6-address-mode-combinations

** Affects: neutron Importance: Low Assignee: Reedip (reedip-banerjee) Status: New
** Changed in: neutron Assignee: (unassigned) => Reedip (reedip-banerjee)
** Changed in: neutron Importance: Undecided => Low

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622624

Title: Message missing for Not Implemented IPv6 Combos
Status in neutron: New
Bug description:
As per [1], there are certain combinations of IPv6 Address Mode and IPv6 Router Advertisement mode which are not currently implemented.

Expected output: a proper message should be raised to inform API consumers about the missing/incorrect combination of IPv6 addressing options.

Actual output: currently no message is raised for them to the API consumers.

[1]: http://docs.openstack.org/mitaka/networking-guide/config-ipv6.html#ipv6-ra-mode-and-ipv6-address-mode-combinations

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1622624/+subscriptions
--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1622616] [NEW] delete_subnet update_port appears racey with ipam
Public bug reported:

Failure spotted in a patch on a delete_subnet call:

2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource [req-746d769c-2388-48e0-8e09-38e4190e5364 tempest-PortsTestJSON-432635984 -] delete failed: Exception deleting fixed_ip from port 862b5dea-dca2-4669-b280-867175f5f351
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource Traceback (most recent call last):
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/api/v2/resource.py", line 79, in resource
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource     result = method(request=request, **args)
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/api/v2/base.py", line 526, in delete
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource     return self._delete(request, id, **kwargs)
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/db/api.py", line 87, in wrapped
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource     setattr(e, '_RETRY_EXCEEDED', True)
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource     self.force_reraise()
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/db/api.py", line 83, in wrapped
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource     return f(*args, **kwargs)
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 151, in wrapper
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource     ectxt.value = e.inner_exc
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource     self.force_reraise()
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 139, in wrapper
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource     return f(*args, **kwargs)
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/db/api.py", line 123, in wrapped
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource     traceback.format_exc())
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource     self.force_reraise()
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/db/api.py", line 118, in wrapped
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource     return f(*dup_args, **dup_kwargs)
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/api/v2/base.py", line 548, in _delete
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource     obj_deleter(request.context, id, **kwargs)
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/common/utils.py", line 618, in inner
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource     return f(self, context, *args, **kwargs)
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py", line 1159, in delete_subnet
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource     "port %s"), port_id)
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource     self.force_reraise()
2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2016-09-10 01:04:43.452 13725 ERROR
[Yahoo-eng-team] [Bug 1616769] Re: When assigning neutron-port to PF sriov_numvfs parameter get "0" value
You need to create these VFs yourself: nova/neutron will use the ports, but will not create them on the host. Please refer to the SR-IOV documentation [1] for information on how to do this.

[1] http://docs.openstack.org/mitaka/networking-guide/config-sriov.html#create-virtual-functions-compute

** Changed in: nova Status: New => Invalid

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1616769

Title: When assigning neutron-port to PF sriov_numvfs parameter get "0" value
Status in OpenStack Compute (nova): Invalid
Bug description:
Description of problem:
When managing SR-IOV PFs as Neutron ports, I can see that the /sys/class/net/enp5s0f1/device/sriov_numvfs parameter gets the value "0". When I delete the PF port so I can switch to an SR-IOV direct port (VF), I can't boot a VM because the sriov_numvfs parameter equals "0".

Version-Release number of selected component (if applicable):
$ rpm -qa |grep neutron
python-neutron-lib-0.3.0-0.20160803002107.405f896.el7ost.noarch
openstack-neutron-9.0.0-0.20160817153328.b9169e3.el7ost.noarch
puppet-neutron-9.1.0-0.20160813031056.7cf5e07.el7ost.noarch
python-neutron-9.0.0-0.20160817153328.b9169e3.el7ost.noarch
openstack-neutron-lbaas-9.0.0-0.20160816191643.4e7301e.el7ost.noarch
python-neutron-fwaas-9.0.0-0.20160817171450.e1ac68f.el7ost.noarch
python-neutron-lbaas-9.0.0-0.20160816191643.4e7301e.el7ost.noarch
openstack-neutron-ml2-9.0.0-0.20160817153328.b9169e3.el7ost.noarch
openstack-neutron-metering-agent-9.0.0-0.20160817153328.b9169e3.el7ost.noarch
openstack-neutron-openvswitch-9.0.0-0.20160817153328.b9169e3.el7ost.noarch
python-neutronclient-5.0.0-0.20160812094704.ec20f7f.el7ost.noarch
openstack-neutron-common-9.0.0-0.20160817153328.b9169e3.el7ost.noarch
openstack-neutron-fwaas-9.0.0-0.20160817171450.e1ac68f.el7ost.noarch

$ rpm -qa |grep nova
python-novaclient-5.0.1-0.20160724130722.6b11a1c.el7ost.noarch
openstack-nova-api-14.0.0-0.20160817225441.04cef3b.el7ost.noarch
puppet-nova-9.1.0-0.20160813014843.b94f0a0.el7ost.noarch
openstack-nova-common-14.0.0-0.20160817225441.04cef3b.el7ost.noarch
openstack-nova-novncproxy-14.0.0-0.20160817225441.04cef3b.el7ost.noarch
openstack-nova-conductor-14.0.0-0.20160817225441.04cef3b.el7ost.noarch
python-nova-14.0.0-0.20160817225441.04cef3b.el7ost.noarch
openstack-nova-scheduler-14.0.0-0.20160817225441.04cef3b.el7ost.noarch
openstack-nova-cert-14.0.0-0.20160817225441.04cef3b.el7ost.noarch
openstack-nova-console-14.0.0-0.20160817225441.04cef3b.el7ost.noarch

How reproducible: Always

Steps to Reproduce:
1. Set up the SR-IOV environment and PF support: https://docs.google.com/document/d/1qQbJlLI1hSlE4uwKpmVd0BoGSDBd8Z0lTzx5itQ6WL0/edit#
2. Boot a VM assigned to the PF (neutron port type direct-physical); it should boot well.
3. Check cat /sys/class/net/enp5s0f1/device/sriov_numvfs (= 0).
4. Delete the VM and check sriov_numvfs again (= 0).
5. I expect that numvfs should return to the default value that was configured.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1616769/+subscriptions
--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
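For reference, VFs are created by writing to the sysfs knob the report mentions. A hedged sketch: the PF name comes from the bug report, the VF count of 4 is an arbitrary example, and a nonzero value generally must be reset to 0 before it can be changed:

```shell
# Hypothetical values: PF name taken from the bug report, VF count is an example.
PF=enp5s0f1
NUMVFS=4
SYSFS="/sys/class/net/$PF/device/sriov_numvfs"

if [ -w "$SYSFS" ]; then
    echo 0 > "$SYSFS"         # an existing nonzero value must be zeroed first
    echo "$NUMVFS" > "$SYSFS" # create the VFs
    cat "$SYSFS"
else
    echo "no writable $SYSFS on this host; would write $NUMVFS"
fi
```

Because the kernel resets sriov_numvfs to 0 on reboot, this write is typically made persistent via a udev rule or boot script rather than run by hand.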
[Yahoo-eng-team] [Bug 1619758] Re: Credential Encryption breaks deployments without Fernet
** Changed in: tripleo Status: Confirmed => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1619758

Title: Credential Encryption breaks deployments without Fernet
Status in OpenStack Identity (keystone): Fix Released
Status in tripleo: Fix Released
Bug description:
A recent change to encrypt credentials broke RDO/TripleO deployments:

2016-09-02 17:16:55.074 17619 ERROR keystone.common.fernet_utils [req-31d60075-7e0e-401e-a93f-58297cd5439b f2caffbaf10d4e3da294c6366fe19a36 fd71b607cfa84539bf0440915ea2d94b - default default] Either [fernet_tokens] key_repository does not exist or Keystone does not have sufficient permission to access it: /etc/keystone/credential-keys/
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi [req-31d60075-7e0e-401e-a93f-58297cd5439b f2caffbaf10d4e3da294c6366fe19a36 fd71b607cfa84539bf0440915ea2d94b - default default] MultiFernet requires at least one Fernet instance
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi Traceback (most recent call last):
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 225, in __call__
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi     result = method(req, **params)
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/common/controller.py", line 164, in inner
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi     return f(self, request, *args, **kwargs)
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/credential/controllers.py", line 69, in create_credential
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi     ref = self.credential_api.create_credential(ref['id'], ref)
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/common/manager.py", line 124, in wrapped
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi     __ret_val = __f(*args, **kwargs)
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/credential/core.py", line 106, in create_credential
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi     credential_copy = self._encrypt_credential(credential)
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/credential/core.py", line 72, in _encrypt_credential
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi     json.dumps(credential['blob'])
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/credential/providers/fernet/core.py", line 68, in encrypt
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi     crypto, keys = get_multi_fernet_keys()
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/credential/providers/fernet/core.py", line 49, in get_multi_fernet_keys
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi     crypto = fernet.MultiFernet(fernet_keys)
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File "/usr/lib64/python2.7/site-packages/cryptography/fernet.py", line 128, in __init__
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi     "MultiFernet requires at least one Fernet instance"
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi ValueError: MultiFernet requires at least one Fernet instance
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1619758/+subscriptions
--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
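The root cause in the traceback is an empty (or missing) key repository being handed to cryptography's MultiFernet, which rejects an empty key list with the opaque ValueError shown. A minimal sketch, using the standard library only and not keystone's actual code, of the early guard that turns this into an actionable error (creating the repository is what keystone-manage credential_setup is for):

```python
import os
import tempfile

def load_fernet_keys(key_repository):
    """Return the raw key strings in a repository directory, oldest first."""
    if not os.path.isdir(key_repository):
        raise RuntimeError(
            "key repository %s does not exist; create and populate it "
            "(e.g. keystone-manage credential_setup)" % key_repository)
    keys = []
    for name in sorted(os.listdir(key_repository)):
        with open(os.path.join(key_repository, name)) as f:
            keys.append(f.read())
    if not keys:
        # Without this check, MultiFernet(...) would raise the opaque
        # "MultiFernet requires at least one Fernet instance".
        raise RuntimeError("key repository %s is empty" % key_repository)
    return keys

# An empty repository is rejected with a clear message instead of the
# ValueError from deep inside cryptography.
repo = tempfile.mkdtemp()
try:
    load_fernet_keys(repo)
    rejected = False
except RuntimeError:
    rejected = True
print(rejected)  # True

# Once a key file exists, loading succeeds.
with open(os.path.join(repo, "0"), "w") as f:
    f.write("key-0")
keys = load_fernet_keys(repo)
print(keys)  # ['key-0']
```

In a real deployment the returned strings would be wrapped in cryptography's Fernet objects before building the MultiFernet.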
[Yahoo-eng-team] [Bug 1571657] Re: it's possible to create IPv6 subnet without address & RA mode
This was intentional.

** Changed in: neutron Status: Opinion => Invalid
** Changed in: python-neutronclient Status: In Progress => Invalid

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1571657

Title: it's possible to create IPv6 subnet without address & RA mode
Status in neutron: Invalid
Status in python-neutronclient: Invalid
Bug description:
We can create an IPv6 subnet without the address mode & ra_mode parameters. When we create this kind of subnet, attach it to a network that also has IPv4, and boot a VM, the VM does not get any IP address. There is no point in producing an IPv6 subnet without those parameters.

[root@puma15 ~(keystone_admin)]# rpm -qa |grep neut
openstack-neutron-8.0.0.0b4-0.20160304174813.0ae20a3.el7.centos.noarch
openstack-neutron-ml2-8.0.0.0b4-0.20160304174813.0ae20a3.el7.centos.noarch
python-neutron-8.0.0.0b4-0.20160304174813.0ae20a3.el7.centos.noarch
openstack-neutron-openvswitch-8.0.0.0b4-0.20160304174813.0ae20a3.el7.centos.noarch
openstack-neutron-common-8.0.0.0b4-0.20160304174813.0ae20a3.el7.centos.noarch
python-neutronclient-4.1.2-0.20160304195803.5d28651.el7.centos.noarch
python-neutron-lib-0.0.3-0.20160227020344.999828a.el7.centos.noarch
openstack-neutron-metering-agent-8.0.0.0b4-0.20160304174813.0ae20a3.el7.centos.noarch

[root@puma15 ~(keystone_admin)]# neutron net-list
+--------------------------------------+----------------------------------+-----------------------------------------------------+
| id                                   | name                             | subnets                                             |
+--------------------------------------+----------------------------------+-----------------------------------------------------+
| e1af28fe-6725-471f-9dcb-21139dd815af | tempest-network-smoke--922626247 | dd3f38b2-ba3d-49cf-b5c5-9fd05af1b207 2003::/64      |
|                                      |                                  | bdd6fe30-d278-437f-97a1-10a93862a338 10.100.0.0/28  |
| ae26ce01-d0f4-46ac-ac74-f652477b0a4c | external_network                 | 1771fd81-9c6c-4f5b-a7e9-246221e28577 10.35.166.0/24 |
+--------------------------------------+----------------------------------+-----------------------------------------------------+

[root@puma15 ~(keystone_admin)]# neutron subnet-create e1af28fe-6725-471f-9dcb-21139dd815af 2001:db1:0::2/64 --name dhcpv6_slaac_subnete --ip-version 6
Created a new subnet:
+-------------------+--------------------------------------------------+
| Field             | Value                                            |
+-------------------+--------------------------------------------------+
| allocation_pools  | {"start": "2001:db1::2", "end": "2001:db1:::::"} |
| cidr              | 2001:db1::/64                                    |
| dns_nameservers   |                                                  |
| enable_dhcp       | True                                             |
| gateway_ip        | 2001:db1::1                                      |
| host_routes       |                                                  |
| id                | 6d30c5c2-c7a7-4439-aa7b-24c10bd26ec8             |
| ip_version        | 6                                                |
| ipv6_address_mode |                                                  |
| ipv6_ra_mode      |                                                  |
| name              | dhcpv6_slaac_subnete                             |
| network_id        | e1af28fe-6725-471f-9dcb-21139dd815af             |
| subnetpool_id     |                                                  |
| tenant_id         | 9a0058cf974c47ecb4f778d8e199f7ae                 |
+-------------------+--------------------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1571657/+subscriptions
--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1571657] Re: it's possible to create IPv6 subnet without address & RA mode
** Also affects: python-neutronclient
   Importance: Undecided
   Status: New
** Changed in: python-neutronclient Assignee: (unassigned) => Reedip (reedip-banerjee)

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1571657

Title: it's possible to create IPv6 subnet without address & RA mode
Status in neutron: Opinion
Status in python-neutronclient: New
Bug description:
We can create an IPv6 subnet without the address mode & ra_mode parameters. When we create this kind of subnet, attach it to a network that also has IPv4, and boot a VM, the VM does not get any IP address. There is no point in producing an IPv6 subnet without those parameters.

[root@puma15 ~(keystone_admin)]# rpm -qa |grep neut
openstack-neutron-8.0.0.0b4-0.20160304174813.0ae20a3.el7.centos.noarch
openstack-neutron-ml2-8.0.0.0b4-0.20160304174813.0ae20a3.el7.centos.noarch
python-neutron-8.0.0.0b4-0.20160304174813.0ae20a3.el7.centos.noarch
openstack-neutron-openvswitch-8.0.0.0b4-0.20160304174813.0ae20a3.el7.centos.noarch
openstack-neutron-common-8.0.0.0b4-0.20160304174813.0ae20a3.el7.centos.noarch
python-neutronclient-4.1.2-0.20160304195803.5d28651.el7.centos.noarch
python-neutron-lib-0.0.3-0.20160227020344.999828a.el7.centos.noarch
openstack-neutron-metering-agent-8.0.0.0b4-0.20160304174813.0ae20a3.el7.centos.noarch

[root@puma15 ~(keystone_admin)]# neutron net-list
+--------------------------------------+----------------------------------+-----------------------------------------------------+
| id                                   | name                             | subnets                                             |
+--------------------------------------+----------------------------------+-----------------------------------------------------+
| e1af28fe-6725-471f-9dcb-21139dd815af | tempest-network-smoke--922626247 | dd3f38b2-ba3d-49cf-b5c5-9fd05af1b207 2003::/64      |
|                                      |                                  | bdd6fe30-d278-437f-97a1-10a93862a338 10.100.0.0/28  |
| ae26ce01-d0f4-46ac-ac74-f652477b0a4c | external_network                 | 1771fd81-9c6c-4f5b-a7e9-246221e28577 10.35.166.0/24 |
+--------------------------------------+----------------------------------+-----------------------------------------------------+

[root@puma15 ~(keystone_admin)]# neutron subnet-create e1af28fe-6725-471f-9dcb-21139dd815af 2001:db1:0::2/64 --name dhcpv6_slaac_subnete --ip-version 6
Created a new subnet:
+-------------------+--------------------------------------------------+
| Field             | Value                                            |
+-------------------+--------------------------------------------------+
| allocation_pools  | {"start": "2001:db1::2", "end": "2001:db1:::::"} |
| cidr              | 2001:db1::/64                                    |
| dns_nameservers   |                                                  |
| enable_dhcp       | True                                             |
| gateway_ip        | 2001:db1::1                                      |
| host_routes       |                                                  |
| id                | 6d30c5c2-c7a7-4439-aa7b-24c10bd26ec8             |
| ip_version        | 6                                                |
| ipv6_address_mode |                                                  |
| ipv6_ra_mode      |                                                  |
| name              | dhcpv6_slaac_subnete                             |
| network_id        | e1af28fe-6725-471f-9dcb-21139dd815af             |
| subnetpool_id     |                                                  |
| tenant_id         | 9a0058cf974c47ecb4f778d8e199f7ae                 |
+-------------------+--------------------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1571657/+subscriptions
--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1622578] [NEW] cloud init metadata injection failed while booting VM
Public bug reported:

Hello, when we start a VM the cloud-init process fails. Inside the VM:

url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [62/120s]: bad status code [500]

The VM is reachable and you can curl http://169.254.169.254 and get a valid result, but when you curl http://169.254.169.254/latest you get: "Remote metadata server experienced an internal server error." In the nova-api log I see: "Unauthorized: The request you have made requires authentication. (HTTP 401)"

Environment:
OS: Ubuntu 16.04.1 LTS

Packages on nova:
ii nova-api 2:13.1.0-0ubuntu1 all OpenStack Compute - API frontend
ii nova-common 2:13.1.0-0ubuntu1 all OpenStack Compute - common files
ii nova-conductor 2:13.1.0-0ubuntu1 all OpenStack Compute - conductor service
ii nova-consoleauth 2:13.1.0-0ubuntu1 all OpenStack Compute - Console Authenticator
ii nova-novncproxy 2:13.1.0-0ubuntu1 all OpenStack Compute - NoVNC proxy
ii nova-scheduler 2:13.1.0-0ubuntu1 all OpenStack Compute - virtual machine scheduler
ii nova-spiceproxy 2:13.1.0-0ubuntu1 all OpenStack Compute - spice html5 proxy
ii python-nova 2:13.1.0-0ubuntu1 all OpenStack Compute Python libraries
ii python-novaclient 2:3.3.1-2 all client library for OpenStack Compute API - Python 2.7
ii python-neutronclient 1:4.1.1-2 all client API library for Neutron - Python 2.7

Packages on neutron:
ii neutron-common 2:8.1.2-0ubuntu1 all Neutron is a virtual network service for Openstack - common
ii neutron-dhcp-agent 2:8.1.2-0ubuntu1 all Neutron is a virtual network service for Openstack - DHCP agent
ii neutron-l3-agent 2:8.1.2-0ubuntu1 all Neutron is a virtual network service for Openstack - l3 agent
ii neutron-metadata-agent 2:8.1.2-0ubuntu1 all Neutron is a virtual network service for Openstack - metadata agent
ii neutron-openvswitch-agent 2:8.1.2-0ubuntu1 all Neutron is a virtual network service for Openstack - Open vSwitch plugin agent
ii neutron-plugin-ml2 2:8.1.2-0ubuntu1 all Neutron is a virtual network service for Openstack - ML2 plugin
ii neutron-plugin-openvswitch-agent 2:8.1.2-0ubuntu1 all Transitional package for neutron-openvswitch-agent
ii neutron-server 2:8.1.2-0ubuntu1 all Neutron is a virtual network service for Openstack - server
ii python-neutron 2:8.1.2-0ubuntu1 all Neutron is a virtual network service for Openstack - Python library
ii python-neutron-fwaas 1:8.0.0-0ubuntu1 all Firewall-as-a-Service driver for OpenStack Neutron
ii python-neutron-lib 0.0.2-2 all Neutron shared routines and utilities - Python 2.7
ii python-neutronclient 1:4.1.1-2 all client API library for Neutron - Python 2.7

Hypervisor is KVM:
ii nova-compute-kvm 2:13.1.0-0ubuntu1 all OpenStack Compute - compute node (KVM)
ii libvirt-bin 1.3.1-1ubuntu10.1 amd64 programs for the libvirt library
ii libvirt0:amd64 1.3.1-1ubuntu10.1 amd64 library for interfacing with different virtualization systems
ii nova-compute-libvirt 2:13.1.0-0ubuntu1 all OpenStack Compute - compute node libvirt support
ii python-libvirt 1.3.1-1ubuntu1 amd64 libvirt Python bindings

Storage is CEPH:
root@openstack11:~# ceph version
ceph version 10.2.2

Network is Neutron with openvswitch.

I tried a lot to get around this issue; I hope I didn't overlook something. Maybe someone can help or fix this bug. Thanks in advance.

** Affects: nova Importance: Undecided Status: New

** Attachment added: "logs.zip"
   https://bugs.launchpad.net/bugs/1622578/+attachment/4739277/+files/logs.zip

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1622578

Title: cloud init metadata injection failed
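For a setup like the one above, an HTTP 401 from nova-api on metadata requests is commonly a shared-secret mismatch between the neutron metadata agent (which signs the proxied requests) and nova (which verifies them). A sketch of the two settings that must agree, worth checking first; METADATA_SECRET and the controller hostname are placeholders:

```ini
# /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET

# /etc/nova/nova.conf
[neutron]
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
```

Both services need a restart after changing these values.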
[Yahoo-eng-team] [Bug 1622574] [NEW] Metering-agent failed with TypeError to add/remove rule on router without gateway
Public bug reported: Assume a tenant has routers with no gateway set. When we try to add or remove a meter label rule for that tenant, the metering-agent fails with a TypeError, and meter rules on the tenant's other routers are not updated because of that error. Part of this bug was reported and fixed in the following bug report: https://bugs.launchpad.net/neutron/+bug/1527274

How to reproduce:
1. neutron router-create test
2. neutron net-create test
3. neutron subnet-create test 192.168.1.0/24
4. neutron meter-label-create test
5. neutron meter-label-create

After step 5, the following error trace is output.

trace in metering-agent
===
2016-09-12 11:09:12.629 4205 ERROR oslo_messaging.rpc.server [req-5265cc49-4c97-4996-af45-2cacba64de84 da7194b4e98b4c8badc5912bbcd7aea4 9384ef06bc9d4af08a384692c92761ce - - -] Exception during message handling
2016-09-12 11:09:12.629 4205 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2016-09-12 11:09:12.629 4205 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 133, in _process_incoming
2016-09-12 11:09:12.629 4205 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
2016-09-12 11:09:12.629 4205 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 150, in dispatch
2016-09-12 11:09:12.629 4205 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
2016-09-12 11:09:12.629 4205 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 121, in _do_dispatch
2016-09-12 11:09:12.629 4205 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
2016-09-12 11:09:12.629 4205 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py", line 154, in wrapper
2016-09-12 11:09:12.629 4205 ERROR oslo_messaging.rpc.server     return f(*args, **kwargs)
2016-09-12 11:09:12.629 4205 ERROR oslo_messaging.rpc.server   File "/opt/stack/neutron/neutron/services/metering/agents/metering_agent.py", line 212, in add_metering_label_rule
2016-09-12 11:09:12.629 4205 ERROR oslo_messaging.rpc.server     'add_metering_label_rule')
2016-09-12 11:09:12.629 4205 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 271, in inner
2016-09-12 11:09:12.629 4205 ERROR oslo_messaging.rpc.server     return f(*args, **kwargs)
2016-09-12 11:09:12.629 4205 ERROR oslo_messaging.rpc.server   File "/opt/stack/neutron/neutron/services/metering/agents/metering_agent.py", line 166, in _invoke_driver
2016-09-12 11:09:12.629 4205 ERROR oslo_messaging.rpc.server     return getattr(self.metering_driver, func_name)(context, meterings)
2016-09-12 11:09:12.629 4205 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_log/helpers.py", line 48, in wrapper
2016-09-12 11:09:12.629 4205 ERROR oslo_messaging.rpc.server     return method(*args, **kwargs)
2016-09-12 11:09:12.629 4205 ERROR oslo_messaging.rpc.server   File "/opt/stack/neutron/neutron/services/metering/drivers/iptables/iptables_driver.py", line 264, in add_metering_label_rule
2016-09-12 11:09:12.629 4205 ERROR oslo_messaging.rpc.server     self._add_metering_label_rule(router)
2016-09-12 11:09:12.629 4205 ERROR oslo_messaging.rpc.server   File "/opt/stack/neutron/neutron/services/metering/drivers/iptables/iptables_driver.py", line 277, in _add_metering_label_rule
2016-09-12 11:09:12.629 4205 ERROR oslo_messaging.rpc.server     self._process_metering_rule_action(router, 'create')
2016-09-12 11:09:12.629 4205 ERROR oslo_messaging.rpc.server   File "/opt/stack/neutron/neutron/services/metering/drivers/iptables/iptables_driver.py", line 286, in _process_metering_rule_action
2016-09-12 11:09:12.629 4205 ERROR oslo_messaging.rpc.server     ext_dev = self.get_external_device_name(rm.router['gw_port_id'])
2016-09-12 11:09:12.629 4205 ERROR oslo_messaging.rpc.server   File "/opt/stack/neutron/neutron/services/metering/drivers/iptables/iptables_driver.py", line 133, in get_external_device_name
2016-09-12 11:09:12.629 4205 ERROR oslo_messaging.rpc.server     return (EXTERNAL_DEV_PREFIX + port_id)[:self.driver.DEV_NAME_LEN]
2016-09-12 11:09:12.629 4205 ERROR oslo_messaging.rpc.server TypeError: cannot concatenate 'str' and 'NoneType' objects

** Affects: neutron
   Importance: Undecided
   Assignee: Kengo Hobo (hobo-kengo)
   Status: New

** Tags: metering

** Changed in: neutron
   Assignee: (unassigned) => Kengo Hobo (hobo-kengo)

** Tags added: metering

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622574

Title: Metering-agent failed with
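The failing line concatenates `EXTERNAL_DEV_PREFIX` with a `gw_port_id` that is `None` for gateway-less routers. A minimal sketch of the kind of guard that avoids the TypeError (hypothetical helper names and return values, not the actual neutron fix):

```python
EXTERNAL_DEV_PREFIX = "qg-"
DEV_NAME_LEN = 14  # Linux interface name length limit used by the driver


def get_external_device_name(port_id):
    """Build the external device name, tolerating routers without a gateway."""
    if port_id is None:
        # Router has no gateway port: nothing to meter on the external side.
        return None
    return (EXTERNAL_DEV_PREFIX + port_id)[:DEV_NAME_LEN]


def process_metering_rule_action(router):
    """Return iptables '-o <dev>' fragments, skipping gateway-less routers."""
    ext_dev = get_external_device_name(router.get("gw_port_id"))
    if ext_dev is None:
        return []  # skip rule processing instead of raising TypeError
    return ["-o %s" % ext_dev]
```

With a guard like this, rule processing on the tenant's remaining routers continues even when one router has no gateway set.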
[Yahoo-eng-team] [Bug 1622503] Re: nova notifier called 3 times for each port
Reviewed: https://review.openstack.org/368631
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=dfbc809169da9af7dfc2421244d188caadcada05
Submitter: Jenkins
Branch: master

commit dfbc809169da9af7dfc2421244d188caadcada05
Author: Kevin Benton
Date: Fri Sep 9 05:12:03 2016 -0700

    Use singleton for Nova notifier

    By adding a singleton getter method and using it in the places that
    use nova notifiers, we ensure that only one object exists so all
    callers share the same batching queues and the sqlalchemy events
    will only be fired once.

    Change-Id: I89752d9c69feb578f0294339aae1a5cf51ec124b
    Closes-Bug: #1622503

** Changed in: neutron
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622503

Title: nova notifier called 3 times for each port

Status in neutron: Fix Released

Bug description:
The nova notifier is being called 3 different times for each port event due to multiple instantiations of db_base_plugin_v2 each constructing a new notifier and subscribing it to sqlalchemy events.
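The singleton getter described in the commit message can be sketched as follows (illustrative stand-in names, not the actual neutron code):

```python
class NovaNotifier:
    """Stands in for neutron.notifiers.nova.Notifier in this sketch."""

    def __init__(self):
        self.batched_events = []  # shared batching queue


_NOTIFIER = None


def get_notifier():
    """Module-level getter: every caller receives the same instance, so
    the sqlalchemy event subscription happens only once and all callers
    share one batching queue."""
    global _NOTIFIER
    if _NOTIFIER is None:
        _NOTIFIER = NovaNotifier()
    return _NOTIFIER
```

Because every plugin instantiation calls the getter instead of the constructor, a port event is recorded once rather than once per plugin instance.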
This can be seen in the logs on each event:

2016-09-10 01:25:52.375 14798 DEBUG neutron.notifiers.nova [req-35cf2ca3-edaa-4e88-9551-c106ab84a031 neutron -] Ignoring state change previous_port_status: DOWN current_port_status: DOWN port_id 92aa19ff-c9c9-4771-ad68-ebe1910ba3dc record_port_status_changed /opt/stack/new/neutron/neutron/notifiers/nova.py:223
2016-09-10 01:25:52.376 14798 DEBUG neutron.notifiers.nova [req-35cf2ca3-edaa-4e88-9551-c106ab84a031 neutron -] Ignoring state change previous_port_status: DOWN current_port_status: DOWN port_id 92aa19ff-c9c9-4771-ad68-ebe1910ba3dc record_port_status_changed /opt/stack/new/neutron/neutron/notifiers/nova.py:223
2016-09-10 01:25:52.376 14798 DEBUG neutron.notifiers.nova [req-35cf2ca3-edaa-4e88-9551-c106ab84a031 neutron -] Ignoring state change previous_port_status: DOWN current_port_status: DOWN port_id 92aa19ff-c9c9-4771-ad68-ebe1910ba3dc record_port_status_changed /opt/stack/new/neutron/neutron/notifiers/nova.py:223

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1622503/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1622566] [NEW] logs from linuxbridge agent have "with host None"
Public bug reported: The linux bridge agent does not pass the host into get_devices_details_list, which results in misleading debug logs on the server side that imply there may be missing data:

2016-09-10 01:54:42.307 14800 DEBUG neutron.plugins.ml2.rpc [req-96973c7b-8753-40be-b52a-0ce4b250ffb9 - -] Device tap7495d684-71 details requested by agent lb6679bf6e7639 with host None get_device_details /opt/stack/new/neutron/neutron/plugins/ml2/rpc.py:71

** Affects: neutron
   Importance: Undecided
   Assignee: Kevin Benton (kevinbenton)
   Status: In Progress

** Changed in: neutron
   Assignee: (unassigned) => Kevin Benton (kevinbenton)

** Changed in: neutron
   Milestone: None => newton-rc1

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622566

Title: logs from linuxbridge agent have "with host None"

Status in neutron: In Progress

Bug description:
The linux bridge agent does not pass the host into get_devices_details_list, which results in misleading debug logs on the server side that imply there may be missing data:

2016-09-10 01:54:42.307 14800 DEBUG neutron.plugins.ml2.rpc [req-96973c7b-8753-40be-b52a-0ce4b250ffb9 - -] Device tap7495d684-71 details requested by agent lb6679bf6e7639 with host None get_device_details /opt/stack/new/neutron/neutron/plugins/ml2/rpc.py:71

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1622566/+subscriptions
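The fix amounts to including the agent's host in the device-details request so the server-side debug line carries real data. A hypothetical sketch of the payload difference (the function name and fields here are illustrative, not the actual RPC API):

```python
def build_device_details_request(devices, agent_id, host=None):
    """Build the RPC payload; when host is omitted the server logs
    'with host None' even though nothing is actually missing."""
    return {"devices": devices, "agent_id": agent_id, "host": host}


# Before the fix: the linux bridge agent leaves host out.
before = build_device_details_request(["tap7495d684-71"], "lb6679bf6e7639")

# After the fix: host is passed explicitly.
after = build_device_details_request(["tap7495d684-71"], "lb6679bf6e7639",
                                     host="compute-1")
```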
[Yahoo-eng-team] [Bug 1490236] Re: Swapping volume can't been swap again
Reviewed: https://review.openstack.org/257135
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=be553fb15591c6fc212ef3a07c1dd1cbc43d6866
Submitter: Jenkins
Branch: master

commit be553fb15591c6fc212ef3a07c1dd1cbc43d6866
Author: Takashi NATSUME
Date: Thu Jun 9 13:01:51 2016 +0900

    Set 'serial' to new volume ID in swap volumes

    In the swap_volume method of nova/virt/libvirt/driver.py, the BDM was
    previously looked up using the instance's UUID and the 'serial' of
    the new connection_info as the volume ID, and the driver BDM was
    updated from that BDM ('serial' carries the volume ID information).
    But in the _init_volume_connection method of the ComputeManager
    class, 'serial' is passed from the old connection_info to the new
    connection_info. That works fine when cinder initiates the volume
    swap, because the ID of the attached volume doesn't change after the
    swap. But when nova initiates the swap, the ID of the attached
    volume does change, so after the swap volume function had run once,
    the BDM was looked up with the wrong old volume ID (serial) when the
    function ran a second time. So if 'serial' of the new
    connection_info is None, it is set to the new volume ID. And if
    cinder's 'migrate_volume_completion' API returns the old volume ID
    (the case where cinder initiates the swap), 'serial' of the new
    connection_info is set to the old volume ID. If it returns the new
    volume ID (the case where nova initiated the swap), 'serial' is
    left as it is (the new volume ID).

    Change-Id: I86b8fbb09b0f1ed4c667683de3827cd9b63bca7f
    Closes-Bug: #1490236

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1490236

Title: Swapping volume can't been swap again

Status in OpenStack Compute (nova): Fix Released

Bug description:
Suppose we have two volumes, one attached to an instance and the other still available. Due to https://bugs.launchpad.net/nova/+bug/1489744, those volumes would stay in the wrong status; with that issue fixed, the swap succeeds and the volumes end up in available and in-use status. But when I then try to swap the in-use volume with the other available volume, nova-compute throws the following exception:

2015-08-30 04:55:13.772 ERROR oslo_messaging.rpc.dispatcher [req-4d999362-7a13-4b43-8c6a-0d85f3b9aa5b admin admin] Exception during message handling: No volume Block Device Mapping with id cafc833a-8645-47db-b464-999142afa7be.
Traceback (most recent call last):
  File "/opt/stack/nova/nova/conductor/manager.py", line 443, in _object_dispatch
    return getattr(target, method)(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 169, in wrapper
    result = fn(cls, context, *args, **kwargs)
  File "/opt/stack/nova/nova/objects/block_device.py", line 204, in get_by_volume_id
    raise exception.VolumeBDMNotFound(volume_id=volume_id)
VolumeBDMNotFound: No volume Block Device Mapping with id cafc833a-8645-47db-b464-999142afa7be.
2015-08-30 04:55:13.772 TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):
2015-08-30 04:55:13.772 TRACE oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
2015-08-30 04:55:13.772 TRACE oslo_messaging.rpc.dispatcher     executor_callback))
2015-08-30 04:55:13.772 TRACE oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
2015-08-30 04:55:13.772 TRACE oslo_messaging.rpc.dispatcher     executor_callback)
2015-08-30 04:55:13.772 TRACE oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 129, in _do_dispatch
2015-08-30 04:55:13.772 TRACE oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
2015-08-30 04:55:13.772 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/nova/nova/exception.py", line 89, in wrapped
2015-08-30 04:55:13.772 TRACE oslo_messaging.rpc.dispatcher     payload)
2015-08-30 04:55:13.772 TRACE oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 119, in __exit__
2015-08-30 04:55:13.772 TRACE oslo_messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2015-08-30 04:55:13.772 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/nova/nova/exception.py", line 72, in wrapped
2015-08-30 04:55:13.772 TRACE oslo_messaging.rpc.dispatcher     return f(self, context, *args, **kw)
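The 'serial' bookkeeping described in the commit message above can be condensed into a hypothetical sketch (the function and argument names are illustrative, not nova's actual code):

```python
def resolve_serial(new_conn_info, new_volume_id, old_volume_id,
                   completion_volume_id):
    """Pick the 'serial' (volume ID) for the new connection_info.

    completion_volume_id is what cinder's migrate_volume_completion
    returned: the old ID for cinder-initiated swaps, the new ID for
    nova-initiated ones.
    """
    if new_conn_info.get("serial") is None:
        new_conn_info["serial"] = new_volume_id
    if completion_volume_id == old_volume_id:
        # cinder-initiated swap: the attached volume keeps the old ID
        new_conn_info["serial"] = old_volume_id
    return new_conn_info
```

With this logic, a second nova-initiated swap looks up the BDM by the new volume ID instead of the stale old one, avoiding the VolumeBDMNotFound above.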
[Yahoo-eng-team] [Bug 1620226] Re: Wrong cinder quota value been accepted
Reviewed: https://review.openstack.org/365512
Committed: https://git.openstack.org/cgit/openstack/heat/commit/?id=c99165d2666e2296f6bd4da8a64f214756d13d3e
Submitter: Jenkins
Branch: master

commit c99165d2666e2296f6bd4da8a64f214756d13d3e
Author: ricolin
Date: Mon Sep 5 16:35:01 2016 +0800

    Pre-validate cinder quotas with the real fact

    This patch aims to add validation to OS::Cinder::Quotas so we only
    accept quotas that are consistent with actual usage. For example, if
    we have 100 volumes and we set the `volumes` quota to 1, that quota
    is pointless (even if it can prevent further volume creation). With
    manageability in mind, this patch adds validation before actually
    setting the quota in cinder.

    Closes-Bug: #1620226
    Change-Id: Ib717c78cd4a220e00ff28fb24d5a8375aa19eb1b

** Changed in: heat
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1620226

Title: Wrong cinder quota value been accepted

Status in heat: Fix Released
Status in OpenStack Dashboard (Horizon): New

Bug description:
While testing the cinder quota resource in heat, we found that we accept wrong quota values and pass them to cinder. For example, if a project already has 2 volumes, we can still update the quota to allow only 1 volume. That means heat's quotas do not consider actual usage (and a quota that limits actual usage should be the only good quota provided). Since horizon already prechecks this kind of mistake, we should add a check to prevent it. Further testing with horizon also showed that horizon should not use only the total volume size to validate the gigabytes quota: gigabytes refers to the total size of volumes and snapshots. That is how cinder reacts as well (if you set gigabytes equal to the total volume size, it raises an error when creating snapshots).
To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1620226/+subscriptions
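The pre-validation the patch describes boils down to comparing the requested limit with current usage. A minimal sketch under that assumption (hypothetical names, not heat's actual resource code):

```python
def validate_quota(requested_limit, in_use):
    """Reject a quota below current usage; -1 conventionally means unlimited."""
    if requested_limit != -1 and requested_limit < in_use:
        raise ValueError("quota %d is below current usage %d"
                         % (requested_limit, in_use))
    return requested_limit
```

For the gigabytes check the bug calls out, `in_use` would have to include snapshot sizes as well as volume sizes, since cinder counts both against that quota.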
[Yahoo-eng-team] [Bug 1622559] [NEW] linux bridge jobs filled with binding warning noise
Public bug reported: The linux bridge job server logs are filled with these 'warnings', which are actually completely normal emissions from the openvswitch mech driver.

2016-09-10 01:19:08.071 14800 WARNING neutron.plugins.ml2.drivers.mech_agent [req-9205b398-aeef-4139-8cb1-a58eed94b92e - -] Port c7808ebf-1682-4b55-98e6-be1be73c4af7 on network ca5d8c9d-302d-47a8-8dbe-cc335d1534c9 not bound, no agent registered on host ubuntu-xenial-bluebox-sjc1-4207542
2016-09-10 01:19:27.668 14802 WARNING neutron.plugins.ml2.drivers.mech_agent [req-f55be949-722e-4551-9dcc-959f2eec4d38 - -] Port 4fc68275-d528-47dd-b88a-01523880c500 on network ca5d8c9d-302d-47a8-8dbe-cc335d1534c9 not bound, no agent registered on host ubuntu-xenial-bluebox-sjc1-4207542
2016-09-10 01:19:33.951 14800 WARNING neutron.plugins.ml2.drivers.mech_agent [req-584d52c3-6d1b-4274-9760-16faf5065290 - -] Port ef27d948-eaba-4af5-a4c0-5f201835ccaf on network 73365757-fe52-4dea-93a9-e5a68ac4f0a1 not bound, no agent registered on host ubuntu-xenial-bluebox-sjc1-4207542
2016-09-10 01:19:39.226 14802 WARNING neutron.plugins.ml2.drivers.mech_agent [req-5ea82d78-d622-49b0-ae77-9260e112de67 - -] Port bd7b108e-cac8-404f-a199-fdda7b0f6698 on network ca5d8c9d-302d-47a8-8dbe-cc335d1534c9 not bound, no agent registered on host ubuntu-xenial-bluebox-sjc1-4207542

** Affects: neutron
   Importance: Undecided
   Assignee: Kevin Benton (kevinbenton)
   Status: In Progress

** Changed in: neutron
   Assignee: (unassigned) => Kevin Benton (kevinbenton)

** Changed in: neutron
   Milestone: None => newton-rc1

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622559

Title: linux bridge jobs filled with binding warning noise

Status in neutron: In Progress

Bug description:
The linux bridge job server logs are filled with these 'warnings', which are actually completely normal emissions from the openvswitch mech driver.
2016-09-10 01:19:08.071 14800 WARNING neutron.plugins.ml2.drivers.mech_agent [req-9205b398-aeef-4139-8cb1-a58eed94b92e - -] Port c7808ebf-1682-4b55-98e6-be1be73c4af7 on network ca5d8c9d-302d-47a8-8dbe-cc335d1534c9 not bound, no agent registered on host ubuntu-xenial-bluebox-sjc1-4207542
2016-09-10 01:19:27.668 14802 WARNING neutron.plugins.ml2.drivers.mech_agent [req-f55be949-722e-4551-9dcc-959f2eec4d38 - -] Port 4fc68275-d528-47dd-b88a-01523880c500 on network ca5d8c9d-302d-47a8-8dbe-cc335d1534c9 not bound, no agent registered on host ubuntu-xenial-bluebox-sjc1-4207542
2016-09-10 01:19:33.951 14800 WARNING neutron.plugins.ml2.drivers.mech_agent [req-584d52c3-6d1b-4274-9760-16faf5065290 - -] Port ef27d948-eaba-4af5-a4c0-5f201835ccaf on network 73365757-fe52-4dea-93a9-e5a68ac4f0a1 not bound, no agent registered on host ubuntu-xenial-bluebox-sjc1-4207542
2016-09-10 01:19:39.226 14802 WARNING neutron.plugins.ml2.drivers.mech_agent [req-5ea82d78-d622-49b0-ae77-9260e112de67 - -] Port bd7b108e-cac8-404f-a199-fdda7b0f6698 on network ca5d8c9d-302d-47a8-8dbe-cc335d1534c9 not bound, no agent registered on host ubuntu-xenial-bluebox-sjc1-4207542

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1622559/+subscriptions
[Yahoo-eng-team] [Bug 1622560] [NEW] There is no back button available after creating a key pair.
Public bug reported: There is no back button available after creating a key pair from Project --> Access and Security --> Key Pairs --> Create Key Pair. There is an option to download the key pair but no button to go back to the previous page.

** Affects: horizon
   Importance: Undecided
   Assignee: Sunkara Ramya Sree (ramyasunkara)
   Status: New

** Tags: horizon-core

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1622560

Title: There is no back button available after creating a key pair.

Status in OpenStack Dashboard (Horizon): New

Bug description:
There is no back button available after creating a key pair from Project --> Access and Security --> Key Pairs --> Create Key Pair. There is an option to download the key pair but no button to go back to the previous page.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1622560/+subscriptions
[Yahoo-eng-team] [Bug 1603443] Re: [QoS][DSCP] 'delete_dscp_marking' function raises exception, 'vif_port' not present
Reviewed: https://review.openstack.org/345915
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=af8ca1bca6657f7152eeb0ca55bc7f3a0aea2301
Submitter: Jenkins
Branch: master

commit af8ca1bca6657f7152eeb0ca55bc7f3a0aea2301
Author: David Shaughnessy
Date: Fri Aug 19 17:29:44 2016 +0100

    Retain port info from DSCP rule creation.

    When a VM is deleted all info except the port number is removed.
    delete_dscp_marking requires the ofport to be present. This results
    in an exception being thrown when a port with the DSCP marking rule
    attached is deleted.
    This patch:
    - Stores the port info when the dscp_marking rule is updated or created.
    - Pops the stored info when the dscp_marking rule is removed from the
      port or the port is deleted.
    - Expands existing unit tests for the QoS Open vSwitch driver to
      cover this scenario.

    Change-Id: I77f632fdc7d612267af9a4a3bf0f74288696332b
    Closes-bug: #1603443

** Changed in: neutron
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1603443

Title: [QoS][DSCP] 'delete_dscp_marking' function raises exception, 'vif_port' not present

Status in neutron: Fix Released

Bug description:
DESCRIPTION.
During the deletion of a port with a QoS policy and a DSCP rule, an exception is thrown. That happens because the port information passed to the function doesn't have the "vif_port" information; the port was already deleted before.

HOW TO REPRODUCE.
- Create a VM with a port.
- Create a QoS policy.
- Create a QoS rule, DSCP.
- Assign this QoS rule to the port.
- Delete the port (the VM). --> the error will be thrown.

POSSIBLE SOLUTION.
This function needs to read the "in_port" to delete the flows. When the port is deleted, the QoS agent is called only with the port UUID, but QosAgentExtension stores all port info in self.policy_map.
- If this port is stored, retrieve the in_port info and return it in:

      def _process_reset_port(self, port):
          try:
              = self.policy_map.clean_by_port(port)
              self.qos_driver.delete(port)

- If the port is not stored, no QoS was applied and no clean action is needed.

ERROR LOG.
2016-07-15 14:23:03.285 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-5c858304-88a8-427b-aae4-7dd7616db182 None None] Error while processing VIF ports
2016-07-15 14:23:03.285 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most recent call last):
2016-07-15 14:23:03.285 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 2038, in rpc_loop
2016-07-15 14:23:03.285 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent     port_info, ovs_restarted)
2016-07-15 14:23:03.285 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 147, in wrapper
2016-07-15 14:23:03.285 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent     return f(*args, **kwargs)
2016-07-15 14:23:03.285 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1658, in process_network_ports
2016-07-15 14:23:03.285 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent     port_info['removed'])
2016-07-15 14:23:03.285 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 147, in wrapper
2016-07-15 14:23:03.285 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent     return f(*args, **kwargs)
2016-07-15 14:23:03.285 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1585, in treat_devices_removed
2016-07-15 14:23:03.285 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent     self.ext_manager.delete_port(self.context, {'port_id': device})
2016-07-15 14:23:03.285 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/opt/stack/neutron/neutron/agent/l2/extensions/manager.py", line 80, in delete_port
2016-07-15 14:23:03.285 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent     extension.obj.delete_port(context, data)
2016-07-15 14:23:03.285 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/opt/stack/neutron/neutron/agent/l2/extensions/qos.py", line 316, in delete_port
2016-07-15
[Yahoo-eng-team] [Bug 1580648] Re: Two HA routers in master state during functional test
I've faced this problem on a production cluster twice in a few weeks, so I'm setting the bug status back to 'confirmed'. 2 L3 agents were 'active' for the routers, and 1 inactive (3-node setup).

** Changed in: neutron
   Status: Expired => Confirmed

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1580648

Title: Two HA routers in master state during functional test

Status in neutron: Confirmed

Bug description:
Scheduling HA routers ends with two routers in master state. The issue was discovered in this bug fix - https://review.openstack.org/#/c/273546 - after preparing a new functional test. ha_router.py, in the method _get_state_change_monitor_callback(), starts a neutron-keepalived-state-change process with the --monitor-interface parameter set to the ha_device (ha-xxx) and its IP address. That application monitors all address changes in the namespace using "ip netns exec xxx ip -o monitor address". Each addition of the ha-xxx device produces a call to the neutron-server API saying this router has become "master". This produces false results, because the presence of that device says nothing about whether the router is master or not.

Logs from test_ha_router.L3HATestFailover.test_ha_router_lost_gw_connection

Agent2:
2016-05-10 16:23:20.653 16067 DEBUG neutron.agent.linux.async_process [-] Launching async process [ip netns exec qrouter-962f19e6-f592-49f7-8bc4-add116c0b7a3@agent1@agent2 ip -o monitor address].
start /neutron/neutron/agent/linux/async_process.py:109
2016-05-10 16:23:20.654 16067 DEBUG neutron.agent.linux.utils [-] Running command: ['ip', 'netns', 'exec', 'qrouter-962f19e6-f592-49f7-8bc4-add116c0b7a3@agent1@agent2', 'ip', '-o', 'monitor', 'address'] create_process /neutron/neutron/agent/linux/utils.py:82
2016-05-10 16:23:20.661 16067 DEBUG neutron.agent.l3.keepalived_state_change [-] Monitor: ha-8aedf0c6-2a, 169.254.0.1/24 run /neutron/neutron/agent/l3/keepalived_state_change.py:59
2016-05-10 16:23:20.661 16067 INFO neutron.agent.linux.daemon [-] Process runs with uid/gid: 1000/1000
2016-05-10 16:23:20.767 16067 DEBUG neutron.agent.l3.keepalived_state_change [-] Event: qr-88c93aa9-5a, fe80::c8fe:deff:fead:beef/64, False parse_and_handle_event /neutron/neutron/agent/l3/keepalived_state_change.py:73
2016-05-10 16:23:20.901 16067 DEBUG neutron.agent.l3.keepalived_state_change [-] Event: qg-814d252d-26, fe80::c8fe:deff:fead:beee/64, False parse_and_handle_event /neutron/neutron/agent/l3/keepalived_state_change.py:73
2016-05-10 16:23:21.324 16067 DEBUG neutron.agent.l3.keepalived_state_change [-] Event: ha-8aedf0c6-2a, fe80::2022:22ff:fe22:/64, True parse_and_handle_event /neutron/neutron/agent/l3/keepalived_state_change.py:73
2016-05-10 16:23:29.807 16067 DEBUG neutron.agent.l3.keepalived_state_change [-] Event: ha-8aedf0c6-2a, 169.254.0.1/24, True parse_and_handle_event /neutron/neutron/agent/l3/keepalived_state_change.py:73
2016-05-10 16:23:29.808 16067 DEBUG neutron.agent.l3.keepalived_state_change [-] Wrote router 962f19e6-f592-49f7-8bc4-add116c0b7a3 state master write_state_change /neutron/neutron/agent/l3/keepalived_state_change.py:87
2016-05-10 16:23:29.808 16067 DEBUG neutron.agent.l3.keepalived_state_change [-] State: master notify_agent /neutron/neutron/agent/l3/keepalived_state_change.py:93

Agent1:
2016-05-10 16:23:19.417 15906 DEBUG neutron.agent.linux.async_process [-] Launching async process [ip netns exec qrouter-962f19e6-f592-49f7-8bc4-add116c0b7a3@agent1 ip -o monitor address]. start /neutron/neutron/agent/linux/async_process.py:109
2016-05-10 16:23:19.418 15906 DEBUG neutron.agent.linux.utils [-] Running command: ['ip', 'netns', 'exec', 'qrouter-962f19e6-f592-49f7-8bc4-add116c0b7a3@agent1', 'ip', '-o', 'monitor', 'address'] create_process /neutron/neutron/agent/linux/utils.py:82
2016-05-10 16:23:19.425 15906 DEBUG neutron.agent.l3.keepalived_state_change [-] Monitor: ha-22a4d1e0-ad, 169.254.0.1/24 run /neutron/neutron/agent/l3/keepalived_state_change.py:59
2016-05-10 16:23:19.426 15906 INFO neutron.agent.linux.daemon [-] Process runs with uid/gid: 1000/1000
2016-05-10 16:23:19.525 15906 DEBUG neutron.agent.l3.keepalived_state_change [-] Event: qr-88c93aa9-5a, fe80::c8fe:deff:fead:beef/64, False parse_and_handle_event /neutron/neutron/agent/l3/keepalived_state_change.py:73
2016-05-10 16:23:19.645 15906 DEBUG neutron.agent.l3.keepalived_state_change [-] Event: qg-814d252d-26, fe80::c8fe:deff:fead:beee/64, False parse_and_handle_event /neutron/neutron/agent/l3/keepalived_state_change.py:73
2016-05-10 16:23:19.927 15906 DEBUG neutron.agent.l3.keepalived_state_change [-] Event: ha-22a4d1e0-ad, fe80::1034:56ff:fe78:2b5d/64, True parse_and_handle_event /neutron/neutron/agent/l3/keepalived_state_change.py:73
2016-05-10 16:23:28.543 15906 DEBUG neutron.agent.l3.keepalived_state_change [-] Event:
[Yahoo-eng-team] [Bug 1620864] Re: control size of neutron logs
Reviewed: https://review.openstack.org/366440
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=5743ed4679d43c325c5b3a2a4955088e35018c97
Submitter: Jenkins
Branch: master

commit 5743ed4679d43c325c5b3a2a4955088e35018c97
Author: Armando Migliaccio
Date: Thu Sep 8 13:42:11 2016 -0700

    Reduce log level for ryu in OVS agent log

    In a tempest run, Ryu logs roughly 30K debug traces, which puts the
    OVS agent's log amongst the biggest log files in a single-node
    OpenStack deployment in the gate. This patch sets Ryu and its
    components to log at INFO level and above to reduce the number of
    traces emitted. This patch, alongside [1,2], brings down the size of
    the biggest Neutron log files, server and OVS agent respectively.
    More can surely be done; however, callbacks and ryu are the most
    obvious ones, and further cuts may be dealt with on an ad-hoc basis.

    Closes-bug: #1620864
    [1] https://review.openstack.org/#/c/366424/
    [2] https://review.openstack.org/#/c/366478/
    Change-Id: I40b7c77601788ae2e7428c7a0206ca2a807d10dc

** Changed in: neutron
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1620864

Title: control size of neutron logs

Status in neutron: Fix Released

Bug description:
From a recent analysis [1] of a master change, it has been noted that the size of neutron logs is amongst the biggest in an OpenStack deployment. This bug report tracks the effort to trim down some of the unnecessary traces that bloat the logs, as this may affect operability as well.
[1] http://paste.openstack.org/show/567259/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1620864/+subscriptions
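Capping a noisy third-party logger is a one-liner with the stdlib logging module; a sketch of the general approach the patch takes for ryu (the helper name is hypothetical):

```python
import logging


def cap_logger(name, level=logging.INFO):
    """Set a library's logger to INFO and above so its debug chatter
    stays out of the agent log while warnings still get through."""
    logger = logging.getLogger(name)
    logger.setLevel(level)
    return logger


# Silence ryu's ~30K debug traces per tempest run.
ryu_log = cap_logger("ryu")
```

Because Python loggers are hierarchical, setting the level on "ryu" also governs child loggers such as "ryu.base" unless they set their own level.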
[Yahoo-eng-team] [Bug 1622545] [NEW] archive_deleted_rows isn't archiving instances
Public bug reported: Running "nova-manage archive_deleted_rows ..." clears out few or none of the deleted nova instances. For example, running the command several times:

$ nova-manage --debug db archive_deleted_rows --max_rows 10 --verbose

I get:

+--------------------------+-------------------------+
| Table                    | Number of Rows Archived |
+--------------------------+-------------------------+
| block_device_mapping     | 10108                   |
| instance_actions         | 31838                   |
| instance_actions_events  | 2                       |
| instance_extra           | 10108                   |
| instance_faults          | 459                     |
| instance_info_caches     | 10108                   |
| instance_metadata        | 6037                    |
| instance_system_metadata | 17883                   |
| reservations             | 9                       |
+--------------------------+-------------------------+

The only way I've been able to get any instances archived is to lower the --max-rows parameter, but this only archives a small number of the instances and sometimes doesn't archive any at all. In my nova-manage.log I have the following error:

2016-09-12 09:22:21.658 17603 WARNING nova.db.sqlalchemy.api [-] IntegrityError detected when archiving table instances: (pymysql.err.IntegrityError) (1451, u'Cannot delete or update a parent row: a foreign key constraint fails (`nova`.`instance_extra`, CONSTRAINT `instance_extra_instance_uuid_fkey` FOREIGN KEY (`instance_uuid`) REFERENCES `instances` (`uuid`))') [SQL: u'DELETE FROM instances WHERE instances.id in (SELECT T1.id FROM (SELECT instances.id \nFROM instances \nWHERE instances.deleted != %s ORDER BY instances.id \n LIMIT %s) as T1)'] [parameters: (0, 787)]

mysql -e 'select count(*) from instances where deleted_at is not NULL;' nova
+----------+
| count(*) |
+----------+
|    70829 |
+----------+

I'm running mitaka with this patch installed: https://review.openstack.org/#/c/326730/1

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1622545

Title: archive_deleted_rows isn't archiving instances

Status in OpenStack Compute (nova): New

Bug description:
Running "nova-manage archive_deleted_rows ..." clears out few or none of the deleted nova instances. The only way I've been able to get any instances archived is to lower the --max-rows parameter, but this only archives a small number of the instances and sometimes doesn't archive any at all. In my nova-manage.log I have the IntegrityError shown above: deleting from `instances` fails because `instance_extra` rows still reference them via `instance_extra_instance_uuid_fkey`. The deleted-instance count remains at 70829. I'm running mitaka with this patch installed: https://review.openstack.org/#/c/326730/1

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1622545/+subscriptions
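The IntegrityError comes from deleting parent `instances` rows while child tables such as `instance_extra` still reference them; archiving has to process children before their parent. A hypothetical sketch of computing such an ordering (not nova's actual archiving code):

```python
def archive_order(fk_children, root):
    """Return a delete order in which every child table precedes the
    parent it references (depth-first over a parent -> children map)."""
    order = []

    def visit(table):
        for child in fk_children.get(table, []):
            visit(child)
        if table not in order:
            order.append(table)

    visit(root)
    return order


# Example using the FK from the traceback:
# instance_extra.instance_uuid -> instances.uuid
order = archive_order(
    {"instances": ["instance_extra", "instance_metadata"]}, "instances")
```

Deleting in this order means no `DELETE FROM instances` runs while an `instance_extra` row still points at one of the targeted UUIDs.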
[Yahoo-eng-team] [Bug 1561200] Re: created_at and updated_at times don't include timezone
Fix proposed to branch: master
Review: https://review.openstack.org/368682

** Changed in: neutron
       Status: Expired => In Progress

** Changed in: neutron
     Assignee: (unassigned) => Kevin Benton (kevinbenton)

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561200

Title:
  created_at and updated_at times don't include timezone

Status in neutron:
  In Progress

Bug description:
  created_at and updated_at were recently added to the API calls and
  notifications for many neutron resources (networks, subnets, ports,
  possibly more), which is awesome! I've noticed that the times don't
  include a timezone (compare to nova servers and glance images, for
  instance). Even if there's an assumption a user can make, this can
  create problems with some display tools (I noticed this because a
  javascript date formatting filter does local timezone conversions when
  a timezone is present, which meant times for resources created seconds
  apart looked as though they were several hours adrift). Tested on
  neutron mitaka RC1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561200/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
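The difference the report describes is easy to show in miniature. A small sketch in plain Python (not neutron code) comparing a naive timestamp like the ones neutron emits with a timezone-aware one:

```python
from datetime import datetime, timezone

# Hedged sketch: neutron emits naive timestamps like "2016-09-11T05:13:53",
# while nova and glance attach an offset. A naive datetime serializes
# without a timezone; an aware one carries it in the ISO string.
naive = datetime(2016, 9, 11, 5, 13, 53)
aware = naive.replace(tzinfo=timezone.utc)

print(naive.isoformat())  # -> 2016-09-11T05:13:53  (no timezone)
print(aware.isoformat())  # -> 2016-09-11T05:13:53+00:00
```

Without the offset, a client-side formatter has to guess, which is exactly how resources created seconds apart can render hours adrift.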
[Yahoo-eng-team] [Bug 1622538] [NEW] Wrong "can_host" field of compute node resource providers
Public bug reported:

The "can_host" field of compute node records should be 1. However, according to the latest placement implementation, it is 0.

mysql> select * from resource_providers;
+---------------------+---------------------+----+--------------------------------------+-------+------------+----------+
| created_at          | updated_at          | id | uuid                                 | name  | generation | can_host |
+---------------------+---------------------+----+--------------------------------------+-------+------------+----------+
| 2016-09-12 08:54:19 | 2016-09-12 09:33:41 |  1 | 508f3973-8e1a-4241-afec-ee3e21be0611 | host1 |         80 |        0 |
+---------------------+---------------------+----+--------------------------------------+-------+------------+----------+
1 row in set (0.00 sec)

** Affects: nova
     Importance: Undecided
       Assignee: Yingxin (cyx1231st)
         Status: New

** Tags: placement

** Changed in: nova
     Assignee: (unassigned) => Yingxin (cyx1231st)

** Tags added: placement

** Description changed:

  The "can_host" field of compute node records should be 1. However,
- according to the latest placement code, it is 0.
+ according to the latest placement implementation, it is 0.

  mysql> select * from resource_providers;
  (mysql output as above)

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1622538

Title:
  Wrong "can_host" field of compute node resource providers

Status in OpenStack Compute (nova):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1622538/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1618666] Re: deprecated warning for SafeConfigParser
Reviewed:  https://review.openstack.org/368408
Committed: https://git.openstack.org/cgit/openstack-dev/pbr/commit/?id=77d9ab7d07feb38281531deeeb4399017b5735d0
Submitter: Jenkins
Branch:    master

commit 77d9ab7d07feb38281531deeeb4399017b5735d0
Author: jiansong
Date:   Sun Sep 11 01:49:10 2016 -0700

    Deprecated warning for SafeConfigParser

    tox -e py34 is reporting a deprecation warning for SafeConfigParser

    /octavia/.tox/py34/lib/python3.4/site-packages/pbr/util.py:207:
    DeprecationWarning: The SafeConfigParser class has been renamed to
    ConfigParser in Python 3.2. This alias will be removed in future
    versions. Use ConfigParser directly instead.
      parser = configparser.SafeConfigParser()

    Closes-Bug: #1618666
    Change-Id: Ib280b778938b64717ee1cf94efae2f7b553c8f5e

** Changed in: pbr
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1618666

Title:
  deprecated warning for SafeConfigParser

Status in Glance:
  In Progress
Status in glance_store:
  In Progress
Status in OpenStack Identity (keystone):
  In Progress
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in PBR:
  Fix Released
Status in python-swiftclient:
  In Progress
Status in OpenStack Object Storage (swift):
  In Progress
Status in OpenStack DBaaS (Trove):
  In Progress

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1618666/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
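The fix the warning points at is simply switching to the canonical class name, which has been ConfigParser since Python 3.2. A minimal sketch (the [metadata] section here is an illustrative stand-in for the setup.cfg content pbr parses):

```python
import configparser

# Use ConfigParser directly instead of the deprecated SafeConfigParser
# alias (they are the same class under Python 3).
parser = configparser.ConfigParser()
parser.read_string("[metadata]\nname = example\n")
print(parser.get("metadata", "name"))  # -> example
```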
[Yahoo-eng-team] [Bug 1622516] [NEW] TestNetworkBasicOps.test_router_rescheduling failed with 'No IPv4 addresses found in: ...'
Public bug reported:

http://logs.openstack.org/01/367501/1/gate/gate-tempest-dsvm-neutron-linuxbridge-ubuntu-xenial/3370b9c/console.html

2016-09-11 05:27:03.407325 | Captured traceback:
2016-09-11 05:27:03.407342 | ~~~
2016-09-11 05:27:03.407364 |     Traceback (most recent call last):
2016-09-11 05:27:03.407389 |       File "tempest/test.py", line 152, in wrapper
2016-09-11 05:27:03.407414 |         return func(*func_args, **func_kwargs)
2016-09-11 05:27:03.407439 |       File "tempest/test.py", line 107, in wrapper
2016-09-11 05:27:03.407464 |         return f(self, *func_args, **func_kwargs)
2016-09-11 05:27:03.407500 |       File "tempest/scenario/test_network_basic_ops.py", line 717, in test_router_rescheduling
2016-09-11 05:27:03.407524 |         self._setup_network_and_servers()
2016-09-11 05:27:03.407561 |       File "tempest/scenario/test_network_basic_ops.py", line 124, in _setup_network_and_servers
2016-09-11 05:27:03.407588 |         floating_ip = self.create_floating_ip(server)
2016-09-11 05:27:03.407619 |       File "tempest/scenario/manager.py", line 816, in create_floating_ip
2016-09-11 05:27:03.407648 |         port_id, ip4 = self._get_server_port_id_and_ip4(thing)
2016-09-11 05:27:03.407681 |       File "tempest/scenario/manager.py", line 795, in _get_server_port_id_and_ip4
2016-09-11 05:27:03.407707 |         "No IPv4 addresses found in: %s" % ports)
2016-09-11 05:27:03.407753 |       File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/unittest2/case.py", line 845, in assertNotEqual
2016-09-11 05:27:03.40 |         raise self.failureException(msg)
2016-09-11 05:27:03.408051 |     AssertionError: 0 == 0 : No IPv4 addresses found in: [{u'binding:host_id': u'ubuntu-xenial-rax-ord-4213156', u'id': u'ef74f0f4-aa7c-4fc7-ad90-e6aab9531309', u'allowed_address_pairs': [], u'status': u'BUILD', u'extra_dhcp_opts': [], u'name': u'', u'admin_state_up': True, u'tenant_id': u'05e1e2af7c5246b1ab35831d6ac59896', u'network_id': u'ebc57b5b-63cc-4ff9-86b1-f5bf13950908', u'description': u'', u'security_groups': [u'aa9f20fa-bdf7-4d4b-b072-55ade7d92776'], u'device_id': u'd08c8b65-ac5b-4d44-a954-41b3f8eb357f', u'created_at': u'2016-09-11T05:13:53', u'binding:vnic_type': u'normal', u'fixed_ips': [{u'ip_address': u'10.1.0.9', u'subnet_id': u'abbb64d7-a2e9-4ead-8b55-937e2415faff'}], u'device_owner': u'compute:None', u'binding:vif_type': u'bridge', u'updated_at': u'2016-09-11T05:14:00', u'port_security_enabled': True, u'binding:vif_details': {u'port_filter': True}, u'binding:profile': {}, u'revision_number': 9, u'mac_address': u'fa:16:3e:73:99:fe'}]

** Affects: neutron
     Importance: Undecided
         Status: New

** Tags: gate-failure

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622516

Title:
  TestNetworkBasicOps.test_router_rescheduling failed with 'No IPv4
  addresses found in: ...'

Status in neutron:
  New
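Notably, the port in the assertion message does carry an IPv4 fixed IP (10.1.0.9) but is still in BUILD status. A rough sketch of the kind of fixed_ips scan the failing helper performs — the function name and filtering here are illustrative, not tempest's exact code:

```python
import ipaddress

# Hedged sketch: walk each port's fixed_ips and return the first IPv4
# address, roughly what tempest's _get_server_port_id_and_ip4 does.
def first_ipv4(ports):
    for port in ports:
        for fixed_ip in port.get("fixed_ips", []):
            addr = fixed_ip["ip_address"]
            if ipaddress.ip_address(addr).version == 4:
                return port["id"], addr
    return None, None

# Port dict trimmed from the failure log above:
ports = [{
    "id": "ef74f0f4-aa7c-4fc7-ad90-e6aab9531309",
    "status": "BUILD",
    "fixed_ips": [{"ip_address": "10.1.0.9",
                   "subnet_id": "abbb64d7-a2e9-4ead-8b55-937e2415faff"}],
}]
port_id, ip4 = first_ipv4(ports)
print(port_id, ip4)  # -> ef74f0f4-aa7c-4fc7-ad90-e6aab9531309 10.1.0.9
```

Since a plain scan like this would have found the address, the failure suggests the real helper discarded the port before the scan (for example, on its BUILD status) — worth confirming against the tempest source.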
[Yahoo-eng-team] [Bug 1622503] [NEW] nova notifier called 3 times for each port
Public bug reported:

The nova notifier is being called 3 different times for each port event due to multiple instantiations of db_base_plugin_v2, each constructing a new notifier and subscribing it to sqlalchemy events.

This can be seen in the logs on each event:

2016-09-10 01:25:52.375 14798 DEBUG neutron.notifiers.nova [req-35cf2ca3-edaa-4e88-9551-c106ab84a031 neutron -] Ignoring state change previous_port_status: DOWN current_port_status: DOWN port_id 92aa19ff-c9c9-4771-ad68-ebe1910ba3dc record_port_status_changed /opt/stack/new/neutron/neutron/notifiers/nova.py:223
2016-09-10 01:25:52.376 14798 DEBUG neutron.notifiers.nova [req-35cf2ca3-edaa-4e88-9551-c106ab84a031 neutron -] Ignoring state change previous_port_status: DOWN current_port_status: DOWN port_id 92aa19ff-c9c9-4771-ad68-ebe1910ba3dc record_port_status_changed /opt/stack/new/neutron/neutron/notifiers/nova.py:223
2016-09-10 01:25:52.376 14798 DEBUG neutron.notifiers.nova [req-35cf2ca3-edaa-4e88-9551-c106ab84a031 neutron -] Ignoring state change previous_port_status: DOWN current_port_status: DOWN port_id 92aa19ff-c9c9-4771-ad68-ebe1910ba3dc record_port_status_changed /opt/stack/new/neutron/neutron/notifiers/nova.py:223

** Affects: neutron
     Importance: Undecided
       Assignee: Kevin Benton (kevinbenton)
         Status: In Progress

** Changed in: neutron
     Assignee: (unassigned) => Kevin Benton (kevinbenton)

** Changed in: neutron
    Milestone: None => mitaka-rc1

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622503

Title:
  nova notifier called 3 times for each port

Status in neutron:
  In Progress

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1622503/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
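Since the duplicate notifications come from registering the same callback once per plugin instantiation, one common remedy is to make the subscription idempotent. A toy sketch of that guard — illustrative only, not neutron's actual fix:

```python
# Hedged sketch: guard event subscription so that constructing the
# plugin several times registers the notifier callback only once.
_subscribed = False

def subscribe_notifier(subscribe, callback):
    """Register callback on the first call; later calls are no-ops."""
    global _subscribed
    if _subscribed:
        return False
    subscribe(callback)
    _subscribed = True
    return True

calls = []
for _ in range(3):  # simulate three plugin instantiations
    subscribe_notifier(calls.append, lambda: None)
print(len(calls))  # -> 1
```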
[Yahoo-eng-team] [Bug 1618666] Re: deprecated warning for SafeConfigParser
** Also affects: swift
   Importance: Undecided
       Status: New

** Changed in: swift
     Assignee: (unassigned) => Pallavi (pallavi-s)

** Also affects: python-swiftclient
   Importance: Undecided
       Status: New

** Changed in: python-swiftclient
     Assignee: (unassigned) => Pallavi (pallavi-s)

** Changed in: swift
       Status: New => In Progress

** Changed in: python-swiftclient
       Status: New => In Progress

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1618666

Title:
  deprecated warning for SafeConfigParser

Status in Glance:
  In Progress
Status in glance_store:
  In Progress
Status in OpenStack Identity (keystone):
  In Progress
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in PBR:
  In Progress
Status in python-swiftclient:
  In Progress
Status in OpenStack Object Storage (swift):
  In Progress
Status in OpenStack DBaaS (Trove):
  In Progress

Bug description:
  tox -e py34 is reporting a deprecation warning for SafeConfigParser

  /octavia/.tox/py34/lib/python3.4/site-packages/pbr/util.py:207:
  DeprecationWarning: The SafeConfigParser class has been renamed to
  ConfigParser in Python 3.2. This alias will be removed in future
  versions. Use ConfigParser directly instead.
    parser = configparser.SafeConfigParser()

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1618666/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1622460] [NEW] neutron-server report 'FirewallNotFound' when delete firewall under l3_ha mode
Public bug reported:

When we delete a firewall under l3-ha mode, some neutron-servers report 'FirewallNotFound' error logs.

Environment:
* OpenStack Kilo version
* Three neutron servers using ha-proxy in balance roundrobin mode, providing a VIP (keepalived)
* l3_ha=True is set in neutron-servers to provide L3 HA.
* Three l3-agents on 3 network nodes

We found 2 out of 3 neutron-servers print the following error logs.

Error logs:
===
2016-09-12 14:33:34.250 22722 DEBUG oslo_messaging._drivers.amqp [-] unpacked context: {u'read_deleted': u'no', u'project_name': u'zhaoyi', u'user_id': u'5f228382dd8d4001bd079cfab624e870', u'roles': [u'_member_', u'Member', u'user', u'admin'], u'tenant_id': u'd3147020bd1f4a709654b7e62885bd9f', u'auth_token': u'***', u'request_id': u'req-89116778-b4fb-4232-8249-500f1db5d3f8', u'is_admin': True, u'user': u'5f228382dd8d4001bd079cfab624e870', u'timestamp': u'2016-09-12 06:33:33.570294', u'tenant_name': u'zhaoyi', u'project_id': u'd3147020bd1f4a709654b7e62885bd9f', u'user_name': u'zhaoyi', u'tenant': u'd3147020bd1f4a709654b7e62885bd9f'} unpack_context /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py:203
2016-09-12 14:33:34.253 22722 DEBUG neutron_fwaas.services.firewall.fwaas_plugin [req-89116778-b4fb-4232-8249-500f1db5d3f8 ] firewall_deleted() called firewall_deleted /usr/lib/python2.7/site-packages/neutron_fwaas/services/firewall/fwaas_plugin.py:62
2016-09-12 14:33:34.260 22722 DEBUG neutron_fwaas.db.firewall.firewall_db [req-89116778-b4fb-4232-8249-500f1db5d3f8 ] delete_firewall() called delete_firewall /usr/lib/python2.7/site-packages/neutron_fwaas/db/firewall/firewall_db.py:362
2016-09-12 14:33:34.303 22759 DEBUG keystoneclient.session [-] RESP: [200] content-length: 3084 x-subject-token: {SHA1}fb0b83f9b5d3a4b459a1f2845c1a1bd4bba3c008 vary: X-Auth-Token date: Mon, 12 Sep 2016 06:33:34 GMT content-type: application/json x-openstack-request-id: req-38cd15ab-f193-4d04-80bd-088638497b26 RESP BODY: {"token": {"methods": ["password", "token"], "roles": [{"id": "9fe2ff9ee4384b1894a90878d3e92bab", "name": "_member_"}, {"id": "bf49f6a34f5b4ec8843efee8a840a8b3", "name": "Member"}, {"id": "a104bc435a5d4031b9712dd702cb8672", "name": "user"}, {"id": "11b9aa45b311407ba9460b95eb1534c2", "name": "admin"}], "expires_at": "2016-09-12T07:03:12.00Z", "project": {"domain": {"id": "default", "name": "Default"}, "id": "d3147020bd1f4a709654b7e62885bd9f", "name": "zhaoyi"}, "catalog": "", "extras": {}, "user": {"domain": {"id": "default", "name": "Default"}, "id": "5f228382dd8d4001bd079cfab624e870", "name": "zhaoyi"}, "audit_ids": ["Q9kbnQe8SzOcZ7ig6_-2ew", "guy9k1VsThmCGwfavbmvjw"], "issued_at": "2016-09-12T06:03:12.784462"}} _http_log_response /usr/lib/python2.7/site-packages/keystoneclient/session.py:224
2016-09-12 14:33:34.307 22759 DEBUG neutron_fwaas.db.firewall.firewall_db [req-b354359e-4536-43bd-a0ca-6ffc27ef72d7 ] get_firewall_rules() called get_firewall_rules /usr/lib/python2.7/site-packages/neutron_fwaas/db/firewall/firewall_db.py:534
2016-09-12 14:33:34.322 22759 INFO neutron.wsgi [req-b354359e-4536-43bd-a0ca-6ffc27ef72d7 ] 10.65.0.99,10.65.0.99 - - [12/Sep/2016 14:33:34] "GET /v2.0/fw/firewall_rules.json HTTP/1.1" 200 6555 0.163946
2016-09-12 14:33:34.447 22722 ERROR oslo_messaging.rpc.dispatcher [req-89116778-b4fb-4232-8249-500f1db5d3f8 ] Exception during message handling: Firewall 6278942e-6485-4ceb-92e5-8ddfa9fb4d25 could not be found.
2016-09-12 14:33:34.447 22722 TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):
2016-09-12 14:33:34.447 22722 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
2016-09-12 14:33:34.447 22722 TRACE oslo_messaging.rpc.dispatcher     executor_callback))
2016-09-12 14:33:34.447 22722 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
2016-09-12 14:33:34.447 22722 TRACE oslo_messaging.rpc.dispatcher     executor_callback)
2016-09-12 14:33:34.447 22722 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, in _do_dispatch
2016-09-12 14:33:34.447 22722 TRACE oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
2016-09-12 14:33:34.447 22722 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/neutron_fwaas/services/firewall/fwaas_plugin.py", line 67, in firewall_deleted
2016-09-12 14:33:34.447 22722 TRACE oslo_messaging.rpc.dispatcher     self.plugin.delete_db_firewall_object(context, firewall_id)
2016-09-12 14:33:34.447 22722 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/neutron_fwaas/services/firewall/fwaas_plugin.py", line 323, in delete_db_firewall_object
2016-09-12 14:33:34.447 22722 TRACE oslo_messaging.rpc.dispatcher
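Under L3 HA, several l3-agents can acknowledge the same firewall deletion, so firewall_deleted() races against itself and the later RPC calls hit FirewallNotFound. A toy sketch of making the delete path idempotent — names and data are illustrative, not the neutron-fwaas code:

```python
# Hedged sketch (not the neutron-fwaas fix): tolerate a missing row so
# that duplicate firewall_deleted() RPCs from multiple agents are no-ops.
class FirewallNotFound(Exception):
    pass

db = {"6278942e-6485-4ceb-92e5-8ddfa9fb4d25": {"status": "PENDING_DELETE"}}

def delete_db_firewall_object(firewall_id):
    if firewall_id not in db:
        raise FirewallNotFound(firewall_id)
    del db[firewall_id]

def firewall_deleted(firewall_id):
    """RPC callback from an l3-agent; must be safe to run more than once."""
    try:
        delete_db_firewall_object(firewall_id)
        return True
    except FirewallNotFound:
        return False  # another neutron-server already removed it

# Three agents ack the same deletion:
results = [firewall_deleted("6278942e-6485-4ceb-92e5-8ddfa9fb4d25")
           for _ in range(3)]
print(results)  # -> [True, False, False]
```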
[Yahoo-eng-team] [Bug 1622453] [NEW] Multiple validations for network address
Public bug reported:

In master/mitaka, while creating a network, under the Subnet section, if a wrong Network Address is given, multiple validation messages are appended each time Next is clicked repeatedly without correcting it.

** Affects: horizon
     Importance: Undecided
         Status: New

** Tags: horizon-core

** Attachment added: "error message.png"
   https://bugs.launchpad.net/bugs/1622453/+attachment/4739037/+files/error%20message.png

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1622453

Title:
  Multiple validations for network address

Status in OpenStack Dashboard (Horizon):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1622453/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
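The append-versus-replace nature of the bug is easy to show in miniature: each validation pass must clear earlier messages before adding new ones. Horizon's actual check lives in the dashboard's JavaScript; the Python sketch below is purely illustrative:

```python
# Hedged sketch of the append-vs-replace bug: revalidating must clear
# earlier messages instead of appending a duplicate on every click.
errors = []

def validate(cidr):
    errors.clear()  # without this line, messages pile up on each click
    if "/" not in cidr:
        errors.append("Network Address is not valid (e.g. 192.168.0.0/24).")

for _ in range(3):  # user clicks "Next" three times with a bad address
    validate("bad-address")
print(len(errors))  # -> 1
```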