[Yahoo-eng-team] [Bug 1596829] Re: String interpolation should be delayed at logging calls
Reviewed: https://review.openstack.org/347155 Committed: https://git.openstack.org/cgit/openstack/python-neutronclient/commit/?id=ec20f7f85c3a8ecd788536401eeeb0fef4ef18c2 Submitter: Jenkins Branch: master commit ec20f7f85c3a8ecd788536401eeeb0fef4ef18c2 Author: Takashi NATSUME Date: Tue Jul 26 15:48:11 2016 +0900 Fix string interpolation at logging call Skip creating the formatted log message if the message is not going to be emitted because of the log level. Change-Id: I19d985addb2bdc1b5e17ecd5ac90223e5347d7b2 Closes-Bug: #1596829 ** Changed in: python-neutronclient Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1596829 Title: String interpolation should be delayed at logging calls Status in Ceilometer: New Status in Glance: In Progress Status in glance_store: New Status in heat: New Status in Ironic: Fix Released Status in OpenStack Identity (keystone): New Status in networking-vsphere: Fix Released Status in neutron: Fix Released Status in OpenStack Compute (nova): In Progress Status in os-brick: Fix Released Status in python-cinderclient: Fix Released Status in python-glanceclient: In Progress Status in python-neutronclient: Fix Released Status in OpenStack Object Storage (swift): New Status in taskflow: New Bug description: String interpolation should be delayed so that it is handled by the logging code, rather than being done at the point of the logging call. Wrong: LOG.debug('Example: %s' % 'bad') Right: LOG.debug('Example: %s', 'good') See the following guideline. * http://docs.openstack.org/developer/oslo.i18n/guidelines.html#adding-variables-to-log-messages A rule for this should be added to the hacking checks.
To manage notifications about this bug go to: https://bugs.launchpad.net/ceilometer/+bug/1596829/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
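The wrong/right pair from the guideline above can be demonstrated with the standard library alone. The `Probe` class below is just a hypothetical stand-in that records when `str()` is actually called on the log argument, which is exactly when interpolation happens:

```python
import logging

logging.basicConfig(level=logging.INFO)
LOG = logging.getLogger(__name__)

calls = []

class Probe:
    """Records when __str__ is invoked, i.e. when interpolation happens."""
    def __str__(self):
        calls.append('formatted')
        return 'probe'

# Wrong: the % operator formats the string immediately, even though
# DEBUG records are filtered out at INFO level and never emitted.
LOG.debug('Example: %s' % Probe())
eager = len(calls)

# Right: logging only interpolates if the record is actually emitted,
# so __str__ is never called here.
LOG.debug('Example: %s', Probe())
lazy = len(calls) - eager

print(eager, lazy)  # 1 0
```

The saved work is trivial for a short string, but for a large object with an expensive `__str__` (or a translated message), the eager form pays the formatting cost on every filtered-out call.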
[Yahoo-eng-team] [Bug 1585608] Re: theme switcher broken
[Expired for OpenStack Dashboard (Horizon) because there has been no activity for 60 days.] ** Changed in: horizon Status: Incomplete => Expired -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1585608 Title: theme switcher broken Status in OpenStack Dashboard (Horizon): Expired Bug description: While adding a new theme via local_settings.d, all themes from the theme switcher get removed. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1585608/+subscriptions
[Yahoo-eng-team] [Bug 1605760] Re: Inaccurate column name in hypervisor compute host
Reviewed: https://review.openstack.org/346233 Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=9c05819519a599d6b2e78172ab87bb3abd2de250 Submitter: Jenkins Branch: master commit 9c05819519a599d6b2e78172ab87bb3abd2de250 Author: Revon Mathews Date: Fri Jul 22 15:44:58 2016 -0500 Modified column names in Admin->Hypervisor Changed column name back to Hostname This patch modifies column names to be more specific and consistent Change-Id: Ia220b57631dc43b0a3ad2bc9949c2f6eb4e0ba40 Closes-bug: #1605760 ** Changed in: horizon Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1605760 Title: Inaccurate column name in hypervisor compute host Status in OpenStack Dashboard (Horizon): Fix Released Bug description: 1. Go to Admin>System>Hypervisors - Compute Host tab: Actual Result: The Compute Host tab displays the column names "Zone" and "Updated At", which are inaccurate. Expected Result: "Zone" could be replaced with "Availability Zone" and "Time since updated" with "Last Updated", which are more consistent and specific with respect to other tables. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1605760/+subscriptions
[Yahoo-eng-team] [Bug 1606747] [NEW] [sriov] nova doesn't regard second nic after booting
Public bug reported: Description === After booting a vm, we attach a sriov port to the vm. It can't work, because the port does not have the correct profile.pci_slot information. Steps to reproduce == * boot a vm * create a sriov port * attach the sriov port to the vm Expected result === the sr-iov pci device is added successfully Actual result = because of the lack of pci_slot, we can not add the pci device correctly Environment === Kilo & Mitaka ** Affects: nova Importance: Undecided Status: New ** Tags: sriov ** Tags added: sriov -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1606747 Title: [sriov] nova doesn't regard second nic after booting Status in OpenStack Compute (nova): New Bug description: Description === After booting a vm, we attach a sriov port to the vm. It can't work, because the port does not have the correct profile.pci_slot information. Steps to reproduce == * boot a vm * create a sriov port * attach the sriov port to the vm Expected result === the sr-iov pci device is added successfully Actual result = because of the lack of pci_slot, we can not add the pci device correctly Environment === Kilo & Mitaka To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1606747/+subscriptions
[Yahoo-eng-team] [Bug 1606741] [NEW] Metadata service for instances is unavailable when the l3-agent on the compute host is dvr_snat mode
Public bug reported: In my mitaka environment, there are five nodes: controller, network1, network2, computer1 and computer2. I start l3-agents with dvr_snat mode on all network and compute nodes, and it works well for most neutron services except the metadata proxy service. Then I set enable_metadata_proxy to true. When I run the command "curl http://169.254.169.254" in an instance booted from cirros, it returns "curl: couldn't connect to host" and the instance can't get metadata during its first boot. * Pre-conditions: start l3-agent with dvr_snat mode on all computer and network nodes and set enable_metadata_proxy to true in l3-agent.ini. * Step-by-step reproduction steps: 1. create a network and a subnet under this network; 2. create a router; 3. add the subnet to the router; 4. create an instance with cirros (or another image) on this subnet; 5. open the console for this instance and run the command 'curl http://169.254.169.254' in bash, waiting for the result. * Expected output: the command 'curl http://169.254.169.254' should return the true metadata info * Actual output: the command actually returns "curl: couldn't connect to host" * Version: ** OpenStack version (Specific stable branch, or git hash if from trunk): Mitaka ** Linux distro, kernel. For a distro, it's also worth knowing specific versions of client and server: all hosts are centos7 ** DevStack or other _deployment_ mechanism? * Tags (Affected component): l3-agent dvr metadata-proxy ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1606741 Title: Metadata service for instances is unavailable when the l3-agent on the compute host is in dvr_snat mode Status in neutron: New Bug description: In my mitaka environment, there are five nodes: controller, network1, network2, computer1 and computer2.
I start l3-agents with dvr_snat mode on all network and compute nodes, and it works well for most neutron services except the metadata proxy service. Then I set enable_metadata_proxy to true. When I run the command "curl http://169.254.169.254" in an instance booted from cirros, it returns "curl: couldn't connect to host" and the instance can't get metadata during its first boot. * Pre-conditions: start l3-agent with dvr_snat mode on all computer and network nodes and set enable_metadata_proxy to true in l3-agent.ini. * Step-by-step reproduction steps: 1. create a network and a subnet under this network; 2. create a router; 3. add the subnet to the router; 4. create an instance with cirros (or another image) on this subnet; 5. open the console for this instance and run the command 'curl http://169.254.169.254' in bash, waiting for the result. * Expected output: the command 'curl http://169.254.169.254' should return the true metadata info * Actual output: the command actually returns "curl: couldn't connect to host" * Version: ** OpenStack version (Specific stable branch, or git hash if from trunk): Mitaka ** Linux distro, kernel. For a distro, it's also worth knowing specific versions of client and server: all hosts are centos7 ** DevStack or other _deployment_ mechanism? * Tags (Affected component): l3-agent dvr metadata-proxy To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1606741/+subscriptions
[Yahoo-eng-team] [Bug 1606740] [NEW] The schema of quota's update doesn't include the networks quota
Public bug reported: The network quota is configurable: https://github.com/openstack/nova/blob/master/nova/conf/api.py#L307 But we forgot to enable the network quota in the quota update json-schema. ** Affects: nova Importance: High Assignee: Alex Xu (xuhj) Status: New ** Changed in: nova Assignee: (unassigned) => Alex Xu (xuhj) ** Changed in: nova Importance: Undecided => High -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1606740 Title: The schema of quota's update doesn't include the networks quota Status in OpenStack Compute (nova): New Bug description: The network quota is configurable: https://github.com/openstack/nova/blob/master/nova/conf/api.py#L307 But we forgot to enable the network quota in the quota update json-schema. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1606740/+subscriptions
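As a rough illustration (this is not nova's real schema code, just a hand-rolled stand-in for a json-schema with a fixed property list and additionalProperties set to False), forgetting to enumerate a key makes an otherwise valid quota update fail, and the fix is a one-line addition:

```python
# Hypothetical key set; nova's real schema enumerates many more quotas.
ALLOWED_QUOTA_KEYS = {'instances', 'cores', 'ram'}  # 'networks' forgotten

def validate_quota_update(body):
    """Mimic additionalProperties=False: reject any unknown quota key."""
    unknown = set(body.get('quota_set', {})) - ALLOWED_QUOTA_KEYS
    if unknown:
        raise ValueError('unexpected keys: %s' % sorted(unknown))
    return True

print(validate_quota_update({'quota_set': {'cores': 20}}))  # True

try:
    validate_quota_update({'quota_set': {'networks': 3}})
    networks_accepted = True
except ValueError:
    networks_accepted = False
print(networks_accepted)  # False: a configurable quota is rejected

# The fix is simply to enumerate the key in the schema.
ALLOWED_QUOTA_KEYS.add('networks')
print(validate_quota_update({'quota_set': {'networks': 3}}))  # True
```

The strictness is deliberate (it catches typos in quota names); the bug is only that a legitimately configurable key was left out of the enumeration.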
[Yahoo-eng-team] [Bug 1606426] Re: Upgrading to Mitaka causes significant slow down on user-list
** Also affects: keystone/mitaka Importance: Undecided Status: New ** Summary changed: - Upgrading to Mitaka casues significant slow down on user-list + user list is much slower in mitaka and newton ** Changed in: keystone/mitaka Status: New => In Progress ** Changed in: keystone/mitaka Importance: Undecided => Critical ** Changed in: keystone/mitaka Assignee: (unassigned) => Boris Bobrov (bbobrov) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1606426 Title: user list is much slower in mitaka and newton Status in OpenStack Identity (keystone): In Progress Status in OpenStack Identity (keystone) mitaka series: In Progress Bug description: With Kilo, doing a user-list on V2 or V3 would take approx. 2-4 seconds. In Mitaka it takes 19-22 seconds. This is a significant slow down. We have ~9,000 users. We also changed from running under eventlet to running under apache wsgi. We have ~10,000 projects and that api (project-list) hasn't slowed down, so I think this is something specific to the user-list api. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1606426/+subscriptions
[Yahoo-eng-team] [Bug 1606718] [NEW] logging pci_devices from the resource tracker is kind of terrible
Public bug reported: The _report_hypervisor_resource_view method in the resource tracker on the compute node logs pci devices (if set in the resources dict from the virt driver). I have a compute node with libvirt 1.2.2 with several hundred devices: http://paste.openstack.org/show/542185/ Those get logged TWICE every 60 seconds (by default) because of the update_available_resource periodic task in the compute manager. We should at the very least only log the giant dict of pci devices once in _report_hypervisor_resource_view, or maybe not at all. ** Affects: nova Importance: Medium Assignee: Matt Riedemann (mriedem) Status: In Progress ** Tags: compute pci ** Changed in: nova Status: New => Triaged ** Changed in: nova Importance: Undecided => Medium ** Tags added: compute pci -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1606718 Title: logging pci_devices from the resource tracker is kind of terrible Status in OpenStack Compute (nova): In Progress Bug description: The _report_hypervisor_resource_view method in the resource tracker on the compute node logs pci devices (if set in the resources dict from the virt driver). I have a compute node with libvirt 1.2.2 with several hundred devices: http://paste.openstack.org/show/542185/ Those get logged TWICE every 60 seconds (by default) because of the update_available_resource periodic task in the compute manager. We should at the very least only log the giant dict of pci devices once in _report_hypervisor_resource_view, or maybe not at all. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1606718/+subscriptions
[Yahoo-eng-team] [Bug 1599461] Re: Missing fields in router details page
Reviewed: https://review.openstack.org/341709 Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=88984bcb4f5f844c618f6e547e984410c36b35be Submitter: Jenkins Branch: master commit 88984bcb4f5f844c618f6e547e984410c36b35be Author: Brian Bowen Date: Wed Jul 13 11:23:13 2016 -0400 Add missing fields to Router Details page Change-Id: Idc83cd7c42617cdede9d8cd642753e05503b42c8 Closes-Bug: #1599461 ** Changed in: horizon Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1599461 Title: Missing fields in router details page Status in OpenStack Dashboard (Horizon): Fix Released Bug description: There are a few fields provided by the Neutron API for Routers that are not displayed in the details page, such as "description" and "availability_zones". To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1599461/+subscriptions
[Yahoo-eng-team] [Bug 1606707] [NEW] instance snapshot fails with "AttributeError: size" when using glance v1
Public bug reported: This is on a newton server deployed on 7/25 from master (newton). I set use_glance_v1=True in nova.conf on the compute node. I created a server from this image:

+------------------+--------------------------------------------------------------------------------+
| Property         | Value                                                                          |
+------------------+--------------------------------------------------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6                                               |
| container_format | bare                                                                           |
| created_at       | 2016-07-06T17:25:08Z                                                           |
| disk_format      | qcow2                                                                          |
| id               | aac0314e-9bdf-4cee-a3d8-5f089008ea96                                           |
| locations        | [{"url": "file:///var/lib/glance/images/aac0314e-9bdf-4cee-a3d8-5f089008ea96", |
|                  | "metadata": {}}]                                                               |
| min_disk         | 0                                                                              |
| min_ram          | 0                                                                              |
| name             | cirros                                                                         |
| owner            | None                                                                           |
| protected        | False                                                                          |
| size             | 13287936                                                                       |
| status           | active                                                                         |
| tags             | []                                                                             |
| updated_at       | 2016-07-06T17:25:09Z                                                           |
| virtual_size     | None                                                                           |
| visibility       | public                                                                         |
+------------------+--------------------------------------------------------------------------------+

I tried to snapshot the instance and it failed with this: http://paste.openstack.org/show/542180/

2016-07-26 21:25:34.563 51383 ERROR nova.compute.manager [instance: 85bcc918-4d00-4d21-82fe-65b13035adcd] Traceback (most recent call last):
2016-07-26 21:25:34.563 51383 ERROR nova.compute.manager [instance: 85bcc918-4d00-4d21-82fe-65b13035adcd]   File "/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/nova/compute/manager.py", line 231, in decorated_function
2016-07-26 21:25:34.563 51383 ERROR nova.compute.manager [instance: 85bcc918-4d00-4d21-82fe-65b13035adcd]     *args, **kwargs)
2016-07-26 21:25:34.563 51383 ERROR nova.compute.manager [instance: 85bcc918-4d00-4d21-82fe-65b13035adcd]   File "/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/nova/compute/manager.py", line 3024, in snapshot_instance
2016-07-26 21:25:34.563 51383 ERROR nova.compute.manager [instance: 85bcc918-4d00-4d21-82fe-65b13035adcd]     task_states.IMAGE_SNAPSHOT)
2016-07-26 21:25:34.563 51383 ERROR nova.compute.manager [instance: 85bcc918-4d00-4d21-82fe-65b13035adcd]   File "/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/nova/compute/manager.py", line 3054, in _snapshot_instance
2016-07-26 21:25:34.563 51383 ERROR nova.compute.manager [instance: 85bcc918-4d00-4d21-82fe-65b13035adcd]     update_task_state)
2016-07-26 21:25:34.563 51383 ERROR nova.compute.manager [instance: 85bcc918-4d00-4d21-82fe-65b13035adcd]   File "/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1482, in snapshot
2016-07-26 21:25:34.563 51383 ERROR nova.compute.manager [instance: 85bcc918-4d00-4d21-82fe-65b13035adcd]     snapshot = self._image_api.get(context, image_id)
2016-07-26 21:25:34.563 51383 ERROR nova.compute.manager [instance: 85bcc918-4d00-4d21-82fe-65b13035adcd]   File "/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/nova/image/api.py", line 93, in get
2016-07-26 21:25:34.563 51383 ERROR nova.compute.manager [instance: 85bcc918-4d00-4d21-82fe-65b13035adcd]     show_deleted=show_deleted)
2016-07-26 21:25:34.563 51383 ERROR nova.compute.manager [instance: 85bcc918-4d00-4d21-82fe-65b13035adcd]   File "/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/nova/image/glance.py", line 266, in show
2016-07-26 21:25:34.563 51383 ERROR nova.compute.manager [instance: 85bcc918-4d00-4d21-82fe-65b13035adcd]
[Yahoo-eng-team] [Bug 1580116] Re: Make sure Horizon can run without Neutron, Glance, Nova and Keystone
Reviewed: https://review.openstack.org/342283 Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=018e99d20e769811babf61d87192ec7a76c8f2eb Submitter: Jenkins Branch: master commit 018e99d20e769811babf61d87192ec7a76c8f2eb Author: Steve McLellan Date: Thu Jul 14 11:55:37 2016 -0500 Allow horizon to function without nova Adds conditional block to nova quotas to exclude them if nova is not enabled; adds 'permission' checks to the project overview and access_and_security panels to only enable them if compute is enabled; adds permission checks on compute and image to the admin overview and metadef panels; disables 'modify quota' and 'view usage' project actions; disables 'update defaults' if there are no quotas available. The 'access and security' panel still appears (under Compute) but tabs other than the keystone endpoint and RC download tab are hidden. Closes-Bug: #1580116 Change-Id: I1b2ddee0395ad9f55692111604b31618c4eaf69e ** Changed in: horizon Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1580116 Title: Make sure Horizon can run without Neutron, Glance, Nova and Keystone Status in OpenStack Dashboard (Horizon): Fix Released Bug description: As stated in the installation guide (http://docs.openstack.org/developer/horizon/topics/install.html), the Horizon dashboard requires Neutron, Glance, Nova and Keystone. However, some OpenStack services can be used separately without any of those, such as Mistral. As a Mistral user, I would like to be able to use the Mistral dashboard, without having to install Neutron, Glance, Nova or Keystone (as long as I use Mistral without authentication). It would be silly to have to start a separate UI project just because of that, when the Mistral dashboard Horizon plugin already does what we need.
To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1580116/+subscriptions
[Yahoo-eng-team] [Bug 1579601] Re: Move dns related data field out of ml2 plugin into dns plugin
Reviewed: https://review.openstack.org/313291 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=64f5fc82596ec6b78b76ca5d9cfc1d4b5a0b975d Submitter: Jenkins Branch: master commit 64f5fc82596ec6b78b76ca5d9cfc1d4b5a0b975d Author: Bin Yu Date: Fri May 6 17:20:04 2016 +0800 Refactor DNS integration out of DB core plugin This patch set aims to move all the code related to DNS integration from the DB core plugin to the DNS ML2 extension module. By doing this, this patchset removes the dns related code in db_base_plugin_v2 and the dns extensions module talks with the core plugin only through the methods extension_manager and apply_dict_extend_functions. By properly implementing the generation of the dns_assignment attribute for ports in the DNS ML2 extension module, this patchset also fixes https://bugs.launchpad.net/neutron/+bug/1579977 Change-Id: I63afb1a1bfeeb14eefb54681dc64959144deeb25 Closes-Bug: #1579601 Closes-Bug: #1579977 ** Changed in: neutron Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1579601 Title: Move dns related data field out of ml2 plugin into dns plugin Status in neutron: Fix Released Bug description: Now, we prefer to enable DNS support in Neutron by enabling the extension_drivers in the Neutron plugin configuration. However, we still have dns related data in the ml2 core plugin, like dns_name in the ports data fields. We prefer to remove code sections like "if ('dns-integration' in self.supported_extension_aliases and 'dns_name' in p)" out of db_base_plugin_v2.py and code like if "dns_name" in port: res["dns_name"] = port["dns_name"] out of db_base_plugin_common.py. The goal is to let the dns plugin handle the dns request, with the core plugin calling the dns plugin through mechanisms like extension_drivers and apply_dict_extend_functions. We hope to hide the details of dns information from the perspective of the core plugin.
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1579601/+subscriptions
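The extension mechanism the commit message describes can be sketched generically. The registry below is a toy modeled on, not copied from, neutron's apply_dict_extend_functions pattern; all names are hypothetical:

```python
# Core plugin side: a registry of dict-extend functions. The core
# plugin builds the base resource dict and lets registered extensions
# decorate it, without knowing anything about their fields.
_extend_funcs = []

def register_dict_extend_func(func):
    _extend_funcs.append(func)

def make_port_dict(db_row):
    """Build the base port dict, then let extensions add their fields."""
    port = {'id': db_row['id']}
    for func in _extend_funcs:
        func(port, db_row)
    return port

# DNS extension module side: the only place that knows about dns_name.
def _extend_port_dns(port, db_row):
    if 'dns_name' in db_row:
        port['dns_name'] = db_row['dns_name']

register_dict_extend_func(_extend_port_dns)

print(make_port_dict({'id': 'p1', 'dns_name': 'vm1.example.org'}))
```

This is the inversion the bug asks for: instead of db_base_plugin_v2 carrying `if "dns_name" in port:` blocks, the core loop stays generic and the dns-specific code lives entirely in the extension that registers the hook.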
[Yahoo-eng-team] [Bug 1579977] Re: dns_assignment is lost during port creation after VIF binding
Reviewed: https://review.openstack.org/313291 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=64f5fc82596ec6b78b76ca5d9cfc1d4b5a0b975d Submitter: Jenkins Branch: master commit 64f5fc82596ec6b78b76ca5d9cfc1d4b5a0b975d Author: Bin Yu Date: Fri May 6 17:20:04 2016 +0800 Refactor DNS integration out of DB core plugin This patch set aims to move all the code related to DNS integration from the DB core plugin to the DNS ML2 extension module. By doing this, this patchset removes the dns related code in db_base_plugin_v2 and the dns extensions module talks with the core plugin only through the methods extension_manager and apply_dict_extend_functions. By properly implementing the generation of the dns_assignment attribute for ports in the DNS ML2 extension module, this patchset also fixes https://bugs.launchpad.net/neutron/+bug/1579977 Change-Id: I63afb1a1bfeeb14eefb54681dc64959144deeb25 Closes-Bug: #1579601 Closes-Bug: #1579977 ** Changed in: neutron Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1579977 Title: dns_assignment is lost during port creation after VIF binding Status in neutron: Fix Released Bug description: DESCRIPTION: The dns_assignment attribute is not actually part of the port's DB schema. It is a field that is populated on the fly during port creation (if dns_domain is set in neutron.conf and the port has a dns_name set), in order to send port information to the DHCP agent, for example. This occurs in create_port in db_base_plugin_v2.py. In our ML2 plugin's create_port (create_port in plugins/ml2/plugin.py), however, the DNS assignment is lost when attempting VIF binding. If the VIF binding is committed, a new context is created from the Port DB record, and dns_assignment is not a DB field. As such, any dns_assignment that was previously populated is lost.
Please see: https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#1130 The dns_assignment from the incoming mech_context needs to be copied over to the new bound_context. Otherwise DHCP will receive an empty dns_assignment and VM DNS resolution will not work. PRE-CONDITIONS: 1. I have a small local change to Nova's network/neutronv2/api.py (allocate_for_instance), in which it sends the dns_name as the instance's name in the port_req_body during VM instance creation. This part of the Nova neutron API code triggers a port_create via the neutron client. This enables setting DNS automatically during instance creation. 2. In neutron.conf, dns_domain must be set to some non-default value. Otherwise, DNS resolution is disabled. REPRODUCTION STEPS: I just created a VM like normal (via the GUI or the Nova CLI), without attaching it to an existing port. I verified that the neutron server API for port creation was correctly getting the instance hostname as the dns_name in the port request payload. However, the DHCP agent was receiving an empty dns_assignment. EXPECTED OUTPUT: Creating a VM should set DNS for the port; the DHCP agent and hosts file should have the correct entries. ACTUAL OUTPUT: the DHCP host file entry has the default "host-IP-openstack.local" format, and does not use DNS resolution of "hostname.domain.com" Version: Liberty, Centos 7.1 To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1579977/+subscriptions
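The failure mode described above can be reduced to a few lines; the field names and helpers below are hypothetical stand-ins for the real port dicts, not neutron code:

```python
# Only real DB columns survive a rebuild from the database row;
# dns_assignment is computed on the fly and is not among them.
DB_COLUMNS = {'id', 'mac_address', 'dns_name'}

def make_port_dict(db_row):
    """Rebuild a port dict strictly from DB columns."""
    return {k: v for k, v in db_row.items() if k in DB_COLUMNS}

db_row = {'id': 'p1', 'mac_address': 'fa:16:3e:00:00:01', 'dns_name': 'vm1'}

# The incoming mech_context port carries the computed field.
port = dict(make_port_dict(db_row),
            dns_assignment=[{'hostname': 'vm1', 'ip_address': '10.0.0.5'}])

# Buggy path: committing the binding rebuilds the dict from the DB row,
# silently dropping the computed field.
bound = make_port_dict(db_row)
print('dns_assignment' in bound)  # False: DHCP would see nothing

# Fixed path: explicitly carry the computed field from the old context.
bound['dns_assignment'] = port['dns_assignment']
print('dns_assignment' in bound)  # True
```

This is the generic hazard with on-the-fly fields: any code path that round-trips a resource through its DB representation must re-derive or copy them, or downstream consumers (here, the DHCP agent) see them empty.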
[Yahoo-eng-team] [Bug 1606659] [NEW] add-tag Normal response code is 200, not 201
Public bug reported: This is found at the stable/mitaka release. According to the networking API v2.0 extensions: http://developer.openstack.org/api-ref/networking/v2-ext/index.html?expanded=add-a-tag-detail,confirm-a-tag-detail,remove-all-tags-detail,replace-all-tags-detail,remove-a-tag-detail#tag-extension-tags API DOC: PUT /v2.0/{resource_type}/{resource_id}/tags/{tag} (Add a tag): Adds a tag on the resource. Error response codes: 201, 404, 500, 401, 503. Although the document does not specify that the normal response code is 200, 200 is generally used to represent a successful operation. The current response code is 201 when a tag is added. stack@falcon-devstack ~ $ neutron --debug tag-add --resource-type network --resource tempest-test-network--402537940 --tag xBlue DEBUG: stevedore.extension found extension EntryPoint.parse('v2token = keystoneauth1.loading._plugins.identity.v2:Token') DEBUG: stevedore.extension found extension EntryPoint.parse('admin_token = keystoneauth1.loading._plugins.admin_token:AdminToken') DEBUG: stevedore.extension found extension EntryPoint.parse('v3oidcauthcode = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAuthorizationCode') DEBUG: stevedore.extension found extension EntryPoint.parse('v2password = keystoneauth1.loading._plugins.identity.v2:Password') DEBUG: stevedore.extension found extension EntryPoint.parse('v3password = keystoneauth1.loading._plugins.identity.v3:Password') DEBUG: stevedore.extension found extension EntryPoint.parse('v3oidcpassword = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectPassword') DEBUG: stevedore.extension found extension EntryPoint.parse('token = keystoneauth1.loading._plugins.identity.generic:Token') DEBUG: stevedore.extension found extension EntryPoint.parse('v3token = keystoneauth1.loading._plugins.identity.v3:Token') DEBUG: stevedore.extension found extension EntryPoint.parse('password = keystoneauth1.loading._plugins.identity.generic:Password') DEBUG: neutronclient.neutron.v2_0.tag.AddTag
run(Namespace(request_format='json', resource=u'tempest-test-network--402537940', resource_type=u'network', tag=u'xBlue')) DEBUG: keystoneauth.session REQ: curl -g -i -X GET http://10.34.57.68:5000/v2.0 -H "Accept: application/json" -H "User-Agent: keystoneauth1/2.4.0 python-requests/2.9.1 CPython/2.7.6" DEBUG: keystoneauth.session RESP: [200] Content-Length: 337 Vary: X-Auth-Token Keep-Alive: timeout=5, max=100 Server: Apache/2.4.10 (Ubuntu) Connection: Keep-Alive Date: Tue, 26 Jul 2016 15:46:42 GMT Content-Type: application/json x-openstack-request-id: req-6dd80f57-ec78-4bdb-8d2b-a6f15e48211f RESP BODY: {"version": {"status": "stable", "updated": "2014-04-17T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "http://10.34.57.68:5000/v2.0/", "rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}} DEBUG: keystoneauth.identity.v2 Making authentication request to http://10.34.57.68:5000/v2.0/tokens DEBUG: stevedore.extension found extension EntryPoint.parse('l2_gateway_connection = networking_l2gw.l2gatewayclient.l2gw_client_ext._l2_gateway_connection') DEBUG: stevedore.extension found extension EntryPoint.parse('l2_gateway = networking_l2gw.l2gatewayclient.l2gw_client_ext._l2_gateway') DEBUG: keystoneauth.session REQ: curl -g -i -X GET http://10.34.57.68:9696/v2.0/networks.json?fields=id&name=tempest-test-network--402537940 -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}a02258f9d970cdb9c1c2e483389f103d34134991" DEBUG: keystoneauth.session RESP: [200] Date: Tue, 26 Jul 2016 15:46:42 GMT Connection: keep-alive Content-Type: application/json; charset=UTF-8 Content-Length: 62 X-Openstack-Request-Id: req-28ce177f-042d-488e-b595-370d263873fe RESP BODY: {"networks": [{"id": "2907e8e9-b825-4f53-bd1f-1a974edbc345"}]} DEBUG: keystoneauth.session REQ: curl -g -i -X PUT
http://10.34.57.68:9696/v2.0/networks/2907e8e9-b825-4f53-bd1f-1a974edbc345/tags/xBlue.json -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}a02258f9d970cdb9c1c2e483389f103d34134991" DEBUG: keystoneauth.session RESP: [201] Date: Tue, 26 Jul 2016 15:46:42 GMT Connection: keep-alive Content-Type: application/json; charset=UTF-8 Content-Length: 4 X-Openstack-Request-Id: req-41f5dbc6-e04c-44b3-8aa8-b544a9a781e6 RESP BODY: null stack@falcon-devstack ~ $ neutron net-show 2907e8e9-b825-4f53-bd1f-1a974edbc345

+-------------------------+---------+
| Field                   | Value   |
+-------------------------+---------+
| admin_state_up          | True    |
| availability_zone_hints |         |
| availability_zones      | default |
| created_at
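The observed behaviour can be reproduced without a neutron deployment. The sketch below (hypothetical path and port; not neutron code) runs a toy in-process HTTP server that answers PUT with 201, mirroring the `RESP: [201]` in the debug trace, and reads the status code from the client side:

```python
import http.server
import threading
import urllib.request

class TagHandler(http.server.BaseHTTPRequestHandler):
    def do_PUT(self):
        # Mimic the reported server behaviour: 201 with body "null".
        body = b'null'
        self.send_response(201)
        self.send_header('Content-Type', 'application/json')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence per-request logging

srv = http.server.HTTPServer(('127.0.0.1', 0), TagHandler)
threading.Thread(target=srv.serve_forever, daemon=True).start()

url = 'http://127.0.0.1:%d/v2.0/networks/NET_ID/tags/xBlue' % srv.server_port
req = urllib.request.Request(url, method='PUT')
resp = urllib.request.urlopen(req)
status = resp.status
srv.shutdown()

print(status)  # 201
```

A client written against the documented behaviour should probably treat any 2xx from this call as success, since the docs and the server disagree on 200 versus 201.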
[Yahoo-eng-team] [Bug 1606647] [NEW] Invalid UUIDs during tests
Public bug reported: In many places, a wrong fake UUID is used. We should get rid of them, except in nova-manage, which uses the markers and UUIDs the wrong way, so we cannot remove that one (this time). During testing (e.g. tox -epy27), a lot of warnings are generated in the log. Some examples, which we should remove: {6} nova.tests.unit.compute.test_compute_mgr.ComputeManagerUnitTestCase.test_check_device_tagging_tagged_net_req_no_virt_support [0.157444s] ... ok Captured stderr: /home/user/nova/nova/.tox/py27/local/lib/python2.7/site-packages/oslo_versionedobjects/fields.py:348: FutureWarning: bar is an invalid UUID. Using UUIDFields with invalid UUIDs is no longer supported, and will be removed in a future release. Please update your code to input valid UUIDs or accept ValueErrors for invalid UUIDs. See http://docs.openstack.org/developer/oslo.versionedobjects/api/fields.html#oslo_versionedobjects.fields.UUIDField for further details "for further details" % value, FutureWarning) {0} nova.tests.unit.network.test_neutronv2.TestNeutronv2.test_allocate_for_instance_not_enough_macs [0.135878s] ... ok Captured stderr: /home/user/nova/nova/.tox/py27/local/lib/python2.7/site-packages/oslo_versionedobjects/fields.py:348: FutureWarning: fake is an invalid UUID. Using UUIDFields with invalid UUIDs is no longer supported, and will be removed in a future release. Please update your code to input valid UUIDs or accept ValueErrors for invalid UUIDs. See http://docs.openstack.org/developer/oslo.versionedobjects/api/fields.html#oslo_versionedobjects.fields.UUIDField for further details "for further details" % value, FutureWarning) {5} nova.tests.unit.network.test_neutronv2.TestNeutronv2.test_allocate_for_instance_with_externalnet_admin_ctx [0.139779s] ... ok Captured stderr: /home/user/nova/nova/.tox/py27/local/lib/python2.7/site-packages/oslo_versionedobjects/fields.py:348: FutureWarning: fake is an invalid UUID.
Using UUIDFields with invalid UUIDs is no longer supported, and will be removed in a future release. Please update your code to input valid UUIDs or accept ValueErrors for invalid UUIDs. See http://docs.openstack.org/developer/oslo.versionedobjects/api/fields.html#oslo_versionedobjects.fields.UUIDField for further details "for further details" % value, FutureWarning) But there are some warnings which we cannot remove easily: these come from the markers in nova-manage. For example: nova.tests.unit.test_nova_manage.CellV2CommandsTestCase.test_map_instances_marker_deleted [0.652649s] ... ok Captured stderr: /home/user/nova/nova/.tox/py27/local/lib/python2.7/site-packages/oslo_versionedobjects/fields.py:348: FutureWarning: 73e1b0e8 cb2f 460b 926d 547f1f24bfef is an invalid UUID. Using UUIDFields with invalid UUIDs is no longer supported, and will be removed in a future release. Please update your code to input valid UUIDs or accept ValueErrors for invalid UUIDs. See http://docs.openstack.org/developer/oslo.versionedobjects/api/fields.html#oslo_versionedobjects.fields.UUIDField for further details "for further details" % value, FutureWarning) {1} nova.tests.unit.test_nova_manage.CellV2CommandsTestCase.test_map_instances_max_count [0.515909s] ... ok Captured stderr: /home/user/nova/nova/.tox/py27/local/lib/python2.7/site-packages/oslo_versionedobjects/fields.py:348: FutureWarning: ca009c24 21b8 471c 8ca7 1cc0981d97e5 is an invalid UUID. Using UUIDFields with invalid UUIDs is no longer supported, and will be removed in a future release. Please update your code to input valid UUIDs or accept ValueErrors for invalid UUIDs.
See http://docs.openstack.org/developer/oslo.versionedobjects/api/fields.html#oslo_versionedobjects.fields.UUIDField for further details "for further details" % value, FutureWarning) ** Affects: nova Importance: Undecided Assignee: Gábor Antal (gabor.antal) Status: New ** Changed in: nova Assignee: (unassigned) => Gábor Antal (gabor.antal) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1606647 Title: Invalid UUIDs during tests Status in OpenStack Compute (nova): New Bug description: In many places, a wrong fake UUID is used. We should get rid of them, except in nova-manage, which uses the markers and UUIDs the wrong way, so we cannot remove that one (this time). During testing (e.g. tox -epy27), a lot of warnings are generated in the log. Some examples, which we should remove: {6} nova.tests.unit.compute.test_compute_mgr.ComputeManagerUnitTestCase.test_check_device_tagging_tagged_net_req_no_virt_support [0.157444s] ... ok Captured stderr:
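For illustration, the FutureWarning above fires because placeholder strings such as 'fake' or 'bar' do not parse as UUIDs. A minimal stdlib sketch of the distinction (the helper name here is ours, not oslo.versionedobjects'):

```python
import uuid

def is_valid_uuid(value):
    """Return True if value parses as a canonical UUID string."""
    try:
        return str(uuid.UUID(value)) == value.lower()
    except (AttributeError, TypeError, ValueError):
        return False

# Placeholder strings like 'fake' or 'bar' trigger the warning in tests;
# generated UUIDs do not.
print(is_valid_uuid('fake'))             # False
print(is_valid_uuid(str(uuid.uuid4())))  # True
```

Using uuid.uuid4() (or a shared test sentinel module) instead of literal strings keeps UUIDField values valid.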
[Yahoo-eng-team] [Bug 1604943] Re: non-ASCII chars ( Chinese ) not allowed in Keypair name
This is as intended, please see [1]. You should also get the error message "Keypair data is invalid: Keypair name contains unsafe characters" References: [1] https://github.com/openstack/nova/blob/aa81d6c301d6549af6fe8e8a9fb55facf898f809/nova/compute/api.py#L3907-L3911 ** Changed in: nova Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1604943 Title: non-ASCII chars ( Chinese ) not allowed in Keypair name Status in OpenStack Compute (nova): Invalid Bug description: On a create (POST) request on /os-keypairs API, using a non-ASCII character such as Chinese or Japanese characters for the name parameter produces a 400 error return. "POST /v2.1/f60dbb1f1d2e4f8cb2434f0ed1016d97/os-keypairs HTTP/1.1" status: 400 len: 401 time: 0.0861628 Openstack version is mitaka. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1604943/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
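The rejection can be sketched as a simple safe-character check. This is only an illustration in the spirit of the nova check linked above; the exact allowed character set here (ASCII letters, digits, spaces, '_' and '-') is an assumption, not a quote of nova/compute/api.py:

```python
import re

# Hypothetical allow-list of "safe" keypair-name characters (assumed set).
SAFE_NAME = re.compile(r'^[A-Za-z0-9 _-]+$')

def is_safe_keypair_name(name):
    """Return True if the name contains only assumed-safe ASCII characters."""
    return bool(SAFE_NAME.match(name))

print(is_safe_keypair_name('my-key_1'))  # True
print(is_safe_keypair_name('密钥'))       # False -> would yield a 400
```

Any non-ASCII name (Chinese, Japanese, etc.) fails such a check, which is why the API returns 400 by design.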
[Yahoo-eng-team] [Bug 1603979] Re: gate: context tests failed because missing parameter "is_admin_project" (oslo.context 2.6.0)
Looks like https://review.openstack.org/#/c/345633/ solved this issue. I see no hits in logstash in the last 24 hours: Logstash query: http://logstash.openstack.org/#/dashboard/file/logstash.json?from=7d=build_name:gate-nova-python27-db%20AND%20message:%5C%22testtools.matchers._impl.MismatchError:%200%20!%3D%201:%20%5B%5C%22Arguments%20dropped%20when%20creating%20context:%20%7B'is_admin_project':%20True%7D%5C%22%5D%5C%22 ** Changed in: nova Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1603979 Title: gate: context tests failed because missing parameter "is_admin_project" (oslo.context 2.6.0) Status in OpenStack Compute (nova): Fix Released Bug description: Description === The following 3 tests failed: 1. nova.tests.unit.test_context.ContextTestCase.test_convert_from_dict_then_to_dict Captured traceback: ~~~ Traceback (most recent call last): File "nova/tests/unit/test_context.py", line 230, in test_convert_from_dict_then_to_dict self.assertEqual(values, values2) File "/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", line 411, in assertEqual self.assertThat(observed, matcher, message) File "/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", line 498, in assertThat raise mismatch_error testtools.matchers._impl.MismatchError: !=: reference = { .. 'is_admin': True, ..} actual= { .. 'is_admin': True, 'is_admin_project': True, ..} 2. 
nova.tests.unit.test_context.ContextTestCase.test_convert_from_rc_to_dict Captured traceback: ~~~ Traceback (most recent call last): File "nova/tests/unit/test_context.py", line 203, in test_convert_from_rc_to_dict self.assertEqual(expected_values, values2) File "/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", line 411, in assertEqual self.assertThat(observed, matcher, message) File "/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", line 498, in assertThat raise mismatch_error testtools.matchers._impl.MismatchError: !=: reference = { .. 'is_admin': True, ..} actual= { .. 'is_admin': True, 'is_admin_project': True, ..} 3. nova.tests.unit.test_context.ContextTestCase.test_to_dict_from_dict_no_log Captured traceback: ~~~ Traceback (most recent call last): File "nova/tests/unit/test_context.py", line 144, in test_to_dict_from_dict_no_log self.assertEqual(0, len(warns), warns) File "/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", line 411, in assertEqual self.assertThat(observed, matcher, message) File "/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", line 498, in assertThat raise mismatch_error testtools.matchers._impl.MismatchError: 0 != 1: ["Arguments dropped when creating context: {'is_admin_project': True}"] Steps to reproduce == Just run the context tests: tox -e py27 test_context This is because we missed passing the "is_admin_project" parameter to __init__() of oslo.context's RequestContext when initializing a nova RequestContext object. In nova/context.py @enginefacade.transaction_context_provider class RequestContext(context.RequestContext): """Security context and request information. Represents the user taking a given action within the system.
""" def __init__(self, user_id=None, project_id=None, is_admin=None, read_deleted="no", roles=None, remote_address=None, timestamp=None, request_id=None, auth_token=None, overwrite=True, quota_class=None, user_name=None, project_name=None, service_catalog=None, instance_lock_checked=False, user_auth_plugin=None, **kwargs): .. super(RequestContext, self).__init__( .. is_admin=is_admin, ..) But in oslo_context/context.py, class RequestContext(object): .. def __init__(.. is_admin=False, .. is_admin_project=True): To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1603979/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help :
[Yahoo-eng-team] [Bug 1543937] Re: db purge records fails for very large number
Reviewed: https://review.openstack.org/322757 Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=25f5eed56faa81969a0e0cd6a732f789deaddb0c Submitter: Jenkins Branch: master commit 25f5eed56faa81969a0e0cd6a732f789deaddb0c Author: bhagyashris Date: Wed May 18 18:04:30 2016 +0530 Add check to limit maximum value of age_in_days If you pass age_in_days value greater than unix epoch then it raises OverflowError. Added check to ensure age_in_days is within unix epoch time to fix this problem. Removed the redundant check for age_in_days value from cinder.db.sqlalchemy.api module which is already checked at cinder.cmd.manage module. Closes-Bug: #1543937 Change-Id: Ib418627fb8527a1275c2656d1451f1e1dfeb72ef ** Changed in: cinder Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1543937 Title: db purge records fails for very large number Status in Cinder: Fix Released Status in Glance: In Progress Status in OpenStack Compute (nova): Fix Released Bug description: The command: $ nova-manage db archive_deleted_rows --verbose fails for very large NUMBER value on nova master Nova version: openstack@openstack-136:/opt/stack/nova$ git log -1 commit 29641bd9778b51ac5794dfed9d4b881c5d47dc50 Merge: 21e79d5 9fbe683 Author: Jenkins Date: Wed Feb 10 06:03:00 2016 + Merge "Top 100 slow tests: api.openstack.compute.test_api" Example: openstack@openstack-136:~$ nova-manage db archive_deleted_rows 214748354764774747774747477536654545649 --verbose 2016-02-09 22:17:10.713 ERROR oslo_db.sqlalchemy.exc_filters [-] DBAPIError exception wrapped from (pymysql.err.ProgrammingError) (1064, u"You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '214748354764774747774747477536654545649' at line 4") [SQL: u'INSERT INTO shadow_instance_actions_events
(created_at, updated_at, deleted_at, deleted, id, event, action_id, start_time, finish_time, result, traceback, host, details) SELECT instance_actions_events.created_at, instance_actions_events.updated_at, instance_actions_events.deleted_at, instance_actions_events.deleted, instance_actions_events.id, instance_actions_events.event, instance_actions_events.action_id, instance_actions_events.start_time, instance_actions_events.finish_time, instance_actions_events.result, instance_actions_events.traceback, instance_actions_events.host, instance_actions_events.details \nFROM instance_actions_events \nWHERE inst ance_actions_events.deleted != %s ORDER BY instance_actions_events.id \n LIMIT %s'] [parameters: (0, 214748354764774747774747477536654545649L)] 2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters Traceback (most recent call last): 2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context 2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters context) 2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 450, in do_execute 2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters cursor.execute(statement, parameters) 2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters File "/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 146, in execute 2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters result = self._query(query) 2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters File "/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 296, in _query 2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters conn.query(q) 2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 781, in query 2016-02-09 22:17:10.713 TRACE 
oslo_db.sqlalchemy.exc_filters self._affected_rows = self._read_query_result(unbuffered=unbuffered) 2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 942, in _read_query_result 2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters result.read() 2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 1138, in read 2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters first_packet = self.connection._read_packet() 2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters File
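The committed fix bounds the user-supplied value before it reaches datetime arithmetic or SQL. A hedged sketch of that kind of guard (the function name and bounds are illustrative, not a quote of the cinder patch):

```python
import datetime

UNIX_EPOCH = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)

def validate_age_in_days(age_in_days):
    """Reject values outside [0, days since the Unix epoch].

    Unbounded integers from the command line would otherwise overflow
    datetime arithmetic (or be rejected by the database, as in the
    traceback above).
    """
    max_age = (datetime.datetime.now(datetime.timezone.utc) - UNIX_EPOCH).days
    if not 0 <= age_in_days <= max_age:
        raise ValueError('age_in_days must be between 0 and %d' % max_age)
    return age_in_days
```

Validating at the CLI entry point (as the commit message describes) keeps the bad value out of both the ORM and the SQL layer.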
[Yahoo-eng-team] [Bug 1606253] Re: replace image-update response example in api-ref
Reviewed: https://review.openstack.org/346872 Committed: https://git.openstack.org/cgit/openstack/glance/commit/?id=8161bde357b1168f3b1de82bb6c180acd38958c2 Submitter: Jenkins Branch: master commit 8161bde357b1168f3b1de82bb6c180acd38958c2 Author: bria4010 Date: Mon Jul 25 10:52:09 2016 -0400 api-ref: Replace image-update response example The image-update sample response in the current api-ref is a bit weird (too many null values). This patch replaces it with a more typical response. Change-Id: I1f837d8fa0e42b9f9f1c421555339924d72cb9fe Closes-bug: #1606253 ** Changed in: glance Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1606253 Title: replace image-update response example in api-ref Status in Glance: Fix Released Bug description: Not only is the owner null, but the image is active and has a non-null file with null disk_format and container_format. Replace with a more typical response. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1606253/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1577982] Re: ConfigDrive: cloud-init fails to configure network from network_data.json
Hi, This bug was fixed in cloud-init at revision 1225 and was sru'd to xenial under bug 1595302, but unfortunately it was not marked in that bug. So anything newer than 0.7.7~bzr1245-0ubuntu1~16.04.1 in xenial should have the fix. ** Changed in: cloud-init (Ubuntu Xenial) Status: Confirmed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1577982 Title: ConfigDrive: cloud-init fails to configure network from network_data.json Status in cloud-init: Fix Committed Status in cloud-init package in Ubuntu: Fix Released Status in cloud-init source package in Xenial: Fix Released Bug description: When running Ubuntu 16.04 on OpenStack, cloud-init fails to properly configure the network from network_data.json found in ConfigDrive. When the instance boots, the network is configured fine until the next reboot, where it falls back to dhcp. The /etc/network/interfaces.d/50-cloud-init.cfg file has the following content when the instance is initially booted, which could explain why dhcp is used on second boot:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

When debugging, if this line in stages.py [1] is commented out, we can see that cloud-init initially copies the /etc/network/interfaces file found in the configdrive (the network template injected by Nova) and isn't using the network config found in network_data.json. But later it falls back to "dhcp" and rewrites the network config yet again. I also found that within self._find_networking_config(), it looks like no datasource is found at this point. This could be because cloud-init is still in "local" dsmode and then refuses to use the network config found in the ConfigDrive (triggering the "dhcp" fallback logic). Manually forcing "net" dsmode makes cloud-init configure /etc/network/interfaces.d/50-cloud-init.cfg properly with the network config found in the ConfigDrive.
However, no gateway is configured, so the instance doesn't respond to ping or SSH. At that point, I'm not sure what's going on and how I can debug further. Notes:

* The image used for testing uses "net.ifnames=0". Removing this config makes things much worse (no ping at all on first boot).
* Logs, configs and configdrive can be found attached to this bug report.

[1] http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/stages.py#L604 To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1577982/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1606443] Re: lacking of scenario test for routed network
We are not going to need this bug. We are going to track this work under the routed networks blueprint. The commit message of your patchset should include: Partially-Implements: blueprint routed-networks ** Changed in: neutron Status: In Progress => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1606443 Title: lacking of scenario test for routed network Status in neutron: Invalid Bug description: Add scenario test cases for routed networks. These tests should cover the interaction between Nova and Neutron when using routed networks:

* Basic happy-path deferred IP port (pre-existing port given to Nova): put a subnet on a segment, make sure that Nova can boot with a deferred IP port, and check connectivity to the instance.
* Basic happy-path deferred IP port (Nova creates the port): check connectivity to the instance.
* Failed IP allocation: adjust allocation_ranges so that IP allocation fails to get an IP address. The VM should go to error state and the ports should be cleaned up. Try this with a VM instance with two ports: if one is deferred and fails IP allocation, does the other get cleaned up?

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1606443/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1606250] Re: refresh images schemas for api-ref
Reviewed: https://review.openstack.org/346858 Committed: https://git.openstack.org/cgit/openstack/glance/commit/?id=57458276a9dc5bf7c074ee776385425d1e799889 Submitter: Jenkins Branch: master commit 57458276a9dc5bf7c074ee776385425d1e799889 Author: bria4010 Date: Mon Jul 25 10:24:55 2016 -0400 api-ref: Refresh images schemas The schemas in the current api-ref are outdated. This patch adds the schemas contained in the 13.0.0.0b2 release. Change-Id: I45615e049339b3df8d1c6cda74d7408a177aba4e Closes-bug: #1606250 ** Changed in: glance Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1606250 Title: refresh images schemas for api-ref Status in Glance: Fix Released Bug description: The api-ref got merged with at least one outdated image-related schema. Refresh them all. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1606250/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1590104] Re: network config from datasource overrides network config from system
Hi, This bug was fixed in cloud-init at revision 1228 and was sru'd to xenial under bug 1595302, but unfortunately it was not marked in that bug. So anything newer than 0.7.7~bzr1245-0ubuntu1~16.04.1 in xenial should have the fix. ** Changed in: cloud-init (Ubuntu Xenial) Status: Confirmed => Fix Released ** Changed in: cloud-init Status: Confirmed => Fix Committed -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1590104 Title: network config from datasource overrides network config from system Status in cloud-init: Fix Committed Status in cloud-init package in Ubuntu: Fix Released Status in cloud-init source package in Xenial: Fix Released Status in cloud-init source package in Yakkety: Fix Released Bug description: Network configuration in system config should override that provided by a datasource. The order of precedence should be (lowest to highest):

* datasource
* system config
* kernel command line

When juju creates lxc containers they want to be in control of networking and do not want cloud-init to configure networking either from the datasource (lxc's template provided nocloud) or from fallback. They are specifying that configuration directly in /etc/network/interfaces.
ProblemType: Bug DistroRelease: Ubuntu 16.04 Package: cloud-init 0.7.7~bzr1212-0ubuntu1 ProcVersionSignature: Ubuntu 4.4.0-23.41-generic 4.4.10 Uname: Linux 4.4.0-23-generic x86_64 ApportVersion: 2.20.1-0ubuntu2.1 Architecture: amd64 Date: Tue Jun 7 18:16:09 2016 PackageArchitecture: all ProcEnviron: TERM=xterm-256color PATH=(custom, no user) SourcePackage: cloud-init UpgradeStatus: No upgrade log present (probably fresh install) To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1590104/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
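The precedence described in the bug can be sketched as a "highest-precedence source that provides a config wins" lookup. This is only an illustration of the ordering; cloud-init's real merge logic in stages.py is more involved, and the source names here are assumptions:

```python
# Sources in ascending precedence: a later source overrides an earlier one.
SOURCES_LOW_TO_HIGH = ('datasource', 'system_config', 'cmdline')

def find_network_config(configs):
    """Pick the highest-precedence source that supplied a network config.

    configs maps source name -> config dict (or None if that source
    provided nothing). Returns (source_name, config) or (None, None).
    """
    for source in reversed(SOURCES_LOW_TO_HIGH):
        cfg = configs.get(source)
        if cfg:
            return source, cfg
    return None, None
```

Under this ordering, a config written to system config (as juju does via /etc/network/interfaces) beats the datasource's config, and a kernel command-line config beats both.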
[Yahoo-eng-team] [Bug 1606394] Re: Support for LaunchPad git repos
** Also affects: cloud-init Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1606394 Title: Support for LaunchPad git repos Status in cloud-init: New Status in jenkins-launchpad-plugin: New Bug description: LaunchPad supports the use of git as a version control system. The plugin (as far as I can tell) does not allow specifying a git based repo as it was built around the use of bzr. In addition to using git over bzr, some of the changes between the two systems' URLs are outlined below: Merge Request: bzr: https://code.launchpad.net/~harlowja/cloud-init/cloud-init-tag-distros/+merge/300840 git: https://code.launchpad.net/~harlowja/cloud-init/+git/cloud-init/+merge/301108 Proposed Branch: bzr: https://code.launchpad.net/~harlowja/cloud-init/cloud-init-tag-distros git: https://code.launchpad.net/~harlowja/cloud-init/+git/cloud-init/+ref/distro-tags Merge Branch: bzr: https://code.launchpad.net/~cloud-init-dev/cloud-init/trunk git: https://code.launchpad.net/~cloud-init-dev/cloud-init/+git/cloud-init/+ref/master I am currently trying to determine what the best route would be to add git support. To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1606394/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1606615] [NEW] Unable to create instance
Public bug reported: I have a controller and one compute node. I followed the instructions on http://docs.openstack.org/mitaka/install-guide-ubuntu/common/conventions.html to install OpenStack Mitaka on Ubuntu 14.04 LTS servers virtualized in VirtualBox. The compute node is always shown as dead when I execute 'neutron agent-list' on the controller node. When I try to create a .iso instance on Horizon, the error message "Unable to create the server." appears on the dashboard. A check of nova-api.log shows "HTTP exception thrown: Unexpected API Error". Below are the logs from nova-api.log from the moment I click 'Launch Instance' on Horizon until the error. 2016-07-18 17:25:54.491 24247 INFO nova.osapi_compute.wsgi.server [req-62b11e12-b49d-4700-9611-5b81d2edca53 1e6d6364acca4e199b4d8e3655433eec 4239deeccc6a4266940737027a36da20 - - -] 10.0.0.11 "GET /v2.1/4239deeccc6a4266940737027a36da20/limits HTTP/1.1" status: 200 len: 779 time: 0.3234909 2016-07-18 17:25:55.136 24247 INFO nova.osapi_compute.wsgi.server [req-962b85f2-b664-4628-810e-4bdeeb3297bc 1e6d6364acca4e199b4d8e3655433eec 4239deeccc6a4266940737027a36da20 - - -] 10.0.0.11 "GET /v2.1/4239deeccc6a4266940737027a36da20/flavors/detail HTTP/1.1" status: 200 len: 2358 time: 0.9781041 2016-07-18 17:25:55.164 24246 INFO nova.osapi_compute.wsgi.server [req-06b096a3-ec49-40fa-bfb5-53b93dc6cedc 1e6d6364acca4e199b4d8e3655433eec 4239deeccc6a4266940737027a36da20 - - -] 10.0.0.11 "GET /v2.1/4239deeccc6a4266940737027a36da20/os-keypairs HTTP/1.1" status: 200 len: 283 time: 0.9958830 2016-07-18 17:25:55.167 24247 INFO nova.osapi_compute.wsgi.server [req-0572ba1b-6a3e-44fd-a6fa-040b3e8826ae 1e6d6364acca4e199b4d8e3655433eec 4239deeccc6a4266940737027a36da20 - - -] 10.0.0.11 "GET /v2.1/4239deeccc6a4266940737027a36da20/flavors/1/os-extra_specs HTTP/1.1" status: 200 len: 286 time: 0.0242019 2016-07-18 17:25:55.198 24247 INFO nova.osapi_compute.wsgi.server [req-34e80098-c40a-4425-9d51-4e8c59fb2ff1
1e6d6364acca4e199b4d8e3655433eec 4239deeccc6a4266940737027a36da20 - - -] 10.0.0.11 "GET /v2.1/4239deeccc6a4266940737027a36da20/flavors/2/os-extra_specs HTTP/1.1" status: 200 len: 286 time: 0.0227361 2016-07-18 17:25:55.220 24246 INFO nova.osapi_compute.wsgi.server [req-c162715d-5479-4d1b-bcfe-c0f02f674c4e 1e6d6364acca4e199b4d8e3655433eec 4239deeccc6a4266940737027a36da20 - - -] 10.0.0.11 "GET /v2.1/4239deeccc6a4266940737027a36da20/os-availability-zone HTTP/1.1" status: 200 len: 364 time: 1.0783610 2016-07-18 17:25:55.233 24247 INFO nova.osapi_compute.wsgi.server [req-cd4db0f6-d428-450e-ba0e-5edad8ae117f 1e6d6364acca4e199b4d8e3655433eec 4239deeccc6a4266940737027a36da20 - - -] 10.0.0.11 "GET /v2.1/4239deeccc6a4266940737027a36da20/flavors/3/os-extra_specs HTTP/1.1" status: 200 len: 286 time: 0.0279779 2016-07-18 17:25:55.266 24247 INFO nova.osapi_compute.wsgi.server [req-c6000a74-2e7d-4eae-a967-2006b6b5c9d4 1e6d6364acca4e199b4d8e3655433eec 4239deeccc6a4266940737027a36da20 - - -] 10.0.0.11 "GET /v2.1/4239deeccc6a4266940737027a36da20/flavors/4/os-extra_specs HTTP/1.1" status: 200 len: 286 time: 0.0262952 2016-07-18 17:25:55.292 24246 INFO nova.osapi_compute.wsgi.server [req-651027a2-cd8d-4f08-a0f7-3e70dcf30da8 1e6d6364acca4e199b4d8e3655433eec 4239deeccc6a4266940737027a36da20 - - -] 10.0.0.11 "GET /v2.1/4239deeccc6a4266940737027a36da20/flavors/5/os-extra_specs HTTP/1.1" status: 200 len: 286 time: 0.0184162 2016-07-18 17:26:09.863 24246 ERROR nova.api.openstack.extensions [req-ae970713-3f7f-4acd-bbf6-f8ca5cae4606 1e6d6364acca4e199b4d8e3655433eec 4239deeccc6a4266940737027a36da20 - - -] Unexpected exception in API method 2016-07-18 17:26:09.863 24246 ERROR nova.api.openstack.extensions Traceback (most recent call last): 2016-07-18 17:26:09.863 24246 ERROR nova.api.openstack.extensions File "/usr/lib/python2.7/dist-packages/nova/api/openstack/extensions.py", line 478, in wrapped 2016-07-18 17:26:09.863 24246 ERROR nova.api.openstack.extensions return f(*args, **kwargs) 
2016-07-18 17:26:09.863 24246 ERROR nova.api.openstack.extensions File "/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in wrapper 2016-07-18 17:26:09.863 24246 ERROR nova.api.openstack.extensions return func(*args, **kwargs) 2016-07-18 17:26:09.863 24246 ERROR nova.api.openstack.extensions File "/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in wrapper 2016-07-18 17:26:09.863 24246 ERROR nova.api.openstack.extensions return func(*args, **kwargs) 2016-07-18 17:26:09.863 24246 ERROR nova.api.openstack.extensions File "/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in wrapper 2016-07-18 17:26:09.863 24246 ERROR nova.api.openstack.extensions return func(*args, **kwargs) 2016-07-18 17:26:09.863 24246 ERROR nova.api.openstack.extensions File "/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/servers.py", line 629, in create 2016-07-18
[Yahoo-eng-team] [Bug 1606608] [NEW] Invalid links in Common Image Properties doc
Public bug reported: The architecture and os_distro property descriptions in http://docs.openstack.org/developer/glance/common-image-properties.html both point to the same link: http://docs.openstack.org/cli-reference/glance.html#image-service-property-keys, which simply links to the Image service command-line client reference page. ** Affects: glance Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1606608 Title: Invalid links in Common Image Properties doc Status in Glance: New Bug description: The architecture and os_distro property descriptions in http://docs.openstack.org/developer/glance/common-image-properties.html both point to the same link: http://docs.openstack.org/cli-reference/glance.html#image-service-property-keys, which simply links to the Image service command-line client reference page. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1606608/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1606381] Re: API-REF LBaaS v2 API version mixed with LBaaS v1 API
** Project changed: openstack-manuals => neutron ** Tags removed: doc ** Tags added: lib -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1606381 Title: API-REF LBaaS v2 API version mixed with LBaaS v1 API Status in neutron: New Bug description: Somehow the LBaaS v2 API docs got merged/mixed with the LBaaS v1 API docs. Previously there were two sections, one for the deprecated LBaaSv1 API and one for the LBaaSv2 API. http://developer.openstack.org/api-ref/networking/v2-ext/index.html#lbaas-1-0-deprecated-lb-vips-health-monitors-pools-members To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1606381/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1606381] [NEW] API-REF LBaaS v2 API version mixed with LBaaS v1 API
You have been subscribed to a public bug: Somehow the LBaaS v2 API docs got merged/mixed with the LBaaS v1 API docs. Previously there were two sections, one for the deprecated LBaaSv1 API and one for the LBaaSv2 API. http://developer.openstack.org/api-ref/networking/v2-ext/index.html#lbaas-1-0-deprecated-lb-vips-health-monitors-pools-members ** Affects: neutron Importance: Undecided Status: New ** Tags: doc lbaas -- API-REF LBaaS v2 API version mixed with LBaaS v1 API https://bugs.launchpad.net/bugs/1606381 You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1606381] Re: API-REF LBaaS v2 API version mixed with LBaaS v1 API
** Tags added: doc lbaas ** Project changed: neutron => openstack-manuals -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1606381 Title: API-REF LBaaS v2 API version mixed with LBaaS v1 API Status in openstack-manuals: New Bug description: Somehow the LBaaS v2 API docs got merged/mixed with the LBaaS v1 API docs. Previously there were two sections, one for the deprecated LBaaSv1 API and one for the LBaaSv2 API. http://developer.openstack.org/api-ref/networking/v2-ext/index.html#lbaas-1-0-deprecated-lb-vips-health-monitors-pools-members To manage notifications about this bug go to: https://bugs.launchpad.net/openstack-manuals/+bug/1606381/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1214176] Re: Fix copyright headers to be compliant with Foundation policies
** Also affects: glance Importance: Undecided Status: New ** Changed in: glance Importance: Undecided => Low ** Changed in: glance Status: New => Triaged ** Changed in: glance Assignee: (unassigned) => Dinesh Bhor (dinesh-bhor) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1214176 Title: Fix copyright headers to be compliant with Foundation policies Status in Ceilometer: Fix Released Status in Cinder: In Progress Status in devstack: Fix Released Status in Glance: Triaged Status in heat: Fix Released Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Identity (keystone): Fix Released Status in Murano: In Progress Status in neutron: Fix Released Status in OpenStack Compute (nova): Fix Released Status in PBR: In Progress Status in python-ceilometerclient: Fix Released Status in python-cinderclient: Fix Released Status in python-glanceclient: Fix Released Status in python-heatclient: Fix Released Status in python-keystoneclient: Fix Released Status in python-manilaclient: In Progress Status in python-neutronclient: Fix Released Status in python-troveclient: In Progress Status in OpenStack Object Storage (swift): Fix Released Status in tempest: Fix Released Status in OpenStack DBaaS (Trove): Fix Released Bug description: Correct the copyright headers to be consistent with the policies outlined by the OpenStack Foundation at http://www.openstack.org/brand/openstack-trademark-policy/ Remove references to OpenStack LLC, replace with OpenStack Foundation To manage notifications about this bug go to: https://bugs.launchpad.net/ceilometer/+bug/1214176/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1601842] Re: stack.sh fails on devstack with python3
marking as invalid as originator OK'ed it ** Changed in: keystone Status: New => Invalid ** Changed in: keystone Assignee: lavanya sirigudi (lavanya553) => (unassigned) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1601842 Title: stack.sh fails on devstack with python3 Status in OpenStack Identity (keystone): Invalid Bug description: Steps to reproduce: 1. localrc with python 3 enabled (the precise one I used http://paste.openstack.org/show/529714/) 2. run stack.sh Script terminated with the following error: (17:32:46) ivasilevskaya: CRITICAL keystone [-] ImportError: No module named 'memcache' (more logs here http://paste.openstack.org/show/529680/). memcache is present in pip3.4 installed package list and can be successfully imported manually. I made some blind guesses what might be wrong (never configured python3 devstack before), after installing libpython3-dev the error transfered to http://paste.openstack.org/show/529710/. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1601842/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
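A quick way to debug this class of failure is to ask the exact interpreter the service runs under whether it can see the module, since a package installed for python 2's pip may be invisible to python 3. A minimal sketch; `module_available` is a hypothetical debugging helper, not part of keystone or devstack:

```python
import importlib.util
import sys

def module_available(name):
    """Return True if `name` is importable by the interpreter running
    this script (hypothetical helper, not part of keystone/devstack)."""
    return importlib.util.find_spec(name) is not None

if __name__ == "__main__":
    # Run this with the same python3 binary the failing service uses.
    print(sys.executable, module_available("memcache"))
```

If this prints False under python3 but `import memcache` works in an interactive python2 shell, the package was simply installed for the wrong interpreter.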
[Yahoo-eng-team] [Bug 1584676] Re: 500 Error when attaching an interface with the network that doesn't have any subnets
Reviewed: https://review.openstack.org/323332 Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=dba6713e83967be513b5b2bfc39408f15c0d0fe6 Submitter: Jenkins Branch: master commit dba6713e83967be513b5b2bfc39408f15c0d0fe6 Author: Takashi NATSUME Date: Tue May 31 21:08:35 2016 +0900 Return 400 when SecurityGroupCannotBeApplied is raised Return 400 when SecurityGroupCannotBeApplied is raised in attaching an interface to a VM instance. Change-Id: I6cc0e2b43b82dc3b16a581b1f8ec75b35995934e Closes-Bug: #1584676 ** Changed in: nova Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1584676 Title: 500 Error when attaching an interface with the network that doesn't have any subnets Status in OpenStack Compute (nova): Fix Released Bug description: The nova-api returns 500 error when attaching an interface with the network that doesn't have any subnets.

[How to reproduce]

stack@devstack-master:~/nova$ openstack server list
+--------------------------------------+---------+--------+--------------------------------+
| ID                                   | Name    | Status | Networks                       |
+--------------------------------------+---------+--------+--------------------------------+
| 82e55546-3496-499b-82eb-7b819d0a5e8e | server1 | ACTIVE | public=10.0.2.197, 2001:db8::6 |
+--------------------------------------+---------+--------+--------------------------------+

stack@devstack-master:~/nova$ openstack network list
+--------------------------------------+---------+----------------------------------------------------------------------------+
| ID                                   | Name    | Subnets                                                                    |
+--------------------------------------+---------+----------------------------------------------------------------------------+
| 7687f6d3-8ec7-4d47-863b-aab88f95d88b | private | 38025678-a6fc-4318-b8c9-4a4fe4c1acb9, fae11795-7253-4d58-a890-d9b1c74054e1 |
| 0f3c1a14-3c7c-4c8c-ae0e-ce7e38d01fde | public  | 6fd1d52b-9762-4178-8424-07fe21473334, e921f73c-8606-4436-92f8-22eb5e518491 |
| 00ad2c69-2796-4c01-b578-e48842854274 | net1    |                                                                            |
+--------------------------------------+---------+----------------------------------------------------------------------------+

stack@devstack-master:~/nova$ nova interface-attach --net-id 00ad2c69-2796-4c01-b578-e48842854274 server1
ERROR (ClientException): Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
(HTTP 500) (Request-ID: req-8a3d18c2-ef33-4174-beab-e45057071353)

[log] nova-compute.log
2016-05-23 18:41:54.285 ERROR oslo_messaging.rpc.server [req-8a3d18c2-ef33-4174-beab-e45057071353 admin admin] Exception during handling message
2016-05-23 18:41:54.285 TRACE oslo_messaging.rpc.server Traceback (most recent call last):
2016-05-23 18:41:54.285 TRACE oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 133, in _process_incoming
2016-05-23 18:41:54.285 TRACE oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
2016-05-23 18:41:54.285 TRACE oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 153, in dispatch
2016-05-23 18:41:54.285 TRACE oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
2016-05-23 18:41:54.285 TRACE oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 122, in _do_dispatch
2016-05-23 18:41:54.285 TRACE oslo_messaging.rpc.server     result = func(ctxt, **new_args)
2016-05-23 18:41:54.285 TRACE oslo_messaging.rpc.server   File "/opt/stack/nova/nova/exception.py", line 110, in wrapped
2016-05-23 18:41:54.285 TRACE oslo_messaging.rpc.server     payload)
2016-05-23 18:41:54.285 TRACE oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 221, in __exit__
2016-05-23 18:41:54.285 TRACE oslo_messaging.rpc.server     self.force_reraise()
2016-05-23 18:41:54.285 TRACE oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, in force_reraise
2016-05-23 18:41:54.285 TRACE oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
2016-05-23 18:41:54.285 TRACE oslo_messaging.rpc.server   File "/opt/stack/nova/nova/exception.py", line 89, in wrapped
2016-05-23 18:41:54.285 TRACE oslo_messaging.rpc.server     return f(self, context, *args, **kw)
2016-05-23
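The committed fix's idea, catching the known driver exception and translating it into a 400 client error rather than letting it escape as a 500, can be sketched as follows. The exception and response classes here are simple stand-ins, not nova's actual webob-based ones:

```python
class SecurityGroupCannotBeApplied(Exception):
    """Stand-in for the nova exception named in the commit message."""

class HTTPBadRequest(Exception):
    """Stand-in for a 400 response; not the real webob class."""
    code = 400

def attach_interface(network_has_subnets):
    """Illustrative handler: before the fix, the driver exception
    propagated and surfaced as an Unexpected API Error (500)."""
    try:
        if not network_has_subnets:
            # security groups cannot be applied on a network
            # that has no subnets
            raise SecurityGroupCannotBeApplied("network has no subnets")
        return 200
    except SecurityGroupCannotBeApplied as exc:
        # translate the known failure into a client error instead
        raise HTTPBadRequest(str(exc))
```

The point of the change is purely the except clause: a well-understood user mistake becomes a 400 with a meaningful message instead of an opaque 500.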
[Yahoo-eng-team] [Bug 1606573] [NEW] dev-docs: missing 'ova' container_format
Public bug reported: in glanceapi.rst: x-image-meta-container_format is missing 'ova'. Note: the following are correct: formats.rst api-ref ** Affects: glance Importance: Low Status: Confirmed ** Tags: dev-docs ** Changed in: glance Status: New => Confirmed ** Changed in: glance Importance: Undecided => Low -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1606573 Title: dev-docs: missing 'ova' container_format Status in Glance: Confirmed Bug description: in glanceapi.rst: x-image-meta-container_format is missing 'ova'. Note: the following are correct: formats.rst api-ref To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1606573/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1537121] Re: Add functionality to define requests without body
This has been fixed in the current api-ref, which is located in the Glance code repository. ** Changed in: glance Status: Confirmed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1537121 Title: Add functionality to define requests without body Status in Glance: Fix Released Bug description: https://review.openstack.org/207150 Dear bug triager. This bug was created since a commit was marked with DOCIMPACT. commit 9b430f99518e10ae263bedec3062408af332068c Author: Niall Bunting Date: Tue Jul 28 15:05:10 2015 + Add functionality to define requests without body This allows functions that do not accept bodies to define this in the router file. As currently many requests will cause a 500 if a body is supplied when the API request does not expect it. This currently only affects the core parts of the v2 api, that is, calls to v2/images and v2/schemas. It does not cover the "tasks" API or the metadefs api as I was keeping this patch concise. As this does not affect the behaviour if not included this makes no change to the metadefs api behaviour. DocImpact Partial-Bug: 1475647 Change-Id: Ieb510e5516128078d40d39fd9b4f339ce64e10e7 To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1537121/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
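The commit's approach, letting a route declare that it takes no body so that a supplied body yields a 400 instead of a 500, might look roughly like this. The decorator and the (body, ...) calling convention are hypothetical illustrations, not glance's actual router API:

```python
def reject_body(handler):
    """Wrap a handler so a request carrying a body is answered with a
    400 up front instead of crashing deeper in the stack with a 500.
    The calling convention here is illustrative, not glance's router."""
    def wrapped(body, *args, **kwargs):
        if body:  # any non-empty body is rejected immediately
            return (400, "this request does not accept a body")
        return handler(*args, **kwargs)
    return wrapped

@reject_body
def list_images():
    # stand-in for a bodiless GET handler such as GET /v2/images
    return (200, ["image-1", "image-2"])
```

Declaring "no body" at the routing layer keeps the per-handler code free of repetitive body checks.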
[Yahoo-eng-team] [Bug 1565631] Re: [Shared images] is missing and wrong parameter
Fixed by https://review.openstack.org/#/c/312259/ Published as: http://developer.openstack.org/api-ref/image/v1/index.html?expanded=#list-shared-images ** Changed in: glance Status: Confirmed => Fix Released ** Changed in: glance Assignee: Nam (namnh) => (unassigned) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1565631 Title: [Shared images] is missing and wrong parameter Status in Glance: Fix Released Bug description: The response parameters for List shared images [1] are missing the "shared_images" parameter. Also, the type of the "image_id" parameter is given as "xsd:string", which is wrong; it should be changed to "csapi:UUID". [1] http://developer.openstack.org/api-ref-image-v1.html#listSharedImages-v1 To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1565631/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1481600] Re: Remove Catalog Index Service
The current Glance api-ref does not include information about the "catalog index service". The api-ref is now located in the Glance repository. ** Changed in: glance Status: Triaged => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1481600 Title: Remove Catalog Index Service Status in Glance: Fix Released Bug description: https://review.openstack.org/197043 commit feb927c8a11813c2cd4fbb73b2561a73e281f1aa Author: Louis Taylor Date: Tue Jun 30 12:19:34 2015 + Remove Catalog Index Service The Catalog Index Service added in the kilo cycle has been split into a new project named searchlight. This code now lives in a seperate repository: https://git.openstack.org/openstack/searchlight For more information about the split, see the governance change: I8b44aac03585c651ef8d5e94624f64a0ed2d10b2 DocImpact UpgradeImpact APIImpact Change-Id: I239ac9e32857f6a728f40c169e773ee977cca3ca To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1481600/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1538837] Re: Invalid exception raised (ComputeServiceUnavailable) if host is not found
Reviewed: https://review.openstack.org/243105 Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=c824982e6a3d6660697e503f7236377cc8202d41 Submitter: Jenkins Branch: master commit c824982e6a3d6660697e503f7236377cc8202d41 Author: Javeme Date: Mon Nov 9 20:37:34 2015 +0800 raise exception ComputeHostNotFound if host is not found When performing a live migration, I passed a wrong host parameter, and then I got a wrong result like this: "Compute service of host-1 is unavailable at this time". I thought the compute service stopped, but the fact was I passed a host id instead of a host name, therefore it could not find the host. In order to distinguish the error "host not found" from the error "service not available", I think we should raise a different exception ComputeHostNotFound instead of ComputeServiceUnavailable. With this patch we return a more accurate error message for live migration. Closes-Bug: #1538837 Change-Id: I6ad377147070f85b9b1d5a1d1be459e890e02bcc ** Changed in: nova Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1538837 Title: Invalid exception raised (ComputeServiceUnavailable) if host is not found Status in OpenStack Compute (nova): Fix Released Bug description: Invalid exception raised (ComputeServiceUnavailable) if host is not found When performing a live migration, I passed a wrong host parameter, and then I got a wrong result like this: "Compute service of host-1 is unavailable at this time". I thought the compute service stopped, but the fact was I passed a host id instead of a host name, therefore it could not find the host. In order to distinguish the error "host not found" from the error "service not available", I think we should raise a different exception like ComputeServiceNotExist.
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1538837/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
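The fix separates two failure modes that the old code conflated. A minimal sketch of the distinction; the `services` mapping and helper name are illustrative stand-ins, not nova's actual API:

```python
class ComputeHostNotFound(Exception):
    """Raised when the named host does not exist at all."""

class ComputeServiceUnavailable(Exception):
    """Raised when the host exists but its compute service is down."""

def check_live_migration_target(services, host):
    """Sketch of the fix: report 'no such host' and 'host down' as
    different errors. `services` maps host name -> is_up and stands in
    for nova's service records."""
    if host not in services:
        # a typo'd or non-existent host name is not a service outage
        raise ComputeHostNotFound(host)
    if not services[host]:
        # the host exists, but its compute service is not running
        raise ComputeServiceUnavailable(host)
    return True
```

With this split, passing a host id where a host name is expected produces a "host not found" error rather than the misleading "service unavailable" message from the bug report.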
[Yahoo-eng-team] [Bug 1539467] Re: [Image service API v2] Incorrect description for List API
This has been fixed by https://review.openstack.org/#/c/312259/ It is published as http://developer.openstack.org/api-ref/image/versions/index.html Please take a look, and if you think the text requires improvement, please open a new bug and submit a patch. The code is now in the Glance repository in api-ref/source/versions ** Changed in: glance Status: In Progress => Fix Released ** Changed in: glance Assignee: Sharat Sharma (sharat-sharma) => (unassigned) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1539467 Title: [Image service API v2] Incorrect description for List API Status in Glance: Fix Released Bug description: I've tested this API: http://developer.openstack.org/api-ref-image-v2.html#listVersions-image-v2 The description for the response of the API says: "This operation does not accept a request body and does not return a response body." But the API does return a response. The description should be: "This operation does not accept a request body." To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1539467/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1606381] Re: API-REF LBaaS v2 API version mixed with LBaaS v1 API
API-Ref issue ** Project changed: openstack-manuals => neutron -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1606381 Title: API-REF LBaaS v2 API version mixed with LBaaS v1 API Status in neutron: New Bug description: Somehow the LBaaS v2 API docs got merged/mixed with the LBaaS v1 API docs. Previously there were two sections, one for the deprecated LBaaSv1 API and one for the LBaaSv2 API. http://developer.openstack.org/api-ref/networking/v2-ext/index.html#lbaas-1-0-deprecated-lb-vips-health-monitors-pools-members To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1606381/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1606562] [NEW] Add 'vhdx' disk format support
Public bug reported: VHDX is the newer version of VHD: https://technet.microsoft.com/en-us/library/hh831446(v=ws.11).aspx It removes VHD's 2 TB disk size limit (among other things). There's no reason we shouldn't support it as a default disk format. ** Affects: glance Importance: Undecided Assignee: Stuart McLaren (stuart-mclaren) Status: In Progress ** Changed in: glance Assignee: (unassigned) => Stuart McLaren (stuart-mclaren) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1606562 Title: Add 'vhdx' disk format support Status in Glance: In Progress Bug description: VHDX is the newer version of VHD: https://technet.microsoft.com/en-us/library/hh831446(v=ws.11).aspx It removes VHD's 2 TB disk size limit (among other things). There's no reason we shouldn't support it as a default disk format. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1606562/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
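Supporting a new disk format mostly means adding one value to the accepted list and validating uploads against it. A hedged sketch: the authoritative list lives in glance's `disk_formats` configuration option, and the values and helper below are illustrative, not glance's actual code:

```python
# Illustrative default list; the real one is the glance `disk_formats`
# configuration option. 'vhdx' is the format this bug asks to add.
DISK_FORMATS = ["ami", "ari", "aki", "vhd", "vhdx",
                "vmdk", "raw", "qcow2", "vdi", "iso"]

def validate_disk_format(value):
    """Accept a disk_format only if it is one of the configured values."""
    if value not in DISK_FORMATS:
        raise ValueError("Invalid disk format: %s" % value)
    return value
```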
[Yahoo-eng-team] [Bug 1606381] [NEW] API-REF LBaaS v2 API version mixed with LBaaS v1 API
You have been subscribed to a public bug: Somehow the LBaaS v2 API docs got merged/mixed with the LBaaS v1 API docs. Previously there were two sections, one for the deprecated LBaaSv1 API and one for the LBaaSv2 API. http://developer.openstack.org/api-ref/networking/v2-ext/index.html#lbaas-1-0-deprecated-lb-vips-health-monitors-pools-members ** Affects: neutron Importance: Undecided Status: New -- API-REF LBaaS v2 API version mixed with LBaaS v1 API https://bugs.launchpad.net/bugs/1606381 You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1214176] Re: Fix copyright headers to be compliant with Foundation policies
** Also affects: pbr Importance: Undecided Status: New ** Changed in: pbr Assignee: (unassigned) => Dinesh Bhor (dinesh-bhor) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1214176 Title: Fix copyright headers to be compliant with Foundation policies Status in Ceilometer: Fix Released Status in Cinder: In Progress Status in devstack: Fix Released Status in heat: Fix Released Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Identity (keystone): Fix Released Status in Murano: In Progress Status in neutron: Fix Released Status in OpenStack Compute (nova): Fix Released Status in PBR: In Progress Status in python-ceilometerclient: Fix Released Status in python-cinderclient: Fix Released Status in python-glanceclient: Fix Released Status in python-heatclient: Fix Released Status in python-keystoneclient: Fix Released Status in python-manilaclient: In Progress Status in python-neutronclient: Fix Released Status in python-troveclient: In Progress Status in OpenStack Object Storage (swift): Fix Released Status in tempest: Fix Released Status in OpenStack DBaaS (Trove): Fix Released Bug description: Correct the copyright headers to be consistent with the policies outlined by the OpenStack Foundation at http://www.openstack.org/brand/openstack-trademark-policy/ Remove references to OpenStack LLC, replace with OpenStack Foundation To manage notifications about this bug go to: https://bugs.launchpad.net/ceilometer/+bug/1214176/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1214176] Re: Fix copyright headers to be compliant with Foundation policies
** Also affects: python-troveclient Importance: Undecided Status: New ** Changed in: python-troveclient Assignee: (unassigned) => Dinesh Bhor (dinesh-bhor) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1214176 Title: Fix copyright headers to be compliant with Foundation policies Status in Ceilometer: Fix Released Status in Cinder: In Progress Status in devstack: Fix Released Status in heat: Fix Released Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Identity (keystone): Fix Released Status in Murano: In Progress Status in neutron: Fix Released Status in OpenStack Compute (nova): Fix Released Status in python-ceilometerclient: Fix Released Status in python-cinderclient: Fix Released Status in python-glanceclient: Fix Released Status in python-heatclient: Fix Released Status in python-keystoneclient: Fix Released Status in python-manilaclient: In Progress Status in python-neutronclient: Fix Released Status in python-troveclient: In Progress Status in OpenStack Object Storage (swift): Fix Released Status in tempest: Fix Released Status in OpenStack DBaaS (Trove): Fix Released Bug description: Correct the copyright headers to be consistent with the policies outlined by the OpenStack Foundation at http://www.openstack.org/brand/openstack-trademark-policy/ Remove references to OpenStack LLC, replace with OpenStack Foundation To manage notifications about this bug go to: https://bugs.launchpad.net/ceilometer/+bug/1214176/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1214176] Re: Fix copyright headers to be compliant with Foundation policies
** Also affects: cinder Importance: Undecided Status: New ** Changed in: cinder Assignee: (unassigned) => Dinesh Bhor (dinesh-bhor) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1214176 Title: Fix copyright headers to be compliant with Foundation policies Status in Ceilometer: Fix Released Status in Cinder: In Progress Status in devstack: Fix Released Status in heat: Fix Released Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Identity (keystone): Fix Released Status in Murano: In Progress Status in neutron: Fix Released Status in OpenStack Compute (nova): Fix Released Status in python-ceilometerclient: Fix Released Status in python-cinderclient: Fix Released Status in python-glanceclient: Fix Released Status in python-heatclient: Fix Released Status in python-keystoneclient: Fix Released Status in python-manilaclient: In Progress Status in python-neutronclient: Fix Released Status in python-troveclient: New Status in OpenStack Object Storage (swift): Fix Released Status in tempest: Fix Released Status in OpenStack DBaaS (Trove): Fix Released Bug description: Correct the copyright headers to be consistent with the policies outlined by the OpenStack Foundation at http://www.openstack.org/brand/openstack-trademark-policy/ Remove references to OpenStack LLC, replace with OpenStack Foundation To manage notifications about this bug go to: https://bugs.launchpad.net/ceilometer/+bug/1214176/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1214176] Re: Fix copyright headers to be compliant with Foundation policies
** Also affects: python-manilaclient Importance: Undecided Status: New ** Changed in: python-manilaclient Assignee: (unassigned) => Dinesh Bhor (dinesh-bhor) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1214176 Title: Fix copyright headers to be compliant with Foundation policies Status in Ceilometer: Fix Released Status in Cinder: New Status in devstack: Fix Released Status in heat: Fix Released Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Identity (keystone): Fix Released Status in Murano: In Progress Status in neutron: Fix Released Status in OpenStack Compute (nova): Fix Released Status in python-ceilometerclient: Fix Released Status in python-cinderclient: Fix Released Status in python-glanceclient: Fix Released Status in python-heatclient: Fix Released Status in python-keystoneclient: Fix Released Status in python-manilaclient: In Progress Status in python-neutronclient: Fix Released Status in OpenStack Object Storage (swift): Fix Released Status in tempest: Fix Released Status in OpenStack DBaaS (Trove): Fix Released Bug description: Correct the copyright headers to be consistent with the policies outlined by the OpenStack Foundation at http://www.openstack.org/brand/openstack-trademark-policy/ Remove references to OpenStack LLC, replace with OpenStack Foundation To manage notifications about this bug go to: https://bugs.launchpad.net/ceilometer/+bug/1214176/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1606527] [NEW] VPNAAS ipsec site connection stuck on "PENDING_CREATE" status in mitaka release.
Public bug reported: With strongswan driver, ipsec-site-connection instance creation remains stuck in pending_create.

devstack@devstack-virtual-machine:/opt/stack/logs$ neutron vpn-service-list
+--------------------------------------+--------+--------------------------------------+----------------+
| id                                   | name   | router_id                            | status         |
+--------------------------------------+--------+--------------------------------------+----------------+
| e5a52006-ebc4-43b8-a8ed-8a0ce319ab5c | myvpnA | 5d8f0c8a-167c-48c2-aa48-c569b3fd3096 | PENDING_CREATE |
+--------------------------------------+--------+--------------------------------------+----------------+

devstack@devstack-virtual-machine:/opt/stack/logs$ neutron vpn-ipsecpolicy-list
+--------------------------------------+-------------+----------------+----------------------+--------+
| id                                   | name        | auth_algorithm | encryption_algorithm | pfs    |
+--------------------------------------+-------------+----------------+----------------------+--------+
| d200a7d7-b030-472f-b2b6-2820fffe489c | ipsecpolicy | sha1           | aes-128              | group5 |
+--------------------------------------+-------------+----------------+----------------------+--------+

devstack@devstack-virtual-machine:/opt/stack/logs$ neutron ipsec-site-connection-list
+--------------------------------------+----------------+--------------+-----------+----------------+
| id                                   | name           | peer_address | auth_mode | status         |
+--------------------------------------+----------------+--------------+-----------+----------------+
| 54e922a3-54b8-488e-a629-b627eb85491f | vpnconnection2 | 20.20.20.8   | psk       | PENDING_CREATE |
+--------------------------------------+----------------+--------------+-----------+----------------+

** Affects: neutron Importance: Undecided Assignee: Tarun Jain (tarun-jain2) Status: New ** Changed in: neutron Assignee: (unassigned) => Tarun Jain (tarun-jain2) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1606527 Title: VPNAAS ipsec site connection stuck on "PENDING_CREATE" status in mitaka release. Status in neutron: New Bug description: With strongswan driver, ipsec-site-connection instance creation remains stuck in pending_create.
devstack@devstack-virtual-machine:/opt/stack/logs$ neutron vpn-service-list
+--------------------------------------+--------+--------------------------------------+----------------+
| id                                   | name   | router_id                            | status         |
+--------------------------------------+--------+--------------------------------------+----------------+
| e5a52006-ebc4-43b8-a8ed-8a0ce319ab5c | myvpnA | 5d8f0c8a-167c-48c2-aa48-c569b3fd3096 | PENDING_CREATE |
+--------------------------------------+--------+--------------------------------------+----------------+

devstack@devstack-virtual-machine:/opt/stack/logs$ neutron vpn-ipsecpolicy-list
+--------------------------------------+-------------+----------------+----------------------+--------+
| id                                   | name        | auth_algorithm | encryption_algorithm | pfs    |
+--------------------------------------+-------------+----------------+----------------------+--------+
| d200a7d7-b030-472f-b2b6-2820fffe489c | ipsecpolicy | sha1           | aes-128              | group5 |
+--------------------------------------+-------------+----------------+----------------------+--------+

devstack@devstack-virtual-machine:/opt/stack/logs$ neutron ipsec-site-connection-list
+--------------------------------------+----------------+--------------+-----------+----------------+
| id                                   | name           | peer_address | auth_mode | status         |
+--------------------------------------+----------------+--------------+-----------+----------------+
| 54e922a3-54b8-488e-a629-b627eb85491f | vpnconnection2 | 20.20.20.8   | psk       | PENDING_CREATE |
+--------------------------------------+----------------+--------------+-----------+----------------+

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1606527/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
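When a resource sits in PENDING_CREATE as above, a poll-with-deadline helper turns the hang into a detectable failure instead of an indefinite wait. This is a generic sketch; the function and its defaults are illustrative, not part of python-neutronclient:

```python
import time

def wait_for_status(get_status, want="ACTIVE", timeout=60, interval=5,
                    clock=time.monotonic, sleep=time.sleep):
    """Poll a resource until its status reaches `want`, or raise if it
    stays in a transient state (e.g. PENDING_CREATE) past `timeout`
    seconds. `get_status` is any zero-argument callable returning the
    current status string."""
    deadline = clock() + timeout
    status = get_status()
    while clock() < deadline:
        if status == want:
            return status
        sleep(interval)
        status = get_status()
    raise TimeoutError("resource stuck in %s after %s seconds"
                       % (status, timeout))
```

Injecting `clock` and `sleep` keeps the helper testable without real waiting; in scripts the defaults suffice.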
[Yahoo-eng-team] [Bug 1606496] [NEW] Instance affinity filters do not work in a heterogeneous cloud with Ironic computes
Public bug reported:

Description
===========
In a heterogeneous cloud with both libvirt and ironic compute nodes,
instance affinity filters like DifferentHostFilter or SameHostFilter do
not filter hosts out when scheduling a subsequent instance.

Steps to reproduce
==================
Make sure you have at least two libvirt compute nodes and one ironic
node. Make sure DifferentHostFilter and SameHostFilter are configured
as nova-scheduler filters in nova.conf and the filter scheduler is
used.

1. Boot a libvirt instance A.
2. Check the host name of the compute node instance A is running on
   (nova show from an admin user).
3. Boot a libvirt instance B passing a different_host=$A.uuid hint for
   nova-scheduler.
4. Check the host name of the compute node instance B is running on
   (nova show from an admin user).

Expected result
===============
Instances A and B are running on two different compute nodes.

Actual result
=============
Instances A and B are running on the same compute node. nova-scheduler
logs show that DifferentHostFilter was run, but did not filter out one
of the hosts:

  Filter DifferentHostFilter returned 2 host(s) get_filtered_objects

Environment
===========
OpenStack Mitaka, 2 libvirt compute nodes, 1 ironic compute node.
FilterScheduler is used; DifferentHostFilter and SameHostFilter are
enabled in nova.conf.

Root cause analysis
===================
Debugging showed that IronicHostManager is configured to be used by
nova-scheduler instead of the default host manager when ironic compute
nodes are deployed in the same cloud together with libvirt compute
nodes. IronicHostManager overrides the _get_instance_info() method and
unconditionally returns an empty instance dict, even if this method is
called for non-ironic computes of the same cloud.
DifferentHostFilter and similar filters later use this info to find an
intersection of the set of instances running on a libvirt compute node
(currently, always {}) and the set of instance UUIDs passed as a hint
for nova-scheduler, thus compute nodes are never filtered out and the
hint is effectively ignored.

** Affects: nova
     Importance: Undecided
       Assignee: Roman Podoliaka (rpodolyaka)
         Status: New

** Tags: ironic scheduler

** Changed in: nova
     Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

** Tags added: ironic

** Tags added: scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1606496

Title:
  Instance affinity filters do not work in a heterogeneous cloud with
  Ironic computes

Status in OpenStack Compute (nova):
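The root cause described in the report can be reproduced in miniature. Below is a hedged sketch, not Nova's actual code: only the class and method names (HostManager, IronicHostManager, _get_instance_info) mirror the report, and the filter logic is reduced to a plain set intersection; the dict-based node representation is an illustrative assumption.

```python
class HostManager:
    """Default manager: reports the instances actually on a host."""
    def _get_instance_info(self, context, compute):
        # In Nova this returns a mapping of instance UUIDs for the node;
        # here the node is just a dict for illustration.
        return compute.get("instances", {})

class IronicHostManager(HostManager):
    """Ironic manager as described in the report."""
    def _get_instance_info(self, context, compute):
        # Unconditionally returns an empty dict, even when called for
        # non-ironic (e.g. libvirt) compute nodes in the same cloud.
        return {}

def different_host_passes(host_instances, hint_uuids):
    # DifferentHostFilter-style check: the host passes only if it runs
    # none of the instances named in the scheduler hint.
    return not (set(host_instances) & set(hint_uuids))

libvirt_node = {"instances": {"uuid-A": object()}}

# With IronicHostManager the intersection is always empty, so the host
# that already runs instance A wrongly passes the filter.
print(different_host_passes(
    IronicHostManager()._get_instance_info(None, libvirt_node),
    ["uuid-A"]))  # True (the bug)
print(different_host_passes(
    HostManager()._get_instance_info(None, libvirt_node),
    ["uuid-A"]))  # False (expected behavior)
```

This is why the hint is "effectively ignored": the filter runs, but the empty instance info makes every host pass.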
[Yahoo-eng-team] [Bug 1606475] [NEW] unable to create instance in liberty
Public bug reported:

Error launching new instance in Error: Unexpected API Error. Please
report this at http://bugs.launchpad.net/nova/ and attach the Nova API
log if possible. (HTTP 500) (Request-ID:
req-6831fe1e-3236-4a30-9bd4-1b8a923dcc5b)

reply to message ID 0e988d6a55e74a1c9c8c3522d7b8b428\n']
2016-07-26 13:39:23.516 12910 INFO oslo_messaging._drivers.amqpdriver [-] No calling threads waiting for msg_id : 71be54c972444d4cacdd6834f3be97c1
2016-07-26 13:39:23.517 12910 INFO oslo_messaging._drivers.amqpdriver [-] No calling threads waiting for msg_id : fe31b3fcac8e4beea70e43ba676eaedb
2016-07-26 13:39:23.519 12911 INFO oslo_messaging._drivers.amqpdriver [-] No calling threads waiting for msg_id : 2fdc3b9accd94c64b6dc09dda7405707
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher [req-e30b0bb2-4942-460e-96fc-86ba05f5be4c - - - - -] Exception during message handling: Timed out waiting for a reply to message ID dd70d3ed54d44cb68014e489293335b1
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher Traceback (most recent call last):
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher     executor_callback))
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher     executor_callback)
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 124, in _do_dispatch
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher     new_args[argname] = self.serializer.deserialize_entity(ctxt, arg)
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/rpc.py", line 111, in deserialize_entity
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher     return self._base.deserialize_entity(context, entity)
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/objects/base.py", line 322, in deserialize_entity
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher     entity = self._process_object(context, entity)
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/objects/base.py", line 284, in _process_object
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher     context, objprim, version_manifest)
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/conductor/api.py", line 77, in object_backport_versions
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher     object_versions)
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/conductor/rpcapi.py", line 247, in object_backport_versions
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher     object_versions=object_versions)
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 158, in call
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher     retry=self.retry)
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 90, in _send
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher     timeout=timeout, retry=retry)
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 431, in send
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher     retry=retry)
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 420, in _send
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher     result = self._waiter.wait(msg_id, timeout)
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 318, in wait
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher     message = self.waiters.get(msg_id, timeout=timeout)
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 223, in get
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher     'to message ID %s' % msg_id)
2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher MessagingTimeout: Timed out waiting for a reply to
[Yahoo-eng-team] [Bug 1606462] [NEW] OVS firewall doesn't handle security group updates properly
Public bug reported:

Look at this:

    class OVSFirewallDriver(firewall.FirewallDriver):
        ...
        def security_group_updated(self, action_type, sec_group_ids,
                                   device_ids=None):
            """This method is obsolete

            The current driver only supports enhanced rpc calls into
            security group agent. This method is never called from that
            place.
            """

but this is used by the enhanced rpc. See
SecurityGroupAgentRpc._security_group_updated. Also this can be checked
by inserting a test raise statement into the above method.

** Affects: neutron
     Importance: Undecided
         Status: New

** Tags: ovs sg-fw

** Tags added: ovs sg-fw

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1606462

Title:
  OVS firewall doesn't handle security group updates properly

Status in neutron:
  New

Bug description:
  Look at this:

      class OVSFirewallDriver(firewall.FirewallDriver):
          ...
          def security_group_updated(self, action_type, sec_group_ids,
                                     device_ids=None):
              """This method is obsolete

              The current driver only supports enhanced rpc calls into
              security group agent. This method is never called from
              that place.
              """

  but this is used by the enhanced rpc. See
  SecurityGroupAgentRpc._security_group_updated. Also this can be
  checked by inserting a test raise statement into the above method.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1606462/+subscriptions
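The "test raise" check suggested in the report can be sketched as a self-contained demonstration. The classes below are simplified stand-ins, not the real Neutron agent or driver: only the method names (security_group_updated, _security_group_updated) mirror the report, and the agent body is reduced to the single call that matters here.

```python
class OVSFirewallDriver:
    """Driver whose docstring claims the method is never called."""
    def security_group_updated(self, action_type, sec_group_ids,
                               device_ids=None):
        # No-op: any update that reaches here is silently dropped.
        pass

class SecurityGroupAgentRpc:
    """Simplified agent: the enhanced RPC path *does* call the driver."""
    def __init__(self, firewall):
        self.firewall = firewall

    def _security_group_updated(self, security_groups, attribute,
                                action_type):
        # This is the call the driver docstring says never happens.
        return self.firewall.security_group_updated(action_type,
                                                    security_groups)

# A raising subclass proves the method is reached from the agent.
class RaisingDriver(OVSFirewallDriver):
    def security_group_updated(self, action_type, sec_group_ids,
                               device_ids=None):
        raise RuntimeError("security_group_updated WAS called")

agent = SecurityGroupAgentRpc(RaisingDriver())
try:
    agent._security_group_updated(["sg-1"], "security_groups", "sg_rule")
except RuntimeError as e:
    print(e)  # security_group_updated WAS called
```

With the stock no-op driver the same call returns silently, which is exactly the reported bug: the update reaches the driver and is then ignored.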
[Yahoo-eng-team] [Bug 1606455] [NEW] Neutron server was not compatible with member actions
Public bug reported:

== Problem Description ==

Register a new extension into the Neutron server, and this extension
contains a member action. Just like this:

    @classmethod
    def get_resources(cls):
        """Returns rate limit resources."""
        plural_mappings = resource_helper.build_plural_mappings(
            {}, EXTENDED_ATTRIBUTES_2_0)
        attr.PLURALS.update(plural_mappings)
        action_map = {'floatingip': {
            'update_floatingip_ratelimit': 'PUT'}
        }
        return resource_helper.build_resource_info(plural_mappings,
                                                   EXTENDED_ATTRIBUTES_2_0,
                                                   constants.L3_ROUTER_NAT,
                                                   action_map=action_map)

This adds a new member action named "update_floatingip_ratelimit". An
exception happens when this method is called by a non-admin user.

Exception reports:

2016-07-26 10:07:28.813 7562 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 83, in resource
2016-07-26 10:07:28.813 7562 ERROR neutron.api.v2.resource     result = method(request=request, **args)
2016-07-26 10:07:28.813 7562 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/oslo_db/api.py", line 146, in wrapper
2016-07-26 10:07:28.813 7562 ERROR neutron.api.v2.resource     ectxt.value = e.inner_exc
2016-07-26 10:07:28.813 7562 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 195, in __exit__
2016-07-26 10:07:28.813 7562 ERROR neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
2016-07-26 10:07:28.813 7562 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/oslo_db/api.py", line 136, in wrapper
2016-07-26 10:07:28.813 7562 ERROR neutron.api.v2.resource     return f(*args, **kwargs)
2016-07-26 10:07:28.813 7562 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/neutron/api/v2/base.py", line 216, in _handle_action
2016-07-26 10:07:28.813 7562 ERROR neutron.api.v2.resource     pluralized=self._collection)
2016-07-26 10:07:28.813 7562 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/neutron/policy.py", line 399, in enforce
2016-07-26 10:07:28.813 7562 ERROR neutron.api.v2.resource     pluralized)
2016-07-26 10:07:28.813 7562 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/neutron/policy.py", line 324, in _prepare_check
2016-07-26 10:07:28.813 7562 ERROR neutron.api.v2.resource     match_rule = _build_match_rule(action, target, pluralized)
2016-07-26 10:07:28.813 7562 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/neutron/policy.py", line 168, in _build_match_rule
2016-07-26 10:07:28.813 7562 ERROR neutron.api.v2.resource     target, action):
2016-07-26 10:07:28.813 7562 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/neutron/policy.py", line 95, in _is_attribute_explicitly_set
2016-07-26 10:07:28.813 7562 ERROR neutron.api.v2.resource     return (attribute_name in target[const.ATTRIBUTES_TO_UPDATE] and
2016-07-26 10:07:28.813 7562 ERROR neutron.api.v2.resource KeyError: 'attributes_to_update'

Because the new member action name contains the string "update", the
Neutron server checks whether the target contains ATTRIBUTES_TO_UPDATE.
But because this is a member action, the Neutron server does not go
through the "_update" method as it normally would; it goes through
"_handle_action" instead, so the exception happens.

== How to fix ==

>>> if 'update' in action:

change into

>>> if 'update' in action and target.get(const.ATTRIBUTES_TO_UPDATE):

This change solves the problem.

** Affects: neutron
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1606455

Title:
  Neutron server was not compatible with member actions

Status in neutron:
  New

Bug description:
  == Problem Description ==

  Register a new extension into the Neutron server, and this extension
  contains a member action. Just like this:

      @classmethod
      def get_resources(cls):
          """Returns rate limit resources."""
          plural_mappings = resource_helper.build_plural_mappings(
              {}, EXTENDED_ATTRIBUTES_2_0)
          attr.PLURALS.update(plural_mappings)
          action_map = {'floatingip': {
              'update_floatingip_ratelimit': 'PUT'}
          }
          return resource_helper.build_resource_info(plural_mappings,
                                                     EXTENDED_ATTRIBUTES_2_0,
                                                     constants.L3_ROUTER_NAT,
                                                     action_map=action_map)
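The KeyError and the proposed guard from the report above can be reproduced in isolation. This is a hedged sketch: the constant name and the two `if` conditions come from the report, but the function bodies and the dict-based target are simplified illustrations, not Neutron's actual policy code.

```python
ATTRIBUTES_TO_UPDATE = 'attributes_to_update'

def is_attribute_explicitly_set_buggy(attribute_name, target, action):
    # Original check: any action containing "update" assumes the target
    # carries 'attributes_to_update'. Member actions such as
    # 'update_floatingip_ratelimit' go through _handle_action and never
    # set that key, so this raises KeyError.
    if 'update' in action:
        return attribute_name in target[ATTRIBUTES_TO_UPDATE]
    return attribute_name in target

def is_attribute_explicitly_set_fixed(attribute_name, target, action):
    # Proposed fix: only take the update path when the key is present.
    if 'update' in action and target.get(ATTRIBUTES_TO_UPDATE):
        return attribute_name in target[ATTRIBUTES_TO_UPDATE]
    return attribute_name in target

target = {'floatingip_id': 'fip-1'}  # member action: no attrs-to-update
action = 'update_floatingip_ratelimit'

try:
    is_attribute_explicitly_set_buggy('rate', target, action)
except KeyError as e:
    print('buggy:', e)   # buggy: 'attributes_to_update'

print(is_attribute_explicitly_set_fixed('rate', target, action))  # False
```

A genuine update request, where the target does carry `attributes_to_update`, still takes the original code path under the fixed condition, so regular PUTs are unaffected.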
[Yahoo-eng-team] [Bug 1606443] [NEW] lacking scenario test for routed network
Public bug reported:

Adding scenario test cases for routed networks. These scenario tests
should cover the interaction between Nova and Neutron when using routed
networks:

- Basic happy-path deferred IP port (pre-existing port given to Nova).
  Put a subnet on a segment and make sure that Nova can boot with a
  deferred IP port. Check connectivity to the instance.
- Basic happy-path deferred IP port (Nova creates the port). Check
  connectivity to the instance.
- Failed IP allocation. Adjust allocation_ranges so that IP allocation
  fails to get an IP address. The VM should go to error state; ports
  should be cleaned up.
- Try this with a VM instance with two ports. If one is deferred and
  fails IP allocation, does the other get cleaned up?

** Affects: neutron
     Importance: Undecided
       Assignee: bin Yu (froyo-bin)
         Status: New

** Changed in: neutron
     Assignee: (unassigned) => bin Yu (froyo-bin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1606443

Title:
  lacking scenario test for routed network

Status in neutron:
  New

Bug description:
  Adding scenario test cases for routed networks. These scenario tests
  should cover the interaction between Nova and Neutron when using
  routed networks:

  - Basic happy-path deferred IP port (pre-existing port given to
    Nova). Put a subnet on a segment and make sure that Nova can boot
    with a deferred IP port. Check connectivity to the instance.
  - Basic happy-path deferred IP port (Nova creates the port). Check
    connectivity to the instance.
  - Failed IP allocation. Adjust allocation_ranges so that IP
    allocation fails to get an IP address. The VM should go to error
    state; ports should be cleaned up.
  - Try this with a VM instance with two ports. If one is deferred and
    fails IP allocation, does the other get cleaned up?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1606443/+subscriptions