[Yahoo-eng-team] [Bug 1523694] Re: Cannot inject root password
[Expired for OpenStack Compute (nova) because there has been no activity for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1523694

Title: Cannot inject root password
Status in OpenStack Compute (nova): Expired

Bug description: We are upgrading our environment from Icehouse to Kilo and discovered that the new deployment no longer allows admin password injection.

SETUP: Deployed using Mirantis 7.0, with 2x controller nodes, 2x Ceph storage nodes, and many compute nodes. The compute nodes are already configured with inject_password=true and inject_key=true. Key injection works fine; it is only setting the root password from the dashboard that does not work.

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1523694/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to: yahoo-eng-team@lists.launchpad.net
Unsubscribe: https://launchpad.net/~yahoo-eng-team
More help: https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1560313] [NEW] Angular actions menu does not close after clicking action
Public bug reported: I'm not sure when this started, but I noticed today that the angular actions menu does not close on its own after clicking on an action. You need to click somewhere else on the page for the menu to close itself.

** Affects: horizon
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1560313

Title: Angular actions menu does not close after clicking action
Status in OpenStack Dashboard (Horizon): New
[Yahoo-eng-team] [Bug 1560303] [NEW] Angular actions do not prevent multiple clicks
Public bug reported: The angular actions list widget does not prevent the user from clicking an action multiple times before it has been performed. This is mainly a problem when the action results in some asynchronous operation that must resolve before the action can actually be performed. The Update Metadata action on the angular images table is a good example: after clicking the action, two API requests need to resolve before the user sees the metadata modal. If those requests take a long time, it is easy to click the action multiple times and get more than one modal.

** Affects: horizon
   Importance: Undecided
   Assignee: Justin Pomeroy (jpomero)
   Status: In Progress

** Changed in: horizon
   Assignee: (unassigned) => Justin Pomeroy (jpomero)

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1560303

Title: Angular actions do not prevent multiple clicks
Status in OpenStack Dashboard (Horizon): In Progress
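A language-agnostic sketch of the missing guard (Python stands in for the Angular widget here; `single_flight`, `update_metadata`, and the timing are all illustrative, not horizon code): the action simply ignores clicks that arrive while its asynchronous work is still pending.

```python
import asyncio

def single_flight(func):
    """Decorator: drop calls made while a previous call is still pending.

    Analogous to disabling a menu action until its async work resolves,
    so rapid repeated clicks cannot open more than one modal.
    """
    pending = False

    async def wrapper(*args, **kwargs):
        nonlocal pending
        if pending:
            return None  # duplicate click: ignored
        pending = True
        try:
            return await func(*args, **kwargs)
        finally:
            pending = False  # re-enable the action once work resolves

    return wrapper

@single_flight
async def update_metadata():
    await asyncio.sleep(0.01)  # stands in for the two slow API requests
    return "modal-opened"

async def main():
    # Simulate three rapid clicks; only the first opens a modal.
    return await asyncio.gather(update_metadata(),
                                update_metadata(),
                                update_metadata())

print(asyncio.run(main()))  # ['modal-opened', None, None]
```

The same pattern applies in the widget: set a "pending" flag when the action is invoked and clear it when the promise settles, rejecting invocations in between.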
[Yahoo-eng-team] [Bug 1560300] [NEW] nova soft-delete leaves "Attached to None" volumes
Public bug reported:

1. Version: `git log -1` c4763d46fe76c524363a0cf55d1e8afe4bd23f53. This is the version I used to test on my devstack, but in fact this bug exists from at least the Juno release until now, as far as I know.

2. Relevant log: When a soft-delete is performed on an instance booted from volume, a line like this appears in nova-compute.log:

WARNING nova.compute.manager [req-7bbc1701-fbce-41bc-8182-b2cbb6e5ac93 None None] [instance: a3645529-6b11-437e-b1e4-773e87db7223] Ignoring EndpointNotFound: The service catalog is empty.

This is because nova-compute uses a separate thread to do the instance-reclaiming job, which has an admin context. Since there is no service_catalog in the admin context, nova-compute raises EndpointNotFound while it tries to detach the volume.

3. Reproduce steps:
(1) Set a non-zero value for reclaim_instance_interval in /etc/nova/nova.conf on both the nova-controller and nova-compute nodes, e.g. reclaim_instance_interval=10. This enables the soft-delete feature.
(2) Create an instance with this: nova boot --flavor xxx --block-device id=,source=image,dest=volume,size=,bootindex=0 --nic net-id= test
(3) Delete the created instance: nova delete test
(4) On the nova-compute node which hosted "test", there will be a warning in nova-compute.log like the one above. [NOTE: you should wait until the reclaim_instance_interval has elapsed; only then does nova-compute really terminate the instance.]
(5) If you list your volumes, you will find there still exists one volume attached to the deleted "test" instance. Checking on the dashboard, the volume info says "Attached to None".
(6) If you try to delete that volume with "cinder delete", it says the volume cannot be deleted because it is in attached status.

4. Expected result: soft-delete detaches the instance's volume.

5. Actual result: the volume is left attached, and undeletable.

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1560300

Title: nova soft-delete leaves "Attached to None" volumes
Status in OpenStack Compute (nova): New
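A minimal sketch of the failure mode and one possible workaround. All names here (`find_volume_endpoint`, the dict-shaped context, the `catalog_lookup` callback) are hypothetical stand-ins, not nova's actual API: the point is only that a context built for a periodic task carries an empty service catalog, so the cinder endpoint must be resolved some other way instead of letting EndpointNotFound abort the detach.

```python
# Hypothetical illustration of the bug: the reclaim task's admin context
# has no service_catalog, so looking up the cinder endpoint there fails.
class EndpointNotFound(Exception):
    pass

def find_volume_endpoint(context, catalog_lookup):
    """Return a cinder endpoint, falling back past an empty catalog.

    `catalog_lookup` stands in for an explicit query to the identity
    service when the request context carries no catalog of its own.
    """
    catalog = context.get("service_catalog")
    if catalog:
        return catalog["volumev2"]
    endpoint = catalog_lookup("volumev2")
    if endpoint is None:
        raise EndpointNotFound("The service catalog is empty.")
    return endpoint

# An admin context built for a periodic task: empty catalog.
admin_ctx = {"service_catalog": {}}
endpoint = find_volume_endpoint(admin_ctx,
                                lambda svc: "http://cinder:8776/v2")
print(endpoint)  # http://cinder:8776/v2
```

Without the fallback, the lookup raises and the volume detach is silently skipped, which is exactly the "Attached to None" leftover described above.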
[Yahoo-eng-team] [Bug 1560301] [NEW] Transaction did not save when deleting floatingip_agent_gateway port
Public bug reported: In a DVR environment, after the last floating IP on a compute node is deleted, the l3-agent clears the fip namespace and calls the neutron-server to delete the floatingip_agent_gateway port, but the db transaction is not saved after deleting the data, so the port data is still in the db.

** Affects: neutron
   Importance: Undecided
   Assignee: Jie Li (jieli2087)
   Status: New

** Changed in: neutron
   Assignee: (unassigned) => Jie Li (jieli2087)

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1560301

Title: Transaction did not save when deleting floatingip_agent_gateway port
Status in neutron: New
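A minimal illustration of the symptom, using the stdlib sqlite3 module rather than neutron's SQLAlchemy session (the table and helper names are invented for this sketch): a DELETE that is never committed leaves the row in the database, which is the "port data is still in the db" behaviour described above. The fix is to issue the delete inside a transaction that is actually saved.

```python
import sqlite3

# Set up a throwaway db with one gateway port row (illustrative schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ports (id TEXT, device_owner TEXT)")
conn.execute("INSERT INTO ports VALUES "
             "('p1', 'network:floatingip_agent_gateway')")
conn.commit()

def delete_fip_agent_gw_port(connection, port_id):
    """Delete the gateway port inside a committed transaction.

    `with connection:` commits on success and rolls back on exception,
    so the delete cannot be silently lost.
    """
    with connection:
        connection.execute("DELETE FROM ports WHERE id = ?", (port_id,))

delete_fip_agent_gw_port(conn, "p1")
remaining = conn.execute("SELECT COUNT(*) FROM ports").fetchone()[0]
print(remaining)  # 0 -- the row is really gone after commit
```

Had the delete run on a session whose transaction was discarded instead of committed, the count would still be 1 after the call returned.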
[Yahoo-eng-team] [Bug 1560279] [NEW] Metadata definitions table with pagination shows inconsistent data
Public bug reported: Steps to reproduce:
1. Set the items per page in user settings to 4.
2. Go to the Admin -> Systems -> Metadata Definitions panel.
3. Click Next -> Prev -> Next -> Prev.
4. Note that you should be on the first page with only "Next" available, but it shows only 1 metadata definition instead of the 4 specified in user settings. Also, the items shown on the pages are different when you go back and forth.

** Affects: horizon
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1560279

Title: Metadata definitions table with pagination shows inconsistent data
Status in OpenStack Dashboard (Horizon): New
[Yahoo-eng-team] [Bug 1533876] Re: plug_vhostuser may fail due to device not found error when setting mtu
Reviewed: https://review.openstack.org/271444
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=adf7ba61dd73fe4bfffa20295be9a4b1006a1fe6
Submitter: Jenkins
Branch: master

commit adf7ba61dd73fe4bfffa20295be9a4b1006a1fe6
Author: Sean Mooney
Date: Fri Jan 22 17:00:36 2016 +

    stop setting mtu when plugging vhost-user ports

    vhost-user is a userspace protocol to establish connectivity between a virtio-net frontend, typically qemu, and a userspace virtio backend such as ovs with dpdk. vhost-user interfaces exist only in userspace from the host perspective and are not represented in the linux networking stack as kernel netdevs. As a result, attempting to set the mtu on a vhost-user interface using ifconfig or ip link will fail with a device not found error.

    - this change removes a call to _set_device_mtu when plugging vhost-user interfaces.
    - this change prevents the device not found error from occurring, which stopped vms booting with vhost-user interfaces due to an uncaught exception resulting in a failure to set the interface type in ovs.
    - this change makes creating a vhost-user interface an atomic action.

    This latent bug is only triggered when the mtu value is set to a value other than 0, which was the default prior to mitaka.

    Change-Id: I2e17723d5052d57cd1557bd8a173c06ea0dcb2d4
    Closes-Bug: #1533876

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1533876

Title: plug_vhostuser may fail due to device not found error when setting mtu
Status in OpenStack Compute (nova): Fix Released
Status in OpenStack Compute (nova) kilo series: In Progress
Status in OpenStack Compute (nova) liberty series: In Progress
Status in OpenStack Compute (nova) mitaka series: In Progress

Bug description: Setting the mtu of a vhost-user port with the ip command will cause vms to fail to boot with a device not found error, as vhost-user ports are not represented as kernel netdevs. This bug is present in stable/kilo, stable/liberty and master, and I would like to ask that it be backported if accepted and fixed in master.

When using vhost-user with ovs-dpdk, the vhost-user port is plugged into ovs by nova using a non-atomic call to linux_net.create_ovs_vif_port to add an ovs port, followed by a second call to linux_net.ovs_set_vhostuser_port_type to update the port type. https://github.com/openstack/nova/blob/1bf6a8760f0ef226dba927b62d4354e248b984de/nova/virt/libvirt/vif.py#L652-L655

The reuse of create_ovs_vif_port has the unintended consequence of introducing an error where the ip tool is invoked to try to set the mtu on the userspace vhost-user interface, which does not exist as a kernel netdev. https://github.com/openstack/nova/blob/1bf6a8760f0ef226dba927b62d4354e248b984de/nova/network/linux_net.py#L1379

This results in the call to set_device_mtu throwing an exception, as the ip command exits with code 1. https://github.com/openstack/nova/blob/1bf6a8760f0ef226dba927b62d4354e248b984de/nova/network/linux_net.py#L1340-L1342

As a result, the second function call to ovs_set_vhostuser_port_type is never made and the vm fails to boot.

To resolve this issue I would like to introduce a new function in linux_net.py, create_ovs_vhostuser_port, which will create the vhostuser port as an atomic action and will not set the mtu, similar to the implementation in the os-vif vhost-user driver: https://github.com/jaypipes/vif_plug_vhostuser/blob/8ac30ce32b3e0bae5d2d8f1edc9d64ac2871608e/vif_plug_vhostuser/linux_net.py#L34-L46

An alternative solution would be to add "1" to the return code check here https://github.com/openstack/nova/blob/master/nova/network/linux_net.py#L1339 or catch the exception here https://github.com/openstack/nova/blob/1bf6a8760f0ef226dba927b62d4354e248b984de/nova/virt/libvirt/vif.py#L652, however neither solves the underlying cause.

This was observed with kilo openstack on ubuntu 14.04 with ovs-dpdk deployed with puppet/fuel.
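A sketch of the shape of the committed fix. The names below (`plug_port`, the `set_device_mtu` callback, the action strings) are simplified stand-ins for nova's vif/linux_net code, not its real signatures: because a vhost-user port is not a kernel netdev, the MTU call is simply never attempted for that vif type, so port creation cannot fail half-way.

```python
VIF_TYPE_VHOSTUSER = "vhostuser"

def plug_port(vif_type, dev, mtu, set_device_mtu):
    """Create the port; set the MTU only for kernel-backed devices.

    For vhost-user ports `ip link set ... mtu` would exit non-zero
    ("device not found"), aborting plugging before the port type was
    set, so the MTU step is skipped entirely.
    """
    actions = ["create_ovs_vif_port:%s" % dev]
    if vif_type != VIF_TYPE_VHOSTUSER and mtu:
        set_device_mtu(dev, mtu)  # safe: the device exists in the kernel
        actions.append("set_mtu:%d" % mtu)
    return actions

mtu_calls = []
print(plug_port(VIF_TYPE_VHOSTUSER, "vhu123", 1500,
                lambda d, m: mtu_calls.append((d, m))))
# ['create_ovs_vif_port:vhu123'] -- no MTU call is ever attempted
```

A kernel-backed vif type (e.g. a tap device) still gets both steps, so the change only removes the doomed call.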
[Yahoo-eng-team] [Bug 1560277] [NEW] Fullstack neutron-server fails to start with: 'RuntimeError: Could not bind to 0.0.0.0:X after trying for 30 seconds'
Public bug reported:

Paste of TRACE: http://paste.openstack.org/show/491377/
Example of failure: http://logs.openstack.org/06/286106/3/check/gate-neutron-dsvm-fullstack/df82460/logs/TestOvsConnectivitySameNetwork.test_connectivity_VLANs,Ofctl_/neutron-server--2016-03-22--00-18-18-027088.log.txt.gz

** Affects: neutron
   Importance: High
   Status: Confirmed

** Tags: fullstack

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1560277

Title: Fullstack neutron-server fails to start with: 'RuntimeError: Could not bind to 0.0.0.0:X after trying for 30 seconds'
Status in neutron: Confirmed
[Yahoo-eng-team] [Bug 1413276] Re: Filtering (and limiting) list domains is not tested
Reviewed: https://review.openstack.org/207456
Committed: https://git.openstack.org/cgit/openstack/keystone/commit/?id=1ed8d3ac89a99ac523de3915f2c33870995be9c0
Submitter: Jenkins
Branch: master

commit 1ed8d3ac89a99ac523de3915f2c33870995be9c0
Author: Konstantin Maximov
Date: Fri Mar 4 19:25:19 2016 +0300

    Add test for domains list filtering and limiting

    We test the filtering and limiting of projects, users and groups lists in test_backends.py, but we don't do this for domains. Created a special test for the filtering and limiting of the domains list for the SQL and LDAP multi-domain classes.

    Change-Id: I86094021e2e12e0c0ecf5e1745e0c66ecfb2f96e
    Closes-Bug: #1413276

** Changed in: keystone
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1413276

Title: Filtering (and limiting) list domains is not tested
Status in OpenStack Identity (keystone): Fix Released

Bug description: We test the filtering and limiting of lists in test_backend.py, and do this for projects, users and groups:

class LimitTests(filtering.FilterTests):
    ENTITIES = ['user', 'group', 'project']

We don't do this for domains, since this would have problems with LDAP. We should create a special test for this inside one of the LDAP-specific multi-domain classes in test_backend_ldap.py.
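An illustrative sketch of what filter-and-limit listing means here, against an in-memory list rather than keystone's SQL/LDAP backends (`list_domains` and the sample data are invented for this example, not keystone's API):

```python
def list_domains(domains, name_filter=None, limit=None):
    """Return domains whose name contains `name_filter`, capped at `limit`.

    Mirrors the two behaviours the added test exercises: filtering the
    list, and truncating the filtered result to at most `limit` entries.
    """
    result = [d for d in domains
              if name_filter is None or name_filter in d["name"]]
    return result[:limit] if limit else result

domains = [{"name": "domainA"}, {"name": "domainB"}, {"name": "other"}]
print(list_domains(domains, name_filter="domain"))           # two matches
print(list_domains(domains, name_filter="domain", limit=1))  # truncated
```

The real test has to run the same assertions against each backend (SQL and the LDAP multi-domain configuration), since the bug was precisely that only projects, users and groups were covered.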
[Yahoo-eng-team] [Bug 1546778] Re: libvirt: resize with deleted backing image fails
There is a backport proposed to stable/kilo but I don't think we should take it since the original fix for this introduced a regression which we are having to fix on master, stable/mitaka and stable/liberty now. I'd rather not deal with that in stable/kilo too. ** Also affects: nova/kilo Importance: Undecided Status: New ** Changed in: nova/kilo Status: New => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1546778 Title: libvirt: resize with deleted backing image fails Status in OpenStack Compute (nova): Fix Released Status in OpenStack Compute (nova) kilo series: Won't Fix Status in OpenStack Compute (nova) liberty series: Fix Committed Bug description: Once the glance image from which an instance was spawned is deleted, resizes of that image fail if they would take place across more than one compute node. Migration and live block migration both succeed. Resize fails, I believe, because 'qemu-img resize' is called (https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L7218-L7221) before the backing image has been transferred from the source compute node (https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L7230-L7233). Replication requires two compute nodes. To replicate: 1. Boot an instance from an image or snapshot. 2. Delete the image from Glance. 3. Resize the instance. 
It will fail with an error similar to:

Stderr: u"qemu-img: Could not open '/var/lib/nova/instances/f77f1c5c-71f7-4645-afa1-dd30bacef874/disk': Could not open backing file: Could not open '/var/lib/nova/instances/_base/ca94b18d94077894f4ccbaafb1881a90225f1224': No such file or directory\n"
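The ordering problem the reporter identifies can be sketched as follows. The helper names (`finish_resize`, `fetch_backing`, `resize_disk`) are hypothetical stand-ins for the nova driver code linked above; the point is only that `qemu-img resize` must not run until the backing file is back in `_base/` on the destination node.

```python
def finish_resize(backing_present, fetch_backing, resize_disk):
    """Resize succeeds only if the backing image is in place first.

    In the reported bug the equivalent of resize_disk() ran before the
    equivalent of fetch_backing(), so qemu-img could not open the
    overlay's backing file and the resize failed.
    """
    if not backing_present():
        fetch_backing()  # repopulate the _base image on the destination
    resize_disk()        # now 'qemu-img resize' can open the overlay

state = {"backing": False, "resized": False}
finish_resize(lambda: state["backing"],
              lambda: state.update(backing=True),
              lambda: state.update(resized=True))
print(state)  # {'backing': True, 'resized': True}
```

With the order reversed, the resize step would observe `backing == False` and fail exactly like the qemu-img error quoted above.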
[Yahoo-eng-team] [Bug 1558343] Re: configdrive is lost after resize (libvirt driver)
** Changed in: nova/kilo
   Status: Confirmed => Won't Fix

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1558343

Title: configdrive is lost after resize (libvirt driver)
Status in OpenStack Compute (nova): Fix Released
Status in OpenStack Compute (nova) kilo series: Won't Fix
Status in OpenStack Compute (nova) liberty series: In Progress
Status in OpenStack Compute (nova) mitaka series: In Progress

Bug description: Used the trunk code as of 2016/03/16. My environment disabled the metadata agent and forced the use of a config drive.

console log before resize: http://paste.openstack.org/show/490825/
console log after resize: http://paste.openstack.org/show/490824/

qemu 18683 1 4 18:40 ? 00:00:32 /usr/bin/qemu-system-x86_64 -name instance-0002 -S -machine pc-i440fx-2.0,accel=tcg,usb=off -m 128 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 018892c7-8144-49c0-93d2-79ee83efd6a9 -smbios type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=13.0.0,serial=16c127e2-6369-4e19-a646-251a416a8dcd,uuid=018892c7-8144-49c0-93d2-79ee83efd6a9,family=Virtual Machine -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-instance-0002/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/opt/stack/data/nova/instances/018892c7-8144-49c0-93d2-79ee83efd6a9/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/opt/stack/data/nova/instances/018892c7-8144-49c0-93d2-79ee83efd6a9/disk.config,if=none,id=drive-ide0-1-1,readonly=on,format=raw,cache=none -device ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1 -netdev tap,fd=23,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:34:d6:f3,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/opt/stack/data/nova/instances/018892c7-8144-49c0-93d2-79ee83efd6a9/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -vnc 127.0.0.1:1 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on

$ blkid
/dev/vda1: LABEL="cirros-rootfs" UUID="d42bb4a4-04bb-49b0-8821-5b813116b17b" TYPE="ext3"
$

Another vm without resize:
$ blkid
/dev/vda1: LABEL="cirros-rootfs" UUID="d42bb4a4-04bb-49b0-8821-5b813116b17b" TYPE="ext3"
/dev/sr0: LABEL="config-2" TYPE="iso9660"
$
[Yahoo-eng-team] [Bug 1533876] Re: plug_vhostuser may fail due to device not found error when setting mtu
** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

** Changed in: nova/mitaka
   Status: New => In Progress

** Changed in: nova/mitaka
   Assignee: (unassigned) => sean mooney (sean-k-mooney)

** Tags added: mitaka-backport-potential
** Tags added: mitaka-rc-potential

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1533876

Title: plug_vhostuser may fail due to device not found error when setting mtu
Status in OpenStack Compute (nova): In Progress
Status in OpenStack Compute (nova) kilo series: In Progress
Status in OpenStack Compute (nova) liberty series: In Progress
Status in OpenStack Compute (nova) mitaka series: In Progress
[Yahoo-eng-team] [Bug 1546110] Re: DB error causes router rescheduling loop to fail
Reviewed: https://review.openstack.org/280753
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=b6ec40cbf754de9d189f843cbddfca67d4103ee3
Submitter: Jenkins
Branch: master

commit b6ec40cbf754de9d189f843cbddfca67d4103ee3
Author: Oleg Bondarev
Date: Tue Feb 16 18:03:52 2016 +0300

    Move db query to fetch down bindings under try/except

    In case of intermittent DB failures, router and network auto-rescheduling tasks may fail due to an error on fetching down bindings from the db. Need to put these queries under try/except to prevent an unexpected exit.

    Closes-Bug: #1546110
    Change-Id: Id48e899a5b3d906c6d1da4d03923bdda2681cd92

** Changed in: neutron
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1546110

Title: DB error causes router rescheduling loop to fail
Status in neutron: Fix Released

Bug description: In the router rescheduling looping task, the db call to get down bindings is done outside of the try/except block, which may cause the task to fail (see traceback below). Need to put the db operation inside try/except.
2016-02-15T10:44:44.259995+00:00 err: 2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall [req-79bce4c3-2e81-446c-8b37-6d30e3a964e2 - - - - -] Fixed interval looping call 'neutron.services.l3_router.l3_router_plugin.L3RouterPlugin.reschedule_routers_from_down_agents' failed
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall Traceback (most recent call last):
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/oslo_service/loopingcall.py", line 113, in _run_loop
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall     result = func(*self.args, **self.kw)
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/neutron/db/l3_agentschedulers_db.py", line 101, in reschedule_routers_from_down_agents
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall     down_bindings = self._get_down_bindings(context, cutoff)
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/neutron/db/l3_dvrscheduler_db.py", line 460, in _get_down_bindings
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall     context, cutoff)
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/neutron/db/l3_agentschedulers_db.py", line 149, in _get_down_bindings
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall     return query.all()
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2399, in all
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall     return list(self)
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2516, in __iter__
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall     return self._execute_and_instances(context)
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2529, in _execute_and_instances
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall     close_with_result=True)
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2520, in _connection_from_session
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall     **kw)
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 882, in connection
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall     execution_options=execution_options)
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 889, in _connection_for_bind
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall     conn = engine.contextual_connect(**kw)
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2039, in contextual_connect
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall     self._wrap_pool_connect(self.pool.connect, None),
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2078, in _wrap_pool_connect
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall     e, dialect, self)
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1401, in _handle_dbapi_exception_noconnection
2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall
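A sketch of the shape of the committed fix, with simplified stand-in names (the real method is `reschedule_routers_from_down_agents` in l3_agentschedulers_db.py; the callbacks and `DBError` here are illustrative): the bindings query moves inside the try/except, so an intermittent DB failure is logged and the looping task survives to run again at the next interval, instead of propagating and killing the FixedIntervalLoopingCall.

```python
class DBError(Exception):
    """Stand-in for an intermittent database failure."""

def reschedule_routers_from_down_agents(get_down_bindings, reschedule, log):
    try:
        # Before the fix, this query sat *outside* the try block, so a
        # DB hiccup propagated out and the looping call died for good.
        bindings = get_down_bindings()
        for binding in bindings:
            reschedule(binding)
    except DBError as exc:
        log("rescheduling skipped this interval: %s" % exc)

def failing_query():
    raise DBError("connection lost")

logs = []
reschedule_routers_from_down_agents(failing_query, lambda b: None,
                                    logs.append)
print(logs)  # ['rescheduling skipped this interval: connection lost']
```

On the next interval a healthy query proceeds normally, which is exactly the recovery behaviour the bug asks for.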
[Yahoo-eng-team] [Bug 1524356] Re: a level binding implement issue in _check_driver_to_bind
Reviewed: https://review.openstack.org/271959 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=9db81351ed8ddeeb279f640eccbdf539945bde7f Submitter: Jenkins Branch: master

commit 9db81351ed8ddeeb279f640eccbdf539945bde7f
Author: Hong Hui Xiao
Date: Mon Jan 25 04:09:48 2016 -0500

Fix wrong use of list of dict in _check_driver_to_bind

From [1], the segments_to_bind should be a list of dict, so the "level.segment_id in segments_to_bind" will never work. This patch extracts a set of segment ids and uses the set in the if condition. [1] https://goo.gl/yKYSTA

Change-Id: I58f1d128e6cd79546d84f7d5bfcb026affc4fc5e Closes-bug: #1524356

** Changed in: neutron Status: In Progress => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1524356

Title: a level binding implement issue in _check_driver_to_bind

Status in neutron: Fix Released

Bug description: In the function _check_driver_to_bind, the condition on row 3 below will never be satisfied: level.segment_id is a string, but segments_to_bind is a list of dicts.

1 for level in binding_levels:
2     if (level.driver == driver and
3             level.segment_id in segments_to_bind):
4         return False

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1524356/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
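The committed patch fixes this by extracting the segment ids before the membership test. A minimal, self-contained sketch of the corrected logic (the Level namedtuple and function signature are simplified stand-ins for neutron's actual objects, not the real code):

```python
from collections import namedtuple

# Stand-in for neutron's binding-level object; hypothetical, for
# illustration only.
Level = namedtuple('Level', ['driver', 'segment_id'])

def check_driver_to_bind(driver, segments_to_bind, binding_levels):
    """Return False if the driver already bound one of the segments."""
    # segments_to_bind is a list of dicts, so the ids must be pulled
    # out first; `level.segment_id in segments_to_bind` compares a
    # string against dicts and is therefore always False.
    segment_ids = {segment['id'] for segment in segments_to_bind}
    for level in binding_levels:
        if level.driver == driver and level.segment_id in segment_ids:
            return False
    return True
```

With the set of ids, a driver that already bound one of the candidate segments is correctly skipped instead of being offered the segment again.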
[Yahoo-eng-team] [Bug 1445255] Re: DVR FloatingIP to unbound allowed_address_pairs does not work
Reviewed: https://review.openstack.org/254439 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=6185a09d130edb7a21e21a354b3fa12fcbebe8a6 Submitter: Jenkins Branch: master

commit 6185a09d130edb7a21e21a354b3fa12fcbebe8a6
Author: Swaminathan Vasudevan
Date: Fri Dec 4 16:44:44 2015 -0800

DVR: Handle unbound allowed_address_pair port with FIP

If an allowed_address_pair port associated with a FloatingIP is configured to a service_port, the allowed_address_pair port should inherit the service_port's host binding and device owner if device_owner is not configured. Hence the DVR will be able to deploy the FloatingIP for the provided allowed_address_pair. In this case if the associated port's admin state changes, the allowed_address_pair's device_owner and host binding will be reverted back to None. When the associated service port is deleted, the allowed_address_pair's device_owner and host binding will be reverted as well.

Change-Id: I32b8d3e85a8e12fc146c419766596fcfb61f32f6 Closes-Bug: #1445255

** Changed in: neutron Status: In Progress => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1445255

Title: DVR FloatingIP to unbound allowed_address_pairs does not work

Status in neutron: Fix Released

Bug description: I was trying to follow Aaron's guide here: http://blog.aaronorosen.com/implementing-high-availability-instances-with-neutron-using-vrrp/ VRRP is working fine, but with DVR enabled there is no way to get a floating IP address working with a vIP. There has been a discussion about this on #openstack-neutron on the 16th of April 2015:

[23:49:26] dguerri was trying to follow Aaron's guide here: http://blog.aaronorosen.com/implementing-high-availability-instances-with-neutron-using-vrrp/
[23:49:35] and it doesn't work with DVR
[23:50:49] dguerri kevinbenton: ok, but are we sure that’s because of an unbound port?
[23:51:37] armax: seems to be
[23:51:56] armax: no l3 agent will respond to an ARP request for the floating IP when i try it
[23:52:57] kevinbenton: ok, now I am with you
[23:53:53] kevinbenton: in aaron’s case the fip is associated to an unbound port
[23:54:05] kevinbenton: and yet routing works fine
[23:55:18] kevinbenton: I don’t think that for such a scenario DVR makes much sense
[23:55:48] kevinbenton: because if we allowed the FIP namespace to land on the dvr_snat agent
[23:56:02] kevinbenton: you’re basically back to central routing
[23:56:07] armax: right
[23:56:11] kevinbenton: am I making any sense?
[23:56:29] kevinbenton: I am not saying that lack of VRRP support is nice
[23:56:37] kevinbenton: I am trying to wrap my head around this
[23:56:49] armax: i was thinking maybe there was some fallback logic where the SNAT one would host a floating IP if there wasn't another l3 agent that could handle it
[23:57:16] armax: for example if one of the compute nodes wasn't running the l3 agent
[23:57:35] armax: it would be the same scenario
[23:57:37] armax: right?

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1445255/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1560226] [NEW] No notifications on tag operations
Public bug reported: When a tag's added to (or removed from) a resource, no notification is generated indicating that the network (or port or whatever) has changed, although tags *are* included in notification and API data for those resources. It'd be more consistent if attaching a tag to a network generated a notification in the same way as if it were renamed. My use case is that Searchlight would really like to index tags attached to networks, routers, etc since it's a very powerful feature but we can't provide up to date information unless a notification's sent. Tested on neutron mitaka rc1. ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1560226 Title: No notifications on tag operations Status in neutron: New Bug description: When a tag's added to (or removed from) a resource, no notification is generated indicating that the network (or port or whatever) has changed, although tags *are* included in notification and API data for those resources. It'd be more consistent if attaching a tag to a network generated a notification in the same way as if it were renamed. My use case is that Searchlight would really like to index tags attached to networks, routers, etc since it's a very powerful feature but we can't provide up to date information unless a notification's sent. Tested on neutron mitaka rc1. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1560226/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1560221] [NEW] No port create notifications received for DHCP subnet creation nor router interface attach
Public bug reported: Creating a subnet with DHCP enabled either creates or updates a port with device_owner network:dhcp matching the network id to which the subnet belongs. While there is a notification received for the subnet creation, the port creation or update is implicit and has not necessarily taken place when the subnet creation event is received (and similarly we don't get a notification that the port has changed or been deleted when the subnet has DHCP disabled). My specific use case is that we're trying to index resource create/update/delete events for searchlight and we cannot track the network DHCP ports in the same way as we can ports created explicitly or as part of nova instance boots. The same problem exists for router interface:attach events, though with a difference that we do at least get a notification indicating the port id created. It would be nice if the ports created when attaching a router to a network also sent port.create notifications. Tested under mitaka RC-1 (or very close to) with 'messaging' as the notification driver. ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1560221 Title: No port create notifications received for DHCP subnet creation nor router interface attach Status in neutron: New Bug description: Creating a subnet with DHCP enabled either creates or updates a port with device_owner network:dhcp matching the network id to which the subnet belongs. While there is a notification received for the subnet creation, the port creation or update is implicit and has not necessarily taken place when the subnet creation event is received (and similarly we don't get a notification that the port has changed or been deleted when the subnet has DHCP disabled). 
My specific use case is that we're trying to index resource create/update/delete events for searchlight and we cannot track the network DHCP ports in the same way as we can ports created explicitly or as part of nova instance boots. The same problem exists for router interface:attach events, though with a difference that we do at least get a notification indicating the port id created. It would be nice if the ports created when attaching a router to a network also sent port.create notifications. Tested under mitaka RC-1 (or very close to) with 'messaging' as the notification driver. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1560221/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1560219] [NEW] Orchestration Resource Types does not handle additional namespaces
Public bug reported: During the Mitaka release, Heat released new resources with an additional namespace. They are:

OS::Neutron::LBaaS::LoadBalancer
OS::Neutron::LBaaS::Listener
OS::Neutron::LBaaS::Pool
OS::Neutron::LBaaS::PoolMember
OS::Neutron::LBaaS::HealthMonitor

The additional namespace distinguishes these LBaaS v2 resources from their v1 counterparts. However, in the Resource Types list in Horizon, "LBaaS" is displayed as the resource rather than the resource name. From a Horizon perspective, we shouldn't care about the additional namespace and should just display the resource name.

** Affects: horizon Importance: Undecided Assignee: Bryan Jones (jonesbr) Status: New

** Changed in: horizon Assignee: (unassigned) => Bryan Jones (jonesbr)

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1560219

Title: Orchestration Resource Types does not handle additional namespaces

Status in OpenStack Dashboard (Horizon): New

Bug description: During the Mitaka release, Heat released new resources with an additional namespace: OS::Neutron::LBaaS::LoadBalancer, OS::Neutron::LBaaS::Listener, OS::Neutron::LBaaS::Pool, OS::Neutron::LBaaS::PoolMember, OS::Neutron::LBaaS::HealthMonitor. The additional namespace distinguishes these LBaaS v2 resources from their v1 counterparts. However, in the Resource Types list in Horizon, "LBaaS" is displayed as the resource rather than the resource name. From a Horizon perspective, we shouldn't care about the additional namespace and should just display the resource name.

To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1560219/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
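A minimal sketch of the display behaviour the reporter asks for: show only the final component of the "::"-separated type name, so any number of intermediate namespaces is tolerated (the helper name is illustrative, not Horizon's actual code):

```python
# Illustrative helper (not Horizon's actual implementation): a Heat
# resource type like "OS::Neutron::LBaaS::LoadBalancer" should be
# listed by its final component, regardless of namespace depth.
def display_name(resource_type):
    return resource_type.split('::')[-1]

# display_name('OS::Neutron::LBaaS::LoadBalancer') -> 'LoadBalancer'
# display_name('OS::Heat::Stack') -> 'Stack'
```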
[Yahoo-eng-team] [Bug 1486335] Re: Create nova.conf with tox -egenconfig : ValueError: ("Expected ',' or end-of-list in", "Routes!=2.0,!=2.1,>=1.12.3; python_version=='2.7'", 'at', "; python_version==
I'm going to re-target this at the upstream Designate project since it doesn't appear to have anything to do with the Ubuntu Designate package. Ubuntu packages don't use pip or tox. ** Also affects: designate Importance: Undecided Status: New ** No longer affects: designate (Ubuntu) ** Also affects: designate (Ubuntu) Importance: Undecided Status: New ** Changed in: designate (Ubuntu) Status: New => Invalid ** No longer affects: designate -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1486335 Title: Create nova.conf with tox -egenconfig :ValueError: ("Expected ',' or end-of-list in", "Routes!=2.0,!=2.1,>=1.12.3;python_version=='2.7'", 'at', ";python_version=='2.7'") Status in OpenStack Compute (nova): Invalid Status in designate package in Ubuntu: Invalid Bug description: $git clone https://git.openstack.org/openstack/nova.git $pip install tox $tox -egenconfig cmdargs: [local('/home/ubuntu/nova/.tox/genconfig/bin/pip'), 'install', '-U', '--force-reinstall', '-r/home/ubuntu/nova/requirements.txt', '-r/home/ubuntu/nova/test-requirements.txt'] env: {'LC_ALL': 'en_US.utf-8', 'XDG_RUNTIME_DIR': '/run/user/1000', 'VIRTUAL_ENV': '/home/ubuntu/nova/.tox/genconfig', 'LESSOPEN': '| /usr/bin/lesspipe %s', 'SSH_CLIENT': '27.189.208.43 5793 22', 'LOGNAME': 'ubuntu', 'USER': 'ubuntu', 'HOME': '/home/ubuntu', 'PATH': '/home/ubuntu/nova/.tox/genconfig/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games', 'XDG_SESSION_ID': '25', '_': '/usr/local/bin/tox', 'SSH_CONNECTION': '27.189.208.43 5793 10.0.0.18 22', 'LANG': 'en_US.UTF-8', 'TERM': 'xterm', 'SHELL': '/bin/bash', 'LESSCLOSE': '/usr/bin/lesspipe %s %s', 'LANGUAGE': 'en_US', 'SHLVL': '1', 'SSH_TTY': '/dev/pts/5', 'OLDPWD': '/home/ubuntu', 'PWD': '/home/ubuntu/nova', 'PYTHONHASHSEED': '67143794', 'OS_TEST_PATH': './nova/tests/unit', 'MAIL': '/var/mail/ubuntu', 
'LS_COLORS': 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tg z=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36 :*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:'} Exception: Traceback (most recent call last): File "/home/ubuntu/nova/.tox/genconfig/local/lib/python2.7/site-packages/pip/basecommand.py", line 122, in main status = self.run(options, args) File "/home/ubuntu/nova/.tox/genconfig/local/lib/python2.7/site-packages/pip/commands/install.py", line 262, in run for req in parse_requirements(filename, finder=finder, options=options, session=session): File "/home/ubuntu/nova/.tox/genconfig/local/lib/python2.7/site-packages/pip/req.py", line 1631, in parse_requirements req = InstallRequirement.from_line(line, comes_from, prereleases=getattr(options, "pre", None)) File 
"/home/ubuntu/nova/.tox/genconfig/local/lib/python2.7/site-packages/pip/req.py", line 172, in from_line
    return cls(req, comes_from, url=url, prereleases=prereleases)
  File "/home/ubuntu/nova/.tox/genconfig/local/lib/python2.7/site-packages/pip/req.py", line 70, in __init__
    req = pkg_resources.Requirement.parse(req)
  File "/home/ubuntu/nova/.tox/genconfig/local/lib/python2.7/site-packages/pip/_vendor/pkg_resources.py", line 2606, in parse
    reqs = list(parse_requirements(s))
  File "/home/ubuntu/nova/.tox/genconfig/local/lib/python2.7/site-packages/pip/_vendor/pkg_resources.py", line 2544, in parse_requirements
    line, p, specs = scan_list(VERSION,LINE_END,line,p,(1,2),"version spec")
  File "/home/ubuntu/nova/.tox/genconfig/local/lib/python2.7/site-packages/pip/_vendor/pkg_resources.py", line 2522, in scan_list
    "Expected ',' or end-of-list in",line,"at",line[p:]
ValueError:
[Yahoo-eng-team] [Bug 1560210] [NEW] Resize server up failed when server's diskConfig option is Manual
Public bug reported: In my setup there are two types of hypervisors, ESX and KVM. The server is on an ESX host, but the resize failed because the instance was migrated from one host (the ESX host) to another (a KVM host).

** Affects: nova Importance: Undecided Status: New

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1560210

Title: Resize server up failed when server's diskConfig option is Manual

Status in OpenStack Compute (nova): New

Bug description: In my setup there are two types of hypervisors, ESX and KVM. The server is on an ESX host, but the resize failed because the instance was migrated from one host (the ESX host) to another (a KVM host).

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1560210/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1280522] Re: Replace assertEqual(None, *) with assertIsNone in tests
** Changed in: designate (Ubuntu) Status: New => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1280522

Title: Replace assertEqual(None, *) with assertIsNone in tests

Status in Anchor: Fix Released
Status in bifrost: Fix Committed
Status in Blazar: In Progress
Status in Cinder: Fix Released
Status in congress: Fix Released
Status in dox: New
Status in Glance: Fix Released
Status in glance_store: Fix Released
Status in heat: Fix Released
Status in heat-cfntools: Fix Released
Status in Heat Translator: Fix Released
Status in OpenStack Dashboard (Horizon): Fix Released
Status in Ironic: Fix Released
Status in ironic-python-agent: Fix Released
Status in OpenStack Identity (keystone): Fix Released
Status in keystoneauth: Fix Released
Status in kolla-mesos: Fix Released
Status in Manila: Fix Released
Status in networking-cisco: Fix Released
Status in OpenStack Compute (nova): Fix Released
Status in octavia: Fix Released
Status in ooi: Fix Committed
Status in os-client-config: Fix Released
Status in python-barbicanclient: Fix Released
Status in python-ceilometerclient: Fix Released
Status in python-cinderclient: Fix Released
Status in python-congressclient: Fix Released
Status in python-cueclient: Fix Released
Status in python-designateclient: Fix Released
Status in python-glanceclient: Fix Released
Status in python-heatclient: Fix Released
Status in python-ironicclient: Fix Released
Status in python-manilaclient: Fix Released
Status in python-neutronclient: Fix Released
Status in python-openstackclient: In Progress
Status in OpenStack SDK: In Progress
Status in python-swiftclient: Fix Released
Status in python-troveclient: Fix Released
Status in Python client library for Zaqar: Fix Released
Status in Sahara: Fix Released
Status in Solum: Fix Released
Status in Stackalytics: In Progress
Status in tempest: Fix Released
Status in Trove: Fix Released
Status in tuskar: Fix Released
Status in zaqar: Fix Released
Status in designate package in Ubuntu: Fix Released
Status in python-tuskarclient package in Ubuntu: Fix Committed

Bug description: Replace assertEqual(None, *) with assertIsNone in tests to have more clear messages in case of failure.

To manage notifications about this bug go to: https://bugs.launchpad.net/anchor/+bug/1280522/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
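The change this bug asks for is mechanical; a small self-contained example of the style being adopted (the test case and variable names here are illustrative):

```python
import unittest

class ExampleTest(unittest.TestCase):
    # Illustrative test; names are made up for this example.
    def test_none_check(self):
        result = None
        # Old style: self.assertEqual(None, result) -- on failure the
        # message "None != <value>" does not make clear which side was
        # the expectation.
        # New style: the intent and the failure message ("<value> is
        # not None") are both explicit.
        self.assertIsNone(result)
```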
[Yahoo-eng-team] [Bug 1558699] Re: 25 second timeout while polling the machine-id file/dev/urandom
Seems to be an issue with checking keyring on your system. ** Changed in: nova Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1558699 Title: 25 second timeout while polling the machine-id file/dev/urandom Status in OpenStack Compute (nova): Invalid Bug description: I am installing OpenStack using Nova, Glance and Keystone. The system mostly works (I have another bug report open but it is not linked to this problem at all). I find that whatever command I run, Nova or Glance or even an openstack one, takes a substantial number of seconds to complete, whether it works or not. This is true for 'openstack --help' so it's going to be an OpenStack issue. I ran a few commands through 'strace -T' and found the system call that is causing the delay: poll([{fd=11, events=POLLIN}], 1, 25000) = 0 (Timeout) I traced the file descriptor back to where it was created: socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC, 0) = 11 connect(11, {sa_family=AF_LOCAL, sun_path=@"/tmp/dbus-CDsvziK6kt"}, 23) = 0 fcntl(11, F_GETFL) = 0x2 (flags O_RDWR) fcntl(11, F_SETFL, O_RDWR|O_NONBLOCK) = 0 It then polls this FD and sends and receives a number of bits of data through it, indicating that it's all working correctly. 
Further down you can see where it stops, for no apparent reason:

recvmsg(11, {msg_name(0)=NULL, msg_iov(1)=[{"l\3\1\1H\0\0\0\3\0\0\0u\0\0\0\6\1s\0\6\0\0\0:1.182\0\0"..., 2048}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 208
write(12, "\1\0\0\0\0\0\0\0", 8) = 8
recvmsg(11, 0x7ffddb6d0cb0, MSG_CMSG_CLOEXEC) = -1 EAGAIN (Resource temporarily unavailable)
sendmsg(11, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\0\1 \0\0\0\3\0\0\0\210\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 152}, {"\27\0\0\0org.freedesktop.secrets\0\0\0\0\0", 32}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 184
poll([{fd=11, events=POLLIN}], 1, 25000) = 0 (Timeout)

I can consistently repeat this. Find attached the strace file for this. I am running:

OpenStack v1.7.0 (even generating this caused the delay).
Nova v2.30.1
Glance v1.1.0

I installed it all via pkg-add:

glance/trusty-updates,now 2:11.0.1-0ubuntu1~cloud0 all [installed]
glance-api/trusty-updates,now 2:11.0.1-0ubuntu1~cloud0 all [installed,automatic]
glance-common/trusty-updates,now 2:11.0.1-0ubuntu1~cloud0 all [installed,automatic]
glance-registry/trusty-updates,now 2:11.0.1-0ubuntu1~cloud0 all [installed,automatic]
nova-api/trusty-updates,now 2:12.0.1-0ubuntu1~cloud0 all [installed]
nova-cert/trusty-updates,now 2:12.0.1-0ubuntu1~cloud0 all [installed]
nova-common/trusty-updates,now 2:12.0.1-0ubuntu1~cloud0 all [installed,automatic]
nova-conductor/trusty-updates,now 2:12.0.1-0ubuntu1~cloud0 all [installed]
nova-consoleauth/trusty-updates,now 2:12.0.1-0ubuntu1~cloud0 all [installed]
nova-novncproxy/trusty-updates,now 2:12.0.1-0ubuntu1~cloud0 all [installed]
nova-scheduler/trusty-updates,now 2:12.0.1-0ubuntu1~cloud0 all [installed]
python-glance/trusty-updates,now 2:11.0.1-0ubuntu1~cloud0 all [installed,automatic]
python-glance-store/trusty-updates,now 0.9.1-1ubuntu1~cloud0 all [installed,automatic]
python-glanceclient/trusty-updates,now 1:1.1.0-0ubuntu1~cloud0 all [installed]
python-nova/trusty-updates,now
2:12.0.1-0ubuntu1~cloud0 all [installed,automatic] python-novaclient/trusty-updates,now 2:2.30.1-1~cloud0 all [installed] python-openstackclient/trusty-updates,now 1.7.0-1~cloud0 all [installed] |\/|artin To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1558699/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1369019] Re: VMware: need to support inventory folders
Reviewed: https://review.openstack.org/289613 Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=c8472c88433153f7b8415170cfb63242741f3684 Submitter: Jenkins Branch: master

commit c8472c88433153f7b8415170cfb63242741f3684
Author: Giridhar Jayavelu
Date: Mon Mar 7 14:34:26 2016 -0800

VMware: use datacenter path to fetch image

When fetching an image in nova using the VMware VC driver, the datacenter name is passed instead of the complete path to the datacenter. The incorrect url generated causes an error in downloading the image, as reported in Bug #1369019. This patch uses the complete inventory path of the datacenter to fetch the image.

Change-Id: I88bfe37bffb4dc38eb27472167753d1c28db7a97 Closes-Bug: #1369019

** Changed in: nova Status: In Progress => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1369019

Title: VMware: need to support inventory folders

Status in OpenStack Compute (nova): Fix Released

Bug description: The VMware driver is failing to transfer images to datastores if the datacenter is inside inventory folders. This is due to the fact that the HTTP url needs to contain the folders to access the datacenter (the datacenter path).

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1369019/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
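The fix boils down to putting the datacenter's full inventory path, not just its name, into the datastore access URL. A rough sketch of the URL shape involved (the host, paths and datastore names below are made up, and this simplifies what the driver actually does):

```python
from urllib.parse import quote, urlencode

# Sketch of a vSphere datastore-browser style URL for an image file.
# The key point from the fix: when the datacenter sits inside
# inventory folders, dcPath must carry the full inventory path
# (e.g. "folder1/dc1"), not just the datacenter name ("dc1").
def image_url(vc_host, datacenter_path, datastore, file_path):
    query = urlencode({'dcPath': datacenter_path, 'dsName': datastore})
    return 'https://%s/folder/%s?%s' % (vc_host, quote(file_path), query)
```

With only the name in dcPath, vCenter cannot resolve a datacenter nested in folders, which matches the download failures described in the bug.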
[Yahoo-eng-team] [Bug 1558866] Re: Architecture ValueError Uncaught API Exception
Russell, I have a fix up for the nova API validation part; I just need to write a functional test for it. If you could validate that the patch resolves your issue it'd be helpful.

** Changed in: horizon Status: New => Opinion

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1558866

Title: Architecture ValueError Uncaught API Exception

Status in OpenStack Dashboard (Horizon): Opinion
Status in OpenStack Compute (nova): In Progress
Status in OpenStack Compute (nova) liberty series: Confirmed
Status in OpenStack Compute (nova) mitaka series: Confirmed

Bug description: If an image is imported with an invalid architecture, instances are unable to launch and cause a ValueError exception. This exception is only visible in the logs; the UI only tells the user an exception occurred. Running Mirantis Openstack 8.0 (nova-api 2:12.0.0-1~u14.04+mos43)

2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/objects/image_meta.py", line 457, in from_dict
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions     obj._set_attr_from_legacy_names(image_props)
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/objects/image_meta.py", line 388, in _set_attr_from_legacy_names
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions     setattr(self, new_key, image_props[legacy_key])
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 72, in setter
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions     field_value = field.coerce(self, name, value)
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/oslo_versionedobjects/fields.py", line 189, in coerce
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions     return self._type.coerce(obj, attr, value)
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/objects/fields.py", line 87, in coerce
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions     raise ValueError(msg)
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions ValueError: Architecture name 'x64' is not valid
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions
2016-03-18 01:13:35.848 28025 INFO nova.api.openstack.wsgi [req-f56ff830-6e2d-46ab-b1a3-50f021725374 813401d7df1d4ad68388dee16def6a6b 9e90e9d0bb8c43b3a6fa3d2b1fb08efa - - -] HTTP exception thrown: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.

Reproduce: Import an image with an architecture named 'x64' (or presumably anything, since it's a freeform input), then try to launch an instance of the image.

Expected Result: The image launches, or if it cannot and an error is needed, the error should tell the user there is an invalid architecture. If the architecture can only be chosen from limited options, it should probably be a combobox rather than a freeform input when creating a new image.

Actual Result: Generic API exception. The image fails to launch.

To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1558866/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
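One way to surface a clear error (or accept common aliases like 'x64') is to canonicalize the freeform image property up front, before it reaches field coercion. A hedged sketch; the alias map and the set of valid names below are illustrative, not nova's actual tables:

```python
# Illustrative validation sketch (NOT nova's real code). The idea:
# normalize the freeform "architecture" image property early, so a
# bad value yields a clear validation error instead of an uncaught
# ValueError deep inside versioned-object field coercion.
VALID_ARCHITECTURES = {'i686', 'x86_64', 'aarch64', 'armv7l', 'ppc64'}
ALIASES = {'x64': 'x86_64', 'amd64': 'x86_64', 'x86': 'i686'}

def canonicalize_architecture(name):
    # Map common aliases onto canonical names, then validate.
    arch = ALIASES.get(name.lower(), name.lower())
    if arch not in VALID_ARCHITECTURES:
        raise ValueError("Architecture name '%s' is not valid" % name)
    return arch
```

Whether to accept aliases or reject them with a targeted message is a policy choice; either way the user sees which value was wrong instead of a generic API error.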
[Yahoo-eng-team] [Bug 1552686] Re: Argument 'primary' to tables.bind_row_action() decorator makes it inflexible
Reviewed: https://review.openstack.org/289287 Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=222774b210f127c5161326f9ebdf2cd5ee06b705 Submitter: Jenkins Branch: master

commit 222774b210f127c5161326f9ebdf2cd5ee06b705
Author: Timur Sufiev
Date: Mon Mar 7 14:41:20 2016 +0300

Auto-detect in i9n tests which row action to bind to

It is no longer needed to specify `primary=True|False` in integration tests. Now the test first tries to bind the action as primary and then (if unsuccessful) tries to bind one of the secondary actions.

Change-Id: Id1e2c921c15d6ef8ce7d2781623b3968ec7df374 Closes-Bug: #1552686

** Changed in: horizon Status: In Progress => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1552686

Title: Argument 'primary' to tables.bind_row_action() decorator makes it inflexible

Status in OpenStack Dashboard (Horizon): Fix Released

Bug description: During the transition from the legacy Launch Instance wizard to the Angular one, it was decided to test both workflows in integration tests: legacy for deployers' peace of mind, Angular (once tests are written) for developers' confidence. With both workflows enabled, the following bug appeared in integration tests: row-level action '[legacy] Launch Instance from Image' was the first button in an actions dropdown, but became the second one once the Angular workflow was enabled. Thus we had to remove the `primary=True` argument in the corresponding test action decorator. That imposes another inconvenience: right now the Angular action in the same table is not primary, but when we eventually disable the legacy action completely, it will become primary, and we'll have to change the tests once again. Thus the `primary` keyword arg is considered unnecessary, as it adds inflexibility to the tables.bind_row_action() decorator's behavior.
The decorator must search for the action to be bound in both the shown and the collapsed parts of the dropdown. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1552686/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1489724] Re: The check about project scope and domain scope has a problem
On the findings of comment #2, when a token is requested for:

(1) project scope, the existing token generation method returns a token under valid credentials. The token request curl command and returned token are available here: https://gist.github.com/Prosunjit/f5b859089ec340dd6584

(2) domain scope, the existing token generation method returns a token under valid credentials. The token request curl command and returned token are available here: https://gist.github.com/Prosunjit/7bfab9d4c23379da21dc

(3) both project and domain scope, the existing code returns 400 as specified in the API. The token request curl command and return status are available here: https://gist.github.com/Prosunjit/52e0f129e7836a5a0c3c

Code review: In file keystone/keystone/auth/controllers.py, in function authenticate_for_token, AuthInfo.create() handles the incoming token request. When both domain and project scope are present, the existing code does check this in the following code and returns output following the specification:

    def _validate_and_normalize_scope_data(self):
        """Validate and normalize scope data."""
        if 'scope' not in self.auth:
            return
        if sum(['project' in self.auth['scope'],
                'domain' in self.auth['scope'],
                'unscoped' in self.auth['scope'],
                'OS-TRUST:trust' in self.auth['scope']]) != 1:
            raise exception.ValidationError(
                attribute='project, domain, OS-TRUST:trust or unscoped',
                target='scope')

So I think this bug fails to demonstrate its existence. ** Changed in: keystone Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1489724 Title: The check about project scope and domain scope has a problem Status in OpenStack Identity (keystone): Invalid Bug description: The keystone.common.authorization.token_to_auth_context function has the following scope-checking code:

    def token_to_auth_context(token):
        ...
        if token.project_scoped:
            auth_context['project_id'] = token.project_id
        elif token.domain_scoped:
            auth_context['domain_id'] = token.domain_id
        else:
            LOG.debug('RBAC: Proceeding without project or domain scope')
        ...

However, if the token includes both project_scoped and domain_scoped at the same time, an exception should be raised. The check above does not cover the case where project_scoped and domain_scoped are set at the same time. For reference, the API manual has the following description of scope: "The authorization scope includes either a project or domain. If you include both project and domain, this call returns the HTTP Bad Request (400) status code because a token cannot be simultaneously scoped as both a project and domain." To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1489724/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
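The mutual-exclusion check quoted above can be illustrated in isolation. Below is a minimal standalone sketch (the `ValidationError` class and `validate_scope` helper are hypothetical stand-ins, not keystone's actual module) of rejecting a request body that names anything other than exactly one scope type, mirroring the sum()-based check:

```python
# Stand-in for keystone's exception.ValidationError (hypothetical).
class ValidationError(Exception):
    pass

SCOPE_TYPES = ('project', 'domain', 'unscoped', 'OS-TRUST:trust')

def validate_scope(auth):
    """Raise ValidationError unless exactly one scope type is present."""
    if 'scope' not in auth:
        return  # a request without an explicit scope block is allowed
    if sum(key in auth['scope'] for key in SCOPE_TYPES) != 1:
        raise ValidationError('project, domain, OS-TRUST:trust or unscoped')

validate_scope({'scope': {'project': {'id': 'p1'}}})   # valid: one scope
try:
    validate_scope({'scope': {'project': {}, 'domain': {}}})
except ValidationError:
    print('400 Bad Request')   # both at once -> rejected, per the API spec
```

This is the same shape of check the comment above quotes: the validation lives at the request-parsing layer, so the 400 is returned before any token is built.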
[Yahoo-eng-team] [Bug 1558306] Re: VPNaaS returns 500 INTERNAL on long names
Reviewed: https://review.openstack.org/293747 Committed: https://git.openstack.org/cgit/openstack/neutron-vpnaas/commit/?id=0067e2671d1c33e630dff947ac7090c370152225 Submitter: Jenkins Branch:master commit 0067e2671d1c33e630dff947ac7090c370152225 Author: James ArendtDate: Thu Mar 10 20:27:08 2016 -0800 VPNaaS returns 500 INTERNAL error with long names, descriptions Should be verifying length at REST boundary to return more reasonable 400 class user error, both because under user control and because more meaningful to the user error message. Closes-Bug: #1558306 Change-Id: I44a64d841e61b4f7f6872124a8f6242c4c96cf44 ** Changed in: neutron Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1558306 Title: VPNaaS returns 500 INTERNAL on long names Status in neutron: Fix Released Bug description: Current VPNaaS returns a 500 INTERNAL SERVER error when given long names or descriptions. Instead should give a 400 BAD REQUEST explaining user given value exceeds maximum threshold, like: neutron vpn-service-create router1 --name 012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789 Invalid input for name. 
Reason: '012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789' exceeds maximum length of 255. Applies to both names and descriptions. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1558306/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
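The fix described in the commit message above -- validating length at the REST boundary so the user sees a 400 instead of a 500 -- can be sketched roughly as follows (the `BadRequest` class and `validate_string` helper are illustrative names, not the actual neutron-vpnaas code):

```python
# Hedged sketch of length validation at the API boundary: reject an
# over-long attribute with a 400-class error carrying a clear message,
# instead of letting the database layer fail with a 500.
MAX_NAME_LEN = 255

class BadRequest(Exception):
    """Stands in for an HTTP 400 API error."""

def validate_string(value, attr, max_len=MAX_NAME_LEN):
    if len(value) > max_len:
        raise BadRequest("Invalid input for %s. Reason: '%s' exceeds "
                         "maximum length of %d." % (attr, value, max_len))
    return value

validate_string('myvpn', 'name')        # short name passes through
try:
    validate_string('0' * 420, 'name')  # an over-long name, as in the report
except BadRequest as exc:
    print(type(exc).__name__)           # the user sees a 400, not a 500
```

The design point is that the value is under the user's control, so the check belongs where the error message can still reference the offending input.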
[Yahoo-eng-team] [Bug 1560145] [NEW] "Copy Data" checkbox on Image create has the wrong label
Public bug reported: Probably coming from https://github.com/openstack/horizon/commit/259973dd06cfa8b477ecffea3bde78dd2dc15864 How to reproduce: - Go to project -> Image - Click on "Create Image" Expected: There is a "Copy Data" checkbox Result: There is an "Image Location" checkbox with weird alignment. ** Affects: horizon Importance: Undecided Status: New ** Tags: mitaka-backport-potential ux -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1560145 Title: "Copy Data" checkbox on Image create has the wrong label Status in OpenStack Dashboard (Horizon): New Bug description: Probably coming from https://github.com/openstack/horizon/commit/259973dd06cfa8b477ecffea3bde78dd2dc15864 How to reproduce: - Go to project -> Image - Click on "Create Image" Expected: There is a "Copy Data" checkbox Result: There is an "Image Location" checkbox with weird alignment. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1560145/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1556818] Re: docstring warnings in glance
Reviewed: https://review.openstack.org/291926 Committed: https://git.openstack.org/cgit/openstack/glance/commit/?id=adfc7e5a3fef26d3781d5aa19f5eaa86ac09d033 Submitter: Jenkins Branch: master commit adfc7e5a3fef26d3781d5aa19f5eaa86ac09d033 Author: Tom Cocozzello Date: Wed Mar 9 13:53:37 2016 -0600 fix docstring warnings and errors There are many warnings and errors that occur when the docs are generated. Co-Author-By: Danny Al-Gaaf Closes-Bug: 1556818 Change-Id: Ifebeb3904f136a56bd6fe6877220b279a1f98354 ** Changed in: glance Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1556818 Title: docstring warnings in glance Status in Glance: Fix Released Bug description: Building the glance docs produces some docstring warnings:

/develop/OpenStack/glance/glance/location.py:docstring of glance.location.StoreLocations:4: ERROR: Unexpected indentation.
/develop/OpenStack/glance/glance/api/v1/images.py:docstring of glance.api.v1.images.Controller.detail:4: WARNING: Field list ends without a blank line; unexpected unindent.
/develop/OpenStack/glance/glance/api/v1/images.py:docstring of glance.api.v1.images.Controller.index:11: WARNING: Field list ends without a blank line; unexpected unindent.
/develop/OpenStack/glance/glance/api/v1/images.py:docstring of glance.api.v1.images.Controller.meta:6: WARNING: Field list ends without a blank line; unexpected unindent.
/develop/OpenStack/glance/glance/api/v1/members.py:docstring of glance.api.v1.members.Controller.index:6: WARNING: Field list ends without a blank line; unexpected unindent.
/develop/OpenStack/glance/glance/api/v1/members.py:docstring of glance.api.v1.members.Controller.index_shared_images:5: WARNING: Field list ends without a blank line; unexpected unindent.
/develop/OpenStack/glance/glance/api/v2/image_members.py:docstring of glance.api.v2.image_members.ImageMembersController.index:6: WARNING: Field list ends without a blank line; unexpected unindent.
/develop/OpenStack/glance/glance/api/v2/image_members.py:docstring of glance.api.v2.image_members.ImageMembersController.show:5: WARNING: Field list ends without a blank line; unexpected unindent.
/develop/OpenStack/glance/glance/async/flows/ovf_process.py:docstring of glance.async.flows.ovf_process.OVAImageExtractor.extract:7: ERROR: Unexpected indentation.
/develop/OpenStack/glance/glance/async/flows/ovf_process.py:docstring of glance.async.flows.ovf_process.OVAImageExtractor.extract:8: WARNING: Block quote ends without a blank line; unexpected unindent.
/develop/OpenStack/glance/glance/common/rpc.py:docstring of glance.common.rpc.Controller:9: WARNING: Definition list ends without a blank line; unexpected unindent.
/develop/OpenStack/glance/glance/common/rpc.py:docstring of glance.common.rpc.Controller:15: WARNING: Field list ends without a blank line; unexpected unindent.
/develop/OpenStack/glance/glance/common/rpc.py:docstring of glance.common.rpc.Controller.register:5: WARNING: Field list ends without a blank line; unexpected unindent.
/develop/OpenStack/glance/glance/common/rpc.py:docstring of glance.common.rpc.RPCClient.bulk_request:4: WARNING: Field list ends without a blank line; unexpected unindent.
/develop/OpenStack/glance/glance/common/rpc.py:docstring of glance.common.rpc.RPCClient.bulk_request:9: WARNING: Definition list ends without a blank line; unexpected unindent.
/develop/OpenStack/glance/glance/registry/api/v1/images.py:docstring of glance.registry.api.v1.images.Controller.detail:4: WARNING: Field list ends without a blank line; unexpected unindent.
/develop/OpenStack/glance/glance/registry/api/v1/images.py:docstring of glance.registry.api.v1.images.Controller.index:4: WARNING: Field list ends without a blank line; unexpected unindent.
/develop/OpenStack/glance/glance/tests/functional/v1/test_api.py:docstring of glance.tests.functional.v1.test_api.TestApi.test_download_non_exists_image_raises_http_forbidden:14: ERROR: Unexpected indentation.
/develop/OpenStack/glance/glance/tests/integration/legacy_functional/test_v1_api.py:docstring of glance.tests.integration.legacy_functional.test_v1_api.TestApi.test_queued_process_flow:9: ERROR: Unexpected indentation.
/develop/OpenStack/glance/glance/tests/integration/legacy_functional/test_v1_api.py:docstring of glance.tests.integration.legacy_functional.test_v1_api.TestApi.test_queued_process_flow:10: WARNING: Block quote ends without a blank line; unexpected unindent.
/develop/OpenStack/glance/glance/tests/integration/legacy_functional/test_v1_api.py:docstring of glance.tests.integration.legacy_functional.test_v1_api.TestApi.test_queued_process_flow:11: WARNING: Bullet list ends without a blank line; unexpected
[Yahoo-eng-team] [Bug 1558866] Re: Architecture ValueError Uncaught API Exception
** Tags added: liberty-backport-potential ** Also affects: nova/liberty Importance: Undecided Status: New ** Also affects: nova/mitaka Importance: Undecided Status: New ** Changed in: nova/mitaka Status: New => Confirmed ** Changed in: nova/liberty Status: New => Confirmed ** Changed in: nova Importance: Low => Medium ** Changed in: nova/mitaka Importance: Undecided => Medium ** Changed in: nova/liberty Importance: Undecided => Medium ** Tags added: mitaka-backport-potential -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1558866 Title: Architecture ValueError Uncaught API Exception Status in OpenStack Dashboard (Horizon): New Status in OpenStack Compute (nova): In Progress Status in OpenStack Compute (nova) liberty series: Confirmed Status in OpenStack Compute (nova) mitaka series: Confirmed Bug description: If an image is imported with an invalid Architecture, instances are unable to launch and cause a ValueError exception. This exception is only visible in logs and UI only tells user an exception occurred. 
Running Mirantis Openstack 8.0 (nova-api 2:12.0.0-1~u14.04+mos43)

2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/objects/image_meta.py", line 457, in from_dict
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions     obj._set_attr_from_legacy_names(image_props)
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/objects/image_meta.py", line 388, in _set_attr_from_legacy_names
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions     setattr(self, new_key, image_props[legacy_key])
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 72, in setter
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions     field_value = field.coerce(self, name, value)
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/oslo_versionedobjects/fields.py", line 189, in coerce
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions     return self._type.coerce(obj, attr, value)
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/objects/fields.py", line 87, in coerce
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions     raise ValueError(msg)
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions ValueError: Architecture name 'x64' is not valid
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions
2016-03-18 01:13:35.848 28025 INFO nova.api.openstack.wsgi [req-f56ff830-6e2d-46ab-b1a3-50f021725374 813401d7df1d4ad68388dee16def6a6b 9e90e9d0bb8c43b3a6fa3d2b1fb08efa - - -] HTTP exception thrown: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
Reproduce: Import image with architecture named 'x64' (or presumably anything, since it's a freeform input), try to launch instance of image. Expected Result: Image launches, or if it cannot and error is needed, error should tell user there is an invalid architecture. If architecture can only be chosen from limited options, it should probably be a combobox rather than a freeform input when creating a new image. Actual Result: Generic API exception. Image fails to launch. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1558866/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1558866] Re: Architecture ValueError Uncaught API Exception
The valid architectures for an image in nova are defined here: https://github.com/openstack/nova/blob/13.0.0.0rc1/nova/compute/arch.py#L72 I've added Horizon to this bug report since Horizon could create a dropdown box using that list (although horizon might not be able to import that code and it's not available in the API/CLI, so it might just have to be copied into horizon and kept synchronized with nova). For nova, we could do some image property validation for the architecture in the API so we could fail with a better error than what you get from the compute side when spawn fails. ** Also affects: horizon Importance: Undecided Status: New ** Changed in: nova Status: New => Confirmed ** Changed in: nova Importance: Undecided => Low ** Tags added: api images -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1558866 Title: Architecture ValueError Uncaught API Exception Status in OpenStack Dashboard (Horizon): New Status in OpenStack Compute (nova): Confirmed Bug description: If an image is imported with an invalid Architecture, instances are unable to launch and cause a ValueError exception. This exception is only visible in logs and UI only tells user an exception occurred. 
Running Mirantis Openstack 8.0 (nova-api 2:12.0.0-1~u14.04+mos43)

2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/objects/image_meta.py", line 457, in from_dict
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions     obj._set_attr_from_legacy_names(image_props)
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/objects/image_meta.py", line 388, in _set_attr_from_legacy_names
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions     setattr(self, new_key, image_props[legacy_key])
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 72, in setter
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions     field_value = field.coerce(self, name, value)
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/oslo_versionedobjects/fields.py", line 189, in coerce
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions     return self._type.coerce(obj, attr, value)
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/objects/fields.py", line 87, in coerce
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions     raise ValueError(msg)
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions ValueError: Architecture name 'x64' is not valid
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions
2016-03-18 01:13:35.848 28025 INFO nova.api.openstack.wsgi [req-f56ff830-6e2d-46ab-b1a3-50f021725374 813401d7df1d4ad68388dee16def6a6b 9e90e9d0bb8c43b3a6fa3d2b1fb08efa - - -] HTTP exception thrown: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
Reproduce: Import image with architecture named 'x64' (or presumably anything, since it's a freeform input), try to launch instance of image. Expected Result: Image launches, or if it cannot and error is needed, error should tell user there is an invalid architecture. If architecture can only be chosen from limited options, it should probably be a combobox rather than a freeform input when creating a new image. Actual Result: Generic API exception. Image fails to launch. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1558866/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
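The early validation the reporter asks for could look roughly like the sketch below. The helper name and the architecture list are illustrative only (nova's canonical list lives in nova/compute/arch.py, linked earlier in this thread); the point is that a free-form value such as 'x64' should fail fast at image-create time with an actionable message rather than a generic 500 at launch time:

```python
# Illustrative subset of valid architecture names (hypothetical; nova
# keeps the authoritative list in nova/compute/arch.py).
VALID_ARCHS = ('i686', 'x86_64', 'aarch64', 'armv7l', 'ppc64', 's390x')

def validate_arch(name):
    """Return the canonical architecture name or raise a clear error."""
    normalized = name.strip().lower()
    if normalized not in VALID_ARCHS:
        raise ValueError("Architecture name '%s' is not valid; expected "
                         "one of: %s" % (name, ', '.join(VALID_ARCHS)))
    return normalized

print(validate_arch('x86_64'))   # canonical value is stored on the image
try:
    validate_arch('x64')         # the value from this bug report
except ValueError as exc:
    print(exc)
```

Whether the dropdown lives in Horizon or the check lives in the nova API, the validation logic is the same lookup against the canonical list.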
[Yahoo-eng-team] [Bug 1559241] Re: cryptography-1.3 breaks unit tests
pyOpenSSL 16.0.0 has been released, which appears to have fixed this. ** Changed in: os-cloud-config Status: Triaged => Fix Released ** Changed in: os-cloud-config Assignee: (unassigned) => Ben Nemec (bnemec) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1559241 Title: cryptography-1.3 breaks unit tests Status in neutron: Confirmed Status in os-cloud-config: Fix Released Bug description: With cryptography 1.3, the unit tests are failing with: Traceback (most recent call last): File "os_cloud_config/tests/test_keystone_pki.py", line 36, in test_create_ca_and_signing_pairs self.assertTrue(ca_key.check()) File "/home/fedora/os-cloud-config/.tox/py27/lib/python2.7/site-packages/OpenSSL/crypto.py", line 243, in check if _lib.EVP_PKEY_type(self._pkey.type) != _lib.EVP_PKEY_RSA: AttributeError: '_cffi_backend.CDataGCP' object has no attribute 'type' I've opened https://github.com/pyca/cryptography/issues/2837 upstream to get this fixed. In the meantime we may need to exclude cryptography 1.3 in global requirements. logstash query: http://logstash.openstack.org/#dashboard/file/logstash.json?query=build_status%3A%20FAILURE%20AND%20message%3A%20%5C%22_cffi_backend.CDataGCP%5C%22 Appears to be affecting os-cloud-config and neutron-lbaas right now. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1559241/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1556672] Re: doc: wrong internal link in opts/index.rst
Reviewed: https://review.openstack.org/292140 Committed: https://git.openstack.org/cgit/openstack/glance/commit/?id=b9e710659fc98cefb256a345409b2e0e8eef164d Submitter: Jenkins Branch: master commit b9e710659fc98cefb256a345409b2e0e8eef164d Author: Danny Al-Gaaf Date: Sun Mar 13 21:47:50 2016 +0100 Fix link to document Since opts/index.rst is not on the same level as configuring.rst, the link needs to specify the correct location. Closes-Bug: #1556672 Change-Id: I9b9b4de130fa7c0d26f2a8ad8031afd6473ae479 Signed-off-by: Danny Al-Gaaf ** Changed in: glance Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1556672 Title: doc: wrong internal link in opts/index.rst Status in Glance: Fix Released Bug description: Building the docs generates the following output: glance/doc/source/opts/index.rst:5: WARNING: unknown document: configuring The code is: Refer to :doc:`Basic Configuration ` Due to the location of the file in the tree it should be: Refer to :doc:`Basic Configuration <../../configuring>` to generate the correct link. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1556672/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1461459] Re: Allow disabling the evacuate cleanup mechanism in compute manager
I think the DocImpact in the nova change was probably just to get the config options docs updated with the new workaround option. If there is anything else we could do with this, it could be to note in the docs related to evacuate operations that if you're running nova < liberty, there is a potential data loss issue with the evacuate functionality if you don't have that patch and don't set the option appropriately. For example: http://docs.openstack.org/user-guide-admin/cli_nova_evacuate.html http://docs.openstack.org/admin-guide-cloud/compute-node-down.html There was a spec in liberty to make this smarter, but the existing problem description applies to nova compute nodes < liberty: http://specs.openstack.org/openstack/nova-specs/specs/liberty/implemented/robustify_evacuate.html#problem-description If the hostname changes on the compute or you have a typo in your configs (multiple compute nodes managing the same vcenter running at the same time), that evacuate code can delete your instances. That's why the workarounds.destroy_after_evacuate=False option is a way to safely get around this until you're sure that you're cleaning up a failed compute node (a real evacuation rather than a misconfiguration or hostname change), until you get your computes to liberty+. ** Changed in: nova Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1461459 Title: Allow disabling the evacuate cleanup mechanism in compute manager Status in OpenStack Compute (nova): Invalid Status in openstack-manuals: Triaged Bug description: https://review.openstack.org/174779 commit 6f1f9dbc211356a3d0e2d46d3a984d7ceee79ca6 Author: Tony Breeds Date: Tue Jan 27 11:17:54 2015 -0800 Allow disabling the evacuate cleanup mechanism in compute manager This mechanism attempts to destroy any locally-running instances on startup if instance.host != self.host.
The assumption is that the instance has been evacuated and is safely running elsewhere. This is a dangerous assumption to make, so this patch adds a configuration variable to disable this behavior if it's not desired. Note that disabling it may have implications for the case where instances *were* evacuated, given potential shared resources. To counter that problem, this patch also makes _init_instance() skip initialization of the instance if it appears to be owned by another host, logging a prominent warning in that case. As a result, if you have destroy_after_evacuate=False and you start a nova compute with an incorrect hostname, or run it twice from another host, then the worst that will happen is you get log warnings about the instances on the host being ignored. This should be an indication that something is wrong, but still allow for fixing it without any loss. If the configuration option is disabled and a legitimate evacuation does occur, simply enabling it and then restarting the compute service will cause the cleanup to occur. This is added to the workarounds config group because it is really only relevant while evacuate is fundamentally broken in this way. It needs to be refactored to be more robust, and once that is done, this should be able to go away. Conflicts: nova/compute/manager.py nova/tests/unit/compute/test_compute.py nova/tests/unit/compute/test_compute_mgr.py nova/utils.py NOTE: In nova/utils.py a new section has been introduced but only the option addressed by this backport has been included. DocImpact: New configuration option, and peril warning Partial-Bug: #1419785 (cherry picked from commit 922148ac45c5a70da8969815b4f47e3c758d6974) -- squashed with commit -- Create a 'workarounds' config group. This group is for very specific reasons. If you're: - Working around an issue in a system tool (e.g. libvirt or qemu) where the fix is in flight/discussed in that community.
- The tool can be/is fixed in some distributions and rather than patch the code those distributions can trivially set a config option to get the "correct" behavior. This is a good place for your workaround. (cherry picked from commit b1689b58409ab97ef64b8cec2ba3773aacca7ac5) -- Change-Id: Ib9a3c72c096822dd5c65c905117ae14994c73e99 To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1461459/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team
[Yahoo-eng-team] [Bug 1560060] Re: test_get_certificate fails on stable/liberty with "AttributeError: load_der_x509_certificate"
** Also affects: glance/newton Importance: Undecided Status: New ** Also affects: glance/mitaka Importance: Undecided Status: Confirmed ** Also affects: glance/liberty Importance: Undecided Status: New ** Changed in: glance/liberty Importance: Undecided => Medium ** Changed in: glance/liberty Status: New => Triaged ** Changed in: glance/mitaka Status: Confirmed => Triaged ** Changed in: glance/mitaka Importance: Undecided => Medium ** Changed in: glance/newton Status: New => Triaged ** Changed in: glance/newton Importance: Undecided => Medium -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1560060 Title: test_get_certificate fails on stable/liberty with "AttributeError: load_der_x509_certificate" Status in Glance: Triaged Status in Glance liberty series: Triaged Status in Glance mitaka series: Triaged Status in Glance newton series: Triaged Bug description: http://logs.openstack.org/periodic-stable/periodic-glance-python27-liberty/2193fe7/console.html#_2016-03-21_06_34_24_295

2016-03-21 06:34:24.295 | FAIL: glance.tests.unit.common.test_signature_utils.TestSignatureUtils.test_get_certificate
2016-03-21 06:34:24.295 | tags: worker-6
2016-03-21 06:34:24.295 | --
2016-03-21 06:34:24.295 | Traceback (most recent call last):
2016-03-21 06:34:24.295 |   File "/home/jenkins/workspace/periodic-glance-python27-liberty/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", line 1318, in patched
2016-03-21 06:34:24.295 |     patching.__exit__(*exc_info)
2016-03-21 06:34:24.295 |   File "/home/jenkins/workspace/periodic-glance-python27-liberty/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", line 1482, in __exit__
2016-03-21 06:34:24.295 |     delattr(self.target, self.attribute)
2016-03-21 06:34:24.295 | AttributeError: load_der_x509_certificate
2016-03-21 06:34:24.295 | Ran 2851 tests in 312.180s

This is due to the cryptography 1.3 release on 3/18 which removed that public method which is mocked in the test: https://github.com/openstack/glance/blob/stable/liberty/glance/tests/unit/common/test_signature_utils.py#L322 To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1560060/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1560060] [NEW] test_get_certificate fails on stable/liberty with "AttributeError: load_der_x509_certificate"
Public bug reported: http://logs.openstack.org/periodic-stable/periodic-glance- python27-liberty/2193fe7/console.html#_2016-03-21_06_34_24_295 2016-03-21 06:34:24.295 | FAIL: glance.tests.unit.common.test_signature_utils.TestSignatureUtils.test_get_certificate 2016-03-21 06:34:24.295 | tags: worker-6 2016-03-21 06:34:24.295 | -- 2016-03-21 06:34:24.295 | Traceback (most recent call last): 2016-03-21 06:34:24.295 | File "/home/jenkins/workspace/periodic-glance-python27-liberty/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", line 1318, in patched 2016-03-21 06:34:24.295 | patching.__exit__(*exc_info) 2016-03-21 06:34:24.295 | File "/home/jenkins/workspace/periodic-glance-python27-liberty/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", line 1482, in __exit__ 2016-03-21 06:34:24.295 | delattr(self.target, self.attribute) 2016-03-21 06:34:24.295 | AttributeError: load_der_x509_certificate 2016-03-21 06:34:24.295 | Ran 2851 tests in 312.180s This is due to the cryptography 1.3 release on 3/18 which removed that public method which is mocked in the test: https://github.com/openstack/glance/blob/stable/liberty/glance/tests/unit/common/test_signature_utils.py#L322 ** Affects: glance Importance: Undecided Status: Confirmed ** Tags: gate liberty-backport-potential ** Changed in: glance Status: New => Confirmed ** Tags added: gate -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. 
https://bugs.launchpad.net/bugs/1560060 Title: test_get_certificate fails on stable/liberty with "AttributeError: load_der_x509_certificate" Status in Glance: Confirmed To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1560060/+subscriptions
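The root cause above can be reproduced in isolation: `mock.patch` cannot manage an attribute that no longer exists on the target, so once cryptography 1.3 removed `load_der_x509_certificate` the test's patching machinery broke. A minimal sketch using a stand-in class (`FakeCryptoModule` and `try_patch` are illustrative names, not glance code; in the gate run the error surfaced during patch cleanup, but the underlying problem is the same vanished attribute):

```python
from unittest import mock


class FakeCryptoModule:
    """Stand-in for a module whose public function was removed upstream."""


def try_patch():
    # Patching a name the target no longer defines raises AttributeError,
    # which is the class of failure hit after the cryptography 1.3 release.
    try:
        with mock.patch.object(FakeCryptoModule, 'load_der_x509_certificate'):
            pass
    except AttributeError as exc:
        return str(exc)
    return None


print(try_patch())
```

The usual fixes are to stop mocking the removed name or to mock a wrapper in the project's own code, so upstream API removals cannot break the test suite.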
[Yahoo-eng-team] [Bug 1551037] Re: HTTP 409 error being masked as HTTP 500 on image delete
** Also affects: glance/newton Importance: Undecided Status: New ** Changed in: glance/newton Status: New => Triaged ** Changed in: glance/newton Importance: Undecided => High -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1551037 Title: HTTP 409 error being masked as HTTP 500 on image delete Status in Glance: Confirmed Status in Glance mitaka series: Confirmed Status in Glance newton series: Triaged Bug description: Using ceph for image storage. When calling image-delete on an in-use image: 500 Internal Server Error 500 Internal Server Error The server has either erred or is incapable of performing the requested operation. (HTTP 500) Though the log shows: glance.common.wsgi InUseByStore: The image cannot be deleted because it is in use through the backend store outside of Glance. I would expect that this returns HTTP 409. In the code (images.py), I see an except clause for exception.InUseByStore, but the RBD driver is raising a different type of exception. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1551037/+subscriptions
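The report describes an exception-translation gap: the API layer only converts its own `InUseByStore` into a 409, while the RBD store raises a store-level class that falls through to the generic 500 handler. A hedged sketch of that pattern (the class and function names here are illustrative, not glance's actual hierarchy):

```python
class InUseByStore(Exception):
    """API-layer exception the delete handler knows how to translate."""


class StoreInUseByStore(Exception):
    """Driver-level exception, analogous to what the RBD store raises."""


def delete_image(raise_driver_exc):
    """Return an HTTP status for a delete attempt on an in-use image."""
    try:
        if raise_driver_exc:
            raise StoreInUseByStore('image is in use by the backend store')
        raise InUseByStore('image is in use by the backend store')
    except InUseByStore:
        return 409            # correctly translated to Conflict
    except Exception:
        return 500            # unmatched driver exception is masked


print(delete_image(False), delete_image(True))
```

The fix is then either to translate the store exception into the API-layer one at the boundary, or to add the store class to the except clause.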
[Yahoo-eng-team] [Bug 1497940] Re: ovs_neutron_agent doesn't start on Windows because validate_local_ip_method uses linux specific implementation
Reviewed: https://review.openstack.org/227077 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=371e8aa0769086d069cc9005f1c454fb348afd46 Submitter: Jenkins Branch: master commit 371e8aa0769086d069cc9005f1c454fb348afd46 Author: Adelina Tuvenie Date: Wed Sep 23 17:59:11 2015 -0700 Ovs agent can't start on Windows because of validate_local_ip Change I4b4527c28d0738890e33b343c9e17941e780bc24 introduced a validate_local_ip sanity check for the local_ip to see that it belongs to the host. This method uses a Linux-specific implementation that fails on Windows. This patch fixes this bug by adding an implementation for validate_local_ip that works on Windows as well, using netifaces. Change-Id: Ia8299512687d9d7135fe013fbb38f2b28d54125d Closes-Bug: #1497940 ** Changed in: neutron Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1497940 Title: ovs_neutron_agent doesn't start on Windows because validate_local_ip_method uses linux specific implementation Status in neutron: Fix Released Bug description: Change I4b4527c28d0738890e33b343c9e17941e780bc24 introduced a validate_local_ip sanity check for the local_ip to see that it belongs to the host. This method uses a Linux-specific implementation [1] that fails on Windows. [1] https://review.openstack.org/#/c/154043/13/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1497940/+subscriptions
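The merged patch enumerates host addresses with the third-party `netifaces` library. A dependency-free way to express the same "does this IP belong to the host" check with the standard library is to try binding a socket to the address; this is a sketch of the idea, not the code the patch merged, and `local_ip_is_bound` is a made-up name:

```python
import socket


def local_ip_is_bound(ip):
    """Return True if `ip` is configured on this host.

    Binding succeeds only for addresses the host owns, so this works on
    both Linux and Windows without parsing Linux-only tool output.
    """
    try:
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind((ip, 0))
        sock.close()
        return True
    except OSError:
        return False


print(local_ip_is_bound('127.0.0.1'))    # loopback is always local
print(local_ip_is_bound('203.0.113.1'))  # TEST-NET address, not configured
```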
[Yahoo-eng-team] [Bug 1557946] Re: functional test test_icmp_from_specific_address fails
Reviewed: https://review.openstack.org/294576 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=420d5c7987f8ff7473d70697e5606f07ccfe7c1d Submitter: Jenkins Branch: master commit 420d5c7987f8ff7473d70697e5606f07ccfe7c1d Author: Jakub Libosvar Date: Fri Mar 18 12:51:23 2016 + conn_testers: Bump timeout for ICMPv6 echo tests In IPv6 scenarios NDP can increase the round-trip time of ICMPv6 packets over 1 second. The patch increases the timeout for ICMPv6 to 2 seconds. Note that this will extend scenarios when ping is supposed to fail. Change-Id: Iec7d3138aee3fc904312dbc45ef76854ad0ea789 Closes-Bug: 1557946 ** Changed in: neutron Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1557946 Title: functional test test_icmp_from_specific_address fails Status in neutron: Fix Released Bug description: This is the trace: Traceback (most recent call last): File "neutron/tests/functional/agent/test_firewall.py", line 532, in test_icmp_from_specific_address direction=self.tester.INGRESS) File "neutron/tests/common/conn_testers.py", line 36, in wrap return f(self, direction, *args, **kwargs) File "neutron/tests/common/conn_testers.py", line 162, in assert_connection testing_method(direction, protocol, src_port, dst_port) File "neutron/tests/common/conn_testers.py", line 147, in _test_icmp_connectivity src_namespace, ip_address)) neutron.tests.common.conn_testers.ConnectionTesterException: ICMP packets can't get from test-d5baf3c4-aca8-4fab-84aa-ae3bfe9dbc14 namespace to 2001:db8:::2 address Example of a failure [1], logstash query [2] 16 hits in the last 2 days [1] http://logs.openstack.org/47/286347/12/check/gate-neutron-dsvm-functional/6dd0cdb/testr_results.html.gz [2] http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22ICMP%20packets%20can't%20get%5C%22 To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1557946/+subscriptions
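The fix simply gives ICMPv6 more headroom because NDP neighbor resolution can push the first round-trip past one second. The per-family selection can be sketched as a tiny helper (the constants mirror the values in the commit message; `icmp_timeout_for` is a made-up name, not the conn_testers API):

```python
import ipaddress

# 1 second suffices for IPv4/ARP; IPv6 gets 2 seconds because NDP can
# delay the first echo reply, per the commit message above.
ICMP_TIMEOUTS = {4: 1, 6: 2}


def icmp_timeout_for(address):
    """Pick the ping timeout based on the address family."""
    version = ipaddress.ip_address(address).version
    return ICMP_TIMEOUTS[version]


print(icmp_timeout_for('192.0.2.1'))    # IPv4 -> 1
print(icmp_timeout_for('2001:db8::2'))  # IPv6 -> 2
```

As the commit notes, the trade-off is that tests asserting "ping must fail" now take up to a second longer per probe.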
[Yahoo-eng-team] [Bug 1560005] [NEW] Read only or reserved properties could contain better error messages
Public bug reported: Overview: The task output does not distinguish the failure cause; it just contains a message about an internal error. However, in the logs the difference is spelled out, so we should pass this on to the user to make it easier to debug problems. How to reproduce: curl -X POST http://127.0.0.1:9292/v2/tasks -H "X-Auth-Token: $token" -d '{"type": "import","input": {"import_from": "http://google.com","import_from_format": "qcow2","image_properties": {"disk_format": "vhd","id": "ovf","created_at": "1"}}}' glance task-show *id returned* ** Affects: glance Importance: Low Assignee: Niall Bunting (niall-bunting) Status: New ** Changed in: glance Importance: Undecided => Low -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1560005 Title: Read only or reserved properties could contain better error messages Status in Glance: New Bug description: Overview: The task output does not distinguish the failure cause; it just contains a message about an internal error. However, in the logs the difference is spelled out, so we should pass this on to the user to make it easier to debug problems. How to reproduce: curl -X POST http://127.0.0.1:9292/v2/tasks -H "X-Auth-Token: $token" -d '{"type": "import","input": {"import_from": "http://google.com","import_from_format": "qcow2","image_properties": {"disk_format": "vhd","id": "ovf","created_at": "1"}}}' glance task-show *id returned* To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1560005/+subscriptions
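The reproduction fails because `id` and `created_at` are read-only/reserved image properties, yet the task result only reports an internal error. The kind of user-facing validation message the reporter asks for can be sketched like this (the reserved set and `check_image_properties` are illustrative, not glance's actual validation code):

```python
# Illustrative reserved set; glance's real list is defined elsewhere.
RESERVED_PROPERTIES = {'id', 'created_at', 'updated_at', 'status'}


def check_image_properties(image_properties):
    """Return a user-facing error message, or None if the input is clean."""
    bad = sorted(RESERVED_PROPERTIES & set(image_properties))
    if bad:
        return ('Attribute(s) %s are reserved and cannot be set via '
                'task input' % ', '.join(bad))
    return None


props = {'disk_format': 'vhd', 'id': 'ovf', 'created_at': '1'}
print(check_image_properties(props))
```

Surfacing that string in the task's `message` field would let users fix their request without reading server logs.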
[Yahoo-eng-team] [Bug 1560001] [NEW] [RFE] Moving Bgp out of the neutron
Public bug reported: This defect is about moving BGP dynamic routing functionality out of the Neutron main repo to its own specific repo, if required. ** Affects: neutron Importance: Undecided Status: New ** Tags: l3-bgp ** Summary changed: - moving-bgp-out-of-the-neutron-tree + [RFE] Moving Bgp out of the neutron -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1560001 Title: [RFE] Moving Bgp out of the neutron Status in neutron: New Bug description: This defect is about moving BGP dynamic routing functionality out of the Neutron main repo to its own specific repo, if required. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1560001/+subscriptions
[Yahoo-eng-team] [Bug 1560003] [NEW] [RFE] Creating a new stadium project for BGP Dynamic Routing effort
Public bug reported: This bug is logged for getting the drivers team's opinion on creating a new stadium project for the BGP dynamic routing project. ** Affects: neutron Importance: Undecided Status: New ** Tags: l3-bgp -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1560003 Title: [RFE] Creating a new stadium project for BGP Dynamic Routing effort Status in neutron: New Bug description: This bug is logged for getting the drivers team's opinion on creating a new stadium project for the BGP dynamic routing project. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1560003/+subscriptions
[Yahoo-eng-team] [Bug 1461459] Re: Allow disabling the evacuate cleanup mechanism in compute manager
** Also affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1461459 Title: Allow disabling the evacuate cleanup mechanism in compute manager Status in OpenStack Compute (nova): New Status in openstack-manuals: Triaged Bug description: https://review.openstack.org/174779 commit 6f1f9dbc211356a3d0e2d46d3a984d7ceee79ca6 Author: Tony Breeds Date: Tue Jan 27 11:17:54 2015 -0800 Allow disabling the evacuate cleanup mechanism in compute manager This mechanism attempts to destroy any locally-running instances on startup if instance.host != self.host. The assumption is that the instance has been evacuated and is safely running elsewhere. This is a dangerous assumption to make, so this patch adds a configuration variable to disable this behavior if it's not desired. Note that disabling it may have implications for the case where instances *were* evacuated, given potential shared resources. To counter that problem, this patch also makes _init_instance() skip initialization of the instance if it appears to be owned by another host, logging a prominent warning in that case. As a result, if you have destroy_after_evacuate=False and you start a nova compute with an incorrect hostname, or run it twice from another host, then the worst that will happen is you get log warnings about the instances on the host being ignored. This should be an indication that something is wrong, but still allow for fixing it without any loss. If the configuration option is disabled and a legitimate evacuation does occur, simply enabling it and then restarting the compute service will cause the cleanup to occur. This is added to the workarounds config group because it is really only relevant while evacuate is fundamentally broken in this way.
It needs to be refactored to be more robust, and once that is done, this should be able to go away. Conflicts: nova/compute/manager.py nova/tests/unit/compute/test_compute.py nova/tests/unit/compute/test_compute_mgr.py nova/utils.py NOTE: In nova/utils.py a new section has been introduced but only the option addressed by this backport has been included. DocImpact: New configuration option, and peril warning Partial-Bug: #1419785 (cherry picked from commit 922148ac45c5a70da8969815b4f47e3c758d6974) -- squashed with commit -- Create a 'workarounds' config group. This group is for very specific reasons. If you're: - Working around an issue in a system tool (e.g. libvirt or qemu) where the fix is in flight/discussed in that community. - The tool can be/is fixed in some distributions and rather than patch the code those distributions can trivially set a config option to get the "correct" behavior. This is a good place for your workaround. (cherry picked from commit b1689b58409ab97ef64b8cec2ba3773aacca7ac5) -- Change-Id: Ib9a3c72c096822dd5c65c905117ae14994c73e99 To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1461459/+subscriptions
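The behavior the commit message describes, destroy on ownership mismatch only when the workaround flag allows it, otherwise skip and warn, reduces to a small decision function. A sketch under those assumptions (the function name and return values are illustrative, not nova's actual code):

```python
def evacuate_cleanup_action(instance_host, self_host, destroy_after_evacuate):
    """Decide what compute startup should do with a locally-running instance."""
    if instance_host == self_host:
        return 'initialize'      # normal case: this host owns the instance
    if destroy_after_evacuate:
        return 'destroy'         # assume it was evacuated and runs elsewhere
    # Ownership mismatch but cleanup disabled: leave the instance alone and
    # log a prominent warning, so a mis-set hostname cannot wipe real VMs.
    return 'skip-and-warn'


print(evacuate_cleanup_action('node1', 'node1', True))
print(evacuate_cleanup_action('node2', 'node1', True))
print(evacuate_cleanup_action('node2', 'node1', False))
```

The safety argument in the commit is exactly the third branch: with the flag off, the worst outcome of a wrong hostname is log noise, not data loss.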
[Yahoo-eng-team] [Bug 1558343] Re: configdrive is lost after resize.(libvirt driver)
** Also affects: nova/mitaka Importance: Undecided Status: New ** Changed in: nova/mitaka Status: New => Confirmed ** Changed in: nova/mitaka Importance: Undecided => High ** Changed in: nova/mitaka Milestone: None => mitaka-rc2 -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1558343 Title: configdrive is lost after resize.(libvirt driver) Status in OpenStack Compute (nova): Fix Released Status in OpenStack Compute (nova) kilo series: Confirmed Status in OpenStack Compute (nova) liberty series: Confirmed Status in OpenStack Compute (nova) mitaka series: Confirmed Bug description: Used the trunk code as of 2016/03/16 my environment disabled metadata agent and forced the use of config drive. console log before resize: http://paste.openstack.org/show/490825/ console log after resize: http://paste.openstack.org/show/490824/ qemu 18683 1 4 18:40 ?00:00:32 /usr/bin/qemu-system-x86_64 -name instance-0002 -S -machine pc-i440fx-2.0,accel=tcg,usb=off -m 128 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 018892c7-8144-49c0-93d2-79ee83efd6a9 -smbios type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=13.0.0,serial=16c127e2-6369-4e19-a646-251a416a8dcd,uuid=018892c7-8144-49c0-93d2-79ee83efd6a9,family=Virtual Machine -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-instance-0002/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/opt/stack/data/nova/instances/018892c7-8144-49c0-93d2-79ee83efd6a9/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/opt/stack/da 
ta/nova/instances/018892c7-8144-49c0-93d2-79ee83efd6a9/disk.config,if=none,id=drive-ide0-1-1,readonly=on,format=raw,cache=none -device ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1 -netdev tap,fd=23,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:34:d6:f3,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/opt/stack/data/nova/instances/018892c7-8144-49c0-93d2-79ee83efd6a9/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -vnc 127.0.0.1:1 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on $ blkid /dev/vda1: LABEL="cirros-rootfs" UUID="d42bb4a4-04bb-49b0-8821-5b813116b17b" TYPE="ext3" $ another vm without resize: $ blkid /dev/vda1: LABEL="cirros-rootfs" UUID="d42bb4a4-04bb-49b0-8821-5b813116b17b" TYPE="ext3" /dev/sr0: LABEL="config-2" TYPE="iso9660" $ To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1558343/+subscriptions
[Yahoo-eng-team] [Bug 1557585] Re: Xenapi live-migration does not work at all now
** Also affects: nova/mitaka Importance: Undecided Status: New ** Changed in: nova/mitaka Importance: Undecided => High ** Changed in: nova/mitaka Status: New => Confirmed ** Changed in: nova/mitaka Milestone: None => mitaka-rc2 -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1557585 Title: Xenapi live-migration does not work at all now Status in OpenStack Compute (nova): Fix Released Status in OpenStack Compute (nova) mitaka series: Confirmed Bug description: In case Nova calculated the live migration type by itself and it's a block live migration, it will not work if Xen is used because of an invalid check in the driver: https://github.com/openstack/nova/blob/dae13c5153a3aee25c8ded1cb154cc56a04cd7a2/nova/virt/xenapi/vmops.py#L2391 Basically, this is because here block_migration will be None and the real value will be stored in migrate_data.block_migration To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1557585/+subscriptions
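The invalid check reads the `block_migration` argument directly, which is `None` when Nova computed the migration type itself; the real value lives on `migrate_data`. The fix pattern is a simple fallback, sketched here with illustrative names (this is not the merged patch):

```python
class MigrateData:
    """Minimal stand-in for nova's migrate_data object."""
    def __init__(self, block_migration):
        self.block_migration = block_migration


def effective_block_migration(block_migration, migrate_data):
    """Prefer the explicit argument, falling back to migrate_data."""
    if block_migration is not None:
        return block_migration
    return migrate_data.block_migration


# Caller passed None because Nova decided the migration type itself:
print(effective_block_migration(None, MigrateData(True)))   # falls back
print(effective_block_migration(False, MigrateData(True)))  # explicit wins
```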
[Yahoo-eng-team] [Bug 1559978] [NEW] [RFE] log segmentation_id over threshold for monitoring
Public bug reported: Use case === Monitoring of the "segmentation resources". Logging the status of such resources as we go, (or the pass over a certain threshold) would allow monitoring solutions to identify tripping over certain levels, and warn the administrator to take action: cleaning up unused tenant networks, changing configuration, changing segmentation technologies. etc. Description = Depending on configuration, and underlaying technologies, the segmentation ids can be exhausted (vlan/vni/tunnel keys, etc..), making it a consumable resource. External monitoring solutions have no easy way to determine the amount of "segmentation resources" available on the underlaying resource technology. Alternatives == One alternative could be providing a generic API to retrieve the usage of resources. That would require the monitoring solution to make API calls and therefore use credentials, making it harder to leverage standard deployments and monitoring tools. This could also be considered as a second step of this RFE. ** Affects: neutron Importance: Undecided Status: New ** Tags: rfe ** Description changed: Use case === Monitoring of the "segmentation resources". - Logging the status of such resources as we go, (or the pass over a certain threshold) - would allow monitoring solutions to identify tripping over certain levels, - and warn the administrator to take action: cleaning up unused tenant networks, - changing configuration, changing segmentation technologies. etc. - + Logging the status of such resources as we go, (or the pass over a certain threshold) would allow monitoring solutions to identify tripping over + certain levels, and warn the administrator to take action: cleaning up + unused tenant networks, changing configuration, changing segmentation + technologies. etc. Description = - Depending on configuration, and underlaying technologies, the segmentation ids - can be exhausted (vlan/vni/tunnel keys, etc..), making it a consumable resource. 
+ Depending on configuration, and underlaying technologies, the segmentation + ids can be exhausted (vlan/vni/tunnel keys, etc..), making it a consumable + resource. - External monitoring solutions have no easy way to determine the amount of + External monitoring solutions have no easy way to determine the amount of "segmentation resources" available on the underlaying resource technology. - Alternatives == - One alternative could be providing a generic API to retrieve the usage of + One alternative could be providing a generic API to retrieve the usage of resources. That would require the monitoring solution to make API calls and therefore use credentials, making it harder to leverage standard deployments and monitoring tools. This could also be considered as a second step of this RFE. -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1559978 Title: [RFE] log segmentation_id over threshold for monitoring Status in neutron: New Bug description: Use case === Monitoring of the "segmentation resources". Logging the status of such resources as we go, (or the pass over a certain threshold) would allow monitoring solutions to identify tripping over certain levels, and warn the administrator to take action: cleaning up unused tenant networks, changing configuration, changing segmentation technologies. etc. Description = Depending on configuration, and underlaying technologies, the segmentation ids can be exhausted (vlan/vni/tunnel keys, etc..), making it a consumable resource. External monitoring solutions have no easy way to determine the amount of "segmentation resources" available on the underlaying resource technology. Alternatives == One alternative could be providing a generic API to retrieve the usage of resources. 
That would require the monitoring solution to make API calls and therefore use credentials, making it harder to leverage standard deployments and monitoring tools. This could also be considered as a second step of this RFE. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1559978/+subscriptions
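The RFE's first step is essentially a threshold check on a consumable ID space, emitted as a log line monitoring tools can scrape. A sketch of that idea (the 80% threshold, logger name, and `check_segmentation_usage` are made up for illustration, not neutron defaults):

```python
import logging

logging.basicConfig(level=logging.WARNING, format='%(levelname)s %(message)s')
LOG = logging.getLogger('segmentation-monitor')

USAGE_WARN_THRESHOLD = 0.8  # illustrative threshold, not a neutron setting


def check_segmentation_usage(kind, used, total):
    """Warn when a segmentation ID pool (vlan/vni/...) crosses the threshold."""
    ratio = used / total
    if ratio >= USAGE_WARN_THRESHOLD:
        LOG.warning('%s id pool at %.0f%% (%d/%d); consider cleaning up '
                    'unused tenant networks', kind, ratio * 100, used, total)
    return ratio


check_segmentation_usage('vlan', 3900, 4094)      # logs a warning
check_segmentation_usage('vxlan', 100, 16777215)  # well under threshold
```

An operator's alerting system can then trigger on the log pattern without needing API credentials, which is the advantage over the API-based alternative above.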
[Yahoo-eng-team] [Bug 1300141] Re: Output from API calls in the tests
** Changed in: horizon/icehouse Status: New => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1300141 Title: Output from API calls in the tests Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Dashboard (Horizon) icehouse series: Won't Fix Bug description: After pulling the latest this morning, I see the following output in the tests: test_index (openstack_dashboard.dashboards.admin.info.tests.SystemInfoViewTests) ... INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): public.nova.example.com DEBUG:cinderclient.client:Connection refused: HTTPConnectionPool(host='public.nova.example.com', port=8776): Max retries exceeded with url: /v1/os-quota-sets/1/defaults (Caused by : [Errno -2] Name or service not known) ok It looks like there is an API call that should be mocked, that isn't. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1300141/+subscriptions
[Yahoo-eng-team] [Bug 1360012] Re: Database Launch Instance should show Flavor Details
Trove has been removed from Horizon, so this bug report is irrelevant. Adding trove-dashboard so they can triage. ** Changed in: horizon Status: In Progress => Invalid ** Tags removed: trove ** Changed in: horizon Assignee: Prasoon Telang (prasoontelang) => (unassigned) ** Also affects: trove-dashboard Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1360012 Title: Database Launch Instance should show Flavor Details Status in OpenStack Dashboard (Horizon): Invalid Status in trove-dashboard: New Bug description: The modal has a "Flavor" dropdown, but the user has no idea what the Flavor is defined as. We should follow the same convention as what the "Launch Instance" modal has. See attached image. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1360012/+subscriptions
[Yahoo-eng-team] [Bug 1229819] Re: Unit Test fixes with the new "router" dashboard
** Changed in: horizon/icehouse Status: New => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1229819 Title: Unit Test fixes with the new "router" dashboard Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Dashboard (Horizon) icehouse series: Won't Fix Bug description: Currently the new dashboard "router" doesn't have its unit test run by default. This needs to be fixed. Additionally, many existing unit tests such as the network create tests from the "project" and "admin" dashboards and also the instance create tests have been changed to accommodate the cisco n1k plugin only when the plugin is being used. The existing solution is very cumbersome with a check being done to test the config variable in the local_settings. A better solution needs to be found to ensure these are run in a better manner. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1229819/+subscriptions
[Yahoo-eng-team] [Bug 1421287] Re: [data processing] Remove css from templates
Sahara now resides in its own dashboard repo; adding Sahara to bug tracker ** Changed in: horizon Status: In Progress => Invalid ** Also affects: sahara Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1421287 Title: [data processing] Remove css from templates Status in OpenStack Dashboard (Horizon): Invalid Status in Sahara: New Bug description: * low priority, technical debt item * There are a couple places in the Data Processing panels where there is css defined inside of the templates. Ideally, existing css rules can be applied to achieve the desired results, but it is possible that we may need to add new styles to the projects css definitions. The places where css is defined in data processing templates are: .../horizon/openstack_dashboard/dashboards/project/data_processing/job_binaries/templates/data_processing.job_binaries/job_binaries.html #id_job_binary_url { width: 200px !important; } .form-help-block { float: left; text-align: left; width: 300px; } and .../horizon/openstack_dashboard/dashboards/project/data_processing/jobs/templates/data_processing.jobs/jobs.html .job_origin_main, .job_origin_lib { width: 200px !important; } .job_binary_add_button, .job_binary_remove_button { width: 80px !important; margin-left: 5px; } .form-help-block { float: left; text-align: left; width: 300px; } .lib-input-div { float:left; width:320px; } .job-libs-display { float:left; } .actions_column { width: 210px !important; } To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1421287/+subscriptions
[Yahoo-eng-team] [Bug 1430232] Re: In "Job execution" I can select cluster in "Error" state
Sahara now resides in its own dashboard repo; adding Sahara to bug tracker ** Changed in: horizon Assignee: Akanksha Agrawal (akanksha-aha) => (unassigned) ** Changed in: horizon Status: New => Invalid ** Also affects: sahara Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1430232 Title: In "Job execution" I can select cluster in "Error" state Status in OpenStack Dashboard (Horizon): Invalid Status in Sahara: New Bug description: ENVIRONMENT: devstack STEPS TO REPRODUCE: 1. Create Vanilla2 cluster 2. Create wrong cluster and retain in "Error" state 3. Create Job 4. Click on "Launch on existing cluster" ACTUAL RESULT: In "Cluster" I can select all clusters EXPECTED RESULT: In "Cluster" I can select only clusters in "Active" state To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1430232/+subscriptions
[Yahoo-eng-team] [Bug 1331563] Re: container does not show up after creation
** Changed in: horizon/icehouse Status: New => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1331563 Title: container does not show up after creation Status in OpenStack Dashboard (Horizon): Invalid Status in OpenStack Dashboard (Horizon) icehouse series: Won't Fix Bug description: When creating a container, the Horizon behavior is confusing. It loads you into the container, but the container is not listed. You have to back up the URL and reload to get it to show up. Our users find this very confusing. As you can see in the screen shot, the URL shows that I should be inside a new container, which I am not, since I can see the old ones and my new container "dude_wheres_my_container" is nowhere to be found. See attached screenshot. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1331563/+subscriptions
[Yahoo-eng-team] [Bug 1346389] Re: Disk unit missing in Overview Usage Summary
** Changed in: horizon/icehouse Status: In Progress => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1346389 Title: Disk unit missing in Overview Usage Summary Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Dashboard (Horizon) icehouse series: Won't Fix Bug description: In Project and Admin Overview pages, the Disk column in the Usage Summary table does not contain a unit. It would be better to add the unit for clarity. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1346389/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1371787] Re: In the Create and Update User form under Identity->Users, Description section has glyphicon-eye-open and close icons
** Changed in: horizon/juno Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1371787 Title: In the Create and Update User form under Identity->Users, Description section has glyphicon-eye-open and close icons Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Dashboard (Horizon) juno series: Fix Released Bug description: Click on the Users panel under Identity and click either 'Create User' or 'Edit'. In the 'Description' section of the form, the glyphicon-eye-open/close icon shows up. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1371787/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1369621] Re: Project limits don't update when using the input selector to change instance count
** Changed in: horizon/juno Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1369621 Title: Project limits don't update when using the input selector to change instance count Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Dashboard (Horizon) juno series: Fix Released Bug description: To recreate: Project -> Compute -> Instances -> Launch instance Change the instance count using the up/down arrows Observe how the project limits do not update To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1369621/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1377372] Re: run_tests always requires selenium (even without '--with-selenium')
** Changed in: horizon/juno Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1377372 Title: run_tests always requires selenium (even without '--with-selenium') Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Dashboard (Horizon) juno series: Fix Released Bug description: When packaging horizon, distros may use ./run_tests.sh -N and supply the required python packages externally. If selenium is not present on the system and run_tests.sh is invoked without selenium (i.e. the arguments --with-selenium and --only-selenium are NOT used), it will fail. Below is the Python stack dump when building Horizon Juno RC1 in Debian: == ERROR: Failure: ImportError (No module named selenium.common) -- Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/nose/loader.py", line 414, in loadTestsFromName addr.filename, addr.module) File "/usr/lib/python2.7/dist-packages/nose/importer.py", line 47, in importFromPath return self.importFromDir(dir_path, fqname) File "/usr/lib/python2.7/dist-packages/nose/importer.py", line 94, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/home/zigo/sources/openstack/juno/horizon/build-area/horizon-2014.2~rc1/horizon/test/webdriver.py", line 24, in from selenium.common import exceptions as selenium_exceptions ImportError: No module named selenium.common Please make the selenium import optional (with an except ImportError: to catch it). To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1377372/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
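The optional import the reporter asks for can be sketched roughly as follows. This is a minimal sketch, not the actual Horizon patch; `require_selenium` and `SELENIUM_AVAILABLE` are illustrative names.

```python
# Sketch of making the selenium import optional, as the report requests.
# The helper names here are hypothetical, not Horizon code.
try:
    from selenium.common import exceptions as selenium_exceptions  # noqa: F401
    SELENIUM_AVAILABLE = True
except ImportError:
    selenium_exceptions = None
    SELENIUM_AVAILABLE = False


def require_selenium():
    """Fail loudly only when a selenium-based test actually runs."""
    if not SELENIUM_AVAILABLE:
        raise RuntimeError(
            "selenium is not installed; install it or run the test "
            "suite without --with-selenium/--only-selenium")
```

With a guard like this, a packaging build that runs `./run_tests.sh -N` can import the test modules on a machine without selenium, and only the selenium-specific jobs fail, with a clear message instead of an ImportError at collection time.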
[Yahoo-eng-team] [Bug 1386126] Re: Eye icon on field is misplaced if previous field has validation message
** Changed in: horizon/juno Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1386126 Title: Eye icon on field is misplaced if previous field has validation message Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Dashboard (Horizon) juno series: Fix Released Bug description: You can see it on the Identity -> Users -> Create User form. Encountered on both the master branch and juno. The same misplacement also occurs with the new Font Awesome icons which are to replace the current glyphicons. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1386126/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1378525] Re: Broken L3 HA migration should be blocked
** Changed in: horizon/juno Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1378525 Title: Broken L3 HA migration should be blocked Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Dashboard (Horizon) juno series: Fix Released Status in neutron: Fix Released Bug description: While the HA property is update-able, and resulting router-get invocations suggest that the router is HA, the migration itself fails on the agent. This is deceiving and confusing and should be blocked until the migration itself is fixed in a future patch. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1378525/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1422049] Re: Security group checking action permissions raise error
** Changed in: horizon/juno Status: Confirmed => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1422049 Title: Security group checking action permissions raise error Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Dashboard (Horizon) juno series: Won't Fix Bug description: When using nova-network, I got the output on horizon: [Sun Feb 15 02:48:41.965163 2015] [:error] [pid 21259:tid 140656137611008] Error while checking action permissions. [Sun Feb 15 02:48:41.965184 2015] [:error] [pid 21259:tid 140656137611008] Traceback (most recent call last): [Sun Feb 15 02:48:41.965193 2015] [:error] [pid 21259:tid 140656137611008] File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py", line 1260, in _filter_action [Sun Feb 15 02:48:41.965199 2015] [:error] [pid 21259:tid 140656137611008] return action._allowed(request, datum) and row_matched [Sun Feb 15 02:48:41.965205 2015] [:error] [pid 21259:tid 140656137611008] File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/actions.py", line 137, in _allowed [Sun Feb 15 02:48:41.965211 2015] [:error] [pid 21259:tid 140656137611008] return self.allowed(request, datum) [Sun Feb 15 02:48:41.965440 2015] [:error] [pid 21259:tid 140656137611008] File "/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/access_and_security/security_groups/tables.py", line 83, in allowed [Sun Feb 15 02:48:41.965457 2015] [:error] [pid 21259:tid 140656137611008] if usages['security_groups']['available'] <= 0: [Sun Feb 15 02:48:41.965466 2015] [:error] [pid 21259:tid 140656137611008] KeyError: 'available' [Sun Feb 15 02:48:41.986480 2015] [:error] [pid 21259:tid 140656137611008] Error while checking action permissions. 
[Sun Feb 15 02:48:41.986533 2015] [:error] [pid 21259:tid 140656137611008] Traceback (most recent call last): [Sun Feb 15 02:48:41.986569 2015] [:error] [pid 21259:tid 140656137611008] File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py", line 1260, in _filter_action [Sun Feb 15 02:48:41.986765 2015] [:error] [pid 21259:tid 140656137611008] return action._allowed(request, datum) and row_matched [Sun Feb 15 02:48:41.986806 2015] [:error] [pid 21259:tid 140656137611008] File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/actions.py", line 137, in _allowed [Sun Feb 15 02:48:41.986841 2015] [:error] [pid 21259:tid 140656137611008] return self.allowed(request, datum) [Sun Feb 15 02:48:41.987010 2015] [:error] [pid 21259:tid 140656137611008] File "/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/access_and_security/security_groups/tables.py", line 83, in allowed [Sun Feb 15 02:48:41.987051 2015] [:error] [pid 21259:tid 140656137611008] if usages['security_groups']['available'] <= 0: [Sun Feb 15 02:48:41.987088 2015] [:error] [pid 21259:tid 140656137611008] KeyError: 'available' To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1422049/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
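The traceback boils down to an unguarded nested dict lookup: `usages['security_groups']['available']` assumes the quota payload always contains the `available` key, which the nova-network backend apparently omits. A defensive version could look like the sketch below (illustrative only, not the actual Horizon fix; the function name is made up):

```python
# Illustrative guard for the lookup that raises KeyError in the report.
# When 'available' is missing (as with nova-network here), treat the
# quota as not limiting rather than crashing the permissions check.
def security_group_create_allowed(usages):
    available = usages.get('security_groups', {}).get('available')
    if available is not None and available <= 0:
        return False
    return True
```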
[Yahoo-eng-team] [Bug 1392735] Re: Project Limits don't refresh while selecting Flavor
** Changed in: horizon/juno Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1392735 Title: Project Limits don't refresh while selecting Flavor Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Dashboard (Horizon) juno series: Fix Released Bug description: To recreate: Project -> Compute -> Instances -> Launch instance Change the flavor using the up/down arrows Observe how the project limits do not update until the user tabs out of the field To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1392735/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1407055] Re: All unit test jobs failing due to timezone change (test_timezone_offset_is_displayed)
** Changed in: horizon/juno Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1407055 Title: All unit test jobs failing due to timezone change (test_timezone_offset_is_displayed) Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Dashboard (Horizon) icehouse series: Fix Released Status in OpenStack Dashboard (Horizon) juno series: Fix Released Bug description: 2015-01-02 06:09:33.597 | == 2015-01-02 06:09:33.597 | FAIL: test_timezone_offset_is_displayed (openstack_dashboard.dashboards.settings.user.tests.UserSettingsTest) 2015-01-02 06:09:33.597 | -- 2015-01-02 06:09:33.597 | Traceback (most recent call last): 2015-01-02 06:09:33.598 | File "/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/dashboards/settings/user/tests.py", line 30, in test_timezone_offset_is_displayed 2015-01-02 06:09:33.598 | self.assertContains(res, "UTC +04:00: Russia (Moscow) Time") 2015-01-02 06:09:33.598 | File "/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/django/test/testcases.py", line 351, in assertContains 2015-01-02 06:09:33.598 | msg_prefix + "Couldn't find %s in response" % text_repr) 2015-01-02 06:09:33.598 | AssertionError: False is not true : Couldn't find 'UTC +04:00: Russia (Moscow) Time' in response 2015-01-02 06:09:33.598 | u"False is not true : Couldn't find 'UTC +04:00: Russia (Moscow) Time' in response" = self._formatMessage(u"False is not true : Couldn't find 'UTC +04:00: Russia (Moscow) Time' in response", "%s is not true" % safe_repr(False)) 2015-01-02 06:09:33.598 | >> raise self.failureException(u"False is not true : Couldn't find 'UTC +04:00: Russia (Moscow) Time' in response") Noticed in master and Icehouse jobs, I assume Juno is affected too. The timezone appears to be listed as UTC +03:00 now. 
To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1407055/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
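The underlying breakage is a hardcoded offset in the test: the tz database dropped Moscow from UTC+4 to UTC+3 in late 2014. One way to make such assertions tzdata-proof is to derive the expected label at test time. A sketch using the modern `zoneinfo` module (Python 3.9+; not what Horizon used at the time, which was pytz):

```python
# Sketch: derive an offset label like "UTC +03:00" from the tz database
# instead of hardcoding it, so tzdata updates can't break the test.
import datetime
from zoneinfo import ZoneInfo  # Python 3.9+


def utc_offset_label(tzname, when):
    total = int(ZoneInfo(tzname).utcoffset(when).total_seconds())
    sign = "+" if total >= 0 else "-"
    hours, minutes = divmod(abs(total) // 60, 60)
    return "UTC %s%02d:%02d" % (sign, hours, minutes)
```

For 2015-01-02, `utc_offset_label("Europe/Moscow", ...)` yields "UTC +03:00", matching the offset the failing jobs observed.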
[Yahoo-eng-team] [Bug 1394370] Re: [OSSA 2014-040] horizon login page is vulnerable to DOS attack (CVE-2014-8124)
** Changed in: horizon/juno Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1394370 Title: [OSSA 2014-040] horizon login page is vulnerable to DOS attack (CVE-2014-8124) Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Dashboard (Horizon) icehouse series: Fix Released Status in OpenStack Dashboard (Horizon) juno series: Fix Released Status in OpenStack Security Advisory: Fix Released Bug description: We have horizon deployed with mysql sessions. I believe this issue exists with all db backed sessions, and likely memcached too (but I am not sure). Every request to the login page is generating a new session record in the db. This is based upon this line of code: https://github.com/django/django/blob/master/django/contrib/sessions/backends/db.py#L41 What happens is as soon as you access request.session['foo'] then you are going to get an entry in the db. I have placed some debugging code in a variety of locations where we are accessing the session store before we should be, which creates these records: https://github.com/openstack/horizon/blob/master/horizon/middleware.py#L94 The check for the timeout should never occur if there is no authenticated user. So the check a few lines below needs to be moved higher. https://github.com/openstack/django_openstack_auth/blob/master/openstack_auth/utils.py#L50 This check I am not sure how to work around. We are accessing the session, which creates records, just trying to keep track if a user is logged in or not. It seems like we are not using the django auth mechanisms correctly here, and I can't see if there is a workaround. 
To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1394370/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
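The reporter's point about horizon/middleware.py can be illustrated with a small sketch (hypothetical code, not the shipped fix): the timeout bookkeeping should bail out before touching `request.session` for anonymous users, because in db-backed session stores merely reading or writing a session key is what persists a new row.

```python
# Illustrative reordering: check authentication BEFORE touching the
# session, so anonymous hits on the login page never create a session
# row. Names are hypothetical, not the actual Horizon middleware.
def session_timed_out(request, now, timeout_seconds):
    if not getattr(request.user, "is_authenticated", False):
        return False  # anonymous request: never touch request.session
    last = request.session.get("last_activity", now)
    request.session["last_activity"] = now
    return (now - last) > timeout_seconds
```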
[Yahoo-eng-team] [Bug 1382023] Re: Horizon fails with Django-1.7
** Changed in: horizon/icehouse Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1382023 Title: Horizon fails with Django-1.7 Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Dashboard (Horizon) icehouse series: Fix Released Status in OpenStack Dashboard (Horizon) juno series: Fix Released Bug description: as reported here: http://lists.alioth.debian.org/pipermail/openstack-devel/2014-October/007488.html, Horizon Juno and Icehouse in Debian fail with this backtrace: http://paste.fedoraproject.org/142396/13459234/ [Thu Oct 16 11:33:45.901644 2014] [:error] [pid 1581] [remote ::1:27029] mod_wsgi (pid=1581): Exception occurred processing WSGI script '/usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi'. [Thu Oct 16 11:33:45.901690 2014] [:error] [pid 1581] [remote ::1:27029] Traceback (most recent call last): [Thu Oct 16 11:33:45.901707 2014] [:error] [pid 1581] [remote ::1:27029] File "/usr/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 168, in __call__ [Thu Oct 16 11:33:45.901793 2014] [:error] [pid 1581] [remote ::1:27029] self.load_middleware() [Thu Oct 16 11:33:45.901807 2014] [:error] [pid 1581] [remote ::1:27029] File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 46, in load_middleware [Thu Oct 16 11:33:45.901879 2014] [:error] [pid 1581] [remote ::1:27029] mw_instance = mw_class() [Thu Oct 16 11:33:45.901889 2014] [:error] [pid 1581] [remote ::1:27029] File "/usr/lib/python2.7/site-packages/django/middleware/locale.py", line 23, in __init__ [Thu Oct 16 11:33:45.901929 2014] [:error] [pid 1581] [remote ::1:27029] for url_pattern in get_resolver(None).url_patterns: [Thu Oct 16 11:33:45.901939 2014] [:error] [pid 1581] [remote ::1:27029] File 
"/usr/lib/python2.7/site-packages/django/core/urlresolvers.py", line 367, in url_patterns [Thu Oct 16 11:33:45.902065 2014] [:error] [pid 1581] [remote ::1:27029] patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module) [Thu Oct 16 11:33:45.902076 2014] [:error] [pid 1581] [remote ::1:27029] File "/usr/lib/python2.7/site-packages/django/core/urlresolvers.py", line 361, in urlconf_module [Thu Oct 16 11:33:45.902091 2014] [:error] [pid 1581] [remote ::1:27029] self._urlconf_module = import_module(self.urlconf_name) [Thu Oct 16 11:33:45.902099 2014] [:error] [pid 1581] [remote ::1:27029] File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module [Thu Oct 16 11:33:45.902134 2014] [:error] [pid 1581] [remote ::1:27029] __import__(name) [Thu Oct 16 11:33:45.902146 2014] [:error] [pid 1581] [remote ::1:27029] File "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/urls.py", line 36, in [Thu Oct 16 11:33:45.902182 2014] [:error] [pid 1581] [remote ::1:27029] url(r'', include(horizon.urls)) [Thu Oct 16 11:33:45.902191 2014] [:error] [pid 1581] [remote ::1:27029] File "/usr/lib/python2.7/site-packages/django/conf/urls/__init__.py", line 29, in include [Thu Oct 16 11:33:45.902231 2014] [:error] [pid 1581] [remote ::1:27029] patterns = getattr(urlconf_module, 'urlpatterns', urlconf_module) [Thu Oct 16 11:33:45.902242 2014] [:error] [pid 1581] [remote ::1:27029] File "/usr/lib/python2.7/site-packages/django/utils/functional.py", line 224, in inner [Thu Oct 16 11:33:45.902336 2014] [:error] [pid 1581] [remote ::1:27029] self._setup() [Thu Oct 16 11:33:45.902346 2014] [:error] [pid 1581] [remote ::1:27029] File "/usr/lib/python2.7/site-packages/django/utils/functional.py", line 357, in _setup [Thu Oct 16 11:33:45.902359 2014] [:error] [pid 1581] [remote ::1:27029] self._wrapped = self._setupfunc() [Thu Oct 16 11:33:45.902367 2014] [:error] [pid 1581] [remote ::1:27029] File 
"/usr/lib/python2.7/site-packages/horizon/base.py", line 778, in url_patterns [Thu Oct 16 11:33:45.902525 2014] [:error] [pid 1581] [remote ::1:27029] return self._urls()[0] [Thu Oct 16 11:33:45.902537 2014] [:error] [pid 1581] [remote ::1:27029] File "/usr/lib/python2.7/site-packages/horizon/base.py", line 812, in _urls [Thu Oct 16 11:33:45.902552 2014] [:error] [pid 1581] [remote ::1:27029] url(r'^%s/' % dash.slug, include(dash._decorated_urls))) [Thu Oct 16 11:33:45.902561 2014] [:error] [pid 1581] [remote ::1:27029] File "/usr/lib/python2.7/site-packages/horizon/base.py", line 487, in _decorated_urls [Thu Oct 16 11:33:45.902573 2014] [:error] [pid 1581] [remote ::1:27029] url(r'^%s/' % url_slug, include(panel._decorated_urls))) [Thu Oct 16 11:33:45.902581 2014] [:error] [pid 1581] [remote ::1:27029] File "/usr/lib/python2.7/site-packages/horizon/base.py",
[Yahoo-eng-team] [Bug 1386687] Re: Overview page: OverflowError when cinder limits are negative
** Changed in: horizon/icehouse Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1386687 Title: Overview page: OverflowError when cinder limits are negative Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Dashboard (Horizon) icehouse series: Fix Released Status in OpenStack Dashboard (Horizon) juno series: Fix Released Bug description: This is the Cinder twin to bug 1370869 which was resolved for Nova. For some yet-to-be-fully-debugged reasons, after deleting multiple instances the quota_usages table for Cinder ended up with negative values for several of the "in use" limits, causing the Overview Page to fail with an error 500: OverflowError at /project/ cannot convert float infinity to integer Even if this is (probably?) a rare occurrence, it would make sense to also add guards for the cinder limits and make the overview page more resilient. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1386687/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
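The guard the reporter asks for can be sketched as follows (illustrative only, not the merged Horizon patch): clamp negative "in use" values before computing a usage percentage, so a corrupted quota_usages row can't feed `float('inf')` into an integer conversion.

```python
# Illustrative clamp for negative "in use" quota values. A limit of -1
# is treated as unlimited, mirroring the nova/cinder convention.
def usage_percent(in_use, limit):
    in_use = max(in_use, 0)        # guard corrupted negative usage
    if limit < 0:                  # -1 means unlimited
        return 0.0
    if limit == 0:
        return 100.0 if in_use else 0.0
    return min(100.0 * in_use / limit, 100.0)
```

The same clamp works for the Nova twin (bug 1370869), which is why the report suggests applying it to the Cinder limits as well.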
[Yahoo-eng-team] [Bug 1330513] Re: cannot delete db users
** Changed in: horizon/icehouse Status: Confirmed => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1330513 Title: cannot delete db users Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Dashboard (Horizon) icehouse series: Won't Fix Bug description: The call within DeleteUser has a typo. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1330513/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1269727] Re: minified js considered as binary
** Changed in: horizon/havana Status: In Progress => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1269727 Title: minified js considered as binary Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Dashboard (Horizon) havana series: Won't Fix Bug description: In Debian, minified javascript are considered either as an obfuscation of the original code, or as a "compiled-in binary" version of them, and therefore minified javascript files are considered non-free. Moreover, the use of unminified javascript will ease the development on client side. There will be no impact on Horizon performance due to the compress tag of Django. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1269727/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1370869] Re: Cannot display project overview page due to "cannot convert float infinity to integer" error
** Changed in: horizon/icehouse Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1370869 Title: Cannot display project overview page due to "cannot convert float infinity to integer" error Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Dashboard (Horizon) icehouse series: Fix Released Bug description: Due to nova bug 1370867, nova absolute-limits sometimes returns -1 for *Used fields rather than 0. If this happens, the project overview page cannot be displayed with "cannot convert float infinity to integer" error. Users cannot use the dashboard without specifying URL directly, so it is better the dashboard guards this situation. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1370869/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1196823] Re: New keystoneclient properties break testsuite
** Changed in: horizon/folsom Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1196823 Title: New keystoneclient properties break testsuite Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Dashboard (Horizon) folsom series: Fix Released Status in OpenStack Dashboard (Horizon) grizzly series: Fix Released Bug description: keystoneclient.Client recently changed tenant_id and tenant_name to properties which mox is unable to mock automatically. Thus we have to manually fake them in the testsuite as we did before: == FAIL: test_get_default_role (openstack_dashboard.test.api_tests.keystone_tests.RoleAPITests) -- Traceback (most recent call last): File "/var/lib/openstack-dashboard-test/openstack_dashboard/test/api_tests/keystone_tests.py", line 78, in test_get_default_role keystoneclient = self.stub_keystoneclient() File "/var/lib/openstack-dashboard-test/openstack_dashboard/test/helpers.py", line 289, in stub_keystoneclient self.keystoneclient = self.mox.CreateMock(keystone_client.Client) File "/usr/lib/python2.7/site-packages/mox.py", line 258, in CreateMock new_mock = MockObject(class_to_mock, attrs=attrs) File "/usr/lib/python2.7/site-packages/mox.py", line 556, in __init__ attr = getattr(class_to_mock, method) File "/usr/lib/python2.7/site-packages/mox.py", line 608, in __getattr__ raise UnknownMethodCallError(name) UnknownMethodCallError: Method called is not a member of the object: Method called is not a member of the object: tenant_id >> raise UnknownMethodCallError('tenant_id') To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1196823/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : 
https://help.launchpad.net/ListHelp
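The workaround the report describes, manually faking the properties, amounts to supplying the attributes by hand instead of relying on mox auto-discovery. A rough sketch with a plain fake object (illustrative names and values, not the actual helpers.py change):

```python
# Illustrative hand-written fake: when mox can't auto-discover
# properties like tenant_id/tenant_name, supply them explicitly.
class FakeKeystoneClient(object):
    def __init__(self, tenant_id, tenant_name):
        self.tenant_id = tenant_id
        self.tenant_name = tenant_name


def stub_keystoneclient(tenant_id="tenant-1", tenant_name="demo"):
    # The values here are placeholders for whatever the testsuite expects.
    return FakeKeystoneClient(tenant_id, tenant_name)
```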
[Yahoo-eng-team] [Bug 1100444] Re: Edit Flavor Window Displays Details of Deleted Flavors
** Changed in: horizon/folsom Status: Confirmed => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1100444 Title: Edit Flavor Window Displays Details of Deleted Flavors Status in OpenStack Dashboard (Horizon): Invalid Status in OpenStack Dashboard (Horizon) folsom series: Won't Fix Bug description: A user deletes all existing flavors and creates new flavors. When they click the "Edit Flavors" button, the window displays the details of the deleted flavor of the same flavor ID. This window appears to disregard the deleted flag in the table nova.instance_types and pulls the first row that matches the flavor ID. Steps to reproduce: 1.) Delete all existing flavors. 2.) Create a new flavor through the dashboard. 3.) Once the flavor has been created, click the "Edit Flavor" button. Thank you, Sean Carlisle To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1100444/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 913641] Re: dashboard can't delete snapshot
** Changed in: horizon/diablo Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/913641 Title: dashboard can't delete snapshot Status in OpenStack Dashboard (Horizon) diablo series: Fix Released Bug description: I am using the Diablo final dashboard, but I can't delete snapshots. I found the cause in openstack-dashboard/django-openstack/django_openstack/dash/views/images.py: image.owner is an id (int), while request.user.username is a name (str). So I changed it as follows, and now it works well and I can delete snapshots: 195 #if image.owner == request.user.username: 196 if image.owner == tenant_id: To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/diablo/+bug/913641/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1080920] Re: Quotas in Overview (Folsom)
** Changed in: horizon/folsom Status: Confirmed => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1080920 Title: Quotas in Overview (Folsom) Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Dashboard (Horizon) folsom series: Won't Fix Bug description: Hello, Unless I'm mistaken, there's a small typo on the usage/base.py file that is causing the quota information to not be displayed on the Overview page. usage/base.py sets "self.quotas": https://github.com/openstack/horizon/blob/stable/folsom/horizon/usage/base.py#L110 But the template refers to "self.quota": https://github.com/openstack/horizon/blob/stable/folsom/horizon/templates/horizon/common/_quota_summary.html#L5 Fixing either file to refer to the right variable fixes the problem. Thanks, Joe To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1080920/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1053488] Re: Keyerror when displaying Instances & Volumes
** Changed in: horizon/essex Status: In Progress => Won't Fix ** Changed in: horizon/essex Assignee: Jiang Yong (jiangy) => (unassigned) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1053488 Title: Keyerror when displaying Instances & Volumes Status in OpenStack Dashboard (Horizon): Invalid Status in OpenStack Dashboard (Horizon) essex series: Won't Fix Status in horizon package in Debian: Fix Released Bug description: This is most likely Essex specific because the corresponding code has changed in Folsom. Steps to reproduce: Create the user USER and give it admin privileges Make the user a member of tenantA Create a volume in tenantB Associate the volume to an instance in tenantB Login to the dashboard as USER Go to the Volume & Instance menu entry See the following error message, where the key matches the UUID of the volume. KeyError at /nova/instances_and_volumes/ u'f67c2eb9-9863-44c0-addf-1338233d7b4c' The corresponding code in horizon/dashboards/nova/instances_and_volumes/views.py is:

    volumes = api.volume_list(self.request)
    instances = SortedDict([(inst.id, inst)
                            for inst in self._get_instances()])
    for volume in volumes:
        for att in volume.attachments:
            att['instance'] = instances[att['server_id']]

And my first guess is that api.volume_list(self.request) returns all the volumes when the user has admin privileges, while self._get_instances() only returns the instances of the current tenant. When trying to get the instance to which each volume is attached, the lookup fails every time the volume is attached to a tenant that is not the current tenant. 
To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1053488/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
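The reporter's diagnosis points at the bare `instances[att['server_id']]` lookup. A defensive rework of that loop would be to look attachments up with `dict.get()` so a volume attached to another tenant's instance is skipped rather than raising `KeyError`. A self-contained sketch (stand-in objects, not the actual Essex fix that was shipped):

```python
# Sketch of a defensive version of the Essex loop: instances maps
# instance id -> instance, but an admin's volume list may reference
# instances from other tenants, so .get() avoids the KeyError.
from types import SimpleNamespace

def attach_instances(volumes, instances):
    """Annotate each volume attachment with its instance, or None when
    the server belongs to a tenant outside the current listing."""
    for volume in volumes:
        for att in volume.attachments:
            att['instance'] = instances.get(att['server_id'])

# One volume attached to a known instance, one to a foreign tenant's
# instance (the UUID here echoes the one from the bug report).
inst = SimpleNamespace(id="i-1", name="vm1")
instances = {inst.id: inst}
vols = [
    SimpleNamespace(attachments=[{'server_id': 'i-1'}]),
    SimpleNamespace(attachments=[{'server_id': 'f67c2eb9-9863-44c0-addf-1338233d7b4c'}]),
]
attach_instances(vols, instances)
print(vols[0].attachments[0]['instance'].name)  # vm1
print(vols[1].attachments[0]['instance'])       # None
```

The view's template would then need to tolerate a `None` instance, which is the trade-off of masking the mismatch instead of fetching instances across all tenants for admin users.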
[Yahoo-eng-team] [Bug 1482190] Re: Newly joined project can't be set as Active Project without re-login
This was backported to Liberty and Kilo. The only missing patch was on
the Horizon part: https://review.openstack.org/#/c/260951/

But as Kilo is security-only, this has to be closed as "resolved" now,
as fixes were provided for Liberty (Mitaka/master are OK).

** Changed in: horizon
       Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1482190

Title:
  Newly joined project can't be set as Active Project without re-login

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  After adding the current user as a member of another existing project
  from the Dashboard > Identity menu > Projects tab, "Set as Active
  Project" is not displayed in the Actions of that project, and it is
  not possible to switch to the project until re-login.

  1. Log in to the Dashboard as the admin user and go to the Identity
     menu > Projects tab.
  2. Create a project.
  3. Add the current user (admin) as a member of the project created in
     step 2, from "Manage Members" in Actions.
  4. Confirm that there is no action to switch project in the Actions
     of the project created in step 2.
  5. Re-login and confirm that "Set as Active Project" exists in the
     Actions of the project created in step 2.

  Actual Results: "Set as Active Project" is not displayed in the
  Actions list until re-login.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1482190/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1559920] [NEW] Flows per in_port are deleted after SG rules are applied
Public bug reported:

During the creation of a new port in the integration bridge (br-int),
first the firewall rules are applied and then all flows matching this
input port are deleted:

    if cur_tag != lvm.vlan:
        self.int_br.delete_flows(in_port=port.ofport)

This happens only when the port is created (or the VLAN tag changes).
If any firewall rule using the in_port as a condition is applied during
the initialization of the firewall for this port, that rule is deleted.

Instead, this deletion should be moved to the previous function,
"_add_port_tag_info", in order to avoid any firewall rule deletion and
to maintain the same security level during port creation; that means
the port doesn't allow any kind of traffic until the firewall rules are
applied.

** Affects: neutron
     Importance: Undecided
       Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
         Status: New

** Tags: firewall groups ovs security

** Tags added: firewall groups ovs security

** Changed in: neutron
     Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1559920

Title:
  Flows per in_port are deleted after SG rules are applied

Status in neutron:
  New

Bug description:
  During the creation of a new port in the integration bridge (br-int),
  first the firewall rules are applied and then all flows matching this
  input port are deleted:

      if cur_tag != lvm.vlan:
          self.int_br.delete_flows(in_port=port.ofport)

  This happens only when the port is created (or the VLAN tag changes).
  If any firewall rule using the in_port as a condition is applied
  during the initialization of the firewall for this port, that rule is
  deleted.

  Instead, this deletion should be moved to the previous function,
  "_add_port_tag_info", in order to avoid any firewall rule deletion
  and to maintain the same security level during port creation; that
  means the port doesn't allow any kind of traffic until the firewall
  rules are applied.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1559920/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
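The ordering problem the report describes can be illustrated with a toy model. These are stand-in classes, not the real neutron OVS agent code: `FakeBridge` only mimics the add/delete flow bookkeeping, and `setup_port` compares the current order (wipe after firewall setup) with the proposed one (wipe before):

```python
# Toy model of the ordering bug: flows installed by the firewall driver
# for an in_port are wiped when delete_flows(in_port=...) runs *after*
# firewall setup instead of before it.

class FakeBridge:
    def __init__(self):
        self.flows = []  # list of (in_port, rule) pairs

    def add_flow(self, in_port, rule):
        self.flows.append((in_port, rule))

    def delete_flows(self, in_port):
        # Drop every flow matching this input port.
        self.flows = [f for f in self.flows if f[0] != in_port]

def setup_port(bridge, ofport, wipe_first):
    """Model the two orderings of per-port flow wipe vs. SG setup."""
    if wipe_first:
        bridge.delete_flows(in_port=ofport)   # proposed order
    bridge.add_flow(ofport, "sg-rule")        # firewall applies rules
    if not wipe_first:
        bridge.delete_flows(in_port=ofport)   # reported (buggy) order
    return bridge.flows

buggy = setup_port(FakeBridge(), 5, wipe_first=False)
fixed = setup_port(FakeBridge(), 5, wipe_first=True)
print(buggy)  # [] -- the SG flow is wiped after being installed
print(fixed)  # [(5, 'sg-rule')] -- the SG flow survives
```

The "wipe first" ordering also preserves the security property the reporter wants: between the wipe and the firewall setup the port passes no traffic, rather than briefly running with stale flows.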
[Yahoo-eng-team] [Bug 1557482] Re: nova hypervisor-list will generate exception
** Changed in: nova
       Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1557482

Title:
  nova hypervisor-list will generate exception

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Version: Liberty.
  OS: controller & compute node are Ubuntu 14.04 server, installed
  following the doc.

  After installation, "nova host-list" returns the result correctly:

      +------------------+-------------+----------+
      | host_name        | service     | zone     |
      +------------------+-------------+----------+
      | controller       | cert        | internal |
      | controller       | scheduler   | internal |
      | controller       | consoleauth | internal |
      | controller       | conductor   | internal |
      | compute-node-164 | compute     | nova     |
      +------------------+-------------+----------+

  but "nova hypervisor-list" fails with:

      ERROR (ClientException): Unexpected API Error. Please report this
      at http://bugs.launchpad.net/nova/ and attach the Nova API log if
      possible. (HTTP 500) (Request-ID:
      req-0649f773-8b50-4b84-a670-d8c290f77439)

  From the dashboard, in the System -> Hypervisor panel, I can not find
  the hypervisor. When navigating to this page, an error is displayed
  saying the hypervisor information could not be retrieved. The compute
  node is displayed on the "Compute Host" tab, but not in the
  Hypervisor tab.
  Versions installed:

      ii  nova-api           2:12.0.1-0ubuntu1~cloud0  all  OpenStack Compute - API frontend
      ii  nova-cert          2:12.0.1-0ubuntu1~cloud0  all  OpenStack Compute - certificate management
      ii  nova-common        2:12.0.1-0ubuntu1~cloud0  all  OpenStack Compute - common files
      ii  nova-conductor     2:12.0.1-0ubuntu1~cloud0  all  OpenStack Compute - conductor service
      ii  nova-consoleauth   2:12.0.1-0ubuntu1~cloud0  all  OpenStack Compute - Console Authenticator
      ii  nova-novncproxy    2:12.0.1-0ubuntu1~cloud0  all  OpenStack Compute - NoVNC proxy
      ii  nova-scheduler     2:12.0.1-0ubuntu1~cloud0  all  OpenStack Compute - virtual machine scheduler
      ii  python-nova        2:12.0.1-0ubuntu1~cloud0  all  OpenStack Compute Python libraries
      ii  python-novaclient  2:2.30.1-1~cloud0         all  client library for OpenStack Compute API

  nova-api.log excerpt below, cut here --

      2016-03-05 00:16:02.073 2798 ERROR nova.api.openstack.extensions [req-7ddb9e1f-b74a-4df0-ade6-2b364a66ac36 c7922dbbc15b4e5e9ba93acba512e746 cfd6638eca634eb68060e6090ea105b1 - - -] Unexpected exception in API method
      2016-03-05 00:16:02.073 2798 ERROR nova.api.openstack.extensions Traceback (most recent call last):
      2016-03-05 00:16:02.073 2798 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/extensions.py", line 478, in wrapped
      2016-03-05 00:16:02.073 2798 ERROR nova.api.openstack.extensions     return f(*args, **kwargs)
      2016-03-05 00:16:02.073 2798 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/hypervisors.py", line 101, in detail
      2016-03-05 00:16:02.073 2798 ERROR nova.api.openstack.extensions     for hyp in compute_nodes])
      2016-03-05 00:16:02.073 2798 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 3504, in service_get_by_compute_host
      2016-03-05 00:16:02.073 2798 ERROR nova.api.openstack.extensions     return objects.Service.get_by_compute_host(context, host_name)
      2016-03-05 00:16:02.073 2798 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 171, in wrapper
      2016-03-05 00:16:02.073 2798 ERROR nova.api.openstack.extensions     result = fn(cls, context, *args, **kwargs)
      2016-03-05 00:16:02.073 2798 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/objects/service.py", line 222, in get_by_compute_host
      2016-03-05 00:16:02.073 2798 ERROR nova.api.openstack.extensions     db_service = db.service_get_by_compute_host(context, host)
      2016-03-05 00:16:02.073 2798 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/db/api.py", line 139, in service_get_by_compute_host
      2016-03-05 00:16:02.073 2798 ERROR nova.api.openstack.extensions     use_slave=use_slave)
      2016-03-05 00:16:02.073 2798 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 500, in service_get_by_compute_host
      2016-03-05