[Yahoo-eng-team] [Bug 1266986] [NEW] Fix typo in gridfs store
Public bug reported:

Referring commit 61a715e17e8d6e6dbe60d35447c70fba990bd2e9
https://github.com/openstack/glance/blob/master/glance/store/gridfs.py#L99

Fix typo:
    msg = ("Missing dependecies: pymongo")
It should be:
    msg = ("Missing dependencies: pymongo")

** Affects: glance
   Importance: Undecided
   Assignee: Aswad Rangnekar (aswad-r)
   Status: In Progress

** Tags: ntt

** Changed in: glance
   Assignee: (unassigned) => Aswad Rangnekar (aswad-r)

** Changed in: glance
   Status: New => In Progress

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1266986

Title:
  Fix typo in gridfs store

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress

Bug description:
  Referring commit 61a715e17e8d6e6dbe60d35447c70fba990bd2e9
  https://github.com/openstack/glance/blob/master/glance/store/gridfs.py#L99

  Fix typo:
      msg = ("Missing dependecies: pymongo")
  It should be:
      msg = ("Missing dependencies: pymongo")

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1266986/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1266983] [NEW] Grizzly 2013.1.4's nova wastes a public IP
Public bug reported:

Network layout:
  192.168.0.0/24   is the management network
  172.168.149.0/24 is the public IP network
  172.5.5.0/24     is the VM fixed-IP network

1. We boot a VM attached only to the private network, without a public
network:

[root@controllernode ~(wuxi_it)]$ nova list
+--------------------------------------+------------------+--------+------------+-------------+---------------------------------------------------------------------+
| ID                                   | Name             | Status | Task State | Power State | Networks                                                            |
+--------------------------------------+------------------+--------+------------+-------------+---------------------------------------------------------------------+
| 786b8958-2a77-4a3c-89c9-35fbce1b6391 | CentOS6.4_test   | ACTIVE | None       | Running     | ext_net=172.168.149.158; wuxi_it-net=172.168.5.11, 172.168.149.155  |
| d5dcea71-8b1c-4e92-aac7-069460e11bb3 | Cirros_3.1_node1 | ACTIVE | None       | Running     | ext_net=172.168.149.161; wuxi_it-net=172.168.5.9                    |
| acd5c695-85c3-4e89-8730-e0badf487a5a | cirros_test      | ACTIVE | None       | Running     | wuxi_it-net=172.168.5.13                                            |
+--------------------------------------+------------------+--------+------------+-------------+---------------------------------------------------------------------+

2. We assign a floating IP to this VM:

[root@controllernode ~(wuxi_it)]$ nova add-floating-ip cirros_test 172.168.149.157
[root@controllernode ~(wuxi_it)]$ nova list
+--------------------------------------+------------------+--------+------------+-------------+---------------------------------------------------------------------+
| ID                                   | Name             | Status | Task State | Power State | Networks                                                            |
+--------------------------------------+------------------+--------+------------+-------------+---------------------------------------------------------------------+
| 786b8958-2a77-4a3c-89c9-35fbce1b6391 | CentOS6.4_test   | ACTIVE | None       | Running     | ext_net=172.168.149.158; wuxi_it-net=172.168.5.11, 172.168.149.155  |
| d5dcea71-8b1c-4e92-aac7-069460e11bb3 | Cirros_3.1_node1 | ACTIVE | None       | Running     | ext_net=172.168.149.161; wuxi_it-net=172.168.5.9                    |
| acd5c695-85c3-4e89-8730-e0badf487a5a | cirros_test      | ACTIVE | None       | Running     | wuxi_it-net=172.168.5.13, 172.168.149.157                           |
+--------------------------------------+------------------+--------+------------+-------------+---------------------------------------------------------------------+

We find we cannot ping this IP: 172.168.149.157.

3. We boot a VM with both private and public networks and assign the
same floating IP to this VM:

[root@controllernode ~(wuxi_it)]$ nova list
+--------------------------------------+------------------+--------+------------+-------------+---------------------------------------------------------------------+
| ID                                   | Name             | Status | Task State | Power State | Networks                                                            |
+--------------------------------------+------------------+--------+------------+-------------+---------------------------------------------------------------------+
| 786b8958-2a77-4a3c-89c9-35fbce1b6391 | CentOS6.4_test   | ACTIVE | None       | Running     | ext_net=172.168.149.158; wuxi_it-net=172.168.5.11, 172.168.149.155  |
| d5dcea71-8b1c-4e92-aac7-069460e11bb3 | Cirros_3.1_node1 | ACTIVE | None       | Running     | ext_net=172.168.149.161; wuxi_it-net=172.168.5.9                    |
| 5166613f-90dd-4d1b-b79c-41497215746c | Cirros_3.1_test  | BUILD  | spawning   | NOSTATE     | ext_net=172.168.149.162; wuxi_it-net=172.168.5.15                   |
+--------------------------------------+------------------+--------+------------+-------------+---------------------------------------------------------------------+

[root@controllernode ~(wuxi_it)]$ nova floating-ip-list
+-----------------+--------------------------------------+--------------+---------+
| Ip              | Instance Id                          | Fixed Ip     | Pool    |
+-----------------+--------------------------------------+--------------+---------+
| 172.168.149.155 | 786b8958-2a77-4a3c-89c9-35fbce1b6391 | 172.168.5.11 | ext_net |
| 172.168.149.156 | 98d31755-1e92-4217-8247-b2be88978475 | 172.168.5.12 | ext_net |
| 172.168.149.157 | None                                 | None         | ext_net |
+-----------------+--------------------------------------+--------------+---------+

[root@controllernode ~(wuxi_it)]$ nova add-floating-ip Cirros_3.1_test 172.168.149.157
[root@controllernode ~(wuxi_it)]$ nova floating-ip-list
+-----------------+--------------------------------------+
| Ip              | Instance Id
[Yahoo-eng-team] [Bug 1266974] [NEW] nova work with glance SSL
Public bug reported:

My environment is:

  Nova api --https--> haproxy(SSL proxy) --http--> Glance api1
                                         |--http--> Glance api2

I use CentOS + RDO rpm packages (havana); my haproxy is 1.5_dev21.

It works well if I configure nova.conf as follows:

  glance_api_servers=glanceapi1_ip:9292,glanceapi2_ip:9292

But when I want the nova api to talk to the glance api over https, it
does not work. My config in nova.conf is:

  glance_api_servers=https://Glanceapi_VIP:443

When I boot a VM, I get the error below:

2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/nova/compute/api.py", line 1220, in create
2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack     legacy_bdm=legacy_bdm)
2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/nova/compute/api.py", line 840, in _create_instance
2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack     image_id, boot_meta = self._get_image(context, image_href)
2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/nova/compute/api.py", line 620, in _get_image
2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack     image = image_service.show(context, image_id)
2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/nova/image/glance.py", line 292, in show
2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack     _reraise_translated_image_exception(image_id)
2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/nova/image/glance.py", line 290, in show
2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack     image = self._client.call(context, 1, 'get', image_id)
2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/nova/image/glance.py", line 214, in call
2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack     return getattr(client.images, method)(*args, **kwargs)
2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/glanceclient/v1/images.py", line 114, in get
2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack     % urllib.quote(str(image_id)))
2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/glanceclient/common/http.py", line 293, in raw_request
2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack     return self._http_request(url, method, **kwargs)
2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/glanceclient/common/http.py", line 244, in _http_request
2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack     body_str = ''.join([chunk for chunk in body_iter])
2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/glanceclient/common/http.py", line 499, in __iter__
2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack     chunk = self.next()
2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/glanceclient/common/http.py", line 515, in next
2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack     chunk = self._resp.read(CHUNKSIZE)
2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File "/usr/lib64/python2.6/httplib.py", line 518, in read
2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack     self.close()
2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File "/usr/lib64/python2.6/httplib.py", line 499, in close
2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack     self.fp.close()
2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File "/usr/lib64/python2.6/socket.py", line 278, in close
2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack     self._sock.close()
2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/eventlet/greenio.py", line 145, in __getattr__
2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack     attr = getattr(self.fd, name)
2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack AttributeError: 'GreenSocket' object has no attribute 'close'

** Affects: nova
   Importance: Undecided
   Status: New

** Description changed:

  My environment is :
- Nova api -https> haproxy(1.5_dev21 work as ssl proxy) http> Glance api1
- |--http> Glance api2
+ Nova api --https--> haproxy(SSL proxy)http> Glance api1
+ |--http> Glance api2
+
+ I use centos + rdo rpm package(havana), my haproxy is 1.5_dev21.
  It can work well if I config in nova.conf as following:
  glance_api_servers=glanceapi1_ip:9292,glanceapi2_ip:9292
- But when I want nova api talk with glance api in https, it can't work. My config is as following:
- glance_api_servers=https://Glanceapi_VIP:443 in nova.co
[Yahoo-eng-team] [Bug 1266962] Re: Remove set_time_override in timeutils
I'm not crazy about this approach of making changes throughout the
projects: updating all of the projects, then removing the wrapper in
oslo, then updating the libs in all of the projects again is really not
something that should be a top priority. I do, however, think the usage
should be allowed to fall off naturally as other efforts are made to
update to using mock; once that's done we should eventually find that
this wrapper is no longer needed and remove it from oslo at that time.

** Changed in: cinder
   Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1266962

Title:
  Remove set_time_override in timeutils

Status in OpenStack Telemetry (Ceilometer): New
Status in Cinder: Invalid
Status in Gantt: New
Status in OpenStack Image Registry and Delivery Service (Glance): New
Status in Ironic (Bare Metal Provisioning): New
Status in OpenStack Identity (Keystone): New
Status in Manila: New
Status in OpenStack Message Queuing Service (Marconi): New
Status in OpenStack Compute (Nova): New
Status in Oslo - a Library of Common OpenStack Code: New
Status in Messaging API for OpenStack: New
Status in Python client library for Keystone: New
Status in Python client library for Nova: New
Status in Tuskar: New

Bug description:
  set_time_override was written as a helper function to mock utcnow in
  unit tests. However, we now use mock or fixture to mock our objects,
  so set_time_override has become obsolete. We should first remove all
  usage of set_time_override from downstream projects before deleting
  it from oslo.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1266962/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1266962] [NEW] Remove set_time_override in timeutils
Public bug reported:

set_time_override was written as a helper function to mock utcnow in
unit tests. However, we now use mock or fixture to mock our objects, so
set_time_override has become obsolete. We should first remove all usage
of set_time_override from downstream projects before deleting it from
oslo.

** Affects: ceilometer   Importance: Undecided   Status: New
** Affects: cinder   Importance: Undecided   Status: New
** Affects: gantt   Importance: Undecided   Status: New
** Affects: glance   Importance: Undecided   Status: New
** Affects: ironic   Importance: Undecided   Status: New
** Affects: keystone   Importance: Undecided   Status: New
** Affects: manila   Importance: Undecided   Status: New
** Affects: marconi   Importance: Undecided   Status: New
** Affects: nova   Importance: Undecided   Status: New
** Affects: oslo   Importance: Undecided   Status: New
** Affects: oslo.messaging   Importance: Undecided   Status: New
** Affects: python-keystoneclient   Importance: Undecided   Status: New
** Affects: python-novaclient   Importance: Undecided   Status: New
** Affects: tuskar   Importance: Undecided   Status: New

** Also affects: ceilometer   Importance: Undecided   Status: New
** Also affects: cinder   Importance: Undecided   Status: New
** Also affects: glance   Importance: Undecided   Status: New
** Also affects: ironic   Importance: Undecided   Status: New
** Also affects: keystone   Importance: Undecided   Status: New
** Also affects: manila   Importance: Undecided   Status: New
** Also affects: marconi   Importance: Undecided   Status: New
** Also affects: nova   Importance: Undecided   Status: New
** Also affects: oslo.messaging   Importance: Undecided   Status: New
** Also affects: tuskar   Importance: Undecided   Status: New
** Also affects: python-keystoneclient   Importance: Undecided   Status: New
** Also affects: python-novaclient   Importance: Undecided   Status: New
** Also affects: gantt   Importance: Undecided   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1266962

Title:
  Remove set_time_override in timeutils

Status in OpenStack Telemetry (Ceilometer): New
Status in Cinder: New
Status in Gantt: New
Status in OpenStack Image Registry and Delivery Service (Glance): New
Status in Ironic (Bare Metal Provisioning): New
Status in OpenStack Identity (Keystone): New
Status in Manila: New
Status in OpenStack Message Queuing Service (Marconi): New
Status in OpenStack Compute (Nova): New
Status in Oslo - a Library of Common OpenStack Code: New
Status in Messaging API for OpenStack: New
Status in Python client library for Keystone: New
Status in Python client library for Nova: New
Status in Tuskar: New

Bug description:
  set_time_override was written as a helper function to mock utcnow in
  unit tests. However, we now use mock or fixture to mock our objects,
  so set_time_override has become obsolete. We should first remove all
  usage of set_time_override from downstream projects before deleting
  it from oslo.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1266962/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
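The migration the report describes can be sketched as follows: instead of oslo's `timeutils.set_time_override`, a test pins the time helper with `mock.patch`. This is a minimal illustration only; `utcnow` and `make_timestamp` are simplified stand-ins, not real oslo or project code.

```python
import datetime
from unittest import mock

def utcnow():
    # Stand-in for the oslo timeutils.utcnow() helper.
    return datetime.datetime.utcnow()

def make_timestamp():
    # Code under test: formats the current UTC time.
    return utcnow().isoformat()

def test_make_timestamp():
    # Old style: timeutils.set_time_override(frozen) / clear_time_override().
    # New style: mock.patch pins the helper for the duration of the test
    # and restores it automatically on exit.
    frozen = datetime.datetime(2014, 1, 8, 12, 0, 0)
    with mock.patch(__name__ + ".utcnow", return_value=frozen):
        assert make_timestamp() == "2014-01-08T12:00:00"

test_make_timestamp()
```

Because the patch is scoped to the `with` block, there is no global override left behind for other tests to trip over, which is the main argument for retiring `set_time_override`.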
[Yahoo-eng-team] [Bug 1266960] [NEW] InstanceIsLocked Exception didn't raise
Public bug reported:

When a VM is locked by nova, most VM APIs should not execute. There are
many APIs that cannot be executed while a VM is locked, but when one is
tried, nova raises an inappropriate InstanceInvalidState exception
instead of InstanceIsLocked.

Example:

$ nova reboot 9b4a0687-d1f5-4dac-a49e-f5c6d1f3e316
ERROR: Instance is in an invalid state for 'reboot' (HTTP 409) (Request-ID: req-8407d42d-9f0e-4c91-995b-35025251f209)
$ nova delete 9b4a0687-d1f5-4dac-a49e-f5c6d1f3e316
Instance is in an invalid state for 'delete' (HTTP 409) (Request-ID: req-f1b144b6-958e-4b19-84c7-fa641344d4d5)

This happens with many compute APIs:
delete, reboot, rebuild, resize, shelve, pause, unpause, suspend,
resume, rescue, unrescue, attach_volume, detach_volume,
update_instance_metadata

InstanceInvalidState is used by many APIs to check that the state is
appropriate, but because InstanceIsLocked inherits from
InstanceInvalidState, an API request on a locked VM returns
InstanceInvalidState instead of InstanceIsLocked. So I suggest making
InstanceIsLocked inherit from Invalid instead of InstanceInvalidState:

InstanceIsLocked(InstanceInvalidState) -> InstanceIsLocked(Invalid)

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1266960

Title:
  InstanceIsLocked Exception didn't raise

Status in OpenStack Compute (Nova): New

Bug description:
  When a VM is locked by nova, most VM APIs should not execute. There
  are many APIs that cannot be executed while a VM is locked, but when
  one is tried, nova raises an inappropriate InstanceInvalidState
  exception instead of InstanceIsLocked.

  Example:

  $ nova reboot 9b4a0687-d1f5-4dac-a49e-f5c6d1f3e316
  ERROR: Instance is in an invalid state for 'reboot' (HTTP 409) (Request-ID: req-8407d42d-9f0e-4c91-995b-35025251f209)
  $ nova delete 9b4a0687-d1f5-4dac-a49e-f5c6d1f3e316
  Instance is in an invalid state for 'delete' (HTTP 409) (Request-ID: req-f1b144b6-958e-4b19-84c7-fa641344d4d5)

  This happens with many compute APIs:
  delete, reboot, rebuild, resize, shelve, pause, unpause, suspend,
  resume, rescue, unrescue, attach_volume, detach_volume,
  update_instance_metadata

  InstanceInvalidState is used by many APIs to check that the state is
  appropriate, but because InstanceIsLocked inherits from
  InstanceInvalidState, an API request on a locked VM returns
  InstanceInvalidState instead of InstanceIsLocked. So I suggest making
  InstanceIsLocked inherit from Invalid instead of InstanceInvalidState:

  InstanceIsLocked(InstanceInvalidState) -> InstanceIsLocked(Invalid)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1266960/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
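The inheritance problem described above can be reproduced with a stripped-down sketch. The class names mirror nova.exception, but these are simplified stand-ins and the handler is hypothetical, not actual nova API code:

```python
class Invalid(Exception):
    pass

class InstanceInvalidState(Invalid):
    pass

# Current hierarchy: InstanceIsLocked IS-A InstanceInvalidState, so every
# handler written for invalid-state errors also swallows the locked case.
class InstanceIsLocked(InstanceInvalidState):
    pass

# Proposed hierarchy from the report: inherit Invalid directly.
class InstanceIsLockedFixed(Invalid):
    pass

def api_layer(exc):
    # Mimics an API layer that translates exceptions into HTTP responses.
    try:
        raise exc
    except InstanceInvalidState:
        return "HTTP 409: instance is in an invalid state"
    except Invalid:
        return "HTTP 409: instance is locked"

# With the current hierarchy, the locked case is mis-reported:
assert api_layer(InstanceIsLocked()) == "HTTP 409: instance is in an invalid state"
# With the proposed hierarchy, it reaches its own handler:
assert api_layer(InstanceIsLockedFixed()) == "HTTP 409: instance is locked"
```

The sketch shows why reordering `except` clauses alone cannot help generic handlers that only catch `InstanceInvalidState`; changing the base class is what lets callers distinguish the two conditions.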
[Yahoo-eng-team] [Bug 1266827] Re: Instances fail to boot with error: /var/run/openvswitch/db.sock: connection attempt failed
This is either an f19 issue or an openvswitch issue, as we don't
set/use/maintain the unix:/var/run/openvswitch/db.sock parameter
anywhere in the nova code base and hence have no "cleanup" code for
clearing up that file.

** Changed in: nova
   Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1266827

Title:
  Instances fail to boot with error: /var/run/openvswitch/db.sock:
  connection attempt failed

Status in OpenStack Compute (Nova): Invalid

Bug description:
  Tested on RHEL6.5 with the havana release,
  openstack-nova-compute-2013.2.1-1.el6.noarch. Installed openstack via
  packstack allinone. Have not configured neutron for external access;
  using the default network.

  Steps to reproduce:
  1. Create a f19 glance image.
  2. Boot an instance with the above image.

  Fails with the following in nova/compute.log:

  2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f]   File "/usr/lib/python2.6/site-packages/nova/network/linux_net.py", line 1280, in create_ovs_vif_port
  2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f]     run_as_root=True)
  2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f]   File "/usr/lib/python2.6/site-packages/nova/utils.py", line 177, in execute
  2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f]     return processutils.execute(*cmd, **kwargs)
  2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f]   File "/usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py", line 178, in execute
  2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f]     cmd=' '.join(cmd))
  2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f] ProcessExecutionError: Unexpected error while running command.
  2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f] Command: sudo nova-rootwrap /etc/nova/rootwrap.conf ovs-vsctl -- --may-exist add-port br-int qvoab718ad4-cb -- set Interface qvoab718ad4-cb external-ids:iface-id=ab718ad4-cb35-4736-b57b-d4c2b002cb36 external-ids:iface-status=active external-ids:attached-mac=fa:16:3e:f1:ae:df external-ids:vm-uuid=787f6b8d-2851-41d2-9a2d-d7b5701c546f
  2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f] Exit code: 1
  2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f] Stdout: ''
  2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f] Stderr: '2014-01-07T15:36:50Z|2|reconnect|WARN|unix:/var/run/openvswitch/db.sock: connection attempt failed (Connection refused)\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Connection refused)\n'

  NOTE: This was a working setup. Have booted instances before with the
  same image. No config changes have been made that could explain the
  failure.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1266827/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1266954] [NEW] tempest failure: test_list_agent fails with MismatchError
Public bug reported:

Jenkins job 'check-tempest-dsvm-neutron-pg-isolated' failed when I
posted a fix (see
http://logs.openstack.org/34/65034/1/check/check-tempest-dsvm-neutron-pg-isolated/d3e37ea).

The failed test is as follows:

---
FAIL: tempest.api.network.admin.test_agent_management.AgentManagementTestJSON.test_list_agent[gate,smoke]
tempest.api.network.admin.test_agent_management.AgentManagementTestJSON.test_list_agent[gate,smoke]
Traceback (most recent call last):
  File "tempest/api/network/admin/test_agent_management.py", line 37, in test_list_agent
    self.assertIn(self.agent, agents)
  File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 330, in assertIn
    self.assertThat(haystack, Contains(needle))
  File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 414, in assertThat
    raise MismatchError(matchee, matcher, mismatch, verbose)
MismatchError: {u'binary': u'neutron-dhcp-agent', u'description': None, ... snip
---

I found the only mismatch is 'heartbeat_timestamp':

expected: u'heartbeat_timestamp': u'2014-01-06 06:52:48.291601'
actual:   u'heartbeat_timestamp': u'2014-01-06 06:52:52.309330'

test_list_agent issues the GET API twice and compares the results, so
it is possible for heartbeat_timestamp to differ between the two calls.

** Affects: neutron
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1266954

Title:
  tempest failure: test_list_agent fails with MismatchError

Status in OpenStack Neutron (virtual network service): New

Bug description:
  Jenkins job 'check-tempest-dsvm-neutron-pg-isolated' failed when I
  posted a fix (see
  http://logs.openstack.org/34/65034/1/check/check-tempest-dsvm-neutron-pg-isolated/d3e37ea).

  The failed test is as follows:

  ---
  FAIL: tempest.api.network.admin.test_agent_management.AgentManagementTestJSON.test_list_agent[gate,smoke]
  Traceback (most recent call last):
    File "tempest/api/network/admin/test_agent_management.py", line 37, in test_list_agent
      self.assertIn(self.agent, agents)
    File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 330, in assertIn
      self.assertThat(haystack, Contains(needle))
    File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 414, in assertThat
      raise MismatchError(matchee, matcher, mismatch, verbose)
  MismatchError: {u'binary': u'neutron-dhcp-agent', u'description': None, ... snip
  ---

  I found the only mismatch is 'heartbeat_timestamp':

  expected: u'heartbeat_timestamp': u'2014-01-06 06:52:48.291601'
  actual:   u'heartbeat_timestamp': u'2014-01-06 06:52:52.309330'

  test_list_agent issues the GET API twice and compares the results, so
  it is possible for heartbeat_timestamp to differ between the two
  calls.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1266954/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
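One plausible fix, sketched under the assumption that only `heartbeat_timestamp` is volatile between the two GET calls: project both sides down to their stable fields before the membership check. The `stable` helper below is hypothetical, not tempest code; the field values follow the output quoted above.

```python
# Fields that legitimately change between two consecutive agent-list calls.
VOLATILE_FIELDS = {"heartbeat_timestamp"}

def stable(agent):
    # Return a copy of the agent dict with volatile fields stripped.
    return {k: v for k, v in agent.items() if k not in VOLATILE_FIELDS}

expected = {"binary": "neutron-dhcp-agent", "description": None,
            "heartbeat_timestamp": "2014-01-06 06:52:48.291601"}
listed = [{"binary": "neutron-dhcp-agent", "description": None,
           "heartbeat_timestamp": "2014-01-06 06:52:52.309330"}]

# Raw comparison fails on the timestamp alone...
assert expected not in listed
# ...but the stable projections match.
assert stable(expected) in [stable(a) for a in listed]
```

In the actual test this would replace `self.assertIn(self.agent, agents)` with an assertion over the projected dicts.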
[Yahoo-eng-team] [Bug 1266926] [NEW] log files should reveal source of the request
Public bug reported:

Created attachment 713099 [details]: token logs

Description of problem:
Keystone log files should reveal the source of the token request (IP
address).

I created a misconfiguration where I had two different setups (A and B):
by mistake I configured the keystone quantum endpoint in setup A to use
the quantum server of setup B. In the log files of both the quantum
server and keystone on setup B we could only see the messages (as
attached), but there was no clue about the source IP of the request,
even in debug mode. Adding the identity/source IP (in non-debug mode)
will help debug such issues.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Configure quantum setup A.
2. Configure quantum setup B.
3. Configure the keystone endpoint of setup B to use the quantum server
   of setup A.
4. Check keystone.log and quantum server.log on setup A.

This can be tested with other services like nova.

Configuration on setup B:

keystone service-create --name=quantum --type=network --description="Quantum Service"
keystone endpoint-create --region RegionOne --service-id $(get_service_id quantum) --publicurl http://ip_setup_A:9696 --adminurl http://ip_setup_A:9696 --internalurl http://ip_setup_A:9696

Actual results:
Errors as attached; no way to know about this misconfiguration.

Expected results:

Originally reported: https://bugzilla.redhat.com/show_bug.cgi?id=923612

** Affects: keystone
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1266926

Title:
  log files should reveal source of the request

Status in OpenStack Identity (Keystone): New

Bug description:
  Created attachment 713099 [details]: token logs

  Keystone log files should reveal the source of the token request (IP
  address). A misconfiguration pointed the keystone quantum endpoint in
  setup A at the quantum server of setup B; in the log files of both the
  quantum server and keystone on setup B we could only see the attached
  messages, with no clue about the source IP of the request, even in
  debug mode. Adding the identity/source IP (in non-debug mode) will
  help debug such issues.

  Originally reported: https://bugzilla.redhat.com/show_bug.cgi?id=923612

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1266926/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
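What the report asks for amounts to tagging each log line with the caller's address. A minimal, illustrative WSGI wrapper (not keystone code; all names here are hypothetical) could look like this, using the standard `REMOTE_ADDR` key from the WSGI environ:

```python
import logging

LOG = logging.getLogger("request_source")

class SourceLogger:
    """Wrap a WSGI app and log the source address of every request."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        # REMOTE_ADDR is set by the WSGI server for every request, so the
        # source of a misdirected call shows up even outside debug mode.
        LOG.warning("%s %s from %s",
                    environ.get("REQUEST_METHOD", "-"),
                    environ.get("PATH_INFO", "-"),
                    environ.get("REMOTE_ADDR", "unknown"))
        return self.app(environ, start_response)

def demo_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

app = SourceLogger(demo_app)
body = app({"REQUEST_METHOD": "GET", "PATH_INFO": "/v2.0/tokens",
            "REMOTE_ADDR": "10.0.0.5"}, lambda status, headers: None)
assert body == [b"ok"]
```

Behind a proxy, `REMOTE_ADDR` would show the proxy's address, so a real deployment would also want to consider the `X-Forwarded-For` header.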
[Yahoo-eng-team] [Bug 1266916] Re: Missing InstanceInfoCache entry prevents delete
Dup'd https://bugs.launchpad.net/nova/+bug/1266919

** Changed in: nova
   Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1266916

Title:
  Missing InstanceInfoCache entry prevents delete

Status in OpenStack Compute (Nova): Invalid

Bug description:
  If you're trying to delete an instance which is out-of-sync such that
  it's missing an InstanceInfoCache entry, you will receive a traceback:
  http://paste.openstack.org/show/60685/

  Delete in this case should be allowed so that you can clean up these
  'broken' instances. The solution is to catch the
  InstanceInfoCacheNotFound exception (like we do with other NotFound
  exceptions around this code) and continue on.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1266916/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1266917] Re: Missing InstanceInfoCache entry prevents delete
Dup'd https://bugs.launchpad.net/nova/+bug/1266919

** Changed in: nova
   Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1266917

Title:
  Missing InstanceInfoCache entry prevents delete

Status in OpenStack Compute (Nova): Invalid

Bug description:
  If you're trying to delete an instance which is out-of-sync such that
  it's missing an InstanceInfoCache entry, you will receive a traceback:
  http://paste.openstack.org/show/60685/

  Delete in this case should be allowed so that you can clean up these
  'broken' instances. The solution is to catch the
  InstanceInfoCacheNotFound exception (like we do with other NotFound
  exceptions around this code) and continue on.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1266917/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1266918] Re: Missing InstanceInfoCache entry prevents delete
Dup'd https://bugs.launchpad.net/nova/+bug/1266919

** Changed in: nova
   Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1266918

Title:
  Missing InstanceInfoCache entry prevents delete

Status in OpenStack Compute (Nova): Invalid

Bug description:
  If you're trying to delete an instance which is out-of-sync such that
  it's missing an InstanceInfoCache entry, you will receive a traceback:
  http://paste.openstack.org/show/60685/

  Delete in this case should be allowed so that you can clean up these
  'broken' instances. The solution is to catch the
  InstanceInfoCacheNotFound exception (like we do with other NotFound
  exceptions around this code) and continue on.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1266918/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1266921] [NEW] Log entry when a regular user does "keystone user-list" is not helpful
Public bug reported: "keystone user-list" is an admin only command. When a regular user tries to execute it, you get a helpful response at the command line: [root@rhel ~(keystone_username)]# keystone user-list You are not authorized to perform the requested action: admin_required (HTTP 403) However, this same message is in /var/log/keystone/keystone.log: 2012-12-17 17:27:29 WARNING [keystone.common.wsgi] You are not authorized to perform the requested action: admin_required This log entry is not helpful. As an administrator, all this tells you is that *someone* tried to execute *something* that they weren't allowed to. Without any information about who or what, the log entry isn't useful. Originalluy reported: https://bugzilla.redhat.com/show_bug.cgi?id=888066 ** Affects: keystone Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1266921 Title: Log entry when a regular user does "keystone user-list" is not helpfu Status in OpenStack Identity (Keystone): New Bug description: "keystone user-list" is an admin only command. When a regular user tries to execute it, you get a helpful response at the command line: [root@rhel ~(keystone_username)]# keystone user-list You are not authorized to perform the requested action: admin_required (HTTP 403) However, this same message is in /var/log/keystone/keystone.log: 2012-12-17 17:27:29 WARNING [keystone.common.wsgi] You are not authorized to perform the requested action: admin_required This log entry is not helpful. As an administrator, all this tells you is that *someone* tried to execute *something* that they weren't allowed to. Without any information about who or what, the log entry isn't useful. 
Originally reported: https://bugzilla.redhat.com/show_bug.cgi?id=888066 To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1266921/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
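The complaint above is that the warning names neither the caller nor the attempted action. A minimal sketch of the kind of enrichment the report is asking for, in plain Python; the `user_id` and `action` values are hypothetical placeholders for context the WSGI layer would already have in scope, and this is not Keystone's actual code:

```python
import logging

LOG = logging.getLogger("keystone.common.wsgi")

def log_forbidden(user_id, action, reason):
    """Log an authorization failure with enough context to be useful.

    Instead of only 'You are not authorized...', record who attempted
    which action. user_id and action are hypothetical placeholders.
    """
    message = ("User %s is not authorized to perform %s: %s"
               % (user_id, action, reason))
    LOG.warning(message)
    return message

msg = log_forbidden("d3adb33f", "identity:list_users", "admin_required")
```

With a line like this, the operator reading keystone.log can tell which user tripped the policy check and on which API action.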
[Yahoo-eng-team] [Bug 1266917] [NEW] Missing InstanceInfoCache entry prevents delete
Public bug reported: If you're trying to delete an instance which is out-of-sync such that it's missing an InstanceInfoCache entry, you will receive a traceback: http://paste.openstack.org/show/60685/ Delete in this case should be allowed so that you can clean up these 'broken' instances. The solution is to catch the InstanceInfoCacheNotFound exception (like we do with other NotFound exceptions around this code), and continue on. ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1266917 Title: Missing InstanceInfoCache entry prevents delete Status in OpenStack Compute (Nova): New Bug description: If you're trying to delete an instance which is out-of-sync such that it's missing an InstanceInfoCache entry, you will receive a traceback: http://paste.openstack.org/show/60685/ Delete in this case should be allowed so that you can clean up these 'broken' instances. The solution is to catch the InstanceInfoCacheNotFound exception (like we do with other NotFound exceptions around this code), and continue on. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1266917/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1266918] [NEW] Missing InstanceInfoCache entry prevents delete
Public bug reported: If you're trying to delete an instance which is out-of-sync such that it's missing an InstanceInfoCache entry, you will receive a traceback: http://paste.openstack.org/show/60685/ Delete in this case should be allowed so that you can clean up these 'broken' instances. The solution is to catch the InstanceInfoCacheNotFound exception (like we do with other NotFound exceptions around this code), and continue on. ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1266918 Title: Missing InstanceInfoCache entry prevents delete Status in OpenStack Compute (Nova): New Bug description: If you're trying to delete an instance which is out-of-sync such that it's missing an InstanceInfoCache entry, you will receive a traceback: http://paste.openstack.org/show/60685/ Delete in this case should be allowed so that you can clean up these 'broken' instances. The solution is to catch the InstanceInfoCacheNotFound exception (like we do with other NotFound exceptions around this code), and continue on. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1266918/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1266916] [NEW] Missing InstanceInfoCache entry prevents delete
Public bug reported: If you're trying to delete an instance which is out-of-sync such that it's missing an InstanceInfoCache entry, you will receive a traceback: http://paste.openstack.org/show/60685/ Delete in this case should be allowed so that you can clean up these 'broken' instances. The solution is to catch the InstanceInfoCacheNotFound exception (like we do with other NotFound exceptions around this code), and continue on. ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1266916 Title: Missing InstanceInfoCache entry prevents delete Status in OpenStack Compute (Nova): New Bug description: If you're trying to delete an instance which is out-of-sync such that it's missing an InstanceInfoCache entry, you will receive a traceback: http://paste.openstack.org/show/60685/ Delete in this case should be allowed so that you can clean up these 'broken' instances. The solution is to catch the InstanceInfoCacheNotFound exception (like we do with other NotFound exceptions around this code), and continue on. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1266916/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1266919] [NEW] Missing InstanceInfoCache entry prevents delete
Public bug reported: If you're trying to delete an instance which is out-of-sync such that it's missing an InstanceInfoCache entry, you will receive a traceback: http://paste.openstack.org/show/60685/ Delete in this case should be allowed so that you can clean up these 'broken' instances. The solution is to catch the InstanceInfoCacheNotFound exception (like we do with other NotFound exceptions around this code), and continue on. ** Affects: nova Importance: Undecided Assignee: Rick Harris (rconradharris) Status: In Progress ** Changed in: nova Assignee: (unassigned) => Rick Harris (rconradharris) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1266919 Title: Missing InstanceInfoCache entry prevents delete Status in OpenStack Compute (Nova): In Progress Bug description: If you're trying to delete an instance which is out-of-sync such that it's missing an InstanceInfoCache entry, you will receive a traceback: http://paste.openstack.org/show/60685/ Delete in this case should be allowed so that you can clean up these 'broken' instances. The solution is to catch the InstanceInfoCacheNotFound exception (like we do with other NotFound exceptions around this code), and continue on. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1266919/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
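The fix proposed in these duplicate reports — catch InstanceInfoCacheNotFound the same way the neighbouring NotFound cases are handled, and continue with the delete — can be sketched roughly as below. The class and helper names are stand-ins, not Nova's actual code:

```python
class NotFound(Exception):
    """Stand-in for nova's base NotFound exception."""

class InstanceInfoCacheNotFound(NotFound):
    """Stand-in for the exception raised for a missing info-cache row."""

def delete_instance(instance_uuid, get_info_cache):
    # Proposed behaviour: a missing InstanceInfoCache entry must not
    # abort the delete, so the out-of-sync instance can be cleaned up.
    try:
        info_cache = get_info_cache(instance_uuid)
    except InstanceInfoCacheNotFound:
        info_cache = None  # tolerate the missing row and continue
    return {"uuid": instance_uuid, "info_cache": info_cache,
            "deleted": True}

def broken_lookup(uuid):
    # Simulate the out-of-sync database state from the bug report.
    raise InstanceInfoCacheNotFound(uuid)

result = delete_instance("786b8958-2a77", broken_lookup)
```

The point is only that the exception is swallowed at the lookup site so the rest of the teardown proceeds, mirroring the existing NotFound handling around that code.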
[Yahoo-eng-team] [Bug 1266906] [NEW] Volume type defaults remain in Defaults tab even after volume type is removed
Public bug reported: After creating a volume type 'abc', a number of default quotas are added to the Default tab's quota table: - Volumes Abc - Snapshots Abc - Gigabytes Abc I expected these defaults to be removed from the table once I remove the volume type. However, after removing the volume type they are still there. I verified that the volume type was removed from the Volumes tab, and also marked deleted in the Cinder database. ** Affects: horizon Importance: Undecided Status: New ** Tags: defaults horizon volume-type ** Attachment added: "Volume Types and Defaults after removing volume type 'abc'" https://bugs.launchpad.net/bugs/1266906/+attachment/3942814/+files/vol_type_defaults_remain_after_deletion.png -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1266906 Title: Volume type defaults remain in Defaults tab even after volume type is removed Status in OpenStack Dashboard (Horizon): New Bug description: After creating a volume type 'abc', a number of default quotas are added to the Default tab's quota table: - Volumes Abc - Snapshots Abc - Gigabytes Abc I expected these defaults to be removed from the table once I remove the volume type. However, after removing the volume type they are still there. I verified that the volume type was removed from the Volumes tab, and also marked deleted in the Cinder database. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1266906/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1266892] [NEW] Keystone v3 doesn't handle incorrect user ID on request to list its roles
Public bug reported: When an invalid user ID is given on issuing the API request to list roles for a user in a project, no appropriate handling is performed. It is expected to receive back the 404 'resource not found' status code (and it is correctly returned for other cases such as requesting the list of roles for a user on a project and supplying an invalid project ID, assigning a role to a user on a project and supplying an invalid project or user ID; etc.). At the moment the request is processed and the regular response body is returned with an empty roles list. Steps to reproduce: - Issue the API request to list roles assigned to a user on a project, using an invalid user ID (a service token can be used): curl -i -X GET http://KEYSTONE_ENDPOINT_IP:35357/v3/projects/PROJECT_ID/users/AN_INVALID_USER_ID/roles -H "X-Auth-Token: KEYSTONE_SERVICE_TOKEN" Expected result: 404 status code is returned with an error message saying that the resource hasn't been found. Actual result: 200 'success' status code is returned with a response body containing an empty roles list. ** Affects: keystone Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1266892 Title: Keystone v3 doesn't handle incorrect user ID on request to list its roles Status in OpenStack Identity (Keystone): New Bug description: When an invalid user ID is given on issuing the API request to list roles for a user in a project, no appropriate handling is performed. It is expected to receive back the 404 'resource not found' status code (and it is correctly returned for other cases such as requesting the list of roles for a user on a project and supplying an invalid project ID, assigning a role to a user on a project and supplying an invalid project or user ID; etc.). At the moment the request is processed and the regular response body is returned with an empty roles list.
Steps to reproduce: - Issue the API request to list roles assigned to a user on a project, using an invalid user ID (a service token can be used): curl -i -X GET http://KEYSTONE_ENDPOINT_IP:35357/v3/projects/PROJECT_ID/users/AN_INVALID_USER_ID/roles -H "X-Auth-Token: KEYSTONE_SERVICE_TOKEN" Expected result: 404 status code is returned with an error message saying that the resource hasn't been found. Actual result: 200 'success' status code is returned with a response body containing an empty roles list. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1266892/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
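The expected behaviour — reject an unknown user ID with a 404 before building the (empty) role list — can be sketched as follows. The data structures and names are illustrative, not Keystone's internals:

```python
class UserNotFound(Exception):
    """Illustrative stand-in for Keystone's 404 user-not-found error."""
    http_status = 404

def list_roles(users, grants, user_id, project_id):
    # Validate the user first, mirroring what the invalid-project-ID
    # path already does, instead of silently returning an empty list.
    if user_id not in users:
        raise UserNotFound("Could not find user: %s" % user_id)
    return [role for (u, p, role) in grants
            if u == user_id and p == project_id]

# Toy identity data: one known user with one role assignment.
users = {"u1"}
grants = [("u1", "p1", "admin")]
```

A valid user ID yields its role list; an invalid one raises the 404-style error rather than returning 200 with an empty body.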
[Yahoo-eng-team] [Bug 1266875] [NEW] First spawn of docker image starts with "sh" command instead of image CMD
Public bug reported: Hello. On the first spawn of an instance after uploading an image to the registry, the docker container runs with the command "sh" instead of the CMD written in the image. Subsequent spawns work as expected. ** Affects: nova Importance: Undecided Assignee: Roman Rader (antigluk) Status: New ** Changed in: nova Assignee: (unassigned) => Roman Rader (antigluk) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1266875 Title: First spawn of docker image starts with "sh" command instead of image CMD Status in OpenStack Compute (Nova): New Bug description: Hello. On the first spawn of an instance after uploading an image to the registry, the docker container runs with the command "sh" instead of the CMD written in the image. Subsequent spawns work as expected. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1266875/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1255876] Re: need to ignore swap files from getting into repository
** Changed in: oslo Status: In Progress => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1255876 Title: need to ignore swap files from getting into repository Status in OpenStack Telemetry (Ceilometer): Invalid Status in Heat Orchestration Templates and tools: New Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Neutron (virtual network service): Fix Released Status in Oslo - a Library of Common OpenStack Code: Won't Fix Status in Python client library for Ceilometer: Fix Committed Status in Python client library for Cinder: Fix Committed Status in Python client library for Glance: Fix Committed Status in Python client library for heat: In Progress Status in Python client library for Keystone: Fix Committed Status in Python client library for Neutron: Fix Committed Status in Python client library for Nova: Fix Committed Status in Python client library for Swift: Fix Committed Status in OpenStack Data Processing (Savanna): Invalid Bug description: need to ignore swap files from getting into repository currently the implemented ignore in .gitignore is *.swp however vim goes beyond to generate these so to improve it could be done *.sw? To manage notifications about this bug go to: https://bugs.launchpad.net/ceilometer/+bug/1255876/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
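The pattern change the bug proposes (`*.swp` → `*.sw?`) can be checked with Python's `fnmatch`, whose glob semantics match gitignore's for this simple case (an assumption worth stating — gitignore patterns have extra rules that `fnmatch` does not implement):

```python
from fnmatch import fnmatch

# Vim rotates swap files through .swp, .swo, .swn, ... as it
# recovers or reopens files, so ignoring only *.swp is not enough.
swap_files = ["api.py.swp", "api.py.swo", "api.py.swn"]

ignored_old = [f for f in swap_files if fnmatch(f, "*.swp")]
ignored_new = [f for f in swap_files if fnmatch(f, "*.sw?")]
```

The old pattern catches only the first file; `*.sw?` catches all three, which is exactly the improvement the .gitignore change delivers.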
[Yahoo-eng-team] [Bug 1266711] Re: AttributeError: virConnect instance has no attribute 'registerCloseCallback'
The trigger for this was an inadvertent upgrade to libvirt 1.11 on all the slaves. When we determined that http://git.openstack.org/cgit/openstack-infra/config/commit/?id=bdcc115 was broken and reverted it with http://git.openstack.org/cgit/openstack-infra/config/commit/?id=f7b9581 (back in late September), we neglected to remove the /etc/apt/sources.list.d/cloudarchive.list file it left behind. This went unnoticed because the unattended-upgrade job does not upgrade from unofficial repositories unless explicitly whitelisted, so it wasn't until early this morning, when we approved http://git.openstack.org/cgit/openstack-infra/config/commit/?id=0385b96 and apt-get ended up pulling libvirt-dev and its dependencies from Ubuntu Cloud Archive rather than main, that this came to light. The ensuing mayhem was resolved by merging http://git.openstack.org/cgit/openstack-infra/config/commit/?id=7282ca4 and then using salt.run to execute the following on all precise.* slaves: rm -f /etc/apt/sources.list.d/cloudarchive.list apt-get update apt-get install -y --force-yes \ libvirt-bin=0.9.8-2ubuntu17.16 \ libvirt-dev=0.9.8-2ubuntu17.16 \ libvirt0=0.9.8-2ubuntu17.16 \ libxenstore3.0=4.1.5-0ubuntu0.12.04.2 \ libxen-dev=4.1.5-0ubuntu0.12.04.2 Since then, subsequent unit tests are back to using the old libvirt in CI and passing normally. ** Changed in: openstack-ci Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1266711 Title: AttributeError: virConnect instance has no attribute 'registerCloseCallback' Status in OpenStack Compute (Nova): Confirmed Status in OpenStack Core Infrastructure: Fix Released Bug description: During Jenkins tests I got this error two times with a different patchset ft1.8205: nova.tests.virt.libvirt.test_libvirt.LibvirtNonblockingTestCase.test_connection_to_primitive_StringException: Empty attachments: stderr stdout pythonlogging:'': {{{WARNING [nova.virt.libvirt.driver] URI test:///default does not support events: internal error: could not initialize domain event timer}}} Traceback (most recent call last): File "nova/tests/virt/libvirt/test_libvirt.py", line 7570, in test_connection_to_primitive jsonutils.to_primitive(connection._conn, convert_instances=True) File "nova/virt/libvirt/driver.py", line 678, in _get_connection wrapped_conn = self._get_new_connection() File "nova/virt/libvirt/driver.py", line 664, in _get_new_connection wrapped_conn.registerCloseCallback( File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/tpool.py", line 172, in __getattr__ f = getattr(self._obj,attr_name) AttributeError: virConnect instance has no attribute 'registerCloseCallback' To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1266711/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1266510] Re: Misspelled config file name in ec2api_modwsgi.py
@ mark-m-miller: Filed this at https://github.com/rcbops/chef-cookbooks/issues/754 as it is an issue with our cookbooks, not nova. ** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1266510 Title: Misspelled config file name in ec2api_modwsgi.py Status in OpenStack Compute (Nova): Invalid Bug description: I found a misspelling in file ec2api_modwsgi.py. config_files = ['/etc/nova/api-paste.ini', '/etc/nova/nova.comf'] should be: config_files = ['/etc/nova/api-paste.ini', '/etc/nova/nova.conf'] To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1266510/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1266827] [NEW] Instances fail to boot with error: /var/run/openvswitch/db.sock: connection attempt failed
Public bug reported: Tested on RHEL6.5 with havana release openstack-nova-compute-2013.2.1-1.el6.noarch. Installed openstack via packstack allinone. Have not configured neutron for external access. Using the default network. Steps to reproduce: 1. Create a f19 glance image. 2. Boot an instance with the above image. Fails with the following in nova/compute.log: 2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f] File "/usr/lib/python2.6/site-packages/nova/network/linux_net.py", line 1280, in create_ovs_vif_port 2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f] run_as_root=True) 2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f] File "/usr/lib/python2.6/site-packages/nova/utils.py", line 177, in execute 2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f] return processutils.execute(*cmd, **kwargs) 2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f] File "/usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py", line 178, in execute 2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f] cmd=' '.join(cmd)) 2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f] ProcessExecutionError: Unexpected error while running command. 
2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f] Command: sudo nova-rootwrap /etc/nova/rootwrap.conf ovs-vsctl -- --may-exist add-port br-int qvoab718ad4-cb -- set Interface qvoab718ad4-cb external-ids:iface-id=ab718ad4-cb35-4736-b57b-d4c2b002cb36 external-ids:iface-status=active external-ids:attached-mac=fa:16:3e:f1:ae:df external-ids:vm-uuid=787f6b8d-2851-41d2-9a2d-d7b5701c546f 2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f] Exit code: 1 2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f] Stdout: '' 2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f] Stderr: '2014-01-07T15:36:50Z|2|reconnect|WARN|unix:/var/run/openvswitch/db.sock: connection attempt failed (Connection refused)\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Connection refused)\n' 2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f] NOTE: This was a working setup. Have booted instances before with the same image. No config changes have been made that could explain the failure. ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1266827 Title: Instances fail to boot with error: /var/run/openvswitch/db.sock: connection attempt failed Status in OpenStack Compute (Nova): New Bug description: Tested on RHEL6.5 with havana release openstack-nova-compute-2013.2.1-1.el6.noarch. Installed openstack via packstack allinone. Have not configured neutron for external access. Using the default network. Steps to reproduce: 1. Create a f19 glance image. 2. Boot an instance with the above image. 
Fails with the following in nova/compute.log: 2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f] File "/usr/lib/python2.6/site-packages/nova/network/linux_net.py", line 1280, in create_ovs_vif_port 2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f] run_as_root=True) 2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f] File "/usr/lib/python2.6/site-packages/nova/utils.py", line 177, in execute 2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f] return processutils.execute(*cmd, **kwargs) 2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f] File "/usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py", line 178, in execute 2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f] cmd=' '.join(cmd)) 2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f] ProcessExecutionError: Unexpected error while running command. 2014-01-07 21:07:10.245 32545 TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f] Command: sudo nova-rootwrap /etc/nova/rootwrap.conf
[Yahoo-eng-team] [Bug 1266825] [NEW] checking the "Name" in instances will not select all instances
Public bug reported: If I select the "Name" check box in the instances view, it only checks that check box and does not select all the instances. This is on the RDO Icehouse packages: python-django-horizon-2014.1-0.1b1.el6.noarch ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1266825 Title: checking the "Name" in instances will not select all instances Status in OpenStack Dashboard (Horizon): New Bug description: If I select the "Name" check box in the instances view, it only checks that check box and does not select all the instances. This is on the RDO Icehouse packages: python-django-horizon-2014.1-0.1b1.el6.noarch To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1266825/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1266711] Re: AttributeError: virConnect instance has no attribute 'registerCloseCallback'
** Also affects: openstack-ci Importance: Undecided Status: New ** Changed in: openstack-ci Status: New => In Progress ** Changed in: openstack-ci Importance: Undecided => Critical ** Changed in: openstack-ci Assignee: (unassigned) => Jeremy Stanley (fungi) ** Changed in: openstack-ci Milestone: None => icehouse -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1266711 Title: AttributeError: virConnect instance has no attribute 'registerCloseCallback' Status in OpenStack Compute (Nova): Confirmed Status in OpenStack Core Infrastructure: In Progress Bug description: During Jenkins tests I got this error two times with a different patchset ft1.8205: nova.tests.virt.libvirt.test_libvirt.LibvirtNonblockingTestCase.test_connection_to_primitive_StringException: Empty attachments: stderr stdout pythonlogging:'': {{{WARNING [nova.virt.libvirt.driver] URI test:///default does not support events: internal error: could not initialize domain event timer}}} Traceback (most recent call last): File "nova/tests/virt/libvirt/test_libvirt.py", line 7570, in test_connection_to_primitive jsonutils.to_primitive(connection._conn, convert_instances=True) File "nova/virt/libvirt/driver.py", line 678, in _get_connection wrapped_conn = self._get_new_connection() File "nova/virt/libvirt/driver.py", line 664, in _get_new_connection wrapped_conn.registerCloseCallback( File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/tpool.py", line 172, in __getattr__ f = getattr(self._obj,attr_name) AttributeError: virConnect instance has no attribute 'registerCloseCallback' To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1266711/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help 
: https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1266812] [NEW] Keystone spamming log with deprecation warnings
Public bug reported: Once the startup finishes, the log is currently filled with line after line of 2014-01-07 16:55:04.871 29654 WARNING keystone.common.utils [-] Deprecated: v2 API is deprecated as of Icehouse in favor of v3 API and may be removed in K. ** Affects: keystone Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1266812 Title: Keystone spamming log with deprecation warnings Status in OpenStack Identity (Keystone): New Bug description: Once the startup finishes, the log is currently filled with line after line of 2014-01-07 16:55:04.871 29654 WARNING keystone.common.utils [-] Deprecated: v2 API is deprecated as of Icehouse in favor of v3 API and may be removed in K. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1266812/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
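One common remedy for this kind of spam is to emit each distinct deprecation warning only once per process rather than once per request. A minimal sketch of the idea, not Keystone's eventual fix:

```python
import logging

_emitted = set()

def warn_deprecated_once(logger, message):
    # Remember which deprecation messages have already been logged so
    # a hot per-request code path cannot flood the log with duplicates.
    if message in _emitted:
        return False
    _emitted.add(message)
    logger.warning("Deprecated: %s", message)
    return True

log = logging.getLogger("keystone.common.utils")
first = warn_deprecated_once(log, "v2 API is deprecated as of Icehouse")
repeat = warn_deprecated_once(log, "v2 API is deprecated as of Icehouse")
```

The first call logs the warning; every subsequent call with the same message is suppressed, so the operator still learns about the deprecation without the log filling up.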
[Yahoo-eng-team] [Bug 1266794] [NEW] Neutron install fails on Windows
Public bug reported: "python setup.py install" now fails on Windows due to this patch: https://review.openstack.org/#/c/64747/ with the following error: error: can't copy 'etc\neutron\plugins\nicira\nvp.ini': doesn't exist or not a regular file The issue is related to the usage of symlinks, which are not supported on Windows, in: etc/neutron/plugins/nicira ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1266794 Title: Neutron install fails on Windows Status in OpenStack Neutron (virtual network service): New Bug description: "python setup.py install" now fails on Windows due to this patch: https://review.openstack.org/#/c/64747/ with the following error: error: can't copy 'etc\neutron\plugins\nicira\nvp.ini': doesn't exist or not a regular file The issue is related to the usage of symlinks, which are not supported on Windows, in: etc/neutron/plugins/nicira To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1266794/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1266792] [NEW] test_create_list_show_delete_interfaces fails with InterfaceAttachFailed in tempest-dsvm-neutron
Public bug reported: I couldn't find obvious existing bugs with the same failures so reporting a new one. This is a nova compute test that fails but it's failing because neutron isn't serving a port for the instance. The nova compute errors are pretty obvious but don't give a good reason why it fails: http://logs.openstack.org/25/64725/4/check/check-tempest-dsvm-neutron/811ce6f/logs/screen-n-cpu.txt.gz?level=TRACE The neutron server logs are full of errors, so I'm not sure which is the root cause: http://logs.openstack.org/25/64725/4/check/check-tempest-dsvm-neutron/811ce6f/logs/screen-q-svc.txt.gz?level=TRACE Looks like neutron is just dead or in terrible shape for some reason. I'm actually surprised there aren't more test failures in this run given all of the errors in the neutron logs. The errors are more limited in the dhcp agent logs, looks like a problem with openvswitch? http://logs.openstack.org/25/64725/4/check/check-tempest-dsvm-neutron/811ce6f/logs/screen-q-dhcp.txt.gz?level=TRACE Looking at the ovs agent logs, there is a single error: http://logs.openstack.org/25/64725/4/check/check-tempest-dsvm-neutron/811ce6f/logs/screen-q-agt.txt.gz#_2014-01-07_02_00_00_640 The server instance in tempest must be getting reused because its UUID c11cb018-0040-473d-90bb-d13a7f0d267d shows up 150 times in the neutron ovs agent logs. ** Affects: neutron Importance: Undecided Status: New ** Tags: gate-failure ** Tags added: gate-failure -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1266792 Title: test_create_list_show_delete_interfaces fails with InterfaceAttachFailed in tempest-dsvm-neutron Status in OpenStack Neutron (virtual network service): New Bug description: I couldn't find obvious existing bugs with the same failures so reporting a new one. This is a nova compute test that fails but it's failing because neutron isn't serving a port for the instance.
The nova compute errors are pretty obvious but doesn't give a good reason why it fails: http://logs.openstack.org/25/64725/4/check/check-tempest-dsvm- neutron/811ce6f/logs/screen-n-cpu.txt.gz?level=TRACE The neutron server logs are full of errors, so I'm not sure which is the root cause: http://logs.openstack.org/25/64725/4/check/check-tempest-dsvm- neutron/811ce6f/logs/screen-q-svc.txt.gz?level=TRACE Looks like neutron is just dead or in terrible shape for some reason. I'm actually surprised there aren't more test failures in this run given all of the errors in the neutron logs. The errors are more limited in the dhcp agent logs, looks like a problem with openvswitch? http://logs.openstack.org/25/64725/4/check/check-tempest-dsvm- neutron/811ce6f/logs/screen-q-dhcp.txt.gz?level=TRACE Looking at the ovs agent logs, there is a single error: http://logs.openstack.org/25/64725/4/check/check-tempest-dsvm- neutron/811ce6f/logs/screen-q-agt.txt.gz#_2014-01-07_02_00_00_640 The server instance in tempest must be getting reused because it's UUID c11cb018-0040-473d-90bb-d13a7f0d267d shows up 150 times in the neutron ovs agent logs. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1266792/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1266780] [NEW] DHCP server crashes on networks where IPv6 addresses were supplied unshortened
Public bug reported: Added a subnet as shown below: neutron subnet-create HA1 fd5f:5d21:845:1e3c::/64 --name HA1SubnetIPv6 --no-gateway --host-route destination=::/0,nexthop=fd5f:5d21:845:1e3c::2 --allocation-pool start=fd5f:5d21:0845:1e3c:ff00::0042:0100,end=fd5f:5d21:0845:1e3c:ff00::0042:0180 --ip-version 6 I wasn't necessarily expecting DHCP to work for IPv6, but it stopped working for IPv4 as well. In the log, I saw that the DHCP server had begun failing with this error: http://paste.openstack.org/show/60638/ After a little digging in the code, I realised the DHCP agent was trying to re-add an address (for the DHCP server) that already existed. I'm pretty sure the problem happens in init_l3 in interface.py, where desired addresses are compared with existing addresses returned by "ip addr show". Since "show" returns IPv6 addresses in shortened form (i.e. with unnecessary 0s removed), and the starting address I specified when creating the subnet is unshortened, the comparison fails and the agent tries to add the address again, which fails and causes the DHCP agent to abort startup. Of course, it's possible that the failure to convert IPv6 addresses into shortened form (or to enforce entry only in shortened form) could have other effects that I haven't observed. Workaround: specify all IPv6 addresses in shortened form when setting up subnets; the exception then stops happening. (Though I've subsequently run into Bug 1257446, which also prevents dual-stack operation.) ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1266780 Title: DHCP server crashes on networks where IPv6 addresses were supplied unshortened Status in OpenStack Neutron (virtual network service): New To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1266780/+subscriptions
[Yahoo-eng-team] [Bug 1266760] [NEW] Project Member tab is broken
Public bug reported: The lack of pagination degrades the user experience and can lead to errors when managing a project with many users. ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1266760 Title: Project Member tab is broken Status in OpenStack Dashboard (Horizon): New To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1266760/+subscriptions
[Yahoo-eng-team] [Bug 1266740] [NEW] tempest test_aggregate_add_host_create_server_with_az fails with "server failed to build and is ERROR status"
Public bug reported:

2014-01-07 04:25:45.882 | Traceback (most recent call last):
2014-01-07 04:25:45.883 |   File "tempest/api/compute/v3/admin/test_aggregates.py", line 212, in test_aggregate_add_host_create_server_with_az
2014-01-07 04:25:45.883 |     wait_until='ACTIVE')
2014-01-07 04:25:45.883 |   File "tempest/api/compute/base.py", line 138, in create_test_server
2014-01-07 04:25:45.883 |     server['id'], kwargs['wait_until'])
2014-01-07 04:25:45.884 |   File "tempest/services/compute/v3/xml/servers_client.py", line 418, in wait_for_server_status
2014-01-07 04:25:45.884 |     extra_timeout=extra_timeout)
2014-01-07 04:25:45.884 |   File "tempest/common/waiters.py", line 76, in wait_for_server_status
2014-01-07 04:25:45.885 |     raise exceptions.BuildErrorException(server_id=server_id)
2014-01-07 04:25:45.885 | BuildErrorException: Server 3fb9c710-51b8-4772-9b43-5b284efa6f45 failed to build and is in ERROR status

http://logs.openstack.org/98/65198/2/check/check-tempest-dsvm-postgres-full/5b803ac/console.html

** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1266740 Title: tempest test_aggregate_add_host_create_server_with_az fails with "server failed to build and is ERROR status" Status in OpenStack Compute (Nova): New To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1266740/+subscriptions
[Yahoo-eng-team] [Bug 1266711] [NEW] AttributeError: virConnect instance has no attribute 'registerCloseCallback'
Public bug reported: During Jenkins tests I got this error two times, with different patchsets:

ft1.8205: nova.tests.virt.libvirt.test_libvirt.LibvirtNonblockingTestCase.test_connection_to_primitive_StringException: Empty attachments: stderr stdout

pythonlogging:'': {{{WARNING [nova.virt.libvirt.driver] URI test:///default does not support events: internal error: could not initialize domain event timer}}}

Traceback (most recent call last):
  File "nova/tests/virt/libvirt/test_libvirt.py", line 7570, in test_connection_to_primitive
    jsonutils.to_primitive(connection._conn, convert_instances=True)
  File "nova/virt/libvirt/driver.py", line 678, in _get_connection
    wrapped_conn = self._get_new_connection()
  File "nova/virt/libvirt/driver.py", line 664, in _get_new_connection
    wrapped_conn.registerCloseCallback(
  File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/tpool.py", line 172, in __getattr__
    f = getattr(self._obj,attr_name)
AttributeError: virConnect instance has no attribute 'registerCloseCallback'

** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1266711 Title: AttributeError: virConnect instance has no attribute 'registerCloseCallback' Status in OpenStack Compute (Nova): New To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1266711/+subscriptions
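The traceback in this report comes from unconditionally calling an API that older libvirt-python builds don't expose. A defensive feature check is the usual pattern for optional binding methods; the sketch below is illustrative only (the class and function names are made up, not nova's actual fix), but it shows how a hasattr guard keeps both old and new bindings working.

```python
class OldLibvirtConn:
    """Stands in for a virConnect from a libvirt build without close callbacks."""


class NewLibvirtConn:
    """Stands in for a virConnect whose binding does expose the method."""

    def registerCloseCallback(self, cb, opaque):
        self.cb = cb


def register_close_callback(conn, cb):
    """Register cb if the binding supports it; return whether it was set."""
    if hasattr(conn, "registerCloseCallback"):
        conn.registerCloseCallback(cb, None)
        return True
    # Old binding: proceed without connection-close notifications.
    return False


print(register_close_callback(OldLibvirtConn(), lambda *a: None))  # False
print(register_close_callback(NewLibvirtConn(), lambda *a: None))  # True
```

Guarding at the call site (rather than catching AttributeError broadly) keeps genuine programming errors from being silently swallowed.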
[Yahoo-eng-team] [Bug 1266676] [NEW] replace deprecated code for django-1.7
Public bug reported:

1.7 See the Django 1.5 release notes for more details on these changes.

- The module django.utils.simplejson will be removed. The standard library provides json, which should be used instead.
- The function django.utils.itercompat.product will be removed. The Python builtin version should be used instead.
- Auto-correction of INSTALLED_APPS and TEMPLATE_DIRS settings when they are specified as a plain string instead of a tuple will be removed and raise an exception.
- The mimetype argument to the __init__ methods of HttpResponse, SimpleTemplateResponse, and TemplateResponse will be removed. content_type should be used instead. This also applies to the render_to_response() shortcut and the sitemap views, index() and sitemap().
- When HttpResponse is instantiated with an iterator, or when content is set to an iterator, that iterator will be immediately consumed.
- The AUTH_PROFILE_MODULE setting, and the get_profile() method on the User model, will be removed.
- The cleanup management command will be removed. It's replaced by clearsessions.
- The daily_cleanup.py script will be removed.
- The depth keyword argument will be removed from select_related().
- The undocumented get_warnings_state()/restore_warnings_state() functions from django.test.utils and the save_warnings_state()/restore_warnings_state() django.test.*TestCase methods are deprecated. Use the warnings.catch_warnings context manager available starting with Python 2.6 instead.
- The undocumented check_for_test_cookie method in AuthenticationForm will be removed following an accelerated deprecation. Users subclassing this form should remove calls to this method, and instead ensure that their auth-related views are CSRF protected, which ensures that cookies are enabled.
- The version of django.contrib.auth.views.password_reset_confirm() that supports base36 encoded user IDs (django.contrib.auth.views.password_reset_confirm_uidb36) will be removed. If your site has been running Django 1.6 for more than PASSWORD_RESET_TIMEOUT_DAYS, this change will have no effect. If not, then any password reset links generated before you upgrade to Django 1.7 won't work after the upgrade.

This is connected with blueprint django-1point6

** Affects: horizon Importance: Medium Assignee: Matthias Runge (mrunge) Status: In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1266676 Title: replace deprecated code for django-1.7 Status in OpenStack Dashboard (Horizon): In Progress
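For the first item in the deprecation list above, the migration is mechanical: django.utils.simplejson is replaced by the stdlib json module, which offers the same dumps/loads surface for typical usage. A minimal sketch (the payload here is made up for illustration, not taken from Horizon):

```python
# Before (removed in Django 1.7):
#     from django.utils import simplejson as json
# After: the standard library module is a drop-in for common cases.
import json

payload = json.dumps({"id": 42, "status": "ACTIVE"}, sort_keys=True)
print(payload)  # {"id": 42, "status": "ACTIVE"}

# The mimetype -> content_type rename from the same list is equally mechanical:
#     HttpResponse(payload, mimetype="application/json")      # removed in 1.7
#     HttpResponse(payload, content_type="application/json")  # replacement
```

Most of the other items in the list are similar one-for-one substitutions (cleanup -> clearsessions, catch_warnings instead of the test-state helpers), which is why this bug is tracked as routine cleanup work under the blueprint.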
[Yahoo-eng-team] [Bug 1266671] [NEW] nova's development.environment.rst has inadequate description of tools/install_venv.py
Public bug reported: nova/doc/source/devref/development.environment.rst has a section that discusses tools/install_venv.py; that section says the script installs the dependencies in requirements.txt but does not clearly say that the dependencies in test-requirements.txt are also installed. ** Affects: nova Importance: Undecided Assignee: Mike Spreitzer (mike-spreitzer) Status: In Progress ** Tags: documentation low-hanging-fruit -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1266671 Title: nova's development.environment.rst has inadequate description of tools/install_venv.py Status in OpenStack Compute (Nova): In Progress To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1266671/+subscriptions