Re: [openstack-dev] [cinder][nova][ci] Tintri Cinder CI failures after Nova change
As Matt found, the problem was with out-of-date requirements. Going forward I would advise any third-party CI that is not spinning up a new VM for every job to purge all python packages after each run. This will make devstack reinstall everything, avoiding this type of problem. Though the problem was with global requirements, only our CI was failing, because everyone else was getting the newest version of each package each time.

We are still failing on one test (test_ec2_instance_run.InstanceRunTest) and we are not sure what the cause is. Here is a log from a recent run:

http://openstack-ci.tintri.com/tintri/refs-changes-88-210588-1/

Here is the failing test:

ft1.280: setUpClass (tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest)
_StringException: Traceback (most recent call last):
  File "tempest/test.py", line 272, in setUpClass
    six.reraise(etype, value, trace)
  File "tempest/test.py", line 265, in setUpClass
    cls.resource_setup()
  File "tempest/thirdparty/boto/test_ec2_instance_run.py", line 91, in resource_setup
    state = wait.state_wait(_state, "available")
  File "tempest/thirdparty/boto/utils/wait.py", line 51, in state_wait
    (dtime, final_set, status))
AssertionError: State change timeout exceeded!(196s) While waiting for set(['available']) at "failed"

From the n-crt log:

2015-08-07 15:21:58.237 ERROR oslo_messaging.rpc.dispatcher [req-d4ce0001-0754-461f-8fdc-57908baf88f7 tempest-InstanceRunTest-1110235717 tempest-InstanceRunTest-357311946] Exception during message handling:
2015-08-07 15:21:58.237 32745 ERROR oslo_messaging.rpc.dispatcher Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
    executor_callback))
  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
    executor_callback)
  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 129, in _do_dispatch
    result = func(ctxt, **new_args)
  File "/opt/stack/nova/nova/cert/manager.py", line 70, in decrypt_text
    return crypto.decrypt_text(project_id, base64.b64decode(text))
  File "/opt/stack/nova/nova/crypto.py", line 200, in decrypt_text
    return priv_key.decrypt(text, padding.PKCS1v15())
  File "/usr/lib/python2.7/site-packages/cryptography/hazmat/backends/openssl/rsa.py", line 536, in decrypt
    return _enc_dec_rsa(self._backend, self, ciphertext, padding)
  File "/usr/lib/python2.7/site-packages/cryptography/hazmat/backends/openssl/rsa.py", line 76, in _enc_dec_rsa
    return _enc_dec_rsa_pkey_ctx(backend, key, data, padding_enum)
  File "/usr/lib/python2.7/site-packages/cryptography/hazmat/backends/openssl/rsa.py", line 105, in _enc_dec_rsa_pkey_ctx
    _handle_rsa_enc_dec_error(backend, key)
  File "/usr/lib/python2.7/site-packages/cryptography/hazmat/backends/openssl/rsa.py", line 145, in _handle_rsa_enc_dec_error
    assert errors[0].reason in decoding_errors
AssertionError
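The purge step suggested at the top of this message could look something like the pipeline below. This is a sketch, not what Tintri's CI actually runs; a canned `pip freeze` listing stands in for the live one so the filtering can be shown safely, and editable (`-e`) checkouts such as /opt/stack/nova are left alone.

```shell
# Simulated `pip freeze` output; a real cleanup would pipe `pip freeze`
# itself. The package list below is illustrative only.
freeze_output='oslo.versionedobjects==0.5.2
six==1.9.0
-e git+https://git.openstack.org/openstack/nova#egg=nova'

# Skip editable (-e) checkouts, strip the version pins, and print the
# package names that would be purged. A real run would append:
#   | xargs -r -n1 pip uninstall -y
printf '%s\n' "$freeze_output" | grep -v '^-e' | cut -d'=' -f1
```

With everything purged, the next devstack run reinstalls each requirement at the version global-requirements currently allows, instead of reusing whatever stale copy is on the node.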
Re: [openstack-dev] [cinder][nova][ci] Tintri Cinder CI failures after Nova change
On 8/7/2015 8:38 AM, Matt Riedemann wrote:
> On 8/6/2015 3:30 PM, Skyler Berg wrote:
>> After the change "cleanup NovaObjectDictCompat from virtual_interface"
>> [1] was merged into Nova on the morning of August 5th, Tintri's CI for
>> Cinder started failing 13 test cases that involve a volume being
>> attached to an instance [2].
>>
>> I have verified that the tests fail with the above mentioned change and
>> pass when running against the previous commit.
>>
>> If anyone knows why this patch is causing an issue or is experiencing
>> similar problems, please let me know.
>>
>> In the meantime, expect Tintri's CI to be either down or reporting
>> failures until a solution is found.
>>
>> [1] https://review.openstack.org/#/c/200823/
>> [2] http://openstack-ci.tintri.com/tintri/refs-changes-06-201406-35/
>
> From the n-cpu logs this is the TypeError:
>
> 2015-08-05 06:34:54.826 8 ERROR nova.compute.manager Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
>     executor_callback))
>   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
>     executor_callback)
>   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 129, in _do_dispatch
>     result = func(ctxt, **new_args)
>   File "/opt/stack/nova/nova/network/floating_ips.py", line 113, in allocate_for_instance
>     **kwargs)
>   File "/opt/stack/nova/nova/network/manager.py", line 496, in allocate_for_instance
>     context, instance_uuid)
>   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 119, in __exit__
>     six.reraise(self.type_, self.value, self.tb)
>   File "/opt/stack/nova/nova/network/manager.py", line 490, in allocate_for_instance
>     networks, macs)
>   File "/opt/stack/nova/nova/network/manager.py", line 755, in _allocate_mac_addresses
>     network['id'])
>   File "/opt/stack/nova/nova/network/manager.py", line 774, in _add_virtual_interface
>     vif.create()
>   File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 205, in wrapper
>     self[key] = field.from_primitive(self, key, value)
> TypeError: 'VirtualInterface' object does not support item assignment
>
> It looks like you're missing this change in whatever version of
> oslo.versionedobjects you have in your CI:
>
> https://review.openstack.org/#/c/202200/
>
> That should be in o.vo 0.6.0; the latest is 0.7.0.
>
> What version of oslo.versionedobjects is on this system? It would be
> helpful to have pip freeze output.

I proposed a change to global-requirements to raise the minimum required oslo.versionedobjects to >= 0.6.0 here:

https://review.openstack.org/#/c/210445/

--
Thanks,

Matt Riedemann

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
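Matt's question boils down to a version comparison against the proposed >= 0.6.0 minimum. A small sketch of that check, using `pkg_resources` from setuptools (the version strings below are illustrative, not taken from Tintri's CI node):

```python
# Compare an installed version string against the proposed >= 0.6.0 minimum
# for oslo.versionedobjects. pkg_resources ships with setuptools, which any
# devstack node will have.
import pkg_resources


def meets_minimum(installed, minimum="0.6.0"):
    """Return True if `installed` satisfies the minimum required version."""
    return (pkg_resources.parse_version(installed) >=
            pkg_resources.parse_version(minimum))


print(meets_minimum("0.5.2"))  # a stale pre-fix package: False
print(meets_minimum("0.7.0"))  # the latest release at the time: True
```

Raising the floor in global-requirements means devstack's `pip install` would refuse to leave an 0.5.x copy in place, which is why only CIs reusing old nodes hit this.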
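The TypeError quoted above can be reproduced in miniature: a plain object with no dict-compat mixin rejects the `self[key] = value` assignment that pre-0.6.0 oslo.versionedobjects performs in its field setter. The class below is a stand-in for illustration, not Nova's real VirtualInterface:

```python
# Stand-in for a NovaObject that no longer mixes in NovaObjectDictCompat;
# not Nova's actual VirtualInterface class.
class VirtualInterface(object):
    pass


vif = VirtualInterface()
try:
    # Old oslo.versionedobjects does `self[key] = value` when setting a
    # field, which requires dict-style item assignment on the object.
    vif["address"] = "fa:16:3e:00:00:01"
except TypeError as exc:
    print(exc)  # 'VirtualInterface' object does not support item assignment
```

That is why the fix landed on both sides: newer o.vo stops using item assignment, and Nova's change removed the mixin that used to make it work.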
[openstack-dev] [cinder][nova][ci] Tintri Cinder CI failures after Nova change
After the change "cleanup NovaObjectDictCompat from virtual_interface" [1] was merged into Nova on the morning of August 5th, Tintri's CI for Cinder started failing 13 test cases that involve a volume being attached to an instance [2].

I have verified that the tests fail with the above mentioned change and pass when running against the previous commit.

If anyone knows why this patch is causing an issue or is experiencing similar problems, please let me know. In the meantime, expect Tintri's CI to be either down or reporting failures until a solution is found.

[1] https://review.openstack.org/#/c/200823/
[2] http://openstack-ci.tintri.com/tintri/refs-changes-06-201406-35/