On Sun, Dec 11, 2016 at 1:48 PM, Eyal Edri <[email protected]> wrote:

> FYI,
>
> We're seeing a single test, 'add hotplug disk', failing recently on OST. The
> test failed only once on the jobs [1] and recovered on its own after one
> run, but now I also see it on check-patch jobs [2].
>
> Looking at the changelog, I can't find immediate suspects; if this is a
> race, it would make sense that it got in at some point and didn't fail the
> first time.
>
> Can anyone have a look to help debug this? It also failed once on the 4.0 job.
>
> I see some exceptions in vdsm.log [3]:
>
> [1] http://jenkins.ovirt.org/view/experimental%20jobs/job/test-repo_ovirt_experimental_master/4079/
> [2] http://jenkins.ovirt.org/job/ovirt-system-tests_master_check-patch-fc24-x86_64/322/
> [3]
>
> 2016-12-11 04:51:19,055 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getDeviceList succeeded in 0.39 seconds (__init__:515)
> 2016-12-11 04:51:19,106 INFO  (jsonrpc/5) [dispatcher] Run and protect: getAllTasksInfo(spUUID=None, options=None) (logUtils:49)
> 2016-12-11 04:51:19,106 ERROR (jsonrpc/5) [storage.TaskManager.Task] (Task='ef8d6b6f-256c-4420-9998-bef8a140a20f') Unexpected error (task:870)
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/task.py", line 877, in _run
>     return fn(*args, **kargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/logUtils.py", line 50, in wrapper
>     res = f(*args, **kwargs)
>   File "/usr/share/vdsm/storage/hsm.py", line 2193, in getAllTasksInfo
>     allTasksInfo = self._pool.getAllTasksInfo()
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 77, in wrapper
>     raise SecureError("Secured object is not in safe state")
> SecureError: Secured object is not in safe state
> 2016-12-11 04:51:19,109 INFO  (jsonrpc/5) [storage.TaskManager.Task] (Task='ef8d6b6f-256c-4420-9998-bef8a140a20f') aborting: Task is aborted: u'Secured object is not in safe state' - code 100 (task:1175)
> 2016-12-11 04:51:19,109 ERROR (jsonrpc/5) [storage.Dispatcher] Secured object is not in safe state (dispatcher:80)
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/dispatcher.py", line 72, in wrapper
>     result = ctask.prepare(func, *args, **kwargs)
>   File "/usr/share/vdsm/storage/task.py", line 105, in wrapper
>     return m(self, *a, **kw)
>   File "/usr/share/vdsm/storage/task.py", line 1183, in prepare
>     raise self.error
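For context on the first traceback: the SecureError comes from VDSM's securable wrapper, which guards pool methods so they may only run while the object is in the "safe" state (i.e. while the host holds the SPM role). A minimal sketch of that pattern, with simplified, hypothetical names rather than the actual vdsm/storage/securable.py code:

```python
class SecureError(RuntimeError):
    pass


def secured_method(fn):
    """Allow the call only while the owning object reports a safe state."""
    def wrapper(self, *args, **kwargs):
        if not self._is_safe:
            raise SecureError("Secured object is not in safe state")
        return fn(self, *args, **kwargs)
    return wrapper


class Pool(object):
    def __init__(self):
        # e.g. the host currently holds the SPM role
        self._is_safe = True

    @secured_method
    def getAllTasksInfo(self):
        return {}


pool = Pool()
pool.getAllTasksInfo()   # fine while the pool is in the safe state
pool._is_safe = False    # e.g. SPM role lost mid-test (a possible race)
try:
    pool.getAllTasksInfo()
except SecureError as e:
    print(e)             # "Secured object is not in safe state"
```

If the test raced with an SPM handover, a getAllTasksInfo call landing in that window would fail exactly like the log above.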
I vaguely remember a sanlock issue?

> second error:
>
> 2016-12-11 04:55:21,837 ERROR (libvirt/events) [vds] Error running VM callback (clientIF:543)
> Traceback (most recent call last):
>   File "/usr/share/vdsm/clientIF.py", line 514, in dispatchLibvirtEvents
>     v.onLibvirtLifecycleEvent(event, detail, None)
>   File "/usr/share/vdsm/virt/vm.py", line 4318, in onLibvirtLifecycleEvent
>     elif detail == libvirt.VIR_DOMAIN_EVENT_SUSPENDED_POSTCOPY:
> AttributeError: 'module' object has no attribute 'VIR_DOMAIN_EVENT_SUSPENDED_POSTCOPY'

This also sounds familiar - I thought there was a workaround for it 2 weeks ago or so?
Y.

> --
> Eyal Edri
> Associate Manager
> RHV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
> _______________________________________________
> Devel mailing list
> [email protected]
> http://lists.phx.ovirt.org/mailman/listinfo/devel
_______________________________________________
Devel mailing list
[email protected]
http://lists.phx.ovirt.org/mailman/listinfo/devel
