Hi, all,
  I'm trying to run the "dsvm-tempest-full" job on a slave node that was 
built and is managed by nodepool,
but I got the following error messages:
----------------------
14:45:48 + timeout -s 9 174m /opt/stack/new/devstack-gate/devstack-vm-gate.sh
14:45:48 timeout: failed to run command ‘/opt/stack/new/devstack-gate/devstack-vm-gate.sh’: No such file or directory

...
14:45:48 + echo 'ERROR: the main setup script run by this job failed - exit code: 127'
14:45:48 ERROR: the main setup script run by this job failed - exit code: 127
...
14:45:52 No hosts matched
14:45:52 + exit 127
14:45:52 Build step 'Execute shell' marked build as failure
14:45:53 Finished: FAILURE
----------------------
I have no idea what caused this issue.
What's worse, there seems to be no way to dig up more detailed information 
about this error, because nodepool automatically deleted the slave node 
soon after the job finished.
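For what it's worth, the exit code 127 seems to come from timeout itself 
failing to exec the script, which suggests 
/opt/stack/new/devstack-gate/devstack-vm-gate.sh simply did not exist on the 
node (I still don't know why it was missing). The same code can be 
reproduced locally; the path below is an obviously made-up one, not the 
real job path:

```shell
# GNU timeout exits with status 127 when the command it is asked to run
# cannot be found -- the same code shown in the job log above.
timeout -s 9 5 /nonexistent/devstack-vm-gate.sh
echo "exit code: $?"   # prints: exit code: 127
```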
Is there a nodepool setting that prevents a used node from being deleted?
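From reading around, the nodepool CLI appears to have a "hold" subcommand 
that marks a node so the daemon will not delete it; I'm not sure this is 
the recommended approach, and the node ID below is made up:

```shell
# Put a nodepool-managed node into the "hold" state so it is not cleaned
# up automatically after the job finishes (node ID is hypothetical):
nodepool hold 1234567

# When done debugging, remove it manually:
nodepool delete 1234567
```

Is that the right way to do it, or is there a configuration option instead?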

I have posted the full console log (as displayed in the Jenkins server) to 
the paste server:
http://paste.openstack.org/show/434487/

Could you give me some guidance on how to work this out?
Thanks in advance.


Xiexs
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev