I managed to reproduce this by creating a slow VM: Ubuntu 14.04 in
VirtualBox, 1 GB RAM, 2 vCPUs, capped at 50% of CPU performance.

tox -epy27 -- --until-fail multiprocess

On the 3rd time through I got the following:

running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
${PYTHON:-python} -m subunit.run discover -t ./ ./nova/tests --load-list /tmp/tmp7YX5uh
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
${PYTHON:-python} -m subunit.run discover -t ./ ./nova/tests --load-list /tmp/tmpU5Qsw_
{0} nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_killed_worker_recover [5.688803s] ... ok

Captured stderr:
~~~~~~~~~~~~~~~~
    
    /home/sdague/nova/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/sql/default_comparator.py:35: SAWarning: The IN-predicate on "instances.uuid" was invoked with an empty sequence. This results in a contradiction, which nonetheless can be expensive to evaluate. Consider alternative strategies for improved performance.
      return o[0](self, self.expr, op, *(other + o[1:]), **kwargs)
    
{0} nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITestV3.test_killed_worker_recover [2.634592s] ... ok
{0} nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITestV3.test_restart_sighup [1.565492s] ... ok
{0} nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITestV3.test_terminate_sigterm [2.400319s] ... ok
{1} nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_restart_sighup [160.043131s] ... FAILED
{1} nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigkill [2.317150s] ... ok
{1} nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigterm [2.274788s] ... ok
{1} nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITestV3.test_terminate_sigkill [2.089225s] ... ok

and then... it hangs.

So, testr is correctly killing the restart test when it times out. It is
also correctly moving on to the remaining tests. However, it then ends up
in a hung state and never finishes once the tests are done.

Why the test timed out, I don't know. But the fact that testr hangs
afterwards is an issue all by itself.
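For context, the per-test timeout that killed the restart test (OS_TEST_TIMEOUT, enforced by the fixtures library as the traceback in the bug description shows) works roughly like this. This is a minimal stdlib sketch assuming POSIX SIGALRM, not fixtures' actual code; `run_with_timeout` and its names are illustrative:

```python
import signal
import time


class TimeoutException(Exception):
    """Raised when the alarm fires, mirroring fixtures' TimeoutException."""


def _signal_handler(signum, frame):
    raise TimeoutException()


def run_with_timeout(func, seconds):
    # Install a SIGALRM handler and arm the alarm; this is roughly how a
    # per-test timeout is enforced on POSIX. When the alarm fires mid-call,
    # the handler raises straight out of whatever the test was doing --
    # which is why the traceback ends inside eventlet's sleep.
    old = signal.signal(signal.SIGALRM, _signal_handler)
    signal.alarm(seconds)
    try:
        func()
        return True           # finished within the timeout
    except TimeoutException:
        return False          # killed by the timeout
    finally:
        signal.alarm(0)       # disarm the pending alarm
        signal.signal(signal.SIGALRM, old)
```

Note that the handler can only run when the process gets scheduled to deliver the signal; a test stuck in an uninterruptible state won't be killed this way.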

** Also affects: testrepository
   Importance: Undecided
       Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357578

Title:
  Unit test: nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigterm timing out in gate

Status in OpenStack Compute (Nova):
  Confirmed
Status in Test Repository:
  New

Bug description:
  http://logs.openstack.org/62/114062/3/gate/gate-nova-python27/2536ea4/console.html

   FAIL: nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigterm

  2014-08-15 13:46:09.155 | INFO [nova.tests.integrated.api.client] Doing GET on /v2/openstack//flavors/detail
  2014-08-15 13:46:09.155 | INFO [nova.tests.integrated.test_multiprocess_api] sent launcher_process pid: 10564 signal: 15
  2014-08-15 13:46:09.155 | INFO [nova.tests.integrated.test_multiprocess_api] waiting on process 10566 to exit
  2014-08-15 13:46:09.155 | INFO [nova.wsgi] Stopping WSGI server.
  2014-08-15 13:46:09.155 | }}}
  2014-08-15 13:46:09.156 | 
  2014-08-15 13:46:09.156 | Traceback (most recent call last):
  2014-08-15 13:46:09.156 |   File "nova/tests/integrated/test_multiprocess_api.py", line 206, in test_terminate_sigterm
  2014-08-15 13:46:09.156 |     self._terminate_with_signal(signal.SIGTERM)
  2014-08-15 13:46:09.156 |   File "nova/tests/integrated/test_multiprocess_api.py", line 194, in _terminate_with_signal
  2014-08-15 13:46:09.156 |     self.wait_on_process_until_end(pid)
  2014-08-15 13:46:09.156 |   File "nova/tests/integrated/test_multiprocess_api.py", line 146, in wait_on_process_until_end
  2014-08-15 13:46:09.157 |     time.sleep(0.1)
  2014-08-15 13:46:09.157 |   File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/greenthread.py", line 31, in sleep
  2014-08-15 13:46:09.157 |     hub.switch()
  2014-08-15 13:46:09.157 |   File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 287, in switch
  2014-08-15 13:46:09.157 |     return self.greenlet.switch()
  2014-08-15 13:46:09.157 |   File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 339, in run
  2014-08-15 13:46:09.158 |     self.wait(sleep_time)
  2014-08-15 13:46:09.158 |   File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/poll.py", line 82, in wait
  2014-08-15 13:46:09.158 |     sleep(seconds)
  2014-08-15 13:46:09.158 |   File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/fixtures/_fixtures/timeout.py", line 52, in signal_handler
  2014-08-15 13:46:09.158 |     raise TimeoutException()
  2014-08-15 13:46:09.158 | TimeoutException
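  The traceback shows the test stuck in wait_on_process_until_end, which
  just polls with time.sleep(0.1) until the process goes away. A
  hypothetical version of that kind of polling wait (illustrative names
  only, not nova's actual helper) looks like this, using os.kill with
  signal 0 to probe whether the pid still exists:

```python
import errno
import os
import time


def wait_on_pid_until_end(pid, interval=0.1):
    """Poll until `pid` no longer exists.

    Illustrative sketch of the loop the traceback shows; the real
    helper is in nova/tests/integrated/test_multiprocess_api.py.
    """
    while True:
        try:
            # Signal 0 delivers nothing; it only checks that the
            # process exists (raises OSError with ESRCH if it is gone).
            os.kill(pid, 0)
        except OSError as e:
            if e.errno == errno.ESRCH:  # no such process: it has exited
                return
            raise
        time.sleep(interval)
```

  One caveat worth noting with this pattern: a child that has exited but
  has not been reaped (a zombie) still answers os.kill(pid, 0), so a loop
  like this can spin until OS_TEST_TIMEOUT fires. That is one plausible
  way the SIGTERM test could time out, though the report does not
  confirm the cause.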

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357578/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : [email protected]
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
