Public bug reported:

screen-n-sch.txt:
2014-02-24 22:43:41.502 WARNING nova.scheduler.filters.compute_filter [req-ff34935c-c472-47df-ac4a-1286a7944b17 demo demo]  ram:6577 disk:75776 io_ops:9 instances:14 has not been heard from in a while
2014-02-24 22:43:41.502 INFO nova.filters [req-ff34935c-c472-47df-ac4a-1286a7944b17 demo demo] Filter ComputeFilter returned 0 hosts
2014-02-24 22:43:41.503 WARNING nova.scheduler.driver [req-ff34935c-c472-47df-ac4a-1286a7944b17 demo demo] [instance: b5a607f0-5280-4033-ba8f-087884d41d28] Setting instance to ERROR state.
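
The "has not been heard from in a while" warning means the scheduler considers the compute service's heartbeat stale, so ComputeFilter treats the host as down and no hosts pass. Below is a minimal sketch of that kind of staleness check, assuming the default service_down_time of 60 seconds; this is an illustration only, not the actual nova servicegroup code:

from datetime import datetime, timedelta

# Illustrative only: a host is considered down when its last heartbeat
# is older than service_down_time (nova's default is 60 seconds).
SERVICE_DOWN_TIME = 60

def service_is_up(last_heartbeat, grace=SERVICE_DOWN_TIME):
    """Return True if the last heartbeat is recent enough."""
    return (datetime.utcnow() - last_heartbeat) <= timedelta(seconds=grace)

# Under load n-cpu misses its report interval, the heartbeat goes stale,
# every host fails the check, and the filter returns 0 hosts.
print(service_is_up(datetime.utcnow() - timedelta(seconds=120)))  # False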

The tempest stress runner with the following example config can cause this kind of load:
https://github.com/openstack/tempest/blob/master/tempest/stress/etc/server-create-destroy-test.json

./tempest/stress/run_stress.py -t tempest/stress/etc/server-create-destroy-test.json -n 1024 -S

The example config uses only 8 threads. If you would like to increase the
number of threads, you may need to increase the demo user's quota
or enable use_tenant_isolation.
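
For example, the demo tenant's quota can be raised with the nova client before a larger run (the limits below are arbitrary; substitute the real tenant id):

nova quota-update --instances 64 --cores 64 --ram 65536 <demo_tenant_id>

With tenant isolation enabled instead, tempest should create throwaway tenants per worker, which sidesteps the single demo quota.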

tempest.log:
INFO: Statistics (per process):
INFO:  Process 0 (ServerCreateDestroyTest): Run 103 actions (0 failed)
INFO:  Process 1 (ServerCreateDestroyTest): Run 101 actions (0 failed)
INFO:  Process 2 (ServerCreateDestroyTest): Run 101 actions (0 failed)
INFO:  Process 3 (ServerCreateDestroyTest): Run 100 actions (0 failed)
INFO:  Process 4 (ServerCreateDestroyTest): Run 102 actions (2 failed)
INFO:  Process 5 (ServerCreateDestroyTest): Run 102 actions (1 failed)
INFO:  Process 6 (ServerCreateDestroyTest): Run 101 actions (0 failed)
INFO:  Process 7 (ServerCreateDestroyTest): Run 101 actions (0 failed)
INFO: Summary:
INFO - 2014-02-24 22:44:22,713.713 INFO: Run 811 actions (3 failed)

** Affects: nova
     Importance: Undecided
         Status: New

https://bugs.launchpad.net/bugs/1284708

Title:
  n-cpu under load does not update its status

Status in OpenStack Compute (Nova):
  New
