[openstack-dev] [rally] nova boot-and-delete

2014-07-17 Thread fdsafdsafd
Hello,
 In the boot-and-delete test, the iteration returns as soon as the Nova API sends the
delete request. But at that moment the quota may not yet be reclaimed, and that
causes a problem.
   If I stress an OpenStack cloud with boot-and-list, I find that my cloud supports
65 concurrent requests. But if I use that number for boot-and-delete, many requests
fail.
  For example, if the JSON is this:
  {
      "NovaServers.boot_and_delete_server": [
          {
              "args": {
                  "flavor": {
                      "name": "ooo"
                  },
                  "image": {
                      "name": "ubuntu1204"
                  }
              },
              "runner": {
                  "type": "constant",
                  "times": 8000,
                  "concurrency": 65
              },
              "context": {
                  "users": {
                      "tenants": 1,
                      "users_per_tenant": 1
                  }
              }
          }
      ]
  }


almost 130 requests failed with "No valid host". In my opinion, all the failed
requests failed because of the delayed quota reclamation.
Am I right, or is there another story?
Thanks
___
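
The race described above can be sketched with a small discrete-event simulation. This is not Rally or Nova code, and the CYCLE and LAG numbers are illustrative assumptions, not measured values; it only shows that when runner concurrency equals the quota and the delete call returns before the quota slot is reclaimed, some boots are rejected:

```python
import heapq

QUOTA = 65          # tenant may run at most this many instances
CONCURRENCY = 65    # runner concurrency, deliberately equal to the quota
ITERATIONS = 2000   # boot-and-delete cycles to attempt
CYCLE = 30.0        # seconds one boot-and-delete iteration takes (assumed)
LAG = 2.0           # delete returns, but the quota frees LAG seconds later (assumed)

workers = [0.0] * CONCURRENCY   # time at which each worker is next free
releases = []                   # min-heap: times at which quota slots free up
in_use = 0                      # quota slots held, including not-yet-reclaimed ones
failures = 0

for _ in range(ITERATIONS):
    i = min(range(CONCURRENCY), key=lambda k: workers[k])  # earliest free worker
    now = workers[i]
    while releases and releases[0] <= now:   # reclaim quota whose lag has elapsed
        heapq.heappop(releases)
        in_use -= 1
    if in_use < QUOTA:
        in_use += 1
        heapq.heappush(releases, now + CYCLE + LAG)  # quota frees after the lag
        workers[i] = now + CYCLE                     # delete returns here
    else:
        failures += 1            # boot rejected: quota still looks full
        workers[i] = now + 1.0   # worker moves on to its next iteration (assumed retry gap)

print(f"{failures} of {ITERATIONS} iterations failed on quota")
```

With LAG set to 0 the failure count drops to zero, which is the behavior the question above expects from an immediate quota reclaim.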
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] nova boot-and-delete

2014-07-17 Thread Boris Pavlovic
Hi,

I don't think this is related to quotas. It seems more likely related to the
fixed IPs (which are not released).

Could you try setting unlimited quotas for your tenant? To do that, add a new
context to the task; it will look like:

  "context": {
      "users": {
          "tenants": 1,
          "users_per_tenant": 1
      },
      "quotas": {
          "nova": {
              "cores": -1,
              ... here the other values from
              https://github.com/stackforge/rally/blob/master/rally/benchmark/context/quotas.py#L32-L80
          }
      }
  }
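
Putting the quotas context together with the original task, a complete task file might look like the sketch below. The quota key names are assumptions taken from the quotas.py file linked above, and -1 means "unlimited":

```json
{
    "NovaServers.boot_and_delete_server": [
        {
            "args": {
                "flavor": {"name": "ooo"},
                "image": {"name": "ubuntu1204"}
            },
            "runner": {
                "type": "constant",
                "times": 8000,
                "concurrency": 65
            },
            "context": {
                "users": {
                    "tenants": 1,
                    "users_per_tenant": 1
                },
                "quotas": {
                    "nova": {
                        "instances": -1,
                        "cores": -1,
                        "ram": -1,
                        "fixed_ips": -1
                    }
                }
            }
        }
    ]
}
```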


Best regards,
Boris Pavlovic


On Fri, Jul 18, 2014 at 8:00 AM, fdsafdsafd jaze...@163.com wrote:

 [original message snipped]




Re: [openstack-dev] [rally] nova boot-and-delete

2014-07-17 Thread Lingxian Kong
Could you post some nova logs here? I'm happy to help.
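
For reference, these are the kinds of grep patterns that pull the relevant failures out of the nova logs. The log paths are deployment-dependent assumptions, so the runnable demo below fabricates a sample line instead of reading a real log:

```shell
#!/bin/sh
# On a real controller node you would point these at the actual logs, e.g.:
#   grep -ci "no valid host" /var/log/nova/nova-scheduler.log
#   grep -ci "quota"         /var/log/nova/nova-api.log
# Here we fabricate one scheduler line so the command runs anywhere.
log=$(mktemp)
printf '2014-07-18 12:01:02 WARNING nova.scheduler NoValidHost: No valid host was found.\n' > "$log"

grep -ci "no valid host" "$log"   # -c counts matching lines, -i ignores case; prints 1

rm -f "$log"
```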

2014-07-18 12:00 GMT+08:00 fdsafdsafd jaze...@163.com:
 [original message snipped]




-- 
Regards!
---
Lingxian Kong
