[openstack-dev] [rally] nova boot-and-delete

2014-07-17 Thread fdsafdsafd
Hello. In the boot-and-delete test, the iteration returns as soon as the Nova API accepts the delete request, but at that moment the quota may not yet have been recycled. That causes a problem: when I stress an OpenStack cloud with boot-and-list, I find the cloud supports 65 concurrent requests, but if I use that
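A common workaround for this race is to poll until the server is really gone before treating its quota as released. The helper below is only a sketch: `get_status` is a hypothetical callable (a real client such as novaclient would raise a NotFound exception instead of the `LookupError` assumed here):

```python
import time

def wait_for_deletion(get_status, timeout=60, interval=1.0):
    """Poll a status source until the server no longer exists.

    get_status is assumed to return a status string such as "ACTIVE"
    or "DELETING", and to raise LookupError once the server is gone
    (hypothetical interface standing in for a real Nova client).
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            status = get_status()
        except LookupError:
            return True   # server is really gone; quota is released
        if status == "ERROR":
            raise RuntimeError("delete failed")
        time.sleep(interval)
    return False          # still present: quota may not be recycled yet
```

With this in the delete path, the next iteration only starts once the quota it needs is actually free, which avoids spurious over-quota failures at high concurrency.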

[openstack-dev] [rally][nova] resize

2014-07-18 Thread fdsafdsafd
Did someone test the concurrency of Nova's resize? I found it has poor concurrency, and I do not know why. Most of the failed requests are RPC timeouts. The resize test I wrote for Nova is boot-resize-confirm-delete.
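One knob that is often implicated in RPC timeouts under load is oslo.messaging's `rpc_response_timeout` (60 seconds by default in Nova of this era). Raising it is a common first diagnostic step, though it only masks, rather than fixes, a saturated nova-compute or nova-conductor; the value below is illustrative only:

```
# nova.conf (sketch; 180 is an arbitrary example value)
[DEFAULT]
rpc_response_timeout = 180
```

If requests still time out with a generous value, the bottleneck is usually the service consuming the RPC queue, not the messaging layer itself.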

Re: [openstack-dev] [rally][nova] resize

2014-07-19 Thread fdsafdsafd
, 2014 at 9:07 AM, fdsafdsafd jaze...@163.com wrote: Did someone test the concurrency of Nova's resize? I found it has poor concurrency, and I do not know why. Most of the failed requests are RPC timeouts. The resize test I wrote for Nova is boot-resize-confirm-delete

[openstack-dev] [rally][nova] boot-and-delete

2014-07-20 Thread fdsafdsafd
The boot-and-delete.json is:

    NovaServers.boot_and_delete_server: [
        {
            args: {
                flavor: { name: ooo },
                image: { name: ubuntu1204 },
            },
            runner: {
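The archive preview is cut off at the runner section. For illustration only (the original runner settings are not shown above, and the times/concurrency values here are hypothetical), a Rally task file with a constant-concurrency runner typically looks like this:

```json
{
    "NovaServers.boot_and_delete_server": [
        {
            "args": {
                "flavor": {"name": "ooo"},
                "image": {"name": "ubuntu1204"}
            },
            "runner": {
                "type": "constant",
                "times": 100,
                "concurrency": 10
            }
        }
    ]
}
```

The `concurrency` value is what determines how many boot-and-delete iterations run in parallel, and therefore how hard the quota-recycling race described in the first message is hit.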

[openstack-dev] [nova] how does the scheduler handle messages?

2014-07-21 Thread fdsafdsafd
Hello, recently I used Rally to test boot-and-delete. I thought that one nova-scheduler would handle the messages sent to it one by one, but the log output shows otherwise. Can someone explain how nova-scheduler handles messages? I read the code in nova.service and found that one service will create

Re: [openstack-dev] [nova] how does the scheduler handle messages?

2014-07-23 Thread fdsafdsafd
at once. Greenthread switching can happen any time a monkeypatched call is made. Vish On Jul 21, 2014, at 3:36 AM, fdsafdsafd jaze...@163.com wrote: Hello, recently I used Rally to test boot-and-delete. I thought that one nova-scheduler would handle the messages sent to it one by one, but the log
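The effect Vish describes can be illustrated without eventlet itself. The sketch below uses stdlib asyncio purely as an analogy: each `await` is a yield point, just as a monkeypatched socket or DB call is under eventlet, so a "single-threaded" scheduler still interleaves several messages rather than finishing them one by one:

```python
import asyncio

log = []

async def handle(msg_id):
    # Simulated RPC handler. The await below is a yield point,
    # analogous to a monkeypatched blocking call under eventlet.
    log.append(f"start {msg_id}")
    await asyncio.sleep(0)  # control returns to the event loop here
    log.append(f"end {msg_id}")

async def main():
    # Dispatch two "messages" concurrently, as the RPC layer does
    # by spawning a greenthread per incoming message.
    await asyncio.gather(handle(1), handle(2))

asyncio.run(main())
print(log)  # start 1, start 2, end 1, end 2 -- the handlers interleave
```

This is why the scheduler's log lines for different requests appear interleaved: processing of one message is suspended at every monkeypatched call while another message makes progress.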

Re: [openstack-dev] [rally][nova] resize

2014-07-23 Thread fdsafdsafd
. 2014-07-19 13:07 GMT+08:00 fdsafdsafd jaze...@163.com: Did someone test the concurrency of Nova's resize? I found it has poor concurrency, and I do not know why. Most of the failed requests are RPC timeouts. The resize test I wrote for Nova is boot-resize-confirm-delete

[openstack-dev] [nova]resize

2014-07-24 Thread fdsafdsafd
In resize, we convert the disk and peel off its backing file; should we check whether we are on shared storage? If we are on shared storage, for example NFS, then we can keep the image in _base as the backing file, and the resize will be faster. The processing in line 5132
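The proposal above can be sketched as a decision between two qemu-img operations. This is not Nova's actual code path, just an illustration of the idea (the helper name and paths are hypothetical): on shared storage the destination host already sees the cached base image, so the copy-on-write layout can be kept and only the small overlay needs to move; otherwise the disk must be flattened into a self-contained file first.

```python
def resize_disk_plan(shared_storage, base_image, disk):
    """Return the qemu-img command the proposed resize path would run.

    shared_storage: whether source and destination hosts share the
    instance store (e.g. NFS); base_image: cached image under _base.
    Hypothetical helper for illustration only.
    """
    if shared_storage:
        # Keep the backing file: only the overlay is copied, so the
        # expensive convert step is skipped entirely.
        return ["qemu-img", "rebase", "-b", base_image, disk]
    # No shared storage: flatten the chain so the destination host
    # does not need access to the backing file.
    return ["qemu-img", "convert", "-O", "qcow2", disk, disk + ".flat"]
```

The speedup comes from the shared-storage branch doing no bulk data copy at all, which is exactly why the poster expects resize to be faster there.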

Re: [openstack-dev] [nova]resize

2014-07-24 Thread fdsafdsafd
Do we already use it like that? https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L5156 From: fdsafdsafd