Re: [openstack-dev] [nova] resize

2014-07-24 Thread fdsafdsafd
gtai" wrote: whether we already use like that ? https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L5156   From: fdsafdsafd [mailto:jaze...@163.com] Sent: Thursday, July 24, 2014 4:30 PM To: openstack-dev@lists.openstack.org Subject: [openstack-dev] [no

[openstack-dev] [nova] resize

2014-07-24 Thread fdsafdsafd
In resize, we convert the disk and drop its backing file. Should we first check whether we are on shared storage? If we are on shared storage, for example NFS, then we can keep the image in _base as the backing file, and the resize will be faster. The processing is at line 5132: https:/…
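
A minimal sketch of the idea (the on_shared_storage flag, the helper name, and the paths are illustrative; this is not nova's actual resize code):

    import subprocess

    def finish_resize_disk(disk_path, base_path, on_shared_storage):
        """Sketch: keep the _base backing file when storage is shared.

        If source and destination hosts see the same _base directory
        (e.g. over NFS), rewriting the backing-file pointer is enough,
        which avoids converting (flattening) the whole disk.
        """
        if on_shared_storage:
            # Cheap: only update the overlay's backing-file pointer.
            subprocess.check_call(
                ['qemu-img', 'rebase', '-u', '-b', base_path, disk_path])
        else:
            # Expensive: fold the backing chain into a standalone image.
            subprocess.check_call(
                ['qemu-img', 'convert', '-O', 'qcow2',
                 disk_path, disk_path + '.flat'])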

Re: [openstack-dev] [rally][nova] resize

2014-07-23 Thread fdsafdsafd
…just resized. So I really do not know why. > 2014-07-19 13:07 GMT+08:00 fdsafdsafd: >> Has anyone tested the concurrency of nova's resize? I found it has poor concurrency, and I do not know why. Most of the failed requests are rpc timeouts…

Re: [openstack-dev] [nova] how the scheduler handles messages?

2014-07-23 Thread fdsafdsafd
Greenthread switching can happen any time a monkeypatched call is made. Vish. On Jul 21, 2014, at 3:36 AM, fdsafdsafd wrote: Hello, recently I used rally to test boot-and-delete. I thought that one nova-scheduler would handle the messages sent to it one by one, but the log output shows otherwise. So can someone explain how nova-scheduler handles messages?…
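
A standalone demonstration of Vish's point (plain eventlet, not nova code): once eventlet.monkey_patch() is applied, any blocking call such as time.sleep or socket I/O yields to other greenthreads, so handlers interleave instead of running strictly one by one.

    import eventlet
    eventlet.monkey_patch()  # patches time.sleep, socket, etc.

    import time

    def handle(msg):
        print('start', msg)
        time.sleep(0.1)  # monkeypatched: yields to other greenthreads
        print('end', msg)

    pool = eventlet.GreenPool()
    for i in range(3):
        pool.spawn(handle, i)
    pool.waitall()
    # Typical output interleaves: start 0, start 1, start 2, end 0, ...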

[openstack-dev] [nova] how the scheduler handles messages?

2014-07-21 Thread fdsafdsafd
Hello, recently I used rally to test boot-and-delete. I thought that one nova-scheduler would handle the messages sent to it one by one, but the log output shows otherwise. So can someone explain how nova-scheduler handles messages? I read the code in nova.service, and found that one service will create f…
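
For illustration, the dispatch pattern looks roughly like this (a sketch, not the actual nova.service code): incoming messages are pulled off a queue and handed to a greenthread pool, so several requests can be in flight at once and their log lines interleave.

    import eventlet
    from eventlet import queue

    eventlet.monkey_patch()

    incoming = queue.Queue()
    pool = eventlet.GreenPool(size=64)

    def dispatch(msg):
        # Each message gets its own greenthread; strict one-by-one
        # ordering is lost as soon as a handler blocks and yields.
        print('scheduling request', msg)

    def consume_loop():
        while True:
            pool.spawn_n(dispatch, incoming.get())

    eventlet.spawn_n(consume_loop)
    for i in range(5):
        incoming.put({'request_id': i})
    eventlet.sleep(0.1)  # give the pool time to drain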

[openstack-dev] [rally][nova] boot-and-delete

2014-07-20 Thread fdsafdsafd
The boot-and-delete.json is:

    "NovaServers.boot_and_delete_server": [
        {
            "args": {
                "flavor": {"name": "ooo"},
                "image": {"name": "ubuntu1204"}
            },
    …
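
For reference, a complete minimal task file of this shape might look like the following (the runner and context values are illustrative, not taken from the original message):

    {
        "NovaServers.boot_and_delete_server": [
            {
                "args": {
                    "flavor": {"name": "ooo"},
                    "image": {"name": "ubuntu1204"}
                },
                "runner": {
                    "type": "constant",
                    "times": 100,
                    "concurrency": 10
                },
                "context": {
                    "users": {"tenants": 1, "users_per_tenant": 1}
                }
            }
        ]
    }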

Re: [openstack-dev] [rally][nova] resize

2014-07-19 Thread fdsafdsafd
…19, 2014 at 9:07 AM, fdsafdsafd wrote: Has anyone tested the concurrency of nova's resize? I found it has poor concurrency, and I do not know why. Most of the failed requests are rpc timeouts. The resize test I wrote for nova is boot-resize-confirm-delete.

[openstack-dev] [rally][nova] resize

2014-07-18 Thread fdsafdsafd
Has anyone tested the concurrency of nova's resize? I found it has poor concurrency, and I do not know why. Most of the failed requests are rpc timeouts. The resize test I wrote for nova is boot-resize-confirm-delete.
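
If the failures really are rpc timeouts, one common mitigation is to raise the rpc timeout in nova.conf while investigating (60 seconds is the usual default; 180 below is only an example value):

    [DEFAULT]
    # Seconds to wait for a reply to an rpc call before a
    # MessagingTimeout is raised.
    rpc_response_timeout = 180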

[openstack-dev] [rally] nova boot-and-delete

2014-07-17 Thread fdsafdsafd
Hello, in the boot-and-delete test, the iteration returns as soon as the nova api sends the delete request. But at that point the quota may not have been recycled yet, and that causes a problem. If I stress an openstack cloud with boot-and-list, I find that my cloud supports a concurrency of 65. But if I use that number…
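
One way to avoid racing the quota is to treat the delete as complete only when nova no longer reports the server (a sketch using python-novaclient; the timeout and interval values are arbitrary):

    import time

    from novaclient.exceptions import NotFound

    def delete_and_wait(client, server, timeout=120, interval=2):
        """Delete a server and block until nova no longer reports it.

        The DELETE call returns as soon as the request is accepted;
        the quota is only released once the instance is actually torn
        down, so firing the next boot immediately can hit over-quota.
        """
        client.servers.delete(server)
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                client.servers.get(server.id)
            except NotFound:
                return  # really gone; quota should be released
            time.sleep(interval)
        raise RuntimeError('server %s not deleted within %ss'
                           % (server.id, timeout))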