gtai" wrote:
whether we already use like that ?
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L5156
From: fdsafdsafd [mailto:jaze...@163.com]
Sent: Thursday, July 24, 2014 4:30 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [no
In resize, we convert the disk and drop the backing file. Should we check
whether we are on shared storage? If we are on shared storage, for example
NFS, then we can use the image in _base as the backing file, and the resize
will be faster.
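For illustration only (this is not nova code; the helper below is a made-up
sketch of the idea): on shared storage the overlay could keep the cached
_base image as its backing file instead of being flattened.

    import os
    import subprocess

    def finish_resize_disk(disk_path, base_image_path, on_shared_storage):
        if on_shared_storage and os.path.exists(base_image_path):
            # keep _base as the backing file; avoids copying the whole disk
            subprocess.check_call(
                ['qemu-img', 'rebase', '-b', base_image_path, disk_path])
        else:
            # current behaviour: flatten the disk into a standalone image
            tmp = disk_path + '.converted'
            subprocess.check_call(
                ['qemu-img', 'convert', '-O', 'qcow2', disk_path, tmp])
            os.rename(tmp, disk_path)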
The processing at line 5132
https:/
just resized.
So, I really do not know why.
>
> 2014-07-19 13:07 GMT+08:00 fdsafdsafd:
>> Did anyone test the concurrency of nova's resize? I found it has poor
>> concurrency, and I do not know why. I found most of the failed reque
Greenthread switching can happen
any time a monkeypatched call is made.
Vish
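For illustration only, not from nova: after eventlet.monkey_patch(), a
blocking call such as time.sleep yields to another greenthread, so the two
workers below interleave instead of running one after the other.

    import eventlet
    eventlet.monkey_patch()

    import time

    def worker(name):
        for i in range(3):
            print('%s %d' % (name, i))
            time.sleep(0)  # monkeypatched: yields to the other greenthread

    pool = eventlet.GreenPool()
    pool.spawn(worker, 'a')
    pool.spawn(worker, 'b')
    pool.waitall()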
On Jul 21, 2014, at 3:36 AM, fdsafdsafd wrote:
Hello,
recently I used rally to test boot-and-delete. I thought that one
nova-scheduler would handle the messages sent to it one by one, but the log
output shows otherwise. So can someone explain how nova-scheduler handles
messages? I read the code in nova.service and found that one service will create f
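A rough sketch of the mechanism (not nova's actual code, written with today's
module names, and SchedulerEndpoint is made up): the service builds an RPC
server with an eventlet executor, so each incoming message is dispatched to
its own greenthread rather than being handled strictly one by one.

    from oslo_config import cfg
    import oslo_messaging

    class SchedulerEndpoint(object):
        # made-up endpoint; the real scheduler endpoints live in nova.scheduler
        def select_destinations(self, ctxt, request_spec):
            # scheduling work happens here; any blocking I/O yields
            return ['host-a']

    transport = oslo_messaging.get_transport(cfg.CONF)
    target = oslo_messaging.Target(topic='scheduler', server='controller-1')
    server = oslo_messaging.get_rpc_server(transport, target,
                                           [SchedulerEndpoint()],
                                           executor='eventlet')
    server.start()
    server.wait()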
The boot-and-delete.json is:
    "NovaServers.boot_and_delete_server": [
        {
            "args": {
                "flavor": {
                    "name": "ooo"
                },
                "image": {
                    "name": "ubuntu1204"
                }
            },
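For reference, a complete task file of that shape would look roughly like the
following; the runner and context values are placeholders for illustration,
not the ones actually used.

    {
        "NovaServers.boot_and_delete_server": [
            {
                "args": {
                    "flavor": {
                        "name": "ooo"
                    },
                    "image": {
                        "name": "ubuntu1204"
                    }
                },
                "runner": {
                    "type": "constant",
                    "times": 10,
                    "concurrency": 5
                },
                "context": {
                    "users": {
                        "tenants": 1,
                        "users_per_tenant": 1
                    }
                }
            }
        ]
    }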
On Jul 19, 2014 at 9:07 AM, fdsafdsafd wrote:
Did anyone test the concurrency of nova's resize? I found it has poor
concurrency, and I do not know why. I found that most of the failed requests are rpc timeouts.
The resize test I wrote for nova is boot-resize-confirm-delete.
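One thing worth checking in that situation (just a guess, not a confirmed
fix): the RPC reply timeout in nova.conf, which defaults to 60 seconds.

    [DEFAULT]
    # seconds to wait for an RPC reply before raising MessagingTimeout
    rpc_response_timeout = 180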
Hello,
In the boot-and-delete test, the item returns as soon as the nova API sends the
delete request. But at that point the quota may not have been reclaimed yet, and
that causes a problem.
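For illustration, one way to avoid this (the client setup below is a
placeholder, not real credentials) is to poll until the server is really gone
before starting the next iteration, so its quota has actually been released.

    import time
    from novaclient import client
    from novaclient import exceptions

    nova = client.Client('2', 'user', 'password', 'project',
                         'http://keystone:5000/v2.0')

    def delete_and_wait(server, timeout=120):
        nova.servers.delete(server)
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                nova.servers.get(server.id)
            except exceptions.NotFound:
                return  # really deleted; quota should be released now
            time.sleep(2)
        raise RuntimeError('server %s was not deleted in time' % server.id)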
If I stress an OpenStack cloud with boot-and-list, I find that my cloud supports
a concurrency of 65. But if I use that numbe