[openstack-dev] [rally] nova boot-and-delete

2014-07-17 Thread fdsafdsafd
Hello,
 In the boot-and-delete test, the iteration returns as soon as the nova API sends 
the delete request. But at that point the quota may not have been recycled yet, 
and that causes a problem.
   If I stress an OpenStack cloud with boot-and-list, I find that my cloud supports 
a concurrency of 65. But if I use that number for boot-and-delete, many requests 
fail.
  For example, with this JSON:
{
    "NovaServers.boot_and_delete_server": [
        {
            "args": {
                "flavor": {
                    "name": "ooo"
                },
                "image": {
                    "name": "ubuntu1204"
                }
            },
            "runner": {
                "type": "constant",
                "times": 8000,
                "concurrency": 65
            },
            "context": {
                "users": {
                    "tenants": 1,
                    "users_per_tenant": 1
                }
            }
        }
    ]
}


almost 130 requests fail with "no valid host". In my opinion, all of the failed 
requests fail because of the delayed quota recycling.
Am I right? Or is there another story?
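One way I could check this (a sketch only, not part of Rally; the credentials, endpoint, and server id below are placeholders) is to poll the tenant's absolute limits right after a delete and see how long the quota takes to be released:

# Sketch only, not part of Rally. Placeholders: credentials, endpoint, server id.
import time
from novaclient import client

nova = client.Client('2', 'demo', 'secret', 'demo',
                     'http://controller:5000/v2.0')

def instances_used():
    return dict((l.name, l.value)
                for l in nova.limits.get().absolute)['totalInstancesUsed']

used_before = instances_used()
nova.servers.delete('SERVER_ID')        # returns as soon as the API accepts it
start = time.time()
while instances_used() >= used_before:  # wait until the quota usage actually drops
    time.sleep(1)
print('quota released after %.1fs' % (time.time() - start))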
Thanks
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [rally][nova] resize

2014-07-18 Thread fdsafdsafd
Did someone test the concurrency of nova's resize? I found that it has poor 
concurrency, and I do not know why. Most of the failed requests are RPC timeouts.
I wrote the resize test for nova as boot-resize-confirm-delete.





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally][nova] resize

2014-07-19 Thread fdsafdsafd
OK, I will try.






At 2014-07-19 04:30:49, Boris Pavlovic bo...@pavlovic.me wrote:

Hi, 


Could you please contribute this benchmark to Rally, so that others will be able to 
repeat the experiment locally? (Here are the instructions: 
https://wiki.openstack.org/wiki/Rally/Develop#How_to_contribute )


And as far as I know, nobody (on the Rally team) has benchmarked this yet.


As well, I hope that we will soon get OSprofiler 
(https://github.com/stackforge/osprofiler) upstream: it will answer questions 
like where the bottleneck is.






Best regards,
Boris Pavlovic 



On Sat, Jul 19, 2014 at 9:07 AM, fdsafdsafd jaze...@163.com wrote:

Did someone test the concurrency of nova's resize? I found that it has poor 
concurrency, and I do not know why. Most of the failed requests are RPC timeouts.
I wrote the resize test for nova as boot-resize-confirm-delete.









___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





[openstack-dev] [rally][nova] boot-and-delete

2014-07-20 Thread fdsafdsafd
The boot-and-delete.json is:
{
    "NovaServers.boot_and_delete_server": [
        {
            "args": {
                "flavor": {
                    "name": "ooo"
                },
                "image": {
                    "name": "ubuntu1204"
                }
            },
            "runner": {
                "type": "constant",
                "times": 50,
                "concurrency": 21
            },
            "context": {
                "users": {
                    "tenants": 1,
                    "users_per_tenant": 1
                }
            }
        }
    ]
}


It looks like Rally starts a new iteration before the scheduler knows that the 
resources have been given back to the compute node. Has anyone seen this before?
Why is the scheduler late? I set the resource tracker period to 1 s, but the 
problem still appeared.
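For reference, here is a rough reconstruction (my own sketch, not nova code) of the arithmetic behind the ram filter log that follows; the 1.5 allocation ratio and the 2048 MB flavor size are inferred from the log values, so treat them as assumptions:

# Rough reconstruction of the RamFilter check as I understand it.
# Assumptions inferred from the log: ratio 1.5, host RAM 32130 MB, flavor 2048 MB.
ram_allocation_ratio = 1.5  # inferred: memory_mb_limit 48195.0 / 32130

def ram_filter_passes(free_ram_mb, total_ram_mb, requested_ram_mb):
    memory_mb_limit = total_ram_mb * ram_allocation_ratio
    used_ram_mb = total_ram_mb - free_ram_mb
    usable_ram = memory_mb_limit - used_ram_mb
    return usable_ram >= requested_ram_mb

# first entry: ram:31618 -> usable_ram 47683.0, plenty of room for 2048 MB
print(ram_filter_passes(31618, 32130, 2048))   # True
# last entries: ram:-15486 -> usable_ram 579.0, not enough for another 2048 MB
print(ram_filter_passes(-15486, 32130, 2048))  # False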




The ram filter log from the scheduler is:
ram:31618 disk:339968 io_ops:0 instances:0:now:2014-07-21 
10:07:19.726181:memory_mb_limit:48195.0:used_ram_mb:512:usable_ram:47683.0 
host_passes /
ram:29570 disk:329728 io_ops:1 instances:1:now:2014-07-21 
10:07:19.740639:memory_mb_limit:48195.0:used_ram_mb:2560:usable_ram:45635.0 
host_passes /
ram:27522 disk:319488 io_ops:2 instances:2:now:2014-07-21 
10:07:19.798188:memory_mb_limit:48195.0:used_ram_mb:4608:usable_ram:43587.0 
host_passes /
ram:25474 disk:309248 io_ops:3 instances:3:now:2014-07-21 
10:07:19.811062:memory_mb_limit:48195.0:used_ram_mb:6656:usable_ram:41539.0 
host_passes /
ram:23426 disk:299008 io_ops:4 instances:4:now:2014-07-21 
10:07:19.822793:memory_mb_limit:48195.0:used_ram_mb:8704:usable_ram:39491.0 
host_passes /
ram:21378 disk:288768 io_ops:5 instances:5:now:2014-07-21 
10:07:19.833709:memory_mb_limit:48195.0:used_ram_mb:10752:usable_ram:37443.0 
host_passes /
ram:19330 disk:278528 io_ops:6 instances:6:now:2014-07-21 
10:07:19.858253:memory_mb_limit:48195.0:used_ram_mb:12800:usable_ram:35395.0 
host_passes /
ram:17282 disk:268288 io_ops:7 instances:7:now:2014-07-21 
10:07:22.230625:memory_mb_limit:48195.0:used_ram_mb:14848:usable_ram:33347.0 
host_passes /
ram:15234 disk:258048 io_ops:8 instances:8:now:2014-07-21 
10:07:22.444355:memory_mb_limit:48195.0:used_ram_mb:16896:usable_ram:31299.0 
host_passes /
ram:13186 disk:247808 io_ops:9 instances:9:now:2014-07-21 
10:07:22.456158:memory_mb_limit:48195.0:used_ram_mb:18944:usable_ram:29251.0 
host_passes /
ram:11138 disk:237568 io_ops:10 instances:10:now:2014-07-21 
10:07:22.466355:memory_mb_limit:48195.0:used_ram_mb:20992:usable_ram:27203.0 
host_passes /
ram:9090 disk:227328 io_ops:11 instances:11:now:2014-07-21 
10:07:22.476552:memory_mb_limit:48195.0:used_ram_mb:23040:usable_ram:25155.0 
host_passes /
ram:7042 disk:217088 io_ops:12 instances:12:now:2014-07-21 
10:07:22.486794:memory_mb_limit:48195.0:used_ram_mb:25088:usable_ram:23107.0 
host_passes /
ram:4994 disk:206848 io_ops:13 instances:13:now:2014-07-21 
10:07:22.780476:memory_mb_limit:48195.0:used_ram_mb:27136:usable_ram:21059.0 
host_passes /
ram:2946 disk:196608 io_ops:14 instances:14:now:2014-07-21 
10:07:22.791989:memory_mb_limit:48195.0:used_ram_mb:29184:usable_ram:19011.0 
host_passes /
ram:898 disk:186368 io_ops:15 instances:15:now:2014-07-21 
10:07:22.944792:memory_mb_limit:48195.0:used_ram_mb:31232:usable_ram:16963.0 
host_passes /
ram:-1150 disk:176128 io_ops:16 instances:16:now:2014-07-21 
10:07:22.955335:memory_mb_limit:48195.0:used_ram_mb:33280:usable_ram:14915.0 
host_passes /
ram:-3198 disk:165888 io_ops:17 instances:17:now:2014-07-21 
10:07:22.965552:memory_mb_limit:48195.0:used_ram_mb:35328:usable_ram:12867.0 
host_passes /
ram:-5246 disk:155648 io_ops:18 instances:18:now:2014-07-21 
10:07:22.975790:memory_mb_limit:48195.0:used_ram_mb:37376:usable_ram:10819.0 
host_passes /
ram:-7294 disk:145408 io_ops:19 instances:19:now:2014-07-21 
10:07:22.986395:memory_mb_limit:48195.0:used_ram_mb:39424:usable_ram:8771.0 
host_passes /
ram:-9342 disk:135168 io_ops:20 instances:20:now:2014-07-21 
10:07:22.996581:memory_mb_limit:48195.0:used_ram_mb:41472:usable_ram:6723.0 
host_passes /
ram:-11390 disk:268288 io_ops:12 instances:21:now:2014-07-21 
10:08:30.170616:memory_mb_limit:48195.0:used_ram_mb:43520:usable_ram:4675.0 
host_passes /
ram:-13438 disk:258048 io_ops:13 instances:22:now:2014-07-21 
10:08:30.188967:memory_mb_limit:48195.0:used_ram_mb:45568:usable_ram:2627.0 
host_passes /
ram:-15486 disk:268288 io_ops:7 instances:23:now:2014-07-21 
10:09:58.839181:memory_mb_limit:48195.0:used_ram_mb:47616:usable_ram:579.0 
host_passes /
ram:-15486 disk:268288 io_ops:7 instances:23:now:2014-07-21 
10:09:59.432281:memory_mb_limit:48195.0:used_ram_mb:47616:usable_ram:579.0 
host_passes /
ram:-15486 disk:268288 io_ops:7 instances:23:now:2014-07-21 
10:10:00.658376:memory_mb_limit:48195.0:used_ram_mb:47616:usable_ram:579.0 
host_passes /
ram:-15486 disk:268288 io_ops:7 instances:23:now:2014-07-21 
10:10:01.138658:memory_mb_limit:48195.0:used_ram_mb:47616:usable_ram:579.0 
host_passes /
ram:-15486 disk:268288 io_ops:7 

[openstack-dev] [nova] how does the scheduler handle messages?

2014-07-21 Thread fdsafdsafd
Hello,
   Recently I used Rally to test boot-and-delete. I thought that one 
nova-scheduler would handle the messages sent to it one by one, but the log output 
shows otherwise. Can someone explain how nova-scheduler handles messages? I read 
the code in nova.service and found that a service creates a fanout consumer, and 
that all fanout messages are consumed in one thread. So I wonder: how does 
nova-scheduler handle messages when there are many messages cast to the 
scheduler's run_instance?
Thanks a lot.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how does the scheduler handle messages?

2014-07-23 Thread fdsafdsafd
Thanks. It really helps. Thanks a lot.
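For my own understanding, here is a minimal sketch (my illustration, not nova code) of the dispatch model Vish describes below; the handler and messages are placeholders, and the pool size stands in for rpc_thread_pool_size:

# Minimal sketch of the dispatch model: one consumer loop hands each message
# to a green thread pool, so many handlers can be in flight at once.
import eventlet
eventlet.monkey_patch()

import time

pool = eventlet.GreenPool(size=64)  # cf. rpc_thread_pool_size

def handle(msg):
    # any monkeypatched call (sleep, socket I/O, DB access) yields control,
    # so other greenthreads run while this one waits
    time.sleep(0.1)
    print('handled %s' % msg)

for msg in range(10):               # stand-in for the single AMQP consumer loop
    pool.spawn_n(handle, msg)

pool.waitall()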



At 2014-07-23 02:45:40, Vishvananda Ishaya vishvana...@gmail.com wrote:
Workers can consume more than one message at a time due to 
eventlet/greenthreads. The conf option rpc_thread_pool_size determines how many 
messages can theoretically be handled at once. Greenthread switching can happen 
any time a monkeypatched call is made.


Vish


On Jul 21, 2014, at 3:36 AM, fdsafdsafd jaze...@163.com wrote:


Hello,
   Recently I used Rally to test boot-and-delete. I thought that one 
nova-scheduler would handle the messages sent to it one by one, but the log output 
shows otherwise. Can someone explain how nova-scheduler handles messages? I read 
the code in nova.service and found that a service creates a fanout consumer, and 
that all fanout messages are consumed in one thread. So I wonder: how does 
nova-scheduler handle messages when there are many messages cast to the 
scheduler's run_instance?
Thanks a lot.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [rally][nova] resize

2014-07-23 Thread fdsafdsafd


At 2014-07-23 00:09:09, Lingxian Kong anlin.k...@gmail.com wrote:
Maybe you are using local storage for your VM system volume backend;
according to the 'resize' implementation, 'rsync' and 'scp' will be
executed during the resize process, which will be the bottleneck.
No, I use NFS. I found that resize converts the qcow2 disk to raw and then 
converts it back to qcow2, and I do not know why. Why don't we resize the qcow2 
directly?
I tested Havana, and in 
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py
it also does this.
The comment at line 5221 of the code from the link above says:

 If we have a non partitioned image that we can extend
 then ensure we're in 'raw' format so we can extend file system.

But our colleague tested that we can resize the qcow2 even if we have a 
non-partitioned image. He can resize an image that was just resized.
So I really do not know why.
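For what it's worth, here is a hedged sketch (my illustration, not nova code) of roughly what our colleague's test amounts to: growing a qcow2 in place with qemu-img. The path is a placeholder, and the guest file system still has to be extended separately:

# Hedged sketch, not nova code: grow a qcow2 image in place with qemu-img.
# The path is a placeholder; the file system inside the guest still has to be
# extended separately (e.g. at first boot).
import subprocess

def grow_qcow2(path, add_gb):
    subprocess.check_call(['qemu-img', 'resize', path, '+%dG' % add_gb])

grow_qcow2('/var/lib/nova/instances/<uuid>/disk', 10)  # placeholder path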


 
2014-07-19 13:07 GMT+08:00 fdsafdsafd jaze...@163.com:
 Did someone test the concurrency of nova's resize? I found that it has poor
 concurrency, and I do not know why. Most of the failed requests are RPC
 timeouts.
 I wrote the resize test for nova as boot-resize-confirm-delete.






 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Regards!
---
Lingxian Kong

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova]resize

2014-07-24 Thread fdsafdsafd
In resize, we convert the disk and peel off its backing file. Should we check 
whether we are on shared storage? If we are on shared storage, for example NFS, 
then we can keep the image in _base as the backing file, and the resize will be 
faster.


The processing is at line 5132 of
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py
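To illustrate the check this proposal needs, here is a hedged sketch (my illustration, not an existing nova helper) that probes for shared storage by creating a marker file on the source and checking for it on the destination over ssh; the host name and instance directory are placeholders:

# Hedged sketch, not the existing nova helper: detect shared storage by
# dropping a marker file in the instance directory on the source and checking
# for it on the destination over ssh. Host and path are placeholders.
import os
import subprocess
import uuid

def looks_shared(inst_base, dest_host):
    probe = os.path.join(inst_base, '.shared-probe-%s' % uuid.uuid4().hex)
    open(probe, 'w').close()
    try:
        # exit code 0 means the file is visible on the destination too
        rc = subprocess.call(['ssh', dest_host, 'test', '-e', probe])
        return rc == 0
    finally:
        os.unlink(probe)

# e.g. skip the backing-file flattening when looks_shared(inst_base, dest) is True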




Thanks
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova]resize

2014-07-24 Thread fdsafdsafd

No.
Before L5156 we convert it from qcow2 to qcow2, which strips the backing file.
I think we should write it like this:

if info['type'] == 'qcow2' and info['backing_file']:
    if shared_storage:
        utils.execute('cp', from_path, img_path)
    else:
        tmp_path = from_path + "_rbase"
        # merge the backing file into the image
        utils.execute('qemu-img', 'convert', '-f', 'qcow2',
                      '-O', 'qcow2', from_path, tmp_path)
        libvirt_utils.copy_image(tmp_path, img_path, host=dest)
        utils.execute('rm', '-f', tmp_path)
else:  # raw or qcow2 with no backing file
    libvirt_utils.copy_image(from_path, img_path, host=dest)



At 2014-07-24 05:02:39, Tian, Shuangtai shuangtai.t...@intel.com wrote:
 









Don't we already do that here?
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L5156

 

From: fdsafdsafd [mailto:jaze...@163.com]
Sent: Thursday, July 24, 2014 4:30 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova]resize

 





In resize, we convert the disk and peel off its backing file. Should we check 
whether we are on shared storage? If we are on shared storage, for example NFS, 
then we can keep the image in _base as the backing file, and the resize will be 
faster.

The processing is at line 5132 of
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py

Thanks



 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev