At 2014-07-23 00:09:09, "Lingxian Kong" <[email protected]> wrote:
>Maybe you are using local storage for your vm system volume backend;
>according to the 'resize' implementation, 'rsync' and 'scp' will be
>executed during the resize process, which will be the bottleneck.

No, I use NFS. I found that the resize converts the qcow2 disk to raw and then
converts it back to qcow2, and I do not know why. Why don't we resize the qcow2
directly? I tested Havana, and master does the same thing:
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py
The comment around line 5221 in the code at the link above says:
"If we have a non partitioned image that we can extend then ensure we're in
'raw' format so we can extend file system."
But a colleague of mine tested that we can resize the qcow2 even when the image
is non-partitioned; he could resize an image that had just been resized. So I
really do not know why the conversion to raw is needed.

>2014-07-19 13:07 GMT+08:00 fdsafdsafd <[email protected]>:
>> Did someone test the concurrency of nova's resize? I found it has poor
>> concurrency, and I do not know why. Most of the failed requests are RPC
>> timeouts.
>> The resize test I wrote for nova is boot-resize-confirm-delete.
>
>--
>Regards!
>-----------------------------------
>Lingxian Kong
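
For reference, here is a minimal sketch (illustration only, not the actual nova
code; the path and sizes are made-up examples) of the two approaches under
discussion: growing a qcow2 image in place with 'qemu-img resize', versus the
qcow2 -> raw -> resize -> qcow2 round trip that lets the file system of a
non-partitioned image be extended on the host:

    #!/usr/bin/env python
    # Illustration only -- not the nova libvirt driver code.
    # The path and sizes below are made-up examples.
    import subprocess

    DISK = '/var/lib/nova/instances/example/disk'  # hypothetical qcow2 image

    def resize_qcow2_in_place(new_size='20G'):
        # Grow the virtual size of the qcow2 image directly.
        # This does NOT grow the file system inside the image; the guest
        # (or growpart/cloud-init) still has to do that at boot.
        subprocess.check_call(['qemu-img', 'resize', DISK, new_size])

    def resize_via_raw(new_size='20G'):
        # The round trip discussed above: convert to raw so the file
        # system can be extended on the host, then convert back.
        raw = DISK + '.raw'
        subprocess.check_call(['qemu-img', 'convert', '-O', 'raw', DISK, raw])
        subprocess.check_call(['qemu-img', 'resize', raw, new_size])
        # For a non-partitioned ext3/ext4 image the file system can be
        # grown directly against the raw file.
        subprocess.check_call(['e2fsck', '-fp', raw])
        subprocess.check_call(['resize2fs', raw])
        subprocess.check_call(['qemu-img', 'convert', '-O', 'qcow2', raw, DISK])

The in-place resize only grows the virtual disk; something still has to grow
the file system inside the image, which is the part the conversion to raw makes
straightforward for non-partitioned images.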
