On Nov 13, 2013, at 9:14 PM, Haomai Wang <[email protected]> wrote:

> 
> On Nov 13, 2013, at 10:58 AM, Josh Durgin <[email protected]> wrote:
> 
>> On 11/11/2013 11:10 PM, Haomai Wang wrote:
>>> Hi all,
>>> 
>>> The OpenStack Nova master branch still has a bug when you boot a VM 
>>> with an explicit root disk size and rbd as Nova's storage backend. 
>>> For example, you boot a VM and specify 10G as the root disk size, but 
>>> the image is only 1G. The VM is spawned and the root disk expands to 
>>> 10G, but the filesystem is still 1G.
>>> 
>>> I have a way to solve it: when we boot a VM and resize the root disk, 
>>> we use the "fuse-rbd" command to resize the filesystem.
>>> 
>>> fuse-rbd -p pool -c /etc/ceph/ceph.conf /tmp-ceph-rbd
>>> cd /tmp-ceph-rbd
>>> resize2fs volume-xxxxxxxxxxx
>>> 
>>> It seems to work, but I want to know whether problems arise when a 
>>> pool contains many volumes. I'm not sure whether too many volumes 
>>> cause performance problems.
>> 
>> fuse-rbd has a 128 image limit at the moment. It's more of a prototype
>> than something I'd recommend relying on.
>> 
>> Interacting with an untrusted filesystem on a compute host is also a
>> bit worrying from a security perspective. If you really need to resize
>> the fs and can't use cloud-init, using libguestfs would be best. This
>> isolates the operations into a vm, so the host kernel isn't interacting
>> with untrusted filesystems.
> 
> I expected libguestfs too, but the pity is that the python binding of
> libguestfs doesn't support the "protocol" argument, so extra protocols
> such as rbd can't be used.

Oh, sorry. Newer python-libguestfs supports it. Good news, thanks! :-)
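For reference, a minimal sketch of how the libguestfs approach could look with a new enough python-libguestfs (one that accepts the "protocol" and "server" arguments, roughly libguestfs >= 1.22). The pool/volume names, monitor address, device path, and the rbd_disk_spec() helper are all illustrative assumptions, not values from this thread:

```python
def rbd_disk_spec(pool, volume):
    """Build the "pool/volume" disk string that libguestfs expects
    when add_drive_opts() is called with protocol="rbd"."""
    return "%s/%s" % (pool, volume)

def resize_volume_fs(pool, volume, device="/dev/sda",
                     mon="ceph-mon.example.com:6789"):
    # Sketch only: runs the resize inside the libguestfs appliance VM,
    # so the host kernel never touches the untrusted filesystem.
    import guestfs
    g = guestfs.GuestFS(python_return_dict=True)
    g.add_drive_opts(rbd_disk_spec(pool, volume), format="raw",
                     protocol="rbd", server=[mon])
    g.launch()
    # Grow the ext2/3/4 filesystem to fill the already-resized device.
    g.resize2fs(device)
    g.shutdown()
    g.close()
```

Everything stays inside the appliance VM, which is exactly the isolation benefit Josh described above.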

> 
> Maybe cloud-init is the proper choice. I just want to find a way that
> doesn't depend on the image.
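If the guest image does ship cloud-init, the usual way to handle this is cloud-init's growpart and resize_rootfs modules; a minimal cloud-config sketch (assuming a standard cloud image layout) would be:

```yaml
#cloud-config
# Grow the root partition to fill the enlarged RBD device at first boot...
growpart:
  mode: auto
  devices: ['/']
# ...then grow the root filesystem to fill the partition.
resize_rootfs: true
```

The caveat Haomai raises still applies: this only works if the image already contains cloud-init, which is the image dependency he wants to avoid.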
> 
>> 
>> Josh
> 
> Best regards,
> Wheats
> 
> 
> 

Best regards,
Wheats



--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
