I wish to have a big qcow2 container, say 1TB, to hold a growing bulk of 
data.  At the beginning it only holds 100GB of data.  To save storage, I 
would create an empty container, say:

# qemu-img create -f qcow2 data.qcow2 1000G

As time passes, some files are saved and some are deleted.  The data size may 
still be 100GB, while the qcow2 container may have grown to 400GB.

What I understand is that blocks freed by deleting a file inside the VM cannot 
be reclaimed from the qcow2 image, due to a POSIX limitation: the delete only 
updates the guest filesystem's own metadata, so the host never learns that the 
underlying clusters are unused.

So, what I do now is:

1. On the server, create a 1000G qcow2
# qemu-img create -f qcow2 data.qcow2 1000G

2. Inside the VM, format the qcow2 as ext4 (or whatever filesystem)
# mkfs.ext4 /dev/vdb

3. Inside the VM, create 8 x 100GB dummy files (zero.1 to zero.8)
# mount /dev/vdb /mnt/vdb
# for i in 1 2 3 4 5 6 7 8; do dd if=/dev/zero of=/mnt/vdb/zero.$i \
bs=1000000 count=100000; done
# umount /dev/vdb

4. On the server, compress the qcow2
# qemu-img convert -c -f qcow2 -O qcow2 data.qcow2 data-compressed.qcow2

This effectively limits the qcow2 file to a maximum size of 200GB no matter 
how much read/write traffic it gets.  If the data grows, I can simply remove 
"zero.1" to "zero.8" on demand, without data migration, repartitioning, or 
downtime.

The question is: is it possible to have a feature that creates a pre-formatted 
(say, ext4) qcow2 container already filled with balloon files (zero.xxx), so 
that the steps above can be done in one step?  That is, without needing the 
balloon files to inflate the qcow2 to its full 1TB size and then running the 
compress command to trim it down (which is slow).
