Hi Simon, perfect! Thank you! Have you seen a strange issue with rbd snapshots? The snapshot's actual size is bigger than the original image's, if you check the sizes with rbd du..
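(For reference, the check described above can be reproduced roughly like this. The pool/image/snapshot names are placeholders borrowed from the examples later in the thread, not actual output from Bill's cluster.)

```shell
# Per-object usage of the image and all of its snapshots
rbd du rbd/testvm

# Usage of one specific snapshot
rbd du rbd/testvm@backup

# Note: rbd du reports provisioned vs. used space. A snapshot can show
# more used space than the live image, e.g. if blocks were discarded or
# rewritten in the live image after the snapshot was taken: the snapshot
# still pins the old objects.
```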
On Thu, Jan 28, 2016 at 9:44 PM, Simon Ironside <[email protected]> wrote:
> On 28/01/16 12:56, Bill WONG wrote:
>> you dump the snapshot to raw img, possible to export to qcow2 format?
>
> Yes, I dump to raw because I'm able to get better and faster compression
> of the image myself than using the qcow2 format.
>
> You can export directly to qcow2 with qemu-img convert if you want:
>
> qemu-img convert -c -p -f rbd -O qcow2 \
>     rbd:rbd/testvm@backup \
>     testvm.qcow2
>
>> and you meant create the VM using qcow2 with the local HDD storage, then
>> convert to rbd?
>> is it perfect, if you can provide more details... it's highly
>> appreciated.. thank you!
>
> Ok . . .
>
> 1. Create the VM using a file-based qcow2 image, then convert to rbd
>
> # Create a VM using a regular file
> virt-install --name testvm \
>     --ram=1024 --vcpus=1 --os-variant=rhel7 \
>     --controller scsi,model=virtio-scsi \
>     --disk path=/var/lib/libvirt/images/testvm.qcow2,size=10,bus=scsi,format=qcow2 \
>     --cdrom=/var/lib/libvirt/images/rhel7.iso \
>     --nonetworks --graphics vnc
>
> # Complete your VM's setup, then shut it down
>
> # Convert the qcow2 image to rbd
> qemu-img convert -p -f qcow2 -O rbd \
>     /var/lib/libvirt/images/testvm.qcow2 \
>     rbd:rbd/testvm
>
> # Delete the qcow2 image, we don't need it any more
> rm -f /var/lib/libvirt/images/testvm.qcow2
>
> # Update the VM definition
> virsh edit testvm
> # Find the <disk> section referring to your original qcow2 image
> # Delete it and replace with:
>
> <disk type='network' device='disk'>
>   <driver name='qemu' type='raw' discard='unmap'/>
>   <source protocol='rbd' name='rbd/testvm'>
>     <host name='ceph-mon1.example.org' port='6789'/>
>     <host name='ceph-mon2.example.org' port='6789'/>
>     <host name='ceph-mon3.example.org' port='6789'/>
>   </source>
>   <auth username='CEPH_USERNAME'>
>     <secret type='ceph' uuid='SECRET_UUID'/>
>   </auth>
>   <target dev='sda' bus='scsi'/>
> </disk>
>
> # Obviously, use your own ceph monitor host name(s)
> # Also change CEPH_USERNAME and SECRET_UUID to suit
>
> # Restart your VM, it'll now be using ceph storage directly.
>
> Btw, using virtio-scsi devices as above with discard='unmap' enables
> TRIM support. This means you can use fstrim, or mount file systems with
> discard inside the VM, to free up unused space in the image.
>
> 2. Modify the XML produced by virt-install before the VM is started
>
> The process here is basically the same as above; the trick is to make the
> disk XML change before the VM is started for the first time, so that it's
> not necessary to shut down the VM to copy from the qcow2 file to the rbd
> image.
>
> # Create an RBD image for the VM
> qemu-img create -f rbd rbd:rbd/testvm 10G
>
> # Create the VM XML but don't start the VM
> virt-install --name testvm \
>     --ram=1024 --vcpus=1 --os-variant=rhel7 \
>     --controller scsi,model=virtio-scsi \
>     --disk path=/var/lib/libvirt/images/deleteme.img,size=1,bus=scsi,format=raw \
>     --cdrom=/var/lib/libvirt/images/rhel7.iso \
>     --nonetworks --graphics vnc \
>     --dry-run --print-step 1 > testvm.xml
>
> # Define the VM from the XML
> virsh define testvm.xml
>
> # Update the VM definition
> virsh edit testvm
> # Find the <disk> section referring to your original deleteme image
> # Delete it and replace it with the RBD disk XML as in procedure 1.
>
> # Start your VM, it'll now be using ceph storage directly.
>
> I think it's easier to understand what's going on with procedure 1, but
> once you're comfortable I suspect you'll end up using procedure 2, mainly
> because it saves having to shut down the VM and do the conversion, and also
> because my compute nodes only have tiny local storage.
>
> It's also possible to script much of the above with the likes of virsh
> detach-disk and virsh attach-device to make the disk XML change.
>
> Cheers,
> Simon.
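(The scripted variant Simon mentions at the end could look roughly like this. It's only a sketch: the target device sda matches the XML in procedure 1, and rbd-disk.xml is an assumed file holding that <disk> element.)

```shell
#!/bin/sh
# Sketch: swap the placeholder disk for the RBD disk without virsh edit.
# Assumes the domain testvm is defined but not running, and that
# rbd-disk.xml contains the <disk type='network'> XML from procedure 1.

# Remove the placeholder disk attached at target sda from the
# persistent definition (--config)
virsh detach-disk testvm sda --config

# Attach the RBD-backed disk described in rbd-disk.xml persistently
virsh attach-device testvm rbd-disk.xml --config

# Start the VM, now backed by ceph storage
virsh start testvm
```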
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
