GitHub user dstoy53 added a comment to the discussion: converting a qcow2
volume from thin to sparse, live
In case anyone stumbles on this and finds it useful, I ended up going with an
unmanage+blockcopy+importunmanaged approach. A different iteration had me
wanting to live migrate NFS volumes to RBD, so I had Codex write a script to
perform the following operations for a given VM UUID:
1. capture the VM definition (the instance data, the NICs with their network
and IP assignments, the block devices and their disk offerings) and store it in
a local directory to track details and state
2. unmanage+force (due to configdrive)
3. create an XML payload for each target disk device using my local data (this
helps make the script resumable)
4. for each disk, use the libvirt library to perform the blockcopy+pivot (see
the sketch after this list). By default libvirt creates the target disk device,
but depending on your needs VIR_DOMAIN_BLOCK_COPY_REUSE_EXT can use a
pre-created destination. Letting libvirt create it has the advantage that it
calculates the correct volume size, so you don't need to do a byte vs MiB vs
GiB conversion.
5. import the VM based on the details captured in step 1 (basically satisfy the
requirements of importUnmanagedInstance, including scenarios with multiple
NICs or multiple block devices)
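For step 4, the core of it with the libvirt Python bindings looks roughly like
this. This is a simplified sketch, not the actual script: the domain name, disk
target and XML path are placeholders, and error handling is omitted.
```
# Sketch of blockcopy + pivot via the libvirt Python bindings (placeholders throughout).
import time
import libvirt

DOMAIN = "your_domain"         # placeholder libvirt domain name
DISK = "vda"                   # target device from the captured definition
DEST_XML = "/tmp/vda-rbd.xml"  # payload built in step 3

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName(DOMAIN)

with open(DEST_XML) as f:
    destxml = f.read()

# Let libvirt create the destination; add VIR_DOMAIN_BLOCK_COPY_REUSE_EXT to the
# flags if you pre-created the target volume yourself.
flags = libvirt.VIR_DOMAIN_BLOCK_COPY_TRANSIENT_JOB

dom.blockCopy(DISK, destxml, None, flags)

# Wait for the mirror to catch up, then pivot the domain onto the new disk.
while True:
    info = dom.blockJobInfo(DISK, 0)
    if info and info["end"] > 0 and info["cur"] == info["end"]:
        break
    time.sleep(2)

dom.blockJobAbort(DISK, libvirt.VIR_DOMAIN_BLOCK_JOB_ABORT_PIVOT)
```
This is essentially what virsh blockcopy --wait --pivot does, just driven from a
script so the per-disk state can be tracked locally.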
That should work similarly enough for the thin-to-sparse migration too; I just
haven't gotten around to it yet.
The main disadvantages are that Ubuntu 24.04's version of libvirt may or may
not properly track zero blocks when using blockcopy (versus an offline
convert), and I'm guessing usage meter data tied to the VM's DB id might be
inaccurate following the import.
I think adding this as a feature is in the realm of the possible. Today, if you
try to live migrate a KVM instance's volume while the VM is running, CloudStack
throws an error that you must migrate the instance to another host at the same
time so the domain XML gets refreshed (unless the destination is StorPool).
Doing a simultaneous VM+storage migration works for NFS to NFS (changing the
offering has no effect on thin vs sparse), but fails for NFS to RBD.
Running a blockcopy job with the appropriate XML payload for the disk and
polling it to completion feels like an appropriate solution. Doing it live
in-place on the same storage pool might require a new volume UUID - perhaps
tracked as an independent CloudStack volume, with the reuse-existing-destination
option used for the job until the pivot succeeds. Polling the job's completion
% would be useful user feedback too.
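For the progress feedback, something like this would do (same caveats as the
earlier sketch: libvirt Python bindings, placeholder names):
```
# Report blockcopy progress for a disk as a percentage (sketch, placeholder names).
import time
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("your_domain")  # placeholder domain

while True:
    info = dom.blockJobInfo("vda", 0)
    if not info:
        break  # no active job (finished or aborted)
    if info["end"]:
        print(f"blockcopy vda: {100.0 * info['cur'] / info['end']:.1f}%")
    time.sleep(5)
```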
In vSphere this would be similar to a Storage vMotion that changes the
datastore and/or the disk format (thin / thick lazy zeroed / thick eager
zeroed). I don't have a frame of reference for how Xen would apply this
capability.
A manual process for an unmanaged VM looks like this - first the destination
disk XML (saved as /tmp/vda-rbd.xml in this example), then the virsh blockcopy
itself:
```
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <auth username='your_user'>
    <secret type='ceph' uuid='your_secret_uuid'/>
  </auth>
  <source protocol='rbd' name='your_pool/your_rbd_uuid'>
    <host name='your_mon_fqdn' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
  <serial>your_disk_serial</serial>
</disk>
```
```
virsh blockcopy your_domain vda \
  --xml /tmp/vda-rbd.xml \
  --wait \
  --verbose \
  --pivot \
  --transient-job
```
An offline migration for NFS->RBD is possible, but offline thin->sparse didn't
seem to work in my experiment, with or without a pool migration (and the
volume's original provisioning_type stored in the DB seems authoritative).
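For reference, the kind of offline convert I mean would be done outside
CloudStack with qemu-img, roughly like below. Note the assumption that a
"sparse" qcow2 corresponds to preallocation=metadata - I haven't verified that
mapping against the KVM agent code - and the paths are placeholders; CloudStack
itself won't know anything about such a conversion.
```
# Sketch of a manual offline thin->sparse rewrite of a qcow2 file.
# ASSUMPTION: CloudStack's "sparse" provisioning maps to qcow2 preallocation=metadata.
# Paths are placeholders.
import subprocess

SRC = "/mnt/primary/your_volume_uuid"          # placeholder source path
DST = "/mnt/primary/your_volume_uuid.sparse"   # placeholder destination path

subprocess.run(
    ["qemu-img", "convert", "-O", "qcow2",
     "-o", "preallocation=metadata", SRC, DST],
    check=True,
)
```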
Both should be doable live using blockcopy; I only went with unmanage+manage to
keep CloudStack sane.
Unrelated to all of that, the Ceph storage driver doesn't seem to respect the
thin/sparse/thick definitions; in my experiments a sparse disk offering still
results in parent/child RBD images.
it's very possible I'm going about it all wrong, but it works on my machine :)
GitHub link:
https://github.com/apache/cloudstack/discussions/12741#discussioncomment-16677258