--- Begin Message ---
Hi Fabian,

Use Cases:

1. Full volume copy performed on the same storage (storage_id).

2. New volume creation from a snapshot on the same storage (storage_id).

Currently, the full clone operation involves attaching both the source and destination volumes to a Proxmox node and transferring data through it. This method is network-intensive, impacting both the send and receive channels of the Proxmox and storage nodes. As a result, it can negatively affect VMs performing I/O operations on the storage.

Expanding 'volume_export' and 'volume_import' may not be the best approach. Instead, introducing a new method, such as 'volume_copy' or 'copy_image', could be more appropriate. It would be triggered when the source and destination share the same storage_id, so the copy can be optimized on the storage side. This would be especially beneficial for ZFS-based storages, particularly those with complex rollback mechanisms. With this approach, users could perform fast full clones without impacting network performance. A rough, purely illustrative sketch of what such a method might look like follows below.
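To make the idea more concrete, here is a minimal sketch for a ZFS-backed plugin. Everything in it is an assumption for illustration only: the method name and signature ('volume_copy'), the package name, and the dataset layout are made up, error handling is reduced to die()/warn(), and a real implementation would go through the plugin's existing helpers (e.g. PVE::Tools::run_command) instead of plain system() calls. For a shared/remote ZFS storage the same zfs commands would simply be issued on the storage host (e.g. over ssh) instead of locally.

package PVE::Storage::Custom::ExamplePlugin;   # hypothetical package name

use strict;
use warnings;

# Illustrative only - not an existing PVE::Storage API.
sub volume_copy {
    my ($class, $scfg, $storeid, $src_volname, $dst_volname, $snapshot) = @_;

    my $pool = $scfg->{pool};
    my $src = "$pool/$src_volname";
    my $dst = "$pool/$dst_volname";

    if (defined($snapshot)) {
        # Use case 2: new volume from an existing snapshot - a clone is
        # enough and completes almost instantly on the storage side.
        system('zfs', 'clone', "$src\@$snapshot", $dst) == 0
            or die "zfs clone failed\n";
    } else {
        # Use case 1: full copy on the same storage - send/receive locally
        # on the storage, so no volume data passes through the PVE node.
        my $tmp_snap = "$src\@__copy__";
        system('zfs', 'snapshot', $tmp_snap) == 0
            or die "zfs snapshot failed\n";
        system("zfs send $tmp_snap | zfs recv $dst") == 0
            or die "zfs send/recv failed\n";
        system('zfs', 'destroy', $tmp_snap) == 0
            or warn "could not clean up $tmp_snap\n";
    }

    return $dst_volname;
}

1;

The storage layer would only dispatch to such a method when both volumes resolve to the same storage_id, and keep the existing qemu-img convert / rsync path as the fallback.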
Best regards,
Andrei Perepiolkin

On 10/3/25 02:42, Fabian Grünbichler wrote:
> On October 3, 2025 2:15 am, Andrei Perapiolkin via pve-devel wrote:
>> Hi,
>>
>> Can the honorable community help me find an elegant way for
>> volume_import to identify the source volume's origin type and name?
>>
>> I'm investigating this to implement storage-assisted copy (i.e.,
>> performing the volume copy entirely on the storage side).
>>
>> My initial assumption was that this could be achieved by defining
>> custom volume_export and volume_import functions.
>> However, maybe there is a better way to do a storage-assisted copy.
> volume_export and volume_import are only used for a few specific cases
> (mainly offline migration of local disks, and offline remote migration
> of disks).
>
> For those you could define your own transport format that includes or
> just contains the relevant metadata to do a storage-side copy - it would
> only be selected if the storages for the source and target volume both
> say they support that particular format.
>
> I assume your storage is shared, so you'd be more interested in the
> move disk/full clone case, which currently uses either a mirror block
> job (if the VM is running), qemu-img convert (if the VM is not running)
> or rsync (for container volumes). None of these mechanisms is currently
> extensible by storage plugins.
>
> Maybe you could describe for which tasks you would see a clear benefit
> in extending the interface to let a storage plugin provide a "copy
> volume A into volume B storage side" operation - for the live move disk
> it might be hard (without dirty bitmap trickery like we use for
> replication, but that might be an option?); for the offline moves it
> would probably be possible to special-case this somehow and let plugins
> opt in - we've discussed this ourselves in the past.

>> P.S.
>> Just found out about
>> https://pve.proxmox.com/wiki/Storage_Plugin_Development:_Writing_a_Storage_Plugin_for_SSHFS
>> This is great!
>> Many thanks for posting this article!
> Great that it found its intended audience! :)


--- End Message ---