Hi Alexandre,

If the guests are Linux, you could try using the SCSI driver with discard enabled.

Running fstrim -v / inside the guest should then release the unused space on the underlying FS.

I don't use LVM, but this certainly works with other types of storage.
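
For example, a rough sketch, assuming the disk is attached as scsi0 on VM 100
(the VM ID, storage and volume names are placeholders; adjust to your setup):

    # on the Proxmox host: use the virtio-scsi controller and enable discard on the disk
    qm set 100 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-100-disk-1,discard=on

    # then inside the guest, once the disk is on the SCSI bus:
    fstrim -v /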

On Mon, Oct 3, 2016 at 5:14 PM, Dhaussy Alexandre
<adhau...@voyages-sncf.com> wrote:
> Hello,
>
> I'm currently migrating more than 1000 VMs from VMware to Proxmox, but I'm
> hitting a major issue with storage migrations.
> Specifically, I'm migrating from VMFS datastores to NFS on VMware, then from
> NFS to LVM on Proxmox.
>
> The LVM volumes on Proxmox sit on top of thin-provisioned (FC SAN) LUNs.
> Thin provisioning works fine for newly created VMs on Proxmox.
>
> However, I just discovered that when using qm move_disk to migrate from NFS to
> LVM, it actually allocates all blocks of data!
> That's a huge problem for me and clearly a no-go, as the SAN storage arrays
> are filling up very quickly!
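>
> For reference, the migration is done with something along these lines (the VM
> ID, disk name and target storage are placeholders):
>
>     qm move_disk 100 scsi0 san-lvm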
>
> After further investigation in QEMU and Proxmox, I found in the Proxmox code
> that qemu_drive_mirror is called with these arguments:
>
> (In /usr/share/perl5/PVE/QemuServer.pm)
>
>    5640 sub qemu_drive_mirror {
> .......
>    5654     my $opts = { timeout => 10, device => "drive-$drive", mode => "existing", sync => "full", target => $qemu_target };
>
> If I'm not wrong, QEMU supports a "detect-zeroes" flag for mirror block
> targets, but Proxmox does not use it.
> Is there any reason why this flag is not enabled during QEMU drive mirroring?
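>
> For illustration, detect-zeroes is the option exposed on QEMU drives, e.g. on
> the command line (the path and ids are placeholders, and this shows the -drive
> syntax, not the drive-mirror call itself):
>
>     -drive file=/dev/vg/vm-100-disk-1,if=none,id=drive-scsi0,format=raw,detect-zeroes=unmap,discard=unmap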
>
> Cheers,
> Alexandre.
_______________________________________________
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
