So:
- rbd --> qcow2 (with Proxmox 3.4 and a POWERED-OFF VM) loses the sparse mode
- qemu-img convert qcow2 --> qcow2 gives me back a sparse image
- move disk (from the GUI) qcow2 --> rbd with Proxmox 4.0 (and VM powered off) loses the sparse mode
- qemu-img convert qcow2 --> rbd (from the CLI) keeps the sparse mode (but I need to retry the conversion to keep format=2 and the layering feature).
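A sketch of the CLI conversion described above, assuming a pool named `rbd` and an image name `vm-100-disk-1` (both hypothetical; adjust to your setup). Pre-creating the target image with `rbd create --image-format 2` keeps format 2 (layering is among its default features), and `qemu-img convert -n` then writes into the existing image instead of creating a new one:

```shell
# Hypothetical pool/image names; adjust to your environment.
# Size is in MB and must be at least the virtual size of the source image.
rbd create --image-format 2 --size 32768 rbd/vm-100-disk-1

# -n skips target creation, so the pre-created format-2 image is reused;
# -O raw is the correct output format for an rbd target; -p shows progress.
qemu-img convert -p -n -O raw vm-100-disk-1.qcow2 rbd:rbd/vm-100-disk-1
```

Exact option names can vary between Ceph releases; check `rbd help create` on your version.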
----- Original Message ----- From: "Alexandre DERUMIER" <[email protected]> To: "proxmoxve" <[email protected]> Sent: Tuesday, 20 October 2015 12:49:32 Subject: Re: [PVE-User] ceph/rbd to qcow2 - sparse file

Looking at the rbd block driver, it seems that bdrv_co_write_zeroes is not implemented.

Does rbd -> qcow2 (with Proxmox 4.0) give you a sparse qcow2? Is it only qcow2 -> rbd that is non-sparse?

----- Original Message ----- From: "aderumier" <[email protected]> To: "proxmoxve" <[email protected]> Sent: Tuesday, 20 October 2015 12:18:43 Subject: Re: [PVE-User] ceph/rbd to qcow2 - sparse file

And also this one: "mirror: Do zero write on target if sectors not allocated"
http://git.qemu.org/?p=qemu.git;a=blobdiff;f=block/mirror.c;h=8888cea9521fd5fcbc300c054fc8936bdac4f47e;hp=4be06a508233e69040c74fce00d3baac107dbfd8;hb=dcfb3beb5130694b76b57de109619fcbf9c7e5b5;hpb=0fc9f8ea2800b76eaea20a8a3a91fbeeb4bfa81b

----- Original Message ----- From: "aderumier" <[email protected]> To: "Fabrizio Cuseo" <[email protected]>, "proxmoxve" <[email protected]> Sent: Tuesday, 20 October 2015 12:13:30 Subject: Re: [PVE-User] ceph/rbd to qcow2 - sparse file

Hmm, this is strange, because with the latest QEMU version included in Proxmox 4.0, the drive-mirror feature (move disk in Proxmox) should skip zero blocks. I haven't tested it.

http://git.qemu.org/?p=qemu.git;a=commit;h=0fc9f8ea2800b76eaea20a8a3a91fbeeb4bfa81b

"+# @unmap: #optional Whether to try to unmap target sectors where source has
+# only zero. If true, and target unallocated sectors will read as zero,
+# target image sectors will be unmapped; otherwise, zeroes will be
+# written. Both will result in identical contents.
+# Default is true. (Since 2.4) #"

As workarounds:
- doing the disk move with the VM shut down will produce a sparse file
- if you use virtio-scsi + discard, you can run the fstrim command in your (Linux) guest after the migration.
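The second workaround above can be sketched as follows (the mount point `/` is an example; this assumes the disk is attached via virtio-scsi with the discard option enabled in the Proxmox VM config):

```shell
# Check that the guest block device actually advertises discard support
# (non-zero DISC-GRAN/DISC-MAX columns).
lsblk --discard

# Trim unused blocks on the root filesystem so the underlying rbd image
# can deallocate them; -v prints how many bytes were trimmed.
fstrim -v /
```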
----- Original Message ----- From: "Fabrizio Cuseo" <[email protected]> To: "proxmoxve" <[email protected]> Sent: Monday, 19 October 2015 22:30:02 Subject: [PVE-User] ceph/rbd to qcow2 - sparse file

Hello. I have a test cluster (3 hosts) with 20-30 test VMs and Ceph storage. Last week I planned to upgrade from 3.4 to 4.0, so I moved all the VM disks to a MooseFS storage (qcow2). Moving from rbd to qcow2 caused all the disks to lose the sparse mode. I have reinstalled the whole cluster from scratch and am now moving the disks back from qcow2 to rbd, but first I need to convert (with the VM off) every single disk from qcow2 to qcow2, so that the disk image is sparse again. Is there a way to move the disk from the Proxmox GUI without losing the sparse mode, at least with PVE 4.0?

Regards, Fabrizio

_______________________________________________
pve-user mailing list
[email protected]
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

--
---
Fabrizio Cuseo - mailto:[email protected]
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it mailto:[email protected]
Numero verde nazionale: 800 901492
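The qcow2 --> qcow2 re-sparsing step mentioned in the thread can be sketched as follows (file names are hypothetical; run with the VM powered off):

```shell
# Converting a qcow2 image to a fresh qcow2 file drops clusters that
# read as zero, restoring sparseness; -p shows progress.
qemu-img convert -p -O qcow2 vm-100-disk-1.qcow2 vm-100-disk-1-sparse.qcow2

# Compare virtual size vs. actual disk usage before and after.
qemu-img info vm-100-disk-1-sparse.qcow2
```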
