Hello Daniel,

    Yes, the fix has been merged to master and is ready for testing :) It should resolve: https://bugzilla.redhat.com/1707707

Oh, good news! I will test it in a few days!

    Or is it crucial for you to have it in a 4.3.z build?

It is not crucial for me; I will install a new oVirt test environment. But once I confirm that the bug has been fixed, could it also be built for 4.3? 4.3 compatibility interests me too.

Regards,

Fran


On 5/20/20 11:25 AM, Daniel Erez wrote:


On Tue, May 19, 2020 at 1:52 PM FMGarcia <[email protected]> wrote:

    Hello Nir,

    Sorry for the wait, I was involved with other issues.


    Right now I don't have much time to implement this issue. However,
    I can deploy a CentOS 8.1 machine and test it there; I could have
    it up in a few days.

    Is there a tutorial for testing these builds? (Necessary
    dependencies, scripts to remove all traces of the previously
    installed version, etc.)

    I understand that the changes you have made these days at
    https://gerrit.ovirt.org/#/c/108991/ are almost final, right?
    Can I test them already, or should I wait a little?


Yes, the fix has been merged to master and is ready for testing :) It should resolve: https://bugzilla.redhat.com/1707707 If you're planning to upgrade to oVirt 4.4, it should be available in a future release candidate.
Or is it crucial for you to have it in a 4.3.z build?




    Even though you have implemented some very complete scripts
    (backup-disk and upload-disk), in the short term I'm still
    interested in the snapshot-based model. In my Java code I support
    full, incremental and differential backups, with the limitation
    that the storage domain cannot be a block domain; on NFS (for
    example) it works very well! However, for VMs that have a disk
    stored on a block domain, I can only make a backup by means of a
    snapshot clone, until this is resolved :) hehe.


    BTW, thanks for your 'backup_vm.py' demo, it looks really good!
    If I have some time I will try to test it in my environment. :)


    Best regards,

    Fran


    On 5/14/20 5:39 PM, Nir Soffer wrote:
    On Wed, May 13, 2020 at 12:19 PM FMGarcia <[email protected]> wrote:

    Hi Fran, I'm moving the discussion to the devel mailing list where it belongs.

    In https://gerrit.ovirt.org/#/c/107082/ we have "several problems" in
    deciding on this patch:

    At the base (current version on GitHub), the two scripts
    ('download_disk_snapshot.py' and 'upload_disk_snapshot.py') do not
    work together:

    'download_disk_snapshot.py' only downloads the volumes of a disk.
    'upload_disk_snapshot.py' requires: the virtual machine
    configuration ('.ovf'), a single disk to upload in the path
    './disks/xxxx', and a manual action to attach the disk to the VM.

    So I think that if you want the two scripts to work together, we
    should change 'download_disk_snapshot.py' before
    'upload_disk_snapshot.py'. If not, you should edit
    'upload_disk_snapshot.py' to add a variable 'vm_id' (like the
    variable 'sd_name' in that script) to attach the uploaded disk.
    I agree. It would be nice if we can do:

    $ mkdir -p backups/vm-id
    $ download_disk_snapshots.py --backup-dir backups/vm-id ...
    $ upload_disk_snapshots.py --backup-dir backups/vm-id ...

    download_disk_snapshots.py will download the vm ovf and all disks.
    upload_disk_snapshots.py will take the output of
    download_disk_snapshots.py and create a new vm.
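
    The backup directory could then look something like this (just a
    sketch; the exact layout is not decided, the names here are made up):

    backups/vm-id/
        vm.ovf
        disk-id/volume-1.qcow2
        disk-id/volume-2.qcow2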

    I suppose that the best thing is to discard the gerrit patch, and to
    propose first what you want for 'download_disk_snapshot.py' and
    'upload_disk_snapshot.py', and then act accordingly (several
    patches). Do you agree?
    This is a bigger change that can take more time. I think we'd better
    fix the issues in the current scripts - the first one is the missing
    disk attach that you fix in your patch.

    Since you posted this fix with a lot of other unrelated fixes (some
    wrong or unneeded),
    we cannot merge it. This is another reason to post minimal patches
    that do one fix.

    I'm only truly interested in the open bug with block domains and
    volumes > 1 GB: https://bugzilla.redhat.com/show_bug.cgi?id=1707707.
    I made these changes to help a little, since you would help me by
    solving the bug. I don't code in Python; I code in Java, using the
    Java SDK, and the bug is a major limitation in my software, so I
    want to resolve this bug (1 year old). =( I hope you understand. :)
    Sure, I understand.

    If you don't have time to work on this, some other developer can
    take over this patch.

    The bug should be fixed by:
    https://gerrit.ovirt.org/c/108991/

    It would be nice if you can test this. I started a build here:
    https://jenkins.ovirt.org/job/ovirt-engine_standard-check-patch/5867/

    When the build is ready, you will be able to install the engine
    from this build by adding a yum repo with the baseurl:

    https://jenkins.ovirt.org/job/ovirt-engine_standard-check-patch/5867/artifact/build-artifacts.el8.x86_64/
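
    For example, a repo file along these lines should work (the file
    name and repo id are just examples; these CI builds are not signed,
    hence gpgcheck=0):

    $ cat /etc/yum.repos.d/ovirt-engine-test.repo
    [ovirt-engine-test]
    name=oVirt engine build from check-patch
    baseurl=https://jenkins.ovirt.org/job/ovirt-engine_standard-check-patch/5867/artifact/build-artifacts.el8.x86_64/
    enabled=1
    gpgcheck=0

    Then install the engine packages from it and run engine-setup as
    usual.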

    Note that this requires CentOS 8.1. If you want to test on CentOS 7,
    you need to wait until the fix is backported to 4.3, or, since you
    like Java, maybe port it yourself?

    Note also that we have much more advanced backup and restore options:
    https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/backup_vm.py

    Here is an example run I did yesterday:

    I started with a full backup of a running vm:

    $ ./backup_vm.py full --engine-url https://engine3/ --username
    admin@internal --password-file
    /home/nsoffer/.config/ovirt/engine3/password --cafile
    /home/nsoffer/Downloads/certs/engine3.pem --backup-dir
    /home/nsoffer/tmp/backups/test-nfs
    b5732b5c-37ee-4c66-b77e-bda5d37a10fe
    [   0.0 ] Starting full backup for VM b5732b5c-37ee-4c66-b77e-bda5d37a10fe
    [   1.5 ] Waiting until backup f73541c6-88d1-4dac-a551-da922cdb3f55 is ready
    [   4.6 ] Created checkpoint '4754dc34-da4b-4e62-84ea-164c413b003c'
    (to use in --from-checkpoint-uuid for the next incremental backup)
    [   4.6 ] Creating image transfer for disk 566e6aa6-575b-4f83-88c9-e5e5b54d9649
    [   5.9 ] Waiting until transfer 98e5aabc-fedb-4d2c-81c5-eed1a8b07790
    will be ready
    [   5.9 ] Image transfer 98e5aabc-fedb-4d2c-81c5-eed1a8b07790 is ready
    [   5.9 ] Transfer url:
    https://host4:54322/images/13a0a396-5070-4b0f-a5cd-e2506c5abf0f
    Formatting '/home/nsoffer/tmp/backups/test-nfs/566e6aa6-575b-4f83-88c9-e5e5b54d9649.202005132336.full.qcow2',
    fmt=qcow2 size=6442450944 cluster_size=65536 lazy_refcounts=off
    refcount_bits=16
    [ 100.00% ] 6.00 GiB, 18.34 seconds, 334.95 MiB/s
    [  24.3 ] Finalizing transfer 98e5aabc-fedb-4d2c-81c5-eed1a8b07790
    [  24.5 ] Full backup completed successfully

    This downloads all the vm's disks to ~/tmp/backups/test-nfs/, creating
    566e6aa6-575b-4f83-88c9-e5e5b54d9649.202005132336.full.qcow2

    This file includes the entire disk content at the time the backup
    was started, including data from all snapshots.
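
    You can sanity check the downloaded file with qemu-img info; it
    should report a standalone qcow2 image with no backing file:

    $ qemu-img info 566e6aa6-575b-4f83-88c9-e5e5b54d9649.202005132336.full.qcow2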

    Then I ran an incremental backup of the same vm, recording the data
    changes since the full backup:

    $ ./backup_vm.py incremental --engine-url https://engine3/ --username
    admin@internal --password-file
    /home/nsoffer/.config/ovirt/engine3/password --cafile
    /home/nsoffer/Downloads/certs/engine3.pem --backup-dir
    /home/nsoffer/tmp/backups/test-nfs --from-checkpoint-uuid
    4754dc34-da4b-4e62-84ea-164c413b003c
    b5732b5c-37ee-4c66-b77e-bda5d37a10fe
    [   0.0 ] Starting incremental backup for VM
    b5732b5c-37ee-4c66-b77e-bda5d37a10fe
    [   1.3 ] Waiting until backup 01a88749-06eb-431a-81f2-b03db24b878e is ready
    [   2.3 ] Created checkpoint '6f80d3c5-5b81-42ae-9700-2ccab37ad93b'
    (to use in --from-checkpoint-uuid for the next incremental backup)
    [   2.3 ] Creating image transfer for disk 566e6aa6-575b-4f83-88c9-e5e5b54d9649
    [   3.4 ] Waiting until transfer 16c90052-9411-46f6-8dc6-b2f260206708
    will be ready
    [   3.4 ] Image transfer 16c90052-9411-46f6-8dc6-b2f260206708 is ready
    [   3.4 ] Transfer url:
    https://host4:54322/images/b9a44902-46f1-43b3-a9ad-9d72735c53ad
    Formatting '/home/nsoffer/tmp/backups/test-nfs/566e6aa6-575b-4f83-88c9-e5e5b54d9649.202005132347.incremental.qcow2',
    fmt=qcow2 size=6442450944 cluster_size=65536 lazy_refcounts=off
    refcount_bits=16
    [ 100.00% ] 6.00 GiB, 0.63 seconds, 9.52 GiB/s
    [   4.0 ] Finalizing transfer 16c90052-9411-46f6-8dc6-b2f260206708
    [   4.1 ] Incremental backup completed successfully

    This backup is tiny, since the only things that changed were a new
    directory created on the vm and some system logs modified since the
    full backup.
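
    You can see this by comparing the allocated size of the two files:

    $ ls -lhs /home/nsoffer/tmp/backups/test-nfs/*.qcow2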

    Then I rebased the incremental backup on top of the full backup:

    $ cd /home/nsoffer/tmp/backups/test-nfs
    $ qemu-img rebase -u -b
    566e6aa6-575b-4f83-88c9-e5e5b54d9649.202005132336.full.qcow2 -F qcow2
    566e6aa6-575b-4f83-88c9-e5e5b54d9649.202005132347.incremental.qcow2
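
    To verify the result, you can inspect the chain; the incremental
    file should now have the full file as its backing file:

    $ qemu-img info --backing-chain 566e6aa6-575b-4f83-88c9-e5e5b54d9649.202005132347.incremental.qcow2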

    These images are now a valid qcow2 chain that can be uploaded using
    upload_disk.py:

    $ python3 upload_disk.py --engine-url https://engine3/ --username
    admin@internal --password /home/nsoffer/.config/ovirt/engine3/password
    --cafile /home/nsoffer/Downloads/certs/engine3.pem --disk-format qcow2
    --disk-sparse --sd-name iscsi2-1
    /home/nsoffer/tmp/backups/test-nfs/566e6aa6-575b-4f83-88c9-e5e5b54d9649.202005132347.incremental.qcow2
    Checking image...
    Image format: qcow2
    Disk format: cow
    Disk content type: data
    Disk provisioned size: 6442450944
    Disk initial size: 2755264512
    Disk name: 566e6aa6-575b-4f83-88c9-e5e5b54d9649.202005132347.incremental.qcow2
    Connecting...
    Creating disk...
    Disk id: a9785777-8aac-4515-a47a-2f5126e3af73
    Creating image transfer...
    Transfer ID: 6e0384b6-730b-4416-a954-bf45e627d5cf
    Transfer host: host4
    Uploading image...
    [ 100.00% ] 6.00 GiB, 20.50 seconds, 299.70 MiB/s
    Finalizing image transfer...
    Upload completed successfully

    The result is a single qcow2 disk on the domain iscsi2-1.

    I created a new vm from this disk.

    This backup script is not complete yet: we don't download the VM OVF
    in each backup, and we don't create the VM from the OVF. These
    features should be added later.

    You may want to start testing and integrating this code instead of
    the snapshot-based download.

    
    See https://www.ovirt.org/develop/release-management/features/storage/incremental-backup.html

    Nir

_______________________________________________
Devel mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/[email protected]/message/IA5DKBT2DKF7XRAVGJ5YBKQPJILDNTAB/
