On Tue, Nov 13, 2018 at 7:12 PM Suchitra Herwadkar <
suchitra.herwad...@veritas.com> wrote:

> Hi Nir
>
>
>
> In a case of recovery here are the steps we intend to do
>
> --First create an empty disk with the desired disk format (RAW/COW)
>
> --Then upload the backup image contents within it using the image transfer
> APIs.
>
> --Attach the disk to the created VM.
>
>
>
> During the 2nd step, if it happens to be a QCOW disk, how can we upload
> the backup image contents and lay them down in QCOW2 format? Or does the
> image transfer API for upload handle it internally?
>
> Just wanted to confirm whether we need to do anything post-upload for QCOW.
>

When uploading and downloading images in 4.2 there is nothing special about
the image format.

When downloading an image you get the image contents exactly as stored in
storage. For a raw image this is the same data seen by the guest. For a qcow2
image it is the qcow2 file contents as stored on file-based storage, or the
qcow2 contents stored in the block device on block-based storage.
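A quick way to sanity-check what you downloaded is to look at the magic bytes. A minimal sketch (the path is a placeholder, and anything without the qcow2 magic is simplistically treated as raw):

```python
# Classify a downloaded image by its leading magic bytes.
# A qcow2 file always starts with the 4 bytes "QFI\xfb"; this sketch
# treats anything else as raw, which is a simplification.

QCOW2_MAGIC = b"QFI\xfb"

def detect_format(path):
    with open(path, "rb") as f:
        magic = f.read(4)
    return "qcow2" if magic == QCOW2_MAGIC else "raw"
```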

The same applies when uploading images back to storage: when you upload a
qcow2 image, the exact image you upload is stored in the underlying oVirt
volume.

Note that when uploading a qcow2 image, the image must have a valid backing
file matching the backing file defined in oVirt.

If the VM disk has no snapshots and is not based on a template, there is no
backing file, and the qcow2 image must not have a backing file.

If you upload a new snapshot to an existing chain, the qcow2 image's backing
file must be the volume name of the current snapshot (a UUID).
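One way to verify the backing file before uploading is qemu-img info. A minimal sketch, assuming qemu-img is on PATH (the image path is a placeholder):

```python
# Read the backing file recorded in a qcow2 header with qemu-img,
# so it can be compared with the backing volume oVirt expects.
# Requires qemu-img on PATH.

import json
import subprocess

def backing_file(path):
    out = subprocess.check_output(
        ["qemu-img", "info", "--output=json", path])
    info = json.loads(out)
    # The key is absent when the image has no backing file.
    return info.get("backing-filename")
```

For an image uploaded as a new snapshot, the returned value should equal the volume UUID of the current snapshot; for a disk with no snapshots it should be None.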

Please see these examples for more info:
- upload disk with single volume (raw or qcow)

https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk.py

- upload disk with multiple snapshots

https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk_snapshots.py

If you store the backup data in another format, or want to create one image
from multiple qcow2 snapshots, you must prepare the image in some temporary
storage using qemu-img (for example, qemu-img commit for merging qcow2
snapshots), and then upload the resulting qcow2 image to storage.

You can also convert raw to qcow2 or qcow2 to raw using qemu-img.
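For example, a minimal sketch of these qemu-img preparation steps (paths are placeholders, qemu-img assumed on PATH):

```python
# Prepare an image for upload with qemu-img.
# "commit" folds an overlay's changes into its backing file;
# "convert" rewrites an image in another format.

import subprocess

def commit_overlay(overlay):
    # After this, the backing file contains the merged data.
    subprocess.check_call(["qemu-img", "commit", overlay])

def convert_image(src, src_fmt, dst, dst_fmt):
    # e.g. convert_image("backup.raw", "raw", "disk.qcow2", "qcow2")
    subprocess.check_call(
        ["qemu-img", "convert", "-f", src_fmt, "-O", dst_fmt, src, dst])
```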

In 4.3 we will support an improved workflow in which you always download and
upload raw guest data regardless of the underlying disk format. See the
feature page for more info:
https://ovirt.org/develop/release-management/features/storage/incremental-backup/

Nir



>
>
> Kindly confirm.
>
>
>
> -Suchitra
>
>
>
> *From: *Nir Soffer <nsof...@redhat.com>
> *Date: *Friday, November 2, 2018 at 8:12 PM
> *To: *Mahesh Falmari <mahesh.falm...@veritas.com>, Ryan Barry <
> rba...@redhat.com>
> *Cc: *Arik Hadas <aha...@redhat.com>, Martin Perina <mper...@redhat.com>,
> "Yaniv Lavi (Dary)" <yl...@redhat.com>, Daniel Erez <de...@redhat.com>,
> "Nisan, Tal" <tni...@redhat.com>, Pavan Chavva <pcha...@redhat.com>,
> devel <devel@ovirt.org>, James Olson <james.ol...@veritas.com>, Navin Tah
> <navin....@veritas.com>, Sudhakar Paulzagade <
> sudhakar.paulzag...@veritas.com>, Abhay Marode <abhay.mar...@veritas.com>,
> Suchitra Herwadkar <suchitra.herwad...@veritas.com>, Nirmalya Sirkar <
> nirmalya.sir...@veritas.com>, Abhijeet Barve <abhijeet.ba...@veritas.com>
> *Subject: *Re: [EXTERNAL] Re: List of Queries related to RHV
>
>
>
> On Fri, Nov 2, 2018 at 3:22 PM Mahesh Falmari <mahesh.falm...@veritas.com>
> wrote:
>
> Thanks Nir for the answers.
>
> Just a follow up question to your answer on Q.3>
> [Nir] Regarding backup, I think you need to store the vm configuration at
> the time of the backup regardless of having a version or not. The amount of
> data is very small.
>
> [Mahesh] We are looking for VM version information to be stored during
> backup for different reasons. One of the reasons is to determine the
> compatibility of a VM backed up from the latest version of the RHV server
> and restored to older versions.
>
>
>
> Trying to restore a VM on an older version sounds fragile. Even if this
> works, I don't think we test such scenarios, so it is likely to break in
> future versions.
>
>
>
> Also, when you say VM configuration, what specific APIs could be leveraged
> to get it?
> Still looking for an answer on whether VM version information will be
> available or not.
>
>
>
> I hope Ryan can help with this.
>
>
>
>
>
> Thanks & Regards,
> Mahesh Falmari
>
>
>
> *From:* Nir Soffer <nsof...@redhat.com>
> *Sent:* Friday, November 2, 2018 1:01 AM
> *To:* Mahesh Falmari <mahesh.falm...@veritas.com>; Arik Hadas <
> aha...@redhat.com>; Martin Perina <mper...@redhat.com>
> *Cc:* Yaniv Lavi (Dary) <yl...@redhat.com>; Daniel Erez <de...@redhat.com>;
> Nisan, Tal <tni...@redhat.com>; Pavan Chavva <pcha...@redhat.com>; devel <
> devel@ovirt.org>; James Olson <james.ol...@veritas.com>; Navin Tah <
> navin....@veritas.com>; Sudhakar Paulzagade <
> sudhakar.paulzag...@veritas.com>; Abhay Marode <abhay.mar...@veritas.com>;
> Suchitra Herwadkar <suchitra.herwad...@veritas.com>; Nirmalya Sirkar <
> nirmalya.sir...@veritas.com>; Abhijeet Barve <abhijeet.ba...@veritas.com>
> *Subject:* Re: [EXTERNAL] Re: List of Queries related to RHV
>
>
>
> On Wed, Oct 17, 2018 at 5:03 PM Mahesh Falmari <mahesh.falm...@veritas.com>
> wrote:
>
> Thanks for the prompt response on these queries. We have few follow-up
> queries mentioned inline.
>
>
>
> Thanks & Regards,
> Mahesh Falmari
>
>
>
> *From:* Yaniv Lavi <yl...@redhat.com>
> *Sent:* Tuesday, October 16, 2018 7:19 PM
> *To:* Mahesh Falmari <mahesh.falm...@veritas.com>
> *Cc:* Nir Soffer <nsof...@redhat.com>; Erez, Daniel <de...@redhat.com>;
> Tal Nisan <tni...@redhat.com>; Pavan Chavva <pcha...@redhat.com>; devel <
> devel@ovirt.org>; James Olson <james.ol...@veritas.com>; Navin Tah <
> navin....@veritas.com>; Sudhakar Paulzagade <
> sudhakar.paulzag...@veritas.com>; Abhay Marode <abhay.mar...@veritas.com>;
> Suchitra Herwadkar <suchitra.herwad...@veritas.com>; Nirmalya Sirkar <
> nirmalya.sir...@veritas.com>; Abhijeet Barve <abhijeet.ba...@veritas.com>
> *Subject:* [EXTERNAL] Re: List of Queries related to RHV
>
>
>
>
> *YANIV LAVI*
>
> SENIOR TECHNICAL PRODUCT MANAGER
>
> Red Hat Israel Ltd. <https://www.redhat.com/>
>
> 34 Jerusalem Road, Building A, 1st floor
>
> Ra'anana, Israel 4350109
>
> yl...@redhat.com    T: +972-9-7692306/8272306     F: +972-9-7692223
>  IM: ylavi
>
>
> *TRIED. TESTED. TRUSTED.* <https://redhat.com/trusted>
>
> @redhatnews <https://twitter.com/redhatnews>   Red Hat 
> <https://www.linkedin.com/company/red-hat>   Red Hat 
> <https://www.facebook.com/RedHatInc>
>
>
>
> On Tue, Oct 16, 2018 at 4:35 PM Mahesh Falmari <mahesh.falm...@veritas.com>
> wrote:
>
> Hi Nir,
>
> We have few queries with respect to RHV which we would like to understand
> from you.
>
>
>
> *1. Does RHV maintain the virtual machine configuration file in the back end?*
>
> Just like we have configuration files for other hypervisors (.vmx for
> VMware, .vmcx for Hyper-V) which capture most of the virtual machine
> configuration information, does RHV also maintain such a file? If not, what
> is the other way to get all the virtual machine configuration information
> from a single API?
>
>
>
> There is an OVF store, but this is not meant for external consumption.
>
>
>
> Right, this is only for internal use.
>
>
>
> ...
>
> *3. Do we have any version associated with the virtual machine?*
>
> Just like we have a hardware version in the case of VMware and a virtual
> machine version in the case of Hyper-V, does RHV also associate any such
> version with the virtual machine?
>
>
>
> The HW version is based on the VM machine type.
>
>  [Mahesh] Can you please elaborate more on this? How is the VM machine
> type alone going to determine its version?
>
>
>
> Arik, can you answer this?
>
>
>
> Regarding backup, I think you need to store the vm configuration at the
> time of the
>
> backup regardless of having a version or not. The amount of data is very
> small.
>
> *4. Is it possible to create virtual machines with QCOW2 as base disks
> instead of RAW?*
>
> We would like to understand if there are any use cases where customers
> prefer creating virtual machines with QCOW2 base disks instead of RAW ones.
>
>
>
> That is a possibility in the case of thin disks on file storage.
>
>   [Mahesh] Can you please elaborate more on this?
>
>
>
> Using the UI you can use qcow2 format only for thin disks on block storage.
>
>
>
> Using the SDK you can also create qcow2 image on thin file based storage.
>
>
>
> You can see examples here:
>
>
> https://github.com/oVirt/ovirt-engine-sdk/blob/78c3d5bd14eeb93ef72ec31d775ff5c41f51a8c7/sdk/examples/upload_disk.py#L123
>
>
>
> In 4.3 we plan to support the qcow2 image format for both thin and
> preallocated disks to allow changed block tracking for incremental backup.
> See:
> https://github.com/oVirt/ovirt-site/blob/bc51f4a7867d9c7e3797da6da1d19e111cd2ff67/source/develop/release-management/features/storage/incremental-backup.html.md
>
>
>
> *5. RHV Deployment*
>
> What kind of deployments have you come across in the field? Do customers
> scale their infrastructure by adding more datacenters/clusters/nodes, or
> do they add more RHV managers? What scenarios trigger having more than one
> RHV manager?
>
>
>
> We see all kinds with oVirt. It depends on the use case.
>
>
>
> I don't know about any stats from users, but the general idea is:
>
>
>
> - one engine
>
> - 1 or more DCs
>
> - 1 or more storage domains in a DC
>
> - 1 or more clusters per DC
>
> - 1 or more hosts per cluster
>
>
>
> The theoretical limit is 2000 hosts per DC (limited by sanlock), but the
> practical
>
> limit is much lower. I don't think we have setups with more than 200 hosts.
>
>
>
> Martin, do you have more info on this?
>
> *6. Image transfer*
>
> We are trying to download disk chunks using multiple threads to improve
> performance of reading data from RHV. Downloading 2 disk chunks
> simultaneously via threads should take approximately the same time.
>
> This is much more complicated to calculate.
>
> But from our observations this takes roughly 1.5 times as long.
>
>
>
> It sounds like a reasonable speed up.
>
>
>
> Can RHV-M serve requests in parallel,
>
>
>
> Yes
>
>
>
> if so are there any settings that need to be tweaked?
>
>
>
> We don't have any settings related to concurrency.
>
> Here is an example:
> Request 1 for chunk 1 from thread 1, Range: bytes=0-1023
> Request 2 for chunk 2 from thread 2, Range: bytes=1024-2047
> Takes roughly 1.5 seconds, whereas a single request would take 1 second.
> Expecting it to take just around 1 second.
>
> 1.5 seconds for reading 1024 bytes?
>
> [Mahesh] Seeking response to this query.
>
>
>
> The throughput you will get depends on many things:
>
> - are you communicating with the proxy (using proxy_url) or the daemon
>
>   (transfer_url)? Accessing the daemon directly will be faster.
>
> - are you running on the same host as the daemon performing the transfer?
>
> - if you run on the same host, are you using unix socket? (15% improvement
> expected)
>
> - are you using the same connection per thread for the entire transfer?
>   (huge improvement when doing small requests)
>
> - which version are you testing? We support keep-alive connections only
>   since 1.4.x.
>
> - are you using big enough requests? You will get the best performance if
>   you use one big request per thread
>
> - network bandwidth
>
> - storage speed
>
>
>
> For best throughput when downloading a single disk, you should use multiple
> threads, each downloading one part of the image, but I'm not sure it is
> worth the time to optimize this, since backups are expected to run at the
> same time anyway, and you will quickly reach the storage limit while
> downloading disks for multiple VMs on multiple hosts at the same time.
>
>
>
> I suggest testing the expected use case, not a single download.
>
>
>
> *7. Free and Thaw operation*
>
> For Cinder-based VMs, these APIs are recommended for FS-consistent backup:
>
> - POST /api/vms/<ID>/freezefilesystems
>
> - POST /api/vms/<ID>/thawfilesystems
>
> Why do we need this when it is not required for other storage?
>
> Creating a snapshot does this for you in the case where you have the oVirt
> guest agent installed on the guest.
>
>   [Mahesh] Thanks, we would also like to understand whether there is a way
> to control crash/app-consistent snapshots through the REST APIs?
>
>
>
> Snapshots are always consistent if you have the qemu guest agent installed,
> and the guest is using the guest agent scripts properly.
>
>
>
> Without the guest agent, or if it is not configured properly, the file
> system will have to recover after restore, in the same way it recovers
> after a power failure.
>
>
>
> To be able to resume a VM to the same state it was in when the snapshot
> was taken, you need to include memory in the snapshot, but this makes the
> snapshot much slower, depending on the amount of guest memory, and requires
> a huge amount of space, so I don't think it makes sense for backup.
>
>
>
> Regarding REST API, Arik can add more info about this.
>
>
>
> Nir
>
>
_______________________________________________
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/WGL2IORXSAR7M2DU33IKUQZKJ6GZV7JU/
