On Wed, Nov 4, 2020 at 9:49 PM Nir Soffer <nsof...@redhat.com> wrote:
>
> I want to share useful info from the OST hackathon we had this week.
>
> Image transfer must work with real hostnames to allow server
> certificate verification.
> Inside the OST environment, the engine and host names are resolvable, but
> on the host (or vm) running OST the names are not available.
>
> This can be fixed by adding the engine and hosts to /etc/hosts like this:
>
> $ cat /etc/hosts
> 127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
> ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
>
> 192.168.200.2 engine
> 192.168.200.3 lago-basic-suite-master-host-0
> 192.168.200.4 lago-basic-suite-master-host-1

Are these addresses guaranteed to be static?

Where are they defined?

>
> It would be nice if this were automated by OST. You can get the details using:
>
> $ cd src/ovirt-system-tests/deployment-xxx
> $ lago status

It would have been even nicer if this worked dynamically, without user
intervention.

I thought about and searched for ways to achieve this, but failed to find
anything simple.

Closest options I found, in case someone feels like playing with this:

1. Use HOSTALIASES. 'man 7 hostname' for details, or e.g.:

https://blog.tremily.us/posts/HOSTALIASES/

With this, if the addresses are indeed static but you do not want them
hardcoded in /etc/hosts under these names (say, because you want different
ones per run), you can hardcode them there under some longer name, and have
a process-specific HOSTALIASES file mapping e.g. 'engine' to the engine of
that specific run.

2. https://github.com/fritzw/ld-preload-open

With this, you can have a process-specific /etc/resolv.conf, pointing
that specific process at the internal nameserver inside lago/OST.
This requires building a small C library. I didn't try it or check
its code, and I can't find it pre-built in copr (or anywhere else).

(
Along the way, if you like such tricks, found this:

https://github.com/gaul/awesome-ld-preload
)
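A minimal sketch of option 1 (untested; 'engine.ost.local' is a hypothetical
long name that would be hardcoded in /etc/hosts):

```shell
# Hypothetical system-wide entry, added once to /etc/hosts:
#   192.168.200.2 engine.ost.local

# Per-run alias file mapping the short name to this run's long name.
cat > /tmp/ost-hostaliases <<'EOF'
engine engine.ost.local
EOF

# Only processes started with HOSTALIASES set see the alias, and only
# lookups going through the glibc resolver are affected. '|| true'
# because the lookup succeeds only once the long name is in /etc/hosts.
HOSTALIASES=/tmp/ost-hostaliases getent hosts engine || true
```

Note that per hostname(7), HOSTALIASES only applies to single-label names
(no dots), so 'engine' qualifies but the per-host lago names would too.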

>
> OST keeps the deployment directory in the source directory. Be careful if
> you like to "git clean -dxf", since it will delete the entire deployment
> and you will have to kill the vms manually later.
>
> The next thing we need is the engine ca cert. It can be fetched like this:
>
> $ curl -k 
> 'https://engine/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'
> > ca.pem
>
> I would expect OST to do this and put the file in the deployment directory.
>
> To upload or download images, backup vms or use other modern examples from
> the sdk, you need to have a configuration file like this:
>
> $ cat ~/.config/ovirt.conf
> [engine]
> engine_url = https://engine
> username = admin@internal
> password = 123
> cafile = ca.pem
>
> With this, uploading from the same directory where ca.pem is located will
> work. If you want it to work from any directory, use an absolute path to
> the file.
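For instance, with an absolute cafile path the same config works from any
directory (the path below is only illustrative):

```ini
[engine]
engine_url = https://engine
username = admin@internal
password = 123
# hypothetical absolute location of the fetched ca.pem
cafile = /home/user/src/ovirt-system-tests/ca.pem
```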
>
> I created a test image using qemu-img and qemu-io:
>
> $ qemu-img create -f qcow2 test.qcow2 1g
>
> To write some data to the test image we can use qemu-io. This writes 64k of 
> data
> (b"\xf0" * 64 * 1024) to offset 1 MiB.
>
> $ qemu-io -f qcow2 -c "write -P 240 1m 64k" test.qcow2

Never heard about qemu-io. Nice to know. It seems it does not have a manpage
in el8, although I can find one elsewhere on the net.
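For reference, qemu-io can also verify what was written: a sketch that
recreates the test image from the quoted commands and reads the range back.

```shell
# Requires qemu-img and qemu-io; skip quietly where not installed.
command -v qemu-io >/dev/null || { echo "qemu-io not installed"; exit 0; }

# Create the same 1g qcow2 test image and write the pattern,
# as in the quoted commands.
qemu-img create -f qcow2 test.qcow2 1g
qemu-io -f qcow2 -c "write -P 240 1m 64k" test.qcow2

# Read the same range back; with -P the read also verifies that
# every byte matches the pattern 0xf0 (240).
qemu-io -f qcow2 -c "read -P 240 1m 64k" test.qcow2
```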

>
> Since this image contains only 64k of data, uploading it should be instant.
>
> The last part we need is the imageio client package:
>
> $ dnf install ovirt-imageio-client
>
> To upload the image, we need at least one host up and storage domains
> created. I did not find a way to prepare OST for this, so I simply ran the
> upload after run_tests completed. That took about an hour.
>
> To upload the image to raw sparse disk we can use:
>
> $ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py
> -c engine --sd-name nfs --disk-sparse --disk-format raw test.qcow2
> [   0.0 ] Checking image...
> [   0.0 ] Image format: qcow2
> [   0.0 ] Disk format: raw
> [   0.0 ] Disk content type: data
> [   0.0 ] Disk provisioned size: 1073741824
> [   0.0 ] Disk initial size: 1073741824
> [   0.0 ] Disk name: test.raw
> [   0.0 ] Disk backup: False
> [   0.0 ] Connecting...
> [   0.0 ] Creating disk...
> [  36.3 ] Disk ID: 26df08cf-3dec-47b9-b776-0e2bc564b6d5
> [  36.3 ] Creating image transfer...
> [  38.2 ] Transfer ID: de8cfac9-ead2-4304-b18b-a1779d647716
> [  38.2 ] Transfer host name: lago-basic-suite-master-host-1
> [  38.2 ] Uploading image...
> [ 100.00% ] 1.00 GiB, 1.79 seconds, 571.50 MiB/s
> [  40.0 ] Finalizing image transfer...
> [  44.1 ] Upload completed successfully
>
> I uploaded this before I added the hosts to /etc/hosts, so the upload
> was done via the proxy.
>
> Yes, it took 36 seconds to create the disk.
>
> To download the disk use:
>
> $ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py
> -c engine 5ac63c72-6296-46b1-a068-b1039c8ecbd1 downlaod.qcow2
> [   0.0 ] Connecting...
> [   0.2 ] Creating image transfer...
> [   1.6 ] Transfer ID: a99e2a43-8360-4661-81dc-02828a88d586
> [   1.6 ] Transfer host name: lago-basic-suite-master-host-1
> [   1.6 ] Downloading image...
> [ 100.00% ] 1.00 GiB, 0.32 seconds, 3.10 GiB/s
> [   1.9 ] Finalizing image transfer...
>
> We can verify the transfers using checksums. Here we create a checksum
> of the remote
> disk:
>
> $ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/checksum_disk.py
> -c engine 26df08cf-3dec-47b9-b776-0e2bc564b6d5
> {
>     "algorithm": "blake2b",
>     "block_size": 4194304,
>     "checksum":
> "a79a1efae73484e0218403e6eb715cdf109c8e99c2200265b779369339cf347b"
> }
>
> And a checksum of the downloaded image - they should match:
>
> $ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/checksum_image.py
> downlaod.qcow2
> {
>   "algorithm": "blake2b",
>   "block_size": 4194304,
>   "checksum": 
> "a79a1efae73484e0218403e6eb715cdf109c8e99c2200265b779369339cf347b"
> }
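Comparing the two JSON outputs by eye is error-prone; here is a small sketch
that extracts and compares the "checksum" fields, assuming each output was
saved to a file (the file names and helpers are hypothetical):

```shell
# Extract the "checksum" field from a saved JSON output.
get_checksum() {
    python3 -c 'import json,sys; print(json.load(open(sys.argv[1]))["checksum"])' "$1"
}

# Compare the remote and local checksum outputs, e.g. after:
#   checksum_disk.py -c engine <disk-id> > remote.json
#   checksum_image.py <image> > local.json
compare_checksums() {
    if [ "$(get_checksum "$1")" = "$(get_checksum "$2")" ]; then
        echo "checksums match"
    else
        echo "checksum MISMATCH" >&2
        return 1
    fi
}
```

A nonzero exit on mismatch makes this easy to drop into a test script.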
>
> Same upload to iscsi domain, using qcow2 format:
>
> $ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py
> -c engine --sd-name iscsi --disk-sparse --disk-format qcow2 test.qcow2
> [   0.0 ] Checking image...
> [   0.0 ] Image format: qcow2
> [   0.0 ] Disk format: cow
> [   0.0 ] Disk content type: data
> [   0.0 ] Disk provisioned size: 1073741824
> [   0.0 ] Disk initial size: 458752
> [   0.0 ] Disk name: test.qcow2
> [   0.0 ] Disk backup: False
> [   0.0 ] Connecting...
> [   0.0 ] Creating disk...
> [  27.8 ] Disk ID: e7ef253e-7baa-4d4a-a9b2-1a6b7db13f41
> [  27.8 ] Creating image transfer...
> [  30.0 ] Transfer ID: 88328857-ac99-4ee1-9618-6b3cd14a7db8
> [  30.0 ] Transfer host name: lago-basic-suite-master-host-0
> [  30.0 ] Uploading image...
> [ 100.00% ] 1.00 GiB, 0.31 seconds, 3.28 GiB/s
> [  30.3 ] Finalizing image transfer...
> [  35.4 ] Upload completed successfully
>
> Again, creating the disk is very slow; I'm not sure why. Probably having a
> storage server on a nested vm is not a good idea.
>
> We can compare the checksum with the source image, since checksums are
> computed from the guest content:
>
> [nsoffer@ost ~]$ python3
> /usr/share/doc/python3-ovirt-engine-sdk4/examples/checksum_disk.py -c
> engine e7ef253e-7baa-4d4a-a9b2-1a6b7db13f41
> {
>     "algorithm": "blake2b",
>     "block_size": 4194304,
>     "checksum":
> "a79a1efae73484e0218403e6eb715cdf109c8e99c2200265b779369339cf347b"
> }
>
> Finally, we can try real images using virt-builder:
>
> $ virt-builder fedora-32
>
> This will create a new Fedora 32 server image in the current directory.
> See --help for many useful options to create a different format, set a
> root password, or install packages.
>
> Uploading this image is much slower:
>
> $ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py
> -c engine --sd-name nfs --disk-sparse --disk-format raw fedora-32.img
> [   0.0 ] Checking image...
> [   0.0 ] Image format: raw
> [   0.0 ] Disk format: raw
> [   0.0 ] Disk content type: data
> [   0.0 ] Disk provisioned size: 6442450944
> [   0.0 ] Disk initial size: 6442450944
> [   0.0 ] Disk name: fedora-32.raw
> [   0.0 ] Disk backup: False
> [   0.0 ] Connecting...
> [   0.0 ] Creating disk...
> [  36.8 ] Disk ID: b17126f3-fa03-4c22-8f59-ef599b64a42e
> [  36.8 ] Creating image transfer...
> [  38.5 ] Transfer ID: fe82fb86-b87a-4e49-b9cd-f1f4334e7852
> [  38.5 ] Transfer host name: lago-basic-suite-master-host-0
> [  38.5 ] Uploading image...
> [ 100.00% ] 6.00 GiB, 99.71 seconds, 61.62 MiB/s
> [ 138.2 ] Finalizing image transfer...
> [ 147.8 ] Upload completed successfully
>
> At the current state of OST, we should avoid such long tests.

Did you check whether it's indeed plain IO that is slowing down disk
creation? Perhaps it's something else?

>
> Using backup_vm.py and other examples should work in the same way.
>
> I posted this patch to improve nfs performance, please review:
> https://gerrit.ovirt.org/c/112067/

Nice.

Thanks and best regards,
-- 
Didi
_______________________________________________
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/U3SMUBRO7MG4N6HPEE3WGNT7TIPUR2OQ/
