Hi Daniel and other friendly contributors,

I finally sorted out how to set provisioned_size/initial_size correctly in upload_disk.py, and my error is gone. It wasn't easy, but maybe I took an awkward route by starting with a preallocated qcow2 image. In this special case you have to set provisioned_size to the file's st_size, whereas with sparse images provisioned_size is the "virtual size" from "qemu-img info". This may seem obvious to others; I took the hard route.
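
For anyone hitting the same problem, here is a minimal sketch of how I
determine the two values now (the helper name and the preallocation check
are my own, not part of upload_disk.py):

import json
import os
import subprocess

def image_sizes(path):
    # Apparent file size, i.e. what 'ls -l' shows (st_size from stat(2)).
    st_size = os.stat(path).st_size
    # Virtual size and format as reported by 'qemu-img info'.
    info = json.loads(subprocess.check_output(
        ["qemu-img", "info", "--output=json", path]))
    virtual_size = info["virtual-size"]
    if info["format"] == "qcow2" and st_size >= virtual_size:
        # Preallocated qcow2: the file is already at least as large as
        # the virtual size, so provisioned_size must be st_size.
        return st_size, st_size
    # Sparse image: provisioned_size is the virtual size, initial_size
    # the apparent size of the file.
    return virtual_size, st_size

provisioned_size, initial_size = image_sizes("test.qcow2")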

My approach stems from my desire to reproduce the exact example in upload_disk.py (which uses a qcow image) and from my actual use case, which is uploading a rather large image converted from vmdk (so far I have only tested this with raw format), so I wanted some "really large" data to upload.
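
For reference, this is roughly where the two values end up in upload_disk.py
(a sketch of the corresponding ovirtsdk4 call; the disk name and storage
domain are placeholders):

import ovirtsdk4.types as types

# disks_service = connection.system_service().disks_service()
# (the connection is set up earlier in the script)
disk = disks_service.add(
    disk=types.Disk(
        name="uploaded_disk",
        format=types.DiskFormat.COW,
        # virtual size, or st_size for a preallocated qcow2 (see above)
        provisioned_size=provisioned_size,
        # apparent file size (st_size)
        initial_size=initial_size,
        storage_domains=[types.StorageDomain(name="mydata")],
    )
)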

@nsoffer:
I'll open a bug for better ovirt-imageio-daemon error messages as soon as I can.

thanks a lot for the help
matthias

On 2017-09-13 at 16:49, Daniel Erez wrote:
Hi Matthias,

The 403 response from the daemon means the ticket can't be authenticated
(for some reason). I assume the issue here is the initial size of the disk.
When uploading/downloading a qcow image, you should specify the apparent
size of the file (see 'st_size' in [1]). You can get it simply with 'ls -l' [2]
(which is a different value from the 'disk size' of qemu-img info [3]); see
also the short Python snippet after [3].
By the way, why are you creating a preallocated qcow disk? For what use case?

[1] https://linux.die.net/man/2/stat

[2] $ ls -l test.qcow2
-rw-r--r--. 1 user user 1074135040 Sep 13 16:50 test.qcow2

[3]
$ qemu-img create -f qcow2 -o preallocation=full test.qcow2 1g
$ qemu-img info test.qcow2
image: test.qcow2
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 1.0G
cluster_size: 65536
Format specific information:
     compat: 1.1
     lazy refcounts: false
     refcount bits: 16
     corrupt: false
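
The same value can also be read directly in Python (just an illustration,
not part of the SDK example):

import os
# Apparent size in bytes; matches the 'ls -l' output in [2].
print(os.stat("test.qcow2").st_size)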



On Wed, Sep 13, 2017 at 5:03 PM Matthias Leopold <matthias.leop...@meduniwien.ac.at> wrote:

    I tried it again twice:

    When using upload_disk.py from the oVirt engine host itself, the disk
    upload succeeds (despite a "503 Service Unavailable Completed 100%" in
    the script output at the end).

    Another try was from an ovirt-sdk installation on my Ubuntu desktop
    itself (yesterday I tried it from a CentOS VM on my desktop machine).
    This failed again, this time with "socket.error: [Errno 32] Broken pipe"
    after reaching "200 OK Completed 100%". In the imageio-proxy log I again
    have the 403 error at that moment.

    What's the difference between accessing the API from the engine host and
    from "outside" in this case?

    thx
    matthias

    On 2017-09-12 at 16:42, Matthias Leopold wrote:
     > Thanks, I tried this script and it _almost_ worked ;-)
     >
     > I uploaded two images I created with
     > qemu-img create -f qcow2 -o preallocation=full
     > and
     > qemu-img create -f qcow2 -o preallocation=falloc
     >
     > For initial_size and provisioned_size I took the value reported by
     > "qemu-img info" as "virtual size" (the same as "disk size" in this case).
     >
     > The upload goes to 100% and then fails with
     >
     > 200 OK Completed 100%
     > Traceback (most recent call last):
     >    File "./upload_disk.py", line 157, in <module>
     >      headers=upload_headers,
     >    File "/usr/lib64/python2.7/httplib.py", line 1017, in request
     >      self._send_request(method, url, body, headers)
     >    File "/usr/lib64/python2.7/httplib.py", line 1051, in
    _send_request
     >      self.endheaders(body)
     >    File "/usr/lib64/python2.7/httplib.py", line 1013, in endheaders
     >      self._send_output(message_body)
     >    File "/usr/lib64/python2.7/httplib.py", line 864, in _send_output
     >      self.send(msg)
     >    File "/usr/lib64/python2.7/httplib.py", line 840, in send
     >      self.sock.sendall(data)
     >    File "/usr/lib64/python2.7/ssl.py", line 746, in sendall
     >      v = self.send(data[count:])
     >    File "/usr/lib64/python2.7/ssl.py", line 712, in send
     >      v = self._sslobj.write(data)
     > socket.error: [Errno 104] Connection reset by peer
     >
     > In the web GUI the disk stays in status "Transferring via API";
     > it can only be removed after manually unlocking it (unlock_entity.sh).
     >
     > engine.log shows nothing interesting.
     >
     > I attached the last lines of ovirt-imageio-proxy/image-proxy.log and
     > ovirt-imageio-daemon/daemon.log (from the executing node).
     >
     > The HTTP status 403 in ovirt-imageio-daemon/daemon.log doesn't look
     > too nice to me.
     >
     > Can you explain what happens?
     >
     > oVirt engine is 4.1.5
     > oVirt node is 4.1.3 (is that a problem?)
     >
     > thx
     > matthias
     >
     >
     >
     > On 2017-09-12 at 13:15, Fred Rolland wrote:
     >> Hi,
     >>
     >> You can check this example:
     >>
     >> https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk.py
     >>
     >>
     >> Regards,
     >> Fred
     >>
     >> On Tue, Sep 12, 2017 at 11:49 AM, Matthias Leopold
     >> <matthias.leop...@meduniwien.ac.at> wrote:
     >>
     >>     Hi,
     >>
     >>     Is there a way to upload disk images (not OVF files, not ISO
     >>     files) to oVirt storage domains via CLI? I need to upload an
     >>     800GB file and this is not really comfortable via the browser.
     >>     I looked at ovirt-shell and
     >>
     >> https://www.ovirt.org/develop/release-management/features/storage/image-upload/
     >>
     >>     but I didn't find an option in either of them.
     >>
     >>     thx
     >>     matthias
     >>
     >>
     >>
     >



--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
