[ovirt-devel] Re: Hosts in Unassigned state after upgrading libvirt to 4.9

2018-11-26 Thread Nir Soffer
On Mon, Nov 26, 2018 at 10:15 PM Nir Soffer  wrote:

> I updated 2 Fedora 28 hosts today, getting new ovirt-master-release.rpm,
> which exposes new virt-preview repo providing libvirt 4.9 and qemu 3.1.
>
> After the update, connecting with engine master (built a few weeks ago) fails
> with:
>
> 2018-11-26 22:07:51,702+02 WARN
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesAsyncVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-94) [] Unexpected return
> value: Status [code=-32603, message=Internal JSON-RPC error: {'reason':
> "[Errno 2] No such file or directory: '/usr/share/libvirt/cpu_map.xml'"}]
>
> Looks like the contents of /usr/share/libvirt/ are different now:
>
> $ ls -1 /usr/share/libvirt/cpu_map/*.xml | head
> /usr/share/libvirt/cpu_map/index.xml
> /usr/share/libvirt/cpu_map/ppc64_POWER6.xml
> /usr/share/libvirt/cpu_map/ppc64_POWER7.xml
> /usr/share/libvirt/cpu_map/ppc64_POWER8.xml
> /usr/share/libvirt/cpu_map/ppc64_POWER9.xml
> /usr/share/libvirt/cpu_map/ppc64_POWERPC_e5500.xml
> /usr/share/libvirt/cpu_map/ppc64_POWERPC_e6500.xml
> /usr/share/libvirt/cpu_map/ppc64_vendors.xml
> /usr/share/libvirt/cpu_map/x86_486.xml
> /usr/share/libvirt/cpu_map/x86_athlon.xml
>

Looks like vdsm is not ready for this change:

$ git grep cpu_map.xml
lib/vdsm/machinetype.py:CPU_MAP_FILE = '/usr/share/libvirt/cpu_map.xml'
tests/Makefile.am:  cpu_map.xml \
tests/caps_test.py:'cpu_map.xml')
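For reference, libvirt 4.7 and later replaced the single cpu_map.xml with a
directory of per-model files tied together by index.xml, as the listing above
shows. A minimal sketch of how machinetype.py could cope with both layouts
(assuming index.xml wraps <include filename='...'/> entries per architecture,
as the new directory suggests; this is an illustration, not the actual vdsm
fix):

import os
import xml.etree.ElementTree as ET

CPU_MAP_FILE = '/usr/share/libvirt/cpu_map.xml'         # libvirt < 4.7
CPU_MAP_INDEX = '/usr/share/libvirt/cpu_map/index.xml'  # libvirt >= 4.7

def cpu_map_files():
    """Return the XML files making up the CPU map, old or new layout."""
    if os.path.exists(CPU_MAP_FILE):
        return [CPU_MAP_FILE]
    index = ET.parse(CPU_MAP_INDEX).getroot()
    base = os.path.dirname(CPU_MAP_INDEX)
    return [os.path.join(base, inc.get('filename'))
            for inc in index.iter('include')]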

[ovirt-devel] Hosts in Unassigned state after upgrading libvirt to 4.9

2018-11-26 Thread Nir Soffer
I updated 2 Fedora 28 hosts today, getting new ovirt-master-release.rpm,
which exposes new virt-preview repo providing libvirt 4.9 and qemu 3.1.

After the update, connecting with engine master (built a few weeks ago) fails
with:

2018-11-26 22:07:51,702+02 WARN
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesAsyncVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-94) [] Unexpected return
value: Status [code=-32603, message=Internal JSON-RPC error: {'reason':
"[Errno 2] No such file or directory: '/usr/share/libvirt/cpu_map.xml'"}]

Looks like the contents of /usr/share/libvirt/ are different now:

$ ls -1 /usr/share/libvirt/cpu_map/*.xml | head
/usr/share/libvirt/cpu_map/index.xml
/usr/share/libvirt/cpu_map/ppc64_POWER6.xml
/usr/share/libvirt/cpu_map/ppc64_POWER7.xml
/usr/share/libvirt/cpu_map/ppc64_POWER8.xml
/usr/share/libvirt/cpu_map/ppc64_POWER9.xml
/usr/share/libvirt/cpu_map/ppc64_POWERPC_e5500.xml
/usr/share/libvirt/cpu_map/ppc64_POWERPC_e6500.xml
/usr/share/libvirt/cpu_map/ppc64_vendors.xml
/usr/share/libvirt/cpu_map/x86_486.xml
/usr/share/libvirt/cpu_map/x86_athlon.xml

Do we have a fix for this?

Nir


[ovirt-devel] Re: oVirt impersonation and sessions

2018-11-26 Thread Anastasiya Ruzhanskaya
And are these session IDs, which are sent from clients to the engine, passed
any further? I was not successful in deciphering the packets on the
engine-vdsm channel, as I don't know the session key which Wireshark needs
(for the client-engine channel it was easier), so I am not sure what the RPC
fields are. For example, in libvirt itself there is no user information sent
in the RPC fields.

On Mon, Nov 26, 2018 at 15:55, Greg Sheremeta wrote:

>
> On Sun, Nov 25, 2018 at 10:24 PM Anastasiya Ruzhanskaya <
> anastasiya.ruzhansk...@frtk.ru> wrote:
>
>> Hello everyone!
>>
>> I wanted to find out how the impersonation technique used in oVirt works.
>> I know from libvirt developers that oVirt opens only one connection for
>> multiple clients. How does this work?
>>
>
> vdsm, on the hypervisor machine, funnels all the traffic from engine to
> libvirt. vdsm is therefore the only "client" of libvirt.
>
>
>>
>> Also, I found out in the source code that in the ActionParameterBase class
>> the sessionId field is marked transient; but, for example, for a GWT RPC
>> message which goes to the server and says what action will be performed
>> (shut down, pause a VM), this is the only field in all the sent information
>> which identifies the session. Where is the session sent instead? There was
>> also a field with a session ID in the HTTPS headers, but this was related
>> to a cookie, so I am not completely sure if this can help to identify the
>> current user.
>>
>
> Yes, that's it. From the headers view in Chrome, on the GWT RPC messages:
> Cookie: JSESSIONID=VdzARh0xFJ8sVZXgG96dF_123cBUpQNfC3Kdz6e0.hostedengine
>
>
>
>
> --
>
> GREG SHEREMETA
>
> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>
> Red Hat NA
>
> 
>
> gsher...@redhat.com   IRC: gshereme
> 
>


[ovirt-devel] Re: [EXTERNAL] Re: mismatch in disk size while uploading a disk in chunks using Image Transfer

2018-11-26 Thread Ranjit DSouza
Nir

I think you were spot on with the Content-Range header not getting sent to the
RHV server. Good catch!

OK, so that problem was in our HTTP client code, where we were not setting this
header in libcurl. Now that we have moved forward, we are seeing that the
restored disk's actual_size is 4k over the provisioned size.

{
  "actual_size" : "3221229568", //this is 3GB + 4k
  "alias" : "vmRestoreDisk",
  "content_type" : "data",
  "format" : "raw",
  "image_id" : "b69363da-620e-4f55-a3c7-1481e85c4164",
  "propagate_errors" : "false",
  "provisioned_size" : "3221225472",
  "shareable" : "false",
  "sparse" : "true",
  "status" : "ok",
  "storage_type" : "image",
  "total_size" : "0",
  "wipe_after_delete" : "false",
  "disk_profile" : {
"href" : 
"/ovirt-engine/api/diskprofiles/555ef5b2-807e-4f21-9a32-0494686515e4",
"id" : "555ef5b2-807e-4f21-9a32-0494686515e4"
  },

I was expecting it to be 1 GB, as the original disk was. But I am able to boot
the VM, log in, and look at the directories (earlier I was getting an error
when I opened the console saying it was not a bootable disk).

{
  "actual_size" : "1389109248",
  "alias" : "3gbdisk",
  "content_type" : "data",
  "format" : "raw",
  "image_id" : "8fbac55e-0c86-4c0b-911b-f5b0a6722834",
  "propagate_errors" : "false",
  "provisioned_size" : "3221225472",
  "shareable" : "false",
  "sparse" : "true",
  "status" : "ok",


I am going through the /var/log/ovirt-imageio-daemon logs to check for any
clues. In the meantime, do let us know your thoughts on why this may have
happened.
(We are taking your performance-related comments seriously and will work on
them once we are done with this.)

Thanks
Ranjit

From: Nir Soffer [mailto:nsof...@redhat.com]
Sent: Saturday, November 24, 2018 12:17 AM
To: Ranjit DSouza 
Cc: devel ; Pavan Chavva ; Suchitra 
Herwadkar ; Abhay Marode 

Subject: [EXTERNAL] Re: mismatch in disk size while uploading a disk in chunks 
using Image Transfer

On Fri, Nov 23, 2018 at 2:49 PM Ranjit DSouza 
mailto:ranjit.dso...@veritas.com>> wrote:
...
I am trying to upload a snapshot disk in chunks. Everything seems to work fine,
but we observed that the actual_size after upload is much less than the
actual_size of the original disk.

Here are the steps:

1.   Take a snapshot of a VM disk and download it (using the Image Transfer
mechanism). Save it on the file system somewhere. This disk's name is 3gbdisk.
It is raw + sparse and resides on NFS storage. The size of this downloaded
file is 3 GB.

  "actual_size" : "1389109248", //1 GB

This is the allocated size (what du -sh filename will show).

But in 4.2 we do not yet support detection of zero or unallocated areas in the
image, so you always download the complete image. Zero or unallocated areas
are downloaded as zeros.
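For what it's worth, the allocated areas of a local image can be inspected
with qemu-img map - a local tool on the host, not part of the 4.2 transfer
API. A minimal sketch, with the image path as a placeholder:

import json
import subprocess

out = subprocess.check_output(
    ['qemu-img', 'map', '--output=json', '-f', 'raw', 'disk.img'])
for extent in json.loads(out):
    # each extent reports whether its bytes are actually allocated ("data")
    print(extent['start'], extent['length'], extent['data'])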

...
2.   Now create a new floating disk (raw + sparse) with provisioned_size =
3221225472, or 3 GB. This disk's name is vmRestoreDisk.

3.   Upload to this disk using the Image Transfer API, using libcurl in chunks
of 128 MB. This is done in a while loop, sequentially reading portions of the
file downloaded in step 1 and uploading these chunks via libcurl. I use the
transfer URL, not the proxy URL.

Here is the trace of the first chunk. Note the Content-Range and Content-Length 
headers. Start offset = 0, end offset = 134217727 (or 128 MB)

upload request for chunk, start offset: 0, end offset: 134217727
Upload Started
Header:Content-Range: bytes 0-134217727/3221225472

The Content-Range header looks correct...

Header:Content-Length: 3221225472
*   Trying 10.210.46.215...
* TCP_NODELAY set
* Connected to pnm86hpch30bl15.pne.ven.veritas.com (10.210.46.215) port 54322 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
*  subject: O=pne.ven.veritas.com; CN=pnm86hpch30bl15.pne.ven.veritas.com
*  start date: Oct  7 08:55:24 2018 GMT
*  expire date: Oct  7 08:55:24 2023 GMT
*  issuer: C=US; O=pne.ven.veritas.com; CN=pravauto20.pne.ven.veritas.com.59289
*  SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
> PUT /images/8ebc9fa8-d322-423e-8a14-5e46ca10ed4e HTTP/1.1
Host: pnm86hpch30bl15.pne.ven.veritas.com:54322
Accept: */*
Content-Length: 134217728
Expect: 100-continue

But you did not send the Content-Range header for this request...


* Done waiting for 100-continue
* We are completely uploaded and fine
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK

The request was successful, writing the first 128 MiB...

< Date: Fri, 23 Nov 2018 11:52:53 GMT
< Server: WSGIServer/0.1 Python/2.7.5
< Content-Type: application/json; charset=UTF-8
< Content-Length: 0
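For comparison, a minimal sketch of a chunked upload that sends Content-Range
and a per-chunk Content-Length on every request - in Python rather than
libcurl, with the imageio URL and CA file as placeholders:

import os
import requests

URL = 'https://host.example.com:54322/images/TICKET-UUID'  # placeholder
CHUNK = 128 * 1024 * 1024

def upload(path):
    total = os.path.getsize(path)
    with open(path, 'rb') as f:
        for start in range(0, total, CHUNK):
            data = f.read(CHUNK)
            end = start + len(data) - 1
            headers = {'Content-Range': 'bytes %d-%d/%d' % (start, end, total)}
            # requests sets Content-Length to len(data) automatically
            resp = requests.put(URL, data=data, headers=headers,
                                verify='ca.pem')  # engine CA, placeholder
            resp.raise_for_status()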

[ovirt-devel] Re: [EXTERNAL] Re: mismatch in disk size while uploading a disk in chunks using Image Transfer

2018-11-26 Thread Nir Soffer
On Mon, Nov 26, 2018 at 4:00 PM Ranjit DSouza 
wrote:

> Nir
>
>
>
> I think you were spot on with the Content-Range header not getting sent to
> the RHV server. Good catch!
>
>
>
> OK, so that problem was in our HTTP client code, where we were not setting
> this header in libcurl. Now that we have moved forward, we are seeing that
> the restored disk's actual_size is 4k over the provisioned size.
>

actual_size is the allocated size on storage - basically:

st_blocks * 512

We know that creating a fully allocated disk sometimes shows st_size + 4k. I
don't know why this happens, but it does not change anything for the guest or
for oVirt.

The important check is that the upload has exactly st_size bytes - the same as
the uploaded file - and that both contain the same content.
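A quick way to run both checks from the host - a sketch, with both paths as
placeholders:

import hashlib
import os

def describe(path):
    st = os.stat(path)
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for block in iter(lambda: f.read(1024 * 1024), b''):
            h.update(block)
    # apparent size, allocated size, content digest
    return st.st_size, st.st_blocks * 512, h.hexdigest()

print(describe('downloaded.img'))   # placeholder path
print(describe('uploaded-volume'))  # placeholder path

st_size and the digest should match; the allocated size (st_blocks * 512) may
legitimately differ, as described above.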

Nir


>
>
> {
>
>   "actual_size" : "3221229568", //this is 3GB + 4k
>
>   "alias" : "vmRestoreDisk",
>
>   "content_type" : "data",
>
>   "format" : "raw",
>
>   "image_id" : "b69363da-620e-4f55-a3c7-1481e85c4164",
>
>   "propagate_errors" : "false",
>
>   "provisioned_size" : "3221225472",
>
>   "shareable" : "false",
>
>   "sparse" : "true",
>
>   "status" : "ok",
>
>   "storage_type" : "image",
>
>   "total_size" : "0",
>
>   "wipe_after_delete" : "false",
>
>   "disk_profile" : {
>
> "href" :
> "/ovirt-engine/api/diskprofiles/555ef5b2-807e-4f21-9a32-0494686515e4",
>
> "id" : "555ef5b2-807e-4f21-9a32-0494686515e4"
>
>   },
>
>
>
> I was expecting it to be 1 GB, as the original disk was. But I am able to
> boot the VM, log in, and look at the directories (earlier I was getting
> an error when I opened the console saying it was not a bootable disk).
>
>
>
> {
>
>   "actual_size" : "1389109248",
>
>   "alias" : "3gbdisk",
>
>   "content_type" : "data",
>
>   "format" : "raw",
>
>   "image_id" : "8fbac55e-0c86-4c0b-911b-f5b0a6722834",
>
>   "propagate_errors" : "false",
>
>   "provisioned_size" : "3221225472",
>
>   "shareable" : "false",
>
>   "sparse" : "true",
>
>   "status" : "ok",
>
>
>
>
>
> I am going through the /var/log/ovirt-imageio-daemon logs to check for any
> clues. In the meantime, do let us know your thoughts on why this may have
> happened.
>
> (We are taking your performance-related comments seriously and will work
> on them once we are done with this.)
>
>
>
> Thanks
>
> Ranjit
>
>
>
> *From:* Nir Soffer [mailto:nsof...@redhat.com]
> *Sent:* Saturday, November 24, 2018 12:17 AM
> *To:* Ranjit DSouza 
> *Cc:* devel ; Pavan Chavva ;
> Suchitra Herwadkar ; Abhay Marode <
> abhay.mar...@veritas.com>
> *Subject:* [EXTERNAL] Re: mismatch in disk size while uploading a disk in
> chunks using Image Transfer
>
>
>
> On Fri, Nov 23, 2018 at 2:49 PM Ranjit DSouza 
> wrote:
>
> ...
>
> I am trying to upload a snapshot disk in chunks. Everything seems to work
> fine, but we observed that the actual_size after upload is much less than
> the actual_size of the original disk.
>
>
>
> Here are the steps:
>
> 1.   Take a snapshot of a VM disk and download it (using the Image
> Transfer mechanism). Save it on the file system somewhere. This disk's name
> is *3gbdisk*. It is raw + sparse and resides on NFS storage. The size of
> this downloaded file is 3 GB.
>
>
>
>   "actual_size" : "*1389109248*", //1 GB
>
>
>
> This is the allocated size (what du -sh filename will show).
>
>
>
> But in 4.2 we do not yet support detection of zero or unallocated areas in
> the image, so you always download the complete image. Zero or unallocated
> areas are downloaded as zeros.
>
>
>
> ...
>
>  2.   Now create a new floating disk (raw + sparse) with
> provisioned_size = 3221225472, or 3 GB. This disk's name is vmRestoreDisk
>
> 3.   Upload to this disk using the Image Transfer API, using libcurl in
> chunks of 128 MB. This is done in a while loop, sequentially reading
> portions of the file downloaded in step 1 and uploading these chunks via
> libcurl. I use the transfer URL, not the proxy URL.
>
>
>
> Here is the trace of the first chunk. Note the Content-Range and
> Content-Length headers. Start offset = 0, end offset = 134217727 (or 128 MB)
>
>
>
> upload request for chunk, start offset: 0, end offset: 134217727
>
> Upload Started
>
> Header:Content-Range: bytes 0-134217727/3221225472
>
>
>
> The Content-Range header looks correct...
>
>
>
> Header:Content-Length: 3221225472
>
> *   Trying 10.210.46.215...
>
> * TCP_NODELAY set
>
> * Connected to pnm86hpch30bl15.pne.ven.veritas.com (10.210.46.215) port
> 54322 (#0)
>
> * ALPN, offering http/1.1
>
> * Cipher selection:
> ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
>
> * SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
>
> * ALPN, server did not agree to a protocol
>
> * Server certificate:
>
> *  subject: O=pne.ven.veritas.com; CN=pnm86hpch30bl15.pne.ven.veritas.com
>
> *  start date: Oct  7 08:55:24 2018 GMT
>
> *  expire date: Oct  7 08:55:24 2023 GMT
>
> *  issuer: C=US; O=pne.ven.veritas.com;
> CN=pravauto20.pne.ven.veritas.com.59289
>
> *  SSL certificate verify 

[ovirt-devel] Build fail with "# /usr/sbin/groupadd -g 1000 mock" - no test was run

2018-11-26 Thread Nir Soffer
I have seen this issue several times in recent days.

Here are some examples:
-
https://jenkins.ovirt.org/job/vdsm_master_check-patch-fc28-x86_64/2269/console
-
https://jenkins.ovirt.org/job/vdsm_master_check-patch-el7-x86_64/26243/console
- https://jenkins.ovirt.org/job/vdsm_standard-check-patch/91/console
-
https://jenkins.ovirt.org/job/vdsm_master_check-patch-el7-x86_64/26187/console

There are two issues:
- No test ran, so we need to retrigger manually
- Gerrit reports build FAILURE instead of build ERROR

I have reported the second issue many times over the last 5 years; no progress yet.

Nir


[ovirt-devel] Re: splitting tests for stdci v2

2018-11-26 Thread Barak Korren
On Mon, 26 Nov 2018 at 15:10, Nir Soffer  wrote:

> On Mon, Nov 26, 2018 at 3:00 PM Marcin Sobczyk 
> wrote:
>
>> Hi,
>>
>> I'm currently working on parallelizing our stdci v2.
>>
>> I've already extracted the 'linters' stage; more patches (and more
>> substages) are on the way.
>>
>> This part, i.e.:
>>
>> if git diff-tree --no-commit-id --name-only -r HEAD | egrep --quiet
>> 'vdsm.spec.in|Makefile.am|automation' ; then
>>  ./automation/build-artifacts.sh
>> ...
>>
>>
Please note that checks like this (about which files were changed by the
patch) can be done via the STDCI V2 'runif' option. So you no longer need
to write scripts like this.


> seems to be an excellent candidate for extraction to a separate substage.
>>
>> The question is - how should we proceed with tests? I can create
>> substage for each of:
>>
>> tox -e "tests,{storage,lib,network,virt}"
>>
>> But the original 'check-patch' combined the coverage reports into one -
>> we would lose that.
>>
>
> My long-term goal is to get rid of all the ugly bash code in the makefile
> and run everything via tox, but as a first step I think we can split the
> work by running:
>
> make tests
>
> in the "tests" substage, instead of "make check" today.
>
> Does it change anything about coverage?
>
> Theoretically we could also split into storage/network/virt/infra jobs, but
> I think this will consume too many resources and harm other projects
> sharing the slaves.
>
>
>> There is a possibility that we could work on something that gathers
>> coverage data from multiple sources (tests, OST) as a completely
>> separate Jenkins job or something, but that will be a bigger effort.
>> What do you think about it?
>>
>> Marcin
>>
>>
>>
>>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted


[ovirt-devel] Re: splitting tests for stdci v2

2018-11-26 Thread Nir Soffer
On Mon, Nov 26, 2018 at 3:00 PM Marcin Sobczyk  wrote:

> Hi,
>
> I'm currently working on parallelizing our stdci v2.
>
> I've already extracted the 'linters' stage; more patches (and more
> substages) are on the way.
>
> This part, i.e.:
>
> if git diff-tree --no-commit-id --name-only -r HEAD | egrep --quiet
> 'vdsm.spec.in|Makefile.am|automation' ; then
>  ./automation/build-artifacts.sh
> ...
>
> seems to be an excellent candidate for extraction to a separate substage.
>
> The question is - how should we proceed with tests? I can create
> substage for each of:
>
> tox -e "tests,{storage,lib,network,virt}"
>
> But the original 'check-patch' combined the coverage reports into one -
> we would lose that.
>

My long-term goal is to get rid of all the ugly bash code in the makefile and
run everything via tox, but as a first step I think we can split the work by
running:

make tests

in the "tests" substage, instead of "make check" today.

Does it change anything about coverage?

Theoretically we could also split into storage/network/virt/infra jobs, but I
think this will consume too many resources and harm other projects sharing
the slaves.


> There is a possibility that we could work on something that gathers
> coverage data from multiple sources (tests, OST) as a completely
> separate Jenkins job or something, but that will be a bigger effort.
> What do you think about it?
>
> Marcin
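On the combined coverage question above: the per-substage data files could be
collected and merged in a follow-up step. A minimal sketch using the
coverage.py API, assuming each substage exports its .coverage.* file into a
shared artifacts directory (the path is a placeholder):

import coverage

cov = coverage.Coverage()
# combine() merges the .coverage.* data files found in the given directories
cov.combine(['exported-artifacts/coverage'])
cov.save()
cov.report()
cov.html_report(directory='htmlcov')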


[ovirt-devel] Re: UI test failing on a vdsm change?

2018-11-26 Thread Greg Sheremeta
On Sat, Nov 24, 2018 at 10:20 AM Barak Korren  wrote:

>
>
> On Sat, Nov 24, 2018 at 17:05, Greg Sheremeta wrote:
>
>>
>>
>> On Sat, Nov 24, 2018 at 9:49 AM Dan Kenigsberg  wrote:
>>
>>>
>>>
>>> On Sat, 24 Nov 2018, 13:50 Greg Sheremeta wrote:
 Correct, that vdsm patch is unrelated.

 The docker-based selenium testing infrastructure did not initialize
 correctly. Firefox started but chrome did not download correctly.
  [
 https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/11365/testReport/junit/(root)/008_basic_ui_sanity/running_tests___basic_suite_el7_x86_64___start_grid/
 ]

 Unable to find image 'selenium/node-chrome-debug:3.9.1-actinium' locally
 Trying to pull repository docker.io/selenium/node-chrome-debug ...
 3.9.1-actinium: Pulling from docker.io/selenium/node-chrome-debug
 1be7f2b886e8: Already exists
 6fbc4a21b806: Already exists
 c71a6f8e1378: Already exists
 4be3072e5a37: Already exists
 06c6d2f59700: Already exists
 edcd5e9f2f91: Already exists
 0eeaf787f757: Already exists
 c949dee5af7e: Already exists
 df88a49b4162: Already exists
 ce3c6f42fd24: Already exists
 6d845a39af3f: Pulling fs layer
 11d16a965e13: Pulling fs layer
 1294e9b42691: Pulling fs layer
 04b0c053828d: Pulling fs layer
 cf044f1d0e2a: Pulling fs layer
 8f84ccb3a86a: Pulling fs layer
 be9a1d0955bd: Pulling fs layer
 872e5c8a3ad8: Pulling fs layer
 07efee6f27e7: Pulling fs layer
 5c6207de8f09: Pulling fs layer
 b932cacc6ddb: Pulling fs layer
 c057ca8f4e65: Pulling fs layer
 bbe16010d6ab: Pulling fs layer
 645ca3607a4c: Pulling fs layer
 cf044f1d0e2a: Waiting
 04b0c053828d: Waiting
 8f84ccb3a86a: Waiting
 be9a1d0955bd: Waiting
 c057ca8f4e65: Waiting
 5c6207de8f09: Waiting
 b932cacc6ddb: Waiting
 bbe16010d6ab: Waiting
 645ca3607a4c: Waiting
 07efee6f27e7: Waiting
 872e5c8a3ad8: Waiting
 /usr/bin/docker-current: error pulling image configuration: unknown blob.
 See '/usr/bin/docker-current run --help'.


 checking chrome node
 executing shell: curl http://:/wd/hub/static/resource/hub.html   <--- that URL won't work :)

   % Total% Received % Xferd  Average Speed   TimeTime Time
 Current
  Dload  Upload   Total   SpentLeft
 Speed

   0 00 00 0  0  0 --:--:-- --:--:--
 --:--:-- 0
 curl: (6) Could not resolve host: ; Unknown error

 checking firefox node
 executing shell: curl
 http://172.18.0.3:/wd/hub/static/resource/hub.html
 
 WebDriver Hub


 This is the first time I've seen something like this with this test.
 Did it happen only the one time?

>>>
>>> I have no idea. I did not even know that such a test existed.
>>>
>>
>> Yep, we make sure the UI loads, user can login and navigate, etc. -- all
>> automated.
>>
>>
>>>
>>> ovirt CI tries to cache yum repos it pulls from. Do you know if it does
>>> so with docker repos?
>>>
>>
>> I don't know. The selenium ones are standard from dockerhub [
>> https://hub.docker.com/u/selenium/]
>>
>
> We do not have the same elaborate caching that we have for RPMs for
> containers, but we can cache containers as long as they are explicitly
> whitelisted.
>
> Please create a ticket so we'll do that for the selenium containers, if it
> makes sense.
>

They should be cached, indeed. Will do (with newer versions of chrome and
ff)
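Incidentally, the empty host and port in the curl above suggest the readiness
check could fail fast when its variables are unset. A sketch, where HUB_HOST
and HUB_PORT are made-up names for whatever the job exports (the
/wd/hub/status endpoint is standard for a Selenium 3 grid):

import os
import time
import urllib.request

host = os.environ.get('HUB_HOST')
port = os.environ.get('HUB_PORT')
if not host or not port:
    raise SystemExit('grid address not set - did the containers start?')

url = 'http://%s:%s/wd/hub/status' % (host, port)
for _ in range(30):
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            if resp.getcode() == 200:
                break
    except OSError:
        time.sleep(2)
else:
    raise SystemExit('selenium grid never became ready: %s' % url)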


>
>
>>
>>>
>>>

 On Sat, Nov 24, 2018 at 2:35 AM Dan Kenigsberg 
 wrote:

> I just noticed that a vdsm change to gluster tests
> https://gerrit.ovirt.org/#/c/95596/ failed in the change queue, on
>
> WebDriverException in _init_browser connecting to hub
>
>
> https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/11365/testReport/junit/(root)/008_basic_ui_sanity/running_tests___basic_suite_el7_x86_64___initialize_chrome/
>
> The failure is clearly unrelated to the patch; maybe one of you can
> explain why the test fails?
>


 --

 GREG SHEREMETA

 SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX

 Red Hat NA

 

 gsher...@redhat.com   IRC: gshereme
 

>>>
>>
>> --
>>
>> GREG SHEREMETA
>>
>> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>>
>> Red Hat NA
>>
>> 
>>
>> gsher...@redhat.com   IRC: gshereme
>> 

[ovirt-devel] Re: Vdsm: Failing test_no_match

2018-11-26 Thread Michal Skrivanek


> On 26 Nov 2018, at 13:24, Milan Zamazal  wrote:
> 
> Nir Soffer  writes:
> 
>> On Mon, Nov 26, 2018 at 12:27 PM Milan Zamazal  wrote:
>> 
>>> Nir Soffer  writes:
>>> 
 On Thu, Nov 22, 2018 at 2:08 PM Milan Zamazal 
>>> wrote:
 
> Nir Soffer  writes:
> 
>> On Wed, Nov 21, 2018, 17:46 Milan Zamazal > 
>>> Hi, test_no_match fails on CI most of the time (but not always) in
>>> https://gerrit.ovirt.org/95518:
>>> 
>>>  _ test_no_match[qcow2]
>>> _
>>>  11:30:16
>>>  11:30:16 tmpdir = local('/var/tmp/vdsm/test_no_match_qcow2_0'),
>>> image_format = 'qcow2'
>>>  11:30:16
>>>  11:30:16 def test_no_match(tmpdir, image_format):
>>>  11:30:16 path = str(tmpdir.join('test.' + image_format))
>>>  11:30:16 op = qemuimg.create(path, '1m', image_format)
>>>  11:30:16 op.run()
>>>  11:30:16 qemuio.write_pattern(path, image_format,
>>> pattern=2)
>>>  11:30:16 with pytest.raises(qemuio.VerificationError):
>>>  11:30:16 >   qemuio.verify_pattern(path, image_format,
> pattern=4)
>>>  11:30:16
>>>  11:30:16 storage/qemuio_test.py:59:
>>>  11:30:16 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
>>> _
> _ _
>>> _ _ _ _ _ _ _ _
>>>  11:30:16
>>>  11:30:16 path = '/var/tmp/vdsm/test_no_match_qcow2_0/test.qcow2',
> format
>>> = 'qcow2'
>>>  11:30:16 offset = 512, len = 1024, pattern = 4
>>>  11:30:16
>>>  11:30:16 def verify_pattern(path, format, offset=512, len=1024,
>>> pattern=5):
>>>  11:30:16 read_cmd = 'read -P %d -s 0 -l %d %d %d' %
>>> (pattern,
>>> len, offset, len)
>>>  11:30:16 cmd = ['qemu-io', '-f', format, '-c', read_cmd,
>>> path]
>>>  11:30:16 rc, out, err = commands.execCmd(cmd, raw=True)
>>>  11:30:16 if rc != 0 or err != b"":
>>>  11:30:16 >   raise cmdutils.Error(cmd, rc, out, err)
>>>  11:30:16 E   Error: Command ['qemu-io', '-f', 'qcow2',
>>> '-c',
>>> 'read -P 4 -s 0 -l 1024 512 1024',
>>> '/var/tmp/vdsm/test_no_match_qcow2_0/test.qcow2'] failed with rc=1
>>> out='Pattern verification failed at offset 512, 1024 bytes\nread
> 1024/1024
>>> bytes at offset 512\n1 KiB, 1 ops; 0.0002 sec (3.756 MiB/sec and
> 3846.1538
>>> ops/sec)\n' err=''
>>>  11:30:16
>>>  11:30:16 storage/qemuio.py:50: Error
>>> 
>>> (Similarly for raw.)
>>> 
>>> You can see the complete test run log here (or in other CI runs of
>>> the
>>> patch):
>>> 
>>> 
> 
>>> https://jenkins.ovirt.org/job/vdsm_master_check-patch-fc28-x86_64/2040/consoleFull
>>> 
>>> It fails on both Fedora and CentOS.  It may or may not be related to
>>> the
>>> fact that QEMU 2.11 is used in the failed runs.
>>> 
>>> Any idea what could be wrong?
>> 
>> 
>> Yes. Qemu-io was fixed lately to fail when pattern does not match, but
> our
>> wrapper still expects the old behaviour (return 0, log warning).
> 
> I see, thank you for the explanation.
> 
>> Are you sure you run 2.11 and not 2.12?
> 
> Actually not.  Looking into the CI log once more, I can see it reports the
> initially installed QEMU version before the additional repos are added.
> There are no reports on QEMU versions or upgrades afterwards, but that
> may just be an automation script staying silent.  Since a new QEMU version
> would be expected with the added repos and would explain the test
> failure, let's assume it's indeed a newer QEMU.
> 
>> We will fix this soon.
> 
> OK, thank you.  We can disable the test temporarily in our patches
> updating repos & requirements and re-enable it before merge or later,
> depending on availability of your fix.
> 
 
 I cannot reproduce the error on Fedora 28
 (qemu-img-2.12.0-0.5.rc1.fc28.x86_64)
>>>^^
>>> This looks suspicious.  Where do you get it from?  Perhaps from the old
>>> virt-preview repo?  Please note there is a copr virt-preview repo now,
>>> see https://fedoraproject.org/wiki/Virtualization_Preview_Repository and
>>> https://copr.fedorainfracloud.org/coprs/g/virtmaint-sig/virt-preview/,
>>> which should be up-to-date and contain QEMU 3.1.
>>> 
>> 
>> I'm using the virt-preview repos enabled by this:
>> http://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm
>> 
>> If the repo was changed, updates to this rpm should have modified
>> my system to include the new repo.
> 
> I see.  Sandro and I have already updated ovirt-release last week and I
> can see the repo in
> https://resources.ovirt.org/pub/ovirt-master-snapshot/rpm/fc28/noarch/ovirt-release-master-4.3.0-0.1.master.20181125005806.git11c72da.fc28.noarch.rpm.
> Do you have the latest ovirt-release-master.rpm?
> 
>> Sandro, do we need to change 

[ovirt-devel] Re: oVirt impersonation and sessions

2018-11-26 Thread Greg Sheremeta
On Sun, Nov 25, 2018 at 10:24 PM Anastasiya Ruzhanskaya <
anastasiya.ruzhansk...@frtk.ru> wrote:

> Hello everyone!
>
> I wanted to find out how the impersonation technique used in oVirt works.
> I know from libvirt developers that oVirt opens only one connection for
> multiple clients. How does this work?
>

vdsm, on the hypervisor machine, funnels all the traffic from engine to
libvirt. vdsm is therefore the only "client" of libvirt.


>
> Also, I found out in the source code that in the ActionParameterBase class
> the sessionId field is marked transient; but, for example, for a GWT RPC
> message which goes to the server and says what action will be performed
> (shut down, pause a VM), this is the only field in all the sent information
> which identifies the session. Where is the session sent instead? There was
> also a field with a session ID in the HTTPS headers, but this was related
> to a cookie, so I am not completely sure if this can help to identify the
> current user.
>

Yes, that's it. From the headers view in Chrome, on the GWT RPC messages:
Cookie: JSESSIONID=VdzARh0xFJ8sVZXgG96dF_123cBUpQNfC3Kdz6e0.hostedengine
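To see this from the client side, a minimal sketch - engine URL, CA file, and
credentials are placeholders, and whether this particular endpoint hands back
a session cookie depends on configuration:

import requests

s = requests.Session()
s.verify = 'ca.pem'  # engine CA certificate, placeholder
resp = s.get('https://engine.example.com/ovirt-engine/api',
             auth=('admin@internal', 'password'))
resp.raise_for_status()
print(s.cookies.get('JSESSIONID'))  # the session identifier, if set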




-- 

GREG SHEREMETA

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX

Red Hat NA



gsher...@redhat.com   IRC: gshereme



[ovirt-devel] Re: Vdsm: Failing test_no_match

2018-11-26 Thread Nir Soffer
On Mon, Nov 26, 2018 at 2:25 PM Milan Zamazal  wrote:

> Nir Soffer  writes:
>
> > On Mon, Nov 26, 2018 at 12:27 PM Milan Zamazal 
> wrote:
> >
> >> Nir Soffer  writes:
> >>
> >> > On Thu, Nov 22, 2018 at 2:08 PM Milan Zamazal 
> >> wrote:
> >> >
> >> >> Nir Soffer  writes:
> >> >>
> >> >> > On Wed, Nov 21, 2018, 17:46 Milan Zamazal  wrote:
> >> >> >
> >> >> >> Hi, test_no_match fails on CI most of the time (but not always) in
> >> >> >> https://gerrit.ovirt.org/95518:
> >> >> >>
> >> >> >>   _ test_no_match[qcow2]
> >> >> >> _
> >> >> >>   11:30:16
> >> >> >>   11:30:16 tmpdir = local('/var/tmp/vdsm/test_no_match_qcow2_0'),
> >> >> >> image_format = 'qcow2'
> >> >> >>   11:30:16
> >> >> >>   11:30:16 def test_no_match(tmpdir, image_format):
> >> >> >>   11:30:16 path = str(tmpdir.join('test.' + image_format))
> >> >> >>   11:30:16 op = qemuimg.create(path, '1m', image_format)
> >> >> >>   11:30:16 op.run()
> >> >> >>   11:30:16 qemuio.write_pattern(path, image_format,
> >> pattern=2)
> >> >> >>   11:30:16 with pytest.raises(qemuio.VerificationError):
> >> >> >>   11:30:16 >   qemuio.verify_pattern(path, image_format,
> >> >> pattern=4)
> >> >> >>   11:30:16
> >> >> >>   11:30:16 storage/qemuio_test.py:59:
> >> >> >>   11:30:16 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> _ _
> >> _
> >> >> _ _
> >> >> >> _ _ _ _ _ _ _ _
> >> >> >>   11:30:16
> >> >> >>   11:30:16 path =
> '/var/tmp/vdsm/test_no_match_qcow2_0/test.qcow2',
> >> >> format
> >> >> >> = 'qcow2'
> >> >> >>   11:30:16 offset = 512, len = 1024, pattern = 4
> >> >> >>   11:30:16
> >> >> >>   11:30:16 def verify_pattern(path, format, offset=512,
> len=1024,
> >> >> >> pattern=5):
> >> >> >>   11:30:16 read_cmd = 'read -P %d -s 0 -l %d %d %d' %
> >> (pattern,
> >> >> >> len, offset, len)
> >> >> >>   11:30:16 cmd = ['qemu-io', '-f', format, '-c', read_cmd,
> >> path]
> >> >> >>   11:30:16 rc, out, err = commands.execCmd(cmd, raw=True)
> >> >> >>   11:30:16 if rc != 0 or err != b"":
> >> >> >>   11:30:16 >   raise cmdutils.Error(cmd, rc, out, err)
> >> >> >>   11:30:16 E   Error: Command ['qemu-io', '-f', 'qcow2',
> >> '-c',
> >> >> >> 'read -P 4 -s 0 -l 1024 512 1024',
> >> >> >> '/var/tmp/vdsm/test_no_match_qcow2_0/test.qcow2'] failed with rc=1
> >> >> >> out='Pattern verification failed at offset 512, 1024 bytes\nread
> >> >> 1024/1024
> >> >> >> bytes at offset 512\n1 KiB, 1 ops; 0.0002 sec (3.756 MiB/sec and
> >> >> 3846.1538
> >> >> >> ops/sec)\n' err=''
> >> >> >>   11:30:16
> >> >> >>   11:30:16 storage/qemuio.py:50: Error
> >> >> >>
> >> >> >> (Similarly for raw.)
> >> >> >>
> >> >> >> You can see the complete test run log here (or in other CI runs of
> >> the
> >> >> >> patch):
> >> >> >>
> >> >> >>
> >> >>
> >>
> https://jenkins.ovirt.org/job/vdsm_master_check-patch-fc28-x86_64/2040/consoleFull
> >> >> >>
> >> >> >> It fails on both Fedora and CentOS.  It may or may not be related
> to
> >> the
> >> >> >> fact that QEMU 2.11 is used in the failed runs.
> >> >> >>
> >> >> >> Any idea what could be wrong?
> >> >> >
> >> >> >
> >> >> > Yes. Qemu-io was fixed lately to fail when pattern does not match,
> but
> >> >> our
> >> >> > wrapper still expects the old behaviour (return 0, log warning).
> >> >>
> >> >> I see, thank you for the explanation.
> >> >>
> >> >> > Are you sure you run 2.11 and not 2.12?
> >> >>
> >> >> Actually not.  Looking into the CI log once more, I can see it
> reports
> >> >> initially installed QEMU version before additional repos are added.
> >> >> There are no reports on QEMU versions or upgrades afterwards but that
> >> >> may be just silence of some automation script.  Since new QEMU
> version
> >> >> would be expected with the added repos and it would explain the test
> >> >> failure, let's assume it's indeed a newer QEMU.
> >> >>
> >> >> > We will fix this soon.
> >> >>
> >> >> OK, thank you.  We can disable the test temporarily in our patches
> >> >> updating repos & requirements and re-enable it before merge or later,
> >> >> depending on availability of your fix.
> >> >>
> >> >
> >> > I cannot reproduce the error on Fedora 28
> >> > (qemu-img-2.12.0-0.5.rc1.fc28.x86_64)
> >> ^^
> >> This looks suspicious.  Where do you get it from?  Perhaps from the old
> >> virt-preview repo?  Please note there is a copr virt-preview repo now,
> >> see https://fedoraproject.org/wiki/Virtualization_Preview_Repository
> and
> >> https://copr.fedorainfracloud.org/coprs/g/virtmaint-sig/virt-preview/,
> >> which should be up-to-date and contain QEMU 3.1.
> >>
> >
> > I'm using the virt-preview repos enabled by this:
> > http://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm
> >
> > If the repo was changed, updates to this rpm should have modified
> > my system to include the new repo.
>
> I see.  Sandro and I have already updated 

[ovirt-devel] Re: Vdsm: Failing test_no_match

2018-11-26 Thread Milan Zamazal
Nir Soffer  writes:

> On Mon, Nov 26, 2018 at 12:27 PM Milan Zamazal  wrote:
>
>> Nir Soffer  writes:
>>
>> > On Thu, Nov 22, 2018 at 2:08 PM Milan Zamazal 
>> wrote:
>> >
>> >> Nir Soffer  writes:
>> >>
>> >> > On Wed, Nov 21, 2018, 17:46 Milan Zamazal > >> >
>> >> >> Hi, test_no_match fails on CI most of the time (but not always) in
>> >> >> https://gerrit.ovirt.org/95518:
>> >> >>
>> >> >>   _ test_no_match[qcow2]
>> >> >> _
>> >> >>   11:30:16
>> >> >>   11:30:16 tmpdir = local('/var/tmp/vdsm/test_no_match_qcow2_0'),
>> >> >> image_format = 'qcow2'
>> >> >>   11:30:16
>> >> >>   11:30:16 def test_no_match(tmpdir, image_format):
>> >> >>   11:30:16 path = str(tmpdir.join('test.' + image_format))
>> >> >>   11:30:16 op = qemuimg.create(path, '1m', image_format)
>> >> >>   11:30:16 op.run()
>> >> >>   11:30:16 qemuio.write_pattern(path, image_format,
>> pattern=2)
>> >> >>   11:30:16 with pytest.raises(qemuio.VerificationError):
>> >> >>   11:30:16 >   qemuio.verify_pattern(path, image_format,
>> >> pattern=4)
>> >> >>   11:30:16
>> >> >>   11:30:16 storage/qemuio_test.py:59:
>> >> >>   11:30:16 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
>> _
>> >> _ _
>> >> >> _ _ _ _ _ _ _ _
>> >> >>   11:30:16
>> >> >>   11:30:16 path = '/var/tmp/vdsm/test_no_match_qcow2_0/test.qcow2',
>> >> format
>> >> >> = 'qcow2'
>> >> >>   11:30:16 offset = 512, len = 1024, pattern = 4
>> >> >>   11:30:16
>> >> >>   11:30:16 def verify_pattern(path, format, offset=512, len=1024,
>> >> >> pattern=5):
>> >> >>   11:30:16 read_cmd = 'read -P %d -s 0 -l %d %d %d' %
>> (pattern,
>> >> >> len, offset, len)
>> >> >>   11:30:16 cmd = ['qemu-io', '-f', format, '-c', read_cmd,
>> path]
>> >> >>   11:30:16 rc, out, err = commands.execCmd(cmd, raw=True)
>> >> >>   11:30:16 if rc != 0 or err != b"":
>> >> >>   11:30:16 >   raise cmdutils.Error(cmd, rc, out, err)
>> >> >>   11:30:16 E   Error: Command ['qemu-io', '-f', 'qcow2',
>> '-c',
>> >> >> 'read -P 4 -s 0 -l 1024 512 1024',
>> >> >> '/var/tmp/vdsm/test_no_match_qcow2_0/test.qcow2'] failed with rc=1
>> >> >> out='Pattern verification failed at offset 512, 1024 bytes\nread
>> >> 1024/1024
>> >> >> bytes at offset 512\n1 KiB, 1 ops; 0.0002 sec (3.756 MiB/sec and
>> >> 3846.1538
>> >> >> ops/sec)\n' err=''
>> >> >>   11:30:16
>> >> >>   11:30:16 storage/qemuio.py:50: Error
>> >> >>
>> >> >> (Similarly for raw.)
>> >> >>
>> >> >> You can see the complete test run log here (or in other CI runs of
>> the
>> >> >> patch):
>> >> >>
>> >> >>
>> >>
>> https://jenkins.ovirt.org/job/vdsm_master_check-patch-fc28-x86_64/2040/consoleFull
>> >> >>
>> >> >> It fails on both Fedora and CentOS.  It may or may not be related to
>> the
>> >> >> fact that QEMU 2.11 is used in the failed runs.
>> >> >>
>> >> >> Any idea what could be wrong?
>> >> >
>> >> >
>> >> > Yes. Qemu-io was fixed lately to fail when pattern does not match, but
>> >> our
>> >> > wrapper still expects the old behaviour (return 0, log warning).
>> >>
>> >> I see, thank you for the explanation.
>> >>
>> >> > Are you sure you run 2.11 and not 2.12?
>> >>
>> >> Actually not.  Looking into the CI log once more, I can see it reports
>> >> initially installed QEMU version before additional repos are added.
>> >> There are no reports on QEMU versions or upgrades afterwards but that
>> >> may be just silence of some automation script.  Since new QEMU version
>> >> would be expected with the added repos and it would explain the test
>> >> failure, let's assume it's indeed a newer QEMU.
>> >>
>> >> > We will fix this soon.
>> >>
>> >> OK, thank you.  We can disable the test temporarily in our patches
>> >> updating repos & requirements and re-enable it before merge or later,
>> >> depending on availability of your fix.
>> >>
>> >
>> > I cannot reproduce the error on Fedora 28
>> > (qemu-img-2.12.0-0.5.rc1.fc28.x86_64)
>> ^^
>> This looks suspicious.  Where do you get it from?  Perhaps from the old
>> virt-preview repo?  Please note there is a copr virt-preview repo now,
>> see https://fedoraproject.org/wiki/Virtualization_Preview_Repository and
>> https://copr.fedorainfracloud.org/coprs/g/virtmaint-sig/virt-preview/,
>> which should be up-to-date and contain QEMU 3.1.
>>
>
> I'm using the virt-preview repos enabled by this:
> http://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm
>
> If the repo was changed, updates to this rpm should have modified
> my system to include the new repo.

I see.  Sandro and I have already updated ovirt-release last week and I
can see the repo in
https://resources.ovirt.org/pub/ovirt-master-snapshot/rpm/fc28/noarch/ovirt-release-master-4.3.0-0.1.master.20181125005806.git11c72da.fc28.noarch.rpm.
Do you have the latest ovirt-release-master.rpm?

> Sandro, do we need to change ovirt-release-master.rpm?
>
>> but I hope 

[ovirt-devel] Re: Vdsm: Failing test_no_match

2018-11-26 Thread Nir Soffer
On Mon, Nov 26, 2018 at 12:27 PM Milan Zamazal  wrote:

> Nir Soffer  writes:
>
> > On Thu, Nov 22, 2018 at 2:08 PM Milan Zamazal 
> wrote:
> >
> >> Nir Soffer  writes:
> >>
> >> > On Wed, Nov 21, 2018, 17:46 Milan Zamazal  >> >
> >> >> Hi, test_no_match fails on CI most of the time (but not always) in
> >> >> https://gerrit.ovirt.org/95518:
> >> >>
> >> >>   _ test_no_match[qcow2]
> >> >> _
> >> >>   11:30:16
> >> >>   11:30:16 tmpdir = local('/var/tmp/vdsm/test_no_match_qcow2_0'),
> >> >> image_format = 'qcow2'
> >> >>   11:30:16
> >> >>   11:30:16 def test_no_match(tmpdir, image_format):
> >> >>   11:30:16 path = str(tmpdir.join('test.' + image_format))
> >> >>   11:30:16 op = qemuimg.create(path, '1m', image_format)
> >> >>   11:30:16 op.run()
> >> >>   11:30:16 qemuio.write_pattern(path, image_format,
> pattern=2)
> >> >>   11:30:16 with pytest.raises(qemuio.VerificationError):
> >> >>   11:30:16 >   qemuio.verify_pattern(path, image_format,
> >> pattern=4)
> >> >>   11:30:16
> >> >>   11:30:16 storage/qemuio_test.py:59:
> >> >>   11:30:16 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> _
> >> _ _
> >> >> _ _ _ _ _ _ _ _
> >> >>   11:30:16
> >> >>   11:30:16 path = '/var/tmp/vdsm/test_no_match_qcow2_0/test.qcow2',
> >> format
> >> >> = 'qcow2'
> >> >>   11:30:16 offset = 512, len = 1024, pattern = 4
> >> >>   11:30:16
> >> >>   11:30:16 def verify_pattern(path, format, offset=512, len=1024,
> >> >> pattern=5):
> >> >>   11:30:16 read_cmd = 'read -P %d -s 0 -l %d %d %d' %
> (pattern,
> >> >> len, offset, len)
> >> >>   11:30:16 cmd = ['qemu-io', '-f', format, '-c', read_cmd,
> path]
> >> >>   11:30:16 rc, out, err = commands.execCmd(cmd, raw=True)
> >> >>   11:30:16 if rc != 0 or err != b"":
> >> >>   11:30:16 >   raise cmdutils.Error(cmd, rc, out, err)
> >> >>   11:30:16 E   Error: Command ['qemu-io', '-f', 'qcow2',
> '-c',
> >> >> 'read -P 4 -s 0 -l 1024 512 1024',
> >> >> '/var/tmp/vdsm/test_no_match_qcow2_0/test.qcow2'] failed with rc=1
> >> >> out='Pattern verification failed at offset 512, 1024 bytes\nread
> >> 1024/1024
> >> >> bytes at offset 512\n1 KiB, 1 ops; 0.0002 sec (3.756 MiB/sec and
> >> 3846.1538
> >> >> ops/sec)\n' err=''
> >> >>   11:30:16
> >> >>   11:30:16 storage/qemuio.py:50: Error
> >> >>
> >> >> (Similarly for raw.)
> >> >>
> >> >> You can see the complete test run log here (or in other CI runs of
> the
> >> >> patch):
> >> >>
> >> >>
> >>
> https://jenkins.ovirt.org/job/vdsm_master_check-patch-fc28-x86_64/2040/consoleFull
> >> >>
> >> >> It fails on both Fedora and CentOS.  It may or may not be related to
> the
> >> >> fact that QEMU 2.11 is used in the failed runs.
> >> >>
> >> >> Any idea what could be wrong?
> >> >
> >> >
> >> > Yes. Qemu-io was fixed lately to fail when pattern does not match, but
> >> our
> >> > wrapper still expects the old behaviour (return 0, log warning).
> >>
> >> I see, thank you for the explanation.
> >>
> >> > Are you sure you run 2.11 and not 2.12?
> >>
> >> Actually not.  Looking into the CI log once more, I can see it reports
> >> initially installed QEMU version before additional repos are added.
> >> There are no reports on QEMU versions or upgrades afterwards but that
> >> may be just silence of some automation script.  Since new QEMU version
> >> would be expected with the added repos and it would explain the test
> >> failure, let's assume it's indeed a newer QEMU.
> >>
> >> > We will fix this soon.
> >>
> >> OK, thank you.  We can disable the test temporarily in our patches
> >> updating repos & requirements and re-enable it before merge or later,
> >> depending on availability of your fix.
> >>
> >
> > I cannot reproduce the error on Fedora 28
> > (qemu-img-2.12.0-0.5.rc1.fc28.x86_64)
> ^^
> This looks suspicious.  Where do you get it from?  Perhaps from the old
> virt-preview repo?  Please note there is a copr virt-preview repo now,
> see https://fedoraproject.org/wiki/Virtualization_Preview_Repository and
> https://copr.fedorainfracloud.org/coprs/g/virtmaint-sig/virt-preview/,
> which should be up-to-date and contain QEMU 3.1.
>

I'm using the virt-preview repos enabled by this:
http://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm

If the repo was changed, updates to this rpm should have modified
my system to include the new repo.

Sandro, do we need to change ovirt-release-master.rpm?

> but I hope this change will fix your setup.
>
> Which change?
>

Looks like you found it, but anyway:
https://gerrit.ovirt.org/c/95718/

Nir
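For illustration only (this is not the content of the linked patch): with the
newer qemu-io, a pattern mismatch makes the command exit non-zero, so the
wrapper in storage/qemuio.py needs to recognize that case instead of treating
any rc != 0 as a generic command failure. A sketch, reusing the module's
existing names (commands, cmdutils, VerificationError); the error message is
made up:

def verify_pattern(path, format, offset=512, len=1024, pattern=5):
    read_cmd = 'read -P %d -s 0 -l %d %d %d' % (pattern, len, offset, len)
    cmd = ['qemu-io', '-f', format, '-c', read_cmd, path]
    rc, out, err = commands.execCmd(cmd, raw=True)
    if b'Pattern verification failed' in out:
        # new qemu-io reports a mismatch on stdout and exits with rc=1
        raise VerificationError(
            'pattern %d not found at offset %d' % (pattern, offset))
    if rc != 0 or err != b"":
        raise cmdutils.Error(cmd, rc, out, err)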

[ovirt-devel] Re: Vdsm: Failing test_no_match

2018-11-26 Thread Milan Zamazal
Nir Soffer  writes:

> On Thu, Nov 22, 2018 at 2:08 PM Milan Zamazal  wrote:
>
>> Nir Soffer  writes:
>>
>> > On Wed, Nov 21, 2018, 17:46 Milan Zamazal > >
>> >> Hi, test_no_match fails on CI most of the time (but not always) in
>> >> https://gerrit.ovirt.org/95518:
>> >>
>> >>   _ test_no_match[qcow2]
>> >> _
>> >>   11:30:16
>> >>   11:30:16 tmpdir = local('/var/tmp/vdsm/test_no_match_qcow2_0'),
>> >> image_format = 'qcow2'
>> >>   11:30:16
>> >>   11:30:16 def test_no_match(tmpdir, image_format):
>> >>   11:30:16 path = str(tmpdir.join('test.' + image_format))
>> >>   11:30:16 op = qemuimg.create(path, '1m', image_format)
>> >>   11:30:16 op.run()
>> >>   11:30:16 qemuio.write_pattern(path, image_format, pattern=2)
>> >>   11:30:16 with pytest.raises(qemuio.VerificationError):
>> >>   11:30:16 >   qemuio.verify_pattern(path, image_format,
>> pattern=4)
>> >>   11:30:16
>> >>   11:30:16 storage/qemuio_test.py:59:
>> >>   11:30:16 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
>> _ _
>> >> _ _ _ _ _ _ _ _
>> >>   11:30:16
>> >>   11:30:16 path = '/var/tmp/vdsm/test_no_match_qcow2_0/test.qcow2',
>> format
>> >> = 'qcow2'
>> >>   11:30:16 offset = 512, len = 1024, pattern = 4
>> >>   11:30:16
>> >>   11:30:16 def verify_pattern(path, format, offset=512, len=1024,
>> >> pattern=5):
>> >>   11:30:16 read_cmd = 'read -P %d -s 0 -l %d %d %d' % (pattern,
>> >> len, offset, len)
>> >>   11:30:16 cmd = ['qemu-io', '-f', format, '-c', read_cmd, path]
>> >>   11:30:16 rc, out, err = commands.execCmd(cmd, raw=True)
>> >>   11:30:16 if rc != 0 or err != b"":
>> >>   11:30:16 >   raise cmdutils.Error(cmd, rc, out, err)
>> >>   11:30:16 E   Error: Command ['qemu-io', '-f', 'qcow2', '-c',
>> >> 'read -P 4 -s 0 -l 1024 512 1024',
>> >> '/var/tmp/vdsm/test_no_match_qcow2_0/test.qcow2'] failed with rc=1
>> >> out='Pattern verification failed at offset 512, 1024 bytes\nread
>> 1024/1024
>> >> bytes at offset 512\n1 KiB, 1 ops; 0.0002 sec (3.756 MiB/sec and
>> 3846.1538
>> >> ops/sec)\n' err=''
>> >>   11:30:16
>> >>   11:30:16 storage/qemuio.py:50: Error
>> >>
>> >> (Similarly for raw.)
>> >>
>> >> You can see the complete test run log here (or in other CI runs of the
>> >> patch):
>> >>
>> >>
>> https://jenkins.ovirt.org/job/vdsm_master_check-patch-fc28-x86_64/2040/consoleFull
>> >>
>> >> It fails on both Fedora and CentOS.  It may or may not be related to the
>> >> fact that QEMU 2.11 is used in the failed runs.
>> >>
>> >> Any idea what could be wrong?
>> >
>> >
>> > Yes. Qemu-io was fixed lately to fail when pattern does not match, but
>> our
>> > wrapper still expects the old behaviour (return 0, log warning).
>>
>> I see, thank you for the explanation.
>>
>> > Are you sure you run 2.11 and not 2.12?
>>
>> Actually not.  Looking into the CI log once more, I can see it reports the
>> initially installed QEMU version before the additional repos are added.
>> There are no reports on QEMU versions or upgrades afterwards, but that
>> may just be an automation script staying silent.  Since a new QEMU version
>> would be expected with the added repos and would explain the test
>> failure, let's assume it's indeed a newer QEMU.
>>
>> > We will fix this soon.
>>
>> OK, thank you.  We can disable the test temporarily in our patches
>> updating repos & requirements and re-enable it before merge or later,
>> depending on availability of your fix.
>>
>
> I cannot reproduce the error on Fedora 28
> (qemu-img-2.12.0-0.5.rc1.fc28.x86_64)
^^
This looks suspicious.  Where do you get it from?  Perhaps from the old
virt-preview repo?  Please note there is a copr virt-preview repo now,
see https://fedoraproject.org/wiki/Virtualization_Preview_Repository and
https://copr.fedorainfracloud.org/coprs/g/virtmaint-sig/virt-preview/,
which should be up-to-date and contain QEMU 3.1.

> but I hope this change will fix your setup.

Which change?

> Can you confirm that this solves the issue?
>
> Nir


[ovirt-devel] vdsm has been tagged (v4.20.44)

2018-11-26 Thread Milan Zamazal
