[ovirt-devel] Re: OST fails with "Error loading module from /usr/share/ovirt-engine/modules/common/org/springframework/main/module.xml"

2021-12-26 Thread Yedidyah Bar David
On Fri, Dec 24, 2021 at 1:06 PM Marcin Sobczyk  wrote:
>
> Hi All,
>
> OST currently fails all the time during engine setup.
> Here's a piece of ansible log that's seen repeatedly and I think
> describes the problem:
>
> 11:07:54 E "engine-config",
> 11:07:54 E "-s",
> 11:07:54 E "OvfUpdateIntervalInMinutes=10"
> 11:07:54 E ],
> 11:07:54 E "delta": "0:00:01.142926",
> 11:07:54 E "end": "2021-12-24 11:06:37.894810",
> 11:07:54 E "invocation": {
> 11:07:54 E "module_args": {
> 11:07:54 E "_raw_params": "engine-config -s
> OvfUpdateIntervalInMinutes='10' ",
> 11:07:54 E "_uses_shell": false,
> 11:07:54 E "argv": null,
> 11:07:54 E "chdir": null,
> 11:07:54 E "creates": null,
> 11:07:54 E "executable": null,
> 11:07:54 E "removes": null,
> 11:07:54 E "stdin": null,
> 11:07:54 E "stdin_add_newline": true,
> 11:07:54 E "strip_empty_ends": true,
> 11:07:54 E "warn": false
> 11:07:54 E }
> 11:07:54 E },
> 11:07:54 E "item": {
> 11:07:54 E "key": "OvfUpdateIntervalInMinutes",
> 11:07:54 E "value": "10"
> 11:07:54 E },
> 11:07:54 E "msg": "non-zero return code",
> 11:07:54 E "rc": 1,
> 11:07:54 E "start": "2021-12-24 11:06:36.751884",
> 11:07:54 E "stderr": "Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false",
> 11:07:54 E "stderr_lines": [
> 11:07:54 E "Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false"
> 11:07:54 E ],
> 11:07:54 E "stdout": "Error loading module from
> /usr/share/ovirt-engine/modules/common/org/springframework/main/module.xml",
> 11:07:54 E "stdout_lines": [
> 11:07:54 E "Error loading module from
> /usr/share/ovirt-engine/modules/common/org/springframework/main/module.xml"
>
> We do set some config values for the engine in OST when running
> engine-setup. I tried commenting these out, but then the engine failed
> its health check anyway:
>
> "Status code was 503 and not [200]: HTTP Error 503: Service Unavailable"
>
> The last working set of OST images was the one from Dec 23, 2021 2:05:08
> AM. The first broken one is from Dec 24, 2021 2:05:09 AM. The shipped
> ovirt-engine RPMs don't seem to contain any important changes between
> these two sets, but AFAICS the newer ovirt-dependencies RPM did take in
> a couple of patches that look suspicious [1][2][3]. The patches were
> merged on November 16th, but it seems they were first used in the
> broken set from Dec 24 (the one from Dec 23 seems to contain an
> ovirt-dependencies RPM based on this [4] commit).
>
> I wanted to try out an older version of ovirt-dependencies, but I think
> the older builds were wiped from resources.ovirt.org.
>
> I will disable the cyclic el8stream OST runs for now, because all of them
> fail. If anyone is available to make a build with those patches
> reverted and test it out, please ping me and I'll re-enable them.
>
> Regards, Marcin
>
> [1] https://gerrit.ovirt.org/c/ovirt-dependencies/+/114699
> [2] https://gerrit.ovirt.org/c/ovirt-dependencies/+/113877

This is the one that broke us - it replaced a set of shell scripts
with maven. Those scripts were also what created:

# ls -l /usr/share/java/ovirt-dependencies/4.4
total 0
lrwxrwxrwx. 1 root root 24 Nov  4 18:00 gwt-servlet.jar -> ../gwt-servlet-2.9.0.jar
lrwxrwxrwx. 1 root root 31 Nov  4 18:00 spring-aop.jar -> ../spring-aop-5.0.4.RELEASE.jar
lrwxrwxrwx. 1 root root 33 Nov  4 18:00 spring-beans.jar -> ../spring-beans-5.0.4.RELEASE.jar
lrwxrwxrwx. 1 root root 35 Nov  4 18:00 spring-context.jar -> ../spring-context-5.0.4.RELEASE.jar
lrwxrwxrwx. 1 root root 32 Nov  4 18:00 spring-core.jar -> ../spring-core-5.0.4.RELEASE.jar
lrwxrwxrwx. 1 root root 38 Nov  4 18:00 spring-expression.jar -> ../spring-expression-5.0.4.RELEASE.jar
lrwxrwxrwx. 1 root root 38 Nov  4 18:00 spring-instrument.jar -> ../spring-instrument-5.0.4.RELEASE.jar
lrwxrwxrwx. 1 root root 32 Nov  4 18:00 spring-jdbc.jar -> ../spring-jdbc-5.0.4.RELEASE.jar
lrwxrwxrwx. 1 root root 32 Nov  4 18:00 spring-test.jar -> ../spring-test-5.0.4.RELEASE.jar
lrwxrwxrwx. 1 root root 30 Nov  4 18:00 spring-tx.jar -> ../spring-tx-5.0.4.RELEASE.jar

But now there is no '4.4' directory. The engine spec still has:
%global ovirt_dependencies ovirt-dependencies/4.4

even though the engine is now at 4.5, and even though ovirt-dependencies
itself was bumped to 4.5 a few days ago.
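
A quick way to see the mismatch on an affected machine - a hedged sketch,
using the package and paths discussed above:

rpm -q ovirt-dependencies
ls /usr/share/java/ovirt-dependencies        # jars now sit flat here
ls /usr/share/java/ovirt-dependencies/4.4    # path the engine still expects - now missing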

Not sure what the exact plans are regarding this change - if it's
just a bug, I'll let Martin/Artur fix it (perhaps by adding a 4.5
directory with links and pointing the 4.5 engine there, or by making
it provide both). I did not try to revert, anyway, because quite a
few other related patches were merged since, including the move to
github/copr.

For now, I pushed this simple workaround to OST [1]:

# ln -s . /usr/share/java/ovirt-dependencies/4.4
# ls -l /usr/share/java/ovirt-dependencies/
total 4724
lrwxrwxrwx. 1 root root   1 Dec 27 08:28 4.4 -> .
-rw-r--r--. 1 root root  360096 Dec 26 23:39 spring-aop.jar
-rw-r--r--. 1 root root  654084 Dec 26 23:39 spring-beans.jar
-rw-r--r--. 1 root root 1079129 Dec 26 23:39 spring-context.jar
-rw-r--r--. 1 root root 1216476 
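
Why the self-referencing link works: with '4.4 -> .', any lookup under the
4.4 subdirectory resolves back into its parent, where the jars now sit. A
minimal demo with throwaway paths:

mkdir -p /tmp/deps && touch /tmp/deps/spring-core.jar
ln -s . /tmp/deps/4.4                        # 4.4 points back at its own parent
readlink -f /tmp/deps/4.4/spring-core.jar    # prints /tmp/deps/spring-core.jar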

[ovirt-devel] Re: OST fails with NPE in VmDeviceUtils.updateUsbSlots

2021-01-28 Thread Marcin Sobczyk



On 1/28/21 10:30 AM, Arik Hadas wrote:



On Thu, Jan 28, 2021 at 11:21 AM Marcin Sobczyk wrote:


Hi,

On 1/28/21 9:43 AM, Arik Hadas wrote:
> Hi,
> Seems like our changes to bios type handling led to that.
> Interestingly, OST passed on the patches..
Can you please provide more info on the verification process?


Sure.
The OST job [1] passed on PS 16 of [2], and there was no change on the
patch between PS 16 and PS 17, which is the version that got merged.


[1] 
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/7706/ 

[2] https://gerrit.ovirt.org/#/c/ovirt-engine/+/111657/ 



So here's some post-mortem analysis.

The repo used for the manual run [3] was [4]. The build has already been
cleaned up by jenkins, so there's no way to peek into which versions of
the engine were built there. We can estimate, though, based on the date
of the 'check-patch' job, which is Jan 22 4:48 PM. The OST run was done
on Jan 25, 2021 11:41 AM. The built packages were simply outdated by the
time the OST run was made. We can actually see that in 'dnf.log' [5]:


2021-01-25T11:45:51Z INFO Dependencies resolved.
2021-01-25T11:45:51Z INFO
 Package       Arch    Version                                               Repository               Size
===========================================================================================================
Upgrading:
 ...
 ovirt-engine  noarch  4.4.5.3-0.0.master.20210125103910.gitd5d5142096e.el8  ovirt-master-tested-el8  13 M


The version of ovirt-engine that was available in the
ovirt-master-tested-el8 repo is from Jan 25.


Some conclusions:
- we use very fresh versions of packages in OST. If you're planning to
test a package of your own, please rebase first
- if you're trying to test your own package, please make sure it's
actually used by the OST run; you can check that in the dnf.log files
(see the sketch after this list)
- in the future we should have an automated way of telling whether any
package provided by the user failed to land in any of OST's VMs. I
filed [6] to address this.
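
For the dnf.log check mentioned above, something like this on the OST VM
should do (a hedged sketch - substitute whichever package you actually built):

grep 'ovirt-engine' /var/log/dnf.log | tail -n 20   # what dnf resolved, and from which repo
rpm -q ovirt-engine                                 # what actually ended up installed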


Since this is blocking all basic suite runs, I posted a patch [7] that
disables USB on the VMs we create in the suite. Please review.


Regards, Marcin

[3] 
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/7706/parameters/
[4] 
https://jenkins.ovirt.org/job/ovirt-engine_standard-check-patch/9989/artifact/check-patch.el8.x86_64/
[5] 
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/7706/artifact/exported-artifacts/test_logs/basic-suite-master/lago-basic-suite-master-engine/_var_log/dnf.log/*view*/

[6] https://issues.redhat.com/browse/RHV-40844
[7] https://gerrit.ovirt.org/#/c/ovirt-system-tests/+/113201/



Regards, Marcin

> Anyway, we'll look into it.
> Thanks for bringing this to our attention!
>
> On Thu, Jan 28, 2021 at 12:13 AM Vojtech Juranek wrote:
>
>     Hi,
>     OST fails constantly in test_check_snapshot_with_memory [1] with
>     NPE  in
>     VmDeviceUtils.updateUsbSlots [2]. Build with any additional
>     changes (custom
>     repo) is on [3].
>
>     Unfortunately, I wasn't able to find the root cause. Could
someone
>     please take
>     a look?
>
>     Thanks
>     Vojta
>
>     [1]
>
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7720/consoleFull

>     [2]
>

https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7720/artifact/exported-artifacts/test_logs/basic-suite-master/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log


>     [3]
>
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7718/parameters/


[ovirt-devel] Re: OST fails with NPE in VmDeviceUtils.updateUsbSlots

2021-01-28 Thread Arik Hadas
On Thu, Jan 28, 2021 at 11:21 AM Marcin Sobczyk  wrote:

> Hi,
>
> On 1/28/21 9:43 AM, Arik Hadas wrote:
> > Hi,
> > Seems like our changes to bios type handling led to that.
> > Interestingly, OST passed on the patches..
> Can you please provide more info on the verification process?
>

Sure.
The OST job [1] passed on PS 16 of [2], and there was no change on the patch
between PS 16 and PS 17, which is the version that got merged.

[1]
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/7706/
[2] https://gerrit.ovirt.org/#/c/ovirt-engine/+/111657/


>
> Regards, Marcin
>
> > Anyway, we'll look into it.
> > Thanks for bringing this to our attention!
> >
> > On Thu, Jan 28, 2021 at 12:13 AM Vojtech Juranek wrote:
> >
> > Hi,
> > OST fails constantly in test_check_snapshot_with_memory [1] with
> > NPE  in
> > VmDeviceUtils.updateUsbSlots [2]. Build with any additional
> > changes (custom
> > repo) is on [3].
> >
> > Unfortunately, I wasn't able to find the root cause. Could someone
> > please take
> > a look?
> >
> > Thanks
> > Vojta
> >
> > [1]
> >
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7720/consoleFull
> > [2]
> >
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7720/artifact/exported-artifacts/test_logs/basic-suite-master/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
> > [3]
> >
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7718/parameters/
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/HG73VPO2KFUJ2AT5EO525BRJJNJ3MM5F/


[ovirt-devel] Re: OST fails with NPE in VmDeviceUtils.updateUsbSlots

2021-01-28 Thread Marcin Sobczyk

Hi,

On 1/28/21 9:43 AM, Arik Hadas wrote:

Hi,
Seems like our changes to bios type handling led to that.
Interestingly, OST passed on the patches..

Can you please provide more info on the verification process?

Regards, Marcin


Anyway, we'll look into it.
Thanks for bringing this to our attention!

On Thu, Jan 28, 2021 at 12:13 AM Vojtech Juranek wrote:


Hi,
OST fails constantly in test_check_snapshot_with_memory [1] with
NPE  in
VmDeviceUtils.updateUsbSlots [2]. Build with any additional
changes (custom
repo) is on [3].

Unfortunately, I wasn't able to find the root cause. Could someone
please take
a look?

Thanks
Vojta

[1]
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7720/consoleFull

[2]

https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7720/artifact/exported-artifacts/test_logs/basic-suite-master/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log


[3]
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7718/parameters/


___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/K5EAZU2JOYKIFCC2ZV4OEZHHQLINOZBR/


[ovirt-devel] Re: OST fails with NPE in VmDeviceUtils.updateUsbSlots

2021-01-28 Thread Arik Hadas
Hi,
Seems like our changes to bios type handling led to that.
Interestingly, OST passed on the patches..
Anyway, we'll look into it.
Thanks for bringing this to our attention!

On Thu, Jan 28, 2021 at 12:13 AM Vojtech Juranek 
wrote:

> Hi,
> OST fails constantly in test_check_snapshot_with_memory [1] with NPE  in
> VmDeviceUtils.updateUsbSlots [2]. Build with any additional changes
> (custom
> repo) is on [3].
>
> Unfortunately, I wasn't able to find the root cause. Could someone please
> take
> a look?
>
> Thanks
> Vojta
>
> [1]
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7720/consoleFull
> [2]
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7720/artifact/exported-artifacts/test_logs/basic-suite-master/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
> [3]
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7718/parameters/
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/6FCPAHKN6ECUTMHPPH37UPPMBDUTPDGL/


[ovirt-devel] Re: OST fails during 002_bootstrap_pytest

2020-09-24 Thread Marcin Sobczyk



On 9/24/20 9:44 AM, Martin Perina wrote:



On Thu, Sep 24, 2020 at 8:26 AM Yedidyah Bar David wrote:


On Wed, Sep 23, 2020 at 4:42 PM Vojtech Juranek wrote:
>
> Hi,
> can anybody look at OST? It fails constantly with the error below.
> See e.g. [1, 2] for full logs.
> Thanks
> Vojta
>
> [1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7381/
> [2] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7382/
>
> 13:07:16 ../basic-suite-master/test-scenarios/
> 002_bootstrap_pytest.py::test_verify_engine_backup [WARNING]: Invalid
> characters were found in group names but not replaced, use
> 13:07:22 -vvvv to see details

I think this warning is unrelated, it's coming from here:

https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7381/consoleText


../basic-suite-master/test-scenarios/001_initialize_engine_pytest.py::test_check_ansible_connectivity
[WARNING]: Invalid characters were found in group names but not
replaced, use
-vvvv to see details

Yeah, these warnings are completely unrelated and harmless (although 
pretty ugly)




Perhaps it's due to:

ost_utils/ost_utils/pytest/fixtures/ansible.py

ANSIBLE_ENGINE_PATTERN = "~lago-.*-engine"
ANSIBLE_HOSTS_PATTERN = "~lago-.*-host-[0-9]"
ANSIBLE_HOST0_PATTERN = "~lago-.*-host-0"
ANSIBLE_HOST1_PATTERN = "~lago-.*-host-1"

?

Perhaps this can help understand:

https://gerrit.ovirt.org/111433


Adding Marcin ...
No, it's for a different reason - it's about how lago creates the ansible
inventory.
This is fixed in py3-based lago, so you won't see these warnings in el8
OST runs, but they're still visible in el7 runs.

The fix for this is here: https://github.com/lago-project/lago/pull/814

Overall, the ansible output in OST should be improved, because it's much
too noisy.

I'll take care of it once I get rid of the lago dependencies in the basic suite.
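
For reference, the warning itself is easy to reproduce: ansible complains
about group names that aren't valid identifiers, and lago's generated groups
contain dashes. A hedged repro with a made-up inventory:

cat > /tmp/inv <<'EOF'
[lago-basic-suite-master]
lago-basic-suite-master-engine ansible_host=192.0.2.10
EOF
ansible -i /tmp/inv all --list-hosts
# [WARNING]: Invalid characters were found in group names but not replaced, ...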




Best regards,

> 13:07:22 /usr/lib/python2.7/site-packages/requests/__init__.py:91:
> RequestsDependencyWarning: urllib3 (1.25.10) or chardet (3.0.4)
doesn't match
> a supported version!
> 13:07:22   RequestsDependencyWarning)
> 13:07:22 lago-basic-suite-master-engine | CHANGED => {
> 13:07:22     "changed": true,
> 13:07:22     "gid": 0,
> 13:07:22     "group": "root",
> 13:07:22     "mode": "0755",
> 13:07:22     "owner": "root",
> 13:07:22     "path": "/var/log/ost-engine-backup",
> 13:07:22     "secontext": "unconfined_u:object_r:var_log_t:s0",
> 13:07:22     "size": 6,
> 13:07:22     "state": "directory",
> 13:07:22     "uid": 0
> 13:07:22 }
>
> 13:07:44 [WARNING]: Invalid characters were found in group names but not
> replaced, use
> 13:07:44 -vvvv to see details
> 13:07:44 /usr/lib/python2.7/site-packages/requests/__init__.py:91:
> RequestsDependencyWarning: urllib3 (1.25.10) or chardet (3.0.4)
doesn't match
> a supported version!
> 13:07:44   RequestsDependencyWarning)
> 13:07:44 lago-basic-suite-master-engine | FAILED | rc=1 >>
> 13:07:44 Start of engine-backup with mode 'backup'
> 13:07:44 scope: all
> 13:07:44 archive file: /var/log/ost-engine-backup/backup.tgz
> 13:07:44 log file: /var/log/ost-engine-backup/log.txt
> 13:07:44 Backing up:
> 13:07:44 Notifying engine
> 13:07:44 - Files
> 13:07:44 - Engine database 'engine'
> 13:07:44 - DWH database 'ovirt_engine_history'
> 13:07:44 - Grafana database '/var/lib/grafana/grafana.db'
> 13:07:44 Notifying engineFATAL: failed to backup
/var/lib/grafana/grafana.db
> with sqlite3non-zero return code
> 13:17:47 FAILED



-- 
Didi




--
Martin Perina
Manager, Software Engineering
Red Hat Czech s.r.o.


___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org

[ovirt-devel] Re: OST fails during 002_bootstrap_pytest

2020-09-24 Thread Martin Perina
On Thu, Sep 24, 2020 at 8:26 AM Yedidyah Bar David  wrote:

> On Wed, Sep 23, 2020 at 4:42 PM Vojtech Juranek wrote:
> >
> > Hi,
> > can anybody look at OST? It fails constantly with the error below.
> > See e.g. [1, 2] for full logs.
> > Thanks
> > Vojta
> >
> > [1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7381/
> > [2] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7382/
> >
> > 13:07:16 ../basic-suite-master/test-scenarios/
> > 002_bootstrap_pytest.py::test_verify_engine_backup [WARNING]: Invalid
> > characters were found in group names but not replaced, use
> > 13:07:22 -vvvv to see details
>
> I think this warning is unrelated, it's coming from here:
>
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7381/consoleText
>
>
> ../basic-suite-master/test-scenarios/001_initialize_engine_pytest.py::test_check_ansible_connectivity
> [WARNING]: Invalid characters were found in group names but not
> replaced, use
> -vvvv to see details
>
> Perhaps it's due to:
>
> ost_utils/ost_utils/pytest/fixtures/ansible.py
>
> ANSIBLE_ENGINE_PATTERN = "~lago-.*-engine"
> ANSIBLE_HOSTS_PATTERN = "~lago-.*-host-[0-9]"
> ANSIBLE_HOST0_PATTERN = "~lago-.*-host-0"
> ANSIBLE_HOST1_PATTERN = "~lago-.*-host-1"
>
> ?
>
> Perhaps this can help understand:
>
> https://gerrit.ovirt.org/111433


Adding Marcin ...

>
>
> Best regards,
>
> > 13:07:22 /usr/lib/python2.7/site-packages/requests/__init__.py:91:
> > RequestsDependencyWarning: urllib3 (1.25.10) or chardet (3.0.4) doesn't
> match
> > a supported version!
> > 13:07:22   RequestsDependencyWarning)
> > 13:07:22 lago-basic-suite-master-engine | CHANGED => {
> > 13:07:22 "changed": true,
> > 13:07:22 "gid": 0,
> > 13:07:22 "group": "root",
> > 13:07:22 "mode": "0755",
> > 13:07:22 "owner": "root",
> > 13:07:22 "path": "/var/log/ost-engine-backup",
> > 13:07:22 "secontext": "unconfined_u:object_r:var_log_t:s0",
> > 13:07:22 "size": 6,
> > 13:07:22 "state": "directory",
> > 13:07:22 "uid": 0
> > 13:07:22 }
> >
> > 13:07:44 [WARNING]: Invalid characters were found in group names but not
> > replaced, use
> > 13:07:44 -vvvv to see details
> > 13:07:44 /usr/lib/python2.7/site-packages/requests/__init__.py:91:
> > RequestsDependencyWarning: urllib3 (1.25.10) or chardet (3.0.4) doesn't
> match
> > a supported version!
> > 13:07:44   RequestsDependencyWarning)
> > 13:07:44 lago-basic-suite-master-engine | FAILED | rc=1 >>
> > 13:07:44 Start of engine-backup with mode 'backup'
> > 13:07:44 scope: all
> > 13:07:44 archive file: /var/log/ost-engine-backup/backup.tgz
> > 13:07:44 log file: /var/log/ost-engine-backup/log.txt
> > 13:07:44 Backing up:
> > 13:07:44 Notifying engine
> > 13:07:44 - Files
> > 13:07:44 - Engine database 'engine'
> > 13:07:44 - DWH database 'ovirt_engine_history'
> > 13:07:44 - Grafana database '/var/lib/grafana/grafana.db'
> > 13:07:44 Notifying engineFATAL: failed to backup
> /var/lib/grafana/grafana.db
> > with sqlite3non-zero return code
> > 13:17:47 FAILED
>
>
>
> --
> Didi


-- 
Martin Perina
Manager, Software Engineering
Red Hat Czech s.r.o.
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/JFZFIM6YTEK2ZOXHTTKLV2PXZNNNDIT7/


[ovirt-devel] Re: OST fails during 002_bootstrap_pytest

2020-09-24 Thread Yedidyah Bar David
On Wed, Sep 23, 2020 at 4:42 PM Vojtech Juranek  wrote:
>
> Hi,
> can anybody look at OST? It fails constantly with the error below.
> See e.g. [1, 2] for full logs.
> Thanks
> Vojta
>
> [1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7381/
> [2] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7382/
>
> 13:07:16 ../basic-suite-master/test-scenarios/
> 002_bootstrap_pytest.py::test_verify_engine_backup [WARNING]: Invalid
> characters were found in group names but not replaced, use
> 13:07:22 -vvvv to see details

I think this warning is unrelated, it's coming from here:

https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7381/consoleText

../basic-suite-master/test-scenarios/001_initialize_engine_pytest.py::test_check_ansible_connectivity
[WARNING]: Invalid characters were found in group names but not
replaced, use
-vvvv to see details

Perhaps it's due to:

ost_utils/ost_utils/pytest/fixtures/ansible.py

ANSIBLE_ENGINE_PATTERN = "~lago-.*-engine"
ANSIBLE_HOSTS_PATTERN = "~lago-.*-host-[0-9]"
ANSIBLE_HOST0_PATTERN = "~lago-.*-host-0"
ANSIBLE_HOST1_PATTERN = "~lago-.*-host-1"

?

Perhaps this can help understand:

https://gerrit.ovirt.org/111433

Best regards,

> 13:07:22 /usr/lib/python2.7/site-packages/requests/__init__.py:91:
> RequestsDependencyWarning: urllib3 (1.25.10) or chardet (3.0.4) doesn't match
> a supported version!
> 13:07:22   RequestsDependencyWarning)
> 13:07:22 lago-basic-suite-master-engine | CHANGED => {
> 13:07:22 "changed": true,
> 13:07:22 "gid": 0,
> 13:07:22 "group": "root",
> 13:07:22 "mode": "0755",
> 13:07:22 "owner": "root",
> 13:07:22 "path": "/var/log/ost-engine-backup",
> 13:07:22 "secontext": "unconfined_u:object_r:var_log_t:s0",
> 13:07:22 "size": 6,
> 13:07:22 "state": "directory",
> 13:07:22 "uid": 0
> 13:07:22 }
>
> 13:07:44 [WARNING]: Invalid characters were found in group names but not
> replaced, use
> 13:07:44 -vvvv to see details
> 13:07:44 /usr/lib/python2.7/site-packages/requests/__init__.py:91:
> RequestsDependencyWarning: urllib3 (1.25.10) or chardet (3.0.4) doesn't match
> a supported version!
> 13:07:44   RequestsDependencyWarning)
> 13:07:44 lago-basic-suite-master-engine | FAILED | rc=1 >>
> 13:07:44 Start of engine-backup with mode 'backup'
> 13:07:44 scope: all
> 13:07:44 archive file: /var/log/ost-engine-backup/backup.tgz
> 13:07:44 log file: /var/log/ost-engine-backup/log.txt
> 13:07:44 Backing up:
> 13:07:44 Notifying engine
> 13:07:44 - Files
> 13:07:44 - Engine database 'engine'
> 13:07:44 - DWH database 'ovirt_engine_history'
> 13:07:44 - Grafana database '/var/lib/grafana/grafana.db'
> 13:07:44 Notifying engineFATAL: failed to backup /var/lib/grafana/grafana.db
> with sqlite3non-zero return code
> 13:17:47 FAILED



-- 
Didi
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/HMN2L5DC4V6PLXD3GIHG445N7WVFFR5L/


[ovirt-devel] Re: OST fails during 002_bootstrap_pytest

2020-09-23 Thread Yedidyah Bar David
On Wed, Sep 23, 2020 at 5:33 PM Marcin Sobczyk  wrote:

>
>
> On 9/23/20 4:26 PM, Nir Soffer wrote:
>
> On Wed, Sep 23, 2020 at 4:42 PM Vojtech Juranek wrote:
>
> Hi,
> can anybody look at OST? It fails constantly with the error below.
> See e.g. [1, 2] for full logs.
> Thanks
> Vojta
>
> [1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7381/
> [2] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7382/
>
> 13:07:16 ../basic-suite-master/test-scenarios/
> 002_bootstrap_pytest.py::test_verify_engine_backup [WARNING]: Invalid
> characters were found in group names but not replaced, use
> 13:07:22 -vvvv to see details
> 13:07:22 /usr/lib/python2.7/site-packages/requests/__init__.py:91:
> RequestsDependencyWarning: urllib3 (1.25.10) or chardet (3.0.4) doesn't match
> a supported version!
> 13:07:22   RequestsDependencyWarning)
> 13:07:22 lago-basic-suite-master-engine | CHANGED => {
> 13:07:22 "changed": true,
> 13:07:22 "gid": 0,
> 13:07:22 "group": "root",
> 13:07:22 "mode": "0755",
> 13:07:22 "owner": "root",
> 13:07:22 "path": "/var/log/ost-engine-backup",
> 13:07:22 "secontext": "unconfined_u:object_r:var_log_t:s0",
> 13:07:22 "size": 6,
> 13:07:22 "state": "directory",
> 13:07:22 "uid": 0
> 13:07:22 }
>
> 13:07:44 [WARNING]: Invalid characters were found in group names but not
> replaced, use
> 13:07:44 -vvvv to see details
> 13:07:44 /usr/lib/python2.7/site-packages/requests/__init__.py:91:
> RequestsDependencyWarning: urllib3 (1.25.10) or chardet (3.0.4) doesn't match
> a supported version!
> 13:07:44   RequestsDependencyWarning)
> 13:07:44 lago-basic-suite-master-engine | FAILED | rc=1 >>
> 13:07:44 Start of engine-backup with mode 'backup'
> 13:07:44 scope: all
> 13:07:44 archive file: /var/log/ost-engine-backup/backup.tgz
> 13:07:44 log file: /var/log/ost-engine-backup/log.txt
> 13:07:44 Backing up:
> 13:07:44 Notifying engine
> 13:07:44 - Files
> 13:07:44 - Engine database 'engine'
> 13:07:44 - DWH database 'ovirt_engine_history'
> 13:07:44 - Grafana database '/var/lib/grafana/grafana.db'
> 13:07:44 Notifying engineFATAL: failed to backup /var/lib/grafana/grafana.db
>
> A more descriptive error message can be found here [3]:
>
> 2020-09-23 08:16:09 94947: Backing up grafana database to 
> /tmp/engine-backup.sHM28RhfZI/tar/db/grafana.db
> /usr/bin/engine-backup: line 1098: sqlite3: command not found
> 2020-09-23 08:16:09 94947: FATAL: failed to backup 
> /var/lib/grafana/grafana.db with sqlite3
>
>
Should be fixed by:

https://gerrit.ovirt.org/111401

Sorry for the noise.


>
>
> [3]
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7382/artifact/exported-artifacts/test_logs/basic-suite-master/post-002_bootstrap_pytest.py/lago-basic-suite-master-engine/_var_log/ost-engine-backup/log.txt/*view*/
>
> with sqlite3non-zero return code
>
> Didi, is this related to the new sqlite change?
>
>
> 13:17:47 FAILED

-- 
Didi
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/FJIBH4Z3F6QPZGO5S6WOE44TWUWK3PYD/


[ovirt-devel] Re: OST fails during 002_bootstrap_pytest

2020-09-23 Thread Marcin Sobczyk



On 9/23/20 4:26 PM, Nir Soffer wrote:

On Wed, Sep 23, 2020 at 4:42 PM Vojtech Juranek  wrote:

Hi,
can anybody look at OST? It fails constantly with the error below.
See e.g. [1, 2] for full logs.
Thanks
Vojta

[1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7381/
[2] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7382/

13:07:16 ../basic-suite-master/test-scenarios/
002_bootstrap_pytest.py::test_verify_engine_backup [WARNING]: Invalid
characters were found in group names but not replaced, use
13:07:22 -vvvv to see details
13:07:22 /usr/lib/python2.7/site-packages/requests/__init__.py:91:
RequestsDependencyWarning: urllib3 (1.25.10) or chardet (3.0.4) doesn't match
a supported version!
13:07:22   RequestsDependencyWarning)
13:07:22 lago-basic-suite-master-engine | CHANGED => {
13:07:22 "changed": true,
13:07:22 "gid": 0,
13:07:22 "group": "root",
13:07:22 "mode": "0755",
13:07:22 "owner": "root",
13:07:22 "path": "/var/log/ost-engine-backup",
13:07:22 "secontext": "unconfined_u:object_r:var_log_t:s0",
13:07:22 "size": 6,
13:07:22 "state": "directory",
13:07:22 "uid": 0
13:07:22 }

13:07:44 [WARNING]: Invalid characters were found in group names but not
replaced, use
13:07:44 -vvvv to see details
13:07:44 /usr/lib/python2.7/site-packages/requests/__init__.py:91:
RequestsDependencyWarning: urllib3 (1.25.10) or chardet (3.0.4) doesn't match
a supported version!
13:07:44   RequestsDependencyWarning)
13:07:44 lago-basic-suite-master-engine | FAILED | rc=1 >>
13:07:44 Start of engine-backup with mode 'backup'
13:07:44 scope: all
13:07:44 archive file: /var/log/ost-engine-backup/backup.tgz
13:07:44 log file: /var/log/ost-engine-backup/log.txt
13:07:44 Backing up:
13:07:44 Notifying engine
13:07:44 - Files
13:07:44 - Engine database 'engine'
13:07:44 - DWH database 'ovirt_engine_history'
13:07:44 - Grafana database '/var/lib/grafana/grafana.db'
13:07:44 Notifying engineFATAL: failed to backup /var/lib/grafana/grafana.db

A more descriptive error message can be found here [3]:

2020-09-23 08:16:09 94947: Backing up grafana database to 
/tmp/engine-backup.sHM28RhfZI/tar/db/grafana.db
/usr/bin/engine-backup: line 1098: sqlite3: command not found
2020-09-23 08:16:09 94947: FATAL: failed to backup /var/lib/grafana/grafana.db 
with sqlite3



[3] 
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7382/artifact/exported-artifacts/test_logs/basic-suite-master/post-002_bootstrap_pytest.py/lago-basic-suite-master-engine/_var_log/ost-engine-backup/log.txt/*view*/
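
So the root cause is simply a missing sqlite3 binary on the engine VM. As a
hedged sketch, roughly what the failing step amounts to (the .backup form is
an assumption - engine-backup may invoke sqlite3 differently - and the
destination path is made up):

command -v sqlite3 || echo 'sqlite3 missing - install the sqlite package'
sqlite3 /var/lib/grafana/grafana.db '.backup /tmp/grafana-backup.db'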



with sqlite3non-zero return code

Didi, is this related to the new sqlite change?


13:17:47 FAILED

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/TFAO3U33L3EXUGIPU7DW476HUMPKWYJU/




[ovirt-devel] Re: OST fails during 002_bootstrap_pytest

2020-09-23 Thread Nir Soffer
On Wed, Sep 23, 2020 at 4:42 PM Vojtech Juranek  wrote:
>
> Hi,
> can anybody look at OST? It fails constantly with the error below.
> See e.g. [1, 2] for full logs.
> Thanks
> Vojta
>
> [1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7381/
> [2] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7382/
>
> 13:07:16 ../basic-suite-master/test-scenarios/
> 002_bootstrap_pytest.py::test_verify_engine_backup [WARNING]: Invalid
> characters were found in group names but not replaced, use
> 13:07:22 -vvvv to see details
> 13:07:22 /usr/lib/python2.7/site-packages/requests/__init__.py:91:
> RequestsDependencyWarning: urllib3 (1.25.10) or chardet (3.0.4) doesn't match
> a supported version!
> 13:07:22   RequestsDependencyWarning)
> 13:07:22 lago-basic-suite-master-engine | CHANGED => {
> 13:07:22 "changed": true,
> 13:07:22 "gid": 0,
> 13:07:22 "group": "root",
> 13:07:22 "mode": "0755",
> 13:07:22 "owner": "root",
> 13:07:22 "path": "/var/log/ost-engine-backup",
> 13:07:22 "secontext": "unconfined_u:object_r:var_log_t:s0",
> 13:07:22 "size": 6,
> 13:07:22 "state": "directory",
> 13:07:22 "uid": 0
> 13:07:22 }
>
> 13:07:44 [WARNING]: Invalid characters were found in group names but not
> replaced, use
> 13:07:44 -vvvv to see details
> 13:07:44 /usr/lib/python2.7/site-packages/requests/__init__.py:91:
> RequestsDependencyWarning: urllib3 (1.25.10) or chardet (3.0.4) doesn't match
> a supported version!
> 13:07:44   RequestsDependencyWarning)
> 13:07:44 lago-basic-suite-master-engine | FAILED | rc=1 >>
> 13:07:44 Start of engine-backup with mode 'backup'
> 13:07:44 scope: all
> 13:07:44 archive file: /var/log/ost-engine-backup/backup.tgz
> 13:07:44 log file: /var/log/ost-engine-backup/log.txt
> 13:07:44 Backing up:
> 13:07:44 Notifying engine
> 13:07:44 - Files
> 13:07:44 - Engine database 'engine'
> 13:07:44 - DWH database 'ovirt_engine_history'
> 13:07:44 - Grafana database '/var/lib/grafana/grafana.db'
> 13:07:44 Notifying engineFATAL: failed to backup /var/lib/grafana/grafana.db
> with sqlite3non-zero return code

Didi, is this related to the new sqlite change?

> 13:17:47 FAILED
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/TFAO3U33L3EXUGIPU7DW476HUMPKWYJU/


[ovirt-devel] Re: OST fails in 002_bootstrap_pytest.py - setup_storage.sh

2020-03-31 Thread Galit Rosenthal
After investigating, it looks like the issues started when this patch was
merged.

Marcin, can you help debug it?

https://gerrit.ovirt.org/#/c/107399/

Thanks
Galit

On Mon, Mar 30, 2020 at 6:42 PM Martin Perina  wrote:

>
>
> On Mon, Mar 30, 2020 at 5:38 PM Galit Rosenthal 
> wrote:
>
>> It looks like the local repo stops running.
>> When I run curl before the failure just to check the status, I can see it
>> isn't accessible.
>>
>> I'm trying to see where it fails or what causes it to fail.
>>
>> I managed to reproduce it on BM
>>
>
> I thought that moving setup_storage would mitigate the issue:
> https://gerrit.ovirt.org/#/c/107989/
> But it just postponed the error to a later phase; now adding a host fails
> with the same issue: Failed to download metadata for repo 'alocalsync'
>
> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/6710
>
> So Galit, please take a look - oVirt CQ has been suffering from this issue
> for more than a week now
>
>>
>> On Mon, Mar 30, 2020 at 6:23 PM Marcin Sobczyk 
>> wrote:
>>
>>> Hi Galit
>>>
>>> I can see the issue again - now in manual OST runs:
>>>
>>>
>>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/6711/consoleFull#L2,856
>>>
>>> Regards, Marcin
>>>
>>> On 3/23/20 10:09 PM, Marcin Sobczyk wrote:
>>>
>>>
>>>
>>> On 3/23/20 8:51 PM, Galit Rosenthal wrote:
>>>
>>> I run it now locally using the extra sources as it runs in the CQ and it
>>> didn't fail for me.
>>>
>>> I will continue to investigate tomorrow,
>>>
>>> Marcin, did you see this issue also in check_patch or only in CQ?
>>>
>>> I wasn't aware of the issue till Nir raised it - I was working with the
>>> patch previously
>>> and both check-patch and manual runs were fine. I think it concerns only
>>> CQ then.
>>>
>>> Regards,
>>> Galit
>>>
>>> On Mon, Mar 23, 2020 at 4:29 PM Galit Rosenthal 
>>> wrote:
>>>
 I will look at it.

 On Mon, Mar 23, 2020 at 4:18 PM Martin Perina 
 wrote:

>
>
> On Mon, Mar 23, 2020 at 3:16 PM Marcin Sobczyk 
> wrote:
>
>>
>>
>> On 3/23/20 3:10 PM, Marcin Sobczyk wrote:
>> >
>> >
>> > On 3/23/20 2:53 PM, Nir Soffer wrote:
>> >> On Mon, Mar 23, 2020 at 3:26 PM Marcin Sobczyk <
>> msobc...@redhat.com>
>> >> wrote:
>> >>>
>> >>>
>> >>> On 3/23/20 2:17 PM, Nir Soffer wrote:
>>  On Mon, Mar 23, 2020 at 1:25 PM Marcin Sobczyk
>>   wrote:
>> >
>> > On 3/21/20 1:18 AM, Nir Soffer wrote:
>> >
>> > On Fri, Mar 20, 2020 at 9:35 PM Nir Soffer 
>>
>> > wrote:
>> >> Looks like infrastructure issue setting up storage on engine
>> host.
>> >>
>> >> Here are 2 failing builds with unrelated changes:
>> >> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6677/
>> >> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6678/
>> > Rebuilding still fails in setup_storage:
>> >
>> >
>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6679/testReport/
>> >
>> >
>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6680/testReport/
>> >
>> >
>> >> Is this a known issue?
>> >>
>> >> Error Message
>> >>
>> >> AssertionError: setup_storage.sh failed. Exit code is 1 assert
>> 1
>> >> == 0   -1   +0
>> >>
>> >> Stacktrace
>> >>
>> >> prefix = > 0x7f6fd2b998d0>
>> >>
>> >>   @pytest.mark.run(order=14)
>> >>   def test_configure_storage(prefix):
>> >>   engine = prefix.virt_env.engine_vm()
>> >>   result = engine.ssh(
>> >>   [
>> >>   '/tmp/setup_storage.sh',
>> >>   ],
>> >>   )
>> >>> assert result.code == 0, 'setup_storage.sh failed.
>> Exit
>> >>> code is %s' % result.code
>> >> E   AssertionError: setup_storage.sh failed. Exit code is 1
>> >> E   assert 1 == 0
>> >> E -1
>> >> E +0
>> >>
>> >>
>> >> The pytest traceback is nice, but in this case it is does not
>> >> show any useful info.
>> >>
>> >> Since we run a script using ssh, the error message should
>> include
>> >> the process stdout and stderr
>> >> which probably can explain the failure.
>> > I posted https://gerrit.ovirt.org/#/c/107830/ to improve
>> logging
>> > during storage setup.
>> > Unfortunately AFAICS it didn't fail, so I guess we'll have to
>> > merge it and wait for a failed job to get some helpful logs.
>>  Thanks.
>> 
>>  It still fails for me with current code:
>> 
>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6689/testReport/
>> 
>> 

[ovirt-devel] Re: OST fails in 002_bootstrap_pytest.py - setup_storage.sh

2020-03-30 Thread Martin Perina
On Mon, Mar 30, 2020 at 5:38 PM Galit Rosenthal  wrote:

> It looks like the local repo stops running.
> When I run curl before the failure just to check the status, I can see it
> isn't accessible.
>
> I'm trying to see where it fails or what causes it to fail.
>
> I managed to reproduce it on BM
>

I thought that moving setup_storage would mitigate the issue:
https://gerrit.ovirt.org/#/c/107989/
But it just postponed the error to a later phase; now adding a host fails
with the same issue: Failed to download metadata for repo 'alocalsync'

https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/6710

So Galit, please take a look - oVirt CQ has been suffering from this issue
for more than a week now
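
The failing phase boils down to a metadata fetch against that repo, so a
quick hedged check on an affected VM (repo id taken from the error above):

dnf --disablerepo='*' --enablerepo=alocalsync makecache
# reproduces "Failed to download metadata for repo 'alocalsync'" once the
# local repo server has gone away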

>
> On Mon, Mar 30, 2020 at 6:23 PM Marcin Sobczyk 
> wrote:
>
>> Hi Galit
>>
>> I can see the issue again - now in manual OST runs:
>>
>>
>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/6711/consoleFull#L2,856
>>
>> Regards, Marcin
>>
>> On 3/23/20 10:09 PM, Marcin Sobczyk wrote:
>>
>>
>>
>> On 3/23/20 8:51 PM, Galit Rosenthal wrote:
>>
>> I run it now locally using the extra sources as it runs in the CQ and it
>> didn't fail for me.
>>
>> I will continue to investigate tomorrow,
>>
>> Marcin, did you see this issue also in check_patch or only in CQ?
>>
>> I wasn't aware of the issue till Nir raised it - I was working with the
>> patch previously
>> and both check-patch and manual runs were fine. I think it concerns only
>> CQ then.
>>
>> Regards,
>> Galit
>>
>> On Mon, Mar 23, 2020 at 4:29 PM Galit Rosenthal 
>> wrote:
>>
>>> I will look at it.
>>>
>>> On Mon, Mar 23, 2020 at 4:18 PM Martin Perina 
>>> wrote:
>>>


 On Mon, Mar 23, 2020 at 3:16 PM Marcin Sobczyk 
 wrote:

>
>
> On 3/23/20 3:10 PM, Marcin Sobczyk wrote:
> >
> >
> > On 3/23/20 2:53 PM, Nir Soffer wrote:
> >> On Mon, Mar 23, 2020 at 3:26 PM Marcin Sobczyk 
>
> >> wrote:
> >>>
> >>>
> >>> On 3/23/20 2:17 PM, Nir Soffer wrote:
>  On Mon, Mar 23, 2020 at 1:25 PM Marcin Sobczyk
>   wrote:
> >
> > On 3/21/20 1:18 AM, Nir Soffer wrote:
> >
> > On Fri, Mar 20, 2020 at 9:35 PM Nir Soffer 
> > wrote:
> >> Looks like infrastructure issue setting up storage on engine
> host.
> >>
> >> Here are 2 failing builds with unrelated changes:
> >> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6677/
> >> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6678/
> > Rebuilding still fails in setup_storage:
> >
> >
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6679/testReport/
> >
> >
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6680/testReport/
> >
> >
> >> Is this a known issue?
> >>
> >> Error Message
> >>
> >> AssertionError: setup_storage.sh failed. Exit code is 1 assert
> 1
> >> == 0   -1   +0
> >>
> >> Stacktrace
> >>
> >> prefix = 
> >>
> >>   @pytest.mark.run(order=14)
> >>   def test_configure_storage(prefix):
> >>   engine = prefix.virt_env.engine_vm()
> >>   result = engine.ssh(
> >>   [
> >>   '/tmp/setup_storage.sh',
> >>   ],
> >>   )
> >>> assert result.code == 0, 'setup_storage.sh failed.
> Exit
> >>> code is %s' % result.code
> >> E   AssertionError: setup_storage.sh failed. Exit code is 1
> >> E   assert 1 == 0
> >> E -1
> >> E +0
> >>
> >>
> >> The pytest traceback is nice, but in this case it is does not
> >> show any useful info.
> >>
> >> Since we run a script using ssh, the error message should
> include
> >> the process stdout and stderr
> >> which probably can explain the failure.
> > I posted https://gerrit.ovirt.org/#/c/107830/ to improve
> logging
> > during storage setup.
> > Unfortunately AFAICS it didn't fail, so I guess we'll have to
> > merge it and wait for a failed job to get some helpful logs.
>  Thanks.
> 
>  It still fails for me with current code:
> 
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6689/testReport/
> 
> 
>  Same when using current vdsm master.
> >>> Updated the patch according to your suggestions and currently
> trying
> >>> out
> >>> OST for the 4th time -
> >>> all previous runs succeeded. I guess I'm out of luck :)
> >> It succeeds on your local OST setup but fail on Jenkins?
> > No, I mean jenkins - both check-patch runs didn't fail on this
> script.
> > I also tried running OST 

[ovirt-devel] Re: OST fails in 002_bootstrap_pytest.py - setup_storage.sh

2020-03-30 Thread Galit Rosenthal
It looks like the local repo stops running.
When I run curl before the failure just to check the status, I can see it
isn't accessible.

I'm trying to see where it fails or what causes it to fail.

I managed to reproduce it on BM
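
That curl probe would look roughly like this (a hedged sketch; the URL is a
placeholder for the run's local repo address):

curl -fsS "$LOCAL_REPO_URL/repodata/repomd.xml" -o /dev/null \
  && echo 'repo reachable' || echo 'repo gone'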

On Mon, Mar 30, 2020 at 6:23 PM Marcin Sobczyk  wrote:

> Hi Galit
>
> I can see the issue again - now in manual OST runs:
>
>
> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/6711/consoleFull#L2,856
>
> Regards, Marcin
>
> On 3/23/20 10:09 PM, Marcin Sobczyk wrote:
>
>
>
> On 3/23/20 8:51 PM, Galit Rosenthal wrote:
>
> I run it now locally using the extra sources as it runs in the CQ and it
> didn't fail for me.
>
> I will continue to investigate tomorrow,
>
> Marcin, did you see this issue also in check_patch or only in CQ?
>
> I wasn't aware of the issue till Nir raised it - I was working with the
> patch previously
> and both check-patch and manual runs were fine. I think it concerns only
> CQ then.
>
> Regards,
> Galit
>
> On Mon, Mar 23, 2020 at 4:29 PM Galit Rosenthal 
> wrote:
>
>> I will look at it.
>>
>> On Mon, Mar 23, 2020 at 4:18 PM Martin Perina  wrote:
>>
>>>
>>>
>>> On Mon, Mar 23, 2020 at 3:16 PM Marcin Sobczyk 
>>> wrote:
>>>


 On 3/23/20 3:10 PM, Marcin Sobczyk wrote:
 >
 >
 > On 3/23/20 2:53 PM, Nir Soffer wrote:
 >> On Mon, Mar 23, 2020 at 3:26 PM Marcin Sobczyk 

 >> wrote:
 >>>
 >>>
 >>> On 3/23/20 2:17 PM, Nir Soffer wrote:
  On Mon, Mar 23, 2020 at 1:25 PM Marcin Sobczyk
   wrote:
 >
 > On 3/21/20 1:18 AM, Nir Soffer wrote:
 >
 > On Fri, Mar 20, 2020 at 9:35 PM Nir Soffer 
 > wrote:
 >> Looks like infrastructure issue setting up storage on engine
 host.
 >>
 >> Here are 2 failing builds with unrelated changes:
 >> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6677/
 >> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6678/
 > Rebuilding still fails in setup_storage:
 >
 >
 https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6679/testReport/
 >
 >
 https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6680/testReport/
 >
 >
 >> Is this a known issue?
 >>
 >> Error Message
 >>
 >> AssertionError: setup_storage.sh failed. Exit code is 1 assert 1
 >> == 0   -1   +0
 >>
 >> Stacktrace
 >>
 >> prefix = 
 >>
 >>   @pytest.mark.run(order=14)
 >>   def test_configure_storage(prefix):
 >>   engine = prefix.virt_env.engine_vm()
 >>   result = engine.ssh(
 >>   [
 >>   '/tmp/setup_storage.sh',
 >>   ],
 >>   )
 >>> assert result.code == 0, 'setup_storage.sh failed. Exit
 >>> code is %s' % result.code
 >> E   AssertionError: setup_storage.sh failed. Exit code is 1
 >> E   assert 1 == 0
 >> E -1
 >> E +0
 >>
 >>
 >> The pytest traceback is nice, but in this case it is does not
 >> show any useful info.
 >>
 >> Since we run a script using ssh, the error message should
 include
 >> the process stdout and stderr
 >> which probably can explain the failure.
 > I posted https://gerrit.ovirt.org/#/c/107830/ to improve logging
 > during storage setup.
 > Unfortunately AFAICS it didn't fail, so I guess we'll have to
 > merge it and wait for a failed job to get some helpful logs.
  Thanks.
 
  It still fails for me with current code:
 
 https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6689/testReport/
 
 
  Same when using current vdsm master.
 >>> Updated the patch according to your suggestions and currently
 trying
 >>> out
 >>> OST for the 4th time -
 >>> all previous runs succeeded. I guess I'm out of luck :)
 >> It succeeds on your local OST setup but fail on Jenkins?
 > No, I mean jenkins - both check-patch runs didn't fail on this script.
 > I also tried running OST manually twice and same thing happened.
 > Anyway - the patch has been merged now so if any failure occurs in CQ
 > we should know what's going on.
 Ok, finally caught a failure in CQ [1]:

 [2020-03-23T14:14:09.836Z] if result.code != 0:
 [2020-03-23T14:14:09.836Z]     msg = (
 [2020-03-23T14:14:09.836Z]         'setup_storage.sh failed with exit code: {}.\n'
 [2020-03-23T14:14:09.836Z]         'stdout:\n{}'
 [2020-03-23T14:14:09.836Z]         'stderr:\n{}'
 [2020-03-23T14:14:09.836Z]     ).format(result.code, result.out, result.err)
 

[ovirt-devel] Re: OST fails in 002_bootstrap_pytest.py - setup_storage.sh

2020-03-30 Thread Marcin Sobczyk

Hi Galit

I can see the issue again - now in manual OST runs:

https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/6711/consoleFull#L2,856

Regards, Marcin

On 3/23/20 10:09 PM, Marcin Sobczyk wrote:



On 3/23/20 8:51 PM, Galit Rosenthal wrote:
I run it now locally using the extra sources as it runs in the CQ and 
it didn't fail for me.


I will continue to investigate tomorrow,

Marcin, did you see this issue also in check_patch or only in CQ?
I wasn't aware of the issue till Nir raised it - I was working with 
the patch previously
and both check-patch and manual runs were fine. I think it concerns 
only CQ then.



Regards,
Galit

On Mon, Mar 23, 2020 at 4:29 PM Galit Rosenthal wrote:


I will look at it.

On Mon, Mar 23, 2020 at 4:18 PM Martin Perina wrote:



On Mon, Mar 23, 2020 at 3:16 PM Marcin Sobczyk wrote:



On 3/23/20 3:10 PM, Marcin Sobczyk wrote:
>
>
> On 3/23/20 2:53 PM, Nir Soffer wrote:
>> On Mon, Mar 23, 2020 at 3:26 PM Marcin Sobczyk wrote:
>>>
>>>
>>> On 3/23/20 2:17 PM, Nir Soffer wrote:
 On Mon, Mar 23, 2020 at 1:25 PM Marcin Sobczyk wrote:
wrote:
>
> On 3/21/20 1:18 AM, Nir Soffer wrote:
>
> On Fri, Mar 20, 2020 at 9:35 PM Nir Soffer
mailto:nsof...@redhat.com>>
> wrote:
>> Looks like infrastructure issue setting up storage
on engine host.
>>
>> Here are 2 failing builds with unrelated changes:
>>
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6677/
>>
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6678/
> Rebuilding still fails in setup_storage:
>
>

https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6679/testReport/

>
>

https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6680/testReport/

>
>
>> Is this a known issue?
>>
>> Error Message
>>
>> AssertionError: setup_storage.sh failed. Exit code
is 1 assert 1
>> == 0   -1   +0
>>
>> Stacktrace
>>
>> prefix = 
>>
>> @pytest.mark.run(order=14)
>>   def test_configure_storage(prefix):
>>   engine = prefix.virt_env.engine_vm()
>>   result = engine.ssh(
>>   [
>> '/tmp/setup_storage.sh',
>>   ],
>>   )
>>>     assert result.code == 0,
'setup_storage.sh failed. Exit
>>> code is %s' % result.code
>> E   AssertionError: setup_storage.sh failed.
Exit code is 1
>> E   assert 1 == 0
>> E -1
>> E +0
>>
>>
>> The pytest traceback is nice, but in this case it
is does not
>> show any useful info.
>>
>> Since we run a script using ssh, the error message
should include
>> the process stdout and stderr
>> which probably can explain the failure.
> I posted https://gerrit.ovirt.org/#/c/107830/ to
improve logging
> during storage setup.
> Unfortunately AFAICS it didn't fail, so I guess
we'll have to
> merge it and wait for a failed job to get some
helpful logs.
 Thanks.

 It still fails for me with current code:


https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6689/testReport/



 Same when using current vdsm master.
>>> Updated the patch according to your suggestions and
currently trying
>>> out
>>> OST for the 4th time -
>>> all previous runs succeeded. I guess I'm out of luck :)
>> It succeeds on your local OST setup but fail on Jenkins?
> No, I mean jenkins - both check-patch runs didn't fail
on this script.
> I also tried running OST manually twice and same thing
happened.
> Anyway - the patch has been merged now 

[ovirt-devel] Re: OST fails in 002_bootstrap_pytest.py - setup_storage.sh

2020-03-23 Thread Marcin Sobczyk



On 3/23/20 8:51 PM, Galit Rosenthal wrote:
I run it now locally using the extra sources as it runs in the CQ and 
it didn't fail for me.


I will continue to investigate tomorrow,

Marcin, did you see this issue also in check_patch or only in CQ?
I wasn't aware of the issue till Nir raised it - I was working with the 
patch previously
and both check-patch and manual runs were fine. I think it concerns only 
CQ then.



Regards,
Galit

On Mon, Mar 23, 2020 at 4:29 PM Galit Rosenthal wrote:


I will look at it.

On Mon, Mar 23, 2020 at 4:18 PM Martin Perina wrote:



On Mon, Mar 23, 2020 at 3:16 PM Marcin Sobczyk wrote:



On 3/23/20 3:10 PM, Marcin Sobczyk wrote:
>
>
> On 3/23/20 2:53 PM, Nir Soffer wrote:
>> On Mon, Mar 23, 2020 at 3:26 PM Marcin Sobczyk wrote:
>>>
>>>
>>> On 3/23/20 2:17 PM, Nir Soffer wrote:
 On Mon, Mar 23, 2020 at 1:25 PM Marcin Sobczyk wrote:
>
> On 3/21/20 1:18 AM, Nir Soffer wrote:
>
> On Fri, Mar 20, 2020 at 9:35 PM Nir Soffer wrote:
>> Looks like infrastructure issue setting up storage
on engine host.
>>
>> Here are 2 failing builds with unrelated changes:
>>
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6677/
>>
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6678/
> Rebuilding still fails in setup_storage:
>
>

https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6679/testReport/

>
>

https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6680/testReport/

>
>
>> Is this a known issue?
>>
>> Error Message
>>
>> AssertionError: setup_storage.sh failed. Exit code
is 1 assert 1
>> == 0   -1   +0
>>
>> Stacktrace
>>
>> prefix = 
>>
>> @pytest.mark.run(order=14)
>>   def test_configure_storage(prefix):
>>   engine = prefix.virt_env.engine_vm()
>>   result = engine.ssh(
>>   [
>> '/tmp/setup_storage.sh',
>>   ],
>>   )
>>>     assert result.code == 0, 'setup_storage.sh
failed. Exit
>>> code is %s' % result.code
>> E   AssertionError: setup_storage.sh failed.
Exit code is 1
>> E   assert 1 == 0
>> E -1
>> E +0
>>
>>
>> The pytest traceback is nice, but in this case it
is does not
>> show any useful info.
>>
>> Since we run a script using ssh, the error message
should include
>> the process stdout and stderr
>> which probably can explain the failure.
> I posted https://gerrit.ovirt.org/#/c/107830/ to
improve logging
> during storage setup.
> Unfortunately AFAICS it didn't fail, so I guess
we'll have to
> merge it and wait for a failed job to get some
helpful logs.
 Thanks.

 It still fails for me with current code:


https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6689/testReport/



 Same when using current vdsm master.
>>> Updated the patch according to your suggestions and
currently trying
>>> out
>>> OST for the 4th time -
>>> all previous runs succeeded. I guess I'm out of luck :)
>> It succeeds on your local OST setup but fail on Jenkins?
> No, I mean jenkins - both check-patch runs didn't fail
on this script.
> I also tried running OST manually twice and same thing
happened.
> Anyway - the patch has been merged now so if any failure
occurs in CQ
> we should know what's going on.
Ok, finally caught a failure in CQ [1]:

[2020-03-23T14:14:09.836Z] if result.code != 0:

[ovirt-devel] Re: OST fails in 002_bootstrap_pytest.py - setup_storage.sh

2020-03-23 Thread Galit Rosenthal
I ran it just now locally using the extra sources, as it runs in the CQ, and it
didn't fail for me.

I will continue to investigate tomorrow.

Marcin, did you see this issue also in check_patch or only in CQ?
Regards,
Galit

On Mon, Mar 23, 2020 at 4:29 PM Galit Rosenthal  wrote:

> I will look at it.
>
> On Mon, Mar 23, 2020 at 4:18 PM Martin Perina  wrote:
>
>>
>>
>> On Mon, Mar 23, 2020 at 3:16 PM Marcin Sobczyk 
>> wrote:
>>
>>>
>>>
>>> On 3/23/20 3:10 PM, Marcin Sobczyk wrote:
>>> >
>>> >
>>> > On 3/23/20 2:53 PM, Nir Soffer wrote:
>>> >> On Mon, Mar 23, 2020 at 3:26 PM Marcin Sobczyk 
>>> >> wrote:
>>> >>>
>>> >>>
>>> >>> On 3/23/20 2:17 PM, Nir Soffer wrote:
>>>  On Mon, Mar 23, 2020 at 1:25 PM Marcin Sobczyk
>>>   wrote:
>>> >
>>> > On 3/21/20 1:18 AM, Nir Soffer wrote:
>>> >
>>> > On Fri, Mar 20, 2020 at 9:35 PM Nir Soffer 
>>> > wrote:
>>> >> Looks like an infrastructure issue setting up storage on engine host.
>>> >>
>>> >> Here are 2 failing builds with unrelated changes:
>>> >> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6677/
>>> >> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6678/
>>> > Rebuilding still fails in setup_storage:
>>> >
>>> >
>>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6679/testReport/
>>> >
>>> >
>>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6680/testReport/
>>> >
>>> >
>>> >> Is this a known issue?
>>> >>
>>> >> Error Message
>>> >>
>>> >> AssertionError: setup_storage.sh failed. Exit code is 1 assert 1
>>> >> == 0   -1   +0
>>> >>
>>> >> Stacktrace
>>> >>
>>> >> prefix = 
>>> >>
>>> >>   @pytest.mark.run(order=14)
>>> >>   def test_configure_storage(prefix):
>>> >>   engine = prefix.virt_env.engine_vm()
>>> >>   result = engine.ssh(
>>> >>   [
>>> >>   '/tmp/setup_storage.sh',
>>> >>   ],
>>> >>   )
>>> >>> assert result.code == 0, 'setup_storage.sh failed. Exit
>>> >>> code is %s' % result.code
>>> >> E   AssertionError: setup_storage.sh failed. Exit code is 1
>>> >> E   assert 1 == 0
>>> >> E -1
>>> >> E +0
>>> >>
>>> >>
>>> >> The pytest traceback is nice, but in this case it does not
>>> >> show any useful info.
>>> >>
>>> >> Since we run a script using ssh, the error message should include
>>> >> the process stdout and stderr
>>> >> which probably can explain the failure.
>>> > I posted https://gerrit.ovirt.org/#/c/107830/ to improve logging
>>> > during storage setup.
>>> > Unfortunately AFAICS it didn't fail, so I guess we'll have to
>>> > merge it and wait for a failed job to get some helpful logs.
>>>  Thanks.
>>> 
>>>  It still fails for me with current code:
>>> 
>>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6689/testReport/
>>> 
>>> 
>>>  Same when using current vdsm master.
>>> >>> Updated the patch according to your suggestions and currently trying
>>> >>> out
>>> >>> OST for the 4th time -
>>> >>> all previous runs succeeded. I guess I'm out of luck :)
>>> >> It succeeds on your local OST setup but fails on Jenkins?
>>> > No, I mean jenkins - both check-patch runs didn't fail on this script.
>>> > I also tried running OST manually twice and same thing happened.
>>> > Anyway - the patch has been merged now so if any failure occurs in CQ
>>> > we should know what's going on.
>>> Ok, finally caught a failure in CQ [1]:
>>>
>>> [2020-03-23T14:14:09.836Z] if result.code != 0:
>>> [2020-03-23T14:14:09.836Z] msg = (
>>> [2020-03-23T14:14:09.836Z] 'setup_storage.sh failed with
>>> exit code: {}.\n'
>>> [2020-03-23T14:14:09.836Z] 'stdout:\n{}'
>>> [2020-03-23T14:14:09.836Z] 'stderr:\n{}'
>>> [2020-03-23T14:14:09.836Z] ).format(result.code, result.out,
>>> result.err)
>>> [2020-03-23T14:14:09.836Z] >   raise RuntimeError(msg)
>>> [2020-03-23T14:14:09.836Z] E   RuntimeError: setup_storage.sh
>>> failed with exit code: 1.
>>> [2020-03-23T14:14:09.836Z] E   stdout:
>>> [2020-03-23T14:14:09.836Z] E   Reposync & Extra Sources
>>> Content0.0  B/s |   0  B 00:00
>>> [2020-03-23T14:14:09.836Z] E   stderr:
>>> [2020-03-23T14:14:09.836Z] E   + set -xe
>>> [2020-03-23T14:14:09.836Z] E   +
>>> MAIN_NFS_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_2
>>> [2020-03-23T14:14:09.836Z] E   +
>>> ISCSI_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_3
>>> [2020-03-23T14:14:09.836Z] E   + NUM_LUNS=5
>>> [2020-03-23T14:14:09.836Z] E   ++ uname -r
>>> [2020-03-23T14:14:09.836Z] E   ++ awk -F. '{print $(NF-1)}'
>>> [2020-03-23T14:14:09.836Z] E   + DIST=el8_1
>>> [2020-03-23T14:14:09.836Z] E   + 

[ovirt-devel] Re: OST fails in 002_bootstrap_pytest.py - setup_storage.sh

2020-03-23 Thread Galit Rosenthal
I will look at it.

On Mon, Mar 23, 2020 at 4:18 PM Martin Perina  wrote:

>
>
> On Mon, Mar 23, 2020 at 3:16 PM Marcin Sobczyk 
> wrote:
>
>>
>>
>> On 3/23/20 3:10 PM, Marcin Sobczyk wrote:
>> >
>> >
>> > On 3/23/20 2:53 PM, Nir Soffer wrote:
>> >> On Mon, Mar 23, 2020 at 3:26 PM Marcin Sobczyk 
>> >> wrote:
>> >>>
>> >>>
>> >>> On 3/23/20 2:17 PM, Nir Soffer wrote:
>>  On Mon, Mar 23, 2020 at 1:25 PM Marcin Sobczyk
>>   wrote:
>> >
>> > On 3/21/20 1:18 AM, Nir Soffer wrote:
>> >
>> > On Fri, Mar 20, 2020 at 9:35 PM Nir Soffer 
>> > wrote:
>> >> Looks like an infrastructure issue setting up storage on engine host.
>> >>
>> >> Here are 2 failing builds with unrelated changes:
>> >> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6677/
>> >> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6678/
>> > Rebuilding still fails in setup_storage:
>> >
>> >
>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6679/testReport/
>> >
>> >
>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6680/testReport/
>> >
>> >
>> >> Is this a known issue?
>> >>
>> >> Error Message
>> >>
>> >> AssertionError: setup_storage.sh failed. Exit code is 1 assert 1
>> >> == 0   -1   +0
>> >>
>> >> Stacktrace
>> >>
>> >> prefix = 
>> >>
>> >>   @pytest.mark.run(order=14)
>> >>   def test_configure_storage(prefix):
>> >>   engine = prefix.virt_env.engine_vm()
>> >>   result = engine.ssh(
>> >>   [
>> >>   '/tmp/setup_storage.sh',
>> >>   ],
>> >>   )
>> >>> assert result.code == 0, 'setup_storage.sh failed. Exit
>> >>> code is %s' % result.code
>> >> E   AssertionError: setup_storage.sh failed. Exit code is 1
>> >> E   assert 1 == 0
>> >> E -1
>> >> E +0
>> >>
>> >>
>> >> The pytest traceback is nice, but in this case it does not
>> >> show any useful info.
>> >>
>> >> Since we run a script using ssh, the error message should include
>> >> the process stdout and stderr
>> >> which probably can explain the failure.
>> > I posted https://gerrit.ovirt.org/#/c/107830/ to improve logging
>> > during storage setup.
>> > Unfortunately AFAICS it didn't fail, so I guess we'll have to
>> > merge it and wait for a failed job to get some helpful logs.
>>  Thanks.
>> 
>>  It still fails for me with current code:
>> 
>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6689/testReport/
>> 
>> 
>>  Same when using current vdsm master.
>> >>> Updated the patch according to your suggestions and currently trying
>> >>> out
>> >>> OST for the 4th time -
>> >>> all previous runs succeeded. I guess I'm out of luck :)
>> >> It succeeds on your local OST setup but fails on Jenkins?
>> > No, I mean jenkins - both check-patch runs didn't fail on this script.
>> > I also tried running OST manually twice and same thing happened.
>> > Anyway - the patch has been merged now so if any failure occurs in CQ
>> > we should know what's going on.
>> Ok, finally caught a failure in CQ [1]:
>>
>> [2020-03-23T14:14:09.836Z] if result.code != 0:
>> [2020-03-23T14:14:09.836Z] msg = (
>> [2020-03-23T14:14:09.836Z] 'setup_storage.sh failed with
>> exit code: {}.\n'
>> [2020-03-23T14:14:09.836Z] 'stdout:\n{}'
>> [2020-03-23T14:14:09.836Z] 'stderr:\n{}'
>> [2020-03-23T14:14:09.836Z] ).format(result.code, result.out,
>> result.err)
>> [2020-03-23T14:14:09.836Z] >   raise RuntimeError(msg)
>> [2020-03-23T14:14:09.836Z] E   RuntimeError: setup_storage.sh
>> failed with exit code: 1.
>> [2020-03-23T14:14:09.836Z] E   stdout:
>> [2020-03-23T14:14:09.836Z] E   Reposync & Extra Sources
>> Content0.0  B/s |   0  B 00:00
>> [2020-03-23T14:14:09.836Z] E   stderr:
>> [2020-03-23T14:14:09.836Z] E   + set -xe
>> [2020-03-23T14:14:09.836Z] E   +
>> MAIN_NFS_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_2
>> [2020-03-23T14:14:09.836Z] E   +
>> ISCSI_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_3
>> [2020-03-23T14:14:09.836Z] E   + NUM_LUNS=5
>> [2020-03-23T14:14:09.836Z] E   ++ uname -r
>> [2020-03-23T14:14:09.836Z] E   ++ awk -F. '{print $(NF-1)}'
>> [2020-03-23T14:14:09.836Z] E   + DIST=el8_1
>> [2020-03-23T14:14:09.836Z] E   + main
>> [2020-03-23T14:14:09.836Z] E   ++ hostname
>> [2020-03-23T14:14:09.836Z] E   + [[
>> lago-basic-suite-master-engine == *\i\p\v\6* ]]
>> [2020-03-23T14:14:09.836Z] E   + install_deps
>> [2020-03-23T14:14:09.836Z] E   + systemctl disable --now
>> kdump.service
>> [2020-03-23T14:14:09.836Z] E   Removed
>> 

[ovirt-devel] Re: OST fails in 002_bootstrap_pytest.py - setup_storage.sh

2020-03-23 Thread Martin Perina
On Mon, Mar 23, 2020 at 3:16 PM Marcin Sobczyk  wrote:

>
>
> On 3/23/20 3:10 PM, Marcin Sobczyk wrote:
> >
> >
> > On 3/23/20 2:53 PM, Nir Soffer wrote:
> >> On Mon, Mar 23, 2020 at 3:26 PM Marcin Sobczyk 
> >> wrote:
> >>>
> >>>
> >>> On 3/23/20 2:17 PM, Nir Soffer wrote:
>  On Mon, Mar 23, 2020 at 1:25 PM Marcin Sobczyk
>   wrote:
> >
> > On 3/21/20 1:18 AM, Nir Soffer wrote:
> >
> > On Fri, Mar 20, 2020 at 9:35 PM Nir Soffer 
> > wrote:
> >> Looks like an infrastructure issue setting up storage on engine host.
> >>
> >> Here are 2 failing builds with unrelated changes:
> >> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6677/
> >> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6678/
> > Rebuilding still fails in setup_storage:
> >
> >
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6679/testReport/
> >
> >
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6680/testReport/
> >
> >
> >> Is this a known issue?
> >>
> >> Error Message
> >>
> >> AssertionError: setup_storage.sh failed. Exit code is 1 assert 1
> >> == 0   -1   +0
> >>
> >> Stacktrace
> >>
> >> prefix = 
> >>
> >>   @pytest.mark.run(order=14)
> >>   def test_configure_storage(prefix):
> >>   engine = prefix.virt_env.engine_vm()
> >>   result = engine.ssh(
> >>   [
> >>   '/tmp/setup_storage.sh',
> >>   ],
> >>   )
> >>> assert result.code == 0, 'setup_storage.sh failed. Exit
> >>> code is %s' % result.code
> >> E   AssertionError: setup_storage.sh failed. Exit code is 1
> >> E   assert 1 == 0
> >> E -1
> >> E +0
> >>
> >>
> >> The pytest traceback is nice, but in this case it does not
> >> show any useful info.
> >>
> >> Since we run a script using ssh, the error message should include
> >> the process stdout and stderr
> >> which probably can explain the failure.
> > I posted https://gerrit.ovirt.org/#/c/107830/ to improve logging
> > during storage setup.
> > Unfortunately AFAICS it didn't fail, so I guess we'll have to
> > merge it and wait for a failed job to get some helpful logs.
>  Thanks.
> 
>  It still fails for me with current code:
> 
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6689/testReport/
> 
> 
>  Same when using current vdsm master.
> >>> Updated the patch according to your suggestions and currently trying
> >>> out
> >>> OST for the 4th time -
> >>> all previous runs succeeded. I guess I'm out of luck :)
> >> It succeeds on your local OST setup but fails on Jenkins?
> > No, I mean jenkins - both check-patch runs didn't fail on this script.
> > I also tried running OST manually twice and same thing happened.
> > Anyway - the patch has been merged now so if any failure occurs in CQ
> > we should know what's going on.
> Ok, finally caught a failure in CQ [1]:
>
> [2020-03-23T14:14:09.836Z] if result.code != 0:
> [2020-03-23T14:14:09.836Z] msg = (
> [2020-03-23T14:14:09.836Z] 'setup_storage.sh failed with
> exit code: {}.\n'
> [2020-03-23T14:14:09.836Z] 'stdout:\n{}'
> [2020-03-23T14:14:09.836Z] 'stderr:\n{}'
> [2020-03-23T14:14:09.836Z] ).format(result.code, result.out,
> result.err)
> [2020-03-23T14:14:09.836Z] >   raise RuntimeError(msg)
> [2020-03-23T14:14:09.836Z] E   RuntimeError: setup_storage.sh
> failed with exit code: 1.
> [2020-03-23T14:14:09.836Z] E   stdout:
> [2020-03-23T14:14:09.836Z] E   Reposync & Extra Sources
> Content0.0  B/s |   0  B 00:00
> [2020-03-23T14:14:09.836Z] E   stderr:
> [2020-03-23T14:14:09.836Z] E   + set -xe
> [2020-03-23T14:14:09.836Z] E   +
> MAIN_NFS_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_2
> [2020-03-23T14:14:09.836Z] E   +
> ISCSI_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_3
> [2020-03-23T14:14:09.836Z] E   + NUM_LUNS=5
> [2020-03-23T14:14:09.836Z] E   ++ uname -r
> [2020-03-23T14:14:09.836Z] E   ++ awk -F. '{print $(NF-1)}'
> [2020-03-23T14:14:09.836Z] E   + DIST=el8_1
> [2020-03-23T14:14:09.836Z] E   + main
> [2020-03-23T14:14:09.836Z] E   ++ hostname
> [2020-03-23T14:14:09.836Z] E   + [[
> lago-basic-suite-master-engine == *\i\p\v\6* ]]
> [2020-03-23T14:14:09.836Z] E   + install_deps
> [2020-03-23T14:14:09.836Z] E   + systemctl disable --now
> kdump.service
> [2020-03-23T14:14:09.836Z] E   Removed
> /etc/systemd/system/multi-user.target.wants/kdump.service.
> [2020-03-23T14:14:09.836Z] E   + yum install --nogpgcheck -y
> nfs-utils rpcbind lvm2 targetcli sg3_utils iscsi-initiator-utils lsscsi
> policycoreutils-python-utils
> 

[ovirt-devel] Re: OST fails in 002_bootstrap_pytest.py - setup_storage.sh

2020-03-23 Thread Marcin Sobczyk



On 3/23/20 3:10 PM, Marcin Sobczyk wrote:



On 3/23/20 2:53 PM, Nir Soffer wrote:
On Mon, Mar 23, 2020 at 3:26 PM Marcin Sobczyk  
wrote:



On 3/23/20 2:17 PM, Nir Soffer wrote:
On Mon, Mar 23, 2020 at 1:25 PM Marcin Sobczyk 
 wrote:


On 3/21/20 1:18 AM, Nir Soffer wrote:

On Fri, Mar 20, 2020 at 9:35 PM Nir Soffer  
wrote:

Looks like an infrastructure issue setting up storage on engine host.

Here are 2 failing builds with unrelated changes:
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6677/
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6678/

Rebuilding still fails in setup_storage:

https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6679/testReport/ 

https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6680/testReport/ 




Is this a known issue?

Error Message

AssertionError: setup_storage.sh failed. Exit code is 1 assert 1 
== 0   -1   +0


Stacktrace

prefix = 

  @pytest.mark.run(order=14)
  def test_configure_storage(prefix):
  engine = prefix.virt_env.engine_vm()
  result = engine.ssh(
  [
  '/tmp/setup_storage.sh',
  ],
  )
    assert result.code == 0, 'setup_storage.sh failed. Exit 
code is %s' % result.code

E   AssertionError: setup_storage.sh failed. Exit code is 1
E   assert 1 == 0
E -1
E +0


The pytest traceback is nice, but in this case it does not
show any useful info.


Since we run a script using ssh, the error message should include 
the process stdout and stderr

which probably can explain the failure.
I posted https://gerrit.ovirt.org/#/c/107830/ to improve logging 
during storage setup.
Unfortunately AFAICS it didn't fail, so I guess we'll have to 
merge it and wait for a failed job to get some helpful logs.

Thanks.

It still fails for me with current code:
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6689/testReport/ 



Same when using current vdsm master.
Updated the patch according to your suggestions and currently trying 
out

OST for the 4th time -
all previous runs succeeded. I guess I'm out of luck :)

It succeeds on your local OST setup but fails on Jenkins?

No, I mean jenkins - both check-patch runs didn't fail on this script.
I also tried running OST manually twice and same thing happened.
Anyway - the patch has been merged now so if any failure occurs in CQ
we should know what's going on.

Ok, finally caught a failure in CQ [1]:

[2020-03-23T14:14:09.836Z] if result.code != 0:
[2020-03-23T14:14:09.836Z] msg = (
[2020-03-23T14:14:09.836Z] 'setup_storage.sh failed with 
exit code: {}.\n'

[2020-03-23T14:14:09.836Z] 'stdout:\n{}'
[2020-03-23T14:14:09.836Z] 'stderr:\n{}'
[2020-03-23T14:14:09.836Z] ).format(result.code, result.out, 
result.err)

[2020-03-23T14:14:09.836Z] >   raise RuntimeError(msg)
[2020-03-23T14:14:09.836Z] E   RuntimeError: setup_storage.sh 
failed with exit code: 1.

[2020-03-23T14:14:09.836Z] E   stdout:
[2020-03-23T14:14:09.836Z] E   Reposync & Extra Sources 
Content    0.0  B/s |   0  B 00:00

[2020-03-23T14:14:09.836Z] E   stderr:
[2020-03-23T14:14:09.836Z] E   + set -xe
[2020-03-23T14:14:09.836Z] E   + 
MAIN_NFS_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_2
[2020-03-23T14:14:09.836Z] E   + 
ISCSI_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_3

[2020-03-23T14:14:09.836Z] E   + NUM_LUNS=5
[2020-03-23T14:14:09.836Z] E   ++ uname -r
[2020-03-23T14:14:09.836Z] E   ++ awk -F. '{print $(NF-1)}'
[2020-03-23T14:14:09.836Z] E   + DIST=el8_1
[2020-03-23T14:14:09.836Z] E   + main
[2020-03-23T14:14:09.836Z] E   ++ hostname
[2020-03-23T14:14:09.836Z] E   + [[ 
lago-basic-suite-master-engine == *\i\p\v\6* ]]

[2020-03-23T14:14:09.836Z] E   + install_deps
[2020-03-23T14:14:09.836Z] E   + systemctl disable --now 
kdump.service
[2020-03-23T14:14:09.836Z] E   Removed 
/etc/systemd/system/multi-user.target.wants/kdump.service.
[2020-03-23T14:14:09.836Z] E   + yum install --nogpgcheck -y 
nfs-utils rpcbind lvm2 targetcli sg3_utils iscsi-initiator-utils lsscsi 
policycoreutils-python-utils
[2020-03-23T14:14:09.836Z] E   Failed to download metadata for 
repo 'alocalsync'
[2020-03-23T14:14:09.836Z] E   Error: Failed to download 
metadata for repo 'alocalsync'



[1] 
https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-master_change-queue-tester/detail/ovirt-master_change-queue-tester/21420/pipeline
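
(A note for whoever picks this up: the 'alocalsync' metadata failure can be
checked in isolation on the failing VM. The commands below are illustrative,
'alocalsync' being the repo id from the error above:

    # Can we fetch metadata for just this repo?
    dnf --disablerepo='*' --enablerepo='alocalsync' makecache
    # Where does the repo point? (requires dnf-plugins-core)
    dnf config-manager --dump alocalsync | grep -i baseurl

That should tell apart a broken repo definition from a server-side hiccup.)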






Also I wonder why this code is called as a test
(test_configure_storage). This looks like a setup
step, so it should run as a fixture.
That's true, but the pytest porting effort was about providing a
bare minimum to move away from nose.
Organizing the tests into proper setup/fixtures is a huge task and
will probably be implemented
incrementally in the near 

[ovirt-devel] Re: OST fails in 002_bootstrap_pytest.py - setup_storage.sh

2020-03-23 Thread Marcin Sobczyk



On 3/23/20 2:53 PM, Nir Soffer wrote:

On Mon, Mar 23, 2020 at 3:26 PM Marcin Sobczyk  wrote:



On 3/23/20 2:17 PM, Nir Soffer wrote:

On Mon, Mar 23, 2020 at 1:25 PM Marcin Sobczyk  wrote:


On 3/21/20 1:18 AM, Nir Soffer wrote:

On Fri, Mar 20, 2020 at 9:35 PM Nir Soffer  wrote:

Looks like an infrastructure issue setting up storage on engine host.

Here are 2 failing builds with unrelated changes:
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6677/
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6678/

Rebuilding still fails in setup_storage:

https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6679/testReport/
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6680/testReport/


Is this a known issue?

Error Message

AssertionError: setup_storage.sh failed. Exit code is 1 assert 1 == 0   -1   +0

Stacktrace

prefix = 

  @pytest.mark.run(order=14)
  def test_configure_storage(prefix):
  engine = prefix.virt_env.engine_vm()
  result = engine.ssh(
  [
  '/tmp/setup_storage.sh',
  ],
  )

assert result.code == 0, 'setup_storage.sh failed. Exit code is %s' % 
result.code

E   AssertionError: setup_storage.sh failed. Exit code is 1
E   assert 1 == 0
E -1
E +0


The pytest traceback is nice, but in this case it does not show any useful
info.

Since we run a script using ssh, the error message should include the process 
stdout and stderr
which probably can explain the failure.

I posted https://gerrit.ovirt.org/#/c/107830/ to improve logging during storage 
setup.
Unfortunately AFAICS it didn't fail, so I guess we'll have to merge it and wait 
for a failed job to get some helpful logs.

Thanks.

It still fails for me with current code:
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6689/testReport/

Same when using current vdsm master.

Updated the patch according to your suggestions and currently trying out
OST for the 4th time -
all previous runs succeeded. I guess I'm out of luck :)

It succeeds on your local OST setup but fails on Jenkins?

No, I mean jenkins - both check-patch runs didn't fail on this script.
I also tried running OST manually twice and same thing happened.
Anyway - the patch has been merged now so if any failure occurs in CQ
we should know what's going on.




Also I wonder why this code is called as a test (test_configure_storage). This
looks like a setup
step, so it should run as a fixture.

That's true, but the pytest porting effort was about providing a bare minimum 
to move away from nose.
Organizing the tests into proper setup/fixtures is a huge task and will
probably be implemented
incrementally in the near future.

Understood
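
For illustration, the fixture variant being discussed could look roughly like
this - a sketch only, the fixture name and scope are made up here, while the
prefix/engine calls mirror the test quoted above:

    import pytest

    @pytest.fixture(scope="session")
    def storage(prefix):
        # Run the setup script once per session instead of as an ordered
        # test, and surface stdout/stderr when it fails.
        engine = prefix.virt_env.engine_vm()
        result = engine.ssh(['/tmp/setup_storage.sh'])
        if result.code != 0:
            raise RuntimeError(
                'setup_storage.sh failed with exit code: {}.\n'
                'stdout:\n{}\nstderr:\n{}'.format(
                    result.code, result.out, result.err))
        return engine

    def test_something_using_storage(storage):
        ...

Tests that need the storage would then just take the fixture as an argument
and pytest handles the ordering by itself.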


___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/53CVZBUF5PQU7UJKCGYBGYUIJH32JYGI/


[ovirt-devel] Re: OST fails in 002_bootstrap_pytest.py - setup_storage.sh

2020-03-23 Thread Nir Soffer
On Mon, Mar 23, 2020 at 3:26 PM Marcin Sobczyk  wrote:
>
>
>
> On 3/23/20 2:17 PM, Nir Soffer wrote:
> > On Mon, Mar 23, 2020 at 1:25 PM Marcin Sobczyk  wrote:
> >>
> >>
> >> On 3/21/20 1:18 AM, Nir Soffer wrote:
> >>
> >> On Fri, Mar 20, 2020 at 9:35 PM Nir Soffer  wrote:
> >> Looks like an infrastructure issue setting up storage on engine host.
> >>>
> >>> Here are 2 failing builds with unrelated changes:
> >>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6677/
> >>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6678/
> >>
> >> Rebuilding still fails in setup_storage:
> >>
> >> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6679/testReport/
> >> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6680/testReport/
> >>
> >>>
> >>> Is this a known issue?
> >>>
> >>> Error Message
> >>>
> >>> AssertionError: setup_storage.sh failed. Exit code is 1 assert 1 == 0   
> >>> -1   +0
> >>>
> >>> Stacktrace
> >>>
> >>> prefix = 
> >>>
> >>>  @pytest.mark.run(order=14)
> >>>  def test_configure_storage(prefix):
> >>>  engine = prefix.virt_env.engine_vm()
> >>>  result = engine.ssh(
> >>>  [
> >>>  '/tmp/setup_storage.sh',
> >>>  ],
> >>>  )
> assert result.code == 0, 'setup_storage.sh failed. Exit code is 
>  %s' % result.code
> >>> E   AssertionError: setup_storage.sh failed. Exit code is 1
> >>> E   assert 1 == 0
> >>> E -1
> >>> E +0
> >>>
> >>>
> >>> The pytest traceback is nice, but in this case it does not show any 
> >>> useful info.
> >>>
> >>> Since we run a script using ssh, the error message should include the 
> >>> process stdout and stderr
> >>> which probably can explain the failure.
> >> I posted https://gerrit.ovirt.org/#/c/107830/ to improve logging during 
> >> storage setup.
> >> Unfortunately AFAICS it didn't fail, so I guess we'll have to merge it and 
> >> wait for a failed job to get some helpful logs.
> > Thanks.
> >
> > It still fails for me with current code:
> > https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6689/testReport/
> >
> > Same when using current vdsm master.
> Updated the patch according to your suggestions and currently trying out
> OST for the 4th time -
> all previous runs succeeded. I guess I'm out of luck :)

It succeeds on your local OST setup but fails on Jenkins?

> >>> Also I wonder why this code is called as a test (test_configure_storage). 
> >>> This looks like a setup
> >>> step, so it should run as a fixture.
> >> That's true, but the pytest porting effort was about providing a bare 
> >> minimum to move away from nose.
> >> Organizing the tests into proper setup/fixtures is a huge task and will 
> >> probably be implemented
> >> incrementally in the near future.
> > Understood
> >
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/ENQHFTYQJLBWXAI2MMY7OB6GQNURLBIA/


[ovirt-devel] Re: OST fails in 002_bootstrap_pytest.py - setup_storage.sh

2020-03-23 Thread Marcin Sobczyk



On 3/23/20 2:17 PM, Nir Soffer wrote:

On Mon, Mar 23, 2020 at 1:25 PM Marcin Sobczyk  wrote:



On 3/21/20 1:18 AM, Nir Soffer wrote:

On Fri, Mar 20, 2020 at 9:35 PM Nir Soffer  wrote:

Looks like an infrastructure issue setting up storage on engine host.

Here are 2 failing builds with unrelated changes:
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6677/
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6678/


Rebuilding still fails in setup_storage:

https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6679/testReport/
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6680/testReport/



Is this a known issue?

Error Message

AssertionError: setup_storage.sh failed. Exit code is 1 assert 1 == 0   -1   +0

Stacktrace

prefix = 

 @pytest.mark.run(order=14)
 def test_configure_storage(prefix):
 engine = prefix.virt_env.engine_vm()
 result = engine.ssh(
 [
 '/tmp/setup_storage.sh',
 ],
 )

   assert result.code == 0, 'setup_storage.sh failed. Exit code is %s' % 
result.code

E   AssertionError: setup_storage.sh failed. Exit code is 1
E   assert 1 == 0
E -1
E +0


The pytest traceback is nice, but in this case it does not show any useful
info.

Since we run a script using ssh, the error message should include the process 
stdout and stderr
which probably can explain the failure.

I posted https://gerrit.ovirt.org/#/c/107830/ to improve logging during storage 
setup.
Unfortunately AFAICS it didn't fail, so I guess we'll have to merge it and wait 
for a failed job to get some helpful logs.

Thanks.

It still fails for me with current code:
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6689/testReport/

Same when using current vdsm master.
Updated the patch according to your suggestions and currently trying out 
OST for the 4th time -

all previous runs succeeded. I guess I'm out of luck :)




Also I wonder why this code is called as a test (test_configure_storage). This
looks like a setup
step, so it should run as a fixture.

That's true, but the pytest porting effort was about providing a bare minimum 
to move away from nose.
Organizing the tests into proper setup/fixtures is a huge task and will
probably be implemented
incrementally in the near future.

Understood


___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/2G2NYBXELHVMY2MLG556EG5PQ5YAV6V7/


[ovirt-devel] Re: OST fails in 002_bootstrap_pytest.py - setup_storage.sh

2020-03-23 Thread Nir Soffer
On Mon, Mar 23, 2020 at 1:25 PM Marcin Sobczyk  wrote:
>
>
>
> On 3/21/20 1:18 AM, Nir Soffer wrote:
>
> On Fri, Mar 20, 2020 at 9:35 PM Nir Soffer  wrote:
>>
>> Looks like an infrastructure issue setting up storage on engine host.
>>
>> Here are 2 failing builds with unrelated changes:
>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6677/
>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6678/
>
>
> Rebuilding still fails in setup_storage:
>
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6679/testReport/
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6680/testReport/
>
>>
>>
>> Is this a known issue?
>>
>> Error Message
>>
>> AssertionError: setup_storage.sh failed. Exit code is 1 assert 1 == 0   -1   
>> +0
>>
>> Stacktrace
>>
>> prefix = 
>>
>> @pytest.mark.run(order=14)
>> def test_configure_storage(prefix):
>> engine = prefix.virt_env.engine_vm()
>> result = engine.ssh(
>> [
>> '/tmp/setup_storage.sh',
>> ],
>> )
>> >   assert result.code == 0, 'setup_storage.sh failed. Exit code is %s' 
>> > % result.code
>> E   AssertionError: setup_storage.sh failed. Exit code is 1
>> E   assert 1 == 0
>> E -1
>> E +0
>>
>>
>> The pytest traceback is nice, but in this case it does not show any 
>> useful info.
>>
>> Since we run a script using ssh, the error message should include the 
>> process stdout and stderr
>> which probably can explain the failure.
>
> I posted https://gerrit.ovirt.org/#/c/107830/ to improve logging during 
> storage setup.
> Unfortunately AFAICS it didn't fail, so I guess we'll have to merge it and 
> wait for a failed job to get some helpful logs.

Thanks.

It still fails for me with current code:
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6689/testReport/

Same when using current vdsm master.

>> Also I wonder why this code is called as a test (test_configure_storage). 
>> This looks like a setup
>> step, so it should run as a fixture.
>
> That's true, but the pytest porting effort was about providing a bare minimum 
> to move away from nose.
> Organizing the tests into proper setup/fixtures is a huge task and will 
> probably be implemented
> incrementally in the near future.

Understood
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/5AP27PDBW6GU4E6ADXUDV5COD63477IC/


[ovirt-devel] Re: OST fails in 002_bootstrap_pytest.py - setup_storage.sh

2020-03-23 Thread Marcin Sobczyk



On 3/21/20 1:18 AM, Nir Soffer wrote:
On Fri, Mar 20, 2020 at 9:35 PM Nir Soffer wrote:


Looks like an infrastructure issue setting up storage on engine host.

Here are 2 failing builds with unrelated changes:
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6677/
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6678/


Rebuilding still fails in setup_storage:

https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6679/testReport/
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6680/testReport/


Is this a known issue?

Error Message

AssertionError: setup_storage.sh failed. Exit code is 1 assert
1 == 0   -1   +0

Stacktrace

prefix = 

    @pytest.mark.run(order=14)
    def test_configure_storage(prefix):
        engine = prefix.virt_env.engine_vm()
        result = engine.ssh(
            [
                '/tmp/setup_storage.sh',
            ],
        )
>       assert result.code == 0, 'setup_storage.sh failed.
Exit code is %s' % result.code
E       AssertionError: setup_storage.sh failed. Exit code is 1
E       assert 1 == 0
E         -1
E         +0


The pytest traceback is nice, but in this case it does not show
any useful info.

Since we run a script using ssh, the error message should include
the process stdout and stderr
which probably can explain the failure.

I posted https://gerrit.ovirt.org/#/c/107830/ to improve logging during 
storage setup.
Unfortunately AFAICS it didn't fail, so I guess we'll have to merge it 
and wait for a failed job to get some helpful logs.
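
The gist of the change is to replace the bare assert with an explicit check
that carries the script output - roughly this shape (a sketch; the exact code
is in the gerrit change, and this is also what the CQ failure output quoted
earlier in this thread shows):

    result = engine.ssh(['/tmp/setup_storage.sh'])
    if result.code != 0:
        msg = (
            'setup_storage.sh failed with exit code: {}.\n'
            'stdout:\n{}'
            'stderr:\n{}'
        ).format(result.code, result.out, result.err)
        raise RuntimeError(msg)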




Also I wonder why this code is called as a test
(test_configure_storage). This looks like a setup
step, so it should run as a fixture.

That's true, but the pytest porting effort was about providing a bare 
minimum to move away from nose.
Organizing the tests into proper setup/fixtures is a huge task and will
probably be implemented
incrementally in the near future.



Nir



___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/EWU27UQIX2BMUDXGOGTWXJY7HFFWQZA5/


[ovirt-devel] Re: OST fails in 002_bootstrap_pytest.py - setup_storage.sh

2020-03-20 Thread Nir Soffer
On Fri, Mar 20, 2020 at 9:35 PM Nir Soffer  wrote:

> Looks like an infrastructure issue setting up storage on engine host.
>
> Here are 2 failing builds with unrelated changes:
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6677/
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6678/
>

Rebuilding still fails in setup_storage:

https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6679/testReport/
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6680/testReport/


>
> Is this a known issue?
>
> Error Message
>
> AssertionError: setup_storage.sh failed. Exit code is 1 assert 1 == 0   -1
>   +0
>
> Stacktrace
>
> prefix = 
>
> @pytest.mark.run(order=14)
> def test_configure_storage(prefix):
> engine = prefix.virt_env.engine_vm()
> result = engine.ssh(
> [
> '/tmp/setup_storage.sh',
> ],
> )
> >   assert result.code == 0, 'setup_storage.sh failed. Exit code is
> %s' % result.code
> E   AssertionError: setup_storage.sh failed. Exit code is 1
> E   assert 1 == 0
> E -1
> E +0
>
>
> The pytest traceback is nice, but in this case it does not show any
> useful info.
>
> Since we run a script using ssh, the error message should include the
> process stdout and stderr
> which probably can explain the failure.
>
> Also I wonder why this code is called as a test (test_configure_storage).
> This looks like a setup
> step, so it should run as a fixture.
>
> Nir
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/MET62OLXJPMC2K7C25QAGJ7PX6BQ4VGQ/


[ovirt-devel] Re: OST Fails for missing glusterfs mirrors at host-deploy

2020-01-01 Thread Ehud Yonasi
Hi Amit,
It's probably caused by a repo being under maintenance or temporarily unavailable.
We do not mirror centos8 yet because of the new features it adds, and we do not
support it yet.
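
If anyone wants to confirm from the failing host which mirror the package
would come from, something along these lines should do it (commands are
illustrative and need dnf-plugins-core):

    # Which URL would the package be fetched from?
    dnf repoquery --location glusterfs-6.6-1.el8
    # Is that URL actually reachable right now?
    curl -I "$(dnf repoquery --location glusterfs-6.6-1.el8 | head -n1)"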

Regards,
Ehud.

On Wed, Jan 1, 2020 at 11:55 PM Amit Bawer  wrote:

> Snippet From:
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6293/console
>
> 23:31:25 + cd
> /home/jenkins/agent/workspace/ovirt-system-tests_manual/ovirt-system-tests/deployment-basic-suite-master
> 23:31:25 + lago ovirt deploy
> 23:31:26 @ Deploy oVirt environment:
> 23:31:26   # Deploy environment:
> 23:31:26 * [Thread-2] Deploy VM lago-basic-suite-master-host-0:
> 23:31:26 * [Thread-3] Deploy VM lago-basic-suite-master-host-1:
> 23:31:26 * [Thread-4] Deploy VM lago-basic-suite-master-engine:
> 23:32:15 * [Thread-3] Deploy VM lago-basic-suite-master-host-1:
> Success (in 0:00:49)
> 23:32:39 STDERR
> 23:32:39 + yum -y install ovirt-host
> 23:32:39 Error: Error downloading packages:
> 23:32:39   Cannot download glusterfs-6.6-1.el8.x86_64.rpm: All mirrors
> were tried
> 23:32:39
> 23:32:39   - STDERR
> 23:32:39 + yum -y install ovirt-host
> 23:32:39 Error: Error downloading packages:
> 23:32:39   Cannot download glusterfs-6.6-1.el8.x86_64.rpm: All mirrors
> were tried
> 23:32:39
> 23:32:39 * [Thread-2] Deploy VM lago-basic-suite-master-host-0: ERROR
> (in 0:01:13)
> 23:38:05 * [Thread-4] Deploy VM lago-basic-suite-master-engine: ERROR
> (in 0:06:39)
> 23:38:05   # Deploy environment: ERROR (in 0:06:39)
> 23:38:06 @ Deploy oVirt environment: ERROR (in 0:06:39)
> 23:38:06 Error occured, aborting
> 23:38:06 Traceback (most recent call last):
> 23:38:06   File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line
> 383, in do_run
> 23:38:06 self.cli_plugins[args.ovirtverb].do_run(args)
> 23:38:06   File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py",
> line 184, in do_run
> 23:38:06 self._do_run(**vars(args))
> 23:38:06   File "/usr/lib/python2.7/site-packages/lago/utils.py", line
> 573, in wrapper
> 23:38:06 return func(*args, **kwargs)
> 23:38:06   File "/usr/lib/python2.7/site-packages/lago/utils.py", line
> 584, in wrapper
> 23:38:06 return func(*args, prefix=prefix, **kwargs)
> 23:38:06   File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line
> 181, in do_deploy
> 23:38:06 prefix.deploy()
> 23:38:06   File "/usr/lib/python2.7/site-packages/lago/log_utils.py", line
> 636, in wrapper
> 23:38:06 return func(*args, **kwargs)
> 23:38:06   File "/usr/lib/python2.7/site-packages/ovirtlago/reposetup.py",
> line 127, in wrapper
> 23:38:06 return func(*args, **kwargs)
> 23:38:06   File "/usr/lib/python2.7/site-packages/ovirtlago/prefix.py",
> line 284, in deploy
> 23:38:06 return super(OvirtPrefix, self).deploy()
> 23:38:06   File "/usr/lib/python2.7/site-packages/lago/sdk_utils.py", line
> 50, in wrapped
> 23:38:06 return func(*args, **kwargs)
> 23:38:06   File "/usr/lib/python2.7/site-packages/lago/log_utils.py", line
> 636, in wrapper
> 23:38:06 return func(*args, **kwargs)
> 23:38:06   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line
> 1671, in deploy
> 23:38:06 self.virt_env.get_vms().values()
> 23:38:06   File "/usr/lib/python2.7/site-packages/lago/utils.py", line
> 104, in invoke_in_parallel
> 23:38:06 return vt.join_all()
> 23:38:06   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58,
> in _ret_via_queue
> 23:38:06 queue.put({'return': func()})
> 23:38:06   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line
> 1662, in _deploy_host
> 23:38:06 host.name(),
> 23:38:06 LagoDeployError:
> /home/jenkins/agent/workspace/ovirt-system-tests_manual/ovirt-system-tests/deployment-basic-suite-master/default/scripts/_home_jenkins_agent_workspace_ovirt-system-tests_manual_ovirt-system-tests_basic-suite-master_deploy-scripts_setup_1st_host_el7.sh
> failed with status 1 on lago-basic-suite-master-host-0
> 23:38:06 + res=1
> 23:38:06 + cd -
> 23:38:06
> /home/jenkins/agent/workspace/ovirt-system-tests_manual/ovirt-system-tests
> 23:38:06 + return 1
> 23:38:06 + env_collect
> /home/jenkins/agent/workspace/ovirt-system-tests_manual/ovirt-system-tests/test_logs/basic-suite-master/post-000_deploy
> 23:38:06 + local
> tests_out_dir=/home/jenkins/agent/workspace/ovirt-system-tests_manual/ovirt-system-tests/test_logs/basic-suite-master/post-000_deploy
> 23:38:06 + [[ -e
> /home/jenkins/agent/workspace/ovirt-system-tests_manual/ovirt-system-tests/test_logs/basic-suite-master
> ]]
> 23:38:06 + mkdir -p
> /home/jenkins/agent/workspace/ovirt-system-tests_manual/ovirt-system-tests/test_logs/basic-suite-master
> 23:38:06 + cd
> /home/jenkins/agent/workspace/ovirt-system-tests_manual/ovirt-system-tests/deployment-basic-suite-master/current
> 23:38:06 + lago collect --output
> /home/jenkins/agent/workspace/ovirt-system-tests_manual/ovirt-system-tests/test_logs/basic-suite-master/post-000_deploy
> 23:38:08 @ Collect artifacts:
> 23:38:08   # [Thread-1] 

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-27 Thread Dominik Holler
On Wed, Nov 27, 2019 at 5:44 PM Nir Soffer  wrote:

> On Wed, Nov 27, 2019 at 5:54 PM Tal Nisan  wrote:
>
>>
>>
>> On Wed, Nov 27, 2019 at 1:27 PM Marcin Sobczyk 
>> wrote:
>>
>>> Hi,
>>>
>>> I ran OST on my physical server.
>>> I'm probably experiencing the same issues as described in the thread
>>> below.
>>>
>>> On one of the hosts:
>>>
>>> [root@lago-basic-suite-master-host-0 tmp]# ls -l /rhev/data-center/mnt/
>>> ls: cannot access '/rhev/data-center/mnt/192.168.200.4:_exports_nfs_share1':
>>> Operation not permitted
>>> ls: cannot access '/rhev/data-center/mnt/192.168.200.4:_exports_nfs_share2':
>>> Operation not permitted
>>> ls: cannot access 
>>> '/rhev/data-center/mnt/192.168.200.4:_exports_nfs_exported':
>>> Operation not permitted
>>> total 0
>>> d?? ? ???? 192.168.200.4:
>>> _exports_nfs_exported
>>> d?? ? ???? 192.168.200.4:_exports_nfs_share1
>>> d?? ? ???? 192.168.200.4:_exports_nfs_share2
>>> drwxr-xr-x. 3 vdsm kvm 50 Nov 27 04:22 blockSD
>>>
>>> I think there's some problem with the nfs shares on engine.
>>>
>> We saw it recently with the move to RHEL8. Nir, isn't that the same issue
>> with the NFS squashing?
>>
>
> root not being able to access NFS is expected if the NFS server is not
> configured
> with anonuid=36,anongid=36.
>
> This is not new and did not change in rhel8. The change is probably in
> libvirt, trying to
> access a disk it should not access, since we disable dac in the xml for disks.
>
> When this happens vms do not start, and here the issue seems to be that the vm
> gets paused after
> some time because storage becomes inaccessible.
>
> I can mount engine's nfs shares directly from server's native OS:
>>>
>>> ➜  /tmp mkdir -p /tmp/aaa && mount "192.168.200.4:/exports/nfs/share1"
>>> /tmp/aaa
>>> ➜  /tmp ls -l /tmp/aaa
>>> total 4
>>> drwxr-xr-x. 5 36 kvm 4096 Nov 27 10:18
>>> 3332759c-a943-4fbd-80aa-a5f72cd87c7c
>>> ➜  /tmp
>>>
>>> But trying to do that from one of the hosts fails:
>>>
>>> [root@lago-basic-suite-master-host-0 tmp]# mkdir -p /tmp/aaa && mount
>>> -v "192.168.200.4:/exports/nfs/share1" /tmp/aaa
>>> mount.nfs: timeout set for Wed Nov 27 06:26:19 2019
>>> mount.nfs: trying text-based options
>>> 'vers=4.2,addr=192.168.200.4,clientaddr=192.168.201.2'
>>> mount.nfs: mount(2): Operation not permitted
>>> mount.nfs: trying text-based options 'addr=192.168.200.4'
>>> mount.nfs: prog 13, trying vers=3, prot=6
>>> mount.nfs: portmap query failed: RPC: Remote system error - No route to
>>> host
>>>
>>
> Smells like a broken network.
>
>

When I reproduced this scenario, ping was working while NFS was not.


> On the engine side, '/var/log/messages' seems to be flooded with nfs
>>> issues, example failures:
>>>
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel:
>>> __find_in_sessionid_hashtbl: 1574853151:3430996717:11:0
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd4_sequence:
>>> slotid 0
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: check_slot_seqid
>>> enter. seqid 405 slot_seqid 404
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op
>>> 9042fc202080 opcnt 3 #1: 53: status 0
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op
>>> #2/3: 22 (OP_PUTFH)
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd:
>>> fh_verify(28: 00070001 00340001  e50ae88b 5c44c45a 2b7c3991)
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd: request
>>> from insecure port 192.168.200.1, port=51529!
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op
>>> 9042fc202080 opcnt 3 #2: 22: status 1
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound
>>> returned 1
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: -->
>>> nfsd4_store_cache_entry slot 9042c4d97000
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: renewing client
>>> (clientid 5dde5a1f/cc80daed)
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd_dispatch:
>>> vers 4 proc 1
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op
>>> #1/3: 53 (OP_SEQUENCE)
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel:
>>> __find_in_sessionid_hashtbl: 1574853151:3430996717:11:0
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd4_sequence:
>>> slotid 0
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: check_slot_seqid
>>> enter. seqid 406 slot_seqid 405
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op
>>> 9042fc202080 opcnt 3 #1: 53: status 0
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op
>>> #2/3: 22 (OP_PUTFH)
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd:
>>> fh_verify(28: 00070001 00340001  e50ae88b 5c44c45a 2b7c3991)
>>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd: request
>>> from insecure port 

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-27 Thread Nir Soffer
On Wed, Nov 27, 2019 at 5:54 PM Tal Nisan  wrote:

>
>
> On Wed, Nov 27, 2019 at 1:27 PM Marcin Sobczyk 
> wrote:
>
>> Hi,
>>
>> I ran OST on my physical server.
>> I'm probably experiencing the same issues as described in the thread
>> below.
>>
>> On one of the hosts:
>>
>> [root@lago-basic-suite-master-host-0 tmp]# ls -l /rhev/data-center/mnt/
>> ls: cannot access '/rhev/data-center/mnt/192.168.200.4:_exports_nfs_share1':
>> Operation not permitted
>> ls: cannot access '/rhev/data-center/mnt/192.168.200.4:_exports_nfs_share2':
>> Operation not permitted
>> ls: cannot access 
>> '/rhev/data-center/mnt/192.168.200.4:_exports_nfs_exported':
>> Operation not permitted
>> total 0
>> d?? ? ???? 192.168.200.4:
>> _exports_nfs_exported
>> d?? ? ???? 192.168.200.4:_exports_nfs_share1
>> d?? ? ???? 192.168.200.4:_exports_nfs_share2
>> drwxr-xr-x. 3 vdsm kvm 50 Nov 27 04:22 blockSD
>>
>> I think there's some problem with the nfs shares on engine.
>>
> We saw it recently with the move to RHEL8. Nir, isn't that the same issue
> with the NFS squashing?
>

root not being able to access NFS is expected if the NFS server is not
configured
with anonuid=36,anongid=36.

This is not new and did not change in rhel8. The change is probably in
libvirt, trying to
access a disk it should not access, since we disable dac in the xml for disks.

When this happens vms do not start, and here the issue seems to be that the vm
gets paused after
some time because storage becomes inaccessible.
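
For reference, an export configured with the squashing options mentioned above
would look something like this - the path and client network are taken from
the logs in this thread, the rest is illustrative:

    # /etc/exports - map squashed anonymous access to vdsm:kvm (36:36)
    /exports/nfs/share1  192.168.200.0/24(rw,sync,anonuid=36,anongid=36)

After editing, 'exportfs -ra' reloads the export table.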

I can mount engine's nfs shares directly from server's native OS:
>>
>> ➜  /tmp mkdir -p /tmp/aaa && mount "192.168.200.4:/exports/nfs/share1"
>> /tmp/aaa
>> ➜  /tmp ls -l /tmp/aaa
>> total 4
>> drwxr-xr-x. 5 36 kvm 4096 Nov 27 10:18
>> 3332759c-a943-4fbd-80aa-a5f72cd87c7c
>> ➜  /tmp
>>
>> But trying to do that from one of the hosts fails:
>>
>> [root@lago-basic-suite-master-host-0 tmp]# mkdir -p /tmp/aaa && mount -v
>> "192.168.200.4:/exports/nfs/share1" /tmp/aaa
>> mount.nfs: timeout set for Wed Nov 27 06:26:19 2019
>> mount.nfs: trying text-based options
>> 'vers=4.2,addr=192.168.200.4,clientaddr=192.168.201.2'
>> mount.nfs: mount(2): Operation not permitted
>> mount.nfs: trying text-based options 'addr=192.168.200.4'
>> mount.nfs: prog 13, trying vers=3, prot=6
>> mount.nfs: portmap query failed: RPC: Remote system error - No route to
>> host
>>
>
Smells like a broken network.


> On the engine side, '/var/log/messages' seems to be flooded with nfs
>> issues, example failures:
>>
>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel:
>> __find_in_sessionid_hashtbl: 1574853151:3430996717:11:0
>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd4_sequence:
>> slotid 0
>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: check_slot_seqid
>> enter. seqid 405 slot_seqid 404
>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op
>> 9042fc202080 opcnt 3 #1: 53: status 0
>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op
>> #2/3: 22 (OP_PUTFH)
>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd:
>> fh_verify(28: 00070001 00340001  e50ae88b 5c44c45a 2b7c3991)
>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd: request from
>> insecure port 192.168.200.1, port=51529!
>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op
>> 9042fc202080 opcnt 3 #2: 22: status 1
>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound
>> returned 1
>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: -->
>> nfsd4_store_cache_entry slot 9042c4d97000
>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: renewing client
>> (clientid 5dde5a1f/cc80daed)
>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd_dispatch:
>> vers 4 proc 1
>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op
>> #1/3: 53 (OP_SEQUENCE)
>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel:
>> __find_in_sessionid_hashtbl: 1574853151:3430996717:11:0
>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd4_sequence:
>> slotid 0
>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: check_slot_seqid
>> enter. seqid 406 slot_seqid 405
>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op
>> 9042fc202080 opcnt 3 #1: 53: status 0
>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op
>> #2/3: 22 (OP_PUTFH)
>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd:
>> fh_verify(28: 00070001 00340001  e50ae88b 5c44c45a 2b7c3991)
>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd: request from
>> insecure port 192.168.200.1, port=51529!
>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op
>> 9042fc202080 opcnt 3 #2: 22: status 1
>> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound
>> returned 1
>> Nov 27 06:25:25 

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-27 Thread Tal Nisan
On Wed, Nov 27, 2019 at 1:27 PM Marcin Sobczyk  wrote:

> Hi,
>
> I ran OST on my physical server.
> I'm probably experiencing the same issues as described in the thread below.
>
> On one of the hosts:
>
> [root@lago-basic-suite-master-host-0 tmp]# ls -l /rhev/data-center/mnt/
> ls: cannot access '/rhev/data-center/mnt/192.168.200.4:_exports_nfs_share1':
> Operation not permitted
> ls: cannot access '/rhev/data-center/mnt/192.168.200.4:_exports_nfs_share2':
> Operation not permitted
> ls: cannot access '/rhev/data-center/mnt/192.168.200.4:_exports_nfs_exported':
> Operation not permitted
> total 0
> d?? ? ???? 192.168.200.4:_exports_nfs_exported
> d?? ? ???? 192.168.200.4:_exports_nfs_share1
> d?? ? ???? 192.168.200.4:_exports_nfs_share2
> drwxr-xr-x. 3 vdsm kvm 50 Nov 27 04:22 blockSD
>
> I think there's some problem with the nfs shares on engine.
>
We saw it recently with the move to RHEL8. Nir, isn't that the same issue
with the NFS squashing?

>
> I can mount engine's nfs shares directly from server's native OS:
>
> ➜  /tmp mkdir -p /tmp/aaa && mount "192.168.200.4:/exports/nfs/share1"
> /tmp/aaa
> ➜  /tmp ls -l /tmp/aaa
> total 4
> drwxr-xr-x. 5 36 kvm 4096 Nov 27 10:18 3332759c-a943-4fbd-80aa-a5f72cd87c7c
> ➜  /tmp
>
> But trying to do that from one of the hosts fails:
>
> [root@lago-basic-suite-master-host-0 tmp]# mkdir -p /tmp/aaa && mount -v
> "192.168.200.4:/exports/nfs/share1" /tmp/aaa
> mount.nfs: timeout set for Wed Nov 27 06:26:19 2019
> mount.nfs: trying text-based options
> 'vers=4.2,addr=192.168.200.4,clientaddr=192.168.201.2'
> mount.nfs: mount(2): Operation not permitted
> mount.nfs: trying text-based options 'addr=192.168.200.4'
> mount.nfs: prog 13, trying vers=3, prot=6
> mount.nfs: portmap query failed: RPC: Remote system error - No route to
> host
>
> On the engine side, '/var/log/messages' seems to be flooded with nfs
> issues, example failures:
>
> Nov 27 06:25:25 lago-basic-suite-master-engine kernel:
> __find_in_sessionid_hashtbl: 1574853151:3430996717:11:0
> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd4_sequence:
> slotid 0
> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: check_slot_seqid
> enter. seqid 405 slot_seqid 404
> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op
> 9042fc202080 opcnt 3 #1: 53: status 0
> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op
> #2/3: 22 (OP_PUTFH)
> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd: fh_verify(28:
> 00070001 00340001  e50ae88b 5c44c45a 2b7c3991)
> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd: request from
> insecure port 192.168.200.1, port=51529!
> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op
> 9042fc202080 opcnt 3 #2: 22: status 1
> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound
> returned 1
> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: -->
> nfsd4_store_cache_entry slot 9042c4d97000
> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: renewing client
> (clientid 5dde5a1f/cc80daed)
> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd_dispatch: vers
> 4 proc 1
> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op
> #1/3: 53 (OP_SEQUENCE)
> Nov 27 06:25:25 lago-basic-suite-master-engine kernel:
> __find_in_sessionid_hashtbl: 1574853151:3430996717:11:0
> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd4_sequence:
> slotid 0
> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: check_slot_seqid
> enter. seqid 406 slot_seqid 405
> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op
> 9042fc202080 opcnt 3 #1: 53: status 0
> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op
> #2/3: 22 (OP_PUTFH)
> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd: fh_verify(28:
> 00070001 00340001  e50ae88b 5c44c45a 2b7c3991)
> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd: request from
> insecure port 192.168.200.1, port=51529!
> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op
> 9042fc202080 opcnt 3 #2: 22: status 1
> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound
> returned 1
> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: -->
> nfsd4_store_cache_entry slot 9042c4d97000
> Nov 27 06:25:25 lago-basic-suite-master-engine kernel: renewing client
> (clientid 5dde5a1f/cc80daed)
>
> Regards, Marcin
>
> On 11/26/19 8:40 PM, Martin Perina wrote:
>
> I've just merged https://gerrit.ovirt.org/105111 which only silences the
> issue, but we really need to unblock OST, as it's suffering from this for
> more than 2 weeks now.
>
> Tal/Nir, could someone really investigate why the storage becomes
> unavailable after some time? It may be caused by the recent switch of hosts to
> CentOS 8, but may not be related
>
> 

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-27 Thread Marcin Sobczyk

Hi,

I ran OST on my physical server.
I'm probably experiencing the same issues as described in the thread below.

On one of the hosts:

[root@lago-basic-suite-master-host-0 tmp]# ls -l /rhev/data-center/mnt/
ls: cannot access 
'/rhev/data-center/mnt/192.168.200.4:_exports_nfs_share1': Operation not 
permitted
ls: cannot access 
'/rhev/data-center/mnt/192.168.200.4:_exports_nfs_share2': Operation not 
permitted
ls: cannot access 
'/rhev/data-center/mnt/192.168.200.4:_exports_nfs_exported': Operation 
not permitted

total 0
d?? ? ?    ?    ?    ? 192.168.200.4:_exports_nfs_exported
d?? ? ?    ?    ?    ? 192.168.200.4:_exports_nfs_share1
d?? ? ?    ?    ?    ? 192.168.200.4:_exports_nfs_share2
drwxr-xr-x. 3 vdsm kvm 50 Nov 27 04:22 blockSD

I think there's some problem with the nfs shares on engine.

I can mount engine's nfs shares directly from server's native OS:

➜  /tmp mkdir -p /tmp/aaa && mount "192.168.200.4:/exports/nfs/share1" 
/tmp/aaa

➜  /tmp ls -l /tmp/aaa
total 4
drwxr-xr-x. 5 36 kvm 4096 Nov 27 10:18 3332759c-a943-4fbd-80aa-a5f72cd87c7c
➜  /tmp

But trying to do that from one of the hosts fails:

[root@lago-basic-suite-master-host-0 tmp]# mkdir -p /tmp/aaa && mount -v 
"192.168.200.4:/exports/nfs/share1" /tmp/aaa

mount.nfs: timeout set for Wed Nov 27 06:26:19 2019
mount.nfs: trying text-based options 
'vers=4.2,addr=192.168.200.4,clientaddr=192.168.201.2'

mount.nfs: mount(2): Operation not permitted
mount.nfs: trying text-based options 'addr=192.168.200.4'
mount.nfs: prog 13, trying vers=3, prot=6
mount.nfs: portmap query failed: RPC: Remote system error - No route to host
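
The trace above actually shows two separate failures: the NFSv4.2 attempt 
is rejected with EPERM, and the v3 fallback can't even reach rpcbind on 
the engine. A minimal sketch for probing both paths from the host, 
assuming nfs-utils is installed:

rpcinfo -p 192.168.200.4    # "No route to host" here points at a firewall dropping port 111
showmount -e 192.168.200.4  # lists the exports via the MOUNT protocol
mount -v -t nfs -o vers=4.2 192.168.200.4:/exports/nfs/share1 /tmp/aaa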

On the engine side, '/var/log/messages' seems to be flooded with NFS 
errors; example failures:


Nov 27 06:25:25 lago-basic-suite-master-engine kernel: 
__find_in_sessionid_hashtbl: 1574853151:3430996717:11:0
Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd4_sequence: 
slotid 0
Nov 27 06:25:25 lago-basic-suite-master-engine kernel: check_slot_seqid 
enter. seqid 405 slot_seqid 404
Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op 
9042fc202080 opcnt 3 #1: 53: status 0
Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op 
#2/3: 22 (OP_PUTFH)
Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd: 
fh_verify(28: 00070001 00340001  e50ae88b 5c44c45a 2b7c3991)
Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd: request 
from insecure port 192.168.200.1, port=51529!
Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op 
9042fc202080 opcnt 3 #2: 22: status 1
Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound 
returned 1
Nov 27 06:25:25 lago-basic-suite-master-engine kernel: --> 
nfsd4_store_cache_entry slot 9042c4d97000
Nov 27 06:25:25 lago-basic-suite-master-engine kernel: renewing client 
(clientid 5dde5a1f/cc80daed)
Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd_dispatch: 
vers 4 proc 1
Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op 
#1/3: 53 (OP_SEQUENCE)
Nov 27 06:25:25 lago-basic-suite-master-engine kernel: 
__find_in_sessionid_hashtbl: 1574853151:3430996717:11:0
Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd4_sequence: 
slotid 0
Nov 27 06:25:25 lago-basic-suite-master-engine kernel: check_slot_seqid 
enter. seqid 406 slot_seqid 405
Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op 
9042fc202080 opcnt 3 #1: 53: status 0
Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op 
#2/3: 22 (OP_PUTFH)
Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd: 
fh_verify(28: 00070001 00340001  e50ae88b 5c44c45a 2b7c3991)
Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsd: request 
from insecure port 192.168.200.1, port=51529!
Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound op 
9042fc202080 opcnt 3 #2: 22: status 1
Nov 27 06:25:25 lago-basic-suite-master-engine kernel: nfsv4 compound 
returned 1
Nov 27 06:25:25 lago-basic-suite-master-engine kernel: --> 
nfsd4_store_cache_entry slot 9042c4d97000
Nov 27 06:25:25 lago-basic-suite-master-engine kernel: renewing client 
(clientid 5dde5a1f/cc80daed)
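
The repeated "request from insecure port ..., port=51529!" lines are nfsd 
rejecting a client source port above 1023, which the default 'secure' 
export option forbids, and "compound returned 1" is NFS4ERR_PERM - 
consistent with the EPERM the host sees at mount time. A sketch for 
checking (and, if that really is the cause, relaxing) this on the engine; 
the export line below is an assumption, not OST's actual config:

exportfs -v    # shows the effective options per share; look for 'secure'

# Hypothetical workaround - allow unprivileged source ports and re-export:
#   /exports/nfs/share1  192.168.200.0/24(rw,insecure)
exportfs -ra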


Regards, Marcin

On 11/26/19 8:40 PM, Martin Perina wrote:
I've just merged https://gerrit.ovirt.org/105111 which only silences 
the issue, but we really need to unblock OST, as it's been suffering from 
this for more than 2 weeks now.


Tal/Nir, could someone really investigate why the storage becomes 
unavailable after some time? It may be caused by the recent switch of 
hosts to CentOS 8, but may not be related


Thanks,
Martin


On Tue, Nov 26, 2019 at 9:17 AM Dominik Holler wrote:

On Mon, Nov 25, 2019 at 7:12 PM Nir Soffer <nsof...@redhat.com> wrote:

On Mon, Nov 25, 2019 at 7:15 PM Dominik Holler
<dhol...@redhat.com> wrote:
>
>
>
   

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-26 Thread Martin Perina
I've just merged https://gerrit.ovirt.org/105111 which only silences the
issue, but we really need to unblock OST, as it's been suffering from this
for more than 2 weeks now.

Tal/Nir, could someone really investigate why the storage becomes
unavailable after some time? It may be caused by the recent switch of hosts
to CentOS 8, but may not be related

Thanks,
Martin


On Tue, Nov 26, 2019 at 9:17 AM Dominik Holler  wrote:

>
>
> On Mon, Nov 25, 2019 at 7:12 PM Nir Soffer  wrote:
>
>> On Mon, Nov 25, 2019 at 7:15 PM Dominik Holler 
>> wrote:
>> >
>> >
>> >
>> > On Mon, Nov 25, 2019 at 6:03 PM Nir Soffer  wrote:
>> >>
>> >> On Mon, Nov 25, 2019 at 6:48 PM Dominik Holler 
>> wrote:
>> >> >
>> >> >
>> >> >
>> >> > On Mon, Nov 25, 2019 at 5:16 PM Nir Soffer 
>> wrote:
>> >> >>
>> >> >> On Mon, Nov 25, 2019 at 6:05 PM Dominik Holler 
>> wrote:
>> >> >> >
>> >> >> >
>> >> >> >
>> >> >> > On Mon, Nov 25, 2019 at 4:50 PM Nir Soffer 
>> wrote:
>> >> >> >>
>> >> >> >> On Mon, Nov 25, 2019 at 11:00 AM Dominik Holler <
>> dhol...@redhat.com> wrote:
>> >> >> >> >
>> >> >> >> >
>> >> >> >> >
>> >> >> >> > On Fri, Nov 22, 2019 at 8:57 PM Dominik Holler <
>> dhol...@redhat.com> wrote:
>> >> >> >> >>
>> >> >> >> >>
>> >> >> >> >>
>> >> >> >> >> On Fri, Nov 22, 2019 at 5:54 PM Dominik Holler <
>> dhol...@redhat.com> wrote:
>> >> >> >> >>>
>> >> >> >> >>>
>> >> >> >> >>>
>> >> >> >> >>> On Fri, Nov 22, 2019 at 5:48 PM Nir Soffer <
>> nsof...@redhat.com> wrote:
>> >> >> >> 
>> >> >> >> 
>> >> >> >> 
>> >> >> >>  On Fri, Nov 22, 2019, 18:18 Marcin Sobczyk <
>> msobc...@redhat.com> wrote:
>> >> >> >> >
>> >> >> >> >
>> >> >> >> >
>> >> >> >> > On 11/22/19 4:54 PM, Martin Perina wrote:
>> >> >> >> >
>> >> >> >> >
>> >> >> >> >
>> >> >> >> > On Fri, Nov 22, 2019 at 4:43 PM Dominik Holler <
>> dhol...@redhat.com> wrote:
>> >> >> >> >>
>> >> >> >> >>
>> >> >> >> >> On Fri, Nov 22, 2019 at 12:17 PM Dominik Holler <
>> dhol...@redhat.com> wrote:
>> >> >> >> >>>
>> >> >> >> >>>
>> >> >> >> >>>
>> >> >> >> >>> On Fri, Nov 22, 2019 at 12:00 PM Miguel Duarte de Mora
>> Barroso  wrote:
>> >> >> >> 
>> >> >> >>  On Fri, Nov 22, 2019 at 11:54 AM Vojtech Juranek <
>> vjura...@redhat.com> wrote:
>> >> >> >>  >
>> >> >> >>  > On pátek 22. listopadu 2019 9:56:56 CET Miguel Duarte
>> de Mora Barroso wrote:
>> >> >> >>  > > On Fri, Nov 22, 2019 at 9:49 AM Vojtech Juranek <
>> vjura...@redhat.com>
>> >> >> >>  > > wrote:
>> >> >> >>  > > >
>> >> >> >>  > > >
>> >> >> >>  > > > On pátek 22. listopadu 2019 9:41:26 CET Dominik
>> Holler wrote:
>> >> >> >>  > > >
>> >> >> >>  > > > > On Fri, Nov 22, 2019 at 8:40 AM Dominik Holler <
>> dhol...@redhat.com>
>> >> >> >>  > > > > wrote:
>> >> >> >>  > > > >
>> >> >> >>  > > > > > On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer <
>> nsof...@redhat.com>
>> >> >> >>  > > > > > wrote:
>> >> >> >>  > > > > >
>> >> >> >>  > > > > >> On Thu, Nov 21, 2019 at 11:24 PM Vojtech
>> Juranek
>> >> >> >>  > > > > >> 
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >> wrote:
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >> > Hi,
>> >> >> >>  > > > > >> > OST fails (see e.g. [1]) in
>> 002_bootstrap.check_update_host. It
>> >> >> >>  > > > > >> > fails
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >> with
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >> >  FAILED! => {"changed": false, "failures":
>> [], "msg": "Depsolve
>> >> >> >>  > > > > >> >  Error
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >> occured:
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >> > \n Problem 1: cannot install the best
>> update candidate for package
>> >> >> >>  > > > > >> > vdsm-
>> >> >> >>  > > > > >> >
>> network-4.40.0-1236.git63ea8cb8b.el8.x86_64\n  - nothing provides
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >> nmstate
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >> > needed by
>> vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
>> >> >> >>  > > > > >> > Problem 2:
>> >> >> >>  > > > > >> > package
>> vdsm-python-4.40.0-1271.git524e08c8a.el8.noarch requires
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >> vdsm-network
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > 

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-26 Thread Dan Kenigsberg
On Tue, 26 Nov 2019, 10:19 Dominik Holler,  wrote:

>
>
> On Mon, Nov 25, 2019 at 7:12 PM Nir Soffer  wrote:
>
>> On Mon, Nov 25, 2019 at 7:15 PM Dominik Holler 
>> wrote:
>> >
>> >
>> >
>> > On Mon, Nov 25, 2019 at 6:03 PM Nir Soffer  wrote:
>> >>
>> >> On Mon, Nov 25, 2019 at 6:48 PM Dominik Holler 
>> wrote:
>> >> >
>> >> >
>> >> >
>> >> > On Mon, Nov 25, 2019 at 5:16 PM Nir Soffer 
>> wrote:
>> >> >>
>> >> >> On Mon, Nov 25, 2019 at 6:05 PM Dominik Holler 
>> wrote:
>> >> >> >
>> >> >> >
>> >> >> >
>> >> >> > On Mon, Nov 25, 2019 at 4:50 PM Nir Soffer 
>> wrote:
>> >> >> >>
>> >> >> >> On Mon, Nov 25, 2019 at 11:00 AM Dominik Holler <
>> dhol...@redhat.com> wrote:
>> >> >> >> >
>> >> >> >> >
>> >> >> >> >
>> >> >> >> > On Fri, Nov 22, 2019 at 8:57 PM Dominik Holler <
>> dhol...@redhat.com> wrote:
>> >> >> >> >>
>> >> >> >> >>
>> >> >> >> >>
>> >> >> >> >> On Fri, Nov 22, 2019 at 5:54 PM Dominik Holler <
>> dhol...@redhat.com> wrote:
>> >> >> >> >>>
>> >> >> >> >>>
>> >> >> >> >>>
>> >> >> >> >>> On Fri, Nov 22, 2019 at 5:48 PM Nir Soffer <
>> nsof...@redhat.com> wrote:
>> >> >> >> 
>> >> >> >> 
>> >> >> >> 
>> >> >> >>  On Fri, Nov 22, 2019, 18:18 Marcin Sobczyk <
>> msobc...@redhat.com> wrote:
>> >> >> >> >
>> >> >> >> >
>> >> >> >> >
>> >> >> >> > On 11/22/19 4:54 PM, Martin Perina wrote:
>> >> >> >> >
>> >> >> >> >
>> >> >> >> >
>> >> >> >> > On Fri, Nov 22, 2019 at 4:43 PM Dominik Holler <
>> dhol...@redhat.com> wrote:
>> >> >> >> >>
>> >> >> >> >>
>> >> >> >> >> On Fri, Nov 22, 2019 at 12:17 PM Dominik Holler <
>> dhol...@redhat.com> wrote:
>> >> >> >> >>>
>> >> >> >> >>>
>> >> >> >> >>>
>> >> >> >> >>> On Fri, Nov 22, 2019 at 12:00 PM Miguel Duarte de Mora
>> Barroso  wrote:
>> >> >> >> 
>> >> >> >>  On Fri, Nov 22, 2019 at 11:54 AM Vojtech Juranek <
>> vjura...@redhat.com> wrote:
>> >> >> >>  >
>> >> >> >>  > On pátek 22. listopadu 2019 9:56:56 CET Miguel Duarte
>> de Mora Barroso wrote:
>> >> >> >>  > > On Fri, Nov 22, 2019 at 9:49 AM Vojtech Juranek <
>> vjura...@redhat.com>
>> >> >> >>  > > wrote:
>> >> >> >>  > > >
>> >> >> >>  > > >
>> >> >> >>  > > > On pátek 22. listopadu 2019 9:41:26 CET Dominik
>> Holler wrote:
>> >> >> >>  > > >
>> >> >> >>  > > > > On Fri, Nov 22, 2019 at 8:40 AM Dominik Holler <
>> dhol...@redhat.com>
>> >> >> >>  > > > > wrote:
>> >> >> >>  > > > >
>> >> >> >>  > > > > > On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer <
>> nsof...@redhat.com>
>> >> >> >>  > > > > > wrote:
>> >> >> >>  > > > > >
>> >> >> >>  > > > > >> On Thu, Nov 21, 2019 at 11:24 PM Vojtech
>> Juranek
>> >> >> >>  > > > > >> 
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >> wrote:
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >> > Hi,
>> >> >> >>  > > > > >> > OST fails (see e.g. [1]) in
>> 002_bootstrap.check_update_host. It
>> >> >> >>  > > > > >> > fails
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >> with
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >> >  FAILED! => {"changed": false, "failures":
>> [], "msg": "Depsolve
>> >> >> >>  > > > > >> >  Error
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >> occured:
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >> > \n Problem 1: cannot install the best
>> update candidate for package
>> >> >> >>  > > > > >> > vdsm-
>> >> >> >>  > > > > >> >
>> network-4.40.0-1236.git63ea8cb8b.el8.x86_64\n  - nothing provides
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >> nmstate
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >> > needed by
>> vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
>> >> >> >>  > > > > >> > Problem 2:
>> >> >> >>  > > > > >> > package
>> vdsm-python-4.40.0-1271.git524e08c8a.el8.noarch requires
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >> vdsm-network
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >> > = 4.40.0-1271.git524e08c8a.el8, but none
>> of the providers can be
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >> installed\n
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >>  > > > > >>
>> >> >> >> 

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-26 Thread Dominik Holler
On Mon, Nov 25, 2019 at 7:12 PM Nir Soffer  wrote:

> On Mon, Nov 25, 2019 at 7:15 PM Dominik Holler  wrote:
> >
> >
> >
> > On Mon, Nov 25, 2019 at 6:03 PM Nir Soffer  wrote:
> >>
> >> On Mon, Nov 25, 2019 at 6:48 PM Dominik Holler 
> wrote:
> >> >
> >> >
> >> >
> >> > On Mon, Nov 25, 2019 at 5:16 PM Nir Soffer 
> wrote:
> >> >>
> >> >> On Mon, Nov 25, 2019 at 6:05 PM Dominik Holler 
> wrote:
> >> >> >
> >> >> >
> >> >> >
> >> >> > On Mon, Nov 25, 2019 at 4:50 PM Nir Soffer 
> wrote:
> >> >> >>
> >> >> >> On Mon, Nov 25, 2019 at 11:00 AM Dominik Holler <
> dhol...@redhat.com> wrote:
> >> >> >> >
> >> >> >> >
> >> >> >> >
> >> >> >> > On Fri, Nov 22, 2019 at 8:57 PM Dominik Holler <
> dhol...@redhat.com> wrote:
> >> >> >> >>
> >> >> >> >>
> >> >> >> >>
> >> >> >> >> On Fri, Nov 22, 2019 at 5:54 PM Dominik Holler <
> dhol...@redhat.com> wrote:
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>> On Fri, Nov 22, 2019 at 5:48 PM Nir Soffer <
> nsof...@redhat.com> wrote:
> >> >> >> 
> >> >> >> 
> >> >> >> 
> >> >> >>  On Fri, Nov 22, 2019, 18:18 Marcin Sobczyk <
> msobc...@redhat.com> wrote:
> >> >> >> >
> >> >> >> >
> >> >> >> >
> >> >> >> > On 11/22/19 4:54 PM, Martin Perina wrote:
> >> >> >> >
> >> >> >> >
> >> >> >> >
> >> >> >> > On Fri, Nov 22, 2019 at 4:43 PM Dominik Holler <
> dhol...@redhat.com> wrote:
> >> >> >> >>
> >> >> >> >>
> >> >> >> >> On Fri, Nov 22, 2019 at 12:17 PM Dominik Holler <
> dhol...@redhat.com> wrote:
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>> On Fri, Nov 22, 2019 at 12:00 PM Miguel Duarte de Mora
> Barroso  wrote:
> >> >> >> 
> >> >> >>  On Fri, Nov 22, 2019 at 11:54 AM Vojtech Juranek <
> vjura...@redhat.com> wrote:
> >> >> >>  >
> >> >> >>  > On pátek 22. listopadu 2019 9:56:56 CET Miguel Duarte
> de Mora Barroso wrote:
> >> >> >>  > > On Fri, Nov 22, 2019 at 9:49 AM Vojtech Juranek <
> vjura...@redhat.com>
> >> >> >>  > > wrote:
> >> >> >>  > > >
> >> >> >>  > > >
> >> >> >>  > > > On pátek 22. listopadu 2019 9:41:26 CET Dominik
> Holler wrote:
> >> >> >>  > > >
> >> >> >>  > > > > On Fri, Nov 22, 2019 at 8:40 AM Dominik Holler <
> dhol...@redhat.com>
> >> >> >>  > > > > wrote:
> >> >> >>  > > > >
> >> >> >>  > > > > > On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer <
> nsof...@redhat.com>
> >> >> >>  > > > > > wrote:
> >> >> >>  > > > > >
> >> >> >>  > > > > >> On Thu, Nov 21, 2019 at 11:24 PM Vojtech
> Juranek
> >> >> >>  > > > > >> 
> >> >> >>  > > > > >>
> >> >> >>  > > > > >>
> >> >> >>  > > > > >>
> >> >> >>  > > > > >> wrote:
> >> >> >>  > > > > >>
> >> >> >>  > > > > >> > Hi,
> >> >> >>  > > > > >> > OST fails (see e.g. [1]) in
> 002_bootstrap.check_update_host. It
> >> >> >>  > > > > >> > fails
> >> >> >>  > > > > >>
> >> >> >>  > > > > >>
> >> >> >>  > > > > >>
> >> >> >>  > > > > >> with
> >> >> >>  > > > > >>
> >> >> >>  > > > > >>
> >> >> >>  > > > > >>
> >> >> >>  > > > > >> >  FAILED! => {"changed": false, "failures":
> [], "msg": "Depsolve
> >> >> >>  > > > > >> >  Error
> >> >> >>  > > > > >>
> >> >> >>  > > > > >>
> >> >> >>  > > > > >>
> >> >> >>  > > > > >> occured:
> >> >> >>  > > > > >>
> >> >> >>  > > > > >> > \n Problem 1: cannot install the best
> update candidate for package
> >> >> >>  > > > > >> > vdsm-
> >> >> >>  > > > > >> >
> network-4.40.0-1236.git63ea8cb8b.el8.x86_64\n  - nothing provides
> >> >> >>  > > > > >>
> >> >> >>  > > > > >>
> >> >> >>  > > > > >>
> >> >> >>  > > > > >> nmstate
> >> >> >>  > > > > >>
> >> >> >>  > > > > >>
> >> >> >>  > > > > >>
> >> >> >>  > > > > >> > needed by
> vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
> >> >> >>  > > > > >> > Problem 2:
> >> >> >>  > > > > >> > package
> vdsm-python-4.40.0-1271.git524e08c8a.el8.noarch requires
> >> >> >>  > > > > >>
> >> >> >>  > > > > >>
> >> >> >>  > > > > >>
> >> >> >>  > > > > >> vdsm-network
> >> >> >>  > > > > >>
> >> >> >>  > > > > >>
> >> >> >>  > > > > >>
> >> >> >>  > > > > >> > = 4.40.0-1271.git524e08c8a.el8, but none of
> the providers can be
> >> >> >>  > > > > >>
> >> >> >>  > > > > >>
> >> >> >>  > > > > >>
> >> >> >>  > > > > >> installed\n
> >> >> >>  > > > > >>
> >> >> >>  > > > > >>
> >> >> >>  > > > > >>
> >> >> >>  > > > > >> > - cannot install the best update candidate
> for package vdsm-
> >> >> >>  > > > > >> >
> python-4.40.0-1236.git63ea8cb8b.el8.noarch\n  - nothing provides
> >> >> >>  > > > > 

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-25 Thread Nir Soffer
On Mon, Nov 25, 2019 at 7:15 PM Dominik Holler  wrote:
>
>
>
> On Mon, Nov 25, 2019 at 6:03 PM Nir Soffer  wrote:
>>
>> On Mon, Nov 25, 2019 at 6:48 PM Dominik Holler  wrote:
>> >
>> >
>> >
>> > On Mon, Nov 25, 2019 at 5:16 PM Nir Soffer  wrote:
>> >>
>> >> On Mon, Nov 25, 2019 at 6:05 PM Dominik Holler  wrote:
>> >> >
>> >> >
>> >> >
>> >> > On Mon, Nov 25, 2019 at 4:50 PM Nir Soffer  wrote:
>> >> >>
>> >> >> On Mon, Nov 25, 2019 at 11:00 AM Dominik Holler  
>> >> >> wrote:
>> >> >> >
>> >> >> >
>> >> >> >
>> >> >> > On Fri, Nov 22, 2019 at 8:57 PM Dominik Holler  
>> >> >> > wrote:
>> >> >> >>
>> >> >> >>
>> >> >> >>
>> >> >> >> On Fri, Nov 22, 2019 at 5:54 PM Dominik Holler  
>> >> >> >> wrote:
>> >> >> >>>
>> >> >> >>>
>> >> >> >>>
>> >> >> >>> On Fri, Nov 22, 2019 at 5:48 PM Nir Soffer  
>> >> >> >>> wrote:
>> >> >> 
>> >> >> 
>> >> >> 
>> >> >>  On Fri, Nov 22, 2019, 18:18 Marcin Sobczyk  
>> >> >>  wrote:
>> >> >> >
>> >> >> >
>> >> >> >
>> >> >> > On 11/22/19 4:54 PM, Martin Perina wrote:
>> >> >> >
>> >> >> >
>> >> >> >
>> >> >> > On Fri, Nov 22, 2019 at 4:43 PM Dominik Holler 
>> >> >> >  wrote:
>> >> >> >>
>> >> >> >>
>> >> >> >> On Fri, Nov 22, 2019 at 12:17 PM Dominik Holler 
>> >> >> >>  wrote:
>> >> >> >>>
>> >> >> >>>
>> >> >> >>>
>> >> >> >>> On Fri, Nov 22, 2019 at 12:00 PM Miguel Duarte de Mora Barroso 
>> >> >> >>>  wrote:
>> >> >> 
>> >> >>  On Fri, Nov 22, 2019 at 11:54 AM Vojtech Juranek 
>> >> >>   wrote:
>> >> >>  >
>> >> >>  > On pátek 22. listopadu 2019 9:56:56 CET Miguel Duarte de 
>> >> >>  > Mora Barroso wrote:
>> >> >>  > > On Fri, Nov 22, 2019 at 9:49 AM Vojtech Juranek 
>> >> >>  > > 
>> >> >>  > > wrote:
>> >> >>  > > >
>> >> >>  > > >
>> >> >>  > > > On pátek 22. listopadu 2019 9:41:26 CET Dominik Holler 
>> >> >>  > > > wrote:
>> >> >>  > > >
>> >> >>  > > > > On Fri, Nov 22, 2019 at 8:40 AM Dominik Holler 
>> >> >>  > > > > 
>> >> >>  > > > > wrote:
>> >> >>  > > > >
>> >> >>  > > > > > On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer 
>> >> >>  > > > > > 
>> >> >>  > > > > > wrote:
>> >> >>  > > > > >
>> >> >>  > > > > >> On Thu, Nov 21, 2019 at 11:24 PM Vojtech Juranek
>> >> >>  > > > > >> 
>> >> >>  > > > > >>
>> >> >>  > > > > >>
>> >> >>  > > > > >>
>> >> >>  > > > > >> wrote:
>> >> >>  > > > > >>
>> >> >>  > > > > >> > Hi,
>> >> >>  > > > > >> > OST fails (see e.g. [1]) in 
>> >> >>  > > > > >> > 002_bootstrap.check_update_host. It
>> >> >>  > > > > >> > fails
>> >> >>  > > > > >>
>> >> >>  > > > > >>
>> >> >>  > > > > >>
>> >> >>  > > > > >> with
>> >> >>  > > > > >>
>> >> >>  > > > > >>
>> >> >>  > > > > >>
>> >> >>  > > > > >> >  FAILED! => {"changed": false, "failures": [], 
>> >> >>  > > > > >> > "msg": "Depsolve
>> >> >>  > > > > >> >  Error
>> >> >>  > > > > >>
>> >> >>  > > > > >>
>> >> >>  > > > > >>
>> >> >>  > > > > >> occured:
>> >> >>  > > > > >>
>> >> >>  > > > > >> > \n Problem 1: cannot install the best update 
>> >> >>  > > > > >> > candidate for package
>> >> >>  > > > > >> > vdsm-
>> >> >>  > > > > >> > network-4.40.0-1236.git63ea8cb8b.el8.x86_64\n  - 
>> >> >>  > > > > >> > nothing provides
>> >> >>  > > > > >>
>> >> >>  > > > > >>
>> >> >>  > > > > >>
>> >> >>  > > > > >> nmstate
>> >> >>  > > > > >>
>> >> >>  > > > > >>
>> >> >>  > > > > >>
>> >> >>  > > > > >> > needed by 
>> >> >>  > > > > >> > vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
>> >> >>  > > > > >> > Problem 2:
>> >> >>  > > > > >> > package 
>> >> >>  > > > > >> > vdsm-python-4.40.0-1271.git524e08c8a.el8.noarch 
>> >> >>  > > > > >> > requires
>> >> >>  > > > > >>
>> >> >>  > > > > >>
>> >> >>  > > > > >>
>> >> >>  > > > > >> vdsm-network
>> >> >>  > > > > >>
>> >> >>  > > > > >>
>> >> >>  > > > > >>
>> >> >>  > > > > >> > = 4.40.0-1271.git524e08c8a.el8, but none of the 
>> >> >>  > > > > >> > providers can be
>> >> >>  > > > > >>
>> >> >>  > > > > >>
>> >> >>  > > > > >>
>> >> >>  > > > > >> installed\n
>> >> >>  > > > > >>
>> >> >>  > > > > >>
>> >> >>  > > > > >>
>> >> >>  > > > > >> > - cannot install the best update candidate for 
>> >> >>  > > > > >> > package vdsm-
>> >> >>  > > > > >> > python-4.40.0-1236.git63ea8cb8b.el8.noarch\n  - 
>> >> >>  > > > > >> > nothing provides
>> >> >>  > > > > >> > 

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-25 Thread Dominik Holler
On Mon, Nov 25, 2019 at 6:03 PM Nir Soffer  wrote:

> On Mon, Nov 25, 2019 at 6:48 PM Dominik Holler  wrote:
> >
> >
> >
> > On Mon, Nov 25, 2019 at 5:16 PM Nir Soffer  wrote:
> >>
> >> On Mon, Nov 25, 2019 at 6:05 PM Dominik Holler 
> wrote:
> >> >
> >> >
> >> >
> >> > On Mon, Nov 25, 2019 at 4:50 PM Nir Soffer 
> wrote:
> >> >>
> >> >> On Mon, Nov 25, 2019 at 11:00 AM Dominik Holler 
> wrote:
> >> >> >
> >> >> >
> >> >> >
> >> >> > On Fri, Nov 22, 2019 at 8:57 PM Dominik Holler 
> wrote:
> >> >> >>
> >> >> >>
> >> >> >>
> >> >> >> On Fri, Nov 22, 2019 at 5:54 PM Dominik Holler <
> dhol...@redhat.com> wrote:
> >> >> >>>
> >> >> >>>
> >> >> >>>
> >> >> >>> On Fri, Nov 22, 2019 at 5:48 PM Nir Soffer 
> wrote:
> >> >> 
> >> >> 
> >> >> 
> >> >>  On Fri, Nov 22, 2019, 18:18 Marcin Sobczyk 
> wrote:
> >> >> >
> >> >> >
> >> >> >
> >> >> > On 11/22/19 4:54 PM, Martin Perina wrote:
> >> >> >
> >> >> >
> >> >> >
> >> >> > On Fri, Nov 22, 2019 at 4:43 PM Dominik Holler <
> dhol...@redhat.com> wrote:
> >> >> >>
> >> >> >>
> >> >> >> On Fri, Nov 22, 2019 at 12:17 PM Dominik Holler <
> dhol...@redhat.com> wrote:
> >> >> >>>
> >> >> >>>
> >> >> >>>
> >> >> >>> On Fri, Nov 22, 2019 at 12:00 PM Miguel Duarte de Mora
> Barroso  wrote:
> >> >> 
> >> >>  On Fri, Nov 22, 2019 at 11:54 AM Vojtech Juranek <
> vjura...@redhat.com> wrote:
> >> >>  >
> >> >>  > On pátek 22. listopadu 2019 9:56:56 CET Miguel Duarte de
> Mora Barroso wrote:
> >> >>  > > On Fri, Nov 22, 2019 at 9:49 AM Vojtech Juranek <
> vjura...@redhat.com>
> >> >>  > > wrote:
> >> >>  > > >
> >> >>  > > >
> >> >>  > > > On pátek 22. listopadu 2019 9:41:26 CET Dominik
> Holler wrote:
> >> >>  > > >
> >> >>  > > > > On Fri, Nov 22, 2019 at 8:40 AM Dominik Holler <
> dhol...@redhat.com>
> >> >>  > > > > wrote:
> >> >>  > > > >
> >> >>  > > > > > On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer <
> nsof...@redhat.com>
> >> >>  > > > > > wrote:
> >> >>  > > > > >
> >> >>  > > > > >> On Thu, Nov 21, 2019 at 11:24 PM Vojtech Juranek
> >> >>  > > > > >> 
> >> >>  > > > > >>
> >> >>  > > > > >>
> >> >>  > > > > >>
> >> >>  > > > > >> wrote:
> >> >>  > > > > >>
> >> >>  > > > > >> > Hi,
> >> >>  > > > > >> > OST fails (see e.g. [1]) in
> 002_bootstrap.check_update_host. It
> >> >>  > > > > >> > fails
> >> >>  > > > > >>
> >> >>  > > > > >>
> >> >>  > > > > >>
> >> >>  > > > > >> with
> >> >>  > > > > >>
> >> >>  > > > > >>
> >> >>  > > > > >>
> >> >>  > > > > >> >  FAILED! => {"changed": false, "failures": [],
> "msg": "Depsolve
> >> >>  > > > > >> >  Error
> >> >>  > > > > >>
> >> >>  > > > > >>
> >> >>  > > > > >>
> >> >>  > > > > >> occured:
> >> >>  > > > > >>
> >> >>  > > > > >> > \n Problem 1: cannot install the best update
> candidate for package
> >> >>  > > > > >> > vdsm-
> >> >>  > > > > >> > network-4.40.0-1236.git63ea8cb8b.el8.x86_64\n
> - nothing provides
> >> >>  > > > > >>
> >> >>  > > > > >>
> >> >>  > > > > >>
> >> >>  > > > > >> nmstate
> >> >>  > > > > >>
> >> >>  > > > > >>
> >> >>  > > > > >>
> >> >>  > > > > >> > needed by
> vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
> >> >>  > > > > >> > Problem 2:
> >> >>  > > > > >> > package
> vdsm-python-4.40.0-1271.git524e08c8a.el8.noarch requires
> >> >>  > > > > >>
> >> >>  > > > > >>
> >> >>  > > > > >>
> >> >>  > > > > >> vdsm-network
> >> >>  > > > > >>
> >> >>  > > > > >>
> >> >>  > > > > >>
> >> >>  > > > > >> > = 4.40.0-1271.git524e08c8a.el8, but none of
> the providers can be
> >> >>  > > > > >>
> >> >>  > > > > >>
> >> >>  > > > > >>
> >> >>  > > > > >> installed\n
> >> >>  > > > > >>
> >> >>  > > > > >>
> >> >>  > > > > >>
> >> >>  > > > > >> > - cannot install the best update candidate for
> package vdsm-
> >> >>  > > > > >> > python-4.40.0-1236.git63ea8cb8b.el8.noarch\n
> - nothing provides
> >> >>  > > > > >> > nmstate
> >> >>  > > > > >> > needed by
> vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
> >> >>  > > > > >>
> >> >>  > > > > >>
> >> >>  > > > > >>
> >> >>  > > > > >> nmstate should be provided by copr repo enabled
> by
> >> >>  > > > > >> ovirt-release-master.
> >> >>  > > > > >
> >> >>  > > > > >
> >> >>  > > > > >
> >> >>  > > > > > I re-triggered as
> >> >>  > > > > >
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6131
> >> >>  > > > > > 

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-25 Thread Nir Soffer
On Mon, Nov 25, 2019 at 6:48 PM Dominik Holler  wrote:
>
>
>
> On Mon, Nov 25, 2019 at 5:16 PM Nir Soffer  wrote:
>>
>> On Mon, Nov 25, 2019 at 6:05 PM Dominik Holler  wrote:
>> >
>> >
>> >
>> > On Mon, Nov 25, 2019 at 4:50 PM Nir Soffer  wrote:
>> >>
>> >> On Mon, Nov 25, 2019 at 11:00 AM Dominik Holler  
>> >> wrote:
>> >> >
>> >> >
>> >> >
>> >> > On Fri, Nov 22, 2019 at 8:57 PM Dominik Holler  
>> >> > wrote:
>> >> >>
>> >> >>
>> >> >>
>> >> >> On Fri, Nov 22, 2019 at 5:54 PM Dominik Holler  
>> >> >> wrote:
>> >> >>>
>> >> >>>
>> >> >>>
>> >> >>> On Fri, Nov 22, 2019 at 5:48 PM Nir Soffer  wrote:
>> >> 
>> >> 
>> >> 
>> >>  On Fri, Nov 22, 2019, 18:18 Marcin Sobczyk  
>> >>  wrote:
>> >> >
>> >> >
>> >> >
>> >> > On 11/22/19 4:54 PM, Martin Perina wrote:
>> >> >
>> >> >
>> >> >
>> >> > On Fri, Nov 22, 2019 at 4:43 PM Dominik Holler  
>> >> > wrote:
>> >> >>
>> >> >>
>> >> >> On Fri, Nov 22, 2019 at 12:17 PM Dominik Holler 
>> >> >>  wrote:
>> >> >>>
>> >> >>>
>> >> >>>
>> >> >>> On Fri, Nov 22, 2019 at 12:00 PM Miguel Duarte de Mora Barroso 
>> >> >>>  wrote:
>> >> 
>> >>  On Fri, Nov 22, 2019 at 11:54 AM Vojtech Juranek 
>> >>   wrote:
>> >>  >
>> >>  > On pátek 22. listopadu 2019 9:56:56 CET Miguel Duarte de Mora 
>> >>  > Barroso wrote:
>> >>  > > On Fri, Nov 22, 2019 at 9:49 AM Vojtech Juranek 
>> >>  > > 
>> >>  > > wrote:
>> >>  > > >
>> >>  > > >
>> >>  > > > On pátek 22. listopadu 2019 9:41:26 CET Dominik Holler 
>> >>  > > > wrote:
>> >>  > > >
>> >>  > > > > On Fri, Nov 22, 2019 at 8:40 AM Dominik Holler 
>> >>  > > > > 
>> >>  > > > > wrote:
>> >>  > > > >
>> >>  > > > > > On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer 
>> >>  > > > > > 
>> >>  > > > > > wrote:
>> >>  > > > > >
>> >>  > > > > >> On Thu, Nov 21, 2019 at 11:24 PM Vojtech Juranek
>> >>  > > > > >> 
>> >>  > > > > >>
>> >>  > > > > >>
>> >>  > > > > >>
>> >>  > > > > >> wrote:
>> >>  > > > > >>
>> >>  > > > > >> > Hi,
>> >>  > > > > >> > OST fails (see e.g. [1]) in 
>> >>  > > > > >> > 002_bootstrap.check_update_host. It
>> >>  > > > > >> > fails
>> >>  > > > > >>
>> >>  > > > > >>
>> >>  > > > > >>
>> >>  > > > > >> with
>> >>  > > > > >>
>> >>  > > > > >>
>> >>  > > > > >>
>> >>  > > > > >> >  FAILED! => {"changed": false, "failures": [], 
>> >>  > > > > >> > "msg": "Depsolve
>> >>  > > > > >> >  Error
>> >>  > > > > >>
>> >>  > > > > >>
>> >>  > > > > >>
>> >>  > > > > >> occured:
>> >>  > > > > >>
>> >>  > > > > >> > \n Problem 1: cannot install the best update 
>> >>  > > > > >> > candidate for package
>> >>  > > > > >> > vdsm-
>> >>  > > > > >> > network-4.40.0-1236.git63ea8cb8b.el8.x86_64\n  - 
>> >>  > > > > >> > nothing provides
>> >>  > > > > >>
>> >>  > > > > >>
>> >>  > > > > >>
>> >>  > > > > >> nmstate
>> >>  > > > > >>
>> >>  > > > > >>
>> >>  > > > > >>
>> >>  > > > > >> > needed by 
>> >>  > > > > >> > vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
>> >>  > > > > >> > Problem 2:
>> >>  > > > > >> > package 
>> >>  > > > > >> > vdsm-python-4.40.0-1271.git524e08c8a.el8.noarch 
>> >>  > > > > >> > requires
>> >>  > > > > >>
>> >>  > > > > >>
>> >>  > > > > >>
>> >>  > > > > >> vdsm-network
>> >>  > > > > >>
>> >>  > > > > >>
>> >>  > > > > >>
>> >>  > > > > >> > = 4.40.0-1271.git524e08c8a.el8, but none of the 
>> >>  > > > > >> > providers can be
>> >>  > > > > >>
>> >>  > > > > >>
>> >>  > > > > >>
>> >>  > > > > >> installed\n
>> >>  > > > > >>
>> >>  > > > > >>
>> >>  > > > > >>
>> >>  > > > > >> > - cannot install the best update candidate for 
>> >>  > > > > >> > package vdsm-
>> >>  > > > > >> > python-4.40.0-1236.git63ea8cb8b.el8.noarch\n  - 
>> >>  > > > > >> > nothing provides
>> >>  > > > > >> > nmstate
>> >>  > > > > >> > needed by 
>> >>  > > > > >> > vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
>> >>  > > > > >>
>> >>  > > > > >>
>> >>  > > > > >>
>> >>  > > > > >> nmstate should be provided by copr repo enabled by
>> >>  > > > > >> ovirt-release-master.
>> >>  > > > > >
>> >>  > > > > >
>> >>  > > > > >
>> >>  > > > > > I re-triggered as
>> >>  > > > > > https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6131
>> >> 

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-25 Thread Dominik Holler
On Mon, Nov 25, 2019 at 5:16 PM Nir Soffer  wrote:

> On Mon, Nov 25, 2019 at 6:05 PM Dominik Holler  wrote:
> >
> >
> >
> > On Mon, Nov 25, 2019 at 4:50 PM Nir Soffer  wrote:
> >>
> >> On Mon, Nov 25, 2019 at 11:00 AM Dominik Holler 
> wrote:
> >> >
> >> >
> >> >
> >> > On Fri, Nov 22, 2019 at 8:57 PM Dominik Holler 
> wrote:
> >> >>
> >> >>
> >> >>
> >> >> On Fri, Nov 22, 2019 at 5:54 PM Dominik Holler 
> wrote:
> >> >>>
> >> >>>
> >> >>>
> >> >>> On Fri, Nov 22, 2019 at 5:48 PM Nir Soffer 
> wrote:
> >> 
> >> 
> >> 
> >>  On Fri, Nov 22, 2019, 18:18 Marcin Sobczyk 
> wrote:
> >> >
> >> >
> >> >
> >> > On 11/22/19 4:54 PM, Martin Perina wrote:
> >> >
> >> >
> >> >
> >> > On Fri, Nov 22, 2019 at 4:43 PM Dominik Holler <
> dhol...@redhat.com> wrote:
> >> >>
> >> >>
> >> >> On Fri, Nov 22, 2019 at 12:17 PM Dominik Holler <
> dhol...@redhat.com> wrote:
> >> >>>
> >> >>>
> >> >>>
> >> >>> On Fri, Nov 22, 2019 at 12:00 PM Miguel Duarte de Mora Barroso <
> mdbarr...@redhat.com> wrote:
> >> 
> >>  On Fri, Nov 22, 2019 at 11:54 AM Vojtech Juranek <
> vjura...@redhat.com> wrote:
> >>  >
> >>  > On pátek 22. listopadu 2019 9:56:56 CET Miguel Duarte de
> Mora Barroso wrote:
> >>  > > On Fri, Nov 22, 2019 at 9:49 AM Vojtech Juranek <
> vjura...@redhat.com>
> >>  > > wrote:
> >>  > > >
> >>  > > >
> >>  > > > On pátek 22. listopadu 2019 9:41:26 CET Dominik Holler
> wrote:
> >>  > > >
> >>  > > > > On Fri, Nov 22, 2019 at 8:40 AM Dominik Holler <
> dhol...@redhat.com>
> >>  > > > > wrote:
> >>  > > > >
> >>  > > > > > On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer <
> nsof...@redhat.com>
> >>  > > > > > wrote:
> >>  > > > > >
> >>  > > > > >> On Thu, Nov 21, 2019 at 11:24 PM Vojtech Juranek
> >>  > > > > >> 
> >>  > > > > >>
> >>  > > > > >>
> >>  > > > > >>
> >>  > > > > >> wrote:
> >>  > > > > >>
> >>  > > > > >> > Hi,
> >>  > > > > >> > OST fails (see e.g. [1]) in
> 002_bootstrap.check_update_host. It
> >>  > > > > >> > fails
> >>  > > > > >>
> >>  > > > > >>
> >>  > > > > >>
> >>  > > > > >> with
> >>  > > > > >>
> >>  > > > > >>
> >>  > > > > >>
> >>  > > > > >> >  FAILED! => {"changed": false, "failures": [],
> "msg": "Depsolve
> >>  > > > > >> >  Error
> >>  > > > > >>
> >>  > > > > >>
> >>  > > > > >>
> >>  > > > > >> occured:
> >>  > > > > >>
> >>  > > > > >> > \n Problem 1: cannot install the best update
> candidate for package
> >>  > > > > >> > vdsm-
> >>  > > > > >> > network-4.40.0-1236.git63ea8cb8b.el8.x86_64\n  -
> nothing provides
> >>  > > > > >>
> >>  > > > > >>
> >>  > > > > >>
> >>  > > > > >> nmstate
> >>  > > > > >>
> >>  > > > > >>
> >>  > > > > >>
> >>  > > > > >> > needed by
> vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
> >>  > > > > >> > Problem 2:
> >>  > > > > >> > package
> vdsm-python-4.40.0-1271.git524e08c8a.el8.noarch requires
> >>  > > > > >>
> >>  > > > > >>
> >>  > > > > >>
> >>  > > > > >> vdsm-network
> >>  > > > > >>
> >>  > > > > >>
> >>  > > > > >>
> >>  > > > > >> > = 4.40.0-1271.git524e08c8a.el8, but none of the
> providers can be
> >>  > > > > >>
> >>  > > > > >>
> >>  > > > > >>
> >>  > > > > >> installed\n
> >>  > > > > >>
> >>  > > > > >>
> >>  > > > > >>
> >>  > > > > >> > - cannot install the best update candidate for
> package vdsm-
> >>  > > > > >> > python-4.40.0-1236.git63ea8cb8b.el8.noarch\n  -
> nothing provides
> >>  > > > > >> > nmstate
> >>  > > > > >> > needed by
> vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
> >>  > > > > >>
> >>  > > > > >>
> >>  > > > > >>
> >>  > > > > >> nmstate should be provided by copr repo enabled by
> >>  > > > > >> ovirt-release-master.
> >>  > > > > >
> >>  > > > > >
> >>  > > > > >
> >>  > > > > > I re-triggered as
> >>  > > > > >
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6131
> >>  > > > > > maybe
> >>  > > > > > https://gerrit.ovirt.org/#/c/104825/
> >>  > > > > > was missing
> >>  > > > >
> >>  > > > >
> >>  > > > >
> >>  > > > > Looks like
> >>  > > > > https://gerrit.ovirt.org/#/c/104825/ is ignored by
> OST.
> >>  > > >
> >>  > > >
> >>  > > >
> >>  > > > maybe not. You re-triggered with [1], which really
> missed this patch.
> >>  > > > I did a rebase and now running with this 

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-25 Thread Nir Soffer
On Mon, Nov 25, 2019 at 6:05 PM Dominik Holler  wrote:
>
>
>
> On Mon, Nov 25, 2019 at 4:50 PM Nir Soffer  wrote:
>>
>> On Mon, Nov 25, 2019 at 11:00 AM Dominik Holler  wrote:
>> >
>> >
>> >
>> > On Fri, Nov 22, 2019 at 8:57 PM Dominik Holler  wrote:
>> >>
>> >>
>> >>
>> >> On Fri, Nov 22, 2019 at 5:54 PM Dominik Holler  wrote:
>> >>>
>> >>>
>> >>>
>> >>> On Fri, Nov 22, 2019 at 5:48 PM Nir Soffer  wrote:
>> 
>> 
>> 
>>  On Fri, Nov 22, 2019, 18:18 Marcin Sobczyk  wrote:
>> >
>> >
>> >
>> > On 11/22/19 4:54 PM, Martin Perina wrote:
>> >
>> >
>> >
>> > On Fri, Nov 22, 2019 at 4:43 PM Dominik Holler  
>> > wrote:
>> >>
>> >>
>> >> On Fri, Nov 22, 2019 at 12:17 PM Dominik Holler  
>> >> wrote:
>> >>>
>> >>>
>> >>>
>> >>> On Fri, Nov 22, 2019 at 12:00 PM Miguel Duarte de Mora Barroso 
>> >>>  wrote:
>> 
>>  On Fri, Nov 22, 2019 at 11:54 AM Vojtech Juranek 
>>   wrote:
>>  >
>>  > On pátek 22. listopadu 2019 9:56:56 CET Miguel Duarte de Mora 
>>  > Barroso wrote:
>>  > > On Fri, Nov 22, 2019 at 9:49 AM Vojtech Juranek 
>>  > > 
>>  > > wrote:
>>  > > >
>>  > > >
>>  > > > On pátek 22. listopadu 2019 9:41:26 CET Dominik Holler wrote:
>>  > > >
>>  > > > > On Fri, Nov 22, 2019 at 8:40 AM Dominik Holler 
>>  > > > > 
>>  > > > > wrote:
>>  > > > >
>>  > > > > > On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer 
>>  > > > > > 
>>  > > > > > wrote:
>>  > > > > >
>>  > > > > >> On Thu, Nov 21, 2019 at 11:24 PM Vojtech Juranek
>>  > > > > >> 
>>  > > > > >>
>>  > > > > >>
>>  > > > > >>
>>  > > > > >> wrote:
>>  > > > > >>
>>  > > > > >> > Hi,
>>  > > > > >> > OST fails (see e.g. [1]) in 
>>  > > > > >> > 002_bootstrap.check_update_host. It
>>  > > > > >> > fails
>>  > > > > >>
>>  > > > > >>
>>  > > > > >>
>>  > > > > >> with
>>  > > > > >>
>>  > > > > >>
>>  > > > > >>
>>  > > > > >> >  FAILED! => {"changed": false, "failures": [], "msg": 
>>  > > > > >> > "Depsolve
>>  > > > > >> >  Error
>>  > > > > >>
>>  > > > > >>
>>  > > > > >>
>>  > > > > >> occured:
>>  > > > > >>
>>  > > > > >> > \n Problem 1: cannot install the best update candidate 
>>  > > > > >> > for package
>>  > > > > >> > vdsm-
>>  > > > > >> > network-4.40.0-1236.git63ea8cb8b.el8.x86_64\n  - 
>>  > > > > >> > nothing provides
>>  > > > > >>
>>  > > > > >>
>>  > > > > >>
>>  > > > > >> nmstate
>>  > > > > >>
>>  > > > > >>
>>  > > > > >>
>>  > > > > >> > needed by 
>>  > > > > >> > vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
>>  > > > > >> > Problem 2:
>>  > > > > >> > package 
>>  > > > > >> > vdsm-python-4.40.0-1271.git524e08c8a.el8.noarch 
>>  > > > > >> > requires
>>  > > > > >>
>>  > > > > >>
>>  > > > > >>
>>  > > > > >> vdsm-network
>>  > > > > >>
>>  > > > > >>
>>  > > > > >>
>>  > > > > >> > = 4.40.0-1271.git524e08c8a.el8, but none of the 
>>  > > > > >> > providers can be
>>  > > > > >>
>>  > > > > >>
>>  > > > > >>
>>  > > > > >> installed\n
>>  > > > > >>
>>  > > > > >>
>>  > > > > >>
>>  > > > > >> > - cannot install the best update candidate for package 
>>  > > > > >> > vdsm-
>>  > > > > >> > python-4.40.0-1236.git63ea8cb8b.el8.noarch\n  - 
>>  > > > > >> > nothing provides
>>  > > > > >> > nmstate
>>  > > > > >> > needed by 
>>  > > > > >> > vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
>>  > > > > >>
>>  > > > > >>
>>  > > > > >>
>>  > > > > >> nmstate should be provided by copr repo enabled by
>>  > > > > >> ovirt-release-master.
>>  > > > > >
>>  > > > > >
>>  > > > > >
>>  > > > > > I re-triggered as
>>  > > > > > https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6131
>>  > > > > > maybe
>>  > > > > > https://gerrit.ovirt.org/#/c/104825/
>>  > > > > > was missing
>>  > > > >
>>  > > > >
>>  > > > >
>>  > > > > Looks like
>>  > > > > https://gerrit.ovirt.org/#/c/104825/ is ignored by OST.
>>  > > >
>>  > > >
>>  > > >
>>  > > > maybe not. You re-triggered with [1], which really missed 
>>  > > > this patch.
>>  > > > I did a rebase and now running with this patch in build #6132 
>>  > > > [2]. Let's
>>  > > > wait
>>  >  for it 

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-25 Thread Dominik Holler
On Mon, Nov 25, 2019 at 4:50 PM Nir Soffer  wrote:

> On Mon, Nov 25, 2019 at 11:00 AM Dominik Holler 
> wrote:
> >
> >
> >
> > On Fri, Nov 22, 2019 at 8:57 PM Dominik Holler 
> wrote:
> >>
> >>
> >>
> >> On Fri, Nov 22, 2019 at 5:54 PM Dominik Holler 
> wrote:
> >>>
> >>>
> >>>
> >>> On Fri, Nov 22, 2019 at 5:48 PM Nir Soffer  wrote:
> 
> 
> 
>  On Fri, Nov 22, 2019, 18:18 Marcin Sobczyk 
> wrote:
> >
> >
> >
> > On 11/22/19 4:54 PM, Martin Perina wrote:
> >
> >
> >
> > On Fri, Nov 22, 2019 at 4:43 PM Dominik Holler 
> wrote:
> >>
> >>
> >> On Fri, Nov 22, 2019 at 12:17 PM Dominik Holler 
> wrote:
> >>>
> >>>
> >>>
> >>> On Fri, Nov 22, 2019 at 12:00 PM Miguel Duarte de Mora Barroso <
> mdbarr...@redhat.com> wrote:
> 
>  On Fri, Nov 22, 2019 at 11:54 AM Vojtech Juranek <
> vjura...@redhat.com> wrote:
>  >
>  > On pátek 22. listopadu 2019 9:56:56 CET Miguel Duarte de Mora
> Barroso wrote:
>  > > On Fri, Nov 22, 2019 at 9:49 AM Vojtech Juranek <
> vjura...@redhat.com>
>  > > wrote:
>  > > >
>  > > >
>  > > > On pátek 22. listopadu 2019 9:41:26 CET Dominik Holler
> wrote:
>  > > >
>  > > > > On Fri, Nov 22, 2019 at 8:40 AM Dominik Holler <
> dhol...@redhat.com>
>  > > > > wrote:
>  > > > >
>  > > > > > On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer <
> nsof...@redhat.com>
>  > > > > > wrote:
>  > > > > >
>  > > > > >> On Thu, Nov 21, 2019 at 11:24 PM Vojtech Juranek
>  > > > > >> 
>  > > > > >>
>  > > > > >>
>  > > > > >>
>  > > > > >> wrote:
>  > > > > >>
>  > > > > >> > Hi,
>  > > > > >> > OST fails (see e.g. [1]) in
> 002_bootstrap.check_update_host. It
>  > > > > >> > fails
>  > > > > >>
>  > > > > >>
>  > > > > >>
>  > > > > >> with
>  > > > > >>
>  > > > > >>
>  > > > > >>
>  > > > > >> >  FAILED! => {"changed": false, "failures": [],
> "msg": "Depsolve
>  > > > > >> >  Error
>  > > > > >>
>  > > > > >>
>  > > > > >>
>  > > > > >> occured:
>  > > > > >>
>  > > > > >> > \n Problem 1: cannot install the best update
> candidate for package
>  > > > > >> > vdsm-
>  > > > > >> > network-4.40.0-1236.git63ea8cb8b.el8.x86_64\n  -
> nothing provides
>  > > > > >>
>  > > > > >>
>  > > > > >>
>  > > > > >> nmstate
>  > > > > >>
>  > > > > >>
>  > > > > >>
>  > > > > >> > needed by
> vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
>  > > > > >> > Problem 2:
>  > > > > >> > package
> vdsm-python-4.40.0-1271.git524e08c8a.el8.noarch requires
>  > > > > >>
>  > > > > >>
>  > > > > >>
>  > > > > >> vdsm-network
>  > > > > >>
>  > > > > >>
>  > > > > >>
>  > > > > >> > = 4.40.0-1271.git524e08c8a.el8, but none of the
> providers can be
>  > > > > >>
>  > > > > >>
>  > > > > >>
>  > > > > >> installed\n
>  > > > > >>
>  > > > > >>
>  > > > > >>
>  > > > > >> > - cannot install the best update candidate for
> package vdsm-
>  > > > > >> > python-4.40.0-1236.git63ea8cb8b.el8.noarch\n  -
> nothing provides
>  > > > > >> > nmstate
>  > > > > >> > needed by
> vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
>  > > > > >>
>  > > > > >>
>  > > > > >>
>  > > > > >> nmstate should be provided by copr repo enabled by
>  > > > > >> ovirt-release-master.
>  > > > > >
>  > > > > >
>  > > > > >
>  > > > > > I re-triggered as
>  > > > > >
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6131
>  > > > > > maybe
>  > > > > > https://gerrit.ovirt.org/#/c/104825/
>  > > > > > was missing
>  > > > >
>  > > > >
>  > > > >
>  > > > > Looks like
>  > > > > https://gerrit.ovirt.org/#/c/104825/ is ignored by OST.
>  > > >
>  > > >
>  > > >
>  > > > maybe not. You re-triggered with [1], which really missed
> this patch.
>  > > > I did a rebase and now running with this patch in build
> #6132 [2]. Let's
>  > > > wait
>  >  for it to see if gerrit #104825 helps.
>  > > >
>  > > >
>  > > >
>  > > > [1]
> https://jenkins.ovirt.org/job/standard-manual-runner/909/
>  > > > [2]
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6132/
>  > > >
>  > > >
>  > > >
>  > > > > Miguel, do you think merging
>  > > > >
>  > > > >
>  > > > >
>  > > > >
> 

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-25 Thread Nir Soffer
On Mon, Nov 25, 2019 at 11:00 AM Dominik Holler  wrote:
>
>
>
> On Fri, Nov 22, 2019 at 8:57 PM Dominik Holler  wrote:
>>
>>
>>
>> On Fri, Nov 22, 2019 at 5:54 PM Dominik Holler  wrote:
>>>
>>>
>>>
>>> On Fri, Nov 22, 2019 at 5:48 PM Nir Soffer  wrote:



 On Fri, Nov 22, 2019, 18:18 Marcin Sobczyk  wrote:
>
>
>
> On 11/22/19 4:54 PM, Martin Perina wrote:
>
>
>
> On Fri, Nov 22, 2019 at 4:43 PM Dominik Holler  wrote:
>>
>>
>> On Fri, Nov 22, 2019 at 12:17 PM Dominik Holler  
>> wrote:
>>>
>>>
>>>
>>> On Fri, Nov 22, 2019 at 12:00 PM Miguel Duarte de Mora Barroso 
>>>  wrote:

 On Fri, Nov 22, 2019 at 11:54 AM Vojtech Juranek  
 wrote:
 >
 > On pátek 22. listopadu 2019 9:56:56 CET Miguel Duarte de Mora 
 > Barroso wrote:
 > > On Fri, Nov 22, 2019 at 9:49 AM Vojtech Juranek 
 > > 
 > > wrote:
 > > >
 > > >
 > > > On pátek 22. listopadu 2019 9:41:26 CET Dominik Holler wrote:
 > > >
 > > > > On Fri, Nov 22, 2019 at 8:40 AM Dominik Holler 
 > > > > 
 > > > > wrote:
 > > > >
 > > > > > On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer 
 > > > > > 
 > > > > > wrote:
 > > > > >
 > > > > >> On Thu, Nov 21, 2019 at 11:24 PM Vojtech Juranek
 > > > > >> 
 > > > > >>
 > > > > >>
 > > > > >>
 > > > > >> wrote:
 > > > > >>
 > > > > >> > Hi,
 > > > > >> > OST fails (see e.g. [1]) in 
 > > > > >> > 002_bootstrap.check_update_host. It
 > > > > >> > fails
 > > > > >>
 > > > > >>
 > > > > >>
 > > > > >> with
 > > > > >>
 > > > > >>
 > > > > >>
 > > > > >> >  FAILED! => {"changed": false, "failures": [], "msg": 
 > > > > >> > "Depsolve
 > > > > >> >  Error
 > > > > >>
 > > > > >>
 > > > > >>
 > > > > >> occured:
 > > > > >>
 > > > > >> > \n Problem 1: cannot install the best update candidate 
 > > > > >> > for package
 > > > > >> > vdsm-
 > > > > >> > network-4.40.0-1236.git63ea8cb8b.el8.x86_64\n  - nothing 
 > > > > >> > provides
 > > > > >>
 > > > > >>
 > > > > >>
 > > > > >> nmstate
 > > > > >>
 > > > > >>
 > > > > >>
 > > > > >> > needed by 
 > > > > >> > vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
 > > > > >> > Problem 2:
 > > > > >> > package vdsm-python-4.40.0-1271.git524e08c8a.el8.noarch 
 > > > > >> > requires
 > > > > >>
 > > > > >>
 > > > > >>
 > > > > >> vdsm-network
 > > > > >>
 > > > > >>
 > > > > >>
 > > > > >> > = 4.40.0-1271.git524e08c8a.el8, but none of the providers 
 > > > > >> > can be
 > > > > >>
 > > > > >>
 > > > > >>
 > > > > >> installed\n
 > > > > >>
 > > > > >>
 > > > > >>
 > > > > >> > - cannot install the best update candidate for package 
 > > > > >> > vdsm-
 > > > > >> > python-4.40.0-1236.git63ea8cb8b.el8.noarch\n  - nothing 
 > > > > >> > provides
 > > > > >> > nmstate
 > > > > >> > needed by 
 > > > > >> > vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
 > > > > >>
 > > > > >>
 > > > > >>
 > > > > >> nmstate should be provided by copr repo enabled by
 > > > > >> ovirt-release-master.
 > > > > >
 > > > > >
 > > > > >
 > > > > > I re-triggered as
 > > > > > https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6131
 > > > > > maybe
 > > > > > https://gerrit.ovirt.org/#/c/104825/
 > > > > > was missing
 > > > >
 > > > >
 > > > >
 > > > > Looks like
 > > > > https://gerrit.ovirt.org/#/c/104825/ is ignored by OST.
 > > >
 > > >
 > > >
 > > > maybe not. You re-triggered with [1], which really missed this 
 > > > patch.
 > > > I did a rebase and now running with this patch in build #6132 
 > > > [2]. Let's
 > > > wait
 >  for it to see if gerrit #104825 helps.
 > > >
 > > >
 > > >
 > > > [1] https://jenkins.ovirt.org/job/standard-manual-runner/909/
 > > > [2] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6132/
 > > >
 > > >
 > > >
 > > > > Miguel, do you think merging
 > > > >
 > > > >
 > > > >
 > > > > https://gerrit.ovirt.org/#/c/104495/15/common/yum-repos/ovirt-master-hos
 > > > > t-cq
 >  .repo.in
 > > > >
 > > > >
 > > > >
 > > > > would solve this?

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-25 Thread Dominik Holler
On Fri, Nov 22, 2019 at 8:57 PM Dominik Holler  wrote:

>
>
> On Fri, Nov 22, 2019 at 5:54 PM Dominik Holler  wrote:
>
>>
>>
>> On Fri, Nov 22, 2019 at 5:48 PM Nir Soffer  wrote:
>>
>>>
>>>
>>> On Fri, Nov 22, 2019, 18:18 Marcin Sobczyk  wrote:
>>>


 On 11/22/19 4:54 PM, Martin Perina wrote:



 On Fri, Nov 22, 2019 at 4:43 PM Dominik Holler 
 wrote:

>
> On Fri, Nov 22, 2019 at 12:17 PM Dominik Holler 
> wrote:
>
>>
>>
>> On Fri, Nov 22, 2019 at 12:00 PM Miguel Duarte de Mora Barroso <
>> mdbarr...@redhat.com> wrote:
>>
>>> On Fri, Nov 22, 2019 at 11:54 AM Vojtech Juranek <
>>> vjura...@redhat.com> wrote:
>>> >
>>> > On pátek 22. listopadu 2019 9:56:56 CET Miguel Duarte de Mora
>>> Barroso wrote:
>>> > > On Fri, Nov 22, 2019 at 9:49 AM Vojtech Juranek <
>>> vjura...@redhat.com>
>>> > > wrote:
>>> > > >
>>> > > >
>>> > > > On pátek 22. listopadu 2019 9:41:26 CET Dominik Holler wrote:
>>> > > >
>>> > > > > On Fri, Nov 22, 2019 at 8:40 AM Dominik Holler <
>>> dhol...@redhat.com>
>>> > > > > wrote:
>>> > > > >
>>> > > > > > On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer <
>>> nsof...@redhat.com>
>>> > > > > > wrote:
>>> > > > > >
>>> > > > > >> On Thu, Nov 21, 2019 at 11:24 PM Vojtech Juranek
>>> > > > > >> 
>>> > > > > >>
>>> > > > > >>
>>> > > > > >>
>>> > > > > >> wrote:
>>> > > > > >>
>>> > > > > >> > Hi,
>>> > > > > >> > OST fails (see e.g. [1]) in
>>> 002_bootstrap.check_update_host. It
>>> > > > > >> > fails
>>> > > > > >>
>>> > > > > >>
>>> > > > > >>
>>> > > > > >> with
>>> > > > > >>
>>> > > > > >>
>>> > > > > >>
>>> > > > > >> >  FAILED! => {"changed": false, "failures": [], "msg":
>>> "Depsolve
>>> > > > > >> >  Error
>>> > > > > >>
>>> > > > > >>
>>> > > > > >>
>>> > > > > >> occured:
>>> > > > > >>
>>> > > > > >> > \n Problem 1: cannot install the best update candidate
>>> for package
>>> > > > > >> > vdsm-
>>> > > > > >> > network-4.40.0-1236.git63ea8cb8b.el8.x86_64\n  -
>>> nothing provides
>>> > > > > >>
>>> > > > > >>
>>> > > > > >>
>>> > > > > >> nmstate
>>> > > > > >>
>>> > > > > >>
>>> > > > > >>
>>> > > > > >> > needed by
>>> vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
>>> > > > > >> > Problem 2:
>>> > > > > >> > package vdsm-python-4.40.0-1271.git524e08c8a.el8.noarch
>>> requires
>>> > > > > >>
>>> > > > > >>
>>> > > > > >>
>>> > > > > >> vdsm-network
>>> > > > > >>
>>> > > > > >>
>>> > > > > >>
>>> > > > > >> > = 4.40.0-1271.git524e08c8a.el8, but none of the
>>> providers can be
>>> > > > > >>
>>> > > > > >>
>>> > > > > >>
>>> > > > > >> installed\n
>>> > > > > >>
>>> > > > > >>
>>> > > > > >>
>>> > > > > >> > - cannot install the best update candidate for package
>>> vdsm-
>>> > > > > >> > python-4.40.0-1236.git63ea8cb8b.el8.noarch\n  - nothing
>>> provides
>>> > > > > >> > nmstate
>>> > > > > >> > needed by
>>> vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
>>> > > > > >>
>>> > > > > >>
>>> > > > > >>
>>> > > > > >> nmstate should be provided by copr repo enabled by
>>> > > > > >> ovirt-release-master.
>>> > > > > >
>>> > > > > >
>>> > > > > >
>>> > > > > > I re-triggered as
>>> > > > > >
>>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6131
>>> > > > > > maybe
>>> > > > > > https://gerrit.ovirt.org/#/c/104825/
>>> > > > > > was missing
>>> > > > >
>>> > > > >
>>> > > > >
>>> > > > > Looks like
>>> > > > > https://gerrit.ovirt.org/#/c/104825/ is ignored by OST.
>>> > > >
>>> > > >
>>> > > >
>>> > > > maybe not. You re-triggered with [1], which really missed this
>>> patch.
>>> > > > I did a rebase and now running with this patch in build #6132
>>> [2]. Let's
>>> > > > wait
>>> >  for it to see if gerrit #104825 helps.
>>> > > >
>>> > > >
>>> > > >
>>> > > > [1] https://jenkins.ovirt.org/job/standard-manual-runner/909/
>>> > > > [2]
>>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6132/
>>> > > >
>>> > > >
>>> > > >
>>> > > > > Miguel, do you think merging
>>> > > > >
>>> > > > >
>>> > > > >
>>> > > > >
>>> https://gerrit.ovirt.org/#/c/104495/15/common/yum-repos/ovirt-master-hos
>>> > > > > t-cq
>>> >  .repo.in
>>> > > > >
>>> > > > >
>>> > > > >
>>> > > > > would solve this?
>>> > >
>>> > >
>>> > > I've split the patch Dominik mentions above in two, one of them
>>> adding
>>> > > the nmstate / networkmanager copr repos - [3].
>>> > >
>>> > > Let's see if it fixes it.
>>> >
>>> > it fixes original issue, but OST still 

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-22 Thread Dominik Holler
On Fri, Nov 22, 2019 at 5:54 PM Dominik Holler  wrote:

>
>
> On Fri, Nov 22, 2019 at 5:48 PM Nir Soffer  wrote:
>
>>
>>
>> On Fri, Nov 22, 2019, 18:18 Marcin Sobczyk  wrote:
>>
>>>
>>>
>>> On 11/22/19 4:54 PM, Martin Perina wrote:
>>>
>>>
>>>
>>> On Fri, Nov 22, 2019 at 4:43 PM Dominik Holler 
>>> wrote:
>>>

 On Fri, Nov 22, 2019 at 12:17 PM Dominik Holler 
 wrote:

>
>
> On Fri, Nov 22, 2019 at 12:00 PM Miguel Duarte de Mora Barroso <
> mdbarr...@redhat.com> wrote:
>
>> On Fri, Nov 22, 2019 at 11:54 AM Vojtech Juranek 
>> wrote:
>> >
>> > On pátek 22. listopadu 2019 9:56:56 CET Miguel Duarte de Mora
>> Barroso wrote:
>> > > On Fri, Nov 22, 2019 at 9:49 AM Vojtech Juranek <
>> vjura...@redhat.com>
>> > > wrote:
>> > > >
>> > > >
>> > > > On pátek 22. listopadu 2019 9:41:26 CET Dominik Holler wrote:
>> > > >
>> > > > > On Fri, Nov 22, 2019 at 8:40 AM Dominik Holler <
>> dhol...@redhat.com>
>> > > > > wrote:
>> > > > >
>> > > > > > On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer <
>> nsof...@redhat.com>
>> > > > > > wrote:
>> > > > > >
>> > > > > >> On Thu, Nov 21, 2019 at 11:24 PM Vojtech Juranek
>> > > > > >> 
>> > > > > >>
>> > > > > >>
>> > > > > >>
>> > > > > >> wrote:
>> > > > > >>
>> > > > > >> > Hi,
>> > > > > >> > OST fails (see e.g. [1]) in
>> 002_bootstrap.check_update_host. It
>> > > > > >> > fails
>> > > > > >>
>> > > > > >>
>> > > > > >>
>> > > > > >> with
>> > > > > >>
>> > > > > >>
>> > > > > >>
>> > > > > >> >  FAILED! => {"changed": false, "failures": [], "msg":
>> "Depsolve
>> > > > > >> >  Error
>> > > > > >>
>> > > > > >>
>> > > > > >>
>> > > > > >> occured:
>> > > > > >>
>> > > > > >> > \n Problem 1: cannot install the best update candidate
>> for package
>> > > > > >> > vdsm-
>> > > > > >> > network-4.40.0-1236.git63ea8cb8b.el8.x86_64\n  - nothing
>> provides
>> > > > > >>
>> > > > > >>
>> > > > > >>
>> > > > > >> nmstate
>> > > > > >>
>> > > > > >>
>> > > > > >>
>> > > > > >> > needed by
>> vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
>> > > > > >> > Problem 2:
>> > > > > >> > package vdsm-python-4.40.0-1271.git524e08c8a.el8.noarch
>> requires
>> > > > > >>
>> > > > > >>
>> > > > > >>
>> > > > > >> vdsm-network
>> > > > > >>
>> > > > > >>
>> > > > > >>
>> > > > > >> > = 4.40.0-1271.git524e08c8a.el8, but none of the
>> providers can be
>> > > > > >>
>> > > > > >>
>> > > > > >>
>> > > > > >> installed\n
>> > > > > >>
>> > > > > >>
>> > > > > >>
>> > > > > >> > - cannot install the best update candidate for package
>> vdsm-
>> > > > > >> > python-4.40.0-1236.git63ea8cb8b.el8.noarch\n  - nothing
>> provides
>> > > > > >> > nmstate
>> > > > > >> > needed by
>> vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
>> > > > > >>
>> > > > > >>
>> > > > > >>
>> > > > > >> nmstate should be provided by copr repo enabled by
>> > > > > >> ovirt-release-master.
>> > > > > >
>> > > > > >
>> > > > > >
>> > > > > > I re-triggered as
>> > > > > >
>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6131
>> > > > > > maybe
>> > > > > > https://gerrit.ovirt.org/#/c/104825/
>> > > > > > was missing
>> > > > >
>> > > > >
>> > > > >
>> > > > > Looks like
>> > > > > https://gerrit.ovirt.org/#/c/104825/ is ignored by OST.
>> > > >
>> > > >
>> > > >
>> > > > maybe not. You re-triggered with [1], which really missed this
>> patch.
>> > > > I did a rebase and now running with this patch in build #6132
>> [2]. Let's
>> > > > wait
>> >  for it to see if gerrit #104825 helps.
>> > > >
>> > > >
>> > > >
>> > > > [1] https://jenkins.ovirt.org/job/standard-manual-runner/909/
>> > > > [2]
>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6132/
>> > > >
>> > > >
>> > > >
>> > > > > Miguel, do you think merging
>> > > > >
>> > > > >
>> > > > >
>> > > > >
>> https://gerrit.ovirt.org/#/c/104495/15/common/yum-repos/ovirt-master-hos
>> > > > > t-cq
>> >  .repo.in
>> > > > >
>> > > > >
>> > > > >
>> > > > > would solve this?
>> > >
>> > >
>> > > I've split the patch Dominik mentions above in two, one of them
>> adding
>> > > the nmstate / networkmanager copr repos - [3].
>> > >
>> > > Let's see if it fixes it.
>> >
>> > it fixes original issue, but OST still fails in
>> > 098_ovirt_provider_ovn.use_ovn_provider:
>> >
>> > https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6134
>>
>> I think Dominik was looking into this issue; +Dominik Holler please
>> confirm.
>>
>> Let 

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-22 Thread Dominik Holler
On Fri, Nov 22, 2019 at 5:48 PM Nir Soffer  wrote:

>
>
> On Fri, Nov 22, 2019, 18:18 Marcin Sobczyk  wrote:
>
>>
>>
>> On 11/22/19 4:54 PM, Martin Perina wrote:
>>
>>
>>
>> On Fri, Nov 22, 2019 at 4:43 PM Dominik Holler 
>> wrote:
>>
>>>
>>> On Fri, Nov 22, 2019 at 12:17 PM Dominik Holler 
>>> wrote:
>>>


 On Fri, Nov 22, 2019 at 12:00 PM Miguel Duarte de Mora Barroso <
 mdbarr...@redhat.com> wrote:

> On Fri, Nov 22, 2019 at 11:54 AM Vojtech Juranek 
> wrote:
> >
> > On pátek 22. listopadu 2019 9:56:56 CET Miguel Duarte de Mora
> Barroso wrote:
> > > On Fri, Nov 22, 2019 at 9:49 AM Vojtech Juranek <
> vjura...@redhat.com>
> > > wrote:
> > > >
> > > >
> > > > On pátek 22. listopadu 2019 9:41:26 CET Dominik Holler wrote:
> > > >
> > > > > On Fri, Nov 22, 2019 at 8:40 AM Dominik Holler <
> dhol...@redhat.com>
> > > > > wrote:
> > > > >
> > > > > > On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer <
> nsof...@redhat.com>
> > > > > > wrote:
> > > > > >
> > > > > >> On Thu, Nov 21, 2019 at 11:24 PM Vojtech Juranek
> > > > > >> 
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> wrote:
> > > > > >>
> > > > > >> > Hi,
> > > > > >> > OST fails (see e.g. [1]) in
> 002_bootstrap.check_update_host. It
> > > > > >> > fails
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> with
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> >  FAILED! => {"changed": false, "failures": [], "msg":
> "Depsolve
> > > > > >> >  Error
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> occured:
> > > > > >>
> > > > > >> > \n Problem 1: cannot install the best update candidate
> for package
> > > > > >> > vdsm-
> > > > > >> > network-4.40.0-1236.git63ea8cb8b.el8.x86_64\n  - nothing
> provides
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> nmstate
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> > needed by
> vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
> > > > > >> > Problem 2:
> > > > > >> > package vdsm-python-4.40.0-1271.git524e08c8a.el8.noarch
> requires
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> vdsm-network
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> > = 4.40.0-1271.git524e08c8a.el8, but none of the providers
> can be
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> installed\n
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> > - cannot install the best update candidate for package
> vdsm-
> > > > > >> > python-4.40.0-1236.git63ea8cb8b.el8.noarch\n  - nothing
> provides
> > > > > >> > nmstate
> > > > > >> > needed by
> vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> nmstate should be provided by copr repo enabled by
> > > > > >> ovirt-release-master.
> > > > > >
> > > > > >
> > > > > >
> > > > > > I re-triggered as
> > > > > > https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6131
> > > > > > maybe
> > > > > > https://gerrit.ovirt.org/#/c/104825/
> > > > > > was missing
> > > > >
> > > > >
> > > > >
> > > > > Looks like
> > > > > https://gerrit.ovirt.org/#/c/104825/ is ignored by OST.
> > > >
> > > >
> > > >
> > > > maybe not. You re-triggered with [1], which really missed this
> patch.
> > > > I did a rebase and now running with this patch in build #6132
> [2]. Let's
> > > > wait
> >  for it to see if gerrit #104825 helps.
> > > >
> > > >
> > > >
> > > > [1] https://jenkins.ovirt.org/job/standard-manual-runner/909/
> > > > [2]
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6132/
> > > >
> > > >
> > > >
> > > > > Miguel, do you think merging
> > > > >
> > > > >
> > > > >
> > > > >
> https://gerrit.ovirt.org/#/c/104495/15/common/yum-repos/ovirt-master-hos
> > > > > t-cq
> >  .repo.in
> > > > >
> > > > >
> > > > >
> > > > > would solve this?
> > >
> > >
> > > I've split the patch Dominik mentions above in two, one of them
> adding
> > > the nmstate / networkmanager copr repos - [3].
> > >
> > > Let's see if it fixes it.
> >
> > it fixes original issue, but OST still fails in
> > 098_ovirt_provider_ovn.use_ovn_provider:
> >
> > https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6134
>
> I think Dominik was looking into this issue; +Dominik Holler please
> confirm.
>
> Let me know if you need any help Dominik.
>


 Thanks.
 The problem is that the hosts lost connection to storage:

 

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-22 Thread Nir Soffer
On Fri, Nov 22, 2019, 18:18 Marcin Sobczyk  wrote:

>
>
> On 11/22/19 4:54 PM, Martin Perina wrote:
>
>
>
> On Fri, Nov 22, 2019 at 4:43 PM Dominik Holler  wrote:
>
>>
>> On Fri, Nov 22, 2019 at 12:17 PM Dominik Holler 
>> wrote:
>>
>>>
>>>
>>> On Fri, Nov 22, 2019 at 12:00 PM Miguel Duarte de Mora Barroso <
>>> mdbarr...@redhat.com> wrote:
>>>
 On Fri, Nov 22, 2019 at 11:54 AM Vojtech Juranek 
 wrote:
 >
 > On Friday 22 November 2019 9:56:56 CET Miguel Duarte de Mora Barroso
 wrote:
 > > On Fri, Nov 22, 2019 at 9:49 AM Vojtech Juranek <
 vjura...@redhat.com>
 > > wrote:
 > > >
 > > >
 > > > On Friday 22 November 2019 9:41:26 CET Dominik Holler wrote:
 > > >
 > > > > On Fri, Nov 22, 2019 at 8:40 AM Dominik Holler <
 dhol...@redhat.com>
 > > > > wrote:
 > > > >
 > > > > > On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer <
 nsof...@redhat.com>
 > > > > > wrote:
 > > > > >
 > > > > >> On Thu, Nov 21, 2019 at 11:24 PM Vojtech Juranek
 > > > > >> 
 > > > > >>
 > > > > >>
 > > > > >>
 > > > > >> wrote:
 > > > > >>
 > > > > >> > Hi,
 > > > > >> > OST fails (see e.g. [1]) in
 002_bootstrap.check_update_host. It
 > > > > >> > fails
 > > > > >>
 > > > > >>
 > > > > >>
 > > > > >> with
 > > > > >>
 > > > > >>
 > > > > >>
 > > > > >> >  FAILED! => {"changed": false, "failures": [], "msg":
 "Depsolve
 > > > > >> >  Error
 > > > > >>
 > > > > >>
 > > > > >>
 > > > > >> occured:
 > > > > >>
 > > > > >> > \n Problem 1: cannot install the best update candidate for
 package
 > > > > >> > vdsm-
 > > > > >> > network-4.40.0-1236.git63ea8cb8b.el8.x86_64\n  - nothing
 provides
 > > > > >>
 > > > > >>
 > > > > >>
 > > > > >> nmstate
 > > > > >>
 > > > > >>
 > > > > >>
 > > > > >> > needed by
 vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
 > > > > >> > Problem 2:
 > > > > >> > package vdsm-python-4.40.0-1271.git524e08c8a.el8.noarch
 requires
 > > > > >>
 > > > > >>
 > > > > >>
 > > > > >> vdsm-network
 > > > > >>
 > > > > >>
 > > > > >>
 > > > > >> > = 4.40.0-1271.git524e08c8a.el8, but none of the providers
 can be
 > > > > >>
 > > > > >>
 > > > > >>
 > > > > >> installed\n
 > > > > >>
 > > > > >>
 > > > > >>
 > > > > >> > - cannot install the best update candidate for package
 vdsm-
 > > > > >> > python-4.40.0-1236.git63ea8cb8b.el8.noarch\n  - nothing
 provides
 > > > > >> > nmstate
 > > > > >> > needed by
 vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
 > > > > >>
 > > > > >>
 > > > > >>
 > > > > >> nmstate should be provided by copr repo enabled by
 > > > > >> ovirt-release-master.
 > > > > >
 > > > > >
 > > > > >
 > > > > > I re-triggered as
 > > > > > https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6131
 > > > > > maybe
 > > > > > https://gerrit.ovirt.org/#/c/104825/
 > > > > > was missing
 > > > >
 > > > >
 > > > >
 > > > > Looks like
 > > > > https://gerrit.ovirt.org/#/c/104825/ is ignored by OST.
 > > >
 > > >
 > > >
 > > > maybe not. You re-triggered with [1], which really missed this
 patch.
 > > > I did a rebase and now running with this patch in build #6132
 [2]. Let's
 > > > wait
 >  for it to see if gerrit #104825 helps.
 > > >
 > > >
 > > >
 > > > [1] https://jenkins.ovirt.org/job/standard-manual-runner/909/
 > > > [2] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6132/
 > > >
 > > >
 > > >
 > > > > Miguel, do you think merging
 > > > >
 > > > >
 > > > >
 > > > >
 https://gerrit.ovirt.org/#/c/104495/15/common/yum-repos/ovirt-master-hos
 > > > > t-cq
 >  .repo.in
 > > > >
 > > > >
 > > > >
 > > > > would solve this?
 > >
 > >
 > > I've split the patch Dominik mentions above in two, one of them
 adding
 > > the nmstate / networkmanager copr repos - [3].
 > >
 > > Let's see if it fixes it.
 >
 > it fixes original issue, but OST still fails in
 > 098_ovirt_provider_ovn.use_ovn_provider:
 >
 > https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6134

 I think Dominik was looking into this issue; +Dominik Holler please
 confirm.

 Let me know if you need any help Dominik.

>>>
>>>
>>> Thanks.
>>> The problem is that the hosts lost connection to storage:
>>>
>>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6134/artifact/exported-artifacts/test_logs/basic-suite-master/post-098_ovirt_provider_ovn.py/lago-basic-suite-master-host-0/_var_log/vdsm/vdsm.log
>>> :
>>>
>>> 2019-11-22 05:39:12,326-0500 DEBUG (jsonrpc/5) [common.commands] 
>>> /usr/bin/taskset --cpu-list 0-1 /usr/bin/sudo -n /sbin/lvm 

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-22 Thread Marcin Sobczyk



On 11/22/19 4:54 PM, Martin Perina wrote:



On Fri, Nov 22, 2019 at 4:43 PM Dominik Holler wrote:



On Fri, Nov 22, 2019 at 12:17 PM Dominik Holler wrote:



On Fri, Nov 22, 2019 at 12:00 PM Miguel Duarte de Mora Barroso wrote:

On Fri, Nov 22, 2019 at 11:54 AM Vojtech Juranek wrote:
>
> On Friday 22 November 2019 9:56:56 CET Miguel Duarte de
Mora Barroso wrote:
> > On Fri, Nov 22, 2019 at 9:49 AM Vojtech Juranek
> > wrote:
> > >
> > >
> > > On Friday 22 November 2019 9:41:26 CET Dominik
Holler wrote:
> > >
> > > > On Fri, Nov 22, 2019 at 8:40 AM Dominik Holler
> > > > wrote:
> > > >
> > > > > On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer
> > > > > wrote:
> > > > >
> > > > >> On Thu, Nov 21, 2019 at 11:24 PM Vojtech Juranek
> > > > >>
> > > > >>
> > > > >>
> > > > >> wrote:
> > > > >>
> > > > >> > Hi,
> > > > >> > OST fails (see e.g. [1]) in
002_bootstrap.check_update_host. It
> > > > >> > fails
> > > > >>
> > > > >>
> > > > >>
> > > > >> with
> > > > >>
> > > > >>
> > > > >>
> > > > >> >  FAILED! => {"changed": false, "failures":
[], "msg": "Depsolve
> > > > >> >  Error
> > > > >>
> > > > >>
> > > > >>
> > > > >> occured:
> > > > >>
> > > > >> > \n Problem 1: cannot install the best update
candidate for package
> > > > >> > vdsm-
> > > > >> >
network-4.40.0-1236.git63ea8cb8b.el8.x86_64\n  - nothing
provides
> > > > >>
> > > > >>
> > > > >>
> > > > >> nmstate
> > > > >>
> > > > >>
> > > > >>
> > > > >> > needed by
vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
> > > > >> > Problem 2:
> > > > >> > package
vdsm-python-4.40.0-1271.git524e08c8a.el8.noarch requires
> > > > >>
> > > > >>
> > > > >>
> > > > >> vdsm-network
> > > > >>
> > > > >>
> > > > >>
> > > > >> > = 4.40.0-1271.git524e08c8a.el8, but none of
the providers can be
> > > > >>
> > > > >>
> > > > >>
> > > > >> installed\n
> > > > >>
> > > > >>
> > > > >>
> > > > >> > - cannot install the best update candidate
for package vdsm-
> > > > >> > python-4.40.0-1236.git63ea8cb8b.el8.noarch\n 
- nothing provides
> > > > >> > nmstate
> > > > >> > needed by
vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
> > > > >>
> > > > >>
> > > > >>
> > > > >> nmstate should be provided by copr repo enabled by
> > > > >> ovirt-release-master.
> > > > >
> > > > >
> > > > >
> > > > > I re-triggered as
> > > > >
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6131
> > > > > maybe
> > > > > https://gerrit.ovirt.org/#/c/104825/
> > > > > was missing
> > > >
> > > >
> > > >
> > > > Looks like
> > > > https://gerrit.ovirt.org/#/c/104825/ is ignored by
OST.
> > >
> > >
> > >
> > > maybe not. You re-triggered with [1], which really
missed this patch.
> > > I did a rebase and now running with this patch in
build #6132 [2]. Let's
> > > wait
>  for it to see if gerrit #104825 helps.
> > >
> > >
> > >
> > > [1]
https://jenkins.ovirt.org/job/standard-manual-runner/909/
> > > [2]
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6132/
> > >
> > >
> > >
> > > > Miguel, do you think merging
> > > >
> > > >
> > > >
> > > >

https://gerrit.ovirt.org/#/c/104495/15/common/yum-repos/ovirt-master-hos
> > > > t-cq
>  

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-22 Thread Martin Perina
On Fri, Nov 22, 2019 at 4:43 PM Dominik Holler  wrote:

>
> On Fri, Nov 22, 2019 at 12:17 PM Dominik Holler 
> wrote:
>
>>
>>
>> On Fri, Nov 22, 2019 at 12:00 PM Miguel Duarte de Mora Barroso <
>> mdbarr...@redhat.com> wrote:
>>
>>> On Fri, Nov 22, 2019 at 11:54 AM Vojtech Juranek 
>>> wrote:
>>> >
>>> > On Friday 22 November 2019 9:56:56 CET Miguel Duarte de Mora Barroso
>>> wrote:
>>> > > On Fri, Nov 22, 2019 at 9:49 AM Vojtech Juranek >> >
>>> > > wrote:
>>> > > >
>>> > > >
>>> > > > On Friday 22 November 2019 9:41:26 CET Dominik Holler wrote:
>>> > > >
>>> > > > > On Fri, Nov 22, 2019 at 8:40 AM Dominik Holler <
>>> dhol...@redhat.com>
>>> > > > > wrote:
>>> > > > >
>>> > > > > > On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer <
>>> nsof...@redhat.com>
>>> > > > > > wrote:
>>> > > > > >
>>> > > > > >> On Thu, Nov 21, 2019 at 11:24 PM Vojtech Juranek
>>> > > > > >> 
>>> > > > > >>
>>> > > > > >>
>>> > > > > >>
>>> > > > > >> wrote:
>>> > > > > >>
>>> > > > > >> > Hi,
>>> > > > > >> > OST fails (see e.g. [1]) in
>>> 002_bootstrap.check_update_host. It
>>> > > > > >> > fails
>>> > > > > >>
>>> > > > > >>
>>> > > > > >>
>>> > > > > >> with
>>> > > > > >>
>>> > > > > >>
>>> > > > > >>
>>> > > > > >> >  FAILED! => {"changed": false, "failures": [], "msg":
>>> "Depsolve
>>> > > > > >> >  Error
>>> > > > > >>
>>> > > > > >>
>>> > > > > >>
>>> > > > > >> occured:
>>> > > > > >>
>>> > > > > >> > \n Problem 1: cannot install the best update candidate for
>>> package
>>> > > > > >> > vdsm-
>>> > > > > >> > network-4.40.0-1236.git63ea8cb8b.el8.x86_64\n  - nothing
>>> provides
>>> > > > > >>
>>> > > > > >>
>>> > > > > >>
>>> > > > > >> nmstate
>>> > > > > >>
>>> > > > > >>
>>> > > > > >>
>>> > > > > >> > needed by vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
>>> > > > > >> > Problem 2:
>>> > > > > >> > package vdsm-python-4.40.0-1271.git524e08c8a.el8.noarch
>>> requires
>>> > > > > >>
>>> > > > > >>
>>> > > > > >>
>>> > > > > >> vdsm-network
>>> > > > > >>
>>> > > > > >>
>>> > > > > >>
>>> > > > > >> > = 4.40.0-1271.git524e08c8a.el8, but none of the providers
>>> can be
>>> > > > > >>
>>> > > > > >>
>>> > > > > >>
>>> > > > > >> installed\n
>>> > > > > >>
>>> > > > > >>
>>> > > > > >>
>>> > > > > >> > - cannot install the best update candidate for package vdsm-
>>> > > > > >> > python-4.40.0-1236.git63ea8cb8b.el8.noarch\n  - nothing
>>> provides
>>> > > > > >> > nmstate
>>> > > > > >> > needed by vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
>>> > > > > >>
>>> > > > > >>
>>> > > > > >>
>>> > > > > >> nmstate should be provided by copr repo enabled by
>>> > > > > >> ovirt-release-master.
>>> > > > > >
>>> > > > > >
>>> > > > > >
>>> > > > > > I re-triggered as
>>> > > > > > https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6131
>>> > > > > > maybe
>>> > > > > > https://gerrit.ovirt.org/#/c/104825/
>>> > > > > > was missing
>>> > > > >
>>> > > > >
>>> > > > >
>>> > > > > Looks like
>>> > > > > https://gerrit.ovirt.org/#/c/104825/ is ignored by OST.
>>> > > >
>>> > > >
>>> > > >
>>> > > > maybe not. You re-triggered with [1], which really missed this
>>> patch.
>>> > > > I did a rebase and now running with this patch in build #6132 [2].
>>> Let's
>>> > > > wait
>>> >  for it to see if gerrit #104825 helps.
>>> > > >
>>> > > >
>>> > > >
>>> > > > [1] https://jenkins.ovirt.org/job/standard-manual-runner/909/
>>> > > > [2] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6132/
>>> > > >
>>> > > >
>>> > > >
>>> > > > > Miguel, do you think merging
>>> > > > >
>>> > > > >
>>> > > > >
>>> > > > >
>>> https://gerrit.ovirt.org/#/c/104495/15/common/yum-repos/ovirt-master-hos
>>> > > > > t-cq
>>> >  .repo.in
>>> > > > >
>>> > > > >
>>> > > > >
>>> > > > > would solve this?
>>> > >
>>> > >
>>> > > I've split the patch Dominik mentions above in two, one of them
>>> adding
>>> > > the nmstate / networkmanager copr repos - [3].
>>> > >
>>> > > Let's see if it fixes it.
>>> >
>>> > it fixes original issue, but OST still fails in
>>> > 098_ovirt_provider_ovn.use_ovn_provider:
>>> >
>>> > https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6134
>>>
>>> I think Dominik was looking into this issue; +Dominik Holler please
>>> confirm.
>>>
>>> Let me know if you need any help Dominik.
>>>
>>
>>
>> Thanks.
>> The problem is that the hosts lost connection to storage:
>>
>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6134/artifact/exported-artifacts/test_logs/basic-suite-master/post-098_ovirt_provider_ovn.py/lago-basic-suite-master-host-0/_var_log/vdsm/vdsm.log
>> :
>>
>> 2019-11-22 05:39:12,326-0500 DEBUG (jsonrpc/5) [common.commands] 
>> /usr/bin/taskset --cpu-list 0-1 /usr/bin/sudo -n /sbin/lvm vgs --config 
>> 'devices {  preferred_names=["^/dev/mapper/"]  ignore_suspended_devices=1  
>> write_cache_state=0  disable_after_error_count=3  
>> 

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-22 Thread Dominik Holler
On Fri, Nov 22, 2019 at 12:17 PM Dominik Holler  wrote:

>
>
> On Fri, Nov 22, 2019 at 12:00 PM Miguel Duarte de Mora Barroso <
> mdbarr...@redhat.com> wrote:
>
>> On Fri, Nov 22, 2019 at 11:54 AM Vojtech Juranek 
>> wrote:
>> >
>> > On Friday 22 November 2019 9:56:56 CET Miguel Duarte de Mora Barroso
>> wrote:
>> > > On Fri, Nov 22, 2019 at 9:49 AM Vojtech Juranek 
>> > > wrote:
>> > > >
>> > > >
>> > > > On Friday 22 November 2019 9:41:26 CET Dominik Holler wrote:
>> > > >
>> > > > > On Fri, Nov 22, 2019 at 8:40 AM Dominik Holler <
>> dhol...@redhat.com>
>> > > > > wrote:
>> > > > >
>> > > > > > On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer > >
>> > > > > > wrote:
>> > > > > >
>> > > > > >> On Thu, Nov 21, 2019 at 11:24 PM Vojtech Juranek
>> > > > > >> 
>> > > > > >>
>> > > > > >>
>> > > > > >>
>> > > > > >> wrote:
>> > > > > >>
>> > > > > >> > Hi,
>> > > > > >> > OST fails (see e.g. [1]) in 002_bootstrap.check_update_host.
>> It
>> > > > > >> > fails
>> > > > > >>
>> > > > > >>
>> > > > > >>
>> > > > > >> with
>> > > > > >>
>> > > > > >>
>> > > > > >>
>> > > > > >> >  FAILED! => {"changed": false, "failures": [], "msg":
>> "Depsolve
>> > > > > >> >  Error
>> > > > > >>
>> > > > > >>
>> > > > > >>
>> > > > > >> occured:
>> > > > > >>
>> > > > > >> > \n Problem 1: cannot install the best update candidate for
>> package
>> > > > > >> > vdsm-
>> > > > > >> > network-4.40.0-1236.git63ea8cb8b.el8.x86_64\n  - nothing
>> provides
>> > > > > >>
>> > > > > >>
>> > > > > >>
>> > > > > >> nmstate
>> > > > > >>
>> > > > > >>
>> > > > > >>
>> > > > > >> > needed by vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
>> > > > > >> > Problem 2:
>> > > > > >> > package vdsm-python-4.40.0-1271.git524e08c8a.el8.noarch
>> requires
>> > > > > >>
>> > > > > >>
>> > > > > >>
>> > > > > >> vdsm-network
>> > > > > >>
>> > > > > >>
>> > > > > >>
>> > > > > >> > = 4.40.0-1271.git524e08c8a.el8, but none of the providers
>> can be
>> > > > > >>
>> > > > > >>
>> > > > > >>
>> > > > > >> installed\n
>> > > > > >>
>> > > > > >>
>> > > > > >>
>> > > > > >> > - cannot install the best update candidate for package vdsm-
>> > > > > >> > python-4.40.0-1236.git63ea8cb8b.el8.noarch\n  - nothing
>> provides
>> > > > > >> > nmstate
>> > > > > >> > needed by vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
>> > > > > >>
>> > > > > >>
>> > > > > >>
>> > > > > >> nmstate should be provided by copr repo enabled by
>> > > > > >> ovirt-release-master.
>> > > > > >
>> > > > > >
>> > > > > >
>> > > > > > I re-triggered as
>> > > > > > https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6131
>> > > > > > maybe
>> > > > > > https://gerrit.ovirt.org/#/c/104825/
>> > > > > > was missing
>> > > > >
>> > > > >
>> > > > >
>> > > > > Looks like
>> > > > > https://gerrit.ovirt.org/#/c/104825/ is ignored by OST.
>> > > >
>> > > >
>> > > >
>> > > > maybe not. You re-triggered with [1], which really missed this
>> patch.
>> > > > I did a rebase and now running with this patch in build #6132 [2].
>> Let's
>> > > > wait
>> >  for it to see if gerrit #104825 helps.
>> > > >
>> > > >
>> > > >
>> > > > [1] https://jenkins.ovirt.org/job/standard-manual-runner/909/
>> > > > [2] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6132/
>> > > >
>> > > >
>> > > >
>> > > > > Miguel, do you think merging
>> > > > >
>> > > > >
>> > > > >
>> > > > >
>> https://gerrit.ovirt.org/#/c/104495/15/common/yum-repos/ovirt-master-hos
>> > > > > t-cq
>> >  .repo.in
>> > > > >
>> > > > >
>> > > > >
>> > > > > would solve this?
>> > >
>> > >
>> > > I've split the patch Dominik mentions above in two, one of them adding
>> > > the nmstate / networkmanager copr repos - [3].
>> > >
>> > > Let's see if it fixes it.
>> >
>> > it fixes original issue, but OST still fails in
>> > 098_ovirt_provider_ovn.use_ovn_provider:
>> >
>> > https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6134
>>
>> I think Dominik was looking into this issue; +Dominik Holler please
>> confirm.
>>
>> Let me know if you need any help Dominik.
>>
>
>
> Thanks.
> The problem is that the hosts lost connection to storage:
>
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6134/artifact/exported-artifacts/test_logs/basic-suite-master/post-098_ovirt_provider_ovn.py/lago-basic-suite-master-host-0/_var_log/vdsm/vdsm.log
> :
>
> 2019-11-22 05:39:12,326-0500 DEBUG (jsonrpc/5) [common.commands] 
> /usr/bin/taskset --cpu-list 0-1 /usr/bin/sudo -n /sbin/lvm vgs --config 
> 'devices {  preferred_names=["^/dev/mapper/"]  ignore_suspended_devices=1  
> write_cache_state=0  disable_after_error_count=3  
> filter=["a|^/dev/mapper/36001405107ea8b4e3ac4ddeb3e19890f$|^/dev/mapper/360014054924c91df75e41178e4b8a80c$|^/dev/mapper/3600140561c0d02829924b77ab7323f17$|^/dev/mapper/3600140582feebc04ca5409a99660dbbc$|^/dev/mapper/36001405c3c53755c13c474dada6be354$|",
>  "r|.*|"] } global {  locking_type=1  prioritise_write_locks=1  
> wait_for_locks=1  use_lvmetad=0 } backup {  retain_min=50  retain_days=0 }' 
> --noheadings 

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-22 Thread Dominik Holler
On Fri, Nov 22, 2019 at 12:00 PM Miguel Duarte de Mora Barroso <
mdbarr...@redhat.com> wrote:

> On Fri, Nov 22, 2019 at 11:54 AM Vojtech Juranek 
> wrote:
> >
> > On Friday 22 November 2019 9:56:56 CET Miguel Duarte de Mora Barroso
> wrote:
> > > On Fri, Nov 22, 2019 at 9:49 AM Vojtech Juranek 
> > > wrote:
> > > >
> > > >
> > > > On Friday 22 November 2019 9:41:26 CET Dominik Holler wrote:
> > > >
> > > > > On Fri, Nov 22, 2019 at 8:40 AM Dominik Holler  >
> > > > > wrote:
> > > > >
> > > > > > On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer 
> > > > > > wrote:
> > > > > >
> > > > > >> On Thu, Nov 21, 2019 at 11:24 PM Vojtech Juranek
> > > > > >> 
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> wrote:
> > > > > >>
> > > > > >> > Hi,
> > > > > >> > OST fails (see e.g. [1]) in 002_bootstrap.check_update_host.
> It
> > > > > >> > fails
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> with
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> >  FAILED! => {"changed": false, "failures": [], "msg":
> "Depsolve
> > > > > >> >  Error
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> occured:
> > > > > >>
> > > > > >> > \n Problem 1: cannot install the best update candidate for
> package
> > > > > >> > vdsm-
> > > > > >> > network-4.40.0-1236.git63ea8cb8b.el8.x86_64\n  - nothing
> provides
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> nmstate
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> > needed by vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
> > > > > >> > Problem 2:
> > > > > >> > package vdsm-python-4.40.0-1271.git524e08c8a.el8.noarch
> requires
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> vdsm-network
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> > = 4.40.0-1271.git524e08c8a.el8, but none of the providers can
> be
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> installed\n
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> > - cannot install the best update candidate for package vdsm-
> > > > > >> > python-4.40.0-1236.git63ea8cb8b.el8.noarch\n  - nothing
> provides
> > > > > >> > nmstate
> > > > > >> > needed by vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> nmstate should be provided by copr repo enabled by
> > > > > >> ovirt-release-master.
> > > > > >
> > > > > >
> > > > > >
> > > > > > I re-triggered as
> > > > > > https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6131
> > > > > > maybe
> > > > > > https://gerrit.ovirt.org/#/c/104825/
> > > > > > was missing
> > > > >
> > > > >
> > > > >
> > > > > Looks like
> > > > > https://gerrit.ovirt.org/#/c/104825/ is ignored by OST.
> > > >
> > > >
> > > >
> > > > maybe not. You re-triggered with [1], which really missed this patch.
> > > > I did a rebase and now running with this patch in build #6132 [2].
> Let's
> > > > wait
> >  for it to see if gerrit #104825 helps.
> > > >
> > > >
> > > >
> > > > [1] https://jenkins.ovirt.org/job/standard-manual-runner/909/
> > > > [2] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6132/
> > > >
> > > >
> > > >
> > > > > Miguel, do you think merging
> > > > >
> > > > >
> > > > >
> > > > >
> https://gerrit.ovirt.org/#/c/104495/15/common/yum-repos/ovirt-master-hos
> > > > > t-cq
> >  .repo.in
> > > > >
> > > > >
> > > > >
> > > > > would solve this?
> > >
> > >
> > > I've split the patch Dominik mentions above in two, one of them adding
> > > the nmstate / networkmanager copr repos - [3].
> > >
> > > Let's see if it fixes it.
> >
> > it fixes original issue, but OST still fails in
> > 098_ovirt_provider_ovn.use_ovn_provider:
> >
> > https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6134
>
> I think Dominik was looking into this issue; +Dominik Holler please
> confirm.
>
> Let me know if you need any help Dominik.
>


Thanks.
The problem is that the hosts lost connection to storage:
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6134/artifact/exported-artifacts/test_logs/basic-suite-master/post-098_ovirt_provider_ovn.py/lago-basic-suite-master-host-0/_var_log/vdsm/vdsm.log
:

2019-11-22 05:39:12,326-0500 DEBUG (jsonrpc/5) [common.commands]
/usr/bin/taskset --cpu-list 0-1 /usr/bin/sudo -n /sbin/lvm vgs
--config 'devices {  preferred_names=["^/dev/mapper/"]
ignore_suspended_devices=1  write_cache_state=0
disable_after_error_count=3
filter=["a|^/dev/mapper/36001405107ea8b4e3ac4ddeb3e19890f$|^/dev/mapper/360014054924c91df75e41178e4b8a80c$|^/dev/mapper/3600140561c0d02829924b77ab7323f17$|^/dev/mapper/3600140582feebc04ca5409a99660dbbc$|^/dev/mapper/36001405c3c53755c13c474dada6be354$|",
"r|.*|"] } global {  locking_type=1  prioritise_write_locks=1
wait_for_locks=1  use_lvmetad=0 } backup {  retain_min=50
retain_days=0 }' --noheadings --units b --nosuffix --separator '|'
--ignoreskippedcluster -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
(cwd None) (commands:153)
2019-11-22 05:39:12,415-0500 ERROR (check/loop) [storage.Monitor]
Error 

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-22 Thread Miguel Duarte de Mora Barroso
On Fri, Nov 22, 2019 at 11:54 AM Vojtech Juranek  wrote:
>
> On Friday 22 November 2019 9:56:56 CET Miguel Duarte de Mora Barroso wrote:
> > On Fri, Nov 22, 2019 at 9:49 AM Vojtech Juranek 
> > wrote:
> > >
> > >
> > > On Friday 22 November 2019 9:41:26 CET Dominik Holler wrote:
> > >
> > > > On Fri, Nov 22, 2019 at 8:40 AM Dominik Holler 
> > > > wrote:
> > > >
> > > > > On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer 
> > > > > wrote:
> > > > >
> > > > >> On Thu, Nov 21, 2019 at 11:24 PM Vojtech Juranek
> > > > >> 
> > > > >>
> > > > >>
> > > > >>
> > > > >> wrote:
> > > > >>
> > > > >> > Hi,
> > > > >> > OST fails (see e.g. [1]) in 002_bootstrap.check_update_host. It
> > > > >> > fails
> > > > >>
> > > > >>
> > > > >>
> > > > >> with
> > > > >>
> > > > >>
> > > > >>
> > > > >> >  FAILED! => {"changed": false, "failures": [], "msg": "Depsolve
> > > > >> >  Error
> > > > >>
> > > > >>
> > > > >>
> > > > >> occured:
> > > > >>
> > > > >> > \n Problem 1: cannot install the best update candidate for package
> > > > >> > vdsm-
> > > > >> > network-4.40.0-1236.git63ea8cb8b.el8.x86_64\n  - nothing provides
> > > > >>
> > > > >>
> > > > >>
> > > > >> nmstate
> > > > >>
> > > > >>
> > > > >>
> > > > >> > needed by vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
> > > > >> > Problem 2:
> > > > >> > package vdsm-python-4.40.0-1271.git524e08c8a.el8.noarch requires
> > > > >>
> > > > >>
> > > > >>
> > > > >> vdsm-network
> > > > >>
> > > > >>
> > > > >>
> > > > >> > = 4.40.0-1271.git524e08c8a.el8, but none of the providers can be
> > > > >>
> > > > >>
> > > > >>
> > > > >> installed\n
> > > > >>
> > > > >>
> > > > >>
> > > > >> > - cannot install the best update candidate for package vdsm-
> > > > >> > python-4.40.0-1236.git63ea8cb8b.el8.noarch\n  - nothing provides
> > > > >> > nmstate
> > > > >> > needed by vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
> > > > >>
> > > > >>
> > > > >>
> > > > >> nmstate should be provided by copr repo enabled by
> > > > >> ovirt-release-master.
> > > > >
> > > > >
> > > > >
> > > > > I re-triggered as
> > > > > https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6131
> > > > > maybe
> > > > > https://gerrit.ovirt.org/#/c/104825/
> > > > > was missing
> > > >
> > > >
> > > >
> > > > Looks like
> > > > https://gerrit.ovirt.org/#/c/104825/ is ignored by OST.
> > >
> > >
> > >
> > > maybe not. You re-triggered with [1], which really missed this patch.
> > > I did a rebase and now running with this patch in build #6132 [2]. Let's
> > > wait
>  for it to see if gerrit #104825 helps.
> > >
> > >
> > >
> > > [1] https://jenkins.ovirt.org/job/standard-manual-runner/909/
> > > [2] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6132/
> > >
> > >
> > >
> > > > Miguel, do you think merging
> > > >
> > > >
> > > >
> > > > https://gerrit.ovirt.org/#/c/104495/15/common/yum-repos/ovirt-master-hos
> > > > t-cq
>  .repo.in
> > > >
> > > >
> > > >
> > > > would solve this?
> >
> >
> > I've split the patch Dominik mentions above in two, one of them adding
> > the nmstate / networkmanager copr repos - [3].
> >
> > Let's see if it fixes it.
>
> it fixes original issue, but OST still fails in
> 098_ovirt_provider_ovn.use_ovn_provider:
>
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6134

I think Dominik was looking into this issue; +Dominik Holler please confirm.

Let me know if you need any help Dominik.
>
> > [3] - https://gerrit.ovirt.org/#/c/104897/
> >
> >
> > > >
> > > >
> > > > >> Who installs this rpm in OST?
> > > > >
> > > > >
> > > > >
> > > > > I do not understand the question.
> > > > >
> > > > >
> > > > >
> > > > >> > [...]
> > > > >> >
> > > > >> >
> > > > >> >
> > > > >> > See [2] for full error.
> > > > >> >
> > > > >> >
> > > > >> >
> > > > >> > Can someone please take a look?
> > > > >> > Thanks
> > > > >> > Vojta
> > > > >> >
> > > > >> >
> > > > >> >
> > > > >> > [1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6128/
> > > > >> > [2]
> > > > >>
> > > > >>
> > > > >>
> > > > >> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6128/artifact
> > > > >> /
> > > > >>
> > > > >>
> > > > >>
> > > > >> > exported-artifacts/test_logs/basic-suite-master/
> > > > >>
> > > > >>
> > > > >>
> > > > >> post-002_bootstrap.py/lago-
> > > > >>
> > > > >>
> > > > >>
> > > > >> basic-suite-master-engine/_var_log/ovirt-engine/engine.log___
> > > > >> 
> > > > >> >>
> > > > >>
> > > > >> > Devel mailing list -- devel@ovirt.org
> > > > >> > To unsubscribe send an email to devel-le...@ovirt.org
> > > > >> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > > > >>
> > > > >>
> > > > >>
> > > > >> > oVirt Code of Conduct:
> > > > >>
> > > > >> https://www.ovirt.org/community/about/community-guidelines/
> > > > >>
> > > > >>
> > > > >>
> > > > >> > List Archives:
> > > > >>
> > > > >> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/4K5N3VQ
> > > > >> N26B
> > > > >> L73K7D45A2IR7R3UMMM23/

[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-22 Thread Miguel Duarte de Mora Barroso
On Fri, Nov 22, 2019 at 9:49 AM Vojtech Juranek  wrote:
>
> On Friday 22 November 2019 9:41:26 CET Dominik Holler wrote:
> > On Fri, Nov 22, 2019 at 8:40 AM Dominik Holler  wrote:
> > > On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer  wrote:
> > >> On Thu, Nov 21, 2019 at 11:24 PM Vojtech Juranek 
> > >>
> > >> wrote:
> > >> > Hi,
> > >> > OST fails (see e.g. [1]) in 002_bootstrap.check_update_host. It fails
> > >>
> > >> with
> > >>
> > >> >  FAILED! => {"changed": false, "failures": [], "msg": "Depsolve Error
> > >>
> > >> occured:
> > >> > \n Problem 1: cannot install the best update candidate for package
> > >> > vdsm-
> > >> > network-4.40.0-1236.git63ea8cb8b.el8.x86_64\n  - nothing provides
> > >>
> > >> nmstate
> > >>
> > >> > needed by vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n Problem 2:
> > >> > package vdsm-python-4.40.0-1271.git524e08c8a.el8.noarch requires
> > >>
> > >> vdsm-network
> > >>
> > >> > = 4.40.0-1271.git524e08c8a.el8, but none of the providers can be
> > >>
> > >> installed\n
> > >>
> > >> > - cannot install the best update candidate for package vdsm-
> > >> > python-4.40.0-1236.git63ea8cb8b.el8.noarch\n  - nothing provides
> > >> > nmstate
> > >> > needed by vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
> > >>
> > >> nmstate should be provided by copr repo enabled by ovirt-release-master.
> > >
> > > I re-triggered as
> > > https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6131
> > > maybe
> > > https://gerrit.ovirt.org/#/c/104825/
> > > was missing
> >
> > Looks like
> > https://gerrit.ovirt.org/#/c/104825/ is ignored by OST.
>
> maybe not. You re-triggered with [1], which really missed this patch.
> I did a rebase and now running with this patch in build #6132 [2]. Let's wait
> for it to see if gerrit #104825 helps.
>
> [1] https://jenkins.ovirt.org/job/standard-manual-runner/909/
> [2] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6132/
>
> > Miguel, do you think merging
> >
> > https://gerrit.ovirt.org/#/c/104495/15/common/yum-repos/ovirt-master-host-cq
> > .repo.in
> >
> > would solve this?

I've split the patch Dominik mentions above in two, one of them adding
the nmstate / networkmanager copr repos - [3].

Let's see if it fixes it.

[3] - https://gerrit.ovirt.org/#/c/104897/

> >
> > >> Who installs this rpm in OST?
> > >
> > > I do not understand the question.
> > >
> > >> > [...]
> > >> >
> > >> > See [2] for full error.
> > >> >
> > >> > Can someone please take a look?
> > >> > Thanks
> > >> > Vojta
> > >> >
> > >> > [1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6128/
> > >> > [2]
> > >>
> > >> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6128/artifact/
> > >>
> > >> > exported-artifacts/test_logs/basic-suite-master/
> > >>
> > >> post-002_bootstrap.py/lago-
> > >>
> > >> basic-suite-master-engine/_var_log/ovirt-engine/engine.log___
> > >> >>
> > >> > Devel mailing list -- devel@ovirt.org
> > >> > To unsubscribe send an email to devel-le...@ovirt.org
> > >> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > >>
> > >> > oVirt Code of Conduct:
> > >> https://www.ovirt.org/community/about/community-guidelines/
> > >>
> > >> > List Archives:
> > >> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/4K5N3VQN26B
> > >> L73K7D45A2IR7R3UMMM23/ ___
> > >> Devel mailing list -- devel@ovirt.org
> > >> To unsubscribe send an email to devel-le...@ovirt.org
> > >> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > >> oVirt Code of Conduct:
> > >> https://www.ovirt.org/community/about/community-guidelines/
> > >> List Archives:
> > >> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/JN7MNUZN5K3
> > >> NS5TGXFCILYES77KI5TZU/
>


[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-22 Thread Vojtech Juranek
On Friday 22 November 2019 9:41:26 CET Dominik Holler wrote:
> On Fri, Nov 22, 2019 at 8:40 AM Dominik Holler  wrote:
> > On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer  wrote:
> >> On Thu, Nov 21, 2019 at 11:24 PM Vojtech Juranek 
> >> 
> >> wrote:
> >> > Hi,
> >> > OST fails (see e.g. [1]) in 002_bootstrap.check_update_host. It fails
> >> 
> >> with
> >> 
> >> >  FAILED! => {"changed": false, "failures": [], "msg": "Depsolve Error
> >> 
> >> occured:
> >> > \n Problem 1: cannot install the best update candidate for package
> >> > vdsm-
> >> > network-4.40.0-1236.git63ea8cb8b.el8.x86_64\n  - nothing provides
> >> 
> >> nmstate
> >> 
> >> > needed by vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n Problem 2:
> >> > package vdsm-python-4.40.0-1271.git524e08c8a.el8.noarch requires
> >> 
> >> vdsm-network
> >> 
> >> > = 4.40.0-1271.git524e08c8a.el8, but none of the providers can be
> >> 
> >> installed\n
> >> 
> >> > - cannot install the best update candidate for package vdsm-
> >> > python-4.40.0-1236.git63ea8cb8b.el8.noarch\n  - nothing provides
> >> > nmstate
> >> > needed by vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
> >> 
> >> nmstate should be provided by copr repo enabled by ovirt-release-master.
> > 
> > I re-triggered as
> > https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6131
> > maybe
> > https://gerrit.ovirt.org/#/c/104825/
> > was missing
> 
> Looks like
> https://gerrit.ovirt.org/#/c/104825/ is ignored by OST.

maybe not. You re-triggered with [1], which really missed this patch.
I did a rebase and now running with this patch in build #6132 [2]. Let's wait 
for it to see if gerrit #104825 helps.

[1] https://jenkins.ovirt.org/job/standard-manual-runner/909/
[2] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6132/

> Miguel, do you think merging
> 
> https://gerrit.ovirt.org/#/c/104495/15/common/yum-repos/ovirt-master-host-cq
> .repo.in
> 
> would solve this?
> 
> >> Who installs this rpm in OST?
> > 
> > I do not understand the question.
> > 
> >> > [...]
> >> > 
> >> > See [2] for full error.
> >> > 
> >> > Can someone please take a look?
> >> > Thanks
> >> > Vojta
> >> > 
> >> > [1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6128/
> >> > [2]
> >> 
> >> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6128/artifact/
> >> 
> >> > exported-artifacts/test_logs/basic-suite-master/
> >> 
> >> post-002_bootstrap.py/lago-
> >> 
> >> basic-suite-master-engine/_var_log/ovirt-engine/engine.log___
> >> >> 
> >> > Devel mailing list -- devel@ovirt.org
> >> > To unsubscribe send an email to devel-le...@ovirt.org
> >> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> >> 
> >> > oVirt Code of Conduct:
> >> https://www.ovirt.org/community/about/community-guidelines/
> >> 
> >> > List Archives:
> >> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/4K5N3VQN26B
> >> L73K7D45A2IR7R3UMMM23/ ___
> >> Devel mailing list -- devel@ovirt.org
> >> To unsubscribe send an email to devel-le...@ovirt.org
> >> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> >> oVirt Code of Conduct:
> >> https://www.ovirt.org/community/about/community-guidelines/
> >> List Archives:
> >> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/JN7MNUZN5K3
> >> NS5TGXFCILYES77KI5TZU/





[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-22 Thread Miguel Duarte de Mora Barroso
On Fri, Nov 22, 2019 at 9:41 AM Dominik Holler  wrote:
>
>
>
> On Fri, Nov 22, 2019 at 8:40 AM Dominik Holler  wrote:
>>
>>
>>
>> On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer  wrote:
>>>
>>> On Thu, Nov 21, 2019 at 11:24 PM Vojtech Juranek  
>>> wrote:
>>> >
>>> > Hi,
>>> > OST fails (see e.g. [1]) in 002_bootstrap.check_update_host. It fails with
>>> >
>>> >  FAILED! => {"changed": false, "failures": [], "msg": "Depsolve Error 
>>> > occured:
>>> > \n Problem 1: cannot install the best update candidate for package vdsm-
>>> > network-4.40.0-1236.git63ea8cb8b.el8.x86_64\n  - nothing provides nmstate
>>> > needed by vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n Problem 2:
>>> > package vdsm-python-4.40.0-1271.git524e08c8a.el8.noarch requires 
>>> > vdsm-network
>>> > = 4.40.0-1271.git524e08c8a.el8, but none of the providers can be 
>>> > installed\n
>>> > - cannot install the best update candidate for package vdsm-
>>> > python-4.40.0-1236.git63ea8cb8b.el8.noarch\n  - nothing provides nmstate
>>> > needed by vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
>>>
>>> nmstate should be provided by copr repo enabled by ovirt-release-master.
>>
>>
>>
>> I re-triggered as
>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6131
>> maybe
>> https://gerrit.ovirt.org/#/c/104825/
>> was missing
>>
>
> Looks like
> https://gerrit.ovirt.org/#/c/104825/ is ignored by OST.

Right.

>
> Miguel, do you think merging
>
> https://gerrit.ovirt.org/#/c/104495/15/common/yum-repos/ovirt-master-host-cq.repo.in

It should.

I'll break down that patch into smaller pieces - one adding the
nmstate / NM copr repos, another w/ enabling nmstate.
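
For illustration, the repo half of that split is just a copr entry of
roughly this shape on the hosts - the owner/project path and baseurl
below are placeholders, the real values are whatever ends up in the
patch:

cat > /etc/yum.repos.d/copr-nmstate.repo << 'EOF'
[copr-nmstate]
name=Copr repo providing nmstate (placeholder values)
baseurl=https://copr-be.cloud.fedoraproject.org/results/OWNER/PROJECT/epel-8-$basearch/
enabled=1
gpgcheck=0
EOF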

>
> would solve this?
>
>>
>>
>>>
>>> Who installs this rpm in OST?
>>>
>>
>> I do not understand the question.
>>
>>>
>>> > [...]
>>> >
>>> > See [2] for full error.
>>> >
>>> > Can someone please take a look?
>>> > Thanks
>>> > Vojta
>>> >
>>> > [1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6128/
>>> > [2] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6128/artifact/
>>> > exported-artifacts/test_logs/basic-suite-master/post-002_bootstrap.py/lago-
>>> > basic-suite-master-engine/_var_log/ovirt-engine/engine.log___
>>> > Devel mailing list -- devel@ovirt.org
>>> > To unsubscribe send an email to devel-le...@ovirt.org
>>> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> > oVirt Code of Conduct: 
>>> > https://www.ovirt.org/community/about/community-guidelines/
>>> > List Archives: 
>>> > https://lists.ovirt.org/archives/list/devel@ovirt.org/message/4K5N3VQN26BL73K7D45A2IR7R3UMMM23/
>>> ___
>>> Devel mailing list -- devel@ovirt.org
>>> To unsubscribe send an email to devel-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct: 
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives: 
>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/JN7MNUZN5K3NS5TGXFCILYES77KI5TZU/


[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-22 Thread Dominik Holler
On Fri, Nov 22, 2019 at 8:40 AM Dominik Holler  wrote:

>
>
> On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer  wrote:
>
>> On Thu, Nov 21, 2019 at 11:24 PM Vojtech Juranek 
>> wrote:
>> >
>> > Hi,
>> > OST fails (see e.g. [1]) in 002_bootstrap.check_update_host. It fails
>> with
>> >
>> >  FAILED! => {"changed": false, "failures": [], "msg": "Depsolve Error
>> occured:
>> > \n Problem 1: cannot install the best update candidate for package vdsm-
>> > network-4.40.0-1236.git63ea8cb8b.el8.x86_64\n  - nothing provides
>> nmstate
>> > needed by vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n Problem 2:
>> > package vdsm-python-4.40.0-1271.git524e08c8a.el8.noarch requires
>> vdsm-network
>> > = 4.40.0-1271.git524e08c8a.el8, but none of the providers can be
>> installed\n
>> > - cannot install the best update candidate for package vdsm-
>> > python-4.40.0-1236.git63ea8cb8b.el8.noarch\n  - nothing provides nmstate
>> > needed by vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
>>
>> nmstate should be provided by copr repo enabled by ovirt-release-master.
>>
>
>
> I re-triggered as
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6131
> maybe
> https://gerrit.ovirt.org/#/c/104825/
> was missing
>
>
Looks like
https://gerrit.ovirt.org/#/c/104825/ is ignored by OST.

Miguel, do you think merging

https://gerrit.ovirt.org/#/c/104495/15/common/yum-repos/ovirt-master-host-cq.repo.in

would solve this?


>
>
>> Who installs this rpm in OST?
>>
>>
> I do not understand the question.
>
>
>> > [...]
>> >
>> > See [2] for full error.
>> >
>> > Can someone please take a look?
>> > Thanks
>> > Vojta
>> >
>> > [1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6128/
>> > [2]
>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6128/artifact/
>> > exported-artifacts/test_logs/basic-suite-master/
>> post-002_bootstrap.py/lago-
>> >
>> basic-suite-master-engine/_var_log/ovirt-engine/engine.log___
>> > Devel mailing list -- devel@ovirt.org
>> > To unsubscribe send an email to devel-le...@ovirt.org
>> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> > oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> > List Archives:
>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/4K5N3VQN26BL73K7D45A2IR7R3UMMM23/
>> ___
>> Devel mailing list -- devel@ovirt.org
>> To unsubscribe send an email to devel-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/JN7MNUZN5K3NS5TGXFCILYES77KI5TZU/
>>
>


[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-21 Thread Dominik Holler
On Thu, Nov 21, 2019 at 10:54 PM Nir Soffer  wrote:

> On Thu, Nov 21, 2019 at 11:24 PM Vojtech Juranek 
> wrote:
> >
> > Hi,
> > OST fails (see e.g. [1]) in 002_bootstrap.check_update_host. It fails
> with
> >
> >  FAILED! => {"changed": false, "failures": [], "msg": "Depsolve Error
> occured:
> > \n Problem 1: cannot install the best update candidate for package vdsm-
> > network-4.40.0-1236.git63ea8cb8b.el8.x86_64\n  - nothing provides nmstate
> > needed by vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n Problem 2:
> > package vdsm-python-4.40.0-1271.git524e08c8a.el8.noarch requires
> vdsm-network
> > = 4.40.0-1271.git524e08c8a.el8, but none of the providers can be
> installed\n
> > - cannot install the best update candidate for package vdsm-
> > python-4.40.0-1236.git63ea8cb8b.el8.noarch\n  - nothing provides nmstate
> > needed by vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n
>
> nmstate should be provided by copr repo enabled by ovirt-release-master.
>


I re-triggered as
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6131
maybe
https://gerrit.ovirt.org/#/c/104825/
was missing



> Who installs this rpm in OST?
>
>
I do not understand the question.


> > [...]
> >
> > See [2] for full error.
> >
> > Can someone please take a look?
> > Thanks
> > Vojta
> >
> > [1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6128/
> > [2]
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6128/artifact/
> > exported-artifacts/test_logs/basic-suite-master/
> post-002_bootstrap.py/lago-
> >
> basic-suite-master-engine/_var_log/ovirt-engine/engine.log___
> > Devel mailing list -- devel@ovirt.org
> > To unsubscribe send an email to devel-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/4K5N3VQN26BL73K7D45A2IR7R3UMMM23/
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/JN7MNUZN5K3NS5TGXFCILYES77KI5TZU/
>


[ovirt-devel] Re: OST fails, nothing provides nmstate

2019-11-21 Thread Nir Soffer
On Thu, Nov 21, 2019 at 11:24 PM Vojtech Juranek  wrote:
>
> Hi,
> OST fails (see e.g. [1]) in 002_bootstrap.check_update_host. It fails with
>
>  FAILED! => {"changed": false, "failures": [], "msg": "Depsolve Error occured:
> \n Problem 1: cannot install the best update candidate for package vdsm-
> network-4.40.0-1236.git63ea8cb8b.el8.x86_64\n  - nothing provides nmstate
> needed by vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n Problem 2:
> package vdsm-python-4.40.0-1271.git524e08c8a.el8.noarch requires vdsm-network
> = 4.40.0-1271.git524e08c8a.el8, but none of the providers can be installed\n
> - cannot install the best update candidate for package vdsm-
> python-4.40.0-1236.git63ea8cb8b.el8.noarch\n  - nothing provides nmstate
> needed by vdsm-network-4.40.0-1271.git524e08c8a.el8.x86_64\n

nmstate should be provided by copr repo enabled by ovirt-release-master.

Who installs this rpm in OST?
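
A quick way to check this on a host, by the way - these are standard
dnf commands, the copr repo id will differ per setup:

# is any copr repo enabled at all?
dnf repolist enabled | grep -i copr
# does any enabled repo provide nmstate?
dnf repoquery --whatprovides nmstate

If the second command prints nothing, no enabled repo provides nmstate,
which matches the depsolve error above.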

> [...]
>
> See [2] for full error.
>
> Can someone please take a look?
> Thanks
> Vojta
>
> [1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6128/
> [2] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6128/artifact/
> exported-artifacts/test_logs/basic-suite-master/post-002_bootstrap.py/lago-
> basic-suite-master-engine/_var_log/ovirt-engine/engine.log___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/4K5N3VQN26BL73K7D45A2IR7R3UMMM23/


[ovirt-devel] Re: OST fails for collecting artifacts

2019-11-20 Thread Yedidyah Bar David
On Wed, Nov 20, 2019 at 11:09 AM Milan Zamazal  wrote:
>
> Yedidyah Bar David  writes:
>
> > On Mon, Nov 18, 2019 at 4:38 PM Marcin Sobczyk  wrote:
> >>
> >> Hi,
> >
> >>
> >> I thought it's been removed already, but it seems it's not.
> >> I'm the author of the currently used and newer implementation of artifact 
> >> collection in lago,
> >> but from my experiments, I've learned that the extraction of wildcard 
> >> paths never worked.
> >> Here's an email I wrote to Galit about it some time ago:
> >>
> >> =
> >>
> >> Hi Galit,
> >>
> >> aaah yes - wildcard collection doesn't work - it never worked, even before 
> >> my changes.
> >>
> >> TL; DR - we just need to remove wildcard stuff from
> >> "LagoInitFile.in" files ("/tmp/otopi*", "/tmp/ovirt*").
> >
> > I'd like to clarify why these were added.
> >
> > When host-deploy runs, during "Add host", it writes its log file
> > locally. It defaults to /tmp/otopi*.log, and we do not change that, so
> > it uses the default.
> >
> > If/when it finishes successfully, the engine copies this log to
> > /var/log/ovirt-engine/host-deploy on the engine machine, and
> > host-deploy removes its log from /tmp.
> >
> > If it fails in the middle, it will (likely) leave its log in /tmp.
> >
> > We had failures in the past that were hard to diagnose, and therefore
> > added /tmp/otopi*, to help debug future similar cases.
> >
> > We can instead configure otopi/host-deploy on the host to keep its
> > logs by default elsewhere. E.g. by doing this on the _host_ before
> > adding it:
> >
> > mkdir -p /var/log/otopi
> > mkdir -p /etc/otopi.env.d
> > echo 'export OTOPI_LOGDIR=/var/log/otopi' > /etc/otopi.env.d/99-logdir.env
> >
> > and then collect /var/log/otopi.
> >
> > If this solves the problem, fine with me to do this and remove /tmp/otopi*.
> >
> > That said, I think collecting wildcards can be useful, so perhaps we
> > should fix this anyway.
> >
> > Also, /tmp/otopi* should normally not exist, so it should almost
> > always fail to collect that. Is this really a problem? Or just
> > annoying to have to skip over when reading lago/ost logs?
>
> It is at least confusing and annoying to see these kinds of errors in OST
> logs.  If it is expected to fail in some cases, it shouldn't be reported
> as an error and should be silently ignored when the file doesn't exist.

I have no idea if it's possible to tell lago in the init file that
some artifacts are mandatory and some are optional. If not, perhaps we
want such functionality.

Either way, a different question is what to do if/when collection fails.

It's not clear to me that a job should fail if only collection failed.
Even if we set some as mandatory and they fail.

And another item is that it should be very clear from the logs if
failed collection also failed the job...

All are things to be done in lago, I guess. My patch only added a
single line to ovirt-system-tests, I didn't spend time on relevant
lago code.

Anyway, I now pushed https://gerrit.ovirt.org/104827 . If it passes
well, we can ignore the current thread's issue for some more time...

Sorry for the noise and best regards,

>
> Thanks,
> Milan
>
> >> If you're curious what really happens... :)
> >>
> >> The old algorithm uses "SCPClient" from "scp" library to copy files.
> >> "scp.get" function accepts, among others, two arguments - "remote_path" 
> >> and "local_path".
> >> What we do is change slashes in "remote_path" to underscores and pass 
> >> it as "local_path":
> >>
> >> https://github.com/lago-project/lago/blob/7024fb7cabbd4ebc87f3ad12e35e0f5832af7d56/lago/plugins/vm.py#L651
> >>
> >> The effect is, we command "scp.get" to retrieve "/tmp/otopi*" and save it 
> >> as "_tmp_otopi*"... which of course makes no sense at all and doesn't 
> >> work...
> >>
> >>
> >> The new implementation *could* work with wildcards because the collection 
> >> is divided into two stages:
> >>
> >> https://github.com/lago-project/lago/blob/9803eeacd41b3f91cd6661a110aa0285aaf4b957/lago/plugins/vm.py#L313
> >>
> >> First we do the "tar -> copy tar with ssh -> untar to tmpdir" thing and 
> >> *only then* we use "shutil.move" to rename the files to the underscored 
> >> version.
> >> We could use "glob" module to try to iterate over stuff like "/tmp/otopi*" 
> >> and rename the files appropriately.
> >> However, we maintain two parallel implementations of artifacts collection 
> >> - the old one being a "plan B" in case there's no "tar" or "gzip" on the 
> >> target machine.
> >> This is the reason we have to keep both implementations identical in 
> >> behavior to avoid confusion. BTW the new implementation could drop the 
> >> underscore-renaming process completely - I think the only reason we do the 
> >> renaming in the old algorithm is because "scp" won't create intermediary 
> >> directories for you... untarring stuff handles that case well, but that's 
> >> a backwards-compatibility-breaking change :)
> >>
> >> 
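
To make the above concrete, here is the gist of what Marcin describes,
sketched in shell - the real logic is Python in lago/plugins/vm.py, and
the host name and paths here are only illustrative:

# Old algorithm: slashes in the remote path become underscores in the
# local name, so the wildcard survives as a literal '*':
remote='/tmp/otopi*'
local_name=$(printf '%s' "$remote" | tr / _)   # -> '_tmp_otopi*'
# scp is then asked to store whatever matches /tmp/otopi* into a single
# literal file named '_tmp_otopi*', which cannot work.

# New algorithm, two stages: tar remotely, untar locally, and only then
# rename - at which point a local glob *could* resolve the wildcard:
ssh host 'tar czf - /tmp/otopi*' > artifacts.tar.gz
mkdir -p tmpdir && tar xzf artifacts.tar.gz -C tmpdir
for f in tmpdir/tmp/otopi*; do
    [ -e "$f" ] && mv "$f" "_tmp_$(basename "$f")"
done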

[ovirt-devel] Re: OST fails for collecting artifacts

2019-11-20 Thread Milan Zamazal
Yedidyah Bar David  writes:

> On Mon, Nov 18, 2019 at 4:38 PM Marcin Sobczyk  wrote:
>>
>> Hi,
>
>>
>> I thought it's been removed already, but it seems it's not.
>> I'm the author of the currently used and newer implementation of artifact 
>> collection in lago,
>> but from my experiments, I've learned that the extraction of wildcard paths 
>> never worked.
>> Here's an email I wrote to Galit about it some time ago:
>>
>> =
>>
>> Hi Galit,
>>
>> aaah yes - wildcard collection doesn't work - it never worked, even before 
>> my changes.
>>
>> TL; DR - we just need to remove wildcard stuff from
>> "LagoInitFile.in" files ("/tmp/otopi*", "/tmp/ovirt*").
>
> I'd like to clarify why these were added.
>
> When host-deploy runs, during "Add host", it writes its log file
> locally. It defaults to /tmp/otopi*.log, and we do not change that, so
> it uses the default.
>
> If/when it finishes successfully, the engine copies this log to
> /var/log/ovirt-engine/host-deploy on the engine machine, and
> host-deploy removes its log from /tmp.
>
> If it fails in the middle, it will (likely) leave its log in /tmp.
>
> We had failures in the past that were hard to diagnose, and therefore
> added /tmp/otopi*, to help debug future similar cases.
>
> We can instead configure otopi/host-deploy on the host to keep its
> logs by default elsewhere. E.g. by doing this on the _host_ before
> adding it:
>
> mkdir -p /var/log/otopi
> mkdir -p /etc/otopi.env.d
> echo 'export OTOPI_LOGDIR=/var/log/otopi' > /etc/otopi.env.d/99-logdir.env
>
> and then collect /var/log/otopi.
>
> If this solves the problem, fine with me to do this and remove /tmp/otopi*.
>
> That said, I think collecting wildcards can be useful, so perhaps we
> should fix this anyway.
>
> Also, /tmp/otopi* should normally not exist, so it should almost
> always fail to collect that. Is this really a problem? Or just
> annoying to have to skip over when reading lago/ost logs?

It is at least confusing and annoying to see this kind of error in OST
logs.  If the collection is expected to fail in some cases, it shouldn't
be reported as an error; it should be silently ignored when the file
doesn't exist.
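
Something like this minimal sketch would do (a hypothetical helper,
assuming lago's existing "ignore_nopath" flag and the
"ExtractPathNoPathError" exception that the tracebacks below show being
raised in lago/plugins/vm.py):

    import logging

    from lago.plugins.vm import ExtractPathNoPathError

    LOGGER = logging.getLogger(__name__)

    def collect_path(vm, remote_path, ignore_nopath=True):
        # Collect one artifact path, demoting "no such path" to DEBUG.
        try:
            vm.extract_paths([remote_path], ignore_nopath=ignore_nopath)
        except ExtractPathNoPathError:
            if not ignore_nopath:
                raise
            # Paths like /tmp/otopi* only exist after a host-deploy
            # failure, so a miss is expected - log it quietly.
            LOGGER.debug('No files matching %s, skipping', remote_path)

That way a missing /tmp/otopi* would show up as a DEBUG line instead of
an ERROR in the OST logs.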

Thanks,
Milan

>> If you're curious what really happens... :)
>>
>> The old algorithm uses "SCPClient" from the "scp" library to copy files.
>> The "scp.get" function accepts, among others, two arguments - "remote_path" and 
>> "local_path".
>> What we do is change the slashes in "remote_path" to underscores and pass the 
>> result as "local_path":
>>
>> https://github.com/lago-project/lago/blob/7024fb7cabbd4ebc87f3ad12e35e0f5832af7d56/lago/plugins/vm.py#L651
>>
>> The effect is that we tell "scp.get" to retrieve "/tmp/otopi*" and save it as 
>> "_tmp_otopi*"... which of course makes no sense and doesn't work...
>>
>>
>> The new implementation *could* work with wildcards because the collection is 
>> divided into two stages:
>>
>> https://github.com/lago-project/lago/blob/9803eeacd41b3f91cd6661a110aa0285aaf4b957/lago/plugins/vm.py#L313
>>
>> First we do the "tar -> copy tar with ssh -> untar to tmpdir" thing and 
>> *only then* we use "shutil.move" to rename the files to the underscored 
>> version.
>> We could use the "glob" module to iterate over patterns like "/tmp/otopi*" 
>> and rename the matching files appropriately.
>> However, we maintain two parallel implementations of artifact collection - 
>> the old one being a "plan B" in case there's no "tar" or "gzip" on the 
>> target machine - and we have to keep both implementations identical in 
>> behavior to avoid confusion. BTW the new implementation could drop the 
>> underscore-renaming step completely - I think the only reason we do the 
>> renaming in the old algorithm is that "scp" won't create intermediate 
>> directories for you... untarring handles that case well - but that's a 
>> backwards-compatibility-breaking change :)
>>
>> =
>>
>> I will post a patch that removes this.
>>
>> Regards, Marcin
>>
>> On 11/18/19 2:45 PM, Amit Bawer wrote:
>>
>> Happens for several runs, full log can be seen at
>> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/6057/artifact/exported-artifacts/test_logs/basic-suite-master/post-002_bootstrap.py/lago_logs/lago.log
>>
>> 2019-11-18 12:28:12,710::log_utils.py::end_log_task::670::root::ERROR::  
>> - [Thread-42] lago-basic-suite-master-engine: ERROR (in 0:00:08)
>> 2019-11-18 12:28:12,731::log_utils.py::__exit__::607::lago.prefix::DEBUG::  
>> File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1526, in 
>> _collect_artifacts
>> vm.collect_artifacts(path, ignore_nopath)
>>   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 748, in 
>> collect_artifacts
>> ignore_nopath=ignore_nopath
>>   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 468, in 
>> extract_paths
>> return self.provider.extract_paths(paths, *args, **kwargs)
>>   File 

[ovirt-devel] Re: OST fails for collecting artifacts

2019-11-19 Thread Yedidyah Bar David
On Mon, Nov 18, 2019 at 4:38 PM Marcin Sobczyk  wrote:
>
> Hi,
>
> I thought it had been removed already, but it seems it hasn't.
> I'm the author of the newer, currently used implementation of artifact 
> collection in lago,
> but from my experiments, I've learned that the extraction of wildcard paths 
> never worked.
> Here's an email I wrote to Galit about it some time ago:
>
> =
>
> Hi Galit,
>
> aaah yes - wildcard collection doesn't work - it never worked, even before my 
> changes.
>
> TL; DR - we just need to remove wildcard stuff from "LagoInitFile.in" files 
> ("/tmp/otopi*", "/tmp/ovirt*").

I'd like to clarify why these were added.

When host-deploy runs, during "Add host", it writes its log file
locally. It defaults to /tmp/otopi*.log, and we do not change that, so
it uses the default.

If/when it finishes successfully, the engine copies this log to
/var/log/ovirt-engine/host-deploy on the engine machine, and
host-deploy removes its log from /tmp.

If it fails in the middle, it will (likely) leave its log in /tmp.

We had failures in the past that were hard to diagnose, and therefore
added /tmp/otopi*, to help debug future similar cases.

We can instead configure otopi/host-deploy on the host to keep its
logs by default elsewhere. E.g. by doing this on the _host_ before
adding it:

mkdir -p /var/log/otopi
mkdir -p /etc/otopi.env.d
echo 'export OTOPI_LOGDIR=/var/log/otopi' > /etc/otopi.env.d/99-logdir.env

and then collect /var/log/otopi.

If this solves the problem, fine with me to do this and remove /tmp/otopi*.

That said, I think collecting wildcards can be useful, so perhaps we
should fix this anyway.

Also, /tmp/otopi* should normally not exist, so it should almost
always fail to collect that. Is this really a problem? Or just
annoying to have to skip over when reading lago/ost logs?

>
>
> If you're curious what really happens... :)
>
> The old algorithm uses "SCPClient" from the "scp" library to copy files.
> The "scp.get" function accepts, among others, two arguments - "remote_path" and 
> "local_path".
> What we do is change the slashes in "remote_path" to underscores and pass the 
> result as "local_path":
>
> https://github.com/lago-project/lago/blob/7024fb7cabbd4ebc87f3ad12e35e0f5832af7d56/lago/plugins/vm.py#L651
>
> The effect is that we tell "scp.get" to retrieve "/tmp/otopi*" and save it as 
> "_tmp_otopi*"... which of course makes no sense and doesn't work...
>
>
> The new implementation *could* work with wildcards because the collection is 
> divided into two stages:
>
> https://github.com/lago-project/lago/blob/9803eeacd41b3f91cd6661a110aa0285aaf4b957/lago/plugins/vm.py#L313
>
> First we do the "tar -> copy tar with ssh -> untar to tmpdir" thing and *only 
> then* we use "shutil.move" to rename the files to the underscored version.
> We could use the "glob" module to iterate over patterns like "/tmp/otopi*" 
> and rename the matching files appropriately.
> However, we maintain two parallel implementations of artifact collection - 
> the old one being a "plan B" in case there's no "tar" or "gzip" on the target 
> machine - and we have to keep both implementations identical in behavior 
> to avoid confusion. BTW the new implementation could drop the 
> underscore-renaming step completely - I think the only reason we do the 
> renaming in the old algorithm is that "scp" won't create intermediate 
> directories for you... untarring handles that case well - but that's a 
> backwards-compatibility-breaking change :)
>
> =
>
> I will post a patch that removes this.
>
> Regards, Marcin
>
> On 11/18/19 2:45 PM, Amit Bawer wrote:
>
> Happens for several runs, full log can be seen at
> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/6057/artifact/exported-artifacts/test_logs/basic-suite-master/post-002_bootstrap.py/lago_logs/lago.log
>
> 2019-11-18 12:28:12,710::log_utils.py::end_log_task::670::root::ERROR::  
> - [Thread-42] lago-basic-suite-master-engine: ERROR (in 0:00:08)
> 2019-11-18 12:28:12,731::log_utils.py::__exit__::607::lago.prefix::DEBUG::  
> File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1526, in 
> _collect_artifacts
> vm.collect_artifacts(path, ignore_nopath)
>   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 748, in 
> collect_artifacts
> ignore_nopath=ignore_nopath
>   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 468, in 
> extract_paths
> return self.provider.extract_paths(paths, *args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/lago/providers/libvirt/vm.py", line 
> 398, in extract_paths
> ignore_nopath=ignore_nopath,
>   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 253, in 
> extract_paths
> self._extract_paths_tar_gz(paths, ignore_nopath)
>   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 102, in 
> wrapper
> return func(self, *args, **kwargs)
>   

[ovirt-devel] Re: OST fails for collecting artifacts

2019-11-18 Thread Marcin Sobczyk

Hi,

I thought it had been removed already, but it seems it hasn't.
I'm the author of the newer, currently used implementation of 
artifact collection in lago,
but from my experiments, I've learned that the extraction of wildcard 
paths never worked.

Here's an email I wrote to Galit about it some time ago:

=

Hi Galit,

aaah yes - wildcard collection doesn't work - it never worked, even 
before my changes.


TL; DR - we just need to remove wildcard stuff from "LagoInitFile.in" 
files ("/tmp/otopi*", "/tmp/ovirt*").



If you're curious what really happens... :)

The old algorithm uses "SCPClient" from the "scp" library to copy files.
The "scp.get" function accepts, among others, two arguments - "remote_path" 
and "local_path".
What we do is change the slashes in "remote_path" to underscores and 
pass the result as "local_path":


https://github.com/lago-project/lago/blob/7024fb7cabbd4ebc87f3ad12e35e0f5832af7d56/lago/plugins/vm.py#L651

The effect is that we tell "scp.get" to retrieve "/tmp/otopi*" and save 
it as "_tmp_otopi*"... which of course makes no sense and doesn't 
work...
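
To illustrate (a toy snippet, not the actual lago code):

    >>> remote_path = "/tmp/otopi*"
    >>> local_path = remote_path.replace("/", "_")
    >>> local_path
    '_tmp_otopi*'

"scp.get" is then asked to expand a glob on the remote side but to save
everything under a single local name that still contains a literal "*",
so multiple matches have nowhere sensible to land.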



The new implementation *could* work with wildcards because the 
collection is divided into two stages:


https://github.com/lago-project/lago/blob/9803eeacd41b3f91cd6661a110aa0285aaf4b957/lago/plugins/vm.py#L313

First we do the "tar -> copy tar with ssh -> untar to tmpdir" thing and 
*only then* we use "shutil.move" to rename the files to the underscored 
version.
We could use the "glob" module to iterate over patterns like 
"/tmp/otopi*" and rename the matching files appropriately.
However, we maintain two parallel implementations of artifact 
collection - the old one being a "plan B" in case there's no "tar" or 
"gzip" on the target machine - and we have to keep both implementations 
identical in behavior to avoid confusion. BTW the new implementation 
could drop the underscore-renaming step completely - I think the only 
reason we do the renaming in the old algorithm is that "scp" won't 
create intermediate directories for you... untarring handles that case 
well - but that's a backwards-compatibility-breaking change :)
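
For completeness, the glob-based renaming could look roughly like this
(a hypothetical helper; it assumes stage one already untarred the remote
paths under "tmpdir"):

    import glob
    import os
    import shutil

    def rename_extracted(tmpdir, remote_glob, dest_dir):
        # Stage one untarred e.g. "/tmp/otopi*" into "<tmpdir>/tmp/otopi*".
        pattern = os.path.join(tmpdir, remote_glob.lstrip('/'))
        for extracted in glob.glob(pattern):
            # Rebuild the original remote path and underscore it, e.g.
            # "/tmp/otopi-1234.log" -> "_tmp_otopi-1234.log".
            rel = os.path.relpath(extracted, tmpdir)
            shutil.move(
                extracted,
                os.path.join(dest_dir, '_' + rel.replace(os.sep, '_')),
            )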


=

I will post a patch that removes this.

Regards, Marcin

On 11/18/19 2:45 PM, Amit Bawer wrote:

Happens for several runs, full log can be seen at
http://jenkins.ovirt.org/job/ovirt-system-tests_manual/6057/artifact/exported-artifacts/test_logs/basic-suite-master/post-002_bootstrap.py/lago_logs/lago.log
2019-11-18 12:28:12,710::log_utils.py::end_log_task::670::root::ERROR:: - [Thread-42] lago-basic-suite-master-engine: ERROR (in 0:00:08)
2019-11-18 12:28:12,731::log_utils.py::__exit__::607::lago.prefix::DEBUG::
  File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1526, in _collect_artifacts
    vm.collect_artifacts(path, ignore_nopath)
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 748, in collect_artifacts
    ignore_nopath=ignore_nopath
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 468, in extract_paths
    return self.provider.extract_paths(paths, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/lago/providers/libvirt/vm.py", line 398, in extract_paths
    ignore_nopath=ignore_nopath,
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 253, in extract_paths
    self._extract_paths_tar_gz(paths, ignore_nopath)
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 102, in wrapper
    return func(self, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 341, in _extract_paths_tar_gz
    raise ExtractPathNoPathError(remote_path)

2019-11-18 12:28:12,731::utils.py::_ret_via_queue::63::lago.utils::DEBUG::Error while running thread Thread-42
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58, in _ret_via_queue
    queue.put({'return': func()})
  File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1526, in _collect_artifacts
    vm.collect_artifacts(path, ignore_nopath)
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 748, in collect_artifacts
    ignore_nopath=ignore_nopath
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 468, in extract_paths
    return self.provider.extract_paths(paths, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/lago/providers/libvirt/vm.py", line 398, in extract_paths
    ignore_nopath=ignore_nopath,
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 253, in extract_paths
    self._extract_paths_tar_gz(paths, ignore_nopath)
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 102, in wrapper
    return func(self, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 341, in _extract_paths_tar_gz
    raise ExtractPathNoPathError(remote_path)
ExtractPathNoPathError: Failed 

[ovirt-devel] Re: OST fails on 004_basic_sanity.verify_glance_import

2019-09-26 Thread Nir Soffer
On Thu, Sep 26, 2019 at 1:39 PM Eyal Edri  wrote:

>
>
> On Thu, Sep 26, 2019 at 1:22 PM Yedidyah Bar David 
> wrote:
>
>> Hi all,
>>
>> [1], which I ran for verifying [2], fails for me at
>> 004_basic_sanity.verify_glance_import . I can see several different
>> cases of failure there reported to devel@, last one 1.5 months ago. Is
>> this a (new) known issue?
>>
>
> It's not a known issue AFAIK, though it depends on the network to the
> glance instance,
> which can sometimes be unstable or slow, hence a test can fail.
> We've had a network outage this morning; it could be related.
> If you see the same test failure repeatedly, we'll need to
> investigate further whether something is wrong with the glance server.
>

Depending on an external service in OST is a big no-no. How can we gate
patches
when a test can fail randomly because of the environment?

We must run glance inside the closed OST environment. Same for yum repos,
we must use mirrors running in the closed environment.

Nir


>
>
>>
>> [1]
>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/5666/
>>
>> [2] https://gerrit.ovirt.org/97698
>>
>> Thanks and best regards,
>> --
>> Didi
>>
>
>
> --
>
> Eyal edri
>
> He / Him / His
>
>
> MANAGER
>
> CONTINUOUS PRODUCTIZATION
>
> SYSTEM ENGINEERING
>
> Red Hat 
> 
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ #cp-devel)
>


[ovirt-devel] Re: OST fails on 004_basic_sanity.verify_glance_import

2019-09-26 Thread Anton Marchukov
Hello.

Glance itself should be fine, as it is in OSAS. But since there was no network 
in the PHX DC for some time, Jenkins was not able to access Glance during that 
window. I think it is better to rerun first before checking further.

Anton.

> On 26 Sep 2019, at 12:38, Eyal Edri  wrote:
> 
> 
> 
> On Thu, Sep 26, 2019 at 1:22 PM Yedidyah Bar David  wrote:
> Hi all,
> 
> [1], which I ran for verifying [2], fails for me at
> 004_basic_sanity.verify_glance_import . I can see several different
> cases of failure there reported to devel@, last one 1.5 months ago. Is
> this a (new) known issue?
> 
> It's not a known issue AFAIK, though it depends on the network to the 
> glance instance,
> which can sometimes be unstable or slow, hence a test can fail.
> We've had a network outage this morning; it could be related.
> If you see the same test failure repeatedly, we'll need to investigate 
> further whether something is wrong with the glance server.
>  
> 
> [1] 
> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/5666/
> 
> [2] https://gerrit.ovirt.org/97698
> 
> Thanks and best regards,
> -- 
> Didi
> 
> 
> -- 
> EYAL EDRI
> He / Him / His
> 
> MANAGER
> CONTINUOUS PRODUCTIZATION
> SYSTEM ENGINEERING
> RED HAT
> 
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ #cp-devel)

-- 
Anton Marchukov
Associate Manager - RHV DevOps - Red Hat







[ovirt-devel] Re: OST fails on 004_basic_sanity.verify_glance_import

2019-09-26 Thread Eyal Edri
On Thu, Sep 26, 2019 at 1:22 PM Yedidyah Bar David  wrote:

> Hi all,
>
> [1], which I ran for verifying [2], fails for me at
> 004_basic_sanity.verify_glance_import . I can see several different
> cases of failure there reported to devel@, last one 1.5 months ago. Is
> this a (new) known issue?
>

It's not a known issue AFAIK, though it depends on the network to the
glance instance,
which can sometimes be unstable or slow, hence a test can fail.
We've had a network outage this morning; it could be related.
If you see the same test failure repeatedly, we'll need to investigate
further whether something is wrong with the glance server.


>
> [1]
> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/5666/
>
> [2] https://gerrit.ovirt.org/97698
>
> Thanks and best regards,
> --
> Didi
>


-- 

Eyal edri

He / Him / His


MANAGER

CONTINUOUS PRODUCTIZATION

SYSTEM ENGINEERING

Red Hat 

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ #cp-devel)


[ovirt-devel] Re: OST fails

2019-09-19 Thread Vojtech Juranek
> I've stumbled upon a similar problem on my local OST run,

should already be fixed, thanks to Galit.

> although I have even more unsatisfied rpm dependencies [1]
> 
> Andrej
> 
> [1] http://pastebin.test.redhat.com/798792
> 
> On Tue, Sep 17, 2019 at 4:46 PM Vojtech Juranek  wrote:
> > Hi,
> > OST has started to fail during today, fails with unsatisfied rpm
> > dependencies,
> > e.g.
> > 
> > + yum install --nogpgcheck --downloaddir=/dev/shm -y ntp net-snmp
> > ovirt-engine
> > ovirt-log-collector 'ovirt-engine-extension-aaa-ldap*' otopi-debug-plugins
> > cronie
> > Error: Package: rsyslog-mmjsonparse-8.24.0-38.el7.x86_64 (alocalsync)
> > 
> > Requires: rsyslog = 8.24.0-38.el7
> > Installed: rsyslog-8.24.0-34.el7.x86_64 (installed)
> > 
> > rsyslog = 8.24.0-34.el7
> > 
> > and many more, see [1] for more details.
> > 
> > Any idea what's wrong?
> > 
> > Thanks
> > Vojta
> > 
> > 
> > [1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5601/console





[ovirt-devel] Re: OST fails

2019-09-19 Thread Andrej Cernek
Hi,
I've stumbled upon a similar problem on my local OST run,
although I have even more unsatisfied rpm dependencies [1]

Andrej

[1] http://pastebin.test.redhat.com/798792

On Tue, Sep 17, 2019 at 4:46 PM Vojtech Juranek  wrote:

> Hi,
> OST started to fail today with unsatisfied rpm
> dependencies,
> e.g.
>
> + yum install --nogpgcheck --downloaddir=/dev/shm -y ntp net-snmp
> ovirt-engine
> ovirt-log-collector 'ovirt-engine-extension-aaa-ldap*' otopi-debug-plugins
> cronie
> Error: Package: rsyslog-mmjsonparse-8.24.0-38.el7.x86_64 (alocalsync)
> Requires: rsyslog = 8.24.0-38.el7
> Installed: rsyslog-8.24.0-34.el7.x86_64 (installed)
> rsyslog = 8.24.0-34.el7
>
> and many more, see [1] for more details.
>
> Any idea what's wrong?
>
> Thanks
> Vojta
>
>
> [1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5601/console


[ovirt-devel] Re: OST fails in verify_backup_snapshot_removed

2019-09-04 Thread Benny Zlotnik
it should be fixed by https://gerrit.ovirt.org/#/c/103091/ , which is already merged

On Wed, Sep 4, 2019 at 8:37 PM Yedidyah Bar David  wrote:

> Hi all,
>
> OST manual job failed for me a few times recently in
> 004_basic_sanity.verify_backup_snapshot_removed , last two (both with
> what I think are unrelated changes) are:
>
> 1.
>
>
> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/5516/
>
> with the copyright notices patches to the engine:
>
> https://gerrit.ovirt.org/97698
>
> 2.
>
>
> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/5528/
>
> with the unicode patches to the engine and to otopi:
>
> https://gerrit.ovirt.org/102936
> https://gerrit.ovirt.org/102938
>
> In the job console, you can see that remove_backup_vm_and_backup_snapshot
> finished successfully, and several minutes later
> verify_backup_snapshot_removed fails with a timeout. In between,
> engine.log has several ERRORs, e.g.:
>
>
> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/5528/artifact/exported-artifacts/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
>
> 2019-09-04 11:05:31,271-04 ERROR
> [org.ovirt.engine.core.bll.storage.disk.image.DestroyImageCommand]
> (EE-ManagedThreadFactory-engine-Thread-354)
> [e55e5d43-c168-42b4-9b80-e0cd2b8405bb] Ending command
> 'org.ovirt.engine.core.bll.storage.disk.image.DestroyImageCommand'
> with failure.
>
> No idea if that's related, didn't dig deeper. Any clue? Is this a known
> issue?
>
> Thanks and best regards,
> --
> Didi


[ovirt-devel] Re: OST fails, cannot connect to repo

2019-07-18 Thread Sachidananda URS
On Thu, Jul 18, 2019 at 3:19 PM Eyal Edri  wrote:

>
>
> On Thu, Jul 18, 2019 at 12:44 PM Sachidananda URS  wrote:
>
>>
>>
>> On Thu, Jul 18, 2019 at 2:21 PM Sahina Bose  wrote:
>>
>>> +Sac, as it's the repo maintained by him, but I doubt it is specific to
>>> this repo
>>>
>>> On Thu, Jul 18, 2019 at 2:11 PM Vojtech Juranek 
>>> wrote:
>>>
 Hi,
 OST fails with

 09:47:03
 https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/
 epel-7-x86_64/repodata/repomd.xml:
 [Errno 14] curl#7 - "Failed connect to
 copr-be.cloud.fedoraproject.org:443; Connection refused"

 see e.g. [1] for full log. Started to fail this morning.
 Can anyone take a look and fix it?


>> This sometimes happens with the Copr repo; I've seen a similar error.
>> It usually works again after some time. I don't know the reason for this
>> instability.
>>
>
> Can the package be hosted on a more stable repo, like the CentOS Storage
> SIG repos?
> e.g. http://mirror.centos.org/centos/7/storage/x86_64/gluster-6/
>

Eyal, yes. I've also been wanting to create Fedora packages but haven't
found the time.
I'll look into hosting the packages at mirror.centos.org.


-sac


[ovirt-devel] Re: OST fails, cannot connect to repo

2019-07-18 Thread Sachidananda URS
On Thu, Jul 18, 2019 at 2:21 PM Sahina Bose  wrote:

> +Sac, as it's the repo maintained by him, but I doubt it is specific to
> this repo
>
> On Thu, Jul 18, 2019 at 2:11 PM Vojtech Juranek 
> wrote:
>
>> Hi,
>> OST fails with
>>
>> 09:47:03
>> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/
>> epel-7-x86_64/repodata/repomd.xml:
>> [Errno 14] curl#7 - "Failed connect to
>> copr-be.cloud.fedoraproject.org:443; Connection refused"
>>
>> see e.g. [1] for full log. Started to fail this morning.
>> Can anyone take a look and fix it?
>>
>>
This sometimes happens with the Copr repo; I've seen a similar error.
It usually works again after some time. I don't know the reason for this instability.

-sac


[ovirt-devel] Re: OST fails, cannot connect to repo

2019-07-18 Thread Eyal Edri
On Thu, Jul 18, 2019 at 12:44 PM Sachidananda URS  wrote:

>
>
> On Thu, Jul 18, 2019 at 2:21 PM Sahina Bose  wrote:
>
>> +Sac, as it's the repo maintained by him, but I doubt it is specific to
>> this repo
>>
>> On Thu, Jul 18, 2019 at 2:11 PM Vojtech Juranek 
>> wrote:
>>
>>> Hi,
>>> OST fails with
>>>
>>> 09:47:03
>>> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/
>>> epel-7-x86_64/repodata/repomd.xml:
>>> [Errno 14] curl#7 - "Failed connect to
>>> copr-be.cloud.fedoraproject.org:443; Connection refused"
>>>
>>> see e.g. [1] for full log. Started to fail this morning.
>>> Can anyone take a look and fix it?
>>>
>>>
> This sometimes happens with the Copr repo; I've seen a similar error.
> It usually works again after some time. I don't know the reason for this
> instability.
>

Can the package be hosted on a more stable repo, like the CentOS Storage
SIG repos?
e.g. http://mirror.centos.org/centos/7/storage/x86_64/gluster-6/


>
> -sac


-- 

Eyal edri

He / Him / His


MANAGER

CONTINUOUS PRODUCTIZATION

SYSTEM ENGINEERING

Red Hat 

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ #cp-devel)


[ovirt-devel] Re: OST fails, cannot connect to repo

2019-07-18 Thread Sahina Bose
+Sac, as it's the repo maintained by him, but I doubt it is specific to
this repo

On Thu, Jul 18, 2019 at 2:11 PM Vojtech Juranek  wrote:

> Hi,
> OST fails with
>
> 09:47:03
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/
> epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to
> copr-be.cloud.fedoraproject.org:443; Connection refused"
>
> see e.g. [1] for full log. Started to fail this morning.
> Can anyone take a look and fix it?
>
> Thanks in advance.
> Vojta
>
> [1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5132/console


[ovirt-devel] Re: OST fails, cannot connect to repo

2019-07-18 Thread Vojtech Juranek
> it seems like you are downloading from an external mirror.
> please use local mirrors (this fix should be done in your project)

can you explain what exactly I should fix? It fails to download
gluster-ansible. I work on vdsm, which has no dependency on gluster-ansible
AFAICT, so I have no idea what I should fix in "my" project.

Thanks
 
> 
> On Thu, Jul 18, 2019 at 10:42 AM Vojtech Juranek 
> 
> wrote:
> > Hi,
> > OST fails with
> > 
> > 09:47:03
> > https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/
> > epel-7-x86_64/repodata/repomd.xml:
> > [Errno 14] curl#7 - "Failed connect to
> > copr-be.cloud.fedoraproject.org:443; Connection refused"
> > 
> > see e.g. [1] for full log. Started to fail this morning.
> > Can anyone take a look and fix it?
> > 
> > Thanks in advance.
> > Vojta
> > 
> > [1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5132/console





[ovirt-devel] Re: OST fails, cannot connect to repo

2019-07-18 Thread Dafna Ron
it seems like you are downloading from an external mirror.
please use local mirrors (this fix should be done in your project)


On Thu, Jul 18, 2019 at 10:42 AM Vojtech Juranek 
wrote:

> Hi,
> OST fails with
>
> 09:47:03
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/
> epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to
> copr-be.cloud.fedoraproject.org:443; Connection refused"
>
> see e.g. [1] for full log. Started to fail this morning.
> Can anyone take a look and fix it?
>
> Thanks in advance.
> Vojta
>
> [1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5132/console


[ovirt-devel] Re: OST fails with LagoDeployError: ... failed with status 1 on lago-basic-suite-master-host-0

2019-02-04 Thread Dafna Ron
thanks Nir,

there is a packaging issue. I will take a look.

Cheers,
Dafna


On Sun, Feb 3, 2019 at 8:19 PM Nir Soffer  wrote:

> On Sun, Feb 3, 2019 at 8:17 PM Nir Soffer  wrote:
>
>> On Sun, Feb 3, 2019 at 7:27 PM Nir Soffer  wrote:
>>
>>> I had 2 OST failures today with the same infrastructure failure, before
>>> running any test:
>>>
>>> + install_cmd='yum install --nogpgcheck --downloaddir=/dev/shm -y'
>>> + yum install --nogpgcheck --downloaddir=/dev/shm -y ntp net-snmp
>>> ovirt-engine ovirt-log-collector 'ovirt-engine-extension-aaa-ldap*'
>>> otopi-debug-plugins cronie
>>> Error: Package: 7:device-mapper-event-1.02.149-10.el7_6.3.x86_64
>>> (alocalsync)
>>>Requires: device-mapper = 7:1.02.149-10.el7_6.3
>>>Installed: 7:device-mapper-1.02.149-10.el7_6.2.x86_64
>>> (installed)
>>>device-mapper = 7:1.02.149-10.el7_6.2
>>> ret=1
>>> + echo 'install failed with status 1'
>>> + exit 1
>>>   - STDERR
>>>
>>> Links to failed builds:
>>> -
>>> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_manual/detail/ovirt-system-tests_manual/4011/pipeline
>>> -
>>> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_manual/detail/ovirt-system-tests_manual/4010/pipeline
>>>
>>> The last successful build was on Feb 1:
>>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/4004/
>>>
>>> The trend shows that all builds have failed since Feb 3, 9:02 am.
>>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/buildTimeTrend
>>>
>>
>> I started another build from master + doc patch (no code change)
>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/4012/
>>
>
> Failed with the same issue.
>
>
>>
>>
>>> This issue is blocking 4.3 work on 4k support.
>>>
>>> Nir
>>>
>>>
>>>


[ovirt-devel] Re: OST fails with LagoDeployError: ... failed with status 1 on lago-basic-suite-master-host-0

2019-02-03 Thread Nir Soffer
On Sun, Feb 3, 2019 at 8:17 PM Nir Soffer  wrote:

> On Sun, Feb 3, 2019 at 7:27 PM Nir Soffer  wrote:
>
>> I had 2 OST failures today with the same infrastructure failure, before
>> running any test:
>>
>> + install_cmd='yum install --nogpgcheck --downloaddir=/dev/shm -y'
>> + yum install --nogpgcheck --downloaddir=/dev/shm -y ntp net-snmp
>> ovirt-engine ovirt-log-collector 'ovirt-engine-extension-aaa-ldap*'
>> otopi-debug-plugins cronie
>> Error: Package: 7:device-mapper-event-1.02.149-10.el7_6.3.x86_64
>> (alocalsync)
>>Requires: device-mapper = 7:1.02.149-10.el7_6.3
>>Installed: 7:device-mapper-1.02.149-10.el7_6.2.x86_64
>> (installed)
>>device-mapper = 7:1.02.149-10.el7_6.2
>> ret=1
>> + echo 'install failed with status 1'
>> + exit 1
>>   - STDERR
>>
>> Links to failed builds:
>> -
>> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_manual/detail/ovirt-system-tests_manual/4011/pipeline
>> -
>> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_manual/detail/ovirt-system-tests_manual/4010/pipeline
>>
>> The last successful build was on Feb 1:
>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/4004/
>>
>> The trend shows that all builds have failed since Feb 3, 9:02 am.
>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/buildTimeTrend
>>
>
> I started another build from master + doc patch (no code change)
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/4012/
>

Failed with the same issue.


>
>
>> This issue is blocking 4.3 work on 4k support.
>>
>> Nir
>>
>>
>>


[ovirt-devel] Re: OST fails with LagoDeployError: ... failed with status 1 on lago-basic-suite-master-host-0

2019-02-03 Thread Nir Soffer
On Sun, Feb 3, 2019 at 7:27 PM Nir Soffer  wrote:

> I had 2 OST failures today with the same infrastructure failure, before
> running any test:
>
> + install_cmd='yum install --nogpgcheck --downloaddir=/dev/shm -y'
> + yum install --nogpgcheck --downloaddir=/dev/shm -y ntp net-snmp
> ovirt-engine ovirt-log-collector 'ovirt-engine-extension-aaa-ldap*'
> otopi-debug-plugins cronie
> Error: Package: 7:device-mapper-event-1.02.149-10.el7_6.3.x86_64
> (alocalsync)
>Requires: device-mapper = 7:1.02.149-10.el7_6.3
>Installed: 7:device-mapper-1.02.149-10.el7_6.2.x86_64
> (installed)
>device-mapper = 7:1.02.149-10.el7_6.2
> ret=1
> + echo 'install failed with status 1'
> + exit 1
>   - STDERR
>
> Links to failed builds:
> -
> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_manual/detail/ovirt-system-tests_manual/4011/pipeline
> -
> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_manual/detail/ovirt-system-tests_manual/4010/pipeline
>
> The last successful build was on Feb 1:
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/4004/
>
> The trend shows that all builds have failed since Feb 3, 9:02 am.
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/buildTimeTrend
>

I started another build from master + doc patch (no code change)
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/4012/


> This issue is blocking 4.3 work on 4k support.
>
> Nir
>
>
>


[ovirt-devel] Re: OST fails: python2-pbr requires git-core

2018-11-25 Thread Yedidyah Bar David
On Sun, Nov 25, 2018 at 2:51 PM Emil Natan  wrote:
>
> Sorry, I should have said there is a patch, but it is not yet merged.

That's OK, I clicked and saw that. It still failed for me;
I see that Galit is still refining it.

>
> On Sun, Nov 25, 2018 at 2:43 PM Yedidyah Bar David  wrote:
>>
>> On Sun, Nov 25, 2018 at 2:22 PM Emil Natan  wrote:
>> >
>> > I see there is a patch https://gerrit.ovirt.org/#/c/95713/
>>
>> Thanks, trying:
>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3637/
>>
>> >
>> > On Sun, Nov 25, 2018 at 1:35 PM Yedidyah Bar David  wrote:
>> >>
>> >> Hi all,
>> >>
>> >> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3636/
>> >>
>> >> fails with:
>> >>
>> >>
>> >> 11:04:30 + yum install --nogpgcheck --downloaddir=/dev/shm -y ntp
>> >> net-snmp ovirt-engine ovirt-log-collector
>> >> 'ovirt-engine-extension-aaa-ldap*' otopi-debug-plugins cronie
>> >> 11:04:30 Error: Package: python2-pbr-4.2.0-2.el7.noarch (alocalsync)
>> >> 11:04:30Requires: git-core
>> >> 11:04:30 + ret=1
>> >>
>> >> Any idea?
>> >>
>> >> Thanks and best regards,
>> >> --
>> >> Didi
>> >
>> >
>> >
>> > --
>> > Emil Natan
>> > RHV/CNV DevOps
>>
>>
>>
>> --
>> Didi
>
>
>
> --
> Emil Natan
> RHV/CNV DevOps



-- 
Didi


[ovirt-devel] Re: OST fails: python2-pbr requires git-core

2018-11-25 Thread Emil Natan
Sorry, I should have said there is a patch, but it is not yet merged.

On Sun, Nov 25, 2018 at 2:43 PM Yedidyah Bar David  wrote:

> On Sun, Nov 25, 2018 at 2:22 PM Emil Natan  wrote:
> >
> > I see there is a patch https://gerrit.ovirt.org/#/c/95713/
>
> Thanks, trying:
>
> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3637/
>
> >
> > On Sun, Nov 25, 2018 at 1:35 PM Yedidyah Bar David 
> wrote:
> >>
> >> Hi all,
> >>
> >>
> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3636/
> >>
> >> fails with:
> >>
> >>
> >> 11:04:30 + yum install --nogpgcheck --downloaddir=/dev/shm -y ntp
> >> net-snmp ovirt-engine ovirt-log-collector
> >> 'ovirt-engine-extension-aaa-ldap*' otopi-debug-plugins cronie
> >> 11:04:30 Error: Package: python2-pbr-4.2.0-2.el7.noarch (alocalsync)
> >> 11:04:30Requires: git-core
> >> 11:04:30 + ret=1
> >>
> >> Any idea?
> >>
> >> Thanks and best regards,
> >> --
> >> Didi
> >
> >
> >
> > --
> > Emil Natan
> > RHV/CNV DevOps
>
>
>
> --
> Didi
>


-- 
Emil Natan
RHV/CNV DevOps


[ovirt-devel] Re: OST fails: python2-pbr requires git-core

2018-11-25 Thread Yedidyah Bar David
On Sun, Nov 25, 2018 at 2:22 PM Emil Natan  wrote:
>
> I see there is a patch https://gerrit.ovirt.org/#/c/95713/

Thanks, trying:
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3637/

>
> On Sun, Nov 25, 2018 at 1:35 PM Yedidyah Bar David  wrote:
>>
>> Hi all,
>>
>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3636/
>>
>> fails with:
>>
>>
>> 11:04:30 + yum install --nogpgcheck --downloaddir=/dev/shm -y ntp
>> net-snmp ovirt-engine ovirt-log-collector
>> 'ovirt-engine-extension-aaa-ldap*' otopi-debug-plugins cronie
>> 11:04:30 Error: Package: python2-pbr-4.2.0-2.el7.noarch (alocalsync)
>> 11:04:30Requires: git-core
>> 11:04:30 + ret=1
>>
>> Any idea?
>>
>> Thanks and best regards,
>> --
>> Didi
>
>
>
> --
> Emil Natan
> RHV/CNV DevOps



-- 
Didi


[ovirt-devel] Re: OST fails: python2-pbr requires git-core

2018-11-25 Thread Emil Natan
I see there is a patch https://gerrit.ovirt.org/#/c/95713/

On Sun, Nov 25, 2018 at 1:35 PM Yedidyah Bar David  wrote:

> Hi all,
>
>
> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3636/
>
> fails with:
>
>
> 11:04:30 + yum install --nogpgcheck --downloaddir=/dev/shm -y ntp
> net-snmp ovirt-engine ovirt-log-collector
> 'ovirt-engine-extension-aaa-ldap*' otopi-debug-plugins cronie
> 11:04:30 Error: Package: python2-pbr-4.2.0-2.el7.noarch (alocalsync)
> 11:04:30Requires: git-core
> 11:04:30 + ret=1
>
> Any idea?
>
> Thanks and best regards,
> --
> Didi
>


-- 
Emil Natan
RHV/CNV DevOps