Hi,

Is there an ongoing OST failure blocking engine master?

[ INFO  ] Stage: Misc configuration
[ INFO  ]  Stage: Package installation
[ INFO  ]  Stage: Misc configuration
[ ERROR ] Failed to execute stage 'Misc configuration': Failed to start
service 'openvswitch'
[ INFO  ] Yum Performing yum transaction rollback


These are unrelated code changes:

http://jenkins.ovirt.org/job/ovirt-system-tests_master_check-patch-el7-x86_64/4644/
https://gerrit.ovirt.org/#/c/89347/

and
http://jenkins.ovirt.org/job/ovirt-system-tests_master_check-patch-el7-x86_64/4647/
https://gerrit.ovirt.org/67166

But they both die in 001, each with a log of exactly 1.24 MB and 'Failed to
start service openvswitch':
001_initialize_engine.py.junit.xml    1.24 MB

Full file:
http://jenkins.ovirt.org/job/ovirt-system-tests_master_check-patch-el7-x86_64/4644/artifact/exported-artifacts/basic-suite-master__logs/001_initialize_engine.py.junit.xml
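For anyone triaging a log that size, a quick stdlib-only sketch for pulling the failure messages out of a junit XML file like the artifact above (this assumes the usual `<testsuite>`/`<testcase>`/`<failure>` layout; the sample document below is a stand-in, not the real artifact):

```python
import xml.etree.ElementTree as ET

def failed_cases(junit_xml):
    """Return (test name, failure message) pairs from a junit XML string."""
    root = ET.fromstring(junit_xml)
    results = []
    # junit files may wrap <testsuite> in <testsuites>; iter() covers both
    for case in root.iter("testcase"):
        failure = case.find("failure")
        if failure is not None:
            results.append((case.get("name"), failure.get("message")))
    return results

# minimal stand-in document for illustration
sample = """<testsuite>
  <testcase name="initialize_engine">
    <failure message="Failed to start service 'openvswitch'"/>
  </testcase>
  <testcase name="other_test"/>
</testsuite>"""

print(failed_cases(sample))
# [('initialize_engine', "Failed to start service 'openvswitch'")]
```

That at least avoids scrolling through the whole file by hand to find the failing stage.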


On Fri, Mar 23, 2018 at 12:14 PM, Dafna Ron <[email protected]> wrote:

> Hello,
>
> I would like to update on this week's failures and OST current status.
>
> On 19-03-2018 - the CI team reported 3 different failures.
>
> On Master branch the failed changes reported were:
>
>
> core: fix removal of vm-host device - https://gerrit.ovirt.org/#/c/89145/
>
> core: USB in osinfo configuration depends on chipset -
> https://gerrit.ovirt.org/#/c/88777/
>
> On the 4.2 branch, the reported change was:
>
>
>
> core: Call endAction() of all child commands in ImportVmCommand -
> https://gerrit.ovirt.org/#/c/89165/
>
> The fixes for the regressions were merged the following day (20-03-2018):
>
> https://gerrit.ovirt.org/#/c/89250/ - core: Replace generic unlockVm()
> logic in ImportVmCommand
> https://gerrit.ovirt.org/#/c/89187/ - core: Fix NPE when creating an
> instance type
>
> On 20-03-2018 - the CI team discovered an issue on the job's cleanup which
> caused random failures on changes testing due to failure in docker cleanup.
> There is an open Jira on the issue:
> https://ovirt-jira.atlassian.net/browse/OVIRT-1939
>
>
>
> Below you can see the chart for this week's resolved issues by cause of
> failure:
> Code = regression of working components/functionalities
> Configurations = package related issues
> Other = failed build artifacts
> Infra = infrastructure/OST/Lago related issues
>
> Below is a chart of resolved failures based on oVirt version:
>
> Below is a chart showing failures by suite type:
> Thank you,
> Dafna
>
>
> _______________________________________________
> Infra mailing list
> [email protected]
> http://lists.ovirt.org/mailman/listinfo/infra
>
>


-- 

GREG SHEREMETA

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX

Red Hat NA


[email protected]    IRC: gshereme
_______________________________________________
Devel mailing list
[email protected]
http://lists.ovirt.org/mailman/listinfo/devel
