Re: How to trigger re-merge in github?

2020-03-11 Thread Dafna Ron
Hi Didi,

you need to trigger it from the webhooks.

I think this is the webhook you need, but you have to select the correct
change from the list and redeliver it:

https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/settings/hooks/43850293
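
If you want to do it outside the web UI, below is a minimal sketch that uses
the GitHub REST API webhook-deliveries endpoints (this part is an assumption
on my side: it needs a token with admin rights on the repo, the endpoints have
to be available for the API version in use, and the sketch simply picks the
most recent delivery, so pick the right one for your change; the hook id is
the one from the settings URL above):

    # Sketch: list recent deliveries for the repo webhook and redeliver one.
    import os
    import requests

    api = "https://api.github.com"
    owner, repo, hook_id = "oVirt", "ovirt-ansible-hosted-engine-setup", 43850293
    headers = {
        "Authorization": "token " + os.environ["GITHUB_TOKEN"],
        "Accept": "application/vnd.github+json",
    }

    # List recent deliveries so the correct change can be picked out.
    deliveries = requests.get(
        f"{api}/repos/{owner}/{repo}/hooks/{hook_id}/deliveries",
        headers=headers,
    ).json()
    delivery_id = deliveries[0]["id"]  # here: just the most recent delivery

    # Ask GitHub to redeliver it, which re-fires the hook towards CI.
    resp = requests.post(
        f"{api}/repos/{owner}/{repo}/hooks/{hook_id}/deliveries/{delivery_id}/attempts",
        headers=headers,
    )
    resp.raise_for_status()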

let me know if you need help,
Dafna



On Wed, Mar 11, 2020 at 7:15 AM Yedidyah Bar David  wrote:

> Hi all,
>
> The docs say [1] that for gerrit we can comment 'ci re-merge please'.
> But for github [2] I don't see anything similar, only 'test|check' and
> 'build'.
>
> Latest merged PR for ovirt-ansible-hosted-engine-setup [3] failed due
> to an unrelated reason (see thread "OST basic suite fails on
> 002_bootstrap.add_secondary_storage_domains"), now fixed, and I want
> CQ to handle [3] again. How?
>
> Thanks!
>
> [1]
> https://ovirt-infra-docs.readthedocs.io/en/latest/CI/Using_STDCI_with_Gerrit/index.html
>
> [2]
> https://ovirt-infra-docs.readthedocs.io/en/latest/CI/Using_STDCI_with_GitHub/index.html
>
> [3] https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/pull/306
> --
> Didi
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/NZW5SBTN7WPXTWDAG4PWP4CZSNPF5TFL/
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/ODXZTUIWKSSZFQPB34O6GTPZYC6FG6QP/


Re: Please remove 4.2 from your stdci.yaml files

2019-10-09 Thread Dafna Ron
The failure reported here was resolved, along with another package that was
affected. But as the configuration is in the projects themselves and we
cannot really go over each one to check, I am afraid we cannot give a list
of affected packages at this time (only once a project fails).

On Wed, Oct 9, 2019 at 3:36 PM Sandro Bonazzola  wrote:

>
>
> On Fri, 20 Sep 2019 at 12:14, Dafna Ron  wrote:
>
>> Hi,
>>
>> We have several unstable jobs, which are caused by 4.2 no longer being
>> available for CQ.
>>
>> Please remove 4.2 from your stdci.yaml files to avoid getting unstable
>> jobs.
>>
>
> have we got a list of packages affected?
>
>
>>
>> Thanks,
>> Dafna
>>
>> ___
>> Infra mailing list -- infra@ovirt.org
>> To unsubscribe send an email to infra-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/4SKBLVF7AFCAKWM6ZBB6BLDY6B3TDSTP/
>>
>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA <https://www.redhat.com/>
>
> sbona...@redhat.com
> <https://www.redhat.com/>*Red Hat respects your work life balance.
> Therefore there is no need to answer this email out of your office hours.
> <https://mojo.redhat.com/docs/DOC-1199578>*
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/PZTQF2ICD6TXXDC4MR3BX24VAG6ROKB3/


Please remove 4.2 from your stdci.yaml files

2019-09-20 Thread Dafna Ron
Hi,

We have several unstable jobs, which are caused by 4.2 no longer being
available for CQ.

Please remove 4.2 from your stdci.yaml files to avoid getting unstable
jobs.
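
For example, if your stdci.yaml still maps a branch to the 4.2 change queue,
the entry to drop is the kind of line commented out below (a hypothetical
fragment; the exact keys, branch names and queue names depend on your
project's existing file):

    # stdci.yaml (hypothetical fragment)
    release_branches:
      master: ovirt-master
      ovirt-engine-4.3: ovirt-4.3
      # ovirt-engine-4.2: ovirt-4.2   <- remove this entry, the 4.2 queue is gone
    stages:
      - check-patch
      - build-artifacts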

Thanks,
Dafna
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/4SKBLVF7AFCAKWM6ZBB6BLDY6B3TDSTP/


Re: URGENT - ovirt-engine master is failing for the past 9 days

2019-09-03 Thread Dafna Ron
We have a passing ovirt-engine build
Thanks everyone!
Dafna


On Mon, Sep 2, 2019 at 1:00 PM Tal Nisan  wrote:

> Done
>
> On Mon, Sep 2, 2019 at 2:46 PM Dafna Ron  wrote:
>
>> Thanks Andrej,
>> Tal, can you merge?
>>
>> Thanks,
>> Dafna
>>
>>
>> On Mon, Sep 2, 2019 at 2:42 PM Andrej Krejcir 
>> wrote:
>>
>>> This patch fixes the issue: https://gerrit.ovirt.org/#/c/103019/
>>>
>>> On Mon, 2 Sep 2019 at 10:16, Andrej Krejcir  wrote:
>>>
>>>> I will post a fix later today. Sorry for the delay, it took me a while
>>>> to figure out what the problem was.
>>>>
>>>> On Sun, 1 Sep 2019 at 14:58, Tal Nisan  wrote:
>>>>
>>>>> OST failing for 9 days is totally unacceptable, the only reason I'm
>>>>> not sending a revert patch is that it'll make a bloody mess with the
>>>>> upgrade scripts as we have two on top of it, this needs to be fixed ASAP
>>>>>
>>>>> On Sun, Sep 1, 2019 at 3:27 PM Dafna Ron  wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> We have been failing CQ master for project ovirt-engine for the past
>>>>>> 9 days.
>>>>>> this was reported last week and we have not yet seen a fix for this
>>>>>> issue.
>>>>>>
>>>>>> the patch reported by CQ is
>>>>>> https://gerrit.ovirt.org/#/c/101913/10 - core: Change CPU config to
>>>>>> secure/insecure concept
>>>>>>
>>>>>> logs for latest failure can be found here:
>>>>>>
>>>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/15638/artifact/upgrade-from-release-suite.el7.x86_64/test_logs/upgrade-from-release-suite-master/post-004_basic_sanity.py/
>>>>>>
>>>>>> Error:
>>>>>> 2019-09-01 06:31:17,248-04 INFO
>>>>>>  [org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-1)
>>>>>> [174bc463-9a98-441f-ad14-4d97fefde4fa] Running command:
>>>>>> UpdateClusterCommand internal: false. Entities affected :  ID:
>>>>>> 0407bc06-4bef-4c47-99e1-0e42f9ced996 Type: ClusterAction group
>>>>>> EDIT_CLUSTER_CONFIGURATION with role type ADMIN
>>>>>> 2019-09-01 06:31:17,263-04 INFO
>>>>>>  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>>>>> (default task-1) [174bc463-9a98-441f-ad14-4d97fefde4fa] EVENT_ID:
>>>>>> CLUSTER_UPDATE_CPU_WHEN_DEPRECATED(9,029), Modified the CPU Type to Intel
>>>>>> Haswell-noTSX Family when upgrading the Compatibility Version of Cluster
>>>>>> test-cluster because the previous CPU Type was deprecated.
>>>>>> 2019-09-01 06:31:17,319-04 WARN
>>>>>>  [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-1) [18fd7817]
>>>>>> Validation of action 'UpdateVm' failed for user admin@internal-authz.
>>>>>> Reasons:
>>>>>> VAR__ACTION__UPDATE,VAR__TYPE__VM,ACTION_TYPE_FAILED_ILLEGAL_OS_TYPE_IS_NOT_SUPPORTED_BY_ARCHITECTURE_TYPE
>>>>>> 2019-09-01 06:31:17,329-04 WARN
>>>>>>  [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-1) [62694a6a]
>>>>>> Validation of action 'UpdateVm' failed for user admin@internal-authz.
>>>>>> Reasons:
>>>>>> VAR__ACTION__UPDATE,VAR__TYPE__VM,ACTION_TYPE_FAILED_ILLEGAL_OS_TYPE_IS_NOT_SUPPORTED_BY_ARCHITECTURE_TYPE
>>>>>> 2019-09-01 06:31:17,334-04 WARN
>>>>>>  [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-1) [10e41cd3]
>>>>>> Validation of action 'UpdateVm' failed for user admin@internal-authz.
>>>>>> Reasons:
>>>>>> VAR__ACTION__UPDATE,VAR__TYPE__VM,ACTION_TYPE_FAILED_ILLEGAL_OS_TYPE_IS_NOT_SUPPORTED_BY_ARCHITECTURE_TYPE
>>>>>> 2019-09-01 06:31:17,345-04 WARN
>>>>>>  [org.ovirt.engine.core.bll.UpdateVmTemplateCommand] (default task-1)
>>>>>> [26a471dd] Validation of action 'UpdateVmTemplate' failed for user
>>>>>> admin@internal-authz. Reasons:
>>>>>> VAR__ACTION__UPDATE,VAR__TYPE__VM_TEMPLATE,VMT_CANNOT_UPDATE_ILLEGAL_FIELD
>>>>>> 2019-09-01 06:31:17,352-04 WARN
>>>>>>  [org.ovirt.engine.core.bll.UpdateVmTemplateCommand] (default task-1)
>>>>>> [32fd89b5] Validation of action 'Upd

Re: URGENT - ovirt-engine master is failing for the past 9 days

2019-09-02 Thread Dafna Ron
Thanks Andrej,
Tal, can you merge?

Thanks,
Dafna


On Mon, Sep 2, 2019 at 2:42 PM Andrej Krejcir  wrote:

> This patch fixes the issue: https://gerrit.ovirt.org/#/c/103019/
>
> On Mon, 2 Sep 2019 at 10:16, Andrej Krejcir  wrote:
>
>> I will post a fix later today. Sorry for the delay, it took me a while to
>> figure out what the problem was.
>>
>> On Sun, 1 Sep 2019 at 14:58, Tal Nisan  wrote:
>>
>>> OST failing for 9 days is totally unacceptable, the only reason I'm not
>>> sending a revert patch is that it'll make a bloody mess with the upgrade
>>> scripts as we have two on top of it, this needs to be fixed ASAP
>>>
>>> On Sun, Sep 1, 2019 at 3:27 PM Dafna Ron  wrote:
>>>
>>>> Hi,
>>>>
>>>> We have been failing CQ master for project ovirt-engine for the past 9
>>>> days.
>>>> this was reported last week and we have not yet seen a fix for this
>>>> issue.
>>>>
>>>> the patch reported by CQ is
>>>> https://gerrit.ovirt.org/#/c/101913/10 - core: Change CPU config to
>>>> secure/insecure concept
>>>>
>>>> logs for latest failure can be found here:
>>>>
>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/15638/artifact/upgrade-from-release-suite.el7.x86_64/test_logs/upgrade-from-release-suite-master/post-004_basic_sanity.py/
>>>>
>>>> Error:
>>>> 2019-09-01 06:31:17,248-04 INFO
>>>>  [org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-1)
>>>> [174bc463-9a98-441f-ad14-4d97fefde4fa] Running command:
>>>> UpdateClusterCommand internal: false. Entities affected :  ID:
>>>> 0407bc06-4bef-4c47-99e1-0e42f9ced996 Type: ClusterAction group
>>>> EDIT_CLUSTER_CONFIGURATION with role type ADMIN
>>>> 2019-09-01 06:31:17,263-04 INFO
>>>>  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>>> (default task-1) [174bc463-9a98-441f-ad14-4d97fefde4fa] EVENT_ID:
>>>> CLUSTER_UPDATE_CPU_WHEN_DEPRECATED(9,029), Modified the CPU Type to Intel
>>>> Haswell-noTSX Family when upgrading the Compatibility Version of Cluster
>>>> test-cluster because the previous CPU Type was deprecated.
>>>> 2019-09-01 06:31:17,319-04 WARN
>>>>  [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-1) [18fd7817]
>>>> Validation of action 'UpdateVm' failed for user admin@internal-authz.
>>>> Reasons:
>>>> VAR__ACTION__UPDATE,VAR__TYPE__VM,ACTION_TYPE_FAILED_ILLEGAL_OS_TYPE_IS_NOT_SUPPORTED_BY_ARCHITECTURE_TYPE
>>>> 2019-09-01 06:31:17,329-04 WARN
>>>>  [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-1) [62694a6a]
>>>> Validation of action 'UpdateVm' failed for user admin@internal-authz.
>>>> Reasons:
>>>> VAR__ACTION__UPDATE,VAR__TYPE__VM,ACTION_TYPE_FAILED_ILLEGAL_OS_TYPE_IS_NOT_SUPPORTED_BY_ARCHITECTURE_TYPE
>>>> 2019-09-01 06:31:17,334-04 WARN
>>>>  [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-1) [10e41cd3]
>>>> Validation of action 'UpdateVm' failed for user admin@internal-authz.
>>>> Reasons:
>>>> VAR__ACTION__UPDATE,VAR__TYPE__VM,ACTION_TYPE_FAILED_ILLEGAL_OS_TYPE_IS_NOT_SUPPORTED_BY_ARCHITECTURE_TYPE
>>>> 2019-09-01 06:31:17,345-04 WARN
>>>>  [org.ovirt.engine.core.bll.UpdateVmTemplateCommand] (default task-1)
>>>> [26a471dd] Validation of action 'UpdateVmTemplate' failed for user
>>>> admin@internal-authz. Reasons:
>>>> VAR__ACTION__UPDATE,VAR__TYPE__VM_TEMPLATE,VMT_CANNOT_UPDATE_ILLEGAL_FIELD
>>>> 2019-09-01 06:31:17,352-04 WARN
>>>>  [org.ovirt.engine.core.bll.UpdateVmTemplateCommand] (default task-1)
>>>> [32fd89b5] Validation of action 'UpdateVmTemplate' failed for user
>>>> admin@internal-authz. Reasons:
>>>> VAR__ACTION__UPDATE,VAR__TYPE__VM_TEMPLATE,VMT_CANNOT_UPDATE_ILLEGAL_FIELD
>>>> 2019-09-01 06:31:17,363-04 ERROR
>>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>>> (default task-1) [32fd89b5] EVENT_ID:
>>>> CLUSTER_CANNOT_UPDATE_VM_COMPATIBILITY_VERSION(12,005), Cannot update
>>>> compatibility version of Vm/Template: [vm-with-iface], Message: [No 
>>>> Message]
>>>> 2019-09-01 06:31:17,371-04 ERROR
>>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>>> (default task-1) [32fd89b5] EVENT_ID:
>>>> CLUSTER_CANNOT_UPDATE_VM_COMPATIBILITY_VERSION(12,005), Cannot update
>&

[JIRA] (OVIRT-2786) Re: fc30 packages in master snapshot

2019-09-02 Thread Dafna Ron (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39769#comment-39769
 ] 

Dafna Ron commented on OVIRT-2786:
--

I am surprised it did not comment on the bug, as it usually does, and it also
sends a mail to the list.
However, I think that as we are going to work with zuul, it may not be
effective to keep working on CQ :)






> Re: fc30 packages in master snapshot
> 
>
> Key: OVIRT-2786
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2786
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Anton Marchukov
>Assignee: infra
>
> Adding infra-support to open a ticket.
> > On 2 Sep 2019, at 09:09, Yedidyah Bar David  wrote:
> > 
> > Hi,
> > 
> > How are these published?
> > 
> > A few days ago we merged patches to add fc30 builds of otopi and
> > ovirt-host-deploy. Jenkins ran build-artifacts on them and all seems
> > good. But we do not see them in [1]. Should we? Are they stuck
> > somewhere, or there is simply nothing that publishes them these days?
> > Asking the latter, because the timestamps on the few files there are
> > from more than 3 months ago.
> > 
> > Thanks and best regards,
> > 
> > [1] https://resources.ovirt.org/pub/ovirt-master-snapshot-static/rpm/fc30/
> > -- 
> > Didi
> -- 
> Anton Marchukov
> Associate Manager - RHV DevOps - Red Hat



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100108)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/YPRP36PGMWVBCAYNCVKFJS25I7NUEUZG/


Re: [JIRA] (OVIRT-2786) Re: fc30 packages in master snapshot

2019-09-02 Thread Dafna Ron
I am surprised it did not comment on the bug, as it usually does, and it also
sends a mail to the list.
However, I think that as we are going to work with zuul, it may not be
effective to keep working on CQ :)



On Mon, Sep 2, 2019 at 10:52 AM Yedidyah Bar David  wrote:

> On Mon, Sep 2, 2019 at 10:47 AM Dafna Ron (oVirt JIRA)
>  wrote:
> >
> >
> > [
> https://ovirt-jira.atlassian.net/browse/OVIRT-2786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39768#comment-39768
> ]
> >
> > Dafna Ron commented on OVIRT-2786:
> > --
> >
> > Packages only get into the 'tested' repo once they pass CQ.
> >
> > I looked at the latest patch that built packages and it failed CQ:
> >
> > patch: https://gerrit.ovirt.org/#/c/102941/
> >
> > job:
> > http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/15617/
>
> Is it possible to automatically add a comment to gerrit on CQ failure?
> It would greatly help notice such failures.
>
> This specific one should be fixed by these two together:
>
> https://gerrit.ovirt.org/103002
> https://gerrit.ovirt.org/103005
>
> >
> >
> >
> > > Re: fc30 packages in master snapshot
> > > 
> > >
> > > Key: OVIRT-2786
> > > URL:
> https://ovirt-jira.atlassian.net/browse/OVIRT-2786
> > > Project: oVirt - virtualization made easy
> > >  Issue Type: By-EMAIL
> > >Reporter: Anton Marchukov
> > >Assignee: infra
> > >
> > > Adding infra-support to open a ticket.
> > > > On 2 Sep 2019, at 09:09, Yedidyah Bar David  wrote:
> > > >
> > > > Hi,
> > > >
> > > > How are these published?
> > > >
> > > > A few days ago we merged patches to add fc30 builds of otopi and
> > > > ovirt-host-deploy. Jenkins ran build-artifacts on them and all seems
> > > > good. But we do not see them in [1]. Should we? Are they stuck
> > > > somewhere, or there is simply nothing that publishes them these days?
> > > > Asking the latter, because the timestamps on the few files there are
> > > > from more than 3 months ago.
> > > >
> > > > Thanks and best regards,
> > > >
> > > > [1]
> https://resources.ovirt.org/pub/ovirt-master-snapshot-static/rpm/fc30/
> > > > --
> > > > Didi
> > > --
> > > Anton Marchukov
> > > Associate Manager - RHV DevOps - Red Hat
> >
> >
> >
> > --
> > This message was sent by Atlassian Jira
> > (v1001.0.0-SNAPSHOT#100108)
> > ___
> > Infra mailing list -- infra@ovirt.org
> > To unsubscribe send an email to infra-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/HTNJHIIXMIOGFI6QDP2DMA6CWS43RB2V/
>
>
>
> --
> Didi
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/AVUXWTIZZEA2UOJWXUPYMGYZABRKJLOT/
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/SLKYRVBPMV3T5MOLHYOHLHDMDPEG2MTX/


[JIRA] (OVIRT-2786) Re: fc30 packages in master snapshot

2019-09-02 Thread Dafna Ron (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39768#comment-39768
 ] 

Dafna Ron commented on OVIRT-2786:
--

Packages only get into the 'tested' repo once they pass CQ.

I looked at the latest patch that built packages and it failed CQ:

patch: https://gerrit.ovirt.org/#/c/102941/

job: http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/15617/



> Re: fc30 packages in master snapshot
> 
>
> Key: OVIRT-2786
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2786
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Anton Marchukov
>Assignee: infra
>
> Adding infra-support to open a ticket.
> > On 2 Sep 2019, at 09:09, Yedidyah Bar David  wrote:
> > 
> > Hi,
> > 
> > How are these published?
> > 
> > A few days ago we merged patches to add fc30 builds of otopi and
> > ovirt-host-deploy. Jenkins ran build-artifacts on them and all seems
> > good. But we do not see them in [1]. Should we? Are they stuck
> > somewhere, or there is simply nothing that publishes them these days?
> > Asking the latter, because the timestamps on the few files there are
> > from more than 3 months ago.
> > 
> > Thanks and best regards,
> > 
> > [1] https://resources.ovirt.org/pub/ovirt-master-snapshot-static/rpm/fc30/
> > -- 
> > Didi
> -- 
> Anton Marchukov
> Associate Manager - RHV DevOps - Red Hat



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100108)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/HTNJHIIXMIOGFI6QDP2DMA6CWS43RB2V/


URGENT - ovirt-engine master is failing for the past 9 days

2019-09-01 Thread Dafna Ron
Hi,

We have been failing CQ master for project ovirt-engine for the past 9
days.
This was reported last week and we have not yet seen a fix for this issue.

The patch reported by CQ is
https://gerrit.ovirt.org/#/c/101913/10 - core: Change CPU config to
secure/insecure concept

Logs for the latest failure can be found here:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/15638/artifact/upgrade-from-release-suite.el7.x86_64/test_logs/upgrade-from-release-suite-master/post-004_basic_sanity.py/

Error:
2019-09-01 06:31:17,248-04 INFO
 [org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-1)
[174bc463-9a98-441f-ad14-4d97fefde4fa] Running command:
UpdateClusterCommand internal: false. Entities affected :  ID:
0407bc06-4bef-4c47-99e1-0e42f9ced996 Type: ClusterAction group
EDIT_CLUSTER_CONFIGURATION with role type ADMIN
2019-09-01 06:31:17,263-04 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-1) [174bc463-9a98-441f-ad14-4d97fefde4fa] EVENT_ID:
CLUSTER_UPDATE_CPU_WHEN_DEPRECATED(9,029), Modified the CPU Type to Intel
Haswell-noTSX Family when upgrading the Compatibility Version of Cluster
test-cluster because the previous CPU Type was deprecated.
2019-09-01 06:31:17,319-04 WARN
 [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-1) [18fd7817]
Validation of action 'UpdateVm' failed for user admin@internal-authz.
Reasons:
VAR__ACTION__UPDATE,VAR__TYPE__VM,ACTION_TYPE_FAILED_ILLEGAL_OS_TYPE_IS_NOT_SUPPORTED_BY_ARCHITECTURE_TYPE
2019-09-01 06:31:17,329-04 WARN
 [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-1) [62694a6a]
Validation of action 'UpdateVm' failed for user admin@internal-authz.
Reasons:
VAR__ACTION__UPDATE,VAR__TYPE__VM,ACTION_TYPE_FAILED_ILLEGAL_OS_TYPE_IS_NOT_SUPPORTED_BY_ARCHITECTURE_TYPE
2019-09-01 06:31:17,334-04 WARN
 [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-1) [10e41cd3]
Validation of action 'UpdateVm' failed for user admin@internal-authz.
Reasons:
VAR__ACTION__UPDATE,VAR__TYPE__VM,ACTION_TYPE_FAILED_ILLEGAL_OS_TYPE_IS_NOT_SUPPORTED_BY_ARCHITECTURE_TYPE
2019-09-01 06:31:17,345-04 WARN
 [org.ovirt.engine.core.bll.UpdateVmTemplateCommand] (default task-1)
[26a471dd] Validation of action 'UpdateVmTemplate' failed for user
admin@internal-authz. Reasons:
VAR__ACTION__UPDATE,VAR__TYPE__VM_TEMPLATE,VMT_CANNOT_UPDATE_ILLEGAL_FIELD
2019-09-01 06:31:17,352-04 WARN
 [org.ovirt.engine.core.bll.UpdateVmTemplateCommand] (default task-1)
[32fd89b5] Validation of action 'UpdateVmTemplate' failed for user
admin@internal-authz. Reasons:
VAR__ACTION__UPDATE,VAR__TYPE__VM_TEMPLATE,VMT_CANNOT_UPDATE_ILLEGAL_FIELD
2019-09-01 06:31:17,363-04 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-1) [32fd89b5] EVENT_ID:
CLUSTER_CANNOT_UPDATE_VM_COMPATIBILITY_VERSION(12,005), Cannot update
compatibility version of Vm/Template: [vm-with-iface], Message: [No Message]
2019-09-01 06:31:17,371-04 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-1) [32fd89b5] EVENT_ID:
CLUSTER_CANNOT_UPDATE_VM_COMPATIBILITY_VERSION(12,005), Cannot update
compatibility version of Vm/Template: [vm-with-iface-template], Message:
[No Message]
2019-09-01 06:31:17,380-04 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-1) [32fd89b5] EVENT_ID:
CLUSTER_CANNOT_UPDATE_VM_COMPATIBILITY_VERSION(12,005), Cannot update
compatibility version of Vm/Template: [vm0], Message: [No Message]
2019-09-01 06:31:17,388-04 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-1) [32fd89b5] EVENT_ID:
CLUSTER_CANNOT_UPDATE_VM_COMPATIBILITY_VERSION(12,005), Cannot update
compatibility version of Vm/Template: [vm1], Message: [No Message]
2019-09-01 06:31:17,405-04 INFO
 [org.ovirt.engine.core.bll.CommandCompensator] (default task-1) [32fd89b5]
Command [id=50898f00-4956-484a-94ff-f273adc2e93e]: Compensating
UPDATED_ONLY_ENTITY of
org.ovirt.engine.core.common.businessentities.Cluster; snapshot: Cluster
[test-cluster].
2019-09-01 06:31:17,422-04 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-1) [32fd89b5] EVENT_ID: USER_UPDATE_CLUSTER_FAILED(812),
Failed to update Host cluster (User: admin@internal-authz)
2019-09-01 06:31:17,422-04 INFO
 [org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-1)
[32fd89b5] Lock freed to object
'EngineLock:{exclusiveLocks='[5d3b2f0f-f05f-41c6-b61a-c517704f79fd=TEMPLATE,
728f5530-4e34-4c92-beef-ca494ec104b9=TEMPLATE]',
sharedLocks='[a107316a-a961-403f-adb2-d01f22f0b8f1=VM,
dc3a2ad8-6019-4e00-85e5-d9fba7390d4f=VM,
6348722f-61ae-40be-a2b8-bd9d086b06dc=VM]'}'
2019-09-01 06:31:17,424-04 ERROR
[org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default
task-1) [] Operation Failed: [Update of cluster compatibility version
failed because there are VMs/Templates [vm-with-iface,
vm-with-iface-template, vm0, vm1] with incorrect configuration. 

[JIRA] (OVIRT-2785) Unable to add Documentation +1 flag to patch

2019-08-29 Thread Dafna Ron (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dafna Ron updated OVIRT-2785:
-
Resolution: Fixed
Status: Done  (was: To Do)

> Unable to add Documentation +1 flag to patch
> 
>
> Key: OVIRT-2785
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2785
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>Reporter: Steve Goodman
>Assignee: infra
>
> I was able to add the Documentation +1 to patches in the ovirt-engine-api 
> project, but now I can't. For example, https://gerrit.ovirt.org/#/c/102438/, 
> when I click Reply, I don't see the Documentation flag. I do see the Code 
> Review and Verified flags.
> I checked to see if I'm on 
> https://gerrit.ovirt.org/#/admin/groups/131,members and I am.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100108)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/LGLDOVPVCXMGEZCCSS7SOFQQTEQ2JKG5/


[JIRA] (OVIRT-2785) Unable to add Documentation +1 flag to patch

2019-08-29 Thread Dafna Ron (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39726#comment-39726
 ] 

Dafna Ron commented on OVIRT-2785:
--

any time  



> Unable to add Documentation +1 flag to patch
> 
>
> Key: OVIRT-2785
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2785
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>Reporter: Steve Goodman
>Assignee: infra
>
> I was able to add the Documentation +1 to patches in the ovirt-engine-api 
> project, but now I can't. For example, https://gerrit.ovirt.org/#/c/102438/, 
> when I click Reply, I don't see the Documentation flag. I do see the Code 
> Review and Verified flags.
> I checked to see if I'm on 
> https://gerrit.ovirt.org/#/admin/groups/131,members and I am.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100108)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/NHEVX6FO4AHIGW52XIFCDB622LP56EKV/


[JIRA] (OVIRT-2785) Unable to add Documentation +1 flag to patch

2019-08-29 Thread Dafna Ron (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39720#comment-39720
 ] 

Dafna Ron commented on OVIRT-2785:
--

Since your old user name is inactive, if you were added to any groups with
that user you will need to be re-added with the new user. This is not
something you can fix yourself, but as I could only find you in that one
project, it should not happen again.

If there are any other groups you would like me to check in case I missed
one, please let me know and I will check now.

> Unable to add Documentation +1 flag to patch
> 
>
> Key: OVIRT-2785
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2785
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>Reporter: Steve Goodman
>Assignee: infra
>
> I was able to add the Documentation +1 to patches in the ovirt-engine-api 
> project, but now I can't. For example, https://gerrit.ovirt.org/#/c/102438/, 
> when I click Reply, I don't see the Documentation flag. I do see the Code 
> Review and Verified flags.
> I checked to see if I'm on 
> https://gerrit.ovirt.org/#/admin/groups/131,members and I am.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100108)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/DYF5X52FELBSXDCPACSAU4V5OSGK527Z/


[JIRA] (OVIRT-2785) Unable to add Documentation +1 flag to patch

2019-08-29 Thread Dafna Ron (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39718#comment-39718
 ] 

Dafna Ron commented on OVIRT-2785:
--

He does have two accounts, but one is marked as inactive.

I removed and re-added the user to see if the issue was that the inactive
account was the one added, rather than the active one.

[~accountid:5cbfd1757742d70ffbc788e2] can you try now? 



> Unable to add Documentation +1 flag to patch
> 
>
> Key: OVIRT-2785
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2785
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>Reporter: Steve Goodman
>Assignee: infra
>
> I was able to add the Documentation +1 to patches in the ovirt-engine-api 
> project, but now I can't. For example, https://gerrit.ovirt.org/#/c/102438/, 
> when I click Reply, I don't see the Documentation flag. I do see the Code 
> Review and Verified flags.
> I checked to see if I'm on 
> https://gerrit.ovirt.org/#/admin/groups/131,members and I am.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100108)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/LSCMHTIURBIW3QT64MXRKZEWYR76BY7X/


[JIRA] (OVIRT-2785) Unable to add Documentation +1 flag to patch

2019-08-29 Thread Dafna Ron (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39716#comment-39716
 ] 

Dafna Ron commented on OVIRT-2785:
--

[~accountid:557058:5fc78873-359e-47c9-aa0b-4845b0da8143] was this removed? 

> Unable to add Documentation +1 flag to patch
> 
>
> Key: OVIRT-2785
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2785
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>Reporter: Steve Goodman
>Assignee: infra
>
> I was able to add the Documentation +1 to patches in the ovirt-engine-api 
> project, but now I can't. For example, https://gerrit.ovirt.org/#/c/102438/, 
> when I click Reply, I don't see the Documentation flag. I do see the Code 
> Review and Verified flags.
> I checked to see if I'm on 
> https://gerrit.ovirt.org/#/admin/groups/131,members and I am.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100108)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/WOTTEWRJG3Q4QZVCSSPJASMJOIW3O5XP/


Re: [ovirt-devel] [Ovirt] [CQ weekly status] [02-08-2019]

2019-08-05 Thread Dafna Ron
I think I saw some projects still merging to 4.2.

On Mon, Aug 5, 2019 at 5:44 PM Eyal Edri  wrote:

>
>
> On Mon, Aug 5, 2019 at 5:23 PM Dusan Fodor  wrote:
>
>>
>>
>> On Mon, Aug 5, 2019 at 11:22 AM Sandro Bonazzola 
>> wrote:
>>
>>>
>>>
>>> On Fri, 2 Aug 2019 at 22:50, Dusan Fodor 
>>> wrote:
>>>
 Hi,

 This mail is to provide the current status of CQ and allow people to
 review status before and after the weekend.
 Please refer to below colour map for further information on the meaning
 of the colours.

 *CQ-4.2*:  RED (#1)

 Last failure was on 01-08 for ovirt-ansible-hosted-engine-setup, caused
 by a missing dependency; a patch is pending to fix this.

>>>
>>> I think we can close the 4.2 CQ, 4.2 went EOL a few months ago
>>>
>>
> Can we drop it in upstream?
> In downstream it's running 4.2 EUS with a 4.3 engine.
>
>
>>
>>>
>>>

 *CQ-4.3*:   RED (#1)

 Last failure was on 02-08 for vdsm, caused by a missing dependency; a patch
 is pending to fix this.

>>>
>>> Have we got a bug for this? If not please open one
>>>
>> It was already resolved by https://gerrit.ovirt.org/#/c/102317/
>>
>>>
>>>

 *CQ-Master:*  RED (#1)

 Last failure was on 02-08 for ovirt-engine due to a failure in
 build-artifacts, which was caused by a gerrit issue that was reported to
 Evgheni.

  Currently running jobs for 4.2 [1], 4.3 [2] and master [3] can be
 found here:

 [1]
 http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.2_change-queue-tester/

 [2]
 https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.3_change-queue-tester/

 [3]
 http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/

 Have a nice week!
 Dusan



 ---
 COLOUR MAP

 Green = job has been passing successfully

 ** green for more than 3 days may suggest we need a review of our test
 coverage


1. 1-3 days   GREEN (#1)
2. 4-7 days   GREEN (#2)
3. Over 7 days GREEN (#3)


 Yellow = intermittent failures for different projects but no lasting or
 current regressions

 ** intermittent failures would indicate a healthy project, as we expect a
 number of failures during the week

 ** I will not report any of the solved failures or regressions.


1. Solved job failures  YELLOW (#1)
2. Solved regressions   YELLOW (#2)


 Red = job has been failing

 ** Active Failures. The colour will change based on the amount of time
 the project/s has been broken. Only active regressions would be reported.


1. 1-3 days   RED (#1)
2. 4-7 days   RED (#2)
3. Over 7 days RED (#3)

 ___
 Devel mailing list -- de...@ovirt.org
 To unsubscribe send an email to devel-le...@ovirt.org
 Privacy Statement: https://www.ovirt.org/site/privacy-policy/
 oVirt Code of Conduct:
 https://www.ovirt.org/community/about/community-guidelines/
 List Archives:
 https://lists.ovirt.org/archives/list/de...@ovirt.org/message/YCNCKRK3G4EJXA3OCYAUS4VMKRDA67F4/

>>>
>>>
>>> --
>>>
>>> Sandro Bonazzola
>>>
>>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>>
>>> Red Hat EMEA 
>>>
>>> sbona...@redhat.com
>>> *Red Hat respects your work life balance.
>>> Therefore there is no need to answer this email out of your office hours.
>>> *
>>>
>>
>
> --
>
> Eyal edri
>
> He / Him / His
>
>
> MANAGER
>
> CONTINUOUS PRODUCTIZATION
>
> SYSTEM ENGINEERING
>
> Red Hat 
> 
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ #cp-devel)
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/IUUPA6RTP5BMBQC4JYWEYTYKQC6BCN2H/
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/DPELS26ERWH464APDF3SRFGSMSXOK3QQ/


[JIRA] (OVIRT-2759) Fwd: OST failing repo sac-gluster-ansible-el7 - Connection refused

2019-07-18 Thread Dafna Ron (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39561#comment-39561
 ] 

Dafna Ron commented on OVIRT-2759:
--

Yes. Actually, the file is under the OST automation dir, and I just checked:
they all use the same name, so once the mirror job is updated this should
be resolved.



> Fwd: OST failing repo sac-gluster-ansible-el7 - Connection refused
> --
>
> Key: OVIRT-2759
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2759
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Evgeny Slutsky
>Assignee: infra
>
> Hi,
> I'm trying to run Local lago OST,
> and failing with this error:
> @ Create prefix internal repo:
>   # Syncing remote repos locally (this might take some time):
> * Running reposync:
>   - reposync command failed for repoid: ovirt-4.3-tested-el7
> stdout:
> ovirt-4.3-tested-el7 | 3.0 kB 00:00
> centos-base-el7  | 3.6 kB 00:00
> centos-extras-el7| 3.4 kB 00:00
> centos-opstools-testing-el7  | 2.9 kB 00:00
> centos-ovirt-4.3-el7 | 3.4 kB 00:00
> centos-qemu-ev-testing-el7   | 2.9 kB 00:00
> centos-sclo-rh-release-el7   | 3.0 kB 00:00
> centos-updates-el7   | 3.4 kB 00:00
> epel-el7 | 5.3 kB 00:00
> glusterfs-6-el7  | 2.9 kB 00:00
> ovirt-4.3-snapshot-static-el7| 3.0 kB 00:00
> stderr:
> Traceback (most recent call last):
>   File "/bin/reposync", line 373, in 
> main()
>   File "/bin/reposync", line 185, in main
> my.doRepoSetup()
>   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 681, in
> doRepoSetup
> return self._getRepos(thisrepo, True)
>   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 721, in
> _getRepos
> self._repos.doSetup(thisrepo)
>   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 157, in doSetup
> self.retrieveAllMD()
>   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 88, in
> retrieveAllMD
> dl = repo._async and repo._commonLoadRepoXML(repo)
>   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1482, in
> _commonLoadRepoXML
> result = self._getFileRepoXML(local, text)
>   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1259, in
> _getFileRepoXML
> size=102400) # setting max size as 100K
>   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1042, in
> _getFile
> raise e
> yum.Errors.NoMoreMirrorsRepoError: failure: repodata/repomd.xml from
> sac-gluster-ansible-el7: [Errno 256] No more mirrors to try.
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fed

[JIRA] (OVIRT-2759) Fwd: OST failing repo sac-gluster-ansible-el7 - Connection refused

2019-07-18 Thread Dafna Ron (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39559#comment-39559
 ] 

Dafna Ron commented on OVIRT-2759:
--

Patch from the CI side: https://gerrit.ovirt.org/#/c/101889/

Once we merge it, please add sac-gluster-ansible-el7 to your automation .repo file.
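
For reference, a sketch of what the added entry could look like, in yum repo
format (the section name, description and gpgcheck setting are assumptions;
the baseurl is the upstream copr location from the error below):

    [sac-gluster-ansible-el7]
    name=Copr repo for gluster-ansible (sac) - el7
    baseurl=https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/
    enabled=1
    gpgcheck=0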



> Fwd: OST failing repo sac-gluster-ansible-el7 - Connection refused
> --
>
> Key: OVIRT-2759
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2759
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Evgeny Slutsky
>Assignee: infra
>
> Hi,
> I'm trying to run Local lago OST,
> and failing with this error:
> @ Create prefix internal repo:
>   # Syncing remote repos locally (this might take some time):
> * Running reposync:
>   - reposync command failed for repoid: ovirt-4.3-tested-el7
> stdout:
> ovirt-4.3-tested-el7 | 3.0 kB 00:00
> centos-base-el7  | 3.6 kB 00:00
> centos-extras-el7| 3.4 kB 00:00
> centos-opstools-testing-el7  | 2.9 kB 00:00
> centos-ovirt-4.3-el7 | 3.4 kB 00:00
> centos-qemu-ev-testing-el7   | 2.9 kB 00:00
> centos-sclo-rh-release-el7   | 3.0 kB 00:00
> centos-updates-el7   | 3.4 kB 00:00
> epel-el7 | 5.3 kB 00:00
> glusterfs-6-el7  | 2.9 kB 00:00
> ovirt-4.3-snapshot-static-el7| 3.0 kB 00:00
> stderr:
> Traceback (most recent call last):
>   File "/bin/reposync", line 373, in 
> main()
>   File "/bin/reposync", line 185, in main
> my.doRepoSetup()
>   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 681, in
> doRepoSetup
> return self._getRepos(thisrepo, True)
>   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 721, in
> _getRepos
> self._repos.doSetup(thisrepo)
>   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 157, in doSetup
> self.retrieveAllMD()
>   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 88, in
> retrieveAllMD
> dl = repo._async and repo._commonLoadRepoXML(repo)
>   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1482, in
> _commonLoadRepoXML
> result = self._getFileRepoXML(local, text)
>   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1259, in
> _getFileRepoXML
> size=102400) # setting max size as 100K
>   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1042, in
> _getFile
> raise e
> yum.Errors.NoMoreMirrorsRepoError: failure: repodata/repomd.xml from
> sac-gluster-ansible-el7: [Errno 256] No more mirrors to try.
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fed

[JIRA] (OVIRT-2759) Fwd: OST failing repo sac-gluster-ansible-el7 - Connection refused

2019-07-18 Thread Dafna Ron (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39553#comment-39553
 ] 

Dafna Ron commented on OVIRT-2759:
--

[~dfodor] 

> Fwd: OST failing repo sac-gluster-ansible-el7 - Connection refused
> --
>
> Key: OVIRT-2759
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2759
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Evgeny Slutsky
>Assignee: infra
>
> Hi,
> I'm trying to run Local lago OST,
> and failing with this error:
> @ Create prefix internal repo:
>   # Syncing remote repos locally (this might take some time):
> * Running reposync:
>   - reposync command failed for repoid: ovirt-4.3-tested-el7
> stdout:
> ovirt-4.3-tested-el7 | 3.0 kB 00:00
> centos-base-el7  | 3.6 kB 00:00
> centos-extras-el7| 3.4 kB 00:00
> centos-opstools-testing-el7  | 2.9 kB 00:00
> centos-ovirt-4.3-el7 | 3.4 kB 00:00
> centos-qemu-ev-testing-el7   | 2.9 kB 00:00
> centos-sclo-rh-release-el7   | 3.0 kB 00:00
> centos-updates-el7   | 3.4 kB 00:00
> epel-el7 | 5.3 kB 00:00
> glusterfs-6-el7  | 2.9 kB 00:00
> ovirt-4.3-snapshot-static-el7| 3.0 kB 00:00
> stderr:
> Traceback (most recent call last):
>   File "/bin/reposync", line 373, in 
> main()
>   File "/bin/reposync", line 185, in main
> my.doRepoSetup()
>   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 681, in
> doRepoSetup
> return self._getRepos(thisrepo, True)
>   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 721, in
> _getRepos
> self._repos.doSetup(thisrepo)
>   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 157, in doSetup
> self.retrieveAllMD()
>   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 88, in
> retrieveAllMD
> dl = repo._async and repo._commonLoadRepoXML(repo)
>   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1482, in
> _commonLoadRepoXML
> result = self._getFileRepoXML(local, text)
>   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1259, in
> _getFileRepoXML
> size=102400) # setting max size as 100K
>   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1042, in
> _getFile
> raise e
> yum.Errors.NoMoreMirrorsRepoError: failure: repodata/repomd.xml from
> sac-gluster-ansible-el7: [Errno 256] No more mirrors to try.
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection r

[JIRA] (OVIRT-2759) Fwd: OST failing repo sac-gluster-ansible-el7 - Connection refused

2019-07-18 Thread Dafna Ron (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39552#comment-39552
 ] 

Dafna Ron commented on OVIRT-2759:
--

The repo is not down:

https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/
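
A quick way to verify the repodata is reachable from the failing host (a
sketch using python-requests; any HTTP client works):

    # Reachability check for the repodata that reposync failed to fetch.
    import requests

    url = ("https://copr-be.cloud.fedoraproject.org/results/sac/"
           "gluster-ansible/epel-7-x86_64/repodata/repomd.xml")
    print(requests.head(url, timeout=10).status_code)  # 200 means it is reachable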



> Fwd: OST failing repo sac-gluster-ansible-el7 - Connection refused
> --
>
> Key: OVIRT-2759
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2759
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Evgeny Slutsky
>Assignee: infra
>
> Hi,
> I'm trying to run Local lago OST,
> and failing with this error:
> @ Create prefix internal repo:
>   # Syncing remote repos locally (this might take some time):
> * Running reposync:
>   - reposync command failed for repoid: ovirt-4.3-tested-el7
> stdout:
> ovirt-4.3-tested-el7 | 3.0 kB 00:00
> centos-base-el7  | 3.6 kB 00:00
> centos-extras-el7| 3.4 kB 00:00
> centos-opstools-testing-el7  | 2.9 kB 00:00
> centos-ovirt-4.3-el7 | 3.4 kB 00:00
> centos-qemu-ev-testing-el7   | 2.9 kB 00:00
> centos-sclo-rh-release-el7   | 3.0 kB 00:00
> centos-updates-el7   | 3.4 kB 00:00
> epel-el7 | 5.3 kB 00:00
> glusterfs-6-el7  | 2.9 kB 00:00
> ovirt-4.3-snapshot-static-el7| 3.0 kB 00:00
> stderr:
> Traceback (most recent call last):
>   File "/bin/reposync", line 373, in 
> main()
>   File "/bin/reposync", line 185, in main
> my.doRepoSetup()
>   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 681, in
> doRepoSetup
> return self._getRepos(thisrepo, True)
>   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 721, in
> _getRepos
> self._repos.doSetup(thisrepo)
>   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 157, in doSetup
> self.retrieveAllMD()
>   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 88, in
> retrieveAllMD
> dl = repo._async and repo._commonLoadRepoXML(repo)
>   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1482, in
> _commonLoadRepoXML
> result = self._getFileRepoXML(local, text)
>   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1259, in
> _getFileRepoXML
> size=102400) # setting max size as 100K
>   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1042, in
> _getFile
> raise e
> yum.Errors.NoMoreMirrorsRepoError: failure: repodata/repomd.xml from
> sac-gluster-ansible-el7: [Errno 256] No more mirrors to try.
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fed

[JIRA] (OVIRT-2759) Fwd: OST failing repo sac-gluster-ansible-el7 - Connection refused

2019-07-18 Thread Dafna Ron (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39550#comment-39550
 ] 

Dafna Ron commented on OVIRT-2759:
--

The local repo needs to be created by CI, as until now it was not added.

Please provide the repo you would like to add and we will create it for you.

This will sync all packages locally so you can run your automation without
external failures.



> Fwd: OST failing repo sac-gluster-ansible-el7 - Connection refused
> --
>
> Key: OVIRT-2759
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2759
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Evgeny Slutsky
>Assignee: infra
>
> Hi,
> I'm trying to run Local lago OST,
> and failing with this error:
> @ Create prefix internal repo:
>   # Syncing remote repos locally (this might take some time):
> * Running reposync:
>   - reposync command failed for repoid: ovirt-4.3-tested-el7
> stdout:
> ovirt-4.3-tested-el7 | 3.0 kB 00:00
> centos-base-el7  | 3.6 kB 00:00
> centos-extras-el7| 3.4 kB 00:00
> centos-opstools-testing-el7  | 2.9 kB 00:00
> centos-ovirt-4.3-el7 | 3.4 kB 00:00
> centos-qemu-ev-testing-el7   | 2.9 kB 00:00
> centos-sclo-rh-release-el7   | 3.0 kB 00:00
> centos-updates-el7   | 3.4 kB 00:00
> epel-el7 | 5.3 kB 00:00
> glusterfs-6-el7  | 2.9 kB 00:00
> ovirt-4.3-snapshot-static-el7| 3.0 kB 00:00
> stderr:
> Traceback (most recent call last):
>   File "/bin/reposync", line 373, in 
> main()
>   File "/bin/reposync", line 185, in main
> my.doRepoSetup()
>   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 681, in
> doRepoSetup
> return self._getRepos(thisrepo, True)
>   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 721, in
> _getRepos
> self._repos.doSetup(thisrepo)
>   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 157, in doSetup
> self.retrieveAllMD()
>   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 88, in
> retrieveAllMD
> dl = repo._async and repo._commonLoadRepoXML(repo)
>   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1482, in
> _commonLoadRepoXML
> result = self._getFileRepoXML(local, text)
>   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1259, in
> _getFileRepoXML
> size=102400) # setting max size as 100K
>   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1042, in
> _getFile
> raise e
> yum.Errors.NoMoreMirrorsRepoError: failure: repodata/repomd.xml from
> sac-gluster-ansible-el7: [Errno 256] No more mirrors to try.
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.

[JIRA] (OVIRT-2759) Fwd: OST failing repo sac-gluster-ansible-el7 - Connection refused

2019-07-18 Thread Dafna Ron (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39548#comment-39548
 ] 

Dafna Ron commented on OVIRT-2759:
--

The issue is in an external repo that is not configured in the local
mirrors file.

A patch needs to be added to include the mirror in mirrors-reposync.conf, and the
project then needs to be fixed to use the local mirror name in its .repos file.
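As a sketch of what that two-part fix could look like; the stanza format and
file locations below are illustrative, not the exact CI layout, and the URL is
the one from the error above:

  # 1) hypothetical stanza appended to the CI mirrors-reposync.conf
  printf '%s\n' \
    '[sac-gluster-ansible-el7]' \
    'name=gluster-ansible copr repo (el7)' \
    'baseurl=https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/' \
    'enabled=1' \
    'gpgcheck=0' >> mirrors-reposync.conf
  # 2) the project then lists the same repo id in its automation .repos file
  #    so the job resolves it through the local mirror instead of copr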

> Fwd: OST failing repo sac-gluster-ansible-el7 - Connection refused
> --
>
> Key: OVIRT-2759
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2759
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Evgeny Slutsky
>Assignee: infra
>
> Hi,
> I'm trying to run Local lago OST,
> and failing with this error:
> @ Create prefix internal repo:
>   # Syncing remote repos locally (this might take some time):
> * Running reposync:
>   - reposync command failed for repoid: ovirt-4.3-tested-el7
> stdout:
> ovirt-4.3-tested-el7 | 3.0 kB 00:00
> centos-base-el7  | 3.6 kB 00:00
> centos-extras-el7| 3.4 kB 00:00
> centos-opstools-testing-el7  | 2.9 kB 00:00
> centos-ovirt-4.3-el7 | 3.4 kB 00:00
> centos-qemu-ev-testing-el7   | 2.9 kB 00:00
> centos-sclo-rh-release-el7   | 3.0 kB 00:00
> centos-updates-el7   | 3.4 kB 00:00
> epel-el7 | 5.3 kB 00:00
> glusterfs-6-el7  | 2.9 kB 00:00
> ovirt-4.3-snapshot-static-el7| 3.0 kB 00:00
> stderr:
> Traceback (most recent call last):
>   File "/bin/reposync", line 373, in 
> main()
>   File "/bin/reposync", line 185, in main
> my.doRepoSetup()
>   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 681, in
> doRepoSetup
> return self._getRepos(thisrepo, True)
>   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 721, in
> _getRepos
> self._repos.doSetup(thisrepo)
>   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 157, in doSetup
> self.retrieveAllMD()
>   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 88, in
> retrieveAllMD
> dl = repo._async and repo._commonLoadRepoXML(repo)
>   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1482, in
> _commonLoadRepoXML
> result = self._getFileRepoXML(local, text)
>   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1259, in
> _getFileRepoXML
> size=102400) # setting max size as 100K
>   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1042, in
> _getFile
> raise e
> yum.Errors.NoMoreMirrorsRepoError: failure: repodata/repomd.xml from
> sac-gluster-ansible-el7: [Errno 256] No more mirrors to try.
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.org:443;
> Connection refused"
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/repodata/repomd.xml:
> [Errno 14] curl#7 - "Failed connect to copr-be.cloud.fedoraproject.

Re: OST fails, cannot connect to repo

2019-07-18 Thread Dafna Ron
in your project you have an automation folder with a .repos file
each line of the .repos file should be of the form
<repo name>,<repo url>
example:
kvm-common-el7,http://mirror.centos.org/centos/7/virt/x86_64/kvm-common/

if the specific repo is not created in the CI and you only go to external
repos, then whenever that repo is down your jobs will fail, and that is not
related to CI, as we do not maintain external mirrors.

I looked at the specific repo and it's not in the mirrors file, so we can add it,
and then you need to fix your automation to include the local mirror name.
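For the copr repo from this failure, the entry could look roughly like the line
below; the repo id and the target file name are assumptions, and it only helps
if the CI serves a local mirror under that same name, as with the kvm-common
example above:

  # hypothetical line appended to the project's automation .repos file
  echo 'sac-gluster-ansible-el7,https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/' \
    >> automation/check-patch.repos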


On Thu, Jul 18, 2019 at 10:50 AM Vojtech Juranek 
wrote:

> > it seems like you are downloading from external mirror.
> > please use local mirrors (this fix should be done in you project)
>
> can you explain what actually I should fix? It fails to download gluster-
> ansible. I work on vdsm and it has no dependency on gluster-ansible
> AFAICT, so
> I have no idea what I should fix in "my" project.
>
> Thanks
>
> >
> > On Thu, Jul 18, 2019 at 10:42 AM Vojtech Juranek 
> >
> > wrote:
> > > Hi,
> > > OST fails with
> > >
> > > 09:47:03
> > > https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/
> > > epel-7-x86_64/repodata/repomd.xml
> > > <
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel->
> > 7-x86_64/repodata/repomd.xml>: [Errno 14] curl#7 - "Failed connect to
> > > copr-be.cloud.fedoraproject.org:443; Connection refused"
> > >
> > > see e.g. [1] for full log. Stared to fail this morning.
> > > Can anyone take a look and fix it?
> > >
> > > Thanks in advance.
> > > Vojta
> > >
> > > [1]
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5132/console
> > > ___
> > > Infra mailing list -- infra@ovirt.org
> > > To unsubscribe send an email to infra-le...@ovirt.org
> > > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > > oVirt Code of Conduct:
> > > https://www.ovirt.org/community/about/community-guidelines/
> > > List Archives:
> > >
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/DXDNFFNWE3DC
> > > 2IJTOH7CMXN7FD4PO4HU/
>
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/BZP6JTIQGGQ7GZ66RWW3LMRYTUQ65ARM/


Re: OST fails, cannot connect to repo

2019-07-18 Thread Dafna Ron
it seems like you are downloading from an external mirror.
please use local mirrors (this fix should be done in your project)


On Thu, Jul 18, 2019 at 10:42 AM Vojtech Juranek 
wrote:

> Hi,
> OST fails with
>
> 09:47:03
> https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/
> epel-7-x86_64/repodata/repomd.xml
> :
> [Errno 14] curl#7 - "Failed connect to
> copr-be.cloud.fedoraproject.org:443; Connection refused"
>
> see e.g. [1] for full log. Stared to fail this morning.
> Can anyone take a look and fix it?
>
> Thanks in advance.
> Vojta
>
> [1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5132/console
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/DXDNFFNWE3DC2IJTOH7CMXN7FD4PO4HU/
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/PAEWFBQKS2TP6XJSDCTUUHNBWANWEWLC/


[Ovirt] [CQ weekly status] [12-07-2019]

2019-07-12 Thread Dafna Ron
Hi,

This mail is to provide the current status of CQ and allow people to review
status before and after the weekend.
Please refer to below colour map for further information on the meaning of
the colours.

*CQ-4.2*:  GREEN (#1)

Last failure was on 02-07 for ovirt-provider-ovn due to failed test:
098_ovirt_provider_ovn.use_ovn_provide.
This was a code regression and was fixed by patch
https://gerrit.ovirt.org/#/c/97072/

*CQ-4.3*:  GREEN (#1)

Last failure was on 12-07 for ovirt-provider-ovn due to failed
build-artifacts, which was caused by a mirror issue.

*CQ-Master:*  RED (#1)

Last failure was on 12-07 for ovirt-provider-ovn due to failed
build-artifacts, which was caused by a mirror issue.

 Current running jobs for 4.2 [1], 4.3 [2] and master [3] can be found
here:

[1]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.2_change-queue-tester/

[2]
https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.3_change-queue-tester/

[3]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/

Happy week!
Dafna


---
COLOUR MAP

Green = job has been passing successfully

** green for more than 3 days may suggest we need a review of our test
coverage


   1. 1-3 days   GREEN (#1)
   2. 4-7 days   GREEN (#2)
   3. Over 7 days GREEN (#3)


Yellow = intermittent failures for different projects but no lasting or
current regressions

** intermittent would be a healthy project as we expect a number of
failures during the week

** I will not report any of the solved failures or regressions.


   1. Solved job failures   YELLOW (#1)
   2. Solved regressions    YELLOW (#2)


Red = job has been failing

** Active Failures. The colour will change based on the amount of time the
project/s has been broken. Only active regressions would be reported.


   1. 1-3 days  RED (#1)
   2. 4-7 days  RED (#2)
   3. Over 7 days RED (#3)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/YCNCKRK3G4EJXA3OCYAUS4VMKRDA67F4/


Re: [CQ]: 101696,1 (ovirt-provider-ovn) failed "ovirt-4.3" system tests

2019-07-11 Thread Dafna Ron
failed build-artifacts
there is a new build running now so let's see if it fails

On Thu, Jul 11, 2019 at 2:23 PM oVirt Jenkins  wrote:

> Change 101696,1 (ovirt-provider-ovn) is probably the reason behind recent
> system test failures in the "ovirt-4.3" change queue and needs to be fixed.
>
> This change had been removed from the testing queue. Artifacts build from
> this
> change will not be released until it is fixed.
>
> For further details about the change see:
> https://gerrit.ovirt.org/#/c/101696/1
>
> For failed test results see:
> http://jenkins.ovirt.org/job/ovirt-4.3_change-queue-tester/1491/
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/HVZ7H57JYLBX4FP67BFJNPCCUSUCE3MH/
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/J5RI3MZKUG6HBAPHF3QRYYL4O7AKIEJ4/


Re: [ OST Failure Report ] [ oVirt Master (vdsm) ] [ 10-07-2019 ] [ 004_basic_sanity.vdsm_recovery ]

2019-07-11 Thread Dafna Ron
It seems there is a passing vdsm build after this one so whatever this was
is now fixed.
but I think the ksm path test should be fixed.
Adding Arik

On Wed, Jul 10, 2019 at 5:06 PM Dafna Ron  wrote:

>
>
> On Wed, Jul 10, 2019 at 4:18 PM Milan Zamazal  wrote:
>
>> Dafna Ron  writes:
>>
>> > Hi,
>> >
>> > We have a failure on test  004_basic_sanity.vdsm_recovery on basic
>> suite.
>> > the error seems to be an error in KSM (invalid arg)
>> >
>> > can you please have a look?
>> >
>> > Link and headline of suspected patches:
>> >
>> >
>> > cq identified this as the cause of failure:
>> >
>> > https://gerrit.ovirt.org/#/c/101603/ - localFsSD: Enable 4k block_size
>> and
>> > alignments
>> >
>> >
>> > However, I can see some py3 patches merged at the same time:
>> >
>> >
>> > py3: storage: Fix bytes x string in lvm locking type validation -
>> > https://gerrit.ovirt.org/#/c/101124/
>>
>> OST was successfully run on this patch before merging, so it's unlikely
>> to be the cause.
>>
>> > Link to Job:
>> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14963/
>> >
>> > Link to all logs:
>> >
>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14963/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-004_basic_sanity.py/
>> >
>> > (Relevant) error snippet from the log:
>> >
>> > 
>> >
>> > s/da0eeccb-5dd8-47e5-9009-8a848fe17ea5.ovirt-guest-agent.0',) {}
>> > MainProcess|vm/da0eeccb::DEBUG::2019-07-10
>> > 07:53:41,003::supervdsm_server::106::SuperVdsm.ServerCallback::(wrapper)
>> > return prepareVmChannel with None
>> > MainProcess|jsonrpc/1::DEBUG::2019-07-10
>> > 07:54:05,580::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper)
>> > call ksmTune with ({u'pages_to_scan': 64, u'run': 1, u'sleep
>> > _millisecs': 89.25152465623417},) {}
>> > MainProcess|jsonrpc/1::ERROR::2019-07-10
>> > 07:54:05,581::supervdsm_server::103::SuperVdsm.ServerCallback::(wrapper)
>> > Error in ksmTune
>> > Traceback (most recent call last):
>> >   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line
>> > 101, in wrapper
>> > res = func(*args, **kwargs)
>> >   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_api/ksm.py",
>> line
>> > 45, in ksmTune
>> > f.write(str(v))
>>
>> This writes to files in /sys/kernel/mm/ksm/ and the values are checked
>> in the code.  It's also weird that Vdsm starts happily and then fails on
>> this in recovery.
>>
>> Can you exclude there is some problem with the system OST runs on?
>>
>
> I see there was a change in ksm patch 7 weeks ago which explains the
> failure we are seeing.
> However, I am not sure why its failing the test now and I am not seeing
> any other error that can cause this.
>
> Adding Ehud and Evgheni.
> Are the manual jobs running on containers or physical severs?
>
>
>
> https://gerrit.ovirt.org/#/c/95994/ - fix path of ksm files in a comment
>
>>
>> > IOError: [Errno 22] Invalid argument
>> > MainProcess|jsonrpc/5::DEBUG::2019-07-10
>> > 07:56:33,211::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper)
>> > call rmAppropriateMultipathRules with
>> > ('da0eeccb-5dd8-47e5-9009-8a848fe17ea5',) {}
>> >
>> >
>> > 
>>
>>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/VBEZWGXV4E2RUXVZVB2AZ6II3N4J763W/


Re: [CQ]: 101395,2 (vdsm) failed "ovirt-master" system tests

2019-07-11 Thread Dafna Ron
there is a passing build after this one


On Wed, Jul 10, 2019 at 9:06 PM oVirt Jenkins  wrote:

> Change 101395,2 (vdsm) is probably the reason behind recent system test
> failures in the "ovirt-master" change queue and needs to be fixed.
>
> This change had been removed from the testing queue. Artifacts build from
> this
> change will not be released until it is fixed.
>
> For further details about the change see:
> https://gerrit.ovirt.org/#/c/101395/2
>
> For failed test results see:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14970/
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/XFWBKU4MF4ZVP4IRB5EDA3KJR7UGGHYP/
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/O426J4I3YNT6QGZEI72OJWVZOIONLGCX/


Re: [ OST Failure Report ] [ oVirt Master (vdsm) ] [ 10-07-2019 ] [ 004_basic_sanity.vdsm_recovery ]

2019-07-10 Thread Dafna Ron
On Wed, Jul 10, 2019 at 4:18 PM Milan Zamazal  wrote:

> Dafna Ron  writes:
>
> > Hi,
> >
> > We have a failure on test  004_basic_sanity.vdsm_recovery on basic suite.
> > the error seems to be an error in KSM (invalid arg)
> >
> > can you please have a look?
> >
> > Link and headline of suspected patches:
> >
> >
> > cq identified this as the cause of failure:
> >
> > https://gerrit.ovirt.org/#/c/101603/ - localFsSD: Enable 4k block_size
> and
> > alignments
> >
> >
> > However, I can see some py3 patches merged at the same time:
> >
> >
> > py3: storage: Fix bytes x string in lvm locking type validation -
> > https://gerrit.ovirt.org/#/c/101124/
>
> OST was successfully run on this patch before merging, so it's unlikely
> to be the cause.
>
> > Link to Job:
> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14963/
> >
> > Link to all logs:
> >
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14963/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-004_basic_sanity.py/
> >
> > (Relevant) error snippet from the log:
> >
> > 
> >
> > s/da0eeccb-5dd8-47e5-9009-8a848fe17ea5.ovirt-guest-agent.0',) {}
> > MainProcess|vm/da0eeccb::DEBUG::2019-07-10
> > 07:53:41,003::supervdsm_server::106::SuperVdsm.ServerCallback::(wrapper)
> > return prepareVmChannel with None
> > MainProcess|jsonrpc/1::DEBUG::2019-07-10
> > 07:54:05,580::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper)
> > call ksmTune with ({u'pages_to_scan': 64, u'run': 1, u'sleep
> > _millisecs': 89.25152465623417},) {}
> > MainProcess|jsonrpc/1::ERROR::2019-07-10
> > 07:54:05,581::supervdsm_server::103::SuperVdsm.ServerCallback::(wrapper)
> > Error in ksmTune
> > Traceback (most recent call last):
> >   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line
> > 101, in wrapper
> > res = func(*args, **kwargs)
> >   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_api/ksm.py", line
> > 45, in ksmTune
> > f.write(str(v))
>
> This writes to files in /sys/kernel/mm/ksm/ and the values are checked
> in the code.  It's also weird that Vdsm starts happily and then fails on
> this in recovery.
>
> Can you exclude there is some problem with the system OST runs on?
>

I see there was a change in the ksm patch 7 weeks ago which explains the
failure we are seeing.
However, I am not sure why it's failing the test now, and I am not seeing any
other error that could cause this.

Adding Ehud and Evgheni.
Are the manual jobs running on containers or physical servers?



https://gerrit.ovirt.org/#/c/95994/ - fix path of ksm files in a comment

>
> > IOError: [Errno 22] Invalid argument
> > MainProcess|jsonrpc/5::DEBUG::2019-07-10
> > 07:56:33,211::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper)
> > call rmAppropriateMultipathRules with
> > ('da0eeccb-5dd8-47e5-9009-8a848fe17ea5',) {}
> >
> >
> > 
>
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/NGGHPRTCFTJLLWYAITW7TDXV3BLGFM4Y/


[ OST Failure Report ] [ oVirt Master (vdsm) ] [ 10-07-2019 ] [ 004_basic_sanity.vdsm_recovery ]

2019-07-10 Thread Dafna Ron
Hi,

We have a failure on test  004_basic_sanity.vdsm_recovery on basic suite.
the error seems to be an error in KSM (invalid arg)

can you please have a look?

Link and headline of suspected patches:


cq identified this as the cause of failure:

https://gerrit.ovirt.org/#/c/101603/ - localFsSD: Enable 4k block_size and
alignments


However, I can see some py3 patches merged at the same time:


py3: storage: Fix bytes x string in lvm locking type validation -
https://gerrit.ovirt.org/#/c/101124/


Link to Job:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14963/

Link to all logs:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14963/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-004_basic_sanity.py/

(Relevant) error snippet from the log:



s/da0eeccb-5dd8-47e5-9009-8a848fe17ea5.ovirt-guest-agent.0',) {}
MainProcess|vm/da0eeccb::DEBUG::2019-07-10
07:53:41,003::supervdsm_server::106::SuperVdsm.ServerCallback::(wrapper)
return prepareVmChannel with None
MainProcess|jsonrpc/1::DEBUG::2019-07-10
07:54:05,580::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper)
call ksmTune with ({u'pages_to_scan': 64, u'run': 1, u'sleep
_millisecs': 89.25152465623417},) {}
MainProcess|jsonrpc/1::ERROR::2019-07-10
07:54:05,581::supervdsm_server::103::SuperVdsm.ServerCallback::(wrapper)
Error in ksmTune
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line
101, in wrapper
res = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_api/ksm.py", line
45, in ksmTune
f.write(str(v))
IOError: [Errno 22] Invalid argument
MainProcess|jsonrpc/5::DEBUG::2019-07-10
07:56:33,211::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper)
call rmAppropriateMultipathRules with
('da0eeccb-5dd8-47e5-9009-8a848fe17ea5',) {}
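For context, ksmTune writes the values shown above into the KSM sysfs tunables.
One plausible, unverified reading of the EINVAL is the non-integer
sleep_millisecs value; a minimal shell illustration of that hypothesis (run as
root on any KSM-enabled host, purely as a sketch) would be:

  # the standard KSM tunables live under /sys/kernel/mm/ksm/
  echo 1 > /sys/kernel/mm/ksm/run                     # integer value - accepted
  echo 64 > /sys/kernel/mm/ksm/pages_to_scan          # integer value - accepted
  echo 89.25152465623417 > /sys/kernel/mm/ksm/sleep_millisecs
  # bash: echo: write error: Invalid argument         (the kernel expects an unsigned integer)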



___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/2E2FERLTPLPSN3LZFHH3DYJTJGTKGKHK/


Re: [CQ]: 100408,4 (ovirt-engine) failed "ovirt-master" system tests

2019-07-10 Thread Dafna Ron
same issue with  008_basic_ui_sanity.initialize_chrome

On Wed, Jul 10, 2019 at 11:25 AM oVirt Jenkins  wrote:

> Change 100408,4 (ovirt-engine) is probably the reason behind recent system
> test
> failures in the "ovirt-master" change queue and needs to be fixed.
>
> This change had been removed from the testing queue. Artifacts build from
> this
> change will not be released until it is fixed.
>
> For further details about the change see:
> https://gerrit.ovirt.org/#/c/100408/4
>
> For failed test results see:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14960/
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/7WON32D6D7QPDADQB7G6U6SPVVWBPXEO/
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/P2DSJM6G5GUZBTSROCSXQDP2CAUT7SMY/


Re: [CQ]: 101600,2 (vdsm) failed "ovirt-master" system tests

2019-07-10 Thread Dafna Ron
This host caused 2 more failures today, so if we cannot fix it please remove
the host from running CQ, as it's causing too much noise.


On Wed, Jul 10, 2019 at 10:00 AM Eyal Edri  wrote:

> Evgheni,
> Can you check maybe for docker version on that host? or anything that
> might be different from other hosts? maybe reboot / upgrade it?
>
> On Tue, Jul 9, 2019 at 6:23 PM Dafna Ron  wrote:
>
>> another failure on same host
>>
>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14951/consoleFull
>>
>>
>> On Tue, Jul 9, 2019 at 12:11 PM Dafna Ron  wrote:
>>
>>> from the failures I have seen it only fails on that specific host which
>>> probably means this is a problem in the specific host and not in the test.
>>>
>>> On Tue, Jul 9, 2019 at 12:02 PM Daniel Belenky 
>>> wrote:
>>>
>>>> Seems like a where the test tries to initialize the browser before the
>>>> browser is up or something like that... I'm not familiar with Selenium
>>>> though so maybe ask the maintainers of this test to look into it? Does it
>>>> fail on other hosts?
>>>>
>>>> On Tue, Jul 9, 2019 at 12:23 PM Dafna Ron  wrote:
>>>>
>>>>> sure. the previous failures were on ovirt-srv19.phx.ovirt.org
>>>>> <https://jenkins.ovirt.org/computer/ovirt-srv19.phx.ovirt.org/>
>>>>> it seems the new failure is on the same host:
>>>>>
>>>>> *16:23:04*  Running on ovirt-srv19.phx.ovirt.org 
>>>>> <http://jenkins.ovirt.org/computer/ovirt-srv19.phx.ovirt.org/> in 
>>>>> /home/jenkins/workspace/ovirt-master_change-queue-tester
>>>>>
>>>>>
>>>>>
>>>>> On Tue, Jul 9, 2019 at 10:01 AM Eyal Edri  wrote:
>>>>>
>>>>>> Daniel/Evgheni - any insights on what could be specific to that host?
>>>>>> Dafna, can you share the hostname?
>>>>>>
>>>>>> On Tue, Jul 9, 2019 at 11:57 AM Dafna Ron  wrote:
>>>>>>
>>>>>>> we looked at past failures which seem to have the same host in
>>>>>>> common.
>>>>>>> I asked Daniel (when he was infra owner) and Evgheni to look at this
>>>>>>> and the issue stopped.
>>>>>>> It happened again yesterday evening for one test
>>>>>>>
>>>>>>> On Tue, Jul 9, 2019 at 9:31 AM Eyal Edri  wrote:
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Mon, Jul 8, 2019 at 8:10 PM Dafna Ron  wrote:
>>>>>>>>
>>>>>>>>> This is the chrome ui test failure.
>>>>>>>>>
>>>>>>>>
>>>>>>>> Did we find the root cause for this already? anyone is looking into
>>>>>>>> it?
>>>>>>>>
>>>>>>>>
>>>>>>>>> there are more vdsm patches running now
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Mon, Jul 8, 2019 at 6:00 PM oVirt Jenkins 
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Change 101600,2 (vdsm) is probably the reason behind recent
>>>>>>>>>> system test
>>>>>>>>>> failures in the "ovirt-master" change queue and needs to be fixed.
>>>>>>>>>>
>>>>>>>>>> This change had been removed from the testing queue. Artifacts
>>>>>>>>>> build from this
>>>>>>>>>> change will not be released until it is fixed.
>>>>>>>>>>
>>>>>>>>>> For further details about the change see:
>>>>>>>>>> https://gerrit.ovirt.org/#/c/101600/2
>>>>>>>>>>
>>>>>>>>>> For failed test results see:
>>>>>>>>>>
>>>>>>>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14933/
>>>>>>>>>> ___
>>>>>>>>>> Infra mailing list -- infra@ovirt.org
>>>>>>>>>> To unsubscribe send an email to infra-le...@ovirt.org
>>>>>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>

Re: [CQ]: 101600,2 (vdsm) failed "ovirt-master" system tests

2019-07-09 Thread Dafna Ron
another failure on same host
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14951/consoleFull


On Tue, Jul 9, 2019 at 12:11 PM Dafna Ron  wrote:

> from the failures I have seen it only fails on that specific host which
> probably means this is a problem in the specific host and not in the test.
>
> On Tue, Jul 9, 2019 at 12:02 PM Daniel Belenky 
> wrote:
>
>> Seems like a where the test tries to initialize the browser before the
>> browser is up or something like that... I'm not familiar with Selenium
>> though so maybe ask the maintainers of this test to look into it? Does it
>> fail on other hosts?
>>
>> On Tue, Jul 9, 2019 at 12:23 PM Dafna Ron  wrote:
>>
>>> sure. the previous failures were on ovirt-srv19.phx.ovirt.org
>>> <https://jenkins.ovirt.org/computer/ovirt-srv19.phx.ovirt.org/>
>>> it seems the new failure is on the same host:
>>>
>>> *16:23:04*  Running on ovirt-srv19.phx.ovirt.org 
>>> <http://jenkins.ovirt.org/computer/ovirt-srv19.phx.ovirt.org/> in 
>>> /home/jenkins/workspace/ovirt-master_change-queue-tester
>>>
>>>
>>>
>>> On Tue, Jul 9, 2019 at 10:01 AM Eyal Edri  wrote:
>>>
>>>> Daniel/Evgheni - any insights on what could be specific to that host?
>>>> Dafna, can you share the hostname?
>>>>
>>>> On Tue, Jul 9, 2019 at 11:57 AM Dafna Ron  wrote:
>>>>
>>>>> we looked at past failures which seem to have the same host in common.
>>>>> I asked Daniel (when he was infra owner) and Evgheni to look at this
>>>>> and the issue stopped.
>>>>> It happened again yesterday evening for one test
>>>>>
>>>>> On Tue, Jul 9, 2019 at 9:31 AM Eyal Edri  wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Mon, Jul 8, 2019 at 8:10 PM Dafna Ron  wrote:
>>>>>>
>>>>>>> This is the chrome ui test failure.
>>>>>>>
>>>>>>
>>>>>> Did we find the root cause for this already? anyone is looking into
>>>>>> it?
>>>>>>
>>>>>>
>>>>>>> there are more vdsm patches running now
>>>>>>>
>>>>>>>
>>>>>>> On Mon, Jul 8, 2019 at 6:00 PM oVirt Jenkins 
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Change 101600,2 (vdsm) is probably the reason behind recent system
>>>>>>>> test
>>>>>>>> failures in the "ovirt-master" change queue and needs to be fixed.
>>>>>>>>
>>>>>>>> This change had been removed from the testing queue. Artifacts
>>>>>>>> build from this
>>>>>>>> change will not be released until it is fixed.
>>>>>>>>
>>>>>>>> For further details about the change see:
>>>>>>>> https://gerrit.ovirt.org/#/c/101600/2
>>>>>>>>
>>>>>>>> For failed test results see:
>>>>>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14933/
>>>>>>>> ___
>>>>>>>> Infra mailing list -- infra@ovirt.org
>>>>>>>> To unsubscribe send an email to infra-le...@ovirt.org
>>>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>>>> oVirt Code of Conduct:
>>>>>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>>>>>> List Archives:
>>>>>>>> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/U3SGLJEN6VHLCSZSIRKVPHYM4NEZ5P7H/
>>>>>>>>
>>>>>>> ___
>>>>>>> Infra mailing list -- infra@ovirt.org
>>>>>>> To unsubscribe send an email to infra-le...@ovirt.org
>>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>>> oVirt Code of Conduct:
>>>>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>>>>> List Archives:
>>>>>>> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/L5UB5UYJPOJPTS26DKZP7XUQHZYTGCOR/
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>>
>>>>>> Eyal edri
>>>>>>
>>>>>> He / Him / His
>>>>>>
>>>>>>
>>>>>> MANAGER
>>>>>>
>>>>>> CONTINUOUS PRODUCTIZATION
>>>>>>
>>>>>> SYSTEM ENGINEERING
>>>>>>
>>>>>> Red Hat <https://www.redhat.com/>
>>>>>> <https://red.ht/sig>
>>>>>> phone: +972-9-7692018
>>>>>> irc: eedri (on #tlv #rhev-dev #rhev-integ #cp-devel)
>>>>>>
>>>>>
>>>>
>>>> --
>>>>
>>>> Eyal edri
>>>>
>>>> He / Him / His
>>>>
>>>>
>>>> MANAGER
>>>>
>>>> CONTINUOUS PRODUCTIZATION
>>>>
>>>> SYSTEM ENGINEERING
>>>>
>>>> Red Hat <https://www.redhat.com/>
>>>> <https://red.ht/sig>
>>>> phone: +972-9-7692018
>>>> irc: eedri (on #tlv #rhev-dev #rhev-integ #cp-devel)
>>>>
>>>
>>
>> --
>>
>> Daniel Belenky
>>
>> Red Hat <https://www.redhat.com/>
>> <https://red.ht/sig>
>>
>>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/ZJYN7IDRBBOUEO7H2GSL6QBLZWX35SUV/


Re: [CQ]: 101600,2 (vdsm) failed "ovirt-master" system tests

2019-07-09 Thread Dafna Ron
from the failures I have seen it only fails on that specific host which
probably means this is a problem in the specific host and not in the test.

On Tue, Jul 9, 2019 at 12:02 PM Daniel Belenky  wrote:

> Seems like a where the test tries to initialize the browser before the
> browser is up or something like that... I'm not familiar with Selenium
> though so maybe ask the maintainers of this test to look into it? Does it
> fail on other hosts?
>
> On Tue, Jul 9, 2019 at 12:23 PM Dafna Ron  wrote:
>
>> sure. the previous failures were on ovirt-srv19.phx.ovirt.org
>> <https://jenkins.ovirt.org/computer/ovirt-srv19.phx.ovirt.org/>
>> it seems the new failure is on the same host:
>>
>> *16:23:04*  Running on ovirt-srv19.phx.ovirt.org 
>> <http://jenkins.ovirt.org/computer/ovirt-srv19.phx.ovirt.org/> in 
>> /home/jenkins/workspace/ovirt-master_change-queue-tester
>>
>>
>>
>> On Tue, Jul 9, 2019 at 10:01 AM Eyal Edri  wrote:
>>
>>> Daniel/Evgheni - any insights on what could be specific to that host?
>>> Dafna, can you share the hostname?
>>>
>>> On Tue, Jul 9, 2019 at 11:57 AM Dafna Ron  wrote:
>>>
>>>> we looked at past failures which seem to have the same host in common.
>>>> I asked Daniel (when he was infra owner) and Evgheni to look at this
>>>> and the issue stopped.
>>>> It happened again yesterday evening for one test
>>>>
>>>> On Tue, Jul 9, 2019 at 9:31 AM Eyal Edri  wrote:
>>>>
>>>>>
>>>>>
>>>>> On Mon, Jul 8, 2019 at 8:10 PM Dafna Ron  wrote:
>>>>>
>>>>>> This is the chrome ui test failure.
>>>>>>
>>>>>
>>>>> Did we find the root cause for this already? anyone is looking into it?
>>>>>
>>>>>
>>>>>> there are more vdsm patches running now
>>>>>>
>>>>>>
>>>>>> On Mon, Jul 8, 2019 at 6:00 PM oVirt Jenkins 
>>>>>> wrote:
>>>>>>
>>>>>>> Change 101600,2 (vdsm) is probably the reason behind recent system
>>>>>>> test
>>>>>>> failures in the "ovirt-master" change queue and needs to be fixed.
>>>>>>>
>>>>>>> This change had been removed from the testing queue. Artifacts build
>>>>>>> from this
>>>>>>> change will not be released until it is fixed.
>>>>>>>
>>>>>>> For further details about the change see:
>>>>>>> https://gerrit.ovirt.org/#/c/101600/2
>>>>>>>
>>>>>>> For failed test results see:
>>>>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14933/
>>>>>>> ___
>>>>>>> Infra mailing list -- infra@ovirt.org
>>>>>>> To unsubscribe send an email to infra-le...@ovirt.org
>>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>>> oVirt Code of Conduct:
>>>>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>>>>> List Archives:
>>>>>>> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/U3SGLJEN6VHLCSZSIRKVPHYM4NEZ5P7H/
>>>>>>>
>>>>>> ___
>>>>>> Infra mailing list -- infra@ovirt.org
>>>>>> To unsubscribe send an email to infra-le...@ovirt.org
>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>> oVirt Code of Conduct:
>>>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>>>> List Archives:
>>>>>> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/L5UB5UYJPOJPTS26DKZP7XUQHZYTGCOR/
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>
>>>>> Eyal edri
>>>>>
>>>>> He / Him / His
>>>>>
>>>>>
>>>>> MANAGER
>>>>>
>>>>> CONTINUOUS PRODUCTIZATION
>>>>>
>>>>> SYSTEM ENGINEERING
>>>>>
>>>>> Red Hat <https://www.redhat.com/>
>>>>> <https://red.ht/sig>
>>>>> phone: +972-9-7692018
>>>>> irc: eedri (on #tlv #rhev-dev #rhev-integ #cp-devel)
>>>>>
>>>>
>>>
>>> --
>>>
>>> Eyal edri
>>>
>>> He / Him / His
>>>
>>>
>>> MANAGER
>>>
>>> CONTINUOUS PRODUCTIZATION
>>>
>>> SYSTEM ENGINEERING
>>>
>>> Red Hat <https://www.redhat.com/>
>>> <https://red.ht/sig>
>>> phone: +972-9-7692018
>>> irc: eedri (on #tlv #rhev-dev #rhev-integ #cp-devel)
>>>
>>
>
> --
>
> Daniel Belenky
>
> Red Hat <https://www.redhat.com/>
> <https://red.ht/sig>
>
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/CPCYFWGGV2UUECCMEX2AYJ3IUO23FG7A/


Re: [CQ]: 101600,2 (vdsm) failed "ovirt-master" system tests

2019-07-09 Thread Dafna Ron
sure. the previous failures were on ovirt-srv19.phx.ovirt.org
<https://jenkins.ovirt.org/computer/ovirt-srv19.phx.ovirt.org/>
it seems the new failure is on the same host:

*16:23:04*  Running on ovirt-srv19.phx.ovirt.org
<http://jenkins.ovirt.org/computer/ovirt-srv19.phx.ovirt.org/> in
/home/jenkins/workspace/ovirt-master_change-queue-tester



On Tue, Jul 9, 2019 at 10:01 AM Eyal Edri  wrote:

> Daniel/Evgheni - any insights on what could be specific to that host?
> Dafna, can you share the hostname?
>
> On Tue, Jul 9, 2019 at 11:57 AM Dafna Ron  wrote:
>
>> we looked at past failures which seem to have the same host in common.
>> I asked Daniel (when he was infra owner) and Evgheni to look at this and
>> the issue stopped.
>> It happened again yesterday evening for one test
>>
>> On Tue, Jul 9, 2019 at 9:31 AM Eyal Edri  wrote:
>>
>>>
>>>
>>> On Mon, Jul 8, 2019 at 8:10 PM Dafna Ron  wrote:
>>>
>>>> This is the chrome ui test failure.
>>>>
>>>
>>> Did we find the root cause for this already? anyone is looking into it?
>>>
>>>
>>>> there are more vdsm patches running now
>>>>
>>>>
>>>> On Mon, Jul 8, 2019 at 6:00 PM oVirt Jenkins  wrote:
>>>>
>>>>> Change 101600,2 (vdsm) is probably the reason behind recent system test
>>>>> failures in the "ovirt-master" change queue and needs to be fixed.
>>>>>
>>>>> This change had been removed from the testing queue. Artifacts build
>>>>> from this
>>>>> change will not be released until it is fixed.
>>>>>
>>>>> For further details about the change see:
>>>>> https://gerrit.ovirt.org/#/c/101600/2
>>>>>
>>>>> For failed test results see:
>>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14933/
>>>>> ___
>>>>> Infra mailing list -- infra@ovirt.org
>>>>> To unsubscribe send an email to infra-le...@ovirt.org
>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>> oVirt Code of Conduct:
>>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>>> List Archives:
>>>>> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/U3SGLJEN6VHLCSZSIRKVPHYM4NEZ5P7H/
>>>>>
>>>> ___
>>>> Infra mailing list -- infra@ovirt.org
>>>> To unsubscribe send an email to infra-le...@ovirt.org
>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>> oVirt Code of Conduct:
>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>> List Archives:
>>>> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/L5UB5UYJPOJPTS26DKZP7XUQHZYTGCOR/
>>>>
>>>
>>>
>>> --
>>>
>>> Eyal edri
>>>
>>> He / Him / His
>>>
>>>
>>> MANAGER
>>>
>>> CONTINUOUS PRODUCTIZATION
>>>
>>> SYSTEM ENGINEERING
>>>
>>> Red Hat <https://www.redhat.com/>
>>> <https://red.ht/sig>
>>> phone: +972-9-7692018
>>> irc: eedri (on #tlv #rhev-dev #rhev-integ #cp-devel)
>>>
>>
>
> --
>
> Eyal edri
>
> He / Him / His
>
>
> MANAGER
>
> CONTINUOUS PRODUCTIZATION
>
> SYSTEM ENGINEERING
>
> Red Hat <https://www.redhat.com/>
> <https://red.ht/sig>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ #cp-devel)
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/LDG3JI6AAVS3Z6PVG6BX52D5FGAVHUBZ/


Re: [CQ]: 101600,2 (vdsm) failed "ovirt-master" system tests

2019-07-09 Thread Dafna Ron
we looked at past failures which seem to have the same host in common.
I asked Daniel (when he was infra owner) and Evgheni to look at this and
the issue stopped.
It happened again yesterday evening for one test

On Tue, Jul 9, 2019 at 9:31 AM Eyal Edri  wrote:

>
>
> On Mon, Jul 8, 2019 at 8:10 PM Dafna Ron  wrote:
>
>> This is the chrome ui test failure.
>>
>
> Did we find the root cause for this already? anyone is looking into it?
>
>
>> there are more vdsm patches running now
>>
>>
>> On Mon, Jul 8, 2019 at 6:00 PM oVirt Jenkins  wrote:
>>
>>> Change 101600,2 (vdsm) is probably the reason behind recent system test
>>> failures in the "ovirt-master" change queue and needs to be fixed.
>>>
>>> This change had been removed from the testing queue. Artifacts build
>>> from this
>>> change will not be released until it is fixed.
>>>
>>> For further details about the change see:
>>> https://gerrit.ovirt.org/#/c/101600/2
>>>
>>> For failed test results see:
>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14933/
>>> ___
>>> Infra mailing list -- infra@ovirt.org
>>> To unsubscribe send an email to infra-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/U3SGLJEN6VHLCSZSIRKVPHYM4NEZ5P7H/
>>>
>> ___
>> Infra mailing list -- infra@ovirt.org
>> To unsubscribe send an email to infra-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/L5UB5UYJPOJPTS26DKZP7XUQHZYTGCOR/
>>
>
>
> --
>
> Eyal edri
>
> He / Him / His
>
>
> MANAGER
>
> CONTINUOUS PRODUCTIZATION
>
> SYSTEM ENGINEERING
>
> Red Hat <https://www.redhat.com/>
> <https://red.ht/sig>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ #cp-devel)
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/737UUAKGNJPOPTID3227ZZRSW7ETI4LO/


Re: [CQ]: 101600,2 (vdsm) failed "ovirt-master" system tests

2019-07-08 Thread Dafna Ron
This is the chrome ui test failure.
there are more vdsm patches running now


On Mon, Jul 8, 2019 at 6:00 PM oVirt Jenkins  wrote:

> Change 101600,2 (vdsm) is probably the reason behind recent system test
> failures in the "ovirt-master" change queue and needs to be fixed.
>
> This change had been removed from the testing queue. Artifacts build from
> this
> change will not be released until it is fixed.
>
> For further details about the change see:
> https://gerrit.ovirt.org/#/c/101600/2
>
> For failed test results see:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14933/
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/U3SGLJEN6VHLCSZSIRKVPHYM4NEZ5P7H/
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/L5UB5UYJPOJPTS26DKZP7XUQHZYTGCOR/


Re: [CQ]: 101537,2 (vdsm) failed "ovirt-4.3" system tests

2019-07-08 Thread Dafna Ron
we have a passing vdsm after this failure.


On Mon, Jul 8, 2019 at 5:49 PM oVirt Jenkins  wrote:

> Change 101537,2 (vdsm) is probably the reason behind recent system test
> failures in the "ovirt-4.3" change queue and needs to be fixed.
>
> This change had been removed from the testing queue. Artifacts build from
> this
> change will not be released until it is fixed.
>
> For further details about the change see:
> https://gerrit.ovirt.org/#/c/101537/2
>
> For failed test results see:
> http://jenkins.ovirt.org/job/ovirt-4.3_change-queue-tester/1444/
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/QA3YQNET5SEKTCDKD4PVSKY2RKUC4VQD/
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/L4CM7CJS2WNR3WEJ5QZSSY3645UKMMB7/


[Ovirt] [CQ weekly status] [05-07-2019]

2019-07-05 Thread Dafna Ron
Hi,

This mail is to provide the current status of CQ and allow people to review
status before and after the weekend.
Please refer to below colour map for further information on the meaning of
the colours.

*CQ-4.2*:  GREEN (#1)

Last failure was on 02-07 for ovirt-provider-ovn due to failed test:
098_ovirt_provider_ovn.use_ovn_provide.
This was a code regression and was fixed by patch
https://gerrit.ovirt.org/#/c/97072/

*CQ-4.3*:  RED (#1)

We are failing because our OST image is CentOS 7.6 and we now have 7.7
packages.
Galit will need to create a new image to fix the failing tests.

*CQ-Master:*  RED (#1)

We are failing because our OST image is CentOS 7.6 and we now have 7.7
packages.
Galit will need to create a new image to fix the failing tests.

 Current running jobs for 4.2 [1], 4.3 [2] and master [3] can be found
here:

[1]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.2_change-queue-tester/

[2]
https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.3_change-queue-tester/

[3]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/

Happy week!
Dafna


---
COLOUR MAP

Green = job has been passing successfully

** green for more than 3 days may suggest we need a review of our test
coverage


   1. 1-3 days   GREEN (#1)
   2. 4-7 days   GREEN (#2)
   3. Over 7 days GREEN (#3)


Yellow = intermittent failures for different projects but no lasting or
current regressions

** intermittent would be a healthy project as we expect a number of
failures during the week

** I will not report any of the solved failures or regressions.


   1. Solved job failures   YELLOW (#1)
   2. Solved regressions    YELLOW (#2)


Red = job has been failing

** Active Failures. The colour will change based on the amount of time the
project/s has been broken. Only active regressions would be reported.


   1. 1-3 days  RED (#1)
   2. 4-7 days  RED (#2)
   3. Over 7 days RED (#3)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/65Y52U4DML4CPQLYEPJYWIIPBZQVIKQM/


[JIRA] (OVIRT-1919) missing documentation on ci re-merge please in github

2019-07-04 Thread Dafna Ron (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-1919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39487#comment-39487
 ] 

Dafna Ron commented on OVIRT-1919:
--

sure. I am not sure how many projects are still using github as I did not see 
commits from there for a while. 

> missing documentation on ci re-merge please in github
> -
>
> Key: OVIRT-1919
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1919
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>        Reporter: Dafna Ron
>Assignee: infra
>
> we needed to re-merge a change from github and did not know how to do it. 
> looking at the documentation it either does not exist or not documented. 
> http://ovirt-infra-docs.readthedocs.io/en/latest/CI/Using_STDCI_with_GitHub/index.html
> I am assuming that we can re-merge so opening a jira for the doc. 



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100105)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/3MMUVAPY7VBOJDEXOH5BVVNQIYJP3EUD/


[JIRA] (OVIRT-1536) Documentation about the change-queue flow

2019-07-04 Thread Dafna Ron (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-1536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39484#comment-39484
 ] 

Dafna Ron commented on OVIRT-1536:
--

I did a diagram (two actually, a simple one and a more detailed one) but I think
that this is the best way to document CQ rather than a written doc

> Documentation about the change-queue flow
> -
>
> Key: OVIRT-1536
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1536
> Project: oVirt - virtualization made easy
>  Issue Type: Improvement
>  Components: Change Queue, Documentation
>Reporter: Barak Korren
>Assignee: infra
>  Labels: change-queue
>
> Add some reference documentation about the change-queue flow.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100105)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/7TUS7XYD42NUPMHQT334MQJRV6OYAFSQ/


Re: [CQ]: 101107, 6 (ovirt-provider-ovn) failed "ovirt-master" system tests

2019-07-03 Thread Dafna Ron
it passed CI so let's merge and see if we have a new build...

On Wed, Jul 3, 2019 at 4:35 PM Miguel Duarte de Mora Barroso <
mdbarr...@redhat.com> wrote:

> On Wed, Jul 3, 2019 at 5:25 PM Dafna Ron  wrote:
> >
> >
> >
> > On Wed, Jul 3, 2019 at 3:11 PM Miguel Duarte de Mora Barroso <
> mdbarr...@redhat.com> wrote:
> >>
> >> On Wed, Jul 3, 2019 at 2:26 PM Dafna Ron  wrote:
> >> >
> >> > build-artifacts failing on el7 and on fc29
> >> >
> >> > the el7 is failing on download of mom package (which exists on repo)
> >> >
> >> > [2019-07-03T08:04:45.865Z] Delta RPMs disabled because
> /usr/bin/applydeltarpm not installed.
> >> > [2019-07-03T08:04:45.865Z]
> http://resources.ovirt.org/repos/ovirt/tested/master/rpm/el7/noarch/mom-0.5.13-0.0.master.el7.noarch.rpm:
> [Errno -1] Package does not match intended download. Suggestion: run yum
> --enablerepo=masterLatestTested clean metadata
> >> > [2019-07-03T08:04:45.865Z] Trying other mirror.
> >> > [2019-07-03T08:04:45.865Z]
> >> > [2019-07-03T08:04:45.865Z]
> >> > [2019-07-03T08:04:45.865Z] Error downloading packages:
> >> > [2019-07-03T08:04:45.865Z]   mom-0.5.13-0.0.master.el7.noarch: [Errno
> 256] No more mirrors to try.
> >> >
> >>
> >> So... About this ?... What can be done ?
> >
> >
> > added a patch: https://gerrit.ovirt.org/#/c/101491/
> > lets see if it does the trick
> >
> >>
> >> >
> >> > fc29 is failing on date:
> >> >
> >> >
> >> > [2019-07-03T08:08:12.497Z] done
> >> > [2019-07-03T08:08:12.497Z] %{__python2} -m compileall .
> >>
> >> It is not failing on 'date', it's failing here, because apparently the
> >> mock environment doesn't have python-setuptools installed (and thus it
> >> can't replace the __python2 macro w/ 'python').
> >>
> >> In the next patch - https://gerrit.ovirt.org/#/c/101386/ - I add the
> >> python-setuptools dependencies.
> >>
> >> Would you let me know ASAP the status of the fc29 build of the
> >> aforementioned patch ?
> >
> >
> > looks like this is the last one and it failed on same issue:
> https://gerrit.ovirt.org/#/c/101107/
>
> This one is the last merged one: https://gerrit.ovirt.org/#/c/101386/
>
> The check-merged failed, so it was not sent to the change queue.
>
> Failed on el7 because of the MOM thing - check [0]:
>
> 10:04:45
> http://resources.ovirt.org/repos/ovirt/tested/master/rpm/el7/noarch/mom-0.5.13-0.0.master.el7.noarch.rpm
> :
> [Errno -1] Package does not match intended download. Suggestion: run
> yum --enablerepo=masterLatestTested clean metadata
> 10:04:45  Trying other mirror.
> 10:04:45
> 10:04:45
> 10:04:45  Error downloading packages:
> 10:04:45mom-0.5.13-0.0.master.el7.noarch: [Errno 256] No more
> mirrors to try.
>
> I'm hoping you patch fixes this MOM thing ...
>
> [0] -
> https://jenkins.ovirt.org/job/ovirt-provider-ovn_standard-on-merge/335/consoleFull
>
> >
> >>
> >> > [2019-07-03T08:08:12.497Z] bash: line 0: fg: no job control
> >> > [2019-07-03T08:08:12.497Z] make[1]: *** [Makefile:39: compile] Error 1
> >> > [2019-07-03T08:08:12.497Z] make[1]: Leaving directory
> '/home/jenkins/workspace/ovirt-provider-ovn_standard-on-merge/ovirt-provider-ovn/rpmbuild/BUILD/ovirt-provider-ovn-1.2.24'
> >> > [2019-07-03T08:08:12.497Z] error: Bad exit status from
> /var/tmp/rpm-tmp.5xvxHO (%install)
> >> > [2019-07-03T08:08:12.497Z]
> >> > [2019-07-03T08:08:12.497Z]
> >> > [2019-07-03T08:08:12.497Z] RPM build errors:
> >> > [2019-07-03T08:08:12.497Z] bogus date in %changelog: Wed Nov 15
> 2018 Miguel Duarte Barroso  - 1.2.17
> >> > [2019-07-03T08:08:12.497Z] Bad exit status from
> /var/tmp/rpm-tmp.5xvxHO (%install)
> >> > [2019-07-03T08:08:12.497Z] make: *** [Makefile:129: rpm] Error 1
> >> > [2019-07-03T08:08:12.497Z] Took 1 seconds
> >> > [2019-07-03T08:08:12.497Z] ===
> >> > [2019-07-03T08:08:12.497Z] Finish: shell
> >> >
> >> >
> >> >
> >> > On Wed, Jul 3, 2019 at 1:09 PM oVirt Jenkins 
> wrote:
> >> >>
> >> >> Change 101107,6 (ovirt-provider-ovn) is probably the reason behind
> recent
> >> >> system test failures in the "ovirt-master" change queue and needs to
> be fixed.
> >> >>
> >> >> This change had bee

Re: [CQ]: 101107, 6 (ovirt-provider-ovn) failed "ovirt-master" system tests

2019-07-03 Thread Dafna Ron
On Wed, Jul 3, 2019 at 3:11 PM Miguel Duarte de Mora Barroso <
mdbarr...@redhat.com> wrote:

> On Wed, Jul 3, 2019 at 2:26 PM Dafna Ron  wrote:
> >
> > build-artifacts failing on el7 and on fc29
> >
> > the el7 is failing on download of mom package (which exists on repo)
> >
> > [2019-07-03T08:04:45.865Z] Delta RPMs disabled because
> /usr/bin/applydeltarpm not installed.
> > [2019-07-03T08:04:45.865Z]
> http://resources.ovirt.org/repos/ovirt/tested/master/rpm/el7/noarch/mom-0.5.13-0.0.master.el7.noarch.rpm:
> [Errno -1] Package does not match intended download. Suggestion: run yum
> --enablerepo=masterLatestTested clean metadata
> > [2019-07-03T08:04:45.865Z] Trying other mirror.
> > [2019-07-03T08:04:45.865Z]
> > [2019-07-03T08:04:45.865Z]
> > [2019-07-03T08:04:45.865Z] Error downloading packages:
> > [2019-07-03T08:04:45.865Z]   mom-0.5.13-0.0.master.el7.noarch: [Errno
> 256] No more mirrors to try.
> >
>
> So... About this ?... What can be done ?
>

added a patch: https://gerrit.ovirt.org/#/c/101491/
let's see if it does the trick


> >
> > fc29 is failing on date:
> >
> >
> > [2019-07-03T08:08:12.497Z] done
> > [2019-07-03T08:08:12.497Z] %{__python2} -m compileall .
>
> It is not failing on 'date', it's failing here, because apparently the
> mock environment doesn't have python-setuptools installed (and thus it
> can't replace the __python2 macro w/ 'python').
>
> In the next patch - https://gerrit.ovirt.org/#/c/101386/ - I add the
> python-setuptools dependencies.
>
> Would you let me know ASAP the status of the fc29 build of the
> aforementioned patch ?
>

looks like this is the last one and it failed on same issue:
https://gerrit.ovirt.org/#/c/101107/


> > [2019-07-03T08:08:12.497Z] bash: line 0: fg: no job control
> > [2019-07-03T08:08:12.497Z] make[1]: *** [Makefile:39: compile] Error 1
> > [2019-07-03T08:08:12.497Z] make[1]: Leaving directory
> '/home/jenkins/workspace/ovirt-provider-ovn_standard-on-merge/ovirt-provider-ovn/rpmbuild/BUILD/ovirt-provider-ovn-1.2.24'
> > [2019-07-03T08:08:12.497Z] error: Bad exit status from
> /var/tmp/rpm-tmp.5xvxHO (%install)
> > [2019-07-03T08:08:12.497Z]
> > [2019-07-03T08:08:12.497Z]
> > [2019-07-03T08:08:12.497Z] RPM build errors:
> > [2019-07-03T08:08:12.497Z] bogus date in %changelog: Wed Nov 15 2018
> Miguel Duarte Barroso  - 1.2.17
> > [2019-07-03T08:08:12.497Z] Bad exit status from
> /var/tmp/rpm-tmp.5xvxHO (%install)
> > [2019-07-03T08:08:12.497Z] make: *** [Makefile:129: rpm] Error 1
> > [2019-07-03T08:08:12.497Z] Took 1 seconds
> > [2019-07-03T08:08:12.497Z] ===
> > [2019-07-03T08:08:12.497Z] Finish: shell
> >
> >
> >
> > On Wed, Jul 3, 2019 at 1:09 PM oVirt Jenkins  wrote:
> >>
> >> Change 101107,6 (ovirt-provider-ovn) is probably the reason behind
> recent
> >> system test failures in the "ovirt-master" change queue and needs to be
> fixed.
> >>
> >> This change had been removed from the testing queue. Artifacts build
> from this
> >> change will not be released until it is fixed.
> >>
> >> For further details about the change see:
> >> https://gerrit.ovirt.org/#/c/101107/6
> >>
> >> For failed test results see:
> >> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14825/
> >> ___
> >> Infra mailing list -- infra@ovirt.org
> >> To unsubscribe send an email to infra-le...@ovirt.org
> >> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> >> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> >> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/4IH3TDNYYI3GJ6PK6WTFVO6EH7GBXBVD/
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/GU6YD4B4IBTKZN7YIGQN2DMC54VGOCG3/


Re: [CQ]: 101107, 6 (ovirt-provider-ovn) failed "ovirt-master" system tests

2019-07-03 Thread Dafna Ron
build-artifacts failing on el7 and on fc29

the el7 build is failing on download of the mom package (which exists in the repo)

 
[2019-07-03T08:04:45.865Z] Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
[2019-07-03T08:04:45.865Z] http://resources.ovirt.org/repos/ovirt/tested/master/rpm/el7/noarch/mom-0.5.13-0.0.master.el7.noarch.rpm: [Errno -1] Package does not match intended download. Suggestion: run yum --enablerepo=masterLatestTested clean metadata
[2019-07-03T08:04:45.865Z] Trying other mirror.
[2019-07-03T08:04:45.865Z]
[2019-07-03T08:04:45.865Z]
[2019-07-03T08:04:45.865Z] Error downloading packages:
[2019-07-03T08:04:45.865Z]   mom-0.5.13-0.0.master.el7.noarch: [Errno 256] No more mirrors to try.


fc29 is failing on date:


 
[2019-07-03T08:08:12.497Z] done
[2019-07-03T08:08:12.497Z] %{__python2} -m compileall .
[2019-07-03T08:08:12.497Z] bash: line 0: fg: no job control
[2019-07-03T08:08:12.497Z] make[1]: *** [Makefile:39: compile] Error 1
[2019-07-03T08:08:12.497Z] make[1]: Leaving directory '/home/jenkins/workspace/ovirt-provider-ovn_standard-on-merge/ovirt-provider-ovn/rpmbuild/BUILD/ovirt-provider-ovn-1.2.24'
[2019-07-03T08:08:12.497Z] error: Bad exit status from /var/tmp/rpm-tmp.5xvxHO (%install)
[2019-07-03T08:08:12.497Z]
[2019-07-03T08:08:12.497Z]
[2019-07-03T08:08:12.497Z] RPM build errors:
[2019-07-03T08:08:12.497Z] bogus date in %changelog: Wed Nov 15 2018 Miguel Duarte Barroso  - 1.2.17
[2019-07-03T08:08:12.497Z] Bad exit status from /var/tmp/rpm-tmp.5xvxHO (%install)
[2019-07-03T08:08:12.497Z] make: *** [Makefile:129: rpm] Error 1
[2019-07-03T08:08:12.497Z] Took 1 seconds
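
A note on the "bogus date in %changelog" line above: rpmbuild typically emits it when the weekday in a changelog entry does not match the calendar date (the entry says Wed, but Nov 15 2018 was a Thursday); on its own it is normally just a warning, and the hard failure here is the %install step. A quick check, as a rough sketch assuming GNU date:

  date -d '2018-11-15' '+%a %b %d %Y'   # -> Thu Nov 15 2018, so the entry should read Thu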

[JIRA] (OVIRT-2749) Re: [ovirt-devel] mom install error on fc29

2019-07-01 Thread Dafna Ron (oVirt JIRA)
Dafna Ron created OVIRT-2749:


 Summary: Re: [ovirt-devel] mom install error on fc29
 Key: OVIRT-2749
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2749
 Project: oVirt - virtualization made easy
  Issue Type: By-EMAIL
Reporter: Dafna Ron
Assignee: infra


On Thu, Jun 20, 2019 at 5:35 PM Miguel Duarte de Mora Barroso <
mdbarr...@redhat.com> wrote:

> Hi,
>
> I'm seeing the following error when I try to install
> ovirt-provider-ovn-driver (which requires vdsm):
>
> [MIRROR] mom-0.5.12-0.0.master.fc29.noarch.rpm: Interrupted by header
> callback: Server reports Content-Length: 340 but expected size is:
> 133356
> 17:13:50  [FAILED] mom-0.5.12-0.0.master.fc29.noarch.rpm: No more
> mirrors to try - All mirrors were already tried without success
> 17:13:50  Error: Error downloading packages:
> 17:13:50Cannot download
> noarch/mom-0.5.12-0.0.master.fc29.noarch.rpm: All mirrors were tried
>
> Any clue ?.. it can be seen in [0].
>
> Thanks in advance,
> Miguel
>
> [0] -
> https://jenkins.ovirt.org/job/ovirt-provider-ovn_standard-check-patch/2309/console
> ___
> Devel mailing list -- de...@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/de...@ovirt.org/message/OLLWGI4NKMZD2TDCYLMIK3KXXET4RP55/
>
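
A size mismatch like the one above (Content-Length 340 vs. an expected 133356 bytes) usually means the mirror is serving stale repodata or an error page instead of the rpm. One way to see what the server actually returns, as a rough sketch where <repo-baseurl> is a placeholder for the repo the job uses:

  curl -sIL '<repo-baseurl>/noarch/mom-0.5.12-0.0.master.fc29.noarch.rpm' | grep -i '^content-length'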



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100105)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/GDWO4KIHMV6OFMCLJ23NUD5AJE67GF4Z/


Re: [ovirt-devel] Re: Storage test case needs to be fixed: 002_bootstrap.resize_and_refresh_storage_domain

2019-07-01 Thread Dafna Ron
Thanks Shani.
Tal, any updates on merging this?

On Mon, Jun 24, 2019 at 2:18 PM Shani Leviim  wrote:

> Hi Dafna,
> A patch for backporting was done: https://gerrit.ovirt.org/#/c/101113/
> Waiting for merging it.
>
>
> *Regards,*
>
> *Shani Leviim*
>
>
> On Mon, Jun 24, 2019 at 11:17 AM Dafna Ron  wrote:
>
>> Hi Shani and Tal,
>>
>> Can you backport this to 4.3 as well? we had a failure on 4.3 for same
>> issue but not for master.
>>
>> Thanks,
>> Dafna
>>
>>
>>
>> On Thu, Jun 20, 2019 at 3:39 PM Dafna Ron  wrote:
>>
>>> Hi Shani,
>>>
>>> Thanks for the patch
>>> I will monitor and in case we see any further failures will let you
>>> know.
>>>
>>> Thanks again,
>>> Dafna
>>>
>>>
>>> On Sun, Jun 16, 2019 at 3:53 PM Shani Leviim  wrote:
>>>
>>>> I've worked on this patch: https://gerrit.ovirt.org/#/c/100852/
>>>> Dafna, can you please try it?
>>>>
>>>>
>>>> *Regards,*
>>>>
>>>> *Shani Leviim*
>>>>
>>>>
>>>> On Fri, Jun 14, 2019 at 11:07 AM Dafna Ron  wrote:
>>>>
>>>>> and another:
>>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14559/
>>>>>
>>>>> On Fri, Jun 14, 2019 at 9:05 AM Dafna Ron  wrote:
>>>>>
>>>>>> here you go:
>>>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14561/
>>>>>> I pressed the save build forever button. please press the don't keep
>>>>>> button once you get the info from the failure.
>>>>>>
>>>>>> Thanks,
>>>>>> Dafna
>>>>>>
>>>>>>
>>>>>> On Wed, Jun 5, 2019 at 9:09 AM Galit Rosenthal 
>>>>>> wrote:
>>>>>>
>>>>>>> We didn't make any changes.
>>>>>>> I don't think retrie will help in this case
>>>>>>> we are trying to catch what cause this.
>>>>>>>
>>>>>>> If you see this again please let us know.
>>>>>>>
>>>>>>> We are still debugging it
>>>>>>>
>>>>>>> On Wed, Jun 5, 2019 at 10:56 AM Dafna Ron  wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> This is a random failure - so no, I have not.
>>>>>>>> However, I looked at several failures and they are all the same,
>>>>>>>> the action on engine/vdsm side succeeds and lago repots a failure but
>>>>>>>> prints a success from logs.
>>>>>>>> Did you add anything more to the tests to allow better debugging?
>>>>>>>> Did you add a re-try to the test?
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Dafna
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Wed, Jun 5, 2019 at 8:21 AM Galit Rosenthal 
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Hi Dafna
>>>>>>>>>
>>>>>>>>> If you see this failure again, please send us the link to the job.
>>>>>>>>>
>>>>>>>>> We are trying to reproduce it and find the root cause.
>>>>>>>>>
>>>>>>>>> Regards,
>>>>>>>>> Galit
>>>>>>>>>
>>>>>>>>> On Tue, May 21, 2019 at 5:15 PM Dafna Ron  wrote:
>>>>>>>>>
>>>>>>>>>> sure
>>>>>>>>>>
>>>>>>>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14185/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-002_bootstrap.py/
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Tue, May 21, 2019 at 3:04 PM Shani Leviim 
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi Dafna,
>>>>>>>>>>> Can you please direct me to the full test's log?
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>

Re: [CQ]: 101190,12 (ovirt-engine) failed "ovirt-master" system tests

2019-07-01 Thread Dafna Ron
we have a passing ovirt-engine so this is not relevant

On Fri, Jun 28, 2019 at 8:33 PM oVirt Jenkins  wrote:

> Change 101190,12 (ovirt-engine) is probably the reason behind recent system
> test failures in the "ovirt-master" change queue and needs to be fixed.
>
> This change had been removed from the testing queue. Artifacts build from
> this
> change will not be released until it is fixed.
>
> For further details about the change see:
> https://gerrit.ovirt.org/#/c/101190/12
>
> For failed test results see:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14773/
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/MIPHMZCKQTXUH5CLJ7RDHOKM266EFSFC/
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/RKWU66SBPEMIQYVNJPXZYF2H4IFW5MSK/


[JIRA] (OVIRT-2748) Re: "Dev Role" on Jenkins

2019-07-01 Thread Dafna Ron (oVirt JIRA)
Dafna Ron created OVIRT-2748:


 Summary: Re: "Dev Role" on Jenkins
 Key: OVIRT-2748
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2748
 Project: oVirt - virtualization made easy
  Issue Type: By-EMAIL
Reporter: Dafna Ron
Assignee: infra


Adding infra-support as this needs a ticket


On Mon, Jul 1, 2019 at 7:07 AM Germano Veit Michel 
wrote:

> Hi,
>
> I would like to be able to manually trigger OST. According to [1] I might
> be missing the "Dev Role" as I cannot see the "*build with parameters*"
> menu.
>
> Could you please check if I have permissions?
>
> [1]
> https://ovirt-system-tests.readthedocs.io/en/latest/CI/developers_info/index.html
>
> Thanks,
> Germano
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/YEEUZ6B6WJUNV4HC7KHYSVMDQO7E5PBF/
>



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100105)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/NPCQ4HFLH775NXZYT2ZTQCA5I57E4KDS/


Re: "Dev Role" on Jenkins

2019-07-01 Thread Dafna Ron
Adding infra-support as this needs a ticket


On Mon, Jul 1, 2019 at 7:07 AM Germano Veit Michel 
wrote:

> Hi,
>
> I would like to be able to manually trigger OST. According to [1] I might
> be missing the "Dev Role" as I cannot see the "*build with parameters*"
> menu.
>
> Could you please check if I have permissions?
>
> [1]
> https://ovirt-system-tests.readthedocs.io/en/latest/CI/developers_info/index.html
>
> Thanks,
> Germano
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/YEEUZ6B6WJUNV4HC7KHYSVMDQO7E5PBF/
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/F4U6RGOA4OHYDTHCGTB7D7MK4ZSBYNH4/


[Ovirt] [CQ weekly status] [28-06-2019]

2019-06-28 Thread Dafna Ron
This mail is to provide the current status of CQ and allow people to review
status before and after the weekend.
Please refer to below colour map for further information on the meaning of
the colours.

*CQ-4.2*:  GREEN (#1)

Last failure was on 28-06 for mom due to failed test:
002_bootstrap.add_master_storage_domain. This is a known issue which has a
fix that was probably not added to the upgrade suite. I re-triggered the
change to see if it passes.


*CQ-4.3*:  GREEN (#2)

Last failure was on 27-06-2019 for project ovirt-web-ui due to test
get_host_device which seemed to be a race. I re-triggered the patch and it
passed.

*CQ-Master:*  GREEN (#1)

Last failure was on 28-06-2019 for project ovirt-node-ng-image on
build-artifacts for fc29
There is a ticket opened https://ovirt-jira.atlassian.net/browse/OVIRT-2747

 Current running jobs for 4.2 [1], 4.3 [2] and master [3] can be found
here:

[1]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.2_change-queue-tester/

[2]
https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.3_change-queue-tester/

[3]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/

Happy week!
Dafna


---
COLOUR MAP

Green = job has been passing successfully

** green for more than 3 days may suggest we need a review of our test
coverage


   1. 1-3 days     GREEN (#1)
   2. 4-7 days     GREEN (#2)
   3. Over 7 days  GREEN (#3)


Yellow = intermittent failures for different projects but no lasting or
current regressions

** intermittent would be a healthy project as we expect a number of
failures during the week

** I will not report any of the solved failures or regressions.


   1. Solved job failures  YELLOW (#1)
   2. Solved regressions   YELLOW (#2)


Red = job has been failing

** Active Failures. The colour will change based on the amount of time the
project/s has been broken. Only active regressions would be reported.


   1. 1-3 days     RED (#1)
   2. 4-7 days     RED (#2)
   3. Over 7 days  RED (#3)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/DUNGHY4OYXMIKDTXYB36E7PVXWDW3V4E/


Re: [CQ]: 101096, 3 (ovirt-engine-nodejs-modules) failed "ovirt-4.3" system tests

2019-06-24 Thread Dafna Ron
missing dpdk (this was expected and tested on a different patch).

Adding a patch to resolve this + remove the ansible workaround from the
weekend.

https://gerrit.ovirt.org/#/c/101023/

On Mon, Jun 24, 2019 at 5:06 PM oVirt Jenkins  wrote:

> Change 101096,3 (ovirt-engine-nodejs-modules) is probably the reason behind
> recent system test failures in the "ovirt-4.3" change queue and needs to be
> fixed.
>
> This change had been removed from the testing queue. Artifacts build from
> this
> change will not be released until it is fixed.
>
> For further details about the change see:
> https://gerrit.ovirt.org/#/c/101096/3
>
> For failed test results see:
> http://jenkins.ovirt.org/job/ovirt-4.3_change-queue-tester/1290/
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/PRPF4PQWH77IPBYHTMIKVLO5B3UKOWHE/
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/4IY7YJMCX2CX2IVC7JEXMPSYQZOKOLTF/


Re: [Ovirt] [CQ weekly status] [21-06-2019]

2019-06-24 Thread Dafna Ron
On Mon, Jun 24, 2019 at 4:15 PM Eyal Edri  wrote:

>
>
> On Fri, Jun 21, 2019 at 11:14 AM Dafna Ron  wrote:
>
>> This mail is to provide the current status of CQ and allow people to
>> review status before and after the weekend.
>> Please refer to below colour map for further information on the meaning
>> of the colours.
>>
>> *CQ-4.2*:  GREEN (#1)
>>
>> Last failure was on 18-06 for project v2v-conversion-host due to failed
>> build-artifacts
>> which is already fixed by next merged patches.
>>
>> *CQ-4.3*:  RED (#1)
>>
>> 1. We have a failure on ovirt-engine-metrics due to a dependency to a new
>> ansible package that has not been synced yet to centos repo.
>> I am adding virt-sig-common repo until this is synced next week:
>> https://gerrit.ovirt.org/#/c/101023/
>>
>
> It looks like its constantly failing on ovirt-ovn-provider now -
> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.3_change-queue-tester/1287/
>
>
>>
>> *CQ-Master:*  RED (#1)
>>
>> 1. same failure on ovirt-engine-metrics as 4.3
>>
>
> Is it fixed or still failing?
>
looks like metrics passed today

>
>
>> 2. ovirt-hosted-engine-setup is failing due to package dependency change
>> from python-libguestfs to python2-libguestfs. mail sent to Ido to check the
>> issue.
>>
>>
>>  Current running jobs for 4.2 [1], 4.3 [2] and master [3] can be
>> found here:
>>
>> [1]
>> http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.2_change-queue-tester/
>>
>> [2]
>> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.3_change-queue-tester/
>>
>> [3]
>> http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/
>>
>> Happy week!
>> Dafna
>>
>>
>>
>> ---
>> COLOUR MAP
>>
>> Green = job has been passing successfully
>>
>> ** green for more than 3 days may suggest we need a review of our test
>> coverage
>>
>>
>>    1. 1-3 days     GREEN (#1)
>>    2. 4-7 days     GREEN (#2)
>>    3. Over 7 days  GREEN (#3)
>>
>>
>> Yellow = intermittent failures for different projects but no lasting or
>> current regressions
>>
>> ** intermittent would be a healthy project as we expect a number of
>> failures during the week
>>
>> ** I will not report any of the solved failures or regressions.
>>
>>
>>    1. Solved job failures  YELLOW (#1)
>>    2. Solved regressions   YELLOW (#2)
>>
>>
>> Red = job has been failing
>>
>> ** Active Failures. The colour will change based on the amount of time
>> the project/s has been broken. Only active regressions would be reported.
>>
>>
>>    1. 1-3 days     RED (#1)
>>    2. 4-7 days     RED (#2)
>>    3. Over 7 days  RED (#3)
>>
>>
>
> --
>
> Eyal edri
>
> He / Him / His
>
>
> MANAGER
>
> CONTINUOUS PRODUCTIZATION
>
> SYSTEM ENGINEERING
>
> Red Hat <https://www.redhat.com/>
> <https://red.ht/sig>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ #cp-devel)
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/42BD6SQVNDHJK5GRSP3QEHXZ7E7TX4ER/


Re: [ovirt-devel] Re: Storage test case needs to be fixed: 002_bootstrap.resize_and_refresh_storage_domain

2019-06-24 Thread Dafna Ron
Hi Shani and Tal,

Can you backport this to 4.3 as well? We had a failure on 4.3 for the same
issue but not for master.

Thanks,
Dafna



On Thu, Jun 20, 2019 at 3:39 PM Dafna Ron  wrote:

> Hi Shani,
>
> Thanks for the patch
> I will monitor and in case we see any further failures will let you know.
>
> Thanks again,
> Dafna
>
>
> On Sun, Jun 16, 2019 at 3:53 PM Shani Leviim  wrote:
>
>> I've worked on this patch: https://gerrit.ovirt.org/#/c/100852/
>> Dafna, can you please try it?
>>
>>
>> *Regards,*
>>
>> *Shani Leviim*
>>
>>
>> On Fri, Jun 14, 2019 at 11:07 AM Dafna Ron  wrote:
>>
>>> and another:
>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14559/
>>>
>>> On Fri, Jun 14, 2019 at 9:05 AM Dafna Ron  wrote:
>>>
>>>> here you go:
>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14561/
>>>> I pressed the save build forever button. please press the don't keep
>>>> button once you get the info from the failure.
>>>>
>>>> Thanks,
>>>> Dafna
>>>>
>>>>
>>>> On Wed, Jun 5, 2019 at 9:09 AM Galit Rosenthal 
>>>> wrote:
>>>>
>>>>> We didn't make any changes.
>>>>> I don't think retrie will help in this case
>>>>> we are trying to catch what cause this.
>>>>>
>>>>> If you see this again please let us know.
>>>>>
>>>>> We are still debugging it
>>>>>
>>>>> On Wed, Jun 5, 2019 at 10:56 AM Dafna Ron  wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> This is a random failure - so no, I have not.
>>>>>> However, I looked at several failures and they are all the same, the
>>>>>> action on engine/vdsm side succeeds and lago repots a failure but prints 
>>>>>> a
>>>>>> success from logs.
>>>>>> Did you add anything more to the tests to allow better debugging?
>>>>>> Did you add a re-try to the test?
>>>>>>
>>>>>> Thanks,
>>>>>> Dafna
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Wed, Jun 5, 2019 at 8:21 AM Galit Rosenthal 
>>>>>> wrote:
>>>>>>
>>>>>>> Hi Dafna
>>>>>>>
>>>>>>> If you see this failure again, please send us the link to the job.
>>>>>>>
>>>>>>> We are trying to reproduce it and find the root cause.
>>>>>>>
>>>>>>> Regards,
>>>>>>> Galit
>>>>>>>
>>>>>>> On Tue, May 21, 2019 at 5:15 PM Dafna Ron  wrote:
>>>>>>>
>>>>>>>> sure
>>>>>>>>
>>>>>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14185/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-002_bootstrap.py/
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, May 21, 2019 at 3:04 PM Shani Leviim 
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Hi Dafna,
>>>>>>>>> Can you please direct me to the full test's log?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> *Regards,*
>>>>>>>>>
>>>>>>>>> *Shani Leviim*
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, May 21, 2019 at 11:57 AM Tal Nisan 
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Sure, Shani can you please have a look?
>>>>>>>>>>
>>>>>>>>>> On Mon, May 20, 2019 at 8:30 PM Dafna Ron 
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi Tal,
>>>>>>>>>>>
>>>>>>>>>>> I am seeing random failures for test
>>>>>>>>>>> 002_bootstrap.resize_and_refresh_storage_domain
>>>>>>>>>>> It looks like this is a timing issue since by the time we print
>>>>>>>>>>> out the error we actually see the resize succeeded. see example 
>>>>>>>>>>> below:
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14185/testReport/junit/(root)/002_bootstrap/running_tests___basic_suite_el7_x86_64___resize_and_refresh_storage_domain/
>>>>>>>>>>>
>>>>>>>>>>> Can you please assign someone from the storage team to fix this
>>>>>>>>>>> test?
>>>>>>>>>>>
>>>>>>>>>>> Thanks,
>>>>>>>>>>> Dafna
>>>>>>>>>>>
>>>>>>>>>>> ___
>>>>>>>> Devel mailing list -- de...@ovirt.org
>>>>>>>> To unsubscribe send an email to devel-le...@ovirt.org
>>>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>>>> oVirt Code of Conduct:
>>>>>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>>>>>> List Archives:
>>>>>>>> https://lists.ovirt.org/archives/list/de...@ovirt.org/message/P4UZJ4PWELVW2AGLVIFXKM5OTMHUHKGC/
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>>
>>>>>>> GALIT ROSENTHAL
>>>>>>>
>>>>>>> SOFTWARE ENGINEER
>>>>>>>
>>>>>>> Red Hat
>>>>>>>
>>>>>>> <https://www.redhat.com/>
>>>>>>>
>>>>>>> ga...@redhat.comT: 972-9-7692230
>>>>>>> <https://red.ht/sig>
>>>>>>>
>>>>>>
>>>>>
>>>>> --
>>>>>
>>>>> GALIT ROSENTHAL
>>>>>
>>>>> SOFTWARE ENGINEER
>>>>>
>>>>> Red Hat
>>>>>
>>>>> <https://www.redhat.com/>
>>>>>
>>>>> ga...@redhat.comT: 972-9-7692230
>>>>> <https://red.ht/sig>
>>>>>
>>>>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/LGQ7S7NLQQ22VQ44T6OLEW2EKII667K5/


[Ovirt] [CQ weekly status] [21-06-2019]

2019-06-21 Thread Dafna Ron
This mail is to provide the current status of CQ and allow people to review
status before and after the weekend.
Please refer to below colour map for further information on the meaning of
the colours.

*CQ-4.2*:  GREEN (#1)

Last failure was on 18-06 for project v2v-conversion-host due to failed
build-artifacts
which is already fixed by next merged patches.

*CQ-4.3*:  RED (#1)

1. We have a failure on ovirt-engine-metrics due to a dependency to a new
ansible package that has not been synced yet to centos repo.
I am adding virt-sig-common repo until this is synced next week:
https://gerrit.ovirt.org/#/c/101023/

*CQ-Master:*  RED (#1)

1. same failure on ovirt-engine-metrics as 4.3
2. ovirt-hosted-engine-setup is failing due to package dependency change
from python-libguestfs to python2-libguestfs. mail sent to Ido to check the
issue.


 Current running jobs for 4.2 [1], 4.3 [2] and master [3] can be found
here:

[1]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.2_change-queue-tester/

[2]
https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.3_change-queue-tester/

[3]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/

Happy week!
Dafna


---
COLOUR MAP

Green = job has been passing successfully

** green for more than 3 days may suggest we need a review of our test
coverage


   1. 1-3 days     GREEN (#1)
   2. 4-7 days     GREEN (#2)
   3. Over 7 days  GREEN (#3)


Yellow = intermittent failures for different projects but no lasting or
current regressions

** intermittent would be a healthy project as we expect a number of
failures during the week

** I will not report any of the solved failures or regressions.


   1. Solved job failures  YELLOW (#1)
   2. Solved regressions   YELLOW (#2)


Red = job has been failing

** Active Failures. The colour will change based on the amount of time the
project/s has been broken. Only active regressions would be reported.


   1. 1-3 days     RED (#1)
   2. 4-7 days     RED (#2)
   3. Over 7 days  RED (#3)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/FFCX4RLOHSNC4DHOEUDOML2LUCOTMXQA/


Re: [CQ]: 100968, 2 (ovirt-hosted-engine-setup) failed "ovirt-master" system tests, but isn't the failure root cause

2019-06-21 Thread Dafna Ron
Hi Ido,

Can you please check your patch again? We seem to have a package dependency
issue with the old package name you are changing:

09:56:35  Error: Package: ovirt-hosted-engine-setup-2.4.0-0.0.master.20190620134616.git2057a59.el7.noarch (alocalsync)
09:56:35             Requires: python-libguestfs
09:56:35
09:56:35    - STDERR
09:56:35   + yum -y install ovirt-host
09:56:35  Error: Package: ovirt-hosted-engine-setup-2.4.0-0.0.master.20190620134616.git2057a59.el7.noarch (alocalsync)
09:56:35             Requires: python-libguestfs
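
The error suggests the spec still requires the old python-libguestfs name while the repos now ship the renamed python2-libguestfs package. A rough sketch to confirm what actually provides each name (run against the same repo set the suite uses):

  repoquery --whatprovides 'python-libguestfs'
  repoquery --whatprovides 'python2-libguestfs'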


On Fri, Jun 21, 2019 at 11:22 AM oVirt Jenkins  wrote:

> A system test invoked by the "ovirt-master" change queue including change
> 100968,2 (ovirt-hosted-engine-setup) failed. However, this change seems
> not to
> be the root cause for this failure. Change 100883,1
> (ovirt-hosted-engine-setup)
> that this change depends on or is based on, was detected as the cause of
> the
> testing failures.
>
> This change had been removed from the testing queue. Artifacts built from
> this
> change will not be released until either change 100883,1
> (ovirt-hosted-engine-
> setup) is fixed and this change is updated to refer to or rebased on the
> fixed
> version, or this change is modified to no longer depend on it.
>
> For further details about the change see:
> https://gerrit.ovirt.org/#/c/100968/2
>
> For further details about the change that seems to be the root cause
> behind the
> testing failures see:
> https://gerrit.ovirt.org/#/c/100883/1
>
> For failed test results see:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14665/
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/3667YS2TQ3R5YQK47JZCIQAV6G3P7LXK/
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/BEBDMGL4EGJIDN72H6OXNKIHDVBWSHVB/


Re: [CQ]: 100969, 1 (ovirt-engine-metrics) failed "ovirt-4.3" system tests

2019-06-21 Thread Dafna Ron
new ansible has not been synced yet to extras:

[dron@dron ovirt-system-tests]$ repoquery --repofrompath=testrepo,
http://mirror.centos.org/centos/7/extras/x86_64/ --repoid=testrepo --query
--all |grep ansible
Repository google-chrome is listed more than once in the configuration
ansible-0:2.4.2.0-2.el7.noarch
ansible-doc-0:2.4.2.0-2.el7.noarch
centos-release-ansible26-0:1-3.el7.centos.noarch


On Fri, Jun 21, 2019 at 10:13 AM oVirt Jenkins  wrote:

> Change 100969,1 (ovirt-engine-metrics) is probably the reason behind recent
> system test failures in the "ovirt-4.3" change queue and needs to be fixed.
>
> This change had been removed from the testing queue. Artifacts build from
> this
> change will not be released until it is fixed.
>
> For further details about the change see:
> https://gerrit.ovirt.org/#/c/100969/1
>
> For failed test results see:
> http://jenkins.ovirt.org/job/ovirt-4.3_change-queue-tester/1245/
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/SYOSTQFHYCHFPJ742TZQPTQQ6AORCB5M/
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/ZWYZS2HILDTSVZSL3IWHZQP5YZ62RUAM/


Re: [CQ]: 100778, 6 (vdsm) failed "ovirt-master" system tests, but isn't the failure root cause

2019-06-21 Thread Dafna Ron
The build-artifacts jobs are the developer's responsibility and are not related
to CQ.
It does not actually matter though why the build-artifacts failed, as there
are passing builds after this failure, which means the project has a
passing package to be tested by CQ that includes the changes that came
before it.
We do not have a re-try on failures, as a failure has a specific reason that
needs to be looked at.
You can look at the chart that I sent to the list and see that if
build-artifacts fail I will check the status and, if needed, will alert the
developer to look into it.

On Fri, Jun 21, 2019 at 12:01 AM Nir Soffer  wrote:

> On Wed, Jun 19, 2019 at 7:03 PM Dafna Ron  wrote:
>
>> this was a failed build-artifacts.
>> its now fixed as there are several passing builds after this one
>>
>
> Why build-artifacts failed?
>
> Do we have a retry on failures?
>
>
>> On Wed, Jun 19, 2019 at 3:16 AM oVirt Jenkins  wrote:
>>
>>> A system test invoked by the "ovirt-master" change queue including change
>>> 100778,6 (vdsm) failed. However, this change seems not to be the root
>>> cause for
>>> this failure. Change 100783,7 (vdsm) that this change depends on or is
>>> based
>>> on, was detected as the cause of the testing failures.
>>>
>>> This change had been removed from the testing queue. Artifacts built
>>> from this
>>> change will not be released until either change 100783,7 (vdsm) is fixed
>>> and
>>> this change is updated to refer to or rebased on the fixed version, or
>>> this
>>> change is modified to no longer depend on it.
>>>
>>> For further details about the change see:
>>> https://gerrit.ovirt.org/#/c/100778/6
>>>
>>> For further details about the change that seems to be the root cause
>>> behind the
>>> testing failures see:
>>> https://gerrit.ovirt.org/#/c/100783/7
>>>
>>> For failed test results see:
>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14624/
>>> ___
>>> Infra mailing list -- infra@ovirt.org
>>> To unsubscribe send an email to infra-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/CHXVUBUQZO3OS2OQ2FWEGSFB2KS7F33V/
>>>
>> ___
>> Infra mailing list -- infra@ovirt.org
>> To unsubscribe send an email to infra-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/FNSPEESXJOPRYBRKKABJ3PUGILZ2R53X/
>>
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/YUWHSZH7ZF5I5JIYKL25DITGDBHATR2R/


Re: [ovirt-devel] Re: Storage test case needs to be fixed: 002_bootstrap.resize_and_refresh_storage_domain

2019-06-20 Thread Dafna Ron
Hi Shani,

Thanks for the patch
I will monitor and in case we see any further failures will let you know.

Thanks again,
Dafna


On Sun, Jun 16, 2019 at 3:53 PM Shani Leviim  wrote:

> I've worked on this patch: https://gerrit.ovirt.org/#/c/100852/
> Dafna, can you please try it?
>
>
> *Regards,*
>
> *Shani Leviim*
>
>
> On Fri, Jun 14, 2019 at 11:07 AM Dafna Ron  wrote:
>
>> and another:
>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14559/
>>
>> On Fri, Jun 14, 2019 at 9:05 AM Dafna Ron  wrote:
>>
>>> here you go:
>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14561/
>>> I pressed the save build forever button. please press the don't keep
>>> button once you get the info from the failure.
>>>
>>> Thanks,
>>> Dafna
>>>
>>>
>>> On Wed, Jun 5, 2019 at 9:09 AM Galit Rosenthal 
>>> wrote:
>>>
>>>> We didn't make any changes.
>>>> I don't think retrie will help in this case
>>>> we are trying to catch what cause this.
>>>>
>>>> If you see this again please let us know.
>>>>
>>>> We are still debugging it
>>>>
>>>> On Wed, Jun 5, 2019 at 10:56 AM Dafna Ron  wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> This is a random failure - so no, I have not.
>>>>> However, I looked at several failures and they are all the same, the
>>>>> action on engine/vdsm side succeeds and lago repots a failure but prints a
>>>>> success from logs.
>>>>> Did you add anything more to the tests to allow better debugging?
>>>>> Did you add a re-try to the test?
>>>>>
>>>>> Thanks,
>>>>> Dafna
>>>>>
>>>>>
>>>>>
>>>>> On Wed, Jun 5, 2019 at 8:21 AM Galit Rosenthal 
>>>>> wrote:
>>>>>
>>>>>> Hi Dafna
>>>>>>
>>>>>> If you see this failure again, please send us the link to the job.
>>>>>>
>>>>>> We are trying to reproduce it and find the root cause.
>>>>>>
>>>>>> Regards,
>>>>>> Galit
>>>>>>
>>>>>> On Tue, May 21, 2019 at 5:15 PM Dafna Ron  wrote:
>>>>>>
>>>>>>> sure
>>>>>>>
>>>>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14185/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-002_bootstrap.py/
>>>>>>>
>>>>>>>
>>>>>>> On Tue, May 21, 2019 at 3:04 PM Shani Leviim 
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hi Dafna,
>>>>>>>> Can you please direct me to the full test's log?
>>>>>>>>
>>>>>>>>
>>>>>>>> *Regards,*
>>>>>>>>
>>>>>>>> *Shani Leviim*
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, May 21, 2019 at 11:57 AM Tal Nisan 
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Sure, Shani can you please have a look?
>>>>>>>>>
>>>>>>>>> On Mon, May 20, 2019 at 8:30 PM Dafna Ron  wrote:
>>>>>>>>>
>>>>>>>>>> Hi Tal,
>>>>>>>>>>
>>>>>>>>>> I am seeing random failures for test
>>>>>>>>>> 002_bootstrap.resize_and_refresh_storage_domain
>>>>>>>>>> It looks like this is a timing issue since by the time we print
>>>>>>>>>> out the error we actually see the resize succeeded. see example 
>>>>>>>>>> below:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14185/testReport/junit/(root)/002_bootstrap/running_tests___basic_suite_el7_x86_64___resize_and_refresh_storage_domain/
>>>>>>>>>>
>>>>>>>>>> Can you please assign someone from the storage team to fix this
>>>>>>>>>> test?
>>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>> Dafna
>>>>>>>>>>
>>>>>>>>>> ___
>>>>>>> Devel mailing list -- de...@ovirt.org
>>>>>>> To unsubscribe send an email to devel-le...@ovirt.org
>>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>>> oVirt Code of Conduct:
>>>>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>>>>> List Archives:
>>>>>>> https://lists.ovirt.org/archives/list/de...@ovirt.org/message/P4UZJ4PWELVW2AGLVIFXKM5OTMHUHKGC/
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>>
>>>>>> GALIT ROSENTHAL
>>>>>>
>>>>>> SOFTWARE ENGINEER
>>>>>>
>>>>>> Red Hat
>>>>>>
>>>>>> <https://www.redhat.com/>
>>>>>>
>>>>>> ga...@redhat.comT: 972-9-7692230
>>>>>> <https://red.ht/sig>
>>>>>>
>>>>>
>>>>
>>>> --
>>>>
>>>> GALIT ROSENTHAL
>>>>
>>>> SOFTWARE ENGINEER
>>>>
>>>> Red Hat
>>>>
>>>> <https://www.redhat.com/>
>>>>
>>>> ga...@redhat.comT: 972-9-7692230
>>>> <https://red.ht/sig>
>>>>
>>>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/NSFMZ3SFQPQCL2T34ZYSU2CXSPSHBXXR/


Re: [CQ]: 100969, 1 (ovirt-engine-metrics) failed "ovirt-4.3" system tests

2019-06-20 Thread Dafna Ron
We are taking the old ansible package instead of the new one.
Re-running the tool and will create a patch.

On Thu, Jun 20, 2019 at 12:40 PM oVirt Jenkins  wrote:

> Change 100969,1 (ovirt-engine-metrics) is probably the reason behind recent
> system test failures in the "ovirt-4.3" change queue and needs to be fixed.
>
> This change had been removed from the testing queue. Artifacts build from
> this
> change will not be released until it is fixed.
>
> For further details about the change see:
> https://gerrit.ovirt.org/#/c/100969/1
>
> For failed test results see:
> http://jenkins.ovirt.org/job/ovirt-4.3_change-queue-tester/1240/
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/Z52CYJSOOJN4YS6QNZ6ACKYUBFSAT74F/
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/LKSLQFEM7RVYDEFRHJFXTCKDM2442PYC/


Re: [CQ]: 087bbdf (v2v-conversion-host) failed "ovirt-master" system tests, but isn't the failure root cause

2019-06-19 Thread Dafna Ron
The failure is due to a failed build-artifacts job, and there are now passing
builds after it.


On Tue, Jun 18, 2019 at 11:19 PM oVirt Jenkins  wrote:

> A system test invoked by the "ovirt-master" change queue including change
> 087bbdf (v2v-conversion-host) failed. However, this change seems not to be
> the
> root cause for this failure. Change 8dd62a4 (v2v-conversion-host) that this
> change depends on or is based on, was detected as the cause of the testing
> failures.
>
> This change had been removed from the testing queue. Artifacts built from
> this
> change will not be released until either change 8dd62a4
> (v2v-conversion-host)
> is fixed and this change is updated to refer to or rebased on the fixed
> version, or this change is modified to no longer depend on it.
>
> For further details about the change see:
>
> https://github.com/oVirt/v2v-conversion-host/commit/087bbdf3b804c1299edac1d1db5afe6207732a68
>
> For further details about the change that seems to be the root cause
> behind the
> testing failures see:
>
> https://github.com/oVirt/v2v-conversion-host/commit/8dd62a4ba67acf89e10b09d3c122413fe5592a11
>
> For failed test results see:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14613/
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/RFYL4K4GEXKG72LBY5WPME3I4QV5HVQO/
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/AEBQRJOCFXDOSBFVWB73SDI4MCNE533V/


Re: [CQ]: 100778, 6 (vdsm) failed "ovirt-master" system tests, but isn't the failure root cause

2019-06-19 Thread Dafna Ron
This was a failed build-artifacts job.
It's now fixed, as there are several passing builds after this one.

On Wed, Jun 19, 2019 at 3:16 AM oVirt Jenkins  wrote:

> A system test invoked by the "ovirt-master" change queue including change
> 100778,6 (vdsm) failed. However, this change seems not to be the root
> cause for
> this failure. Change 100783,7 (vdsm) that this change depends on or is
> based
> on, was detected as the cause of the testing failures.
>
> This change had been removed from the testing queue. Artifacts built from
> this
> change will not be released until either change 100783,7 (vdsm) is fixed
> and
> this change is updated to refer to or rebased on the fixed version, or this
> change is modified to no longer depend on it.
>
> For further details about the change see:
> https://gerrit.ovirt.org/#/c/100778/6
>
> For further details about the change that seems to be the root cause
> behind the
> testing failures see:
> https://gerrit.ovirt.org/#/c/100783/7
>
> For failed test results see:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14624/
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/CHXVUBUQZO3OS2OQ2FWEGSFB2KS7F33V/
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/FNSPEESXJOPRYBRKKABJ3PUGILZ2R53X/


Fwd: [CQ]: 100863,1 (ovirt-appliance) failed "ovirt-4.3" system tests

2019-06-17 Thread Dafna Ron
Hi,

It seems this job died waiting for an executor due to a Jenkins restart

Resuming build at Mon Jun 17 10:16:34 UTC 2019 after Jenkins restart
Waiting to resume part of ovirt-appliance_standard-on-merge #35
ovirt-appliance [check-merged]: Finished waiting
Waiting to resume part of ovirt-appliance_standard-on-merge #35
ovirt-appliance [check-merged]: Finished waiting
Waiting to resume part of ovirt-appliance_standard-on-merge #35
ovirt-appliance [check-merged]: There are no nodes with the label
‘loader-container-6jlvz’
Waiting to resume part of ovirt-appliance_standard-on-merge #35
ovirt-appliance [check-merged]: There are no nodes with the label
‘loader-container-6jlvz’
Waiting to resume part of ovirt-appliance_standard-on-merge #35
ovirt-appliance [check-merged]: There are no nodes with the label
‘loader-container-6jlvz’
Waiting to resume part of ovirt-appliance_standard-on-merge #35
ovirt-appliance [check-merged]: There are no nodes with the label
‘loader-container-6jlvz’
Waiting to resume part of ovirt-appliance_standard-on-merge #35
ovirt-appliance [check-merged]: There are no nodes with the label
‘loader-container-6jlvz’
Waiting to resume part of ovirt-appliance_standard-on-merge #35
ovirt-appliance [check-merged]: There are no nodes with the label
‘loader-container-6jlvz’
Waiting to resume part of ovirt-appliance_standard-on-merge #35
ovirt-appliance [check-merged]: There are no nodes with the label
‘loader-container-6jlvz’
Waiting to resume part of ovirt-appliance_standard-on-merge #35
ovirt-appliance [check-merged]: There are no nodes with the label
‘loader-container-6jlvz’
Waiting to resume part of ovirt-appliance_standard-on-merge #35
ovirt-appliance [check-merged]: There are no nodes with the label
‘loader-container-6jlvz’
Waiting to resume part of ovirt-appliance_standard-on-merge #35
ovirt-appliance [check-merged]: There are no nodes with the label
‘loader-container-6jlvz’
Waiting to resume part of ovirt-appliance_standard-on-merge #35
ovirt-appliance [check-merged]: There are no nodes with the label
‘loader-container-6jlvz’
Waiting to resume part of ovirt-appliance_standard-on-merge #35
ovirt-appliance [check-merged]: There are no nodes with the label
‘loader-container-6jlvz’
[Pipeline] End of Pipeline
ERROR: Killed hudson.model.Queue$BuildableItem:ExecutorStepExecution.PlaceholderTask{runId=ovirt-appliance_standard-on-merge#35,label=loader-container-6jlvz,context=CpsStepContext[5:node]:Owner[ovirt-appliance_standard-on-merge/35:ovirt-appliance_standard-on-merge #35],cookie=13b7e39d-6fbe-4bdc-b706-e50526c79848,auth=null}:482356 after waiting for 300,000 ms because we assume unknown Node loader-container-6jlvz is never going to appear!
Finished: FAILURE





-- Forwarded message -
From: oVirt Jenkins 
Date: Mon, Jun 17, 2019 at 2:07 PM
Subject: [CQ]: 100863,1 (ovirt-appliance) failed "ovirt-4.3" system tests
To: 


Change 100863,1 (ovirt-appliance) is probably the reason behind recent
system
test failures in the "ovirt-4.3" change queue and needs to be fixed.

This change had been removed from the testing queue. Artifacts build from
this
change will not be released until it is fixed.

For further details about the change see:
https://gerrit.ovirt.org/#/c/100863/1

For failed test results see:
http://jenkins.ovirt.org/job/ovirt-4.3_change-queue-tester/1205/
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/YSNAENCQ7QHTFOODUTN4KERBY7OOIT4U/
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/3AI6T57JLOVOVHZX3K4BA2UQJXJRPET3/


Re: [ovirt-devel] Re: [urgent] vdsm broken since Friday (failing CQ)

2019-06-14 Thread Dafna Ron
thanks Martin!
we have a passing build on 4.3 as well
thanks for all the help :)
Dafna


On Thu, Jun 13, 2019 at 2:34 PM Martin Perina  wrote:

>
>
> On Thu, Jun 13, 2019 at 2:32 PM Dafna Ron  wrote:
>
>> will monitor new change once merged :)
>>
>
>> On Thu, Jun 13, 2019 at 1:23 PM Martin Perina  wrote:
>>
>>>
>>>
>>> On Thu, Jun 13, 2019 at 2:02 PM Dafna Ron  wrote:
>>>
>>>> 4.3 is still failing.
>>>>
>>>> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.3_change-queue-tester/1172/
>>>> Martin, can you apply the change to 4.3 tests as well?
>>>>
>>>
>>> Arghhh.
>>>
>>> I looked if master and 4.3 shares the same file when I started to work
>>> on the fix and they were the same.
>>> But at the same day I posted orignal patch, master and 4.3 were split :-(
>>>
>>> So hopefully here's the final fix: https://gerrit.ovirt.org/100809
>>>
>>
> Just merged the patch, manual OST verification was successful
>
>>
>>>
>>>> On Thu, Jun 13, 2019 at 10:33 AM Dafna Ron  wrote:
>>>>
>>>>> We have a passing build on master
>>>>> waiting for 4.3
>>>>>
>>>>> On Thu, Jun 13, 2019 at 9:01 AM Dafna Ron  wrote:
>>>>>
>>>>>> Thanks Shirly and Martin.
>>>>>> I see the patch was merged so monitoring and will update
>>>>>>
>>>>>> On Wed, Jun 12, 2019 at 4:09 PM Martin Perina 
>>>>>> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Wed, Jun 12, 2019 at 11:53 AM Dafna Ron  wrote:
>>>>>>>
>>>>>>>> latest tests are still failing on log collector
>>>>>>>>
>>>>>>>
>>>>>>> I had a typo in my patch, here is the fix:
>>>>>>> https://gerrit.ovirt.org/#/c/100784/
>>>>>>> Currently trying to verify using manual OST to make sure latest VDSM
>>>>>>> is used (and not latest tested as in check-patch executed from the fix) 
>>>>>>> ...
>>>>>>>
>>>>>>>
>>>>>>>> On Wed, Jun 12, 2019 at 10:18 AM Dafna Ron  wrote:
>>>>>>>>
>>>>>>>>> thanks Martin.
>>>>>>>>> we are failing on missing packages so I have a patch to fix it.
>>>>>>>>> I will update once we have a vdsm build
>>>>>>>>>
>>>>>>>>> On Tue, Jun 11, 2019 at 10:08 PM Martin Perina 
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Tue, Jun 11, 2019 at 7:41 PM Martin Perina 
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Tue, Jun 11, 2019 at 6:53 PM Dafna Ron 
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Galit's patch should have solved it.
>>>>>>>>>>>> Marcin, are you still failing?
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> I've just rebased https://gerrit.ovirt.org/100716 on top of
>>>>>>>>>>> Galit's change so we should know within an hour or so ...
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> CI finished successfully, please review so we can merge ...
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Tue, Jun 11, 2019 at 2:40 PM Eyal Edri 
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Tue, Jun 11, 2019 at 4:35 PM Marcin Sobczyk <
>>>>>>>>>>>>> msobc...@redhat.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>&

Re: [ovirt-devel] Re: Storage test case needs to be fixed: 002_bootstrap.resize_and_refresh_storage_domain

2019-06-14 Thread Dafna Ron
and another:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14559/

On Fri, Jun 14, 2019 at 9:05 AM Dafna Ron  wrote:

> here you go:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14561/
> I pressed the save build forever button. please press the don't keep
> button once you get the info from the failure.
>
> Thanks,
> Dafna
>
>
> On Wed, Jun 5, 2019 at 9:09 AM Galit Rosenthal 
> wrote:
>
>> We didn't make any changes.
>> I don't think retrie will help in this case
>> we are trying to catch what cause this.
>>
>> If you see this again please let us know.
>>
>> We are still debugging it
>>
>> On Wed, Jun 5, 2019 at 10:56 AM Dafna Ron  wrote:
>>
>>> Hi,
>>>
>>> This is a random failure - so no, I have not.
>>> However, I looked at several failures and they are all the same, the
>>> action on engine/vdsm side succeeds and lago repots a failure but prints a
>>> success from logs.
>>> Did you add anything more to the tests to allow better debugging?
>>> Did you add a re-try to the test?
>>>
>>> Thanks,
>>> Dafna
>>>
>>>
>>>
>>> On Wed, Jun 5, 2019 at 8:21 AM Galit Rosenthal 
>>> wrote:
>>>
>>>> Hi Dafna
>>>>
>>>> If you see this failure again, please send us the link to the job.
>>>>
>>>> We are trying to reproduce it and find the root cause.
>>>>
>>>> Regards,
>>>> Galit
>>>>
>>>> On Tue, May 21, 2019 at 5:15 PM Dafna Ron  wrote:
>>>>
>>>>> sure
>>>>>
>>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14185/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-002_bootstrap.py/
>>>>>
>>>>>
>>>>> On Tue, May 21, 2019 at 3:04 PM Shani Leviim 
>>>>> wrote:
>>>>>
>>>>>> Hi Dafna,
>>>>>> Can you please direct me to the full test's log?
>>>>>>
>>>>>>
>>>>>> *Regards,*
>>>>>>
>>>>>> *Shani Leviim*
>>>>>>
>>>>>>
>>>>>> On Tue, May 21, 2019 at 11:57 AM Tal Nisan  wrote:
>>>>>>
>>>>>>> Sure, Shani can you please have a look?
>>>>>>>
>>>>>>> On Mon, May 20, 2019 at 8:30 PM Dafna Ron  wrote:
>>>>>>>
>>>>>>>> Hi Tal,
>>>>>>>>
>>>>>>>> I am seeing random failures for test
>>>>>>>> 002_bootstrap.resize_and_refresh_storage_domain
>>>>>>>> It looks like this is a timing issue since by the time we print out
>>>>>>>> the error we actually see the resize succeeded. see example below:
>>>>>>>>
>>>>>>>>
>>>>>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14185/testReport/junit/(root)/002_bootstrap/running_tests___basic_suite_el7_x86_64___resize_and_refresh_storage_domain/
>>>>>>>>
>>>>>>>> Can you please assign someone from the storage team to fix this
>>>>>>>> test?
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Dafna
>>>>>>>>
>>>>>>>> ___
>>>>> Devel mailing list -- de...@ovirt.org
>>>>> To unsubscribe send an email to devel-le...@ovirt.org
>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>> oVirt Code of Conduct:
>>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>>> List Archives:
>>>>> https://lists.ovirt.org/archives/list/de...@ovirt.org/message/P4UZJ4PWELVW2AGLVIFXKM5OTMHUHKGC/
>>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> GALIT ROSENTHAL
>>>>
>>>> SOFTWARE ENGINEER
>>>>
>>>> Red Hat
>>>>
>>>> <https://www.redhat.com/>
>>>>
>>>> ga...@redhat.comT: 972-9-7692230
>>>> <https://red.ht/sig>
>>>>
>>>
>>
>> --
>>
>> GALIT ROSENTHAL
>>
>> SOFTWARE ENGINEER
>>
>> Red Hat
>>
>> <https://www.redhat.com/>
>>
>> ga...@redhat.comT: 972-9-7692230
>> <https://red.ht/sig>
>>
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/H2CWSHM5CZQJTTOFFQVSHJCIQHONUEI4/


Re: [ovirt-devel] Re: Storage test case needs to be fixed: 002_bootstrap.resize_and_refresh_storage_domain

2019-06-14 Thread Dafna Ron
here you go:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14561/
I pressed the save build forever button. please press the don't keep button
once you get the info from the failure.

Thanks,
Dafna


On Wed, Jun 5, 2019 at 9:09 AM Galit Rosenthal  wrote:

> We didn't make any changes.
> I don't think retrie will help in this case
> we are trying to catch what cause this.
>
> If you see this again please let us know.
>
> We are still debugging it
>
> On Wed, Jun 5, 2019 at 10:56 AM Dafna Ron  wrote:
>
>> Hi,
>>
>> This is a random failure - so no, I have not.
>> However, I looked at several failures and they are all the same, the
>> action on engine/vdsm side succeeds and lago repots a failure but prints a
>> success from logs.
>> Did you add anything more to the tests to allow better debugging?
>> Did you add a re-try to the test?
>>
>> Thanks,
>> Dafna
>>
>>
>>
>> On Wed, Jun 5, 2019 at 8:21 AM Galit Rosenthal 
>> wrote:
>>
>>> Hi Dafna
>>>
>>> If you see this failure again, please send us the link to the job.
>>>
>>> We are trying to reproduce it and find the root cause.
>>>
>>> Regards,
>>> Galit
>>>
>>> On Tue, May 21, 2019 at 5:15 PM Dafna Ron  wrote:
>>>
>>>> sure
>>>>
>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14185/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-002_bootstrap.py/
>>>>
>>>>
>>>> On Tue, May 21, 2019 at 3:04 PM Shani Leviim 
>>>> wrote:
>>>>
>>>>> Hi Dafna,
>>>>> Can you please direct me to the full test's log?
>>>>>
>>>>>
>>>>> *Regards,*
>>>>>
>>>>> *Shani Leviim*
>>>>>
>>>>>
>>>>> On Tue, May 21, 2019 at 11:57 AM Tal Nisan  wrote:
>>>>>
>>>>>> Sure, Shani can you please have a look?
>>>>>>
>>>>>> On Mon, May 20, 2019 at 8:30 PM Dafna Ron  wrote:
>>>>>>
>>>>>>> Hi Tal,
>>>>>>>
>>>>>>> I am seeing random failures for test
>>>>>>> 002_bootstrap.resize_and_refresh_storage_domain
>>>>>>> It looks like this is a timing issue since by the time we print out
>>>>>>> the error we actually see the resize succeeded. see example below:
>>>>>>>
>>>>>>>
>>>>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14185/testReport/junit/(root)/002_bootstrap/running_tests___basic_suite_el7_x86_64___resize_and_refresh_storage_domain/
>>>>>>>
>>>>>>> Can you please assign someone from the storage team to fix this
>>>>>>> test?
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Dafna
>>>>>>>
>>>>>>> ___
>>>> Devel mailing list -- de...@ovirt.org
>>>> To unsubscribe send an email to devel-le...@ovirt.org
>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>> oVirt Code of Conduct:
>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>> List Archives:
>>>> https://lists.ovirt.org/archives/list/de...@ovirt.org/message/P4UZJ4PWELVW2AGLVIFXKM5OTMHUHKGC/
>>>>
>>>
>>>
>>> --
>>>
>>> GALIT ROSENTHAL
>>>
>>> SOFTWARE ENGINEER
>>>
>>> Red Hat
>>>
>>> <https://www.redhat.com/>
>>>
>>> ga...@redhat.comT: 972-9-7692230
>>> <https://red.ht/sig>
>>>
>>
>
> --
>
> GALIT ROSENTHAL
>
> SOFTWARE ENGINEER
>
> Red Hat
>
> <https://www.redhat.com/>
>
> ga...@redhat.comT: 972-9-7692230
> <https://red.ht/sig>
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/APZMFLVSCAK2SFMCSJWD3BIXQDY5EYPX/


Re: [ovirt-devel] Re: [urgent] vdsm broken since Friday (failing CQ)

2019-06-13 Thread Dafna Ron
will monitor new change once merged :)

On Thu, Jun 13, 2019 at 1:23 PM Martin Perina  wrote:

>
>
> On Thu, Jun 13, 2019 at 2:02 PM Dafna Ron  wrote:
>
>> 4.3 is still failing.
>>
>> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.3_change-queue-tester/1172/
>> Martin, can you apply the change to 4.3 tests as well?
>>
>
> Arghhh.
>
> I looked if master and 4.3 shares the same file when I started to work on
> the fix and they were the same.
> But at the same day I posted orignal patch, master and 4.3 were split :-(
>
> So hopefully here's the final fix: https://gerrit.ovirt.org/100809
>
>
>> On Thu, Jun 13, 2019 at 10:33 AM Dafna Ron  wrote:
>>
>>> We have a passing build on master
>>> waiting for 4.3
>>>
>>> On Thu, Jun 13, 2019 at 9:01 AM Dafna Ron  wrote:
>>>
>>>> Thanks Shirly and Martin.
>>>> I see the patch was merged so monitoring and will update
>>>>
>>>> On Wed, Jun 12, 2019 at 4:09 PM Martin Perina 
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Wed, Jun 12, 2019 at 11:53 AM Dafna Ron  wrote:
>>>>>
>>>>>> latest tests are still failing on log collector
>>>>>>
>>>>>
>>>>> I had a typo in my patch, here is the fix:
>>>>> https://gerrit.ovirt.org/#/c/100784/
>>>>> Currently trying to verify using manual OST to make sure latest VDSM
>>>>> is used (and not latest tested as in check-patch executed from the fix) 
>>>>> ...
>>>>>
>>>>>
>>>>>> On Wed, Jun 12, 2019 at 10:18 AM Dafna Ron  wrote:
>>>>>>
>>>>>>> thanks Martin.
>>>>>>> we are failing on missing packages so I have a patch to fix it.
>>>>>>> I will update once we have a vdsm build
>>>>>>>
>>>>>>> On Tue, Jun 11, 2019 at 10:08 PM Martin Perina 
>>>>>>> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, Jun 11, 2019 at 7:41 PM Martin Perina 
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, Jun 11, 2019 at 6:53 PM Dafna Ron  wrote:
>>>>>>>>>
>>>>>>>>>> Galit's patch should have solved it.
>>>>>>>>>> Marcin, are you still failing?
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> I've just rebased https://gerrit.ovirt.org/100716 on top of
>>>>>>>>> Galit's change so we should know within an hour or so ...
>>>>>>>>>
>>>>>>>>
>>>>>>>> CI finished successfully, please review so we can merge ...
>>>>>>>>
>>>>>>>>
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Tue, Jun 11, 2019 at 2:40 PM Eyal Edri 
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Tue, Jun 11, 2019 at 4:35 PM Marcin Sobczyk <
>>>>>>>>>>> msobc...@redhat.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hi,
>>>>>>>>>>>>
>>>>>>>>>>>> basic suite fails for the patch with the following log:
>>>>>>>>>>>>
>>>>>>>>>>>>  
>>>>>>>>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-769>[2019-06-11T11:34:26.175Z]
>>>>>>>>>>>>- STDERR
>>>>>>>>>>>>  
>>>>>>>>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-770>[2019-06-11T11:34:26.175Z]
>>>>>>>>>>>>  + yum -y install ovirt-host
>>>>>>>>>>>>  

Re: [ovirt-devel] Re: [urgent] vdsm broken since Friday (failing CQ)

2019-06-13 Thread Dafna Ron
4.3 is still failing.
https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.3_change-queue-tester/1172/
Martin, can you apply the change to 4.3 tests as well?

On Thu, Jun 13, 2019 at 10:33 AM Dafna Ron  wrote:

> We have a passing build on master
> waiting for 4.3
>
> On Thu, Jun 13, 2019 at 9:01 AM Dafna Ron  wrote:
>
>> Thanks Shirly and Martin.
>> I see the patch was merged so monitoring and will update
>>
>> On Wed, Jun 12, 2019 at 4:09 PM Martin Perina  wrote:
>>
>>>
>>>
>>> On Wed, Jun 12, 2019 at 11:53 AM Dafna Ron  wrote:
>>>
>>>> latest tests are still failing on log collector
>>>>
>>>
>>> I had a typo in my patch, here is the fix:
>>> https://gerrit.ovirt.org/#/c/100784/
>>> Currently trying to verify using manual OST to make sure latest VDSM is
>>> used (and not latest tested as in check-patch executed from the fix) ...
>>>
>>>
>>>> On Wed, Jun 12, 2019 at 10:18 AM Dafna Ron  wrote:
>>>>
>>>>> thanks Martin.
>>>>> we are failing on missing packages so I have a patch to fix it.
>>>>> I will update once we have a vdsm build
>>>>>
>>>>> On Tue, Jun 11, 2019 at 10:08 PM Martin Perina 
>>>>> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Jun 11, 2019 at 7:41 PM Martin Perina 
>>>>>> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Tue, Jun 11, 2019 at 6:53 PM Dafna Ron  wrote:
>>>>>>>
>>>>>>>> Galit's patch should have solved it.
>>>>>>>> Marcin, are you still failing?
>>>>>>>>
>>>>>>>
>>>>>>> I've just rebased https://gerrit.ovirt.org/100716 on top of Galit's
>>>>>>> change so we should know within an hour or so ...
>>>>>>>
>>>>>>
>>>>>> CI finished successfully, please review so we can merge ...
>>>>>>
>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, Jun 11, 2019 at 2:40 PM Eyal Edri  wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, Jun 11, 2019 at 4:35 PM Marcin Sobczyk <
>>>>>>>>> msobc...@redhat.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hi,
>>>>>>>>>>
>>>>>>>>>> basic suite fails for the patch with the following log:
>>>>>>>>>>
>>>>>>>>>>  
>>>>>>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-769>[2019-06-11T11:34:26.175Z]
>>>>>>>>>>- STDERR
>>>>>>>>>>  
>>>>>>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-770>[2019-06-11T11:34:26.175Z]
>>>>>>>>>>  + yum -y install ovirt-host
>>>>>>>>>>  
>>>>>>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-771>[2019-06-11T11:34:26.175Z]
>>>>>>>>>>  Error: Package: 32:bind-libs-9.9.4-74.el7_6.1.x86_64 (alocalsync)
>>>>>>>>>>  
>>>>>>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-772>[2019-06-11T11:34:26.175Z]
>>>>>>>>>> Requires: bind-license = 32:9.9.4-74.el7_6.1
>>>>>>>>>>  
>>>>>>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-773>[2019-06-11T11:34:26.175Z]
>>>>>>>>>> Installed: 32:bind-license-9.9.4-73.el7_6.noarch 
>>>>>>>>>> (installed)

Re: [ovirt-devel] Re: [urgent] vdsm broken since Friday (failing CQ)

2019-06-13 Thread Dafna Ron
We have a passing build on master
waiting for 4.3

On Thu, Jun 13, 2019 at 9:01 AM Dafna Ron  wrote:

> Thanks Shirly and Martin.
> I see the patch was merged so monitoring and will update
>
> On Wed, Jun 12, 2019 at 4:09 PM Martin Perina  wrote:
>
>>
>>
>> On Wed, Jun 12, 2019 at 11:53 AM Dafna Ron  wrote:
>>
>>> latest tests are still failing on log collector
>>>
>>
>> I had a typo in my patch, here is the fix:
>> https://gerrit.ovirt.org/#/c/100784/
>> Currently trying to verify using manual OST to make sure latest VDSM is
>> used (and not latest tested as in check-patch executed from the fix) ...
>>
>>
>>> On Wed, Jun 12, 2019 at 10:18 AM Dafna Ron  wrote:
>>>
>>>> thanks Martin.
>>>> we are failing on missing packages so I have a patch to fix it.
>>>> I will update once we have a vdsm build
>>>>
>>>> On Tue, Jun 11, 2019 at 10:08 PM Martin Perina 
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Tue, Jun 11, 2019 at 7:41 PM Martin Perina 
>>>>> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Jun 11, 2019 at 6:53 PM Dafna Ron  wrote:
>>>>>>
>>>>>>> Galit's patch should have solved it.
>>>>>>> Marcin, are you still failing?
>>>>>>>
>>>>>>
>>>>>> I've just rebased https://gerrit.ovirt.org/100716 on top of Galit's
>>>>>> change so we should know within an hour or so ...
>>>>>>
>>>>>
>>>>> CI finished successfully, please review so we can merge ...
>>>>>
>>>>>
>>>>>>
>>>>>>>
>>>>>>> On Tue, Jun 11, 2019 at 2:40 PM Eyal Edri  wrote:
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, Jun 11, 2019 at 4:35 PM Marcin Sobczyk 
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> basic suite fails for the patch with the following log:
>>>>>>>>>
>>>>>>>>>  
>>>>>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-769>[2019-06-11T11:34:26.175Z]
>>>>>>>>>- STDERR
>>>>>>>>>  
>>>>>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-770>[2019-06-11T11:34:26.175Z]
>>>>>>>>>  + yum -y install ovirt-host
>>>>>>>>>  
>>>>>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-771>[2019-06-11T11:34:26.175Z]
>>>>>>>>>  Error: Package: 32:bind-libs-9.9.4-74.el7_6.1.x86_64 (alocalsync)
>>>>>>>>>  
>>>>>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-772>[2019-06-11T11:34:26.175Z]
>>>>>>>>> Requires: bind-license = 32:9.9.4-74.el7_6.1
>>>>>>>>>  
>>>>>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-773>[2019-06-11T11:34:26.175Z]
>>>>>>>>> Installed: 32:bind-license-9.9.4-73.el7_6.noarch 
>>>>>>>>> (installed)
>>>>>>>>>  
>>>>>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-774>[2019-06-11T11:34:26.175Z]
>>>>>>>>> bind-license = 32:9.9.4-73.el7_6
>>>>>>>>>
>>>>>>>>>
>>>>>>>> I think https://gerrit.ovirt.org/#/c/100691/ which was merged 2
>>>>>>>> hours ago should address this issu

Re: [ovirt-devel] Re: [urgent] vdsm broken since Friday (failing CQ)

2019-06-13 Thread Dafna Ron
Thanks Shirly and Martin.
I see the patch was merged, so I am monitoring and will update.

On Wed, Jun 12, 2019 at 4:09 PM Martin Perina  wrote:

>
>
> On Wed, Jun 12, 2019 at 11:53 AM Dafna Ron  wrote:
>
>> latest tests are still failing on log collector
>>
>
> I had a typo in my patch, here is the fix:
> https://gerrit.ovirt.org/#/c/100784/
> Currently trying to verify using manual OST to make sure latest VDSM is
> used (and not latest tested as in check-patch executed from the fix) ...
>
>
>> On Wed, Jun 12, 2019 at 10:18 AM Dafna Ron  wrote:
>>
>>> thanks Martin.
>>> we are failing on missing packages so I have a patch to fix it.
>>> I will update once we have a vdsm build
>>>
>>> On Tue, Jun 11, 2019 at 10:08 PM Martin Perina 
>>> wrote:
>>>
>>>>
>>>>
>>>> On Tue, Jun 11, 2019 at 7:41 PM Martin Perina 
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Tue, Jun 11, 2019 at 6:53 PM Dafna Ron  wrote:
>>>>>
>>>>>> Galit's patch should have solved it.
>>>>>> Marcin, are you still failing?
>>>>>>
>>>>>
>>>>> I've just rebased https://gerrit.ovirt.org/100716 on top of Galit's
>>>>> change so we should know within an hour or so ...
>>>>>
>>>>
>>>> CI finished successfully, please review so we can merge ...
>>>>
>>>>
>>>>>
>>>>>>
>>>>>> On Tue, Jun 11, 2019 at 2:40 PM Eyal Edri  wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Tue, Jun 11, 2019 at 4:35 PM Marcin Sobczyk 
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> basic suite fails for the patch with the following log:
>>>>>>>>
>>>>>>>>  
>>>>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-769>[2019-06-11T11:34:26.175Z]
>>>>>>>>- STDERR
>>>>>>>>  
>>>>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-770>[2019-06-11T11:34:26.175Z]
>>>>>>>>  + yum -y install ovirt-host
>>>>>>>>  
>>>>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-771>[2019-06-11T11:34:26.175Z]
>>>>>>>>  Error: Package: 32:bind-libs-9.9.4-74.el7_6.1.x86_64 (alocalsync)
>>>>>>>>  
>>>>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-772>[2019-06-11T11:34:26.175Z]
>>>>>>>> Requires: bind-license = 32:9.9.4-74.el7_6.1
>>>>>>>>  
>>>>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-773>[2019-06-11T11:34:26.175Z]
>>>>>>>> Installed: 32:bind-license-9.9.4-73.el7_6.noarch 
>>>>>>>> (installed)
>>>>>>>>  
>>>>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-774>[2019-06-11T11:34:26.175Z]
>>>>>>>> bind-license = 32:9.9.4-73.el7_6
>>>>>>>>
>>>>>>>>
>>>>>>> I think https://gerrit.ovirt.org/#/c/100691/ which was merged 2
>>>>>>> hours ago should address this issue.
>>>>>>> Maybe other reposync files should be updated as well?
>>>>>>>
>>>>>>>
>>>>>>>>  Seems unrelated and I'm having the same issue locally on my server.
>>>>>>>> Dafna, do you have some insight regarding this dependency error?
>>>>>>>>
>>>>>>>> Thanks, Marcin
>>>

Re: [ovirt-devel] Re: [urgent] vdsm broken since Friday (failing CQ)

2019-06-12 Thread Dafna Ron
latest tests are still failing on log collector

On Wed, Jun 12, 2019 at 10:18 AM Dafna Ron  wrote:

> thanks Martin.
> we are failing on missing packages so I have a patch to fix it.
> I will update once we have a vdsm build
>
> On Tue, Jun 11, 2019 at 10:08 PM Martin Perina  wrote:
>
>>
>>
>> On Tue, Jun 11, 2019 at 7:41 PM Martin Perina  wrote:
>>
>>>
>>>
>>> On Tue, Jun 11, 2019 at 6:53 PM Dafna Ron  wrote:
>>>
>>>> Galit's patch should have solved it.
>>>> Marcin, are you still failing?
>>>>
>>>
>>> I've just rebased https://gerrit.ovirt.org/100716 on top of Galit's
>>> change so we should know within an hour or so ...
>>>
>>
>> CI finished successfully, please review so we can merge ...
>>
>>
>>>
>>>>
>>>> On Tue, Jun 11, 2019 at 2:40 PM Eyal Edri  wrote:
>>>>
>>>>>
>>>>>
>>>>> On Tue, Jun 11, 2019 at 4:35 PM Marcin Sobczyk 
>>>>> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> basic suite fails for the patch with the following log:
>>>>>>
>>>>>>  
>>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-769>[2019-06-11T11:34:26.175Z]
>>>>>>- STDERR
>>>>>>  
>>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-770>[2019-06-11T11:34:26.175Z]
>>>>>>  + yum -y install ovirt-host
>>>>>>  
>>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-771>[2019-06-11T11:34:26.175Z]
>>>>>>  Error: Package: 32:bind-libs-9.9.4-74.el7_6.1.x86_64 (alocalsync)
>>>>>>  
>>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-772>[2019-06-11T11:34:26.175Z]
>>>>>> Requires: bind-license = 32:9.9.4-74.el7_6.1
>>>>>>  
>>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-773>[2019-06-11T11:34:26.175Z]
>>>>>> Installed: 32:bind-license-9.9.4-73.el7_6.noarch (installed)
>>>>>>  
>>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-774>[2019-06-11T11:34:26.175Z]
>>>>>> bind-license = 32:9.9.4-73.el7_6
>>>>>>
>>>>>>
>>>>> I think https://gerrit.ovirt.org/#/c/100691/ which was merged 2 hours
>>>>> ago should address this issue.
>>>>> Maybe other reposync files should be updated as well?
>>>>>
>>>>>
>>>>>>  Seems unrelated and I'm having the same issue locally on my server.
>>>>>> Dafna, do you have some insight regarding this dependency error?
>>>>>>
>>>>>> Thanks, Marcin
>>>>>> On 6/11/19 1:15 PM, Martin Perina wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Jun 11, 2019 at 11:58 AM Milan Zamazal 
>>>>>> wrote:
>>>>>>
>>>>>>> Dafna Ron  writes:
>>>>>>>
>>>>>>> > Hi,
>>>>>>> >
>>>>>>> > Please note vdsm has been broken since Fri the 7th
>>>>>>> >
>>>>>>> > to summarize again,  vdsm has a patch to remove sos plugin which
>>>>>>> is what
>>>>>>> > metrics is using in its ost tests
>>>>>>> > due to that, vdsm is failing the metrics tests and in order to
>>>>>>> solve it we
>>>>>>> > need to make a choice:
>>>>>>> > 1. fix the metrics tests to not use sos
>>>>>>> > 2. disable the metrics tests
>>>>>>> > 3. re

Re: [ovirt-devel] Re: [urgent] vdsm broken since Friday (failing CQ)

2019-06-12 Thread Dafna Ron
Thanks Martin.
We are failing on missing packages, so I have a patch to fix it.
I will update once we have a vdsm build.

On Tue, Jun 11, 2019 at 10:08 PM Martin Perina  wrote:

>
>
> On Tue, Jun 11, 2019 at 7:41 PM Martin Perina  wrote:
>
>>
>>
>> On Tue, Jun 11, 2019 at 6:53 PM Dafna Ron  wrote:
>>
>>> Galit's patch should have solved it.
>>> Marcin, are you still failing?
>>>
>>
>> I've just rebased https://gerrit.ovirt.org/100716 on top of Galit's
>> change so we should know within an hour or so ...
>>
>
> CI finished successfully, please review so we can merge ...
>
>
>>
>>>
>>> On Tue, Jun 11, 2019 at 2:40 PM Eyal Edri  wrote:
>>>
>>>>
>>>>
>>>> On Tue, Jun 11, 2019 at 4:35 PM Marcin Sobczyk 
>>>> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> basic suite fails for the patch with the following log:
>>>>>
>>>>>  
>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-769>[2019-06-11T11:34:26.175Z]
>>>>>- STDERR
>>>>>  
>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-770>[2019-06-11T11:34:26.175Z]
>>>>>  + yum -y install ovirt-host
>>>>>  
>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-771>[2019-06-11T11:34:26.175Z]
>>>>>  Error: Package: 32:bind-libs-9.9.4-74.el7_6.1.x86_64 (alocalsync)
>>>>>  
>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-772>[2019-06-11T11:34:26.175Z]
>>>>> Requires: bind-license = 32:9.9.4-74.el7_6.1
>>>>>  
>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-773>[2019-06-11T11:34:26.175Z]
>>>>> Installed: 32:bind-license-9.9.4-73.el7_6.noarch (installed)
>>>>>  
>>>>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-774>[2019-06-11T11:34:26.175Z]
>>>>>         bind-license = 32:9.9.4-73.el7_6
>>>>>
>>>>>
>>>> I think https://gerrit.ovirt.org/#/c/100691/ which was merged 2 hours
>>>> ago should address this issue.
>>>> Maybe other reposync files should be updated as well?
>>>>
>>>>
>>>>>  Seems unrelated and I'm having the same issue locally on my server.
>>>>> Dafna, do you have some insight regarding this dependency error?
>>>>>
>>>>> Thanks, Marcin
>>>>> On 6/11/19 1:15 PM, Martin Perina wrote:
>>>>>
>>>>>
>>>>>
>>>>> On Tue, Jun 11, 2019 at 11:58 AM Milan Zamazal 
>>>>> wrote:
>>>>>
>>>>>> Dafna Ron  writes:
>>>>>>
>>>>>> > Hi,
>>>>>> >
>>>>>> > Please note vdsm has been broken since Fri the 7th
>>>>>> >
>>>>>> > to summarize again,  vdsm has a patch to remove sos plugin which is
>>>>>> what
>>>>>> > metrics is using in its ost tests
>>>>>> > due to that, vdsm is failing the metrics tests and in order to
>>>>>> solve it we
>>>>>> > need to make a choice:
>>>>>> > 1. fix the metrics tests to not use sos
>>>>>> > 2. disable the metrics tests
>>>>>> > 3. revert the sos patch until a decision is made on ^^
>>>>>>
>>>>>> #3 is not an option, it would make Vdsm uninstallable on newer RHEL
>>>>>> versions.
>>>>>>
>>>>>
>>>>> I've posted a patch https://gerrit.ovirt.org/100716 which is trying
>>>>> to install vdsm sos plugin if it's not installed either by vdsm nor sos.
>>>>> Currenlt

Re: [ovirt-devel] Re: [urgent] vdsm broken since Friday (failing CQ)

2019-06-11 Thread Dafna Ron
Galit's patch should have solved it.
Marcin, are you still failing?


On Tue, Jun 11, 2019 at 2:40 PM Eyal Edri  wrote:

>
>
> On Tue, Jun 11, 2019 at 4:35 PM Marcin Sobczyk 
> wrote:
>
>> Hi,
>>
>> basic suite fails for the patch with the following log:
>>
>>  
>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-769>[2019-06-11T11:34:26.175Z]
>>- STDERR
>>  
>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-770>[2019-06-11T11:34:26.175Z]
>>  + yum -y install ovirt-host
>>  
>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-771>[2019-06-11T11:34:26.175Z]
>>  Error: Package: 32:bind-libs-9.9.4-74.el7_6.1.x86_64 (alocalsync)
>>  
>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-772>[2019-06-11T11:34:26.175Z]
>> Requires: bind-license = 32:9.9.4-74.el7_6.1
>>  
>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-773>[2019-06-11T11:34:26.175Z]
>> Installed: 32:bind-license-9.9.4-73.el7_6.noarch (installed)
>>  
>> <https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/4762/pipeline/107#step-188-log-774>[2019-06-11T11:34:26.175Z]
>> bind-license = 32:9.9.4-73.el7_6
>>
>>
> I think https://gerrit.ovirt.org/#/c/100691/ which was merged 2 hours ago
> should address this issue.
> Maybe other reposync files should be updated as well?
>
>
>>  Seems unrelated and I'm having the same issue locally on my server.
>> Dafna, do you have some insight regarding this dependency error?
>>
>> Thanks, Marcin
>> On 6/11/19 1:15 PM, Martin Perina wrote:
>>
>>
>>
>> On Tue, Jun 11, 2019 at 11:58 AM Milan Zamazal 
>> wrote:
>>
>>> Dafna Ron  writes:
>>>
>>> > Hi,
>>> >
>>> > Please note vdsm has been broken since Fri the 7th
>>> >
>>> > to summarize again,  vdsm has a patch to remove sos plugin which is
>>> what
>>> > metrics is using in its ost tests
>>> > due to that, vdsm is failing the metrics tests and in order to solve
>>> it we
>>> > need to make a choice:
>>> > 1. fix the metrics tests to not use sos
>>> > 2. disable the metrics tests
>>> > 3. revert the sos patch until a decision is made on ^^
>>>
>>> #3 is not an option, it would make Vdsm uninstallable on newer RHEL
>>> versions.
>>>
>>
>> I've posted a patch https://gerrit.ovirt.org/100716 which is trying to
>> install vdsm sos plugin if it's not installed either by vdsm nor sos.
>> Currenlty waiting for CI, if run is successfull, I will extend the patch
>> also for 4.3 basic suite.
>>
>>
>>> > Thanks,
>>> > Dafna
>>> >
>>> >
>>> > -- Forwarded message -
>>> > From: Dafna Ron 
>>> > Date: Mon, Jun 10, 2019 at 1:30 PM
>>> > Subject: Re: [ OST Failure Report ] [ oVirt Master (vdsm) ] [
>>> 07-06-2019 ]
>>> > [ 003_00_metrics_bootstrap.metrics_and_log_collector ]
>>> > To: Martin Perina , Milan Zamazal <
>>> mzama...@redhat.com>,
>>> > Shirly Radco 
>>> > Cc: devel , infra 
>>> >
>>> >
>>> > Shirly? any update on this?
>>> >
>>> > On Fri, Jun 7, 2019 at 11:54 AM Dafna Ron  wrote:
>>> >
>>> >> Hi,
>>> >>
>>> >> We have a failure in vdsm project on master.
>>> >>
>>> >> The issue is change:
>>> >> https://gerrit.ovirt.org/#/c/100576/ - Remove SOS VDSM plugin
>>> >>
>>> >> which is failing on metrics as metrics is calling sos-logcollector.
>>> >>
>>> >> The patch cannot be changed as until centos 7.7 when sos-3.7-3, which
>>> >> contains vdsm plugin will come out.
>>> >> so until then, w

Re: [CQ]: e3cbb5d (ovirt-ansible-hosted-engine-setup) failed "ovirt-master" system tests

2019-06-11 Thread Dafna Ron
Missing packages.
I added a patch: https://gerrit.ovirt.org/#/c/100711/
Once it passes, we can merge and re-run the failure.

On Tue, Jun 11, 2019 at 10:58 AM oVirt Jenkins  wrote:

> Change e3cbb5d (ovirt-ansible-hosted-engine-setup) is probably the reason
> behind recent system test failures in the "ovirt-master" change queue and
> needs
> to be fixed.
>
> This change had been removed from the testing queue. Artifacts build from
> this
> change will not be released until it is fixed.
>
> For further details about the change see:
>
> https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/commit/e3cbb5de09dc85264cfb35c8a58c2e6a73bcaa04
>
> For failed test results see:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14501/
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/7ZCGITA4JMOVFT6TXEEH4QU5HQOFE3H4/
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/624T3OPF2NEU5NDZV3DFKANTZ5XZNXP4/


[urgent] vdsm broken since Friday (failing CQ)

2019-06-11 Thread Dafna Ron
Hi,

Please note that vdsm has been broken since Friday the 7th.

To summarize again: vdsm has a patch that removes the sos plugin, which is what
the metrics suite uses in its OST tests.
Due to that, vdsm is failing the metrics tests, and in order to solve it we
need to make a choice:
1. fix the metrics tests to not use sos
2. disable the metrics tests
3. revert the sos patch until a decision is made on ^^

Thanks,
Dafna


-- Forwarded message -
From: Dafna Ron 
Date: Mon, Jun 10, 2019 at 1:30 PM
Subject: Re: [ OST Failure Report ] [ oVirt Master (vdsm) ] [ 07-06-2019 ]
[ 003_00_metrics_bootstrap.metrics_and_log_collector ]
To: Martin Perina , Milan Zamazal ,
Shirly Radco 
Cc: devel , infra 


Shirly? any update on this?

On Fri, Jun 7, 2019 at 11:54 AM Dafna Ron  wrote:

> Hi,
>
> We have a failure in vdsm project on master.
>
> The issue is change:
> https://gerrit.ovirt.org/#/c/100576/ - Remove SOS VDSM plugin
>
> which is failing on metrics as metrics is calling sos-logcollector.
>
> The patch cannot be changed as until centos 7.7 when sos-3.7-3, which
> contains vdsm plugin will come out.
> so until then, we are left with no sos plugin, which is causing the
> metrics test to fail.
>
> Shirly, can you please take a look and see if we can change the test to
> not call sos-logcollector?
> Please note, that we are expecting 4.3 to fail on same issue very soon.
>
> failed job can be found here:
>
> https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14452/
>
>
> ERROR from test:
>
> lago.ssh: DEBUG: Command 8626fe70 on lago-basic-suite-master-engine  errors:
>  ERROR: Failed to get a sosreport from: lago-basic-suite-master-host-1; Could 
> not parse sosreport output
> ERROR: Failed to get a sosreport from: lago-basic-suite-master-host-0; Could 
> not parse sosreport output
>
> lago.utils: DEBUG: Error while running thread Thread-3
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58, in 
> _ret_via_queue
> queue.put({'return': func()})
>   File 
> "/home/jenkins/workspace/ovirt-master_change-queue-tester/ovirt-system-tests/basic-suite-master/test-scenarios/003_00_metrics_bootstrap.py",
>  line 97, in run_log_collector
> 'log collector failed. Exit code is %s' % result.code
>   File "/usr/lib64/python2.7/unittest/case.py", line 462, in assertTrue
> raise self.failureException(msg)
> AssertionError: log collector failed. Exit code is 1
> - >> end captured logging << --
>
> Thanks,
> Dafna
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/E2UBWXYAE73XJ6I75N4ESXUFDIVMSIIS/


Re: [ OST Failure Report ] [ oVirt Master (vdsm) ] [ 07-06-2019 ] [ 003_00_metrics_bootstrap.metrics_and_log_collector ]

2019-06-10 Thread Dafna Ron
Shirly? any update on this?

On Fri, Jun 7, 2019 at 11:54 AM Dafna Ron  wrote:

> Hi,
>
> We have a failure in vdsm project on master.
>
> The issue is change:
> https://gerrit.ovirt.org/#/c/100576/ - Remove SOS VDSM plugin
>
> which is failing on metrics as metrics is calling sos-logcollector.
>
> The patch cannot be changed as until centos 7.7 when sos-3.7-3, which
> contains vdsm plugin will come out.
> so until then, we are left with no sos plugin, which is causing the
> metrics test to fail.
>
> Shirly, can you please take a look and see if we can change the test to
> not call sos-logcollector?
> Please note, that we are expecting 4.3 to fail on same issue very soon.
>
> failed job can be found here:
>
> https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14452/
>
>
> ERROR from test:
>
> lago.ssh: DEBUG: Command 8626fe70 on lago-basic-suite-master-engine  errors:
>  ERROR: Failed to get a sosreport from: lago-basic-suite-master-host-1; Could 
> not parse sosreport output
> ERROR: Failed to get a sosreport from: lago-basic-suite-master-host-0; Could 
> not parse sosreport output
>
> lago.utils: DEBUG: Error while running thread Thread-3
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58, in 
> _ret_via_queue
> queue.put({'return': func()})
>   File 
> "/home/jenkins/workspace/ovirt-master_change-queue-tester/ovirt-system-tests/basic-suite-master/test-scenarios/003_00_metrics_bootstrap.py",
>  line 97, in run_log_collector
> 'log collector failed. Exit code is %s' % result.code
>   File "/usr/lib64/python2.7/unittest/case.py", line 462, in assertTrue
> raise self.failureException(msg)
> AssertionError: log collector failed. Exit code is 1
> - >> end captured logging << --
>
> Thanks,
> Dafna
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/OTRK36TZ25KWDPSMSTTE3SX2T7VUJVCE/


[ OST Failure Report ] [ oVirt Master (vdsm) ] [ 07-06-2019 ] [ 003_00_metrics_bootstrap.metrics_and_log_collector ]

2019-06-07 Thread Dafna Ron
Hi,

We have a failure in vdsm project on master.

The issue is change:
https://gerrit.ovirt.org/#/c/100576/ - Remove SOS VDSM plugin

which is failing the metrics suite, as the metrics tests call sos-logcollector.

The patch cannot be changed, as sos-3.7-3, which contains the vdsm plugin, will
only arrive with CentOS 7.7.
So until then we are left with no sos plugin, which is causing the metrics
test to fail.

Shirly, can you please take a look and see if we can change the test to not
call sos-logcollector?
Please note that we are expecting 4.3 to fail on the same issue very soon.

failed job can be found here:

https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14452/


ERROR from test:

lago.ssh: DEBUG: Command 8626fe70 on lago-basic-suite-master-engine  errors:
 ERROR: Failed to get a sosreport from:
lago-basic-suite-master-host-1; Could not parse sosreport output
ERROR: Failed to get a sosreport from: lago-basic-suite-master-host-0;
Could not parse sosreport output

lago.utils: DEBUG: Error while running thread Thread-3
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58, in
_ret_via_queue
queue.put({'return': func()})
  File 
"/home/jenkins/workspace/ovirt-master_change-queue-tester/ovirt-system-tests/basic-suite-master/test-scenarios/003_00_metrics_bootstrap.py",
line 97, in run_log_collector
'log collector failed. Exit code is %s' % result.code
  File "/usr/lib64/python2.7/unittest/case.py", line 462, in assertTrue
raise self.failureException(msg)
AssertionError: log collector failed. Exit code is 1
- >> end captured logging << --
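
For reference, the failing check in 003_00_metrics_bootstrap.py boils down to
something like the sketch below. This is reconstructed from the traceback
above; the exact command line and helper names (engine.ssh, the conf file
path) are assumptions, not the real OST code:

import nose.tools as nt

def run_log_collector(engine):
    # ovirt-log-collector shells out to sosreport on every host; with the
    # vdsm sos plugin gone, the sosreport output cannot be parsed and the
    # collector exits non-zero.
    result = engine.ssh([
        'ovirt-log-collector',
        '--verbose',
        '--conf-file=/root/ovirt-log-collector.conf',
    ])
    nt.assert_true(
        result.code == 0,
        'log collector failed. Exit code is %s' % result.code,
    )

So "changing the test to not call sos-logcollector" would mean either dropping
this call or making the assertion tolerate a missing vdsm sos plugin.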

Thanks,
Dafna
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/AGIX5H4EJQ66FULFJ6UL5OEDHZTLVMLO/


Re: [CQ]: 3fdab27 (ovirt-ansible-cluster-upgrade) failed "ovirt-4.2" system tests

2019-06-05 Thread Dafna Ron
Asked Galit to have a look, as this was supposed to have been resolved.
Re-adding this change.

On Wed, Jun 5, 2019 at 10:25 AM oVirt Jenkins  wrote:

> Change 3fdab27 (ovirt-ansible-cluster-upgrade) is probably the reason
> behind
> recent system test failures in the "ovirt-4.2" change queue and needs to be
> fixed.
>
> This change had been removed from the testing queue. Artifacts build from
> this
> change will not be released until it is fixed.
>
> For further details about the change see:
>
> https://github.com/oVirt/ovirt-ansible-cluster-upgrade/commit/3fdab275f49d11e0a5bf5c1895f9765314d402d7
>
> For failed test results see:
> http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/4459/
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/E4NG5TAZD5Z4T2ZSJXVCAOTW2OVEXMRE/
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/PHWYVSCO5YJPLMNP2QFZZVHWSBLTBF3T/


Re: [ovirt-devel] Re: Storage test case needs to be fixed: 002_bootstrap.resize_and_refresh_storage_domain

2019-06-05 Thread Dafna Ron
Hi,

This is a random failure - so no, I have not.
However, I looked at several failures and they are all the same: the action on
the engine/vdsm side succeeds, yet lago reports a failure while printing a
success in the logs.
Did you add anything more to the tests to allow better debugging?
Did you add a re-try to the test?

Thanks,
Dafna



On Wed, Jun 5, 2019 at 8:21 AM Galit Rosenthal  wrote:

> Hi Dafna
>
> If you see this failure again, please send us the link to the job.
>
> We are trying to reproduce it and find the root cause.
>
> Regards,
> Galit
>
> On Tue, May 21, 2019 at 5:15 PM Dafna Ron  wrote:
>
>> sure
>>
>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14185/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-002_bootstrap.py/
>>
>>
>> On Tue, May 21, 2019 at 3:04 PM Shani Leviim  wrote:
>>
>>> Hi Dafna,
>>> Can you please direct me to the full test's log?
>>>
>>>
>>> *Regards,*
>>>
>>> *Shani Leviim*
>>>
>>>
>>> On Tue, May 21, 2019 at 11:57 AM Tal Nisan  wrote:
>>>
>>>> Sure, Shani can you please have a look?
>>>>
>>>> On Mon, May 20, 2019 at 8:30 PM Dafna Ron  wrote:
>>>>
>>>>> Hi Tal,
>>>>>
>>>>> I am seeing random failures for test
>>>>> 002_bootstrap.resize_and_refresh_storage_domain
>>>>> It looks like this is a timing issue since by the time we print out
>>>>> the error we actually see the resize succeeded. see example below:
>>>>>
>>>>>
>>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14185/testReport/junit/(root)/002_bootstrap/running_tests___basic_suite_el7_x86_64___resize_and_refresh_storage_domain/
>>>>>
>>>>> Can you please assign someone from the storage team to fix this test?
>>>>>
>>>>> Thanks,
>>>>> Dafna
>>>>>
>>>>> ___
>> Devel mailing list -- de...@ovirt.org
>> To unsubscribe send an email to devel-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/de...@ovirt.org/message/P4UZJ4PWELVW2AGLVIFXKM5OTMHUHKGC/
>>
>
>
> --
>
> GALIT ROSENTHAL
>
> SOFTWARE ENGINEER
>
> Red Hat
>
> <https://www.redhat.com/>
>
> ga...@redhat.comT: 972-9-7692230
> <https://red.ht/sig>
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/PY5PCZWS3ED7PRC7EEHHDEMX23GW4PLE/


Re: [CQ]: 100504,2 (ovirt-engine) failed "ovirt-4.3" system tests

2019-06-04 Thread Dafna Ron
Failed to download the node-ng image from the 4.2 tested repo.
We already have a new ovirt-engine patch running, so there is nothing to do on
this one except look at why we failed to access the 4.2 tested repo.

[2019-06-04T11:26:59.059Z]
ovirt-node-ng-image-update 99% [===-]  85 MB/s | 640 MB   00:00 ETA
ovirt-node-ng-image-update-4.2.8-1.el7.noarch: [Errno 256] No more
mirrors to try.
 
[2019-06-04T11:26:59.059Z]
 
[2019-06-04T11:26:59.059Z]
stderr:
 
[2019-06-04T11:26:59.059Z]
 
[2019-06-04T11:26:59.059Z]
  - repo: ovirt-4.2-tested-el7: failed, re-running.
 
[2019-06-04T11:26:59.059Z]
  - removing conflicting RPM:
/var/lib/lago/ovirt-4.2-tested-el7/noarch/ovirt-node-ng-image-update-4.2.8-1.el7.noarch.rpm
 
[2019-06-04T11:26:59.059Z]
  - repo: ovirt-4.2-tested-el7: failed. clearing cache and
re-running.
 
[2019-06-04T11:26:59.059Z]
* Running reposync: ERROR (in 0:08:13)
 
[2019-06-04T11:26:59.059Z]
  # Syncing remote repos locally (this might take some time): ERROR
(in 0:08:13)
 
[2019-06-04T11:26:59.059Z]
@ Create prefix internal repo: ERROR (in 0:08:13)
 
[2019-06-04T11:26:59.059Z]
Error occured, aborting
 
[2019-06-04T11:26:59.059Z]
Traceback (most recent call last):
 
[2019-06-04T11:26:59.059Z]
  File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 383,
in do_run
 
[2019-06-04T11:26:59.059Z]
self.cli_plugins[args.ovirtverb].do_run(args)
 
[2019-06-04T11:26:59.059Z]
  File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line
184, in do_run
 
[2019-06-04T11:26:59.059Z]
self._do_run(**vars(args))
 
[2019-06-04T11:26:59.059Z]
  File "/usr/lib/python2.7/site-packages/lago/utils.py", line 573, in
wrapper
 
[2019-06-04T11:26:59.059Z]
return func(*args, **kwargs)
 
[2019-06-04T11:26:59.059Z]
  File "/usr/lib/python2.7/site-packages/lago/utils.py", line 584, in
wrapper
 
[2019-06-04T11:26:59.059Z]
return func(*args, prefix=prefix, **kwargs)
 
[2019-06-04T11:26:59.059Z]
  File 

Re: Storage test case needs to be fixed: 002_bootstrap.resize_and_refresh_storage_domain

2019-05-21 Thread Dafna Ron
sure
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14185/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-002_bootstrap.py/


On Tue, May 21, 2019 at 3:04 PM Shani Leviim  wrote:

> Hi Dafna,
> Can you please direct me to the full test's log?
>
>
> *Regards,*
>
> *Shani Leviim*
>
>
> On Tue, May 21, 2019 at 11:57 AM Tal Nisan  wrote:
>
>> Sure, Shani can you please have a look?
>>
>> On Mon, May 20, 2019 at 8:30 PM Dafna Ron  wrote:
>>
>>> Hi Tal,
>>>
>>> I am seeing random failures for test
>>> 002_bootstrap.resize_and_refresh_storage_domain
>>> It looks like this is a timing issue since by the time we print out the
>>> error we actually see the resize succeeded. see example below:
>>>
>>>
>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14185/testReport/junit/(root)/002_bootstrap/running_tests___basic_suite_el7_x86_64___resize_and_refresh_storage_domain/
>>>
>>> Can you please assign someone from the storage team to fix this test?
>>>
>>> Thanks,
>>> Dafna
>>>
>>>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/P4UZJ4PWELVW2AGLVIFXKM5OTMHUHKGC/


vdsm failing on kernel package in CQ - Fix will be merged soon

2019-05-21 Thread Dafna Ron
Hi,

The vdsm project is failing on 4.3 and master due to a kernel package
dependency [1].

Galit already has a patch and will merge it once it passes CI [2].

[1]
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14195/artifact/upgrade-from-release-suite.el7.x86_64/test_logs/upgrade-from-release-suite-master/post-004_basic_sanity.py/lago-upgrade-from-release-suite-master-engine/_var_log/ovirt-engine/host-deploy/ovirt-host-mgmt-ansible-20190521064741-lago-upgrade-from-release-suite-master-host-0-cd7bb16e-b4e6-49fd-ad52-fbce54d240a0.log
[2] https://gerrit.ovirt.org/#/c/100182/


Thanks,
Dafna
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/MPIQHWFARGHOIYKIU2WMKPB6GY6NPYVH/


Storage test case needs to be fixed: 002_bootstrap.resize_and_refresh_storage_domain

2019-05-20 Thread Dafna Ron
Hi Tal,

I am seeing random failures for the test
002_bootstrap.resize_and_refresh_storage_domain.
It looks like this is a timing issue, since by the time we print the error we
can actually see that the resize succeeded; see the example below:

http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14185/testReport/junit/(root)/002_bootstrap/running_tests___basic_suite_el7_x86_64___resize_and_refresh_storage_domain/

Can you please assign someone from the storage team to fix this test?
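
If it really is a timing issue, one possible direction is to poll for the new
size instead of asserting on the first read. A minimal sketch, assuming a
callable that returns the domain's reported size (the helper names here are
assumptions, not the actual OST code):

import time

def wait_for_resize(get_total_size, expected_size, timeout=120, interval=5):
    # Poll until the storage domain reports at least expected_size bytes,
    # instead of failing on the first stale read after the resize call.
    deadline = time.time() + timeout
    seen = get_total_size()
    while seen < expected_size and time.time() < deadline:
        time.sleep(interval)
        seen = get_total_size()
    assert seen >= expected_size, (
        'resize not reflected within %ss: %s < %s'
        % (timeout, seen, expected_size))

If OST's testlib already has an assert_true_within-style helper, wrapping the
size check in that would be the more natural fix.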

Thanks,
Dafna
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/L4RSLYR4M6RJ6VFXHDSB2TR6HW5RCRNU/


[Ovirt] [CQ weekly status] [17-05-2019]

2019-05-17 Thread Dafna Ron
Hi,

This mail is to provide the current status of CQ and allow people to review
status before and after the weekend.
Please refer to the colour map below for further information on the meaning of
the colours.

*CQ-4.2*:  GREEN (#1)

Last failure was on 15-5 for project ovirt-ansible-engine-setup due to a
failure in test 002_bootstrap.add_master_storage_domain.
The fix for this test was submitted and merged in ost:
https://gerrit.ovirt.org/#/c/100089/

*CQ-4.3*:  GREEN (#1)

Last failure was on 16-05 for project ovirt-host due to a build-artifacts
failure for fc28.
The issue has been resolved.

*CQ-Master:*  RED (#1)

We have a current failure in project ovirt-engine which started last night.
The failure is in the run_vms test and was caused by change:
https://gerrit.ovirt.org/#/c/99372/6 - core: Initial Ignition support over
custom script 
It already has a fix (thank you Roy for the fast response) and we are
awaiting a merge for the patch:
https://gerrit.ovirt.org/#/c/100133/ - core: handle a case where cloudinit
script is null

 Current running jobs for 4.2 [1], 4.3 [2] and master [3] can be found
here:

[1]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.2_change-queue-tester/

[2]
https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.3_change-queue-tester/

[3]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/

Happy week!
Dafna


---
COLOUR MAP

Green = job has been passing successfully

** green for more than 3 days may suggest we need a review of our test
coverage


   1. 1-3 days     GREEN (#1)
   2. 4-7 days     GREEN (#2)
   3. Over 7 days  GREEN (#3)


Yellow = intermittent failures for different projects but no lasting or
current regressions

** Intermittent failures still indicate a healthy project, as we expect a
number of failures during the week

** I will not report any of the solved failures or regressions.


   1. Solved job failures   YELLOW (#1)
   2. Solved regressions    YELLOW (#2)


Red = job has been failing

** Active Failures. The colour will change based on the amount of time the
project/s has been broken. Only active regressions would be reported.


   1. 1-3 days     RED (#1)
   2. 4-7 days     RED (#2)
   3. Over 7 days  RED (#3)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/354VGI5DB7VSX2GAYDHXWB5ROE3SXB4A/


[ OST Failure Report ] [ oVirt Master (ovirt-engine) ] [ 17-05-19 ] [ 004_basic_sanity.run_vms ]

2019-05-17 Thread Dafna Ron
Hi,

We are failing to run a VM in project ovirt-engine on the master branch, in
both the basic and upgrade suites.

CQ reported this patch as root cause:

https://gerrit.ovirt.org/#/c/99372/6 - core: Initial Ignition support over
custom script 

I can see errors in the log related to cloud-init, which happen at the same
time we try to run the VM.
There are also NPEs from GetVMLeaseInfo happening before that.

Roy, can you please take a look?

Full logs from first failure can be found here:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14148/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-004_basic_sanity.py/


Thanks,
Dafna


Error from log:

2019-05-16 12:53:46,184-04 INFO
 [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (default task-2)
[ad7888f0-4368-4792-b6a8-f7cd4b0ebbe6] START, CreateVDSCommand( CreateVD
SCommandParameters:{hostId='31dd3a99-5821-437b-8995-232cfcd67d84',
vmId='362080c4-4d08-4b40-b6e7-9c04bf854d68', vm='VM [vm0]'}), log id:
522026aa
2019-05-16 12:53:46,190-04 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand]
(default task-2) [ad7888f0-4368-4792-b6a8-f7cd4b0ebbe6] START, CreateBrok
erVDSCommand(HostName = lago-basic-suite-master-host-1,
CreateVDSCommandParameters:{hostId='31dd3a99-5821-437b-8995-232cfcd67d84',
vmId='362080c4-4d08-4b40-b6e7-9c04bf854d68
', vm='VM [vm0]'}), log id: 6bbd62c6
2019-05-16 12:53:46,201-04 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (default
task-2) [ad7888f0-4368-4792-b6a8-f7cd4b0ebbe6] Failed in 'CreateBrokerVDS'
method, for vds: 'lago-basic-suite-master-host-1'; host:
'lago-basic-suite-master-host-1': Failed to build cloud-init data:
2019-05-16 12:53:46,201-04 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (default
task-2) [ad7888f0-4368-4792-b6a8-f7cd4b0ebbe6] Command
'CreateBrokerVDSCommand(HostName = lago-basic-suite-master-host-1,
CreateVDSCommandParameters:{hostId='31dd3a99-5821-437b-8995-232cfcd67d84',
vmId='362080c4-4d08-4b40-b6e7-9c04bf854d68', vm='VM [vm0]'})' execution
failed: Failed to build cloud-init data:
2019-05-16 12:53:46,201-04 DEBUG
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (default
task-2) [ad7888f0-4368-4792-b6a8-f7cd4b0ebbe6] Exception:
java.lang.RuntimeException: Failed to build cloud-init data:
at
org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand.getPayload(CreateBrokerVDSCommand.java:177)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand.generateDomainXml(CreateBrokerVDSCommand.java:90)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand.createInfo(CreateBrokerVDSCommand.java:52)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand.executeVdsBrokerCommand(CreateBrokerVDSCommand.java:44)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVdsCommandWithNetworkEvent(VdsBrokerCommand.java:123)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:111)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:65)
[vdsbroker.jar:]
at
org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:31)
[dal.jar:]
at
org.ovirt.engine.core.vdsbroker.vdsbroker.DefaultVdsCommandExecutor.execute(DefaultVdsCommandExecutor.java:14)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:397)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand(Unknown
Source) [vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.CreateVDSCommand.executeVmCommand(CreateVDSCommand.java:37)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.ManagingVmCommand.executeVDSCommand(ManagingVmCommand.java:17)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:65)
[vdsbroker.jar:]
at
org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:31)
[dal.jar:]
at
org.ovirt.engine.core.vdsbroker.vdsbroker.DefaultVdsCommandExecutor.execute(DefaultVdsCommandExecutor.java:14)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:397)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand$$super(Unknown
Source) [vdsbroker.jar:]
at sun.reflect.GeneratedMethodAccessor253.invoke(Unknown Source)
[:1.8.0_212]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_212]
at java.lang.reflect.Method.invoke(Method.java:498)
[rt.jar:1.8.0_212]
at

Re: [CQ]: 99979,1 (cockpit-ovirt) failed "ovirt-4.3" system tests

2019-05-14 Thread Dafna Ron
this was already fixed in later builds.
https://gerrit.ovirt.org/#/c/99981/ was successfully built
http://jenkins.ovirt.org/job/cockpit-ovirt_4.3_build-artifacts-el7-x86_64/50/

On Tue, May 14, 2019 at 2:13 PM oVirt Jenkins  wrote:

> Change 99979,1 (cockpit-ovirt) is probably the reason behind recent system
> test
> failures in the "ovirt-4.3" change queue and needs to be fixed.
>
> This change had been removed from the testing queue. Artifacts build from
> this
> change will not be released until it is fixed.
>
> For further details about the change see:
> https://gerrit.ovirt.org/#/c/99979/1
>
> For failed test results see:
> http://jenkins.ovirt.org/job/ovirt-4.3_change-queue-tester/874/
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/MCXAHX3JATOCSCKP2MEV4USVS776FZIC/
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/ZGF44CLDV56TGX24ISQJUHEWYUTWQMD2/


Re: [CQ]: 98774,3 (ovirt-engine) failed "ovirt-master" system tests

2019-05-14 Thread Dafna Ron
Adding Galit

On Tue, May 14, 2019 at 10:32 AM Barak Korren  wrote:

> Yeah, its my mock upgrade patch causing some issues - if you see this,
> downgrade mock on the slave with yum and rerun the build
>
> On Tue, 14 May 2019 at 12:29, Dafna Ron  wrote:
>
>> this is a failure to build fc29 because mock could not find yaml
>>
>> WARN: Unable to find environment.yaml file
>> automation/build-artifacts.environment.yaml or
>> automation/build-artifacts.environment.yaml.fc29, skipping environment.yaml
>> == Initializing chroot
>> mock \
>> --old-chroot \
>> --configdir="/home/jenkins/workspace/ovirt-engine_master_build-artifacts-fc29-x86_64/ovirt-engine" \
>> --root="mocker-fedora-29-x86_64.fc29" \
>> --resultdir="/tmp/mock_logs.raYNKEdF/init" \
>> --init
>>   File "/home/jenkins/workspace/ovirt-engine_master_build-artifacts-fc29-x86_64/ovirt-engine/mocker-fedora-29-x86_64.fc29.cfg", line 77, in <module>
>> from mirror_client import mirrors_from_uri, \
>> ModuleNotFoundError: No module named 'yaml'
>> ERROR: Error in configuration
>> Init took 1 seconds
>> @@ Tue May 14 07:38:37 UTC 2019 automation/build-artifacts.sh chroot finished
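
For reference, the downgrade Barak suggests above would look roughly like this
on the affected slave (a sketch only; the exact mock versions involved are not
shown in this thread):

    # hypothetical commands, run as root on the slave
    yum history list mock      # find the transaction that pulled in the new mock
    yum downgrade mock         # roll back to the previously available version
    rpm -q mock                # confirm the version before re-triggering the build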

Re: [CQ]: 98774,3 (ovirt-engine) failed "ovirt-master" system tests

2019-05-14 Thread Dafna Ron
this is a failure to build fc29 because mock could not find yaml

WARN: Unable to find environment.yaml file
automation/build-artifacts.environment.yaml or
automation/build-artifacts.environment.yaml.fc29, skipping environment.yaml
== Initializing chroot
   mock \
   --old-chroot \
   --configdir="/home/jenkins/workspace/ovirt-engine_master_build-artifacts-fc29-x86_64/ovirt-engine" \
   --root="mocker-fedora-29-x86_64.fc29" \
   --resultdir="/tmp/mock_logs.raYNKEdF/init" \
   --init
  File "/home/jenkins/workspace/ovirt-engine_master_build-artifacts-fc29-x86_64/ovirt-engine/mocker-fedora-29-x86_64.fc29.cfg", line 77, in <module>
    from mirror_client import mirrors_from_uri, \
ModuleNotFoundError: No module named 'yaml'
ERROR: Error in configuration
Init took 1 seconds
@@ Tue May 14 07:38:37 UTC 2019 automation/build-artifacts.sh chroot finished
@@ took 2 seconds
@@ rc = 1
== Scrubbing chroot
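
Whether the missing module is a python2 or a python3 one depends on which
interpreter the upgraded mock (the mock upgrade mentioned in the reply above)
runs under, so a quick check on the slave could look like this (a sketch; the
package names are the usual distro ones and may differ per OS):

    # hypothetical checks on the slave
    python2 -c 'import yaml' && echo py2 ok
    python3 -c 'import yaml' && echo py3 ok
    # install whichever one is missing, e.g.:
    yum install -y PyYAML              # python2 yaml on el7
    dnf install -y python3-pyyaml      # python3 yaml on Fedora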
 

Re: [CQ]: 99967,1 (ovirt-engine) failed "ovirt-4.3" system tests

2019-05-14 Thread Dafna Ron
This is due to the fc28 build-artifacts job.

Seems like the mock config cannot import the python 'yaml' module; the
environment.yaml message below is only a warning.

WARN: Unable to find environment.yaml file
automation/build-artifacts.environment.yaml or
automation/build-artifacts.environment.yaml.fc28, skipping environment.yaml
== Initializing chroot
   mock \
   --old-chroot \
   --configdir="/home/jenkins/workspace/ovirt-engine_4.3_build-artifacts-fc28-x86_64/ovirt-engine" \
   --root="mocker-fedora-28-x86_64.fc28" \
   --resultdir="/tmp/mock_logs.jnU7afAD/init" \
   --init
  File "/home/jenkins/workspace/ovirt-engine_4.3_build-artifacts-fc28-x86_64/ovirt-engine/mocker-fedora-28-x86_64.fc28.cfg", line 79, in <module>
    from mirror_client import mirrors_from_uri, \
ModuleNotFoundError: No module named 'yaml'
ERROR: Error in configuration
Init took 1 seconds
@@ Tue May 14 08:08:13 UTC 2019 automation/build-artifacts.sh chroot finished
@@ took 2 seconds
@@ rc = 1
== Scrubbing chroot
   mock \
 

[JIRA] (OVIRT-2725) Build artifacts failed - not enough free space on file system

2019-05-13 Thread Dafna Ron (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39309#comment-39309
 ] 

Dafna Ron commented on OVIRT-2725:
--

The build that failed is fc28 on s390x:
15:52:20  Running on lfedora1.lf-dev.marist.edu in 
/home/ovirt/workspace/standard-manual-runner
Agent lfedora1.lf-dev.marist.edu (lfedora1.lf-dev.marist.edu s390x loaned VM)

[~ederevea] can you please take a look? 
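
A couple of commands that could be run on that slave to see what is filling the
root filesystem and to reclaim space from mock (a sketch; the paths are the
usual defaults):

    df -h /                                        # how much is actually free on /
    du -xsh /var/lib/mock /var/cache/mock 2>/dev/null
    mock --scrub=all                               # drop mock chroots and caches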

> Build artifacts failed - not enough free space on file system
> -
>
> Key: OVIRT-2725
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2725
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: infra
>
> I had this failure in yum install:
> [2019-05-12T14:54:55.706Z] Error Summary
> [2019-05-12T14:54:55.706Z] -
> [2019-05-12T14:54:55.706Z] Disk Requirements:
> [2019-05-12T14:54:55.706Z]At least 70MB more space needed on the /
> filesystem.
> Build:
> https://jenkins.ovirt.org/blue/organizations/jenkins/standard-manual-runner/detail/standard-manual-runner/247/pipeline
> Trying build again, hopefully will get a slave with more space...
> Nir



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100101)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/IDBU3KJ7QXYNSATY57RBREI2X5PWWAYX/


Re: [CQ]: 99533,1 (ovirt-vmconsole) failed "ovirt-4.2" system tests

2019-05-10 Thread Dafna Ron
We may need new lago images, as I think the selinux config is overwritten when
we upgrade to newer packages.

http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/4323/testReport/junit/(root)/001_upgrade_engine/running_tests___upgrade_from_prevrelease_suite_el7_x86_64___test_initialize_engine/
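
A quick way to confirm whether the image's selinux setup really is the problem
would be something along these lines on the failing suite host (a sketch; the
host name comes from the suite):

    getenforce                                # runtime mode
    grep '^SELINUX=' /etc/selinux/config      # persisted mode after the package upgrade
    ausearch -m avc -ts recent                # any fresh denials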

On Fri, May 10, 2019 at 2:25 PM oVirt Jenkins  wrote:

> Change 99533,1 (ovirt-vmconsole) is probably the reason behind recent
> system
> test failures in the "ovirt-4.2" change queue and needs to be fixed.
>
> This change had been removed from the testing queue. Artifacts build from
> this
> change will not be released until it is fixed.
>
> For further details about the change see:
> https://gerrit.ovirt.org/#/c/99533/1
>
> For failed test results see:
> http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/4323/
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/NI7TPHGBKRGWIFH4XHDKA72C74VS4DP7/
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/VDWLW6XCGNAGQ6NEXIULKKMXESQWCNY5/


Re: [ovirt-devel] Re: URGENT - ovirt-engine broken for 3 days Re: Subject: [ OST Failure Report ] [ oVirt Master (ovirt-engine) ] [ 05-05-2019 ] [ upgrade_hosts ]

2019-05-10 Thread Dafna Ron
we have a passing ovirt-engine build today.
Thank you all for a fast response.
Dafna


On Thu, May 9, 2019 at 12:43 PM Sandro Bonazzola 
wrote:

>
>
> Il giorno gio 9 mag 2019 alle ore 12:59 Dafna Ron  ha
> scritto:
>
>> As IL are on independence day, anyone else can merge?
>> https://gerrit.ovirt.org/#/c/99845/
>>
>>
> I have merge rights but I need at least CI to pass. Waiting on jenkins.
>
>
>>
>> On Thu, May 9, 2019 at 11:30 AM Dafna Ron  wrote:
>>
>>> Thanks Andrej.
>>> I will follow the patch and update.
>>> Dafna
>>>
>>> On Thu, May 9, 2019 at 11:23 AM Andrej Krejcir 
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> Ok, I have posted the reverting patch:
>>>> https://gerrit.ovirt.org/#/c/99845/
>>>>
>>>> I'm still investigating what the problem is. Sorry for the delay, we
>>>> had a public holiday yesterday.
>>>>
>>>>
>>>> Andrej
>>>>
>>>> On Thu, 9 May 2019 at 11:20, Dafna Ron  wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I have not heard back on this issue and ovirt-engine has been broken
>>>>> for the past 3 days.
>>>>>
>>>>> As this does not seem a simple debug and fix I suggest reverting the
>>>>> patch and investigating later.
>>>>>
>>>>> thanks,
>>>>> Dafna
>>>>>
>>>>>
>>>>>
>>>>> On Wed, May 8, 2019 at 9:42 AM Dafna Ron  wrote:
>>>>>
>>>>>> Any news?
>>>>>>
>>>>>> Thanks,
>>>>>> Dafna
>>>>>>
>>>>>>
>>>>>> On Tue, May 7, 2019 at 4:57 PM Dafna Ron  wrote:
>>>>>>
>>>>>>> thanks for the quick reply and investigation.
>>>>>>> Please update me if I can help any further and if you find the cause
>>>>>>> and have a patch let me know.
>>>>>>> Note that ovirt-engine project is broken and if we cannot find the
>>>>>>> cause relatively fast we should consider reverting the patch to allow a 
>>>>>>> new
>>>>>>> package to be built in CQ with other changes that were submitted.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Dafna
>>>>>>>
>>>>>>>
>>>>>>> On Tue, May 7, 2019 at 4:42 PM Andrej Krejcir 
>>>>>>> wrote:
>>>>>>>
>>>>>>>> After running a few OSTs manually, it seems that the patch is the
>>>>>>>> cause. Investigating...
>>>>>>>>
>>>>>>>> On Tue, 7 May 2019 at 14:58, Andrej Krejcir 
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> The issue is probably not caused by the patch.
>>>>>>>>>
>>>>>>>>> This log line means that the VM does not exist in the DB:
>>>>>>>>>
>>>>>>>>> 2019-05-07 06:02:04,215-04 WARN
>>>>>>>>> [org.ovirt.engine.core.bll.MigrateMultipleVmsCommand]
>>>>>>>>> (EE-ManagedThreadFactory-commandCoordinator-Thread-2) [33485140] 
>>>>>>>>> Validation
>>>>>>>>> of action 'MigrateMultipleVms' failed for user admin@internal-authz.
>>>>>>>>> Reasons: ACTION_TYPE_FAILED_VMS_NOT_FOUND
>>>>>>>>>
>>>>>>>>> I will investigate more, why the VM is missing.
>>>>>>>>>
>>>>>>>>> On Tue, 7 May 2019 at 14:07, Dafna Ron  wrote:
>>>>>>>>>
>>>>>>>>>> Hi,
>>>>>>>>>>
>>>>>>>>>> We are failing test upgrade_hosts on
>>>>>>>>>> upgrade-from-release-suite-master.
>>>>>>>>>> From the logs I can see that we are calling migrate vm when we
>>>>>>>>>> have only one host and the VM seems to have been shut down before the
>>>>>>>>>> maintenance call is issued.
>>>>>>>>>>
>>>>>>>>>> Can you please look into this?
>>>>>>>
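
On Andrej's point above that the VM no longer exists in the DB, one way to
check directly on the engine machine is a query like this (a sketch, assuming
the default 'engine' database and the standard vm_static table; the VM name is
a placeholder):

    su - postgres -c "psql engine -c \"select vm_guid, vm_name from vm_static where vm_name = 'VM_NAME';\""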

Re: [ OST Failure Report ] [ oVirt Master (vdsm) ] [ 10-05-2019 ] [ 004_basic_sanity.vm_run ]

2019-05-10 Thread Dafna Ron
On Fri, May 10, 2019 at 12:04 PM Milan Zamazal  wrote:

> Marcin Sobczyk  writes:
>
> > Hi,
> >
> > the mentioned patch only touches our yum plugin - I don't think it's
> > related to the failure.
> >
> > VDSM fails on 'VM.create' call - Milan, could you please take a quick
> > look at it?
>
> I merged a patch to master yesterday that disables legacy pre-XML Engine
> (< 4.2) support.  Engines and clusters < 4.2 are no longer supported in
> master.
>
> I can see the VM is run with a legacy configuration rather than with
> `xml' parameter.  I wonder why -- is perhaps the cluster level < 4.2?
> If so then the cluster level must be updated before the VM is run.
>

This is on the master branch, but it is happening in the upgrade suite.
However, the suite is upgrading from 4.3 to 4.4:

pre-reposync-config.repo -> ../common/yum-repos/ovirt-4.3-cq.repo
reposync-config.repo -> ../common/yum-repos/ovirt-master-cq.repo
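
If it turns out the cluster in the old data really is still below compatibility
level 4.2, bumping it before starting VMs can be done with a plain REST call
along these lines (a sketch; the engine URL, credentials, cluster id and target
level are placeholders):

    curl -k -u admin@internal:PASSWORD -X PUT \
        -H 'Content-Type: application/xml' \
        -d '<cluster><version><major>4</major><minor>3</minor></version></cluster>' \
        https://ENGINE_FQDN/ovirt-engine/api/clusters/CLUSTER_ID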



> > Regards, Marcin
> >
> > On 5/10/19 11:36 AM, Dafna Ron wrote:
> >> Hi,
> >>
> >> We are failing upgrade-from-release-suite.el7.x86_64 /
> >> 004_basic_sanity.vm_run
> >>
> >> The issue is an unexpected exception in vdsm.
> >>
> >> root cause based on CQ is this patch:
> >> https://gerrit.ovirt.org/#/c/99854/ - yum: Allow downloading only
> >> 'vdsm' package
> >>
> >> Logs can be found here:
> >>
> >>
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14079/artifact/upgrade-from-release-suite.el7.x86_64/test_logs/upgrade-from-release-suite-master/post-004_basic_sanity.py/
> >>
> >> Marcin, can you please take a look?
> >>
> >> error:
> >>
> >> vdsm:
> >>
> >> 2019-05-10 05:06:38,329-0400 ERROR (jsonrpc/0) [api] FINISH create
> >> error=Unexpected exception (api:131)
> >> Traceback (most recent call last):
> >>   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line
> >> 124, in method
> >> ret = func(*args, **kwargs)
> >>   File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 245, in
> create
> >> raise exception.UnexpectedError()
> >> UnexpectedError: Unexpected exception
> >> 2019-05-10 05:06:38,330-0400 INFO  (jsonrpc/0) [api.virt] FINISH
> >> create return={'status': {'message': 'Unexpected exception', 'code':
> >> 16}} from=:::192.168.201.2,39486,
> >> flow_id=83e4f0c4-a39a-45aa-891f-4022765e1a87,
> >> vmId=397013a0-b7d4-4d38-86c3-e944cebd75a7 (api:54)
> >> 2019-05-10 05:06:38,331-0400 INFO  (jsonrpc/0)
> >> [jsonrpc.JsonRpcServer] RPC call VM.create failed (error 16) in 0.00
> >> seconds (__init__:312)
> >> 2019-05-10 05:06:38,629-0400 INFO  (jsonrpc/7) [vdsm.api] FINISH
> >> getStorageDomainInfo return={'info': {'uuid':
> >> '24498263-8985-46a9-9161-f65ee776cb7f', 'vgMetadataDevice':
> >> '360014051f333158820a4cc6ab3be5b55', 'vguuid':
> >> 'rXTLNc-JwEz-0dtL-Z2EC-ASqn-gzkg-jDJa1b', 'metadataDevice':
> >> '360014051f333158820a4cc6ab3be5b55', 'state': 'OK', 'version': '4',
> >> 'role': 'Master', 'type': 'ISCSI', 'class': 'Data', 'pool':
> >> ['3e330122-c587-4078-b9a4-13dbb697c5cc'], 'name': 'iscsi'}}
> >> from=:::192.168.201.2,39486, flow_id=3f76d32a,
> >> task_id=084e7ee5-3c43-4905-833f-9a522043a862 (api:54)
> >>
> >> engine:
> >>
> >> 2019-05-10 05:06:38,328-04 ERROR
> >> [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (default task-1)
> >> [83e4f0c4-a39a-45aa-891f-4022765e1a87] VDS::create Failed creating
> >> vm 'v
> >> m0' in vds = '566eda0f-3ef0-4791-a618-1e649af4b0be' error =
> >> 'org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> >> VDSGenericException: VDSErrorException: Failed to C
> >> reateBrokerVDS, error = Unexpected exception, code = 16'
> >> 2019-05-10 05:06:38,328-04 INFO
> >> [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (default task-1)
> >> [83e4f0c4-a39a-45aa-891f-4022765e1a87] FINISH, CreateVDSCommand,
> >> return:
> >>  Down, log id: 3cb7c62d
> >> 2019-05-10 05:06:38,328-04 WARN
> >> [org.ovirt.engine.core.bll.RunVmCommand] (default task-1)
> >> [83e4f0c4-a39a-45aa-891f-4022765e1a87] Failed to run VM 'vm0':
> >> EngineException: or
> >> g.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> >> VDSGenericException: VDSErrorException: Failed to CreateBrokerVDS,
> >> error = Unexpected exception, code = 16 (Failed
> >>  with error unexpected

[ OST Failure Report ] [ oVirt Master (vdsm) ] [ 10-05-2019 ] [ 004_basic_sanity.vm_run ]

2019-05-10 Thread Dafna Ron
Hi,

We are failing  upgrade-from-release-suite.el7.x86_64 /
004_basic_sanity.vm_run

The issue is an unexpected exception in vdsm.

root cause based on CQ is this patch:
https://gerrit.ovirt.org/#/c/99854/ - yum: Allow downloading only 'vdsm'
package

Logs can be found here:

http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14079/artifact/upgrade-from-release-suite.el7.x86_64/test_logs/upgrade-from-release-suite-master/post-004_basic_sanity.py/

Marcin, can you please take a look?

error:

vdsm:

2019-05-10 05:06:38,329-0400 ERROR (jsonrpc/0) [api] FINISH create
error=Unexpected exception (api:131)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 124, in
method
ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 245, in create
raise exception.UnexpectedError()
UnexpectedError: Unexpected exception
2019-05-10 05:06:38,330-0400 INFO  (jsonrpc/0) [api.virt] FINISH create
return={'status': {'message': 'Unexpected exception', 'code': 16}}
from=:::192.168.201.2,39486,
flow_id=83e4f0c4-a39a-45aa-891f-4022765e1a87,
vmId=397013a0-b7d4-4d38-86c3-e944cebd75a7 (api:54)
2019-05-10 05:06:38,331-0400 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
call VM.create failed (error 16) in 0.00 seconds (__init__:312)
2019-05-10 05:06:38,629-0400 INFO  (jsonrpc/7) [vdsm.api] FINISH
getStorageDomainInfo return={'info': {'uuid':
'24498263-8985-46a9-9161-f65ee776cb7f', 'vgMetadataDevice':
'360014051f333158820a4cc6ab3be5b55', 'vguuid':
'rXTLNc-JwEz-0dtL-Z2EC-ASqn-gzkg-jDJa1b', 'metadataDevice':
'360014051f333158820a4cc6ab3be5b55', 'state': 'OK', 'version': '4', 'role':
'Master', 'type': 'ISCSI', 'class': 'Data', 'pool':
['3e330122-c587-4078-b9a4-13dbb697c5cc'], 'name': 'iscsi'}}
from=:::192.168.201.2,39486, flow_id=3f76d32a,
task_id=084e7ee5-3c43-4905-833f-9a522043a862 (api:54)

engine:

2019-05-10 05:06:38,328-04 ERROR
[org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (default task-1)
[83e4f0c4-a39a-45aa-891f-4022765e1a87] VDS::create Failed creating vm 'v
m0' in vds = '566eda0f-3ef0-4791-a618-1e649af4b0be' error =
'org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to C
reateBrokerVDS, error = Unexpected exception, code = 16'
2019-05-10 05:06:38,328-04 INFO
[org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (default task-1)
[83e4f0c4-a39a-45aa-891f-4022765e1a87] FINISH, CreateVDSCommand, return:
 Down, log id: 3cb7c62d
2019-05-10 05:06:38,328-04 WARN  [org.ovirt.engine.core.bll.RunVmCommand]
(default task-1) [83e4f0c4-a39a-45aa-891f-4022765e1a87] Failed to run VM
'vm0': EngineException: or
g.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to CreateBrokerVDS, error =
Unexpected exception, code = 16 (Failed
 with error unexpected and code 16)
2019-05-10 05:06:38,328-04 INFO
[org.ovirt.engine.core.bll.RunVmOnceCommand] (default task-1)
[83e4f0c4-a39a-45aa-891f-4022765e1a87] Lock freed to object
'EngineLock:{exclu
siveLocks='[397013a0-b7d4-4d38-86c3-e944cebd75a7=VM]', sharedLocks=''}'
2019-05-10 05:06:38,329-04 INFO  [org.ovirt.engine.core.bll.RunVmCommand]
(default task-1) [83e4f0c4-a39a-45aa-891f-4022765e1a87] Trying to rerun VM
'vm0'
2019-05-10 05:06:38,373-04 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-1) [83e4f0c4-a39a-45aa-891f-4022765e1a87] EVENT_ID: USE
R_INITIATED_RUN_VM_FAILED(151), Failed to run VM vm0 on Host
lago-upgrade-from-release-suite-master-host-0.
2019-05-10 05:06:38,381-04 INFO
[org.ovirt.engine.core.bll.RunVmOnceCommand] (default task-1)
[83e4f0c4-a39a-45aa-891f-4022765e1a87] Lock Acquired to object
'EngineLock:{ex
clusiveLocks='[397013a0-b7d4-4d38-86c3-e944cebd75a7=VM]', sharedLocks=''}'
2019-05-10 05:06:38,393-04 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default
task-1) [83e4f0c4-a39a-45aa-891f-4022765e1a87] START, IsVmDuringIn
itiatingVDSCommand(
IsVmDuringInitiatingVDSCommandParameters:{vmId='397013a0-b7d4-4d38-86c3-e944cebd75a7'}),
log id: 96be328
2019-05-10 05:06:38,393-04 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default
task-1) [83e4f0c4-a39a-45aa-891f-4022765e1a87] FINISH, IsVmDuringI
nitiatingVDSCommand, return: false, log id: 96be328
2019-05-10 05:06:38,403-04 WARN
[org.ovirt.engine.core.bll.RunVmOnceCommand] (default task-1)
[83e4f0c4-a39a-45aa-891f-4022765e1a87] Validation of action 'RunVmOnce'
failed
 for user admin@internal-authz. Reasons:
VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,SCHEDULING_NO_HOSTS
2019-05-10 05:06:38,404-04 INFO
[org.ovirt.engine.core.bll.RunVmOnceCommand] (default task-1)
[83e4f0c4-a39a-45aa-891f-4022765e1a87] Lock freed to object
'EngineLock:{exclu
siveLocks='[397013a0-b7d4-4d38-86c3-e944cebd75a7=VM]', sharedLocks=''}'
2019-05-10 05:06:38,413-04 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]

Re: URGENT - ovirt-engine broken for 3 days Re: Subject: [ OST Failure Report ] [ oVirt Master (ovirt-engine) ] [ 05-05-2019 ] [ upgrade_hosts ]

2019-05-09 Thread Dafna Ron
As IL are on independence day, anyone else can merge?
https://gerrit.ovirt.org/#/c/99845/


On Thu, May 9, 2019 at 11:30 AM Dafna Ron  wrote:

> Thanks Andrej.
> I will follow the patch and update.
> Dafna
>
> On Thu, May 9, 2019 at 11:23 AM Andrej Krejcir 
> wrote:
>
>> Hi,
>>
>> Ok, I have posted the reverting patch:
>> https://gerrit.ovirt.org/#/c/99845/
>>
>> I'm still investigating what the problem is. Sorry for the delay, we had
>> a public holiday yesterday.
>>
>>
>> Andrej
>>
>> On Thu, 9 May 2019 at 11:20, Dafna Ron  wrote:
>>
>>> Hi,
>>>
>>> I have not heard back on this issue and ovirt-engine has been broken for
>>> the past 3 days.
>>>
>>> As this does not seem a simple debug and fix I suggest reverting the
>>> patch and investigating later.
>>>
>>> thanks,
>>> Dafna
>>>
>>>
>>>
>>> On Wed, May 8, 2019 at 9:42 AM Dafna Ron  wrote:
>>>
>>>> Any news?
>>>>
>>>> Thanks,
>>>> Dafna
>>>>
>>>>
>>>> On Tue, May 7, 2019 at 4:57 PM Dafna Ron  wrote:
>>>>
>>>>> thanks for the quick reply and investigation.
>>>>> Please update me if I can help any further and if you find the cause
>>>>> and have a patch let me know.
>>>>> Note that ovirt-engine project is broken and if we cannot find the
>>>>> cause relatively fast we should consider reverting the patch to allow a 
>>>>> new
>>>>> package to be built in CQ with other changes that were submitted.
>>>>>
>>>>> Thanks,
>>>>> Dafna
>>>>>
>>>>>
>>>>> On Tue, May 7, 2019 at 4:42 PM Andrej Krejcir 
>>>>> wrote:
>>>>>
>>>>>> After running a few OSTs manually, it seems that the patch is the
>>>>>> cause. Investigating...
>>>>>>
>>>>>> On Tue, 7 May 2019 at 14:58, Andrej Krejcir 
>>>>>> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> The issue is probably not caused by the patch.
>>>>>>>
>>>>>>> This log line means that the VM does not exist in the DB:
>>>>>>>
>>>>>>> 2019-05-07 06:02:04,215-04 WARN
>>>>>>> [org.ovirt.engine.core.bll.MigrateMultipleVmsCommand]
>>>>>>> (EE-ManagedThreadFactory-commandCoordinator-Thread-2) [33485140] 
>>>>>>> Validation
>>>>>>> of action 'MigrateMultipleVms' failed for user admin@internal-authz.
>>>>>>> Reasons: ACTION_TYPE_FAILED_VMS_NOT_FOUND
>>>>>>>
>>>>>>> I will investigate more, why the VM is missing.
>>>>>>>
>>>>>>> On Tue, 7 May 2019 at 14:07, Dafna Ron  wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> We are failing test upgrade_hosts on
>>>>>>>> upgrade-from-release-suite-master.
>>>>>>>> From the logs I can see that we are calling migrate vm when we have
>>>>>>>> only one host and the VM seems to have been shut down before the 
>>>>>>>> maintenance
>>>>>>>> call is issued.
>>>>>>>>
>>>>>>>> Can you please look into this?
>>>>>>>>
>>>>>>>> suspected patch reported as root cause by CQ is:
>>>>>>>>
>>>>>>>> https://gerrit.ovirt.org/#/c/98920/ - core: Add MigrateMultipleVms
>>>>>>>> command and use it for host maintenance
>>>>>>>>
>>>>>>>>
>>>>>>>> logs are found here:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14021/artifact/upgrade-from-release-suite.el7.x86_64/test_logs/upgrade-from-release-suite-master/post-004_basic_sanity.py/
>>>>>>>>
>>>>>>>>
>>>>>>>> I can see the issue is vm migration when putting host in
>>>>>>>> maintenance:
>>>>>>>>
>>>>>>>>
>>>>>>>> 2019-05-07 06:02:04,170-04 INFO

Re: URGENT - ovirt-engine broken for 3 days Re: Subject: [ OST Failure Report ] [ oVirt Master (ovirt-engine) ] [ 05-05-2019 ] [ upgrade_hosts ]

2019-05-09 Thread Dafna Ron
Thanks Andrej.
I will follow the patch and update.
Dafna

On Thu, May 9, 2019 at 11:23 AM Andrej Krejcir  wrote:

> Hi,
>
> Ok, I have posted the reverting patch: https://gerrit.ovirt.org/#/c/99845/
>
> I'm still investigating what the problem is. Sorry for the delay, we had a
> public holiday yesterday.
>
>
> Andrej
>
> On Thu, 9 May 2019 at 11:20, Dafna Ron  wrote:
>
>> Hi,
>>
>> I have not heard back on this issue and ovirt-engine has been broken for
>> the past 3 days.
>>
>> As this does not seem a simple debug and fix I suggest reverting the
>> patch and investigating later.
>>
>> thanks,
>> Dafna
>>
>>
>>
>> On Wed, May 8, 2019 at 9:42 AM Dafna Ron  wrote:
>>
>>> Any news?
>>>
>>> Thanks,
>>> Dafna
>>>
>>>
>>> On Tue, May 7, 2019 at 4:57 PM Dafna Ron  wrote:
>>>
>>>> thanks for the quick reply and investigation.
>>>> Please update me if I can help any further and if you find the cause
>>>> and have a patch let me know.
>>>> Note that ovirt-engine project is broken and if we cannot find the
>>>> cause relatively fast we should consider reverting the patch to allow a new
>>>> package to be built in CQ with other changes that were submitted.
>>>>
>>>> Thanks,
>>>> Dafna
>>>>
>>>>
>>>> On Tue, May 7, 2019 at 4:42 PM Andrej Krejcir 
>>>> wrote:
>>>>
>>>>> After running a few OSTs manually, it seems that the patch is the
>>>>> cause. Investigating...
>>>>>
>>>>> On Tue, 7 May 2019 at 14:58, Andrej Krejcir 
>>>>> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> The issue is probably not caused by the patch.
>>>>>>
>>>>>> This log line means that the VM does not exist in the DB:
>>>>>>
>>>>>> 2019-05-07 06:02:04,215-04 WARN
>>>>>> [org.ovirt.engine.core.bll.MigrateMultipleVmsCommand]
>>>>>> (EE-ManagedThreadFactory-commandCoordinator-Thread-2) [33485140] 
>>>>>> Validation
>>>>>> of action 'MigrateMultipleVms' failed for user admin@internal-authz.
>>>>>> Reasons: ACTION_TYPE_FAILED_VMS_NOT_FOUND
>>>>>>
>>>>>> I will investigate more, why the VM is missing.
>>>>>>
>>>>>> On Tue, 7 May 2019 at 14:07, Dafna Ron  wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> We are failing test upgrade_hosts on
>>>>>>> upgrade-from-release-suite-master.
>>>>>>> From the logs I can see that we are calling migrate vm when we have
>>>>>>> only one host and the VM seems to have been shut down before the 
>>>>>>> maintenance
>>>>>>> call is issued.
>>>>>>>
>>>>>>> Can you please look into this?
>>>>>>>
>>>>>>> suspected patch reported as root cause by CQ is:
>>>>>>>
>>>>>>> https://gerrit.ovirt.org/#/c/98920/ - core: Add MigrateMultipleVms
>>>>>>> command and use it for host maintenance
>>>>>>>
>>>>>>>
>>>>>>> logs are found here:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14021/artifact/upgrade-from-release-suite.el7.x86_64/test_logs/upgrade-from-release-suite-master/post-004_basic_sanity.py/
>>>>>>>
>>>>>>>
>>>>>>> I can see the issue is vm migration when putting host in
>>>>>>> maintenance:
>>>>>>>
>>>>>>>
>>>>>>> 2019-05-07 06:02:04,170-04 INFO
>>>>>>> [org.ovirt.engine.core.bll.MaintenanceVdsCommand]
>>>>>>> (EE-ManagedThreadFactory-commandCoordinator-Thread-2)
>>>>>>> [05592db2-f859-487b-b779-4b32eec5bab
>>>>>>> 3] Running command: MaintenanceVdsCommand internal: true. Entities
>>>>>>> affected : ID: 38e1379b-c3b6-4a2e-91df-d1f346e414a9 Type: VDS
>>>>>>> 2019-05-07 06:02:04,215-04 WARN
>>>>>>> [org.ovirt.engine.core.bll.MigrateMultipleVmsCommand]
>>

URGENT - ovirt-engine broken for 3 days Re: Subject: [ OST Failure Report ] [ oVirt Master (ovirt-engine) ] [ 05-05-2019 ] [ upgrade_hosts ]

2019-05-09 Thread Dafna Ron
Hi,

I have not heard back on this issue and ovirt-engine has been broken for
the past 3 days.

As this does not seem a simple debug and fix I suggest reverting the patch
and investigating later.

thanks,
Dafna
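
For whoever picks this up, preparing the revert is mechanical (a sketch; the
sha of the merged commit and the topic branch name are placeholders):

    git checkout -b revert-98920 origin/master
    git revert --no-edit MERGED_COMMIT_SHA
    git push origin HEAD:refs/for/master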



On Wed, May 8, 2019 at 9:42 AM Dafna Ron  wrote:

> Any news?
>
> Thanks,
> Dafna
>
>
> On Tue, May 7, 2019 at 4:57 PM Dafna Ron  wrote:
>
>> thanks for the quick reply and investigation.
>> Please update me if I can help any further and if you find the cause and
>> have a patch let me know.
>> Note that ovirt-engine project is broken and if we cannot find the cause
>> relatively fast we should consider reverting the patch to allow a new
>> package to be built in CQ with other changes that were submitted.
>>
>> Thanks,
>> Dafna
>>
>>
>> On Tue, May 7, 2019 at 4:42 PM Andrej Krejcir 
>> wrote:
>>
>>> After running a few OSTs manually, it seems that the patch is the cause.
>>> Investigating...
>>>
>>> On Tue, 7 May 2019 at 14:58, Andrej Krejcir  wrote:
>>>
>>>> Hi,
>>>>
>>>> The issue is probably not caused by the patch.
>>>>
>>>> This log line means that the VM does not exist in the DB:
>>>>
>>>> 2019-05-07 06:02:04,215-04 WARN
>>>> [org.ovirt.engine.core.bll.MigrateMultipleVmsCommand]
>>>> (EE-ManagedThreadFactory-commandCoordinator-Thread-2) [33485140] Validation
>>>> of action 'MigrateMultipleVms' failed for user admin@internal-authz.
>>>> Reasons: ACTION_TYPE_FAILED_VMS_NOT_FOUND
>>>>
>>>> I will investigate more, why the VM is missing.
>>>>
>>>> On Tue, 7 May 2019 at 14:07, Dafna Ron  wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> We are failing test upgrade_hosts on
>>>>> upgrade-from-release-suite-master.
>>>>> From the logs I can see that we are calling migrate vm when we have
>>>>> only one host and the VM seems to have been shut down before the 
>>>>> maintenance
>>>>> call is issued.
>>>>>
>>>>> Can you please look into this?
>>>>>
>>>>> suspected patch reported as root cause by CQ is:
>>>>>
>>>>> https://gerrit.ovirt.org/#/c/98920/ - core: Add MigrateMultipleVms
>>>>> command and use it for host maintenance
>>>>>
>>>>>
>>>>> logs are found here:
>>>>>
>>>>>
>>>>>
>>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14021/artifact/upgrade-from-release-suite.el7.x86_64/test_logs/upgrade-from-release-suite-master/post-004_basic_sanity.py/
>>>>>
>>>>>
>>>>> I can see the issue is vm migration when putting host in maintenance:
>>>>>
>>>>>
>>>>> 2019-05-07 06:02:04,170-04 INFO
>>>>> [org.ovirt.engine.core.bll.MaintenanceVdsCommand]
>>>>> (EE-ManagedThreadFactory-commandCoordinator-Thread-2)
>>>>> [05592db2-f859-487b-b779-4b32eec5bab
>>>>> 3] Running command: MaintenanceVdsCommand internal: true. Entities
>>>>> affected : ID: 38e1379b-c3b6-4a2e-91df-d1f346e414a9 Type: VDS
>>>>> 2019-05-07 06:02:04,215-04 WARN
>>>>> [org.ovirt.engine.core.bll.MigrateMultipleVmsCommand]
>>>>> (EE-ManagedThreadFactory-commandCoordinator-Thread-2) [33485140] 
>>>>> Validation
>>>>> of action
>>>>> 'MigrateMultipleVms' failed for user admin@internal-authz. Reasons:
>>>>> ACTION_TYPE_FAILED_VMS_NOT_FOUND
>>>>> 2019-05-07 06:02:04,221-04 ERROR
>>>>> [org.ovirt.engine.core.bll.MaintenanceVdsCommand]
>>>>> (EE-ManagedThreadFactory-commandCoordinator-Thread-2) [33485140] Failed to
>>>>> migrate one or
>>>>> more VMs.
>>>>> 2019-05-07 06:02:04,227-04 ERROR
>>>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>>>> (EE-ManagedThreadFactory-commandCoordinator-Thread-2) [33485140] EVEN
>>>>> T_ID: VDS_MAINTENANCE_FAILED(17), Failed to switch Host
>>>>> lago-upgrade-from-release-suite-master-host-0 to Maintenance mode.
>>>>> 2019-05-07 06:02:04,239-04 INFO
>>>>> [org.ovirt.engine.core.bll.ActivateVdsCommand]
>>>>> (EE-ManagedThreadFactory-commandCoordinator-Thread-2) [70840477] Lock
>>>>> Acquired to object 'Eng
>>>>> ineLock:{exclusiveLocks='

Re: [CQ]: 99533,1 (ovirt-vmconsole) failed "ovirt-4.2" system tests

2019-05-08 Thread Dafna Ron
missing packages.
patch submitted:
https://gerrit.ovirt.org/#/c/99833/

On Wed, May 8, 2019 at 2:47 PM oVirt Jenkins  wrote:

> Change 99533,1 (ovirt-vmconsole) is probably the reason behind recent
> system
> test failures in the "ovirt-4.2" change queue and needs to be fixed.
>
> This change had been removed from the testing queue. Artifacts build from
> this
> change will not be released until it is fixed.
>
> For further details about the change see:
> https://gerrit.ovirt.org/#/c/99533/1
>
> For failed test results see:
> http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/4297/
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/CZQZYYNXLI627X4DZX355OZBQXZJ2VXQ/
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/GKVMBIEVA7D6672SVAGSGP3NPOQ4/


[JIRA] (OVIRT-2724) Re: Please add me jenkins 'dev' role

2019-05-08 Thread Dafna Ron (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dafna Ron reassigned OVIRT-2724:


Assignee: Dafna Ron  (was: infra)

> Re: Please add me jenkins 'dev' role
> 
>
> Key: OVIRT-2724
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2724
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Amit Bawer
>    Assignee: Dafna Ron
>
> Hi,
> seems I am still unable to run OST on ovirt jenkins:
> "abawer is missing the Job/Build permission"
> Please add the required permissions for user *abawer*
> Thanks
> Amit
> On Tue, Apr 23, 2019 at 12:51 PM Amit Bawer  wrote:
> > Hi
> >
> > usename: *abawer*
> > will need a jenkins 'dev' role for OST executions.
> >
> >
> > Thanks,
> > Amit
> >



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100100)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/FXWHQXUIVNICPU64LI5YLNNG7Z6NLEZ2/


[JIRA] (OVIRT-2724) Re: Please add me jenkins 'dev' role

2019-05-08 Thread Dafna Ron (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39300#comment-39300
 ] 

Dafna Ron commented on OVIRT-2724:
--

added.
please check now

> Re: Please add me jenkins 'dev' role
> 
>
> Key: OVIRT-2724
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2724
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Amit Bawer
>Assignee: infra
>
> Hi,
> seems I am still unable to run OST on ovirt jenkins:
> "abawer is missing the Job/Build permission"
> Please add the required permissions for user *abawer*
> Thanks
> Amit
> On Tue, Apr 23, 2019 at 12:51 PM Amit Bawer  wrote:
> > Hi
> >
> > username: *abawer*
> > will need a jenkins 'dev' role for OST executions.
> >
> >
> > Thanks,
> > Amit
> >



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100100)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/VUGS5TBJZYQO3RC6KL26VELZZBAIAEBF/


[JIRA] (OVIRT-2722) Jenkins is very slow now - can someone restart it?

2019-05-08 Thread Dafna Ron (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39299#comment-39299
 ] 

Dafna Ron commented on OVIRT-2722:
--

We try to avoid a restart without a planned maintenance, but I also see this was 
sorted and Jenkins is not slow anymore (at least not for me). 
I am leaving this open, as I don't know whether anyone restarted it, and perhaps 
Evgheni can take a look once he's back to see if this is a known issue. 
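
If a restart does get scheduled, doing it as a safe restart avoids killing
running builds, and the slowness itself can be tracked with a simple timing
probe (a sketch; the jenkins-cli jar comes from the server and the credentials
are placeholders):

    # response-time probe, comparable to Nir's measurement below
    time curl -s -o /dev/null https://jenkins.ovirt.org/
    # safe restart waits for running builds to finish before restarting
    java -jar jenkins-cli.jar -s https://jenkins.ovirt.org/ -auth USER:API_TOKEN safe-restart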

> Jenkins is very slow now - can someone restart it?
> --
>
> Key: OVIRT-2722
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2722
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: infra
>
> $ time curl -s https://jenkins.ovirt.org/job/ovirt-system-tests_manual/
> >/dev/null
> real 0m27.380s
> user 0m0.038s
> sys 0m0.028s
> For reference:
> $ time curl -s https://travis-ci.org/oVirt/vdsm/builds >/dev/null
> real 0m0.614s
> user 0m0.027s
> sys 0m0.015s
> Can someone restart jenkins?



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100100)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/EZNXWSSXBOKKZMV3WJON2YQDGJUAROVQ/


Re: Subject: [ OST Failure Report ] [ oVirt Master (ovirt-engine) ] [ 05-05-2019 ] [ upgrade_hosts ]

2019-05-08 Thread Dafna Ron
Any news?

Thanks,
Dafna


On Tue, May 7, 2019 at 4:57 PM Dafna Ron  wrote:

> thanks for the quick reply and investigation.
> Please update me if I can help any further and if you find the cause and
> have a patch let me know.
> Note that ovirt-engine project is broken and if we cannot find the cause
> relatively fast we should consider reverting the patch to allow a new
> package to be built in CQ with other changes that were submitted.
>
> Thanks,
> Dafna
>
>
> On Tue, May 7, 2019 at 4:42 PM Andrej Krejcir  wrote:
>
>> After running a few OSTs manually, it seems that the patch is the cause.
>> Investigating...
>>
>> On Tue, 7 May 2019 at 14:58, Andrej Krejcir  wrote:
>>
>>> Hi,
>>>
>>> The issue is probably not caused by the patch.
>>>
>>> This log line means that the VM does not exist in the DB:
>>>
>>> 2019-05-07 06:02:04,215-04 WARN
>>> [org.ovirt.engine.core.bll.MigrateMultipleVmsCommand]
>>> (EE-ManagedThreadFactory-commandCoordinator-Thread-2) [33485140] Validation
>>> of action 'MigrateMultipleVms' failed for user admin@internal-authz.
>>> Reasons: ACTION_TYPE_FAILED_VMS_NOT_FOUND
>>>
>>> I will investigate more, why the VM is missing.
>>>
>>> On Tue, 7 May 2019 at 14:07, Dafna Ron  wrote:
>>>
>>>> Hi,
>>>>
>>>> We are failing test upgrade_hosts on upgrade-from-release-suite-master.
>>>> From the logs I can see that we are calling migrate vm when we have
>>>> only one host and the VM seems to have been shut down before the maintenance
>>>> call is issued.
>>>>
>>>> Can you please look into this?
>>>>
>>>> suspected patch reported as root cause by CQ is:
>>>>
>>>> https://gerrit.ovirt.org/#/c/98920/ - core: Add MigrateMultipleVms
>>>> command and use it for host maintenance
>>>>
>>>>
>>>> logs are found here:
>>>>
>>>>
>>>>
>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/14021/artifact/upgrade-from-release-suite.el7.x86_64/test_logs/upgrade-from-release-suite-master/post-004_basic_sanity.py/
>>>>
>>>>
>>>> I can see the issue is vm migration when putting host in maintenance:
>>>>
>>>>
>>>> 2019-05-07 06:02:04,170-04 INFO
>>>> [org.ovirt.engine.core.bll.MaintenanceVdsCommand]
>>>> (EE-ManagedThreadFactory-commandCoordinator-Thread-2)
>>>> [05592db2-f859-487b-b779-4b32eec5bab
>>>> 3] Running command: MaintenanceVdsCommand internal: true. Entities
>>>> affected : ID: 38e1379b-c3b6-4a2e-91df-d1f346e414a9 Type: VDS
>>>> 2019-05-07 06:02:04,215-04 WARN
>>>> [org.ovirt.engine.core.bll.MigrateMultipleVmsCommand]
>>>> (EE-ManagedThreadFactory-commandCoordinator-Thread-2) [33485140] Validation
>>>> of action
>>>> 'MigrateMultipleVms' failed for user admin@internal-authz. Reasons:
>>>> ACTION_TYPE_FAILED_VMS_NOT_FOUND
>>>> 2019-05-07 06:02:04,221-04 ERROR
>>>> [org.ovirt.engine.core.bll.MaintenanceVdsCommand]
>>>> (EE-ManagedThreadFactory-commandCoordinator-Thread-2) [33485140] Failed to
>>>> migrate one or
>>>> more VMs.
>>>> 2019-05-07 06:02:04,227-04 ERROR
>>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>>> (EE-ManagedThreadFactory-commandCoordinator-Thread-2) [33485140] EVEN
>>>> T_ID: VDS_MAINTENANCE_FAILED(17), Failed to switch Host
>>>> lago-upgrade-from-release-suite-master-host-0 to Maintenance mode.
>>>> 2019-05-07 06:02:04,239-04 INFO
>>>> [org.ovirt.engine.core.bll.ActivateVdsCommand]
>>>> (EE-ManagedThreadFactory-commandCoordinator-Thread-2) [70840477] Lock
>>>> Acquired to object 'Eng
>>>> ineLock:{exclusiveLocks='[38e1379b-c3b6-4a2e-91df-d1f346e414a9=VDS]',
>>>> sharedLocks=''}'
>>>> 2019-05-07 06:02:04,242-04 INFO
>>>> [org.ovirt.engine.core.bll.ActivateVdsCommand]
>>>> (EE-ManagedThreadFactory-commandCoordinator-Thread-2) [70840477] Running
>>>> command: ActivateVds
>>>> Command internal: true. Entities affected : ID:
>>>> 38e1379b-c3b6-4a2e-91df-d1f346e414a9 Type: VDSAction group MANIPULATE_HOST
>>>> with role type ADMIN
>>>> 2019-05-07 06:02:04,243-04 INFO
>>>> [org.ovirt.engine.core.bll.ActivateVdsCommand]
>>>> (EE-ManagedThreadFactory-commandCoordinator-Thread-2) [70840477] Before
>>

  1   2   3   4   5   6   7   8   9   10   >