[JIRA] (OVIRT-3049) ovirt-ansible-collection: Automatic bugzilla linking

2020-10-26 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-3049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=40884#comment-40884
 ] 

Barak Korren commented on OVIRT-3049:
-

It's done by the Gerrit hooks, no? Theoretically you could run similar code in a 
GitHub Action to get similar results.
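
For illustration, a GitHub Action could do roughly this (a sketch only: it 
assumes a Bugzilla API key is exposed to the workflow as BUGZILLA_API_KEY, and 
the Bug-Url extraction and comment text are illustrative; no such hook exists 
yet):

    # Runs in a pull_request workflow; GITHUB_EVENT_PATH is set by the runner
    pr_url=$(jq -r '.pull_request.html_url' "$GITHUB_EVENT_PATH")
    bug_id=$(jq -r '.pull_request.body' "$GITHUB_EVENT_PATH" \
        | grep -oP 'Bug-Url:.*show_bug\.cgi\?id=\K[0-9]+' | head -1)
    if [ -n "$bug_id" ]; then
        # Bugzilla 5.x REST API: add a public comment to the referenced bug
        curl -s -X POST -H 'Content-Type: application/json' \
            -d "{\"comment\": \"Referenced in PR: ${pr_url}\"}" \
            "https://bugzilla.redhat.com/rest/bug/${bug_id}/comment?api_key=${BUGZILLA_API_KEY}"
    fi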

> ovirt-ansible-collection: Automatic bugzilla linking
> 
>
> Key: OVIRT-3049
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-3049
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>Reporter: Yedidyah Bar David
>Assignee: infra
>
> Hi all,
> I now pushed my first PR to ovirt-ansible-collection [1].
> It has a Bug-Url link to [2].
> The bug wasn't updated with a link to the PR. Should it? Please make
> it so that it does.
> Also, perhaps, when creating new projects, do this automatically for
> them (and ask what bugzilla product should be affected).
> Thanks and best regards,
> [1] https://github.com/oVirt/ovirt-ansible-collection/pull/151
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1844965
> -- 
> Didi



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100149)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/FKW3OIPJYN27VVWFIPAHK323S6AS4JPF/


Re: otopi and change-queue

2020-05-17 Thread Barak Korren
We disabled all the test suites in CQ, everything gets published to `tested`
immediately.

On Sun, 17 May 2020 at 13:04, Yedidyah Bar David  wrote:

> Hi all,
>
> I recently merged a patch to otopi:
>
> https://gerrit.ovirt.org/#/c/108590/
>
> It failed in CQ:
>
> https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/23512/
>
> Now commented there "ci re-merge", and it passed:
>
> https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/23736/
>
> But didn't run basic-suite-master, and I do not understand if it will
> be published.
>
> Was anything changed in how change-queue works?
>
> Should I do anything else to make sure it's published?
>
> Thanks,
> --
> Didi
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/5QJCEAVOEDOYLWRWUL5IG5U3OYRG6ZRP/
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/KVNNLSHD7BQBLHNCJWRW3F2LDRWQAGOC/


[JIRA] (OVIRT-2924) Update mock_runner to support the new `--isolation` option for `mock`

2020-05-12 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2924:

Summary: Update mock_runner to support the new `--isolation` option for 
`mock`  (was: Fwd: Problem with the mock parameters)

> Update mock_runner to support the new `--isolation` option for `mock`
> -
>
> Key: OVIRT-2924
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2924
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>  Components: mock_runner
>Reporter: Anton Marchukov
>Assignee: infra
>
> Forwarding to infra-support for analysis.
> -- Forwarded message -
> From: Lev Veyde 
> Date: Thu, Apr 23, 2020 at 2:02 PM
> Subject: Problem with the mock parameters
> To: infra 
> Cc: Sandro Bonazzola 
> Hi,
> it looks like we have some issues in the mock parameters on some of the
> slaves
> i.e.:
> https://jenkins.ovirt.org/job/ovirt-release_standard-check-patch/387/consoleFull
> note the ERROR: Option --old-chroot has been deprecated. Use
> --isolation=simple instead.
> Thanks in advance,
> -- 
> Lev Veyde
> Senior Software Engineer, RHCE | RHCVA | MCITP
> Red Hat Israel
> <https://www.redhat.com>
> l...@redhat.com | lve...@redhat.com
> <https://red.ht/sig>
> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/YI577UNFDVGDBXIOKOEEI5G5S2UWXDCA/
> -- 
> Anton Marchukov
> Associate Manager - RHV DevOps - Red Hat



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100126)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/HWFUTXO4D3O3GI4YTZ36RY7FCKDVDMCN/


[JIRA] (OVIRT-2924) Fwd: Problem with the mock parameters

2020-05-12 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2924:

Issue Type: Bug  (was: By-EMAIL)

> Fwd: Problem with the mock parameters
> -
>
> Key: OVIRT-2924
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2924
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>  Components: mock_runner
>Reporter: Anton Marchukov
>Assignee: infra
>
> Forwarding to infra-support for analysis.
> -- Forwarded message -
> From: Lev Veyde 
> Date: Thu, Apr 23, 2020 at 2:02 PM
> Subject: Problem with the mock parameters
> To: infra 
> Cc: Sandro Bonazzola 
> Hi,
> it looks like we have some issues in the mock parameters on some of the
> slaves
> i.e.:
> https://jenkins.ovirt.org/job/ovirt-release_standard-check-patch/387/consoleFull
> note the ERROR: Option --old-chroot has been deprecated. Use
> --isolation=simple instead.
> Thanks in advance,
> -- 
> Lev Veyde
> Senior Software Engineer, RHCE | RHCVA | MCITP
> Red Hat Israel
> <https://www.redhat.com>
> l...@redhat.com | lve...@redhat.com
> <https://red.ht/sig>
> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/YI577UNFDVGDBXIOKOEEI5G5S2UWXDCA/
> -- 
> Anton Marchukov
> Associate Manager - RHV DevOps - Red Hat



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100126)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/SMSETJV63VXSA4ZTDQVNQIK7E2BNNJ2J/


[JIRA] (OVIRT-2924) Fwd: Problem with the mock parameters

2020-05-12 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2924:

Components: mock_runner

> Fwd: Problem with the mock parameters
> -
>
> Key: OVIRT-2924
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2924
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>  Components: mock_runner
>Reporter: Anton Marchukov
>Assignee: infra
>
> Forwarding to infra-support for analysis.
> -- Forwarded message -
> From: Lev Veyde 
> Date: Thu, Apr 23, 2020 at 2:02 PM
> Subject: Problem with the mock parameters
> To: infra 
> Cc: Sandro Bonazzola 
> Hi,
> it looks like we have some issues in the mock parameters on some of the
> slaves
> i.e.:
> https://jenkins.ovirt.org/job/ovirt-release_standard-check-patch/387/consoleFull
> note the ERROR: Option --old-chroot has been deprecated. Use
> --isolation=simple instead.
> Thanks in advance,
> -- 
> Lev Veyde
> Senior Software Engineer, RHCE | RHCVA | MCITP
> Red Hat Israel
> <https://www.redhat.com>
> l...@redhat.com | lve...@redhat.com
> <https://red.ht/sig>
> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/YI577UNFDVGDBXIOKOEEI5G5S2UWXDCA/
> -- 
> Anton Marchukov
> Associate Manager - RHV DevOps - Red Hat



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100126)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/WNVCWMGC7TAS5KRZEPPW2AD7552R2UZY/


[JIRA] (OVIRT-2935) Jenkins succeeded, Code-Review +1

2020-05-06 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=40465#comment-40465
 ] 

Barak Korren commented on OVIRT-2935:
-

It seems the Gerrit trigger configuration was reset to its default 
configuration instead of our custom one. I restored it manually. 
[~accountid:557058:caa507e4-2696-4f45-8da5-d2585a4bb794] please find out the 
root cause for this.

> Jenkins succeeded, Code-Review +1
> -
>
> Key: OVIRT-2935
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2935
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: infra
>
> It is nice to get good review from Jenkins, but I don't think it is smart 
> enough
> to do code review, and it should really use Continuous-Integration+1.
> Jenkins CI
> Patch Set 2: Code-Review+1
> Build Successful
> https://jenkins.ovirt.org/job/ovirt-imageio_standard-check-patch/2772/ : 
> SUCCESS
> See https://gerrit.ovirt.org/c/108772/2#message-6225787b_94ed3eac



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100126)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/DHPPL7OKOELLDKVIJ6GGZ4PULRP2FI47/


[JIRA] (OVIRT-2935) Jenkins succeeded, Code-Review +1

2020-05-06 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren reassigned OVIRT-2935:
---

Assignee: Evgheni Dereveanchin  (was: infra)

> Jenkins succeeded, Code-Review +1
> -
>
> Key: OVIRT-2935
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2935
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: Evgheni Dereveanchin
>
> It is nice to get good review from Jenkins, but I don't think it is smart 
> enough
> to do code review, and it should really use Continuous-Integration+1.
> Jenkins CI
> Patch Set 2: Code-Review+1
> Build Successful
> https://jenkins.ovirt.org/job/ovirt-imageio_standard-check-patch/2772/ : 
> SUCCESS
> See https://gerrit.ovirt.org/c/108772/2#message-6225787b_94ed3eac



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100126)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/PXU4FTG3H4L3XEBVAUJJRBKWLGLON5PY/


[JIRA] (OVIRT-2935) Jenkins succeeded, Code-Review +1

2020-05-06 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=40462#comment-40462
 ] 

Barak Korren commented on OVIRT-2935:
-

Looks like someone messed up the Gerrit trigger configuration, 
[~accountid:5aa0f39f5a4d022884128a0f], [~accountid:5dbc31f88704ba0dab2444b3], 
[~accountid:557058:caa507e4-2696-4f45-8da5-d2585a4bb794], 
[~accountid:557058:5ca52a09-2675-4285-a044-12ad20f6166a] any idea?

> Jenkins succeeded, Code-Review +1
> -
>
> Key: OVIRT-2935
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2935
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: infra
>
> It is nice to get good review from Jenkins, but I don't think it is smart 
> enough
> to do code review, and it should really use Continuous-Integration+1.
> Jenkins CI
> Patch Set 2: Code-Review+1
> Build Successful
> https://jenkins.ovirt.org/job/ovirt-imageio_standard-check-patch/2772/ : 
> SUCCESS
> See https://gerrit.ovirt.org/c/108772/2#message-6225787b_94ed3eac



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100126)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/26KSAGWQ3X2Q5TFZ3WQ5K7EJB4DANSA7/


[JIRA] (OVIRT-2924) Fwd: Problem with the mock parameters

2020-04-25 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=40387#comment-40387
 ] 

Barak Korren commented on OVIRT-2924:
-

We need to ensure the new `{{--isolation}}` option still enables one to use 
chroots with mock, as opposed to systemd-nspawn containers.

If it does support chroot, you can enable it by making `{{mock_runner.sh}}` 
check the output of `{{mock -h}}` and use the new option if it's found there. We 
already had code in it that does something like that from the last time the mock 
CLI was changed.
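
For illustration, the detection could be as simple as this (a sketch, not the 
actual {{mock_runner.sh}} code):

    # Pick the right chroot flag based on what the installed mock supports
    if mock -h 2>&1 | grep -q -- '--isolation'; then
        MOCK_CHROOT_OPT='--isolation=simple'
    else
        MOCK_CHROOT_OPT='--old-chroot'
    fi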

If the chroot support was removed we have two options:
# Keep an older version of mock somewhere and use that
# Port our code to work with systemd-nspawn. I used to have an epic about this 
in Jira that detailed what we would need to fix to make that happen. I think 
most of the more serious fixes are in place already anyway.

> Fwd: Problem with the mock parameters
> -
>
> Key: OVIRT-2924
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2924
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Anton Marchukov
>Assignee: infra
>
> Forwarding to infra-support for analysis.
> -- Forwarded message -
> From: Lev Veyde 
> Date: Thu, Apr 23, 2020 at 2:02 PM
> Subject: Problem with the mock parameters
> To: infra 
> Cc: Sandro Bonazzola 
> Hi,
> it looks like we have some issues in the mock parameters on some of the
> slaves
> i.e.:
> https://jenkins.ovirt.org/job/ovirt-release_standard-check-patch/387/consoleFull
> note the ERROR: Option --old-chroot has been deprecated. Use
> --isolation=simple instead.
> Thanks in advance,
> -- 
> Lev Veyde
> Senior Software Engineer, RHCE | RHCVA | MCITP
> Red Hat Israel
> <https://www.redhat.com>
> l...@redhat.com | lve...@redhat.com
> <https://red.ht/sig>
> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/YI577UNFDVGDBXIOKOEEI5G5S2UWXDCA/
> -- 
> Anton Marchukov
> Associate Manager - RHV DevOps - Red Hat



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100125)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/2HXIOHRL6YS3KTMDH6JHNPSVM5YRGD6B/


[JIRA] (OVIRT-2917) Vagrant VM container

2020-04-22 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=40355#comment-40355
 ] 

Barak Korren commented on OVIRT-2917:
-

No, no docs atm.

Also, despite what I wrote before, workloads using such a container may not 
continue to be supported in the near future.

Due to the age of the existing oVirt CI hardware, and the new requirements 
introduced by RHEL 8, the oVirt CI infrastructure is going to undergo some 
massive transformations in the near future. We're basically going to rebuild a 
lot of it from scratch.

In the meantime I'd advise against making any significant changes to existing 
CI workloads or adding any major new ones.

Sorry for the inconvenience.

cc: [~accountid:557058:5ca52a09-2675-4285-a044-12ad20f6166a] 

> Vagrant VM container
> 
>
> Key: OVIRT-2917
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2917
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Ales Musil
>Assignee: infra
>
> Hi,
> is there any documentation on how to use the new container backend with the
> CI container that spawns VM for privileged operations?
> Thank you.
> Regards,
> Ales
> -- 
> Ales Musil
> Software Engineer - RHV Network
> Red Hat EMEA <https://www.redhat.com>
> amu...@redhat.comIM: amusil
> <https://red.ht/sig>



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100125)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/LUOTSINXSA5VTMWVK6PG7LYHL4FYIUOL/


[JIRA] (OVIRT-2901) some sync_mirror jobs fail due to cache error of another repo

2020-04-06 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=40291#comment-40291
 ] 

Barak Korren commented on OVIRT-2901:
-

I remember talking to [~accountid:557058:caa507e4-2696-4f45-8da5-d2585a4bb794] 
about this before, maybe we already have a ticket on this.

The reason this happens is that the ` {{ data/mirrors-reposync.conf }} ` 
file is shared between all mirrors, so the ` {{ reposync }} ` command for each 
mirror tries to fetch metadata for all mirrors.

The way to solve this is to change the ` {{ mirror_mgr }} ` script to remove 
the data for all other mirrors when running for a particular mirror (we can do 
this to the file directly, because each mirror job clones the ` {{ jenkins }} ` 
repo on its own and therefore has its own local copy of the file).

It's possible to do this with a single ` {{ sed }} ` command, I think we even 
prototyped it at some point, so please look for previous tickets about this.
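
The original idea was a single {{sed}} command; an equivalent awk sketch 
(assuming the shared file uses standard yum-style [section] headers, and 
$MIRROR_NAME is an illustrative variable, not the script's actual name for it):

    # Keep only the [main] section and the section of the mirror being synced,
    # dropping the config blocks of all the other mirrors
    awk -v keep="$MIRROR_NAME" '
        /^\[/ { section = substr($0, 2, length($0) - 2) }
        section == "main" || section == keep
    ' data/mirrors-reposync.conf > "reposync-$MIRROR_NAME.conf"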

> some sync_mirror jobs fail due to cache error of another repo
> -
>
> Key: OVIRT-2901
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2901
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>Reporter: Shlomi Zidmi
>Assignee: infra
>Priority: Low
>
> In the recent days some sync_mirror jobs have been failing with the following 
> error:
> Error setting up repositories: Error making cache directory: 
> /home/jenkins/mirrors_cache/centos-qemu-ev-release-el7 error was: [Errno 17] 
> File exists: '/home/jenkins/mirrors_cache/centos-qemu-ev-release-el7'
> As an example, a build of fedora-updates-fc29 failed with this error:
> https://jenkins.ovirt.org/job/system-sync_mirrors-fedora-updates-fc29-x86_64/1544/console



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100124)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/5OQAHWUCNZ5TWLNBWYLBFM4BCJCC2OX2/


[JIRA] (OVIRT-2899) Use Nginx as Jenkins reverse proxy instead of Apache

2020-04-06 Thread Barak Korren (oVirt JIRA)
Barak Korren created OVIRT-2899:
---

 Summary: Use Nginx as Jenkins reverse proxy instead of Apache
 Key: OVIRT-2899
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2899
 Project: oVirt - virtualization made easy
  Issue Type: Task
  Components: Jenkins Master
Reporter: Barak Korren
Assignee: infra


The following issue: 
[JENKINS-47279|https://issues.jenkins-ci.org/browse/JENKINS-47279] indicates 
there is currently no way to expose the Jenkins CLI over HTTP/HTTPS.

The Jenkins CLI is essential for accessing parts of the Jenkins internal API 
that allow detecting internal requirements for hosts; this in turn is essential 
for having a service running inside a firewalled network that provides 
specialized hosts to our public Jenkins instance.

The main use case for this is providing RHEL bare-metal (BM) hosts from the Red 
Hat network to the oVirt Jenkins.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100124)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/ZS36BEM3EBOOTIBXXIGKBLTLVLLDUIDN/


[JIRA] (OVIRT-2898) CQ Changes is empty

2020-04-05 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2898:

Resolution: Won't Fix
Status: Done  (was: To Do)

> CQ Changes is empty
> ---
>
> Key: OVIRT-2898
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2898
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Yedidyah Bar David
>Assignee: infra
>
> Hi all,
> If I look e.g. at:
> https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/22202/
> It says:
> Testing 58 changes:
> But then no list of changes.
> If I then press Changes:
> https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/22202/changes
> I get an empty page (other than red circle and "Changes" title).
> I think this is a bug, which was already supposed to be fixed by:
> https://gerrit.ovirt.org/79036
> But for some reason this does not work as expected.
> Please handle.
> Thanks,
> -- 
> Didi



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100124)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/RSVIKX5HKZBZUQXPZMSGZUDSZXTF7D6W/


[JIRA] (OVIRT-2898) CQ Changes is empty

2020-04-05 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=40276#comment-40276
 ] 

Barak Korren commented on OVIRT-2898:
-

The changes screen in Jenkins only shows a log of Git changes that were cloned 
via the Git (or other) SCM plugin. It's only useful for a single-project, 
single-branch job. Having it show the CQ changes would require writing a 
Jenkins plugin that would make Jenkins think the CQ is some kind of an SCM 
(this is not trivial, SCM plugins in Jenkins are strange...)

WRT the linked patch, it works as expected:
https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/22202/execution/node/99/log/

This shows up under the "loading change data" stage, both in Blue Ocean and in 
the pipeline view.

Closing NOT A BUG.

> CQ Changes is empty
> ---
>
> Key: OVIRT-2898
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2898
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Yedidyah Bar David
>Assignee: infra
>
> Hi all,
> If I look e.g. at:
> https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/22202/
> It says:
> Testing 58 changes:
> But then no list of changes.
> If I then press Changes:
> https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/22202/changes
> I get an empty page (other than red circle and "Changes" title).
> I think this is a bug, which was already supposed to be fixed by:
> https://gerrit.ovirt.org/79036
> But for some reason this does not work as expected.
> Please handle.
> Thanks,
> -- 
> Didi



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100124)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/AIU7RVA4Q7KOJBY3A7E5XBKFGGHFIUCM/


Re: mirrors.phx.ovirt.org is down

2020-04-05 Thread Barak Korren
On Sun, 5 Apr 2020 at 08:47, Yedidyah Bar David  wrote:

> On Thu, Apr 2, 2020 at 7:41 PM Barak Korren  wrote:
> >
> > Not sure why you're seeing any access to the mirrors from your laptop,
> they should only be used in CI.
>
> I just checked it manually, copy/pasted the address from the job below.
>
> >
> > As for the CI code, if the mirrors are down, it ignores them and goes to
> the upstream repos.
>
> OK, so:
>
> >
> > Is there some script that is hardwired to get something from a specific
> mirror url?
> >
> >
> > On Thu, 2 Apr 2020 at 17:32, Yedidyah Bar David <
> d...@redhat.com> wrote:
> >>
> >> Both from my laptop and e.g.:
> >>
> >>
> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/6729/
>
> Why did above run fail, then?
>

The CentOS 8 mirrors have been added recently, after the mirroring scripts
were changed to add module support. Either that does not work well yet,
this test correlated with one of the initial issues found (see the history
in the infra ML), or the OST code needs to somehow explicitly enable some
modules.



> >>
> >> 17:23:21 + yum install '--disablerepo=*'
> >>
> --enablerepo=ovirt-master-tested-el8,centos-base-el8,centos-appstream-el8,centos-powertools-el8,epel-el8,ovirt-master-glusterfs-7-el8,ovirt-master-virtio-win-latest-el8,ovirt-master-copr-sbonazzo-collection-el8,ovirt-master-copr:copr.fedorainfracloud.org:
> sac:gluster-ansible-el8,ovirt-master-copr:copr.fedorainfracloud.org:
> mdbarroso:ovsdbapp-el8,ovirt-master-copr-nmstate-0.2-el8,ovirt-master-copr-NetworkManager-1.22-el8,ovirt-master-centos-advanced-virtualization-el8,ovirt-master-centos-ovirt44-el8
> >> -y yum-utils
> >>
> >> 17:23:21 Error: Error downloading packages:
> >>
> >> 17:23:21   Status code: 404 for
> >>
> http://mirrors.phx.ovirt.org/repos/yum/centos-base-el8/base/Packages/yum-utils-4.0.8-3.el8.noarch.rpm
> >>
> >> Known problem?
> >> --
> >> Didi
> >> ___
> >> Infra mailing list -- infra@ovirt.org
> >> To unsubscribe send an email to infra-le...@ovirt.org
> >> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> >> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> >> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/BPER52OO73BBJW22W4UCEYKWCEQRI77M/
>
>
>
> --
> Didi
>
>

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/6UREQJS5NLDPPKZBER2NCFCZ35ACLT34/


Re: mirrors.phx.ovirt.org is down

2020-04-02 Thread Barak Korren
Not sure why you're seeing any access to the mirrors from your laptop; they
should only be used in CI.

As for the CI code, if the mirrors are down, it ignores them and goes to
the upstream repos.

Is there some script that is hardwired to get something from a specific
mirror URL?


On Thu, 2 Apr 2020 at 17:32, Yedidyah Bar David wrote:

> Both from my laptop and e.g.:
>
>
> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/6729/
>
> 17:23:21 + yum install '--disablerepo=*'
>
> --enablerepo=ovirt-master-tested-el8,centos-base-el8,centos-appstream-el8,centos-powertools-el8,epel-el8,ovirt-master-glusterfs-7-el8,ovirt-master-virtio-win-latest-el8,ovirt-master-copr-sbonazzo-collection-el8,ovirt-master-copr:copr.fedorainfracloud.org:
> sac:gluster-ansible-el8,ovirt-master-copr:copr.fedorainfracloud.org:
> mdbarroso:ovsdbapp-el8,ovirt-master-copr-nmstate-0.2-el8,ovirt-master-copr-NetworkManager-1.22-el8,ovirt-master-centos-advanced-virtualization-el8,ovirt-master-centos-ovirt44-el8
> -y yum-utils
>
> 17:23:21 Error: Error downloading packages:
>
> 17:23:21   Status code: 404 for
>
> http://mirrors.phx.ovirt.org/repos/yum/centos-base-el8/base/Packages/yum-utils-4.0.8-3.el8.noarch.rpm
>
> Known problem?
> --
> Didi
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/BPER52OO73BBJW22W4UCEYKWCEQRI77M/
>
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/EZDAVF2GYGSXAZGNFRMVLMUJSUMBDPDX/


Re: [ovirt-users] Mirror oVirt content

2020-04-01 Thread Barak Korren
There is this document, but as the notice on it states, at least some parts
of it are outdated:
https://www.ovirt.org/develop/infra/repository-mirrors.html

Forwarding this to the infra team, hopefully someone there will contact you
soon.

On Wed, 1 Apr 2020 at 22:52,  wrote:

> Hello oVirt Community / infrastructure team,
> we would like to get guidance on how to mirror the oVirt content
> publicly.  We replicate content using our own networks so we'd only be
> pulling the content from the oVirt content server from one location in
> Chicago.
>
> Please advise.
>
> Thank you,
>
> Adrian
> ___
> Users mailing list -- us...@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/us...@ovirt.org/message/MJGILYQBKGMKOG3V47DETHJE26FSU5QA/
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/XZFN4XE2ANME5YH4YCJZIQB2SO6M7JTF/


Re: How to trigger re-merge in github?

2020-03-11 Thread Barak Korren
On Wed, 11 Mar 2020 at 13:29, Yedidyah Bar David  wrote:

> On Wed, Mar 11, 2020 at 10:54 AM Ehud Yonasi  wrote:
> >
> > You can trigger the merged job again in jenkins also.
>
> Is this documented?
>
> How?
>
> Just rebuild:
> https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/21146/
> ?
>
> Also: How to find it? The only reliable way I have right now is from
> the emails it sends to infra.
> Can you somehow make the job add a comment in github? Although I admit
> I now consider moving it

to gerrit, as I do not see any advantages to github, and gerrit is
> definitely more supported.
>

There is no place in GitHub to add such a comment.
Once a PR is closed it can no longer be commented upon, unlike patches in
Gerrit.


> >
> > On Wed, Mar 11, 2020 at 10:33 AM Dafna Ron  wrote:
> >>
> >> Hi Didi,
> >>
> >> you need to trigger it from the webhooks.
> >>
> >> I think this is the one that you need but you need to select the
> correct change from the list and redeliver it:
> >>
> >>
> https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/settings/hooks/43850293
>
> I'll try and update. Thanks!
>
> >>
> >> let me know if you need help,
> >> Dafna
> >>
> >>
> >>
> >> On Wed, Mar 11, 2020 at 7:15 AM Yedidyah Bar David 
> wrote:
> >>>
> >>> Hi all,
> >>>
> >>> The docs say [1] that for gerrit we can comment 'ci re-merge please'.
> >>> But for github [2] I don't see anything similar, only 'test|check' and
> >>> 'build'.
> >>>
> >>> Latest merged PR for ovirt-ansible-hosted-engine-setup [3] failed due
> >>> to an unrelated reason (see thread "OST basic suite fails on
> >>> 002_bootstrap.add_secondary_storage_domains"), now fixed, and I want
> >>> CQ to handle [3] again. How?
> >>>
> >>> Thanks!
> >>>
> >>> [1]
> https://ovirt-infra-docs.readthedocs.io/en/latest/CI/Using_STDCI_with_Gerrit/index.html
> >>>
> >>> [2]
> https://ovirt-infra-docs.readthedocs.io/en/latest/CI/Using_STDCI_with_GitHub/index.html
> >>>
> >>> [3]
> https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/pull/306
> >>> --
> >>> Didi
> >>> ___
> >>> Infra mailing list -- infra@ovirt.org
> >>> To unsubscribe send an email to infra-le...@ovirt.org
> >>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> >>> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> >>> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/NZW5SBTN7WPXTWDAG4PWP4CZSNPF5TFL/
> >>
> >> ___
> >> Infra mailing list -- infra@ovirt.org
> >> To unsubscribe send an email to infra-le...@ovirt.org
> >> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> >> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> >> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/ODXZTUIWKSSZFQPB34O6GTPZYC6FG6QP/
>
>
>
> --
> Didi
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/T34D2W7RCIB2A7X7USRC3B6Q6VCHR2C2/
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/CR4KUQSEAH7UVN7BODFKSNTJBOKQZFIZ/


Re: ovirt-engine-nodejs-modules build-artifacts failing on el7 and fc30

2020-03-03 Thread Barak Korren
It matters, because it would block sending the relevant builds to the CQ.

One thing that could make YARN fail in CI but not in local mock is the fact
that we have HTTP_PROXY defined in the CI environment, pointing to a
Squid server.

A typical issue we see people having is connecting to `localhost` and
ending up blocked by the proxy. Please make sure the NO_PROXY env var
is set appropriately if that is the case.
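
For illustration, something along these lines in the job environment (a sketch; 
extend the list with whatever local services the build talks to):

    # Make sure local connections bypass the CI HTTP proxy
    export NO_PROXY='localhost,127.0.0.1,::1'
    export no_proxy="$NO_PROXY"   # some tools only read the lowercase variant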

Why is this not failing in check-patch as well BTW?

On Tue, 3 Mar 2020 at 10:07, Michal Skrivanek 
wrote:

> is this a better list? or no one cares?
>
> On 2 Mar 2020, at 09:40, Michal Skrivanek 
> wrote:
>
>
>
> On 1 Mar 2020, at 17:54, Scott Dickerson  wrote:
>
> Hi,
>
> On the merge phase on patch [1], both the el7 and fc30 build have been
> failing.  Examples are [2] and [3].  I'm guessing there are environmental
> issues that I can't fix from the project's perspective.  Typical example of
> the error from the log:
>
>
> it seems to fail randomly, in run 93 el8 and fc30 fails while el7 and fc29
> succeeds, in run 94 it’s fc30 that’s failing and el8 succeeded.
>
>
> [2020-03-01T15:55:19.475Z] + yarn install --pure-lockfile --har
> [2020-03-01T15:55:19.475Z] +
> /home/jenkins/workspace/ovirt-engine-nodejs-modules_standard-on-merge/ovirt-engine-nodejs-modules/yarn-1.17.3.js
> install --pure-lockfile --har
> [2020-03-01T15:55:19.475Z] yarn install v1.17.3
> [2020-03-01T15:55:19.475Z] [1/5] Resolving packages...
> [2020-03-01T15:55:19.475Z] [2/5] Fetching packages...
> [2020-03-01T15:55:19.475Z] error An unexpected error occurred: "
> https://registry.yarnpkg.com/@patternfly/react-core/-/react-core-3.134.2.tgz:
> unexpected end of file".
>
> Running the build in mock_runner locally targeted for el7, el8, fc29 and
> fc30 work just fine.
>
> Help please!
>
> [1] - https://gerrit.ovirt.org/#/c/107309/
> [2] -
> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-engine-nodejs-modules_standard-on-merge/detail/ovirt-engine-nodejs-modules_standard-on-merge/88/pipeline
> [3] -
> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-engine-nodejs-modules_standard-on-merge/detail/ovirt-engine-nodejs-modules_standard-on-merge/92/pipeline
>
>
> --
> Scott Dickerson
> Senior Software Engineer
> RHV-M Engineering - UX Team
> Red Hat, Inc
>
>
>
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/MJAIPCG6VJJAXKD4JKSEHEQ6UUX6HXMD/
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/K4XSCMDNMXMMZ5TRHIPTOSE7A2A4DQ6J/


Re: [oVirt Jenkins] ovirt-system-tests_he-basic-suite-4.3 - Build # 346 - Still Failing!

2020-02-12 Thread Barak Korren
On Wed, 12 Feb 2020 at 12:47, Yedidyah Bar David  wrote:

> On Wed, Feb 12, 2020 at 12:27 PM Yedidyah Bar David 
> wrote:
>
>> On Mon, Feb 10, 2020 at 3:19 PM Barak Korren  wrote:
>>
>>>
>>>
>>> On Mon, 10 Feb 2020 at 15:11, Dominik Holler  wrote:
>>>
>>>>
>>>> On Mon, Feb 10, 2020 at 10:32 AM Dominik Holler 
>>>> wrote:
>>>>
>>>>> Hello,
>>>>> is this issue reproducible local for someone, or does this happen only
>>>>> on jenkins?
>>>>>
>>>>>
>>>> For me it seems to happen only on jenkins.
>>>> I don't know enough about the environment on jenkins.
>>>> Might a "yum update" fix the problem?
>>>>
>>>
>>>
>>> `yum update` actually caused the problem...
>>>
>>>
>>>> I guess that there are incompatible versions of libvirt and firewalld
>>>> installed.
>>>>
>>>
>>> We have the latest versions for both, currently we suspect its the
>>> startup order
>>>
>>>
>>>>
>>>>
>>>>> On Mon, Feb 10, 2020 at 9:46 AM Galit Rosenthal 
>>>>> wrote:
>>>>>
>>>>>> Checking this
>>>>>> Once I have more info I will update.
>>>>>>
>>>>>> On Mon, Feb 10, 2020 at 9:16 AM Parth Dhanjal 
>>>>>> wrote:
>>>>>>
>>>>>>> Hey!
>>>>>>>
>>>>>>> hc_basic_suite_4.3 is failing with the same error as well
>>>>>>>
>>>>>>> Project:
>>>>>>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_hc-basic-suite-4.3/
>>>>>>> Build:
>>>>>>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_hc-basic-suite-4.3/328/consoleFul
>>>>>>> <https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_hc-basic-suite-4.3/328/consoleFull>
>>>>>>> l
>>>>>>>
>>>>>>> *07:36:24* @ Start Prefix: *07:36:24*   # Start nets: *07:36:24* * 
>>>>>>> Create network lago-hc-basic-suite-4-3-net-management: *07:36:30* * 
>>>>>>> Create network lago-hc-basic-suite-4-3-net-management: ERROR (in 
>>>>>>> 0:00:05)*07:36:30*   # Start nets: ERROR (in 0:00:05)*07:36:30* @ Start 
>>>>>>> Prefix: ERROR (in 0:00:05)*07:36:30* Error occured, aborting*07:36:30* 
>>>>>>> Traceback (most recent call last):*07:36:30*   File 
>>>>>>> "/usr/lib/python2.7/site-packages/lago/cmd.py", line 969, in 
>>>>>>> main*07:36:30* cli_plugins[args.verb].do_run(args)*07:36:30*   File 
>>>>>>> "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 184, in 
>>>>>>> do_run*07:36:30* self._do_run(**vars(args))*07:36:30*   File 
>>>>>>> "/usr/lib/python2.7/site-packages/lago/utils.py", line 573, in 
>>>>>>> wrapper*07:36:30* return func(*args, **kwargs)*07:36:30*   File 
>>>>>>> "/usr/lib/python2.7/site-packages/lago/utils.py", line 584, in 
>>>>>>> wrapper*07:36:30* return func(*args, prefix=prefix, 
>>>>>>> **kwargs)*07:36:30*   File 
>>>>>>> "/usr/lib/python2.7/site-packages/lago/cmd.py", line 271, in 
>>>>>>> do_start*07:36:30* prefix.start(vm_names=vm_names)*07:36:30*   File 
>>>>>>> "/usr/lib/python2.7/site-packages/lago/sdk_utils.py", line 50, in 
>>>>>>> wrapped*07:36:30* return func(*args, **kwargs)*07:36:30*   File 
>>>>>>> "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1323, in 
>>>>>>> start*07:36:30* self.virt_env.start(vm_names=vm_names)*07:36:30*   
>>>>>>> File "/usr/lib/python2.7/site-packages/lago/virt.py", line 341, in 
>>>>>>> start*07:36:30* net.start()*07:36:30*   File 
>>>>>>> "/usr/lib/python2.7/site-packages/lago/providers/libvirt/network.py", 
>>>>>>> line 115, in start*07:36:30* net = 
>>>>>>> self.libvirt_con.networkCreateXML(self._libvirt_xml())*07:36:30*   File 
>>>>>>> "/usr/lib64/python2.7/site-packages/libvirt.py", line 4216, in 
>>>

Re: CI agents contaminated with some leftover files

2020-02-11 Thread Barak Korren
On Tue, 11 Feb 2020 at 15:31, Marcin Sobczyk  wrote:

>
>
> On 2/11/20 2:04 PM, Marcin Sobczyk wrote:
>
>
>
> On 2/11/20 1:36 PM, Marcin Sobczyk wrote:
>
>
>
> On 2/11/20 1:16 PM, Barak Korren wrote:
>
>
>
> On Tue, 11 Feb 2020 at 12:02, Marcin Sobczyk  wrote:
>
>> Hi,
>>
>> agents used for CI runs for one of my patches [1] seem to be
>> contaminated, i.e. our linters
>> run complains about files like [2]:
>>
>> [2020-02-11T09:16:00.381Z] 
>> ./.local/share/virtualenv/seed-v1/3.7/image/SymlinkPipInstall/wheel-0.34.2-py2.py3-none-any/wheel/bdist_wheel.py:135:80:
>>  E501 line too long (84 > 79 characters)
>> [2020-02-11T09:16:00.381Z] raise ValueError('Unsupported 
>> compression: {}'.format(self.compression))
>> [2020-02-11T09:16:00.381Z]   
>>  ^
>> [2020-02-11T09:16:00.381Z] 
>> ./.local/share/virtualenv/seed-v1/3.7/image/SymlinkPipInstall/wheel-0.34.2-py2.py3-none-any/wheel/bdist_wheel.py:145:80:
>>  E501 line too long (93 > 79 characters)
>> [2020-02-11T09:16:00.381Z] if self.py_limited_api and not 
>> re.match(PY_LIMITED_API_PATTERN, self.py_limited_api):
>>
>>
>> tests are failing with [3]:
>>
>>  
>> <https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/18071/pipeline/151#step-314-log-864>[2020-02-11T09:13:53.807Z]
>>  tox -e "tests,storage,lib,network,virt,gluster"
>>  
>> <https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/18071/pipeline/151#step-314-log-865>[2020-02-11T09:13:54.074Z]
>>  tests create: 
>> /home/jenkins/workspace/vdsm_standard-check-patch/vdsm/.tox/tests
>>  
>> <https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/18071/pipeline/151#step-314-log-866>[2020-02-11T09:13:54.336Z]
>>  ERROR: invocation failed (exit code 1), logfile: 
>> /home/jenkins/workspace/vdsm_standard-check-patch/vdsm/.tox/tests/log/tests-0.log
>>  
>> <https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/18071/pipeline/151#step-314-log-867>[2020-02-11T09:13:54.337Z]
>>  == log start 
>> ===
>>  
>> <https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/18071/pipeline/151#step-314-log-868>[2020-02-11T09:13:54.337Z]
>>  ERROR:root:ImportError: cannot import name 'ensure_text'
>>
>>
>> The patch itself is minimal and the parent patch seemed to be fine [4].
>>
>
> I think the root cause for this is that inside mock $HOME == $PWD == where
> the repo is cloned.
>
> What we're seeing is that tox is trying to place the virtualenv it
> creates in $HOME/.local, and then flake8 tries to scan everything under
> $PWD, which happens to include the `.local` directory, which ends up
> containing libraries that are not pep8-conformant
>
> Tox uses '.tox' directory to keep its stuff and we stick to specific
> version [5].
> We also didn't have any changes around tox/CI recently - I still think
> it's a defunct agent.
>
> [5]
> https://github.com/oVirt/vdsm/blob/9e1ea54bea2a3ea1b7d434617bd8445af4953f21/automation/common.sh#L38
>
> Ok, scratch that - I think it's due to new virtualenv version... let me
> try to fix the version to the older one for now.
>
> Yep, that did the trick: https://gerrit.ovirt.org/#/c/106877/
> Sorry for the noise.
>

N/P. In our CI scripts we force the virtualenvs to go to a specific
location which is also cached on the host; this prevents this kind of issue
as well as makes things run faster. Since we use pipenv, for us this was as
simple as setting PIPENV_CACHE_DIR to a bind-mounted location.
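
For illustration (a sketch of the idea; the path is illustrative):

    # Point pipenv's cache at a bind-mounted, host-cached location
    export PIPENV_CACHE_DIR='/var/cache/ci-pipenv-cache'
    mkdir -p "$PIPENV_CACHE_DIR"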

Btw, leaving dirty nodes behind is close to impossible. `mock` takes care of
95% of things and `git clean -fdx` takes care of the rest. AFAIK the only
way you can successfully leave files behind is by placing them under your
git clone dir and including them in `.gitignore`.



> Marcin
>
>
>
>
> To solve this we should either move the location where tox places
> virtualenvs (I suppose there is some env var that controls this), or
> make flake8 ignore the `.local` directory.
>
>
>
>> Regards, Marcin
>>
>> [1] https://gerrit.ovirt.org/#/c/106846/
>> [2]
>> https://jenkins.ovirt.org/blue/rest/organizations/jenkins/pipelines/vdsm_standard-check-patch/runs/18071/nodes/150/steps/29

Re: CI agents contaminated with some leftover files

2020-02-11 Thread Barak Korren
On Tue, 11 Feb 2020 at 12:02, Marcin Sobczyk  wrote:

> Hi,
>
> agents used for CI runs for one of my patches [1] seem to be contaminated,
> i.e. our linters
> run complains about files like [2]:
>
> [2020-02-11T09:16:00.381Z] 
> ./.local/share/virtualenv/seed-v1/3.7/image/SymlinkPipInstall/wheel-0.34.2-py2.py3-none-any/wheel/bdist_wheel.py:135:80:
>  E501 line too long (84 > 79 characters)
> [2020-02-11T09:16:00.381Z] raise ValueError('Unsupported 
> compression: {}'.format(self.compression))
> [2020-02-11T09:16:00.381Z]
> ^
> [2020-02-11T09:16:00.381Z] 
> ./.local/share/virtualenv/seed-v1/3.7/image/SymlinkPipInstall/wheel-0.34.2-py2.py3-none-any/wheel/bdist_wheel.py:145:80:
>  E501 line too long (93 > 79 characters)
> [2020-02-11T09:16:00.381Z] if self.py_limited_api and not 
> re.match(PY_LIMITED_API_PATTERN, self.py_limited_api):
>
>
> tests are failing with [3]:
>
>  
> <https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/18071/pipeline/151#step-314-log-864>[2020-02-11T09:13:53.807Z]
>  tox -e "tests,storage,lib,network,virt,gluster"
>  
> <https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/18071/pipeline/151#step-314-log-865>[2020-02-11T09:13:54.074Z]
>  tests create: 
> /home/jenkins/workspace/vdsm_standard-check-patch/vdsm/.tox/tests
>  
> <https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/18071/pipeline/151#step-314-log-866>[2020-02-11T09:13:54.336Z]
>  ERROR: invocation failed (exit code 1), logfile: 
> /home/jenkins/workspace/vdsm_standard-check-patch/vdsm/.tox/tests/log/tests-0.log
>  
> <https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/18071/pipeline/151#step-314-log-867>[2020-02-11T09:13:54.337Z]
>  == log start 
> ===
>  
> <https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/18071/pipeline/151#step-314-log-868>[2020-02-11T09:13:54.337Z]
>  ERROR:root:ImportError: cannot import name 'ensure_text'
>
>
> The patch itself is minimal and the parent patch seemed to be fine [4].
>

I think the root cause for this is that inside mock $HOME == $PWD == where
the repo is cloned.

What we're seeing is that tox is trying to place the virtualenv it creates
in $HOME/.local, and then flake8 tries to scan everything under $PWD, which
happens to include the `.local` directory, which ends up containing
libraries that are not pep8-conformant

To solve this we should either move the location where tox places
virtualenvs (I suppose there is some env var that controls this), or
make flake8 ignore the `.local` directory.
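
For illustration, either approach could look like this (a sketch; moving the 
data relies on virtualenv honoring XDG_DATA_HOME, which is where the 
`.local/share/virtualenv` tree above appears to come from):

    # Option 1: keep virtualenv's seed data out of the workspace
    export XDG_DATA_HOME='/tmp/ci-virtualenv-data'   # illustrative path

    # Option 2: make flake8 skip the stray directories
    python -m flake8 --exclude=.local,.tox .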



> Regards, Marcin
>
> [1] https://gerrit.ovirt.org/#/c/106846/
> [2]
> https://jenkins.ovirt.org/blue/rest/organizations/jenkins/pipelines/vdsm_standard-check-patch/runs/18071/nodes/150/steps/297/log/?start=0
> [3]
> https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/18071/pipeline/151
> [4] https://gerrit.ovirt.org/#/c/106590/
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/VJKVASATTJQ4ZWIISSR7BJIHP74KPESW/
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/XKYHPWSMCIJ2SK2MBY5HVDM4HWN3E7R2/


Re: [oVirt Jenkins] ovirt-system-tests_he-basic-suite-4.3 - Build # 346 - Still Failing!

2020-02-10 Thread Barak Korren
>>>>> lago.log has:
>>>>>
>>>>> 1. some repo issue:
>>>>>
>>>>> 2020-02-10 02:11:13,681::ERROR::repoman.common.parser::No artifacts
>>>>> found for source /var/lib/lago/ovirt-appliance-4.3-el7:only-missing
>>>>>
>>>>> Galit - any idea?
>>>>>
>>>>> 2. IPv6 issue - after a long series of tracebacks:
>>>>>
>>>>> libvirtError: COMMAND_FAILED: INVALID_IPV: 'ipv6' is not a valid
>>>>> backend or is unavailable
>>>>>
>>>>> Dominik, any idea? Not sure if this is an infra issue or a recent
>>>>> change to OST (or lago, or libvirt...).
>>>>>
>>>>> Thanks and best regards,
>>>>> --
>>>>> Didi
>>>>> ___
>>>>> Infra mailing list -- infra@ovirt.org
>>>>> To unsubscribe send an email to infra-le...@ovirt.org
>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>> oVirt Code of Conduct:
>>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>>> List Archives:
>>>>> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/URERUP2VUCW25RFUIOLVEE3DMBMHBF6C/
>>>>>
>>>>
>>>
>>> --
>>>
>>> GALIT ROSENTHAL
>>>
>>> SOFTWARE ENGINEER
>>>
>>> Red Hat
>>>
>>> <https://www.redhat.com/>
>>>
>>> ga...@redhat.comT: 972-9-7692230
>>> <https://red.ht/sig>
>>>
>> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/SYYL7RVRKNWHD64CIBWGXBV4REOGVHFK/
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/574TXW4JBJGY53OI5QGAWCZVO6DLO4R2/


[JIRA] (OVIRT-2861) Requesting permission to ovirt-engine-sdk in Github

2020-02-10 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren reassigned OVIRT-2861:
---

Assignee: Barak Korren  (was: infra)

> Requesting permission to ovirt-engine-sdk in Github
> ---
>
> Key: OVIRT-2861
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2861
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>  Components: GitHub
>Reporter: Ori Liel
>Assignee: Barak Korren
>Priority: High
>
> I'm a maintainer of ovirt-engine-sdk. 
> I'm able to merge changes to this repository through gerrit. 
> However, I don't have direct permissions for this repository on github. 
> Can I please be granted permissions for this repository Github? 
> Thanks



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100119)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/XXTNTNBD4CFSBGZT3BTSBWBZC7LFQR6Z/


[JIRA] (OVIRT-2861) Requesting permission to ovirt-engine-sdk in Github

2020-02-10 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=40085#comment-40085
 ] 

Barak Korren edited comment on OVIRT-2861 at 2/10/20 10:00 AM:
---

I can see now that you don't have the gh-pages branch stored in Gerrit at all.

What you need to do is first push the branch to Gerrit with a "normal" push (I 
assume you have a copy of it from GitHub), and after you've done that you can 
work on it with patches just like you would on any other branch in Gerrit.
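
Roughly (a sketch, assuming remotes named 'github' and 'gerrit', and that you 
have direct-push rights on the Gerrit side):

    # Copy the existing gh-pages branch from GitHub into Gerrit as-is
    git fetch github gh-pages
    git push gerrit FETCH_HEAD:refs/heads/gh-pages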

The automated mirror process to GitHub should cause the pages to show up for 
your merged changes.


was (Author: bkor...@redhat.com):
I can see now that you don't have the gh-pages stored in Gerrit at all.

What you need to do is first push the branch to Gerrit with a "normal" push (I 
assume you have a copy of it from GitHub), and after you've done that you can 
work on it with patches just like you would on any other branch in Gerrit.

The automated mirror process to GitHub should cause the pages to show up for 
your merged changes.

> Requesting permission to ovirt-engine-sdk in Github
> ---
>
> Key: OVIRT-2861
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2861
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>  Components: GitHub
>Reporter: Ori Liel
>Assignee: infra
>Priority: High
>
> I'm a maintainer of ovirt-engine-sdk. 
> I'm able to merge changes to this repository through gerrit. 
> However, I don't have direct permissions for this repository on github. 
> Can I please be granted permissions for this repository Github? 
> Thanks



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100119)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/FD2LPVIKZDRR7ETBFFXLCWEXU7DQG5YK/


[JIRA] (OVIRT-2861) Requesting permission to ovirt-engine-sdk in Github

2020-02-10 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=40085#comment-40085
 ] 

Barak Korren commented on OVIRT-2861:
-

I can see now that you don't have the gh-pages stored in Gerrit at all.

What you need to do is first push the branch to Gerrit with a "normal" push (I 
assume you have a copy of it from GitHub), and after you've done that you can 
work on it with patches just like you would on any other branch in Gerrit.

The automated mirror process to GitHub should cause the pages to show up for 
your merged changes.
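
A minimal sketch of that flow, assuming remotes named {{github}} and {{gerrit}} 
(the remote names are assumptions, not project configuration):

{code}
# One-time: seed the branch in Gerrit from the existing GitHub copy
git fetch github gh-pages
git push gerrit github/gh-pages:refs/heads/gh-pages

# Afterwards: regular Gerrit review flow against that branch
git push gerrit HEAD:refs/for/gh-pages
{code}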

> Requesting permission to ovirt-engine-sdk in Github
> ---
>
> Key: OVIRT-2861
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2861
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>  Components: GitHub
>Reporter: Ori Liel
>Assignee: infra
>Priority: High
>
> I'm a maintainer of ovirt-engine-sdk. 
> I'm able to merge changes to this repository through gerrit. 
> However, I don't have direct permissions for this repository on github. 
> Can I please be granted permissions for this repository Github? 
> Thanks



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100119)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/I2FSAI6X32WGGP2VO4LDNHLALFT2NQMS/


[JIRA] (OVIRT-2861) Requesting permission to ovirt-engine-sdk in Github

2020-02-10 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=40081#comment-40081
 ] 

Barak Korren edited comment on OVIRT-2861 at 2/10/20 8:04 AM:
--

{quote}
I can’t seem to push to it using gerrit
{quote}

Can you elaborate what you mean by that? do you have the branch configured on 
the Gerrit side? 

Merging directly in GitHub will definitely break things; enabling it will 
probably be a bad idea.


was (Author: bkor...@redhat.com):
{quote}
seem to push to it using gerrit
{quote}

Can you elaborate what you mean by that? do you have the branch configured on 
the Gerrit side? 

Merging directly in GitHub will definitely break things; enabling it will 
probably be a bad idea.

> Requesting permission to ovirt-engine-sdk in Github
> ---
>
> Key: OVIRT-2861
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2861
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>  Components: GitHub
>Reporter: Ori Liel
>Assignee: infra
>Priority: High
>
> I'm a maintainer of ovirt-engine-sdk. 
> I'm able to merge changes to this repository through gerrit. 
> However, I don't have direct permissions for this repository on github. 
> Can I please be granted permissions for this repository Github? 
> Thanks



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100119)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/XJENDMX2JIUOO56KQTFAM5CCDN6LQ4WD/


[JIRA] (OVIRT-2861) Requesting permission to ovirt-engine-sdk in Github

2020-02-10 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=40081#comment-40081
 ] 

Barak Korren commented on OVIRT-2861:
-

{quote}
seem to push to it using gerrit
{quote}

Can you elaborate what you mean by that, do you have the branch configured on 
the Gerrit side? 

Merging directly in GitHub will definitely break things; enabling it will 
probably be a bad idea.

> Requesting permission to ovirt-engine-sdk in Github
> ---
>
> Key: OVIRT-2861
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2861
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>  Components: GitHub
>Reporter: Ori Liel
>Assignee: infra
>Priority: High
>
> I'm a maintainer of ovirt-engine-sdk. 
> I'm able to merge changes to this repository through gerrit. 
> However, I don't have direct permissions for this repository on github. 
> Can I please be granted permissions for this repository Github? 
> Thanks



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100119)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/L6OLAWNJJXRUMML5CYWKJWTSWTJOX6QE/


[JIRA] (OVIRT-2861) Requesting permission to ovirt-engine-sdk in Github

2020-02-10 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=40081#comment-40081
 ] 

Barak Korren edited comment on OVIRT-2861 at 2/10/20 8:03 AM:
--

{quote}
seem to push to it using gerrit
{quote}

Can you elaborate what you mean by that? do you have the branch configured on 
the Gerrit side? 

Merging directly in GitHub will definitely break things; enabling it will 
probably be a bad idea.


was (Author: bkor...@redhat.com):
{quote}
seem to push to it using gerrit
{quote}

Can you elaborate what you mean by that, do you have the branch configured on 
the Gerrit side? 

Merging directly in GitHub will definitely break things; enabling it will 
probably be a bad idea.

> Requesting permission to ovirt-engine-sdk in Github
> ---
>
> Key: OVIRT-2861
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2861
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>  Components: GitHub
>Reporter: Ori Liel
>Assignee: infra
>Priority: High
>
> I'm a maintainer of ovirt-engine-sdk. 
> I'm able to merge changes to this repository through gerrit. 
> However, I don't have direct permissions for this repository on github. 
> Can I please be granted permissions for this repository Github? 
> Thanks



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100119)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/NKSSNXLWITRFN72QY2FKEI3NPGRTK3D4/


Re: permission to run jenkins builds after gerrit patches

2020-01-23 Thread Barak Korren
Actually it is 'ci please test' that triggers CI; 'ci please build' only
builds packages for manual testing.
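
For clarity, the exact comment to post on a patch to trigger a full CI run is
the bare line below (taken verbatim from the trigger phrases above):

    ci please test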

On Thu, 23 Jan 2020 at 13:10, Shlomi Zidmi  wrote:

> Hi,
>
> You have been added to the whitelist.
> Please note that you can post 'ci please build' on your patches to trigger
> CI manually.
>
> On Wed, Jan 22, 2020 at 6:58 PM Radoslaw Szwajkowski 
> wrote:
>
>> Hi,
>> I've just joined the oVirt UI team. I've noticed that my patches do not
>> trigger Jenkins builds.
>> The message is:
>>
>> To avoid overloading the infrastructure, a whitelist for running
>> gerrit triggered jobs has been set in place, if you feel like you
>> should be in it, please contact infra at ovirt dot org.
>>
>> Could you please add me to the whitelist?
>>
>> best regards,
>> Radek
>> ___
>> Infra mailing list -- infra@ovirt.org
>> To unsubscribe send an email to infra-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/U67N76XN23R27SZ6OADBFX4GHUBMNI4Q/
>>
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/O5V4JNXNSHYNDMOSFPSJBTZH35SX4YOO/
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/G7HPRYBRLJFQ7AX5TOGAUVMTX46QOHP2/


Re: Container-based CI backend is now available for use

2019-12-31 Thread Barak Korren
*Update #2: *We have now merged all the patches that deal with artifact and
log collection, and have updated the documentation
<https://ovirt-infra-docs.readthedocs.io/en/latest/CI/STDCI-Containers/index.html>
accordingly.
The container-based backend should now be usable for the vast majority of
the CI use cases.

We do have some more features coming down the line, geared towards more
sophisticated use cases such as running OST suites and integrating with
gating and change-queue flows. Those include:

   1. Supporting the use of privileged containers
   2. Invoking the container-based backend from the gating jobs
   3. Generating and providing the `extra_sources` file
   4. Runtime injection of YUM mirror URLs
   5. Support for storing and using secret data such as passwords and auth
   tokens.

I invite everyone to start moving workloads to the new system and enjoy the
enhanced speed and reliability.

On Sun, 15 Dec 2019 at 14:23, Barak Korren  wrote:

> *Update: *We have now merged the automated cloning support feature; the
> currently merged code should already be applicable for a wide range of uses,
> including running 'check-patch' workloads.
>
> On Thu, 12 Dec 2019 at 09:00, Barak Korren  wrote:
>
>> A little less than a month ago I sent an email to this list telling you
>> all about ongoing work to create a new container-based backend for the
>> oVirt CI system.
>>
>> I'm pleased to announce that we have managed to finally merge an initial
>> set of patches implementing that backend yesterday, and it is now
>> available for general use.
>>
>> *What? Where? How do I use it?*
>>
>> Documentation about how to use the new backend is now available on Read
>> the Docs
>> <https://ovirt-infra-docs.readthedocs.io/en/latest/CI/STDCI-Containers/index.html>
>> .
>>
>> *Wait! I needed it to do X which it doesn't!*
>>
>> For the time being the new backend lacks some features that some may
>> consider to be essential, such as automated cloning of patch source code
>> and build artifact collection. We already have implemented patches
>> providing a substantial amount of additional functionality, and hopefully
>> we will be able to merge them soon. Following is a list of those patches
>> and the features they implement:
>>
>>1. Automated source cloning support:
>>- 104213 <https://gerrit.ovirt.org/104213>: Implement STDCI DSL
>>   support for initContainers
>>   - 104590 <https://gerrit.ovirt.org/104590>: STDCI DSL: Add the
>>   `decorate` option
>>   - 104668 <https://gerrit.ovirt.org/104668>: Document source
>>   cloning extension for containers
>>   2. Artifact collection support
>>   - 104690 <https://gerrit.ovirt.org/104690>: Added NFS server
>>   container image
>>   - 104273 <https://gerrit.ovirt.org/104273>: STDCI PODS: Unique UID
>>   for each job build's POD
>>   - 104756 <https://gerrit.ovirt.org/104756>: pipeline-loader:
>>   refactor: separate podspec func
>>   - 104757 <https://gerrit.ovirt.org/104757>: pipeline-loader:
>>   refactor: Use podspec struct def
>>   - 104766 <https://gerrit.ovirt.org/104766>: STDCI PODs: Add
>>   artifact collection logic
>>   - 105522 <https://gerrit.ovirt.org/105522>: Documented artifact
>>   collection in containers
>>   3. Extended log collection
>>   - 104842 <https://gerrit.ovirt.org/104842>: STDCI PODs: Add POD
>>   log collection
>>   - 105523 <https://gerrit.ovirt.org/105523>: Documented log
>>   collection in containers
>>4. Privileged container support
>>   - 104786 <https://gerrit.ovirt.org/104786>: STDCI DSL: Enable
>>   privileged containers
>>   5. Support for using containers in gating jobs:
>>   - 104804 <https://gerrit.ovirt.org/104804>: standard-stage:
>>   refactor: move DSL to a library
>>   - 104811 <https://gerrit.ovirt.org/104811>: gate: Support getting
>>   suits from STDCI DSL
>>   6. Providing the `extra_sources` file to OST suit containers:
>>   - 104843 <https://gerrit.ovirt.org/104843>: stdci_runner: Create
>>   extra_sources for PODs
>>   7. Support for mirror injection and upstream source cloning
>>   - 104917 <https://gerrit.ovirt.org/104917>: Added a container with
>>   STDCI tools
>>   - 104918 <https://gerrit.ovirt.org/104918>: decorate.py: Add script
>>   - 104989 <https://gerrit.ovirt.org/104989>: STDCI DSL: Use `tools`
>>   container for `decorate`

Re: Container-based CI backend is now available for use

2019-12-15 Thread Barak Korren
*Update: *We have now merged the automated cloning support feature; the
currently merged code should already be applicable for a wide range of uses,
including running 'check-patch' workloads.

On Thu, 12 Dec 2019 at 09:00, Barak Korren  wrote:

> A little less than a month ago I sent an email to this list telling you
> all about ongoing work to create a new container-based backend for the
> oVirt CI system.
>
> I'm pleased to announce that we have managed to finally merge an initial
> set of patches implementing that backend yesterday, and it is now
> available for general use.
>
> *What? Where? How do I use it?*
>
> Documentation about how to use the new backend is now available on Read
> the Docs
> <https://ovirt-infra-docs.readthedocs.io/en/latest/CI/STDCI-Containers/index.html>
> .
>
> *Wait! I needed it to do X which it doesn't!*
>
> For the time being the new backend lacks some features that some may
> consider to be essential, such as automated cloning of patch source code
> and build artifact collection. We already have implemented patches
> providing a substantial amount of additional functionality, and hopefully
> we will be able to merge them soon. Following is a list of those patches
> and the features they implement:
>
>1. Automated source cloning support:
>- 104213 <https://gerrit.ovirt.org/104213>: Implement STDCI DSL
>   support for initContainers
>   - 104590 <https://gerrit.ovirt.org/104590>: STDCI DSL: Add the
>   `decorate` option
>   - 104668 <https://gerrit.ovirt.org/104668>: Document source cloning
>   extension for containers
>   2. Artifact collection support
>   - 104690 <https://gerrit.ovirt.org/104690>: Added NFS server
>   container image
>   - 104273 <https://gerrit.ovirt.org/104273>: STDCI PODS: Unique UID
>   for each job build's POD
>   - 104756 <https://gerrit.ovirt.org/104756>: pipeline-loader:
>   refactor: separate podspec func
>   - 104757 <https://gerrit.ovirt.org/104757>: pipeline-loader:
>   refactor: Use podspec struct def
>   - 104766 <https://gerrit.ovirt.org/104766>: STDCI PODs: Add
>   artifact collection logic
>   - 105522 <https://gerrit.ovirt.org/105522>: Documented artifact
>   collection in containers
>   3. Extended log collection
>   - 104842 <https://gerrit.ovirt.org/104842>: STDCI PODs: Add POD log
>   collection
>   - 105523 <https://gerrit.ovirt.org/105523>: Documented log
>   collection in containers
>4. Privileged container support
>   - 104786 <https://gerrit.ovirt.org/104786>: STDCI DSL: Enable
>   privileged containers
>   5. Support for using containers in gating jobs:
>   - 104804 <https://gerrit.ovirt.org/104804>: standard-stage:
>   refactor: move DSL to a library
>   - 104811 <https://gerrit.ovirt.org/104811>: gate: Support getting
>   suits from STDCI DSL
>   6. Providing the `extra_sources` file to OST suit containers:
>   - 104843 <https://gerrit.ovirt.org/104843>: stdci_runner: Create
>   extra_sources for PODs
>   7. Support for mirror injection and upstream source cloning
>   - 104917 <https://gerrit.ovirt.org/104917>: Added a container with
>   STDCI tools
>   - 104918 <https://gerrit.ovirt.org/104918>: decorate.py: Add script
>   - 104989 <https://gerrit.ovirt.org/104989>: STDCI DSL: Use `tools`
>   container for `decorate`
>   - 104994 <https://gerrit.ovirt.org/104994>: stdci_runner: Inject
>   mirrors in PODs
>
>
> As you can see, we have quite a big pile of reviews to do; as always, help
> is very welcome...
>
> Regards,
> Barak.
>
> --
> Barak Korren
> RHV DevOps team , RHCE, RHCi
> Red Hat EMEA
> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/XNZMWHDSTG77LZGP6DNDS6BRCH72JDXH/


Container-based CI backend is now available for use

2019-12-11 Thread Barak Korren
A little less than a month ago I sent an email to this list telling you all
about ongoing work to create a new container-based backend for the oVirt CI
system.

I'm pleased to announce that we have managed to finally merge an initial
set of patches implementing that backend yesterday, and it is now
available for general use.

*What? Where? How do I use it?*

Documentation about how to use the new backend is now available on Read the
Docs
<https://ovirt-infra-docs.readthedocs.io/en/latest/CI/STDCI-Containers/index.html>
.

*Wait! I needed it to do X which it doesn't!*

For the time being the new backend lacks some features that some may
consider to be essential, such as automated cloning of patch source code
and build artifact collection. We already have implemented patches
providing a substantial amount of additional functionality, and hopefully
we will be able to merge them soon. Following is a list of those patches
and the features they implement:

   1. Automated source cloning support:
   - 104213 <https://gerrit.ovirt.org/104213>: Implement STDCI DSL support
  for initContainers
  - 104590 <https://gerrit.ovirt.org/104590>: STDCI DSL: Add the
  `decorate` option
  - 104668 <https://gerrit.ovirt.org/104668>: Document source cloning
  extension for containers
  2. Artifact collection support
  - 104690 <https://gerrit.ovirt.org/104690>: Added NFS server
  container image
  - 104273 <https://gerrit.ovirt.org/104273>: STDCI PODS: Unique UID
  for each job build's POD
  - 104756 <https://gerrit.ovirt.org/104756>: pipeline-loader:
  refactor: separate podspec func
  - 104757 <https://gerrit.ovirt.org/104757>: pipeline-loader:
  refactor: Use podspec struct def
  - 104766 <https://gerrit.ovirt.org/104766>: STDCI PODs: Add artifact
  collection logic
  - 105522 <https://gerrit.ovirt.org/105522>: Documented artifact
  collection in containers
  3. Extended log collection
  - 104842 <https://gerrit.ovirt.org/104842>: STDCI PODs: Add POD log
  collection
  - 105523 <https://gerrit.ovirt.org/105523>: Documented log collection
  in containers
   4. Privileged container support
  - 104786 <https://gerrit.ovirt.org/104786>: STDCI DSL: Enable
  privileged containers
  5. Support for using containers in gating jobs:
  - 104804 <https://gerrit.ovirt.org/104804>: standard-stage: refactor:
  move DSL to a library
  - 104811 <https://gerrit.ovirt.org/104811>: gate: Support getting
  suits from STDCI DSL
  6. Providing the `extra_sources` file to OST suit containers:
  - 104843 <https://gerrit.ovirt.org/104843>: stdci_runner: Create
  extra_sources for PODs
  7. Support for mirror injection and upstream source cloning
  - 104917 <https://gerrit.ovirt.org/104917>: Added a container with
  STDCI tools
  - 104918 <https://gerrit.ovirt.org/104918>: decorate.py: Add script
  - 104989 <https://gerrit.ovirt.org/104989>: STDCI DSL: Use `tools`
  container for `decorate`
  - 104994 <https://gerrit.ovirt.org/104994>: stdci_runner: Inject
  mirrors in PODs


As you can see, we have quite a big pile of reviews to do; as always, help
is very welcome...

Regards,
Barak.

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/2B4SGYBPK3W7UN4G4PUQJEIAUBFSFQPA/


[JIRA] (OVIRT-2843) Whitelisting users for CI on GitHub does not work

2019-11-27 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren reassigned OVIRT-2843:
---

Assignee: Barak Korren  (was: infra)

> Whitelisting users for CI on GitHub does not work
> 
>
> Key: OVIRT-2843
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2843
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>  Components: GitHub
>Reporter: Tomas Golembiovsky
>Assignee: Barak Korren
>
> I've tried multiple times to whitelist multiple different people in CI 
> integration with GitHub by using the `ci add to whitelist` but it does not 
> seem to work. IIRC it runs the tests but does not whitelist the PR 
> originator.
> By the way there's a typo in Infra documentation in the command -- the word 
> _whitelist_ is misspelled.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100114)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/PJ2XFQB2N35OFFAIGA7I2ABXT3CUVJIO/


[JIRA] (OVIRT-2839) CI jobs failing global_setup - docker service fails.

2019-11-20 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren reassigned OVIRT-2839:
---

Assignee: Barak Korren  (was: infra)

> CI jobs failing global_setup - docker service fails.
> 
>
> Key: OVIRT-2839
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2839
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Bell Levin
>    Assignee: Barak Korren
>
> Hey,
> I have had a few consecutive jobs failing on global_setup.
> Here are a few jobs that failed on global_setup:
> https://jenkins.ovirt.org/job/standard-manual-runner/885
> https://jenkins.ovirt.org/job/standard-manual-runner/884
> https://jenkins.ovirt.org/job/standard-manual-runner/883
> https://jenkins.ovirt.org/job/standard-manual-runner/882
> https://jenkins.ovirt.org/job/standard-manual-runner/879
> https://jenkins.ovirt.org/job/standard-manual-runner/878
> Here are a few jobs that succeeded:
> https://jenkins.ovirt.org/job/standard-manual-runner/881
> https://jenkins.ovirt.org/job/standard-manual-runner/880
> https://jenkins.ovirt.org/job/standard-manual-runner/877
> This is currently blocking me as I am trying to push a new job to run the
> network functional tests in a container.
> Please let me know if you need any additional information.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100114)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/2I4IEGNVGUC2B5FXNLDI6WJSLJMFAKQX/


Re: manual system test runner broken

2019-11-19 Thread Barak Korren
On Tue, 19 Nov 2019 at 12:31, Sandro Bonazzola  wrote:

>
>
> Il giorno mar 19 nov 2019 alle ore 11:02 Barak Korren 
> ha scritto:
>
>> The fallback is defined in the reposync file of the suite - the option in
>> the job that talks about this does nothing AFAIK. If the non-default value
>> is selected it tries to add the non-existent `experimental` repo to
>> extra_sources. We should probably just drop that option from the Job GUI.
>>
>
>
> So we need a way to rewrite the fallback in the reposync from the manual
> runner job or it won't really be that useful.
>

Open a ticket...
I can't think of a way to do this that will not be very fragile, though;
maybe it's best to just pass it as an env var to the suite and
let it decide what and how to do it.
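
A minimal sketch of that env-var idea, assuming a job-provided variable named
FALLBACK_REPO and per-level reposync config file names (all names below are
assumptions, not existing job parameters):

    # In the suite's run script: pick the reposync config via the env var
    FALLBACK_REPO="${FALLBACK_REPO:-latest-released}"
    case "$FALLBACK_REPO" in
        latest-released) conf=reposync-config-released.repo ;;
        latest-tested)   conf=reposync-config-tested.repo ;;
        *) echo "unknown fallback repo: $FALLBACK_REPO" >&2; exit 1 ;;
    esac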


>
>
>>
>> On Tue, 19 Nov 2019 at 11:49, Sandro Bonazzola 
>> wrote:
>>
>>> Hi,
>>> just executed the basic suite for testing 4.3.7 rc4, and instead of falling
>>> back on latest released it was falling back on latest tested, which is newer
>>> than 4.3.7 rc4, so it made the test useless.
>>> Job execution is here:
>>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6071/
>>> Who's maintaining the manual OST runner? I have the feeling it's not a suite
>>> bug, but a job bug.
>>>
>>> --
>>>
>>> Sandro Bonazzola
>>>
>>> MANAGER, SOFTWARE ENGINEERING, EMEA R RHV
>>>
>>> Red Hat EMEA <https://www.redhat.com/>
>>>
>>> sbona...@redhat.com
>>> <https://www.redhat.com/>*Red Hat respects your work life balance.
>>> Therefore there is no need to answer this email out of your office hours.*
>>> ___
>>> Infra mailing list -- infra@ovirt.org
>>> To unsubscribe send an email to infra-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/4EBDHNB24R7GTE64MIIJOICMZ5LGQ2KQ/
>>>
>>
>>
>> --
>> Barak Korren
>> RHV DevOps team , RHCE, RHCi
>> Red Hat EMEA
>> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>>
>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R RHV
>
> Red Hat EMEA <https://www.redhat.com/>
>
> sbona...@redhat.com
> <https://www.redhat.com/>*Red Hat respects your work life balance.
> Therefore there is no need to answer this email out of your office hours.
> <https://mojo.redhat.com/docs/DOC-1199578>*
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/R4PBN3FFNB2XBCIYWRLYSKDNYVWKAD4A/


Re: manual system test runner broken

2019-11-19 Thread Barak Korren
The fallback is defined in the reposync file of the suite - the option in
the job that talks about this does nothing AFAIK. If the non-default value
is selected it tries to add the non-existent `experimental` repo to
extra_sources. We should probably just drop that option from the Job GUI.

On Tue, 19 Nov 2019 at 11:49, Sandro Bonazzola  wrote:

> Hi,
> just executed the basic suite for testing 4.3.7 rc4, and instead of falling
> back on latest released it was falling back on latest tested, which is newer
> than 4.3.7 rc4, so it made the test useless.
> Job execution is here:
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6071/
> Who's maintaining the manual OST runner? I have the feeling it's not a suite
> bug, but a job bug.
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R RHV
>
> Red Hat EMEA <https://www.redhat.com/>
>
> sbona...@redhat.com
> <https://www.redhat.com/>*Red Hat respects your work life balance.
> Therefore there is no need to answer this email out of your office hours.*
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/4EBDHNB24R7GTE64MIIJOICMZ5LGQ2KQ/
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/H5NZUWPI2RHTYDB76NLIKJOUIAN3F3DA/


[JIRA] (OVIRT-2832) Add Fedora 31 support in the CI

2019-11-16 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39982#comment-39982
 ] 

Barak Korren commented on OVIRT-2832:
-

{quote}
Will be even better, thanks!
{quote}

I'm looking forward to your code review on all current and future relevant 
patches.

> Add Fedora 31 support in the CI
> ---
>
> Key: OVIRT-2832
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2832
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: infra
>
> Vdsm want to support only Fedora 31 at this point.
> Having Fedora 31 available, we can simplify the code since we have
> same versions of lvm and other packages in Fedora 31 and RHEL 8.
> We need:
> - mirrors for fedora 31 repos
> - mock config for fedora 31
> Thanks,
> Nir



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100114)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/XO5Z5H7ZPZMFLNW2YOCH6U3SB63CWSBU/


[JIRA] (OVIRT-2832) Add Fedora 31 support in the CI

2019-11-13 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39971#comment-39971
 ] 

Barak Korren commented on OVIRT-2832:
-

While we could make mock work, it'd be nicer if we could get help to move the 
new container-based backend forward so the upstream container image could be 
used directly instead.

> Add Fedora 31 support in the CI
> ---
>
> Key: OVIRT-2832
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2832
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: infra
>
> Vdsm want to support only Fedora 31 at this point.
> Having Fedora 31 available, we can simplify the code since we have
> same versions of lvm and other packages in Fedora 31 and RHEL 8.
> We need:
> - mirrors for fedora 31 repos
> - mock config for fedora 31
> Thanks,
> Nir



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100114)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/J3KQ7QF55B3INUA6KMPPH36STRXBEQUS/


Container-based CI backend for oVirt

2019-11-13 Thread Barak Korren
*What are we talking about exactly?*
The CI team has been working over the last few weeks to create and
enable a container-based CI backend that is usable as an alternative to
the legacy VM-and-chroot backend we've been using so far.

*Why?*

   1. *You asked for it: *We've been approached by several developer teams
   asking for this kind of functionality
   2. *Faster:* The new backend requires far less setup overhead than the
   legacy one, and therefore allows jobs to start and finish faster
   3. *More flexible: *The old backend allowed specifying build/test
   dependencies in terms of RPM packages; the container-based one uses
   container images for this, which are far more flexible
   4. *More versatile:* There were some limits to what could be done
   reliably with chroots. The greater isolation provided by containers
   allows for better testing capabilities
   5. *Less maintenance:* With the legacy backend, the CI team had to take
   the time and provide support for every new release of a target
   distribution; the new backend is designed to allow using upstream container
   images directly, therefore letting developers use them as soon as they are
   released instead of having to wait for the CI team to provide support.

*Tell me more!*
A pending documentation update describing the new features and how to use
them can be found here:

   - 104660 <https://gerrit.ovirt.org/104660>: Document the new STDCI
   containers syntax

*Great! So can I use this now?*
Unfortunately, this is not yet enabled in production. While most of the
functionality has already been developed and tested, code review efforts
are still ongoing. If you'd like to help us move this along, please take
the time to help review the following patches:

   - 103937 <https://gerrit.ovirt.org/103937>: Add an STDCI DSL option for
   using containers
   - 104066 <https://gerrit.ovirt.org/104066>: Add STDCI DSL `podspecs`
   pseudo option
   - 104164 <https://gerrit.ovirt.org/104164>: stdci_runner: Support
   container-based threads

I'd like to thank @Marcin Sobczyk  and @Miguel Duarte
de Mora Barroso  for helping with the review effort
so far; further work is needed, as changes have been made following the
reviews.

*Hey, it's nice and all, but I think you forgot to make it do X*
Hold on! We are not done yet! While we want to put an initial set of
features in the hands of developers as soon as possible, work is already
under way to provide more functionality. To check out what is coming down
the line (and perhaps help it along), please have a look at the following
patches:

   - 104668 <https://gerrit.ovirt.org/104668>: Document source cloning
   extension for containers
   - 104213 <https://gerrit.ovirt.org/104213>: Implement STDCI DSL support
   for initContainers
   - 104590 <https://gerrit.ovirt.org/104590>: STDCI DSL: Add the
   `decorate` option
   - 104273 <https://gerrit.ovirt.org/104273>: STDCI PODS: Unique UID for
   each job build's POD


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/24DTL5IACICPFDVOSEWHH27IT24IQ3YX/


[JIRA] (OVIRT-2825) reposync on mirrors machine does not support syncing of modules data

2019-11-06 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39945#comment-39945
 ] 

Barak Korren commented on OVIRT-2825:
-

How is module information stored in the repo? Do consider that we generate our 
own metadata for the mirrors and do not copy it from the distro repos.

Maybe we could just generate the module metadata ourselves, or copy some files 
from the distro repos, instead of having reposync do it for us...
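
For illustration, a minimal sketch of generating it ourselves, assuming the
upstream {{modules.yaml}} has already been fetched and that the mirror path
below is just an example:

{code}
# Regenerate our own repodata, then attach the module metadata to it
createrepo_c /var/www/html/repos/yum/some-el8-repo/base
modifyrepo_c --mdtype=modules modules.yaml \
    /var/www/html/repos/yum/some-el8-repo/base/repodata/
{code}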

> reposync on mirrors machine does not support syncing of modules data
> 
>
> Key: OVIRT-2825
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2825
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>Reporter: Emil Natan
>Assignee: infra
>
> That's something added to the reposync version coming with CentOS8, so we 
> should probably upgrade to that version.
> [~accountid:557058:5ca52a09-2675-4285-a044-12ad20f6166a]
> [~accountid:557058:caa507e4-2696-4f45-8da5-d2585a4bb794]



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100114)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/DZSOMKYEULYTMJO7UBFJNAPZP3BRGUKT/


[JIRA] (OVIRT-2814) reposync fails syncing repos on completely unrelated paths (concurrency issue?)

2019-10-22 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39918#comment-39918
 ] 

Barak Korren edited comment on OVIRT-2814 at 10/22/19 1:54 PM:
---

{quote}
correct me if I’m worng but no real fix has been included right? lock file has 
just been removed and we’re going to see this happen again sooner or later.
{quote}

Ok, I see what you're talking about now; I missed the fact that the failure 
was in a job for a different mirror.

To solve this, we need to make sure that when we sync repo X, the {{reposync}} 
command only sees the part of "{{mirrors-reposync.conf}}" that is relevant for 
repo X. It should probably be doable by adding a {{sed}} command to 
{{mirror_mgr.sh}} that will remove the unneeded repos from the file before 
running {{reposync}}.

[~accountid:5b25ec3f8b08c009c48b25c9] this is exactly what we've talked about 
before you left today...


was (Author: bkor...@redhat.com):
{quote}
correct me if I’m worng but no real fix has been included right? lock file has 
just been removed and we’re going to see this happen again sooner or later.
{quote}

Ok, I see whay you talk about now, I missed the fact that the failure was in a 
job for a different mirror.

To solve this, we need to make sure that when we sync repo X, the {{reposync}} 
command only sees the part of "{{mirrors-reposync.conf}}" that is relevant for 
repo X. It should probably be doable by adding a {{sed}} command to 
{{mirror_mgr.sh}} that will remove the unneeded repos from the file before 
running {{reposync}}.

[~accountid:5b25ec3f8b08c009c48b25c9] this is exactly what we've talked about 
before you left today...

> reposync fails syncing repos on completely unrelated paths (concurrency 
> issue?)
> ---
>
> Key: OVIRT-2814
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2814
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>  Components: CI Mirrors
>Reporter: Sandro Bonazzola
>Assignee: infra
>
> About job
> https://jenkins.ovirt.org/job/system-sync_mirrors-centos-kvm-common-el7-x86_64/2934/console
> As you can see below
> 05:35:07 ++ reposync --config=jenkins/data/mirrors-reposync.conf
> --repoid=centos-kvm-common-el7 --arch=x86_64
> --cachedir=/home/jenkins/mirrors_cache
> --download_path=/var/www/html/repos/yum/centos-kvm-common-el7/base
> --norepopath --newest-only --urls --quiet
> the sync is related to "/var/www/html/repos/yum/centos-kvm-common-el7/base"
> using as cache directory "/home/jenkins/mirrors_cache"
> but in "/home/jenkins/mirrors_cache" there's
> "/home/jenkins/mirrors_cache/fedora-base-fc29":
> 05:35:16 Traceback (most recent call last):
> 05:35:16   File "/usr/bin/reposync", line 373, in 
> 05:35:16 main()
> 05:35:16   File "/usr/bin/reposync", line 185, in main
> 05:35:16 my.doRepoSetup()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line
> 681, in doRepoSetup
> 05:35:16 return self._getRepos(thisrepo, True)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line
> 721, in _getRepos
> 05:35:16 self._repos.doSetup(thisrepo)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 157,
> in doSetup
> 05:35:16 self.retrieveAllMD()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 88,
> in retrieveAllMD
> 05:35:16 dl = repo._async and repo._commonLoadRepoXML(repo)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 1468, in _commonLoadRepoXML
> 05:35:16 local  = self.cachedir + '/repomd.xml'
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 777, in <lambda>
> 05:35:16 cachedir = property(lambda self: self._dirGetAttr('cachedir'))
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 760, in _dirGetAttr
> 05:35:16 self.dirSetup()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 738, in dirSetup
> 05:35:16 self._dirSetupMkdir_p(dir)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 715, in _dirSetupMkdir_p
> 05:35:16 raise Errors.RepoError, msg
> 05:35:16 yum.Errors.RepoError: Error making cache directory:
> /home/jenkins/mirrors_cache/fedora-base-fc29 error was: [Errno 17] File
> exists: '/home/jenkins/mirrors_cache/fedora-base-fc29'
> this looks like a concurrency issue, a lock should be used in order to
> prevent two instances to use the same cache directory at the same time or
> use separate cache directories for different repos.

[JIRA] (OVIRT-2814) reposync fails syncing repos on completely unrelated paths (concurrency issue?)

2019-10-22 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2814:

Components: CI Mirrors

> reposync fails syncing repos on completely unrelated paths (concurrency 
> issue?)
> ---
>
> Key: OVIRT-2814
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2814
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>  Components: CI Mirrors
>Reporter: Sandro Bonazzola
>Assignee: infra
>
> About job
> https://jenkins.ovirt.org/job/system-sync_mirrors-centos-kvm-common-el7-x86_64/2934/console
> As you can see below
> 05:35:07 ++ reposync --config=jenkins/data/mirrors-reposync.conf
> --repoid=centos-kvm-common-el7 --arch=x86_64
> --cachedir=/home/jenkins/mirrors_cache
> --download_path=/var/www/html/repos/yum/centos-kvm-common-el7/base
> --norepopath --newest-only --urls --quiet
> the sync is related to "/var/www/html/repos/yum/centos-kvm-common-el7/base"
> using as cache directory "/home/jenkins/mirrors_cache"
> but in "/home/jenkins/mirrors_cache" there's
> "/home/jenkins/mirrors_cache/fedora-base-fc29":
> 05:35:16 Traceback (most recent call last):
> 05:35:16   File "/usr/bin/reposync", line 373, in 
> 05:35:16 main()
> 05:35:16   File "/usr/bin/reposync", line 185, in main
> 05:35:16 my.doRepoSetup()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line
> 681, in doRepoSetup
> 05:35:16 return self._getRepos(thisrepo, True)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line
> 721, in _getRepos
> 05:35:16 self._repos.doSetup(thisrepo)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 157,
> in doSetup
> 05:35:16 self.retrieveAllMD()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 88,
> in retrieveAllMD
> 05:35:16 dl = repo._async and repo._commonLoadRepoXML(repo)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 1468, in _commonLoadRepoXML
> 05:35:16 local  = self.cachedir + '/repomd.xml'
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 777, in <lambda>
> 05:35:16 cachedir = property(lambda self: self._dirGetAttr('cachedir'))
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 760, in _dirGetAttr
> 05:35:16 self.dirSetup()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 738, in dirSetup
> 05:35:16 self._dirSetupMkdir_p(dir)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 715, in _dirSetupMkdir_p
> 05:35:16 raise Errors.RepoError, msg
> 05:35:16 yum.Errors.RepoError: Error making cache directory:
> /home/jenkins/mirrors_cache/fedora-base-fc29 error was: [Errno 17] File
> exists: '/home/jenkins/mirrors_cache/fedora-base-fc29'
> this looks like a concurrency issue, a lock should be used in order to
> prevent two instances to use the same cache directory at the same time or
> use separate cache directories for different repos.
> -- 
> Sandro Bonazzola



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100114)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/GP27PBBAXR56I7POEAVIMGUBC75YCXW4/


[JIRA] (OVIRT-2814) reposync fails syncing repos on completely unrelated paths (concurrency issue?)

2019-10-22 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren reopened OVIRT-2814:
-

> reposync fails syncing repos on completely unrelated paths (concurrency 
> issue?)
> ---
>
> Key: OVIRT-2814
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2814
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>  Components: CI Mirrors
>Reporter: Sandro Bonazzola
>Assignee: infra
>
> About job
> https://jenkins.ovirt.org/job/system-sync_mirrors-centos-kvm-common-el7-x86_64/2934/console
> As you can see below
> 05:35:07 ++ reposync --config=jenkins/data/mirrors-reposync.conf
> --repoid=centos-kvm-common-el7 --arch=x86_64
> --cachedir=/home/jenkins/mirrors_cache
> --download_path=/var/www/html/repos/yum/centos-kvm-common-el7/base
> --norepopath --newest-only --urls --quiet
> the sync is related to "/var/www/html/repos/yum/centos-kvm-common-el7/base"
> using as cache directory "/home/jenkins/mirrors_cache"
> but in "/home/jenkins/mirrors_cache" there's
> "/home/jenkins/mirrors_cache/fedora-base-fc29":
> 05:35:16 Traceback (most recent call last):
> 05:35:16   File "/usr/bin/reposync", line 373, in 
> 05:35:16 main()
> 05:35:16   File "/usr/bin/reposync", line 185, in main
> 05:35:16 my.doRepoSetup()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line
> 681, in doRepoSetup
> 05:35:16 return self._getRepos(thisrepo, True)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line
> 721, in _getRepos
> 05:35:16 self._repos.doSetup(thisrepo)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 157,
> in doSetup
> 05:35:16 self.retrieveAllMD()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 88,
> in retrieveAllMD
> 05:35:16 dl = repo._async and repo._commonLoadRepoXML(repo)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 1468, in _commonLoadRepoXML
> 05:35:16 local  = self.cachedir + '/repomd.xml'
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 777, in <lambda>
> 05:35:16 cachedir = property(lambda self: self._dirGetAttr('cachedir'))
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 760, in _dirGetAttr
> 05:35:16 self.dirSetup()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 738, in dirSetup
> 05:35:16 self._dirSetupMkdir_p(dir)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 715, in _dirSetupMkdir_p
> 05:35:16 raise Errors.RepoError, msg
> 05:35:16 yum.Errors.RepoError: Error making cache directory:
> /home/jenkins/mirrors_cache/fedora-base-fc29 error was: [Errno 17] File
> exists: '/home/jenkins/mirrors_cache/fedora-base-fc29'
> this looks like a concurrency issue, a lock should be used in order to
> prevent two instances to use the same cache directory at the same time or
> use separate cache directories for different repos.
> -- 
> Sandro Bonazzola



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100114)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/2YS4H2RQ55P64QSGRCUJPLTCJCJ44CFP/


[JIRA] (OVIRT-2814) reposync fails syncing repos on completely unrelated paths (concurrency issue?)

2019-10-22 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39918#comment-39918
 ] 

Barak Korren commented on OVIRT-2814:
-

{quote}
correct me if I’m worng but no real fix has been included right? lock file has 
just been removed and we’re going to see this happen again sooner or later.
{quote}

Ok, I see what you're talking about now; I missed the fact that the failure 
was in a job for a different mirror.

To solve this, we need to make sure that when we sync repo X, the {{reposync}} 
command only sees the part of "{{mirrors-reposync.conf}}" that is relevant for 
repo X. It should probably be doable by adding a {{sed}} command to 
{{mirror_mgr.sh}} that will remove the unneeded repos from the file before 
running {{reposync}}.

[~accountid:5b25ec3f8b08c009c48b25c9] this is exactly what we've talked about 
before you left today...
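
A minimal sketch of that {{sed}} idea, assuming the config file uses standard
INI-style {{[section]}} headers (the repo id and temp-file handling below are
only illustrative):

{code}
repo=centos-kvm-common-el7
conf=jenkins/data/mirrors-reposync.conf
trimmed=$(mktemp)
# Keep only the [main] section and the section of the repo being synced;
# if the section is the last one in the file, sed simply prints to EOF.
for sec in main "$repo"; do
    sed -n "/^\[$sec\]/,/^\[/{/^\[$sec\]/p;/^\[/!p}" "$conf" >> "$trimmed"
done
reposync --config="$trimmed" --repoid="$repo" ...
{code}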

> reposync fails syncing repos on completely unrelated paths (concurrency 
> issue?)
> ---
>
> Key: OVIRT-2814
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2814
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Sandro Bonazzola
>Assignee: infra
>
> About job
> https://jenkins.ovirt.org/job/system-sync_mirrors-centos-kvm-common-el7-x86_64/2934/console
> As you can see below
> 05:35:07 ++ reposync --config=jenkins/data/mirrors-reposync.conf
> --repoid=centos-kvm-common-el7 --arch=x86_64
> --cachedir=/home/jenkins/mirrors_cache
> --download_path=/var/www/html/repos/yum/centos-kvm-common-el7/base
> --norepopath --newest-only --urls --quiet
> the sync is related to "/var/www/html/repos/yum/centos-kvm-common-el7/base"
> using as cache directory "/home/jenkins/mirrors_cache"
> but in "/home/jenkins/mirrors_cache" there's
> "/home/jenkins/mirrors_cache/fedora-base-fc29":
> 05:35:16 Traceback (most recent call last):
> 05:35:16   File "/usr/bin/reposync", line 373, in 
> 05:35:16 main()
> 05:35:16   File "/usr/bin/reposync", line 185, in main
> 05:35:16 my.doRepoSetup()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line
> 681, in doRepoSetup
> 05:35:16 return self._getRepos(thisrepo, True)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line
> 721, in _getRepos
> 05:35:16 self._repos.doSetup(thisrepo)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 157,
> in doSetup
> 05:35:16 self.retrieveAllMD()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 88,
> in retrieveAllMD
> 05:35:16 dl = repo._async and repo._commonLoadRepoXML(repo)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 1468, in _commonLoadRepoXML
> 05:35:16 local  = self.cachedir + '/repomd.xml'
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 777, in <lambda>
> 05:35:16 cachedir = property(lambda self: self._dirGetAttr('cachedir'))
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 760, in _dirGetAttr
> 05:35:16 self.dirSetup()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 738, in dirSetup
> 05:35:16 self._dirSetupMkdir_p(dir)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 715, in _dirSetupMkdir_p
> 05:35:16 raise Errors.RepoError, msg
> 05:35:16 yum.Errors.RepoError: Error making cache directory:
> /home/jenkins/mirrors_cache/fedora-base-fc29 error was: [Errno 17] File
> exists: '/home/jenkins/mirrors_cache/fedora-base-fc29'
> this looks like a concurrency issue, a lock should be used in order to
> prevent two instances to use the same cache directory at the same time or
> use separate cache directories for different repos.
> -- 
> Sandro Bonazzola



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100114)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/YTD4KI6LRDOZPSYCSNYVGXNLTVYSSYF7/


[JIRA] (OVIRT-2814) reposync fails syncing repos on completely unrelated paths (concurrency issue?)

2019-10-22 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39912#comment-39912
 ] 

Barak Korren commented on OVIRT-2814:
-

The ticket is from 5 days ago; the last run seems to have passed: 
https://jenkins.ovirt.org/job/system-sync_mirrors-centos-kvm-common-el7-x86_64/2950/

> reposync fails syncing repos on completely unrelated paths (concurrency 
> issue?)
> ---
>
> Key: OVIRT-2814
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2814
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Sandro Bonazzola
>Assignee: infra
>
> About job
> https://jenkins.ovirt.org/job/system-sync_mirrors-centos-kvm-common-el7-x86_64/2934/console
> As you can see below
> 05:35:07 ++ reposync --config=jenkins/data/mirrors-reposync.conf
> --repoid=centos-kvm-common-el7 --arch=x86_64
> --cachedir=/home/jenkins/mirrors_cache
> --download_path=/var/www/html/repos/yum/centos-kvm-common-el7/base
> --norepopath --newest-only --urls --quiet
> the sync is related to "/var/www/html/repos/yum/centos-kvm-common-el7/base"
> using as cache directory "/home/jenkins/mirrors_cache"
> but in "/home/jenkins/mirrors_cache" there's
> "/home/jenkins/mirrors_cache/fedora-base-fc29":
> 05:35:16 Traceback (most recent call last):
> 05:35:16   File "/usr/bin/reposync", line 373, in 
> 05:35:16 main()
> 05:35:16   File "/usr/bin/reposync", line 185, in main
> 05:35:16 my.doRepoSetup()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line
> 681, in doRepoSetup
> 05:35:16 return self._getRepos(thisrepo, True)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line
> 721, in _getRepos
> 05:35:16 self._repos.doSetup(thisrepo)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 157,
> in doSetup
> 05:35:16 self.retrieveAllMD()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 88,
> in retrieveAllMD
> 05:35:16 dl = repo._async and repo._commonLoadRepoXML(repo)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 1468, in _commonLoadRepoXML
> 05:35:16 local  = self.cachedir + '/repomd.xml'
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 777, in <lambda>
> 05:35:16 cachedir = property(lambda self: self._dirGetAttr('cachedir'))
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 760, in _dirGetAttr
> 05:35:16 self.dirSetup()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 738, in dirSetup
> 05:35:16 self._dirSetupMkdir_p(dir)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 715, in _dirSetupMkdir_p
> 05:35:16 raise Errors.RepoError, msg
> 05:35:16 yum.Errors.RepoError: Error making cache directory:
> /home/jenkins/mirrors_cache/fedora-base-fc29 error was: [Errno 17] File
> exists: '/home/jenkins/mirrors_cache/fedora-base-fc29'
> this looks like a concurrency issue, a lock should be used in order to
> prevent two instances to use the same cache directory at the same time or
> use separate cache directories for different repos.
> -- 
> Sandro Bonazzola



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100113)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/QVJIPHKEFORATIJZEXZUOFOT42FFS4UU/


[JIRA] (OVIRT-2814) reposync fails syncing repos on completely unrelated paths (concurrency issue?)

2019-10-22 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39909#comment-39909
 ] 

Barak Korren commented on OVIRT-2814:
-

{code}
this looks like a concurrency issue, a lock should be used in order to
prevent two instances to use the same cache directory at the same time or
use separate cache directories for different repos.
{code}

As you can see in the path name - it is unique to the repo, and there is only 
one update job per repo, which does not run concurrently. So the separate cache 
directories suggested in the last sentence you wrote above are already in place.

This might be a cleanup issue, where some failure scenario leaves some lock 
files behind.
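
If we do want to defend against overlapping runs anyway, a minimal sketch of 
serializing reposync runs per cache directory with {{flock}} (the lock-file path 
is a hypothetical choice; the reposync flags are the ones from the job log):

{code}
# abort early if another sync already holds the per-cachedir lock
LOCK=/home/jenkins/mirrors_cache/.reposync.lock
exec 9>"$LOCK"
flock --nonblock 9 || { echo "another sync holds $LOCK, aborting"; exit 1; }

reposync --config=jenkins/data/mirrors-reposync.conf \
         --repoid=centos-kvm-common-el7 --arch=x86_64 \
         --cachedir=/home/jenkins/mirrors_cache \
         --download_path=/var/www/html/repos/yum/centos-kvm-common-el7/base \
         --norepopath --newest-only --quiet
{code}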

> reposync fails syncing repos on completely unrelated paths (concurrency 
> issue?)
> ---
>
> Key: OVIRT-2814
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2814
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Sandro Bonazzola
>Assignee: infra
>
> About job
> https://jenkins.ovirt.org/job/system-sync_mirrors-centos-kvm-common-el7-x86_64/2934/console
> As you can see below
> 05:35:07 ++ reposync --config=jenkins/data/mirrors-reposync.conf
> --repoid=centos-kvm-common-el7 --arch=x86_64
> --cachedir=/home/jenkins/mirrors_cache
> --download_path=/var/www/html/repos/yum/centos-kvm-common-el7/base
> --norepopath --newest-only --urls --quiet
> the sync is related to "/var/www/html/repos/yum/centos-kvm-common-el7/base"
> using as cache directory "/home/jenkins/mirrors_cache"
> but in "/home/jenkins/mirrors_cache" there's
> "/home/jenkins/mirrors_cache/fedora-base-fc29":
> 05:35:16 Traceback (most recent call last):
> 05:35:16   File "/usr/bin/reposync", line 373, in 
> 05:35:16 main()
> 05:35:16   File "/usr/bin/reposync", line 185, in main
> 05:35:16 my.doRepoSetup()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line
> 681, in doRepoSetup
> 05:35:16 return self._getRepos(thisrepo, True)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line
> 721, in _getRepos
> 05:35:16 self._repos.doSetup(thisrepo)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 157,
> in doSetup
> 05:35:16 self.retrieveAllMD()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 88,
> in retrieveAllMD
> 05:35:16 dl = repo._async and repo._commonLoadRepoXML(repo)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 1468, in _commonLoadRepoXML
> 05:35:16 local  = self.cachedir + '/repomd.xml'
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 777, in 
> 05:35:16 cachedir = property(lambda self: self._dirGetAttr('cachedir'))
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 760, in _dirGetAttr
> 05:35:16 self.dirSetup()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 738, in dirSetup
> 05:35:16 self._dirSetupMkdir_p(dir)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 715, in _dirSetupMkdir_p
> 05:35:16 raise Errors.RepoError, msg
> 05:35:16 yum.Errors.RepoError: Error making cache directory:
> /home/jenkins/mirrors_cache/fedora-base-fc29 error was: [Errno 17] File
> exists: '/home/jenkins/mirrors_cache/fedora-base-fc29'
> this looks like a concurrency issue, a lock should be used in order to
> prevent two instances to use the same cache directory at the same time or
> use separate cache directories for different repos.
> -- 
> Sandro Bonazzola



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100113)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/WE5IYWTQQVIDGTUHUMAKRNLX7GT552RO/


[JIRA] (OVIRT-2811) s390x tests failing on trying to use `sudo` during artifact collection

2019-10-07 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren reassigned OVIRT-2811:
---

Assignee: Barak Korren  (was: infra)

> s390x tests failing on trying to use `sudo` during artifact collection
> --
>
> Key: OVIRT-2811
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2811
> Project: oVirt - virtualization made easy
>  Issue Type: Outage
>  Components: Jenkins Slaves, mock_runner
>Reporter: Sandro Bonazzola
>Assignee: Barak Korren
>Priority: High
>  Labels: s390x
>
> Jenkins is failing on s390x with:
> [2019-10-07T11:45:31.272Z] + sudo -n chown -R ovirt:ovirt
> /home/ovirt/workspace/ovirt-release_standard-check-patch/check-patch.fc30.s390x
> [2019-10-07T11:45:31.272Z] sudo: sorry, you must have a tty to run sudo
> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-release_standard-check-patch/detail/ovirt-release_standard-check-patch/151/pipeline/140
> -- 
> Sandro Bonazzola
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA <https://www.redhat.com/>
> sbona...@redhat.com
> <https://www.redhat.com/>*Red Hat respects your work life balance.
> Therefore there is no need to answer this email out of your office hours.*



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100111)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/B6ORDHIXINK35JF3PJKO33EW7QNWCBAD/


[JIRA] (OVIRT-2811) s390x tests failing on trying to use `sudo` during artifact collection

2019-10-07 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2811:

Summary: s390x tests failing on trying to use `sudo` during artifact 
collection  (was: s390x tests failing on missing tty)

> s390x tests failing on trying to use `sudo` during artifact collection
> --
>
> Key: OVIRT-2811
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2811
> Project: oVirt - virtualization made easy
>  Issue Type: Outage
>  Components: Jenkins Slaves, mock_runner
>Reporter: Sandro Bonazzola
>Assignee: infra
>Priority: High
>  Labels: s390x
>
> Jenkins is failing on s390x with:
> [2019-10-07T11:45:31.272Z] + sudo -n chown -R ovirt:ovirt
> /home/ovirt/workspace/ovirt-release_standard-check-patch/check-patch.fc30.s390x
> [2019-10-07T11:45:31.272Z] sudo: sorry, you must have a tty to run sudo
> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-release_standard-check-patch/detail/ovirt-release_standard-check-patch/151/pipeline/140
> -- 
> Sandro Bonazzola
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA <https://www.redhat.com/>
> sbona...@redhat.com
> <https://www.redhat.com/>*Red Hat respects your work life balance.
> Therefore there is no need to answer this email out of your office hours.*



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100111)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/B7TTNIPJDFQSVX3TE7U7S23GPEVGGNFT/


[JIRA] (OVIRT-2811) s390x tests failing on missing tty

2019-10-07 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2811:

Issue Type: Outage  (was: By-EMAIL)

> s390x tests failing on missing tty
> --
>
> Key: OVIRT-2811
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2811
> Project: oVirt - virtualization made easy
>  Issue Type: Outage
>  Components: Jenkins Slaves, mock_runner
>Reporter: Sandro Bonazzola
>Assignee: infra
>Priority: High
>  Labels: s390x
>
> Jenkins is failing on s390x with:
> [2019-10-07T11:45:31.272Z] + sudo -n chown -R ovirt:ovirt
> /home/ovirt/workspace/ovirt-release_standard-check-patch/check-patch.fc30.s390x
> [2019-10-07T11:45:31.272Z] sudo: sorry, you must have a tty to run sudo
> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-release_standard-check-patch/detail/ovirt-release_standard-check-patch/151/pipeline/140
> -- 
> Sandro Bonazzola
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA <https://www.redhat.com/>
> sbona...@redhat.com
> <https://www.redhat.com/>*Red Hat respects your work life balance.
> Therefore there is no need to answer this email out of your office hours.*



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100111)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/XVIT72O5LJYMQOSAIQCQOQFBD3EVYEKP/


[JIRA] (OVIRT-2811) s390x tests failing on missing tty

2019-10-07 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2811:

Labels: s390x  (was: )

> s390x tests failing on missing tty
> --
>
> Key: OVIRT-2811
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2811
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>  Components: Jenkins Slaves, mock_runner
>Reporter: Sandro Bonazzola
>Assignee: infra
>Priority: High
>  Labels: s390x
>
> Jenkins is failing on s390x with:
> [2019-10-07T11:45:31.272Z] + sudo -n chown -R ovirt:ovirt
> /home/ovirt/workspace/ovirt-release_standard-check-patch/check-patch.fc30.s390x
> [2019-10-07T11:45:31.272Z] sudo: sorry, you must have a tty to run sudo
> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-release_standard-check-patch/detail/ovirt-release_standard-check-patch/151/pipeline/140
> -- 
> Sandro Bonazzola
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA <https://www.redhat.com/>
> sbona...@redhat.com
> <https://www.redhat.com/>*Red Hat respects your work life balance.
> Therefore there is no need to answer this email out of your office hours.*



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100111)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/W7JFB4LXNF4T4JENAVC6STYZQNPLJ64X/


[JIRA] (OVIRT-2811) s390x tests failing on missing tty

2019-10-07 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2811:

Components: Jenkins Slaves
mock_runner

> s390x tests failing on missing tty
> --
>
> Key: OVIRT-2811
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2811
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>  Components: Jenkins Slaves, mock_runner
>Reporter: Sandro Bonazzola
>Assignee: infra
>Priority: High
>
> Jenkins is failing on s390x with:
> [2019-10-07T11:45:31.272Z] + sudo -n chown -R ovirt:ovirt
> /home/ovirt/workspace/ovirt-release_standard-check-patch/check-patch.fc30.s390x
> [2019-10-07T11:45:31.272Z] sudo: sorry, you must have a tty to run sudo
> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-release_standard-check-patch/detail/ovirt-release_standard-check-patch/151/pipeline/140
> -- 
> Sandro Bonazzola
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA <https://www.redhat.com/>
> sbona...@redhat.com
> <https://www.redhat.com/>*Red Hat respects your work life balance.
> Therefore there is no need to answer this email out of your office hours.*



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100111)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/R6S22K6OSML3LS34T3R3YOPIQ74L7X46/


[JIRA] (OVIRT-2811) s390x tests failing on missing tty

2019-10-07 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39892#comment-39892
 ] 

Barak Korren commented on OVIRT-2811:
-

Root cause of the issue is {{mock}} generating its log files with root 
permissions, while it used to generate them as an unprivileged user before.

This (fixed) issue seems related:
https://github.com/rpm-software-management/mock/issues/322 
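
As a possible workaround on the affected slaves - assuming the jobs run as the 
{{ovirt}} user, and until mock is fixed or updated - {{requiretty}} can be 
relaxed so that the non-interactive {{sudo -n chown}} in artifact collection 
works (the drop-in file name is hypothetical; validate with {{visudo -cf}}):

{code}
# /etc/sudoers.d/90-jenkins-ovirt
Defaults:ovirt !requiretty
ovirt ALL=(ALL) NOPASSWD: /usr/bin/chown
{code}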

> s390x tests failing on missing tty
> --
>
> Key: OVIRT-2811
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2811
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Sandro Bonazzola
>Assignee: infra
>Priority: High
>
> Jenkins is failing on s390x with:
> [2019-10-07T11:45:31.272Z] + sudo -n chown -R ovirt:ovirt
> /home/ovirt/workspace/ovirt-release_standard-check-patch/check-patch.fc30.s390x
> [2019-10-07T11:45:31.272Z] sudo: sorry, you must have a tty to run sudo
> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-release_standard-check-patch/detail/ovirt-release_standard-check-patch/151/pipeline/140
> -- 
> Sandro Bonazzola
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA <https://www.redhat.com/>
> sbona...@redhat.com
> <https://www.redhat.com/>*Red Hat respects your work life balance.
> Therefore there is no need to answer this email out of your office hours.*



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100111)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/Y2P7YZWQ7NCWZDJOLR4DB5RWCKNDGKEN/


[JIRA] (OVIRT-2810) CI build error with: hudson.plugins.git.GitException: Failed to fetch from https://gerrit.ovirt.org/jenkins

2019-10-06 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2810:

Issue Type: Outage  (was: By-EMAIL)

> CI build error with: hudson.plugins.git.GitException: Failed to fetch from 
> https://gerrit.ovirt.org/jenkins
> ---
>
> Key: OVIRT-2810
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2810
> Project: oVirt - virtualization made easy
>  Issue Type: Outage
>  Components: Jenkins Master
>Reporter: Nir Soffer
>Assignee: infra
>
> Any imageio patch fails now with the error below.
> Examples:
> *https://gerrit.ovirt.org/q/topic:py3+is:open+project:ovirt-imageio
> <https://gerrit.ovirt.org/q/topic:py3+is:open+project:ovirt-imageio>*
> *00:07:26*  ERROR: Error fetching remote repo 'origin'
> *00:07:26*  hudson.plugins.git.GitException: Failed to fetch from
> https://gerrit.ovirt.org/jenkins*00:07:26*at
> hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:894)*00:07:26*at
> hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1161)*00:07:26*
>   at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)*00:07:26*
>   at 
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:124)*00:07:26*
>   at 
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:93)*00:07:26*
>   at 
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:80)*00:07:26*
>   at 
> org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)*00:07:26*
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)*00:07:26*
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)*00:07:26*
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)*00:07:26*
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)*00:07:26*
>   at java.lang.Thread.run(Thread.java:748)*00:07:26*  Caused by:
> hudson.plugins.git.GitException: Command "git fetch --tags --progress
> https://gerrit.ovirt.org/jenkins
> +133b1a1729ce5c6caf1745ad8034ceaa4dd187c5:myhead" returned status code
> 128:*00:07:26*  stdout: *00:07:26*  stderr: error: no such remote ref
> 133b1a1729ce5c6caf1745ad8034ceaa4dd187c5*00:07:26*  *00:07:26*at
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2042)*00:07:26*
>   at 
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1761)*00:07:26*
>   at 
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$400(CliGitAPIImpl.java:72)*00:07:26*
>   at 
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:442)*00:07:26*
>   at 
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:153)*00:07:26*
>   at 
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:146)*00:07:26*
>   at hudson.remoting.UserRequest.perform(UserRequest.java:212)*00:07:26*
>   at hudson.remoting.UserRequest.perform(UserRequest.java:54)*00:07:26*
>   at hudson.remoting.Request$2.run(Request.java:369)*00:07:26*at
> hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)*00:07:26*
>   ... 4 more*00:07:26*Suppressed:
> hudson.remoting.Channel$CallSiteStackTrace: Remote call to
> vm0008.workers-phx.ovirt.org*00:07:26*at
> hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1743)*00:07:26*
>   at 
> hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)*00:07:26*
>   at hudson.remoting.Channel.call(Channel.java:957)*00:07:26* 
> at
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:146)*00:07:26*
>   at sun.reflect.GeneratedMethodAccessor715.invoke(Unknown
> Source)*00:07:26* at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)*00:07:26*
>   at java.lang.reflect.Method.invoke(Method.java:498)*00:07:26*   
> at
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:132)*00:07:26*
>   at com.sun.proxy.$Proxy83.execute(Unknown Source)*00:07:26* 
> at

[JIRA] (OVIRT-2810) CI build error with: hudson.plugins.git.GitException: Failed to fetch from https://gerrit.ovirt.org/jenkins

2019-10-06 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2810:

Components: Jenkins Master

> CI build error with: hudson.plugins.git.GitException: Failed to fetch from 
> https://gerrit.ovirt.org/jenkins
> ---
>
> Key: OVIRT-2810
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2810
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>  Components: Jenkins Master
>Reporter: Nir Soffer
>Assignee: infra
>
> Any imageio patch fails now with the error below.
> Examples:
> *https://gerrit.ovirt.org/q/topic:py3+is:open+project:ovirt-imageio
> <https://gerrit.ovirt.org/q/topic:py3+is:open+project:ovirt-imageio>*
> *00:07:26*  ERROR: Error fetching remote repo 'origin'
> *00:07:26*  hudson.plugins.git.GitException: Failed to fetch from
> https://gerrit.ovirt.org/jenkins*00:07:26*at
> hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:894)*00:07:26*at
> hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1161)*00:07:26*
>   at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)*00:07:26*
>   at 
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:124)*00:07:26*
>   at 
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:93)*00:07:26*
>   at 
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:80)*00:07:26*
>   at 
> org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)*00:07:26*
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)*00:07:26*
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)*00:07:26*
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)*00:07:26*
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)*00:07:26*
>   at java.lang.Thread.run(Thread.java:748)*00:07:26*  Caused by:
> hudson.plugins.git.GitException: Command "git fetch --tags --progress
> https://gerrit.ovirt.org/jenkins
> +133b1a1729ce5c6caf1745ad8034ceaa4dd187c5:myhead" returned status code
> 128:*00:07:26*  stdout: *00:07:26*  stderr: error: no such remote ref
> 133b1a1729ce5c6caf1745ad8034ceaa4dd187c5*00:07:26*  *00:07:26*at
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2042)*00:07:26*
>   at 
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1761)*00:07:26*
>   at 
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$400(CliGitAPIImpl.java:72)*00:07:26*
>   at 
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:442)*00:07:26*
>   at 
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:153)*00:07:26*
>   at 
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:146)*00:07:26*
>   at hudson.remoting.UserRequest.perform(UserRequest.java:212)*00:07:26*
>   at hudson.remoting.UserRequest.perform(UserRequest.java:54)*00:07:26*
>   at hudson.remoting.Request$2.run(Request.java:369)*00:07:26*at
> hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)*00:07:26*
>   ... 4 more*00:07:26*Suppressed:
> hudson.remoting.Channel$CallSiteStackTrace: Remote call to
> vm0008.workers-phx.ovirt.org*00:07:26*at
> hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1743)*00:07:26*
>   at 
> hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)*00:07:26*
>   at hudson.remoting.Channel.call(Channel.java:957)*00:07:26* 
> at
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:146)*00:07:26*
>   at sun.reflect.GeneratedMethodAccessor715.invoke(Unknown
> Source)*00:07:26* at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)*00:07:26*
>   at java.lang.reflect.Method.invoke(Method.java:498)*00:07:26*   
> at
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:132)*00:07:26*
>   at com.sun.proxy.$Proxy83.execute(Unknown Source)*00:07:26* 
> at

[JIRA] (OVIRT-2810) CI build error with: hudson.plugins.git.GitException: Failed to fetch from https://gerrit.ovirt.org/jenkins

2019-10-06 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren reassigned OVIRT-2810:
---

Assignee: Barak Korren  (was: infra)

> CI build error with: hudson.plugins.git.GitException: Failed to fetch from 
> https://gerrit.ovirt.org/jenkins
> ---
>
> Key: OVIRT-2810
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2810
> Project: oVirt - virtualization made easy
>  Issue Type: Outage
>  Components: Jenkins Master
>Reporter: Nir Soffer
>    Assignee: Barak Korren
>
> Any imageio patch fails now with the error below.
> Examples:
> *https://gerrit.ovirt.org/q/topic:py3+is:open+project:ovirt-imageio
> <https://gerrit.ovirt.org/q/topic:py3+is:open+project:ovirt-imageio>*
> *00:07:26*  ERROR: Error fetching remote repo 'origin'
> *00:07:26*  hudson.plugins.git.GitException: Failed to fetch from
> https://gerrit.ovirt.org/jenkins*00:07:26*at
> hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:894)*00:07:26*at
> hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1161)*00:07:26*
>   at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)*00:07:26*
>   at 
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:124)*00:07:26*
>   at 
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:93)*00:07:26*
>   at 
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:80)*00:07:26*
>   at 
> org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)*00:07:26*
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)*00:07:26*
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)*00:07:26*
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)*00:07:26*
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)*00:07:26*
>   at java.lang.Thread.run(Thread.java:748)*00:07:26*  Caused by:
> hudson.plugins.git.GitException: Command "git fetch --tags --progress
> https://gerrit.ovirt.org/jenkins
> +133b1a1729ce5c6caf1745ad8034ceaa4dd187c5:myhead" returned status code
> 128:*00:07:26*  stdout: *00:07:26*  stderr: error: no such remote ref
> 133b1a1729ce5c6caf1745ad8034ceaa4dd187c5*00:07:26*  *00:07:26*at
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2042)*00:07:26*
>   at 
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1761)*00:07:26*
>   at 
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$400(CliGitAPIImpl.java:72)*00:07:26*
>   at 
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:442)*00:07:26*
>   at 
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:153)*00:07:26*
>   at 
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:146)*00:07:26*
>   at hudson.remoting.UserRequest.perform(UserRequest.java:212)*00:07:26*
>   at hudson.remoting.UserRequest.perform(UserRequest.java:54)*00:07:26*
>   at hudson.remoting.Request$2.run(Request.java:369)*00:07:26*at
> hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)*00:07:26*
>   ... 4 more*00:07:26*Suppressed:
> hudson.remoting.Channel$CallSiteStackTrace: Remote call to
> vm0008.workers-phx.ovirt.org*00:07:26*at
> hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1743)*00:07:26*
>   at 
> hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)*00:07:26*
>   at hudson.remoting.Channel.call(Channel.java:957)*00:07:26* 
> at
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:146)*00:07:26*
>   at sun.reflect.GeneratedMethodAccessor715.invoke(Unknown
> Source)*00:07:26* at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)*00:07:26*
>   at java.lang.reflect.Method.invoke(Method.java:498)*00:07:26*   
> at
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:132)*00:07:26*
>   at com.sun.proxy.$Proxy83.execute(Unknown Source)*00:07:26* 
> at

[JIRA] (OVIRT-2808) CI broken - builds fail early in "loading code" stage with "attempted duplicate class definition for name: "Project""

2019-10-03 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39879#comment-39879
 ] 

Barak Korren commented on OVIRT-2808:
-

Known issue - we need the configuration in Jenkins to "catch up" to the code in 
our master branch.

should be resolved once the following finishes running:
https://jenkins.ovirt.org/job/jenkins_standard-on-merge/820
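
For context, this class of error appears when two pipeline {{load}} steps 
compile scripts that both define a class with the same name into the same 
Groovy class loader; a minimal sketch of the failure mode (file and class 
names here are hypothetical, not the actual job code):

{code}
// old_code.groovy and new_code.groovy both declare: class Project { ... }
def oldApi = load 'old_code.groovy'  // defines class "Project"
def newApi = load 'new_code.groovy'  // duplicate definition -> java.lang.LinkageError
{code}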

> CI broken - builds fail early in "loading code" stage with "attempted 
> duplicate class definition for name: "Project""
> -
>
> Key: OVIRT-2808
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2808
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: infra
>
> Example builds:
> - http://jenkins.ovirt.org/job/vdsm_standard-check-patch/12526/
> - http://jenkins.ovirt.org/job/vdsm_standard-check-patch/12527/
> java.lang.LinkageError: loader (instance of
> org/jenkinsci/plugins/workflow/cps/CpsGroovyShell$CleanGroovyClassLoader):
> attempted  duplicate class definition for name: "Project"
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at groovy.lang.GroovyClassLoader.access$400(GroovyClassLoader.java:62)
>   at 
> groovy.lang.GroovyClassLoader$ClassCollector.createClass(GroovyClassLoader.java:500)
>   at 
> groovy.lang.GroovyClassLoader$ClassCollector.onClassNode(GroovyClassLoader.java:517)
>   at 
> groovy.lang.GroovyClassLoader$ClassCollector.call(GroovyClassLoader.java:521)
>   at 
> org.codehaus.groovy.control.CompilationUnit$17.call(CompilationUnit.java:834)
>   at 
> org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1065)
>   at 
> org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:603)
>   at 
> org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:581)
>   at 
> org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:558)
>   at 
> groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
>   at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
>   at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
>   at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.doParse(CpsGroovyShell.java:142)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.parse(CpsGroovyShell.java:113)
>   at groovy.lang.GroovyShell.parse(GroovyShell.java:736)
>   at groovy.lang.GroovyShell.parse(GroovyShell.java:727)
>   at 
> org.jenkinsci.plugins.workflow.cps.steps.LoadStepExecution.start(LoadStepExecution.java:49)
>   at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:269)
>   at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:177)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:122)
>   at 
> org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:48)
>   at 
> org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
>   at 
> org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
>   at 
> com.cloudbees.groovy.cps.sandbox.DefaultInvoker.methodCall(DefaultInvoker.java:20)
>   at WorkflowScript.load_code(WorkflowScript:45)
>   at Script4.on_load(Script4.groovy:22)
>   at WorkflowScript.load_code(WorkflowScript:47)
>   at Script1.on_load(Script1.groovy:13)
>   at WorkflowScript.load_code(WorkflowScript:47)
>   at WorkflowScript.run(WorkflowScript:22)
>   at ___cps.transform___(Native Method)
>   at 
> com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:84)
>   at 
> com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:113)
>   at 
> com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:83)
>   at sun.reflect.GeneratedMethodAccessor681.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
>

[JIRA] (OVIRT-2808) CI broken - builds fail early in "loading code" stage with "attempted duplicate class definition for name: "Project""

2019-10-03 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren reassigned OVIRT-2808:
---

Assignee: Barak Korren  (was: infra)

> CI broken - builds fail early in "loading code" stage with "attempted 
> duplicate class definition for name: "Project""
> -
>
> Key: OVIRT-2808
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2808
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>    Reporter: Nir Soffer
>Assignee: Barak Korren
>
> Example builds:
> - http://jenkins.ovirt.org/job/vdsm_standard-check-patch/12526/
> - http://jenkins.ovirt.org/job/vdsm_standard-check-patch/12527/
> java.lang.LinkageError: loader (instance of
> org/jenkinsci/plugins/workflow/cps/CpsGroovyShell$CleanGroovyClassLoader):
> attempted  duplicate class definition for name: "Project"
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at groovy.lang.GroovyClassLoader.access$400(GroovyClassLoader.java:62)
>   at 
> groovy.lang.GroovyClassLoader$ClassCollector.createClass(GroovyClassLoader.java:500)
>   at 
> groovy.lang.GroovyClassLoader$ClassCollector.onClassNode(GroovyClassLoader.java:517)
>   at 
> groovy.lang.GroovyClassLoader$ClassCollector.call(GroovyClassLoader.java:521)
>   at 
> org.codehaus.groovy.control.CompilationUnit$17.call(CompilationUnit.java:834)
>   at 
> org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1065)
>   at 
> org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:603)
>   at 
> org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:581)
>   at 
> org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:558)
>   at 
> groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
>   at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
>   at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
>   at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.doParse(CpsGroovyShell.java:142)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.parse(CpsGroovyShell.java:113)
>   at groovy.lang.GroovyShell.parse(GroovyShell.java:736)
>   at groovy.lang.GroovyShell.parse(GroovyShell.java:727)
>   at 
> org.jenkinsci.plugins.workflow.cps.steps.LoadStepExecution.start(LoadStepExecution.java:49)
>   at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:269)
>   at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:177)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:122)
>   at 
> org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:48)
>   at 
> org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
>   at 
> org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
>   at 
> com.cloudbees.groovy.cps.sandbox.DefaultInvoker.methodCall(DefaultInvoker.java:20)
>   at WorkflowScript.load_code(WorkflowScript:45)
>   at Script4.on_load(Script4.groovy:22)
>   at WorkflowScript.load_code(WorkflowScript:47)
>   at Script1.on_load(Script1.groovy:13)
>   at WorkflowScript.load_code(WorkflowScript:47)
>   at WorkflowScript.run(WorkflowScript:22)
>   at ___cps.transform___(Native Method)
>   at 
> com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:84)
>   at 
> com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:113)
>   at 
> com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:83)
>   at sun.reflect.GeneratedMethodAccessor681.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
>   at 
> com.cloudbees.groovy.cps.impl.LocalVariableBlock$LocalVariable.get(LocalVariableB

[JIRA] (OVIRT-2803) Re: CI is not triggered for pushed gerrit updates

2019-09-25 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39863#comment-39863
 ] 

Barak Korren commented on OVIRT-2803:
-

This is a known issue - tracker ticket:

https://ovirt-jira.atlassian.net/browse/OVIRT-2802

Will close new ticket as a duplicate
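
For reference, the trigger phrase is matched against a comment-added regular 
expression in the Gerrit trigger configuration; a hypothetical pattern that 
would accept both {{ci test}} and {{ci please test}}:

{code}
^\s*ci\s+(please\s+)?test\s*$
{code}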

On Wed, 25 Sep 2019 at 13:08, Nir Soffer  wrote:

> On Wed, Sep 25, 2019 at 12:42 PM Amit Bawer  wrote:
>
>> CI has stopped from being triggered for pushed gerrit updates.
>>
>
> Adding infra-suppo...@ovirt.org - this files a bug in the infra bug tracker.
>
> Example at: https://gerrit.ovirt.org/#/c/103320/
>> last PS did not trigger CI tests.
>>
>
> More examples:
> https://gerrit.ovirt.org/c/103490/2
> https://gerrit.ovirt.org/c/103455/4
>
> Last triggered build was seen here:
> https://gerrit.ovirt.org/c/103549/1
> Sep 24 18:42.
>
> "ci test" also did not trigger a build, but I tried now again using "ci
> please test"
> and it worked.
> https://gerrit.ovirt.org/c/103490/2#message-8d40f6de_bd0564ff
>
> Maybe someone changed the pattern?
>
>
>

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

> Re: CI is not triggered for pushed gerrit updates
> -
>
> Key: OVIRT-2803
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2803
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: infra
>
> On Wed, Sep 25, 2019 at 12:42 PM Amit Bawer  wrote:
> > CI has stopped from being triggered for pushed gerrit updates.
> >
> Adding infra-suppo...@ovirt.org - this files a bug in the infra bug tracker.
> Example at: https://gerrit.ovirt.org/#/c/103320/
> > last PS did not trigger CI tests.
> >
> More examples:
> https://gerrit.ovirt.org/c/103490/2
> https://gerrit.ovirt.org/c/103455/4
> Last triggered build was seen here:
> https://gerrit.ovirt.org/c/103549/1
> Sep 24 18:42.
> "ci test" also did not trigger a build, but I tried now again using "ci
> please test"
> and it worked.
> https://gerrit.ovirt.org/c/103490/2#message-8d40f6de_bd0564ff
> Maybe someone changed the pattern?



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100111)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/R6KWTCRKLZ3CCZQMHREODUXR3M64ARCR/


Re: CI is not triggered for pushed gerrit updates

2019-09-25 Thread Barak Korren
This is a known issue - tracker ticket:

https://ovirt-jira.atlassian.net/browse/OVIRT-2802

Will close new ticket as a duplicate

On Wed, 25 Sep 2019 at 13:08, Nir Soffer  wrote:

> On Wed, Sep 25, 2019 at 12:42 PM Amit Bawer  wrote:
>
>> CI has stopped from being triggered for pushed gerrit updates.
>>
>
> Adding infra-suppo...@ovirt.org - this files a bug in the infra bug tracker.
>
> Example at: https://gerrit.ovirt.org/#/c/103320/
>> last PS did not trigger CI tests.
>>
>
> More examples:
> https://gerrit.ovirt.org/c/103490/2
> https://gerrit.ovirt.org/c/103455/4
>
> Last triggered build was seen here:
> https://gerrit.ovirt.org/c/103549/1
> Sep 24 18:42.
>
> "ci test" also did not trigger a build, but I tried now again using "ci
> please test"
> and it worked.
> https://gerrit.ovirt.org/c/103490/2#message-8d40f6de_bd0564ff
>
> Maybe someone changed the pattern?
>
>
>

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/Q245YZZMBQGWHT7BQF53LN32ZJEFNC43/


[JIRA] (OVIRT-1448) Enable devs to specify patch dependencies for OST

2019-09-24 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-1448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39851#comment-39851
 ] 

Barak Korren commented on OVIRT-1448:
-

Well, we're probably not going to implement this, but as long as we use the CQ 
we can still get issues where dependent patches get tested alone.
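
For reference, the OpenStack convention mentioned in the ticket description 
below is a {{Depends-On:}} commit-message footer; a hypothetical example 
(change URL and Change-Id are made up for illustration):

{code}
core: use the new host capability API

Depends-On: https://gerrit.ovirt.org/#/c/12345/
Change-Id: I0123456789abcdef0123456789abcdef01234567
{code}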

> Enable devs to specify patch dependencies for OST
> --
>
> Key: OVIRT-1448
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1448
> Project: oVirt - virtualization made easy
>  Issue Type: Improvement
>  Components: oVirt CI
>    Reporter: Barak Korren
>Assignee: infra
>  Labels: change-queue
>
> We have an issue with our system ATM where if there are patches for different 
> projects that depend on one another, the system is unaware of that dependency.
> What typically happens in this scenario is that sending the dependent patch 
> makes the experimental test fail and keep failing until the other patch is 
> also merged.
> The change queue will handle this better, but the typical behaviour for it 
> would be to reject both patches, unless they are somehow coordinated to make 
> it into the same test.
> The change queue core code already includes the ability to track and 
> understand dependencies between changes. What is missing is the ability for 
> developers to specify these dependencies.
> We would probably want to adopt OpenStack's convention here.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100111)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/TDTWAAV7TNYZZQLHFHBJUSO4HST3A2UV/


Re: Removing big (hc-* and metrics) suits from OST's check-patch (How to make OST faster in CI)

2019-09-19 Thread Barak Korren
On Thu, 19 Sep 2019 at 16:21, Yedidyah Bar David  wrote:

> On Thu, Sep 19, 2019 at 3:47 PM Barak Korren  wrote:
> >
> > I haven't seen any comments on this thread, so we are going to move
> forward with the change.
>
> I started writing some reply, then realized that the only effect on
> developers is when pushing patches to OST, not to their own project.
> Right? CQ will continue as normal, nightly runs, etc.? So I didn't
> reply...
>

Yeah, this only has to do with the big suits that are listed in $subject;
none of those are used by the CQ ATM.
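
For reference, the resizing discussed in the thread below boils down to the 
memory settings in the single shared POD spec used for all suits; a rough 
sketch of such a stanza (field placement is hypothetical, the values are the 
ones quoted in the thread):

    # shared OST suit-runner POD spec (excerpt)
    spec:
      containers:
      - name: suit-runner
        resources:
          requests:
            memory: 14Gi   # was 32Gi: ~40 concurrent suits instead of 15
          limits:
            memory: 14Gi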


>
> If so, that's fine for me.
>
> Please document that somewhere. Specifically, how to do the last two
> points in [1]:
>
> >
> > On Mon, 2 Sep 2019 at 09:03, Barak Korren  wrote:
> >>
> >> Adding Evgeny and Shirly who are AFAIK the owners of the metrics suit.
> >>
> >> On Sun, 1 Sep 2019 at 17:07, Barak Korren  wrote:
> >>>
> >>> If you have been using or monitoring any OST suits recently, you may
> have noticed we've been suffering from long delays in allocating CI
> hardware resources for running OST suits. I'd like to briefly discuss the
> reasons behind this, what are planning to do to resolve this and the
> implication of those actions for big suit owners.
> >>>
> >>> As you might know, we have moved a while ago from running OST suits
> each on its own dedicated server to running them inside containers managed
> by OpenShift. That had allowed us to run multiple OST suits on the same
> bare-metal host which in turn increased our overall capacity by 50% while
> still allowing us to free up hardware for accommodating the kubevirt
> project on our CI hardware.
> >>>
> >>> Our infrastructure is currently built in a way where we use the exact
> same POD specification (and therefore resource settings) for all suits.
> Making it more flexible at this point would require significant code
> changes we are not likely to make. What this means is that we need to make
> sure our PODs have enough resources to run the most demanding suits. It
> also means we waste some resources when running less demanding ones.
> >>>
> >>> Given the set of OST suits we have ATM, we sized our PODs to allocate
> 32GiB of RAM. Given the servers we have, this means we can run 15 suits at
> a time in parallel. This was sufficient for a while, but given increasing
> demand, and the expectation for it to increase further once we introduce
> the patch gating features we've been working on, we must find a way to
> significantly increase our suit running capacity.
> >>>
> >>> We have measured the amount of RAM required by each suit and came to
> the conclusion that for the vast majority of suits, we could settle for
> PODs that allocate only 14Gibs of RAM. If we make that change, we would be
> able to run a total of 40 suits at a time, almost tripling our current
> capacity.
> >>>
> >>> The downside of making this change is that our STDCI V2 infrastructure
> will no longer be able to run suits that require more than 14GiB of RAM.
> This effectively means it would no longer be possible to run these suits
> from OST's check-patch job or from the OST manual job.
> >>>
> >>> The list of relevant suits that would be affected follows, the suit
> owners, as documented in the CI configuration, have been added as "to"
> recipients to the message:
> >>>
> >>> hc-basic-suite-4.3
> >>> hc-basic-suite-master
> >>> metrics-suite-4.3
> >>>
> >>> Since we're aware people would still like to be able to work with the
> bigger suits, we will leverage the nightly suit invocation jobs to enable
> them to be run in the CI infra. We will support the following use cases:
> >>>
> >>> Periodically running the suit on the latest oVirt packages - this will
> be done by the nightly job like it is done today
> >>> Running the suit to test changes to the suit's code - while currently
> this is done automatically by check-patch, this would have to be done in the
> future by manually triggering the nightly job and setting
> the REFSPEC parameter to point to the examined patch
> >>> Triggering the suit manually - This would be done by triggering the
> suit-specific nightly job (as opposed to the general OST manual job)
>
> [1] ^^
>
> >>>
> >>>  The patches listed below implement the changes outlined above:
> >>>
> >>> 102757 nightly-system-tests: big suits -> big containers
> >>> 102771: stdci: Drop `big` suits from check-patch
> >>>
&g

Re: Removing big (hc-* and metrics) suits from OST's check-patch (How to make OST faster in CI)

2019-09-19 Thread Barak Korren
I haven't seen any comments on this thread, so we are going to move forward
with the change.

On Mon, 2 Sep 2019 at 09:03, Barak Korren  wrote:

> Adding Evgeny and Shirly who are AFAIK the owners of the metrics suit.
>
> On Sun, 1 Sep 2019 at 17:07, Barak Korren  wrote:
>
>> If you have been using or monitoring any OST suits recently, you may have
>> noticed we've been suffering from long delays in allocating CI hardware
>> resources for running OST suits. I'd like to briefly discuss the reasons
>> behind this, what we are planning to do to resolve this and the implications of
>> those actions for big suit owners.
>>
>> As you might know, we have moved a while ago from running OST suits each
>> on its own dedicated server to running them inside containers managed by
>> OpenShift. That had allowed us to run multiple OST suits on the same
>> bare-metal host which in turn increased our overall capacity by 50% while
>> still allowing us to free up hardware for accommodating the kubevirt
>> project on our CI hardware.
>>
>> Our infrastructure is currently built in a way where we use the exact
>> same POD specification (and therefore resource settings) for all suits.
>> Making it more flexible at this point would require significant code
>> changes we are not likely to make. What this means is that we need to make
>> sure our PODs have enough resources to run the most demanding suits. It
>> also means we waste some resources when running less demanding ones.
>>
>> Given the set of OST suits we have ATM, we sized our PODs to allocate
>> 32GiB of RAM. Given the servers we have, this means we can run 15 suits at
>> a time in parallel. This was sufficient for a while, but given increasing
>> demand, and the expectation for it to increase further once we introduce
>> the patch gating features we've been working on, we must find a way to
>> significantly increase our suit running capacity.
>>
>> We have measured the amount of RAM required by each suit and came to the
>> conclusion that for the vast majority of suits, we could settle for PODs
>> that allocate only 14GiB of RAM. If we make that change, we would be able
>> to run a total of 40 suits at a time, almost tripling our current capacity.
>>
>> The downside of making this change is that our STDCI V2 infrastructure
>> will no longer be able to run suits that require more than 14GiB of RAM.
>> This effectively means it would no longer be possible to run these suits
>> from OST's check-patch job or from the OST manual job.
>>
>> The list of relevant suits that would be affected follows, the suit
>> owners, as documented in the CI configuration, have been added as "to"
>> recipients to the message:
>>
>>- hc-basic-suite-4.3
>>- hc-basic-suite-master
>>- metrics-suite-4.3
>>
>> Since we're aware people would still like to be able to work with the
>> bigger suits, we will leverage the nightly suit invocation jobs to enable
>> them to be run in the CI infra. We will support the following use cases:
>>
>>- *Periodically running the suit on the latest oVirt packages* - this
>>will be done by the nightly job like it is done today
>>- *Running the suit to test changes to the suit's code* - while
>>currently this is done automatically by check-patch, this would have to be
>>done in the future by manually triggering the nightly job and
>>setting the REFSPEC parameter to point to the examined patch
>>- *Triggering the suit manually* - This would be done by triggering
>>the suit-specific nightly job (as opposed to the general OST manual job)
>>
>>  The patches listed below implement the changes outlined above:
>>
>>- 102757 <https://gerrit.ovirt.org/102757> nightly-system-tests: big
>>suits -> big containers
>>- 102771 <https://gerrit.ovirt.org/102771>: stdci: Drop `big` suits
>>from check-patch
>>
>> We know that making the changes we presented will make things a little
>> less convenient for users and maintainers of the big suits, but we believe
>> the benefits of having vastly increased execution capacity for all other
>> suits outweigh those shortcomings.
>>
>> We would like to hear all relevant comments and questions from the suit
>> owners and other interested parties, especially if you think we should not
>> carry out the changes we propose.
>> Please take the time to respond on this thread, or on the linked patches.
>>
>> Thanks,
>>
>> --
>> Barak Korren
>> RHV DevOps team , RHCE, RHCi
>>

[JIRA] (OVIRT-2794) OST is broken since this morning - looks like infra issue

2019-09-09 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren reassigned OVIRT-2794:
---

Assignee: Ehud Yonasi  (was: infra)

> OST is broken since this morning - looks like infra issue
> -
>
> Key: OVIRT-2794
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2794
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>  Components: Jenkins Slaves
>Reporter: Nir Soffer
>Assignee: Ehud Yonasi
>  Labels: docker
>
> The last successful build was today at 08:10:
> Since then all builds fail very early with the error below - which is not
> related to oVirt.
> {code}
> Removing image:
> sha256:f8e5aa8e979155e074411bfef9adade6cdcdf3a5a2eb1d5ad2dbf0288d585ffa,
> force=True
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 222,
> in _raise_for_status
> response.raise_for_status()
>   File "/usr/lib/python3.6/site-packages/requests/models.py", line 893, in
> raise_for_status
> raise HTTPError(http_error_msg, response=self)
> requests.exceptions.HTTPError: 404 Client Error: Not Found for url:
> http+docker://localunixsocket/v1.30/images/sha256:f8e5aa8e979155e074411bfef9adade6cdcdf3a5a2eb1d5ad2dbf0288d585ffa?force=True=False
> During handling of the above exception, another exception occurred:
> Traceback (most recent call last):
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 349, in 
> main()
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 37, in main
> safe_image_cleanup(client, whitelisted_repos)
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 107, in safe_image_cleanup
> _safe_rm(client, parent)
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 329, in _safe_rm
> client.images.remove(image_id, force=force)
>   File "/usr/lib/python3.6/site-packages/docker/models/images.py", line
> 288, in remove
> self.client.api.remove_image(*args, **kwargs)
>   File "/usr/lib/python3.6/site-packages/docker/utils/decorators.py", line
> 19, in wrapped
> return f(self, resource_id, *args, **kwargs)
>   File "/usr/lib/python3.6/site-packages/docker/api/image.py", line 481, in
> remove_image
> return self._result(res, True)
>   File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 228,
> in _result
> self._raise_for_status(response)
>   File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 224,
> in _raise_for_status
> raise create_api_error_from_http_exception(e)
>   File "/usr/lib/python3.6/site-packages/docker/errors.py", line 31, in
> create_api_error_from_http_exception
> raise cls(e, response=response, explanation=explanation)
> docker.errors.NotFound: 404 Client Error: Not Found ("reference does not
> exist")
> Aborting.
> Build step 'Execute shell' marked build as failure
> {code}
> [image: Failed > Console Output]
> <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5542/console>
> #5542 <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5542/>
> Sep 5, 2019 3:02 PM
> <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5542/>
> [image: Failed > Console Output]
> <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5541/console>
> #5541 <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5541/>
> Sep 5, 2019 3:02 PM
> <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5541/>
> [image: Failed > Console Output]
> <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5540/console>
> #5540 <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5540/>
> Sep 5, 2019 3:01 PM
> <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5540/>
> [image: Failed > Console Output]
> <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5539/console>
> #5539 <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5539/>
> Sep 5, 2019 2:13 PM
> <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5539/>
> [image: Failed > Console Output]
> <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5538/console>
> #5538 <https://jenkins.ovirt.org/job/ovirt-

[JIRA] (OVIRT-2794) OST is broken since this morning - looks like infra issue

2019-09-09 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39798#comment-39798
 ] 

Barak Korren commented on OVIRT-2794:
-

[~accountid:5aa0f39f5a4d022884128a0f] had started testing {{docker_cleanup.py}} 
on CentOS 7, so assigning the ticket to him.

> OST is broken since this morning - looks like infra issue
> -
>
> Key: OVIRT-2794
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2794
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>  Components: Jenkins Slaves
>Reporter: Nir Soffer
>Assignee: infra
>  Labels: docker
>
> The last successful build was today at 08:10:
> Since then all builds fail very early with the error below - which is not
> related to oVirt.
> {code}
> Removing image:
> sha256:f8e5aa8e979155e074411bfef9adade6cdcdf3a5a2eb1d5ad2dbf0288d585ffa,
> force=True
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 222,
> in _raise_for_status
> response.raise_for_status()
>   File "/usr/lib/python3.6/site-packages/requests/models.py", line 893, in
> raise_for_status
> raise HTTPError(http_error_msg, response=self)
> requests.exceptions.HTTPError: 404 Client Error: Not Found for url:
> http+docker://localunixsocket/v1.30/images/sha256:f8e5aa8e979155e074411bfef9adade6cdcdf3a5a2eb1d5ad2dbf0288d585ffa?force=True=False
> During handling of the above exception, another exception occurred:
> Traceback (most recent call last):
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 349, in <module>
> main()
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 37, in main
> safe_image_cleanup(client, whitelisted_repos)
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 107, in safe_image_cleanup
> _safe_rm(client, parent)
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 329, in _safe_rm
> client.images.remove(image_id, force=force)
>   File "/usr/lib/python3.6/site-packages/docker/models/images.py", line
> 288, in remove
> self.client.api.remove_image(*args, **kwargs)
>   File "/usr/lib/python3.6/site-packages/docker/utils/decorators.py", line
> 19, in wrapped
> return f(self, resource_id, *args, **kwargs)
>   File "/usr/lib/python3.6/site-packages/docker/api/image.py", line 481, in
> remove_image
> return self._result(res, True)
>   File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 228,
> in _result
> self._raise_for_status(response)
>   File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 224,
> in _raise_for_status
> raise create_api_error_from_http_exception(e)
>   File "/usr/lib/python3.6/site-packages/docker/errors.py", line 31, in
> create_api_error_from_http_exception
> raise cls(e, response=response, explanation=explanation)
> docker.errors.NotFound: 404 Client Error: Not Found ("reference does not
> exist")
> Aborting.
> Build step 'Execute shell' marked build as failure
> {code}
> Failed builds:
> #5542 <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5542/> - Sep 5, 2019 3:02 PM
> #5541 <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5541/> - Sep 5, 2019 3:02 PM
> #5540 <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5540/> - Sep 5, 2019 3:01 PM
> #5539 <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5539/> - Sep 5, 2019 2:13 PM
> #5538 <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5538/> - Sep 5, 2019 1:58 PM

[JIRA] (OVIRT-2794) OST is broken since this morning - looks like infra issue

2019-09-09 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2794:

Labels: docker  (was: )

> OST is broken since this morning - looks like infra issue
> -
>
> Key: OVIRT-2794
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2794
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>  Components: Jenkins Slaves
>Reporter: Nir Soffer
>Assignee: infra
>  Labels: docker
>
> The last successful build was today at 08:10:
> Since then all builds fail very early with the error below - which is not
> related to oVirt.
> {code}
> Removing image:
> sha256:f8e5aa8e979155e074411bfef9adade6cdcdf3a5a2eb1d5ad2dbf0288d585ffa,
> force=True
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 222,
> in _raise_for_status
> response.raise_for_status()
>   File "/usr/lib/python3.6/site-packages/requests/models.py", line 893, in
> raise_for_status
> raise HTTPError(http_error_msg, response=self)
> requests.exceptions.HTTPError: 404 Client Error: Not Found for url:
> http+docker://localunixsocket/v1.30/images/sha256:f8e5aa8e979155e074411bfef9adade6cdcdf3a5a2eb1d5ad2dbf0288d585ffa?force=True=False
> During handling of the above exception, another exception occurred:
> Traceback (most recent call last):
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 349, in <module>
> main()
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 37, in main
> safe_image_cleanup(client, whitelisted_repos)
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 107, in safe_image_cleanup
> _safe_rm(client, parent)
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 329, in _safe_rm
> client.images.remove(image_id, force=force)
>   File "/usr/lib/python3.6/site-packages/docker/models/images.py", line
> 288, in remove
> self.client.api.remove_image(*args, **kwargs)
>   File "/usr/lib/python3.6/site-packages/docker/utils/decorators.py", line
> 19, in wrapped
> return f(self, resource_id, *args, **kwargs)
>   File "/usr/lib/python3.6/site-packages/docker/api/image.py", line 481, in
> remove_image
> return self._result(res, True)
>   File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 228,
> in _result
> self._raise_for_status(response)
>   File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 224,
> in _raise_for_status
> raise create_api_error_from_http_exception(e)
>   File "/usr/lib/python3.6/site-packages/docker/errors.py", line 31, in
> create_api_error_from_http_exception
> raise cls(e, response=response, explanation=explanation)
> docker.errors.NotFound: 404 Client Error: Not Found ("reference does not
> exist")
> Aborting.
> Build step 'Execute shell' marked build as failure
> {code}
> Failed builds:
> #5542 <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5542/> - Sep 5, 2019 3:02 PM
> #5541 <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5541/> - Sep 5, 2019 3:02 PM
> #5540 <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5540/> - Sep 5, 2019 3:01 PM
> #5539 <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5539/> - Sep 5, 2019 2:13 PM
> #5538 <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5538/> - Sep 5, 2019 1:58 PM

[JIRA] (OVIRT-2794) OST is broken since this morning - looks like infra issue

2019-09-09 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2794:

Components: Jenkins Slaves

> OST is broken since this morning - looks like infra issue
> -
>
> Key: OVIRT-2794
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2794
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>  Components: Jenkins Slaves
>Reporter: Nir Soffer
>Assignee: infra
>
> The last successful build was today at 08:10:
> Since then all builds fail very early with the error below - which is not
> related to oVirt.
> {code}
> Removing image:
> sha256:f8e5aa8e979155e074411bfef9adade6cdcdf3a5a2eb1d5ad2dbf0288d585ffa,
> force=True
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 222,
> in _raise_for_status
> response.raise_for_status()
>   File "/usr/lib/python3.6/site-packages/requests/models.py", line 893, in
> raise_for_status
> raise HTTPError(http_error_msg, response=self)
> requests.exceptions.HTTPError: 404 Client Error: Not Found for url:
> http+docker://localunixsocket/v1.30/images/sha256:f8e5aa8e979155e074411bfef9adade6cdcdf3a5a2eb1d5ad2dbf0288d585ffa?force=True=False
> During handling of the above exception, another exception occurred:
> Traceback (most recent call last):
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 349, in <module>
> main()
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 37, in main
> safe_image_cleanup(client, whitelisted_repos)
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 107, in safe_image_cleanup
> _safe_rm(client, parent)
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 329, in _safe_rm
> client.images.remove(image_id, force=force)
>   File "/usr/lib/python3.6/site-packages/docker/models/images.py", line
> 288, in remove
> self.client.api.remove_image(*args, **kwargs)
>   File "/usr/lib/python3.6/site-packages/docker/utils/decorators.py", line
> 19, in wrapped
> return f(self, resource_id, *args, **kwargs)
>   File "/usr/lib/python3.6/site-packages/docker/api/image.py", line 481, in
> remove_image
> return self._result(res, True)
>   File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 228,
> in _result
> self._raise_for_status(response)
>   File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 224,
> in _raise_for_status
> raise create_api_error_from_http_exception(e)
>   File "/usr/lib/python3.6/site-packages/docker/errors.py", line 31, in
> create_api_error_from_http_exception
> raise cls(e, response=response, explanation=explanation)
> docker.errors.NotFound: 404 Client Error: Not Found ("reference does not
> exist")
> Aborting.
> Build step 'Execute shell' marked build as failure
> {code}
> Failed builds:
> #5542 <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5542/> - Sep 5, 2019 3:02 PM
> #5541 <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5541/> - Sep 5, 2019 3:02 PM
> #5540 <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5540/> - Sep 5, 2019 3:01 PM
> #5539 <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5539/> - Sep 5, 2019 2:13 PM
> #5538 <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5538/> - Sep 5, 2019 1:58 PM

[JIRA] (OVIRT-2794) OST is broken since this morning - looks like infra issue

2019-09-09 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39797#comment-39797
 ] 

Barak Korren commented on OVIRT-2794:
-

This was a bit puzzling. We've seen issues between {{docker_cleanup.py}} and 
Docker appear sporadically in the past, and therefore have made the job 
code generally not fail when {{docker_cleanup.py}} fails, and instead send an 
email to the infra list. It turns out that was only true for the V2 code; for 
the V1 code (which is still used in the manual job and the nightly jobs) those 
failures could still arise.

We did verify that {{docker_cleanup.py}} works on CentOS 7 with the Python 3 
docker API client before merging the patch, so it's strange we did not see the 
issue then.

[~accountid:557058:5ca52a09-2675-4285-a044-12ad20f6166a] some of your 
statements above seem to include some wrong assumptions about how the system is 
built. We're not actually exposing the host's Docker daemon to the CI code; 
instead, we run our own Docker instance inside the container that is used 
to run the CI code. That way we can ensure there can be no cross-talk when 
running multiple CI containers on the same host.

[~accountid:557058:cc1e0e66-9881-45e2-b0b7-ccaa3e60f26e] as far as using 
Podman, I think doing that at this point will be quite a challenge for a number 
of reasons:
# We're currently using OpenShift 3.7 to manage our containers; this implies 
that we must run Docker on our hosts, since AFAIK OpenShift only started 
supporting CRI-O in 4.0 or 4.1.
# To allow CI scripts and test suites to use Docker we run nested Docker 
instances inside the CI containers. We know that Docker in Docker works well for 
our use cases. Running Podman in Docker will probably be more challenging.
# Since we're still using {{mock}} to encapsulate the CI script inside the CI 
container, we're bind-mounting the Docker socket from the container into mock. 
We know there are issues when running Podman in mock, so solving those will 
take some work.
# People who write CI scripts and suites tend to expect things to "just work" 
in CI like they do on their laptops, and hence tend to use Docker commands. 
Removing Docker will force everyone to learn Podman, and we'll need to make 
changes everywhere.

Our current suspicion is that this issue may have to do with the particular 
version of Docker that is installed inside the CI container. While our 
{{global_setup.sh}} script generally keeps Docker up to date on the CI slaves, 
we've intentionally skipped that update code when running in a container. I 
suspect that the version of Docker in the CI containers is older than 
the one running on the CI slaves. That would explain why we did not see this 
issue when working on the {{docker_cleanup.py}} patch, since that was tested on 
the normal slaves and not the containers.

Here is what I think we should do now:
# Verify again that {{docker_cleanup.py}} works well on CentOS with the Python 
3 Docker client API.
# If so, inspect the version of Docker we have in the containers, and finally
# Build an updated container image with a newer version of Docker as needed

Note that updating the container image will require us to test it thoroughly 
and ensure it can properly run both OST and {{kubevirt-ci}}.
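
In the meantime, regardless of the version question, we could also make the 
cleanup itself tolerate images that disappear mid-run. A minimal sketch, 
assuming the same Python 3 docker SDK the traceback shows (the helper name is 
illustrative, not the actual {{docker_cleanup.py}} code):

{code}
import docker
from docker.errors import NotFound

def safe_rm_image(client, image_id, force=True):
    """Remove an image, treating a 404 from the daemon as already-gone."""
    try:
        client.images.remove(image_id, force=force)
    except NotFound:
        # Removing a child image can implicitly remove its parent, so a
        # later attempt to remove that parent may 404 - not a failure.
        print("Image {0} is already gone, skipping".format(image_id))

if __name__ == "__main__":
    client = docker.from_env()
    for image in client.images.list():
        safe_rm_image(client, image.id)
{code}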



> OST is broken since this morning - looks like infra issue
> -
>
> Key: OVIRT-2794
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2794
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: infra
>
> The last successful build was today at 08:10:
> Since then all builds fail very early with the error below - which is not
> related to oVirt.
> {code}
> Removing image:
> sha256:f8e5aa8e979155e074411bfef9adade6cdcdf3a5a2eb1d5ad2dbf0288d585ffa,
> force=True
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 222,
> in _raise_for_status
> response.raise_for_status()
>   File "/usr/lib/python3.6/site-packages/requests/models.py", line 893, in
> raise_for_status
> raise HTTPError(http_error_msg, response=self)
> requests.exceptions.HTTPError: 404 Client Error: Not Found for url:
> http+docker://localunixsocket/v1.30/images/sha256:f8e5aa8e979155e074411bfef9adade6cdcdf3a5a2eb1d5ad2dbf0288d585ffa?force=True=False
> During handling of the above exception, another exception occurred:
> Traceback (most recent call last):
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 349, in <module>
> main()
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/

[JIRA] (OVIRT-2794) OST is broken since this morning - looks like infra issue

2019-09-09 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2794:

Description: 
The last successful build was today at 08:10:

Since then all builds fail very early with the error below - which is not
related to oVirt.

{code}
Removing image:
sha256:f8e5aa8e979155e074411bfef9adade6cdcdf3a5a2eb1d5ad2dbf0288d585ffa,
force=True
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 222,
in _raise_for_status
response.raise_for_status()
  File "/usr/lib/python3.6/site-packages/requests/models.py", line 893, in
raise_for_status
raise HTTPError(http_error_msg, response=self)

requests.exceptions.HTTPError: 404 Client Error: Not Found for url:
http+docker://localunixsocket/v1.30/images/sha256:f8e5aa8e979155e074411bfef9adade6cdcdf3a5a2eb1d5ad2dbf0288d585ffa?force=True=False

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File
"/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
line 349, in <module>
main()
  File
"/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
line 37, in main
safe_image_cleanup(client, whitelisted_repos)
  File
"/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
line 107, in safe_image_cleanup
_safe_rm(client, parent)
  File
"/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
line 329, in _safe_rm
client.images.remove(image_id, force=force)
  File "/usr/lib/python3.6/site-packages/docker/models/images.py", line
288, in remove
self.client.api.remove_image(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/docker/utils/decorators.py", line
19, in wrapped
return f(self, resource_id, *args, **kwargs)
  File "/usr/lib/python3.6/site-packages/docker/api/image.py", line 481, in
remove_image
return self._result(res, True)
  File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 228,
in _result
self._raise_for_status(response)
  File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 224,
in _raise_for_status
raise create_api_error_from_http_exception(e)
  File "/usr/lib/python3.6/site-packages/docker/errors.py", line 31, in
create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.NotFound: 404 Client Error: Not Found ("reference does not
exist")

Aborting.

Build step 'Execute shell' marked build as failure
{code}

Failed builds:
#5542 <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5542/> - Sep 5, 2019 3:02 PM
#5541 <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5541/> - Sep 5, 2019 3:02 PM
#5540 <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5540/> - Sep 5, 2019 3:01 PM
#5539 <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5539/> - Sep 5, 2019 2:13 PM
#5538 <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5538/> - Sep 5, 2019 1:58 PM
#5537 <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5537/> - Sep 5, 2019 1:50 PM
#5536 <https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5536/> - Sep 5, 2019 10:21 AM

Job config diff:
<http://jenkins.ovirt.org/job/ovirt-system-tests_manual/jobConfigHistory/showDiffFiles?timestamp1=2019-08-27_12-38-35=2019-09-05_08-22-23>

`jenkins` project is now gated

2019-09-05 Thread Barak Korren
Hi everyone,

Since we merged one of the more significant patches
<https://gerrit.ovirt.org/c/102053/> for enabling patch gating yesterday,
the `jenkins` repository is using the gating system as a showcase for this
functionality.

What this means in practice is that patches should no longer be merged
manually to this repo. Instead, once the code review flag is set to +2 in
Gerrit, and if the 'verified' and 'ci' flags are set to +1 as well, the
gating job will be triggered, and if successful the patch will be merged
automatically.

The gating job for the `jenkins` repo does not run OST. Instead, it runs
the `dummy_suit_master.sh` script that is stored in the repo itself. This
script doesn't do much ATM.
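
For illustration, the query side of the gating flow is simple - a minimal
sketch assuming Gerrit's standard REST API and the usual
Code-Review/Verified/Continuous-Integration label names (this is not the
actual gating code):

import json
import urllib.request

QUERY = ("https://gerrit.ovirt.org/changes/?q=project:jenkins+status:open"
         "+label:Code-Review=2+label:Verified=1"
         "+label:Continuous-Integration=1")

with urllib.request.urlopen(QUERY) as resp:
    body = resp.read().decode("utf-8")

# Gerrit prefixes JSON responses with a )]}' guard line
changes = json.loads(body.split("\n", 1)[1])
for change in changes:
    print("gate candidate:", change["_number"], change["subject"])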

Regards,
Barak.

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/V3XRK6RYNDF2JUDNN75MRWEDZKO6JFXB/


[JIRA] (OVIRT-2788) CI: Add the option to send and email if stage fails

2019-09-05 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39789#comment-39789
 ] 

Barak Korren commented on OVIRT-2788:
-

It's not something you can define ATM.

[~accountid:557058:c4a3432b-f1c1-4620-b53b-c398d6d3a5c2] started implementing 
this when we were working on the general notifications mechanism (that was meant 
to be used for the tag stages as well) for STDCI, but he moved to work on other 
things.

> CI: Add the option to send and email if stage fails
> ---
>
> Key: OVIRT-2788
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2788
> Project: oVirt - virtualization made easy
>  Issue Type: New Feature
>  Components: CI client projects
>Reporter: Bell Levin
>Assignee: infra
>
> Added the poll-upstream-sources stage to be run every night on vdsm [1]. I 
> think it would be useful if an email is sent to selected people if the stage has 
> failed.
> Such an option is available in V1 (namely, the nightly OST network suite), and 
> would help me out if implemented in V2 as well.
> [1] 
> [https://gerrit.ovirt.org/#/c/102901/|https://gerrit.ovirt.org/#/c/102901/]
> FYI [~accountid:557058:866c109f-3951-4680-8dac-b76caf296501] 
> [~accountid:557058:c4a3432b-f1c1-4620-b53b-c398d6d3a5c2] 
> [~accountid:5aa0f39f5a4d022884128a0f] 



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100109)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/6QYIDDIR6FPSOFKR3OAGBE4WSBU73HFA/


[JIRA] (OVIRT-2790) Jenkins: build manual trigger rights for asocha

2019-09-05 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39788#comment-39788
 ] 

Barak Korren commented on OVIRT-2790:
-

[~accountid:557058:7013bb8c-48b2-4b9b-898e-eccf5fb61fad] {{ci test please}} has 
been around for a long, long time - while triggering from the GUI is still 
usable, and not going to be removed any time soon, I personally prefer that 
people stay away from the Jenkins GUI unless they are reading the logs.

> Jenkins: build manual trigger rights for asocha   
> --
>
> Key: OVIRT-2790
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2790
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>  Components: Jenkins Master
>Reporter: Artur Socha
>Assignee: infra
>Priority: Low
>
> Please grant me (jenkins user id: asocha) rights to manually trigger patch 
> builds (to be able to re-run)
> [https://jenkins.ovirt.org/gerrit_manual_trigger/|https://jenkins.ovirt.org/gerrit_manual_trigger/]
> Thanks!



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100109)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/AYQRUBGMGYGYKXG5LFLCXPUKPT2KTG76/


[JIRA] (OVIRT-2790) Jenkins: build manual trigger rights for asocha

2019-09-03 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2790:

Resolution: Fixed
Status: Done  (was: To Do)

> Jenkins: build manual trigger rights for asocha   
> --
>
> Key: OVIRT-2790
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2790
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>  Components: Jenkins Master
>Reporter: Artur Socha
>Assignee: infra
>Priority: Low
>
> Please grant me (jenkins user id: asocha) rights to manually trigger patch 
> builds (to be able to re-run)
> [https://jenkins.ovirt.org/gerrit_manual_trigger/|https://jenkins.ovirt.org/gerrit_manual_trigger/]
> Thanks!



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100108)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/IITE4ESJPBP2EMDO4J6VZA3O66UZO53O/


[JIRA] (OVIRT-2790) Jenkins: build manual trigger rights for asocha

2019-09-03 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39784#comment-39784
 ] 

Barak Korren commented on OVIRT-2790:
-

I see, maybe put a cheat sheet for him somewhere...

Closing the ticket now.

> Jenkins: build manual trigger rights for asocha   
> --
>
> Key: OVIRT-2790
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2790
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>  Components: Jenkins Master
>Reporter: Artur Socha
>Assignee: infra
>Priority: Low
>
> Please grant me (jenkins user id: asocha) rights to manually trigger patch 
> builds (to be able to re-run)
> [https://jenkins.ovirt.org/gerrit_manual_trigger/|https://jenkins.ovirt.org/gerrit_manual_trigger/]
> Thanks!



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100108)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/NDH4GD474377GWMLYG4CKTU5OZMYAUXD/


[JIRA] (OVIRT-2790) Jenkins: build manual trigger rights for asocha

2019-09-03 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39782#comment-39782
 ] 

Barak Korren commented on OVIRT-2790:
-

If all you want to do is to re-run - you can just type {{ci test please}} into 
a comment on your patch.

Where did you find the instructions for using the manual trigger? We might need 
to update them.

> Jenkins: build manual trigger rights for asocha   
> --
>
> Key: OVIRT-2790
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2790
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>  Components: Jenkins Master
>Reporter: Artur Socha
>Assignee: infra
>Priority: Low
>
> Please grant me (jenkins user id: asocha) rights to manually trigger patch 
> builds (to be able to re-run)
> [https://jenkins.ovirt.org/gerrit_manual_trigger/|https://jenkins.ovirt.org/gerrit_manual_trigger/]
> Thanks!



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100108)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/EY7LIAWCKDYXUJOLWAXFIWI5CTCKEPB2/


[JIRA] (OVIRT-2789) Fwd: Any chance you can remove dp...@redhat.com from infra@ovirt.org ?

2019-09-03 Thread Barak Korren (oVirt JIRA)
Barak Korren created OVIRT-2789:
---

 Summary: Fwd: Any chance you can remove dp...@redhat.com from 
infra@ovirt.org ?
 Key: OVIRT-2789
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2789
 Project: oVirt - virtualization made easy
  Issue Type: By-EMAIL
Reporter: Barak Korren
Assignee: infra


Forwarding to infra-support, so Duck will see this.

-- Forwarded message -
From: Yaniv Kaul 
Date: Tue, 3 Sep 2019 at 16:34
Subject: Any chance you can remove dp...@redhat.com from infra@ovirt.org ?
To: Barak Korren 


He no longer works at Red Hat and neither does his manager - and I'm getting
those emails, without the ability to remove him (or her?).

TIA,
Y.



-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100108)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/EOMBEFYDBDL3KLJ4SQCKLZIH6BQNTYZQ/


[JIRA] (OVIRT-1945) Allow to keep running containers in docker_cleanup

2019-09-03 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-1945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-1945:

Components: docker_cleanup.py

> Allow to keep running containers in docker_cleanup
> --
>
> Key: OVIRT-1945
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1945
> Project: oVirt - virtualization made easy
>  Issue Type: Improvement
>  Components: docker_cleanup.py, oVirt CI, Standard CI (Freestyle), 
> Standard CI (Pipelines)
>Reporter: Daniel Belenky
>Assignee: infra
>  Labels: standard-ci
>
> docker_cleanup.py stops all running containers before it removes the images.
> We should make it optional as well as allow whitelisting containers.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100108)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/GBGTZ7LWETEX4WOA4NONRMZUE7HQGZKC/


Re: Removing big (hc-* and metrics) suits from OST's check-patch (How to make OST faster in CI)

2019-09-02 Thread Barak Korren
Adding Evgeny and Shirly who are AFAIK the owners of the metrics suite.

On Sun, 1 Sep 2019 at 17:07, Barak Korren  wrote:

> If you have been using or monitoring any OST suites recently, you may have
> noticed we've been suffering from long delays in allocating CI hardware
> resources for running OST suites. I'd like to briefly discuss the reasons
> behind this, what we are planning to do to resolve this, and the implications
> of those actions for big suite owners.
>
> As you might know, we moved a while ago from running OST suites, each
> on its own dedicated server, to running them inside containers managed by
> OpenShift. That allowed us to run multiple OST suites on the same
> bare-metal host, which in turn increased our overall capacity by 50% while
> still allowing us to free up hardware for accommodating the kubevirt
> project on our CI hardware.
>
> Our infrastructure is currently built in a way where we use the exact same
> POD specification (and therefore resource settings) for all suites. Making
> it more flexible at this point would require significant code changes we
> are not likely to make. What this means is that we need to make sure our
> PODs have enough resources to run the most demanding suites. It also means
> we waste some resources when running less demanding ones.
>
> Given the set of OST suites we have ATM, we sized our PODs to allocate
> 32GiB of RAM. Given the servers we have, this means we can run 15 suites at
> a time in parallel. This was sufficient for a while, but given increasing
> demand, and the expectation for it to increase further once we introduce
> the patch gating features we've been working on, we must find a way to
> significantly increase our suite-running capacity.
>
> We have measured the amount of RAM required by each suite and came to the
> conclusion that for the vast majority of suites, we could settle for PODs
> that allocate only 14GiB of RAM. If we make that change, we would be able
> to run a total of 40 suites at a time, almost tripling our current capacity.
>
> The downside of making this change is that our STDCI V2 infrastructure
> will no longer be able to run suites that require more than 14GiB of RAM.
> This effectively means it would no longer be possible to run these suites
> from OST's check-patch job or from the OST manual job.
>
> The list of relevant suites that would be affected follows; the suite
> owners, as documented in the CI configuration, have been added as "to"
> recipients of the message:
>
>- hc-basic-suite-4.3
>- hc-basic-suite-master
>- metrics-suite-4.3
>
> Since we're aware people would still like to be able to work with the
> bigger suites, we will leverage the nightly suite invocation jobs to enable
> them to be run in the CI infra. We will support the following use cases:
>
>    - *Periodically running the suite on the latest oVirt packages* - this
>    will be done by the nightly job like it is done today
>    - *Running the suite to test changes to the suite's code* - while
>    currently this is done automatically by check-patch, this would have to be
>    done in the future by manually triggering the nightly job and
>    setting the REFSPEC parameter to point to the examined patch
>    - *Triggering the suite manually* - This would be done by triggering
>    the suite-specific nightly job (as opposed to the general OST manual job)
>
>  The patches listed below implement the changes outlined above:
>
>- 102757 <https://gerrit.ovirt.org/102757> nightly-system-tests: big
>suits -> big containers
>- 102771 <https://gerrit.ovirt.org/102771>: stdci: Drop `big` suits
>from check-patch
>
> We know that making the changes we presented will make things a little
> less convenient for users and maintainers of the big suites, but we believe
> the benefits of having vastly increased execution capacity for all other
> suites outweigh those shortcomings.
>
> We would like to hear all relevant comments and questions from the suite
> owners and other interested parties, especially if you think we should not
> carry out the changes we propose.
> Please take the time to respond on this thread, or on the linked patches.
>
> Thanks,
>
> --
> Barak Korren
> RHV DevOps team , RHCE, RHCi
> Red Hat EMEA
> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/OURCWMCLA5KU36S5FJUG75KWJA3QAKLU/


Removing big (hc-* and metrics) suits from OST's check-patch (How to make OST faster in CI)

2019-09-01 Thread Barak Korren
If you have been using or monitoring any OST suites recently, you may have
noticed we've been suffering from long delays in allocating CI hardware
resources for running OST suites. I'd like to briefly discuss the reasons
behind this, what we are planning to do to resolve this, and the implications
of those actions for big suite owners.

As you might know, we moved a while ago from running OST suites, each on
its own dedicated server, to running them inside containers managed by
OpenShift. That allowed us to run multiple OST suites on the same
bare-metal host, which in turn increased our overall capacity by 50% while
still allowing us to free up hardware for accommodating the kubevirt
project on our CI hardware.

Our infrastructure is currently built in a way where we use the exact same
POD specification (and therefore resource settings) for all suites. Making
it more flexible at this point would require significant code changes we
are not likely to make. What this means is that we need to make sure our
PODs have enough resources to run the most demanding suites. It also means
we waste some resources when running less demanding ones.

Given the set of OST suites we have ATM, we sized our PODs to allocate
32GiB of RAM. Given the servers we have, this means we can run 15 suites at
a time in parallel. This was sufficient for a while, but given increasing
demand, and the expectation for it to increase further once we introduce
the patch gating features we've been working on, we must find a way to
significantly increase our suite-running capacity.

We have measured the amount of RAM required by each suite and came to the
conclusion that for the vast majority of suites, we could settle for PODs
that allocate only 14GiB of RAM. If we make that change, we would be able
to run a total of 40 suites at a time, almost tripling our current capacity.

The downside of making this change is that our STDCI V2 infrastructure will
no longer be able to run suites that require more than 14GiB of RAM. This
effectively means it would no longer be possible to run these suites from
OST's check-patch job or from the OST manual job.

The list of relevant suites that would be affected follows; the suite owners,
as documented in the CI configuration, have been added as "to" recipients of
the message:

   - hc-basic-suite-4.3
   - hc-basic-suite-master
   - metrics-suite-4.3

Since we're aware people would still like to be able to work with the
bigger suites, we will leverage the nightly suite invocation jobs to enable
them to be run in the CI infra. We will support the following use cases:

   - *Periodically running the suite on the latest oVirt packages* - this
   will be done by the nightly job like it is done today
   - *Running the suite to test changes to the suite's code* - while
   currently this is done automatically by check-patch, this would have to be
   done in the future by manually triggering the nightly job and setting the
   REFSPEC parameter to point to the examined patch (see the sketch after
   this list)
   - *Triggering the suite manually* - This would be done by triggering the
   suite-specific nightly job (as opposed to the general OST manual job)

 The patches listed below implement the changes outlined above:

   - 102757 <https://gerrit.ovirt.org/102757> nightly-system-tests: big
   suits -> big containers
   - 102771 <https://gerrit.ovirt.org/102771>: stdci: Drop `big` suits from
   check-patch

We know that making the changes we presented will make things a little less
convenient for users and maintainers of the big suites, but we believe the
benefits of having vastly increased execution capacity for all other suites
outweigh those shortcomings.

We would like to hear all relevant comments and questions from the suite
owners and other interested parties, especially if you think we should not
carry out the changes we propose.
Please take the time to respond on this thread, or on the linked patches.

Thanks,

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/2O3MV7X5VB32DG2KJDMJKDYWWSHNBZ3R/


[JIRA] (OVIRT-2566) make network suite blocking

2019-09-01 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39763#comment-39763
 ] 

Barak Korren edited comment on OVIRT-2566 at 9/1/19 7:52 AM:
-

Not for specific patches - but we can easily enable it for ALL patches.

If we only want it for specific patches - we can consider allowing some 
customization of the Zuul configuration at project level to allow running 
specific suites for specific patches, but this will require some code changes in 
several places. We can plan this once we're in production with the current set 
of suites.


was (Author: bkor...@redhat.com):
Not for specific patches - but we can enable it for ALL patches.

We can consider allowing some customization of the Zuul configuration at 
project level to allow running specific suites for specific patches, but this 
will require some code changes in several places. We can plan this once we're 
in production with the current set of suites.

> make network suite blocking
> ---
>
> Key: OVIRT-2566
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2566
> Project: oVirt - virtualization made easy
>  Issue Type: Improvement
>  Components: OST
>Reporter: danken
>Assignee: infra
>Priority: High
>
> The network suite has been executing nightly for almost a year. It has a caring 
> team that tends to it, and it does not have false positives.
> Currently, it has been failing for a week on the 4.2 branch, but that is due to 
> a production code bug.
> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_network-suite-4.2/
> I would like to see this suite escalated to the importance of the basic 
> suite, making it a gating condition for marking a collection of packages as 
> "tested".
> [~gbenh...@redhat.com], what should be done to make this happen?



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100108)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/K4QZIHSNI25ZOBWIRXAP2J6CAYY2SKRW/


[JIRA] (OVIRT-2566) make network suite blocking

2019-09-01 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39763#comment-39763
 ] 

Barak Korren commented on OVIRT-2566:
-

Not for specific patches - but we can enable it for ALL patches.

We can consider allowing some customization of the Zuul configuration at 
project level to allow running specific suites for specific patches, but this 
will require some code changes in several places. We can plan this once we're 
in production with the current set of suites.

> make network suite blocking
> ---
>
> Key: OVIRT-2566
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2566
> Project: oVirt - virtualization made easy
>  Issue Type: Improvement
>  Components: OST
>Reporter: danken
>Assignee: infra
>Priority: High
>
> The network suite has been executing nightly for almost a year. It has a caring 
> team that tends to it, and it does not have false positives.
> Currently, it has been failing for a week on the 4.2 branch, but that is due to 
> a production code bug.
> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_network-suite-4.2/
> I would like to see this suite escalated to the importance of the basic 
> suite, making it a gating condition for marking a collection of packages as 
> "tested".
> [~gbenh...@redhat.com], what should be done to make this happen?



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100108)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/2HSF22UINFLBXE243SP6K7S5ABITO7GQ/


Fwd: Zuul posts non hiddable Gerrit comments

2019-09-01 Thread Barak Korren
What is our current plan for upgrading to Gerrit 2.15+?

As you can see below, one of the nice features it provides is the ability
to hide auto-generated comments from the UI. This will allow us to
de-clutter the comment lists on patches, making them easier to track and
read.
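
For reference, the sending side really is tiny - a rough sketch of tagging a
review comment over SSH, assuming Gerrit 2.15+; the tag value and change
number below are illustrative:

import subprocess

# Comments tagged this way can be hidden by the Gerrit 2.15+ web UI
subprocess.run(
    ["ssh", "-p", "29418", "gerrit.ovirt.org",
     "gerrit", "review", "--tag", "autogenerated:ci",
     "--message", "'Build succeeded.'", "12345,1"],
    check=True)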

-- Forwarded message -
From: Vitaliy Lotorev 
Date: Fri, 30 Aug 2019 at 01:01
Subject: Zuul posts non hiddable Gerrit comments
To: 


Hi,

Starting with at least v2.15, Gerrit is able to differentiate bot comments
and hide them using the 'Show comments' toggle in the web UI. This is achieved
using either tagged comments or robot comments.

A while ago I filed an issue about Zuul not using these techniques [1]
(the ticket has links to the Gerrit docs on tagged and robot comments). As a
result, Zuul comments cannot be hidden in Gerrit.

AFAIK, sending tagged comments via SSH to Gerrit is easy (one line of code
at [2]).

I could try providing a patch for tagged comments.

What Zuul maintainers think about adding support for bot comments?

Should sending tagged comments be done only if the Gerrit version is >= 2.15?

[1] https://storyboard.openstack.org/#!/story/2005661
[2]
https://opendev.org/zuul/zuul/src/branch/master/zuul/driver/gerrit/gerritconnection.py#L860
___
Zuul-discuss mailing list
zuul-disc...@lists.zuul-ci.org
http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/GCFMMA4BVIQGJO72OVUAT36LJAWNRKO6/


Sizing load on oVirt CI's "loader" containers

2019-08-22 Thread Barak Korren
Our STDCI V2 system makes extensive use of the so-called 'loader' PODs:
except for very special cases, every job we have allocates and uses such a
POD at least once. Those PODs are used for, among other things:

   1. Loading Jenkins pipeline groovy code
   2. Loading the scripts from our `jenkins` repo
   3. Running the STDCI V2 logic to parse the YAML files and figure out
   what to run on which resources
   4. Rendering the STDCI V2 graphical report

The PODs are configured to require 500MiB of RAM and run on the zone:ci,
type:vm hosts. This means they end up running on one of the following VMs:

name  memory
shift-n04.phx.ovirt.org   16264540Ki
shift-n05.phx.ovirt.org   16264540Ki
shift-n06.phx.ovirt.org   16264528Ki

So if we make the simple calculation of how many such PODs can run on 16GiB
VMs, we come up with the theoretical result of 96. But running a query like
the following on one of those hosts reveals that we share those hosts with
many other containers:

oc get --all-namespaces pods --field-selector=spec.nodeName=shift-n04.phx.ovirt.org,status.phase==Running
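
As a sanity check, the arithmetic behind the 96 figure - a quick sketch
assuming the 500MiB request is the only scheduling constraint (using the
reported node memory instead of a round 16GiB gives a slightly lower bound):

pod_kib = 500 * 1024                       # loader POD memory request
print((16 * 1024 * 1024 // pod_kib) * 3)   # 96 - with a round 16GiB per VM
print((16264540 // pod_kib) * 3)           # 93 - with the reported node memory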

I suspect allocation of the loader container is starting to be a
bottleneck. I think we might have to either increase the amount of RAM the
VMs have, or make the loader containers require less RAM. But we need to be
able to measure some things better to make a decision. Do we have ongoing
metrics for:

   - What the RAM utilization on the relevant VMs looks like
   - How much RAM is actually used inside the loader containers

WDYT?

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/4SHN7E2MBHUH3UJDHPJNVOTFNFLDQXZU/


[JIRA] (OVIRT-2765) Jenkins builds not running "All nodes of label ‘loader-container’ are offline"

2019-07-29 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39599#comment-39599
 ] 

Barak Korren commented on OVIRT-2765:
-

For some reason the port Jenkins is using to talk to the containers was closed 
- I updated the Jenkins configuration to re-open it, and now we can see working 
containers again.

[~ederevea] do we know why the port was closed all of a sudden?
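
For future triage, the offline agents (and the reason Jenkins records for 
them) can be listed via the computer API - a small sketch, assuming the 
endpoint is readable anonymously:

{code}
import json
import urllib.request

with urllib.request.urlopen(
        "https://jenkins.ovirt.org/computer/api/json") as resp:
    data = json.load(resp)

for node in data["computer"]:
    if node["offline"]:
        print(node["displayName"], "-", node.get("offlineCauseReason", ""))
{code}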

> Jenkins builds not running "All nodes of label ‘loader-container’ are offline"
> --
>
> Key: OVIRT-2765
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2765
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Scott Dickerson
>Assignee: infra
>
> Builds for at least ovirt-engine-ui-extensions [1] and
> ovirt-engine-nodejs-modules [2] are blocked with an erorr like, "All nodes
> of label ‘loader-container’ are offline".  Looks like they're all broken
> [3].
> Please help!
> [1]
> https://jenkins.ovirt.org/job/ovirt-engine-ui-extensions_standard-check-patch/126/console
> [2]
> https://jenkins.ovirt.org/job/ovirt-engine-nodejs-modules_standard-check-patch/74/console
> [3] https://jenkins.ovirt.org/label/loader-container/
> -- 
> Scott Dickerson
> Senior Software Engineer
> RHV-M Engineering - UX Team
> Red Hat, Inc



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100106)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/YXSCREPZY2SKHGWG3LI3KTWRI62X2QGN/


[JIRA] (OVIRT-2443) Make sure that big containers KubVirt CI uses are cached on hosts

2019-07-04 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2443:

Resolution: Fixed
Status: Done  (was: To Do)

The blocking patch was merged a long time ago - and [~eyon...@redhat.com] verified 
that we have the right images in the cache.

> Make sure that big containers KubVirt CI uses are cached on hosts
> -
>
> Key: OVIRT-2443
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2443
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>  Components: CI client projects
>    Reporter: Barak Korren
>Assignee: infra
>
> Make sure that big containers KubVirt CI uses are cached on hosts



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100105)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/IYWF4ZDGBDZZH3BESR7ZWLJR4R7B2PBY/


[JIRA] (OVIRT-914) Better arch support for mock_runner.sh

2019-07-04 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-914:
---
Resolution: Won't Fix
Status: Done  (was: To Do)

Well, we have one place where this would have been useful (we have a custom 
packages file for s390x in the jenkins project).

We managed to do without this so far because of the added flexibility V2 gave 
us.
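
For illustration, the per-arch fallback we would have wanted is tiny - a 
sketch where the naming convention (an arch infix, falling back to the plain 
file) is illustrative rather than anything mock_runner actually implements:

{code}
import os
import platform

def arch_specific(path, arch=None):
    """Prefer an arch-specific variant of a config file, if one exists."""
    arch = arch or platform.machine()  # e.g. "x86_64", "s390x"
    base, ext = os.path.splitext(path)
    candidate = "{0}.{1}{2}".format(base, arch, ext)
    return candidate if os.path.exists(candidate) else path

print(arch_specific("automation/check-patch.packages"))
{code}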

> Better arch support for mock_runner.sh
> --
>
> Key: OVIRT-914
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-914
> Project: oVirt - virtualization made easy
>  Issue Type: Improvement
>  Components: mock_runner
>    Reporter: Barak Korren
>Assignee: infra
>  Labels: mock_runner.sh, standard-ci
>
> We managed to use "{{mock_runner.sh}}" in multi-arch so far because it was 
> flexible enough to allow us to select the chroot file.
> The issue is that mock_runner does not actually *know* the arch we are 
> running on, so we can't:
> * do different mounts per-arch
> * install different packages per-arch
> * have different {{check_*}} scripts per-arch



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100105)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/4ABND6OWIRK7HSX2WDGCRE4KVZ5WQKLJ/


[JIRA] (OVIRT-1396) Add a new 'test-system-artifacts' Standard-CI stage

2019-07-04 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-1396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-1396:

Resolution: Won't Fix
Status: Done  (was: To Do)

This was part of the ovirt-containers CI design, and a longer-term plan to 
enable properly handling node/appliance in the CQ. Since we're now working on 
gating to retire the CQ, we're probably not going to implement this.

> Add a new 'test-system-artifacts' Standard-CI stage
> ---
>
> Key: OVIRT-1396
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1396
> Project: oVirt - virtualization made easy
>  Issue Type: New Feature
>  Components: Standard CI (Pipelines)
>    Reporter: Barak Korren
>Assignee: infra
>
> This is a part of the [containers 
> CI/CD|https://docs.google.com/a/redhat.com/document/d/1mEo3E0kRvlUWT9VaSPKDeG5rXBVXgSaHHQfvZ1la41E/edit?usp=sharing]
>  flow implementation process.
> In order to allow building and testing processes for containers to be 
> triggered after the packages they need are built, we will introduce 
> the "{{test-system-artifacts}}" standard-CI stage.
> This stage will be invoked from the 'experimental' or 'change-queue-tester' 
> pipelines just like the existing OST-based flows.
> In order to provide package and repo information to the std-CI script invoked 
> by this stage we will need to implement OVIRT-1391



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100105)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/ZKYZRHPTWJI4FTGJFWECRMYAKGEN723U/


[JIRA] (OVIRT-2230) Checkout using prow as a GitHub triggering mechanism

2019-07-04 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2230:

Resolution: Won't Fix
Status: Done  (was: To Do)

That was an attempt to integrate Prow with STDCI - but KubeVirt decided to go 
100% Prow.

> Checkout using prow as a GitHub triggering mechanism
> 
>
> Key: OVIRT-2230
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2230
> Project: oVirt - virtualization made easy
>  Issue Type: New Feature
>  Components: Jenkins Master
>    Reporter: Barak Korren
>Assignee: infra
>
> [Prow|https://github.com/kubernetes/test-infra/tree/master/prow] is the 
> service that Kubernetes are using to trigger their CI on GitHub events.
> We should inspect it and see if it would be useful for us to adopt it.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100105)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/WZPK2NSFHADLEAQEF5IBES5D7WG6BOZQ/


[JIRA] (OVIRT-1984) Create "out-of-band" slave cleanup and setup jobs

2019-07-04 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-1984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-1984:

Resolution: Won't Fix
Status: Done  (was: To Do)

> Create "out-of-band" slave cleanup and setup jobs
> -
>
> Key: OVIRT-1984
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1984
> Project: oVirt - virtualization made easy
>  Issue Type: New Feature
>  Components: Jenkins Slaves
>Reporter: Barak Korren
>Assignee: infra
>
> Right now, we run slave cleanup and setup steps as part of every single job we 
> run. This has several shortcomings:
> # It takes a long time from the point a user submits a patch to the point 
> their actual test or build code runs
> # If slave setup or cleanup steps fail - they fail the whole job for the user
> # If slave setup or cleanup steps fail - they can keep failing for many jobs 
> until the CI team intervenes manually
> # There is a "chicken and egg" issue where some parts of the CI code have 
> to run before the slave has been properly cleaned up and configured. This makes 
> it harder to add new slaves to the system.
> Here is a suggested scheme to fix all this (sketched in code below):
> # Label all slaves that should be cleaned up automatically as 'cleanable'. 
> This is mostly to prevent the jobs described here from operating on the 
> master node.
> # Have a "cleanup scheduler" job that finds all slaves labelled as 
> "cleanable" but not as "dirty" or "clean", labels them as "dirty" and runs a 
> cleanup job on them.
> # Have a "cleanup" job that is triggered on particular slaves by the "cleanup 
> scheduler" job, runs cleanup and setup steps on them and then labels them as 
> "clean" and removes the "dirty" label.
> # Have all other CI jobs only use slaves with the "clean" label.
> Notes:
> # The "dirty" label is there to make the "cleanup scheduler" job not trigger 
> twice on the same slave before the"cleanup" job started cleaning it up.
> # Since all slaves used by the real jobs will always be clean - there will no 
> longer be a need to run cleanup steps in the real jobs, thus saving time.
> # If cleanup steps fail - the cleanup job will fail and the slave will not be 
> marked as "clean" so real jobs will never try to use it.
> # To solve the "chicken and egg" issue, the cleanup job probably must be a 
> FreeStyle jobs and all the cleanup and setup code must be embedded into it by 
> JJB. This will probably require a newer version of JJB then what we have so 
> setting OVIRT-1983 as a blocker.
> # There is an issue of how to make CI for this - if cleanup and setup steps 
> are removed from the normal STDCI jobs, they they will not be checked by the 
> "check-patch" job of the "jenkins repo". Here is a suggested scheme to solve 
> this:
> ## Have a way to "loan" slaves from the production jenkins to other Jenkins 
> instances - this could be done by having a job that starts up the Jenkins 
> JNLP client and tells it to connect to another Jenkins master.
> ## As part of the "check-patch" job for the 'jenkins' repo - start a Jenkins 
> master in a container - attach some production slaves to it and have it run 
> cleanup and setup steps on them  
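
A rough sketch of the labelling scheme described above; the {{ci}} client object 
and its methods are imaginary stand-ins, not a real Jenkins API:

{code}
# Hypothetical client: get_nodes() -> iterable of node names,
# get_labels(node) -> set of labels, set_labels(node, labels),
# trigger_job(name, node=...) - all imaginary helpers.

def schedule_cleanups(ci):
    for node in ci.get_nodes():
        labels = ci.get_labels(node)
        if "cleanable" in labels and not labels & {"dirty", "clean"}:
            ci.set_labels(node, labels | {"dirty"})  # avoid double-trigger
            ci.trigger_job("cleanup", node=node)

def mark_clean(ci, node):
    # Run by the "cleanup" job itself, after cleanup/setup steps succeed
    labels = ci.get_labels(node)
    ci.set_labels(node, (labels - {"dirty"}) | {"clean"})
{code}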



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100105)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/YPHRXKQXXYYLTQDOPDQ2VGJBLH52NX4W/


[JIRA] (OVIRT-2178) "Borrow" slaves from CentOS CI

2019-07-04 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2178:

Resolution: Won't Fix
Status: Done  (was: To Do)

> "Borrow" slaves from CentOS CI
> --
>
> Key: OVIRT-2178
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2178
> Project: oVirt - virtualization made easy
>  Issue Type: Epic
>Reporter: Barak Korren
>Assignee: infra
>
> [CentOS CI|https://wiki.centos.org/QaWiki/CI] is a generic shared platform 
> for building CI services for Open Source projects.
> Among other things, CentOS CI makes physical and virtual hosts available for 
> running CI processes via the [Duffy|http://wiki.centos.org/QaWiki/CI/Duffy] 
> system.
> We should make oVirt CI able to consume resources from CentOS CI to augment 
> and someday replace the hardware resources available to oVirt CI.
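
For reference, borrowing a host from Duffy looked roughly like the sketch 
below; the endpoint, parameters and response fields follow Duffy's legacy v1 
API as documented at the time and should be treated as assumptions:

{code}
# Hedged sketch - URL and field names are assumptions based on Duffy's
# legacy v1 API, not a verified client.
import requests

DUFFY_URL = "http://admin.ci.centos.org:8080"

def borrow_hosts(api_key, ver="7", arch="x86_64"):
    resp = requests.get(DUFFY_URL + "/Node/get",
                        params={"key": api_key, "ver": ver, "arch": arch})
    resp.raise_for_status()
    data = resp.json()
    return data["hosts"], data["ssid"]  # hostnames + session id

def return_hosts(api_key, ssid):
    requests.get(DUFFY_URL + "/Node/done",
                 params={"key": api_key, "ssid": ssid}).raise_for_status()
{code}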



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100105)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/OTAGGGVQP77BSCTGUWQPQBOSFSJ35234/


[JIRA] (OVIRT-886) Yum install does not throw error on missing package

2019-07-04 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39489#comment-39489
 ] 

Barak Korren commented on OVIRT-886:


Reopening ticket - issue still relevant.

> Yum install does not throw error on missing package
> ---
>
> Key: OVIRT-886
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-886
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>  Components: mock_runner
>Reporter: Gil Shinar
>Assignee: infra
>  Labels: mock_runner.sh, standard-ci
>
> When running el7 mock on fc24 (not necessarily the issue) and one of the 
> required packages is missing because a repository in .repos hadn't been 
> added, yum will not fail and the package will not be installed
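
A hedged sketch of a guard against this failure mode - install, then re-check 
every requested package - rather than mock_runner's actual implementation:

{code}
# Sketch only: yum can exit 0 while silently skipping a missing named
# package, so verify each one with rpm -q after the install.
# Caveat: rpm -q matches plain package names, not version specs.
import subprocess

def install_and_verify(packages):
    subprocess.check_call(["yum", "-y", "install"] + list(packages))
    missing = [p for p in packages
               if subprocess.call(["rpm", "-q", p]) != 0]
    if missing:
        raise RuntimeError("not installed: " + ", ".join(missing))
{code}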



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100105)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/TL6AL7JA4G4VSONSHSGPK47CHPPPYCFS/


[JIRA] (OVIRT-886) Yum install does not throw error on missing package

2019-07-04 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-886:
---
Status: To Do  (was: In Progress)

> Yum install does not throw error on missing package
> ---
>
> Key: OVIRT-886
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-886
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>  Components: mock_runner
>Reporter: Gil Shinar
>Assignee: infra
>  Labels: mock_runner.sh, standard-ci
>
> When running el7 mock on fc24 (not necessarily the issue) and one of the 
> required packages is missing because a repository in .repos hadn't been 
> added, yum will not fail and the package will not be installed



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100105)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/2AERKKR2TMMRNX2TKUJA3EBCGN5VEYUP/


[JIRA] (OVIRT-886) Yum install does not throw error on missing package

2019-07-04 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren reopened OVIRT-886:


> Yum install does not throw error on missing package
> ---
>
> Key: OVIRT-886
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-886
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>  Components: mock_runner
>Reporter: Gil Shinar
>Assignee: infra
>  Labels: mock_runner.sh, standard-ci
>
> When running el7 mock on fc24 (not necessarily the issue) and one of the 
> required packages is missing because a repository in .repos hadn't been 
> added, yum will not fail and the package will not be installed



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100105)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/CHXU7X7GV6K4LZ5G4FVZ2H7S6555ZH4F/


Re: [CQ]: 101155,3 (otopi) failed "ovirt-master" system tests

2019-06-26 Thread Barak Korren
On Tue, 25 Jun 2019 at 22:48, Yedidyah Bar David  wrote:

> On Tue, Jun 25, 2019 at 10:26 PM oVirt Jenkins  wrote:
> >
> > Change 101155,3 (otopi) is probably the reason behind recent system test
> > failures in the "ovirt-master" change queue and needs to be fixed.
> >
> > This change had been removed from the testing queue. Artifacts build
> from this
> > change will not be released until it is fixed.
> >
> > For further details about the change see:
> > https://gerrit.ovirt.org/#/c/101155/3
>
> It failed on fcraw because rpm needs to be updated there. Opened an
> infra ticket about this.
>

This fixes it:
https://gerrit.ovirt.org/#/c/101061/



> --
> Didi
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/ZACLRXXQYEO47RWANPMGTAKTDLXTZWQ7/
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/QDE5F46XMEJFIWFFUPUKTV6BAJJPRWMD/


[JIRA] (OVIRT-2744) Upgrade oVirt's OpenShift instance to OKD 4.x

2019-06-25 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren reassigned OVIRT-2744:
---

Assignee: Evgheni Dereveanchin  (was: infra)

> Upgrade oVirt's OpenShift instance to OKD 4.x
> -
>
> Key: OVIRT-2744
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2744
> Project: oVirt - virtualization made easy
>  Issue Type: New Feature
>  Components: OpenShift
>    Reporter: Barak Korren
>Assignee: Evgheni Dereveanchin
>
> Reasons to do this:
> * Automated OKD upgrades
> * Support for OpenShift pipelines (Knative/Tekton)
> Issues we need to solve:
> * The OS for the OKD nodes (CentOS CoreOS 8.x not released yet)
> * OKD Installer support for oVirt ([~rgo...@redhat.com]'s patches not merged 
> & released yet)



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100105)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/LZUNPT3REOX4VDEDHY4763QIWZTWCATY/


[JIRA] (OVIRT-2744) Upgrade oVirt's OpenShift instance to OKD 4.x

2019-06-25 Thread Barak Korren (oVirt JIRA)
Barak Korren created OVIRT-2744:
---

 Summary: Upgrade oVirt's OpenShift instance to OKD 4.x
 Key: OVIRT-2744
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2744
 Project: oVirt - virtualization made easy
  Issue Type: New Feature
  Components: OpenShift
Reporter: Barak Korren
Assignee: infra


Reasons to do this:
* Automated OKD upgrades
* Support for OpenShift pipelines (Knative/Tekton)

Issues we need to solve:
* The OS for the OKD nodes (CentOS CoreOS 8.x not released yet)
* OKD Installer support for oVirt ([~rgo...@redhat.com]'s patches not merged & 
released yet)




--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100105)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/UCVETTYPTZSHC74OPC4TPR7WSLX55Y4T/


[JIRA] (OVIRT-2742) ovirt-appliance build failure on missing module urllib.request

2019-06-24 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39425#comment-39425
 ] 

Barak Korren commented on OVIRT-2742:
-

Probably has more to do with changes in CentOS repos, because we did not make any 
change to the CI infra that could cause this kind of impact.

> ovirt-appliance build failure on missing module urllib.request
> --
>
> Key: OVIRT-2742
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2742
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: sbonazzo
>Assignee: infra
>
> 16:16:26 Traceback (most recent call last):
> 16:16:26   File "scripts/create_ova.py", line 4, in <module>
> 16:16:26     from imagefactory_plugins.ovfcommon.ovfcommon import RHEVOVFPackage
> 16:16:26   File "/home/jenkins/workspace/ovirt-appliance_master_build-artifacts-el7-x86_64/ovirt-appliance/engine-appliance/imagefactory/imagefactory_plugins/ovfcommon/ovfcommon.py", line 29, in <module>
> 16:16:26     from imgfac.PersistentImageManager import PersistentImageManager
> 16:16:26   File "/home/jenkins/workspace/ovirt-appliance_master_build-artifacts-el7-x86_64/ovirt-appliance/engine-appliance/imagefactory/imgfac/PersistentImageManager.py", line 17, in <module>
> 16:16:26     from .ApplicationConfiguration import ApplicationConfiguration
> 16:16:26   File "/home/jenkins/workspace/ovirt-appliance_master_build-artifacts-el7-x86_64/ovirt-appliance/engine-appliance/imagefactory/imgfac/ApplicationConfiguration.py", line 24, in <module>
> 16:16:26     import urllib.request
> 16:16:26 ImportError: No module named request
> Seen in
> https://jenkins.ovirt.org/job/ovirt-appliance_master_build-artifacts-el7-x86_64/1205/console
> https://jenkins.ovirt.org/job/ovirt-appliance_4.3_build-artifacts-el7-x86_64/118/console
> https://jenkins.ovirt.org/job/ovirt-appliance_4.2_build-artifacts-el7-x86_64/481/console
> -- 
> Sandro Bonazzola
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA <https://www.redhat.com/>
> sbona...@redhat.com
> <https://www.redhat.com/>
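
The traceback above is the classic Python 2 vs. Python 3 urllib split: 
{{urllib.request}} only exists on Python 3, while the el7 builds run Python 2. 
A hedged compatibility shim (not necessarily how imagefactory resolved it) 
would look like:

{code}
# Sketch of a py2/py3-compatible import; imagefactory's actual fix
# may differ.
try:
    import urllib.request as urllib_request  # Python 3
except ImportError:
    import urllib2 as urllib_request          # Python 2

# urllib_request.urlopen(...) then works on both interpreter versions
{code}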



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100105)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/WTI37VQ2GMIAXRX35KXWN4YZ33OW2OJF/


[JIRA] (OVIRT-2741) Most el7 vdsm build fail to fetch source - bad slave?

2019-06-24 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren reassigned OVIRT-2741:
---

Assignee: Barak Korren  (was: infra)

> Most el7 vdsm build fail to fetch source - bad slave?
> -
>
> Key: OVIRT-2741
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2741
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>    Assignee: Barak Korren
>
> I have seen many such failures today; here are 2 builds - both on the same slave
> (slave issue?)
> 1. [2019-06-20T22:23:12.401Z] Running on node: vm0015.workers-phx.ovirt.org
> (el7 phx nested local_disk)
> https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/6511/pipeline
> 2. [2019-06-20T22:20:50.288Z] Running on node: vm0015.workers-phx.ovirt.org
> (el7 phx nested local_disk)
> https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/6509/pipeline
> Looks like there was no build on this slave in the last 2 days:
> https://jenkins.ovirt.org/computer/vm0015.workers-phx.ovirt.org/builds
> The failure looks like this:
> [2019-06-20T22:23:16.416Z] No credentials specified
> [2019-06-20T22:23:16.440Z] Fetching changes from the remote Git repository
> [2019-06-20T22:23:16.456Z] Cleaning workspace
> [2019-06-20T22:23:16.566Z] ERROR: Error fetching remote repo 'origin'
> [2019-06-20T22:23:16.566Z] hudson.plugins.git.GitException: Failed to fetch
> from https://gerrit.ovirt.org/vdsm
> [2019-06-20T22:23:16.566Z] at
> hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:894)
> [2019-06-20T22:23:16.566Z] at
> hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1161)
> [2019-06-20T22:23:16.566Z] at
> hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)
> [2019-06-20T22:23:16.566Z] at
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:120)
> [2019-06-20T22:23:16.566Z] at
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:90)
> [2019-06-20T22:23:16.566Z] at
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:77)
> [2019-06-20T22:23:16.566Z] at
> org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
> [2019-06-20T22:23:16.566Z] at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> [2019-06-20T22:23:16.566Z] at
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> [2019-06-20T22:23:16.566Z] at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> [2019-06-20T22:23:16.566Z] at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> [2019-06-20T22:23:16.566Z] at java.lang.Thread.run(Thread.java:748)
> [2019-06-20T22:23:16.566Z] Caused by: hudson.plugins.git.GitException:
> Command "git clean -fdx" returned status code 1:
> [2019-06-20T22:23:16.566Z] stdout:
> [2019-06-20T22:23:16.566Z] stderr: warning: failed to remove
> tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage___init___py.html
> [2019-06-20T22:23:16.566Z] warning: failed to remove
> tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_asyncevent_py.html
> [2019-06-20T22:23:16.566Z] warning: failed to remove
> tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_asyncutils_py.html
> [2019-06-20T22:23:16.566Z] warning: failed to remove
> tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_blkdiscard_py.html
> [2019-06-20T22:23:16.566Z] warning: failed to remove
> tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_blockSD_py.html
> [2019-06-20T22:23:16.566Z] warning: failed to remove
> tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_blockVolume_py.html
> [2019-06-20T22:23:16.566Z] warning: failed to remove
> tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_blockdev_py.html
> [2019-06-20T22:23:16.566Z] warning: failed to remove
> tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_check_py.html
> [2019-06-20T22:23:16.566Z] warning: failed to remove
> tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_clusterlock_py.html
> [2019-06-20T22:23:16.

[JIRA] (OVIRT-2741) Most el7 vdsm build fail to fetch source - bad slave?

2019-06-24 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39422#comment-39422
 ] 

Barak Korren edited comment on OVIRT-2741 at 6/24/19 7:39 AM:
--

This is the real cause of failure:

{code}
[2019-06-20T22:20:54.330Z] warning: failed to remove 
tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_asyncevent_py.html

[2019-06-20T22:20:54.330Z] warning: failed to remove 
tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_asyncutils_py.html

[2019-06-20T22:20:54.330Z] warning: failed to remove 
tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_blkdiscard_py.html

[2019-06-20T22:20:54.330Z] warning: failed to remove 
tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_blockSD_py.html
{code}

It means we have root-owned files in the VDSM workspace on the slave - this is 
caused by people manually aborting jobs before the cleanup code can run.

I see the slave is clean now, probably got cleaned by a job for a different 
project that did not stumble on the vdsm files.

Going to close this ticket, since this is not really an ongoing infra issue.



was (Author: bkor...@redhat.com):
This is the real cause of failure:

{code}
[2019-06-20T22:20:54.330Z] warning: failed to remove 
tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_asyncevent_py.html

[2019-06-20T22:20:54.330Z] warning: failed to remove 
tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_asyncutils_py.html

[2019-06-20T22:20:54.330Z] warning: failed to remove 
tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_blkdiscard_py.html

[2019-06-20T22:20:54.330Z] warning: failed to remove 
tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_blockSD_py.html
{code}

It means we have root-owned files win the VDSM workspace on the slave - this is 
caused by people manually aborting jobs before the cleanup code can run.

I see the slave is clean now, probally got cleaned by a job for a different 
project that did not stumble on the vdsm files.

Goind to close this ticket, since this is not really an ongoing infra issue.


> Most el7 vdsm build fail to fetch source - bad slave?
> -
>
> Key: OVIRT-2741
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2741
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: infra
>
> I have seen many such failures today; here are 2 builds - both on the same slave
> (slave issue?)
> 1. [2019-06-20T22:23:12.401Z] Running on node: vm0015.workers-phx.ovirt.org
> (el7 phx nested local_disk)
> https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/6511/pipeline
> 2. [2019-06-20T22:20:50.288Z] Running on node: vm0015.workers-phx.ovirt.org
> (el7 phx nested local_disk)
> https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/6509/pipeline
> Looks like there was no build on this slave in the last 2 days:
> https://jenkins.ovirt.org/computer/vm0015.workers-phx.ovirt.org/builds
> The failure looks like this:
> [2019-06-20T22:23:16.416Z] No credentials specified
> [2019-06-20T22:23:16.440Z] Fetching changes from the remote Git repository
> [2019-06-20T22:23:16.456Z] Cleaning workspace
> [2019-06-20T22:23:16.566Z] ERROR: Error fetching remote repo 'origin'
> [2019-06-20T22:23:16.566Z] hudson.plugins.git.GitException: Failed to fetch
> from https://gerrit.ovirt.org/vdsm
> [2019-06-20T22:23:16.566Z] at
> hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:894)
> [2019-06-20T22:23:16.566Z] at
> hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1161)
> [2019-06-20T22:23:16.566Z] at
> hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)
> [2019-06-20T22:23:16.566Z] at
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:120)
> [2019-06-20T22:23:16.566Z] at
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:90)
> [2019-06-20T22:23:16.566Z] at
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:77)
> [2019-06-20T22:23:16.566Z] at
> org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
> [2019-06-20T22:23:16.566Z] at
> java.util.concurrent.Executors$RunnableAdapter.call(E

[JIRA] (OVIRT-2741) Most el7 vdsm build fail to fetch source - bad slave?

2019-06-24 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39422#comment-39422
 ] 

Barak Korren commented on OVIRT-2741:
-

This is the real cause of failure:

{code}
[2019-06-20T22:20:54.330Z] warning: failed to remove 
tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_asyncevent_py.html

[2019-06-20T22:20:54.330Z] warning: failed to remove 
tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_asyncutils_py.html

[2019-06-20T22:20:54.330Z] warning: failed to remove 
tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_blkdiscard_py.html

[2019-06-20T22:20:54.330Z] warning: failed to remove 
tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_blockSD_py.html
{code}

It means we have root-owned files in the VDSM workspace on the slave - this is 
caused by people manually aborting jobs before the cleanup code can run.

I see the slave is clean now, probably got cleaned by a job for a different 
project that did not stumble on the vdsm files.

Going to close this ticket, since this is not really an ongoing infra issue.
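
A hedged sketch of the kind of workaround implied here - retry the clean as 
root when an unprivileged "git clean -fdx" hits root-owned leftovers; whether 
the slaves allow passwordless sudo for this is an assumption:

{code}
# Sketch only - not the actual stdci cleanup code.
import subprocess

def force_clean(workspace):
    try:
        subprocess.check_call(["git", "clean", "-fdx"], cwd=workspace)
    except subprocess.CalledProcessError:
        # Leftovers owned by root (e.g. from aborted mock runs); retry
        # with elevated privileges - assumes passwordless sudo.
        subprocess.check_call(["sudo", "git", "clean", "-fdx"],
                              cwd=workspace)
{code}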


> Most el7 vdsm build fail to fetch source - bad slave?
> -
>
> Key: OVIRT-2741
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2741
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: infra
>
> I have seen many such failures today; here are 2 builds - both on the same slave
> (slave issue?)
> 1. [2019-06-20T22:23:12.401Z] Running on node: vm0015.workers-phx.ovirt.org
> (el7 phx nested local_disk)
> https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/6511/pipeline
> 2. [2019-06-20T22:20:50.288Z] Running on node: vm0015.workers-phx.ovirt.org
> (el7 phx nested local_disk)
> https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/6509/pipeline
> Looks like there was no build on this slave in the last 2 days:
> https://jenkins.ovirt.org/computer/vm0015.workers-phx.ovirt.org/builds
> The failure looks like this:
> [2019-06-20T22:23:16.416Z] No credentials specified
> [2019-06-20T22:23:16.440Z] Fetching changes from the remote Git repository
> [2019-06-20T22:23:16.456Z] Cleaning workspace
> [2019-06-20T22:23:16.566Z] ERROR: Error fetching remote repo 'origin'
> [2019-06-20T22:23:16.566Z] hudson.plugins.git.GitException: Failed to fetch
> from https://gerrit.ovirt.org/vdsm
> [2019-06-20T22:23:16.566Z] at
> hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:894)
> [2019-06-20T22:23:16.566Z] at
> hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1161)
> [2019-06-20T22:23:16.566Z] at
> hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)
> [2019-06-20T22:23:16.566Z] at
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:120)
> [2019-06-20T22:23:16.566Z] at
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:90)
> [2019-06-20T22:23:16.566Z] at
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:77)
> [2019-06-20T22:23:16.566Z] at
> org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
> [2019-06-20T22:23:16.566Z] at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> [2019-06-20T22:23:16.566Z] at
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> [2019-06-20T22:23:16.566Z] at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> [2019-06-20T22:23:16.566Z] at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> [2019-06-20T22:23:16.566Z] at java.lang.Thread.run(Thread.java:748)
> [2019-06-20T22:23:16.566Z] Caused by: hudson.plugins.git.GitException:
> Command "git clean -fdx" returned status code 1:
> [2019-06-20T22:23:16.566Z] stdout:
> [2019-06-20T22:23:16.566Z] stderr: warning: failed to remove
> tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage___init___py.html
> [2019-06-20T22:23:16.566Z] warning: failed to remove
> tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_asyncevent_py.html
> [2019-06-20T22:23:16.566Z] warning: failed to remove
> tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_asyncutils_py.html
> [2019-06-20T22:23:1
