[JIRA] (OVIRT-3049) ovirt-ansible-collection: Automatic bugzilla linking

2020-10-26 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-3049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=40884#comment-40884
 ] 

Barak Korren commented on OVIRT-3049:
-

It's done by the Gerrit hooks, no? Theoretically, you could run similar code in 
a GitHub Action to get similar results.
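For illustration, here is a minimal sketch of the shell step such a GitHub
Action could run. It assumes the PR's head commit carries a Bug-Url trailer,
that the workflow provides BUGZILLA_API_KEY and PR_URL (both hypothetical
names), and that the standard Bugzilla 5 REST comment endpoint is available:

    # Hypothetical Action step: extract the Bug-Url trailer from the head
    # commit and post a back-link comment on the referenced Bugzilla bug.
    bug_url=$(git log -1 --format=%B | grep -oE 'Bug-Url: *[^ ]+' | head -1 | awk '{print $2}')
    bug_id=${bug_url##*id=}    # ...show_bug.cgi?id=NNNN -> NNNN
    curl -sf -X POST \
        "https://bugzilla.redhat.com/rest/bug/${bug_id}/comment?api_key=${BUGZILLA_API_KEY}" \
        -H 'Content-Type: application/json' \
        -d "{\"comment\": \"Related PR: ${PR_URL}\"}"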

> ovirt-ansible-collection: Automatic bugzilla linking
> 
>
> Key: OVIRT-3049
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-3049
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>Reporter: Yedidyah Bar David
>Assignee: infra
>
> Hi all,
> I now pushed my first PR to ovirt-ansible-collection [1].
> It has a Bug-Url link to [2].
> The bug wasn't updated with a link to the PR. Should it? Please make
> it so that it does.
> Also, perhaps, when creating new projects, do this automatically for
> them (and ask what bugzilla product should be affected).
> Thanks and best regards,
> [1] https://github.com/oVirt/ovirt-ansible-collection/pull/151
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1844965
> -- 
> Didi



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100149)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/FKW3OIPJYN27VVWFIPAHK323S6AS4JPF/


[JIRA] (OVIRT-2924) Update mock_runner to support the new `--isolation` option for `mock`

2020-05-12 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2924:

Summary: Update mock_runner to support the new `--isolation` option for 
`mock`  (was: Fwd: Problem with the mock parameters)

> Update mock_runner to support the new `--isolation` option for `mock`
> -
>
> Key: OVIRT-2924
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2924
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>  Components: mock_runner
>Reporter: Anton Marchukov
>Assignee: infra
>
> Forwarding to infra-support for analysis.
> -- Forwarded message -
> From: Lev Veyde 
> Date: Thu, Apr 23, 2020 at 2:02 PM
> Subject: Problem with the mock parameters
> To: infra 
> Cc: Sandro Bonazzola 
> Hi,
> it looks like we have some issues in the mock parameters on some of the
> slaves
> i.e.:
> https://jenkins.ovirt.org/job/ovirt-release_standard-check-patch/387/consoleFull
> note the ERROR: Option --old-chroot has been deprecated. Use
> --isolation=simple instead.
> Thanks in advance,
> -- 
> Lev Veyde
> Senior Software Engineer, RHCE | RHCVA | MCITP
> Red Hat Israel
> 
> l...@redhat.com | lve...@redhat.com
> 
> TRIED. TESTED. TRUSTED. 
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/YI577UNFDVGDBXIOKOEEI5G5S2UWXDCA/
> -- 
> Anton Marchukov
> Associate Manager - RHV DevOps - Red Hat



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100126)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/HWFUTXO4D3O3GI4YTZ36RY7FCKDVDMCN/


[JIRA] (OVIRT-2924) Fwd: Problem with the mock parameters

2020-05-12 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2924:

Issue Type: Bug  (was: By-EMAIL)

> Fwd: Problem with the mock parameters
> -
>
> Key: OVIRT-2924
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2924
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>  Components: mock_runner
>Reporter: Anton Marchukov
>Assignee: infra
>
> Forwarding to infra-support for analysis.
> -- Forwarded message -
> From: Lev Veyde 
> Date: Thu, Apr 23, 2020 at 2:02 PM
> Subject: Problem with the mock parameters
> To: infra 
> Cc: Sandro Bonazzola 
> Hi,
> it looks like we have some issues in the mock parameters on some of the
> slaves
> i.e.:
> https://jenkins.ovirt.org/job/ovirt-release_standard-check-patch/387/consoleFull
> note the ERROR: Option --old-chroot has been deprecated. Use
> --isolation=simple instead.
> Thanks in advance,
> -- 
> Lev Veyde
> Senior Software Engineer, RHCE | RHCVA | MCITP
> Red Hat Israel
> 
> l...@redhat.com | lve...@redhat.com
> 
> TRIED. TESTED. TRUSTED. 
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/YI577UNFDVGDBXIOKOEEI5G5S2UWXDCA/
> -- 
> Anton Marchukov
> Associate Manager - RHV DevOps - Red Hat



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100126)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/SMSETJV63VXSA4ZTDQVNQIK7E2BNNJ2J/


[JIRA] (OVIRT-2924) Fwd: Problem with the mock parameters

2020-05-12 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2924:

Components: mock_runner

> Fwd: Problem with the mock parameters
> -
>
> Key: OVIRT-2924
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2924
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>  Components: mock_runner
>Reporter: Anton Marchukov
>Assignee: infra
>
> Forwarding to infra-support for analysis.
> -- Forwarded message -
> From: Lev Veyde 
> Date: Thu, Apr 23, 2020 at 2:02 PM
> Subject: Problem with the mock parameters
> To: infra 
> Cc: Sandro Bonazzola 
> Hi,
> it looks like we have some issues in the mock parameters on some of the
> slaves
> i.e.:
> https://jenkins.ovirt.org/job/ovirt-release_standard-check-patch/387/consoleFull
> note the ERROR: Option --old-chroot has been deprecated. Use
> --isolation=simple instead.
> Thanks in advance,
> -- 
> Lev Veyde
> Senior Software Engineer, RHCE | RHCVA | MCITP
> Red Hat Israel
> 
> l...@redhat.com | lve...@redhat.com
> 
> TRIED. TESTED. TRUSTED. 
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/YI577UNFDVGDBXIOKOEEI5G5S2UWXDCA/
> -- 
> Anton Marchukov
> Associate Manager - RHV DevOps - Red Hat



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100126)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/WNVCWMGC7TAS5KRZEPPW2AD7552R2UZY/


[JIRA] (OVIRT-2935) Jenkins succeeded, Code-Review +1

2020-05-06 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=40465#comment-40465
 ] 

Barak Korren commented on OVIRT-2935:
-

It seems the Gerrit trigger configuration was reset to its defaults instead of 
our custom configuration. I restored it manually. 
[~accountid:557058:caa507e4-2696-4f45-8da5-d2585a4bb794], please find out the 
root cause (RC) of this.

> Jenkins succeeded, Code-Review +1
> -
>
> Key: OVIRT-2935
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2935
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: infra
>
> It is nice to get good review from Jeknins, but I don't think it is smart 
> enough
> to do code review, and it should really use Continuous-Integration+1.
> Jenkins CI
> Patch Set 2: Code-Review+1
> Build Successful
> https://jenkins.ovirt.org/job/ovirt-imageio_standard-check-patch/2772/ : 
> SUCCESS
> See https://gerrit.ovirt.org/c/108772/2#message-6225787b_94ed3eac



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100126)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/DHPPL7OKOELLDKVIJ6GGZ4PULRP2FI47/


[JIRA] (OVIRT-2935) Jenkins succeeded, Code-Review +1

2020-05-06 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren reassigned OVIRT-2935:
---

Assignee: Evgheni Dereveanchin  (was: infra)

> Jenkins succeeded, Code-Review +1
> -
>
> Key: OVIRT-2935
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2935
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: Evgheni Dereveanchin
>
> It is nice to get good review from Jeknins, but I don't think it is smart 
> enough
> to do code review, and it should really use Continuous-Integration+1.
> Jenkins CI
> Patch Set 2: Code-Review+1
> Build Successful
> https://jenkins.ovirt.org/job/ovirt-imageio_standard-check-patch/2772/ : 
> SUCCESS
> See https://gerrit.ovirt.org/c/108772/2#message-6225787b_94ed3eac



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100126)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/PXU4FTG3H4L3XEBVAUJJRBKWLGLON5PY/


[JIRA] (OVIRT-2935) Jenkins succeeded, Code-Review +1

2020-05-06 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=40462#comment-40462
 ] 

Barak Korren commented on OVIRT-2935:
-

Looks like someone messed up the Gerrit trigger configuration. 
[~accountid:5aa0f39f5a4d022884128a0f], [~accountid:5dbc31f88704ba0dab2444b3], 
[~accountid:557058:caa507e4-2696-4f45-8da5-d2585a4bb794], 
[~accountid:557058:5ca52a09-2675-4285-a044-12ad20f6166a], any idea?

> Jenkins succeeded, Code-Review +1
> -
>
> Key: OVIRT-2935
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2935
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: infra
>
> It is nice to get good review from Jeknins, but I don't think it is smart 
> enough
> to do code review, and it should really use Continuous-Integration+1.
> Jenkins CI
> Patch Set 2: Code-Review+1
> Build Successful
> https://jenkins.ovirt.org/job/ovirt-imageio_standard-check-patch/2772/ : 
> SUCCESS
> See https://gerrit.ovirt.org/c/108772/2#message-6225787b_94ed3eac



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100126)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/26KSAGWQ3X2Q5TFZ3WQ5K7EJB4DANSA7/


[JIRA] (OVIRT-2924) Fwd: Problem with the mock parameters

2020-04-25 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=40387#comment-40387
 ] 

Barak Korren commented on OVIRT-2924:
-

We need to ensure the new {{--isolation}} option still enables one to use 
chroots with mock, as opposed to systemd-nspawn containers.

If it does support chroot, we can enable it by making {{mock_runner.sh}} check 
the output of {{mock -h}} and use the new option if it's found there. We 
already have code in it that does something like that from the last time the 
mock CLI was changed.
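For illustration, such a feature probe could look like this (a sketch only; 
the variable name is made up and the real mock_runner.sh logic may differ):

    # Probe the installed mock for the new flag; fall back to the old one.
    if mock --help 2>&1 | grep -q -- '--isolation'; then
        chroot_opt='--isolation=simple'
    else
        chroot_opt='--old-chroot'
    fi
    # ...then pass "${chroot_opt}" to the actual mock invocation.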

If chroot support was removed, we have two options:
# Keep an older version of mock somewhere and use that.
# Port our code to work with systemd-nspawn. I used to have an epic about this 
in Jira that detailed what we would need to fix to make that happen. I think 
most of the more serious fixes are in place already anyway.

> Fwd: Problem with the mock parameters
> -
>
> Key: OVIRT-2924
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2924
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Anton Marchukov
>Assignee: infra
>
> Forwarding to infra-support for analysis.
> -- Forwarded message -
> From: Lev Veyde 
> Date: Thu, Apr 23, 2020 at 2:02 PM
> Subject: Problem with the mock parameters
> To: infra 
> Cc: Sandro Bonazzola 
> Hi,
> it looks like we have some issues in the mock parameters on some of the
> slaves
> i.e.:
> https://jenkins.ovirt.org/job/ovirt-release_standard-check-patch/387/consoleFull
> note the ERROR: Option --old-chroot has been deprecated. Use
> --isolation=simple instead.
> Thanks in advance,
> -- 
> Lev Veyde
> Senior Software Engineer, RHCE | RHCVA | MCITP
> Red Hat Israel
> 
> l...@redhat.com | lve...@redhat.com
> 
> TRIED. TESTED. TRUSTED. 
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/YI577UNFDVGDBXIOKOEEI5G5S2UWXDCA/
> -- 
> Anton Marchukov
> Associate Manager - RHV DevOps - Red Hat



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100125)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/2HXIOHRL6YS3KTMDH6JHNPSVM5YRGD6B/


[JIRA] (OVIRT-2917) Vagrant VM container

2020-04-22 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=40355#comment-40355
 ] 

Barak Korren commented on OVIRT-2917:
-

No, no docs ATM.

Also, despite what I wrote before, workloads using such a container may not 
continue to be supported in the near future.

Due to the age of the existing oVirt CI hardware, and the new requirements 
introduced by RHEL 8, the oVirt CI infrastructure is going to undergo some 
massive transformations in the near future. We're basically going to rebuild a 
lot of it from scratch.

In the meantime I'd advise against making any significant changes to existing 
CI workloads or adding any major new ones.

Sorry for the inconvenience.

cc: [~accountid:557058:5ca52a09-2675-4285-a044-12ad20f6166a] 

> Vagrant VM container
> 
>
> Key: OVIRT-2917
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2917
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Ales Musil
>Assignee: infra
>
> Hi,
> is there any documentation on how to use the new container backend with the
> CI container that spawns VM for privileged operations?
> Thank you.
> Regards,
> Ales
> -- 
> Ales Musil
> Software Engineer - RHV Network
> Red Hat EMEA 
> amu...@redhat.comIM: amusil
> 



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100125)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/LUOTSINXSA5VTMWVK6PG7LYHL4FYIUOL/


[JIRA] (OVIRT-2901) some sync_mirror jobs fail due to cache error of another repo

2020-04-06 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=40291#comment-40291
 ] 

Barak Korren commented on OVIRT-2901:
-

I remember talking to [~accountid:557058:caa507e4-2696-4f45-8da5-d2585a4bb794] 
about this before; maybe we already have a ticket on this.

This happens because the {{data/mirrors-reposync.conf}} file is shared between 
all mirrors, so the {{reposync}} command for each mirror tries to fetch 
metadata for all mirrors.

The way to solve this is to change the {{mirror_mgr}} script to remove the 
data for all other mirrors when running for a particular mirror. (We can do 
this to the file directly, because each mirror job clones the {{jenkins}} repo 
on its own and therefore has its own local copy of the file.)

It's possible to do this with a single {{sed}} command; I think we even 
prototyped it at some point, so please look for previous tickets about this.
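Something along these lines (awk is used here instead of the sed one-liner, 
purely for readability; $repo_id is a hypothetical variable naming the mirror 
being synced, and the edit touches only the job's local copy of the file):

    # Keep only the [main] section and the section of the repo being
    # synced; drop every other mirror's section.
    awk -v keep="[${repo_id}]" '
        /^\[/ { in_keep = ($0 == "[main]" || $0 == keep) }
        in_keep
    ' data/mirrors-reposync.conf > conf.tmp &&
    mv conf.tmp data/mirrors-reposync.conf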

> some sync_mirror jobs fail due to cache error of another repo
> -
>
> Key: OVIRT-2901
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2901
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>Reporter: Shlomi Zidmi
>Assignee: infra
>Priority: Low
>
> In the recent days some sync_mirror jobs have been failing with the following 
> error:
> Error setting up repositories: Error making cache directory: 
> /home/jenkins/mirrors_cache/centos-qemu-ev-release-el7 error was: [Errno 17] 
> File exists: '/home/jenkins/mirrors_cache/centos-qemu-ev-release-el7'
> As an example, a build of fedora-updates-fc29 failed with this error:
> https://jenkins.ovirt.org/job/system-sync_mirrors-fedora-updates-fc29-x86_64/1544/console



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100124)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/5OQAHWUCNZ5TWLNBWYLBFM4BCJCC2OX2/


[JIRA] (OVIRT-2899) Use Nginx as Jenkins reverse proxy instead of Apache

2020-04-06 Thread Barak Korren (oVirt JIRA)
Barak Korren created OVIRT-2899:
---

 Summary: Use Nginx as Jenkins reverse proxy instead of Apache
 Key: OVIRT-2899
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2899
 Project: oVirt - virtualization made easy
  Issue Type: Task
  Components: Jenkins Master
Reporter: Barak Korren
Assignee: infra


The following issue: 
[JENKINS-47279|https://issues.jenkins-ci.org/browse/JENKINS-47279] indicates 
there is currently no way to expose the Jenkins CLI over HTTP/HTTPS.

The Jenkins CLI is essential for accessing parts of the Jenkins internal API 
that allow detecting internal requirements for hosts; this, in turn, is 
essential for having a service running inside a firewalled network that 
provides specialized hosts to our public Jenkins instance.

The main use case for this is providing RHEL bare-metal (BM) hosts from the 
Red Hat network to the oVirt Jenkins.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100124)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/ZS36BEM3EBOOTIBXXIGKBLTLVLLDUIDN/


[JIRA] (OVIRT-2898) CQ Changes is empty

2020-04-05 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2898:

Resolution: Won't Fix
Status: Done  (was: To Do)

> CQ Changes is empty
> ---
>
> Key: OVIRT-2898
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2898
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Yedidyah Bar David
>Assignee: infra
>
> Hi all,
> If I look e.g. at:
> https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/22202/
> It says:
> Testing 58 changes:
> But then no list of changes.
> If I then press Changes:
> https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/22202/changes
> I get an empty page (other than red circle and "Changes" title).
> I think this is a bug, which was already supposed to fixed by:
> https://gerrit.ovirt.org/79036
> But for some reason this does not work as expected.
> Please handle.
> Thanks,
> -- 
> Didi



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100124)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/RSVIKX5HKZBZUQXPZMSGZUDSZXTF7D6W/


[JIRA] (OVIRT-2898) CQ Changes is empty

2020-04-05 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=40276#comment-40276
 ] 

Barak Korren commented on OVIRT-2898:
-

The changes screen in Jenkins only shows a log of Git changes that were cloned 
via the Git (or another SCM) plugin. It's only useful for a single-project, 
single-branch job. Having it show the CQ changes would require writing a 
Jenkins plugin that would make Jenkins think the CQ is some kind of SCM (this 
is not trivial; SCM plugins in Jenkins are strange...).

WRT the linked patch, it works as expected:
https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/22202/execution/node/99/log/

This shows up under the "loading change data" stage, both in Blue Ocean and in 
the pipeline view.

Closing NOT A BUG.

> CQ Changes is empty
> ---
>
> Key: OVIRT-2898
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2898
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Yedidyah Bar David
>Assignee: infra
>
> Hi all,
> If I look e.g. at:
> https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/22202/
> It says:
> Testing 58 changes:
> But then no list of changes.
> If I then press Changes:
> https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/22202/changes
> I get an empty page (other than red circle and "Changes" title).
> I think this is a bug, which was already supposed to fixed by:
> https://gerrit.ovirt.org/79036
> But for some reason this does not work as expected.
> Please handle.
> Thanks,
> -- 
> Didi



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100124)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/AIU7RVA4Q7KOJBY3A7E5XBKFGGHFIUCM/


[JIRA] (OVIRT-2861) Requesting permission to ovirt-engine-sdk in Github

2020-02-10 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren reassigned OVIRT-2861:
---

Assignee: Barak Korren  (was: infra)

> Requesting permission to ovirt-engine-sdk in Github
> ---
>
> Key: OVIRT-2861
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2861
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>  Components: GitHub
>Reporter: Ori Liel
>Assignee: Barak Korren
>Priority: High
>
> I'm a maintainer of ovirt-engine-sdk. 
> I'm able to merge changes to this repository through gerrit. 
> However, I don't have direct permissions for this repository on github. 
> Can I please be granted permissions for this repository Github? 
> Thanks



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100119)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/XXTNTNBD4CFSBGZT3BTSBWBZC7LFQR6Z/


[JIRA] (OVIRT-2861) Requesting permission to ovirt-engine-sdk in Github

2020-02-10 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=40085#comment-40085
 ] 

Barak Korren edited comment on OVIRT-2861 at 2/10/20 10:00 AM:
---

I can see now that you don't have the gh-pages branch stored in Gerrit at all.

What you need to do is first push the branch to Gerrit with a "normal" push (I 
assume you have a copy of it from GitHub), and after you've done that you can 
work on it with patches just like you would on any other branch in Gerrit.

The automated mirror process to GitHub should cause the pages to show up for 
your merged changes.


was (Author: bkor...@redhat.com):
I can see now that you don't have the gh-pages stored in Gerrit at all.

What you need to do is first push the branch to Gerrit with a "normal" push (I 
assume you have a copy of it from GitHub), and after you've done that you can 
wort on it with patches just like you would on any other branch in Gerrit.

The automated mirror process to GitHub should cause the pages to show up for 
your merged changes.

> Requesting permission to ovirt-engine-sdk in Github
> ---
>
> Key: OVIRT-2861
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2861
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>  Components: GitHub
>Reporter: Ori Liel
>Assignee: infra
>Priority: High
>
> I'm a maintainer of ovirt-engine-sdk. 
> I'm able to merge changes to this repository through gerrit. 
> However, I don't have direct permissions for this repository on github. 
> Can I please be granted permissions for this repository Github? 
> Thanks



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100119)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/FD2LPVIKZDRR7ETBFFXLCWEXU7DQG5YK/


[JIRA] (OVIRT-2861) Requesting permission to ovirt-engine-sdk in Github

2020-02-10 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=40085#comment-40085
 ] 

Barak Korren commented on OVIRT-2861:
-

I can see now that you don't have the gh-pages stored in Gerrit at all.

What you need to do is first push the branch to Gerrit with a "normal" push (I 
assume you have a copy of it from GitHub), and after you've done that you can 
work on it with patches just like you would on any other branch in Gerrit.

The automated mirror process to GitHub should cause the pages to show up for 
your merged changes.
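Concretely, the "normal" push could look like this (a sketch, assuming your 
local clone has remotes named github and gerrit, and that you have direct-push 
rights on refs/heads/* in Gerrit):

    # Fetch the existing gh-pages branch from GitHub and push it to Gerrit
    # as a regular branch (refs/heads/..., not refs/for/...).
    git fetch github gh-pages
    git push gerrit github/gh-pages:refs/heads/gh-pages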

> Requesting permission to ovirt-engine-sdk in Github
> ---
>
> Key: OVIRT-2861
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2861
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>  Components: GitHub
>Reporter: Ori Liel
>Assignee: infra
>Priority: High
>
> I'm a maintainer of ovirt-engine-sdk. 
> I'm able to merge changes to this repository through gerrit. 
> However, I don't have direct permissions for this repository on github. 
> Can I please be granted permissions for this repository Github? 
> Thanks



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100119)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/I2FSAI6X32WGGP2VO4LDNHLALFT2NQMS/


[JIRA] (OVIRT-2861) Requesting permission to ovirt-engine-sdk in Github

2020-02-10 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=40081#comment-40081
 ] 

Barak Korren edited comment on OVIRT-2861 at 2/10/20 8:04 AM:
--

{quote}
I can’t seem to push to it using gerrit
{quote}

Can you elaborate on what you mean by that? Do you have the branch configured 
on the Gerrit side?

Merging directly in GitHub will definitely break things; enabling it will 
probably be a bad idea.


was (Author: bkor...@redhat.com):
{quote}
seem to push to it using gerrit
{quote}

Can you elaborate what you mean by that? do you have the branch configured on 
the Gerrit side? 

Merging directly in GitHub will definitely break things, enabling will probably 
be a bad idea.

> Requesting permission to ovirt-engine-sdk in Github
> ---
>
> Key: OVIRT-2861
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2861
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>  Components: GitHub
>Reporter: Ori Liel
>Assignee: infra
>Priority: High
>
> I'm a maintainer of ovirt-engine-sdk. 
> I'm able to merge changes to this repository through gerrit. 
> However, I don't have direct permissions for this repository on github. 
> Can I please be granted permissions for this repository Github? 
> Thanks



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100119)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/XJENDMX2JIUOO56KQTFAM5CCDN6LQ4WD/


[JIRA] (OVIRT-2861) Requesting permission to ovirt-engine-sdk in Github

2020-02-10 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=40081#comment-40081
 ] 

Barak Korren commented on OVIRT-2861:
-

{quote}
seem to push to it using gerrit
{quote}

Can you elaborate on what you mean by that? Do you have the branch configured 
on the Gerrit side?

Merging directly in GitHub will definitely break things; enabling it will 
probably be a bad idea.

> Requesting permission to ovirt-engine-sdk in Github
> ---
>
> Key: OVIRT-2861
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2861
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>  Components: GitHub
>Reporter: Ori Liel
>Assignee: infra
>Priority: High
>
> I'm a maintainer of ovirt-engine-sdk. 
> I'm able to merge changes to this repository through gerrit. 
> However, I don't have direct permissions for this repository on github. 
> Can I please be granted permissions for this repository Github? 
> Thanks



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100119)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/L6OLAWNJJXRUMML5CYWKJWTSWTJOX6QE/


[JIRA] (OVIRT-2861) Requesting permission to ovirt-engine-sdk in Github

2020-02-10 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=40081#comment-40081
 ] 

Barak Korren edited comment on OVIRT-2861 at 2/10/20 8:03 AM:
--

{quote}
seem to push to it using gerrit
{quote}

Can you elaborate on what you mean by that? Do you have the branch configured 
on the Gerrit side?

Merging directly in GitHub will definitely break things; enabling it will 
probably be a bad idea.


was (Author: bkor...@redhat.com):
{quote}
seem to push to it using gerrit
{quote}

Can you elaborate what you mean by that, do you have the branch configured on 
the Gerrit side? 

Merging directly in GitHub will definitely break things, enabling will probably 
be a bad idea.

> Requesting permission to ovirt-engine-sdk in Github
> ---
>
> Key: OVIRT-2861
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2861
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>  Components: GitHub
>Reporter: Ori Liel
>Assignee: infra
>Priority: High
>
> I'm a maintainer of ovirt-engine-sdk. 
> I'm able to merge changes to this repository through gerrit. 
> However, I don't have direct permissions for this repository on github. 
> Can I please be granted permissions for this repository Github? 
> Thanks



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100119)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/NKSSNXLWITRFN72QY2FKEI3NPGRTK3D4/


[JIRA] (OVIRT-2843) Whitelisting users for CI on GitHub does not work

2019-11-27 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren reassigned OVIRT-2843:
---

Assignee: Barak Korren  (was: infra)

> Whitelisting users for CI on GitHub does not work
> 
>
> Key: OVIRT-2843
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2843
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>  Components: GitHub
>Reporter: Tomas Golembiovsky
>Assignee: Barak Korren
>
> I've tried multiple times to whitelist multiple different people in CI 
> integration with GitHub by using the `ci add to whitelist` but it does not 
> seem  to work. IIRC it runs the tests but does not whitelist the PR 
> originator.
> By the way there's a typo in Infra documentation in the command -- the word 
> _whitelist_ is misspelled.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100114)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/PJ2XFQB2N35OFFAIGA7I2ABXT3CUVJIO/


[JIRA] (OVIRT-2839) CI jobs failing global_setup - docker service fails.

2019-11-20 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren reassigned OVIRT-2839:
---

Assignee: Barak Korren  (was: infra)

> CI jobs failing global_setup - docker service fails.
> 
>
> Key: OVIRT-2839
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2839
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Bell Levin
>Assignee: Barak Korren
>
> Hey,
> I have had a few consecutive jobs failing on global_setup.
> Here are a few jobs that failed on global_setup:
> https://jenkins.ovirt.org/job/standard-manual-runner/885
> https://jenkins.ovirt.org/job/standard-manual-runner/884
> https://jenkins.ovirt.org/job/standard-manual-runner/883
> https://jenkins.ovirt.org/job/standard-manual-runner/882
> https://jenkins.ovirt.org/job/standard-manual-runner/879
> https://jenkins.ovirt.org/job/standard-manual-runner/878
> Here are a few jobs that succeeded:
> https://jenkins.ovirt.org/job/standard-manual-runner/881
> https://jenkins.ovirt.org/job/standard-manual-runner/880
> https://jenkins.ovirt.org/job/standard-manual-runner/877
> this is currently blocking me as I am trying to push a new job to run the
> network functional tests in a container.
> Please let me know if you need any additional information.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100114)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/2I4IEGNVGUC2B5FXNLDI6WJSLJMFAKQX/


[JIRA] (OVIRT-2832) Add Fedora 31 support in the CI

2019-11-16 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=39982#comment-39982
 ] 

Barak Korren commented on OVIRT-2832:
-

{quote}
Will be even better, thanks!
{quote}

I'm looking forward to your code review on all current and future relevant 
patches.

> Add Fedora 31 support in the CI
> ---
>
> Key: OVIRT-2832
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2832
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: infra
>
> Vdsm want to support only Fedora 31 at this point.
> Having Fedora 31 available, we can simplify the code since we have
> same versions of lvm and other packages in Fedora 31 and RHEL 8.
> We need:
> - mirrors for fedora 31 repos
> - mock config for fedora 31
> Thanks,
> Nir



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100114)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/XO5Z5H7ZPZMFLNW2YOCH6U3SB63CWSBU/


[JIRA] (OVIRT-2832) Add Fedora 31 support in the CI

2019-11-13 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=39971#comment-39971
 ] 

Barak Korren commented on OVIRT-2832:
-

While we could make mock work, it'd be nicer if we could get help to move the 
new container-based backend forward so the upstream container image could be 
used directly instead.

> Add Fedora 31 support in the CI
> ---
>
> Key: OVIRT-2832
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2832
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: infra
>
> Vdsm want to support only Fedora 31 at this point.
> Having Fedora 31 available, we can simplify the code since we have
> same versions of lvm and other packages in Fedora 31 and RHEL 8.
> We need:
> - mirrors for fedora 31 repos
> - mock config for fedora 31
> Thanks,
> Nir



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100114)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/J3KQ7QF55B3INUA6KMPPH36STRXBEQUS/


[JIRA] (OVIRT-2825) reposync on mirrors machine does not support syncing of modules data

2019-11-06 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=39945#comment-39945
 ] 

Barak Korren commented on OVIRT-2825:
-

How is module information stored in the repo? Do consider that we generate our 
own metadata for the mirrors and do not copy it from the distro repos.

Maybe we could just generate the module metadata ourselves, or copy some files 
from the distro repos, instead of having reposync do it for us...
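A hedged sketch of the copy approach (the paths are placeholders, and it 
assumes we keep a copy of the distro's repodata around):

    # Rebuild our own metadata, then inject the distro's modules document
    # into it; modifyrepo_c stores it under the "modules" mdtype.
    createrepo_c "${mirror_dir}"
    modifyrepo_c --mdtype=modules \
        "${distro_repodata}"/*-modules.yaml.gz \
        "${mirror_dir}/repodata/"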

> reposync on mirrors machine does not support syncing of modules data
> 
>
> Key: OVIRT-2825
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2825
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>Reporter: Emil Natan
>Assignee: infra
>
> That's something added to the reposync version coming with CentOS8, so we 
> should probably upgrade to that version.
> [~accountid:557058:5ca52a09-2675-4285-a044-12ad20f6166a]
> [~accountid:557058:caa507e4-2696-4f45-8da5-d2585a4bb794]



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100114)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/DZSOMKYEULYTMJO7UBFJNAPZP3BRGUKT/


[JIRA] (OVIRT-2814) reposync fails syncing repos on completely unrelated paths (concurrency issue?)

2019-10-22 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=39918#comment-39918
 ] 

Barak Korren edited comment on OVIRT-2814 at 10/22/19 1:54 PM:
---

{quote}
correct me if I’m worng but no real fix has been included right? lock file has 
just been removed and we’re going to see this happen again sooner or later.
{quote}

OK, I see what you're talking about now; I missed the fact that the failure 
was in a job for a different mirror.

To solve this, we need to make sure that when we sync repo X, the {{reposync}} 
command only sees the part of "{{mirrors-reposync.conf}}" that is relevant for 
repo X. It should probably be doable by adding a {{sed}} command to 
{{mirror_mgr.sh}} that will remove the unneeded repos from the file before 
running {{reposync}}.

[~accountid:5b25ec3f8b08c009c48b25c9], this is exactly what we talked about 
before you left today...


was (Author: bkor...@redhat.com):
{quote}
correct me if I’m worng but no real fix has been included right? lock file has 
just been removed and we’re going to see this happen again sooner or later.
{quote}

Ok, I see whay you talk about now, I missed the fact that the failure was in a 
job for a different mirror.

To solve this, we need to make sure that when we sync repo X, the {{reposync}} 
command only sees the part of "{{mirrors-reposync.conf}}" that is relevant for 
repo X. It should probably be doable by adding a {{sed}} command to 
{{mirror_mgr.sh}} that will remove the unneeded repos from the file before 
running {{reposync}}.

[~accountid:5b25ec3f8b08c009c48b25c9] this is exactly what we've talked about 
before you left today...

> reposync fails syncing repos on completely unrelated paths (concurrency 
> issue?)
> ---
>
> Key: OVIRT-2814
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2814
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>  Components: CI Mirrors
>Reporter: Sandro Bonazzola
>Assignee: infra
>
> About job
> https://jenkins.ovirt.org/job/system-sync_mirrors-centos-kvm-common-el7-x86_64/2934/console
> As you can see below
> 05:35:07 ++ reposync --config=jenkins/data/mirrors-reposync.conf
> --repoid=centos-kvm-common-el7 --arch=x86_64
> --cachedir=/home/jenkins/mirrors_cache
> --download_path=/var/www/html/repos/yum/centos-kvm-common-el7/base
> --norepopath --newest-only --urls --quiet
> the sync is related to "/var/www/html/repos/yum/centos-kvm-common-el7/base"
> using as cache directory "/home/jenkins/mirrors_cache"
> but in "/home/jenkins/mirrors_cache" there's
> "/home/jenkins/mirrors_cache/fedora-base-fc29":
> 05:35:16 Traceback (most recent call last):
> 05:35:16   File "/usr/bin/reposync", line 373, in 
> 05:35:16 main()
> 05:35:16   File "/usr/bin/reposync", line 185, in main
> 05:35:16 my.doRepoSetup()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line
> 681, in doRepoSetup
> 05:35:16 return self._getRepos(thisrepo, True)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line
> 721, in _getRepos
> 05:35:16 self._repos.doSetup(thisrepo)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 157,
> in doSetup
> 05:35:16 self.retrieveAllMD()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 88,
> in retrieveAllMD
> 05:35:16 dl = repo._async and repo._commonLoadRepoXML(repo)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 1468, in _commonLoadRepoXML
> 05:35:16 local  = self.cachedir + '/repomd.xml'
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 777, in 
> 05:35:16 cachedir = property(lambda self: self._dirGetAttr('cachedir'))
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 760, in _dirGetAttr
> 05:35:16 self.dirSetup()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 738, in dirSetup
> 05:35:16 self._dirSetupMkdir_p(dir)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 715, in _dirSetupMkdir_p
> 05:35:16 raise Errors.RepoError, msg
> 05:35:16 yum.Errors.RepoError: Error making cache directory:
> /home/jenkins/mirrors_cache/fedora-base-fc29 error was: [Errno 17] File
> exists: '/home/jenkins/mirrors_cache/fedora-base-fc29'
> this looks like a concurrency issue, a lock should be used in order to
> prevent two instances to use the same cache directory at the same time or
> use separate cache directories for different repos.
> -- 
> Sandro Bonazzola



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100114)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org

[JIRA] (OVIRT-2814) reposync fails syncing repos on completely unrelated paths (concurrency issue?)

2019-10-22 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2814:

Components: CI Mirrors

> reposync fails syncing repos on completely unrelated paths (concurrency 
> issue?)
> ---
>
> Key: OVIRT-2814
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2814
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>  Components: CI Mirrors
>Reporter: Sandro Bonazzola
>Assignee: infra
>
> About job
> https://jenkins.ovirt.org/job/system-sync_mirrors-centos-kvm-common-el7-x86_64/2934/console
> As you can see below
> 05:35:07 ++ reposync --config=jenkins/data/mirrors-reposync.conf
> --repoid=centos-kvm-common-el7 --arch=x86_64
> --cachedir=/home/jenkins/mirrors_cache
> --download_path=/var/www/html/repos/yum/centos-kvm-common-el7/base
> --norepopath --newest-only --urls --quiet
> the sync is related to "/var/www/html/repos/yum/centos-kvm-common-el7/base"
> using as cache directory "/home/jenkins/mirrors_cache"
> but in "/home/jenkins/mirrors_cache" there's
> "/home/jenkins/mirrors_cache/fedora-base-fc29":
> 05:35:16 Traceback (most recent call last):
> 05:35:16   File "/usr/bin/reposync", line 373, in 
> 05:35:16 main()
> 05:35:16   File "/usr/bin/reposync", line 185, in main
> 05:35:16 my.doRepoSetup()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line
> 681, in doRepoSetup
> 05:35:16 return self._getRepos(thisrepo, True)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line
> 721, in _getRepos
> 05:35:16 self._repos.doSetup(thisrepo)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 157,
> in doSetup
> 05:35:16 self.retrieveAllMD()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 88,
> in retrieveAllMD
> 05:35:16 dl = repo._async and repo._commonLoadRepoXML(repo)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 1468, in _commonLoadRepoXML
> 05:35:16 local  = self.cachedir + '/repomd.xml'
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 777, in 
> 05:35:16 cachedir = property(lambda self: self._dirGetAttr('cachedir'))
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 760, in _dirGetAttr
> 05:35:16 self.dirSetup()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 738, in dirSetup
> 05:35:16 self._dirSetupMkdir_p(dir)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 715, in _dirSetupMkdir_p
> 05:35:16 raise Errors.RepoError, msg
> 05:35:16 yum.Errors.RepoError: Error making cache directory:
> /home/jenkins/mirrors_cache/fedora-base-fc29 error was: [Errno 17] File
> exists: '/home/jenkins/mirrors_cache/fedora-base-fc29'
> this looks like a concurrency issue, a lock should be used in order to
> prevent two instances to use the same cache directory at the same time or
> use separate cache directories for different repos.
> -- 
> Sandro Bonazzola



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100114)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/GP27PBBAXR56I7POEAVIMGUBC75YCXW4/


[JIRA] (OVIRT-2814) reposync fails syncing repos on completely unrelated paths (concurrency issue?)

2019-10-22 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren reopened OVIRT-2814:
-

> reposync fails syncing repos on completely unrelated paths (concurrency 
> issue?)
> ---
>
> Key: OVIRT-2814
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2814
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>  Components: CI Mirrors
>Reporter: Sandro Bonazzola
>Assignee: infra
>
> About job
> https://jenkins.ovirt.org/job/system-sync_mirrors-centos-kvm-common-el7-x86_64/2934/console
> As you can see below
> 05:35:07 ++ reposync --config=jenkins/data/mirrors-reposync.conf
> --repoid=centos-kvm-common-el7 --arch=x86_64
> --cachedir=/home/jenkins/mirrors_cache
> --download_path=/var/www/html/repos/yum/centos-kvm-common-el7/base
> --norepopath --newest-only --urls --quiet
> the sync is related to "/var/www/html/repos/yum/centos-kvm-common-el7/base"
> using as cache directory "/home/jenkins/mirrors_cache"
> but in "/home/jenkins/mirrors_cache" there's
> "/home/jenkins/mirrors_cache/fedora-base-fc29":
> 05:35:16 Traceback (most recent call last):
> 05:35:16   File "/usr/bin/reposync", line 373, in 
> 05:35:16 main()
> 05:35:16   File "/usr/bin/reposync", line 185, in main
> 05:35:16 my.doRepoSetup()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line
> 681, in doRepoSetup
> 05:35:16 return self._getRepos(thisrepo, True)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line
> 721, in _getRepos
> 05:35:16 self._repos.doSetup(thisrepo)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 157,
> in doSetup
> 05:35:16 self.retrieveAllMD()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 88,
> in retrieveAllMD
> 05:35:16 dl = repo._async and repo._commonLoadRepoXML(repo)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 1468, in _commonLoadRepoXML
> 05:35:16 local  = self.cachedir + '/repomd.xml'
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 777, in 
> 05:35:16 cachedir = property(lambda self: self._dirGetAttr('cachedir'))
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 760, in _dirGetAttr
> 05:35:16 self.dirSetup()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 738, in dirSetup
> 05:35:16 self._dirSetupMkdir_p(dir)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 715, in _dirSetupMkdir_p
> 05:35:16 raise Errors.RepoError, msg
> 05:35:16 yum.Errors.RepoError: Error making cache directory:
> /home/jenkins/mirrors_cache/fedora-base-fc29 error was: [Errno 17] File
> exists: '/home/jenkins/mirrors_cache/fedora-base-fc29'
> this looks like a concurrency issue, a lock should be used in order to
> prevent two instances to use the same cache directory at the same time or
> use separate cache directories for different repos.
> -- 
> Sandro Bonazzola



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100114)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/2YS4H2RQ55P64QSGRCUJPLTCJCJ44CFP/


[JIRA] (OVIRT-2814) reposync fails syncing repos on completely unrelated paths (concurrency issue?)

2019-10-22 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=39918#comment-39918
 ] 

Barak Korren commented on OVIRT-2814:
-

{quote}
correct me if I’m worng but no real fix has been included right? lock file has 
just been removed and we’re going to see this happen again sooner or later.
{quote}

OK, I see what you're talking about now; I missed the fact that the failure 
was in a job for a different mirror.

To solve this, we need to make sure that when we sync repo X, the {{reposync}} 
command only sees the part of "{{mirrors-reposync.conf}}" that is relevant for 
repo X. It should probably be doable by adding a {{sed}} command to 
{{mirror_mgr.sh}} that will remove the unneeded repos from the file before 
running {{reposync}}.

[~accountid:5b25ec3f8b08c009c48b25c9], this is exactly what we talked about 
before you left today...

> reposync fails syncing repos on completely unrelated paths (concurrency 
> issue?)
> ---
>
> Key: OVIRT-2814
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2814
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Sandro Bonazzola
>Assignee: infra
>
> About job
> https://jenkins.ovirt.org/job/system-sync_mirrors-centos-kvm-common-el7-x86_64/2934/console
> As you can see below
> 05:35:07 ++ reposync --config=jenkins/data/mirrors-reposync.conf
> --repoid=centos-kvm-common-el7 --arch=x86_64
> --cachedir=/home/jenkins/mirrors_cache
> --download_path=/var/www/html/repos/yum/centos-kvm-common-el7/base
> --norepopath --newest-only --urls --quiet
> the sync is related to "/var/www/html/repos/yum/centos-kvm-common-el7/base"
> using as cache directory "/home/jenkins/mirrors_cache"
> but in "/home/jenkins/mirrors_cache" there's
> "/home/jenkins/mirrors_cache/fedora-base-fc29":
> 05:35:16 Traceback (most recent call last):
> 05:35:16   File "/usr/bin/reposync", line 373, in 
> 05:35:16 main()
> 05:35:16   File "/usr/bin/reposync", line 185, in main
> 05:35:16 my.doRepoSetup()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line
> 681, in doRepoSetup
> 05:35:16 return self._getRepos(thisrepo, True)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line
> 721, in _getRepos
> 05:35:16 self._repos.doSetup(thisrepo)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 157,
> in doSetup
> 05:35:16 self.retrieveAllMD()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 88,
> in retrieveAllMD
> 05:35:16 dl = repo._async and repo._commonLoadRepoXML(repo)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 1468, in _commonLoadRepoXML
> 05:35:16 local  = self.cachedir + '/repomd.xml'
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 777, in 
> 05:35:16 cachedir = property(lambda self: self._dirGetAttr('cachedir'))
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 760, in _dirGetAttr
> 05:35:16 self.dirSetup()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 738, in dirSetup
> 05:35:16 self._dirSetupMkdir_p(dir)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 715, in _dirSetupMkdir_p
> 05:35:16 raise Errors.RepoError, msg
> 05:35:16 yum.Errors.RepoError: Error making cache directory:
> /home/jenkins/mirrors_cache/fedora-base-fc29 error was: [Errno 17] File
> exists: '/home/jenkins/mirrors_cache/fedora-base-fc29'
> this looks like a concurrency issue, a lock should be used in order to
> prevent two instances from using the same cache directory at the same time, or
> use separate cache directories for different repos.
> -- 
> Sandro Bonazzola



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100114)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/YTD4KI6LRDOZPSYCSNYVGXNLTVYSSYF7/


[JIRA] (OVIRT-2814) reposync fails syncing repos on completely unrelated paths (concurrency issue?)

2019-10-22 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39912#comment-39912
 ] 

Barak Korren commented on OVIRT-2814:
-

The ticket is from 5 days ago, and the last run seems to have passed: 
https://jenkins.ovirt.org/job/system-sync_mirrors-centos-kvm-common-el7-x86_64/2950/

> reposync fails syncing repos on completely unrelated paths (concurrency 
> issue?)
> ---
>
> Key: OVIRT-2814
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2814
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Sandro Bonazzola
>Assignee: infra
>
> About job
> https://jenkins.ovirt.org/job/system-sync_mirrors-centos-kvm-common-el7-x86_64/2934/console
> As you can see below
> 05:35:07 ++ reposync --config=jenkins/data/mirrors-reposync.conf
> --repoid=centos-kvm-common-el7 --arch=x86_64
> --cachedir=/home/jenkins/mirrors_cache
> --download_path=/var/www/html/repos/yum/centos-kvm-common-el7/base
> --norepopath --newest-only --urls --quiet
> the sync is related to "/var/www/html/repos/yum/centos-kvm-common-el7/base"
> using as cache directory "/home/jenkins/mirrors_cache"
> but in "/home/jenkins/mirrors_cache" there's
> "/home/jenkins/mirrors_cache/fedora-base-fc29":
> 05:35:16 Traceback (most recent call last):
> 05:35:16   File "/usr/bin/reposync", line 373, in 
> 05:35:16 main()
> 05:35:16   File "/usr/bin/reposync", line 185, in main
> 05:35:16 my.doRepoSetup()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line
> 681, in doRepoSetup
> 05:35:16 return self._getRepos(thisrepo, True)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line
> 721, in _getRepos
> 05:35:16 self._repos.doSetup(thisrepo)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 157,
> in doSetup
> 05:35:16 self.retrieveAllMD()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 88,
> in retrieveAllMD
> 05:35:16 dl = repo._async and repo._commonLoadRepoXML(repo)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 1468, in _commonLoadRepoXML
> 05:35:16 local  = self.cachedir + '/repomd.xml'
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 777, in 
> 05:35:16 cachedir = property(lambda self: self._dirGetAttr('cachedir'))
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 760, in _dirGetAttr
> 05:35:16 self.dirSetup()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 738, in dirSetup
> 05:35:16 self._dirSetupMkdir_p(dir)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 715, in _dirSetupMkdir_p
> 05:35:16 raise Errors.RepoError, msg
> 05:35:16 yum.Errors.RepoError: Error making cache directory:
> /home/jenkins/mirrors_cache/fedora-base-fc29 error was: [Errno 17] File
> exists: '/home/jenkins/mirrors_cache/fedora-base-fc29'
> this looks like a concurrency issue, a lock should be used in order to
> prevent two instances from using the same cache directory at the same time, or
> use separate cache directories for different repos.
> -- 
> Sandro Bonazzola



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100113)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/QVJIPHKEFORATIJZEXZUOFOT42FFS4UU/


[JIRA] (OVIRT-2814) reposync fails syncing repos on completely unrelated paths (concurrency issue?)

2019-10-22 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39909#comment-39909
 ] 

Barak Korren commented on OVIRT-2814:
-

{code}
this looks like a concurrency issue, a lock should be used in order to
prevent two instances from using the same cache directory at the same time, or
use separate cache directories for different repos.
{code}

As you can see from the path name, the cache directory is unique per repo, and 
there is only one update job per repo, so two syncs of the same repo never run 
concurrently. In other words, the last option you suggested above is already in 
place.

This might be a cleanup issue, where some failure scenario leaves lock files 
behind.
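
If we do end up needing a lock, an {{flock}}-based one would sidestep the 
stale-file problem, because the kernel drops the lock automatically when the 
process exits, however it died. A minimal sketch, with an illustrative lock 
path and repo name:

{code}
#!/bin/bash
# Sketch: serialize syncs that share a cache directory using flock(1).
# The lock is tied to an open file descriptor, so a crashed job cannot
# leave a stale lock behind.
REPO_ID="centos-kvm-common-el7"
LOCK_FILE="/var/lock/mirror-cache-${REPO_ID}.lock"

(
    flock --exclusive --timeout 600 9 || {
        echo "timed out waiting for the cache lock for ${REPO_ID}" >&2
        exit 1
    }
    reposync --config=jenkins/data/mirrors-reposync.conf \
        --repoid="$REPO_ID" --arch=x86_64 \
        --cachedir=/home/jenkins/mirrors_cache
) 9>"$LOCK_FILE"
{code}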

> reposync fails syncing repos on completely unrelated paths (concurrency 
> issue?)
> ---
>
> Key: OVIRT-2814
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2814
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Sandro Bonazzola
>Assignee: infra
>
> About job
> https://jenkins.ovirt.org/job/system-sync_mirrors-centos-kvm-common-el7-x86_64/2934/console
> As you can see below
> 05:35:07 ++ reposync --config=jenkins/data/mirrors-reposync.conf
> --repoid=centos-kvm-common-el7 --arch=x86_64
> --cachedir=/home/jenkins/mirrors_cache
> --download_path=/var/www/html/repos/yum/centos-kvm-common-el7/base
> --norepopath --newest-only --urls --quiet
> the sync is related to "/var/www/html/repos/yum/centos-kvm-common-el7/base"
> using as cache directory "/home/jenkins/mirrors_cache"
> but in "/home/jenkins/mirrors_cache" there's
> "/home/jenkins/mirrors_cache/fedora-base-fc29":
> 05:35:16 Traceback (most recent call last):
> 05:35:16   File "/usr/bin/reposync", line 373, in 
> 05:35:16 main()
> 05:35:16   File "/usr/bin/reposync", line 185, in main
> 05:35:16 my.doRepoSetup()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line
> 681, in doRepoSetup
> 05:35:16 return self._getRepos(thisrepo, True)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/__init__.py", line
> 721, in _getRepos
> 05:35:16 self._repos.doSetup(thisrepo)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 157,
> in doSetup
> 05:35:16 self.retrieveAllMD()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/repos.py", line 88,
> in retrieveAllMD
> 05:35:16 dl = repo._async and repo._commonLoadRepoXML(repo)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 1468, in _commonLoadRepoXML
> 05:35:16 local  = self.cachedir + '/repomd.xml'
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 777, in 
> 05:35:16 cachedir = property(lambda self: self._dirGetAttr('cachedir'))
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 760, in _dirGetAttr
> 05:35:16 self.dirSetup()
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 738, in dirSetup
> 05:35:16 self._dirSetupMkdir_p(dir)
> 05:35:16   File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line
> 715, in _dirSetupMkdir_p
> 05:35:16 raise Errors.RepoError, msg
> 05:35:16 yum.Errors.RepoError: Error making cache directory:
> /home/jenkins/mirrors_cache/fedora-base-fc29 error was: [Errno 17] File
> exists: '/home/jenkins/mirrors_cache/fedora-base-fc29'
> this looks like a concurrency issue, a lock should be used in order to
> prevent two instances from using the same cache directory at the same time, or
> use separate cache directories for different repos.
> -- 
> Sandro Bonazzola



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100113)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/WE5IYWTQQVIDGTUHUMAKRNLX7GT552RO/


[JIRA] (OVIRT-2811) s390x tests failing on trying to use `sudo` during artifact collection

2019-10-07 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren reassigned OVIRT-2811:
---

Assignee: Barak Korren  (was: infra)

> s390x tests failing on trying to use `sudo` during artifact collection
> --
>
> Key: OVIRT-2811
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2811
> Project: oVirt - virtualization made easy
>  Issue Type: Outage
>  Components: Jenkins Slaves, mock_runner
>Reporter: Sandro Bonazzola
>Assignee: Barak Korren
>Priority: High
>  Labels: s390x
>
> Jenkins is failing on s390x with:
> [2019-10-07T11:45:31.272Z] + sudo -n chown -R ovirt:ovirt
> /home/ovirt/workspace/ovirt-release_standard-check-patch/check-patch.fc30.s390x
> [2019-10-07T11:45:31.272Z] sudo: sorry, you must have a tty to run sudo
> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-release_standard-check-patch/detail/ovirt-release_standard-check-patch/151/pipeline/140
> -- 
> Sandro Bonazzola
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA 
> sbona...@redhat.com
> *Red Hat respects your work life balance.
> Therefore there is no need to answer this email out of your office hours.*



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100111)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/B6ORDHIXINK35JF3PJKO33EW7QNWCBAD/


[JIRA] (OVIRT-2811) s390x tests failing on trying to use `sudo` during artifact collection

2019-10-07 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2811:

Summary: s390x tests failing on trying to use `sudo` during artifact 
collection  (was: s390x tests failing on missing tty)

> s390x tests failing on trying to use `sudo` during artifact collection
> --
>
> Key: OVIRT-2811
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2811
> Project: oVirt - virtualization made easy
>  Issue Type: Outage
>  Components: Jenkins Slaves, mock_runner
>Reporter: Sandro Bonazzola
>Assignee: infra
>Priority: High
>  Labels: s390x
>
> Jenkins is failing on s390x with:
> [2019-10-07T11:45:31.272Z] + sudo -n chown -R ovirt:ovirt
> /home/ovirt/workspace/ovirt-release_standard-check-patch/check-patch.fc30.s390x
> [2019-10-07T11:45:31.272Z] sudo: sorry, you must have a tty to run sudo
> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-release_standard-check-patch/detail/ovirt-release_standard-check-patch/151/pipeline/140
> -- 
> Sandro Bonazzola
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA 
> sbona...@redhat.com
> *Red Hat respects your work life balance.
> Therefore there is no need to answer this email out of your office hours.*



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100111)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/B7TTNIPJDFQSVX3TE7U7S23GPEVGGNFT/


[JIRA] (OVIRT-2811) s390x tests failing on missing tty

2019-10-07 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2811:

Issue Type: Outage  (was: By-EMAIL)

> s390x tests failing on missing tty
> --
>
> Key: OVIRT-2811
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2811
> Project: oVirt - virtualization made easy
>  Issue Type: Outage
>  Components: Jenkins Slaves, mock_runner
>Reporter: Sandro Bonazzola
>Assignee: infra
>Priority: High
>  Labels: s390x
>
> Jenkins is failing on s390x with:
> [2019-10-07T11:45:31.272Z] + sudo -n chown -R ovirt:ovirt
> /home/ovirt/workspace/ovirt-release_standard-check-patch/check-patch.fc30.s390x
> [2019-10-07T11:45:31.272Z] sudo: sorry, you must have a tty to run sudo
> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-release_standard-check-patch/detail/ovirt-release_standard-check-patch/151/pipeline/140
> -- 
> Sandro Bonazzola
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA 
> sbona...@redhat.com
> *Red Hat respects your work life balance.
> Therefore there is no need to answer this email out of your office hours.*



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100111)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/XVIT72O5LJYMQOSAIQCQOQFBD3EVYEKP/


[JIRA] (OVIRT-2811) s390x tests failing on missing tty

2019-10-07 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2811:

Labels: s390x  (was: )

> s390x tests failing on missing tty
> --
>
> Key: OVIRT-2811
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2811
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>  Components: Jenkins Slaves, mock_runner
>Reporter: Sandro Bonazzola
>Assignee: infra
>Priority: High
>  Labels: s390x
>
> Jenkins is failing on s390x with:
> [2019-10-07T11:45:31.272Z] + sudo -n chown -R ovirt:ovirt
> /home/ovirt/workspace/ovirt-release_standard-check-patch/check-patch.fc30.s390x
> [2019-10-07T11:45:31.272Z] sudo: sorry, you must have a tty to run sudo
> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-release_standard-check-patch/detail/ovirt-release_standard-check-patch/151/pipeline/140
> -- 
> Sandro Bonazzola
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA 
> sbona...@redhat.com
> *Red Hat respects your work life balance.
> Therefore there is no need to answer this email out of your office hours.*



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100111)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/W7JFB4LXNF4T4JENAVC6STYZQNPLJ64X/


[JIRA] (OVIRT-2811) s390x tests failing on missing tty

2019-10-07 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2811:

Components: Jenkins Slaves
mock_runner

> s390x tests failing on missing tty
> --
>
> Key: OVIRT-2811
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2811
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>  Components: Jenkins Slaves, mock_runner
>Reporter: Sandro Bonazzola
>Assignee: infra
>Priority: High
>
> Jenkins is failing on s390x with:
> [2019-10-07T11:45:31.272Z] + sudo -n chown -R ovirt:ovirt
> /home/ovirt/workspace/ovirt-release_standard-check-patch/check-patch.fc30.s390x
> [2019-10-07T11:45:31.272Z] sudo: sorry, you must have a tty to run sudo
> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-release_standard-check-patch/detail/ovirt-release_standard-check-patch/151/pipeline/140
> -- 
> Sandro Bonazzola
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA 
> sbona...@redhat.com
> *Red Hat respects your work life balance.
> Therefore there is no need to answer this email out of your office hours.*



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100111)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/R6S22K6OSML3LS34T3R3YOPIQ74L7X46/


[JIRA] (OVIRT-2811) s390x tests failing on missing tty

2019-10-07 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39892#comment-39892
 ] 

Barak Korren commented on OVIRT-2811:
-

The root cause of the issue is {{mock}} generating its log files with root 
permissions, while it used to generate them as an unprivileged user before.

This (fixed) issue seems related:
https://github.com/rpm-software-management/mock/issues/322 
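
The "sorry, you must have a tty to run sudo" message itself comes from 
{{Defaults requiretty}} being set in the slave's sudoers. Assuming that is what 
is configured on the s390x slaves, the usual workaround is a drop-in that 
exempts the CI user - a sketch, where the user name and drop-in file name are 
illustrative:

{code}
#!/bin/bash
# Sketch: exempt the CI user from requiretty so non-interactive
# "sudo -n" calls from Jenkins jobs can run without a terminal.
cat <<'EOF' | sudo tee /etc/sudoers.d/99-ovirt-ci-notty
Defaults:ovirt !requiretty
EOF

# Always validate sudoers syntax after touching it:
sudo visudo -c
{code}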

> s390x tests failing on missing tty
> --
>
> Key: OVIRT-2811
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2811
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Sandro Bonazzola
>Assignee: infra
>Priority: High
>
> Jenkins is failing on s390x with:
> [2019-10-07T11:45:31.272Z] + sudo -n chown -R ovirt:ovirt
> /home/ovirt/workspace/ovirt-release_standard-check-patch/check-patch.fc30.s390x
> [2019-10-07T11:45:31.272Z] sudo: sorry, you must have a tty to run sudo
> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-release_standard-check-patch/detail/ovirt-release_standard-check-patch/151/pipeline/140
> -- 
> Sandro Bonazzola
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA 
> sbona...@redhat.com
> *Red Hat respects your work life balance.
> Therefore there is no need to answer this email out of your office hours.*



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100111)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/Y2P7YZWQ7NCWZDJOLR4DB5RWCKNDGKEN/


[JIRA] (OVIRT-2810) CI build error with: hudson.plugins.git.GitException: Failed to fetch from https://gerrit.ovirt.org/jenkins

2019-10-06 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2810:

Issue Type: Outage  (was: By-EMAIL)

> CI build error with: hudson.plugins.git.GitException: Failed to fetch from 
> https://gerrit.ovirt.org/jenkins
> ---
>
> Key: OVIRT-2810
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2810
> Project: oVirt - virtualization made easy
>  Issue Type: Outage
>  Components: Jenkins Master
>Reporter: Nir Soffer
>Assignee: infra
>
> Any imageio patch fails now with the error below.
> Examples:
> https://gerrit.ovirt.org/q/topic:py3+is:open+project:ovirt-imageio
> *00:07:26*  ERROR: Error fetching remote repo 'origin'
> *00:07:26*  hudson.plugins.git.GitException: Failed to fetch from
> https://gerrit.ovirt.org/jenkins*00:07:26*at
> hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:894)*00:07:26*at
> hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1161)*00:07:26*
>   at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)*00:07:26*
>   at 
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:124)*00:07:26*
>   at 
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:93)*00:07:26*
>   at 
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:80)*00:07:26*
>   at 
> org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)*00:07:26*
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)*00:07:26*
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)*00:07:26*
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)*00:07:26*
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)*00:07:26*
>   at java.lang.Thread.run(Thread.java:748)*00:07:26*  Caused by:
> hudson.plugins.git.GitException: Command "git fetch --tags --progress
> https://gerrit.ovirt.org/jenkins
> +133b1a1729ce5c6caf1745ad8034ceaa4dd187c5:myhead" returned status code
> 128:*00:07:26*  stdout: *00:07:26*  stderr: error: no such remote ref
> 133b1a1729ce5c6caf1745ad8034ceaa4dd187c5*00:07:26*  *00:07:26*at
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2042)*00:07:26*
>   at 
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1761)*00:07:26*
>   at 
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$400(CliGitAPIImpl.java:72)*00:07:26*
>   at 
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:442)*00:07:26*
>   at 
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:153)*00:07:26*
>   at 
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:146)*00:07:26*
>   at hudson.remoting.UserRequest.perform(UserRequest.java:212)*00:07:26*
>   at hudson.remoting.UserRequest.perform(UserRequest.java:54)*00:07:26*
>   at hudson.remoting.Request$2.run(Request.java:369)*00:07:26*at
> hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)*00:07:26*
>   ... 4 more*00:07:26*Suppressed:
> hudson.remoting.Channel$CallSiteStackTrace: Remote call to
> vm0008.workers-phx.ovirt.org*00:07:26*at
> hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1743)*00:07:26*
>   at 
> hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)*00:07:26*
>   at hudson.remoting.Channel.call(Channel.java:957)*00:07:26* 
> at
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:146)*00:07:26*
>   at sun.reflect.GeneratedMethodAccessor715.invoke(Unknown
> Source)*00:07:26* at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)*00:07:26*
>   at java.lang.reflect.Method.invoke(Method.java:498)*00:07:26*   
> at
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:132)*00:07:26*
>   at com.sun.proxy.$Proxy83.execute(Unknown Source)*00:07:26* 
> at
> hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:892)*00:07:26*
> at
> hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1161)*00:07:26*
>   at 
> hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)*00:07:26*
>   at 
> 

[JIRA] (OVIRT-2810) CI build error with: hudson.plugins.git.GitException: Failed to fetch from https://gerrit.ovirt.org/jenkins

2019-10-06 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2810:

Components: Jenkins Master

> CI build error with: hudson.plugins.git.GitException: Failed to fetch from 
> https://gerrit.ovirt.org/jenkins
> ---
>
> Key: OVIRT-2810
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2810
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>  Components: Jenkins Master
>Reporter: Nir Soffer
>Assignee: infra
>
> Any imageio patch fails now with the error below.
> Examples:
> https://gerrit.ovirt.org/q/topic:py3+is:open+project:ovirt-imageio
> *00:07:26*  ERROR: Error fetching remote repo 'origin'
> *00:07:26*  hudson.plugins.git.GitException: Failed to fetch from
> https://gerrit.ovirt.org/jenkins*00:07:26*at
> hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:894)*00:07:26*at
> hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1161)*00:07:26*
>   at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)*00:07:26*
>   at 
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:124)*00:07:26*
>   at 
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:93)*00:07:26*
>   at 
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:80)*00:07:26*
>   at 
> org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)*00:07:26*
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)*00:07:26*
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)*00:07:26*
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)*00:07:26*
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)*00:07:26*
>   at java.lang.Thread.run(Thread.java:748)*00:07:26*  Caused by:
> hudson.plugins.git.GitException: Command "git fetch --tags --progress
> https://gerrit.ovirt.org/jenkins
> +133b1a1729ce5c6caf1745ad8034ceaa4dd187c5:myhead" returned status code
> 128:*00:07:26*  stdout: *00:07:26*  stderr: error: no such remote ref
> 133b1a1729ce5c6caf1745ad8034ceaa4dd187c5*00:07:26*  *00:07:26*at
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2042)*00:07:26*
>   at 
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1761)*00:07:26*
>   at 
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$400(CliGitAPIImpl.java:72)*00:07:26*
>   at 
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:442)*00:07:26*
>   at 
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:153)*00:07:26*
>   at 
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:146)*00:07:26*
>   at hudson.remoting.UserRequest.perform(UserRequest.java:212)*00:07:26*
>   at hudson.remoting.UserRequest.perform(UserRequest.java:54)*00:07:26*
>   at hudson.remoting.Request$2.run(Request.java:369)*00:07:26*at
> hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)*00:07:26*
>   ... 4 more*00:07:26*Suppressed:
> hudson.remoting.Channel$CallSiteStackTrace: Remote call to
> vm0008.workers-phx.ovirt.org*00:07:26*at
> hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1743)*00:07:26*
>   at 
> hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)*00:07:26*
>   at hudson.remoting.Channel.call(Channel.java:957)*00:07:26* 
> at
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:146)*00:07:26*
>   at sun.reflect.GeneratedMethodAccessor715.invoke(Unknown
> Source)*00:07:26* at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)*00:07:26*
>   at java.lang.reflect.Method.invoke(Method.java:498)*00:07:26*   
> at
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:132)*00:07:26*
>   at com.sun.proxy.$Proxy83.execute(Unknown Source)*00:07:26* 
> at
> hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:892)*00:07:26*
> at
> hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1161)*00:07:26*
>   at 
> hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)*00:07:26*
>   at 
> 

[JIRA] (OVIRT-2810) CI build error with: hudson.plugins.git.GitException: Failed to fetch from https://gerrit.ovirt.org/jenkins

2019-10-06 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren reassigned OVIRT-2810:
---

Assignee: Barak Korren  (was: infra)

> CI build error with: hudson.plugins.git.GitException: Failed to fetch from 
> https://gerrit.ovirt.org/jenkins
> ---
>
> Key: OVIRT-2810
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2810
> Project: oVirt - virtualization made easy
>  Issue Type: Outage
>  Components: Jenkins Master
>Reporter: Nir Soffer
>Assignee: Barak Korren
>
> Any imageio patch fails now with the error below.
> Examples:
> https://gerrit.ovirt.org/q/topic:py3+is:open+project:ovirt-imageio
> *00:07:26*  ERROR: Error fetching remote repo 'origin'
> *00:07:26*  hudson.plugins.git.GitException: Failed to fetch from
> https://gerrit.ovirt.org/jenkins*00:07:26*at
> hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:894)*00:07:26*at
> hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1161)*00:07:26*
>   at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)*00:07:26*
>   at 
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:124)*00:07:26*
>   at 
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:93)*00:07:26*
>   at 
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:80)*00:07:26*
>   at 
> org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)*00:07:26*
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)*00:07:26*
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)*00:07:26*
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)*00:07:26*
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)*00:07:26*
>   at java.lang.Thread.run(Thread.java:748)*00:07:26*  Caused by:
> hudson.plugins.git.GitException: Command "git fetch --tags --progress
> https://gerrit.ovirt.org/jenkins
> +133b1a1729ce5c6caf1745ad8034ceaa4dd187c5:myhead" returned status code
> 128:*00:07:26*  stdout: *00:07:26*  stderr: error: no such remote ref
> 133b1a1729ce5c6caf1745ad8034ceaa4dd187c5*00:07:26*  *00:07:26*at
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2042)*00:07:26*
>   at 
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1761)*00:07:26*
>   at 
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$400(CliGitAPIImpl.java:72)*00:07:26*
>   at 
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:442)*00:07:26*
>   at 
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:153)*00:07:26*
>   at 
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:146)*00:07:26*
>   at hudson.remoting.UserRequest.perform(UserRequest.java:212)*00:07:26*
>   at hudson.remoting.UserRequest.perform(UserRequest.java:54)*00:07:26*
>   at hudson.remoting.Request$2.run(Request.java:369)*00:07:26*at
> hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)*00:07:26*
>   ... 4 more*00:07:26*Suppressed:
> hudson.remoting.Channel$CallSiteStackTrace: Remote call to
> vm0008.workers-phx.ovirt.org*00:07:26*at
> hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1743)*00:07:26*
>   at 
> hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)*00:07:26*
>   at hudson.remoting.Channel.call(Channel.java:957)*00:07:26* 
> at
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:146)*00:07:26*
>   at sun.reflect.GeneratedMethodAccessor715.invoke(Unknown
> Source)*00:07:26* at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)*00:07:26*
>   at java.lang.reflect.Method.invoke(Method.java:498)*00:07:26*   
> at
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:132)*00:07:26*
>   at com.sun.proxy.$Proxy83.execute(Unknown Source)*00:07:26* 
> at
> hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:892)*00:07:26*
> at
> hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1161)*00:07:26*
>   at 
> hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)*00:07:26*
>   at 
> 

[JIRA] (OVIRT-2808) CI broken - builds fail early in "loading code" stage with "attempted duplicate class definition for name: "Project""

2019-10-03 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39879#comment-39879
 ] 

Barak Korren commented on OVIRT-2808:
-

This is a known issue - we need the configuration in Jenkins to "catch up" to 
the code in our master branch.

It should be resolved once the following job finishes running:
https://jenkins.ovirt.org/job/jenkins_standard-on-merge/820

> CI broken - builds fail early in "loading code" stage with "attempted 
> duplicate class definition for name: "Project""
> -
>
> Key: OVIRT-2808
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2808
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: infra
>
> Example builds:
> - http://jenkins.ovirt.org/job/vdsm_standard-check-patch/12526/
> - http://jenkins.ovirt.org/job/vdsm_standard-check-patch/12527/
> java.lang.LinkageError: loader (instance of
> org/jenkinsci/plugins/workflow/cps/CpsGroovyShell$CleanGroovyClassLoader):
> attempted  duplicate class definition for name: "Project"
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at groovy.lang.GroovyClassLoader.access$400(GroovyClassLoader.java:62)
>   at 
> groovy.lang.GroovyClassLoader$ClassCollector.createClass(GroovyClassLoader.java:500)
>   at 
> groovy.lang.GroovyClassLoader$ClassCollector.onClassNode(GroovyClassLoader.java:517)
>   at 
> groovy.lang.GroovyClassLoader$ClassCollector.call(GroovyClassLoader.java:521)
>   at 
> org.codehaus.groovy.control.CompilationUnit$17.call(CompilationUnit.java:834)
>   at 
> org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1065)
>   at 
> org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:603)
>   at 
> org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:581)
>   at 
> org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:558)
>   at 
> groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
>   at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
>   at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
>   at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.doParse(CpsGroovyShell.java:142)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.parse(CpsGroovyShell.java:113)
>   at groovy.lang.GroovyShell.parse(GroovyShell.java:736)
>   at groovy.lang.GroovyShell.parse(GroovyShell.java:727)
>   at 
> org.jenkinsci.plugins.workflow.cps.steps.LoadStepExecution.start(LoadStepExecution.java:49)
>   at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:269)
>   at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:177)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:122)
>   at 
> org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:48)
>   at 
> org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
>   at 
> org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
>   at 
> com.cloudbees.groovy.cps.sandbox.DefaultInvoker.methodCall(DefaultInvoker.java:20)
>   at WorkflowScript.load_code(WorkflowScript:45)
>   at Script4.on_load(Script4.groovy:22)
>   at WorkflowScript.load_code(WorkflowScript:47)
>   at Script1.on_load(Script1.groovy:13)
>   at WorkflowScript.load_code(WorkflowScript:47)
>   at WorkflowScript.run(WorkflowScript:22)
>   at ___cps.transform___(Native Method)
>   at 
> com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:84)
>   at 
> com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:113)
>   at 
> com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:83)
>   at sun.reflect.GeneratedMethodAccessor681.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
>   at 
> com.cloudbees.groovy.cps.impl.LocalVariableBlock$LocalVariable.get(LocalVariableBlock.java:39)
>   at 
> com.cloudbees.groovy.cps.LValueBlock$GetAdapter.receive(LValueBlock.java:30)
>   at 
> 

[JIRA] (OVIRT-2808) CI broken - builds fail early in "loading code" stage with "attempted duplicate class definition for name: "Project""

2019-10-03 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren reassigned OVIRT-2808:
---

Assignee: Barak Korren  (was: infra)

> CI broken - builds fail early in "loading code" stage with "attempted 
> duplicate class definition for name: "Project""
> -
>
> Key: OVIRT-2808
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2808
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: Barak Korren
>
> Example builds:
> - http://jenkins.ovirt.org/job/vdsm_standard-check-patch/12526/
> - http://jenkins.ovirt.org/job/vdsm_standard-check-patch/12527/
> java.lang.LinkageError: loader (instance of
> org/jenkinsci/plugins/workflow/cps/CpsGroovyShell$CleanGroovyClassLoader):
> attempted  duplicate class definition for name: "Project"
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at groovy.lang.GroovyClassLoader.access$400(GroovyClassLoader.java:62)
>   at 
> groovy.lang.GroovyClassLoader$ClassCollector.createClass(GroovyClassLoader.java:500)
>   at 
> groovy.lang.GroovyClassLoader$ClassCollector.onClassNode(GroovyClassLoader.java:517)
>   at 
> groovy.lang.GroovyClassLoader$ClassCollector.call(GroovyClassLoader.java:521)
>   at 
> org.codehaus.groovy.control.CompilationUnit$17.call(CompilationUnit.java:834)
>   at 
> org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1065)
>   at 
> org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:603)
>   at 
> org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:581)
>   at 
> org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:558)
>   at 
> groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
>   at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
>   at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
>   at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.doParse(CpsGroovyShell.java:142)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.parse(CpsGroovyShell.java:113)
>   at groovy.lang.GroovyShell.parse(GroovyShell.java:736)
>   at groovy.lang.GroovyShell.parse(GroovyShell.java:727)
>   at 
> org.jenkinsci.plugins.workflow.cps.steps.LoadStepExecution.start(LoadStepExecution.java:49)
>   at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:269)
>   at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:177)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:122)
>   at 
> org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:48)
>   at 
> org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
>   at 
> org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
>   at 
> com.cloudbees.groovy.cps.sandbox.DefaultInvoker.methodCall(DefaultInvoker.java:20)
>   at WorkflowScript.load_code(WorkflowScript:45)
>   at Script4.on_load(Script4.groovy:22)
>   at WorkflowScript.load_code(WorkflowScript:47)
>   at Script1.on_load(Script1.groovy:13)
>   at WorkflowScript.load_code(WorkflowScript:47)
>   at WorkflowScript.run(WorkflowScript:22)
>   at ___cps.transform___(Native Method)
>   at 
> com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:84)
>   at 
> com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:113)
>   at 
> com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:83)
>   at sun.reflect.GeneratedMethodAccessor681.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
>   at 
> com.cloudbees.groovy.cps.impl.LocalVariableBlock$LocalVariable.get(LocalVariableBlock.java:39)
>   at 
> com.cloudbees.groovy.cps.LValueBlock$GetAdapter.receive(LValueBlock.java:30)
>   at 
> com.cloudbees.groovy.cps.impl.LocalVariableBlock.evalLValue(LocalVariableBlock.java:28)
>   at 
> com.cloudbees.groovy.cps.LValueBlock$BlockImpl.eval(LValueBlock.java:55)
>   at 

[JIRA] (OVIRT-2803) Re: CI is not triggered for pushed gerrit updates

2019-09-25 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39863#comment-39863
 ] 

Barak Korren commented on OVIRT-2803:
-

This is a known issue - tracker ticket:

https://ovirt-jira.atlassian.net/browse/OVIRT-2802

Will close the new ticket as a duplicate.

On Wed, 25 Sep 2019 at 13:08, Nir Soffer  wrote:

> On Wed, Sep 25, 2019 at 12:42 PM Amit Bawer  wrote:
>
>> CI has stopped from being triggered for pushed gerrit updates.
>>
>
> Adding infra-suppo...@ovirt.org - this files a bug in the infra bug tracker.
>
> Example at: https://gerrit.ovirt.org/#/c/103320/
>> last PS did not trigger CI tests.
>>
>
> More examples:
> https://gerrit.ovirt.org/c/103490/2
> https://gerrit.ovirt.org/c/103455/4
>
> Last triggered build was seen here:
> https://gerrit.ovirt.org/c/103549/1
> Sep 24 18:42.
>
> "ci test" also did not trigger a build, but I tried now again using "ci
> please test"
> and it worked.
> https://gerrit.ovirt.org/c/103490/2#message-8d40f6de_bd0564ff
>
> Maybe someone changed the pattern?
>
>
>

-- 
Barak Korren
RHV DevOps team, RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

> Re: CI is not triggered for pushed gerrit updates
> -
>
> Key: OVIRT-2803
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2803
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: infra
>
> On Wed, Sep 25, 2019 at 12:42 PM Amit Bawer  wrote:
> > CI has stopped from being triggered for pushed gerrit updates.
> >
> Adding infra-suppo...@ovirt.org - this files a bug in the infra bug tracker.
> Example at: https://gerrit.ovirt.org/#/c/103320/
> > last PS did not trigger CI tests.
> >
> More examples:
> https://gerrit.ovirt.org/c/103490/2
> https://gerrit.ovirt.org/c/103455/4
> Last triggered build was seen here:
> https://gerrit.ovirt.org/c/103549/1
> Sep 24 18:42.
> "ci test" also did not trigger a build, but I tried now again using "ci
> please test"
> and it worked.
> https://gerrit.ovirt.org/c/103490/2#message-8d40f6de_bd0564ff
> Maybe someone changed the pattern?



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100111)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/R6KWTCRKLZ3CCZQMHREODUXR3M64ARCR/


[JIRA] (OVIRT-1448) Enable devs to specifiy patch dependencies for OST

2019-09-24 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-1448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39851#comment-39851
 ] 

Barak Korren commented on OVIRT-1448:
-

Well, we're probably not going to implement this, but as long as we use the CQ 
we can still get issues where dependent patches get tested alone.
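
For reference, the OpenStack convention mentioned in the ticket description is 
a {{Depends-On:}} footer in the commit message that names the change(s) a patch 
must be tested together with - e.g. (the change URL here is purely 
illustrative):

{code}
core: handle the new vdsm verb

This patch only passes OST together with the matching vdsm change.

Depends-On: https://gerrit.ovirt.org/c/12345
{code}

The CQ would then have to parse that footer and put both changes into the same 
test run.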

> Enable devs to specifiy patch dependencies for OST
> --
>
> Key: OVIRT-1448
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1448
> Project: oVirt - virtualization made easy
>  Issue Type: Improvement
>  Components: oVirt CI
>Reporter: Barak Korren
>Assignee: infra
>  Labels: change-queue
>
> We have an issue with our system ATM where if there are patches for different 
> projects that depend on one another, the system is unaware of that dependency.
> What typically happens in this scenario is that sending the dependent patch 
> makes the experimental test fail and keep failing until the other patch is 
> also merged.
> The change queue will handle this better, but the typical behaviour for it 
> would be to reject both patches, unless they are somehow coordinated to make 
> it into the same test.
> The change queue core code already includes the ability to track and 
> understand dependencies between changes. What is missing is the ability for 
> developers to specify these dependencies.
> We would probably want to adopt OpenStack's convention here.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100111)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/TDTWAAV7TNYZZQLHFHBJUSO4HST3A2UV/


[JIRA] (OVIRT-2794) OST is broken since this morning - looks like infra issue

2019-09-09 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren reassigned OVIRT-2794:
---

Assignee: Ehud Yonasi  (was: infra)

> OST is broken since this morning - looks like infra issue
> -
>
> Key: OVIRT-2794
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2794
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>  Components: Jenkins Slaves
>Reporter: Nir Soffer
>Assignee: Ehud Yonasi
>  Labels: docker
>
> The last successful build was today at 08:10:
> Since then all builds fail very early with the error below - which is not
> related to oVirt.
> {code}
> Removing image:
> sha256:f8e5aa8e979155e074411bfef9adade6cdcdf3a5a2eb1d5ad2dbf0288d585ffa,
> force=True
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 222,
> in _raise_for_status
> response.raise_for_status()
>   File "/usr/lib/python3.6/site-packages/requests/models.py", line 893, in
> raise_for_status
> raise HTTPError(http_error_msg, response=self)
> requests.exceptions.HTTPError: 404 Client Error: Not Found for url:
> http+docker://localunixsocket/v1.30/images/sha256:f8e5aa8e979155e074411bfef9adade6cdcdf3a5a2eb1d5ad2dbf0288d585ffa?force=True=False
> During handling of the above exception, another exception occurred:
> Traceback (most recent call last):
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 349, in <module>
> main()
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 37, in main
> safe_image_cleanup(client, whitelisted_repos)
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 107, in safe_image_cleanup
> _safe_rm(client, parent)
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 329, in _safe_rm
> client.images.remove(image_id, force=force)
>   File "/usr/lib/python3.6/site-packages/docker/models/images.py", line
> 288, in remove
> self.client.api.remove_image(*args, **kwargs)
>   File "/usr/lib/python3.6/site-packages/docker/utils/decorators.py", line
> 19, in wrapped
> return f(self, resource_id, *args, **kwargs)
>   File "/usr/lib/python3.6/site-packages/docker/api/image.py", line 481, in
> remove_image
> return self._result(res, True)
>   File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 228,
> in _result
> self._raise_for_status(response)
>   File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 224,
> in _raise_for_status
> raise create_api_error_from_http_exception(e)
>   File "/usr/lib/python3.6/site-packages/docker/errors.py", line 31, in
> create_api_error_from_http_exception
> raise cls(e, response=response, explanation=explanation)
> docker.errors.NotFound: 404 Client Error: Not Found ("reference does not
> exist")
> Aborting.
> Build step 'Execute shell' marked build as failure
> {code}
> [Jenkins build-history links: failed builds #5536-#5542, Sep 5, 2019,
> 1:50 PM - 3:02 PM, each pointing to its console output]

[JIRA] (OVIRT-2794) OST is broken since this morning - looks like infra issue

2019-09-09 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39798#comment-39798
 ] 

Barak Korren commented on OVIRT-2794:
-

[~accountid:5aa0f39f5a4d022884128a0f] had started testing {{docker_cleanup.py}} 
on CentOS 7, so assigning the ticket to him.
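
Whatever the eventual fix in {{docker_cleanup.py}} turns out to be, the 
traceback suggests the cleanup must treat "image already gone" as success, 
since an image (or its parent) can disappear between listing and removal. A 
CLI-level sketch of that idea - this is not the actual script's logic:

{code}
#!/bin/bash
# Sketch: idempotent image cleanup - a missing image is a no-op, not an
# error, because something else may have removed it mid-run.
for image in $(docker images --quiet | sort -u); do
    if docker rmi --force "$image" 2>/dev/null; then
        echo "removed ${image}"
    else
        echo "skipping ${image} - already removed or still in use" >&2
    fi
done
{code}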

> OST is broken since this morning - looks like infra issue
> -
>
> Key: OVIRT-2794
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2794
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>  Components: Jenkins Slaves
>Reporter: Nir Soffer
>Assignee: infra
>  Labels: docker
>
> The last successful build was today at 08:10:
> Since then all builds fail very early with the error below - which is not
> related to oVirt.
> {code}
> Removing image:
> sha256:f8e5aa8e979155e074411bfef9adade6cdcdf3a5a2eb1d5ad2dbf0288d585ffa,
> force=True
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 222,
> in _raise_for_status
> response.raise_for_status()
>   File "/usr/lib/python3.6/site-packages/requests/models.py", line 893, in
> raise_for_status
> raise HTTPError(http_error_msg, response=self)
> requests.exceptions.HTTPError: 404 Client Error: Not Found for url:
> http+docker://localunixsocket/v1.30/images/sha256:f8e5aa8e979155e074411bfef9adade6cdcdf3a5a2eb1d5ad2dbf0288d585ffa?force=True=False
> During handling of the above exception, another exception occurred:
> Traceback (most recent call last):
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 349, in <module>
> main()
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 37, in main
> safe_image_cleanup(client, whitelisted_repos)
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 107, in safe_image_cleanup
> _safe_rm(client, parent)
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 329, in _safe_rm
> client.images.remove(image_id, force=force)
>   File "/usr/lib/python3.6/site-packages/docker/models/images.py", line
> 288, in remove
> self.client.api.remove_image(*args, **kwargs)
>   File "/usr/lib/python3.6/site-packages/docker/utils/decorators.py", line
> 19, in wrapped
> return f(self, resource_id, *args, **kwargs)
>   File "/usr/lib/python3.6/site-packages/docker/api/image.py", line 481, in
> remove_image
> return self._result(res, True)
>   File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 228,
> in _result
> self._raise_for_status(response)
>   File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 224,
> in _raise_for_status
> raise create_api_error_from_http_exception(e)
>   File "/usr/lib/python3.6/site-packages/docker/errors.py", line 31, in
> create_api_error_from_http_exception
> raise cls(e, response=response, explanation=explanation)
> docker.errors.NotFound: 404 Client Error: Not Found ("reference does not
> exist")
> Aborting.
> Build step 'Execute shell' marked build as failure
> {code}
> [Jenkins build-history links: failed builds #5537-#5542, Sep 5, 2019,
> 1:50 PM - 3:02 PM, each pointing to its console output; quoted
> description truncated here]

[JIRA] (OVIRT-2794) OST is broken since this morning - looks like infra issue

2019-09-09 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2794:

Labels: docker  (was: )

> OST is broken since this morning - looks like infra issue
> -
>
> Key: OVIRT-2794
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2794
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>  Components: Jenkins Slaves
>Reporter: Nir Soffer
>Assignee: infra
>  Labels: docker
>
> The last successful build was today at 08:10:
> Since then all builds fail very early with the error below - which is not
> related to oVirt.
> {code}
> Removing image:
> sha256:f8e5aa8e979155e074411bfef9adade6cdcdf3a5a2eb1d5ad2dbf0288d585ffa,
> force=True
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 222,
> in _raise_for_status
> response.raise_for_status()
>   File "/usr/lib/python3.6/site-packages/requests/models.py", line 893, in
> raise_for_status
> raise HTTPError(http_error_msg, response=self)
> requests.exceptions.HTTPError: 404 Client Error: Not Found for url:
> http+docker://localunixsocket/v1.30/images/sha256:f8e5aa8e979155e074411bfef9adade6cdcdf3a5a2eb1d5ad2dbf0288d585ffa?force=True=False
> During handling of the above exception, another exception occurred:
> Traceback (most recent call last):
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 349, in <module>
> main()
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 37, in main
> safe_image_cleanup(client, whitelisted_repos)
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 107, in safe_image_cleanup
> _safe_rm(client, parent)
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 329, in _safe_rm
> client.images.remove(image_id, force=force)
>   File "/usr/lib/python3.6/site-packages/docker/models/images.py", line
> 288, in remove
> self.client.api.remove_image(*args, **kwargs)
>   File "/usr/lib/python3.6/site-packages/docker/utils/decorators.py", line
> 19, in wrapped
> return f(self, resource_id, *args, **kwargs)
>   File "/usr/lib/python3.6/site-packages/docker/api/image.py", line 481, in
> remove_image
> return self._result(res, True)
>   File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 228,
> in _result
> self._raise_for_status(response)
>   File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 224,
> in _raise_for_status
> raise create_api_error_from_http_exception(e)
>   File "/usr/lib/python3.6/site-packages/docker/errors.py", line 31, in
> create_api_error_from_http_exception
> raise cls(e, response=response, explanation=explanation)
> docker.errors.NotFound: 404 Client Error: Not Found ("reference does not
> exist")
> Aborting.
> Build step 'Execute shell' marked build as failure
> {code}
> [Jenkins build history widget: builds #5536-#5542, all failed on Sep 5, 2019]

[JIRA] (OVIRT-2794) OST is broken since this morning - looks like infra issue

2019-09-09 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2794:

Components: Jenkins Slaves

> OST is broken since this morning - looks like infra issue
> -
>
> Key: OVIRT-2794
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2794
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>  Components: Jenkins Slaves
>Reporter: Nir Soffer
>Assignee: infra
>
> The last successful build was today at 08:10:
> Since then all builds fail very early with the error below - which is not
> related to oVirt.
> {code}
> Removing image:
> sha256:f8e5aa8e979155e074411bfef9adade6cdcdf3a5a2eb1d5ad2dbf0288d585ffa,
> force=True
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 222,
> in _raise_for_status
> response.raise_for_status()
>   File "/usr/lib/python3.6/site-packages/requests/models.py", line 893, in
> raise_for_status
> raise HTTPError(http_error_msg, response=self)
> requests.exceptions.HTTPError: 404 Client Error: Not Found for url:
> http+docker://localunixsocket/v1.30/images/sha256:f8e5aa8e979155e074411bfef9adade6cdcdf3a5a2eb1d5ad2dbf0288d585ffa?force=True=False
> During handling of the above exception, another exception occurred:
> Traceback (most recent call last):
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 349, in <module>
> main()
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 37, in main
> safe_image_cleanup(client, whitelisted_repos)
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 107, in safe_image_cleanup
> _safe_rm(client, parent)
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 329, in _safe_rm
> client.images.remove(image_id, force=force)
>   File "/usr/lib/python3.6/site-packages/docker/models/images.py", line
> 288, in remove
> self.client.api.remove_image(*args, **kwargs)
>   File "/usr/lib/python3.6/site-packages/docker/utils/decorators.py", line
> 19, in wrapped
> return f(self, resource_id, *args, **kwargs)
>   File "/usr/lib/python3.6/site-packages/docker/api/image.py", line 481, in
> remove_image
> return self._result(res, True)
>   File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 228,
> in _result
> self._raise_for_status(response)
>   File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 224,
> in _raise_for_status
> raise create_api_error_from_http_exception(e)
>   File "/usr/lib/python3.6/site-packages/docker/errors.py", line 31, in
> create_api_error_from_http_exception
> raise cls(e, response=response, explanation=explanation)
> docker.errors.NotFound: 404 Client Error: Not Found ("reference does not
> exist")
> Aborting.
> Build step 'Execute shell' marked build as failure
> {code}
> [Jenkins build history widget: builds #5536-#5542, all failed on Sep 5, 2019]

[JIRA] (OVIRT-2794) OST is broken since this morning - looks like infra issue

2019-09-09 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39797#comment-39797
 ] 

Barak Korren commented on OVIRT-2794:
-

This was a bit puzzling. We've seen issues between {{docker_cleanup.py}} and 
Docker appear sporadically in the past, and therefore have made the job code 
generally not fail when {{docker_cleanup.py}} fails, and instead send an 
email to the infra list. It turns out that was only true for the V2 code; for 
the V1 code (which is still used in the manual job and the nightly jobs) those 
failures could still arise.

We did verify that {{docker_cleanup.py}} works on CentOS 7 with the Python 3 
Docker API client before merging the patch, so it's strange we did not see the 
issue then.

[~accountid:557058:5ca52a09-2675-4285-a044-12ad20f6166a] some of your 
statements above seem to include some wrong assumptions about how the system is 
built. We're not actually exposing the host's Docker daemon to the CI code; 
instead we run our own Docker instance inside the container that is used 
to run the CI code. That way we can ensure there is no cross-talk when 
running multiple CI containers on the same host.

[~accountid:557058:cc1e0e66-9881-45e2-b0b7-ccaa3e60f26e] as far as using 
Podman goes, I think doing that at this point will be quite a challenge for a 
number of reasons:
# We're currently using OpenShift 3.7 to manage our containers; this implies 
that we must run Docker on our hosts, since AFAIK OpenShift only started 
supporting CRI-O in 4.0 or 4.1.
# To allow CI scripts and test suites to use Docker we run nested Docker 
instances inside the CI containers. We know that Docker in Docker works well 
for our use cases. Running Podman in Docker will probably be more challenging.
# Since we're still using {{mock}} to encapsulate the CI script inside the CI 
container, we're bind-mounting the Docker socket from the container into mock. 
We know there are issues when running Podman in mock, so solving those will 
take some work.
# People who write CI scripts and suites tend to expect things to "just work" 
in CI like they do on their laptops, and hence tend to use Docker commands. 
Removing Docker will force everyone to learn Podman, and we'll need to make 
changes everywhere.

Our current suspicion is that this issue may have to do with the particular 
version of Docker that is installed inside the CI container. While our 
{{global_setup.sh}} script generally keeps Docker up to date on the CI slaves, 
we've intentionally skipped that update code when running in a container. I 
suspect that the version of Docker that is in the CI containers is older than 
the one running on the CI slaves. That would explain why we did not see this 
issue when working on the {{docker_cleanup.py}} patch, since that was tested 
on the normal slaves and not the containers.

Here is what I think we should do now (a sketch of the first steps follows 
below):
# Verify again that {{docker_cleanup.py}} works well on CentOS with the 
Python 3 Docker client API.
# If so, inspect the version of Docker we have in the containers, and finally
# Build an updated container image with a newer version of Docker as needed.

Note that updating the container image will require us to test it thoroughly 
and ensure it can properly run both OST and {{kubevirt-ci}}. 
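
A minimal sketch of the first two steps, using the same Python 3 {{docker}} 
client library the cleanup script already uses (the function names are 
illustrative, not part of {{docker_cleanup.py}}):

{code}
import docker

def report_daemon_version(client):
    # Inspect which daemon version the CI container actually talks to.
    print(client.version().get('Version'))

def safe_remove_image(client, image_id, force=True):
    # Tolerate a reference that is already gone -- the same 404
    # ("reference does not exist") seen in the traceback above.
    try:
        client.images.remove(image_id, force=force)
    except docker.errors.NotFound:
        pass

if __name__ == '__main__':
    client = docker.from_env()
    report_daemon_version(client)
{code}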



> OST is broken since this morning - looks like infra issue
> -
>
> Key: OVIRT-2794
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2794
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: infra
>
> The last successful build was today at 08:10:
> Since then all builds fail very early with the error below - which is not
> related to oVirt.
> {code}
> Removing image:
> sha256:f8e5aa8e979155e074411bfef9adade6cdcdf3a5a2eb1d5ad2dbf0288d585ffa,
> force=True
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 222,
> in _raise_for_status
> response.raise_for_status()
>   File "/usr/lib/python3.6/site-packages/requests/models.py", line 893, in
> raise_for_status
> raise HTTPError(http_error_msg, response=self)
> requests.exceptions.HTTPError: 404 Client Error: Not Found for url:
> http+docker://localunixsocket/v1.30/images/sha256:f8e5aa8e979155e074411bfef9adade6cdcdf3a5a2eb1d5ad2dbf0288d585ffa?force=True=False
> During handling of the above exception, another exception occurred:
> Traceback (most recent call last):
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 349, in <module>
> main()
>   File
> "/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
> line 37, in main
> safe_image_cleanup(client, whitelisted_repos)
>   File
> 

[JIRA] (OVIRT-2794) OST is broken since this morning - looks like infra issue

2019-09-09 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2794:

Description: 
The last successful build was today at 08:10:

Since then all builds fail very early with the error below - which is not
related to oVirt.

{code}
Removing image:
sha256:f8e5aa8e979155e074411bfef9adade6cdcdf3a5a2eb1d5ad2dbf0288d585ffa,
force=True
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 222,
in _raise_for_status
response.raise_for_status()
  File "/usr/lib/python3.6/site-packages/requests/models.py", line 893, in
raise_for_status
raise HTTPError(http_error_msg, response=self)

requests.exceptions.HTTPError: 404 Client Error: Not Found for url:
http+docker://localunixsocket/v1.30/images/sha256:f8e5aa8e979155e074411bfef9adade6cdcdf3a5a2eb1d5ad2dbf0288d585ffa?force=True=False

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File
"/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
line 349, in <module>
main()
  File
"/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
line 37, in main
safe_image_cleanup(client, whitelisted_repos)
  File
"/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
line 107, in safe_image_cleanup
_safe_rm(client, parent)
  File
"/home/jenkins/workspace/ovirt-system-tests_manual/jenkins/scripts/docker_cleanup.py",
line 329, in _safe_rm
client.images.remove(image_id, force=force)
  File "/usr/lib/python3.6/site-packages/docker/models/images.py", line
288, in remove
self.client.api.remove_image(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/docker/utils/decorators.py", line
19, in wrapped
return f(self, resource_id, *args, **kwargs)
  File "/usr/lib/python3.6/site-packages/docker/api/image.py", line 481, in
remove_image
return self._result(res, True)
  File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 228,
in _result
self._raise_for_status(response)
  File "/usr/lib/python3.6/site-packages/docker/api/client.py", line 224,
in _raise_for_status
raise create_api_error_from_http_exception(e)
  File "/usr/lib/python3.6/site-packages/docker/errors.py", line 31, in
create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.NotFound: 404 Client Error: Not Found ("reference does not
exist")

Aborting.

Build step 'Execute shell' marked build as failure
{code}

[Jenkins build history widget: builds #5536-#5542, all failed on Sep 5, 2019; 
build #5535 succeeded on Sep 5, 2019 at 8:10 AM]


  was:
The last successful build was today at 08:10:

Since then all builds fail very early with the error 

[JIRA] (OVIRT-2788) CI: Add the option to send an email if stage fails

2019-09-05 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39789#comment-39789
 ] 

Barak Korren commented on OVIRT-2788:
-

It's not something you can define ATM.

[~accountid:557058:c4a3432b-f1c1-4620-b53b-c398d6d3a5c2] started implementing 
this when we were working on the general notifications mechanism (that was 
meant to be used for the tag stages as well) for STDCI, but he moved on to 
work on other things.

> CI: Add the option to send an email if stage fails
> ---
>
> Key: OVIRT-2788
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2788
> Project: oVirt - virtualization made easy
>  Issue Type: New Feature
>  Components: CI client projects
>Reporter: Bell Levin
>Assignee: infra
>
> Added the poll-upstream-sources stage to be run every night on vdsm \[1]. I 
> think it is useful if an email is sent to selected people when the stage 
> fails.
> Such an option is available in V1 (namely, the nightly OST network suite), 
> and would help me out if implemented in V2 as well.
> \[1] 
> [https://gerrit.ovirt.org/#/c/102901/|https://gerrit.ovirt.org/#/c/102901/]
> FYI [~accountid:557058:866c109f-3951-4680-8dac-b76caf296501] 
> [~accountid:557058:c4a3432b-f1c1-4620-b53b-c398d6d3a5c2] 
> [~accountid:5aa0f39f5a4d022884128a0f] 



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100109)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/6QYIDDIR6FPSOFKR3OAGBE4WSBU73HFA/


[JIRA] (OVIRT-2790) Jenkins: build manual trigger rights for asocha

2019-09-05 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39788#comment-39788
 ] 

Barak Korren commented on OVIRT-2790:
-

[~accountid:557058:7013bb8c-48b2-4b9b-898e-eccf5fb61fad] {{ci test please}} has 
been around for a long, long time - while triggering from the GUI is still 
usable, and not going to be removed any time soon, I personally prefer that 
people stay away from the Jenkins GUI unless they are reading the logs.

> Jenkins: build manual trigger rights for asocha   
> --
>
> Key: OVIRT-2790
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2790
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>  Components: Jenkins Master
>Reporter: Artur Socha
>Assignee: infra
>Priority: Low
>
> Please grant me (jenkins user id: asocha) rights to manually trigger patch 
> builds (to be able to re-run)
> [https://jenkins.ovirt.org/gerrit_manual_trigger/|https://jenkins.ovirt.org/gerrit_manual_trigger/]
> Thanks\!



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100109)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/AYQRUBGMGYGYKXG5LFLCXPUKPT2KTG76/


[JIRA] (OVIRT-2790) Jenkins: build manual trigger rights for asocha

2019-09-03 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2790:

Resolution: Fixed
Status: Done  (was: To Do)

> Jenkins: build manual trigger rights for asocha   
> --
>
> Key: OVIRT-2790
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2790
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>  Components: Jenkins Master
>Reporter: Artur Socha
>Assignee: infra
>Priority: Low
>
> Please grant me (jenkins user id: asocha) rights to manually trigger patch 
> builds (to be able to re-run)
> [https://jenkins.ovirt.org/gerrit_manual_trigger/|https://jenkins.ovirt.org/gerrit_manual_trigger/]
> Thanks\!



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100108)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/IITE4ESJPBP2EMDO4J6VZA3O66UZO53O/


[JIRA] (OVIRT-2790) Jenkins: build manual trigger rights for asocha

2019-09-03 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39784#comment-39784
 ] 

Barak Korren commented on OVIRT-2790:
-

I see; maybe put a cheat sheet for him somewhere...

Closing the ticket now.

> Jenkins: build manual trigger rights for asocha   
> --
>
> Key: OVIRT-2790
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2790
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>  Components: Jenkins Master
>Reporter: Artur Socha
>Assignee: infra
>Priority: Low
>
> Please grant me (jenkins user id: asocha) rights to manually trigger patch 
> builds (to be able to re-run)
> [https://jenkins.ovirt.org/gerrit_manual_trigger/|https://jenkins.ovirt.org/gerrit_manual_trigger/]
> Thanks\!



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100108)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/NDH4GD474377GWMLYG4CKTU5OZMYAUXD/


[JIRA] (OVIRT-2790) Jenkins: build manual trigger rights for asocha

2019-09-03 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39782#comment-39782
 ] 

Barak Korren commented on OVIRT-2790:
-

If all you want to do is to re-run - you can just type {{ci test please}} into 
a comment on your patch.

Where did you find the instructions for using the manual trigger? We might 
need to update them.

> Jenkins: build manual trigger rights for asocha   
> --
>
> Key: OVIRT-2790
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2790
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>  Components: Jenkins Master
>Reporter: Artur Socha
>Assignee: infra
>Priority: Low
>
> Please grant me (jenkins user id: asocha) rights to manually trigger patch 
> builds (to be able to re-run)
> [https://jenkins.ovirt.org/gerrit_manual_trigger/|https://jenkins.ovirt.org/gerrit_manual_trigger/]
> Thanks\!



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100108)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/EY7LIAWCKDYXUJOLWAXFIWI5CTCKEPB2/


[JIRA] (OVIRT-2789) Fwd: Any chance you can remove dp...@redhat.com from infra@ovirt.org ?

2019-09-03 Thread Barak Korren (oVirt JIRA)
Barak Korren created OVIRT-2789:
---

 Summary: Fwd: Any chance you can remove dp...@redhat.com from 
infra@ovirt.org ?
 Key: OVIRT-2789
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2789
 Project: oVirt - virtualization made easy
  Issue Type: By-EMAIL
Reporter: Barak Korren
Assignee: infra


Forwarding to infra-support, so Duck will see this.

-- Forwarded message -
From: Yaniv Kaul 
Date: Tue, 3 Sep 2019 at 16:34
Subject: Any chance you can remove dp...@redhat.com from infra@ovirt.org ?
To: Barak Korren 


He no longer works at Red Hat and neither does his manager - and I'm getting
those emails, without the ability to remove him (or her?).

TIA,
Y.



-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100108)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/EOMBEFYDBDL3KLJ4SQCKLZIH6BQNTYZQ/


[JIRA] (OVIRT-1945) Allow to keep running containers in docker_cleanup

2019-09-03 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-1945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-1945:

Components: docker_cleanup.py

> Allow to keep running containers in docker_cleanup
> --
>
> Key: OVIRT-1945
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1945
> Project: oVirt - virtualization made easy
>  Issue Type: Improvement
>  Components: docker_cleanup.py, oVirt CI, Standard CI (Freestyle), 
> Standard CI (Pipelines)
>Reporter: Daniel Belenky
>Assignee: infra
>  Labels: standard-ci
>
> docker_cleanup.py stops all running containers before it removes the images.
> We should make that optional, as well as allow whitelisting containers (see 
> the sketch below).
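
A sketch of what the whitelisting could look like with the Python {{docker}} 
client (the container name in the usage comment is made up):

{code}
import docker

def stop_containers_except(client, whitelist=frozenset()):
    # Stop running containers, keeping whitelisted names running.
    # client.containers.list() returns only running containers by default.
    for container in client.containers.list():
        if container.name not in whitelist:
            container.stop()

# e.g. stop_containers_except(docker.from_env(), {'registry-mirror'})
{code}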



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100108)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/GBGTZ7LWETEX4WOA4NONRMZUE7HQGZKC/


[JIRA] (OVIRT-2566) make network suite blocking

2019-09-01 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39763#comment-39763
 ] 

Barak Korren edited comment on OVIRT-2566 at 9/1/19 7:52 AM:
-

Not for specific patches - but we can easily enable it for ALL patches.

If we only want it for specific patches - we can consider allowing some 
customization of the Zuul configuration at project level to allow running 
specific suites for specific patches, but this will require some code changes 
in several places. We can plan this once we're in production with the current 
set of suites.


was (Author: bkor...@redhat.com):
Not for specific patches - but we can enable it for ALL patches.

We can consider allowing some customization of the Zuul configuration at 
project level to allow running specific suites for specific patches, but this 
will require some code changes in several places. We can plan this once we're 
in production with the current set of suites.

> make network suite blocking
> ---
>
> Key: OVIRT-2566
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2566
> Project: oVirt - virtualization made easy
>  Issue Type: Improvement
>  Components: OST
>Reporter: danken
>Assignee: infra
>Priority: High
>
> The network suite has been executing nightly for almost a year. It has a 
> caring team that tends to it, and it does not have false positives.
> Currently, it has been failing for a week on the 4.2 branch, but that is due 
> to a production code bug.
> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_network-suite-4.2/
> I would like to see this suite escalated to the importance of the basic 
> suite, making it a gating condition for marking a collection of packages as 
> "tested".
> [~gbenh...@redhat.com], what should be done to make this happen?



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100108)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/K4QZIHSNI25ZOBWIRXAP2J6CAYY2SKRW/


[JIRA] (OVIRT-2566) make network suite blocking

2019-09-01 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39763#comment-39763
 ] 

Barak Korren commented on OVIRT-2566:
-

Not for specific patches - but we can enable it for ALL patches.

We can consider allowing some customization of the Zuul configuration at 
project level to allow running specific suites for specific patches, but this 
will require some code changes in several places. We can plan this once we're 
in production with the current set of suites.

> make network suite blocking
> ---
>
> Key: OVIRT-2566
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2566
> Project: oVirt - virtualization made easy
>  Issue Type: Improvement
>  Components: OST
>Reporter: danken
>Assignee: infra
>Priority: High
>
> The network suite has been executing nightly for almost a year. It has a 
> caring team that tends to it, and it does not have false positives.
> Currently, it has been failing for a week on the 4.2 branch, but that is due 
> to a production code bug.
> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_network-suite-4.2/
> I would like to see this suite escalated to the importance of the basic 
> suite, making it a gating condition for marking a collection of packages as 
> "tested".
> [~gbenh...@redhat.com], what should be done to make this happen?



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100108)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/2HSF22UINFLBXE243SP6K7S5ABITO7GQ/


[JIRA] (OVIRT-2765) Jenkins builds not running "All nodes of label ‘loader-container’ are offline"

2019-07-29 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39599#comment-39599
 ] 

Barak Korren commented on OVIRT-2765:
-

For some reason the port Jenkins is using to talk to the containers was closed 
- I updated the Jenkins configuration to re-open it and now we can see working 
containers again.

[~ederevea] do we know why the port was closed all of a sudden?

> Jenkins builds not running "All nodes of label ‘loader-container’ are offline"
> --
>
> Key: OVIRT-2765
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2765
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Scott Dickerson
>Assignee: infra
>
> Builds for at least ovirt-engine-ui-extensions [1] and
> ovirt-engine-nodejs-modules [2] are blocked with an error like, "All nodes
> of label ‘loader-container’ are offline".  Looks like they're all broken
> [3].
> Please help!
> [1]
> https://jenkins.ovirt.org/job/ovirt-engine-ui-extensions_standard-check-patch/126/console
> [2]
> https://jenkins.ovirt.org/job/ovirt-engine-nodejs-modules_standard-check-patch/74/console
> [3] https://jenkins.ovirt.org/label/loader-container/
> -- 
> Scott Dickerson
> Senior Software Engineer
> RHV-M Engineering - UX Team
> Red Hat, Inc



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100106)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/YXSCREPZY2SKHGWG3LI3KTWRI62X2QGN/


[JIRA] (OVIRT-2443) Make sure that big containers KubeVirt CI uses are cached on hosts

2019-07-04 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2443:

Resolution: Fixed
Status: Done  (was: To Do)

The blocking patch was merged a long time ago - and [~eyon...@redhat.com] 
verified that we have the right images in the cache.

> Make sure that big containers KubeVirt CI uses are cached on hosts
> -
>
> Key: OVIRT-2443
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2443
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>  Components: CI client projects
>Reporter: Barak Korren
>Assignee: infra
>
> Make sure that big containers KubeVirt CI uses are cached on hosts



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100105)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/IYWF4ZDGBDZZH3BESR7ZWLJR4R7B2PBY/


[JIRA] (OVIRT-914) Better arch support for mock_runner.sh

2019-07-04 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-914:
---
Resolution: Won't Fix
Status: Done  (was: To Do)

Well, we have one place where this would have been useful (we have a custom 
packages file for s390x in the jenkins project).

We managed to do without this so far because of the added flexibility V2 gave 
us.

> Better arch support for mock_runner.sh
> --
>
> Key: OVIRT-914
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-914
> Project: oVirt - virtualization made easy
>  Issue Type: Improvement
>  Components: mock_runner
>Reporter: Barak Korren
>Assignee: infra
>  Labels: mock_runner.sh, standard-ci
>
> We managed to use "{{mock_runner.sh}}" in multi-arch so far because it was 
> flexible enough to allow us to select the chroot file.
> The issue is that mock_runner does not actually *know* the arch we are 
> running on, so we can't (see the sketch after this list):
> * do different mounts per-arch
> * install different packages per-arch
> * have different {{check_*}} scripts per-arch
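
For context, a minimal sketch (in Python, for illustration only - 
mock_runner.sh is a shell script, and the file-naming scheme here is 
hypothetical) of the kind of per-arch fallback the above would need:

{code}
import os
import platform

def pick_arch_file(basename, arch=None):
    # Prefer e.g. "automation/check-patch.packages.s390x" when it exists,
    # falling back to the arch-agnostic file.
    arch = arch or platform.machine()  # 'x86_64', 'ppc64le', 's390x', ...
    candidate = '{0}.{1}'.format(basename, arch)
    return candidate if os.path.exists(candidate) else basename
{code}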



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100105)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/4ABND6OWIRK7HSX2WDGCRE4KVZ5WQKLJ/


[JIRA] (OVIRT-1396) Add a new 'test-system-artifacts' Standard-CI stage

2019-07-04 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-1396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-1396:

Resolution: Won't Fix
Status: Done  (was: To Do)

This was part of the ovirt-containers CI design, and of a longer-term plan to 
enable properly handling node/appliance in the CQ. Since we're now working on 
gating to retire the CQ, we're probably not going to implement this.

> Add a new 'test-system-artifacts' Standard-CI stage
> ---
>
> Key: OVIRT-1396
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1396
> Project: oVirt - virtualization made easy
>  Issue Type: New Feature
>  Components: Standard CI (Pipelines)
>Reporter: Barak Korren
>Assignee: infra
>
> This is a part of [containers 
> CI/CD|https://docs.google.com/a/redhat.com/document/d/1mEo3E0kRvlUWT9VaSPKDeG5rXBVXgSaHHQfvZ1la41E/edit?usp=sharing]
>  flow implementation process.
> In order to allow building and testing processes for containers to be 
> triggered after the packages they need are built, we will introduce 
> the "{{test-system-artifacts}}" standard-CI stage.
> This stage will be invoked from the 'experimental' or 'change-queue-tester' 
> pipelines just like the existing OST-based flows.
> In order to provide package and repo information to the std-CI script invoked 
> by this stage we will need to implement OVIRT-1391



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100105)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/ZKYZRHPTWJI4FTGJFWECRMYAKGEN723U/


[JIRA] (OVIRT-2230) Checkout using prow as a GitHub triggering mechanism

2019-07-04 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2230:

Resolution: Won't Fix
Status: Done  (was: To Do)

That was an attempt to integrate Prow with STDCI - but KubeVirt decided to go 
100% Prow.

> Checkout using prow as a GitHub triggering mechanism
> 
>
> Key: OVIRT-2230
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2230
> Project: oVirt - virtualization made easy
>  Issue Type: New Feature
>  Components: Jenkins Master
>Reporter: Barak Korren
>Assignee: infra
>
> [Prow|https://github.com/kubernetes/test-infra/tree/master/prow] is the 
> service that Kubernetes are using to trigger their CI on GitHub events.
> We should inspect it and see if it would be useful for us to adopt it.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100105)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/WZPK2NSFHADLEAQEF5IBES5D7WG6BOZQ/


[JIRA] (OVIRT-1984) Create "out-of-band" slave cleanup and setup jobs

2019-07-04 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-1984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-1984:

Resolution: Won't Fix
Status: Done  (was: To Do)

> Create "out-of-band" slave cleanup and setup jobs
> -
>
> Key: OVIRT-1984
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1984
> Project: oVirt - virtualization made easy
>  Issue Type: New Feature
>  Components: Jenkins Slaves
>Reporter: Barak Korren
>Assignee: infra
>
> Right now, we run slave cleanup and setup steps as part of every single job 
> we run. This has several shortcomings:
> # It takes a long time from the point a user submits a patch to the point 
> their actual test or build code runs
> # If slave setup or cleanup steps fail - they fail the whole job for the user
> # If slave setup or cleanup steps fail - they can keep failing for many jobs 
> until the CI team intervenes manually
> # There is a "chicken and an egg" issue where some parts of the CI code have 
> to run before the slave has been properly cleaned up and configured. This 
> makes it harder to add new slaves to the system.
> Here is a suggested scheme to fix all this (the scheduler logic is sketched 
> below):
> # Label all slaves that should be cleaned up automatically as 'cleanable'. 
> This is mostly to prevent the jobs described here from operating on the 
> master node.
> # Have a "cleanup scheduler" job that finds all slaves labelled as 
> "cleanable" but not as "dirty" or "clean", labels them as "dirty" and runs a 
> cleanup job on them.
> # Have a "cleanup" job that is triggered on particular slaves by the "cleanup 
> scheduler" job, runs cleanup and setup steps on them and then labels them as 
> "clean" and removes the "dirty" label.
> # Have all other CI jobs only use slaves with the "clean" label.
> Notes:
> # The "dirty" label is there to make the "cleanup scheduler" job not trigger 
> twice on the same slave before the"cleanup" job started cleaning it up.
> # Since all slaves used by the real jobs will always be clean - there will no 
> longer be a need to run cleanup steps in the real jobs, thus saving time.
> # If cleanup steps fail - the cleanup job will fail and the slave will not be 
> marked as "clean" so real jobs will never try to use it.
> # To solve the "chicken and egg" issue, the cleanup job probably must be a 
> FreeStyle job and all the cleanup and setup code must be embedded into it by 
> JJB. This will probably require a newer version of JJB than what we have, so 
> setting OVIRT-1983 as a blocker.
> # There is an issue of how to make CI for this - if cleanup and setup steps 
> are removed from the normal STDCI jobs, they will not be checked by the 
> "check-patch" job of the "jenkins" repo. Here is a suggested scheme to solve 
> this:
> ## Have a way to "loan" slaves from the production jenkins to other Jenkins 
> instances - this could be done by having a job that starts up the Jenkins 
> JNLP client and tells it to connect to another Jenkins master.
> ## As part of the "check-patch" job for the 'jenkins' repo - start a Jenkins 
> master in a container - attach some production slaves to it and have it run 
> cleanup and setup steps on them  
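
A minimal sketch of the label state machine described in the scheme above; the 
Jenkins-facing helpers are hypothetical stubs, not a real API:

{code}
def get_labels(slave):
    raise NotImplementedError  # read the slave's label set from the master

def set_labels(slave, labels):
    raise NotImplementedError  # write the label set back to the master

def trigger_cleanup(slave):
    raise NotImplementedError  # queue the cleanup job pinned to this slave

def schedule_cleanups(slaves):
    # The "cleanup scheduler" job: mark eligible slaves "dirty" first, so
    # a second scheduler pass cannot trigger cleanup twice on one slave.
    for slave in slaves:
        labels = get_labels(slave)
        if 'cleanable' in labels and not labels & {'dirty', 'clean'}:
            set_labels(slave, labels | {'dirty'})
            trigger_cleanup(slave)

def on_cleanup_success(slave):
    # The "cleanup" job's last step: only now may real jobs use the slave.
    labels = get_labels(slave)
    set_labels(slave, (labels - {'dirty'}) | {'clean'})
{code}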



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100105)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/YPHRXKQXXYYLTQDOPDQ2VGJBLH52NX4W/


[JIRA] (OVIRT-2178) "Borrow" slaves from CentOS CI

2019-07-04 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2178:

Resolution: Won't Fix
Status: Done  (was: To Do)

> "Borrow" slaves from CentOS CI
> --
>
> Key: OVIRT-2178
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2178
> Project: oVirt - virtualization made easy
>  Issue Type: Epic
>Reporter: Barak Korren
>Assignee: infra
>
> [CentOS CI|https://wiki.centos.org/QaWiki/CI] is a generic shared platform 
> for building CI service for Open Source projects.
> Among other things, CentOS CI makes physical and virtual hosts available for 
> running CI processes via the [Duffy|http://wiki.centos.org/QaWiki/CI/Duffy] 
> system.
> We should make oVirt CI able to consume resources from CentOS CI to augment 
> and someday replace the hardware resources available to oVirt CI.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100105)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/OTAGGGVQP77BSCTGUWQPQBOSFSJ35234/


[JIRA] (OVIRT-886) Yum install does not throw error on missing package

2019-07-04 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39489#comment-39489
 ] 

Barak Korren commented on OVIRT-886:


Reopening ticket - issue still relevant.

> Yum install does not throw error on missing package
> ---
>
> Key: OVIRT-886
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-886
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>  Components: mock_runner
>Reporter: Gil Shinar
>Assignee: infra
>  Labels: mock_runner.sh, standard-ci
>
> When running el7 mock on fc24 (not necessarily the issue) and one of the 
> required packages is missing because a repository in .repos hadn't been 
> added, yum will not fail and the package will not be installed
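
Until mock_runner grows such a check, a sketch of the usual workaround - 
install, then verify each package with {{rpm -q}} (function name is 
illustrative):

{code}
import subprocess

def yum_install_checked(packages):
    # yum can exit 0 even when a requested package was silently skipped,
    # so verify each one with rpm -q after the install.
    subprocess.check_call(['yum', 'install', '-y'] + list(packages))
    missing = [pkg for pkg in packages
               if subprocess.call(['rpm', '-q', pkg],
                                  stdout=subprocess.DEVNULL,
                                  stderr=subprocess.DEVNULL) != 0]
    if missing:
        raise RuntimeError('packages not installed: ' + ', '.join(missing))
{code}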



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100105)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/TL6AL7JA4G4VSONSHSGPK47CHPPPYCFS/


[JIRA] (OVIRT-886) Yum install does not throw error on missing package

2019-07-04 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-886:
---
Status: To Do  (was: In Progress)

> Yum install does not throw error on missing package
> ---
>
> Key: OVIRT-886
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-886
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>  Components: mock_runner
>Reporter: Gil Shinar
>Assignee: infra
>  Labels: mock_runner.sh, standard-ci
>
> When running el7 mock on fc24 (not necessarily the issue) and one of the 
> required packages is missing because a repository in .repos hadn't been 
> added, yum will not fail and the package will not be installed



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100105)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/2AERKKR2TMMRNX2TKUJA3EBCGN5VEYUP/


[JIRA] (OVIRT-886) Yum install does not throw error on missing package

2019-07-04 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren reopened OVIRT-886:


> Yum install does not throw error on missing package
> ---
>
> Key: OVIRT-886
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-886
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>  Components: mock_runner
>Reporter: Gil Shinar
>Assignee: infra
>  Labels: mock_runner.sh, standard-ci
>
> When running el7 mock on fc24 (not necessarily the issue) and one of the 
> required packages is missing because a repository in .repos hadn't been 
> added, yum will not fail and the package will not be installed



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100105)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/CHXU7X7GV6K4LZ5G4FVZ2H7S6555ZH4F/


[JIRA] (OVIRT-2744) Upgrade oVirt's OpenShift instance to OKD 4.x

2019-06-25 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren reassigned OVIRT-2744:
---

Assignee: Evgheni Dereveanchin  (was: infra)

> Upgrade oVirt's OpenShift instance to OKD 4.x
> -
>
> Key: OVIRT-2744
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2744
> Project: oVirt - virtualization made easy
>  Issue Type: New Feature
>  Components: OpenShift
>Reporter: Barak Korren
>Assignee: Evgheni Dereveanchin
>
> Reasons to do this:
> * Automated OKD upgrades
> * Support for OpenShift pipelines (Knative/Tekton)
> Issues we need to solve:
> * The OS for the OKD nodes (CentOS CoreOS 8.x not released yet)
> * OKD Installer support for oVirt ([~rgo...@redhat.com]'s patches not merged 
> & released yet)



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100105)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/LZUNPT3REOX4VDEDHY4763QIWZTWCATY/


[JIRA] (OVIRT-2744) Upgrade oVirt's OpenShift instance to OKD 4.x

2019-06-25 Thread Barak Korren (oVirt JIRA)
Barak Korren created OVIRT-2744:
---

 Summary: Upgrade oVirt's OpenShift instance to OKD 4.x
 Key: OVIRT-2744
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2744
 Project: oVirt - virtualization made easy
  Issue Type: New Feature
  Components: OpenShift
Reporter: Barak Korren
Assignee: infra


Reasons to do this:
* Automated OKD upgrades
* Support for OpenShift pipelines (Knative/Tekton)

Issues we need to solve:
* The OS for the OKD nodes (CentOS CoreOS 8.x not released yet)
* OKD Installer support for oVirt ([~rgo...@redhat.com]'s patches not merged & 
released yet)




--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100105)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/UCVETTYPTZSHC74OPC4TPR7WSLX55Y4T/


[JIRA] (OVIRT-2742) ovirt-appliance build failure on missing module urllib.request

2019-06-24 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39425#comment-39425
 ] 

Barak Korren commented on OVIRT-2742:
-

This probably has more to do with changes in the CentOS repos, because we did 
not make any change to the CI infra that could cause this kind of impact.
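
The failure is the classic Python 2 vs. Python 3 import split: 
{{urllib.request}} only exists on Python 3. A minimal sketch of the usual 
compatibility hedge, assuming the appliance code only needs {{urlopen}}:

{code}
# Under the Python 2 interpreter that ran here, a bare
# "import urllib.request" fails exactly as in the log below.
try:
    from urllib.request import urlopen  # Python 3
except ImportError:
    from urllib2 import urlopen         # Python 2
{code}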

> ovirt-appliance build failure on missing module urllib.request
> --
>
> Key: OVIRT-2742
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2742
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: sbonazzo
>Assignee: infra
>
> *16:16:26* Traceback (most recent call last):
> *16:16:26*   File "scripts/create_ova.py", line 4, in <module>
> *16:16:26*     from imagefactory_plugins.ovfcommon.ovfcommon import RHEVOVFPackage
> *16:16:26*   File "/home/jenkins/workspace/ovirt-appliance_master_build-artifacts-el7-x86_64/ovirt-appliance/engine-appliance/imagefactory/imagefactory_plugins/ovfcommon/ovfcommon.py", line 29, in <module>
> *16:16:26*     from imgfac.PersistentImageManager import PersistentImageManager
> *16:16:26*   File "/home/jenkins/workspace/ovirt-appliance_master_build-artifacts-el7-x86_64/ovirt-appliance/engine-appliance/imagefactory/imgfac/PersistentImageManager.py", line 17, in <module>
> *16:16:26*     from .ApplicationConfiguration import ApplicationConfiguration
> *16:16:26*   File "/home/jenkins/workspace/ovirt-appliance_master_build-artifacts-el7-x86_64/ovirt-appliance/engine-appliance/imagefactory/imgfac/ApplicationConfiguration.py", line 24, in <module>
> *16:16:26*     import urllib.request
> *16:16:26* ImportError: No module named request
> Seen in
> https://jenkins.ovirt.org/job/ovirt-appliance_master_build-artifacts-el7-x86_64/1205/console
> https://jenkins.ovirt.org/job/ovirt-appliance_4.3_build-artifacts-el7-x86_64/118/console
> https://jenkins.ovirt.org/job/ovirt-appliance_4.2_build-artifacts-el7-x86_64/481/console
> -- 
> Sandro Bonazzola
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA 
> sbona...@redhat.com
> 



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100105)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/WTI37VQ2GMIAXRX35KXWN4YZ33OW2OJF/


[JIRA] (OVIRT-2741) Most el7 vdsm build fail to fetch source - bad slave?

2019-06-24 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren reassigned OVIRT-2741:
---

Assignee: Barak Korren  (was: infra)

> Most el7 vdsm build fail to fetch source - bad slave?
> -
>
> Key: OVIRT-2741
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2741
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: Barak Korren
>
> I have seen many such failures today; here are 2 builds - both on the same 
> slave (a slave issue?)
> 1. [2019-06-20T22:23:12.401Z] Running on node: vm0015.workers-phx.ovirt.org
> (el7 phx nested local_disk)
> https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/6511/pipeline
> 2. [2019-06-20T22:20:50.288Z] Running on node: vm0015.workers-phx.ovirt.org
> (el7 phx nested local_disk)
> https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/6509/pipeline
> Looks like there was no build on this slave in the last 2 days:
> https://jenkins.ovirt.org/computer/vm0015.workers-phx.ovirt.org/builds
> The failure looks like this:
> [2019-06-20T22:23:16.416Z] No credentials specified
> [2019-06-20T22:23:16.440Z] Fetching changes from the remote Git repository
> [2019-06-20T22:23:16.456Z] Cleaning workspace
> [2019-06-20T22:23:16.566Z] ERROR: Error fetching remote repo 'origin'
> [2019-06-20T22:23:16.566Z] hudson.plugins.git.GitException: Failed to fetch
> from https://gerrit.ovirt.org/vdsm
> [2019-06-20T22:23:16.566Z] at
> hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:894)
> [2019-06-20T22:23:16.566Z] at
> hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1161)
> [2019-06-20T22:23:16.566Z] at
> hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)
> [2019-06-20T22:23:16.566Z] at
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:120)
> [2019-06-20T22:23:16.566Z] at
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:90)
> [2019-06-20T22:23:16.566Z] at
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:77)
> [2019-06-20T22:23:16.566Z] at
> org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
> [2019-06-20T22:23:16.566Z] at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> [2019-06-20T22:23:16.566Z] at
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> [2019-06-20T22:23:16.566Z] at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> [2019-06-20T22:23:16.566Z] at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> [2019-06-20T22:23:16.566Z] at java.lang.Thread.run(Thread.java:748)
> [2019-06-20T22:23:16.566Z] Caused by: hudson.plugins.git.GitException:
> Command "git clean -fdx" returned status code 1:
> [2019-06-20T22:23:16.566Z] stdout:
> [2019-06-20T22:23:16.566Z] stderr: warning: failed to remove
> tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage___init___py.html
> [2019-06-20T22:23:16.566Z] warning: failed to remove
> tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_asyncevent_py.html
> [2019-06-20T22:23:16.566Z] warning: failed to remove
> tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_asyncutils_py.html
> [2019-06-20T22:23:16.566Z] warning: failed to remove
> tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_blkdiscard_py.html
> [2019-06-20T22:23:16.566Z] warning: failed to remove
> tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_blockSD_py.html
> [2019-06-20T22:23:16.566Z] warning: failed to remove
> tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_blockVolume_py.html
> [2019-06-20T22:23:16.566Z] warning: failed to remove
> tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_blockdev_py.html
> [2019-06-20T22:23:16.566Z] warning: failed to remove
> tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_check_py.html
> [2019-06-20T22:23:16.566Z] warning: failed to remove
> tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_clusterlock_py.html
> [2019-06-20T22:23:16.566Z] warning: failed to remove
> tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_compat_py.html
> [2019-06-20T22:23:16.566Z] warning: failed to remove
> 

[JIRA] (OVIRT-2741) Most el7 vdsm build fail to fetch source - bad slave?

2019-06-24 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39422#comment-39422
 ] 

Barak Korren edited comment on OVIRT-2741 at 6/24/19 7:39 AM:
--

This is the real cause of failure:

{code}
[2019-06-20T22:20:54.330Z] warning: failed to remove 
tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_asyncevent_py.html

[2019-06-20T22:20:54.330Z] warning: failed to remove 
tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_asyncutils_py.html

[2019-06-20T22:20:54.330Z] warning: failed to remove 
tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_blkdiscard_py.html

[2019-06-20T22:20:54.330Z] warning: failed to remove 
tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_blockSD_py.html
{code}

It means we have root-owned files in the VDSM workspace on the slave - this is 
caused by people manually aborting jobs before the cleanup code can run.

I see the slave is clean now; it probably got cleaned by a job for a different 
project that did not stumble on the vdsm files.

Going to close this ticket, since this is not really an ongoing infra issue.



was (Author: bkor...@redhat.com):
This is the real cause of failure:

{code}
[2019-06-20T22:20:54.330Z] warning: failed to remove 
tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_asyncevent_py.html

[2019-06-20T22:20:54.330Z] warning: failed to remove 
tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_asyncutils_py.html

[2019-06-20T22:20:54.330Z] warning: failed to remove 
tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_blkdiscard_py.html

[2019-06-20T22:20:54.330Z] warning: failed to remove 
tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_blockSD_py.html
{code}

It means we have root-owned files in the VDSM workspace on the slave - this is 
caused by people manually aborting jobs before the cleanup code can run.

I see the slave is clean now, probally got cleaned by a job for a different 
project that did not stumble on the vdsm files.

Goind to close this ticket, since this is not really an ongoing infra issue.


> Most el7 vdsm build fail to fetch source - bad slave?
> -
>
> Key: OVIRT-2741
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2741
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: infra
>
> I have seen many such failures today; here are 2 builds - both on the same
> slave (slave issue?)
> 1. [2019-06-20T22:23:12.401Z] Running on node: vm0015.workers-phx.ovirt.org
> (el7 phx nested local_disk)
> https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/6511/pipeline
> 2. [2019-06-20T22:20:50.288Z] Running on node: vm0015.workers-phx.ovirt.org
> (el7 phx nested local_disk)
> https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/6509/pipeline
> Looks like there was no build on this slave in the last 2 days:
> https://jenkins.ovirt.org/computer/vm0015.workers-phx.ovirt.org/builds
> The failure looks like this:
> [2019-06-20T22:23:16.416Z] No credentials specified
> [2019-06-20T22:23:16.440Z] Fetching changes from the remote Git repository
> [2019-06-20T22:23:16.456Z] Cleaning workspace
> [2019-06-20T22:23:16.566Z] ERROR: Error fetching remote repo 'origin'
> [2019-06-20T22:23:16.566Z] hudson.plugins.git.GitException: Failed to fetch
> from https://gerrit.ovirt.org/vdsm
> [2019-06-20T22:23:16.566Z] at
> hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:894)
> [2019-06-20T22:23:16.566Z] at
> hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1161)
> [2019-06-20T22:23:16.566Z] at
> hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)
> [2019-06-20T22:23:16.566Z] at
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:120)
> [2019-06-20T22:23:16.566Z] at
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:90)
> [2019-06-20T22:23:16.566Z] at
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:77)
> [2019-06-20T22:23:16.566Z] at
> org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
> [2019-06-20T22:23:16.566Z] at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> [2019-06-20T22:23:16.566Z] at
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 

[JIRA] (OVIRT-2741) Most el7 vdsm build fail to fetch source - bad slave?

2019-06-24 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39422#comment-39422
 ] 

Barak Korren commented on OVIRT-2741:
-

This is the real cause of failure:

{code}
[2019-06-20T22:20:54.330Z] warning: failed to remove 
tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_asyncevent_py.html

[2019-06-20T22:20:54.330Z] warning: failed to remove 
tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_asyncutils_py.html

[2019-06-20T22:20:54.330Z] warning: failed to remove 
tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_blkdiscard_py.html

[2019-06-20T22:20:54.330Z] warning: failed to remove 
tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_blockSD_py.html
{code}

It means we have root-owned files in the VDSM workspace on the slave - this is 
caused by people manually aborting jobs before the cleanup code can run.

I see the slave is clean now, probably got cleaned by a job for a different 
project that did not stumble on the vdsm files.

Going to close this ticket, since this is not really an ongoing infra issue.


> Most el7 vdsm build fail to fetch source - bad slave?
> -
>
> Key: OVIRT-2741
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2741
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: infra
>
> I have seen many such failures today; here are 2 builds - both on the same
> slave (slave issue?)
> 1. [2019-06-20T22:23:12.401Z] Running on node: vm0015.workers-phx.ovirt.org
> (el7 phx nested local_disk)
> https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/6511/pipeline
> 2. [2019-06-20T22:20:50.288Z] Running on node: vm0015.workers-phx.ovirt.org
> (el7 phx nested local_disk)
> https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/6509/pipeline
> Looks like there was no build on this slave in the last 2 days:
> https://jenkins.ovirt.org/computer/vm0015.workers-phx.ovirt.org/builds
> The failure looks like this:
> [2019-06-20T22:23:16.416Z] No credentials specified
> [2019-06-20T22:23:16.440Z] Fetching changes from the remote Git repository
> [2019-06-20T22:23:16.456Z] Cleaning workspace
> [2019-06-20T22:23:16.566Z] ERROR: Error fetching remote repo 'origin'
> [2019-06-20T22:23:16.566Z] hudson.plugins.git.GitException: Failed to fetch
> from https://gerrit.ovirt.org/vdsm
> [2019-06-20T22:23:16.566Z] at
> hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:894)
> [2019-06-20T22:23:16.566Z] at
> hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1161)
> [2019-06-20T22:23:16.566Z] at
> hudson.plugins.git.GitSCM.checkout(GitSCM.java:1192)
> [2019-06-20T22:23:16.566Z] at
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:120)
> [2019-06-20T22:23:16.566Z] at
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:90)
> [2019-06-20T22:23:16.566Z] at
> org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:77)
> [2019-06-20T22:23:16.566Z] at
> org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
> [2019-06-20T22:23:16.566Z] at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> [2019-06-20T22:23:16.566Z] at
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> [2019-06-20T22:23:16.566Z] at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> [2019-06-20T22:23:16.566Z] at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> [2019-06-20T22:23:16.566Z] at java.lang.Thread.run(Thread.java:748)
> [2019-06-20T22:23:16.566Z] Caused by: hudson.plugins.git.GitException:
> Command "git clean -fdx" returned status code 1:
> [2019-06-20T22:23:16.566Z] stdout:
> [2019-06-20T22:23:16.566Z] stderr: warning: failed to remove
> tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage___init___py.html
> [2019-06-20T22:23:16.566Z] warning: failed to remove
> tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_asyncevent_py.html
> [2019-06-20T22:23:16.566Z] warning: failed to remove
> tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_asyncutils_py.html
> [2019-06-20T22:23:16.566Z] warning: failed to remove
> tests/htmlcov-storage-py27/_home_jenkins_workspace_vdsm_standard-check-patch_vdsm_lib_vdsm_storage_blkdiscard_py.html
> [2019-06-20T22:23:16.566Z] warning: 

[JIRA] (OVIRT-2742) ovirt-appliance build failure on missing module urllib.request

2019-06-24 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39420#comment-39420
 ] 

Barak Korren commented on OVIRT-2742:
-

[~sbona...@redhat.com] why do you think this has to do with the CI infra as 
opposed to simply a needed update to the {{*.packages}} files or the 
requirements in the specfiles?

Looking at the build script log, I see it already mostly eschews the CI 
system's ability to fetch dependencies and instead fetches them on its own 
(probably bypassing all the caches and mirrors in the process). 
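
As a side note, the traceback itself is a Python 2 vs. Python 3 problem: 
{{urllib.request}} only exists on Python 3. A hedged sketch of a fail-fast 
guard the build script could add (the interpreter choice is an assumption, 
not something the CI defines):

{code}
# Hedged sketch: abort early with a clear message if the imagefactory
# code is about to run under Python 2, where urllib.request is missing.
if ! python3 -c 'import urllib.request' 2>/dev/null; then
    echo "ERROR: python3 with urllib.request is required" >&2
    exit 1
fi
python3 scripts/create_ova.py
{code}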

> ovirt-appliance build failure on missing module urllib.request
> --
>
> Key: OVIRT-2742
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2742
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: sbonazzo
>Assignee: infra
>
> 16:16:26 Traceback (most recent call last):
>   File "scripts/create_ova.py", line 4, in <module>
>     from imagefactory_plugins.ovfcommon.ovfcommon import RHEVOVFPackage
>   File "/home/jenkins/workspace/ovirt-appliance_master_build-artifacts-el7-x86_64/ovirt-appliance/engine-appliance/imagefactory/imagefactory_plugins/ovfcommon/ovfcommon.py",
> line 29, in <module>
>     from imgfac.PersistentImageManager import PersistentImageManager
>   File "/home/jenkins/workspace/ovirt-appliance_master_build-artifacts-el7-x86_64/ovirt-appliance/engine-appliance/imagefactory/imgfac/PersistentImageManager.py",
> line 17, in <module>
>     from .ApplicationConfiguration import ApplicationConfiguration
>   File "/home/jenkins/workspace/ovirt-appliance_master_build-artifacts-el7-x86_64/ovirt-appliance/engine-appliance/imagefactory/imgfac/ApplicationConfiguration.py",
> line 24, in <module>
>     import urllib.request
> ImportError: No module named request
> Seen in
> https://jenkins.ovirt.org/job/ovirt-appliance_master_build-artifacts-el7-x86_64/1205/console
> https://jenkins.ovirt.org/job/ovirt-appliance_4.3_build-artifacts-el7-x86_64/118/console
> https://jenkins.ovirt.org/job/ovirt-appliance_4.2_build-artifacts-el7-x86_64/481/console
> -- 
> Sandro Bonazzola
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA 
> sbona...@redhat.com
> 



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100105)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/HT7ZIOCNEWLSI5ZCGYKP4CO2YN6IFRUX/


[JIRA] (OVIRT-2739) Builds on s390x fail again

2019-06-19 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren reassigned OVIRT-2739:
---

Assignee: Barak Korren  (was: infra)

> Builds on s390x fail again
> --
>
> Key: OVIRT-2739
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2739
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: Barak Korren
>
> I cannot see any reason for the failure in the pipeline logs, and there are
> no
> build logs.
> https://jenkins.ovirt.org/job/standard-manual-runner/329/
> https://jenkins.ovirt.org/job/standard-manual-runner/329//artifact/build-artifacts.build-py27.fc29.s390x/mock_logs/script/stdout_stderr.log
> The el7 x86_64 repo was created, so I can still use this build to run OST,
> but we need to fix this
> quickly, or disable the s390x builds until we have a stable solution.
> Nir



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100104)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/JUFNK7H322HJF73OOQNOYKD4RI2BVQHE/


[JIRA] (OVIRT-2737) job failing on mock runner configuration missing

2019-06-06 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39396#comment-39396
 ] 

Barak Korren commented on OVIRT-2737:
-

Here is the likely culprit:

{code}
[2019-06-06T12:49:23.527Z] make[1]: Leaving directory 
`/home/jenkins/workspace/ovirt-imageio_standard-check-patch/ovirt-imageio/proxy'
[2019-06-06T12:49:23.527Z] + git clean -fdx
[2019-06-06T12:49:24.542Z] Removing .cache/
...
[2019-06-06T12:49:24.542Z] Removing mocker-epel-7-x86_64.el7.cfg
...
{code}

This is from the STDCI script logfile:
https://jenkins.ovirt.org/blue/rest/organizations/jenkins/pipelines/ovirt-imageio_standard-check-patch/runs/1288/nodes/98/steps/319/log/?start=0
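
If that is indeed the sequence of events, a minimal workaround sketch for the 
project's check-patch script would be to exclude the generated configs from 
the clean (assuming the {{mocker-*}} naming stays stable):

{code}
# Hedged sketch: clean the workspace but keep mock_runner's generated
# config files so mock_runner.sh can still find them during its cleanup.
git clean -fdx -e 'mocker-*'
{code}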

> job failing on mock runner configuration missing
> 
>
> Key: OVIRT-2737
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2737
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: sbonazzo
>Assignee: infra
>
> looking at
> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-imageio_standard-check-patch/detail/ovirt-imageio_standard-check-patch/1288/pipeline/98
> and seems it's failing on missing mock runner configuration.
> Discussing with Dafna she thinks basically mocker-epel-7-ppc64le.el7.cfg
> shouldn't exist and the failure needs to be investigated.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100103)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/CVRE7MZB43YJ2XB6TXZRMUW4PHCV2QEZ/


[JIRA] (OVIRT-2737) job failing on mock runner configuration missing

2019-06-06 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39395#comment-39395
 ] 

Barak Korren commented on OVIRT-2737:
-

The {{mocker-*}} file is generated on the fly by {{mock_runner.sh}} - it should 
very much exist for the duration of its run.

It seems to exist just fine while the mock environment is initialized and when 
the STDCI script is triggered inside it.

My guess would be that something inside the STDCI script runs something like 
{{git clean}} and causes the file to be removed by the time {{mock_runner.sh}} 
tries to clean up after itself.

> job failing on mock runner configuration missing
> 
>
> Key: OVIRT-2737
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2737
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: sbonazzo
>Assignee: infra
>
> looking at
> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-imageio_standard-check-patch/detail/ovirt-imageio_standard-check-patch/1288/pipeline/98
> and seems it's failing on missing mock runner configuration.
> Discussing with Dafna she thinks basically mocker-epel-7-ppc64le.el7.cfg
> shouldn't exist and the failure needs to be investigated.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100103)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/QAJOHGWCGZWY23EATVY6WO5KZU335NOY/


[JIRA] (OVIRT-2736) Jenkins build artifacts fail on Fedora - global_setup[lago_setup] WARN: Lago directory missing

2019-05-30 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2736:

Resolution: Duplicate
Status: Done  (was: To Do)

Closing - this is a duplicate of OVIRT-2735

> Jenkins build artifacts fail on Fedora - global_setup[lago_setup] WARN: Lago 
> directory missing
> --
>
> Key: OVIRT-2736
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2736
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: infra
>
> Here a failed build:
> https://jenkins.ovirt.org/blue/organizations/jenkins/standard-manual-runner/detail/standard-manual-runner/302/pipeline
> The second failure today.
> I hope we can fix this quickly.
> If not we need to disable fedora builds for now.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100103)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/SC4ADUHMGSPOEC3UCINUGPJ5QBIFXMLG/


[JIRA] (OVIRT-2736) Jenkins build artifacts fail on Fedora - global_setup[lago_setup] WARN: Lago directory missing

2019-05-30 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39360#comment-39360
 ] 

Barak Korren commented on OVIRT-2736:
-

Please track issues on the ticket you already opened about this and avoid
spreading the information over multiple threads.

It's the s390x issue I already explained - it's running f30 atm, which is
causing some trouble because the `createrepo` package was dropped from it.

On Thu, 30 May 2019 at 15:41, Nir Soffer  wrote:

> Here a failed build:
>
> https://jenkins.ovirt.org/blue/organizations/jenkins/standard-manual-runner/detail/standard-manual-runner/302/pipeline
>
> The second failure today.
>
> I hope we can fix this quickly.
>
> If not we need to disable fedora builds for now.
>


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

> Jenkins build artifacts fail on Fedora - global_setup[lago_setup] WARN: Lago 
> directory missing
> --
>
> Key: OVIRT-2736
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2736
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: infra
>
> Here a failed build:
> https://jenkins.ovirt.org/blue/organizations/jenkins/standard-manual-runner/detail/standard-manual-runner/302/pipeline
> The second failure today.
> I hope we can fix this quickly.
> If not we need to disable fedora builds for now.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100103)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/AG3W7C634EQ6RST7H4NCNWPPRPNHVU4P/


[JIRA] (OVIRT-2735) Builds on s390x fail again, reason unknown

2019-05-30 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren reassigned OVIRT-2735:
---

Assignee: Barak Korren  (was: infra)

> Builds on s390x fail again, reason unknown
> --
>
> Key: OVIRT-2735
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2735
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: Barak Korren
>
> Here are 2 failed builds:
> https://jenkins.ovirt.org/job/standard-manual-runner/300/
> https://jenkins.ovirt.org/job/standard-manual-runner/301/
> There are no logs for the failure:
> https://jenkins.ovirt.org/job/standard-manual-runner/300//artifact/build-artifacts.build-py36.fc28.s390x/mock_logs/script/stdout_stderr.log
> https://jenkins.ovirt.org/job/standard-manual-runner/300//artifact/build-artifacts.build-py27.fc28.s390x/mock_logs/script/stdout_stderr.log
> This breaks my workflow, running OST before merging patches.
> Nir



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100103)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/FXNR7L6BHGKKKJW7CA4YKVX3BWMN7WVA/


[JIRA] (OVIRT-2735) Builds on s390x fail again, reason unknown

2019-05-30 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39353#comment-39353
 ] 

Barak Korren commented on OVIRT-2735:
-

The s390x slave is broken again; I'm working on a way to just 
blacklist the s390x builds on the fly when that happens.

Here is the patch that is going to do that:
https://gerrit.ovirt.org/c/100395
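
Conceptually the patch boils down to something like the following 
{{stdci.yaml}} fragment (key names follow STDCI V2 conventions; the exact 
mechanism in the linked patch may differ):

{code}
# Hypothetical stdci.yaml sketch: drop s390x from the build matrix
# while the slave is broken (the option names are illustrative).
stages:
  - check-patch
  - build-artifacts:
      architectures:
        - x86_64
        - ppc64le
        # s390x removed on the fly while the slave is down
{code}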

> Builds on s390x fail again, reason unknown
> --
>
> Key: OVIRT-2735
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2735
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: infra
>
> Here are 2 failed builds:
> https://jenkins.ovirt.org/job/standard-manual-runner/300/
> https://jenkins.ovirt.org/job/standard-manual-runner/301/
> There are no logs for the failure:
> https://jenkins.ovirt.org/job/standard-manual-runner/300//artifact/build-artifacts.build-py36.fc28.s390x/mock_logs/script/stdout_stderr.log
> https://jenkins.ovirt.org/job/standard-manual-runner/300//artifact/build-artifacts.build-py27.fc28.s390x/mock_logs/script/stdout_stderr.log
> This breaks my workflow, running OST before merging patches.
> Nir



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100103)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/LXSSTH2DJNBTDWIWSTHNRQXI7JS43CWQ/


[JIRA] (OVIRT-2337) Ensure emergency access to PHX production VMs

2019-05-29 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39352#comment-39352
 ] 

Barak Korren commented on OVIRT-2337:
-

We only have one access option; if SPICE is not working for some reason, we 
have no backup.

We need to ensure the other options are available as well, at least VNC, and 
having the console, which would probably be way faster than both, would be 
useful as well.
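
For libvirt-based hosts, the kind of check this boils down to looks roughly 
like this (the VM name is hypothetical):

{code}
# Hedged sketch: verify each production VM has more than one access path
# (VM name "prod-vm01" is hypothetical).
virsh domdisplay prod-vm01   # prints the configured graphical display URI
virsh console prod-vm01      # serial console - only works if a console
                             # device is defined in the domain XML
{code}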

> Ensure emergency access to PHX production VMs
> -
>
> Key: OVIRT-2337
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2337
> Project: oVirt - virtualization made easy
>  Issue Type: Epic
>  Components: oVirt Infra
>Reporter: Barak Korren
>Assignee: infra
>Priority: High
>
> Make sure that in case of malfunction we have multiple fail-safe ways of 
> gaining access to the PHX production VMs to resolve issues.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100103)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/INNYLSMJ4SODAY27LMFOEHWPV4QBAJUC/


[JIRA] (OVIRT-2730) Running sub stages using https://jenkins.ovirt.org/job/standard-manual-runner/

2019-05-19 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39336#comment-39336
 ] 

Barak Korren commented on OVIRT-2730:
-

On Fri, 17 May 2019 at 01:04, Nir Soffer (oVirt JIRA) <

I looked at the parameters screen for the refspec you passed to the manual
runner (refs/changes/07/17/8) and then simply checked out the patch with
`git review -d 17`. Looking at `stdci.yaml` there you can see it's quite
different from the one in the master branch.




So is commenting `ci test please` or `ci please test` or just `ci test` in
gerrit



Check out the patch and have a look: the CI system does not rebase before
testing, so it does not matter which branch you intend to merge into, only
which commit you started from when first creating the local branch you
wrote the patch in.




-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted


> Running sub stages using https://jenkins.ovirt.org/job/standard-manual-runner/
> --
>
> Key: OVIRT-2730
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2730
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: infra
>
> I tried to run check-patch and check-patch linters substage from the
> command line:
> using:
> https://github.com/nirs/oci/pull/37
> I assumed that this will run the entire check-patch stage, including all
> the sub stages:
> ./ovirt-ci run -s check-patch 17
> But it ran something that looks like check-patch with el7:
> https://jenkins.ovirt.org/blue/organizations/jenkins/standard-manual-runner/detail/standard-manual-runner/265/pipeline
> The test environment seems to be different from what we have when
> check-patch is triggered
> from gerrit.
> Then I tried to run the linters substage:
> ./ovirt-ci run -s check-patch.linters 17
> https://jenkins.ovirt.org/blue/organizations/jenkins/standard-manual-runner/detail/standard-manual-runner/267/pipeline
> ./ovirt-ci run -s check-patch.linters 17
> https://jenkins.ovirt.org/blue/organizations/jenkins/standard-manual-runner/detail/standard-manual-runner/266/pipeline
> Both succeeded without running anything :-)
> Running build-artifacts works, not sure about check-network.
> Can we fix the manual job so:
> - it runs exactly the same way as it runs when gerrit triggers the job
> - allow selection of a sub stage?
> - nice to have - allow selection of multiple sub stages
> Nir



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100102)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/GQEM6HHPYBOJ7VXAG2DH7CJLJTMLEDL5/


[JIRA] (OVIRT-2730) Running sub stages using https://jenkins.ovirt.org/job/standard-manual-runner/

2019-05-15 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39333#comment-39333
 ] 

Barak Korren commented on OVIRT-2730:
-

On Wed, 15 May 2019 at 18:58, Nir Soffer (oVirt JIRA) <


Looking at the stdci.yaml file I get when cloning the patch - I don't see
any substages or distros defined in it, so STDCI does what it can in that
case - run the default substage for the specified architectures that it can
find script files for.

Is this based on a very old version of vdsm's stdci.yaml?

You can see the exact same thing happening for check-patch triggered from
Gerrit:
https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/5461/pipeline

In fact, running check-patch with the manual runner is quite useless, since CI
will always run it for you anyway when you push the patch...




Because you can't select substages like that.



check-network would not work since its not defined in stdci.yaml for that
patch.




It runs the exact same code.

- allow selection of a sub stage?

Probably not gonna happen - this would require some deep changes.

- nice to have - allow selection of multiple sub stages

Same as above, you can only select a stage at this point.
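
To make that concrete: a substage only becomes selectable once it is declared 
in {{stdci.yaml}}. A minimal illustrative fragment (not vdsm's actual file, 
and key names are approximate) would look like:

{code}
# Hypothetical stdci.yaml fragment: declaring substages is what makes
# something like check-patch.linters exist at all.
stages:
  - check-patch:
      substages:
        - linters:
            script: automation/check-patch.linters.sh
        - tests
{code}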




-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted


> Running sub stages using https://jenkins.ovirt.org/job/standard-manual-runner/
> --
>
> Key: OVIRT-2730
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2730
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: infra
>
> I tried to run check-patch and check-patch linters substage from the
> command line:
> using:
> https://github.com/nirs/oci/pull/37
> I assumed that this will run the entire check-patch stage, including all
> the sub stages:
> ./ovirt-ci run -s check-patch 17
> But it ran something that looks like check-patch with el7:
> https://jenkins.ovirt.org/blue/organizations/jenkins/standard-manual-runner/detail/standard-manual-runner/265/pipeline
> The test environment seems to be different from what we have when
> check-patch is triggered
> from gerrit.
> Then I tried to run the linters substage:
> ./ovirt-ci run -s check-patch.linters 17
> https://jenkins.ovirt.org/blue/organizations/jenkins/standard-manual-runner/detail/standard-manual-runner/267/pipeline
> ./ovirt-ci run -s check-patch.linters 17
> https://jenkins.ovirt.org/blue/organizations/jenkins/standard-manual-runner/detail/standard-manual-runner/266/pipeline
> Both succeeded without running anything :-)
> Running build-artifacts works, not sure about check-network.
> Can we fix the manual job so:
> - it runs exactly the same way as it runs when gerrit triggers the job
> - allow selection of a sub stage?
> - nice to have - allow selection of multiple sub stages
> Nir



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100102)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/66K6A5N3JKBMRODKW5WGF4LUGQT5PR4I/


[JIRA] (OVIRT-2725) Build artifacts failed - not enough free space on file system

2019-05-14 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39318#comment-39318
 ] 

Barak Korren commented on OVIRT-2725:
-

We can't have sudo access on the s390x node - it's not owned by us.

> Build artifacts failed - not enough free space on file system
> -
>
> Key: OVIRT-2725
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2725
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>Reporter: Nir Soffer
>Assignee: infra
>
> I had this failure in yum install:
> [2019-05-12T14:54:55.706Z] Error Summary
> [2019-05-12T14:54:55.706Z] -
> [2019-05-12T14:54:55.706Z] Disk Requirements:
> [2019-05-12T14:54:55.706Z]At least 70MB more space needed on the /
> filesystem.
> Build:
> https://jenkins.ovirt.org/blue/organizations/jenkins/standard-manual-runner/detail/standard-manual-runner/247/pipeline
> Trying build again, hopefully will get a slave with more space...
> Nir



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100102)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/HR4SEHBUEVD3TKFFB2G3XJAJ3R7EN54O/


[JIRA] (OVIRT-2714) Increase limits on concurrent https://jenkins.ovirt.org/job/ovirt-system-tests_manual/ jobs

2019-04-10 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39242#comment-39242
 ] 

Barak Korren commented on OVIRT-2714:
-

[~eedri] we can probably move the manual job to use containers if we didn't 
already; that ought to let it have more executors available. 
[~gbenh...@redhat.com], [~dbele...@redhat.com] WDYT?

[~mskrivanek] please note that moving into containers will have about 30% 
impact on the job performance, so we're paying in latency for greater 
throughput...

> Increase limits on concurrent 
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/ jobs
> ---
>
> Key: OVIRT-2714
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2714
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>Reporter: Michal Skrivanek
>Assignee: infra
>
> This one is about manual OST runs
> cloned from OVIRT-2704
> ---
> We started to use this job to build artifacts for running OST.
> Looks like this job is limited to one concurrent job, which makes it less
> useful than
> it could be.
> If we don't make it easy to run OST, people will not run it, and the change
> queue
> will break.
> Please change the limit to use the same limit used for regular builds. We
> seem to be
> able to create more than 10 concurrent builds by uploading patches to
> gerrit. The manual
> runner should do this as well.
> Nir



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100099)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/7G2AITZEUUU36BMB2ZE7DCOMSG5K2X7Z/


[JIRA] (OVIRT-2706) how do we cache docker images in OST?

2019-04-01 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren reassigned OVIRT-2706:
---

Assignee: Barak Korren  (was: infra)

> how do we cache docker images in OST?
> -
>
> Key: OVIRT-2706
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2706
> Project: oVirt - virtualization made easy
>  Issue Type: Improvement
>  Components: docker_cleanup.py
>Reporter: Greg Sheremeta
>Assignee: Barak Korren
>
> Currently we download the selenium grid containers on every run of 008
> basic ui sanity, and the containers are huge. Can we cache them somewhere
> so OST doesn't need to download them every time? Please advise how and what
> we need to do to enable this. Thanks!
> Best wishes,
> Greg
> -- 
> GREG SHEREMETA
> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
> Red Hat NA
> 
> gsher...@redhat.com   IRC: gshereme
> 



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100099)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/BM4T2PY75BQIG3VNWWLVDGTFE2V647CK/


[JIRA] (OVIRT-2706) how do we cache docker images in OST?

2019-03-25 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2706:

Issue Type: Improvement  (was: By-EMAIL)

> how do we cache docker images in OST?
> -
>
> Key: OVIRT-2706
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2706
> Project: oVirt - virtualization made easy
>  Issue Type: Improvement
>Reporter: Greg Sheremeta
>Assignee: infra
>
> Currently we download the selenium grid containers on every run of 008
> basic ui sanity, and the containers are huge. Can we cache them somewhere
> so OST doesn't need to download them every time? Please advise how and what
> we need to do to enable this. Thanks!
> Best wishes,
> Greg
> -- 
> GREG SHEREMETA
> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
> Red Hat NA
> 
> gsher...@redhat.com   IRC: gshereme
> 



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100099)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/LHNTQJPI56ZZZDF62YGYVHGXFLCGL4TV/


[JIRA] (OVIRT-2706) how do we cache docker images in OST?

2019-03-25 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2706:

Component/s: docker_cleanup.py

> how do we cache docker images in OST?
> -
>
> Key: OVIRT-2706
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2706
> Project: oVirt - virtualization made easy
>  Issue Type: Improvement
>  Components: docker_cleanup.py
>Reporter: Greg Sheremeta
>Assignee: infra
>
> Currently we download the selenium grid containers on every run of 008
> basic ui sanity, and the containers are huge. Can we cache them somewhere
> so OST doesn't need to download them every time? Please advise how and what
> we need to do to enable this. Thanks!
> Best wishes,
> Greg
> -- 
> GREG SHEREMETA
> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
> Red Hat NA
> 
> gsher...@redhat.com   IRC: gshereme
> 



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100099)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/NQHFYW6ORSTOJAY7YE3MBS7KH3A5X4V4/


[JIRA] (OVIRT-2706) how do we cache docker images in OST?

2019-03-25 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39202#comment-39202
 ] 

Barak Korren commented on OVIRT-2706:
-

[~gsher...@redhat.com] The same way we cache any container - we have a whitelist 
of container images that the system leaves behind on the node it ran on.

If you can provide us with the list of selenium images we can make sure they 
are kept.
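
For illustration, the effect of the whitelist is roughly the following (the 
whitelist file name is hypothetical; the real logic lives in the CI cleanup 
code):

{code}
# Hedged sketch: remove all images except the whitelisted ones
# (assumed file "whitelisted_images.txt" with one "repo:tag" per line).
docker images --format '{{.Repository}}:{{.Tag}}' |
    grep -vxFf whitelisted_images.txt |
    xargs --no-run-if-empty docker rmi
{code}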

> how do we cache docker images in OST?
> -
>
> Key: OVIRT-2706
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2706
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Greg Sheremeta
>Assignee: infra
>
> Currently we download the selenium grid containers on every run of 008
> basic ui sanity, and the containers are huge. Can we cache them somewhere
> so OST doesn't need to download them every time? Please advise how and what
> we need to do to enable this. Thanks!
> Best wishes,
> Greg
> -- 
> GREG SHEREMETA
> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
> Red Hat NA
> 
> gsher...@redhat.com   IRC: gshereme
> 



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100099)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/LHIVGUK2OXIM5QIQMFOBKY2K7L3VBX2A/


[JIRA] (OVIRT-2695) [VDSM] build-artifacts on s390x broken again (bad mock/dnf cache?)

2019-03-10 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39155#comment-39155
 ] 

Barak Korren commented on OVIRT-2695:
-

Here:

https://docs.google.com/document/d/1UYwjJdZLvlLdbfV2izx4Qs8fjagG1FB-meqjOnChOuk/edit?usp=sharing

> [VDSM] build-artifacts on s390x broken again (bad mock/dnf cache?)
> --
>
> Key: OVIRT-2695
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2695
> Project: oVirt - virtualization made easy
>  Issue Type: Outage
>  Components: CI client projects, Jenkins Slaves
>Reporter: Nir Soffer
>Assignee: Barak Korren
>
> build-artifacts on s390x fail now with:
> [Errno 2] No such file or directory:
> '/var/cache/dnf/updates-07b7057ee4fded96/packages/systemd-238-12.git07f8cd5.fc28.s390x.rpm'
> Here are few example failures:
> https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/3727/pipeline
> https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/3732/pipeline
> This is going to block the change queue when we merge these patches,
> and we have 4.3 build soon.
> We had this issue a few weeks ago, and Barak fixed this by deleting caches on
> the slaves. I hope this can be fixed quickly
> this time.
> Nir



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100099)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/S3EDMUBQOV46UC744OFXZHQT3SOAEOKI/


[JIRA] (OVIRT-2695) [VDSM] build-artifacts on s390x broken again (bad mock/dnf cache?)

2019-03-10 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39153#comment-39153
 ] 

Barak Korren commented on OVIRT-2695:
-

[~eedri] already added it to [~lmilb...@redhat.com]'s KB doc

> [VDSM] build-artifacts on s390x broken again (bad mock/dnf cache?)
> --
>
> Key: OVIRT-2695
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2695
> Project: oVirt - virtualization made easy
>  Issue Type: Outage
>  Components: CI client projects, Jenkins Slaves
>Reporter: Nir Soffer
>Assignee: Barak Korren
>
> build-artifacts on s390x fail now with:
> [Errno 2] No such file or directory:
> '/var/cache/dnf/updates-07b7057ee4fded96/packages/systemd-238-12.git07f8cd5.fc28.s390x.rpm'
> Here are few example failures:
> https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/3727/pipeline
> https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/3732/pipeline
> This is going to block the change queue when we merge these patches,
> and we have 4.3 build soon.
> We had this issue a few weeks ago, and Barak fixed this by deleting caches on
> the slaves. I hope this can be fixed quickly
> this time.
> Nir



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100099)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/YZ3QUC2WSVUXWH46ZENCLTRTU3JAZRXV/


[JIRA] (OVIRT-2695) [VDSM] build-artifacts on s390x broken again (bad mock/dnf cache?)

2019-03-10 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2695:

Blocked By: user response
Status: Blocked  (was: To Do)

blocking ticket on user response

> [VDSM] build-artifacts on s390x broken again (bad mock/dnf cache?)
> --
>
> Key: OVIRT-2695
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2695
> Project: oVirt - virtualization made easy
>  Issue Type: Outage
>  Components: CI client projects, Jenkins Slaves
>Reporter: Nir Soffer
>Assignee: Barak Korren
>
> build-artifacts on s390x fail now with:
> [Errno 2] No such file or directory:
> '/var/cache/dnf/updates-07b7057ee4fded96/packages/systemd-238-12.git07f8cd5.fc28.s390x.rpm'
> Here are few example failures:
> https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/3727/pipeline
> https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/3732/pipeline
> This is going to block the change queue when we merge these patches,
> and we have 4.3 build soon.
> We had this issue a few weeks ago, and Barak fixed this by deleting caches on
> the slaves. I hope this can be fixed quickly
> this time.
> Nir



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100099)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/J67TBKAAFH3G4GNBJCMNX7LAKTLGMJW6/


[JIRA] (OVIRT-2695) [VDSM] build-artifacts on s390x broken again (bad mock/dnf cache?)

2019-03-10 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39150#comment-39150
 ] 

Barak Korren commented on OVIRT-2695:
-

[~nsof...@redhat.com] I cleared the caches, please check that your jobs pass now
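
For the record, "clearing the caches" amounts to roughly the following on the 
affected slave, run as a user that may remove these directories (default 
mock/dnf cache paths assumed; they may differ on that host):

{code}
# Hedged sketch: wipe stale mock chroots and dnf package caches that may
# reference RPMs no longer present on the mirrors.
rm -rf /var/cache/mock/*
rm -rf /var/cache/dnf/*
{code}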

> [VDSM] build-artifacts on s390x broken again (bad mock/dnf cache?)
> --
>
> Key: OVIRT-2695
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2695
> Project: oVirt - virtualization made easy
>  Issue Type: Outage
>  Components: CI client projects, Jenkins Slaves
>Reporter: Nir Soffer
>Assignee: Barak Korren
>
> build-artifacts on s390x fail now with:
> [Errno 2] No such file or directory:
> '/var/cache/dnf/updates-07b7057ee4fded96/packages/systemd-238-12.git07f8cd5.fc28.s390x.rpm'
> Here are few example failures:
> https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/3727/pipeline
> https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/3732/pipeline
> This is going to block the change queue when we merge these patches,
> and we have 4.3 build soon.
> We had this issue a few weeks ago, and Barak fixed this by deleting caches on
> the slaves. I hope this can be fixed quickly
> this time.
> Nir



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100099)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/H26EBJOZWI5AHIXP7NVCMFVZJB5H5WZ4/


[JIRA] (OVIRT-2695) [VDSM] build-artifacts on s390x broken again (bad mock/dnf cache?)

2019-03-10 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2695:

Component/s: CI client projects
 Jenkins Slaves

> [VDSM] build-artifacts on s390x broken again (bad mock/dnf cache?)
> --
>
> Key: OVIRT-2695
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2695
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>  Components: CI client projects, Jenkins Slaves
>Reporter: Nir Soffer
>Assignee: Barak Korren
>
> build-artifacts on s390x fail now with:
> [Errno 2] No such file or directory:
> '/var/cache/dnf/updates-07b7057ee4fded96/packages/systemd-238-12.git07f8cd5.fc28.s390x.rpm'
> Here are few example failures:
> https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/3727/pipeline
> https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/3732/pipeline
> This is going to block the change queue when we merge these patches,
> and we have 4.3 build soon.
> We had this issue a few weeks ago, and Barak fixed this by deleting caches on
> the slaves. I hope this can be fixed quickly
> this time.
> Nir



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100099)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/MISPDVYNZ2FCFICKTNSEW3MMAW2SKPLS/


[JIRA] (OVIRT-2695) [VDSM] build-artifacts on s390x broken again (bad mock/dnf cache?)

2019-03-10 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren updated OVIRT-2695:

Issue Type: Outage  (was: By-EMAIL)

> [VDSM] build-artifacts on s390x broken again (bad mock/dnf cache?)
> --
>
> Key: OVIRT-2695
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2695
> Project: oVirt - virtualization made easy
>  Issue Type: Outage
>  Components: CI client projects, Jenkins Slaves
>Reporter: Nir Soffer
>Assignee: Barak Korren
>
> build-artifacts on s390x fail now with:
> [Errno 2] No such file or directory:
> '/var/cache/dnf/updates-07b7057ee4fded96/packages/systemd-238-12.git07f8cd5.fc28.s390x.rpm'
> Here are few example failures:
> https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/3727/pipeline
> https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/3732/pipeline
> This is going to block the change queue when we merge these patches,
> and we have 4.3 build soon.
> We had this issue a few weeks ago, and Barak fixed this by deleting caches on
> the slaves. I hope this can be fixed quickly
> this time.
> Nir



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100099)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/IZ4JXL3MZTBQT6KPIPOI74TDIOEMEYX6/


[JIRA] (OVIRT-2695) [VDSM] build-artifacts on s390x broken again (bad mock/dnf cache?)

2019-03-10 Thread Barak Korren (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barak Korren reassigned OVIRT-2695:
---

Assignee: Barak Korren  (was: infra)

> [VDSM] build-artifacts on s390x broken again (bad mock/dnf cache?)
> --
>
> Key: OVIRT-2695
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2695
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Nir Soffer
>Assignee: Barak Korren
>
> build-artifacts on s390x fail now with:
> [Errno 2] No such file or directory:
> '/var/cache/dnf/updates-07b7057ee4fded96/packages/systemd-238-12.git07f8cd5.fc28.s390x.rpm'
> Here are few example failures:
> https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/3727/pipeline
> https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/3732/pipeline
> This is going to block the change queue when we merge these patches,
> and we have 4.3 build soon.
> We had this issue a few weeks ago, and Barak fixed this by deleting caches on
> the slaves. I hope this can be fixed quickly
> this time.
> Nir



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100099)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/VF77JT6QDF4Q6BCD4R7236VVAK5LKCMU/


[JIRA] (OVIRT-2252) The s390x slave is used in parallel by both the staging and the production CI systems

2019-03-10 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39149#comment-39149
 ] 

Barak Korren commented on OVIRT-2252:
-

We cannot; because of the way stdciv2 works, if we drop the slave, the jobs for 
the 'jenkins' repo will never finish.

I wouldn't want to just drop the 'jenkins' repo jobs, because that is the only 
way we have ATM to pre-test some things.

I think at this point the solution we should aim for is to containerize the 
s390x host; that would resolve this issue and would enable us to drop support 
for non-containerized hosts in the future. I actually had a chat with the 
host's maintainer a while ago, and he was willing to install docker there. The 
initiative died out because I did not have time to push it further.
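
As a rough sketch, containerizing would mean each CI system launches its 
builds in its own container instead of sharing the bare host (the image and 
script names below are hypothetical):

{code}
# Hedged sketch: per-system containers isolate ports, mounts and mock
# environments between staging and production.
docker run --rm \
    --name "stdci-${CI_SYSTEM:-production}-${BUILD_NUMBER:-0}" \
    -v "$WORKSPACE:/workspace" \
    ovirt/stdci-worker-s390x /workspace/automation/run_stage.sh
{code}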

> The s390x slave is used in parallel by both the staging and the production CI 
> systems
> -
>
> Key: OVIRT-2252
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2252
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>  Components: Jenkins Slaves
>Reporter: Barak Korren
>Assignee: infra
>
> Since we only have one s390x slave, it is currently attached to both the 
> staging and the production CI systems, and while they use separate user 
> accounts, it turns out this is not enough to isolate them from one another.
> There are several issues caused by this configuration:
> # Tests that allocate a fixed network port can fail if they are run by both 
> systems at the same time - this happens in practice when sending Python 
> patches to the '{{jenkins}}' repo because the {{mirror_client.py}} tests start 
> a web server on port 8675.
> # The {{mock_cleanup.sh}} script that is being run by one system can time out 
> trying to umount things from a mock environment that was created and is being 
> used by the other system.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100099)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/44Q6QHOXI4J4A4I7GZZQGN4TAWQ6AK7V/


[JIRA] (OVIRT-2001) Create FAQ/Knowledge Base/HOWTO pages for STDCI

2019-03-07 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39136#comment-39136
 ] 

Barak Korren commented on OVIRT-2001:
-

[~lmilb...@redhat.com] if we reached consensus about managing a KB in the 
google doc you've created, can you take the information that we collected in 
the comments here and put it there?

> Create FAQ/Knowledge Base/HOWTO pages for STDCI
> ---
>
> Key: OVIRT-2001
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2001
> Project: oVirt - virtualization made easy
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Barak Korren
>Assignee: infra
>  Labels: first_time_task
>
> There are many common use cases and issues in STDCI that could benefit from 
> case/task-specific documentation. So we should have a FAQ page or a Knowledge 
> Base about STDCI to cover these issues.
> Before we can make the page we need a critical mass (at least 3) of issues 
> and solutions, so we will use comments on this ticket to collect them as they 
> arise.



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100099)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/RYHIXT36F4SZ4R7Y56Z2HMGD3PBLOE4Q/

