Re: [oVirt Jenkins] ovirt-engine_3.6_upgrade-from-3.5_el6_merged - Build # 953 - Failure!

2016-03-09 Thread Gil Shinar
Hi,

There's the following error message:

+ engine-setup --config-append=/home/jenkins/workspace/ovirt-engine_3.6_upgrade-from-3.5_el6_merged/jenkins/jobs/ovirt-engine_upgrade_to_3.6/setup.file.otopi
[ INFO  ] Stage: Initializing
[ INFO  ] Stage: Environment setup
          Configuration files:
          ['/etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf',
           '/etc/ovirt-engine-setup.conf.d/10-packaging.conf',
           '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf',
           '/home/jenkins/workspace/ovirt-engine_3.6_upgrade-from-3.5_el6_merged/jenkins/jobs/ovirt-engine_upgrade_to_3.6/setup.file.otopi']
          Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20160309164051-bacxvr.log
          Version: otopi-1.4.0_master (otopi-1.4.0-0.0.master.20150821210045.gitbabbcae.el6)
[ INFO  ] Stage: Environment packages setup
[ ERROR ] Yum Cannot queue package iproute: Cannot find a valid baseurl for repo: base
[ ERROR ] Failed to execute stage 'Environment packages setup': Cannot find a valid baseurl for repo: base
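
The failure is yum on the slave being unable to resolve the stock CentOS
"base" repo, not engine-setup itself. A quick way to confirm on the slave
(a sketch, assuming the standard CentOS 6 yum layout; the mirrorlist URL
is the stock one):

    yum repolist enabled                   # does 'base' resolve at all?
    cat /etc/yum.repos.d/CentOS-Base.repo  # is baseurl/mirrorlist set?
    curl -sI 'http://mirrorlist.centos.org/?release=6&arch=x86_64&repo=os'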

What should I do?

Thanks
Gil

On 03/09/2016 06:42 PM, jenk...@ovirt.org wrote:
> Project: 
> http://jenkins.ovirt.org/job/ovirt-engine_3.6_upgrade-from-3.5_el6_merged/ 
> Build: 
> http://jenkins.ovirt.org/job/ovirt-engine_3.6_upgrade-from-3.5_el6_merged/953/
> Build Number: 953
> Build Status:  Failure
> Triggered By: Triggered by Gerrit: https://gerrit.ovirt.org/54552
>
> -
> Changes Since Last Success:
> -
> Changes for Build #953
> [Yedidyah Bar David] packaging: engine-backup: recreate dwh conf only if found
>
> [Tolik Litovsky] adding 3.6 jobs for imgbased and ngnode
>
> [Tolik Litovsky] stop building ovirt node for master branch
>
> [Tolik Litovsky] adding imgbased to 3.6 publisher
>
> [Tolik Litovsky] making appliance job being a periodic build
>
> [David Caro] Some small improvement to mock_runner
>
>
>
>
> -
> Failed Tests:
> -
> No tests ran. 
>
>
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: two otopi builds in a single jenkins job

2016-03-14 Thread Gil Shinar
Hi,

Investigating a bit, it does seem like a dirty environment: I saw
in the console log that it had only built the 1.5.0*.src.rpm, which means
the 1.4.2*.src.rpm files were already there before.
During the cleanup phase I can see the following:

15:55:47 Making sure there are no device mappings...
15:55:47 Removing the used loop devices...
15:55:48 losetup: invalid option -- 'D'
15:55:48
15:55:48 Usage:
15:55:48  losetup loop_device                             give info
15:55:48  losetup -a | --all                              list all used
15:55:48  losetup -d | --detach <loopdev> [<loopdev> ...] delete
15:55:48  losetup -f | --find                             find unused
15:55:48  losetup -c | --set-capacity <loopdev>           resize
15:55:48  losetup -j | --associated <file> [-o <num>]     list all associated with <file>
15:55:48  losetup [ options ] {-f|--find|loopdev} <file>  setup
15:55:48
15:55:48 Options:
15:55:48  -e | --encryption <type> enable data encryption with specified <name/num>
15:55:48  -h | --help              this help
15:55:48  -o | --offset <num>      start at offset <num> into file
15:55:48       --sizelimit <num>   loop limited to only <num> bytes of the file
15:55:48  -p | --pass-fd <num>     read passphrase from file descriptor <num>
15:55:48  -r | --read-only         setup read-only loop device
15:55:48       --show              print device name (with -f <file>)
15:55:48  -v | --verbose           verbose mode
15:55:48
15:55:48 [the same usage text is printed a second time]


I think that this error interferes with the cleanup procedure. Am I wrong?
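
If the cleanup script has to keep working on el6, where losetup has no
-D/--detach-all, a fallback could look like this (an untested sketch; it
assumes the el6 "losetup -a" output format of "/dev/loopN: info"):

    # detach loop devices one by one instead of using `losetup -D`
    for dev in $(losetup -a | cut -d: -f1); do
        losetup -d "$dev" || echo "failed to detach $dev" >&2
    done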


As for the retrigger link, I have no idea whether it is missing on purpose
or whether I should fix it.
Can someone please assist?

Thanks
Gil


On 03/14/2016 09:39 AM, Yedidyah Bar David wrote:
> Hi all,
>
> Yesterday I worked on some change to otopi [1].
>
> Changeset 3 failed jenkins.
>
> Changeset 4 [2] succeeded, but was wrongly pushed to a somewhat-old parent.
>
> Then I rebased locally and pushed again, changeset 5 [3].
>
> In changeset 4, the version in master was 1.4.2-something.
>
> After the rebase, the version was 1.5.0-something.
>
> For some reason, changeset 5's build [3] built both 1.4.2- and 1.5.0- .
>
> Any idea why? Some dirty environment?
>
> Also, I now tried to run it again. I do not have a "retrigger" link,
> so I tried using [4], input the changeset number, marked patchset 5,
> pressed 'Trigger Selected', and now I see on the side:
>
> Triggered Builds
>  54697,5
>  No jobs triggered for this event
>
> Best,
>
> [1] https://gerrit.ovirt.org/54697
> [2] 
> http://jenkins.ovirt.org/job/otopi_master_create-rpms-el7-x86_64_created/57/
> [3] 
> http://jenkins.ovirt.org/job/otopi_master_create-rpms-el7-x86_64_created/58/
> [4] http://jenkins.ovirt.org/gerrit_manual_trigger/

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: two otopi builds in a single jenkins job

2016-03-14 Thread Gil Shinar
Hi,

Pushed a patch that adds a workspace cleanup before the build starts. This
will solve the presence of old RPMs under the workspace.
I also downgraded the Gerrit Trigger plugin on Jenkins, which added back
the retrigger link to the builds.
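
For reference, the cleanup amounts to roughly this as a pre-build shell
step (a minimal sketch, not the actual patch; $WORKSPACE is the variable
Jenkins sets for the job):

    # wipe anything left over from previous builds before we start
    rm -rf "$WORKSPACE"/exported-artifacts "$WORKSPACE"/*.rpm
    git clean -dfx   # if the workspace is a git checkout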

Thanks a lot, David.
Gil

On 03/14/2016 02:30 PM, David Caro wrote:
> On 03/14 14:11, Gil Shinar wrote:
>> Hi,
>>
>> Investigating a bit, it does seem like a dirty environment: I saw
>> in the console log that it had only built the 1.5.0*.src.rpm, which means
>> the 1.4.2*.src.rpm files were already there before.
>> During the cleanup phase I can see the following:
>>
>> 15:55:47 Making sure there are no device mappings...
>> 15:55:47 Removing the used loop devices...
>> 15:55:48 losetup: invalid option -- 'D'
>> [losetup usage output snipped; identical to the log quoted earlier]
>>
>>
>> I think that this error interferes with the cleanup procedure. Am I wrong?
> This issue should not be related to the dirty workspace, though it should be
> fixed too (not critical or anything, happens because on el6 the losetup tool
> does not have the -D option to free all the loop devices :/)
>
>>
>> As for the retrigger link, I have no idea if it is missing on purpose or
>> should I fix that.
>> Can someone please assist?
> About the triggers:
>
>   Project created events requires 2.12
>
> from http://jenkins.ovirt.org/gerrit-trigger/server/gerrit.ovirt.org/
>
> :(, something is not ok there
>
>> Thanks
>> Gil
>>
>>
>> On 03/14/2016 09:39 AM, Yedidyah Bar David wrote:
>>> Hi all,
>>>
>>> Yesterday I worked on some change to otopi [1].
>>>
>>> Changeset 3 failed jenkins.
>>>
>>> Changeset 4 [2] succeeded, but was wrongly pushed to a somewhat-old parent.
>>>
>>> Then I rebased locally and pushed again, changeset 5 [3].
>>>
>>> In changeset 4, the version in master was 1.4.2-something.
>>>
>>> After the rebase, the version was 1.5.0-something.
>>>
>>> For some reason, changeset 5's build [3] built both 1.4.2- and 1.5.0- .
>>>
>>> Any idea why? Some dirty environment?
>>>
>>> Also, I now tried to run it again. I do not have a "retrigger" link,
>>> so I tried using [4], input the changeset number, marked patchset 5,
>>> pressed 'Trigger Selected', and now I see on the side:
>>>
>>> Triggered Builds
>>>  54697,5
>>>  No jobs triggered for this event
>>>
>>> Best,
>>>
>>> [1] https://gerrit.ovirt.org/54697
>>> [2] 
>>> http://jenkins.ovirt.org/job/otopi_master_create-rpms-el7-x86_64_created/57/
>>> [3] 
>>> http://jenkins.ovirt.org/job/otopi_master_create-rpms-el7-x86_64_created/58/
>>> [4] http://jenkins.ovirt.org/gerrit_manual_trigger/
>> ___
>> Infra mailing list
>> Infra@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/infra
>


___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: ERROR: Command failed. See logs for output.

2016-04-11 Thread Gil Shinar
Hi,

Going over the failure, I saw that there was a change in qemuimg.py, and
from that change on this job started to fail on a missing qemu-img
requirement.
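
(For reference, walking that file's recent history is enough to spot the
change; the path is an assumption based on the vdsm tree layout:

    git log --oneline --since=1.week -- lib/vdsm/qemuimg.py
)
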
Allon, can you please take a look?

Thanks
Gil


On Mon, Apr 11, 2016 at 10:44 AM, Nir Soffer  wrote:

> In the last hour all builds failing - hopefully someone can access the
> suggested *logs*.
>
> 08:20:32 INFO: installing package(s): autoconf automake git lago
> lago-ovirt libguestfs-tools-c m2crypto make mom policycoreutils-python
> pyflakes python-blivet python-coverage python-devel python-inotify
> python-ioprocess python-netaddr python-nose python-pep8
> python-pthreading python-rtslib python-six python34-nose python34-six
> python34 rpm-build sudo yum yum-utils grubby
> 08:20:44 ERROR: Command failed. See logs for output.
>
> see
> http://jenkins.ovirt.org/job/vdsm_master_check-patch-el7-x86_64/555/console
>
> Nir
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: ERROR: Command failed. See logs for output.

2016-04-11 Thread Gil Shinar
Seems like the check-merged job has been failing for quite a while now.
Looking for ybronhei on IRC to take a look at it.

Gil

On Mon, Apr 11, 2016 at 11:05 AM, Eyal Edri  wrote:

> I'm not sure Allon's patch can be causing the failures if it's not merged
> yet, especially since you're using mock via standard CI.
> We should instead look at the latest patches that got merged.
>
> e.
>
> On Mon, Apr 11, 2016 at 11:01 AM, Gil Shinar  wrote:
>
>> Hi,
>>
>> Going over the failure, I saw that there was a change in qemuimg.py, and
>> from that change on this job started to fail on a missing qemu-img
>> requirement.
>> Allon, can you please take a look?
>>
>> Thanks
>> Gil
>>
>>
>> On Mon, Apr 11, 2016 at 10:44 AM, Nir Soffer  wrote:
>>
>>> In the last hour all builds failing - hopefully someone can access the
>>> suggested *logs*.
>>>
>>> 08:20:32 INFO: installing package(s): autoconf automake git lago
>>> lago-ovirt libguestfs-tools-c m2crypto make mom policycoreutils-python
>>> pyflakes python-blivet python-coverage python-devel python-inotify
>>> python-ioprocess python-netaddr python-nose python-pep8
>>> python-pthreading python-rtslib python-six python34-nose python34-six
>>> python34 rpm-build sudo yum yum-utils grubby
>>> 08:20:44 ERROR: Command failed. See logs for output.
>>>
>>> see
>>> http://jenkins.ovirt.org/job/vdsm_master_check-patch-el7-x86_64/555/console
>>>
>>> Nir
>>> ___
>>> Infra mailing list
>>> Infra@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/infra
>>>
>>
>>
>> ___
>> Infra mailing list
>> Infra@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/infra
>>
>>
>
>
> --
> Eyal Edri
> Associate Manager
> RHEV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: master-snapshot repo is empty

2016-04-11 Thread Gil Shinar
We have an issue where the nightly RPM publish has been failing for a
couple of days now: it references some jobs that were deleted lately.
I'm waiting for Juan to fix that and will make sure someone is working on
it.

Gil

On Mon, Apr 11, 2016 at 1:52 PM, Elad Ben Aharon 
wrote:

> Hi,
>
> It seems that [1] is empty. I can't find the latest ovirt packages (the
> ones from April 6 and 7). For example, the following do not exist anywhere:
>
> ovirt-engine-4.0.0-0.0.master.20160407161554.git4c3b9da.el7.centos.noarch
> ovirt-host-deploy-1.5.0-0.0.master.20160407112754.gitb51b27a.el7.centos.noarch
>
>
>
> [1] http://resources.ovirt.org/pub/ovirt-master-snapshot/rpm/el7/noarch/
>
>
>
> Thanks
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: master-snapshot repo is empty

2016-04-11 Thread Gil Shinar
You can try now

On Mon, Apr 11, 2016 at 1:59 PM, Elad Ben Aharon 
wrote:

> Thanks
>
> On Mon, Apr 11, 2016 at 1:56 PM, Gil Shinar  wrote:
>
>> We have an issue where the nightly RPM publish has been failing for a
>> couple of days now: it references some jobs that were deleted lately.
>> I'm waiting for Juan to fix that and will make sure someone is working
>> on it.
>>
>> Gil
>>
>> On Mon, Apr 11, 2016 at 1:52 PM, Elad Ben Aharon 
>> wrote:
>>
>>> Hi,
>>>
>>> It seems that [1] is empty. I can't find the latest ovirt packages (the
>>> ones from April 6 and 7). For example, the following do not exist anywhere:
>>>
>>> ovirt-engine-4.0.0-0.0.master.20160407161554.git4c3b9da.el7.centos.noarch
>>> ovirt-host-deploy-1.5.0-0.0.master.20160407112754.gitb51b27a.el7.centos.noarch
>>>
>>>
>>>
>>> [1] http://resources.ovirt.org/pub/ovirt-master-snapshot/rpm/el7/noarch/
>>>
>>>
>>>
>>> Thanks
>>>
>>> ___
>>> Infra mailing list
>>> Infra@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/infra
>>>
>>>
>>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: out of sync hosts in foreman upstream

2016-04-11 Thread Gil Shinar
Hi Nadav,

There was no error on the artifactory server. I have no idea why it is
still showing as out of sync.

Thanks
Gil

On Mon, Apr 11, 2016 at 6:49 PM, Nadav Goldin  wrote:

> artifactory.ovirt.org
>> <https://foreman.ovirt.org/hosts/artifactory.ovirt.org> - tried to run
>> puppet agent -t. It completed successfully but the host
>>  is still not synced
>>
>
> what is the error?
>
>
>
>> deb81-vm01.phx.ovirt.org
>> <https://foreman.ovirt.org/hosts/deb81-vm01.phx.ovirt.org> - Cannot login
>>
>> not sure who set that up and when, our puppet code doesn't support debian
> anyways afaik.
>
>> grafana.phx.ovirt.org
>> <https://foreman.ovirt.org/hosts/grafana.phx.ovirt.org> - no ping
>>
> this machine is shutdown(under testing), I'll disable the alerts.
>
>>
>> graphite.phx.ovirt.org
>> <https://foreman.ovirt.org/hosts/graphite.phx.ovirt.org> - Cannot login
>>
> I'll fix.
>
>>
>> monitoring.phx.ovirt.org
>> <https://foreman.ovirt.org/hosts/monitoring.phx.ovirt.org> - no ping
>>
> same as grafana.
>
>
> On Mon, Apr 11, 2016 at 6:43 PM, Gil Shinar  wrote:
>
>> Hi,
>>
>> Here is a list of out of sync hosts in the upstream forman:
>> artifactory.ovirt.org
>> <https://foreman.ovirt.org/hosts/artifactory.ovirt.org> - tried to run
>> puppet agent -t. It completed successfully but the host
>>  is still not synced
>>
>> deb81-vm01.phx.ovirt.org
>> <https://foreman.ovirt.org/hosts/deb81-vm01.phx.ovirt.org> - Cannot login
>>
>> grafana.phx.ovirt.org
>> <https://foreman.ovirt.org/hosts/grafana.phx.ovirt.org> - no ping
>>
>> graphite.phx.ovirt.org
>> <https://foreman.ovirt.org/hosts/graphite.phx.ovirt.org> - Cannot login
>>
>> monitoring.phx.ovirt.org
>> <https://foreman.ovirt.org/hosts/monitoring.phx.ovirt.org> - no ping
>>
>> How should I handle these hosts?
>>
>> Thanks
>> Gil
>>
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Upstream Jenkins maintenance

2016-04-12 Thread Gil Shinar
Hi,

We need to install two security patches, and for that we need to restart
Jenkins.
To do that, we will need you to stop sending patches for two hours so the
Jenkins queue can clear itself.
I'm scheduling the restart for today at 18:00 IST. Patches sent after
16:00 IST might not be checked.
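
One way to drain the queue and restart safely (a sketch, assuming CLI
access to the master; these are the stock jenkins-cli commands):

    # stop scheduling new builds, then restart once running ones finish
    java -jar jenkins-cli.jar -s http://jenkins.ovirt.org/ quiet-down
    java -jar jenkins-cli.jar -s http://jenkins.ovirt.org/ safe-restart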

Thanks
Gil
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Could not find artifact org.ovirt.ovirt-host-deploy:ovirt-host-deploy:jar:1.5.0-master

2016-04-12 Thread Gil Shinar
Hi all,

I have an engine build issue. It fails to find an artifact in artifactory.
Error message is:

[ERROR] Failed to execute goal on project common-dependencies: Could
not resolve dependencies for project
org.ovirt.engine.core.manager:common-dependencies:jar:4.0.0-SNAPSHOT:
Could not find artifact
org.ovirt.ovirt-host-deploy:ovirt-host-deploy:jar:1.5.0-master in
ovirt-maven-repository
(http://artifactory.ovirt.org/artifactory/ovirt-mirror) -> [Help 1]


Jobs are:
http://jenkins.ovirt.org/job/ovirt-engine_master_check-patch-fc23-x86_64
and http://jenkins.ovirt.org/job/ovirt-engine_master_check-patch-el7-x86_64
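
A quick way to check whether the artifact is actually present in the
repository (the path below is derived from the groupId/artifactId/version
following the standard Maven layout, not copied from the job):

    curl -sI http://artifactory.ovirt.org/artifactory/ovirt-mirror/org/ovirt/ovirt-host-deploy/ovirt-host-deploy/1.5.0-master/ovirt-host-deploy-1.5.0-master.jar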

Can someone please assist?


Thanks

Gil


On Tue, Apr 12, 2016 at 11:35 AM, Martin Perina  wrote:

> Hi,
>
> could you please fix above CI build issue?
>
>
> http://jenkins.ovirt.org/job/ovirt-engine_master_check-patch-el7-x86_64/12289/
>
> http://jenkins.ovirt.org/job/ovirt-engine_master_check-patch-fc23-x86_64/10108/
>
> Thanks
>
> Martin
>
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Could not find artifact org.ovirt.ovirt-host-deploy:ovirt-host-deploy:jar:1.5.0-master

2016-04-12 Thread Gil Shinar
I'll look into the patch first next time.

Thanks
Gil

On Tue, Apr 12, 2016 at 2:46 PM, Sandro Bonazzola 
wrote:

>
>
> On Tue, Apr 12, 2016 at 10:53 AM, Gil Shinar  wrote:
>
>> Hi all,
>>
>> I have an engine build issue. It fails to find an artifact in
>> artifactory. Error message is:
>>
>> [ERROR] Failed to execute goal on project common-dependencies: Could not
>> resolve dependencies for project
>> org.ovirt.engine.core.manager:common-dependencies:jar:4.0.0-SNAPSHOT:
>> Could not find artifact
>> org.ovirt.ovirt-host-deploy:ovirt-host-deploy:jar:1.5.0-master in
>> ovirt-maven-repository
>> (http://artifactory.ovirt.org/artifactory/ovirt-mirror) -> [Help 1]
>>
>>
>> Jobs are:
>> http://jenkins.ovirt.org/job/ovirt-engine_master_check-patch-fc23-x86_64 and 
>> http://jenkins.ovirt.org/job/ovirt-engine_master_check-patch-el7-x86_64
>>
>> Can someone please assist?
>>
>>
> It's not a bug in jenkins, it's a bug in the patch.
> I already commented on it. See https://gerrit.ovirt.org/53595
> thanks,
>
>
>
>>
>> Thanks
>>
>> Gil
>>
>>
>> On Tue, Apr 12, 2016 at 11:35 AM, Martin Perina 
>> wrote:
>>
>>> Hi,
>>>
>>> could you please fix above CI build issue?
>>>
>>>
>>> http://jenkins.ovirt.org/job/ovirt-engine_master_check-patch-el7-x86_64/12289/
>>>
>>> http://jenkins.ovirt.org/job/ovirt-engine_master_check-patch-fc23-x86_64/10108/
>>>
>>> Thanks
>>>
>>> Martin
>>>
>>>
>>> ___
>>> Infra mailing list
>>> Infra@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/infra
>>>
>>>
>>
>
>
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Upstream Jenkins maintenance

2016-04-12 Thread Gil Shinar
Hi,

Jenkins is back online. Some jobs were still in the queue. Please make sure
that your patches have passed check-patch or check-merged and, if not,
re-trigger them.

Thanks
Gil

On Tue, Apr 12, 2016 at 11:36 AM, Gil Shinar  wrote:

> Hi,
>
> We need to install two security patches and for that we need to restart
> the Jenkins.
> In order to do that we will need you to stop sending patches for two hours
> so the Jenkins queue will clear itself.
> I'm scheduling the restart for today at 18:00 IST. Patches that'll be sent
> after 16:00 IST might not be checked.
>
> Thanks
> Gil
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


gerrit.ovirt.org restart

2016-04-13 Thread Gil Shinar
Hi all,

Due to very slow performance we are restarting the gerrit service.
It'll be back up in one minute.

Sorry for the inconvenience
Gil
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Another job failure: "Command failed. See logs for output"

2016-05-17 Thread Gil Shinar
Hi,

I tried to debug this issue yesterday with Barak's help.
I don't think the problem is history retention, as I saw another failure
like this yesterday (it happens from time to time). The problem is that the
slave it builds on gets taken by another build, which cleans the workspace.
I tried to look for logs under the workspace and they are missing, probably
because a newer build is running on the same slave.

Gil

On Tue, May 17, 2016 at 8:52 AM, Eyal Edri  wrote:

> Can we increase the history for builds on that job?  We should be able to
> debug jobs at least a week back.  Artifacts are not needed
> On May 16, 2016 11:42 PM, "David Caro"  wrote:
>
>> On 05/15 23:23, Nir Soffer wrote:
>> > Another instance:
>> >
>> > 19:38:15 Start: yum install
>> > 19:38:22 ERROR: Command failed. See logs for output.
>>
>> That means that there was an issue with the yum repos, most common causes
>> are:
>>
>> * The repos were actually down (mirrors fail once a day usually, while
>> syncing
>>   the rpms, though the cause is just a guess)
>> * Repoproxy (that we use to cache rpms) was overloaded
>>
>> Both jobs' histories are now gone. In order to be able to debug those
>> issues (for the next time), try looking into the mock logs, under the
>> 'logs.tgz' file that's archived in the job, then under the path:
>>   /./vdsm/logs/mocker-epel-7-x86_64.el7.init/
>> or
>>   /./vdsm/logs/mocker-epel-7-x86_64.el7.install_packages/
>>
>> Usually in the log named 'root.log' (check which one is bigger).
>>
>> >
>> >
>> http://jenkins.ovirt.org/job/vdsm_master_check-patch-el7-x86_64/1213/console
>>
>> >
>> > On Sun, May 15, 2016 at 11:21 PM, Nir Soffer 
>> wrote:
>> > >
>> http://jenkins.ovirt.org/job/vdsm_master_check-patch-el7-x86_64/1209/console
>> > >
>> > > 19:30:56 Start: yum install
>> > > 19:31:03 ERROR: Command failed. See logs for output.
>> > >
>> > > Including the console log to make sure it will not disappear
>> > > 
>> > >
>> > > 19:30:48 Triggered by Gerrit: https://gerrit.ovirt.org/56550
>> > > 19:30:48 Building remotely on fc23-vm07.phx.ovirt.org (fc23 nested)
>> in
>> > > workspace /home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64
>> > > 19:30:48  > git rev-parse --is-inside-work-tree # timeout=10
>> > > 19:30:49 Fetching changes from the remote Git repository
>> > > 19:30:49  > git config remote.origin.url
>> > > git://gerrit.ovirt.org/vdsm.git # timeout=10
>> > > 19:30:49 Cleaning workspace
>> > > 19:30:49  > git rev-parse --verify HEAD # timeout=10
>> > > 19:30:49 Resetting working tree
>> > > 19:30:49  > git reset --hard # timeout=10
>> > > 19:30:49  > git clean -fdx # timeout=10
>> > > 19:30:49 Pruning obsolete local branches
>> > > 19:30:49 Fetching upstream changes from git://
>> gerrit.ovirt.org/vdsm.git
>> > > 19:30:49  > git --version # timeout=10
>> > > 19:30:49  > git -c core.askpass=true fetch --tags --progress
>> > > git://gerrit.ovirt.org/vdsm.git refs/changes/50/56550/6 --prune
>> > > 19:30:52  > git rev-parse
>> > > 0940208483f3a21261eb5d725348e65c23becdc0^{commit} # timeout=10
>> > > 19:30:52 Checking out Revision
>> 0940208483f3a21261eb5d725348e65c23becdc0 (master)
>> > > 19:30:52  > git config core.sparsecheckout # timeout=10
>> > > 19:30:52  > git checkout -f 0940208483f3a21261eb5d725348e65c23becdc0
>> > > 19:30:52  > git rev-parse FETCH_HEAD^{commit} # timeout=10
>> > > 19:30:52  > git rev-list a09c577837b939096f97108fdbbcafe5980d4a0d #
>> timeout=10
>> > > 19:30:52  > git branch -a # timeout=10
>> > > 19:30:52  > git rev-parse remotes/origin/master^{commit} # timeout=10
>> > > 19:30:52  > git rev-parse remotes/origin/ovirt-3.1^{commit} #
>> timeout=10
>> > > 19:30:52  > git rev-parse remotes/origin/ovirt-3.2^{commit} #
>> timeout=10
>> > > 19:30:52  > git rev-parse remotes/origin/ovirt-3.3^{commit} #
>> timeout=10
>> > > 19:30:52  > git rev-parse remotes/origin/ovirt-3.3.0^{commit} #
>> timeout=10
>> > > 19:30:52  > git rev-parse remotes/origin/ovirt-3.4^{commit} #
>> timeout=10
>> > > 19:30:52  > git rev-parse remotes/origin/ovirt-3.5^{commit} #
>> timeout=10
>> > > 19:30:52  > git rev-parse remotes/origin/ovirt-3.5-gluster^{commit} #
>> timeout=10
>> > > 19:30:52  > git rev-parse remotes/origin/ovirt-3.5.0^{commit} #
>> timeout=10
>> > > 19:30:52  > git rev-parse remotes/origin/ovirt-3.5.2^{commit} #
>> timeout=10
>> > > 19:30:52  > git rev-parse remotes/origin/ovirt-3.5.4^{commit} #
>> timeout=10
>> > > 19:30:52  > git rev-parse remotes/origin/ovirt-3.5.6^{commit} #
>> timeout=10
>> > > 19:30:52  > git rev-parse remotes/origin/ovirt-3.6^{commit} #
>> timeout=10
>> > > 19:30:52  > git rev-parse remotes/origin/ovirt-3.6.0^{commit} #
>> timeout=10
>> > > 19:30:52  > git rev-parse remotes/origin/ovirt-3.6.1^{commit} #
>> timeout=10
>> > > 19:30:52  > git rev-parse remotes/origin/ovirt-3.6.2^{commit} #
>> timeout=10
>> > > 19:30:52  > git rev-parse remotes/origin/ovirt-3.6.3^{commit} #
>> timeout=10
>> > > 19:30:53  > git rev-parse --is-inside-work-tree # timeout=10
>> > > 19:30:53 Fetching 

Re: Another job failure: "Command failed. See logs for output"

2016-05-17 Thread Gil Shinar
Hi David,

I think you've got me wrong. I was referring to what Eyal said about
increasing the history we keep, not to the problem itself.
I still have no idea what the problem is.
I thought about attaching the mock logs to the build so we'll have
them archived on the master.
I'll look at the mock_runner script to see where it writes these logs so
we'll be able to archive them.
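
(For reference, pulling an archived copy apart locally is simple, with the
paths as David listed them, assuming logs.tgz was downloaded from the
job's archived artifacts:

    tar -xzf logs.tgz
    ls -lS vdsm/logs/mocker-epel-7-x86_64.el7.*/root.log   # biggest first
    less vdsm/logs/mocker-epel-7-x86_64.el7.install_packages/root.log
)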

Thanks
Gil


On Tue, May 17, 2016 at 12:20 PM, dcaro  wrote:

> On 05/17 10:36, Gil Shinar wrote:
> > Hi,
> >
> > I tried to debug this issue yesterday with Barak's help.
> > I don't think the problem is history retention, as I saw another failure
> > like this yesterday (it happens from time to time). The problem is that
> > the slave it builds on gets taken by another build, which cleans the
> > workspace. I tried to look for logs under the workspace and they are
> > missing, probably because a newer build is running on the same slave.
>
> How did you find out about that?
> That has happened before when the same slave was added twice on jenkins
> with different names, but the behavior was way different (you'd get things
> like the dir does not exist and such, not errors when running yum inside
> mock)
>
> >
> > Gil
> >
> > [remainder of quoted thread snipped]

Re: Another job failure: "Command failed. See logs for output"

2016-05-17 Thread Gil Shinar
OK. My bad :-)

Thanks
Gil

On Tue, May 17, 2016 at 12:33 PM, dcaro  wrote:

> On 05/17 12:27, Gil Shinar wrote:
> > Hi David,
> >
> > I think you've got me wrong. I was referring to what Eyal said about
> > increasing the history we keep, not to the problem itself.
>
> Yep, I got you wrong :)
>
> > I still have no idea what the problem is.
> > I thought about attaching the mock logs to the build so we'll have
> > them archived on the master.
> > I'll look at the mock_runner script to see where it writes these logs
> > so we'll be able to archive them.
>
> They are already being archived, see the previous email I sent
>
> >
> > Thanks
> > Gil
> >
> >
> > [remainder of quoted thread snipped]

Re: Another job failure: "Command failed. See logs for output"

2016-05-17 Thread Gil Shinar
Hi David,

I'll write this down as an action item to discuss in the next rhev-ci meeting.

Thanks
Gil

On Tue, May 17, 2016 at 12:32 PM, dcaro  wrote:

> On 05/17 11:20, dcaro wrote:
> > On 05/17 10:36, Gil Shinar wrote:
> > > Hi,
> > >
> > > I tried to debug this issue yesterday with Barak's help.
> > > I don't think the problem is history retention, as I saw another
> > > failure like this yesterday (it happens from time to time). The problem
> > > is that the slave it builds on gets taken by another build, which
> > > cleans the workspace. I tried to look for logs under the workspace and
> > > they are missing, probably because a newer build is running on the same
> > > slave.
> >
> > How did you find out about that?
> > That has happened before when the same slave was added twice on jenkins
> > with different names, but the behavior was way different (you'd get
> > things like the dir does not exist and such, not errors when running yum
> > inside mock)
>
>
> An example of yum failure:
>
>
>
> http://jenkins.ovirt.org/view/Master%20branch%20per%20project/view/vdsm/job/vdsm_master_check-patch-el7-x86_64/1523/
>
> You can see there the log in the console:
>
> 03:00:58 Start: yum install
> 03:01:07 ERROR: Command failed. See logs for output.
> 03:01:07  # /usr/bin/yum-deprecated --installroot
> /var/lib/mock/epel-7-x86_64-46ef12ce4362729a0f4c411e00edd8fc/root/
> --releasever 7 install @buildsys-build --setopt=tsflags=nocontexts
>
> Then, in the archived artifacts logs.tgz, under
> vdsm/logs/mocker-epel-7-x86_64.el7.init/root.log, you can see the error
> log:
>
>
> DEBUG util.py:474:  Executing command: ['/usr/bin/yum-deprecated',
> '--installroot',
> '/var/lib/mock/epel-7-x86_64-46ef12ce4362729a0f4c411e00edd8fc/root/',
> '--releasever', '7', 'install', '@buildsys-build',
> '--setopt=tsflags=nocontexts'] with env {'SHELL': '/bin/bash',
> 'CCACHE_DIR': '/tmp/ccache', 'HOME': '/builddir', 'PATH':
> '/usr/bin:/bin:/usr/sbin:/sbin', 'CCACHE_UMASK': '002', 'LC_MESSAGES': 'C',
> 'PROMPT_COMMAND': 'printf "\x1b]0;\x07"', 'LANG':
> 'en_US.UTF-8', 'TERM': 'vt100', 'HOSTNAME': 'mock', 'LD_PRELOAD':
> '/tmp/tmpxqactgv2/$LIB/nosync.so'} and shell False
> DEBUG util.py:399:  Yum command has been deprecated, use dnf instead.
> DEBUG util.py:399:  See 'man dnf' and 'man yum2dnf' for more information.
> DEBUG util.py:399:
> http://download.fedoraproject.org/pub/epel/7/x86_64/repodata/bbf11fe5f13b4c6d0a9a242b422885c6ed41e05ae718d4ea3abfc3550ef4f243-comps-epel7.xml.xz:
> [Errno 14] HTTP Error 404 - Not Found
> DEBUG util.py:399:  Trying other mirror.
> DEBUG util.py:399:
> http://download.fedoraproject.org/pub/epel/7/x86_64/repodata/4e6f7cf18ae8bc3553da486c7847ac8f9a50671406d26c72a4d0765f914c5c76-updateinfo.xml.bz2:
> [Errno 14] HTTP Error 404 - Not Found
> DEBUG util.py:399:  Trying other mirror.
> DEBUG util.py:399:
> http://download.fedoraproject.org/pub/epel/7/x86_64/repodata/ff94d6a6fd8803f1ba27ab562b8e99c2b5f7f4ffa5d49b97689f3df6ca57e367-primary.sqlite.xz:
> [Errno 14] HTTP Error 404 - Not Found
> DEBUG util.py:399:  Trying other mirror.
> DEBUG util.py:399:
> http://download.fedoraproject.org/pub/epel/7/x86_64/repodata/ff94d6a6fd8803f1ba27ab562b8e99c2b5f7f4ffa5d49b97689f3df6ca57e367-primary.sqlite.xz:
> [Errno 14] HTTP Error 404 - Not Found
> DEBUG util.py:399:  Trying other mirror.
> DEBUG util.py:399:
> http://download.fedoraproject.org/pub/epel/7/x86_64/repodata/ff94d6a6fd8803f1ba27ab562b8e99c2b5f7f4ffa5d49b97689f3df6ca57e367-primary.sqlite.xz:
> [Errno 14] HTTP Error 404 - Not Found
> DEBUG util.py:399:  Trying other mirror.
> DEBUG util.py:399:   One of the configured repositories failed ("Custom
> epel"),
> DEBUG util.py:399:   and yum doesn't have enough cached data to continue.
> At this point the only
> DEBUG util.py:399:   safe thing yum can do is fail. There are a few ways
> to work "fix" this:
> DEBUG util.py:399:   1. Contact the upstream for the repository and
> get them to fix the problem.
> DEBUG util.py:399:   2. Reconfigure the baseurl/etc. for the
> repository, to point to a working
> DEBUG util.py:399:  upstream. This is most often useful if you are
> using a newer
> DEBUG util.py:399:  distribution release than is supported by the
> repository (and the
> DEBUG util.py:399:  packages for the previous dist

Re: Change in ovirt-engine[master]: core: Kernel cmdline - host deploy

2016-05-19 Thread Gil Shinar
Hi,

Didi and Sandro are working on that

Thanks
Gil

On Thu, May 19, 2016 at 4:58 PM, Martin Perina  wrote:

> Hi,
>
> could you please take a look why upgrade jobs are failing?
>
> Thanks
>
> Martin
>
>
> On Thu, May 19, 2016 at 3:47 PM, Jenkins CI 
> wrote:
>
>> Jenkins CI has posted comments on this change.
>>
>> Change subject: core: Kernel cmdline - host deploy
>> ..
>>
>>
>> Patch Set 21:
>>
>> Build Failed
>>
>>
>> http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-3.6_el7_merged/306/
>> : FAILURE
>>
>>
>> http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-master_el7_merged/310/
>> : FAILURE
>>
>>
>> http://jenkins.ovirt.org/job/ovirt-engine_master_check-merged-el7-x86_64/298/
>> : SUCCESS
>>
>>
>> http://jenkins.ovirt.org/job/ovirt-engine_master_check-merged-fc23-x86_64/298/
>> : SUCCESS
>>
>> --
>> To view, visit https://gerrit.ovirt.org/57052
>> To unsubscribe, visit https://gerrit.ovirt.org/settings
>>
>> Gerrit-MessageType: comment
>> Gerrit-Change-Id: I314d96cc9970b07311e620bd4c4e2c878726fb72
>> Gerrit-PatchSet: 21
>> Gerrit-Project: ovirt-engine
>> Gerrit-Branch: master
>> Gerrit-Owner: Jakub Niedermertl 
>> Gerrit-Reviewer: Arik Hadas 
>> Gerrit-Reviewer: Jakub Niedermertl 
>> Gerrit-Reviewer: Jenkins CI
>> Gerrit-Reviewer: Martin Peřina 
>> Gerrit-Reviewer: Michal Skrivanek 
>> Gerrit-Reviewer: Moti Asayag 
>> Gerrit-Reviewer: Oved Ourfali 
>> Gerrit-Reviewer: Tomas Jelinek 
>> Gerrit-Reviewer: gerrit-hooks 
>> Gerrit-HasComments: No
>>
>
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Infra issue with Jenkins CI on https://gerrit.ovirt.org/#/c/57819/

2016-05-22 Thread Gil Shinar
I see this a lot in many jobs, so it can't be a network issue.
I made a note to raise this issue in our next team meeting.
To me it looks more like an overload issue (maybe the mirrors?).

Anyhow, I'll retrigger the job.

Gil

On Sun, May 22, 2016 at 11:01 AM, Eyal Edri  wrote:

> Looks like gerrit.ovirt.org is having network issues from old jenkins.
>
> Nadav, do we have network monitoring to see what might causing it?
> Also, we need to find a way not to give -1 if the git clone fails; not
> sure it's possible with the gerrit trigger plugin.
>
> On Sun, May 22, 2016 at 10:58 AM, Allon Mureinik 
> wrote:
>
>> Hi Infra,
>>
>> My patch https://gerrit.ovirt.org/#/c/57819/ got flagged as failing CI
>> with the message:
>>
>> """
>> http://jenkins-old.ovirt.org/job/ovirt-engine_master_find-bugs_gerrit/45479/
>> : There was an infra issue, please contact infra@ovirt.org
>> """
>>
>> The console output from the jenkins job (
>>
>> http://jenkins-old.ovirt.org/job/ovirt-engine_master_find-bugs_gerrit/45479/console
>> ) is as follows:
>> """
>>
>> 04:02:33 Triggered by Gerrit: https://gerrit.ovirt.org/57819
>> 04:02:33 Building remotely on fc21-vm01.phx.ovirt.org (vm phx nested fc21) in
>>          workspace /home/jenkins/workspace/ovirt-engine_master_find-bugs_gerrit
>> 04:02:34  > /usr/bin/git rev-parse --is-inside-work-tree # timeout=10
>> 04:02:34 Fetching changes from the remote Git repository
>> 04:02:34  > /usr/bin/git config remote.origin.url git://gerrit.ovirt.org/ovirt-engine # timeout=10
>> 04:02:34 Pruning obsolete local branches
>> 04:02:34 Fetching upstream changes from git://gerrit.ovirt.org/ovirt-engine
>> 04:02:34  > /usr/bin/git --version # timeout=10
>> 04:02:34  > /usr/bin/git -c core.askpass=true fetch --tags --progress git://gerrit.ovirt.org/ovirt-engine refs/changes/19/57819/1 --prune
>> 04:12:34 ERROR: Timeout after 10 minutes
>> 04:12:34 ERROR: Error fetching remote repo 'origin'
>> 04:12:34 hudson.plugins.git.GitException: Failed to fetch from git://gerrit.ovirt.org/ovirt-engine
>> 04:12:34   at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:766)
>> 04:12:34   at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1022)
>> 04:12:34   at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1053)
>> 04:12:34   at org.jenkinsci.plugins.multiplescms.MultiSCM.checkout(MultiSCM.java:129)
>> 04:12:34   at hudson.scm.SCM.checkout(SCM.java:485)
>> 04:12:34   at hudson.model.AbstractProject.checkout(AbstractProject.java:1276)
>> 04:12:34   at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:607)
>> 04:12:34   at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
>> 04:12:34   at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
>> 04:12:34   at hudson.model.Run.execute(Run.java:1738)
>> 04:12:34   at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
>> 04:12:34   at hudson.model.ResourceController.execute(ResourceController.java:98)
>> 04:12:34   at hudson.model.Executor.run(Executor.java:410)
>> 04:12:34 Caused by: hudson.plugins.git.GitException: Command "/usr/bin/git -c core.askpass=true fetch --tags --progress
>> git://gerrit.ovirt

Re: www.ovirt.org is down

2016-05-30 Thread Gil Shinar
Up and running for me

On Mon, May 30, 2016 at 11:31 PM, Yedidyah Bar David 
wrote:

> $Subject. Please handle. Thanks.
> --
> Didi
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: www.ovirt.org is down

2016-05-30 Thread Gil Shinar
Didn't see it was 10 hours ago :-)

On Tue, May 31, 2016 at 9:34 AM, Gil Shinar  wrote:

> Up and running for me
>
> On Mon, May 30, 2016 at 11:31 PM, Yedidyah Bar David 
> wrote:
>
>> $Subject. Please handle. Thanks.
>> --
>> Didi
>> ___
>> Infra mailing list
>> Infra@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/infra
>>
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Please enable gerrit gravatar plugin

2016-06-09 Thread Gil Shinar
I do not want to start "playing" with Gerrit plugins on a Thursday at
18:00.
Sorry I have seen this so late.
Shlomi or I will do it on Monday.
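
When we do, it should amount to roughly this over the admin ssh interface
(the plugin name here is illustrative, whatever the built jar is called;
the "gerrit plugin" commands are the stock ones):

    ssh -p 29418 admin@gerrit.ovirt.org gerrit plugin ls
    ssh -p 29418 admin@gerrit.ovirt.org gerrit plugin enable avatars-gravatar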

Gil

On Thu, Jun 9, 2016 at 1:55 PM, Roy Golan  wrote:

> Dear infra, please enable this little important plugin [1]
>
> [1]
> https://gerrit-review.googlesource.com/#/admin/projects/plugins/avatars/gravatar
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Jenkins repository issues?

2016-06-30 Thread Gil Shinar
Hi Nir,

I have investigated this issue, and it seems the usbutils, fcoe-utils and
ed packages were missing either from your prerequisites or from the repos
they should have been in.
I have rerun your patch on that same Jenkins slave and now it completes
successfully.

So my conclusion is that either some other merged patch added this
prerequisite or these packages were added back to their repo.
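
To double-check from the slave that they resolve now (fc23, so dnf; a
minimal sketch assuming dnf-plugins-core is installed for repoquery):

    # force fresh metadata and ask whether the three packages are visible
    dnf --refresh repoquery usbutils fcoe-utils ed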

Gil

On Wed, Jun 29, 2016 at 11:39 PM, Nir Soffer  wrote:

> Hi infra,
>
> Jenkinks failed to installed rpms on Fedora 23 - same build succeeded
> yesterday,
> without any change (on fedora).
>
> 20:28:14 Failed to synchronize cache for repo 'fedora', disabling.
> 20:28:16 Error: nothing provides usbutils needed by
> vdsm-hook-hostusb-4.18.999-168.gitf857e35.fc23.noarch.
> 20:28:16 nothing provides fcoe-utils needed by
> vdsm-hook-fcoe-4.18.999-168.gitf857e35.fc23.noarch.
> 20:28:16 nothing provides ed needed by
> vdsm-4.18.999-168.gitf857e35.fc23.x86_64.
> 20:28:16 package vdsm-tests-4.18.999-168.gitf857e35.fc23.noarch
> requires vdsm = 4.18.999-168.gitf857e35.fc23, but none of the
> providers can be installed.
> 20:28:16 package
> vdsm-hook-vmfex-dev-4.18.999-168.gitf857e35.fc23.noarch requires vdsm
> = 4.18.999-168.gitf857e35.fc23, but none of the providers can be
> installed.
> 20:28:16 package vdsm-hook-ovs-4.18.999-168.gitf857e35.fc23.noarch
> requires vdsm = 4.18.999-168.gitf857e35.fc23, but none of the
> providers can be installed.
> 20:28:16 package vdsm-hook-extnet-4.18.999-168.gitf857e35.fc23.noarch
> requires vdsm = 4.18.999-168.gitf857e35.fc23, but none of the
> providers can be installed.
> 20:28:16 package
> vdsm-hook-ethtool-options-4.18.999-168.gitf857e35.fc23.noarch requires
> vdsm = 4.18.999-168.gitf857e35.fc23, but none of the providers can be
> installed.
> 20:28:16 nothing provides ed needed by
> vdsm-4.18.999-168.gitf857e35.fc23.x86_64.
> 20:28:16 package vdsm-gluster-4.18.999-168.gitf857e35.fc23.noarch
> requires vdsm = 4.18.999-168.gitf857e35.fc23, but none of the
> providers can be installed.
> 20:28:16 package
> vdsm-hook-qemucmdline-4.18.999-168.gitf857e35.fc23.noarch requires
> vdsm, but none of the providers can be installed.
> 20:28:16 package vdsm-hook-macbind-4.18.999-168.gitf857e35.fc23.noarch
> requires vdsm >= 4.14, but none of the providers can be installed.
> 20:28:16 package vdsm-hook-ipv6-4.18.999-168.gitf857e35.fc23.noarch
> requires vdsm >= 4.16.7, but none of the providers can be installed.
> 20:28:16 package vdsm-hook-faqemu-4.18.999-168.gitf857e35.fc23.noarch
> requires vdsm, but none of the providers can be installed.
> 20:28:16 package
> vdsm-hook-fakevmstats-4.18.999-168.gitf857e35.fc23.noarch requires
> vdsm, but none of the providers can be installed.
> 20:28:16 nothing provides logrotate needed by
> vdsm-4.18.999-142.gitb53f26b.fc23.x86_64.
> 20:28:16 package
> vdsm-hook-checkimages-4.18.999-168.gitf857e35.fc23.noarch requires
> vdsm, but none of the providers can be installed.
> 20:28:16 package
> vdsm-hook-allocate_net-4.18.999-168.gitf857e35.fc23.noarch requires
> vdsm, but none of the providers can be installed
>
>
> http://jenkins.ovirt.org/job/vdsm_master_check-patch-fc23-x86_64/3585/console
>
> Please check.
>
> Thanks,
> Nir
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Jenkins repository issues?

2016-06-30 Thread Gil Shinar
Doesn't look like you need to. As I said, it worked for me in the morning.
If these packages were missing for some reason (that's what the error
message claimed), they are now present, so you do not need to do anything.

Let us know if it happens again

Thanks
Gil

On Thu, Jun 30, 2016 at 3:11 PM, Nir Soffer  wrote:

> On Thu, Jun 30, 2016 at 11:24 AM, Gil Shinar  wrote:
> > Hi Nir,
> >
> > I have investigated this issue and it seems like usbutils, fcoe-utils
> and ed
> > packages were missing either in you prerequisites or in the repos they
> > should have been.
> > I have rerun your patch in that same Jenkins slave and now it completes
> > successfully.
> >
> > So my conclusion is that some other merged patch add this prerequisite or
> > these packages were added to their repo.
>
> Hi Gil,
>
> I don't follow - we require these repos:
>
> ovirt-snapshot,
> http://resources.ovirt.org/pub/ovirt-master-snapshot/rpm/$distro
>
> ovirt-snapshot-static,http://resources.ovirt.org/pub/ovirt-master-snapshot-static/rpm/$distro
> ovirt-ci-tools,http://resources.ovirt.org/repos/ci-tools/$distro
> lago,http://resources.ovirt.org/repos/lago/stable/0.0/rpm/$distro
>
> See
> https://github.com/oVirt/vdsm/blob/master/automation/check-patch.repos.fc23
>
> We never seen an issue with fcoe-utils or the other packages until now.
>
> Do we need to require additional repos?
>
> >
> > Gil
> >
> > On Wed, Jun 29, 2016 at 11:39 PM, Nir Soffer  wrote:
> >>
> >> Hi infra,
> >>
> >> Jenkinks failed to installed rpms on Fedora 23 - same build succeeded
> >> yesterday,
> >> without any change (on fedora).
> >>
> >> 20:28:14 Failed to synchronize cache for repo 'fedora', disabling.
> >> 20:28:16 Error: nothing provides usbutils needed by
> >> vdsm-hook-hostusb-4.18.999-168.gitf857e35.fc23.noarch.
> >> 20:28:16 nothing provides fcoe-utils needed by
> >> vdsm-hook-fcoe-4.18.999-168.gitf857e35.fc23.noarch.
> >> 20:28:16 nothing provides ed needed by
> >> vdsm-4.18.999-168.gitf857e35.fc23.x86_64.
> >> 20:28:16 package vdsm-tests-4.18.999-168.gitf857e35.fc23.noarch
> >> requires vdsm = 4.18.999-168.gitf857e35.fc23, but none of the
> >> providers can be installed.
> >> 20:28:16 package
> >> vdsm-hook-vmfex-dev-4.18.999-168.gitf857e35.fc23.noarch requires vdsm
> >> = 4.18.999-168.gitf857e35.fc23, but none of the providers can be
> >> installed.
> >> 20:28:16 package vdsm-hook-ovs-4.18.999-168.gitf857e35.fc23.noarch
> >> requires vdsm = 4.18.999-168.gitf857e35.fc23, but none of the
> >> providers can be installed.
> >> 20:28:16 package vdsm-hook-extnet-4.18.999-168.gitf857e35.fc23.noarch
> >> requires vdsm = 4.18.999-168.gitf857e35.fc23, but none of the
> >> providers can be installed.
> >> 20:28:16 package
> >> vdsm-hook-ethtool-options-4.18.999-168.gitf857e35.fc23.noarch requires
> >> vdsm = 4.18.999-168.gitf857e35.fc23, but none of the providers can be
> >> installed.
> >> 20:28:16 nothing provides ed needed by
> >> vdsm-4.18.999-168.gitf857e35.fc23.x86_64.
> >> 20:28:16 package vdsm-gluster-4.18.999-168.gitf857e35.fc23.noarch
> >> requires vdsm = 4.18.999-168.gitf857e35.fc23, but none of the
> >> providers can be installed.
> >> 20:28:16 package
> >> vdsm-hook-qemucmdline-4.18.999-168.gitf857e35.fc23.noarch requires
> >> vdsm, but none of the providers can be installed.
> >> 20:28:16 package vdsm-hook-macbind-4.18.999-168.gitf857e35.fc23.noarch
> >> requires vdsm >= 4.14, but none of the providers can be installed.
> >> 20:28:16 package vdsm-hook-ipv6-4.18.999-168.gitf857e35.fc23.noarch
> >> requires vdsm >= 4.16.7, but none of the providers can be installed.
> >> 20:28:16 package vdsm-hook-faqemu-4.18.999-168.gitf857e35.fc23.noarch
> >> requires vdsm, but none of the providers can be installed.
> >> 20:28:16 package
> >> vdsm-hook-fakevmstats-4.18.999-168.gitf857e35.fc23.noarch requires
> >> vdsm, but none of the providers can be installed.
> >> 20:28:16 nothing provides logrotate needed by
> >> vdsm-4.18.999-142.gitb53f26b.fc23.x86_64.
> >> 20:28:16 package
> >> vdsm-hook-checkimages-4.18.999-168.gitf857e35.fc23.noarch requires
> >> vdsm, but none of the providers can be installed.
> >> 20:28:16 package
> >> vdsm-hook-allocate_net-4.18.999-168.gitf857e35.fc23.noarch requires
> >> vdsm, but none of the providers can be installed
> >>
> >>
> >>
> http://jenkins.ovirt.org/job/vdsm_master_check-patch-fc23-x86_64/3585/console
> >>
> >> Please check.
> >>
> >> Thanks,
> >> Nir
> >> ___
> >> Infra mailing list
> >> Infra@ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/infra
> >
> >
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Error in vdsm functional tests

2016-07-05 Thread Gil Shinar
Can you send me a link to the job that fails?
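
When you do, the first thing worth checking is whether python-mock actually
made it into the lago VM, e.g. (same command form as in your log; the
package name is taken from your check-merged.packages.f23 change):

    lago shell vdsm_functional_tests_host-fc23 -c 'rpm -q python-mock'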

Gil

On Tue, Jul 5, 2016 at 3:54 PM, Yaniv Bronheim  wrote:

> Hi,
> Recently we're getting the following error, even though we require
> python-mock in check-merged.packages.f23 (https://gerrit.ovirt.org/59797)
>
> 09:52:43   # Deploy environment: Success (in 0:09:06)
> 09:52:44 @ Deploy oVirt environment: Success (in 0:09:07)
> 09:52:44 + lago shell vdsm_functional_tests_host-fc23 -c 'mount -t tmpfs tmpfs /sys/kernel/mm/ksm'
> 09:52:44 + tee /home/jenkins/workspace/vdsm_master_check-merged-fc23-x86_64/vdsm/exported-artifacts/functional_tests_stdout.fc23.log
> 09:52:44 current session does not belong to lago group.
> 09:52:46 + lago shell vdsm_functional_tests_host-fc23 -c ' cd /usr/share/vdsm/tests
> 09:52:46 ./run_tests.sh --with-xunit --xunit-file=/tmp/nosetests-fc23.xml -s functional/supervdsmFuncTests.py '
> 09:52:47 current session does not belong to lago group.
> 09:52:49 Traceback (most recent call last):
> 09:52:49   File "../tests/testrunner.py", line 42, in <module>
> 09:52:49     import testlib
> 09:52:49   File "/usr/share/vdsm/tests/testlib.py", line 44, in <module>
> 09:52:49     import mock
> 09:52:49 ImportError: No module named mock
> 09:52:49 + failed=1
> 09:52:49 + lago copy-from-vm vdsm_functional_tests_host-fc23 /tmp/nosetests-fc23.xml /home/jenkins/workspace/vdsm_master_check-merged-fc23-x86_64/vdsm/exported-artifacts/nosetests-fc23.xml
> 09:52:50 current session does not belong to lago group.
> 09:52:51 Error occured, aborting
> 09:52:51 Traceback (most recent call last):
> 09:52:51   File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 691, in main
> 09:52:51     cli_plugins[args.verb].do_run(args)
> 09:52:51   File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 180, in do_run
> 09:52:51     self._do_run(**vars(args))
> 09:52:51   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 488, in wrapper
> 09:52:51     return func(*args, **kwargs)
> 09:52:51   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 499, in wrapper
> 09:52:51     return func(*args, prefix=prefix, **kwargs)
> 09:52:51   File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 514, in do_copy_from_vm
> 09:52:51     host.copy_from(remote_path, local_path)
> 09:52:51   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 319, in copy_from
> 09:52:51     local_path=local_path,
> 09:52:51   File "/usr/lib/python2.7/site-packages/scp.py", line 125, in get
> 09:52:51     self._recv_all()
> 09:52:51   File "/usr/lib/python2.7/site-packages/scp.py", line 257, in _recv_all
> 09:52:51     raise SCPException(str(msg).strip())
> 09:52:51 SCPException: scp: /tmp/nosetests-fc23.xml: No such file or directory
> 09:52:51 + :
> 09:52:51 + lago stop vdsm_functional_tests_host-fc23
>
> --
> *Yaniv Bronhaim.*
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Build failed in Jenkins: ovirt_master_publish-rpms_nightly #133

2016-07-13 Thread Gil Shinar
Already asked Sandro on IRC :-)

On Thu, Jul 14, 2016 at 9:19 AM, Eyal Edri  wrote:

> Sandro,
> Can you check / update otopi build? did it move to f24?
>
> On Thu, Jul 14, 2016 at 3:28 AM,  wrote:
>
>> See 
>>
>> --
>> Started by timer
>> Building on master in workspace <
>> http://jenkins.ovirt.org/job/ovirt_master_publish-rpms_nightly/ws/>
>> [WS-CLEANUP] Deleting project workspace...
>> [workspace] $ /bin/bash -xe /tmp/hudson3340116479061570244.sh
>> + rm -rf <
>> http://jenkins.ovirt.org/job/ovirt_master_publish-rpms_nightly/ws/artifacts
>> >
>> + mkdir <
>> http://jenkins.ovirt.org/job/ovirt_master_publish-rpms_nightly/ws/artifacts
>> >
>> Copied 6 artifacts from
>> "ovirt-host-deploy_master_build-artifacts-el7-x86_64" build number 11
>> Copied 6 artifacts from
>> "ovirt-host-deploy_master_build-artifacts-fc23-x86_64" build number 11
>> Copied 8 artifacts from "otopi_master_build-artifacts-el7-x86_64" build
>> number 13
>> ERROR: Unable to find project for artifact copy:
>> otopi_master_build-artifacts-fc23-x86_64
>> This may be due to incorrect project name or permission settings; see
>> help for project name in job configuration.
>>
>> ___
>> Infra mailing list
>> Infra@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/infra
>>
>>
>>
>
>
> --
> Eyal Edri
> Associate Manager
> RHEV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: premissions for my jenkins user

2016-07-19 Thread Gil Shinar
Try now

On Tue, Jul 19, 2016 at 12:42 PM, Eyal Edri  wrote:

> Shlomi,
> Can you add ido to the dev role so he can trigger builds?
>
> On Jul 19, 2016 12:20 PM, "Ido Rosenzwig"  wrote:
>
>> Hi,
>>
>> I wish to have trigger (and re-trigger) premissions on my jenkins user.
>> my user is : irosenzw
>>
>> Best regards,
>> Ido Rosenzwig
>>
>> ___
>> Infra mailing list
>> Infra@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/infra
>>
>>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Not able to log to jenkins

2016-07-31 Thread Gil Shinar
Still relevant?

Gil

On Fri, Jul 29, 2016 at 6:03 PM, Piotr Kliczewski <
piotr.kliczew...@gmail.com> wrote:

> I am not able to log to jenkins using pkliczewski user.
> Can someone please take a look?
>
> Thanks,
> Piotr
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Not able to log to jenkins

2016-07-31 Thread Gil Shinar
Are you using this user?
piotr.kliczew...@gmail.com
<http://jenkins.ovirt.org/securityRealm/user/piotr.kliczew...@gmail.com/>

Gil

On Sun, Jul 31, 2016 at 12:09 PM, Gil Shinar  wrote:

> Still relevant?
>
> Gil
>
> On Fri, Jul 29, 2016 at 6:03 PM, Piotr Kliczewski <
> piotr.kliczew...@gmail.com> wrote:
>
>> I am not able to log to jenkins using pkliczewski user.
>> Can someone please take a look?
>>
>> Thanks,
>> Piotr
>> ___
>> Infra mailing list
>> Infra@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/infra
>>
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [JIRA] (OVIRT-681) [URGENT] boolean parameters not passed anymore to jenkins jobs

2016-08-14 Thread Gil Shinar
Hi Sandro,

I haven't had the chance to use matrix jobs before, so it was a challenge
for me to investigate :-)
I have found the following bug:
https://issues.jenkins-ci.org/browse/JENKINS-34758

Due to a security update, parameters stopped being passed to child jobs.
According to the changelog, Matrix Project plugin version 1.7 should fix that.

I'll upgrade the plugin, but a restart might be needed, so let us know when
it's convenient.
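
In case the plugin upgrade alone doesn't do it, the advisory for that
security fix also documents two startup switches that restore parameter
passing (a sketch for /etc/sysconfig/jenkins, untested on our master;
DISTRIBUTION is just an example taken from the affected repo-closure jobs):

    # permissive: let all undefined parameters through again
    JENKINS_JAVA_OPTIONS="-Dhudson.model.ParametersAction.keepUndefinedParameters=true"
    # or restrictive: whitelist only the parameters the jobs actually pass
    JENKINS_JAVA_OPTIONS="-Dhudson.model.ParametersAction.safeParameters=DISTRIBUTION"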

Gil

On Sat, Aug 13, 2016 at 8:49 AM, sbonazzo (oVirt JIRA) <
j...@ovirt-jira.atlassian.net> wrote:

> sbonazzo created OVIRT-681:
> --
>
>  Summary: [URGENT] boolean parameters not passed anymore to
> jenkins jobs
>  Key: OVIRT-681
>  URL: https://ovirt-jira.atlassian.net/browse/OVIRT-681
>  Project: oVirt - virtualization made easy
>   Issue Type: By-EMAIL
> Reporter: sbonazzo
> Assignee: infra
>
>
> Hi,
> looks like repository closure jobs are not working anymore because the
> boolean parameters are not passed anymore to the job environment.
> Can you please check why this happens?
> See for example
> http://jenkins.ovirt.org/user/sbonazzo/my-views/view/Repo%
> 20status/job/repos_4.0_check-closure_merged/86/
> DISTRIBUTION=centos7/console
>
>
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v1000.245.0#19)
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Gerrit/ManualTrigger permission

2016-08-14 Thread Gil Shinar
Have you signed into the Jenkins?

On Mon, Aug 15, 2016 at 6:10 AM, Ravi Nori  wrote:

> Hi,
>
> I need dev permissions for "rnori" to be able to trigger manually.
>
> "rnori is missing the Gerrit/ManualTrigger permission"
>
> Thanks,
>
> Ravi
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Gerrit/ManualTrigger permission

2016-08-14 Thread Gil Shinar
We're talking about this:
http://jenkins.ovirt.org/

Right?

Don't see you there...

On Mon, Aug 15, 2016 at 9:26 AM, Ravi Nori  wrote:

> Yes I have, user name rnori
>
> On Mon, Aug 15, 2016 at 2:20 AM, Gil Shinar  wrote:
> > Have you signed into the Jenkins?
> >
> > On Mon, Aug 15, 2016 at 6:10 AM, Ravi Nori  wrote:
> >>
> >> Hi,
> >>
> >> I need dev permissions for "rnori" to be able to trigger manually.
> >>
> >> "rnori is missing the Gerrit/ManualTrigger permission"
> >>
> >> Thanks,
> >>
> >> Ravi
> >> ___
> >> Infra mailing list
> >> Infra@ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/infra
> >
> >
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Gerrit/ManualTrigger permission

2016-08-14 Thread Gil Shinar
Try now

On Mon, Aug 15, 2016 at 9:48 AM, Ravi Nori  wrote:

> Just logged out and logged back in
>
>
>
> On Mon, Aug 15, 2016 at 2:44 AM, Gil Shinar  wrote:
> > We're talking about this:
> > http://jenkins.ovirt.org/
> >
> > Right?
> >
> > Don't see you there...
> >
> > On Mon, Aug 15, 2016 at 9:26 AM, Ravi Nori  wrote:
> >>
> >> Yes I have, user name rnori
> >>
> >> On Mon, Aug 15, 2016 at 2:20 AM, Gil Shinar  wrote:
> >> > Have you signed into the Jenkins?
> >> >
> >> > On Mon, Aug 15, 2016 at 6:10 AM, Ravi Nori  wrote:
> >> >>
> >> >> Hi,
> >> >>
> >> >> I need dev permissions for "rnori" to be able to trigger manually.
> >> >>
> >> >> "rnori is missing the Gerrit/ManualTrigger permission"
> >> >>
> >> >> Thanks,
> >> >>
> >> >> Ravi
> >> >> ___
> >> >> Infra mailing list
> >> >> Infra@ovirt.org
> >> >> http://lists.ovirt.org/mailman/listinfo/infra
> >> >
> >> >
> >
> >
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Failed Check-Merged Job

2016-08-17 Thread Gil Shinar
I have re-triggered the build and it completed successfully.
It was probably a fedora repo network hiccup.

Gil

On Wed, Aug 17, 2016 at 5:46 PM, Eyal Edri  wrote:

> Adding Infra support so we'll have a ticket documenting this as well.
>
> On Wed, Aug 17, 2016 at 5:23 PM, Phillip Bailey 
> wrote:
>
>> Hi infra team,
>>
>> One of my patches [1] failed the check-merged-el6-x86 job [2]. It looks
>> like a yum install failed at 12:59:29.
>>
>> Could someone please take a look and let me know if any action is
>> required on my part?
>>
>> Thanks!
>>
>> -Phillip Bailey
>>
>> [1] https://gerrit.ovirt.org/#/c/62165/
>> [2] http://jenkins.ovirt.org/job/ovirt-engine_3.6_check-merged-e
>> l6-x86_64/101/console
>>
>> ___
>> Infra mailing list
>> Infra@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/infra
>>
>>
>
>
> --
> Eyal Edri
> Associate Manager
> RHV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Change in ovirt-engine[master]: core: Null-safe ImagesHandler.buildStorageToDiskMap()

2016-09-15 Thread Gil Shinar
There was an issue with Jenkins at the time you pushed the patch: the
findbugs job timed out, so the flag was set to -1.
I have retriggered all the related jobs and the patch now has +1.

On Tue, Sep 13, 2016 at 12:59 PM, Shmuel Melamud 
wrote:

> Hi!
>
> What have been failed is everything is succeeded? ;)
>
> Shmuel
>
> -- Forwarded message --
> From: Jenkins CI 
> Date: Mon, Sep 12, 2016 at 12:08 AM
> Subject: Change in ovirt-engine[master]: core: Null-safe ImagesHandler.
> buildStorageToDiskMap()
> To: Shmuel Leib Melamud 
>
>
> Jenkins CI has posted comments on this change.
>
> Change subject: core: Null-safe ImagesHandler.buildStorageToDiskMap()
> ..
>
>
> Patch Set 1: Continuous-Integration-1
>
> Build Failed
>
> http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-fro
> m-master_el7_created/2387/ : SUCCESS
>
> http://jenkins.ovirt.org/job/ovirt-engine_master_check-patch
> -el7-x86_64/6726/ : SUCCESS
>
> http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-fro
> m-4.0_el7_created/2387/ : SUCCESS
>
> http://jenkins.ovirt.org/job/ovirt-engine_master_check-patch
> -fc24-x86_64/983/ : SUCCESS
>
> --
> To view, visit https://gerrit.ovirt.org/63655
> To unsubscribe, visit https://gerrit.ovirt.org/settings
>
> Gerrit-MessageType: comment
> Gerrit-Change-Id: I61f67c4f088b5782b4879e9dc9721cc156eebc0d
> Gerrit-PatchSet: 1
> Gerrit-Project: ovirt-engine
> Gerrit-Branch: master
> Gerrit-Owner: Shmuel Leib Melamud 
> Gerrit-Reviewer: Arik Hadas 
> Gerrit-Reviewer: Jenkins CI
> Gerrit-Reviewer: Shahar Havivi 
> Gerrit-Reviewer: Shmuel Leib Melamud 
> Gerrit-Reviewer: gerrit-hooks 
> Gerrit-HasComments: No
>
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Build failed in Jenkins: ovirt_4.0_system-tests #464

2016-09-26 Thread Gil Shinar
Yes, I saw it, and now it looks like it has recovered.
It was a network issue.

Gil

On Mon, Sep 26, 2016 at 2:59 PM, Evgheni Dereveanchin 
wrote:

> Looks like there's some issue with the gerrit VM.
> I see alerts on Nagios and can't connect to the WebUI or SSH at the moment.
>
>
> Regards,
> Evgheni Dereveanchin
>
> - Original Message -
> From: "Eyal Edri" 
> To: "Evgheni Dereveanchin" , "Gil Shinar" <
> gshi...@redhat.com>
> Cc: "infra" 
> Sent: Monday, 26 September, 2016 1:53:52 PM
> Subject: Re: Build failed in Jenkins: ovirt_4.0_system-tests #464
>
> connection issues to gerrit?
>
> On Mon, Sep 26, 2016 at 2:51 PM,  wrote:
>
> > See <http://jenkins.ovirt.org/job/ovirt_4.0_system-tests/464/>
> >
> > --
> > Started by timer
> > [EnvInject] - Loading node environment variables.
> > Building remotely on ovirt-srv19.phx.ovirt.org (phx physical integ-tests
> > fc24) in workspace <http://jenkins.ovirt.org/job/
> > ovirt_4.0_system-tests/ws/>
> > Cloning the remote Git repository
> > Cloning repository git://gerrit.ovirt.org/ovirt-system-tests.git
> >  > git init <http://jenkins.ovirt.org/job/ovirt_4.0_system-tests/ws/
> > ovirt-system-tests> # timeout=10
> > Fetching upstream changes from git://gerrit.ovirt.org/ovirt-
> > system-tests.git
> >  > git --version # timeout=10
> >  > git -c core.askpass=true fetch --tags --progress git://
> > gerrit.ovirt.org/ovirt-system-tests.git +refs/heads/*:refs/remotes/
> > origin/*
> >  > git config remote.origin.url git://gerrit.ovirt.org/ovirt-
> > system-tests.git # timeout=10
> >  > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/
> origin/*
> > # timeout=10
> >  > git config remote.origin.url git://gerrit.ovirt.org/ovirt-
> > system-tests.git # timeout=10
> > Cleaning workspace
> >  > git rev-parse --verify HEAD # timeout=10
> > No valid HEAD. Skipping the resetting
> >  > git clean -fdx # timeout=10
> > Pruning obsolete local branches
> > Fetching upstream changes from git://gerrit.ovirt.org/ovirt-
> > system-tests.git
> >  > git -c core.askpass=true fetch --tags --progress git://
> > gerrit.ovirt.org/ovirt-system-tests.git refs/heads/master --prune
> > ERROR: Error fetching remote repo 'origin'
> > hudson.plugins.git.GitException: Failed to fetch from git://
> > gerrit.ovirt.org/ovirt-system-tests.git
> > at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:766)
> > at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1022)
> > at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1053)
> > at org.jenkinsci.plugins.multiplescms.MultiSCM.
> > checkout(MultiSCM.java:129)
> > at hudson.scm.SCM.checkout(SCM.java:485)
> > at hudson.model.AbstractProject.checkout(AbstractProject.java:
> > 1269)
> > at hudson.model.AbstractBuild$AbstractBuildExecution.
> > defaultCheckout(AbstractBuild.java:607)
> > at jenkins.scm.SCMCheckoutStrategy.checkout(
> > SCMCheckoutStrategy.java:86)
> > at hudson.model.AbstractBuild$AbstractBuildExecution.run(
> > AbstractBuild.java:529)
> > at hudson.model.Run.execute(Run.java:1738)
> > at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
> > at hudson.model.ResourceController.execute(
> > ResourceController.java:98)
> > at hudson.model.Executor.run(Executor.java:410)
> > Caused by: hudson.plugins.git.GitException: Command "git -c
> > core.askpass=true fetch --tags --progress git://gerrit.ovirt.org/ovirt-
> > system-tests.git refs/heads/master --prune" returned status code 128:
> > stdout:
> > stderr: fatal: unable to connect to gerrit.ovirt.org:
> > gerrit.ovirt.org[0: 107.22.212.69]: errno=Connection timed out
> >
> >
> > at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.
> > launchCommandIn(CliGitAPIImpl.java:1640)
> > at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.
> > launchCommandWithCredentials(CliGitAPIImpl.java:1388)
> > at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.
> > access$300(CliGitAPIImpl.java:62)
> > at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.
> > execute(CliGitAPIImpl.java:313)
> > at org.jenkinsci.plugins.gitclient.RemoteGitImpl$
> > CommandInvocationHandler$1.call(RemoteGitImpl.java:152)
> > at org.jenkinsci.plugins.gitclient.RemoteGitImpl$
> > CommandInvocationHandler$1.

Re: Build failed in Jenkins: ovirt_master_system-tests #573

2016-09-26 Thread Gil Shinar
Hi Yaniv,

Could the error below have been caused by the change you pushed?

12:01:31 Unhandled exception in <function <lambda> at 0x7f791843d500>
12:01:31 Traceback (most recent call last):
12:01:31   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 195, in assert_true_within
12:01:31     if func():
12:01:31   File "/home/jenkins/workspace/ovirt_master_system-tests/ovirt-system-tests/basic_suite_master/test-scenarios/002_bootstrap.py", line 382, in <lambda>
12:01:31     lambda: api.disks.get(disk_name).status.state == 'ok',
12:01:31 AttributeError: 'NoneType' object has no attribute 'status'
12:01:31 Error while running thread
12:01:31 Traceback (most recent call last):
12:01:31   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 53, in _ret_via_queue
12:01:31     queue.put({'return': func()})
12:01:31   File "/home/jenkins/workspace/ovirt_master_system-tests/ovirt-system-tests/basic_suite_master/test-scenarios/002_bootstrap.py", line 409, in import_template_from_glance
12:01:31     generic_import_from_glance(api, image_name=CIRROS_IMAGE_NAME, image_ext='_glance_template', as_template=True)
12:01:31   File "/home/jenkins/workspace/ovirt_master_system-tests/ovirt-system-tests/basic_suite_master/test-scenarios/002_bootstrap.py", line 382, in generic_import_from_glance
12:01:31     lambda: api.disks.get(disk_name).status.state == 'ok',
12:01:31   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 223, in assert_true_within_long
12:01:31     allowed_exceptions=allowed_exceptions,
12:01:31   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 195, in assert_true_within
12:01:31     if func():
12:01:31   File "/home/jenkins/workspace/ovirt_master_system-tests/ovirt-system-tests/basic_suite_master/test-scenarios/002_bootstrap.py", line 382, in <lambda>
12:01:31     lambda: api.disks.get(disk_name).status.state == 'ok',
12:01:31 AttributeError: 'NoneType' object has no attribute 'status'
12:01:31   * Collect artifacts:
12:01:47   * Collect artifacts: ERROR (in 0:00:15)
12:01:47   # add_secondary_storage_domains: ERROR (in 0:04:52)
12:01:47   # Results located at /home/jenkins/workspace/ovirt_master_system-tests/ovirt-system-tests/deployment-basic_suite_master/default/nosetests-002_bootstrap.py.xml
12:01:47 @ Run test: 002_bootstrap.py: ERROR (in 0:16:32)
12:01:47 Error occured, aborting
12:01:47 Traceback (most recent call last):
12:01:47   File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 258, in do_run
12:01:47     self.cli_plugins[args.ovirtverb].do_run(args)
12:01:47   File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 180, in do_run
12:01:47     self._do_run(**vars(args))
12:01:47   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 488, in wrapper
12:01:47     return func(*args, **kwargs)
12:01:47   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 499, in wrapper
12:01:47     return func(*args, prefix=prefix, **kwargs)
12:01:47   File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 102, in do_ovirt_runtest
12:01:47     raise RuntimeError('Some tests failed')
12:01:47 RuntimeError: Some tests failed
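
By the way, the crash itself happens because api.disks.get() returns None
while the imported disk is not registered yet. A None-safe predicate along
these lines (just a sketch, not necessarily the fix you'd want in the suite;
api and disk_name are the objects already in scope in that test) would make
the poll retry instead of raising:

    # Treat "disk not visible yet" the same as "not ok yet", so that
    # testlib.assert_true_within_long() keeps retrying until its timeout.
    def disk_status_is_ok():
        disk = api.disks.get(disk_name)
        return disk is not None and disk.status.state == 'ok'

    testlib.assert_true_within_long(disk_status_is_ok)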


Thanks
Gil

On Mon, Sep 26, 2016 at 3:02 PM,  wrote:

> See 
>
> Changes:
>
> [Yaniv Kaul] Fix Glance provider list test
>
> --
> [...truncated 716 lines...]
> ##  took 1864 seconds
> ##  rc = 1
> ##
> ##! ERROR v
> ##! Last 20 log enties: logs/mocker-fedora-24-x86_64.
> fc24.basic_suite_master.sh/basic_suite_master.sh.log
> ##!
> + true
> + env_cleanup
> + echo '#'
> #
> + local res=0
> + local uuid
> + echo ' Cleaning up'
>  Cleaning up
> + [[ -e  ovirt-system-tests/deployment-basic_suite_master> ]]
> + echo '--- Cleaning with lago'
> --- Cleaning with lago
> + lago --workdir  ovirt_master_system-tests/ws/ovirt-system-tests/deployment-
> basic_suite_master> destroy --yes --all-prefixes
> + echo '--- Cleaning with lago done'
> --- Cleaning with lago done
> + [[ 0 != \0 ]]
> + echo ' Cleanup done'
>  Cleanup done
> + exit 0
> Took 1708 seconds
> ===
> ##!
> ##! ERROR ^^
> ##!
> ##
> Build step 'Execute shell' marked build as failure
> Performing Post build task...
> Match found for :.* : True
> Logical operation result is TRUE
> Running script  : #!/bin/bash -xe
> echo 'shell_scripts/system_tests.collect_logs.sh'
>
> #
> # Required jjb vars:
> #version
> #
> VERSION=master
> SUITE_TYPE=
>
> WORKSPACE="$PWD

Re: Permissions to run builds manually

2016-09-27 Thread Gil Shinar
Try now

On Tue, Sep 27, 2016 at 1:20 PM, Ala Hino  wrote:

> Hello,
>
> I created a user, ahino, in Jenkins but cannot run builds manually.
> Can you please assign the appropriate permissions to me so I can trigger
> builds manually?
>
> Thanks,
> Ala
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Unable to sign in to gerrit

2016-10-09 Thread Gil Shinar
Hi Piotr,

In the Gerrit logs I see the following error message:
review_site/logs/error_log.2016-10-08:[2016-10-08 12:45:16,341] [HTTP-749] ERROR com.google.gerrit.httpd.auth.openid.OpenIdServiceImpl : Cannot discover OpenID http://pkliczew.id.fedoraproject.org/
review_site/logs/error_log.2016-10-08:org.openid4java.discovery.yadis.YadisException: 0x706: GET failed on http://pkliczew.id.fedoraproject.org/ : 503
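
A quick way to watch for the provider recovering is to probe it directly
(plain curl, nothing oVirt-specific; it should print 200 once the endpoint
is back):

    curl -s -o /dev/null -w '%{http_code}\n' http://pkliczew.id.fedoraproject.org/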

Can you please try again later today and let me know if it works?

Thanks
Gil

On Sat, Oct 8, 2016 at 7:49 PM, Piotr Kliczewski  wrote:

> I attempted to sign in to gerrit using my fedora account [1] but when
> click sign in I see:
>
> "Provider is not supported, or was incorrectly entered."
>
> Where there any changes recently which are causing it?
>
> Thanks,
> Piotr
>
> [1] http://pkliczew.id.fedoraproject.org/
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Permission to (re-)trigger gerrit builds in jenkins

2016-10-25 Thread Gil Shinar
Hi,

Can you please try now?

Thanks
Gil

On Fri, Oct 21, 2016 at 4:07 PM, Dominik Holler  wrote:

> Hi,
> I like to have the permission to retrigger failed gerrit builds on the
> jenkins build system.
> Who can enable me to do so?
> Thanks,
> Dominik
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: ovirt-engine-4.0.5 is not treated as a monitored stable branch

2016-10-26 Thread Gil Shinar
Can you please elaborate?

Thanks
Gil

On Wed, Oct 26, 2016 at 11:50 AM, Tal Nisan  wrote:

> This patch for example:
> https://gerrit.ovirt.org/#/c/65398/
>
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: http://jenkins.ovirt.org/job/ovirt-engine_master_check-patch-fc24-x86_64/6383

2016-11-03 Thread Gil Shinar
resources.ovirt.org was down a few hours ago
Now it should be OK

Gil

On Thu, Nov 3, 2016 at 12:57 PM, Yevgeny Zaspitsky 
wrote:

> Hi,
>
> Could someone look into the build?
> It looks like it fails on some yum command, which isn't part of the patch
> it tries to check.
>
> From the console log:
>
> 22:51:47 Start: yum install
> 23:01:48 ERROR: Command failed. See logs for output.
> 23:01:48  # /usr/bin/yum --installroot /var/lib/mock/fedora-24-x86_64-434042a79eacbd41013fce53b4b4ce62-13427/root/ --releasever 24 install @buildsys-build --setopt=tsflags=nocontexts
> 23:01:48 Init took 602 seconds
>
>
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: OUTAGE: jenkins.ovirt.org is down

2016-11-10 Thread Gil Shinar
Jenkins is up and running

Sorry for the inconvenience
Gil

On Thu, Nov 10, 2016 at 9:25 AM, Eyal Edri  wrote:

> We're experience problems with the jenkins server, and its currently down.
> We are in the process of understanding the issue and are working to bring
> it back online,
>
> We will update when the service is back online, hopefully shouldn't take
> long.
>
> Infra team.
>
>
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Jenkins was down

2016-11-10 Thread Gil Shinar
Hi all,

The root cause was that the disk ran out of space.
The jobs that took up most of the space were the experimental test jobs,
which didn't have any build-discarder configuration (I sent a patch a few
minutes ago), and ovirt-live_master_create-iso-el7-x86_64, which I plan to
limit to keeping only the last three builds.
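
For reference, the discarder stanza in a jenkins-job-builder yaml looks
roughly like this (a sketch of the idea, not the exact content of the patch;
the numbers are illustrative):

    - job:
        name: ovirt-live_master_create-iso-el7-x86_64
        properties:
          - build-discarder:
              num-to-keep: 3
              artifact-num-to-keep: 3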

Sorry and thanks
Gil
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [oVirt Jenkins] test-repo_ovirt_experimental_3.6 - Build #3471 - FAILURE!

2016-11-10 Thread Gil Shinar
Got fixed.

That's what is important :-)

Thanks
Gil

On Thu, Nov 10, 2016 at 1:22 PM, Yedidyah Bar David  wrote:

> On Thu, Nov 10, 2016 at 12:14 PM,   wrote:
> > Build: http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_
> 3.6/3471/,
> > Build Number: 3471,
> > Build Status: FAILURE
>
> 10:13:59   * Collect artifacts: ERROR (in 0:00:15)
> 10:13:59   # add_hosts: ERROR (in 0:15:20)
>
> No idea why this failed, I do not think it's related to my last patch.
>
> Best,
> --
> Didi
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: All(?) CI jobs failing with errors downloading packaging

2016-11-14 Thread Gil Shinar
Seems like an issue with Fedora repos.

I'll keep an eye on it.

Thanks for the heads-up
Gil

On Mon, Nov 14, 2016 at 9:26 AM, Allon Mureinik  wrote:

> I see a lot of the following errors (AFAIK, on ALL CI jobs):
>
> DEBUG util.py:421:  http://proxy.phx.ovirt.org:
> 5000/fedora-updates/24/x86_64/u/unzip-6.0-30.fc24.x86_64.rpm: [Errno 14]
> HTTP Error 503 - Service Unavailable
> DEBUG util.py:421:  Trying other mirror.
> DEBUG util.py:421:  http://proxy.phx.ovirt.org:
> 5000/fedora-updates/24/x86_64/u/unzip-6.0-30.fc24.x86_64.rpm: [Errno 14]
> HTTP Error 503 - Service Unavailable
> DEBUG util.py:421:  Trying other mirror.
> DEBUG util.py:421:  Error downloading packages:
> DEBUG util.py:421:libselinux-2.5-9.fc24.x86_64: [Errno 256] No more
> mirrors to try.
> DEBUG util.py:421:info-6.1-3.fc24.x86_64: [Errno 256] No more mirrors
> to try.
> DEBUG util.py:421:libsolv-0.6.24-1.fc24.x86_64: [Errno 256] No more
> mirrors to try.
> DEBUG util.py:421:libstdc++-devel-6.2.1-2.fc24.x86_64: [Errno 256] No
> more mirrors to try.
> DEBUG util.py:421:4:perl-libs-5.22.2-364.fc24.x86_64: [Errno 256] No
> more mirrors to try.
> DEBUG util.py:421:nss-3.27.0-1.2.fc24.x86_64: [Errno 256] No more
> mirrors to try.
> DEBUG util.py:421:libassuan-2.4.3-1.fc24.x86_64: [Errno 256] No more
> mirrors to try.
> DEBUG util.py:421:nettle-3.2-3.fc24.x86_64: [Errno 256] No more
> mirrors to try.
> DEBUG util.py:421:4:perl-macros-5.22.2-364.fc24.x86_64: [Errno 256]
> No more mirrors to try.
> DEBUG util.py:421:1:gmp-6.1.1-1.fc24.x86_64: [Errno 256] No more
> mirrors to try.
> DEBUG util.py:421:libstdc++-6.2.1-2.fc24.x86_64: [Errno 256] No more
> mirrors to try.
> DEBUG util.py:421:libgpg-error-1.24-1.fc24.x86_64: [Errno 256] No
> more mirrors to try.
> DEBUG util.py:421:libgcrypt-1.6.6-1.fc24.x86_64: [Errno 256] No more
> mirrors to try.
> DEBUG util.py:421:python3-hawkey-0.6.3-6.fc24.x86_64: [Errno 256] No
> more mirrors to try.
> DEBUG util.py:421:libidn-1.33-1.fc24.x86_64: [Errno 256] No more
> mirrors to try.
> DEBUG util.py:421:unzip-6.0-30.fc24.x86_64: [Errno 256] No more
> mirrors to try.
> DEBUG util.py:421:libgcc-6.2.1-2.fc24.x86_64: [Errno 256] No more
> mirrors to try.
> DEBUG util.py:421:3:perl-Socket-2.024-1.fc24.x86_64: [Errno 256] No
> more mirrors to try.
> DEBUG util.py:421:4:perl-5.22.2-364.fc24.x86_64: [Errno 256] No more
> mirrors to try.
> DEBUG util.py:421:systemd-libs-229-16.fc24.x86_64: [Errno 256] No
> more mirrors to try.
> DEBUG util.py:421:systemd-229-16.fc24.x86_64: [Errno 256] No more
> mirrors to try.
> DEBUG util.py:421:pcre-8.39-6.fc24.x86_64: [Errno 256] No more mirrors
> to try.
> DEBUG util.py:421:glib2-2.48.2-1.fc24.x86_64: [Errno 256] No more
> mirrors to try.
> DEBUG util.py:421:ncurses-libs-6.0-6.20160709.fc24.x86_64: [Errno
> 256] No more mirrors to try.
> DEBUG util.py:421:redhat-rpm-config-41-2.fc24.noarch: [Errno 256] No
> more mirrors to try.
> DEBUG util.py:421:perl-IO-1.35-364.fc24.x86_64: [Errno 256] No more
> mirrors to try.
> DEBUG util.py:421:python3-dnf-1.1.10-1.fc24.noarch: [Errno 256] No
> more mirrors to try.
> DEBUG util.py:421:nss-sysinit-3.27.0-1.2.fc24.x86_64: [Errno 256] No
> more mirrors to try.
> DEBUG util.py:421:libsemanage-2.5-5.fc24.x86_64: [Errno 256] No more
> mirrors to try.
> DEBUG util.py:421:nss-pem-1.0.2-2.fc24.x86_64: [Errno 256] No more
> mirrors to try.
> DEBUG util.py:421:ncurses-6.0-6.20160709.fc24.x86_64: [Errno 256] No
> more mirrors to try.
> DEBUG util.py:421:libsepol-2.5-8.fc24.x86_64: [Errno 256] No more
> mirrors to try.
> DEBUG util.py:421:libtasn1-4.9-1.fc24.x86_64: [Errno 256] No more
> mirrors to try.
> DEBUG util.py:421:gcc-c++-6.2.1-2.fc24.x86_64: [Errno 256] No more
> mirrors to try.
> DEBUG util.py:421:gcc-6.2.1-2.fc24.x86_64: [Errno 256] No more mirrors
> to try.
> DEBUG util.py:421:nss-tools-3.27.0-1.2.fc24.x86_64: [Errno 256] No
> more mirrors to try.
> DEBUG util.py:421:kernel-headers-4.8.6-201.fc24.x86_64: [Errno 256]
> No more mirrors to try.
> DEBUG util.py:421:libtool-ltdl-2.4.6-12.fc24.x86_64: [Errno 256] No
> more mirrors to try.
> DEBUG util.py:421:lua-5.3.3-2.fc24.x86_64: [Errno 256] No more mirrors
> to try.
> DEBUG util.py:421:libreport-filesystem-2.7.2-1.fc24.x86_64: [Errno
> 256] No more mirrors to try.
> DEBUG util.py:421:ncurses-base-6.0-6.20160709.fc24.noarch: [Errno
> 256] No more mirrors to try.
> DEBUG util.py:421:krb5-libs-1.14.4-4.fc24.x86_64: [Errno 256] No more
> mirrors to try.
> DEBUG util.py:421:1:openssl-libs-1.0.2j-1.fc24.x86_64: [Errno 256] No
> more mirrors to try.
> DEBUG util.py:421:libgomp-6.2.1-2.fc24.x86_64: [Errno 256] No more
> mirrors to try.
> DEBUG util.py:421:libcurl-7.47.1-9.fc24.x86_64: [Errno 256] No more
> mirrors to try.
> DEBUG util.py:421:perl-Errno-1.23-364.fc24.x86_64: [Errno 256] No
> more mirrors to try.
> DEBUG util.py

Re: vdsm_master_build-artifacts-fc24-ppc64le job keeps failing

2016-11-14 Thread Gil Shinar
I'm still not sure about that, and I still haven't gotten a response from
Michal. According to the Fedora site, they do support ppc64le.

On Mon, Nov 14, 2016 at 11:54 AM, Eyal Edri  wrote:

> We don't support PPC64LE on fedora, so I suggest to remove this job.
>
> On Mon, Nov 14, 2016 at 11:53 AM, Irit Goihman 
> wrote:
>
>> it's triggered whenever a patch is merged
>>
>> On Mon, Nov 14, 2016 at 10:47 AM, Eyal Edri  wrote:
>>
>>> I'm not sure it should run at all, do we support ppc64le on fedora?
>>>
>>> On Mon, Nov 14, 2016 at 10:30 AM, Irit Goihman 
>>> wrote:
>>>
 Hi,
 It seems like there's a problem with this job.

 All the jobs fail because of this error:


 Start: yum install
 15:11:34 ERROR: Command failed. See logs for output.


 Can you please check?


 Thanks,


 --
 Irit Goihman
 Software Engineer
 Red Hat Israel Ltd.

 ___
 Infra mailing list
 Infra@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/infra


>>>
>>>
>>> --
>>> Eyal Edri
>>> Associate Manager
>>> RHV DevOps
>>> EMEA ENG Virtualization R&D
>>> Red Hat Israel
>>>
>>> phone: +972-9-7692018
>>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>>
>>
>>
>>
>> --
>> Irit Goihman
>> Software Engineer
>> Red Hat Israel Ltd.
>>
>
>
>
> --
> Eyal Edri
> Associate Manager
> RHV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: vdsm_master_build-artifacts-fc24-ppc64le job keeps failing

2016-11-14 Thread Gil Shinar
Michal,

For the guest agent, do we need ppc64 and ppc64le for fc24? The problem is
that I can't find those arches in the repo we are downloading from:
https://fedorapeople.org/groups/virt/virt-preview/fedora-24/

Gil

On Mon, Nov 14, 2016 at 12:00 PM, Michal Skrivanek 
wrote:

>
> On 14 Nov 2016, at 11:57, Gil Shinar  wrote:
>
> I'm still not sure about that and still didn't get any response from
> Michal.
> According to fedora site, they do support ppc64le
>
>
> please remove it
> not worth taking care of that
> we just need el7 ppc64le and we do need ppc64 and ppc64le for guest agent
> which IIRC doesn’t have this automation
>
>
>
> On Mon, Nov 14, 2016 at 11:54 AM, Eyal Edri  wrote:
>
>> We don't support PPC64LE on fedora, so I suggest to remove this job.
>>
>> On Mon, Nov 14, 2016 at 11:53 AM, Irit Goihman 
>> wrote:
>>
>>> it's triggered whenever a patch is merged
>>>
>>> On Mon, Nov 14, 2016 at 10:47 AM, Eyal Edri  wrote:
>>>
>>>> I'm not sure it should run at all, do we support ppc64le on fedora?
>>>>
>>>> On Mon, Nov 14, 2016 at 10:30 AM, Irit Goihman 
>>>> wrote:
>>>>
>>>>> Hi,
>>>>> It seems like there's a problem with this job.
>>>>>
>>>>> All the jobs fail because of this error:
>>>>>
>>>>>
>>>>> Start: yum install
>>>>> 15:11:34 ERROR: Command failed. See logs for output.
>>>>>
>>>>>
>>>>> Can you please check?
>>>>>
>>>>>
>>>>> Thanks,
>>>>>
>>>>>
>>>>> --
>>>>> Irit Goihman
>>>>> Software Engineer
>>>>> Red Hat Israel Ltd.
>>>>>
>>>>> ___
>>>>> Infra mailing list
>>>>> Infra@ovirt.org
>>>>> http://lists.ovirt.org/mailman/listinfo/infra
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Eyal Edri
>>>> Associate Manager
>>>> RHV DevOps
>>>> EMEA ENG Virtualization R&D
>>>> Red Hat Israel
>>>>
>>>> phone: +972-9-7692018
>>>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>>>
>>>
>>>
>>>
>>> --
>>> Irit Goihman
>>> Software Engineer
>>> Red Hat Israel Ltd.
>>>
>>
>>
>>
>> --
>> Eyal Edri
>> Associate Manager
>> RHV DevOps
>> EMEA ENG Virtualization R&D
>> Red Hat Israel
>>
>> phone: +972-9-7692018
>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>
>> ___
>> Infra mailing list
>> Infra@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/infra
>>
>>
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: vdsm_master_build-artifacts-fc24-ppc64le job keeps failing

2016-11-14 Thread Gil Shinar
So Sandro,

It is your call

Gil

On Mon, Nov 14, 2016 at 12:38 PM, Michal Skrivanek 
wrote:

>
> On 14 Nov 2016, at 12:30, Gil Shinar  wrote:
>
> Michal,
>
> For guest agent we need ppc64 and ppc64le for fc24? Problem is I can't
> find these repos where we are downloading from:
> https://fedorapeople.org/groups/virt/virt-preview/fedora-24/
>
>
> it’s not relevant for that project. no need for virt-rpreview
>
>
> Gil
>
> On Mon, Nov 14, 2016 at 12:00 PM, Michal Skrivanek 
> wrote:
>
>>
>> On 14 Nov 2016, at 11:57, Gil Shinar  wrote:
>>
>> I'm still not sure about that and still didn't get any response from
>> Michal.
>> According to fedora site, they do support ppc64le
>>
>>
>> please remove it
>> not worth taking care of that
>> we just need el7 ppc64le and we do need ppc64 and ppc64le for guest agent
>> which IIRC doesn’t have this automation
>>
>>
>>
>> On Mon, Nov 14, 2016 at 11:54 AM, Eyal Edri  wrote:
>>
>>> We don't support PPC64LE on fedora, so I suggest to remove this job.
>>>
>>> On Mon, Nov 14, 2016 at 11:53 AM, Irit Goihman 
>>> wrote:
>>>
>>>> it's triggered whenever a patch is merged
>>>>
>>>> On Mon, Nov 14, 2016 at 10:47 AM, Eyal Edri  wrote:
>>>>
>>>>> I'm not sure it should run at all, do we support ppc64le on fedora?
>>>>>
>>>>> On Mon, Nov 14, 2016 at 10:30 AM, Irit Goihman 
>>>>> wrote:
>>>>>
>>>>>> Hi,
>>>>>> It seems like there's a problem with this job.
>>>>>>
>>>>>> All the jobs fail because of this error:
>>>>>>
>>>>>>
>>>>>> Start: yum install
>>>>>> 15:11:34 ERROR: Command failed. See logs for output.
>>>>>>
>>>>>>
>>>>>> Can you please check?
>>>>>>
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Irit Goihman
>>>>>> Software Engineer
>>>>>> Red Hat Israel Ltd.
>>>>>>
>>>>>> ___
>>>>>> Infra mailing list
>>>>>> Infra@ovirt.org
>>>>>> http://lists.ovirt.org/mailman/listinfo/infra
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Eyal Edri
>>>>> Associate Manager
>>>>> RHV DevOps
>>>>> EMEA ENG Virtualization R&D
>>>>> Red Hat Israel
>>>>>
>>>>> phone: +972-9-7692018
>>>>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Irit Goihman
>>>> Software Engineer
>>>> Red Hat Israel Ltd.
>>>>
>>>
>>>
>>>
>>> --
>>> Eyal Edri
>>> Associate Manager
>>> RHV DevOps
>>> EMEA ENG Virtualization R&D
>>> Red Hat Israel
>>>
>>> phone: +972-9-7692018
>>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>>
>>> ___
>>> Infra mailing list
>>> Infra@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/infra
>>>
>>>
>>
>>
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: vdsm_master_build-artifacts-fc24-ppc64le job keeps failing

2016-11-15 Thread Gil Shinar
Sent a patch <https://gerrit.ovirt.org/#/c/66644/>. Added you all as
reviewers

Gil

On Mon, Nov 14, 2016 at 1:24 PM, Gil Shinar  wrote:

> So Sandro,
>
> It is your call
>
> Gil
>
> On Mon, Nov 14, 2016 at 12:38 PM, Michal Skrivanek 
> wrote:
>
>>
>> On 14 Nov 2016, at 12:30, Gil Shinar  wrote:
>>
>> Michal,
>>
>> For guest agent we need ppc64 and ppc64le for fc24? Problem is I can't
>> find these repos where we are downloading from:
>> https://fedorapeople.org/groups/virt/virt-preview/fedora-24/
>>
>>
>> it’s not relevant for that project. no need for virt-rpreview
>>
>>
>> Gil
>>
>> On Mon, Nov 14, 2016 at 12:00 PM, Michal Skrivanek 
>> wrote:
>>
>>>
>>> On 14 Nov 2016, at 11:57, Gil Shinar  wrote:
>>>
>>> I'm still not sure about that and still didn't get any response from
>>> Michal.
>>> According to fedora site, they do support ppc64le
>>>
>>>
>>> please remove it
>>> not worth taking care of that
>>> we just need el7 ppc64le and we do need ppc64 and ppc64le for guest
>>> agent which IIRC doesn’t have this automation
>>>
>>>
>>>
>>> On Mon, Nov 14, 2016 at 11:54 AM, Eyal Edri  wrote:
>>>
>>>> We don't support PPC64LE on fedora, so I suggest to remove this job.
>>>>
>>>> On Mon, Nov 14, 2016 at 11:53 AM, Irit Goihman 
>>>> wrote:
>>>>
>>>>> it's triggered whenever a patch is merged
>>>>>
>>>>> On Mon, Nov 14, 2016 at 10:47 AM, Eyal Edri  wrote:
>>>>>
>>>>>> I'm not sure it should run at all, do we support ppc64le on fedora?
>>>>>>
>>>>>> On Mon, Nov 14, 2016 at 10:30 AM, Irit Goihman 
>>>>>> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>> It seems like there's a problem with this job.
>>>>>>>
>>>>>>> All the jobs fail because of this error:
>>>>>>>
>>>>>>>
>>>>>>> Start: yum install
>>>>>>> 15:11:34 ERROR: Command failed. See logs for output.
>>>>>>>
>>>>>>>
>>>>>>> Can you please check?
>>>>>>>
>>>>>>>
>>>>>>> Thanks,
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Irit Goihman
>>>>>>> Software Engineer
>>>>>>> Red Hat Israel Ltd.
>>>>>>>
>>>>>>> ___
>>>>>>> Infra mailing list
>>>>>>> Infra@ovirt.org
>>>>>>> http://lists.ovirt.org/mailman/listinfo/infra
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Eyal Edri
>>>>>> Associate Manager
>>>>>> RHV DevOps
>>>>>> EMEA ENG Virtualization R&D
>>>>>> Red Hat Israel
>>>>>>
>>>>>> phone: +972-9-7692018
>>>>>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Irit Goihman
>>>>> Software Engineer
>>>>> Red Hat Israel Ltd.
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Eyal Edri
>>>> Associate Manager
>>>> RHV DevOps
>>>> EMEA ENG Virtualization R&D
>>>> Red Hat Israel
>>>>
>>>> phone: +972-9-7692018
>>>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>>>
>>>> ___
>>>> Infra mailing list
>>>> Infra@ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/infra
>>>>
>>>>
>>>
>>>
>>
>>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Change in ovirt-engine[master]: packaging: Add ovirt-engine-hosts-ansible-inventory

2016-11-17 Thread Gil Shinar
Might be some leftovers from my "permanent maven cache" feature.
I'll chown the .m2 folder to jenkins:jenkins on this slave.
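
Something along these lines, assuming the usual slave layout (the path is
the one from the error message above):

    # reclaim ownership of the maven cache for the jenkins user
    chown -R jenkins:jenkins /home/jenkins/.m2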

Gil

On Thu, Nov 17, 2016 at 4:00 PM, Yedidyah Bar David  wrote:

> On Thu, Nov 17, 2016 at 3:42 PM, Code Review  wrote:
> > From Jenkins CI:
> >
> > Jenkins CI has posted comments on this change.
> >
> > Change subject: packaging: Add ovirt-engine-hosts-ansible-inventory
> > ..
> >
> >
> > Patch Set 3: Continuous-Integration-1
> >
> > Build Failed
> >
> > http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-
> from-4.0_el7_created/8854/ : FAILURE
>
> rpmbuild.log has:
>
> [ERROR] Failed to execute goal
> org.apache.maven.plugins:maven-install-plugin:2.3.1:install
> (default-install) on project ovirt-findbugs-filters: Failed to install
> artifact org.ovirt.engine:ovirt-findbugs-filters:jar:4.1.0-SNAPSHOT:
> /home/jenkins/.m2/repository/org/ovirt/engine/ovirt-
> findbugs-filters/4.1.0-SNAPSHOT/ovirt-findbugs-filters-4.1.0-SNAPSHOT.jar
> (Permission denied) -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with
> the -e switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions,
> please read the following articles:
> [ERROR] [Help 1]
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
>
> Any idea?
>
> >
> > http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-
> from-3.6_el7_created/6343/ : FAILURE
> >
> > http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-
> from-master_el7_created/8846/ : SUCCESS
> >
> > http://jenkins.ovirt.org/job/ovirt-engine_master_check-
> patch-fc24-x86_64/7439/ : SUCCESS
> >
> > http://jenkins.ovirt.org/job/ovirt-engine_master_check-
> patch-el7-x86_64/13180/ : SUCCESS
> >
> > --
> > To view, visit https://gerrit.ovirt.org/66999
> > To unsubscribe, visit https://gerrit.ovirt.org/settings
> >
> > Gerrit-MessageType: comment
> > Gerrit-Change-Id: I62f5c9c67cf83deee35f7b12e2576bb1a520c48a
> > Gerrit-PatchSet: 3
> > Gerrit-Project: ovirt-engine
> > Gerrit-Branch: master
> > Gerrit-Owner: Yedidyah Bar David 
> > Gerrit-Reviewer: Jenkins CI
> > Gerrit-Reviewer: gerrit-hooks 
> > Gerrit-HasComments: No
>
>
>
> --
> Didi
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: gerrit login troubles

2016-11-21 Thread Gil Shinar
Adding infra support

On Tue, Nov 22, 2016 at 9:26 AM, Francesco Romani 
wrote:

> Hi Infra,
>
> I used to authenticate on gerrit using FAS provider
> (https://admin.fedoraproject.org/accounts)
> and my FAS account.
>
> In the recent days it became too shaky, so I decided to try out
> the Google OAuth2 (gerrit-oauth-provider plugin) method, linking
> my @redhat.com account and the related gmail account.
>
> Apparently this caused split brain on gerrit, so now I have duplicated
> entry and I cannot be added to reviews.
> Plus:
> 1. if I log through fedora accounts (old method) I can review as usual
> 2. if I log through google/redhat, I only have +/-1 score and no special
> powers
>
> I'd like to switch to google oauth2 entirely or, if not possible, to solve
> the split brain issue.
>
> Can anybody help? :)
>
> Bests and thanks,
>
> --
> Francesco Romani
> Red Hat Engineering Virtualization R & D
> Phone: 8261328
> IRC: fromani
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [ovirt-devel] 4.0.x dependency failure (vdsm-jsonrpc-java)

2016-12-11 Thread Gil Shinar
If I understand the issue correctly, the failures are for fc23 and not for
el7.
Build-artifacts run #23, the one that built the version bump, failed for
fc23 [1] because of test failures.


[1]
http://jenkins.ovirt.org/job/vdsm-jsonrpc-java_4.0_build-artifacts-fc23-x86_64/23/console

On Sun, Dec 11, 2016 at 11:18 AM, Eyal Edri  wrote:

> Adding infra as well.
>
> I see a very strange thing happening on the build artifacts jobs:
>
> On [1] we see a successful build of 1.2.10 built in build-artifacts, but
> on the 2 patches merged after it, its back to 1.2.9 [2].
> Is it possible that the 2 patches merged after the version bump weren't
> rebased on the version branch and were built using older code?
> I think what we're seeing is a race of 3 patches merged all at the same
> time ( jenkins shows same datetime on the builds ) and it might be that the
> 2 builds that show older version
> run before the version bump was done, even if it shows otherwise on CI.
>
> I've run manually the job again and it nows produces 1.2.10 as expected
> [3], so the job should work once the rpms are deployed ( 30 min ~ )
>
>
> I believe this is one of the things Zuul will solve, I can't think on
> something we can do at the moment to prevents such issues,
>
> Barak - any ideas?
>
> The project has 'fast forward only' mode in Gerrit.
>
> [1] http://jenkins.ovirt.org/job/vdsm-jsonrpc-java_4.0_
> build-artifacts-el7-x86_64/23/
> [2] http://jenkins.ovirt.org/job/vdsm-jsonrpc-java_4.0_
> build-artifacts-el7-x86_64/24/
> [3] http://jenkins.ovirt.org/job/vdsm-jsonrpc-java_4.0_
> build-artifacts-el7-x86_64/26/console
>
> On Sun, Dec 11, 2016 at 11:01 AM, Piotr Kliczewski 
> wrote:
>
>> Yes, version bump was merged.
>>
>> 11 gru 2016 09:57 "Eyal Edri"  napisał(a):
>>
>>> Was the version bumped pushed to the repo itself in Gerrit?
>>> I.e - does build-artifacts builds now the new version 1.2.10?
>>>
>>> On Fri, Dec 9, 2016 at 9:33 PM, Sandro Bonazzola 
>>> wrote:
>>>


 Il 09/Dic/2016 18:45, "Piotr Kliczewski"  ha
 scritto:

 Here [1] is the manual build. We need to check publisher.



 Manual builds are not published by nightly publisher. Manual builds are
 meant for releases or for testing. I haven't the laptop with me tonight and
 tomorrow. Didi, Lev can you handle Sunday morning?




 Thanks,
 Piotr

 [1] http://jenkins.ovirt.org/job/vdsm-jsonrpc-java_any_build
 -artifacts-manual/30/

 9 gru 2016 18:36 "Piotr Kliczewski"  napisał(a):

> Both us and ds rpms were built. Is it possible to check why new
> version was not published to the repo?
>
> 9 gru 2016 18:07 "Martin Perina"  napisał(a):
>
>> Piotr, are you able to publish new vdsm-jsonrpc-java 1.2.10 into
>> ovirt-4.0-snapshot repo [1]? Or only someone from integration team can do
>> that?
>>
>> Thanks
>>
>> Martin
>>
>> [1] http://plain.resources.ovirt.org/pub/ovirt-4.0-snapshot/rpm
>>
>>
>> On Fri, Dec 9, 2016 at 4:54 PM, Yaniv Kaul  wrote:
>>
>>> See [1]:
>>>
>>> + yum install --nogpgcheck -y --downloaddir=/dev/shm ovirt-engine ovirt-log-collector 'ovirt-engine-extension-aaa-ldap*'
>>> 13:55:25 Error: Package: ovirt-engine-backend-4.0.7-0.0.master.20161208233116.git3dff5ce.el7.centos.noarch (alocalsync)
>>> 13:55:25    Requires: vdsm-jsonrpc-java >= 1.2.10
>>> 13:55:25    Available: vdsm-jsonrpc-java-1.2.9-1.20161208102442.gite5c0c8e.el7.centos.noarch (alocalsync)
>>> 13:55:25        vdsm-jsonrpc-java = 1.2.9-1.20161208102442.gite5c0c8e.el7.centos
>>>
>>>
>>>
>>>
>>> [1] 
>>> http://jenkins.ovirt.org/job/ovirt-system-tests_master_check-patch-fc24-x86_64/319/console
>>>
>>>
>>> ___
>>> Devel mailing list
>>> de...@ovirt.org
>>> http://lists.phx.ovirt.org/mailman/listinfo/devel
>>>
>>
>>
 ___
 Devel mailing list
 de...@ovirt.org
 http://lists.phx.ovirt.org/mailman/listinfo/devel



 ___
 Devel mailing list
 de...@ovirt.org
 http://lists.phx.ovirt.org/mailman/listinfo/devel

>>>
>>>
>>>
>>> --
>>> Eyal Edri
>>> Associate Manager
>>> RHV DevOps
>>> EMEA ENG Virtualization R&D
>>> Red Hat Israel
>>>
>>> phone: +972-9-7692018
>>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>>
>>
>
>
> --
> Eyal Edri
> Associate Manager
> RHV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.phx.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.phx.ovirt.org/ma

Re: Jenkins permissions

2016-12-12 Thread Gil Shinar
Can you please try now?

Gil

On Mon, Dec 12, 2016 at 3:43 PM, Evgenia Tokar  wrote:

> Hi!
>
> I need permissions for retrigering jobs in Jenkins.
>
> username: jtokar
> email: jto...@redhat.com
>
> Thanks,
> Jenny
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.phx.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.phx.ovirt.org/mailman/listinfo/infra


Re: Jenkins user - manual trigger

2016-12-14 Thread Gil Shinar
Try now

On Wed, Dec 14, 2016 at 8:52 AM, Fred Rolland  wrote:

> Hi,
>
> I would like to have a user to the Jenkins in order to be able to trigger
> manually.
>
> http://jenkins.ovirt.org/login?from=%2Fgerrit_manual_trigger%2F
>
> Thanks,
>
> Freddy
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.phx.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.phx.ovirt.org/mailman/listinfo/infra


Re: SQLException on http://artifactory.ovirt.org/

2016-12-14 Thread Gil Shinar
Can you please retry?

Thanks
Gil

On Tue, Dec 13, 2016 at 2:27 PM, Dominik Holler  wrote:

> Hi,
> on access to
> http://artifactory.ovirt.org/artifactory/ovirt-mirror/org/
> apache/maven/surefire/surefire-junit4/2.7.2/surefire-junit4-2.7.2.pom
> the following error message is responded:
> {
>   "errors" : [ {
> "status" : 500,
> "message" : "Could not process download request:
> java.sql.SQLException: An SQL data change is not permitted for a
> read-only connection, user or database." } ] }
>
> This seems to be critical for ovirt-4.0 CI build.
>
> Who can fix this?
>
> Thanks, Dominik
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.phx.ovirt.org/mailman/listinfo/infra
>
___
Infra mailing list
Infra@ovirt.org
http://lists.phx.ovirt.org/mailman/listinfo/infra


Re: Maintainership request

2016-12-25 Thread Gil Shinar
Adding infra-support so a Jira ticket will be opened.

On Thu, Dec 22, 2016 at 3:16 PM, Amit Aviram  wrote:

> Hi
> We need to add the following users as maintainers of ovirt-imageio please:
>
>- derez
>- laravot
>
> Thanks, Amit
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: changes in gerrit - backport does not include reviewers automatically

2016-12-25 Thread Gil Shinar
Adding infra-support so a Jira task will be opened.

On Thu, Dec 22, 2016 at 4:25 PM, Yaniv Bronheim  wrote:

> A backport with same change-id as in master branch used to add the
> reviewer of the original patch automatically. and also the reviewed-on
> label with link to the original patch.
>
> can we add it back?
>
> --
> *Yaniv Bronhaim.*
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: ost host addition failure

2016-12-27 Thread Gil Shinar
After the fix below was merged, we still had an issue with vm_run, but that
has been fixed as well.
Master experimental is now working properly.
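
If I read vdsm's versioning right, the stale-tag symptom is easy to spot
from inside a vdsm checkout (a quick sanity check, not part of the fix):

    # the build version is derived from the nearest tag, so this should
    # now report v4.20.0-<n>-g<sha> rather than an older tag
    git describe --tags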

Thanks Dan
Gil

On Tue, Dec 27, 2016 at 10:24 AM, Dan Kenigsberg  wrote:

> On Tue, Dec 27, 2016 at 9:59 AM, Eyal Edri  wrote:
> >
> >
> > On Tue, Dec 27, 2016 at 9:56 AM, Eyal Edri  wrote:
> >>
> >> Any updates?
> >> The tests are still failing on vdsmd won't start from Sunday... master
> >> repos havn't been refreshed for a few days due to this.
> >>
> >> from host deploy log: [1]
> >> basic-suite-master-engine/_var_log_ovirt-engine/host-
> deploy/ovirt-host-deploy-20161227012930-192.168.201.4-14af2bf0.log
> >> the job links [2]
> >>
> >>
> >>
> >>
> >>
> >> [1]
> >> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_
> master/lastCompletedBuild/artifact/exported-artifacts/
> basic_suite_master.sh-el7/exported-artifacts/test_logs/
> basic-suite-master/post-002_bootstrap.py/lago-
> >
> >
> > Now with the full link:
> > http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_
> master/lastCompletedBuild/artifact/exported-artifacts/
> basic_suite_master.sh-el7/exported-artifacts/test_logs/
> basic-suite-master/post-002_bootstrap.py/lago-basic-suite-
> master-engine/_var_log_ovirt-engine/host-deploy/ovirt-host-
> deploy-20161227012930-192.168.201.4-14af2bf0.log
> >
> >>
> >>
> >>
> >>
> >> 016-12-27 01:29:29 DEBUG otopi.plugins.otopi.services.systemd
> >> plugin.execute:921 execute-output: ('/bin/systemctl', 'start',
> >> 'vdsmd.service') stdout:
> >>
> >> 2016-12-27 01:29:29 DEBUG otopi.plugins.otopi.services.systemd
> >> plugin.execute:926 execute-output: ('/bin/systemctl', 'start',
> >> 'vdsmd.service') stderr:
> >> A dependency job for vdsmd.service failed. See 'journalctl -xe' for
> >> details.
> >>
> >> 2016-12-27 01:29:29 DEBUG otopi.context context._executeMethod:142
> method
> >> exception
> >> Traceback (most recent call last):
> >>   File "/tmp/ovirt-QZ1ucxWFfm/pythonlib/otopi/context.py", line 132, in
> >> _executeMethod
> >> method['method']()
> >>   File
> >> "/tmp/ovirt-QZ1ucxWFfm/otopi-plugins/ovirt-host-deploy/
> vdsm/packages.py",
> >> line 209, in _start
> >> self.services.state('vdsmd', True)
> >>   File "/tmp/ovirt-QZ1ucxWFfm/otopi-plugins/otopi/services/systemd.py",
> >> line 141, in state
> >> service=name,
> >> RuntimeError: Failed to start service 'vdsmd'
> >> 2016-12-27 01:29:29 ERROR otopi.context context._executeMethod:151
> Failed
> >> to execute stage 'Closing up': Failed to start service 'vdsmd'
> >> 2016-12-27 01:29:29 DEBUG otopi.context context.dumpEnvironment:760
> >> ENVIRONMENT DUMP - BEGIN
> >> 2016-12-27 01:29:29 DEBUG otopi.context context.dumpEnvironment:770 ENV
> >> BASE/error=bool:'True'
> >> 2016-12-27 01:29:29 DEBUG otopi.context context.dumpEnvironment:770 ENV
> >> BASE/excep
> >>
> >> [2]
> >> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_
> master/lastCompletedBuild/testReport/
>
>
> In the log I see
>
> Processing package vdsm-4.20.0-7.gitf851d1b.el7.centos.x86_64
>
> which is from Dec 22 (last Thursday). This is because of us missing a
> master-branch tag. v4.20.0 wrongly tagged on the same commit as that
> of v4.19.1, removed, and never placed properly.
>
> I've re-pushed v4.20.0 properly, and now merged a patch to trigger
> build-artifacts in master.
> http://jenkins.ovirt.org/job/vdsm_master_build-artifacts-el7-x86_64/1544/
>
> When this is done, could you use it to take the artifacts and try again?
>
> Regards,
> Dan.
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: CI gives +1 on gerrit as a response to 'ci please build'.

2017-01-16 Thread Gil Shinar
Edy showed me the issue; I don't think we thought about that.
'ci please build' actually executes the build-artifacts jobs, and when they
succeed they set the Continuous-Integration flag to +1.

Can we control it from Gerrit?

On Mon, Jan 16, 2017 at 3:42 PM, Eyal Edri  wrote:

> Can you provide links with example?
>
> On Mon, Jan 16, 2017 at 3:34 PM, Edward Haas  wrote:
>
>> Hi,
>>
>> When issuing 'ci please build' to generate artifacts, on success it sets
>> Continuous-Integration to +1.
>>
>> It should not do that.
>>
>> Thanks,
>> Edy.
>>
>> ___
>> Infra mailing list
>> Infra@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/infra
>>
>>
>
>
> --
> Eyal Edri
> Associate Manager
> RHV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
>
> phone: +972-9-7692018 <+972%209-769-2018>
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: CI gives +1 on gerrit as a response to 'ci please build'.

2017-01-16 Thread Gil Shinar
You can actually do it from the job's configuration in the Jenkins UI.

I'm not sure how we can do it in the yamls; I'll check whether it's feasible.
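
If jenkins-job-builder exposes the same knob I saw in the UI, it should be
the skip-vote setting of the gerrit trigger, roughly like this (a sketch,
untested; the trigger name, project, branch and comment values here are
placeholders):

    - trigger:
        name: build-on-demand-no-vote
        triggers:
          - gerrit:
              trigger-on:
                - comment-added-contains-event:
                    comment-contains-value: 'ci please build'
              projects:
                - project-compare-type: 'PLAIN'
                  project-pattern: 'vdsm'
                  branches:
                    - branch-compare-type: 'PLAIN'
                      branch-pattern: 'master'
              skip-vote:
                successful: true
                failed: true
                unstable: true
                notbuilt: true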

On Mon, Jan 16, 2017 at 4:14 PM, Eyal Edri  wrote:

>
>
> On Mon, Jan 16, 2017 at 4:05 PM, Gil Shinar  wrote:
>
>> Edy showed me the issue. I think we haven't thought about that.
>> 'ci please build' actually executes build artifacts jobs and when they
>> succeed, they change the "continuous integration" flag to +1.
>>
>> Can we control it from the Gerrit?
>>
>
> I couldn't find any reference for response from Jenkins to Grrit in the
> yaml templates in std CI, so not sure its configurable per job, it might be
> on the main Jenkins configuration.
> Barak - do you know where the grading is defined for Gerrit Trigger in
> YAML?
>
>
>>
>> On Mon, Jan 16, 2017 at 3:42 PM, Eyal Edri  wrote:
>>
>>> Can you provide links with example?
>>>
>>> On Mon, Jan 16, 2017 at 3:34 PM, Edward Haas  wrote:
>>>
>>>> Hi,
>>>>
>>>> When issuing 'ci please build' to generate artifacts, on success it
>>>> sets Continuous-Integration to +1.
>>>>
>>>> It should not do that.
>>>>
>>>> Thanks,
>>>> Edy.
>>>>
>>>> ___
>>>> Infra mailing list
>>>> Infra@ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/infra
>>>>
>>>>
>>>
>>>
>>> --
>>> Eyal Edri
>>> Associate Manager
>>> RHV DevOps
>>> EMEA ENG Virtualization R&D
>>> Red Hat Israel
>>>
>>> phone: +972-9-7692018 <+972%209-769-2018>
>>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>>
>>> ___
>>> Infra mailing list
>>> Infra@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/infra
>>>
>>>
>>
>
>
> --
> Eyal Edri
> Associate Manager
> RHV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
>
> phone: +972-9-7692018 <+972%209-769-2018>
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Need a topic branch for experimental development

2017-02-05 Thread Gil Shinar
Adding infra-support so a ticket will be opened

On Sun, Feb 5, 2017 at 1:17 PM, Roy Golan  wrote:

> Hi infra,
>
> I want a new gerrit branch that will make it easier to collaborate with
> more developers on experimental, edgy changes.
>
>
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Jenkins whitelist

2017-02-21 Thread Gil Shinar
Adding infra-support to open a ticket

On Tue, Feb 21, 2017 at 6:38 PM, Denis Chaplygin 
wrote:

> Hello!
>
> I suppose I should be in the whitelist:
>
> Patch Set 1:
>
> No Builds Executed
>
> http://jenkins.ovirt.org/job/ovirt-hosted-engine-ha_master_c
> heck-patch-el7-x86_64/139/ : To avoid overloading the infrastructure, a
> whitelist for
> running gerrit triggered jobs has been set in place, if
> you feel like you should be in it, please contact infra at
> ovirt dot org.
>
>
> http://jenkins.ovirt.org/job/ovirt-hosted-engine-ha_master_c
> heck-patch-fc25-x86_64/25/ : To avoid overloading the infrastructure, a
> whitelist for
> running gerrit triggered jobs has been set in place, if
> you feel like you should be in it, please contact infra at
> ovirt dot org.
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Whitelist for gerrit triggered jobs

2017-02-22 Thread Gil Shinar
Adding infra-support for a ticket to be open

On Wed, Feb 22, 2017 at 4:42 PM, Shmuel Melamud  wrote:

> Hi!
>
> Please, add me to the whitelist for running gerrit triggered jobs. The
> following jobs were not built:
>
> http://jenkins.ovirt.org/job/vdsm_master_check-patch-fc25-x86_64/1671/
> http://jenkins.ovirt.org/job/vdsm_master_check-patch-el7-x86_64/7535/
>
> They should be built to check the patch.
>
> Shmuel
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Unable to browse engine patches on gerrit

2017-03-05 Thread Gil Shinar
Adding infra-support so a ticket will be opened


On Sun, Mar 5, 2017 at 10:05 AM, Arik Hadas  wrote:

> Hi,
>
> There's an error while trying to browse patches in gerrit. It seems
> specific to patches for ovirt-engine, as patches for vdsm can be browsed
> just fine. For example, trying to open https://gerrit.ovirt.org/#/c/73140/
> returns error 500.
> Please check.
> Thanks in advance.
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Java exception when pushing changes to Gerrit

2017-04-19 Thread Gil Shinar
I've got the following error message from icinga exactly 3 hours ago:
PROBLEM Service Alert: gerrit.ovirt.org/Gerrit Open Files is CRITICAL

Looks like this is what caused the issue.
Evgheni, do you think we can do something about that?
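
For reference, the number icinga alerts on can be reproduced on the Gerrit
host with a few lines of Python (a sketch; it assumes we can match the Gerrit
JVM with pgrep and read its /proc entry, and the process pattern is a guess):

# Sketch: count the open file descriptors of the Gerrit JVM (run as root
# on the Gerrit host). The pgrep pattern below is a guess; adjust as needed.
import os
import subprocess

pid = subprocess.check_output(["pgrep", "-f", "GerritCodeReview"]).split()[0]
open_fds = len(os.listdir("/proc/%s/fd" % pid.decode()))
print("gerrit open files: %d" % open_fds)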

Thanks
Gil

On Wed, Apr 19, 2017 at 1:44 PM, Eyal Edri  wrote:

> Adding infra-support to open a ticket as well.
>
> On Wed, Apr 19, 2017 at 1:42 PM, Yaniv Kaul  wrote:
>
>> [ykaul@ykaul ovirt-system-tests]$ git review -r origin master
>> remote: Processing changes: refs: 2, done
>> To ssh://gerrit.ovirt.org:29418/ovirt-system-tests.git
>>  ! [remote rejected] HEAD -> refs/publish/master (internal server error:
>> java.lang.IllegalArgumentException: expected one element but was:
>> <160, 1001032>)
>> error: failed to push some refs to 'ssh://yk...@gerrit.ovirt.org:
>> 29418/ovirt-system-tests.git'
>>
>>
>> It does seem to be pushed, though. Any ideas?
>> Y.
>>
>> ___
>> Infra mailing list
>> Infra@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/infra
>>
>>
>
>
> --
>
> Eyal edri
>
>
> ASSOCIATE MANAGER
>
> RHV DevOps
>
> EMEA VIRTUALIZATION R&D
>
>
> Red Hat EMEA 
>  TRIED. TESTED. TRUSTED. 
> phone: +972-9-7692018 <+972%209-769-2018>
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [oVirt Jenkins] ovirt_master_hc-system-tests - Build # 73 - Failure!

2017-04-20 Thread Gil Shinar
Hi,

There are too many errors here for me to understand what the real issue is:
http://jenkins.ovirt.org/job/ovirt_master_hc-system-tests/73

Please assist
Thanks
Gil

On Thu, Apr 20, 2017 at 8:51 AM,  wrote:

> Project: http://jenkins.ovirt.org/job/ovirt_master_hc-system-tests/
> Build: http://jenkins.ovirt.org/job/ovirt_master_hc-system-tests/73/
> Build Number: 73
> Build Status:  Failure
> Triggered By: Started by timer
>
> -
> Changes Since Last Success:
> -
> Changes for Build #73
> [Yaniv Kaul] Fixed NTP configuration on Engine.
>
> [Sandro Bonazzola] publisher: drop 3.6 publisher
>
> [Sandro Bonazzola] publisher: drop 4.0 publisher
>
>
>
>
> -
> Failed Tests:
> -
> 1 tests failed.
> FAILED:  002_bootstrap.add_hosts
>
> Error Message:
>
> status: 404
> reason: Not Found
> detail:
> Error404 - Not Found
>  >> begin captured logging << 
> ovirtlago.testlib: ERROR: * Unhandled exception in  _host_is_up at 0x3b26ed8>
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 217,
> in assert_equals_within
> res = func()
>   File "/home/jenkins/workspace/ovirt_master_hc-system-tests/
> ovirt-system-tests/hc-basic-suite-master/test-scenarios/002_bootstrap.py",
> line 145, in _host_is_up
> cur_state = api.hosts.get(host.name()).status.state
>   File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/brokers.py",
> line 18338, in get
> headers={"All-Content":all_content}
>   File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py",
> line 46, in get
> return self.request(method='GET', url=url, headers=headers, cls=cls)
>   File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py",
> line 122, in request
> persistent_auth=self.__persistent_auth
>   File 
> "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py",
> line 79, in do_request
> persistent_auth)
>   File 
> "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py",
> line 162, in __do_request
> raise errors.RequestError(response_code, response_reason,
> response_body)
> RequestError:
> status: 404
> reason: Not Found
> detail:
> Error404 - Not Found
> - >> end captured logging << -
>
> Stack Trace:
>   File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
> testMethod()
>   File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in
> runTest
> self.test(*self.arg)
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 129,
> in wrapped_test
> test()
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 59,
> in wrapper
> return func(get_test_prefix(), *args, **kwargs)
>   File "/home/jenkins/workspace/ovirt_master_hc-system-tests/
> ovirt-system-tests/hc-basic-suite-master/test-scenarios/002_bootstrap.py",
> line 164, in add_hosts
> testlib.assert_true_within(_host_is_up, timeout=15 * 60)
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 256,
> in assert_true_within
> assert_equals_within(func, True, timeout, allowed_exceptions)
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 217,
> in assert_equals_within
> res = func()
>   File "/home/jenkins/workspace/ovirt_master_hc-system-tests/
> ovirt-system-tests/hc-basic-suite-master/test-scenarios/002_bootstrap.py",
> line 145, in _host_is_up
> cur_state = api.hosts.get(host.name()).status.state
>   File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/brokers.py",
> line 18338, in get
> headers={"All-Content":all_content}
>   File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py",
> line 46, in get
> return self.request(method='GET', url=url, headers=headers, cls=cls)
>   File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py",
> line 122, in request
> persistent_auth=self.__persistent_auth
>   File 
> "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py",
> line 79, in do_request
> persistent_auth)
>   File 
> "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py",
> line 162, in __do_request
> raise errors.RequestError(response_code, response_reason,
> response_body)
>
> status: 404
> reason: Not Found
> detail:
> Error404 - Not Found
>  >> begin captured logging << 
> ovirtlago.testlib: ERROR: * Unhandled exception in  _host_is_up at 0x3b26ed8>
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 217,
> in assert_equals_within
> res = func()
>   File "/home/jenkins/workspace/ovirt_master_hc-system-tests/
> ovirt-system-tests/hc-basic-suite-master/test-scenarios/002_bootstrap.py",
> line 145, in _host_is_up
> cur_state = api.

Re: can you add Lukas to the whitelist?

2017-04-24 Thread Gil Shinar
Done

On Mon, Apr 24, 2017 at 5:28 PM, Eyal Edri  wrote:

> Gil,
> Can you help?
>
> On Mon, Apr 24, 2017 at 5:06 PM, Oved Ourfali  wrote:
>
>>
>> Patch Set 4:
>>
>> No Builds Executed
>>
>> http://jenkins.ovirt.org/job/ovirt-engine-api-model_master_c
>> heck-patch-fc25-x86_64/198/ : To avoid overloading the infrastructure, a
>> whitelist for running gerrit triggered jobs has been set in place, if you
>> feel like you should be in it, please contact infra at ovirt dot org.
>>
>> Thanks,
>>
>> Oved
>>
>>
>>
>> ___
>> Infra mailing list
>> Infra@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/infra
>>
>>
>
>
> --
>
> Eyal edri
>
>
> ASSOCIATE MANAGER
>
> RHV DevOps
>
> EMEA VIRTUALIZATION R&D
>
>
> Red Hat EMEA 
>  TRIED. TESTED. TRUSTED. 
> phone: +972-9-7692018 <+972%209-769-2018>
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Problem in with ovirt-engine-metrics repo

2017-04-24 Thread Gil Shinar
Hi Shirly,

Which of the commits below did you push?
[image: Inline image 1]

Looks like you've pushed patches without rebasing them on master first.

*Update*: I sat with Shirly and we fixed the repo settings so that patches
can no longer be merged without being rebased first.

Gil


On Mon, Apr 24, 2017 at 9:37 PM, Shirly Radco  wrote:

> Hi,
>
>
> Earlier today I merged 3 patches to ovirt-engine-metrics.
> When I check the git log I see 4 parches. Each patch has 2 merges.
> One is the correct one and the other is empty.
>
> I don't see this in Gerrit.
> I tried to clone the repo again but result is the same.
>
>
> See git log :
>
> commit 54a16aa73545c3a39401c969cc27c1887a6b1038
> Merge: 6e97bcd adb51fb
> Author: Shirly Radco 
> Date:   Mon Apr 24 06:20:06 2017 -0400
>
> Merge "collectd: updated engine processes plugin"
>
> commit 6e97bcd7969ab3dbbb60796f5035343ee14f40d7
> Merge: 0b63697 3466a8b
> Author: Shirly Radco 
> Date:   Mon Apr 24 06:19:57 2017 -0400
>
> Merge "collectd: Fixed processes plugin configurations"
>
> commit 0b636971dc7b415b0831edc4abac2493347a5cfe
> Author: Shirly Radco 
> Date:   Sun Apr 9 11:43:14 2017 +0300
>
> fluentd: added prefix to the statsd value field
>
> Since statds records can be host or vm metrics,
> I added vm/host prefix to the metric value
> field name, so the user can choose the required
> :...skipping...
> commit 54a16aa73545c3a39401c969cc27c1887a6b1038
> Merge: 6e97bcd adb51fb
> Author: Shirly Radco 
> Date:   Mon Apr 24 06:20:06 2017 -0400
>
> Merge "collectd: updated engine processes plugin"
>
> commit 6e97bcd7969ab3dbbb60796f5035343ee14f40d7
> Merge: 0b63697 3466a8b
> Author: Shirly Radco 
> Date:   Mon Apr 24 06:19:57 2017 -0400
>
> Merge "collectd: Fixed processes plugin configurations"
>
> commit 0b636971dc7b415b0831edc4abac2493347a5cfe
> Author: Shirly Radco 
> Date:   Sun Apr 9 11:43:14 2017 +0300
>
> fluentd: added prefix to the statsd value field
>
> Since statds records can be host or vm metrics,
> I added vm/host prefix to the metric value
> field name, so the user can choose the required
> metric easily.
>
> Change-Id: Ib71dbba78f3922fe1d257c83480867f485a91c22
> Signed-off-by: Shirly Radco 
>
>
> Please see why.
> I want to build for 4.1.2 and need to be sure repo is ok.
>
> Thank you,
>
> --
>
> SHIRLY RADCO
>
> BI SOFTWARE ENGINEER,
>
> Red Hat Israel 
>
> sra...@redhat.com
>  
>  
>
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [ovirt-devel] Subject: [ OST Failure Report ] [ oVirt master ] [ 24-04-2017 ] [import_template_from_glance]

2017-04-25 Thread Gil Shinar
Hi Eli,

When was it merged? I looked at the patch in your previous message and it is
not merged.
Master is still failing on add_secondary_storage_domains

Thanks
Gil

On Tue, Apr 25, 2017 at 5:19 PM, Eli Mesika  wrote:

> The fix for that regression was merged to master, please sync and check
> again
>
> On Tue, Apr 25, 2017 at 5:14 PM, Eli Mesika  wrote:
>
>> Hi
>>
>> Please review the fixing patch
>> https://gerrit.ovirt.org/#/c/76013/2
>>
>> On Tue, Apr 25, 2017 at 3:00 PM, Fred Rolland 
>> wrote:
>>
>>> Eli hi,
>>>
>>> It seems there is some issue in the squash patch [1].
>>>
>>> Regarding the issue found by the OST, if you start from a fresh DB,
>>> wrong values will be inserted in the "spec_params" column in the vm_device
>>> table. [2]
>>> We will get '58ca7b19-0071-00c0-01d6-0212' instead of a map
>>> like { "vram" : "65536" }
>>>
>>> It will fail the creation of the AddVmTemplateCommand that we see in the
>>> log.
>>>
>>> Regards,
>>>
>>> Freddy
>>>
>>> [1] https://gerrit.ovirt.org/#/c/74382/
>>> [2] https://gerrit.ovirt.org/#/c/74382/8/packaging/dbscripts/dat
>>> a/01200_insert_vm_device.sql
>>>
>>> On Tue, Apr 25, 2017 at 12:13 PM, Fred Rolland 
>>> wrote:
>>>
 Looking at it

 On Tue, Apr 25, 2017 at 12:11 AM, Nadav Goldin 
 wrote:

> Test failed: add_secondary_storage_domains/import_template_from_glance
>
> Link to suspected patches: https://gerrit.ovirt.org/#/c/74382/
>
> Link to Job: http://jenkins.ovirt.org/job/t
> est-repo_ovirt_experimental_master/6456/
> (started in 6451)
>
> Link to all logs:
> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_ma
> ster/6456/artifact/exported-artifacts/basic-suit-master-el7/
> test_logs/basic-suite-master/post-002_bootstrap.py/
>
> Engine log: http://jenkins.ovirt.org/job/t
> est-repo_ovirt_experimental_master/6456/artifact/exported-ar
> tifacts/basic-suit-master-el7/test_logs/basic-suite-master/p
> ost-002_bootstrap.py/lago-basic-suite-master-engine/_var_log
> /ovirt-engine/engine.log
>
> Error snippet from the test log:
>
> 
>
> lago.utils: ERROR: Error while running thread
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58, in
> _ret_via_queue
> queue.put({'return': func()})
>   File "/home/jenkins/workspace/test-repo_ovirt_experimental_master
> /ovirt-system-tests/basic-suite-master/test-scenarios/002_bo
> otstrap.py",
> line 803, in import_template_from_glance
> generic_import_from_glance(api, image_name=CIRROS_IMAGE_NAME,
> image_ext='_glance_template', as_template=True)
>   File "/home/jenkins/workspace/test-repo_ovirt_experimental_master
> /ovirt-system-tests/basic-suite-master/test-scenarios/002_bo
> otstrap.py",
> line 641, in generic_import_from_glance
> lambda: api.disks.get(disk_name).status.state == 'ok',
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
> 264, in assert_true_within_long
> assert_equals_within_long(func, True, allowed_exceptions)
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
> 251, in assert_equals_within_long
> func, value, LONG_TIMEOUT, allowed_exceptions=allowed_exceptions
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
> 230, in assert_equals_within
> '%s != %s after %s seconds' % (res, value, timeout)
> AssertionError: False != True after 600 seconds
>
>
> 
>
>
> the engine.log has this sequence repeating(apparently at the end of
> the task - 199ed356):
>
> 2017-04-24 13:34:50,079-04 INFO
> [org.ovirt.engine.core.bll.storage.repoimage.ImportRepoImageCommand]
> (DefaultQuartzScheduler10) [199ed356-0960-4ef4-9637-09c76a07c932]
> Ending command 'org.ovirt.engine.core.bll.sto
> rage.repoimage.ImportRepoImageCommand'
> successfully.
> 2017-04-24 13:34:50,090-04 ERROR
> [org.ovirt.engine.core.bll.CommandsFactory] (DefaultQuartzScheduler10)
> [] An exception has occurred while trying to create a command object
> for command 'AddVmTemplate' with parameters
> 'AddVmTemplateParameters:{commandId='a6d45092-dfe0-4a65-bdc4
> -4c23a68fe7d5',
> user='admin', commandType='Unknown'}': WELD-49: Unable to invoke
> protected final void
> org.ovirt.engine.core.bll.CommandBase.postConstruct() on
> org.ovirt.engine.core.bll.AddVmTemplateCommand@35c1cbd5
> 2017-04-24 13:34:50,095-04 INFO
> [org.ovirt.engine.core.utils.transaction.TransactionSupport]
> (DefaultQuartzScheduler10) [] transaction rolled back
> 2017-04-24 13:34:50,123-04 INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (DefaultQuartzScheduler10) [] EVENT_ID:
> USER_IMPORT_IMAGE_AS_TEMPLATE_FINISHED_SUCCESS(3,018), Correlation ID:
> 199ed356-0960-4ef4-

Re: Strange glitch

2017-05-11 Thread Gil Shinar
It happens from time to time that the JSON is not fully downloaded from the
mirrors.
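
A cheap guard on our side would be to validate the downloaded file and retry,
along these lines (a sketch; the URL is a placeholder and the retry counts
are arbitrary):

# Sketch: fetch repo metadata JSON with validation and retries; a truncated
# body fails json.loads and triggers another attempt.
import json
import time
import requests

URL = "http://mirror.example.org/repo/metadata.json"  # placeholder

def fetch_json(url, attempts=3, wait=5):
    for i in range(attempts):
        try:
            resp = requests.get(url, timeout=30)
            resp.raise_for_status()
            return json.loads(resp.text)
        except (requests.RequestException, ValueError) as err:
            print("attempt %d failed: %s" % (i + 1, err))
            time.sleep(wait)
    raise RuntimeError("no valid JSON from %s" % url)

data = fetch_json(URL)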

On Thu, May 11, 2017 at 1:53 PM, Lev Veyde  wrote:

> Hi,
>
> Tried to build VDSM on FC24, and it failed:
>
> http://jenkins.ovirt.org/job/vdsm_4.1_build-artifacts-fc24-
> x86_64/227/console
>
> retry (run 228) though worked fine, so it's some kind of a strange
> glitch...
>
> Thanks in advance,
> --
>
> Lev Veyde
>
> Software Engineer, RHCE | RHCVA | MCITP
>
> Red Hat Israel
>
> 
>
> l...@redhat.com | lve...@redhat.com
> 
> TRIED. TESTED. TRUSTED. 
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Findbugs issues not reported correctly on Jenkins master jobs?

2017-05-15 Thread Gil Shinar
Adding infra-support

On Mon, May 15, 2017 at 3:55 PM, Tal Nisan  wrote:

> Just a regular "mvn findbugs:findbugs" run on master, you can revert this
> patch and run it manually yourself: https://gerrit.ovirt.org/#/c/76820/
>
> On Mon, May 15, 2017 at 2:15 PM, Daniel Belenky 
> wrote:
>
>> Hi Tal,
>> I'm checking this right now.
>> Can you provide the log from your local run? which files were exported on
>> your local run?
>>
>> On Mon, May 15, 2017 at 11:51 AM Tal Nisan  wrote:
>>
>>> I've pushed this patch to master:
>>> https://gerrit.ovirt.org/#/c/75994/
>>> Jenkins showed no findbugs error and I moved on to merging it
>>>
>>> Then I've pushed the same patch to 4.1:
>>> https://gerrit.ovirt.org/#/c/76803/
>>> And it resulted in two findbugs errors (though minor and style related):
>>> http://jenkins.ovirt.org/job/ovirt-engine_4.1_find-bugs_crea
>>> ted/898/findbugsResult/new/
>>>
>>> Running findbugs manually in my environment on master did report these
>>> two issues so I'd expect the master jobs to report them as well.
>>>
>>> ___
>>> Infra mailing list
>>> Infra@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/infra
>>>
>> --
>>
>> DANIEL BELENKY
>>
>> RHV DEVOPS
>>
>> Red Hat EMEA 
>>
>> IRC: #rhev-integ #rhev-dev
>> 
>>
>
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Gerrit HTTP-500 error

2017-05-28 Thread Gil Shinar
Hi,

Today we have seen this error again. Restart of Gerrit fixed this issue.
I have looked into it a bit and found the following thread:
https://groups.google.com/forum/#!topic/repo-discuss/N5bIo4VlPGU

According to the above, there's a bug in JGit: its cache gets corrupted and
it can no longer find objects.
Restarting Gerrit clears the cache, which fixes the problem.
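
For what it's worth, a full restart may not even be needed just to drop the
cache: Gerrit has a flush-caches SSH command, so next time something like
this could be tried first (a sketch; the admin user name is a placeholder and
the port is the usual Gerrit SSH default):

# Sketch: flush Gerrit's in-memory caches over its SSH admin interface
# instead of restarting the whole service. Requires admin capabilities.
import subprocess

subprocess.check_call([
    "ssh", "-p", "29418", "admin-user@gerrit.ovirt.org",
    "gerrit", "flush-caches", "--all",
])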

To fix that properly, you cannot just upgrade JGit (it is bundled inside the
Gerrit installation); you have to upgrade Gerrit itself. According to
Gerrit's release notes, this issue was only fixed in Gerrit 2.13.7.


It is not urgent, but it looks like a good idea to upgrade to that version,
or to the latest (2.13.8).

Gil
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Another repoclosure issue

2017-06-01 Thread Gil Shinar
Hi,

Another master experimental failure on repoclosure has occurred:
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/6961/testReport/junit/(root)/000_check_repo_closure/check_repo_closure/

Is this another glitch, or something we should fix?
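
To tell a glitch from a real break, the same check can be reproduced locally
with the yum-utils repoclosure tool, roughly like below (a sketch; the repo
URLs are illustrative and the lookaside list is incomplete):

# Sketch: run repoclosure against the tested repo, resolving deps from CentOS.
import subprocess

subprocess.call([
    "repoclosure",
    "--repofrompath=tested,http://resources.ovirt.org/repos/ovirt/tested/master/rpm/el7/",
    "--repofrompath=base,http://mirror.centos.org/centos/7/os/x86_64/",
    "--repoid=tested",   # report closure problems only for this repo
    "--lookaside=base",  # dependencies may be satisfied from here
])  # unresolved dependencies are printed to stdout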

Gil
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Fwd: [oVirt Jenkins] ovirt_4.1_he-system-tests - Build # 153 - Failure!

2017-06-19 Thread Gil Shinar
Can't find this version anywhere. What am I missing?

ovirt-optimizer-0.15-0.176.201702021258.el7


-- Forwarded message --
From: 
Date: Tue, Jun 20, 2017 at 5:31 AM
Subject: [oVirt Jenkins] ovirt_4.1_he-system-tests - Build # 153 - Failure!
To: sbona...@redhat.com, infra@ovirt.org, lve...@redhat.com


Project: http://jenkins.ovirt.org/job/ovirt_4.1_he-system-tests/
Build: http://jenkins.ovirt.org/job/ovirt_4.1_he-system-tests/153/
Build Number: 153
Build Status:  Failure
Triggered By: Started by timer

-
Changes Since Last Success:
-
Changes for Build #153
[Yedidyah Bar David] Make fluentd log dir root owned and Add engine env name




-
Failed Tests:
-
No tests ran.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [oVirt Jenkins] ovirt_4.1_he-system-tests - Build # 153 - Failure!

2017-06-20 Thread Gil Shinar
Another question: it succeeded in the next build. Looking for ovirt-optimizer
in the Lago log, I didn't see any appearance of 0.15, only 0.14, which exists
in the relevant repos.
How can I know from the above input that it is a reposync failure and not a
wrong/missing version of ovirt-optimizer?
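
One way to answer that without rerunning the suite is to ask the repo
directly with repoquery (a sketch; the repo URL is taken from the published
4.1 tree and may need adjusting):

# Sketch: list the ovirt-optimizer builds actually present in the 4.1 repo.
# If 0.15-0.176 is not listed, the repo never had it and reposync is innocent.
import subprocess

out = subprocess.check_output([
    "repoquery",
    "--repofrompath=ovirt-4.1,http://resources.ovirt.org/pub/ovirt-4.1/rpm/el7/",
    "--repoid=ovirt-4.1",
    "--qf", "%{name}-%{version}-%{release}",
    "ovirt-optimizer",
])
print(out.decode())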

On Tue, Jun 20, 2017 at 10:32 AM, Eyal Edri  wrote:

> It looks like it failed on reposync of  ovirt-4.1-el7.
> Do we know if something happened to resources or that repo specifically
> that can cause this?
>
> On Tue, Jun 20, 2017 at 9:40 AM, Gil Shinar  wrote:
>
>> Can't find this version anywhere. What am I missing?
>>
>> ovirt-optimizer-0.15-0.176.201702021258.el7
>>
>>
>> -- Forwarded message --
>> From: 
>> Date: Tue, Jun 20, 2017 at 5:31 AM
>> Subject: [oVirt Jenkins] ovirt_4.1_he-system-tests - Build # 153 -
>> Failure!
>> To: sbona...@redhat.com, infra@ovirt.org, lve...@redhat.com
>>
>>
>> Project: http://jenkins.ovirt.org/job/ovirt_4.1_he-system-tests/
>> Build: http://jenkins.ovirt.org/job/ovirt_4.1_he-system-tests/153/
>> Build Number: 153
>> Build Status:  Failure
>> Triggered By: Started by timer
>>
>> -
>> Changes Since Last Success:
>> -
>> Changes for Build #153
>> [Yedidyah Bar David] Make fluentd log dir root owned and Add engine env
>> name
>>
>>
>>
>>
>> -
>> Failed Tests:
>> -
>> No tests ran.
>> ___
>> Infra mailing list
>> Infra@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/infra
>>
>>
>>
>> ___
>> Infra mailing list
>> Infra@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/infra
>>
>>
>
>
> --
>
> Eyal edri
>
>
> ASSOCIATE MANAGER
>
> RHV DevOps
>
> EMEA VIRTUALIZATION R&D
>
>
> Red Hat EMEA <https://www.redhat.com/>
> <https://red.ht/sig> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
> phone: +972-9-7692018 <+972%209-769-2018>
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [oVirt Jenkins] ovirt_4.1_he-system-tests - Build # 153 - Failure!

2017-06-20 Thread Gil Shinar
Sandro,

These jobs were deleted, and therefore the builds were deleted as well.

On Tue, Jun 20, 2017 at 10:51 AM, Sandro Bonazzola 
wrote:

>
>
> On Tue, Jun 20, 2017 at 9:43 AM, Gil Shinar  wrote:
>
>> Another question. It succeeded in the next build. Looking for the
>> ovirt-optimizer in the Lago log, I didn't see any appearances of 0.15. Only
>> 0.14 which exists in the relevant repos.
>> How can I know from the above input that it is a reposync failure and not
>> a wrong/missing version of ovirt-optimizer?
>>
>
> In oVirt 4.1.0 we released oVirt Optimizer built from
> dd4851d81cf84df68522bd87f93cb848431df4e9 on
>
> ./ovirt-4.1.0_beta1.conf:
>94 : http://jenkins.ovirt.org/job/ovirt-optimizer_master_build-
> artifacts-el7-x86_64/22/
>95 : http://jenkins.ovirt.org/job/ovirt-optimizer_master_build-
> artifacts-fc24-x86_64/21/
>
> Sadly, jenkins deleted above builds even if marked as keep forever.
>
> Adding Martin, optimizer maintainer.
>
>
>
>>
>> On Tue, Jun 20, 2017 at 10:32 AM, Eyal Edri  wrote:
>>
>>> It looks like it failed on reposync of  ovirt-4.1-el7.
>>> Do we know if something happened to resources or that repo specifically
>>> that can cause this?
>>>
>>> On Tue, Jun 20, 2017 at 9:40 AM, Gil Shinar  wrote:
>>>
>>>> Can't find this version anywhere. What am I missing?
>>>>
>>>> ovirt-optimizer-0.15-0.176.201702021258.el7
>>>>
>>>>
>>>> -- Forwarded message --
>>>> From: 
>>>> Date: Tue, Jun 20, 2017 at 5:31 AM
>>>> Subject: [oVirt Jenkins] ovirt_4.1_he-system-tests - Build # 153 -
>>>> Failure!
>>>> To: sbona...@redhat.com, infra@ovirt.org, lve...@redhat.com
>>>>
>>>>
>>>> Project: http://jenkins.ovirt.org/job/ovirt_4.1_he-system-tests/
>>>> Build: http://jenkins.ovirt.org/job/ovirt_4.1_he-system-tests/153/
>>>> Build Number: 153
>>>> Build Status:  Failure
>>>> Triggered By: Started by timer
>>>>
>>>> -
>>>> Changes Since Last Success:
>>>> -
>>>> Changes for Build #153
>>>> [Yedidyah Bar David] Make fluentd log dir root owned and Add engine env
>>>> name
>>>>
>>>>
>>>>
>>>>
>>>> -
>>>> Failed Tests:
>>>> -
>>>> No tests ran.
>>>> ___
>>>> Infra mailing list
>>>> Infra@ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/infra
>>>>
>>>>
>>>>
>>>> ___
>>>> Infra mailing list
>>>> Infra@ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/infra
>>>>
>>>>
>>>
>>>
>>> --
>>>
>>> Eyal edri
>>>
>>>
>>> ASSOCIATE MANAGER
>>>
>>> RHV DevOps
>>>
>>> EMEA VIRTUALIZATION R&D
>>>
>>>
>>> Red Hat EMEA <https://www.redhat.com/>
>>> <https://red.ht/sig> TRIED. TESTED. TRUSTED.
>>> <https://redhat.com/trusted>
>>> phone: +972-9-7692018 <+972%209-769-2018>
>>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>>
>>
>>
>> ___
>> Infra mailing list
>> Infra@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/infra
>>
>>
>
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>
> Red Hat EMEA <https://www.redhat.com/>
> <https://red.ht/sig>
> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


collect artifacts in master experimental timed out

2017-07-04 Thread Gil Shinar
Hi Nadav/Gal,

I see the following exceptions in the Lago log:

2017-07-04 14:24:39,254::log_utils.py::__exit__::606::lago.prefix::DEBUG::
 File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1476, in
_collect_artifacts
vm.collect_artifacts(path, ignore_nopath)
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
624, in collect_artifacts
ignore_nopath=ignore_nopath
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
381, in extract_paths
return self.provider.extract_paths(paths, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/lago/providers/libvirt/vm.py",
line 297, in extract_paths
ignore_nopath=ignore_nopath,
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
247, in extract_paths
self._extract_paths_scp(paths=paths, ignore_nopath=ignore_nopath)
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
266, in _extract_paths_scp
propagate_fail=False
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
425, in copy_from
local_path=local_path,
  File "/usr/lib/python2.7/site-packages/scp.py", line 125, in get
self._recv_all()
  File "/usr/lib/python2.7/site-packages/scp.py", line 250, in _recv_all
msg = self.channel.recv(1024)
  File "/usr/lib/python2.7/site-packages/paramiko/channel.py", line 615, in recv
raise socket.timeout()

2017-07-04 14:24:39,254::utils.py::_ret_via_queue::60::lago.utils::ERROR::Error
while running thread
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58, in
_ret_via_queue
queue.put({'return': func()})
  File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1476,
in _collect_artifacts
vm.collect_artifacts(path, ignore_nopath)
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
624, in collect_artifacts
ignore_nopath=ignore_nopath
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
381, in extract_paths
return self.provider.extract_paths(paths, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/lago/providers/libvirt/vm.py",
line 297, in extract_paths
ignore_nopath=ignore_nopath,
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
247, in extract_paths
self._extract_paths_scp(paths=paths, ignore_nopath=ignore_nopath)
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
266, in _extract_paths_scp
propagate_fail=False
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
425, in copy_from
local_path=local_path,
  File "/usr/lib/python2.7/site-packages/scp.py", line 125, in get
self._recv_all()
  File "/usr/lib/python2.7/site-packages/scp.py", line 250, in _recv_all
msg = self.channel.recv(1024)
  File "/usr/lib/python2.7/site-packages/paramiko/channel.py", line 615, in recv
raise socket.timeout()
timeout
2017-07-04 14:24:39,255::log_utils.py::end_log_task::669::root::ERROR::@
Collect artifacts:  [31mERROR [0m (in 0:00:05)
2017-07-04 14:24:39,256::log_utils.py::__exit__::606::lago.prefix::DEBUG::
 File "/usr/lib/python2.7/site-packages/lago/log_utils.py", line 635,
in wrapper
return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1480,
in collect_artifacts
self.virt_env.get_vms().values(),
  File "/usr/lib/python2.7/site-packages/lago/utils.py", line 100, in
invoke_in_parallel
return vt.join_all()
  File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58, in
_ret_via_queue
queue.put({'return': func()})
  File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1476,
in _collect_artifacts
vm.collect_artifacts(path, ignore_nopath)
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
624, in collect_artifacts
ignore_nopath=ignore_nopath
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
381, in extract_paths
return self.provider.extract_paths(paths, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/lago/providers/libvirt/vm.py",
line 297, in extract_paths
ignore_nopath=ignore_nopath,
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
247, in extract_paths
self._extract_paths_scp(paths=paths, ignore_nopath=ignore_nopath)
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
266, in _extract_paths_scp
propagate_fail=False
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
425, in copy_from
local_path=local_path,
  File "/usr/lib/python2.7/site-packages/scp.py", line 125, in get
self._recv_all()
  File "/usr/lib/python2.7/site-packages/scp.py", line 250, in _recv_all
msg = self.channel.recv(1024)
  File "/usr/lib/python2.7/site-packages/paramiko/channel.py", line 615, in recv
raise socket.timeout()

2017-07-04 14:24:39,256::cmd.py::main::960::cli::ERROR::Error occured, aborting
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 954, in main
cli_plugins[args.verb].d

Re: [oVirt Jenkins] ovirt_4.1_he-system-tests - Build # 171 - Failure!

2017-07-05 Thread Gil Shinar
Did anyone make any manual changes in:
http://resources.ovirt.org/pub/ovirt-4.1/rpm/el7/?

This job keeps on failing on the same packages over and over again.

On Wed, Jul 5, 2017 at 5:32 AM,  wrote:

> Project: http://jenkins.ovirt.org/job/ovirt_4.1_he-system-tests/
> Build: http://jenkins.ovirt.org/job/ovirt_4.1_he-system-tests/171/
> Build Number: 171
> Build Status:  Failure
> Triggered By: Started by timer
>
> -
> Changes Since Last Success:
> -
> Changes for Build #171
> [Eyal Edri] skip repo closure test for master suite
>
> [Barak Korren] Add job configuration for infra-ansible
>
> [Gil Shinar] Change project-pattern value in jjb deploy job
>
>
>
>
> -
> Failed Tests:
> -
> No tests ran.
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: collect artifacts in master experimental timed out

2017-07-05 Thread Gil Shinar
On Wed, Jul 5, 2017 at 9:46 AM, Nadav Goldin  wrote:

> Hi,
> Did it happen more than once?
>

No

>
> Looking at the logs what happened was:
> 1. Lago checked the engine was SSH reachable - this was true.
> 2. Then it tried connecting via SSH and collect the logs and timed out.
>
> On (1) we have retries and guards, on (2) we don't, as we assume (1)
> just passed. I guess in some conditions that logic can be flawed. Can
> you open an issue[1]?
>

Will do

>
> I'll try to fix it as soon as possible.
>
> Thanks,
>
>
> [1] https://github.com/lago-project/lago/issues
> Nadav.
>
> On Wed, Jul 5, 2017 at 9:07 AM, Gil Shinar  wrote:
> > Hi Nadav/Gal,
> >
> > I see the folowing exceptions in lago log:
> >
> > 2017-07-04 14:24:39,254::log_utils.py::__exit__::606::lago.prefix::
> DEBUG::
> > File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1476, in
> > _collect_artifacts
> > vm.collect_artifacts(path, ignore_nopath)
> >   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 624,
> in
> > collect_artifacts
> > ignore_nopath=ignore_nopath
> >   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 381,
> in
> > extract_paths
> > return self.provider.extract_paths(paths, *args, **kwargs)
> >   File "/usr/lib/python2.7/site-packages/lago/providers/libvirt/vm.py",
> line
> > 297, in extract_paths
> > ignore_nopath=ignore_nopath,
> >   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 247,
> in
> > extract_paths
> > self._extract_paths_scp(paths=paths, ignore_nopath=ignore_nopath)
> >   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 266,
> in
> > _extract_paths_scp
> > propagate_fail=False
> >   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 425,
> in
> > copy_from
> > local_path=local_path,
> >   File "/usr/lib/python2.7/site-packages/scp.py", line 125, in get
> > self._recv_all()
> >   File "/usr/lib/python2.7/site-packages/scp.py", line 250, in _recv_all
> > msg = self.channel.recv(1024)
> >   File "/usr/lib/python2.7/site-packages/paramiko/channel.py", line
> 615, in
> > recv
> > raise socket.timeout()
> >
> > 2017-07-04
> > 14:24:39,254::utils.py::_ret_via_queue::60::lago.utils::ERROR::Error
> while
> > running thread
> > Traceback (most recent call last):
> >   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58, in
> > _ret_via_queue
> > queue.put({'return': func()})
> >   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1476, in
> > _collect_artifacts
> > vm.collect_artifacts(path, ignore_nopath)
> >   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 624,
> in
> > collect_artifacts
> > ignore_nopath=ignore_nopath
> >   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 381,
> in
> > extract_paths
> > return self.provider.extract_paths(paths, *args, **kwargs)
> >   File "/usr/lib/python2.7/site-packages/lago/providers/libvirt/vm.py",
> line
> > 297, in extract_paths
> > ignore_nopath=ignore_nopath,
> >   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 247,
> in
> > extract_paths
> > self._extract_paths_scp(paths=paths, ignore_nopath=ignore_nopath)
> >   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 266,
> in
> > _extract_paths_scp
> > propagate_fail=False
> >   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 425,
> in
> > copy_from
> > local_path=local_path,
> >   File "/usr/lib/python2.7/site-packages/scp.py", line 125, in get
> > self._recv_all()
> >   File "/usr/lib/python2.7/site-packages/scp.py", line 250, in _recv_all
> > msg = self.channel.recv(1024)
> >   File "/usr/lib/python2.7/site-packages/paramiko/channel.py", line
> 615, in
> > recv
> > raise socket.timeout()
> > timeout
> > 2017-07-04 14:24:39,255::log_utils.py::end_log_task::669::root::ERROR::@
> > Collect artifacts:  [31mERROR [0m (in 0:00:05)
> > 2017-07-04 14:24:39,256::log_utils.py::__exit__::606::lago.prefix::
> DEBUG::
> > File "/usr/lib/python2.7/site-packages/lago/log_utils.py", line 635, in
> > wrapper
> > return func(*args, **kwargs)
> >   File "/u

Re: [oVirt Jenkins] ovirt_4.1_he-system-tests - Build # 171 - Failure!

2017-07-05 Thread Gil Shinar
Gal/Nadav,

Can you please assist with this issue?

Thanks
Gil

On Wed, Jul 5, 2017 at 5:32 AM,  wrote:

> Project: http://jenkins.ovirt.org/job/ovirt_4.1_he-system-tests/
> Build: http://jenkins.ovirt.org/job/ovirt_4.1_he-system-tests/171/
> Build Number: 171
> Build Status:  Failure
> Triggered By: Started by timer
>
> -
> Changes Since Last Success:
> -
> Changes for Build #171
> [Eyal Edri] skip repo closure test for master suite
>
> [Barak Korren] Add job configuration for infra-ansible
>
> [Gil Shinar] Change project-pattern value in jjb deploy job
>
>
>
>
> -
> Failed Tests:
> -
> No tests ran.
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [oVirt Jenkins] system-tests_hc-suite-master - Build # 1 - Failure!

2017-07-26 Thread Gil Shinar
Yes.

Will send the patch now and remove the old jobs

On Wed, Jul 26, 2017 at 9:26 AM, Eyal Edri  wrote:

> Gil,
> I believe this is also failing following the latest system tests template
> refactoring,
> Do we have a fix for it already?
>
> On Wed, Jul 26, 2017 at 5:23 AM,  wrote:
>
>> Project: http://jenkins.ovirt.org/job/system-tests_hc-suite-master/
>> Build: http://jenkins.ovirt.org/job/system-tests_hc-suite-master/1/
>> Build Number: 1
>> Build Status:  Failure
>> Triggered By: Started by timer
>>
>> -
>> Changes Since Last Success:
>> -
>>
>>
>> -
>> Failed Tests:
>> -
>> No tests ran.
>> ___
>> Infra mailing list
>> Infra@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/infra
>>
>>
>
>
> --
>
> Eyal edri
>
>
> ASSOCIATE MANAGER
>
> RHV DevOps
>
> EMEA VIRTUALIZATION R&D
>
>
> Red Hat EMEA 
>  TRIED. TESTED. TRUSTED. 
> phone: +972-9-7692018 <+972%209-769-2018>
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: -1 on gerrit-hooks

2017-07-27 Thread Gil Shinar
First, it also failed on a missing bug URL.
Suggestion: couldn't a marker in the commit message distinguish patches that
need to be backported from those that don't?
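
To be clear, the marker itself would be something new - here is a rough,
purely hypothetical sketch of what the hook-side check could look like (the
footer name is invented):

# Hypothetical sketch: skip the backport warning when the commit message
# carries an explicit opt-out footer. "Backport-Exempt" is an invented name.
import re

def needs_backport_check(commit_message):
    return not re.search(r"^Backport-Exempt:\s*yes\s*$",
                         commit_message, re.MULTILINE | re.IGNORECASE)

msg = "collectd: fix plugin\n\nBackport-Exempt: yes\nChange-Id: I123"
print(needs_backport_check(msg))  # False - the hook would not vote -1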

On Wed, Jul 26, 2017 at 1:52 PM, Eyal Edri  wrote:

>
>
> On Wed, Jul 26, 2017 at 1:51 PM, Eyal Edri  wrote:
>
>> Check Backport::WARN, The patch wasn't backported to all the relevant
>> stable branches. (-1)
>> this means the patch isn't merged yet on master branch.
>>
>> The hooks tries to verify you merge patches in the right order: master ->
>> stable branch.
>>
>
> Since this is a patch which only relevant to the stable branch, this is
> false positive, but we can't really
> differ today between patches like that to others.
>
> This is why any project owner has permissions to remove any voting from
> patches on his project if he things its correct.
> So in this case, feel free to remove the voting.
>
>
>>
>> On Wed, Jul 26, 2017 at 1:46 PM, Shirly Radco  wrote:
>>
>>> Hi
>>>
>>> I got -1 on gerrit-hooks please check why.
>>>
>>> https://gerrit.ovirt.org/#/c/79829/
>>> https://gerrit.ovirt.org/#/c/79828/
>>>
>>> Thank you,
>>>
>>> --
>>>
>>> SHIRLY RADCO
>>>
>>> BI SOFTWARE ENGINEER
>>>
>>> Red Hat Israel 
>>> 
>>> TRIED. TESTED. TRUSTED. 
>>>
>>> ___
>>> Infra mailing list
>>> Infra@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/infra
>>>
>>>
>>
>>
>> --
>>
>> Eyal edri
>>
>>
>> ASSOCIATE MANAGER
>>
>> RHV DevOps
>>
>> EMEA VIRTUALIZATION R&D
>>
>>
>> Red Hat EMEA 
>>  TRIED. TESTED. TRUSTED. 
>> phone: +972-9-7692018 <+972%209-769-2018>
>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>
>
>
>
> --
>
> Eyal edri
>
>
> ASSOCIATE MANAGER
>
> RHV DevOps
>
> EMEA VIRTUALIZATION R&D
>
>
> Red Hat EMEA 
>  TRIED. TESTED. TRUSTED. 
> phone: +972-9-7692018 <+972%209-769-2018>
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: CI jobs not triggered from gerrit for ovirt-imageio

2017-07-27 Thread Gil Shinar
Adding infra-support to open a ticket

On Thu, Jul 27, 2017 at 12:18 PM, Nir Soffer  wrote:

> I posted a patch with 3 versions, none of them triggered a build:
> https://gerrit.ovirt.org/#/c/79869/
>
> Manual trigger works:
> http://jenkins.ovirt.org/job/ovirt-imageio_master_check-
> patch-fc26-x86_64/38/
> http://jenkins.ovirt.org/job/ovirt-imageio_master_check-
> patch-el7-x86_64/349/
> http://jenkins.ovirt.org/job/ovirt-imageio_master_check-
> patch-fc25-x86_64/69/
>
> Can you check?
>
> Thanks,
> Nir
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [CQ]: 79869,3 (ovirt-imageio) failed "ovirt-master" system tests

2017-08-02 Thread Gil Shinar
You have this job:
http://jenkins.ovirt.org/job/vdsm_master_build-artifacts-el7-x86_64

You can run it by adding the following parameters:
GERRIT_REFSPEC: refs/changes/01/80101/1:nir
GERRIT_BRANCH: nir

Let me know if it works for you
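
The same thing can also be scripted instead of clicking through the UI, for
example with the python-jenkins module (a sketch; the credentials are
placeholders):

# Sketch: trigger the parameterized build-artifacts job from a script.
import jenkins  # the python-jenkins module

server = jenkins.Jenkins("http://jenkins.ovirt.org",
                         username="user", password="api-token")
server.build_job("vdsm_master_build-artifacts-el7-x86_64", {
    "GERRIT_REFSPEC": "refs/changes/01/80101/1:nir",
    "GERRIT_BRANCH": "nir",
})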

On Wed, Aug 2, 2017 at 2:37 PM, Nir Soffer  wrote:

> Should be fixed in https://gerrit.ovirt.org/80101
>
> How do I trigger a build-artifacts job to verify this?
>
> On Wed, Aug 2, 2017 at 1:32 PM oVirt Jenkins  wrote:
>
>> Change 79869,3 (ovirt-imageio) is probably the reason behind recent
>> system test
>> failures in the "ovirt-master" change queue and needs to be fixed.
>>
>> This change had been removed from the testing queue. Artifacts build from
>> this
>> change will not be released until it is fixed.
>>
>> For further details about the change see:
>> https://gerrit.ovirt.org/#/c/79869/3
>>
>> For failed test results see:
>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/1607/
>> ___
>> Infra mailing list
>> Infra@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/infra
>>
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [CQ]: 79869,3 (ovirt-imageio) failed "ovirt-master" system tests

2017-08-02 Thread Gil Shinar
Oh sorry

Missed that

Thanks

On Wed, Aug 2, 2017 at 2:59 PM, Nir Soffer  wrote:

> On Wed, Aug 2, 2017 at 2:45 PM Gil Shinar  wrote:
>
>> You have this job:
>> http://jenkins.ovirt.org/job/vdsm_master_build-artifacts-el7-x86_64
>>
>> You can run it by adding the following parameters:
>> GERRIT_REFSPEC: refs/changes/01/80101/1:nir
>> GERRIT_BRANCH: nir
>>
>> Let me know if it works for you
>>
>
> It failed:
> http://jenkins.ovirt.org/job/vdsm_master_build-artifacts-
> el7-x86_64/2844/console
>
> Because this an ovirt-imageio patch, but triggering the ovirt-imageio job
> works
> http://jenkins.ovirt.org/view/All/job/ovirt-imageio_master_
> build-artifacts-el7-x86_64/90/console
>
> Merged, thanks for reporting this issue.
>
>
>>
>> On Wed, Aug 2, 2017 at 2:37 PM, Nir Soffer  wrote:
>>
>>> Should be fixed in https://gerrit.ovirt.org/80101
>>>
>>> How do I trigger a build-artifacts job to verify this?
>>>
>>> On Wed, Aug 2, 2017 at 1:32 PM oVirt Jenkins  wrote:
>>>
>>>> Change 79869,3 (ovirt-imageio) is probably the reason behind recent
>>>> system test
>>>> failures in the "ovirt-master" change queue and needs to be fixed.
>>>>
>>>> This change had been removed from the testing queue. Artifacts build
>>>> from this
>>>> change will not be released until it is fixed.
>>>>
>>>> For further details about the change see:
>>>> https://gerrit.ovirt.org/#/c/79869/3
>>>>
>>>> For failed test results see:
>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/1607/
>>>> ___
>>>> Infra mailing list
>>>> Infra@ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/infra
>>>>
>>>
>>> ___
>>> Infra mailing list
>>> Infra@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/infra
>>>
>>>
>>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [CQ]: 79869,3 (ovirt-imageio) failed "ovirt-master" system tests

2017-08-02 Thread Gil Shinar
I haven't investigated it, but I'm almost sure that Jenkins' git plugin uses
the *ref:branch* refspec to create a local branch from a ref/commit, and that
the second parameter tells it which branch to check out.

Hope that is clear
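
In plain git terms, what the plugin ends up doing with those two parameters
is roughly the following (a sketch of the equivalent commands):

# Sketch: the plain-git equivalent of GERRIT_REFSPEC=refs/changes/01/80101/1:nir
# combined with GERRIT_BRANCH=nir.
import subprocess

# Fetch the Gerrit change ref into a local branch named "nir"...
subprocess.check_call(["git", "fetch", "origin", "refs/changes/01/80101/1:nir"])
# ...then check out that branch, which is what GERRIT_BRANCH points the build at.
subprocess.check_call(["git", "checkout", "nir"])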

On Wed, Aug 2, 2017 at 3:30 PM, Nir Soffer  wrote:

> Gil, can you explain the format used in the build parameters?
>
> GERRIT_REFSPEC: refs/changes/01/80101/1:nir
> GERRIT_BRANCH: nir
>
> What is the magic "nir" branch?
>
> On Wed, Aug 2, 2017 at 3:19 PM Gil Shinar  wrote:
>
>> Oh sorry
>>
>> Missed that
>>
>> Thanks
>>
>> On Wed, Aug 2, 2017 at 2:59 PM, Nir Soffer  wrote:
>>
>>> On Wed, Aug 2, 2017 at 2:45 PM Gil Shinar  wrote:
>>>
>>>> You have this job:
>>>> http://jenkins.ovirt.org/job/vdsm_master_build-artifacts-el7-x86_64
>>>>
>>>> You can run it by adding the following parameters:
>>>> GERRIT_REFSPEC: refs/changes/01/80101/1:nir
>>>> GERRIT_BRANCH: nir
>>>>
>>>> Let me know if it works for you
>>>>
>>>
>>> It failed:
>>> http://jenkins.ovirt.org/job/vdsm_master_build-artifacts-
>>> el7-x86_64/2844/console
>>>
>>> Because this an ovirt-imageio patch, but triggering the ovirt-imageio
>>> job works
>>> http://jenkins.ovirt.org/view/All/job/ovirt-imageio_master_
>>> build-artifacts-el7-x86_64/90/console
>>>
>>> Merged, thanks for reporting this issue.
>>>
>>>
>>>>
>>>> On Wed, Aug 2, 2017 at 2:37 PM, Nir Soffer  wrote:
>>>>
>>>>> Should be fixed in https://gerrit.ovirt.org/80101
>>>>>
>>>>> How do I trigger a build-artifacts job to verify this?
>>>>>
>>>>> On Wed, Aug 2, 2017 at 1:32 PM oVirt Jenkins 
>>>>> wrote:
>>>>>
>>>>>> Change 79869,3 (ovirt-imageio) is probably the reason behind recent
>>>>>> system test
>>>>>> failures in the "ovirt-master" change queue and needs to be fixed.
>>>>>>
>>>>>> This change had been removed from the testing queue. Artifacts build
>>>>>> from this
>>>>>> change will not be released until it is fixed.
>>>>>>
>>>>>> For further details about the change see:
>>>>>> https://gerrit.ovirt.org/#/c/79869/3
>>>>>>
>>>>>> For failed test results see:
>>>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/1607/
>>>>>> ___
>>>>>> Infra mailing list
>>>>>> Infra@ovirt.org
>>>>>> http://lists.ovirt.org/mailman/listinfo/infra
>>>>>>
>>>>>
>>>>> ___
>>>>> Infra mailing list
>>>>> Infra@ovirt.org
>>>>> http://lists.ovirt.org/mailman/listinfo/infra
>>>>>
>>>>>
>>>>
>>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: oVirt github organization

2017-08-02 Thread Gil Shinar
Adding infra-support

On Wed, Aug 2, 2017 at 4:19 PM, Miroslava Voglova 
wrote:

> Hi,
>
> can you add me to oVirt organization on github? My username
> is voglovaMiroslava.
>
> Thanks,
> Mirka
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-428) Credentials request

2016-03-06 Thread Gil Shinar (oVirt JIRA)
Gil Shinar created OVIRT-428:


 Summary: Credentials request
 Key: OVIRT-428
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-428
 Project: oVirt - virtualization made easy
  Issue Type: By-EMAIL
Reporter: Gil Shinar
Assignee: infra


Hi,

I need credentials for the oVirt Jira. Can you please provide them?

Thanks
Gil Shinar




--
This message was sent by Atlassian JIRA
(v7.2.0-OD-03-010#72000)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-493) Re: Build failed in Jenkins: ovirt-node_ovirt-3.6_create-iso-el7_merged #19

2016-04-18 Thread Gil Shinar (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14901#comment-14901
 ] 

Gil Shinar commented on OVIRT-493:
--

Should I wait for the decision on whether this is standard CI or should be
puppetized on all PHX slaves?

> Re: Build failed in Jenkins: ovirt-node_ovirt-3.6_create-iso-el7_merged #19
> ---
>
> Key: OVIRT-493
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-493
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: eyal edri [Administrator]
>    Assignee: Gil Shinar
>
> On Apr 17, 2016 7:26 PM, "Tolik Litovsky"  wrote:
> > Can you please add sssd-client.rpm to the el7 and nested groups of phx
> > jenkins
> > Tolik
> >
> > On Sun, Apr 17, 2016 at 7:17 PM, Ryan Barry  wrote:
> >
> >> Tolik, can you look at this please? It's been failing for a couple of
> >> days. Looks like it's missing some dependency.
> >> -- Forwarded message --
> >> From: 
> >> Date: Apr 17, 2016 03:27
> >> Subject: Build failed in Jenkins:
> >> ovirt-node_ovirt-3.6_create-iso-el7_merged #19
> >> To: , 
> >> Cc:
> >>
> >> See <
> >>> http://jenkins.phx.ovirt.org/job/ovirt-node_ovirt-3.6_create-iso-el7_merged/19/
> >>> >
> >>>
> >>> --
> >>> [...truncated 214 lines...]
> >>> Resolving Dependencies
> >>> --> Running transaction check
> >>> ---> Package ovirt-node-plugin-vdsm-recipe.noarch
> >>> 0:0.6.3-0.0.ovirt36.el7.centos will be installed
> >>> --> Finished Dependency Resolution
> >>>
> >>> Dependencies Resolved
> >>>
> >>>
> >>> 
> >>>  Package
> >>> Arch   Version Repository
> >>>   Size
> >>>
> >>> 
> >>> Installing:
> >>>  ovirt-node-plugin-vdsm-recipe
> >>> noarch 0.6.3-0.0.ovirt36.el7.centos
> >>>
> >>>  resources.ovirt.org_pub_ovirt-3.6-snapshot_rpm_el7  11 k
> >>>
> >>> Transaction Summary
> >>>
> >>> 
> >>> Install  1 Package
> >>>
> >>> Total download size: 11 k
> >>> Installed size: 18 k
> >>> Downloading packages:
> >>> Running transaction check
> >>> Running transaction test
> >>> Transaction test succeeded
> >>> Running transaction
> >>>   Installing :
> >>> ovirt-node-plugin-vdsm-recipe-0.6.3-0.0.ovirt36.el7.centos   1/1
> >>>   Verifying  :
> >>> ovirt-node-plugin-vdsm-recipe-0.6.3-0.0.ovirt36.el7.centos   1/1
> >>>
> >>> Installed:
> >>>   ovirt-node-plugin-vdsm-recipe.noarch 0:0.6.3-0.0.ovirt36.el7.centos
> >>>
> >>> Complete!
> >>> + make iso publish
> >>> rm -f *.ks
> >>> Node Creator script is:  /usr/sbin/node-creator
> >>> cp /usr/share/ovirt-node-recipe/*.ks .
> >>> rm -f version.ks
> >>> ( \
> >>>   if [ -n "7" ]; then \
> >>>  CENTOS_REPO_LINE="repo --name=centos --mirrorlist=
> >>> http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os \n" ;\
> >>>  UPDATES_REPO_LINE="repo --name=centos-updates --mirrorlist=
> >>> http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=updates \n" ;\
> >>>  EPEL_REPO_LINE="repo --name=epel --baseurl=
> >>> http://dl.fedoraproject.org/pub/epel/7/x86_64 \n" ;\
> >>>  OVIRT_REPO_LINE="repo --name=ovirt-repo --baseurl=
> >>> http://resources.ovirt.org/pub/ovirt-3.6-snapshot/rpm/el7\n"; ;\
> >>>  GLUSTER_REPO_LINE="repo --name=ovirt-3.6-glusterfs-x86_64-epel
> >>> --baseurl=
> >>> http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/EPEL.repo/epel-7/x86_64\n
> >>> repo --name=ovirt-3.6-glusterfs-noarch-epel --baseurl=
> >>> http://download.gluster.org/pub/

[JIRA] (OVIRT-515) Re: Cannot clone bugs with more than 2^16 characters in the comments

2016-05-24 Thread Gil Shinar (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16303#comment-16303
 ] 

Gil Shinar commented on OVIRT-515:
--

Something is really strange here: it lets you write a comment with more than
2^16 chars but doesn't let you clone it?
It looks more like a Bugzilla bug than a Bugzilla limitation.
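
If we want to spot such bugs before the clone job trips over them, the
comments can be measured up front via the REST API (a sketch; it assumes the
standard Bugzilla 5 comment endpoint, and the bug id is the one from the
failed job):

# Sketch: flag comments that could overflow a 2^16-character field on clone.
import requests

BUG = 1301083
resp = requests.get(
    "https://bugzilla.redhat.com/rest/bug/%d/comment" % BUG, timeout=30)
resp.raise_for_status()
for c in resp.json()["bugs"][str(BUG)]["comments"]:
    # The clone prepends a "comment from ..." header, so keep some headroom.
    if len(c["text"]) > 2**16 - 256:
        print("comment %s is %d chars long" % (c["id"], len(c["text"])))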

> Re: Cannot clone bugs with more than 2^16 characters in the comments
> 
>
> Key: OVIRT-515
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-515
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: dcaro
>Assignee: infra
> Attachments: signature.asc, signature.asc
>
>
> Looks like a restriction of the api itself, probably triggered by us adding 
> the
> 'comment from ...' header to the comments while clonning.
> Opening a bug on it to keep track
> On 05/02 12:36, Tal Nisan wrote:
> > I've encountered this in this job:
> > http://jenkins-ci.eng.lab.tlv.redhat.com/job/system_bugzilla_clone_zstream_milestone/141/console
> > 
> > For this bug:
> > https://bugzilla.redhat.com/1301083
> > ___
> > Infra mailing list
> > Infra@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/infra
> -- 
> David Caro
> Red Hat S.L.
> Continuous Integration Engineer - EMEA ENG Virtualization R&D
> Tel.: +420 532 294 605
> Email: dc...@redhat.com
> IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> Web: www.redhat.com
> RHT Global #: 82-62605



--
This message was sent by Atlassian JIRA
(v1000.5.2#72002)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-681) [URGENT] boolean parameters not passed anymore to jenkins jobs

2016-08-14 Thread Gil Shinar (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gil Shinar updated OVIRT-681:
-
Attachment: image.png

Hi Sandro,

I haven't had the chance to use Matrix jobs, so it was a challenge for me to
investigate :-)
I have found the following bug:
https://issues.jenkins-ci.org/browse/JENKINS-34758

Due to a security update, parameters stopped being passed to child jobs.
According to the changelog, Matrix Project plugin version 1.7 should fix that:
[image: Inline image 1]

I'll upgrade the plugin, but a restart might be needed, so let us know when.
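
To verify the installed version before and after the upgrade, the plugin
manager API can be queried (a sketch; it assumes anonymous read access to
the endpoint):

# Sketch: report the installed matrix-project plugin version via Jenkins' API.
import requests

resp = requests.get(
    "http://jenkins.ovirt.org/pluginManager/api/json?depth=1", timeout=30)
resp.raise_for_status()
for plugin in resp.json()["plugins"]:
    if plugin["shortName"] == "matrix-project":
        print("%s %s" % (plugin["shortName"], plugin["version"]))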

Gil

On Sat, Aug 13, 2016 at 8:49 AM, sbonazzo (oVirt JIRA) <



> [URGENT] boolean parameters not passed anymore to jenkins jobs
> --
>
> Key: OVIRT-681
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-681
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: sbonazzo
>Assignee: infra
> Attachments: image.png
>
>
> Hi,
> looks like repository closure jobs are not working anymore because the
> boolean parameters are not passed anymore to the job environment.
> Can you please check why this happens?
> See for example
> http://jenkins.ovirt.org/user/sbonazzo/my-views/view/Repo%20status/job/repos_4.0_check-closure_merged/86/DISTRIBUTION=centos7/console
> -- 
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com



--
This message was sent by Atlassian JIRA
(v1000.245.0#19)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-697) Re: Failed Check-Merged Job

2016-08-17 Thread Gil Shinar (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=19720#comment-19720
 ] 

Gil Shinar commented on OVIRT-697:
--

I have re-triggered the build and it completed successfully.
It was probably a fedora repo network hiccup.

Gil

On Wed, Aug 17, 2016 at 5:46 PM, Eyal Edri  wrote:

> Adding Infra support so we'll have a ticket documenting this as well.
>
> On Wed, Aug 17, 2016 at 5:23 PM, Phillip Bailey 
> wrote:
>
>> Hi infra team,
>>
>> One of my patches [1] failed the check-merged-el6-x86 job [2]. It looks
>> like a yum install failed at 12:59:29.
>>
>> Could someone please take a look and let me know if any action is
>> required on my part?
>>
>> Thanks!
>>
>> -Phillip Bailey
>>
>> [1] https://gerrit.ovirt.org/#/c/62165/
>> [2] http://jenkins.ovirt.org/job/ovirt-engine_3.6_check-merged-e
>> l6-x86_64/101/console
>>
>> ___
>> Infra mailing list
>> Infra@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/infra
>>
>>
>
>
> --
> Eyal Edri
> Associate Manager
> RHV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>

> Re: Failed Check-Merged Job
> ---
>
> Key: OVIRT-697
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-697
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: eyal edri [Administrator]
>Assignee: infra
>
> Adding Infra support so we'll have a ticket documenting this as well.
> On Wed, Aug 17, 2016 at 5:23 PM, Phillip Bailey  wrote:
> > Hi infra team,
> >
> > One of my patches [1] failed the check-merged-el6-x86 job [2]. It looks
> > like a yum install failed at 12:59:29.
> >
> > Could someone please take a look and let me know if any action is required
> > on my part?
> >
> > Thanks!
> >
> > -Phillip Bailey
> >
> > [1] https://gerrit.ovirt.org/#/c/62165/
> > [2] http://jenkins.ovirt.org/job/ovirt-engine_3.6_check-merged-
> > el6-x86_64/101/console
> >
> > ___
> > Infra mailing list
> > Infra@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/infra
> >
> >
> -- 
> Eyal Edri
> Associate Manager
> RHV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)



--
This message was sent by Atlassian JIRA
(v1000.253.3#100011)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-741) Create a job that deletes jobs archived for deletion

2016-09-25 Thread Gil Shinar (oVirt JIRA)
Gil Shinar created OVIRT-741:


 Summary: Create a job that deletes jobs archived for deletion
 Key: OVIRT-741
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-741
 Project: oVirt - virtualization made easy
  Issue Type: Task
Reporter: Gil Shinar
Assignee: infra


Delete such jobs 20 days after the day their name was changed.
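
A minimal sketch of such a job (assuming the python-jenkins library, and
assuming that archiving renames a job to a date-stamped form like
'zz_killme_<YYYYMMDD>_<old name>'; the prefix, the date format and the
credentials are made-up placeholders):

# Sketch: delete jobs that were archived (renamed) more than 20 days ago.
# Assumes python-jenkins and a hypothetical 'zz_killme_<YYYYMMDD>_' prefix.
from datetime import datetime, timedelta
import re

import jenkins  # pip install python-jenkins

GRACE = timedelta(days=20)
PATTERN = re.compile(r'^zz_killme_(\d{8})_')

server = jenkins.Jenkins('http://jenkins.ovirt.org',
                         username='...', password='...')

for job in server.get_jobs():
    match = PATTERN.match(job['name'])
    if not match:
        continue
    archived_on = datetime.strptime(match.group(1), '%Y%m%d')
    if datetime.now() - archived_on >= GRACE:
        server.delete_job(job['name'])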



--
This message was sent by Atlassian JIRA
(v1000.362.1#100014)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-741) Create a job that deletes jobs archived for deletion

2016-09-25 Thread Gil Shinar (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gil Shinar reassigned OVIRT-741:


Assignee: Gil Shinar  (was: infra)

> Create a job that deletes jobs archived for deletion
> 
>
> Key: OVIRT-741
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-741
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>    Reporter: Gil Shinar
>    Assignee: Gil Shinar
>
> Delete such jobs 20 days after the day their name was changed.



--
This message was sent by Atlassian JIRA
(v1000.362.1#100014)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-743) [URGENT] Please restore permission on gerrit

2016-09-26 Thread Gil Shinar (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gil Shinar updated OVIRT-743:
-
Resolution: Fixed
Status: Done  (was: To Do)

> [URGENT] Please restore permission on gerrit
> 
>
> Key: OVIRT-743
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-743
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: sbonazzo
>Assignee: infra
>
> Hi,
> for some reason I lost my maintainer rights at least on ovirt-engine
> project.
> I've not checked all the projects I maintain, but please revert the last
> change involving rights or fix it, thanks.
> -- 
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
> <https://www.redhat.com/it/about/events/red-hat-open-source-day-2016>



--
This message was sent by Atlassian JIRA
(v1000.362.1#100014)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-751) Persistent maven caches on the mock slaves

2016-10-06 Thread Gil Shinar (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=21603#comment-21603
 ] 

Gil Shinar commented on OVIRT-751:
--

Maven's default cache is a .m2 folder under the user's home directory, so
the mount should be:
source: /.m2
destination: /.m2

Something like that.
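
In mock's config syntax (mock .cfg files are Python snippets in which mock
provides the config_opts dict) that would look roughly like the sketch below;
the host-side path /var/cache/mock-m2 is a made-up placeholder, and the build
user inside the chroot is assumed to be root:

# Sketch: bind-mount a persistent maven cache into the mock chroot using
# mock's bind_mount plugin; /var/cache/mock-m2 is a placeholder path.
config_opts['plugin_conf']['bind_mount_enable'] = True
config_opts['plugin_conf']['bind_mount_opts']['dirs'].append(
    ('/var/cache/mock-m2', '/root/.m2'))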

> Persistent maven caches on the mock slaves
> --
>
> Key: OVIRT-751
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-751
> Project: oVirt - virtualization made easy
>  Issue Type: Improvement
>Reporter: Anton Marchukov
>Assignee: infra
>
> I think we need to design a way to retain maven caches on the mocked jenkins
> slaves. Currently the cache is stored inside the mock chroot, and thus maven
> downloads packages from the artifactory server each time.
> However, there is really no reason for that. Maven artifacts are designed to
> be immutable, so once an artifact is in the repo there is no trivial way to
> change it without creating a new version. In fact that should never be
> needed, and the correct solution is to always create a new version.
> SNAPSHOT artifacts are in fact timestamped, and each one has a different
> file name. It is just not visible since maven automatically takes the latest
> one. But that is not related to caching, as a new snapshot will still be a
> new artifact.
> So based on that, there is no reason to purge the maven cache each time, but
> there are reasons not to purge it. Not purging will reduce the build times
> of all java jobs and also reduce the network traffic we have.



--
This message was sent by Atlassian JIRA
(v1000.383.2#100014)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-778) Give jhernand Jenkins permissions

2016-10-25 Thread Gil Shinar (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gil Shinar updated OVIRT-778:
-
Resolution: Fixed
Status: Done  (was: To Do)

> Give jhernand Jenkins permissions
> -
>
> Key: OVIRT-778
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-778
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>  Components: Jenkins
>Reporter: Juan Hernández
>Assignee: infra
>
> I would like to have permissions to trigger jobs.



--
This message was sent by Atlassian JIRA
(v1000.456.2#100016)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-784) Re: Permission to (re-)trigger gerrit builds in jenkins

2016-10-25 Thread Gil Shinar (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gil Shinar updated OVIRT-784:
-
Resolution: Fixed
Status: Done  (was: To Do)

> Re: Permission to (re-)trigger gerrit builds in jenkins
> ---
>
> Key: OVIRT-784
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-784
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: sbonazzo
>Assignee: infra
>
> Opening a ticket
> On Fri, Oct 21, 2016 at 3:07 PM, Dominik Holler  wrote:
> > Hi,
> > I'd like to have permission to re-trigger failed gerrit builds on the
> > jenkins build system.
> > Who can enable me to do so?
> > Thanks,
> > Dominik
> > ___
> > Infra mailing list
> > Infra@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/infra
> >
> -- 
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
> <https://www.redhat.com/it/about/events/red-hat-open-source-day-2016>



--
This message was sent by Atlassian JIRA
(v1000.456.2#100016)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-751) Persistent maven caches on the mock slaves

2016-10-31 Thread Gil Shinar (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gil Shinar reassigned OVIRT-751:


Assignee: Gil Shinar  (was: infra)

> Persistent maven caches on the mock slaves
> --
>
> Key: OVIRT-751
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-751
> Project: oVirt - virtualization made easy
>  Issue Type: Improvement
>Reporter: Anton Marchukov
>    Assignee: Gil Shinar
>
> I think we need to design a way to retain maven caches on the mocked jenkins
> slaves. Currently the cache is stored inside the mock chroot, and thus maven
> downloads packages from the artifactory server each time.
> However, there is really no reason for that. Maven artifacts are designed to
> be immutable, so once an artifact is in the repo there is no trivial way to
> change it without creating a new version. In fact that should never be
> needed, and the correct solution is to always create a new version.
> SNAPSHOT artifacts are in fact timestamped, and each one has a different
> file name. It is just not visible since maven automatically takes the latest
> one. But that is not related to caching, as a new snapshot will still be a
> new artifact.
> So based on that, there is no reason to purge the maven cache each time, but
> there are reasons not to purge it. Not purging will reduce the build times
> of all java jobs and also reduce the network traffic we have.



--
This message was sent by Atlassian JIRA
(v1000.482.3#100017)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra

