Re: oVirt infra weekly meeting notes 17.01.2018

2018-01-17 Thread Duck
Quack,

On 01/18/2018 01:23 AM, Evgheni Dereveanchin wrote:

>   * CVE patching ongoing, most services are already patched, a few hosts
> and VMs left

Good to hear :-).

Sorry I missed the timeframe. I'm trying to finish a few things before
departing for Europe, and finish packing my bag too. Hope to see some of you at
DevConf or FOSDEM.

\_o<




signature.asc
Description: OpenPGP digital signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Jenkins build is back to normal : system-sync_mirrors-centos-qemu-ev-release-el7-x86_64 #606

2018-01-17 Thread jenkins
See 


___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[CQ]: 69848, 27 (vdsm) failed "ovirt-master" system tests, but isn't the failure root cause

2018-01-17 Thread oVirt Jenkins
A system test invoked by the "ovirt-master" change queue, including change
69848,27 (vdsm), failed. However, this change does not seem to be the root cause
of this failure. Change 86114,4 (vdsm), which this change depends on or is based
on, was detected as the cause of the testing failures.

This change has been removed from the testing queue. Artifacts built from this
change will not be released until either change 86114,4 (vdsm) is fixed and
this change is updated to refer to, or rebased on, the fixed version, or this
change is modified to no longer depend on it.
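
To make the rule concrete, here is a minimal, hypothetical sketch of the
removal logic described above (names are invented for illustration; this is
not the actual change-queue code):

    def changes_to_remove(queued_changes, root_cause, depends_on):
        # A failing change drags along every queued change that depends on
        # (or is based on) it, exactly as the message above describes.
        removed = {root_cause}
        for change in queued_changes:  # assumes dependency order
            if any(dep in removed for dep in depends_on(change)):
                removed.add(change)
        return removed

    # Example: 69848,27 is based on 86114,4, the detected root cause.
    deps = {'86114,4': [], '69848,27': ['86114,4']}
    print(sorted(changes_to_remove(['86114,4', '69848,27'], '86114,4',
                                   lambda change: deps[change])))
    # ['69848,27', '86114,4']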

For further details about the change see:
https://gerrit.ovirt.org/#/c/69848/27

For further details about the change that seems to be the root cause behind the
testing failures see:
https://gerrit.ovirt.org/#/c/86114/4

For failed test results see:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4942/
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] ovirt-system-tests_hc-basic-suite-master - Build # 162 - Still Failing!

2018-01-17 Thread jenkins
Project: http://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-master/ 
Build: http://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-master/162/
Build Number: 162
Build Status:  Still Failing
Triggered By: Started by timer

-
Changes Since Last Success:
-
Changes for Build #159
[Gal Ben Haim] 4.2: Adding 4.2 pre repo


Changes for Build #160
[Gal Ben Haim] 4.2: Adding 4.2 pre repo


Changes for Build #161
[Gal Ben Haim] 4.2: Adding 4.2 pre repo

[Gal Ben Haim] ost: Add 4.2 to the manual job

[Gal Ben Haim] ost: Adding 'he-basic-ansible' to the manual job

[Daniel Belenky] Add line break before mock_enable_network


Changes for Build #162
[Yedidyah Bar David] Revert "upgrade_suites: Update ovirt-engine-metrics"

[Gal Ben Haim] ost: Fix Lago custom repo

[Greg Sheremeta] ovirt-js-dependencies: add jobs for fc27, add 4.2 branch




-
Failed Tests:
-
1 tests failed.
FAILED:  002_bootstrap.add_hosts

Error Message:
Host lago-hc-basic-suite-master-host1 failed to install
 >> begin captured logging << 
ovirtlago.testlib: ERROR: * Unhandled exception in 
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 219, in assert_equals_within
    res = func()
  File "/home/jenkins/workspace/ovirt-system-tests_hc-basic-suite-master/ovirt-system-tests/hc-basic-suite-master/test-scenarios/002_bootstrap.py", line 151, in _host_is_up
    raise RuntimeError('Host %s failed to install' % host.name())
RuntimeError: Host lago-hc-basic-suite-master-host1 failed to install
- >> end captured logging << -
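
The wait loop in this traceback (and in the stack trace below) follows
ovirtlago.testlib's polling pattern. A condensed sketch of that pattern,
reconstructed from the frames shown here rather than copied from the library
(the 3-second poll interval is an assumption):

    import time

    def assert_equals_within(func, expected, timeout, allowed_exceptions=()):
        # Poll func() until it returns `expected` or `timeout` seconds pass.
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                if func() == expected:
                    return
            except allowed_exceptions:
                pass  # tolerated while we keep waiting
            time.sleep(3)  # assumed poll interval
        raise AssertionError('%r != %r within %ss' % (func, expected, timeout))

    def assert_true_within(func, timeout):
        assert_equals_within(func, True, timeout)

    # 002_bootstrap.add_hosts waits up to 15 minutes for the host to come up:
    #   testlib.assert_true_within(_host_is_up, timeout=15 * 60)
    # _host_is_up() raises RuntimeError as soon as the host reaches an
    # "install failed" state; that is the unhandled exception logged above.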

Stack Trace:
  File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
testMethod()
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 129, in 
wrapped_test
test()
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 59, in 
wrapper
return func(get_test_prefix(), *args, **kwargs)
  File 
"/home/jenkins/workspace/ovirt-system-tests_hc-basic-suite-master/ovirt-system-tests/hc-basic-suite-master/test-scenarios/002_bootstrap.py",
 line 164, in add_hosts
testlib.assert_true_within(_host_is_up, timeout=15 * 60)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 263, in 
assert_true_within
assert_equals_within(func, True, timeout, allowed_exceptions)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 219, in 
assert_equals_within
res = func()
  File 
"/home/jenkins/workspace/ovirt-system-tests_hc-basic-suite-master/ovirt-system-tests/hc-basic-suite-master/test-scenarios/002_bootstrap.py",
 line 151, in _host_is_up
raise RuntimeError('Host %s failed to install' % host.name())
'Host lago-hc-basic-suite-master-host1 failed to install\n 
>> begin captured logging << \novirtlago.testlib: ERROR:
 * Unhandled exception in \nTraceback (most 
recent call last):\n  File 
"/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 219, in 
assert_equals_within\nres = func()\n  File 
"/home/jenkins/workspace/ovirt-system-tests_hc-basic-suite-master/ovirt-system-tests/hc-basic-suite-master/test-scenarios/002_bootstrap.py",
 line 151, in _host_is_up\nraise RuntimeError(\'Host %s failed to install\' 
% host.name())\nRuntimeError: Host lago-hc-basic-suite-master-host1 failed to 
install\n- >> end captured logging << -'
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] ovirt-system-tests_he-basic-suite-master - Build # 175 - Failure!

2018-01-17 Thread jenkins
Project: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/ 
Build: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/175/
Build Number: 175
Build Status:  Failure
Triggered By: Started by timer

-
Changes Since Last Success:
-
Changes for Build #175
[Yedidyah Bar David] Revert "upgrade_suites: Update ovirt-engine-metrics"

[Gal Ben Haim] ost: Fix Lago custom repo

[Greg Sheremeta] ovirt-js-dependencies: add jobs for fc27, add 4.2 branch




-
Failed Tests:
-
No tests ran.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Build failed in Jenkins: system-sync_mirrors-centos-qemu-ev-release-el7-x86_64 #605

2018-01-17 Thread jenkins
See 


--
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on mirrors.phx.ovirt.org (mirrors) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url http://gerrit.ovirt.org/jenkins.git # timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Pruning obsolete local branches
Fetching upstream changes from http://gerrit.ovirt.org/jenkins.git
 > git --version # timeout=10
 > git fetch --tags --progress http://gerrit.ovirt.org/jenkins.git +refs/heads/*:refs/remotes/origin/* --prune
 > git rev-parse origin/master^{commit} # timeout=10
Checking out Revision 6d8b64bbd0be2a6dc477cece734b20b5a3875b45 (origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 6d8b64bbd0be2a6dc477cece734b20b5a3875b45
Commit message: "ovirt-js-dependencies: add jobs for fc27, add 4.2 branch"
 > git rev-list 6d8b64bbd0be2a6dc477cece734b20b5a3875b45 # timeout=10
[system-sync_mirrors-centos-qemu-ev-release-el7-x86_64] $ /bin/bash -xe /tmp/jenkins8743998934041441315.sh
+ jenkins/scripts/mirror_mgr.sh resync_yum_mirror centos-qemu-ev-release-el7 x86_64 jenkins/data/mirrors-reposync.conf
Checking if mirror needs a resync
Traceback (most recent call last):
  File "/usr/bin/reposync", line 343, in 
main()
  File "/usr/bin/reposync", line 175, in main
my.doRepoSetup()
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 681, in 
doRepoSetup
return self._getRepos(thisrepo, True)
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 721, in 
_getRepos
self._repos.doSetup(thisrepo)
  File "/usr/lib/python2.7/site-packages/yum/repos.py", line 157, in doSetup
self.retrieveAllMD()
  File "/usr/lib/python2.7/site-packages/yum/repos.py", line 88, in 
retrieveAllMD
dl = repo._async and repo._commonLoadRepoXML(repo)
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1465, in 
_commonLoadRepoXML
local  = self.cachedir + '/repomd.xml'
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 774, in 
cachedir = property(lambda self: self._dirGetAttr('cachedir'))
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 757, in 
_dirGetAttr
self.dirSetup()
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 735, in dirSetup
self._dirSetupMkdir_p(dir)
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 712, in 
_dirSetupMkdir_p
raise Errors.RepoError, msg
yum.Errors.RepoError: Error making cache directory: 
/home/jenkins/mirrors_cache/centos-qemu-ev-release-el7 error was: [Errno 17] 
File exists: '/home/jenkins/mirrors_cache/centos-qemu-ev-release-el7'
Build step 'Execute shell' marked build as failure
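
The traceback looks like the usual mkdir race on the shared mirrors cache:
another reposync run can create the directory between yum's existence check
and its mkdir call, which then fails with EEXIST. A minimal sketch of the
standard workaround (hypothetical helper name; not the actual mirror_mgr.sh
fix):

    import errno
    import os

    def mkdir_p(path):
        """mkdir -p that tolerates the directory appearing concurrently."""
        try:
            os.makedirs(path)
        except OSError as e:
            if e.errno != errno.EEXIST:  # [Errno 17] File exists is benign
                raise

    mkdir_p('/home/jenkins/mirrors_cache/centos-qemu-ev-release-el7')
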
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[CQ]: 74243, 9 (vdsm) failed "ovirt-master" system tests, but isn't the failure root cause

2018-01-17 Thread oVirt Jenkins
A system test invoked by the "ovirt-master" change queue, including change
74243,9 (vdsm), failed. However, this change does not seem to be the root cause
of this failure. Change 86114,4 (vdsm), which this change depends on or is based
on, was detected as the cause of the testing failures.

This change has been removed from the testing queue. Artifacts built from this
change will not be released until either change 86114,4 (vdsm) is fixed and
this change is updated to refer to, or rebased on, the fixed version, or this
change is modified to no longer depend on it.

For further details about the change see:
https://gerrit.ovirt.org/#/c/74243/9

For further details about the change that seems to be the root cause behind the
testing failures see:
https://gerrit.ovirt.org/#/c/86114/4

For failed test results see:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4934/
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Subject: [ OST Failure Report ] [ oVirt Master ] [ Jan 15th 2018 ] [ 006_migrations.migrate_vm ]

2018-01-17 Thread Milan Zamazal
Dafna Ron  writes:

> We had a failure in test 006_migrations.migrate_vm.
>
> the migration failed with reason "VMExists"

There are two migrations in 006_migrations.migrate_vm.  The first one
succeeded, but if I'm reading the logs correctly, Engine didn't
send Destroy to the source host after the migration had finished.  Then
the second migration gets rejected by Vdsm, because Vdsm still keeps the
former Vm object instance in Down status.

Since the test succeeds most of the time, it looks like a timing
issue or corner case.  Arik, is it a known problem?  If not, would you
like to look into the logs to see what's happening?
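
A rough sketch of the rejection described here, as a toy model (names are
simplified stand-ins inferred from the vdsm traceback quoted below, not the
actual vdsm code):

    class VMExists(Exception):
        pass

    class FakeDestinationHost(object):
        """Toy model of a destination host that still holds a Down VM."""

        def __init__(self):
            self.vm_container = {}  # vmId -> VM object, including Down ones

        def create(self, vm_id):
            # Until Engine sends Destroy, the Down VM left over from the
            # first migration still occupies its slot, so a second incoming
            # migration for the same vmId is refused.
            if vm_id in self.vm_container:
                raise VMExists('Virtual machine already exists')
            self.vm_container[vm_id] = object()

    host = FakeDestinationHost()
    host.create('d17a2482-4904-4cbc-8d13-3a3b7840782d')  # first migration
    host.create('d17a2482-4904-4cbc-8d13-3a3b7840782d')  # second: VMExists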

> Seems to be an issue which is caused by connectivity between engine and
> hosts.
> I remember this issue happening a few weeks ago - is there a
> solution/bug for this issue?

None I'm aware of.

> Link and headline of suspected patches:
> https://gerrit.ovirt.org/#/c/86114/4 - net tests: Fix vlan creation name length in nettestlib
> Link to Job:

It's just a coincidence that it failed on that patch, so I'm excluding
Edward from the discussion; he is innocent :-).

> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4842/
> Link to all logs:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4842/artifact/
>
> (Relevant) error snippet from the log:
>
> vdsm dst:
> 2018-01-15 06:47:03,355-0500 ERROR (jsonrpc/0) [api] FINISH create error=Virtual machine already exists (api:124)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 117, in method
>     ret = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 180, in create
>     raise exception.VMExists()
> VMExists: Virtual machine already exists
>
> vdsm src:
> 2018-01-15 06:47:03,359-0500 ERROR (migsrc/d17a2482) [virt.vm] (vmId='d17a2482-4904-4cbc-8d13-3a3b7840782d') migration destination error: Virtual machine already exists (migration:290)
>
> Engine:
> 2018-01-15 06:45:30,169-05 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] (EE-ManagedThreadFactory-engineScheduled-Thread-34) [] Failure to refresh host 'lago-basic-suite-master-host-0' runtime info: java.net.ConnectException: Connection refused
> 2018-01-15 06:45:30,169-05 DEBUG [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] (EE-ManagedThreadFactory-engineScheduled-Thread-34) [] Exception: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.net.ConnectException: Connection refused
>     at org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.createNetworkException(VdsBrokerCommand.java:159) [vdsbroker.jar:]
>     at org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:122) [vdsbroker.jar:]
>     at org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:73) [vdsbroker.jar:]
>     at org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33) [dal.jar:]
>     at org.ovirt.engine.core.vdsbroker.vdsbroker.DefaultVdsCommandExecutor.execute(DefaultVdsCommandExecutor.java:14) [vdsbroker.jar:]
>     at org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:387) [vdsbroker.jar:]
>     at org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand$$super(Unknown Source) [vdsbroker.jar:]
>     at sun.reflect.GeneratedMethodAccessor234.invoke(Unknown Source) [:1.8.0_151]
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_151]
>     at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_151]
>     at org.jboss.weld.interceptor.proxy.TerminalAroundInvokeInvocationContext.proceedInternal(TerminalAroundInvokeInvocationContext.java:49) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
>     at org.jboss.weld.interceptor.proxy.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:77) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
>     at org.ovirt.engine.core.common.di.interceptor.LoggingInterceptor.apply(LoggingInterceptor.java:12) [common.jar:]
>     at sun.reflect.GeneratedMethodAccessor69.invoke(Unknown Source) [:1.8.0_151]
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_151]
>     at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_151]
>     at

[CQ]: 86261, 3 (vdsm) failed "ovirt-master" system tests, but isn't the failure root cause

2018-01-17 Thread oVirt Jenkins
A system test invoked by the "ovirt-master" change queue, including change
86261,3 (vdsm), failed. However, this change does not seem to be the root cause
of this failure. Change 86114,4 (vdsm), which this change depends on or is based
on, was detected as the cause of the testing failures.

This change has been removed from the testing queue. Artifacts built from this
change will not be released until either change 86114,4 (vdsm) is fixed and
this change is updated to refer to, or rebased on, the fixed version, or this
change is modified to no longer depend on it.

For further details about the change see:
https://gerrit.ovirt.org/#/c/86261/3

For further details about the change that seems to be the root cause behind the
testing failures see:
https://gerrit.ovirt.org/#/c/86114/4

For failed test results see:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4926/
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: ovirt-imageio is failing to build on fc27/fcraw

2018-01-17 Thread Nir Soffer
On Wed, Jan 17, 2018 at 7:15 PM Nir Soffer  wrote:

> Thanks for reporting this issue, should be fixed by
> https://gerrit.ovirt.org/#/c/86489/
>
> Can someone trigger build artifacts job to test this patch?
>
> Can I trigger build-artifacts job manually?
>

OK, I found out how to do this.

The issue here is an empty release_suffix - this happens only when
we build a release version from a tag. That is the reason we did
not find this issue before, and we cannot detect it by running
build-artifacts on each build, unless we add a special test that
tags the local checkout and builds from the tag.
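
One possible shape for such a test, sketched below (hypothetical script; the
'make rpm' build command is an assumption, the project's real entry point may
differ):

    import subprocess

    def build_from_local_tag(tag='v0.0.0-citest'):
        # Tagging the checkout gives the build an empty release_suffix,
        # exercising the release code path that only tagged builds hit.
        subprocess.check_call(['git', 'tag', '-a', tag, '-m', 'CI test tag'])
        try:
            subprocess.check_call(['make', 'rpm'])  # assumed build command
        finally:
            subprocess.check_call(['git', 'tag', '-d', tag])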

Daniel, let's merge it and push a new tag to check whether the build
works.


>
> On Wed, Jan 17, 2018 at 6:39 PM Nir Soffer  wrote:
>
>> On Wed, Jan 17, 2018 at 5:41 PM Eyal Edri  wrote:
>>
>>> On Wed, Jan 17, 2018 at 5:38 PM, Eyal Edri  wrote:
>>>


 On Wed, Jan 17, 2018 at 5:23 PM, Nir Soffer  wrote:

>
>
> On Wed, Jan 17, 2018 at 5:08 PM Barak Korren 
> wrote:
>
>> Hi Guys,
>>
>> I'm sure you are aware that 'ovirt-imageio' is currently failing to
>> build on FC27 and Rawhide.
>>
>
> Why are you sure? I have never seen any build failure since we added the
> fc27 and fcraw builds.
>
> Please point us to failed builds.
>

 Latest one -
 http://jenkins.ovirt.org/job/ovirt-imageio_master_build-artifacts-fc27-x86_64/32/

>>>
>>> If it doesn't take too long, I think it's worth adding building RPMs to
>>> check-patch as well, so you can catch such errors
>>> before the merge.
>>>
>>
>> This is certainly what we need to do.
>>
>>
>>>
>>>


>
>
>>
>> I'm not sure you get the implications, though.
>>
>> The 1st thing that the release change-queue is checking, before
>> running OST, is if all builds for all patches it is checking are
>> building successfully for all platforms that the project has jobs for.
>>
>> This means that essentially no new 'ovirt-imageio' packages are making
>> it into the tested and nightly repos for __any__ platform.
>>
>> Furthermore, with the way the system works for now, there is no
>> special handling for build failures, which means they are treated like
>> OST failures, and so the system runs an expensive bisection
>> search to find the failing patch. So when your project fails to build,
>> it slows down checking for other project's patches.
>>
>> I ask that you please do one of the following:
>> 1. Fix the FC27 and RAWHIDE builds
>> 2. Remove the FC27 and RAWHIDE build jobs
>> 3. Turn the FC27 and RAWHIDE build jobs into check-merged jobs which
>> are not checked by the change-queue.
>>
>> --
>> Barak Korren
>> RHV DevOps team , RHCE, RHCi
>> Red Hat EMEA
>> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>>
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>


 --

 Eyal edri


 MANAGER

 RHV DevOps

 EMEA VIRTUALIZATION R&D


 Red Hat EMEA 
  TRIED. TESTED. TRUSTED.
 
 phone: +972-9-7692018
 irc: eedri (on #tlv #rhev-dev #rhev-integ)

>>>
>>>
>>>
>>> --
>>>
>>> Eyal edri
>>>
>>>
>>> MANAGER
>>>
>>> RHV DevOps
>>>
>>> EMEA VIRTUALIZATION R&D
>>>
>>>
>>> Red Hat EMEA 
>>>  TRIED. TESTED. TRUSTED.
>>> 
>>> phone: +972-9-7692018
>>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>>
>>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: ovirt-imageio is failing to build on fc27/fcraw

2018-01-17 Thread Nir Soffer
Thanks for reporting this issue, should be fixed by
https://gerrit.ovirt.org/#/c/86489/

Can someone trigger build artifacts job to test this patch?

Can I trigger build-artifacts job manually?

On Wed, Jan 17, 2018 at 6:39 PM Nir Soffer  wrote:

> On Wed, Jan 17, 2018 at 5:41 PM Eyal Edri  wrote:
>
>> On Wed, Jan 17, 2018 at 5:38 PM, Eyal Edri  wrote:
>>
>>>
>>>
>>> On Wed, Jan 17, 2018 at 5:23 PM, Nir Soffer  wrote:
>>>


 On Wed, Jan 17, 2018 at 5:08 PM Barak Korren 
 wrote:

> Hi Guys,
>
> I'm sure you are aware that 'ovirt-imageio' is currently failing to
> build on FC27 and Rawhide.
>

 Why are you sure? I have never seen any build failure since we added the
 fc27 and fcraw builds.

 Please point us to failed builds.

>>>
>>> Latest one -
>>> http://jenkins.ovirt.org/job/ovirt-imageio_master_build-artifacts-fc27-x86_64/32/
>>>
>>
>> If it doesn't take too long, I think it's worth adding building RPMs to
>> check-patch as well, so you can catch such errors
>> before the merge.
>>
>
> This is certainly what we need to do.
>
>
>>
>>
>>>
>>>


>
> I'm not sure you get the implications, though.
>
> The 1st thing that the release change-queue is checking, before
> running OST, is if all builds for all patches it is checking are
> building successfully for all platforms that the project has jobs for.
>
> This means that essentially no new 'ovirt-imageio' packages are making
> it into the tested and nightly repos for __any__ platform.
>
> Furthermore, with the way the system works for now, there is no
> special handling for build failures, which means they are treated like
> OST failures, and so the system runs an expensive bisection
> search to find the failing patch. So when your project fails to build,
> it slows down checking for other project's patches.
>
> I ask that you please do one of the following:
> 1. Fix the FC27 and RAWHIDE builds
> 2. Remove the FC27 and RAWHIDE build jobs
> 3. Turn the FC27 and RAWHIDE build jobs into check-merged jobs which
> are not checked by the change-queue.
>
> --
> Barak Korren
> RHV DevOps team , RHCE, RHCi
> Red Hat EMEA
> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>

 ___
 Infra mailing list
 Infra@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/infra


>>>
>>>
>>> --
>>>
>>> Eyal edri
>>>
>>>
>>> MANAGER
>>>
>>> RHV DevOps
>>>
>>> EMEA VIRTUALIZATION R&D
>>>
>>>
>>> Red Hat EMEA 
>>>  TRIED. TESTED. TRUSTED.
>>> 
>>> phone: +972-9-7692018
>>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>>
>>
>>
>>
>> --
>>
>> Eyal edri
>>
>>
>> MANAGER
>>
>> RHV DevOps
>>
>> EMEA VIRTUALIZATION R&D
>>
>>
>> Red Hat EMEA 
>>  TRIED. TESTED. TRUSTED. 
>> phone: +972-9-7692018
>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: ovirt-imageio is failing to build on fc27/fcraw

2018-01-17 Thread Nir Soffer
On Wed, Jan 17, 2018 at 5:41 PM Eyal Edri  wrote:

> On Wed, Jan 17, 2018 at 5:38 PM, Eyal Edri  wrote:
>
>>
>>
>> On Wed, Jan 17, 2018 at 5:23 PM, Nir Soffer  wrote:
>>
>>>
>>>
>>> On Wed, Jan 17, 2018 at 5:08 PM Barak Korren  wrote:
>>>
 Hi Guys,

 I'm sure you are aware that 'ovirt-imageio' is currently failing to
 build on FC27 and Rawhide.

>>>
>>> Why are you sure? I have never seen any build failure since we added the
>>> fc27 and fcraw builds.
>>>
>>> Please point us to failed builds.
>>>
>>
>> Latest one -
>> http://jenkins.ovirt.org/job/ovirt-imageio_master_build-artifacts-fc27-x86_64/32/
>>
>
> If it doesn't take too long, I think it's worth adding building RPMs to
> check-patch as well, so you can catch such errors
> before the merge.
>

This is certainly what we need to do.


>
>
>>
>>
>>>
>>>

 I'm not sure you get the implications, though.

 The 1st thing that the release change-queue is checking, before
 running OST, is if all builds for all patches it is checking are
 building successfully for all platforms that the project has jobs for.

 This means that essentially no new 'ovirt-imageio' packages are making
 it into the tested and nightly repos for __any__ platform.

 Furthermore, with the way the system works for now, there is no
 special handling for build failures, which means they are treated like
 OST failures, and so the system runs an expensive bisection
 search to find the failing patch. So when your project fails to build,
 it slows down checking for other project's patches.

 I ask that you please do one of the following:
 1. Fix the FC27 and RAWHIDE builds
 2. Remove the FC27 and RAWHIDE build jobs
 3. Turn the FC27 and RAWHIDE build jobs into check-merged jobs which
 are not checked by the change-queue.

 --
 Barak Korren
 RHV DevOps team , RHCE, RHCi
 Red Hat EMEA
 redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

>>>
>>> ___
>>> Infra mailing list
>>> Infra@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/infra
>>>
>>>
>>
>>
>> --
>>
>> Eyal edri
>>
>>
>> MANAGER
>>
>> RHV DevOps
>>
>> EMEA VIRTUALIZATION R&D
>>
>>
>> Red Hat EMEA 
>>  TRIED. TESTED. TRUSTED. 
>> phone: +972-9-7692018
>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>
>
>
>
> --
>
> Eyal edri
>
>
> MANAGER
>
> RHV DevOps
>
> EMEA VIRTUALIZATION R&D
>
>
> Red Hat EMEA 
>  TRIED. TESTED. TRUSTED. 
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


oVirt infra weekly meeting notes 17.01.2018

2018-01-17 Thread Evgheni Dereveanchin
Hi everyone,

Please find the topics of this week's infra meeting below:
PHX DC:

   - CVE patching ongoing: most services are already patched, a few hosts and
   VMs left
   - BIOS patching breaks Lago on newer machines - to be fixed by the Lago team
   - Slave configuration is being moved to global_setup (OVIRT-1810)
   - Hardware refresh proposals for this year?
      - Add switches, as we’re out of ports
      - It would be great to have more bare-metal machines


--
Regards,
Evgheni Dereveanchin
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: ovirt-imageio is failing to build on fc27/fcraw

2018-01-17 Thread Eyal Edri
On Wed, Jan 17, 2018 at 5:38 PM, Eyal Edri  wrote:

>
>
> On Wed, Jan 17, 2018 at 5:23 PM, Nir Soffer  wrote:
>
>>
>>
>> On Wed, Jan 17, 2018 at 5:08 PM Barak Korren  wrote:
>>
>>> Hi Guys,
>>>
>>> I'm sure you are aware that 'ovirt-imageio' is currently failing to
>>> build on FC27 and Rawhide.
>>>
>>
>> Why are you sure? I have never seen any build failure since we added the
>> fc27 and fcraw builds.
>>
>> Please point us to failed builds.
>>
>
> Latest one - http://jenkins.ovirt.org/job/ovirt-imageio_master_
> build-artifacts-fc27-x86_64/32/
>

If it doesn't take too long, I think it's worth adding building RPMs to
check-patch as well, so you can catch such errors
before the merge.


>
>
>>
>>
>>>
>>> I'm not sure you get the implications, though.
>>>
>>> The 1st thing that the release change-queue is checking, before
>>> running OST, is if all builds for all patches it is checking are
>>> building successfully for all platforms that the project has jobs for.
>>>
>>> This means that essentially no new 'ovirt-imageio' packages are making
>>> it into the tested and nightly repos for __any__ platform.
>>>
>>> Furthermore, with the way the system works for now, there is no
>>> special handling for build failures, which means they are treated like
>>> OST failures, and so the system runs an expensive bisection
>>> search to find the failing patch. So when your project fails to build,
>>> it slows down checking for other project's patches.
>>>
>>> I ask that you please do one of the following:
>>> 1. Fix the FC27 and RAWHIDE builds
>>> 2. Remove the FC27 and RAWHIDE build jobs
>>> 3. Turn the FC27 and RAWHIDE build jobs into check-merged jobs which
>>> are not checked by the change-queue.
>>>
>>> --
>>> Barak Korren
>>> RHV DevOps team , RHCE, RHCi
>>> Red Hat EMEA
>>> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>>>
>>
>> ___
>> Infra mailing list
>> Infra@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/infra
>>
>>
>
>
> --
>
> Eyal edri
>
>
> MANAGER
>
> RHV DevOps
>
> EMEA VIRTUALIZATION R&D
>
>
> Red Hat EMEA 
>  TRIED. TESTED. TRUSTED. 
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>



-- 

Eyal edri


MANAGER

RHV DevOps

EMEA VIRTUALIZATION R&D


Red Hat EMEA 
 TRIED. TESTED. TRUSTED. 
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: ovirt-imageio is failing to build on fc27/fcraw

2018-01-17 Thread Eyal Edri
On Wed, Jan 17, 2018 at 5:23 PM, Nir Soffer  wrote:

>
>
> On Wed, Jan 17, 2018 at 5:08 PM Barak Korren  wrote:
>
>> Hi Guys,
>>
>> I'm sure you are aware that 'ovirt-imageio' is currently failing to
>> build on FC27 and Rawhide.
>>
>
> Why are you sure? I have never seen any build failure since we added the
> fc27 and fcraw builds.
>
> Please point us to failed builds.
>

Latest one -
http://jenkins.ovirt.org/job/ovirt-imageio_master_build-artifacts-fc27-x86_64/32/


>
>
>>
>> I'm not sure you get the implications, though.
>>
>> The 1st thing that the release change-queue is checking, before
>> running OST, is if all builds for all patches it is checking are
>> building successfully for all platforms that the project has jobs for.
>>
>> This means that essentially no new 'ovirt-imageio' packages are making
>> it into the tested and nightly repos for __any__ platform.
>>
>> Furthermore, with the way the system works for now, there is no
>> special handling for build failures, which means they are treated like
>> OST failures, and so the system runs an expensive bisection
>> search to find the failing patch. So when your project fails to build,
>> it slows down checking for other project's patches.
>>
>> I ask that you please do one of the following:
>> 1. Fix the FC27 and RAWHIDE builds
>> 2. Remove the FC27 and RAWHIDE build jobs
>> 3. Turn the FC27 and RAWHIDE build jobs into check-merged jobs which
>> are not checked by the change-queue.
>>
>> --
>> Barak Korren
>> RHV DevOps team , RHCE, RHCi
>> Red Hat EMEA
>> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>>
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>


-- 

Eyal edri


MANAGER

RHV DevOps

EMEA VIRTUALIZATION R&D


Red Hat EMEA 
 TRIED. TESTED. TRUSTED. 
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: ovirt-imageio is failing to build on fc27/fcraw

2018-01-17 Thread Barak Korren
On 17 January 2018 at 17:23, Nir Soffer  wrote:
>
>
> On Wed, Jan 17, 2018 at 5:08 PM Barak Korren  wrote:
>>
>> Hi Guys,
>>
>> I'm sure you are aware that 'ovirt-imageio' is currently failing to
>> build on FC27 and Rawhide.
>
>
> Why are you sure? I have never seen any build failure since we added the
> fc27 and fcraw builds.
>
> Please point us to failed builds.

http://jenkins.ovirt.org/job/ovirt-imageio_master_build-artifacts-fc27-x86_64/32/console
http://jenkins.ovirt.org/job/ovirt-imageio_master_build-artifacts-fcraw-x86_64/34/


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: ovirt-imageio is failing to build on fc27/fcraw

2018-01-17 Thread Nir Soffer
On Wed, Jan 17, 2018 at 5:08 PM Barak Korren  wrote:

> Hi Guys,
>
> I'm sure you are aware that 'ovirt-imageio' is currently failing to
> build on FC27 and Rawhide.
>

Why are you sure? I have never seen any build failure since we added the
fc27 and fcraw builds.

Please point us to failed builds.


>
> I'm not sure you get the implications, though.
>
> The 1st thing that the release change-queue is checking, before
> running OST, is if all builds for all patches it is checking are
> building successfully for all platforms that the project has jobs for.
>
> This means that essentially no new 'ovirt-imageio' packages are making
> it into the tested and nightly repos for __any__ platform.
>
> Furthermore, with the way the system works for now, there is no
> special handling for build failures, which means they are treated like
> OST failures, and so the system runs an expensive bisection
> search to find the failing patch. So when your project fails to build,
> it slows down checking for other project's patches.
>
> I ask that you please do one of the following:
> 1. Fix the FC27 and RAWHIDE builds
> 2. Remove the FC27 and RAWHIDE build jobs
> 3. Turn the FC27 and RAWHIDE build jobs into check-merged jobs which
> are not checked by the change-queue.
>
> --
> Barak Korren
> RHV DevOps team , RHCE, RHCi
> Red Hat EMEA
> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


ovirt-imageio is failing to build on fc27/fcraw

2018-01-17 Thread Barak Korren
Hi Guys,

I'm sure you are aware that 'ovirt-imageio' is currently failing to
build on FC27 and Rawhide.

I'm not sure you get the implications, though.

The 1st thing that the release change-queue is checking, before
running OST, is if all builds for all patches it is checking are
building successfully for all platforms that the project has jobs for.

This means that essentially no new 'ovirt-imageio' packages are making
it into the tested and nightly repos for __any__ platform.

Furthermore, with the way the system works for now, there is no
special handling for build failures, which means they are treated like
OST failures, and so the system runs an expensive bisection
search to find the failing patch. So when your project fails to build,
it slows down checking for other project's patches.
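
Roughly, that bisection behaves like the sketch below (illustrative only;
names are invented, and each test_passes() call stands for a full, expensive
OST run, which is why unbuildable patches slow everyone down):

    def find_failing_change(changes, test_passes):
        # Binary-search the first queued change whose inclusion breaks OST.
        # Invariant: changes[:lo] passes, changes[:hi] fails.
        lo, hi = 0, len(changes)
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if test_passes(changes[:mid]):  # one full OST run per step
                lo = mid
            else:
                hi = mid
        return changes[hi - 1]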

I ask that you please do one of the following:
1. Fix the FC27 and RAWHIDE builds
2. Remove the FC27 and RAWHIDE build jobs
3. Turn the FC27 and RAWHIDE build jobs into check-merged jobs which
are not checked by the change-queue.

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [ovirt-devel] [ OST Failure Report ] [ oVirt-engine-dashboard ] [ 17-01-2018 ] [ 002_bootstrap.add_hosts ]

2018-01-17 Thread Martin Perina
On Wed, Jan 17, 2018 at 3:49 PM, Dafna Ron  wrote:

> hi,
> We are failing test 002_bootstrap.add_hosts
> 
> on ovirt-engine-dashboard in the upgrade suite.
>
> The error seems to be a missing metrics role in the Ansible playbook.
>
> Link and headline of suspected patches:
> https://gerrit.ovirt.org/#/c/86318/ - add fc27.spec link for automation
>
> Link to Job:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4918/
>
> Link to all logs:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4918/artifact/
>
> (Relevant) error snippet from the log:
>
> 2018-01-17 08:36:23,706 p=27149 u=ovirt |  Using /usr/share/ovirt-engine/playbooks/ansible.cfg as config file
> 2018-01-17 08:36:23,734 p=27149 u=ovirt |  ERROR! the role 'oVirt.metrics' was not found in /usr/share/ovirt-engine/playbooks/roles:/var/lib/ovirt-engine/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles:/usr/share/ovirt-engine/playbooks/roles:/usr/share/ovirt-engine/playbooks
> The error appears to have been in '/usr/share/ovirt-engine/playbooks/roles/ovirt-host-deploy/meta/main.yml': line 6, column 5, but maybe elsewhere in the file depending on the exact syntax problem.
> The offending line appears to be:
>   - ovirt-host-deploy-firewalld
>   - oVirt.metrics
>     ^ here
>

Shirly/Didi, could you please take a look at whether the ovirt-engine-metrics
package is properly installed in OST?
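
A quick way to check on the engine VM could look like the loop below
(roles_path entries copied from the error message above; this snippet is only
an illustration, not part of OST):

    import os

    ROLES_PATH = [
        '/usr/share/ovirt-engine/playbooks/roles',
        '/var/lib/ovirt-engine/.ansible/roles',
        '/usr/share/ansible/roles',
        '/etc/ansible/roles',
    ]

    for directory in ROLES_PATH:
        candidate = os.path.join(directory, 'oVirt.metrics')
        status = 'found' if os.path.isdir(candidate) else 'missing'
        print('%s -> %s' % (candidate, status))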

>
> 2018-01-17 08:36:23,764-05 ERROR [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-1) [849756c] Host installation failed for host '61c7b24f-e19d-4101-a945-a65b30b7b488', 'lago-upgrade-from-release-suite-master-host0': Failed to execute Ansible host-deploy role. Please check logs for more details: /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-ansible-20180117083623-lago-upgrade-from-release-suite-master-host0-849756c.log
> 2018-01-17 08:36:23,769-05 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-1) [849756c] START, SetVdsStatusVDSCommand(HostName = lago-upgrade-from-release-suite-master-host0, SetVdsStatusVDSCommandParameters:{hostId='61c7b24f-e19d-4101-a945-a65b30b7b488', status='InstallFailed', nonOperationalReason='NONE', stopSpmFailureLogged='false', maintenanceReason='null'}), log id: 65b48a39
> 2018-01-17 08:36:23,773-05 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-1) [849756c] FINISH, SetVdsStatusVDSCommand, log id: 65b48a39
> 2018-01-17 08:36:23,780-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-1) [849756c] EVENT_ID: VDS_INSTALL_FAILED(505), Host lago-upgrade-from-release-suite-master-host0 installation failed. Failed to execute Ansible host-deploy role. Please check logs for more details: /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-ansible-20180117083623-lago-upgrade-from-release-suite-master-host0-849756c.log.
> 2018-01-17 08:36:23,788-05 INFO [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-1) [849756c] Lock freed to object 'EngineLock:{exclusiveLocks='[61c7b24f-e19d-4101-a945-a65b30b7b488=VDS]', sharedLocks=''}'
>
> ___
> Devel mailing list
> de...@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 
Martin Perina
Associate Manager, Software Engineering
Red Hat Czech s.r.o.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [ OST Failure Report ] [ oVirt-engine-dashboard ] [ 17-01-2018 ] [ 002_bootstrap.add_hosts ]

2018-01-17 Thread Greg Sheremeta
On Wed, Jan 17, 2018 at 9:49 AM, Dafna Ron  wrote:

> hi,
> We are failing test 002_bootstrap.add_hosts
> 
> on ovirt-engine-dashboard in the upgrade suite.
>
> The error seems to be a missing metrics role in the Ansible playbook.
>
>
>
> Link and headline of suspected patches:
> https://gerrit.ovirt.org/#/c/86318/ - add fc27.spec link for automation
>

All this patch does is add automation for fc27, so I don't think that is
affecting OST on el.

@Barak?



>
>
>
> Link to Job:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4918/
> Link to all logs:
>
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4918/artifact/
>
> (Relevant) error snippet from the log:
>
> 2018-01-17 08:36:23,706 p=27149 u=ovirt |  Using /usr/share/ovirt-engine/playbooks/ansible.cfg as config file
> 2018-01-17 08:36:23,734 p=27149 u=ovirt |  ERROR! the role 'oVirt.metrics' was not found in /usr/share/ovirt-engine/playbooks/roles:/var/lib/ovirt-engine/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles:/usr/share/ovirt-engine/playbooks/roles:/usr/share/ovirt-engine/playbooks
> The error appears to have been in '/usr/share/ovirt-engine/playbooks/roles/ovirt-host-deploy/meta/main.yml': line 6, column 5, but maybe elsewhere in the file depending on the exact syntax problem.
> The offending line appears to be:
>   - ovirt-host-deploy-firewalld
>   - oVirt.metrics
>     ^ here
>
> 2018-01-17 08:36:23,764-05 ERROR [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-1) [849756c] Host installation failed for host '61c7b24f-e19d-4101-a945-a65b30b7b488', 'lago-upgrade-from-release-suite-master-host0': Failed to execute Ansible host-deploy role. Please check logs for more details: /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-ansible-20180117083623-lago-upgrade-from-release-suite-master-host0-849756c.log
> 2018-01-17 08:36:23,769-05 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-1) [849756c] START, SetVdsStatusVDSCommand(HostName = lago-upgrade-from-release-suite-master-host0, SetVdsStatusVDSCommandParameters:{hostId='61c7b24f-e19d-4101-a945-a65b30b7b488', status='InstallFailed', nonOperationalReason='NONE', stopSpmFailureLogged='false', maintenanceReason='null'}), log id: 65b48a39
> 2018-01-17 08:36:23,773-05 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-1) [849756c] FINISH, SetVdsStatusVDSCommand, log id: 65b48a39
> 2018-01-17 08:36:23,780-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-1) [849756c] EVENT_ID: VDS_INSTALL_FAILED(505), Host lago-upgrade-from-release-suite-master-host0 installation failed. Failed to execute Ansible host-deploy role. Please check logs for more details: /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-ansible-20180117083623-lago-upgrade-from-release-suite-master-host0-849756c.log.
> 2018-01-17 08:36:23,788-05 INFO [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-1) [849756c] Lock freed to object 'EngineLock:{exclusiveLocks='[61c7b24f-e19d-4101-a945-a65b30b7b488=VDS]', sharedLocks=''}'
>



-- 

GREG SHEREMETA

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX

Red Hat NA



gsher...@redhat.com
IRC: gshereme

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[ OST Failure Report ] [ oVirt-engine-dashboard ] [ 17-01-2018 ] [ 002_bootstrap.add_hosts ]

2018-01-17 Thread Dafna Ron
hi,
We are failing test 002_bootstrap.add_hosts

on ovirt-engine-dashboard in the upgrade suite.

The error seems to be a missing metrics role in the Ansible playbook.

Link and headline of suspected patches:
https://gerrit.ovirt.org/#/c/86318/ - add fc27.spec link for automation

Link to Job:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4918/

Link to all logs:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4918/artifact/

(Relevant) error snippet from the log:

2018-01-17 08:36:23,706 p=27149 u=ovirt |  Using /usr/share/ovirt-engine/playbooks/ansible.cfg as config file
2018-01-17 08:36:23,734 p=27149 u=ovirt |  ERROR! the role 'oVirt.metrics' was not found in /usr/share/ovirt-engine/playbooks/roles:/var/lib/ovirt-engine/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles:/usr/share/ovirt-engine/playbooks/roles:/usr/share/ovirt-engine/playbooks
The error appears to have been in '/usr/share/ovirt-engine/playbooks/roles/ovirt-host-deploy/meta/main.yml': line 6, column 5, but maybe elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
  - ovirt-host-deploy-firewalld
  - oVirt.metrics
    ^ here

2018-01-17 08:36:23,764-05 ERROR [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-1) [849756c] Host installation failed for host '61c7b24f-e19d-4101-a945-a65b30b7b488', 'lago-upgrade-from-release-suite-master-host0': Failed to execute Ansible host-deploy role. Please check logs for more details: /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-ansible-20180117083623-lago-upgrade-from-release-suite-master-host0-849756c.log
2018-01-17 08:36:23,769-05 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-1) [849756c] START, SetVdsStatusVDSCommand(HostName = lago-upgrade-from-release-suite-master-host0, SetVdsStatusVDSCommandParameters:{hostId='61c7b24f-e19d-4101-a945-a65b30b7b488', status='InstallFailed', nonOperationalReason='NONE', stopSpmFailureLogged='false', maintenanceReason='null'}), log id: 65b48a39
2018-01-17 08:36:23,773-05 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-1) [849756c] FINISH, SetVdsStatusVDSCommand, log id: 65b48a39
2018-01-17 08:36:23,780-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-1) [849756c] EVENT_ID: VDS_INSTALL_FAILED(505), Host lago-upgrade-from-release-suite-master-host0 installation failed. Failed to execute Ansible host-deploy role. Please check logs for more details: /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-ansible-20180117083623-lago-upgrade-from-release-suite-master-host0-849756c.log.
2018-01-17 08:36:23,788-05 INFO [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-1) [849756c] Lock freed to object 'EngineLock:{exclusiveLocks='[61c7b24f-e19d-4101-a945-a65b30b7b488=VDS]', sharedLocks=''}'
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[CQ]: 86318, 1 (ovirt-engine-dashboard) failed "ovirt-master" system tests

2018-01-17 Thread oVirt Jenkins
Change 86318,1 (ovirt-engine-dashboard) is probably the reason behind the recent
system test failures in the "ovirt-master" change queue and needs to be fixed.

This change has been removed from the testing queue. Artifacts built from this
change will not be released until it is fixed.

For further details about the change see:
https://gerrit.ovirt.org/#/c/86318/1

For failed test results see:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4918/
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Jenkins build is back to normal : system-sync_mirrors-centos-updates-el7-x86_64 #1185

2018-01-17 Thread jenkins
See 


___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-1849) enable all gerrit hooks for cockpit-ovirt project

2018-01-17 Thread Ryan Barry (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-1849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=35677#comment-35677
 ] 

Ryan Barry commented on OVIRT-1849:
---

I don't see a good reason not to enable all of them.

At a minimum, set_modified would be great

> enable all gerrit hooks for cockpit-ovirt project
> -
>
> Key: OVIRT-1849
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1849
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>Reporter: eyal edri
>Assignee: infra
>
> It looks like the cockpit-ovirt project doesn't have all hooks enabled;
> currently these are the hooks it's using:
> /home/gerrit2/review_site/hooks/custom_hooks/update_tracker
> /home/gerrit2/review_site/hooks/custom_hooks/comment-added.propagate_review_values
> If we want the hooks to also update BZ status and do other
> verification like backporting, we need to add more hooks.
> [~sbona...@redhat.com] [~msi...@redhat.com] please comment which hooks you'd 
> like to enable or all of them.
> Info on the hooks can be found here 
> :http://ovirt-infra-docs.readthedocs.io/en/latest/General/Creating_Gerrit_Projects/index.html#enabling-custom-gerrit-hooks
> [~amarchuk] fyi



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100076)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-1849) enable all gerrit hooks for cockpit-ovirt project

2018-01-17 Thread Martin Sivak (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-1849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=35676#comment-35676
 ] 

Martin Sivak commented on OVIRT-1849:
-

I am not the maintainer, but we have to check all the SLA bugs related to
cockpit in depth, as their status does not reflect the patches.

[~rbarry] should give his opinion here too.

> enable all gerrit hooks for cockpit-ovirt project
> -
>
> Key: OVIRT-1849
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1849
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>Reporter: eyal edri
>Assignee: infra
>
> It looks like the cockpit-ovirt project doesn't have all hooks enabled;
> currently these are the hooks it's using:
> /home/gerrit2/review_site/hooks/custom_hooks/update_tracker
> /home/gerrit2/review_site/hooks/custom_hooks/comment-added.propagate_review_values
> If we want the hooks to also update BZ status and do other
> verification like backporting, we need to add more hooks.
> [~sbona...@redhat.com] [~msi...@redhat.com] please comment which hooks you'd 
> like to enable or all of them.
> Info on the hooks can be found here 
> :http://ovirt-infra-docs.readthedocs.io/en/latest/General/Creating_Gerrit_Projects/index.html#enabling-custom-gerrit-hooks
> [~amarchuk] fyi



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100076)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-1849) enable all gerrit hooks for cockpit-ovirt project

2018-01-17 Thread eyal edri (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-1849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

eyal edri updated OVIRT-1849:
-
Epic Link: OVIRT-411

> enable all gerrit hooks for cockpit-ovirt project
> -
>
> Key: OVIRT-1849
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1849
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>Reporter: eyal edri
>Assignee: infra
>
> It looks like the cockpit-ovirt project doesn't have all hooks enabled;
> currently these are the hooks it's using:
> /home/gerrit2/review_site/hooks/custom_hooks/update_tracker
> /home/gerrit2/review_site/hooks/custom_hooks/comment-added.propagate_review_values
> If we want the hooks to also update BZ status and do other
> verification like backporting, we need to add more hooks.
> [~sbona...@redhat.com] [~msi...@redhat.com] please comment which hooks you'd 
> like to enable or all of them.
> Info on the hooks can be found here 
> :http://ovirt-infra-docs.readthedocs.io/en/latest/General/Creating_Gerrit_Projects/index.html#enabling-custom-gerrit-hooks
> [~amarchuk] fyi



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100076)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-1849) enable all gerrit hooks for cockpit-ovirt project

2018-01-17 Thread eyal edri (oVirt JIRA)
eyal edri created OVIRT-1849:


 Summary: enable all gerrit hooks for cockpit-ovirt project
 Key: OVIRT-1849
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1849
 Project: oVirt - virtualization made easy
  Issue Type: Task
Reporter: eyal edri
Assignee: infra


It looks like the cockpit-ovirt project doesn't have all hooks enabled;
currently these are the hooks it's using:

/home/gerrit2/review_site/hooks/custom_hooks/update_tracker
/home/gerrit2/review_site/hooks/custom_hooks/comment-added.propagate_review_values

If we want the hooks to also update BZ status and do other verification
like backporting, we need to add more hooks.

[~sbona...@redhat.com] [~msi...@redhat.com] please comment which hooks you'd 
like to enable or all of them.

Info on the hooks can be found here 
:http://ovirt-infra-docs.readthedocs.io/en/latest/General/Creating_Gerrit_Projects/index.html#enabling-custom-gerrit-hooks

[~amarchuk] fyi



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100076)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[ OST Failure Report ] [ oVirt-Host ] [ 17-01-2018 ] [ 002_bootstrap.add_hosts ]

2018-01-17 Thread Dafna Ron
Hi,

We have a failure in test 002_bootstrap.add_hosts from the upgrade suite.
Can you please check the issue?



Link and headline of suspected patches:

Reported as failed:
https://gerrit.ovirt.org/#/c/86152/ - build: post 4.2.1-1

Reported as root cause:
https://gerrit.ovirt.org/#/c/85421/ - Require katello-agent

Link to Job:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4908/

Link to all logs:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4908/artifact

(Relevant) error snippet from the log:

nsaction commit.
2018-01-17 00:08:41,743-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (VdsDeploy) [8f86c69] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509), Installing Host lago-upgrade-from-release-suite-master-host0. Setting kernel arguments.
2018-01-17 00:08:41,977-05 ERROR [org.ovirt.engine.core.uutils.ssh.SSHDialog] (EE-ManagedThreadFactory-engine-Thread-1) [8f86c69] Swallowing exception as preferring stderr
2018-01-17 00:08:41,977-05 ERROR [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase] (VdsDeploy) [8f86c69] Error during deploy dialog
2018-01-17 00:08:41,978-05 ERROR [org.ovirt.engine.core.uutils.ssh.SSHDialog] (EE-ManagedThreadFactory-engine-Thread-1) [8f86c69] SSH error running command root@lago-upgrade-from-release-suite-master-host0:'umask 0077; MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp -d -t ovirt-XX)"; trap "chmod -R u+rwX \"${MYTMP}\" > /dev/null 2>&1; rm -fr \"${MYTMP}\" > /dev/null 2>&1" 0; tar --warning=no-timestamp -C "${MYTMP}" -x && "${MYTMP}"/ovirt-host-deploy DIALOG/dialect=str:machine DIALOG/customization=bool:True': RuntimeException: Unexpected error during execution: bash: line 1:  1419 Segmentation fault "${MYTMP}"/ovirt-host-deploy DIALOG/dialect=str:machine DIALOG/customization=bool:True
2018-01-17 00:08:41,979-05 ERROR [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase] (EE-ManagedThreadFactory-engine-Thread-1) [8f86c69] Error during host lago-upgrade-from-release-suite-master-host0 install
2018-01-17 00:08:41,983-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-1) [8f86c69] EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511), An error has occurred during installation of Host lago-upgrade-from-release-suite-master-host0: Unexpected error during execution: bash: line 1:  1419 Segmentation fault  "${MYTMP}"/ovirt-host-deploy DIALOG/dialect=str:machine DIALOG/customization=bool:True.
2018-01-17 00:08:41,983-05 ERROR [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase] (EE-ManagedThreadFactory-engine-Thread-1) [8f86c69] Error during host lago-upgrade-from-release-suite-master-host0 install, preferring first exception: Unexpected connection termination
2018-01-17 00:08:41,983-05 ERROR [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-1) [8f86c69] Host installation failed for host '76f737a5-fc15-4641-91c8-f5ef28b02436', 'lago-upgrade-from-release-suite-master-host0': Unexpected connection termination
2018-01-17 00:08:41,989-05 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-1) [8f86c69] START, SetVdsStatusVDSCommand(HostName = lago-upgrade-from-release-suite-master-host0, SetVdsStatusVDSCommandParameters:{hostId='76f737a5-fc15-4641-91c8-f5ef28b02436', status='InstallFailed', nonOperationalReason='NONE', stopSpmFailureLogged='false', maintenanceReason='null'}), log id: 47ad9c87
2018-01-17 00:08:41,995-05 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-1) [8f86c69] FINISH, SetVdsStatusVDSCommand, log id: 47ad9c87
2018-01-17 00:08:42,003-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-1) [8f86c69] EVENT_ID: VDS_INSTALL_FAILED(505), Host lago-upgrade-from-release-suite-master-host0 installation failed. Unexpected connection termination.
2018-01-17 00:08:42,014-05 INFO [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-1) [8f86c69] Lock freed to object 'EngineLock:{exclusiveLocks='[76f737a5-fc15-4641-91c8-f5ef28b02436=VDS]', sharedLocks=''}'
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[CQ]: 86152, 1 (ovirt-host) failed "ovirt-master" system tests, but isn't the failure root cause

2018-01-17 Thread oVirt Jenkins
A system test invoked by the "ovirt-master" change queue, including change
86152,1 (ovirt-host), failed. However, this change does not seem to be the root
cause of this failure. Change 85421,2 (ovirt-host), which this change depends on
or is based on, was detected as the cause of the testing failures.

This change has been removed from the testing queue. Artifacts built from this
change will not be released until either change 85421,2 (ovirt-host) is fixed
and this change is updated to refer to, or rebased on, the fixed version, or
this change is modified to no longer depend on it.

For further details about the change see:
https://gerrit.ovirt.org/#/c/86152/1

For further details about the change that seems to be the root cause behind the
testing failures see:
https://gerrit.ovirt.org/#/c/85421/2

For failed test results see:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4908/
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[vdsm][tag][master] Vdsm tagged: v4.20.14

2018-01-17 Thread Francesco Romani
Hi infra,

we have a new Vdsm tag on branch master for oVirt 4.2:

tag v4.20.14
Tagger: Francesco Romani 
Date:   Wed Jan 17 10:07:06 2018 +0100

Vdsm 4.20.14 for oVirt 4.2.1 RC 3

commit a2074b0ff4537a8ea4a6de23acdc260b9a5658c7


+++

NOTE ABOUT BRANCHING:
* We are merging again patches targeted for 4.2.1 on master
* We don't yet have plans to branch out ovirt-4.2 from master.
Following the usual Vdsm habit, we will branch out late, *after* Engine
does.


Bests,

-- 
Francesco Romani
Senior SW Eng., Virtualization R&D
Red Hat
IRC: fromani github: @fromanirh

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Jenkins build is back to normal : system-sync_mirrors-fedora-updates-fc27-x86_64 #169

2018-01-17 Thread jenkins
See 


___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra