[oVirt Jenkins] ovirt-system-tests_hc-basic-suite-master - Build # 164 - Still Failing!

2018-01-19 Thread jenkins
Project: http://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-master/ 
Build: 
http://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-master/164/
Build Number: 164
Build Status:  Still Failing
Triggered By: Started by timer

-------------------------------------
Changes Since Last Success:
-------------------------------------
Changes for Build #159
[Gal Ben Haim] 4.2: Adding 4.2 pre repo


Changes for Build #160
[Gal Ben Haim] 4.2: Adding 4.2 pre repo


Changes for Build #161
[Gal Ben Haim] 4.2: Adding 4.2 pre repo

[Gal Ben Haim] ost: Add 4.2 to the manual job

[Gal Ben Haim] ost: Adding 'he-basic-ansible' to the manual job

[Daniel Belenky] Add line brak before mock_enable_network


Changes for Build #162
[Yedidyah Bar David] Revert "upgrade_suites: Update ovirt-engine-metrics"

[Gal Ben Haim] ost: Fix Lago custom repo

[Greg Sheremeta] ovirt-js-dependencies: add jobs for fc27, add 4.2 branch


Changes for Build #163
[Yaniv Kaul] Reuse some test_utils (for snapshot and disk services)


Changes for Build #164
[Yaniv Kaul] Reuse some test_utils (for snapshot and disk services)




-------------------------------------
Failed Tests:
-------------------------------------
1 tests failed.
FAILED:  002_bootstrap.add_hosts

Error Message:
Host lago-hc-basic-suite-master-host1 failed to install
-------------------- >> begin captured logging << --------------------
ovirtlago.testlib: ERROR: * Unhandled exception in
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 219, in assert_equals_within
    res = func()
  File "/home/jenkins/workspace/ovirt-system-tests_hc-basic-suite-master/ovirt-system-tests/hc-basic-suite-master/test-scenarios/002_bootstrap.py", line 151, in _host_is_up
    raise RuntimeError('Host %s failed to install' % host.name())
RuntimeError: Host lago-hc-basic-suite-master-host1 failed to install
--------------------- >> end captured logging << ---------------------

Stack Trace:
  File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
    testMethod()
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 129, in wrapped_test
    test()
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 59, in wrapper
    return func(get_test_prefix(), *args, **kwargs)
  File "/home/jenkins/workspace/ovirt-system-tests_hc-basic-suite-master/ovirt-system-tests/hc-basic-suite-master/test-scenarios/002_bootstrap.py", line 164, in add_hosts
    testlib.assert_true_within(_host_is_up, timeout=15 * 60)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 263, in assert_true_within
    assert_equals_within(func, True, timeout, allowed_exceptions)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 219, in assert_equals_within
    res = func()
  File "/home/jenkins/workspace/ovirt-system-tests_hc-basic-suite-master/ovirt-system-tests/hc-basic-suite-master/test-scenarios/002_bootstrap.py", line 151, in _host_is_up
    raise RuntimeError('Host %s failed to install' % host.name())
RuntimeError: Host lago-hc-basic-suite-master-host1 failed to install
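The failure surfaces through testlib's polling helpers: `assert_true_within` wraps `assert_equals_within`, which keeps calling `_host_is_up` until it returns `True` or the 15-minute timeout expires. As a hedged sketch of how such a helper typically behaves (illustrative re-implementation, not ovirtlago's actual code — the real one differs in details such as the poll interval):

```python
import time


def assert_equals_within(func, expected, timeout, allowed_exceptions=None,
                         interval=1):
    """Poll func() until it returns `expected`, or fail after `timeout` seconds.

    Exceptions not listed in `allowed_exceptions` propagate immediately,
    which is exactly how the RuntimeError above escaped the poll loop.
    """
    allowed_exceptions = allowed_exceptions or []
    deadline = time.time() + timeout
    while True:
        try:
            res = func()
            if res == expected:
                return
        except Exception as exc:
            # Re-raise anything the caller did not explicitly allow.
            if not any(isinstance(exc, cls) for cls in allowed_exceptions):
                raise
        if time.time() >= deadline:
            raise AssertionError(
                'timed out waiting for %r to return %r' % (func, expected))
        time.sleep(interval)


def assert_true_within(func, timeout, allowed_exceptions=None):
    assert_equals_within(func, True, timeout, allowed_exceptions)
```

Note the two failure modes: a condition that never becomes true times out as an `AssertionError`, while a disallowed exception (here, the "failed to install" `RuntimeError`) aborts the wait immediately.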
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] ovirt-appliance_ovirt-4.2-snapshot_build-artifacts-el7-x86_64 - Build # 140 - Failure!

2018-01-19 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-appliance_ovirt-4.2-snapshot_build-artifacts-el7-x86_64/
 
Build: 
http://jenkins.ovirt.org/job/ovirt-appliance_ovirt-4.2-snapshot_build-artifacts-el7-x86_64/140/
Build Number: 140
Build Status:  Failure
Triggered By: Started by timer

-------------------------------------
Changes Since Last Success:
-------------------------------------
Changes for Build #140
[Sandro Bonazzola] automation: check and save unsigned rpms




-------------------------------------
Failed Tests:
-------------------------------------
No tests ran.


[oVirt Jenkins] ovirt-system-tests_performance-suite-master - Build # 67 - Failure!

2018-01-19 Thread jenkins
Project: 
http://jenkins.ovirt.org/job/ovirt-system-tests_performance-suite-master/ 
Build: 
http://jenkins.ovirt.org/job/ovirt-system-tests_performance-suite-master/67/
Build Number: 67
Build Status:  Failure
Triggered By: Started by timer

-------------------------------------
Changes Since Last Success:
-------------------------------------
Changes for Build #67
[Yaniv Kaul] Reuse some test_utils (for snapshot and disk services)




-------------------------------------
Failed Tests:
-------------------------------------
1 tests failed.
FAILED:  040_add_hosts_vms.add_cluster

Error Message:
Unsupported CPU model: Haswell-noTSX-IBRS. Supported models: 
IvyBridge,Westmere,Skylake,Penryn,Haswell,Broadwell,Nehalem,Skylake-Client,Broadwell-noTSX,Conroe,SandyBridge,Haswell-noTSX

Stack Trace:
Traceback (most recent call last):
  File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
    testMethod()
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 129, in wrapped_test
    test()
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 59, in wrapper
    return func(get_test_prefix(), *args, **kwargs)
  File "/home/jenkins/workspace/ovirt-system-tests_performance-suite-master/ovirt-system-tests/performance-suite-master/test-scenarios/040_add_hosts_vms.py", line 216, in add_cluster
    cpu_family = prefix.virt_env.get_ovirt_cpu_family()
  File "/usr/lib/python2.7/site-packages/ovirtlago/virt.py", line 151, in get_ovirt_cpu_family
    ','.join(cpu_map[host.cpu_vendor].iterkeys())
RuntimeError: Unsupported CPU model: Haswell-noTSX-IBRS. Supported models: IvyBridge,Westmere,Skylake,Penryn,Haswell,Broadwell,Nehalem,Skylake-Client,Broadwell-noTSX,Conroe,SandyBridge,Haswell-noTSX
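The traceback above fails while looking the host's CPU model up in a vendor-to-model map: `Haswell-noTSX-IBRS` (one of the model names introduced with the Spectre/IBRS microcode updates) is missing from the map. A hedged sketch of that kind of lookup, with one plausible mitigation — the map contents, function signature, and the suffix-stripping fallback are all illustrative assumptions, not lago's actual code or fix:

```python
# Illustrative vendor -> {model: generation-weight} map; the real lago
# data set is larger and lives in ovirtlago/virt.py.
CPU_MAP = {
    'GenuineIntel': {
        'Conroe': 1, 'Penryn': 2, 'Nehalem': 3, 'Westmere': 4,
        'SandyBridge': 5, 'IvyBridge': 6, 'Haswell-noTSX': 7,
        'Haswell': 8, 'Broadwell-noTSX': 9, 'Broadwell': 10,
        'Skylake-Client': 11, 'Skylake': 12,
    },
}


def get_ovirt_cpu_family(vendor, model):
    """Map a host CPU model to a supported family (sketch only)."""
    supported = CPU_MAP.get(vendor, {})
    # Assumed mitigation (not necessarily the real fix): the post-Spectre
    # '-IBRS' variants behave like their base model for this purpose,
    # so fall back to the name without the suffix.
    if model not in supported and model.endswith('-IBRS'):
        model = model[:-len('-IBRS')]
    if model not in supported:
        raise RuntimeError(
            'Unsupported CPU model: %s. Supported models: %s'
            % (model, ','.join(sorted(supported))))
    return model
```

With such a fallback, `Haswell-noTSX-IBRS` would resolve to `Haswell-noTSX` instead of aborting the suite before any host is added.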


Re: [ovirt-devel] [ OST Failure Report ] [ oVirt hc master ] [ 19-01-2018 ] [ 002_bootstrap.add_hosts ]

2018-01-19 Thread Yaniv Kaul
On Fri, Jan 19, 2018 at 5:06 PM, Dafna Ron  wrote:

> Hi,
>
> we are failing hc master basic suite on test: 002_bootstrap.add_hosts
>
>
>
>
>
>
>
> *Link and headline of suspected patches:*
> Link to Job: http://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-master/163/
> Link to all logs: http://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-master/163/artifact/
> (Relevant) error snippet from the log:
>
> 2018-01-18 22:30:56,141-05 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (VdsDeploy) [3e58f8ce] EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511), An
> error has occurred during installation of Host lago_basic_suite_hc_host0:
> Failed to execute stage 'Closing up': 'Plugin' object has no attribute
> 'exist'
>

Dafna,
The relevant log is[1], which shows:

2018-01-18 22:49:25,385-0500 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:921 execute-output: ('/usr/bin/systemctl', 'start',
'glusterd.service') stdout:
2018-01-18 22:49:25,385-0500 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:926 execute-output: ('/usr/bin/systemctl', 'start',
'glusterd.service') stderr:
2018-01-18 22:49:25,385-0500 DEBUG otopi.context context._executeMethod:143
method exception
Traceback (most recent call last):
  File "/tmp/ovirt-xJomKMYufQ/pythonlib/otopi/context.py", line 133, in
_executeMethod
method['method']()
  File
"/tmp/ovirt-xJomKMYufQ/otopi-plugins/ovirt-host-deploy/gluster/packages.py",
line 95, in _closeup
if self.services.exist('glustereventsd'):
AttributeError: 'Plugin' object has no attribute 'exist'


Y.

[1]
http://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-master/163/artifact/exported-artifacts/test_logs/hc-basic-suite-master/post-002_bootstrap.py/lago-hc-basic-suite-master-engine/_var_log/ovirt-engine/host-deploy/ovirt-host-deploy-20180118224925-192.168.200.4-7bbdac84.log
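The AttributeError in the deploy log is a plain wrong-method-name bug: the gluster packages plugin calls `self.services.exist(...)` while the services interface it delegates to exposes `exists(...)`. A minimal sketch of the failure mode — the class names and the stand-in `exists()` behavior here are illustrative, not the actual otopi code:

```python
class Services(object):
    """Stand-in for the otopi services interface (illustrative)."""

    def exists(self, name):
        # The real implementation queries systemd/sysvinit; here we
        # just pretend these two units are known.
        return name in ('glusterd', 'glustereventsd')


class GlusterPackagesPlugin(object):
    def __init__(self):
        self.services = Services()

    def closeup_buggy(self):
        # The failing call from packages.py line 95: there is no 'exist'
        # attribute, so this raises AttributeError, which host-deploy
        # reports as "Failed to execute stage 'Closing up'".
        return self.services.exist('glustereventsd')

    def closeup_fixed(self):
        # One-character fix: call the method by its real name.
        return self.services.exists('glustereventsd')
```

Because the call sits in the closeup stage, the typo only bites at the very end of deployment, after all the earlier stages (including starting glusterd, visible just above in the log) have succeeded.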


>
> ___
> Devel mailing list
> de...@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>


[JIRA] (OVIRT-1854) fc26 dashboard jobs failing on rpm install for cmake-data and cmake

2018-01-19 Thread Greg Sheremeta (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-1854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=35684#comment-35684
 ] 

Greg Sheremeta commented on OVIRT-1854:
---

It's happening locally with mock_runner too

> fc26 dashboard jobs failing on rpm install for cmake-data and cmake
> ---
>
> Key: OVIRT-1854
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1854
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Greg Sheremeta
>Assignee: infra
>
> Hi,
> Anyone seen anything like this?
> http://jenkins.ovirt.org/job/ovirt-engine-dashboard_master_check-merged-fc26-x86_64/30/console
> 17:45:32 --> Processing Dependency: (cmake = 3.10.1-11.fc26 if cmake <
> 3.10.1-11.fc26) for package: cmake-rpm-macros-3.10.1-11.fc26.noarch
> 17:45:32 ---> Package device-mapper.x86_64 0:1.02.137-6.fc26 will be
> installed
> 17:45:32 --> Finished Dependency Resolution
> 17:45:32 Error: Package: cmake-rpm-macros-3.10.1-11.fc26.noarch
> (fedora-updates-fc26)
> 17:45:32Requires: (cmake-data = 3.10.1-11.fc26 if cmake-data <
> 3.10.1-11.fc26)
> 17:45:32  You could try using --skip-broken to work around the problem
> 17:45:32 Error: Package: cmake-rpm-macros-3.10.1-11.fc26.noarch
> (fedora-updates-fc26)
> 17:45:32Requires: (cmake = 3.10.1-11.fc26 if cmake <
> 3.10.1-11.fc26)
> 17:45:32  You could try running: rpm -Va --nofiles --nodigest
> -- 
> GREG SHEREMETA
> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
> Red Hat NA
> 
> gsher...@redhat.com | IRC: gshereme
> 



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100076)


[JIRA] (OVIRT-1854) fc26 dashboard jobs failing on rpm install for cmake-data and cmake

2018-01-19 Thread Greg Sheremeta (oVirt JIRA)
Greg Sheremeta created OVIRT-1854:
-

 Summary: fc26 dashboard jobs failing on rpm install for cmake-data 
and cmake
 Key: OVIRT-1854
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1854
 Project: oVirt - virtualization made easy
  Issue Type: By-EMAIL
Reporter: Greg Sheremeta
Assignee: infra


Hi,

Anyone seen anything like this?

http://jenkins.ovirt.org/job/ovirt-engine-dashboard_master_check-merged-fc26-x86_64/30/console

17:45:32 --> Processing Dependency: (cmake = 3.10.1-11.fc26 if cmake <
3.10.1-11.fc26) for package: cmake-rpm-macros-3.10.1-11.fc26.noarch
17:45:32 ---> Package device-mapper.x86_64 0:1.02.137-6.fc26 will be
installed
17:45:32 --> Finished Dependency Resolution
17:45:32 Error: Package: cmake-rpm-macros-3.10.1-11.fc26.noarch
(fedora-updates-fc26)
17:45:32Requires: (cmake-data = 3.10.1-11.fc26 if cmake-data <
3.10.1-11.fc26)
17:45:32  You could try using --skip-broken to work around the problem
17:45:32 Error: Package: cmake-rpm-macros-3.10.1-11.fc26.noarch
(fedora-updates-fc26)
17:45:32Requires: (cmake = 3.10.1-11.fc26 if cmake <
3.10.1-11.fc26)
17:45:32  You could try running: rpm -Va --nofiles --nodigest
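The parenthesized requirements in the output above are RPM "rich" (boolean) dependencies, introduced in RPM 4.13 and resolved by dnf/libsolv; classic yum cannot parse them, and the `--skip-broken` suggestion in the log is yum-style output — so a yum-driven chroot install of fc26 packages is a likely culprit. As they would appear in the cmake spec (illustrative rendering of the dependencies quoted verbatim in the log):

```
# RPM rich/boolean dependency (RPM >= 4.13): only pin the matching
# cmake version when an older cmake is part of the same transaction.
Requires: (cmake = 3.10.1-11.fc26 if cmake < 3.10.1-11.fc26)
Requires: (cmake-data = 3.10.1-11.fc26 if cmake-data < 3.10.1-11.fc26)
```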

-- 

GREG SHEREMETA

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX

Red Hat NA



gsher...@redhat.com | IRC: gshereme




--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100076)


[CQ]: 86590, 1 (ovirt-log-collector) failed "ovirt-4.2" system tests, but isn't the failure root cause

2018-01-19 Thread oVirt Jenkins
A system test invoked by the "ovirt-4.2" change queue including change 86590,1
(ovirt-log-collector) failed. However, this change seems not to be the root
cause for this failure. Change 86552,1 (ovirt-log-collector) that this change
depends on or is based on, was detected as the cause of the testing failures.

This change had been removed from the testing queue. Artifacts built from this
change will not be released until either change 86552,1 (ovirt-log-collector)
is fixed and this change is updated to refer to or rebased on the fixed
version, or this change is modified to no longer depend on it.

For further details about the change see:
https://gerrit.ovirt.org/#/c/86590/1

For further details about the change that seems to be the root cause behind the
testing failures see:
https://gerrit.ovirt.org/#/c/86552/1

For failed test results see:
http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/266/
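The attribution logic this notification describes — a failing change is exonerated when a change it depends on is already known to break the queue — can be sketched as follows. This is a simplified model of the described behavior, not the change-queue's actual implementation:

```python
def attribute_failure(failed_change, known_culprits, depends_on):
    """Return the change to blame for a system-test failure.

    failed_change:  (gerrit number, patchset), e.g. (86590, 1)
    known_culprits: set of changes already identified as breaking the queue
    depends_on:     dict mapping a change to the change it depends on

    Illustrative sketch of the behavior described in the mail.
    """
    seen = set()
    change = failed_change
    # Walk up the dependency chain; if any ancestor is a known culprit,
    # blame it instead of the change under test.
    while change in depends_on and change not in seen:
        seen.add(change)
        change = depends_on[change]
        if change in known_culprits:
            return change
    return failed_change


# The scenario from this mail: 86590,1 depends on 86552,1, which has
# already been flagged as the root cause.
blamed = attribute_failure(
    failed_change=(86590, 1),
    known_culprits={(86552, 1)},
    depends_on={(86590, 1): (86552, 1)},
)
```

Under this model the dependent change is removed from the queue without being blamed, and re-enters once it is rebased on (or updated to refer to) a fixed version of the culprit.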


[CQ]: 85968, 8 (vdsm) failed "ovirt-master" system tests, but isn't the failure root cause

2018-01-19 Thread oVirt Jenkins
A system test invoked by the "ovirt-master" change queue including change
85968,8 (vdsm) failed. However, this change seems not to be the root cause for
this failure. Change 86114,4 (vdsm) that this change depends on or is based on,
was detected as the cause of the testing failures.

This change had been removed from the testing queue. Artifacts built from this
change will not be released until either change 86114,4 (vdsm) is fixed and
this change is updated to refer to or rebased on the fixed version, or this
change is modified to no longer depend on it.

For further details about the change see:
https://gerrit.ovirt.org/#/c/85968/8

For further details about the change that seems to be the root cause behind the
testing failures see:
https://gerrit.ovirt.org/#/c/86114/4

For failed test results see:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4979/


[CQ]: 86584, 1 (ovirt-log-collector) failed "ovirt-4.2" system tests, but isn't the failure root cause

2018-01-19 Thread oVirt Jenkins
A system test invoked by the "ovirt-4.2" change queue including change 86584,1
(ovirt-log-collector) failed. However, this change seems not to be the root
cause for this failure. Change 86552,1 (ovirt-log-collector) that this change
depends on or is based on, was detected as the cause of the testing failures.

This change had been removed from the testing queue. Artifacts built from this
change will not be released until either change 86552,1 (ovirt-log-collector)
is fixed and this change is updated to refer to or rebased on the fixed
version, or this change is modified to no longer depend on it.

For further details about the change see:
https://gerrit.ovirt.org/#/c/86584/1

For further details about the change that seems to be the root cause behind the
testing failures see:
https://gerrit.ovirt.org/#/c/86552/1

For failed test results see:
http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/264/


Fwd: [CQ]: 86552, 1 (ovirt-log-collector) failed "ovirt-4.2" system tests

2018-01-19 Thread Dafna Ron
Hi,

Please note that this time the patch failed on unsupported cpu type:

Unsupported CPU model: Haswell-noTSX-IBRS. Supported models:
IvyBridge,Westmere,Skylake,Penryn,Haswell,Broadwell,Nehalem,Skylake-Client,Broadwell-noTSX,Conroe,SandyBridge,Haswell-noTSX

Thanks,

Dafna

-- Forwarded message --
From: oVirt Jenkins 
Date: Fri, Jan 19, 2018 at 3:33 PM
Subject: [CQ]: 86552,1 (ovirt-log-collector) failed "ovirt-4.2" system tests
To: infra@ovirt.org


Change 86552,1 (ovirt-log-collector) is probably the reason behind recent
system test failures in the "ovirt-4.2" change queue and needs to be fixed.

This change had been removed from the testing queue. Artifacts built from this
change will not be released until it is fixed.

For further details about the change see:
https://gerrit.ovirt.org/#/c/86552/1

For failed test results see:
http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/261/


[CQ]: 86584, 1 (ovirt-log-collector) failed "ovirt-4.2" system tests, but isn't the failure root cause

2018-01-19 Thread oVirt Jenkins
A system test invoked by the "ovirt-4.2" change queue including change 86584,1
(ovirt-log-collector) failed. However, this change seems not to be the root
cause for this failure. Change 86552,1 (ovirt-log-collector) that this change
depends on or is based on, was detected as the cause of the testing failures.

This change had been removed from the testing queue. Artifacts built from this
change will not be released until either change 86552,1 (ovirt-log-collector)
is fixed and this change is updated to refer to or rebased on the fixed
version, or this change is modified to no longer depend on it.

For further details about the change see:
https://gerrit.ovirt.org/#/c/86584/1

For further details about the change that seems to be the root cause behind the
testing failures see:
https://gerrit.ovirt.org/#/c/86552/1

For failed test results see:
http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/262/


[CQ]: 86552,1 (ovirt-log-collector) failed "ovirt-4.2" system tests

2018-01-19 Thread oVirt Jenkins
Change 86552,1 (ovirt-log-collector) is probably the reason behind recent
system test failures in the "ovirt-4.2" change queue and needs to be fixed.

This change had been removed from the testing queue. Artifacts built from this
change will not be released until it is fixed.

For further details about the change see:
https://gerrit.ovirt.org/#/c/86552/1

For failed test results see:
http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/261/


Re: [CQ]: 86552, 1 (ovirt-log-collector) failed "ovirt-4.2" system tests

2018-01-19 Thread Dafna Ron
It's fixed now.
I re-triggered the job and it should all pass now.


On Fri, Jan 19, 2018 at 2:41 PM, Sandro Bonazzola 
wrote:

>
>
> 2018-01-18 19:07 GMT+01:00 oVirt Jenkins :
>
>> Change 86552,1 (ovirt-log-collector) is probably the reason behind recent
>> system test failures in the "ovirt-4.2" change queue and needs to be
>> fixed.
>>
>
> Not really, it's failing to get CentOS Base repo.
>
>
>
>>
>> This change had been removed from the testing queue. Artifacts build from
>> this
>> change will not be released until it is fixed.
>>
>> For further details about the change see:
>> https://gerrit.ovirt.org/#/c/86552/1
>>
>> For failed test results see:
>> http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/254/
>> ___
>> Infra mailing list
>> Infra@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/infra
>>
>
>
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>
> Red Hat EMEA 
> 
> TRIED. TESTED. TRUSTED. 
>
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>


[oVirt Jenkins] standard-enqueue - Build #9053 - FAILURE!

2018-01-19 Thread jenkins
Build: http://jenkins.ovirt.org/job/standard-enqueue/9053/
Build Name: #9053
Build Description: Gerrit: 86417 - ovirt-log-collector (master)
Build Status: FAILURE
Gerrit change: https://gerrit.ovirt.org/86417
- title: analyzer: Add IPTablesConfigSiteCustom pre-condition
- project: ovirt-log-collector
- branch: master
- author: Ala Hino


[ OST Failure Report ] [ oVirt hc master ] [ 19-01-2018 ] [ 002_bootstrap.add_hosts ]

2018-01-19 Thread Dafna Ron
Hi,

We are failing the hc master basic suite on test 002_bootstrap.add_hosts.







*Link and headline of suspected patches:*
Link to Job: http://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-master/163/
Link to all logs: http://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-master/163/artifact/
(Relevant) error snippet from the log:

2018-01-18 22:30:56,141-05 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(VdsDeploy) [3e58f8ce] EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511), An
error has occurred during installation of Host lago_basic_suite_hc_host0:
Failed to execute stage 'Closing up': 'Plugin' object has no attribute
'exist'


[CQ]: 86582, 1 (ovirt-engine-dashboard) failed "ovirt-4.2" system tests

2018-01-19 Thread oVirt Jenkins
Change 86582,1 (ovirt-engine-dashboard) is probably the reason behind recent
system test failures in the "ovirt-4.2" change queue and needs to be fixed.

This change had been removed from the testing queue. Artifacts built from this
change will not be released until it is fixed.

For further details about the change see:
https://gerrit.ovirt.org/#/c/86582/1

For failed test results see:
http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/258/


[JIRA] (OVIRT-1849) enable all gerrit hooks for cockpit-ovirt project

2018-01-19 Thread sbonazzo (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-1849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=35683#comment-35683
 ] 

sbonazzo commented on OVIRT-1849:
-

I agree with [~rbarry] , let's enable all of them.

> enable all gerrit hooks for cockpit-ovirt project
> -
>
> Key: OVIRT-1849
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1849
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>Reporter: eyal edri
>Assignee: infra
>
> It looks like the cockpit-ovirt project doesn't have all hooks enabled, 
> current these are the hooks its using:
> /home/gerrit2/review_site/hooks/custom_hooks/update_tracker
> /home/gerrit2/review_site/hooks/custom_hooks/comment-added.propagate_review_values
> If we want that the hooks will also update bz status and do other 
> verification like backporing, we need to add more hooks.
> [~sbona...@redhat.com] [~msi...@redhat.com] please comment which hooks you'd 
> like to enable or all of them.
> Info on the hooks can be found here 
> :http://ovirt-infra-docs.readthedocs.io/en/latest/General/Creating_Gerrit_Projects/index.html#enabling-custom-gerrit-hooks
> [~amarchuk] fyi



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100076)


Re: [CQ]: 86552, 1 (ovirt-log-collector) failed "ovirt-4.2" system tests

2018-01-19 Thread Sandro Bonazzola
2018-01-18 19:07 GMT+01:00 oVirt Jenkins :

> Change 86552,1 (ovirt-log-collector) is probably the reason behind recent
> system test failures in the "ovirt-4.2" change queue and needs to be fixed.
>

Not really, it's failing to get CentOS Base repo.



>
> This change had been removed from the testing queue. Artifacts build from
> this
> change will not be released until it is fixed.
>
> For further details about the change see:
> https://gerrit.ovirt.org/#/c/86552/1
>
> For failed test results see:
> http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/254/
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>



-- 

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat EMEA 

TRIED. TESTED. TRUSTED. 


[oVirt Jenkins] standard-enqueue - Build #9050 - FAILURE!

2018-01-19 Thread jenkins
Build: http://jenkins.ovirt.org/job/standard-enqueue/9050/
Build Name: #9050
Build Description: Gerrit: 86566 - ovirt-engine-dashboard (4.2)
Build Status: FAILURE
Gerrit change: https://gerrit.ovirt.org/86566
- title: Enable Czech (cs-CZ) as a supported locale, enhance normalize script
- project: ovirt-engine-dashboard
- branch: ovirt-engine-dashboard-1.2
- author: Scott Dickerson


Re: [ovirt-devel] Subject: [ OST Failure Report ] [ oVirt Master ] [ Jan 15th 2018 ] [ 006_migrations.migrate_vm ]

2018-01-19 Thread Arik Hadas
On Fri, Jan 19, 2018 at 12:46 PM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:

>
>
> On 18 Jan 2018, at 17:36, Arik Hadas  wrote:
>
>
>
> On Wed, Jan 17, 2018 at 9:41 PM, Milan Zamazal 
> wrote:
>
>> Dafna Ron  writes:
>>
>> > We had a failure in test 006_migrations.migrate_vm
>> > > er/4842/testReport/junit/%28root%29/006_migrations/migrate_vm/>.
>> >
>> > the migration failed with reason "VMExists"
>>
>> There are two migrations in 006_migrations.migrate_vm.  The first one
>> succeeded, but if I'm looking correctly into the logs, Engine didn't
>> send Destroy to the source host after the migration had finished.  Then
>> the second migration gets rejected by Vdsm, because Vdsm still keeps the
>> former Vm object instance in Down status.
>>
>> Since the test succeeds most of the time, it looks like some timing
>> issue or border case.  Arik, is it a known problem?  If not, would you
>> like to look into the logs, whether you can see what's happening?
>
>
> Your analysis is correct. That's a nice one actually!
>
> The statistics monitoring cycles of both hosts host-0 and host-1 were
> scheduled in a way that they are executed almost at the same time [1].
>
> Now, at 6:46:34 the VM was migrated from host-1 to host-0.
> At 6:46:42 the migration succeeded - we got events from both hosts, but
> only processed the one from the destination so the VM switched to Up.
> The next statistics monitoring cycle was triggered at 6:46:44 - again, the
> report of that VM from the source host was skipped because we processed the
> one from the destination.
> At 6:46:59, in the next statistics monitoring cycle, it happened again -
> the report of the VM from the source host was skipped.
> The next migration was triggered at 6:47:05 - the engine didn't manage to
> process any report from the source host, so the VM remained Down there.
>
> The probability of this to happen is extremely low.
>
>
> Why wasn't the migration rerun?
>

Good question, because a migration to a particular host (MigrateVmToServer)
was requested.
In this particular case, it seems that there are only two hosts defined so
changing it to MigrateVm wouldn't make any difference though.


>
> However, I think we can make a little tweak to the monitoring code to
> avoid this:
> "If we get the VM as Down on an unexpected host (that is, not the host we
> expect the VM to run on), do not lock the VM"
> It should be safe since we don't update anything in this scenario.
>
> [1] For instance:
> 2018-01-15 06:46:44,905-05 ... GetAllVmStatsVDSCommand ...
> VdsIdVDSCommandParametersBase:{hostId='873a4d36-55fe-4be1-
> acb7-8de9c9123eb2'})
> 2018-01-15 06:46:44,932-05 ... GetAllVmStatsVDSCommand ...
> VdsIdVDSCommandParametersBase:{hostId='31f09289-ec6c-42ff-
> a745-e82e8ac8e6b9'})
> ___
> Devel mailing list
> de...@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
>
>


[oVirt Jenkins] standard-enqueue - Build #9047 - FAILURE!

2018-01-19 Thread jenkins
Build: http://jenkins.ovirt.org/job/standard-enqueue/9047/
Build Name: #9047
Build Description: Gerrit: 86429 - ovirt-engine (master)
Build Status: FAILURE
Gerrit change: https://gerrit.ovirt.org/86429
- title: ansible: Add posibility to specify verbose level
- project: ovirt-engine
- branch: master
- author: Ondra Machacek


Re: [ovirt-devel] Subject: [ OST Failure Report ] [ oVirt Master ] [ Jan 15th 2018 ] [ 006_migrations.migrate_vm ]

2018-01-19 Thread Michal Skrivanek


> On 18 Jan 2018, at 17:36, Arik Hadas  wrote:
> 
> 
> 
> On Wed, Jan 17, 2018 at 9:41 PM, Milan Zamazal  > wrote:
> Dafna Ron > writes:
> 
> > We had a failure in test 006_migrations.migrate_vm
> >  >  
> > >.
> >
> > the migration failed with reason "VMExists"
> 
> There are two migrations in 006_migrations.migrate_vm.  The first one
> succeeded, but if I'm looking correctly into the logs, Engine didn't
> send Destroy to the source host after the migration had finished.  Then
> the second migration gets rejected by Vdsm, because Vdsm still keeps the
> former Vm object instance in Down status.
> 
> Since the test succeeds most of the time, it looks like some timing
> issue or border case.  Arik, is it a known problem?  If not, would you
> like to look into the logs, whether you can see what's happening?
> 
> Your analysis is correct. That's a nice one actually!
> 
> The statistics monitoring cycles of both hosts host-0 and host-1 were 
> scheduled in a way that they are executed almost at the same time [1].
> 
> Now, at 6:46:34 the VM was migrated from host-1 to host-0.
> At 6:46:42 the migration succeeded - we got events from both hosts, but only 
> processed the one from the destination so the VM switched to Up.
> The next statistics monitoring cycle was triggered at 6:46:44 - again, the 
> report of that VM from the source host was skipped because we processed the 
> one from the destination.
> At 6:46:59, in the next statistics monitoring cycle, it happened again - the 
> report of the VM from the source host was skipped.
> The next migration was triggered at 6:47:05 - the engine didn't manage to 
> process any report from the source host, so the VM remained Down there. 
> 
> The probability of this to happen is extremely low.

Why wasn't the migration rerun?

> However, I think we can make a little tweak to the monitoring code to avoid 
> this:
> "If we get the VM as Down on an unexpected host (that is, not the host we 
> expect the VM to run on), do not lock the VM"
> It should be safe since we don't update anything in this scenario.
>  
> [1] For instance:
> 2018-01-15 06:46:44,905-05 ... GetAllVmStatsVDSCommand ... 
> VdsIdVDSCommandParametersBase:{hostId='873a4d36-55fe-4be1-acb7-8de9c9123eb2'})
> 2018-01-15 06:46:44,932-05 ... GetAllVmStatsVDSCommand ... 
> VdsIdVDSCommandParametersBase:{hostId='31f09289-ec6c-42ff-a745-e82e8ac8e6b9'})
> ___
> Devel mailing list
> de...@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
