Build failed in Jenkins: system-sync_mirrors-sac-gluster-ansible-el7-x86_64 #128

2019-09-01 Thread jenkins
See 


--
Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on mirrors.phx.ovirt.org (mirrors) in workspace 

No credentials specified
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url http://gerrit.ovirt.org/jenkins.git # timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Pruning obsolete local branches
Fetching upstream changes from http://gerrit.ovirt.org/jenkins.git
 > git --version # timeout=10
 > git fetch --tags --progress http://gerrit.ovirt.org/jenkins.git 
 > +refs/heads/*:refs/remotes/origin/* --prune
 > git rev-parse origin/master^{commit} # timeout=10
Checking out Revision ed800fe2bf2c113520f346bdcf3e3b5bb155fed8 (origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f ed800fe2bf2c113520f346bdcf3e3b5bb155fed8
Commit message: "Add vdsm poll upstream sources job"
 > git rev-list --no-walk ed800fe2bf2c113520f346bdcf3e3b5bb155fed8 # timeout=10
[system-sync_mirrors-sac-gluster-ansible-el7-x86_64] $ /bin/bash -xe 
/tmp/jenkins687028987568757.sh
+ jenkins/scripts/mirror_mgr.sh resync_yum_mirror sac-gluster-ansible-el7 
x86_64 jenkins/data/mirrors-reposync.conf
+ MIRRORS_MP_BASE=/var/www/html/repos
+ MIRRORS_HTTP_BASE=http://mirrors.phx.ovirt.org/repos
+ MIRRORS_CACHE=/home/jenkins/mirrors_cache
+ MAX_LOCK_ATTEMPTS=120
+ LOCK_WAIT_INTERVAL=5
+ LOCK_BASE=/home/jenkins
+ OLD_MD_TO_KEEP=100
+ HTTP_SELINUX_TYPE=httpd_sys_content_t
+ HTTP_FILE_MODE=644
+ main resync_yum_mirror sac-gluster-ansible-el7 x86_64 
jenkins/data/mirrors-reposync.conf
+ local command=resync_yum_mirror
+ command_args=("${@:2}")
+ local command_args
+ cmd_resync_yum_mirror sac-gluster-ansible-el7 x86_64 
jenkins/data/mirrors-reposync.conf
+ local repo_name=sac-gluster-ansible-el7
+ local repo_archs=x86_64
+ local reposync_conf=jenkins/data/mirrors-reposync.conf
+ local sync_needed
+ mkdir -p /home/jenkins/mirrors_cache
+ verify_repo_fs sac-gluster-ansible-el7 yum
+ local repo_name=sac-gluster-ansible-el7
+ local repo_type=yum
+ sudo install -o jenkins -d /var/www/html/repos/yum 
/var/www/html/repos/yum/sac-gluster-ansible-el7 
/var/www/html/repos/yum/sac-gluster-ansible-el7/base
+ check_yum_sync_needed sac-gluster-ansible-el7 x86_64 
jenkins/data/mirrors-reposync.conf sync_needed
+ local repo_name=sac-gluster-ansible-el7
+ local repo_archs=x86_64
+ local reposync_conf=jenkins/data/mirrors-reposync.conf
+ local p_sync_needed=sync_needed
+ local reposync_out
+ echo 'Checking if mirror needs a resync'
Checking if mirror needs a resync
+ rm -rf /home/jenkins/mirrors_cache/sac-gluster-ansible-el7
++ IFS=,
++ echo x86_64
+ for arch in '$(IFS=,; echo $repo_archs)'
++ run_reposync sac-gluster-ansible-el7 x86_64 
jenkins/data/mirrors-reposync.conf --urls --quiet
++ local repo_name=sac-gluster-ansible-el7
++ local repo_arch=x86_64
++ local reposync_conf=jenkins/data/mirrors-reposync.conf
++ extra_args=("${@:4}")
++ local extra_args
++ reposync --config=jenkins/data/mirrors-reposync.conf 
--repoid=sac-gluster-ansible-el7 --arch=x86_64 
--cachedir=/home/jenkins/mirrors_cache 
--download_path=/var/www/html/repos/yum/sac-gluster-ansible-el7/base 
--norepopath --newest-only --urls --quiet
+ 
reposync_out='https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/00905703-gluster-ansible/gluster-ansible-1.0.5-1.el7.noarch.rpm
https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/00861016-gluster-ansible-cluster/gluster-ansible-cluster-1.0.0-1.el7.noarch.rpm
https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/00956485-gluster-ansible-features/gluster-ansible-features-1.0.5-3.el7.noarch.rpm
https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/00942567-gluster-ansible-infra/gluster-ansible-infra-1.0.4-3.el7.noarch.rpm
https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/00949988-gluster-ansible-maintenance/gluster-ansible-maintenance-1.0.1-1.el7.noarch.rpm
https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/00905778-gluster-ansible-repositories/gluster-ansible-repositories-1.0.1-1.el7.noarch.rpm
https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/00949933-gluster-ansible-roles/gluster-ansible-roles-1.0.5-4.el7.noarch.rpm
https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/00794582-python-gluster-mgmt-client/python-gluster-mgmt-client-0.2-1.noarch.rpm'
+ [[ -n 
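(The trace is truncated at the test of reposync's output. A minimal sketch of
the pattern the log suggests -- run reposync with --urls --quiet and treat
non-empty output as "resync needed" -- follows; the function structure and
return convention are assumptions, not the actual mirror_mgr.sh source.)

    #!/bin/bash
    # Hedged sketch of the sync-needed check suggested by the trace above.
    # Paths, flags and the repo name come from the log; everything else is
    # an illustrative assumption.
    check_yum_sync_needed() {
        local repo_name="$1" repo_archs="$2" reposync_conf="$3"
        local reposync_out arch

        rm -rf "/home/jenkins/mirrors_cache/$repo_name"
        for arch in $(IFS=,; echo $repo_archs); do
            # --urls --quiet lists packages that would be downloaded,
            # without fetching them.
            reposync_out="$(
                reposync --config="$reposync_conf" \
                    --repoid="$repo_name" --arch="$arch" \
                    --cachedir=/home/jenkins/mirrors_cache \
                    --download_path="/var/www/html/repos/yum/$repo_name/base" \
                    --norepopath --newest-only --urls --quiet
            )"
            # Non-empty output means the mirror is missing packages, so a
            # resync is needed.
            if [[ -n "$reposync_out" ]]; then
                echo "Resync needed for $repo_name ($arch)"
                return 0
            fi
        done
        return 1
    }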

oVirt infra daily report - unstable production jobs - 935

2019-09-01 Thread jenkins
Good morning!

Attached is the HTML page with the jenkins status report. You can see it also 
here:
 - 
http://jenkins.ovirt.org/job/system_jenkins-report/935//artifact/exported-artifacts/upstream_report.html

Cheers,
Jenkins
 
 
 
 RHEVM CI Jenkins Daily Report - 01/09/2019

 Unstable Critical jobs:

   - changequeue-status_standard-check-patch
   - ovirt-appliance_4.2_build-artifacts-el7-x86_64
   - ovirt-appliance_ovirt-4.2_build-artifacts-el7-x86_64
   - ovirt-node-ng-image_4.3_build-artifacts-el7-x86_64
   - ovirt-node-ng-image_4.3_build-artifacts-fc30-x86_64
   - ovirt-node-ng-image_master_build-artifacts-fc30-x86_64
   - ovirt-system-tests_hc-basic-suite-4.3
   - ovirt-system-tests_hc-basic-suite-master
   - ovirt-system-tests_he-basic-ipv6-suite-master
   - ovirt-system-tests_he-basic-iscsi-suite-4.3
   - ovirt-system-tests_he-basic-role-remote-suite-4.3
   - ovirt-system-tests_he-basic-suite-4.3
   - ovirt-system-tests_he-node-ng-suite-4.3

 Each job entry in the report carries the same note: the job is automatically
 updated by jenkins job builder, so any manual change will be lost in the next
 update; to make permanent changes, check out the jenkins repo.

Build failed in Jenkins: system-sync_mirrors-sac-gluster-ansible-el7-x86_64 #127

2019-09-01 Thread jenkins
See 


--
Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on mirrors.phx.ovirt.org (mirrors) in workspace 

No credentials specified
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url http://gerrit.ovirt.org/jenkins.git # timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Pruning obsolete local branches
Fetching upstream changes from http://gerrit.ovirt.org/jenkins.git
 > git --version # timeout=10
 > git fetch --tags --progress http://gerrit.ovirt.org/jenkins.git 
 > +refs/heads/*:refs/remotes/origin/* --prune
 > git rev-parse origin/master^{commit} # timeout=10
Checking out Revision ed800fe2bf2c113520f346bdcf3e3b5bb155fed8 (origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f ed800fe2bf2c113520f346bdcf3e3b5bb155fed8
Commit message: "Add vdsm poll upstream sources job"
 > git rev-list --no-walk ed800fe2bf2c113520f346bdcf3e3b5bb155fed8 # timeout=10
[system-sync_mirrors-sac-gluster-ansible-el7-x86_64] $ /bin/bash -xe 
/tmp/jenkins8760876295377733785.sh
+ jenkins/scripts/mirror_mgr.sh resync_yum_mirror sac-gluster-ansible-el7 
x86_64 jenkins/data/mirrors-reposync.conf
+ MIRRORS_MP_BASE=/var/www/html/repos
+ MIRRORS_HTTP_BASE=http://mirrors.phx.ovirt.org/repos
+ MIRRORS_CACHE=/home/jenkins/mirrors_cache
+ MAX_LOCK_ATTEMPTS=120
+ LOCK_WAIT_INTERVAL=5
+ LOCK_BASE=/home/jenkins
+ OLD_MD_TO_KEEP=100
+ HTTP_SELINUX_TYPE=httpd_sys_content_t
+ HTTP_FILE_MODE=644
+ main resync_yum_mirror sac-gluster-ansible-el7 x86_64 
jenkins/data/mirrors-reposync.conf
+ local command=resync_yum_mirror
+ command_args=("${@:2}")
+ local command_args
+ cmd_resync_yum_mirror sac-gluster-ansible-el7 x86_64 
jenkins/data/mirrors-reposync.conf
+ local repo_name=sac-gluster-ansible-el7
+ local repo_archs=x86_64
+ local reposync_conf=jenkins/data/mirrors-reposync.conf
+ local sync_needed
+ mkdir -p /home/jenkins/mirrors_cache
+ verify_repo_fs sac-gluster-ansible-el7 yum
+ local repo_name=sac-gluster-ansible-el7
+ local repo_type=yum
+ sudo install -o jenkins -d /var/www/html/repos/yum 
/var/www/html/repos/yum/sac-gluster-ansible-el7 
/var/www/html/repos/yum/sac-gluster-ansible-el7/base
+ check_yum_sync_needed sac-gluster-ansible-el7 x86_64 
jenkins/data/mirrors-reposync.conf sync_needed
+ local repo_name=sac-gluster-ansible-el7
+ local repo_archs=x86_64
+ local reposync_conf=jenkins/data/mirrors-reposync.conf
+ local p_sync_needed=sync_needed
+ local reposync_out
+ echo 'Checking if mirror needs a resync'
Checking if mirror needs a resync
+ rm -rf /home/jenkins/mirrors_cache/sac-gluster-ansible-el7
++ IFS=,
++ echo x86_64
+ for arch in '$(IFS=,; echo $repo_archs)'
++ run_reposync sac-gluster-ansible-el7 x86_64 
jenkins/data/mirrors-reposync.conf --urls --quiet
++ local repo_name=sac-gluster-ansible-el7
++ local repo_arch=x86_64
++ local reposync_conf=jenkins/data/mirrors-reposync.conf
++ extra_args=("${@:4}")
++ local extra_args
++ reposync --config=jenkins/data/mirrors-reposync.conf 
--repoid=sac-gluster-ansible-el7 --arch=x86_64 
--cachedir=/home/jenkins/mirrors_cache 
--download_path=/var/www/html/repos/yum/sac-gluster-ansible-el7/base 
--norepopath --newest-only --urls --quiet
+ 
reposync_out='https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/00905703-gluster-ansible/gluster-ansible-1.0.5-1.el7.noarch.rpm
https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/00861016-gluster-ansible-cluster/gluster-ansible-cluster-1.0.0-1.el7.noarch.rpm
https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/00956485-gluster-ansible-features/gluster-ansible-features-1.0.5-3.el7.noarch.rpm
https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/00942567-gluster-ansible-infra/gluster-ansible-infra-1.0.4-3.el7.noarch.rpm
https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/00949988-gluster-ansible-maintenance/gluster-ansible-maintenance-1.0.1-1.el7.noarch.rpm
https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/00905778-gluster-ansible-repositories/gluster-ansible-repositories-1.0.1-1.el7.noarch.rpm
https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/00949933-gluster-ansible-roles/gluster-ansible-roles-1.0.5-4.el7.noarch.rpm
https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/00794582-python-gluster-mgmt-client/python-gluster-mgmt-client-0.2-1.noarch.rpm'
+ [[ -n 

[CQ]: 103002, 1 (ovirt-engine) failed "ovirt-master" system tests, but isn't the failure root cause

2019-09-01 Thread oVirt Jenkins
A system test invoked by the "ovirt-master" change queue including change
103002,1 (ovirt-engine) failed. However, this change seems not to be the root
cause for this failure. Change 101913,10 (ovirt-engine) that this change
depends on or is based on, was detected as the cause of the testing failures.

This change had been removed from the testing queue. Artifacts built from this
change will not be released until either change 101913,10 (ovirt-engine) is
fixed and this change is updated to refer to or rebased on the fixed version,
or this change is modified to no longer depend on it.

For further details about the change see:
https://gerrit.ovirt.org/#/c/103002/1

For further details about the change that seems to be the root cause behind the
testing failures see:
https://gerrit.ovirt.org/#/c/101913/10

For failed test results see:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/15642/
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/XH6YDM434IMVJV27IJNHPNOU2DAFDJ2R/


Removing big (hc-* and metrics) suites from OST's check-patch (How to make OST faster in CI)

2019-09-01 Thread Barak Korren
If you have been using or monitoring any OST suites recently, you may have
noticed we've been suffering from long delays in allocating CI hardware
resources for running OST suites. I'd like to briefly discuss the reasons
behind this, what we are planning to do to resolve it, and the implications of
those actions for the owners of big suites.

As you might know, a while ago we moved from running each OST suite on
its own dedicated server to running them inside containers managed by
OpenShift. That allowed us to run multiple OST suites on the same
bare-metal host, which in turn increased our overall capacity by 50% while
still allowing us to free up hardware for accommodating the kubevirt
project on our CI hardware.

Our infrastructure is currently built in a way where we use the exact same
POD specification (and therefore resource settings) for all suites. Making
it more flexible at this point would require significant code changes we
are not likely to make. What this means is that we need to make sure our
PODs have enough resources to run the most demanding suites. It also means
we waste some resources when running less demanding ones.

Given the set of OST suites we currently have, we sized our PODs to allocate
32GiB of RAM. Given the servers we have, this means we can run 15 suites at
a time in parallel. This was sufficient for a while, but given increasing
demand, and the expectation for it to increase further once we introduce
the patch gating features we've been working on, we must find a way to
significantly increase our suite-running capacity.

We have measured the amount of RAM required by each suite and came to the
conclusion that, for the vast majority of suites, we could settle for PODs
that allocate only 14GiB of RAM. If we make that change, we would be able
to run a total of 40 suites at a time, almost tripling our current capacity.
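
(For illustration only: the numbers above amount to a simple per-node packing
calculation. The node count and usable RAM per node in the sketch below are
assumptions, not figures taken from this message, and scheduling overhead is
ignored.)

    #!/bin/bash
    # Illustrative capacity arithmetic only. The node count and schedulable
    # RAM per node are assumed values, not the real cluster specification.
    nodes=5                 # assumed number of bare-metal hosts
    ram_per_node_gib=112    # assumed schedulable RAM per host, in GiB

    for pod_ram_gib in 32 14; do
        per_node=$(( ram_per_node_gib / pod_ram_gib ))  # pods per node
        total=$(( per_node * nodes ))                   # concurrent suites
        echo "${pod_ram_gib}GiB pods: ${per_node} per node, ${total} in parallel"
    done

With those assumed figures the arithmetic reproduces the 15 and 40 concurrent
suites quoted above; the real cluster layout may of course differ.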

The downside of making this change is that our STDCI V2 infrastructure will
no longer be able to run suites that require more than 14GiB of RAM. This
effectively means it would no longer be possible to run these suites from
OST's check-patch job or from the OST manual job.

The list of affected suites follows; the suite owners, as documented in the
CI configuration, have been added as "to" recipients of this message:

   - hc-basic-suite-4.3
   - hc-basic-suite-master
   - metrics-suite-4.3

Since we're aware people would still like to be able to work with the
bigger suites, we will leverage the nightly suite invocation jobs to enable
them to be run in the CI infra. We will support the following use cases:

   - *Periodically running the suite on the latest oVirt packages* - this
   will be done by the nightly job, as it is done today
   - *Running the suite to test changes to the suite's code* - while
   currently this is done automatically by check-patch, in the future it
   would have to be done by manually triggering the nightly job and setting
   the REFSPEC parameter to point to the examined patch (a rough example
   follows after this list)
   - *Triggering the suite manually* - this would be done by triggering the
   suite-specific nightly job (as opposed to the general OST manual job)
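
A hedged sketch of the second use case: the Jenkins buildWithParameters
endpoint is standard, but the job name, credentials and refspec below are
placeholders; only the REFSPEC parameter itself is mentioned above.

    # Hypothetical example of manually triggering a suite's nightly job with
    # a Gerrit refspec; job name, user and API token are placeholders.
    curl -X POST \
      --user "myuser:my-api-token" \
      "https://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-master-nightly/buildWithParameters" \
      --data-urlencode "REFSPEC=refs/changes/57/102757/1"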

 The patches listed below implement the changes outlined above:

   - 102757: nightly-system-tests: big suits -> big containers
   - 102771: stdci: Drop `big` suits from check-patch

We know that making the changes we presented will make things a little less
convenient for users and maintainers of the big suites, but we believe the
benefits of vastly increased execution capacity for all other suites
outweigh those shortcomings.

We would like to hear all relevant comments and questions from the suite
owners and other interested parties, especially if you think we should not
carry out the changes we propose.
Please take the time to respond on this thread, or on the linked patches.

Thanks,

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/2O3MV7X5VB32DG2KJDMJKDYWWSHNBZ3R/


Re: URGENT - ovirt-engine master is failing for the past 9 days

2019-09-01 Thread Tal Nisan
OST failing for 9 days is totally unacceptable. The only reason I'm not
sending a revert patch is that it would make a bloody mess with the upgrade
scripts, as we have two on top of it. This needs to be fixed ASAP.

On Sun, Sep 1, 2019 at 3:27 PM Dafna Ron  wrote:

> Hi,
>
> We have been failing CQ master for project ovirt-engine for the past 9
> days. This was reported last week and we have not yet seen a fix for this
> issue.
>
> The patch reported by CQ is
> https://gerrit.ovirt.org/#/c/101913/10 - core: Change CPU config to
> secure/insecure concept
>
> Logs for the latest failure can be found here:
>
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/15638/artifact/upgrade-from-release-suite.el7.x86_64/test_logs/upgrade-from-release-suite-master/post-004_basic_sanity.py/
>
> Error:
> 2019-09-01 06:31:17,248-04 INFO
>  [org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-1)
> [174bc463-9a98-441f-ad14-4d97fefde4fa] Running command:
> UpdateClusterCommand internal: false. Entities affected :  ID:
> 0407bc06-4bef-4c47-99e1-0e42f9ced996 Type: ClusterAction group
> EDIT_CLUSTER_CONFIGURATION with role type ADMIN
> 2019-09-01 06:31:17,263-04 INFO
>  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-1) [174bc463-9a98-441f-ad14-4d97fefde4fa] EVENT_ID:
> CLUSTER_UPDATE_CPU_WHEN_DEPRECATED(9,029), Modified the CPU Type to Intel
> Haswell-noTSX Family when upgrading the Compatibility Version of Cluster
> test-cluster because the previous CPU Type was deprecated.
> 2019-09-01 06:31:17,319-04 WARN
>  [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-1) [18fd7817]
> Validation of action 'UpdateVm' failed for user admin@internal-authz.
> Reasons:
> VAR__ACTION__UPDATE,VAR__TYPE__VM,ACTION_TYPE_FAILED_ILLEGAL_OS_TYPE_IS_NOT_SUPPORTED_BY_ARCHITECTURE_TYPE
> 2019-09-01 06:31:17,329-04 WARN
>  [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-1) [62694a6a]
> Validation of action 'UpdateVm' failed for user admin@internal-authz.
> Reasons:
> VAR__ACTION__UPDATE,VAR__TYPE__VM,ACTION_TYPE_FAILED_ILLEGAL_OS_TYPE_IS_NOT_SUPPORTED_BY_ARCHITECTURE_TYPE
> 2019-09-01 06:31:17,334-04 WARN
>  [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-1) [10e41cd3]
> Validation of action 'UpdateVm' failed for user admin@internal-authz.
> Reasons:
> VAR__ACTION__UPDATE,VAR__TYPE__VM,ACTION_TYPE_FAILED_ILLEGAL_OS_TYPE_IS_NOT_SUPPORTED_BY_ARCHITECTURE_TYPE
> 2019-09-01 06:31:17,345-04 WARN
>  [org.ovirt.engine.core.bll.UpdateVmTemplateCommand] (default task-1)
> [26a471dd] Validation of action 'UpdateVmTemplate' failed for user
> admin@internal-authz. Reasons:
> VAR__ACTION__UPDATE,VAR__TYPE__VM_TEMPLATE,VMT_CANNOT_UPDATE_ILLEGAL_FIELD
> 2019-09-01 06:31:17,352-04 WARN
>  [org.ovirt.engine.core.bll.UpdateVmTemplateCommand] (default task-1)
> [32fd89b5] Validation of action 'UpdateVmTemplate' failed for user
> admin@internal-authz. Reasons:
> VAR__ACTION__UPDATE,VAR__TYPE__VM_TEMPLATE,VMT_CANNOT_UPDATE_ILLEGAL_FIELD
> 2019-09-01 06:31:17,363-04 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-1) [32fd89b5] EVENT_ID:
> CLUSTER_CANNOT_UPDATE_VM_COMPATIBILITY_VERSION(12,005), Cannot update
> compatibility version of Vm/Template: [vm-with-iface], Message: [No Message]
> 2019-09-01 06:31:17,371-04 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-1) [32fd89b5] EVENT_ID:
> CLUSTER_CANNOT_UPDATE_VM_COMPATIBILITY_VERSION(12,005), Cannot update
> compatibility version of Vm/Template: [vm-with-iface-template], Message:
> [No Message]
> 2019-09-01 06:31:17,380-04 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-1) [32fd89b5] EVENT_ID:
> CLUSTER_CANNOT_UPDATE_VM_COMPATIBILITY_VERSION(12,005), Cannot update
> compatibility version of Vm/Template: [vm0], Message: [No Message]
> 2019-09-01 06:31:17,388-04 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-1) [32fd89b5] EVENT_ID:
> CLUSTER_CANNOT_UPDATE_VM_COMPATIBILITY_VERSION(12,005), Cannot update
> compatibility version of Vm/Template: [vm1], Message: [No Message]
> 2019-09-01 06:31:17,405-04 INFO
>  [org.ovirt.engine.core.bll.CommandCompensator] (default task-1) [32fd89b5]
> Command [id=50898f00-4956-484a-94ff-f273adc2e93e]: Compensating
> UPDATED_ONLY_ENTITY of
> org.ovirt.engine.core.common.businessentities.Cluster; snapshot: Cluster
> [test-cluster].
> 2019-09-01 06:31:17,422-04 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-1) [32fd89b5] EVENT_ID: USER_UPDATE_CLUSTER_FAILED(812),
> Failed to update Host cluster (User: admin@internal-authz)
> 2019-09-01 06:31:17,422-04 INFO
>  [org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-1)
> [32fd89b5] Lock freed to object
> 'EngineLock:{exclusiveLocks='[5d3b2f0f-f05f-41c6-b61a-c517704f79fd=TEMPLATE,
> 728f5530-4e34-4c92-beef-ca494ec104b9=TEMPLATE]',
> 

URGENT - ovirt-engine master is failing for the past 9 days

2019-09-01 Thread Dafna Ron
Hi,

We have been failing CQ master for project ovirt-engine for the past 9
days. This was reported last week and we have not yet seen a fix for this
issue.

The patch reported by CQ is
https://gerrit.ovirt.org/#/c/101913/10 - core: Change CPU config to
secure/insecure concept

Logs for the latest failure can be found here:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/15638/artifact/upgrade-from-release-suite.el7.x86_64/test_logs/upgrade-from-release-suite-master/post-004_basic_sanity.py/

Error:
2019-09-01 06:31:17,248-04 INFO
 [org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-1)
[174bc463-9a98-441f-ad14-4d97fefde4fa] Running command:
UpdateClusterCommand internal: false. Entities affected :  ID:
0407bc06-4bef-4c47-99e1-0e42f9ced996 Type: ClusterAction group
EDIT_CLUSTER_CONFIGURATION with role type ADMIN
2019-09-01 06:31:17,263-04 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-1) [174bc463-9a98-441f-ad14-4d97fefde4fa] EVENT_ID:
CLUSTER_UPDATE_CPU_WHEN_DEPRECATED(9,029), Modified the CPU Type to Intel
Haswell-noTSX Family when upgrading the Compatibility Version of Cluster
test-cluster because the previous CPU Type was deprecated.
2019-09-01 06:31:17,319-04 WARN
 [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-1) [18fd7817]
Validation of action 'UpdateVm' failed for user admin@internal-authz.
Reasons:
VAR__ACTION__UPDATE,VAR__TYPE__VM,ACTION_TYPE_FAILED_ILLEGAL_OS_TYPE_IS_NOT_SUPPORTED_BY_ARCHITECTURE_TYPE
2019-09-01 06:31:17,329-04 WARN
 [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-1) [62694a6a]
Validation of action 'UpdateVm' failed for user admin@internal-authz.
Reasons:
VAR__ACTION__UPDATE,VAR__TYPE__VM,ACTION_TYPE_FAILED_ILLEGAL_OS_TYPE_IS_NOT_SUPPORTED_BY_ARCHITECTURE_TYPE
2019-09-01 06:31:17,334-04 WARN
 [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-1) [10e41cd3]
Validation of action 'UpdateVm' failed for user admin@internal-authz.
Reasons:
VAR__ACTION__UPDATE,VAR__TYPE__VM,ACTION_TYPE_FAILED_ILLEGAL_OS_TYPE_IS_NOT_SUPPORTED_BY_ARCHITECTURE_TYPE
2019-09-01 06:31:17,345-04 WARN
 [org.ovirt.engine.core.bll.UpdateVmTemplateCommand] (default task-1)
[26a471dd] Validation of action 'UpdateVmTemplate' failed for user
admin@internal-authz. Reasons:
VAR__ACTION__UPDATE,VAR__TYPE__VM_TEMPLATE,VMT_CANNOT_UPDATE_ILLEGAL_FIELD
2019-09-01 06:31:17,352-04 WARN
 [org.ovirt.engine.core.bll.UpdateVmTemplateCommand] (default task-1)
[32fd89b5] Validation of action 'UpdateVmTemplate' failed for user
admin@internal-authz. Reasons:
VAR__ACTION__UPDATE,VAR__TYPE__VM_TEMPLATE,VMT_CANNOT_UPDATE_ILLEGAL_FIELD
2019-09-01 06:31:17,363-04 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-1) [32fd89b5] EVENT_ID:
CLUSTER_CANNOT_UPDATE_VM_COMPATIBILITY_VERSION(12,005), Cannot update
compatibility version of Vm/Template: [vm-with-iface], Message: [No Message]
2019-09-01 06:31:17,371-04 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-1) [32fd89b5] EVENT_ID:
CLUSTER_CANNOT_UPDATE_VM_COMPATIBILITY_VERSION(12,005), Cannot update
compatibility version of Vm/Template: [vm-with-iface-template], Message:
[No Message]
2019-09-01 06:31:17,380-04 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-1) [32fd89b5] EVENT_ID:
CLUSTER_CANNOT_UPDATE_VM_COMPATIBILITY_VERSION(12,005), Cannot update
compatibility version of Vm/Template: [vm0], Message: [No Message]
2019-09-01 06:31:17,388-04 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-1) [32fd89b5] EVENT_ID:
CLUSTER_CANNOT_UPDATE_VM_COMPATIBILITY_VERSION(12,005), Cannot update
compatibility version of Vm/Template: [vm1], Message: [No Message]
2019-09-01 06:31:17,405-04 INFO
 [org.ovirt.engine.core.bll.CommandCompensator] (default task-1) [32fd89b5]
Command [id=50898f00-4956-484a-94ff-f273adc2e93e]: Compensating
UPDATED_ONLY_ENTITY of
org.ovirt.engine.core.common.businessentities.Cluster; snapshot: Cluster
[test-cluster].
2019-09-01 06:31:17,422-04 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-1) [32fd89b5] EVENT_ID: USER_UPDATE_CLUSTER_FAILED(812),
Failed to update Host cluster (User: admin@internal-authz)
2019-09-01 06:31:17,422-04 INFO
 [org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-1)
[32fd89b5] Lock freed to object
'EngineLock:{exclusiveLocks='[5d3b2f0f-f05f-41c6-b61a-c517704f79fd=TEMPLATE,
728f5530-4e34-4c92-beef-ca494ec104b9=TEMPLATE]',
sharedLocks='[a107316a-a961-403f-adb2-d01f22f0b8f1=VM,
dc3a2ad8-6019-4e00-85e5-d9fba7390d4f=VM,
6348722f-61ae-40be-a2b8-bd9d086b06dc=VM]'}'
2019-09-01 06:31:17,424-04 ERROR
[org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default
task-1) [] Operation Failed: [Update of cluster compatibility version
failed because there are VMs/Templates [vm-with-iface,
vm-with-iface-template, vm0, vm1] with incorrect configuration. 

Build failed in Jenkins: system-sync_mirrors-sac-gluster-ansible-el7-x86_64 #126

2019-09-01 Thread jenkins
See 


--
Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on mirrors.phx.ovirt.org (mirrors) in workspace 

No credentials specified
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url http://gerrit.ovirt.org/jenkins.git # timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Pruning obsolete local branches
Fetching upstream changes from http://gerrit.ovirt.org/jenkins.git
 > git --version # timeout=10
 > git fetch --tags --progress http://gerrit.ovirt.org/jenkins.git 
 > +refs/heads/*:refs/remotes/origin/* --prune
 > git rev-parse origin/master^{commit} # timeout=10
Checking out Revision ed800fe2bf2c113520f346bdcf3e3b5bb155fed8 (origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f ed800fe2bf2c113520f346bdcf3e3b5bb155fed8
Commit message: "Add vdsm poll upstream sources job"
 > git rev-list --no-walk ed800fe2bf2c113520f346bdcf3e3b5bb155fed8 # timeout=10
[system-sync_mirrors-sac-gluster-ansible-el7-x86_64] $ /bin/bash -xe 
/tmp/jenkins3344783761560448671.sh
+ jenkins/scripts/mirror_mgr.sh resync_yum_mirror sac-gluster-ansible-el7 
x86_64 jenkins/data/mirrors-reposync.conf
+ MIRRORS_MP_BASE=/var/www/html/repos
+ MIRRORS_HTTP_BASE=http://mirrors.phx.ovirt.org/repos
+ MIRRORS_CACHE=/home/jenkins/mirrors_cache
+ MAX_LOCK_ATTEMPTS=120
+ LOCK_WAIT_INTERVAL=5
+ LOCK_BASE=/home/jenkins
+ OLD_MD_TO_KEEP=100
+ HTTP_SELINUX_TYPE=httpd_sys_content_t
+ HTTP_FILE_MODE=644
+ main resync_yum_mirror sac-gluster-ansible-el7 x86_64 
jenkins/data/mirrors-reposync.conf
+ local command=resync_yum_mirror
+ command_args=("${@:2}")
+ local command_args
+ cmd_resync_yum_mirror sac-gluster-ansible-el7 x86_64 
jenkins/data/mirrors-reposync.conf
+ local repo_name=sac-gluster-ansible-el7
+ local repo_archs=x86_64
+ local reposync_conf=jenkins/data/mirrors-reposync.conf
+ local sync_needed
+ mkdir -p /home/jenkins/mirrors_cache
+ verify_repo_fs sac-gluster-ansible-el7 yum
+ local repo_name=sac-gluster-ansible-el7
+ local repo_type=yum
+ sudo install -o jenkins -d /var/www/html/repos/yum 
/var/www/html/repos/yum/sac-gluster-ansible-el7 
/var/www/html/repos/yum/sac-gluster-ansible-el7/base
+ check_yum_sync_needed sac-gluster-ansible-el7 x86_64 
jenkins/data/mirrors-reposync.conf sync_needed
+ local repo_name=sac-gluster-ansible-el7
+ local repo_archs=x86_64
+ local reposync_conf=jenkins/data/mirrors-reposync.conf
+ local p_sync_needed=sync_needed
+ local reposync_out
+ echo 'Checking if mirror needs a resync'
Checking if mirror needs a resync
+ rm -rf /home/jenkins/mirrors_cache/sac-gluster-ansible-el7
++ IFS=,
++ echo x86_64
+ for arch in '$(IFS=,; echo $repo_archs)'
++ run_reposync sac-gluster-ansible-el7 x86_64 
jenkins/data/mirrors-reposync.conf --urls --quiet
++ local repo_name=sac-gluster-ansible-el7
++ local repo_arch=x86_64
++ local reposync_conf=jenkins/data/mirrors-reposync.conf
++ extra_args=("${@:4}")
++ local extra_args
++ reposync --config=jenkins/data/mirrors-reposync.conf 
--repoid=sac-gluster-ansible-el7 --arch=x86_64 
--cachedir=/home/jenkins/mirrors_cache 
--download_path=/var/www/html/repos/yum/sac-gluster-ansible-el7/base 
--norepopath --newest-only --urls --quiet
+ 
reposync_out='https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/00905703-gluster-ansible/gluster-ansible-1.0.5-1.el7.noarch.rpm
https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/00861016-gluster-ansible-cluster/gluster-ansible-cluster-1.0.0-1.el7.noarch.rpm
https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/00956485-gluster-ansible-features/gluster-ansible-features-1.0.5-3.el7.noarch.rpm
https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/00942567-gluster-ansible-infra/gluster-ansible-infra-1.0.4-3.el7.noarch.rpm
https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/00949988-gluster-ansible-maintenance/gluster-ansible-maintenance-1.0.1-1.el7.noarch.rpm
https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/00905778-gluster-ansible-repositories/gluster-ansible-repositories-1.0.1-1.el7.noarch.rpm
https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/00949933-gluster-ansible-roles/gluster-ansible-roles-1.0.5-4.el7.noarch.rpm
https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-x86_64/00794582-python-gluster-mgmt-client/python-gluster-mgmt-client-0.2-1.noarch.rpm'
+ [[ -n 

[CQ]: 102670, 10 (ovirt-engine) failed "ovirt-master" system tests, but isn't the failure root cause

2019-09-01 Thread oVirt Jenkins
A system test invoked by the "ovirt-master" change queue including change
102670,10 (ovirt-engine) failed. However, this change seems not to be the root
cause for this failure. Change 101913,10 (ovirt-engine) that this change
depends on or is based on, was detected as the cause of the testing failures.

This change had been removed from the testing queue. Artifacts built from this
change will not be released until either change 101913,10 (ovirt-engine) is
fixed and this change is updated to refer to or rebased on the fixed version,
or this change is modified to no longer depend on it.

For further details about the change see:
https://gerrit.ovirt.org/#/c/102670/10

For further details about the change that seems to be the root cause behind the
testing failures see:
https://gerrit.ovirt.org/#/c/101913/10

For failed test results see:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/15638/
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/MMBD3LPPIWV47TDPEN4WERFX4JNVTWIS/


[JIRA] (OVIRT-2566) make network suite blocking

2019-09-01 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39763#comment-39763
 ] 

Barak Korren edited comment on OVIRT-2566 at 9/1/19 7:52 AM:
-

Not for specific patches - but we can easily enable it for ALL patches.

If we only want it for specific patches - we can consider allowing some 
customization of the Zuul configuration at project level to allow running 
specific suites for specific patches, but this will require some code changes in 
several places. We can plan this once we're in production with the current set 
of suites.


was (Author: bkor...@redhat.com):
Not for specific patches - but we can enable it for ALL patches.

We can consider allowing some customization of the Zuul configuration at 
project level to allow running specific suites for specific patches, but this 
will require some code changes in several places. We can plan this once we're 
in production with the current set of suites.

> make network suite blocking
> ---
>
> Key: OVIRT-2566
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2566
> Project: oVirt - virtualization made easy
>  Issue Type: Improvement
>  Components: OST
>Reporter: danken
>Assignee: infra
>Priority: High
>
> The network suite has been executing nightly for almost a year. It has a caring 
> team that tends to it, and it does not have false positives.
> It has currently been failing for a week on the 4.2 branch, but that is due to 
> a production code bug.
> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_network-suite-4.2/
> I would like to see this suite escalated to the importance of the basic 
> suite, making it a gating condition for marking a collection of packages as 
> "tested".
> [~gbenh...@redhat.com], what should be done to make this happen?



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100108)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/K4QZIHSNI25ZOBWIRXAP2J6CAYY2SKRW/


[JIRA] (OVIRT-2566) make network suite blocking

2019-09-01 Thread Barak Korren (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-2566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=39763#comment-39763
 ] 

Barak Korren commented on OVIRT-2566:
-

Not for specific patches - but we can enable it for ALL patches.

We can consider allowing some customization of the Zuul configuration at 
project level to allow running specific suites for specific patches, but this 
will require some code changes in several places. We can plan this once we're 
in production with the current set of suites.

> make network suite blocking
> ---
>
> Key: OVIRT-2566
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2566
> Project: oVirt - virtualization made easy
>  Issue Type: Improvement
>  Components: OST
>Reporter: danken
>Assignee: infra
>Priority: High
>
> The network suite has been executing nightly for almost a year. It has a caring 
> team that tends to it, and it does not have false positives.
> It has currently been failing for a week on the 4.2 branch, but that is due to 
> a production code bug.
> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_network-suite-4.2/
> I would like to see this suite escalated to the importance of the basic 
> suite, making it a gating condition for marking a collection of packages as 
> "tested".
> [~gbenh...@redhat.com], what should be done to make this happen?



--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100108)
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/2HSF22UINFLBXE243SP6K7S5ABITO7GQ/


Fwd: Zuul posts non hiddable Gerrit comments

2019-09-01 Thread Barak Korren
What is our current plan for upgrading to Gerrit 2.15+?

As you can see below, one of the nice features it provides is the ability
to hide auto-generated comments from the UI. This will allow us to
de-clutter the comment lists on patches, making them easier to track and
read.

-- Forwarded message -
From: Vitaliy Lotorev 
Date: Fri, 30 Aug 2019 at 01:01
Subject: Zuul posts non hiddable Gerrit comments
To: 


Hi,

Starting with at least v2.15, Gerrit is able to differentiate bot comments
and hide them behind the 'Show comments' toggle in the web UI. This is
achieved using either tagged comments or robot comments.

A while ago I filed an issue about Zuul not using these techniques [1]
(the ticket has links to the Gerrit docs on tagged and robot comments). As a
result, Zuul comments cannot be hidden in Gerrit.

AFAIK, sending tagged comments to Gerrit via SSH is easy (a one-line change
at [2]).

I could try providing a patch for tagged comments.
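
For reference, a hedged sketch of what a tagged comment over SSH could look
like -- assuming Gerrit's `gerrit review --tag` SSH option; the host, change
number, patchset and message below are placeholders:

    # Hypothetical example: post a bot comment with a tag so Gerrit >= 2.15
    # can collapse it behind the 'Show comments' toggle.
    ssh -p 29418 zuul@gerrit.example.org gerrit review \
        --tag autogenerated:zuul \
        --message "'Build succeeded (check pipeline).'" \
        12345,6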

What do Zuul maintainers think about adding support for bot comments?

Shouldn't sending tagged comments be done only if the Gerrit version is >= 2.15?

[1] https://storyboard.openstack.org/#!/story/2005661
[2]
https://opendev.org/zuul/zuul/src/branch/master/zuul/driver/gerrit/gerritconnection.py#L860
___
Zuul-discuss mailing list
zuul-disc...@lists.zuul-ci.org
http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/GCFMMA4BVIQGJO72OVUAT36LJAWNRKO6/