oVirt infra daily report - unstable production jobs - 83

2016-09-20 Thread jenkins
Good morning!

Attached is the HTML page with the jenkins status report. You can see it also 
here:
 - 
http://jenkins.ovirt.org/job/system_jenkins-report/83//artifact/exported-artifacts/upstream_report.html

Cheers,
Jenkins


upstream_report.html
Description: Binary data
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Jenkins build became unstable: ovirt_4.0_he-system-tests #288

2016-09-20 Thread Nir Soffer
On Mon, Sep 19, 2016 at 8:49 PM, Eyal Edri  wrote:
> Is this the usual Sanlock issue?

"Usual" sanlock issue is alarming. I don't know about any issues with Sanlock.

Do we have a bug about this?

>
> Error Message
>
> status: 400
> reason: Bad Request
> detail: Cannot add VM: Storage Domain cannot be accessed.
> -Please check that at least one Host is operational and Data Center state is
> up.
>
> Stacktrace
>
> Traceback (most recent call last):
>   File "/usr/lib64/python2.7/unittest/case.py", line 367, in run
> testMethod()
>   File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
> self.test(*self.arg)
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 120, in
> wrapped_test
> return test()
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 52, in
> wrapper
> return func(get_test_prefix(), *args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 61, in
> wrapper
> return func(prefix.virt_env.engine_vm().get_api(), *args, **kwargs)
>   File
> "/home/jenkins/workspace/ovirt_4.0_he-system-tests/ovirt-system-tests/he_basic_suite_4.0/test-scenarios/004_basic_sanity.py",
> line 78, in add_vm_blank
> api.vms.add(vm_params)
>   File
> "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/brokers.py", line
> 35701, in add
> headers={"Correlation-Id":correlation_id, "Expect":expect}
>   File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py",
> line 79, in add
> return self.request('POST', url, body, headers, cls=cls)
>   File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py",
> line 122, in request
> persistent_auth=self.__persistent_auth
>   File
> "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py",
> line 79, in do_request
> persistent_auth)
>   File
> "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py",
> line 162, in __do_request
> raise errors.RequestError(response_code, response_reason, response_body)
> RequestError:
> status: 400
> reason: Bad Request
> detail: Cannot add VM: Storage Domain cannot be accessed.
> -Please check that at least one Host is operational and Data Center state is
> up.
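For context, the failing test step boils down to a single VM-add request against the
engine REST API. A minimal curl sketch (engine address, credentials, cluster and
template names below are placeholders, not the suite's actual values) that should hit
the same 400 response while no storage domain is accessible:

# Hypothetical reproduction of the add_vm_blank step outside the suite.
ENGINE_API="https://engine.example.com/ovirt-engine/api"
curl -k -u 'admin@internal:password' \
    -H 'Content-Type: application/xml' \
    -X POST "${ENGINE_API}/vms" \
    -d '<vm>
          <name>vm-blank</name>
          <cluster><name>Default</name></cluster>
          <template><name>Blank</name></template>
        </vm>'
# While the storage domain is inaccessible the engine answers 400 Bad Request
# with "Cannot add VM: Storage Domain cannot be accessed", as in the traceback above.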
>
>
> On Mon, Sep 19, 2016 at 8:22 PM,  wrote:
>>
>> See 
>>
>> ___
>> Infra mailing list
>> Infra@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/infra
>>
>>
>
>
>
> --
> Eyal Edri
> Associate Manager
> RHV DevOps
> EMEA ENG Virtualization R
> Red Hat Israel
>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Dropping rpm build from ovirt-engine check-merged.sh

2016-09-20 Thread Nir Soffer
On Tue, Sep 20, 2016 at 11:27 AM, Eyal Edri  wrote:

>
>
> On Tue, Sep 20, 2016 at 9:34 AM, Sandro Bonazzola 
> wrote:
>
>>
>>
>> On Mon, Sep 19, 2016 at 7:56 PM, Eyal Edri  wrote:
>>
>>>
>>>
>>> On Mon, Sep 19, 2016 at 9:41 AM, Sandro Bonazzola 
>>> wrote:
>>>


 On Sun, Sep 18, 2016 at 4:18 PM, Eyal Edri  wrote:

> Hi,
>
> Following [1] I'd like to propose to remove rpm building from the
> 'check-merged.sh' script from ovirt-engine (master for now).
>
> The job [2] takes 15 min on average, while the rpms are already built
> in check-patch
> (with gwt draft mode if needed), and it runs exactly the same rpm build
> command as check-patch [3].
>
> So there isn't real value in running exactly the same rpm build
> post-merge, and we already build in full permutation mode in 'build-artifacts.sh'.
>
> Any reason to keep it?
> We can cut down valuable time in CI if we drop it and free up more time
> for more meaningful tests.
>


 This depends on the flow: if we make check_merge gating to the merge
 and to the build we should keep the rpm build because at merge a rebase is
 done automatically.

>>>
>>> What do you mean by 'gating to the merge'? I'm not sure I understand
>>> what it means.
>>> Doesn't check-patch.sh do the gating? check-merge runs post-merge, so it's
>>> already too late to gate the code ...
>>> And I think check-merge and check-patch currently run the same rpmbuild
>>> command, so I don't see how check-merged has any value over check-patch.
>>>
>>
>> when the merge command is issued, a rebase is done as well. We still need a
>> check-merged job because the code checked by check-patch is not the same
>> anymore when check-merged runs.
>>
>
> OK, now I understand, so indeed check-merge can potentially run on
> different code than check-patch and possibly fail due to it.
>

If we require only fast-forward merges, there is no way to merge a patch
before a rebase. Once you rebase a patch, check-patch runs...

So check-merge may be unneeded in this case.
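
To make the fast-forward point concrete, this is roughly the behaviour of
Gerrit's "Fast Forward Only" submit type, illustrated with plain git (branch
names here are only an example):

# A fast-forward-only merge refuses anything not already rebased on the tip.
git checkout master
git merge --ff-only topic-branch
# -> "fatal: Not possible to fast-forward, aborting." if topic-branch is stale.
# After rebasing the topic branch on master (which re-triggers check-patch),
# the merge is a pure fast-forward, so the merged tree is exactly the tested one.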


>
>
>> In the original design of stdci, check-merged was supposed to become a gating
>> test for build-artifacts.
>>
>
> We have it in our backlog, i.e. installing Zuul and adding gating for the
> check-merged jobs; it's mostly relevant for system jobs, but we can
> definitely do it first for simple 'check-merged.sh' jobs
> as part of standard CI.
>
> Opened a ticket for it [1]
>
> [1] https://ovirt-jira.atlassian.net/browse/OVIRT-734
>
>>
>>
>>
>>
>>>
>>>
 If there's no gating process performed by check-merge then I agree with
 dropping the rpm build.



>
>
> [1] https://ovirt-jira.atlassian.net/browse/OVIRT-416
> [2] http://jenkins.ovirt.org/job/ovirt-engine_master_check-m
> erged-el7-x86_64/buildTimeTrend
> [3]
> rpmbuild \
> -D "_rpmdir $PWD/output" \
> -D "_topmdir $PWD/rpmbuild" \
> -D "release_suffix ${SUFFIX}" \
> -D "ovirt_build_ut $BUILD_UT" \
> -D "ovirt_build_extra_flags $EXTRA_BUILD_FLAGS" \
> -D "ovirt_build_draft 1" \
> --rebuild output/*.src.rpm
>
>
> --
> Eyal Edri
> Associate Manager
> RHV DevOps
> EMEA ENG Virtualization R
> Red Hat Israel
>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>



 --
 Sandro Bonazzola
 Better technology. Faster innovation. Powered by community
 collaboration.
 See how it works at redhat.com
 

>>>
>>>
>>>
>>> --
>>> Eyal Edri
>>> Associate Manager
>>> RHV DevOps
>>> EMEA ENG Virtualization R
>>> Red Hat Israel
>>>
>>> phone: +972-9-7692018
>>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>>
>>
>>
>>
>> --
>> Sandro Bonazzola
>> Better technology. Faster innovation. Powered by community collaboration.
>> See how it works at redhat.com
>> 
>>
>
>
>
> --
> Eyal Edri
> Associate Manager
> RHV DevOps
> EMEA ENG Virtualization R
> Red Hat Israel
>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Jenkins build is back to normal : ovirt_4.0_he-system-tests #291

2016-09-20 Thread jenkins
See 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Jenkins build is back to normal : ovirt_3.6_system-tests #520

2016-09-20 Thread jenkins
See 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Updating permissions on gerrit.ovirt.org

2016-09-20 Thread Shlomo Ben David
Hi All,

In the upcoming days I'm going to update permissions on the gerrit.ovirt.org
server.
The changes shouldn't affect the current state, but if you encounter
any permission issues or other issues related to the gerrit.ovirt.org
server, please let me or infra@ovirt.org know and we'll handle it ASAP.

Best Regards,

Shlomi Ben-David | DevOps Engineer | Red Hat ISRAEL
RHCSA | RHCE
IRC: shlomibendavid (on #rhev-integ, #rhev-dev, #rhev-ci)

OPEN SOURCE - 1 4 011 && 011 4 1
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-735) cleanup workspaces task fails on RAM disk slaves

2016-09-20 Thread Evgheni Dereveanchin (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evgheni Dereveanchin updated OVIRT-735:
---
Description: 
Working on testing RAM disks as part of OVIRT-636 I see the slave being put 
offline by the cleanup job.

It's failing with the following error:
{{21:30:08 ==
21:30:08 INFO::node: ovirt-srv08.phx.ovirt.org, free space: 2GB
21:30:08 getting executors
21:30:08 got executors
21:30:08 getting build
21:30:08 Thread[Executor #0 for ovirt-srv08.phx.ovirt.org : executing 
PlaceholderExecutable:job/test-repo_ovirt_experimental_3.6/1758/:null,5,]
21:30:08 Executor #0
21:30:08 got build
21:30:08 got parent
21:30:08 got task
21:30:08 got name
21:30:08 ERROR: Build step failed with exception
21:30:08 java.lang.NullPointerException
21:30:08at java.util.Hashtable.put(Hashtable.java:514)
21:30:08at groovy.lang.Binding.setVariable(Binding.java:77)
21:30:08at groovy.lang.Script.setProperty(Script.java:66)
21:30:08at 
org.codehaus.groovy.runtime.ScriptBytecodeAdapter.setGroovyObjectProperty(ScriptBytecodeAdapter.java:528)
21:30:08at Script1.run(Script1.groovy:140)
21:30:08at groovy.lang.GroovyShell.evaluate(GroovyShell.java:650)
21:30:08at groovy.lang.GroovyShell.evaluate(GroovyShell.java:636)
21:30:08at 
hudson.plugins.groovy.SystemGroovy.perform(SystemGroovy.java:93)
21:30:08at 
hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
21:30:08at 
hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:782)
21:30:08at hudson.model.Build$BuildExecution.build(Build.java:205)
21:30:08at hudson.model.Build$BuildExecution.doRun(Build.java:162)
21:30:08at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:534)
21:30:08at hudson.model.Run.execute(Run.java:1738)
21:30:08at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
21:30:08at 
hudson.model.ResourceController.execute(ResourceController.java:98)
21:30:08at hudson.model.Executor.run(Executor.java:410)
21:30:08 Build step 'Execute system Groovy script' marked build as failure
21:30:09 Build step 'Groovy Postbuild' marked build as failure}}

As this is a NullPointerException - it's probably a bug in the cleanup job.

  was:
Working on testing RAM disks as part of OVIRT-636 I see the slave being put 
offline by the cleanup job.

It's failing with the following error:
{{
21:30:08 ==
21:30:08 INFO::node: ovirt-srv08.phx.ovirt.org, free space: 2GB
21:30:08 getting executors
21:30:08 got executors
21:30:08 getting build
21:30:08 Thread[Executor #0 for ovirt-srv08.phx.ovirt.org : executing 
PlaceholderExecutable:job/test-repo_ovirt_experimental_3.6/1758/:null,5,]
21:30:08 Executor #0
21:30:08 got build
21:30:08 got parent
21:30:08 got task
21:30:08 got name
21:30:08 ERROR: Build step failed with exception
21:30:08 java.lang.NullPointerException
21:30:08at java.util.Hashtable.put(Hashtable.java:514)
21:30:08at groovy.lang.Binding.setVariable(Binding.java:77)
21:30:08at groovy.lang.Script.setProperty(Script.java:66)
21:30:08at 
org.codehaus.groovy.runtime.ScriptBytecodeAdapter.setGroovyObjectProperty(ScriptBytecodeAdapter.java:528)
21:30:08at Script1.run(Script1.groovy:140)
21:30:08at groovy.lang.GroovyShell.evaluate(GroovyShell.java:650)
21:30:08at groovy.lang.GroovyShell.evaluate(GroovyShell.java:636)
21:30:08at 
hudson.plugins.groovy.SystemGroovy.perform(SystemGroovy.java:93)
21:30:08at 
hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
21:30:08at 
hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:782)
21:30:08at hudson.model.Build$BuildExecution.build(Build.java:205)
21:30:08at hudson.model.Build$BuildExecution.doRun(Build.java:162)
21:30:08at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:534)
21:30:08at hudson.model.Run.execute(Run.java:1738)
21:30:08at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
21:30:08at 
hudson.model.ResourceController.execute(ResourceController.java:98)
21:30:08at hudson.model.Executor.run(Executor.java:410)
21:30:08 Build step 'Execute system Groovy script' marked build as failure
21:30:09 Build step 'Groovy Postbuild' marked build as failure
}}

As this is a NullPointerException - it's probably a bug in the cleanup job.


> cleanup workspaces task fails on RAM disk slaves
> 
>
> Key: OVIRT-735
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-735
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>  Components: Jenkins
>Reporter: Evgheni Dereveanchin
>Assignee: infra
>
> 

[JIRA] (OVIRT-735) cleanup workspaces task fails on RAM disk slaves

2016-09-20 Thread Evgheni Dereveanchin (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evgheni Dereveanchin reassigned OVIRT-735:
--

Assignee: Evgheni Dereveanchin  (was: infra)

> cleanup workspaces task fails on RAM disk slaves
> 
>
> Key: OVIRT-735
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-735
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>  Components: Jenkins
>Reporter: Evgheni Dereveanchin
>Assignee: Evgheni Dereveanchin
>
> Working on testing RAM disks as part of OVIRT-636 I see the slave being put 
> offline by the cleanup job.
> It's failing with the following error:
> {{21:30:08 ==
> 21:30:08 INFO::node: ovirt-srv08.phx.ovirt.org, free space: 2GB
> 21:30:08 getting executors
> 21:30:08 got executors
> 21:30:08 getting build
> 21:30:08 Thread[Executor #0 for ovirt-srv08.phx.ovirt.org : executing 
> PlaceholderExecutable:job/test-repo_ovirt_experimental_3.6/1758/:null,5,]
> 21:30:08 Executor #0
> 21:30:08 got build
> 21:30:08 got parent
> 21:30:08 got task
> 21:30:08 got name
> 21:30:08 ERROR: Build step failed with exception
> 21:30:08 java.lang.NullPointerException
> 21:30:08  at java.util.Hashtable.put(Hashtable.java:514)
> 21:30:08  at groovy.lang.Binding.setVariable(Binding.java:77)
> 21:30:08  at groovy.lang.Script.setProperty(Script.java:66)
> 21:30:08  at 
> org.codehaus.groovy.runtime.ScriptBytecodeAdapter.setGroovyObjectProperty(ScriptBytecodeAdapter.java:528)
> 21:30:08  at Script1.run(Script1.groovy:140)
> 21:30:08  at groovy.lang.GroovyShell.evaluate(GroovyShell.java:650)
> 21:30:08  at groovy.lang.GroovyShell.evaluate(GroovyShell.java:636)
> 21:30:08  at 
> hudson.plugins.groovy.SystemGroovy.perform(SystemGroovy.java:93)
> 21:30:08  at 
> hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
> 21:30:08  at 
> hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:782)
> 21:30:08  at hudson.model.Build$BuildExecution.build(Build.java:205)
> 21:30:08  at hudson.model.Build$BuildExecution.doRun(Build.java:162)
> 21:30:08  at 
> hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:534)
> 21:30:08  at hudson.model.Run.execute(Run.java:1738)
> 21:30:08  at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
> 21:30:08  at 
> hudson.model.ResourceController.execute(ResourceController.java:98)
> 21:30:08  at hudson.model.Executor.run(Executor.java:410)
> 21:30:08 Build step 'Execute system Groovy script' marked build as failure
> 21:30:09 Build step 'Groovy Postbuild' marked build as failure}}
> As this is a NullPointerException - it's probably a bug in the cleanup job.



--
This message was sent by Atlassian JIRA
(v1000.319.1#100012)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-735) cleanup workspaces task fails on RAM disk slaves

2016-09-20 Thread Evgheni Dereveanchin (oVirt JIRA)
Evgheni Dereveanchin created OVIRT-735:
--

 Summary: cleanup workspaces task fails on RAM disk slaves
 Key: OVIRT-735
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-735
 Project: oVirt - virtualization made easy
  Issue Type: Bug
  Components: Jenkins
Reporter: Evgheni Dereveanchin
Assignee: infra


Working on testing RAM disks as part of OVIRT-636 I see the slave being put 
offline by the cleanup job.

It's failing with the following error:
{{
21:30:08 ==
21:30:08 INFO::node: ovirt-srv08.phx.ovirt.org, free space: 2GB
21:30:08 getting executors
21:30:08 got executors
21:30:08 getting build
21:30:08 Thread[Executor #0 for ovirt-srv08.phx.ovirt.org : executing 
PlaceholderExecutable:job/test-repo_ovirt_experimental_3.6/1758/:null,5,]
21:30:08 Executor #0
21:30:08 got build
21:30:08 got parent
21:30:08 got task
21:30:08 got name
21:30:08 ERROR: Build step failed with exception
21:30:08 java.lang.NullPointerException
21:30:08at java.util.Hashtable.put(Hashtable.java:514)
21:30:08at groovy.lang.Binding.setVariable(Binding.java:77)
21:30:08at groovy.lang.Script.setProperty(Script.java:66)
21:30:08at 
org.codehaus.groovy.runtime.ScriptBytecodeAdapter.setGroovyObjectProperty(ScriptBytecodeAdapter.java:528)
21:30:08at Script1.run(Script1.groovy:140)
21:30:08at groovy.lang.GroovyShell.evaluate(GroovyShell.java:650)
21:30:08at groovy.lang.GroovyShell.evaluate(GroovyShell.java:636)
21:30:08at 
hudson.plugins.groovy.SystemGroovy.perform(SystemGroovy.java:93)
21:30:08at 
hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
21:30:08at 
hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:782)
21:30:08at hudson.model.Build$BuildExecution.build(Build.java:205)
21:30:08at hudson.model.Build$BuildExecution.doRun(Build.java:162)
21:30:08at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:534)
21:30:08at hudson.model.Run.execute(Run.java:1738)
21:30:08at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
21:30:08at 
hudson.model.ResourceController.execute(ResourceController.java:98)
21:30:08at hudson.model.Executor.run(Executor.java:410)
21:30:08 Build step 'Execute system Groovy script' marked build as failure
21:30:09 Build step 'Groovy Postbuild' marked build as failure
}}

As this is a NullPointerException - it's probably a bug in the cleanup job.



--
This message was sent by Atlassian JIRA
(v1000.319.1#100012)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Jenkins build became unstable: ovirt_4.0_he-system-tests #288

2016-09-20 Thread Simone Tiraboschi
On Tue, Sep 20, 2016 at 10:44 AM, Lev Veyde  wrote:

> Hi Eyal,
>
> I'm not 100% sure about that one.
> Checking the logs, I see that the first host has been set up without any
> issues, yet setting up the second host gives:
>
> 17:16:31 [ ERROR ] Failed to execute stage 'Closing up': HTTP Error 500:
> Internal Server Error
> 17:16:31 [ INFO  ] Stage: Clean up
> 17:16:31 [ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-
> setup/answers/answers-20160919131631.conf'
> 17:16:31 [ INFO  ] Stage: Pre-termination
> 17:16:31 [ INFO  ] Stage: Termination
> 17:16:31 [ ERROR ] Hosted Engine deployment failed: this system is not
> reliable, please check the issue,fix and redeploy
> 17:16:31   Log file is located at /var/log/ovirt-hosted-engine-
> setup/ovirt-hosted-engine-setup-20160919131556-qufq5z.log
>
> However, as we don't have the ovirt-engine and ovirt-hosted-engine-setup
> logs, we can't be 100% certain.
>

We have them; they are in
http://jenkins.ovirt.org/job/ovirt_4.0_he-system-tests/288/artifact/*zip*/archive.zip
under /archive/exported-artifacts/test_logs/he_basic_suite_4.0/post-002_bootstrap.py/
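
If anyone wants to inspect them locally, one way to pull and unpack the whole
archive (the exact file names under post-002_bootstrap.py/ vary per run):

# Fetch the complete artifacts archive of build #288 and list the HE setup logs.
# Keep the literal '*zip*' quoted so the shell does not try to expand it.
wget -O archive.zip \
    'http://jenkins.ovirt.org/job/ovirt_4.0_he-system-tests/288/artifact/*zip*/archive.zip'
unzip -q archive.zip
ls archive/exported-artifacts/test_logs/he_basic_suite_4.0/post-002_bootstrap.py/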


>
> Thanks in advance,
> Lev Veyde.
>
> - Original Message -
> From: "Eyal Edri" 
> To: "Lev Veyde" 
> Cc: "Sandro Bonazzola" , "infra" ,
> "Simone Tiraboschi" 
> Sent: Monday, September 19, 2016 8:49:59 PM
> Subject: Re: Jenkins build became unstable: ovirt_4.0_he-system-tests #288
>
> Is this the usual Sanlock issue?
>
> Error Message
>
> status: 400
> reason: Bad Request
> detail: Cannot add VM: Storage Domain cannot be accessed.
> -Please check that at least one Host is operational and Data Center state
> is up.
>
> Stacktrace
>
> Traceback (most recent call last):
>   File "/usr/lib64/python2.7/unittest/case.py", line 367, in run
> testMethod()
>   File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in
> runTest
> self.test(*self.arg)
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
> 120, in wrapped_test
> return test()
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
> 52, in wrapper
> return func(get_test_prefix(), *args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
> 61, in wrapper
> return func(prefix.virt_env.engine_vm().get_api(), *args, **kwargs)
>   File "/home/jenkins/workspace/ovirt_4.0_he-system-tests/
> ovirt-system-tests/he_basic_suite_4.0/test-scenarios/004_basic_sanity.py",
> line 78, in add_vm_blank
> api.vms.add(vm_params)
>   File "/usr/lib/python2.7/site-packages/ovirtsdk/
> infrastructure/brokers.py",
> line 35701, in add
> headers={"Correlation-Id":correlation_id, "Expect":expect}
>   File "/usr/lib/python2.7/site-packages/ovirtsdk/
> infrastructure/proxy.py",
> line 79, in add
> return self.request('POST', url, body, headers, cls=cls)
>   File "/usr/lib/python2.7/site-packages/ovirtsdk/
> infrastructure/proxy.py",
> line 122, in request
> persistent_auth=self.__persistent_auth
>   File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/
> connectionspool.py",
> line 79, in do_request
> persistent_auth)
>   File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/
> connectionspool.py",
> line 162, in __do_request
> raise errors.RequestError(response_code, response_reason,
> response_body)
> RequestError:
> status: 400
> reason: Bad Request
> detail: Cannot add VM: Storage Domain cannot be accessed.
> -Please check that at least one Host is operational and Data Center state
> is up.
>
>
> On Mon, Sep 19, 2016 at 8:22 PM,  wrote:
>
> > See 
> >
> > ___
> > Infra mailing list
> > Infra@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/infra
> >
> >
> >
>
>
> --
> Eyal Edri
> Associate Manager
> RHV DevOps
> EMEA ENG Virtualization R
> Red Hat Israel
>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Jenkins build became unstable: ovirt_4.0_he-system-tests #288

2016-09-20 Thread Lev Veyde
Hi Eyal,

I'm not 100% sure about that one.
Checking the logs, I see that the first host has been set up without any issues,
yet setting up the second host gives:

17:16:31 [ ERROR ] Failed to execute stage 'Closing up': HTTP Error 500: 
Internal Server Error
17:16:31 [ INFO  ] Stage: Clean up
17:16:31 [ INFO  ] Generating answer file 
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20160919131631.conf'
17:16:31 [ INFO  ] Stage: Pre-termination
17:16:31 [ INFO  ] Stage: Termination
17:16:31 [ ERROR ] Hosted Engine deployment failed: this system is not 
reliable, please check the issue,fix and redeploy
17:16:31   Log file is located at 
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160919131556-qufq5z.log

However, as we don't have the ovirt-engine and ovirt-hosted-engine-setup logs, we
can't be 100% certain.

Thanks in advance,
Lev Veyde.

- Original Message -
From: "Eyal Edri" 
To: "Lev Veyde" 
Cc: "Sandro Bonazzola" , "infra" , 
"Simone Tiraboschi" 
Sent: Monday, September 19, 2016 8:49:59 PM
Subject: Re: Jenkins build became unstable: ovirt_4.0_he-system-tests #288

Is this the usual Sanlock issue?

Error Message

status: 400
reason: Bad Request
detail: Cannot add VM: Storage Domain cannot be accessed.
-Please check that at least one Host is operational and Data Center state is up.

Stacktrace

Traceback (most recent call last):
  File "/usr/lib64/python2.7/unittest/case.py", line 367, in run
testMethod()
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
120, in wrapped_test
return test()
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
52, in wrapper
return func(get_test_prefix(), *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
61, in wrapper
return func(prefix.virt_env.engine_vm().get_api(), *args, **kwargs)
  File 
"/home/jenkins/workspace/ovirt_4.0_he-system-tests/ovirt-system-tests/he_basic_suite_4.0/test-scenarios/004_basic_sanity.py",
line 78, in add_vm_blank
api.vms.add(vm_params)
  File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/brokers.py",
line 35701, in add
headers={"Correlation-Id":correlation_id, "Expect":expect}
  File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py",
line 79, in add
return self.request('POST', url, body, headers, cls=cls)
  File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py",
line 122, in request
persistent_auth=self.__persistent_auth
  File 
"/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py",
line 79, in do_request
persistent_auth)
  File 
"/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py",
line 162, in __do_request
raise errors.RequestError(response_code, response_reason, response_body)
RequestError:
status: 400
reason: Bad Request
detail: Cannot add VM: Storage Domain cannot be accessed.
-Please check that at least one Host is operational and Data Center state is up.


On Mon, Sep 19, 2016 at 8:22 PM,  wrote:

> See 
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
>


-- 
Eyal Edri
Associate Manager
RHV DevOps
EMEA ENG Virtualization R
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Jenkins build became unstable: ovirt_4.0_he-system-tests #288

2016-09-20 Thread Eyal Edri
I really want to add this job to the experimental flow to catch regressions
early, but we have to iron out any false positives or errors we're seeing
before that.
Until now I was only aware of the Sanlock issue, which should be resolved
soon once it's available in CentOS.

Are we aware of other known issues in the tests?

On Tue, Sep 20, 2016 at 11:22 AM, Simone Tiraboschi 
wrote:

>
>
> On Mon, Sep 19, 2016 at 7:49 PM, Eyal Edri  wrote:
>
>> Is this the usual Sanlock issue?
>>
>
> Not that sure; however, build 289 came back to stable by itself.
>
>
>
>>
>> Error Message
>>
>> status: 400
>> reason: Bad Request
>> detail: Cannot add VM: Storage Domain cannot be accessed.
>> -Please check that at least one Host is operational and Data Center state is 
>> up.
>>
>> Stacktrace
>>
>> Traceback (most recent call last):
>>   File "/usr/lib64/python2.7/unittest/case.py", line 367, in run
>> testMethod()
>>   File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
>> self.test(*self.arg)
>>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 120, in 
>> wrapped_test
>> return test()
>>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 52, in 
>> wrapper
>> return func(get_test_prefix(), *args, **kwargs)
>>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 61, in 
>> wrapper
>> return func(prefix.virt_env.engine_vm().get_api(), *args, **kwargs)
>>   File 
>> "/home/jenkins/workspace/ovirt_4.0_he-system-tests/ovirt-system-tests/he_basic_suite_4.0/test-scenarios/004_basic_sanity.py",
>>  line 78, in add_vm_blank
>> api.vms.add(vm_params)
>>   File 
>> "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/brokers.py", line 
>> 35701, in add
>> headers={"Correlation-Id":correlation_id, "Expect":expect}
>>   File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", 
>> line 79, in add
>> return self.request('POST', url, body, headers, cls=cls)
>>   File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", 
>> line 122, in request
>> persistent_auth=self.__persistent_auth
>>   File 
>> "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py",
>>  line 79, in do_request
>> persistent_auth)
>>   File 
>> "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py",
>>  line 162, in __do_request
>> raise errors.RequestError(response_code, response_reason, response_body)
>> RequestError:
>> status: 400
>> reason: Bad Request
>> detail: Cannot add VM: Storage Domain cannot be accessed.
>> -Please check that at least one Host is operational and Data Center state is 
>> up.
>>
>>
>> On Mon, Sep 19, 2016 at 8:22 PM,  wrote:
>>
>>> See 
>>>
>>> ___
>>> Infra mailing list
>>> Infra@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/infra
>>>
>>>
>>>
>>
>>
>> --
>> Eyal Edri
>> Associate Manager
>> RHV DevOps
>> EMEA ENG Virtualization R
>> Red Hat Israel
>>
>> phone: +972-9-7692018
>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>
>
>


-- 
Eyal Edri
Associate Manager
RHV DevOps
EMEA ENG Virtualization R
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Dropping rpm build from ovirt-engine check-merged.sh

2016-09-20 Thread Eyal Edri
On Tue, Sep 20, 2016 at 9:34 AM, Sandro Bonazzola 
wrote:

>
>
> On Mon, Sep 19, 2016 at 7:56 PM, Eyal Edri  wrote:
>
>>
>>
>> On Mon, Sep 19, 2016 at 9:41 AM, Sandro Bonazzola 
>> wrote:
>>
>>>
>>>
>>> On Sun, Sep 18, 2016 at 4:18 PM, Eyal Edri  wrote:
>>>
 Hi,

 Following [1] I'd like to propose to remove rpm building from the
 'check-merged.sh' script from ovirt-engine (master for now).

 The job [2] takes 15 min on average, while the rpms are already built
 in check-patch
 (with gwt draft mode if needed), and it runs exactly the same rpm build
 command as check-patch [3].

 So there isn't real value in running exactly the same rpm build
 post-merge, and we already build in full permutation mode in 'build-artifacts.sh'.

 Any reason to keep it?
 We can cut down valuable time in CI if we drop it and free up more time
 for more meaningful tests.

>>>
>>>
>>> This depends on the flow: if we make check_merge gating to the merge and
>>> to the build we should keep the rpm build because at merge a rebase is done
>>> automatically.
>>>
>>
>> What do you mean by 'gating to the merge'? I'm not sure I understand what
>> it means.
>> Doesn't check-patch.sh do the gating? check-merge runs post-merge, so it's
>> already too late to gate the code ...
>> And I think check-merge and check-patch currently run the same rpmbuild
>> command, so I don't see how check-merged has any value over check-patch.
>>
>
> when the merge command is issued, a rebase is done as well. We still need a
> check-merged job because the code checked by check-patch is not the same
> anymore when check-merged runs.
>

OK, now I understand, so indeed check-merge can potentially run on
different code than check-patch and possibly fail due to it.


> In the original design of stdci, check-merged was supposed to become a gating
> test for build-artifacts.
>

We have it in our backlog, i.e. installing Zuul and adding gating for the
check-merged jobs; it's mostly relevant for system jobs, but we can
definitely do it first for simple 'check-merged.sh' jobs
as part of standard CI.

Opened a ticket for it [1]

[1] https://ovirt-jira.atlassian.net/browse/OVIRT-734

>
>
>
>
>>
>>
>>> If there's no gating process performed by check-merge then I agree with
>>> dropping the rpm build.
>>>
>>>
>>>


 [1] https://ovirt-jira.atlassian.net/browse/OVIRT-416
 [2] http://jenkins.ovirt.org/job/ovirt-engine_master_check-m
 erged-el7-x86_64/buildTimeTrend
 [3]
 rpmbuild \
 -D "_rpmdir $PWD/output" \
 -D "_topmdir $PWD/rpmbuild" \
 -D "release_suffix ${SUFFIX}" \
 -D "ovirt_build_ut $BUILD_UT" \
 -D "ovirt_build_extra_flags $EXTRA_BUILD_FLAGS" \
 -D "ovirt_build_draft 1" \
 --rebuild output/*.src.rpm


 --
 Eyal Edri
 Associate Manager
 RHV DevOps
 EMEA ENG Virtualization R
 Red Hat Israel

 phone: +972-9-7692018
 irc: eedri (on #tlv #rhev-dev #rhev-integ)

>>>
>>>
>>>
>>> --
>>> Sandro Bonazzola
>>> Better technology. Faster innovation. Powered by community collaboration.
>>> See how it works at redhat.com
>>> 
>>>
>>
>>
>>
>> --
>> Eyal Edri
>> Associate Manager
>> RHV DevOps
>> EMEA ENG Virtualization R
>> Red Hat Israel
>>
>> phone: +972-9-7692018
>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>
>
>
>
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
> 
>



-- 
Eyal Edri
Associate Manager
RHV DevOps
EMEA ENG Virtualization R
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-734) [std ci] add Zuul to gate check-merged jobs

2016-09-20 Thread eyal edri [Administrator] (oVirt JIRA)
eyal edri [Administrator] created OVIRT-734:
---

 Summary: [std ci] add Zuul to gate check-merged jobs 
 Key: OVIRT-734
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-734
 Project: oVirt - virtualization made easy
  Issue Type: Improvement
  Components: Jenkins
Reporter: eyal edri [Administrator]
Assignee: infra
Priority: High


Today we run check-merged.sh jobs in CI just post-merge, without any gating to
reject failing code.
So what happens is that a regression which passed check-patch might fail
check-merged, and we'll only see it after it has already broken the branch.

Introducing Zuul as a gate before merge can solve this, and also solve any
rebase issues we might have when merging the code.



--
This message was sent by Atlassian JIRA
(v1000.319.1#100012)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-734) [std ci] add Zuul to gate check-merged jobs

2016-09-20 Thread eyal edri [Administrator] (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

eyal edri [Administrator] updated OVIRT-734:

Epic Link: OVIRT-400

> [std ci] add Zuul to gate check-merged jobs 
> 
>
> Key: OVIRT-734
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-734
> Project: oVirt - virtualization made easy
>  Issue Type: Improvement
>  Components: Jenkins
>Reporter: eyal edri [Administrator]
>Assignee: infra
>Priority: High
>
> Today we run check-merged.sh jobs in CI just post-merge, without any gating to
> reject failing code.
> So what happens is that a regression which passed check-patch might fail
> check-merged, and we'll only see it after it has already broken the branch.
> Introducing Zuul as a gate before merge can solve this, and also solve any
> rebase issues we might have when merging the code.



--
This message was sent by Atlassian JIRA
(v1000.319.1#100012)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Jenkins build became unstable: ovirt_4.0_he-system-tests #288

2016-09-20 Thread Simone Tiraboschi
On Mon, Sep 19, 2016 at 7:49 PM, Eyal Edri  wrote:

> Is this the usual Sanlock issue?
>

Not that sure; however, build 289 came back to stable by itself.



>
> Error Message
>
> status: 400
> reason: Bad Request
> detail: Cannot add VM: Storage Domain cannot be accessed.
> -Please check that at least one Host is operational and Data Center state is 
> up.
>
> Stacktrace
>
> Traceback (most recent call last):
>   File "/usr/lib64/python2.7/unittest/case.py", line 367, in run
> testMethod()
>   File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
> self.test(*self.arg)
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 120, in 
> wrapped_test
> return test()
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 52, in 
> wrapper
> return func(get_test_prefix(), *args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 61, in 
> wrapper
> return func(prefix.virt_env.engine_vm().get_api(), *args, **kwargs)
>   File 
> "/home/jenkins/workspace/ovirt_4.0_he-system-tests/ovirt-system-tests/he_basic_suite_4.0/test-scenarios/004_basic_sanity.py",
>  line 78, in add_vm_blank
> api.vms.add(vm_params)
>   File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/brokers.py", 
> line 35701, in add
> headers={"Correlation-Id":correlation_id, "Expect":expect}
>   File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", 
> line 79, in add
> return self.request('POST', url, body, headers, cls=cls)
>   File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", 
> line 122, in request
> persistent_auth=self.__persistent_auth
>   File 
> "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py",
>  line 79, in do_request
> persistent_auth)
>   File 
> "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py",
>  line 162, in __do_request
> raise errors.RequestError(response_code, response_reason, response_body)
> RequestError:
> status: 400
> reason: Bad Request
> detail: Cannot add VM: Storage Domain cannot be accessed.
> -Please check that at least one Host is operational and Data Center state is 
> up.
>
>
> On Mon, Sep 19, 2016 at 8:22 PM,  wrote:
>
>> See 
>>
>> ___
>> Infra mailing list
>> Infra@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/infra
>>
>>
>>
>
>
> --
> Eyal Edri
> Associate Manager
> RHV DevOps
> EMEA ENG Virtualization R
> Red Hat Israel
>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Build failed in Jenkins: ovirt_3.6_system-tests #519

2016-09-20 Thread jenkins
See 

Changes:

[Yaniv Kaul] Speedup for master reposync

--
[...truncated 603 lines...]
##  rc = 1
##
##! ERROR v
##! Last 20 log enties: 
logs/mocker-fedora-23-x86_64.fc23.basic_suite_3.6.sh/basic_suite_3.6.sh.log
##!
+ true
+ env_cleanup
+ echo '#'
#
+ local res=0
+ local uuid
+ echo ' Cleaning up'
 Cleaning up
+ [[ -e 

 ]]
+ echo '--- Cleaning with lago'
--- Cleaning with lago
+ lago --workdir 

 destroy --yes --all-prefixes
+ echo '--- Cleaning with lago done'
--- Cleaning with lago done
+ [[ 0 != \0 ]]
+ echo ' Cleanup done'
 Cleanup done
+ exit 0
Took 358 seconds
===
##!
##! ERROR ^^
##!
##
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :.* : True
Logical operation result is TRUE
Running script  : #!/bin/bash -xe
echo 'shell_scripts/system_tests.collect_logs.sh'

#
# Required jjb vars:
#version
#
VERSION=3.6
SUITE_TYPE=

WORKSPACE="$PWD"
OVIRT_SUITE="$SUITE_TYPE_suite_$VERSION"
TESTS_LOGS="$WORKSPACE/ovirt-system-tests/exported-artifacts"

rm -rf "$WORKSPACE/exported-artifacts"
mkdir -p "$WORKSPACE/exported-artifacts"

if [[ -d "$TESTS_LOGS" ]]; then
mv "$TESTS_LOGS/"* "$WORKSPACE/exported-artifacts/"
fi

[ovirt_3.6_system-tests] $ /bin/bash -xe /tmp/hudson7431171548225896753.sh
+ echo shell_scripts/system_tests.collect_logs.sh
shell_scripts/system_tests.collect_logs.sh
+ VERSION=3.6
+ SUITE_TYPE=
+ WORKSPACE=
+ OVIRT_SUITE=3.6
+ 
TESTS_LOGS=
+ rm -rf 

+ mkdir -p 

+ [[ -d 

 ]]
+ mv 

 

POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0
Match found for :.* : True
Logical operation result is TRUE
Running script  : #!/bin/bash -xe
echo "shell-scripts/mock_cleanup.sh"

shopt -s nullglob


WORKSPACE="$PWD"

# Make clear this is the cleanup, helps reading the jenkins logs
cat 
+ cat
___
###
# #
#   CLEANUP   #
# #
###
+ logs=(./*log ./*/logs)
+ [[ -n ./ovirt-system-tests/logs ]]
+ tar cvzf exported-artifacts/logs.tgz ./ovirt-system-tests/logs
./ovirt-system-tests/logs/
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.init/
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.init/stdout_stderr.log
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.init/state.log
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.init/build.log
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.init/root.log
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.install_packages/
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.install_packages/stdout_stderr.log
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.install_packages/state.log
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.install_packages/build.log
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.install_packages/root.log
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.clean_rpmdb/
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.clean_rpmdb/stdout_stderr.log
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.basic_suite_3.6.sh/
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.basic_suite_3.6.sh/basic_suite_3.6.sh.log
+ rm -rf ./ovirt-system-tests/logs
+ failed=false
+ mock_confs=("$WORKSPACE"/*/mocker*)
+ for mock_conf_file in '"${mock_confs[@]}"'
+ [[ -n 

Build failed in Jenkins: ovirt_4.0_he-system-tests #290

2016-09-20 Thread jenkins
See 

Changes:

[Yaniv Kaul] Speedup for master reposync

--
[...truncated 423 lines...]
  # Copying any deploy scripts: Success (in 0:00:00)
  # [Thread-2] Bootstrapping lago_he_basic_suite_4_0_storage: 
  # [Thread-3] Bootstrapping lago_he_basic_suite_4_0_host1: 
  # [Thread-4] Bootstrapping lago_he_basic_suite_4_0_host0: 
  # [Thread-3] Bootstrapping lago_he_basic_suite_4_0_host1: Success (in 0:00:44)
  # [Thread-4] Bootstrapping lago_he_basic_suite_4_0_host0: Success (in 0:00:44)
  # [Thread-2] Bootstrapping lago_he_basic_suite_4_0_storage: Success (in 
0:00:45)
  # Save prefix: 
* Save nets: 
* Save nets: Success (in 0:00:00)
* Save VMs: 
* Save VMs: Success (in 0:00:00)
* Save env: 
* Save env: Success (in 0:00:00)
  # Save prefix: Success (in 0:00:00)
@ Initialize and populate prefix: Success (in 0:00:51)
+ env_repo_setup
+ local extrasrc
+ declare -a extrasrcs
+ echo '#'
#
+ cd 

+ lago ovirt reposetup --reposync-yum-config 

current session does not belong to lago group.
@ Create prefix internal repo: 
  # Syncing remote repos locally (this might take some time): 
* Acquiring lock for /var/lib/lago/reposync/repolock: 
* Acquiring lock for /var/lib/lago/reposync/repolock: Success (in 0:00:00)
* Running reposync: 
* Running reposync: Success (in 0:01:53)
* Due to bug https://bugzilla.redhat.com/show_bug.cgi?id=1332441 sometimes 
reposync fails to update some packages that have older versions already 
downloaded, will remove those if any and retry
* Rerunning reposync: 
* Rerunning reposync: Success (in 0:02:07)
* Failed to run reposync again, that usually means that some of the local 
rpms might be corrupted or the metadata invalid, cleaning caches and retrying a 
second time
* Rerunning reposync a last time: 
* Rerunning reposync a last time: Success (in 0:02:28)
* reposync command failed with following output: 
centos-base-el7 | 3.6 kB  00:00 

centos-extras-el7   | 3.4 kB  00:00 

centos-ovirt-4.0-el7| 2.9 kB  00:00 

centos-updates-el7  | 3.4 kB  00:00 

epel-el7| 4.3 kB  00:00 

glusterfs-el7   | 2.9 kB  00:00 

and following error: 
Yum-utils package has been deprecated, use dnf instead.
See 'man yum2dnf' for more information.


Traceback (most recent call last):
  File "/bin/reposync", line 334, in 
main()
  File "/bin/reposync", line 166, in main
my.doRepoSetup()
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 702, in 
doRepoSetup
return self._getRepos(thisrepo, True)
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 742, in 
_getRepos
self._repos.doSetup(thisrepo)
  File "/usr/lib/python2.7/site-packages/yum/repos.py", line 157, in doSetup
self.retrieveAllMD()
  File "/usr/lib/python2.7/site-packages/yum/repos.py", line 88, in 
retrieveAllMD
dl = repo._async and repo._commonLoadRepoXML(repo)
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1460, in 
_commonLoadRepoXML
result = self._getFileRepoXML(local, text)
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1234, in 
_getFileRepoXML
size=102400) # setting max size as 100K
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 1028, in _getFile
raise Errors.NoMoreMirrorsRepoError(errstr, errors, repo=self)
yum.Errors.NoMoreMirrorsRepoError: failure: repodata/repomd.xml from 
ovirt-4.0-dependencies-el7: [Errno 256] No more mirrors to try.
http://copr-be.cloud.fedoraproject.org/results/patternfly/patternfly1/epel-7-x86_64/repodata/repomd.xml:
 [Errno 14] curl#7 - "Failed to connect to copr-be.cloud.fedoraproject.org port 
80: No route to host"
http://copr-be.cloud.fedoraproject.org/results/patternfly/patternfly1/epel-7-x86_64/repodata/repomd.xml:
 [Errno 12] Timeout on 
http://copr-be.cloud.fedoraproject.org/results/patternfly/patternfly1/epel-7-x86_64/repodata/repomd.xml:
 (28, 'Connection timed out after 30001 milliseconds')
http://copr-be.cloud.fedoraproject.org/results/patternfly/patternfly1/epel-7-x86_64/repodata/repomd.xml:
 [Errno 14] curl#7 - "Failed to connect to copr-be.cloud.fedoraproject.org port 
80: No route to host"
http://copr-be.cloud.fedoraproject.org/results/patternfly/patternfly1/epel-7-x86_64/repodata/repomd.xml:
 [Errno 12] Timeout on 

[oVirt Jenkins] repos_master_check-closure_el7_merged - Build # 17 - Failure!

2016-09-20 Thread jenkins
Project: http://jenkins.ovirt.org/job/repos_master_check-closure_el7_merged/ 
Build: http://jenkins.ovirt.org/job/repos_master_check-closure_el7_merged/17/
Build Number: 17
Build Status:  Failure
Triggered By: Started by timer

-
Changes Since Last Success:
-
Changes for Build #17
[Eyal Edri] remove ovirt-engine update db job in ci




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] repos_master_check-closure_fc24_merged - Build # 20 - Failure!

2016-09-20 Thread jenkins
Project: http://jenkins.ovirt.org/job/repos_master_check-closure_fc24_merged/ 
Build: http://jenkins.ovirt.org/job/repos_master_check-closure_fc24_merged/20/
Build Number: 20
Build Status:  Failure
Triggered By: Started by timer

-
Changes Since Last Success:
-
Changes for Build #1
No changes

Changes for Build #2
No changes

Changes for Build #3
No changes

Changes for Build #4
[Barak Korren] Add Ruby SDK builds and tests on ppc64le

[Martin Sivak] Configure standard automation for mom


Changes for Build #5
No changes

Changes for Build #6
[ngoldin] Revert "mock_runner.sh: Fix yum.conf location"

[Sharon Naftaly] Small changes in the build-artifacts-manual job template

[Sharon Naftaly] Add ovirt-host-deploy build-artifacts-manual jobs

[Sharon Naftaly] Change order of project confs in ovirt-dwh yaml project

[Sharon Naftaly] Add ovirt-hosted-engine-ha build-artifacts-manual jobs

[Sharon Naftaly] Add ovirt-hosted-engine-setup build-artifacts-manual jobs

[Sharon Naftaly] Add ovirt-image-uploader build-artifacts-manual jobs

[Sharon Naftaly] Add ovirt-iso-uploader build-artifacts-manual jobs

[Sharon Naftaly] Add ovirt-log-collector build-artifacts-manual jobs

[Sharon Naftaly] Add ovirt-setup-lib build-artifacts-manual jobs

[Sharon Naftaly] Add otopi build-artifacts-manual jobs

[Sharon Naftaly] Add mom build-artifacts-manual jobs

[Sharon Naftaly] Add vdsm-jsonrpc-java build-artifacts-manual jobs


Changes for Build #7
No changes

Changes for Build #8
[Sharon Naftaly] Adding ovirt-vmconsole build-artifacts and manual jobs

[Sharon Naftaly] Add vdsm build-artifacts-manual jobs

[Sharon Naftaly] Changing repoclosure jobs to run inside mock


Changes for Build #9
No changes

Changes for Build #10
[Yedidyah Bar David] Enable imageio-proxy in engine setup


Changes for Build #11
No changes

Changes for Build #12
No changes

Changes for Build #13
[Martin Perina] Fix repositories for 4.0 aaa-jdbc build


Changes for Build #14
[Yedidyah Bar David] run engine upgrade from 3.6 to master


Changes for Build #15
No changes

Changes for Build #16
No changes

Changes for Build #17
No changes

Changes for Build #18
No changes

Changes for Build #19
No changes

Changes for Build #20
[Eyal Edri] remove ovirt-engine update db job in ci




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Vdsm source packages signed with an expired key?

2016-09-20 Thread Milan Zamazal
Sandro Bonazzola  writes:

> On Mon, Sep 19, 2016 at 10:01 AM, Milan Zamazal  wrote:
>
>> Hi, on Vdsm packages downloaded from
>> http://resources.ovirt.org/pub/ovirt-4.0/src/vdsm/ :
>>
>> % gpg --verify vdsm-4.18.13.tar.gz.sig
>> gpg: assuming signed data in 'vdsm-4.18.13.tar.gz'
>> gpg: Signature made Wed 14 Sep 2016 04:38:26 PM CEST using RSA key ID
>> FE590CB7
>> gpg: Good signature from "oVirt " [expired]
>> gpg: Note: This key has expired!
>> Primary key fingerprint: 31A5 D783 7FAD 7CB2 86CD  3469 AB8C 4F9D FE59 0CB7
>>
>> % gpg --list-keys infra@ovirt.org
>> pub   2048R/FE590CB7 2014-03-30 [expired: 2016-04-02]
>> uid   [ expired] oVirt 
>>
>> Either I download fake packages signed with a cracked expired key, or
>> you sign the packages with an expired key.  Not good in any case.
>>
>
>
> Please run gpg --refresh-keys

I see, it's OK now, thanks!
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[oVirt Jenkins] repos_3.6_check-closure_el6_merged - Build # 18 - Failure!

2016-09-20 Thread jenkins
Project: http://jenkins.ovirt.org/job/repos_3.6_check-closure_el6_merged/ 
Build: http://jenkins.ovirt.org/job/repos_3.6_check-closure_el6_merged/18/
Build Number: 18
Build Status:  Failure
Triggered By: Started by timer

-
Changes Since Last Success:
-
Changes for Build #18
[Eyal Edri] remove ovirt-engine update db job in ci




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Dropping rpm build from ovirt-engine check-merged.sh

2016-09-20 Thread Sandro Bonazzola
On Mon, Sep 19, 2016 at 7:56 PM, Eyal Edri  wrote:

>
>
> On Mon, Sep 19, 2016 at 9:41 AM, Sandro Bonazzola 
> wrote:
>
>>
>>
>> On Sun, Sep 18, 2016 at 4:18 PM, Eyal Edri  wrote:
>>
>>> Hi,
>>>
>>> Following [1] I'd like to propose to remove rpm building from the
>>> 'check-merged.sh' script from ovirt-engine (master for now).
>>>
>>> The job [2] takes 15 min on average, while the rpms are already built
>>> in check-patch
>>> (with gwt draft mode if needed), and it runs exactly the same rpm build
>>> command as check-patch [3].
>>>
>>> So there isn't real value in running exactly the same rpm build
>>> post-merge, and we already build in full permutation mode in 'build-artifacts.sh'.
>>>
>>> Any reason to keep it?
>>> We can cut down valuable time in CI if we drop it and free up more time
>>> for more meaningful tests.
>>>
>>
>>
>> This depends on the flow: if we make check_merge gating to the merge and
>> to the build we should keep the rpm build because at merge a rebase is done
>> automatically.
>>
>
> What do you mean by 'gating to the merge'? I'm not sure I understand what
> it means.
> Doesn't check-patch.sh do the gating? check-merge runs post-merge, so it's
> already too late to gate the code ...
> And I think check-merge and check-patch currently run the same rpmbuild
> command, so I don't see how check-merged has any value over check-patch.
>

when the merge command is issued, a rebase is done as well. We still need a
check-merged job because the code checked by check-patch is not the same
anymore when check-merged runs.
In the original design of stdci, check-merged was supposed to become a gating
test for build-artifacts.




>
>
>> If there's no gating process performed by check-merge then I agree with
>> dropping the rpm build.
>>
>>
>>
>>>
>>>
>>> [1] https://ovirt-jira.atlassian.net/browse/OVIRT-416
>>> [2] http://jenkins.ovirt.org/job/ovirt-engine_master_check-m
>>> erged-el7-x86_64/buildTimeTrend
>>> [3]
>>> rpmbuild \
>>> -D "_rpmdir $PWD/output" \
>>> -D "_topmdir $PWD/rpmbuild" \
>>> -D "release_suffix ${SUFFIX}" \
>>> -D "ovirt_build_ut $BUILD_UT" \
>>> -D "ovirt_build_extra_flags $EXTRA_BUILD_FLAGS" \
>>> -D "ovirt_build_draft 1" \
>>> --rebuild output/*.src.rpm
>>>
>>>
>>> --
>>> Eyal Edri
>>> Associate Manager
>>> RHV DevOps
>>> EMEA ENG Virtualization R
>>> Red Hat Israel
>>>
>>> phone: +972-9-7692018
>>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>>
>>
>>
>>
>> --
>> Sandro Bonazzola
>> Better technology. Faster innovation. Powered by community collaboration.
>> See how it works at redhat.com
>> 
>>
>
>
>
> --
> Eyal Edri
> Associate Manager
> RHV DevOps
> EMEA ENG Virtualization R
> Red Hat Israel
>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>



-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Vdsm source packages signed with an expired key?

2016-09-20 Thread Sandro Bonazzola
On Mon, Sep 19, 2016 at 10:01 AM, Milan Zamazal  wrote:

> Hi, on Vdsm packages downloaded from
> http://resources.ovirt.org/pub/ovirt-4.0/src/vdsm/ :
>
> % gpg --verify vdsm-4.18.13.tar.gz.sig
> gpg: assuming signed data in 'vdsm-4.18.13.tar.gz'
> gpg: Signature made Wed 14 Sep 2016 04:38:26 PM CEST using RSA key ID
> FE590CB7
> gpg: Good signature from "oVirt " [expired]
> gpg: Note: This key has expired!
> Primary key fingerprint: 31A5 D783 7FAD 7CB2 86CD  3469 AB8C 4F9D FE59 0CB7
>
> % gpg --list-keys infra@ovirt.org
> pub   2048R/FE590CB7 2014-03-30 [expired: 2016-04-02]
> uid   [ expired] oVirt 
>
> Either I download fake packages signed with a cracked expired key, or
> you sign the packages with an expired key.  Not good in any case.
>


Please run gpg --refresh-keys
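For example, from the directory holding the downloaded tarball and signature
(the refresh pulls the re-extended key from the keyservers):

# Refresh the oVirt signing key, confirm the new expiry, then verify again.
gpg --refresh-keys infra@ovirt.org
gpg --list-keys infra@ovirt.org
gpg --verify vdsm-4.18.13.tar.gz.sig vdsm-4.18.13.tar.gz
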
Thanks,




> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>



-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra