[ovirt-devel] Re: vdsm_standard-check-patch is stuck on Archiving artifacts

2019-08-18 Thread Daniel Belenky
Hi Nir,

It seems that the reason behind this issue is that the s390x node is
offline.
I'm checking it right now and will update asap.

On Sat, Aug 17, 2019 at 3:18 AM Nir Soffer  wrote:

> On Sat, Aug 17, 2019 at 3:06 AM Nir Soffer  wrote:
>
>> This looks like a bug, so adding infra-support - this will open a ticket
>> and someone will look into it.
>>
>> On Sat, Aug 17, 2019 at 1:12 AM Amit Bawer  wrote:
>>
>>> Hi
>>> Unable to run CI builds and OSTs due to the fc29 "Archiving artifacts"
>>> phase hanging indefinitely.
>>> Example run:
>>> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/10453/console
>>>
>>
> We have 2 stuck builds:
> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/10454/
> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/10453/
>
> Both started 4 hours ago.
>
> It is possible to abort jobs from the Jenkins UI, but that usually causes
> more trouble because of partial cleanup, so let's let the infra team
> handle this properly.
>
>


-- 

Daniel Belenky

Red Hat <https://www.redhat.com/>
<https://red.ht/sig>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/5A34H2SKDT2HB6VNQ22YFULHAFQQFZ4E/


[ovirt-devel] Jenkins was restarted this morning

2019-01-22 Thread Daniel Belenky
Hi all,

Some of you have probably noticed that Jenkins was not responsive
this morning. As a result, we had to restart it.
The service is up and running, and everything should be back to normal
by now.

If you had any running jobs at the time of the restart,
please re-trigger them by going to your patch in Gerrit
or your PR in GitHub and commenting '*ci test please*' to make sure
that the CI system tests them.

Sorry for any inconvenience that was caused and do not hesitate
to reach out to me or anyone from the CI team with questions
or requests for assistance.

Thanks,
-- 

DANIEL BELENKY
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/6ZUNUSJSU6MUTO2JVTIKIL36Z4JV5OPH/


[ovirt-devel] Re: ovirt-node-ng-image_4.3_build-artifacts-fc28-x86_64 #39 stuck for 2 days

2018-12-31 Thread Daniel Belenky
The machine that job ran on ran out of space.
I'll add more details (as I have them) to the ticket:
https://ovirt-jira.atlassian.net/browse/OVIRT-2638

On Sun, Dec 30, 2018 at 9:45 PM Eyal Edri  wrote:

>
>
> On Sun, Dec 30, 2018, 21:25 Yuval Turgeman wrote:
>> Looks like livemedia-creator installed the VM correctly, but failed to
>> build the final image file for some reason (disk issues?).  Stdci failed
>> the job on timeout, but probably can't kill the hanging process.  Is it
>> possible to take a look at the slave somehow ?
>>
>
> sure, though you will need someone from the CI team to ssh in, if you
> don't have access to infra servers.
>
>
>> On Sun, Dec 30, 2018, 19:57 Nir Soffer wrote:
>>> Started 2 days 11 hr ago
>>> Build has been executing for 2 days 11 hr on vm0038.workers-phx.ovirt.or
>>> <https://jenkins.ovirt.org/computer/vm0038.workers-phx.ovirt.org>
>>>
>>>
>>> https://jenkins.ovirt.org/job/ovirt-node-ng-image_4.3_build-artifacts-fc28-x86_64/39/
>>>
>>>
>

-- 

DANIEL BELENKY
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/GSZTGCQJPTCTT2ZEPPOVDSQXVXYS4HCO/


[ovirt-devel] [ OST Failure Report ] [ oVirt master (vdsm) ] [ 09-10-2018 ] [ 005_network_by_label ]

2018-10-09 Thread Daniel Belenky
Hi,

The following patch is suspected to be failing OST:
https://gerrit.ovirt.org/#/c/94754/2

Error snippet from vdsm log:

2018-10-09 08:51:50,799-0400 ERROR (Thread-7) [root] Shutdown by QEMU
Guest Agent failed (vm:5269)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 5260,
in qemuGuestAgentShutdown
self._dom.shutdownFlags(libvirt.VIR_DOMAIN_SHUTDOWN_GUEST_AGENT)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 100, in f
ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py",
line 131, in wrapper
ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py",
line 94, in wrapper
return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2482, in
shutdownFlags
if ret == -1: raise libvirtError ('virDomainShutdownFlags()
failed', dom=self)
libvirtError: Guest agent is not responding: QEMU guest agent is not connected
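
For context, a minimal sketch of how a caller could handle this libvirt
error and fall back to an ACPI shutdown (an illustration only, not vdsm's
actual code; it assumes the libvirt-python API and that the error above
maps to VIR_ERR_AGENT_UNRESPONSIVE):

    # Illustration only -- not vdsm's code.
    import libvirt

    def shutdown_vm(dom):
        try:
            # Ask the QEMU guest agent to shut the guest down cleanly.
            dom.shutdownFlags(libvirt.VIR_DOMAIN_SHUTDOWN_GUEST_AGENT)
        except libvirt.libvirtError as e:
            if e.get_error_code() != libvirt.VIR_ERR_AGENT_UNRESPONSIVE:
                raise
            # Agent not connected (the error seen above) -- fall back to
            # an ACPI power-button shutdown.
            dom.shutdownFlags(libvirt.VIR_DOMAIN_SHUTDOWN_ACPI_POWER_BTN)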


-- 

DANIEL BELENKY
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/7GW7VH5SDE53NCKPKSYA6BTDQSHZVIDR/


[ovirt-devel] Re: Missing rpm for ovirt-release-master

2018-07-19 Thread Daniel Belenky
Hi,

This issue was fixed.

On Thu, Jul 19, 2018 at 4:41 PM, Sandro Bonazzola 
wrote:

>
>
> 2018-07-19 13:37 GMT+02:00 Greg Sheremeta :
>
>> Indeed odd. cc @Sandro
>>
>
> Already reported to infra; it looks like there was trouble with the last
> nightly publish event.
>
>
>
>>
>> You could try this one:
>> https://plain.resources.ovirt.org/repos/ovirt/tested/master/rpm/el7/noarch/ovirt-release-master-4.3.0-0.1.master.2018071956.git356809a.el7.noarch.rpm
>>
>> On Thu, Jul 19, 2018 at 7:30 AM Kaustav Majumder 
>> wrote:
>>
>>>  Hi,
>>> I am trying to set up hosts with ovirt-engine running on my local machine.
>>> This repo (http://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm)
>>> needs to be enabled on the hosts, but currently I don't see it in the
>>> resources list.
>>> --
>>>
>>> KAUSTAV MAJUMDER
>>>
>>> ASSOCIATE SOFTWARE ENGINEER
>>>
>>> Red Hat India PVT LTD. <https://www.redhat.com/>
>>>
>>> kmajum...@redhat.comM: 08981884037 IM: IRC: kmajumder
>>> <https://red.ht/sig>
>>> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
>>> @redhatway <https://twitter.com/redhatway>   @redhatinc
>>> <https://instagram.com/redhatinc>   @redhatsnaps
>>> <https://snapchat.com/add/redhatsnaps>
>>
>>
>> --
>>
>> GREG SHEREMETA
>>
>> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>>
>> Red Hat NA
>>
>> <https://www.redhat.com/>
>>
>> gsher...@redhat.comIRC: gshereme
>> <https://red.ht/sig>
>>
>
>
>
> --
>
> SANDRO BONAZZOLA
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA <https://www.redhat.com/>
>
> sbona...@redhat.com
> <https://red.ht/sig>
>
>
>


-- 

DANIEL BELENKY

RHV DEVOPS
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/25L5ZAW2OOVJ2BN7AQUSQXUMXTMKH3BP/


[ovirt-devel] Re: Build fail stopping docker containers

2018-07-15 Thread Daniel Belenky
Hi Nir,

A fix is under review https://gerrit.ovirt.org/c/93021/
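
For reference, the kind of handling needed here looks roughly like the
following (just a sketch, not the actual patch under review; it assumes the
docker-py client API that appears in the traceback below):

    # Sketch only -- treat a missing container as already cleaned up.
    from docker import errors

    def _remove_container(client, container):
        try:
            client.stop(container)
            client.remove_container(container)
        except errors.NotFound:
            # The container is already gone -- expected during cleanup.
            pass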

On Sun, Jul 15, 2018 at 7:05 AM Daniel Belenky  wrote:

> Thanks for reporting Nir!
>
> I'm looking into it.
>
> On Fri, Jul 13, 2018 at 12:53 PM Dafna Ron  wrote:
>
>> Thanks for reporting Nir. I opened a Jira so we can investigate:
>> https://ovirt-jira.atlassian.net/browse/OVIRT-2314
>>
>>
>> On Thu, Jul 12, 2018 at 5:48 PM, Nir Soffer  wrote:
>>
>>> Seen this failure today:
>>>
>>> *16:16:05* Stopping and removing all running containers*16:16:05* Stopping 
>>> and removing name=unnamed, 
>>> id=b1d61f20b7fd4746ed83f308ab292b1d53a02de2271965268890b35c1344da04*16:16:40*
>>>  Stopping and removing name=unnamed, 
>>> id=22a8ace28d96e469bfc8177ee6afb52d6ca7b4578c27475c92862465253caa2b*16:17:16*
>>>  Stopping and removing name=unnamed, 
>>> id=5bab561632b6c179446df56dd3aadf10402989047564202f86c6b3989551820c*16:17:28*
>>>  Stopping and removing name=unnamed, 
>>> id=4c04d87ec8b55a774c9e6558a814b944d665b4c0984d6677972285209c5eea7c*16:17:41*
>>>  Stopping and removing name=unnamed, 
>>> id=5b40e3237c00cb56897b75fbac500d22b81e53894683ff1a872e4f9e4a48fc1d*16:17:41*
>>>  Traceback (most recent call last):*16:17:41*   File 
>>> "/home/jenkins/workspace/ovirt-imageio_4.2_check-patch-el7-x86_64/jenkins/scripts/docker_cleanup.py",
>>>  line 290, in *16:17:41* main()*16:17:41*   File 
>>> "/home/jenkins/workspace/ovirt-imageio_4.2_check-patch-el7-x86_64/jenkins/scripts/docker_cleanup.py",
>>>  line 34, in main*16:17:41* 
>>> stop_all_running_containers(client)*16:17:41*   File 
>>> "/home/jenkins/workspace/ovirt-imageio_4.2_check-patch-el7-x86_64/jenkins/scripts/docker_cleanup.py",
>>>  line 125, in stop_all_running_containers*16:17:41* 
>>> _remove_container(client, container)*16:17:41*   File 
>>> "/home/jenkins/workspace/ovirt-imageio_4.2_check-patch-el7-x86_64/jenkins/scripts/docker_cleanup.py",
>>>  line 152, in _remove_container*16:17:41* 
>>> client.stop(container)*16:17:41*   File 
>>> "/usr/lib/python2.7/site-packages/docker/utils/decorators.py", line 21, in 
>>> wrapped*16:17:41* return f(self, resource_id, *args, 
>>> **kwargs)*16:17:41*   File 
>>> "/usr/lib/python2.7/site-packages/docker/api/container.py", line 403, in 
>>> stop*16:17:42* self._raise_for_status(res)*16:17:42*   File 
>>> "/usr/lib/python2.7/site-packages/docker/client.py", line 173, in 
>>> _raise_for_status*16:17:42* raise errors.NotFound(e, response, 
>>> explanation=explanation)*16:17:42* docker.errors.NotFound: 404 Client 
>>> Error: Not Found ("{"message":"No such container: 
>>> 5b40e3237c00cb56897b75fbac500d22b81e53894683ff1a872e4f9e4a48fc1d"}")
>>>
>>>
>>> Looks like we need to handle docker.errors.NotFound - it is an expected
>>> error.
>>>
>>>
>>> https://jenkins.ovirt.org/job/ovirt-imageio_4.2_check-patch-el7-x86_64/551/console
>>>
>>> Nir
>>>
>
>
> --
>
> DANIEL BELENKY
>
> RHV DEVOPS
>


-- 

DANIEL BELENKY

RHV DEVOPS
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/N7ZSXJZH3WDLBXMSONHJ3JVYGRRO4FJR/


[ovirt-devel] Re: Build fail stopping docker containers

2018-07-14 Thread Daniel Belenky
Thanks for reporting Nir!

I'm looking into it.

On Fri, Jul 13, 2018 at 12:53 PM Dafna Ron  wrote:

> Thanks for reporting Nir. I opened a Jira so we can investigate:
> https://ovirt-jira.atlassian.net/browse/OVIRT-2314
>
>
> On Thu, Jul 12, 2018 at 5:48 PM, Nir Soffer  wrote:
>
>> Seen this failure today:
>>
>> *16:16:05* Stopping and removing all running containers*16:16:05* Stopping 
>> and removing name=unnamed, 
>> id=b1d61f20b7fd4746ed83f308ab292b1d53a02de2271965268890b35c1344da04*16:16:40*
>>  Stopping and removing name=unnamed, 
>> id=22a8ace28d96e469bfc8177ee6afb52d6ca7b4578c27475c92862465253caa2b*16:17:16*
>>  Stopping and removing name=unnamed, 
>> id=5bab561632b6c179446df56dd3aadf10402989047564202f86c6b3989551820c*16:17:28*
>>  Stopping and removing name=unnamed, 
>> id=4c04d87ec8b55a774c9e6558a814b944d665b4c0984d6677972285209c5eea7c*16:17:41*
>>  Stopping and removing name=unnamed, 
>> id=5b40e3237c00cb56897b75fbac500d22b81e53894683ff1a872e4f9e4a48fc1d*16:17:41*
>>  Traceback (most recent call last):*16:17:41*   File 
>> "/home/jenkins/workspace/ovirt-imageio_4.2_check-patch-el7-x86_64/jenkins/scripts/docker_cleanup.py",
>>  line 290, in *16:17:41* main()*16:17:41*   File 
>> "/home/jenkins/workspace/ovirt-imageio_4.2_check-patch-el7-x86_64/jenkins/scripts/docker_cleanup.py",
>>  line 34, in main*16:17:41* 
>> stop_all_running_containers(client)*16:17:41*   File 
>> "/home/jenkins/workspace/ovirt-imageio_4.2_check-patch-el7-x86_64/jenkins/scripts/docker_cleanup.py",
>>  line 125, in stop_all_running_containers*16:17:41* 
>> _remove_container(client, container)*16:17:41*   File 
>> "/home/jenkins/workspace/ovirt-imageio_4.2_check-patch-el7-x86_64/jenkins/scripts/docker_cleanup.py",
>>  line 152, in _remove_container*16:17:41* 
>> client.stop(container)*16:17:41*   File 
>> "/usr/lib/python2.7/site-packages/docker/utils/decorators.py", line 21, in 
>> wrapped*16:17:41* return f(self, resource_id, *args, **kwargs)*16:17:41* 
>>   File "/usr/lib/python2.7/site-packages/docker/api/container.py", line 403, 
>> in stop*16:17:42* self._raise_for_status(res)*16:17:42*   File 
>> "/usr/lib/python2.7/site-packages/docker/client.py", line 173, in 
>> _raise_for_status*16:17:42* raise errors.NotFound(e, response, 
>> explanation=explanation)*16:17:42* docker.errors.NotFound: 404 Client Error: 
>> Not Found ("{"message":"No such container: 
>> 5b40e3237c00cb56897b75fbac500d22b81e53894683ff1a872e4f9e4a48fc1d"}")
>>
>>
>> Looks like we need to handle docker.errors.NotFound - it is an expected
>> error.
>>
>>
>> https://jenkins.ovirt.org/job/ovirt-imageio_4.2_check-patch-el7-x86_64/551/console
>>
>> Nir
>>


-- 

DANIEL BELENKY

RHV DEVOPS
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/LG5FBNASP6EI2ATLDM2KV2EUFB5XFISU/


[ovirt-devel] Re: [VDSM] Travis builds still fail on .coverage rename

2018-07-07 Thread Daniel Belenky
On Thu, Jul 5, 2018 at 17:56 Nir Soffer  wrote:

> On Thu, Jul 5, 2018 at 5:43 PM Dan Kenigsberg  wrote:
>
>> On Thu, Jul 5, 2018 at 2:52 AM, Nir Soffer  wrote:
>> > On Wed, Jul 4, 2018 at 1:00 PM Dan Kenigsberg 
>> wrote:
>> >>
>> >> On Wed, Jul 4, 2018 at 12:48 PM, Nir Soffer 
>> wrote:
>> >> > Dan, travis build still fail when renaming coverage file even after
>> >> > your last patch.
>> >> >
>> >> >
>> >> >
>> >> >
>> ...SS.SS.SS..S.SSSS.SSS...SSS...S.S.SSSS..S.SS..
>> >> >
>> --
>> >> > Ran 1267 tests in 99.239s
>> >> > OK (SKIP=63)
>> >> > [ -n "$NOSE_WITH_COVERAGE" ] && mv .coverage .coverage-nose-py2
>> >> > make[1]: *** [check] Error 1
>> >> > make[1]: Leaving directory `/vdsm/tests'
>> >> > ERROR: InvocationError: '/usr/bin/make -C tests check'
>> >> >
>> >> > https://travis-ci.org/oVirt/vdsm/jobs/399932012
>> >> >
>> >> > Do you have any idea what is wrong there?
>> >> >
>> >> > Why we don't have any error message from the failed command?
>> >>
>> >> No idea, nothing pops to mind.
>> >> We can revert to the sillier [ -f .coverage ] condition instead of
>> >> understanding (yeah, this feels dirty)
>> >
>> >
>> > Thanks, your patch (https://gerrit.ovirt.org/#/c/92813/) fixed this
>> > failure.
>> >
>> > Now we have failures for the pywatch_test, and some network
>> > tests. Can someone from network look at this?
>> > https://travis-ci.org/nirs/vdsm/builds/400204807
>>
>> https://travis-ci.org/nirs/vdsm/jobs/400204808 shows
>>
>>   ConfigNetworkError: (21, 'Executing commands failed:
>> ovs-vsctl: cannot create a bridge named vdsmbr_test because a bridge
>> named vdsmbr_test already exists')
>>
>> which I thought was limited to dirty ovirt-ci jenkins slaves. Any idea
>> why it shows here?
>>
>
> Maybe one failed test leaves a dirty host for the next test?
>
>
>> py-watch seems to be failing due to missing gdb on the travis image
>
>
>> cmdutils.py151 DEBUG./py-watch 0.1 sleep 10 (cwd None)
>> cmdutils.py159 DEBUGFAILED:  = 'Traceback
>> (most recent call last):\n  File "./py-watch", line 60, in \n
>>   dump_trace(watched_proc)\n  File "./py-watch", line 32, in
>> dump_trace\n\'thread apply all py-bt\'])\n  File
>> "/usr/lib64/python2.7/site-packages/subprocess32.py", line 575, in
>> call\np = Popen(*popenargs, **kwargs)\n  File
>> "/usr/lib64/python2.7/site-packages/subprocess32.py", line 822, in
>> __init__\nrestore_signals, start_new_session)\n  File
>> "/usr/lib64/python2.7/site-packages/subprocess32.py", line 1567, in
>> _execute_child\nraise child_exception_type(errno_num,
>> err_msg)\nOSError: [Errno 2] No such file or directory: \'gdb\'\n';
>>  = 1
>>
>
> Cool, easy fix.
>
>
>> Nir, could you remind me what "ERROR: InterpreterNotFound:
>> python3.6" is and how we can avoid it? It keeps distracting me while
>> debugging test failures.
>>
>
> We can avoid it in travis using env matrix.
>
> Currently we run "make check" which run all the the tox envs
> (e.g. storage-py27,storage-py36) regardless of the build type. This is good
> for manual usage when you don't know which python version is available
> on a developer machine. For example if I have python 3.7 installed, maybe
> I like to test.
>
> We can change this so we will test only the *-py27 envs on CentOS, and both
> *-py27 and *-py36 on Fedora.
>
> We can do the same in oVirt CI, but it will be harder: we don't have a
> declarative way to configure this.
>

This behavior c

[ovirt-devel] Re: [VDSM] Tests pass, build fail with "Skipping post build task 2 - job status is worse than unstable : FAILURE"

2018-07-04 Thread Daniel Belenky
Hi Nir,

It's a bug we fixed last night. Here's the ticket:
https://ovirt-jira.atlassian.net/browse/OVIRT-2285.

On Thu, Jul 5, 2018 at 12:59 AM Nir Soffer  wrote:

> This is the second time this week. Can we fix the CI to succeed if
> check-patch.sh
> exited with 0?
>
> All the tests passed:
>
> *16:18:38*   tests: commands succeeded*16:18:38*   storage-py27: commands 
> succeeded*16:18:38*   storage-py36: commands succeeded*16:18:38*   lib-py27: 
> commands succeeded*16:18:38*   lib-py36: commands succeeded*16:18:38*   
> network-py27: commands succeeded*16:18:38*   network-py36: commands 
> succeeded*16:18:38*   virt-py27: commands succeeded*16:18:38*   virt-py36: 
> commands succeeded*16:18:38*   congratulations :)
>
>
> But the build failed with:
>
> *16:10:45* Skipping post build task 2 - job status is worse than unstable : 
> FAILURE
>
>
> Failed on both el7 and fc28:
>
> https://jenkins.ovirt.org/job/vdsm_master_check-patch-el7-x86_64/24267/console
>
> https://jenkins.ovirt.org/job/vdsm_master_check-patch-fc28-x86_64/292/console
>
> I merged the patch anyway.
>
> Nir


-- 

DANIEL BELENKY

RHV DEVOPS
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/QGC2BR57AEQOW6ZJTETFSJUR6BUOLUPN/


[ovirt-devel] Re: engine builds failing (Invalid Git ref given: jenkins-ovirt-engine_master_check-patch...)

2018-07-04 Thread Daniel Belenky
The credit for the fix goes to @bkorren, I've just helped to debug.

Happy to hear the builds work :)
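
For anyone curious about the "Invalid Git ref given" error quoted below:
pusher.py rejected the build-specific ref passed via --unless-hash. As a
generic illustration only (not the actual pusher.py code), one way a script
might check that a Git ref exists before using it:

    # Illustration only -- check whether a Git ref exists in a repo.
    import subprocess

    def ref_exists(ref, repo='.'):
        try:
            # Exits non-zero (and stays quiet) if the ref is unknown.
            subprocess.check_output(
                ['git', 'rev-parse', '--verify', '--quiet', ref],
                cwd=repo,
            )
            return True
        except subprocess.CalledProcessError:
            return False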

On Wed, Jul 4, 2018 at 22:45 Greg Sheremeta  wrote:

> running:
>
> https://jenkins.ovirt.org/job/ovirt-engine_master_check-patch-fc28-x86_64/824/
>
>
> On Wed, Jul 4, 2018 at 3:42 PM Daniel Belenky  wrote:
>
>> Greg, can you retrigger your build? Everything should be working now.
>>
>> On Wed, Jul 4, 2018 at 21:10 Daniel Belenky  wrote:
>>
>>> Hi Greg,
>>>
>>> We are aware of this issue and already know the root cause.
>>> We're looking for the best solution right now. I'll update ASAP.
>>>
>>> Thanks,
>>>
>>> On Wed, Jul 4, 2018 at 7:21 PM Greg Sheremeta 
>>> wrote:
>>>
>>>> build-artifacts-on-demand did it too.
>>>> """
>>>> 16:13:18 +
>>>> /home/jenkins/workspace/ovirt-engine_master_build-artifacts-on-demand-el7-x86_64/jenkins/scripts/pusher.py
>>>> --log=/home/jenkins/workspace/ovirt-engine_master_build-artifacts-on-demand-el7-x86_64/exported-artifacts/pusher_logs/push_ovirt-engine.log
>>>> push --if-not-exists
>>>> --unless-hash=jenkins-ovirt-engine_master_build-artifacts-on-demand-el7-x86_64-983
>>>> master
>>>> *16:13:18 Invalid Git ref given:
>>>> 'jenkins-ovirt-engine_master_build-artifacts-on-demand-el7-x86_64-983'*
>>>> *16:13:18 Build step 'Execute shell' marked build as failure*
>>>> """
>>>>
>>>> On Wed, Jul 4, 2018 at 12:13 PM Greg Sheremeta 
>>>> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> It appears all engine check-patch builds are failing.
>>>>>
>>>>>
>>>>> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-engine_master_check-patch-el7-x86_64/activity
>>>>>
>>>>> They all say
>>>>> """
>>>>> 15:40:10 ##
>>>>> 15:40:10 ## FINISHED SUCCESSFULLY
>>>>> 15:40:10 ##
>>>>> 15:40:10 Collecting mock logs
>>>>> 15:40:10 renamed './mock_logs.sWqMKYDi/populate_mock' ->
>>>>> 'exported-artifacts/mock_logs/populate_mock'
>>>>> 15:40:10 renamed './mock_logs.sWqMKYDi/script' ->
>>>>> 'exported-artifacts/mock_logs/script'
>>>>> 15:40:10 renamed './mock_logs.sWqMKYDi/init' ->
>>>>> 'exported-artifacts/mock_logs/init'
>>>>> 15:40:10 ##
>>>>> 15:40:10 [ovirt-engine_master_check-patch-fc28-x86_64] $ /bin/bash -xe
>>>>> /tmp/jenkins7563971716314834364.sh
>>>>> 15:40:10 +
>>>>> WORKSPACE=/home/jenkins/workspace/ovirt-engine_master_check-patch-fc28-x86_64
>>>>> 15:40:10 +
>>>>> LOGDIR=/home/jenkins/workspace/ovirt-engine_master_check-patch-fc28-x86_64/exported-artifacts/pusher_logs
>>>>> 15:40:10 + mkdir -p
>>>>> /home/jenkins/workspace/ovirt-engine_master_check-patch-fc28-x86_64/exported-artifacts/pusher_logs
>>>>> 15:40:10 + cd ./ovirt-engine
>>>>> 15:40:10 +
>>>>> /home/jenkins/workspace/ovirt-engine_master_check-patch-fc28-x86_64/jenkins/scripts/pusher.py
>>>>> --log=/home/jenkins/workspace/ovirt-engine_master_check-patch-fc28-x86_64/exported-artifacts/pusher_logs/push_ovirt-engine.log
>>>>> push --if-not-exists
>>>>> --unless-hash=jenkins-ovirt-engine_master_check-patch-fc28-x86_64-822 
>>>>> master
>>>>> *15:40:10 Invalid Git ref given:
>>>>> 'jenkins-ovirt-engine_master_check-patch-fc28-x86_64-822'*
>>>>> *15:40:10 Build step 'Execute shell' marked build as failure*
>>>>> 15:40:10 $ ssh-agent -k
>>>>> 15:40:10 unset SSH_AUTH_SOCK;
>>>>> 15:40:10 unset SSH_AGENT_PID;
>>>>> 15:40:10 echo Agent pid 10302 killed;
>>>>> 15:40:10 [ssh-agent] Stopped.
>>>>> 15:40:11 Performing Post build task...
>>>>> 15:40:11 Match found for :.* : True
>>>>> 15:40:11 Logical operation result is TRUE
>>>>> 15:40:11 Running script  : #!/bin/bash -ex
>>>>> 15:40:11 echo "shell-scripts/collect_artifacts.sh"
>>>>> 15:40:11 cat <>>>> 15:40:11
>>>>> ___________
>>>

[ovirt-devel] Re: engine builds failing (Invalid Git ref given: jenkins-ovirt-engine_master_check-patch...)

2018-07-04 Thread Daniel Belenky
Greg, can you retrigger your build? Everything should be working now.

On Wed, Jul 4, 2018 at 21:10 Daniel Belenky  wrote:

> Hi Greg,
>
> We are aware of this issue and already know the root cause.
> We're looking for the best solution right now. I'll update ASAP.
>
> Thanks,
>
> On Wed, Jul 4, 2018 at 7:21 PM Greg Sheremeta  wrote:
>
>> build-artifacts-on-demand did it too.
>> """
>> 16:13:18 +
>> /home/jenkins/workspace/ovirt-engine_master_build-artifacts-on-demand-el7-x86_64/jenkins/scripts/pusher.py
>> --log=/home/jenkins/workspace/ovirt-engine_master_build-artifacts-on-demand-el7-x86_64/exported-artifacts/pusher_logs/push_ovirt-engine.log
>> push --if-not-exists
>> --unless-hash=jenkins-ovirt-engine_master_build-artifacts-on-demand-el7-x86_64-983
>> master
>> *16:13:18 Invalid Git ref given:
>> 'jenkins-ovirt-engine_master_build-artifacts-on-demand-el7-x86_64-983'*
>> *16:13:18 Build step 'Execute shell' marked build as failure*
>> """
>>
>> On Wed, Jul 4, 2018 at 12:13 PM Greg Sheremeta 
>> wrote:
>>
>>> Hi,
>>>
>>> It appears all engine check-patch builds are failing.
>>>
>>>
>>> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-engine_master_check-patch-el7-x86_64/activity
>>>
>>> They all say
>>> """
>>> 15:40:10 ##
>>> 15:40:10 ## FINISHED SUCCESSFULLY
>>> 15:40:10 ##
>>> 15:40:10 Collecting mock logs
>>> 15:40:10 renamed './mock_logs.sWqMKYDi/populate_mock' ->
>>> 'exported-artifacts/mock_logs/populate_mock'
>>> 15:40:10 renamed './mock_logs.sWqMKYDi/script' ->
>>> 'exported-artifacts/mock_logs/script'
>>> 15:40:10 renamed './mock_logs.sWqMKYDi/init' ->
>>> 'exported-artifacts/mock_logs/init'
>>> 15:40:10 ##
>>> 15:40:10 [ovirt-engine_master_check-patch-fc28-x86_64] $ /bin/bash -xe
>>> /tmp/jenkins7563971716314834364.sh
>>> 15:40:10 +
>>> WORKSPACE=/home/jenkins/workspace/ovirt-engine_master_check-patch-fc28-x86_64
>>> 15:40:10 +
>>> LOGDIR=/home/jenkins/workspace/ovirt-engine_master_check-patch-fc28-x86_64/exported-artifacts/pusher_logs
>>> 15:40:10 + mkdir -p
>>> /home/jenkins/workspace/ovirt-engine_master_check-patch-fc28-x86_64/exported-artifacts/pusher_logs
>>> 15:40:10 + cd ./ovirt-engine
>>> 15:40:10 +
>>> /home/jenkins/workspace/ovirt-engine_master_check-patch-fc28-x86_64/jenkins/scripts/pusher.py
>>> --log=/home/jenkins/workspace/ovirt-engine_master_check-patch-fc28-x86_64/exported-artifacts/pusher_logs/push_ovirt-engine.log
>>> push --if-not-exists
>>> --unless-hash=jenkins-ovirt-engine_master_check-patch-fc28-x86_64-822 master
>>> *15:40:10 Invalid Git ref given:
>>> 'jenkins-ovirt-engine_master_check-patch-fc28-x86_64-822'*
>>> *15:40:10 Build step 'Execute shell' marked build as failure*
>>> 15:40:10 $ ssh-agent -k
>>> 15:40:10 unset SSH_AUTH_SOCK;
>>> 15:40:10 unset SSH_AGENT_PID;
>>> 15:40:10 echo Agent pid 10302 killed;
>>> 15:40:10 [ssh-agent] Stopped.
>>> 15:40:11 Performing Post build task...
>>> 15:40:11 Match found for :.* : True
>>> 15:40:11 Logical operation result is TRUE
>>> 15:40:11 Running script  : #!/bin/bash -ex
>>> 15:40:11 echo "shell-scripts/collect_artifacts.sh"
>>> 15:40:11 cat <>> 15:40:11
>>> ___
>>> 15:40:11
>>> ###
>>> 15:40:11 #
>>>#
>>> 15:40:11 # ARTIFACT COLLECTION
>>>#
>>> 15:40:11 #
>>>#
>>> 15:40:11
>>> ###################
>>>
>>> """
>>>
>>>
>>> --
>>>
>>> GREG SHEREMETA
>>>
>>> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>>>
>>> Red Hat NA
>>>
>>> <https://www.redhat.com/>
>>>
>>> gsher...@redhat.comIRC: gshereme
>>> <https://red.ht/sig>
>>>
>>
>>
>
>
> --
>
> DANIEL BELENKY
>
> RHV DEVOPS
>
-- 
Daniel Belenky
RHV DevOps
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/ZKE3S6SW2K5ANTGBH76PQXBNO6RD22S3/


[ovirt-devel] Re: engine builds failing (Invalid Git ref given: jenkins-ovirt-engine_master_check-patch...)

2018-07-04 Thread Daniel Belenky
Hi Greg,

We are aware of this issue and already know the root cause.
We're looking for the best solution right now. I'll update ASAP.

Thanks,

On Wed, Jul 4, 2018 at 7:21 PM Greg Sheremeta  wrote:

> build-artifacts-on-demand did it too.
> """
> 16:13:18 +
> /home/jenkins/workspace/ovirt-engine_master_build-artifacts-on-demand-el7-x86_64/jenkins/scripts/pusher.py
> --log=/home/jenkins/workspace/ovirt-engine_master_build-artifacts-on-demand-el7-x86_64/exported-artifacts/pusher_logs/push_ovirt-engine.log
> push --if-not-exists
> --unless-hash=jenkins-ovirt-engine_master_build-artifacts-on-demand-el7-x86_64-983
> master
> *16:13:18 Invalid Git ref given:
> 'jenkins-ovirt-engine_master_build-artifacts-on-demand-el7-x86_64-983'*
> *16:13:18 Build step 'Execute shell' marked build as failure*
> """
>
> On Wed, Jul 4, 2018 at 12:13 PM Greg Sheremeta 
> wrote:
>
>> Hi,
>>
>> It appears all engine check-patch builds are failing.
>>
>>
>> https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-engine_master_check-patch-el7-x86_64/activity
>>
>> They all say
>> """
>> 15:40:10 ##
>> 15:40:10 ## FINISHED SUCCESSFULLY
>> 15:40:10 ##
>> 15:40:10 Collecting mock logs
>> 15:40:10 renamed './mock_logs.sWqMKYDi/populate_mock' ->
>> 'exported-artifacts/mock_logs/populate_mock'
>> 15:40:10 renamed './mock_logs.sWqMKYDi/script' ->
>> 'exported-artifacts/mock_logs/script'
>> 15:40:10 renamed './mock_logs.sWqMKYDi/init' ->
>> 'exported-artifacts/mock_logs/init'
>> 15:40:10 ##
>> 15:40:10 [ovirt-engine_master_check-patch-fc28-x86_64] $ /bin/bash -xe
>> /tmp/jenkins7563971716314834364.sh
>> 15:40:10 +
>> WORKSPACE=/home/jenkins/workspace/ovirt-engine_master_check-patch-fc28-x86_64
>> 15:40:10 +
>> LOGDIR=/home/jenkins/workspace/ovirt-engine_master_check-patch-fc28-x86_64/exported-artifacts/pusher_logs
>> 15:40:10 + mkdir -p
>> /home/jenkins/workspace/ovirt-engine_master_check-patch-fc28-x86_64/exported-artifacts/pusher_logs
>> 15:40:10 + cd ./ovirt-engine
>> 15:40:10 +
>> /home/jenkins/workspace/ovirt-engine_master_check-patch-fc28-x86_64/jenkins/scripts/pusher.py
>> --log=/home/jenkins/workspace/ovirt-engine_master_check-patch-fc28-x86_64/exported-artifacts/pusher_logs/push_ovirt-engine.log
>> push --if-not-exists
>> --unless-hash=jenkins-ovirt-engine_master_check-patch-fc28-x86_64-822 master
>> *15:40:10 Invalid Git ref given:
>> 'jenkins-ovirt-engine_master_check-patch-fc28-x86_64-822'*
>> *15:40:10 Build step 'Execute shell' marked build as failure*
>> 15:40:10 $ ssh-agent -k
>> 15:40:10 unset SSH_AUTH_SOCK;
>> 15:40:10 unset SSH_AGENT_PID;
>> 15:40:10 echo Agent pid 10302 killed;
>> 15:40:10 [ssh-agent] Stopped.
>> 15:40:11 Performing Post build task...
>> 15:40:11 Match found for :.* : True
>> 15:40:11 Logical operation result is TRUE
>> 15:40:11 Running script  : #!/bin/bash -ex
>> 15:40:11 echo "shell-scripts/collect_artifacts.sh"
>> 15:40:11 cat <> 15:40:11
>> ___
>> 15:40:11
>> ###
>> 15:40:11 #
>>  #
>> 15:40:11 # ARTIFACT COLLECTION
>>  #
>> 15:40:11 #
>>  #
>> 15:40:11
>> ###
>>
>> """
>>
>>
>> --
>>
>> GREG SHEREMETA
>>
>> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>>
>> Red Hat NA
>>
>> <https://www.redhat.com/>
>>
>> gsher...@redhat.comIRC: gshereme
>> <https://red.ht/sig>
>>
>
>


-- 

DANIEL BELENKY

RHV DEVOPS
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/C4GCFXDU7NMHFZSCL4QXHJ422EDRDAXD/


[ovirt-devel] [Announcement] Introducing Standard CI build summary

2018-05-30 Thread Daniel Belenky
Hi,

On behalf of oVirt's CI team, I'm happy to announce the STDCI build summary.
We've used PatternFly [1] to ensure a clean, simple and elegant view of the
information for your builds.
The summary is available for STDCI V2 [2] projects and is located under
*build-artifacts/ci_build_summary.html*


*[image: location.png]*


STDCI build summary is a single web page, generated dynamically for every
build on STDCI.
It visualizes all CI threads that ran for that build and features a set of
quick links.

The main view shows a list of all threads with an indicator of whether each
thread failed or passed.
The quick links to the right of each thread lead to the *log* and *artifacts*
of that thread only.
Note that *log* shows the output of your script only, to ease debugging.
[image: threads_list.png]

At the top right side, you will find a set of quick links for convenience:

[image: menus.png]

*Test results* (available if JUnit XML files were exported during the
build) leads to a summary of all JUnit test results that were collected
during the build.
*Test results analyzer* shows the history of test execution results in a
tabular format.
*Findbugs results* (available if a Findbugs report was generated and exported
during the build) shows a trend report for Findbugs.
*Full build log* shows the full log of all threads. This log is harder to
understand, as it includes output from the CI runtime environment as well as
from your scripts.

*Rebuild (for GitHub PR)/Retrigger (for Gerrit patch): *Run the build
again. The new build will report and vote to Gerrit/GitHub with the new
result.
*View PR/patch: *Links back to the PR on GitHub or patchset page in Gerrit.


[image: view_in_jenkins.png]
*View in Jenkins* will lead you back to the build view in Jenkins

[image: view_in_blueocean.png]
*View in Blue Ocean *will lead you to the build view in Jenkins Blue Ocean
view

STDCI Documentation
https://ovirt-infra-docs.readthedocs.io/en/latest/CI/Build_and_test_standards/index.html

For any questions, don't hesitate to contact the CI team at #rhev-integ or
#rhev-dev, or by mail at in...@ovirt.org.

Thanks,
Daniel.
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/XCRLR7XPCYBOMDT5MGBYHHI4FKO3427C/


[ovirt-devel] Re: [ANNOUNCE] Introducing STDCI V2

2018-05-13 Thread Daniel Belenky
On Fri, May 11, 2018 at 9:22 PM, Greg Sheremeta <gsher...@redhat.com> wrote:

> How do I map branches to distros, like, how do I add fc28 only for master?
>
> In other words, how do I replicate something like this v1 config?
>
> version:
>   - master:
>   branch: master
>   - 4.2:
>   branch: master
>   - 4.1:
>   branch: master
> distro:
>   - el7
>   - fc27
> exclude:
>   - { version: 4.1, distro: fc27 }
> arch: x86_64
>

Hi Greg,

Since STDCI reacts to changes in your repo (a PR/merge in GitHub or a new
patchset/update/submit in Gerrit), STDCI checks out the same branch the
change was made to after cloning your project.
Because stdci.yaml is located in your repository, you need to write a
different stdci.yaml for each branch, with that branch's specific
configuration.


-- 

DANIEL BELENKY

RHV DEVOPS
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ANNOUNCE] Introducing STDCI V2

2018-04-22 Thread Daniel Belenky
>
> Hey,
>
> I just got around to studying this.
>
> - Nice clear email!
> - Everything really makes sense.
> - Thank you for fixing the -excludes thing in the yaml. That was rough :)
> - The graph view in Blue Ocean is easy to see and understand.
> - "We now support “sub stages” which provide the ability to run multiple 
> different
> scripts in parallel" -- what kind of races should we watch out for? :) For
> example in OST, I think I'll have to adapt docker stuff to be aware that
> another set of containers could be running at the same time -- not positive
> though.
>

You shouldn't expect any races due to that change. Sub stages are there to
allow triggering more than one task/job on a single CI event, such as
check-patch when a patch is created/updated, or check-merged/build-artifacts
when a patch is merged. Sub stages run in parallel, but on *different slaves*.
With sub-stages you can, for example, run different scripts in parallel
and on different slaves to do different tasks, such as running unit tests in
parallel with docs generation and build verification.

>
> It looks like the substages replace change_resolver in OST. Can you go
> into that in more detail? How does this impact running run mock_runner
> locally? When I run it locally it doesn't appear to paralleilize like it
> does in jenkins / Blue Ocean.
>

That is true. In STDCI V1 we used to run change_resolver in check-patch to
check the commit and resolve the relevant changes. STDCI V2 has this
feature integrated into one of its core components, called usrc.py.
We haven't decided yet how/if we will integrate this tool into OST, or how
we will achieve the same behaviour when running OST locally with
mock_runner. For now, you can keep using the "old" check-patch.sh with
mock_runner, which will call change_resolver. I'd recommend sending a patch
and letting Jenkins do the checks for you. It will be faster in many cases
where you'd have to run several suites in parallel.

We'll send a proper announcement regarding the new (STDCI V2 based) jobs
for OST, including debugging instructions and how this change affects you
as an OST developer.

Thanks,
-

DANIEL BELENKY

RHV DEVOPS
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master (otopi) ] [ 13-03-2018 ] [ 002_bootstrap.verify_add_hosts + 002_bootstrap.add_hosts ]

2018-03-14 Thread Daniel Belenky
The last fix mentioned by Didi indeed fixed the problem.
I've re-enqueued the original patch to otopi. It was successfully tested
and deployed by the CI.

On Wed, Mar 14, 2018 at 8:30 AM, Yedidyah Bar David <d...@redhat.com> wrote:

> On Tue, Mar 13, 2018 at 8:14 PM, Daniel Belenky <dbele...@redhat.com>
> wrote:
> > Hi,
> >
> > I've discussed this issue with Didi and Yuval, and I understood that
> > rebuilding ovirt-host-deploy with new otopi should fix this.
> > The last commit in ovirt-host-deploy has been rebuilt and now is being
> > tested in CQ.
> > Since CQ has prevented the change in otopi from making its way to the
> > tested repo, this failure applies only to changes in the otopi project
> > and will not affect others.
> > So maybe we can wait for a proper fix (in case rebuilding
> ovirt-host-deploy
> > won't solve the issue) instead of reverting the change.
> >
> > On Tue, Mar 13, 2018 at 8:01 PM, Yaniv Kaul <yk...@redhat.com> wrote:
> >>
> >>
> >>
> >> On Mar 13, 2018 6:11 PM, "Dafna Ron" <d...@redhat.com> wrote:
> >>
> >> Hi,
> >>
> >> CQ reported failure on both basic and upgrade suites in the otopi project.
> >>
> >> Link and headline of suspected patches:
> >>
> >> core: Use python3 when possible- https://gerrit.ovirt.org/#/c/87276/
> >>
> >>
> >> Please revert.
> >> Y.
> >>
> >>
> >>
> >> Link to Job:
> >>
> >> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/6285/
> >>
> >> Link to all logs:
> >>
> >>
> >> http://jenkins.ovirt.org/job/ovirt-master_change-queue-
> tester/6285/artifacts/
> >>
> >> (Relevant) error snippet from the log:
> >>
> >> 
> >>
> >>
> >> at
> >> org.ovirt.engine.core.bll.hostdeploy.AddVdsCommand.
> lambda$executeCommand$2(AddVdsCommand.java:217)
> >> [bll.jar:]
> >> at
> >> org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$
> InternalWrapperRunnable.run(ThreadPoolUtil.java:96)
> >> [utils.jar:]
> >> at
> >> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> >> [rt.jar:1.8.0_161]
> >> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> >> [rt.jar:1.8.0_161]
> >> at
> >> java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1149)
> >> [rt.jar:1.8.0_161]
> >> at
> >> java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:624)
> >> [rt.jar:1.8.0_161]
> >> at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_161]
> >> at
> >> org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$
> ManagedThread.run(ManagedThreadFactoryImpl.java:250)
> >> [javax.enterprise.concurrent-1.0.jar:]
> >> at
> >> org.jboss.as.ee.concurrent.service.ElytronManagedThreadFactory$
> ElytronManagedThread.run(ElytronManagedThreadFactory.java:78)
> >>
> >> 2018-03-13 09:22:40,287-04 ERROR
> >> [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase] (VdsDeploy)
> [7e847b89]
> >> Error during deploy dialog
> >> 2018-03-13 09:22:40,288-04 DEBUG
> >> [org.ovirt.engine.core.aaa.filters.SsoRestApiAuthFilter] (default
> task-25)
> >> [] Entered SsoRestApiAuthFilter
> >> 2018-03-13 09:22:40,288-04 DEBUG
> >> [org.ovirt.engine.core.aaa.filters.SsoRestApiAuthFilter] (default
> task-25)
> >> [] SsoRestApiAuthFilter authenticating with sso
> >> 2018-03-13 09:22:40,288-04 DEBUG
> >> [org.ovirt.engine.core.aaa.filters.SsoRestApiAuthFilter] (default
> task-25)
> >> [] SsoRestApiAuthFilter authenticating using BEARER header
> >> 2018-03-13 09:22:40,290-04 DEBUG
> >> [org.ovirt.engine.core.aaa.filters.SsoRestApiAuthFilter] (default
> task-25)
> >> [] SsoRestApiAuthFilter successfully authenticated using BEARER header
> >> 2018-03-13 09:22:40,290-04 DEBUG
> >> [org.ovirt.engine.core.aaa.filters.SsoRestApiNegotiationFilter]
> (default
> >> task-25) [] Entered SsoRestApiNegotiationFilter
> >> 2018-03-13 09:22:40,292-04 DEBUG
> >> [org.ovirt.engine.core.aaa.filters.SsoRestApiNegotiationFilter]
> (default
> >> task-25) [] SsoRestApiNegotiationFilter Not performing Negotiate Auth
> >> 2018-03-13 09:22:40,297-04 ERROR
> >> [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase]
> >> (E

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master (otopi) ] [ 13-03-2018 ] [ 002_bootstrap.verify_add_hosts + 002_bootstrap.add_hosts ]

2018-03-13 Thread Daniel Belenky
o-basic-suite-master-host-1': Unexpected connection termination*
>



-- 

DANIEL BELENKY

RHV DEVOPS
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] [ OST Failure Report ] [ oVirt 4.2 ] [ 002_bootstrap.update_default_cluster ]

2018-03-11 Thread Daniel Belenky
Hi,

The following patch failed to pass OST: https://gerrit.ovirt.org/#/c/88738/2
Link to failed build: here
<http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1181/>
Link to all logs: here
<http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1181/artifact/exported-artifacts/basic-suit-4.2-el7/test_logs/basic-suite-4.2/post-002_bootstrap.py/>

Error snippet from engine.log
<http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1181/artifact/exported-artifacts/basic-suit-4.2-el7/test_logs/basic-suite-4.2/post-002_bootstrap.py/lago-basic-suite-4-2-engine/_var_log/ovirt-engine/engine.log/*view*/>
:

2018-03-11 06:10:58,521-04 ERROR
[org.ovirt.engine.api.restapi.resource.AbstractBackendResource]
(default task-14) [] Operation Failed: [Cannot edit Cluster. The
chosen CPU is not supported.]


Thanks,
-- 

DANIEL BELENKY

RHV DEVOPS
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] [ OST Failure Report ] [ oVirt 4.2 (ovirt-engine) ] [ 04-03-2018 ] [ 004_basic_sanity.disk_operations ]

2018-03-04 Thread Daniel Belenky
Hi,

The following test failed OST: 004_basic_sanity.disk_operations.

Link to suspected patch: https://gerrit.ovirt.org/c/88404/
Link to the failed job:
http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1019/
Link to all test logs:

   - engine
   
<http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1019/artifact/exported-artifacts/basic-suit-4.2-el7/test_logs/basic-suite-4.2/post-004_basic_sanity.py/lago-basic-suite-4-2-engine>
   - host 0
   
<http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1019/artifact/exported-artifacts/basic-suit-4.2-el7/test_logs/basic-suite-4.2/post-004_basic_sanity.py/lago-basic-suite-4-2-host-0/_var_log>
   - host 1
   
<http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1019/artifact/exported-artifacts/basic-suit-4.2-el7/test_logs/basic-suite-4.2/post-004_basic_sanity.py/lago-basic-suite-4-2-host-1/_var_log>

Error snippet from engine:

2018-03-04 09:50:14,823-05 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ForkJoinPool-1-worker-12) [] EVENT_ID: VM_DOWN_ERROR(119), VM vm2 is
down with error. Exit message: Lost connection with qemu process.


Error snippet from host:

Mar  4 09:56:27 lago-basic-suite-4-2-host-1 libvirtd: 2018-03-04
14:56:27.831+: 1189: error : qemuDomainAgentAvailable:6010 : Guest
agent is not responding: QEMU guest agent is not connected


Thanks,

-- 

DANIEL BELENKY

RHV DEVOPS
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [OST] Some tests failed?

2018-02-11 Thread Daniel Belenky
Hi Nir,

You can find the full test results under the "*Test Result*" link in the left
menu of your build: link
<http://jenkins.ovirt.org/job/ovirt-system-tests_manual/2157/testReport/>
All the system logs are under "*Build Artifacts*": link
<http://jenkins.ovirt.org/job/ovirt-system-tests_manual/2157/artifact/exported-artifacts/test_logs/basic-suite-4.2/post-002_bootstrap.py/>

On Mon, Feb 12, 2018 at 12:17 AM, Nir Soffer <nsof...@redhat.com> wrote:

> I got this error when running OST with engine 4.2 - is this
> a test error? environment error?
>
> Looks like some error message needs work :-)
>
> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/2157/console
>
> *00:10:32.263* @ Run test: 002_bootstrap.py: *00:10:32.281* nose.config: 
> INFO: Ignoring files matching ['^\\.', '^_', '^setup\\.py$']*00:10:32.301*   
> # add_dc: *00:10:53.839*   # add_dc: Success (in 0:00:21)*00:10:53.843*   # 
> add_cluster: *00:10:54.123* * Collect artifacts: *00:10:55.995* * 
> Collect artifacts: Success (in 0:00:01)*00:10:56.000*   # add_cluster: 
> Success (in 0:00:02)*00:10:56.003*   # Results located at 
> /dev/shm/ost/deployment-basic-suite-4.2/default/002_bootstrap.py.junit.xml*00:10:56.005*
>  @ Run test: 002_bootstrap.py: Success (in 0:00:23)*00:10:56.296* Error 
> occured, aborting*00:10:56.296* Traceback (most recent call 
> last):*00:10:56.296*   File 
> "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 362, in 
> do_run*00:10:56.297* 
> self.cli_plugins[args.ovirtverb].do_run(args)*00:10:56.298*   File 
> "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 184, in 
> do_run*00:10:56.299* self._do_run(**vars(args))*00:10:56.299*   File 
> "/usr/lib/python2.7/site-packages/lago/utils.py", line 505, in 
> wrapper*00:10:56.300* return func(*args, **kwargs)*00:10:56.300*   File 
> "/usr/lib/python2.7/site-packages/lago/utils.py", line 516, in 
> wrapper*00:10:56.301* return func(*args, prefix=prefix, 
> **kwargs)*00:10:56.301*   File 
> "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 99, in 
> do_ovirt_runtest*00:10:56.302* raise RuntimeError('Some tests 
> failed')*00:10:56.303* RuntimeError: Some tests failed
>
>
>


-- 

DANIEL BELENKY

RHV DEVOPS
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master (ovirt-engine) ] [ 07-02-2018 ] [ 004_basic_sanity.vm_run ]

2018-02-11 Thread Daniel Belenky
is that the host was filtered
>>>>>>>> from selection for vm.
>>>>>>>>
>>>>>>>> Please also notice other errors in the log relating to network sync
>>>>>>>> failing.
>>>>>>>>
>>>>>>>>
>>>>>>>> *Link and headline of suspected patches:
>>>>>>>> https://gerrit.ovirt.org/#/c/87116/5 
>>>>>>>> <https://gerrit.ovirt.org/#/c/87116/5>
>>>>>>>> - core: Add 4.3 support   *
>>>>>>>>
>>>>>>>
>>>>>>> Do we know if VDSM in this patch already contains
>>>>>>> https://gerrit.ovirt.org/87181 ? It's required to have both side
>>>>>>> fixed to properly support 4.3 cluster level
>>>>>>> ​
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> *Link to
>>>>>>>> Job:http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/5307/
>>>>>>>> <http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/5307/>Link
>>>>>>>> to all
>>>>>>>> logs:http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/5307/artifact/
>>>>>>>> <http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/5307/artifact/>(Relevant)
>>>>>>>> error snippet from the log: 2018-02-06 16:42:39,431-05 INFO
>>>>>>>> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] 
>>>>>>>> (default
>>>>>>>> task-5) [03cdc0e6-eb94-46a4-9882-77762b2ed787] FINISH,
>>>>>>>> IsVmDuringInitiatingVDSCommand, return: false, log id: 
>>>>>>>> 4d0244492018-02-06
>>>>>>>> 16:42:39,472-05 INFO  [org.ovirt.engine.core.bll.RunVmOnceCommand] 
>>>>>>>> (default
>>>>>>>> task-5) [03cdc0e6-eb94-46a4-9882-77762b2ed787] Running command:
>>>>>>>> RunVmOnceCommand internal: false. Entities affected :  ID:
>>>>>>>> ccfe9852-f2d6-4e18-8213-29d958d3a8ed Type: VMAction group RUN_VM with 
>>>>>>>> role
>>>>>>>> type USER,  ID: ccfe9852-f2d6-4e18-8213-29d958d3a8ed Type: VMAction 
>>>>>>>> group
>>>>>>>> EDIT_ADMIN_VM_PROPERTIES with role type ADMIN2018-02-06 16:42:39,486-05
>>>>>>>> INFO  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default
>>>>>>>> task-5) [03cdc0e6-eb94-46a4-9882-77762b2ed787] Candidate host
>>>>>>>> 'lago-upgrade-from-release-suite-master-host0'
>>>>>>>> ('6ec7721d-beae-4b7e-807f-7f3a794fb7c4') was filtered out by
>>>>>>>> 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level' (correlation id:
>>>>>>>> 03cdc0e6-eb94-46a4-9882-77762b2ed787)2018-02-06 16:42:39,486-05 ERROR
>>>>>>>> [org.ovirt.engine.core.bll.RunVmCommand] (default task-5)
>>>>>>>> [03cdc0e6-eb94-46a4-9882-77762b2ed787] Can't find VDS to run the VM
>>>>>>>> 'ccfe9852-f2d6-4e18-8213-29d958d3a8ed' on, so this VM will not be
>>>>>>>> run.2018-02-06 16:42:39,493-05 ERROR
>>>>>>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>>>>>>> (default task-5) [03cdc0e6-eb94-46a4-9882-77762b2ed787] EVENT_ID:
>>>>>>>> USER_FAILED_RUN_VM(54), Failed to run VM vm0 (User:
>>>>>>>> admin@internal-authz).2018-02-06 16:42:39,500-05 INFO
>>>>>>>> [org.ovirt.engine.core.bll.RunVmOnceCommand] (default task-5)
>>>>>>>> [03cdc0e6-eb94-46a4-9882-77762b2ed787] Lock freed to object
>>>>>>>> 'EngineLock:{exclusiveLocks='[ccfe9852-f2d6-4e18-8213-29d958d3a8ed=VM]',
>>>>>>>> sharedLocks=''}'2018-02-06 16:42:39,505-05 ERROR
>>>>>>>> [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] 
>>>>>>>> (default
>>>>>>>> task-5) [] Operation Failed: []2018-02-06 16:42:39,512-05 INFO
>>>>>>>> [org.ovirt.engine.core.bll.Pro
>>>>>>>> <http://org.ovirt.engine.core.bll.Pro>cessDownVmCommand]
>>>>>>>> (EE-ManagedThreadFactory-engine-Thread-17) [4fe81490] Running command:
>>>>>>>> ProcessDownVmCommand internal: true.(END)*
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Martin Perina
>>>>>>> Associate Manager, Software Engineering
>>>>>>> Red Hat Czech s.r.o.
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Martin Perina
>>>>> Associate Manager, Software Engineering
>>>>> Red Hat Czech s.r.o.
>>>>
>>>>
>>>
>>>
>>> --
>>>
>>> SANDRO BONAZZOLA
>>>
>>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>>>
>>> Red Hat EMEA <https://www.redhat.com/>
>>> <https://red.ht/sig>
>>> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
>>>
>>>
>>
>
>
>
> --
>
> Eyal edri
>
>
> MANAGER
>
> RHV DevOps
>
> EMEA VIRTUALIZATION R&D
>
>
> Red Hat EMEA <https://www.redhat.com/>
> <https://red.ht/sig> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
> phone: +972-9-7692018 <+972%209-769-2018>
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
>


-- 

DANIEL BELENKY

RHV DEVOPS
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [25-1-18] [ OST Failure Report] [oVirt Master (vdsm)] [post-002_bootstrap]

2018-01-25 Thread Daniel Belenky
Have you tried running OST with rpms from the suspected patch to reproduce?

On Thu, Jan 25, 2018 at 12:24 PM Edward Haas <eh...@redhat.com> wrote:

> We have two options, a revert or a fix:
> Revert: https://gerrit.ovirt.org/#/c/86789/
> Fix: https://gerrit.ovirt.org/#/c/86785/
>
> We are not sure about the fix because we cannot reproduce the problem
> manually.
>
>
> On Thu, Jan 25, 2018 at 10:45 AM, Eyal Edri <ee...@redhat.com> wrote:
>
>> Once you have RPMs, you can run the upgrade suite from the manual job.
>>
>> On Thu, Jan 25, 2018 at 10:43 AM, Edward Haas <eh...@redhat.com> wrote:
>>
>>> Can we test if this one fixes this problem?
>>> https://gerrit.ovirt.org/#/c/86781
>>>
>>> On Thu, Jan 25, 2018 at 10:00 AM, Eyal Edri <ee...@redhat.com> wrote:
>>>
>>>> Indeed, the patch looks relevant,
>>>> Dan, can we revert it or send a fix ASAP to avoid building up a large
>>>> queue?
>>>>
>>>> On Thu, Jan 25, 2018 at 9:29 AM, Daniel Belenky <dbele...@redhat.com>
>>>> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> We failed to setup host in OST upgrade from 4.1 to master suite.
>>>>> Please note that the upgrade suite installs 4.1 engine, then upgrades
>>>>> it to master and then tries to set up a host.
>>>>>
>>>>> *Links:*
>>>>>
>>>>>1. Link to failed job
>>>>>
>>>>> <http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/5093/artifact/exported-artifacts/upgrade-from-release-suit-master-el7/test_logs/upgrade-from-release-suite-master/post-002_bootstrap.py/>
>>>>>2. Suspected patch: Gerrit 86474/33
>>>>><https://gerrit.ovirt.org/#/c/86474/33>
>>>>>
>>>>> *Error snippet from engine.log (engine):*
>>>>>
>>>>> 2018-01-24 15:13:20,257-05 ERROR 
>>>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
>>>>> (VdsDeploy) [34609a2f] EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511), An 
>>>>> error has occurred during installation of Host 
>>>>> lago-upgrade-from-release-suite-master-host0: Failed to execute stage 
>>>>> 'Closing up': Failed to start service 'vdsmd'.
>>>>> 2018-01-24 15:13:20,301-05 INFO  
>>>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
>>>>> (VdsDeploy) [34609a2f] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509), Installing 
>>>>> Host lago-upgrade-from-release-suite-master-host0. Stage: Clean up.
>>>>> 2018-01-24 15:13:20,304-05 INFO  
>>>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
>>>>> (VdsDeploy) [34609a2f] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509), Installing 
>>>>> Host lago-upgrade-from-release-suite-master-host0. Stage: Pre-termination.
>>>>> 2018-01-24 15:13:20,332-05 INFO  
>>>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
>>>>> (VdsDeploy) [34609a2f] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509), Installing 
>>>>> Host lago-upgrade-from-release-suite-master-host0. Retrieving 
>>>>> installation logs to: 
>>>>> '/var/log/ovirt-engine/host-deploy/ovirt-host-deploy-20180124151320-lago-upgrade-from-release-suite-master-host0-34609a2f.log'.
>>>>> 2018-01-24 15:13:29,227-05 INFO  
>>>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
>>>>> (VdsDeploy) [34609a2f] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509), Installing 
>>>>> Host lago-upgrade-from-release-suite-master-host0. Stage: Termination.
>>>>> 2018-01-24 15:13:29,321-05 ERROR 
>>>>> [org.ovirt.engine.core.uutils.ssh.SSHDialog] 
>>>>> (EE-ManagedThreadFactory-engine-Thread-1) [34609a2f] SSH error running 
>>>>> command root@lago-upgrade-from-release-suite-master-host0:'umask 0077; 
>>>>> MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp -d -t ovirt-XX)"; trap 
>>>>> "chmod -R u+rwX \"${MYTMP}\" > /dev/null 2>&1; rm -fr \"${MYTMP}\" > 
>>>>> /dev/null 2>&1" 0; tar --warning=no-timestamp -C "${MYTMP}" -x &&  
>>>>> "${MYTMP}"/ovirt-host-deploy DIALOG/dialect=str:machine 
>>>>> DIALOG/customization=bool:True': IOException: Command returned failure 
>>>>> code 1 during SSH sess

[ovirt-devel] [25-1-18] [ OST Failure Report] [oVirt Master (vdsm)] [post-002_bootstrap]

2018-01-24 Thread Daniel Belenky
usr/lib/python2.7/site-packages/vdsm/network/link/bond/sysfs_options_mapper.py",
line 48, in dump_bonding_options
15:13:19 host0 vdsm-tool: jdump(_get_default_bonding_options(), f)
15:13:19 host0 vdsm-tool: File
"/usr/lib/python2.7/site-packages/vdsm/network/link/bond/sysfs_options_mapper.py",
line 60, in _get_default_bonding_options
15:13:19 host0 vdsm-tool: with _bond_device(bond_name):
15:13:19 host0 vdsm-tool: File "/usr/lib64/python2.7/contextlib.py",
line 17, in __enter__
15:13:19 host0 vdsm-tool: return self.gen.next()
15:13:19 host0 vdsm-tool: File
"/usr/lib/python2.7/site-packages/vdsm/network/link/bond/sysfs_options_mapper.py",
line 102, in _bond_device
15:13:19 host0 vdsm-tool: _unmanage_nm_device(bond_name)
15:13:19 host0 vdsm-tool: File
"/usr/lib/python2.7/site-packages/vdsm/network/link/bond/sysfs_options_mapper.py",
line 116, in _unmanage_nm_device
15:13:19 host0 vdsm-tool: dev.managed = False
15:13:19 host0 vdsm-tool: File
"/usr/lib/python2.7/site-packages/vdsm/network/nm/networkmanager.py",
line 90, in managed
15:13:19 host0 vdsm-tool: self._device.managed = value
15:13:19 host0 vdsm-tool: File
"/usr/lib/python2.7/site-packages/vdsm/network/nm/nmdbus/device.py",
line 81, in managed
15:13:19 host0 vdsm-tool: return self._set_property('Managed', value)
15:13:19 host0 vdsm-tool: File
"/usr/lib/python2.7/site-packages/vdsm/network/nm/nmdbus/device.py",
line 88, in _set_property
15:13:19 host0 vdsm-tool: self.IF_NAME, property_name, property_value)
15:13:19 host0 vdsm-tool: File
"/usr/lib64/python2.7/site-packages/dbus/proxies.py", line 70, in
__call__
15:13:19 host0 vdsm-tool: return self._proxy_method(*args, **keywords)
15:13:19 host0 vdsm-tool: File
"/usr/lib64/python2.7/site-packages/dbus/proxies.py", line 145, in
__call__
15:13:19 host0 vdsm-tool: **keywords)
15:13:19 host0 vdsm-tool: File
"/usr/lib64/python2.7/site-packages/dbus/connection.py", line 651, in
call_blocking
15:13:19 host0 vdsm-tool: message, timeout)
15:13:19 host0 vdsm-tool: DBusException:
org.freedesktop.DBus.Error.AccessDenied: Property "Managed" of
interface "org.freedesktop.NetworkManager.Device" is not settable


Thanks,
-- 

DANIEL BELENKY

RHV DEVOPS
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [VDSM] mom missing on fc27?

2017-12-02 Thread Daniel Belenky
6_64
>> >> > 19:30:55   - nothing provides mom >= 0.5.8 needed by
>> >> > vdsm-4.20.8-24.gita3fa2c46e.fc27.x86_64
>> >> > 19:30:55  Problem 14: package
>> >> > vdsm-hook-fakevmstats-4.20.8-24.gita3fa2c46e.fc27.noarch requires
>> vdsm,
>> >> > but
>> >> > none of the providers can be installed
>> >> > 19:30:55   - conflicting requests
>> >> > 19:30:55   - nothing provides mom >= 0.5.8 needed by
>> >> > vdsm-4.20.8-1.git383bc1031.fc27.x86_64
>> >> > 19:30:55   - nothing provides mom >= 0.5.8 needed by
>> >> > vdsm-4.20.8-15.git50bd65cf2.fc27.x86_64
>> >> > 19:30:55   - nothing provides mom >= 0.5.8 needed by
>> >> > vdsm-4.20.8-18.git28b0fffcd.fc27.x86_64
>> >> > 19:30:55   - nothing provides mom >= 0.5.8 needed by
>> >> > vdsm-4.20.8-3.gita9ee9c65f.fc27.x86_64
>> >> > 19:30:55   - nothing provides mom >= 0.5.8 needed by
>> >> > vdsm-4.20.8-4.gite1d056920.fc27.x86_64
>> >> > 19:30:55   - nothing provides mom >= 0.5.8 needed by
>> >> > vdsm-4.20.8-5.git4b7766c65.fc27.x86_64
>> >> > 19:30:55   - nothing provides python-argparse needed by
>> >> > vdsm-4.18.999-444.git0bb7717.fc27.x86_64
>> >> > 19:30:55   - nothing provides mom >= 0.5.8 needed by
>> >> > vdsm-4.20.8-24.gita3fa2c46e.fc27.x86_64
>> >> > 19:30:55  Problem 15: package
>> >> > vdsm-hook-fakesriov-4.20.8-24.gita3fa2c46e.fc27.x86_64 requires vdsm,
>> >> > but
>> >> > none of the providers can be installed
>> >> > 19:30:55   - conflicting requests
>> >> > 19:30:55   - nothing provides mom >= 0.5.8 needed by
>> >> > vdsm-4.20.8-1.git383bc1031.fc27.x86_64
>> >> > 19:30:55   - nothing provides mom >= 0.5.8 needed by
>> >> > vdsm-4.20.8-15.git50bd65cf2.fc27.x86_64
>> >> > 19:30:55   - nothing provides mom >= 0.5.8 needed by
>> >> > vdsm-4.20.8-18.git28b0fffcd.fc27.x86_64
>> >> > 19:30:55   - nothing provides mom >= 0.5.8 needed by
>> >> > vdsm-4.20.8-3.gita9ee9c65f.fc27.x86_64
>> >> > 19:30:55   - nothing provides mom >= 0.5.8 needed by
>> >> > vdsm-4.20.8-4.gite1d056920.fc27.x86_64
>> >> > 19:30:55   - nothing provides mom >= 0.5.8 needed by
>> >> > vdsm-4.20.8-5.git4b7766c65.fc27.x86_64
>> >> > 19:30:55   - nothing provides python-argparse needed by
>> >> > vdsm-4.18.999-444.git0bb7717.fc27.x86_64
>> >> > 19:30:55   - nothing provides mom >= 0.5.8 needed by
>> >> > vdsm-4.20.8-24.gita3fa2c46e.fc27.x86_64
>> >> > 19:30:55  Problem 16: package
>> >> > vdsm-hook-checkimages-4.20.8-24.gita3fa2c46e.fc27.noarch requires
>> vdsm,
>> >> > but
>> >> > none of the providers can be installed
>> >> > 19:30:55   - conflicting requests
>> >> > 19:30:55   - nothing provides mom >= 0.5.8 needed by
>> >> > vdsm-4.20.8-1.git383bc1031.fc27.x86_64
>> >> > 19:30:55   - nothing provides mom >= 0.5.8 needed by
>> >> > vdsm-4.20.8-15.git50bd65cf2.fc27.x86_64
>> >> > 19:30:55   - nothing provides mom >= 0.5.8 needed by
>> >> > vdsm-4.20.8-18.git28b0fffcd.fc27.x86_64
>> >> > 19:30:55   - nothing provides mom >= 0.5.8 needed by
>> >> > vdsm-4.20.8-3.gita9ee9c65f.fc27.x86_64
>> >> > 19:30:55   - nothing provides mom >= 0.5.8 needed by
>> >> > vdsm-4.20.8-4.gite1d056920.fc27.x86_64
>> >> > 19:30:55   - nothing provides mom >= 0.5.8 needed by
>> >> > vdsm-4.20.8-5.git4b7766c65.fc27.x86_64
>> >> > 19:30:55   - nothing provides python-argparse needed by
>> >> > vdsm-4.18.999-444.git0bb7717.fc27.x86_64
>> >> > 19:30:55   - nothing provides mom >= 0.5.8 needed by
>> >> > vdsm-4.20.8-24.gita3fa2c46e.fc27.x86_64
>> >> > 19:30:55  Problem 17: package
>> >> > vdsm-hook-allocate_net-4.20.8-24.gita3fa2c46e.fc27.noarch requires
>> vdsm,
>> >> > but
>> >> > none of the providers can be installed
>> >> > 19:30:55   - conflicting requests
>> >> > 19:30:55   - nothing provides mom >= 0.5.8 needed by
>> >> > vdsm-4.20.8-1.git383bc1031.fc27.x86_64
>> >> > 19:30:55   - nothing provides mom >= 0.5.8 needed by
>> >> > vdsm-4.20.8-15.git50bd65cf2.fc27.x86_64
>> >> > 19:30:55   - nothing provides mom >= 0.5.8 needed by
>> >> > vdsm-4.20.8-18.git28b0fffcd.fc27.x86_64
>> >> > 19:30:55   - nothing provides mom >= 0.5.8 needed by
>> >> > vdsm-4.20.8-3.gita9ee9c65f.fc27.x86_64
>> >> > 19:30:55   - nothing provides mom >= 0.5.8 needed by
>> >> > vdsm-4.20.8-4.gite1d056920.fc27.x86_64
>> >> > 19:30:55   - nothing provides mom >= 0.5.8 needed by
>> >> > vdsm-4.20.8-5.git4b7766c65.fc27.x86_64
>> >> > 19:30:55   - nothing provides python-argparse needed by
>> >> > vdsm-4.18.999-444.git0bb7717.fc27.x86_64
>> >> > 19:30:55   - nothing provides mom >= 0.5.8 needed by
>> >> > vdsm-4.20.8-24.gita3fa2c46e.fc27.x86_64
>> >> >
>> >> >
>> >> > ___
>> >> > Devel mailing list
>> >> > Devel@ovirt.org
>> >> > http://lists.ovirt.org/mailman/listinfo/devel
>>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel

-- 
Daniel Belenky
DevOps Engineer, RHV
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] failing check-patch on ovirt-engine

2017-11-13 Thread Daniel Belenky
Hey,

It's a lingering issue from [1], as Allon mentioned.
Our CI slaves generated stale mock caches during the outage of the snapshot repos,
and those caches caused your build to fail.
I've cleaned the cache - everything should work now.

[1] http://lists.ovirt.org/pipermail/devel/2017-November/031840.html
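
In case anyone hits this on a slave of their own: the cleanup amounts to
scrubbing the cached mock chroots. Roughly (the config name and paths below are
assumptions, not copied from our CI scripts):

# scrub all caches for one mock configuration (config name is an example)
sudo mock -r epel-7-x86_64 --scrub=all
# or simply drop every cached chroot under the default cache directory
sudo rm -rf /var/cache/mock/*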

Thanks,

On Mon, Nov 13, 2017 at 12:14 PM, Allon Mureinik <amure...@redhat.com>
wrote:

> Probably another instead of [1].
> Retriggering the job should probably clear the issue up.
>
> [1] http://lists.ovirt.org/pipermail/devel/2017-November/031840.html
>
> On Sun, Nov 12, 2017 at 5:53 PM, Greg Sheremeta <gsher...@redhat.com>
> wrote:
>
>> Hi,
>>
>> I'm seeing a weird issue in ovirt-engine check-patch.
>>
>> There's no way this failure is related to my patch [
>> https://gerrit.ovirt.org/#/c/83950/], which is a 1 character typo fix in
>> the constants file.
>>
>> Anyone have any ideas or also seeing this? It seems random, too --
>> sometimes 1 or 2 of the jobs will succeed [el7, fc25] but the other will
>> fail. Etc.
>>
>>
>> *14:48:08* + automation/packaging-setup-tests.sh*14:48:08* + trap popd 
>> 0*14:48:08* +++ readlink -f automation/packaging-setup-tests.sh*14:48:08* ++ 
>> dirname 
>> /home/jenkins/workspace/ovirt-engine_master_check-patch-fc26-x86_64/ovirt-engine/automation/packaging-setup-tests.sh*14:48:08*
>>  + pushd 
>> /home/jenkins/workspace/ovirt-engine_master_check-patch-fc26-x86_64/ovirt-engine/automation/..*14:48:08*
>>  ~ ~*14:48:08* + export 
>> PYTHONPATH=packaging/pythonlib:packaging/setup:*14:48:08* + 
>> PYTHONPATH=packaging/pythonlib:packaging/setup:*14:48:08* + python -m pytest 
>> packaging/setup*14:48:08* = test session starts 
>> ==*14:48:08* platform linux2 -- Python 2.7.14, 
>> pytest-3.2.3, py-1.4.34, pluggy-0.4.0*14:48:08* rootdir: 
>> /home/jenkins/workspace/ovirt-engine_master_check-patch-fc26-x86_64/ovirt-engine,
>>  inifile:*14:48:08* collected 0 items / 1 errors*14:48:08* *14:48:08* 
>>  ERRORS 
>> *14:48:08*  ERROR collecting 
>> packaging/setup/tests/ovirt_engine_setup/engine_common/test_database.py 
>> *14:48:08* ImportError while importing test module 
>> '/home/jenkins/workspace/ovirt-engine_master_check-patch-fc26-x86_64/ovirt-engine/packaging/setup/tests/ovirt_engine_setup/engine_common/test_database.py'.*14:48:08*
>>  Hint: make sure your test modules/packages have valid Python 
>> names.*14:48:08* Traceback:*14:48:08* 
>> packaging/setup/tests/ovirt_engine_setup/engine_common/test_database.py:19: 
>> in *14:48:08* import ovirt_engine_setup.engine_common.database 
>> as under_test  # isort:skip # noqa: E402*14:48:08* 
>> packaging/setup/ovirt_engine_setup/engine_common/database.py:30: in 
>> *14:48:08* from otopi import base*14:48:08* E   ImportError: No 
>> module named otopi*14:48:08* !!! Interrupted: 1 errors 
>> during collection *14:48:08* === 
>> 1 error in 0.33 seconds *14:48:08* + 
>> popd*14:48:08* ~
>>
>>
>>
>> -- Forwarded message --
>> From: Code Review <ger...@ovirt.org>
>> Date: Sun, Nov 12, 2017 at 10:31 AM
>> Subject: Change in ovirt-engine[master]: engine: typo fix snaphot ->
>> snapshot
>> To: Greg Sheremeta <gsher...@redhat.com>
>>
>>
>> Jenkins CI *posted comments* on this change.
>>
>> View Change <https://gerrit.ovirt.org/83950>
>>
>> Patch set 1:Continuous-Integration -1
>>
>> Build Failed
>>
>> http://jenkins.ovirt.org/job/ovirt-engine_master_check-patch
>> -fc25-x86_64/17231/ : FAILURE
>>
>> http://jenkins.ovirt.org/job/ovirt-engine_master_check-patch
>> -fc26-x86_64/157/ : FAILURE
>>
>> http://jenkins.ovirt.org/job/ovirt-engine_master_check-patch
>> -el7-x86_64/33193/ : SUCCESS
>>
>>
>> To view, visit change 83950 <https://gerrit.ovirt.org/83950>. To
>> unsubscribe, visit settings <https://gerrit.ovirt.org/settings>.
>> Gerrit-Project: ovirt-engine
>> Gerrit-Branch: master
>> Gerrit-MessageType: comment
>> Gerrit-Change-Id: I88b4d75855c0abe2f0eddd8b282c172205db2e42
>> Gerrit-Change-Number: 83950
>> Gerrit-PatchSet: 1
>> Gerrit-Owner: Greg Sheremeta <gsher...@redhat.com>
>> Gerrit-Reviewer: Alexander Wels <aw...@redhat.com>
>> Gerrit-Reviewer: Greg Sheremeta <gsher...@redhat.com>
>> Gerrit-Reviewer: Jenkins CI
>> Gerrit-Revi

[ovirt-devel] [ FIXED ] Missing release RPMs in resources.ovirt.org/pub/yum-repo/

2017-11-12 Thread Daniel Belenky
Hi all,

Some of you might have encountered an error while trying to download the
latest oVirt release (4.1/4.1/master) from resources.ovirt.org/yum-repo/.
Over the weekend we had an issue with our nightly publisher jobs. Those
jobs are responsible for moving artifacts from Jenkins to the matching
snapshot repo on resources.ovirt.org.
Since the publishers failed, ovirt-release-master.rpm and the
ovirt-release-4.x-snapshot.rpms (which are symlinks to
ovirt-${version}-snapshot) were unavailable.
The nightly publisher is fixed now, and the RPMs have been properly deployed
to the repo.
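
If you want to double-check on your side, something like this should confirm
the release RPM resolves again (the exact file name under pub/yum-repo/ is my
assumption here - adjust it to the one you need):

# prints the final HTTP status code; 200 means the RPM is reachable again
curl -o /dev/null -sL -w '%{http_code}\n' \
  http://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm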

Sorry for any inconvenience,
-- 

DANIEL BELENKY

RHV DEVOPS

EMEA VIRTUALIZATION R
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] NOTE: centos-ovirt-4.0 repo URL changed

2017-10-26 Thread Daniel Belenky
Hi all,

Please note that the CentOS ovirt-4.0 repo URL has changed to
http://vault.centos.org/7.3.1611/virt/x86_64/ovirt-4.0/
If your project is using the old URL (
http://mirror.centos.org/centos/7.3.1611/virt/x86_64/ovirt-4.0/), please
make sure to update it.
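
For example, if the old URL sits in a plain yum repo file, a one-liner like the
following does the swap (the repo file name is hypothetical - use whichever
file carries the old baseurl in your project):

sudo sed -i \
  's|mirror.centos.org/centos/7.3.1611/virt/x86_64/ovirt-4.0|vault.centos.org/7.3.1611/virt/x86_64/ovirt-4.0|g' \
  /etc/yum.repos.d/centos-ovirt-4.0.repo   # hypothetical file name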

-- 

DANIEL BELENKY

RHV DEVOPS

EMEA VIRTUALIZATION R
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] ovirt-guest-agent_master build artifacts is failing

2017-10-23 Thread Daniel Belenky
Oh sorry! My link was broken.
http://jenkins.ovirt.org/job/ovirt-guest-agent_master_build-artifacts-el7-x86_64/

On Mon, Oct 23, 2017 at 10:10 AM, Eyal Edri <ee...@redhat.com> wrote:

> Danie,
> Can you share the link to the job?
>
> On Mon, Oct 23, 2017 at 10:02 AM, Sandro Bonazzola <sbona...@redhat.com>
> wrote:
>
>> 2017-10-23 8:36 GMT+02:00 Daniel Belenky <dbele...@redhat.com>:
>>
>> > Hi,
>> >
>> > ovirt-guest-agent_master build artifacts is failing with dependency
>> error
>>
>
>
>
> --
>
> Eyal edri
>
>
> MANAGER
>
> RHV DevOps
>
> EMEA VIRTUALIZATION R
>
>
> Red Hat EMEA <https://www.redhat.com/>
> <https://red.ht/sig> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
> phone: +972-9-7692018 <+972%209-769-2018>
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>



-- 

DANIEL BELENKY

RHV DEVOPS

EMEA VIRTUALIZATION R
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] ovirt-guest-agent_master build artifacts is failing

2017-10-23 Thread Daniel Belenky
Hi,

The ovirt-guest-agent_master build-artifacts job is failing with a dependency
error, so any patch to the project fails OST.
Link to failing build artifacts job: *57/console* <http://57/console>

*Error snippet from log:*

*21:33:18* Getting requirements for
ovirt-guest-agent-windows.spec*21:33:18*  -->
p7zip-16.02-2.el7.x86_64*21:33:18*  -->
py2exe-py2.7-0.6.9-2.el7.centos.noarch*21:33:18*  -->
python-windows-2.7.14-1.el7.centos.noarch*21:33:18*  -->
wine-2.17-1.2.el7.centos.x86_64*21:33:18*  -->
wget-1.14-15.el7.x86_64*21:33:18*  -->
mingw32-gcc-c++-4.9.3-1.el7.x86_64*21:33:18*  -->
mingw64-gcc-c++-4.9.3-1.el7.x86_64*21:33:18* Error: No Package found
for pywin32-py2.7 = 220
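
A quick way to see what the configured repos actually provide for that
dependency (not part of the job output above, just a sketch; the repo id is a
placeholder):

# list every pywin32-py2.7 build the enabled repos offer
yum --showduplicates list available 'pywin32-py2.7'
# or query one repo explicitly
repoquery --repoid=some-extra-repo 'pywin32-py2.7*'   # repo id is a placeholder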


-- 

DANIEL BELENKY

RHV DEVOPS

EMEA VIRTUALIZATION R
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master ] [ bootstrap.verify_add_hosts ] [ 18/10/17 ]

2017-10-18 Thread Daniel Belenky
I think the missing-packages issue you've mentioned is not persistent; I don't
see any more tests failing on it, so I assume it was a change in the repo or
something similar. The original error I reported, however, is persistent and
has been failing OST since yesterday. We must find the root cause of those
errors, because every patch to ovirt-engine is based on some faulty commit and
won't be deployed to the tested repo.
I'll mention, though, that tests that don't include new engine patches (for
example, tests that include only a new VDSM) pass OST.

On Wed, Oct 18, 2017 at 3:54 PM, Allon Mureinik <amure...@redhat.com> wrote:

> These patches are part of the engine backend - the failure happens WAY
> beforehand, they don't look related as far as I can tell.
>
> Subsequent failures of the same suite seem to have different errors, e.g.:
>
> Traceback (most recent call last):
>   File "/tmp/ovirt-y8uX2TlUiq/pythonlib/otopi/context.py", line 133, in 
> _executeMethod
> method['method']()
>   File "/tmp/ovirt-y8uX2TlUiq/otopi-plugins/otopi/packagers/yumpackager.py", 
> line 256, in _packages
> if self._miniyum.buildTransaction():
>   File "/tmp/ovirt-y8uX2TlUiq/pythonlib/otopi/miniyum.py", line 920, in 
> buildTransaction
> raise yum.Errors.YumBaseError(msg)
> YumBaseError: [u'vdsm-4.20.3-202.git6826cec.el7.centos.x86_64 requires 
> libvirt-daemon-kvm >= 3.2.0-14.el7_4.3', 
> u'10:qemu-kvm-ev-2.9.0-16.el7_4.5.1.x86_64 requires ipxe-roms-qemu >= 
> 20170123-1.git4e85b27.el7_4.1']
>
>
> Has something changed in the way OST sets up repos/hosts?
>
>
> On Wed, Oct 18, 2017 at 2:40 PM, Daniel Belenky <dbele...@redhat.com>
> wrote:
>
>> Hi all,
>>
>> *The following test is failing:* 002_bootstrap.verify_add_hosts
>> *All logs from failing job
>> <http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/3166/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/>*
>> *Only 2 engine patches participated in the test, so the suspected patches
>> are:*
>>
>>1. *https://gerrit.ovirt.org/#/c/82542/2*
>><https://gerrit.ovirt.org/#/c/82542/2>
>>2. *https://gerrit.ovirt.org/#/c/82545/3
>><https://gerrit.ovirt.org/#/c/82545/3>*
>>
>> Due to the fact that when this error first introduced we had another
>> error, the CI can't automatically detect the specific patch.
>>
>> *Error snippet from logs: **ovirt-host-deploy-ansible log
>> <http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/3166/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/host-deploy/ovirt-host-deploy-ansible-20171015165106-lago-basic-suite-master-host-0-74ed9407.log/*view*/>
>> (Full log)*
>>
>> TASK [ovirt-host-deploy-firewalld : Enable firewalld rules] 
>> 
>> failed: [lago-basic-suite-master-host-0] (item={u'service': u'glusterfs'}) 
>> => {"changed": false, "failed": true, "item": {"service": "glusterfs"}, 
>> "msg": "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: 
>> INVALID_SERVICE: 'glusterfs' not among existing services Permanent and 
>> Non-Permanent(immediate) operation, Services are defined by port/tcp 
>> relationship and named as they are in /etc/services (on most systems)"}
>>
>>
>> *Error from HOST 0 firewalld
>> log: lago-basic-suite-master-host-0/_var_log/firewalld/
>> <http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/3166/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-host-0/_var_log/firewalld/*view*/>
>>  (Full
>> log)*
>>
>> 2017-10-15 16:51:24 ERROR: INVALID_SERVICE: 'glusterfs' not among existing 
>> services
>>
>> --
>>
>> DANIEL BELENKY
>>
>> RHV DEVOPS
>>
>> EMEA VIRTUALIZATION R
>> <https://red.ht/sig>
>>
>
>


-- 

DANIEL BELENKY

RHV DEVOPS

EMEA VIRTUALIZATION R
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] [ OST Failure Report ] [ oVirt Master ] [ bootstrap.verify_add_hosts ] [ 18/10/17 ]

2017-10-18 Thread Daniel Belenky
Hi all,

*The following test is failing:* 002_bootstrap.verify_add_hosts
*All logs from failing job
<http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/3166/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/>*
*Only 2 engine patches participated in the test, so the suspected patches
are:*

   1. *https://gerrit.ovirt.org/#/c/82542/2*
   <https://gerrit.ovirt.org/#/c/82542/2>
   2. *https://gerrit.ovirt.org/#/c/82545/3
   <https://gerrit.ovirt.org/#/c/82545/3>*

Because another error was present when this error was first introduced, the CI
can't automatically detect the specific patch.

*Error snippet from logs: **ovirt-host-deploy-ansible log
<http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/3166/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/host-deploy/ovirt-host-deploy-ansible-20171015165106-lago-basic-suite-master-host-0-74ed9407.log/*view*/>
(Full log)*

TASK [ovirt-host-deploy-firewalld : Enable firewalld rules] 
failed: [lago-basic-suite-master-host-0] (item={u'service':
u'glusterfs'}) => {"changed": false, "failed": true, "item":
{"service": "glusterfs"}, "msg": "ERROR: Exception caught:
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
not among existing services Permanent and Non-Permanent(immediate)
operation, Services are defined by port/tcp relationship and named as
they are in /etc/services (on most systems)"}


*Error from HOST 0 firewalld
log: lago-basic-suite-master-host-0/_var_log/firewalld/
<http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/3166/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-host-0/_var_log/firewalld/*view*/>
(Full
log)*

2017-10-15 16:51:24 ERROR: INVALID_SERVICE: 'glusterfs' not among
existing services
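
For reference (not taken from these logs), this is roughly how one can check on
the host whether firewalld has a 'glusterfs' service definition at all:

# list the services firewalld knows about and look for glusterfs
firewall-cmd --get-services | tr ' ' '\n' | grep -x glusterfs \
  || echo "no glusterfs service defined"
# service definitions normally live in these directories
ls /usr/lib/firewalld/services/ /etc/firewalld/services/ 2>/dev/null | grep -i gluster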

-- 

DANIEL BELENKY

RHV DEVOPS

EMEA VIRTUALIZATION R
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master ] [ upgrade suites failed! ] [ 15/10/17 ]

2017-10-17 Thread Daniel Belenky
Hey, I see that both of the patches that were supposed to fix the issue (
https://gerrit.ovirt.org/82800 and https://gerrit.ovirt.org/#/c/82799/)
were merged, but the issue remains.

On Mon, Oct 16, 2017 at 11:45 AM, Sandro Bonazzola <sbona...@redhat.com>
wrote:

>
>
> 2017-10-16 10:30 GMT+02:00 Yedidyah Bar David <d...@redhat.com>:
>
>> On Mon, Oct 16, 2017 at 11:01 AM, Miroslava Voglova <mvogl...@redhat.com>
>> wrote:
>>
>>> On Mon, Oct 16, 2017 at 9:44 AM, Martin Perina <mper...@redhat.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Mon, Oct 16, 2017 at 9:38 AM, Yedidyah Bar David <d...@redhat.com>
>>>> wrote:
>>>>
>>>>> On Mon, Oct 16, 2017 at 10:34 AM, Miroslava Voglova <
>>>>> mvogl...@redhat.com> wrote:
>>>>>
>>>>>> Fix on review https://gerrit.ovirt.org/#/c/82799/
>>>>>>
>>>>>
>>>>> That's indeed a related patch, but not sure how it solves current
>>>>> failure.
>>>>>
>>>>
>>>> ​Let's copy generate-pgpass.sh to packaging/setup/dbutils a​nd source
>>>> it from this location for taskcleaner.sh and unlock_entity.sh
>>>>
>>>
>>>
>>> After offline discussion merging https://gerrit.ovirt.org/82800 and
>>> then https://gerrit.ovirt.org/#/c/82799/ will fix the issue. Both
>>> patches are needed.
>>>
>>
>> So this means:
>>
>> Merge https://gerrit.ovirt.org/82800 . We might want to open a
>> real 4.1 bug for this.
>>
>
> Agreed, let's open a real 4.1.7 bug to track this and get proper
> verification by QE.
>
>
>>
>> Build 4.1.7 (or 4.1.8?) with it .
>>
>
> Let's point to 4.1.7
>
>
>>
>> Patch 4.2 engine-setup to require tools-4.1.7.
>>
>
> well, tools >= 4.1.7 :-)
>
>
>
>
>>
>> Adding Sandro.
>>
>>
>>>
>>>
>>>>
>>>>
>>>>>
>>>>>>
>>>>>> On Mon, Oct 16, 2017 at 9:32 AM, Yaniv Kaul <yk...@redhat.com> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Mon, Oct 16, 2017 at 10:24 AM, Yedidyah Bar David <
>>>>>>> d...@redhat.com> wrote:
>>>>>>>
>>>>>>>> On Mon, Oct 16, 2017 at 10:21 AM, Yedidyah Bar David <
>>>>>>>> d...@redhat.com> wrote:
>>>>>>>>
>>>>>>>>> On Mon, Oct 16, 2017 at 9:28 AM, Daniel Belenky <
>>>>>>>>> dbele...@redhat.com> wrote:
>>>>>>>>>
>>>>>>>>>> can someone address this issue? every patch to *ovirt-engine* that
>>>>>>>>>> is based on top of this patch is failing OST and* won't deploy
>>>>>>>>>> to the tested repo*.
>>>>>>>>>>
>>>>>>>>>> On Sun, Oct 15, 2017 at 9:33 AM, Daniel Belenky <
>>>>>>>>>> dbele...@redhat.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi all,
>>>>>>>>>>> The following tests are failing both of the upgrade suites in
>>>>>>>>>>> OST (upgrade_from_release and upgrade_from_prevrelease).
>>>>>>>>>>>
>>>>>>>>>>> *Link to console:* ovirt-master_change-queue-tester/3146/console
>>>>>>>>>>> <http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3146/console>
>>>>>>>>>>> *Link to test logs:*
>>>>>>>>>>> - upgrade-from-release-suit-master-el7
>>>>>>>>>>> <http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3146/artifact/exported-artifacts/upgrade-from-release-suit-master-el7>
>>>>>>>>>>> - upgrade-from-prevrelease-suit-master-el7
>>>>>>>>>>> <http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3146/artifact/exported-artifacts/upgrade-from-prevrelease-suit-master-el7>
>>>>>>>>>>> *Suspected patch:* https://gerrit.ovirt.org/#/c/82615/5
>>>>>>>>>>> *Please note that every patch that is based on top of the patch
>>>>>>>>>>> above was not deployed to the tested repo.*
>>>&

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master ] [ upgrade suites failed! ] [ 15/10/17 ]

2017-10-16 Thread Daniel Belenky
Didi, thanks for addressing this issue and for the detailed explanation. I'll
make sure to attach all relevant logs next time, sorry for that!

On Mon, Oct 16, 2017 at 10:32 AM, Yaniv Kaul <yk...@redhat.com> wrote:

>
>
> On Mon, Oct 16, 2017 at 10:24 AM, Yedidyah Bar David <d...@redhat.com>
> wrote:
>
>> On Mon, Oct 16, 2017 at 10:21 AM, Yedidyah Bar David <d...@redhat.com>
>> wrote:
>>
>>> On Mon, Oct 16, 2017 at 9:28 AM, Daniel Belenky <dbele...@redhat.com>
>>> wrote:
>>>
>>>> can someone address this issue? every patch to *ovirt-engine* that is
>>>> based on top of this patch is failing OST and* won't deploy to the
>>>> tested repo*.
>>>>
>>>> On Sun, Oct 15, 2017 at 9:33 AM, Daniel Belenky <dbele...@redhat.com>
>>>> wrote:
>>>>
>>>>> Hi all,
>>>>> The following tests are failing both of the upgrade suites in OST
>>>>> (upgrade_from_release and upgrade_from_prevrelease).
>>>>>
>>>>> *Link to console:* ovirt-master_change-queue-tester/3146/console
>>>>> <http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3146/console>
>>>>> *Link to test logs:*
>>>>> - upgrade-from-release-suit-master-el7
>>>>> <http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3146/artifact/exported-artifacts/upgrade-from-release-suit-master-el7>
>>>>> - upgrade-from-prevrelease-suit-master-el7
>>>>> <http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3146/artifact/exported-artifacts/upgrade-from-prevrelease-suit-master-el7>
>>>>> *Suspected patch:* https://gerrit.ovirt.org/#/c/82615/5
>>>>> *Please note that every patch that is based on top of the patch above
>>>>> was not deployed to the tested repo.*
>>>>>
>>>>> *Error snippet from engine setup log:*
>>>>>
>>>>
>>> Please add a direct link next time, if possible. This is it:
>>>
>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-teste
>>> r/3146/artifact/exported-artifacts/upgrade-from-release-suit
>>> -master-el7/test_logs/upgrade-from-release-suite-master/
>>> post-001_upgrade_engine.py/lago-upgrade-from-release-suit
>>> e-master-engine/_var_log/ovirt-engine/setup/ovirt-engine-
>>> setup-20171013222617-73f0df.log
>>>
>>> And a bit above the snippet below, there is:
>>>
>>> 2017-10-13 22:26:24,274-0400 DEBUG otopi.plugins.ovirt_engine_set
>>> up.ovirt_engine.upgrade.asynctasks plugin.execute:926 execute-output:
>>> ('/usr/share/ovirt-engine/setup/dbutils/taskcleaner.sh', '-l',
>>> '/var/log/ovirt-engine/setup/ovirt-engine-setup-20171013222617-73f0df.log',
>>> '-u', 'engine', '-s', 'localhost', '-p', '5432', '-d', 'engine', '-q',
>>> '-r', '-Z') stderr:
>>>
>>> /usr/share/ovirt-engine/bin/generate-pgpass.sh: line 3: 
>>> /usr/share/ovirt-engine/setup/dbutils/engine-prolog.sh: No such file or 
>>> directory
>>>
>>>
>>> 2017-10-13 22:26:24,274-0400 DEBUG otopi.context context._executeMethod:143 
>>> method exception
>>>>> Traceback (most recent call last):
>>>>>   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in 
>>>>> _executeMethod
>>>>> method['method']()
>>>>>   File 
>>>>> "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/upgrade/asynctasks.py",
>>>>>  line 470, in _validateZombies
>>>>> self._clearZombies()
>>>>>   File 
>>>>> "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/upgrade/asynctasks.py",
>>>>>  line 135, in _clearZombies
>>>>> 'Failed to clear zombie commands. '
>>>>> RuntimeError: Failed to clear zombie commands. Please access support in 
>>>>> attempt to resolve the problem
>>>>> 2017-10-13 22:26:24,275-0400 ERROR otopi.context 
>>>>> context._executeMethod:152 Failed to execute stage 'Setup validation': 
>>>>> Failed to clear zombie commands. Please access support in attempt to 
>>>>> resolve the problem
>>>>>
>>>>>
>>> With [1], taskcleaner.sh sources generate-pgpass.sh .
>>>
>>> generate-pgpass.sh is in ovirt-engine-tools, which in upgrade flows, is
>>> not
>>> yet upgraded (at the poin

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master ] [ upgrade suites failed! ] [ 15/10/17 ]

2017-10-16 Thread Daniel Belenky
Can someone address this issue? Every patch to *ovirt-engine* that is based
on top of this patch is failing OST and *won't deploy to the tested repo*.

On Sun, Oct 15, 2017 at 9:33 AM, Daniel Belenky <dbele...@redhat.com> wrote:

> Hi all,
> The following tests are failing both of the upgrade suites in OST
> (upgrade_from_release and upgrade_from_prevrelease).
>
> *Link to console:* ovirt-master_change-queue-tester/3146/console
> <http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3146/console>
> *Link to test logs:*
> - upgrade-from-release-suit-master-el7
> <http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3146/artifact/exported-artifacts/upgrade-from-release-suit-master-el7>
> - upgrade-from-prevrelease-suit-master-el7
> <http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3146/artifact/exported-artifacts/upgrade-from-prevrelease-suit-master-el7>
> *Suspected patch:* https://gerrit.ovirt.org/#/c/82615/5
> *Please note that every patch that is based on top of the patch above was
> not deployed to the tested repo.*
>
> *Error snippet from engine setup log:*
>
> 2017-10-13 22:26:24,274-0400 DEBUG otopi.context context._executeMethod:143 
> method exception
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in 
> _executeMethod
> method['method']()
>   File 
> "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/upgrade/asynctasks.py",
>  line 470, in _validateZombies
> self._clearZombies()
>   File 
> "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/upgrade/asynctasks.py",
>  line 135, in _clearZombies
> 'Failed to clear zombie commands. '
> RuntimeError: Failed to clear zombie commands. Please access support in 
> attempt to resolve the problem
> 2017-10-13 22:26:24,275-0400 ERROR otopi.context context._executeMethod:152 
> Failed to execute stage 'Setup validation': Failed to clear zombie commands. 
> Please access support in attempt to resolve the problem
>
> --
>
> DANIEL BELENKY
>
> RHV DEVOPS
>
> EMEA VIRTUALIZATION R
> <https://red.ht/sig>
>



-- 

DANIEL BELENKY

RHV DEVOPS

EMEA VIRTUALIZATION R
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] [ OST Failure Report ] [ oVirt Master ] [ upgrade suites failed! ] [ 15/10/17 ]

2017-10-15 Thread Daniel Belenky
Hi all,
The following tests are failing both of the upgrade suites in OST
(upgrade_from_release and upgrade_from_prevrelease).

*Link to console:* ovirt-master_change-queue-tester/3146/console
<http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3146/console>
*Link to test logs:*
- upgrade-from-release-suit-master-el7
<http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3146/artifact/exported-artifacts/upgrade-from-release-suit-master-el7>
- upgrade-from-prevrelease-suit-master-el7
<http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3146/artifact/exported-artifacts/upgrade-from-prevrelease-suit-master-el7>
*Suspected patch:* https://gerrit.ovirt.org/#/c/82615/5
*Please note that every patch that is based on top of the patch above was
not deployed to the tested repo.*

*Error snippet from engine setup log:*

2017-10-13 22:26:24,274-0400 DEBUG otopi.context
context._executeMethod:143 method exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133,
in _executeMethod
method['method']()
  File 
"/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/upgrade/asynctasks.py",
line 470, in _validateZombies
self._clearZombies()
  File 
"/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/upgrade/asynctasks.py",
line 135, in _clearZombies
'Failed to clear zombie commands. '
RuntimeError: Failed to clear zombie commands. Please access support
in attempt to resolve the problem
2017-10-13 22:26:24,275-0400 ERROR otopi.context
context._executeMethod:152 Failed to execute stage 'Setup validation':
Failed to clear zombie commands. Please access support in attempt to
resolve the problem

-- 

DANIEL BELENKY

RHV DEVOPS

EMEA VIRTUALIZATION R
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Failure in CQ testing

2017-10-09 Thread Daniel Belenky
It seems that there was one job that timed out while performing a cleanup [1
<http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/50/console>],
and ever since, every build that tries to use VMs fails.
I've cleaned the faulty host.
I've enqueued the specific change that was dropped out of the CQ to be
tested again.

[1]
http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/50/console
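
For the record, the "Address already in use" symptom can be pinned down on the
slave with something like the following. This is only a sketch, not the exact
cleanup I ran, and the port number is a placeholder:

# find whatever is still listening on the lago repo-server port (8585 is a placeholder)
ss -ltnp | grep ':8585'
lsof -iTCP:8585 -sTCP:LISTEN
# then stop the stale process reported above
kill <stale-pid>   # placeholder for the PID shown by ss/lsof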

On Mon, Oct 9, 2017 at 1:50 PM, Daniel Belenky <dbele...@redhat.com> wrote:

> It seems like a cleanup issue from the previous job that ran on that
> specific node. I'll try to figure out what happened there.
>
> On Mon, Oct 9, 2017 at 1:16 PM, Dafna Ron <d...@redhat.com> wrote:
>
>> Hi,
>>
>> We have a CQ failure with the error:
>>
>> error: [Errno 98] Address already in use
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> * Link to suspected patches: This failure is pointing to:
>> https://gerrit.ovirt.org/#/c/81704/ <https://gerrit.ovirt.org/#/c/81704/>
>> Link to Job:
>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3055/
>> <http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3055/> Link
>> to all logs:
>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3055/consoleFull
>> <http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3055/consoleFull>
>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3055/artifact/
>> <http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3055/artifact/>
>> (Relevant) error snippet from the log:  *
>>
>> *08:36:38* [upgrade-from-prevrelease-suit] @ Deploy oVirt environment: ERROR 
>> (in 0:00:00)*08:36:38* [upgrade-from-prevrelease-suit] Error occured, 
>> aborting*08:36:38* [upgrade-from-prevrelease-suit] Traceback (most recent 
>> call last):*08:36:38* [upgrade-from-prevrelease-suit]   File 
>> "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 362, in 
>> do_run*08:36:38* [upgrade-from-prevrelease-suit] 
>> self.cli_plugins[args.ovirtverb].do_run(args)*08:36:38* 
>> [upgrade-from-prevrelease-suit]   File 
>> "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 184, in 
>> do_run*08:36:38* [upgrade-from-prevrelease-suit] 
>> self._do_run(**vars(args))*08:36:38* [upgrade-from-prevrelease-suit]   File 
>> "/usr/lib/python2.7/site-packages/lago/utils.py", line 501, in 
>> wrapper*08:36:38* [upgrade-from-prevrelease-suit] return func(*args, 
>> **kwargs)*08:36:38* [upgrade-from-prevrelease-suit]   File 
>> "/usr/lib/python2.7/site-packages/lago/utils.py", line 512, in 
>> wrapper*08:36:38* [upgrade-from-prevrelease-suit] return func(*args, 
>> prefix=prefix, **kwargs)*08:36:38* [upgrade-from-prevrelease-suit]   File 
>> "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 166, in 
>> do_deploy*08:36:38* [upgrade-from-prevrelease-suit] 
>> prefix.deploy()*08:36:38* [upgrade-from-prevrelease-suit]   File 
>> "/usr/lib/python2.7/site-packages/lago/log_utils.py", line 636, in 
>> wrapper*08:36:38* [upgrade-from-prevrelease-suit] return func(*args, 
>> **kwargs)*08:36:38* [upgrade-from-prevrelease-suit]   File 
>> "/usr/lib/python2.7/site-packages/ovirtlago/reposetup.py", line 111, in 
>> wrapper*08:36:38* [upgrade-from-prevrelease-suit] with 
>> utils.repo_server_context(args[0]):*08:36:38* 
>> [upgrade-from-prevrelease-suit]   File "/usr/lib64/python2.7/contextlib.py", 
>> line 17, in __enter__*08:36:38* [upgrade-from-prevrelease-suit] return 
>> self.gen.next()*08:36:38* [upgrade-from-prevrelease-suit]   File 
>> "/usr/lib/python2.7/site-packages/ovirtlago/utils.py", line 100, in 
>> repo_server_context*08:36:38* [upgrade-from-prevrelease-suit] 
>> root_dir=prefix.paths.internal_repo(),*08:36:38* 
>> [upgrade-from-prevrelease-suit]   File 
>> "/usr/lib/python2.7/site-packages/ovirtlago/utils.py", line 76, in 
>> _create_http_server*08:36:38* [upgrade-from-prevrelease-suit] 
>> generate_request_handler(root_dir),*08:36:38* 
>> [upgrade-from-prevrelease-suit]   File 
>> "/usr/lib64/python2.7/SocketServer.py", line 419, in __init__*08:36:38* 
>> [upgrade-from-prevrelease-suit] self.server_bind()*08:36:38* 
>> [upgrade-from-prevrelease-suit]   File 
>> "/usr/lib64/python2.7/BaseHTTPServer.py", line 108, in server_bind*08:36:38* 
>> [upgrade-from-prevrelease-suit] 
>> SocketServer.TCPServer.server_bind(self)*08:36:38* 
>> [upgrade-from-prevrelease-su

Re: [ovirt-devel] Failure in CQ testing

2017-10-09 Thread Daniel Belenky
Date: Mon, 9 Oct 2017 10:05:39 + (UTC)
> From: oVirt Jenkins <jenk...@ovirt.org> <jenk...@ovirt.org>
> To: in...@ovirt.org
>
> Change 81704,6 (ovirt-hosted-engine-setup) is probably the reason behind 
> recent
> system test failures in the "ovirt-master" change queue and needs to be fixed.
>
> This change had been removed from the testing queue. Artifacts build from this
> change will not be released until it is fixed.
>
> For further details about the change see:https://gerrit.ovirt.org/#/c/81704/6
>
> For failed test results 
> see:http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3055/
> ___
> Infra mailing listInfra@ovirt.orghttp://lists.ovirt.org/mailman/listinfo/infra
>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 

DANIEL BELENKY

RHV DEVOPS

EMEA VIRTUALIZATION R
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ oVirt Devel ] [oVirt master ] [ OST Failure Report ] [ basic_suite_master & upgrade_suites failed ]

2017-08-30 Thread Daniel Belenky
Thanks Juan and Maor for the quick response.
I pasted the wrong patch; the one that caused the failure is
https://gerrit.ovirt.org/#/c/80969/
I'll resend the mail.
I'm sorry for the noise!

On Wed, Aug 30, 2017 at 10:22 AM Maor Lipchuk <mlipc...@redhat.com> wrote:

> Thanks for the quick respond Juan
>
> On Wed, Aug 30, 2017 at 10:21 AM, Juan Hernández <jhern...@redhat.com>
> wrote:
> > That failure was cased by this patch:
> >
> >   restapi: Update to model 4.2.16 and metamodel 1.2.10
> >   https://gerrit.ovirt.org/81134
> >
> > And fixed by this one:
> >
> >   restapi: Add metamodel-server to restapi-definition module
> >   https://gerrit.ovirt.org/81175
> >
> > On 08/30/2017 09:09 AM, Daniel Belenky wrote:
> >>
> >> *Test Failed:*
> >>
> >> 1. *basic_suite: *002_bootstrap.add_dc and
> >> 2. *upgrade_from_release: *001_upgrade_engine.test_initialize_engine
> >> 3. *upgrade_from_prevrelease*:
> >>
> >> *Link to test logs:*
> >>
> >> 1. *basic suite logs
> >>
> >> <
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2136/artifact/exported-artifacts/basic-suit-master-el7
> >*
> >> 2. *upgrade from prev release suite logs
> >>
> >> <
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2136/artifact/exported-artifacts/upgrade-from-prevrelease-suit-master-el7
> >*
> >> 3. *upgrade from release suite logs
> >>
> >> <
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2136/artifact/exported-artifacts/upgrade-from-release-suit-master-el7
> >*
> >>
> >>
> >> *Suspected patch: https://gerrit.ovirt.org/#/c/79033/41
> >> <https://gerrit.ovirt.org/#/c/79033/41>*
> >>
> >>
> >> *Please note that this change was the only change under those tests.*
> >> *Error snippet from basic-suite /var/log/ovirt-engine/server.log:*
> >>
> >>
> >> ERROR [io.undertow.request] (default task-5) UT005023: Exception
> >> handling request to /ovirt-engine/api/v4/datacenters:
> >> java.lang.RuntimeException: org.jboss.resteasy.spi.UnhandledException:
> >> java.lang.NoClassDefFoundError:
> >> org/ovirt/api/metamodel/server/ValidationException
> >> at
> >>
> io.undertow.servlet.spec.RequestDispatcherImpl.forwardImpl(RequestDispatcherImpl.java:245)
> >> [undertow-servlet-1.4.18.Final.jar:1.4.18.Final]
> >> at
> >>
> io.undertow.servlet.spec.RequestDispatcherImpl.forwardImplSetup(RequestDispatcherImpl.java:147)
> >> [undertow-servlet-1.4.18.Final.jar:1.4.18.Final]
> >> at
> >>
> io.undertow.servlet.spec.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:111)
> >> [undertow-servlet-1.4.18.Final.jar:1.4.18.Final]
> >> at
> >>
> org.ovirt.engine.api.restapi.invocation.VersionFilter.doFilter(VersionFilter.java:180)
> >> [restapi-jaxrs.jar:]
> >> at
> >>
> org.ovirt.engine.api.restapi.invocation.VersionFilter.doFilter(VersionFilter.java:98)
> >> [restapi-jaxrs.jar:]
> >> at
> >> io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
> >> [undertow-servlet-1.4.18.Final.jar:1.4.18.Final]
> >> at
> >>
> io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
> >> [undertow-servlet-1.4.18.Final.jar:1.4.18.Final]
> >> at
> >>
> org.ovirt.engine.api.restapi.invocation.CurrentFilter.doFilter(CurrentFilter.java:117)
> >> [restapi-jaxrs.jar:]
> >> at
> >>
> org.ovirt.engine.api.restapi.invocation.CurrentFilter.doFilter(CurrentFilter.java:72)
> >> [restapi-jaxrs.jar:]
> >> at
> >> io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
> >> [undertow-servlet-1.4.18.Final.jar:1.4.18.Final]
> >> at
> >>
> io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
> >> [undertow-servlet-1.4.18.Final.jar:1.4.18.Final]
> >> at
> >>
> org.ovirt.engine.core.aaa.filters.RestApiSessionMgmtFilter.doFilter(RestApiSessionMgmtFilter.java:78)
> >> [aaa.jar:]
> >> at
> >> io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
> >> [undertow-servlet-1.4.18.Final.jar:1.4.18.Final]
> >> at
> >>
> io.undertow.servlet.handlers.FilterHandler$Filte

[ovirt-devel] [ oVirt Devel ] [oVirt master ] [ OST Failure Report ] [ basic_suite_master & upgrade_suites failed ]

2017-08-30 Thread Daniel Belenky
8.Final.jar:1.4.18.Final]
at 
io.undertow.servlet.spec.RequestDispatcherImpl.forwardImpl(RequestDispatcherImpl.java:221)
[undertow-servlet-1.4.18.Final.jar:1.4.18.Final]
... 73 more
Caused by: java.lang.NoClassDefFoundError:
org/ovirt/api/metamodel/server/ValidationException
at 
org.ovirt.engine.api.resource.DataCentersResource.doAdd(DataCentersResource.java:58)
[restapi-definition.jar:]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[rt.jar:1.8.0_141]
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[rt.jar:1.8.0_141]
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_141]
at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_141]
at 
org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:140)
[resteasy-jaxrs-3.0.24.Final.jar:3.0.24.Final]
at 
org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:295)
[resteasy-jaxrs-3.0.24.Final.jar:3.0.24.Final]
at 
org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:249)
[resteasy-jaxrs-3.0.24.Final.jar:3.0.24.Final]
at 
org.jboss.resteasy.core.ResourceLocatorInvoker.invokeOnTargetObject(ResourceLocatorInvoker.java:138)
[resteasy-jaxrs-3.0.24.Final.jar:3.0.24.Final]
at 
org.jboss.resteasy.core.ResourceLocatorInvoker.invoke(ResourceLocatorInvoker.java:101)
[resteasy-jaxrs-3.0.24.Final.jar:3.0.24.Final]
at 
org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:406)
[resteasy-jaxrs-3.0.24.Final.jar:3.0.24.Final]
... 88 more
Caused by: java.lang.ClassNotFoundException:
org.ovirt.api.metamodel.server.ValidationException from [Module
"org.ovirt.engine.api.restapi-definition" from local module loader
@61064425 (finder: local module finder @7b1d7fff (roots:
/usr/share/ovirt-engine-wildfly-overlay/modules,/usr/share/ovirt-engine/modules/common,/usr/share/ovirt-engine-extension-aaa-jdbc/modules,/usr/share/ovirt-engine-extension-aaa-ldap/modules,/usr/share/ovirt-engine-wildfly/modules,/usr/share/ovirt-engine-wildfly/modules/system/layers/base))]
at 
org.jboss.modules.ModuleClassLoader.findClass(ModuleClassLoader.java:198)
[jboss-modules.jar:1.6.0.CR2]
at 
org.jboss.modules.ConcurrentClassLoader.performLoadClassUnchecked(ConcurrentClassLoader.java:412)
[jboss-modules.jar:1.6.0.CR2]
at 
org.jboss.modules.ConcurrentClassLoader.performLoadClass(ConcurrentClassLoader.java:400)
[jboss-modules.jar:1.6.0.CR2]
at 
org.jboss.modules.ConcurrentClassLoader.loadClass(ConcurrentClassLoader.java:116)
[jboss-modules.jar:1.6.0.CR2]
... 99 more



*Error snippet from both upgrade suites (engine setup output):*

[ ERROR ] Failed to execute stage 'Closing up': Command '/bin/firewall-cmd'
failed to execute

-- 

DANIEL BELENKY

RHV DEVOPS

Red Hat EMEA <https://www.redhat.com/>

IRC: #rhev-integ #rhev-dev
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ upgrade-from-release-suite-master ] [ 21/08/2017 ][000_deploy]

2017-08-22 Thread Daniel Belenky
The problem is not with the 'tested' repo, but with the official
ldap-1.3.2-1ovirt-4.1 repo.
We have ovirt-engine-extension-aaa-ldap-*1.3.2-1* but
ovirt-engine-extension-aaa-ldap-setup-*1.3.3-1*.
upgrade-from-release-suite-master will first try to initialize and install
engine 4.1 and then upgrade it to master. That's why it's failing.
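
A sketch of how the mismatch shows up when querying the repo directly (not from
the original investigation; the repo id and URL below are illustrative):

# compare the versions the repo publishes for the two packages
repoquery --repofrompath=aaa-ldap,http://resources.ovirt.org/pub/ovirt-4.1/rpm/el7/ \
          --repoid=aaa-ldap \
          ovirt-engine-extension-aaa-ldap ovirt-engine-extension-aaa-ldap-setup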

On Tue, Aug 22, 2017 at 10:12 AM Barak Korren <bkor...@redhat.com> wrote:

> On 21 August 2017 at 23:24, Valentina Makarova <makarovav...@gmail.com>
> wrote:
> >
> > Hello!
> >
> > OST failed in ./run_suilte.sh upgrade-from-release-suite-master
> > (it is likely to appear after updating in some remote repo) with error:
> >
> > Trying other mirror.
> > Error: Package:
> > ovirt-engine-extension-aaa-ldap-setup-1.3.3-1.el7.centos.noarch
> (alocalsync)
> >Requires: ovirt-engine-extension-aaa-ldap = 1.3.3-1.el7.centos
> >Installing:
> > ovirt-engine-extension-aaa-ldap-1.3.2-1.el7.centos.noarch (alocalsync)
> >ovirt-engine-extension-aaa-ldap = 1.3.2-1.el7.centos
> >
>
> It looks like 'ovirt-engine-extension-aaa-ldap = 1.3.3-1.el7.centos'
> is missing from the 'tested' repo, we are rebuilding it to fix that.
>
> >
> > Updating of my yum packages and pip packages is not effective.
> > Please, update remote repo or say me, how can I run this.
>
>
>
> --
> Barak Korren
> RHV DevOps team , RHCE, RHCi
> Red Hat EMEA
> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
-- 

DANIEL BELENKY

RHV DEVOPS

Red Hat EMEA <https://www.redhat.com/>

IRC: #rhev-integ #rhev-dev
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ upgrade-from-release-suite-master ] [ 21/08/2017 ][000_deploy]

2017-08-22 Thread Daniel Belenky
Hi Valentina,

Our team is working to fix this issue. I'll update here asap.

Thanks,

On Mon, Aug 21, 2017 at 11:26 PM Valentina Makarova <makarovav...@gmail.com>
wrote:

>
> Hello!
>
> OST failed in ./run_suilte.sh upgrade-from-release-suite-master
> (it is likely to appear after updating in some remote repo) with error:
>
> @ Deploy oVirt environment:
>   # Deploy environment:
> * [Thread-2] Deploy VM lago-upgrade-from-release-suite-master-engine:
> STDERR
> + EL7='release 7\.[0-9]'
> + cat
> + cat
> ++ /sbin/ip -4 -o addr show dev eth0
> ++ awk '{split($4,a,"."); print a[1] "." a[2] "." a[3] "." a[4]}'
> ++ awk -F/ '{print $1}'
> + ADDR=192.168.203.2
> + echo '192.168.203.2 engine'
> + install_firewalld
> + grep 'release 7\.[0-9]' /etc/redhat-release
> + rpm -q firewalld
> + systemctl enable firewalld
> + systemctl start firewalld
> + yum install --nogpgcheck -y --downloaddir=/dev/shm ntp net-snmp
> ovirt-engine ovirt-log-collector 'ovirt-engine-extension-aaa-ldap*'
>
> http://mirror.vilkam.ru/centos/7.3.1611/os/x86_64/repodata/bd50ff3d861cc21d254a390a963e9f0fd7b7b96ed9d31ece2f2b1997aa3a056f-primary.sqlite.bz2:
> [Errno 14] curl#7 - "Failed connect to mirror.vilkam.ru:80; Connection
> refused"
> Trying other mirror.
> *Error: Package:
> ovirt-engine-extension-aaa-ldap-setup-1.3.3-1.el7.centos.noarch
> (alocalsync)*
> *   Requires: ovirt-engine-extension-aaa-ldap = 1.3.3-1.el7.centos*
> *   Installing:
> ovirt-engine-extension-aaa-ldap-1.3.2-1.el7.centos.noarch (alocalsync)*
> *   ovirt-engine-extension-aaa-ldap = 1.3.2-1.el7.centos*
>
> * [Thread-2] Deploy VM lago-upgrade-from-release-suite-master-engine:
> ERROR (in 0:01:18)
> Error while running thread
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58, in
> _ret_via_queue
> queue.put({'return': func()})
>   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1610, in
> _deploy_host
> (script, ret, host.name(), ),
> RuntimeError:
> /home/vmakarova/template_update/ovirt-system-tests/deployment-upgrade-from-release-suite-master/default/scripts/_home_vmakarova_template_update_ovirt-system-tests_upgrade-from-release-suite-master_.._common_deploy-scripts_setup_engine.sh
> failed with status 1 on lago-upgrade-from-release-suite-master-engine
>
> Updating of my yum packages and pip packages is not effective.
> Please, update remote repo or say me, how can I run this.
>
> Sincerely, Valentina Makarova
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel

-- 

DANIEL BELENKY

RHV DEVOPS

Red Hat EMEA <https://www.redhat.com/>

IRC: #rhev-integ #rhev-dev
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] [ ovirt-devel ] [OST Failure Report ] [ oVirt Master ] [ 002_bootstrap ] [ 20/08/17 ]

2017-08-19 Thread Daniel Belenky
Failed test: basic_suite_master/002_bootstrap
Version: oVirt Master
Link to failed job: ovirt-master_change-queue-tester/1860/
<http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/1860/>
Link to logs (Jenkins): test logs
<http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/1860/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/>
Suspected patch: https://gerrit.ovirt.org/#/c/80749/3

From what I was able to find, it seems that for some reason VDSM failed to
start on host 1. The VDSM log is empty, and the only error I could find in
supervdsm.log is that starting LLDP failed (not sure if it's related).

From the host-deploy log:
<http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/1860/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/host-deploy/ovirt-host-deploy-20170819163844-lago-basic-suite-master-host0-72c02881.log/*view*/>

2017-08-19 16:38:41,476-0400 DEBUG
otopi.plugins.otopi.services.systemd systemd.state:130 starting
service vdsmd
2017-08-19 16:38:41,476-0400 DEBUG
otopi.plugins.otopi.services.systemd plugin.executeRaw:813 execute:
('/bin/systemctl', 'start', 'vdsmd.service'), executable='None',
cwd='None', env=None
2017-08-19 16:38:44,628-0400 DEBUG
otopi.plugins.otopi.services.systemd plugin.executeRaw:863
execute-result: ('/bin/systemctl', 'start', 'vdsmd.service'), rc=1
2017-08-19 16:38:44,630-0400 DEBUG
otopi.plugins.otopi.services.systemd plugin.execute:921
execute-output: ('/bin/systemctl', 'start', 'vdsmd.service') stdout:


2017-08-19 16:38:44,630-0400 DEBUG
otopi.plugins.otopi.services.systemd plugin.execute:926
execute-output: ('/bin/systemctl', 'start', 'vdsmd.service') stderr:
Job for vdsmd.service failed because the control process exited with
error code. See "systemctl status vdsmd.service" and "journalctl -xe"
for details.

2017-08-19 16:38:44,631-0400 DEBUG otopi.context
context._executeMethod:142 method exception
Traceback (most recent call last):
  File "/tmp/ovirt-dunwHj8Njn/pythonlib/otopi/context.py", line 132,
in _executeMethod
method['method']()
  File "/tmp/ovirt-dunwHj8Njn/otopi-plugins/ovirt-host-deploy/vdsm/packages.py",
line 224, in _start
self.services.state('vdsmd', True)
  File "/tmp/ovirt-dunwHj8Njn/otopi-plugins/otopi/services/systemd.py",
line 141, in state
service=name,
RuntimeError: Failed to start service 'vdsmd'


From /var/log/messages:
<http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/1860/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-host0/_var_log/messages/*view*/>

Aug 19 16:38:44 lago-basic-suite-master-host0 vdsmd_init_common.sh: Error:
Aug 19 16:38:44 lago-basic-suite-master-host0 vdsmd_init_common.sh:
One of the modules is not configured to work with VDSM.
Aug 19 16:38:44 lago-basic-suite-master-host0 vdsmd_init_common.sh: To
configure the module use the following:
Aug 19 16:38:44 lago-basic-suite-master-host0 vdsmd_init_common.sh:
'vdsm-tool configure [--module module-name]'.
Aug 19 16:38:44 lago-basic-suite-master-host0 vdsmd_init_common.sh: If
all modules are not configured try to use:
Aug 19 16:38:44 lago-basic-suite-master-host0 vdsmd_init_common.sh:
'vdsm-tool configure --force'
Aug 19 16:38:44 lago-basic-suite-master-host0 vdsmd_init_common.sh:
(The force flag will stop the module's service and start it
Aug 19 16:38:44 lago-basic-suite-master-host0 vdsmd_init_common.sh:
afterwards automatically to load the new configuration.)
Aug 19 16:38:44 lago-basic-suite-master-host0 vdsmd_init_common.sh:
abrt is already configured for vdsm
Aug 19 16:38:44 lago-basic-suite-master-host0 vdsmd_init_common.sh:
lvm is configured for vdsm
Aug 19 16:38:44 lago-basic-suite-master-host0 vdsmd_init_common.sh:
libvirt is already configured for vdsm
Aug 19 16:38:44 lago-basic-suite-master-host0 vdsmd_init_common.sh:
multipath requires configuration
Aug 19 16:38:44 lago-basic-suite-master-host0 vdsmd_init_common.sh:
Modules sanlock, multipath are not configured
Aug 19 16:38:44 lago-basic-suite-master-host0 vdsmd_init_common.sh:
vdsm: stopped during execute check_is_configured task (task returned
with error code 1).


Thanks,
-- 

DANIEL BELENKY

RHV DEVOPS

EMEA VIRTUALIZATION R&D
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] [ ovirt-devel ] [ OST Failure Report ] [ oVirt Master ] [ 002_bootstrap ] [ 17/08/17 ]

2017-08-17 Thread Daniel Belenky
Failed test: basic_suite_master/002_bootstrap

Version: oVirt master

Link to failed job (Jenkins): ovirt-master_change-queue-tester/1817/
<http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/1817/>

Link to logs (Jenkins): link
<http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/1817/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/>

Suspected patch: Gerrit 80481/10 <https://gerrit.ovirt.org/#/c/80481/10>


Error snippet from logs:

*From host0*

MainThread::DEBUG::2017-08-17
05:03:20,501::cmd::63::root::(exec_sync_bytes) FAILED:  = '';
 = 1
MainThread::ERROR::2017-08-17
05:03:20,502::initializer::53::root::(_lldp_init) Failed to enable
LLDP on eth0
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/network/initializer.py",
line 51, in _lldp_init
Lldp.enable_lldp_on_iface(device)
  File "/usr/lib/python2.7/site-packages/vdsm/network/lldp/lldpad.py",
line 30, in enable_lldp_on_iface
lldptool.enable_lldp_on_iface(iface, rx_only)
  File "/usr/lib/python2.7/site-packages/vdsm/network/lldpad/lldptool.py",
line 46, in enable_lldp_on_iface
raise EnableLldpError(rc, out, err, iface)
EnableLldpError: (1,
"timeout\n'M0001C304000c04ethbadminStatus0002rx' command
timed out.\n", '', 'eth0')
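
Not sure yet whether the LLDP failure is the culprit, but a rough way to check
it by hand on the host would be something like this (standard lldpad/lldptool
usage; eth0 is just the interface name from the traceback):

    systemctl status lldpad --no-pager        # lldptool times out if the lldpad daemon is down
    lldptool set-lldp -i eth0 adminStatus=rx  # what VDSM attempts (rx-only)
    lldptool get-lldp -i eth0 adminStatus     # confirm the setting was applied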


Thanks,
-- 

DANIEL BELENKY

RHV DEVOPS

EMEA VIRTUALIZATION R&D
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [Lago] [hc-basic-suite-master] - Error starting HE VM

2017-08-16 Thread Daniel Belenky
Hi Sahina,

I've checked http://jenkins.ovirt.org/job/system-tests_hc-basic-suite-master/22/
and it indeed ran on a host on which the nested virtualization flag was off.
I've taken this host offline and will fix its configuration.
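
For reference, the quick check for this on a hypervisor is roughly the
following (generic KVM/libvirt commands, Intel shown; the AMD equivalent is
kvm_amd/parameters/nested):

    cat /sys/module/kvm_intel/parameters/nested   # expect 'Y' (or '1') when nested virt is enabled
    virt-host-validate qemu                       # libvirt's own capability check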

Thanks,

On Wed, Aug 16, 2017 at 10:52 AM, Sahina Bose <sab...@redhat.com> wrote:

> Hosted-engine setup logs indicate
>   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/mixins.py",
> line 315, in _create_vm
> 'The VM is not powering up: please check VDSM logs'
> RuntimeError: The VM is not powering up: please check VDSM logs
>
> And the vdsm logs:
>
> 2017-08-15 22:20:46,953-0400 ERROR (vm/55fd65fe) [virt.vm]
> (vmId='55fd65fe-b10e-4510-8210-37acc35a207a') The vm start process failed
> (vm:853)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 782, in
> _startUnderlyingVm
> self._run()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2538, in
> _run
> dom = self._connection.createXML(domxml, flags)
>   File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line
> 125, in wrapper
> ret = f(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 586, in
> wrapper
> return func(inst, *args, **kwargs)
>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3782, in
> createXML
> if ret is None:raise libvirtError('virDomainCreateXML() failed',
> conn=self)
> libvirtError: invalid argument: could not find capabilities for
> arch=x86_64 domaintype=kvm
>
>
> Isn't nested VT enabled for hosts running these tests?
>
>
> On Wed, Aug 16, 2017 at 8:04 AM, <jenk...@jenkins.phx.ovirt.org> wrote:
>
>> Project: http://jenkins.ovirt.org/job/system-tests_hc-basic-suite-master/
>> Build: http://jenkins.ovirt.org/job/system-tests_hc-basic-suite-mas
>> ter/22/
>> Build Number: 22
>> Build Status:  Still Failing
>> Triggered By: Started by timer
>>
>> -
>> Changes Since Last Success:
>> -
>> Changes for Build #16
>> [Yaniv Kaul] Added missing dependencies to allow offline installation
>>
>> [Daniel Belenky] Append ansible suite to OST's manual job
>>
>>
>> Changes for Build #17
>> [Shani Leviim] Change '004_basic_sanity#hotunplug_disk' test to use sdk4
>>
>> [Gil Shinar] Mount upstream sources folder in mock
>>
>> [Gil Shinar] Ability to pass U/S cache folder path
>>
>>
>> Changes for Build #18
>> [Shani Leviim] Change '004_basic_sanity#hotunplug_disk' test to use sdk4
>>
>>
>> Changes for Build #19
>> [Shani Leviim] Change '004_basic_sanity#hotunplug_disk' test to use sdk4
>>
>>
>> Changes for Build #20
>> [Gal Ben Haim] check-patch: Don't fail on missing logs
>>
>> [Barak Korren] Adding injection of mirrors to slaves
>>
>> [Barak Korren] Add repo configuration for FC24 slaves
>>
>> [Barak Korren] Add repo configuration for FC25 and FC26 slaves
>>
>>
>> Changes for Build #21
>> [Gal Ben Haim] he_3.6: Remove suite
>>
>>
>> Changes for Build #22
>> [Gal Ben Haim] docs: Use mkdocs insted of sphinx
>>
>>
>>
>>
>> -
>> Failed Tests:
>> -
>> 1 tests failed.
>> FAILED:  002_bootstrap.py.junit.xml.[empty]
>>
>> Error Message:
>>
>>
>> Stack Trace:
>> Test report file /home/jenkins/workspace/system
>> -tests_hc-basic-suite-master/exported-artifacts/002_bootstrap.py.junit.xml
>> was length 0
>
>
>
> ___
> Infra mailing list
> in...@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>


-- 

DANIEL BELENKY

RHV DEVOPS

EMEA VIRTUALIZATION R&D
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] OST: A good way to test cluster compatibility version upgrade?

2017-08-08 Thread Daniel Belenky
Hi Milan,

Please feel free to ping me if you need assistance with the structure of
the upgrade suites.

Thanks,

On Mon, Aug 7, 2017 at 10:09 PM, Milan Zamazal <mzama...@redhat.com> wrote:

> Daniel Belenky <dbele...@redhat.com> writes:
>
> > As Barak said, I think it's a good idea to try and extend the current
> > upgrade suites.
> > As the upgrade suites are already running in parallel (on different
> hosts)
> > with the basic_suite (and currently the upgrade suite is much faster than
> > the basic suite),  we might end up with the same run time as the basic
> > suite, which eventually won't increase the total time of change queue's
> > tests.
>
> Thank you both for the advice, under those circumstances the upgrade
> suite should be a very good place, so we'll use it.
>
> Thanks,
> Milan
>



-- 

DANIEL BELENKY

RHV DEVOPS

EMEA VIRTUALIZATION R&D
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] OST: A good way to test cluster compatibility version upgrade?

2017-08-07 Thread Daniel Belenky
As Barak said, I think it's a good idea to try to extend the current
upgrade suites.
Since the upgrade suites already run in parallel (on different hosts)
with the basic suite, and the upgrade suite is currently much faster than
the basic suite, we should end up with the same overall run time as the basic
suite, so this won't increase the total time of the change queue's
tests.

On Mon, Aug 7, 2017 at 6:06 PM, Barak Korren <bkor...@redhat.com> wrote:

> On 7 August 2017 at 18:00, Milan Zamazal <mzama...@redhat.com> wrote:
> >
> > We could also make a separate test suite for that test, but is it
> > desirable and would anybody run it regularly?
> >
> > So what do you recommend as a good way to test cluster compatibility
> > version upgrades?
>
> Maybe we could add this to the upgrade suites? They are currently only
> running the engine, but I guess hosts could be added and the suites could
> be expanded to cover the entire upgrade process and not just the engine's.
>
> The upgrade suites are already run automatically by the change
> queue alongside the basic suite.
>
> --
> Barak Korren
> RHV DevOps team , RHCE, RHCi
> Red Hat EMEA
> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 

DANIEL BELENKY

RHV DEVOPS

EMEA VIRTUALIZATION R&D
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] Removing 4.0 build jobs from Jenkins

2017-07-20 Thread Daniel Belenky
Hi all,

In order to free up space on our Jenkins master host (jenkins.ovirt.org),
we'll need to remove some of the jobs and their artifacts.
For now, we are planning to remove all of the 4.0 jobs.
If there is any job that you want to keep, please reply to this email with
the details.

Thanks,
-- 

DANIEL BELENKY

Associate sw engineer

RHEV DEVOPS

EMEA VIRTUALIZATION R&D

Red Hat Israel <https://www.redhat.com/>

dbele...@redhat.com   IRC: #rhev-integ, #rhev-dev
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] [ OST Failure Report ] [ basic_suite_master ] [ 002_bootstrap.add_master_storage_domain ]

2017-07-19 Thread Daniel Belenky
dler.java:64)
at 
io.undertow.servlet.handlers.security.ServletSecurityConstraintHandler.handleRequest(ServletSecurityConstraintHandler.java:59)
at 
io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60)
at 
io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77)
at 
io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
at 
io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43)
at 
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at 
org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
at 
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at 
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at 
io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:292)
at 
io.undertow.servlet.handlers.ServletInitialHandler.access$100(ServletInitialHandler.java:81)
at 
io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:138)
at 
io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:135)
at 
io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:48)
at 
io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
at 
io.undertow.servlet.api.LegacyThreadSetupActionWrapper$1.call(LegacyThreadSetupActionWrapper.java:44)
at 
io.undertow.servlet.api.LegacyThreadSetupActionWrapper$1.call(LegacyThreadSetupActionWrapper.java:44)
at 
io.undertow.servlet.api.LegacyThreadSetupActionWrapper$1.call(LegacyThreadSetupActionWrapper.java:44)
at 
io.undertow.servlet.api.LegacyThreadSetupActionWrapper$1.call(LegacyThreadSetupActionWrapper.java:44)
at 
io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:272)
at 
io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81)
at 
io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:104)
at io.undertow.server.Connectors.executeRootHandler(Connectors.java:202)
at 
io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:805)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[rt.jar:1.8.0_131]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[rt.jar:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_131]

Thanks,
-- 

DANIEL BELENKY

Associate sw engineer

RHEV DEVOPS

EMEA VIRTUALIZATION R&D

Red Hat Israel <https://www.redhat.com/>

dbele...@redhat.com   IRC: #rhev-integ, #rhev-dev
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ Failed Build ] [ oVirt engine master ]

2017-07-19 Thread Daniel Belenky
The root cause of those failures is that [1] and [2] can't be built without
[3].

[1] https://gerrit.ovirt.org/#/c/78847/5
<https://gerrit.ovirt.org/#/c/78847/5>[2]
https://gerrit.ovirt.org/#/c/78848/5 <https://gerrit.ovirt.org/#/c/78848/5>
[3] https://gerrit.ovirt.org/#/c/78849/10
<https://gerrit.ovirt.org/#/c/78849/10>

On Wed, Jul 19, 2017 at 1:15 PM, Eyal Edri <ee...@redhat.com> wrote:

>
>
> On Wed, Jul 19, 2017 at 12:45 PM, Oved Ourfali <oourf...@redhat.com>
> wrote:
>
>> I also see that subsequent patches, such as [1] pass those tests:
>> 
>> 
>> Patch Set 4: Continuous-Integration+1
>> Build Successful
>> http://jenkins.ovirt.org/job/ovirt-engine_master_check-patch
>> -el7-x86_64/26661/ : SUCCESS
>> http://jenkins.ovirt.org/job/ovirt-engine_master_check-patch
>> -fc25-x86_64/10705/ : SUCCESS
>>
>
> check-patch doesn't run the same commands as 'check-merged' or
> 'build-artifacts'.
> I've deleted the workspaces for those jobs on the 2 slaves it failed on; if it
> was due to an old maven cache left there, it should work now,
> but we need to understand why it happened and maybe improve the code in
> check-merged/build-artifacts or in the cleanup code.
>
>
>
>> 
>> 
>> [1] https://gerrit.ovirt.org/#/c/79493/
>>
>>
>> On Wed, Jul 19, 2017 at 12:27 PM, Oved Ourfali <oourf...@redhat.com>
>> wrote:
>>
>>> Can we access the environment?
>>>
>>> On Wed, Jul 19, 2017 at 12:21 PM, Daniel Belenky <dbele...@redhat.com>
>>> wrote:
>>>
>>>> It's also removing /home/jenkins/.m2
>>>>
>>>> On Wed, 19 Jul 2017 at 12:16 Tomas Jelinek <tjeli...@redhat.com> wrote:
>>>>
>>>>> On Wed, Jul 19, 2017 at 11:02 AM, Daniel Belenky <dbele...@redhat.com>
>>>>> wrote:
>>>>>
>>>>>> I've added a build step that removes .m2 cache and the error still
>>>>>> appears.
>>>>>>
>>>>>
>>>>> are you sure you have removed the right m2 folder? I see this in the
>>>>> logs:
>>>>>
>>>>> rm -rf /root/.m2/repository/org/ovirt
>>>>>
>>>>> and than building it in
>>>>> /home/jenkins/
>>>>>
>>>>> ... along shot but seems suspicious.
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>> On Wed, Jul 19, 2017 at 11:46 AM, Eyal Edri <ee...@redhat.com> wrote:
>>>>>>
>>>>>>> Daniel,
>>>>>>> IIRC we have local maven cache on the slaves, please try to clean it
>>>>>>> first and re-run.
>>>>>>> If its not working, then I suggest engine maintainers will look into
>>>>>>> the command that is run inside 'check-merged.sh' and see if there is a
>>>>>>> leftover
>>>>>>> profile in Makefile or maven that still requires userportal.
>>>>>>>
>>>>>>> On Wed, Jul 19, 2017 at 11:43 AM, Oved Ourfali <oourf...@redhat.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> I discussed that offline with Eyal.
>>>>>>>> Perhaps some of the data there is cached.
>>>>>>>> He said he is on it.
>>>>>>>>
>>>>>>>> On Wed, Jul 19, 2017 at 11:22 AM, Daniel Belenky <
>>>>>>>> dbele...@redhat.com> wrote:
>>>>>>>>
>>>>>>>>> Hi, I've tried to re-trigger build-artifacts and it failed.
>>>>>>>>> What do you mean by 'brand new environment'? It's the
>>>>>>>>> build-artifacts and check-merged of this patch, not OST.
>>>>>>>>> Link:
>>>>>>>>> check-merged: http://jenkins.ovirt.org/job/o
>>>>>>>>> virt-engine_master_check-merged-el7-x86_64/5408/
>>>>>>>>> build-artifacts: http://jenkins.ovirt.org/job/o
>>>>>>>>> virt-engine_master_check-merged-el7-x86_64/5408/
>>>>>>>>>
>>>>>>>>> On Wed, Jul 19, 2017 at 10:56 AM, Oved Ourfali <
>>>>>>>>> oourf...@redhat.com> 

Re: [ovirt-devel] [ OST Failure Report ] [ Failed Build ] [ oVirt engine master ]

2017-07-19 Thread Daniel Belenky
It's also removing /home/jenkins/.m2
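
If both .m2 caches are already being wiped, the other stale state I'd suspect
is the git checkout in the workspace itself; roughly what "deleting the
workspace" amounts to (a hypothetical manual step, with the path taken from
the job log):

    cd /home/jenkins/workspace/ovirt-engine_master_build-artifacts-el7-x86_64/ovirt-engine
    git reset --hard HEAD   # discard local modifications to tracked files (e.g. a stale pom.xml)
    git clean -dffx         # drop untracked/ignored leftovers such as the removed userportal-gwtp dir
    git status --short      # should come back empty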

On Wed, 19 Jul 2017 at 12:16 Tomas Jelinek <tjeli...@redhat.com> wrote:

> On Wed, Jul 19, 2017 at 11:02 AM, Daniel Belenky <dbele...@redhat.com>
> wrote:
>
>> I've added a build step that removes .m2 cache and the error still
>> appears.
>>
>
> are you sure you have removed the right m2 folder? I see this in the logs:
>
> rm -rf /root/.m2/repository/org/ovirt
>
> and then building it in
> /home/jenkins/
>
> ... a long shot, but it seems suspicious.
>
>
>
>>
>> On Wed, Jul 19, 2017 at 11:46 AM, Eyal Edri <ee...@redhat.com> wrote:
>>
>>> Daniel,
>>> IIRC we have local maven cache on the slaves, please try to clean it
>>> first and re-run.
>>> If its not working, then I suggest engine maintainers will look into the
>>> command that is run inside 'check-merged.sh' and see if there is a leftover
>>> profile in Makefile or maven that still requires userportal.
>>>
>>> On Wed, Jul 19, 2017 at 11:43 AM, Oved Ourfali <oourf...@redhat.com>
>>> wrote:
>>>
>>>> I discussed that offline with Eyal.
>>>> Perhaps some of the data there is cached.
>>>> He said he is on it.
>>>>
>>>> On Wed, Jul 19, 2017 at 11:22 AM, Daniel Belenky <dbele...@redhat.com>
>>>> wrote:
>>>>
>>>>> Hi, I've tried to re-trigger build-artifacts and it failed.
>>>>> What do you mean by 'brand new environment'? It's the build-artifacts
>>>>> and check-merged of this patch, not OST.
>>>>> Link:
>>>>> check-merged:
>>>>> http://jenkins.ovirt.org/job/ovirt-engine_master_check-merged-el7-x86_64/5408/
>>>>> build-artifacts:
>>>>> http://jenkins.ovirt.org/job/ovirt-engine_master_check-merged-el7-x86_64/5408/
>>>>>
>>>>> On Wed, Jul 19, 2017 at 10:56 AM, Oved Ourfali <oourf...@redhat.com>
>>>>> wrote:
>>>>>
>>>>>> Can you re-test?
>>>>>> We have pushed a fix this morning.
>>>>>> Although not specifically for that.
>>>>>> Also, is that a brand new environment?
>>>>>> We removed the user portal, so it shouldn't look for the pom file.
>>>>>>
>>>>>> On Jul 19, 2017 09:46, "Daniel Belenky" <dbele...@redhat.com> wrote:
>>>>>>
>>>>>>> Hi all,
>>>>>>>
>>>>>>> This patch has [1] failed building, and every patch that was rebased
>>>>>>> on top of it failed too because of it.
>>>>>>>
>>>>>>> Error snippet:
>>>>>>>
>>>>>>> [ERROR] The build could not read 1 project -> [Help 1]
>>>>>>> [ERROR]
>>>>>>> [ERROR]   The project
>>>>>>> org.ovirt.engine.ui:webadmin-modules:4.2.0-SNAPSHOT
>>>>>>> (/home/jenkins/workspace/ovirt-engine_master_build-artifacts-el7-x86_64/ovirt-engine/frontend/webadmin/modules/pom.xml)
>>>>>>> has 1 error
>>>>>>> [ERROR] Child module
>>>>>>> /home/jenkins/workspace/ovirt-engine_master_build-artifacts-el7-x86_64/ovirt-engine/frontend/webadmin/modules/userportal-gwtp
>>>>>>> of
>>>>>>> /home/jenkins/workspace/ovirt-engine_master_build-artifacts-el7-x86_64/ovirt-engine/frontend/webadmin/modules/pom.xml
>>>>>>> does not exist
>>>>>>> [ERROR]
>>>>>>> [ERROR] To see the full stack trace of the errors, re-run Maven with
>>>>>>> the -e switch.
>>>>>>> [ERROR] Re-run Maven using the -X switch to enable full debug
>>>>>>> logging.
>>>>>>> [ERROR]
>>>>>>> [ERROR] For more information about the errors and possible
>>>>>>> solutions, please read the following articles:
>>>>>>> [ERROR] [Help 1]
>>>>>>> http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
>>>>>>> make: *** [clean] Error 1
>>>>>>>
>>>>>>> [1] https://gerrit.ovirt.org/#/c/78847/5
>>>>>>>
>>>>>>> Thanks,
>>>>>>> --
>>>>>>>
>>>>>>> DANIEL BELENKY
>>>>>>>
>>>>>>> Associate sw engineer
>>>>>>>
>>>>>&

Re: [ovirt-devel] [ OST Failure Report ] [ Failed Build ] [ oVirt engine master ]

2017-07-19 Thread Daniel Belenky
I've added a build step that removes .m2 cache and the error still appears.
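
For context, the kind of cleanup step being discussed looks roughly like this
(illustrative sketch only, not the exact build step that was added):

    # wipe the cached org.ovirt artifacts for both users the build may run as
    rm -rf /root/.m2/repository/org/ovirt
    rm -rf /home/jenkins/.m2/repository/org/ovirt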

On Wed, Jul 19, 2017 at 11:46 AM, Eyal Edri <ee...@redhat.com> wrote:

> Daniel,
> IIRC we have local maven cache on the slaves, please try to clean it first
> and re-run.
> If its not working, then I suggest engine maintainers will look into the
> command that is run inside 'check-merged.sh' and see if there is a leftover
> profile in Makefile or maven that still requires userportal.
>
> On Wed, Jul 19, 2017 at 11:43 AM, Oved Ourfali <oourf...@redhat.com>
> wrote:
>
>> I discussed that offline with Eyal.
>> Perhaps some of the data there is cached.
>> He said he is on it.
>>
>> On Wed, Jul 19, 2017 at 11:22 AM, Daniel Belenky <dbele...@redhat.com>
>> wrote:
>>
>>> Hi, I've tried to re-trigger build-artifacts and it failed.
>>> What do you mean by 'brand new environment'? It's the build-artifacts
>>> and check-merged of this patch, not OST.
>>> Link:
>>> check-merged: http://jenkins.ovirt.org/job/o
>>> virt-engine_master_check-merged-el7-x86_64/5408/
>>> build-artifacts: http://jenkins.ovirt.org/job/o
>>> virt-engine_master_check-merged-el7-x86_64/5408/
>>>
>>> On Wed, Jul 19, 2017 at 10:56 AM, Oved Ourfali <oourf...@redhat.com>
>>> wrote:
>>>
>>>> Can you re-test?
>>>> We have pushed a fix this morning.
>>>> Although not specifically for that.
>>>> Also, is that a brand new environment?
>>>> We removed the user portal, so it shouldn't look for the pom file.
>>>>
>>>> On Jul 19, 2017 09:46, "Daniel Belenky" <dbele...@redhat.com> wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> This patch has [1] failed building, and every patch that was rebased
>>>>> on top of it failed too because of it.
>>>>>
>>>>> Error snippet:
>>>>>
>>>>> [ERROR] The build could not read 1 project -> [Help 1]
>>>>> [ERROR]
>>>>> [ERROR]   The project org.ovirt.engine.ui:webadmin-modules:4.2.0-SNAPSHOT
>>>>> (/home/jenkins/workspace/ovirt-engine_master_build-artifacts
>>>>> -el7-x86_64/ovirt-engine/frontend/webadmin/modules/pom.xml) has 1
>>>>> error
>>>>> [ERROR] Child module /home/jenkins/workspace/ovirt-
>>>>> engine_master_build-artifacts-el7-x86_64/ovirt-engine/fronte
>>>>> nd/webadmin/modules/userportal-gwtp of /home/jenkins/workspace/ovirt-
>>>>> engine_master_build-artifacts-el7-x86_64/ovirt-engine/frontend/webadmin/modules/pom.xml
>>>>> does not exist
>>>>> [ERROR]
>>>>> [ERROR] To see the full stack trace of the errors, re-run Maven with
>>>>> the -e switch.
>>>>> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
>>>>> [ERROR]
>>>>> [ERROR] For more information about the errors and possible solutions,
>>>>> please read the following articles:
>>>>> [ERROR] [Help 1] http://cwiki.apache.org/conflu
>>>>> ence/display/MAVEN/ProjectBuildingException
>>>>> make: *** [clean] Error 1
>>>>>
>>>>> [1] https://gerrit.ovirt.org/#/c/78847/5
>>>>>
>>>>> Thanks,
>>>>> --
>>>>>
>>>>> DANIEL BELENKY
>>>>>
>>>>> Associate sw engineer
>>>>>
>>>>> RHEV DEVOPS
>>>>>
>>>>> EMEA VIRTUALIZATION R
>>>>>
>>>>> Red Hat Israel <https://www.redhat.com/>
>>>>>
>>>>> dbele...@redhat.comIRC: #rhev-integ, #rhev-dev
>>>>> <https://red.ht/sig>
>>>>>
>>>>> ___
>>>>> Devel mailing list
>>>>> Devel@ovirt.org
>>>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>>>
>>>>
>>>
>>>
>>> --
>>>
>>> DANIEL BELENKY
>>>
>>> Associate sw engineer
>>>
>>> RHEV DEVOPS
>>>
>>> EMEA VIRTUALIZATION R
>>>
>>> Red Hat Israel <https://www.redhat.com/>
>>>
>>> dbele...@redhat.comIRC: #rhev-integ, #rhev-dev
>>> <https://red.ht/sig>
>>>
>>
>>
>
>
> --
>
> Eyal edri
>
>
> ASSOCIATE MANAGER
>
> RHV DevOps
>
> EMEA VIRTUALIZATION R
>
>
> Red Hat EMEA <https://www.redhat.com/>
> <https://red.ht/sig> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
> phone: +972-9-7692018 <+972%209-769-2018>
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>



-- 

DANIEL BELENKY

Associate sw engineer

RHEV DEVOPS

EMEA VIRTUALIZATION R&D

Red Hat Israel <https://www.redhat.com/>

dbele...@redhat.com   IRC: #rhev-integ, #rhev-dev
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ Failed Build ] [ oVirt engine master ]

2017-07-19 Thread Daniel Belenky
Hi, I've tried to re-trigger build-artifacts and it failed.
What do you mean by 'brand new environment'? It's the build-artifacts and
check-merged of this patch, not OST.
Link:
check-merged:
http://jenkins.ovirt.org/job/ovirt-engine_master_check-merged-el7-x86_64/5408/
build-artifacts:
http://jenkins.ovirt.org/job/ovirt-engine_master_check-merged-el7-x86_64/5408/

On Wed, Jul 19, 2017 at 10:56 AM, Oved Ourfali <oourf...@redhat.com> wrote:

> Can you re-test?
> We have pushed a fix this morning.
> Although not specifically for that.
> Also, is that a brand new environment?
> We removed the user portal, so it shouldn't look for the pom file.
>
> On Jul 19, 2017 09:46, "Daniel Belenky" <dbele...@redhat.com> wrote:
>
>> Hi all,
>>
>> This patch has [1] failed building, and every patch that was rebased on
>> top of it failed too because of it.
>>
>> Error snippet:
>>
>> [ERROR] The build could not read 1 project -> [Help 1]
>> [ERROR]
>> [ERROR]   The project org.ovirt.engine.ui:webadmin-modules:4.2.0-SNAPSHOT
>> (/home/jenkins/workspace/ovirt-engine_master_build-artifacts
>> -el7-x86_64/ovirt-engine/frontend/webadmin/modules/pom.xml) has 1 error
>> [ERROR] Child module /home/jenkins/workspace/ovirt-
>> engine_master_build-artifacts-el7-x86_64/ovirt-engine/fronte
>> nd/webadmin/modules/userportal-gwtp of /home/jenkins/workspace/ovirt-
>> engine_master_build-artifacts-el7-x86_64/ovirt-engine/frontend/webadmin/modules/pom.xml
>> does not exist
>> [ERROR]
>> [ERROR] To see the full stack trace of the errors, re-run Maven with the
>> -e switch.
>> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
>> [ERROR]
>> [ERROR] For more information about the errors and possible solutions,
>> please read the following articles:
>> [ERROR] [Help 1] http://cwiki.apache.org/conflu
>> ence/display/MAVEN/ProjectBuildingException
>> make: *** [clean] Error 1
>>
>> [1] https://gerrit.ovirt.org/#/c/78847/5
>>
>> Thanks,
>> --
>>
>> DANIEL BELENKY
>>
>> Associate sw engineer
>>
>> RHEV DEVOPS
>>
>> EMEA VIRTUALIZATION R
>>
>> Red Hat Israel <https://www.redhat.com/>
>>
>> dbele...@redhat.comIRC: #rhev-integ, #rhev-dev
>> <https://red.ht/sig>
>>
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>


-- 

DANIEL BELENKY

Associate sw engineer

RHEV DEVOPS

EMEA VIRTUALIZATION R&D

Red Hat Israel <https://www.redhat.com/>

dbele...@redhat.com   IRC: #rhev-integ, #rhev-dev
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] [ OST Failure Report ] [ Failed Build ] [ oVirt engine master ]

2017-07-19 Thread Daniel Belenky
Hi all,

This patch [1] has failed to build, and every patch that was rebased on top
of it has failed as well.

Error snippet:

[ERROR] The build could not read 1 project -> [Help 1]
[ERROR]
[ERROR]   The project org.ovirt.engine.ui:webadmin-modules:4.2.0-SNAPSHOT
(/home/jenkins/workspace/ovirt-engine_master_build-artifacts-el7-x86_64/ovirt-engine/frontend/webadmin/modules/pom.xml)
has 1 error
[ERROR] Child module
/home/jenkins/workspace/ovirt-engine_master_build-artifacts-el7-x86_64/ovirt-engine/frontend/webadmin/modules/userportal-gwtp
of
/home/jenkins/workspace/ovirt-engine_master_build-artifacts-el7-x86_64/ovirt-engine/frontend/webadmin/modules/pom.xml
does not exist
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions,
please read the following articles:
[ERROR] [Help 1]
http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
make: *** [clean] Error 1

[1] https://gerrit.ovirt.org/#/c/78847/5

Thanks,
-- 

DANIEL BELENKY

Associate sw engineer

RHEV DEVOPS

EMEA VIRTUALIZATION R&D

Red Hat Israel <https://www.redhat.com/>

dbele...@redhat.com   IRC: #rhev-integ, #rhev-dev
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] Failed to build artifacts for oVirt master

2017-07-18 Thread Daniel Belenky
Hi all,

The following patches are failing to build artifacts:

   1. https://gerrit.ovirt.org/#/c/79540/1
   2. https://gerrit.ovirt.org/#/c/79433/3

It seems, though, that the root cause of the failure is patch [2]: patch [1]
was merged after patch [2], and patch [2] had already failed to build artifacts.

Error snippet from Console Output:

...
...
[INFO] UserPortal  FAILURE [1:34.567s]
...
...
[ERROR] Failed to execute goal
org.codehaus.mojo:gwt-maven-plugin:2.8.0:compile (gwtcompile) on
project userportal: Command [[
[ERROR] /bin/sh -c
'/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.131-3.b12.el7_3.x86_64/jre/bin/java'
'-javaagent:/root/.m2/repository/org/aspectj/aspectjweaver/1.8.10/aspectjweaver-1.8.10.jar'
'-Dgwt.jjs.permutationWorkerFactory=com.google.gwt.dev.ThreadedPermutationWorkerFactory'
'-Dgwt.jjs.maxThreads=4'
'-Djava.io.tmpdir=/home/jenkins/workspace/ovirt-engine_master_build-artifacts-el7-x86_64/ovirt-engine/rpmbuild/BUILD/ovirt-engine-4.2.0/frontend/webadmin/modules/userportal-gwtp/target/tmp'
'-Djava.util.prefs.systemRoot=/home/jenkins/workspace/ovirt-engine_master_build-artifacts-el7-x86_64/ovirt-engine/rpmbuild/BUILD/ovirt-engine-4.2.0/frontend/webadmin/modules/userportal-gwtp/target/tmp'
'-Djava.util.prefs.userRoot=/home/jenkins/workspace/ovirt-engine_master_build-artifacts-el7-x86_64/ovirt-engine/rpmbuild/BUILD/ovirt-engine-4.2.0/frontend/webadmin/modules/userportal-gwtp/target/tmp'
'-Djava.util.logging.config.class=org.ovirt.engine.ui.gwtaop.JavaLoggingConfig'
'-Xms1G' '-Xmx4G'
'-Dgwt.dontPrune=org\.ovirt\.engine\.core\.(common|compat)\..*'
'com.google.gwt.dev.Compiler' '-logLevel' 'INFO' '-war'
'/home/jenkins/workspace/ovirt-engine_master_build-artifacts-el7-x86_64/ovirt-engine/rpmbuild/BUILD/ovirt-engine-4.2.0/frontend/webadmin/modules/userportal-gwtp/target/generated-gwt'
'-localWorkers' '1' '-failOnError' '-XfragmentCount' '-1'
'-sourceLevel' 'auto' '-style' 'OBF' '-gen'
'/home/jenkins/workspace/ovirt-engine_master_build-artifacts-el7-x86_64/ovirt-engine/rpmbuild/BUILD/ovirt-engine-4.2.0/frontend/webadmin/modules/userportal-gwtp/gen'
'org.ovirt.engine.ui.userportal.UserPortal'
[ERROR] ]] failed with status 1
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with
the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions,
please read the following articles:
[ERROR] [Help 1]
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :userportal

-- 

DANIEL BELENKY

Associate sw engineer

RHEV DEVOPS

EMEA VIRTUALIZATION R&D

Red Hat Israel <https://www.redhat.com/>

dbele...@redhat.com   IRC: #rhev-integ, #rhev-dev
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt 4.1 ] [ 28/06/17 ] [ 000_check_repo_closure ]

2017-06-28 Thread Daniel Belenky
> On Wed, Jun 28, 2017 at 11:12 AM, Anton Marchukov <amarc...@redhat.com>
> wrote:
>
>> Test failed: 000_check_repo_closure.check_repo_closure
>>
>> Link to suspected patches:
>>
>> Link to Job: http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_4.
>> 1/1783
>>
>> Link to all logs: http://jenkins.ovirt.org/job/t
>> est-repo_ovirt_experimental_4.1/1783/testReport/junit/(root)
>> /000_check_repo_closure/check_repo_closure/
>>
>>
>> Error snippet from the log:
>>
>> package: cockpit-ostree-138-1.el7.x86_64 from internal_repo
>>   unresolved deps:
>>  /usr/libexec/rpm-ostreed
>> package: python2-botocore-1.4.43-1.el7.noarch from internal_repo
>>   unresolved deps:
>>  python-dateutil >= 0:2.1
>>
>>
>> We seem to have two new repoclosure problems here.
>>
>> 1. The first one is due to python2-botocore from
>> http://mirror.centos.org/centos/7/virt/x86_64/ovirt-4.0/common/ now
>> requiring python-dateutils version 2.1 while we have only 1.5 in our
>> dependencies.
>>
>
> Thanks, tagged python-dateutils for release: http://cbs.centos.
> org/koji/buildinfo?buildID=12099
> But why are you using 4.0 Virt SIG repos in a 4.1 job? It doesn't make
> sense.
>
>
>
>>
>> 2. The second one might be just a missing include in OST. Which
>> package/repo should we consume rom-ostreed from?
>>
>
> rpm-ostree is available in the CentOS Atomic SIG repo. We can't disable
> the cockpit-ostree subpackage build in Virt SIG, which provides recent
> cockpit, but we don't really want to tag rpm-ostree in the Virt SIG repo.
> It's shipped in testing repo for atomic here: https://buildlogs.
> centos.org/centos/7/atomic/x86_64/
>
> I would suggest using Virt SIG as a lookaside repo instead of as a checked
> repo.
>

Virt SIG is a lookaside repo; the only repo that is being checked is the
internal_repo.
We could exclude cockpit-ostree from the reposync file (as was done in
master yesterday).
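
For reference, the kind of change that was applied on master is a one-line
exclude in the reposync config; a sketch (the section name below is my
assumption about which repo ships cockpit-ostree, and 'exclude' is a standard
yum repo option):

    # illustrative only; adjust the section name to the repo that actually provides cockpit-ostree
    sed -i '/^\[centos-ovirt-4.0-el7\]/a exclude = cockpit-ostree*' \
        basic-suite-master/reposync-config.repo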



-- 

DANIEL BELENKY

Associate sw engineer

RHEV DEVOPS

Red Hat Israel <https://www.redhat.com/>

dbele...@redhat.com   IRC: #rhev-integ, #rhev-dev
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] OST failing in 004_basic_sanity add_filter_parameter

2017-06-28 Thread Daniel Belenky
Hi,

Can you please provide more logs? lago.log and the test_logs directory would
help us get to the bottom of the issue.
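
If it helps, something like the following collects them from a local run (a
sketch; exact paths vary per setup, so it locates the files rather than
hard-coding a prefix):

    cd ovirt-system-tests
    find . -name lago.log -o -type d -name test_logs        # locate what to attach
    tar czf ost-debug-logs.tar.gz $(find . -name lago.log -o -type d -name test_logs)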

Thanks,

On Tue, Jun 27, 2017 at 6:16 PM, Valentina Makarova <makarovav...@gmail.com>
wrote:

> Hello!
>
> After pulling the latest master branch of ovirt_system_tests, the 004 test fails
> in add_filter_parameter in basic-suite-master with this error:
>
> AttributeError: 'VmNicService' object has no attribute
> 'network_filter_parameters_service'
>
> After the first failing run I updated all yum packages using yum upgrade,
> and updated lago, lago-ovirt and ovirt-sdk-python
> using pip install --upgrade. I now have the following versions of these packages:
> lago (0.39.0), lago-ovirt (0.41.0)
> ovirt-engine-sdk-python (4.1.5)
> But the error still occurs. (I run ./run_suite.sh without -s, so it should
> update the internal repo.)
>
> Did I forget to update something else?
>
> Sincerely, Valentina Makarova
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 

DANIEL BELENKY

Associate sw engineer

RHEV DEVOPS

Red Hat Israel <https://www.redhat.com/>

dbele...@redhat.com   IRC: #rhev-integ, #rhev-dev
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] OST failing in 000_check_repo_closure

2017-06-27 Thread Daniel Belenky
Thanks for the feedback, Milan and Roy.
I've also managed to successfully verify this change on all of the related
suites.
We've merged this change, so please update your local ost repo.

On Tue, Jun 27, 2017 at 5:19 PM, Roy Golan <rgo...@redhat.com> wrote:

> Works for me as well.
>
> On Tue, Jun 27, 2017 at 4:58 PM Milan Zamazal <mzama...@redhat.com> wrote:
>
>> Daniel Belenky <dbele...@redhat.com> writes:
>>
>> > I've sent a workaround patch [1] that supposes to fix this.
>> > Please notice that I'm still testing it on the different suites in OST
>> > (basic_suite_master is working).
>> > You can try to run OST locally with this patch
>> >
>> > [1] https://gerrit.ovirt.org/#/c/78710/1
>>
>> Yes, this workaround suppresses the problem, thank you.
>>
>> > On Tue, Jun 27, 2017 at 2:35 PM, Milan Zamazal <mzama...@redhat.com>
>> wrote:
>> >
>> >> Daniel Belenky <dbele...@redhat.com> writes:
>> >>
>> >> > can you please attach the lago log here?
>> >>
>> >> Here it is:
>> >>
>> >>
>> >>
>> >> > On Tue, Jun 27, 2017 at 2:21 PM, Milan Zamazal <mzama...@redhat.com>
>> >> wrote:
>> >> >
>> >> >> Daniel Belenky <dbele...@redhat.com> writes:
>> >> >>
>> >> >> > The same test is working in the experimental flow, so looks like a
>> >> local
>> >> >> > issue.
>> >> >> > Are you using the most up to date master branch of ost?
>> >> >>
>> >> >> Yes, the latest version from git.
>> >> >>
>> >> >> > On Tue, Jun 27, 2017 at 1:58 PM, Milan Zamazal <
>> mzama...@redhat.com>
>> >> >> wrote:
>> >> >> >
>> >> >> >> I tried to run OST on current basic-suite-master and it fails in
>> >> >> >> 000_check_repo_closure.  Is it only my local problem or is there
>> >> >> >> something broken?  How can I get OST running again?
>> >> >> >>
>> >> >> >> ## Params: ['repoclosure', '-t', '--config=/home/pdm/ovirt/
>> >> >> >> lago/ovirt-system-tests/basic-suite-master/reposync-config.
>> >> >> repo_repoclosure',
>> >> >> >> '--lookaside=ovirt-master-tested-el7',
>> '--lookaside=ovirt-master-
>> >> >> snapshot-static-el7',
>> >> >> >> '--lookaside=glusterfs-3.10-el7', '--lookaside=centos-updates-
>> el7',
>> >> >> >> '--lookaside=centos-base-el7', '--lookaside=centos-extras-el7',
>> >> >> >> '--lookaside=epel-el7', '--lookaside=centos-ovirt-4.0-el7',
>> >> >> >> '--lookaside=centos-kvm-common-el7',
>> '--lookaside=centos-opstools-
>> >> >> testing-el7',
>> >> >> >> '--lookaside=copr-sac-gdeploy-el7', '--repoid=internal_repo'].
>> >> >> >> ## Exist status: 1
>> >> >> >> ## Output: Reading in repository metadata - please wait
>> >> >> >> Checking Dependencies
>> >> >> >> Repos looked at: 12
>> >> >> >>centos-base-el7
>> >> >> >>centos-extras-el7
>> >> >> >>centos-kvm-common-el7
>> >> >> >>centos-opstools-testing-el7
>> >> >> >>centos-ovirt-4.0-el7
>> >> >> >>centos-updates-el7
>> >> >> >>copr-sac-gdeploy-el7
>> >> >> >>epel-el7
>> >> >> >>glusterfs-3.10-el7
>> >> >> >>internal_repo
>> >> >> >>ovirt-master-snapshot-static-el7
>> >> >> >>ovirt-master-tested-el7
>> >> >> >> Num Packages in Repos: 37128
>> >> >> >> package: cockpit-ostree-138-1.el7.x86_64 from internal_repo
>> >> >> >>   unresolved deps:
>> >> >> >>  /usr/libexec/rpm-ostreed
>> >> >> >> package: python2-botocore-1.4.43-1.el7.noarch from internal_repo
>> >> >> >>   unresolved deps:
>> >> >> >>  python-dateutil >= 0:2.1
>> >> >> >>
>> >> >> >> Thanks,
>> >> >> >> Milan
>> >> >> >> ___
>> >> >> >> Devel mailing list
>> >> >> >> Devel@ovirt.org
>> >> >> >> http://lists.ovirt.org/mailman/listinfo/devel
>> >> >> >>
>> >> >>
>> >>
>> >>
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>


-- 

DANIEL BELENKY

Associate sw engineer

RHEV DEVOPS

Red Hat Israel <https://www.redhat.com/>

dbele...@redhat.com   IRC: #rhev-integ, #rhev-dev
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] OST failing in 000_check_repo_closure

2017-06-27 Thread Daniel Belenky
I've sent a workaround patch [1] that should fix this.
Please note that I'm still testing it on the different OST suites
(basic_suite_master is passing).
You can try running OST locally with this patch.

[1] https://gerrit.ovirt.org/#/c/78710/1
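
If you want to try it before it is merged, the usual Gerrit checkout flow is
roughly the following (the ref below corresponds to patchset 1 of change 78710):

    cd ovirt-system-tests
    git fetch https://gerrit.ovirt.org/ovirt-system-tests refs/changes/10/78710/1
    git checkout FETCH_HEAD
    ./run_suite.sh basic-suite-master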

On Tue, Jun 27, 2017 at 2:35 PM, Milan Zamazal <mzama...@redhat.com> wrote:

> Daniel Belenky <dbele...@redhat.com> writes:
>
> > can you please attach the lago log here?
>
> Here it is:
>
>
>
> > On Tue, Jun 27, 2017 at 2:21 PM, Milan Zamazal <mzama...@redhat.com>
> wrote:
> >
> >> Daniel Belenky <dbele...@redhat.com> writes:
> >>
> >> > The same test is working in the experimental flow, so looks like a
> local
> >> > issue.
> >> > Are you using the most up to date master branch of ost?
> >>
> >> Yes, the latest version from git.
> >>
> >> > On Tue, Jun 27, 2017 at 1:58 PM, Milan Zamazal <mzama...@redhat.com>
> >> wrote:
> >> >
> >> >> I tried to run OST on current basic-suite-master and it fails in
> >> >> 000_check_repo_closure.  Is it only my local problem or is there
> >> >> something broken?  How can I get OST running again?
> >> >>
> >> >> ## Params: ['repoclosure', '-t', '--config=/home/pdm/ovirt/
> >> >> lago/ovirt-system-tests/basic-suite-master/reposync-config.
> >> repo_repoclosure',
> >> >> '--lookaside=ovirt-master-tested-el7', '--lookaside=ovirt-master-
> >> snapshot-static-el7',
> >> >> '--lookaside=glusterfs-3.10-el7', '--lookaside=centos-updates-el7',
> >> >> '--lookaside=centos-base-el7', '--lookaside=centos-extras-el7',
> >> >> '--lookaside=epel-el7', '--lookaside=centos-ovirt-4.0-el7',
> >> >> '--lookaside=centos-kvm-common-el7', '--lookaside=centos-opstools-
> >> testing-el7',
> >> >> '--lookaside=copr-sac-gdeploy-el7', '--repoid=internal_repo'].
> >> >> ## Exist status: 1
> >> >> ## Output: Reading in repository metadata - please wait
> >> >> Checking Dependencies
> >> >> Repos looked at: 12
> >> >>centos-base-el7
> >> >>centos-extras-el7
> >> >>centos-kvm-common-el7
> >> >>centos-opstools-testing-el7
> >> >>centos-ovirt-4.0-el7
> >> >>centos-updates-el7
> >> >>copr-sac-gdeploy-el7
> >> >>epel-el7
> >> >>glusterfs-3.10-el7
> >> >>internal_repo
> >> >>ovirt-master-snapshot-static-el7
> >> >>ovirt-master-tested-el7
> >> >> Num Packages in Repos: 37128
> >> >> package: cockpit-ostree-138-1.el7.x86_64 from internal_repo
> >> >>   unresolved deps:
> >> >>  /usr/libexec/rpm-ostreed
> >> >> package: python2-botocore-1.4.43-1.el7.noarch from internal_repo
> >> >>   unresolved deps:
> >> >>  python-dateutil >= 0:2.1
> >> >>
> >> >> Thanks,
> >> >> Milan
> >> >> ___
> >> >> Devel mailing list
> >> >> Devel@ovirt.org
> >> >> http://lists.ovirt.org/mailman/listinfo/devel
> >> >>
> >>
>
>


-- 

DANIEL BELENKY

Associate sw engineer

RHEV DEVOPS

Red Hat Israel <https://www.redhat.com/>

dbele...@redhat.com   IRC: #rhev-integ, #rhev-dev
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] OST failing in 000_check_repo_closure

2017-06-27 Thread Daniel Belenky
can you please attach the lago log here?

On Tue, Jun 27, 2017 at 2:21 PM, Milan Zamazal <mzama...@redhat.com> wrote:

> Daniel Belenky <dbele...@redhat.com> writes:
>
> > The same test is working in the experimental flow, so looks like a local
> > issue.
> > Are you using the most up to date master branch of ost?
>
> Yes, the latest version from git.
>
> > On Tue, Jun 27, 2017 at 1:58 PM, Milan Zamazal <mzama...@redhat.com>
> wrote:
> >
> >> I tried to run OST on current basic-suite-master and it fails in
> >> 000_check_repo_closure.  Is it only my local problem or is there
> >> something broken?  How can I get OST running again?
> >>
> >> ## Params: ['repoclosure', '-t', '--config=/home/pdm/ovirt/
> >> lago/ovirt-system-tests/basic-suite-master/reposync-config.
> repo_repoclosure',
> >> '--lookaside=ovirt-master-tested-el7', '--lookaside=ovirt-master-
> snapshot-static-el7',
> >> '--lookaside=glusterfs-3.10-el7', '--lookaside=centos-updates-el7',
> >> '--lookaside=centos-base-el7', '--lookaside=centos-extras-el7',
> >> '--lookaside=epel-el7', '--lookaside=centos-ovirt-4.0-el7',
> >> '--lookaside=centos-kvm-common-el7', '--lookaside=centos-opstools-
> testing-el7',
> >> '--lookaside=copr-sac-gdeploy-el7', '--repoid=internal_repo'].
> >> ## Exist status: 1
> >> ## Output: Reading in repository metadata - please wait
> >> Checking Dependencies
> >> Repos looked at: 12
> >>centos-base-el7
> >>centos-extras-el7
> >>centos-kvm-common-el7
> >>centos-opstools-testing-el7
> >>centos-ovirt-4.0-el7
> >>centos-updates-el7
> >>copr-sac-gdeploy-el7
> >>epel-el7
> >>glusterfs-3.10-el7
> >>internal_repo
> >>ovirt-master-snapshot-static-el7
> >>ovirt-master-tested-el7
> >> Num Packages in Repos: 37128
> >> package: cockpit-ostree-138-1.el7.x86_64 from internal_repo
> >>   unresolved deps:
> >>  /usr/libexec/rpm-ostreed
> >> package: python2-botocore-1.4.43-1.el7.noarch from internal_repo
> >>   unresolved deps:
> >>  python-dateutil >= 0:2.1
> >>
> >> Thanks,
> >> Milan
> >> ___
> >> Devel mailing list
> >> Devel@ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/devel
> >>
>



-- 

DANIEL BELENKY

Associate sw engineer

RHEV DEVOPS

Red Hat Israel <https://www.redhat.com/>

dbele...@redhat.com   IRC: #rhev-integ, #rhev-dev
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] OST failing in 000_check_repo_closure

2017-06-27 Thread Daniel Belenky
Hi,

The same test is working in the experimental flow, so it looks like a local
issue.
Are you using the most up-to-date master branch of OST?

On Tue, Jun 27, 2017 at 1:58 PM, Milan Zamazal <mzama...@redhat.com> wrote:

> I tried to run OST on current basic-suite-master and it fails in
> 000_check_repo_closure.  Is it only my local problem or is there
> something broken?  How can I get OST running again?
>
> ## Params: ['repoclosure', '-t', '--config=/home/pdm/ovirt/
> lago/ovirt-system-tests/basic-suite-master/reposync-config.repo_repoclosure',
> '--lookaside=ovirt-master-tested-el7', 
> '--lookaside=ovirt-master-snapshot-static-el7',
> '--lookaside=glusterfs-3.10-el7', '--lookaside=centos-updates-el7',
> '--lookaside=centos-base-el7', '--lookaside=centos-extras-el7',
> '--lookaside=epel-el7', '--lookaside=centos-ovirt-4.0-el7',
> '--lookaside=centos-kvm-common-el7', 
> '--lookaside=centos-opstools-testing-el7',
> '--lookaside=copr-sac-gdeploy-el7', '--repoid=internal_repo'].
> ## Exist status: 1
> ## Output: Reading in repository metadata - please wait
> Checking Dependencies
> Repos looked at: 12
>centos-base-el7
>centos-extras-el7
>centos-kvm-common-el7
>centos-opstools-testing-el7
>centos-ovirt-4.0-el7
>centos-updates-el7
>copr-sac-gdeploy-el7
>epel-el7
>glusterfs-3.10-el7
>internal_repo
>ovirt-master-snapshot-static-el7
>ovirt-master-tested-el7
> Num Packages in Repos: 37128
> package: cockpit-ostree-138-1.el7.x86_64 from internal_repo
>   unresolved deps:
>  /usr/libexec/rpm-ostreed
> package: python2-botocore-1.4.43-1.el7.noarch from internal_repo
>   unresolved deps:
>  python-dateutil >= 0:2.1
>
> Thanks,
> Milan
> _______
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 

DANIEL BELENKY

Associate sw engineer

RHEV DEVOPS

Red Hat Israel <https://www.redhat.com/>

dbele...@redhat.com   IRC: #rhev-integ, #rhev-dev
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] [ OST Failue Report ] [ oVirt master ] [ 7/6/17 ] engine upgrade: 001_upgrade_engine

2017-06-07 Thread Daniel Belenky
Hi all,
oVirt engine upgrade test failed.

Version failing: upgrade from 4.1 to master | upgrade from 4.0 to master
Link to failed job: test-repo_ovirt_experimental_master/7091/
<http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/7091/>
Link to suites logs: logs
<http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/7091/artifact/exported-artifacts/upgrade-from-prevrelease-suit-master-el7/test_logs/upgrade-from-prevrelease-suite-master/>
Suspected patch: Gerrit 77254: packaging: ovirt-provider-ovn password must
not be empty by default <https://gerrit.ovirt.org/#/c/77254/>
Potential fix(es):
[1] 77934: Add ovirtProviderOvnPassword to the answer file
<https://gerrit.ovirt.org/#/c/77934/>
[2] 77975: Update upgrade engine answer-file with ovirtProviderOvnPassword
<https://gerrit.ovirt.org/#/c/77975/>

It seems that although the answer file includes the OVN password,
engine-setup ignores it.
Can anyone provide more information, or suggest actions that can be taken to
fix this issue?
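
Two quick checks that might narrow it down (a sketch using generic
engine-setup log locations; the answer-file path below is a placeholder, not
the suite's actual file):

    # on the upgrade-suite engine VM
    grep -i ovirtProviderOvnPassword /path/to/suite-answer-file.conf        # placeholder path
    grep -i ovn /var/log/ovirt-engine/setup/ovirt-engine-setup-*.log | tail -n 20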

Thanks,
-- 

DANIEL BELENKY

Associate sw engineer

RHEV DEVOPS

Red Hat Israel <https://www.redhat.com/>

dbele...@redhat.com   IRC: #rhev-integ, #rhev-dev
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Manual Job ] Updated default fallback repo

2017-05-22 Thread Daniel Belenky
As master is not a released version, this change doesn't affect master.
(Either way, you're getting RPMs from the tested repo)

On Mon, May 22, 2017 at 11:52 AM, Yedidyah Bar David <d...@redhat.com>
wrote:

> On Mon, May 22, 2017 at 11:38 AM, Daniel Belenky <dbele...@redhat.com>
> wrote:
>
>> Hi all,
>>
>> Please note that we've updated the default fallback repo in manual job
>> <http://jenkins.ovirt.org/job/ovirt-system-tests_manual/>.
>> The fallback repo defines the source from which the RPMs that are not
>> part of the patch being tested are being taken.
>> Until today, the default source was 'latest_tested' repo, which is now
>> replaced by 'latest_release'.
>>
>> *Note* that 'latest_tested' repository shouldn't be used on regular
>> basis, as this repository is changing as the
>> experimental flow runs, so in order to avoid failures - please use the
>> stable 'latest_release' repo, which contains all the
>> rpm's that were released in the last official release of oVirt.
>>
>
> Does this apply to master branch too?
>
> What if I run the manual job on a patch for the master branch of project
> A, that requires a package from a build of project B, that is newer than
> what's in the last release (or was not released yet at all)?
>
>
>>
>> Thanks,
>> --
>>
>> DANIEL BELENKY
>>
>> Associate sw engineer
>>
>> RHEV DEVOPS
>>
>> Red Hat Israel <https://www.redhat.com/>
>>
>> dbele...@redhat.comIRC: #rhev-integ, #rhev-dev
>> <https://red.ht/sig>
>>
>> ___
>> Infra mailing list
>> in...@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/infra
>>
>>
>
>
> --
> Didi
>



-- 

DANIEL BELENKY

Associate sw engineer

RHEV DEVOPS

Red Hat Israel <https://www.redhat.com/>

dbele...@redhat.com   IRC: #rhev-integ, #rhev-dev
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] [ OST Manual Job ] Updated default fallback repo

2017-05-22 Thread Daniel Belenky
Hi all,

Please note that we've updated the default fallback repo in manual job
<http://jenkins.ovirt.org/job/ovirt-system-tests_manual/>.
The fallback repo defines the source from which the RPMs that are not part
of the patch being tested are taken.
Until today, the default source was the 'latest_tested' repo; it has now been
replaced by 'latest_release'.

*Note* that the 'latest_tested' repository shouldn't be used on a regular basis,
as it changes while the experimental flow runs. To avoid failures, please use the
stable 'latest_release' repo, which contains all the
RPMs that were released in the last official release of oVirt.

Thanks,
-- 

DANIEL BELENKY

Associate sw engineer

RHEV DEVOPS

Red Hat Israel <https://www.redhat.com/>

dbele...@redhat.com   IRC: #rhev-integ, #rhev-dev
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] OST 4.1 failure: Error: ('Error while sending HTTP request', error('cannot add/remove handle - multi_perform() already running', ))

2017-05-12 Thread Daniel Belenky
So repoman pulls both versions into the internal repo? I think we're
running repoman with only the 'latest' flag...

On May 12, 2017 12:29 PM, "Anton Marchukov"  wrote:

> Hello Barak.
>
> Yes. repoman pulls the latest version and that version is in latest and
> latest.under_test on resources. Additionally it is proven by lago.log too.
>
> The only problem seems to be the mock env that runs the python itself.
>
> Anton.
>
> On Fri, May 12, 2017 at 11:03 AM, Barak Korren  wrote:
>
>> Anton, are you seeing reponan pull the right version in the lago logs? We
>> need to know if it makes it into the Lago local repo or not.
>>
>> Barak Korren
>> bkor...@redhat.com
>> RHCE, RHCi, RHV-DevOps Team
>> https://ifireball.wordpress.com/
>>
>> בתאריך 12 במאי 2017 11:13,‏ "Anton Marchukov"  כתב:
>>
>>> Hello Ondra.
>>>
>>> Yes I see it installs the old version, e.g. the latest master run at [1]
>>> installs:
>>>
>>> *07:43:13* [basic_suit_el7] Updated:*07:43:13* [basic_suit_el7]   
>>> python-ovirt-engine-sdk4.x86_64 0:4.2.0-1.a0.20170511git210c375.el7.centos
>>>
>>>
>>> while the latest version is indeed  python-ovirt-engine-sdk4-4.2.
>>> 0-1.a1.20170512git7c40be2.el7.centos.x86_64.rpm
>>>
>>> Just for the record: latest and latest.under_test have correct version
>>> of the package, so it does not look to be a repoman bug.
>>>
>>> Checking OST sources now...
>>>
>>> [1] http://jenkins.ovirt.org/job/test-repo_ovirt_experimenta
>>> l_master/6651/consoleFull
>>>
>>> On Fri, May 12, 2017 at 9:43 AM, Ondra Machacek 
>>> wrote:
>>>
 Hello Anton,

 So I've bumped the version, but it's still installing the old one.
 The bumped version:

  python-ovirt-engine-sdk4-4.2.0-1.a1.20170512git7c40be2.el7.
 centos.x86_64.rpm
 

 Log from OST run:

 *07:25:59* [upgrade-from-release_suit_el7] 
 *07:25:59*
  [upgrade-from-release_suit_el7]  Package  Arch   Version  
   Repository Size*07:25:59* 
 [upgrade-from-release_suit_el7] 
 *07:25:59*
  [upgrade-from-release_suit_el7] Installing:*07:25:59* 
 [upgrade-from-release_suit_el7]  python-ovirt-engine-sdk4 x86_64 
 4.2.0-1.a0.20170511git210c375.el7.centos*07:25:59* 
 [upgrade-from-release_suit_el7]
  ovirt-master-snapshot 446 k*07:25:59* 
 [upgrade-from-release_suit_el7] Installing for dependencies:*07:25:59* 
 [upgrade-from-release_suit_el7]  python-enum34noarch 
 1.0.4-1.el7centos-base-el752 k*07:25:59* 
 [upgrade-from-release_suit_el7] *07:25:59* [upgrade-from-release_suit_el7] 
 Transaction Summary*07:25:59* [upgrade-from-release_suit_el7] 
 


 On Thu, May 11, 2017 at 8:35 PM, Anton Marchukov 
 wrote:

> Hello Ondra.
>
> Thanks.
>
> It seems that the manual job populates SDK from custom repo only for
> the VMs under test, but the mock where the python test code runs does not
> use it from there. So the release of bumped version will be good idea.
>
> Anton.
>
> On Thu, May 11, 2017 at 8:20 PM, Ondra Machacek 
> wrote:
>
>>
>>
>> On Thu, May 11, 2017 at 8:11 PM, Anton Marchukov > > wrote:
>>
>>> On Thu, May 11, 2017 at 8:03 PM, Ondra Machacek >> > wrote:
>>>

 *15:50:44* [basic_suit_el7] Updated:
>
> *15:50:44* [basic_suit_el7]   python-ovirt-engine-sdk4.x86_64 
> 0:4.2.0-1.a0.20170511git210c375.el7.centos
>
>
 This is incorrect version. The correct one is:

  python-ovirt-engine-sdk4-4.2.0-1.a0.20170511gitcd0adb4.el7.
 centos.x86_64.rpm
 

 From this build:

  http://jenkins.ovirt.org/job/python-ovirt-engine-sdk4_maste
 r_build-artifacts-el7-x86_64/71/

>>>
>>>
>>> Sounds like we have a problem if the version different only by git
>>> hashes. They are not ordered.
>>>
>>> I suggest we just merge the version bump at
>>> https://gerrit.ovirt.org/#/c/76732/ and then see which version it
>>> will install.

[ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 07-05-2017 ] [ add_secondary_storage_domains ]

2017-05-07 Thread Daniel Belenky
Hi,
The following test failed: 002_bootstrap.add_secondary_storage_domains
<http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/6573/testReport/junit/(root)/002_bootstrap/add_secondary_storage_domains/>

*Error snippet from VDSM Log:*

2017-05-07 08:09:03,906-0400 ERROR (monitor/e48ddb8) [storage.Monitor]
Setting up monitor for e48ddb84-77fc-41d8-98ce-a40140649c8d failed
(monitor:329)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/monitor.py", line 326, in _setupLoop
self._setupMonitor()
  File "/usr/share/vdsm/storage/monitor.py", line 348, in _setupMonitor
self._produceDomain()
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 392, in wrapper
value = meth(self, *a, **kw)
  File "/usr/share/vdsm/storage/monitor.py", line 366, in _produceDomain
self.domain = sdCache.produce(self.sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 112, in produce
domain.getRealDomain()
  File "/usr/share/vdsm/storage/sdc.py", line 53, in getRealDomain
return self._cache._realProduce(self._sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 136, in _realProduce
domain = self._findDomain(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 153, in _findDomain
return findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 178, in _findUnfetchedDomain
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist:
(u'e48ddb84-77fc-41d8-98ce-a40140649c8d',)


*Error snippet from SUPERVDSM Log:*

MainProcess|jsonrpc/6::ERROR::2017-05-07
08:09:09,960::supervdsm_server::96::SuperVdsm.ServerCallback::(wrapper)
Error in mount
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py",
line 94, in wrapper
res = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py",
line 136, in mount
timeout=timeout, cgroup=cgroup)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line
262, in _mount
_runcmd(cmd, timeout)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line
297, in _runcmd
raise MountError(rc, ";".join((out, err)))
MountError: (32, ';mount.nfs: Connection timed out\n')
MainProcess|jsonrpc/1::ERROR::2017-05-07
08:09:09,960::supervdsm_server::96::SuperVdsm.ServerCallback::(wrapper)
Error in mount
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py",
line 94, in wrapper
res = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py",
line 136, in mount
timeout=timeout, cgroup=cgroup)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line
262, in _mount
_runcmd(cmd, timeout)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line
297, in _runcmd
raise MountError(rc, ";".join((out, err)))
MountError: (32, ';mount.nfs: Connection timed out\n')
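
If this reproduces, a quick manual check from the failing host along these
lines can help tell an unreachable NFS server from a missing export (a sketch;
the server name and export path below are placeholders, not values taken from
this job):

    $ showmount -e <storage-server>    # is the export published at all?
    $ mount -t nfs -o vers=3 <storage-server>:/<export-path> /mnt/nfs-check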


*Links*: Job's artifacts
<http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/6573/artifact/exported-artifacts/basic-suit-master-el7/>,
All related logs
<http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/6573/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/>,
Engine Log 
<http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/6573/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log>,
Host 1 SuperVdsm Log
<http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/6573/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-host1/_var_log/vdsm/supervdsm.log>,
Host 1 Vdsm Log
<http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/6573/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-host1/_var_log/vdsm/vdsm.log>

Please assist in investigating this failure.

Thanks,

-- 

DANIEL BELENKY

RHV DEVOPS

Red Hat EMEA <https://www.redhat.com/>

IRC: #rhev-integ #rhev-dev
<https://red.ht/sig>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] [ Announcement ] STD CI now supports custom JUnit reports

2017-03-29 Thread Daniel Belenky
Hi all,

The oVirt infra team is proud to announce that, as of today, the standard
CI supports exporting custom JUnit reports.

*How to use?*
Simply generate your JUnit .xml files and name them using the following
pattern: *.junit.xml.
Make sure you move them to the exported-artifacts directory.
That's all: the standard CI will collect your .junit.xml files and generate the
JUnit report for you.
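
A minimal sketch of how a project's check script could do this (the script
path, test command and file names below are only illustrative assumptions, not
requirements of the CI):

    # somewhere in automation/check-patch.sh (illustrative):
    mkdir -p exported-artifacts
    # run the tests with any tool that can emit JUnit XML, e.g. pytest:
    py.test --junitxml=unit-tests.junit.xml tests/ || failed=1
    # move the report(s) to where STD CI collects artifacts:
    mv ./*.junit.xml exported-artifacts/
    exit "${failed:-0}"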

Please feel free to contact the infra team at in...@ovirt.org with any further
questions you have.

Sincerely,
-- 

*Daniel Belenky*

*RHV DevOps*

*Red Hat Israel*
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] vdsm-hook-ovs dependency

2017-03-27 Thread Daniel Belenky
thanks

On Mon, Mar 27, 2017 at 12:09 PM, Edward Haas <eh...@redhat.com> wrote:

> Hi Daniel,
>
> vdsm-hook-ovs has been removed on both master and 4.1, see
> https://gerrit.ovirt.org/#/c/69849/
>
> Thanks,
> Edy.
>
> On Sun, Mar 26, 2017 at 3:30 PM, Daniel Belenky <dbele...@redhat.com>
> wrote:
>
>> Hi all,
>>
>> I tried to run repoclosure against our master tested repo, and the test fails on
>> the following dependency:
>>
>> 10:44:49 package: vdsm-hook-ovs-4.20.0-162.gitcc43be6.el7.centos.noarch from 
>> internal_repo
>> 10:44:49   unresolved deps:
>> 10:44:49  vdsm = 0:4.20.0-162.gitcc43be6.el7.centos
>>
>> When looking at the last builds of VDSM, I could not find the
>> *vdsm-hook-ovs* package in the 'exported artifacts'.
>> Can someone advise where this package is coming from? Do we need this 
>> package?
>>
>> Thanks,
>>
>> --
>>
>> *Daniel Belenky*
>>
>> *RHV DevOps*
>>
>> *Red Hat Israel*
>>
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>
>


-- 

*Daniel Belenky*

*RHV DevOps*

*Red Hat Israel*
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] vdsm-hook-ovs dependency

2017-03-26 Thread Daniel Belenky
Hi all,

I tried to run repoclosure against our master tested repo, and the test
fails on the following dependency:

10:44:49 package:
vdsm-hook-ovs-4.20.0-162.gitcc43be6.el7.centos.noarch from
internal_repo
10:44:49   unresolved deps:
10:44:49  vdsm = 0:4.20.0-162.gitcc43be6.el7.centos

When looking at the last builds of VDSM, I could not find the
*vdsm-hook-ovs* package in the 'exported artifacts'.
Can someone advise where this package is coming from? Do we need this package?
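
For context, the check itself is roughly of this form (a sketch, assuming
yum-utils' repoclosure; the repo ids and paths are placeholders):

    $ repoclosure \
          --repofrompath=internal_repo,file:///path/to/ovirt-master-tested \
          --repofrompath=base,http://mirror.centos.org/centos/7/os/x86_64/ \
          -r internal_repo -l base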

Thanks,

-- 

*Daniel Belenky*

*RHV DevOps*

*Red Hat Israel*
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 09/02/17 ] [ basic_sanity.snapshots_merge ]

2017-03-08 Thread Daniel Belenky
*Test failed: *basic_sanity.snapshots_merge

*Link to failed job: *test-repo_ovirt_experimental_master/5752/
<http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/5752/>

*Link to all logs: *logs from Jenkins
<http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/5752/artifact/exported-artifacts/basic-suit-master-el7/>

*Suspected patch: *https://gerrit.ovirt.org/#/c/73809/

*Error snippet from log:*

ovirtlago.testlib: ERROR: Unhandled exception in <function <lambda> at
0x39ddcf8>
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
217, in assert_equals_within
res = func()
  File 
"/home/jenkins/workspace/test-repo_ovirt_experimental_master/ovirt-system-tests/basic-suite-master/test-scenarios/004_basic_sanity.py",
line 417, in <lambda>
lambda: len(api.vms.get(VM0_NAME).snapshots.list()) == 2,
  File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/brokers.py",
line 34602, in list
headers={"All-Content":all_content}
  File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py",
line 46, in get
return self.request(method='GET', url=url, headers=headers, cls=cls)
  File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py",
line 122, in request
persistent_auth=self.__persistent_auth
  File 
"/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py",
line 79, in do_request
persistent_auth)
  File 
"/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py",
line 162, in __do_request
raise errors.RequestError(response_code, response_reason, response_body)
RequestError:
status: 404
reason: Not Found
detail: Entity not found: null


  File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
testMethod()
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
129, in wrapped_test
test()
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
59, in wrapper
return func(get_test_prefix(), *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
68, in wrapper
return func(prefix.virt_env.engine_vm().get_api(), *args, **kwargs)
  File 
"/home/jenkins/workspace/test-repo_ovirt_experimental_master/ovirt-system-tests/basic-suite-master/test-scenarios/004_basic_sanity.py",
line 445, in snapshots_merge
vt.join_all()
  File "/usr/lib/python2.7/site-packages/lago/utils.py", line 57, in
_ret_via_queue
queue.put({'return': func()})
  File 
"/home/jenkins/workspace/test-repo_ovirt_experimental_master/ovirt-system-tests/basic-suite-master/test-scenarios/004_basic_sanity.py",
line 417, in snapshot_live_merge
lambda: len(api.vms.get(VM0_NAME).snapshots.list()) == 2,
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
264, in assert_true_within_long
assert_equals_within_long(func, True, allowed_exceptions)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
251, in assert_equals_within_long
func, value, LONG_TIMEOUT, allowed_exceptions=allowed_exceptions
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
217, in assert_equals_within
res = func()
  File 
"/home/jenkins/workspace/test-repo_ovirt_experimental_master/ovirt-system-tests/basic-suite-master/test-scenarios/004_basic_sanity.py",
line 417, in <lambda>
lambda: len(api.vms.get(VM0_NAME).snapshots.list()) == 2,
  File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/brokers.py",
line 34602, in list
headers={"All-Content":all_content}
  File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py",
line 46, in get
return self.request(method='GET', url=url, headers=headers, cls=cls)
  File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py",
line 122, in request
persistent_auth=self.__persistent_auth
  File 
"/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py",
line 79, in do_request
persistent_auth)
  File 
"/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py",
line 162, in __do_request
raise errors.RequestError(response_code, response_reason, response_body)

status: 404
reason: Not Found
detail: Entity not found: null


-- 

*Daniel Belenky*

*RHV DevOps*

*Red Hat Israel*
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt 4.1 ] [ 07/02/17 ] [ add_secondary_storage_domains ]

2017-03-07 Thread Daniel Belenky
It's a 4.1 test.
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] [ OST Failure Report ] [ oVirt 4.1 ] [ 07/02/17 ] [ add_secondary_storage_domains ]

2017-03-06 Thread Daniel Belenky
*Test failed:* add_secondary_storage_domains

*Link to failed Job: *test-repo_ovirt_experimental_4.1/889
<http://jenkins.ovirt.org/view/experimental%20jobs/job/test-repo_ovirt_experimental_4.1/889>

*Link to all logs: *logs from Jenkins
<http://jenkins.ovirt.org/view/experimental%20jobs/job/test-repo_ovirt_experimental_4.1/889/artifact/exported-artifacts/basic-suit-4.1-el7/>

*Error snippet from log:*

2017-03-07
01:29:03,789::utils.py::_ret_via_queue::59::lago.utils::ERROR::Error while
running thread

Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/lago/utils.py", line 57, in
_ret_via_queue
queue.put({'return': func()})
  File 
"/home/jenkins/workspace/test-repo_ovirt_experimental_4.1/ovirt-system-tests/basic-suite-4.1/test-scenarios/002_bootstrap.py",
line 574, in add_iso_storage_domain
add_generic_nfs_storage_domain(prefix, SD_ISO_NAME,
SD_ISO_HOST_NAME, SD_ISO_PATH, sd_format='v1', sd_type='iso',
nfs_version='v3')
  File 
"/home/jenkins/workspace/test-repo_ovirt_experimental_4.1/ovirt-system-tests/basic-suite-4.1/test-scenarios/002_bootstrap.py",
line 437, in add_generic_nfs_storage_domain
add_generic_nfs_storage_domain_4(prefix, sd_nfs_name,
nfs_host_name, mount_path, sd_format, sd_type, nfs_version)
  File 
"/home/jenkins/workspace/test-repo_ovirt_experimental_4.1/ovirt-system-tests/basic-suite-4.1/test-scenarios/002_bootstrap.py",
line 493, in add_generic_nfs_storage_domain_4
_add_storage_domain_4(api, p)
  File 
"/home/jenkins/workspace/test-repo_ovirt_experimental_4.1/ovirt-system-tests/basic-suite-4.1/test-scenarios/002_bootstrap.py",
line 407, in _add_storage_domain_4
id=sd.id,
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py",
line 3488, in add
self._check_fault(response)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
98, in _check_fault
Service._raise_error(response, fault)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
71, in _raise_error
raise Error(msg)
Error: Fault reason is "Operation Failed". Fault detail is "[Storage
domain cannot be reached. Please ensure it is accessible from the
host(s).]". HTTP response code is 400.

-- 

*Daniel Belenky*

*RHV DevOps*

*Red Hat Israel*
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] Gerrit server restart

2017-03-05 Thread Daniel Belenky
Hi all,

The Gerrit server will be unreachable due to a restart.
Planned restart time: 11:20 (GMT +2)

Sorry for the inconvenience,
-- 

*Daniel Belenky*

*RHV DevOps*

*Red Hat Israel*
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] System tests broken

2017-03-01 Thread Daniel Belenky
Of course. I'm sorry, it was my mistake.

On Wed, Mar 1, 2017 at 11:21 AM, Pavel Zhukov <pzhu...@redhat.com> wrote:

> Daniel,
> Can you please let the infra owner know prior to testing new code in production?
> Thanks.
>
> On Wed, Mar 1, 2017 at 7:43 AM, Daniel Belenky <dbele...@redhat.com>
> wrote:
>
>> Hey,
>> I was testing new code there that broke the test.
>> I've reverted the job to its original code and re-triggered your test:
>> Link <http://jenkins.ovirt.org/job/ovirt-system-tests_manual/93/console>
>>
>> On Wed, Mar 1, 2017 at 3:33 AM, Nir Soffer <nsof...@redhat.com> wrote:
>>
>>> Trying to build, I get this error after few seconds:
>>>
>>> 01:23:08 [ovirt-system-tests_manual] $ /usr/bin/env python
>>> /tmp/hudson2923456913075803996.sh
>>> 01:23:08 Traceback (most recent call last):
>>> 01:23:08   File "/tmp/hudson2923456913075803996.sh", line 19, in <module>
>>> 01:23:08 suit_dir + '*.repo'
>>> 01:23:08   File "jenkins/scripts/mirror_client.py", line 79, in
>>> inject_yum_mirrors_file_by_pattern
>>> 01:23:08 for conf in glob.glob(repo_filename_glob):
>>> 01:23:08 NameError: global name 'repo_filename_glob' is not defined
>>>
>>> Nir
>>> ___
>>> Infra mailing list
>>> in...@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/infra
>>>
>>
>>
>>
>> --
>>
>> *Daniel Belenky*
>>
>> *RHV DevOps*
>>
>> *Red Hat Israel*
>>
>> ___
>> Infra mailing list
>> in...@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/infra
>>
>>
>
>
> --
> Pavel Zhukov
> Software Engineer
> RHEV Devops
> IRC: landgraf
>
>


-- 

*Daniel Belenky*

*RHV DevOps*

*Red Hat Israel*
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] System tests broken

2017-02-28 Thread Daniel Belenky
Hey,
I was testing new code there that broke the test.
I've reverted the job to its original code and re-triggered your test: Link
<http://jenkins.ovirt.org/job/ovirt-system-tests_manual/93/console>

On Wed, Mar 1, 2017 at 3:33 AM, Nir Soffer <nsof...@redhat.com> wrote:

> Trying to build, I get this error after few seconds:
>
> 01:23:08 [ovirt-system-tests_manual] $ /usr/bin/env python
> /tmp/hudson2923456913075803996.sh
> 01:23:08 Traceback (most recent call last):
> 01:23:08   File "/tmp/hudson2923456913075803996.sh", line 19, in <module>
> 01:23:08 suit_dir + '*.repo'
> 01:23:08   File "jenkins/scripts/mirror_client.py", line 79, in
> inject_yum_mirrors_file_by_pattern
> 01:23:08 for conf in glob.glob(repo_filename_glob):
> 01:23:08 NameError: global name 'repo_filename_glob' is not defined
>
> Nir
> ___
> Infra mailing list
> in...@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>



-- 

*Daniel Belenky*

*RHV DevOps*

*Red Hat Israel*
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] new feature on manual ost

2017-01-24 Thread Daniel Belenky
FYI,

The infra team has recently added a new feature to the manual OST jobs,
published in a recent oVirt blog post [1], which will hopefully give developers
more control over running OST on open patches.
The new feature allows anyone to choose which oVirt repo the job should use
as the 'base repo'. The options are:

*latest_tested*: installs the latest RPMs that passed CI (a.k.a. the
experimental latest.tested repo).
*latest_release*: installs the latest stable release (e.g. oVirt 4.0.6 for
4.0.z, or the 4.1 RC for 4.1).

The changes are already available on the job in the CI [2].

Please don't hesitate to contact the infra team with any questions or issues
you run into while using the jobs.

The oVirt infra team.


[1] https://www.ovirt.org/blog/2017/01/ovirt-system-tests-to-the-rescue/
[2] http://jenkins.ovirt.org/view/oVirt%20system%20tests/
job/ovirt_master_system-tests_manual/

-- 

*Daniel Belenky*

*RHV DevOps*

*Red Hat Israel*
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] test experimental repo master failed

2017-01-15 Thread Daniel Belenky
Hi all,

The following test:
http://jenkins.ovirt.org/view/experimental%20jobs/job/test-repo_ovirt_experimental_master/4739/
has
failed.
Triggered by: https://gerrit.ovirt.org/69139
Logs: Link
<http://jenkins.ovirt.org/view/experimental%20jobs/job/test-repo_ovirt_experimental_master/4739/artifact/exported-artifacts/basic_suite_master.sh-el7/exported-artifacts/>

Found the following error in the engine.log:
ERROR [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-27) []
Can't find relative path for class
"org.ovirt.engine.api.resource.VmDisksResource", will return null

Can anyone advise on the root cause?

Thanks,

*Daniel Belenky*

*RHV DevOps*

*Red Hat Israel*
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] ovirt master system tests fail

2017-01-12 Thread Daniel Belenky
Hi all,

The test-repo_ovirt_experimental_master job fails, and it seems that there is
an issue with the 'add_host' phase under the '*bootstrap*' suite.
From the logs, it seems that the suite was unable to bring up the host, or
something is wrong with the host.

>> end captured logging <<


From the engine.log, I found a timeout in the RPC call (but this error is
seen on jobs that succeed too, so it might not be relevant).

2017-01-12 05:49:53,383-05 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand]
(org.ovirt.thread.pool-7-thread-2) [76b0383f] Command
'PollVDSCommand(HostName = lago-basic-suite-master-host1,
VdsIdVDSCommandParametersBase:{runAsync='true',
hostId='40eb11ba-e6ac-478a-b8b1-73b7892ace65'})' execution failed:
VDSGenericException: VDSNetworkException: Timeout during rpc call
2017-01-12 05:49:53,383-05 DEBUG
[org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand]
(org.ovirt.thread.pool-7-thread-2) [76b0383f] Exception:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException:
VDSGenericException: VDSNetworkException: Timeout during rpc call

... (the full error is very long, so I won't paste it here; it's in the
*engine.log*)

2017-01-12 05:49:58,291-05 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand]
(org.ovirt.thread.pool-7-thread-1) [30b2ca77] Timeout waiting for VDSM
response: Internal timeout occured



In the host's vdsm.log, there are some errors too:

2017-01-12 05:51:48,336 ERROR (jsonrpc/0) [storage.StorageDomainCache]
looking for unfetched domain 380623d8-1e85-4831-9048-3d05932f3d3a
(sdc:151)
2017-01-12 05:51:48,336 ERROR (jsonrpc/0) [storage.StorageDomainCache]
looking for domain 380623d8-1e85-4831-9048-3d05932f3d3a (sdc:168)
2017-01-12 05:51:48,395 WARN  (jsonrpc/0) [storage.LVM] lvm vgs
failed: 5 [] ['  WARNING: Not using lvmetad because config setting
use_lvmetad=0.', '  WARNING: To avoid corruption, rescan devices to
make changes visible (pvscan --cache).', '  Volume group
"380623d8-1e85-4831-9048-3d05932f3d3a" not found', '  Cannot process
volume group 380623d8-1e85-4831-9048-3d05932f3d3a'] (lvm:377)
2017-01-12 05:51:48,398 ERROR (jsonrpc/0) [storage.StorageDomainCache]
domain 380623d8-1e85-4831-9048-3d05932f3d3a not found (sdc:157)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sdc.py", line 155, in _findDomain
dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 185, in _findUnfetchedDomain
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist:
(u'380623d8-1e85-4831-9048-3d05932f3d3a',)


and

2017-01-12 05:53:45,375 ERROR (JsonRpc (StompReactor))
[vds.dispatcher] SSL error receiving from
: unexpected eof (betterAsyncore:119)
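
For the LVM / storage-domain error above, a manual check on the host along the
lines the warning itself suggests might be (a sketch; the VG name is copied
from the log):

    $ pvscan --cache    # rescan devices, as the LVM warning recommends
    $ vgs 380623d8-1e85-4831-9048-3d05932f3d3a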


Link to Jenkins
<http://jenkins.ovirt.org/view/experimental%20jobs/job/test-repo_ovirt_experimental_master/4693/artifact/exported-artifacts/basic_suite_master.sh-el7/exported-artifacts/>

Can someone please take a look?

Thanks,


*Daniel Belenky*

*RHV DevOps*

*Red Hat Israel*
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] (no subject)

2017-01-12 Thread Daniel Belenky
On Thu, Jan 12, 2017 at 10:35 AM, Yaniv Kaul <yk...@redhat.com> wrote:

>
>
> On Thu, Jan 12, 2017 at 10:25 AM, Daniel Belenky <dbele...@redhat.com>
> wrote:
>
>> Hi all,
>>
>> ovirt-system-tests are failing with the following error:
>>
>> ERROR [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-1) [] 
>> Can't find relative path for class 
>> "org.ovirt.engine.api.resource.VmDisksResource", will return null
>>
>>
> This is a known issue ( https://bugzilla.redhat.com/
> show_bug.cgi?id=1410038 ) and is not a cause for failure.
>
>
>> The error began on 4/1.
>>
>
> This is really outdated. How did it work yesterday?
>

It didn't. We just didn't get to this issue, but it has been failing for a while.


>
>> can someone take a look please?
>>
>
> What is the test that is failing?
>

basic_sanity is failing on vm_run

Link
<http://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt_4.1_system-tests/37/>
to the failing job

Thanks,

*Daniel Belenky*

*RHV DevOps*

*Red Hat Israel*
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel