gerrit: failure to login using google/github id providers

2020-04-12 Thread Dan Kenigsberg
Hi,

I fail to log in to gerrit.ovirt.org using my Google or GitHub identity.
I get a non-descriptive "Server Error" message.

When I tried to log in using my Launchpad identity, I was successful, but a
new gerrit identity was created for me.

Would you please look into the login failures?

Would you please squash my new 1001884 account into my old account
(obviously after you verify that I'm really me using a side channel).

Regards,
Dan.


Re: Proposing Marcin Sobczyk as VDSM infra maintainer

2019-11-12 Thread Dan Kenigsberg
On Mon, Nov 11, 2019 at 3:36 PM Nir Soffer  wrote:
>
> On Thu, Nov 7, 2019 at 4:13 PM Martin Perina  wrote:
> >
> > Hi,
> >
> > Marcin joined the infra team more than a year ago and during this time he 
> > contributed a lot to VDSM packaging, improved automation and ported all 
> > infra team parts of VDSM (jsonrpc, ssl, vdsm-client, hooks infra, ...) to 
> > Python 3. He is a very nice person to talk to, is usually very responsive and 
> > cares a lot about code quality.
> >
> > So I'd like to propose Marcin as VDSM infra maintainer.
> >
> > Please share your thoughts.
>
> Marcin has practically been the vdsm infra maintainer for a while now, so it
> will be a good idea to make this official.
>
> I hope we can get more contributors to vdsm infra; having one maintainer who
> is also the only contributor is not enough for a complicated project like
> vdsm.
>
> Nir

This seems like a +2.
Congratulations, Marcin. This is a serious responsibility, use it carefully!

Dear infra, please add msobczyk to vdsm-maintainers group.


OST failing due to missing qemu dependencies

2018-11-20 Thread Dan Kenigsberg
Is this known already?

I am not familiar with the missing packages, nor with the reason why our
cache misses them.


https://jenkins.ovirt.org/job/ovirt-system-tests_manual/3587/console

09:00:14 + yum -y install ovirt-host
09:00:14 Error: Package: 10:qemu-kvm-ev-2.12.0-18.el7_6.1.1.x86_64 (alocalsync)
09:00:14            Requires: libibumad.so.3()(64bit)
09:00:14 Error: Package: 10:qemu-kvm-ev-2.12.0-18.el7_6.1.1.x86_64 (alocalsync)
09:00:14            Requires: libgbm.so.1()(64bit)
09:00:14 Error: Package: 10:qemu-kvm-ev-2.12.0-18.el7_6.1.1.x86_64 (alocalsync)
09:00:14            Requires: libepoxy.so.0()(64bit)
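
For reference, a minimal diagnostic sketch (assuming yum-utils' repoquery is
available on the el7 slave; it is not part of the job above) that asks which
enabled repo, if any, provides the missing sonames, to tell a stale local
cache apart from genuinely missing packages:

    import subprocess

    # Sonames that qemu-kvm-ev failed on in the job above.
    MISSING_DEPS = [
        'libibumad.so.3()(64bit)',
        'libgbm.so.1()(64bit)',
        'libepoxy.so.0()(64bit)',
    ]

    for dep in MISSING_DEPS:
        try:
            providers = subprocess.check_output(
                ['repoquery', '--whatprovides', dep]).decode().strip()
        except subprocess.CalledProcessError:
            providers = ''
        print('%s -> %s' % (dep, providers or 'no provider in the enabled repos'))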


Re: [ovirt-devel] [CQ ovirt master] [ovirt-engine] - not passing for 10 days

2018-11-19 Thread Dan Kenigsberg
On Wed, Nov 14, 2018 at 5:07 PM Dan Kenigsberg  wrote:
>
> On Wed, Nov 14, 2018 at 12:42 PM Dominik Holler  wrote:
> >
> > On Wed, 14 Nov 2018 11:24:10 +0100
> > Michal Skrivanek  wrote:
> >
> > > > On 14 Nov 2018, at 10:50, Dominik Holler  wrote:
> > > >
> > > > On Wed, 14 Nov 2018 09:27:39 +0100
> > > > Dominik Holler  wrote:
> > > >
> > > >> On Tue, 13 Nov 2018 13:01:09 +0100
> > > >> Martin Perina  wrote:
> > > >>
> > > >>> On Tue, Nov 13, 2018 at 12:49 PM Michal Skrivanek 
> > > >>> 
> > > >>> wrote:
> > > >>>
> > > >>>>
> > > >>>>
> > > >>>> On 13 Nov 2018, at 12:20, Dominik Holler  wrote:
> > > >>>>
> > > >>>> On Tue, 13 Nov 2018 11:56:37 +0100
> > > >>>> Martin Perina  wrote:
> > > >>>>
> > > >>>> On Tue, Nov 13, 2018 at 11:02 AM Dafna Ron  wrote:
> > > >>>>
> > > >>>> Martin? can you please look at the patch that Dominik sent?
> > > >>>> We need to resolve this as we have not had an engine build for the 
> > > >>>> last 11
> > > >>>> days
> > > >>>>
> > > >>>>
> > > >>>> Yesterday I've merged Dominik's revert patch
> > > >>>> https://gerrit.ovirt.org/95377
> > > >>>> which should switch cluster level back to 4.2. Below mentioned change
> > > >>>> https://gerrit.ovirt.org/95310 is relevant only to cluster level 
> > > >>>> 4.3, am I
> > > >>>> right Michal?
> > > >>>>
> > > >>>> The build mentioned
> > > >>>>
> > > >>>> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11121/
> > > >>>> is from yesterday. Are we sure that it was executed only after 
> > > >>>> #95377 was
> > > >>>> merged? I'd like to see the results from latest
> > > >>>>
> > > >>>> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11127/
> > > >>>> but unfortunately it already waits more than an hour for available 
> > > >>>> hosts
> > > >>>> ...
> > > >>>>
> > > >>>>
> > > >>>>
> > > >>>>
> > > >>>>
> > > >>>> https://gerrit.ovirt.org/#/c/95283/ results in
> > > >>>>
> > > >>>> http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8071/
> > > >>>> which is used in
> > > >>>>
> > > >>>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3489/parameters/
> > > >>>> results in run_vms succeeding.
> > > >>>>
> > > >>>> The next merged change
> > > >>>> https://gerrit.ovirt.org/#/c/95310/ results in
> > > >>>>
> > > >>>> http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8072/
> > > >>>> which is used in
> > > >>>>
> > > >>>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3490/parameters/
> > > >>>> results in run_vms failing with
> > > >>>> 2018-11-12 17:35:10,109-05 INFO
> > > >>>> [org.ovirt.engine.core.bll.RunVmOnceCommand] (default task-1)
> > > >>>> [6930b632-5593-4481-bf2a-a1d8b14a583a] Running command: 
> > > >>>> RunVmOnceCommand
> > > >>>> internal: false. Entities affected :  ID:
> > > >>>> d10aa133-b9b6-455d-8137-ab822d1c1971 Type: VMAction group RUN_VM 
> > > >>>> with role
> > > >>>> type USER
> > > >>>> 2018-11-12 17:35:10,113-05 DEBUG
> > > >>>> [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
> > > >>>> (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] method:
> > > >>>> getVmManager, params: [d10aa133-b9b6-455d-8137-ab822d1c1971], 
> > > >>>> timeElapsed:
> > > >>>> 4ms
> > > >>

Re: [ovirt-devel] [CQ ovirt master] [ovirt-engine] - not passing for 10 days

2018-11-15 Thread Dan Kenigsberg
On Thu, Nov 15, 2018 at 1:29 PM Dafna Ron  wrote:
>
>
>
> On Thu, Nov 15, 2018 at 11:20 AM Dan Kenigsberg  wrote:
>>
>> On Thu, Nov 15, 2018 at 1:16 PM Barak Korren  wrote:
>> >
>> >
>> >
>> > On Thu, 15 Nov 2018 at 13:11, Dafna Ron  wrote:
>> >>
>> >> I am checking the failed jobs
>> >> However, please note that I think you are confusing issues.
>> >> Currently, we (CI) have a problem in the job that syncs the package to 
>> >> the snapshot repo. This job runs nightly and we had no way of knowing it 
>> >> would fail until today.
>> >> Before today, we had several regressions which lasted for two weeks, which 
>> >> means no package was built at all.
>> >> So different issues.
>> >>
>> >
>> > It should be fixed now
>>
>> Would you trigger it now (mid-day!)?
>> master-snapshot still carries the ancient
>> ovirt-engine-0:4.3.0-0.0.master.20181101091940.git61310aa
>
>
>
> As this is the way to fix the issue, It ran.
> They have the new package:
> ovirt-engine-0:4.3.0-0.0.master.20181114214053.gitee7737e.el7.noarch

Yes, you are right; `dnf clean all` fixed it on my side too :-/


Re: [ovirt-devel] [CQ ovirt master] [ovirt-engine] - not passing for 10 days

2018-11-15 Thread Dan Kenigsberg
On Thu, Nov 15, 2018 at 1:16 PM Barak Korren  wrote:
>
>
>
> On Thu, 15 Nov 2018 at 13:11, Dafna Ron  wrote:
>>
>> I am checking the failed jobs
>> However, please note that I think you are confusing issues.
>> Currently, we (CI) have a problem in the job that syncs the package to the 
>> snapshot repo. This job runs nightly and we had no way of knowing it would 
>> fail until today.
>> Before today, we had several regressions which lasted for two weeks, which 
>> means no package was built at all.
>> So different issues.
>>
>
> It should be fixed now

Would you trigger it now (mid-day!)?
master-snapshot still carries the ancient
ovirt-engine-0:4.3.0-0.0.master.20181101091940.git61310aa


Re: [ovirt-devel] [CQ ovirt master] [ovirt-engine] - not passing for 10 days

2018-11-15 Thread Dan Kenigsberg
On Thu, Nov 15, 2018 at 1:11 PM Dafna Ron  wrote:
>
> I am checking the failed jobs
> However, please note that I think you are confusing issues.
> Currently, we (CI) have a problem in the job that syncs the package to the 
> snapshot repo. This job runs nightly and we had no way of knowing it would 
> fail until today.
> Before today, we had several regressions which lasted for two weeks, which 
> means no package was built at all.
> So different issues.

No confusion here.
There have been multiple production bugs that blocked CQ for 2 weeks.
Only now are we blocked on an automation bug. I hope you fix it soon,
since, quite justly, QE does not want to consume a repo that may change
midday.


Re: [ovirt-devel] [CQ ovirt master] [ovirt-engine] - not passing for 10 days

2018-11-15 Thread Dan Kenigsberg
On Thu, Nov 15, 2018 at 12:45 PM Eyal Edri  wrote:
>
>
>
> On Thu, Nov 15, 2018 at 12:43 PM Dan Kenigsberg  wrote:
>>
>> On Wed, Nov 14, 2018 at 5:07 PM Dan Kenigsberg  wrote:
>> >
>> > On Wed, Nov 14, 2018 at 12:42 PM Dominik Holler  wrote:
>> > >
>> > > On Wed, 14 Nov 2018 11:24:10 +0100
>> > > Michal Skrivanek  wrote:
>> > >
>> > > > > On 14 Nov 2018, at 10:50, Dominik Holler  wrote:
>> > > > >
>> > > > > On Wed, 14 Nov 2018 09:27:39 +0100
>> > > > > Dominik Holler  wrote:
>> > > > >
>> > > > >> On Tue, 13 Nov 2018 13:01:09 +0100
>> > > > >> Martin Perina  wrote:
>> > > > >>
>> > > > >>> On Tue, Nov 13, 2018 at 12:49 PM Michal Skrivanek 
>> > > > >>> 
>> > > > >>> wrote:
>> > > > >>>
>> > > > >>>>
>> > > > >>>>
>> > > > >>>> On 13 Nov 2018, at 12:20, Dominik Holler  
>> > > > >>>> wrote:
>> > > > >>>>
>> > > > >>>> On Tue, 13 Nov 2018 11:56:37 +0100
>> > > > >>>> Martin Perina  wrote:
>> > > > >>>>
>> > > > >>>> On Tue, Nov 13, 2018 at 11:02 AM Dafna Ron  
>> > > > >>>> wrote:
>> > > > >>>>
>> > > > >>>> Martin? can you please look at the patch that Dominik sent?
>> > > > >>>> We need to resolve this as we have not had an engine build for 
>> > > > >>>> the last 11
>> > > > >>>> days
>> > > > >>>>
>> > > > >>>>
>> > > > >>>> Yesterday I've merged Dominik's revert patch
>> > > > >>>> https://gerrit.ovirt.org/95377
>> > > > >>>> which should switch cluster level back to 4.2. Below mentioned 
>> > > > >>>> change
>> > > > >>>> https://gerrit.ovirt.org/95310 is relevant only to cluster level 
>> > > > >>>> 4.3, am I
>> > > > >>>> right Michal?
>> > > > >>>>
>> > > > >>>> The build mentioned
>> > > > >>>>
>> > > > >>>> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11121/
>> > > > >>>> is from yesterday. Are we sure that it was executed only after 
>> > > > >>>> #95377 was
>> > > > >>>> merged? I'd like to see the results from latest
>> > > > >>>>
>> > > > >>>> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11127/
>> > > > >>>> but unfortunately it already waits more than an hour for 
>> > > > >>>> available hosts
>> > > > >>>> ...
>> > > > >>>>
>> > > > >>>>
>> > > > >>>>
>> > > > >>>>
>> > > > >>>>
>> > > > >>>> https://gerrit.ovirt.org/#/c/95283/ results in
>> > > > >>>>
>> > > > >>>> http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8071/
>> > > > >>>> which is used in
>> > > > >>>>
>> > > > >>>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3489/parameters/
>> > > > >>>> results in run_vms succeeding.
>> > > > >>>>
>> > > > >>>> The next merged change
>> > > > >>>> https://gerrit.ovirt.org/#/c/95310/ results in
>> > > > >>>>
>> > > > >>>> http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8072/
>> > > > >>>> which is used in
>> > > > >>>>
>> > > > >>>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3490/parameters/
>> > > > >>>> results in run_vms failing with
>> > > > >>>> 2018-11-12 17:35:10,109-05 INFO
>

Re: [ovirt-devel] [CQ ovirt master] [ovirt-engine] - not passing for 10 days

2018-11-15 Thread Dan Kenigsberg
On Wed, Nov 14, 2018 at 5:07 PM Dan Kenigsberg  wrote:
>
> On Wed, Nov 14, 2018 at 12:42 PM Dominik Holler  wrote:
> >
> > On Wed, 14 Nov 2018 11:24:10 +0100
> > Michal Skrivanek  wrote:
> >
> > > > On 14 Nov 2018, at 10:50, Dominik Holler  wrote:
> > > >
> > > > On Wed, 14 Nov 2018 09:27:39 +0100
> > > > Dominik Holler  wrote:
> > > >
> > > >> On Tue, 13 Nov 2018 13:01:09 +0100
> > > >> Martin Perina  wrote:
> > > >>
> > > >>> On Tue, Nov 13, 2018 at 12:49 PM Michal Skrivanek 
> > > >>> 
> > > >>> wrote:
> > > >>>
> > > >>>>
> > > >>>>
> > > >>>> On 13 Nov 2018, at 12:20, Dominik Holler  wrote:
> > > >>>>
> > > >>>> On Tue, 13 Nov 2018 11:56:37 +0100
> > > >>>> Martin Perina  wrote:
> > > >>>>
> > > >>>> On Tue, Nov 13, 2018 at 11:02 AM Dafna Ron  wrote:
> > > >>>>
> > > >>>> Martin? can you please look at the patch that Dominik sent?
> > > >>>> We need to resolve this as we have not had an engine build for the 
> > > >>>> last 11
> > > >>>> days
> > > >>>>
> > > >>>>
> > > >>>> Yesterday I've merged Dominik's revert patch
> > > >>>> https://gerrit.ovirt.org/95377
> > > >>>> which should switch cluster level back to 4.2. Below mentioned change
> > > >>>> https://gerrit.ovirt.org/95310 is relevant only to cluster level 
> > > >>>> 4.3, am I
> > > >>>> right Michal?
> > > >>>>
> > > >>>> The build mentioned
> > > >>>>
> > > >>>> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11121/
> > > >>>> is from yesterday. Are we sure that it was executed only after 
> > > >>>> #95377 was
> > > >>>> merged? I'd like to see the results from latest
> > > >>>>
> > > >>>> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11127/
> > > >>>> but unfortunately it already waits more than an hour for available 
> > > >>>> hosts
> > > >>>> ...
> > > >>>>
> > > >>>>
> > > >>>>
> > > >>>>
> > > >>>>
> > > >>>> https://gerrit.ovirt.org/#/c/95283/ results in
> > > >>>>
> > > >>>> http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8071/
> > > >>>> which is used in
> > > >>>>
> > > >>>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3489/parameters/
> > > >>>> results in run_vms succeeding.
> > > >>>>
> > > >>>> The next merged change
> > > >>>> https://gerrit.ovirt.org/#/c/95310/ results in
> > > >>>>
> > > >>>> http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8072/
> > > >>>> which is used in
> > > >>>>
> > > >>>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3490/parameters/
> > > >>>> results in run_vms failing with
> > > >>>> 2018-11-12 17:35:10,109-05 INFO
> > > >>>> [org.ovirt.engine.core.bll.RunVmOnceCommand] (default task-1)
> > > >>>> [6930b632-5593-4481-bf2a-a1d8b14a583a] Running command: 
> > > >>>> RunVmOnceCommand
> > > >>>> internal: false. Entities affected :  ID:
> > > >>>> d10aa133-b9b6-455d-8137-ab822d1c1971 Type: VMAction group RUN_VM 
> > > >>>> with role
> > > >>>> type USER
> > > >>>> 2018-11-12 17:35:10,113-05 DEBUG
> > > >>>> [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
> > > >>>> (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] method:
> > > >>>> getVmManager, params: [d10aa133-b9b6-455d-8137-ab822d1c1971], 
> > > >>>> timeElapsed:
> > > >>>> 4ms
> > > >>

Re: [ovirt-devel] [CQ ovirt master] [ovirt-engine] - not passing for 10 days

2018-11-14 Thread Dan Kenigsberg
>> Martin Perina  wrote:
> > >>>>
> > >>>> On Mon, Nov 12, 2018 at 12:20 PM Dafna Ron  wrote:
> > >>>>
> > >>>> There are currently two issues failing ovirt-engine on CQ ovirt
> > >>>>
> > >>>> master:
> > >>>>
> > >>>>
> > >>>> 1. edit vm pool is causing failure in different tests. it has a
> > >>>>
> > >>>> patch
> > >>>>
> > >>>> *waiting
> > >>>>
> > >>>> to be merged*: https://gerrit.ovirt.org/#/c/95354/
> > >>>>
> > >>>>
> > >>>> Merged
> > >>>>
> > >>>>
> > >>>> 2. we have a failure in upgrade suite as well to run vm but this
> > >>>>
> > >>>> seems
> > >>>>
> > >>>> to
> > >>>>
> > >>>> be related to the tests as well:
> > >>>> 2018-11-12 05:41:07,831-05 WARN
> > >>>> [org.ovirt.engine.core.bll.validator.VirtIoRngValidator]
> > >>>>
> > >>>> (default
> > >>>>
> > >>>> task-1)
> > >>>>
> > >>>> [] Random number source URANDOM is not supported in cluster
> > >>>>
> > >>>> 'test-cluster'
> > >>>>
> > >>>> compatibility version 4.0.
> > >>>>
> > >>>> here is the full error from the upgrade suite failure in run vm:
> > >>>> https://pastebin.com/XLHtWGGx
> > >>>>
> > >>>> Here is the latest failure:
> > >>>>
> > >>>>
> > >>>>
> > >>>> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/8/
> > >>>>
> > >>>>
> > >>>>
> > >>>> I will try to take a look later today
> > >>>>
> > >>>>
> > >>>> I have the idea that this might be related to
> > >>>> https://gerrit.ovirt.org/#/c/95377/ , and I check in
> > >>>>
> > >>>>
> > >>>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3485/console
> > >>>>
> > >>>>
> > >>>> , but I have to stop now, if not solved I can go on later today.
> > >>>>
> > >>>>
> > >>>> OK, both CI and above manual OST job went fine, so I've just merged the
> > >>>> revert patch. I will take a look at it later in detail, we should
> > >>>>
> > >>>> really be
> > >>>>
> > >>>> testing 4.3 on master and not 4.2
> > >>>>
> > >>>>
> > >>>> Ack.
> > >>>>
> > >>>> Now
> > >>>>
> > >>>>
> > >>>> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11121/
> > >>>> is failing on
> > >>>> File
> > >>>>
> > >>>> "/home/jenkins/workspace/ovirt-master_change-queue-tester/ovirt-system-tests/basic-suite-master/test-scenarios/004_basic_sanity.py",
> > >>>> line 698, in run_vms
> > >>>>   api.vms.get(VM0_NAME).start(start_params)
> > >>>> status: 400
> > >>>> reason: Bad Request
> > >>>>
> > >>>> 2018-11-12 10:06:30,722-05 INFO
> > >>>> [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default 
> > >>>> task-3)
> > >>>> [b8d11cb0-5be9-4b7e-b45a-c95fa1f18681] Candidate host
> > >>>> 'lago-basic-suite-master-host-1' 
> > >>>> ('dbfe1b0c-f940-4dba-8fb1-0cfe5ca7ddfc')
> > >>>> was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level'
> > >>>> (correlation id: b8d11cb0-5be9-4b7e-b45a-c95fa1f18681)
> > >>>> 2018-11-12 10:06:30,722-05 INFO
> > >>>> [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default 
> > >>>> task-3)
> > >>>> [b8d11cb0-5be9-4b7e-b45a-c95fa1f18681] Candidate host
> > >>>> 'lago-basic-suite-master-host-0' 
> > >>>> ('e83a63ca-381e-40db-acb2-65a3e7953e11')
> > >

Re: [VDSM] Proposing Denis as vdsm gluster maintainer

2018-11-12 Thread Dan Kenigsberg
On Mon, 12 Nov 2018, 16:04 Francesco Romani  wrote:
> On 11/12/18 1:33 PM, Nir Soffer wrote:
> > Hi all,
> >
> > Denis is practically maintaining vdsm gluster code in the recent years,
> > and it is time to make this official.
> >
> > Please ack,
> > Nir
>
>
> Obvious +1 from me. Keep up the good work Denis!
>

+2 then. Adding infra@ovirt to make the change to vdsm ACL.


> --
> Francesco Romani
> Senior SW Eng., Virtualization R&D
> Red Hat
> IRC: fromani github: @fromanirh
>
>


Re: [ovirt-devel] Re: [CQ ovirt master] [ovirt-engine] - not passing for 10 days

2018-11-11 Thread Dan Kenigsberg
On Sun, Nov 11, 2018 at 5:27 PM Eyal Edri  wrote:
>
>
>
> On Sun, Nov 11, 2018 at 5:24 PM Eyal Edri  wrote:
>>
>>
>>
>> On Sun, Nov 11, 2018 at 5:20 PM Dan Kenigsberg  wrote:
>>>
>>> On Sun, Nov 11, 2018 at 4:36 PM Ehud Yonasi  wrote:
>>> >
>>> > Hey,
>>> > I've seen that CQ Master is not passing ovirt-engine for 10 days and 
>>> > fails on test suite called restore_vm0_networking
>>> > here's a snap error regarding it:
>>> >
>>> > https://pastebin.com/7msEYqKT
>>> >
>>> > Link to a sample job with the error:
>>> >
>>> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3/artifact/basic-suite.el7.x86_64/004_basic_sanity.py.junit.xml
>>>
>>> I cannot follow this link because I'm 4 minutes too late
>>>
>>> jenkins.ovirt.org uses an invalid security certificate. The
>>> certificate expired on November 11, 2018, 5:13:25 PM GMT+2. The
>>> current time is November 11, 2018, 5:17 PM.
>>
>>
>> Yes, we're looking into that issue now.
>
>
> Fixed, you should be able to access it now.

OST fails during restore_vm0_networking on line 101 of
004_basic_sanity.py while comparing
vm_service.get().status == state

It seems that instead of reporting back the VM status, Engine sent garbage:
"The response content type 'text/html; charset=iso-8859-1' isn't the
expected XML"

I do not know what could cause that, and engine.log does not mention
it. But it seems like a problem in the engine API, hence +Martin Perina and
+Ondra Machacek .
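
(For context, the check around that line is essentially a status-polling
loop. A minimal sketch of the pattern, assuming an ovirt-sdk-like vm_service
as in the snippet above; this is illustrative, not the actual OST code:)

    import time

    def wait_for_vm_status(vm_service, state, timeout=300, interval=3):
        """Poll the SDK's VM service until it reports the expected status."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                if vm_service.get().status == state:
                    return True
            except Exception as err:
                # An HTML error page instead of the expected XML surfaces here
                # as an SDK error, like the one quoted above.
                print('transient API error: %s' % err)
            time.sleep(interval)
        return False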



>
>>
>>
>>
>>>
>>>
>>> >
>>> > Can some1 have a look at it and help to resolve the issue?
>>> >
>>> >
>>> > ___
>>> > Infra mailing list -- infra@ovirt.org
>>> > To unsubscribe send an email to infra-le...@ovirt.org
>>> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> > oVirt Code of Conduct: 
>>> > https://www.ovirt.org/community/about/community-guidelines/
>>> > List Archives: 
>>> > https://lists.ovirt.org/archives/list/infra@ovirt.org/message/ZQAYWTLZJKGPJ25F33E6ICVDXQDYSKSQ/
>>> ___
>>> Devel mailing list -- de...@ovirt.org
>>> To unsubscribe send an email to devel-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct: 
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives: 
>>> https://lists.ovirt.org/archives/list/de...@ovirt.org/message/R5LOJH73XCLLFOUTKPM5GUCS6PNNKGTE/
>>
>>
>>
>> --
>>
>> Eyal edri
>>
>>
>> MANAGER
>>
>> RHV/CNV DevOps
>>
>> EMEA VIRTUALIZATION R&D
>>
>>
>> Red Hat EMEA
>>
>> TRIED. TESTED. TRUSTED.
>> phone: +972-9-7692018
>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
>
>
> --
>
> Eyal edri
>
>
> MANAGER
>
> RHV/CNV DevOps
>
> EMEA VIRTUALIZATION R&D
>
>
> Red Hat EMEA
>
> TRIED. TESTED. TRUSTED.
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)


Re: [ovirt-devel] [CQ ovirt master] [ovirt-engine] - not passing for 10 days

2018-11-11 Thread Dan Kenigsberg
On Sun, Nov 11, 2018 at 4:36 PM Ehud Yonasi  wrote:
>
> Hey,
> I've seen that CQ Master is not passing ovirt-engine for 10 days and fails on 
> test suite called restore_vm0_networking
> here's a snap error regarding it:
>
> https://pastebin.com/7msEYqKT
>
> Link to a sample job with the error:
>
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3/artifact/basic-suite.el7.x86_64/004_basic_sanity.py.junit.xml

I cannot follow this link because I'm 4 minutes too late

jenkins.ovirt.org uses an invalid security certificate. The
certificate expired on November 11, 2018, 5:13:25 PM GMT+2. The
current time is November 11, 2018, 5:17 PM.

>
> Can some1 have a look at it and help to resolve the issue?
>
>
> ___
> Infra mailing list -- infra@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/infra@ovirt.org/message/ZQAYWTLZJKGPJ25F33E6ICVDXQDYSKSQ/


Re: Test network.ovs_driver_test.TestOvsApiBase seems to fail due to a timeout

2018-03-28 Thread Dan Kenigsberg
We still have no guess regarding what made this pop up. I can confirm that
this happens quite often, on different slaves, e.g.
http://jenkins.ovirt.org/job/vdsm_master_check-patch-el7-x86_64/22648/console
where the command

'/usr/bin/ovs-vsctl', '--oneline', '--format=json', '--', 'add-br',
'vdsmbr_test', '--', 'list', 'Bridge'

gets stuck.
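
(If someone wants to reproduce this locally, here is a small sketch that runs
the same command under a hard timeout, so a hang shows up as an error instead
of wedging the slave — illustrative only:)

    import subprocess

    # The command that gets stuck in the job above.
    CMD = ['/usr/bin/ovs-vsctl', '--oneline', '--format=json',
           '--', 'add-br', 'vdsmbr_test', '--', 'list', 'Bridge']

    try:
        print(subprocess.check_output(CMD, timeout=30).decode())
    except subprocess.TimeoutExpired:
        # A hang here usually means ovs-vsctl is waiting on ovsdb-server
        # rather than failing outright.
        print('ovs-vsctl did not return within 30s')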



On Mon, Mar 26, 2018 at 7:20 PM, Dafna Ron  wrote:

> Adding Gal.
>
> Gal, can this be related to the timeout you added?
>
> thanks,
> Dafna
>
>
> On Mon, Mar 26, 2018 at 2:24 PM, Eyal Edri  wrote:
>
>> Not sure its an infra issue, but adding Evgheni to help if needed.
>>
>> On Mon, Mar 26, 2018 at 3:48 PM, Shani Leviim  wrote:
>>
>>>
>>> Hi,
>>>
>>> I'm trying to verify an ovirt-4.2 patch on vdsm, but the CI keeps failing
>>> due to a timeout on the network.ovs_driver_test.TestOvsApiBase test [1].
>>>
>>> That happens for both jobs 166 and 167 (I've tried to retrigger); links
>>> below. Can you please assist? Thanks!
>>>
>>> http://jenkins.ovirt.org/job/vdsm_4.2_check-patch-el7-x86_64/166/consoleFull
>>> http://jenkins.ovirt.org/job/vdsm_4.2_check-patch-el7-x86_64/167/console
>>>
>>> [1]
>>> 08:22:55 network.ovs_driver_test.TestOvsApiBase
>>> 08:22:55 test_execute_a_single_command   OK
>>> 08:26:56 test_execute_a_transaction
>>> ___ summary ___
>>> 08:26:56   pylint: commands succeeded
>>> 08:26:56   congratulations :)
>>> 08:30:53 =   Watched process timed out  =
>>> 08:30:53 [Thread debugging using libthread_db enabled]
>>> 08:30:53 Using host libthread_db library "/lib64/libthread_db.so.1".
>>> 08:30:54 0x7f6a10ca6a3d in poll () at ../sysdeps/unix/syscall-template.S:81
>>> 08:30:54 81 T_PSEUDO (SYSCALL_SYMBOL, SYSCALL_NAME, SYSCALL_NARGS)
>>>
>>> Regards,
>>> Shani Leviim
>>>
>>> ___
>>> Infra mailing list
>>> Infra@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/infra
>>>
>>>
>>
>>
>> --
>>
>> Eyal edri
>>
>>
>> MANAGER
>>
>> RHV DevOps
>>
>> EMEA VIRTUALIZATION R&D
>>
>>
>> Red Hat EMEA 
>>  TRIED. TESTED. TRUSTED. 
>> phone: +972-9-7692018 <+972%209-769-2018>
>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>
>> ___
>> Infra mailing list
>> Infra@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/infra
>>
>>
>


Re: OST Failure - Weekly update [03/03/2018-09/03/2018]

2018-03-09 Thread Dan Kenigsberg
On Fri, Mar 9, 2018 at 10:02 PM, Dafna Ron  wrote:

> Hello,
>
> I would like to update on this week's failures and OST current status.
>
> We had a few failure this week:
>
> 1. A change in OST caused a failure in one of the tests. this was due to
> using a 4.3 SDK feature which is not available in the current SDK version.
> The change was https://gerrit.ovirt.org/#/c/84338 -
> https://gerrit.ovirt.org/#/c/84338/
>  We reverted the change: https://gerrit.ovirt.org/#/c/88441/
>

I believe that we were able to take this patch due to a bug in OST, which
I believe Daniel is already working on: the patch modified a file that is linked
from suite-4.2, but the CI job for the patch triggered only
suite-master. We should be more prudent, as long as we use cross-suite
symlinks.
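
(To illustrate the point — this is not what the CI trigger actually does — a
sketch of a check that resolves symlinks, so a change to a shared file would
mark every suite linking to it as affected; the paths are assumptions, not the
real OST layout:)

    import os

    def suites_affected(changed_files, suites_root='ovirt-system-tests'):
        """Return the suites whose directories contain the changed files,
        following symlinks, so a shared file triggers every suite using it."""
        targets = {os.path.realpath(f) for f in changed_files}
        affected = set()
        for suite in os.listdir(suites_root):
            suite_dir = os.path.join(suites_root, suite)
            if not os.path.isdir(suite_dir):
                continue
            for root, _dirs, names in os.walk(suite_dir):
                for name in names:
                    if os.path.realpath(os.path.join(root, name)) in targets:
                        affected.add(suite)
        return affected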


Re: [oVirt Jenkins] ovirt-system-tests_network-suite-4.2 - Build # 5 - Still Failing!

2018-03-05 Thread Dan Kenigsberg
http://jenkins.ovirt.org/job/ovirt-system-tests_network-suite-4.2/5/console

Leon, have you noticed that your new test for syncallnetworks runs on
4.2, too, while the API is introduced only in 4.3?
Either backport the new API, or (more reasonably) avoid this test in 4.2.
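
(A minimal sketch of the "avoid it in 4.2" option, assuming the suite can see
the engine API version — the constant below stands in for that and is not an
actual OST helper:)

    from nose.plugins.skip import SkipTest

    ENGINE_API_VERSION = (4, 2)  # assumption: provided by the suite's config

    def test_sync_all_networks():
        if ENGINE_API_VERSION < (4, 3):
            raise SkipTest('the syncallnetworks API exists only since engine 4.3')
        # ... exercise the 4.3-only call here ...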

On Mon, Mar 5, 2018 at 12:27 PM,   wrote:
> Project: http://jenkins.ovirt.org/job/ovirt-system-tests_network-suite-4.2/
> Build: http://jenkins.ovirt.org/job/ovirt-system-tests_network-suite-4.2/5/
> Build Number: 5
> Build Status:  Still Failing
> Triggered By: Started by timer
>
> -
> Changes Since Last Success:
> -
> Changes for Build #4
> [Eyal Edri] Revert "track LSM job status"
>
>
> Changes for Build #5
> [Eyal Edri] Revert "track LSM job status"
>
>
>
>
> -
> Failed Tests:
> -
> No tests ran.
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>


Re: [CQ]: 83979,8 (vdsm) failed "ovirt-master" system tests

2018-02-26 Thread Dan Kenigsberg
On Sat, Feb 24, 2018 at 4:36 PM, oVirt Jenkins  wrote:
> Change 83979,8 (vdsm) is probably the reason behind recent system test 
> failures
> in the "ovirt-master" change queue and needs to be fixed.
>
> This change had been removed from the testing queue. Artifacts build from this
> change will not be released until it is fixed.
>
> For further details about the change see:
> https://gerrit.ovirt.org/#/c/83979/8
>
> For failed test results see:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/5850/

I see that one of the network functional tests is failing
build-artifacts and the release of this build.

Edy, could you see why?

http://jenkins.ovirt.org/job/vdsm_master_check-merged-el7-x86_64/3226/artifact/exported-artifacts/mock_logs/mocker-epel-7-x86_64.el7.check-merged.sh/check-merged.sh.log


Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master (vdsm) ] [ 22-02-2018 ] [ 002_bootstrap.verify_add_hosts + 002_bootstrap.add_hosts ]

2018-02-22 Thread Dan Kenigsberg
On Thu, Feb 22, 2018 at 12:59 PM, Dafna Ron  wrote:
> Hi,
>
> We had two failed tests reported in vdsm project last evening  the patch
> reported seems to be related to the issue.
>
>
> Link and headline of suspected patches:
>
> momIF: change the way we connect to MOM -
> https://gerrit.ovirt.org/#/c/87944/
>
>
> Link to Job:
>
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/5823/
>
> Link to all logs:
>
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/5823/artifacts
>
> (Relevant) error snippet from the log:
>
> 
>
>
>
> 2018-02-21 14:15:47,576-0500 INFO  (MainThread) [vdsm.api] FINISH
> prepareForShutdown return=None from=internal,
> task_id=7d37a33b-0215-40c0-a821-9b94707caca6 (api:52)
> 2018-02-21 14:15:47,576-0500 ERROR (MainThread) [vds] Exception raised
> (vdsmd:158)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/vdsmd.py", line 156, in run
> serve_clients(log)
>   File "/usr/lib/python2.7/site-packages/vdsm/vdsmd.py", line 103, in
> serve_clients
> cif = clientIF.getInstance(irs, log, scheduler)
>   File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 251, in
> getInstance
> cls._instance = clientIF(irs, log, scheduler)
>   File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 121, in
> __init__
> self.mom = MomClient(config.get("mom", "socket_path"))
>   File "/usr/lib/python2.7/site-packages/vdsm/momIF.py", line 51, in
> __init__
> raise MomNotAvailableError()
> MomNotAvailableError
>
> 
>

This smells like a race between mom and vdsm startups (that
bidirectional dependency is wonderful!). I am sure that Francesco can
fix it quickly, but until then I've posted a revert of the offending
patch:
http://jenkins.ovirt.org/job/ovirt-system-tests_manual/2225/
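
(For illustration only — the actual fix may look nothing like this — the usual
way to tolerate such a startup race is to retry the client connection instead
of failing on the first attempt:)

    import time

    def connect_with_retry(factory, attempts=10, delay=1.0):
        """Call factory() until it stops raising, e.g. until MOM's socket exists."""
        for attempt in range(attempts):
            try:
                return factory()
            except Exception:
                if attempt == attempts - 1:
                    raise
                time.sleep(delay)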


Re: [ OST Failure Report ] [ oVirt master ] [ 20-11-2017 ] [004_basic_sanity.vm_run ]

2017-11-20 Thread Dan Kenigsberg
Francesco is on it: https://gerrit.ovirt.org/#/c/84382/

On Mon, Nov 20, 2017 at 3:43 PM, Dafna Ron  wrote:
> Hi,
>
> We have a failure in OST on test 004_basic_sanity.vm_run.
>
> it seems to be an error in vm type which is related to the patch reported.
>
>
> Link to suspected patches: https://gerrit.ovirt.org/#/c/84343/
>
>
> Link to Job:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3922
>
>
> Link to all logs:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3922/artifact
>
>
> (Relevant) error snippet from the log:
>
> 
>
>
> vdsm log:
>
> 2017-11-20 07:40:12,779-0500 ERROR (jsonrpc/2) [jsonrpc.JsonRpcServer]
> Internal server error (__init__:611)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 606,
> in _handle_request
> res = method(**params)
>   File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 201, in
> _dynamicMethod
> result = fn(*methodArgs)
>   File "", line 2, in getAllVmStats
>   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in
> method
> ret = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1341, in
> getAllVmStats
> statsList = self._cif.getAllVmStats()
>   File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 508, in
> getAllVmStats
> return [v.getStats() for v in self.vmContainer.values()]
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1664, in
> getStats
> stats.update(self._getConfigVmStats())
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1703, in
> _getConfigVmStats
> 'vmType': self.conf['vmType'],
> KeyError: 'vmType'
>
>
> engine log:
>
> 2017-11-20 07:43:07,675-05 DEBUG
> [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker] (ResponseWorker) []
> Message received: {"jsonrpc": "2.0", "id":
> "5bf12e5a-4a09-4999-a6ce-a7dd639d3833", "error": {"message": "Internal
> JSON-RPC error:
>  {'reason': \"'vmType'\"}", "code": -32603}}
> 2017-11-20 07:43:07,676-05 WARN
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-70) [] Unexpected return
> value: Status [code=-32603, message=Internal JSON-RPC error: {'r
> eason': "'vmType'"}]
> 2017-11-20 07:43:07,676-05 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-70) [] Failed in
> 'GetAllVmStatsVDS' method
> 2017-11-20 07:43:07,676-05 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-70) [] Command
> 'GetAllVmStatsVDSCommand(HostName = lago-basic-suite-master-host-0, VdsIdV
> DSCommandParametersBase:{hostId='1af28f2c-79db-4069-aa53-5bb46528c5e9'})'
> execution failed: VDSGenericException: VDSErrorException: Failed to
> GetAllVmStatsVDS, error = Internal JSON-RPC error: {'reason': "'vmType'"},
> code = -32603
> 2017-11-20 07:43:07,676-05 DEBUG
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-70) [] Exception:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGeneric
> Exception: VDSErrorException: Failed to GetAllVmStatsVDS, error = Internal
> JSON-RPC error: {'reason': "'vmType'"}, code = -32603
> at
> org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.createDefaultConcreteException(VdsBrokerCommand.java:81)
> [vdsbroker.jar:]
> at
> org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.createException(BrokerCommandBase.java:223)
> [vdsbroker.jar:]
> at
> org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.proceedProxyReturnValue(BrokerCommandBase.java:193)
> [vdsbroker.jar:]
> at
> org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand.executeVdsBrokerCommand(GetAllVmStatsVDSCommand.java:23)
> [vdsbroker.jar:]
> at
> org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:112)
> [vdsbroker.jar:]
> at
> org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:73)
> [vdsbroker.jar:]
> at
> org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33)
> [dal.jar:]
> at
> org.ovirt.engine.core.vdsbroker.vdsbroker.DefaultVdsCommandExecutor.execute(DefaultVdsCommandExecutor.java:14)
> [vdsbroker.jar:]
> at
> org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:387)
> [vdsbroker.jar:]
> at
> org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand$$super(Unknown
> Source) [vdsbroker.jar:]
> at sun.reflect.GeneratedMethodAccessor247.invoke(Unknown Source)
> [:1.8.0_151]
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> [rt.jar:1.8.0_151]
> at 

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 01 Nov 2017 ] [ 098_ovirt_provider_ovn.test_ovn_provider_rest ]

2017-11-03 Thread Dan Kenigsberg
Kaul suggested a fix https://gerrit.ovirt.org/#/c/83569/ ; please consider
taking it in.

On Wed, Nov 1, 2017 at 3:24 PM, Eyal Edri  wrote:

> Adding Marcin.
>
> On Wed, Nov 1, 2017 at 12:01 PM, Dafna Ron  wrote:
>
>> Hi,
>>
>> 098_ovirt_provider_ovn.test_ovn_provider_rest failed on removing the
>> interface from a running vm.
>>
>> I have seen this before, do we perhaps have a race in OST where the vm is
>> still running at times?
>>
>> *Link to suspected patches: Patch reported is below but I am suspecting
>> its a race and not related*
>>
>>
>> *https://gerrit.ovirt.org/#/c/83414/
>>  *
>>
>> *Link to Job:*
>>
>> *http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3558/
>> *
>>
>>
>> *Link to all logs:*
>>
>>
>>
>>
>>
>> *
>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3558/artifact/
>> 
>> (Relevant) error snippet from the log:  2017-10-31 10:58:43,516-04
>> ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource]
>> (default task-32) [] Operation Failed: [Cannot remove Interface. The VM
>> Network Interface is plugged to a running VM.]  *
>>
>>
>> ___
>> Devel mailing list
>> de...@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>
>
>
> --
>
> Eyal edri
>
>
> MANAGER
>
> RHV DevOps
>
> EMEA VIRTUALIZATION R&D
>
>
> Red Hat EMEA 
>  TRIED. TESTED. TRUSTED. 
> phone: +972-9-7692018 <+972%209-769-2018>
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>


Re: [JIRA] (OVIRT-1649) Include summary, owner and component of suspected patch in OST Failure Report

2017-09-14 Thread Dan Kenigsberg
On Thu, Sep 14, 2017 at 3:17 PM, eyal edri (oVirt JIRA) <
j...@ovirt-jira.atlassian.net> wrote:

> [ 
> https://ovirt-jira.atlassian.net/browse/OVIRT-1649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=34913#comment-34913
>  ]
>
> eyal edri commented on OVIRT-1649:
>
> We are planning to start commenting directly in Gerrit on the relevant
> offending patch soon; we were waiting to see how CQ behaves and sending
> emails to infra only until now. However, it looks like it's quite accurate,
> so we might consider doing it sooner, though we might also want to wait
> until we also gate OS changes before doing that.
>
> Nevertheless, I guess we can in the meantime include the patch owner in
> the ‘to’ email when sending a report. Any comments on the current summary?
> is there something missing in it other than the patch owner?
>

Yes - I've asked to have the patch summary line, and its component.

On Thu, Sep 14, 2017 at 3:11 PM, danken (oVirt JIRA) <
>
> —
>
> Eyal edri
>
> ASSOCIATE MANAGER
>
> RHV DevOps
>
> EMEA VIRTUALIZATION R&D
>
> Red Hat EMEA   TRIED.
> TESTED. TRUSTED.  phone: +972-9-7692018
> <+972%209-769-2018> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
> Include summary, owner and component of suspected patch in OST Failure
> Report
>
>  Key: OVIRT-1649
>  URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1649
>  Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
> Reporter: danken
> Assignee: infra
>
> $subject would make it easier and quicker to check if the suspicion is
> correct.
>
> — This message was sent by Atlassian {0} (v1001.0.0-SNAPSHOT#100059)
>
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
>


Re: [ OST Failure Report ] [ oVirt 4.1 ] [ 09/02/17 ] [ basic_sanity.hotplug_nic ]

2017-03-12 Thread Dan Kenigsberg
For the record, we have realized that this failure is unrelated to the
log supplied in bug
https://bugzilla.redhat.com/show_bug.cgi?id=1417595 .

That log is due to a test requesting a DHCP address, not receiving it,
and carrying on to the next tests, while a Vdsm thread is still waiting
for DHCP.

The cause for this breakage is probably a broken jsonrpc connection,
hopefully fixed by
https://gerrit.ovirt.org/#/q/Idaa54767bb7e54bf13e89887ca34fa8e01ade420

On Thu, Mar 9, 2017 at 3:06 PM, Dan Kenigsberg <dan...@redhat.com> wrote:
> On Thu, Mar 9, 2017 at 10:00 AM, Daniel Belenky <dbele...@redhat.com> wrote:
>> Test failed: basic_sanity.hotplug_nic
>>
>> Link to failed job: test-repo_ovirt_experimental_4.1/917/
>>
>> Link to all logs: logs from Jenkins
>>
>> May be related to: https://bugzilla.redhat.com/show_bug.cgi?id=1417595
>>
>> Error snippet from log: (from supervdsm.log):
>>
>> ifup/VLAN100_Network::DEBUG::2017-03-08
>> 22:24:22,832::commands::93::root::(execCmd) FAILED:  = 'Running scope
>> as unit
>> ff994cde-b66d-4d54-9e36-fc253682608f.scope.\n/etc/sysconfig/network-scripts/ifup-eth:
>> line 297: 16985 Terminated  /sbin/dhclient ${DHCLIENTARGS}
>> ${DEVICE}\nCannot find device "VLAN100_Network"\nDevice "VLAN100_Network"
>> does not exist.\nDevice "VLAN100_Network" does not exist.\nDevice
>> "VLAN100_Network" does not exist.\nDevice "VLAN100_Network" does not
>> exist.\nDevice "VLAN100_Network" does not exist.\nDevice "VLAN100_Network"
>> does not exist.\nDevice "VLAN100_Network" does not exist.\nDevice
>> "VLAN100_Network" does not exist.\nDevice "VLAN100_Network" does not
>> exist.\nDevice "VLAN100_Network" does not exist.\nDevice "VLAN100_Network"
>> does not exist.\nDevice "VLAN100_Network" does not exist.\nDevice
>> "VLAN100_Network" does not exist.\nDevice "VLAN100_Network" does not
>> exist.\nDevice "VLAN100_Network" does not exist.\nDevice "VLAN100_Network"
>> does not exist.\nDevice "VLAN100_Network" does not exist.\nDevice
>> "VLAN100_Network" does not exist.\nDevice "VLAN100_Network" does not
>> exist.\nDevice "VLAN100_Network" does not exist.\nDevice "VLAN100_Network"
>> does not exist.\nDevice "VLAN100_Network" does not exist.\nDevice
>> "VLAN100_Network" does not exist.\nDevice "VLAN100_Network" does not
>> exist.\nDevice "VLAN100_Network" does not exist.\nDevice "VLAN100_Network"
>> does not exist.\nDevice "VLAN100_Network" does not exist.\nDevice
>> "VLAN100_Network" does not exist.\nDevice "VLAN100_Network" does not
>> exist.\nDevice "VLAN100_Network" does not exist.\nDevice "VLAN100_Network"
>> does not exist.\nDevice "VLAN100_Network" does not exist.\nDevice
>> "VLAN100_Network" does not exist.\nDevice "VLAN100_Network" does not
>> exist.\nDevice "VLAN100_Network" does not exist.\nDevice "VLAN100_Network"
>> does not exist.\nDevice "VLAN100_Network" does not exist.\nDevice
>> "VLAN100_Network" does not exist.\nDevice "VLAN100_Network" does not
>> exist.\nDevice "VLAN100_Network" does not exist.\nDevice "VLAN100_Network"
>> does not exist.\nDevice "VLAN100_Network" does not exist.\nDevice
>> "VLAN100_Network" does not exist.\nDevice "VLAN100_Network" does not
>> exist.\nDevice "VLAN100_Network" does not exist.\nDevice "VLAN100_Network"
>> does not exist.\nDevice "VLAN100_Network" does not exist.\nDevice
>> "VLAN100_Network" does not exist.\nDevice "VLAN100_Network" does not
>> exist.\nDevice "VLAN100_Network" does not exist.\nDevice "VLAN100_Network"
>> does not exist.\n';  = 1
>> ifup/VLAN100_Network::ERROR::2017-03-08
>> 22:24:22,832::utils::371::root::(wrapper) Unhandled exception
>> Traceback (most recent call last):
>>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 368, in
>> wrapper
>> return f(*a, **kw)
>>   File "/usr/lib/python2.7/site-packages/vdsm/concurrent.py", line 180, in
>> run
>> return func(*args, **kwargs)
>>   File
>> "/usr/lib/python2.7/site-packages/vdsm/network/configurators/ifcfg.py", line
>> 924, in _exec_ifup
>> _exec_ifup_by_name(iface.name, cgroup)
>>   File
>> "/usr/lib/python2.7/site-packages/vdsm/network/configurators/ifcfg.py", line
>> 910, in _exec_ifup_by_name
>> raise ConfigNetworkError(ERR_FAILED_IFUP, out[-1] if out else '')
>> ConfigNetworkError: (29, 'Determining IPv6 information for
>> VLAN100_Network... failed.')
>
> Yes, this seems to be a reproduction of Vdsm bug 1417595 - hotplug_nic
> test fails in OST
>
> Eddy, can you tell how could it be that the VLAN100_NETWORK bridge
> does not exist when ifup is called?


Re: [ OST Failure Report ] [ oVirt 4.1 ] [ 09/02/17 ] [ basic_sanity.hotplug_nic ]

2017-03-09 Thread Dan Kenigsberg
On Thu, Mar 9, 2017 at 10:00 AM, Daniel Belenky  wrote:
> Test failed: basic_sanity.hotplug_nic
>
> Link to failed job: test-repo_ovirt_experimental_4.1/917/
>
> Link to all logs: logs from Jenkins
>
> May be related to: https://bugzilla.redhat.com/show_bug.cgi?id=1417595
>
> Error snippet from log: (from supervdsm.log):
>
> ifup/VLAN100_Network::DEBUG::2017-03-08
> 22:24:22,832::commands::93::root::(execCmd) FAILED:  = 'Running scope
> as unit
> ff994cde-b66d-4d54-9e36-fc253682608f.scope.\n/etc/sysconfig/network-scripts/ifup-eth:
> line 297: 16985 Terminated  /sbin/dhclient ${DHCLIENTARGS}
> ${DEVICE}\nCannot find device "VLAN100_Network"\nDevice "VLAN100_Network"
> does not exist.\nDevice "VLAN100_Network" does not exist.\nDevice
> "VLAN100_Network" does not exist.\nDevice "VLAN100_Network" does not
> exist.\nDevice "VLAN100_Network" does not exist.\nDevice "VLAN100_Network"
> does not exist.\nDevice "VLAN100_Network" does not exist.\nDevice
> "VLAN100_Network" does not exist.\nDevice "VLAN100_Network" does not
> exist.\nDevice "VLAN100_Network" does not exist.\nDevice "VLAN100_Network"
> does not exist.\nDevice "VLAN100_Network" does not exist.\nDevice
> "VLAN100_Network" does not exist.\nDevice "VLAN100_Network" does not
> exist.\nDevice "VLAN100_Network" does not exist.\nDevice "VLAN100_Network"
> does not exist.\nDevice "VLAN100_Network" does not exist.\nDevice
> "VLAN100_Network" does not exist.\nDevice "VLAN100_Network" does not
> exist.\nDevice "VLAN100_Network" does not exist.\nDevice "VLAN100_Network"
> does not exist.\nDevice "VLAN100_Network" does not exist.\nDevice
> "VLAN100_Network" does not exist.\nDevice "VLAN100_Network" does not
> exist.\nDevice "VLAN100_Network" does not exist.\nDevice "VLAN100_Network"
> does not exist.\nDevice "VLAN100_Network" does not exist.\nDevice
> "VLAN100_Network" does not exist.\nDevice "VLAN100_Network" does not
> exist.\nDevice "VLAN100_Network" does not exist.\nDevice "VLAN100_Network"
> does not exist.\nDevice "VLAN100_Network" does not exist.\nDevice
> "VLAN100_Network" does not exist.\nDevice "VLAN100_Network" does not
> exist.\nDevice "VLAN100_Network" does not exist.\nDevice "VLAN100_Network"
> does not exist.\nDevice "VLAN100_Network" does not exist.\nDevice
> "VLAN100_Network" does not exist.\nDevice "VLAN100_Network" does not
> exist.\nDevice "VLAN100_Network" does not exist.\nDevice "VLAN100_Network"
> does not exist.\nDevice "VLAN100_Network" does not exist.\nDevice
> "VLAN100_Network" does not exist.\nDevice "VLAN100_Network" does not
> exist.\nDevice "VLAN100_Network" does not exist.\nDevice "VLAN100_Network"
> does not exist.\nDevice "VLAN100_Network" does not exist.\nDevice
> "VLAN100_Network" does not exist.\nDevice "VLAN100_Network" does not
> exist.\nDevice "VLAN100_Network" does not exist.\nDevice "VLAN100_Network"
> does not exist.\n';  = 1
> ifup/VLAN100_Network::ERROR::2017-03-08
> 22:24:22,832::utils::371::root::(wrapper) Unhandled exception
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 368, in
> wrapper
> return f(*a, **kw)
>   File "/usr/lib/python2.7/site-packages/vdsm/concurrent.py", line 180, in
> run
> return func(*args, **kwargs)
>   File
> "/usr/lib/python2.7/site-packages/vdsm/network/configurators/ifcfg.py", line
> 924, in _exec_ifup
> _exec_ifup_by_name(iface.name, cgroup)
>   File
> "/usr/lib/python2.7/site-packages/vdsm/network/configurators/ifcfg.py", line
> 910, in _exec_ifup_by_name
> raise ConfigNetworkError(ERR_FAILED_IFUP, out[-1] if out else '')
> ConfigNetworkError: (29, 'Determining IPv6 information for
> VLAN100_Network... failed.')

Yes, this seems to be a reproduction of Vdsm bug 1417595 - hotplug_nic
test fails in OST

Eddy, can you tell how it could be that the VLAN100_Network bridge
does not exist when ifup is called?


Re: [ OST Failure Report ] [ oVirt 4.1 ] [ 09/02/17 ] [ basic_sanity.assign_hosts_network_label ]

2017-03-09 Thread Dan Kenigsberg
On Thu, Mar 9, 2017 at 1:50 PM, Daniel Belenky  wrote:
> Test failed: basic_sanity.assign_hosts_network_label
>
> Link to failed job: test-repo_ovirt_experimental_master/5754
>
> Link to all logs: logs from Jenkins
>
> Error snippet from log:
>
> lago.utils: ERROR: Error while running thread
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 57, in
> _ret_via_queue
> queue.put({'return': func()})
>   File
> "/home/jenkins/workspace/test-repo_ovirt_experimental_master/ovirt-system-tests/basic-suite-master/test-scenarios/005_network_by_label.py",
> line 56, in _assign_host_network_label
> host_nic=nic
>   File
> "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/brokers.py", line
> 16231, in add
> headers={"Correlation-Id":correlation_id, "Expect":expect}
>   File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py",
> line 79, in add
> return self.request('POST', url, body, headers, cls=cls)
>   File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py",
> line 122, in request
> persistent_auth=self.__persistent_auth
>   File
> "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py",
> line 79, in do_request
> persistent_auth)
>   File
> "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py",
> line 162, in __do_request
> raise errors.RequestError(response_code, response_reason, response_body)
> RequestError:
> status: 409
> reason: Conflict
> detail: Cannot add Label. Operation can be performed only when Host status
> is  Maintenance, Up, NonOperational.

Leon, could you look into this job's engine.log? I suspect that the
added log entry there would state that the host has fallen off to
non-responding. If so, I suspect a bug in the transport layer.


Re: ost host addition failure

2016-12-27 Thread Dan Kenigsberg
On Tue, Dec 27, 2016 at 9:59 AM, Eyal Edri  wrote:
>
>
> On Tue, Dec 27, 2016 at 9:56 AM, Eyal Edri  wrote:
>>
>> Any updates?
>> The tests are still failing on vdsmd not starting, since Sunday... master
>> repos haven't been refreshed for a few days due to this.
>>
>> from host deploy log: [1]
>> basic-suite-master-engine/_var_log_ovirt-engine/host-deploy/ovirt-host-deploy-20161227012930-192.168.201.4-14af2bf0.log
>> the job links [2]
>>
>>
>>
>>
>>
>> [1]
>> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/lastCompletedBuild/artifact/exported-artifacts/basic_suite_master.sh-el7/exported-artifacts/test_logs/basic-suite-master/post-002_bootstrap.py/lago-
>
>
> Now with the full link:
> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/lastCompletedBuild/artifact/exported-artifacts/basic_suite_master.sh-el7/exported-artifacts/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-engine/_var_log_ovirt-engine/host-deploy/ovirt-host-deploy-20161227012930-192.168.201.4-14af2bf0.log
>
>>
>>
>>
>>
>> 016-12-27 01:29:29 DEBUG otopi.plugins.otopi.services.systemd
>> plugin.execute:921 execute-output: ('/bin/systemctl', 'start',
>> 'vdsmd.service') stdout:
>>
>> 2016-12-27 01:29:29 DEBUG otopi.plugins.otopi.services.systemd
>> plugin.execute:926 execute-output: ('/bin/systemctl', 'start',
>> 'vdsmd.service') stderr:
>> A dependency job for vdsmd.service failed. See 'journalctl -xe' for
>> details.
>>
>> 2016-12-27 01:29:29 DEBUG otopi.context context._executeMethod:142 method
>> exception
>> Traceback (most recent call last):
>>   File "/tmp/ovirt-QZ1ucxWFfm/pythonlib/otopi/context.py", line 132, in
>> _executeMethod
>> method['method']()
>>   File
>> "/tmp/ovirt-QZ1ucxWFfm/otopi-plugins/ovirt-host-deploy/vdsm/packages.py",
>> line 209, in _start
>> self.services.state('vdsmd', True)
>>   File "/tmp/ovirt-QZ1ucxWFfm/otopi-plugins/otopi/services/systemd.py",
>> line 141, in state
>> service=name,
>> RuntimeError: Failed to start service 'vdsmd'
>> 2016-12-27 01:29:29 ERROR otopi.context context._executeMethod:151 Failed
>> to execute stage 'Closing up': Failed to start service 'vdsmd'
>> 2016-12-27 01:29:29 DEBUG otopi.context context.dumpEnvironment:760
>> ENVIRONMENT DUMP - BEGIN
>> 2016-12-27 01:29:29 DEBUG otopi.context context.dumpEnvironment:770 ENV
>> BASE/error=bool:'True'
>> 2016-12-27 01:29:29 DEBUG otopi.context context.dumpEnvironment:770 ENV
>> BASE/excep
>>
>> [2]
>> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/lastCompletedBuild/testReport/


In the log I see

Processing package vdsm-4.20.0-7.gitf851d1b.el7.centos.x86_64

which is from Dec 22 (last Thursday). This is because we are missing a
master-branch tag: v4.20.0 was wrongly tagged on the same commit as that
of v4.19.1, removed, and never placed properly.

I've re-pushed v4.20.0 properly, and now merged a patch to trigger
build-artifacts in master.
http://jenkins.ovirt.org/job/vdsm_master_build-artifacts-el7-x86_64/1544/

When this is done, could you use it to take the artifacts and try again?

Regards,
Dan.


Re: [VDSM] Testing vdsm master on Fedora 23?

2016-12-27 Thread Dan Kenigsberg
On Sun, Dec 25, 2016 at 11:57 AM, Eyal Edri  wrote:
> It should be removed from the standard yml file for vdsm on Jenkins repo.

Do you refer to jobs/confs/projects/vdsm/vdsm_standard.yaml ?

> Any maintainer can send a patch to exclude it, if help is needed, then
> someone from infra can guide you to the right file.
>
> On Thu, Dec 22, 2016 at 5:20 PM, Nir Soffer  wrote:
>>
>> Hi all,
>>
>> For some reason we have a new job testing vdsm master/4.1 on Fedora 23.
>> http://jenkins.ovirt.org/job/vdsm_4.1_check-patch-fc23-x86_64/2/
>>
>> Fedora 23 is not support on vdsm master long time ago. Please remove this
>> job.

On master and 4.1 we should happily skip to f25.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [ovirt-devel] [ACTION REQUIRED] [URGENT] ovirt-4.1-snapshot repoclosure is failing due to ovirt-provider-ovn and vdsm

2016-12-21 Thread Dan Kenigsberg
On Wed, Dec 21, 2016 at 10:17 AM, Sandro Bonazzola  wrote:
> 00:00:31.874 Num Packages in Repos: 22534
> 00:00:31.875 package:
> ovirt-provider-ovn-1.0-1.20161219125609.git.el7.centos.noarch from
> check-custom-el7
> 00:00:31.876   unresolved deps:
> 00:00:31.876  python-openvswitch >= 0:2.6
> 00:00:31.876  openvswitch-ovn-central >= 0:2.6
> 00:00:31.876 package:
> ovirt-provider-ovn-driver-1.0-1.20161219125609.git.el7.centos.noarch from

It's good we have repoclosure, as it reminded us we cannot ship
ovirt-provider-ovn unless we build and ship a version of openvswitch
from their master branch, at least until they ship ovs-2.7.

Sandro, Marcin: can we do it? Can we supply our own build of
openvswitch, like we did for Marcin's blog?

> check-custom-el7
> 00:00:31.876   unresolved deps:
> 00:00:31.876  python-openvswitch >= 0:2.6
> 00:00:31.876  openvswitch-ovn-host >= 0:2.6
> 00:00:31.877  openvswitch >= 0:2.6
> 00:00:31.877 package:
> vdsm-gluster-4.18.999-1162.gite95442e.el7.centos.noarch from
> check-custom-el7
> 00:00:31.877   unresolved deps:
> 00:00:31.877  vdsm = 0:4.18.999-1162.gite95442e.el7.centos

All of these seem like repoclosure false warning.

After all, vdsm = 0:4.18.999-1162.gite95442e.el7.centos is the exact
version of vdsm that is in the repo, right?
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [ovirt-devel] Gerrit headers are not added to commits in vdsm repo

2016-11-27 Thread Dan Kenigsberg
On Sun, Nov 27, 2016 at 12:31:21PM +0200, Eyal Edri wrote:
> Not sure I understand what do you mean by Gerrit Headers.
> Can you give examples?
> 
> On Fri, Nov 25, 2016 at 4:57 PM, Nir Soffer  wrote:
> 
> > On Fri, Nov 25, 2016 at 4:45 PM, Tomáš Golembiovský 
> > wrote:
> > > Hi,
> > >
> > > I've noticed that in vdsm repo the merged commits do not contain the
> > > info headers added by Gerrit any more (Reviewed-by/Reviewed-on/etc.).
> > >
> > > Is that intentional? If yes, what was the motivation behind this?
> > >
> > > The change seem to have happened about 4 days ago. Sometime between the
> > > following two commits:
> > >
> > > * 505f5da  API: Introduce getQemuImageInfo API. [Maor Lipchuk]
> > > * 1c4a39c  protocoldetector: Avoid unneeded getpeername() [Nir Soffer]
> >
> > We switched vdsm to fast-forward 4 days ago, maybe this was unintended
> > side effect of this change?
> >
> > The gerrit headers are very useful, please add back.


https://gerrit.ovirt.org/#/c/66295/ is the last one which had them:

Reviewed-on: https://gerrit.ovirt.org/66295
Reviewed-by: Nir Soffer 
Continuous-Integration: Jenkins CI

they are added to the commit message during cherry-pick, and I find them
very useful.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [JIRA] (OVIRT-757) [VDSM] Add Travis CI to vdsm github project

2016-10-31 Thread Dan Kenigsberg
On Mon, Oct 31, 2016 at 11:36:04AM +0200, Barak Korren (oVirt JIRA) wrote:
> 
> [ 
> https://ovirt-jira.atlassian.net/browse/OVIRT-757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=22135#comment-22135
>  ] 
> 
> Barak Korren commented on OVIRT-757:
> 
> 
> Enabled travis on vds repo:
> https://travis-ci.org/oVirt/vdsm
> 
> Please check if it works for you

Thanks.

https://travis-ci.org/oVirt/vdsm/jobs/171955938 was triggered
automatically right after I've merged a patch to vdsm's gerrit.

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [JIRA] (OVIRT-757) [VDSM] Add Travis CI to vdsm github project

2016-10-31 Thread Dan Kenigsberg
On Sun, Oct 30, 2016 at 05:12:03PM +0200, eyal edri [Administrator] (oVirt 
JIRA) wrote:
> 
> [ 
> https://ovirt-jira.atlassian.net/browse/OVIRT-757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=22109#comment-22109
>  ] 
> 
> eyal edri [Administrator] commented on OVIRT-757:
> -
> 
> Few questions:
> 
> 1. Which tests will run there? same as check-patch/check-merged? 

It would be Vdsm's unit tests, which are also run by check-patch.

> 2. How will the travis configuration be managed/configured

vdsm source tree already has .travis.yml which github uses to trigger
Travis runs

> 3. Why use another CI system when we can run it on jenkins.ovirt.org?
> is there any limitation to existing jenkins server that Travis doesn't
> have? can you elaborate?

We love jenkins CI and we don't want to replace it, only augment it, with
very little cost to the ovirt project.

Travis has a nice look-and-feel; but more importantly, it keeps the
outcome of jobs indefinitely. We'd like to see TravisCI for the very
same reason we have a github mirror to our gerrit: github provides a
widely-used platform, that might be more approachable to the general
public, even though I personally like it less.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: building engine artifacts from a posted patch?

2016-10-27 Thread Dan Kenigsberg
On Thu, Oct 27, 2016 at 06:32:47AM -0400, Martin Mucha wrote:
> Hi,
> 
> let me step back a little and explain what we want to achieve. We have patch 
> pushed to gerrit, not merged to master. We want to build rpms from it and 
> pass it (via no official way) to some tester so that he can test it.
> 
> I read provided documentation, but I do not have sufficient background to 
> understand it fully.
> Questions: 
> 
> 1. if I opted to run these tests locally, what are expected hw specification? 
> I mean devel build is already more than laptop can handle. If this has 
> enabled all translations, I'd have to take a pto to run it. So is this even 
> possible to be ran on laptop with only 12G ram?
> 
> 2. Since I probably should be coding instead of waiting for build on 
> irresponsible laptop (which it is even for devel build), would it be possible 
> to have jenkins build, which prepares rpms as described above without need to 
> deploy them to some repo, but allowing to download them instead?
> 
> thanks,
> M.
> 
> - Original Message -
> > Hi,
> > first you can run it locally quite easily using mock[1], the command
> > should be(after jenkins repo is cloned and mock installed) something
> > like:
> > ../jenkins/mock_configs/mock_runner.sh --mock-confs-dir
> > ../jenkins/mock_configs/ --build-only -s fc24
> > After running successfully the artifacts will be under
> > exported-artifacts directory.
> > 
> > It is possible to do it from Jenkins too, the problem is that the
> > current _build_artifacts job also deploy the created RPMs to
> > resources.ovirt.org's experimental repo, which is later consumed by
> > OST.
> > If needed, we can clone the needed job and remove the deploy part(and
> > add -manual suffix), then you can pass the gerrit refspec in the build
> > parameters. If so, tell me which job.

Adding to Martin's explanation: he posted https://gerrit.ovirt.org/65793
and would like it to be tested. It would be wonderful if you could add a job
that makes it possible to build el7 rpms from that patch, to be
executed by QE.

So yes, I'd appreciate it if you could add such a -manual job for building
ovirt-engine. I'm not sure precisely "which job" that is, though.
Can you tell me what the options are?
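
(For completeness, building locally from the posted patch is roughly the
following; the patchset number is only an example, and it assumes the
jenkins repo is cloned next to the engine tree, as in the command quoted
above:)

    # fetch the patch from gerrit: refs/changes/<last 2 digits>/<change>/<patchset>
    git fetch https://gerrit.ovirt.org/ovirt-engine refs/changes/93/65793/1
    git checkout FETCH_HEAD
    # same mock_runner.sh invocation as quoted above, with el7 instead of fc24
    ../jenkins/mock_configs/mock_runner.sh --mock-confs-dir ../jenkins/mock_configs/ --build-only -s el7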

Regards,
Dan.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


building engine artifacts from a posted patch?

2016-10-26 Thread Dan Kenigsberg
Hi,

Pardon my ignorance, but how can I trigger build-artifacts.sh after
posting a patch to gerrit?

I hope there's an easy way to generate RPMs to be tested by third
parties prior to merging the patch.

Regards,
Dan.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [JIRA] (OVIRT-761) Re: Do we or can we have Mac OS slaves in Jenkins?

2016-10-11 Thread Dan Kenigsberg
On Tue, Oct 11, 2016 at 11:29:00AM +0300, Juan Hernández (oVirt JIRA) wrote:
> 
> [ 
> https://ovirt-jira.atlassian.net/browse/OVIRT-761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=21621#comment-21621
>  ] 
> 
> Juan Hernández edited comment on OVIRT-761 at 10/11/16 11:28 AM:
> -
> 
> Thanks for the suggestion to use "clang" Evgheni, that can certainly help, I 
> will explore it.
> 
> I also see that Travis CI (https://travis-ci.com) supports building in Mac 
> OS. I added a .travis.yml file to the Ruby SDK, and tested the build my 
> personal github fork. It worked correctly. Is there any way to activate 
> Travis builds for the oVirt mirror of the SDK that we have in github? That 
> way at least the build will run after merging patches (before releasing). Do 
> you know if there is any way to make Travis CI work directly from 
> gerrit.ovirt.org, before merging patches?

Note to infra team: there's a very similar request (OVIRT-757) to enable
Travis build from the github mirror of Vdsm. Please solve them together;
for Vdsm as well, triggering a build in check-patch would be
interesting.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Decommissioning etherpad and old wiki

2016-10-09 Thread Dan Kenigsberg
On Thu, Oct 06, 2016 at 01:13:47PM +0300, Dan Kenigsberg wrote:
> On Thu, Oct 06, 2016 at 05:53:54PM +0900, Marc Dequènes (Duck) wrote:
> > Quack,
> > 
> > The wiki content was already converted into the website, and remaining
> > broken links & co were fixed (thanks Garrett). It was merely kept in
> > case some content was forgotten (read-only).
> > 
> > The Etherpad was barely used and a pain to upgrade/maintain, so the
> > infra team with the last remaining user decided to close it.
> > 
> > They are to be removed soon. In fact we already stopped them to track
> > the remaining users, and no one complained. It can be restarted on
> > demand before the removal date.
> > 
> > On 2016-10-26 the resources will be purged. Backup has been made, so
> > nothing is lost but unless there is a good reason it won't be used.
> > 
> > Btw, the wiki@ mailing-list was unused and is now read-only.
> 
> How many resources did old.ovirt.org take? On many occasions, the
> migration to the new git-based site rendered pages almost unreadable. On
> such occasions, I loved going back to old.ovirt.org to see what a
> feature page was meant to look like.

As a recent example, we got
[Bug 1383047] New: No wiki pages on Important Vdsm wiki pages
opened. I might be able to guess what this link used to look like, but
seeing is much better.

> 
> Can we keep old.ovirt.org for another year? I see that it is currently
> down.

Service Temporarily Unavailable

The server is temporarily unable to service your request due to maintenance 
downtime or capacity problems. Please try again later.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Decommissioning etherpad and old wiki

2016-10-06 Thread Dan Kenigsberg
On Thu, Oct 06, 2016 at 05:53:54PM +0900, Marc Dequènes (Duck) wrote:
> Quack,
> 
> The wiki content was already converted into the website, and remaining
> broken links & co were fixed (thanks Garrett). It was merely kept in
> case some content was forgotten (read-only).
> 
> The Etherpad was barely used and a pain to upgrade/maintain, so the
> infra team with the last remaining user decided to close it.
> 
> They are to be removed soon. In fact we already stopped them to track
> the remaining users, and no one complained. It can be restarted on
> demand before the removal date.
> 
> On 2016-10-26 the resources will be purged. Backup has been made, so
> nothing is lost but unless there is a good reason it won't be used.
> 
> Btw, the wiki@ mailing-list was unused and is now read-only.

How many resources did old.ovirt.org take? On many occasions, the
migration to the new git-based site rendered pages almost unreadable. On
such occasions, I loved going back to old.ovirt.org to see what a
feature page was meant to look like.

Can we keep old.ovirt.org for another year? I see that it is currently
down.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [ovirt-devel] [VDSM] build-artifacts failing on master

2016-09-15 Thread Dan Kenigsberg
On Thu, Sep 15, 2016 at 01:15:29PM +0300, Barak Korren wrote:
> >
> > I love running tests on the build systems - its gives another layer of
> > assurance that we are going to build a good package for the relevant
> > system/architecture.
> >
> > However, the offending patch makes it impossible on el7-based build
> > system. Can we instead skip the test (on such systems) if the right nose
> > version is not installed?
> >
> > We should file a bug to fix nose on el7.
> >
> 
> IMO test requirements != build req != runtime req.
> 
> It is perfectly valid to use virtualenv and pip to enable using the
> latest and graetest testing tools, but those should __only__ be used
> in a testing environment. Those should not be used in a build
> environment which is designed to be reproducible and hence is
> typically devoid of network access.

That's clear.

> Deploy requirements should be tested, but by their nature those tests
> need to run post-build and hence are better left to integration tests
> like ovirt-system-tests.

I'm not sure I understand your point. RPM spec files have a %check section
for pre-build tests. Should we, or should we not, strive to use it?
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [ACTION REQUIRED] VDSM master builf failing on NOSE version check

2016-09-15 Thread Dan Kenigsberg
On Thu, Sep 15, 2016 at 11:46:11AM +0200, Sandro Bonazzola wrote:
> On Thu, Sep 15, 2016 at 11:14 AM, Dan Kenigsberg <dan...@redhat.com> wrote:
> 
> > On Thu, Sep 15, 2016 at 08:51:59AM +0200, Sandro Bonazzola wrote:
> > > *http://jenkins.ovirt.org/job/vdsm_master_build-artifacts-
> > el7-x86_64/836/console
> > > <http://jenkins.ovirt.org/job/vdsm_master_build-artifacts-
> > el7-x86_64/836/console>*
> > >
> > >
> > > *00:05:37.816* make[1]: Entering directory
> > > `/home/jenkins/workspace/vdsm_master_build-artifacts-el7-
> > x86_64/vdsm/rpmbuild/BUILD/vdsm-4.18.999'*00:05:37.817*
> > > Makefile:980: warning: overriding recipe for target
> > > `check-recursive'*00:05:37.818* Makefile:516: warning: ignoring old
> > > recipe for target `check-recursive'*00:05:38.119* Error: NOSE is too
> > > old, please install NOSE 1.3.7 or later*00:05:38.119* make[1]: ***
> > > [tests] Error 1
> >
> > Yes, we broke it with https://gerrit.ovirt.org/63638 as discussed on
> >
> > [VDSM] build-artifacts failing on master
> >
> > >
> > >
> > > Is a newer version of nose really needed?
> >
> > yes.
> >
> > > If no: please require a version of nose which is available.
> > > If yes: we can consider shipping nose in CentOS Virt SIG repos since it's
> > > available for OpenStack repos:
> > > http://cbs.centos.org/koji/buildinfo?buildID=10186
> >
> > This would not solve our problem for RHEL, but it would certainly help
> > development upstream. How difficult would it be to add it?
> >
> 
> Tagged into ovirt common repo. Please add the following repo to vdsm
> automation for el7:
> 
> centos-ovirt-common-candidate ->
> http://cbs.centos.org/repos/virt7-ovirt-common-candidate/$basearch/os/
> <http://cbs.centos.org/repos/virt7-ovirt-40-candidate/$basearch/os/>

So https://gerrit.ovirt.org/63986 might be enough to fix the current
job error? (though not future proof for formal brew/koji builds)
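
If it helps, I think the change amounts to one extra line in the el7 .repos
file under automation/ - a sketch only, the exact per-distro file name may
differ:

    # add the CBS candidate repo so el7 builds can resolve the newer nose
    echo 'centos-ovirt-common-candidate,http://cbs.centos.org/repos/virt7-ovirt-common-candidate/$basearch/os/' \
        >> automation/build-artifacts.repos.el7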

Sandro, could you trigger vdsm_master_build-artifacts? My own
account password on jenkins.ovirt.org is reset again.

Regards,
Dan.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [ovirt-devel] [VDSM] build-artifacts failing on master

2016-09-15 Thread Dan Kenigsberg
On Wed, Sep 14, 2016 at 11:28:05PM +0300, Nir Soffer wrote:
> On Wed, Sep 14, 2016 at 10:43 PM, Nir Soffer  wrote:
> > On Wed, Sep 14, 2016 at 10:40 PM, Irit Goihman  wrote:
> >> I think that what's missing in build-artifacts.sh is the following commands
> >> that exist in check-patch.sh:
> >>
> >> easy_install pip
> >> pip install -U nose==1.3.7
> >
> > We cannot do this in brew/koji, you can use only packages from the 
> > distribution
> > when running make rpm.
> 
> The best way to avoid such issues is to remove the "make tests"
> from the %check section in the spec.
> 
> This allows using latest and greatest development tools which are not 
> available
> in brew or koji.
> 
> Here is a quick patch, please review:
> https://gerrit.ovirt.org/63966

I love running tests on the build systems - it gives another layer of
assurance that we are going to build a good package for the relevant
system/architecture.

However, the offending patch makes it impossible on el7-based build
system. Can we instead skip the test (on such systems) if the right nose
version is not installed?

We should file a bug to fix nose on el7.
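
As for skipping, something along these lines in the %check section could do
it - just a sketch, untested, and it assumes pkg_resources is available in
the build root:

    %check
    # run the unit tests only where a new-enough nose exists;
    # el7 ships nose 1.3.0, so the tests are skipped there
    if python -c 'import pkg_resources; pkg_resources.require("nose>=1.3.7")' 2>/dev/null; then
        make tests
    fi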

> 
> >> This should install the right version for nose (that doesn't exist in rhel
> >> yum repos)
> >>
> >> On Wed, Sep 14, 2016 at 10:31 PM, Eyal Edri  wrote:
> >>>
> >>> Its actually a good question to know if standard CI supports versions of
> >>> RPMs.
> >>> Barak - do you know if we can specify in build-artifacts.packages file a
> >>> version requirement?
> >>>
> >>> for e.g python-nose >= 1.3.7
> >>>
> >>> On Wed, Sep 14, 2016 at 10:21 PM, Nir Soffer  wrote:
> 
>  The build-artifacts job is failing on master now with this error:
> 
>  19:09:23 Error: NOSE is too old, please install NOSE 1.3.7 or later
>  19:09:23 make[1]: *** [tests] Error 1
>  19:09:23 make[1]: Leaving directory
> 
>  `/home/jenkins/workspace/vdsm_master_build-artifacts-el7-x86_64/vdsm/rpmbuild/BUILD/vdsm-4.18.999'
>  19:09:23 error: Bad exit status from /var/tmp/rpm-tmp.LQXOfm (%check)
>  19:09:23
>  19:09:23
>  19:09:23 RPM build errors:
>  19:09:23 Bad exit status from /var/tmp/rpm-tmp.LQXOfm (%check)
> 
>  Looks like this patch is the cause:
> 
>  commit 4e729ddd2b243d0953e2de5d31c42fc59859bf23
>  Author: Edward Haas 
>  Date:   Sun Sep 11 14:10:01 2016 +0300
> 
>  build tests: Require NOSE 1.3.7 and up for running tests
> 
>  On RHEL7/Centos7 the provided NOSE version is 1.3.0.
>  CI runs the tests with 1.3.7.
> 
>  To be consistent and avoid different behaviours, assure that the
>  tests
>  are running with a minimum nose version of 1.3.7.
> 
>  Specifically, between 1.3.0 and 1.3.7 a bug has been resolved
>  regarding
>  test labeling and its support with test class inheritance.
> 
>  Change-Id: If79d8624cee1c14a21840e4a08000fc33abb58e5
>  Signed-off-by: Edward Haas 
>  Reviewed-on: https://gerrit.ovirt.org/63638
>  Continuous-Integration: Jenkins CI
>  Reviewed-by: Petr Horáček 
>  Reviewed-by: Irit Goihman 
>  Reviewed-by: Yaniv Bronhaim 
>  Reviewed-by: Piotr Kliczewski 
> 
>  I did not check the details, but it seems we need to revert this patch.
> 
>  Please check and fix.
> 
>  Cheers,
>  Nir
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [ACTION REQUIRED] VDSM master builf failing on NOSE version check

2016-09-15 Thread Dan Kenigsberg
On Thu, Sep 15, 2016 at 08:51:59AM +0200, Sandro Bonazzola wrote:
> *http://jenkins.ovirt.org/job/vdsm_master_build-artifacts-el7-x86_64/836/console
> *
> 
> 
> *00:05:37.816* make[1]: Entering directory
> `/home/jenkins/workspace/vdsm_master_build-artifacts-el7-x86_64/vdsm/rpmbuild/BUILD/vdsm-4.18.999'*00:05:37.817*
> Makefile:980: warning: overriding recipe for target
> `check-recursive'*00:05:37.818* Makefile:516: warning: ignoring old
> recipe for target `check-recursive'*00:05:38.119* Error: NOSE is too
> old, please install NOSE 1.3.7 or later*00:05:38.119* make[1]: ***
> [tests] Error 1

Yes, we broke it with https://gerrit.ovirt.org/63638 as discussed on

[VDSM] build-artifacts failing on master

> 
> 
> Is a newer version of nose really needed?

yes.

> If no: please require a version of nose which is available.
> If yes: we can consider shipping nose in CentOS Virt SIG repos since it's
> available for OpenStack repos:
> http://cbs.centos.org/koji/buildinfo?buildID=10186

This would not solve our problem for RHEL, but it would certainly help
development upstream. How difficult would it be to add it?

> 
> Please let me know.
> 
> -- 
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
> 
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: ovirt-provider-ovn - new gerrit.ovirt.org project

2016-08-29 Thread Dan Kenigsberg
On Mon, Aug 29, 2016 at 02:43:02PM +0300, Eyal Edri wrote:
> On Mon, Aug 29, 2016 at 1:11 PM, Marcin Mirecki  wrote:
> 
> > All,
> >
> > Can you please add a new project to gerrit.ovirt.org?
> > The project name: ovirt-provider-ovn
> >
> > The project will contain the ovn external network provider.
> > I will maintain it.

Ack from my end.

We need this package to serve as an external network provider for OVN
(https://github.com/openvswitch/ovs/tree/master/ovn ) that is slated for
ovirt-4.1.

Regards,
Dan.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [ovirt-devel] [vdsm][heads up] planned branches for Vdsm 4.0.3 and 4.0.4+

2016-08-16 Thread Dan Kenigsberg
On Thu, Aug 11, 2016 at 04:28:12AM -0400, Francesco Romani wrote:
> Thanks Eyal. 
> 
> Infra: for the moment please just add hooks for ovirt-4.0.2. We are still 
> discussing if we will have the ovirt-4.0.3 branch or if 
> we will just use ovirt-4.0.2 also for the next microversion. 

I don't think we need an ovirt-4.0.3 branch. If we need to build vdsm for
4.0.3, the small number of relevant patches allows us to do it from the
4.0.2 branch.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Vdsm 4.0 fc23 build fails with "nothing provides ovirt-imageio-common"

2016-07-21 Thread Dan Kenigsberg
On Wed, Jul 20, 2016 at 10:16:20AM +0300, Eyal Edri wrote:
> It might be due to proxy issues Sandro reported already.
> I see the recent jobs are OK, but we'll continue to investigate if we need
> to fix something in the proxy.

I'm afraid
http://jenkins.ovirt.org/job/vdsm_4.0_check-patch-fc23-x86_64/111/console
shows that the problem is still with us. Please look into it.

> 
> On Tue, Jul 19, 2016 at 7:11 PM, Nir Soffer  wrote:
> 
> > More info - this is a random failure - other patches in same topic are
> > fine.
> >
> > So it seems that some slaves have wrong repositories, maybe cache issue?
> >
> > On Tue, Jul 19, 2016 at 7:09 PM, Nir Soffer  wrote:
> > > Hi all,
> > >
> > > Seems that builds on 4.0 are failing now with:
> > > 15:19:02 Error: nothing provides ovirt-imageio-common needed by
> > > vdsm-4.18.6-13.git3aaee18.fc23.x86_64.
> > >
> > > See
> > http://jenkins.ovirt.org/job/vdsm_4.0_check-patch-fc23-x86_64/27/console
> > >
> > > ovirt-imageio-* packages are built in jenkins, and provided in ovirt
> > > repositories.
> > >
> > > Can someone take a look?
> > >
> > > Nir
> > ___
> > Infra mailing list
> > Infra@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/infra
> >
> >
> >
> 
> 
> -- 
> Eyal Edri
> Associate Manager
> RHEV DevOps
> EMEA ENG Virtualization R
> Red Hat Israel
> 
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)

> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [VDSM] vdsm_master_verify-error-codes_created running on master - why?

2016-07-20 Thread Dan Kenigsberg
On Wed, Jul 20, 2016 at 01:51:51AM +0300, Nir Soffer wrote:
> On Tue, Jul 19, 2016 at 6:20 PM, Eyal Edri  wrote:
> > And also, feel free to move it to check-patch.sh code as well.
> 
> Including this in vdsm seem like the best option.
> 
> Can you point me to the source of this job?
> 
> >
> > On Tue, Jul 19, 2016 at 6:19 PM, Eyal Edri  wrote:
> >>
> >> This isn't new, it was running for a few years, just on old jenkins,
> >> Maybe you just noticed it.
> >>
> >> Allon & Dan are familiar with that job and it already found in the past
> >> real issues.
> >> If you want to remove/disable it, I have no problem - just make sure
> >> you're synced with all VDSM people that requested this job in the first
> >> place.
> >>
> >> On Tue, Jul 19, 2016 at 6:02 PM, Nir Soffer  wrote:
> >>>
> >>> Hi all,
> >>>
> >>> Since yesterday, vdsm_master_verify-error-codes_created job is running
> >>> on master.
> >>>
> >>> I guess that this was a unintended change in jenkins - please revert this
> >>> change.
> >>>
> >>> If someone want to add a job for vdsm master, it must be approved by
> >>> vdsm maintainers first.
> >>>
> >>> The best would be to run everything from the automation scripts, so
> >>> vdsm maintainers have full control on the way patches are checked.

A bit of a background: this job was created many many years ago, in
order to compare the set of error codes in Vdsm to that of Engine. The
motivation was to catch typos or other mismatches, where Vdsm is sending
one value and Engine is expecting another, or Vdsm dropping something
that Engine depends on.

HOWEVER, I'm not sure at all that the job's code is up-to-date. I wonder
how it could have ever survived the big changes of
https://gerrit.ovirt.org/#/c/48871/ and its bash code
http://jenkins.ovirt.org/job/vdsm_master_verify-error-codes_merged/configure
does not reassure me.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Build failed in Jenkins: ovirt_master_system-tests #243

2016-07-11 Thread Dan Kenigsberg
On Thu, Jul 07, 2016 at 07:00:35PM +0300, Nadav Goldin wrote:
> Seems like [1], as ovirt-srv19  has fresh new FC24 installation,
> virtlogd is not enabled by default:
> ● virtlogd.service - Virtual machine log manager
>Loaded: loaded (/usr/lib/systemd/system/virtlogd.service; indirect;
> vendor preset: disabled)
>Active: inactive (dead)
>  Docs: man:virtlogd(8)
>http://libvirt.org
> we can add it to puppet for now.
> 
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1290357

Francesco, shouldn't vdsm require virtlogd explicitly?
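
(As a stop-gap on the affected slaves, enabling the service by hand should
unblock the suite - a workaround, not a fix:)

    # newer libvirt writes guest logs through virtlogd; fresh Fedora installs
    # may leave it disabled
    systemctl enable --now virtlogd.socket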

> 
> 
> On Thu, Jul 7, 2016 at 6:49 PM, Eyal Edri  wrote:
> > This looks like a bug in libvirt?
> > Tolik mentioned something in a socket name which is too long, anyone seen it
> > before?
> >
> > 15:37:11 libvirt: XML-RPC error : Failed to connect socket to
> > '/var/run/libvirt/virtlogd-sock': No such file or directory
> > 15:37:11 * Starting VM lago_basic_suite_master_storage: ERROR (in
> > 0:00:00)
> > 15:37:11   # Start vms: ERROR (in 0:00:00)
> > 15:37:11   # Destroy network lago_basic_suite_master_lago:
> > 15:37:11   # Destroy network lago_basic_suite_master_lago: ERROR (in
> > 0:00:00)
> > 15:37:11 @ Start Prefix: ERROR (in 0:00:00)
> > 15:37:11 Error occured, aborting
> > 15:37:11 Traceback (most recent call last):
> > 15:37:11   File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 691, in
> > main
> > 15:37:11 cli_plugins[args.verb].do_run(args)
> > 15:37:11   File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line
> > 180, in do_run
> > 15:37:11 self._do_run(**vars(args))
> > 15:37:11   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 488,
> > in wrapper
> > 15:37:11 return func(*args, **kwargs)
> > 15:37:11   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 499,
> > in wrapper
> > 15:37:11 return func(*args, prefix=prefix, **kwargs)
> > 15:37:11   File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 255, in
> > do_start
> > 15:37:11 prefix.start(vm_names=vm_names)
> > 15:37:11   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 958,
> > in start
> > 15:37:11 self.virt_env.start(vm_names=vm_names)
> > 15:37:11   File "/usr/lib/python2.7/site-packages/lago/virt.py", line 182,
> > in start
> > 15:37:11 vm.start()
> > 15:37:11   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
> > 247, in start
> > 15:37:11 return self.provider.start(*args, **kwargs)
> > 15:37:11   File "/usr/lib/python2.7/site-packages/lago/vm.py", line 93, in
> > start
> > 15:37:11 self.libvirt_con.createXML(self._libvirt_xml())
> > 15:37:11   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3611,
> > in createXML
> > 15:37:11 if ret is None:raise libvirtError('virDomainCreateXML()
> > failed', conn=self)
> > 15:37:11 libvirtError: Failed to connect socket to
> > '/var/run/libvirt/virtlogd-sock': No such file or directory
> > 15:37:11 #
> >
> >
> > On Thu, Jul 7, 2016 at 6:37 PM,  wrote:
> >>
> >> See 
> >>
> >> Changes:
> >>
> >> [Eyal Edri] add hystrix deps to yum repos include list
> >>
> >> [Eyal Edri] refresh fedora versions and release versions for ovirt-engine
> >>
> >> [Sandro Bonazzola] ovirt-engine_upgrade-db: drop 3.6.7 jobs
> >>
> >> [Shirly Radco] Replacing jpackage repo for 3.6 dwh
> >>
> >> --
> >> [...truncated 485 lines...]
> >> ##  rc = 1
> >> ##
> >> ##! ERROR v
> >> ##! Last 20 log enties:
> >> logs/mocker-fedora-23-x86_64.fc23.basic_suite_master.sh/basic_suite_master.sh.log
> >> ##!
> >>   File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 255, in
> >> do_start
> >> prefix.start(vm_names=vm_names)
> >>   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 958, in
> >> start
> >> self.virt_env.start(vm_names=vm_names)
> >>   File "/usr/lib/python2.7/site-packages/lago/virt.py", line 182, in start
> >> vm.start()
> >>   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 247, in
> >> start
> >> return self.provider.start(*args, **kwargs)
> >>   File "/usr/lib/python2.7/site-packages/lago/vm.py", line 93, in start
> >> self.libvirt_con.createXML(self._libvirt_xml())
> >>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3611, in
> >> createXML
> >> if ret is None:raise libvirtError('virDomainCreateXML() failed',
> >> conn=self)
> >> libvirtError: Failed to connect socket to
> >> '/var/run/libvirt/virtlogd-sock': No such file or directory
> >> #
> >>  Cleaning up
> >> --- Cleaning with lago
> >> --- Cleaning with lago done
> >>  Cleanup done
> >> Took 197 seconds
> >> ===
> >> ##!
> >> ##! ERROR 

Re: [Vdsm] infra support for the new stable branch ovirt-4.0

2016-06-05 Thread Dan Kenigsberg
On Sat, Jun 04, 2016 at 04:16:18PM +0300, Nir Soffer wrote:
> On Fri, Jun 3, 2016 at 10:58 AM, Francesco Romani  wrote:
> > Hi Infra,
> >
> > (Dan, please ACK/NACK the following)
> >
> > I'm not sure this is already been worked on, or if it was already 
> > configured automatically,
> > sending just in case to be sure.
> >
> > Me and Yaniv (CC'd) agreed to continue our maintainer duties and take care 
> > of the ovirt-4.0
> > Vdsm stable branch which was recently created.
> >
> > I'd like to ask if we have the gerrit permissions and CI jobs ready for the 
> > new
> > branch.
> 
> +1
+1
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: vdsm_master_check-patch-el7 job is failing

2016-05-24 Thread Dan Kenigsberg
On Tue, May 24, 2016 at 10:22:16AM +0200, David Caro wrote:
> On 05/24 11:07, Amit Aviram wrote:
> > Hi.
> > For the last day I am getting this error over and over again from jenkins:
> > 
> > Start: yum install*07:23:55* ERROR: Command failed. See logs for
> > output.*07:23:55*  # /usr/bin/yum-deprecated --installroot
> > /var/lib/mock/epel-7-x86_64-cc6e9a99555654260f7f229c124a6940-31053/root/
> > --releasever 7 install @buildsys-build
> > --setopt=tsflags=nocontexts*07:23:55* WARNING: unable to delete
> > selinux filesystems (/tmp/mock-selinux-plugin.3tk4zgr4): [Errno 1]
> > Operation not permitted: '/tmp/mock-selinux-plugin.3tk4zgr4'*07:23:55*
> > Init took 3 seconds
> > 
> > 
> > (see http://jenkins.ovirt.org/job/vdsm_master_check-patch-el7-x86_64/2026/)
> > 
> > 
> > This fails the job, so I get -1 from Jenkins CI for my patch.
> 
> 
> That's not what's failing the job, is just a warning, the failure is happening
> before that, when installing the chroot:
> 
> 07:23:53 Start: yum install
> 07:23:55 ERROR: Command failed. See logs for output.
> 07:23:55  # /usr/bin/yum-deprecated --installroot 
> /var/lib/mock/epel-7-x86_64-cc6e9a99555654260f7f229c124a6940-31053/root/ 
> --releasever 7 install @buildsys-build --setopt=tsflags=nocontexts
> 
> Checking the logs (logs.tgz file, archived on the job, under
> vdsm/logs/mocker-epel-7-x86_64.el7.init/root.log):
> 
> 
> DEBUG util.py:417:  
> https://repos.fedorapeople.org/repos/openstack/openstack-kilo/el7/repodata/repomd.xml:
>  [Errno 14] HTTPS Error 404 - Not Found
> DEBUG util.py:417:  Trying other mirror.
> DEBUG util.py:417:   One of the configured repositories failed ("Custom 
> openstack-kilo"),
> DEBUG util.py:417:   and yum doesn't have enough cached data to continue. At 
> this point the only
> DEBUG util.py:417:   safe thing yum can do is fail. There are a few ways to 
> work "fix" this:
> DEBUG util.py:417:   1. Contact the upstream for the repository and get 
> them to fix the problem.
> DEBUG util.py:417:   2. Reconfigure the baseurl/etc. for the repository, 
> to point to a working
> DEBUG util.py:417:  upstream. This is most often useful if you are 
> using a newer
> DEBUG util.py:417:  distribution release than is supported by the 
> repository (and the
> DEBUG util.py:417:  packages for the previous distribution release 
> still work).
> DEBUG util.py:417:   3. Disable the repository, so yum won't use it by 
> default. Yum will then
> DEBUG util.py:417:  just ignore the repository until you permanently 
> enable it again or use
> DEBUG util.py:417:  --enablerepo for temporary usage:
> DEBUG util.py:417:  yum-config-manager --disable openstack-kilo
> DEBUG util.py:417:   4. Configure the failing repository to be skipped, 
> if it is unavailable.
> DEBUG util.py:417:  Note that yum will try to contact the repo. when 
> it runs most commands,
> DEBUG util.py:417:  so will have to try and fail each time (and thus. 
> yum will be be much
> DEBUG util.py:417:  slower). If it is a very temporary problem 
> though, this is often a nice
> DEBUG util.py:417:  compromise:
> DEBUG util.py:417:  yum-config-manager --save 
> --setopt=openstack-kilo.skip_if_unavailable=true
> DEBUG util.py:417:  failure: repodata/repomd.xml from openstack-kilo: [Errno 
> 256] No more mirrors to try.
> 
> 
> So it seems that the repo does not exist anymore, there's a README.txt file
> though that says:
> 
> RDO Kilo is hosted in CentOS Cloud SIG repository
> http://mirror.centos.org/centos/7/cloud/x86_64/openstack-kilo/
> 
> And that new link seems to work ok, so probably you just need to change the
> automation/*.repos files on vdsm git repo to point to the new openstack repos
> url instead of the old one and everything should work ok.
> 
> 
> 
> > 
> > I am pretty sure it is not related to the patch. also fc23 job passes.
> > 
> > 
> > Any idea what's the problem?

Yep, I believe that https://gerrit.ovirt.org/57870 has solved that.
Please rebase on top of current ovirt-3.6 branch.
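
For reference, I believe the fix amounts to a one-liner over the repos
files, roughly as below - file names from memory, so double-check against
the branch:

    # point the kilo repo at its new home under the CentOS Cloud SIG
    sed -i 's|https://repos.fedorapeople.org/repos/openstack/openstack-kilo/el7|http://mirror.centos.org/centos/7/cloud/x86_64/openstack-kilo/|' \
        automation/*.repos*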
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: none-yamlized jobs

2016-04-24 Thread Dan Kenigsberg
On Sun, Apr 24, 2016 at 09:02:32PM +0300, Nadav Goldin wrote:
> Hey Sandro,
> [1] is a list of all the none-yamlized jobs in jenkins-old.ovirt.org, can
> you help us map which jobs still need to be enabled? we already mapped dao
> and find_bugs, we want to minimize the number of jobs that are not yamlized
> yet and must be enabled in jenkins-old.ovirt.org
> 
> 
> Thanks,
> 
> Nadav.
> 
> 
> 
> [1] https://paste.fedoraproject.org/359265/

(It would have been even nicer had you cleaned the u'' mess from the list)

 vdsm_any_create-rpms_manual

can be dropped

 vdsm_master_create-rpms-el7-x86_64_no_spm_testing

Adam, is this of any use? ^^^

 vdsm_master_storage_functional_tests_posix_gerrit

I think this one can be dropped

 vdsm_master_verify-error-codes_merged

should be migrated

 vdsm_master_virt_functional_tests_gerrit

Francesco, this is never used, right?

Regards,
Dan.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Dropping support for none Standard-CI packages on Jenkins' slaves

2016-04-11 Thread Dan Kenigsberg
On Mon, Apr 11, 2016 at 02:30:42PM +0300, Nadav Goldin wrote:
> Hi,
> As most jobs were migrated to standard CI, we still have prior puppet code
> that installs various packages on all slaves. These packages cause 2
> problems:
> 1) mask possible bugs by using different packages than the ones intended in
> the standard CI files.
> 2) overhead and unneeded complication in puppet and the VM templates.
> 
> to ensure they are indeed not needed any more, I want to start removing the
> packages gradually,
> if no one objects I'll start by removing the following packages:
> 
> jasperreports-server
> > postgresql-jdbc
> > libnl

ack for dropping libnl. vdsm tests that needed it were ported to "yaml"
long ago.

> > log4j
> > chrpath
> > sos
> >
> mailcap
> >
> 
> 
> 
> 
> Thanks
> Nadav.

> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [urgent] gerrit not working: Error Cannot update refs/heads/master

2016-04-08 Thread Dan Kenigsberg
On Fri, Apr 08, 2016 at 09:43:25AM +0300, Eyal Edri wrote:
> works now (patch merged).
> Seems that the git-exprol was run with root instead of gerrit2 user, i'm
> changing ownership to gerrit2 user back it should fix it for all projects.

Thanks.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [JIRA] (OVIRT-473) [urgent] gerrit not working: Error Cannot update refs/heads/master

2016-04-08 Thread Dan Kenigsberg
On Fri, Apr 08, 2016 at 09:35:02AM +0300, eyal edri [Administrator] (oVirt 
JIRA) wrote:
> 
> [ 
> https://ovirt-jira.atlassian.net/browse/OVIRT-473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14700#comment-14700
>  ] 
> 
> eyal edri [Administrator] commented on OVIRT-473:
> -
> 
> Gerrit server restarted, please retry.

No change, I'm afraid. https://gerrit.ovirt.org/#/c/55827/ cannot be
merged.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: vdsm_master_check-merged-el7-x86_64

2016-04-07 Thread Dan Kenigsberg
On Thu, Apr 07, 2016 at 09:41:15AM +0200, Michal Skrivanek wrote:
> this job is failing for …well, as far back as I can see
> please fix or disable it  

The job fails due to this error - I think that Yaniv and David are aware of
this?

##! ERROR v
##! Last 20 log enties: 
logs/mocker-epel-7-x86_64.el7.check-merged.sh/check-merged.sh.log
##!
cli_plugins[args.verb].do_run(args)
  File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 180, in 
do_run
self._do_run(**vars(args))
  File "/usr/lib/python2.7/site-packages/lago/utils.py", line 501, in wrapper
return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/lago/utils.py", line 512, in wrapper
return func(*args, prefix=prefix, **kwargs)
  File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 234, in do_start
prefix.start(vm_names=vm_names)
  File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 778, in start
self.virt_env.start(vm_names=vm_names)
  File "/usr/lib/python2.7/site-packages/lago/virt.py", line 193, in start
vm.start()
  File "/usr/lib/python2.7/site-packages/lago/virt.py", line 910, in start
self._env.libvirt_con.createXML(self._libvirt_xml())
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3611, in createXML
if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: Cannot check QEMU binary /usr/libexec/qemu-kvm: No such file or 
directory
Took 327 seconds
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: CI does not run tests post merge

2016-01-26 Thread Dan Kenigsberg
On Mon, Jan 25, 2016 at 02:39:24PM +0200, Eyal Edri wrote:
> I'm guessing there was a reason to remove/disable them?
> Adding Danken.
> 
> Anton can help to re-enable them if needed.

We should simply add automation/check-merged.* to vdsm.git; we could
start by making them softlinks to the automation/check-patch.* files.

Edy, can you take care of that?
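
Concretely, something like this - assuming the usual standard-CI file names:

    cd automation
    # start with the exact same behaviour as check-patch; diverge later if needed
    ln -s check-patch.sh check-merged.sh
    ln -s check-patch.packages check-merged.packages
    ln -s check-patch.repos check-merged.repos
    git add check-merged.*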
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: FAIL: testEnablePromisc (ipwrapperTests.TestDrvinfo) - bad test? ci issue?

2016-01-06 Thread Dan Kenigsberg
On Wed, Jan 06, 2016 at 09:29:46AM +0200, Edward Haas wrote:
> Hi,
> 
> Strange, the logging below shows the 'promisc on' commands was successful.
> Unfortunately, the logs/run/job archive is no longer available.
> 
> The check itself is asymmetric: We set it using iproute2 (command) and
> read it using netlink.
> At the least, we should add some more info on failure (like link state
> details)
> Adding to my TODO.
> 
> Thanks,
> Edy.
> 
> On 01/05/2016 06:10 PM, Nir Soffer wrote:
> > Hi all,
> > 
> > We see this failure again in the ci - can someone from networking take a 
> > look?

I'm guessing that it is a race due to the asynchrony of netlink.
If so, this patch should solve the issue in one location
https://gerrit.ovirt.org/#/c/51410/

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: fyi - vdsm check-patch for el7 has been disabled due to tests errors unattended

2016-01-06 Thread Dan Kenigsberg
On Tue, Jan 05, 2016 at 03:17:54PM +0200, Eyal Edri wrote:
> same for http://jenkins.ovirt.org/job/vdsm_3.5_check-patch-fc22-x86_64/
> 
> 
> On Tue, Jan 5, 2016 at 2:56 PM, Eyal Edri  wrote:
> 
> > FYI,
> >
> > The vdsm job [1] has been failing for quite some time now, without any
> > resolution so far.
> > In order to reduce noise and false positive for CI it was disabled until
> > the relevant developers will ack it it stable and can be re-enabled.
> >
> > Please contact the infra team if you need any assistance testing it on a
> > non-production job.
> >
> >
> > [1] http://jenkins.ovirt.org//job/vdsm_3.5_check-patch-el7-x86_64/

https://gerrit.ovirt.org/#/c/51390/ hides some of the problems (most of
them already solved on 3.6 branch).

I suggest taking it in instead of turning the job off.

The 3.5 branch is quite quiet these days, but I would like to enjoy the
benefits of our unit tests for as long as it is alive.

Regards,
Dan.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Tested failing because of missing loop devices

2015-12-23 Thread Dan Kenigsberg
On Wed, Dec 23, 2015 at 03:21:31AM +0200, Nir Soffer wrote:
> Hi all,
> 
> We see too many failures of tests using loop devices. Is it possible
> that we run tests
> concurrently on the same slave, using all the available loop devices, or maybe
> creating races between different tests?
> 
> It seems that we need new decorator for disabling tests on the CI
> slaves, since this
> environment is too fragile.
> 
> Here are some failures:
> 
> 01:10:33 
> ==
> 01:10:33 ERROR: testLoopMount (mountTests.MountTests)
> 01:10:33 
> --
> 01:10:33 Traceback (most recent call last):
> 01:10:33   File
> "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/tests/mountTests.py",
> line 128, in testLoopMount
> 01:10:33 m.mount(mntOpts="loop")
> 01:10:33   File
> "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/vdsm/storage/mount.py",
> line 225, in mount
> 01:10:33 return self._runcmd(cmd, timeout)
> 01:10:33   File
> "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/vdsm/storage/mount.py",
> line 241, in _runcmd
> 01:10:33 raise MountError(rc, ";".join((out, err)))
> 01:10:33 MountError: (32, ';mount: /tmp/tmpZuJRNk: failed to setup
> loop device: No such file or directory\n')
> 01:10:33  >> begin captured logging << 
> 
> 01:10:33 Storage.Misc.excCmd: DEBUG: /usr/bin/taskset --cpu-list 0-1
> /sbin/mkfs.ext2 -F /tmp/tmpZuJRNk (cwd None)
> 01:10:33 Storage.Misc.excCmd: DEBUG: SUCCESS:  = 'mke2fs 1.42.13
> (17-May-2015)\n';  = 0
> 01:10:33 Storage.Misc.excCmd: DEBUG: /usr/bin/taskset --cpu-list 0-1
> /usr/bin/mount -o loop /tmp/tmpZuJRNk /var/tmp/tmpJO52Xj (cwd None)
> 01:10:33 - >> end captured logging << 
> -
> 01:10:33
> 01:10:33 
> ==
> 01:10:33 ERROR: testSymlinkMount (mountTests.MountTests)
> 01:10:33 
> --
> 01:10:33 Traceback (most recent call last):
> 01:10:33   File
> "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/tests/mountTests.py",
> line 150, in testSymlinkMount
> 01:10:33 m.mount(mntOpts="loop")
> 01:10:33   File
> "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/vdsm/storage/mount.py",
> line 225, in mount
> 01:10:33 return self._runcmd(cmd, timeout)
> 01:10:33   File
> "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/vdsm/storage/mount.py",
> line 241, in _runcmd
> 01:10:33 raise MountError(rc, ";".join((out, err)))
> 01:10:33 MountError: (32, ';mount: /var/tmp/tmp1UQFPz/backing.img:
> failed to setup loop device: No such file or directory\n')
> 01:10:33  >> begin captured logging << 
> 
> 01:10:33 Storage.Misc.excCmd: DEBUG: /usr/bin/taskset --cpu-list 0-1
> /sbin/mkfs.ext2 -F /var/tmp/tmp1UQFPz/backing.img (cwd None)
> 01:10:33 Storage.Misc.excCmd: DEBUG: SUCCESS:  = 'mke2fs 1.42.13
> (17-May-2015)\n';  = 0
> 01:10:33 Storage.Misc.excCmd: DEBUG: /usr/bin/taskset --cpu-list 0-1
> /usr/bin/mount -o loop /var/tmp/tmp1UQFPz/link_to_image
> /var/tmp/tmp1UQFPz/mountpoint (cwd None)
> 01:10:33 - >> end captured logging << 
> -
> 01:10:33
> 01:10:33 
> ==
> 01:10:33 ERROR: test_getDevicePartedInfo (parted_utils_tests.PartedUtilsTests)
> 01:10:33 
> --
> 01:10:33 Traceback (most recent call last):
> 01:10:33   File
> "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/tests/testValidation.py",
> line 97, in wrapper
> 01:10:33 return f(*args, **kwargs)
> 01:10:33   File
> "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/tests/parted_utils_tests.py",
> line 61, in setUp
> 01:10:33 self.assertEquals(rc, 0)
> 01:10:33 AssertionError: 1 != 0
> 01:10:33  >> begin captured logging << 
> 
> 01:10:33 root: DEBUG: /usr/bin/taskset --cpu-list 0-1 dd if=/dev/zero
> of=/tmp/tmpasV8TD bs=100M count=1 (cwd None)
> 01:10:33 root: DEBUG: SUCCESS:  = '1+0 records in\n1+0 records
> out\n104857600 bytes (105 MB) copied, 0.368498 s, 285 MB/s\n';  =
> 0
> 01:10:33 root: DEBUG: /usr/bin/taskset --cpu-list 0-1 losetup -f
> --show /tmp/tmpasV8TD (cwd None)
> 01:10:33 root: DEBUG: FAILED:  = 'losetup: /tmp/tmpasV8TD: failed
> to set up loop device: No such file or directory\n';  = 1
> 01:10:33 - >> end captured logging << 
> -
> 

I've reluctantly marked another test as broken in https://gerrit.ovirt.org/50484
due to a similar problem.
Your idea of a @brokentest_ci decorator is slightly less bad - at least we
do not ignore errors in this test.

Re: vdsm_master-libgfapi_create-rpms-* jobs on jenkins

2015-12-16 Thread Dan Kenigsberg
On Wed, Dec 16, 2015 at 03:56:06PM +0200, Nir Soffer wrote:
> Hi all,
> 
> I see these jobs running after merging patches:
> - 
> http://jenkins.ovirt.org/job/vdsm_master-libgfapi_create-rpms-fc23-x86_64_merged/
> - 
> http://jenkins.ovirt.org/job/vdsm_master-libgfapi_create-rpms-el7-x86_64_merged/
> 
> I don't know what these jobs are doing, but vdsm does not support libgfapi 
> yet.
> 
> We have work in progress patches for adding this, but currently we are blocked
> on libvirt, since it does not support it properly yet.
> 
> So these jobs should be disabled or removed, there is no point to overload
> our slaves with jobs that do nothing useful.

I believe that it (used to?) cherry-pick
https://gerrit.ovirt.org/#/c/33768/ on top of master.
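
(Mechanically, that is - I think - roughly the following before building the
rpms, with N standing in for whatever the latest patchset was:

    git fetch https://gerrit.ovirt.org/vdsm refs/changes/68/33768/N
    git cherry-pick FETCH_HEAD
)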

I share your opinion that there's no need to keep them running until
libgfapi effort is renewed.

Dan.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [ovirt-devel] [vdsm] strange network test failure on FC23

2015-12-03 Thread Dan Kenigsberg
On Fri, Nov 27, 2015 at 07:09:30PM +0100, David Caro wrote:
> 
> I see though that is leaving a bunch of test interfaces in the slave:
> 
> 
> 2753: vdsmtest-gNhf3:  mtu 1500 qdisc 
> noqueue state UNKNOWN group default 
> link/ether 86:73:13:4c:e2:63 brd ff:ff:ff:ff:ff:ff
> 2767: vdsmtest-aX5So:  mtu 1500 qdisc 
> noqueue state UNKNOWN group default 
> link/ether 9e:fa:75:3e:a3:e6 brd ff:ff:ff:ff:ff:ff
> 2768: vdsmtest-crso1:  mtu 1500 qdisc 
> noqueue state UNKNOWN group default 
> link/ether 22:ce:cb:5c:42:3b brd ff:ff:ff:ff:ff:ff
> 2772: vdsmtest-JDc5P:  mtu 1500 qdisc 
> noqueue state UNKNOWN group default 
> link/ether ae:79:dc:e9:22:9a brd ff:ff:ff:ff:ff:ff
> 
> 
> 
> Can we do a cleanup in the tests and remove those? That might collide with
> other tests and create failures.

These bridges are no longer created on master (thanks to Nir's
http://gerrit.ovirt.org/44111).
They should have been removed by the run_tests that created them, but
this may not happen if it is killed (or dies) beforehand.
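
(A one-off cleanup on the slave could be something like the following - it
only touches the leftover test devices:)

    # delete every leftover vdsmtest-* bridge that killed runs left behind
    for b in $(ip -o link show | awk -F': ' '$2 ~ /^vdsmtest-/ {print $2}'); do
        ip link delete "$b"
    done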
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Missing package on one of the Fedora 23 slaves?

2015-12-01 Thread Dan Kenigsberg
On Tue, Dec 01, 2015 at 04:19:19AM -0500, Francesco Romani wrote:
> - Original Message -
> > From: "Nir Soffer" 
> > To: "David Caro" 
> > Cc: "Tal Nisan" , "infra" 
> > Sent: Monday, November 30, 2015 9:55:52 PM
> > Subject: Re: Missing package on one of the Fedora 23 slaves?
> > 
> > Adding Francesco
> > 
> > On Mon, Nov 30, 2015 at 8:12 PM, David Caro  wrote:
> > > On 11/30 20:09, Nir Soffer wrote:
> > >> Another instance:
> > >>
> > >> 17:37:47 Error: nothing provides ovirt-vmconsole >= 1.0.0-0 needed by
> > >> vdsm-4.17.11-9.gitdcd50d8.fc23.noarch.
> > >> 17:37:47 nothing provides ovirt-vmconsole >= 1.0.0-0 needed by
> > >> vdsm-4.17.11-9.gitdcd50d8.fc23.noarch.
> > >
> > > Has ovirt-vmconsole dependency changed lately? From which repo should it 
> > > be
> > > comming from?
> > >
> > > The repos available during the build are declared in the
> > > automation/check-patch.repos file in the vdsm git repo, maybe something is
> > > missing there for fc23?
> 
> I think we just need https://gerrit.ovirt.org/#/c/49503/

Merged, hth.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [RFC] Proposal for dropping FC22 jenkins tests on master branch

2015-11-30 Thread Dan Kenigsberg
On Mon, Nov 30, 2015 at 01:11:25PM +0100, Sandro Bonazzola wrote:
> On Thu, Nov 12, 2015 at 9:34 AM, Sandro Bonazzola 
> wrote:
> 
> > Hi,
> > can we drop FC22 testing in jenkins now that FC23 jobs are up and running?
> > it will reduce jenkins load. If needed we can keep FC22 builds, just
> > dropping the check jobs.
> > Comments?
> >
> >
> This morning queue is up to 233 jobs, can we drop fc22 build on master?

+1.

http://jenkins.ovirt.org/job/vdsm_master_check-patch-fc23-x86_64/ and
http://jenkins.ovirt.org/view/All/job/vdsm_master_install-rpm-sanity-fc23_created/
seem good.

Dan.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [vdsm] CI tests jobs for branch ovirt-3.5

2015-11-18 Thread Dan Kenigsberg
On Wed, Nov 18, 2015 at 02:53:47PM +0100, David Caro wrote:
> On 11/18 14:49, Petr Horacek wrote:
> > Could you just give me a link to that failing jobs please?
> 
> 
> Sure : http://jenkins.ovirt.org/search/?q=vdsm_3.5_check-patch


14:48:25 ERROR: testGetBondingOptions (netinfoTests.TestNetinfo)
14:48:25 --
14:48:25 Traceback (most recent call last):
14:48:25   File 
"/home/jenkins/workspace/vdsm_3.5_check-patch-el6-x86_64/vdsm/tests/monkeypatch.py",
 line 133, in wrapper
14:48:25 return f(*args, **kw)
14:48:25   File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
14:48:25 self.gen.throw(type, value, traceback)
14:48:25   File 
"/home/jenkins/workspace/vdsm_3.5_check-patch-el6-x86_64/vdsm/tests/monkeypatch.py",
 line 110, in MonkeyPatchScope
14:48:25 yield {}
14:48:25   File 
"/home/jenkins/workspace/vdsm_3.5_check-patch-el6-x86_64/vdsm/tests/monkeypatch.py",
 line 133, in wrapper
14:48:25 return f(*args, **kw)
14:48:25   File 
"/home/jenkins/workspace/vdsm_3.5_check-patch-el6-x86_64/vdsm/tests/testValidation.py",
 line 100, in wrapper
14:48:25 return f(*args, **kwargs)
14:48:25   File 
"/home/jenkins/workspace/vdsm_3.5_check-patch-el6-x86_64/vdsm/tests/netinfoTests.py",
 line 370, in testGetBondingOptions
14:48:25 with open(BONDING_MASTERS, 'w') as bonds:
14:48:25 IOError: [Errno 13] Permission denied: '/sys/class/net/bonding_masters'

seems to be failing due to a missing monkeypatch, or a missing kernel module.
Ondra, could you take a look?

14:48:25 FAIL: testFakeNics (netinfoTests.TestNetinfo)
14:48:25 --
14:48:25 Traceback (most recent call last):
14:48:25   File 
"/home/jenkins/workspace/vdsm_3.5_check-patch-el6-x86_64/vdsm/tests/testValidation.py",
 line 100, in wrapper
14:48:25 return f(*args, **kwargs)
14:48:25   File 
"/home/jenkins/workspace/vdsm_3.5_check-patch-el6-x86_64/vdsm/tests/netinfoTests.py",
 line 236, in testFakeNics
14:48:25 nics))
14:48:25 AssertionError: Some of hidden devices set(['mehd_28', 'mehv_68', 
'mehv_66']) is shown in nics ['eth0', 'dummy0', 'dummy_85', 'mehd_28', 
'veth_34', 'veth_22'] 

is more bizarre. In any case, can you mark them as @brokentest("on jenkins")
until we solve the riddle?
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[vdsm] merge rights on master branch

2015-11-17 Thread Dan Kenigsberg
Nir Soffer has been contributing to Vdsm for more than two years now.
Beyond his storage-related expertise, he is well-known for his thorough
code reviews and his pains when the master branch is broken. It's time
for him to take even bigger responsibility.

David, please grant Nir Soffer with merge rights on the master branch of
Vdsm.

I'll keep merging network, virt, and infra patches myself, but much less
than before. Adam keeps his merge rights as before (he tends to use them
in emergencies only).

Regards,
Dan.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [ovirt-devel] Maintainer rights on vdsm - ovirt-3.5-gluster

2015-11-16 Thread Dan Kenigsberg
On Wed, Apr 29, 2015 at 10:03:47AM +0200, David Caro wrote:
> Done
> 
> On 04/20, Dan Kenigsberg wrote:
> > On Mon, Apr 20, 2015 at 03:20:18PM +0530, Sahina Bose wrote:
> > > Hi!
> > > 
> > > On the vdsm branch "ovirt-3.5-gluster", could you provide merge rights to
> > > Bala (barum...@redhat.com) ?
> > 
> > +1 from me.
> > 
> > ovirt-3.5-gluster needs a rebase on top of the current ovirt-3.5

Thanks. A few months have passed, and now we need Sahina herself as an
admin of this branch. Would you please add her as well?
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: introducing F23 jobs: vdsm_master_check-patch-fc23 failure

2015-11-12 Thread Dan Kenigsberg
On Wed, Nov 11, 2015 at 03:43:14PM +0100, Sandro Bonazzola wrote:
> On Tue, Nov 10, 2015 at 1:21 PM, Dan Kenigsberg <dan...@redhat.com> wrote:
> 
> > Thanks for introducing the new job.
> > However, I see that the job that runs on spec changes fails:
> >
> > http://jenkins.ovirt.org/job/vdsm_master_check-patch-fc23-x86_64/49/
> >
> > 07:40:56 Last metadata expiration check performed 0:00:27 ago on Tue Nov
> > 10 07:40:24 2015.
> > 07:40:56 Error: nothing provides ovirt-vmconsole >= 1.0.0-0 needed by
> > vdsm-4.17.999-116.git7d73f3b.fc23.noarch.
> > 07:40:56 nothing provides ovirt-vmconsole >= 1.0.0-0 needed by
> > vdsm-4.17.999-116.git7d73f3b.fc23.noarch.
> >
> > and another fails on
> >
> > http://jenkins.ovirt.org/job/vdsm_master_install-rpm-sanity-fc23_created/9/console
> >
> > 07:27:09 shell-scripts/mock_build_onlyrpm.sh
> > 07:27:09 + distro=fc23
> > 07:27:09 + arch=x86_64
> > 07:27:09 + project=vdsm
> > 07:27:09 + extra_packages=(vim-minimal)
> > 07:27:09 + extra_rpmbuild_options=('with_check=0' 'with_hooks=1')
> > 07:27:09 /tmp/hudson8201470295383101379.sh: line 39: syntax error near
> > unexpected token `('
> >
> >
> > Do we have an fc23 build of ovirt-vmconsole?
> > Can you fix that syntax error?
> >
> 
> Should all be fixed, only https://gerrit.ovirt.org/48408 pending merge

Taken, though for some reason, I had to manually set the CI+1 flag on it.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [ovirt-devel] Jenkins build fails - bad dependency?

2015-11-12 Thread Dan Kenigsberg
On Wed, Nov 11, 2015 at 03:41:11PM +0100, Sandro Bonazzola wrote:
> On Wed, Nov 11, 2015 at 12:06 PM, Eyal Edri  wrote:
> 
> > ccing devel.
> >
> > On Tue, Nov 10, 2015 at 6:32 PM, Nir Soffer  wrote:
> >
> >> This currently breaks the build for
> >> https://gerrit.ovirt.org/31162
> >>
> >> 00:08:25.505 Last metadata expiration check performed 0:00:02 ago on
> >> Tue Nov 10 16:24:12 2015.
> >>
> >> 00:08:30.247 Error: nothing provides ovirt-vmconsole >= 1.0.0-0 needed
> >> by vdsm-4.17.999-118.git71e8041.fc23.noarch.
> >> 00:08:30.247 nothing provides ovirt-vmconsole >= 1.0.0-0 needed by
> >> vdsm-4.17.999-118.git71e8041.fc23.noarch.
> >> 00:08:30.247 nothing provides ovirt-vmconsole >= 1.0.0-0 needed by
> >> vdsm-4.17.999-118.git71e8041.fc23.noarch.
> >> 00:08:30.247 nothing provides ovirt-vmconsole >= 1.0.0-0 needed by
> >> vdsm-4.17.999-118.git71e8041.fc23.noarch.
> >> 00:08:30.247 nothing provides ovirt-vmconsole >= 1.0.0-0 needed by
> >> vdsm-4.17.999-118.git71e8041.fc23.noarch.
> >> 00:08:30.247 nothing provides ovirt-vmconsole >= 1.0.0-0 needed by
> >> vdsm-4.17.999-118.git71e8041.fc23.noarch.
> >> 00:08:30.247 nothing provides ovirt-vmconsole >= 1.0.0-0 needed by
> >> vdsm-4.17.999-118.git71e8041.fc23.noarch.
> >> 00:08:30.247 nothing provides ovirt-vmconsole >= 1.0.0-0 needed by
> >> vdsm-4.17.999-118.git71e8041.fc23.noarch.
> >> 00:08:30.247 nothing provides ovirt-vmconsole >= 1.0.0-0 needed by
> >> vdsm-4.17.999-118.git71e8041.fc23.noarch.
> >> 00:08:30.247 nothing provides ovirt-vmconsole >= 1.0.0-0 needed by
> >> vdsm-4.17.999-118.git71e8041.fc23.noarch.
> >> 00:08:30.247 nothing provides ovirt-vmconsole >= 1.0.0-0 needed by
> >> vdsm-4.17.999-118.git71e8041.fc23.noarch.
> >> 00:08:30.247 nothing provides ovirt-vmconsole >= 1.0.0-0 needed by
> >> vdsm-4.17.999-118.git71e8041.fc23.noarch.
> >> 00:08:30.247 nothing provides ovirt-vmconsole >= 1.0.0-0 needed by
> >> vdsm-4.17.999-118.git71e8041.fc23.noarch
> >> 00:08:30.247 (try to add '--allowerasing' to command line to replace
> >> conflicting packages)
> >>
> >> Can you take a look at this?
> >>
> >
> 
> looks like the automation/check-patch.repos file is missing the snapshot
> repo.

heh, we simply have no automation/check-patch.repos.fc23, yet.
Is that

http://gerrit.ovirt.org/48479

all that is needed?
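
(If memory serves, the standard-CI *.repos format is one repository per
line, optionally as "name,URL" -- so something along these lines, with the
URL being purely illustrative:

    ovirt-snapshot,http://resources.ovirt.org/pub/ovirt-master-snapshot/rpm/fc23/

but the patch above is the authoritative answer.)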
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


introducing F23 jobs: vdsm_master_check-patch-fc23 failure

2015-11-10 Thread Dan Kenigsberg
Thanks for introducing the new job.
However, I see that the job that runs on spec changes fails:

http://jenkins.ovirt.org/job/vdsm_master_check-patch-fc23-x86_64/49/

07:40:56 Last metadata expiration check performed 0:00:27 ago on Tue Nov 10 
07:40:24 2015.
07:40:56 Error: nothing provides ovirt-vmconsole >= 1.0.0-0 needed by 
vdsm-4.17.999-116.git7d73f3b.fc23.noarch.
07:40:56 nothing provides ovirt-vmconsole >= 1.0.0-0 needed by 
vdsm-4.17.999-116.git7d73f3b.fc23.noarch.

and another fails on
http://jenkins.ovirt.org/job/vdsm_master_install-rpm-sanity-fc23_created/9/console

07:27:09 shell-scripts/mock_build_onlyrpm.sh
07:27:09 + distro=fc23
07:27:09 + arch=x86_64
07:27:09 + project=vdsm
07:27:09 + extra_packages=(vim-minimal)
07:27:09 + extra_rpmbuild_options=('with_check=0' 'with_hooks=1')
07:27:09 /tmp/hudson8201470295383101379.sh: line 39: syntax error near 
unexpected token `('


Do we have an fc23 build of ovirt-vmconsole?
Can you fix that syntax error?

Regards,
Dan.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Problem with submit of 5982acfcd055, please check

2015-10-30 Thread Dan Kenigsberg
On Thu, Oct 29, 2015 at 05:16:06PM +0100, Anton Marchukov wrote:
> Hello Dan.
> 
> As a follow up to our conversation on IRC regarding failure to submit
> https://gerrit.ovirt.org/#/c/47035/
> 
> ssh -p 29418 gerrit.ovirt.org gerrit review --submit 5982acfcd055
> error: rule error: Cannot submit draft patch sets
> 
> $ ssh -p 29418 gerrit.ovirt.org gerrit review --publish 5982acfcd055
> error: Cannot publish this draft patch set
> 
> Looks like I have found what happened. That change looks to be in a draft
> state indeed, e.g. it is shown as such under "Patch Sets" menu and it may
> be in "ready to submit" state possiblly due to
> https://code.google.com/p/gerrit/issues/detail?id=2844 gerrit bug.
> 
> Also there were no "Publish draft" rights set, so you may not see "Publish"
> button due to that. Now I have added "publish draft" right to
> "vdsm-maintainers" group and you should see Publish button and then would
> be able to do publish and submit.
> 
> Please check when you ahve time and let us know if that solved the problem
> or not.

Thanks Anton. Published and merged this awkward patch.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: error in gerrit review

2015-10-25 Thread Dan Kenigsberg
On Fri, Oct 23, 2015 at 11:40:31AM +0200, David Caro wrote:
> I've just restarted gerrit, can you check if the issues still persists?

It does, no improvement.

https://gerrit.ovirt.org/#/q/project:vdsm+status:open+owner:%22Dan+Kenigsberg+%253Cdanken%2540redhat.com%253E%22

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: error in gerrit review

2015-10-25 Thread Dan Kenigsberg
On Sun, Oct 25, 2015 at 09:23:08AM +0100, Eyal Edri wrote:
> i see this normally,
> so it might mean something is wrong with your user, which user to use to
> log in?

No, it's not something specific to me. The link below was generated by
gerrit, and it is buggy. It contains the sequence "%253C" which is a
double url-encoding of "<".

Somehow this affects Firefox and not Chrome.

> 
> e.
> 
> On Sun, Oct 25, 2015 at 9:05 AM, Dan Kenigsberg <dan...@redhat.com> wrote:
> 
> > On Fri, Oct 23, 2015 at 11:40:31AM +0200, David Caro wrote:
> > > I've just restarted gerrit, can you check if the issues still persists?
> >
> > It does, no improvement.
> >
> >
> > https://gerrit.ovirt.org/#/q/project:vdsm+status:open+owner:%22Dan+Kenigsberg+%253Cdanken%2540redhat.com%253E%22
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: error in gerrit review

2015-10-19 Thread Dan Kenigsberg
On Mon, Oct 19, 2015 at 10:51:58AM +0200, Anton Marchukov wrote:
> Hello Dan.
> 
> Can you confirm that it is still reproducible for you now? I hit the link
> and also tried that search filter and it works at the moment.
> 
> Anton.
> 
> On Mon, Oct 19, 2015 at 10:20 AM, Dan Kenigsberg <dan...@redhat.com> wrote:
> 
> > On Sun, Oct 18, 2015 at 04:29:02PM +0300, Roy Golan wrote:
> > > I get a "500 Internal server error" under "Conflicts with" tab
> > >
> >
> > It smells unrelated, but recently I've noticed that my favorite gerrit
> > search no longer work:
> >
> >   "status:open project:vdsm label:verified+1 label:code-review>=+1
> > -label:code-review<=-1 branch:master"
> >
> > is wrongly url-encoded twice to
> >
> >
> > https://gerrit.ovirt.org/#/q/status:open+project:vdsm+label:verified%252B1+label:code-review%253E%253D%252B1+-label:code-review%253C%253D-1+branch:master

This still reproduces on my Firefox 41.0.1. I see no problem on Chrome.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: error in gerrit review

2015-10-19 Thread Dan Kenigsberg
On Sun, Oct 18, 2015 at 04:29:02PM +0300, Roy Golan wrote:
> I get a "500 Internal server error" under "Conflicts with" tab
> 

It smells unrelated, but recently I've noticed that my favorite gerrit
search no longer works:

  "status:open project:vdsm label:verified+1 label:code-review>=+1 
-label:code-review<=-1 branch:master"

is wrongly url-encoded twice to

https://gerrit.ovirt.org/#/q/status:open+project:vdsm+label:verified%252B1+label:code-review%253E%253D%252B1+-label:code-review%253C%253D-1+branch:master

and ends with a "Code Review - Error
line 1:39 no viable alternative at character '%'"
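
(To spell the double encoding out: "+" first becomes "%2B"; encoding that
result again turns its "%" into "%25", yielding "%252B" -- which is exactly
what the generated link contains.)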

Is it a known gerrit bug? Is there a fix?

Dan.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Build failed with missing dependency

2015-09-28 Thread Dan Kenigsberg
On Mon, Sep 28, 2015 at 03:48:24PM +0200, Piotr Kliczewski wrote:
> HI,
> 
> I can see that vdsm build [1] with following error:
> 
> Error: nothing provides glusterfs-geo-replication >= 3.7.1 needed by
> vdsm-gluster-4.17.2-188.gitef256fe.fc22.noarch.
> 
> Can someone please take a look at it?
> 
> Thanks,
> Piotr
> 
> 
> [1] http://jenkins.ovirt.org/job/vdsm_master_check-patch-fc22-x86_64/837

Could it be a temporary unavailability of
http://download.gluster.org/pub/gluster/glusterfs/LATEST/Fedora/fedora-22/x86_64/
?

I see that later runs of the same job do succeed.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [ovirt-devel] dropping disabled jobs on October 19th

2015-09-24 Thread Dan Kenigsberg
On Wed, Sep 23, 2015 at 10:32:01AM +0200, Sandro Bonazzola wrote:
> Hi, the following jenkins jobs are disabled, unmaintained and not yamlized:
> 
>1. vdsm_master_pep8_gerrit_disabled
>
> 
>2. vdsm_master_unit_tests_el_gerrit_disabled
>
> 

Ack for 1 and 2.

>3. ovirt-engine_maser_gwt_admin_gerrit_disabled
>
> 
>4. ovirt-engine_master_warnings-scan_merged_disabled
>
> 
>5. vdsm_3.5_network_functional_tests_gerrit_disabled
>
> 

Petr, I'm still hopeful you can revive this one, one day. Can you ack
having taken a copy of it?

>6. ovirt-engine_master_animal-sniffer_merged_disabled
>
> 
>7. ovirt-engine_master_upgrade-params_merged_disabled
>
> 
>8. ovirt-engine_master_compile_checkstyle_gerrit_disabled
>
> 
> 
> If nobody volunteer for adopting them, I'm going to delete them on October
> 19th.
> Thanks,
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Merge accounts request

2015-09-09 Thread Dan Kenigsberg
On Wed, Sep 09, 2015 at 10:27:06AM +0200, David Caro wrote:
> 
> Done for both, can you verify?

ack*2
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Merge accounts request

2015-09-08 Thread Dan Kenigsberg
On Tue, Sep 08, 2015 at 12:44:19PM -0400, Greg Padgett wrote:
> Hi infra,
> 
> I accidentally created duplicate gerrit accounts during the recent fedora
> sign-in outage and am hoping someone would merge them for me.
> 
> Main account:
>  id 1000200, email gpadgett@rh..., identity gpadgett.id.fedoraproject.org
> 
> Others:
>  id 1001002, email gdpadgett@gmail..., identity 118074269527461692789
>  id 1001011, email gpadgett@rh..., identity 116136055444850676631


Would you please also merge the two accounts under the name of

Genadi Chereshnya 

Regards,
Dan.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Old patches

2015-09-07 Thread Dan Kenigsberg
On Mon, Sep 07, 2015 at 10:48:33AM +0200, Piotr Kliczewski wrote:
> There are few patches [1] by Saggi which can be abandoned from both
> vdsm and engine.
> I need a permission to do it so please assigned it to me.

+1 for granting Piotr permission to abandon master-branch
patches.

Dan.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Needed dependency in 'Project vdsm_master_install-rpm-sanity-el7_created'

2015-09-03 Thread Dan Kenigsberg
On Wed, Sep 02, 2015 at 02:49:47PM +0200, Anton Marchukov wrote:
> Hello Dan.
> 
> As for the standard CI flow, it is described here:
> 
> http://www.ovirt.org/index.php?title=CI/Build_and_test_standards
> 
> Feel free to direct us any questions.
> 
> Anton.

Petr, would you set up a meeting with Anton or David
to see how you can migrate the current build-rpm-job to the new
standard?

Regards,
Dan.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: joining accounts

2015-08-30 Thread Dan Kenigsberg
On Fri, Aug 28, 2015 at 03:14:34PM +0200, Anton Marchukov wrote:
 Hello Dan.
 
 I have merged masayag accounts.
 
 yzaspits has been done previously (he have sent a separate request).

Thanks!
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


joining accounts

2015-08-24 Thread Dan Kenigsberg
I have just noticed that {masayag,yzaspits}@redhat appear twice (each)
on gerrit completion, and cannot be added as patch reviewers.

Can this be fixed (I believe by merging the relevant accounts)?

Regards,
Dan.

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [ovirt-devel] [RFC] proposal to drop fc20 support in 3.5.z after 3.5.4 release

2015-08-04 Thread Dan Kenigsberg
On Tue, Aug 04, 2015 at 09:25:12AM +0200, Michal Skrivanek wrote:
 
 On Aug 4, 2015, at 09:22 , Sandro Bonazzola sbona...@redhat.com wrote:
 
  
  Hi,
  since Fedora 20 reached EOL long ago, I propose to drop fedora 20 support 
  within 3.5.z, being 3.5.4 the last supported release.
 
 +1

Fine with me.

 
  
  According to current schedule 3.5.5 should be the last 3.5.z release since 
  3.6.0 will be out very close to 3.5.5. We're currently experiencing issues 
  in keeping fc20 build support in Jenkins CI and we need to decide if we 
  want to put effort in keeping fc20 support in 3.5.z.
  
  Comments are welcome.
  
  -- 
  Sandro Bonazzola
  Better technology. Faster innovation. Powered by community collaboration.
  See how it works at redhat.com
  ___
  Devel mailing list
  de...@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/devel
 
 ___
 Devel mailing list
 de...@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/devel
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [ovirt-devel] vdsm from 3.5 branch is currently failing to build on Fedora

2015-07-31 Thread Dan Kenigsberg
On Fri, Jul 31, 2015 at 09:13:45AM +0200, Sandro Bonazzola wrote:
 On Fri, Jul 31, 2015 at 9:06 AM, Michal Skrivanek 
 michal.skriva...@redhat.com wrote:
 
 
  On Jul 31, 2015, at 08:08 , Sandro Bonazzola sbona...@redhat.com wrote:
 
   Hi,
   vdsm from 3.5 branch is currently failing to build on Fedora, failing
  pep8.
   Please fix ASAP.
  
   Fedora 20:
  http://jenkins.ovirt.org/view/Stable%20branches%20per%20project/view/vdsm/job/vdsm_3.5_create-rpms-fc20-x86_64_merged/288/artifact/exported-artifacts/build.log
   Fedora 21:
  http://jenkins.ovirt.org/view/Stable%20branches%20per%20project/view/vdsm/job/vdsm_3.5_create-rpms-fc21-x86_64_merged/114/artifact/exported-artifacts/build.log
 
  looks like [1] merged 3 weeks ago
 
  hm..running CI on merged patches doesn't alert on failure?
 
 
 
 Looks like no email is triggered on failure since jenkins already comment
 on gerrit and gerrit send an email to all people reviewing the patch.
 So all of them have been warned that the patch was failing on merge.
 
 I think vdsm is also missing a pep8 / pyflakes validation on patch sent, it
 would help avoiding to discover the issue after having merged the patch.

We have a pep8 job like that on master, but apparently not on ovirt-3.5.

Anyway, I have just posted https://gerrit.ovirt.org/#/c/44239/ to
unbreak the long line introduced by [1] and other damage flagged by stock
el7 pep8.

I'll merge it asap.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [ovirt-devel] mom-0.5.0-1.el7.noarch: [Errno 256] No more mirrors to try.

2015-07-28 Thread Dan Kenigsberg
On Tue, Jul 28, 2015 at 03:04:14PM +0200, Sandro Bonazzola wrote:
 Hi,
 vdsm is currently failing to build on mom-0.5.0-1.el7.noarch: [Errno 256]
 No more mirrors to try.
 mom-0.5.0-1.el7.noarch is not in ovirt repos, there is a 0.5.0-0.0 there,
 and it's provided only in epel 7 testing repo.
 
 I'm not really sure that mom belongs to EPEL since it's in a Red Hat
 layered product.

Please note that, following Sandro's pointer to
https://fedoraproject.org/wiki/EPEL/GuidelinesAndPolicies#Policy
and the availability of Vdsm in
http://cbs.centos.org/repos/virt7-ovirt-35-release/x86_64/os/Packages/

we have kicked Vdsm out of EPEL7. I think that mom should follow suit.

 
 Please either move 0.5.0-1 to stable or provide it through ovirt repo.
 
 According to ovirt git, 0.5.0 has not been tagged and tarball has not been
 released.
 So I'm wondering how 0.5.0 has been built in EPEL.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [ovirt-devel] vdsm_master_unit-tests_merged stuck again on JsonRpcServerTests

2015-06-25 Thread Dan Kenigsberg

I see that you have re-enabled the tests - thanks.

Please note that now I see a random

20:25:49 ==
20:25:49 ERROR: testMethodReturnsNullAndServerReturnsTrue(kw=False) 
(jsonRpcTests.JsonRpcServerTests)
20:25:49 --
20:25:49 Traceback (most recent call last):
20:25:49   File /tmp/run/vdsm/tests/testlib.py, line 64, in wrapper
20:25:49 return f(self, *args)
20:25:49   File /tmp/run/vdsm/tests/monkeypatch.py, line 133, in wrapper
20:25:49 return f(*args, **kw)
20:25:49   File /tmp/run/vdsm/tests/jsonRpcTests.py, line 186, in 
testMethodReturnsNullAndServerReturnsTrue
20:25:49 CALL_ID)
20:25:49   File /tmp/run/vdsm/tests/jsonRpcTests.py, line 89, in _callTimeout
20:25:49 raise JsonRpcNoResponseError(methodName)
20:25:49 JsonRpcNoResponseError: [-32605] No response for JSON-RPC ping request.

in http://jenkins.ovirt.org/job/vdsm_master_unit-tests_created/4096/console

can you take a look?
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: vdsm_master_unit-tests_merged stuck again on JsonRpcServerTests

2015-06-23 Thread Dan Kenigsberg
On Tue, Jun 23, 2015 at 11:07:07AM -0400, Piotr Kliczewski wrote:
 I am looking at it. It runs locally without any issues but it seems to get 
 stuck quite often on CI.
 
 I was told that it can be related to running the job in mock. I need to run 
 it in mock locally to verify.
 

 Thanks,
 Piotr
 
 - Original Message -
 From: Sandro Bonazzola sbona...@redhat.com
 To: Piotr Kliczewski pklic...@redhat.com, infra infra@ovirt.org, 
 de...@ovirt.org
 Sent: Tuesday, June 23, 2015 4:56:19 PM
 Subject: vdsm_master_unit-tests_merged stuck again on JsonRpcServerTests
 
 http://jenkins.ovirt.org/job/vdsm_master_unit-tests_merged/326/console
 
 Last lines:
 JsonRpcServerTests
 testDoubleResponse(kw=False)OK
 testDoubleResponse(kw=True) OK
 testMethodBadParameters(kw=False)   OK
 testMethodBadParameters(kw=True)OK
 
 job is still running on fc20-vm06.phx.ovirt.org

Piotr, in the meantime, can you NOSE_EXCLUDE the offending test?
I'd love to see the unit tests running again.
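
Something along these lines should do until the root cause is found
(NOSE_EXCLUDE is nose's exclusion regex; I am assuming make check still
drives the suite through nose):

    export NOSE_EXCLUDE='testMethodReturnsNullAndServerReturnsTrue'
    make check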
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Jenkins CI granting excessive +1s

2015-06-17 Thread Dan Kenigsberg
On Tue, Jun 16, 2015 at 08:48:17PM +0200, dcaro wrote:
 On 06/16, Eyal Edri wrote:
  Sounds like a bug.
  Jenkins should only use the CI flag and not change CR and V flags.
  
  Might have been a misunderstanding.
  David, can you fix it so Jenkins will only update CI flag?
 
 Can someone pass me an example of the patch that had the reviews? It's 
 possible
 that previously the jobs were manually modified to only set the ci flag and we
 updated them on yaml and the manual changes got reverted. Having a sample 
 patch
 would allow me to trim down the list of possible jobs that give that review.
 
 
 That can be prevented globally easily, but until all projects have the ci flag
 we can't enforce it globally.

See for example https://gerrit.ovirt.org/#/c/42362/

Patch Set 5: Code-Review+1 Continuous-Integration+1 Verified+1

in a later run seems to have fixed this

Patch Set 6: Continuous-Integration+1
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Jenkins CI granting excessive +1s

2015-06-17 Thread Dan Kenigsberg
On Wed, Jun 17, 2015 at 03:05:52PM +0200, dcaro wrote:
 
 I just rolled out the CI flag to all the projects this morning, restricting
 jenkins to set only that flag, did that happen again after it?

No, all seems clear now. Thanks.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Jenkins CI granting excessive +1s

2015-06-16 Thread Dan Kenigsberg
As of yesterday, Jenkins CI started granting CR+1, V+1 and CI+1 to
every patchset that passed it successfully.

Was this change somehow intentional?

It is confusing and unwanted. Only a human developer can give a
meaningful Code-Review. Only a human user/QE can say that a patch solved
the relevant bug and grant it Verified+1. The flags that Jenkins grants
are meaningless.

Can this be stopped? Jenkins should give CI+1 if it's happy with a
patch, or CI-1 if it has not yet run (or has failed).

Regards,
Dan.

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Another test failure which is not related to the tested patch

2015-06-08 Thread Dan Kenigsberg
On Mon, Jun 08, 2015 at 08:06:28AM +0200, Sandro Bonazzola wrote:
 Il 06/06/2015 11:57, Eyal Edri ha scritto:
  Hi,
  
  thanks for reporting this nir, this is helping us map the current infra 
  issues at hand
  and try to resolve them.
  
  i see the failing repos [1] are from epel/testing - are these mandatory to 
  the test?
  or we can disable them and use only stable epel?
 
 We can't disable epel testing since ioprocess is not yet in stable repo.

(btw Yeela/Yaniv, can you push the latest build to stable?
https://admin.fedoraproject.org/updates/search/ioprocess )
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [ovirt-devel] Fedora 20 is not supported any more on master

2015-05-18 Thread Dan Kenigsberg
On Mon, May 18, 2015 at 01:32:27AM -0400, Yedidyah Bar David wrote:
 - Original Message -
  From: Nir Soffer nsof...@redhat.com
  To: Yedidyah Bar David d...@redhat.com
  Cc: Sandro Bonazzola sbona...@redhat.com, David Caro 
  dcaro...@redhat.com, de...@ovirt.org, infra
  infra@ovirt.org
  Sent: Sunday, May 17, 2015 8:50:26 PM
  Subject: Re: [ovirt-devel] Fedora 20 is not supported any more on master
  
  - Original Message -
   From: Yedidyah Bar David d...@redhat.com
   To: Sandro Bonazzola sbona...@redhat.com
   Cc: David Caro dcaro...@redhat.com, Nir Soffer nsof...@redhat.com,
   de...@ovirt.org, infra infra@ovirt.org
   Sent: Sunday, May 17, 2015 8:41:42 AM
   Subject: Re: [ovirt-devel] Fedora 20 is not supported any more on master
   
   - Original Message -
From: Sandro Bonazzola sbona...@redhat.com
To: David Caro dcaro...@redhat.com, Nir Soffer 
nsof...@redhat.com
Cc: de...@ovirt.org, infra infra@ovirt.org
Sent: Friday, May 15, 2015 3:26:02 PM
Subject: Re: [ovirt-devel] Fedora 20 is not supported any more on master

Il 15/05/2015 10:23, David Caro ha scritto:
 
 This job is on yaml, it's fairly simple to add/remove distros, you can
 do
 it
 yourself, ping me when you are around and I can guide you.
 
 
 The fix should remove the fc20 section under distro from the file:
 
 https://gerrit.ovirt.org/gitweb?p=jenkins.git;a=blob;f=jobs/confs/yaml/jobs/vdsm/vdsm_create-rpms.yaml;hb=refs/heads/master
 
 from the project that has the master version


Please also disable ovirt-hoste-deploy-offline for fedora 20 as well.
I'll have to disable all-in-one too.

Maybe we should consider to drop fedora 20 at all in master at this
point.
Opening the vote: +1 for me; adding de...@ovirt.org.
   
   I'd like to understand the support/build matrix with this change.
   
   Fedora 22 is to be released 2015-05-26. We currently do not build the
   engine
   for it,
   and I understand we do not intend to, until the port to wildfly/java8 is
   done. Right?
   Any ETA for that?
   
   We do build everything for fedora 21, but [1] says it will not be
   supported.
   Is this still true?
   
   We do not build vdsm, at least, for el6, and it's not intended to be
   supported anymore.
   
   We also build everything for el7.
   
   So it turns out that the only distro fully supported is el7, and no 
   version
   of
   fedora will be supported until the port to f22/wildfly/java8 is finished.
   Indeed?
   
   Not sure how long we can/want to be in this state.
  
  The subject was confusing - I'm talking about vdsm support in Fedora 20, not
  about engine support, which as far as I know, works fine on Fedora 20.
  
  Sorry for the confusion.
 
 np at all, but my question still applies (not directed to you specifically) 
 ...

The question applies mostly to Sandro. Personally, I think we can safely
skip Engine support for f21. Vdsm support for f21 is important (it's
actually the ONLY supported Fedora version for the master branch, as
much as master is supported.)


 And a related one:
 
 Do we (want to) support engine on el6/7 with hosts on fedora (21/22)?

As far as I recall, we have supported this combination for a long time (as
long as f* and e* hosts are kept on separate clusters), and I don't see
the benefit of dropping this support. It can come in useful for
conservative users running el6 but wanting to test new stuff on
el7/f29 hypervisors.

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [VDSM] ipwrapperTests.TestDrvinfo failing on f21 slave, pass on f20 slave

2015-05-07 Thread Dan Kenigsberg
On Wed, May 06, 2015 at 05:28:14PM -0400, Nir Soffer wrote:
 Hi infra,
 
 These tests fail on Fedora 21 slave, and pass on Fedora 20 slave.
 http://jenkins.ovirt.org/job/vdsm_master_unit-tests_created/1888/console - 
 FAIL
 http://jenkins.ovirt.org/job/vdsm_master_unit-tests_created/1908/console - 
 PASS
 
 Infra: please check the Fedora 21 slave, maybe this is configuration issue?
 Ido: please check if the tests are valid
 
 Generally these tests are not unitests, and should not run for validating 
 unrelated patches. They should move to the functional tests.

Could you explain your assertions?

These tests do not require a functioning vdsm. They don't test vdsm's
API, so they should not end up in the functional directory.

The ipwrapperTests tests the ipwrapper unit of vdsm. But in order to do
so, they interact with the host system, and attempt to create a bridge.
When this setup fails for some reason (I do not know why `brctl
addbr vdsmtest-O8JmI` failed), the test ends up in an ERROR state.

If we fail to fix this condition ASAP, we should catch this failure and
skip the test as broken. But I don't think we should throw it out of the
test suite altogether.
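
A sketch of what I mean by catching it -- illustrative only, not vdsm's
actual helpers:

    import subprocess
    from unittest import SkipTest

    def _add_test_bridge(name):
        # skip, rather than ERROR, when the slave cannot create bridges
        if subprocess.call(['brctl', 'addbr', name]) != 0:
            raise SkipTest('cannot create bridge %s on this host' % name)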
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Jenkins VDSM build failures

2015-05-07 Thread Dan Kenigsberg
On Thu, May 07, 2015 at 01:33:28PM +0200, Sandro Bonazzola wrote:
 Jenkins build failures have been caused by:
 
 Author: Dan Kenigsberg dan...@redhat.com
 Date:   Fri Apr 24 12:54:57 2015 +0100
 
 python3: use python3-compatible raising
 
 http://legacy.python.org/dev/peps/pep-3109/ has eliminated the raise
 statement with 2 or 3 expressions. However, in Python 2 they are the
 only way to tweak traceback of the raised exception.
 
 This patch introduces a new dependency on the six library, since its
 six.reraise() handles both cases. The code was auto-generated with
 the libmodernize.fixes.fix_raise_six fixer of python-modernize.
 
 
 without updating the build jobs in jenkins repo (git clone 
 git://gerrit.ovirt.org/jenkins)
 I'm going to add the six dependency but please, whenever you change deps, 
 also fix the jobs your breaking.

Thanks, and sorry. I've updated only the jenkins config of the job,
which is a beginner's bug. For future reference, the fix to the yaml
definition was

https://gerrit.ovirt.org/#/c/40654/
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: your patch https://gerrit.ovirt.org/#/c/40346/ broke oVirt vdsm jobs

2015-05-05 Thread Dan Kenigsberg
On Tue, May 05, 2015 at 10:11:09AM +0200, David Caro wrote:
 On 05/05, Max Kovgan wrote:
  hi, Dan.
  makes sense to me to focus on 2 use cases:
   - pre-commit hook running everything jenkins is running - locally
 Maybe pre-push instead, that will leverage a bit the local work
 - pros:
   - nearly identical checks/tests jenkins would running
   - doesn't care about IDE/editor
 - cons:
   - slower
   - can be annoying to commit (locally) broken code for later squashing

If something is too annoying to run (such as blocking every patch for
3-minute unit tests when the poor developer only wants to post his
patch and go home), a developer would find a way to skip it.

  
   - editor/IDE marriage with tests/checks running
 - pros:
   - dev has full control over what runs in checks/tests
   - allows to commit dirty commit
   - shorter == quicker than the quickest jenkins option
 - cons:
   - depends on IDE/editor support
   - less checks/tests = higher risk

+1. It boils down to developer and maintainer prudence.
I have such a plugin in my ViM for static testing; Ido (and everyone
else) should have one, too. I'm less sure about auto-running `make
check` at random points in time.
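
The pre-push variant suggested above could be as simple as this sketch
(assuming a configured tree with the usual make targets):

    #!/bin/sh
    # .git/hooks/pre-push
    make pyflakes pep8 || {
        echo "static checks failed; push aborted" >&2
        exit 1
    }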

 
  I did both with: intelliJ/PyCharm and vim, almost 100% sure PyDev allows 
  this.
  
  either allows ease of running tests - in 1st case upon git commit, in the
  latter - via a button/shortcut in the devtool.
  I can help with setting up either to an early adopter.
  Then give it a week or two to get some feedback later how well it goes.
  
  Besides, we're also trying to speedup jenkins response all the time

I would not mind BLOCKING merges until the jenkins hook has responded -
assuming that I (as a branch maintainer) can remove the jenkins reviewer
from gerrit. There could be emergencies that cannot wait for the
response. And of course, as a maintainer, I must be able to override the
decision of the robot (by removing it from the reviewer list).

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Fedora 22 build broken - missing gluster repo

2015-05-05 Thread Dan Kenigsberg
On Tue, May 05, 2015 at 12:16:12PM -0400, Nir Soffer wrote:
 Hi infra,
 
 We have a problem with fc22_created job [1]
 
 It fails with:
 
   16:17:34 Error: nothing provides glusterfs = 3.6.999 needed by 
 vdsm-4.17.0-762.git6e3659a.fc22.x86_64.
 
 This means it does not have the gluster development repository [2], providing 
 gluster 3.7.
 
 Please disable this job until the job is configured properly.
 
 Thanks,
 Nir
 
 [1] 
 http://jenkins.ovirt.org/job/vdsm_master_install-rpm-sanity-fc22_created/127/console
 [2] http://www.ovirt.org/Vdsm_Developers#Installing_the_required_packages

That job should have been disabled per my request to support DNF [3] as yum is
missing from f22.

[3] https://fedorahosted.org/ovirt/ticket/317

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: your patch https://gerrit.ovirt.org/#/c/40346/ broke oVirt vdsm jobs

2015-05-05 Thread Dan Kenigsberg
On Tue, May 05, 2015 at 05:18:00PM +0200, David Caro wrote:
 On 05/05, Dan Kenigsberg wrote:
  On Tue, May 05, 2015 at 10:11:09AM +0200, David Caro wrote:
   On 05/05, Max Kovgan wrote:
hi, Dan.
makes sense to me to focus on 2 use cases:
 - pre-commit hook running everything jenkins is running - locally
   Maybe pre-push instead, that will leverage a bit the local work
   - pros:
 - nearly identical checks/tests jenkins would running
 - doesn't care about IDE/editor
   - cons:
 - slower
 - can be annoying to commit (locally) broken code for later 
squashing
  
  If something is too anoying to be run (such as blocking every patch for
  3 minute unit tests, when the poor developer only wants to post his
  patch and go home) - developer would find a way to skip it.
  

 - editor/IDE marriage with tests/checks running
   - pros:
 - dev has full control over what runs in checks/tests
 - allows to commit dirty commit
 - shorter == quicker than the quickest jenkins option
   - cons:
 - depends on IDE/editor support
 - less checks/tests = higher risk
  
  +1. It boils down to developer and maintainer prudence.
  I have such a plugin in my ViM for static testing; Ido (and everyone
  else) should have one, too. I'm less sure about auto-running `make
  check` at rundom points in time.
  
   
I did both with: intelliJ/PyCharm and vim, almost 100% sure PyDev 
allows this.

either allows ease of running tests - in 1st case upon git commit, in 
the
latter - via a button/shortcut in the devtool.
I can help with setting up either to an early adopter.
Then give it a week or two to get some feedback later how well it goes.

Besides, we're also trying to speedup jenkins response all the time
  
  I would not mind to BLOCK merging before jenkins hook has responded -
  assuming that I (as a branch maintainer) can remove the jenkins reviewer
  from gerrit. There could be emenrgencies that cannot wait for the
  response. And of course, as a maintainer, I must be able to override the
  decision of the robot (by removing it from the reviewer list).
 
 I'm actually working on adding a new flag 'Continuous Integration' that can
 only be set by maintainers and the ci bot, and that requires +1 to merge 
 (where
 -1 does not block).
 
 Does that make sense to you? (that way you can't rebase and merge before ci
 runs and -1, it's easier to handle permissions, it's easier to spot on the ui,
 is clearer it's purpose and does not overload another flag).

Yes, it does!
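
(For reference, such a label usually lives in the project's project.config
on refs/meta/config. An illustration of the semantics described above, not
the actual oVirt configuration:

    [label "Continuous-Integration"]
        value = -1 Failed
        value =  0 No score
        value = +1 Passed
        function = MaxNoBlock
        defaultValue = 0
)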
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: your patch https://gerrit.ovirt.org/#/c/40346/ broke oVirt vdsm jobs

2015-04-29 Thread Dan Kenigsberg
On Wed, Apr 29, 2015 at 11:16:37AM -0400, Barak Korren wrote:
 Patch does not pass pyflakes:
 
 ./tests/samplingTests.py:30: 'libvirtconnection' imported but unused
 ./tests/samplingTests.py:36: 'MonkeyPatch' imported but unused
 make: *** [pyflakes] Error 1
 
 You could clearly see that the tests did not pass for patchset #6
 Please do not merge patches with failing tests!

Barak, thanks for reporting this mistake of ours.
https://gerrit.ovirt.org/#/c/40408/ would fix it momentarily.

I believe that it stems from two reasons:
- Ido did not run `make check` or `make pyflakes` before ticking
  verified on the patch
- I failed to wait for the jenkins job to finish.

To make sure that this does not repeat, I should avoid merging
freshly-posted patches. Ido should take better care of pep8 and
pyflakes. I have vim plugins that help me avoid such mistakes;
I hear that http://www.vim.org/scripts/script.php?script_id=4440 is
better than what I actually have.

Regards,
Dan.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Maintainer rights on vdsm - ovirt-3.5-gluster

2015-04-29 Thread Dan Kenigsberg
On Wed, Apr 29, 2015 at 12:04:52PM +0200, David Caro wrote:
 On 04/29, Balamurugan Arumugam wrote:
  
  
  - Original Message -
   From: David Caro dc...@redhat.com
   To: Sahina Bose sab...@redhat.com
   Cc: Dan Kenigsberg dan...@redhat.com, infra@ovirt.org, Balamurugan 
   Arumugam barum...@redhat.com,
   de...@ovirt.org
   Sent: Wednesday, April 29, 2015 1:38:03 PM
   Subject: Re: Maintainer rights on vdsm - ovirt-3.5-gluster
   
   You want also push rights? It's not the same submit, push, create 
   reference,
   verify, code review.
   
   All those are different rights. I've added the submit, code-review and 
   verify
   rights. Let me know if you want also the rest.
   
   
   Also note that pushing and submitting is not the same, pushing will skip
   gerrit
   and any gerrit checks, so it's usually something to avoid. While 
   submitting
   is
   done through gerrit, what allows gerrit to check validity of the code to 
   be
   merged (gerrit hooks, updating bugzillas, jenkins jobs, signed-by flabs).
   
  
  The idea behind push right is that we would do periodical rebase of 
  ovirt-3.5-gluster branch with ovirt-3.5 branch.  Is this recommended way to 
  do that?
  
 
 Which kind of rebase? Does it require using --force?

Yes. The expected use case is as such:

- Bala merges patches onto his ovirt-3.5-gluster branch via gerrit.
- Important patches are merged onto ovirt-3.5 by Yaniv.
- Bala wants to have them in ovirt-3.5-gluster
- Bala rebases ovirt-3.5-gluster on top of ovirt-3.5 on his host, and
  pushes the new hash as the new tip of ovirt-3.5-gluster.

Please let him do that!
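
In git terms the periodic rebase boils down to roughly this, using the
branch names above -- the final force-push is exactly why the push right
is needed:

    git fetch origin
    git checkout ovirt-3.5-gluster
    git rebase origin/ovirt-3.5
    git push --force origin ovirt-3.5-gluster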

Dan.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: jenkins.ovirt.org and google accounts (OpenID)

2015-04-26 Thread Dan Kenigsberg
On Sun, Apr 26, 2015 at 02:31:58PM -0400, Eyal Edri wrote:
 
 
 - Original Message -
  From: Dan Kenigsberg dan...@redhat.com
  To: David Caro dcaro...@redhat.com
  Cc: infra infra@ovirt.org
  Sent: Sunday, April 26, 2015 9:25:36 PM
  Subject: Re: jenkins.ovirt.org and google accounts (OpenID)
  
  On Fri, Apr 24, 2015 at 01:29:26PM +0200, David Caro wrote:
   
   I'm still working on testing the unofficial plugin that adds the oauth
   support,
   you can pass me your fedora account user (if you have one) or log in with
   another openid provider and pass me the new account id 
   (settings-account),
   but
   don't use it until merged.
   
   
   
   On 04/24, Alon Bar-Lev wrote:
Hi,

Google stopped providing support for OpenID, I guess all users including
me are locked out from using gerrit now.

Maybe there was an announcement, not sure I saw one.

There is a fix for gerrit-2.10.2[1], not sure if sufficient.
  
  The same problem affects our jenkis controller, but there - alas - I
  don't have a backup identity to use. For some reason, I also fail to log
  in with my username/password (which I used some time in the past).
  Can you add my danken.id.fedoraproject.org to my identity there?
  
  Can Jenkins be upgraded to support the new google protocol?
 
 it requires installing and testing a new google login plugin, 
 and that requires changing auth for all jenkins users, not just one.
 
 i'll reset your local password for now and send you in private,
 you can try associating with another openid provider in the mean time.

Thanks, Eyal.

I see that http://danken.id.fedoraproject.org/ is already associated
with my account. As occurred earlier, attempting to use it ends up with a
lovely traceback http://fpaste.org/215648/43009139/

Regards,
Dan.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: jenkins.ovirt.org and google accounts (OpenID)

2015-04-26 Thread Dan Kenigsberg
On Fri, Apr 24, 2015 at 01:29:26PM +0200, David Caro wrote:
 
 I'm still working on testing the unofficial plugin that adds the oauth 
 support,
 you can pass me your fedora account user (if you have one) or log in with
 another openid provider and pass me the new account id (settings-account), 
 but
 don't use it until merged.
 
 
 
 On 04/24, Alon Bar-Lev wrote:
  Hi,
  
  Google stopped providing support for OpenID, I guess all users including me 
  are locked out from using gerrit now.
  
  Maybe there was an announcement, not sure I saw one.
  
  There is a fix for gerrit-2.10.2[1], not sure if sufficient.

The same problem affects our jenkins controller, but there - alas - I
don't have a backup identity to use. For some reason, I also fail to log
in with my username/password (which I used some time in the past).
Can you add my danken.id.fedoraproject.org to my identity there?

Can Jenkins be upgraded to support the new google protocol?

Regards,
Dan.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: access to gerrit

2015-04-22 Thread Dan Kenigsberg
On Wed, Apr 22, 2015 at 03:57:08AM -0400, Yeela Kaplan wrote:
 Hi,
 Starting today google account login to gerrit is not available anymore.
 I was using my personal google account to login up until now.
 login with redhat account is available but it's a different user..
 
 Is this the right address to ask this?
 If yes, what should I do?

Please supply another OpenID identity (such as fedora's) on this list,
so that dcaro can connect it to your account.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: replace vdsm_master_unit-tests_created with its mock-based version

2015-04-22 Thread Dan Kenigsberg
On Wed, Apr 22, 2015 at 03:04:58AM -0400, Max Kovgan wrote:
 Hi, Dan.
 Have the patches been already merged?

I've merged them now.
Please swap the jobs.

Regards,
Dan.
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


replace vdsm_master_unit-tests_created with its mock-based version

2015-04-21 Thread Dan Kenigsberg
With three ugly masking patches merged and proposed in
https://gerrit.ovirt.org/#/q/status:open+project:vdsm+branch:master+topic:unbreak_test_ci
the new vdsm_master_unit-tests_created_staging is passing with success.

Please take it out of staging, disable the puppet-based
vdsm_master_unit-tests_created, and make it mark a fat -1 on any patch
breaking it.

We[*] shall work to unbreak the skipped patches in async.

Regards,
Dan.

[*] mostly the CCed
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra

