Re: [ovirt-devel] [vdsm][maintainership] proposal for a new stable branch policy

2018-03-21 Thread Milan Zamazal
Francesco Romani  writes:

> Recently Milan, Petr and I discussed the state of ovirt-4.2,
> considering that release 4.2.2 is still pending and this prevents
> merging of patches in the sub-branch ovirt-4.2.
>
> We agreed we could improve the handling of the stable branch(es), in
> order to make the process smoother and more predictable.
>
>
> Currently, we avoid creating Z-branches (e.g. ovirt-4.2.2, ovirt-4.2.3...)
> as much as we can, to avoid the hassle of double-backporting patches to
> the stable branch.
>
> However, if a release hits an unexpected delay, this policy causes a
> different hassle: the Y-branch (ovirt-4.2, ovirt-4.3) is effectively
> locked, so patches already queued and ready for the next releases can't
> be merged and must wait.
>
>
> The new proposed policy is the following:
>
> - we will keep working exactly as now until we hit a certain RC version.
> We chose RC3, a rule of thumb based on experience.

I think the idea actually was to choose the RC after which patch
acceptance is restricted (e.g. blockers only).  That would be
consistent with the problem being solved, i.e. not blocking patch
merging and clearly distinguishing between patches to be merged to X.Y
only and patches to be merged to X.Y.Z.  It may typically be RC3, but I
wouldn't hard-bind the branching to a particular version.

> - if RC3 is final, everyone's happy and things resume as usual
>
> - if RC3 is NOT final, we will branch out at RC3
>
> -- from that moment on, patches for next version could be accepted on
> the Y-branch
>
> -- stabilization of the late Z version will continue on the Z-branch
>
> -- patches will be backported twice
>
>
> Example using made up numbers
>
> - We just released ovirt 4.3.1
>
> - We are working on the ovirt-4.3 branch
>
> - The last tag is v4.30.10, from ovirt-4.3 branch
>
> - We accept patches for ovirt 4.3.2 on the ovirt-4.3 branch
>
> - We keep collecting patches, until we tag v4.30.11 (oVirt 4.3.2 RC 1).
> Tag is made from ovirt-4.3 branch.
>
> - Same for tags 4.30.12 - oVirt 4.3.2 RC 2 and 4.30.13 - oVirt 4.3.2 RC
> 3. Both tags are made from ovirt-4.3 branch.
>
> - Damn, RC3 is not final. We branch out ovirt-4.3.2 from branch
> ovirt-4.3, from the same commit pointed to by tag 4.30.13
>
> - Next tags (4.30.13.1) for ovirt 4.3.2 will be taken from the
> ovirt-4.3.2 branch
>
>
> I believe this approach will make it predictable for everyone if and
> when the branch will be made, and thus when and where patches can be
> merged.
>
>
> The only drawback I can see - and that I realized while writing the
> example - is the version number which can be awkward:
>
>
> v4.30.11 -> 4.3.2 RC1
>
> v4.30.12 -> 4.3.2 RC2
>
> v4.30.13 -> 4.3.2 RC3
>
> v4.30.13.1 -> 4.3.2 RC4 ?!?!
>
> v4.30.13.5 -> 4.3.2 RC5 ?!?!

Our current versions are not tied to RC versions (e.g. there can be more
than one tag per RC), so I see no special problem with using
v4.30.13.1 for RC4 and v4.30.13.2 for RC5.

> Perhaps we should move to four-digit versions? So we could have
>
>
> v4.30.11.0 -> 4.3.2 RC1
>
> v4.30.11.1 -> 4.3.2 RC2
>
> v4.30.11.2 -> 4.3.2 RC3
>
> v4.30.11.3 -> 4.3.2 RC4
>
> v4.30.11.4 -> 4.3.2 RC5
>
>
> I don't see any real drawback in using 4-digit versions by default,
> besides a minor increase in complexity, which is balanced by more
> predictable and consistent versions. Plus, we already had 4-digit
> versions in Vdsm, so packaging should work just fine.
>
> Please share your thoughts,

Yedidyah Bar David  writes:

> So the main question to ask, if you do want to keep the above scheme,
> is: do you expect, in the above example, to have to tag/build vdsm for
> 4.3.3 before 4.3.2 is released?

We probably want to make a new 4.3 tag as soon as the first new patch is
merged to the 4.3 branch after 4.3.Z is branched out, to distinguish any
builds (private, snapshots, …) made from it.
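For illustration, this is what `git describe`-based snapshot versioning gives you once such a tag exists; a throwaway-repo sketch (the tag name v4.30.14 is hypothetical, not from the thread):

```shell
# Any build made after the tag gets a version identifying it as
# "N commits past the tag", so snapshot builds are distinguishable.
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b ovirt-4.3
git -c user.name=dev -c user.email=dev@example.org \
    commit -q --allow-empty -m "base"
git tag v4.30.14                  # hypothetical first tag after branching
git -c user.name=dev -c user.email=dev@example.org \
    commit -q --allow-empty -m "first 4.3 patch after branching"
git describe --tags               # prints v4.30.14-1-g<abbrev-hash>
```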
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] UI is not coming up after pulling latest code from master

2018-03-21 Thread Benny Zlotnik
Had this issue too (FF & Chrome); I recompiled, ran engine-setup again,
and that resolved it.

On Wed, Mar 21, 2018 at 3:01 PM, Greg Sheremeta  wrote:

> That message is probably not related, and is generally harmless. Compile
> with  DEV_EXTRA_BUILD_FLAGS_GWT_DEFAULTS="-Dgwt.userAgent=safari,gecko1_8"
> to stop that message.
>
> Does it work in firefox? Please share your ui.log from the server.
>
> Greg
>
> On Wed, Mar 21, 2018 at 8:55 AM, Gobinda Das  wrote:
>
>> Hi,
>> UI keeps on loading and I can see below error in browser.
>>
>> webadmin-0.js:422 Wed Mar 21 14:59:47 GMT+530 2018
>> com.google.gwt.logging.client.LogConfiguration
>> SEVERE: Possible problem with your *.gwt.xml module file.
>> The compile time user.agent value (gecko1_8) does not match the runtime
>> user.agent value (safari).
>> Expect more errors.
>> com.google.gwt.useragent.client.UserAgentAsserter$UserAgentAssertionError:
>> Possible problem with your *.gwt.xml module file.
>> The compile time user.agent value (gecko1_8) does not match the runtime
>> user.agent value (safari).
>> Expect more errors.
>> at Unknown.L(webadmin-0.js)
>> at Unknown.oj(webadmin-0.js)
>> at Unknown.new pj(webadmin-0.js)
>> at Unknown.nj(webadmin-0.js)
>> at Unknown.nb(webadmin-0.js)
>> at Unknown.qb(webadmin-0.js)
>> at Unknown.eval(webadmin-0.js)
>>
>> --
>> Thanks,
>> Gobinda
>>
>
>
>
> --
>
> GREG SHEREMETA
>
> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>
> Red Hat NA
>
> 
>
> gsher...@redhat.com  IRC: gshereme
> 
>

Re: [ovirt-devel] UI is not coming up after pulling latest code from master

2018-03-21 Thread Greg Sheremeta
That message is probably not related, and is generally harmless. Compile
with DEV_EXTRA_BUILD_FLAGS_GWT_DEFAULTS="-Dgwt.userAgent=safari,gecko1_8"
to stop that message.
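For reference, a hedged sketch of one way to pass that flag, assuming the usual ovirt-engine developer build; the PREFIX path is illustrative:

```shell
make clean install-dev PREFIX="$HOME/ovirt-engine" \
    DEV_EXTRA_BUILD_FLAGS_GWT_DEFAULTS="-Dgwt.userAgent=safari,gecko1_8"
```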

Does it work in Firefox? Please share your ui.log from the server.

Greg

On Wed, Mar 21, 2018 at 8:55 AM, Gobinda Das  wrote:

> Hi,
> UI keeps on loading and I can see below error in browser.
>
> webadmin-0.js:422 Wed Mar 21 14:59:47 GMT+530 2018
> com.google.gwt.logging.client.LogConfiguration
> SEVERE: Possible problem with your *.gwt.xml module file.
> The compile time user.agent value (gecko1_8) does not match the runtime
> user.agent value (safari).
> Expect more errors.
> com.google.gwt.useragent.client.UserAgentAsserter$UserAgentAssertionError:
> Possible problem with your *.gwt.xml module file.
> The compile time user.agent value (gecko1_8) does not match the runtime
> user.agent value (safari).
> Expect more errors.
> at Unknown.L(webadmin-0.js)
> at Unknown.oj(webadmin-0.js)
> at Unknown.new pj(webadmin-0.js)
> at Unknown.nj(webadmin-0.js)
> at Unknown.nb(webadmin-0.js)
> at Unknown.qb(webadmin-0.js)
> at Unknown.eval(webadmin-0.js)
>
> --
> Thanks,
> Gobinda
>



-- 

GREG SHEREMETA

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX

Red Hat NA



gsher...@redhat.com  IRC: gshereme


[ovirt-devel] UI is not coming up after pulling latest code from master

2018-03-21 Thread Gobinda Das
Hi,
UI keeps loading and I can see the error below in the browser.

webadmin-0.js:422 Wed Mar 21 14:59:47 GMT+530 2018
com.google.gwt.logging.client.LogConfiguration
SEVERE: Possible problem with your *.gwt.xml module file.
The compile time user.agent value (gecko1_8) does not match the runtime
user.agent value (safari).
Expect more errors.
com.google.gwt.useragent.client.UserAgentAsserter$UserAgentAssertionError:
Possible problem with your *.gwt.xml module file.
The compile time user.agent value (gecko1_8) does not match the runtime
user.agent value (safari).
Expect more errors.
at Unknown.L(webadmin-0.js)
at Unknown.oj(webadmin-0.js)
at Unknown.new pj(webadmin-0.js)
at Unknown.nj(webadmin-0.js)
at Unknown.nb(webadmin-0.js)
at Unknown.qb(webadmin-0.js)
at Unknown.eval(webadmin-0.js)

-- 
Thanks,
Gobinda

Re: [ovirt-devel] [vdsm] network test failure

2018-03-21 Thread Edward Haas
On Wed, Mar 21, 2018 at 12:15 PM, Petr Horacek  wrote:

> I tried to retrigger the build several times; it was always executed on
> an el7 machine. Maybe it picks fc26 only when all other machines are taken?
>
> Shouldn't the "Permission denied" problem be detected in
> link_bond_test.py:setup_module()? It runs "check_sysfs_bond_permission".
>

I think the error comes from attempting to set a value that is not
accepted by the attribute, not from missing permission to access
sysfs.


> 2018-03-20 18:12 GMT+01:00 Edward Haas :
>
>> The tests ran on an fc26 slave and our bond option default map is in sync
>> with the el7 kernel.
>> It looks like we MUST generate a bond default map on every run.
>>
>> I'm a bit surprised it never happened until now; perhaps I'm not
>> interpreting the test helper code correctly? Petr?
>> Assuming I'm correct here, I'll try to post a fix.
>>
>> Thanks,
>> Edy.
>>
>> On Tue, Mar 20, 2018 at 12:14 PM, Dan Kenigsberg 
>> wrote:
>>
>>> +Petr
>>>
>>> On Tue, Mar 20, 2018 at 11:07 AM, Francesco Romani 
>>> wrote:
>>> > Hi all,
>>> >
>>> >
>>> > we had a bogus failure on CI again, some network test failed and it
>>> > seems totally unrelated to the patch being tested:
>>> >
>>> >
>>> > http://jenkins.ovirt.org/job/vdsm_master_check-patch-el7-x86
>>> _64/22410/consoleFull
>>> >
>>> >
>>> > could someone please have a look?
>>> >
>>> >
>>> > Bests,
>>> >
>>> > --
>>> > Francesco Romani
>>> > Senior SW Eng., Virtualization R&D
>>> > Red Hat
>>> > IRC: fromani github: @fromanirh
>>> >
>>>
>>
>>
>

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master (ovirt-engine) ] [ 19-03-2018 ] [ 002_bootstrap.verify_notifier ]

2018-03-21 Thread Dafna Ron
The patch was reported to cause 3 different issues, but I cannot say for
sure that the notifier error was actually one of them.
I can say that all issues on master and 4.2 were cleared once the fix
from Shmuel was merged and tested in CQ.



On Wed, Mar 21, 2018 at 10:28 AM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:

>
>
> On 20 Mar 2018, at 10:33, Yaniv Kaul  wrote:
>
>
>
> On Tue, Mar 20, 2018 at 9:49 AM, Michal Skrivanek <
> michal.skriva...@redhat.com> wrote:
>
>> I’m not sure what that test is actually testing. If it depends on the
>> previous host action, which fails but is not verified, it still may be
>> relevant to Shmuel’s patch.
>> Adding the author of the test and the notifier owner.
>>
>
> It is checking that the notifier works - it sends SNMP notifications on
> our events. I happened to pick an event which is VDC_STOP - which happens
> when the engine is restarted - which happens earlier, when we configure it.
>
>
> not sure if that’s reliable then. Doesn’t look related to that patch at
> all. Dafna, does the same error reproduce every time after that exact
> patch?
>
> Y.
>
>
>>
>> On 19 Mar 2018, at 13:06, Dafna Ron  wrote:
>>
>> Hi,
>>
>> We had a failure in test 002_bootstrap.verify_notifier.
>> I can't see anything wrong with the notifier and I don't think it should
>> be related to the change that was reported.
>>
>> the test itself is looking for vdc_stop in the messages log, which
>> indeed I do not see, but I am not sure what the cause is or how the
>> reported change is related to the failure.
>>
>> Can you please take a look?
>>
>>
>>
>> Link and headline of suspected patches:
>>
>> core: USB in osinfo configuration depends on chipset -
>> https://gerrit.ovirt.org/#/c/88777/
>>
>> Link to Job:
>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/6429/
>>
>> Link to all logs:
>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/6429/artifacts
>>
>> (Relevant) error snippet from the log:
>> this is the error from the api:
>>
>>
>> Error Message
>>
>> Failed grep for VDC_STOP with code 1. Output:
>>  >> begin captured logging << 
>> lago.ssh: DEBUG: start task:f1231b27-f796-406c-8618-17b0868725bc:Get ssh 
>> client for lago-basic-suite-master-engine:
>> lago.ssh: DEBUG: end task:f1231b27-f796-406c-8618-17b0868725bc:Get ssh 
>> client for lago-basic-suite-master-engine:
>> lago.ssh: DEBUG: Running 1cce7c0c on lago-basic-suite-master-engine: grep 
>> VDC_STOP /var/log/messages
>> lago.ssh: DEBUG: Command 1cce7c0c on lago-basic-suite-master-engine returned 
>> with 1
>> - >> end captured logging << -
>>
>> Stacktrace
>>
>>   File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
>> testMethod()
>>   File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
>> self.test(*self.arg)
>>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 129, in 
>> wrapped_test
>> test()
>>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 59, in 
>> wrapper
>> return func(get_test_prefix(), *args, **kwargs)
>>   File 
>> "/home/jenkins/workspace/ovirt-master_change-queue-tester/ovirt-system-tests/basic-suite-master/test-scenarios/002_bootstrap.py",
>>  line 1456, in verify_notifier
>> 'Failed grep for VDC_STOP with code {0}. Output: 
>> {1}'.format(result.code, result.out)
>>   File "/usr/lib/python2.7/site-packages/nose/tools/trivial.py", line 29, in 
>> eq_
>> raise AssertionError(msg or "%r != %r" % (a, b))
>> 'Failed grep for VDC_STOP with code 1. Output: \n >> 
>> begin captured logging << \nlago.ssh: DEBUG: start 
>> task:f1231b27-f796-406c-8618-17b0868725bc:Get ssh client for 
>> lago-basic-suite-master-engine:\nlago.ssh: DEBUG: end 
>> task:f1231b27-f796-406c-8618-17b0868725bc:Get ssh client for 
>> lago-basic-suite-master-engine:\nlago.ssh: DEBUG: Running 1cce7c0c on 
>> lago-basic-suite-master-engine: grep VDC_STOP /var/log/messages\nlago.ssh: 
>> DEBUG: Command 1cce7c0c on lago-basic-suite-master-engine returned with 
>> 1\n- >> end captured logging << -'
>>
>>
>>
>>
>
>

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master (ovirt-engine) ] [ 19-03-2018 ] [ 002_bootstrap.verify_notifier ]

2018-03-21 Thread Michal Skrivanek


> On 20 Mar 2018, at 10:33, Yaniv Kaul  wrote:
> 
> 
> 
> On Tue, Mar 20, 2018 at 9:49 AM, Michal Skrivanek 
> > wrote:
> I’m not sure what that test is actually testing. If it depends on the
> previous host action, which fails but is not verified, it still may be
> relevant to Shmuel’s patch.
> Adding the author of the test and the notifier owner.
> 
> It is checking that the notifier works - it sends SNMP notifications on
> our events. I happened to pick an event which is VDC_STOP - which happens
> when the engine is restarted - which happens earlier, when we configure it.

not sure if that’s reliable then. Doesn’t look related to that patch at all.
Dafna, does the same error reproduce every time after that exact patch?
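For context, the OST assertion just greps the engine's /var/log/messages, so the reported "code 1" is grep's exit status; a quick sketch against a throwaway file (not the real log):

```shell
# grep exits 0 on a match and 1 on no match - the "code 1" in the
# failure report means no VDC_STOP line was present in the log.
log=$(mktemp)
printf 'engine running\n' > "$log"
grep -q VDC_STOP "$log" || echo "no VDC_STOP yet (grep exit $?)"

printf 'VDC_STOP: engine stopped\n' >> "$log"
grep -q VDC_STOP "$log" && echo "VDC_STOP found (grep exit $?)"
```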

> Y.
>  
> 
>> On 19 Mar 2018, at 13:06, Dafna Ron > > wrote:
>> 
>> Hi, 
>> 
>> We had a failure in test 002_bootstrap.verify_notifier.
>> I can't see anything wrong with the notifier and I don't think it should be 
>> related to the change that was reported. 
>> 
>> the test itself is looking for vdc_stop in the messages log, which
>> indeed I do not see, but I am not sure what the cause is or how the
>> reported change is related to the failure.
>> 
>> Can you please take a look? 
>> 
>> 
>> 
>> Link and headline of suspected patches: 
>> core: USB in osinfo configuration depends on chipset - 
>> https://gerrit.ovirt.org/#/c/88777/ 
>> 
>> Link to Job:
>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/6429/ 
>> 
>> 
>> Link to all logs:
>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/6429/artifacts 
>> 
>> 
>> (Relevant) error snippet from the log: 
>> 
>> 
>> this is the error from the api: 
>> 
>> 
>> Error Message
>> 
>> Failed grep for VDC_STOP with code 1. Output: 
>>  >> begin captured logging << 
>> lago.ssh: DEBUG: start task:f1231b27-f796-406c-8618-17b0868725bc:Get ssh 
>> client for lago-basic-suite-master-engine:
>> lago.ssh: DEBUG: end task:f1231b27-f796-406c-8618-17b0868725bc:Get ssh 
>> client for lago-basic-suite-master-engine:
>> lago.ssh: DEBUG: Running 1cce7c0c on lago-basic-suite-master-engine: grep 
>> VDC_STOP /var/log/messages
>> lago.ssh: DEBUG: Command 1cce7c0c on lago-basic-suite-master-engine returned 
>> with 1
>> - >> end captured logging << -
>> Stacktrace
>> 
>>   File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
>> testMethod()
>>   File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
>> self.test(*self.arg)
>>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 129, in 
>> wrapped_test
>> test()
>>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 59, in 
>> wrapper
>> return func(get_test_prefix(), *args, **kwargs)
>>   File 
>> "/home/jenkins/workspace/ovirt-master_change-queue-tester/ovirt-system-tests/basic-suite-master/test-scenarios/002_bootstrap.py",
>>  line 1456, in verify_notifier
>> 'Failed grep for VDC_STOP with code {0}. Output: 
>> {1}'.format(result.code, result.out)
>>   File "/usr/lib/python2.7/site-packages/nose/tools/trivial.py", line 29, in 
>> eq_
>> raise AssertionError(msg or "%r != %r" % (a, b))
>> 'Failed grep for VDC_STOP with code 1. Output: \n >> 
>> begin captured logging << \nlago.ssh: DEBUG: start 
>> task:f1231b27-f796-406c-8618-17b0868725bc:Get ssh client for 
>> lago-basic-suite-master-engine:\nlago.ssh: DEBUG: end 
>> task:f1231b27-f796-406c-8618-17b0868725bc:Get ssh client for 
>> lago-basic-suite-master-engine:\nlago.ssh: DEBUG: Running 1cce7c0c on 
>> lago-basic-suite-master-engine: grep VDC_STOP /var/log/messages\nlago.ssh: 
>> DEBUG: Command 1cce7c0c on lago-basic-suite-master-engine returned with 
>> 1\n- >> end captured logging << -'
>> 
>> 
>> 
> 


Re: [ovirt-devel] [vdsm][maintainership] proposal for a new stable branch policy

2018-03-21 Thread Yedidyah Bar David
On Wed, Mar 21, 2018 at 11:57 AM, Francesco Romani  wrote:
> Hi all,
>
>
> Recently Milan, Petr and I discussed the state of ovirt-4.2,
> considering that release 4.2.2 is still pending and this prevents
> merging of patches in the sub-branch ovirt-4.2.
>
> We agreed we could improve the handling of the stable branch(es), in
> order to make the process smoother and more predictable.
>
>
> Currently, we avoid creating Z-branches (e.g. ovirt-4.2.2, ovirt-4.2.3...)
> as much as we can, to avoid the hassle of double-backporting patches to
> the stable branch.
>
> However, if a release hits an unexpected delay, this policy causes a
> different hassle: the Y-branch (ovirt-4.2, ovirt-4.3) is effectively
> locked, so patches already queued and ready for the next releases can't
> be merged and must wait.
>
>
> The new proposed policy is the following:
>
> - we will keep working exactly as now until we hit a certain RC version.
> We chose RC3, a rule of thumb based on experience.
>
> - if RC3 is final, everyone's happy and things resume as usual
>
> - if RC3 is NOT final, we will branch out at RC3
>
> -- from that moment on, patches for next version could be accepted on
> the Y-branch
>
> -- stabilization of the late Z version will continue on the Z-branch
>
> -- patches will be backported twice
>
>
> Example using made up numbers
>
> - We just released ovirt 4.3.1
>
> - We are working on the ovirt-4.3 branch
>
> - The last tag is v4.30.10, from ovirt-4.3 branch
>
> - We accept patches for ovirt 4.3.2 on the ovirt-4.3 branch
>
> - We keep collecting patches, until we tag v4.30.11 (oVirt 4.3.2 RC 1).
> Tag is made from ovirt-4.3 branch.
>
> - Same for tags 4.30.12 - oVirt 4.3.2 RC 2 and 4.30.13 - oVirt 4.3.2 RC
> 3. Both tags are made from ovirt-4.3 branch.
>
> - Damn, RC3 is not final. We branch out ovirt-4.3.2 from branch
> ovirt-4.3, from the same commit pointed to by tag 4.30.13
>
> - Next tags (4.30.13.1) for ovirt 4.3.2 will be taken from the
> ovirt-4.3.2 branch
>
>
> I believe this approach will make it predictable for everyone if and
> when the branch will be made, and thus when and where patches can be
> merged.
>
>
> The only drawback I can see - and that I realized while writing the
> example - is the version number which can be awkward:
>
>
> v4.30.11 -> 4.3.2 RC1
>
> v4.30.12 -> 4.3.2 RC2
>
> v4.30.13 -> 4.3.2 RC3
>
> v4.30.13.1 -> 4.3.2 RC4 ?!?!
>
> v4.30.13.5 -> 4.3.2 RC5 ?!?!

In principle, nothing forces you to align vdsm versions exactly with
oVirt (rc) releases. In otopi, as a much simpler example (and there
are many here), I do mostly what I want, as needed. I usually bump
versions when I have enough bugs fixed for a next project release,
but _also_ if I want to introduce some change that some other project
needs - so that I can bump the 'Requires: otopi >= X.Y.Z' line of the
other project.

So the main question to ask, if you do want to keep the above scheme,
is: do you expect, in the above example, to have to tag/build vdsm for
4.3.3 before 4.3.2 is released? If not, then you can have:

v4.30.11 -> 4.3.2 RC1
v4.30.12 -> 4.3.2 RC2
v4.30.13 -> 4.3.2 RC3
v4.30.14 -> 4.3.2 RC4
v4.30.15 -> 4.3.2 RC5

And the only problem will be what to do in the -4.3 branch when you
branch -4.3.2 at RC3. I can't think of a good answer for this...

It's probably not a good idea to keep it at 4.30.13 (or .12), because
you want builds from it to have higher versions than those from the
-4.3.2 branch.

>
>
> Perhaps we should move to four-digit versions? So we could have
>
>
> v4.30.11.0 -> 4.3.2 RC1
>
> v4.30.11.1 -> 4.3.2 RC2
>
> v4.30.11.2 -> 4.3.2 RC3
>
> v4.30.11.3 -> 4.3.2 RC4
>
> v4.30.11.4 -> 4.3.2 RC5
>
>
> I don't see any real drawback in using 4-digit versions by default,
> besides a minor increase in complexity, which is balanced by more
> predictable and consistent versions. Plus, we already had 4-digit
> versions in Vdsm, so packaging should work just fine.

Makes sense to me. Not sure if this was discussed, but in practice
the engine already does that, more-or-less (the tags, not the branches).

One of the "prices" of adding branches is having to maintain CI for them.
Perhaps this policy should also consider what we do if/when migrating to
STDCI v2, see e.g.:

http://lists.ovirt.org/pipermail/infra/2018-March/033224.html

And it may also affect, if needed, that v2 project. Obviously it would be
great if CI could be handled automatically in the same single commit that
bumps the version. You will still have the price of having to cherry-pick,
but I can't see how you can escape that.

>
> Please share your thoughts,
>
>
> --
> Francesco Romani
> Senior SW Eng., Virtualization R&D
> Red Hat
> IRC: fromani github: @fromanirh
>



-- 
Didi


Re: [ovirt-devel] [vdsm] network test failure

2018-03-21 Thread Petr Horacek
I tried to retrigger the build several times; it was always executed on an el7
machine. Maybe it picks fc26 only when all other machines are taken?

Shouldn't the "Permission denied" problem be detected in
link_bond_test.py:setup_module()? It runs "check_sysfs_bond_permission".

2018-03-20 18:12 GMT+01:00 Edward Haas :

> The tests ran on an fc26 slave and our bond option default map is in sync
> with the el7 kernel.
> It looks like we MUST generate a bond default map on every run.
>
> I'm a bit surprised it never happened until now; perhaps I'm not
> interpreting the test helper code correctly? Petr?
> Assuming I'm correct here, I'll try to post a fix.
>
> Thanks,
> Edy.
>
> On Tue, Mar 20, 2018 at 12:14 PM, Dan Kenigsberg 
> wrote:
>
>> +Petr
>>
>> On Tue, Mar 20, 2018 at 11:07 AM, Francesco Romani 
>> wrote:
>> > Hi all,
>> >
>> >
>> > we had a bogus failure on CI again, some network test failed and it
>> > seems totally unrelated to the patch being tested:
>> >
>> >
>> > http://jenkins.ovirt.org/job/vdsm_master_check-patch-el7-x86
>> _64/22410/consoleFull
>> >
>> >
>> > could someone please have a look?
>> >
>> >
>> > Bests,
>> >
>> > --
>> > Francesco Romani
>> > Senior SW Eng., Virtualization R&D
>> > Red Hat
>> > IRC: fromani github: @fromanirh
>> >
>>
>
>

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master (ovirt-engine) ] [ 19-03-2018 ] [ 002_bootstrap.add_instance_type ]

2018-03-21 Thread Dafna Ron
This issue was resolved on 4.2 and master.
Thank you for resolving it quickly.

On Mon, Mar 19, 2018 at 6:46 PM, Shmuel Melamud  wrote:

> Here is the fix: https://gerrit.ovirt.org/#/c/89187/
>
> On Mon, Mar 19, 2018 at 4:12 PM, Dafna Ron  wrote:
> > thank you for the fast response. once you have the fix, can you please
> > send it to us?
> >
> >
> >
> > On Mon, Mar 19, 2018 at 1:25 PM, Shmuel Melamud 
> wrote:
> >>
> >> Hi!
> >>
> >> Forgot about instance types that don't have a cluster. Fixing it now.
> >>
> >> Shmuel
> >>
> >> On Mon, Mar 19, 2018 at 2:25 PM, Dafna Ron  wrote:
> >> > Hi,
> >> >
> >> > We have a failure in test 002_bootstrap.add_instance_type.
> >> > There seem to be a NullPointerException on template type which is
> >> > causing
> >> > this failure.
> >> > The same change that was reported at the last failure is reported as
> >> > the root cause for this failure as well, but I am not sure how it
> >> > would cause this failure.
> >> > Can you please check?
> >> >
> >> >
> >> > Link and headline of suspected patches:
> >> >
> >> > reported as failed:
> >> > core: fix removal of vm-host device -
> >> > https://gerrit.ovirt.org/#/c/89145/
> >> >
> >> > reported as root cause:
> >> >
> >> > core: USB in osinfo configuration depends on chipset -
> >> > https://gerrit.ovirt.org/#/c/88777/
> >> >
> >> > Link to Job:
> >> >
> >> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/6431
> >> >
> >> > Link to all logs:
> >> >
> >> >
> >> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-
> tester/6431/artifact/exported-artifacts/basic-suit-master-
> el7/test_logs/basic-suite-master/post-002_bootstrap.py/
> >> >
> >> > (Relevant) error snippet from the log:
> >> >
> >> > 
> >> >
> >> >
> >> > 2018-03-19 04:59:01,040-04 INFO
> >> > [org.ovirt.engine.core.bll.AddVmTemplateCommand] (default task-5)
> >> > [5a890b17-51ec-4398-8d64-82cc71939e6e] Lock Acquired to object
> >> > 'EngineLock:{exclusiveLocks='[99a9dfb3-9a13-4595-a795-693493e722be=TEMPLATE,
> >> > myinstancetype=TEMPLATE_NAME]', sharedLocks='[]'}'
> >> > 2018-03-19 04:59:01,087-04 INFO
> >> > [org.ovirt.engine.core.bll.AddVmTemplateCommand] (default task-5)
> >> > [5a890b17-51ec-4398-8d64-82cc71939e6e] Running command:
> >> > AddVmTemplateCommand internal: false. Entities affected :  ID:
> >> > aaa0---0000-123456789aaa Type: SystemAction group CREATE_TEMPLATE
> >> > with role type USER
> >> > 2018-03-19 04:59:01,139-04 INFO
> >> > [org.ovirt.engine.core.bll.storage.disk.CreateAllTemplateDisksCommand]
> >> > (default task-5) [5a890b17-51ec-4398-8d64-82cc71939e6e] Running
> >> > command: CreateAllTemplateDisksCommand internal: true.
> >> > 2018-03-19 04:59:01,205-04 INFO
> >> > [org.ovirt.engine.core.utils.transaction.TransactionSupport] (default
> >> > task-5) [5a890b17-51ec-4398-8d64-82cc71939e6e] transaction rolled back
> >> > 2018-03-19 04:59:01,205-04 ERROR
> >> > [org.ovirt.engine.core.bll.AddVmTemplateCommand] (default task-5)
> >> > [5a890b17-51ec-4398-8d64-82cc71939e6e] Command
> >> > 'org.ovirt.engine.core.bll.AddVmTemplateCommand' failed: null
> >> > 2018-03-19 04:59:01,205-04 ERROR
> >> > [org.ovirt.engine.core.bll.AddVmTemplateCommand] (default task-5)
> >> > [5a890b17-51ec-4398-8d64-82cc71939e6e] Exception:
> >> > java.lang.NullPointerException
> >> > at org.ovirt.engine.core.bll.utils.EmulatedMachineUtils.getEffective(EmulatedMachineUtils.java:30)
> >> > [bll.jar:]
> >> > at org.ovirt.engine.core.bll.utils.EmulatedMachineUtils.getEffectiveChipset(EmulatedMachineUtils.java:21)
> >> > [bll.jar:]
> >> > at org.ovirt.engine.core.bll.utils.VmDeviceUtils.updateUsbSlots(VmDeviceUtils.java:744)
> >> > [bll.jar:]
> >> > at org.ovirt.engine.core.bll.utils.VmDeviceUtils.copyVmDevices(VmDeviceUtils.java:1519)
> >> > [bll.jar:]
> >> > at org.ovirt.engine.core.bll.utils.VmDeviceUtils.copyVmDevices(VmDeviceUtils.java:1565)
> >> > [bll.jar:]
> >> > at org.ovirt.engine.core.bll.AddVmTemplateCommand.lambda$executeCommand$4(AddVmTemplateCommand.java:362)
> >> > [bll.jar:]
> >> > at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInNewTransaction(TransactionSupport.java:202)
> >> > [utils.jar:]
> >> > at org.ovirt.engine.core.bll.AddVmTemplateCommand.executeCommand(AddVmTemplateCommand.java:345)
> >> > [bll.jar:]
> >> > at org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1133)
> >> > [bll.jar:]
> >> > at org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1286)
> >> > [bll.jar:]
> >> > at org.ovirt.engine.core.bll.CommandBase.runInTransaction(

[ovirt-devel] [vdsm][maintainership] proposal for a new stable branch policy

2018-03-21 Thread Francesco Romani
Hi all,


Recently Milan, Petr and I discussed the state of ovirt-4.2,
considering that release 4.2.2 is still pending and this prevents
merging of patches in the sub-branch ovirt-4.2.

We agreed we could improve the handling of the stable branch(es), in
order to make the process smoother and more predictable.


Currently, we avoid creating Z-branches (e.g. ovirt-4.2.2, ovirt-4.2.3...)
as much as we can, to avoid the hassle of double-backporting patches to
the stable branch.

However, if a release hits an unexpected delay, this policy causes a
different hassle: the Y-branch (ovirt-4.2, ovirt-4.3) is effectively
locked, so patches already queued and ready for the next releases can't
be merged and must wait.


The new proposed policy is the following:

- we will keep working exactly as now until we hit a certain RC version.
We chose RC3, a rule of thumb based on experience.

- if RC3 is final, everyone's happy and things resume as usual

- if RC3 is NOT final, we will branch out at RC3

-- from that moment on, patches for the next version can be accepted on
the Y-branch

-- stabilization of the late Z version will continue on the Z-branch

-- patches will be backported twice


Example using made-up numbers

- We just released ovirt 4.3.1

- We are working on the ovirt-4.3 branch

- The last tag is v4.30.10, from ovirt-4.3 branch

- We accept patches for ovirt 4.3.2 on the ovirt-4.3 branch

- We keep collecting patches, until we tag v4.30.11 (oVirt 4.3.2 RC 1).
Tag is made from ovirt-4.3 branch.

- Same for tags 4.30.12 - oVirt 4.3.2 RC 2 and 4.30.13 - oVirt 4.3.2 RC
3. Both tags are made from ovirt-4.3 branch.

- Damn, RC3 is not final. We branch out ovirt-4.3.2 from branch
ovirt-4.3, from the same commit pointed to by tag 4.30.13

- Next tags (4.30.13.1) for ovirt 4.3.2 will be taken from the
ovirt-4.3.2 branch
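The branch-and-tag dance above can be sketched with plain git in a throwaway repository (branch and tag names are the made-up ones from this example):

```shell
# Throwaway repo; names follow the made-up example above.
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b ovirt-4.3
git -c user.name=dev -c user.email=dev@example.org \
    commit -q --allow-empty -m "patches collected for 4.3.2"
git tag v4.30.13                  # oVirt 4.3.2 RC3, cut from ovirt-4.3

# RC3 is not final: branch ovirt-4.3.2 from the commit the RC3 tag points to
git branch ovirt-4.3.2 v4.30.13

# stabilization continues on the Z-branch; later 4.3.2 tags come from there
git checkout -q ovirt-4.3.2
git -c user.name=dev -c user.email=dev@example.org \
    commit -q --allow-empty -m "blocker fix for 4.3.2"
git tag v4.30.13.1                # next 4.3.2 tag, from ovirt-4.3.2

# meanwhile ovirt-4.3 is open again for 4.3.3 patches
git checkout -q ovirt-4.3
```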


I believe this approach will make it predictable for everyone if and
when the branch will be made, and thus when and where patches can be
merged.


The only drawback I can see - and that I realized while writing the
example - is the version number which can be awkward:


v4.30.11 -> 4.3.2 RC1

v4.30.12 -> 4.3.2 RC2

v4.30.13 -> 4.3.2 RC3

v4.30.13.1 -> 4.3.2 RC4 ?!?!

v4.30.13.5 -> 4.3.2 RC5 ?!?!


Perhaps we should move to four-digit versions? So we could have


v4.30.11.0 -> 4.3.2 RC1

v4.30.11.1 -> 4.3.2 RC2

v4.30.11.2 -> 4.3.2 RC3

v4.30.11.3 -> 4.3.2 RC4

v4.30.11.4 -> 4.3.2 RC5


I don't see any real drawback in using 4-digit versions by default,
besides a minor increase in complexity, which is balanced by more
predictable and consistent versions. Plus, we already had 4-digit
versions in Vdsm, so packaging should work just fine.
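As a quick sanity check of the ordering (GNU sort's version sort agrees with rpm here for purely numeric, dot-separated segments: a tag with an extra digit sorts after its prefix):

```shell
# The proposed tags sort in release order under version sort.
printf '%s\n' v4.30.13.1 v4.30.11 v4.30.13 v4.30.12 | sort -V
# v4.30.11
# v4.30.12
# v4.30.13
# v4.30.13.1
```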

Please share your thoughts,


-- 
Francesco Romani
Senior SW Eng., Virtualization R&D
Red Hat
IRC: fromani github: @fromanirh
