On 01/07/2013 10:39 AM, Didier Roche wrote:
>>>
>>> You can take whatever other branch you want, calling it, say, "next foo
>>> feature"; you and other people then branch from it, merge into it, write
>>> tests, and experiment with PPAs containing it. Once it's baked and ready
>>> for more people to use, you propose this feature branch for a full review
>>> against trunk. After it is accepted and the tests pass, the feature, now
>>> merged to trunk, is then pushed to Ubuntu.
>> So exactly what problem was supposed to be solved by eliminating upstream 
>> releases for a project and blurring the
>> distinction between upstream and distro if it requires having a separate 
>> 'upstream' branch where all the work is done
>> and then a single upstream branch is frozen and released into trunk?
> I didn't mention an "upstream" branch, but a "feature branch". So a feature is
> baked somewhere and, once it is ready and of high enough quality, merged to
> trunk. We have more than one feature baking in parallel most of the time :)

What you described is the way upstream branches normally work.  You call it a 
"feature branch" but the workflow is the
same.
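
In bzr/Launchpad terms (and assuming the usual lp:unity trunk; the branch and
feature names below are made up for illustration), that workflow is roughly:

  bzr branch lp:unity next-foo-feature    # start the feature branch from trunk
  cd next-foo-feature
  # ...others branch from it, merge into it, add tests, build it in a PPA...
  bzr commit -m "add the foo feature plus tests"
  bzr push lp:~me/unity/next-foo-feature
  # once it's baked, propose the merge against lp:unity on Launchpad;
  # after review and passing tests it lands on trunk and flows to Ubuntu

Which is exactly how an upstream development branch works today.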

>> Don't get me wrong, trunk needs to be sacred.  Requiring downstreams to be
>> aware of all upstream changes that get pushed on them, and requiring upstreams
>> to do all the work in downstreams if they break things, before each and every
>> commit anywhere, is just not working.  It sounds to me like this current
>> experiment is turning up some negative results (some of which were explicitly
>> predicted) and we might need to adjust some theory or parameters.
>>
> How can you talk about negative results from daily release when we don't have
> daily releases yet (because there are some UTAH issues with raring that are
> being fixed)? The current discussion was about the staging PPA being broken.
> The staging PPA has nothing to do with daily release and has been there for a
> year and a half already.

The "negative result" is that the trunk is frequently broken by factors beyond 
the control of the immediate project.
About once a week, trunk builds of Unity either break during the build or while
running the test suite, or fail at run time, because an upstream dependency has
changed and the first notification the Unity maintainers receive is that the
build fails (or a manual upgrade reveals a non-working system).  If the goal is
to never have this happen, we're not meeting that goal, and something needs to
be fixed.

An experiment that yields negative results is not a failed experiment.  On the 
contrary, it's a successful experiment.
We have more data on where we need to focus our attention.

> How can things not be working when the switch is not even on? The only issue I
> can see here was that a commit was introduced that broke the build system. A
> workaround was applied to disable PCH support in the merger bot instead of in
> debian/rules, resulting in no one fixing it.

I'm not sure what went on with the PCH changes, but Martin and Jussi were 
working on it up to the start of the holidays,
and Jussi said it built using a raring pbuilder but not in the autobuilder 
because the autobuilder does something in
addition to what the debian/rules file does.  Then the holidays happened.
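
For reference, putting the workaround in debian/rules, as Didier suggests, would
look something like the excerpt below, so every build (pbuilder, autobuilder, or
local) behaves the same.  This is only a sketch: the CMake option name is a
guess on my part, not necessarily the flag Unity's build system actually uses.

  # debian/rules excerpt (dh style); the option name below is illustrative only
  override_dh_auto_configure:
          dh_auto_configure -- -DENABLE_PCH=OFF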

-- 
Stephen M. Webb  <[email protected]>
