09.02.2018, 16:48, "Konstantin Tokarev" <annu...@yandex.ru>:
> 09.02.2018, 10:03, "Kevin Kofler" <kevin.kof...@chello.at>:
>> IMHO, you need to rethink your whole CI approach. This is increasingly
>> becoming the one bottleneck slowing down Qt development and releases. It
>> might make more sense to try a different approach, such as allowing all
>> commits through initially, then running CI at regular intervals, and
>> triggering reverts if things broke.
> From what I see, CI is not the bottleneck, or at least not the only one. From
> this list I got the impression that the situation is quite different. We may
> have a stable branch with a good number of fresh unreleased commits, but the
> release team declines to make a point release, because the release process
> requires a lot of time and they need that time for doing something more
> important.
> One notable example was with Qt 5.9.3, when Linux binaries were accidentally
> built using too fresh a GCC version. It was proposed that we rebuild the
> 5.9.3 tag as-is with a different toolchain, so no new merges were needed and
> CI delays could have had minimal influence on the release timing. Anyway,
> this was rejected, apparently because verifying binaries before releasing
> them requires too much effort.
So, I think that instead of putting money into increasing CI capacity, it
would be better for TQtC to spend it on the following things:
1. Dedicate enough developers' time to making the release process (i.e.,
everything that has to be done with sources and binaries after Coin
integration succeeds) as automated as is feasible.
2. Find a way to eliminate Coin downtimes, when it doesn't operate at all or
operates at only part of its capacity.
3. Work on eliminating hidden Coin downtimes, when integrations time out or
fail because of infrastructure issues. I mean cases when compilation does not
start or never finishes, independent of the code being under integration, so
this is totally not about flaky tests.
I guess if these issues were solved, the current capacity could be good
enough to reach the promised land of continuous delivery.
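For what it's worth, the "let all commits in, test in batches at intervals,
revert breakage" model quoted at the top can be sketched in a few lines. This
is only an illustration of the control flow: run_ci, bisect_first_bad, and
integrate_batch are hypothetical names, and a real run_ci would invoke the
actual CI system (Coin) rather than a predicate.

```python
# Hypothetical sketch of the batch-test-and-revert CI model.
# run_ci(commits) stands in for a real CI run over the branch state
# that results from applying the given commits; it returns True on pass.

def bisect_first_bad(commits, run_ci):
    """Return the index of the first commit whose cumulative state fails CI.

    Assumes the state before commits[0] passed, and that once a bad commit
    is included, every later cumulative state keeps failing (as git bisect
    assumes).
    """
    lo, hi = 0, len(commits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if run_ci(commits[:mid + 1]):
            lo = mid + 1   # prefix up to mid is good; culprit is later
        else:
            hi = mid       # failure already present in this prefix
    return lo

def integrate_batch(commits, run_ci):
    """Accept a batch of commits; revert culprits until CI passes.

    Returns (accepted, reverted) lists of commits.
    """
    accepted = list(commits)
    reverted = []
    while accepted and not run_ci(accepted):
        bad = bisect_first_bad(accepted, run_ci)
        reverted.append(accepted.pop(bad))
    return accepted, reverted
```

With a toy run_ci that fails whenever a commit named "bad" is present,
integrate_batch(["a", "b", "bad", "c"], ...) accepts ["a", "b", "c"] and
reverts ["bad"] after O(log n) CI runs per culprit.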
Development mailing list