CI catches problems all the time. Few of us can afford to build all the
flavors and architectures on our laptops or workstations, so we have to
rely on CI to catch everything from compilation errors to bugs and
regressions, especially in a project with so many build flavors.

I have been through this in big projects several times, and I can tell
you it always plays out the same way.

So, speaking from extensive software development experience: we will be
able to develop and merge much faster once we have a reliable CI running
in short cycles. Any other approach or shortcut just accumulates
technical debt that somebody will have to clean up later and that will
slow down development. It is better to have a CI with a reduced scope
working reliably than to bypass CI.
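
To make "reduced scope" concrete, here is a rough sketch of what such a
gate could look like: build only a couple of representative flavors and
fail fast. This is Python with made-up flavor names and build commands,
not our actual build system, just an illustration.

#!/usr/bin/env python3
"""Hypothetical reduced-scope CI gate: build a few representative
flavors and stop at the first failure, instead of covering every
flavor/architecture on every merge."""

import subprocess
import sys

# Hypothetical flavors and build command; adjust to the real build system.
REDUCED_SCOPE = [
    ["make", "FLAVOR=debug"],
    ["make", "FLAVOR=release"],
]

def main() -> int:
    for cmd in REDUCED_SCOPE:
        print(f"ci: running {' '.join(cmd)}", flush=True)
        result = subprocess.run(cmd)
        if result.returncode != 0:
            # Fail fast so the cycle stays short and the signal stays reliable.
            print(f"ci: FAILED: {' '.join(cmd)}", file=sys.stderr)
            return result.returncode
    print("ci: reduced-scope build passed")
    return 0

if __name__ == "__main__":
    sys.exit(main())

The point is not this particular script; it is that a small gate that
runs reliably on every merge beats a full matrix that nobody waits for.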

This holds irrespective of whether we merge through dev or use an
unprotected master.

We can't afford new warnings, bugs creeping into the codebase unnoticed,
build system problems, performance regressions, etc., and we have to
rely on a solid CI to catch them. If we are not ready for that, we
should halt feature development, or at least stop merging new features,
until we have a stable codebase and build system.
