Totally agree that CI is useful. Actually, I wrote the Jenkinsfile and
set up the Jenkins server before we moved to the Apache server. My point
is just that we cannot rely on the CI tests alone. They currently cover
operator unit tests and regression tests on several CNNs, but the code
coverage isn't great. If a PR touches the core system, the best practice
today is still code review. Otherwise, for instance when a PR is mainly
about examples, the CI often doesn't help, so we just waste machine time.
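
To make this concrete, here is a minimal sketch of the kind of pipeline
I mean. It is not our actual Jenkinsfile; the stage names, the test
paths and the pytest runner are assumptions for illustration only:

    // Hypothetical Jenkinsfile sketch, not the real one. The paths,
    // commands and change patterns below are illustrative assumptions.
    pipeline {
        agent any
        stages {
            stage('Operator unit tests') {
                steps {
                    // Assumed layout: operator unit tests under tests/python/unittest
                    sh 'python -m pytest tests/python/unittest'
                }
            }
            stage('CNN regression tests') {
                // Only run the expensive training regression when core
                // sources changed (assumed layout: src/); an examples-only
                // PR skips it, so no machine time is wasted.
                when { changeset 'src/**' }
                steps {
                    sh 'python -m pytest tests/python/train'
                }
            }
        }
    }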

I think measuring the exact code coverage is on the roadmap, but I don't
know whether we have made any progress on it.
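
For what it's worth, a first step could be as simple as running the unit
tests under coverage.py and failing the build below some floor. The stage
below is only a sketch in the same spirit as above; the 70% threshold and
the test path are made up:

    // Hypothetical coverage stage; the threshold and path are assumptions.
    stage('Coverage') {
        steps {
            sh '''
                python -m coverage run -m pytest tests/python/unittest
                python -m coverage report --fail-under=70
            '''
        }
    }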

On Fri, Dec 1, 2017 at 6:19 AM, Pedro Larroy <[email protected]>
wrote:

> CI catches problems all the time. I don't think many of us can afford
> to build all the flavors and architectures on our laptops or
> workstations, so we have to rely on CI to catch all kinds of errors,
> from compilation errors to bugs and regressions, especially in a
> project that has so many build flavors.
>
> I have had this experience in big projects several times and I can
> tell you it's always the same.
>
> So from extensive software development experience I can say that we
> will be able to develop and merge much faster once we have a reliable
> CI running in short cycles; any other approach or shortcut just
> accumulates technical debt that somebody will have to clean up later,
> and it will slow down development. It is better to have a CI with a
> reduced scope working reliably than to bypass CI.
>
> This is irrespective of whether we merge through a dev branch or an
> unprotected master.
>
> We can't afford increased warnings, bugs creeping unnoticed into the
> codebase, build system problems, performance regressions, etc., and we
> have to rely on a solid CI for this. If we are not ready for that, we
> should halt feature development, or at least stop merging new features,
> until we have a stable codebase and build system.
>
