Yep, I think that in the case of debugging upstream submissions (on CI)
it's the same process as debugging those errors locally,
in which case the only missing part, I think, is
just pointing out where to look when reading the CI logs.
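
(Just to illustrate what I mean by "where to look", here's a rough,
generic sketch; the log URL is a placeholder for whatever path the gate
job reports back on the review, and grepping for ERROR/FAILED is just my
usual first pass, nothing TripleO-specific:)

    # sketch: fetch a job's console log and show the obviously bad lines
    import urllib.request

    # placeholder, use the log path the job reports on the review
    LOG_URL = "http://logs.openstack.org/<path-from-the-review>/console.html"

    with urllib.request.urlopen(LOG_URL) as resp:
        text = resp.read().decode("utf-8", errors="replace")

    for line in text.splitlines():
        if "ERROR" in line or "FAILED" in line:
            print(line)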

What I think will be a bigger challenge is teaching
other people to understand and master how
TripleO CI works, including how to define new jobs,
where to look when a package conflict is
breaking the build, or how to detect infra issues,
among other topics. (This won't be about debugging
submissions, but rather about debugging when we have CI
failures.)

I'm not sure a deep historical understanding of
how CI was built and how it actually works is needed,
but I think it would make people love
our CI a little bit more.

I'll add more items to the Etherpad; let's see how many people
are interested.

Cheers,
Carlos.


On Wed, Aug 24, 2016 at 8:24 PM, James Slagle <james.sla...@gmail.com>
wrote:

> On Wed, Aug 24, 2016 at 12:17 PM, Carlos Camacho Gonzalez
> <ccama...@redhat.com> wrote:
> > Hello guys!
> >
> > I will like to ask you a question related to future TripleO deep dive
> > sessions.
> >
> > What about having a specific series for CI? I read some people kind of
> > “complaining” on IRC when CI does not work properly, and assuming that
> > taking care of CI is part of everyone's work, let's try to have more
> > eyes on CI (including me).
> >
> > I believe that if more people are actually able to debug “real” CI issues
> > we will be able to decrease the amount of work that these tasks take
> > from the team.
> >
> > I added a section with some topics to
> > https://etherpad.openstack.org/p/tripleo-deep-dive-topics; feel free to
> > add/edit items, and let's discuss it at the weekly meeting to see if in
> > the mid term we can have some introduction to CI.
>
> I think this is a great idea. What I'd like to know before planning
> something like this is what specific things do people need help on
> when it comes to debugging failed jobs. How have folks tried to debug
> jobs that have failed and gotten stuck?
>
> Most of the time it is looking at logs and trying to reproduce
> locally. I'd be happy to show that, but I feel like we've already
> covered that to a large degree. So, I'd like to dig a little more into
> specific ways people get stuck with failures and then we can directly
> address those.
>
> Ideally, a root cause of a failure could always be found, but that is
> just not going to be the case given other constraints. It often comes
> down to what one is able to reproduce locally, and how to mitigate the
> issues as best we can (see email I just sent for an example).
>
> Let me know or add the specifics to the etherpad and I'll pull
> something together if there are no other volunteers :).
>
> --
> -- James Slagle
> --
>
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
