On Thu, Sep 28, 2017 at 10:42 AM, Mark Waite <[email protected]> wrote:
> Do we have any way of associating historical acceptance test harness
> failures to a cause of those failures?

Not that I am aware of.

> could such data be gathered from some sample of recent runs of the
> acceptance test harness, so that we know which of the tests have been most
> helpful in the recent past?

Maybe, though I doubt things are in good enough condition for that kind of
analysis. AFAIK none of the CI jobs running the ATH has ever had a
single stable run, so there is no real baseline.
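If per-run results were archived somewhere, a first pass at that analysis might just be a per-test failure-rate tally: tests that fail in nearly every run are flaky noise, while tests that fail only occasionally are the ones worth investigating as regression detectors. This is only a sketch; the `(test_name, passed)` record format is an assumption, not anything the ATH actually emits today.

```python
from collections import Counter

def failure_rates(runs):
    """Compute per-test failure rates from historical results.

    runs: list of (test_name, passed) pairs, one entry per execution
    of a test in some CI run. Returns {test_name: fraction_failed}.
    """
    total = Counter(name for name, _ in runs)
    failed = Counter(name for name, passed in runs if not passed)
    return {name: failed[name] / total[name] for name in total}
```

Sorting the result by rate would separate the always-red (likely flaky) tests from the rarely-red (likely informative) ones.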

> Alternately, could we add a layering concept to the acceptance test harness?
> There could be "precious" tests which run every time and there could be
> other tests which are part of a collection from which a few tests are
> selected randomly for execution every time.

Yes, this is possible.
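The selection logic itself would be simple: always run the "precious" tier, then add a seeded random sample of the remainder so each run covers a different slice while staying reproducible for a given build. A minimal sketch (the test names and tier split are hypothetical):

```python
import random

def select_tests(precious, others, sample_size, seed):
    """Always include the 'precious' tier; add a random sample of the rest.

    Seeding the RNG (e.g. with the build number) makes a given run's
    selection reproducible, which matters when re-running a failed build.
    """
    rng = random.Random(seed)
    others = sorted(others)  # stable ordering so the seed fully determines the sample
    sampled = rng.sample(others, min(sample_size, len(others)))
    return list(precious) + sampled
```

The harder part is curating which tests belong in the precious tier, not the mechanics.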

> is there a way to make the acceptance test harness run
> in a massively parallel fashion

Yes, but that does not help with the flakiness, and it still wastes a
tremendous amount of cloud machine time.

> As a safety check of that concept, did any of the current acceptance tests
> detect the regression when run with Jenkins 2.80 (or Jenkins 2.80 RC)?

Yes.

> Is there a JenkinsRule test which could reasonably be written to test for
> the conditions that caused the bug in Jenkins 2.80?

Not really; that particular issue was unusual, since it touched on the
setup wizard UI, which is normally suppressed by test harnesses.

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Developers" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-dev/CANfRfr0LJ6z%3DTiQe8Rt9P_D7RpZDEHt3XAWQxoq%3Dd0dVAD%2BG%3Dw%40mail.gmail.com.