On Thu 28 Sep 2017 at 20:33, Jesse Glick <jgl...@cloudbees.com> wrote:

> On Thu, Sep 28, 2017 at 2:51 PM, Stephen Connolly
> <stephen.alan.conno...@gmail.com> wrote:
> > writing good acceptance tests is a high skill and not something that
> > can be easily crowdsourced.
>
> Agreed. In particular we should not be asking GSoC contributors to
> write new acceptance tests. Every PR proposing a new test should be
> carefully reviewed with an eye to whether it is testing something that
>
> · could not reasonably be covered by lower-level, faster tests
> · is a reasonably important user scenario, for which an accidental
> breakage would be considered a noteworthy regression worth holding up
> or even blacklisting a release (of core or a plugin)
> · is sufficiently distinct from other scenarios already being tested
> that we think regressions might otherwise slip by
>

+1


> > there is a fundamental flaw in
> > using the same harness to drive as to verify *because* any change to that
> > harness has the potential to invalidate any tests using the modified
> > harness
>
> Probably true but does not seem to me like the highest priority facing us.


If we want high-value tests, this is the direction I recommend. How we get
there (or even whether we get to my recommendation) is a matter for the
people driving the effort to decide. You have my advice and you appear to
understand why I recommend it. I am not driving this effort, so I will not
dictate anything about it.
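
To make the failure mode concrete, here is a hypothetical sketch (made-up
names, not real ATH code) of a page object that both drives and verifies
through the same harness state, so a single harness change can quietly turn
the test into a tautology:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class DescriptionRoundTripTest {
        /** Hypothetical page object used both to drive and to verify. */
        static class JobPage {
            private String cached; // harness-side cache

            void setDescription(String text) {
                cached = text;
                // ...drive the real UI here; if this step silently breaks,
                // the cache above is still updated
            }

            String getDescription() {
                // A harness "optimization" that returns the cache instead
                // of re-reading the UI makes every round-trip agree:
                return cached;
            }
        }

        @Test
        public void roundTrip() {
            JobPage page = new JobPage();
            page.setDescription("hello");
            // Passes whether or not Jenkins actually saved anything.
            assertEquals("hello", page.getDescription());
        }
    }

An independent verification path (say, reading the saved configuration back
over the REST API rather than through the same page object) would catch this.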

>
>
> > it is all too easy to turn a good test into a test giving a false
> > positive
>
> I am not personally aware of any such historical cases (maybe I missed
> some). The immediate problems are slowness, flakiness (tests sometimes
> fail for no clear reason), and fragility (tests fail due to trivial
> code changes especially in the UI).


Well, here’s the thing: unless we re-run the tests with the feature under
test deliberately broken, we have no way of knowing. A false test passes
irrespective of whether the feature works or not... and most likely the
features *are* working, so nobody will question the passing tests that
originally exercised the feature but are now simultaneously failing to test
it and failing to verify it.
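
A minimal way to express that check in code (a sketch only; both hooks are
hypothetical stand-ins for "deploy a broken build and rerun"):

    public class TestValidator {
        /**
         * Sanity-check a test by mutating the system under test:
         * a useful test must fail once the feature is broken.
         */
        static void assertTestActuallyTests(Runnable test, Runnable breakFeature) {
            test.run(); // must pass against a healthy build

            breakFeature.run(); // e.g. redeploy with the feature disabled
            try {
                test.run();
            } catch (AssertionError expected) {
                return; // good: the test notices the breakage
            }
            throw new AssertionError(
                    "test passed with the feature broken; it verifies nothing");
        }
    }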

Fragile tests are a worse problem in my mind.

Useless slow tests are an even worse problem.

But that is just my opinion; how you apply your energy is your call.

>
>
> > We need an ATH that can be realistically run by humans in an hour or two
>
> Yes that seems like a good goal.
>
>
-- 
Sent from my phone
