I have been working/planning with Oliver on this for some time. One of the
identified downsides of this approach is that the Jenkins UI often evolves
in backward-incompatible ways (e.g. the tables-to-divs change, the recent
plugin manager changes, etc.). None of those potential breaking changes
would be caught by acceptance tests running at plugin build/release time
(unless the plugin maintainer updates `jenkins.version` to the latest very
often). The current ATH approach runs all acceptance tests against a
specific Jenkins version, so possible regressions are caught at Jenkins
core release time, as desired.
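(For context, the pin in question is the `jenkins.version` property in the
plugin's POM; the version below is only a placeholder, each plugin chooses
its own baseline:)

```xml
<properties>
  <!-- Jenkins core version the plugin is built and tested against.
       Example value only; bumping it is what picks up newer-core changes. -->
  <jenkins.version>2.346.1</jenkins.version>
</properties>
```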
The alternative would be to define the plugin build/release pipeline so
that the ATH runs against a dynamic Jenkins version (e.g. the latest
weekly at the time of the build), but that leads to unreproducible builds,
which is IMO quite bad. And it does not address the issue of catching
regressions that a core release causes in plugins (unless some complicated
pipeline is put in place to run the ATH of some key plugins as part of the
Jenkins core build).
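As a sketch only (the download URL and the ATH invocation below are
illustrative assumptions, not an existing setup), such a dynamic pipeline
would look roughly like:

```groovy
// Hypothetical Jenkinsfile stage: resolve whatever the latest weekly is
// at build time and run the ATH against it. Because the resolved version
// changes between runs, two builds of the same commit can behave
// differently -- this is the unreproducibility problem described above.
pipeline {
    agent any
    stages {
        stage('ATH vs latest weekly') {
            steps {
                sh '''
                    # Illustrative URL for the latest weekly war
                    curl -fsSL -o jenkins.war https://get.jenkins.io/war/latest/jenkins.war
                    # Illustrative ATH invocation; the war under test is
                    # passed via the JENKINS_WAR environment variable
                    JENKINS_WAR=$PWD/jenkins.war mvn test
                '''
            }
        }
    }
}
```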

On Tue, 26 Apr 2022 at 20:57, Basil Crow <m...@basilcrow.com> wrote:

> On Tue, Apr 26, 2022 at 11:12 AM 'Jesse Glick' via Jenkins Developers
> <jenkinsci-dev@googlegroups.com> wrote:
>
> > `acceptance-test-harness` has a bunch of dependencies, some of which
> clash with those in Jenkins core or some plugins, so you would need to
> either shade them all or otherwise somehow ensure the ATH dependency
> trails are given minimum priority.
>
> I see; thank you. I suppose the same problem exists in JTH, with
> shading being the solution, so I see no reason why the same couldn't
> be done for ATH.
>
> > Although ATH uses RESTish endpoints for a few purposes, for the most
> part the test setup (not just the actual assertions) uses the browser. In
> all cases the whole test run is “black-box”. This has its appeal (stronger
> coverage) but GUI setup makes tests much slower and either REST or GUI
> setup can also make it a lot more awkward to write tests than when using
> `[Real]JenkinsRule`, which are “white-box” and let you quickly set up an
> environment and run some assertions at the Java level. JTH also lets you
> write test-only extensions, which ATH does not.
>
> I see; very interesting. I wonder how many parts of the test setup
> process are common to many tests/plugins versus unique to each
> test/plugin. For example, I could imagine that things like going
> through the setup wizard, creating a freestyle project, or adding an
> agent are common to many tests (and therefore pointless from the
> perspective of test coverage to repeat them multiple times), while
> other parts of the test setup process, like configuring a specific
> plugin, might be unique to each test/plugin. If true, and if test time
> is dominated by the setup steps that are common to many tests/plugins,
> then perhaps, for each such common portion, we could run it with the
> GUI/REST API in one "primary" repository (e.g., core, Workflow: Job,
> etc.) and, without loss of test coverage at a macro level, run it with
> JCasC/REST or even Java/JTH in the "secondary" repositories, as a way
> of speeding up and de-flaking the setup process and getting to the
> unique/interesting part.
>
> In other words, I wonder if a middle ground between test coverage and
> performance couldn't be found. Or maybe I am hopelessly naive and this
> is an exercise in futility in the long term. I will discuss this with
> my friend who has experience in this domain.
>
> > ATH installs plugins, and selects plugin versions, using user-mode
> tools. `[Real]JenkinsRule` follow the Maven test classpath. Possibly ATH
> could be given an option to define a mock UC based on a test classpath.
>
> Yes, a great example of the "middle ground" approach from the previous
> paragraph!
>
> > Also whether the fragility is in the actual assertions, or test setup
> (point #1 above).
>
> I think your point is that fragility in the test setup would undermine
> the idea at its core, while fragility in the actual assertions could
> be tolerated on a case-by-case basis in individual test suites. If so,
> I concur, and I think that finding a way to mitigate the fragility of
> test setup by factoring out that logic as described in the preceding
> paragraphs could be one solution.
>


-- 
Antonio Muñiz
Human, Engineer
CloudBees, Inc.

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-dev+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-dev/CAJc7kzRnSP9ELawz8rAhzyFMW6xLfjA1XBO8QcSjYUFu-J3X8Q%40mail.gmail.com.
