When I am doing a review and want to test a PR, I do something similar to what Johan mentioned. I download the PR to my local fork, verify it (often with manual testing in addition to the provided tests), and then back out the code changes to see whether the provided automated test would catch the failure.
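For the archives, that back-out check can be sketched end to end in a throwaway repo. Everything below (paths, file contents, commit messages) is a placeholder standing in for a real PR checkout, not an actual project layout:

```shell
#!/bin/sh
# Sketch of "verify the fix, then back out the code change and re-run the
# tests". All paths and contents are illustrative stand-ins.
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q .
g() { git -c user.email=a@b.c -c user.name=tmp "$@"; }
mkdir -p src test
echo 'buggy' > src/impl.txt
git add . && g commit -qm 'base'
# Simulate the PR: fix the code and add a test that expects the fix.
echo 'fixed'        > src/impl.txt
echo 'expect fixed' > test/check.txt
git add . && g commit -qm 'pr'
# "Run the tests" with the fix applied: the check passes.
grep -q 'fixed' src/impl.txt
# Back out only the code change, keeping the new test, and re-run:
# the check should now fail, proving the test catches the bug.
git checkout -q HEAD~1 -- src/
if grep -q 'fixed' src/impl.txt; then
  echo "test did NOT catch the backed-out fix"
else
  echo "test catches the missing fix"
fi
```

The key step is `git checkout <base> -- src/`, which restores only the code paths while leaving the new test in place.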

The Skara bots don't run any tests at all. What Skara does is run a PR check that looks for the results of a GitHub Actions test run, which runs in the context of the PR author's personal fork, and present those results to the reviewers of the PR. So we already get this level of testing.

I can't think of a good way to automate running the new / modified tests without the fix. I don't think we want Skara to get into the business of understanding enough about the different projects (jfx, jdk, skara itself, etc.) to be able to split a PR into multiple parts. Also, we don't run any tests in the bots themselves or in the openjdk group on GitHub, only on individual users' personal forks. Anything we do would be limited to whatever we could run in the GitHub Actions on the developer's personal fork.

-- Kevin


On 6/22/2021 9:06 AM, Nir Lisker wrote:
I also thought it would be useful for the bot to run the tests after the
patch and notify on a failure. I hadn't thought about applying the new tests
to the old code in the "hope" of a failure; I like this idea. Providing two
downloads is also useful.

One thing we need to be careful with is that an older test can fail with a
new patch even though the tests that come with the patch pass, because the
patch could have broken some dependent code (which could also mean that the
problem is in the dependent code).
How expensive is it for the bot to run the tests?

On Tue, Jun 22, 2021 at 6:31 PM Johan Vos <johan....@gluonhq.com> wrote:

Hi,

I really like the automation that Skara is providing. It saves time for
committers as well as for reviewers. To increase productivity even
more (without decreasing quality), there might be an additional step we
could somehow automate. Many PRs contain changes in both Java code and test
code. What I typically do as a first step of a review is to first run the
tests on the latest code, then apply the PR, and run the tests again.
Clearly, no test should fail in the second run, while it would be OK (and
desired) for at least one test to fail in the first run.

My typical approach for this is to download the diff and manually split it
into two parts: code + tests. I then apply the part with the tests, and run
them. Next, I apply the part with the code, and re-run the tests.
Clearly, this is not enough for a review, but if we automate it, it gives
an additional indication that a PR is going in the right direction.
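For what it's worth, `git apply` can already do that split by path, via its `--include`/`--exclude` filters, so no manual editing of the diff is needed. A self-contained sketch (throwaway repo, placeholder file names):

```shell
#!/bin/sh
# Sketch of splitting one PR diff into "tests first, code second" using
# git apply's path filters. The repo layout is a placeholder.
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q .
g() { git -c user.email=a@b.c -c user.name=tmp "$@"; }
mkdir -p src/main/java src/test/java
echo 'old code' > src/main/java/Impl.java
git add . && g commit -qm 'base'
# Simulate the PR: one code change plus one new test.
echo 'new code' > src/main/java/Impl.java
echo 'a test'   > src/test/java/ImplTest.java
git add . && g commit -qm 'pr'
git diff HEAD~1 HEAD > pr.diff   # the full PR diff
git reset --hard -q HEAD~1       # back to base (pr.diff is untracked)
# Step 1: apply only the test part, then "run the tests" (should fail).
git apply --include='src/test/java/*' pr.diff
grep -q 'old code' src/main/java/Impl.java   # code is still unpatched
# Step 2: apply the code part and re-run (should now pass).
git apply --exclude='src/test/java/*' pr.diff
grep -q 'new code' src/main/java/Impl.java
```

When any `--include` is given, non-matching paths are skipped rather than rejected, which is what makes the two-phase apply work on a single diff.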

I'm not sure how easy or hard this would be to implement. The bot should be
able to detect changes under `src/test/java`, and provide two downloads in the
comments of the PR ("download test diff" and "download code diff"). It could
also execute those steps in a GitHub Action, and fail if the resulting tests
are still failing after applying the code diff.
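Producing the two downloads themselves might be as simple as two `git diff` invocations with pathspecs (the `:!` magic excludes a path). A sketch in a throwaway repo with placeholder file names:

```shell
#!/bin/sh
# Sketch of generating the two proposed downloads ("test diff" and
# "code diff") from one PR commit, using git pathspecs. Placeholder repo.
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q .
g() { git -c user.email=a@b.c -c user.name=tmp "$@"; }
mkdir -p src/main/java src/test/java
echo 'old code' > src/main/java/Impl.java
git add . && g commit -qm 'base'
echo 'new code' > src/main/java/Impl.java
echo 'a test'   > src/test/java/ImplTest.java
git add . && g commit -qm 'pr'
# "download test diff": only changes under src/test/java
git diff HEAD~1 HEAD -- 'src/test/java' > test.diff
# "download code diff": everything else (':!' excludes a pathspec)
git diff HEAD~1 HEAD -- . ':!src/test/java' > code.diff
grep -q 'ImplTest.java' test.diff
grep -q 'Impl.java' code.diff
```

Each resulting file is an ordinary patch that `git apply` can take as-is.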

Thoughts?

- Johan

