On 15/06/16 19:07, Finucane, Stephen wrote:
> This has come up for discussion before, and the same argument for not
> doing it back then still stands now: bisectability. If you have N
> patches in a series, then tests should pass for every single patch (+
> dependencies) in the series. By testing a whole series, we can't
> validate this (or, at least we can't be explicit about this). In
> addition, I consider series (well, series revisions) as mere containers
> for patches, and I'd be very reluctant to add much logic to them.

I get the bisectability argument, though fundamentally I think it's a judgement call for the tester as to which tests they consider significant bisectability-wise, and Patchwork should try to fit in with preferred workflows where possible.

The primary use case I'm currently working on involves compiling and testing kernels under multiple different configurations and (eventually) on physical hardware. I'm fine with doing one or two defconfig builds on a per-patch basis as a basic bisectability test, but testing every patch in a 20-50 patch series against more than a couple of defconfigs, and running several 10-20 minute boot and runtime tests on VMs or physical hardware, just isn't going to be feasible. (In an ideal world with unlimited capital equipment budgets, it would obviously be a different story...)

I can see your point that series are merely containers for patches... I'm not *entirely* convinced of that myself but I can appreciate the argument there.

> The best option we might have, if per-series reporting is really
> necessary, is to allow Check uploading against a Series endpoint. This
> would actually cause N Checks to be created - one for each Patch in the
> series - meaning each Patch could still be individually queried. It
> would be a bit of a lie (we didn't actually test the patch by itself,
> therefore it might be broken) and I wouldn't promote this workflow
> myself (bisectability FTW), but it could be a good way of dealing with
> extremely long-running or resource-intensive test suites, where
> per-patch validation would be too expensive.
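To make sure I follow: from a client's point of view, the fan-out you describe would behave roughly like this sketch. (The shape of the series object - an ordered "patches" list of dicts with "id" keys - and the check fields are my guesses at what the endpoint might accept, not the actual implementation.)

```python
def fan_out_series_check(series, state, context, description):
    """Expand one series-level check submission into one check payload
    per patch, as the proposed Series endpoint would do server-side.

    `series` is assumed to be a dict shaped like a REST API series
    object, with an ordered "patches" list of {"id": ...} dicts.
    Returns a list of (patch_id, payload) pairs.
    """
    payload = {"state": state, "context": context, "description": description}
    # One identical Check per patch, so each patch can still be queried
    # individually -- even though only the series as a whole was tested.
    return [(patch["id"], dict(payload)) for patch in series["patches"]]


# Hypothetical series with three patches:
series = {"id": 42, "patches": [{"id": 101}, {"id": 102}, {"id": 103}]}
checks = fan_out_series_check(
    series, "success", "ci/boot-test", "passed against the series as a whole"
)
# -> three payloads, one to POST against each patch's checks endpoint
```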

I suppose this would be better than nothing, though apart from being mildly misleading, it would also generate a lot of spam. If I'm submitting 5 test results against the entire series and 1-2 results against each individual patch, and all of those results appear on every patch, things could get confusing quickly. Personally, I'd be inclined to submit series-wide test results as checks on the very last patch in the series instead of using this.
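Concretely, what I have in mind is just targeting the final patch, with a distinct context so a series-wide result can't be mistaken for a per-patch one. A minimal sketch, again assuming a series object shaped like the one above:

```python
def series_wide_check(series, state, context, description):
    """Attach a single series-wide result to the last patch only --
    i.e. the tree state that was actually built and tested -- instead
    of fanning it out across every patch in the series.

    Returns the (patch_id, payload) pair to submit. The "series/"
    context prefix is just a naming convention I'm suggesting here.
    """
    if not series["patches"]:
        raise ValueError("series has no patches")
    target = series["patches"][-1]  # patches are assumed to be in order
    return target["id"], {
        "state": state,
        "context": "series/" + context,  # e.g. "series/ci/boot-test"
        "description": description,
    }


series = {"id": 42, "patches": [{"id": 101}, {"id": 102}, {"id": 103}]}
patch_id, payload = series_wide_check(
    series, "success", "ci/boot-test", "20-patch series booted on hardware"
)
# -> one check, filed against patch 103 only
```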

--
Andrew Donnellan              OzLabs, ADL Canberra
[email protected]  IBM Australia Limited

_______________________________________________
Patchwork mailing list
[email protected]
https://lists.ozlabs.org/listinfo/patchwork