On Sunday, 7 August 2016 at 23:08:34 UTC, Martin Nowak wrote:
> I actually don't think this makes sense. You're not in the
> position to maintain 1K+ packages, it's the library owners that
> need to test their code.
Thanks for taking the time to respond.

I agree with you: library owners should test their code
themselves. But they don't; 24% of the packages don't build.
> Just this short list I'm using for the project tester is hardly […]
I don't need to maintain anything besides linker errors. It is
quite simple: I just run `dub test` and see what happens. If that
doesn't work, I consider it a failed build.
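The check described above can be sketched as a small shell helper. This is a hypothetical illustration, not the project's actual code; it simply treats any non-zero exit status from `dub test` as a failed build:

```shell
#!/bin/sh
# Hypothetical sketch of the described workflow: run `dub test` in a
# package directory and treat any non-zero exit status as a failure.
test_package() {
  dir="$1"
  if (cd "$dir" && dub test >/dev/null 2>&1); then
    echo "PASS $dir"
  else
    echo "FAIL $dir"
  fi
}
```

Anything that prevents `dub test` from succeeding (a compile error, a linker error, or a failing unit test) is counted the same way.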
> https://github.com/MartinNowak/project_tester (uses Jenkins, no
> need to write yet another CI).
I would argue mine is simpler to deploy and simpler for nodes to join.
> I've already thought about many different aspects of this and
> here are the 2 things that are useful and might work out.
>
> - Implement a tester that runs for every PR (just [like] the other
>   testers) and tests the most popular/important dub packages.
>   Once a day is not enough b/c [nobody] will feel responsible for
>   breakages, we really need feedback before merging.
It is just a matter of resources. I chose nightly since it
seemed doable using just my own resources.
> - Show test results of various CIs on code.dlang.org. Testing a
>   dub package on Travis-CI is already a no-brainer. For example
>   the following .travis.yml would test a package against all dmd
>   [versions]:
>
>   d: [dmd, dmd-beta, dmd-nightly]

Yes, that is quite nice. But that only gets triggered when the
repo is updated.
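For reference, a complete .travis.yml along those lines might look like the sketch below. This is an assumed setup based on my understanding of Travis-CI's community D support, which runs `dub test` by default; the explicit `script` line is included only for clarity:

```yaml
# Sketch of a Travis-CI config testing a dub package against
# several dmd release channels (assumed setup, not from the thread).
language: d
d:
  - dmd           # latest stable dmd
  - dmd-beta      # current beta
  - dmd-nightly   # nightly build
script: dub test  # Travis's D support runs this by default anyway
```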
All in all, I understand your reservations, and I highly
appreciate your feedback. I understand I won't bring the end-all
solution to testing, but I do hope to reach the goals I have
set for myself: 1) catching (some) regressions, 2) giving
insights into bit rot on code.dlang.org, and 3) having fun.
It might take a couple of months before I reach them, or I might
not at all.