On 16 Dec 2025, at 20:33, Roger Leigh via Tiff <[email protected]> wrote:
>
> On 16 Dec 2025, at 20:24, Even Rouault <[email protected]> wrote:
>>
>> Hi Roger,
>>
>> Thanks for working on that.
>>
>>> The other downside is that these systems are libtiff group runners; they
>>> are only accessible to members of projects within the libtiff GitLab group,
>>> so external or internal contributors who open merge requests from forked
>>> repositories won’t have the CI builds run successfully. Only branches
>>> pushed to the libtiff repository will run.
>>
>> That's indeed a major drawback to me. It will be hard to accept
>> contributions from casual contributors with such a setup, since they won't
>> have push rights to the libtiff repository. Is there a way to have a mix
>> of self-hosted runners for thorough tests, while we keep the existing/past
>> GitLab-provided runners for everyone?
>
> Yes, we could just keep e.g. a single Linux build using a cloud runner to do
> a crude initial pass, and then do more thorough testing once it met that
> minimum bar. It might also be possible to exclude all of the other builds
> from running in a fork, so contributors won't see all of the failures. This
> would also keep our CI minutes quota in check, since we wouldn't be doing
> as much per merge request.
>
> I have previously done this for submissions where the user did not have
> AppVeyor set up; I would just fetch their branch and then push it up to the
> libtiff repo to check the builds all passed before approving it. This is
> essentially the same, but for more platforms.
I’ve done a little bit of work on this during the evening, and you’ll now see,
for example on https://gitlab.com/libtiff/libtiff/-/pipelines/2218901494, that
there is a “pre-build” stage before the main “build” stage. In the main
libtiff/libtiff repository you’ll see this runs a “basic-build-main” step. If
you push the “ci-windows” branch to your forked project, you’ll see it run
“basic-build-fork” instead, with no main “build” stage at all. The forked
project’s build will use cloud-based runners.
It might need a little further tweaking, but that’s the essence of it: the main
project does the comprehensive builds, forks do a minimal build.
At present, the minimal build is on Linux with CMake and Makefiles. It’s just
a basic sanity check that the codebase compiles and passes the tests. In the
main project, it also saves resources by gating the comprehensive testing: if
there’s some minor problem, we don’t run 18 jobs that are certain to fail.
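For anyone curious, a minimal sketch of how this kind of main/fork split can be
expressed in .gitlab-ci.yml — the “basic-build-main” and “basic-build-fork” job
names are the ones mentioned above, while the “full-build” job, its runner tag,
and the exact scripts are hypothetical placeholders, not the actual libtiff
configuration. CI_PROJECT_PATH is a standard GitLab predefined variable:

```
stages:
  - pre-build
  - build

# Quick sanity build, run only in the main libtiff/libtiff project.
basic-build-main:
  stage: pre-build
  rules:
    - if: '$CI_PROJECT_PATH == "libtiff/libtiff"'
  script:
    - cmake -S . -B build
    - cmake --build build
    - ctest --test-dir build

# The same sanity build for forks, on GitLab's shared cloud runners.
basic-build-fork:
  stage: pre-build
  rules:
    - if: '$CI_PROJECT_PATH != "libtiff/libtiff"'
  script:
    - cmake -S . -B build
    - cmake --build build
    - ctest --test-dir build

# Comprehensive jobs run only in the main project; because they sit in a
# later stage, they are automatically gated on the pre-build stage passing.
full-build:
  stage: build
  rules:
    - if: '$CI_PROJECT_PATH == "libtiff/libtiff"'
  tags:
    - libtiff-self-hosted   # hypothetical tag for the group runners
  script:
    - cmake -S . -B build
    - cmake --build build
    - ctest --test-dir build
```

Since stages run in order and a failed stage stops the pipeline, the single
cheap pre-build job is what prevents the expensive fan-out from ever starting.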
Kind regards,
Roger
_______________________________________________
Tiff mailing list
[email protected]
https://lists.osgeo.org/mailman/listinfo/tiff