On Thu, Feb 23, 2023 at 03:33:00PM +0000, Daniel P. Berrangé wrote:
> IIUC, we already have available compute resources from a couple of
> sources we could put into service. The main issue is someone to
> actually configure them to act as runners *and* maintain their
> operation indefinitely going forward. The sysadmin problem is
> what made/makes gitlab's shared runners so incredibly appealing.
Hello,

I would like to do this, but the path to contributing in this way isn't
clear to me at the moment. I made it as far as creating a GitLab fork
of QEMU and then attempting to create a merge request from my branch in
order to test the GitLab runner I have provisioned. Not having
previously tried to contribute via GitLab, I was a bit stymied to find
that it is not even possible to create a merge request unless I am a
member of the project; I clicked a button to request access.

Alex's plan from last month sounds feasible:

- provisioning scripts in scripts/ci/setup (if existing not already
  good enough)
- tweak to handle multiple runner instances (or more -j on the build)
- changes to .gitlab-ci.d/ so we can use those machines while keeping
  the ability to run on shared runners for those outside the project

Daniel, you pointed out the importance of reproducibility, and thus the
use of the two-step process (build-docker, then test-in-docker), so it
seems that only docker and the gitlab-runner agent would be hard
requirements for running the jobs?

I feel the greatest win here would be to at least host the cirrus-run
jobs on a dedicated runner, because the shared-runner machine seems to
simply be burning double minutes until the Cirrus job completes, so I
would expect the GitLab runner requirements for those jobs to be low.

If there are other steps I should take to contribute in this capacity,
please let me know. Maybe I could send a patch to tag the cirrus jobs
in the same way that the s390x jobs are currently tagged, so that we
could run them separately? Rough sketches of both ideas follow my
signature.

Thanks,
Eldon
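
To make the last item of Alex's plan concrete, here is the kind of
change to .gitlab-ci.d/ that I have in mind. This is only a sketch:
the "qemu-dedicated" tag and the $DEDICATED_RUNNER_AVAILABLE guard
variable are names I made up, modelled on how the custom s390x runner
jobs appear to be gated today, and the template and variable names
would need checking against the tree:

  # Hypothetical job for .gitlab-ci.d/custom-runners/: picked up only
  # by a runner registered with the (made-up) "qemu-dedicated" tag,
  # and only when the project has such a runner available, so forks
  # without one keep using the ordinary shared-runner jobs.
  build-system-dedicated:
    extends: .native_build_job_template  # assumes existing template
    tags:
      - qemu-dedicated
    rules:
      - if: '$DEDICATED_RUNNER_AVAILABLE'
    variables:
      IMAGE: ubuntu2004                  # reuse an existing CI image
      TARGETS: x86_64-softmmu
      MAKE_CHECK_ARGS: check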
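
For the cirrus jobs, the patch could be as small as adding a tag to
the .cirrus_build_job template in .gitlab-ci.d/cirrus.yml, so that
those mostly-idle proxy jobs land on a dedicated machine. Again a
sketch: the "cirrus-proxy" tag is a placeholder, and the surrounding
lines are from memory rather than copied from the tree:

  # In .gitlab-ci.d/cirrus.yml (sketch); the only intended change is
  # the tags: entry routing the cirrus-run wrapper jobs to a
  # dedicated runner.
  .cirrus_build_job:
    stage: build
    image: registry.gitlab.com/libvirt/libvirt-ci/cirrus-run:master
    tags:
      - cirrus-proxy    # placeholder tag for the dedicated runner
    needs: []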