Hello Nathan,

I see what you mean, but I think this is exactly the problem a runner solves.

Say the current CI builds everything, and I have a computer with some ESP devices plugged in. I can bring up a local runner and make it available to the NuttX CI. Then, after all images are built, the CI searches for available hardware runners with ESP support and tells one: "use this Docker image, which has all the components required for flashing and testing, execute the test, and return the results to me."
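Roughly, the runner-side step could look like the sketch below. This is only an illustration: the esptool invocation, the serial port, and the ostest success marker are assumptions that would have to be adapted to each board and test harness.

    import subprocess
    import serial  # pyserial

    PORT = "/dev/ttyUSB0"   # assumption: where the board enumerates
    FIRMWARE = "nuttx.bin"  # artifact produced by the build job

    # Flash the firmware (chip, offset, and flags are illustrative).
    subprocess.run(
        ["esptool.py", "--chip", "esp32", "--port", PORT,
         "write_flash", "0x1000", FIRMWARE],
        check=True,
    )

    # Open the serial console, start ostest, and collect the output.
    with serial.Serial(PORT, 115200, timeout=5) as console:
        console.write(b"ostest\n")
        output = b""
        while True:
            chunk = console.read(4096)
            if not chunk:
                break  # board stopped talking within the timeout
            output += chunk

    # Placeholder marker: whatever string the real harness keys on.
    print("PASS" if b"Exiting with status 0" in output else "FAIL")

The pass/fail verdict, plus the captured log, is what the runner would hand back to the CI.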
I'll look more into GitHub's self-hosted runners, but since GitLab supports this, I'm pretty sure GitHub will too.

Filipe

________________________________
From: Nathan Hartman <[email protected]>
Sent: Friday, February 6, 2026 4:22 PM
To: [email protected] <[email protected]>
Subject: Re: Distributed CI attempt/POC/early prototype

[External: This email originated outside Espressif]

Hi Filipe,

I think part of the motivation is this, and Sebastien please correct me if I'm wrong:

Just running the compiler and generating a binary is fine and dandy for catching things like syntax errors or breakage caused by an API being changed unexpectedly, but a successful build doesn't mean the result will actually work. The flaw in NuttX's GitHub-based CI is that building the code is all it can do. It can't load the resulting firmware onto real hardware and run any further tests with that hardware.

Sebastien's testing framework could probably be made to do that, if we can solve the question of how to properly script the building, flash-programming, interfacing, running ostest and possibly other programs on the board, determining success or failure, and collecting the results back to the server. An added challenge is that each developer has a certain subset of compilers and boards they can test with, so you only want to dispatch appropriate tests to each developer's test machine.

If GitHub's self-hosted runners can provide all of those capabilities, please let us know!

Thanks,
Nathan

On Fri, Feb 6, 2026 at 12:58 PM Filipe Cavalcanti <[email protected]> wrote:
> Hello,
>
> This seems like a nice project, but I believe what you propose already
> exists and is supported by GitHub itself:
> https://docs.github.com/en/actions/concepts/runners/self-hosted-runners
>
> I'm pretty sure the current CI already supports it. Basically, it allows
> you to run a CI job anywhere that has a runner registered with the
> repository.
>
> I'm familiar with GitLab self-hosted runners, and this looks similar. In
> my case, I have a local runner (which is just a Docker image) running on
> my local PC, and a CI trigger with the correct tag makes the job execute
> right there. No need to download anything or do any fancy setup.
>
> All in all, if we want to integrate GitHub CI with other people's PCs to
> make use of more computational power, we should stick with the GitHub
> runners. Our CI just needs optimization, and I think NTFC is getting
> there. It could get a productivity boost from ntxbuild to script the
> builds.
>
> Best regards,
> Filipe
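On the dispatch side, GitHub's REST API can list a repository's self-hosted runners together with their labels, so a scheduling step could pick only machines advertising the right hardware. A rough sketch (the repository path, token scope, and the "esp32" label are assumptions):

    import os
    import requests

    # Assumption: a token with admin access to the repository; the
    # repository path below is illustrative.
    TOKEN = os.environ["GITHUB_TOKEN"]
    URL = "https://api.github.com/repos/apache/nuttx/actions/runners"

    resp = requests.get(
        URL,
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/vnd.github+json",
        },
    )
    resp.raise_for_status()

    # Keep only online runners advertising an "esp32" label.
    for runner in resp.json()["runners"]:
        labels = {label["name"] for label in runner["labels"]}
        if runner["status"] == "online" and "esp32" in labels:
            print(runner["name"], "can take the job")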
> ________________________________
> From: [email protected] <[email protected]>
> Sent: Friday, February 6, 2026 2:34 PM
> To: [email protected] <[email protected]>
> Subject: Re: Distributed CI attempt/POC/early prototype
>
> [External: This email originated outside Espressif]
>
> On 2026-02-04 18:34, Sebastien Lorquet wrote:
> > I have decided to work on distributed CI because Alan clearly listed
> > this as a tool that will help the community.
> >
> > It won't be fancy initially, but it can be improved later. Release
> > early, release often, yeah?
> >
> > So I am writing a tool that will allow many clients to fetch and
> > report jobs, in a way that can run on ANY machine with Python and
> > build tools.
>
> If I may, I'd like to suggest a few things for consideration.
>
> "For safety, only approved users can retrieve jobs." I think it's worth
> not tying requests for help to a pre-selected set of trustworthy users.
> (And let's be honest: how many people in the community do you truly know
> well enough to safely consider them trustworthy? Or, in a slightly
> different scenario: if someone trustworthy asks you for access, how sure
> are you that the person asking is really the person you know?)
>
> What I am aiming at is that you could also consider the model used by
> the BOINC framework: its tasks are distributed to multiple workers, the
> results are compared, and a result is only considered valid if they
> match. A SHA-256 checksum of the resulting binary could be used for this
> validation. (Albeit, from a security standpoint, there is still the
> problem that without enough participating users, someone can simply
> create multiple accounts and fish for getting the same job.)
>
> You could also combine the two approaches and consider some users more
> trustworthy than others. You probably want random usernames too. Or,
> better yet, have users identify themselves by signing the request with a
> key (of which the server knows the public part). But in short: make
> offering help as easy as possible (some people may be too shy to request
> access).
>
> You may also want a worker ID in the request, so the server can tell
> that two requests from a single user came from two workers (and not from
> one worker that crashed and lost its first job).
>
> Another thing that could prove useful in the request is an (optional?)
> list of targets the worker can process. For example, it's pretty easy to
> make your machine capable of building for x86, but maybe not so much for
> some ARM chip that needs external libraries from the manufacturer. (The
> machine may have a policy that only software from the distro repository
> may be installed on it, for instance.)
>
> Other than that, the protocol seems OK to me at first glance.
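The BOINC-style validation suggested above is cheap to prototype: if builds are reproducible, the server only has to compare artifact checksums reported by independent workers. A minimal sketch (the quorum rule and the example digests are made up for illustration):

    import hashlib
    from collections import Counter

    def binary_digest(path):
        """SHA-256 of a build artifact, streamed for large files."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def validate(reported_digests, quorum=2):
        """Accept a result only if `quorum` workers agree.
        Assumes builds are reproducible, so honest workers
        produce bit-identical binaries."""
        digest, count = Counter(reported_digests).most_common(1)[0]
        return digest if count >= quorum else None

    # Example: three workers built the same job; two agree.
    print(validate(["ab12...", "ab12...", "ff00..."]))  # -> "ab12..."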

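The signed-request idea could be as small as one Ed25519 keypair per worker. A minimal sketch using the cryptography package; the request fields (worker_id, targets) follow the suggestions above, but the wire format is invented for illustration:

    import json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    # Worker side: sign the job request with a long-lived keypair.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()  # server stores this part

    request = json.dumps({
        "worker_id": "filipe-desktop-01",       # hypothetical worker ID
        "targets": ["x86_64", "xtensa-esp32"],  # what this box can build
        "action": "fetch_job",
    }, sort_keys=True).encode()

    signature = private_key.sign(request)

    # Server side: verify before handing out a job. Raises
    # InvalidSignature if the request was not signed by a known key.
    public_key.verify(signature, request)
    print("request accepted")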