tmedicci commented on issue #15730: URL: https://github.com/apache/nuttx/issues/15730#issuecomment-2681785660
Hi @lupyuen , just commenting on it:

> * It fails randomly, according to [NuttX Dashboard](https://nuttx-dashboard.org/d/fe2q876wubc3kc/nuttx-build-history?from=now-7d&to=now&timezone=browser&var-arch=$__all&var-subarch=$__all&var-board=rv-virt&var-config=citest&var-group=$__all&var-Filters=)

It fails to execute runtime testing (the LTP). Although that testing is important, we don't need to run it on every single board. My idea is 1) to decouple build testing from runtime testing (which solves the problem of being stuck), and 2) to run only the most basic tests (`ostest`, `free`, `mm`, etc.).

> * If CI Test crashes: [Everything runs super slowly, up to 1 hour](https://github.com/apache/nuttx/issues/14808#issue-2661180633)

We can run LTP only on sim, for instance. This could be a parallel job.

> * `rv-virt:nsh64` and `knsh64` won't run correctly with [QEMU on Docker on GitHub Actions](https://lupyuen.github.io/articles/rust6#appendix-nuttx-qemu-risc-v-fails-on-github-actions)

Yes, this is important, but we don't need to test every single `defconfig` of a board on QEMU. Do we have an issue to track this problem?

> * Adding CI Tests might exceed our GitHub Usage Quota [(we're roughly at 50% right now)](https://github.com/apache/nuttx/issues/15451#issuecomment-2611341475)

The whole idea of creating intermediate steps is to lower CI usage: if a previous workflow failed, we don't need to run the subsequent workflows. Consider a PR that triggers all target groups: by first building a pre-defined set of defconfigs covering every arch, we would build 100+ configs up front (and only build the rest if those succeed) instead of building 1600+ in parallel. The overall usage is expected to drop, and we could spend the savings on QEMU runtime testing, for instance.
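The staged idea above could be sketched as a GitHub Actions workflow fragment. This is only an illustration, not the actual NuttX CI configuration: the job names, the defconfig list, and the `run_qemu_tests.sh` helper are hypothetical placeholders.

```yaml
# Hypothetical sketch: gate the full 1600+ build matrix and the QEMU
# runtime tests behind a small "canary" build of representative defconfigs.
jobs:
  canary-build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # One representative defconfig per arch (illustrative list only)
        config: [rv-virt:nsh, sim:nsh, esp32-devkitc:nsh]
    steps:
      - uses: actions/checkout@v4
      - run: ./tools/configure.sh ${{ matrix.config }} && make -j$(nproc)

  full-build:
    needs: canary-build        # skipped entirely if any canary build fails
    runs-on: ubuntu-latest
    # ... the existing full target-group matrix would go here ...

  qemu-runtime-test:
    needs: canary-build        # runs in parallel with full-build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Only the basic tests (ostest, free, mm), not the full LTP;
      # run_qemu_tests.sh is a hypothetical wrapper script.
      - run: ./tools/ci/run_qemu_tests.sh rv-virt:citest ostest
```

With `needs:`, a failed canary short-circuits both downstream jobs, so a broken PR burns minutes on ~100 builds rather than 1600+.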
> * Adding a Self-Hosted GitHub Runner requires permission from [ASF Infra Team](https://infra.apache.org/self-hosted-runners.html)
> * GitHub Runners need to be secured against [malicious scripts and code](https://infra.apache.org/self-hosted-runners.html) inside PRs, which means we need to budget for Hardened Servers and a Security Team to maintain them
> * Also watch out for sneaky attacks like [QEMU Semihosting Breakout](https://lupyuen.org/articles/testbot2.html#semihosting-breakout)

Do we **need** a security team for sure? The idea is to run QEMU testing on GH runners. Self-hosted runners would only run tests on real hardware after the previous steps finished successfully. These HW tests would be restricted to a pre-defined defconfig (`citest` or `hwtest`) and known apps (like `ostest`), and the firmware would already have been tested successfully on sim and QEMU. Perhaps we can use the ["review deployments"](https://docs.github.com/en/actions/managing-workflow-runs-and-deployments/managing-deployments/reviewing-deployments) feature to trigger the HW testing manually.
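The manual gate via review deployments could look like the sketch below. It assumes a GitHub environment named `hw-lab` configured with required reviewers; the environment name, runner labels, and `run_hw_test.sh` script are hypothetical.

```yaml
jobs:
  hw-test:
    needs: qemu-runtime-test     # firmware already validated on sim/QEMU
    runs-on: [self-hosted, hw-lab]
    # The job pauses here until a maintainer approves the deployment
    # (required reviewers are set under Settings -> Environments -> hw-lab).
    environment: hw-lab
    steps:
      - uses: actions/checkout@v4
      # Restricted to one pre-defined defconfig and known apps only;
      # run_hw_test.sh is a hypothetical wrapper around the board flasher.
      - run: ./tools/ci/run_hw_test.sh rv-virt:hwtest ostest
```

Because the self-hosted runner only ever executes this approval-gated, fixed test script, the attack surface from arbitrary PR code is much smaller than running full CI on it.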