Ideally, we should execute all/most of the tests with GPU enabled.
However, given that GPU-specific code changes are limited, I agree that we
can start slowly.

Regards,
Arnab..

On Mon, Nov 8, 2021 at 10:57 PM Baunsgaard, Sebastian
<baunsga...@tugraz.at.invalid> wrote:

> Great topic.
>
> It would be nice if we could use some small GPU cloud instances on demand;
> then maybe we can find some budget, if it is not too large a cost.
>
> Best regards
> Sebastian
> ________________________________
> From: Janardhan <janard...@apache.org>
> Sent: Monday, November 8, 2021 7:30:00 PM
> To: dev@systemds.apache.org
> Subject: [DISCUSS][SYSTEMDS-3210] Self hosted runner for the GPU
> integration test suite
>
> Hi all,
>
> TL;DR: GPU integration tests with self-hosted runners
>
> Our integration tests span most of the system, including basic checks
> for the docs.
>
> One component that would need extensive testing is the GPU backend.
> Although GPU-specific machines are not available on the GitHub-hosted
> workflow runners, with the help of self-hosted runners [1] we could run
> the GPU tests the same way the current testing is done.
>
> As a starting point, I have implemented this on my fork [2] as an
> experimental feature, and I am also planning to be in touch with INFRA.
> The plan is to run this action every Saturday and on PRs touching the
> gpu component, as per the suggestion made by Mark (thank you!).
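>
> For illustration, a minimal sketch of how the triggers and the
> self-hosted label could look (the file name, runner label, paths
> filter, and test command below are placeholders, not the exact
> workflow from [2]):
>
>   # .github/workflows/gpu-tests.yml (hypothetical name)
>   name: GPU Integration Tests
>   on:
>     schedule:
>       - cron: '0 3 * * 6'   # every Saturday at 03:00 UTC
>     pull_request:
>       paths:
>         # example filter; actual gpu component paths may differ
>         - 'src/main/java/org/apache/sysds/runtime/instructions/gpu/**'
>   jobs:
>     gpu-tests:
>       # labels must match those registered on the self-hosted runner
>       runs-on: [self-hosted, gpu]
>       steps:
>         - uses: actions/checkout@v2
>         - name: Run GPU test suite
>           run: mvn test   # the actual GPU test selection would go here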
>
> Hoping that this helps keep confidence in that part of the code and also
> removes some of the cognitive burden on the GPU devs. :)
>
> --
> [1]
> https://docs.github.com/en/actions/hosting-your-own-runners/about-self-hosted-runners
> [2] https://github.com/j143/systemds/commits/main-gpu
>
> Thank you,
> Janardhan
>