We already have a CUDA builder in ursabot [1]; we just need to enable
--runtime=nvidia for the Docker worker.

[1]:
https://github.com/ursa-labs/ursabot/blob/master/ursabot/builders.py#L445
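
A rough sketch of how that could look, assuming Buildbot's DockerLatentWorker
is in use (it forwards a `hostconfig` dict to docker-py); the helper name and
image tag below are illustrative, not taken from ursabot:

```python
# Hypothetical sketch: request the NVIDIA runtime for a Buildbot Docker
# worker by adding {"runtime": "nvidia"} to the docker-py host config.
# This is the programmatic equivalent of `docker run --runtime=nvidia`.

def make_cuda_hostconfig(base=None):
    """Return a docker-py host config dict selecting the NVIDIA runtime."""
    config = dict(base or {})          # keep any existing host settings
    config["runtime"] = "nvidia"       # same effect as --runtime=nvidia
    return config

# In a Buildbot master.cfg this might be wired up roughly as:
#   worker = DockerLatentWorker(
#       "cuda-worker", password,
#       image="nvidia/cuda:10.1-devel-ubuntu18.04",  # example tag
#       hostconfig=make_cuda_hostconfig(),
#   )
```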

On Fri, Jun 21, 2019 at 9:58 PM Keith Kraus <kkr...@nvidia.com> wrote:

> There's nvidia-docker (https://github.com/NVIDIA/nvidia-docker) which
> handles passing through the GPU devices and necessary driver modules into a
> docker container. CUDA doesn't get mapped in, as it's userspace, so you'll
> need to either use an image with CUDA baked in (e.g.
> https://hub.docker.com/r/nvidia/cuda) or install CUDA yourself into your
> container.
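
An illustrative sketch of what Keith describes: run a container from a CUDA
base image under the NVIDIA runtime and check that the driver is visible.
The image tag is an example; any nvidia/cuda tag would do:

```python
import subprocess

def cuda_smoke_test_cmd(image="nvidia/cuda:10.1-base-ubuntu18.04"):
    """Build the `docker run` invocation for a GPU visibility check."""
    return [
        "docker", "run", "--rm",
        "--runtime=nvidia",   # nvidia-docker: map GPUs + driver libs through
        image,
        "nvidia-smi",         # succeeds only if the devices were mapped in
    ]

def run_smoke_test():
    # Requires Docker and the NVIDIA runtime installed on the host.
    return subprocess.run(cuda_smoke_test_cmd(), check=True)
```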
>
> -Keith
>
> On 6/21/19, 2:20 PM, "Antoine Pitrou" <solip...@pitrou.net> wrote:
>
>
>     Is it possible to test CUDA under a Docker container?
>
>     I feel like I'm the only person who routinely tests CUDA on my home
>     machine :-) And of course I only do that on Linux...
>
>     Regards
>
>     Antoine.
>
>
>     On Fri, 21 Jun 2019 12:23:10 -0500
>     Wes McKinney <wesmck...@gmail.com> wrote:
>     > hi folks,
>     >
>     > I would suggest the following development approach to help with
>     > increasing our CI capacity:
>     >
>     > 1. For all Linux builds, refactor the Travis CI jobs to be Docker-based
>     > and not depend on Travis-CI-specific state or environment variables.
>     > 2. Add such jobs to Ursabot. If there is satisfaction with the service
>     > provided by these builds, then the Travis CI entry can be toggled off,
>     > but we should preserve the Travis CI configuration so the jobs can be
>     > turned back on.
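
Step 1 above could be sketched like this; the image name and script path are
hypothetical, not the actual Arrow setup:

```python
import subprocess

def docker_ci_cmd(image, script, source_dir):
    """Run a build script inside a container, mounting the checkout as a
    volume instead of relying on CI-specific environment variables."""
    return [
        "docker", "run", "--rm",
        "-v", f"{source_dir}:/arrow",   # the only host state the job needs
        image,
        "/arrow/" + script,
    ]

def run_ci_job(image, script, source_dir):
    # Works identically on Travis, Buildbot, or a developer laptop.
    return subprocess.run(docker_ci_cmd(image, script, source_dir), check=True)
```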
>     >
>     > I'm not sure what to do about Windows and macOS jobs.
>     >
>     > An obvious initial candidate for this process is the lint job, to give
>     > faster linter failures on PRs, which currently can take a while.
>     >
>     > Thoughts?
>     >
>     > - Wes
>     >
>     > On Mon, Jun 17, 2019 at 3:48 PM Krisztián Szűcs
>     > <szucs.kriszt...@gmail.com> wrote:
>     > >
>     > > That's right, OWNER, MEMBER and CONTRIBUTOR roles are allowed:
>     > >
>     > > CONTRIBUTOR: Author has previously committed to the repository.
>     > > MEMBER: Author is a member of the organization that owns the repository.
>     > > OWNER: Author is the owner of the repository.
>     > >
>     > > See https://developer.github.com/v4/enum/commentauthorassociation/
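
A minimal sketch of the permission check described above: allow a GitHub
comment to trigger builds only for these author associations (values from
the CommentAuthorAssociation enum linked above):

```python
# Associations permitted to trigger CI runs from GitHub comments.
ALLOWED_ASSOCIATIONS = {"OWNER", "MEMBER", "CONTRIBUTOR"}

def may_trigger_build(author_association):
    """Return True if a comment author with this association may trigger CI."""
    return author_association.upper() in ALLOWED_ASSOCIATIONS
```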
>     > >
>     > > On Mon, Jun 17, 2019 at 3:16 PM Wes McKinney <wesmck...@gmail.com> wrote:
>     > >
>     > > > On Mon, Jun 17, 2019 at 7:25 AM Krisztián Szűcs
>     > > > <szucs.kriszt...@gmail.com> wrote:
>     > > > >
>     > > > > On Sun, Jun 16, 2019 at 6:17 AM Micah Kornfield
>     > > > > <emkornfi...@gmail.com> wrote:
>     > > > >
>     > > > > > Hi Krisztian,
>     > > > > > This is really cool, thank you for doing this. Two questions:
>     > > > > > 1.  How reliable is the build setup? Is it reliable enough at
>     > > > > > this point to be considered a merge blocker if a build fails?
>     > > > > >
>     > > > >  IMO yes.
>     > > > >
>     > > > > > 2.  What is the permission model for triggering runs? Is it
>     > > > > > open to anybody on GitHub? Only Ursa Labs members? Committers?
>     > > > > >
>     > > > > Most of the builders are triggered automatically on each commit.
>     > > > > Specific control buttons are available to Ursa Labs members at the
>     > > > > moment, but I can grant access to other organizations (e.g. apache)
>     > > > > and individual members.
>     > > > >
>     > > >
>     > > > You're talking about the Buildbot UI here? Suffice to say, if any
>     > > > CI system is going to be depended on for decision-making, then any
>     > > > _contributor_ needs to be able to trigger runs. It seems that
>     > > > presently any contributor can trigger builds from GitHub comments;
>     > > > is that right?
>     > > >
>     > > > > >
>     > > > > > Thanks,
>     > > > > > Micah
>     > > > > >
>     > > > > > On Fri, Jun 14, 2019 at 2:30 PM Antoine Pitrou
>     > > > > > <anto...@python.org> wrote:
>     > > > > >
>     > > > > > >
>     > > > > > > Le 14/06/2019 à 23:22, Krisztián Szűcs a écrit :
>     > > > > > > >>
>     > > > > > > >> * Do machines have to be co-located on the same physical
>     > > > > > > >> network as the master, or can they reside in other locations?
>     > > > > > > >>
>     > > > > > > > It is preferable to have the master on the same network as
>     > > > > > > > the workers, because the build steps are RPC calls made by
>     > > > > > > > the master.
>     > > > > > >
>     > > > > > > I'm not aware of this being a problem.
>     > > > > > > CPython has build workers all over the world (contributed by
>     > > > > > > volunteers) connected to a single build master.
>     > > > > > >
>     > > > > > > Regards
>     > > > > > >
>     > > > > > > Antoine.
>     > > > > > >
>     > > > > >
>     > > >
>     >
>
>
