Since Docker was mentioned: I use the community's CROPS containers via
Docker in GitLab CI on a shared build server, exposing the builders'
downloads and sstate caches to the whole team to accelerate their own
builds (these paths are volume-mounted into the runners).  One caveat
to this approach: if you use the containers on a shared build host, you
should limit each builder's bitbake parallelization (PARALLEL_MAKE and
the like).  This prevents individual containers from causing one
another to fail by competing for the host's resources (yes, you can set
GitLab docker runner limits, but those limits are invisible to the
container).  The good news is that these variables are on bitbake's
environment whitelist, so you do not have to set them in a conf file;
exporting them in the build environment is enough, meaning each build
runner can be tuned to the build host executing it.
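As a rough sketch of what that export looks like in a runner's build
script (the divisor of 4 is just an assumed number of concurrent
builder containers per host, not anything from GitLab or bitbake --
tune it to your own setup):

```shell
# Cap bitbake/make parallelism per container so several builder
# containers sharing one host do not oversubscribe it.
MAX_BUILDS=4                 # assumed concurrent builds per host
NPROC=$(nproc)               # cores visible on this build host
JOBS=$(( NPROC / MAX_BUILDS ))
[ "$JOBS" -lt 1 ] && JOBS=1  # never go below one job

export PARALLEL_MAKE="-j ${JOBS}"   # per-recipe make/ninja jobs
export BB_NUMBER_THREADS="${JOBS}"  # concurrent bitbake tasks

# Both variables are on bitbake's environment whitelist, so exporting
# them here is enough -- no conf-file change needed.
echo "PARALLEL_MAKE='${PARALLEL_MAKE}' BB_NUMBER_THREADS=${BB_NUMBER_THREADS}"
```

Because this runs in the runner's environment rather than a conf file,
the same CI definition can be reused across hosts of different sizes.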

I based my tuning on this person's work,
https://elinux.org/images/d/d4/Goulart.pdf, a presentation given a few
years back at an ELC event.  It contains a significant amount of
information about project flow and other topics you might also find
interesting.

Cheers,

Thomas

On Mon, Feb 17, 2020 at 7:52 AM [email protected] <[email protected]>
wrote:

> On Mon, 17 Feb 2020, Quentin Schulz wrote:
>
> > Hi Philip,
> >
> > *Very* quick and vague answer as it's not something I'm doing right now.
> > I can only give hints to where to look next.
> >
> > On Mon, Feb 17, 2020 at 04:27:17AM -0800, [email protected]
> wrote:
> > > Hi,
> > >
> > > I'm looking for some advice about the best way to implement a
> > > build environment in the cloud for multiple dev teams which will
> > > scale as the number of dev teams grow.
> > >
> > > Our devs are saying:
> > >
> > > *What do we want?*
> > >
> > > To scale our server-based build infrastructure, so that engineers
> > > can build branches using the same infrastructure that produces a
> > > releasable artefact, before pushing it into develop. As much
> > > automation of this as possible is desired.
> > >
> > > *Blocker* : Can’t just scale current system – can’t keep throwing
> > > more hardware at it, particularly storage. The main contributor to
> > > storage requirements is using a local cache in each build
> > > workspace and there will be one workspace for each branch, per
> > > Jenkins agent: 3 teams x 10 branches per team x 70 GB per
> > > branch/workspace x number of build agents (let's say 5) = 10 TB. As
> > > you can see this doesn’t scale well as we add branches, teams or
> > > build agents. Most of this 10 TB is the caches in each workspace,
> > > where most of the contents of each individual cache is identical.
> > >
> >
> > Have you had a look at INHERIT += "rm_work"? It should get rid of
> > most of the space in the work directory (we use it; the benefit in
> > terms of storage space is tremendous).
> >
> > c.f.
> >
> https://www.yoctoproject.org/docs/current/mega-manual/mega-manual.html#ref-classes-rm-work
>
>   in addition, you can always override that build-wide setting with
> RM_WORK_EXCLUDE if you want to keep generated work from a small set of
> recipes for debugging.
>
> rday
> 
>
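A minimal sketch of the rm_work suggestion above as it would appear in
local.conf (the variable names are the real ones from the Yocto
reference manual; the excluded recipe names are just examples):

```
# local.conf: delete each recipe's work directory once it has finished
# building, reclaiming most of the per-workspace storage
INHERIT += "rm_work"

# Keep the work directories of a few recipes you actively debug
RM_WORK_EXCLUDE += "linux-yocto busybox"
```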
View/Reply Online (#48453): https://lists.yoctoproject.org/g/yocto/message/48453