Re: [Yade-dev] Docker/Singularity images for production (and possibly development)

2021-03-15 Thread Anton Gladky
Hi Bruno,

I have created several branches for all supported distributions (ubuntu and
debian).
Both yade and yadedaily are installed, but the doc packages were dropped; I
do not think they are needed for production.

I also tried to reduce the size of the images, so they are just a little
over 1 GB.
Pipelines are scheduled for every day (after newly built yadedaily packages
are placed into the repository).

For yadedaily, one also needs to install the python-mpi4py package. This
should be fixed with this MR [1].

[1] https://gitlab.com/yade-dev/trunk/-/merge_requests/636

Please test the setup, and when it is OK, I will update the documentation.
Basically, you need to run one of the following:

docker run --rm -it registry.gitlab.com/yade-dev/docker-prod:ubuntu20.04

or

docker run --rm -it registry.gitlab.com/yade-dev/docker-prod:debian-buster

Best regards

Anton


On Fri, 5 Mar 2021 at 10:38, Bruno Chareyre <
bruno.chare...@3sr-grenoble.fr> wrote:

> Hi there,
>
> I'm planning to build new docker images in yade's gitlab for production,
> and possibly for development (see second part of this message, some
> background comes first).  This is open to suggestions.
>
> * Background:
>
> I recently started playing with "Singularity" images since I found our HPC
> department made it available on the clusters. There was also a user
> mentioning that on launchpad recently. From end-user POV, singularity
> images work like docker images, but a very practical difference is that it
> is allowed on our (and others') HPC. Docker isn't, for security reasons.
>
> It made running yade so easy. The following command worked immediately, and
> should work just the same on every system with singularity installed:
>
> ssh myHPC
> singularity exec docker://registry.gitlab.com/bchareyre/docker-yade:ubuntu20.04-daily yadedaily --check
>
> or equivalently:
>
> export YADE='singularity exec docker://registry.gitlab.com/bchareyre/docker-yade:ubuntu20.04-daily yadedaily'
> $YADE --check
> $YADE myScript.py
> etc.
>
> Key points:
> 1- singularity accepts docker images in input.
> 2- the above command is using some custom docker with yadedaily
> pre-installed (which then needs to be downloadable from somewhere where
> docker is permitted)
> 3- it is compatible with MPI(!). The host system's MPI is able to
> communicate with the image system's MPI in a scenario like this, as if it
> was just yade running natively on the host:
>
> mpirun -np 500 $YADE someParrallelStuff.py
>
> 4- a condition for this MPI magic to work is that the mpi library is the
> same version on the host and in the executed image
> 5- performance: no measurable difference compared to a yade compiled on
> the host (be it running -j1, -jN or mpiexec).
>
> For the moment the custom dockers are built in [1].
> I'm also building a Singularity image with [2],
> but I didn't really use it since I can build it from docker directly on the
> cluster (building the singularity image is implicit in singularity exec
> docker://...). Building on-site may not be allowed everywhere, though,
> and in that case [2] could be useful.
>
> * What can be done:
>
> I will move [1,2] or something similar to gitlab/yade-dev and advertise it
> in the install page. Also build more versions for people to use. More
> versions because of the MPI point above (4): depending on the host system
> someone may want OMPI v1 (ubuntu16), or v2 (ubuntu18), etc.
>
> For production images it would make sense to just use canonical
> debian/ubuntu with yade and/or yadedaily preinstalled. But that is not
> exactly what I did for the moment. Instead I used docker images from our
> registry, which implies the images have yade, and also everything needed to
> compile yade (I didn't test compilation yet, but it should work).
>
> I was thinking of splitting that into two types of images: minimal images
> for production and "dev" images with all compilation prerequisites. Then I
> realized that the best "dev" image would be - by far - one reflecting the
> state of the system at the end of our current pipeline, i.e. one with a
> full /build folder and possibly ccache info (if not too large).
>
> If such dev images were pushed to yade registry then anyone could grab
> latest build and recompile incrementally. It could save a lot of
> (compilation) time for us when trying to debug something on multiple
> distros.
>
> And what about this: compiling with a ubuntu20 docker image on a ubuntu20
> host should make it possible to use the pipeline's ccache while still
> running yade on the native system (provided that the install path is in the
> host filesystem).
>
> Maybe pushing to registry could be done directly as part of the current
> pipeline, not sure yet.

Re: [Yade-dev] Docker/Singularity images for production (and possibly development)

2021-03-06 Thread Janek Kozicki (yade)
Yeah, let's do that. Friday for me is the best option, sometime after 12:00.

Anton Gladky said: (by the date of Sat, 6 Mar 2021 19:46:54 +0100)

> Hi Bruno,
> 
> this is very interesting! I had never heard about singularity before. Thanks
> for the information.
> 
> From my point of view, it is not a problem at all to build docker images
> with yadedaily inside, if it is helpful for you and anybody else. I have
> some concerns about large dev-images, but I am also open to it.
> 
> I would propose organizing a short zoom/bbb/jitsi video-meeting to
> discuss it by voice. We are already practicing this with Janek and Klaus
> for some other (paper) stuff, and it works perfectly and is just faster
> than writing long emails.
> 
> Best regards
> 
> Anton
> 
> 
> On Sat, 6 Mar 2021 at 19:18, Bruno Chareyre <
> bruno.chare...@3sr-grenoble.fr> wrote:
> 
> >
> > On 06/03/2021 17:06, Janek Kozicki (yade) wrote:
> >
> > I am not exactly sure what you want to discuss,
> >
> > I don't know either LOL. That's more an announcement in advance so someone
> > can raise issues, ask features, etc.
> >
> > Do you want to create some sort of packages with yade installed inside?
> >
> > You can call it a package, but it's more like some docker images in a
> > different format (from a very macroscopic point of view). The main thing is
> > that it is allowed on HPC (where compiling yade can be a big pain) and it
> > seems to become more popular.
> >
> > Hence people will look for a yade-docker target (one with yade inside) in
> > order to build their singularity images, and it is fairly easy to offer
> > some.
> >
> > Mind that before using Singularity I had never been able to get all
> > checkTests to pass on our HPC cluster. I was able to run what I needed most
> > of the time, but never to pass all tests. There was always an issue with
> > something.
> >
> > I am not sure if yade-dev registry will be able to hold big
> > docker images.
> >
> > Good point, though the images have no reason to be much bigger than our
> > current docker images. The problem would be more images, not larger images.
> > I will check registry limit. If it is a problem I can keep pushing to
> > gitlab.com/bchareyre registry, not an issue.   See:
> > https://gitlab.com/bchareyre/docker-yade/container_registry/1672064
> >
> > As you see the images are from 1.1GB to 1.7GB, not a big increment.
> >
> > We may run out of space if we don't start paying
> > gitlab for hosting.
> >
> > Not an issue. What I described is what I'm already doing under my account
> > (without paying). If migrating one thing from gitlab/bchareyre to
> > gitlab/yade-dev is the cause of running out of space, then I'll just not
> > migrate. It is not a problem to provide the images to the users under my
> > registry.
> >
> > Perhaps these singularity_docker packages should also be on yade-dem.org ?
> >
> > Excessively complex. We would have to set up a registry on our local server
> > while gitlab does that very well.
> >
> >
> > The interesting stuff for me would be if we could use these HPC
> > singularity servers in our gitlab continuous integration pipeline :)
> >
> > If you mean accessing more hardware resources, no, it will not work in
> > Grenoble.
> > The HPC clusters are dedicated to scientific computing. They have special
> > job submission systems; it will absolutely not integrate into a CI framework.
> >
> > The yade-runner-01 quickly runs out of space whenever I try to enable
> > it ;-)
> >
> > Yeah, but this is a completely different type of resource, even if they
> > are provided by the same people overall.
> > Maybe it is a good time to check again how I could get gitlab runners for
> > yade. They have improved a number of things and offered new services in
> > recent years. There might be docker farms more easily accessible now than
> > when Rémi configured yade-runner-01. Rémi was basically ahead of things.
> >
> >
> > Maybe it is only a matter of single line in
> > file /etc/gitlab-runner/config.toml , change:
> >
> >   executor = "docker"
> >
> > to
> >
> >   executor = "singularity"
> >
> > I think this is quite likely.
> >
> >
> > Very likely but there is no point doing that, I think.
> > Why would you generate a singularity image from a docker image to achieve
> > something the docker image does just as well?
> > In the context of using gitlabCI/docker we have root privileges, hence no
> > issue with docker.
> >
> >
> > We already have incremental recompilation in our gitlab CI pipeline.
> > The ccache is used for that. The trick was to mount inside docker
> > (for you: inside singularity) a local directory from the host
> > filesystem, where the ccache files are stored.
> >
> > That the gitlab compilation is incremental doesn't make my own local
> > compilation incremental.
> > However if I can download a snapshot of the gitlab pipeline as a virtual
> > machine I can compile incrementally, locally, even though the initial
> > compilation wasn't local.
> >
> > 

Re: [Yade-dev] Docker/Singularity images for production (and possibly development)

2021-03-06 Thread Anton Gladky
Hi Bruno,

this is very interesting! I had never heard about singularity before. Thanks
for the information.

From my point of view, it is not a problem at all to build docker images
with yadedaily inside, if it is helpful for you and anybody else. I have
some concerns about large dev-images, but I am also open to it.

I would propose organizing a short zoom/bbb/jitsi video-meeting to
discuss it by voice. We are already practicing this with Janek and Klaus
for some other (paper) stuff, and it works perfectly and is just faster
than writing long emails.

Best regards

Anton


On Sat, 6 Mar 2021 at 19:18, Bruno Chareyre <
bruno.chare...@3sr-grenoble.fr> wrote:

>
> On 06/03/2021 17:06, Janek Kozicki (yade) wrote:
>
> I am not exactly sure what you want to discuss,
>
> I don't know either LOL. That's more an announcement in advance so someone
> can raise issues, ask features, etc.
>
> Do you want to create some sort of packages with yade installed inside?
>
> You can call it a package, but it's more like some docker images in a
> different format (from a very macroscopic point of view). The main thing is
> that it is allowed on HPC (where compiling yade can be a big pain) and it
> seems to become more popular.
>
> Hence people will look for a yade-docker target (one with yade inside) in
> order to build their singularity images, and it is fairly easy to offer
> some.
>
> Mind that before using Singularity I had never been able to get all
> checkTests to pass on our HPC cluster. I was able to run what I needed most
> of the time, but never to pass all tests. There was always an issue with
> something.
>
> I am not sure if yade-dev registry will be able to hold big
> docker images.
>
> Good point, though the images have no reason to be much bigger than our
> current docker images. The problem would be more images, not larger images.
> I will check registry limit. If it is a problem I can keep pushing to
> gitlab.com/bchareyre registry, not an issue.   See:
> https://gitlab.com/bchareyre/docker-yade/container_registry/1672064
>
> As you see the images are from 1.1GB to 1.7GB, not a big increment.
>
> We may run out of space if we don't start paying
> gitlab for hosting.
>
> Not an issue. What I described is what I'm already doing under my account
> (without paying). If migrating one thing from gitlab/bchareyre to
> gitlab/yade-dev is the cause of running out of space, then I'll just not
> migrate. It is not a problem to provide the images to the users under my
> registry.
>
> Perhaps these singularity_docker packages should also be on yade-dem.org ?
>
> Excessively complex. We would have to set up a registry on our local server
> while gitlab does that very well.
>
>
> The interesting stuff for me would be if we could use these HPC
> singularity servers in our gitlab continuous integration pipeline :)
>
> If you mean accessing more hardware resources, no, it will not work in
> Grenoble.
> The HPC clusters are dedicated to scientific computing. They have special
> job submission systems; it will absolutely not integrate into a CI framework.
>
> The yade-runner-01 quickly runs out of space whenever I try to enable
> it ;-)
>
> Yeah, but this is a completely different type of resource, even if they
> are provided by the same people overall.
> Maybe it is a good time to check again how I could get gitlab runners for
> yade. They have improved a number of things and offered new services in
> recent years. There might be docker farms more easily accessible now than
> when Rémi configured yade-runner-01. Rémi was basically ahead of things.
>
>
> Maybe it is only a matter of single line in
> file /etc/gitlab-runner/config.toml , change:
>
>   executor = "docker"
>
> to
>
>   executor = "singularity"
>
> I think this is quite likely.
>
>
> Very likely but there is no point doing that, I think.
> Why would you generate a singularity image from a docker image to achieve
> something the docker image does just as well?
> In the context of using gitlabCI/docker we have root privileges, hence no
> issue with docker.
>
>
> We already have incremental recompilation in our gitlab CI pipeline.
> The ccache is used for that. The trick was to mount inside docker
> (for you: inside singularity) a local directory from the host
> filesystem, where the ccache files are stored.
>
> That the gitlab compilation is incremental doesn't make my own local
> compilation incremental.
> However if I can download a snapshot of the gitlab pipeline as a virtual
> machine I can compile incrementally, locally, even though the initial
> compilation wasn't local.
>
> Note that the docker images are re-downloaded from gitlab only when
> they have been rebuilt on https://gitlab.com/yade-dev/docker-yade/-/pipelines
> And this download is pretty slow. Fortunately it happens only every
> few weeks. Otherwise docker uses the cached linux distro image.
>
> I see where I lost you. Singularity images (at least in my project) are
> not in any way related to CI.
> They 

Re: [Yade-dev] Docker/Singularity images for production (and possibly development)

2021-03-06 Thread Bruno Chareyre


On 06/03/2021 17:06, Janek Kozicki (yade) wrote:

I am not exactly sure what you want to discuss,


I don't know either LOL. That's more an announcement in advance so 
someone can raise issues, ask features, etc.



Do you want to create some sort of packages with yade installed inside?


You can call it a package, but it's more like some docker images in a 
different format (from a very macroscopic point of view). The main thing is 
that it is allowed on HPC (where compiling yade can be a big pain) and it 
seems to become more popular.


Hence people will look for a yade-docker target (one with yade inside) 
in order to build their singularity images, and it is fairly easy to 
offer some.


Mind that before using Singularity I had never been able to get all 
checkTests to pass on our HPC cluster. I was able to run what I needed 
most of the time, but never to pass all tests. There was always an issue 
with something.



I am not sure if yade-dev registry will be able to hold big
docker images.


Good point, though the images have no reason to be much bigger than our 
current docker images. The problem would be more images, not larger images.
I will check registry limit. If it is a problem I can keep pushing to 
gitlab.com/bchareyre registry, not an issue.   See: 
https://gitlab.com/bchareyre/docker-yade/container_registry/1672064


As you see the images are from 1.1GB to 1.7GB, not a big increment.


We may run out of space if we don't start paying
gitlab for hosting.


Not an issue. What I described is what I'm already doing under my 
account (without paying). If migrating one thing from gitlab/bchareyre 
to gitlab/yade-dev is the cause of running out of space, then I'll just 
not migrate. It is not a problem to provide the images to the users 
under my registry.



Perhaps these singularity_docker packages should also be on yade-dem.org ?


Excessively complex. We would have to set up a registry on our local 
server while gitlab does that very well.




The interesting stuff for me would be if we could use these HPC
singularity servers in our gitlab continuous integration pipeline :)


If you mean accessing more hardware resources, no, it will not work in 
Grenoble.
The HPC clusters are dedicated to scientific computing. They have 
special job submission systems; it will absolutely not integrate into a 
CI framework.




The yade-runner-01 quickly runs out of space whenever I try to enable
it ;-)


Yeah, but this is a completely different type of resource, even if 
they are provided by the same people overall.
Maybe it is a good time to check again how I could get gitlab runners 
for yade. They have improved a number of things and offered new services 
in recent years. There might be docker farms more easily accessible now 
than when Rémi configured yade-runner-01. Rémi was basically ahead 
of things.




Maybe it is only a matter of single line in
file /etc/gitlab-runner/config.toml , change:

   executor = "docker"

to

   executor = "singularity"

I think this is quite likely.



Very likely but there is no point doing that, I think.
Why would you generate a singularity image from a docker image to 
achieve something the docker image does just as well?
In the context of using gitlabCI/docker we have root privileges, hence 
no issue with docker.




We already have incremental recompilation in our gitlab CI pipeline.
The ccache is used for that. The trick was to mount inside docker
(for you: inside singularity) a local directory from the host
filesystem, where the ccache files are stored.


That the gitlab compilation is incremental doesn't make my own local 
compilation incremental.
However if I can download a snapshot of the gitlab pipeline as a virtual 
machine I can compile incrementally, locally, even though the initial 
compilation wasn't local.



Note that the docker images are re-downloaded from gitlab only when
they have been rebuilt on https://gitlab.com/yade-dev/docker-yade/-/pipelines
And this download is pretty slow. Fortunately it happens only every
few weeks. Otherwise docker uses the cached linux distro image.


I see where I lost you. Singularity images (at least in my project) are 
not in any way related to CI.
They are primarily related to how actual users get actual results 
(production).

And optionally, to how devs actually compile locally.


Well, download once (wait for download to finish), then start working.
Not much difference from waiting for local compilation (for me that's
inside chroot, sometimes inside docker) and then starting to work :)


With my university connection speed, downloading a docker image and 
recompiling just one *.cpp is way faster than downloading trunk and 
compiling everything from scratch. Like incredibly faster.
I'm not speaking of what happens on gitlab, I'm speaking of what happens 
on my own computer.





pushing to registry is part of the pipeline on docker-yade:

https://gitlab.com/yade-dev/docker-yade/-/blob/master/.gitlab-ci.yml#L17


Yes, that's 

Re: [Yade-dev] Docker/Singularity images for production (and possibly development)

2021-03-06 Thread Janek Kozicki (yade)
Hi,

I am not exactly sure what you want to discuss, so I will answer
some random parts of your email, and you will see that I completely
misunderstood :)

Do you want to create some sort of packages with yade installed inside?

Anton is building .deb packages for various distros.
Do you want to make some other kind of "packages" with yade, like
singularity/docker packages/images with yade inside?

I am not sure if the yade-dev registry will be able to hold big
docker images. We may run out of space if we don't start paying
gitlab for hosting. For this reason Anton cannot build yade-debug .deb
packages; such packages would be useful, but the files were too large. On
the other hand, the .deb packages built by Anton are already hosted
off-gitlab, in http://www.yade-dem.org/packages/

Perhaps these singularity_docker packages should also be on yade-dem.org ?

> From end-user POV, singularity images work like docker images

The interesting stuff for me would be if we could use these HPC
singularity servers in our gitlab continuous integration pipeline :)
The yade-runner-01 quickly runs out of space whenever I try to enable
it ;-)

Maybe it is only a matter of single line in
file /etc/gitlab-runner/config.toml , change:

  executor = "docker"

to

  executor = "singularity"

I think this is quite likely.


> If such dev images were pushed to yade registry then anyone could grab 
> latest build and recompile incrementally. 

We already have incremental recompilation in our gitlab CI pipeline.
The ccache is used for that. The trick was to mount inside docker
(for you: inside singularity) a local directory from the host
filesystem, where the ccache files are stored.

The fact that it starts from "only" sources without the previously
compiled yade binaries (incremental compilation) changes nothing:
these binaries are in the ccache, and are quickly fetched from there.
In fact when everything is ccached the build step takes about 1 minute
(as you may have noticed :).

When we were configuring CI, Anton tried to always fetch ccached
files from gitlab. This worked the same in principle. Just the
download of these ccached binaries was taking over 10 minutes. This
is why we switched to mounting a local filesystem inside docker.
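The mount trick can be sketched as follows (the paths and the image tag are illustrative, not the project's exact CI configuration). The command is built into a variable and echoed so it can be inspected before running:

```shell
# Expose a host ccache directory inside the container, so compilations
# there reuse the cached object files. Paths and image tag are
# illustrative placeholders, not the actual CI settings.
CCACHE_HOST="${CCACHE_HOST:-$HOME/.ccache}"
CMD="docker run --rm -it \
  -v $CCACHE_HOST:/root/.ccache \
  -e CCACHE_DIR=/root/.ccache \
  registry.gitlab.com/yade-dev/docker-yade:ubuntu20.04 bash"
echo "$CMD"   # inspect, then run with: eval "$CMD"
```

Inside the container, `ccache -s` should then report hits against the host's cache.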

Note that the docker images are re-downloaded from gitlab only when
they have been rebuilt on https://gitlab.com/yade-dev/docker-yade/-/pipelines
And this download is pretty slow. Fortunately it happens only every
few weeks. Otherwise docker uses the cached linux distro image.

And this is why your planned singularity solution may not be efficient.

> It could save a lot of (compilation) time for us when trying to
> debug something on multiple distros.

Well, download once (wait for download to finish), then start working.
Not much difference from waiting for local compilation (for me that's
inside chroot, sometimes inside docker) and then starting to work :)

> Maybe pushing to registry could be done directly as part of current 

pushing to registry is part of the pipeline on docker-yade:

https://gitlab.com/yade-dev/docker-yade/-/blob/master/.gitlab-ci.yml#L17


best regards
Janek

Bruno Chareyre said: (by the date of Fri, 5 Mar 2021 10:37:59 +0100)

> Hi there,
> 
> I'm planning to build new docker images in yade's gitlab for production, 
> and possibly for development (see second part of this message, some 
> background comes first).  This is open to suggestions.
> 
> * Background:
> 
> I recently started playing with "Singularity" images since I found our 
> HPC department made it available on the clusters. There was also a user 
> mentioning that on launchpad recently. From end-user POV, singularity 
> images work like docker images, but a very practical difference is that 
> it is allowed on our (and others') HPC. Docker isn't, for security reasons.
> 
> It made running yade so easy. The following command worked immediately, and 
> should work just the same on every system with singularity installed:
>
> ssh myHPC
> singularity exec docker://registry.gitlab.com/bchareyre/docker-yade:ubuntu20.04-daily yadedaily --check
> 
> or equivalently:
>
> export YADE='singularity exec docker://registry.gitlab.com/bchareyre/docker-yade:ubuntu20.04-daily yadedaily'
> $YADE --check
> $YADE myScript.py
> etc.
> 
> Key points:
> 1- singularity accepts docker images in input.
> 2- the above command is using some custom docker with yadedaily 
> pre-installed (which then needs to be downloadable from somewhere where 
> docker is permitted)
> 3- it is compatible with MPI(!). The host system's MPI is able to 
> communicate with the image system's MPI in a scenario like this, as if 
> it was just yade running natively on the host:
> mpirun -np 500 $YADE someParrallelStuff.py
>
> 4- a condition for this MPI magic to work is that the mpi library is the
> same version on the host and in the executed image
> 5- performance: no measurable difference compared to a yade compiled on
> the host (be it running -j1, -jN or mpiexec).

[Yade-dev] Docker/Singularity images for production (and possibly development)

2021-03-05 Thread Bruno Chareyre

Hi there,

I'm planning to build new docker images in yade's gitlab for production, 
and possibly for development (see second part of this message, some 
background comes first).  This is open to suggestions.


* Background:

I recently started playing with "Singularity" images since I found our 
HPC department made it available on the clusters. There was also a user 
mentioning that on launchpad recently. From end-user POV, singularity 
images work like docker images, but a very practical difference is that 
it is allowed on our (and others') HPC. Docker isn't, for security reasons.


It made running yade so easy. The following command worked immediately, and 
should work just the same on every system with singularity installed:

ssh myHPC
singularity exec docker://registry.gitlab.com/bchareyre/docker-yade:ubuntu20.04-daily yadedaily --check


or equivalently:

export YADE='singularity exec docker://registry.gitlab.com/bchareyre/docker-yade:ubuntu20.04-daily yadedaily'

$YADE --check
$YADE myScript.py
etc.

Key points:
1- singularity accepts docker images in input.
2- the above command is using some custom docker with yadedaily 
pre-installed (which then needs to be downloadable from somewhere where 
docker is permitted)
3- it is compatible with MPI(!). The host system's MPI is able to 
communicate with the image system's MPI in a scenario like this, as if 
it was just yade running natively on the host:

mpirun -np 500 $YADE someParrallelStuff.py

4- a condition for this MPI magic to work is that the mpi library is the 
same version on the host and in the executed image
5- performance: no measurable difference compared to a yade compiled on 
the host (be it running -j1, -jN or mpiexec).
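Point 4 can be checked up front before launching a big job. A minimal sketch (the `same_mpi` helper is hypothetical, written here only for illustration): compare the first line of `mpirun --version` on the host against the one reported inside the image.

```shell
# Compare two `mpirun --version` outputs; only the first line (the
# implementation and version string) matters for this check.
# Hypothetical helper, not part of yade.
same_mpi() {
    [ "$(printf '%s\n' "$1" | head -n1)" = "$(printf '%s\n' "$2" | head -n1)" ]
}

# Intended use on the cluster (left commented, since it needs
# singularity and an MPI installation):
#   host=$(mpirun --version)
#   img=$(singularity exec docker://registry.gitlab.com/bchareyre/docker-yade:ubuntu20.04-daily mpirun --version)
#   same_mpi "$host" "$img" || echo "host and image MPI differ" >&2
```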


For the moment the custom dockers are built in [1].
I'm also building a Singularity image with [2],
but I didn't really use it since I can build it from docker directly on 
the cluster (building the singularity image is implicit in singularity 
exec docker://...). Building on-site may not be allowed everywhere, 
though, and in that case [2] could be useful.
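Where on-site building is not allowed, a `.sif` file can be built once elsewhere and copied to the cluster. A sketch (the `sif_name` helper is hypothetical; it only mimics the default `name_tag.sif` naming that `singularity pull` uses):

```shell
# Derive the default .sif filename that `singularity pull` would produce
# from a docker:// reference (hypothetical helper, for illustration only).
sif_name() {
    ref="${1#docker://}"        # drop the scheme
    repo="${ref%%:*}"           # repository path without the tag
    tag="${ref##*:}"            # tag
    printf '%s_%s.sif' "${repo##*/}" "$tag"
}

# One-time conversion, then run without contacting a registry
# (left commented, since it needs singularity installed):
#   singularity pull docker://registry.gitlab.com/bchareyre/docker-yade:ubuntu20.04-daily
#   singularity exec docker-yade_ubuntu20.04-daily.sif yadedaily --check
```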


* What can be done:

I will move [1,2] or something similar to gitlab/yade-dev and advertise 
it in the install page. Also build more versions for people to use. 
More versions because of the MPI point above (4): depending on the host 
system someone may want OMPI v1 (ubuntu16), or v2 (ubuntu18), etc.


For production images it would make sense to just use canonical 
debian/ubuntu with yade and/or yadedaily preinstalled. But that is not 
exactly what I did for the moment. Instead I used docker images from our 
registry, which implies the images have yade, and also everything needed 
to compile yade (I didn't test compilation yet, but it should work).


I was thinking of splitting that into two types of images: minimal 
images for production and "dev" images with all compilation 
prerequisites. Then I realized that the best "dev" image would be - by 
far - one reflecting the state of the system at the end of our current 
pipeline, i.e. one with a full /build folder and possibly ccache info 
(if not too large).


If such dev images were pushed to yade registry then anyone could grab 
latest build and recompile incrementally. It could save a lot of 
(compilation) time for us when trying to debug something on multiple 
distros.


And what about this: compiling with a ubuntu20 docker image on a 
ubuntu20 host should make it possible to use the pipeline's ccache while 
still running yade on the native system (provided that the install path 
is in the host filesystem).


Maybe pushing to registry could be done directly as part of the current 
pipeline, not sure yet. I am still thinking about some aspects, but I 
think you get the general idea. Suggestions and advice are welcome. :)


Cheers

Bruno

[1] https://gitlab.com/bchareyre/docker-yade

[2] 
https://gitlab.com/bchareyre/yade-singularity/-/blob/master/.gitlab-ci.yml





___
Mailing list: https://launchpad.net/~yade-dev
Post to : yade-dev@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yade-dev
More help   : https://help.launchpad.net/ListHelp