[ 
https://issues.apache.org/jira/browse/BIGTOP-1323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14056860#comment-14056860
 ] 

Julien Eid commented on BIGTOP-1323:
------------------------------------

[~rvs]

All of this looks great and definitely coincides with my work as well. Glad to 
work with you on this issue!

+1 to almost everything on the semantic side of things. I just have one problem 
with the images layout. For the seed repository, we shouldn't be rehosting or 
recreating images that come from official sources, as doing so doesn't net us 
many positives. Of the platforms you listed, all except one are either 
sponsored by Docker or official/semi-official from the distro (Fedora is 
semi-official for 20 and will be official by 21). Only OpenSUSE comes entirely 
from third parties at the moment, but there seems to be momentum in their 
community toward an official build; you can see some of that progress here: 
http://flavio.castelli.name/2014/05/06/building-docker-containers-with-kiwi/ 
As a stopgap, I think it would be fine if we hosted our own OpenSUSE image in 
the seed repo, but for the others we should just stick with using upstream as 
our base images. And once OpenSUSE has an official image, we should remove the 
seed repo and switch to using their upstream image. I trust the 
Docker/Canonical images for Ubuntu, as well as the Docker-sponsored CentOS 
image and the Fedora-produced Fedora 20 image.

For the issues, I think I have possible solutions to them.
1. What new OSes were you thinking about for the seed repo? If there is an 
upstream base image, like I said above, there shouldn't be an image going into 
the seed repo for that distro. If there is a new unsupported platform or 
something else we want to add, we're going to have problems. The issue you may 
find is that building Docker base images can be a pain in the ass to 
standardize. We can take a look at how some of the Docker images get built 
here: https://github.com/dotcloud/docker/tree/master/contrib and, for the 
Fedora images, https://github.com/lsm5/docker-brew-fedora. For each 
distribution we want to make a seed image for, the build process will have to 
be highly customized for that distro's tooling and needs. If you can tell me 
which distros you have in mind for the seed repo, I can look more into 
building a base image for each of them and see what kind of process we should 
set up.
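
That said, the overall shape is usually similar across distros: bootstrap a 
minimal root filesystem with the distro's native tooling, then feed it to 
`docker import`. A rough sketch for a Debian-family seed image (the image tag 
and mirror URL are just illustrative, and an RPM-based distro would swap the 
bootstrap step for something like yum --installroot):

```shell
# Build a minimal Ubuntu rootfs and import it as a Docker base image.
set -e
ROOTFS=$(mktemp -d)

# debootstrap populates ROOTFS with a minimal release (here: precise)
sudo debootstrap --variant=minbase precise "$ROOTFS" \
    http://archive.ubuntu.com/ubuntu/

# Tar up the rootfs and hand it to `docker import` to create the seed image
sudo tar -C "$ROOTFS" -c . | docker import - bigtop/seed-ubuntu:12.04

sudo rm -rf "$ROOTFS"
```

The contrib scripts linked above are essentially per-distro variations on this 
pattern, which is exactly where the customization burden comes from.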

2. We shouldn't leave creds in plain sight in Jenkins, like you said, and we 
don't have to. "Note: Your authentication credentials will be stored in the 
.dockercfg authentication file in your home directory." For any Jenkins slaves 
that need to publish to the registry, someone/something would put the 
.dockercfg file on that machine so it could push to the registry. We just have 
to make sure the Jenkins web UI doesn't expose the file anywhere. If we're 
paranoid about the security of that file, we could have a kind of "two-step" 
upload mechanism to the registry, where a Jenkins job on slave1 builds the 
image and then has the image downloaded by a separate server that isn't 
connected to Jenkins, which would then take that image and upload it to the 
registry. That would save us from putting the .dockercfg file on every Jenkins 
slave and risking a security vulnerability on some slave revealing the login 
info. We'd just have to make sure the .dockercfg server is kept under pretty 
good lock and key. That .dockercfg behavior doesn't seem to be in the official 
documentation for the Docker login command for some reason; it's covered here 
instead, if you want to know more about it: 
https://docs.docker.com/userguide/dockerrepos/
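
The two-step mechanism could be as simple as moving the image around as a 
tarball, so only the second machine ever holds the credentials. A sketch 
(host names and image tags are made up):

```shell
# On the Jenkins slave -- no registry credentials present anywhere:
docker save bigtop/build-env:centos6 | gzip > build-env.tar.gz
scp build-env.tar.gz upload-host:/tmp/

# On the locked-down upload host -- the only machine with ~/.dockercfg:
gunzip -c /tmp/build-env.tar.gz | docker load
docker push bigtop/build-env:centos6
```

docker save/load round-trips the image with all its layers and tags intact, 
so the upload host doesn't need access to the build context at all.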

3. In my Dockerfiles I currently git-checkout the Bigtop repo into the image 
and then remove the repo and the git+puppet+dependencies when I'm finished 
applying Puppet. It was the laziest, fastest, and simplest way for me to do 
it, so that people building the image didn't have to worry about where to put 
the repo or have to deal with volumes in Docker. It's just easier to have the 
Dockerfile automatically check out the latest repo when the image is built. 
It's all automated in my Dockerfiles, I don't touch anything when building 
images, and it stays "host agnostic". I would recommend using this approach 
instead of host-mounted volumes with the repo, to make building the image 
easier for other people.
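
The pattern looks roughly like this (the base image, paths, and puppet 
invocation are illustrative, not my exact Dockerfile; the key point is doing 
the clone, apply, and cleanup in a single RUN so the deleted files don't 
survive in an intermediate layer):

```dockerfile
FROM ubuntu:12.04

# Clone, apply Puppet, then remove the repo and the tools themselves --
# all in one RUN so nothing lingers in a committed layer.
RUN apt-get update && apt-get install -y git puppet && \
    git clone https://git-wip-us.apache.org/repos/asf/bigtop.git /tmp/bigtop && \
    puppet apply --modulepath=/tmp/bigtop/bigtop-deploy/puppet/modules \
        /tmp/bigtop/bigtop-deploy/puppet/manifests/site.pp && \
    apt-get purge -y git puppet && apt-get autoremove -y && \
    apt-get clean && rm -rf /tmp/bigtop /var/lib/apt/lists/*
```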

4. Yeah, my images set up as slave containers are around 2.5 GB as well. I'll 
take a look into it today. I have a feeling it's from keeping unnecessary 
package downloads and source downloads around after puppet apply is finished.
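
An easy way to confirm that is `docker history`, which lists each layer's size 
separately, so a layer created by a package-download step stands out 
immediately (the image name here is made up):

```shell
# Per-layer sizes make it obvious which Dockerfile step added the bloat
docker history bigtop/slave:centos6
```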


> provide an option to containerize Bigtop package build 
> -------------------------------------------------------
>
>                 Key: BIGTOP-1323
>                 URL: https://issues.apache.org/jira/browse/BIGTOP-1323
>             Project: Bigtop
>          Issue Type: Improvement
>          Components: Build
>    Affects Versions: 0.8.0
>            Reporter: Roman Shaposhnik
>            Assignee: Roman Shaposhnik
>             Fix For: 0.8.0
>
>         Attachments: Dockerfile.sh
>
>
> Looking at the state of our Jenkins made me scream "I'm mad as Hell and I'm 
> not going to take this anymore". The basic problem is simple: our build 
> slaves are not actually managed by Puppet (even though we do have Puppet code 
> to do that) and they seem to bitrot quite a bit (e.g. git clone from ASF 
> repos on SLES and CentOS5 is totally borked).
> There's another, somewhat subtler, problem: currently we have to keep as many 
> slaves as we've got platforms to support. This leads to pretty poor 
> utilization, and if a slave goes down it requires manual intervention.
> What I'd like to introduce in this JIRA is a notion of decoupling OS 
> environment dependency from the base OS. The technology I'm proposing here is 
> Docker (which really is just a prettier UI on top of Linux Containers). IOW, 
> regardless of what OS you'd be running on your host -- as long as you've got 
> docker you can have an exact replica of a target OS running in a container.
> Here's my plan for simplifying the management aspect of what's going on. 
> Note that this is a complementary plan. Whatever is happening (or not 
> happening!) today on our Jenkins can remain there for as long as we need it.
>   # make our build slaves be 100% uniform and fungible instances of CoreOS 
> VMs https://coreos.com/ The benefit of CoreOS is pretty much 0 administration 
> cost, quick boot-up time, full integration with Docker (see below), and full 
> integration with things like VirtualBox, etc. (so that you can replicate 
> exactly the kind of process that we will set up on the Jenkins).
>   # make the actual builds happen in a container using CoreOS's integration 
> with Docker. IOW, instead of running: make foo on the slave the actual 
> command will look like: docker run -i -t -v `pwd`:/workspace:rw BUILD-ENV 
> bash -c 'cd /workspace ; make foo'
>   # add jobs to the jenkins that would maintain images of BUILD-ENV 
> containers. Each image will be a base-line OS (RHEL, Ubuntu, etc) plus all 
> the build dependencies of a given source package (but nothing else!).
> All of these 3 steps are pretty easy to automate and they do have an added 
> benefit of being 100% replicatable on your workstation. IOW, as long as you 
> have docker running on your host OS you can build packages for any OS with a 
> simple command.
> Thoughts?



--
This message was sent by Atlassian JIRA
(v6.2#6252)
