I second Cos,

First things first: automatically built repositories are badly needed.

Olaf


> Am 27.03.2015 um 00:43 schrieb Konstantin Boudnik <[email protected]>:
> 
> I am reading this discussion and it's great to have the road map for the
> fully-automated CI, including container rebuilding, etc.
> 
> However, in the very short term, for the 1.0 release, all we need IMO is this:
> - a set of slaves to run builds and minimal package tests
> - a way to automatically publish the artifacts
> 
> I am pretty sure we can do it with what we have right now + perhaps some minor
> bug-fixes where needed. Am I delusional? Thoughts?
> 
> Cos
> 
> On Wed, Mar 25, 2015 at 12:17PM, Roman Shaposhnik wrote:
>> First of all, thanks a million to all of you guys for pitching in!
>> I so wish I could close these loops myself, but I'm completely
>> out of cycles at least till the end of ApacheCon. With that
>> disclaimer, here's my braindump on the subject. Note that
>> this is not a plan of action, but rather a list of things that
>> need to be done. Perhaps there's a subset of things in
>> there somewhere that would get us a functional CI without
>> doing all the work that I'm suggesting. This would be up
>> to you guys to figure out:
>> 
>>   0. Whoever would be helping with the project would need to
>>   get the creds for the AWS account that we're using. I don't think
>>   we ever figured out a reliable way to share those credentials,
>>   but a few of us have them. Please contact Cos and myself
>>   offline.
>> 
>>   1. The big idea behind the new CI pipeline was to use Docker
>>   containers as the environments running on generic OSes. So
>>   far I have been experimenting with CoreOS hoping that its
>>   focus on Docker would make it a more reliable environment
>>   to run these containers on. Quite the contrary, it seems that the
>>   CoreOS slave is down way more than it is up:
>>         http://bigtop01.cloudera.org:8080/computer/docker/
>>   We need to figure out a reliable way of spinning up these slaves. One
>>   option would be to use the Jenkins EC2 plugin (which we have deployed
>>   on our Jenkins) to spin up new slaves when they are needed. Or we can
>>   keep the static slaves around. Either way, one thing that needs to be
>>   figured out is this: Docker is pretty disk-hungry, and most of the default
>>   AMIs come with tiny disk images.
>> 
>>   At any rate, once we have reliable slaves on which we can run Docker
>>   containers, we can proceed.
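
For a quick check of which of these slaves are actually up, a minimal
Jenkins script-console sketch (core Jenkins model API only, illustrative,
run from "Manage Jenkins" -> "Script Console") could look like this:

    import jenkins.model.Jenkins

    // List every configured node with its current online/offline state
    // and the number of executors it offers.
    Jenkins.instance.computers.each { computer ->
        def state = computer.offline ? 'OFFLINE' : 'online'
        println "${computer.name ?: 'master'}: ${state} (${computer.numExecutors} executors)"
    }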
>> 
>>   2. Another big idea was to move all 'state' from Jenkins configuration
>>   files into a Jenkins DSL that would be checked into our repo. The prototype
>>   exists here:
>>         bigtop-ci/jenkins/jobsCreator.groovy
>>   and hooked up to this job:
>>         http://bigtop01.cloudera.org:8080/view/Gradle/job/Gradle-seed/
>>    If you click on that link you will see the list of jobs that were generated by
>>    running the seed job. This is how it is expected to work -- but there are
>>    a few bugs still. Somebody needs to make sure that all the generated
>>    jobs are actually bug free. The easiest way is to compare and contrast
>>    the differences between generated jobs and the static ones we have over
>>    here: http://bigtop01.cloudera.org:8080/view/Bigtop-trunk/
>> 
>>    Once the seed job can generate the jobs that actually produce the right
>>    result, utilizing Docker builds on the fixed Docker slave from step #1, we
>>    can take care of one last detail.
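
To make the comparison concrete, here is a minimal, hypothetical Job DSL
sketch of the kind of per-component build job the seed is expected to
generate; the job name, repository URL, gradle task, and artifact pattern
below are placeholders, not taken from jobsCreator.groovy:

    // Hypothetical sketch of a generated per-component build job.
    // Job name, repository URL, gradle task, and artifact pattern are placeholders.
    freeStyleJob('Bigtop-trunk-example-component') {
        label('docker')    // run on the Docker-capable slave from step 1
        scm {
            git('https://git-wip-us.apache.org/repos/asf/bigtop.git', 'master')
        }
        steps {
            shell('./gradlew example-component-rpm')    // placeholder package build step
        }
        publishers {
            archiveArtifacts('output/**')    // publish the resulting packages as build artifacts
        }
    }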
>> 
>>    3. Currently the Docker containers we're using came from me manually
>>    building these containers and pushing them to:
>>          https://registry.hub.docker.com/u/bigtop/slaves/
>>    We need to automate two aspects of this:
>>         3.1. we need to have a job on Jenkins that would at least build these
>>         containers based on the latest state of our puppet code in bigtop_toolchain
>> 
>>         3.2. we need to figure out a way to publish the containers without
>>         endangering Bigtop's Docker HUB account creds.
>> 
>>     Ideally these containers need to be regenerated every time there's a change
>>     in bigtop_toolchain puppet code. The prototype job that I was playing with
>>     for this purpose is over here:
>>          http://bigtop01.cloudera.org:8080/view/Docker/job/Docker-Toolchain/
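
A rough, hypothetical Job DSL sketch of such a container-rebuild job is below.
The job name, polling schedule, build context path, and image tag are
assumptions; the Docker HUB login itself would have to come from
Jenkins-managed credentials rather than from the DSL, which is exactly the
3.2 problem above.

    // Hypothetical sketch of a job that rebuilds the slave containers when the
    // bigtop_toolchain puppet code changes (3.1). The push step assumes the slave
    // is already logged in to the bigtop Docker HUB account, e.g. via credentials
    // injected by Jenkins (3.2); nothing secret belongs in the DSL itself.
    freeStyleJob('Docker-Toolchain-rebuild') {
        label('docker')
        scm {
            git('https://git-wip-us.apache.org/repos/asf/bigtop.git', 'master')
        }
        triggers {
            scm('H/30 * * * *')    // poll for bigtop_toolchain changes; a commit hook would be nicer
        }
        steps {
            // Image tag and build context path are illustrative only.
            shell('docker build -t bigtop/slaves:trunk-centos-6 bigtop-ci/docker && docker push bigtop/slaves:trunk-centos-6')
        }
    }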
>> 
>> 
>> Hope this helps. I'd be more than happy to answer questions and review JIRAs.
>> 
>> Thanks,
>> Roman.
>> 
>> 
>> On Tue, Mar 24, 2015 at 2:50 PM, Konstantin Boudnik <[email protected]> wrote:
>>> Guys,
>>> 
>>> thanks a lot for your commitment - I have chatted with Nate offline and he
>>> might be able to help a bit as well. Let's wait for Roman's input - I will
>>> ping him tonight if we don't hear from him by then, so we have a better
>>> picture of the 1st question.
>>> 
>>> Cos
>>> 
>>> On Tue, Mar 24, 2015 at 06:06AM, Konstantin Boudnik wrote:
>>>> Guys,
>>>> 
>>>> I want to start a separate thread to track the CI preparations for the release
>>>> next month (fingers crossed). Clearly, we can make a release without CI, but
>>>> it'd be way easier to test and create binary artifacts if we have a working
>>>> environment for official validation. Roman has done a lot in this direction
>>>> (many thanks!), but there are still a few rough edges, which might be easy to
>>>> finish off.
>>>> 
>>>> I want to figure out a couple of things:
>>>> - what's the state of CI and how much still needs to be done (Rvs? Could you
>>>>   share any first-hand feedback?)
>>>> - who would be able to help with the CI completion? I can commit some of my
>>>>   cycles, but it'd be great to have a few more hands on that. Clearly, some
>>>>   Jenkins-foo and prior CI skills won't hurt ;)
>>>> 
>>>> Please chime in if you can help. Thanks a lot!
>>>>  Cos
>>>> 
