Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-09 Thread Britt Houser (bhouser)
How does confd run inside the container?  Does this mean we’d need some kind of 
systemd in every container which would spawn both confd and the real service?  
That seems like a very large architectural change.  But maybe I’m 
misunderstanding it.

Thx,
britt

On 6/9/17, 9:04 AM, "Doug Hellmann"  wrote:

Excerpts from Flavio Percoco's message of 2017-06-08 22:28:05 +:

> Unless I'm missing something, to use confd with an OpenStack deployment on
> k8s, we'll have to do something like this:
> 
> * Deploy confd in every node where we may want to run a pod (basically
> every node)

Oh, no, no. That's not how it works at all.

confd runs *inside* the containers. Its input files and command line
arguments tell it how to watch for the settings to be used just for that
one container instance. It does all of its work (reading templates,
watching settings, HUPing services, etc.) from inside the container.

The only inputs confd needs from outside of the container are the
connection information to get to etcd. Everything else can be put
in the system package for the application.

Doug
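
As a concrete sketch of the pattern Doug describes (the file names, the /nova
key path, and the etcd address below are hypothetical, for illustration only):

    # /etc/confd/conf.d/nova.toml -- shipped inside the image
    [template]
    src        = "nova.conf.tmpl"          # lives in /etc/confd/templates/
    dest       = "/etc/nova/nova.conf"
    keys       = ["/nova"]                 # settings subtree to watch in etcd
    reload_cmd = "pkill -HUP nova-api"     # HUP the service when settings change

    # /etc/confd/templates/nova.conf.tmpl -- one line as an example
    debug = {{getv "/nova/default/debug"}}

    # container entrypoint: the etcd address is the only input from outside
    confd -backend etcd -node http://etcd.example.com:2379 -watch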



Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-19 Thread Britt Houser (bhouser)
I agree with Paul here.  I like the idea of solving this with labels instead of 
tags.  A label is embedded into the docker image, and if it changes, the 
checksum of the image changes.  A tag is kept in the image manifest, and can be 
altered w/o changing the underlying image.  So a label is better, IMHO, 
b/c it preserves this data within the image itself in a manner which makes it 
easy to detect if it's been altered.

thx,
britt


From: Paul Bourke 
Sent: Apr 19, 2017 6:28 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

I'm wondering if moving to using docker labels is a better way of
solving the various issues being raised here.

We can maintain a tag for each of master/ocata/newton/etc, and on each
image have a LABEL with info such as 'pbr of service/pbr of kolla/link
to CI of build/etc'. I believe this solves all points Kevin mentioned
except rollback, which afaik, OpenStack doesn't support anyway. It also
solves people's concerns with what is actually in the images, and is a
standard Docker mechanism.

Also as Michal mentioned, if users are concerned about keeping images,
they can tag and stash them away themselves. It is overkill to maintain
hundreds of (imo meaningless) tags in a registry, the majority of which
people don't care about - they only want the latest of the branch
they're deploying.

Every detail of a running Kolla system can be easily deduced by scanning
across nodes and printing the labels of running containers,
functionality which can be shipped by Kolla. There are also methods for
fetching labels of remote images[0][1] for users wishing to inspect what
they are upgrading to.

[0] https://github.com/projectatomic/skopeo
[1] https://github.com/docker/distribution/issues/1252

-Paul
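
A sketch of what the label approach could look like; the label names and values
here are hypothetical, not a settled Kolla convention:

    # build time: baked into the image, so covered by the image checksum
    LABEL kolla_version="4.0.0" build_date="2017-04-19" ci_build_url="https://..."

    # inspect time: read the labels back from a local image
    docker inspect --format '{{ json .Config.Labels }}' kolla/ubuntu-binary-nova-api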

On 18/04/17 22:10, Michał Jastrzębski wrote:
> On 18 April 2017 at 13:54, Doug Hellmann  wrote:
>> Excerpts from Michał Jastrzębski's message of 2017-04-18 13:37:30 -0700:
>>> On 18 April 2017 at 12:41, Doug Hellmann  wrote:
 Excerpts from Steve Baker's message of 2017-04-18 10:46:43 +1200:
> On Tue, Apr 18, 2017 at 9:57 AM, Doug Hellmann 
> wrote:
>
>> Excerpts from Michał Jastrzębski's message of 2017-04-12 15:59:34 -0700:
>>> My dear Kollegues,
>>>
>>> Today we had discussion about how to properly name/tag images being
>>> pushed to dockerhub. That moved towards general discussion on revision
>>> mgmt.
>>>
>>> Problem we're trying to solve is this:
>>> If you build/push images today, your tag is 4.0
>>> if you do it tomorrow, it's still 4.0, and will keep being 4.0 until
>>> we tag new release.
>>>
>>> But image built today is not equal to image built tomorrow, so we
>>> would like something like 4.0.0-1, 4.0.0-2.
>>> While we can reasonably detect history of revisions in dockerhub,
>>> local env will be extremely hard to do.
>>>
>>> I'd like to ask you for opinions on desired behavior and how we want
>>> to deal with revision management in general.
>>>
>>> Cheers,
>>> Michal
>>>
>>
>> What's in the images, kolla? Other OpenStack components?
>
>
> Yes, each image will typically contain all software required for one
> OpenStack service, including dependencies from OpenStack projects or the
> base OS. Installed via some combination of git, pip, rpm, deb.
>
>> Where does the
>> 4.0.0 come from?
>>
>>
> It's the python version string from the kolla project itself, so ultimately
> I think pbr. I'm suggesting that we switch to using the
> version.release_string[1] which will tag with the longer version we use
> for other dev packages.
>
> [1]https://review.openstack.org/#/c/448380/1/kolla/common/config.py
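
For reference, pbr's release string can be checked locally with something like
the following (the printed version is illustrative, not an actual build):

    python -c "import pbr.version; print(pbr.version.VersionInfo('kolla').release_string())"
    # e.g. 4.0.0.0rc2.dev15 on a git checkout between tags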

 Why are you tagging the artifacts containing other projects with the
 version number of kolla, instead of their own version numbers and some
 sort of incremented build number?
>>>
>>> This is what we do in Kolla and I'd say logistics and simplicity of
>>> implementation. Tags are more than just information for us. We have to
>>
>> But for a user consuming the image, they have no idea what version of
>> nova is in it because the version on the image is tied to a different
>> application entirely.
>
> That's easy enough to check tho (just docker exec into the container and
> do pip freeze). On the other hand you'll have information that "this
> set of various versions was tested together" which is arguably more
> important.
>
>>> deploy these images and we have to know a tag. Combine that with clear
>>> separation of build phase from deployment phase (really build phase is
>>> entirely optional thanks to dockerhub), you'll end up with either
>>> automagical script that will have to somehow detect correct 

Re: [openstack-dev] [kolla] PTG Day #1 Webex remote participation

2017-02-19 Thread Britt Houser (bhouser)
I’ve attended two Kolla midcycles remotely and everything Steve said here is true.  
But, as long as your expectations are set accordingly, suboptimal is better 
than nothing at all. =)

Thx,
britt

On 2/19/17, 1:09 AM, "Steven Dake (stdake)"  wrote:

Jeremy,

Completely makes sense.  I believe the prior experience the foundation has 
had with remote participation probably won’t be much improved at the Pike PTG 
as there is no dedicated production staff and I’m not sure what type of 
teleconference hardware Michal is bringing to the PTG.  Not to mention the fact 
that the wireless services may be in a DOS state because of the number of 
attendees.

Michal asked the foundation if remote participation was an option, and they 
stated it was.  Michal then suggested using zoom.us and promised remote 
participation; however, that didn’t get done until Friday.  Cisco isn’t 
donating the webex services in this case; I am as a community member and Kolla 
core.

I have also found, during the Kolla midcycle events we have held since the 
founding of Kolla and prior to the PTG (which is the new midcycle), that every 
midcycle event with remote participation is suboptimal for remote participants, 
for the litany of reasons you mention.

Clearly in-person PTG participation is superior to the hacked-together 
telepresence we may have available at the PTG.  I do think whatever 
teleconference hardware Michal brings to summit is better than looking at the 
output of an Etherpad where all kinds of context is lost.

For folks attending from their office remotely, don’t expect a great 
experience.  Face to face participation is always better as Jeremy has stated.

Regards
-steve

-Original Message-
From: Jeremy Stanley 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Saturday, February 18, 2017 at 5:11 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [kolla] PTG Day #1 Webex remote participation

On 2017-02-18 15:46:19 + (+), Steven Dake (stdake) wrote:
[...]
> In the past the foundation has been wary of enabling remote
> participation.
[...]

Only wary because of prior experience: Cisco donated Webex service
and dedicated remote production staff for us throughout the Havana
design summit in Portland, OR. As one of the volunteers who agreed
to monitor the local systems, I can say that it was a suboptimal
solution at best. Supplied laptops acting as the "presenter"
consoles crashed/hung repeatedly, slight instabilities in conference
networking were a major impediment to streaming, omnidirectional
microphones placed throughout the conference rooms still failed to
pick up much of the conversation, and for those who did attempt to
participate remotely we heard numerous complaints about how they
couldn't follow heated discussions with dozens of participants in a
room together.
-- 
Jeremy Stanley




Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-11 Thread Britt Houser (bhouser)
My sentiments exactly Michal.  We’ll get there, but let’s not jump the gun 
quite yet.

On 1/11/17, 1:38 PM, "Michał Jastrzębski"  wrote:

So from my point of view, while I understand why project separation
makes sense in the long run, I will argue that at this moment it would
be hurtful for the project. Our community is still fairly integrated,
and I'd love to keep it this way a while longer. We haven't yet fully
cleaned up the mess that the split of kolla-ansible caused (documentation
and whatnot). Having a small revolution like that again is something that
would greatly hinder our ability to deliver a valuable project, and I
think for now that should be our priority.

To me, at least until we have more than one prod-ready
deployment tool, separation of the projects would be bad. I think project
separation should be a process instead of revolution, and we already
started this process by separating kolla-ansible repo and core team.
I'd be happy to discuss how to pave road for full project separation
without causing pain for operators, users and developers, as to me
their best interest should take priority.

Cheers,
Michal

On 11 January 2017 at 09:59, Doug Hellmann  wrote:
> Excerpts from Steven Dake (stdake)'s message of 2017-01-11 14:50:31 +:
>> Thierry,
>>
>> I am not a big fan of the separate gerrit teams we have instituted 
inside the Kolla project.  I always believed we should have one core reviewer 
team responsible for all deliverables to avoid not just the appearance but the 
reality that each team would fragment the overall community of people working 
on Kolla containers and deployment tools.  This is yet another reason I didn’t 
want to split the repositories into separate deliverables in the first place – 
since it further fragments the community working on Kolla deliverables.
>>
>> When we made our original mission statement, I originally wanted it 
scoped around just Ansible and Docker.  Fortunately, the core review team at 
the time made it much more general and broad before we joined the big tent.  
Our mission statement permits multiple different orchestration tools.
>>
>> Kolla is not “themed”, at least to me.  Instead it is one community with 
slightly different interests (some people work on Ansible, some on Kubernetes, 
some on containers, some on all 3, etc).  If we break that into separate 
projects with separate PTLs, those projects may end up competing with each 
other (which isn’t happening now inside Kolla).  I think competition is a good 
thing.  In this case, I am of the opinion it is high time we end the 
competition on deployment tools related to containers and get everyone working 
together rather than apart.  That is, unless those folks want to “work apart” 
which of course is their prerogative.  I wouldn’t suggest merging teams today 
that are separate that don’t have a desire to merge.  That said, Kolla is very 
warm and open to new contributors so hopefully no more new duplicate effort 
solutions are started.
>
> It sure sounds to me like you want Kolla to "own" container deployment
> tools. As Thierry said, we aren't intended to be organized that way any
> more.
>
>> Siloing the deliverables into separate teams I believe would result in 
the competition I just mentioned, and further discord between the deployment 
tool projects in the big tent.  We need consolidation around people working 
together, not division.  Division around Kolla weakens Kolla specifically and 
doesn’t help out OpenStack all that much either.
>
> I would hope that the spirit of collaboration could extend across team
> boundaries. #WeAreOpenStack
>
> Doug
>
>>
>> The idea of branding or themes is not really relevant to me.  Instead 
this is all about the people producing and consuming Kolla.  I’d like these 
folks to work together as much as feasible.  Breaking a sub-community apart (in 
this case Kolla) into up to 4 different communities with 4 different PTLs 
sounds wrong to me.
>>
>> I hope my position is clear ☺  If not, feel free to ask any follow-up 
questions.
>>
>> Regards
>> -steve
>>
>> -Original Message-
>> From: Thierry Carrez 
>> Organization: OpenStack
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

>> Date: Wednesday, January 11, 2017 at 4:21 AM
>> To: "openstack-dev@lists.openstack.org" 

>> Subject: Re: [openstack-dev] [tc][kolla] Adding new deliverables
>>
>> Michał Jastrzębski wrote:
>> > I created a CIVS poll with the options we discussed. Every core member
>> > should get a link to the poll voting; if that's not the case, please
>> > let me know.
>>
>> Just a quick 

Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Britt Houser (bhouser)
I think you’re giving a great example of my point that we’re not yet at the 
stage where we can say, “Any tool should be able to deploy kolla containers”.  
Right?

From: Pete Birley <pete@port.direct>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Thursday, January 5, 2017 at 9:06 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [tc][kolla] Adding new deliverables

I'll reply to Britt's comments, and then duck out, unless explicitly asked back, 
as I don't want to (totally) railroad this conversation:

The Kolla containers entry point is a great example of how the field has moved 
on. While it was initially required, in the Kubernetes world the Kolla ABI is 
actually more of a hindrance than a help, as it makes the containers much more 
of a 'black box' to use. In the other OpenStack on Kubernetes projects I 
contribute to, and my own independent work, we actually just define the 
entry point of the container directly in the k8s manifest and make no use of 
Kolla's entry point and config mechanisms, either running another 'init' 
container to build and bind mount the configuration (Harbor), or using the 
Kubernetes ConfigMap object to achieve the same result (OpenStack Helm). It 
would be perfectly possible for Kolla Ansible (and indeed Salt) to take a 
similar approach - meaning that rather than maintaining an ABI that works for 
all platforms, Kolla would be free to just ensure that the required binaries 
were present in images.
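
A minimal sketch of the ConfigMap variant Pete mentions, with hypothetical
names; the config is rendered outside the image and injected by Kubernetes
rather than via the Kolla entry point:

    # publish a pre-rendered config file as a ConfigMap...
    kubectl create configmap nova-etc --from-file=nova.conf
    # ...then mount it read-only into the pod and point the container's
    # command straight at the service binary in the k8s manifest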

I agree that this cannot happen overnight, but think that when appropriate we 
should take stock of where we are and how to plot a course that lets all of our 
projects flourish without competing for resources, or being so entwined that we 
become technically paralyzed and overloaded.

Sorry, Sam and Michal! You can have your thread back now :)

On Fri, Jan 6, 2017 at 1:17 AM, Britt Houser (bhouser) 
<bhou...@cisco.com> wrote:
I think both Pete and Steve make great points, and it should be our long term 
vision.  However, I lean more with Michael that we should make that a separate 
discussion, and it’s probably better done further down the road.  Yes, Kolla 
containers have come a long way, and the ABI has been stable for a while, but 
the vast majority of that “while” was with a single deployment tool: 
ansible.  Now we have kolla-k8s and kolla-salt.  Neither one is as fully 
featured yet as ansible, which to me means we can’t say for sure that the ABI 
won’t need to change as we try to support many deployment tools.  (Someone 
remind me, didn’t kolla-mesos change the ABI?)  Anyway, the point is I don’t 
think we’re at a point of maturity to be certain the ABI won’t need changing.  
When we have 2-3 deployment tools with enough feature parity to say, “Any tool 
should be able to deploy kolla containers”, then I think it makes sense to have 
that discussion.  I just don’t think we’re there yet.  And until that point, 
changes to the ABI will be quite painful if each project is outside of the 
kolla umbrella, IMHO.

Thx,
britt

From: Pete Birley <pete@port.direct>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Thursday, January 5, 2017 at 6:47 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [tc][kolla] Adding new deliverables

Also coming from the perspective of a Kolla-Kubernetes contributor, I am 
worried about the scope that Kolla is extending itself to.

Moving from a single repo to multiple repos has made the situation much 
better, but by operating under a single umbrella I feel that we may be 
significantly limiting the potential of each deliverable. Alex Schultz, 
Steve and Sam raise some good points here.

The interdependency between the projects is causing issues, the current 
reliance that Kolla-Kubernetes has on Kolla-ansible is both undesirable and 
unsustainable in my opinion. This is both because it limits the flexibility 
that we have as Kolla-Kubernetes developers, but also because it places burden 
and rigidity on Kolla-Ansible. This will ultimately prevent both projects from 
being able to take advantage of the capabilities offered to them by the 
deployment mechanism they use.

Like Steve, I don't think the addition of Kolla-Salt should affect me, and as a 
result don't feel I should have any say in the project. That said, I'd really 
like to see it happen in one form or another - as having a wide variety of 
complementary projects and tooling for OpenStack deployment can only be a good 
thing for the community if correctly managed.

Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Britt Houser (bhouser)
I think both Pete and Steve make great points, and it should be our long term 
vision.  However, I lean more with Michael that we should make that a separate 
discussion, and it’s probably better done further down the road.  Yes, Kolla 
containers have come a long way, and the ABI has been stable for a while, but 
the vast majority of that “while” was with a single deployment tool: 
ansible.  Now we have kolla-k8s and kolla-salt.  Neither one is as fully 
featured yet as ansible, which to me means we can’t say for sure that the ABI 
won’t need to change as we try to support many deployment tools.  (Someone 
remind me, didn’t kolla-mesos change the ABI?)  Anyway, the point is I don’t 
think we’re at a point of maturity to be certain the ABI won’t need changing.  
When we have 2-3 deployment tools with enough feature parity to say, “Any tool 
should be able to deploy kolla containers”, then I think it makes sense to have 
that discussion.  I just don’t think we’re there yet.  And until that point, 
changes to the ABI will be quite painful if each project is outside of the 
kolla umbrella, IMHO.

Thx,
britt

From: Pete Birley 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, January 5, 2017 at 6:47 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [tc][kolla] Adding new deliverables

Also coming from the perspective of a Kolla-Kubernetes contributor, I am 
worried about the scope that Kolla is extending itself to.

Moving from a single repo to multiple repos has made the situation much 
better, but by operating under a single umbrella I feel that we may be 
significantly limiting the potential of each deliverable. Alex Schultz, 
Steve and Sam raise some good points here.

The interdependency between the projects is causing issues, the current 
reliance that Kolla-Kubernetes has on Kolla-ansible is both undesirable and 
unsustainable in my opinion. This is both because it limits the flexibility 
that we have as Kolla-Kubernetes developers, but also because it places burden 
and rigidity on Kolla-Ansible. This will ultimately prevent both projects from 
being able to take advantage of the capabilities offered to them by the 
deployment mechanism they use.

Like Steve, I don't think the addition of Kolla-Salt should affect me, and as a 
result don't feel I should have any say in the project. That said, I'd really 
like to see it happen in one form or another - as having a wide variety of 
complementary projects and tooling for OpenStack deployment can only be a good 
thing for the community if correctly managed.

When Kolla started it was very experimental, containers (in their modern form) 
were a relatively new construct, and it took on the audacious task of trying to 
package and deploy OpenStack using the tooling that was available at the time. 
I really feel that this effort has succeeded admirably, and conversations like 
this are a result of that. Kolla is one of the most active projects in 
OpenStack, with two deployment mechanisms being developed currently, and 
hopefully to increase soon with a salt based deployment and potentially even 
more on the horizon.

With this in mind, I return to my original point and wonder if we might be 
better off moving from our current structure of Kolla-deploy(x) to 
deploy(x)-Kolla and redefining the governance of these deliverables, turning 
them into freestanding projects. I think this would offer several potential 
advantages, as it would 
allow teams to form tighter bonds with the tools and communities they use (ie 
Kubernetes/Helm, Ansible or Salt). This would also make it easier for these 
projects to use upstream components where available (eg Ceph, RabbitMQ, and 
MariaDB) which are (and should be) in many cases better than the artifacts we 
can produce. To this end, I have been working with the Ceph community to get 
their Kubernetes Helm implementation to the point where we can use it for our 
own work, and would love to see more of this. It benefits not only us by 
offloading support to the upstream project, but gives them a vested interest in 
supporting us and also helps provide better quality tooling for the entire open 
source ecosystem.

This should also allow Kolla itself to become much more streamlined, and 
focused simply on producing docker containers for consumption by the community, 
and make the artifacts produced potentially much less opinionated and more 
attractive to other projects. And being honest, I have a real desire for this 
activity to eventually be taken on by the relevant OpenStack projects 
themselves - and would love to see Kolla help develop a framework that would 
allow projects to take ownership of the containerisation of their output.

Sorry for such a long email - but this seems like a good opportunity to raise 
some of these issues 

Re: [openstack-dev] [tc] Re: [kolla] A new kolla-salt deliverable

2016-12-24 Thread Britt Houser (bhouser)
Seems like these are some of the same growing pains (cores can’t be experts on 
all technologies) that neutron went through.  Maybe at the PTG we could pick 
their brains and see if the path they have chosen would work well for Kolla.

Thx,
britt

On 12/24/16, 10:31 AM, "Steven Dake (stdake)"  wrote:

My response wasn’t clear and I’ve also thought more about your proposal.  
I’d be highly in favor of the approach you mentioned as long as point 2 was 
modified in your proposal to include some larger number than 2 individuals.  One 
option that comes to mind is a majority of each core review sub-team for point 
2, taking into account that some of our core reviewers have issues that 
temporarily prevent them from fulfilling their core reviewer duties (although 
they plan to re-engage).

I agree we don’t want to duplicate the TC – that would be super heavy – not 
that the TC is heavy – but rather that Kolla doesn’t need its own governance 
structure as the technical governance of OpenStack is directed by the technical 
committee.  I for one, don’t want to have a full-blown governance structure for 
Kolla, although I can’t speak for other core reviewers.

Regards
-steve


-Original Message-
From: "Steven Dake (stdake)" 
Date: Friday, December 23, 2016 at 3:53 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [tc] Re: [kolla] A new kolla-salt deliverable

WFM – I’d be partly in favor of such an approach although I can’t speak 
for others.  I think we should require some larger set than 2 individuals from 
the kolla-policy-core; perhaps a majority of active reviewers, for some 
definition of active reviewers.

Regards
-steve


-Original Message-
From: Michał Jastrzębski 
Reply-To: "OpenStack Development Mailing List (not for usage 
questions)" 
Date: Friday, December 23, 2016 at 3:38 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [tc] Re: [kolla] A new kolla-salt 
deliverable

So I agree with you that we need policy established here. What I'm
getting at - which core teams will vote on inclusion of a new
deliverable? All of them? Just Kolla? This is the grey area I'm
referring to. What's the kolla-k8s core team's business if somebody
would like to add saltstack support? What I wouldn't want is to
establish a new semi-TC in the form of our core team that will decide
what is and isn't a good orchestration engine for Kolla. That would
seriously hinder our ability to innovate and experiment. What if we
find some new orchestration engine and just want to play with it, but
keep it in the community from the start?

So let me throw an idea there, one which we should vote on:

Prep:
1. We create a kolla-policy-core-team, which is the sum of all core teams
of kolla supported projects
2. We create a list of kolla supported projects - today it's kolla,
kolla-ansible and kolla-k8s

Add new project:
1. Everyone is free to create a kolla-* deliverable as long as they
follow certain documented standards (action: document these)
2. We create a lightweight process to include a new deliverable in Kolla,
like just two +2s from the kolla-policy-core-team to include a project
like that
3. If a project gets traction and interest and is successful, we vote on
including its core team in the kolla-policy-core-team

This way it would be easy to try and fail fast to run kolla with
whatever. We need this kind of flexibility for people to innovate.

Thoughts?
Michal

On 23 December 2016 at 13:11, Steven Dake (stdake) 
 wrote:
> Michal,
>
> Really what I was getting at was placing it in the governance 
repository as a kolla deliverable.  In days past we *always* voted on additions 
and removals of deliverables.  It doesn’t seem like a gray area to me – we have 
always followed a voting pattern for adding and removing deliverables.  This 
repo could be added to the git openstack namespace but then not have it as a 
kolla deliverable without a vote, I think; this is sort of what Fuel did with 
Fuel-ccp – that proposal is a gray area.  I found it extremely odd personally 
when Fuel did that ☺  I’m not sure if there is a trademark policy or 
something similar that affects the use of Kolla and OpenStack together.  

Re: [openstack-dev] [kolla][osic] OSIC cluster status

2016-08-05 Thread Britt Houser (bhouser)
W00t!  Great work so far everyone! =)




On 8/5/16, 2:53 PM, "Michał Jastrzębski"  wrote:

>And we finished our first deployment!
>We had some hurdles due to misconfiguration; you can see it in the video along
>with a fix. After these fixes and cleanups were performed (we don't want to
>affect the resulting time now do we?), we deployed a functional openstack
>successfully within 20min :) More videos and tests to come!
>
>https://www.youtube.com/watch?v=RNZMtym5x1c
>
>
>
>On 5 August 2016 at 11:48, Paul Bourke  wrote:
>> Hi Kolla,
>>
>> Thought it would be helpful to send a status mail once we hit checkpoints in
>> the osic cluster work, so people can keep up to speed without having to
>> trawl IRC.
>>
>> Reference: https://etherpad.openstack.org/p/kolla-N-midcycle-osic
>>
>> Work began on the cluster Wed Aug 3rd; item 1) from the etherpad is now
>> complete. The 131 bare metal nodes have been provisioned with Ubuntu 14.04,
>> networking is configured, and all Kolla prechecks are passing.
>>
>> The default set of images (--profile default) has been built and pushed to
>> a registry running on the deployment node, with the build taking a very
>> speedy 5m37.040s.
>>
>> Cheers,
>> -Paul
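
For context, the two steps Paul describes correspond roughly to the following
commands (flags per the Newton-era tooling, so treat this as a sketch):

    kolla-ansible prechecks -i multinode                   # validate the nodes
    kolla-build --profile default --push --registry <deploy-node>:5000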
>>


Re: [openstack-dev] [kolla] repo split

2016-07-20 Thread Britt Houser (bhouser)
-0 (My vote doesn't count).  We had endless problems keeping containers and 
orchestration in sync when we had two repos.  I am really impressed that Mitaka 
ansible can orchestrate Liberty containers.  That really speaks volumes.  And  
I understand there is a stable ABI now, but surely at some point in the future 
the ABI will need some unforeseen change?  Will we have ABI v1 and v2?  Seems 
like that will bring headaches, at least based on my past experience.


Thx,
britt

On 7/20/16, 6:32 PM, "Michał Jastrzębski"  wrote:

>+1 I was reluctant to do the repo split in the Newton timeframe as it was a
>crucial time for our project, people were starting to use it, and I didn't
>want to cause needless confusion. Now the time is right imho.
>
>On 20 July 2016 at 15:48, Ryan Hallisey  wrote:
>> Hello.
>>
>> The repo split discussion that started at summit was brought up again at the 
>> midcycle.
>> The discussion was focused around splitting the Docker containers and 
>> Ansible code into
>> two separate repos [1].
>>
>> One of the main arguments against the split is backports.  Backports will 
>> need to be done by hand for a few releases.  So far, there haven't been a 
>> ton of backports, but that could always change.
>>
>> As for splitting, it provides a much clearer view of which pieces of the 
>> project are where.  Kolla-ansible with its own repo will sit alongside 
>> kolla-kubernetes as a consumer of the kolla repo.
>>
>> The target for the split will be day 1 of Ocata.  The core team will vote on 
>> the change of splitting kolla into kolla-ansible and kolla.
>>
>> Cores please respond with a +1/-1 to approve or disapprove the repo split.  
>> Any community member should feel free to weigh in with your opinion.
>>
>> +1
>> -Ryan
>>
>> [1] - https://etherpad.openstack.org/p/kolla-N-midcycle-repo-split
>>


Re: [openstack-dev] [kolla] our mascot - input needed

2016-07-14 Thread Britt Houser (bhouser)
Koala is the best by a long shot. These ideas are all total stretches:
  Bee on a honeycomb – It's kinda like the bee is orchestrating containers of 
honey.
  Armadillo – It rolls up into a ball and is "immutable"

Thx,
britt

From: "Steven Dake (stdake)" >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Thursday, July 14, 2016 at 4:08 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: [openstack-dev] [kolla] our mascot - input needed

Hey folks,

The OpenStack foundation is putting together mascots for every project with a 
proper artist doing the work.  The cool thing about this is we a) get stickers 
and b) have a consistent look and feel among mascots in OpenStack.

The downside is we have to make a decision.  The foundation wants 3-5 choices.

The mascot must be an animal and it can't involve glue or other inc007 inspired 
ideas :)

Obviously the koala bear is probably everyone's first choice since we have sort 
of been using it as a mascot since day one.  Does anyone have anything else for 
me to provide the foundation with as choices 2-5?

Please respond quickly, the foundation needs the information shortly.  I'd like 
to offer up two alternatives just in case.

Regards
-steve




Re: [openstack-dev] [kolla] prototype of a DSL for generating Dockerfiles

2016-05-27 Thread Britt Houser (bhouser)
I admit I'm not as knowledgeable about the Kolla codebase as I'd like to be, so 
most of what you're saying is going over my head.  I think mainly I don't 
understand the problem statement.  It looks like you're pulling all the "hard 
coded" things out of the docker files, and making them user-replaceable?  So 
the dockerfiles just become a list of required steps, and the user can change 
how each step is implemented?  Would this also unify the dockerfiles so there 
wouldn't be huge if statements between CentOS and Ubuntu?

Thx,
Britt



On 5/27/16, 1:58 AM, "Steven Dake (stdake)"  wrote:

>
>
>On 5/26/16, 8:45 PM, "Swapnil Kulkarni (coolsvap)"  wrote:
>
>>On Fri, May 27, 2016 at 8:35 AM, Steven Dake (stdake) 
>>wrote:
>>> Hey folks,
>>>
>>> While Swapnil has been busy churning the dockerfile.j2 files to all
>>>match
>>> the same style, and we also had summit where we declared we would solve
>>>the
>>> plugin problem, I have decided to begin work on a DSL prototype.
>>>
>>> Here are the problems I want to solve in order of importance by this
>>>work:
>>>
>>> Build CentOS, Ubuntu, Oracle Linux, Debian, Fedora containers
>>> Provide a programmatic way to manage Dockerfile construction rather
>>>than a
>>> manual (with vi or emacs or the like) mechanism
>>> Allow complete overrides of every facet of Dockerfile construction, most
>>> especially repositories per container (rather than in the base
>>>container) to
>>> permit the use case of dependencies from one version with dependencies
>>>in
>>> another version of a different service
>>> Get out of the business of maintaining 100+ dockerfiles but instead
>>>maintain
>>> one master file which defines the data that needs to be used to
>>>construct
>>> Dockerfiles
>>> Permit different types of optimizations or Dockerfile building by
>>>changing
>>> around the parser implementation - to allow layering of each operation,
>>>or
>>> alternatively to merge layers as we do today
>>>
>>> I don't believe we can proceed with both binary and source plugins
>>>given our
>>> current implementation of Dockerfiles in any sane way.
>>>
>>> I further don't believe it is possible to customize repositories &
>>>installed
>>> files per container, which I receive increasing requests for offline.
>>>
>>> To that end, I've created a very very rough prototype which builds the
>>>base
>>> container as well as a mariadb container.  The mariadb container builds
>>>and
>>> I suspect would work.
>>>
>>> An example of the DSL usage is here:
>>> https://review.openstack.org/#/c/321468/4/dockerdsl/dsl.yml
>>>
>>> A very poorly written parser is here:
>>> https://review.openstack.org/#/c/321468/4/dockerdsl/load.py
>>>
>>> I played around with INI as a format, to take advantage of oslo.config
>>>and
>>> kolla-build.conf, but that didn't work out.  YML is the way to go.
>>>
>>> I'd appreciate reviews on the YML implementation especially.
>>>
>>> How I see this work progressing is as follows:
>>>
>>> A yml file describing all docker containers for all distros is placed in
>>> kolla/docker
>>> The build tool adds an option --use-yml which uses the YML file
>>> A parser (such as load.py above) is integrated into build.py to lay
>>> down the Dockerfiles
>>> Wait 4-6 weeks for people to find bugs and complain
>>> Make the --use-yml the default for 4-6 weeks
>>> Once we feel confident in the yml implementation, remove all
>>> Dockerfile.j2 files
>>> Remove the --use-yml option
>>> Remove all jinja2-isms from build.py
>>>
>>> This is similar to the work that took place to convert from raw
>>>Dockerfiles
>>> to Dockerfile.j2 files.  We are just reusing that pattern.  Hopefully
>>>this
>>> will be the last major refactor of the dockerfiles unless someone has
>>>some
>>> significant complaints about the approach.
>>>
>>> Regards
>>> -steve
>>>
>>>
>>>
>>
>>The DSL template to generate the Dockerfile seems way better than the
>>jinja templates in terms of extension, which is currently a major
>>bottleneck in the plugin implementation. I am +2+W on this plan of
>>action to test it for the next 4-6 weeks and see from there.
>>
>>Swapnil
>>
>
>Agree.
>
>Customization and plugins are the trigger for the work.  I was thinking of
>the following:
>
>Elemental.yml (ships with Kolla)
>Elemental-merge.yml (operator provides in /etc/kolla, this file is yaml
>merged with elemental.yml)
>Elemental-override.yml (operator provides in /etc/kolla, this file
>overrides any YAML sections defined)
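
A hypothetical sketch of how the three files might interact (the contents and
the merge semantics shown are invented for illustration):

    # elemental.yml (ships with Kolla)
    mariadb:
      packages: [mariadb-server]
    # elemental-merge.yml (operator): deep-merged, so this would extend the list
    mariadb:
      packages: [mariadb-backup]
    # elemental-override.yml (operator): replaces the matching section outright
    mariadb:
      packages: [percona-server]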
>
>I think merging and overriding the yaml files should be pretty easy,
>compared to jinja2, where I don't even know where to begin in a way that
>the operator doesn't have to have deep 

Re: [openstack-dev] [kolla][kolla-kubernetes][kubernetes]

2016-05-23 Thread Britt Houser (bhouser)
I wouldn't expect new users to be created on upgrade, so is the problem with 
bootstrap and upgrade that we do the database migration during bootstrap too?

Thx,
britt



On 5/22/16, 3:50 PM, "Ryan Hallisey"  wrote:

>Hi all,
>
>At the Kolla meeting last week, I brought up some of the challenges around the 
>bootstrapping
>process in Kubernetes.  The main highlight revolved around how the 
>bootstrapping process will
>work.
>
>Currently, in the kolla-kubernetes spec [1], the process for bootstrapping 
>involves
>outside orchestration running Kubernetes 'Jobs' that will handle the database 
>initialization,
>creating users, etc.  One of the flaws in this approach is that 
>kolla-kubernetes can't use
>native Kubernetes upgrade tooling. Kubernetes does upgrades as a single action 
>that scales
>down running containers and replaces them with the upgraded containers. So 
>instead of having
>Kubernetes manage the upgrade, it would be guided by an external engine.  Not 
>saying this is
>a bad thing, but it does loosen the control Kubernetes would have over stack 
>management.
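
A rough sketch of the 'Jobs'-based bootstrap described above (the name, image
tag, and command here are hypothetical):

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: nova-db-sync
    spec:
      template:
        spec:
          restartPolicy: OnFailure        # retry the bootstrap step on failure
          containers:
          - name: nova-db-sync
            image: kolla/centos-binary-nova-api:2.0.0
            command: ["nova-manage", "db", "sync"]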
>
>Kubernetes does have some incoming new features that are a step in the right 
>direction to
>allow for kolla-kubernetes to make complete use of Kubernetes tooling like 
>init containers [2].
>There is also the introduction of wait.for conditions in kubectl [3].
>
>   kubectl get pod my-pod --wait --wait-for="pod-running"
>
>Upgrades will be in the distant future for kolla-kubernetes, but I want to 
>make sure the
>community maintains an open mind about bootstrap/upgrades since there are 
>potentially many
>options that could come down the road.
>
>I encourage everyone to add your input to the spec!
>
>Thanks,
>Ryan
>
>[1] SPEC - https://review.openstack.org/#/c/304182/
>[2] Init containers - https://github.com/kubernetes/kubernetes/pull/23567
>[3] wait.for kubectl - https://github.com/kubernetes/kubernetes/issues/1899
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Vagrant environment for kolla-kubernetes

2016-05-13 Thread Britt Houser (bhouser)
Would we ever run the AIO-k8s vagrant setup w/o the Kolla repo?  If not, then it 
makes sense to me just to extend the Vagrantfile in the kolla repo.

Thx,
britt




On 5/13/16, 2:37 AM, "Michal Rostecki"  wrote:

>Hi,
>
>Currently we have a nice guide about setting up an AIO k8s environment in 
>review. But at the same time I'm also thinking about automating this with 
>Vagrant, like we did in kolla-mesos.
>
>Here comes the question - what repo is good for keeping the Vagrant 
>stuff for kolla-k8s? I see two options.
>
>1) Create a Vagrantfile in kolla-k8s repo
>
>That's what we've done in kolla-mesos. And that was a huge problem, because:
>- we had to copy dhcp-leases script for vagrant-libvirt
>- we almost copied all the logic of overriding the resources via 
>Vagrantfile.custom
>
>While we can do something about the first con - by trying to drop the 
>dhcp-leases script and use eth0 for keepalived/haproxy and endpoints - 
>getting rid of the second con may be hard.
>
>2) Extend Vagrantfile in kolla repo
>
>That would be easy - it requires just adding some boolean to the 
>Vagrantfile and an additional provisioning script.
>
>But it sounds odd to have a separate repo for kolla-k8s and at the same 
>time centralize only some components in the one repo.
>
>What are your thoughts?
>
>Cheers,
>Michal
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] [bifrost] bifrost container.

2016-05-09 Thread Britt Houser (bhouser)
Mark,

This is exactly the kind of discussion I was hoping for during Austin.  I agree 
with pretty much all your statements.  I think if Kolla can define what it 
would expect in the inventory provided by a bare metal provisioner, and we can 
make an ABI around that, then this becomes a lot more operator friendly.  I 
kinda hoped the discussion would start with that definition, and then delve into 
individual bare metal tools after that.

To add to the discussion of looking a little deeper at the deployment tools: we 
use cobbler and have containerized it in the "Kolla way".  We run TFTP and HTTP 
in their own containers.  Cobblerd and DHCP had to be in the same container, 
only b/c cobbler expects to issue a "systemctl restart isc-dhcpd-server" command 
when it changes the DHCP config.  If either cobbler or isc-dhcp could handle 
this in a more graceful manner, then there wouldn't be any problem putting them 
each in their own container.  We share volumes between the containers, and the 
cobblerd container runs supervisord.  Cobbler has an API using xmlrpc which we 
utilize for system definition.  It also can provide an ansible inventory, 
although I haven't played with that feature.  I know cobbler doesn't do the new 
shiny image-based deployment, but for us it's feature-mature, steady, and 
reliable.

I'd love to hear from other folks about their journey with bare metal 
deployment with Kolla.

Thx,
britt

From: Mark Casey <markca...@pointofrental.com>
Reply-To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Date: Monday, May 9, 2016 at 6:48 PM
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [kolla] [bifrost] bifrost container.

I'm not sure if it is necessary to write up or provide support on how to use 
more than one deployment tool, but I think any work that inadvertently makes it 
harder for an operator to use their own existing deployment infrastructure 
could run some people off.

Regarding "deploy a VM to deploy bifrost to deploy bare metal", I suspect that 
situation will not be unique to bifrost. At the moment I'm using MAAS and it 
has a hard dependency on Upstart for init up until around Ubuntu Trusty and 
then was ported to systemd in Wily. I do not think you can just switch to 
another init daemon or run it under supervisord without significant work. I was 
not even able to get the maas package to install during a docker build because 
it couldn't communicate with the init system it wanted. In addition, for any 
deployment tool that enrolls/deploys via PXE the tool may also require 
accommodations when being containerized simply because this whole topic is 
fairly low in the stack of abstractions. For example I'm not sure whether any 
of these tools running in a container would respond to a new bare metal host's 
initial DHCP broadcast without --net=host or similar consideration.

As long as the most common deployment option in Kolla is Ansible, making 
deployment tools pluggable is fairly easy to solve. MAAS and bifrost both have 
inventory scripts that can provide dynamic inventory to kolla-ansible while 
still pulling Kolla's child groups from the multinode inventory file. Another 
common pattern could be for a given deployment tool to template out a new 
(static) multinode inventory and then we just append Kolla's groups to the file 
before calling kolla-ansible. The problem, to me, becomes in getting every 
other option (k8s, puppet, etc.) to work similarly. Perhaps you just state that 
each implementation must be pluggable to various deployment tools and let 
people that know their respective tool handle the how.(?)

Currently I am running MAAS inside a Vagrant box to retain some of the 
immutability and easy "create/destroy" workflow that having it containerized 
would offer. It works very well and, assuming nothing else was running on the 
underlying deployment host, I'd have no issue running it in prod that way even 
with the Vagrant layer.

Thank you,
Mark

On 5/9/2016 4:52 PM, Britt Houser (bhouser) wrote:
Are we (as the Kolla community) open to other bare metal provisioners?  The 
Austin discussion was titled generic bare metal, but very quickly turned into 
bifrost-only discourse.  The initial survey showed cobbler/maas/OoO as 
alternatives people use today.  So if the bifrost strategy is, "deploy a VM to 
deploy bifrost to deploy bare metal" and will be cleaned up later, then maybe 
it's time to take a deeper look at the other deployment tools and see if they 
are a better fit?

Thx,
britt

From: "Steven Dake (stdake)" <std...@cisco.com<mailto:std...@cisco.co

Re: [openstack-dev] [kolla] [bifrost] bifrost container.

2016-05-09 Thread Britt Houser (bhouser)
Are we (as the Kolla community) open to other bare metal provisioners?  The 
Austin discussion was titled generic bare metal, but very quickly turned into 
bifrost-only discourse.  The initial survey showed cobbler/maas/OoO as 
alternatives people use today.  So if the bifrost strategy is, "deploy a VM to 
deploy bifrost to deploy bare metal" and will be cleaned up later, then maybe 
it's time to take a deeper look at the other deployment tools and see if they 
are a better fit?

Thx,
britt

From: "Steven Dake (stdake)" >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Monday, May 9, 2016 at 5:41 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [kolla] [bifrost] bifrost container.



From: Devananda van der Veen
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Monday, May 9, 2016 at 1:12 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [kolla] [bifrost] bifrost container.



On Fri, May 6, 2016 at 10:56 AM, Steven Dake (stdake) wrote:
Sean,

Thanks for taking this on :)  I didn't know you had such an AR :)

From: "Mooney, Sean K" >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Friday, May 6, 2016 at 10:14 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: [openstack-dev] [kolla] [bifrost] bifrost container.

Hi everyone.

Following up on my AR from the kolla host repository session
https://etherpad.openstack.org/p/kolla-newton-summit-kolla-kolla-host-repo
I started working on creating a kolla bifrost container.

After some initial success I have hit a roadblock with the current install 
playbook provided by bifrost.
In particular, the install playbook both installs the ironic dependencies and 
configures and runs the services.


What I'd do here is ignore the install playbook and duplicate what it installs. 
 We don't want to install at run time, we want to install at build time.  You 
weren't clear if that is what you're doing.

That's going to be quite a bit of work. The bifrost-install playbook does a lot 
more than just install the ironic services and a few system packages; it also 
installs rabbit, mysql, nginx, dnsmasq *and* configures all of these in a very 
specific way. Re-inventing all of this is basically re-inventing Bifrost.



Sean's latest proposal was splitting this one operation into three smaller 
decomposed steps.

The reason we would ignore the install playbook is because it runs the 
services.  We need to run the services in a different way.

Do you really need to run them in a different way? If it's just a matter of 
"use a different init system", I wonder how easily that could be accommodated 
within the Bifrost project itself. If there's another reason, please 
elaborate.


To run in a container, we cannot use systemd.  This leaves us with supervisord, 
which certainly can and should be done in the context of upstream bifrost.
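
A minimal sketch of the supervisord approach for such a fat container (the
program names and paths here are hypothetical):

    # /etc/supervisord.conf
    [supervisord]
    nodaemon=true               ; stay in the foreground as the container's PID 1

    [program:mysqld]
    command=/usr/bin/mysqld_safe

    [program:ironic-api]
    command=/usr/bin/ironic-api --config-file /etc/ironic/ironic.conf

    # Dockerfile entrypoint
    CMD ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]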


 This will (as we discussed at ODS) be a fat container on the undercloud – 
which I guess is ok.  I'd recommend not using systemd, as that will break 
systemd systems badly.  Instead use a different init system, such as 
supervisord.

The installation of ironic and its dependencies would not be a problem, but the 
ansible service module is not capable of starting the
infrastructure services (mysql, rabbit…) without a running init system, which is 
not present during the docker build.

When I created a bifrost container in the past I spawned an Ubuntu upstart 
container, then docker exec'd into the container and ran the
Bifrost install script. This worked because the init system was running and the 
service module could test and start the relevant services.


This leaves me with 3 paths forward.


1.   I can continue to try and make the bifrost install script work with 
the kolla build system by using sed to modify the install playbook, or try to 
start systemd during the docker build.

2.   I can use the kolla build system to build only part of the image

a.   the bifrost-base image would be built with the kolla build system 
without running the bifrost playbook. This
would allow the existing features of the build system, such 
as adding headers/footers, to be 

Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-02 Thread Britt Houser (bhouser)
I think several of the people who have expressed support for the split repo have 
given the caveat that they want it integrated once it matures.  I know that 
merging repos at a future date without losing history is a major drawback to 
this approach.  What if instead of a separate repo, we just had a "k8s" branch 
with periodic (weekly?) syncs from master?  That would allow an easy merge of 
git history at the point that k8s meets the "stable" requirement.  Would having 
a separate branch in the same repo give the kolla-k8s-core team the independence 
desired for quick development without infringing on master?  Is it possible in 
gerrit for kolla-k8s-core to have +2 on the k8s branch and not master?  Just 
food for thought.

Thx,
britt




On 5/2/16, 1:32 AM, "Swapnil Kulkarni" <m...@coolsvap.net> wrote:

>On Mon, May 2, 2016 at 9:54 AM, Britt Houser (bhouser)
><bhou...@cisco.com> wrote:
>> Although it seems I'm in the minority, I am in favor of unified repo.
>>
>> From: "Steven Dake (stdake)" <std...@cisco.com>
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> <openstack-dev@lists.openstack.org>
>> Date: Sunday, May 1, 2016 at 5:03 PM
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> <openstack-dev@lists.openstack.org>
>> Subject: [openstack-dev] [kolla][kubernetes] One repo vs two
>>
>> Ryan had rightly pointed out that when we made the original proposal at 9am
>> that morning we had asked folks if they wanted to participate in a separate
>> repository.
>>
>> I don't think a separate repository is the correct approach based upon one-off
>> private conversations with folks at summit.  Many people from that list
>> approached me and indicated they would like to see the work integrated in
>> one repository as outlined in my vote proposal email.  The reasons I heard
>> were:
>>
>> Better integration of the community
>> Better integration of the code base
>> Doesn't present an us vs them mentality that one could argue happened during
>> kolla-mesos
>> A second repository makes k8s a second class citizen deployment architecture
>> without a voice in the full deployment methodology
>> Two gating methods versus one
>> No going back to a unified repository while preserving git history
>>
>> In favor of the separate repositories I heard
>>
>> It presents a unified workspace for kubernetes alone
>> Packaging without ansible is simpler as the ansible directory need not be
>> deleted
>>
>> There were other complaints but not many pros.  Unfortunately I failed to
>> communicate these complaints to the core team prior to the vote, so now is
>> the time for fixing that.
>>
>> I'll leave it open to the new folks that want to do the work if they want to
>> work on an offshoot repository and open us up to the possible problems
>> above.
>>
>> If you are on this list:
>>
>> Ryan Hallisey
>> Britt Houser
>>
>> mark casey
>>
>> Steven Dake (delta-alpha-kilo-echo)
>>
>> Michael Schmidt
>>
>> Marian Schwarz
>>
>> Andrew Battye
>>
>> Kevin Fox (kfox)
>>
>> Sidharth Surana (ssurana)
>>
>>  Michal Rostecki (mrostecki)
>>
>>   Swapnil Kulkarni (coolsvap)
>>
>>   MD NADEEM (mail2nadeem92)
>>
>>   Vikram Hosakote (vhosakot)
>>
>>   Jeff Peeler (jpeeler)
>>
>>   Martin Andre (mandre)
>>
>>   Ian Main (Slower)
>>
>> Hui Kang (huikang)
>>
>> Serguei Bezverkhi (sbezverk)
>>
>> Alex Polvi (polvi)
>>
>> Rob Mason
>>
>> Alicja Kwasniewska
>>
>> sean mooney (sean-k-mooney)
>>
>> Keith Byrne (kbyrne)
>>
>> Zdenek Janda (xdeu)
>>
>> Brandon Jozsa (v1k0d3n)
>>
>> Rajath Agasthya (rajathagasthya)
>> Jinay Vora
>> Hui Kang
>> Davanum Srinivas
>>
>>
>>
>> Please speak up if you are in favor of a separate repository or a unified
>> repository.
>>
>> The core reviewers will still take responsibility for determining if we
>> proceed on the action of implementing kubernetes in general.
>>
>> Thank you
>> -steve
>>
>>
>
>
>I am in favor of having two separate repos and evaluating the
>merge/split option later.
>Though in the longer run, I would recommend having a single repo with
>multiple stable deployment tools (maybe too early to comment, but
>yeah)
>
>Swapnil
>


Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-01 Thread Britt Houser (bhouser)
Although it seems I'm in the minority, I am in favor of a unified repo.

From: "Steven Dake (stdake)" >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Sunday, May 1, 2016 at 5:03 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: [openstack-dev] [kolla][kubernetes] One repo vs two

Ryan had rightly pointed out that when we made the original proposal at 9am 
that morning we had asked folks if they wanted to participate in a separate 
repository.

I don't think a separate repository is the correct approach based upon one-off 
private conversations with folks at summit.  Many people from that list 
approached me and indicated they would like to see the work integrated in one 
repository as outlined in my vote proposal email.  The reasons I heard were:

  *   Better integration of the community
  *   Better integration of the code base
  *   Doesn't present an us vs them mentality that one could argue happened 
during kolla-mesos
  *   A second repository makes k8s a second class citizen deployment 
architecture without a voice in the full deployment methodology
  *   Two gating methods versus one
  *   No going back to a unified repository while preserving git history

In favor of the separate repositories I heard

  *   It presents a unified workspace for kubernetes alone
  *   Packaging without ansible is simpler as the ansible directory need not be 
deleted

There were other complaints but not many pros.  Unfortunately I failed to 
communicate these complaints to the core team prior to the vote, so now is the 
time for fixing that.

I'll leave it open to the new folks that want to do the work if they want to 
work on an offshoot repository and open us up to the possible problems above.

If you are on this list:


  *   Ryan Hallisey
  *   Britt Houser

  *   mark casey

  *   Steven Dake (delta-alpha-kilo-echo)

  *   Michael Schmidt

  *   Marian Schwarz

  *   Andrew Battye

  *   Kevin Fox (kfox)

  *   Sidharth Surana (ssurana)

  *Michal Rostecki (mrostecki)

  * Swapnil Kulkarni (coolsvap)

  * MD NADEEM (mail2nadeem92)

  * Vikram Hosakote (vhosakot)

  * Jeff Peeler (jpeeler)

  * Martin Andre (mandre)

  * Ian Main (Slower)

  *   Hui Kang (huikang)

  *   Serguei Bezverkhi (sbezverk)

  *   Alex Polvi (polvi)

  *   Rob Mason

  *   Alicja Kwasniewska

  *   sean mooney (sean-k-mooney)

  *   Keith Byrne (kbyrne)

  *   Zdenek Janda (xdeu)

  *   Brandon Jozsa (v1k0d3n)

  *   Rajath Agasthya (rajathagasthya)
  *   Jinay Vora
  *   Hui Kang
  *   Davanum Srinivas


Please speak up if you are in favor of a separate repository or a unified 
repository.

The core reviewers will still take responsibility for determining if we proceed 
on the action of implementing kubernetes in general.

Thank you
-steve