Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-26 Thread Zane Bitter

On 26/09/14 00:01, Angus Lees wrote:

On Thu, 25 Sep 2014 04:01:38 PM Fox, Kevin M wrote:

Doesn't nova with a docker driver and heat autoscaling handle case 2 and 3
for control jobs? Has anyone tried yet?


For reference, the cases were:


- Something to deploy the code (docker / distro packages / pip install /
etc)
- Something to choose where to deploy
- Something to respond to machine outages / autoscaling and re-deploy as
necessary



I tried for a while, yes.  The problems I ran into (and I'd be interested to
know if there are solutions to these):

- I'm trying to deploy into VMs on rackspace public cloud (just because that's
what I have).  This means I can't use the nova docker driver, without
constructing an entire self-contained openstack undercloud first.

- heat+cloud-init (afaics) can't deal with circular dependencies (like
nova-neutron) since the machines need to exist first before you can refer to
their IPs.


I suspect that this now can actually be done with Software Deployment 
resources (which separate the creation of the actual servers from the 
inputs to the software configured on them, thus breaking the circular 
dependency).
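Roughly, the pattern would look something like this (a minimal, untested
sketch; the image, flavor and config script are placeholders, not a real
template):

heat_template_version: 2013-05-23

resources:
  nova_server:
    type: OS::Nova::Server
    properties:
      image: fedora-20          # placeholder
      flavor: m1.small          # placeholder
      user_data_format: SOFTWARE_CONFIG

  neutron_server:
    type: OS::Nova::Server
    properties:
      image: fedora-20
      flavor: m1.small
      user_data_format: SOFTWARE_CONFIG

  nova_config:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      inputs:
        - name: neutron_ip
      config: |
        #!/bin/sh
        # point nova at the neutron endpoint passed in as an input
        echo "neutron is at $neutron_ip"

  nova_deployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: { get_resource: nova_config }
      server: { get_resource: nova_server }
      input_values:
        neutron_ip: { get_attr: [ neutron_server, first_address ] }

Both servers get created up front with no dependency on each other; only the
deployment depends on both, so the nova/neutron circle never appears in the
resource graph.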



 From what I can see, TripleO gets around this by always scheduling them on the
same machine and just using the known local IP.  Other installs declare fixed
IPs up front - on rackspace I can't do that (easily).
I can't use loadbalancers via heat for this because the loadbalancers need to
know the backend node addresses, which means the nodes have to exist first and
you're back to a circular dependency.


Again, Software Deployments should resolve that.


For comparison, with kubernetes you declare the loadbalancer-equivalents
(services) up front with a search expression for the backends.  In a second
pass you create the backends (pods) which can refer to any of the loadbalanced
endpoints.  The loadbalancers then reconfigure themselves on the fly to find the
new backends.  You _can_ do a similar lazy-loadbalancer-reconfig thing with
openstack too, but not with heat and not just out of the box.

- My experiences using heat for anything complex have been extremely
frustrating.  The version on rackspace public cloud is ancient and limited,
and quite easy to get into a state where the only fix is to destroy the entire
stack and recreate it.


I don't know that Rackspace's version is ancient (afaik they chase 
trunk, so it's probably more recent than even Icehouse), but the Juno 
release should be much better on this front.



I'm sure these are fixed in newer versions of heat, but
last time I tried I was unable to run it standalone against an arms-length
keystone because some of the recursive heat callbacks became confused about
which auth token to use.

(I'm sure this can be fixed, if it wasn't already just me using it wrong in the
first place.)


Yeah, Heat only really supports a fairly limited subset of functionality 
in standalone mode. (I mean, it's probably the majority, but it's 
everything except the _really_ useful parts ;)


We're hearing more from users that they care about this, and by happy 
coincidence we're making technical changes from which a true standalone mode 
should fall out.



- As far as I know, nothing in a heat/loadbalancer/nova stack will actually
reschedule jobs away from a failed machine.


That's correct, although it is something we have already started looking 
at and plan to implement over the next few cycles.



There's also no lazy
discovery/nameservice mechanism, so updating IP address declarations in cloud-
configs tend to ripple through the heat config and cause all sorts of
VMs/containers to be reinstalled without any sort of throttling or rolling
update.


Yes, I can't wait for Designate :)

cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-26 Thread Fox, Kevin M
 -Original Message-
 From: Angus Lees [mailto:gusl...@gmail.com] On Behalf Of Angus Lees
 Sent: Thursday, September 25, 2014 9:01 PM
 To: openstack-dev@lists.openstack.org
 Cc: Fox, Kevin M
 Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and
 Manage OpenStack using Kubernetes and Docker
 
 On Thu, 25 Sep 2014 04:01:38 PM Fox, Kevin M wrote:
  Doesn't nova with a docker driver and heat autoscaling handle case 2
  and 3 for control jobs? Has anyone tried yet?
 
 For reference, the cases were:
 
  - Something to deploy the code (docker / distro packages / pip install
  /
  etc)
  - Something to choose where to deploy
  - Something to respond to machine outages / autoscaling and re-deploy
  as necessary
 
 
 I tried for a while, yes.  The problems I ran into (and I'd be interested to
 know if there are solutions to these):
 
 - I'm trying to deploy into VMs on rackspace public cloud (just because that's
 what I have).  This means I can't use the nova docker driver, without
 constructing an entire self-contained openstack undercloud first.

That's true. But you are essentially doing the same with Kubernetes: installing 
a self-contained undercloud.

If that's the case, it would be nice to use the same docker containers to build 
an undercloud to deploy the overcloud and reuse almost all of the work, rather 
than deploying two different systems.
 
 - heat+cloud-init (afaics) can't deal with circular dependencies (like nova-
 neutron) since the machines need to exist first before you can refer to
 their
 IPs.

Could this be made to work by putting either a floating IP, or a load balancer 
with a floating IP, in front, so you know beforehand what the address is going to be?

 From what I can see, TripleO gets around this by always scheduling them on
 the same machine and just using the known local IP.  Other installs declare
 fixed IPs up front - on rackspace I can't do that (easily).
 I can't use loadbalancers via heat for this because the loadbalancers need to
 know the backend node addresses, which means the nodes have to exist
 first and you're back to a circular dependency.

Starting in Icehouse, heat does not need to know the backend IPs at load 
balancer creation. There is a PoolMember resource that acts similarly to a 
Cinder volume-attachment resource: it lets you create the load balancer in one 
stack, and then an instance template that launches the VM, installs stuff, and 
binds to the load balancer in another.

For an example, see the BaseCluster and RO_Replica templates at:
https://github.com/EMSL-MSC/heat-templates/tree/master/cfn/389/
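The rough shape, as a sketch only (LBaaS v1 resource names; the parameters,
ports and protocol are made up for illustration):

# Stack 1: create the pool/VIP with no members yet.
lb_pool:
  type: OS::Neutron::Pool
  properties:
    protocol: HTTP
    lb_method: ROUND_ROBIN
    subnet_id: { get_param: subnet_id }
    vip:
      protocol_port: 80

# Stack 2 / instance template: launch the VM, then register it with the
# existing pool via a PoolMember, much like attaching a volume.
server:
  type: OS::Nova::Server
  properties:
    image: { get_param: image }
    flavor: { get_param: flavor }

member:
  type: OS::Neutron::PoolMember
  properties:
    pool_id: { get_param: pool_id }       # exported from stack 1
    address: { get_attr: [ server, first_address ] }
    protocol_port: 80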


 For comparison, with kubernetes you declare the loadbalancer-equivalents
 (services) up front with a search expression for the backends.  In a second
 pass you create the backends (pods) which can refer to any of the
 loadbalanced endpoints.  The loadbalancers then reconfigure themselves
 on the fly to find the new backends.  You _can_ do a similar lazy-
 loadbalancer-reconfig thing with openstack too, but not with heat and not
 just out of the box.

As above, you should be able to do this with heat now.

 - My experiences using heat for anything complex have been extremely
 frustrating.  The version on rackspace public cloud is ancient and limited,
 and quite easy to get into a state where the only fix is to destroy the entire
 stack and recreate it.  I'm sure these are fixed in newer versions of heat, 
 but
 last time I tried I was unable to run it standalone against an arms-length
 keystone because some of the recursive heat callbacks became confused
 about which auth token to use.

Newer versions of heat are better, and it's getting better all the time. I 
agree it's not as fault-tolerant as it needs to be yet in all cases. For now, I 
usually end up breaking my systems into a few separate stacks so that I can 
handle updates in a way where, if there is a failure, nothing critical gets 
lost. Usually that means launching/deleting/relaunching components rather than 
ever updating them.
 
 (I'm sure this can be fixed, if it wasn't already just me using it wrong in 
 the
 first place.)
 
 - As far as I know, nothing in a heat/loadbalancer/nova stack will actually
 reschedule jobs away from a failed machine. 

That's a problem currently, yes.

 There's also no lazy
 discovery/nameservice mechanism, so updating IP address declarations in
 cloud- configs tend to ripple through the heat config and cause all sorts of
 VMs/containers to be reinstalled without any sort of throttling or rolling
 update.

I try to use floating IPs for these. Designate support, so that you have a 
nice pretty name for it, would be even better. Alternatively, I've had good 
luck with private Neutron networks. When you own the net, you can assign the 
VM any IP you want in it. In cases like Ceph, where you have mons that need to 
be at known IPs, I just make sure the VMs running the mons are always launched 
with those fixed IPs. Then you never have to reconfigure the slaves.
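For the known-IP case that's just a pre-created port on the private net,
something like this (addresses and parameter names are only illustrative):

mon_port:
  type: OS::Neutron::Port
  properties:
    network_id: { get_param: private_net_id }
    fixed_ips:
      - ip_address: 10.0.0.11          # the well-known mon address

mon_server:
  type: OS::Nova::Server
  properties:
    image: { get_param: image }
    flavor: { get_param: flavor }
    networks:
      - port: { get_resource: mon_port }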


 So: I think there's some things to learn from

Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Chmouel Boudjnah
On Thu, Sep 25, 2014 at 6:02 AM, Clint Byrum cl...@fewbar.com wrote:

 However, this does make me think that Keystone domains should be exposable
 to services inside your cloud for use as SSO. It would be quite handy
 if the keystone users used for the VMs that host Kubernetes could use
 the same credentials to manage the containers.



I was thinking exactly the same thing, and was looking at the code here:

https://github.com/GoogleCloudPlatform/kubernetes/blob/master/pkg/client/request.go#L263

it seems to use basic HTTP auth, which should be enough to work with the
REMOTE_USER/apache external-auth feature of keystone:

http://docs.openstack.org/developer/keystone/external-auth.html#using-httpd-authentication

but if we want proper, full integration with OpenStack we would probably,
at some point, want to teach k8s some auth modularity and give it a
keystone plugin.

Chmouel
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Clint Byrum
Excerpts from Mike Spreitzer's message of 2014-09-24 22:01:54 -0700:
 Clint Byrum cl...@fewbar.com wrote on 09/25/2014 12:13:53 AM:
 
  Excerpts from Mike Spreitzer's message of 2014-09-24 20:49:20 -0700:
   Steven Dake sd...@redhat.com wrote on 09/24/2014 11:02:49 PM:
...
   ...
   Does TripleO require container functionality that is not available
   when using the Docker driver for Nova?
   
   As far as I can tell, the quantitative handling of capacities and
   demands in Kubernetes is much inferior to what Nova does today.
   
  
  Yes, TripleO needs to manage baremetal and containers from a single
  host. Nova and Neutron do not offer this as a feature unfortunately.
 
 In what sense would Kubernetes manage baremetal (at all)?
 By from a single host do you mean that a client on one host
 can manage remote baremetal and containers?
 
 I can see that Kubernetes allows a client on one host to get
 containers placed remotely --- but so does the Docker driver for Nova.
 

I mean that one box would need to host Ironic, Docker, and Nova, for
the purposes of deploying OpenStack. We call it the undercloud, or
sometimes the Deployment Cloud.

It's not necessarily something that Nova/Neutron cannot do by design,
but it doesn't work now.

  
As far as use cases go, the main use case is to run a specific 
Docker container on a specific Kubernetes minion bare metal host.
 
 Clint, in another branch of this email tree you referred to
 the VMs that host Kubernetes.  How does that square with
 Steve's text that seems to imply bare metal minions?
 

That was in a more general context, discussing using Kubernetes for
general deployment. I could just as easily have said hosts,
machines, or instances.

 I can see that some people have had much more detailed design
 discussions than I have yet found.  Perhaps it would be helpful
 to share an organized presentation of the design thoughts in
 more detail.
 

I personally have not had any detailed discussions about this before it
was announced. I've just dug into the design and some of the code of
Kubernetes because it is quite interesting to me.

   
   If TripleO already knows it wants to run a specific Docker image
   on a specific host then TripleO does not need a scheduler.
   
  
  TripleO does not ever specify destination host, because Nova does not
  allow that, nor should it. It does want to isolate failure domains so
  that all three Galera nodes aren't on the same PDU, but we've not really
  gotten to the point where we can do that yet.
 
 So I am still not clear on what Steve is trying to say is the main use 
 case.
 Kubernetes is even farther from balancing among PDUs than Nova is.
 At least Nova has a framework in which this issue can be posed and solved.
 I mean a framework that actually can carry the necessary information.
 The Kubernetes scheduler interface is extremely impoverished in the
 information it passes and it uses GO structs --- which, like C structs,
 can not be subclassed.

I don't think this is totally clear yet. The thing that Steven seems to be
trying to solve is deploying OpenStack using docker, and Kubernetes may
very well be a better choice than Nova for this. There are some really
nice features, and a lot of the benefits we've been citing about image
based deployments are realized in docker without the pain of a full OS
image to redeploy all the time.

The structs vs. classes argument is completely out of line and has
nothing to do with where Kubernetes might go in the future. It's like
saying because cars use internal combustion engines they are limited. It
is just a facet of how it works today.

 Nova's filter scheduler includes a fatal bug that bites when balancing and 
 you want more than
 one element per area, see https://bugs.launchpad.net/nova/+bug/1373478.
 However: (a) you might not need more than one element per area and
 (b) fixing that bug is a much smaller job than expanding the mind of K8s.
 

Perhaps. I am quite a fan of set-based design, and Kubernetes is a
narrowly focused, single-implementation solution, whereas Nova is a broadly
focused abstraction layer for VMs. I think it is worthwhile to push
a bit into the Kubernetes space and see whether the limitations are
important or not.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Steven Dake

On 09/25/2014 12:01 AM, Clint Byrum wrote:

Excerpts from Mike Spreitzer's message of 2014-09-24 22:01:54 -0700:

Clint Byrum cl...@fewbar.com wrote on 09/25/2014 12:13:53 AM:


Excerpts from Mike Spreitzer's message of 2014-09-24 20:49:20 -0700:

Steven Dake sd...@redhat.com wrote on 09/24/2014 11:02:49 PM:

...

...
Does TripleO require container functionality that is not available
when using the Docker driver for Nova?

As far as I can tell, the quantitative handling of capacities and
demands in Kubernetes is much inferior to what Nova does today.


Yes, TripleO needs to manage baremetal and containers from a single
host. Nova and Neutron do not offer this as a feature unfortunately.

In what sense would Kubernetes manage baremetal (at all)?
By from a single host do you mean that a client on one host
can manage remote baremetal and containers?

I can see that Kubernetes allows a client on one host to get
containers placed remotely --- but so does the Docker driver for Nova.


I mean that one box would need to host Ironic, Docker, and Nova, for
the purposes of deploying OpenStack. We call it the undercloud, or
sometimes the Deployment Cloud.

It's not necessarily something that Nova/Neutron cannot do by design,
but it doesn't work now.


As far as use cases go, the main use case is to run a specific
Docker container on a specific Kubernetes minion bare metal host.

Clint, in another branch of this email tree you referred to
the VMs that host Kubernetes.  How does that square with
Steve's text that seems to imply bare metal minions?


That was in a more general context, discussing using Kubernetes for
general deployment. Could have just as easily have said hosts,
machines, or instances.


I can see that some people have had much more detailed design
discussions than I have yet found.  Perhaps it would be helpful
to share an organized presentation of the design thoughts in
more detail.


I personally have not had any detailed discussions about this before it
was announced. I've just dug into the design and some of the code of
Kubernetes because it is quite interesting to me.


If TripleO already knows it wants to run a specific Docker image
on a specific host then TripleO does not need a scheduler.


TripleO does not ever specify destination host, because Nova does not
allow that, nor should it. It does want to isolate failure domains so
that all three Galera nodes aren't on the same PDU, but we've not really
gotten to the point where we can do that yet.

So I am still not clear on what Steve is trying to say is the main use
case.
Kubernetes is even farther from balancing among PDUs than Nova is.
At least Nova has a framework in which this issue can be posed and solved.
I mean a framework that actually can carry the necessary information.
The Kubernetes scheduler interface is extremely impoverished in the
information it passes and it uses GO structs --- which, like C structs,
can not be subclassed.

I don't think this is totally clear yet. The thing that Steven seems to be
trying to solve is deploying OpenStack using docker, and Kubernetes may
very well be a better choice than Nova for this. There are some really
nice features, and a lot of the benefits we've been citing about image
based deployments are realized in docker without the pain of a full OS
image to redeploy all the time.


This is precisely the problem I want to solve.  I looked at Nova+Docker 
as a solution, and it seems to me the runway to get to a successful 
codebase is longer, with more risk.  That is why this is an experiment to 
see if a Kubernetes-based approach would work.  If at the end of the day 
we throw out Kubernetes as a scheduler once we have the other problems 
solved, and reimplement Kubernetes in Nova+Docker, I think that would be 
an acceptable outcome - not something I want to *start* with, but to 
*finish* with.


Regards
-steve


The structs vs. classes argument is completely out of line and has
nothing to do with where Kubernetes might go in the future. It's like
saying because cars use internal combustion engines they are limited. It
is just a facet of how it works today.


Nova's filter scheduler includes a fatal bug that bites when balancing and
you want more than
one element per area, see https://bugs.launchpad.net/nova/+bug/1373478.
However: (a) you might not need more than one element per area and
(b) fixing that bug is a much smaller job than expanding the mind of K8s.


Perhaps. I am quite a fan of set based design, and Kubernetes is a
narrowly focused single implementation solution, where Nova is a broadly
focused abstraction layer for VM's. I think it is worthwhile to push
a bit into the Kubernetes space and see whether the limitations are
important or not.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list

Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Steven Dake

On 09/24/2014 10:01 PM, Mike Spreitzer wrote:

Clint Byrum cl...@fewbar.com wrote on 09/25/2014 12:13:53 AM:

 Excerpts from Mike Spreitzer's message of 2014-09-24 20:49:20 -0700:
  Steven Dake sd...@redhat.com wrote on 09/24/2014 11:02:49 PM:
   ...
  ...
  Does TripleO require container functionality that is not available
  when using the Docker driver for Nova?
 
  As far as I can tell, the quantitative handling of capacities and
  demands in Kubernetes is much inferior to what Nova does today.
 

 Yes, TripleO needs to manage baremetal and containers from a single
 host. Nova and Neutron do not offer this as a feature unfortunately.

In what sense would Kubernetes manage baremetal (at all)?
By from a single host do you mean that a client on one host
can manage remote baremetal and containers?

I can see that Kubernetes allows a client on one host to get
containers placed remotely --- but so does the Docker driver for Nova.


   As far as use cases go, the main use case is to run a specific
   Docker container on a specific Kubernetes minion bare metal host.

Clint, in another branch of this email tree you referred to
the VMs that host Kubernetes.  How does that square with
Steve's text that seems to imply bare metal minions?

I can see that some people have had much more detailed design
discussions than I have yet found.  Perhaps it would be helpful
to share an organized presentation of the design thoughts in
more detail.



Mike,

I have had no such design discussions.  Thus far the furthest along we 
are in the project is determining we need Docker containers for each of 
the OpenStack daemons.  We are working a bit on how that design should 
operate.  For example, our current model on reconfiguration of a docker 
container is to kill the docker container and start a fresh one with the 
new configuration.


This is literally where the design discussions have finished.  We have 
not had much discussion about Kubernetes at all, other than that I know it is 
a docker scheduler and I know it can get the job done :) I think other 
folks' design discussions so far on this thread are speculation about 
what an architecture should look like.  That is great - let's have those 
on Monday at 2000 UTC in #openstack-meeting at our first Kolla meeting.


Regards
-steve


 
  If TripleO already knows it wants to run a specific Docker image
  on a specific host then TripleO does not need a scheduler.
 

 TripleO does not ever specify destination host, because Nova does not
 allow that, nor should it. It does want to isolate failure domains so
 that all three Galera nodes aren't on the same PDU, but we've not really
 gotten to the point where we can do that yet.

So I am still not clear on what Steve is trying to say is the main use 
case.

Kubernetes is even farther from balancing among PDUs than Nova is.
At least Nova has a framework in which this issue can be posed and 
solved.

I mean a framework that actually can carry the necessary information.
The Kubernetes scheduler interface is extremely impoverished in the
information it passes and it uses GO structs --- which, like C structs,
can not be subclassed.
Nova's filter scheduler includes a fatal bug that bites when balancing 
and you want more than

one element per area, see https://bugs.launchpad.net/nova/+bug/1373478.
However: (a) you might not need more than one element per area and
(b) fixing that bug is a much smaller job than expanding the mind of K8s.

Thanks,
Mike


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Fox, Kevin M
Doesn't nova with a docker driver and heat autoscaling handle case 2 and 3 for 
control jobs? Has anyone tried yet?

Thanks,
Kevin

From: Angus Lees [g...@inodes.org]
Sent: Wednesday, September 24, 2014 6:33 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and
Manage OpenStack using Kubernetes and Docker

On Wed, 24 Sep 2014 10:31:19 PM Alan Kavanagh wrote:
 Steven
 I have to ask what is the motivation and benefits we get from integrating
 Kubernetes into Openstack? Would be really useful if you can elaborate and
 outline some use cases and benefits Openstack and Kubernetes can gain.

I've no idea what Steven's motivation is, but here's my reasoning for going
down a similar path:

OpenStack deployment is basically two types of software:
1. Control jobs, various API servers, etc that are basically just regular
python wsgi apps.
2. Compute/network node agents that run under hypervisors, configure host
networking, etc.

The 2nd group probably wants to run on baremetal and is mostly identical on
all such machines, but the 1st group wants higher level PaaS type things.

In particular, for the control jobs you want:

- Something to deploy the code (docker / distro packages / pip install / etc)
- Something to choose where to deploy
- Something to respond to machine outages / autoscaling and re-deploy as
necessary

These last few don't have strong existing options within OpenStack yet (as far
as I'm aware).  Having explored a few different approaches recently, kubernetes
is certainly not the only option - but is a reasonable contender here.


So: I certainly don't see kubernetes as competing with anything in OpenStack -
but as filling a gap in job management with something that has a fairly
lightweight config syntax and is relatively simple to deploy on VMs or
baremetal.  I also think the phrase "integrating kubernetes into OpenStack" is
overstating the task at hand.

The primary downside I've discovered so far seems to be that kubernetes is
very young and still has an awkward cli, a few easy to encounter bugs, etc.

 - Gus

 From: Steven Dake [mailto:sd...@redhat.com]
 Sent: September-24-14 7:41 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and
 Manage OpenStack using Kubernetes and Docker

 On 09/24/2014 10:12 AM, Joshua Harlow wrote:
 Sounds like an interesting project/goal and will be interesting to see where
 this goes.

 A few questions/comments:

 How much golang will people be exposed to with this addition?

 Joshua,

 I expect very little.  We intend to use Kubernetes as an upstream project,
 rather then something we contribute to directly.


 Seeing that this could be the first 'go' using project it will be
 interesting to see where this goes (since afaik none of the infra support
 exists, and people aren't likely to familiar with go vs python in the
 openstack community overall).

 What's your thoughts on how this will affect the existing openstack
 container effort?

 I don't think it will have any impact on the existing Magnum project.  At
 some point if Magnum implements scheduling of docker containers, we may add
 support for Magnum in addition to Kubernetes, but it is impossible to tell
 at this point.  I don't want to derail either project by trying to force
 them together unnaturally so early.


 I see that kubernetes isn't exactly a small project either (~90k LOC, for
 those who use these types of metrics), so I wonder how that will affect
 people getting involved here, aka, who has the resources/operators/other...
 available to actually setup/deploy/run kubernetes, when operators are
 likely still just struggling to run openstack itself (at least operators
 are getting used to the openstack warts, a new set of kubernetes warts
 could not be so helpful).

 Yup it is fairly large in size.  Time will tell if this approach will work.

 This is an experiment as Robert and others on the thread have pointed out
 :).

 Regards
 -steve


 On Sep 23, 2014, at 3:40 PM, Steven Dake
 sd...@redhat.commailto:sd...@redhat.com wrote:


 Hi folks,

 I'm pleased to announce the development of a new project Kolla which is
 Greek for glue :). Kolla has a goal of providing an implementation that
 deploys OpenStack using Kubernetes and Docker. This project will begin as a
 StackForge project separate from the TripleO/Deployment program code base.
 Our long term goal is to merge into the TripleO/Deployment program rather
 then create a new program.



 Docker is a container technology for delivering hermetically sealed
 applications and has about 620 technical contributors [1]. We intend to
 produce docker images for a variety of platforms beginning with Fedora 20.
 We are completely open to any distro support, so if folks want to add new
 Linux distribution to Kolla please feel free to submit patches :)



 Kubernetes

Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Fox, Kevin M
Then you still need all the kubernetes api/daemons for the master and slaves. 
If you ignore the complexity this adds, then it seems simpler than just using 
openstack for it. But really, it still is an under/overcloud kind of setup; 
you're just using kubernetes for the undercloud, and openstack for the overcloud?

Thanks,
Kevin

From: Steven Dake [sd...@redhat.com]
Sent: Wednesday, September 24, 2014 8:02 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and 
Manage OpenStack using Kubernetes and Docker

On 09/24/2014 03:31 PM, Alan Kavanagh wrote:
Steven
I have to ask what is the motivation and benefits we get from integrating 
Kubernetes into Openstack? Would be really useful if you can elaborate and 
outline some use cases and benefits Openstack and Kubernetes can gain.

/Alan

Alan,

I am either unaware or ignorant of another Docker scheduler that is currently 
available that has a big (100+ folks) development community.  Kubernetes meets 
these requirements and is my main motivation for using it to schedule Docker 
containers.  There are other ways to skin this cat - The TripleO folks wanted 
at one point to deploy nova with the nova docker VM manager to do such a thing. 
 This model seemed a little clunky to me since it isn't purpose built around 
containers.

As far as use cases go, the main use case is to run a specific Docker container 
on a specific Kubernetes minion bare metal host.  These docker containers are 
then composed of the various config tools and services for each detailed 
service in OpenStack.  For example, mysql would be a container, and tools to 
configure the mysql service would exist in the container.  Kubernetes would 
pass config options for the mysql database prior to scheduling and once 
scheduled, Kubernetes would be responsible for connecting the various 
containers together.
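As a very rough sketch of the sort of thing I mean (the image name, variables
and field names are purely illustrative, not a settled design, and the syntax
follows the current upstream API shape rather than the exact beta syntax):

kind: Pod
apiVersion: v1
metadata:
  name: mysql
spec:
  containers:
    - name: mysql
      image: mysql                        # placeholder image
      env:
        - name: MYSQL_ROOT_PASSWORD       # config passed in before scheduling
          value: secret
        - name: MYSQL_DATABASE
          value: nova
      ports:
        - containerPort: 3306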

Regards
-steve



From: Steven Dake [mailto:sd...@redhat.com]
Sent: September-24-14 7:41 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and 
Manage OpenStack using Kubernetes and Docker

On 09/24/2014 10:12 AM, Joshua Harlow wrote:
Sounds like an interesting project/goal and will be interesting to see where 
this goes.

A few questions/comments:

How much golang will people be exposed to with this addition?

Joshua,

I expect very little.  We intend to use Kubernetes as an upstream project, 
rather then something we contribute to directly.


Seeing that this could be the first 'go' using project it will be interesting 
to see where this goes (since afaik none of the infra support exists, and 
people aren't likely to familiar with go vs python in the openstack community 
overall).

What's your thoughts on how this will affect the existing openstack container 
effort?

I don't think it will have any impact on the existing Magnum project.  At some 
point if Magnum implements scheduling of docker containers, we may add support 
for Magnum in addition to Kubernetes, but it is impossible to tell at this 
point.  I don't want to derail either project by trying to force them together 
unnaturally so early.


I see that kubernetes isn't exactly a small project either (~90k LOC, for those 
who use these types of metrics), so I wonder how that will affect people 
getting involved here, aka, who has the resources/operators/other... available 
to actually setup/deploy/run kubernetes, when operators are likely still just 
struggling to run openstack itself (at least operators are getting used to the 
openstack warts, a new set of kubernetes warts could not be so helpful).

Yup it is fairly large in size.  Time will tell if this approach will work.

This is an experiment as Robert and others on the thread have pointed out :).

Regards
-steve


On Sep 23, 2014, at 3:40 PM, Steven Dake 
sd...@redhat.commailto:sd...@redhat.com wrote:


Hi folks,

I'm pleased to announce the development of a new project Kolla which is Greek 
for glue :). Kolla has a goal of providing an implementation that deploys 
OpenStack using Kubernetes and Docker. This project will begin as a StackForge 
project separate from the TripleO/Deployment program code base. Our long term 
goal is to merge into the TripleO/Deployment program rather then create a new 
program.



Docker is a container technology for delivering hermetically sealed 
applications and has about 620 technical contributors [1]. We intend to produce 
docker images for a variety of platforms beginning with Fedora 20. We are 
completely open to any distro support, so if folks want to add new Linux 
distribution to Kolla please feel free to submit patches :)



Kubernetes at the most basic level is a Docker scheduler produced by and used 
within Google [2]. Kubernetes has in excess of 100 technical contributors. 
Kubernetes is more then just a scheduler, it provides

Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Fox, Kevin M
Why can't you manage baremetal and containers from a single host with 
nova/neutron? Is this a current missing feature, or have the development teams 
said they will never implement it?

Thanks,
Kevin

From: Clint Byrum [cl...@fewbar.com]
Sent: Wednesday, September 24, 2014 9:13 PM
To: openstack-dev
Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and
Manage OpenStack using Kubernetes and Docker

Excerpts from Mike Spreitzer's message of 2014-09-24 20:49:20 -0700:
 Steven Dake sd...@redhat.com wrote on 09/24/2014 11:02:49 PM:

  On 09/24/2014 03:31 PM, Alan Kavanagh wrote:
  Steven
  I have to ask what is the motivation and benefits we get from
  integrating Kubernetes into Openstack? Would be really useful if you
  can elaborate and outline some use cases and benefits Openstack and
  Kubernetes can gain.
 
  /Alan
 
  Alan,
 
  I am either unaware or ignorant of another Docker scheduler that is
  currently available that has a big (100+ folks) development
  community.  Kubernetes meets these requirements and is my main
  motivation for using it to schedule Docker containers.  There are
  other ways to skin this cat - The TripleO folks wanted at one point
  to deploy nova with the nova docker VM manager to do such a thing.
  This model seemed a little clunky to me since it isn't purpose built
  around containers.

 Does TripleO require container functionality that is not available
 when using the Docker driver for Nova?

 As far as I can tell, the quantitative handling of capacities and
 demands in Kubernetes is much inferior to what Nova does today.


Yes, TripleO needs to manage baremetal and containers from a single
host. Nova and Neutron do not offer this as a feature unfortunately.

  As far as use cases go, the main use case is to run a specific
  Docker container on a specific Kubernetes minion bare metal host.

 If TripleO already knows it wants to run a specific Docker image
 on a specific host then TripleO does not need a scheduler.


TripleO does not ever specify destination host, because Nova does not
allow that, nor should it. It does want to isolate failure domains so
that all three Galera nodes aren't on the same PDU, but we've not really
gotten to the point where we can do that yet.

  These docker containers are then composed of the various config
  tools and services for each detailed service in OpenStack.  For
  example, mysql would be a container, and tools to configure the
  mysql service would exist in the container.  Kubernetes would pass
  config options for the mysql database prior to scheduling

 I am not sure what is meant here by pass config options nor how it
 would be done prior to scheduling; can you please clarify?
 I do not imagine Kubernetes would *choose* the config values,
 K8s does not know anything about configuring OpenStack.
 Before scheduling, there is no running container to pass
 anything to.


Docker containers tend to use environment variables passed to the initial
command to configure things. The Kubernetes API allows setting these
environment variables on creation of the container.

and once
  scheduled, Kubernetes would be responsible for connecting the
  various containers together.

 Kubernetes has a limited role in connecting containers together.
 K8s creates the networking environment in which the containers
 *can* communicate, and passes environment variables into containers
 telling them from what protocol://host:port/ to import each imported
 endpoint.  Kubernetes creates a universal reverse proxy on each
 minion, to provide endpoints that do not vary as the servers
 move around.
 It is up to stuff outside Kubernetes to decide
 what should be connected to what, and it is up to the containers
 to read the environment variables and actually connect.


This is a nice simple interface though, and I like that it is narrowly
defined, not trying to be anything that containers want to share with
other containers.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Clint Byrum
First, Kevin, please try to figure out a way to reply in-line when you're
replying to multiple levels of threads. Even if you have to copy and
quote it manually... it took me three readings of your message and the
previous one to understand the context.

Second, I don't think anybody minds having a control plane for each
level of control. The point isn't to replace the undercloud, but to
replace nova rebuild as the way you push out new software while
retaining the benefits of the image approach.

Excerpts from Fox, Kevin M's message of 2014-09-25 09:07:10 -0700:
 Then you still need all the kubernetes api/daemons for the master and slaves. 
 If you ignore the complexity this adds, then it seems simpler then just using 
 openstack for it. but really, it still is an under/overcloud kind of setup, 
 your just using kubernetes for the undercloud, and openstack for the 
 overcloud?
 
 Thanks,
 Kevin
 
 From: Steven Dake [sd...@redhat.com]
 Sent: Wednesday, September 24, 2014 8:02 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and 
 Manage OpenStack using Kubernetes and Docker
 
 On 09/24/2014 03:31 PM, Alan Kavanagh wrote:
 Steven
 I have to ask what is the motivation and benefits we get from integrating 
 Kubernetes into Openstack? Would be really useful if you can elaborate and 
 outline some use cases and benefits Openstack and Kubernetes can gain.
 
 /Alan
 
 Alan,
 
 I am either unaware or ignorant of another Docker scheduler that is currently 
 available that has a big (100+ folks) development community.  Kubernetes 
 meets these requirements and is my main motivation for using it to schedule 
 Docker containers.  There are other ways to skin this cat - The TripleO folks 
 wanted at one point to deploy nova with the nova docker VM manager to do such 
 a thing.  This model seemed a little clunky to me since it isn't purpose 
 built around containers.
 
 As far as use cases go, the main use case is to run a specific Docker 
 container on a specific Kubernetes minion bare metal host.  These docker 
 containers are then composed of the various config tools and services for 
 each detailed service in OpenStack.  For example, mysql would be a container, 
 and tools to configure the mysql service would exist in the container.  
 Kubernetes would pass config options for the mysql database prior to 
 scheduling and once scheduled, Kubernetes would be responsible for connecting 
 the various containers together.
 
 Regards
 -steve
 
 
 
 From: Steven Dake [mailto:sd...@redhat.com]
 Sent: September-24-14 7:41 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and 
 Manage OpenStack using Kubernetes and Docker
 
 On 09/24/2014 10:12 AM, Joshua Harlow wrote:
 Sounds like an interesting project/goal and will be interesting to see where 
 this goes.
 
 A few questions/comments:
 
 How much golang will people be exposed to with this addition?
 
 Joshua,
 
 I expect very little.  We intend to use Kubernetes as an upstream project, 
 rather then something we contribute to directly.
 
 
 Seeing that this could be the first 'go' using project it will be interesting 
 to see where this goes (since afaik none of the infra support exists, and 
 people aren't likely to familiar with go vs python in the openstack community 
 overall).
 
 What's your thoughts on how this will affect the existing openstack container 
 effort?
 
 I don't think it will have any impact on the existing Magnum project.  At 
 some point if Magnum implements scheduling of docker containers, we may add 
 support for Magnum in addition to Kubernetes, but it is impossible to tell at 
 this point.  I don't want to derail either project by trying to force them 
 together unnaturally so early.
 
 
 I see that kubernetes isn't exactly a small project either (~90k LOC, for 
 those who use these types of metrics), so I wonder how that will affect 
 people getting involved here, aka, who has the resources/operators/other... 
 available to actually setup/deploy/run kubernetes, when operators are likely 
 still just struggling to run openstack itself (at least operators are getting 
 used to the openstack warts, a new set of kubernetes warts could not be so 
 helpful).
 
 Yup it is fairly large in size.  Time will tell if this approach will work.
 
 This is an experiment as Robert and others on the thread have pointed out :).
 
 Regards
 -steve
 
 
 On Sep 23, 2014, at 3:40 PM, Steven Dake 
 sd...@redhat.commailto:sd...@redhat.com wrote:
 
 
 Hi folks,
 
 I'm pleased to announce the development of a new project Kolla which is Greek 
 for glue :). Kolla has a goal of providing an implementation that deploys 
 OpenStack using Kubernetes and Docker. This project will begin as a 
 StackForge project separate from the TripleO/Deployment program code base

Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Fox, Kevin M
Ah. So the goal of project Kolla, then, is to deploy OpenStack via Docker using 
whatever means works (with Kubernetes as the first stab at an implementation), 
not to deploy OpenStack using Docker+Kubernetes specifically. That seems like 
a much more reasonable goal to me.

Thanks,
Kevin

From: Steven Dake [sd...@redhat.com]
Sent: Thursday, September 25, 2014 8:30 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and 
Manage OpenStack using Kubernetes and Docker

On 09/25/2014 12:01 AM, Clint Byrum wrote:
 Excerpts from Mike Spreitzer's message of 2014-09-24 22:01:54 -0700:
 Clint Byrum cl...@fewbar.com wrote on 09/25/2014 12:13:53 AM:

 Excerpts from Mike Spreitzer's message of 2014-09-24 20:49:20 -0700:
 Steven Dake sd...@redhat.com wrote on 09/24/2014 11:02:49 PM:
 ...
 ...
 Does TripleO require container functionality that is not available
 when using the Docker driver for Nova?

 As far as I can tell, the quantitative handling of capacities and
 demands in Kubernetes is much inferior to what Nova does today.

 Yes, TripleO needs to manage baremetal and containers from a single
 host. Nova and Neutron do not offer this as a feature unfortunately.
 In what sense would Kubernetes manage baremetal (at all)?
 By from a single host do you mean that a client on one host
 can manage remote baremetal and containers?

 I can see that Kubernetes allows a client on one host to get
 containers placed remotely --- but so does the Docker driver for Nova.

 I mean that one box would need to host Ironic, Docker, and Nova, for
 the purposes of deploying OpenStack. We call it the undercloud, or
 sometimes the Deployment Cloud.

 It's not necessarily something that Nova/Neutron cannot do by design,
 but it doesn't work now.

 As far as use cases go, the main use case is to run a specific
 Docker container on a specific Kubernetes minion bare metal host.
 Clint, in another branch of this email tree you referred to
 the VMs that host Kubernetes.  How does that square with
 Steve's text that seems to imply bare metal minions?

 That was in a more general context, discussing using Kubernetes for
 general deployment. Could have just as easily have said hosts,
 machines, or instances.

 I can see that some people have had much more detailed design
 discussions than I have yet found.  Perhaps it would be helpful
 to share an organized presentation of the design thoughts in
 more detail.

 I personally have not had any detailed discussions about this before it
 was announced. I've just dug into the design and some of the code of
 Kubernetes because it is quite interesting to me.

 If TripleO already knows it wants to run a specific Docker image
 on a specific host then TripleO does not need a scheduler.

 TripleO does not ever specify destination host, because Nova does not
 allow that, nor should it. It does want to isolate failure domains so
 that all three Galera nodes aren't on the same PDU, but we've not really
 gotten to the point where we can do that yet.
 So I am still not clear on what Steve is trying to say is the main use
 case.
 Kubernetes is even farther from balancing among PDUs than Nova is.
 At least Nova has a framework in which this issue can be posed and solved.
 I mean a framework that actually can carry the necessary information.
 The Kubernetes scheduler interface is extremely impoverished in the
 information it passes and it uses GO structs --- which, like C structs,
 can not be subclassed.
 I don't think this is totally clear yet. The thing that Steven seems to be
 trying to solve is deploying OpenStack using docker, and Kubernetes may
 very well be a better choice than Nova for this. There are some really
 nice features, and a lot of the benefits we've been citing about image
 based deployments are realized in docker without the pain of a full OS
 image to redeploy all the time.

This is precisely the problem I want to solve.  I looked at Nova+Docker
as a solution, and it seems to me the runway to get to a successful
codebase is longer with more risk.  That is why this is an experiment to
see if a Kubernetes based approach would work.  if at the end of the day
we throw out Kubernetes as a scheduler once we have the other problems
solved and reimplement Kubernetes in Nova+Docker, I think that would be
an acceptable outcome, but not something I want to *start* with but
*finish* with.

Regards
-steve

 The structs vs. classes argument is completely out of line and has
 nothing to do with where Kubernetes might go in the future. It's like
 saying because cars use internal combustion engines they are limited. It
 is just a facet of how it works today.

 Nova's filter scheduler includes a fatal bug that bites when balancing and
 you want more than
 one element per area, see https://bugs.launchpad.net/nova/+bug/1373478.
 However: (a) you might not need more than one element per area

Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Clint Byrum
Excerpts from Fox, Kevin M's message of 2014-09-25 09:13:26 -0700:
 Why can't you manage baremetal and containers from a single host with 
 nova/neutron? Is this a current missing feature, or has the development teams 
 said they will never implement it?
 

It's a bug.

But it is also a complexity that isn't really handled well in Nova's
current design. Nova wants to send the workload onto the machine, and
that is it. In this case, you have two workloads, one hosted on the other,
and Nova has no model for that. You end up in a weird situation where one
workload (baremetal) is the host for the other (containers), with no real way
to separate the two or identify that dependency.

I think it's worth pursuing in OpenStack, but Steven is solving deployment
of OpenStack today with tools that exist today. I think Kolla may very
well prove that the container approach is too different from Nova's design
and wants to be more separate, at which point our big tent will be in
an interesting position: Do we adopt Kubernetes and put an OpenStack
API on it, or do we re-implement it.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Fox, Kevin M


 -Original Message-
 From: Clint Byrum [mailto:cl...@fewbar.com]
 Sent: Thursday, September 25, 2014 9:35 AM
 To: openstack-dev
 Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and
 Manage OpenStack using Kubernetes and Docker
 
 First, Kevin, please try to figure out a way to reply in-line when you're
 replying to multiple levels of threads. Even if you have to copy and quote it
 manually.. it took me reading your message and the previous message 3
 times to understand the context.

I'm sorry. I think your frustration with it mirrors the frustration I have with 
having to use this blankity blank microsoft webmail that doesn't support inline 
commenting, or having to rdesktop to a windows terminal server so I can reply 
inline. :/

 
 Second, I don't think anybody minds having a control plane for each level of
 control. The point isn't to replace the undercloud, but to replace nova
 rebuild as the way you push out new software while retaining the benefits
 of the image approach.

I don't quite follow. Wouldn't you be using heat autoscaling, not nova directly?

Thanks,
Kevin
 
 Excerpts from Fox, Kevin M's message of 2014-09-25 09:07:10 -0700:
  Then you still need all the kubernetes api/daemons for the master and
 slaves. If you ignore the complexity this adds, then it seems simpler then
 just using openstack for it. but really, it still is an under/overcloud kind 
 of
 setup, your just using kubernetes for the undercloud, and openstack for the
 overcloud?
 
  Thanks,
  Kevin
  
  From: Steven Dake [sd...@redhat.com]
  Sent: Wednesday, September 24, 2014 8:02 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla:
  Deploy and Manage OpenStack using Kubernetes and Docker
 
  On 09/24/2014 03:31 PM, Alan Kavanagh wrote:
  Steven
  I have to ask what is the motivation and benefits we get from integrating
 Kubernetes into Openstack? Would be really useful if you can elaborate and
 outline some use cases and benefits Openstack and Kubernetes can gain.
 
  /Alan
 
  Alan,
 
  I am either unaware or ignorant of another Docker scheduler that is
 currently available that has a big (100+ folks) development community.
 Kubernetes meets these requirements and is my main motivation for using
 it to schedule Docker containers.  There are other ways to skin this cat - The
 TripleO folks wanted at one point to deploy nova with the nova docker VM
 manager to do such a thing.  This model seemed a little clunky to me since it
 isn't purpose built around containers.
 
  As far as use cases go, the main use case is to run a specific Docker
 container on a specific Kubernetes minion bare metal host.  These docker
 containers are then composed of the various config tools and services for
 each detailed service in OpenStack.  For example, mysql would be a
 container, and tools to configure the mysql service would exist in the
 container.  Kubernetes would pass config options for the mysql database
 prior to scheduling and once scheduled, Kubernetes would be responsible
 for connecting the various containers together.
 
  Regards
  -steve
 
 
 
  From: Steven Dake [mailto:sd...@redhat.com]
  Sent: September-24-14 7:41 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla:
  Deploy and Manage OpenStack using Kubernetes and Docker
 
  On 09/24/2014 10:12 AM, Joshua Harlow wrote:
  Sounds like an interesting project/goal and will be interesting to see
 where this goes.
 
  A few questions/comments:
 
  How much golang will people be exposed to with this addition?
 
  Joshua,
 
  I expect very little.  We intend to use Kubernetes as an upstream project,
 rather then something we contribute to directly.
 
 
  Seeing that this could be the first 'go' using project it will be 
  interesting to
 see where this goes (since afaik none of the infra support exists, and people
 aren't likely to familiar with go vs python in the openstack community
 overall).
 
  What's your thoughts on how this will affect the existing openstack
 container effort?
 
  I don't think it will have any impact on the existing Magnum project.  At
 some point if Magnum implements scheduling of docker containers, we
 may add support for Magnum in addition to Kubernetes, but it is impossible
 to tell at this point.  I don't want to derail either project by trying to 
 force
 them together unnaturally so early.
 
 
  I see that kubernetes isn't exactly a small project either (~90k LOC, for
 those who use these types of metrics), so I wonder how that will affect
 people getting involved here, aka, who has the
 resources/operators/other... available to actually setup/deploy/run
 kubernetes, when operators are likely still just struggling to run openstack
 itself (at least operators are getting used to the openstack warts, a new set

Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Fox, Kevin M


 -Original Message-
 From: Clint Byrum [mailto:cl...@fewbar.com]
 Sent: Thursday, September 25, 2014 9:44 AM
 To: openstack-dev
 Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and
 Manage OpenStack using Kubernetes and Docker
 
 Excerpts from Fox, Kevin M's message of 2014-09-25 09:13:26 -0700:
  Why can't you manage baremetal and containers from a single host with
 nova/neutron? Is this a current missing feature, or has the development
 teams said they will never implement it?
 
 
 It's a bug.
 
 But it is also a complexity that isn't really handled well in Nova's current
 design. Nova wants to send the workload onto the machine, and that is it. In
 this case, you have two workloads, one hosted on the other, and Nova has
 no model for that. You end up in a weird situation where one
 (baremetal) is host for other (containers) and no real way to separate the
 two or identify that dependency.

Ideally, like you say, you should be able to have one host managed by two 
different nova drivers in the same cell. But I think today, you can simply use 
two different cells and it should work? One cell deploys bare metal 
images, one of which contains the nova docker compute 
resources. The other cell supports launching docker instances on those 
hosts. To the end user, it still looks like one unified cloud like we all want, 
but under the hood, it's two separate subclouds: an under- and an overcloud.

 I think it's worth pursuing in OpenStack, but Steven is solving deployment of
 OpenStack today with tools that exist today. I think Kolla may very well
 prove that the container approach is too different from Nova's design and
 wants to be more separate, at which point our big tent will be in an
 interesting position: Do we adopt Kubernetes and put an OpenStack API on
 it, or do we re-implement it.

That is a very interesting question, worth pursuing.

I think either way, most of the work is going to be in dockerizing the 
services. So that alone is worth playing with too.

I managed to get libvirt to work in docker once. It was a pain. Getting nova 
and neutron bits in that container too would be even harder. I'm waiting to try 
again until I know that systemd will run nicely inside a docker container. It 
would make managing the startup/stopping of the container much easier to get 
right. 

Thanks,
Kevin

 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Angus Lees
On Thu, 25 Sep 2014 04:01:38 PM Fox, Kevin M wrote:
 Doesn't nova with a docker driver and heat autoscaling handle case 2 and 3
 for control jobs? Has anyone tried yet?

For reference, the cases were:

 - Something to deploy the code (docker / distro packages / pip install /
 etc)
 - Something to choose where to deploy
 - Something to respond to machine outages / autoscaling and re-deploy as
 necessary


I tried for a while, yes.  The problems I ran into (and I'd be interested to 
know if there are solutions to these):

- I'm trying to deploy into VMs on rackspace public cloud (just because that's 
what I have).  This means I can't use the nova docker driver, without 
constructing an entire self-contained openstack undercloud first.

- heat+cloud-init (afaics) can't deal with circular dependencies (like nova-
neutron) since the machines need to exist first before you can refer to their 
IPs.
From what I can see, TripleO gets around this by always scheduling them on the 
same machine and just using the known local IP.  Other installs declare fixed 
IPs up front - on rackspace I can't do that (easily).
I can't use loadbalancers via heat for this because the loadbalancers need to 
know the backend node addresses, which means the nodes have to exist first and 
you're back to a circular dependency.
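
The shape of that circularity, sketched as a hypothetical HOT fragment (resource, 
image, and flavor names are invented, not taken from any real install):

heat_template_version: 2013-05-23

resources:
  nova_ctl:
    type: OS::Nova::Server
    properties:
      image: fedora-20
      flavor: m1.small
      user_data:
        str_replace:
          template: |
            #cloud-config
            write_files:
              - path: /etc/nova/neutron_host
                content: NEUTRON_IP
          params:
            # nova's cloud-init wants the neutron server's address...
            NEUTRON_IP: { get_attr: [neutron_ctl, first_address] }

  neutron_ctl:
    type: OS::Nova::Server
    properties:
      image: fedora-20
      flavor: m1.small
      user_data:
        str_replace:
          template: |
            #cloud-config
            write_files:
              - path: /etc/neutron/nova_host
                content: NOVA_IP
          params:
            # ...while neutron's cloud-init wants nova's address, so
            # neither server can be created first.
            NOVA_IP: { get_attr: [nova_ctl, first_address] }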

For comparison, with kubernetes you declare the loadbalancer-equivalents 
(services) up front with a search expression for the backends.  In a second 
pass you create the backends (pods) which can refer to any of the loadbalanced 
endpoints.  The loadbalancers then reconfigure themselves on the fly to find 
the 
new backends.  You _can_ do a similar lazy-loadbalancer-reconfig thing with 
openstack too, but not with heat and not just out of the box.
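
A minimal sketch of that two-pass pattern (purely illustrative; names and images 
are invented, and the manifest fields follow the later stable v1 Kubernetes API 
rather than the beta API current at the time of writing):

# Pass 1: declare the "loadbalancer-equivalent". The selector is the
# search expression; it matches backends that need not exist yet.
apiVersion: v1
kind: Service
metadata:
  name: nova-api
spec:
  selector:
    app: nova-api          # any pod carrying this label becomes a backend
  ports:
    - port: 8774
      targetPort: 8774
---
# Pass 2: create the backends. They can already refer to any of the
# declared service endpoints, so there is no ordering problem.
apiVersion: v1
kind: Pod
metadata:
  name: nova-api-1
  labels:
    app: nova-api
spec:
  containers:
    - name: nova-api
      image: example/nova-api     # hypothetical image
      ports:
        - containerPort: 8774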

- My experiences using heat for anything complex have been extremely 
frustrating.  The version on rackspace public cloud is ancient and limited, 
and quite easy to get into a state where the only fix is to destroy the entire 
stack and recreate it.  I'm sure these are fixed in newer versions of heat, but 
last time I tried I was unable to run it standalone against an arms-length 
keystone because some of the recursive heat callbacks became confused about 
which auth token to use.

(I'm sure this can be fixed, if it wasn't already just me using it wrong in the 
first place.)

- As far as I know, nothing in a heat/loadbalancer/nova stack will actually 
reschedule jobs away from a failed machine.  There's also no lazy 
discovery/nameservice mechanism, so updating IP address declarations in cloud-
configs tends to ripple through the heat config and cause all sorts of 
VMs/containers to be reinstalled without any sort of throttling or rolling 
update.


So: I think there are some things to learn from the kubernetes approach, which 
is why I'm trying to gain more experience with it.  I know I'm learning more 
about the various OpenStack components along the way too ;)

-- 
 - Gus

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Angus Salkeld
On Fri, Sep 26, 2014 at 2:01 PM, Angus Lees g...@inodes.org wrote:

 On Thu, 25 Sep 2014 04:01:38 PM Fox, Kevin M wrote:
  Doesn't nova with a docker driver and heat autoscaling handle case 2 and
 3
  for control jobs? Has anyone tried yet?

 For reference, the cases were:

  - Something to deploy the code (docker / distro packages / pip install /
  etc)
  - Something to choose where to deploy
  - Something to respond to machine outages / autoscaling and re-deploy as
  necessary


 I tried for a while, yes.  The problems I ran into (and I'd be interested
 to
 know if there are solutions to these):

 - I'm trying to deploy into VMs on rackspace public cloud (just because
 that's
 what I have).  This means I can't use the nova docker driver, without
 constructing an entire self-contained openstack undercloud first.

 - heat+cloud-init (afaics) can't deal with circular dependencies (like
 nova-
 neutron) since the machines need to exist first before you can refer to
 their
 IPs.
 From what I can see, TripleO gets around this by always scheduling them on
 the
 same machine and just using the known local IP.  Other installs declare
 fixed
 IPs up front - on rackspace I can't do that (easily).
 I can't use loadbalancers via heat for this because the loadbalancers need
 to
 know the backend node addresses, which means the nodes have to exist first
 and
 you're back to a circular dependency.

 For comparison, with kubernetes you declare the loadbalancer-equivalents
 (services) up front with a search expression for the backends.  In a second
 pass you create the backends (pods) which can refer to any of the
 loadbalanced
 endpoints.  The loadbalancers then reconfigure themselves on the fly to
 find the
 new backends.  You _can_ do a similar lazy-loadbalancer-reconfig thing with
 openstack too, but not with heat and not just out of the box.


Do you have a minimal template that shows what you are trying to do?
(just to demonstrate the circular dependency).


 - My experiences using heat for anything complex have been extremely
 frustrating.  The version on rackspace public cloud is ancient and limited,
 and quite easy to get into a state where the only fix is to destroy the
 entire
 stack and recreate it.  I'm sure these are fixed in newer versions of
 heat, but
 last time I tried I was unable to run it standalone against an arms-length
 keystone because some of the recursive heat callbacks became confused about
 which auth token to use.


Gus, we are working on improving standalone (Steven Baker has some patches out
for this).



 (I'm sure this can be fixed, if it wasn't already just me using it wrong
 in the
 first place.)

 - As far as I know, nothing in a heat/loadbalancer/nova stack will actually
 reschedule jobs away from a failed machine.  There's also no lazy


This might go part of the way there; the other part is detecting the failed
machine and somehow marking it as failed.
 https://review.openstack.org/#/c/105907/

discovery/nameservice mechanism, so updating IP address declarations in
 cloud-
 configs tend to ripple through the heat config and cause all sorts of
 VMs/containers to be reinstalled without any sort of throttling or rolling
 update.


 So: I think there's some things to learn from the kubernetes approach,
 which
 is why I'm trying to gain more experience with it.  I know I'm learning
 more
 about the various OpenStack components along the way too ;)


This is valuable feedback; we need to improve Heat to make these use cases
work better.
But I also don't believe there is one tool for all jobs, so I see little harm
in trying other things out too.

Thanks
Angus



 --
  - Gus

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-24 Thread Robert Collins
On 24 September 2014 16:38, Jay Pipes jaypi...@gmail.com wrote:
 On 09/23/2014 10:29 PM, Steven Dake wrote:

 There is a deployment program - tripleo is just one implementation.


 Nope, that is not correct. Like it or not (I personally don't), Triple-O is
 *the* Deployment Program for OpenStack:

 http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n284

 Saying Triple-O is just one implementation of a deployment program is like
 saying Heat is just one implementation of an orchestration program. It
 isn't. It's *the* implementation of an orchestration program that has been
 blessed by the TC:

 http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n112

That's not what Steve said. He said that the codebase they are creating
is a *project* with a target home of the OpenStack Deployment
*program*, aka TripleO. The TC blesses social structure and code
separately: no part of TripleO has had its code blessed by the TC yet
(incubation/graduation), but the team was blessed.

I've no opinion on the Murano bits you raise.

-Rob



-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-24 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2014-09-23 21:38:37 -0700:
 On 09/23/2014 10:29 PM, Steven Dake wrote:
  There is a deployment program - tripleo is just one implementation.
 
 Nope, that is not correct. Like it or not (I personally don't), Triple-O 
 is *the* Deployment Program for OpenStack:
 
 http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n284
 
 Saying Triple-O is just one implementation of a deployment program is 
 like saying Heat is just one implementation of an orchestration program. 
 It isn't. It's *the* implementation of an orchestration program that has 
 been blessed by the TC:
 
 http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n112
 

That was written before we learned everything we've learned in the last
12 months. I think it is unfair to simply point to this and imply that
bending or even changing it is not open for discussion.

   We
  went through this with Heat and various projects that want to extend
  heat (eg Murano) and one big mistake I think Murano folks made was not
 figuring out where their code would go prior to writing it.  I'm only
  making a statement as to where I think it should belong.
 
 Sorry, I have to call you to task on this.
 
 You think it was a mistake for the Murano folks to not figure out where 
 the code would go prior to writing it? For the record, Murano existed 
 nearly 2 years ago, as a response to various customer requests. Having 
 the ability to properly deploy Windows applications like SQL Server and 
 Active Directory into an OpenStack cloud was more important to the 
 Murano developers than trying to predict what the whims of the OpenStack 
 developer and governance model would be months or years down the road.
 
 Tell me, did any of Heat's code exist prior to deciding to propose it 
 for incubation? Saying that Murano developers should have thought about 
 where their code would live is holding them to a higher standard than 
 any of the other developer communities. Did folks working on 
 disk-image-builder pre-validate with the TC or the mailing list that the 
 dib code would live in the triple-o program? No, of course not. It was 
 developed naturally and then placed into the program that fit it best.
 
 Murano was developed naturally in exactly the same way, and the Murano 
 developers have been nothing but accommodating to every request made of 
 them by the TC (and those requests have been entirely different over the 
 last 18 months, ranging from split it out to just propose another 
 program) and by the PTLs for projects that requested they split various 
 parts of Murano out into existing programs.
 
 The Murano developers have done no power grab, have deliberately tried 
 to be as community-focused and amenable to all requests as possible, and 
 yet they are treated with disdain by a number of folks in the core Heat 
 developer community, including yourself, Clint and Zane. And honestly, I 
 don't get it... all Murano is doing is generating Heat templates and 
 trying to fill in some pieces that Heat isn't interested in doing. I 
 don't see why there is so much animosity towards a project that has, to 
 my knowledge, acted in precisely the ways that we've asked projects to 
 act in the OpenStack community: with openness, transparency, and 
 community good will.

Disdain is hardly the right word. Disdain implies we don't have any
respect at all for Murano. I cannot speak for others, but I do have
respect. I'm just not interested in Murano.

FWIW, I think what Steven Dake is saying is that he does not want to
end up in the same position Murano is in. I think that is unlikely,
as we're seeing many projects hitting the same wall, which is the cause
for discussing changing how we include or exclude projects.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-24 Thread Chmouel Boudjnah
On Wed, Sep 24, 2014 at 12:40 AM, Steven Dake sd...@redhat.com wrote:

 I'm pleased to announce the development of a new project Kolla which is
 Greek for glue :). Kolla has a goal of providing an implementation that
 deploys OpenStack using Kubernetes and Docker. This project will begin as a
 StackForge project separate from the TripleO/Deployment program code base


Congratulations this sounds promising!

If I understand correctly from reading your POC, there are two parts to Kolla:
the docker images repository of openstack services, and a future service (or
kubernetes plugin? [1]) driving the communication and deployments to
kubernetes.

I think making sure that we separate the two would be nice. If we can plug
those images into devstack, thanks to the abstraction of how we run processes
that was introduced by Dean (http://git.io/Px1nMg), that could be a nice way
to make devstack more robust, with the nice side effect of providing pretty
good testing for those docker images.

Chmouel

[1] CAVEAT: I don't know kubernetes very well,
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-24 Thread Jay Pipes

On 09/24/2014 03:57 AM, Clint Byrum wrote:

Excerpts from Jay Pipes's message of 2014-09-23 21:38:37 -0700:

On 09/23/2014 10:29 PM, Steven Dake wrote:

There is a deployment program - tripleo is just one implementation.


Nope, that is not correct. Like it or not (I personally don't), Triple-O
is *the* Deployment Program for OpenStack:

http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n284

Saying Triple-O is just one implementation of a deployment program is
like saying Heat is just one implementation of an orchestration program.
It isn't. It's *the* implementation of an orchestration program that has
been blessed by the TC:

http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n112


That was written before we learned everything we've learned in the last
12 months. I think it is unfair to simply point to this and imply that
bending or even changing it is not open for discussion.


My statement above is a reflection of the current reality of OpenStack 
governance policies and organizational structure. It's neither fair nor 
unfair.



   We

went through this with Heat and various projects that want to extend
heat (eg Murano) and one big mistake I think Murano folks made was not
figuring out where their code would go prior to writing it.  I'm only
making a statement as to where I think it should belong.


Sorry, I have to call you to task on this.

You think it was a mistake for the Murano folks to not figure out where
the code would go prior to writing it? For the record, Murano existed
nearly 2 years ago, as a response to various customer requests. Having
the ability to properly deploy Windows applications like SQL Server and
Active Directory into an OpenStack cloud was more important to the
Murano developers than trying to predict what the whims of the OpenStack
developer and governance model would be months or years down the road.

Tell me, did any of Heat's code exist prior to deciding to propose it
for incubation? Saying that Murano developers should have thought about
where their code would live is holding them to a higher standard than
any of the other developer communities. Did folks working on
disk-image-builder pre-validate with the TC or the mailing list that the
dib code would live in the triple-o program? No, of course not. It was
developed naturally and then placed into the program that fit it best.

Murano was developed naturally in exactly the same way, and the Murano
developers have been nothing but accommodating to every request made of
them by the TC (and those requests have been entirely different over the
last 18 months, ranging from split it out to just propose another
program) and by the PTLs for projects that requested they split various
parts of Murano out into existing programs.

The Murano developers have done no power grab, have deliberately tried
to be as community-focused and amenable to all requests as possible, and
yet they are treated with disdain by a number of folks in the core Heat
developer community, including yourself, Clint and Zane. And honestly, I
don't get it... all Murano is doing is generating Heat templates and
trying to fill in some pieces that Heat isn't interested in doing. I
don't see why there is so much animosity towards a project that has, to
my knowledge, acted in precisely the ways that we've asked projects to
act in the OpenStack community: with openness, transparency, and
community good will.


Disdain is hardly the right word. Disdain implies we don't have any
respect at all for Murano. I cannot speak for others, but I do have
respect. I'm just not interested in Murano.


OK.


FWIW, I think what Steven Dake is saying is that he does not want to
end up in the same position Murano is in.


Perhaps. I just took offense to the implication (big mistake .. the 
Murano folks made) that somehow it was the Murano developer team's 
fault that they didn't have the foresight to predict the mess that the 
governance structure and policies have caused projects that want to be 
in the openstack/ code namespace but need to go through several 
arbitrary Trials by Fire before the TC to do so.


 I think that is unlikely,

as we're seeing many projects hitting the same wall, which is the cause
for discussing changing how we include or exclude projects.


Hey, I'm all for changing the way we build the OpenStack tent. I just 
didn't think it was right to call out the Murano team in the way that it 
was.


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-24 Thread James Slagle
On Wed, Sep 24, 2014 at 9:16 AM, Jay Pipes jaypi...@gmail.com wrote:
 On 09/24/2014 03:19 AM, Robert Collins wrote:

 On 24 September 2014 16:38, Jay Pipes jaypi...@gmail.com wrote:

 On 09/23/2014 10:29 PM, Steven Dake wrote:


 There is a deployment program - tripleo is just one implementation.


 Nope, that is not correct. Like it or not (I personally don't), Triple-O
 is
 *the* Deployment Program for OpenStack:


 http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n284

 Saying Triple-O is just one implementation of a deployment program is
 like
 saying Heat is just one implementation of an orchestration program. It
 isn't. It's *the* implementation of an orchestration program that has been
 blessed by the TC:


 http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n112


 That's not what Steve said. He said that the codebase they are creating
 is a *project* with a target home of the OpenStack Deployment
 *program*, aka TripleO. The TC blesses social structure and code
 separately: no part of TripleO has had its code blessed by the TC yet
 (incubation/graduation), but the team was blessed.


 There are zero programs in the OpenStack governance repository that have
 competing implementations for the same thing.

 Like it or not, the TC process of blessing these teams has effectively
 blessed a single implementation of something.

And it looks to me like what's being proposed here is that there is a
group of folks who intend to work on Knoll, and they are indicating
that they plan to participate and would like to be a part of that
team. Personally, as a TripleO team member, I welcome that
approach and their willingness to participate and share experience
with the Deployment program.

Meaning: exactly what you seem to claim is not possible due to some
perceived blessing, is indeed in fact happening, or trying to come
about.

It would be great if Heat was already perfect and great at doing
container orchestration *really* well. I'm not saying Kubernetes is
either, but I'm not going to dismiss it just b/c it might compete
with Heat. I see lots of other integration points with OpenStack
services  (using heat/nova/ironic to deploy kubernetes host, still
using ironic to deploy baremetal storage nodes due to the iscsi issue,
etc).



 -jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-24 Thread James Slagle
On Wed, Sep 24, 2014 at 9:41 AM, James Slagle james.sla...@gmail.com wrote:

 And it looks to me like what's being proposed here is that there is a
 group of folks who intend to work on Knoll, and they are indicating

Oops, I meant Kolla, obviously :-).




-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-24 Thread Jay Pipes

On 09/24/2014 09:41 AM, James Slagle wrote:

On Wed, Sep 24, 2014 at 9:16 AM, Jay Pipes jaypi...@gmail.com wrote:

There are zero programs in the OpenStack governance repository that have
competing implementations for the same thing.

Like it or not, the TC process of blessing these teams has effectively
blessed a single implementation of something.


And it looks to me like what's being proposed here is that there is a
group of folks who intend to work on Knoll, and they are indicating
that they plan to participate and would like to be a part of that
team. Personally, as a TripleO team member, I welcome that
approach and their willingness to participate and share experience
with the Deployment program.


Nobody is saying what the Kolla folks are doing is not laudable. I'm 
certainly not saying that. I think it's great to participate and be open 
from the start. What I took umbrage with was the statement that it was 
the Murano developers who made the mistake years ago of basically not 
being in the right place at the right time.



Meaning: exactly what you seem to claim is not possible due to some
perceived blessing, is indeed in fact happening, or trying to come
about.


:) Talking about something on the ML is not the same thing as having 
that thing happen in real life. Kolla folks can and should discuss their 
end goal of being in the openstack/ code namespace and offering an 
alternate implementation for deploying OpenStack. That doesn't mean that 
the Technical Committee will allow this, though. Which is what I'm 
saying... the real world right now does not match this perception that a 
group can just state where they want to end up in the openstack/ code 
namespace and by just being up front about it, that magically happens.



It would be great if Heat was already perfect and great at doing
container orchestration *really* well. I'm not saying Kubernetes is
either, but I'm not going to dismiss it just b/c it might compete
with Heat. I see lots of other integration points with OpenStack
services  (using heat/nova/ironic to deploy kubernetes host, still
using ironic to deploy baremetal storage nodes due to the iscsi issue,
etc).


Again, I'm not dismissing Kolla whatsoever. I think it's a great 
initiative. I'd point out that Fuel has been doing deployment with 
Docker containers for a while now, also out in the open, but on 
stackforge. Would the deployment program welcome Fuel into the 
openstack/ code namespace as well? Something to think about.


-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-24 Thread James Slagle
On Wed, Sep 24, 2014 at 10:03 AM, Jay Pipes jaypi...@gmail.com wrote:
 On 09/24/2014 09:41 AM, James Slagle wrote:
 Meaning: exactly what you seem to claim is not possible due to some
 perceived blessing, is indeed in fact happening, or trying to come
 about.


 :) Talking about something on the ML is not the same thing as having that
 thing happen in real life.

Hence the "trying to come about". And the only thing proposed for real
life right now is a project under stackforge whose long term goal is
to merge into the Deployment program. I don't get the opposition to a
long term goal.

 Kolla folks can and should discuss their end goal
 of being in the openstack/ code namespace and offering an alternate
 implementation for deploying OpenStack. That doesn't mean that the Technical
 Committee will allow this, though.

Certainly true. Perhaps the mission statement for the Deployment
program needs some tweaking. Perhaps it will be covered by whatever
plays out within the larger OpenStack changes that are being discussed
about the future of programs/projects/etc.

Personally, I think there is some room for interpretation in the
existing mission statement around the "wherever possible" phrase.
Where it's not possible, OpenStack does not have to be used. So again,
we probably need to update for clarity. I think the Deployment program
should work with the TC to help define what it wants to be.

 Which is what I'm saying... the real
 world right now does not match this perception that a group can just state
 where they want to end up in the openstack/ code namespace and by just
 being up front about it, that magically happens.

I'm not sure who you are arguing against that has that perception :).

I've reread the thread, and I see desires being voiced  to join an
existing program, and some initial support offered in favor of that,
minus your responses ;-). Obviously patches would have to be proposed
to the governance repo to add projects under the program, those would
have to be approved by people with +2 in governance, etc. No one
claims it will be magically done.

 It would be great if Heat was already perfect and great at doing
 container orchestration *really* well. I'm not saying Kubernetes is
 either, but I'm not going to dismiss it just b/c it might compete
 with Heat. I see lots of other integration points with OpenStack
 services  (using heat/nova/ironic to deploy kubernetes host, still
 using ironic to deploy baremetal storage nodes due to the iscsi issue,
 etc).


 Again, I'm not dismissing Kolla whatsoever. I think it's a great initiative.
 I'd point out that Fuel has been doing deployment with Docker containers for
 a while now, also out in the open, but on stackforge. Would the deployment
 program welcome Fuel into the openstack/ code namespace as well? Something
 to think about.

Based on what you're saying about the Deployment program, you seem to
indicate the TC would say No.

I don't speak for the program. In the past, I've personally expressed
support for alternative implementations where they make sense for
OpenStack as a whole, and I still feel that way.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-24 Thread Joshua Harlow
Sounds like an interesting project/goal and will be interesting to see where 
this goes.

A few questions/comments:

How much golang will people be exposed to with this addition?

Seeing that this could be the first 'go' using project it will be interesting 
to see where this goes (since afaik none of the infra support exists, and 
people aren't likely to be familiar with go vs python in the openstack community 
overall).

What's your thoughts on how this will affect the existing openstack container 
effort?

I see that kubernetes isn't exactly a small project either (~90k LOC, for those 
who use these types of metrics), so I wonder how that will affect people 
getting involved here, aka, who has the resources/operators/other... available 
to actually setup/deploy/run kubernetes, when operators are likely still just 
struggling to run openstack itself (at least operators are getting used to the 
openstack warts, a new set of kubernetes warts could not be so helpful).

On Sep 23, 2014, at 3:40 PM, Steven Dake sd...@redhat.com wrote:

 Hi folks,
 
 I'm pleased to announce the development of a new project Kolla which is Greek
 for glue :). Kolla has a goal of providing an implementation that deploys
 OpenStack using Kubernetes and Docker. This project will begin as a StackForge
 project separate from the TripleO/Deployment program code base. Our long term
 goal is to merge into the TripleO/Deployment program rather than create a new
 program.
 
 Docker is a container technology for delivering hermetically sealed
 applications and has about 620 technical contributors [1]. We intend to
 produce docker images for a variety of platforms beginning with Fedora 20. We
 are completely open to any distro support, so if folks want to add a new Linux
 distribution to Kolla please feel free to submit patches :)
 
 Kubernetes at the most basic level is a Docker scheduler produced by and used
 within Google [2]. Kubernetes has in excess of 100 technical contributors.
 Kubernetes is more than just a scheduler; it provides additional functionality
 such as load balancing and scaling and has a significant roadmap.
 
 The #tripleo channel on Freenode will be used for Kolla developer and user
 communication. Even though we plan to become part of the Deployment program
 long term, as we experiment we believe it is best to hold a separate weekly
 one hour IRC meeting on Mondays at 2000 UTC in #openstack-meeting [3].
 
 This project has been discussed with the current TripleO PTL (Robert Collins)
 and he seemed very supportive and agreed with the organization of the project
 outlined above. James Slagle, a TripleO core developer, has kindly offered to
 liaise between Kolla and the broader TripleO community.
 
 I personally feel it is necessary to start from a nearly empty repository when
 kicking off a new project. As a result, there is limited code in the
 repository [4] at this time. I suspect folks will start cranking out a
 kick-ass implementation once the Kolla/Stackforge integration support is
 reviewed by the infra team [5].
 
 The initial core team is composed of Steven Dake, Ryan Hallisey, James
 Lebocki, Jeff Peeler, James Slagle, Lars Kellogg-Sedman, and David Vossel. The
 core team will be reviewed every 6 weeks to add fresh developers.
 
 Please join the core team in designing and inventing this rockin' new
 technology!
 
 Regards
 -steve
 
 ~~
 
 [1] https://github.com/docker/docker
 [2] https://github.com/GoogleCloudPlatform/kubernetes
 [3] https://wiki.openstack.org/wiki/Meetings/Kolla
 [4] https://github.com/jlabocki/superhappyfunshow
 [5] https://review.openstack.org/#/c/122972/
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-24 Thread Clint Byrum
Excerpts from Steven Dake's message of 2014-09-23 15:40:29 -0700:
 Hi folks,
 
 
 I'm pleased to announce the development of a new project Kolla which is 
 Greek for glue :). Kolla has a goal of providing an implementation that 
 deploys OpenStack using Kubernetes and Docker. This project will begin 
 as a StackForge project separate from the TripleO/Deployment program 
 code base. Our long term goal is to merge into the TripleO/Deployment 
 program rather than create a new program.
 
 
 Docker is a container technology for delivering hermetically sealed 
 applications and has about 620 technical contributors [1]. We intend to 
 produce docker images for a variety of platforms beginning with Fedora 
 20. We are completely open to any distro support, so if folks want to 
 add new Linux distribution to Kolla please feel free to submit patches :)
 
 
 Kubernetes at the most basic level is a Docker scheduler produced by and 
 used within Google [2]. Kubernetes has in excess of 100 technical 
 contributors. Kubernetes is more than just a scheduler; it provides 
 additional functionality such as load balancing and scaling and has a 
 significant roadmap.
 

You had me at Docker.. 

Kubernetes establishes robust declarative primitives for maintaining
the desired state requested by the user. We see these primitives as
the main value added by Kubernetes. Self-healing mechanisms, such as
auto-restarting, re-scheduling, and replicating containers require active
controllers, not just imperative orchestration.

But the bit above has me a bit nervous...

I'm not exactly ignorant of what declarative orchestration is, and of
late I've found it to be more trouble than I had previously imagined it
would be. All of the features above are desirable in any application,
whether docker managed or not, and have been discussed for Heat
specifically. I'm not entirely sure I want these things in my OpenStack
deployment, but it will be interesting to see if there are operators who
want them bad enough to deal with the inherent complexities of trying to
write such a thing for an application as demanding as OpenStack.

Anyway, I would definitely be interested in seeing if we can plug it
into the interfaces we already have for image building, config file and
system state management. Thanks for sharing, and see you in the
deployment trenches. :)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-24 Thread Alan Kavanagh
Steven
I have to ask: what are the motivation and benefits we get from integrating 
Kubernetes into OpenStack? It would be really useful if you could elaborate and 
outline some use cases and benefits OpenStack and Kubernetes can gain.

/Alan


From: Steven Dake [mailto:sd...@redhat.com]
Sent: September-24-14 7:41 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and 
Manage OpenStack using Kubernetes and Docker

On 09/24/2014 10:12 AM, Joshua Harlow wrote:
Sounds like an interesting project/goal and will be interesting to see where 
this goes.

A few questions/comments:

How much golang will people be exposed to with this addition?

Joshua,

I expect very little.  We intend to use Kubernetes as an upstream project, 
rather than something we contribute to directly.


Seeing that this could be the first 'go' using project it will be interesting 
to see where this goes (since afaik none of the infra support exists, and 
people aren't likely to be familiar with go vs python in the openstack community 
overall).

What's your thoughts on how this will affect the existing openstack container 
effort?

I don't think it will have any impact on the existing Magnum project.  At some 
point if Magnum implements scheduling of docker containers, we may add support 
for Magnum in addition to Kubernetes, but it is impossible to tell at this 
point.  I don't want to derail either project by trying to force them together 
unnaturally so early.


I see that kubernetes isn't exactly a small project either (~90k LOC, for those 
who use these types of metrics), so I wonder how that will affect people 
getting involved here, aka, who has the resources/operators/other... available 
to actually setup/deploy/run kubernetes, when operators are likely still just 
struggling to run openstack itself (at least operators are getting used to the 
openstack warts, a new set of kubernetes warts could not be so helpful).

Yup it is fairly large in size.  Time will tell if this approach will work.

This is an experiment as Robert and others on the thread have pointed out :).

Regards
-steve


On Sep 23, 2014, at 3:40 PM, Steven Dake 
sd...@redhat.commailto:sd...@redhat.com wrote:


Hi folks,

I'm pleased to announce the development of a new project Kolla which is Greek 
for glue :). Kolla has a goal of providing an implementation that deploys 
OpenStack using Kubernetes and Docker. This project will begin as a StackForge 
project separate from the TripleO/Deployment program code base. Our long term 
goal is to merge into the TripleO/Deployment program rather than create a new 
program.



Docker is a container technology for delivering hermetically sealed 
applications and has about 620 technical contributors [1]. We intend to produce 
docker images for a variety of platforms beginning with Fedora 20. We are 
completely open to any distro support, so if folks want to add new Linux 
distribution to Kolla please feel free to submit patches :)



Kubernetes at the most basic level is a Docker scheduler produced by and used 
within Google [2]. Kubernetes has in excess of 100 technical contributors. 
Kubernetes is more than just a scheduler; it provides additional functionality 
such as load balancing and scaling and has a significant roadmap.


The #tripleo channel on Freenode will be used for Kolla developer and user 
communication. Even though we plan to become part of the Deployment program 
long term, as we experiment we believe it is best to hold a separate weekly one 
hour IRC meeting on Mondays at 2000 UTC in #openstack-meeting [3].


This project has been discussed with the current TripleO PTL (Robert Collins) 
and he seemed very supportive and agreed with the organization of the project 
outlined above. James Slagle, a TripleO core developer, has kindly offered to 
liase between Kolla and the broader TripleO community.



I personally feel it is necessary to start from a nearly empty repository when 
kicking off a new project. As a result, there is limited code in the repository 
[4] at this time. I suspect folks will start cranking out a kick-ass 
implementation once the Kolla/Stackforge integration support is reviewed by the 
infra team [5].



The initial core team is composed of Steven Dake, Ryan Hallisey, James Lebocki, 
Jeff Peeler, James Slagle, Lars Kellogg-Sedman, and David Vossel. The core team 
will be reviewed every 6 weeks to add fresh developers.


Please join the core team in designing and inventing this rockin' new 
technology!


Regards
-steve


~~



[1] https://github.com/docker/docker [2] 
https://github.com/GoogleCloudPlatform/kubernetes

[3] https://wiki.openstack.org/wiki/Meetings/Kolla [4] 
https://github.com/jlabocki/superhappyfunshow [5] 
https://review.openstack.org/#/c/122972/



___
OpenStack-dev mailing list
OpenStack-dev

Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-24 Thread Angus Lees
On Wed, 24 Sep 2014 10:31:19 PM Alan Kavanagh wrote:
 Steven
 I have to ask what is the motivation and benefits we get from integrating
 Kubernetes into Openstack? Would be really useful if you can elaborate and
 outline some use cases and benefits Openstack and Kubernetes can gain.

I've no idea what Steven's motivation is, but here's my reasoning for going 
down a similar path:

OpenStack deployment is basically two types of software:
1. Control jobs, various API servers, etc that are basically just regular 
python wsgi apps.
2. Compute/network node agents that run under hypervisors, configure host 
networking, etc.

The 2nd group probably wants to run on baremetal and is mostly identical on 
all such machines, but the 1st group wants higher level PaaS type things.

In particular, for the control jobs you want:

- Something to deploy the code (docker / distro packages / pip install / etc)
- Something to choose where to deploy
- Something to respond to machine outages / autoscaling and re-deploy as 
necessary

These last few don't have strong existing options within OpenStack yet (as far 
as I'm aware).  Having explored a few different approaches recently, kubernetes 
is certainly not the only option - but is a reasonable contender here.
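
As a rough illustration of the last point, a Kubernetes replication controller 
declares how many copies of a control job should exist and recreates copies that 
are lost along with a failed minion (hypothetical names; fields follow the later 
stable v1 API rather than the beta API current at the time):

apiVersion: v1
kind: ReplicationController
metadata:
  name: keystone
spec:
  replicas: 3                # desired state: three copies of the service
  selector:
    app: keystone
  template:                  # pod template used to replace lost copies
    metadata:
      labels:
        app: keystone
    spec:
      containers:
        - name: keystone
          image: example/keystone    # hypothetical image
          ports:
            - containerPort: 5000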


So: I certainly don't see kubernetes as competing with anything in OpenStack - 
but as filling a gap in job management with something that has a fairly 
lightweight config syntax and is relatively simple to deploy on VMs or 
baremetal.  I also think the phrase "integrating kubernetes into OpenStack" is 
overstating the task at hand.

The primary downside I've discovered so far seems to be that kubernetes is 
very young and still has an awkward cli, a few easy to encounter bugs, etc.

 - Gus

 From: Steven Dake [mailto:sd...@redhat.com]
 Sent: September-24-14 7:41 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and
 Manage OpenStack using Kubernetes and Docker
 
 On 09/24/2014 10:12 AM, Joshua Harlow wrote:
 Sounds like an interesting project/goal and will be interesting to see where
 this goes.
 
 A few questions/comments:
 
 How much golang will people be exposed to with this addition?
 
 Joshua,
 
 I expect very little.  We intend to use Kubernetes as an upstream project,
  rather than something we contribute to directly.
 
 
 Seeing that this could be the first 'go' using project it will be
 interesting to see where this goes (since afaik none of the infra support
  exists, and people aren't likely to be familiar with go vs python in the
 openstack community overall).
 
 What's your thoughts on how this will affect the existing openstack
 container effort?
 
 I don't think it will have any impact on the existing Magnum project.  At
 some point if Magnum implements scheduling of docker containers, we may add
 support for Magnum in addition to Kubernetes, but it is impossible to tell
 at this point.  I don't want to derail either project by trying to force
 them together unnaturally so early.
 
 
 I see that kubernetes isn't exactly a small project either (~90k LOC, for
 those who use these types of metrics), so I wonder how that will affect
 people getting involved here, aka, who has the resources/operators/other...
 available to actually setup/deploy/run kubernetes, when operators are
 likely still just struggling to run openstack itself (at least operators
 are getting used to the openstack warts, a new set of kubernetes warts
 could not be so helpful).
 
 Yup it is fairly large in size.  Time will tell if this approach will work.
 
 This is an experiment as Robert and others on the thread have pointed out
 :).
 
 Regards
 -steve
 
 
 On Sep 23, 2014, at 3:40 PM, Steven Dake
 sd...@redhat.commailto:sd...@redhat.com wrote:
 
 
 Hi folks,
 
 I'm pleased to announce the development of a new project Kolla which is
 Greek for glue :). Kolla has a goal of providing an implementation that
 deploys OpenStack using Kubernetes and Docker. This project will begin as a
 StackForge project separate from the TripleO/Deployment program code base.
 Our long term goal is to merge into the TripleO/Deployment program rather
  than create a new program.
 
 
 
 Docker is a container technology for delivering hermetically sealed
 applications and has about 620 technical contributors [1]. We intend to
 produce docker images for a variety of platforms beginning with Fedora 20.
 We are completely open to any distro support, so if folks want to add new
 Linux distribution to Kolla please feel free to submit patches :)
 
 
 
 Kubernetes at the most basic level is a Docker scheduler produced by and
 used within Google [2]. Kubernetes has in excess of 100 technical
  contributors. Kubernetes is more than just a scheduler; it provides
 additional functionality such as load balancing and scaling and has a
 significant roadmap.
 
 
 The #tripleo channel on Freenode will be used for Kolla developer and user

Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-24 Thread Clint Byrum
Excerpts from Angus Lees's message of 2014-09-24 18:33:22 -0700:
 These last few don't have strong existing options within OpenStack yet (as 
 far 
 as I'm aware).  Having explored a few different approaches recently, 
 kubernetes 
 is certainly not the only option - but is a reasonable contender here.
 
 
 So: I certainly don't see kubernetes as competing with anything in OpenStack 
 - 
 but as filling a gap in job management with something that has a fairly 
 lightweight config syntax and is relatively simple to deploy on VMs or 
 baremetal.  I also think the phrase integrating kubernetes into OpenStack 
 is 
 overstating the task at hand.
 

Reading more on the design and intention, I think it very much competes
with Heat. The moment it learns to talk to Nova, it becomes a convergence
engine for OpenStack.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-24 Thread Steven Dake

On 09/24/2014 03:31 PM, Alan Kavanagh wrote:


Steven

I have to ask what is the motivation and benefits we get from 
integrating Kubernetes into Openstack? Would be really useful if you 
can elaborate and outline some use cases and benefits Openstack and 
Kubernetes can gain.


/Alan


Alan,

I am either unaware or ignorant of another Docker scheduler that is 
currently available that has a big (100+ folks) development community.  
Kubernetes meets these requirements and is my main motivation for using 
it to schedule Docker containers.  There are other ways to skin this cat 
- The TripleO folks wanted at one point to deploy nova with the nova 
docker VM manager to do such a thing. This model seemed a little clunky 
to me since it isn't purpose built around containers.


As far as use cases go, the main use case is to run a specific Docker 
container on a specific Kubernetes minion bare metal host. These 
docker containers are then composed of the various config tools and 
services for each detailed service in OpenStack.  For example, mysql 
would be a container, and tools to configure the mysql service would 
exist in the container.  Kubernetes would pass config options for the 
mysql database prior to scheduling and once scheduled, Kubernetes would 
be responsible for connecting the various containers together.
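
A sketch of what that could look like for the mysql example (image, label, and 
option names are invented, and the manifest fields follow the later stable v1 
API rather than the API available at the time):

apiVersion: v1
kind: Pod
metadata:
  name: mariadb
  labels:
    app: mariadb
spec:
  nodeSelector:
    kolla-role: database             # pin to a specifically labelled minion
  containers:
    - name: mariadb
      image: example/fedora-mariadb  # container ships its own config tooling
      env:
        - name: DB_ROOT_PASSWORD     # config option handed in before start-up
          value: changeme
      ports:
        - containerPort: 3306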


Regards
-steve



*From:*Steven Dake [mailto:sd...@redhat.com]
*Sent:* September-24-14 7:41 PM
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [all][tripleo] New Project - Kolla: 
Deploy and Manage OpenStack using Kubernetes and Docker


On 09/24/2014 10:12 AM, Joshua Harlow wrote:

Sounds like an interesting project/goal and will be interesting to
see where this goes.

A few questions/comments:

How much golang will people be exposed to with this addition?

Joshua,

I expect very little.  We intend to use Kubernetes as an upstream 
project, rather than something we contribute to directly.



Seeing that this could be the first 'go' using project it will be 
interesting to see where this goes (since afaik none of the infra 
support exists, and people aren't likely to be familiar with go vs python 
in the openstack community overall).


What's your thoughts on how this will affect the existing openstack 
container effort?


I don't think it will have any impact on the existing Magnum project.  
At some point if Magnum implements scheduling of docker containers, we 
may add support for Magnum in addition to Kubernetes, but it is 
impossible to tell at this point.  I don't want to derail either 
project by trying to force them together unnaturally so early.



I see that kubernetes isn't exactly a small project either (~90k LOC, 
for those who use these types of metrics), so I wonder how that will 
affect people getting involved here, aka, who has the 
resources/operators/other... available to actually setup/deploy/run 
kubernetes, when operators are likely still just struggling to run 
openstack itself (at least operators are getting used to the openstack 
warts, a new set of kubernetes warts could not be so helpful).


Yup it is fairly large in size.  Time will tell if this approach will 
work.


This is an experiment as Robert and others on the thread have pointed 
out :).


Regards
-steve


On Sep 23, 2014, at 3:40 PM, Steven Dake sd...@redhat.com 
mailto:sd...@redhat.com wrote:




Hi folks,


I'm pleased to announce the development of a new project Kolla which 
is Greek for glue :). Kolla has a goal of providing an implementation 
that deploys OpenStack using Kubernetes and Docker. This project will 
begin as a StackForge project separate from the TripleO/Deployment 
program code base. Our long term goal is to merge into the 
TripleO/Deployment program rather then create a new program.




Docker is a container technology for delivering hermetically sealed 
applications and has about 620 technical contributors [1]. We intend 
to produce docker images for a variety of platforms beginning with 
Fedora 20. We are completely open to any distro support, so if folks 
want to add new Linux distribution to Kolla please feel free to submit 
patches :)




Kubernetes at the most basic level is a Docker scheduler produced by 
and used within Google [2]. Kubernetes has in excess of 100 technical 
contributors. Kubernetes is more than just a scheduler; it provides 
additional functionality such as load balancing and scaling and has a 
significant roadmap.



The #tripleo channel on Freenode will be used for Kolla developer and 
user communication. Even though we plan to become part of the 
Deployment program long term, as we experiment we believe it is best 
to hold a separate weekly one hour IRC meeting on Mondays at 2000 UTC 
in #openstack-meeting [3].



This project has been discussed with the current TripleO PTL (Robert 
Collins) and he seemed very supportive and agreed with the 
organization of the project outlined above. James Slagle

Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-24 Thread Mike Spreitzer
Steven Dake sd...@redhat.com wrote on 09/24/2014 11:02:49 PM:

 On 09/24/2014 03:31 PM, Alan Kavanagh wrote:
 Steven
 I have to ask what is the motivation and benefits we get from 
 integrating Kubernetes into Openstack? Would be really useful if you
 can elaborate and outline some use cases and benefits Openstack and 
 Kubernetes can gain. 
  
 /Alan
  
 Alan,
 
 I am either unaware or ignorant of another Docker scheduler that is 
 currently available that has a big (100+ folks) development 
 community.  Kubernetes meets these requirements and is my main 
 motivation for using it to schedule Docker containers.  There are 
 other ways to skin this cat - The TripleO folks wanted at one point 
 to deploy nova with the nova docker VM manager to do such a thing.  
 This model seemed a little clunky to me since it isn't purpose built
 around containers.

Does TripleO require container functionality that is not available
when using the Docker driver for Nova?

As far as I can tell, the quantitative handling of capacities and
demands in Kubernetes is much inferior to what Nova does today.

 As far as use cases go, the main use case is to run a specific 
 Docker container on a specific Kubernetes minion bare metal host.

If TripleO already knows it wants to run a specific Docker image
on a specific host then TripleO does not need a scheduler.

 These docker containers are then composed of the various config 
 tools and services for each detailed service in OpenStack.  For 
 example, mysql would be a container, and tools to configure the 
 mysql service would exist in the container.  Kubernetes would pass 
 config options for the mysql database prior to scheduling

I am not sure what is meant here by "pass config options" nor how it
would be done prior to scheduling; can you please clarify?
I do not imagine Kubernetes would *choose* the config values,
K8s does not know anything about configuring OpenStack.
Before scheduling, there is no running container to pass
anything to.

   and once 
 scheduled, Kubernetes would be responsible for connecting the 
 various containers together.

Kubernetes has a limited role in connecting containers together.
K8s creates the networking environment in which the containers
*can* communicate, and passes environment variables into containers
telling them from what protocol://host:port/ to import each imported
endpoint.  Kubernetes creates a universal reverse proxy on each
minion, to provide endpoints that do not vary as the servers
move around.
It is up to stuff outside Kubernetes to decide
what should be connected to what, and it is up to the containers
to read the environment variables and actually connect.

Regards,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-24 Thread Clint Byrum
Excerpts from Steven Dake's message of 2014-09-24 20:02:49 -0700:
 On 09/24/2014 03:31 PM, Alan Kavanagh wrote:
 
  Steven
 
  I have to ask what is the motivation and benefits we get from 
  integrating Kubernetes into Openstack? Would be really useful if you 
  can elaborate and outline some use cases and benefits Openstack and 
  Kubernetes can gain.
 
  /Alan
 
 Alan,
 
 I am either unaware or ignorant of another Docker scheduler that is 
 currently available that has a big (100+ folks) development community.  
 Kubernetes meets these requirements and is my main motivation for using 
 it to schedule Docker containers.  There are other ways to skin this cat 
 - The TripleO folks wanted at one point to deploy nova with the nova 
 docker VM manager to do such a thing. This model seemed a little clunky 
 to me since it isn't purpose built around containers.
 

Agreed. Containers are a special kind of workload, not a VM. However, I
do think that a container manager could be taught to manage vms and vice
versa.

 As far as use cases go, the main use case is to run a specific Docker 
 container on a specific Kubernetes minion bare metal host. These 
 docker containers are then composed of the various config tools and 
 services for each detailed service in OpenStack.  For example, mysql 
 would be a container, and tools to configure the mysql service would 
 exist in the container.  Kubernetes would pass config options for the 
 mysql database prior to scheduling and once scheduled, Kubernetes would 
 be responsible for connecting the various containers together.


I like it. This is just good old fashioned encapsulation, finally
applied to ops.

I also like that it rides on top of OpenStack entirely, and doesn't have
to be integrated.

However, this does make me think that Keystone domains should be exposable
to services inside your cloud for use as SSO. It would be quite handy
if the keystone users used for the VMs that host Kubernetes could use
the same credentials to manage the containers.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-24 Thread Clint Byrum
Excerpts from Mike Spreitzer's message of 2014-09-24 20:49:20 -0700:
 Steven Dake sd...@redhat.com wrote on 09/24/2014 11:02:49 PM:
 
  On 09/24/2014 03:31 PM, Alan Kavanagh wrote:
  Steven
  I have to ask what is the motivation and benefits we get from 
  integrating Kubernetes into Openstack? Would be really useful if you
  can elaborate and outline some use cases and benefits Openstack and 
  Kubernetes can gain. 
   
  /Alan
   
  Alan,
  
  I am either unaware or ignorant of another Docker scheduler that is 
  currently available that has a big (100+ folks) development 
  community.  Kubernetes meets these requirements and is my main 
  motivation for using it to schedule Docker containers.  There are 
  other ways to skin this cat - The TripleO folks wanted at one point 
  to deploy nova with the nova docker VM manager to do such a thing.  
  This model seemed a little clunky to me since it isn't purpose built
  around containers.
 
 Does TripleO require container functionality that is not available
 when using the Docker driver for Nova?
 
 As far as I can tell, the quantitative handling of capacities and
 demands in Kubernetes is much inferior to what Nova does today.
 

Yes, TripleO needs to manage baremetal and containers from a single
host. Nova and Neutron do not offer this as a feature unfortunately.

  As far as use cases go, the main use case is to run a specific 
  Docker container on a specific Kubernetes minion bare metal host.
 
 If TripleO already knows it wants to run a specific Docker image
 on a specific host then TripleO does not need a scheduler.
 

TripleO does not ever specify destination host, because Nova does not
allow that, nor should it. It does want to isolate failure domains so
that all three Galera nodes aren't on the same PDU, but we've not really
gotten to the point where we can do that yet.
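
(For what it's worth, the closest thing Nova offers today is host-level
anti-affinity via server groups, which is not PDU-aware. A rough sketch,
assuming the server-group support in python-novaclient of this era and
placeholder credentials/image/flavor names:

    from novaclient import client

    nova = client.Client('2', 'admin', 'password', 'admin',
                         'http://keystone:5000/v2.0')

    # Ask the scheduler to keep these instances on different hosts.
    group = nova.server_groups.create(name='galera',
                                      policies=['anti-affinity'])
    for i in range(3):
        nova.servers.create(name='galera-%d' % i,
                            image=nova.images.find(name='fedora-20'),
                            flavor=nova.flavors.find(name='m1.large'),
                            scheduler_hints={'group': group.id})

Different hosts is of course not the same as different PDUs, which is
exactly the gap described above.)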

  These docker containers are then composed of the various config 
  tools and services for each detailed service in OpenStack.  For 
  example, mysql would be a container, and tools to configure the 
  mysql service would exist in the container.  Kubernetes would pass 
  config options for the mysql database prior to scheduling
 
 I am not sure what is meant here by pass config options nor how it
 would be done prior to scheduling; can you please clarify?
 I do not imagine Kubernetes would *choose* the config values,
 K8s does not know anything about configuring OpenStack.
 Before scheduling, there is no running container to pass
 anything to.
 

Docker containers tend to use environment variables passed to the initial
command to configure things. The Kubernetes API allows setting these
environment variables on creation of the container.
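
Schematically, that looks something like the following (field names only
approximate the early v1beta1 pod manifest, and the image/env values are
made up for illustration - this is the shape of what would be POSTed to
the Kubernetes API, not an exact reference):

    import json

    pod = {
        "id": "mariadb",
        "kind": "Pod",
        "apiVersion": "v1beta1",
        "desiredState": {
            "manifest": {
                "version": "v1beta1",
                "id": "mariadb",
                "containers": [{
                    "name": "mariadb",
                    "image": "fedora/mariadb",
                    # Config handed to the container at creation time.
                    "env": [
                        {"name": "MYSQL_ROOT_PASSWORD", "value": "secret"},
                        {"name": "MYSQL_DATABASE", "value": "nova"},
                    ],
                }],
            },
        },
    }

    print(json.dumps(pod, indent=2))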

and once 
  scheduled, Kubernetes would be responsible for connecting the 
  various containers together.
 
 Kubernetes has a limited role in connecting containers together.
 K8s creates the networking environment in which the containers
 *can* communicate, and passes environment variables into containers
 telling them the protocol://host:port/ at which to reach each imported
 endpoint.  Kubernetes creates a universal reverse proxy on each
 minion, to provide endpoints that do not vary as the servers
 move around.
 It is up to stuff outside Kubernetes to decide
 what should be connected to what, and it is up to the containers
 to read the environment variables and actually connect.
 

This is a nice simple interface though, and I like that it is narrowly
defined, not trying to be anything that containers want to share with
other containers.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-24 Thread Mike Spreitzer
Clint Byrum cl...@fewbar.com wrote on 09/25/2014 12:13:53 AM:

 Excerpts from Mike Spreitzer's message of 2014-09-24 20:49:20 -0700:
  Steven Dake sd...@redhat.com wrote on 09/24/2014 11:02:49 PM:
   ...
  ...
  Does TripleO require container functionality that is not available
  when using the Docker driver for Nova?
  
  As far as I can tell, the quantitative handling of capacities and
  demands in Kubernetes is much inferior to what Nova does today.
  
 
 Yes, TripleO needs to manage baremetal and containers from a single
 host. Nova and Neutron do not offer this as a feature unfortunately.

In what sense would Kubernetes manage baremetal (at all)?
By "from a single host" do you mean that a client on one host
can manage remote baremetal and containers?

I can see that Kubernetes allows a client on one host to get
containers placed remotely --- but so does the Docker driver for Nova.

 
   As far as use cases go, the main use case is to run a specific 
   Docker container on a specific Kubernetes minion bare metal host.

Clint, in another branch of this email tree you referred to
the VMs that host Kubernetes.  How does that square with
Steve's text that seems to imply bare metal minions?

I can see that some people have had much more detailed design
discussions than I have yet found.  Perhaps it would be helpful
to share an organized presentation of the design thoughts in
more detail.

  
  If TripleO already knows it wants to run a specific Docker image
  on a specific host then TripleO does not need a scheduler.
  
 
 TripleO does not ever specify destination host, because Nova does not
 allow that, nor should it. It does want to isolate failure domains so
 that all three Galera nodes aren't on the same PDU, but we've not really
 gotten to the point where we can do that yet.

So I am still not clear on what Steve says the main use case is.
Kubernetes is even farther from balancing among PDUs than Nova is.
At least Nova has a framework in which this issue can be posed and solved.
I mean a framework that actually can carry the necessary information.
The Kubernetes scheduler interface is extremely impoverished in the
information it passes, and it uses Go structs --- which, like C structs,
cannot be subclassed.
Nova's filter scheduler includes a fatal bug that bites when balancing
if you want more than one element per area; see
https://bugs.launchpad.net/nova/+bug/1373478.
However: (a) you might not need more than one element per area and
(b) fixing that bug is a much smaller job than expanding the mind of K8s.

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-23 Thread Robert Collins
All hail our container overlords!

-Rob

On 24 September 2014 10:40, Steven Dake sd...@redhat.com wrote:
 Hi folks,


 I'm pleased to announce the development of a new project Kolla which is
 Greek for glue :). Kolla has a goal of providing an implementation that
 deploys OpenStack using Kubernetes and Docker. This project will begin as a
 StackForge project separate from the TripleO/Deployment program code base.
 Our long term goal is to merge into the TripleO/Deployment program rather
 than create a new program.


 Docker is a container technology for delivering hermetically sealed
 applications and has about 620 technical contributors [1]. We intend to
 produce docker images for a variety of platforms beginning with Fedora 20.
 We are completely open to any distro support, so if folks want to add a new
 Linux distribution to Kolla, please feel free to submit patches :)


 Kubernetes at the most basic level is a Docker scheduler produced by and
 used within Google [2]. Kubernetes has in excess of 100 technical
 contributors. Kubernetes is more than just a scheduler; it provides
 additional functionality such as load balancing and scaling and has a
 significant roadmap.


 The #tripleo channel on Freenode will be used for Kolla developer and user
 communication. Even though we plan to become part of the Deployment program
 long term, as we experiment we believe it is best to hold a separate weekly
 one hour IRC meeting on Mondays at 2000 UTC in #openstack-meeting [3].


 This project has been discussed with the current TripleO PTL (Robert
 Collins) and he seemed very supportive and agreed with the organization of
 the project outlined above. James Slagle, a TripleO core developer, has
 kindly offered to liaise between Kolla and the broader TripleO community.


 I personally feel it is necessary to start from a nearly empty repository
 when kicking off a new project. As a result, there is limited code in the
 repository [4] at this time. I suspect folks will start cranking out a
 kick-ass implementation once the Kolla/Stackforge integration support is
 reviewed by the infra team [5].


 The initial core team is composed of Steven Dake, Ryan Hallisey, James
 Lebocki, Jeff Peeler, James Slagle, Lars Kellogg-Sedman, and David Vossel.
 The core team will be reviewed every 6 weeks to add fresh developers.


 Please join the core team in designing and inventing this rockin' new
 technology!


 Regards
 -steve


 ~~


 [1] https://github.com/docker/docker [2]
 https://github.com/GoogleCloudPlatform/kubernetes

 [3] https://wiki.openstack.org/wiki/Meetings/Kolla [4]
 https://github.com/jlabocki/superhappyfunshow [5]
 https://review.openstack.org/#/c/122972/



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-23 Thread Fox, Kevin M
I'm interested in how this relates/conflicts with the TripleO goal of using 
OpenStack to deploy OpenStack.

It looks like (maybe just superficially) that Kubernetes is simply a 
combination of (nova + docker driver) = container scheduler and (heat) = 
orchestration. They both schedule containers, will need advanced scheduling 
like ensuring these two containers are on different servers (nova ServerGroups), 
autoscale resources, hook up things together, have a json document that 
describes the desired state, etc... If that's the case, it seems odd to use an 
OpenStack competing product to deploy a competitor of Kubernetes. Two software 
stacks to learn how to debug rather than just one.

Maybe I'm just totally misunderstanding what Kubernetes is trying to accomplish 
though. I'm not trying to stir up trouble here. I just really want to 
understand how these two technologies fit together.

Thanks,
Kevin

From: Steven Dake [sd...@redhat.com]
Sent: Tuesday, September 23, 2014 3:40 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage 
OpenStack using Kubernetes and Docker

Hi folks,

I'm pleased to announce the development of a new project Kolla which is Greek 
for glue :). Kolla has a goal of providing an implementation that deploys 
OpenStack using Kubernetes and Docker. This project will begin as a StackForge 
project separate from the TripleO/Deployment program code base. Our long term 
goal is to merge into the TripleO/Deployment program rather than create a new 
program.


Docker is a container technology for delivering hermetically sealed 
applications and has about 620 technical contributors [1]. We intend to produce 
docker images for a variety of platforms beginning with Fedora 20. We are 
completely open to any distro support, so if folks want to add a new Linux 
distribution to Kolla, please feel free to submit patches :)


Kubernetes at the most basic level is a Docker scheduler produced by and used 
within Google [2]. Kubernetes has in excess of 100 technical contributors. 
Kubernetes is more than just a scheduler; it provides additional functionality 
such as load balancing and scaling and has a significant roadmap.

The #tripleo channel on Freenode will be used for Kolla developer and user 
communication. Even though we plan to become part of the Deployment program 
long term, as we experiment we believe it is best to hold a separate weekly one 
hour IRC meeting on Mondays at 2000 UTC in #openstack-meeting [3].

This project has been discussed with the current TripleO PTL (Robert Collins) 
and he seemed very supportive and agreed with the organization of the project 
outlined above. James Slagle, a TripleO core developer, has kindly offered to 
liaise between Kolla and the broader TripleO community.


I personally feel it is necessary to start from a nearly empty repository when 
kicking off a new project. As a result, there is limited code in the repository 
[4] at this time. I suspect folks will start cranking out a kick-ass 
implementation once the Kolla/Stackforge integration support is reviewed by the 
infra team [5].


The initial core team is composed of Steven Dake, Ryan Hallisey, James Lebocki, 
Jeff Peeler, James Slagle, Lars Kellogg-Sedman, and David Vossel. The core team 
will be reviewed every 6 weeks to add fresh developers.

Please join the core team in designing and inventing this rockin' new 
technology!

Regards
-steve

~~


[1] https://github.com/docker/docker [2] 
https://github.com/GoogleCloudPlatform/kubernetes

[3] https://wiki.openstack.org/wiki/Meetings/Kolla [4] 
https://github.com/jlabocki/superhappyfunshow [5] 
https://review.openstack.org/#/c/122972/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-23 Thread Steven Dake

On 09/23/2014 05:38 PM, Fox, Kevin M wrote:
I'm interested in how this relates/conflicts with the TripleO goal of 
using OpenStack to deploy OpenStack.


It looks like (maybe just superficially) that Kubernetes is simply a 
combination of (nova + docker driver) = container scheduler and 
(heat) = orchestration. They both schedule containers, will need 
advanced scheduling like ensuring these two containers are on different 
servers (nova ServerGroups), autoscale resources, hook up things 
together, have a json document that describes the desired state, 
etc... If that's the case, it seems odd to use an OpenStack competing 
product to deploy a competitor of Kubernetes. Two software stacks to 
learn how to debug rather than just one.



Kevin,

Thanks for the feedback.

There are two orthogonal points you address re competitiveness.  One is 
the deployment program (which Kolla intends to be a part of). The 
deployment program includes an implementation (tripleo). TripleO is 
focused around using OpenStack to deploy OpenStack. Kolla is focused 
around using Kubernetes to deploy OpenStack.  But they both fit into the 
same program, and at some point they may even be re-merged, with both 
using OpenStack to deploy OpenStack.  Time will tell.


IMO Kubernetes is not competitive with OpenStack.  The way in which the 
Kolla project uses them is in fact complementary.  In a perfect world 
OpenStack's container service (Magnum) + Heat could be used instead of 
Kubernetes.  The problem with that approach is the container service for 
OpenStack is not functional and not integrated into the release.


It is indeed true that another software stack must be learned.  We hope 
to abstract most/all of the differences so the actual maintenance 
difference (ie what must be learned) presents a small learning footprint.


Maybe I'm just totally misunderstanding what Kubernetes is trying to 
accomplish though. I'm not trying to stir up trouble here. I just 
really want to understand how these two technologies fit together.




I don't see you stirring up trouble :)  Essentially this project 
proposes an alternative method for deploying OpenStack (ie not using 
OpenStack, but using Kubernetes).


I did run the idea by Robert Collins (current TripleO PTL) first before 
we got cracking on the code base.  He indicated the approach was worth 
experimenting with.


Regards
-steve



Thanks,
Kevin

*From:* Steven Dake [sd...@redhat.com]
*Sent:* Tuesday, September 23, 2014 3:40 PM
*To:* OpenStack Development Mailing List
*Subject:* [openstack-dev] [all][tripleo] New Project - Kolla: Deploy 
and Manage OpenStack using Kubernetes and Docker


Hi folks,


I'm pleased to announce the development of a new project Kolla which 
is Greek for glue :). Kolla has a goal of providing an implementation 
that deploys OpenStack using Kubernetes and Docker. This project will 
begin as a StackForge project separate from the TripleO/Deployment 
program code base. Our long term goal is to merge into the 
TripleO/Deployment program rather than create a new program.



Docker is a container technology for delivering hermetically sealed 
applications and has about 620 technical contributors [1]. We intend 
to produce docker images for a variety of platforms beginning with 
Fedora 20. We are completely open to any distro support, so if folks 
want to add a new Linux distribution to Kolla, please feel free to submit 
patches :)



Kubernetes at the most basic level is a Docker scheduler produced by 
and used within Google [2]. Kubernetes has in excess of 100 technical 
contributors. Kubernetes is more than just a scheduler; it provides 
additional functionality such as load balancing and scaling and has a 
significant roadmap.



The #tripleo channel on Freenode will be used for Kolla developer and 
user communication. Even though we plan to become part of the 
Deployment program long term, as we experiment we believe it is best 
to hold a separate weekly one hour IRC meeting on Mondays at 2000 UTC 
in #openstack-meeting [3].



This project has been discussed with the current TripleO PTL (Robert 
Collins) and he seemed very supportive and agreed with the 
organization of the project outlined above. James Slagle, a TripleO 
core developer, has kindly offered to liaise between Kolla and the 
broader TripleO community.



I personally feel it is necessary to start from a nearly empty 
repository when kicking off a new project. As a result, there is 
limited code in the repository [4] at this time. I suspect folks will 
start cranking out a kick-ass implementation once the Kolla/Stackforge 
integration support is reviewed by the infra team [5].



The initial core team is composed of Steven Dake, Ryan Hallisey, James 
Lebocki, Jeff Peeler, James Slagle, Lars Kellogg-Sedman, and David 
Vossel. The core team will be reviewed every 6 weeks to add fresh 
developers.



Please join the core team 

Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-23 Thread Mike Spreitzer
I don't know if anyone else has noticed, but you can not install Cinder 
inside a container.  Cinder requires an iSCSI package that fails to 
install; its install script tries to launch the daemon, and that fails.

Regards,
Mike___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-23 Thread John Griffith
On Tue, Sep 23, 2014 at 7:06 PM, Steven Dake sd...@redhat.com wrote:

  On 09/23/2014 05:38 PM, Fox, Kevin M wrote:

 I'm interested in how this relates/conflicts with the TripleO goal of
 using OpenStack to deploy OpenStack.

 It looks like (maybe just superficially) that Kubernetes is simply a
 combination of (nova + docker driver) = container scheduler and (heat) =
 orchestration. They both schedule containers, will need advanced scheduling
 like ensuring these two containers are on different servers (nova
 ServerGroups), autoscale resources, hook up things together, have a json
 document that describes the desired state, etc... If that's the case, it
 seems odd to use an OpenStack competing product to deploy a competitor of
 Kubernetes. Two software stacks to learn how to debug rather than just one.

  Kevin,

 Thanks for the feedback.

 There are two orthogonal points you address re competitiveness.  One is
 the deployment program (which Kolla intends to be a part of).  The
 deployment program includes an implementation (tripleo).  TripleO is
 focused around using OpenStack to deploy OpenStack.  Kolla is focused
 around using Kubernetes to deploy OpenStack.  But they both fit into the
 same program, and at some point they may even be remerged into both using
 OpenStack to deploy OpenStack.  Time will tell.

 IMO Kubernetes is not competitive with OpenStack.  The way in which the
 Kolla project uses them is in fact complementary.  In a perfect world
 OpenStack's container service (Magnum) + Heat could be used instead of
 Kubernetes.  The problem with that approach is the container service for
 OpenStack is not functional and not integrated into the release.

 It is indeed true that another software stack must be learned.  We hope to
 abstract most/all of the differences so the actual maintenance difference
 (ie what must be learned) presents a small learning footprint.

  Maybe I'm just totally misunderstanding what Kubernetes is trying to
 accomplish though. I'm not trying to stir up trouble here. I just really
 want to understand how these two technologies fit together.


 I don't see you stirring up trouble :)  Essentially this project proposes
 an alternative method for deploying OpenStack (ie not using OpenStack, but
 using Kubernetes).

 I did run the idea by Robert Collins (current TripleO PTL) first before we
 got cracking on the code base.  He indicated the approach was worth
 experimenting with.


So I think it's a cool idea and worth looking at as you have said.  But I'm
very confused by your statements; it seems to me that there's a
misunderstanding, either in what Triple'O is, or something else entirely.

Given that Triple'O stands for Openstack On Openstack I'm not sure how you
separate the OpenStack piece from the project?  Don't get me wrong, I'm
certainly not saying one way is better than the other, etc., just that the
statements here are a bit confusing to me.

Also, it seems REALLY strange that Triple'O hasn't even graduated and my
limited understanding is that it still has a ways to go and we're proposing
alternate implementations of it.  Very odd IMO.

That being said, I'd love to see some details on what you have in mind
here.  I don't necessarily see why it needs to be an OpenStack Project
per se as opposed to a really cool Open Source Project for deploying
OpenStack containers (or whatever it is exactly that you have in mind).



 Regards
 -steve


  Thanks,
 Kevin
  --
 *From:* Steven Dake [sd...@redhat.com]
 *Sent:* Tuesday, September 23, 2014 3:40 PM
 *To:* OpenStack Development Mailing List
 *Subject:* [openstack-dev] [all][tripleo] New Project - Kolla: Deploy
 and Manage OpenStack using Kubernetes and Docker

  Hi folks,

 I'm pleased to announce the development of a new project Kolla which is
 Greek for glue :). Kolla has a goal of providing an implementation that
 deploys OpenStack using Kubernetes and Docker. This project will begin as a
 StackForge project separate from the TripleO/Deployment program code base.
 Our long term goal is to merge into the TripleO/Deployment program rather
 than create a new program. Docker is a container technology for delivering
 hermetically sealed applications and has about 620 technical contributors
 [1]. We intend to produce docker images for a variety of platforms
 beginning with Fedora 20. We are completely open to any distro support, so
 if folks want to add a new Linux distribution to Kolla, please feel free to
 submit patches :) Kubernetes at the most basic level is a Docker scheduler
 produced by and used within Google [2]. Kubernetes has in excess of 100
 technical contributors. Kubernetes is more than just a scheduler; it
 provides additional functionality such as load balancing and scaling and
 has a significant roadmap. The #tripleo channel on Freenode will be used
 for Kolla developer and user communication. Even though we plan to become
 part of the Deployment program long 

Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-23 Thread John Griffith
On Tue, Sep 23, 2014 at 7:30 PM, Mike Spreitzer mspre...@us.ibm.com wrote:

 I don't know if anyone else has noticed, but you can not install Cinder
 inside a container.  Cinder requires an iSCSI package that fails to
 install; its install script tries to launch the daemon, and that fails.


Yes, certainly have.  I've had discussions with a number of folks on the
topic of how to solve the storage problem but I think people are focused
on bigger goals at the moment, including myself.  Maybe that's something we
can change in the coming months.

Or maybe somebody has made some progress and I just haven't heard about it.



 Regards,
 Mike
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-23 Thread Steven Dake

On 09/23/2014 06:30 PM, Mike Spreitzer wrote:
I don't know if anyone else has noticed, but you can not install 
Cinder inside a container.  Cinder requires an iSCSI package that 
fails to install; its install script tries to launch the daemon, and 
that fails.


Regards,
Mike



Mike,

Mordred pointed that out late last week - that essentially iscsi + 
containers are bust.  This is something that needs fixing in upstream 
kernel.org.  I don't know the details, but I assume at some point we 
will need to figure out what the problem is and sort out a solution and 
beg the upstream to fix it ;-).


Regards
-steve


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-23 Thread Robert Collins
Ok, so perhaps I need to be a bit more verbose here.

TripleO already had a plan to use docker for some services via Nova,
but that ran into some issues; specifically running two computes
(ironic and docker) on a single machine plays havoc with the
assumption in Neutron that the compute hostname shall be the same as
the neutron agent queue name in oslo.messaging, and that breaks vif
plugging. Dan Prince was driving a discussion with nova core about
the best way to address that.

Doing docker for everything, as the base layer is interesting - how
does one deploy the machines and do their base network? Is that still
Ironic and Neutron? There's a bunch of learning that needs to be done
to answer those questions and the best way to learn is to experiment.
There are lots of possible outcomes, and I don't want to pre-judge
them!

Now, with respect to the 'and must use openstack components where one
exists' angle - well, the big tent model is quite clearly weakening
what that means [with the presumed exception of Ring 0] - but Ironic
isn't in Ring 0 in any of the design sketches - so it's not at all
clear to me whether we as a group want to keep enforcing that, or
perhaps be much more inclusive and say 'pick good components that do
the job'. So again, let's see what happens and how it pans out: the
folk hacking on this are already well established 'stackers, we need
have no fear that they are going to ignore the capabilities present in
the OpenStack ecosystem: much more interesting to me will be to see
what things they /can/ reuse, and what they cannot.

Finally, yes, docker/lxc containers and iscsi don't mix very well
today as netlink isn't namespaced - we spent a bunch of time early on
in TripleO investigating this, before shelving it as a 'later'
problem: if there is an answer and this team finds it - great! If
not, they'll need some way to deploy the iscsi using components
outside of docker... and that may bring us full circle back to
deploying images via Ironic rather than docker - or it may not! we'll
just have to see :)

-Rob


On 24 September 2014 13:30, Mike Spreitzer mspre...@us.ibm.com wrote:
 I don't know if anyone else has noticed, but you can not install Cinder
 inside a container.  Cinder requires an iSCSI package that fails to install;
 its install script tries to launch the daemon, and that fails.

 Regards,
 Mike
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-23 Thread Steven Dake

On 09/23/2014 07:15 PM, John Griffith wrote:



On Tue, Sep 23, 2014 at 7:06 PM, Steven Dake sd...@redhat.com 
mailto:sd...@redhat.com wrote:


On 09/23/2014 05:38 PM, Fox, Kevin M wrote:

I'm interested in how this relates/conflicts with the TripleO
goal of using OpenStack to deploy OpenStack.

It looks like (maybe just superficially) that Kubernetes is
simply a combination of (nova + docker driver) = container
scheduler and (heat) = orchestration. They both schedule
containers, will need advanced scheduling like ensuring these two
containers are on different servers (nova ServerGroups),
autoscale resources, hook up things together, have a json
document that describes the desired state, etc... If that's the
case, it seems odd to use an OpenStack competing product to
deploy a competitor of Kubernetes. Two software stacks to learn
how to debug rather than just one.


Kevin,

Thanks for the feedback.

There are two orthogonal points you address re competitiveness. 
One is the deployment program (which Kolla intends to be a part

of).  The deployment program includes an implementation
(tripleo).  TripleO is focused around using OpenStack to deploy
OpenStack. Kolla is focused around using Kubernetes to deploy
OpenStack.  But they both fit into the same program, and at some
point they may even be remerged into both using OpenStack to
deploy OpenStack.  Time will tell.

IMO Kubernetes is not competitive with OpenStack.  The way in
which the Kolla project uses them is in fact complementary.  In a
perfect world OpenStack's container service (Magnum) + Heat could
be used instead of Kubernetes.  The problem with that approach is
the container service for OpenStack is not functional and not
integrated into the release.

It is indeed true that another software stack must be learned.  We
hope to abstract most/all of the differences so the actual
maintenance difference (ie what must be learned) presents a small
learning footprint.


Maybe I'm just totally misunderstanding what Kubernetes is trying
to accomplish though. I'm not trying to stir up trouble here. I
just really want to understand how these two technologies fit
together.



I don't see you stirring up trouble :) Essentially this project
proposes an alternative method for deploying OpenStack (ie not
using OpenStack, but using Kubernetes).

I did run the idea by Robert Collins (current TripleO PTL) first
before we got cracking on the code base.  He indicated the
approach was worth experimenting with.

So I think it's a cool idea and worth looking at as you have said.  
But I'm very confused by your statements, it seems to me that there's 
a misunderstanding, either in what Triple'O is, or something else 
entirely.


Given that Triple'O stands for Openstack On Openstack I'm not sure how 
you separate the OpenStack piece from the project?  Don't get me 
wrong, I'm certainly not saying one way is better than the other etc. 
 just that the statements here are a bit confusing to me.



John,

There is a deployment program - tripleo is just one implementation. We 
went through this with Heat and various projects that want to extend 
heat (eg Murano) and one big mistake I think Murano folks made was not 
figuring out where their code would go prior to writing it.  I'm only 
making a statement as to where I think it should belong.


Also, it seems REALLY strange that Triple'O hasn't even graduated and 
my limited understanding is that it still has a ways to go and we're 
proposing alternate implementations of it.  Very odd IMO.




Our goal is deploying OpenStack using containers.  TripleO could have 
this same goal, but at present it does not (I could be mistaken 
here; please feel free to correct me if I am wrong).  It rather prefers 
to deploy on bare metal.  We are just focusing on this particular point 
(openstack in containers on bare metal).  I spoke with Robert for quite 
a while about integration time and we were both in agreement that early or 
late integration is not a concern for us - getting something working for 
containers seemed more compelling.


That being said, I'd love to see some details on what you have in mind 
here.  I don't necessarily see why it needs to be an OpenStack 
Project per se as opposed to a really cool Open Source Project for 
deploying OpenStack containers (or whatever it is exactly that you 
have in mind).



It doesn't have to necessarily go into the deployment program.  My main 
motivation at this point for using stackforge and attaching it to 
OpenStack is I desperately want to use the OpenStack workflow since our 
workflow rocks!


Regards
-steve



Regards
-steve



Thanks,
Kevin

*From:* Steven Dake [sd...@redhat.com mailto:sd...@redhat.com]
*Sent:* Tuesday, 

Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-23 Thread Fox, Kevin M
There are other options. 3 docker containers with ceph in them perhaps.

Kevin


From: Steven Dake
Sent: Tuesday, September 23, 2014 7:21:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and 
Manage OpenStack using Kubernetes and Docker

On 09/23/2014 06:30 PM, Mike Spreitzer wrote:
I don't know if anyone else has noticed, but you can not install Cinder inside 
a container.  Cinder requires an iSCSI package that fails to install; its 
install script tries to launch the daemon, and that fails.

Regards,
Mike

Mike,

Mordred pointed that out late last week - that essentially iscsi + containers 
are bust.  This is something that needs fixing in upstream kernel.org.  I don't 
know the details, but I assume at some point we will need to figure out what 
the problem is and sort out a solution and beg the upstream to fix it ;-).

Regards
-steve


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-23 Thread Jay Pipes

On 09/23/2014 10:29 PM, Steven Dake wrote:

There is a deployment program - tripleo is just one implementation.


Nope, that is not correct. Like it or not (I personally don't), Triple-O 
is *the* Deployment Program for OpenStack:


http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n284

Saying Triple-O is just one implementation of a deployment program is 
like saying Heat is just one implementation of an orchestration program. 
It isn't. It's *the* implementation of an orchestration program that has 
been blessed by the TC:


http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n112

 We
went through this with Heat and various projects that want to extend
heat (eg Murano) and one big mistake I think Murano folks made was not
figuring out where there code would go prior to writing it.  I'm only
making a statement as to where I think it should belong.


Sorry, I have to call you to task on this.

You think it was a mistake for the Murano folks to not figure out where 
the code would go prior to writing it? For the record, Murano existed 
nearly 2 years ago, as a response to various customer requests. Having 
the ability to properly deploy Windows applications like SQL Server and 
Active Directory into an OpenStack cloud was more important to the 
Murano developers than trying to predict what the whims of the OpenStack 
developer and governance model would be months or years down the road.


Tell me, did any of Heat's code exist prior to deciding to propose it 
for incubation? Saying that Murano developers should have thought about 
where their code would live is holding them to a higher standard than 
any of the other developer communities. Did folks working on 
disk-image-builder pre-validate with the TC or the mailing list that the 
dib code would live in the triple-o program? No, of course not. It was 
developed naturally and then placed into the program that fit it best.


Murano was developed naturally in exactly the same way, and the Murano 
developers have been nothing but accommodating to every request made of 
them by the TC (and those requests have been entirely different over the 
last 18 months, ranging from 'split it out' to 'just propose another 
program') and by the PTLs for projects that requested they split various 
parts of Murano out into existing programs.


The Murano developers have made no power grab, have deliberately tried 
to be as community-focused and amenable to all requests as possible, and 
yet they are treated with disdain by a number of folks in the core Heat 
developer community, including yourself, Clint and Zane. And honestly, I 
don't get it... all Murano is doing is generating Heat templates and 
trying to fill in some pieces that Heat isn't interested in doing. I 
don't see why there is so much animosity towards a project that has, to 
my knowledge, acted in precisely the ways that we've asked projects to 
act in the OpenStack community: with openness, transparency, and 
community good will.


Sorry to be so blunt, but this has been weighing on me.
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev