Thanks for the feedback. There have been some concerns raised here about this 
proposal, but I think we have had sufficient valid arguments for why we need 
this functionality in CloudStack. While this proposal uses k8s as the 
container orchestrator, the design should be able to accommodate other 
container orchestrators.


Unless anybody has further specific technical points on this, I'd like to go 
ahead with the implementation and open up a PR.


Kishan, when do you think you can share further details (northbound API 
semantics etc.) on the work you are doing? I am planning to get this feature 
in for 4.11. 



On 31/01/17, 5:54 PM, "Will Stevens" <williamstev...@gmail.com> wrote:

>I think that is covered in this proposal. There is nothing k8s specific in
>this integration (from what I understand); all the k8s details are passed
>in via the cloud-init configuration after the cluster has been provisioned.
>
>On Jan 31, 2017 3:06 AM, "Lianghwa Jou" <lianghwa....@accelerite.com> wrote:
>
>
>There are many container orchestrators. Those container orchestrators are
>happy to run on any VMs or bare metal machines. K8s is just one of them and
>there will be more in the future. It may not be a good idea to make
>CloudStack k8s-aware. IMO, the relationship between k8s and CloudStack
>should be similar to that between an application and an OS. It is probably
>not a good idea to make your OS aware of any specific application, so IMHO I
>don’t think k8s should be native to CloudStack. It makes more sense to
>provide some generic services that many applications can take advantage
>of. Some generic resource grouping service makes sense, so that a group of
>VMs, bare metal machines or networks can be treated as a single entity. Some
>life cycle management will be necessary for these entities too. We can deploy
>k8s, swarm, dcos or even non-container specific services on top of
>CloudStack. K8s is changing fast. A single tenant installation may need more
>than one VM group and may actually require more (k8s federation). It will be
>a struggle to stay in sync if we try to bring k8s-specific knowledge into
>CloudStack.
>
>Regards,
>
>
>--
>Lianghwa Jou
>VP Engineering, Accelerite
>email: lianghwa....@accelerite.com
>
>
>
>
>
>On 1/29/17, 11:54 PM, "Murali Reddy" <murali.re...@shapeblue.com> wrote:
>
>
>    I agree with the good views Will has shared, and I also agree with the
>concerns raised by Wido and Erik. IMO we need a balance between staying
>relevant/adding new features and the stability of CloudStack, and we should
>take corrective action if needed. We have two good examples of both. When SDN
>was a hot technology, CloudStack ended up with a bunch of SDN controller
>integrations. A few years later, CloudStack is carrying that baggage with no
>maintainers for those plugins, and probably not many CloudStack users needing
>overlays. On the other hand, beyond an attempt to liaise with ETSI, no NFV
>effort was made, and OpenStack has become the de-facto choice for NFV; for
>OpenStack, arguably, NFV has become a larger use case than cloud itself. I
>concur with Will’s point that a minimal viable solution that does not change
>the core of CloudStack, and that can keep CloudStack relevant, is worth
>taking in.
>
>    Will,
>
>    To your question of how different this is from ShapeBlue’s container
>service: its design, implementation, API semantics etc. remain the same.
>ShapeBlue’s container service was a true drop-in plug-in to CloudStack; with
>this proposal I am trying to re-work it into a pluggable service that is part
>of core CloudStack.
>
>    Key concepts that this proposal is trying to add (a rough sketch follows
>the list):
>
>        - the notion of a ‘container cluster’ as a first-class entity in
>CloudStack, which is basically a collection of other CloudStack resources
>(VM’s, networks, public IPs, network rules etc.)
>        - life-cycle operations to manage a ‘container cluster’: create,
>delete, start, stop, scale-up, scale-down, heal etc.
>        - orchestration of the container orchestrator control plane on top of
>the provisioned resources
>
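To make the ‘container cluster’ notion and its life-cycle verbs a bit more
concrete, below is a purely illustrative Python sketch. All names and fields
here are hypothetical and not the northbound API (which Kishan is still to
share); it only mirrors the three bullets above.

    # Illustrative only: hypothetical names, not the proposed CloudStack API.
    from dataclasses import dataclass, field
    from enum import Enum
    from typing import List

    class ClusterState(Enum):
        CREATED = "Created"
        RUNNING = "Running"
        STOPPED = "Stopped"
        DESTROYED = "Destroyed"

    @dataclass
    class ContainerCluster:
        """A 'container cluster': a grouping of resources CloudStack already manages."""
        name: str
        zone_id: str
        service_offering_id: str
        size: int                          # requested number of node VMs
        network_id: str = ""
        public_ip_id: str = ""
        vm_ids: List[str] = field(default_factory=list)
        state: ClusterState = ClusterState.CREATED

        def start(self) -> None:
            # provision/start member VMs, network, public IP, network rules
            self.state = ClusterState.RUNNING

        def stop(self) -> None:
            # stop member VMs but keep the grouping and its resources
            self.state = ClusterState.STOPPED

        def scale(self, new_size: int) -> None:
            # add or remove node VMs until the cluster matches new_size
            self.size = new_size

The container orchestrator control plane itself would then be set up on top of
whatever such an entity provisions, as described in the next paragraph.
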
>    At least for k8s (which is what this proposal is targeting), the
>integration with k8s is bare minimum. There are cloud-config scripts that
>automatically set up the k8s cluster master and node VM’s; all CloudStack
>does is pass the cloud-config to the CoreOS VM’s as user data (a rough sketch
>of that call follows).
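
As a rough illustration of that mechanism (this is not lifted from the FS),
passing a cloud-config to a VM as user data goes through the standard
deployVirtualMachine API call. The community 'cs' Python client is used here
only as an example, and the ids and cloud-config contents are placeholders:

    import base64
    from cs import CloudStack  # community CloudStack API client, for illustration

    # Placeholder cloud-config; the real scripts would bootstrap the k8s
    # master or node on the CoreOS VM.
    cloud_config = """#cloud-config
    write_files:
      - path: /etc/k8s-bootstrap-placeholder
        content: |
          placeholder for the k8s bootstrap configuration
    """

    api = CloudStack(endpoint="https://cloud.example.com/client/api",
                     key="API_KEY", secret="SECRET_KEY")

    # CloudStack expects the user data to be base64 encoded.
    vm = api.deployVirtualMachine(
        zoneid="ZONE_ID",
        templateid="COREOS_TEMPLATE_ID",
        serviceofferingid="OFFERING_ID",
        userdata=base64.b64encode(cloud_config.encode()).decode(),
    )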
>
>    Regards
>    Murali Reddy
>
>
>
>
>
>
>
>    On 29/01/17, 8:54 AM, "Will Stevens" <williamstev...@gmail.com on
>behalf of wstev...@cloudops.com> wrote:
>
>    >I agree that we need to be careful what we take on and own inside
>    >CloudStack.  I feel like some of the plugins or integrations which we
>have
>    >been "maintaining" might be better abandoned, but I feel like that is
>    >a whole discussion on its own.
>    >
>    >In this case, I feel like there is a minimum viable solution which puts
>    >CloudStack in a pretty good place to enable container orchestration.  For
>    >example, one of the biggest challenges with K8S is the fact that it is
>    >single tenant.  CloudStack has good multi-tenancy support and has the
>    >ability to orchestrate the underlying infra quite well.  We will have to
>    >be very careful not to try to reach too deep into the K8S world though,
>    >in my opinion.  We only want to be responsible for providing the infra
>    >(and a way to bootstrap K8S ideally) and for being able to scale the
>    >infra; everything else should be owned by the K8S layer on top.  That is
>    >the way I see it anyway, but please add your input.
>    >
>    >I think it is a liability to try to go too deep, for the same reasons
>Wido
>    >and Erik have mentioned.  But I also think we need to take it seriously
>    >because that train is moving and this may be a good opportunity to stay
>    >relevant in a rapidly changing market.
>    >
>    >*Will STEVENS*
>    >Lead Developer
>    >
>    ><https://goo.gl/NYZ8KK>
>    >
>    >On Sat, Jan 28, 2017 at 1:13 PM, Wido den Hollander <w...@widodh.nl>
>wrote:
>    >
>    >>
>    >> > On 27 January 2017 at 16:08, Will Stevens <wstev...@cloudops.com> wrote:
>    >> >
>    >> >
>    >> > Hey Murali,
>    >> > How different is this proposal from what ShapeBlue already built?
>It
>    >> looks
>    >> > pretty consistent with the functionality that you guys open
>sourced in
>    >> > Seville.
>    >> >
>    >> > I have not yet used this functionality, but I have reports that it
>works
>    >> > quite well.
>    >> >
>    >> > I believe the premise here is to only orchestrate the VM layer and
>    >> > basically expose a "group" of running VMs to the user.  The user is
>    >> > responsible for configuring K8S or whatever other container
>orchestrator
>    >> on
>    >> > top.  I saw mention of the "cloud-config" scripts in the FS; how
>are
>    >> those
>    >> > exposed to the cluster?  Maybe the FS can expand on that a bit?
>    >> >
>    >> > I believe the core feature that is being requested to be added is
>the
>    >> > ability to create a group of VMs which will be kept active as a
>group if
>    >> at
>    >> > all possible.  ACS would be responsible for making sure that the
>number
>    >> of
>    >> > VMs specified for the group is in the running state and it would spin
>up new
>    >> > VMs as needed in order to satisfy the group settings.  In general,
>it is
>    >> > understood that any application running on this group would have
>to be
>    >> > fault tolerant enough to be able to rediscover a new VM if one
>fails and
>    >> is
>    >> > replaced by a fresh copy.  Is that fair to say?  How is it
>expected that
>    >> > this service discovery is done, just by VMs being present on the
>network?
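
If that summary is right, the heal behaviour being described is essentially a
reconcile loop like the sketch below; the helper functions are made-up
stand-ins for the real orchestration calls, not anything from the FS:

    import time
    from typing import List

    def list_running_vms(cluster_id: str) -> List[str]:
        return []            # stand-in: query the group's VMs in Running state

    def launch_vm(cluster_id: str) -> str:
        return "new-vm-id"   # stand-in: deploy a replacement member VM

    def reconcile(cluster_id: str, desired_size: int, interval: int = 30) -> None:
        # Keep the group at its desired size by replacing failed or missing members.
        while True:
            running = list_running_vms(cluster_id)
            for _ in range(desired_size - len(running)):
                launch_vm(cluster_id)
            time.sleep(interval)

Service discovery would presumably be left to whatever orchestrator runs on
the cluster rather than to CloudStack itself.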
>    >> >
>    >> > As for some of the other people's concerns in this thread.
>    >> >
>    >> > - Regarding Wido's remarks.  I understand that there is some added
>    >> > complexity, but I don't feel like the scope of the addition is
>    >> > unrealistic.  I think the LXC integration was a lot farther out of
>the
>    >> > scope of what ACS does than this is.  This does not change the
>"things"
>    >> > which ACS orchestrates, it just adds the concept of a grouping of
>things
>    >> > which ACS already manages.  I think this is the right approach
>since it
>    >> is
>    >> > not trying to be a container orchestrator.  We will never compete
>with
>    >> K8S,
>    >> > for example, and we should not try, but K8S is here and the market
>wants
>    >> > it.  I do think we should be keeping our head up about that fact
>because
>    >> > being able to provide the underlay for K8S is very valuable in
>the
>    >> > current marketplace.  I see this functionality as a way to enable
>K8S
>    >> > adoption on top of ACS without changing our core values.
>    >> >
>    >> > - Regarding Erik's remarks.  The container space is moving fast,
>but so
>    >> is
>    >> > the industry.  If we want to remain relevant, we need to be able to
>    >> adapt a
>    >> > bit.  I don't think this is a big shift in what we do, but it is
>one that
>    >> > enables people to be able to start running with something like K8S
>on top
>    >> > of their existing ACS.  This is something we are interested in
>doing and
>    >> so
>    >> > are our customers.  If we can have a thin layer in ACS which helps
>enable
>    >> > the use of K8S (or other container orchestrators) by orchestrating
>    >> > infrastructure, as we already do, and making it easier to adopt a
>    >> container
>    >> > orchestrator running on top of ACS, I think that gives us a nice
>foothold
>    >> > in the market.  I don't really feel it is fair to compare
>containers to
>    >> > IPv6.  IPv6 has been out forever and it has taken almost a decade
>to get
>    >> > anyone to adopt it.  Containers have really only been here for
>like 2
>    >> years
>    >> > and they are changing the market landscape in a very real way.
>    >> >
>    >> > Kind of on topic and kind of off topic.  I think understanding our
>    >> approach
>    >> > to containers is going to be important for the ACS community as a
>whole.
>    >> > If we don't offer that market anything, then we will not be
>considered
>    >> and
>    >> > we will lose market share we can't afford to lose.  If we try to
>hitch
>    >> our
>    >> > horse to that cart too much, we will not be able to be agile
>enough and
>    >> > will fail.  I feel like the right approach is for us to know that
>it is a
>    >> > thriving market and continue to do what we do, but to extend an
>olive
>    >> > branch to that market.  I think this sort of implementation is the
>right
>    >> > approach because we are not trying to do too much.  We are simply
>giving
>    >> a
>    >> > foundation on which the next big thing in the container
>orchestration
>    >> world
>    >> > can adopt without us having to compete directly in that space.  I
>think
>    >> we
>    >> > have to focus on what we do best, but at the same time, think
>about what
>    >> we
>    >> > can do to enable that huge market of users to adopt ACS as their
>    >> > foundation.  The ability to offer VMs and containers in the same
>data
>    >> plane
>    >> > is something we have the ability to do, especially with this
>approach,
>    >> and
>    >> > is something most other software platforms cannot do.  The adoption of
>    >> > containers by bigger organizations will be only part of their
>workload;
>    >> > they will still be running VMs for the foreseeable future. Being
>able to
>    >> > appeal to that market is going to be important for us.
>    >> >
>    >> > Hopefully I don't have too many strong opinions here, but I do
>think we
>    >> > need to be thinking about how we move forward in a world which is
>    >> adopting
>    >> > containers in a very real way.
>    >> >
>    >>
>    >> Understood. I just want to prevent us adding more features to CloudStack
>    >> which are poorly maintained. Not judging Murali here at all, but I'd rather
>    >> see CloudStack lose code than have it added.
>    >>
>    >> Thinking about LXC, I would like to see that removed, together with
>    >> various other network plugins which I think are rarely used.
>    >>
>    >> Now, I had misunderstood Murali's idea. Still, I think it would be worth
>    >> more if we improved our cloud-init support and integration in various
>    >> tools, which would make it much easier to deploy VMs running containers ON
>    >> CloudStack.
>    >>
>    >> Not so much changing CloudStack code, but rather tooling around it.
>    >>
>    >> If we have CloudStack talking to Kubernetes, we suddenly have to maintain
>    >> that API integration. Who's going to do that, realistically?
>    >>
>    >> That's my main worry in this regard.
>    >>
>    >> We have so much more work to do in ACS in total that I don't know if
>this
>    >> is the realistic route. I talk to many people who are not looking at
>    >> containers and are still working with VMs.
>    >>
>    >> There is not a single truth here; it really depends on who you
>    >> ask.
>    >>
>    >> Wido
>    >>
>    >> > Cheers,
>    >> >
>    >> > Will
>    >> >
>    >> > *Will STEVENS*
>    >> > Lead Developer
>    >> >
>    >> > <https://goo.gl/NYZ8KK>
>    >> >
>    >> > On Fri, Jan 27, 2017 at 5:38 AM, Erik Weber <terbol...@gmail.com>
>wrote:
>    >> >
>    >> > > On Fri, Jan 27, 2017 at 7:20 AM, Murali Reddy <
>muralimmre...@gmail.com
>    >> >
>    >> > > wrote:
>    >> > > > All,
>    >> > > >
>    >> > > > I would like to propose native functionality in CloudStack to provide
>    >> > > > a container service which users can use out of the box to launch
>    >> > > > container-based applications. The idea is to support the ability to
>    >> > > > orchestrate the resources and automate aspects of setting up a
>    >> > > > container orchestrator through CloudStack. Public IaaS providers
>    >> > > > already provide this ability: AWS with its ECS [1] and Google with
>    >> > > > GKE [2]. Competing cloud orchestration platforms already have native
>    >> > > > support for a container service. Users of CloudStack, both public
>    >> > > > cloud providers and users with private clouds, will benefit from such
>    >> > > > functionality.
>    >> > > >
>    >> > > > While a container orchestrator of the user's choice can be
>    >> > > > provisioned on top of CloudStack (without CloudStack being involved)
>    >> > > > with tools like Terraform [3], Ansible [4] etc., the advantage of
>    >> > > > native orchestration is giving the user a cohesive integration. This
>    >> > > > proposal adds the notion of a first-class CloudStack entity called a
>    >> > > > container cluster, which can be used to provision resources and to
>    >> > > > scale up, scale down, start and stop the cluster of VM’s on which
>    >> > > > containerised applications run. For actual container orchestration we
>    >> > > > will still need a container orchestrator like Docker Swarm, Marathon
>    >> > > > or Kubernetes, but the CloudStack container service can automate
>    >> > > > setting up of the control plane.
>    >> > > >
>    >> > >
>    >> > > To be honest I'm torn on this one.
>    >> > >
>    >> > > Containers are a rapidly changing thing, and while Docker Swarm,
>    >> > > kubernetes, rancher or whatnot is popular today, they might not
>be
>    >> > > tomorrow.
>    >> > > They might use CoreOS today, but might not tomorrow.
>    >> > >
>    >> > > We have a rather poor track record of staying up to date with new
>    >> > > features/versions, and adding a feature that is so rapidly
>changing
>    >> > > is, I fear, going to be hard to maintain.
>    >> > > Want an example? Look at XenServer. It is one of the most used
>    >> > > hypervisors we support, yet it took 6 months or so for us to
>support
>    >> > > the latest release.
>    >> > > Or IPv6...
>    >> > >
>    >> > > I don't mean to bash the maintainers/implementers of those
>features, I
>    >> > > appreciate all the work being done in every aspect, but I
>believe we
>    >> > > should be realistic and realize that we have issues with keeping
>stuff
>    >> > > up to date.
>    >> > >
>    >> > > I'd say focus on making sure other tools can do their job well
>against
>    >> > > CloudStack (kops, rancher, ++), but that does not mean I will -1
>the
>    >> > > idea if anyone really wants to go through with it.
>    >> > >
>    >> > > --
>    >> > > Erik
>    >> > >
>    >>
>
