Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-18 Thread Fox, Kevin M
I'd love to attend, but this is right on top of the app catalog meeting. I 
think the app catalog might be one of the primary users of a cross-COE API.

At minimum, we'd like to be able to store URLs for 
Kubernetes/Swarm/Mesos templates and have an API to kick off a workflow in 
Horizon to have Magnum start up a new instance of the template the user 
selected.

Thanks,
Kevin


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-18 Thread Hongbin Lu
Hi all,

Magnum will have a fishbowl session to discuss if it makes sense to build a 
common abstraction layer for all COEs (kubernetes, docker swarm and mesos):

https://www.openstack.org/summit/austin-2016/summit-schedule/events/9102

Frankly, this is a controversial topic, since I have heard both agreement 
and disagreement from different people. It would be great if all of you 
could join the session and share your opinions and use cases. I hope we 
will have a productive discussion.

Best regards,
Hongbin

> -Original Message-
> From: Flavio Percoco [mailto:fla...@redhat.com]
> Sent: April-12-16 8:40 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: foundat...@lists.openstack.org
> Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all]
> One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)
> 
> On 11/04/16 16:53 +0000, Adrian Otto wrote:
> >Amrith,
> >
> >I respect your point of view, and agree that the idea of a common
> >compute API is attractive… until you think a bit deeper about what that
> >would mean. We seriously considered a “global” compute API at the time
> >we were first contemplating Magnum. However, what we came to learn
> >through the journey of understanding the details of how such a thing
> >would be implemented was that such an API would either be (1) the lowest
> >common denominator (LCD) of all compute types, or (2) an exceedingly
> >complex interface.
> >
> >You expressed a sentiment below that trying to offer choices for VM,
> >Bare Metal (BM), and Containers for Trove instances “adds considerable
> >complexity”. Roughly the same complexity would accompany the use of a
> >comprehensive compute API. I suppose you were imagining an LCD approach.
> >If that’s what you want, just use the existing Nova API, and load
> >different compute drivers on different host aggregates. A single Nova
> >client can produce VM, BM (Ironic), and Container (libvirt-lxc)
> >instances, all with a common API (Nova), if it’s configured in this way.
> >That’s what we do. Flavors determine which compute type you get.
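
(A minimal sketch of the flavor-driven selection Adrian describes, from the
Heat side. The flavor and image names are hypothetical and assume the
operator has mapped each flavor to a KVM, Ironic, or libvirt-lxc host
aggregate.)

heat_template_version: 2015-10-15

# Three servers created through the one Nova API; only the flavor
# differs, and the flavor (via its host-aggregate mapping) decides
# whether the instance lands on KVM, Ironic bare metal, or libvirt-lxc.
resources:
  vm_instance:
    type: OS::Nova::Server
    properties:
      name: db-on-vm
      image: ubuntu-14.04      # hypothetical image name
      flavor: m1.medium.vm     # hypothetical, mapped to a KVM aggregate
  baremetal_instance:
    type: OS::Nova::Server
    properties:
      name: db-on-metal
      image: ubuntu-14.04
      flavor: bm.large         # hypothetical, mapped to an Ironic aggregate
  container_instance:
    type: OS::Nova::Server
    properties:
      name: db-on-lxc
      image: ubuntu-14.04
      flavor: c1.small         # hypothetical, mapped to a libvirt-lxc aggregate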
> >
> >If what you meant is that you could tap into the power of all the
> >unique characteristics of each of the various compute types (through
> >some modular extensibility framework), you’ll likely end up with
> >complexity in Trove that is comparable to integrating with the native
> >upstream APIs, along with the disadvantage of waiting for OpenStack to
> >continually catch up to the pace of change of the various upstream
> >systems on which it depends. This is a recipe for disappointment.
> >
> >We concluded that wrapping native APIs is a mistake, particularly when
> >they are sufficiently different from what the Nova API already offers.
> >Container APIs have limited similarities, so when you try to make a
> >universal interface to all of them, you end up with a really
> >complicated mess. It would be even worse if we tried to accommodate all
> >the unique aspects of BM and VM as well. Magnum’s approach is to offer
> >the upstream native APIs for the different container orchestration
> >engines (COEs), and compose Bays for them to run on that are built from
> >the compute types that OpenStack supports. We do this by using
> >different Heat orchestration templates (and conditional templates) to
> >arrange a COE on the compute type of your choice. With that said, there
> >are still gaps where not all storage or network drivers work with
> >Ironic, and there are non-trivial security hurdles to clear to safely
> >use Bays composed of libvirt-lxc instances in a multi-tenant
> >environment.
> >
> >My suggestion to get what you want for Trove is to see if the cloud has
> >Magnum, and if it does, create a bay with the flavor type specified for
> >whatever compute type you want, and then use the native API for the COE
> >you selected for that bay. Start your instance on the COE, just like
> >you use Nova today. This way, you have low complexity in Trove, and you
> >can scale both the number of instances of your data nodes (containers)
> >and the infrastructure on which they run (Nova instances).
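
(A rough sketch of composing such a bay from Heat. The Magnum resource and
property names below are written from memory and should be treated as
assumptions, not a verified interface; once the bay is up, a service like
Trove would talk to the COE's native API.)

heat_template_version: 2015-10-15

resources:
  trove_baymodel:
    type: OS::Magnum::BayModel   # assumed resource type
    properties:
      name: trove-swarm-model
      coe: swarm                 # or kubernetes / mesos
      image: fedora-atomic       # hypothetical image name
      flavor: bm.large           # hypothetical; this picks the compute type
      keypair: default
      external_network: public
  trove_bay:
    type: OS::Magnum::Bay        # assumed resource type
    properties:
      name: trove-bay
      baymodel: { get_resource: trove_baymodel }
      node_count: 3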
> 
> 
> I've been researching this area and I've reached pretty much the
> same conclusion. I've had moments of wondering whether creating bays is
> something Trove should do, but I now think it should.
> 
> The need to handle the native API is the part I find a bit painful, as
> that means more code needs to happen in Trove for us to provide these
> provisioning facilities. I wonder if a common *library* would help here,
> at least to handle those

Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-14 Thread Bogdan Dobrelya
> On 04/11/2016 09:43 AM, Allison Randal wrote:
>>> On Wed, Apr 6, 2016 at 1:11 PM, Davanum Srinivas wrote:
 Reading unofficial notes [1], i found one topic very interesting:
 One Platform – How do we truly support containers and bare metal under
 a common API with VMs? (Ironic, Nova, adjacent communities e.g.
 Kubernetes, Apache Mesos etc)

 Anyone present at the meeting, please expand on those few notes on
 etherpad? And how, if at all, is this feedback getting back to the
 projects?
>>
>> It was really two separate conversations that got conflated in the
>> summary. One conversation was just being supportive of bare metal, VMs,
>> and containers within the OpenStack umbrella. The other conversation
>> started with Monty talking about his work on shade, and how it wouldn't
>> exist if more APIs were focused on the way users consume the APIs, and
>> less an expression of the implementation details of each project.
>> OpenStackClient was mentioned as a unified CLI for OpenStack focused
>> more on the way users consume the CLI. (OpenStackSDK wasn't mentioned,
>> but falls in the same general category of work.)
>>
>> i.e. There wasn't anything new in the conversation, it was more a matter
>> of the developers/TC members on the board sharing information about work
>> that's already happening.
> 
> I agree with that - but would like to clarify the 'bare metal, VMs and 
> containers' part a bit. (And in fact, I was concerned in the meeting that 
> the messaging around this would be confusing, because 'supporting bare 
> metal' and 'supporting containers' mean two different things, but we use 
> one phrase to talk about both.)
> 
> It's abundantly clear at the strategic level that having OpenStack be 
> able to provide both VMs and Bare Metal as two different sorts of 
> resources (ostensibly but not prescriptively via nova) is one of our 
> advantages. We wanted to underscore how important it is to be able to do 
> that, and wanted to underscore that so that it's really clear how 
> important it is any time the "but cloud should just be VMs" sentiment 
> arises.
> 
> The way we discussed "supporting containers" was quite different and was 
> not about nova providing containers. Rather, it was about reaching out 
> to our friends in other communities and working with them on making 
> OpenStack the best place to run things like kubernetes or docker swarm. 
> Those are systems that ultimately need to run, and it seems that good 
> integration (like kuryr with libnetwork) can provide a really strong 
> story. I think pretty much everyone agrees that there is not much value 
> to us or the world for us to compete with kubernetes or docker.

Let me quote exactly here and summarize the proposals mentioned in this
thread (as I understood them):

1. TOSCA YAML service templates [0], or [1], or suchlike, to define
unified workloads (BM/VM/lightweight) and placement strategies as well.
Those templates are generated either by users directly or by projects like
Solum and Trove (shipping Apps-as-a-Service) or Kolla, TripleO, Fuel and
others, to deploy OpenStack services as well.

[0]
http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-YAML/v1.0/csprd01/TOSCA-Simple-Profile-YAML-v1.0-csprd01.html
[1] https://review.openstack.org/#/c/210549/15/specs/super-scheduler.rst

2. Heat-translator [2] (or a New Project?) to present the templates as
Heat Orchestration Templates (HOT)

[2] https://github.com/openstack/heat-translator

3. Heat (or the TOSCA translator, or...) to translate the HOTs (into API
calls?) and orchestrate workload placement through the reworked cloud
workload schedulers of Nova [3], Magnum, Ironic, Neutron/Kuryr for SDN,
and Cinder/Swift/Ceph for volume mounts and images, then down the road to
their BM/VM/lightweight-container drivers: nova.virt.ironic,
nova-docker/hypernova, kubernetes/mesos/swarm and the like.

[3] https://review.openstack.org/#/c/183837/4

4. At this point, here they are - unified workloads running shiny on top
of OpenStack.

So the question is: do we really need a unified API, or rather unified
(TOSCA YAML) templates and a translator to *reworked* local APIs?

By the way, this flow clearly illustrates why there are no collisions
between the CP spec [1] and the related Nova API reworking spec [3]. Those
are just different parts of the whole picture.
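
For illustration, a unified workload in a TOSCA Simple Profile YAML
template [0] might look roughly like the sketch below; the node names and
property values are invented, and the exact type names should be checked
against the spec:

tosca_definitions_version: tosca_simple_yaml_1_0

topology_template:
  node_templates:
    db_host:
      # Nothing here says VM, bare metal or container; the translator
      # and schedulers decide the placement from the requirements.
      type: tosca.nodes.Compute
      capabilities:
        host:
          properties:
            num_cpus: 2
            mem_size: 4 GB
            disk_size: 40 GB
    db_software:
      type: tosca.nodes.SoftwareComponent
      requirements:
        - host: db_host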

> 
> So, we do want to be supportive of bare metal and containers - but the 
> specific _WAY_ we want to be supportive of those things is different for 
> each one.
> 
> Monty


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-13 Thread Peng Zhao
Well, it is a myth that Docker is Linux-container specific. It was born
with cgroups/namespaces, but the image is an app-centric way to package an
application; there is nothing particular to Linux containers about it.
For OpenStack, given its virtualization roots, it is an easy win in places
that require strong isolation and multi-tenancy. And that creates new
patterns for consuming these technologies.
Peng
- Hyper_ Secure Container Cloud


On Wed, Apr 13, 2016 9:49 PM, Fox, Kevin M kevin@pnnl.gov wrote:
It partially depends on whether you're following the lightweight-container
methodology. Can the Nova API support unix sockets or bind mounts between
containers in the same pod? Would it be reasonable to add that
functionality? It's pretty different from Nova's usual use cases.

Thanks,
Kevin





Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-13 Thread Georgy Okrokvertskhov
At Mirantis we are playing with different technologies to explore possible
ways of using containers. Recently we did some POC-style work on
integrating existing OpenStack components with container technologies.
Here is a link for a demo. In this POC the Nova API can schedule a
container via a Nova Mesos driver, which is quite similar to the
nova-docker concept. The most interesting part is that Neutron manages
Docker networks and Cinder creates a volume which can be attached to the
container. Mesos is not necessary here, so the same work can be done with
the existing nova-docker driver.

We did not try to address all possible cases for containers. This POC
covers a very specific use case where someone has a limited number of
applications which can be executed in both VMs and containers. The
application is self-contained, so there is no need for the complex
orchestration which Kubernetes or Marathon provides.

-Gosha

On Wed, Apr 13, 2016 at 8:05 AM, Joshua Harlow wrote:

> Thierry Carrez wrote:
>
>> Fox, Kevin M wrote:
>>
>>> I think my head just exploded. :)
>>>
>>> That idea's similar to neutron sfc stuff, where you just say what
>>> needs to connect to what, and it figures out the plumbing.
>>>
>>> Ideally, it would map somehow to heat & docker COE & neutron sfc to
>>> produce a final set of deployment scripts and then just runs it
>>> through the meat grinder. :)
>>>
>>> It would be awesome to use. It may be very difficult to implement.
>>>
>>> If you ignore the non container use case, I think it might be fairly
>>> easily mappable to all three COE's though.
>>>
>>
>> This feels like Heat with a more readable descriptive language. I don't
>> really like this approach, because you end up with the lowest common
>> denominator between COE's functionality. They are all different. And
>> they are at the peak of the differentiation phase. The LCD is bound to
>> be pretty basic.
>>
>> This approach may be attractive for us as infrastructure providers, but
>> I also know this is not attractive to users who used Kubernetes before
>> and want to continue to use Kubernetes (and don't really want to care
>> about whether OpenStack is running under the hood). They don't want to
>> learn another descriptor language or API, they just want to learn the
>> Kubernetes description model and API and take advantage of its unique
>> capabilities.
>>
>> In summary, this may be a good solution for *existing* OpenStack users
>> to start playing with containerized workloads. But it is not a good
>> solution to attract the container cool kids to using OpenStack as their
>> base infrastructure provider. For those we need to make it as
>> transparent and simple to use their usual tools to deploy on top of
>> OpenStack clouds. The more they can ignore we are there, the better.
>>
>>
> I get the feeling of 'the more they can ignore we are there, the better.'
> but it just feels like at that point we have accepted our own fate in this
> arena vs trying to actually have an impact in it... Do others feel that
> it is already at the point where we can no longer attract the cool kids?
> Is the tipping point of that happening already past?
>
> I'd like for openstack to still attract the cool kids, and not just
> attract the cool kids by accepting 'the more they can ignore we are there,
> the better' as our fate... I get that someone has to provide the equivalent
> of roads, plumbing and water but man, it feels like we can also provide
> other things still ;)
>
> -Josh
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Georgy Okrokvertskhov
Director of Performance Engineering,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-13 Thread Joshua Harlow

Thierry Carrez wrote:

Fox, Kevin M wrote:

I think my head just exploded. :)

That idea's similar to neutron sfc stuff, where you just say what
needs to connect to what, and it figures out the plumbing.

Ideally, it would map somehow to heat & docker COE & neutron sfc to
produce a final set of deployment scripts and then just runs it
through the meat grinder. :)

It would be awesome to use. It may be very difficult to implement.

If you ignore the non container use case, I think it might be fairly
easily mappable to all three COE's though.


This feels like Heat with a more readable descriptive language. I don't
really like this approach, because you end up with the lowest common
denominator between COE's functionality. They are all different. And
they are at the peak of the differentiation phase. The LCD is bound to
be pretty basic.

This approach may be attractive for us as infrastructure providers, but
I also know this is not attractive to users who used Kubernetes before
and want to continue to use Kubernetes (and don't really want to care
about whether OpenStack is running under the hood). They don't want to
learn another descriptor language or API, they just want to learn the
Kubernetes description model and API and take advantage of its unique
capabilities.

In summary, this may be a good solution for *existing* OpenStack users
to start playing with containerized workloads. But it is not a good
solution to attract the container cool kids to using OpenStack as their
base infrastructure provider. For those we need to make it as
transparent and simple to use their usual tools to deploy on top of
OpenStack clouds. The more they can ignore we are there, the better.



I get the feeling of 'the more they can ignore we are there, the 
better.' but it just feels like at that point we have accepted our own 
fate in this arena vs trying to actually have an impact in it... Do 
others feel that it is already at the point where we can no longer 
attract the cool kids? Is the tipping point of that happening already past?


I'd like for openstack to still attract the cool kids, and not just 
attract the cool kids by accepting 'the more they can ignore we are 
there, the better' as our fate... I get that someone has to provide the 
equivalent of roads, plumbing and water but man, it feels like we can 
also provide other things still ;)


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-13 Thread Fox, Kevin M
It partially depends on whether you're following the lightweight-container 
methodology. Can the Nova API support unix sockets or bind mounts between 
containers in the same pod? Would it be reasonable to add that 
functionality? It's pretty different from Nova's usual use cases.

Thanks,
Kevin



From: Peng Zhao
Sent: Tuesday, April 12, 2016 11:33:21 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Agreed.

IMO, OpenStack is an open framework for different technologies and use 
cases. Different architectures for different things make sense. Some may 
say that using nova to launch docker images with a hypervisor is weird, 
but it can be seen as “Immutable IaaS”.


- Hyper_ Secure Container Cloud




On Wed, Apr 13, 2016 1:43 PM, Joshua Harlow harlo...@fastmail.com wrote:

Sure, so that helps, except it still has the issue of bumping up against
the mismatch of the API(s) of nova. This is why I'd rather have a template
kind of format (as say the input API) that allows for (optionally)
expressing such container specific capabilities/constraints. Then some
project that can understand that template/format can, if needed, talk to a
COE (or similar project) to translate that template 'segment' into a
realized entity using the capabilities/constraints that the template
specified. Overall it starts to feel like maybe it is time to change the
upper and lower systems and shake things up a little ;)

Peng Zhao wrote:
> I'd take the idea further. Imagine a typical Heat template, what you
> need to do is:
>
> - replace the VM id with Docker image id
> - nothing else
> - run the script with a normal heat engine
> - the entire stack gets deployed in seconds
>
> Done!
>
> Well, that sounds like nova-docker. What about cinder and neutron? They
> don't work well with Linux containers! The answer is Hypernova
> (https://github.com/hyperhq/hypernova) or Intel ClearContainer, seamless
> integration with most OpenStack components.
>
> Summary: minimal changes to interface and upper systems, much smaller
> image and much better developer workflow.
>
> Peng
> - Hyper_ Secure Container Cloud
>
> On Wed, Apr 13, 2016 5:23 AM, Joshua Harlow harlo...@fastmail.com wrote:
>> Fox, Kevin M wrote:
>>> I think part of the problem is containers are mostly orthogonal to
>>> vms/bare metal. Containers are a package for a single service.
>>> Multiple can run on a single vm/bare metal host. Orchestration like
>>> Kubernetes comes in to turn a pool of vm's/bare metal into a system
>>> that can easily run multiple containers.
>>
>> Is the orthogonal part a problem because we have made it so or is it
>> just how it really is? Brainstorming starts here:
>>
>> Imagine a descriptor language like (which I stole from
>> https://review.openstack.org/#/c/210549 and modified):
>>
>> ---
>> components:
>>   - label: frontend
>>     count: 5
>>     image: ubuntu_vanilla
>>     requirements: high memory, low disk
>>     stateless: true
>>   - label: database
>>     count: 3
>>     image: ubuntu_vanilla
>>     requirements: high memory, high disk
>>     stateless: false
>>   - label: memcache
>>     count: 3
>>     image: debian-squeeze
>>     requirements: high memory, no disk
>>     stateless: true
>>   - label: zookeeper
>>     count: 3
>>     image: debian-squeeze
>>     requirements: high memory, medium disk
>>     stateless: false
>> backend: VM
>> networks:
>>   - label: frontend_net
>>     flavor: "public network"
>>     associated_with:
>>       - frontend
>>   - label: database_net
>>     flavor: high bandwidth
>>     associated_with:
>>       - database
>>   - label: backend_net
>>     flavor: high bandwidth and low latency
>>     associated_with:
>>       - zookeeper
>>       - memcache
>> constraints:
>>   - ref: container_only
>>     params:
>>       - frontend
>>   - ref: no_colocated
>>     params:
>>       - database
>>       - frontend
>>   - ref: spread
>>     params:
>>       - database
>>   - ref: no_colocated
>>     params:
>>       - database
>>       - frontend
>>   - ref: spread
>>     params:
>>       - memcache
>>   - ref: spread
>>     params:
>>       - zookeeper
>>   - ref: isolated_network
>>     params:
>>       - frontend_net
>>       - database_net
>>       - backend_net
>> ...
>>
>> Now nothing in the above is about container, or baremetal, or vms
>> (although an 'advanced' constraint can be that a component must be on
>> a container, and it must say be deployed via docker image XYZ...);
>> instead it's just about the constraints that a user has on their
>> deployment and the components associated with it. It can be left up
>> to some consuming project of that format to decide how to turn that
>> desired description into an actual description (aka a full expanding
>> of that format into an actual deployment plan), possibly say by
>> optimizing
Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-13 Thread Michał Dulko
On 04/13/2016 11:16 AM, Thierry Carrez wrote:
> Fox, Kevin M wrote:
>> I think my head just exploded. :)
>>
>> That idea's similar to neutron sfc stuff, where you just say what
>> needs to connect to what, and it figures out the plumbing.
>>
>> Ideally, it would map somehow to heat & docker COE & neutron sfc to
>> produce a final set of deployment scripts and then just runs it
>> through the meat grinder. :)
>>
>> It would be awesome to use. It may be very difficult to implement.
>>
>> If you ignore the non container use case, I think it might be fairly
>> easily mappable to all three COE's though.
>
> This feels like Heat with a more readable descriptive language. I
> don't really like this approach, because you end up with the lowest
> common denominator between COE's functionality. They are all
> different. And they are at the peak of the differentiation phase. 

Are we able to define that lowest common denominator at this stage?
Maybe that subset of features is still valuable?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-13 Thread Thierry Carrez

Fox, Kevin M wrote:

I think my head just exploded. :)

That idea's similar to neutron sfc stuff, where you just say what needs to 
connect to what, and it figures out the plumbing.

Ideally, it would map somehow to heat & docker COE & neutron sfc to produce a 
final set of deployment scripts and then just runs it through the meat grinder. :)

It would be awesome to use. It may be very difficult to implement.

If you ignore the non container use case, I think it might be fairly easily 
mappable to all three COE's though.


This feels like Heat with a more readable descriptive language. I don't 
really like this approach, because you end up with the lowest common 
denominator between COE's functionality. They are all different. And 
they are at the peak of the differentiation phase. The LCD is bound to 
be pretty basic.


This approach may be attractive for us as infrastructure providers, but 
I also know this is not attractive to users who used Kubernetes before 
and want to continue to use Kubernetes (and don't really want to care 
about whether OpenStack is running under the hood). They don't want to 
learn another descriptor language or API, they just want to learn the 
Kubernetes description model and API and take advantage of its unique 
capabilities.


In summary, this may be a good solution for *existing* OpenStack users 
to start playing with containerized workloads. But it is not a good 
solution to attract the container cool kids to using OpenStack as their 
base infrastructure provider. For those we need to make it as 
transparent and simple to use their usual tools to deploy on top of 
OpenStack clouds. The more they can ignore we are there, the better.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-13 Thread Peng Zhao
> …and VM's the same way since Magnum can launch on either. I'm afraid
> supporting both containers and non container deployment with Trove will
> be a large effort with very little code sharing. It may be easiest to
> have a flag version where non container deployments are upgraded to
> containers then non container support is dropped.

Sure, trove seems like it would be a consumer of whatever interprets that
format, just like many other consumers could be (with the special case
that trove creates such a format on-behalf of some other consumer, aka the
trove user).

> As for the app-catalog use case, the app-catalog project
> (http://apps.openstack.org) is working on some of that.
>
> Thanks,
> Kevin
> ________________
> From: Joshua Harlow [harlo...@fastmail.com]
> Sent: Tuesday, April 12, 2016 12:16 PM
> To: Flavio Percoco; OpenStack Development Mailing List (not for usage
> questions)
> Cc: foundat...@lists.openstack.org
> Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One
> Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)
>
> Flavio Percoco wrote:
>> On 11/04/16 18:05 +0000, Amrith Kumar wrote:
>>> Adrian, thx for your detailed mail.
>>>
>>> Yes, I was hopeful of a silver bullet and as we’ve discussed before (I
>>> think it was Vancouver), there’s likely no silver bullet in this area.
>>> After that conversation, and some further experimentation, I found
>>> that even if Trove had access to a single Compute API, there were
>>> other significant complications further down the road, and I didn’t
>>> pursue the project further at the time.
>>>
>> Adrian, Amrith,
>>
>> I've spent enough time researching this area during the last month and
>> my conclusion is pretty much the above. There's no silver bullet in
>> this area and I'd argue there shouldn't be one. Containers, bare metal
>> and VMs differ in such a way (feature-wise) that it'd not be good, as
>> far as deploying databases goes, for there to be one compute API.
>> Containers allow for a different deployment architecture than VMs and
>> so does bare metal.
>
> Just some thoughts from me, but why focus on the
> compute/container/baremetal API at all?
>
> I'd almost like a way that just describes how my app should be
> interconnected, what is required to get it going, and the features
> and/or scheduling requirements for different parts of the app.
>
> To me it feels like this isn't a compute API or really a heat API but
> something else. Maybe it's closer to the docker compose API/template
> format or something like it.
>
> Perhaps such a thing needs a new project. I'm not sure, but it does feel
> like that as developers we should be able to make such a thing that
> still exposes the more advanced functionality of the underlying API so
> that it can be used if really needed...
>
> Maybe this is similar to an app-catalog, but that doesn't quite feel
> like it's the right thing either so maybe somewhere in between...
>
> IMHO it'd be nice to have a unified story around what this thing is, so
> that we as a community can drive (as a single group) toward that, maybe
> this is where the product working group can help and we as a developer
> community can also try to unify behind...
>
> P.S. name for project should be 'silver' related, ha.
>
> -Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Joshua Harlow
Sure, trove seems like it would be a consumer of whatever interprets that
format, just like many other consumers could be (with the special case
that trove creates such a format on-behalf of some other consumer, aka the
trove user).

> As for the app-catalog use case, the app-catalog project
> (http://apps.openstack.org) is working on some of that.
>
> Thanks,
> Kevin
> ________________
> From: Joshua Harlow [harlo...@fastmail.com]
> Sent: Tuesday, April 12, 2016 12:16 PM
> To: Flavio Percoco; OpenStack Development Mailing List (not for usage
> questions)
> Cc: foundat...@lists.openstack.org
> Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One
> Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)
>
> Flavio Percoco wrote:
>> On 11/04/16 18:05 +0000, Amrith Kumar wrote:
>>> Adrian, thx for your detailed mail.
>>>
>>> Yes, I was hopeful of a silver bullet and as we’ve discussed before (I
>>> think it was Vancouver), there’s likely no silver bullet in this area.
>>> After that conversation, and some further experimentation, I found
>>> that even if Trove had access to a single Compute API, there were
>>> other significant complications further down the road, and I didn’t
>>> pursue the project further at the time.
>>>
>> Adrian, Amrith,
>>
>> I've spent enough time researching this area during the last month and
>> my conclusion is pretty much the above. There's no silver bullet in
>> this area and I'd argue there shouldn't be one. Containers, bare metal
>> and VMs differ in such a way (feature-wise) that it'd not be good, as
>> far as deploying databases goes, for there to be one compute API.
>> Containers allow for a different deployment architecture than VMs and
>> so does bare metal.
>
> Just some thoughts from me, but why focus on the
> compute/container/baremetal API at all?
>
> I'd almost like a way that just describes how my app should be
> interconnected, what is required to get it going, and the features
> and/or scheduling requirements for different parts of the app.
>
> To me it feels like this isn't a compute API or really a heat API but
> something else. Maybe it's closer to the docker compose API/template
> format or something like it.
>
> Perhaps such a thing needs a new project. I'm not sure, but it does feel
> like that as developers we should be able to make such a thing that
> still exposes the more advanced functionality of the underlying API so
> that it can be used if really needed...
>
> Maybe this is similar to an app-catalog, but that doesn't quite feel
> like it's the right thing either so maybe somewhere in between...
>
> IMHO it'd be nice to have a unified story around what this thing is, so
> that we as a community can drive (as a single group) toward that, maybe
> this is where the product working group can help and we as a developer
> community can also try to unify behind...
>
> P.S. name for project should be 'silver' related, ha.
>
> -Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Peng Zhao
I'd take the idea further. Imagine a typical Heat template; what you need
to do is:

- replace the VM id with a Docker image id
- nothing else
- run the script with a normal heat engine
- the entire stack gets deployed in seconds

Done!

Well, that sounds like nova-docker. What about cinder and neutron? They
don't work well with Linux containers! The answer is Hypernova
(https://github.com/hyperhq/hypernova) or Intel ClearContainer: seamless
integration with most OpenStack components.

Summary: minimal changes to interfaces and upper systems, a much smaller
image and a much better developer workflow.

Peng
- Hyper_ Secure Container Cloud


On Wed, Apr 13, 2016 5:23 AM, Joshua Harlow harlo...@fastmail.com wrote:

Fox, Kevin M wrote:
> I think part of the problem is containers are mostly orthogonal to
> vms/bare metal. Containers are a package for a single service. Multiple
> can run on a single vm/bare metal host. Orchestration like Kubernetes
> comes in to turn a pool of vm's/bare metal into a system that can easily
> run multiple containers.

Is the orthogonal part a problem because we have made it so or is it just
how it really is? Brainstorming starts here:

Imagine a descriptor language like (which I stole from
https://review.openstack.org/#/c/210549 and modified):

---
components:
  - label: frontend
    count: 5
    image: ubuntu_vanilla
    requirements: high memory, low disk
    stateless: true
  - label: database
    count: 3
    image: ubuntu_vanilla
    requirements: high memory, high disk
    stateless: false
  - label: memcache
    count: 3
    image: debian-squeeze
    requirements: high memory, no disk
    stateless: true
  - label: zookeeper
    count: 3
    image: debian-squeeze
    requirements: high memory, medium disk
    stateless: false
backend: VM
networks:
  - label: frontend_net
    flavor: "public network"
    associated_with:
      - frontend
  - label: database_net
    flavor: high bandwidth
    associated_with:
      - database
  - label: backend_net
    flavor: high bandwidth and low latency
    associated_with:
      - zookeeper
      - memcache
constraints:
  - ref: container_only
    params:
      - frontend
  - ref: no_colocated
    params:
      - database
      - frontend
  - ref: spread
    params:
      - database
  - ref: no_colocated
    params:
      - database
      - frontend
  - ref: spread
    params:
      - memcache
  - ref: spread
    params:
      - zookeeper
  - ref: isolated_network
    params:
      - frontend_net
      - database_net
      - backend_net
...

Now nothing in the above is about container, or baremetal, or vms
(although an 'advanced' constraint can be that a component must be on a
container, and it must say be deployed via docker image XYZ...); instead
it's just about the constraints that a user has on their deployment and
the components associated with it. It can be left up to some consuming
project of that format to decide how to turn that desired description into
an actual description (aka a full expanding of that format into an actual
deployment plan), possibly say by optimizing for density (packing as many
things as possible into containers) or optimizing for security (by using
VMs) or optimizing for performance (by using bare-metal).

> So, rather then concern itself with supporting launching through a COE
> and through Nova, which are two totally different code paths, OpenStack
> advanced services like Trove could just use a Magnum COE and have a UI
> that asks which existing Magnum COE to launch in, or alternately kick
> off the "Launch new Magnum COE" workflow in horizon, then follow up with
> the Trove launch workflow. Trove then would support being able to use
> containers, users could potentially pack more containers onto their vm's
> than just Trove, and it still would work with both Bare Metal and VM's
> the same way since Magnum can launch on either. I'm afraid supporting
> both containers and non container deployment with Trove will be a large
> effort with very little code sharing. It may be easiest to have a flag
> version where non container deployments are upgraded to containers then
> non container support is dropped.

Sure, trove seems like it would be a consumer of whatever interprets that
format, just like many other consumers could be (with the special case
that trove creates such a format on-behalf of some other consumer, aka the
trove user).

> As for the app-catalog use case, the app-catalog project
> (http://apps.openstack.org) is working on some of that.
>
> Thanks,
> Kevin
> ________________
> From: Joshua Harlow [harlo...@fastmail.com]
> Sent: Tuesday, April 12, 2016 12:16 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: foundat...@lists.openstack.org
> Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One
> Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)
>
> Flavio Percoco wrote:
>> On 11/04/16 18:05 +0000, Amrith Kumar wrote:
>>> Adrian, thx for your detailed mail.
>>>
>>> Yes, I was hopeful of a silver bullet and as we’ve discussed before (I
>>> think it was Vancouver), there’s likely no silver bullet in this area.
>>> After that conversation, and some further experimentation, I found
>>> that even if Trove had access to a single Compute API, there were
>>> other significant complications further down the road, and I didn’t
>>> pursue the project further at the time.
>>>
>> Adrian, Amrith,
>>
>> I've spent enough time researching this area during the last month and
>> my conclusion is pretty much the above. There's no silver bullet in
>> this area and I'd argue there shouldn't be one. Containers, bare metal
>> and VMs differ
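
(To make the "full expanding" step above concrete: a consuming project
optimizing for density might turn that descriptor into a plan like the
following. The plan format is entirely hypothetical.)

components:
  frontend:
    backend: container        # forced by the container_only constraint
    coe: kubernetes           # chosen by the expander, not the user
    count: 5
  database:
    backend: baremetal        # disk-heavy, stateful tier
    count: 3
    anti_affinity: [frontend] # realizes no_colocated + spread
  memcache:
    backend: container
    count: 3
  zookeeper:
    backend: vm
    count: 3
networks:
  frontend_net: {isolated: true}
  database_net: {isolated: true}
  backend_net: {isolated: true}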

Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Devdatta Kulkarni
Hi,

Reading through the thread, I thought of adding the following points, which 
might be relevant to this discussion.

1) Related to the point of providing an abstraction for deploying apps to 
different form factors: 
OpenStack Solum is a project which allows deploying of applications starting 
from
their source code. Currently we support following deployment options:
(a) deploying app containers in a setup where nova is configured to use the 
nova-docker driver
(b) deploying app containers on a VM with one container per VM configuration
(c) deploying app containers to a COE such as a Docker swarm cluster (currently 
under development)

For (a) and (b) we use parameterized Heat templates. For (c), we currently
assume that a COE cluster is already created. Solum has a feature whereby
app developers can provide different kinds of parameters with their app
deployment request. This feature is used to provide cluster credentials
with the app deployment request.

The deployment option is controlled by the operator at the time of Solum
deployment. Solum's architecture is such that it is straightforward to add
new deployers. I haven't looked at Ironic, so I won't be able to comment on
how difficult/easy it would be to add an Ironic deployer in Solum. As
Joshua mentioned, it will be interesting to consider different constraints
(density, performance, isolation) when choosing the form factor to deploy
app containers. Of course, it will also depend on how the other OpenStack
services are configured within that particular OpenStack setup.


2) Related to the point of high-level application description:
In Solum, we use a simple yaml format for describing an app to Solum. Example 
of an app file can be found here:
https://github.com/openstack/solum/blob/master/examples/apps/python_app.yaml
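
For a rough idea of its shape, such an app file is a short YAML document
along these lines (the field names here are recalled from memory, so the
linked example is authoritative):

version: 1
name: cherry-py-sample            # hypothetical app name
languagepack: python
source:
  repository: https://github.com/user/app.git   # hypothetical repo
  revision: master
workflow_config:
  test_cmd: ./unit_tests.sh
  run_cmd: python app.py
trigger_actions:
  - unittest
  - build
  - deploy
ports:
  - 80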

We have been planning to work on multi-container/microservice-based apps next, 
and have a spec for that:
https://review.openstack.org/#/c/254729/10/specs/mitaka/micro-service-architecture.rst

Any comments/feedback on the spec is welcome.


Lastly, in case you want to try out Solum, here is a link to setting up a 
development environment:
https://wiki.openstack.org/wiki/Solum/solum-development-setup

and getting started guide:
http://docs.openstack.org/developer/solum/getting_started/

Regards,
Devdatta


From: Joshua Harlow <harlo...@fastmail.com>
Sent: Tuesday, April 12, 2016 2:16 PM
To: Flavio Percoco; OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Flavio Percoco wrote:
> On 11/04/16 18:05 +0000, Amrith Kumar wrote:
>> Adrian, thx for your detailed mail.
>>
>>
>>
>> Yes, I was hopeful of a silver bullet and as we’ve discussed before (I
>> think it
>> was Vancouver), there’s likely no silver bullet in this area. After that
>> conversation, and some further experimentation, I found that even if
>> Trove had
>> access to a single Compute API, there were other significant
>> complications
>> further down the road, and I didn’t pursue the project further at the
>> time.
>>
>
> Adrian, Amrith,
>
> I've spent enough time researching this area during the last month
> and my
> conclusion is pretty much the above. There's no silver bullet in this
> area and
> I'd argue there shouldn't be one. Containers, bare metal and VMs differ
> in such
> a way (feature-wise) that it'd not be good, as far as deploying
> databases goes,
> for there to be one compute API. Containers allow for a different
> deployment
> architecture than VMs and so does bare metal.

Just some thoughts from me, but why focus on the
compute/container/baremetal API at all?

I'd almost like a way that just describes how my app should be
interconnected, what is required to get it going, and the features
and/or scheduling requirements for different parts of the app.

To me it feels like this isn't a compute API or really a heat API but
something else. Maybe it's closer to the docker compose API/template
format or something like it.
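
(For comparison, a docker-compose style description names only the
services and how they connect; a minimal, hypothetical example:)

version: '2'
services:
  web:
    image: nginx            # hypothetical images throughout
    ports:
      - "80:80"
    depends_on:
      - api
  api:
    image: myorg/api:latest
    depends_on:
      - db
  db:
    image: postgres:9.5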

Perhaps such a thing needs a new project. I'm not sure, but it does feel
like that as developers we should be able to make such a thing that
still exposes the more advanced functionality of the underlying API so
that it can be used if really needed...

Maybe this is similar to an app-catalog, but that doesn't quite feel
like it's the right thing either so maybe somewhere in between...

IMHO it'd be nice to have a unified story around what this thing is, so
that we as a community can drive (as a single group) toward that, maybe
this is where the product working group can help and we as a developer
community can also try to unify behind...

P.S. name for project should be 'silver' related, ha.

Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Amrith Kumar
> -Original Message-
> From: Flavio Percoco [mailto:fla...@redhat.com]
> Sent: Tuesday, April 12, 2016 8:32 AM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Cc: foundat...@lists.openstack.org
> Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One
> Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)
> 
> On 11/04/16 18:05 +, Amrith Kumar wrote:
> >Adrian, thx for your detailed mail.
> >
> >
> >
> >Yes, I was hopeful of a silver bullet and as we’ve discussed before (I
> >think it was Vancouver), there’s likely no silver bullet in this area.
> >After that conversation, and some further experimentation, I found that
> >even if Trove had access to a single Compute API, there were other
> >significant complications further down the road, and I didn’t pursue the
> project further at the time.
> >
> 
> Adrian, Amrith,
> 
> I've spent enough time researching this area during the last month and
> my conclusion is pretty much the above. There's no silver bullet in this
> area and I'd argue there shouldn't be one. Containers, bare metal and VMs
> differ in such a way (feature-wise) that it'd not be good, as far as
> deploying databases goes, for there to be one compute API. Containers
> allow for a different deployment architecture than VMs and so does bare
> metal.
> 

[amrith] That is an interesting observation if we were developing a unified 
compute service. However, the issue for a project like Trove is not whether 
Containers, VM's and bare-metal are the same or different, but rather what a 
user is looking to get out of a deployment of a database in each of those 
compute environments.

> 
> >We will be discussing Trove and Containers in Austin [1] and I’ll try
> >and close the loop with you on this while we’re in Town. I still would
> >like to come up with some way in which we can offer users the option of
> >provisioning database as containers.
> 
> As the person leading this session, I'm also looking forward to providing
> such provisioning facilities to Trove users. Let's do this.
> 

[amrith] In addition to hearing about how you plan to solve the problem, I 
would like to know what problem it is that you are planning to solve. Putting a 
database in a container is a solution, not a problem (IMHO). But the scope of 
this thread is broader so I'll stop at that.

> Cheers,
> Flavio
> 
> >
> >Thanks,
> >
> >
> >
> >-amrith
> >
> >
> >
> >[1] https://etherpad.openstack.org/p/trove-newton-summit-container
> >
> >
> >
> >From: Adrian Otto [mailto:adrian.o...@rackspace.com]
> >Sent: Monday, April 11, 2016 12:54 PM
> >To: OpenStack Development Mailing List (not for usage questions)
> ><openstack-dev@lists.openstack.org>
> >Cc: foundat...@lists.openstack.org
> >Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all]
> >One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)
> >
> >
> >
> >Amrith,
> >
> >
> >
> >I respect your point of view, and agree that the idea of a common
> >compute API is attractive… until you think a bit deeper about what that
> >would mean. We seriously considered a “global” compute API at the time
> >we were first contemplating Magnum. However, what we came to learn
> >through the journey of understanding the details of how such a thing
> >would be implemented was that such an API would either be (1) the lowest
> >common denominator (LCD) of all compute types, or (2) an exceedingly
> >complex interface.
> >
> >
> >
> >You expressed a sentiment below that trying to offer choices for VM,
> >Bare Metal (BM), and Containers for Trove instances “adds considerable
> complexity”.
> >Roughly the same complexity would accompany the use of a comprehensive
> >compute API. I suppose you were imagining an LCD approach. If that’s
> >what you want, just use the existing Nova API, and load different
> >compute drivers on different host aggregates. A single Nova client can
>produce VM, BM (Ironic), and Container (libvirt-lxc) instances all with
> >a common API (Nova) if it’s configured in this way. That’s what we do.
> >Flavors determine which compute type you get.
> >
> >
> >
> >If what you meant is that you could tap into the power of all the
> >unique characteristics of each of the various compute types (through
> >some modular extensibility framework) you’ll likely end up with
> >complexity in Trove that is comparable to integrating with the native
> >upstream 

Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Joshua Harlow

Steve Baker wrote:

On 13/04/16 11:07, Joshua Harlow wrote:

Fox, Kevin M wrote:

I think my head just exploded. :)

That idea's similar to neutron sfc stuff, where you just say what
needs to connect to what, and it figures out the plumbing.

Ideally, it would map somehow to heat & docker COE & neutron sfc to
produce a final set of deployment scripts and then just run it
through the meat grinder. :)

It would be awesome to use. It may be very difficult to implement.


+1 it will not be easy.

Although the complexity of it is probably no different than what a SQL
engine has to do to parse a SQL statement into an actionable plan; just
in this case it's an application deployment 'statement', and the
realization of that plan is of course where the 'meat' of the program
is.

It would be nice to connect with the neutron SFC work that exists/is
being worked on, and have a single project for this kind of stuff, but
maybe I am dreaming too much right now :-P



This sounds a lot like what the TOSCA spec[1] is aiming to achieve. I
could imagine heat-translator[2] gaining the ability to translate TOSCA
templates to either nova- or COE-specific heat templates, which are then
created as stacks.

[1]
http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-YAML/v1.0/csprd01/TOSCA-Simple-Profile-YAML-v1.0-csprd01.html


Since I don't know: does anyone in the wider world actually support 
TOSCA as their API? Or is TOSCA more of an exploratory kind of thing, or 
something else (it seems like there is a TOSCA -> Heat path?)? Forgive my 
lack of understanding ;)




[2] https://github.com/openstack/heat-translator
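
For a sense of what that TOSCA -> Heat path looks like, here is a rough
sketch on top of the tosca-parser and heat-translator libraries. The
template is a minimal made-up example, and the Python entry points below
reflect their Mitaka-era layout, so treat the exact class names and
signatures as assumptions to verify:

    import textwrap
    import yaml
    # Entry points as of the Mitaka-era releases (assumption -- verify):
    from toscaparser.tosca_template import ToscaTemplate
    from translator.hot.tosca_translator import TOSCATranslator

    TOSCA = textwrap.dedent("""
        tosca_definitions_version: tosca_simple_yaml_1_0
        description: One compute node; a COE-aware translator could target a bay.
        topology_template:
          node_templates:
            db_server:
              type: tosca.nodes.Compute
              capabilities:
                host:
                  properties:
                    num_cpus: 2
                    mem_size: 4 GB
                    disk_size: 20 GB
        """)

    # Parse and validate the TOSCA template, then emit a HOT template that
    # heat can create as a stack (the nova-backed case; a COE-specific
    # backend would be the new translation work discussed above).
    tosca = ToscaTemplate(yaml_dict_tpl=yaml.safe_load(TOSCA))
    print(TOSCATranslator(tosca, parsed_params={}).translate())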



If you ignore the non-container use case, I think it might be fairly
easily mappable to all three COE's though.

Thanks,
Kevin


From: Joshua Harlow [harlo...@fastmail.com]
Sent: Tuesday, April 12, 2016 2:23 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all]
One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Fox, Kevin M wrote:

I think part of the problem is containers are mostly orthogonal to
vms/bare metal. Containers are a package for a single service.
Multiple can run on a single vm/bare metal host. Orchestration like
Kubernetes comes in to turn a pool of vm's/bare metal into a system
that can easily run multiple containers.



Is the orthogonal part a problem because we have made it so or is it
just how it really is?

Brainstorming starts here:

Imagine a descriptor language like the following (which I stole from
https://review.openstack.org/#/c/210549 and modified):

---
components:
- label: frontend
  count: 5
  image: ubuntu_vanilla
  requirements: high memory, low disk
  stateless: true
- label: database
  count: 3
  image: ubuntu_vanilla
  requirements: high memory, high disk
  stateless: false
- label: memcache
  count: 3
  image: debian-squeeze
  requirements: high memory, no disk
  stateless: true
- label: zookeeper
  count: 3
  image: debian-squeeze
  requirements: high memory, medium disk
  stateless: false
  backend: VM
networks:
- label: frontend_net
  flavor: "public network"
  associated_with:
  - frontend
- label: database_net
  flavor: high bandwidth
  associated_with:
  - database
- label: backend_net
  flavor: high bandwidth and low latency
  associated_with:
  - zookeeper
  - memcache
constraints:
- ref: container_only
  params:
  - frontend
- ref: no_colocated
  params:
  - database
  - frontend
- ref: spread
  params:
  - database
- ref: no_colocated
  params:
  - database
  - frontend
- ref: spread
  params:
  - memcache
- ref: spread
  params:
  - zookeeper
- ref: isolated_network
  params:
  - frontend_net
  - database_net
  - backend_net
...


Now nothing in the above is about containers, or baremetal, or vms
(although an 'advanced' constraint can be that a component must be on a
container, and must, say, be deployed via docker image XYZ...); instead
it's just about the constraints that a user has on their deployment and
the components associated with it. It can be left up to some consuming
project of that format to decide how to turn that desired description
into an actual description (i.e., a full expansion of that format into
an actual deployment plan), possibly, say, by optimizing for density
(packing as many things into containers as possible) or optimizing for
security (by using VMs) or optimizing for performance (by using bare metal).
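
As a concrete (if toy) illustration of that "consuming project" role, a
planner could be as small as the sketch below. Only the descriptor keys
come from the example above; the policy table and the shape of the
returned plan are invented here:

    import yaml

    # Site-wide default backend per optimization goal (illustrative only).
    POLICY = {'density': 'container', 'security': 'vm',
              'performance': 'baremetal'}

    def expand(descriptor_text, optimize_for='density'):
        """Turn the desired description into a trivial deployment plan."""
        spec = yaml.safe_load(descriptor_text)
        # Components pinned to containers by an explicit constraint.
        pinned = {p for c in spec.get('constraints', [])
                  if c['ref'] == 'container_only'
                  for p in c['params']}
        plan = []
        for comp in spec['components']:
            if comp['label'] in pinned:
                backend = 'container'
            else:
                # An explicit 'backend: VM' on a component wins over policy.
                backend = comp.get('backend', POLICY[optimize_for]).lower()
            plan.append({'label': comp['label'], 'count': comp['count'],
                         'image': comp['image'], 'backend': backend})
        return plan

A real planner would also have to honor the spread/no_colocated/
isolated_network constraints when placing the pieces; the point is only
that the descriptor stays backend-neutral while the expansion step is
where the density vs. security vs. performance trade-offs get made.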


So, rather than concern itself with supporting launching through a
COE and through Nova, which are two totally different code paths,
OpenStack advanced services like Trove could just use a Magnum COE
and have a UI that asks which existing Magnum COE to launch in, or
alternately kick off the "Launch new Magnum COE" workflow in
horizon, then follow up with the Trove launch workflow. Trove then
would support being able to use containers, users could potentially
pack more containers onto their vm's than just Trove, and it still
would work with both Bare Metal and VM's the same way since Magnum
can launch on either.

Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Steve Baker

On 13/04/16 11:07, Joshua Harlow wrote:

Fox, Kevin M wrote:

I think my head just exploded. :)

That idea's similar to neutron sfc stuff, where you just say what 
needs to connect to what, and it figures out the plumbing.


Ideally, it would map somehow to heat & docker COE & neutron sfc to
produce a final set of deployment scripts and then just run it
through the meat grinder. :)


It would be awesome to use. It may be very difficult to implement.


+1 it will not be easy.

Although the complexity of it is probably no different than what a SQL
engine has to do to parse a SQL statement into an actionable plan; just
in this case it's an application deployment 'statement', and the
realization of that plan is of course where the 'meat' of the program
is.


It would be nice to connect with the neutron SFC work that exists/is
being worked on, and have a single project for this kind of stuff, but
maybe I am dreaming too much right now :-P




This sounds a lot like what the TOSCA spec[1] is aiming to achieve. I
could imagine heat-translator[2] gaining the ability to translate TOSCA
templates to either nova- or COE-specific heat templates, which are then
created as stacks.


[1] 
http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-YAML/v1.0/csprd01/TOSCA-Simple-Profile-YAML-v1.0-csprd01.html

[2] https://github.com/openstack/heat-translator



If you ignore the non-container use case, I think it might be fairly
easily mappable to all three COE's though.


Thanks,
Kevin


From: Joshua Harlow [harlo...@fastmail.com]
Sent: Tuesday, April 12, 2016 2:23 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] 
One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)


Fox, Kevin M wrote:
I think part of the problem is containers are mostly orthogonal to 
vms/bare metal. Containers are a package for a single service. 
Multiple can run on a single vm/bare metal host. Orchestration like 
Kubernetes comes in to turn a pool of vm's/bare metal into a system 
that can easily run multiple containers.




Is the orthogonal part a problem because we have made it so or is it
just how it really is?

Brainstorming starts here:

Imagine a descriptor language like the following (which I stole from
https://review.openstack.org/#/c/210549 and modified):

---
components:
- label: frontend
  count: 5
  image: ubuntu_vanilla
  requirements: high memory, low disk
  stateless: true
- label: database
  count: 3
  image: ubuntu_vanilla
  requirements: high memory, high disk
  stateless: false
- label: memcache
  count: 3
  image: debian-squeeze
  requirements: high memory, no disk
  stateless: true
- label: zookeeper
  count: 3
  image: debian-squeeze
  requirements: high memory, medium disk
  stateless: false
  backend: VM
networks:
- label: frontend_net
  flavor: "public network"
  associated_with:
  - frontend
- label: database_net
  flavor: high bandwidth
  associated_with:
  - database
- label: backend_net
  flavor: high bandwidth and low latency
  associated_with:
  - zookeeper
  - memcache
constraints:
- ref: container_only
  params:
  - frontend
- ref: no_colocated
  params:
  - database
  - frontend
- ref: spread
  params:
  - database
- ref: no_colocated
  params:
  - database
  - frontend
- ref: spread
  params:
  - memcache
- ref: spread
  params:
  - zookeeper
- ref: isolated_network
  params:
  - frontend_net
  - database_net
  - backend_net
...


Now nothing in the above is about containers, or baremetal, or vms
(although an 'advanced' constraint can be that a component must be on a
container, and must, say, be deployed via docker image XYZ...); instead
it's just about the constraints that a user has on their deployment and
the components associated with it. It can be left up to some consuming
project of that format to decide how to turn that desired description
into an actual description (i.e., a full expansion of that format into
an actual deployment plan), possibly, say, by optimizing for density
(packing as many things into containers as possible) or optimizing for
security (by using VMs) or optimizing for performance (by using bare metal).

So, rather than concern itself with supporting launching through a 
COE and through Nova, which are two totally different code paths, 
OpenStack advanced services like Trove could just use a Magnum COE 
and have a UI that asks which existing Magnum COE to launch in.

Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Joshua Harlow

Fox, Kevin M wrote:

I think my head just exploded. :)

That idea's similar to neutron sfc stuff, where you just say what needs to 
connect to what, and it figures out the plumbing.

Ideally, it would map somehow to heat & docker COE & neutron sfc to produce a 
final set of deployment scripts and then just run it through the meat grinder. :)

It would be awesome to use. It may be very difficult to implement.


+1 it will not be easy.

Although the complexity of it is probably no different than what a SQL 
engine has to do to parse a SQL statement into an actionable plan; just 
in this case it's an application deployment 'statement', and the 
realization of that plan is of course where the 'meat' of the program is.


It would be nice to connect with the neutron SFC work that exists/is 
being worked on, and have a single project for this kind of stuff, but maybe I 
am dreaming too much right now :-P




If you ignore the non-container use case, I think it might be fairly easily 
mappable to all three COE's though.

Thanks,
Kevin


From: Joshua Harlow [harlo...@fastmail.com]
Sent: Tuesday, April 12, 2016 2:23 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Fox, Kevin M wrote:

I think part of the problem is containers are mostly orthogonal to vms/bare 
metal. Containers are a package for a single service. Multiple can run on a 
single vm/bare metal host. Orchestration like Kubernetes comes in to turn a 
pool of vm's/bare metal into a system that can easily run multiple containers.



Is the orthogonal part a problem because we have made it so or is it
just how it really is?

Brainstorming starts here:

Imagine a descriptor language like the following (which I stole from
https://review.openstack.org/#/c/210549 and modified):

---
components:
- label: frontend
  count: 5
  image: ubuntu_vanilla
  requirements: high memory, low disk
  stateless: true
- label: database
  count: 3
  image: ubuntu_vanilla
  requirements: high memory, high disk
  stateless: false
- label: memcache
  count: 3
  image: debian-squeeze
  requirements: high memory, no disk
  stateless: true
- label: zookeeper
  count: 3
  image: debian-squeeze
  requirements: high memory, medium disk
  stateless: false
  backend: VM
networks:
- label: frontend_net
  flavor: "public network"
  associated_with:
  - frontend
- label: database_net
  flavor: high bandwidth
  associated_with:
  - database
- label: backend_net
  flavor: high bandwidth and low latency
  associated_with:
  - zookeeper
  - memcache
constraints:
- ref: container_only
  params:
  - frontend
- ref: no_colocated
  params:
  - database
  - frontend
- ref: spread
  params:
  - database
- ref: no_colocated
  params:
  - database
  - frontend
- ref: spread
  params:
  - memcache
- ref: spread
  params:
  - zookeeper
- ref: isolated_network
  params:
  - frontend_net
  - database_net
  - backend_net
...


Now nothing in the above is about containers, or baremetal, or vms
(although an 'advanced' constraint can be that a component must be on a
container, and must, say, be deployed via docker image XYZ...); instead
it's just about the constraints that a user has on their deployment and
the components associated with it. It can be left up to some consuming
project of that format to decide how to turn that desired description
into an actual description (i.e., a full expansion of that format into
an actual deployment plan), possibly, say, by optimizing for density
(packing as many things into containers as possible) or optimizing for
security (by using VMs) or optimizing for performance (by using bare metal).


So, rather than concern itself with supporting launching through a COE and through Nova, 
which are two totally different code paths, OpenStack advanced services like Trove could 
just use a Magnum COE and have a UI that asks which existing Magnum COE to launch in, or 
alternately kick off the "Launch new Magnum COE" workflow in horizon, then 
follow up with the Trove launch workflow. Trove then would support being able to use 
containers, users could potentially pack more containers onto their vm's than just Trove, 
and it still would work with both Bare Metal and VM's the same way since Magnum can 
launch on either. I'm afraid supporting both containers and non-container deployment 
with Trove will be a large effort with very little code sharing.

Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Fox, Kevin M
I think my head just exploded. :)

That idea's similar to neutron sfc stuff, where you just say what needs to 
connect to what, and it figures out the plumbing.

Ideally, it would map somehow to heat & docker COE & neutron sfc to produce a 
final set of deployment scripts and then just run it through the meat grinder. 
:)

It would be awesome to use. It may be very difficult to implement.

If you ignore the non-container use case, I think it might be fairly easily 
mappable to all three COE's though.

Thanks,
Kevin


From: Joshua Harlow [harlo...@fastmail.com]
Sent: Tuesday, April 12, 2016 2:23 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Fox, Kevin M wrote:
> I think part of the problem is containers are mostly orthogonal to vms/bare 
> metal. Containers are a package for a single service. Multiple can run on a 
> single vm/bare metal host. Orchestration like Kubernetes comes in to turn a 
> pool of vm's/bare metal into a system that can easily run multiple containers.
>

Is the orthogonal part a problem because we have made it so or is it
just how it really is?

Brainstorming starts here:

Imagine a descriptor language like the following (which I stole from
https://review.openstack.org/#/c/210549 and modified):

---
components:
- label: frontend
  count: 5
  image: ubuntu_vanilla
  requirements: high memory, low disk
  stateless: true
- label: database
  count: 3
  image: ubuntu_vanilla
  requirements: high memory, high disk
  stateless: false
- label: memcache
  count: 3
  image: debian-squeeze
  requirements: high memory, no disk
  stateless: true
- label: zookeeper
  count: 3
  image: debian-squeeze
  requirements: high memory, medium disk
  stateless: false
  backend: VM
networks:
- label: frontend_net
  flavor: "public network"
  associated_with:
  - frontend
- label: database_net
  flavor: high bandwidth
  associated_with:
  - database
- label: backend_net
  flavor: high bandwidth and low latency
  associated_with:
  - zookeeper
  - memcache
constraints:
- ref: container_only
  params:
  - frontend
- ref: no_colocated
  params:
  - database
  - frontend
- ref: spread
  params:
  - database
- ref: no_colocated
  params:
  - database
  - frontend
- ref: spread
  params:
  - memcache
- ref: spread
  params:
  - zookeeper
- ref: isolated_network
  params:
  - frontend_net
  - database_net
  - backend_net
...


Now nothing in the above is about containers, or baremetal, or vms
(although an 'advanced' constraint can be that a component must be on a
container, and must, say, be deployed via docker image XYZ...); instead
it's just about the constraints that a user has on their deployment and
the components associated with it. It can be left up to some consuming
project of that format to decide how to turn that desired description
into an actual description (i.e., a full expansion of that format into
an actual deployment plan), possibly, say, by optimizing for density
(packing as many things into containers as possible) or optimizing for
security (by using VMs) or optimizing for performance (by using bare metal).

> So, rather than concern itself with supporting launching through a COE and 
> through Nova, which are two totally different code paths, OpenStack advanced 
> services like Trove could just use a Magnum COE and have a UI that asks which 
> existing Magnum COE to launch in, or alternately kick off the "Launch new 
> Magnum COE" workflow in horizon, then follow up with the Trove launch 
> workflow. Trove then would support being able to use containers, users could 
> potentially pack more containers onto their vm's than just Trove, and it 
> still would work with both Bare Metal and VM's the same way since Magnum can 
> launch on either. I'm afraid supporting both containers and non-container 
> deployment with Trove will be a large effort with very little code sharing. 
> It may be easiest to have a flag version where non-container deployments are 
> upgraded to containers and then non-container support is dropped.
>

Sure, trove seems like it would be a consumer of whatever interprets that 
format, just like many other consumers could be (with the special case 
that trove creates such a format on behalf of some other consumer, aka 
the trove user).

Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Joshua Harlow

Fox, Kevin M wrote:

I think part of the problem is containers are mostly orthogonal to vms/bare 
metal. Containers are a package for a single service. Multiple can run on a 
single vm/bare metal host. Orchestration like Kubernetes comes in to turn a 
pool of vm's/bare metal into a system that can easily run multiple containers.



Is the orthogonal part a problem because we have made it so or is it 
just how it really is?


Brainstorming starts here:

Imagine a descriptor language like the following (which I stole from 
https://review.openstack.org/#/c/210549 and modified):

---
components:
- label: frontend
  count: 5
  image: ubuntu_vanilla
  requirements: high memory, low disk
  stateless: true
- label: database
  count: 3
  image: ubuntu_vanilla
  requirements: high memory, high disk
  stateless: false
- label: memcache
  count: 3
  image: debian-squeeze
  requirements: high memory, no disk
  stateless: true
- label: zookeeper
  count: 3
  image: debian-squeeze
  requirements: high memory, medium disk
  stateless: false
  backend: VM
networks:
- label: frontend_net
  flavor: "public network"
  associated_with:
  - frontend
- label: database_net
  flavor: high bandwidth
  associated_with:
  - database
- label: backend_net
  flavor: high bandwidth and low latency
  associated_with:
  - zookeeper
  - memcache
constraints:
- ref: container_only
  params:
  - frontend
- ref: no_colocated
  params:
  - database
  - frontend
- ref: spread
  params:
  - database
- ref: no_colocated
  params:
  - database
  - frontend
- ref: spread
  params:
  - memcache
- ref: spread
  params:
  - zookeeper
- ref: isolated_network
  params:
  - frontend_net
  - database_net
  - backend_net
...


Now nothing in the above is about containers, or baremetal, or vms
(although an 'advanced' constraint can be that a component must be on a
container, and must, say, be deployed via docker image XYZ...); instead
it's just about the constraints that a user has on their deployment and
the components associated with it. It can be left up to some consuming
project of that format to decide how to turn that desired description
into an actual description (i.e., a full expansion of that format into
an actual deployment plan), possibly, say, by optimizing for density
(packing as many things into containers as possible) or optimizing for
security (by using VMs) or optimizing for performance (by using bare metal).



So, rather than concern itself with supporting launching through a COE and through Nova, 
which are two totally different code paths, OpenStack advanced services like Trove could 
just use a Magnum COE and have a UI that asks which existing Magnum COE to launch in, or 
alternately kick off the "Launch new Magnum COE" workflow in horizon, then 
follow up with the Trove launch workflow. Trove then would support being able to use 
containers, users could potentially pack more containers onto their vm's than just Trove, 
and it still would work with both Bare Metal and VM's the same way since Magnum can 
launch on either. I'm afraid supporting both containers and non-container deployment with 
Trove will be a large effort with very little code sharing. It may be easiest to have a 
flag version where non-container deployments are upgraded to containers and then 
non-container support is dropped.



Sure, trove seems like it would be a consumer of whatever interprets that 
format, just like many other consumers could be (with the special case 
that trove creates such a format on behalf of some other consumer, aka 
the trove user).



As for the app-catalog use case, the app-catalog project 
(http://apps.openstack.org) is working on some of that.

Thanks,
Kevin
  
From: Joshua Harlow [harlo...@fastmail.com]
Sent: Tuesday, April 12, 2016 12:16 PM
To: Flavio Percoco; OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Flavio Percoco wrote:

On 11/04/16 18:05 +, Amrith Kumar wrote:

Adrian, thx for your detailed mail.



Yes, I was hopeful of a silver bullet and as we’ve discussed before (I
think it
was Vancouver), there’s likely no silver bullet in this area. After that
conversation, and some further experimentation, I found that even if
Trove had
access to a single Compute API, there were other significant
complications
further down the road, and I didn’t pursue the project further at the
time.


Adrian, Amrith,

I've spent enough time researching this area during the last month and my
conclusion is pretty much the above.

Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Fox, Kevin M
I think part of the problem is containers are mostly orthogonal to vms/bare 
metal. Containers are a package for a single service. Multiple can run on a 
single vm/bare metal host. Orchestration like Kubernetes comes in to turn a 
pool of vm's/bare metal into a system that can easily run multiple containers.

So, rather than concern itself with supporting launching through a COE and 
through Nova, which are two totally different code paths, OpenStack advanced 
services like Trove could just use a Magnum COE and have a UI that asks which 
existing Magnum COE to launch in, or alternately kick off the "Launch new 
Magnum COE" workflow in horizon, then follow up with the Trove launch workflow. 
Trove then would support being able to use containers, users could potentially 
pack more containers onto their vm's than just Trove, and it still would work 
with both Bare Metal and VM's the same way since Magnum can launch on either. 
I'm afraid supporting both containers and non-container deployment with Trove 
will be a large effort with very little code sharing. It may be easiest to have 
a flag version where non-container deployments are upgraded to containers and 
then non-container support is dropped.

As for the app-catalog use case, the app-catalog project 
(http://apps.openstack.org) is working on some of that.

Thanks,
Kevin
 
From: Joshua Harlow [harlo...@fastmail.com]
Sent: Tuesday, April 12, 2016 12:16 PM
To: Flavio Percoco; OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Flavio Percoco wrote:
> On 11/04/16 18:05 +, Amrith Kumar wrote:
>> Adrian, thx for your detailed mail.
>>
>>
>>
>> Yes, I was hopeful of a silver bullet and as we’ve discussed before (I
>> think it
>> was Vancouver), there’s likely no silver bullet in this area. After that
>> conversation, and some further experimentation, I found that even if
>> Trove had
>> access to a single Compute API, there were other significant
>> complications
>> further down the road, and I didn’t pursue the project further at the
>> time.
>>
>
> Adrian, Amrith,
>
> I've spent enough time researching this area during the last month
> and my
> conclusion is pretty much the above. There's no silver bullet in this
> area and
> I'd argue there shouldn't be one. Containers, bare metal and VMs differ
> in such
> a way (feature-wise) that it'd not be good, as far as deploying
> databases goes,
> for there to be one compute API. Containers allow for a different
> deployment
> architecture than VMs and so does bare metal.

Just some thoughts from me, but why focus on the
compute/container/baremetal API at all?

I'd almost like a way that just describes how my app should be
interconnected, what is required to get it going, and the features
and/or scheduling requirements for the different parts of that app.

To me it feels like this isn't a compute API or really a heat API but
something else. Maybe it's closer to the docker compose API/template
format or something like it.

Perhaps such a thing needs a new project. I'm not sure, but it does feel
like, as developers, we should be able to make such a thing that
still exposes the more advanced functionality of the underlying API so
that it can be used if really needed...

Maybe this is similar to an app-catalog, but that doesn't quite feel
like it's the right thing either so maybe somewhere in between...

IMHO it'd be nice to have a unified story around what this thing is, so
that we as a community can drive (as a single group) toward that; maybe
this is where the product working group can help and we as a developer
community can also try to unify behind...

P.S. name for project should be 'silver' related, ha.

-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Joshua Harlow

Flavio Percoco wrote:

On 11/04/16 18:05 +, Amrith Kumar wrote:

Adrian, thx for your detailed mail.



Yes, I was hopeful of a silver bullet and as we’ve discussed before (I
think it
was Vancouver), there’s likely no silver bullet in this area. After that
conversation, and some further experimentation, I found that even if
Trove had
access to a single Compute API, there were other significant
complications
further down the road, and I didn’t pursue the project further at the
time.



Adrian, Amrith,

I've spent enough time researching this area during the last month
and my
conclusion is pretty much the above. There's no silver bullet in this
area and
I'd argue there shouldn't be one. Containers, bare metal and VMs differ
in such
a way (feature-wise) that it'd not be good, as far as deploying
databases goes,
for there to be one compute API. Containers allow for a different
deployment
architecture than VMs and so does bare metal.


Just some thoughts from me, but why focus on the 
compute/container/baremetal API at all?


I'd almost like a way that just describes how my app should be 
interconnected, what is required to get it going, and the features 
and/or scheduling requirements for the different parts of that app.


To me it feels like this isn't a compute API or really a heat API but 
something else. Maybe it's closer to the docker compose API/template 
format or something like it.


Perhaps such a thing needs a new project. I'm not sure, but it does feel 
like, as developers, we should be able to make such a thing that 
still exposes the more advanced functionality of the underlying API so 
that it can be used if really needed...


Maybe this is similar to an app-catalog, but that doesn't quite feel 
like it's the right thing either so maybe somewhere in between...


IMHO it'd be nice to have a unified story around what this thing is, so 
that we as a community can drive (as a single group) toward that; maybe 
this is where the product working group can help and we as a developer 
community can also try to unify behind...


P.S. name for project should be 'silver' related, ha.

-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Fox, Kevin M
Flavio, and whomever else is available to attend,

We have a summit session for instance users listed here that could be part of 
the solution:
https://www.openstack.org/summit/austin-2016/summit-schedule/events/9485

Please attend if you can.
--

+1 for a basic common abstraction. The app catalog could really use it too. 
We'd like to be able to host container orchestration templates and hand them 
over to Magnum for launching.

Thanks,
Kevin

From: Flavio Percoco [fla...@redhat.com]
Sent: Tuesday, April 12, 2016 5:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

On 11/04/16 16:53 +, Adrian Otto wrote:
>Amrith,
>
>I respect your point of view, and agree that the idea of a common compute API
>is attractive… until you think a bit deeper about what that would mean. We
>seriously considered a “global” compute API at the time we were first
>contemplating Magnum. However, what we came to learn through the journey of
>understanding the details of how such a thing would be implemented is that such
>an API would either be (1) the lowest common denominator (LCD) of all compute
>types, or (2) an exceedingly complex interface.
>
>You expressed a sentiment below that trying to offer choices for VM, Bare Metal
>(BM), and Containers for Trove instances “adds considerable complexity”.
>Roughly the same complexity would accompany the use of a comprehensive compute
>API. I suppose you were imagining an LCD approach. If that’s what you want,
>just use the existing Nova API, and load different compute drivers on different
>host aggregates. A single Nova client can produce VM, BM (Ironic), and
>Container (libvirt-lxc) instances all with a common API (Nova) if it’s
>configured in this way. That’s what we do. Flavors determine which compute type
>you get.
>
>If what you meant is that you could tap into the power of all the unique
>characteristics of each of the various compute types (through some modular
>extensibility framework) you’ll likely end up with complexity in Trove that is
>comparable to integrating with the native upstream APIs, along with the
>disadvantage of waiting for OpenStack to continually catch up to the pace of
>change of the various upstream systems on which it depends. This is a recipe
>for disappointment.
>
>We concluded that wrapping native APIs is a mistake, particularly when they are
>sufficiently different than what the Nova API already offers. Containers APIs
>have limited similarities, so when you try to make a universal interface to all
>of them, you end up with a really complicated mess. It would be even worse if
>we tried to accommodate all the unique aspects of BM and VM as well. Magnum’s
>approach is to offer the upstream native API’s for the different container
>orchestration engines (COE), and compose Bays for them to run on that are built
>from the compute types that OpenStack supports. We do this by using different
>Heat orchestration templates (and conditional templates) to arrange a COE on
>the compute type of your choice. With that said, there are still gaps where not
>all storage or network drivers work with Ironic, and there are non-trivial
>security hurdles to clear to safely use Bays composed of libvirt-lxc instances
>in a multi-tenant environment.
>
>My suggestion to get what you want for Trove is to see if the cloud has Magnum,
>and if it does, create a bay with the flavor type specified for whatever
>compute type you want, and then use the native API for the COE you selected for
>that bay. Start your instance on the COE, just like you use Nova today. This
>way, you have low complexity in Trove, and you can scale both the number of
>instances of your data nodes (containers), and the infrastructure on which they
>run (Nova instances).


I've been researching this area and I've reached pretty much the same
conclusion. I've had moments of wondering whether creating bays is something
Trove should do, but I now think it should.

The need to handle the native API is the part I find a bit painful, as that
means more code needs to happen in Trove for us to provide these provisioning
facilities. I wonder if a common *library* would help here, at least to handle
those "simple" cases. Anyway, I look forward to chatting with you all about 
this.
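
A rough sketch of the smallest version of that library idea: hide bay
lookup/creation behind one call, so Trove only deals with the COE-native
step afterwards. The bays.list()/bays.create() calls mirror the Mitaka-era
python-magnumclient, but treat the exact signatures (and the client
construction, elided here) as assumptions:

    def ensure_bay(magnum, name, baymodel_id, node_count=1):
        """Return the named bay, creating it if absent.

        `magnum` is a python-magnumclient handle; how it is built
        (session, endpoint) is left out of this sketch.
        """
        for bay in magnum.bays.list():
            if bay.name == name:
                return bay
        return magnum.bays.create(name=name, baymodel_id=baymodel_id,
                                  node_count=node_count)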

It'd be great if you (and other magnum folks) could join this session:

https://etherpad.openstack.org/p/trove-newton-summit-container

Thanks for chiming in, Adrian.
Flavio

>Regards,
>
>Adrian
>
>
>
>
>On Apr 11, 2016, at 8:47 AM, Amrith Kumar <amr...@tesora.com> wrote:
>
>Monty, Dims,
>

Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Flavio Percoco

On 11/04/16 18:05 +, Amrith Kumar wrote:

Adrian, thx for your detailed mail.



Yes, I was hopeful of a silver bullet and as we’ve discussed before (I think it
was Vancouver), there’s likely no silver bullet in this area. After that
conversation, and some further experimentation, I found that even if Trove had
access to a single Compute API, there were other significant complications
further down the road, and I didn’t pursue the project further at the time.



Adrian, Amrith,

I've spent enough time researching this area during the last month and my
conclusion is pretty much the above. There's no silver bullet in this area and
I'd argue there shouldn't be one. Containers, bare metal and VMs differ in such
a way (feature-wise) that it'd not be good, as far as deploying databases goes,
for there to be one compute API. Containers allow for a different deployment
architecture than VMs and so does bare metal.



We will be discussing Trove and Containers in Austin [1] and I’ll try and close
the loop with you on this while we’re in town. I still would like to come up
with some way in which we can offer users the option of provisioning databases
as containers.


As the person leading this session, I'm also looking forward to providing such
provisioning facilities to Trove users. Let's do this.

Cheers,
Flavio



Thanks,



-amrith



[1] https://etherpad.openstack.org/p/trove-newton-summit-container



From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: Monday, April 11, 2016 12:54 PM
To: OpenStack Development Mailing List (not for usage questions)
<openstack-dev@lists.openstack.org>
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)



Amrith,



I respect your point of view, and agree that the idea of a common compute API
is attractive… until you think a bit deeper about what that would mean. We
seriously considered a “global” compute API at the time we were first
contemplating Magnum. However, what we came to learn through the journey of
understanding the details of how such a thing would be implemented is that such
an API would either be (1) the lowest common denominator (LCD) of all compute
types, or (2) an exceedingly complex interface. 




You expressed a sentiment below that trying to offer choices for VM, Bare Metal
(BM), and Containers for Trove instances “adds considerable complexity”.
Roughly the same complexity would accompany the use of a comprehensive compute
API. I suppose you were imagining an LCD approach. If that’s what you want,
just use the existing Nova API, and load different compute drivers on different
host aggregates. A single Nova client can produce VM, BM (Ironic), and
Container (libvirt-lxc) instances all with a common API (Nova) if it’s
configured in this way. That’s what we do. Flavors determine which compute type
you get.
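
For readers less familiar with that arrangement, here is a sketch of it
with python-novaclient. The aggregate name, flavor sizes, and metadata key
are illustrative, and `sess` stands in for an already-built keystone
session:

    from novaclient import client

    nova = client.Client('2', session=sess)

    # One host aggregate per compute type; the scheduler's
    # AggregateInstanceExtraSpecsFilter matches flavor extra specs against
    # aggregate metadata, so the flavor alone selects the compute type.
    agg = nova.aggregates.create('ironic-hosts', None)
    nova.aggregates.set_metadata(agg.id, {'compute_type': 'baremetal'})

    bm = nova.flavors.create('bm.large', ram=65536, vcpus=16, disk=500)
    bm.set_keys({'aggregate_instance_extra_specs:compute_type': 'baremetal'})

    # The same boot call then yields a VM, BM, or libvirt-lxc instance
    # depending only on which flavor is passed:
    nova.servers.create('db-1', image_id, bm.id)  # image_id: any bootable image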



If what you meant is that you could tap into the power of all the unique
characteristics of each of the various compute types (through some modular
extensibility framework) you’ll likely end up with complexity in Trove that is
comparable to integrating with the native upstream APIs, along with the
disadvantage of waiting for OpenStack to continually catch up to the pace of
change of the various upstream systems on which it depends. This is a recipe
for disappointment.



We concluded that wrapping native APIs is a mistake, particularly when they are
sufficiently different than what the Nova API already offers. Containers APIs
have limited similarities, so when you try to make a universal interface to all
of them, you end up with a really complicated mess. It would be even worse if
we tried to accommodate all the unique aspects of BM and VM as well. Magnum’s
approach is to offer the upstream native API’s for the different container
orchestration engines (COE), and compose Bays for them to run on that are built
from the compute types that OpenStack supports. We do this by using different
Heat orchestration templates (and conditional templates) to arrange a COE on
the compute type of your choice. With that said, there are still gaps where not
all storage or network drivers work with Ironic, and there are non-trivial
security hurdles to clear to safely use Bays composed of libvirt-lxc instances
in a multi-tenant environment.



My suggestion to get what you want for Trove is to see if the cloud has Magnum,
and if it does, create a bay with the flavor type specified for whatever
compute type you want, and then use the native API for the COE you selected for
that bay. Start your instance on the COE, just like you use Nova today. This
way, you have low complexity in Trove, and you can scale both the number of
instances of your data nodes (containers), and the infrastructure on which they
run (Nova instances).



Regards,



Adrian







   On Apr 11, 2016, at 8:47 AM, Amrith Kumar <amr...@tesora.com> wrote:



  

Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Flavio Percoco
A prototype of Trove leveraging Ironic, VM's, and nova-docker to
   provision databases is something I worked on a while ago, and have not
   revisited it since then (once the direction appeared to be Magnum for
   containers).

   With all that said, I don't want to downplay the value in a container
   specific API. I'm merely observing that from the perspective of a consumer
   of computing services, a common abstraction is incredibly valuable. 


   Thanks,

   -amrith 



   -Original Message-
   From: Monty Taylor [mailto:mord...@inaugust.com]
   Sent: Monday, April 11, 2016 11:31 AM
   To: Allison Randal <alli...@lohutok.net>; Davanum Srinivas
   <dava...@gmail.com>; foundat...@lists.openstack.org
   Cc: OpenStack Development Mailing List (not for usage questions)
   <openstack-dev@lists.openstack.org>
   Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all]
   One
   Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

   On 04/11/2016 09:43 AM, Allison Randal wrote:

   On Wed, Apr 6, 2016 at 1:11 PM, Davanum Srinivas <
   dava...@gmail.com>

   wrote:

   Reading unofficial notes [1], I found one topic very
   interesting:
   One Platform – How do we truly support containers and bare
   metal
   under a common API with VMs? (Ironic, Nova, adjacent
   communities e.g.
   Kubernetes, Apache Mesos etc)

   Anyone present at the meeting, please expand on those few
   notes on
   etherpad? And how if any this feedback is getting back to
   the
   projects?


   It was really two separate conversations that got conflated in the
   summary. One conversation was just being supportive of bare metal,
   VMs, and containers within the OpenStack umbrella. The other
   conversation started with Monty talking about his work on shade,
   and
   how it wouldn't exist if more APIs were focused on the way users
   consume the APIs, and less an expression of the implementation
   details

   of each project.

   OpenStackClient was mentioned as a unified CLI for OpenStack
   focused
   more on the way users consume the CLI. (OpenStackSDK wasn't
   mentioned,
   but falls in the same general category of work.)

   i.e. There wasn't anything new in the conversation, it was more a
   matter of the developers/TC members on the board sharing
   information
   about work that's already happening.


    I agree with that - but would like to clarify the 'bare metal, VMs and
    containers' part a bit. (And in fact, I was concerned in the meeting
    that the messaging around this would be confusing, because 'supporting
    bare metal' and 'supporting containers' mean two different things but
    we use one phrase to talk about both.)

    It's abundantly clear at the strategic level that having OpenStack be
    able to provide both VMs and Bare Metal as two different sorts of
    resources (ostensibly but not prescriptively via nova) is one of our
    advantages. We wanted to underscore how important it is to be able to
    do that, so that it's really clear any time the "but cloud should just
    be VMs" sentiment arises.

    The way we discussed "supporting containers" was quite different and
    was not about nova providing containers. Rather, it was about reaching
    out to our friends in other communities and working with them on
    making OpenStack the best place to run things like kubernetes or
    docker swarm. Those are systems that ultimately need to run somewhere,
    and it seems that good integration (like kuryr with libnetwork) can
    provide a really strong story. I think pretty much everyone agrees
    that there is not much value to us or the world for us to compete
    with kubernetes or docker.

    So, we do want to be supportive of bare metal and containers - but the
    specific _WAY_ we want to be supportive of those things is different
    for each one.

    Monty


   
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-11 Thread Hongbin Lu


From: Fox, Kevin M [mailto:kevin@pnnl.gov]
Sent: April-11-16 2:52 PM
To: OpenStack Development Mailing List (not for usage questions); Adrian Otto
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Yeah, I think there are two places where it may make sense.

1. Ironic's nova plugin is a lowest common denominator for treating a physical 
host like a vm. Ironic's api is much richer, but sometimes all you need is 
the lowest common denominator and don't want to rewrite a bunch of code. In 
this case, it may make sense to have a nova plugin that talks to magnum to 
launch a heavyweight container to make the use case easy.
If I understand correctly, you were proposing a Magnum virt-driver for Nova, 
which is used to provision containers in Magnum bays? Magnum has different bay 
types (i.e. kubernetes, swarm, mesos), so the proposed driver needs to 
understand the APIs of the different container orchestration engines (COEs). I 
think it will work only if Magnum provides a unified Container API, so that 
the introduced Nova virt-driver can call Magnum's unified API to launch 
containers.
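
To make the shape of that concrete, a skeleton of such a driver might look
like the sketch below. The ComputeDriver hooks are Nova's (Mitaka-era
signatures); the `containers.create` call it delegates to is exactly the
unified Magnum API that does not yet exist, which is the point:

    from nova.virt import driver

    class MagnumDriver(driver.ComputeDriver):
        """Hypothetical: Nova 'instances' become containers on a Magnum bay."""

        def __init__(self, virtapi):
            super(MagnumDriver, self).__init__(virtapi)
            self.magnum = None     # a Magnum client handle (setup elided)
            self.bay_uuid = None   # the bay this 'hypervisor' maps onto

        def spawn(self, context, instance, image_meta, injected_files,
                  admin_password, network_info=None, block_device_info=None):
            # One call regardless of whether the bay runs kubernetes,
            # swarm or mesos -- the per-COE fan-out would live behind it.
            self.magnum.containers.create(name=instance.uuid,
                                          image=image_meta.name,
                                          bay_uuid=self.bay_uuid)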


2. Basic abstraction of Orchestration systems. Most (all?) docker orchestration 
systems work with a yaml file. What's in it differs, but shipping it from point 
A to point B using an authenticated channel can probably be nicely abstracted. 
I think this would be a big usability gain as well. Things like the 
applications catalog could much more easily hook into it then. The catalog 
would provide the yaml, and a tag to know which orchestrator type it is, and 
just pass that info along to magnum.
I am open to discussing that, but inventing a standard DSL for all COEs is a 
significant amount of work. We need to evaluate the benefits and costs before 
proceeding in this direction. In comparison, the proposal of unifying Container 
APIs [1] looks easier to implement and maintain.
[1] https://blueprints.launchpad.net/magnum/+spec/unified-containers
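
In code terms, that unified-API option reduces to agreeing on a small
per-COE adapter interface, something like the sketch below (illustrative
only, not the blueprint's actual design):

    import abc

    class COEAdapter(abc.ABC):
        """Lowest-common-denominator container operations that each COE
        would map to its native concepts (replication controllers, swarm
        services, marathon apps, ...)."""

        @abc.abstractmethod
        def create(self, name, image, count=1):
            """Run `count` copies of `image` under `name`."""

        @abc.abstractmethod
        def delete(self, name):
            """Tear down whatever create() made."""

        @abc.abstractmethod
        def status(self, name):
            """Return a normalized state, e.g. BUILDING/RUNNING/ERROR."""

Everything outside this LCD surface would still require the COE-native
APIs, which is the complexity/benefit trade-off being weighed here.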


Thanks,
Kevin


From: Hongbin Lu [hongbin...@huawei.com]
Sent: Monday, April 11, 2016 11:10 AM
To: Adrian Otto; OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org<mailto:foundat...@lists.openstack.org>
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)
Sorry, I disagree.

The Magnum team doesn’t have consensus to reject the idea of unifying APIs from 
different container technologies. In contrast, the idea of unified Container APIs 
has been constantly proposed by different people in the past. I will try to 
allocate a session at the design summit to discuss it as a team.

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: April-11-16 12:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org<mailto:foundat...@lists.openstack.org>
Subject: Re: [OpenStack Foundation] [openstack-dev] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Amrith,

I respect your point of view, and agree that the idea of a common compute API 
is attractive… until you think a bit deeper about what that would mean. We 
seriously considered a “global” compute API at the time we were first 
contemplating Magnum. However, what we came to learn through the journey of 
understanding the details of how such a thing would be implemented is that such 
an API would either be (1) the lowest common denominator (LCD) of all compute 
types, or (2) an exceedingly complex interface.

You expressed a sentiment below that trying to offer choices for VM, Bare Metal 
(BM), and Containers for Trove instances “adds considerable complexity”. 
Roughly the same complexity would accompany the use of a comprehensive compute 
API. I suppose you were imagining an LCD approach. If that’s what you want, 
just use the existing Nova API, and load different compute drivers on different 
host aggregates. A single Nova client can produce VM, BM (Ironic), and 
Container (libvirt-lxc) instances all with a common API (Nova) if it’s 
configured in this way. That’s what we do. Flavors determine which compute type 
you get.

If what you meant is that you could tap into the power of all the unique 
characteristics of each of the various compute types (through some modular 
extensibility framework) you’ll likely end up with complexity in Trove that is 
comparable to integrating with the native upstream APIs, along with the 
disadvantage of waiting for OpenStack to continually catch up to the pace of 
change of the various upstream systems on which it depends. This is a recipe 
for disappointment.

We concluded that wrapping native APIs is a mistake, particularly when they are 
sufficiently different than what the Nova API already offers. Containers APIs 
have limited similarities, so when you try to make a universal interface to all 
of them, you end up with a really complicated mess.

Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-11 Thread Adrian Otto
That’s not what I was talking about here. I’m addressing the interest in a 
common compute API for the various types of compute (VM, BM, Container). Having 
a “containers” API for multiple COE’s is a different subject.

Adrian

On Apr 11, 2016, at 11:10 AM, Hongbin Lu 
<hongbin...@huawei.com<mailto:hongbin...@huawei.com>> wrote:

Sorry, I disagree.

The Magnum team doesn’t have consensus to reject the idea of unifying APIs from 
different container technologies. In contrast, the idea of unified Container APIs 
has been constantly proposed by different people in the past. I will try to 
allocate a session at the design summit to discuss it as a team.

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: April-11-16 12:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org<mailto:foundat...@lists.openstack.org>
Subject: Re: [OpenStack Foundation] [openstack-dev] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Amrith,

I respect your point of view, and agree that the idea of a common compute API 
is attractive… until you think a bit deeper about what that would mean. We 
seriously considered a “global” compute API at the time we were first 
contemplating Magnum. However, what we came to learn through the journey of 
understanding the details of how such a thing would be implemented is that such 
an API would either be (1) the lowest common denominator (LCD) of all compute 
types, or (2) an exceedingly complex interface.

You expressed a sentiment below that trying to offer choices for VM, Bare Metal 
(BM), and Containers for Trove instances “adds considerable complexity”. 
Roughly the same complexity would accompany the use of a comprehensive compute 
API. I suppose you were imagining an LCD approach. If that’s what you want, 
just use the existing Nova API, and load different compute drivers on different 
host aggregates. A single Nova client can produce VM, BM (Ironic), and 
Container (libvirt-lxc) instances all with a common API (Nova) if it’s 
configured in this way. That’s what we do. Flavors determine which compute type 
you get.

If what you meant is that you could tap into the power of all the unique 
characteristics of each of the various compute types (through some modular 
extensibility framework) you’ll likely end up with complexity in Trove that is 
comparable to integrating with the native upstream APIs, along with the 
disadvantage of waiting for OpenStack to continually catch up to the pace of 
change of the various upstream systems on which it depends. This is a recipe 
for disappointment.

We concluded that wrapping native APIs is a mistake, particularly when they are 
sufficiently different than what the Nova API already offers. Containers APIs 
have limited similarities, so when you try to make a universal interface to all 
of them, you end up with a really complicated mess. It would be even worse if 
we tried to accommodate all the unique aspects of BM and VM as well. Magnum’s 
approach is to offer the upstream native API’s for the different container 
orchestration engines (COE), and compose Bays for them to run on that are built 
from the compute types that OpenStack supports. We do this by using different 
Heat orchestration templates (and conditional templates) to arrange a COE on 
the compute type of your choice. With that said, there are still gaps where not 
all storage or network drivers work with Ironic, and there are non-trivial 
security hurdles to clear to safely use Bays composed of libvirt-lxc instances 
in a multi-tenant environment.

My suggestion to get what you want for Trove is to see if the cloud has Magnum, 
and if it does, create a bay with the flavor type specified for whatever 
compute type you want, and then use the native API for the COE you selected for 
that bay. Start your instance on the COE, just like you use Nova today. This 
way, you have low complexity in Trove, and you can scale both the number of 
instances of your data nodes (containers), and the infrastructure on which they 
run (Nova instances).

Regards,

Adrian



On Apr 11, 2016, at 8:47 AM, Amrith Kumar 
<amr...@tesora.com<mailto:amr...@tesora.com>> wrote:

Monty, Dims,

I read the notes and was similarly intrigued about the idea. In particular, 
from the perspective of projects like Trove, having a common Compute API is 
very valuable. It would allow the projects to have a single view of 
provisioning compute, as we can today with Nova and get the benefit of bare 
metal through Ironic, VM's through Nova VM's, and containers through 
nova-docker.

With this in place, a project like Trove can offer database-as-a-service on a 
spectrum of compute infrastructures as any end-user would expect. Databases 
don't always make sense in VM's, and while containers are great for quick and 
dirty prototyping, and VM's are great for much more, there are databases

Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-11 Thread Allison Randal
On 04/11/2016 02:51 PM, Fox, Kevin M wrote:
> Yeah, I think there are two places where it may make sense.
> 
> 1. Ironic's nova plugin is a lowest common denominator for treating a
> physical host like a vm. Ironic's api is much richer, but sometimes
> all you need is the lowest common denominator and don't want to rewrite
> a bunch of code. In this case, it may make sense to have a nova plugin
> that talks to magnum to launch a heavy weight container to make the use
> case easy.
> 
> 2. Basic abstraction of orchestration systems. Most (all?) Docker
> orchestration systems work with a YAML file. What's in it differs, but
> shipping it from point A to point B over an authenticated channel can
> probably be nicely abstracted. I think this would be a big usability
> gain as well. Things like the applications catalog could much more
> easily hook into it then. The catalog would provide the YAML, and a tag
> to know which orchestrator type it is, and just pass that info along to
> Magnum.

The typical conundrum here is making "the easy things easy, and the hard
things possible". It doesn't have to be a choice between a) providing a
rich API with access to all the features of each individual compute
paradigm, and b) providing a simple API that allows users to request a
compute resource of any type that's available in the public/private
cloud they're interacting with. OpenStack can have both.

The simple lowest common denominator interface would be very limited
(both by necessity and by design), but easy to understand and get
started on, making some smart assumptions on common usage patterns. The
richer APIs are there for users who need more power and flexibility, and
are ready to go beyond the easy on-ramp.

Again, nothing new here, it seems to be the direction we're already
heading. I'm just articulating why.

Allison



Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-11 Thread Fox, Kevin M
Yeah, I think there are two places where it may make sense.

1. Ironic's Nova plugin is a lowest common denominator for treating a physical 
host like a VM. Ironic's API is much richer, but sometimes all you need is the 
lowest common denominator and don't want to rewrite a bunch of code. In this 
case, it may make sense to have a Nova plugin that talks to Magnum to launch a 
heavyweight container to make the use case easy.
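
A hypothetical skeleton of such a plugin, just to make the shape concrete (no 
such driver exists; the signature follows nova.virt.driver.ComputeDriver of 
that era, and the Magnum call is left as a stub):

    # Pure sketch of a Nova virt driver delegating to Magnum.
    from nova.virt import driver

    class MagnumContainerDriver(driver.ComputeDriver):
        def spawn(self, context, instance, image_meta, injected_files,
                  admin_password, network_info=None, block_device_info=None):
            # Would ask Magnum for a heavyweight container sized to
            # instance.flavor, then report it back as a Nova "server".
            raise NotImplementedError('sketch only')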

2. Basic abstraction of orchestration systems. Most (all?) Docker orchestration 
systems work with a YAML file. What's in it differs, but shipping it from point 
A to point B over an authenticated channel can probably be nicely abstracted. I 
think this would be a big usability gain as well. Things like the applications 
catalog could much more easily hook into it then. The catalog would provide the 
YAML, and a tag to know which orchestrator type it is, and just pass that info 
along to Magnum.
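
A minimal sketch of that shim (the deploy paths below are illustrative only; 
each orchestrator's real deploy API differs):

    # Hypothetical: the catalog supplies (tag, yaml); the shim just routes
    # the opaque template to the right COE endpoint.
    import requests

    DEPLOY_PATHS = {
        'kubernetes': '/api/v1/namespaces/default/pods',  # illustrative
        'swarm': '/v1.24/services/create',                # illustrative
        'mesos': '/v2/apps',                              # illustrative
    }

    def deploy_template(coe_tag, template_yaml, api_address):
        if coe_tag not in DEPLOY_PATHS:
            raise ValueError('unknown orchestrator tag: %s' % coe_tag)
        return requests.post(api_address + DEPLOY_PATHS[coe_tag],
                             data=template_yaml,
                             headers={'Content-Type': 'application/yaml'})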

Thanks,
Kevin



From: Hongbin Lu [hongbin...@huawei.com]
Sent: Monday, April 11, 2016 11:10 AM
To: Adrian Otto; OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Sorry, I disagree.

The Magnum team doesn’t have consensus to reject the idea of unifying APIs from 
different container technologies. On the contrary, the idea of unified container 
APIs has been proposed repeatedly by different people in the past. I will try to 
allocate a session at the design summit to discuss it as a team.

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: April-11-16 12:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [OpenStack Foundation] [openstack-dev] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Amrith,

I respect your point of view, and agree that the idea of a common compute API 
is attractive… until you think a bit deeper about what that would mean. We 
seriously considered a “global” compute API at the time we were first 
contemplating Magnum. However, what we came to learn through the journey of 
understanding the details of how such a thing would be implemented was that 
such an API would be either (1) the lowest common denominator (LCD) of all 
compute types, or (2) an exceedingly complex interface.

You expressed a sentiment below that trying to offer choices for VM, Bare Metal 
(BM), and Containers for Trove instances “adds considerable complexity”. 
Roughly the same complexity would accompany the use of a comprehensive compute 
API. I suppose you were imagining an LCD approach. If that’s what you want, 
just use the existing Nova API, and load different compute drivers on different 
host aggregates. A single Nova client can produce VM, BM (Ironic), and 
Container (libvirt-lxc) instances all with a common API (Nova) if it’s 
configured in this way. That’s what we do. Flavors determine which compute type 
you get.
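
A rough sketch of that pattern (flavor and image names are hypothetical; the 
operator maps flavors to host aggregates out of band):

    # One Nova API, three compute types: which one you get is purely a
    # function of the flavor, given suitably configured host aggregates.
    from novaclient import client as nova_client

    nova = nova_client.Client('2', session=sess)  # 'sess' authenticated earlier
    image = nova.glance.find_image('fedora')      # hypothetical image name

    for flavor_name in ('m1.vm', 'bm.ironic', 'ct.lxc'):  # hypothetical flavors
        nova.servers.create('demo-' + flavor_name.replace('.', '-'),
                            image,
                            nova.flavors.find(name=flavor_name))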

If what you meant is that you could tap into the power of all the unique 
characteristics of each of the various compute types (through some modular 
extensibility framework) you’ll likely end up with complexity in Trove that is 
comparable to integrating with the native upstream APIs, along with the 
disadvantage of waiting for OpenStack to continually catch up to the pace of 
change of the various upstream systems on which it depends. This is a recipe 
for disappointment.

We concluded that wrapping native APIs is a mistake, particularly when they are 
sufficiently different from what the Nova API already offers. Container APIs 
have limited similarities, so when you try to make a universal interface to all 
of them, you end up with a really complicated mess. It would be even worse if 
we tried to accommodate all the unique aspects of BM and VM as well. Magnum’s 
approach is to offer the upstream native APIs for the different container 
orchestration engines (COE), and compose Bays for them to run on that are built 
from the compute types that OpenStack supports. We do this by using different 
Heat orchestration templates (and conditional templates) to arrange a COE on 
the compute type of your choice. With that said, there are still gaps where not 
all storage or network drivers work with Ironic, and there are non-trivial 
security hurdles to clear to safely use Bays composed of libvirt-lxc instances 
in a multi-tenant environment.

My suggestion to get what you want for Trove is to see if the cloud has Magnum, 
and if it does, create a bay with the flavor type specified for whatever 
compute type you want, and then use the native API for the COE you selected for 
that bay. Start your instance on the COE, just like you use Nova today. This 
way, you have low complexity in Trove, and you can scale both the number of 
instances of your data nodes (containers), and the infrastructure on which they 
run (Nova instances).

Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-11 Thread Tim Bell

As we’ve deployed more OpenStack components in production, one of the things we 
have really appreciated is the set of common areas:

- Single pane of glass for Horizon
- Single accounting infrastructure
- Single resource management, quota and admin roles
- Single storage pools with Cinder
- (not quite there yet, but) a common CLI

Building on this, our workflows have been simplified:

- Lifecycle management (cleaning up when users leave)
- Onboarding (registering for access to the resources and mapping to the 
appropriate projects)
- Capacity planning (shifting resources, e.g. containers becoming popular and 
needing more capacity)

Getting consistent APIs and CLIs is really needed, though, since the “one 
platform” message is not so easy to explain given historical decisions 
such as project vs. tenant.

As Subbu has said, the cloud software is one part, but there are so many others…

Tim



On 11/04/16 18:08, "Fox, Kevin M" <kevin@pnnl.gov> wrote:

>The more I've used containers in production, the more I've come to the 
>conclusion they are much different beasts than Nova instances. Nova's 
>abstraction lets physical hardware and VMs share one common API, and it makes 
>a lot of sense to unify them.
>
>Oh. To be explicit, I'm talking about Docker-style lightweight containers, not 
>heavyweight containers like LXC ones. The heavyweight ones do work well with 
>Nova. For the rest of the conversation, container = lightweight container.
>
>Trove can make use of containers provided there is a standard API in OpenStack 
>for provisioning them. Right now, Magnum provides a way to get 
>Kubernetes-orchestrated clusters, for example, but doesn't have good 
>integration to hook it into Keystone so that Trusts can be used with it on the 
>user's behalf for advanced services like Trove. So some pieces are missing. 
>Heat should have a way to offer Kubernetes YAML resources too.
>
>I think the recent request to rescope Kuryr to include non-network features is 
>a good step in solving some of the issues.
>
>Unfortunately, it will probably take some time to get Magnum to the point 
>where it can be used by other OpenStack advanced services. Maybe these sorts 
>of issues should be written down and discussed at the upcoming summit between 
>the Magnum and Kuryr teams?
>
>Thanks,
>Kevin
>
>
>
>From: Amrith Kumar [amr...@tesora.com]
>Sent: Monday, April 11, 2016 8:47 AM
>To: OpenStack Development Mailing List (not for usage questions); Allison 
>Randal; Davanum Srinivas; foundat...@lists.openstack.org
>Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
>Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)
>
>Monty, Dims,
>
>I read the notes and was similarly intrigued by the idea. In particular, 
>from the perspective of projects like Trove, having a common Compute API is 
>very valuable. It would allow the projects to have a single view of 
>provisioning compute, as we can today with Nova, and get the benefit of bare 
>metal through Ironic, VMs through Nova, and containers through 
>nova-docker.
>
>With this in place, a project like Trove can offer database-as-a-service on a 
>spectrum of compute infrastructures, as any end-user would expect. Databases 
>don't always make sense in VMs, and while containers are great for quick and 
>dirty prototyping, and VMs are great for much more, there are databases that 
>will, in production, only be meaningful on bare metal.
>
>Therefore, if there is a move towards offering a common API for VMs, 
>bare metal, and containers, that would be huge.
>
>Without such a mechanism, consuming containers in Trove adds considerable 
>complexity and leads to a very sub-optimal architecture (IMHO). FWIW, a 
>working prototype of Trove leveraging Ironic, VMs, and nova-docker to 
>provision databases is something I worked on a while ago, and I have not 
>revisited it since then (once the direction appeared to be Magnum for 
>containers).
>
>With all that said, I don't want to downplay the value in a container-specific 
>API. I'm merely observing that from the perspective of a consumer of computing 
>services, a common abstraction is incredibly valuable.
>
>Thanks,
>
>-amrith
>
>> -Original Message-
>> From: Monty Taylor [mailto:mord...@inaugust.com]
>> Sent: Monday, April 11, 2016 11:31 AM
>> To: Allison Randal <alli...@lohutok.net>; Davanum Srinivas
>> <dava...@gmail.com>; foundat...@lists.openstack.org
>> Cc: OpenStack Development Mailing List (not for usage questions)
>> <openstack-dev@lists.openstack.org>
>> Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One
>> Platform – Containers/Bare Metal? (Re:

Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-11 Thread Hongbin Lu
Sorry, I disagree.

The Magnum team doesn’t have consensus to reject the idea of unifying APIs from 
different container technologies. On the contrary, the idea of unified container 
APIs has been proposed repeatedly by different people in the past. I will try to 
allocate a session at the design summit to discuss it as a team.

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: April-11-16 12:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [OpenStack Foundation] [openstack-dev] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Amrith,

I respect your point of view, and agree that the idea of a common compute API 
is attractive… until you think a bit deeper about what that would mean. We 
seriously considered a “global” compute API at the time we were first 
contemplating Magnum. However, what we came to learn through the journey of 
understanding the details of how such a thing would be implemented was that 
such an API would be either (1) the lowest common denominator (LCD) of all 
compute types, or (2) an exceedingly complex interface.

You expressed a sentiment below that trying to offer choices for VM, Bare Metal 
(BM), and Containers for Trove instances “adds considerable complexity”. 
Roughly the same complexity would accompany the use of a comprehensive compute 
API. I suppose you were imagining an LCD approach. If that’s what you want, 
just use the existing Nova API, and load different compute drivers on different 
host aggregates. A single Nova client can produce VM, BM (Ironic), and 
Container (libvirt-lxc) instances all with a common API (Nova) if it’s 
configured in this way. That’s what we do. Flavors determine which compute type 
you get.

If what you meant is that you could tap into the power of all the unique 
characteristics of each of the various compute types (through some modular 
extensibility framework) you’ll likely end up with complexity in Trove that is 
comparable to integrating with the native upstream APIs, along with the 
disadvantage of waiting for OpenStack to continually catch up to the pace of 
change of the various upstream systems on which it depends. This is a recipe 
for disappointment.

We concluded that wrapping native APIs is a mistake, particularly when they are 
sufficiently different from what the Nova API already offers. Container APIs 
have limited similarities, so when you try to make a universal interface to all 
of them, you end up with a really complicated mess. It would be even worse if 
we tried to accommodate all the unique aspects of BM and VM as well. Magnum’s 
approach is to offer the upstream native APIs for the different container 
orchestration engines (COE), and compose Bays for them to run on that are built 
from the compute types that OpenStack supports. We do this by using different 
Heat orchestration templates (and conditional templates) to arrange a COE on 
the compute type of your choice. With that said, there are still gaps where not 
all storage or network drivers work with Ironic, and there are non-trivial 
security hurdles to clear to safely use Bays composed of libvirt-lxc instances 
in a multi-tenant environment.

My suggestion to get what you want for Trove is to see if the cloud has Magnum, 
and if it does, create a bay with the flavor type specified for whatever 
compute type you want, and then use the native API for the COE you selected for 
that bay. Start your instance on the COE, just like you use Nova today. This 
way, you have low complexity in Trove, and you can scale both the number of 
instances of your data nodes (containers), and the infrastructure on which they 
run (Nova instances).

Regards,

Adrian



On Apr 11, 2016, at 8:47 AM, Amrith Kumar 
<amr...@tesora.com<mailto:amr...@tesora.com>> wrote:

Monty, Dims,

I read the notes and was similarly intrigued by the idea. In particular, 
from the perspective of projects like Trove, having a common Compute API is 
very valuable. It would allow the projects to have a single view of 
provisioning compute, as we can today with Nova, and get the benefit of bare 
metal through Ironic, VMs through Nova, and containers through nova-docker.

With this in place, a project like Trove can offer database-as-a-service on a 
spectrum of compute infrastructures, as any end-user would expect. Databases 
don't always make sense in VMs, and while containers are great for quick and 
dirty prototyping, and VMs are great for much more, there are databases that 
will, in production, only be meaningful on bare metal.

Therefore, if there is a move towards offering a common API for VMs, 
bare metal, and containers, that would be huge.

Without such a mechanism, consuming containers in Trove adds considerable 
complexity and leads to a very sub-optimal architecture (IMHO). FWIW, a working 
prototype of Trove leveraging Ironic, VMs, and nova-docker to provision 
databases is something I worked on a while ago and have not revisited since.

Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-11 Thread Amrith Kumar
Adrian, thanks for your detailed mail.

Yes, I was hopeful of a silver bullet and as we’ve discussed before (I think it 
was Vancouver), there’s likely no silver bullet in this area. After that 
conversation, and some further experimentation, I found that even if Trove had 
access to a single Compute API, there were other significant complications 
further down the road, and I didn’t pursue the project further at the time.

We will be discussing Trove and containers in Austin [1], and I’ll try to close 
the loop with you on this while we’re in town. I would still like to come up 
with some way in which we can offer users the option of provisioning databases 
as containers.

Thanks,

-amrith

[1] https://etherpad.openstack.org/p/trove-newton-summit-container

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: Monday, April 11, 2016 12:54 PM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Amrith,

I respect your point of view, and agree that the idea of a common compute API 
is attractive… until you think a bit deeper about what that would mean. We 
seriously considered a “global” compute API at the time we were first 
contemplating Magnum. However, what we came to learn through the journey of 
understanding the details of how such a thing would be implemented was that 
such an API would be either (1) the lowest common denominator (LCD) of all 
compute types, or (2) an exceedingly complex interface.

You expressed a sentiment below that trying to offer choices for VM, Bare Metal 
(BM), and Containers for Trove instances “adds considerable complexity”. 
Roughly the same complexity would accompany the use of a comprehensive compute 
API. I suppose you were imagining an LCD approach. If that’s what you want, 
just use the existing Nova API, and load different compute drivers on different 
host aggregates. A single Nova client can produce VM, BM (Ironic), and 
Container (libvirt-lxc) instances all with a common API (Nova) if it’s 
configured in this way. That’s what we do. Flavors determine which compute type 
you get.

If what you meant is that you could tap into the power of all the unique 
characteristics of each of the various compute types (through some modular 
extensibility framework) you’ll likely end up with complexity in Trove that is 
comparable to integrating with the native upstream APIs, along with the 
disadvantage of waiting for OpenStack to continually catch up to the pace of 
change of the various upstream systems on which it depends. This is a recipe 
for disappointment.

We concluded that wrapping native APIs is a mistake, particularly when they are 
sufficiently different from what the Nova API already offers. Container APIs 
have limited similarities, so when you try to make a universal interface to all 
of them, you end up with a really complicated mess. It would be even worse if 
we tried to accommodate all the unique aspects of BM and VM as well. Magnum’s 
approach is to offer the upstream native APIs for the different container 
orchestration engines (COE), and compose Bays for them to run on that are built 
from the compute types that OpenStack supports. We do this by using different 
Heat orchestration templates (and conditional templates) to arrange a COE on 
the compute type of your choice. With that said, there are still gaps where not 
all storage or network drivers work with Ironic, and there are non-trivial 
security hurdles to clear to safely use Bays composed of libvirt-lxc instances 
in a multi-tenant environment.

My suggestion to get what you want for Trove is to see if the cloud has Magnum, 
and if it does, create a bay with the flavor type specified for whatever 
compute type you want, and then use the native API for the COE you selected for 
that bay. Start your instance on the COE, just like you use Nova today. This 
way, you have low complexity in Trove, and you can scale both the number of 
instances of your data nodes (containers), and the infrastructure on which they 
run (Nova instances).

Regards,

Adrian



On Apr 11, 2016, at 8:47 AM, Amrith Kumar 
<amr...@tesora.com<mailto:amr...@tesora.com>> wrote:

Monty, Dims,

I read the notes and was similarly intrigued by the idea. In particular, 
from the perspective of projects like Trove, having a common Compute API is 
very valuable. It would allow the projects to have a single view of 
provisioning compute, as we can today with Nova, and get the benefit of bare 
metal through Ironic, VMs through Nova, and containers through nova-docker.

With this in place, a project like Trove can offer database-as-a-service on a 
spectrum of compute infrastructures, as any end-user would expect. Databases 
don't always make sense in VMs, and while containers are great for quick and 
dirty prototyping, and VMs are great for much more, there are databases that 
will, in production, only be meaningful on bare metal.

Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-11 Thread Adrian Otto
To: …; foundat...@lists.openstack.org<mailto:foundat...@lists.openstack.org>
Cc: OpenStack Development Mailing List (not for usage questions)
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

On 04/11/2016 09:43 AM, Allison Randal wrote:
On Wed, Apr 6, 2016 at 1:11 PM, Davanum Srinivas 
<dava...@gmail.com<mailto:dava...@gmail.com>>
wrote:
Reading unofficial notes [1], I found one topic very interesting:
One Platform – How do we truly support containers and bare metal
under a common API with VMs? (Ironic, Nova, adjacent communities e.g.
Kubernetes, Apache Mesos etc.)

Anyone present at the meeting, please expand on those few notes on
the etherpad? And how, if at all, is this feedback getting back to the
projects?

It was really two separate conversations that got conflated in the
summary. One conversation was just being supportive of bare metal,
VMs, and containers within the OpenStack umbrella. The other
conversation started with Monty talking about his work on shade, and
how it wouldn't exist if more APIs were focused on the way users
consume the APIs, and less an expression of the implementation details
of each project.
OpenStackClient was mentioned as a unified CLI for OpenStack focused
more on the way users consume the CLI. (OpenStackSDK wasn't mentioned,
but falls in the same general category of work.)

i.e. There wasn't anything new in the conversation, it was more a
matter of the developers/TC members on the board sharing information
about work that's already happening.

I agree with that - but would like to clarify the 'bare metal, VMs and
containers' part a bit. (And in fact, I was concerned in the meeting that
the messaging around this would be confusing, because 'supporting bare
metal' and 'supporting containers' mean two different things but we use
one phrase to talk about both.)

It's abundantly clear at the strategic level that having OpenStack be able
to provide both VMs and Bare Metal as two different sorts of resources
(ostensibly but not prescriptively via nova) is one of our advantages. We
wanted to underscore how important it is to be able to do that, and wanted
to underscore that so that it's really clear how important it is any time
the "but cloud should just be VMs" sentiment arises.

The way we discussed "supporting containers" was quite different and was
not about nova providing containers. Rather, it was about reaching out to
our friends in other communities and working with them on making OpenStack
the best place to run things like kubernetes or docker swarm.
Those are systems that ultimately need to run, and it seems that good
integration (like kuryr with libnetwork) can provide a really strong
story. I think pretty much everyone agrees that there is not much value to
us or the world for us to compete with kubernetes or docker.

So, we do want to be supportive of bare metal and containers - but the
specific _WAY_ we want to be supportive of those things is different for
each one.

Monty




Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-11 Thread Russell Bryant
On Mon, Apr 11, 2016 at 11:30 AM, Monty Taylor  wrote:

> On 04/11/2016 09:43 AM, Allison Randal wrote:
>
>> On Wed, Apr 6, 2016 at 1:11 PM, Davanum Srinivas
>> wrote:
>>> Reading unofficial notes [1], I found one topic very interesting:
>>> One Platform – How do we truly support containers and bare metal under
>>> a common API with VMs? (Ironic, Nova, adjacent communities e.g.
>>> Kubernetes, Apache Mesos etc.)
>>>
>>> Anyone present at the meeting, please expand on those few notes on
>>> the etherpad? And how, if at all, is this feedback getting back to the
>>> projects?
>>
>> It was really two separate conversations that got conflated in the
>> summary. One conversation was just being supportive of bare metal, VMs,
>> and containers within the OpenStack umbrella. The other conversation
>> started with Monty talking about his work on shade, and how it wouldn't
>> exist if more APIs were focused on the way users consume the APIs, and
>> less an expression of the implementation details of each project.
>> OpenStackClient was mentioned as a unified CLI for OpenStack focused
>> more on the way users consume the CLI. (OpenStackSDK wasn't mentioned,
>> but falls in the same general category of work.)
>>
>> i.e. There wasn't anything new in the conversation, it was more a matter
>> of the developers/TC members on the board sharing information about work
>> that's already happening.
>>
>
> I agree with that - but would like to clarify the 'bare metal, VMs and
> containers' part a bit. (And in fact, I was concerned in the meeting that
> the messaging around this would be confusing, because 'supporting bare
> metal' and 'supporting containers' mean two different things but we use one
> phrase to talk about both.)
>
> It's abundantly clear at the strategic level that having OpenStack be able
> to provide both VMs and Bare Metal as two different sorts of resources
> (ostensibly but not prescriptively via nova) is one of our advantages. We
> wanted to underscore how important it is to be able to do that, and wanted
> to underscore that so that it's really clear how important it is any time
> the "but cloud should just be VMs" sentiment arises.
>
> The way we discussed "supporting containers" was quite different and was
> not about nova providing containers. Rather, it was about reaching out to
> our friends in other communities and working with them on making OpenStack
> the best place to run things like kubernetes or docker swarm. Those are
> systems that ultimately need to run, and it seems that good integration
> (like kuryr with libnetwork) can provide a really strong story. I think
> pretty much everyone agrees that there is not much value to us or the world
> for us to compete with kubernetes or docker.
>
> So, we do want to be supportive of bare metal and containers - but the
> specific _WAY_ we want to be supportive of those things is different for
> each one.
>

I was there and agree with the summary provided by Allison and Monty.

It's important to have some high-level alignment on where we see our core
strengths and where we see ourselves as complementary rather than competitive.
I don't think any of it was new information, but valuable to revisit
nonetheless.

-- 
Russell Bryant


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-11 Thread Fox, Kevin M
The more I've used containers in production, the more I've come to the 
conclusion they are much different beasts than Nova instances. Nova's 
abstraction lets physical hardware and VMs share one common API, and it makes 
a lot of sense to unify them.

Oh. To be explicit, I'm talking about Docker-style lightweight containers, not 
heavyweight containers like LXC ones. The heavyweight ones do work well with 
Nova. For the rest of the conversation, container = lightweight container.

Trove can make use of containers provided there is a standard API in OpenStack 
for provisioning them. Right now, Magnum provides a way to get 
Kubernetes-orchestrated clusters, for example, but doesn't have good 
integration to hook it into Keystone so that Trusts can be used with it on the 
user's behalf for advanced services like Trove. So some pieces are missing. 
Heat should have a way to offer Kubernetes YAML resources too.
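
For reference, the Keystone side of what is missing might look roughly like 
this (IDs are placeholders; it shows only creating the trust, not Magnum 
consuming it):

    # Hedged sketch: a service obtains a trust so it can act on the user's
    # behalf against a bay later on.
    from keystoneclient.v3 import client as ks_client

    keystone = ks_client.Client(session=sess)  # 'sess' authenticated as the user
    trust = keystone.trusts.create(
        trustor_user='USER_ID',                # placeholder: delegating user
        trustee_user='SERVICE_USER_ID',        # placeholder: e.g. Trove's user
        project='PROJECT_ID',                  # placeholder
        role_names=['Member'],
        impersonation=True)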

I think the recent request to rescope Kuryr to include non-network features is 
a good step in solving some of the issues.

Unfortunately, it will probably take some time to get Magnum to the point where 
it can be used by other OpenStack advanced services. Maybe these sorts of 
issues should be written down and discussed at the upcoming summit between the 
Magnum and Kuryr teams?

Thanks,
Kevin



From: Amrith Kumar [amr...@tesora.com]
Sent: Monday, April 11, 2016 8:47 AM
To: OpenStack Development Mailing List (not for usage questions); Allison 
Randal; Davanum Srinivas; foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Monty, Dims,

I read the notes and was similarly intrigued by the idea. In particular, 
from the perspective of projects like Trove, having a common Compute API is 
very valuable. It would allow the projects to have a single view of 
provisioning compute, as we can today with Nova, and get the benefit of bare 
metal through Ironic, VMs through Nova, and containers through nova-docker.

With this in place, a project like Trove can offer database-as-a-service on a 
spectrum of compute infrastructures, as any end-user would expect. Databases 
don't always make sense in VMs, and while containers are great for quick and 
dirty prototyping, and VMs are great for much more, there are databases that 
will, in production, only be meaningful on bare metal.

Therefore, if there is a move towards offering a common API for VMs, 
bare metal, and containers, that would be huge.

Without such a mechanism, consuming containers in Trove adds considerable 
complexity and leads to a very sub-optimal architecture (IMHO). FWIW, a working 
prototype of Trove leveraging Ironic, VMs, and nova-docker to provision 
databases is something I worked on a while ago, and I have not revisited it 
since then (once the direction appeared to be Magnum for containers).

With all that said, I don't want to downplay the value in a container-specific 
API. I'm merely observing that from the perspective of a consumer of computing 
services, a common abstraction is incredibly valuable.

Thanks,

-amrith

> -Original Message-
> From: Monty Taylor [mailto:mord...@inaugust.com]
> Sent: Monday, April 11, 2016 11:31 AM
> To: Allison Randal <alli...@lohutok.net>; Davanum Srinivas
> <dava...@gmail.com>; foundat...@lists.openstack.org
> Cc: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One
> Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)
>
> On 04/11/2016 09:43 AM, Allison Randal wrote:
> >> On Wed, Apr 6, 2016 at 1:11 PM, Davanum Srinivas <dava...@gmail.com>
> >> wrote:
> >>> Reading unofficial notes [1], I found one topic very interesting:
> >>> One Platform – How do we truly support containers and bare metal
> >>> under a common API with VMs? (Ironic, Nova, adjacent communities e.g.
> >>> Kubernetes, Apache Mesos etc.)
> >>>
> >>> Anyone present at the meeting, please expand on those few notes on
> >>> the etherpad? And how, if at all, is this feedback getting back to the
> >>> projects?
> >
> > It was really two separate conversations that got conflated in the
> > summary. One conversation was just being supportive of bare metal,
> > VMs, and containers within the OpenStack umbrella. The other
> > conversation started with Monty talking about his work on shade, and
> > how it wouldn't exist if more APIs were focused on the way users
> > consume the APIs, and less an expression of the implementation details
> of each project.
> > OpenStackClient was mentioned as a unified CLI for OpenStack focused
> > more on the way users consume the CLI.

Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-11 Thread Amrith Kumar
Monty, Dims, 

I read the notes and was similarly intrigued by the idea. In particular, 
from the perspective of projects like Trove, having a common Compute API is 
very valuable. It would allow the projects to have a single view of 
provisioning compute, as we can today with Nova, and get the benefit of bare 
metal through Ironic, VMs through Nova, and containers through nova-docker.

With this in place, a project like Trove can offer database-as-a-service on a 
spectrum of compute infrastructures, as any end-user would expect. Databases 
don't always make sense in VMs, and while containers are great for quick and 
dirty prototyping, and VMs are great for much more, there are databases that 
will, in production, only be meaningful on bare metal.

Therefore, if there is a move towards offering a common API for VMs, 
bare metal, and containers, that would be huge.

Without such a mechanism, consuming containers in Trove adds considerable 
complexity and leads to a very sub-optimal architecture (IMHO). FWIW, a working 
prototype of Trove leveraging Ironic, VMs, and nova-docker to provision 
databases is something I worked on a while ago, and I have not revisited it 
since then (once the direction appeared to be Magnum for containers).

With all that said, I don't want to downplay the value in a container-specific 
API. I'm merely observing that from the perspective of a consumer of computing 
services, a common abstraction is incredibly valuable.

Thanks,

-amrith 

> -Original Message-
> From: Monty Taylor [mailto:mord...@inaugust.com]
> Sent: Monday, April 11, 2016 11:31 AM
> To: Allison Randal <alli...@lohutok.net>; Davanum Srinivas
> <dava...@gmail.com>; foundat...@lists.openstack.org
> Cc: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One
> Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)
> 
> On 04/11/2016 09:43 AM, Allison Randal wrote:
> >> On Wed, Apr 6, 2016 at 1:11 PM, Davanum Srinivas <dava...@gmail.com>
> >> wrote:
> >>> Reading unofficial notes [1], I found one topic very interesting:
> >>> One Platform – How do we truly support containers and bare metal
> >>> under a common API with VMs? (Ironic, Nova, adjacent communities e.g.
> >>> Kubernetes, Apache Mesos etc.)
> >>>
> >>> Anyone present at the meeting, please expand on those few notes on
> >>> the etherpad? And how, if at all, is this feedback getting back to the
> >>> projects?
> >
> > It was really two separate conversations that got conflated in the
> > summary. One conversation was just being supportive of bare metal,
> > VMs, and containers within the OpenStack umbrella. The other
> > conversation started with Monty talking about his work on shade, and
> > how it wouldn't exist if more APIs were focused on the way users
> > consume the APIs, and less an expression of the implementation details
> of each project.
> > OpenStackClient was mentioned as a unified CLI for OpenStack focused
> > more on the way users consume the CLI. (OpenStackSDK wasn't mentioned,
> > but falls in the same general category of work.)
> >
> > i.e. There wasn't anything new in the conversation, it was more a
> > matter of the developers/TC members on the board sharing information
> > about work that's already happening.
> 
> I agree with that - but would like to clarify the 'bare metal, VMs and
> containers' part a bit. (And in fact, I was concerned in the meeting that
> the messaging around this would be confusing, because 'supporting bare
> metal' and 'supporting containers' mean two different things but we use
> one phrase to talk about both.)
> 
> It's abundantly clear at the strategic level that having OpenStack be able
> to provide both VMs and Bare Metal as two different sorts of resources
> (ostensibly but not prescriptively via nova) is one of our advantages. We
> wanted to underscore how important it is to be able to do that, and wanted
> to underscore that so that it's really clear how important it is any time
> the "but cloud should just be VMs" sentiment arises.
> 
> The way we discussed "supporting containers" was quite different and was
> not about nova providing containers. Rather, it was about reaching out to
> our friends in other communities and working with them on making OpenStack
> the best place to run things like kubernetes or docker swarm.
> Those are systems that ultimately need to run, and it seems that good
> integration (like kuryr with libnetwork) can provide a really strong
> story. I think pretty much everyone agrees that there is not much value to
> us or the world for us to compete with kubernetes or docker.

Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-11 Thread Monty Taylor

On 04/11/2016 09:43 AM, Allison Randal wrote:

On Wed, Apr 6, 2016 at 1:11 PM, Davanum Srinivas  wrote:

Reading unofficial notes [1], I found one topic very interesting:
One Platform – How do we truly support containers and bare metal under
a common API with VMs? (Ironic, Nova, adjacent communities e.g.
Kubernetes, Apache Mesos etc.)

Anyone present at the meeting, please expand on those few notes on
the etherpad? And how, if at all, is this feedback getting back to the
projects?


It was really two separate conversations that got conflated in the
summary. One conversation was just being supportive of bare metal, VMs,
and containers within the OpenStack umbrella. The other conversation
started with Monty talking about his work on shade, and how it wouldn't
exist if more APIs were focused on the way users consume the APIs, and
less an expression of the implementation details of each project.
OpenStackClient was mentioned as a unified CLI for OpenStack focused
more on the way users consume the CLI. (OpenStackSDK wasn't mentioned,
but falls in the same general category of work.)

i.e. There wasn't anything new in the conversation, it was more a matter
of the developers/TC members on the board sharing information about work
that's already happening.


I agree with that - but would like to clarify the 'bare metal, VMs and 
containers' part a bit. (And in fact, I was concerned in the meeting that 
the messaging around this would be confusing, because 'supporting bare 
metal' and 'supporting containers' mean two different things but we use 
one phrase to talk about both.)


It's abundantly clear at the strategic level that having OpenStack be 
able to provide both VMs and Bare Metal as two different sorts of 
resources (ostensibly but not prescriptively via nova) is one of our 
advantages. We wanted to underscore how important it is to be able to do 
that, and wanted to underscore that so that it's really clear how 
important it is any time the "but cloud should just be VMs" sentiment 
arises.


The way we discussed "supporting containers" was quite different and was 
not about nova providing containers. Rather, it was about reaching out 
to our friends in other communities and working with them on making 
OpenStack the best place to run things like kubernetes or docker swarm. 
Those are systems that ultimately need to run, and it seems that good 
integration (like kuryr with libnetwork) can provide a really strong 
story. I think pretty much everyone agrees that there is not much value 
to us or the world for us to compete with kubernetes or docker.


So, we do want to be supportive of bare metal and containers - but the 
specific _WAY_ we want to be supportive of those things is different for 
each one.


Monty




Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-11 Thread Allison Randal
> On Wed, Apr 6, 2016 at 1:11 PM, Davanum Srinivas  wrote:
>> Reading unofficial notes [1], I found one topic very interesting:
>> One Platform – How do we truly support containers and bare metal under
>> a common API with VMs? (Ironic, Nova, adjacent communities e.g.
>> Kubernetes, Apache Mesos etc.)
>>
>> Anyone present at the meeting, please expand on those few notes on
>> the etherpad? And how, if at all, is this feedback getting back to the
>> projects?

It was really two separate conversations that got conflated in the
summary. One conversation was just being supportive of bare metal, VMs,
and containers within the OpenStack umbrella. The other conversation
started with Monty talking about his work on shade, and how it wouldn't
exist if more APIs were focused on the way users consume the APIs, and
less an expression of the implementation details of each project.
OpenStackClient was mentioned as a unified CLI for OpenStack focused
more on the way users consume the CLI. (OpenStackSDK wasn't mentioned,
but falls in the same general category of work.)
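
(For anyone who hasn't seen shade, the contrast is roughly this: one 
consumption-oriented call instead of per-project plumbing. 'mycloud' is 
assumed to be defined in clouds.yaml.)

    import shade

    cloud = shade.openstack_cloud(cloud='mycloud')
    server = cloud.create_server('demo', image='fedora', flavor='m1.small',
                                 wait=True, auto_ip=True)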

i.e. There wasn't anything new in the conversation, it was more a matter
of the developers/TC members on the board sharing information about work
that's already happening.

Allison
