Hongbin,

I am not sure that it's a good idea; it looks like you are proposing that Magnum enter the "schedulers war" (personally, I am tired of these debates: Mesos vs Kubernetes vs Swarm). If your concern is just utilization, you can always run the control plane on the "agent/slave" nodes. The main reason why operators (at least in our case) keep them separate is that they need different attention (e.g. I almost don't care why/when an "agent/slave" node died, but I always double-check that a master node was repaired or replaced).

One use case I see for a shared COE (at least in our environment) is when developers want to run just a Docker container without installing anything locally (e.g. docker-machine). But in most cases it's just examples from the internet or their own experiments :)

But we definitely should discuss it during the midcycle next week.

--- Egor

From: Hongbin Lu <[email protected]>
To: OpenStack Development Mailing List (not for usage questions) <[email protected]>
Sent: Thursday, February 11, 2016 8:50 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
   
Hi team,

Sorry for bringing up this old thread, but a recent debate on the container resource [1] reminded me of the use case Kris mentioned below. I am going to propose a preliminary idea to address the use case. Of course, we could continue the discussion in the team meeting or at the midcycle.

Idea: Introduce a docker-native COE that consists of only minion/worker/slave nodes (no master nodes).
Goal: Eliminate duplicated IaaS resources (master node VMs, lbaas vips, floating ips, etc.).
Details: A traditional COE (k8s/swarm/mesos) consists of master nodes and worker nodes. In these COEs, control services (e.g. the scheduler) run on master nodes, and containers run on worker nodes. If we can port the COE control services to the Magnum control plane and share them across all tenants, we eliminate the need for master nodes and thus improve resource utilization. In the new COE, users create/manage containers through Magnum API endpoints. Magnum is responsible for spinning up tenant VMs, scheduling containers onto those VMs, and managing the life-cycle of those containers. Unlike other COEs, containers created by this COE are considered OpenStack-managed resources. That means they will be tracked in the Magnum DB and accessible by other OpenStack services (e.g. Horizon, Heat, etc.).
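
To make the workflow concrete, here is a rough client-side sketch of what creating a container through such an endpoint could look like. The /v1/containers path, port, and request fields below are illustrative assumptions for this proposal, not an existing Magnum API:

# Illustrative only: a client-side sketch of the proposed
# "containers as OpenStack-managed resources" workflow. The endpoint
# path, port, and request/response fields are assumptions for
# discussion, not an existing Magnum API.
import requests

MAGNUM_ENDPOINT = "http://magnum.example.com:9511/v1"  # hypothetical endpoint
TOKEN = "<keystone-token>"                             # obtained from Keystone

headers = {"X-Auth-Token": TOKEN, "Content-Type": "application/json"}

# Ask Magnum to run a container; Magnum itself (not a per-tenant master
# node) would pick or spin up a tenant VM and schedule the container there.
resp = requests.post(
    f"{MAGNUM_ENDPOINT}/containers",
    headers=headers,
    json={
        "name": "web-1",
        "image": "nginx:latest",
        "memory": "256m",
        "command": ["nginx", "-g", "daemon off;"],
    },
)
resp.raise_for_status()
container = resp.json()

# Because the container is tracked in the Magnum DB, it can later be
# listed, shown in Horizon, or referenced from Heat like any other
# OpenStack resource.
print(container.get("uuid"), container.get("status"))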
How do you feel about this proposal? Let's discuss.

[1] https://etherpad.openstack.org/p/magnum-native-api

Best regards,
Hongbin

  From: Kris G. Lindgren [mailto:[email protected]]
Sent: September-30-15 7:26 PM
To: [email protected]
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

We are looking at deploying magnum as an answer for how we do containers company-wide at Godaddy. I am going to agree with both you and Josh.

I agree that managing one large system is going to be a pain, and past experience tells me this won't be practical or scale; however, from experience I also know exactly the pain Josh is talking about.

We currently have ~4k projects in our internal openstack cloud, and about 1/4 of the projects are currently doing some form of containers on their own, with more joining every day. If all of these projects were to convert over to the current magnum configuration, we would suddenly be attempting to support/configure ~1k magnum clusters. Considering that everyone will want it HA, we are looking at a minimum of 2 kube nodes per cluster + lbaas vips + floating ips. From a capacity standpoint this is an excessive amount of duplicated infrastructure to spin up in projects where people may be running 10–20 containers per project. From an operator support perspective this is a special level of hell that I do not want to get into. Even if I am off by 75%, 250 still sucks.
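
Putting rough numbers on that duplication (back-of-the-envelope, using the estimates above, not measured capacity):

# Back-of-the-envelope sketch of the duplicated infrastructure described
# above; all numbers are the rough estimates from this thread.
projects_total = 4000        # ~4k projects in the internal cloud
container_fraction = 0.25    # ~1/4 already doing containers on their own
clusters = int(projects_total * container_fraction)  # ~1k magnum clusters

nodes_per_cluster = 2        # minimum kube nodes for an HA cluster
vms = clusters * nodes_per_cluster
lbaas_vips = clusters
floating_ips = clusters

print(f"{clusters} clusters -> {vms} VMs, {lbaas_vips} LBaaS VIPs, "
      f"{floating_ips} floating IPs")
# Even if the estimate is off by 75%, that still leaves ~250 clusters
# (~500 kube-node VMs) of per-project control-plane overhead.
print(int(clusters * 0.25), "clusters at 25% of the estimate")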
From my point of view, an ideal use case for companies like ours (yahoo/godaddy) would be the ability to support hierarchical projects in magnum. That way we could create a project for each department, and then the sub-teams of those departments can have their own projects. We create a bay per department. Sub-projects, if they want to, can support creation of their own bays (but support of the kube cluster would then fall to that team). When a sub-project spins up a pod on a bay, minions get created inside that team's sub-project and the containers in that pod run on the capacity that was spun up under that project; the minions for each pod would be in a scaling group and as such grow/shrink as dictated by load.
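
A rough sketch of the hierarchical layout just described (the names and structure are illustrative only, nothing that exists in magnum today):

# Illustrative sketch of the hierarchical-project layout described above;
# none of this is an existing magnum feature, just the shape of the idea.
department_project = {
    "name": "web-platform",          # one OpenStack project per department
    "bay": "web-platform-kube",      # one shared bay (kube cluster) per department
    "sub_projects": [
        {
            "name": "web-platform/search-team",
            "own_bay": None,         # uses the department bay by default
            # when this team spins up a pod, the pod's minions are created
            # inside this sub-project and live in a scaling group that
            # grows/shrinks with load
            "pods": [
                {"name": "search-api", "minion_scaling_group": "search-api-asg"},
            ],
        },
        {
            "name": "web-platform/ads-team",
            "own_bay": "ads-team-kube",  # opted out: runs (and supports) its own bay
            "pods": [],
        },
    ],
}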
The above would make it so that we support a minimal, yet imho reasonable, number of kube clusters, give people who can't/don't want to fall in line with the provided resource a way to make their own, and still offer a "good enough for a single company" level of multi-tenancy.
> Joshua,
>
> If you share resources, you give up multi-tenancy. No COE system has the
> concept of multi-tenancy (kubernetes has some basic implementation but it
> is totally insecure). Not only does multi-tenancy have to “look like” it
> offers multiple tenants isolation, but it actually has to deliver the
> goods.
>
> I understand that at first glance a company like Yahoo may not want
> separate bays for their various applications because of the perceived
> administrative overhead. I would then challenge Yahoo to go deploy a COE
> like kubernetes (which has no multi-tenancy or a very basic implementation
> of such) and get it to work with hundreds of different competing
> applications. I would speculate the administrative overhead of getting
> all that to work would be greater than the administrative overhead of
> simply doing a bay create for the various tenants.
>
> Placing tenancy inside a COE seems interesting, but no COE does that
> today. Maybe in the future they will. Magnum was designed to present an
> integration point between COEs and OpenStack today, not five years down
> the road. It's not as if we took shortcuts to get to where we are.
>
> I will grant you that density is lower with the current design of Magnum
> vs a full-on integration with OpenStack within the COE itself. However,
> that model, which is what I believe you proposed, is a huge design change
> to each COE which would overly complicate the COE at the gain of increased
> density. I personally don't feel that pain is worth the gain.

___________________________________________________________________
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: [email protected]?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
