Hi Kai,

Does Magnum already have drivers for networks and volumes, or are you 
suggesting this is what needs to be handled by a new COE driver structure?


I think having a directory per COE is a good idea if the driver handles 
multiple responsibilities. So far we haven't really identified what the 
responsibilities of a COE driver are - that's what I'm currently trying to 
pin down. What types of classes would live in the default/ and contrib/ 
directories?


I'm assuming the scale manager would need to be specced alongside the COE 
drivers, since each COE has a different way of handling scaling. If that's the 
case, how would a scale manager integrate into the COE driver model we've 
already touched upon? Would each COE class implement various scaling methods 
from a base class, or would there be a manager class per COE?
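
For illustration, here is a minimal sketch of the two options as I see them 
(all names below are hypothetical, not existing Magnum code):

    # Option A: scaling methods defined on the driver base class itself
    class BaseCoEDriver(object):
        def scale_up(self, bay, node_count):
            raise NotImplementedError()

        def scale_down(self, bay, node_count):
            raise NotImplementedError()

    # Option B: a dedicated scale manager class per COE
    class SwarmScaleManager(object):
        def scale(self, bay, node_count):
            # Swarm-specific logic for adding/removing nodes goes here
            pass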


Jamie


________________________________
From: Kai Qiang Wu <wk...@cn.ibm.com>
Sent: 17 March 2016 15:44
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] COE drivers spec


Here are some of my raw points,


1. Regarding the driver naming, I don't think we necessarily need a 
bay-driver here. We already have network-driver and volume-driver, and it may 
not be necessary to introduce a driver at the bay level (a bay is a 
higher-level concept than a network or a volume).

Maybe something like:

coes/
    swarm/
    mesos/
    kubernetes/

Each COE directory would then contain, taking swarm as an example:

coes/
    swarm/
        default/
        contrib/

Or, instead of contrib/, we could split by distro (one is supported by 
default; the others are contributed by the community and tested in the 
Jenkins pipeline):

coes/
    swarm/
        atomic/
        ubuntu/


We would have a BaseCoE class, and each specific CoE would inherit from it. 
Each CoE would implement the related life cycle management operations: 
Create, Update, Get, and Delete.
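
A minimal sketch of that structure (class and method names are only 
illustrative, not existing Magnum code):

    class BaseCoE(object):
        """Common life cycle operations shared by every CoE."""

        def create(self, context, bay):
            raise NotImplementedError()

        def update(self, context, bay):
            raise NotImplementedError()

        def get(self, context, bay_ident):
            raise NotImplementedError()

        def delete(self, context, bay):
            raise NotImplementedError()

    class SwarmCoE(BaseCoE):
        """Swarm-specific CoE, living under coes/swarm/."""

        def create(self, context, bay):
            # drive the swarm-specific Heat template here
            pass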



2. We need to think more about the scale manager, which involves scaling a 
cluster up and down - possibly with both auto-scale and manual-scale modes.


The use cases: as a Cloud Administrator, I could easily use OpenStack to 
provision CoE clusters, manage the CoE life cycle, and scale CoEs. A CoE 
should make the best use of OpenStack networking and volume services to 
provide CoE-related network and volume support.


Another interesting case (not required): if a user just wants to deploy a 
single container in Magnum, we schedule it to the right CoE (if the user 
specifies one manually, it would be scheduled to that specific CoE).


Or more use cases...



Thanks

Best Wishes,
--------------------------------------------------------------------------------
Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193
--------------------------------------------------------------------------------
Follow your heart. You are miracle!


From: Jamie Hannaford <jamie.hannaf...@rackspace.com>
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Date: 17/03/2016 07:24 pm
Subject: [openstack-dev] [Magnum] COE drivers spec

________________________________



Hi all,

I'm writing the spec for the COE drivers, and I wanted some feedback about what 
it should include. I'm trying to reconstruct some of the discussion that took 
place at the mid-cycle meet-up, but since I wasn't there I need to rely on 
people who were :)

From my perspective, the spec should recommend the following:

1. Change the BayModel `coe` attribute to `bay_driver`, the value of which will 
correspond to the name of the directory where the COE code will reside, i.e. 
drivers/{driver_name}
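
One plausible way to resolve that attribute at runtime would be stevedore, 
which other OpenStack projects already use for driver loading (the namespace 
and function below are hypothetical, just for illustration):

    from stevedore import driver

    def load_bay_driver(baymodel):
        # baymodel.bay_driver would be e.g. 'swarm', matching both the
        # drivers/swarm/ directory and a 'magnum.bay_drivers' entry point
        mgr = driver.DriverManager(
            namespace='magnum.bay_drivers',
            name=baymodel.bay_driver,
            invoke_on_load=True,
        )
        return mgr.driver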

2. Introduce a base Driver class that each COE driver extends. This would 
reside in the drivers dir too. This base driver will specify the interface for 
interacting with a Bay. The following operations would need to be defined by 
each COE driver: Get, Create, List, List detailed, Update, Delete. Each COE 
driver would implement each operation differently depending on its needs, but 
would satisfy the base interface. The base class would also contain common
logic to avoid code duplication. Any operations that fall outside this 
interface would not exist in the COE driver class, but rather an extension 
situated elsewhere. The JSON payloads for requests would differ from COE to COE.

Cinder already uses this approach to great effect for volume drivers:

https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/lvm.py
https://github.com/openstack/cinder/blob/master/cinder/volume/driver.py
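
For illustration, a rough sketch of what a Magnum equivalent might look like 
(nothing below is existing Magnum code; the method names just mirror the six 
operations above):

    import abc

    import six

    @six.add_metaclass(abc.ABCMeta)
    class BayDriver(object):
        """Base interface that every COE driver must satisfy."""

        @abc.abstractmethod
        def create_bay(self, context, bay):
            """Provision a new bay for this COE."""

        @abc.abstractmethod
        def get_bay(self, context, bay_ident):
            """Fetch a single bay."""

        @abc.abstractmethod
        def list_bays(self, context, detail=False):
            """List bays; detail=True covers the 'List detailed' case."""

        @abc.abstractmethod
        def update_bay(self, context, bay):
            """Apply changes to an existing bay."""

        @abc.abstractmethod
        def delete_bay(self, context, bay):
            """Tear down a bay."""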

Question: Is this base class a feasible idea for Magnum? If so, do we need any 
other operations in the base class that I haven't mentioned?

3. Each COE driver would have its own Heat template for creating a bay node. It 
would also have a template definition that lists the JSON parameters which are 
fed into the Heat template.
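
As a purely hypothetical sketch, a template definition could be little more 
than a mapping from BayModel/Bay attributes to Heat parameter names (the 
attribute and parameter names below are invented for illustration):

    class SwarmTemplateDefinition(object):
        template_path = 'drivers/swarm/templates/cluster.yaml'

        # Heat parameter name -> attribute on the bay or baymodel
        parameter_mapping = {
            'ssh_key_name': 'keypair_id',
            'external_network': 'external_network_id',
            'number_of_nodes': 'node_count',
        }

        def get_params(self, baymodel, bay):
            params = {}
            for heat_param, attr in self.parameter_mapping.items():
                source = bay if hasattr(bay, attr) else baymodel
                params[heat_param] = getattr(source, attr)
            return params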

Question: From a very top-level POV, what logic or codebase changes would 
Magnum need in order to use Heat templates in the above way?

4. Removal of all old code that does not fit the above paradigm.

---

Any custom COE operations that are not common Bay operations (i.e. the six 
listed in #2) would reside in a COE extension. This is outside of the scope of 
the COE drivers spec and would require an entirely different spec that utilizes 
a common paradigm for extensions in OpenStack. Such a spec would also need to 
cover how the conductor would link off to each COE. Is this summary correct?

Does Magnum already have a scale manager? If not, should this be introduced as 
a separate BP/spec?

Is there anything else that a COE drivers spec needs to cover which I have not 
mentioned?

Jamie
