[openstack-dev] [keystone] Can anyone share some experience for how to configure keystone work with https

2016-07-10 Thread Jay Lau
Hi,

Does anyone have some experience with, or documentation on, how to configure
keystone to work with https? If so, could you please share it with me or point
me to some links that would help?
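
In case it helps as a starting point, below is a minimal client-side sketch
using keystoneauth1 against a TLS-terminated keystone endpoint. The URL,
credentials and CA bundle path are placeholders, and it assumes TLS is
already terminated in front of keystone (e.g. by Apache/mod_wsgi or HAProxy):

# Minimal sketch: talking to an HTTPS keystone endpoint with keystoneauth1.
# The auth_url, credentials and CA bundle path below are placeholders.
from keystoneauth1 import session
from keystoneauth1.identity import v3
from keystoneclient.v3 import client

auth = v3.Password(
    auth_url="https://keystone.example.com:5000/v3",
    username="admin",
    password="secret",
    project_name="admin",
    user_domain_name="Default",
    project_domain_name="Default",
)

# verify points at the CA bundle that signed keystone's server certificate;
# verify=False would skip certificate checking (not recommended).
sess = session.Session(auth=auth, verify="/etc/ssl/certs/keystone-ca.pem")
keystone = client.Client(session=sess)
print([p.name for p in keystone.projects.list()])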

-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] Does Murano support version management?

2016-05-31 Thread Jay Lau
Hi,

I have a question about Murano: suppose I want to manage two different
versions of Spark packages. Can Murano let me create one Application in the
application catalog, while still letting me select which version of the Spark
package to install?

-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Seek advices for a licence issue

2016-04-24 Thread Jay Lau
Yes, I will contribute to this project.

Thanks

On Sat, Apr 23, 2016 at 8:44 PM, Hongbin Lu <hongbin...@huawei.com> wrote:

> Jay,
>
>
>
> I will discuss the proposal [1] in the design summit. Do you plan to
> contribute on this efforts or someone from DCOS community interest to
> contribute?
>
>
>
> [1] https://blueprints.launchpad.net/magnum/+spec/mesos-dcos
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Jay Lau [mailto:jay.lau@gmail.com]
> *Sent:* April-22-16 12:12 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [magnum] Seek advices for a licence issue
>
>
>
> I got confirmation from Mesosphere that we can use the open source DC/OS
> in Magnum, so now is a good time to enhance the Mesos Bay into an open
> source DC/OS bay.
>
> From Mesosphere
>
> DC/OS software is licensed under the Apache License, so you should feel
> free to use it within the terms of that license.
> ---
>
> Thanks.
>
>
>
> On Thu, Apr 21, 2016 at 5:35 AM, Hongbin Lu <hongbin...@huawei.com> wrote:
>
> Hi Mark,
>
>
>
> I have gone through the announcement in detail. From my point of view, it
> seems to resolve the license issue that was blocking us before. I have
> included the Magnum team on the ML to see if our team members have any comments.
>
>
>
> Thanks for the support from the foundation.
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Mark Collier [mailto:m...@openstack.org]
> *Sent:* April-19-16 12:36 PM
> *To:* Hongbin Lu
> *Cc:* foundat...@lists.openstack.org; Guang Ya GY Liu
> *Subject:* Re: [OpenStack Foundation] [magnum] Seek advices for a licence
> issue
>
>
>
> Hopefully today’s news that Mesosphere is open sourcing major components
> of DCOS under an Apache 2.0 license will make things easier:
>
>
>
> https://mesosphere.com/blog/2016/04/19/open-source-dcos/
>
>
>
> I’ll be interested to hear your take after you have time to look at it in
> more detail, Hongbin.
>
>
>
> Mark
>
>
>
>
>
>
>
> On Apr 9, 2016, at 10:02 AM, Hongbin Lu <hongbin...@huawei.com> wrote:
>
>
>
> Hi all,
>
>
>
> A brief introduction to myself. I am the Magnum Project Team Lead (PTL).
> Magnum is the OpenStack container service. I wrote this email because the
> Magnum team is seeking clarification on a licence issue around shipping
> third-party software (DCOS [1] in particular), and I was advised to consult
> the OpenStack Board of Directors in this regard.
>
>
>
> Before getting into the question, I think it is better to provide some
> background information. A feature provided by Magnum is to provision a
> container management tool on top of a set of Nova instances. One of the
> container management tools Magnum supports is Apache Mesos [2]. Generally
> speaking, Magnum ships Mesos by providing a custom cloud image with the
> necessary packages pre-installed. All of the shipped components are open
> source with appropriate licenses, so we are good so far.
>
>
>
> Recently, one of our contributors suggested extending the Mesos support to
> DCOS [3]. The Magnum team is unclear whether there is a license issue with
> shipping DCOS, which looks like a closed-source product but has a community
> version on Amazon Web Services [4]. I want to know what appropriate actions
> the Magnum team should take in this pursuit, or whether we should stop
> pursuing this direction further. Advice is greatly appreciated. Please let
> us know if we need to provide further information. Thanks.
>
>
>
> [1] https://docs.mesosphere.com/
>
> [2] http://mesos.apache.org/
>
> [3] https://blueprints.launchpad.net/magnum/+spec/mesos-dcos
>
> [4]
> https://docs.mesosphere.com/administration/installing/installing-community-edition/
>
>
>
> Best regards,
>
> Hongbin
>
>
>
>
>
>
>
> _______
> Foundation mailing list
> foundat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
>
> Thanks,
>
> Jay Lau (Guangya Liu)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Seek advices for a licence issue

2016-04-21 Thread Jay Lau
I got confirmation from Mesosphere that we can use the open source DC/OS in
Magnum, so now is a good time to enhance the Mesos Bay into an open source
DC/OS bay.

From Mesosphere
DC/OS software is licensed under the Apache License, so you should feel
free to use it within the terms of that license.
---

Thanks.

On Thu, Apr 21, 2016 at 5:35 AM, Hongbin Lu <hongbin...@huawei.com> wrote:

> Hi Mark,
>
>
>
> I have gone through the announcement in detail. From my point of view, it
> seems to resolve the license issue that was blocking us before. I have
> included the Magnum team on the ML to see if our team members have any comments.
>
>
>
> Thanks for the support from the foundation.
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Mark Collier [mailto:m...@openstack.org]
> *Sent:* April-19-16 12:36 PM
> *To:* Hongbin Lu
> *Cc:* foundat...@lists.openstack.org; Guang Ya GY Liu
> *Subject:* Re: [OpenStack Foundation] [magnum] Seek advices for a licence
> issue
>
>
>
> Hopefully today’s news that Mesosphere is open sourcing major components
> of DCOS under an Apache 2.0 license will make things easier:
>
>
>
> https://mesosphere.com/blog/2016/04/19/open-source-dcos/
>
>
>
> I’ll be interested to hear your take after you have time to look at it in
> more detail, Hongbin.
>
>
>
> Mark
>
>
>
>
>
>
>
> On Apr 9, 2016, at 10:02 AM, Hongbin Lu <hongbin...@huawei.com> wrote:
>
>
>
> Hi all,
>
>
>
> A brief introduction to myself. I am the Magnum Project Team Lead (PTL).
> Magnum is the OpenStack container service. I wrote this email because the
> Magnum team is seeking clarification on a licence issue around shipping
> third-party software (DCOS [1] in particular), and I was advised to consult
> the OpenStack Board of Directors in this regard.
>
>
>
> Before getting into the question, I think it is better to provide some
> background information. A feature provided by Magnum is to provision a
> container management tool on top of a set of Nova instances. One of the
> container management tools Magnum supports is Apache Mesos [2]. Generally
> speaking, Magnum ships Mesos by providing a custom cloud image with the
> necessary packages pre-installed. All of the shipped components are open
> source with appropriate licenses, so we are good so far.
>
>
>
> Recently, one of our contributors suggested extending the Mesos support to
> DCOS [3]. The Magnum team is unclear whether there is a license issue with
> shipping DCOS, which looks like a closed-source product but has a community
> version on Amazon Web Services [4]. I want to know what appropriate actions
> the Magnum team should take in this pursuit, or whether we should stop
> pursuing this direction further. Advice is greatly appreciated. Please let
> us know if we need to provide further information. Thanks.
>
>
>
> [1] https://docs.mesosphere.com/
>
> [2] http://mesos.apache.org/
>
> [3] https://blueprints.launchpad.net/magnum/+spec/mesos-dcos
>
> [4]
> https://docs.mesosphere.com/administration/installing/installing-community-edition/
>
>
>
> Best regards,
>
> Hongbin
>
>
>
>
>
>
>
> ___
> Foundation mailing list
> foundat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

2016-03-25 Thread Jay Lau
Yes, that's exactly what I want to do: add the DC/OS CLI and also add Chronos
to the Mesos Bay, so that it can handle both long-running services and batch
jobs.
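
For illustration, here is a rough sketch of what the two workload types look
like against a bay that runs both Marathon and Chronos. The endpoints, ports
and payload fields are assumptions based on the upstream REST APIs, not
anything Magnum provides today:

# Illustration only: long-running service via Marathon, batch job via Chronos.
# Hostnames, ports and field values below are placeholders.
import requests

MARATHON = "http://marathon.example.com:8080"   # long-running services
CHRONOS = "http://chronos.example.com:4400"     # scheduled/batch jobs

# Long-running service: an nginx app that Marathon keeps at 2 instances.
service = {
    "id": "/web/nginx",
    "instances": 2,
    "cpus": 0.5,
    "mem": 128,
    "container": {"type": "DOCKER", "docker": {"image": "nginx"}},
}
requests.post(MARATHON + "/v2/apps", json=service).raise_for_status()

# Batch job: a daily cleanup task scheduled by Chronos (repeating ISO 8601 schedule).
job = {
    "name": "daily-cleanup",
    "command": "echo 'cleaning up'",
    "schedule": "R/2016-04-01T00:00:00Z/P1D",
    "owner": "ops@example.com",
    "cpus": 0.1,
    "mem": 64,
}
requests.post(CHRONOS + "/scheduler/iso8601", json=job).raise_for_status()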

Thanks,

On Fri, Mar 25, 2016 at 5:25 PM, Michal Rostecki <michal.roste...@gmail.com>
wrote:

> On 03/25/2016 07:57 AM, Jay Lau wrote:
>
>> Hi Magnum,
>>
>> The current mesos bay only includes mesos and marathon; it would be better
>> to enhance the mesos bay with more components and eventually evolve it into
>> a DCOS bay that focuses on container services based on mesos.
>>
>> For more detail, please refer to
>>
>> https://docs.mesosphere.com/getting-started/installing/installing-enterprise-edition/
>>
>> Mesosphere now has a template on AWS which can help customers deploy
>> a DCOS cluster on AWS; it would be great if Magnum could also support this
>> on OpenStack.
>>
>> I filed a bp here
>> https://blueprints.launchpad.net/magnum/+spec/mesos-dcos , please show
>> your comments if any.
>>
>> --
>> Thanks,
>>
>> Jay Lau (Guangya Liu)
>>
>>
> Sorry if I'm missing something, but isn't DCOS closed-source software?
>
> However, the "DCOS cli" [1] seems to work perfectly with Marathon and
> Mesos installed in any way, as long as you configure it properly. I think what
> can be done in Magnum is to make the experience with the "DCOS" tools as
> easy as possible by using open source components from Mesosphere.
>
> Cheers,
> Michal
>
> [1] https://github.com/mesosphere/dcos-cli
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

2016-03-25 Thread Jay Lau
Hi Magnum,

The current mesos bay only includes mesos and marathon; it would be better to
enhance the mesos bay with more components and eventually evolve it into a DCOS
bay that focuses on container services based on mesos.

For more detail, please refer to
https://docs.mesosphere.com/getting-started/installing/installing-enterprise-edition/

Mesosphere now has a template on AWS which can help customers deploy a
DCOS cluster on AWS; it would be great if Magnum could also support this on
OpenStack.

I filed a bp here: https://blueprints.launchpad.net/magnum/+spec/mesos-dcos
Please share your comments, if any.
-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum-ui] Proposed Core addition, and removal notice

2016-03-06 Thread Jay Lau
+1, thanks for the great work on magnum UI,  Shu Muto!

On Sun, Mar 6, 2016 at 12:52 AM, Hongbin Lu <hongbin...@huawei.com> wrote:

> +1
>
> BTW, I am magnum core, not magnum-ui core. Not sure if my vote is counted.
>
> Best regards,
> Hongbin
>
> -Original Message-
> From: Adrian Otto [mailto:adrian.o...@rackspace.com]
> Sent: March-04-16 7:29 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [magnum-ui] Proposed Core addition, and removal
> notice
>
> Magnum UI Cores,
>
> I propose the following changes to the magnum-ui core group [1]:
>
> + Shu Muto
> - Dims (Davanum Srinivas), by request - justified by reduced activity
> level.
>
> Please respond with your +1 votes to approve this change or -1 votes to
> oppose.
>
> Thanks,
>
> Adrian
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] New Core Reviewers

2016-02-02 Thread Jay Lau
Welcome Ton and Egor!!

On Wed, Feb 3, 2016 at 12:04 AM, Adrian Otto <adrian.o...@rackspace.com>
wrote:

> Thanks everyone for your votes. Welcome Ton and Egor to the core team!
>
> Regards,
>
> Adrian
>
> > On Feb 1, 2016, at 7:58 AM, Adrian Otto <adrian.o...@rackspace.com>
> wrote:
> >
> > Magnum Core Team,
> >
> > I propose Ton Ngo (Tango) and Egor Guz (eghobo) as new Magnum Core
> Reviewers. Please respond with your votes.
> >
> > Thanks,
> >
> > Adrian Otto
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][magnum][heat] Quota for Magnum Resources

2015-12-22 Thread Jay Lau
>>
>> > >>
>> > >>
>> > >> Also, we can reference other platform-level services (like Trove and
>> > >> Sahara) to see if they implemented an extra layer of quota management
>> > >> system, and we could use it as a decision point.
>> > >>
>> > >>
>> > >>
>> > >> Best regards,
>> > >>
>> > >> Hongbin
>> > >>
>> > >>
>> > >>
>> > >> *From:* Adrian Otto [mailto:adrian.o...@rackspace.com]
>> > >> *Sent:* December-20-15 12:50 PM
>> > >> *To:* OpenStack Development Mailing List (not for usage questions)
>> > >>
>> > >> *Subject:* Re: [openstack-dev] [openstack][magnum] Quota for Magnum
>> > >> Resources
>> > >>
>> > >>
>> > >>
>> > >> This sounds like a source-of-truth concern. From my perspective the
>> > >> solution is not to create redundant quotas. Simply quota the Magnum
>> > >> resources. Lower level limits *could* be queried by magnum prior to
>> acting
>> > >> to CRUD the lower level resources. In the case we could check the
>> maximum
>> > >> allowed number of (or access rate of) whatever lower level resource
>> before
>> > >> requesting it, and raising an understandable error. I see that as an
>> > >> enhancement rather than a must-have. In all honesty that feature is
>> > >> probably more complicated than it's worth in terms of value.
>> > >>
>> > >> --
>> > >>
>> > >> Adrian
>> > >>
>> > >>
>> > >> On Dec 20, 2015, at 6:36 AM, Jay Lau <jay.lau@gmail.com> wrote:
>> > >>
>> > >> I also have the same concern as Lee: Magnum depends on Heat, and Heat
>> > >> needs to call nova, cinder and neutron to create the Bay resources. But
>> > >> both Nova and Cinder have their own quota policies, so if we define
>> > >> quotas again in Magnum, how do we handle the conflict? Another point is
>> > >> that limiting Bays by quota seems a bit coarse-grained, as different
>> > >> bays may have different configurations and resource requests. Comments?
>> > >> Thanks.
>> > >>
>> > >>
>> > >>
>> > >> On Thu, Dec 17, 2015 at 4:10 AM, Lee Calcote <leecalc...@gmail.com>
>> > >> wrote:
>> > >>
>> > >> Food for thought - there is a cost to FIPs (in the case of public IP
>> > >> addresses), security groups (to a lesser extent, but in terms of the
>> > >> computation of many hundreds of them), etc. Administrators may wish
>> to
>> > >> enforce quotas on a variety of resources that are direct costs or
>> indirect
>> > >> costs (e.g. # of bays, where a bay consists of a number of multi-VM /
>> > >> multi-host pods and services, which consume CPU, mem, etc.).
>> > >>
>> > >>
>> > >>
>> > >> If Magnum quotas are brought forward, they should govern (enforce
>> quota)
>> > >> on Magnum-specific constructs only, correct? Resources created by
>> Magnum
>> > >> COEs should be governed by existing quota policies governing said
>> resources
>> > >> (e.g. Nova and vCPUs).
>> > >>
>> > >>
>> > >>
>> > >> Lee
>> > >>
>> > >>
>> > >>
>> > >> On Dec 16, 2015, at 1:56 PM, Tim Bell <tim.b...@cern.ch> wrote:
>> > >>
>> > >>
>> > >>
>> > >> -Original Message-
>> > >> From: Clint Byrum [mailto:cl...@fewbar.com <cl...@fewbar.com>]
>> > >> Sent: 15 December 2015 22:40
>> > >> To: openstack-dev <openstack-dev@lists.openstack.org>
>> > >> Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum
>> > >> Resources
>> > >>
>> > >> Hi! Can I offer a counter point?
>> > >>
>> > >> Quotas are for _real_ resources.
>> > >>
>> > >>
>> > >> The CERN container specialist agrees with you ... it would be good to
>> > >> reflect on the needs given that ironic, neutron and nova are
>> policing the
>> > >> reso

Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-21 Thread Jay Lau
For case 2), can we discuss this with the Heat team? This seems to be an issue
related to Heat quotas; why doesn't Heat add quota support?
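
For concreteness, here is a minimal sketch of the two-layer check described in
the cases quoted below. The names (QuotaExceeded, magnum_quota, bays_in_use)
are hypothetical and not Magnum's actual code; this only illustrates the flow:

# Hypothetical sketch of the two quota layers discussed below: Magnum enforces
# its own CaaS-level bay quota first, then surfaces any IaaS-level quota
# failure raised while Heat builds the underlying stack.
class QuotaExceeded(Exception):
    """Raised when a request exceeds either quota layer."""


def create_bay(project_id, stack_args, magnum_quota, heat_client):
    # Layer 1: CaaS quota owned by Magnum (e.g. max number of bays per project).
    if magnum_quota.bays_in_use(project_id) >= magnum_quota.max_bays(project_id):
        raise QuotaExceeded("CaaS quota exceeded: too many bays for this project")

    # Layer 2: IaaS quotas owned by Nova/Cinder/Neutron, hit indirectly via Heat.
    try:
        return heat_client.stacks.create(**stack_args)
    except Exception as exc:
        # Case 2 below: catch the lower-level failure and re-throw it so the
        # real root cause (e.g. "Floating IP quota exceeded") reaches the user.
        raise QuotaExceeded("IaaS quota exceeded while creating bay: %s" % exc)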

On Tue, Dec 22, 2015 at 7:42 AM, Vilobh Meshram <
vilobhmeshram.openst...@gmail.com> wrote:

> As mentioned by Hongbin, we might have these 3 cases. Hongbin and I
> discussed these in the Magnum IRC.
>
> The interesting case is #2: when enough resources are not available at the
> IaaS layer while Magnum is in the process of creating a Bay, Magnum needs to
> be more descriptive about the failure, so that the operator or user can tell
> what exactly happened, i.e. whether the request failed because of resource
> constraints at the PaaS layer, at the IaaS layer, etc.
>
> Having the quota layer in Magnum will abstract out the underlying layer
> and would impose quotas on objects that Magnum understands. But again, it
> would be nice to know what operators think about it and whether it is
> something they would find useful.
>
> -Vilobh
>
> On Mon, Dec 21, 2015 at 2:58 PM, Hongbin Lu <hongbin...@huawei.com> wrote:
>
>> If we decide to support quotas in CaaS layer (i.e. limit the # of bays),
>> its implementation doesn’t have to be redundant to IaaS layer (from Nova,
>> Cinder, etc.). The implementation could be a layer on top of IaaS, in which
>> requests need to pass two layers of quotas to succeed. There would be three
>> cases:
>>
>> 1.   A request exceeds CaaS layer quota. Then, magnum rejects the
>> request.
>>
>> 2.   A request is within the CaaS layer quota and is accepted by magnum.
>> Magnum calls Heat to create a stack, which will fail if the stack exceeds
>> the IaaS layer quota. In this case, magnum catches and re-throws the
>> exception to users.
>>
>> 3.   A request is within both CaaS and IaaS layer quota, and the
>> request succeeds.
>>
>>
>>
>> I think the debate here is whether it would be useful to implement an
>> extra layer of quota management in Magnum. My guess is “yes”, if
>> operators want to hide the underlying infrastructure and expose a pure CaaS
>> solution to their end-users. If operators don’t use Magnum in this way,
>> then I will vote for “no”.
>>
>>
>>
>> Also, we can reference other platform-level services (like Trove and
>> Sahara) to see if they implemented an extra layer of quota management
>> system, and we could use it as a decision point.
>>
>>
>>
>> Best regards,
>>
>> Hongbin
>>
>>
>>
>> *From:* Adrian Otto [mailto:adrian.o...@rackspace.com]
>> *Sent:* December-20-15 12:50 PM
>> *To:* OpenStack Development Mailing List (not for usage questions)
>>
>> *Subject:* Re: [openstack-dev] [openstack][magnum] Quota for Magnum
>> Resources
>>
>>
>>
>> This sounds like a source-of-truth concern. From my perspective the
>> solution is not to create redundant quotas. Simply quota the Magnum
>> resources. Lower level limits *could* be queried by magnum prior to acting
>> to CRUD the lower level resources. In that case we could check the maximum
>> allowed number of (or access rate of) whatever lower level resource before
>> requesting it, and raise an understandable error. I see that as an
>> enhancement rather than a must-have. In all honesty that feature is
>> probably more complicated than it's worth in terms of value.
>>
>> --
>>
>> Adrian
>>
>>
>> On Dec 20, 2015, at 6:36 AM, Jay Lau <jay.lau@gmail.com> wrote:
>>
>> I also have the same concern as Lee: Magnum depends on Heat, and Heat
>> needs to call nova, cinder and neutron to create the Bay resources. But both
>> Nova and Cinder have their own quota policies, so if we define quotas again
>> in Magnum, how do we handle the conflict? Another point is that limiting Bays
>> by quota seems a bit coarse-grained, as different bays may have different
>> configurations and resource requests. Comments? Thanks.
>>
>>
>>
>> On Thu, Dec 17, 2015 at 4:10 AM, Lee Calcote <leecalc...@gmail.com>
>> wrote:
>>
>> Food for thought - there is a cost to FIPs (in the case of public IP
>> addresses), security groups (to a lesser extent, but in terms of the
>> computation of many hundreds of them), etc. Administrators may wish to
>> enforce quotas on a variety of resources that are direct costs or indirect
>> costs (e.g. # of bays, where a bay consists of a number of multi-VM /
>> multi-host pods and services, which consume CPU, mem, etc.).
>>
>>
>>
>> If Magnum quotas are brought forward, they sho

Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-20 Thread Jay Lau
Thanks Adrian and Tim. I saw that @Vilobh already uploaded a patch here:
https://review.openstack.org/#/c/259201/ ; perhaps we can first have a
spec and discuss it there. ;-)

On Mon, Dec 21, 2015 at 2:44 AM, Tim Bell <tim.b...@cern.ch> wrote:

> Given the lower level quotas in Heat, Neutron, Nova etc., the error
> feedback is very important. A Magnum “cannot create” message requires a lot
> of debugging whereas a “Floating IP quota exceeded” gives a clear root
> cause.
>
>
>
> Whether we quota Magnum resources or not, some error scenarios and
> appropriate testing+documentation would be a great help for operators.
>
>
>
> Tim
>
>
>
> *From:* Adrian Otto [mailto:adrian.o...@rackspace.com]
> *Sent:* 20 December 2015 18:50
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
>
> *Subject:* Re: [openstack-dev] [openstack][magnum] Quota for Magnum
> Resources
>
>
>
> This sounds like a source-of-truth concern. From my perspective the
> solution is not to create redundant quotas. Simply quota the Magnum
> resources. Lower level limits *could* be queried by magnum prior to acting
> to CRUD the lower level resources. In that case we could check the maximum
> allowed number of (or access rate of) whatever lower level resource before
> requesting it, and raise an understandable error. I see that as an
> enhancement rather than a must-have. In all honesty that feature is
> probably more complicated than it's worth in terms of value.
>
> --
>
> Adrian
>
>
> On Dec 20, 2015, at 6:36 AM, Jay Lau <jay.lau@gmail.com> wrote:
>
> I also have the same concern as Lee: Magnum depends on Heat, and Heat
> needs to call nova, cinder and neutron to create the Bay resources. But both
> Nova and Cinder have their own quota policies, so if we define quotas again
> in Magnum, how do we handle the conflict? Another point is that limiting Bays
> by quota seems a bit coarse-grained, as different bays may have different
> configurations and resource requests. Comments? Thanks.
>
>
>
> On Thu, Dec 17, 2015 at 4:10 AM, Lee Calcote <leecalc...@gmail.com> wrote:
>
> Food for thought - there is a cost to FIPs (in the case of public IP
> addresses), security groups (to a lesser extent, but in terms of the
> computation of many hundreds of them), etc. Administrators may wish to
> enforce quotas on a variety of resources that are direct costs or indirect
> costs (e.g. # of bays, where a bay consists of a number of multi-VM /
> multi-host pods and services, which consume CPU, mem, etc.).
>
>
>
> If Magnum quotas are brought forward, they should govern (enforce quota)
> on Magnum-specific constructs only, correct? Resources created by Magnum
> COEs should be governed by existing quota policies governing said resources
> (e.g. Nova and vCPUs).
>
>
>
> Lee
>
>
>
> On Dec 16, 2015, at 1:56 PM, Tim Bell <tim.b...@cern.ch> wrote:
>
>
>
> -Original Message-
> From: Clint Byrum [mailto:cl...@fewbar.com <cl...@fewbar.com>]
> Sent: 15 December 2015 22:40
> To: openstack-dev <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum
> Resources
>
> Hi! Can I offer a counter point?
>
> Quotas are for _real_ resources.
>
>
> The CERN container specialist agrees with you ... it would be good to
> reflect on the needs given that ironic, neutron and nova are policing the
> resource usage. Quotas in the past have been used for things like key pairs
> which are not really real.
>
>
> Memory, CPU, disk, bandwidth. These are all _closely_ tied to things that
> cost real money and cannot be conjured from thin air. As such, the user
> being able to allocate 1 billion or 2 containers is not limited by Magnum,
> but by real things that they must pay for. If they have enough Nova quota
> to allocate 1 billion tiny pods, why would Magnum stop them? Who actually
> benefits from that limitation?
>
> So I suggest that you not add any detailed, complicated quota system to
> Magnum. If there are real limitations to the implementation that Magnum
> has chosen, such as we had in Heat (the entire stack must fit in memory),
> then make that the limit. Otherwise, let their vcpu, disk, bandwidth, and
> memory quotas be the limit, and enjoy the profit margins that having an
> unbound force multiplier like Magnum in your cloud gives you and your
> users!
>
> Excerpts from Vilobh Meshram's message of 2015-12-14 16:58:54 -0800:
>
> Hi All,
>
> Currently, it is possible to create unlimited number of resource like
> bay/pod/service/. In Magnum, there should 

Re: [openstack-dev] [magnum] Removing pod, rcs and service APIs

2015-12-20 Thread Jay Lau
...@hpe.com <tom.camm...@hpe.com>]
> >> Sent: December-16-15 8:21 AM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: [openstack-dev] [magnum] Removing pod, rcs and service APIs
> >>
> >> I have been noticing a fair amount of redundant work going on in magnum,
> >> python-magnumclient and magnum-ui with regards to APIs we have been
> >> intending to drop support for. During the Tokyo summit it was decided
> >> that we should support only COE APIs that all COEs can support, which
> >> means dropping support for the Kubernetes-specific APIs for Pod, Service
> >> and Replication Controller.
> >>
> >> Egor has submitted a blueprint [1] “Unify container actions between all
> >> COEs” which has been approved to cover this work, and I have submitted the
> >> first of many patches that will be needed to unify the APIs.
> >>
> >> The controversial patches are here:
> >> https://review.openstack.org/#/c/258485/ and
> >> https://review.openstack.org/#/c/258454/
> >>
> >> These patches are more a forcing function for our team to decide how to
> >> correctly deprecate these APIs; as I mentioned, there is a lot of redundant
> >> work going on around these APIs. Please let me know your thoughts.
> >>
> >> Tom
> >>
> >> [1] https://blueprints.launchpad.net/magnum/+spec/unified-containers
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org
> ?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org
> ?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org
> ?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-20 Thread Jay Lau
 Could not think of a way that Magnum to know if a COE
> specific utilities created a resource in background. One way could be
> to see the difference between what's stored in magnum.quotas and the
> information of the actual resources created for a particular bay in
>
> k8s/COE.
>
>
> 5. Introduce a config variable to set quotas values.
>
> If everyone agrees, I will start the changes by introducing quota
> restrictions on Bay creation.
>
> Thoughts ??
>
>
> -Vilobh
>
> [1] https://blueprints.launchpad.net/magnum/+spec/resource-quota
>
>
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Mesos Conductor

2015-12-02 Thread Jay Lau
ave database
> application group which consists mongoDB app and MySQL App.[3]
> >
> > So I think we need to have two resources 'apps' and 'appgroups' in mesos
> conductor like we have pod and rc for k8s. And regarding 'magnum container'
> command, we can create, delete and retrieve container details as part of
> mesos app itself(container = app with 1 instance). Though I think in mesos
> case 'magnum app-create ..." and 'magnum container-create ...' will use the
> same REST API for both cases.
> >
> > Let me know your opinion/comments on this and correct me if I am wrong
> >
> > [1]https://blueprints.launchpad.net/magnum/+spec/mesos-conductor.
> > [2]https://mesosphere.github.io/marathon/docs/application-basics.html
> > [3]https://mesosphere.github.io/marathon/docs/application-groups.html
> >
> >
> > Regards
> > Bharath T
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> <http://lists.openstack.org?subject:unsubscribe><
> http://lists.openstack.org/?subject:unsubscribe>
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> <http://lists.openstack.org?subject:unsubscribe><
> http://lists.openstack.org/?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Thanks,
>
> Jay Lau (Guangya Liu)
>
> __
> OpenStack Development Mailing List (not for usage questions) Unsubscribe:
> openstack-dev-requ...@lists.openstack.org openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions) Unsubscribe:
> openstack-dev-requ...@lists.openstack.org openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> <http://lists.openstack.org?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
> Thanks,
>
> Jay Lau (Guangya Liu)
>
> __
> OpenStack Development Mailing List (not for usage questions) Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Using docker container to run COE daemons

2015-11-26 Thread Jay Lau
One of the benefits of running daemons in docker containers is that the
cluster can be upgraded more easily. Take mesos as an example: if I can run
mesos in a container, then when updating the mesos slave with some hot
fixes, I can upgrade the mesos slave to a new version in a gray (rolling)
upgrade, do A/B testing, etc.
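
As a hedged illustration of that gray-upgrade idea, here is a sketch using the
Docker SDK for Python; the image name, tag, mount paths and environment are
assumptions for illustration, not what a Magnum mesos bay actually uses:

# Illustrative only: replace a containerized mesos slave with a newer image,
# one host at a time, keeping its work_dir on the host so state survives.
import docker

client = docker.from_env()


def upgrade_mesos_slave(new_tag):
    image = "mesosphere/mesos-slave:%s" % new_tag
    client.images.pull(image)

    # Stop and remove the old slave container if it exists.
    try:
        old = client.containers.get("mesos-slave")
        old.stop()
        old.remove()
    except docker.errors.NotFound:
        pass

    # Start the new version with the same host paths and settings.
    client.containers.run(
        image,
        name="mesos-slave",
        detach=True,
        network_mode="host",
        privileged=True,
        volumes={"/var/lib/mesos": {"bind": "/var/lib/mesos", "mode": "rw"}},
        environment={"MESOS_MASTER": "zk://zookeeper.example.com:2181/mesos"},
    )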

On Fri, Nov 27, 2015 at 12:01 AM, Hongbin Lu <hongbin...@huawei.com> wrote:

> Jay,
>
>
>
> Agree and disagree. Containerizing some COE daemons will facilitate
> version upgrades and maintenance. However, I don’t think it is correct to
> blindly containerize everything unless there is an investigation performed
> to understand the benefits and costs of doing that. Quoting Egor, the
> common practice in k8s is to containerize everything except the kubelet,
> because it seems it is just too hard to containerize everything. In the case
> of mesos, I am not sure if it is a good idea to move everything to containers,
> given the fact that it is relatively easy to manage and upgrade debian
> packages on Ubuntu. However, in the new CoreOS mesos bay [1], mesos daemons
> will run in containers.
>
>
>
> In summary, I think the correct strategy is to selectively containerize
> some COE daemons, but we don’t have to containerize **all** COE daemons.
>
>
>
> [1] https://blueprints.launchpad.net/magnum/+spec/mesos-bay-with-coreos
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Jay Lau [mailto:jay.lau@gmail.com]
> *Sent:* November-26-15 2:06 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [magnum] Using docker container to run COE
> daemons
>
>
>
> Thanks Kai Qiang, I filed a bp for the mesos bay here:
> https://blueprints.launchpad.net/magnum/+spec/mesos-in-container
>
>
>
> On Thu, Nov 26, 2015 at 8:11 AM, Kai Qiang Wu <wk...@cn.ibm.com> wrote:
>
> Hi Jay,
>
> For the Kubernetes COE container work, I think @Hua Wang is doing that.
>
> For the swarm COE, swarm already has the master and agent running in
> containers.
>
> For mesos, there is no container work yet. Maybe someone has already
> drafted a bp on it? Not quite sure.
>
>
>
> Thanks
>
> Best Wishes,
>
> 
> Kai Qiang Wu (吴开强 Kennan)
> IBM China System and Technology Lab, Beijing
>
> E-mail: wk...@cn.ibm.com
> Tel: 86-10-82451647
> Address: Building 28(Ring Building), ZhongGuanCun Software Park,
> No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193
>
> --------
> Follow your heart. You are miracle!
>
>
> From: Jay Lau <jay.lau@gmail.com>
> To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
> Date: 26/11/2015 07:15 am
> Subject: [openstack-dev] [magnum] Using docker container to run COE
> daemons
> --
>
>
>
>
> Hi,
>
> It is becoming more and more popular to use docker containers to run
> applications, so what about leveraging this in Magnum?
>
> What I want to do is put all COE daemons in docker containers, because
> Kubernetes, Mesos and Swarm now support running in docker containers, and
> there are already existing docker images/dockerfiles which we can leverage.
>
> So what about updating all COE templates to use docker containers to run the
> COE daemons and maintaining dockerfiles for the different COEs in Magnum?
> This can reduce the maintenance effort for a COE: if there is a new version
> and we want to upgrade, updating the dockerfile is enough. Comments?
>
> --
> Thanks,
>
> Jay Lau (Guangya Liu)
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
>
> Thanks,
>
> Jay Lau (Guangya Liu)
>
> __

[openstack-dev] [magnum] Using docker container to run COE daemons

2015-11-25 Thread Jay Lau
Hi,

It is becoming more and more popular to use docker containers to run
applications, so what about leveraging this in Magnum?

What I want to do is put all COE daemons in docker containers, because
Kubernetes, Mesos and Swarm now support running in docker containers, and
there are already existing docker images/dockerfiles which we can leverage.

So what about updating all COE templates to use docker containers to run the
COE daemons and maintaining dockerfiles for the different COEs in Magnum?
This can reduce the maintenance effort for a COE: if there is a new version
and we want to upgrade, updating the dockerfile is enough. Comments?

-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Using docker container to run COE daemons

2015-11-25 Thread Jay Lau
Thanks Kai Qiang, I filed a bp for the mesos bay here:
https://blueprints.launchpad.net/magnum/+spec/mesos-in-container

On Thu, Nov 26, 2015 at 8:11 AM, Kai Qiang Wu <wk...@cn.ibm.com> wrote:

> Hi Jay,
>
> For the Kubernetes COE container work, I think @Hua Wang is doing that.
>
> For the swarm COE, swarm already has the master and agent running in
> containers.
>
> For mesos, there is no container work yet. Maybe someone has already
> drafted a bp on it? Not quite sure.
>
>
>
> Thanks
>
> Best Wishes,
>
> 
> Kai Qiang Wu (吴开强 Kennan)
> IBM China System and Technology Lab, Beijing
>
> E-mail: wk...@cn.ibm.com
> Tel: 86-10-82451647
> Address: Building 28(Ring Building), ZhongGuanCun Software Park,
> No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193
>
> --------
> Follow your heart. You are miracle!
>
>
> From: Jay Lau <jay.lau@gmail.com>
> To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
> Date: 26/11/2015 07:15 am
> Subject: [openstack-dev] [magnum] Using docker container to run COE
> daemons
> --
>
>
>
> Hi,
>
> It is becoming more and more popular to use docker containers to run
> applications, so what about leveraging this in Magnum?
>
> What I want to do is put all COE daemons in docker containers, because
> Kubernetes, Mesos and Swarm now support running in docker containers, and
> there are already existing docker images/dockerfiles which we can leverage.
>
> So what about updating all COE templates to use docker containers to run the
> COE daemons and maintaining dockerfiles for the different COEs in Magnum?
> This can reduce the maintenance effort for a COE: if there is a new version
> and we want to upgrade, updating the dockerfile is enough. Comments?
>
> --
> Thanks,
>
> Jay Lau (Guangya Liu)
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum-ui][magnum] Suggestions for Features/Improvements

2015-11-24 Thread Jay Lau
Thanks Brad, I just added a new item: "Enable end users to create container
objects via magnum-ui, such as pods, rcs, services, etc." It would be great
if we could let end users create container applications via the magnum UI.
Comments? Thanks!

On Wed, Nov 25, 2015 at 1:15 AM, Bradley Jones (bradjone) <
bradj...@cisco.com> wrote:

> We have started to compile a list of possible features/improvements for
> Magnum-UI. If you have any suggestions about what you would like to see in
> the plugin please leave them in the etherpad so we can prioritise what we
> are going to work on.
>
> https://etherpad.openstack.org/p/magnum-ui-feature-list
>
> Thanks,
> Brad Jones
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Mesos Conductor

2015-11-19 Thread Jay Lau
>
>
>
> @hongbin, @adrian I agree with you. So can we go ahead with magnum
> container-create(delete) ... for the mesos bay (which actually creates a
> mesos (marathon) app internally)?
>
> @jay, yes, we have multiple frameworks which use the mesos lib. But the
> mesos bay we are creating uses marathon. We had a discussion on IRC about
> this topic, and I was asked to implement the initial version for marathon.
> I also agree with you on having a unified client interface for creating pods
> and apps.
>
> Regards
> Bharath T
>
> 
> Date: Thu, 19 Nov 2015 10:01:35 +0800
> From: jay.lau@gmail.com<mailto:jay.lau@gmail.com>
> To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [magnum] Mesos Conductor
>
> +1.
>
> One problem I want to mention is that for mesos integration, we cannot be
> limited to Marathon + Mesos, as there are many frameworks that can run on top
> of Mesos, such as Chronos, Kubernetes, etc. We may need to consider more for
> Mesos integration, as there is a huge eco-system built on top of Mesos.
>
> On Thu, Nov 19, 2015 at 8:26 AM, Adrian Otto <adrian.o...@rackspace.com
> <mailto:adrian.o...@rackspace.com>> wrote:
>
> Bharath,
>
> I agree with Hongbin on this. Let’s not expand magnum to deal with apps or
> appgroups in the near term. If there is a strong desire to add these
> things, we could allow it by having a plugin/extensions interface for the
> Magnum API to allow additional COE specific features. Honestly, it’s just
> going to be a nuisance to keep up with the various upstreams until they
> become completely stable from an API perspective, and no additional changes
> are likely. All of our COE’s still have plenty of maturation ahead of them,
> so this is the wrong time to wrap them.
>
> If someone really wants apps and appgroups, (s)he could add that to an
> experimental branch of the magnum client, and have it interact with the
> marathon API directly rather than trying to represent those resources in
> Magnum. If that tool became popular, then we could revisit this topic for
> further consideration.
>
> Adrian
>
> > On Nov 18, 2015, at 3:21 PM, Hongbin Lu <hongbin...@huawei.com hongbin...@huawei.com>> wrote:
> >
> > Hi Bharath,
> >
> > I agree the “container” part. We can implement “magnum container-create
> ..” for mesos bay in the way you mentioned. Personally, I don’t like to
> introduce “apps” and “appgroups” resources to Magnum, because they are
> already provided by native tool [1]. I couldn’t see the benefits to
> implement a wrapper API to offer what native tool already offers. However,
> if you can point out a valid use case to wrap the API, I will give it more
> thoughts.
> >
> > Best regards,
> > Hongbin
> >
> > [1] https://docs.mesosphere.com/using/cli/marathonsyntax/
> >
> > From: bharath thiruveedula [mailto:bharath_...@hotmail.com bharath_...@hotmail.com>]
> > Sent: November-18-15 1:20 PM
> > To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org>
> > Subject: [openstack-dev] [magnum] Mesos Conductor
> >
> > Hi all,
> >
> > I am working on the blueprint [1]. As per my understanding, we have two
> resources/objects in mesos+marathon:
> >
> > 1)Apps: combination of instances/containers running on multiple hosts
> representing a service.[2]
> > 2)Application Groups: Group of apps, for example we can have database
> application group which consists mongoDB app and MySQL App.[3]
> >
> > So I think we need to have two resources 'apps' and 'appgroups' in mesos
> conductor like we have pod and rc for k8s. And regarding 'magnum container'
> command, we can create, delete and retrieve container details as part of
> mesos app itself(container = app with 1 instance). Though I think in mesos
> case 'magnum app-create ..." and 'magnum container-create ...' will use the
> same REST API for both cases.
> >
> > Let me know your opinion/comments on this and correct me if I am wrong
> >
> > [1]https://blueprints.launchpad.net/magnum/+spec/mesos-conductor.
> > [2]https://mesosphere.github.io/marathon/docs/application-basics.html
> > [3]https://mesosphere.github.io/marathon/docs/application-groups.html
> >
> >
> > Regards
> > Bharath T
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<
> http://lists.openstack.org/?subject:unsubscribe>
> > http://lists.openstack.org/cgi-bin/m

Re: [openstack-dev] [magnum] Mesos Conductor

2015-11-18 Thread Jay Lau
Just to add more input on this topic: I was also thinking about how we
can unify the client interface for Magnum.

Currently, Kubernetes uses "kubectl create" to create all of its objects,
including pod, rc, service, pv, pvc, hpa, etc., from yaml, json, yml or stdin
input; Marathon also uses yaml/json files to create applications. In my
understanding, it is difficult to unify the concepts of all COEs, but at
least it seems many COEs are trying to unify the input and output: all using
the same file formats as input and producing the same format as output.

This is a good signal for Magnum, and Magnum can leverage those features to
unify the client interface for different COEs, i.e. we could use "magnum
app create" to create a pod, rc, service, pv, pvc, or even a marathon app.
Just some early thinking from my side...
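
To make the idea concrete, here is a purely illustrative sketch of such a
unified "app create" entry point. The client objects and their methods are
hypothetical placeholders, not Magnum's or the COEs' actual Python APIs:

# Illustrative sketch: one "app create" call that reads a yaml/json definition
# and dispatches to the right COE based on the bay type (hypothetical clients).
import json

import yaml  # PyYAML; both kubectl and Marathon accept YAML/JSON-style input


def load_definition(path):
    """Load a pod/rc/service/app definition from a YAML or JSON file."""
    with open(path) as f:
        if path.endswith((".yaml", ".yml")):
            return yaml.safe_load(f)
        return json.load(f)


def app_create(bay, definition_path, k8s_client=None, marathon_client=None):
    """Create the object described in definition_path on the bay's COE."""
    definition = load_definition(definition_path)
    if bay.coe == "kubernetes":
        # kubectl-style: the 'kind' field says whether this is a Pod, RC, Service...
        return k8s_client.create(kind=definition["kind"], body=definition)
    if bay.coe == "mesos":
        # Marathon-style: the definition is an app definition posted to /v2/apps.
        return marathon_client.create_app(definition)
    raise ValueError("Unsupported COE: %s" % bay.coe)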

Thanks!

On Thu, Nov 19, 2015 at 10:01 AM, Jay Lau <jay.lau@gmail.com> wrote:

> +1.
>
> One problem I want to mention is that for mesos integration, we cannot be
> limited to Marathon + Mesos, as there are many frameworks that can run on top
> of Mesos, such as Chronos, Kubernetes, etc. We may need to consider more for
> Mesos integration, as there is a huge eco-system built on top of Mesos.
>
> On Thu, Nov 19, 2015 at 8:26 AM, Adrian Otto <adrian.o...@rackspace.com>
> wrote:
>
>> Bharath,
>>
>> I agree with Hongbin on this. Let’s not expand magnum to deal with apps
>> or appgroups in the near term. If there is a strong desire to add these
>> things, we could allow it by having a plugin/extensions interface for the
>> Magnum API to allow additional COE specific features. Honestly, it’s just
>> going to be a nuisance to keep up with the various upstreams until they
>> become completely stable from an API perspective, and no additional changes
>> are likely. All of our COE’s still have plenty of maturation ahead of them,
>> so this is the wrong time to wrap them.
>>
>> If someone really wants apps and appgroups, (s)he could add that to an
>> experimental branch of the magnum client, and have it interact with the
>> marathon API directly rather than trying to represent those resources in
>> Magnum. If that tool became popular, then we could revisit this topic for
>> further consideration.
>>
>> Adrian
>>
>> > On Nov 18, 2015, at 3:21 PM, Hongbin Lu <hongbin...@huawei.com> wrote:
>> >
>> > Hi Bharath,
>> >
>> > I agree the “container” part. We can implement “magnum container-create
>> ..” for mesos bay in the way you mentioned. Personally, I don’t like to
>> introduce “apps” and “appgroups” resources to Magnum, because they are
>> already provided by native tool [1]. I couldn’t see the benefits to
>> implement a wrapper API to offer what native tool already offers. However,
>> if you can point out a valid use case to wrap the API, I will give it more
>> thoughts.
>> >
>> > Best regards,
>> > Hongbin
>> >
>> > [1] https://docs.mesosphere.com/using/cli/marathonsyntax/
>> >
>> > From: bharath thiruveedula [mailto:bharath_...@hotmail.com]
>> > Sent: November-18-15 1:20 PM
>> > To: openstack-dev@lists.openstack.org
>> > Subject: [openstack-dev] [magnum] Mesos Conductor
>> >
>> > Hi all,
>> >
>> > I am working on the blueprint [1]. As per my understanding, we have two
>> resources/objects in mesos+marathon:
>> >
>> > 1)Apps: combination of instances/containers running on multiple hosts
>> representing a service.[2]
>> > 2)Application Groups: Group of apps, for example we can have database
>> application group which consists mongoDB app and MySQL App.[3]
>> >
>> > So I think we need to have two resources 'apps' and 'appgroups' in
>> mesos conductor like we have pod and rc for k8s. And regarding 'magnum
>> container' command, we can create, delete and retrieve container details as
>> part of mesos app itself(container =  app with 1 instance). Though I think
>> in mesos case 'magnum app-create ..."  and 'magnum container-create ...'
>> will use the same REST API for both cases.
>> >
>> > Let me know your opinion/comments on this and correct me if I am wrong
>> >
>> > [1]https://blueprints.launchpad.net/magnum/+spec/mesos-conductor.
>> > [2]https://mesosphere.github.io/marathon/docs/application-basics.html
>> > [3]https://mesosphere.github.io/marathon/docs/application-groups.html
>> >
>> >
>> > Regards
>> > Bharath T
>> >
>> __________________
>> &g

Re: [openstack-dev] [magnum] Autoscaling both clusters and containers

2015-11-18 Thread Jay Lau
Cool, thanks Ton!
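
For anyone curious how the "autoscaler service" idea quoted below could look,
here is a minimal, purely hypothetical sketch of a two-level loop (the client
objects, methods and thresholds are placeholders, not Magnum or Senlin APIs):

# Hypothetical two-level autoscaling loop: level 1 scales containers inside
# the bay, level 2 grows the bay's node_count when pods cannot be scheduled.
import time


def autoscale_loop(coe_client, bay_client, bay_id, poll_interval=60):
    while True:
        usage = coe_client.cluster_cpu_utilization()    # e.g. 0.0 - 1.0
        pending = coe_client.unschedulable_pod_count()  # pods with no room left

        if pending > 0:
            # Level 2: grow the cluster itself, in the spirit of
            # `magnum bay-update <bay> replace node_count=N`.
            current = bay_client.get(bay_id).node_count
            bay_client.update(bay_id, [{"op": "replace",
                                        "path": "/node_count",
                                        "value": current + 1}])
        elif usage > 0.8:
            # Level 1: scale the application, e.g. bump a replication controller.
            coe_client.scale_busiest_rc(delta=1)

        time.sleep(poll_interval)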

On Thu, Nov 19, 2015 at 7:07 AM, Ton Ngo <t...@us.ibm.com> wrote:

> The slides for the Tokyo talk are available on slideshare:
>
> http://www.slideshare.net/huengo965921/exploring-magnum-and-senlin-integration-for-autoscaling-containers
>
> Ton,
>
>
>
> From: Jay Lau <jay.lau@gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: 11/17/2015 10:05 PM
> Subject: Re: [openstack-dev] [magnum] Autoscaling both clusters and
> containers
> --
>
>
>
> It's great that we are discussing this on the mailing list. I filed a bp here:
> *https://blueprints.launchpad.net/magnum/+spec/two-level-auto-scaling*
> <https://blueprints.launchpad.net/magnum/+spec/two-level-auto-scaling>
> and am planning a spec for it. You can get some early ideas from what Ton
> pointed to here:
> *https://www.openstack.org/summit/tokyo-2015/videos/presentation/exploring-magnum-and-senlin-integration-for-autoscaling-containers*
> <https://www.openstack.org/summit/tokyo-2015/videos/presentation/exploring-magnum-and-senlin-integration-for-autoscaling-containers>
>
> *@Ton*, is it possible that we publish the slides to slideshare? ;-)
>
> Our thinking was to introduce an autoscaler service in Magnum, just like
> what GCE is doing now; I will keep you updated when a spec is ready for review.
>
> On Wed, Nov 18, 2015 at 1:22 PM, Egor Guz <*e...@walmartlabs.com*
> <e...@walmartlabs.com>> wrote:
>
>Ryan
>
>    I haven’t seen any proposals/implementations from Mesos/Swarm (but I
>    am not following the Mesos and Swarm communities very closely these days).
>But Kubernetes 1.1 has pod autoscaling (
>
> *https://github.com/kubernetes/kubernetes/blob/master/docs/design/horizontal-pod-autoscaler.md*
>
> <https://github.com/kubernetes/kubernetes/blob/master/docs/design/horizontal-pod-autoscaler.md>
>),
>which should cover containers auto-scaling. Also there is PR for
>cluster auto-scaling (
>*https://github.com/kubernetes/kubernetes/pull/15304*
><https://github.com/kubernetes/kubernetes/pull/15304>), which
>has implementation for GCE, but OpenStack support can be added as well.
>
>—
>Egor
>
>From: Ton Ngo <t...@us.ibm.com>
>Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
>Date: Tuesday, November 17, 2015 at 16:58
>To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
>Subject: Re: [openstack-dev] [magnum] Autoscaling both clusters and
>containers
>
>
>Hi Ryan,
>There was a talk in the last Summit on this topics to explore the
>options with Magnum, Senlin, Heat, Kubernetes:
>
>
> https://www.openstack.org/summit/tokyo-2015/videos/presentation/exploring-magnum-and-senlin-integration-for-autoscaling-containers
>A demo was shown with Senlin interfacing to Magnum to autoscale.
>There was also a Magnum design session to discuss this same topics.
>The use cases are similar to what you describe. Because the subject is
>complex, there are many moving parts, and multiple teams/projects are
>involved, one outcome of the design session is that we will write a spec on
>autoscaling containers and cluster. A patch should be coming soon, so it
>would be great to have your input on the spec.
>Ton,
>
>From: Ryan Rossiter <rlros...@linux.vnet.ibm.com>

Re: [openstack-dev] [magnum] Mesos Conductor

2015-11-18 Thread Jay Lau
+1.

One problem I want to mention is that for Mesos integration we cannot be
limited to Marathon + Mesos, as there are many frameworks that can run on
top of Mesos, such as Chronos, Kubernetes etc. We may need to consider more
for the Mesos integration, as there is a huge ecosystem built on top of Mesos.

On Thu, Nov 19, 2015 at 8:26 AM, Adrian Otto <adrian.o...@rackspace.com>
wrote:

> Bharath,
>
> I agree with Hongbin on this. Let’s not expand magnum to deal with apps or
> appgroups in the near term. If there is a strong desire to add these
> things, we could allow it by having a plugin/extensions interface for the
> Magnum API to allow additional COE specific features. Honestly, it’s just
> going to be a nuisance to keep up with the various upstreams until they
> become completely stable from an API perspective, and no additional changes
> are likely. All of our COE’s still have plenty of maturation ahead of them,
> so this is the wrong time to wrap them.
>
> If someone really wants apps and appgroups, (s)he could add that to an
> experimental branch of the magnum client, and have it interact with the
> marathon API directly rather than trying to represent those resources in
> Magnum. If that tool became popular, then we could revisit this topic for
> further consideration.
>
> Adrian
>
> > On Nov 18, 2015, at 3:21 PM, Hongbin Lu <hongbin...@huawei.com> wrote:
> >
> > Hi Bharath,
> >
> > I agree the “container” part. We can implement “magnum container-create
> ..” for mesos bay in the way you mentioned. Personally, I don’t like to
> introduce “apps” and “appgroups” resources to Magnum, because they are
> already provided by native tool [1]. I couldn’t see the benefits to
> implement a wrapper API to offer what native tool already offers. However,
> if you can point out a valid use case to wrap the API, I will give it more
> thoughts.
> >
> > Best regards,
> > Hongbin
> >
> > [1] https://docs.mesosphere.com/using/cli/marathonsyntax/
> >
> > From: bharath thiruveedula [mailto:bharath_...@hotmail.com]
> > Sent: November-18-15 1:20 PM
> > To: openstack-dev@lists.openstack.org
> > Subject: [openstack-dev] [magnum] Mesos Conductor
> >
> > Hi all,
> >
> > I am working on the blueprint [1]. As per my understanding, we have two
> resources/objects in mesos+marathon:
> >
> > 1)Apps: combination of instances/containers running on multiple hosts
> representing a service.[2]
> > 2)Application Groups: Group of apps, for example we can have database
> application group which consists mongoDB app and MySQL App.[3]
> >
> > So I think we need to have two resources 'apps' and 'appgroups' in mesos
> conductor like we have pod and rc for k8s. And regarding 'magnum container'
> command, we can create, delete and retrieve container details as part of
> mesos app itself(container =  app with 1 instance). Though I think in mesos
> case 'magnum app-create ..."  and 'magnum container-create ...' will use
> the same REST API for both cases.
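
A minimal sketch of the native Marathon objects referred to above, for
readers unfamiliar with them; the Marathon URL and the app/group definitions
are placeholder assumptions, only meant to show what an 'apps'/'appgroups'
abstraction in the mesos conductor would wrap.

    # Illustrative use of Marathon's REST API (placeholder values).
    import requests

    MARATHON = "http://10.0.0.10:8080"   # assumed Marathon endpoint

    # An "app": N instances of one container or command.
    app = {
        "id": "/frontend",
        "container": {"type": "DOCKER",
                      "docker": {"image": "nginx", "network": "BRIDGE"}},
        "instances": 2,
        "cpus": 0.5,
        "mem": 256,
    }
    requests.post("%s/v2/apps" % MARATHON, json=app)

    # An "application group": related apps deployed together.
    group = {
        "id": "/database",
        "apps": [{"id": "mongo",
                  "container": {"type": "DOCKER",
                                "docker": {"image": "mongo"}},
                  "instances": 1, "cpus": 0.5, "mem": 512}],
    }
    requests.post("%s/v2/groups" % MARATHON, json=group)

    # A "magnum container-create" in a mesos bay would then map to an app
    # with instances=1, as suggested in the message above.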
> >
> > Let me know your opinion/comments on this and correct me if I am wrong
> >
> > [1]https://blueprints.launchpad.net/magnum/+spec/mesos-conductor.
> > [2]https://mesosphere.github.io/marathon/docs/application-basics.html
> > [3]https://mesosphere.github.io/marathon/docs/application-groups.html
> >
> >
> > Regards
> > Bharath T
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Autoscaling both clusters and containers

2015-11-17 Thread Jay Lau
 [2] https://wiki.openstack.org/wiki/Heat/AutoScaling#AutoScaling_API
>
> --
> Thanks,
>
> Ryan Rossiter (rlrossit)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Mesos orchestration as discussed at mid cycle (action required from core reviewers)

2015-11-03 Thread Jay Lau
>> > was semi-rushed.
>> >
>> > I am +1 given the above constraints.  I think this will help Kolla grow
>> and
>> > potentially provide a better (or arguably different) orchestration
>> system
>> > and is worth the investigation.  At no time will we put the existing
>> Kolla
>> > Ansible + Docker goodness into harms way, so I see no harm in an
>> independent
>> > repository especially if the core reviewer team strives to work as one
>> team
>> > (rather then two independent teams with the same code base).
>> >
>> > Abstaining is the same as voting as –1, so please vote one way or
>> another
>> > with a couple line blob about your thoughts on the idea.
>> >
>> > Note of the core reviewers there, we had 7 +1 votes (and we have a 9
>> > individual core reviewer team so there is already a majority but I’d
>> like to
>> > give everyone an opportunity weigh in).
>>
>> As one of the core reviewers who couldn't make the summit, this sounds
>> like a very exciting direction to go in. I'd love to see more docs (I
>> realize it's still early) on how mesos will be utilized and what
>> additional frameworks may be used as well. Is kubernetes planned to be
>> part of this mix since mesos works with it now?
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Georgy Okrokvertskhov
> Architect,
> OpenStack Platform Products,
> Mirantis
> http://www.mirantis.com
> Tel. +1 650 963 9828
> Mob. +1 650 996 3284
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][kolla] Stepping down as a Magnum core reviewer

2015-10-29 Thread Jay Lau
Hi Steve,

It is really a big loss to Magnum and thanks very much for your help in my
Magnum journey. Wish you good luck in Kolla!


On Tue, Oct 27, 2015 at 2:29 PM, 大塚元央 <yuany...@oeilvert.org> wrote:

> Hi Steve,
>
> I'm very sad about your stepping down from Magnum core. Without your help,
> I couldn't contribute to magnum project.
> But kolla is also fantastic project.
> I wish you the best of luck in kolla.
>
> Best regards.
> - Yuanying Otsuka
>
> On Tue, Oct 27, 2015 at 00:39 Baohua Yang <yangbao...@gmail.com> wrote:
>
>> Really a pity!
>>
>> We need more resources on the container part in OpenStack indeed, as so
>> many new projects are just initiated.
>>
>> Community is not only about putting technologies together, but also
>> putting technical guys together.
>>
>> Happy to see so many guys in the Tokyo Summit this afternoon.
>>
>> Let's take care of the opportunities to make good communications with
>> each other.
>>
>> On Mon, Oct 26, 2015 at 8:17 AM, Steven Dake (stdake) <std...@cisco.com>
>> wrote:
>>
>>> Hey folks,
>>>
>>> It is with sadness that I find myself under the situation to have to
>>> write this message.  I have the privilege of being involved in two of the
>>> most successful and growing projects (Magnum, Kolla) in OpenStack.  I chose
>>> getting involved in two major initiatives on purpose, to see if I could do
>>> the job; to see if  I could deliver two major initiatives at the same
>>> time.  I also wanted it to be a length of time that was significant – 1+
>>> year.  I found indeed I was able to deliver both Magnum and Kolla, however,
>>> the impact on my personal life has not been ideal.
>>>
>>> The Magnum engineering team is truly a world class example of how an
>>> Open Source project should be constructed and organized.  I hope some young
>>> academic writes a case study on it some day but until then, my gratitude to
>>> the Magnum core reviewer team is warranted by the level of  their sheer
>>> commitment.
>>>
>>> I am officially focusing all of my energy on Kolla going forward.  The
>>> Kolla core team elected me as PTL (or more accurately didn’t elect anyone
>>> else;) and I really want to be effective for them, especially in what I
>>> feel is Kolla’s most critical phase of growth.
>>>
>>> I will continue to fight  for engineering resources for Magnum
>>> internally in Cisco.  Some of these have born fruit already including the
>>> Heat resources, the Horizon plugin, and of course the Networking plugin
>>> system.  I will also continue to support Magnum from a resources POV where
>>> I can do so (like the fedora image storage for example).  What I won’t be
>>> doing is reviewing Magnum code (serving as a gate), or likely making much
>>> technical contribution to Magnum in the future.  On the plus side I’ve
>>> replaced myself with many many more engineers from Cisco who should be much
>>> more productive combined then I could have been alone ;)
>>>
>>> Just to be clear, I am not abandoning Magnum because I dislike the
>>> people or the technology.  I think the people are fantastic! And the
>>> technology – well I helped design the entire architecture!  I am letting
>>> Magnum grow up without me as I have other children that need more direct
>>> attention.  I think this viewpoint shows trust in the core reviewer team,
>>> but feel free to make your own judgements ;)
>>>
>>> Finally I want to thank Perry Myers for influencing me to excel at
>>> multiple disciplines at once.  Without Perry as a role model, Magnum may
>>> have never happened (or would certainly be much different then it is
>>> today). Being a solid hybrid engineer has a long ramp up time and is really
>>> difficult, but also very rewarding.  The community has Perry to blame for
>>> that ;)
>>>
>>> Regards
>>> -steve
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Best wishes!
>> Baohua
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [magnum] New Core Reviewers

2015-10-01 Thread Jay Lau
+1 for both! Welcome!

On Thu, Oct 1, 2015 at 7:07 AM, Hongbin Lu <hongbin...@huawei.com> wrote:

> +1 for both. Welcome!
>
>
>
> *From:* Davanum Srinivas [mailto:dava...@gmail.com]
> *Sent:* September-30-15 7:00 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [magnum] New Core Reviewers
>
>
>
> +1 from me for both Vilobh and Hua.
>
>
>
> Thanks,
>
> Dims
>
>
>
> On Wed, Sep 30, 2015 at 6:47 PM, Adrian Otto <adrian.o...@rackspace.com>
> wrote:
>
> Core Reviewers,
>
> I propose the following additions to magnum-core:
>
> +Vilobh Meshram (vilobhmm)
> +Hua Wang (humble00)
>
> Please respond with +1 to agree or -1 to veto. This will be decided by
> either a simple majority of existing core reviewers, or by lazy consensus
> concluding on 2015-10-06 at 00:00 UTC, in time for our next team meeting.
>
> Thanks,
>
> Adrian Otto
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
> --
>
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-29 Thread Jay Lau
> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>
> Wanghua,
>
> I do follow your logic, but docker-compose only needs the docker API to
> operate. We are intentionally avoiding re-inventing the wheel. Our goal is
> not to replace docker swarm (or other existing systems), but to compliment
> it/them. We want to offer users of Docker the richness of native APIs and
> supporting tools. This way they will not need to compromise features or
> wait longer for us to implement each new feature as it is added. Keep in
> mind that our pod, service, and replication controller resources pre-date
> this philosophy. If we started out with the current approach, those would
> not exist in Magnum.
>
> Thanks,
>
> Adrian
>
> On Sep 28, 2015, at 8:32 PM, 王华 <wanghua.hum...@gmail.com> wrote:
>
> Hi folks,
>
> Magnum now exposes service, pod, etc to users in kubernetes coe, but
> exposes container in swarm coe. As I know, swarm is only a scheduler of
> container, which is like nova in openstack. Docker compose is a
> orchestration program which is like heat in openstack. k8s is the
> combination of scheduler and orchestration. So I think it is better to
> expose the apis in compose to users which are at the same level as k8s.
>
>
> Regards
> Wanghua
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> <mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> <mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Is magnum db going to be removed for k8s resources?

2015-09-17 Thread Jay Lau
Does anyone have any comments or suggestions on this? Thanks!

On Mon, Sep 14, 2015 at 3:57 PM, Jay Lau <jay.lau@gmail.com> wrote:

> Hi Vikas,
>
> Thanks for starting this thread. Here are some of my comments.
>
> There are two reasons why Magnum wants to get k8s resources via the k8s
> API:
> 1) Native client support
> 2) With the current implementation, we cannot get the pods for a
> replication controller, because Magnum DB only persists the replication
> controller info.
>
> With the objects-from-bay bp, Magnum will always call the k8s API to get
> all objects for pod/service/rc. Can you please explain your concerns about
> why we need to persist those objects in the Magnum DB? We would need to
> sync up the Magnum DB and k8s periodically if we persist two copies of the
> objects.
>
> Thanks!
>
> <https://blueprints.launchpad.net/openstack/?searchtext=objects-from-bay>
>
> 2015-09-14 14:39 GMT+08:00 Vikas Choudhary <choudharyvika...@gmail.com>:
>
>> Hi Team,
>>
>> As per object-from-bay blueprint implementation [1], all calls to magnum db
>> are being skipped for example pod.create() etc.
>>
>> Are not we going to use magnum db at all for pods/services/rc ?
>>
>>
>> Thanks
>> Vikas Choudhary
>>
>>
>> [1] https://review.openstack.org/#/c/213368/
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Thanks,
>
> Jay Lau (Guangya Liu)
>



-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum]bay/baymodel sharing for multi-tenants

2015-09-17 Thread Jay Lau
Hi Magnum,

Currently, there are two blueprints related to bay/baymodel sharing across
different tenants:
1) https://blueprints.launchpad.net/magnum/+spec/tenant-shared-model
2) https://blueprints.launchpad.net/magnum/+spec/public-baymodels

What we want to do is make a bay/baymodel shareable by multiple tenants.
The reason is that creating a bay may be time consuming, and letting a bay
be shared by multiple tenants can save some time for some users.

Any comments on this?

-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Is magnum db going to be removed for k8s resources?

2015-09-14 Thread Jay Lau
Hi Vikas,

Thanks for starting this thread. Here are some of my comments.

There are two reasons why Magnum wants to get k8s resources via the k8s
API:
1) Native client support
2) With the current implementation, we cannot get the pods for a
replication controller, because Magnum DB only persists the replication
controller info.

With the objects-from-bay bp, Magnum will always call the k8s API to get
all objects for pod/service/rc. Can you please explain your concerns about
why we need to persist those objects in the Magnum DB? We would need to
sync up the Magnum DB and k8s periodically if we persist two copies of the
objects.

Thanks!

<https://blueprints.launchpad.net/openstack/?searchtext=objects-from-bay>
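
As an illustration of point 2) above, here is a minimal sketch of how the
pods backing a replication controller can be fetched live from the k8s API
instead of the Magnum DB; the endpoint and rc name are placeholder
assumptions.

    # Illustrative only: list the pods of an rc straight from the k8s API.
    import requests

    K8S_API = "http://10.0.0.5:8080"   # assumed bay api_address
    RC_NAME = "frontend"               # assumed rc name

    # Read the rc to discover its label selector.
    rc = requests.get(
        "%s/api/v1/namespaces/default/replicationcontrollers/%s"
        % (K8S_API, RC_NAME)).json()
    selector = rc["spec"]["selector"]   # e.g. {"app": "frontend"}
    label_selector = ",".join("%s=%s" % kv for kv in selector.items())

    # List only the pods matching that selector.
    pods = requests.get(
        "%s/api/v1/namespaces/default/pods" % K8S_API,
        params={"labelSelector": label_selector}).json()

    for pod in pods.get("items", []):
        print("%s %s" % (pod["metadata"]["name"], pod["status"]["phase"]))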

2015-09-14 14:39 GMT+08:00 Vikas Choudhary <choudharyvika...@gmail.com>:

> Hi Team,
>
> As per object-from-bay blueprint implementation [1], all calls to magnum db
> are being skipped for example pod.create() etc.
>
> Are not we going to use magnum db at all for pods/services/rc ?
>
>
> Thanks
> Vikas Choudhary
>
>
> [1] https://review.openstack.org/#/c/213368/
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum-ui] Status of Magnum UI

2015-08-26 Thread Jay Lau
Hi,

I see that we already have a magnum-ui team; I am just wondering what the
current status of this project is. I'm planning a PoC and want to see if
there is any existing magnum-ui work that I can leverage.

-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][blueprint] magnum-service-list

2015-08-03 Thread Jay Lau
Hi Suro,

Yes, I do not see a strong reason for adding service-list to show all of
the magnum system services, but it is nice to have.

But I do see a strong reason to rename service-list to
coe-service-list or something else that might be more meaningful, as I was
often asked why magnum service-list shows services in Kubernetes but not
the magnum system itself. This command always confuses people.

Thanks!

2015-08-03 15:36 GMT-04:00 SURO suro.p...@gmail.com:

 Hi Jay,

 Thanks for clarifying the requirements further.

 I do agree with the idea of having 'magnum service-list' and 'magnum
 coe-service-list' to distinguish that coe-service is a different concept.
 BUT, in openstack space, I do not see 'service-list' as a standardized
 function across other APIs -

1.  'nova service-list' = Enlists services like api, conductor etc.
2.  neutron does not have this option.
3.  'heat service-list' = Enlists available engines.
4.  'keystone service-list' = Enlists services/APIs who consults
keystone.

 Now in magnum, we may choose to model it after nova, but nova really has a
 bunch of backend services, viz. nova-conductor, nova-cert, nova-scheduler,
 nova-consoleauth, nova-compute[x N], whereas magnum not.

 For magnum, at this point creating 'service-list' only for api/conductor -
 do you see a strong need?

 Regards,
 SURO
 irc//freenode: suro-patz

 On 8/3/15 12:00 PM, Jay Lau wrote:

 Hi Suro and others, comments on this? Thanks.

 2015-07-30 5:40 GMT-04:00 Jay Lau jay.lau@gmail.com:

 Hi Suro,

 In my understanding, even if other COEs might have service/pod/rc concepts
 in the future, we may still want to distinguish the magnum service-list
 from magnum coe-service-list.

 service-list is mainly for magnum native services, such as magnum-api,
 magnum-conductor etc.
 coe-service-list is mainly for the services that run on the COEs in magnum.

 Thoughts? Thanks.

 2015-07-29 17:50 GMT-04:00 SURO suro.p...@gmail.com:

 Hi Hongbin,

 What would be the value of having COE-specific magnum command to go and
 talk to DB? As in that case, user may use the native client itself to fetch
 the data from COE, which even will have latest state.

 In a pluggable architecture there is always scope for common abstraction
 and driver implementation. I think it is too early to declare
 service/rc/pod as specific to k8s, as the other COEs may very well converge
 onto similar/same concepts.

 Regards,
 SURO
 irc//freenode: suro-patz

 On 7/29/15 2:21 PM, Hongbin Lu wrote:

 Suro,



 I think service/pod/rc are k8s-specific. +1 for Jay’s suggestion about
 renaming COE-specific command, since the new naming style looks consistent
 with other OpenStack projects. In addition, it will eliminate name
 collision of different COEs. Also, if we are going to support pluggable
 COE, adding prefix to COE-specific command is unavoidable.



 Best regards,

 Hongbin



 *From:* SURO [mailto:suro.p...@gmail.com]
 *Sent:* July-29-15 4:03 PM
 *To:* Jay Lau
 *Cc:* s...@yahoo-inc.coms...@yahoo-inc.com; OpenStack Development
 Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [magnum][blueprint] magnum-service-list



 Hi Jay,

 'service'/'pod'/'rc' are conceptual abstraction at magnum level. Yes,
 the abstraction was inspired from the same in kubernetes, but the data
 stored in DB about a 'service' is properly abstracted and not k8s-specific
 at the top level.

 If we plan to change this to 'k8s-service-list', the same applies for
 even creation and other actions. This will give rise to COE-specific
 command and concepts and which may proliferate further. Instead, we can
 abstract swarm's service concept under the umbrella of magnum's 'service'
 concept without creating k8s-service and swarm-service.

 I suggest we should keep the concept/abstraction at Magnum level, as it
 is.

 Regards,

 SURO

 irc//freenode: suro-patz

 On 7/28/15 7:59 PM, Jay Lau wrote:

 Hi Suro,

 Sorry for late. IMHO, even the magnum service-list is getting data
 from DB, but the DB is actually persisting some data for Kubernetes
 service, so my thinking is it possible to change magnum service-list to
 magnum k8s-service-list, same for pod and rc.

 I know this might bring some trouble for backward compatibility issue,
 not sure if it is good to do such modification at this time. Comments?

 Thanks



 2015-07-27 20:12 GMT-04:00 SURO suro.p...@gmail.com:

 Hi all,
 As we did not hear back further on the requirement of this blueprint, I
 propose to keep the existing behavior without any modification.

 We would like to explore the decision on this blueprint on our next
 weekly IRC meeting[1].


 Regards,

 SURO

 irc//freenode: suro-patz



 [1] - https://wiki.openstack.org/wiki/Meetings/Containers

 2015-07-28

 UTC 2200 Tuesday



 On 7/21/15 4:54 PM, SURO wrote:

 Hi all, [special attention: Jay Lau] The bp[1] registered, asks for the
 following

Re: [openstack-dev] [magnum][blueprint] magnum-service-list

2015-08-03 Thread Jay Lau
Cool, Suro! It's great that we finally reached an agreement on this ;-)

2015-08-03 20:43 GMT-04:00 SURO suro.p...@gmail.com:

 Thanks Jay/Kennan/Adrian for chiming in!

 From this, I conclude that we have enough consensus to have 'magnum
 service-list' and 'magnum coe-service-list' segregated. I will capture
 extract of this discussion at the blueprint and start implementation of the
 same.

 Kennan,
 I would request you to submit a different bp/bug to address the staleness
 of the state of pod/rc.


 Regards,
 SURO
 irc//freenode: suro-patz

 On 8/3/15 5:33 PM, Kai Qiang Wu wrote:

 Hi Suro and Jay,

 I checked discussion below, and I do believe we also need service-list(for
 just magnum-api and magnum-conductor), but not so emergent requirement.

 I also think service-list should not bind to k8s or swarm etc. (can use
 coe-service etc.)


 But I have more for below:

 1) For k8s or swarm or mesos,  I think magnum can expose through the
 coe-service-list.
 But if right now, we fetched status from DB for pods/rcs status, It seems
 not proper to do that, as DB has old data. We need to fetch that through
 k8s/swarm API endpoints.


 2)  It can also expose that through k8s/swarm/mesos client tools. If users
 like that.


 Thanks

 Best Wishes,

 
 Kai Qiang Wu (吴开强  Kennan)
 IBM China System and Technology Lab, Beijing

 E-mail: wk...@cn.ibm.com
 Tel: 86-10-82451647
 Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
 100193

 
 Follow your heart. You are miracle!


 From: Jay Lau jay.lau@gmail.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 08/04/2015 05:51 AM
 Subject: Re: [openstack-dev] [magnum][blueprint] magnum-service-list
 --



 Hi Suro,

 Yes, I did not see a strong reason for adding service-list to show all
 of magnum system services, but it is nice to have.

 But I did see a strong reason to rename service-list to
 coe-service-list or others which might be more meaningful as I was often
 asked by someone why does magnum service-list is showing some services in
 kubernetes but not magnum system itself? This command always make people
 confused.

 Thanks!

 2015-08-03 15:36 GMT-04:00 SURO suro.p...@gmail.com:

Hi Jay,

Thanks for clarifying the requirements further.

I do agree with the idea of having 'magnum service-list' and 'magnum
coe-service-list' to distinguish that coe-service is a different concept.
BUT, in openstack space, I do not see 'service-list' as a standardized
function across other APIs -
   1.  'nova service-list' = Enlists services like api, conductor
   etc.
   2.  neutron does not have this option.
   3.  'heat service-list' = Enlists available engines.
   4.  'keystone service-list' = Enlists services/APIs who consults
   keystone.
Now in magnum, we may choose to model it after nova, but nova really
has a bunch of backend services, viz. nova-conductor, nova-cert,
nova-scheduler, nova-consoleauth, nova-compute[x N], whereas magnum not.

For magnum, at this point creating 'service-list' only for
api/conductor - do you see a strong need?

Regards,
SURO
irc//freenode: suro-patz

On 8/3/15 12:00 PM, Jay Lau wrote:
   Hi Suro and others, comments on this? Thanks.

   2015-07-30 5:40 GMT-04:00 Jay Lau *jay.lau@gmail.com*
   jay.lau@gmail.com:
  Hi Suro,

  In my understanding, even other CoE might have service/pod/rc
  concepts in future, we may still want to distinguish the magnum
  service-list with magnum coe-service-list.

  service-list is mainly for magnum native services, such as
  magnum-api, magnum-conductor etc.
  coe-service-list mainly for the services that running for the
  CoEs in magnum.

  Thoughts? Thanks.

  2015-07-29 17:50 GMT-04:00 SURO *suro.p...@gmail.com*
  suro.p...@gmail.com:
 Hi Hongbin,

 What would be the value of having COE-specific magnum command
 to go and talk to DB? As in that case, user may use the native 
 client
 itself to fetch the data from COE, which even will have latest 
 state.

 In a pluggable architecture there is always scope for common
 abstraction and driver implementation. I think it is too

Re: [openstack-dev] [magnum][blueprint] magnum-service-list

2015-08-03 Thread Jay Lau
Hi Suro and others, comments on this? Thanks.

2015-07-30 5:40 GMT-04:00 Jay Lau jay.lau@gmail.com:

 Hi Suro,

 In my understanding, even if other COEs might have service/pod/rc concepts
 in the future, we may still want to distinguish the magnum service-list
 from magnum coe-service-list.

 service-list is mainly for magnum native services, such as magnum-api,
 magnum-conductor etc.
 coe-service-list is mainly for the services that run on the COEs in magnum.

 Thoughts? Thanks.

 2015-07-29 17:50 GMT-04:00 SURO suro.p...@gmail.com:

 Hi Hongbin,

 What would be the value of having COE-specific magnum command to go and
 talk to DB? As in that case, user may use the native client itself to fetch
 the data from COE, which even will have latest state.

 In a pluggable architecture there is always scope for common abstraction
 and driver implementation. I think it is too early to declare
 service/rc/pod as specific to k8s, as the other COEs may very well converge
 onto similar/same concepts.

 Regards,
 SURO
 irc//freenode: suro-patz

 On 7/29/15 2:21 PM, Hongbin Lu wrote:

 Suro,



 I think service/pod/rc are k8s-specific. +1 for Jay’s suggestion about
 renaming COE-specific command, since the new naming style looks consistent
 with other OpenStack projects. In addition, it will eliminate name
 collision of different COEs. Also, if we are going to support pluggable
 COE, adding prefix to COE-specific command is unavoidable.



 Best regards,

 Hongbin



 *From:* SURO [mailto:suro.p...@gmail.com]
 *Sent:* July-29-15 4:03 PM
 *To:* Jay Lau
 *Cc:* s...@yahoo-inc.com; OpenStack Development Mailing List (not for
 usage questions)
 *Subject:* Re: [openstack-dev] [magnum][blueprint] magnum-service-list



 Hi Jay,

 'service'/'pod'/'rc' are conceptual abstraction at magnum level. Yes, the
 abstraction was inspired from the same in kubernetes, but the data stored
 in DB about a 'service' is properly abstracted and not k8s-specific at the
 top level.

 If we plan to change this to 'k8s-service-list', the same applies for
 even creation and other actions. This will give rise to COE-specific
 command and concepts and which may proliferate further. Instead, we can
 abstract swarm's service concept under the umbrella of magnum's 'service'
 concept without creating k8s-service and swarm-service.

 I suggest we should keep the concept/abstraction at Magnum level, as it
 is.

 Regards,

 SURO

 irc//freenode: suro-patz

 On 7/28/15 7:59 PM, Jay Lau wrote:

 Hi Suro,

 Sorry for late. IMHO, even the magnum service-list is getting data from
 DB, but the DB is actually persisting some data for Kubernetes service, so
 my thinking is it possible to change magnum service-list to magnum
 k8s-service-list, same for pod and rc.

 I know this might bring some trouble for backward compatibility issue,
 not sure if it is good to do such modification at this time. Comments?

 Thanks



 2015-07-27 20:12 GMT-04:00 SURO suro.p...@gmail.com:

 Hi all,
 As we did not hear back further on the requirement of this blueprint, I
 propose to keep the existing behavior without any modification.

 We would like to explore the decision on this blueprint on our next
 weekly IRC meeting[1].


 Regards,

 SURO

 irc//freenode: suro-patz



 [1] - https://wiki.openstack.org/wiki/Meetings/Containers

 2015-07-28

 UTC 2200 Tuesday



 On 7/21/15 4:54 PM, SURO wrote:

 Hi all, [special attention: Jay Lau] The bp[1] registered, asks for the
 following implementation -

- 'magnum service-list' should be similar to 'nova service-list'
- 'magnum service-list' should be moved to be ' magnum
k8s-service-list'. Also similar holds true for 'pod-list'/'rc-list'

 As I dug some details, I find -

- 'magnum service-list' fetches data from OpenStack DB[2], instead of
the COE endpoint. So technically it is not k8s-specific. magnum is serving
data for objects modeled as 'service', just the way we are catering for
'magnum container-list' in case of swarm bay.
- If magnum provides a way to get the COE endpoint details, users can
use native tools to fetch the status of the COE-specific objects, viz.
'kubectl get services' here.
- nova has lot more backend services, e.g. cert, scheduler,
consoleauth, compute etc. in comparison to magnum's conductor only. Also,
not all the api's have this 'service-list' available.

 With these arguments in view, can we have some more
 explanation/clarification in favor of the ask in the blueprint? [1] -
 https://blueprints.launchpad.net/magnum/+spec/magnum-service-list [2] -
 https://github.com/openstack/magnum/blob/master/magnum/objects/service.py#L114

 --

 Regards,

 SURO

 irc//freenode: suro-patz




 --

 Thanks,

 Jay Lau (Guangya Liu)




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org

Re: [openstack-dev] [magnum][blueprint] magnum-service-list

2015-07-30 Thread Jay Lau
Hi Suro,

In my understanding, even if other COEs might have service/pod/rc concepts
in the future, we may still want to distinguish "magnum service-list" from
"magnum coe-service-list".

service-list is mainly for magnum native services, such as magnum-api,
magnum-conductor etc.
coe-service-list is mainly for the services that run on the COEs in magnum.

Thoughts? Thanks.
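
To make the proposed split concrete, here is a small sketch of what the two
listings would be backed by; this is a hypothetical illustration only, not
existing Magnum code, and the k8s endpoint is a placeholder.

    # Hypothetical illustration of the service-list / coe-service-list split.
    import requests

    def magnum_service_list():
        # Magnum's own daemons (the analogue of "nova service-list").
        return [{"binary": "magnum-api", "state": "up"},
                {"binary": "magnum-conductor", "state": "up"}]

    def coe_service_list(k8s_api="http://10.0.0.5:8080"):
        # Services running inside the bay's COE (the analogue of
        # "kubectl get services"), fetched live from the k8s endpoint.
        resp = requests.get("%s/api/v1/namespaces/default/services" % k8s_api)
        return [svc["metadata"]["name"]
                for svc in resp.json().get("items", [])]

    print(magnum_service_list())
    print(coe_service_list())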

2015-07-29 17:50 GMT-04:00 SURO suro.p...@gmail.com:

  Hi Hongbin,

 What would be the value of having COE-specific magnum command to go and
 talk to DB? As in that case, user may use the native client itself to fetch
 the data from COE, which even will have latest state.

 In a pluggable architecture there is always scope for common abstraction
 and driver implementation. I think it is too early to declare
 service/rc/pod as specific to k8s, as the other COEs may very well converge
 onto similar/same concepts.

 Regards,
 SURO
 irc//freenode: suro-patz

 On 7/29/15 2:21 PM, Hongbin Lu wrote:

  Suro,



 I think service/pod/rc are k8s-specific. +1 for Jay’s suggestion about
 renaming COE-specific command, since the new naming style looks consistent
 with other OpenStack projects. In addition, it will eliminate name
 collision of different COEs. Also, if we are going to support pluggable
 COE, adding prefix to COE-specific command is unavoidable.



 Best regards,

 Hongbin



 *From:* SURO [mailto:suro.p...@gmail.com]
 *Sent:* July-29-15 4:03 PM
 *To:* Jay Lau
 *Cc:* s...@yahoo-inc.com; OpenStack Development Mailing List (not for
 usage questions)
 *Subject:* Re: [openstack-dev] [magnum][blueprint] magnum-service-list



 Hi Jay,

 'service'/'pod'/'rc' are conceptual abstraction at magnum level. Yes, the
 abstraction was inspired from the same in kubernetes, but the data stored
 in DB about a 'service' is properly abstracted and not k8s-specific at the
 top level.

 If we plan to change this to 'k8s-service-list', the same applies for even
 creation and other actions. This will give rise to COE-specific command and
 concepts and which may proliferate further. Instead, we can abstract
 swarm's service concept under the umbrella of magnum's 'service' concept
 without creating k8s-service and swarm-service.

 I suggest we should keep the concept/abstraction at Magnum level, as it
 is.

  Regards,

 SURO

 irc//freenode: suro-patz

  On 7/28/15 7:59 PM, Jay Lau wrote:

   Hi Suro,

 Sorry for late. IMHO, even the magnum service-list is getting data from
 DB, but the DB is actually persisting some data for Kubernetes service, so
 my thinking is it possible to change magnum service-list to magnum
 k8s-service-list, same for pod and rc.

 I know this might bring some trouble for backward compatibility issue, not
 sure if it is good to do such modification at this time. Comments?

 Thanks



 2015-07-27 20:12 GMT-04:00 SURO suro.p...@gmail.com:

 Hi all,
 As we did not hear back further on the requirement of this blueprint, I
 propose to keep the existing behavior without any modification.

 We would like to explore the decision on this blueprint on our next weekly
 IRC meeting[1].


 Regards,

 SURO

 irc//freenode: suro-patz



 [1] - https://wiki.openstack.org/wiki/Meetings/Containers

   2015-07-28

 UTC 2200 Tuesday



  On 7/21/15 4:54 PM, SURO wrote:

 Hi all, [special attention: Jay Lau] The bp[1] registered, asks for the
 following implementation -

- 'magnum service-list' should be similar to 'nova service-list'
- 'magnum service-list' should be moved to be ' magnum
k8s-service-list'. Also similar holds true for 'pod-list'/'rc-list'

 As I dug some details, I find -

- 'magnum service-list' fetches data from OpenStack DB[2], instead of
the COE endpoint. So technically it is not k8s-specific. magnum is serving
data for objects modeled as 'service', just the way we are catering for
'magnum container-list' in case of swarm bay.
- If magnum provides a way to get the COE endpoint details, users can
use native tools to fetch the status of the COE-specific objects, viz.
'kubectl get services' here.
- nova has lot more backend services, e.g. cert, scheduler,
consoleauth, compute etc. in comparison to magnum's conductor only. Also,
not all the api's have this 'service-list' available.

 With these arguments in view, can we have some more
 explanation/clarification in favor of the ask in the blueprint? [1] -
 https://blueprints.launchpad.net/magnum/+spec/magnum-service-list [2] -
 https://github.com/openstack/magnum/blob/master/magnum/objects/service.py#L114

 --

 Regards,

 SURO

 irc//freenode: suro-patz




 --

 Thanks,

 Jay Lau (Guangya Liu)




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] Announcing HyperStack project

2015-07-28 Thread Jay Lau
 kernel problem in LXC, isolated by VM
 - Work seamlessly with OpenStack components, Neutron, Cinder, due to
 the hypervisor nature
 - BYOK, bring-your-own-kernel is somewhat mandatory for a public cloud
 platform


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 --

 Thanks,

 Matt Riedemann


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing HyperStack project

2015-07-28 Thread Jay Lau
Thanks Adrian, we can talk later in the IRC meeting.

2015-07-28 4:07 GMT-04:00 Adrian Otto adrian.o...@rackspace.com:

  Jay,

  Yes, it is on the agenda.

  Thanks,

  Adrian

  On Jul 27, 2015, at 8:32 AM, Jay Lau jay.lau@gmail.com wrote:

   Adrian,

  Can we put hyper as a topic for this week's (Tomorrow) meeting? I want to
 have some discussion with you.

  Thanks

 2015-07-27 0:43 GMT-04:00 Adrian Otto adrian.o...@rackspace.com:

 Peng,

  For the record, the Magnum team is not yet comfortable with this
 proposal. This arrangement is not the way we think containers should be
 integrated with OpenStack. It completely bypasses Nova, and offers no Bay
 abstraction, so there is no user selectable choice of a COE (Container
 Orchestration Engine). We advised that it would be smarter to build a nova
 virt driver for Hyper, and integrate that with Magnum so that it could work
 with all the different bay types. It also produces a situation where
 operators can not effectively bill for the services that are in use by the
 consumers, there is no sensible infrastructure layer capacity management
 (scheduler), no encryption management solution for the communication
 between k8s minions/nodes and the k8s master, and a number of other
 weaknesses. I’m not convinced the single-tenant approach here makes sense.

  To be fair, the concept is interesting, and we are discussing how it
 could be integrated with Magnum. It’s appropriate for experimentation, but
 I would not characterize it as a “solution for cloud providers” for the
 above reasons, and the callouts I mentioned here:

  http://lists.openstack.org/pipermail/openstack-dev/2015-July/069940.html

  Positioning it that way is simply premature. I strongly suggest that
 you attend the Magnum team meetings, and work through these concerns as we
 had Hyper on the agenda last Tuesday, but you did not show up to discuss
 it. The ML thread was confused by duplicate responses, which makes it
 rather hard to follow.

  I think it’s a really bad idea to basically re-implement Nova in Hyper.
 Your’e already re-implementing Docker in Hyper. With a scope that’s too
 wide, you won’t be able to keep up with the rapid changes in these
 projects, and anyone using them will be unable to use new features that
 they would expect from Docker and Nova while you are busy copying all of
 that functionality each time new features are added. I think there’s a
 better approach available that does not require you to duplicate such a
 wide range of functionality. I suggest we work together on this, and select
 an approach that sets you up for success, and gives OpenStack could
 operators what they need to build services on Hyper.

  Regards,

  Adrian

   On Jul 26, 2015, at 7:40 PM, Peng Zhao p...@hyper.sh wrote:

 Hi all,
  I am glad to introduce the HyperStack project to you.
  HyperStack is a native, multi-tenant CaaS solution built on top of
 OpenStack. In terms of architecture, HyperStack = Bare-metal + Hyper +
 Kubernetes + Cinder + Neutron.
  HyperStack is different from Magnum in that HyperStack doesn't employ
 the Bay concept. Instead, HyperStack pools all bare-metal servers into one
 singe cluster. Due to the hypervisor nature in Hyper, different tenants'
 applications are completely isolated (no shared kernel), thus co-exist
 without security concerns in a same cluster.
  Given this, HyperStack is a solution for public cloud providers who want
 to offer the secure, multi-tenant CaaS.
  Ref:
 https://trello-attachments.s3.amazonaws.com/55545e127c7cbe0ec5b82f2b/1258x535/1c85a755dcb5e4a4147d37e6aa22fd40/upload_7_23_2015_at_11_00_41_AM.png
  The next step is to present a working beta of HyperStack at Tokyo
 summit, which we submitted a presentation:
 https://www.openstack.org/summit/tokyo-2015/vote-for-speakers/Presentation/4030.
 Please vote if you are interested.
  In the future, we want to integrate HyperStack with Magnum and Nova to
 make sure one OpenStack deployment can offer both IaaS and native CaaS
 services.
  Best,
 Peng
  -- Background
 ---
  Hyper is a hypervisor-agnostic Docker runtime. It allows to run Docker
 images with any hypervisor (KVM, Xen, Vbox, ESX). Hyper is different from
 the minimalist Linux distros like CoreOS by that Hyper runs on the physical
 box and load the Docker images from the metal into the VM instance, in
 which no guest OS is present. Instead, Hyper boots a minimalist kernel in
 the VM to host the Docker images (Pod).
  With this approach, Hyper is able to bring some encouraging results,
 which are similar to container:
 - 300ms to boot a new HyperVM instance with a pod of Docker images
 - 20MB for min mem footprint of a HyperVM instance
 - Immutable HyperVM, only kernel+images, serves as atomic unit (Pod) for
 scheduling
 - Immune from the shared kernel problem in LXC, isolated by VM
 - Work seamlessly with OpenStack components, Neutron, Cinder, due

Re: [openstack-dev] [magnum][blueprint] magnum-service-list

2015-07-28 Thread Jay Lau
Hi Suro,

Sorry for the late reply. IMHO, even though magnum service-list is getting
data from the DB, the DB is actually persisting data for Kubernetes
services, so my thinking is: would it be possible to change magnum
service-list to magnum k8s-service-list, and the same for pod and rc?

I know this might bring some trouble as a backward compatibility issue; I am
not sure if it is good to do such a modification at this time. Comments?

Thanks

2015-07-27 20:12 GMT-04:00 SURO suro.p...@gmail.com:

  Hi all,
 As we did not hear back further on the requirement of this blueprint, I
 propose to keep the existing behavior without any modification.

 We would like to explore the decision on this blueprint on our next weekly
 IRC meeting[1].


 Regards,
 SURO
 irc//freenode: suro-patz

 [1] - https://wiki.openstack.org/wiki/Meetings/Containers

 2015-07-28  UTC 2200 Tuesday

  On 7/21/15 4:54 PM, SURO wrote:

 Hi all, [special attention: Jay Lau] The bp[1] registered, asks for the
 following implementation -

- 'magnum service-list' should be similar to 'nova service-list'
- 'magnum service-list' should be moved to be ' magnum
k8s-service-list'. Also similar holds true for 'pod-list'/'rc-list'

 As I dug some details, I find -

- 'magnum service-list' fetches data from OpenStack DB[2], instead of
the COE endpoint. So technically it is not k8s-specific. magnum is serving
data for objects modeled as 'service', just the way we are catering for
'magnum container-list' in case of swarm bay.
- If magnum provides a way to get the COE endpoint details, users can
use native tools to fetch the status of the COE-specific objects, viz.
'kubectl get services' here.
- nova has lot more backend services, e.g. cert, scheduler,
consoleauth, compute etc. in comparison to magnum's conductor only. Also,
not all the api's have this 'service-list' available.

 With these arguments in view, can we have some more
 explanation/clarification in favor of the ask in the blueprint? [1] -
 https://blueprints.launchpad.net/magnum/+spec/magnum-service-list [2] -
 https://github.com/openstack/magnum/blob/master/magnum/objects/service.py#L114

 --
 Regards,
 SURO
 irc//freenode: suro-patz




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing HyperStack project

2015-07-27 Thread Jay Lau
: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper

2015-07-19 Thread Jay Lau
 with Kubernetes and also have plan integrate with mesos for scheduling.
 Once mesos integration finished, we can treat mesos+hyper as another kind
 of bay.

 Thanks

 2015-07-19 4:15 GMT+08:00 Hongbin Lu hongbin...@huawei.com:

Peng,



Several questions Here. You mentioned that HyperStack is a single big
“bay”. Then, who is doing the multi-host scheduling, Hyper or something
else? Were you suggesting to integrate Hyper with Magnum directly? Or you
were suggesting to integrate Hyper with Magnum indirectly (i.e. through
k8s, mesos and/or Nova)?



Best regards,

Hongbin



*From:* Peng Zhao [mailto:p...@hyper.sh]
 *Sent:* July-17-15 12:34 PM
 *To:* OpenStack Development Mailing List (not for usage questions)

*Subject:* Re: [openstack-dev] [magnum][bp] Power Magnum to run on
metal with Hyper



Hi, Adrian, Jay and all,



There could be a much longer version of this, but let me try to
explain in a minimalist way.



Bay currently has two modes: VM-based, BM-based. In both cases, Bay
helps to isolate different tenants' containers. In other words, bay is
single-tenancy. For BM-based bay, the single tenancy is a worthy tradeoff,
given the performance merits of LXC vs VM. However, for a VM-based bay,
there is no performance gain, but single tenancy seems a must, due to the
lack of isolation in container. Hyper, as a hypervisor-based substitute for
container, brings the much-needed isolation, and therefore enables
multi-tenancy. In HyperStack, we don't really need Ironic to provision
multiple Hyper bays. On the other hand,  the entire HyperStack cluster is a
single big bay. Pretty similar to how Nova works.



Also, HyperStack is able to leverage Cinder, Neutron for SDS/SDN
functionality. So when someone submits a Docker Compose app, HyperStack
would launch HyperVMs and call Cinder/Neutron to setup the volumes and
network. The architecture is quite simple.



Here are a blog I'd like to recommend:

 https://hyper.sh/blog/post/2015/06/29/docker-hyper-and-the-end-of-guest-os.html



Let me know your questions.



Thanks,

Peng



-- Original --

*From:* Adrian Otto adrian.o...@rackspace.com;

*Date:* Thu, Jul 16, 2015 11:02 PM

*To:* OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org;


*Subject: * Re: [openstack-dev] [magnum][bp] Power Magnum to run
onmetalwith Hyper



Jay,



Hyper is a substitute for a Docker host, so I expect it could work
equally well for all of the current bay types. Hyper’s idea of a “pod” and
a Kubernetes “pod” are similar, but different. I’m not yet convinced that
integrating Hyper host creation direct with Magnum (and completely
bypassing nova) is a good idea. It probably makes more sense to implement
use nova with the ironic dirt driver to provision Hyper hosts so we can use
those as substitutes for Bay nodes in our various Bay types. This would fit
in the place were we use Fedora Atomic today. We could still rely on nova
to do all of the machine instance management and accounting like we do
today, but produce bays that use Hyper instead of a Docker host. Everywhere
we currently offer CoreOS as an option we could also offer Hyper as an
alternative, with some caveats.



There may be some caveats/drawbacks to consider before committing to a
Hyper integration. I’ll be asking those of Peng also on this thread, so
keep an eye out.



Thanks,



Adrian


On Jul 16, 2015, at 3:23 AM, Jay Lau jay.lau@gmail.com wrote:



   Thanks Peng, then I can see two integration points for Magnum and
   Hyper:

   1) Once Hyper and k8s integration finished, we can deploy k8s in
   two mode: docker and hyper mode, the end user can select which mode they
   want to use. For such case, we do not need to create a new bay but may 
 need
   some enhancement for current k8s bay

   2) After mesos and hyper integration,  we can treat mesos and hyper
   as a new bay to magnum. Just like what we are doing now for 
 mesos+marathon.

   Thanks!



   2015-07-16 17:38 GMT+08:00 Peng Zhao p...@hyper.sh:


  Hi Jay,

 Yes, we are working with the community to integrate Hyper with Mesos and
 K8S. Since Hyper uses Pod as the default job unit, it is quite easy to
 integrate with K8S. Mesos takes a bit more efforts, but still
 straightforward.

 We expect to finish both integration in v0.4 early August.

 Best,
 Peng

 -
 Hyper - Make VM run like Container



 On Thu, Jul 16

Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper

2015-07-18 Thread Jay Lau
Thanks Adrian, I think that we are on the same page for this: using the nova
ironic driver to provision Hyper machines as a bay. ;-)  In my previous
email, I also mentioned two integration proposals, both assuming the ironic
driver provisions those Hyper machines.

1) Once the Hyper and k8s integration is finished, we can deploy k8s in two
modes: docker mode and hyper mode, and the end user can select which mode
they want to use. For such a case, we do not need to create a new bay, but
we may need some enhancement for the current k8s bay.

2) After the mesos and hyper integration, we can treat mesos + hyper as a
new bay type in magnum, just like what we are doing now for mesos+marathon.

2015-07-16 23:02 GMT+08:00 Adrian Otto adrian.o...@rackspace.com:

  Jay,

  Hyper is a substitute for a Docker host, so I expect it could work
 equally well for all of the current bay types. Hyper’s idea of a “pod” and
 a Kubernetes “pod” are similar, but different. I’m not yet convinced that
 integrating Hyper host creation directly with Magnum (and completely
 bypassing nova) is a good idea. It probably makes more sense to use
 nova with the ironic virt driver to provision Hyper hosts so we can use
 those as substitutes for Bay nodes in our various Bay types. This would fit
 in the place where we use Fedora Atomic today. We could still rely on nova
 to do all of the machine instance management and accounting like we do
 today, but produce bays that use Hyper instead of a Docker host. Everywhere
 we currently offer CoreOS as an option we could also offer Hyper as an
 alternative, with some caveats.

  There may be some caveats/drawbacks to consider before committing to a
 Hyper integration. I’ll be asking those of Peng also on this thread, so
 keep an eye out.

  Thanks,

  Adrian

  On Jul 16, 2015, at 3:23 AM, Jay Lau jay.lau@gmail.com wrote:

   Thanks Peng, then I can see two integration points for Magnum and Hyper:

  1) Once Hyper and k8s integration finished, we can deploy k8s in two
 mode: docker and hyper mode, the end user can select which mode they want
 to use. For such case, we do not need to create a new bay but may need some
 enhancement for current k8s bay

  2) After mesos and hyper integration,  we can treat mesos and hyper as a
 new bay to magnum. Just like what we are doing now for mesos+marathon.

  Thanks!

 2015-07-16 17:38 GMT+08:00 Peng Zhao p...@hyper.sh:

Hi Jay,

  Yes, we are working with the community to integrate Hyper with Mesos
 and K8S. Since Hyper uses Pod as the default job unit, it is quite easy to
 integrate with K8S. Mesos takes a bit more efforts, but still
 straightforward.

  We expect to finish both integration in v0.4 early August.

  Best,
  Peng

   -
 Hyper - Make VM run like Container



  On Thu, Jul 16, 2015 at 3:47 PM, Jay Lau jay.lau@gmail.com wrote:

   Hi Peng,


  Just want to get more for Hyper. If we create a hyper bay, then can I
 set up multiple hosts in a hyper bay? If so, who will do the scheduling,
 does mesos or some others integrate with hyper?

  I did not find much info for hyper cluster management.

  Thanks.

  2015-07-16 9:54 GMT+08:00 Peng Zhao p...@hyper.sh:







   -- Original --
  *From: * “Adrian Otto”adrian.o...@rackspace.com;
 *Date: * Wed, Jul 15, 2015 02:31 AM
 *To: * “OpenStack Development Mailing List (not for usage questions)“
 openstack-dev@lists.openstack.org;

  *Subject: * Re: [openstack-dev] [magnum][bp] Power Magnum to run
 onmetal withHyper

  Peng,

  On Jul 13, 2015, at 8:37 PM, Peng Zhao p...@hyper.sh wrote:

  Thanks Adrian!

  Hi, all,

  Let me recap what is hyper and the idea of hyperstack.

  Hyper is a single-host runtime engine. Technically,
 Docker = LXC + AUFS
 Hyper = Hypervisor + AUFS
 where AUFS is the Docker image.


  I do not understand the last line above. My understanding is that
 AUFS == UnionFS, which is used to implement a storage driver for Docker.
 Others exist for btrfs, and devicemapper. You select which one you want by
 setting an option like this:

  DOCKEROPTS=”-s devicemapper”

  Are you trying to say that with Hyper, AUFS is used to provide
 layered Docker image capabilities that are shared by multiple hypervisor
 guests?

Peng  Yes, AUFS implies the Docker images here.

My guess is that you are trying to articulate that a host running
 Hyper is a 1:1 substitute for a host running Docker, and will respond 
 using
 the Docker remote API. This would result in containers running on the same
 host that have a superior security isolation than they would if LXC was
 used as the backend to Docker. Is this correct?

   Peng Exactly


  Due to the shared-kernel nature of LXC, Docker lacks the
 necessary isolation in a multi-tenant CaaS platform, and this is what
 Hyper/hypervisor is good at.

  And because of this, most CaaS today run on top of IaaS:
 https://trello-attachments.s3.amazonaws.com/55545e127c7cbe0ec5b82f2b/388x275

Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper

2015-07-18 Thread Jay Lau
Hi Peng,

Please check some of my understandings in line.

Thanks


2015-07-18 0:33 GMT+08:00 Peng Zhao p...@hyper.sh:

 Hi, Adrian, Jay and all,

 There could be a much longer version of this, but let me try to explain in
 a minimalist way.

 Bay currently has two modes: VM-based, BM-based. In both cases, Bay helps
 to isolate different tenants' containers. In other words, bay is
 single-tenancy. For BM-based bay, the single tenancy is a worthy tradeoff,
 given the performance merits of LXC vs VM. However, for a VM-based bay,
 there is no performance gain, but single tenancy seems a must, due to the
 lack of isolation in container. Hyper, as a hypervisor-based substitute for
 container, brings the much-needed isolation, and therefore enables
 multi-tenancy. In HyperStack, we don't really need Ironic to provision
 multiple Hyper bays. On the other hand,  the entire HyperStack cluster is a
 single big bay. Pretty similar to how Nova works.

IMHO, only creating one big bay might not fit the Magnum user scenario
well; what you mentioned, putting the entire HyperStack in a single big bay,
is more like a public cloud case. But for some private cloud cases, there
are different users and tenants, and different tenants might want to set
up their own HyperStack bay on their own resources.


 Also, HyperStack is able to leverage Cinder, Neutron for SDS/SDN
 functionality. So when someone submits a Docker Compose app, HyperStack
 would launch HyperVMs and call Cinder/Neutron to setup the volumes and
 network. The architecture is quite simple.

This is cool!


 Here are a blog I'd like to recommend:
 https://hyper.sh/blog/post/2015/06/29/docker-hyper-and-the-end-of-guest-os.html

 Let me know your questions.

 Thanks,
 Peng

 -- Original --
  From: Adrian Otto adrian.o...@rackspace.com
  Date: Thu, Jul 16, 2015 11:02 PM
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run
  on metal with Hyper

 Jay,

  Hyper is a substitute for a Docker host, so I expect it could work
 equally well for all of the current bay types. Hyper’s idea of a “pod” and
 a Kubernetes “pod” are similar, but different. I’m not yet convinced that
 integrating Hyper host creation directly with Magnum (and completely
 bypassing nova) is a good idea. It probably makes more sense to use
 nova with the ironic virt driver to provision Hyper hosts so we can use
 those as substitutes for Bay nodes in our various Bay types. This would fit
 in the place where we use Fedora Atomic today. We could still rely on nova
 to do all of the machine instance management and accounting like we do
 today, but produce bays that use Hyper instead of a Docker host. Everywhere
 we currently offer CoreOS as an option we could also offer Hyper as an
 alternative, with some caveats.

  There may be some caveats/drawbacks to consider before committing to a
 Hyper integration. I’ll be asking those of Peng also on this thread, so
 keep an eye out.

  Thanks,

  Adrian

  On Jul 16, 2015, at 3:23 AM, Jay Lau jay.lau@gmail.com wrote:

   Thanks Peng, then I can see two integration points for Magnum and Hyper:

  1) Once Hyper and k8s integration finished, we can deploy k8s in two
 mode: docker and hyper mode, the end user can select which mode they want
 to use. For such case, we do not need to create a new bay but may need some
 enhancement for current k8s bay

  2) After mesos and hyper integration,  we can treat mesos and hyper as a
 new bay to magnum. Just like what we are doing now for mesos+marathon.

  Thanks!

 2015-07-16 17:38 GMT+08:00 Peng Zhao p...@hyper.sh:

Hi Jay,

  Yes, we are working with the community to integrate Hyper with Mesos
 and K8S. Since Hyper uses Pod as the default job unit, it is quite easy to
 integrate with K8S. Mesos takes a bit more efforts, but still
 straightforward.

  We expect to finish both integration in v0.4 early August.

  Best,
  Peng

   -
 Hyper - Make VM run like Container



  On Thu, Jul 16, 2015 at 3:47 PM, Jay Lau jay.lau@gmail.com wrote:

   Hi Peng,


  Just want to get more for Hyper. If we create a hyper bay, then can I
 set up multiple hosts in a hyper bay? If so, who will do the scheduling,
 does mesos or some others integrate with hyper?

  I did not find much info for hyper cluster management.

  Thanks.

  2015-07-16 9:54 GMT+08:00 Peng Zhao p...@hyper.sh:







   -- Original --
  From: "Adrian Otto" adrian.o...@rackspace.com
  Date: Wed, Jul 15, 2015 02:31 AM
  To: "OpenStack Development Mailing List (not for usage questions)"
  openstack-dev@lists.openstack.org

  Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run
  on metal with Hyper

  Peng,

  On Jul 13, 2015, at 8:37 PM, Peng Zhao p...@hyper.sh wrote:

  Thanks Adrian!

  Hi, all,

  Let me

Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper

2015-07-18 Thread Jay Lau
Hong Bin,

I had some online discussion with Peng; it seems hyper is now integrating
with Kubernetes and also has a plan to integrate with mesos for scheduling.
Once the mesos integration is finished, we can treat mesos+hyper as another kind
of bay.

Thanks

2015-07-19 4:15 GMT+08:00 Hongbin Lu hongbin...@huawei.com:

  Peng,



 Several questions Here. You mentioned that HyperStack is a single big
 “bay”. Then, who is doing the multi-host scheduling, Hyper or something
 else? Were you suggesting to integrate Hyper with Magnum directly? Or you
 were suggesting to integrate Hyper with Magnum indirectly (i.e. through
 k8s, mesos and/or Nova)?



 Best regards,

 Hongbin



 From: Peng Zhao [mailto:p...@hyper.sh]
 Sent: July-17-15 12:34 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal
 with Hyper



 Hi, Adrian, Jay and all,



 There could be a much longer version of this, but let me try to explain in
 a minimalist way.



 Bay currently has two modes: VM-based, BM-based. In both cases, Bay helps
 to isolate different tenants' containers. In other words, bay is
 single-tenancy. For BM-based bay, the single tenancy is a worthy tradeoff,
 given the performance merits of LXC vs VM. However, for a VM-based bay,
 there is no performance gain, but single tenancy seems a must, due to the
 lack of isolation in container. Hyper, as a hypervisor-based substitute for
 container, brings the much-needed isolation, and therefore enables
 multi-tenancy. In HyperStack, we don't really need Ironic to provision
 multiple Hyper bays. On the other hand,  the entire HyperStack cluster is a
 single big bay. Pretty similar to how Nova works.



 Also, HyperStack is able to leverage Cinder, Neutron for SDS/SDN
 functionality. So when someone submits a Docker Compose app, HyperStack
 would launch HyperVMs and call Cinder/Neutron to setup the volumes and
 network. The architecture is quite simple.



 Here are a blog I'd like to recommend:
 https://hyper.sh/blog/post/2015/06/29/docker-hyper-and-the-end-of-guest-os.html



 Let me know your questions.



 Thanks,

 Peng



 -- Original --

  From: Adrian Otto adrian.o...@rackspace.com

  Date: Thu, Jul 16, 2015 11:02 PM

  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org

  Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run
  on metal with Hyper



 Jay,



 Hyper is a substitute for a Docker host, so I expect it could work equally
 well for all of the current bay types. Hyper’s idea of a “pod” and a
 Kubernetes “pod” are similar, but different. I’m not yet convinced that
 integrating Hyper host creation directly with Magnum (and completely
 bypassing nova) is a good idea. It probably makes more sense to use
 nova with the ironic virt driver to provision Hyper hosts so we can use
 those as substitutes for Bay nodes in our various Bay types. This would fit
 in the place where we use Fedora Atomic today. We could still rely on nova
 to do all of the machine instance management and accounting like we do
 today, but produce bays that use Hyper instead of a Docker host. Everywhere
 we currently offer CoreOS as an option we could also offer Hyper as an
 alternative, with some caveats.



 There may be some caveats/drawbacks to consider before committing to a
 Hyper integration. I’ll be asking those of Peng also on this thread, so
 keep an eye out.



 Thanks,



 Adrian



  On Jul 16, 2015, at 3:23 AM, Jay Lau jay.lau@gmail.com wrote:



 Thanks Peng, then I can see two integration points for Magnum and Hyper:

 1) Once Hyper and k8s integration finished, we can deploy k8s in two mode:
 docker and hyper mode, the end user can select which mode they want to use.
 For such case, we do not need to create a new bay but may need some
 enhancement for current k8s bay

 2) After mesos and hyper integration,  we can treat mesos and hyper as a
 new bay to magnum. Just like what we are doing now for mesos+marathon.

 Thanks!



 2015-07-16 17:38 GMT+08:00 Peng Zhao p...@hyper.sh:

 Hi Jay,



 Yes, we are working with the community to integrate Hyper with Mesos and
 K8S. Since Hyper uses Pod as the default job unit, it is quite easy to
 integrate with K8S. Mesos takes a bit more efforts, but still
 straightforward.



 We expect to finish both integration in v0.4 early August.



 Best,

 Peng



 -

 Hyper - Make VM run like Container







 On Thu, Jul 16, 2015 at 3:47 PM, Jay Lau jay.lau@gmail.com wrote:

Hi Peng,

   Just want to get more for Hyper. If we create a hyper bay, then can I
 set up multiple hosts in a hyper bay? If so, who will do the scheduling,
 does mesos or some others integrate with hyper?

 I did not find much info for hyper cluster management.



 Thanks.



 2015-07-16 9:54 GMT+08:00 Peng Zhao p...@hyper.sh

Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper

2015-07-16 Thread Jay Lau
Thanks Peng, then I can see two integration points for Magnum and Hyper:

1) Once Hyper and k8s integration finished, we can deploy k8s in two mode:
docker and hyper mode, the end user can select which mode they want to use.
For such case, we do not need to create a new bay but may need some
enhancement for current k8s bay

2) After mesos and hyper integration,  we can treat mesos and hyper as a
new bay to magnum. Just like what we are doing now for mesos+marathon.

Thanks!

2015-07-16 17:38 GMT+08:00 Peng Zhao p...@hyper.sh:

Hi Jay,

 Yes, we are working with the community to integrate Hyper with Mesos and
 K8S. Since Hyper uses Pod as the default job unit, it is quite easy to
 integrate with K8S. Mesos takes a bit more efforts, but still
 straightforward.

 We expect to finish both integration in v0.4 early August.

 Best,
 Peng

 -
 Hyper - Make VM run like Container



 On Thu, Jul 16, 2015 at 3:47 PM, Jay Lau jay.lau@gmail.com wrote:

 Hi Peng,


 Just want to get more for Hyper. If we create a hyper bay, then can I set
 up multiple hosts in a hyper bay? If so, who will do the scheduling, does
 mesos or some others integrate with hyper?

 I did not find much info for hyper cluster management.

 Thanks.

 2015-07-16 9:54 GMT+08:00 Peng Zhao p...@hyper.sh:







 -- Original --
  From: "Adrian Otto" adrian.o...@rackspace.com
  Date: Wed, Jul 15, 2015 02:31 AM
  To: "OpenStack Development Mailing List (not for usage questions)"
  openstack-dev@lists.openstack.org

  Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run
  on metal with Hyper

 Peng,

  On Jul 13, 2015, at 8:37 PM, Peng Zhao p...@hyper.sh wrote:

  Thanks Adrian!

  Hi, all,

  Let me recap what is hyper and the idea of hyperstack.

  Hyper is a single-host runtime engine. Technically,
 Docker = LXC + AUFS
 Hyper = Hypervisor + AUFS
 where AUFS is the Docker image.


  I do not understand the last line above. My understanding is that AUFS
 == UnionFS, which is used to implement a storage driver for Docker. Others
 exist for btrfs, and devicemapper. You select which one you want by setting
 an option like this:

  DOCKEROPTS=”-s devicemapper”

  Are you trying to say that with Hyper, AUFS is used to provide
 layered Docker image capabilities that are shared by multiple hypervisor
 guests?

 Peng  Yes, AUFS implies the Docker images here.

 My guess is that you are trying to articulate that a host running Hyper
 is a 1:1 substitute for a host running Docker, and will respond using the
 Docker remote API. This would result in containers running on the same host
 that have a superior security isolation than they would if LXC was used as
 the backend to Docker. Is this correct?

 Peng Exactly


  Due to the shared-kernel nature of LXC, Docker lacks the necessary
 isolation in a multi-tenant CaaS platform, and this is what
 Hyper/hypervisor is good at.

  And because of this, most CaaS today run on top of IaaS:
 https://trello-attachments.s3.amazonaws.com/55545e127c7cbe0ec5b82f2b/388x275/e286dea1266b46c1999d566b0f9e326b/iaas.png
 Hyper enables the native, secure, bare-metal CaaS
 https://trello-attachments.s3.amazonaws.com/55545e127c7cbe0ec5b82f2b/395x244/828ad577dafb3f357e95899e962651b2/caas.png

  From the tech stack perspective, Hyperstack turns Magnum to run in
 parallel with Nova, not running atop it.


  For this to work, we’d expect to get a compute host from Heat, so if
 the bay type were set to “hyper”, we’d need to use a template that can
 produce a compute host running Hyper. How would that host be produced, if
 we do not get it from nova? Might it make more sense to make a dirt driver
 for nova that could produce a Hyper guest on a host already running the
 nova-compute agent? That way Magnum would not need to re-create any of
 Nova’s functionality in order to produce nova instances of type “hyper”.


 Peng  We don’t have to get the physical host from nova. Let’s say
OpenStack = Nova+Cinder+Neutron+Bare-metal+KVM, so “AWS-like IaaS for
 everyone else”
HyperStack= Magnum+Cinder+Neutron+Bare-metal+Hyper, then “Google-like
 CaaS for everyone else”

 Ideally, customers should deploy a single OpenStack cluster, with both
 nova/kvm and magnum/hyper. I’m looking for a solution to make nova/magnum
 co-exist.

 Is Hyper compatible with libvirt?


 Peng We are working on the libvirt integration, expect in v0.5


  Can Hyper support nested Docker containers within the Hyper guest?


 Peng Docker in Docker? In a HyperVM instance, there is no docker
 daemon, cgroup and namespace (except MNT for pod). VM serves the purpose
 of isolation. We plan to support cgroup and namespace, so you can control
 whether multiple containers in a pod share the same namespace, or
 completely isolated. But in either case, no docker daemon is present.


  Thanks,

  Adrian Otto


  Best,
 Peng

  -- Original --
  *From

Re: [openstack-dev] [magnum] Magnum template manage use platform VS others as a type?

2015-07-16 Thread Jay Lau
 any better name, just vote to disagree all, I
think that vote is not valid and not helpful to solve the issue.


Please help to vote for that name.


Thanks




Best Wishes,


 
Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
  No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193


 
Follow your heart. You are miracle!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper

2015-07-16 Thread Jay Lau
 integration:
 https://blueprints.launchpad.net/magnum/+spec/hyperstack

  Wanted to hear more thoughts and kickstart some brainstorming.

  Thanks,
 Peng

  -
 Hyper - Make VM run like Container


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org
 ?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org
 ?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Tom Cammann for core

2015-07-15 Thread Jay Lau
Welcome Tom!

2015-07-15 6:53 GMT+08:00 Tom Cammann tom.camm...@hp.com:

 Thanks team, happy to be here :)

 Tom
  On 14 Jul 2015, at 23:02, Adrian Otto adrian.o...@rackspace.com wrote:
 
  Tom,
 
  It is my pleasure to welcome you to the magnum-core group. We are happy
 to have you on the team.
 
  Regards,
 
  Adrian
 
  On Jul 9, 2015, at 7:20 PM, Adrian Otto adrian.o...@rackspace.com
 wrote:
 
  Team,
 
  Tom Cammann (tcammann) has become a valued Magnum contributor, and
 consistent reviewer helping us to shape the direction and quality of our
 new contributions. I nominate Tom to join the magnum-core team as our
 newest core reviewer. Please respond with a +1 vote if you agree.
 Alternatively, vote -1 to disagree, and include your rationale for
 consideration.
 
  Thanks,
 
  Adrian
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Sahara] Difference between Sahara and CloudBreak

2015-06-15 Thread Jay Lau
Hi Sahara Team,

I just noticed that CloudBreak (https://github.com/sequenceiq/cloudbreak)
also supports running on top of OpenStack. Can anyone show me some
differences between Sahara and CloudBreak when both of them use OpenStack
as the infrastructure manager?

-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Discuss configurable-coe-api-port Blueprint

2015-06-10 Thread Jay Lau
I think that we have a similar bp before:
https://blueprints.launchpad.net/magnum/+spec/override-native-rest-port

 I had some discussion with Larsks before; it seems that it does not make
much sense to customize this port, as the kubernetes/swarm/mesos cluster
will be created by heat and end users do not need to care about the
ports. Different kubernetes/swarm/mesos clusters will have different IP
addresses, so there will be no port conflict.

2015-06-11 9:35 GMT+08:00 Kai Qiang Wu wk...@cn.ibm.com:

 I’m moving this whiteboard to the ML so we can have some discussion to
 refine it, and then go back and update the whiteboard.

 Source:
  https://blueprints.launchpad.net/magnum/+spec/configurable-coe-api-port


 @Sdake and I have some discussion now, but may need more input from your
 side.


  1. Keep apiserver_port in baymodel. It may mean only an admin (if we
  involve policy) can create that baymodel, which leaves less flexibility for other users.


  2. apiserver_port was designed in baymodel; moving it from baymodel to bay
  is a big change, unless we have other better ways. (This may also apply to
  other configuration fields, like dns-nameserver etc.)



 Thanks



 Best Wishes,

 
 Kai Qiang Wu (吴开强  Kennan)
 IBM China System and Technology Lab, Beijing

 E-mail: wk...@cn.ibm.com
 Tel: 86-10-82451647
 Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
 100193

 
 Follow your heart. You are miracle!

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Does Bay/Baymodel name should be a required option when creating a Bay/Baymodel

2015-06-05 Thread Jay Lau
Thanks Eric, I see. Yes, this can make sure the user would not need to
contact the server using bay-show/baymodel-show to get the UUID of a bay/baymodel,
but Magnum needs to be updated to implement the bay/baymodel uuid generation logic.

2015-06-05 13:42 GMT+08:00 Eric Windisch e...@windisch.us:


I think this is perfectly fine, as long as it's reasonably large and
 the algorithm is sufficiently intelligent. The UUID algorithm is good at
 this, for instance, although it fails at readability. Docker's is not
 terribly great and could be limiting if you were looking to run several
 thousand containers on a single machine. Something better than Docker's
 algorithm but more readable than UUID could be explored.

  Also, something to consider is if this should also mean a change to
 the UUIDs themselves. You could use UUID-5 to create a UUID from your
 tenant's UUID and your unique name. The tenant's UUID would be the
 namespace, with the bay's name being the name field. The benefit of this
 is that clients, by knowing their tenant ID could automatically determine
 their bay ID, while also guaranteeing uniqueness (or as unique as UUID
 gets, anyway).


  Cool idea!

 I'm clear with the solution, but still have some questions: So we need to
 set the bay/baymodel name in the format of UUID-name format? Then if we get
 the tenant ID, we can use magnum bay-list | grep tenant-id or some
 other filter logic to get all the bays belong to the tenant?  By default,
 the magnum bay-list/baymodel-list will only show the bay/baymodels for
 one specified tenant.


 The name would be an arbitrary string, but you would also have a
 unique-identifier which is a UUID. I'm proposing the UUID could be
 generated using the UUID5 algorithm which is basically sha1(tenant_id +
 unique_name)  converted into a GUID. The Python uuid library can do this
 easily, out of the box.

 Taking from the dev-quickstart, I've changed the instructions for creating
 a container according to how this could work using uuid5:

 $ magnum bay-create --name swarmbay --baymodel testbaymodel
 $ BAY_UUID=$(python -c "import uuid; print uuid.uuid5(uuid.UUID('urn:uuid:${TENANT_ID}'), 'swarmbay')")
 $ cat > ~/container.json << END
 {
     "bay_uuid": "$BAY_UUID",
     "name": "test-container",
     "image_id": "cirros",
     "command": "ping -c 4 8.8.8.8"
 }
 END
 $ magnum container-create < ~/container.json


 The key difference in this example, of course, is that users would not
 need to contact the server using bay-show in order to obtain the UUID of
 their bay.

 Regards,
 Eric Windisch

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Does Bay/Baymodel name should be a required option when creating a Bay/Baymodel

2015-06-04 Thread Jay Lau
I have filed a bp for this
https://blueprints.launchpad.net/magnum/+spec/auto-generate-name Thanks

2015-06-04 14:14 GMT+08:00 Jay Lau jay.lau@gmail.com:

 Thanks Adrian, I see. Clear now.

 2015-06-04 11:17 GMT+08:00 Adrian Otto adrian.o...@rackspace.com:

  Jay,

 On Jun 3, 2015, at 6:42 PM, Jay Lau jay.lau@gmail.com wrote:

   Thanks Adrian, some questions and comments in-line.

 2015-06-03 10:29 GMT+08:00 Adrian Otto adrian.o...@rackspace.com:

 I have reflected on this further and offer this suggestion:

  1) Add a feature to Magnum to auto-generate human readable names, like
 Docker does for un-named containers, and ElasticSearch does for naming
 cluster nodes. Use this feature if no name is specified upon the creation
 of a Bay or Baymodel.

 +1 on this


  -and-

  2) Add a configuration directives (default=FALSE) for
 allow_duplicate_bay_name and allow_duplicate_baymodel_name. If TRUE,
 duplicate named Bay and BayModel resources will be allowed, as they are
 today.

  This way, by default Magnum requires a unique name, and if none is
 specified, it will automatically generate a name. This way no additional
 burden is put on users who want to act on containers exclusively using
 UUIDs, and cloud operators can decide if they want to enforce name
 uniqueness or not.

  In the case of clouds that want to allow sharing access to a BayModel
 between multiple tenants (example: a global BayModel named “kubernetes”)
 with allow_duplicate_baymodel_name set to FALSE, a user will still be
 allowed to

 Here should be allow_duplicate_baymodel set to TRUE?


  I know this is confusing, but yes, what I wrote was correct. Perhaps I
 could rephrase it to clarify:

  Regardless of the setting of allow_duplicate_bay* settings, we should
 allow a user to create a BayModel with the same name as a global or shared
 one in order to override the one that already exists from another source
 with one supplied by the user. When referred to by name, the one created by
 the user would be selected in the case where each has the same name
 assigned.

 create a BayModel with the name “kubernetes” and it will override
 the global one. If a user-supplied BayModel is present with the same name
 as a global one, we shall automatically select the one owned by the tenant.

 +1 on this. One question: what does a global BayModel mean? In
 Magnum, all BayModels belong to a tenant, and it seems there is no global
 BayModel?


  This is a concept we have not actually discussed, and we don't have
 today as a feature. The idea is that in addition to the BayModel resources
 that tenants create, we could also have ones that the cloud operator
 creates, and automatically expose to all tenants in the system. I am
 referring to these as global BayModel resources as a potential future
 enhancement.

  The rationale for such a global resource is a way for the Cloud
 Operator to pre-define the COE's they support, and pre-seed the Magnum
 environment with such a configuration for all users. Implementing this
 would require a solution for how to handle the ssh keypair, as one will
 need to be generated uniquely for every tenant. Perhaps we could have a
 procedure that a tenant uses to activate the BayModel by somehow adding
 their own public ssh key to a local subclass of it. Perhaps this could be
 implemented as a user defined BayModel that has a parent_id set to the uuid
 of a parent baymodel. When we instantiate one, we would merge the two into
 a single resource.
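
 To make the merge idea concrete, a minimal sketch could look like the
 following (the field names, the parent_id attribute, and the merge rule are
 illustrative assumptions only; no such feature exists in Magnum today):

 import copy

 def merge_baymodels(parent, child):
     """Return a single resource where the child's set fields override the parent's."""
     merged = copy.deepcopy(parent)      # start from the global/parent BayModel
     for key, value in child.items():
         if value is not None:           # only override fields the tenant actually set
             merged[key] = value
     return merged

 # Made-up example values: a global "kubernetes" BayModel and a tenant subclass
 # that only supplies its own keypair.
 global_kubernetes = {'name': 'kubernetes', 'coe': 'kubernetes',
                      'image_id': 'fedora-21-atomic-3', 'keypair_id': None}
 tenant_subclass = {'parent_id': 'uuid-of-global-kubernetes',
                    'keypair_id': 'my-tenant-keypair',
                    'name': None, 'coe': None, 'image_id': None}

 print(merge_baymodels(global_kubernetes, tenant_subclass))

 The point of the sketch is only that instantiation would resolve to one
 effective resource, with the tenant-supplied values winning over the shared ones.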

  All of this is about anticipating possible future features. The only
 reason I am mentioning this is that I want us to think about where we might
 go with resource sharing so that our name uniqueness decision does not
 preclude us from later going in this direction.

  Adrian


  About Sharing of BayModel Resources:

  Similarly, if we add features to allow one tenant to share a BayModel
 with another tenant (pending acceptance of the offered share), and
 duplicate names are allowed, then prefer in this order: 1) Use the resource
 owned by the same tenant, 2) Use the resource shared by the other tenant
 (post acceptance only), 3) Use the global resource. If duplicates exist in
 the same scope of ownership, then raise an exception requiring the use of a
 UUID in that case to resolve the ambiguity.

 We can file a bp to trace this.


  One expected drawback of this approach is that tools designed to
 integrate with one Magnum may not work the same with another Magnum if the
 allow_duplicate_bay* settings are changed from the default values on one
 but not the other. This should be made clear in the comments above the
 configuration directive in the example config file.

 Just curious why do we need this feature? Different Magnum clusters might
 using different CoE engine. So you are mentioning the case all of the
 Magnum clusters are using same  CoE engine? If so, yes, this should be made
 clear in configuration file.

   Adrian

   On Jun 2, 2015, at 8:44 PM, Jay Lau jay.lau

Re: [openstack-dev] [Magnum] Does Bay/Baymodel name should be a required option when creating a Bay/Baymodel

2015-06-04 Thread Jay Lau
Thanks Adrian, I see. Clear now.

2015-06-04 11:17 GMT+08:00 Adrian Otto adrian.o...@rackspace.com:

  Jay,

 On Jun 3, 2015, at 6:42 PM, Jay Lau jay.lau@gmail.com wrote:

   Thanks Adrian, some questions and comments in-line.

 2015-06-03 10:29 GMT+08:00 Adrian Otto adrian.o...@rackspace.com:

 I have reflected on this further and offer this suggestion:

  1) Add a feature to Magnum to auto-generate human readable names, like
 Docker does for un-named containers, and ElasticSearch does for naming
 cluster nodes. Use this feature if no name is specified upon the creation
 of a Bay or Baymodel.

 +1 on this


  -and-

  2) Add a configuration directives (default=FALSE) for
 allow_duplicate_bay_name and allow_duplicate_baymodel_name. If TRUE,
 duplicate named Bay and BayModel resources will be allowed, as they are
 today.

  This way, by default Magnum requires a unique name, and if none is
 specified, it will automatically generate a name. This way no additional
 burden is put on users who want to act on containers exclusively using
 UUIDs, and cloud operators can decide if they want to enforce name
 uniqueness or not.

  In the case of clouds that want to allow sharing access to a BayModel
 between multiple tenants (example: a global BayModel named “kubernetes”)
 with allow_duplicate_baymodel_name set to FALSE, a user will still be
 allowed to

 Here should be allow_duplicate_baymodel set to TRUE?


  I know this is confusing, but yes, what I wrote was correct. Perhaps I
 could rephrase it to clarify:

  Regardless of the setting of allow_duplicate_bay* settings, we should
 allow a user to create a BayModel with the same name as a global or shared
 one in order to override the one that already exists from another source
 with one supplied by the user. When referred to by name, the one created by
 the user would be selected in the case where each has the same name
 assigned.

 create a BayModel with the name “kubernetes” and it will override the
 global one. If a user-supplied BayModel is present with the same name as a
 global one, we shall automatically select the one owned by the tenant.

 +1 on this. One question: what does a global BayModel mean? In
 Magnum, all BayModels belong to a tenant, and it seems there is no global
 BayModel?


  This is a concept we have not actually discussed, and we don't have today
 as a feature. The idea is that in addition to the BayModel resources that
 tenants create, we could also have ones that the cloud operator creates,
 and automatically expose to all tenants in the system. I am referring to
 these as global BayModel resources as a potential future enhancement.

  The rationale for such a global resource is a way for the Cloud Operator
 to pre-define the COE's they support, and pre-seed the Magnum environment
 with such a configuration for all users. Implementing this would require a
 solution for how to handle the ssh keypair, as one will need to be
 generated uniquely for every tenant. Perhaps we could have a procedure that
 a tenant uses to activate the BayModel by somehow adding their own public
 ssh key to a local subclass of it. Perhaps this could be implemented as a
 user defined BayModel that has a parent_id set to the uuid of a parent
 baymodel. When we instantiate one, we would merge the two into a single
 resource.

  All of this is about anticipating possible future features. The only
 reason I am mentioning this is that I want us to think about where we might
 go with resource sharing so that our name uniqueness decision does not
 preclude us from later going in this direction.

  Adrian


  About Sharing of BayModel Resources:

  Similarly, if we add features to allow one tenant to share a BayModel
 with another tenant (pending acceptance of the offered share), and
 duplicate names are allowed, then prefer in this order: 1) Use the resource
 owned by the same tenant, 2) Use the resource shared by the other tenant
 (post acceptance only), 3) Use the global resource. If duplicates exist in
 the same scope of ownership, then raise an exception requiring the use of a
 UUID in that case to resolve the ambiguity.

 We can file a bp to trace this.


  One expected drawback of this approach is that tools designed to
 integrate with one Magnum may not work the same with another Magnum if the
 allow_duplicate_bay* settings are changed from the default values on one
 but not the other. This should be made clear in the comments above the
 configuration directive in the example config file.

 Just curious why do we need this feature? Different Magnum clusters might
 using different CoE engine. So you are mentioning the case all of the
 Magnum clusters are using same  CoE engine? If so, yes, this should be made
 clear in configuration file.

   Adrian

   On Jun 2, 2015, at 8:44 PM, Jay Lau jay.lau@gmail.com wrote:

   I think that we did not come to a conclusion in today's IRC meeting.

  Adrian proposed that Magnum generate a unique name just like

Re: [openstack-dev] [api] [Nova] [Ironic] [Magnum] Microversion guideline in API-WG

2015-06-04 Thread Jay Lau
Hi Alex,

Based on my understanding, the Magnum code base was derived from Ironic; that's
why Magnum is using http headers, because when Magnum was created, Ironic was
also using http headers.

Perhaps Magnum can follow the way Ironic moved to using Microversions?

Thanks.



2015-06-04 14:58 GMT+08:00 Xu, Hejie hejie...@intel.com:

  Hi, guys,

 I'm working on adding Microversion into the API-WG's guideline, which makes
 sure we have consistent Microversion behavior in the API for users.
 Nova and Ironic already have Microversion implementations, and as I
 know Magnum (https://review.openstack.org/#/c/184975/) is going to implement
 Microversion also.

 Hope all the projects which support (or plan to support) Microversion can join the
 review of the guideline.

 The Microversion specification (mostly copied from nova-specs):
 https://review.openstack.org/#/c/187112
 And another guideline for when we should bump the Microversion:
 https://review.openstack.org/#/c/187896/

 As I know, there is already a little difference between Nova's and Ironic's
 implementations. Ironic returns the min/max versions via http headers when the
 requested version isn't supported by the server. There isn't such a thing
 in nova, but that is something we need for version negotiation in nova
 also.
 Sean has pointed out we should use the response body instead of http headers,
 since the body can include an error message. I really hope the ironic team can
 take a look at whether you guys have a compelling reason for using http headers.
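
 For illustration, a minimal client-side sketch of this header-based
 negotiation might look like the following (the endpoint is made up, and the
 header names and 406 behavior are assumptions based on how Ironic and Nova
 appear to work today, not part of any guideline):

 import requests

 IRONIC_URL = 'http://ironic.example.com:6385'  # made-up endpoint

 # Request a specific microversion via the request header.
 resp = requests.get(IRONIC_URL + '/v1/nodes',
                     headers={'X-OpenStack-Ironic-API-Version': '1.9'})

 if resp.status_code == 406:
     # On an unacceptable version, the supported range is advertised in
     # response headers, so the client can negotiate and retry.
     min_version = resp.headers.get('X-OpenStack-Ironic-API-Minimum-Version')
     max_version = resp.headers.get('X-OpenStack-Ironic-API-Maximum-Version')
     resp = requests.get(IRONIC_URL + '/v1/nodes',
                         headers={'X-OpenStack-Ironic-API-Version': max_version})

 print(resp.status_code)

 Moving the min/max information into the response body would change only the
 last part of this negotiation loop, which is why the two camps differ.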

 And if we decide to return the body instead of http headers, we probably need to
 think about backwards compatibility also, because Microversion itself isn't
 versioned.
 So I think we should keep those headers for a while; does that make sense?

 Hope we have a good guideline for Microversion, because we can only change
 Microversion itself in a backwards-compatible way.

 Thanks
 Alex Xu


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Does Bay/Baymodel name should be a required option when creating a Bay/Baymodel

2015-06-03 Thread Jay Lau
Some comments and questions in line. Thanks.

2015-06-03 11:27 GMT+08:00 Adrian Otto adrian.o...@rackspace.com:

  Eric,

  On Jun 2, 2015, at 10:07 PM, Eric Windisch e...@windisch.us wrote:



 On Tue, Jun 2, 2015 at 10:29 PM, Adrian Otto adrian.o...@rackspace.com
 wrote:

 I have reflected on this further and offer this suggestion:

  1) Add a feature to Magnum to auto-generate human readable names, like
 Docker does for un-named containers, and ElasticSearch does for naming
 cluster nodes. Use this feature if no name is specified upon the creation
 of a Bay or Baymodel.


  For what it's worth, I also believe that requiring manual specification
 of names, especially if they must be unique is an anti-pattern.

  If auto-generation of human readable names is performed and these must
 be unique, mind that you will be accepting a limit on the number of bays
 that may be created.


  Good point. Keeping in mind that the effective limit would be per-tenant,
 and a simple mitigation could be used (adding incrementing digits or hex to
 the end of the name in the case of multiple guesses with collisions) could
 make the effective maximum high enough that it would be effectively
 unlimited. If someone actually reached the effective limit, the cloud
 provider could advise the user to specify a UUID they create as the name in
 order to avoid running out of auto-generated names. I could also imagine a
 Magnum feature that would allow a tenant to select an alternate name
 assignment strategy. For example:

  bay_name_generation_strategy = random_readable | uuid
 baymodel_name_generation_strategy = random_readable | uuid

  Where uuid simply sets the name to the uuid of the resource,
 guaranteeing an unlimited number of bays at the cost of readability. If
 this were settable on a per-tenant basis, you’d only need to use it for
 tenants with ridiculous numbers of bays. I suggest that we not optimize for
 this until the problem actually surfaces somewhere.
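
 A minimal sketch of such auto-generation with collision mitigation, assuming
 made-up word lists and a simple numeric suffix (this is not existing Magnum
 or Docker code), could be:

 import random

 # Placeholder word lists; Docker's real generator uses much larger lists.
 ADJECTIVES = ['brave', 'calm', 'eager', 'jolly', 'quiet']
 NOUNS = ['falcon', 'lynx', 'maple', 'otter', 'comet']

 def generate_name(existing_names):
     """Return a readable name, appending an incrementing suffix on collision."""
     base = '%s_%s' % (random.choice(ADJECTIVES), random.choice(NOUNS))
     name, suffix = base, 1
     while name in existing_names:
         name = '%s_%d' % (base, suffix)
         suffix += 1
     return name

 taken = {'brave_falcon', 'brave_falcon_1'}
 print(generate_name(taken))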

+1 on not optimize for this


I think this is perfectly fine, as long as it's reasonably large and
 the algorithm is sufficiently intelligent. The UUID algorithm is good at
 this, for instance, although it fails at readability. Docker's is not
 terribly great and could be limiting if you were looking to run several
 thousand containers on a single machine. Something better than Docker's
 algorithm but more readable than UUID could be explored.

  Also, something to consider is if this should also mean a change to the
 UUIDs themselves. You could use UUID-5 to create a UUID from your tenant's
 UUID and your unique name. The tenant's UUID would be the namespace, with
 the bay's name being the name field. The benefit of this is that clients,
 by knowing their tenant ID could automatically determine their bay ID,
 while also guaranteeing uniqueness (or as unique as UUID gets, anyway).


  Cool idea!

I'm clear with the solution, but still have some questions: So we need to
set the bay/baymodel name in the format of UUID-name format? Then if we get
the tenant ID, we can use magnum bay-list | grep tenant-id or some
other filter logic to get all the bays belong to the tenant?  By default,
the magnum bay-list/baymodel-list will only show the bay/baymodels for
one specified tenant.


  Adrian


  Regards,
 Eric Windisch

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Does Bay/Baymodel name should be a required option when creating a Bay/Baymodel

2015-06-03 Thread Jay Lau
Thanks Adrian, some questions and comments in-line.

2015-06-03 10:29 GMT+08:00 Adrian Otto adrian.o...@rackspace.com:

  I have reflected on this further and offer this suggestion:

  1) Add a feature to Magnum to auto-generate human readable names, like
 Docker does for un-named containers, and ElasticSearch does for naming
 cluster nodes. Use this feature if no name is specified upon the creation
 of a Bay or Baymodel.

+1 on this


  -and-

  2) Add a configuration directives (default=FALSE) for
 allow_duplicate_bay_name and allow_duplicate_baymodel_name. If TRUE,
 duplicate named Bay and BayModel resources will be allowed, as they are
 today.

  This way, by default Magnum requires a unique name, and if none is
 specified, it will automatically generate a name. This way no additional
 burden is put on users who want to act on containers exclusively using
 UUIDs, and cloud operators can decide if they want to enforce name
 uniqueness or not.

  In the case of clouds that want to allow sharing access to a BayModel
 between multiple tenants (example: a global BayModel named “kubernetes”)
 with allow_duplicate_baymodel_name set to FALSE, a user will still be
 allowed to

Here should be allow_duplicate_baymodel set to TRUE?

 create a BayModel with the name “kubernetes” and it will override the
 global one. If a user-supplied BayModel is present with the same name as a
 global one, we shall automatically select the one owned by the tenant.

+1 on this. One question: what does a global BayModel mean? In
Magnum, all BayModels belong to a tenant, and it seems there is no global
BayModel?


  About Sharing of BayModel Resources:

  Similarly, if we add features to allow one tenant to share a BayModel
 with another tenant (pending acceptance of the offered share), and
 duplicate names are allowed, then prefer in this order: 1) Use the resource
 owned by the same tenant, 2) Use the resource shared by the other tenant
 (post acceptance only), 3) Use the global resource. If duplicates exist in
 the same scope of ownership, then raise an exception requiring the use of a
 UUID in that case to resolve the ambiguity.

We can file a bp to trace this.


  One expected drawback of this approach is that tools designed to
 integrate with one Magnum may not work the same with another Magnum if the
 allow_duplicate_bay* settings are changed from the default values on one
 but not the other. This should be made clear in the comments above the
 configuration directive in the example config file.

Just curious why do we need this feature? Different Magnum clusters might
using different CoE engine. So you are mentioning the case all of the
Magnum clusters are using same  CoE engine? If so, yes, this should be made
clear in configuration file.

 Adrian

   On Jun 2, 2015, at 8:44 PM, Jay Lau jay.lau@gmail.com wrote:

   I think that we did not come to a conclusion in today's IRC meeting.

  Adrian proposed that Magnum generate a unique name just like what docker
 is doing for docker run, the problem mentioned by Andrew Melton is that
 Magnum support multi tenant, we should support the case that bay/baymodel
 under different tenant can have same name, the unique name is not required.

  Also we may need support name update as well if the end user specify a
 name by mistake and want to update it after the bay/baymodel was created.

  Hmm.., looking forward to more comments from you. Thanks.

 2015-06-02 23:34 GMT+08:00 Fox, Kevin M kevin@pnnl.gov:

  Names can make writing generic orchestration templates that would go in
 the applications catalog easier. Humans are much better at inputting a name
 rather then a uuid. You can even default a name in the text box and if they
 don't change any of the defaults, it will just work. You can't do that with
 a UUID since it is different on every cloud.

 Thanks,
 Kevin
  --
  From: Jay Lau [jay.lau@gmail.com]
  Sent: Tuesday, June 02, 2015 12:33 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Magnum] Does Bay/Baymodel name should be
 a required option when creating a Bay/Baymodel

Thanks Adrian, imho making the name required can bring more convenience
 to end users because UUIDs are difficult to use. Without a name, the end user
 needs to retrieve the UUID of the bay/baymodel first before doing any
 operations on the bay/baymodel, which is really time consuming. We can discuss
 more in this week's IRC meeting. Thanks.


 2015-06-02 14:08 GMT+08:00 Adrian Otto adrian.o...@rackspace.com:

  -1. I disagree.

  I am not convinced that requiring names is a good idea. I've asked
 several times why there is a desire to require names, and I'm not seeing
 any persuasive arguments that are not already addressed by UUIDs. We have
 UUID values to allow for acting upon an individual resource. Names are
 there as a convenience. Requiring names, especially unique names, would
 make Magnum harder to use for API

Re: [openstack-dev] [Magnum] Does Bay/Baymodel name should be a required option when creating a Bay/Baymodel

2015-06-02 Thread Jay Lau
Thanks Adrian, imho making the name required can bring more convenience to
end users because UUIDs are difficult to use. Without a name, the end user needs
to retrieve the UUID of the bay/baymodel first before doing any
operations on the bay/baymodel, which is really time consuming. We can discuss
more in this week's IRC meeting. Thanks.


2015-06-02 14:08 GMT+08:00 Adrian Otto adrian.o...@rackspace.com:

  -1. I disagree.

  I am not convinced that requiring names is a good idea. I've asked
 several times why there is a desire to require names, and I'm not seeing
 any persuasive arguments that are not already addressed by UUIDs. We have
 UUID values to allow for acting upon an individual resource. Names are
 there as a convenience. Requiring names, especially unique names, would
 make Magnum harder to use for API users driving Magnum from other systems.
 I want to keep the friction as low as possible.

 I'm fine with replacing None with an empty string.

  Consistency with Nova would be a valid argument if we were being more
 restrictive, but that's not the case. We are more permissive. You can use
 Magnum in the same way you use Nova if you want, by adding names to all
 resources. I don't see the wisdom in forcing that style of use without a
 technical reason for it.

 Thanks,

 Adrian

 On May 31, 2015, at 4:43 PM, Jay Lau jay.lau@gmail.com wrote:


  Just want to use the ML to trigger more discussion here. There are now
 bugs/patches tracking this, but it seems more discussion is needed before we
 come to a conclusion.

 https://bugs.launchpad.net/magnum/+bug/1453732
 https://review.openstack.org/#/c/181839/
 https://review.openstack.org/#/c/181837/
 https://review.openstack.org/#/c/181847/
 https://review.openstack.org/#/c/181843/

  IMHO, making the Bay/Baymodel name a MUST will bring more flexibility
 to end users, as Magnum also supports operating on Bay/Baymodels via names, and the
 name might be more meaningful to end users.

 Perhaps we can borrow some ideas from nova; the concepts in magnum can be
 mapped to nova as follows:

 1) instance = bay
 2) flavor = baymodel

 So I think that a solution might be as following:
 1) Make name as a MUST for both bay/baymodel
 2) Update magnum client to use following style for bay-create and
 baymodel-create: DO NOT add --name option

 root@devstack007:/tmp# nova boot
 usage: nova boot [--flavor flavor] [--image image]
  [--image-with key=value] [--boot-volume volume_id]
  [--snapshot snapshot_id] [--min-count number]
  [--max-count number] [--meta key=value]
  [--file dst-path=src-path] [--key-name key-name]
  [--user-data user-data]
  [--availability-zone availability-zone]
  [--security-groups security-groups]
  [--block-device-mapping dev-name=mapping]
  [--block-device key1=value1[,key2=value2...]]
  [--swap swap_size]
  [--ephemeral size=size[,format=format]]
  [--hint key=value]
  [--nic
 net-id=net-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,port-id=port-uuid]
  [--config-drive value] [--poll]
  name
 error: too few arguments
 Try 'nova help boot' for more information.
 root@devstack007:/tmp# nova flavor-create
 usage: nova flavor-create [--ephemeral ephemeral] [--swap swap]
   [--rxtx-factor factor] [--is-public
 is-public]
   name id ram disk vcpus
 Please show your comments if any.

 --
   Thanks,

  Jay Lau (Guangya Liu)


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Discuss mesos-bay-type Blueprint

2015-06-02 Thread Jay Lau
In today's IRC meeting, the conclusion for now is to create a
Marathon+Mesos bay without exposing any new objects in Magnum, but to enable end
users to operate Marathon directly. Thanks.

2015-06-03 9:19 GMT+08:00 Kai Qiang Wu wk...@cn.ibm.com:

 Hi All,

 For the mesos bay, I think what we should implement depends on the use cases.

 If users use magnum to create a mesos bay, what would they do with mesos in
 the following steps?

 1. If they go to mesos (a framework or anything) directly, we'd better not
 involve any new mesos objects, but use container if possible.
 2. If they'd like to operate mesos through magnum, and it is easy to
 do that, we could provide some object operations.

 Ideally, it is good to reuse the containers api if possible. If not, we'd
 better find a way to map to mesos (api passthrough, instead of adding redundant
 objects on the magnum side).



 Thanks

 Best Wishes,

 
 Kai Qiang Wu (吴开强  Kennan)
 IBM China System and Technology Lab, Beijing

 E-mail: wk...@cn.ibm.com
 Tel: 86-10-82451647
 Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
 100193

 
 Follow your heart. You are miracle!


 From: Hongbin Lu hongbin...@huawei.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: 06/02/2015 06:15 AM
 Subject: Re: [openstack-dev] [Magnum] Discuss mesos-bay-type Blueprint
 --



 Hi Jay,

 For your question “what is the mesos object that we want to manage”, the
 short answer is it depends. There are two options I can think of:

    1.   Don't manage any object from Marathon directly. Instead, we
    can focus on the existing Magnum objects (i.e. container), and implement
    them by using Marathon APIs where possible. Take the abstraction
    'container' as an example: for a swarm bay, container would be implemented
    by calling docker APIs; for a mesos bay, container could be implemented by
    using Marathon APIs (it looks like Marathon's 'app' object can be leveraged
    to operate a docker container). The effect is that Magnum will have a set
    of common abstractions that are implemented differently by each bay type
    (a rough sketch of such a Marathon API call is shown below, after option 2).
    2.   Do manage a few Marathon objects (i.e. app). The effect is
    that Magnum will have additional API object(s) coming from Marathon (like
    what we have for the existing k8s objects: pod/service/rc).
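
For reference, a very rough sketch of what such a Marathon call could look like
from a conductor (illustrative only: the Marathon endpoint address and the app
definition below are made up, and error handling is omitted):

import json
import requests

MARATHON_URL = "http://10.0.0.4:8080"    # assumed Marathon endpoint inside the bay

app = {
    "id": "/redis",                      # Marathon app id
    "cpus": 0.5,
    "mem": 256,
    "instances": 1,
    "container": {
        "type": "DOCKER",
        "docker": {"image": "redis", "network": "BRIDGE"},
    },
}

# POST /v2/apps asks Marathon to schedule the app on the mesos cluster
resp = requests.post(MARATHON_URL + "/v2/apps",
                     data=json.dumps(app),
                     headers={"Content-Type": "application/json"})
resp.raise_for_status()

# GET /v2/apps/<id> can then be polled to see how many tasks are running
print(requests.get(MARATHON_URL + "/v2/apps/redis").json()["app"]["tasksRunning"])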

 Thoughts?

 Thanks
 Hongbin

 *From:* Jay Lau [mailto:jay.lau@gmail.com jay.lau@gmail.com]
 * Sent:* May-29-15 1:35 AM
 * To:* OpenStack Development Mailing List (not for usage questions)
 * Subject:* Re: [openstack-dev] [Magnum] Discuss mesos-bay-type Blueprint

 I want to mention that there is another mesos framework named chronos
 (https://github.com/mesos/chronos); it is used for job orchestration.

 For others, please refer to my comments in line.

 2015-05-29 7:45 GMT+08:00 Adrian Otto *adrian.o...@rackspace.com*
 adrian.o...@rackspace.com:
 I’m moving this whiteboard to the ML so we can have some discussion to
 refine it, and then go back and update the whiteboard.

 Source: *https://blueprints.launchpad.net/magnum/+spec/mesos-bay-type*
 https://blueprints.launchpad.net/magnum/+spec/mesos-bay-type

 My comments in-line below.


 Begin forwarded message:

 *From: *hongbin *hongbin...@huawei.com* hongbin...@huawei.com
 *Subject: COMMERCIAL:[Blueprint mesos-bay-type] Add support for mesos bay
 type*
 *Date: *May 28, 2015 at 2:11:29 PM PDT
 *To: **adrian.o...@rackspace.com* adrian.o...@rackspace.com
 *Reply-To: *hongbin *hongbin...@huawei.com* hongbin...@huawei.com

 Blueprint changed by hongbin:

 Whiteboard set to:

 I did a preliminary research on possible implementations. I think this BP
 can be implemented in two steps.
 1. Develop a heat template for provisioning mesos cluster.
 2. Implement a magnum conductor for managing the mesos cluster.

 Agreed, thanks for filing this blueprint!
 For 2, the conductor is mainly used to manage objects for a CoE; k8s has
 pod, service and rc, so what are the mesos objects that we want to manage? IMHO,
 mesos is a resource manager and it needs to work with some frameworks
 to provide services.



First, I want to emphasize that mesos is not a service (it looks like a
library). Therefore, mesos doesn't have a web API, and most users don't
use mesos directly. Instead, they use a mesos framework that is on top
of mesos. Therefore, a mesos bay needs to have a mesos framework
pre-configured so that magnum can talk to the framework to manage the bay.

Re: [openstack-dev] [Magnum] Does Bay/Baymodel name should be a required option when creating a Bay/Baymodel

2015-06-02 Thread Jay Lau
I think that we did not come to a conclusion in today's IRC meeting.

Adrian proposed that Magnum generate a unique name, just like what docker
does for docker run. The problem mentioned by Andrew Melton is that Magnum
supports multiple tenants, and we should support the case where bays/baymodels
under different tenants have the same name, so a unique name is not required.

Also, we may need to support name updates as well, in case the end user
specifies a name by mistake and wants to update it after the bay/baymodel is
created.

Hmm.., looking forward to more comments from you. Thanks.

2015-06-02 23:34 GMT+08:00 Fox, Kevin M kevin@pnnl.gov:

  Names can make writing generic orchestration templates that would go in
 the applications catalog easier. Humans are much better at inputting a name
 rather than a uuid. You can even default a name in the text box, and if they
 don't change any of the defaults, it will just work. You can't do that with
 a UUID since it is different on every cloud.

 Thanks,
 Kevin
  --
 *From:* Jay Lau [jay.lau@gmail.com]
 *Sent:* Tuesday, June 02, 2015 12:33 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Magnum] Does Bay/Baymodel name should be
 a required option when creating a Bay/Baymodel

   Thanks Adrian, imho making the name required would bring more convenience
 to end users, because UUIDs are difficult to use. Without a name, the end user
 needs to retrieve the UUID of the bay/baymodel first before doing any
 operations on the bay/baymodel, which is really time consuming. We can discuss
 more in this week's IRC meeting. Thanks.


 2015-06-02 14:08 GMT+08:00 Adrian Otto adrian.o...@rackspace.com:

  -1. I disagree.

  I am not convinced that requiring names is a good idea. I've asked
 several times why there is a desire to require names, and I'm not seeing
 any persuasive arguments that are not already addressed by UUIDs. We have
 UUID values to allow for acting upon an individual resource. Names are
 there as a convenience. Requiring names, especially unique names, would
 make Magnum harder to use for API users driving Magnum from other systems.
 I want to keep the friction as low as possible.

 I'm fine with replacing None with an empty string.

  Consistency with Nova would be a valid argument if we were being more
 restrictive, but that's not the case. We are more permissive. You can use
 Magnum in the same way you use Nova if you want, by adding names to all
 resources. I don't see the wisdom in forcing that style of use without a
 technical reason for it.

 Thanks,

 Adrian

 On May 31, 2015, at 4:43 PM, Jay Lau jay.lau@gmail.com wrote:


  Just want to use the ML to trigger more discussion here. There are now
 bugs/patches tracking this, but it seems more discussion is needed before we
 come to a conclusion.

 https://bugs.launchpad.net/magnum/+bug/1453732
 https://review.openstack.org/#/c/181839/
 https://review.openstack.org/#/c/181837/
 https://review.openstack.org/#/c/181847/
 https://review.openstack.org/#/c/181843/

 IMHO, making the Bay/Baymodel name a MUST will bring more flexibility
 to end users, as Magnum also supports operating on Bays/Baymodels via names,
 and a name can be more meaningful to end users.

 Perhaps we can borrow some ideas from nova; the concepts in magnum can be
 mapped to nova as follows:

 1) instance = bay
 2) flavor = baymodel

 So I think that a solution might be as follows:
 1) Make name a MUST for both bay/baymodel
 2) Update the magnum client to use the following style for bay-create and
 baymodel-create: DO NOT add a --name option

 root@devstack007:/tmp# nova boot
 usage: nova boot [--flavor flavor] [--image image]
  [--image-with key=value] [--boot-volume volume_id]
  [--snapshot snapshot_id] [--min-count number]
  [--max-count number] [--meta key=value]
  [--file dst-path=src-path] [--key-name key-name]
  [--user-data user-data]
  [--availability-zone availability-zone]
  [--security-groups security-groups]
  [--block-device-mapping dev-name=mapping]
  [--block-device key1=value1[,key2=value2...]]
  [--swap swap_size]
  [--ephemeral size=size[,format=format]]
  [--hint key=value]
  [--nic
 net-id=net-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,port-id=port-uuid]
  [--config-drive value] [--poll]
  name
 error: too few arguments
 Try 'nova help boot' for more information.
 root@devstack007:/tmp# nova flavor-create
 usage: nova flavor-create [--ephemeral ephemeral] [--swap swap]
   [--rxtx-factor factor] [--is-public
 is-public]
   name id ram disk vcpus
 Please show your comments if any.

 --
   Thanks,

  Jay Lau (Guangya Liu

Re: [openstack-dev] [Magnum] Does Bay/Baymodel name should be a required option when creating a Bay/Baymodel

2015-06-01 Thread Jay Lau
2015-06-01 21:54 GMT+08:00 Jay Pipes jaypi...@gmail.com:

 On 05/31/2015 05:38 PM, Jay Lau wrote:

 Just want to use the ML to trigger more discussion here. There are now
 bugs/patches tracking this, but it seems more discussion is needed before
 we come to a conclusion.

 https://bugs.launchpad.net/magnum/+bug/1453732
 https://review.openstack.org/#/c/181839/
 https://review.openstack.org/#/c/181837/
 https://review.openstack.org/#/c/181847/
 https://review.openstack.org/#/c/181843/

 IMHO, making the Bay/Baymodel name a MUST will bring more flexibility
 to end users, as Magnum also supports operating on Bays/Baymodels via names,
 and a name can be more meaningful to end users.

 Perhaps we can borrow some ideas from nova; the concepts in magnum can be
 mapped to nova as follows:

 1) instance = bay
 2) flavor = baymodel

 So I think that a solution might be as follows:
 1) Make name a MUST for both bay/baymodel
 2) Update the magnum client to use the following style for bay-create and
 baymodel-create: DO NOT add a --name option


 You should decide whether name would be unique -- either globally or
 within a tenant.

 Note that Nova's instance names (the display_name model field) are *not*
 unique, neither globally nor within a tenant. I personally believe this was
 a mistake.

 The decision affects your data model and constraints.

Yes, my thinking is to give Magnum the same behavior as nova. The name can
be managed by the end user, and the end user can specify the name as they
want; it is the end user's responsibility to make sure there are no
duplicate names. Actually, I think that the name does not need to be unique,
only the UUID does.


 Best,
 -jay


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Proposing Kai Qiang Wu (Kennan) for Core for Magnum

2015-05-31 Thread Jay Lau
+1!

2015-05-31 23:30 GMT+08:00 Hongbin Lu hongbin...@huawei.com:

  +1!



 *From:* Steven Dake (stdake) [mailto:std...@cisco.com]
 *Sent:* May-31-15 1:56 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* [openstack-dev] [magnum] Proposing Kai Qiang Wu (Kennan) for
 Core for Magnum



 Hi core team,



 Kennan (Kai Qiang Wu’s nickname) has really done a nice job in Magnum
 contributions.  I would like to propose Kennan for the core reviewer team.
 I don’t think we necessarily need more core reviewers on Magnum, but Kennan
 has demonstrated a big commitment to Magnum and is a welcome addition in my
 opinion.



 For the lifetime of the project, Kennan has contributed 8% of the reviews,
 and 8% of the commits.  Kennan is also active in IRC.  He meets my
 definition of what a core developer for Magnum should be.



 Consider my proposal to be a +1 vote.



 Please vote +1 to approve or vote –1 to veto.  A single –1 vote acts as a
 veto, meaning Kennan would not be approved for the core team.  I believe we
 require 3 +1 core votes presently, but since our core team is larger now
 then when we started, I’d like to propose at our next team meeting we
 formally define the process by which we accept new cores post this proposal
 for Magnum into the magnum-core group and document it in our wiki.



 I’ll leave voting open for 1 week until June 6th to permit appropriate
 time for the core team to vote.  If there is a unanimous decision or veto
 prior to that date, I’ll close voting and make the appropriate changes in
 gerrit as appropriate.



 Thanks

 -steve



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] Does Bay/Baymodel name should be a required option when creating a Bay/Baymodel

2015-05-31 Thread Jay Lau
Just want to use the ML to trigger more discussion here. There are now
bugs/patches tracking this, but it seems more discussion is needed before we
come to a conclusion.

https://bugs.launchpad.net/magnum/+bug/1453732
https://review.openstack.org/#/c/181839/
https://review.openstack.org/#/c/181837/
https://review.openstack.org/#/c/181847/
https://review.openstack.org/#/c/181843/

IMHO, making the Bay/Baymodel name a MUST will bring more flexibility to
end users, as Magnum also supports operating on Bays/Baymodels via names, and
a name can be more meaningful to end users.

Perhaps we can borrow some ideas from nova; the concepts in magnum can be
mapped to nova as follows:

1) instance = bay
2) flavor = baymodel

So I think that a solution might be as follows:
1) Make name a MUST for both bay/baymodel
2) Update the magnum client to use the following style for bay-create and
baymodel-create: DO NOT add a --name option (a hypothetical sketch of this
style is shown after the nova examples below)

root@devstack007:/tmp# nova boot
usage: nova boot [--flavor flavor] [--image image]
 [--image-with key=value] [--boot-volume volume_id]
 [--snapshot snapshot_id] [--min-count number]
 [--max-count number] [--meta key=value]
 [--file dst-path=src-path] [--key-name key-name]
 [--user-data user-data]
 [--availability-zone availability-zone]
 [--security-groups security-groups]
 [--block-device-mapping dev-name=mapping]
 [--block-device key1=value1[,key2=value2...]]
 [--swap swap_size]
 [--ephemeral size=size[,format=format]]
 [--hint key=value]
 [--nic
net-id=net-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,port-id=port-uuid]
 [--config-drive value] [--poll]
 name
error: too few arguments
Try 'nova help boot' for more information.
root@devstack007:/tmp# nova flavor-create
usage: nova flavor-create [--ephemeral ephemeral] [--swap swap]
  [--rxtx-factor factor] [--is-public is-public]
  name id ram disk vcpus
Please show your comments if any.
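
As a purely hypothetical illustration of point 2) above, the client usage could
end up looking similar to the nova style (the option names shown here are only
indicative, not the current magnum client syntax):

root@devstack007:/tmp# magnum baymodel-create
usage: magnum baymodel-create [--image-id <image-id>] [--flavor-id <flavor-id>]
                              [--keypair-id <keypair-id>] [--coe <coe>]
                              <name>
error: too few arguments
root@devstack007:/tmp# magnum bay-create
usage: magnum bay-create [--baymodel <baymodel>] [--node-count <node-count>]
                         <name>
error: too few arguments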

-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Discuss mesos-bay-type Blueprint

2015-05-28 Thread Jay Lau
I want to mention that there is another mesos framework named chronos
(https://github.com/mesos/chronos); it is used for job orchestration.

For others, please refer to my comments in line.

2015-05-29 7:45 GMT+08:00 Adrian Otto adrian.o...@rackspace.com:

  I’m moving this whiteboard to the ML so we can have some discussion to
 refine it, and then go back and update the whiteboard.

  Source: https://blueprints.launchpad.net/magnum/+spec/mesos-bay-type

  My comments in-line below.

  Begin forwarded message:

  *From: *hongbin hongbin...@huawei.com
  *Subject: **COMMERCIAL:[Blueprint mesos-bay-type] Add support for mesos
 bay type*
  *Date: *May 28, 2015 at 2:11:29 PM PDT
  *To: *adrian.o...@rackspace.com
  *Reply-To: *hongbin hongbin...@huawei.com

 Blueprint changed by hongbin:

 Whiteboard set to:

 I did a preliminary research on possible implementations. I think this BP
 can be implemented in two steps.
 1. Develop a heat template for provisioning mesos cluster.
 2. Implement a magnum conductor for managing the mesos cluster.


  Agreed, thanks for filing this blueprint!

For 2, the conductor is mainly used to manage objects for a CoE; k8s has pod,
service and rc, so what are the mesos objects that we want to manage? IMHO, mesos
is a resource manager and it needs to work with some frameworks to
provide services.


  First, I want to emphasize that mesos is not a service (it looks like a
 library). Therefore, mesos doesn't have a web API, and most users don't
 use mesos directly. Instead, they use a mesos framework that is on top
 of mesos. Therefore, a mesos bay needs to have a mesos framework pre-
 configured so that magnum can talk to the framework to manage the bay.
 There are several framework choices. Below is a list of frameworks that
 look like a fit (in my opinion). An exhaustive list of frameworks can be
 found here [1].

 1. Marathon [2]
 This is a framework controlled by a company (mesosphere [3]). It is open
 source, though. It supports running apps on clusters of docker containers.
 It is probably the most widely-used mesos framework for long-running
 applications.


  Marathon offers a REST API, whereas Aurora does not (unless one has
 materialized in the last month). This was the one we discussed in our
 Vancouver design summit, and we agreed that those wanting to use Apache
 Mesos are probably expecting this framework.

  2. Aurora [4]
 This is a framework governed by Apache Software Foundation. It looks very
 similar to Marathon, but maybe more advanced in nature. It has been used by
 Twitter at scale. Here [5] is a detailed comparison between Marathon and
 Aurora.


  We should have an alternate bay template for Aurora in our contrib
 directory. If users like Aurora better than Marathon, we can discuss making
 it the default template, and put the Marathon template in the contrib
 directory.


  3. Kubernetes/Docker swarm
 It looks like swarm-mesos is not ready yet. I cannot find anything about
 it (besides several videos on YouTube). The kubernetes-mesos is there
 [6]. In theory, magnum should be able to deploy a mesos bay and talk to the
 bay through the kubernetes API. An advantage is that we can reuse the
 kubernetes conductor. A disadvantage is that it is not a 'mesos' way to
 manage containers. Users from the mesos community are probably more comfortable
 managing containers through Marathon/Aurora.


  If you want Kubernetes, you should use the Kubernetes bay type. If you
 want Kubernetes controlling Mesos, make a custom Heat template for that,
 and we can put it into contrib.

Agreed; even using Mesos as the resource manager, end users can still use the
magnum API to create pods, services, and rcs.


  If you want Swarm controlling Mesos, then you want BOTH a Swarm bay
 *and* a Mesos bay, with the Swarm bay configured to use the Mesos bay using
 the (currently developing) integration hook for Mesos in Swarm.

  Any opposing viewpoints to consider?

  Thanks,

  Adrian

  --hongbin

 [1] http://mesos.apache.org/documentation/latest/mesos-frameworks/
 [2] https://github.com/mesosphere/marathon
 [3] https://mesosphere.com/
 [4] http://aurora.apache.org/
 [5]
 http://stackoverflow.com/questions/28651922/marathon-vs-aurora-and-their-purposes
 [6] https://github.com/mesosphere/kubernetes-mesos

 --
 Add support for mesos bay type
 https://blueprints.launchpad.net/magnum/+spec/mesos-bay-type



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Demo Video

2015-05-22 Thread Jay Lau
Cool!

2015-05-22 8:37 GMT-07:00 Adrian Otto adrian.o...@rackspace.com:

  Team,

  In response to excitement about the demo of magnum I showed during our
 Tuesday keynote at the OpenStack Summit in Vancouver, I have made a
 screencast of that same demo, and published it on the Magnum project page:

  https://wiki.openstack.org/wiki/Magnum

  If you want to check out the full Keynote, see the link below [1], and
 if you want to skip right to the part about Magnum, see my blog [2]. Please
 share these videos with your colleagues back home who want to get a crisp
 idea about what Magnum can do today.

  Thanks,

  Adrian

  [1]
 https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/taking-risks-how-experimentation-leads-to-breakthroughs
 [2]
 http://adrianotto.com/2015/05/video-vancouver-openstack-keynote-with-magnum-and-kubrnetes/

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Proposal for Madhuri Kumari to join Core Team

2015-04-28 Thread Jay Lau
+1, welcome Madhuri!

2015-04-29 0:00 GMT+08:00 Adrian Otto adrian.o...@rackspace.com:

  +1

  On Apr 28, 2015, at 8:14 AM, Steven Dake (stdake) std...@cisco.com
 wrote:

  Hi folks,

  I would like to nominate Madhuri Kumari  to the core team for Magnum.
 Please remember a +1 vote indicates your acceptance.  A –1 vote acts as a
 complete veto.

  Why Madhuri for core?

1. She participates on IRC heavily
2. She has been heavily involved in a really difficult project  to
remove Kubernetes kubectl and replace it with a native python language
binding which is really close to be done (TM)
3. She provides helpful reviews and her reviews are of good quality

 Some of Madhuri’s stats, where she performs in the pack with the rest of
 the core team:

  reviews: http://stackalytics.com/?release=kilomodule=magnum-group
 commits:
 http://stackalytics.com/?release=kilomodule=magnum-groupmetric=commits

  Please feel free to vote if your a Magnum core contributor.

  Regards
 -steve


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [chef] A new Core member!

2015-04-19 Thread Jay Lau
Congrats Zhi Wei! Well deserved!

2015-04-20 11:26 GMT+08:00 Jian Wen wenjia...@gmail.com:

 Congrats

 On Sat, Apr 18, 2015 at 5:42 AM, JJ Asghar jasg...@chef.io wrote:
  Hey everyone!
 
  I’d like to announce that Zhiwei Chen has stepped up as a new Core
 Member. Please take a moment to congratulate him!
 
  Thanks Zhiwei!
 
  JJ Asghar
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Best,

 Jian

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] How to use docker-swarm bay in magnum

2015-04-15 Thread Jay Lau
Thanks Andrew and Fenghua, I see. The current docker-swarm template only
uses heat to create a swarm cluster; I see you have many bps related to
this. ;-)

2015-04-15 23:13 GMT+08:00 FangFenghua fang_feng...@hotmail.com:

 For a docker-swarm bay, I think the bay looks like a big machine that has a
 docker daemon.
 We can create containers in it.
 --
 From: andrew.mel...@rackspace.com
 To: openstack-dev@lists.openstack.org
 Date: Wed, 15 Apr 2015 14:53:39 +
 Subject: Re: [openstack-dev] [magnum] How to use docker-swarm bay in magnum


 ​​​Hi Jay,



 Magnum Bays do not currently use the docker-swarm template. I'm working on
 a patch to add support for the docker-swarm template. That is going to
 require a new TemplateDefinition, and potentially some new config options
 and/or Bay/BayModel parameters. After that, the Docker container conductor
 will need to be updated to pull its connection string from the Bay instead
 of the



 To answer your main question though, the idea is that once users can build
 a docker-swarm bay, they will use the container endpoints of our API to
 interact with the bay.



 --Andrew

  --
 *From:* Jay Lau jay.lau@gmail.com
 *Sent:* Tuesday, April 14, 2015 5:33 AM
 *To:* OpenStack Development Mailing List
 *Subject:* [openstack-dev] [magnum] How to use docker-swarm bay in magnum

   Greetings,

  Currently, there is a docker-swarm bay in magnum, but the problem is,
 after this swarm bay is created, how does a user use this bay? Still using the
 swarm CLI? Magnum does not have an API/CLI to interact with a swarm bay now.

 --
   Thanks,

  Jay Lau (Guangya Liu)

 __
 OpenStack Development Mailing List (not for usage questions) Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] How to use docker-swarm bay in magnum

2015-04-14 Thread Jay Lau
Greetings,

Currently, there is a docker-swarm bay in magnum, but the problem is, after
this swarm bay is created, how does a user use this bay? Still using the swarm
CLI? Magnum does not have an API/CLI to interact with a swarm bay now.

-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]About clean none use container imag

2015-04-13 Thread Jay Lau
Interesting topic. Pulling an image is time consuming, so someone might not
want to delete the image; but in some cases, if the image is no longer used,
it is better to remove it from disk to release space. You may want to send
out an email to the [openstack][magnum] ML to get more feedback ;-)
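
As a point of reference, dangling (untagged) images can already be cleaned up
on a bay node with the standard docker CLI, for example:

    docker rmi $(docker images -q --filter "dangling=true")

Removing images that are tagged but merely unused is less clear-cut, since a
later container create may want to reuse them without re-pulling.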

2015-04-13 14:51 GMT+08:00 449171342 449171...@qq.com:



 Magnum now has container create and delete APIs. The container create API
 will pull the docker image from the docker registry, but the container delete API
 doesn't delete the image. The image remains even though no container uses it
 any more. Would it be better if we could clean up the image in some way?
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum][Heat] Expression of Bay Status

2015-03-11 Thread Jay Lau
2015-03-10 23:21 GMT+08:00 Hongbin Lu hongbin...@gmail.com:

 Hi Adrian,

 On Mon, Mar 9, 2015 at 6:53 PM, Adrian Otto adrian.o...@rackspace.com
 wrote:

 Magnum Team,

 In the following review, we have the start of a discussion about how to
 tackle bay status:

 https://review.openstack.org/159546

 I think a key issue here is that we are not subscribing to an event feed
 from Heat to tell us about each state transition, so we have a low degree
 of confidence that our state will match the actual state of the stack in
 real-time. At best, we have an eventually consistent state for Bay
 following a bay creation.

 Here are some options for us to consider to solve this:

 1) Propose enhancements to Heat (or learn about existing features) to
 emit a set of notifications upon state changes to stack resources so the
 state can be mirrored in the Bay resource.


 A drawback of this option is that it increases the difficulty of
 trouble-shooting. In my experience of using Heat (SoftwareDeployments in
 particular), Ironic and Trove, one of the most frequent errors I
 encountered is that the provisioning resources stayed in a deploying state
 (and never went to completed). The reason is that they were waiting for a callback
 signal from the provisioned resource to indicate its completion, but the
 callback signal was blocked for various reasons (e.g. incorrect firewall
 rules, incorrect configs, etc.). Trouble-shooting such problems is
 generally harder.

I think that heat convergence is working on the issues behind your
concern: https://wiki.openstack.org/wiki/Heat/Blueprints/Convergence




 2) Spawn a task to poll the Heat stack resource for state changes, express
 them in the Bay status, and allow that task to exit once the stack
 reaches its terminal (completed) state (a rough sketch of this is shown
 below, after option 3).

 3) Don’t store any state in the Bay object, and simply query the heat
 stack for status as needed.
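
To make option 2 concrete, a minimal polling sketch could look like the
following (illustrative only: it assumes an already-authenticated
python-heatclient client and a hypothetical bay object exposing status and
save(); a real conductor task would also need timeouts and error handling):

import time

TERMINAL_STATES = ("CREATE_COMPLETE", "CREATE_FAILED",
                   "UPDATE_COMPLETE", "UPDATE_FAILED",
                   "DELETE_COMPLETE", "DELETE_FAILED")

def sync_bay_status(heat_client, bay, stack_id, poll_interval=10):
    # Mirror the Heat stack status into the Bay until a terminal state is reached.
    while True:
        stack = heat_client.stacks.get(stack_id)
        bay.status = stack.stack_status      # e.g. CREATE_IN_PROGRESS
        bay.save()
        if stack.stack_status in TERMINAL_STATES:
            return bay.status
        time.sleep(poll_interval)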


 Are each of these options viable? Are there other options to consider?
 What are the pro/con arguments for each?

 Thanks,

 Adrian



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat][neutron] allowed_address_pairs does not work

2015-03-03 Thread Jay Lau
Hi,

I see that the neutron port resource has a property named
allowed_address_pairs and I tried to use this property to create a port,
but it does not seem to work.

I want to create a port with mac fa:16:3e:05:d5:9f and ip 192.168.0.58,
but after creating it with a heat template, the final neutron port
mac is fa:16:3e:01:45:bb and ip is 192.168.0.62. Can someone show me what
is wrong in my configuration?

Also, allowed_address_pairs is a list; does that mean I can create a
port with multiple mac and ip addresses? If that is the case, then when
creating a VM with this port, does it mean that the VM can have multiple
mac/ip pairs?

[root@prsdemo2 ~]# cat port-3.yaml
heat_template_version: 2013-05-23

description: 
  HOT template to create a new neutron network plus a router to the public
  network, and for deploying two servers into the new network. The template
also
  assigns floating IP addresses to each server so they are routable from the
  public network.

resources:

  server1_port:
type: OS::Neutron::Port
properties:
  allowed_address_pairs:
- mac_address: fa:16:3e:05:d5:9f
  ip_address: 192.168.0.58
  network: demonet
[root@prsdemo2 ~]# heat stack-create -f ./port-3.yaml p3
+--+++--+
| id   | stack_name | stack_status   |
creation_time|
+--+++--+
| 234d512c-4c90-4d4e-8d1c-ccf272254477 | p3 | CREATE_IN_PROGRESS |
2015-03-03T14:35:49Z |
+--+++--+
[root@prsdemo2 ~]# heat stack-list
+--++-+--+
| id   | stack_name | stack_status|
creation_time|
+--++-+--+
| 234d512c-4c90-4d4e-8d1c-ccf272254477 | p3 | CREATE_COMPLETE |
2015-03-03T14:35:49Z |
+--++-+--+
[root@prsdemo2 ~]# neutron port-list
+--+--+---+-+
| id   |
name | mac_address   |
fixed_ips
|
+--+--+---+-+
| 8d20b3a4-024a-4613-9d26-3d49534a839c |
p3-server1_port-op3w5yzyks5i | fa:16:3e:01:45:bb |
{subnet_id: 4e7b6983-7364-4a71-8d9c-580d88fd4797, ip_address:
192.168.0.62} |
+--+--+---+-+
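
A note on the behaviour above: allowed_address_pairs does not assign addresses
to the port; it only adds extra MAC/IP pairs that are allowed to pass the
port's anti-spoofing rules (useful for VRRP-style virtual IPs that the guest
configures itself), so neutron still picks the port's own MAC and IP. To pin
the port's own address, the fixed_ips property (and, subject to neutron
policy, mac_address) is the one to set. An untested sketch, reusing the
resource above:

  server1_port:
    type: OS::Neutron::Port
    properties:
      network: demonet
      mac_address: fa:16:3e:05:d5:9f       # may require admin/network-owner rights
      fixed_ips:
        - ip_address: 192.168.0.58
      allowed_address_pairs:               # still only whitelists extra addresses
        - mac_address: fa:16:3e:05:d5:9f
          ip_address: 192.168.0.58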

-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Issue on going through the quickstart guide

2015-02-22 Thread Jay Lau
Can you check the kubelet log on your minions? It seems the container failed
to start; there might be something wrong on your minion nodes. Thanks.

2015-02-22 15:08 GMT+08:00 Hongbin Lu hongbin...@gmail.com:

 Hi all,

 I tried to go through the new redis example at the quickstart guide [1],
 but was not able to go through. I was blocked by connecting to the redis
 slave container:

 *$ docker exec -i -t $REDIS_ID redis-cli*
 *Could not connect to Redis at 127.0.0.1:6379 http://127.0.0.1:6379:
 Connection refused*

 Here is the container log:

 *$ docker logs $REDIS_ID*
 *Error: Server closed the connection*
 *Failed to find master.*

 It looks like the redis master disappeared at some point. I tried to check
 the status about every minute. Below is the output.

 *$ kubectl get pod*
 *NAME   IMAGE(S)  HOST
LABELS  STATUS*
 *51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
 http://10.0.0.5/
 name=redis-sentinel,redis-sentinel=true,role=sentinel   Pending*
 *redis-master   kubernetes/redis:v1   10.0.0.4/
 http://10.0.0.4/   name=redis,redis-sentinel=true,role=master
  Pending*
 *   kubernetes/redis:v1*
 *512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
 http://10.0.0.5/   name=redis
  Pending*

 *$ kubectl get pod*
 *NAME   IMAGE(S)  HOST
LABELS  STATUS*
 *512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
 http://10.0.0.5/   name=redis
  Running*
 *51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
 http://10.0.0.5/
 name=redis-sentinel,redis-sentinel=true,role=sentinel   Running*
 *redis-master   kubernetes/redis:v1   10.0.0.4/
 http://10.0.0.4/   name=redis,redis-sentinel=true,role=master
  Running*
 *   kubernetes/redis:v1*

 *$ kubectl get pod*
 *NAME   IMAGE(S)  HOST
LABELS  STATUS*
 *redis-master   kubernetes/redis:v1   10.0.0.4/
 http://10.0.0.4/   name=redis,redis-sentinel=true,role=master
  Running*
 *   kubernetes/redis:v1*
 *512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
 http://10.0.0.5/   name=redis
  Failed*
 *51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
 http://10.0.0.5/
 name=redis-sentinel,redis-sentinel=true,role=sentinel   Running*
 *233fa7d1-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
 http://10.0.0.5/   name=redis
  Running*

 *$ kubectl get pod*
 *NAME   IMAGE(S)  HOST
LABELS  STATUS*
 *512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
 http://10.0.0.5/   name=redis
  Running*
 *51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
 http://10.0.0.5/
 name=redis-sentinel,redis-sentinel=true,role=sentinel   Running*
 *233fa7d1-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
 http://10.0.0.5/   name=redis
  Running*
 *3b164230-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.4/
 http://10.0.0.4/
 name=redis-sentinel,redis-sentinel=true,role=sentinel   Pending*

 Is anyone able to reproduce the problem above? If yes, I am going to file
 a bug.

 Thanks,
 Hongbin

 [1]
 https://github.com/stackforge/magnum/blob/master/doc/source/dev/dev-quickstart.rst#exercising-the-services-using-devstack

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][heat]Vote for Openstack L summit topic The Heat Orchestration Template Builder: A demonstration

2015-02-22 Thread Jay Lau
It's really a great feature for Heat which can speed up taking Heat to
production. Wish you good luck! Thanks!

2015-02-21 2:21 GMT+08:00 Aggarwal, Nikunj nikunj.aggar...@hp.com:

  Hi,



 I have submitted  presentations for Openstack L summit :



 The Heat Orchestration Template Builder: A demonstration
 https://www.openstack.org/vote-vancouver/Presentation/the-heat-orchestration-template-builder-a-demonstration





 Please cast your vote if you feel it is worth presenting.



 Thanks  Regards,

 Nikunj



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Issue on going through the quickstart guide

2015-02-22 Thread Jay Lau
I suspect that there is some error after the pod/services are parsed. Can you
please try the native k8s command first, and then debug the k8s API part
to check the difference between the original json file and the parsed json file?
Thanks!

kubectl create -f xxx.json



2015-02-23 1:40 GMT+08:00 Hongbin Lu hongbin...@gmail.com:

 Thanks Jay,

 I checked the kubelet log. There are a lot of Watch closed error like
 below. Here is the full log http://fpaste.org/188964/46261561/ .

 *Status:Failure, Message:unexpected end of JSON input, Reason:*
 *Status:Failure, Message:501: All the given peers are not reachable*

 Please note that my environment was setup by following the quickstart
 guide. It seems that all the kube components were running (checked by using
 systemctl status command), and all nodes can ping each other. Any further
 suggestion?

 Thanks,
 Hongbin


 On Sun, Feb 22, 2015 at 3:58 AM, Jay Lau jay.lau@gmail.com wrote:

 Can you check the kubelet log on your minions? It seems the container failed
 to start; there might be something wrong on your minion nodes. Thanks.

 2015-02-22 15:08 GMT+08:00 Hongbin Lu hongbin...@gmail.com:

 Hi all,

 I tried to go through the new redis example at the quickstart guide [1],
 but was not able to go through. I was blocked by connecting to the redis
 slave container:

 *$ docker exec -i -t $REDIS_ID redis-cli*
 *Could not connect to Redis at 127.0.0.1:6379 http://127.0.0.1:6379:
 Connection refused*

 Here is the container log:

 *$ docker logs $REDIS_ID*
 *Error: Server closed the connection*
 *Failed to find master.*

 It looks like the redis master disappeared at some point. I tried to
 check the status in about every one minute. Below is the output.

 *$ kubectl get pod*
 *NAME   IMAGE(S)  HOST
  LABELS  STATUS*
 *51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
 http://10.0.0.5/
 name=redis-sentinel,redis-sentinel=true,role=sentinel   Pending*
 *redis-master   kubernetes/redis:v1   10.0.0.4/
 http://10.0.0.4/   name=redis,redis-sentinel=true,role=master
  Pending*
 *   kubernetes/redis:v1*
 *512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
 http://10.0.0.5/   name=redis
  Pending*

 *$ kubectl get pod*
 *NAME   IMAGE(S)  HOST
  LABELS  STATUS*
 *512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
 http://10.0.0.5/   name=redis
  Running*
 *51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
 http://10.0.0.5/
 name=redis-sentinel,redis-sentinel=true,role=sentinel   Running*
 *redis-master   kubernetes/redis:v1   10.0.0.4/
 http://10.0.0.4/   name=redis,redis-sentinel=true,role=master
  Running*
 *   kubernetes/redis:v1*

 *$ kubectl get pod*
 *NAME   IMAGE(S)  HOST
  LABELS  STATUS*
 *redis-master   kubernetes/redis:v1   10.0.0.4/
 http://10.0.0.4/   name=redis,redis-sentinel=true,role=master
  Running*
 *   kubernetes/redis:v1*
 *512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
 http://10.0.0.5/   name=redis
  Failed*
 *51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
 http://10.0.0.5/
 name=redis-sentinel,redis-sentinel=true,role=sentinel   Running*
 *233fa7d1-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
 http://10.0.0.5/   name=redis
  Running*

 *$ kubectl get pod*
 *NAME   IMAGE(S)  HOST
  LABELS  STATUS*
 *512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
 http://10.0.0.5/   name=redis
  Running*
 *51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
 http://10.0.0.5/
 name=redis-sentinel,redis-sentinel=true,role=sentinel   Running*
 *233fa7d1-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
 http://10.0.0.5/   name=redis
  Running*
 *3b164230-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.4/
 http://10.0.0.4/
 name=redis-sentinel,redis-sentinel=true,role=sentinel   Pending*

 Is anyone able to reproduce the problem above? If yes, I am going to
 file a bug.

 Thanks,
 Hongbin

 [1]
 https://github.com/stackforge/magnum/blob/master/doc/source/dev/dev-quickstart.rst#exercising-the-services-using-devstack


 __
 OpenStack Development Mailing List (not for usage

Re: [openstack-dev] [magnum] Propose removing Dmitry Guryanov from magnum-core

2015-02-17 Thread Jay Lau
-1. Thanks Dmitry for the contribution, and welcome back in the near future!

2015-02-17 23:36 GMT+08:00 Hongbin Lu hongbin...@gmail.com:

 -1

 On Mon, Feb 16, 2015 at 10:20 PM, Steven Dake (stdake) std...@cisco.com
 wrote:

  The initial magnum core team was founded at a meeting where several
 people committed to being active in reviews and writing code for Magnum.
 Nearly all of the folks that made that initial commitment have been active
 in IRC, on the mailing lists, or participating in code reviews or code
 development.

  Out of our core team of 9 members [1], everyone has been active in some
 way except for Dmitry.  I propose removing him from the core team.  Dmitry
 is welcome to participate in the future if he chooses and be held to the
 same high standards we have held our last 4 new core members to that didn’t
 get an initial opt-in but were voted in by their peers.

  Please vote (-1 remove, abstain, +1 keep in core team) - a vote of +1
 from any core acts as a veto meaning Dmitry will remain in the core team.

  [1] https://review.openstack.org/#/admin/groups/473,members

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Scheduling for Magnum

2015-02-09 Thread Jay Lau
Thanks Sylvain, we have not worked out the API requirements yet, but I
think the requirements should be similar to nova's: we need a
select_destination call to select the best target host based on filters and
weights (a rough sketch is included below).

There are also some discussions here
https://blueprints.launchpad.net/magnum/+spec/magnum-scheduler-for-docker
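
To make that concrete, a minimal filter-and-weigh sketch (purely illustrative;
the host attributes and helper names below are made up, and this is not Nova or
Magnum code):

def select_destination(hosts, request, filters, weighers):
    # Drop hosts that fail any filter, then pick the highest-weighted one.
    candidates = [h for h in hosts if all(f(h, request) for f in filters)]
    if not candidates:
        raise RuntimeError("No valid host found")
    return max(candidates, key=lambda h: sum(w(h, request) for w in weighers))

def ram_filter(host, request):
    return host["free_ram_mb"] >= request["ram_mb"]

def fewest_containers_weigher(host, request):
    return -host["num_containers"]

best = select_destination(
    hosts=[{"name": "node1", "free_ram_mb": 2048, "num_containers": 3},
           {"name": "node2", "free_ram_mb": 4096, "num_containers": 1}],
    request={"ram_mb": 512},
    filters=[ram_filter],
    weighers=[fewest_containers_weigher])
print(best["name"])   # -> node2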

Thanks!

2015-02-09 16:22 GMT+08:00 Sylvain Bauza sba...@redhat.com:

  Hi Magnum team,


 Le 07/02/2015 19:24, Steven Dake (stdake) a écrit :



   From: Eric Windisch e...@windisch.us
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Saturday, February 7, 2015 at 10:09 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum


 1) Cherry pick scheduler code from Nova, which already has a working a
 filter scheduler design.


 The Gantt team explored that option during the Icehouse cycle and it failed
 with a lot of problems. I won't list all of those, but I'll just explain
 that we discovered how the Scheduler and the Nova compute manager were
 tightly coupled, which meant that a repository fork was really
 difficult to do without first reducing the tech debt.

 That said, our concerns were far different from the Magnum team : it was
 about having feature parity and replacing the current Nova scheduler, while
 your team is just saying that they want to have something about containers.


 2) Integrate swarmd to leverage its scheduler[2].


  I see #2 as not an alternative but possibly an also. Swarm uses the
 Docker API, although they're only about 75% compatible at the moment.
 Ideally, the Docker backend would work with both single docker hosts and
 clusters of Docker machines powered by Swarm. It would be nice, however, if
 scheduler hints could be passed from Magnum to Swarm.

  Regards,
 Eric Windisch


  Adrian  Eric,

  I would prefer to keep things simple and just integrate directly with
 swarm and leave out any cherry-picking from Nova. It would be better to
 integrate scheduling hints into Swarm, but I’m sure the swarm upstream is
 busy with requests and this may be difficult to achieve.


 I don't want to give my opinion about which option you should take as I
 don't really know your needs. If I understand correctly, this is about
 having a scheduler providing affinity rules for containers. Do you have a
 document explaining which interfaces you're looking for, which kind of APIs
 you're wanting or what's missing with the current Nova scheduler ?

 MHO is that the technology shouldn't drive your decision : whatever the
 backend is (swarmd or an inherited nova scheduler), your interfaces should
 be the same.

 -Sylvain


   Regards
 -steve



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Scheduling for Magnum

2015-02-09 Thread Jay Lau
Thanks Steve, I just want to discuss this a bit more. Per Andrew's
comments, we need a generic scheduling interface, but if our focus is
native docker, is this still needed? Thanks!

2015-02-10 14:52 GMT+08:00 Steven Dake (stdake) std...@cisco.com:



   From: Jay Lau jay.lau@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Monday, February 9, 2015 at 11:31 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum

Steve,

   So you mean we should focus on the docker and k8s schedulers? I was a bit
 confused: why do we need to care about k8s? The k8s cluster is created by
 heat, and once it is created, k8s has its own scheduler for
 creating pods/services/rcs.

 So it seems we only need to care about a scheduler for the native docker and
 ironic bays. Comments?


  Ya scheduler only matters for native docker.  Ironic bay can be k8s or
 docker+swarm or something similar.

  But yup, I understand your point.


 Thanks!

 2015-02-10 12:32 GMT+08:00 Steven Dake (stdake) std...@cisco.com:



   From: Joe Gordon joe.gord...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Monday, February 9, 2015 at 6:41 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum



  On Mon, Feb 9, 2015 at 6:00 AM, Steven Dake (stdake) std...@cisco.com
 wrote:



 On 2/9/15, 3:02 AM, Thierry Carrez thie...@openstack.org wrote:

 Adrian Otto wrote:
  [...]
  We have multiple options for solving this challenge. Here are a few:
 
  1) Cherry pick scheduler code from Nova, which already has a working a
 filter scheduler design.
  2) Integrate swarmd to leverage its scheduler[2].
  3) Wait for the Gantt, when Nova Scheduler to be moved out of Nova.
 This is expected to happen about a year from now, possibly sooner.
  4) Write our own filter scheduler, inspired by Nova.
 
 I haven't looked enough into Swarm to answer that question myself, but
 how much would #2 tie Magnum to Docker containers ?
 
 There is value for Magnum to support other container engines / formats
 (think Rocket/Appc) in the long run, so we should avoid early design
 choices that would prevent such support in the future.

 Thierry,
 Magnum has an object type of a bay which represents the underlying
 cluster
 architecture used.  This could be kubernetes, raw docker, swarmd, or some
 future invention.  This way Magnum can grow independently of the
 underlying technology and provide a satisfactory user experience dealing
 with the chaos that is the container development world :)


  While I don't disagree with anything said here, this does sound a lot
 like https://xkcd.com/927/



 Andrew had suggested offering a unified standard user experience and
 API.  I think that matches the 927 comic pretty well.  I think we should
 offer each type of system using APIs that are similar in nature but that
 offer the native features of the system.  In other words, we will offer
 integration across the various container landscape with OpenStack.

  We should strive to be conservative and pragmatic in our systems
 support and only support container schedulers and container managers that
 have become strongly emergent systems.  At this point that is docker and
 kubernetes.  Mesos might fit that definition as well.  Swarmd and rocket
 are not yet strongly emergent, but they show promise of becoming so.  As a
 result, they are clearly systems we should be thinking about for our
 roadmap.  All of these systems present very similar operational models.

  At some point competition will choke off new system design placing an
 upper bound on the amount of systems we have to deal with.

  Regards
 -steve



 We will absolutely support relevant container technology, likely through
 new Bay formats (which are really just heat templates).

 Regards
 -steve

 
 --
 Thierry Carrez (ttx)
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Magnum] Scheduling for Magnum

2015-02-09 Thread Jay Lau
Thanks Adrian for the clarification, it is clear now.

OK, we can focus on the right solution for the Docker back-end for now.

2015-02-10 15:35 GMT+08:00 Adrian Otto adrian.o...@rackspace.com:

  I think it’s fair to assert that our generic scheduling interface should
 be based on Gantt. When that approaches a maturity point where it’s
 appropriate to leverage Gantt for container use cases, we should definitely
 consider switching to that. We should remain engaged in Gantt design
 decisions along the way to provide input.

  In the short term we want a solution that works nicely for our Docker
 handler, because that’s an obvious functionality gap. The k8s handler
 already has a scheduler, so it can remain unchanged. Let’s not fall into a
 trap of over-engineering the scheduler, as that can be very tempting but
 yield limited value.

  My suggestion is that we focus on the right solution for the Docker
 backend for now, and keep in mind that we want a general purpose scheduler
 in the future that could be adapted to work with a variety of container
 backends.

  I want to recognize that Andrew’s thoughts are well considered to avoid
 rework and remain agnostic about container backends. Further, I think
 resource scheduling is the sort of problem domain that would lend itself
 well to a common solution with numerous use cases. If you look at the
 various ones that exist today, there are lots of similarities. We will find
 a multitude of scheduling algorithms, but probably not uniquely innovative
 scheduling interfaces. The interface to a scheduler will be relatively
 simple, and we could afford to collaborate a bit with the Gantt team to get
 solid ideas on the table for that. Let’s table that pursuit for now, and
 re-engage at our Midcycle meetup to explore that topic further. In the mean
 time, I’d like us to iterate on a suitable point solution for the Docker
 backend. A final iteration of that work may be to yank it completely, and
 replace it with a common scheduler at a later point. I’m willing to accept
 that tradeoff for a quick delivery of a Docker specific scheduler that we
 can learn from and iterate.

  Cheers,

  Adrian

  On Feb 9, 2015, at 10:57 PM, Jay Lau jay.lau@gmail.com wrote:

  Thanks Steve, I just want to discuss this a bit more. Per Andrew's
 comments, we need a generic scheduling interface, but if our focus is
 native docker, is this still needed? Thanks!

 2015-02-10 14:52 GMT+08:00 Steven Dake (stdake) std...@cisco.com:



   From: Jay Lau jay.lau@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Monday, February 9, 2015 at 11:31 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum

Steve,

   So you mean we should focus on the docker and k8s schedulers? I was a bit
 confused: why do we need to care about k8s? The k8s cluster is created by
 heat, and once it is created, k8s has its own scheduler for
 creating pods/services/rcs.

 So it seems we only need to care about a scheduler for the native docker and
 ironic bays. Comments?


  Ya scheduler only matters for native docker.  Ironic bay can be k8s or
 docker+swarm or something similar.

  But yup, I understand your point.


 Thanks!

 2015-02-10 12:32 GMT+08:00 Steven Dake (stdake) std...@cisco.com:



   From: Joe Gordon joe.gord...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage
 questions) openstack-dev@lists.openstack.org
 Date: Monday, February 9, 2015 at 6:41 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum



  On Mon, Feb 9, 2015 at 6:00 AM, Steven Dake (stdake) std...@cisco.com
 wrote:



 On 2/9/15, 3:02 AM, Thierry Carrez thie...@openstack.org wrote:

 Adrian Otto wrote:
  [...]
  We have multiple options for solving this challenge. Here are a few:
 
  1) Cherry pick scheduler code from Nova, which already has a working
 a
 filter scheduler design.
  2) Integrate swarmd to leverage its scheduler[2].
  3) Wait for the Gantt, when Nova Scheduler to be moved out of Nova.
 This is expected to happen about a year from now, possibly sooner.
  4) Write our own filter scheduler, inspired by Nova.
 
 I haven't looked enough into Swarm to answer that question myself, but
 how much would #2 tie Magnum to Docker containers ?
 
 There is value for Magnum to support other container engines / formats
 (think Rocket/Appc) in the long run, so we should avoid early design
 choices that would prevent such support in the future.

 Thierry,
 Magnum has an object type of a bay which represents the underlying
 cluster
 architecture used.  This could be kubernetes, raw docker, swarmd, or
 some
 future invention.  This way Magnum can grow independently of the
 underlying technology and provide a satisfactory user

Re: [openstack-dev] [Magnum] Scheduling for Magnum

2015-02-09 Thread Jay Lau
Steve,

So you mean we should focus on the docker and k8s schedulers? I was a bit
confused: why do we need to care about k8s? The k8s cluster is created by
heat, and once it is created, k8s has its own scheduler for
creating pods/services/RCs.

So it seems we only need to care about a scheduler for the native docker and
ironic bays, comments?

Thanks!

2015-02-10 12:32 GMT+08:00 Steven Dake (stdake) std...@cisco.com:



   From: Joe Gordon joe.gord...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Monday, February 9, 2015 at 6:41 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum



 On Mon, Feb 9, 2015 at 6:00 AM, Steven Dake (stdake) std...@cisco.com
 wrote:



 On 2/9/15, 3:02 AM, Thierry Carrez thie...@openstack.org wrote:

 Adrian Otto wrote:
  [...]
  We have multiple options for solving this challenge. Here are a few:
 
  1) Cherry pick scheduler code from Nova, which already has a working a
 filter scheduler design.
  2) Integrate swarmd to leverage its scheduler[2].
  3) Wait for the Gantt, when Nova Scheduler to be moved out of Nova.
 This is expected to happen about a year from now, possibly sooner.
  4) Write our own filter scheduler, inspired by Nova.
 
 I haven't looked enough into Swarm to answer that question myself, but
 how much would #2 tie Magnum to Docker containers ?
 
 There is value for Magnum to support other container engines / formats
 (think Rocket/Appc) in the long run, so we should avoid early design
 choices that would prevent such support in the future.

 Thierry,
 Magnum has an object type of a bay which represents the underlying cluster
 architecture used.  This could be kubernetes, raw docker, swarmd, or some
 future invention.  This way Magnum can grow independently of the
 underlying technology and provide a satisfactory user experience dealing
 with the chaos that is the container development world :)


  While I don't disagree with anything said here, this does sound a lot
 like https://xkcd.com/927/



 Andrew had suggested offering a unified standard user experience and
 API.  I think that matches the 927 comic pretty well.  I think we should
 offer each type of system using APIs that are similar in nature but that
 offer the native features of the system.  In other words, we will offer
 integration across the various container landscape with OpenStack.

  We should strive to be conservative and pragmatic in our systems support
 and only support container schedulers and container managers that have
 become strongly emergent systems.  At this point that is docker and
 kubernetes.  Mesos might fit that definition as well.  Swarmd and rocket
 are not yet strongly emergent, but they show promise of becoming so.  As a
 result, they are clearly systems we should be thinking about for our
 roadmap.  All of these systems present very similar operational models.

  At some point competition will choke off new system design placing an
 upper bound on the amount of systems we have to deal with.

  Regards
 -steve



 We will absolutely support relevant container technology, likely through
 new Bay formats (which are really just heat templates).

 Regards
 -steve

 
 --
 Thierry Carrez (ttx)
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Scheduling for Magnum

2015-02-07 Thread Jay Lau
Sorry for being late. I'm OK with both the nova scheduler and swarm, as both
of them use the same logic for scheduling: filter + strategy (weight), and the
code structure/logic is also very similar between the nova scheduler and swarm.

In my understanding, even if we use swarm and translate Go to Python, after
this scheduler is added to magnum, we may notice that it is very similar
to the nova scheduler.
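
To make the filter + strategy comparison concrete, both schedulers essentially
reduce to a loop like the following (rough sketch, not code from either
project; filters and weighers are assumed to be plain callables):

    def schedule(hosts, request, filters, weighers):
        # Filtering: drop hosts that cannot satisfy the request at all.
        for host_passes in filters:
            hosts = [h for h in hosts if host_passes(h, request)]
        if not hosts:
            raise RuntimeError('no valid host found')
        # Weighing: rank the remaining hosts and pick the best one.
        return max(hosts, key=lambda h: sum(w(h, request) for w in weighers))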

Thanks!





2015-02-08 11:01 GMT+08:00 Adrian Otto adrian.o...@rackspace.com:

  Ok, so if we proceed using Swarm as our first pursuit, and we want to add
 things to Swarm like scheduling hints, we should open a Magnum bug ticket
 to track each of the upstream patches, and I can help to bird dog those. We
 should not shy away from upstream enhancements until we get firm feedback
 suggesting our contributions are out of scope.

  Adrian


  Original message 
 From: Steven Dake (stdake)
 Date:02/07/2015 10:27 AM (GMT-08:00)
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum



   From: Eric Windisch e...@windisch.us
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Saturday, February 7, 2015 at 10:09 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum


 1) Cherry pick scheduler code from Nova, which already has a working a
 filter scheduler design.
 2) Integrate swarmd to leverage its scheduler[2].


  I see #2 as not an alternative but possibly an also. Swarm uses the
 Docker API, although they're only about 75% compatible at the moment.
 Ideally, the Docker backend would work with both single docker hosts and
 clusters of Docker machines powered by Swarm. It would be nice, however, if
 scheduler hints could be passed from Magnum to Swarm.
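
 If memory serves, standalone Swarm reads placement constraints from the
 container environment, so a hint could be forwarded roughly like this
 (illustrative sketch with docker-py; the endpoint and constraint values are
 made up):

    import docker

    # Assumes a Swarm manager listening on this endpoint.
    client = docker.Client(base_url='tcp://swarm-manager:2375')

    # Swarm's built-in filters read constraints like these from the
    # container environment, e.g. pin to a node or require a node label.
    container = client.create_container(
        image='nginx',
        environment=['constraint:node==node-1'])
    client.start(container)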

  Regards,
 Eric Windisch


  Adrian  Eric,

  I would prefer to keep things simple and just integrate directly with
 swarm and leave out any cherry-picking from Nova. It would be better to
 integrate scheduling hints into Swarm, but I’m sure the swarm upstream is
 busy with requests and this may be difficult to achieve.

  Regards
 -steve


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Proposed Changes to Magnum Core

2015-01-28 Thread Jay Lau
+1!

Thanks!

2015-01-29 6:40 GMT+08:00 Steven Dake sd...@redhat.com:

 On 01/28/2015 03:27 PM, Adrian Otto wrote:

 Magnum Cores,

 I propose the following addition to the Magnum Core group[1]:

 + Hongbin Lu (hongbin034)

 Please let me know your votes by replying to this message.

 Thanks,

 Adrian

 +1

 Regards
 -steve


  [1] https://review.openstack.org/#/admin/groups/473,members Current
 Members

 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
 unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Heat] How to bind IP & MAC for a stack VM resources

2015-01-28 Thread Jay Lau
Yes, referencing is a problem. I see that we already have a resource,
OS::Neutron::Port, but the problem is that we need to specify the MAC & IP
manually for this resource, and it is difficult to use this resource in an
autoscaling group.

https://github.com/openstack/heat-templates/blob/master/cfn/F17/Neutron.template

I will try to file a bp to see if we can introduce a new resource such as
OS::Heat::PortPool to heat.
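
For reference, what each pool member has to express is essentially this
sequence (rough sketch with python-neutronclient/novaclient; the credentials,
UUIDs, MAC and IP values are placeholders):

    from neutronclient.v2_0 import client as neutron_client
    from novaclient import client as nova_client

    neutron = neutron_client.Client(username='demo', password='secret',
                                    tenant_name='demo',
                                    auth_url='http://controller:5000/v2.0')
    nova = nova_client.Client('2', 'demo', 'secret', 'demo',
                              'http://controller:5000/v2.0')

    # Pre-create a port that pins both the MAC and the fixed IP.
    port = neutron.create_port({'port': {
        'network_id': 'NETWORK_UUID',
        'mac_address': 'fa:16:3e:00:00:01',
        'fixed_ips': [{'ip_address': '10.0.0.50'}]}})['port']

    # Boot the VM on that port so it inherits the pinned IP/MAC pair.
    nova.servers.create(name='vm-1', image='IMAGE_UUID', flavor='FLAVOR_ID',
                        nics=[{'port-id': port['id']}])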

Thanks!

2015-01-27 13:30 GMT+08:00 Qiming Teng teng...@linux.vnet.ibm.com:

 On Tue, Jan 27, 2015 at 10:34:47AM +0800, Jay Lau wrote:
  2015-01-27 10:28 GMT+08:00 Qiming Teng teng...@linux.vnet.ibm.com:
 
   On Tue, Jan 27, 2015 at 10:13:59AM +0800, Jay Lau wrote:
Greetings,
   
I have a question related to MAC and IP binding, I know that we can
   create
a port to bind a private IP and MAC together then create VM using
 this
    specified port to make sure the VM can use the IP and MAC in this
   port.
This can make sure one VM can have IP and MAC as customer required.
   
The problem is how to handle this in HEAT, I can create a pool of
 neutron
 ports which have a list of MAC & IP bindings, then I want to create a
 stack
  
   Who maintains that pool? Is the maintainer providing an API for port
   allocation? Or ... the pool itself is a new resource type?
  
  The pool was created by a tenant user, as heat does not have such a
  resource type for now.  Once the pool was created, I want to use the ports
  in my stack resources.

 Are you considering proposing a spec to add that kind of a resource type
 to Heat? Or you can try generating such a pool using ResourceGroup?
 Then the question becomes referencing a member from a ResourceGroup, I
 think.

 Regards,
   Qiming

  
   Regards,
 Qiming
  
with heat to use the ports that I created before to make sure all
 VMs in
 the stack can have the IP & MAC specified in my port pool, I did not
 find a
solution for this, does anyone can give some suggestions for how to
configure this in heat template to achieve this goal?
   
--
Thanks,
   
Jay Lau (Guangya Liu)
  
   
  
 __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
   openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  
 __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 
 
  --
  Thanks,
 
  Jay Lau (Guangya Liu)

 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Heat] How to bind IP & MAC for a stack VM resources

2015-01-26 Thread Jay Lau
2015-01-27 10:28 GMT+08:00 Qiming Teng teng...@linux.vnet.ibm.com:

 On Tue, Jan 27, 2015 at 10:13:59AM +0800, Jay Lau wrote:
  Greetings,
 
  I have a question related to MAC and IP binding, I know that we can
 create
  a port to bind a private IP and MAC together then create VM using this
  specified port to make sure the VM can use the IP and MAC in this
 port.
  This can make sure one VM can have IP and MAC as customer required.
 
  The problem is how to handle this in HEAT, I can create a pool of neutron
  ports which have a list of MAC & IP bindings, then I want to create a stack

 Who maintains that pool? Is the maintainer providing an API for port
 allocation? Or ... the pool itself is a new resource type?

The pool was created by a tenant user, as heat does not have such a resource
type for now.  Once the pool was created, I want to use the ports
in my stack resources.


 Regards,
   Qiming

  with heat to use the ports that I created before to make sure all VMs in
  the stack can have the IP & MAC specified in my port pool, I did not find a
  solution for this, does anyone can give some suggestions for how to
  configure this in heat template to achieve this goal?
 
  --
  Thanks,
 
  Jay Lau (Guangya Liu)

 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][Heat] How to bind IP & MAC for a stack VM resources

2015-01-26 Thread Jay Lau
Greetings,

I have a question related to MAC and IP binding. I know that we can create
a port to bind a private IP and MAC together, then create a VM using this
specified port to make sure the VM uses the IP and MAC in this port.
This makes sure one VM can have the IP and MAC the customer requires.

The problem is how to handle this in heat. I can create a pool of neutron
ports which have a list of MAC & IP bindings, and then I want to create a
stack with heat that uses the ports I created before, to make sure all VMs in
the stack get the IP & MAC specified in my port pool. I did not find a
solution for this; can anyone give some suggestions for how to
configure this in a heat template to achieve this goal?

-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][nova][ironic] Magnum Milestone #2 blueprints - request for comments

2015-01-18 Thread Jay Lau
Steven, I just filed two bps to track all the discussions for network and
scheduler support for native docker; we can have more discussion there.

https://blueprints.launchpad.net/magnum/+spec/native-docker-network
https://blueprints.launchpad.net/magnum/+spec/magnum-scheduler-for-docker

Another thing I want to discuss is still networking: currently, magnum only
supports neutron; what about nova-network support?

2015-01-19 0:39 GMT+08:00 Steven Dake sd...@redhat.com:

  On 01/18/2015 09:23 AM, Jay Lau wrote:

 Thanks Steven, more questions/comments in line.

 2015-01-19 0:11 GMT+08:00 Steven Dake sd...@redhat.com:

  On 01/18/2015 06:39 AM, Jay Lau wrote:

   Thanks Steven, just some questions/comments here:

  1) For native docker support, do we have some project to handle the
 network? The current native docker support did not have any logic for
 network management, are we going to leverage neutron or nova-network just
 like nova-docker for this?

  We can just use flannel for both these use cases.  One way to approach
 using flannel is that we can expect docker networks will always be setup
 the same way, connecting into a flannel network.

 What about introducing neutron/nova-network support for native docker
 container just like nova-docker?



 Does that mean introducing an agent on the uOS?  I'd rather not have
 agents, since all of these uOS systems have wonky filesystem layouts and
 there is not an easy way to customize them, with dib for example.

 2) For k8s, swarm, we can leverage the scheduler in those container
 management tools, but what about docker native support? How to handle
 resource scheduling for native docker containers?

   I am not clear on how to handle native Docker scheduling if a bay has
 more than one node.  I keep hoping someone in the community will propose
 something that doesn't introduce an agent dependency in the OS.

  My thinking is this: add a new scheduler just like what nova/cinder is
 doing now, and then we can migrate to gantt once it becomes mature. Comments?


 Cool that WFM.  Too bad we can't just use gantt out the gate.

 Regards
 -steve



 Regards
 -steve


  Thanks!

 2015-01-18 8:51 GMT+08:00 Steven Dake sd...@redhat.com:

 Hi folks and especially Magnum Core,

 Magnum Milestone #1 should be released early this coming week.  I wanted to
 kick off discussions around milestone #2 since Milestone #1 development is
 mostly wrapped up.

 The milestone #2 blueprints:
 https://blueprints.launchpad.net/magnum/milestone-2

 The overall goal of Milestone #1 was to make Magnum usable for
 developers.  The overall goal of Milestone #2 is to make Magnum usable by
 operators and their customers.  To do this we are implementing blueprints
 like multi-tenant, horizontal-scale, and the introduction of coreOS in
 addition to Fedora Atomic as a Container uOS.  We are also plan to
 introduce some updates to allow bays to be more scalable.  We want bays to
 scale to more nodes manually (short term), as well as automatically (longer
 term).  Finally we want to tidy up some of the nit-picky things about
 Magnum that none of the core developers really like at the moment.  One
 example is the magnum-bay-status blueprint which will prevent the creation
 of pods/services/replicationcontrollers until a bay has completed
 orchestration via Heat.  Our final significant blueprint for milestone #2
 is the ability to launch our supported uOS on bare metal using Nova's
 Ironic plugin and the baremetal flavor.  As always, we want to improve our
 unit testing from what is now 70% to ~80% in the next milestone.

 Please have a look at the blueprints and feel free to comment on this
 thread or in the blueprints directly.  If you would like to see different
 blueprints tackled during milestone #2 that feedback is welcome, or if you
 think the core team[1] is on the right track, we welcome positive kudos too.

 If you would like to see what we tackled in Milestone #1, the code
 should be tagged and ready to run Tuesday January 20th.  Master should work
 well enough now, and the developer quickstart guide is mostly correct.

 The Milestone #1 blueprints are here for comparison's sake:
 https://blueprints.launchpad.net/magnum/milestone-1

 Regards,
 -steve


 [1] https://review.openstack.org/#/admin/groups/473,members


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
   Thanks,

  Jay Lau (Guangya Liu)


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack

Re: [openstack-dev] [magnum][nova][ironic] Magnum Milestone #2 blueprints - request for comments

2015-01-18 Thread Jay Lau
Thanks Steven, just some questions/comments here:

1) For native docker support, do we have some project to handle the
network? The current native docker support did not have any logic for
network management, are we going to leverage neutron or nova-network just
like nova-docker for this?
2) For k8s, swarm, we can leverage the scheduler in those container
management tools, but what about docker native support? How to handle
resource scheduling for native docker containers?

Thanks!

2015-01-18 8:51 GMT+08:00 Steven Dake sd...@redhat.com:

 Hi folks and especially Magnum Core,

 Magnum Milestone #1 should be released early this coming week.  I wanted to
 kick off discussions around milestone #2 since Milestone #1 development is
 mostly wrapped up.

 The milestone #2 blueprints:
 https://blueprints.launchpad.net/magnum/milestone-2

 The overall goal of Milestone #1 was to make Magnum usable for
 developers.  The overall goal of Milestone #2 is to make Magnum usable by
 operators and their customers.  To do this we are implementing blueprints
 like multi-tenant, horizontal-scale, and the introduction of coreOS in
 addition to Fedora Atomic as a Container uOS.  We are also plan to
 introduce some updates to allow bays to be more scalable.  We want bays to
 scale to more nodes manually (short term), as well as automatically (longer
 term).  Finally we want to tidy up some of the nit-picky things about
 Magnum that none of the core developers really like at the moment.  One
 example is the magnum-bay-status blueprint which will prevent the creation
 of pods/services/replicationcontrollers until a bay has completed
 orchestration via Heat.  Our final significant blueprint for milestone #2
 is the ability to launch our supported uOS on bare metal using Nova's
 Ironic plugin and the baremetal flavor.  As always, we want to improve our
 unit testing from what is now 70% to ~80% in the next milestone.

 Please have a look at the blueprints and feel free to comment on this
 thread or in the blueprints directly.  If you would like to see different
 blueprints tackled during milestone #2 that feedback is welcome, or if you
 think the core team[1] is on the right track, we welcome positive kudos too.

 If you would like to see what we tackled in Milestone #1, the code should
 be tagged and ready to run Tuesday January 20th.  Master should work well
 enough now, and the developer quickstart guide is mostly correct.

 The Milestone #1 blueprints are here for comparison's sake:
 https://blueprints.launchpad.net/magnum/milestone-1

 Regards,
 -steve


 [1] https://review.openstack.org/#/admin/groups/473,members

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][nova][ironic] Magnum Milestone #2 blueprints - request for comments

2015-01-18 Thread Jay Lau
Thanks Steven, more questions/comments in line.

2015-01-19 0:11 GMT+08:00 Steven Dake sd...@redhat.com:

  On 01/18/2015 06:39 AM, Jay Lau wrote:

   Thanks Steven, just some questions/comments here:

  1) For native docker support, do we have some project to handle the
 network? The current native docker support did not have any logic for
 network management, are we going to leverage neutron or nova-network just
 like nova-docker for this?

 We can just use flannel for both these use cases.  One way to approach
 using flannel is that we can expect docker networks will always be setup
 the same way, connecting into a flannel network.

What about introducing neutron/nova-network support for native docker
container just like nova-docker?


  2) For k8s, swarm, we can leverage the scheduler in those container
 management tools, but what about docker native support? How to handle
 resource scheduling for native docker containers?

   I am not clear on how to handle native Docker scheduling if a bay has
 more than one node.  I keep hoping someone in the community will propose
 something that doesn't introduce an agent dependency in the OS.

My thinking is this: add a new scheduler just like what nova/cinder is
doing now, and then we can migrate to gantt once it becomes mature. Comments?


 Regards
 -steve


  Thanks!

 2015-01-18 8:51 GMT+08:00 Steven Dake sd...@redhat.com:

 Hi folks and especially Magnum Core,

 Magnum Milestone #1 should be released early this coming week.  I wanted to
 kick off discussions around milestone #2 since Milestone #1 development is
 mostly wrapped up.

 The milestone #2 blueprints:
 https://blueprints.launchpad.net/magnum/milestone-2

 The overall goal of Milestone #1 was to make Magnum usable for
 developers.  The overall goal of Milestone #2 is to make Magnum usable by
 operators and their customers.  To do this we are implementing blueprints
 like multi-tenant, horizontal-scale, and the introduction of coreOS in
 addition to Fedora Atomic as a Container uOS.  We are also plan to
 introduce some updates to allow bays to be more scalable.  We want bays to
 scale to more nodes manually (short term), as well as automatically (longer
 term).  Finally we want to tidy up some of the nit-picky things about
 Magnum that none of the core developers really like at the moment.  One
 example is the magnum-bay-status blueprint which will prevent the creation
 of pods/services/replicationcontrollers until a bay has completed
 orchestration via Heat.  Our final significant blueprint for milestone #2
 is the ability to launch our supported uOS on bare metal using Nova's
 Ironic plugin and the baremetal flavor.  As always, we want to improve our
 unit testing from what is now 70% to ~80% in the next milestone.

 Please have a look at the blueprints and feel free to comment on this
 thread or in the blueprints directly.  If you would like to see different
 blueprints tackled during milestone #2 that feedback is welcome, or if you
 think the core team[1] is on the right track, we welcome positive kudos too.

 If you would like to see what we tackled in Milestone #1, the code should
 be tagged and ready to run Tuesday January 20th.  Master should work well
 enough now, and the developer quickstart guide is mostly correct.

 The Milestone #1 blueprints are here for comparison's sake:
 https://blueprints.launchpad.net/magnum/milestone-1

 Regards,
 -steve


 [1] https://review.openstack.org/#/admin/groups/473,members

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
   Thanks,

  Jay Lau (Guangya Liu)


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] should 'ip address' be retrieved when describing a host?

2014-12-30 Thread Jay Lau
Yes, host comes from the services table and hypervisor from the compute_nodes
table. I think there was some discussion of this at the Paris Summit, and there
might be some changes for this in Kilo.

   - Detach service from compute node:
   https://review.openstack.org/#/c/126895/ (implementation on a patch
   series
   
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/detach-service-from-computenode,n,z)
   
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/detach-service-from-computenode,n,z%29


   - Goal : Only provide resources to the ComputeNode object, not anything
   else


   - The primary goal of this blueprint is to decouple the servicegroup API
   from the ideas of tracking resources, since they are two wholly separate
   things

I'm not sure if there are some changes for those commands when host was
added to compute_nodes table.


2014-12-30 14:52 GMT+08:00 Lingxian Kong anlin.k...@gmail.com:

 Thanks, Jay Pipes and Jay Lau, for your reply!

 Just as Jay Lau said, 'nova hypervisor-show hypervisor_id'
 indeed returns the host IP address, and more information is
 included than in 'nova host-describe hostname'. I feel a little
 confused about 'host' and 'hypervisor': what's the difference
 between them? For a cloud operator, maybe 'host' is more useful and
 intuitive for management than 'hypervisor'. From the implementation
 perspective, both the 'compute_nodes' and 'services' database tables are
 used for them. Should they be combined for more common use cases?

 2014-12-29 21:40 GMT+08:00 Jay Lau jay.lau@gmail.com:
  Does nova hypervisor-show help? It already includes the host IP address.
 
  2014-12-29 21:26 GMT+08:00 Jay Pipes jaypi...@gmail.com:
 
  On 12/29/2014 06:51 AM, Lingxian Kong wrote:
 
  Hi Stackers:
 
  As for now, we can get the 'host name', 'service' and 'availability
  zone' of a host through the CLI command 'nova host-list'. But as a
  programmer who communicates with OpenStack using its API, I want to
  get the host ip address, in order to perform some other actions in my
  program.
 
  And what I know is, the ip address of a host is populated in the
  'compute_nodes' table of the database, during the 'update available
  resource' periodic task.
 
  So, is it possible of the community to support it in the future?
 
  I apologize if the topic was once covered and I missed it.
 
 
  Hi!
 
  I see no real technical reason this could not be done. It would require
  waiting until all of the API microversioning bits are done, and a
  micro-increment of the API, along with minimal changes of code in the
 hosts
  extension to return the host_ip field from the nova.objects.ComputeNode
  objects returned to the HostController object.
 
  Are you interested in working on such a feature? I would be happy to
 guide
  you through the process of making a blueprint and submitting code if
 you'd
  like. Just find me on IRC #openstack-nova. My IRC nick is jaypipes.
 
  Best,
  -jay
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Thanks,
 
  Jay
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Regards!
 ---
 Lingxian Kong

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] should 'ip address' be retrieved when describing a host?

2014-12-29 Thread Jay Lau
Does nova hypervisor-show help? It already includes the host IP address.
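
For example, the same field is reachable programmatically (illustrative
snippet with python-novaclient; the credentials are placeholders):

    from novaclient import client

    nova = client.Client('2', 'admin', 'secret', 'admin',
                         'http://controller:5000/v2.0')

    # Each hypervisor record carries the compute node's host_ip.
    for hyp in nova.hypervisors.list():
        print("%s %s" % (hyp.hypervisor_hostname, hyp.host_ip))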

2014-12-29 21:26 GMT+08:00 Jay Pipes jaypi...@gmail.com:

 On 12/29/2014 06:51 AM, Lingxian Kong wrote:

 Hi Stackers:

 As for now, we can get the 'host name', 'service' and 'availability
 zone' of a host through the CLI command 'nova host-list'. But as a
 programmer who communicates with OpenStack using its API, I want to
 get the host ip address, in order to perform some other actions in my
 program.

 And what I know is, the ip address of a host is populated in the
 'compute_nodes' table of the database, during the 'update available
 resource' periodic task.

 So, is it possible of the community to support it in the future?

 I apologize if the topic was once covered and I missed it.


 Hi!

 I see no real technical reason this could not be done. It would require
 waiting until all of the API microversioning bits are done, and a
 micro-increment of the API, along with minimal changes of code in the hosts
 extension to return the host_ip field from the nova.objects.ComputeNode
 objects returned to the HostController object.

 Are you interested in working on such a feature? I would be happy to guide
 you through the process of making a blueprint and submitting code if you'd
 like. Just find me on IRC #openstack-nova. My IRC nick is jaypipes.

 Best,
 -jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Containers][docker] Networking problem

2014-12-28 Thread Jay Lau
There is no problem with your cluster; it is working well. With the nova
docker driver, you need to use the network namespace to check the network,
as you did below.


2014-12-29 13:15 GMT+08:00 Iván Chavero ichav...@chavero.com.mx:

 Hello,

 I've installed OpenStack with Docker as the hypervisor on a cubietruck;
 everything seems to work OK, but the container IP does not respond to pings
 nor to the service I'm running inside the container (nginx on port 80).

 I checked how nova created the container and it looks like everything is
 in place:

 # nova list
 +--------------------------------------+---------------+--------+------------+-------------+----------------------+
 | ID                                   | Name          | Status | Task State | Power State | Networks             |
 +--------------------------------------+---------------+--------+------------+-------------+----------------------+
 | 249df778-b2b6-490c-9dce-1126f8f337f3 | test_nginx_13 | ACTIVE | -          | Running     | public=192.168.1.135 |
 +--------------------------------------+---------------+--------+------------+-------------+----------------------+


 # docker ps
 CONTAINER ID    IMAGE                           COMMAND            CREATED       STATUS       PORTS   NAMES
 89b59bf9f442    sotolitolabs/nginx_arm:latest   /usr/sbin/nginx    6 hours ago   Up 6 hours           nova-249df778-b2b6-490c-9dce-1126f8f337f3


 A funny thing that I noticed, though I'm not really sure it's relevant: the
 docker container does not show network info when created by nova:

 # docker inspect 89b59bf9f442

  unnecessary output

 "NetworkSettings": {
     "Bridge": "",
     "Gateway": "",
     "IPAddress": "",
     "IPPrefixLen": 0,
     "PortMapping": null,
     "Ports": null
 },




 # neutron router-list
 +--------------------------------------+---------+-----------------------------------------------------------------+-------------+-------+
 | id                                   | name    | external_gateway_info                                           | distributed | ha    |
 +--------------------------------------+---------+-----------------------------------------------------------------+-------------+-------+
 | f8dc7e15-1087-4681-b495-217ecfa95189 | router1 | {"network_id": "160add9a-2d2e-45ab-8045-68b334d29418",          | False       | False |
 |                                      |         |  "enable_snat": true, "external_fixed_ips": [{"subnet_id":      |             |       |
 |                                      |         |  "1ae33c0b-a04e-47b6-bdba-bbdf9a3ef14d",                        |             |       |
 |                                      |         |  "ip_address": "192.168.1.120"}]}                               |             |       |
 +--------------------------------------+---------+-----------------------------------------------------------------+-------------+-------+


 # neutron subnet-list
 +--------------------------------------+----------------+----------------+-----------------------------------------------------+
 | id                                   | name           | cidr           | allocation_pools                                    |
 +--------------------------------------+----------------+----------------+-----------------------------------------------------+
 | 34995548-bc2b-4d33-bdb2-27443c01e483 | private_subnet | 10.0.0.0/24    | {"start": "10.0.0.2", "end": "10.0.0.254"}          |
 | 1ae33c0b-a04e-47b6-bdba-bbdf9a3ef14d | public_subnet  | 192.168.1.0/24 | {"start": "192.168.1.120", "end": "192.168.1.200"}  |
 +--------------------------------------+----------------+----------------+-----------------------------------------------------+




 # neutron port-list
 +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
 | id                                   | name | mac_address       | fixed_ips                                                                            |
 +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
 | 863eb9a3-461c-4016-9bd1-7c4c7210db98 |      | fa:16:3e:24:7b:2c | {"subnet_id": "34995548-bc2b-4d33-bdb2-27443c01e483", "ip_address": "10.0.0.2"}      |
 | bbe59188-ab4e-4b92-a578-bbc2d6759295 |      | fa:16:3e:1c:04:6a | {"subnet_id": "1ae33c0b-a04e-47b6-bdba-bbdf9a3ef14d", "ip_address": "192.168.1.135"} |
 | c8b94a90-c7d1-44fc-a582-3370f5486d26 |      | fa:16:3e:6f:69:71 | {"subnet_id": "34995548-bc2b-4d33-bdb2-27443c01e483", "ip_address": "10.0.0.1"}      |
 | f108b583-0d54-4388-bcc0-f8d1cbe6efd4 |      | fa:16:3e:bb:3a:1b | {"subnet_id": "1ae33c0b-a04e-47b6-bdba-bbdf9a3ef14d", "ip_address": "192.168.1.120"} |
 +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+



 the network namespace is being created:

 # ip netns exec 89b59bf9f442a0d468d9d4d8c9370c
 

Re: [openstack-dev] [heat-docker]Does the heat-docker supports auto-scaling and monitoring the Docker container?

2014-12-11 Thread Jay Lau
So you are using the heat docker driver, not the nova docker driver, right?

If you are using the nova docker driver, then the container is treated as a VM
and you can do monitoring and auto scaling with heat.

But with the heat docker driver, heat talks to the docker host directly, which
you need to define in the HEAT template, and there is no monitoring for that
case. For manual scaling you can use heat resource-signal to scale up your
stack by hand; for auto scaling, IMHO, you may want to integrate with a
3rd-party monitor and do some development work to reach this.
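
For the manual path, the same signal can also be sent through
python-heatclient, roughly as below (a sketch only; the endpoint, token,
stack and resource names are placeholders, and it assumes the stack contains
a scaling policy resource to signal):

    from heatclient.client import Client

    heat = Client('1', endpoint='http://controller:8004/v1/TENANT_ID',
                  token='AUTH_TOKEN')

    # Kick the scaling policy resource of the stack by hand.
    heat.resources.signal(stack_id='STACK_NAME_OR_ID',
                          resource_name='scale_up_policy')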

Thanks.

2014-12-12 11:19 GMT+08:00 Chenliang (L) hs.c...@huawei.com:

 Hi,
 Now we can deploy Docker containers in an OpenStack environment using
 Heat, but I feel confused.
 Could someone tell me whether heat-docker supports monitoring the
 Docker containers in a stack, and how to monitor them?
 Does it support auto-scaling the Docker container?


 Best Regards,
 -- Liang Chen

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gerrit] Gerrit review problem

2014-11-30 Thread Jay Lau
When I review a patch for OpenStack and the review is finished, I want to
check more patches for this project, but after clicking the Project content
for this patch, it will **not** jump to all patches but to the project
description. I think this is not convenient for a reviewer if s/he wants to
review more patches for this project.

[image: inline image 1]

[image: inline image 2]

-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gerrit] Gerrit review problem

2014-11-30 Thread Jay Lau
Thanks all. Yes, there are ways I can get all on-going patches for one
project; I was complaining because I could always get to the right
page before the gerrit review upgrade. The upgrade broke this feature, which
makes it less convenient for reviewers...

2014-12-01 14:13 GMT+08:00 Steve Martinelli steve...@ca.ibm.com:

 Clicking on the magnifying glass icon (next to the project name) lists
 all recent patches for that project (closed and open).

 I have a bookmark that lists all open reviews of a project and
 always try to keep it open in a tab, and open specific code reviews in
 new tabs.

 Steve

 Chen CH Ji jiche...@cn.ibm.com wrote on 12/01/2014 12:59:52 AM:

  From: Chen CH Ji jiche...@cn.ibm.com
  To: OpenStack Development Mailing List \(not for usage questions\)
  openstack-dev@lists.openstack.org
  Date: 12/01/2014 01:09 AM
  Subject: Re: [openstack-dev] [gerrit] Gerrit review problem
 
  +1, I also found this inconvenient point before ,thanks Jay for bring up
 :)
 
  Best Regards!
 
  Kevin (Chen) Ji 纪 晨
 
  Engineer, zVM Development, CSTL
  Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
  Phone: +86-10-82454158
  Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian
  District, Beijing 100193, PRC
 
  [image removed] Jay Lau ---12/01/2014 01:56:48 PM---When I review a
  patch for OpenStack, after review finished, I want to check more
  patches for this pr
 
  From: Jay Lau jay.lau@gmail.com
  To: OpenStack Development Mailing List 
 openstack-dev@lists.openstack.org
  Date: 12/01/2014 01:56 PM
  Subject: [openstack-dev] [gerrit] Gerrit review problem
 
 
 
 
  When I review a patch for OpenStack, after review finished, I want
  to check more patches for this project and then after click the
  Project content for this patch, it will **not** jump to all
  patches but project description. I think it is not convenient for a
  reviewer if s/he wants to review more patches for this project.
 
  [image removed]
 
  [image removed]
 
  --
  Thanks,
 
  Jay___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gerrit] Gerrit review problem

2014-11-30 Thread Jay Lau
I'm OK if all reviewers agree on this proposal, I may need to bookmark the
projects that I want to review. ;-)

2014-12-01 14:35 GMT+08:00 OpenStack Dev cools...@gmail.com:

 Jay this has been informed  discussed, pre  post the gerrit upgrade :)


 On Mon Dec 01 2014 at 12:00:02 PM Jay Lau jay.lau@gmail.com wrote:

 Thanks all, yes, there are ways I can get all on-going patches for one
 project, I was complaining this because I can always direct to the right
 page before gerrit review upgrade, the upgrade broken this feature which
 makes not convenient for reviewers...

 2014-12-01 14:13 GMT+08:00 Steve Martinelli steve...@ca.ibm.com:

 Clicking on the magnifying glass icon (next to the project name) lists
 all recent patches for that project (closed and open).

 I have a bookmark that lists all open reviews of a project and
 always try to keep it open in a tab, and open specific code reviews in
 new tabs.

 Steve

 Chen CH Ji jiche...@cn.ibm.com wrote on 12/01/2014 12:59:52 AM:

  From: Chen CH Ji jiche...@cn.ibm.com
  To: OpenStack Development Mailing List \(not for usage questions\)
  openstack-dev@lists.openstack.org
  Date: 12/01/2014 01:09 AM
  Subject: Re: [openstack-dev] [gerrit] Gerrit review problem
 
  +1, I also found this inconvenient point before ,thanks Jay for bring
 up :)
 
  Best Regards!
 
  Kevin (Chen) Ji 纪 晨
 
  Engineer, zVM Development, CSTL
  Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
  Phone: +86-10-82454158
  Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian
  District, Beijing 100193, PRC
 
  [image removed] Jay Lau ---12/01/2014 01:56:48 PM---When I review a
  patch for OpenStack, after review finished, I want to check more
  patches for this pr
 
  From: Jay Lau jay.lau@gmail.com
  To: OpenStack Development Mailing List 
 openstack-dev@lists.openstack.org
  Date: 12/01/2014 01:56 PM
  Subject: [openstack-dev] [gerrit] Gerrit review problem
 
 
 
 
  When I review a patch for OpenStack, after review finished, I want
  to check more patches for this project and then after click the
  Project content for this patch, it will **not** jump to all
  patches but project description. I think it is not convenient for a
  reviewer if s/he wants to review more patches for this project.
 
  [image removed]
 
  [image removed]
 
  --
  Thanks,
 
  Jay___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Thanks,

 Jay
  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Order of machines to be terminated during scale down

2014-11-26 Thread Jay Lau
The current behavior is not flexible for customers; I see that we have a
blueprint that wants to enhance this behavior.

https://blueprints.launchpad.net/heat/+spec/autoscaling-api-resources
https://wiki.openstack.org/wiki/Heat/AutoScaling

In Use Case section, we have the following:
==
Use Cases

   1. Users want to use AutoScale without using Heat templates.
   2. Users want to use AutoScale *with* Heat templates.
   3. Users want to scale arbitrary resources, not just instances.
   4. Users want their autoscaled resources to be associated with shared
   resources such as load balancers, cluster managers, configuration servers,
   and so on.
   5. TODO: Administrators or automated processes want to add or remove
   *specific* instances from a scaling group. (one node was compromised or had
   some critical error?)
   6. TODO: Users want to specify a general policy about which resources to
   delete when scaling down, either newest or oldest
   7. TODO: A hook needs to be provided to allow completion or cancelling
   of the auto scaling down of a resource. For example, a MongoDB shard may
   need draining to other nodes before it can be safely deleted. Or another
   example, replica's may need time to resync before another is deleted. The
   check would ensure the resync is done.
   8. *TODO: Another hook should be provided to allow selection of node to
   scale down. MongoDB example again, select the node with the least amount of
   data that will need to migrate to other hosts.*

===

Item 8 enables the customer to customize which instance to scale down.

Thanks!

2014-11-26 18:30 GMT+08:00 Pavlo Shchelokovskyy 
pshchelokovs...@mirantis.com:

 Maish,

 by default they are deleted in the same order they were created, FIFO
 style.

 Best regards,
 Pavlo Shchelokovskyy.

 On Wed, Nov 26, 2014 at 12:24 PM, Maish Saidel-Keesing 
 maishsk+openst...@maishsk.com wrote:

 In which order are machines terminated during a scale down action in an
 auto scaling group

 For example, instances 1 & 2 were deployed in a stack. Instances 3 & 4
 were created as a result of load.

 When the load is reduced and the instances are scaled back down, which
 ones will be removed? And in which order?

 From old to new (1-4) or new to old (4 - 1) ?

 Thanks

 --
 Maish Saidel-Keesing


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Pavlo Shchelokovskyy
 Software Engineer
 Mirantis Inc
 www.mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Add scheduler-hints when migration/rebuild/evacuate

2014-10-29 Thread Jay Lau
Hi Alex,

You can continue the work https://review.openstack.org/#/c/88983/ from here
;-)
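
For context, today the hints are only accepted at boot time, e.g.
(illustrative snippet with python-novaclient; the IDs and credentials are
placeholders):

    from novaclient import client

    nova = client.Client('2', 'demo', 'secret', 'demo',
                         'http://controller:5000/v2.0')

    # Scheduler hints (such as a server group) can be given at create time,
    # but there is no equivalent parameter on migrate/rebuild/evacuate.
    nova.servers.create(name='vm-1', image='IMAGE_UUID', flavor='FLAVOR_ID',
                        scheduler_hints={'group': 'SERVER_GROUP_UUID'})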

2014-10-29 13:42 GMT+08:00 Chen CH Ji jiche...@cn.ibm.com:

 Yes, I remember that spec might talk about local storage (in local db?)
 and it can be the root cause

 And I think we need persistent storage otherwise the scheduler hints can't
 survive in some conditions such as system reboot or upgrade ?

 Best Regards!

 Kevin (Chen) Ji 纪 晨

 Engineer, zVM Development, CSTL
 Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
 Phone: +86-10-82454158
 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
 Beijing 100193, PRC

 [image: Inactive hide details for Alex Xu ---10/29/2014 01:34:13 PM---On
 2014年10月29日 12:37, Chen CH Ji wrote: ]Alex Xu ---10/29/2014 01:34:13
 PM---On 2014年10月29日 12:37, Chen CH Ji wrote: 

 From: Alex Xu x...@linux.vnet.ibm.com
 To: openstack-dev@lists.openstack.org
 Date: 10/29/2014 01:34 PM
 Subject: Re: [openstack-dev] [Nova] Add scheduler-hints when
 migration/rebuild/evacuate

 --



 On 2014年10月29日 12:37, Chen CH Ji wrote:


I think we already support to specify the host when doing evacuate and
migration ?


 Yes, we support to specify the host, but schedule-hints is different thing.



if we need use hints that passed from creating instance, that means we
need to persistent schedule hints
I remember we used to have a spec for store it locally ...



 I also remember we have one spec for persistent before, but I don't know
 why it didn't continue.
 And I think maybe we needn't persistent schedule-hints, just add pass new
 schedule-hints when
 migration the instance. Nova just need provide the mechanism.



Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: *jiche...@cn.ibm.com*
jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian
District, Beijing 100193, PRC

[image: Inactive hide details for Alex Xu ---10/29/2014 12:19:35
PM---Hi, Currently migration/rebuild/evacuate didn't support pass]Alex
Xu ---10/29/2014 12:19:35 PM---Hi, Currently migration/rebuild/evacuate
didn't support pass

From: Alex Xu *x...@linux.vnet.ibm.com* x...@linux.vnet.ibm.com
To: *openstack-dev@lists.openstack.org*
openstack-dev@lists.openstack.org
Date: 10/29/2014 12:19 PM
Subject: [openstack-dev] [Nova] Add scheduler-hints when
migration/rebuild/evacuate

--



Hi,

Currently migration/rebuild/evacuate didn't support pass
scheduler-hints, that means any migration
can't use schedule-hints that passed when creating instance.

Can we add scheduler-hints support when migration/rebuild/evacuate?
That
also can enable user
move in/out instance to/from an server group.

Thanks
Alex


___
OpenStack-dev mailing list
 *OpenStack-dev@lists.openstack.org* OpenStack-dev@lists.openstack.org
 *http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev*
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
*OpenStack-dev@lists.openstack.org* OpenStack-dev@lists.openstack.org
*http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev*
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-13 Thread Jay Lau
This is also a use case for Congress, please check use case 3 in the
following link.

https://docs.google.com/document/d/1ExDmT06vDZjzOPePYBqojMRfXodvsk0R8nRkX-zrkSw/edit#
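
For context, the Nagios-triggered approach mentioned below boils down to
something like this (rough sketch with python-novaclient; the host name and
credentials are placeholders, and fencing the failed host first is left to
the operator):

    from novaclient import client

    nova = client.Client('2', 'admin', 'secret', 'admin',
                         'http://controller:5000/v2.0')

    failed_host = 'compute-1'

    # Move every instance off the dead host; Nova does not fence it for you.
    for server in nova.servers.list(search_opts={'host': failed_host,
                                                 'all_tenants': 1}):
        nova.servers.evacuate(server, on_shared_storage=True)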

2014-10-14 5:59 GMT+08:00 Russell Bryant rbry...@redhat.com:

 Nice timing.  I was working on a blog post on this topic.

 On 10/13/2014 05:40 PM, Fei Long Wang wrote:
  I think Adam is talking about this bp:
 
 https://blueprints.launchpad.net/nova/+spec/evacuate-instance-automatically
 
  For now, we're using Nagios probe/event to trigger the Nova evacuate
  command, but I think it's possible to do that in Nova if we can find a
  good way to define the trigger policy.

 I actually think that's the right way to do it.  There are a couple of
 other things to consider:

 1) An ideal solution also includes fencing.  When you evacuate, you want
 to make sure you've fenced the original compute node.  You need to make
 absolutely sure that the same VM can't be running more than once,
 especially when the disks are backed by shared storage.

 Because of the fencing requirement, another option would be to use
 Pacemaker to orchestrate this whole thing.  Historically Pacemaker
 hasn't been suitable to scale to the number of compute nodes an
 OpenStack deployment might have, but Pacemaker has a new feature called
 pacemaker_remote [1] that may be suitable.

 2) Looking forward, there is a lot of demand for doing this on a per
 instance basis.  We should decide on a best practice for allowing end
 users to indicate whether they would like their VMs automatically
 rescued by the infrastructure, or just left down in the case of a
 failure.  It could be as simple as a special tag set on an instance [2].

 [1]

 http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html-single/Pacemaker_Remote/
 [2] https://review.openstack.org/#/c/127281/

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] - do we need .start and .end notifications in all cases ?

2014-09-22 Thread Jay Lau
Hi Jay,

There was actually a discussion about filing a blueprint for object
notifications: http://markmail.org/message/ztehzx2wc6dacnk2

But for patch https://review.openstack.org/#/c/107954/ , I'd like us to keep
it as it is now to meet the requirement of server group notifications
for 3rd-party clients.
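
For context, the start/end/error pattern discussed below is essentially the
following wrapper (an illustrative sketch only, not the Nova implementation;
'notify' stands in for whatever emit function the service uses):

    import contextlib


    @contextlib.contextmanager
    def notify_event(notify, event, payload):
        """Emit <event>.start / .end / .error around a block of work."""
        notify(event + '.start', payload)
        try:
            yield
        except Exception:
            notify(event + '.error', payload)
            raise
        notify(event + '.end', payload)

    # Usage sketch:
    #   with notify_event(notify, 'servergroup.create', {'name': 'grp1'}):
    #       do_the_work()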

Thanks.

2014-09-22 22:41 GMT+08:00 Jay Pipes jaypi...@gmail.com:

 On 09/22/2014 07:24 AM, Daniel P. Berrange wrote:

 On Mon, Sep 22, 2014 at 11:03:02AM +, Day, Phil wrote:

 Hi Folks,

 I'd like to get some opinions on the use of pairs of notification
 messages for simple events.   I get that for complex operations on
 an instance (create, rebuild, etc) a start and end message are useful
 to help instrument progress and how long the operations took. However
 we also use this pattern for things like aggregate creation, which is
 just a single DB operation - and it strikes me as kind of overkill and
 probably not all that useful to any external system compared to a
 single event .create event after the DB operation.


 A start + end pair is not solely useful for timing, but also potentially
 detecting if it completed successfully, e.g. if you receive an end event
 notification you know it has completed. That said, if this is a use case
 we want to target, then ideally we'd have a third notification for this
 failure case, so consumers don't have to wait & time out to detect errors.

  There is a change up for review to add notifications for service groups
 which is following this pattern (https://review.openstack.org/#/c/107954/)
 - the author isn't doing anything wrong in that they're just following that
 pattern, but it made me wonder if we shouldn't have some better guidance
 on when to use a single notification rather than a .start/.end pair.

 Does anyone else have thoughts on this , or know of external systems that
 would break if we restricted .start and .end usage to long-lived instance
 operations ?


 I think we should aim to /always/ have 3 notifications using a pattern of

try:
   ...notify start...

   ...do the work...

   ...notify end...
except:
   ...notify abort...


 Precisely my viewpoint as well. Unless we standardize on the above, our
 notifications are less than useful, since they will be open to
 interpretation by the consumer as to what precisely they mean (and the
 consumer will need to go looking into the source code to determine when an
 event actually occurred...)

 Smells like a blueprint to me. Anyone have objections to me writing one up
 for Kilo?

 Best,
 -jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] Policy Enforcement logic

2014-08-21 Thread Jay Lau
I know that Congress is still under development, but it would be better if it
provided some "how to use it" info, just like Docker
(https://wiki.openstack.org/wiki/Docker); this might attract more people to
contribute to it.
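
On the enforcement question quoted below: conceptually, the database update
alone does nothing; the action has to end up as a plain client call against
Nova, roughly like this (an illustration only, not actual Congress code, and
it loosely maps the network argument to a port):

    from novaclient import client

    nova = client.Client('2', 'admin', 'secret', 'admin',
                         'http://controller:5000/v2.0')


    def disconnect_network(vm_id, port_id):
        # What "disconnect_network(vm, network)" ultimately has to become:
        # an API call that detaches the VM's interface on that network.
        nova.servers.interface_detach(vm_id, port_id)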


2014-08-21 22:07 GMT+08:00 Madhu Mohan mmo...@mvista.com:

 Hi,

 I am quite new to Congress and OpenStack as well, so this question may
 seem very trivial and basic.

 I am trying to figure out the policy enforcement logic.

 Can somebody help me understand how exactly a policy enforcement action
 is taken?

 From the example policy there is an action defined as:

     action(disconnect_network)
     nova:network-(vm, network) :- disconnect_network(vm, network)

 I assume that this statement, when applied, would translate to the
 deletion of an entry in the database.

 But how does this affect the actual setup? That is, how is this database
 update translated into an actual disconnection of the VM from the
 network? How does Nova know that it has to disconnect the VM from the
 network?

 Thanks and Regards,
 Madhu Mohan




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] New feature on Nova

2014-08-21 Thread Jay Lau
There is already a blueprint tracking KVM host maintenance:
https://blueprints.launchpad.net/nova/+spec/host-maintenance , but I think
that Nova will not handle automatic live migration for a host under
maintenance; that would be a use case for Congress:
https://wiki.openstack.org/wiki/Congress


2014-08-21 23:00 GMT+08:00 thomas.pessi...@orange.com:

 Hello,



 Sorry if I am not on the right mailing list. I would like to get some
 information.



 I would like to know, as a company that wants to add a feature to an
 OpenStack module, how do we have to proceed? And what is the way for this
 new feature to be adopted by the community?



 The feature is maintenance mode, that is to say, disabling a compute
 node and live-migrating all the instances which are running on the
 host.

 I know we can do an evacuate, but evacuate restarts the instances. I
 have already written a shell script to do this using the command-line
 client.
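
 For illustration, a rough sketch of that drain logic with
 python-novaclient might look like the following (treat this as an
 outline only - the exact client calls and their signatures are
 assumptions and can vary between releases, and this is not a proposed
 Nova feature):

    # Hypothetical sketch: disable a compute host, then live-migrate
    # its instances away and let the scheduler pick the destinations.
    from novaclient import client as nova_client

    def drain_host(session, hostname):
        nova = nova_client.Client('2', session=session)
        # Keep the scheduler from placing new instances on the host.
        nova.services.disable(hostname, 'nova-compute')
        # Live-migrate every instance currently running on the host.
        servers = nova.servers.list(
            search_opts={'host': hostname, 'all_tenants': 1})
        for server in servers:
            server.live_migrate(host=None,
                                block_migration=False,
                                disk_over_commit=False)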



 Regards,

 _


 This message and its attachments may contain confidential or privileged 
 information that may be protected by law;
 they should not be distributed, used or copied without authorisation.
 If you have received this email in error, please notify the sender and delete 
 this message and its attachments.
 As emails may be altered, Orange is not liable for messages that have been 
 modified, changed or falsified.
 Thank you.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] Policy Enforcement logic

2014-08-21 Thread Jay Lau
Hi Tim,

That's great! Has the tutorial been uploaded to Gerrit for review?

Thanks.


2014-08-21 23:56 GMT+08:00 Tim Hinrichs thinri...@vmware.com:

  Hi Jay,

  We have a tutorial in review right now.  It should be merged in a couple
 of days.  Thanks for the suggestion!

  Tim


  On Aug 21, 2014, at 7:54 AM, Jay Lau jay.lau@gmail.com wrote:

   I know that Congress is still under development, but it would be better
  if it provided some information on how to use it, just like Docker does
  (https://wiki.openstack.org/wiki/Docker); this might attract more people
  to contribute to it.


 2014-08-21 22:07 GMT+08:00 Madhu Mohan mmo...@mvista.com:

 Hi,

  I am quite new to Congress and OpenStack as well, so this question
  may seem very trivial and basic.

 I am trying to figure out the policy enforcement logic,

   Can somebody help me understand how exactly a policy enforcement
  action is taken?

   From the example policy there is an action defined as:

      action(disconnect_network)
      nova:network-(vm, network) :- disconnect_network(vm, network)

   I assume that this statement, when applied, would translate to the
  deletion of an entry in the database.

   But how does this affect the actual setup? That is, how is this database
  update translated into an actual disconnection of the VM from the network?
   How does Nova know that it has to disconnect the VM from the network?

  Thanks and Regards,
  Madhu Mohan




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
  Thanks,

  Jay
  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

