Re: [openstack-dev] Please stop reviewing code while asking questions

2015-04-24 Thread Kai Qiang Wu

+1 to Dan's comments.


1. We should not discourage -1s for questions in all cases. A -1 often
leads to more discussion about the code issue, which is helpful in such
cases.

Of course, it is difficult to find the balance point between what deserves a -1
and what deserves a 0. I don't think a 0 in Gerrit works well, because authors
sometimes don't pay attention to it.



2. Also, for typos in comments and commit messages, a -1 makes sense too.
A 0 in Gerrit implies the message needs no improvement and everything is
good, which can set a bad precedent for cores and non-cores alike: it
suggests it doesn't matter whether things are spelled correctly or not.






Best Wishes,


Follow your heart. You are miracle!



From:   Dan Smith d...@danplanet.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   04/24/2015 10:48 PM
Subject:Re: [openstack-dev] Please stop reviewing code while asking
questions



 In defense of those of us asking questions, I'll just point out
 that as a core reviewer I need to be sure I understand the intent
 and wide-ranging ramifications of patches as I review them.  Especially
 in the Oslo code, what appears to be a small local change can have
 unintended consequences when the library gets out into the applications.

 I will often ask questions like, what is going to happen in X
 situation if we change this default or how does this change in
 behavior affect the case where Y happens, which isn't well tested
 in our unit tests. If those details aren't made clear by the commit
 message and comments in the code, I consider that a good reason to
 include a -1 with a request for the author to provide more detail.
 Often these are cases I'm not intimately familiar with, so I ask a
 question rather than saying outright that I think something is
 broken because I expect to learn from the answer but I still have
 doubts that I want to indicate with the -1.

 Most of the time the author has thought about the issues and worked
 out a reason they are not a problem, but they haven't explained
 that anywhere. On the other hand, it is frequently the case that
 someone *hasn't* understood why a change might be bad and the
 question ends up leading to more research and discussion.

Right, and -1 makes the comment much more visible to both other cores
and the reviewer. Questions which rightly point out something which
would lead to what the OP considers a legit -1 can *easily* get missed
in the wash of review comments on a bug.

If you leave a -1 for a question and never come back to drop it when the
answer is provided, then that's bad and you should stop doing that.
However, I'm really concerned about the suggestion to not -1 for
questions in general because of the visibility we lose. I also worry
that more non-core people will feel even less likely to -1 a patch for
something they feel is just their failing to understand, when in fact
it's valuable feedback that the code is obscure.

As a core, I don't exclude all reviews with a -1, and doing so is pretty
dangerous behavior, IMHO.

I'm not sure if the concern of -1s for questions is over dropping the
review out of the hitlist for cores, or if it's about hurting the
feelings of the submitter. I'm not in favor of discouraging -1s for
either problem.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [magnum] swagger-codegen generated code for python-k8sclient

2015-04-22 Thread Kai Qiang Wu
Hi Madhuri,

1) I think we'd better not jump to the v1beta3 API if our image is not updated.

In a release, we need to provide people with something that is easy to use and
functions well.


As you mentioned, v1beta3 does not work with our image and has many issues.



2)  If we could update our image to integrate the latest k8s 0.15.0
release, I would support using v1beta3.
@sdake may know how to update the image (maybe others do as well); I don't know
much about how to update it.


Thanks


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)


Follow your heart. You are miracle!



From:   Madhuri Rai madhuri.ra...@gmail.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   04/22/2015 05:36 PM
Subject:Re: [openstack-dev] [magnum] swagger-codegen generated code for
python-k8sclient



Hi All,

As we are using the fedora-21-atomic-2 image, which has Kubernetes v0.11.0, I
tried to run the v1beta3 APIs on it. Some of the APIs failed.
The Kubernetes developer said v1beta3 wasn't fully supported until the
0.15.0 release, which is causing some APIs to fail.

Below are the failures:

1. service-create API fails (422 status) with the v1beta3 request format.
    The request format has changed from v1beta1 to v1beta3.

https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/api.md#v1beta3-conversion-tips


    I have logged an issue for the same at GoogleCloudPlatform/Kubernetes:
    https://github.com/GoogleCloudPlatform/kubernetes/issues/7157

2. pod-create API fails (500 status) with an invalid request format.
    While doing negative testing, I found that the pod-create API fails with
500 status. It should actually fail with 400 status.

    I have logged an issue for the same at GoogleCloudPlatform/Kubernetes:
    https://github.com/GoogleCloudPlatform/kubernetes/issues/7087

3. pod-update API fails (404 status).
    While trying to update a pod, it failed with status 404 even though the pod
exists. This is due to a duplicate replacePod API in the Kubernetes client code.

    I have logged an issue for the same at GoogleCloudPlatform/Kubernetes:
    https://github.com/GoogleCloudPlatform/kubernetes/issues/7100

4. All APIs fail with a JSON manifest.
    All Kubernetes resources (pod, rc, service) now fail with a JSON-format
manifest due to an issue in the swagger-codegen-generated Kubernetes client
code: it doesn't support unicode strings.

After all these issues, can we really switch to the Kubernetes client in this
release, or should we wait for the Fedora image with the Kubernetes 0.15.0
release that has full support for v1beta3?

Please provide your suggestions on this so that I can proceed further.

Thanks & Regards
Madhuri Kumari


On Tue, Mar 24, 2015 at 10:37 AM, Madhuri Rai madhuri.ra...@gmail.com
wrote:
  Hi Hongbin,


  On Tue, Mar 24, 2015 at 12:37 AM, Hongbin Lu hongbin...@gmail.com
  wrote:
   Hi Madhuri,

    Amazing work! I wouldn't be concerned about the code duplication and
    modularity issues since the code is generated. However, there is another
    concern here: if we find a bug or improvement in the generated code, we
    probably need to modify the generator. The question is whether upstream
    will accept the modifications, and if yes, how fast the patch will go through.

    I would prefer to maintain a fork of the generator. That way, we
    would have full control of the generated code. Thoughts?

  I agree that's a concern. I will try to fix the pep8 error upstream to
  see how long it takes to push a change upstream.


   Thanks,
   Hongbin

   On Mon, Mar 23, 2015 at 10:11 AM, Steven Dake (stdake) std...@cisco.com
wrote:


 From: Madhuri Rai madhuri.ra...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage
 questions) openstack-dev@lists.openstack.org
 Date: Monday, March 23, 2015 at 1:53 AM
 To: openstack-dev@lists.openstack.org 
 openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [magnum] swagger-codegen generated code for
 python-k8sclient

   Hi All,

   This is to have a discussion on the blueprint for implementing
   python-k8client for magnum.


   https://blueprints.launchpad.net/magnum/+spec/python-k8sclient

   I have committed the code generated by swagger-codegen at
   https://review.openstack.org/#/c/166720/.
   But I feel the quality of the code generated by swagger-codegen
   is not good.

   Some of the points:
    1) There is a lot of code duplication. If we want to generate code
    for two or more versions, the same code is duplicated for each API
    version.
    2) There is no modularity. CLI code for all the APIs is written
    in the same file.

   So, I would like your opinion on this. How should we proceed
   further?

 Madhuri

Re: [openstack-dev] [magnum] Proposing Kai Qiang Wu (Kennan) for Core for Magnum

2015-06-07 Thread Kai Qiang Wu
Thanks, all Magnum folks. It is my honor to work with all of you to make
Magnum better.


:)  Thanks Stdake.



Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Steven Dake (stdake) std...@cisco.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   06/07/2015 08:03 AM
Subject:Re: [openstack-dev] [magnum] Proposing Kai Qiang Wu (Kennan)
for Core for Magnum



Kennan,

Welcome to the magnum-core team!

Regards
-steve


From: Steven Dake std...@cisco.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Saturday, May 30, 2015 at 10:56 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [magnum] Proposing Kai Qiang Wu (Kennan) for Core
for Magnum

  Hi core team,

  Kennan (Kai Qiang Wu’s nickname) has really done a nice job in Magnum
  contributions.  I would like to propose Kennan for the core reviewer
  team.  I don’t think we necessarily need more core reviewers on
  Magnum, but Kennan has demonstrated a big commitment to Magnum and is
  a welcome addition in my opinion.

  For the lifetime of the project, Kennan has contributed 8% of the
  reviews, and 8% of the commits.  Kennan is also active in IRC.  He
  meets my definition of what a core developer for Magnum should be.

  Consider my proposal to be a +1 vote.

  Please vote +1 to approve or vote -1 to veto.  A single -1 vote acts
  as a veto, meaning Kennan would not be approved for the core team.  I
  believe we require 3 +1 core votes presently, but since our core team
  is larger now than when we started, I’d like to propose at our next
  team meeting we formally define the process by which we accept new
  cores post this proposal for Magnum into the magnum-core group and
  document it in our wiki.

  I’ll leave voting open for 1 week until June 6th to permit
  appropriate time for the core team to vote.  If there is a unanimous
  decision or veto prior to that date, I’ll close voting and make the
  appropriate changes in gerrit as appropriate.

  Thanks
  -steve
  __

  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] Discuss configurable-coe-api-port Blueprint

2015-06-10 Thread Kai Qiang Wu

I’m moving this whiteboard to the ML so we can have some discussion to
refine it, and then go back and update the whiteboard.

Source:
https://blueprints.launchpad.net/magnum/+spec/configurable-coe-api-port


@Sdake and I have had some discussion already, but we may need more input from
your side.


1. If we keep apiserver_port in the baymodel, it may mean only an admin (if we
involve policy) can create that baymodel, which gives less flexibility to other
users.


2. apiserver_port was designed into the baymodel; moving it from baymodel to bay
is a big change, so we should ask whether there are better ways. (This may also
apply to other configuration fields, like dns-nameserver, etc.)



Thanks



Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Discuss configurable-coe-api-port Blueprint

2015-06-15 Thread Kai Qiang Wu
Hi Adrian,

If I summarize your option, it would be:

1) Have a command like this:

 magnum bay-create --name swarmbay --baymodel swarmbaymodel
--baymodel-property-override apiserver_port=8766


Magnum then passes that property to override the baymodel's default properties
and creates the bay.
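As a rough illustration (a hypothetical helper, not Magnum's actual
implementation; all names and defaults here are assumptions), the override
could simply be parsed and merged onto the baymodel defaults before the Heat
stack is created:

  def apply_overrides(baymodel_attrs, overrides):
      # Merge 'key=value' override strings onto the baymodel defaults.
      merged = dict(baymodel_attrs)
      for item in overrides:
          key, _, value = item.partition('=')
          if key not in merged:
              raise ValueError('unknown baymodel property: %s' % key)
          merged[key] = value
      return merged

  # The CLI call above would then translate roughly to:
  defaults = {'apiserver_port': 8080, 'dns_nameserver': '8.8.8.8'}
  apply_overrides(defaults, ['apiserver_port=8766'])
  # -> {'apiserver_port': '8766', 'dns_nameserver': '8.8.8.8'}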


2) You mentioned another BP, about adjusting the bay api_address to be a URL.
The bay attribute api_address would then be returned in a format like
tcp://192.168.45.12:7622 or http://192.168.45.12:8234




Is that right?


Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Adrian Otto adrian.o...@rackspace.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   06/13/2015 02:04 PM
Subject:Re: [openstack-dev] [Magnum]Discuss
configurable-coe-api-port   Blueprint



Hongbin,

Good use case. I suggest that we add a parameter to magnum bay-create that
will allow the user to override the baymodel.apiserver_port attribute with
a new value that will end up in the bay.api_address attribute as part of
the URL. This approach assumes implementation of the magnum-api-address-url
blueprint. This way we solve for the use case, and don't need a new
attribute on the bay resource that requires users to concatenate multiple
attribute values in order to get a native client tool working.

Adrian

On Jun 12, 2015, at 6:32 PM, Hongbin Lu hongbin...@huawei.com wrote:

  A use case could be that the cloud is behind a proxy and the API port is
  filtered. In this case, users have to start the service on an
  alternative port.

  Best regards,
  Hongbin

  From: Adrian Otto [mailto:adrian.o...@rackspace.com]
  Sent: June-12-15 2:22 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Magnum] Discuss
  configurable-coe-api-port Blueprint

  Thanks for raising this for discussion. Although I do think that the
  API port number should be expressed in a URL that the local client
  can immediately use for connecting a native client to the API, I am
  not convinced that this needs to be a separate attribute on the Bay
  resource.

  In general, I think it’s a reasonable assumption that nova instances
  will have unique IP addresses assigned to them (public or private is
  not an issue here) so unique port numbers for running the API
  services on alternate ports seems like it may not be needed. I’d like
  to have input from at least one Magnum user explaining an actual use
  case for this feature before accepting this blueprint.

  One possible workaround for this would be to instruct those who want
  to run nonstandard ports to copy the heat template, and specify a new
  heat template as an alternate when creating the BayModel, which can
  implement the port number as a parameter. If we learn that this
  happens a lot, we should revisit this as a feature in Magnum rather
  than allowing it through an external workaround.

  I’d like to have a generic feature that allows for arbitrary
  key/value pairs for parameters and values to be passed to the heat
  stack create call so that this, and other values can be passed in
  using the standard magnum client and API without further
  modification. I’m going to look to see if we have a BP for this, and
  if not, I will make one.

  Adrian



On Jun 11, 2015, at 6:05 PM, Kai Qiang Wu(Kennan) 
wk...@cn.ibm.com wrote:

If I understand the bp correctly,

the apiserver_port is for public access or the API call service
endpoint. If that is the case, the user would use that info:

http(s)://ip:port

so the port is useful information for users.


If we believe the above assumption is right, then:

1) Some users do not need to change the port, since Heat has a
default hard-coded port.

2) If some users want to change the port (through Heat, we can do
that), we need to add such flexibility for users.
That's what the bp

https://blueprints.launchpad.net/magnum/+spec/configurable-coe-api-port
 tries to solve.

It depends on how end users use Magnum.


More input about this is welcome. If many of us think it is
not necessary to customize the ports, we can drop the bp.


Thanks


Best Wishes

Re: [openstack-dev] [Magnum] Discuss configurable-coe-api-port Blueprint

2015-06-11 Thread Kai Qiang Wu
If I understand the bp correctly,

the apiserver_port is for public access or the API call service endpoint. If
that is the case, the user would use that info:

http(s)://ip:port

so the port is useful information for users.
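As a tiny illustration (a purely hypothetical helper, not Magnum code) of why
the port matters to users, the endpoint they connect to is just the protocol,
the bay IP, and this port combined:

  def build_api_address(protocol, api_ip, apiserver_port):
      # e.g. build_api_address('https', '192.168.45.12', 8766)
      #   -> 'https://192.168.45.12:8766'
      return '%s://%s:%s' % (protocol, api_ip, apiserver_port)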


If we believe the above assumption is right, then:

1) Some users do not need to change the port, since Heat has a default
hard-coded port.

2) If some users want to change the port (through Heat, we can do that), we
need to add such flexibility for users.
That's what the bp
https://blueprints.launchpad.net/magnum/+spec/configurable-coe-api-port tries
to solve.

It depends on how end users use Magnum.


More input about this is welcome. If many of us think it is not necessary
to customize the ports, we can drop the bp.


Thanks


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Jay Lau jay.lau@gmail.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   06/11/2015 01:17 PM
Subject:Re: [openstack-dev] [Magnum] Discuss configurable-coe-api-port
Blueprint



I think that we had a similar bp before:
https://blueprints.launchpad.net/magnum/+spec/override-native-rest-port

 I had some discussion before with Larsks; it seems that it does not make
much sense to customize this port, as the kubernetes/swarm/mesos cluster
will be created by Heat and end users do not need to care about the
ports. Different kubernetes/swarm/mesos clusters will have different IP
addresses, so there will be no port conflict.

2015-06-11 9:35 GMT+08:00 Kai Qiang Wu wk...@cn.ibm.com:
  I’m moving this whiteboard to the ML so we can have some discussion to
  refine it, and then go back and update the whiteboard.

  Source:
  https://blueprints.launchpad.net/magnum/+spec/configurable-coe-api-port


  @Sdake and I have had some discussion already, but we may need more input
  from your side.


  1. If we keep apiserver_port in the baymodel, it may mean only an admin (if
  we involve policy) can create that baymodel, which gives less flexibility to
  other users.


  2. apiserver_port was designed into the baymodel; moving it from baymodel to
  bay is a big change, so we should ask whether there are better ways. (This
  may also apply to other configuration fields, like dns-nameserver, etc.)



  Thanks



  Best Wishes,


  Kai Qiang Wu (吴开强  Kennan)
  IBM China System and Technology Lab, Beijing

  E-mail: wk...@cn.ibm.com
  Tel: 86-10-82451647
  Address: Building 28(Ring Building), ZhongGuanCun Software Park,
          No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
  100193


  Follow your heart. You are miracle!

  __

  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Discuss mesos-bay-type Blueprint

2015-06-02 Thread Kai Qiang Wu
Hi All,

For the mesos bay, I think what we should implement depends on use cases.

If users use Magnum to create a mesos bay, what would they do with mesos in
the following steps?

1. If they go to mesos (a framework or anything else) directly, we'd better not
involve any new mesos objects, but use the container abstraction if possible.
2. If they'd like to operate mesos through Magnum, and it is easy to
do that, we could provide some object operations.

Ideally, it is good to reuse the containers API if possible. If not, we'd
better find a way to map to mesos (API passthrough, instead of adding redundant
objects on the Magnum side).



Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Hongbin Lu hongbin...@huawei.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   06/02/2015 06:15 AM
Subject:Re: [openstack-dev] [Magnum] Discuss mesos-bay-type Blueprint



Hi Jay,

For your question “what is the mesos object that we want to manage”, the
short answer is it depends. There are two options I can think of:
  1.   Don’t manage any object from Marathon directly. Instead, we
  can focus on the existing Magnum objects (i.e. container) and
  implement them by using Marathon APIs where possible. Take the
  abstraction ‘container’ as an example: for a swarm bay, container
  will be implemented by calling docker APIs; for a mesos bay,
  container could be implemented by using Marathon APIs (it looks like
  Marathon’s ‘app’ object can be leveraged to operate a docker
  container). The effect is that Magnum will have a set of common
  abstractions that are implemented differently by each bay type.
  2.   Do manage a few Marathon objects (i.e. app). The effect is
  that Magnum will have additional API object(s) that come from Marathon
  (like what we have for the existing k8s objects: pod/service/rc).
Thoughts?
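To make option 1 concrete, here is a minimal, hypothetical sketch (not Magnum
code; the endpoint path and fields follow Marathon's public REST API, while
everything else, including the URL and function names, is an assumption) of how
a generic Magnum "container create" could be translated into a Marathon app:

  import json
  import requests

  MARATHON_URL = 'http://marathon.example.com:8080'  # illustrative endpoint

  def create_container(name, image, memory_mb=256, cpus=0.5):
      # Map Magnum's generic 'container' onto a Marathon 'app' (option 1).
      app = {
          'id': name,
          'cpus': cpus,
          'mem': memory_mb,
          'instances': 1,
          'container': {'type': 'DOCKER', 'docker': {'image': image}},
      }
      resp = requests.post(MARATHON_URL + '/v2/apps',
                           data=json.dumps(app),
                           headers={'Content-Type': 'application/json'})
      resp.raise_for_status()
      return resp.json()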

Thanks
Hongbin

From: Jay Lau [mailto:jay.lau@gmail.com]
Sent: May-29-15 1:35 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] Discuss mesos-bay-type Blueprint

I want to mention that there is another mesos framework named chronos:
https://github.com/mesos/chronos ; it is used for job orchestration.

For others, please refer to my comments in line.

2015-05-29 7:45 GMT+08:00 Adrian Otto adrian.o...@rackspace.com:
I’m moving this whiteboard to the ML so we can have some discussion to
refine it, and then go back and update the whiteboard.

Source: https://blueprints.launchpad.net/magnum/+spec/mesos-bay-type

My comments in-line below.


Begin forwarded message:

From: hongbin hongbin...@huawei.com
Subject: COMMERCIAL:[Blueprint mesos-bay-type] Add support for mesos bay
type
Date: May 28, 2015 at 2:11:29 PM PDT
To: adrian.o...@rackspace.com
Reply-To: hongbin hongbin...@huawei.com

Blueprint changed by hongbin:

Whiteboard set to:

I did a preliminary research on possible implementations. I think this BP
can be implemented in two steps.
1. Develop a heat template for provisioning mesos cluster.
2. Implement a magnum conductor for managing the mesos cluster.

Agreed, thanks for filing this blueprint!
For 2, the conductor is mainly used to manage objects for the CoE; k8s has pod,
service, and rc, so what is the mesos object that we want to manage? IMHO, mesos
is a resource manager and it needs to work with some framework to
provide services.


 First, I want to emphasize that mesos is not a service (it looks like a
 library). Therefore, mesos doesn't have a web API, and most users don't
 use mesos directly. Instead, they use a mesos framework that is on top
 of mesos. Therefore, a mesos bay needs to have a mesos framework pre-
 configured so that magnum can talk to the framework to manage the bay.
 There are several framework choices. Below is a list of frameworks that
 look like a fit (in my opinion). An exhaustive list of frameworks can be
 found here [1].

 1. Marathon [2]
 This is a framework controlled by a company (mesosphere [3]). It is open
 source, though. It supports running apps on clusters of docker containers.
 It is probably the most widely-used mesos framework for long-running
 applications.

 Marathon offers a REST API, whereas Aurora does not (unless one has
 materialized in the last month). This was the one we discussed at our
 Vancouver design summit, and we agreed that those wanting to use Apache
 Mesos are probably expecting this framework.


 2. Aurora [4]
 This is a framework governed by Apache Software

Re: [openstack-dev] [Magnum] Magnum installation problem with devstack kilo

2015-06-01 Thread Kai Qiang Wu
Hi BaoHua,

For the kilo version you tested, I am not sure what steps you followed, as the
dev guide is for master branch development.


If you have any questions, you can ask in the #openstack-containers IRC channel
or file a bug in Launchpad.



Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Steven Dake (stdake) std...@cisco.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   06/01/2015 01:37 PM
Subject:Re: [openstack-dev] [Magnum] Magnum installation problem with
devstack kilo



I have this same problem with devstack master.  I’m not sure what it is
other than that it involves the requirements repo possibly not containing
oslo.versionedobjects on my system.  Dims has a suggestion to do something
about it, but I didn’t get back to it.  Try joining channel
#openstack-containers and hunting down dims.

Regards
-steve

From: Baohua Yang yangbao...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Sunday, May 31, 2015 at 10:08 PM
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Magnum] Magnum installation problem with devstack
kilo

  Hi, all
  I am following
  http://git.openstack.org/cgit/openstack/magnum/tree/doc/source/dev/dev-quickstart.rst
  to install magnum with the OpenStack Kilo version.

  At the end, there are error messages like:

  python update.py /opt/stack/magnum
  ...
   Syncing /opt/stack/magnum/requirements.txt
  2015-05-29 12:50:28.075 | 'oslo.versionedobjects' is not in
  global-requirements.txt
  2015-05-29 12:50:28.083 | + exit_trap
  2015-05-29 12:50:28.083 | + local r=1
  2015-05-29 12:50:28.084 | ++ jobs -p
  2015-05-29 12:50:28.085 | + jobs=
  2015-05-29 12:50:28.085 | + [[ -n '' ]]
  2015-05-29 12:50:28.085 | + kill_spinner
  2015-05-29 12:50:28.085 | + '[' '!' -z '' ']'
  2015-05-29 12:50:28.085 | + [[ 1 -ne 0 ]]
  2015-05-29 12:50:28.085 | + echo 'Error on exit'
  2015-05-29 12:50:28.085 | Error on exit
  2015-05-29 12:50:28.085 | + [[ -z /opt/stack/logs ]]
  2015-05-29 12:50:28.085 | + /opt/stack/devstack/tools/worlddump.py
  -d /opt/stack/logs
  2015-05-29 12:50:28.122 | df: '/run/user/1000/gvfs': Permission
  denied
  2015-05-29 12:50:28.165 | + exit 1


  This repeats several times; does anyone know the reason?

  Thanks!

  --
  Best wishes!
  Baohua
  __

  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Does Bay/Baymodel name should be a required option when creating a Bay/Baymodel

2015-06-01 Thread Kai Qiang Wu
+1 to Jay's option.

BTW, as nova and glance both allow the same name for instances or images, the
name does not seem to need to be unique; I think that is OK.


Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Jay Lau jay.lau@gmail.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   06/01/2015 11:17 PM
Subject:Re: [openstack-dev] [Magnum] Does Bay/Baymodel name should be a
required option when creating a Bay/Baymodel





2015-06-01 21:54 GMT+08:00 Jay Pipes jaypi...@gmail.com:
  On 05/31/2015 05:38 PM, Jay Lau wrote:
   Just want to use the ML to trigger more discussion here. There are now
   bugs/patches tracking this, but it seems more discussion is needed before
   we come to a conclusion.

   https://bugs.launchpad.net/magnum/+bug/1453732
   https://review.openstack.org/#/c/181839/
   https://review.openstack.org/#/c/181837/
   https://review.openstack.org/#/c/181847/
   https://review.openstack.org/#/c/181843/

   IMHO, making the Bay/Baymodel name a MUST will bring more flexibility
   to end users, as Magnum also supports operating on a Bay/Baymodel via its
   name, and the name might be more meaningful to end users.

   Perhaps we can borrow some ideas from nova; the concepts in magnum can be
   mapped to nova as follows:

   1) instance = bay
   2) flavor = baymodel

   So I think that a solution might be as follows:
   1) Make name a MUST for both bay/baymodel
   2) Update the magnum client to use the following style for bay-create and
   baymodel-create: DO NOT add a --name option

  You should decide whether name would be unique -- either globally or
  within a tenant.

  Note that Nova's instance names (the display_name model field) are *not*
  unique, neither globally nor within a tenant. I personally believe this
  was a mistake.

  The decision affects your data model and constraints.
Yes, my thinking is to have Magnum behave the same as nova. The name
can be managed by the end user, and the end user can specify the name as
they want; it is the end user's responsibility to make sure there are no
duplicate names. Actually, I think that the name does not need to be unique,
but the UUID does.

  Best,
  -jay


  __

  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Continuing with heat-coe-templates

2015-06-30 Thread Kai Qiang Wu
For @Tom's suggestion, I am +1 on it; maintaining a separate
heat-coe-templates repo is very inefficient. [See @Hongbin's comments below.]



Thanks


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Fox, Kevin M kevin@pnnl.gov
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   06/30/2015 11:40 PM
Subject:Re: [openstack-dev] [Magnum] Continuing with heat-coe-templates



What's the timeframe? I was really hoping for Liberty, but it's sounding like
that's unlikely? M then? The app catalog really needs conditionals for the
same reason. :/

Thanks,
Kevin

From: Angus Salkeld [asalk...@mirantis.com]
Sent: Monday, June 29, 2015 6:56 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] Continuing with heat-coe-templates

On Tue, Jun 30, 2015 at 8:23 AM, Fox, Kevin M kevin@pnnl.gov wrote:
  Needing to fork templates to tweak things is a very common problem.

  Adding conditionals to Heat was discussed at the Summit. (
  https://etherpad.openstack.org/p/YVR-heat-liberty-template-format). I
  want to say, someone was going to prototype it using YAQL, but I don't
  remember who.

I was going to do that, but I would not expect it to be ready in a very short
time frame. It needs
some investigation and agreement from others. I'd suggest making your
decision based
on what we have now.

-Angus


  Would it be reasonable to keep if conditionals worked?

  Thanks,
  Kevin
  
  From: Hongbin Lu [hongbin...@huawei.com]
  Sent: Monday, June 29, 2015 3:01 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Magnum] Continuing with heat-coe-templates

  Agree. The motivation for pulling the templates out of the Magnum tree was
  the hope that these templates could be leveraged by a larger community and
  get more feedback. However, that is unlikely to be the case in practice,
  because different people have their own versions of the templates addressing
  different use cases. It has proven hard to consolidate different templates
  even if these templates share a large amount of duplicated code
  (recall that we had to copy-and-paste the original template to add
  support for Ironic and CoreOS). So, +1 for stopping usage of
  heat-coe-templates.

  Best regards,
  Hongbin

  -Original Message-
  From: Tom Cammann [mailto:tom.camm...@hp.com]
  Sent: June-29-15 11:16 AM
  To: openstack Development Mailing List (not for usage questions)
  Subject: [openstack-dev] [Magnum] Continuing with heat-coe-templates

  Hello team,

  I've been doing work in Magnum recently to align our templates with the
  upstream templates from larsks/heat-kubernetes[1]. I've also been
  porting these changes to the stackforge/heat-coe-templates[2] repo.

  I'm currently not convinced that maintaining a separate repo for Magnum
  templates (stackforge/heat-coe-templates) is beneficial for Magnum or the
  community.

  Firstly it is very difficult to draw a line on what should be allowed
  into the heat-coe-templates. We are currently taking out changes[3] that
  introduced useful autoscaling capabilities in the templates but that
  didn't fit the Magnum plan. If we are going to treat the
  heat-coe-templates in that way then this extra repo will not allow
  organic development of new and old container engine templates that are
  not tied into Magnum.
  Another recent change[4] in development is smart autoscaling of bays
  which introduces parameters that don't make a lot of sense outside of
  Magnum.

  There are also difficult interdependency problems between the templates
  and the Magnum project such as the parameter fields. If a required
  parameter is added into the template the Magnum code must be also updated
  in the same commit to avoid functional test failures. This can be avoided
  using the Depends-On: #xx feature of gerrit, but it is an additional overhead
  and will require some CI setup.

  Additionally we would have to version the templates, which I assume would
  be necessary to allow for packaging. This brings with it is own problems.

  As far as I am aware there are no other people using the
  heat-coe-templates beyond the Magnum team; if we want independent growth
  of this repo it will need to be adopted by other people rather than just
  Magnum committers.

  I don't see the heat templates as a dependency of Magnum, I see them as a
  truly fundamental part of Magnum which is going to be very difficult

Re: [openstack-dev] [magnum] Magnum Midcycle Event Scheduling Doodle Poll closes July 7th

2015-07-01 Thread Kai Qiang Wu
Hi Stdake,

If I participate remotely, do I need to vote in
http://doodle.com/pinkuc5hw688zhxw ?



Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Steven Dake (stdake) std...@cisco.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   07/02/2015 02:06 AM
Subject:[openstack-dev] [magnum] Magnum Midcycle Event Scheduling
Doodle Poll closes July 7th



Apologies for the double post - I left off [magnum] previously by error.

Ton Ngo of IBM Silicon Valley Research has graciously offered to host the 2
day Magnum midcycle event at IBM’s facilities.

The sessions will run from 9AM to 5PM, and catered lunch and refreshments
(soda/water) will be provided.

The mid-cycle will be a standard mid-cycle with a 1 hour introduction
followed by two days of design sessions.

Please cast your votes on any days you can make.

http://doodle.com/pinkuc5hw688zhxw

There are ~25 seats available.  Preference will be given to in-person core
reviewers, followed by any folks that have made commits to the repository.
After dates are settled, a separate eventbrite event will be set up to sort
out specifics such as dietary needs, etc., and confirm in-person seating
if we are past capacity limits.

We will make remote participation available, but the experience will likely
be less than optimal for remote participants.

Regards
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]problems for horizontal scale

2015-08-13 Thread Kai Qiang Wu
hi Hua,

My comments in blue below. please check.

Thanks


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   王华 wanghua.hum...@gmail.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   08/13/2015 03:32 PM
Subject:Re: [openstack-dev] [magnum]problems for horizontal scale



Hi Kai Qiang Wu,

I have some comments in line.

On Thu, Aug 13, 2015 at 1:32 PM, Kai Qiang Wu wk...@cn.ibm.com wrote:
  Hi Hua,

  I have some comments about this:

  A. Removing the heat poller can be a way, but some of its logic needs to be
  kept so that it still works and does not burden performance.
  1) The old heat poller is a quick loop with a fixed interval, to make
  sure stack status updates are quickly reflected in the bay status.
  2) The periodic task is a dynamic loop with a long period; it was added for
  stacks whose creation times out and loop 1) exits, so this loop 2) can still
  update the stack, and it also covers the conductor crash issue.


 It is not necessary to remove the heat poller, so we can keep it.



  It would be ideal to have one place that loops over the stacks, but the
  periodic task needs to consider whether it really only needs to loop over
  stacks in IN_PROGRESS status, and what the loop interval should be (60s
  or shorter, for loop performance).


It is necessary to loop over IN_PROGRESS stacks for the conductor crash issue.


  Does heat have other status transition paths, like delete_failed ->
  (status reset) -> OK, etc.?


 That needs to be confirmed.




  B. For removing the db operation in the bay_update case, I did not understand
  your suggestion.
  bay_update includes update_stack and poll_and_check (which is in the heat
  poller); if you move the heat poller into the periodic task (as you said in
  your point 3), it still needs db operations.



Race conditions occur in periodic tasks too. If we save the stack params
such as node_count in bay_update and a race condition occurs, then the
node_count in the db is wrong and the status is UPDATE_COMPLETE, and there is
no way to correct it.
If we save stack params in periodic tasks and a race condition occurs, the
node_count in the db is still wrong and the status is UPDATE_COMPLETE. We can
correct it in the next periodic task if the race condition does not recur. The
solution I proposed cannot guarantee the data in the db is always right.

Yes, it can help some. When you mentioned the periodic task, I checked it:

  filters = [bay_status.CREATE_IN_PROGRESS,
             bay_status.UPDATE_IN_PROGRESS,
             bay_status.DELETE_IN_PROGRESS]
  bays = objects.Bay.list_all(ctx, filters=filters)

If a bay is UPDATE_COMPLETE, I did not find that this task would sync it. Do
you mean adding that status check to this periodic task?



C. Allowing the admin user to show stacks in other tenants seems OK. Have other
projects tried this before? Is it a reasonable case for customers?

Nova allows the admin user to show instances in other tenants. Neutron allows
the admin user to show ports in other tenants; nova uses this to sync up
network info for instances from neutron.
   That would be OK, I think

Thanks


Best Wishes,


Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
        No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193


Follow your heart. You are miracle!


From: 王华 wanghua.hum...@gmail.com
To: openstack-dev@lists.openstack.org
Date: 08/13/2015 11:31 AM
Subject: Re: [openstack-dev] [magnum]problems for horizontal scale




  any comments on this?

  On Wed, Aug 12, 2015 at 2:50 PM, 王华 wanghua.hum...@gmail.com wrote:
Hi All,

In order to prevent race conditions due to multiple conductors, my
solution is as below:
1. remove the db operation in bay_update to prevent race
conditions. Stack operation is atomic. Db operation is atomic. But
the two operations together are not atomic, so the data in the db
may be wrong.
2. sync up stack status and stack parameters (now only node_count)
from heat by periodic tasks

Re: [openstack-dev] [Magnum] Obtain the objects from the bay endpoint

2015-08-13 Thread Kai Qiang Wu
Hi Stdake and Vilobh,

If I understand what you proposed below, you mean pod/rc/service would not be
stored on the Magnum side, just retrieved and updated on the k8s side?

For now, if Magnum does not add any specific logic to pod/rc/service, that
should be OK.


Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Steven Dake (stdake) std...@cisco.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   08/12/2015 11:52 PM
Subject:Re: [openstack-dev] [Magnum] Obtain the objects from the bay
endpoint





From: Akash Gangil akashg1...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Wednesday, August 12, 2015 at 1:37 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Magnum] Obtain the objects from the bay
endpoint

  Hi,

  I have a few questions. inline.


Problem :-

Currently objects (pod/rc/service) are read from the database. In
order for native clients to work, they must be read from the ReST
bay endpoint. To execute native clients, we must have one source of
truth for the state of the system, not two as in its current state.


  What is meant by the native clients here? Can you give an example?

Native client is docker binary or kubectl from those various projects.  We
also need to support python-magnumclient operations to support further Heat
integration, which allows Magnum to be used well with proprietary software
implementations that may be doing orchestration via Heat.



A]  READ path needs to be changed:

1. For python clients:

python-magnum client -> rest api -> conductor -> rest endpoint -> k8s api
handler

In its present state this is: python-magnum client -> rest api -> db

2. For native clients:

native client -> rest endpoint -> k8s api



  If the native client can get all the info through the rest endpoint -> k8s
  handler path, why, in the case of the magnum client, do we need to go through
  rest api -> conductor? Do we parse or modify the k8s API data before
  responding to the python-magnum client?



Kubernetes has a rest API endpoint running in the bay.  This is different
from the Magnum rest API.  This is what is referred to above.

B] WRITE operations need to happen via the rest endpoint instead of
the conductor.

  If we completely bypass the conductor, is there any way to keep a
  trace of how a resource was modified? Since I presume magnum now
  doesn't have that info, since we talk to the k8s API directly? Or
  is this irrelevant?
C] Another requirement that needs to be satisfied is that the data
returned by magnum should be the same whether it was created by a native
client or the python-magnum client.

  I don't understand why the information is duplicated in the magnum db
  and the k8s data source in the first place. From what I understand magnum
  has its own database which holds the k8s API responses?

The reason it is duplicated is because when I wrote the original code, I
didn’t foresee this objective.  Essentially I’m not perfect ;)

The fix will make sure all of the above conditions are met.


Need your input on the proposed approach.



ACK accurate of my understanding of the proposed approach :)
-Vilobh


[1] https://blueprints.launchpad.net/magnum/+spec/objects-from-bay

__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






  --
  Akash
  __

  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Magnum] Consistent functional test failures (seems infra not have enough resource)

2015-08-13 Thread Kai Qiang Wu
Hi Tom,


I did talk to the infra team; I think it is a resource issue, but they thought
it is a nova issue.


When we boot the k8s bay, we use a baymodel with flavor m1.small; you can find
the devstack flavor list below:



+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID  | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1   | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2   | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3   | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4   | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 42  | m1.nano   | 64        | 0    | 0         |      | 1     | 1.0         | True      |
| 451 | m1.heat   | 512       | 0    | 0         |      | 1     | 1.0         | True      |
| 5   | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
| 84  | m1.micro  | 128       | 0    | 0         |      | 1     | 1.0         | True      |
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+



From logs below:

[req-e5bb52cb-387e-4638-911e-8c72aa1b6400 admin admin]
(devstack-trusty-rax-dfw-4299602, devstack-trusty-rax-dfw-4299602)
ram:5172 disk:17408 io_ops:0 instances:1 does not have 20480 MB usable
disk, it only has 17408.0 MB usable disk. host_passes
/opt/stack/new/nova/nova/scheduler/filters/disk_filter.py:60
2015-08-13 08:26:15.218 INFO nova.filters
[req-e

The m1.small flavor requests 20 GB (20480 MB) of disk, so the host failed for
that reason.
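A quick sketch of the arithmetic behind that scheduler message (the numbers
come from the log above; the helper itself is only illustrative, not nova's
actual DiskFilter code):

  def host_has_enough_disk(usable_disk_mb, flavor_root_gb, flavor_ephemeral_gb=0):
      requested_mb = (flavor_root_gb + flavor_ephemeral_gb) * 1024
      return usable_disk_mb >= requested_mb, requested_mb

  host_has_enough_disk(usable_disk_mb=17408, flavor_root_gb=20)
  # -> (False, 20480), i.e. "does not have 20480 MB usable disk"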


I think it is related to this: the Jenkins-allocated VM disk space is not
large.
I am curious why it has failed so often recently. Did os-infra change
something?




Thanks




Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Tom Cammann tom.camm...@hp.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   08/13/2015 06:24 PM
Subject:[openstack-dev] [Magnum] Consistent functional test failures



Hi Team,

Wanted to let you know why we are having consistent functional test
failures in the gate.

This is being caused by Nova returning "No valid host" to heat:

2015-08-13 08:26:16.303 31543 INFO heat.engine.resource [-] CREATE:
Server kube_minion [12ab45ef-0177-4118-9ba0-3fffbc3c1d1a] Stack
testbay-y366b2atg6mm-kube_minions-cdlfyvhaximr-0-dufsjliqfoet
[b40f0c9f-cb54-4d75-86c3-8a9f347a27a6]
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource Traceback (most
recent call last):
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource File
/opt/stack/new/heat/heat/engine/resource.py, line 625, in
_action_recorder
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource yield
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource File
/opt/stack/new/heat/heat/engine/resource.py, line 696, in _do_action
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource yield
self.action_handler_task(action, args=handler_args)
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource File
/opt/stack/new/heat/heat/engine/scheduler.py, line 320, in wrapper
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource step =
next(subtask)
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource File
/opt/stack/new/heat/heat/engine/resource.py, line 670, in
action_handler_task
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource while not
check(handler_data):
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource File
/opt/stack/new/heat/heat/engine/resources/openstack/nova/server.py,
line 759, in check_create_complete
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource return
self.client_plugin()._check_active(server_id)
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource File
/opt/stack/new/heat/heat/engine/clients/os/nova.py, line 232, in
_check_active
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource 'code':
fault.get('code', _('Unknown'))
2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource
ResourceInError: Went to status ERROR due to Message: No valid host was
found. There are not enough hosts available., Code: 500

And this in turn is being caused by the compute instance running out of
disk space:

2015-08-13 08:26:15.216 DEBUG nova.filters
[req-e5bb52cb-387e-4638-911e-8c72aa1b6400 admin admin] Starting with 1
host(s) get_filtered_objects /opt/stack/new/nova/nova/filters.py:70
2015-08-13 08:26:15.217 DEBUG nova.filters
[req-e5bb52cb-387e

Re: [openstack-dev] [magnum]problems for horizontal scale

2015-08-12 Thread Kai Qiang Wu
Hi Hua,

I have some comments about this:

A. Removing the heat poller can be a way, but some of its logic needs to be
kept so that it still works and does not burden performance.
1) The old heat poller is a quick loop with a fixed interval, to make sure
stack status updates are quickly reflected in the bay status.
2) The periodic task is a dynamic loop with a long period; it was added for
stacks whose creation times out and loop 1) exits, so this loop 2) can still
update the stack, and it also covers the conductor crash issue.


It would be ideal to have one place that loops over the stacks, but the
periodic task needs to consider whether it really only needs to loop over
stacks in IN_PROGRESS status, and what the loop interval should be (60s or
shorter, for loop performance).

Does heat have other status transition paths, like delete_failed ->
(status reset) -> OK, etc.?



B. For removing the db operation in the bay_update case, I did not understand
your suggestion.
bay_update includes update_stack and poll_and_check (which is in the heat
poller); if you move the heat poller into the periodic task (as you said in
your point 3), it still needs db operations.



C. Allowing the admin user to show stacks in other tenants seems OK. Have other
projects tried this before? Is it a reasonable case for customers?



Thanks


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   王华 wanghua.hum...@gmail.com
To: openstack-dev@lists.openstack.org
Date:   08/13/2015 11:31 AM
Subject:Re: [openstack-dev] [magnum]problems for horizontal scale



any comments on this?

On Wed, Aug 12, 2015 at 2:50 PM, 王华 wanghua.hum...@gmail.com wrote:
  Hi All,

  In order to prevent race conditions due to multiple conductors, my
  solution is as below:
  1. remove the db operation in bay_update to prevent race conditions. Stack
  operation is atomic. Db operation is atomic. But the two operations
  together are not atomic, so the data in the db may be wrong.
  2. sync up stack status and stack parameters (now only node_count) from
  heat by periodic tasks. bay_update can change stack parameters, so we
  need to sync them up.
  3. remove the heat poller, because we have periodic tasks.

  To sync up stack parameters from heat, we need to show stacks using
  admin_context. But heat doesn't allow showing stacks in another tenant. If we
  want to show stacks in another tenant, we need to store an auth context for
  every bay. That is a problem. Even if we store the auth context, there is
  a timeout for the token. The best way, I think, is to let heat allow the
  admin user to show stacks in other tenants.
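  For illustration only, here is a minimal sketch of what such a periodic sync
  task might look like. The names used (objects.Bay.list_all, bay_status, the
  admin heat client, and the 'number_of_minions' stack parameter) are
  assumptions drawn from this thread, not actual Magnum code:

    def sync_bays_from_heat(ctx, heat_admin_client):
        # Only bays whose stacks may still be changing need to be polled.
        filters = [bay_status.CREATE_IN_PROGRESS,
                   bay_status.UPDATE_IN_PROGRESS,
                   bay_status.DELETE_IN_PROGRESS]
        for bay in objects.Bay.list_all(ctx, filters=filters):
            # This only works if heat lets an admin see stacks in other
            # tenants, which is exactly the open question in this thread.
            stack = heat_admin_client.stacks.get(bay.stack_id)
            bay.status = stack.stack_status
            bay.node_count = stack.parameters.get('number_of_minions',
                                                  bay.node_count)
            bay.save()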

  Do you have a better solution or any improvement for my solution?

  Regards,
  Wanghua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][blueprint] magnum-service-list

2015-08-03 Thread Kai Qiang Wu
Hi Suro and Jay,

I checked the discussion below, and I do believe we also need service-list (for
just magnum-api and magnum-conductor), but it is not such an urgent requirement.

I also think service-list should not be bound to k8s or swarm, etc. (we can use
coe-service, etc.)


But I have more comments below:

1) For k8s, swarm, or mesos, I think Magnum can expose this through
coe-service-list.
But right now we fetch pod/rc status from the DB; it seems improper to do
that, as the DB has stale data. We need to fetch it through the
k8s/swarm API endpoints.


2)  It can also be exposed through the k8s/swarm/mesos client tools, if users
prefer that.
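As an illustration of point 1), here is a hedged sketch of reading services
live from the bay's k8s endpoint instead of the Magnum DB (the API path follows
the v1beta3-era layout discussed earlier in this digest and is only an
assumption, as is the function name; the point is simply that the COE endpoint,
not the DB, is the source of truth):

  import requests

  def coe_service_list(bay_api_address):
      # e.g. bay_api_address = 'http://192.168.45.12:8080'
      url = bay_api_address.rstrip('/') + '/api/v1beta3/services'
      resp = requests.get(url)
      resp.raise_for_status()
      return [item['metadata']['name']
              for item in resp.json().get('items', [])]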


Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Jay Lau jay.lau@gmail.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   08/04/2015 05:51 AM
Subject:Re: [openstack-dev] [magnum][blueprint] magnum-service-list



Hi Suro,

Yes, I did not see a strong reason for adding service-list to show all of the
Magnum system services, but it is nice to have.

But I did see a strong reason to rename service-list to coe-service-list, or
something else more meaningful, as I was often asked why magnum service-list
shows some services in Kubernetes but not the Magnum system itself. This
command always makes people confused.

Thanks!

2015-08-03 15:36 GMT-04:00 SURO suro.p...@gmail.com:
  Hi Jay,

  Thanks for clarifying the requirements further.

  I do agree with the idea of having 'magnum service-list' and 'magnum
  coe-service-list' to distinguish that coe-service is a different concept.
  BUT, in the OpenStack space, I do not see 'service-list' as a standardized
  function across other APIs -
 1.  'nova service-list' = lists services like api, conductor, etc.
 2.  neutron does not have this option.
 3.  'heat service-list' = lists the available engines.
 4.  'keystone service-list' = lists the services/APIs that consult
keystone.
  Now in magnum, we may choose to model it after nova, but nova really has
  a bunch of backend services, viz. nova-conductor, nova-cert,
  nova-scheduler, nova-consoleauth, nova-compute [x N], whereas magnum does
  not.

  For magnum, at this point creating 'service-list' only for api/conductor
  - do you see a strong need?

  Regards,
  SURO
  irc//freenode: suro-patz

  On 8/3/15 12:00 PM, Jay Lau wrote:
Hi Suro and others, comments on this? Thanks.

2015-07-30 5:40 GMT-04:00 Jay Lau jay.lau@gmail.com:
 Hi Suro,

 In my understanding, even if other COEs might have service/pod/rc
 concepts in the future, we may still want to distinguish magnum
 service-list from magnum coe-service-list.

 service-list is mainly for Magnum's native services, such as
 magnum-api, magnum-conductor, etc.
 coe-service-list is mainly for the services running on the COEs
 in magnum.

 Thoughts? Thanks.

 2015-07-29 17:50 GMT-04:00 SURO suro.p...@gmail.com:
   Hi Hongbin,

   What would be the value of having a COE-specific magnum command that
   goes and talks to the DB? In that case, the user may use the native
   client itself to fetch the data from the COE, which will even have the
   latest state.

   In a pluggable architecture there is always scope for common
   abstraction and driver implementation. I think it is too early
   to declare service/rc/pod as specific to k8s, as the other COEs
   may very well converge onto similar/same concepts.

   Regards,
   SURO
   irc//freenode: suro-patz

   On 7/29/15 2:21 PM, Hongbin Lu wrote:


 Suro,





 I think service/pod/rc are k8s-specific. +1 for Jay’s
 suggestion about renaming the COE-specific commands, since the
 new naming style looks consistent with other OpenStack
 projects. In addition, it will eliminate name collisions between
 different COEs. Also, if we are going to support pluggable
 COEs, adding a prefix to COE-specific commands is unavoidable.





 Best regards,


 Hongbin





 From: SURO [mailto:suro.p...@gmail.com]
 Sent: July-29-15 4:03 PM
 To: Jay Lau
 Cc: s...@yahoo-inc.com; OpenStack Development Mailing List
 (not for usage questions)
 Subject: Re: [openstack-dev] [magnum

Re: [openstack-dev] [Magnum] Consistent functional test failures

2015-08-14 Thread Kai Qiang Wu
I have checked with the infra team members. For two instances, 10 GB each
should be OK.

So I added some steps to create a Magnum-specific flavor (8 GB disk) instead
of using the existing devstack flavors (m1.small needs 20 GB; m1.tiny cannot
be used).

Magnum creates the flavor for the Jenkins job and deletes it when the tests
finish, roughly as sketched below.
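
[Sketch only; the actual gate change lives in the devstack/post_test_hook
scripts. It shows the idea of creating a small Magnum-specific flavor before
the functional tests and deleting it afterwards. The flavor name, sizes and
the classic credential-based novaclient constructor are assumptions.]

from novaclient import client as nova_client


def run_with_magnum_flavor(username, password, project, auth_url, run_tests):
    nova = nova_client.Client('2', username, password, project, auth_url)
    # 8 GB disk so two test instances fit on the ~17 GB usable gate node disk.
    flavor = nova.flavors.create(name='m1.magnum', ram=1024, vcpus=1, disk=8)
    try:
        run_tests(flavor.id)   # the functional tests boot bays with this flavor
    finally:
        nova.flavors.delete(flavor.id)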


Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Clark Boylan cboy...@sapwetik.org
To: openstack-dev@lists.openstack.org
Date:   08/14/2015 08:05 AM
Subject:Re: [openstack-dev] [Magnum] Consistent functional test
failures



On Thu, Aug 13, 2015, at 03:13 AM, Tom Cammann wrote:
 Hi Team,

 Wanted to let you know why we are having consistent functional test
 failures in the gate.

 This is being caused by Nova returning No valid host to heat:

 2015-08-13 08:26:16.303 31543 INFO heat.engine.resource [-] CREATE:
 Server kube_minion [12ab45ef-0177-4118-9ba0-3fffbc3c1d1a] Stack
 testbay-y366b2atg6mm-kube_minions-cdlfyvhaximr-0-dufsjliqfoet
 [b40f0c9f-cb54-4d75-86c3-8a9f347a27a6]
 2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource Traceback (most
 recent call last):
 2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource File
 /opt/stack/new/heat/heat/engine/resource.py, line 625, in
 _action_recorder
 2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource yield
 2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource File
 /opt/stack/new/heat/heat/engine/resource.py, line 696, in _do_action
 2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource yield
 self.action_handler_task(action, args=handler_args)
 2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource File
 /opt/stack/new/heat/heat/engine/scheduler.py, line 320, in wrapper
 2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource step =
 next(subtask)
 2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource File
 /opt/stack/new/heat/heat/engine/resource.py, line 670, in
 action_handler_task
 2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource while not
 check(handler_data):
 2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource File
 /opt/stack/new/heat/heat/engine/resources/openstack/nova/server.py,
 line 759, in check_create_complete
 2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource return
 self.client_plugin()._check_active(server_id)
 2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource File
 /opt/stack/new/heat/heat/engine/clients/os/nova.py, line 232, in
 _check_active
 2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource 'code':
 fault.get('code', _('Unknown'))
 2015-08-13 08:26:16.303 31543 ERROR heat.engine.resource
 ResourceInError: Went to status ERROR due to Message: No valid host was
 found. There are not enough hosts available., Code: 500

 And this in turn is being caused by the compute instance running out of
 disk space:

 2015-08-13 08:26:15.216 DEBUG nova.filters
 [req-e5bb52cb-387e-4638-911e-8c72aa1b6400 admin admin] Starting with 1
 host(s) get_filtered_objects /opt/stack/new/nova/nova/filters.py:70
 2015-08-13 08:26:15.217 DEBUG nova.filters
 [req-e5bb52cb-387e-4638-911e-8c72aa1b6400 admin admin] Filter
 RetryFilter returned 1 host(s) get_filtered_objects
 /opt/stack/new/nova/nova/filters.py:84
 2015-08-13 08:26:15.217 DEBUG nova.filters
 [req-e5bb52cb-387e-4638-911e-8c72aa1b6400 admin admin] Filter
 AvailabilityZoneFilter returned 1 host(s) get_filtered_objects
 /opt/stack/new/nova/nova/filters.py:84
 2015-08-13 08:26:15.217 DEBUG nova.filters
 [req-e5bb52cb-387e-4638-911e-8c72aa1b6400 admin admin] Filter RamFilter
 returned 1 host(s) get_filtered_objects
 /opt/stack/new/nova/nova/filters.py:84
 2015-08-13 08:26:15.218 DEBUG nova.scheduler.filters.disk_filter
 [req-e5bb52cb-387e-4638-911e-8c72aa1b6400 admin admin]
 (devstack-trusty-rax-dfw-4299602, devstack-trusty-rax-dfw-4299602)
 ram:5172 disk:17408 io_ops:0 instances:1 does not have 20480 MB usable
 disk, it only has 17408.0 MB usable disk. host_passes
 /opt/stack/new/nova/nova/scheduler/filters/disk_filter.py:60
 2015-08-13 08:26:15.218 INFO nova.filters
 [req-e5bb52cb-387e-4638-911e-8c72aa1b6400 admin admin] Filter DiskFilter
 returned 0 hosts

 For now a recheck seems to work about 1 in 2, so we can still land
 patches.

 The fix for this could be to clean up our Magnum devstack install more
 aggressively, which might be as simple as cleaning up the images we use,
 or get infra to provide our tests with a larger disk size. I will
 probably test out a patch today which cleans up the images we use in
 devstack to see if that helps

Re: [openstack-dev] [magnum] Magnum template manage use platform VS others as a type?

2015-07-15 Thread Kai Qiang Wu
Hi HongBin,

I think flavor introduces more confusion than nova_instance_type or
instance_type.


Flavors have no binding to 'vm' or 'baremetal'.

Let me summarize the initial question:
  We have two kinds of templates for Kubernetes now
(Heat templates are not as flexible as a programming language, with if/else
etc., and separate templates are easy to maintain).
Of the two kinds of Kubernetes templates, one boots VMs and the other boots
baremetal. 'VM' or 'baremetal' here is just used for Heat template selection.


1. If we used flavor, it is a Nova-specific concept. Take two as examples,
m1.small and m1.middle:
   m1.small = 'VM', m1.middle = 'VM'
   Both m1.small and m1.middle can be used in a 'VM' environment.
So we should not use m1.small as a template identifier. That's why I
think flavor is not good to use.


2. @Adrian, we have a --flavor-id field for baymodel now; it is picked up
by the Heat templates, which boot instances with that flavor.


3. Finally, I think instance_type is better. instance_type can be used as the
Heat template identification parameter.

instance_type = 'vm' means the templates fit a normal 'VM' Heat stack
deployment.

instance_type = 'baremetal' means the templates fit an Ironic baremetal Heat
stack deployment.





Thanks!


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Hongbin Lu hongbin...@huawei.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   07/16/2015 04:44 AM
Subject:Re: [openstack-dev] [magnum] Magnum template manage use
platform VS others as a type?



+1 for the idea of using Nova flavor directly.

The reason we introduced the “platform” field to indicate “vm” or “baremetal”
is that Magnum needs to map a bay to a Heat template (which will be used to
provision the bay). Currently, Magnum has three layers of mapping:
  ・ platform: vm or baremetal
  ・ os: atomic, coreos, …
  ・ coe: kubernetes, swarm or mesos

I think we could just replace “platform” with “flavor”, if we can populate
a list of flavors for VM and another list of flavors for baremetal (we may
need an additional list of flavors for containers in the future for the
nested container use case). Then, the new three layers would be:
  ・ flavor: baremetal, m1.small, m1.medium, …
  ・ os: atomic, coreos, ...
  ・ coe: kubernetes, swarm or mesos

This approach can avoid introducing a new field in baymodel to indicate
what the Nova flavor already indicates.
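
[Editorial sketch of the flavor-list mapping idea above; the flavor names and
the helper are illustrative assumptions, not Magnum code.]

# Operator-maintained lists of flavors, as suggested above.
BAREMETAL_FLAVORS = {'baremetal'}
VM_FLAVORS = {'m1.small', 'm1.medium'}


def layer_from_flavor(flavor_name):
    """Map a Nova flavor name onto the first template-mapping layer."""
    if flavor_name in BAREMETAL_FLAVORS:
        return 'baremetal'
    if flavor_name in VM_FLAVORS:
        return 'vm'
    raise ValueError('flavor %s is not registered for any layer' % flavor_name)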

Best regards,
Hongbin

From: Fox, Kevin M [mailto:kevin@pnnl.gov]
Sent: July-15-15 12:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform
VS others as a type?

Maybe somehow I missed the point, but why not just use raw Nova flavors?
They already abstract away ironic vs kvm vs hyperv/etc.

Thanks,
Kevin

From: Daneyon Hansen (danehans) [daneh...@cisco.com]
Sent: Wednesday, July 15, 2015 9:20 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform
VS others as a type?
All,

IMO virt_type does not properly describe bare metal deployments.  What
about using the compute_driver parameter?

compute_driver = None


(StrOpt) Driver to use for controlling virtualization. Options include:
libvirt.LibvirtDriver, xenapi.XenAPIDriver, fake.FakeDriver,
baremetal.BareMetalDriver, vmwareapi.VMwareVCDriver, hyperv.HyperVDriver


http://docs.openstack.org/kilo/config-reference/content/list-of-compute-config-options.html
http://docs.openstack.org/developer/ironic/deploy/install-guide.html

From: Adrian Otto adrian.o...@rackspace.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, July 14, 2015 at 7:44 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform
VS others as a type?

 One drawback to virt_type if not seen in the context of the acceptable
 values, is that it should be set to values like libvirt, xen, ironic, etc.
 That might actually be good. Instead of using the values 'vm' or
 'baremetal', we use the name of the nova virt driver, and interpret those
 to be vm or baremetal types. So if I set the value to 'xen', I know the
 nova instance type is a vm, and 'ironic' means a baremetal nova instance.

 Adrian

Re: [openstack-dev] [magnum] Magnum template manage use platform VS others as a type?

2015-07-15 Thread Kai Qiang Wu
Hi Hong Bin,

Thanks for your reply.


I think it is better to discuss the 'platform' vs. instance_type vs. others
case first.
Attached: the initial patch (about the discussion):
https://review.openstack.org/#/c/200401/

My other patches all depend on the above patch; if the above patch cannot
reach a meaningful agreement,

my following patches will be blocked by it.



Thanks


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Hongbin Lu hongbin...@huawei.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   07/16/2015 11:47 AM
Subject:Re: [openstack-dev] [magnum] Magnum template manage use
platform VS others as a type?



Kai,

Sorry for the confusion. To clarify, I was thinking how to name the field
you proposed in baymodel [1]. I prefer to drop it and use the existing
field ‘flavor’ to map the Heat template.

[1] https://review.openstack.org/#/c/198984/6

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: July-15-15 10:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform
VS others as a type?



Hi HongBin,

I think flavors introduces more confusion than nova_instance_type or
instance_type.


As flavors not have binding with 'vm' or 'baremetal',

Let me summary the initial question:
  We have two kinds of templates for kubernetes now,
(as templates in heat not flexible like programming language, if else etc.
And separate templates are easy to maintain)
The two kinds of kubernets templates,  One for boot VM, another boot
Baremetal. 'VM' or Baremetal here is just used for heat template selection.


1 If used flavor, it is nova specific concept: take two as example,
m1.small, or m1.middle.
   m1.small  'VM' m1.middle  'VM'
   Both m1.small and m1.middle can be used in 'VM' environment.
So we should not use m1.small as a template identification. That's why I
think flavor not good to be used.


2 @Adrian, we have --flavor-id field for baymodel now, it would picked up
by heat-templates, and boot instances with such flavor.


3 Finally, I think instance_type is better.  instance_type can be used as
heat templates identification parameter.

instance_type = 'vm', it means such templates fit for normal 'VM' heat
stack deploy

instance_type = 'baremetal', it means such templates fit for ironic
baremetal heat stack deploy.





Thanks!


Best Wishes,


Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193


Follow your heart. You are miracle!


From: Hongbin Lu hongbin...@huawei.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: 07/16/2015 04:44 AM
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform
VS others as a type?




+1 for the idea of using Nova flavor directly.

Why we introduced the “platform” field to indicate “vm” or “baremetel” is
that magnum need to map a bay to a Heat template (which will be used to
provision the bay). Currently, Magnum has three layers of mapping:
  ・ platform: vm or baremetal
  ・ os: atomic, coreos, …
  ・ coe: kubernetes, swarm or mesos

I think we could just replace “platform” with “flavor”, if we can populate
a list of flovars for VM and another list of flavors for baremetal (We may
need an additional list of flavors for container in the future for the
nested container use case). Then, the new three layers would be:
  ・ flavor: baremetal, m1.small, m1.medium,  …
  ・ os: atomic, coreos, ...
  ・ coe: kubernetes, swarm or mesos

This approach can avoid introducing a new field in baymodel to indicate
what Nova flavor already indicates.

Best regards,
Hongbin

From: Fox, Kevin M [mailto:kevin@pnnl.gov]
Sent: July-15-15 12:37 PM
To: OpenStack Development Mailing List (not for usage

Re: [openstack-dev] [magnum] Magnum template manage use platform VS others as a type?

2015-07-16 Thread Kai Qiang Wu
+1 about server_type.

I also think it is OK.


Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Adrian Otto adrian.o...@rackspace.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   07/16/2015 03:18 PM
Subject:Re: [openstack-dev] [magnum] Magnum template manage use
platform VS others as a type?



I’d be comfortable with server_type.

Adrian

  On Jul 15, 2015, at 11:51 PM, Jay Lau jay.lau@gmail.com wrote:

  After more thinking, I agree with Hongbin that instance_type might
  make customers confuse it with flavor; what about using server_type?

  Actually, nova has the concept of a server group, and the servers in such
  a group can be a VM, PM (physical machine) or container.

  Thanks!

  2015-07-16 11:58 GMT+08:00 Kai Qiang Wu wk...@cn.ibm.com:
Hi Hong Bin,

Thanks for your reply.


I think it is better to discuss the 'platform' Vs instance_type Vs
others case first.
Attach:  initial patch (about the discussion):
https://review.openstack.org/#/c/200401/

My other patches all depend on above patch, if above patch can not
reach a meaningful agreement.

My following patches would be blocked by that.



Thanks


Best Wishes,



Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing
P.R.China 100193



Follow your heart. You are miracle!


From: Hongbin Lu hongbin...@huawei.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: 07/16/2015 11:47 AM



Subject: Re: [openstack-dev] [magnum] Magnum template manage use
platform VS others as a type?



Kai,

Sorry for the confusion. To clarify, I was thinking how to name the
field you proposed in baymodel [1]. I prefer to drop it and use the
existing field ‘flavor’ to map the Heat template.

[1] https://review.openstack.org/#/c/198984/6

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: July-15-15 10:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use
platform VS others as a type?


Hi HongBin,

I think flavors introduces more confusion than nova_instance_type
or instance_type.


As flavors not have binding with 'vm' or 'baremetal',

Let me summary the initial question:
 We have two kinds of templates for kubernetes now,
(as templates in heat not flexible like programming language, if
else etc. And separate templates are easy to maintain)
The two kinds of kubernets templates,  One for boot VM, another
boot Baremetal. 'VM' or Baremetal here is just used for heat
template selection.


1 If used flavor, it is nova specific concept: take two as
example,
   m1.small, or m1.middle.
  m1.small  'VM' m1.middle  'VM'
  Both m1.small and m1.middle can be used in 'VM'
environment.
So we should not use m1.small as a template identification. That's
why I think flavor not good to be used.


2 @Adrian, we have --flavor-id field for baymodel now, it would
picked up by heat-templates, and boot instances with such flavor.


3 Finally, I think instance_type is better.  instance_type can be
used as heat templates identification parameter.

instance_type = 'vm', it means such templates fit for normal 'VM'
heat stack deploy

instance_type = 'baremetal', it means such templates fit for ironic
baremetal heat stack deploy.





Thanks!


Best Wishes,



Kai Qiang Wu (吴开强  Kennan

[openstack-dev] [magnum] Magnum template manage use platform VS others as a type?

2015-07-14 Thread Kai Qiang Wu

Hi Magnum Guys,


I want to raise this question through ML.


In this patch https://review.openstack.org/#/c/200401/


For some old historical reason, we use platform to indicate 'vm' or
'baremetal'.
This does not seem proper. @Adrian proposed nova_instance_type, and some
prefer other names; let me summarize as below:


1. nova_instance_type  2 votes

2. instance_type 2 votes

3. others (1 vote, but no name proposed)


Let's try to reach agreement ASAP. I think counting the final vote winner
as the proper name is the best solution (considering community diversity).


BTW, if you do not propose any better name and just vote to disagree with all,
I think that vote is not valid and not helpful for solving the issue.


Please help to vote for that name.


Thanks




Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] [nova] Magnum template manage use platform VS others as a type?

2015-07-16 Thread Kai Qiang Wu
Hi Fox and Adrian,

Let me summarize:

1) We all agree to replace the word 'platform' with 'server_type' (let's not
discuss this any more).


2) For bay creation in Magnum now,

we pass it to the baymodel.

Cloud operators create lots of baymodels, for example
kubernetes-vm-baymodel, kubernetes-baremetal-baymodel.

Cloud users just select the kind of baymodel they like (or they can
create one themselves, depending on the policy files).

For example,
magnum baymodel-create --name k8sbaymodel \
   --image-id fedora-21-atomic-3 \
   --keypair-id testkey \
   --external-network-id ${NIC_ID} \
   --dns-nameserver 8.8.8.8 \
   --flavor-id m1.small \
   --docker-volume-size 5 \
   --coe kubernetes


One question: if a user wants to create a kubernetes-baremetal baymodel, they
should input a flavor-id with a baremetal flavor.

Magnum template selection now:
baymodel = conductor_utils.retrieve_baymodel(context, bay)
cluster_distro = baymodel.cluster_distro
cluster_coe = baymodel.coe
definition = TDef.get_template_definition('vm', cluster_distro,
  cluster_coe)

You can see that 'vm' is hardcoded now, since the Ironic templates were not
fully supported before. As I introduce Ironic template management, my first
thought is that 'vm' here should become baymodel.server_type (see the sketch
below).
So I propose creating the baymodel with one extra parameter, --server_type
baremetal (defaulting to 'vm' if the user does not specify it).
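
[Sketch of the proposed change, reusing the names from the snippet above;
server_type defaulting to 'vm' is the proposal, not merged code.]

baymodel = conductor_utils.retrieve_baymodel(context, bay)
definition = TDef.get_template_definition(
    baymodel.server_type or 'vm',   # 'vm' or 'baremetal', from the baymodel
    baymodel.cluster_distro,
    baymodel.coe)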


This solution is simple, and I don't think it confuses cloud users.
For example, if users want to deploy baremetal, they need to specify a
baremetal flavor. If they don't know which flavor is baremetal, how can they
boot baremetal instances?
If they know they used a baremetal flavor, they also know the server_type is
'baremetal', not 'vm'. It does not seem complicated.


The current Nova support for Ironic needs customized flavors with some
metadata input. I don't think you can successfully boot a baremetal instance
with the m1.small flavor, as Nova scheduling would consider it invalid.



3) For your proposal to use the Nova flavor:

definition = TDef.get_template_definition('vm', cluster_distro,
  cluster_coe)

Replace 'vm' with:
   if baymodel.flavor.metadata['***']:
       server_type = 'baremetal'
   else:
       server_type = 'vm'

definition = TDef.get_template_definition(server_type, cluster_distro,
  cluster_coe)

I think this seems not stable, because a 'vm' flavor can also have metadata.
Do 'baremetal' flavors have consistent, officially released metadata tagging
(applying to x86, ARM and Power architectures alike)?

I am open to using flavors to detect 'baremetal' or 'vm', if the metadata has
consistent, reliable fields.
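
[Editorial sketch of the flavor-metadata detection in option 3; the
'magnum_server_type' key is only an assumption borrowed from later in this
thread, and flavor.get_keys() is assumed to be the novaclient call that
returns extra_specs.]

def server_type_from_flavor(flavor, default='vm'):
    """Derive 'vm' or 'baremetal' from a Nova flavor's extra_specs."""
    extra_specs = flavor.get_keys() or {}
    return extra_specs.get('magnum_server_type', default)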



Thanks!

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Fox, Kevin M kevin@pnnl.gov
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   07/17/2015 08:30 AM
Subject:Re: [openstack-dev] [magnum] [nova] Magnum template manage use
platform VS others as a type?



Adrian,

I know for sure you can attach key=value metadata to the flavors. I just
looked up in the admin guide, (
http://docs.openstack.org/admin-guide-cloud/content/customize-flavors.html)
and it mentions that the extra_specs key=value pairs are just used for
scheduling though. :/

So, Nova would have to be extended to support a non-scheduled type of
metadata (that could be useful for other things too...), but that doesn't
seem to exist today.

One other possibility would be, if a nova scheduling filter can remove
things from the extra_specs metadata before it hits the next plugin, we
could slide in a MagnumFilter at the beginning of scheduler_default_filters
that removes the magnum_server_type entry. Looking through the code, I
think it would work, but it makes me feel a little dirty too. I've attached an
example that might be close to working for that... Maybe the Nova folks
would like to weigh in on whether it's a good plan or not?

But, if the filter doesn't fly, then for Liberty it looks like your config
option plan seems to be the best way to go.

I like the plan, especially the default flavor/image. That should make it
much easier to use if the user trusts what the admin setup for them. Nice
and easy. :)

Thanks,
Kevin

class MagnumFilter(filters.BaseHostFilter):
  def host_passes(self, host_state
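
[The attached example above is truncated in the archive; a minimal sketch of
the same idea follows. The Kilo/Liberty-era host_passes(host_state,
filter_properties) signature and the 'magnum_server_type' key name are
assumptions.]

from nova.scheduler import filters


class MagnumFilter(filters.BaseHostFilter):
    """Pass every host, but strip the Magnum-only extra_specs entry so the
    remaining scheduler filters never see it."""

    def host_passes(self, host_state, filter_properties):
        instance_type = filter_properties.get('instance_type') or {}
        extra_specs = instance_type.get('extra_specs') or {}
        extra_specs.pop('magnum_server_type', None)
        return True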

Re: [openstack-dev] [magnum]

2015-07-19 Thread Kai Qiang Wu
My thoughts:

I think we'd better check what Google will do after such an official
announcement. The community changes fast, and we'd really welcome someone
contributing consistently and actively.



Thanks


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Daneyon Hansen (danehans) daneh...@cisco.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   07/18/2015 12:46 AM
Subject:[openstack-dev]  [magnum]



All,

Does anyone have insight into Google's plans for contributing to containers
within OpenStack?

http://googlecloudplatform.blogspot.tw/2015/07/Containers-Private-Cloud-Google-Sponsors-OpenStack-Foundation.html

Regards,
Daneyon Hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal withHyper

2015-07-19 Thread Kai Qiang Wu
Hi Peng,

As @Adrian pointed out:

"My first suggestion is to find a way to make a nova virt driver for Hyper,
which could allow it to be used with all of our current Bay types in
Magnum."


I remember you or other folks in your company proposed a BP for a nova
virt driver for Hyper. What's the status of that BP now?
Was it accepted by the Nova project, or cancelled?


Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Adrian Otto adrian.o...@rackspace.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   07/19/2015 11:18 PM
Subject:Re: [openstack-dev] [magnum][bp] Power Magnum to run on
metal   withHyper



Peng,

You are not the first to think this way, and it's one of the reasons we did
not integrate containers with OpenStack in a meaningful way a full year
earlier. Please pay close attention.

1) OpenStack's key influencers care about two personas: 1.1) Cloud Operators
1.2) Cloud Consumers. If you only think in terms of 1.2, then your idea
will get killed. Operators matter.

2) Cloud Operators need a consistent way to bill for the IaaS services they
provide. Nova emits all of the RPC messages needed to do this. Having a
second nova that does this slightly differently is a really annoying
problem that will make Operators hate the software. It's better to use
nova, have things work consistently, and plug in virt drivers to it.

3) Creation of a host is only part of the problem. That's the easy part.
Nova also does a bunch of other things too. For example, say you want to
live migrate a guest from one host to another. There is already
functionality in Nova for doing that.

4) Resources need to be capacity managed. We call this scheduling. Nova has
a pluggable scheduler to help with the placement of guests on hosts. Magnum
will not.

5) Hosts in a cloud need to integrate with a number of other services, such
as an image service, messaging, networking, storage, etc. If you only think
in terms of host creation, and do something without nova, then you need to
re-integrate with all of these things.

Now, I probably left out examples of lots of other things that Nova does.
What I have mentioned is enough to make my point that there are a lot of
things that Magnum is intentionally NOT doing that we expect to get from
Nova, and I will block all code that gratuitously duplicates functionality
that I believe belongs in Nova. I promised our community I would not
duplicate existing functionality without a very good reason, and I will
keep that promise.

Let's find a good way to fit Hyper with OpenStack in a way that best
leverages what exists today, and is least likely to be rejected. Please
note that the proposal needs to be changed from where it is today to
achieve this fit.

My first suggestion is to find a way to make a nova virt driver for Hyper,
which could allow it to be used with all of our current Bay types in
Magnum.

Thanks,

Adrian


 Original message 
From: Peng Zhao p...@hyper.sh
Date: 07/19/2015 5:36 AM (GMT-08:00)
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal
withHyper

Thanks Jay.

Hongbin, yes, it will be a scheduling system, either swarm, k8s or mesos. I
just think bay isn't a must in this case, and we don't need nova to
provision BM hosts, which makes things more complicated imo.

Peng


-- Original --
From:  Jay Lau jay.lau@gmail.com;
Date:  Sun, Jul 19, 2015 10:36 AM
To:  OpenStack Development Mailing List (not for usage
questions) openstack-dev@lists.openstack.org;
Subject:  Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal
withHyper

Hong Bin,

I have had some online discussion with Peng; it seems Hyper is now integrating
with Kubernetes and also has a plan to integrate with Mesos for scheduling.
Once the Mesos integration is finished, we can treat mesos+hyper as another
kind of bay.

Thanks

2015-07-19 4:15 GMT+08:00 Hongbin Lu hongbin...@huawei.com:
  Peng,





  Several questions here. You mentioned that HyperStack is a single big
  “bay”. Then, who is doing the multi-host scheduling, Hyper or something
  else? Were you suggesting integrating Hyper with Magnum directly? Or were
  you suggesting integrating Hyper with Magnum indirectly (i.e. through
  k8s, mesos and/or Nova)?





  Best regards,


  Hongbin





  From: Peng Zhao [mailto:p...@hyper.sh]
  Sent: July-17-15 12:34 PM
  To: OpenStack

Re: [openstack-dev] [Magnum] Magnum Quick-Start: Need clarification on Kubernetes/Redis example

2015-07-13 Thread Kai Qiang Wu
Hi Dane,

I have not tried redis-cli recently, but it seems to be an issue with the
redis-cli example. Did you check with the Kubernetes folks?

At the same time, we could discuss this in the IRC channel
#openstack-containers, as some people there are familiar with those templates
and have changed them before.


Thanks



Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Dane Leblanc (leblancd) lebla...@cisco.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   07/14/2015 12:59 AM
Subject:[openstack-dev] [Magnum] Magnum Quick-Start: Need clarification
on Kubernetes/Redis example



Does anyone have recent experience getting the Kubernetes/Redis example to
work in the Magnum developer Quick-Start guide?:

https://github.com/openstack/magnum/blob/master/doc/source/dev/dev-quickstart.rst#exercising-the-services-using-devstack

I can get everything in the Kubernetes/Redis example to work except for the
last step. Here’s what the quick-start guide says for this step:
“Now log into one of the other container hosts and access a redis slave
from there:
ssh minion@$(nova list | grep 10.0.0.4 | awk '{print $13}')
REDIS_ID=$(docker ps | grep redis:v1 | grep k8s_redis | tail -n +2 | awk
'{print $1}')
docker exec -i -t $REDIS_ID redis-cli

127.0.0.1:6379 get replication:test
true
^D

exit

There are four redis instances, one master and three slaves, running across
the bay, replicating data between one another.”

What I’m seeing is a bit different:
  (1) I have to use ‘sudo docker’ instead of ‘docker’. (No big deal.)
  (2) I see one master redis instance on one minion and one slave
  redis instance on a second minion (each has its own associated
  sentinel container as expected).
  (3) The redis-cli command times out with “Could not connect to
  Redis at 127.0.0.1:6379: Connection refused”. HOWEVER, if I add a
  host IP and port for the redis master minion (“-h 10.100.84.2 -p 6379”),
  the example works.

Here is the failing case, without the host/port arguments:

[minion@k8-4gmqfvntvm-0-6fymzzw3wrjx-kube-minion-zjdejo5sffxv ~]$ REDIS_ID=
$(sudo docker ps | grep redis:v1 | grep k8s_redis | awk '{print $1}')
[minion@k8-4gmqfvntvm-0-6fymzzw3wrjx-kube-minion-zjdejo5sffxv ~]$ sudo
docker exec -i -t $REDIS_ID redis-cli
Could not connect to Redis at 127.0.0.1:6379: Connection refused
not connected
[minion@k8-4gmqfvntvm-0-6fymzzw3wrjx-kube-minion-zjdejo5sffxv ~]$

And here is the working case, using “-h 10.100.84.2 -p 6379“:

[minion@k8-4gmqfvntvm-0-6fymzzw3wrjx-kube-minion-zjdejo5sffxv ~]$ REDIS_ID=
$(sudo docker ps | grep redis:v1 | grep k8s_redis | awk '{print $1}')
[minion@k8-4gmqfvntvm-0-6fymzzw3wrjx-kube-minion-zjdejo5sffxv ~]$ sudo
docker exec -i -t $REDIS_ID redis-cli -h 10.100.84.2 -p 6379
10.100.84.2:6379 get replication:test
true
10.100.84.2:6379

Note that I determined the ’10.100.84.2’ address for the redis master by
running the following on the master minion:

[minion@k8-4gmqfvntvm-1-6pnrx2hnxa3d-kube-minion-bh6nynhayhfy ~]$ sudo
docker exec -i -t $REDIS_ID ip addr show dev eth0
5: eth0: BROADCAST,UP,LOWER_UP mtu 1472 qdisc noqueue state UP
link/ether 02:42:0a:64:54:02 brd ff:ff:ff:ff:ff:ff
inet 10.100.84.2/24 scope global eth0
   valid_lft forever preferred_lft forever
inet6 fe80::42:aff:fe64:5402/64 scope link
   valid_lft forever preferred_lft forever
[minion@k8-4gmqfvntvm-1-6pnrx2hnxa3d-kube-minion-bh6nynhayhfy ~]$

So I’m looking for confirmation as to whether or not using the “-h
10.100.84.2 -p 6379“ arguments is the right way to test this configuration?
Is this a successful test?

Thanks,
Dane
 __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][kolla] Stepping down as a Magnum core reviewer

2015-10-26 Thread Kai Qiang Wu
Hi Stdake,

You really did a fantastic job on Magnum, and your bright ideas and
thoughts helped Magnum grow. It was sad at first to hear about your stepping
down from Magnum core; however, after reading your message, I think you are
following your heart. I wish you new success in more areas (including Kolla
and many new upcoming projects :).



I want to thank you for your help while I have been in Magnum. Thanks very
much :)


Wishing you a bigger and brighter future! Looking forward to receiving any
thoughts you have on Magnum.



Thanks


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   "Steven Dake (stdake)" <std...@cisco.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date:   26/10/2015 09:22 am
Subject:[openstack-dev] [magnum][kolla] Stepping down as a Magnum core
reviewer



Hey folks,

It is with sadness that I find myself in the situation of having to write
this message.  I have the privilege of being involved in two of the most
successful and growing projects (Magnum, Kolla) in OpenStack.  I chose
getting involved in two major initiatives on purpose, to see if I could do
the job; to see if I could deliver two major initiatives at the same time.
I also wanted it to be a length of time that was significant - 1+ year.  I
found indeed I was able to deliver both Magnum and Kolla; however, the
impact on my personal life has not been ideal.

The Magnum engineering team is truly a world class example of how an Open
Source project should be constructed and organized.  I hope some young
academic writes a case study on it some day but until then, my gratitude to
the Magnum core reviewer team is warranted by the level of  their sheer
commitment.

I am officially focusing all of my energy on Kolla going forward.  The
Kolla core team elected me as PTL (or more accurately didn’t elect anyone
else;) and I really want to be effective for them, especially in what I
feel is Kolla’s most critical phase of growth.

I will continue to fight  for engineering resources for Magnum internally
in Cisco.  Some of these have born fruit already including the Heat
resources, the Horizon plugin, and of course the Networking plugin system.
I will also continue to support Magnum from a resources POV where I can do
so (like the fedora image storage for example).  What I won’t be doing is
reviewing Magnum code (serving as a gate), or likely making much technical
contribution to Magnum in the future.  On the plus side I’ve replaced
myself with many many more engineers from Cisco who should be much more
productive combined then I could have been alone ;)

Just to be clear, I am not abandoning Magnum because I dislike the people
or the technology.  I think the people are fantastic! And the technology -
well, I helped design the entire architecture!  I am letting Magnum grow up
without me as I have other children that need more direct attention.  I
think this viewpoint shows trust in the core reviewer team, but feel free
to make your own judgements ;)

Finally I want to thank Perry Myers for influencing me to excel at multiple
disciplines at once.  Without Perry as a role model, Magnum may have never
happened (or would certainly be much different than it is today). Being a
solid hybrid engineer has a long ramp up time and is really difficult, but
also very rewarding.  The community has Perry to blame for that ;)

Regards
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum][Testing] Reduce Functional testing ongate.

2015-11-12 Thread Kai Qiang Wu
Right now, it seems we cannot reduce the devstack runtime. And @Ton, yes, the
image download time seems OK in the Jenkins job; it takes about 4-5 minutes.

But bay-creation time is an interesting topic; it seems related to Heat or VM
setup time, and needs some investigation.



Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   "Ton Ngo" <t...@us.ibm.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date:   13/11/2015 01:13 pm
Subject:Re: [openstack-dev] [Magnum][Testing] Reduce Functional testing
on  gate.



Thanks Eli for the analysis. I notice that the time to download the image
is only around 1:15 mins out of some 21 mins to set up devstack. So it
seems trying to reduce the size of the image won't make a significant
improvement in the devstack time. I wonder how the image size affects the
VM creation time for the cluster. If we can look at the Heat event stream,
we might get an idea.
Ton,



From: Egor Guz <e...@walmartlabs.com>
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Date: 11/12/2015 05:25 PM
Subject: Re: [openstack-dev] [Magnum][Testing] Reduce Functional testing on
gate.



Eli,

First of all I would like to say thank you for your effort (I have never seen
so many patch sets ;)), but I don’t think we should remove “tls_disabled=True”
tests from the gates now (maybe in L).
It’s still a very commonly used feature and a backup plan if TLS doesn’t work
for some reason.

I think it’s a good idea to group tests per pipeline; we should definitely
follow it.

―
Egor

From: "Qiao,Liyong" <liyong.q...@intel.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: Wednesday, November 11, 2015 at 23:02
To: "openstack-dev@lists.openstack.org"
<openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [Magnum][Testing] Reduce Functional testing on
gate.

Hello all:

I will give an update on the Magnum functional testing status.
Functional/integration testing is important to us: since we change/modify the
Heat templates rapidly, we need to verify the modifications are correct, so we
need to cover all the templates Magnum has. Currently we only have k8s testing
(only tested with the Atomic image); we need to add more, like swarm (WIP) and
mesos (planned), and we may also need to support the COS image. Lots of work
needs to be done.

Regarding the functional testing time cost: we discussed this during the
Tokyo summit, and Adrian expected that we can reduce the time cost to 20 min.

I did some analysis of the functional/integration testing in the gate
pipeline. The stages are as follows; taking the k8s functional testing as an
example, we run the following test cases:

1) baymodel creation
2) bay (tls_disabled=True) creation/deletion
3) bay (tls_disabled=False) creation, to test the k8s API, deleting it after
testing.

For each stage, the time cost is as follows:

 *   devstack prepare: 5-6 mins
 *   Running devstack: 15 mins (including downloading the Atomic image)
 *   1) and 2): 15 mins
 *   3): 15 + 3 mins

In total about 60 mins; a current example is 1h 05m 57s, see
http://logs.openstack.org/10/243910/1/check/gate-functional-dsvm-magnum-k8s/5e61039/console.html

for all time stamps.

I don't think it is possible to reduce the time to 20 mins, since the devstack
setup already takes 20 mins.

To reduce time, I suggest creating only 1 bay per pipeline and doing various
kinds of testing on that bay; if we want to test some specific bay (for
example, with a particular network_driver, etc.), we create a new pipeline.

So, I think we can delete 2), since 3) does similar things (create/delete);
the difference is that 3) uses tls_disabled=False. What do you think?
See https://review.openstack.org/244378 for the time cost; it reduces it
to 45 min (48m 50s in the example).

=
For other related functional testing work:
I've done the split of functional testing per COE; we have pipelines as:

 *   gate-functional-dsvm-magnum-api 30 min

Re: [openstack-dev] [Magnum] [RFC] split pip line of functional testing

2015-11-03 Thread Kai Qiang Wu
Hi eliqiao,

1) I think many OpenStack projects have constructed multiple pipelines for
different purposes, for example pipelines to test different OS distros.
It is good to refer to them.

2) If we construct new envs for the different COEs, I think it is easy to
maintain.

3) Yes, for the code restructure, sorting the different tests is a good idea:

functional/swarm
functional/mesos
functional/k8s
functional/common




Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   "Qiao,Liyong" <liyong.q...@intel.com>
To: openstack-dev@lists.openstack.org, "Qiao, Liyong"
<liyong.q...@intel.com>
Date:   03/11/2015 06:13 pm
Subject:[openstack-dev] [Magnum] [RFC] split pip line of functional
testing



Hi Magnum hackers:

Currently there is a pipeline in project-config to do Magnum functional
testing [1].

At the summit, we discussed that we need to split it per COE [2]. We can do
this by adding a new pipeline for testing:
- '{pipeline}-functional-dsvm-magnum{coe}{job-suffix}':
coe could be swarm/mesos/k8s,
and then pass the COE in our post_test_hook.sh [3]. Is this a good idea?
I still have other questions that need to be addressed before splitting the
functional testing per COE:
1. How can we pass the COE parameter to tox in [4], or should we add some new
envs like [testenv:functional-swarm], [testenv:functional-k8s], etc.? Is that
a bad idea?
2. Also, there are some common test cases; should we run them for all COEs?
(I think so.) But how should we construct the source code tree?

/functional/swarm
/functional/k8s
/functional/common ..


[1]
https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/projects.yaml#L2288

[2]https://etherpad.openstack.org/p/mitaka-magnum-functional-testing
[3]
https://github.com/openstack/magnum/blob/master/magnum/tests/contrib/post_test_hook.sh#L100

[4]https://github.com/openstack/magnum/blob/master/tox.ini#L19
--
BR, Eli (Li Yong) Qiao
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] devstack extras.d support going away at M1 - your jobs may break if you rely on it in your dsvm jobs

2015-10-07 Thread Kai Qiang Wu
Hi Sean,


Do you mean that all other projects, like Barbican (with its non-standard
copy/paste implementation), will break in devstack?




Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Sean Dague <s...@dague.net>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date:   07/10/2015 07:13 pm
Subject:[openstack-dev] [all] devstack extras.d support going away at
M1 - your jobs may break if you rely on it in your dsvm jobs



Before we had devstack plugins, we had a kind of janky extras.d
mechanism. A bunch of projects implemented some odd copy / paste
mechanism in test jobs to use that in unexpected / unsupported ways.

We've had devstack plugins for about 10 months. They provide a very "pro
user" experience by letting you enable arbitrary plugins with:

enable_plugin $name git://git.openstack.org/openstack/$project [$branch]

They have reasonable documentation here
http://docs.openstack.org/developer/devstack/plugins.html

We're now getting to the point where some projects like Magnum are
getting into trouble trying to build jobs with projects like Barbican,
because Magnum uses devstack plugins, and Barbican has some odd non
plugin copy paste method. Building composite test jobs are thus really
wonky.

This is a heads up that at Mitaka 1 milestone the extras.d support will
be removed. The copy / paste method was never supported, and now it will
explicitly break. Now would be a great time for teams to prioritize
getting to the real plugin architecture.

 -Sean

--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Tom Cammann for core

2015-07-10 Thread Kai Qiang Wu
+1 from me.  Good review comments!  @Tom




Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Adrian Otto adrian.o...@rackspace.com
To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org
Date:   07/10/2015 10:25 AM
Subject:[openstack-dev] [magnum] Tom Cammann for core



Team,

Tom Cammann (tcammann) has become a valued Magnum contributor, and
consistent reviewer helping us to shape the direction and quality of our
new contributions. I nominate Tom to join the magnum-core team as our
newest core reviewer. Please respond with a +1 vote if you agree.
Alternatively, vote -1 to disagree, and include your rationale for
consideration.

Thanks,

Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Adding 'server_type' field to Baymodel

2015-08-31 Thread Kai Qiang Wu
HI Vikas,

The server_type field (the old name was platform; we corrected it to the more
proper 'server_type') was introduced for wider support, like the baremetal
case, etc.

In other parts it is hardcoded to 'vm' now, as 'vm' was and still is
supported (it is widely used for VM cases in dev/test).


For baremetal, I did some work before, but considering time and priority, it
is not done yet. Details can be checked in earlier reviews like

review.openstack.org/#/c/198984



If you have further questions, you can check with me in the IRC channel.



Thanks



Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Vikas Choudhary <choudharyvika...@gmail.com>
To: openstack-dev@lists.openstack.org
Date:   01/09/2015 11:50 am
Subject:[openstack-dev] Adding 'server_type' field to Baymodel



Hi Team,

I have a doubt: what values can the 'server_type' field used in the function
call below have?
get_template_definition(cls, server_type, os, coe)

Everywhere in the code its value is currently only 'vm', for example in the
classes representing template definitions (mesos, k8s and swarm):
class AtomicSwarmTemplateDefinition(BaseTemplateDefinition):
    provides = [
    {'server_type': 'vm', 'os': 'fedora-atomic', 'coe': 'swarm'},
    ]

os and coe are already fields of baymodel. Should 'server_type' not also be
another field in baymodel?

From what I understand, server_type can at least also be a baremetal node.
Please correct me if I am wrong.


Regards
Vikas Choudhary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]storage for docker-bootstrap

2015-12-07 Thread Kai Qiang Wu
HI Hua,

From my point of view, not everything needs to be put in a container. Let's
make the simple initial approach work first and then discuss other options,
if needed, in IRC or the weekly meeting.


Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   王华 <wanghua.hum...@gmail.com>
To: Egor Guz <e...@walmartlabs.com>
Cc: "openstack-dev@lists.openstack.org"
<openstack-dev@lists.openstack.org>
Date:   07/12/2015 10:10 am
Subject:Re: [openstack-dev] [magnum]storage for docker-bootstrap



Hi all,

If we want to run etcd and flannel in containers, we will need to
introduce docker-bootstrap, which makes the setup more complex, as Egor
pointed out. Is that price worth paying?

On Sat, Nov 28, 2015 at 8:45 AM, Egor Guz <e...@walmartlabs.com> wrote:
  Wanghua,

  I don’t think moving flannel into a container is a good idea. This setup is
  great for a dev environment, but it becomes too complex from an operator's
  point of view (you add an extra Docker daemon and need an extra Cinder
  volume for that daemon; also
  keep in mind it makes sense to keep the etcd data folder on Cinder storage
  as well, because etcd is a database). flannel is just three files without
  extra dependencies and it’s much easier to download it during cloud-init ;)

  I agree that building Fedora Atomic images is painful, but instead of
  simplifying this process we should switch to other, more “friendly”
  images (e.g. Fedora/CentOS/Ubuntu) which we can easily build with a disk
  image builder.
  Also, we can fix the CoreOS template (I believe people ask about it more
  than Atomic), but we may face issues similar to Atomic's when we try to
  integrate non-CoreOS products (e.g. Calico or Weave).

  —
  Egor

  From: 王华 <wanghua.hum...@gmail.com<mailto:wanghua.hum...@gmail.com>>
  Reply-To: "OpenStack Development Mailing List (not for usage questions)"
  <openstack-dev@lists.openstack.org>
  Date: Thursday, November 26, 2015 at 00:15
  To: "OpenStack Development Mailing List (not for usage questions)" <
  openstack-dev@lists.openstack.org>
  Subject: Re: [openstack-dev] [magnum]storage for docker-bootstrap

  Hi Hongbin,

  The docker daemon on the master node stores data
  in /dev/mapper/atomicos-docker--data and metadata
  in /dev/mapper/atomicos-docker--meta; both are logical volumes. The docker
  daemon on the minion node stores data in the Cinder volume,
  and /dev/mapper/atomicos-docker--data
  and /dev/mapper/atomicos-docker--meta are not used. If we want to
  leverage a Cinder volume for docker on the master, should we
  drop /dev/mapper/atomicos-docker--data
  and /dev/mapper/atomicos-docker--meta? I think it is not necessary to
  allocate a Cinder volume. It is enough to allocate two logical volumes for
  docker, because only etcd, flannel and k8s run in that docker daemon, and
  they do not need a large amount of storage.

  Best regards,
  Wanghua

  On Thu, Nov 26, 2015 at 12:40 AM, Hongbin Lu <hongbin...@huawei.com
  <mailto:hongbin...@huawei.com>> wrote:
  Here is a bit more context.

  Currently, in the k8s and swarm bays, some required binaries (i.e. etcd and
  flannel) are built into the image and run on the host. We are exploring the
  possibility of containerizing some of these system components. The
  rationales are (i) it is infeasible to build custom packages into an
  atomic image and (ii) it is infeasible to upgrade an individual component.
  For example, if there is a bug in the current version of flannel and we know
  the bug was fixed in the next version, we need to upgrade flannel by
  building a new image, which is a tedious process.

  To containerize flannel, we need a second docker daemon, called
  docker-bootstrap [1]. In this setup, pods run on the main docker
  daemon, and flannel and etcd run on the second docker daemon. The
  reason is that flannel needs to manage the network of the main docker
  daemon, so it has to run on a separate daemon.

  Daneyon, I think it requires separate storage because it needs to run a
  separate docker daemon (unless there is a way to make two docker daemons
  share the same storage).

  Wanghua, is it possible to leverage a Cinder volume for that? Leveraging
  external storage is always preferred [2].

  [1]
  
http://kubernetes.io/v1.1/docs/getting-started-guides/docker-multinode.html#bootstrap-docker

  [2] http://www.projectatomic.io/docs/docker-storage-recommendation/

  Best regards,
  Hongbin

  From: Daneyon Hansen (danehan

[openstack-dev] [Infra][nova][magnum] Jenkins failed quite often for "Cannot set up guest memory 'pc.ram': Cannot allocate memory"

2015-12-09 Thread Kai Qiang Wu


Hi All,

I am not sure what has changed recently, but we now see Jenkins fail quite
often with:


http://logs.openstack.org/07/244907/5/check/gate-functional-dsvm-magnum-k8s/5305d7a/logs/libvirt/libvirtd.txt.gz

2015-12-09 08:52:27.892+: 22957: debug :
qemuMonitorJSONCommandWithFd:264 : Send command
'{"execute":"qmp_capabilities","id":"libvirt-1"}' for write with FD -1
2015-12-09 08:52:27.892+: 22957: debug : qemuMonitorSend:959 :
QEMU_MONITOR_SEND_MSG: mon=0x7fa66400c6f0 msg=
{"execute":"qmp_capabilities","id":"libvirt-1"}
 fd=-1
2015-12-09 08:52:27.941+: 22951: debug : virNetlinkEventCallback:347 :
dispatching to max 0 clients, called from event watch 6
2015-12-09 08:52:27.941+: 22951: debug : virNetlinkEventCallback:360 :
event not handled.
2015-12-09 08:52:27.941+: 22951: debug : virNetlinkEventCallback:347 :
dispatching to max 0 clients, called from event watch 6
2015-12-09 08:52:27.941+: 22951: debug : virNetlinkEventCallback:360 :
event not handled.
2015-12-09 08:52:27.941+: 22951: debug : virNetlinkEventCallback:347 :
dispatching to max 0 clients, called from event watch 6
2015-12-09 08:52:27.941+: 22951: debug : virNetlinkEventCallback:360 :
event not handled.
2015-12-09 08:52:28.070+: 22951: error : qemuMonitorIORead:554 : Unable
to read from monitor: Connection reset by peer
2015-12-09 08:52:28.070+: 22951: error : qemuMonitorIO:690 : internal
error: early end of file from monitor: possible problem:
Cannot set up guest memory 'pc.ram': Cannot allocate memory




We did not hit such a resource issue before.  I am not sure whether the Infra
or Nova folks know about it?


Thanks

Best Wishes,
------------
Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Removing pod, rcs and service APIs

2015-12-16 Thread Kai Qiang Wu
Hi Adrian,

Right now, I think:

The unify-COE-container actions bp needs more discussion and a good design to
make it happen (I think a spec is needed for this).
As for the deprecation of the k8s-related objects, it needs a backup plan
rather than directly dropping them, especially while we do not yet have any
spec or design for the unify-COE-container bp.


Right now the work mostly happens on the UI side; for the UI we can discuss
whether to implement those views or not (instead of directly dropping the API
part before a consistent design for the unify-COE-container actions bp has
come out).


Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Adrian Otto <adrian.o...@rackspace.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date:   17/12/2015 07:00 am
Subject:Re: [openstack-dev] [magnum] Removing pod, rcs and service APIs



Tom,

> On Dec 16, 2015, at 9:31 AM, Cammann, Tom <tom.camm...@hpe.com> wrote:
>
> I don’t see a benefit from supporting the old API through a microversion
> when the same functionality will be available through the native API.

+1

[snip]

> Have we had any discussion on adding a v2 API and what changes (beyond
> removing pod, rc, service) we would include in that change. What sort of
> timeframe would we expect to remove the v1 API. I would like to move to a

> v2 in this cycle, then we can think about removing v1 in N.

Yes, when we drop functionality from the API that’s a contract breaking
change, and requires a new API major version. We can drop the v1 API in N
if we set expectations in advance. I’d want that plan to be supported with
some evidence that maintaining the v1 API was burdensome in some way.
Because adoption is limited, deprecation of v1 is not likely to be a
contentious issue.

Adrian

>
> Tom
>
>
>
> On 16/12/2015, 15:57, "Hongbin Lu" <hongbin...@huawei.com> wrote:
>
>> Hi Tom,
>>
>> If I remember correctly, the decision is to drop the COE-specific API
>> (Pod, Service, Replication Controller) in the next API version. I think
a
>> good way to do that is to put a deprecated warning in current API
version
>> (v1) for the removed resources, and remove them in the next API version
>> (v2).
>>
>> An alternative is to drop them in current API version. If we decide to
do
>> that, we need to bump the micro-version [1], and ask users to specify
the
>> microversion as part of the requests when they want the removed APIs.
>>
>> [1]
>>
http://docs.openstack.org/developer/nova/api_microversions.html#removing-a
>> n-api-method
>>
>> Best regards,
>> Hongbin
>>
>> -Original Message-
>> From: Cammann, Tom [mailto:tom.camm...@hpe.com]
>> Sent: December-16-15 8:21 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: [openstack-dev] [magnum] Removing pod, rcs and service APIs
>>
>> I have been noticing a fair amount of redundant work going on in magnum,

>> python-magnumclient and magnum-ui with regards to APIs we have been
>> intending to drop support for. During the Tokyo summit it was decided
>> that we should support for only COE APIs that all COEs can support which

>> means dropping support for Kubernetes specific APIs for Pod, Service and

>> Replication Controller.
>>
>> Egor has submitted a blueprint[1] “Unify container actions between all
>> COEs” which has been approved to cover this work and I have submitted
the
>> first of many patches that will be needed to unify the APIs.
>>
>> The controversial patches are here:
>> https://review.openstack.org/#/c/258485/ and
>> https://review.openstack.org/#/c/258454/
>>
>> These patches are more a forcing function for our team to decide how to
>> correctly deprecate these APIs as I mention there is a lot of redundant
>> work going on these APIs. Please let me know your thoughts.
>>
>> Tom
>>
>> [1] https://blueprints.launchpad.net/magnum/+spec/unified-containers
>>
__
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
openstack-dev-requ...@lists.op

Re: [openstack-dev] [Infra][nova][magnum] Jenkins failed quite often for "Cannot set up guest memory 'pc.ram': Cannot allocate memory"

2015-12-15 Thread Kai Qiang Wu
Thanks to Clark and the infra folks for working around that.
We will keep track of it and see whether the issue disappears.



Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Clark Boylan <cboy...@sapwetik.org>
To: openstack-dev@lists.openstack.org
Date:   16/12/2015 06:42 am
Subject:Re: [openstack-dev] [Infra][nova][magnum] Jenkins failed quite
often for "Cannot set up guest memory 'pc.ram': Cannot allocate
memory"



On Sun, Dec 13, 2015, at 10:51 AM, Clark Boylan wrote:
> On Sat, Dec 12, 2015, at 02:16 PM, Hongbin Lu wrote:
> > Hi,
> >
> > As Kai Qiang mentioned, magnum gate recently had a bunch of random
> > failures, which occurred on creating a nova instance with 2G of RAM.
> > According to the error message, it seems that the hypervisor tried to
> > allocate memory to the nova instance but couldn’t find enough free
memory
> > in the host. However, by adding a few “nova hypervisor-show XX” before,
> > during, and right after the test, it showed that the host has 6G of
free
> > RAM, which is far more than 2G. Here is a snapshot of the output [1].
You
> > can find the full log here [2].
> If you look at the dstat log
>
http://logs.openstack.org/07/244907/5/check/gate-functional-dsvm-magnum-k8s/5305d7a/logs/screen-dstat.txt.gz

> the host has nowhere near 6GB free memory and less than 2GB. I think you
> actually are just running out of memory.
> >
> > Another observation is that most of the failure happened on a node with
> > name “devstack-trusty-ovh-*” (You can verify it by entering a query [3]
> > at http://logstash.openstack.org/ ). It seems that the jobs will be
fine
> > if they are allocated to a node other than “ovh”.
> I have just done a quick spot check of the total memory on
> devstack-trusty hosts across HPCloud, Rackspace, and OVH using `free -m`
> and the results are 7480, 7732, and 6976 megabytes respectively. Despite
> using 8GB flavors in each case there is variation and OVH comes in on
> the low end for some reason. I am guessing that you fail here more often
> because the other hosts give you just enough extra memory to boot these
> VMs.
To follow up on this we seem to have tracked this down to how the linux
kernel restricts memory at boot when you don't have a contiguous chunk
of system memory. We have worked around this by increasing the memory
restriction to 9023M at boot which gets OVH inline with Rackspace and
slightly increases available memory on HPCloud (because it actually has
more of it).

You should see this fix in action after image builds complete tomorrow
(they start at 1400UTC ish).
>
> We will have to look into why OVH has less memory despite using flavors
> that should be roughly equivalent.
> >
> > Any hints to debug this issue further? Suggestions are greatly
> > appreciated.
> >
> > [1] http://paste.openstack.org/show/481746/
> > [2]
> >
http://logs.openstack.org/48/256748/1/check/gate-functional-dsvm-magnum-swarm/56d79c3/console.html

> > [3] https://review.openstack.org/#/c/254370/2/queries/1521237.yaml


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [magnum] Using docker container to run COE daemons

2015-11-25 Thread Kai Qiang Wu
Hi Jay,

For running the Kubernetes COE in containers, I think @Hua Wang is already
working on that.

For the swarm COE, swarm already has the master and agent running in
containers.

For mesos, there is no containerization work so far. Maybe someone has
already drafted a bp on it? I am not quite sure.



Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Jay Lau <jay.lau@gmail.com>
To: OpenStack Development Mailing List
<openstack-dev@lists.openstack.org>
Date:   26/11/2015 07:15 am
Subject:[openstack-dev] [magnum] Using docker container to run COE
daemons



Hi,

It is becoming more and more popular to run applications in docker
containers, so what about leveraging this in Magnum?

What I want to do is run all COE daemons in docker containers, because
Kubernetes, Mesos and Swarm now support running in docker containers and
there are already existing docker images/dockerfiles which we can leverage.

So what about updating all COE templates to use docker containers to run the
COE daemons, and maintaining dockerfiles for the different COEs in Magnum?
This would reduce the maintenance effort for a COE: if there is a new version
we want to upgrade to, updating the dockerfile is enough. Comments?

--
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Magnum] Why set DEFAULT_DOCKER_TIMEOUT = 10 in docker client?

2015-11-25 Thread Kai Qiang Wu
Hi,

I think all of our docker calls go through docker_for_bay, which uses

CONF.docker.default_timeout.

default_timeout can be changed to any value; users can customize it per
case.


As for why it is set to 10, maybe it was because that value worked in
someone's local tests.

For Jenkins, I think overriding the value through CONF should make it work.
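
As a rough illustration only (not Magnum source; the helper name is made up,
and it assumes the docker-py 1.x client and oslo.config used at the time), the
pattern is roughly:

import docker
from oslo_config import cfg

CONF = cfg.CONF
CONF.register_opts(
    [cfg.IntOpt('default_timeout',
                default=60,  # matches docker-py's DEFAULT_TIMEOUT_SECONDS
                help='Timeout in seconds for docker client calls.')],
    group='docker')


def docker_client_for_bay(swarm_api_url):
    # Hypothetical helper: every call made through the returned client
    # inherits the configurable timeout.
    return docker.Client(base_url=swarm_api_url,
                         timeout=CONF.docker.default_timeout)

With that, a gate job or an operator only has to set [docker]
default_timeout = 180 in magnum.conf instead of patching a constant.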




Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   "Qiao,Liyong" <liyong.q...@intel.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date:   25/11/2015 03:24 pm
Subject:[openstack-dev] [Magnum] Why set DEFAULT_DOCKER_TIMEOUT = 10 in
docker client?



Hi all,
In the Magnum code we hardcode DEFAULT_DOCKER_TIMEOUT = 10.
This brings trouble in environments with bad networking (or a poorly
performing swarm master).
At least it doesn't work on our gate.

Here is the test patch on the gate: https://review.openstack.org/249522 . I
set the value to 180 to confirm that
the failure is due to the timeout parameter passed to the docker client, but
we need to choose a suitable value.

I checked the docker client's default value,
DEFAULT_TIMEOUT_SECONDS = 60, so I wonder why we overwrite it as 10?

Please let me know your thoughts. My suggestion is that we set
DEFAULT_DOCKER_TIMEOUT
as long as our RPC timeout.

--
BR, Eli(Li Yong)Qiao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Magnum] 'fixed_network' actually means 'fixed_subnet'. Which way is better to fix this?

2015-12-01 Thread Kai Qiang Wu
Hi HouMing,

I checked the heat templates again and checked with the heat developers.
Heat stack creation currently cannot directly use an existing network when
creating a new stack, which means it has to create a new network and subnet.
So the fixed_network name is still needed; just as when you create a
network in neutron, you need both a network and a subnet.

So from the Magnum point of view, what can be controlled right now is:
1. network-name-related properties, and 2. subnetwork-related
properties such as the CIDR.


So keep fixed_network with the role it should have (the network name),

and add a new fixed_subnet for the CIDR.
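
Purely as an illustration (not the actual patch), the Baymodel fields for
option 2 below could end up looking like this, in the oslo.versionedobjects
style Magnum uses:

from oslo_versionedobjects import fields

baymodel_fields = {
    # existing field, now carrying only the network name
    'fixed_network': fields.StringField(nullable=True),
    # new field carrying the subnet CIDR, e.g. '10.0.0.0/24'
    'fixed_subnet': fields.StringField(nullable=True),
}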


Of course, this can also be discussed in the weekly meeting if needed.


Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Hou Ming Wang <houming.wang...@gmail.com>
To: openstack-dev@lists.openstack.org
Date:   01/12/2015 05:49 pm
Subject:[openstack-dev] [Magnum] 'fixed_network' actually means
'fixed_subnet'. Which way is better to fix this?



Hi All,
I'm working on an API-related enhancement and encountered the following
issue:
bay-creation returns 400 Bad Request with a valid fixed-network in the baymodel (
https://bugs.launchpad.net/magnum/+bug/1519748 )

The 'fixed_network' actually means 'fixed_network_cidr', or more precisely
'fixed_subnet'. There are 2 possible ways to fix this:
1. Rename 'fixed_network' to 'fixed_subnet' in the Baymodel DB, Baymodel
Object and MagnumClient.
2. Leave 'fixed_network' alone, add a new field 'fixed_subnet' to Baymodel,
and use 'fixed_subnet' in place of the current 'fixed_network'.

Please don't hesitate to give input on which one is better, by
commenting in the bug or replying to this mail.

Thanks & Best regards,
HouMing Wang

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [magnum] Mesos Conductor

2015-11-18 Thread Kai Qiang Wu
@bharath,

1) If you mean using container-create(delete) against a mesos bay to manage
apps, I am not sure how different the mesos interface would be from the
docker interface. One point: when you introduce that feature, please do not
make the docker container interface more complicated than it is now. I worry
that it would confuse end-users more than the unified interface would benefit
them. (Maybe an optional parameter could accept a JSON file used to create
containers in mesos; a rough sketch of what that could do follows after
point 2.)

2) The unified interface needs more thought. We should not burden end-users
with learning new concepts or interfaces unless we can offer a clearer
interface, and the different COEs vary a lot. It is very challenging.
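
For reference, here is a rough, illustrative sketch (the endpoint address and
app definition are made up) of what such a container-create on a mesos bay
would roughly do behind the scenes, i.e. turn the request or the JSON file
into a Marathon app through Marathon's REST API:

import requests

MARATHON_APPS_URL = 'http://<marathon-master-ip>:8080/v2/apps'  # placeholder

app_definition = {
    'id': '/demo/nginx',
    'cpus': 0.5,
    'mem': 128,
    'instances': 1,
    'container': {
        'type': 'DOCKER',
        'docker': {'image': 'nginx', 'network': 'BRIDGE'},
    },
}

response = requests.post(MARATHON_APPS_URL, json=app_definition)
response.raise_for_status()
print(response.json()['id'])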



Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   bharath thiruveedula <bharath_...@hotmail.com>
To: OpenStack Development Mailing List not for usage questions
<openstack-dev@lists.openstack.org>
Date:   19/11/2015 10:31 am
Subject:Re: [openstack-dev] [magnum] Mesos Conductor



@hongbin, @adrian I agree with you. So can we go ahead with magnum
container-create(delete) ... for the mesos bay (which would actually create a
mesos (marathon) app internally)?

@jay, yes, there are multiple frameworks which use the mesos lib, but the
mesos bay we are creating uses marathon. We had a discussion about this topic
on IRC, and I was asked to implement an initial version for marathon. I also
agree with you on having a unified client interface for creating pods/apps.

Regards
Bharath T

Date: Thu, 19 Nov 2015 10:01:35 +0800
From: jay.lau@gmail.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] Mesos Conductor

+1.

One problem I want to mention is that for mesos integration we cannot be
limited to Marathon + Mesos, as there are many frameworks that can run on top
of Mesos, such as Chronos, Kubernetes etc. We may need to consider more for
the Mesos integration, since there is a huge ecosystem built on top of Mesos.

On Thu, Nov 19, 2015 at 8:26 AM, Adrian Otto <adrian.o...@rackspace.com>
wrote:
  Bharath,

  I agree with Hongbin on this. Let’s not expand magnum to deal with
  apps or appgroups in the near term. If there is a strong desire to
  add these things, we could allow it by having a plugin/extensions
  interface for the Magnum API to allow additional COE specific
  features. Honestly, it’s just going to be a nuisance to keep up with
  the various upstreams until they become completely stable from an API
  perspective, and no additional changes are likely. All of our COE’s
  still have plenty of maturation ahead of them, so this is the wrong
  time to wrap them.

  If someone really wants apps and appgroups, (s)he could add that to
  an experimental branch of the magnum client, and have it interact
  with the marathon API directly rather than trying to represent those
  resources in Magnum. If that tool became popular, then we could
  revisit this topic for further consideration.

  Adrian

  > On Nov 18, 2015, at 3:21 PM, Hongbin Lu <hongbin...@huawei.com>
  wrote:
  >
  > Hi Bharath,
  >
  > I agree the “container” part. We can implement “magnum
  container-create ..” for mesos bay in the way you mentioned.
  Personally, I don’t like to introduce “apps” and “appgroups”
  resources to Magnum, because they are already provided by native tool
  [1]. I couldn’t see the benefits to implement a wrapper API to offer
  what native tool already offers. However, if you can point out a
  valid use case to wrap the API, I will give it more thoughts.
  >
  > Best regards,
  > Hongbin
  >
  > [1] https://docs.mesosphere.com/using/cli/marathonsyntax/
  >
  > From: bharath thiruveedula [mailto:bharath_...@hotmail.com]
  > Sent: November-18-15 1:20 PM
  > To: openstack-dev@lists.openstack.org
  > Subject: [openstack-dev] [magnum] Mesos Conductor
  >
  > Hi all,
  >
  > I am working on the blueprint [1]. As per my understanding, we have
  two resources/objects in mesos+marathon:
  >
  > 1)Apps: combination of instances/containers running on multiple
  hosts representing a service.[2]
  > 2)Application Groups: Group of apps, for example we can have
  database application group which consists mongoDB app and MySQL
  App.[3]
  >
  > So I think we need to have two resources 'apps' and 'appgroups'

Re: [openstack-dev] [magnum-ui][magnum] Proposed Core addition, and removal notice

2016-06-13 Thread Kai Qiang Wu
+1  Welcome to our new core reviewer :)



Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   王华 <wanghua.hum...@gmail.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date:   13/06/2016 05:30 pm
Subject:Re: [openstack-dev] [magnum-ui][magnum] Proposed Core addition,
and removal notice



+1

On Fri, Jun 10, 2016 at 5:32 PM, Shuu Mutou <shu-mu...@rf.jp.nec.com>
wrote:
  Hi team,

  I propose the following changes to the magnum-ui core group.

  + Thai Tran
    http://stackalytics.com/report/contribution/magnum-ui/90
    I'm so happy to propose Thai as a core reviewer.
    His reviews have been extremely valuable for us.
    And he is active Horizon core member.
    I believe his help will lead us to the correct future.

  - David Lyle

  
http://stackalytics.com/?metric=marks_type=openstack=all=magnum-ui_id=david-lyle

    No activities for Magnum-UI since Mitaka cycle.

  - Harsh Shah
    http://stackalytics.com/report/users/hshah
    No activities for OpenStack in this year.

  - Ritesh
    http://stackalytics.com/report/users/rsritesh
    No activities for OpenStack in this year.

  Please respond with your +1 votes to approve this change or -1 votes to
  oppose.

  Thanks,
  Shu


  __

  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Magnum] Use Liberty Magnum bits with Kilo/ Icehouse Openstack ?

2016-01-28 Thread Kai Qiang Wu
Hi,

Magnum has been developed upstream since Kilo.

Magnum depends on Keystone, Nova, Glance, Heat and Cinder.


If you try to use Liberty Magnum against an Icehouse or Kilo OpenStack,
whether it works depends on whether the heat templates include any resources
that do not exist in Icehouse or Kilo.


For example, according to
http://docs.openstack.org/developer/heat/template_guide/openstack.html
Magnum's mesos support uses OS::Heat::SoftwareDeploymentGroup, which is only
available in heat since Liberty.

That means you could not use Magnum to deploy a mesos cluster on an Icehouse
or Kilo OpenStack.

I suggest two ways:

1>  Use a Liberty OpenStack if possible (if you want Liberty Magnum).

 Or

2>  Check the related heat templates in Magnum and compare them against the
resource support listed in the heat template guide. If all resources are
supported, it could run, but you need to test it (see the sketch below for
one way to check).
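
As a rough illustration of suggestion 2 (endpoint and token are placeholders
you would fill in), you can ask Heat which resource types it supports and
compare them against what the Magnum templates need:

import requests

HEAT_ENDPOINT = 'http://<heat-api>:8004/v1/<tenant_id>'  # placeholder
TOKEN = '<keystone-token>'                               # placeholder

resp = requests.get(HEAT_ENDPOINT + '/resource_types',
                    headers={'X-Auth-Token': TOKEN})
resp.raise_for_status()
available = set(resp.json()['resource_types'])

for needed in ('OS::Heat::SoftwareDeploymentGroup',
               'OS::Neutron::FloatingIP'):
    print(needed, 'supported' if needed in available else 'MISSING')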


Hope this helps!

Thanks




Best Wishes,
--------
Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   "Sanjeev Rampal (srampal)" <sram...@cisco.com>
To: "openstack-dev@lists.openstack.org"
<openstack-dev@lists.openstack.org>
Date:   28/01/2016 04:10 pm
Subject:[openstack-dev] [Magnum] Use Liberty Magnum bits with Kilo/
Icehouse Openstack ?



A newbie question …

Is it described somewhere what exactly is the set of Liberty dependencies
for Magnum ? Since a significant fraction of it is orchestration templates,
one would expect it should be possible to run Liberty Magnum bits along
with an Icehouse or Kilo version of Openstack.

Can someone help clarify the set of dependencies for Magnum on having a
Liberty version of Openstack ?


Rgds,
Sanjeev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Magnum] New Core Reviewers

2016-02-01 Thread Kai Qiang Wu
+1 for both @Ton and @Egor; they both give good review comments and
suggestions.



Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Adrian Otto <adrian.o...@rackspace.com>
To: "openstack-dev@lists.openstack.org"
<openstack-dev@lists.openstack.org>
Date:   02/02/2016 12:01 am
Subject:[openstack-dev] [Magnum] New Core Reviewers



Magnum Core Team,

I propose Ton Ngo (Tango) and Egor Guz (eghobo) as new Magnum Core
Reviewers. Please respond with your votes.

Thanks,

Adrian Otto
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [magnum] Failed to create trustee %(username) in domain $(domain_id)

2016-02-25 Thread Kai Qiang Wu
Thanks Hongbin for the info.

I really think this is not a good way to introduce a new feature,
because a feature introduced this way often breaks existing work. It is
usually better when a new feature is purely additive and existing workflows
keep functioning.

Or at least, the error should say something like "the swarm bay now requires
trust to work; please configure the trust-related access information before
deploying a new swarm bay".



Thanks

--------
Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Hongbin Lu <hongbin...@huawei.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date:   26/02/2016 08:02 am
Subject:[openstack-dev] [magnum] Failed to create trustee %(username)
in  domain $(domain_id)



Hi team,

FYI, you might encounter the following error if you have pulled from master
recently:

magnum bay-create --name swarmbay --baymodel swarmbaymodel --node-count 1
Create for bay swarmbay failed: Failed to create trustee %(username) in
domain $(domain_id) (HTTP 500)"

This is due to a recent commit that added support for trust. In case you
don’t know, this error can be resolved by running the following steps:

# 1. create the necessary domain and user:
export OS_TOKEN=password
export OS_URL=http://127.0.0.1:5000/v3
export OS_IDENTITY_API_VERSION=3
openstack domain create magnum
openstack user create trustee_domain_admin --password=secret
--domain=magnum
openstack role add --user=trustee_domain_admin --domain=magnum admin

# 2. populate configs
source /opt/stack/devstack/functions
export MAGNUM_CONF=/etc/magnum/magnum.conf
iniset $MAGNUM_CONF trust trustee_domain_id $(openstack domain show magnum
| awk '/ id /{print $4}')
iniset $MAGNUM_CONF trust trustee_domain_admin_id $(openstack user show
trustee_domain_admin | awk '/ id /{print $4}')
iniset $MAGNUM_CONF trust trustee_domain_admin_password secret

# 3. screen -r stack, and restart m-api and m-cond
 __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [magnum]swarm + compose = k8s?

2016-02-14 Thread Kai Qiang Wu
HongBin,

See my replies and questions inline, marked with >>.


Thanks

Best Wishes,
--------
Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Hongbin Lu <hongbin...@huawei.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date:   15/02/2016 01:26 am
Subject:Re: [openstack-dev] [magnum]swarm + compose = k8s?



Kai Qiang,

A major benefit is to have Magnum manage the COEs for end-users. Currently,
Magnum basically have its end-users manage the COEs by themselves after a
successful deployment. This might work well for domain users, but it is a
pain for non-domain users to manage their COEs. By moving master nodes out
of users’ tenants, Magnum could offer users a COE management service. For
example, Magnum could offer to monitor the etcd/swarm-manage clusters and
recover them on failure. Again, the pattern of managing COEs for end-users
is what Google container service and AWS container service offer. I guess
it is fair to conclude that there are use cases out there?

>> I am not sure what you mean by domain here: is it a keystone domain or
something else? And what is the non-domain users' case for managing the COEs?

If we decide to offer a COE management service, we could discuss further on
how to consolidate the IaaS resources for improving utilization. Solutions
could be (i) introducing a centralized control services for all
tenants/clusters, or (ii) keeping the control services separated but
isolating them by containers (instead of VMs). A typical use case is what
Kris mentioned below.

>> Option (i) is more complicated than (ii), and I do not see much of a
utilization benefit from (i); instead it could introduce a lot of upgrade
burden and service interference across all tenants/clusters.


Best regards,
Hongbin

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: February-13-16 11:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?



Hi HongBin and Egor,
I went through what you talked about, and I am thinking about what the big
utilisation benefit is here.
The user cases look like the following:

user A wants to have a COE provisioned.
user B wants to have a separate COE (different tenant, non-shared).
user C wants to use an existing COE (same tenant as user A, shared).

When you talk about the utilisation case, you seem to suggest that users from
different tenants would use the same control node to manage different worker
nodes. That tries to make the COE OpenStack-tenant-aware, and it also means
introducing another control/scheduling layer above the COEs. We need to think
about whether that is a typical user case and what the benefit is compared
with containerisation.


And finally, this is a topic that can be discussed at the midcycle meeting.


Thanks

Best Wishes,
------------

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193


Follow your heart. You are miracle!


From: Hongbin Lu <hongbin...@huawei.com>
To: Guz Egor <guz_e...@yahoo.com>, "OpenStack Development Mailing List (not
for usage questions)" <openstack-dev@lists.openstack.org>
Date: 13/02/2016 11:02 am
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?




Egor,

Thanks for sharing your insights. I gave it more thoughts. Maybe the goal
can be achieved without implementing a shared COE. We could move all the
master nodes out of user tenants, containerize them, and consolidate them
into a set of VMs/Physical servers.

I think we could separate the discussion into two:
1. Should Magnum introduce a new bay type, in which master
nodes are managed by Magnum (not users themselves)? Like what
GCE [1] or ECS [2] does.
2. How to consolidate the control services that originally runs
on master nodes of each cluster?

Note that the proposal is for adding a new COE (not for chan

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2016-02-13 Thread Kai Qiang Wu
Hi HongBin and Egor,
I went through what you talked about, and I am thinking about what the big
utilisation benefit is here.
The user cases look like the following:

 user A wants to have a COE provisioned.
 user B wants to have a separate COE (different tenant, non-shared).
 user C wants to use an existing COE (same tenant as user A, shared).

When you talk about the utilisation case, you seem to suggest that users from
different tenants would use the same control node to manage different worker
nodes. That tries to make the COE OpenStack-tenant-aware, and it also means
introducing another control/scheduling layer above the COEs. We need to think
about whether that is a typical user case and what the benefit is compared
with containerisation.


And finally, this is a topic that can be discussed at the midcycle meeting.


Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Hongbin Lu <hongbin...@huawei.com>
To: Guz Egor <guz_e...@yahoo.com>, "OpenStack Development Mailing
List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date:   13/02/2016 11:02 am
Subject:Re: [openstack-dev] [magnum]swarm + compose = k8s?



Egor,

Thanks for sharing your insights. I gave it more thoughts. Maybe the goal
can be achieved without implementing a shared COE. We could move all the
master nodes out of user tenants, containerize them, and consolidate them
into a set of VMs/Physical servers.

I think we could separate the discussion into two:
  1.   Should Magnum introduce a new bay type, in which master
  nodes are managed by Magnum (not users themselves)? Like what GCE [1]
  or ECS [2] does.
  2.   How to consolidate the control services that originally runs
  on master nodes of each cluster?

Note that the proposal is for adding a new COE (not for changing the
existing COEs). That means users will continue to provision existing
self-managed COE (k8s/swarm/mesos) if they choose to.

[1] https://cloud.google.com/container-engine/
[2] http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html

Best regards,
Hongbin

From: Guz Egor [mailto:guz_e...@yahoo.com]
Sent: February-12-16 2:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Hongbin,

I am not sure that it's a good idea; it looks like you are proposing that
Magnum enter the "schedulers war" (personally I am tired of these debates,
Mesos vs Kube vs Swarm).
If your concern is just utilization, you can always run the control plane on
the "agent/slave" nodes. The main reason operators (at least in our case) keep
them
separate is that they need different attention (e.g. I almost don't care
why/when an "agent/slave" node died, but I always double check that a master
node was
repaired or replaced).

One use case I see for a shared COE (at least in our environment) is when
developers want to run just a docker container without installing anything
locally
(e.g. docker-machine). But in most cases it's just examples from the internet
or their own experiments ):

But we definitely should discuss it during the midcycle next week.

---
Egor


From: Hongbin Lu <hongbin...@huawei.com>
To: OpenStack Development Mailing List (not for usage questions) <
openstack-dev@lists.openstack.org>
Sent: Thursday, February 11, 2016 8:50 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Hi team,

Sorry for bringing up this old thread, but a recent debate on container
resource [1] reminded me the use case Kris mentioned below. I am going to
propose a preliminary idea to address the use case. Of course, we could
continue the discussion in the team meeting or midcycle.

Idea: Introduce a docker-native COE, which consists of only
minion/worker/slave nodes (no master nodes).
Goal: Eliminate duplicated IaaS resources (master node VMs, lbaas vips,
floating ips, etc.)
Details: Traditional COE (k8s/swarm/mesos) consists of master nodes and
worker nodes. In these COEs, control services (i.e. scheduler) run on
master nodes, and containers run on worker nodes. If we can port the COE
control services to Magnum control plate and share them with all tenants,
we eliminate the need of master nodes thus improving resource utilization.
In the new COE, users create/manage containers through Magnum API
endpoints. Magnum is responsible to spin tenant VMs, schedule containers to
the VMs, and manage the life-cycle of those containers. Unlike other COEs,
containers created by this COE are considered as Op

Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-15 Thread Kai Qiang Wu
Hi Stdake,

There is a patch for Atomic 23 support in Magnum, and Atomic 23 uses
kubernetes 1.0.6 and docker 1.9.1.
From Steve Gordon I learnt that they do have a two-weekly release. To me it
seems each Atomic 23 release differs only slightly (minor changes);
the major rebases/updates may still have to wait for e.g. Fedora Atomic 24.

So maybe we do not need to test every two-weekly Atomic 23 release.
We could pick one, and update it when we find a release that integrates a new
kubernetes, docker, etcd, etc. For other small changes (security fixes
excluded), it seems unnecessary to update so frequently, which would save
some effort.


What do you think?

Thanks

Best Wishes,
----
Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   "Steven Dake (stdake)" <std...@cisco.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date:   16/03/2016 03:23 am
Subject:Re: [openstack-dev] [magnum] Discussion of supporting
single/multiple OS distro



WFM as long as we stick to the spirit of the proposal and don't end up in a
situation where there is only one distribution.  Others in the thread had
indicated there would be only one distribution in tree, which I'd find
disturbing for reasons already described on this thread.

While we are at it, we should move to the latest version of Atomic and
chase Atomic every two weeks on their releases.  Thoughts?

Regards
-steve


From: Hongbin Lu <hongbin...@huawei.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>
Date: Monday, March 14, 2016 at 8:10 PM
To: "OpenStack Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] Discussion of supporting
single/multiple OS distro



From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: March-14-16 4:49 PM
To: OpenStack Development Mailing List (not for usage
questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting
single/multiple OS distro

Steve,

I think you may have misunderstood our intent here. We are not
seeking to lock in to a single OS vendor. Each COE driver can
have a different OS. We can have multiple drivers per COE. The
point is that drivers should be simple, and therefore should
support one Bay node OS each. That would mean taking what we
have today in our Kubernetes Bay type implementation and
breaking it down into two drivers: one for CoreOS and another
for Fedora/Atomic. New drivers would start out in a contrib
directory where complete functional testing would not be
required. In order to graduate one out of contrib and into the
realm of support of the Magnum dev team, it would need to have
a full set of tests, and someone actively maintaining it.
  OK. It sounds like the proposal allows more than one OS to be
  in-tree, as long as the second OS goes through an incubation process.
  If that is what you mean, it sounds reasonable to me.

Multi-personality driers would be relatively complex. That
approach would slow down COE specific feature development, and
complicate maintenance that is needed as new versions of the
dependency chain are bundled in (docker, k8s, etcd, etc.). We
have all agreed that having integration points that allow for
alternate OS selection is still our direction. This follows the
pattern that we set previously when deciding what networking
options to support. We will have one that’s included as a
default, and a way to plug in alternates.

Here is what I expect to see when COE drivers are implemented:

Docker Swarm:
Default driver Fedora/Atomic
Alternate driver: TBD

Kubernetes:
Default driver Fedora/Atomic
Alternate driver: CoreOS

Apache Mesos/Marathon:
Default driver: Ubuntu
Alternate driver: TBD

We can allow an arbitrary number of alternates. Those TBD items
can be initially added to the contrib directory, and with the
right level of community support can be advanced to defaults if
shown to wor

Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

2016-04-11 Thread Kai Qiang Wu
Running both Chronos and Marathon on Mesos
and running Chronos on top of Marathon seem to be two different cases.


I think that what #1 (adding Chronos to the mesos bay) provides can also be
achieved with #2. Only if we find that the frameworks (heat templates) are
not able to handle that should we fall back to option #1.
But still, flexibility is better, I think.
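
For reference, the few lines of instructions Egor mentions below for running
Chronos on top of Marathon could look roughly like this (the addresses, image
tag and Chronos flags are illustrative and would have to match the actual
bay):

import requests

chronos_app = {
    'id': '/chronos',
    'cpus': 0.5,
    'mem': 512,
    'instances': 1,
    'container': {
        'type': 'DOCKER',
        'docker': {'image': 'mesosphere/chronos', 'network': 'HOST'},
    },
    'args': ['--master', 'zk://<mesos-master-ip>:2181/mesos',
             '--zk_hosts', '<mesos-master-ip>:2181',
             '--http_port', '4400'],
}

resp = requests.post('http://<marathon-master-ip>:8080/v2/apps',
                     json=chronos_app)
resp.raise_for_status()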


Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Guz Egor <guz_e...@yahoo.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date:   12/04/2016 04:36 am
Subject:Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay



+1 for "#1: Mesos and Marathon". Most deployments that I am aware of have
this setup. Also, we can provide a few lines of instructions for running
Chronos on top of Marathon.

Honestly, I don't see how #2 will work, because the Marathon installation is
different from the Aurora installation.

---
Egor

From: Kai Qiang Wu <wk...@cn.ibm.com>
To: OpenStack Development Mailing List (not for usage questions)
<openstack-dev@lists.openstack.org>
Sent: Sunday, April 10, 2016 6:59 PM
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

#2 seems more flexible, and it would be great if it is proven that it can
"make the SAME mesos bay work with multiple frameworks", which means one
mesos bay should support multiple frameworks.




Thanks


Best Wishes,
--------

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193


Follow your heart. You are miracle!


From: Hongbin Lu <hongbin...@huawei.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: 11/04/2016 12:06 am
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay



My preference is #1, but I don’t feel strong to exclude #2. I would agree
to go with #2 for now and switch back to #1 if there is a demand from
users. For Ton’s suggestion to push Marathon into the introduced
configuration hook, I think it is a good idea.

Best regards,
Hongbin

From: Ton Ngo [mailto:t...@us.ibm.com]
Sent: April-10-16 11:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay
I would agree that #2 is the most flexible option, providing a well defined
path for additional frameworks such as Kubernetes and Swarm.
I would suggest that the current Marathon framework be refactored to use
this new hook, to serve as an example and to be the supported
framework in Magnum. This will also be useful to users who want other
frameworks but not Marathon.
Ton,


From: Adrian Otto <adrian.o...@rackspace.com>
To: "OpenStack Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>
Date: 04/08/2016 08:49 PM
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay


On Apr 8, 2016, at 3:15 PM, Hongbin Lu <hongbin...@huawei.com> wrote:

Hi team,
I would like to give an update for this thread. In the last team, we
discussed several options to introduce Chronos to our mesos bay:
1. Add Chronos to the mesos bay. With this option, the mesos bay will have
two mesos frameworks by default (Marathon and Chronos).
2. Add a configuration hook for users to configure additional mesos
frameworks, such as Chronos. With this option, Magnum team doesn’t need to
maintain extra framework configuration. However, users need to do it
themselves.
This is my preference.

Adrian
3. Create a dedicated bay type for Chronos. With this option, we separate
Marathon and Chronos into two different bay types. As a result, each bay
type becomes easier to maintain, but those two mesos framework cannot share
resources (a key feature of mesos is to have different frameworks running
on the same cluster to increase resource utilization).Which option you
prefer? O

Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

2016-04-10 Thread Kai Qiang Wu
#2 seems more flexible, and it would be great if it is proven that it can
"make the SAME mesos bay work with multiple frameworks", which means one
mesos bay should support multiple frameworks.




Thanks


Best Wishes,
--------
Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Hongbin Lu <hongbin...@huawei.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date:   11/04/2016 12:06 am
Subject:Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay



My preference is #1, but I don’t feel strong to exclude #2. I would agree
to go with #2 for now and switch back to #1 if there is a demand from
users. For Ton’s suggestion to push Marathon into the introduced
configuration hook, I think it is a good idea.

Best regards,
Hongbin

From: Ton Ngo [mailto:t...@us.ibm.com]
Sent: April-10-16 11:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay



I would agree that #2 is the most flexible option, providing a well defined
path for additional frameworks such as Kubernetes and Swarm.
I would suggest that the current Marathon framework be refactored to use
this new hook, to serve as an example and to be the supported
framework in Magnum. This will also be useful to users who want other
frameworks but not Marathon.
Ton,


From: Adrian Otto <adrian.o...@rackspace.com>
To: "OpenStack Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>
Date: 04/08/2016 08:49 PM
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay



On Apr 8, 2016, at 3:15 PM, Hongbin Lu <hongbin...@huawei.com>
wrote:

Hi team,
I would like to give an update for this thread. In the last
team, we discussed several options to introduce Chronos to our
mesos bay:
1. Add Chronos to the mesos bay. With this option,
the mesos bay will have two mesos frameworks by
default (Marathon and Chronos).
2. Add a configuration hook for users to configure
additional mesos frameworks, such as Chronos. With
this option, Magnum team doesn’t need to maintain
extra framework configuration. However, users need
to do it themselves.

This is my preference.

Adrian
3. Create a dedicated bay type for Chronos. With
this option, we separate Marathon and Chronos into
two different bay types. As a result, each bay type
becomes easier to maintain, but those two mesos
framework cannot share resources (a key feature of
mesos is to have different frameworks running on
the same cluster to increase resource utilization).
Which option you prefer? Or you have other suggestions? Advices
are welcome.

Best regards,
Hongbin

From: Guz Egor [mailto:guz_e...@yahoo.com]
Sent: March-28-16 12:19 AM
To: OpenStack Development Mailing List (not for usage
questions)
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a
DCOS bay

Jay,

just keep in mind that Chronos can be run by Marathon.

---
Egor

From: Jay Lau <jay.lau@gmail.com>
To: OpenStack Development Mailing List (not for usage
questions) <openstack-dev@lists.openstack.org>
Sent: Friday, March 25, 2016 7:01 PM
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a
DCOS bay

Yes, that's exactly what I want to do, adding dcos cli and also
add Chronos to Mesos Bay to make it can handle both long
running services and batch jobs.

Thanks,

On Fri, Mar 25, 2016 at 5:25 PM, Michal Rostecki <
michal.roste...@gm

Re: [openstack-dev] [Magnum] COE drivers spec

2016-03-19 Thread Kai Qiang Wu
Here are some of my raw points.


1. For the driver mentioned, I think we do not necessarily need a bay-driver
here. We already have network-driver and volume-driver, but maybe it is not
needed to introduce a driver at the bay level (a bay is at a higher level
than a network or a volume).

maybe like

coes/
   swarm/
   mesos/
   kubernetes/

Each COE would include the following; take swarm as an example:

coes/
swarm/
 default/
 contrib/
Or we could skip contrib here, just like this (one is supported by default,
the others are contributed by more contributors and tested in the Jenkins
pipeline):
 coes/
 swarm/
 atomic/
 ubuntu/


We could have a BaseCoE class, and the specific CoEs would inherit from it.
Each CoE would have related lifecycle management operations, such as Create,
Update, Get, and Delete.



2. We need to think more about the scale manager, which involves scaling a
cluster up and down, perhaps with both auto-scale and manual-scale approaches.


The use cases: as a Cloud Administrator, I could easily use OpenStack
to provide CoE clusters, manage the CoE life cycle, and scale CoEs.
CoEs could make the best use of the OpenStack network and volume services to
provide CoE-related network and volume support.


Another interesting case (not required): if a user just wants to deploy one
container in Magnum, we schedule it to the right CoE (if the user manually
specifies one, it would be scheduled to that specific CoE).


Or more use cases.



Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Jamie Hannaford <jamie.hannaf...@rackspace.com>
To: "openstack-dev@lists.openstack.org"
<openstack-dev@lists.openstack.org>
Date:   17/03/2016 07:24 pm
Subject:[openstack-dev] [Magnum] COE drivers spec



Hi all,

I'm writing the spec for the COE drivers, and I wanted some feedback about
what it should include. I'm trying to reconstruct some of the discussion
that was had at the mid-cycle meet-up, but since I wasn't there I need to
rely on people who were :)

From my perspective, the spec should recommend the following:

1. Change the BayModel `coe` attribute to `bay_driver`, the value of which
will correspond to the name of the directory where the COE code will
reside, i.e. drivers/{driver_name}

2. Introduce a base Driver class that each COE driver extends. This would
reside in the drivers dir too. This base driver will specify the interface
for interacting with a Bay. The following operations would need to be
defined by each COE driver: Get, Create, List, List detailed, Update,
Delete. Each COE driver would implement each operation differently
depending on their needs, but would satisfy the base interface. The base
class would also contain common logic to avoid code duplication. Any
operations that fall outside this interface would not exist in the COE
driver class, but rather an extension situated elsewhere. The JSON payloads
for requests would differ from COE to COE.

Cinder already uses this approach to great effect for volume drivers:

https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/lvm.py
https://github.com/openstack/cinder/blob/master/cinder/volume/driver.py

Question: Is this base class a feasible idea for Magnum? If so, do we need
any other operations in the base class that I haven't mentioned?
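
To make #2 concrete, here is a rough, hypothetical sketch of what such a base
class could look like (the class and method names are illustrative only, not
an existing Magnum API), loosely following the Cinder volume driver pattern:

import abc


class CoeDriver(abc.ABC):
    """Hypothetical base interface that every COE driver would implement."""

    def __init__(self, context):
        # Common state and helpers shared by all drivers would live here.
        self.context = context

    @abc.abstractmethod
    def create_bay(self, bay, baymodel):
        """Provision a new bay for this COE."""

    @abc.abstractmethod
    def get_bay(self, bay_id):
        """Return a single bay."""

    @abc.abstractmethod
    def list_bays(self, detailed=False):
        """Return all bays, optionally with details."""

    @abc.abstractmethod
    def update_bay(self, bay, patch):
        """Apply an update (e.g. a node_count change) to an existing bay."""

    @abc.abstractmethod
    def delete_bay(self, bay):
        """Tear down a bay."""


# A KubernetesDriver, SwarmDriver or MesosDriver would then subclass
# CoeDriver and implement each operation with its own Heat template logic.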

3. Each COE driver would have its own Heat template for creating a bay
node. It would also have a template definition that lists the JSON
parameters which are fed into the Heat template.
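
As a very rough illustration of #3 (again with hypothetical names, not the
actual Magnum code), the per-COE template definition could simply map
bay/baymodel attributes to the parameter names its Heat template expects:

class SwarmTemplateDefinition(object):
    """Hypothetical mapping of baymodel/bay attributes to Heat parameters."""

    template_path = 'drivers/swarm/templates/cluster.yaml'

    # attribute on the baymodel/bay  ->  parameter name in the Heat template
    parameter_mapping = {
        'master_flavor_id': 'master_flavor',
        'flavor_id': 'node_flavor',
        'node_count': 'number_of_nodes',
        'external_network_id': 'external_network',
    }

    def get_heat_params(self, bay, baymodel):
        # Translate the stored attributes into the JSON parameters that are
        # fed into the Heat stack for this COE.
        params = {}
        for attr, heat_name in self.parameter_mapping.items():
            value = getattr(baymodel, attr, None)
            if value is None:
                value = getattr(bay, attr, None)
            if value is not None:
                params[heat_name] = value
        return params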

Question: From a very top-level POV, what logic or codebase changes would
Magnum need in order to support Heat templates in the above way?

4. Removal of all old code that does not fit the above paradigm.

​---

Any custom COE operations that are not common Bay operations (i.e. the six
listed in #2) would reside in a COE extension. This is outside of the scope
of the COE drivers spec and would require an entirely different spec that
utilizes a common paradigm for extensions in OpenStack. Such a spec would
also need to cover how the conductor would link off to each COE. Is this
summary correct?

Does Magnum already have a scale manager? If not, should this be introduced
as a separate BP/spec?

Is there anything else that a COE drivers spec needs to cover which I have
not mentioned?

Jamie



Re: [openstack-dev] [magnum] Generate atomic images using diskimage-builder

2016-03-23 Thread Kai Qiang Wu
1) +1. diskimage-builder may be a better place for external consumption.

2) For the (big) image size difference, I think we need to find out what
the cause is.
Maybe the Red Hat folks know something about it.


Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   "Ton Ngo" <t...@us.ibm.com>
To: "OpenStack Development Mailing List \(not for usage questions
\)" <openstack-dev@lists.openstack.org>
Date:   24/03/2016 01:12 am
Subject:Re: [openstack-dev] [magnum] Generate atomic images using
diskimage-builder



Hi Yolanda,
Thank you for making a huge improvement from the manual process of building
the Fedora Atomic image.
Although Atomic does publish a public OpenStack image that is being
considered in this patch:
https://review.openstack.org/#/c/276232/
in the past we have come to many situations where we need an image with
specific version of certain software
for features or bug fixes (Kubernetes, Docker, Flannel, ...). So the
automated and customizable build process
will be very helpful.

With respect to where to land the patch, I think diskimage-builder is a
reasonable target.
If it does not land there, Magnum does currently have 2 sets of
diskimage-builder elements for Mesos image
and Ironic image, so it is also reasonable to submit the patch to Magnum.
With the new push to reorganize
into drivers for COE and distro, the elements would be a natural fit for
Fedora Atomic.

As for periodic image build, it's a good idea to stay current with the
distro, but we should avoid the situation
where something new in the image breaks a COE and we are stuck for a while
until a fix is made. So instead of
an automated periodic build, we might want to stage the new image to make
sure it's good before switching.

One question: I notice the image built by DIB is 871MB, similar to the
manually built image, while the
public image from Atomic is 486MB. It might be worthwhile to understand the
difference.

Ton Ngo,


From: Yolanda Robla Mota <yolanda.robla-m...@hpe.com>
To: <openstack-dev@lists.openstack.org>
Date: 03/23/2016 04:12 AM
Subject: [openstack-dev] [magnum] Generate atomic images using
diskimage-builder



Hi
I wanted to start a discussion on how Fedora Atomic images are being
built. Currently the process for generating the atomic images used on
Magnum is described here:
http://docs.openstack.org/developer/magnum/dev/build-atomic-image.html.
The image needs to be built manually, uploaded to fedorapeople, and then
consumed from there in the magnum tests.
I have been working on a feature to allow diskimage-builder to generate
these images. The code that makes it possible is here:
https://review.openstack.org/287167
This will allow magnum images to be generated on infra, using the
diskimage-builder element. This element also has the ability to consume
any tree we need, so images can be customized on demand. I generated one
image using this element and uploaded it to fedorapeople. The image has
passed the tests and has been validated by several people.

So I'm raising this topic to decide what the next steps should be. The
change to generate fedora-atomic images has not yet landed in
diskimage-builder, but we have two options here:
- add this element to the generic diskimage-builder elements, as I'm doing now
- generate this element internally in magnum. We could have a directory
in the magnum project, called "elements", and keep the fedora-atomic element
there. This will give us more control over the element's behaviour, and will
allow us to update the element without waiting for external reviews.

Once the code for diskimage-builder has landed, another step can be to
periodically generate images using a magnum job, and upload these images
to OpenStack Infra mirrors. Currently the image is based on Fedora F23,
docker-host tree. But different images can be generated if we need a
better option.

As soon as the images are available on internal infra mirrors, the tests
can be changed to consume these internal images. This way the tests
can be a bit faster (I know that the bottleneck is the functional
testing, but if we reduce the download time it can help), and the tests can
be more reliable, because we will be removing an external dependency.

So i'd like to g

Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-19 Thread Kai Qiang Wu
Hi Steve,

Some points to highlight here:

1> There is some ongoing discussion about dynamic COE support across
different OS distros.


2> For atomic, we did have many requirements before; it is an old story.
Some of them did not seem to meet our needs (we once asked in the atomic IRC
channel or community), so we built some images ourselves. But if the atomic
community could provide the related support, it would be more beneficial for
both sides (as we use it, it would be tested daily by our Jenkins jobs and
developers).


Maybe for the requirements we need some clear channel, like:


1> What's the official channel for opening requirements to the Atomic
community? Is it GitHub or something else that can be easily tracked?

2> What's the normal process for dealing with such requirements, and how do
we coordinate?

3> Others





Thanks

Best Wishes,
------------
Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Steve Gordon <sgor...@redhat.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date:   17/03/2016 09:24 pm
Subject:Re: [openstack-dev] [magnum] Discussion of supporting
single/multiple OS distro



----- Original Message -
> From: "Kai Qiang Wu" <wk...@cn.ibm.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
> Sent: Tuesday, March 15, 2016 3:20:46 PM
> Subject: Re: [openstack-dev] [magnum] Discussion of supporting
single/multiple OS distro
>
> Hi  Stdake,
>
> There is a patch about Atomic 23 support in Magnum.  And atomic 23 uses
> kubernetes 1.0.6, and docker 1.9.1.
> From Steve Gordon, I learnt they did have a two-weekly release. To me it
> seems each atomic 23 release not much difference, (minor change)
> The major rebases/updates may still have to wait for e.g. Fedora Atomic
24.

Well, the emphasis here is on *may*. As was pointed out in that same thread
[1] rebases certainly can occur although those builds need to get karma in
the fedora build system to be pushed into updates and subsequently included
in the next rebuild (e.g. see [2] for a newer K8S build). The main point is
that if a rebase involves introducing some element of backwards
incompatibility then that would have to wait for the next major (F24) -
outside of that there is some flexibility.

> So maybe we not need to test every Atomic 23 two-weekly.
> Pick one or update old, when we find it is integrated with new kubernetes
> or docker, etcd etc. If other small changes(not include security), seems
> not need to update so frequently, it can save some efforts.

A question I have posed before, and that I think will need to be answered if
Magnum is indeed to move towards the model for handling drivers proposed in
this thread, is: what expectations does Magnum have for each image/COE
combination in terms of versions of key components for a given Magnum
release, and what are those expectations when looking forward to, say,
Newton?

Based on our discussion it seemed like there were some issues that mean
kubernetes-1.1.0 would be preferable, for example (although the fact that it
wasn't there was, it would seem, itself a bug; regardless, it's a valid
example), but is that expectation documented somewhere? It seems like, based
on the feature roadmap, it should be possible to at least put forward minimum
required versions for key components (e.g. docker, k8s, flannel, etcd for
the K8S COE). This would make it easier to guide the relevant upstreams to
ensure their images support the Magnum team's needs and at least minimize
the need to do custom builds, if not eliminate it.

-Steve

[1]
https://lists.fedoraproject.org/archives/list/cl...@lists.fedoraproject.org/thread/ZJARDKSB3KGMKLACCZSQALZHV54PAJUB/

[2] https://bodhi.fedoraproject.org/updates/FEDORA-2016-a89f5ce5f4

> From:  "Steven Dake (stdake)" <std...@cisco.com>
> To:"OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Date:  16/03/2016 03:23 am
> Subject:   Re: [openstack-dev] [magnum] Discussion of supporting
> single/multiple OS distro
>
>
>
> WFM as long as we stick to the spirit of the proposal and don't end up in
a
> situation where there is only one distribution.  Others in the thread had
> indicated there would be only one distribution in tree, which I'd find
> distu

Re: [openstack-dev] [keystone][nova] Many same "region_name" configuration really meaingful for Multi-region customers?

2016-03-03 Thread Kai Qiang Wu
Hi Dolph,
It seems using one configuration option could simplify things, like below:

nova.conf:

**
client_region_name = RegionOne


All clients would use that region instead of creating many different
sections/properties (what nova does now) for that.
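
A minimal sketch of that idea (the option name and helper are hypothetical,
not what nova does today), using oslo.config so every client would read one
shared value:

from oslo_config import cfg

CONF = cfg.CONF

# One shared option instead of separate [cinder]/os_region_name,
# [neutron]/region_name, etc.
CONF.register_opts([
    cfg.StrOpt('client_region_name',
               default='RegionOne',
               help='Region to use when calling other OpenStack services.'),
])


def region_for_client(service_name):
    # cinder, neutron, glance, ... would all receive the same value, so
    # operators configure the region exactly once.
    return CONF.client_region_name

A per-service override could still be allowed to win when explicitly set,
which would keep such a change backward compatible.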



But I'd like to hear the nova/keystone developers' opinion on that. Why was
it designed like that? :)




Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Dolph Mathews <dolph.math...@gmail.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date:   04/03/2016 12:46 pm
Subject:Re: [openstack-dev] [keystone][nova] Many same "region_name"
configuration really meaingful for Multi-region customers?



Unless someone on the operations side wants to speak up and defend
cross-region nova-cinder or nova-neutron interactions as being a legitimate
use case, I'd be in favor of a single region identifier.

However, both of these configuration blocks should ultimately be used to
configure keystoneauth, so I would be in favor of whatever solution
simplifies configuration for keystoneauth.

On Tue, Mar 1, 2016 at 10:01 PM, Kai Qiang Wu <wk...@cn.ibm.com> wrote:
  Hi All,


  Right now, we found that nova.conf have many places for region_name
  configuration. Check below:

  nova.conf

  ***
  [cinder]
  os_region_name = ***

  [neutron]
  region_name= ***



  ***


  From some mult-region environments observation, those two regions would
  always config same value.
  Question 1: Does nova support config different regions in nova.conf ?
  Like below

  [cinder]

  os_region_name = RegionOne

  [neutron]
  region_name= RegionTwo


  From Keystone point, I suspect those regions can access from each other.


  Question 2: If all need to config with same value, why we not use single
  region_name in nova.conf ? (instead of create many region_name in same
  file )

  Is it just for code maintenance or else consideration ?



  Could nova and keystone community members help this question ?


  Thanks


  Best Wishes,
  
------------

  Kai Qiang Wu (吴开强 Kennan)
  IBM China System and Technology Lab, Beijing

  E-mail: wk...@cn.ibm.com
  Tel: 86-10-82451647
  Address: Building 28(Ring Building), ZhongGuanCun Software Park,
  No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193
  


  Follow your heart. You are miracle!

  __

  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova] Many same "region_name" configuration really meaingful for Multi-region customers?

2016-03-03 Thread Kai Qiang Wu





Hi,

From my previous experience operating OpenStack with 3K+ nodes and 6+ regions
at Yandex, I found it useless to have Cinder and Neutron services operate in
a cross-region manner.

BTW, nova-neutron cross-region interactions are still a legitimate use case:
you may utilize one neutron for many nova regions.

  >>> Sorry, I don't quite understand here. Do you mean you want this
cross-region (nova-neutron) feature? Or is it already supported now in
OpenStack nova and neutron?




--
With best regards,
Vladimir Eremin,
Fuel Deployment Engineer,
Mirantis, Inc.



  On Mar 4, 2016, at 7:43 AM, Dolph Mathews <dolph.math...@gmail.com>
  wrote:

  Unless someone on the operations side wants to speak up and defend
  cross-region nova-cinder or nova-neutron interactions as being a
  legitimate use case, I'd be in favor of a single region identifier.

  However, both of these configuration blocks should ultimately be used
  to configure keystoneauth, so I would be in favor of whatever
  solution simplifies configuration for keystoneauth.

  On Tue, Mar 1, 2016 at 10:01 PM, Kai Qiang Wu <wk...@cn.ibm.com>
  wrote:
Hi All,


Right now, we found that nova.conf have many places for region_name
configuration. Check below:

nova.conf

***
[cinder]
os_region_name = ***

[neutron]
region_name= ***



***


From some mult-region environments observation, those two regions
would always config same value.
Question 1: Does nova support config different regions in
nova.conf ? Like below

[cinder]

os_region_name = RegionOne

[neutron]
region_name= RegionTwo


From Keystone point, I suspect those regions can access from each
other.


Question 2: If all need to config with same value, why we not use
single region_name in nova.conf ? (instead of create many
region_name in same file )

Is it just for code maintenance or else consideration ?



Could nova and keystone community members help this question ?


Thanks


Best Wishes,

--------

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193



Follow your heart. You are miracle!


__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  __

  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org
  ?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
(See attached file: signature.asc)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone][nova] Many same "region_name" configuration really meaingful for Multi-region customers?

2016-03-01 Thread Kai Qiang Wu

Hi All,


Right now, we found that nova.conf has many places for region_name
configuration. See below:

nova.conf

***
[cinder]
os_region_name = ***

[neutron]
region_name= ***



***


From observation of some multi-region environments, those two regions would
always be configured with the same value.
Question 1: Does nova support configuring different regions in nova.conf? Like
below:

[cinder]

os_region_name = RegionOne

[neutron]
region_name= RegionTwo


From the Keystone point of view, I suspect those regions can access each other.


Question 2: If they all need to be configured with the same value, why do we
not use a single region_name in nova.conf? (instead of creating many
region_name entries in the same file)

Is it just for code maintenance, or is there some other consideration?



Could nova and keystone community members help with this question?


Thanks


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum-ui] Proposed Core addition, and removal notice

2016-03-05 Thread Kai Qiang Wu
+1, Really good contribution. :)


Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   大塚元央 <yuany...@oeilvert.org>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date:   05/03/2016 06:25 pm
Subject:Re: [openstack-dev] [magnum-ui] Proposed Core addition, and
removal notice



+1, welcome Shu.

-Yuanying

On Sat, Mar 5, 2016 at 09:37 Bradley Jones (bradjone) <bradj...@cisco.com>
wrote:
  +1

  Shu has done some great work in magnum-ui and will be a welcome addition
  to the core team.

  Thanks,
  Brad

  > On 5 Mar 2016, at 00:29, Adrian Otto <adrian.o...@rackspace.com> wrote:
  >
  > Magnum UI Cores,
  >
  > I propose the following changes to the magnum-ui core group [1]:
  >
  > + Shu Muto
  > - Dims (Davanum Srinivas), by request - justified by reduced activity
  level.
  >
  > Please respond with your +1 votes to approve this change or -1 votes to
  oppose.
  >
  > Thanks,
  >
  > Adrian
  >
  __

  > OpenStack Development Mailing List (not for usage questions)
  > Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  __

  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  __

  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-01 Thread Kai Qiang Wu
I tend to agree that multiple OS support is OK (we can limit it to the
popular ones first, like Red Hat and Ubuntu). But we should not try to cover
every OS; it would be too much of a maintenance burden, and extra small
requirements should be maintained by third parties if possible (through
drivers).




Thanks



Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Steve Gordon <sgor...@redhat.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Cc: Martin Andre <maan...@redhat.com>, Josh Berkus
<jber...@redhat.com>
Date:   01/03/2016 08:25 pm
Subject:Re: [openstack-dev] [magnum]Discussion  of  
supporting
single/multiple OS distro



- Original Message -
> From: "Steve Gordon" <sgor...@redhat.com>
> To: "Guz Egor" <guz_e...@yahoo.com>, "OpenStack Development Mailing List
(not for usage questions)"
> <openstack-dev@lists.openstack.org>
>
> - Original Message -
> > From: "Guz Egor" <guz_e...@yahoo.com>
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > <openstack-dev@lists.openstack.org>
> >
> > Adrian,
> > I disagree, host OS is very important for operators because of
integration
> > with all internal tools/repos/etc.
> > I think it make sense to limit OS support in Magnum main source. But
not
> > sure
> > that Fedora Atomic is right choice,first of all there is no
documentation
> > about it and I don't think it's used/tested a lot by Docker/Kub/Mesos
> > community.
>
> Project Atomic documentation for the most part lives here:
>
> http://www.projectatomic.io/docs/
>
> To help us improve it, it would be useful to know what you think is
missing.
> E.g. I saw recently in the IRC channel it was discussed that there is no
> documentation on (re)building the image but this is the first hit in a
> Google search for same and it seems to largely match what has been copied
> into Magnum's docs for same:
>
>
http://www.projectatomic.io/blog/2014/08/build-your-own-atomic-centos-or-fedora/

>
> I have no doubt that there are areas where the documentation is lacking,
but
> it's difficult to resolve a claim that there is no documentation at all.
I
> recently kicked off a thread over on the atomic list to try and relay
some
> of the concerns that were raised on this list and in the IRC channel
> recently, it would be great if Magnum folks could chime in with more
> specifics:
>
>
https://lists.projectatomic.io/projectatomic-archives/atomic/2016-February/thread.html#9

>
> Separately I had asked about containerization of kubernetes/etcd/flannel
> which remains outstanding:
>
>
https://lists.fedoraproject.org/archives/list/cl...@lists.fedoraproject.org/thread/XICO4NJCTPI43AWG332EIM2HNFYPZ6ON/

>
> Fedora Atomic builds do seem to be hitting their planned two weekly
update
> cadence now though which may alleviate this concern at least somewhat in
the
> interim:
>
>
https://lists.fedoraproject.org/archives/list/cl...@lists.fedoraproject.org/thread/CW5BQS3ODAVYJGAJGAZ6UA3XQMKEISVJ/

> https://fedorahosted.org/cloud/ticket/139
>
> Thanks,
>
> Steve

I meant to add, I don't believe choosing a single operating system image to
support - regardless of which it is - is the right move for users and
largely agree with what Ton Ngo put forward in his most recent post in the
thread. I'm simply highlighting that there are folks willing/able to work
on improving things from the Atomic side and we are endeavoring to provide
them actionable feedback from the Magnum community to do so.

Thanks,

Steve

> > It make sense to go with Ubuntu (I believe it's still most adopted
> > platform in all three COEs and OpenStack deployments) and CoreOS
(is
> > highly adopted/tested in Kub community and Mesosphere DCOS uses it as
> > well).
> >  We can implement CoreOS support as driver and users can use it as
> >  reference
> > implementation.
>
>
> > --- Egor
> >   From: Adrian Otto <adrian.o...@rackspace.com>
> >  To: OpenStack Development Mailing List (not for usage questions)
> >  <openstack-dev@lists.openstack.org>
> >  Sent: Monday, February 29, 2016 10:36 AM
> >

Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-01 Thread Kai Qiang Wu
We found an issue with the atomic host running a docker volume plugin, and
neither the atomic nor the docker volume plugin side is sure what the root
cause is.

Here is the link:
https://github.com/docker/docker/issues/18005#issuecomment-190215862


Also, I did not find that the atomic image is updated quickly (but k8s and
docker both release quickly, which can leave new features unavailable for
our development), so I think atomic has a gap there.




Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Steve Gordon <sgor...@redhat.com>
To: Guz Egor <guz_e...@yahoo.com>, "OpenStack Development Mailing
List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Cc: Josh Berkus <jber...@redhat.com>
Date:   01/03/2016 08:19 pm
Subject:Re: [openstack-dev] [magnum] Discussion of  supporting
single/multiple OS distro



- Original Message -
> From: "Guz Egor" <guz_e...@yahoo.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
>
> Adrian,
> I disagree, host OS is very important for operators because of
integration
> with all internal tools/repos/etc.
> I think it make sense to limit OS support in Magnum main source. But not
sure
> that Fedora Atomic is right choice,first of all there is no documentation
> about it and I don't think it's used/tested a lot by Docker/Kub/Mesos
> community.

Project Atomic documentation for the most part lives here:

http://www.projectatomic.io/docs/

To help us improve it, it would be useful to know what you think is
missing. E.g. I saw recently in the IRC channel it was discussed that there
is no documentation on (re)building the image but this is the first hit in
a Google search for same and it seems to largely match what has been copied
into Magnum's docs for same:


http://www.projectatomic.io/blog/2014/08/build-your-own-atomic-centos-or-fedora/


I have no doubt that there are areas where the documentation is lacking,
but it's difficult to resolve a claim that there is no documentation at
all. I recently kicked off a thread over on the atomic list to try and
relay some of the concerns that were raised on this list and in the IRC
channel recently, it would be great if Magnum folks could chime in with
more specifics:


https://lists.projectatomic.io/projectatomic-archives/atomic/2016-February/thread.html#9


Separately I had asked about containerization of kubernetes/etcd/flannel
which remains outstanding:


https://lists.fedoraproject.org/archives/list/cl...@lists.fedoraproject.org/thread/XICO4NJCTPI43AWG332EIM2HNFYPZ6ON/


Fedora Atomic builds do seem to be hitting their planned two weekly update
cadence now though which may alleviate this concern at least somewhat in
the interim:


https://lists.fedoraproject.org/archives/list/cl...@lists.fedoraproject.org/thread/CW5BQS3ODAVYJGAJGAZ6UA3XQMKEISVJ/

https://fedorahosted.org/cloud/ticket/139

Thanks,

Steve

> It make sense to go with Ubuntu (I believe it's still most adopted
> platform in all three COEs and OpenStack deployments)     and CoreOS (is
> highly adopted/tested in Kub community and Mesosphere DCOS uses it as
well).
>  We can implement CoreOS support as driver and users can use it as
reference
> implementation.


> --- Egor
>   From: Adrian Otto <adrian.o...@rackspace.com>
>  To: OpenStack Development Mailing List (not for usage questions)
>  <openstack-dev@lists.openstack.org>
>  Sent: Monday, February 29, 2016 10:36 AM
>  Subject: Re: [openstack-dev] [magnum] Discussion of supporting
>  single/multiple OS distro
>
> Consider this: Which OS runs on the bay nodes is not important to end
users.
> What matters to users is the environments their containers execute in,
which
> has only one thing in common with the bay node OS: the kernel. The linux
> syscall interface is stable enough that the various linux distributions
can
> all run concurrently in neighboring containers sharing same kernel. There
is
> really no material reason why the bay OS choice must match what distro
the
> container is based on. Although I’m persuaded by Hongbin’s concern to
> mitigate risk of future changes WRT whatever OS distro is the prevailing
one
> for bay nodes, there are a few items of concern about duality I’d like to
> zero in on:
> 1) Participation from Magnum contributors to support the CoreOS speci

Re: [openstack-dev] [magnum] Discuss the blueprint "support-private-registry"

2016-03-30 Thread Kai Qiang Wu
I agree that support-private-registry should be secure, as an insecure
registry does not seem very useful for production use.
Also, I understand the point that setting up the related CA could be more
difficult than plain HTTP, but we want to know whether
https://blueprints.launchpad.net/magnum/+spec/allow-user-softwareconfig

could address the issue and make the templates clearer to understand. If a
related patch or spec is proposed, we are glad to review it and make it better.




Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Ricardo Rocha <rocha.po...@gmail.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date:   30/03/2016 09:09 pm
Subject:Re: [openstack-dev] [magnum] Discuss the
blueprint   "support-private-registry"



Hi.

On Wed, Mar 30, 2016 at 3:59 AM, Eli Qiao <liyong.q...@intel.com> wrote:
>
> Hi Hongbin
>
> Thanks for starting this thread,
>
>
>
> I initially proposed this bp because I am in China, which is behind the
> Great Firewall, and cannot access gcr.io directly. After checking our
> cloud-init scripts, I see that
>
> lots of code is *hard coded* to use gcr.io, which I personally think is
> not a good idea. We cannot force users/customers to have internet access in
> their environment.
>
> I proposed to use insecure-registry to give customers/users (Chinese or
> whoever doesn't have gcr.io access) a chance to switch to their own
> insecure-registry to deploy
> a k8s/swarm bay.
>
> For your question:
>>  Is the private registry secure or insecure? If secure, how to
handle
>> the authentication secrets. If insecure, is it OK to connect a secure
bay to
>> an insecure registry?
> An insecure-registry should still be a 'secure' one, since the customer
> needs to set it up and make sure it is a clean one; in this case, it could
> be in a private cloud.
>
>>  Should we provide an instruction for users to pre-install the private
>> registry? If not, how to verify the correctness of this feature?
>
> The simple way to pre-install a private registry is to use an
> insecure-registry, and docker.io has very simple steps to start it [1].
> Otherwise, docker registry v2 also supports a TLS-enabled mode, but this
> will require telling the docker client about the key and crt files, which
> will make "support-private-registry" complex.
>
> [1] https://docs.docker.com/registry/
> [2]https://docs.docker.com/registry/deploying/

'support-private-registry' and 'allow-insecure-registry' sound different to
me.

We're using an internal docker registry at CERN (v2, TLS enabled), and
have the magnum nodes set up to use it.

We just install our CA certificates in the nodes (cp to
etc/pki/ca-trust/source/anchors/, update-ca-trust) - had to change the
HEAT templates for that, and submitted a blueprint to be able to do
similar things in a cleaner way:
https://blueprints.launchpad.net/magnum/+spec/allow-user-softwareconfig

That's all that is needed, the images are then prefixed with the
registry dns location when referenced - example:
docker.cern.ch/my-fancy-image.

Things we found on the way:
- registry v2 doesn't seem to allow anonymous pulls (you can always
add an account with read-only access everywhere, but it means you need
to always authenticate at least with this account)
https://github.com/docker/docker/issues/17317
- swarm 1.1 and k8s 1.0 allow authentication to the registry from
the client (which was good news, and it works fine), handy if you want
to push/pull with authentication.

Cheers,
  Ricardo

>
>
>
> On 2016年03月30日 07:23, Hongbin Lu wrote:
>
> Hi team,
>
>
>
> This is the item we didn’t have time to discuss in our team meeting, so I
> started the discussion in here.
>
>
>
> Here is the blueprint:
> https://blueprints.launchpad.net/magnum/+spec/support-private-registry .
Per
> my understanding, the goal of the BP is to allow users to specify the url
of
> their private docker registry where the bays pull the kube/swarm images
(if
> they are not able to access docker hub or other public registry). An
> assumption is that users need to pre-install their own private registry
and
> upload all the required images to there. There are several potential
issues
> of this proposal:
>
> ・ Is the private registry secure or insecure? If secure, how to
> handle the authentication secrets. If inse

Re: [openstack-dev] [magnum] Proposing Eli Qiao for Magnum core reviewer team

2016-04-02 Thread Kai Qiang Wu
+ 1 for Eli :)


Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Hongbin Lu <hongbin...@huawei.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date:   01/04/2016 02:20 am
Subject:[openstack-dev] [magnum] Proposing Eli Qiao for Magnum core
reviewer team



Hi all,

Eli Qiao has been consistently contributing to Magnum for a while. His
contributions started about 10 months ago. Along the way, he
implemented several important blueprints and fixed a lot of bugs. His
contribution covers various aspects (i.e. APIs, conductor, unit/functional
tests, all the COE templates, etc.), which shows that he has a good
understanding of almost every pieces of the system. The feature set he
contributed to is proven to be beneficial to the project. For example, the
gate testing framework he heavily contributed to is what we rely on every
days. His code reviews are also consistent and useful.

I am happy to propose Eli Qiao to be a core reviewer of Magnum team.
According to the OpenStack Governance process [1], we require a minimum of
4 +1 votes within a 1 week voting window (consider this proposal as a +1
vote from me). A vote of -1 is a veto. If we cannot get enough votes or
there is a veto vote prior to the end of the voting window, Eli is not able
to join the core team and needs to wait 30 days to reapply.

The voting is open until Thursday, April 7th.

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Best regards,
Hongbin

 __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to provision minion nodes

2016-04-27 Thread Kai Qiang Wu
Hi Mike,

Since right now we also support bay-update (node_count),

I am thinking about the following cases:

1> baymodel-create has a default flavor, and extra labels specify the (other
node flavor) requirements:

if the (other node flavors) count <= the bay node_count, the extra nodes
would be created using the default flavor;

if the (other node flavors) count > the bay node_count, it should raise an
error, since it is not clear which flavor to use.

2> For "magnum bay-update k8sbay replace node_count": if the new node_count <
the existing node_count, it should be OK, same as the old behavior;
 if the new node_count > the existing node_count, all new nodes would use the
default flavor_id (if not, we need to find a better policy to handle
that).

Refer:
https://github.com/openstack/magnum/blob/master/doc/source/dev/quickstart.rst





What do you think ?



Thanks


Best Wishes,
------------
Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   "Ma, Wen-Tao (Mike, HP Servers-PSC-BJ)" <wentao...@hpe.com>
To: "openstack-dev@lists.openstack.org"
<openstack-dev@lists.openstack.org>
Date:   27/04/2016 03:10 pm
Subject:Re: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to
provision minion nodes



Hi Hongbin,
Thanks very much. It's a good suggestion; I think using labels for the extra
flavors is a good way. But I notice that there is no --node-count
parameter in the baymodel.
So I think it doesn't need to specify the minion-flavor-0 count via
--node-count.
We can specify all of the flavor ids and the count ratio in the labels. It
will check the minion node count against this ratio from the labels when
creating a magnum bay that specifies the total minion node count. If the
node-count in bay-create doesn't match the flavor ratio, it will return a
ratio mismatch error message. If there is no multi-flavor-ratio key in the
labels, it will just use minion-flavor-0 to create the 10 minion nodes.
$ magnum baymodel-create --name k8sbaymodel --flavor-id minion-flavor-0 \
  --labels multi-flavor-ratio=minion-flavor-0:3,minion-flavor-1:5,minion-flavor-2:2
$ magnum bay-create --name k8sbay --baymodel k8sbaymodel --node-count 10
What do you think about it?
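
A minimal sketch of how such a label could be parsed and checked (a
hypothetical helper, not existing Magnum code), assuming a label value like
"minion-flavor-0:3,minion-flavor-1:5,minion-flavor-2:2":

def parse_flavor_ratio(label_value):
    """Turn 'flavorA:3,flavorB:5' into a dict of flavor name -> node count."""
    ratio = {}
    for entry in label_value.split(','):
        flavor, count = entry.rsplit(':', 1)
        ratio[flavor.strip()] = int(count)
    return ratio


def validate_node_count(label_value, node_count):
    """Reject a bay-create whose --node-count does not match the ratio."""
    ratio = parse_flavor_ratio(label_value)
    total = sum(ratio.values())
    if total != node_count:
        raise ValueError('node-count %d does not match the multi-flavor-ratio '
                         'total of %d' % (node_count, total))
    return ratio


# Example: 3 + 5 + 2 == 10, so a bay-create with --node-count 10 would pass.
validate_node_count('minion-flavor-0:3,minion-flavor-1:5,minion-flavor-2:2', 10)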

> -Original Message-
> From: Ma, Wen-Tao (Mike, HP Servers-PSC-BJ) [mailto:wentao...@hpe.com]
> Sent: April-26-16 3:01 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to
> provision minion nodes
>
> Hi Hongbin, Ricardo
> This is mike, I am working with Gary now.
> Thanks for Ricardo's good suggestion. I have tried the "map/index"
> method ,  we can use it to passed the minion_flavor_map and the index
> into the minion cluster stack. It does work well.
> I think we can update magnum baymodel-create to set the N minion
> flavors in the minion_flavor_map and assign minion counts for each
> flavor.
> For example :
> magnum baymodel-create --name k8s-bay-model  --flavor-id minion-flavor-
> 0:3,minion-flavor-1:5, minion-flavor-2:2. It will create 3 types flavor

The suggested approach seems to break the existing behaviour. I think it is
better to support this feature in a backward-compatible way. How about
using labels:

$ magnum baymodel-create --name k8sbaymodel --flavor-id minion-flavor-0
--node-count 3 --labels
extra-flavor-ids=minions-flavor-1:5,minion-flavor-2:2

> minion node and total minion nodes  count is 10. The magnum baymode.py
> will parse  this  dictionary and pass them to the heat template
> parameters minion_flavor_map, minion_flavor_count_map. Then the heat
> stack will work well.
>
> kubecluster-fedora-ironic.yaml
> parameters:
>   minion_flavor_map:
> type: json
> default:
>   '0': minion-flavor-0
>   '1': minion-flavor-1
>   '2': minion-flavor-2
>
>   minion_flavor_count_map:
> type: json
> default:
>   '0': 3
>   '1': 5
>   '2': 2
>
> resources:
> kube_minions_flavors:
> type: OS::Heat::ResourceGroup
> properties:
>   count: { get_param: minion_flavors_counts }
>   resource_def:
> type: kubecluster-minion-fedora-ironic.yaml
> properties:
>   minion_flavor_map: {get_param: minion_flavor_map}
>   minion_flavor_count_map: {get_param: minion_flavor_count_map}
>   minion_flavor_index: '%index%'
>
> How do you think about this interface in magnum baymodel to support N
> falvor to provision minion nodes? Do you have any comment

Re: [openstack-dev] [magnum] How to document 'labels'

2016-05-12 Thread Kai Qiang Wu
I prefer option 1 combined with 3, which is easy to maintain and also
accounts for user experience.
For option 2, is it really a good idea to expose help messages in the API
layer?




Thanks.


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Hongbin Lu <hongbin...@huawei.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Cc: Qun XK Wang/China/IBM@IBMCN
Date:   12/05/2016 03:53 am
Subject:[openstack-dev] [magnum] How to document 'labels'



Hi all,

This is a continued discussion from the last team meeting. For recap,
‘labels’ is a property in baymodel and is used by users to input additional
key-value pairs to configure the bay. In the last team meeting, we discussed
what is the best way to document ‘labels’. In general, I heard three
options:
  1.   Place the documentation in Magnum CLI as help text (as
  Wangqun proposed [1][2]).
  2.   Place the documentation in Magnum server and expose them via
  the REST API. Then, have the CLI to load help text of individual
  properties from Magnum server.
  3.   Place the documentation in a documentation server (like
  developer.openstack.org/…), and add the doc link to the CLI help
  text.
For option #1, I think an advantage is that it is close to end-users, thus
providing a better user experience. In contrast, Tom Cammann pointed out a
disadvantage that the CLI help text might more easily become out of date.
For option #2, it should work but incurs a lot of extra work. For option
#3, the disadvantage is the user experience (since users need to click the
link to see the documents), but it is easier for us to maintain. I am
wondering if it is possible to have a combination of #1 and #3. Thoughts?

[1] https://review.openstack.org/#/c/307631/
[2] https://review.openstack.org/#/c/307642/

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][kuryr] Shared session in design summit

2016-04-18 Thread Kai Qiang Wu
Magnum has some support for providing volumes in containers, and also has
some proposed topics on integrating Manila if possible.



Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Zhipeng Huang <zhipengh...@gmail.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Cc: Taku Fukushima <tfukush...@midokura.com>, Antoni Segura
Puimedon <t...@midokura.com>, Irena Berezovsky
<ir...@midokura.com>
Date:   19/04/2016 06:50 am
Subject:Re: [openstack-dev] [magnum][kuryr] Shared session in design
summit



Thanks Hongbin,

Actually we have a new project called Fuxi that is sort of like Kuryr for
storage; we would be glad to discuss the ideas.

On Tue, Apr 19, 2016 at 5:26 AM, Hongbin Lu <hongbin...@huawei.com> wrote:
  Hi all,





  The Magnum-Kuryr joint session was scheduled for Thursday 11:50 – 12:30:
  https://www.openstack.org/summit/austin-2016/summit-schedule/events/9099
  . I am looking forward to seeing you all there.





  In addition, Magnum will have another session for container storage:
  https://www.openstack.org/summit/austin-2016/summit-schedule/events/9098
  . I saw Kuryr recently expanded its scope to storage so it would be great
  if the relevant Kuryr contributors can join the storage session as well.





  Best regards,


  Hongbin





  From: Gal Sagie [mailto:gal.sa...@gmail.com]
  Sent: March-30-16 10:36 AM
  To: OpenStack Development Mailing List (not for usage questions); Antoni
  Segura Puimedon; Fawad Khaliq; Mohammad Banikazemi; Taku Fukushima; Irena
  Berezovsky; Mike Spreitzer
  Subject: Re: [openstack-dev] [magnum][kuryr] Shared session in design
  summit





  All these slots are fine with me, added Kuryr team as CC to make sure
  most can attend any of these times.











  On Wed, Mar 30, 2016 at 5:12 PM, Hongbin Lu <hongbin...@huawei.com>
  wrote:


  Gal,





  Thursday 4:10 – 4:50 conflicts with a Magnum workroom session, but we can
  choose from:


  · 11:00 – 11:40


  · 11:50 – 12:30


  · 3:10 – 3:50





  Please let us know if some of the slots don’t work well with your
  schedule.





  Best regards,


  Hongbin





  From: Gal Sagie [mailto:gal.sa...@gmail.com]
  Sent: March-30-16 2:00 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [magnum][kuryr] Shared session in design
  summit





  Anything you pick is fine with me; the Kuryr fishbowl session is on
  Thursday 4:10 - 4:50. I personally think the Magnum integration is
  important enough and I don't mind using this time for the session as well.





  Either way, I am also OK with the 11:00-11:40 and the 11:50-12:30 sessions
  or the 3:10-3:50.





  On Tue, Mar 29, 2016 at 11:32 PM, Hongbin Lu <hongbin...@huawei.com>
  wrote:


  Hi all,





  As discussed before, our team members want to establish a shared session
  between Magnum and Kuryr. We expect a lot of attendees in the session,
  so we need a large room (fishbowl). Currently, Kuryr has only 1 fishbowl
  session, and they possibly need it for other purposes. A solution is to
  promote one of the Magnum fishbowl sessions to be the shared session, or
  leverage one of the free fishbowl slots. The schedule is as below.





  Please vote your favorite time slot:
  http://doodle.com/poll/zuwercgnw2uecs5y .





  Magnum fishbowl session:


  · 11:00 - 11:40 (Thursday)


  · 11:50 - 12:30


  · 1:30 - 2:10


  · 2:20 - 3:00


  · 3:10 - 3:50





  Free fishbowl slots:


  · 9:00 – 9:40 (Thursday)


  · 9:50 – 10:30


  · 3:10 – 3:50 (conflict with Magnum session)


  · 4:10 – 4:50 (conflict with Magnum session)


  · 5:00 – 5:40 (conflict with Magnum session)





  Best regards,


  Hongbin



  __

  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev









  --


  Best Regards ,

  The G.



  __

  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listin

Re: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to provision minion nodes

2016-04-20 Thread Kai Qiang Wu
Hi Duan Li,

Not sure if I get your point very clearly.

1> Magnum does support:
https://github.com/openstack/magnum/blob/master/magnum/api/controllers/v1/baymodel.py#L65

flavor-id for the minion nodes
master-flavor-id for the master nodes

So your K8s cluster can already have these two kinds of flavors.


2> One question about the ironic case (I see you deploy on ironic): I do
not think the Magnum templates support the ironic case yet,
as the ironic VLAN-related features are still being developed and are not
merged (many patches are under review; one example is
https://review.openstack.org/#/c/277853).


I am not sure how you would use ironic for a k8s cluster.

Also, for this summit
(https://etherpad.openstack.org/p/magnum-newton-design-summit-topics), we
will have a session about ironic cases;
here it is: "Ironic Integration: Add support for Ironic virt-driver".

If you have ways to make ironic work with Magnum, we welcome your
contribution to that topic.


Thanks

Best Wishes,
--------
Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   "Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)"
<li-gong.d...@hpe.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date:   20/04/2016 03:46 pm
Subject:[openstack-dev] [Magnum] Magnum supports 2 Nova flavor to
provision   minion nodes



Hi Folks,

We are considering whether Magnum can support 2 Nova flavors to provision
Kubernetes and other COE minion nodes.
This requirement comes from the below use case:
  -  There are 2 kinds of baremetal machines at the customer site:
  legacy machines which do not support UEFI secure boot, and
  new machines which do support UEFI secure boot. The user wants to
  use Magnum to provision a Kubernetes bay from these 2 kinds
  of baremetal machines, and for the machines supporting secure boot,
  the user wants to use UEFI secure boot to boot them up. Then 2 Kubernetes
  labels (secure-booted and non-secure-booted) are created, and the user can
  deploy their data-sensitive/critical workloads/containers/pods on the
  baremetal machines which are secure-booted.

This requirement requires Magnum to support 2 Nova flavors (one with
“extra_spec: secure_boot=True” and the other without it), based on
the Ironic feature (
https://specs.openstack.org/openstack/ironic-specs/specs/kilo-implemented/uefi-secure-boot.html
 ).

Could you kindly give me some comments on this requirement and whether it
is reasonable from your point of view? If you agree, we can write a design
spec and implement this feature.

Regards,
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to provision minion nodes

2016-04-21 Thread Kai Qiang Wu
Hi Duan Li,

We welcome that contribution if you have it. Just make sure the spec can
flexibly handle the 2-to-N flavor cases, so that it can cover future
requirements for N flavors; in the ML, I found some operators had such
requirements.



Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   "Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)"
<li-gong.d...@hpe.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date:   21/04/2016 02:31 pm
Subject:Re: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to
provision minion nodes



Hi Eli,

This is exactly what I want. If you guys think this requirement is
reasonable, I'd like to submit a design spec so that we can discuss it in
detail.

Regards,
Gary Duan

From: Eli Qiao [mailto:liyong.q...@intel.com]
Sent: Wednesday, April 20, 2016 5:08 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to
provision minion nodes


Kennan,

I think Duan Li is talking about using both kinds of nodes (secure-booted
and non-secure-booted) to deploy *minion* nodes.

The scenario may be like this:
let's say there are 2 flavors:
  flavor_secure
  flavor_none_secure
For now, flavor-id in the baymodel can only be set to one value. Duan Li's
requirement is to use flavor-id = [flavor_none_secure, flavor_secure]
and provision one cluster whose minion nodes are built from the 2 types of
flavor; then, after the cluster (bay) provisioning is finished, pass a label
to let the k8s cluster choose a minion node to start a pod on that specific
node.
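
To make the label part concrete, a rough sketch of what that could look like
once the bay is up (the node name and label key below are made up, not
anything Magnum produces today):

# example only: label the minions that were built from the secure flavor
kubectl label nodes k8s-minion-0 secure-booted=true

# pin a sensitive pod onto secure-booted minions via a nodeSelector
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: sensitive-workload
spec:
  nodeSelector:
    secure-booted: "true"
  containers:
  - name: app
    image: nginx
EOF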


Magnum doesn't support this yet. I think it would be good to have, but the
implementation may differ per COE, since after we provision the bay the
scheduling work is done by k8s/swarm/mesos.

Eli.

On 2016-04-20 16:36, Kai Qiang Wu wrote:
  Hi Duan Li,

  Not sure if I get your point very clearly.

  1> Magnum does support:
  
https://github.com/openstack/magnum/blob/master/magnum/api/controllers/v1/baymodel.py#L65


  flavor-id for minion node
  master-flavor-id for master node

  So your K8s cluster could already have these two kinds of flavors (one
  for masters, one for minions).
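
  For illustration, roughly how those two fields are used when creating a
  baymodel (the flag names are the usual python-magnumclient ones; the
  flavor, image and keypair names below are placeholders):

  # placeholder names throughout; only --flavor-id and --master-flavor-id
  # matter for this discussion
  magnum baymodel-create --name k8s-baymodel \
    --coe kubernetes \
    --image-id fedora-atomic-latest \
    --keypair-id testkey \
    --external-network-id public \
    --flavor-id m1.small \
    --master-flavor-id m1.medium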


  2> One question about the Ironic case (I see you deploy on Ironic):
  I don't think the Magnum templates support the Ironic case yet,
  as the Ironic VLAN-related features are still under development and not
  merged (many patches are under review; one example is
  https://review.openstack.org/#/c/277853).


  I am not sure how you would use Ironic for a k8s cluster?

  --
  Best Regards, Eli Qiao (乔立勇)
  Intel OTC China


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] devstack magnum.conf

2016-08-05 Thread Kai Qiang Wu
Perhaps it is better to include more details about what error you hit. If it
is not a Magnum issue, check the devstack IRC channel; if it looks like a
Magnum error message, check the Magnum IRC channel or open a bug.
Sounds OK?
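
(For reference, the quickstart guide linked below sets Magnum up through its
devstack plugin rather than copying a sample config by hand; a minimal
local.conf sketch along those lines, with placeholder passwords:)

# minimal sketch only; see the quickstart guide for the full local.conf
cat > /opt/stack/devstack/local.conf <<'END'
[[local|localrc]]
DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_TOKEN=password
SERVICE_PASSWORD=password
ADMIN_PASSWORD=password
# pull in the Magnum API and conductor as a devstack plugin
enable_plugin magnum https://git.openstack.org/openstack/magnum
END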

On Sat, Aug 6, 2016 at 2:46 AM, Yasemin DEMİRAL (BİLGEM BTE) <
yasemin.demi...@tubitak.gov.tr> wrote:

>
> I followed this page, but when I run the ./stack command it gives an error.
> It didn't create the openstack user; the error is about the rabbitmq
> connection. It didn't work successfully :/
> --
> *From: *"Spyros Trigazis" 
> *To: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> *Sent: *Friday, 5 August 2016 19:32:11
> *Subject: *Re: [openstack-dev] [magnum] devstack magnum.conf
>
>
> Hi,
>
> better follow the quickstart guide [1].
>
> Cheers,
> Spyros
>
> [1] http://docs.openstack.org/developer/magnum/dev/quickstart.html
>
> On 5 August 2016 at 06:22, Yasemin DEMİRAL (BİLGEM BTE) <
> yasemin.demi...@tubitak.gov.tr> wrote:
>
>>
>> Hi
>>
>> I am trying Magnum on devstack. The "Configure magnum" section in the
>> manual has the command sudo cp etc/magnum/magnum.conf.sample
>> /etc/magnum/magnum.conf, but there is no magnum.conf.
>> What should I do?
>>
>> Thanks
>>
>> Yasemin
>>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] New Core Reviewers

2016-11-13 Thread Kai Qiang Wu
+1  for both

On Wed, Nov 9, 2016 at 1:11 PM, Kumari, Madhuri 
wrote:

> +1 for both.
>
> -Original Message-
> From: Adrian Otto [mailto:adrian.o...@rackspace.com]
> Sent: Tuesday, November 8, 2016 12:36 AM
> To: OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [Magnum] New Core Reviewers
>
> Magnum Core Team,
>
> I propose Jaycen Grant (jvgrant) and Yatin Karel (yatin) as new Magnum
> Core Reviewers. Please respond with your votes.
>
> Thanks,
>
> Adrian Otto
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev