[openstack-dev] [Containers][Magnum] Questions on dbapi

2014-12-31 Thread Hongbin Lu
Hi all,

I am writing tests for the Magnum dbapi. I have several questions about its
implementation and would appreciate it if someone could comment on them.

* Exceptions: The exceptions below were ported from Ironic but don't seem
to make sense in Magnum. I think we should purge all of them from the code
except InstanceAssociated and NodeAssociated. Does everyone agree?

class InstanceAssociated(Conflict):
    message = _("Instance %(instance_uuid)s is already associated with a "
                "node, it cannot be associated with this other node %(node)s")

class BayAssociated(InvalidState):
    message = _("Bay %(bay)s is associated with instance %(instance)s.")

class ContainerAssociated(InvalidState):
    message = _("Container %(container)s is associated with "
                "instance %(instance)s.")

class PodAssociated(InvalidState):
    message = _("Pod %(pod)s is associated with instance %(instance)s.")

class ServiceAssociated(InvalidState):
    message = _("Service %(service)s is associated with "
                "instance %(instance)s.")

NodeAssociated: used in the code, but its definition is missing

BayModelAssociated: used in the code, but its definition is missing

* APIs: the APIs below seem to be ported from the Ironic Node API, but it seems
we won't need them all. Again, I think we should purge the ones that do not
make sense. In addition, these APIs are defined but never called. Does it make
sense to remove them for now and add them back one by one when they are
actually needed?

def reserve_bay(self, tag, bay_id):
    """Reserve a bay."""

def release_bay(self, tag, bay_id):
    """Release the reservation on a bay."""

def reserve_baymodel(self, tag, baymodel_id):
    """Reserve a baymodel."""

def release_baymodel(self, tag, baymodel_id):
    """Release the reservation on a baymodel."""

def reserve_container(self, tag, container_id):
    """Reserve a container."""

def reserve_node(self, tag, node_id):
    """Reserve a node."""

def release_node(self, tag, node_id):
    """Release the reservation on a node."""

def reserve_pod(self, tag, pod_id):
    """Reserve a pod."""

def release_pod(self, tag, pod_id):
    """Release the reservation on a pod."""

def reserve_service(self, tag, service_id):
    """Reserve a service."""

def release_service(self, tag, service_id):
    """Release the reservation on a service."""
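
For context, the only usage pattern I can imagine for these reserve/release
pairs is the Ironic-style reservation sketched below (illustrative only; the
conductor_id argument and the update_bay call are hypothetical, not actual
Magnum code):

def update_bay_safely(dbapi, conductor_id, bay_id, values):
    # Sketch of the reservation pattern these APIs imply: lock the row,
    # do the work, and always release the lock afterwards.
    bay = dbapi.reserve_bay(tag=conductor_id, bay_id=bay_id)
    try:
        return dbapi.update_bay(bay.id, values)  # hypothetical update call
    finally:
        dbapi.release_bay(tag=conductor_id, bay_id=bay_id)

Since nothing in Magnum calls these APIs yet, I don't think we lose anything
by removing them until such a pattern is actually needed.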


Re: [openstack-dev] [magnum] Propose removing Dmitry Guryanov from magnum-core

2015-02-17 Thread Hongbin Lu
-1

On Mon, Feb 16, 2015 at 10:20 PM, Steven Dake (stdake) std...@cisco.com
wrote:

  The initial magnum core team was founded at a meeting where several
 people committed to being active in reviews and writing code for Magnum.
 Nearly all of the folks that made that initial commitment have been active
 in IRC, on the mailing lists, or participating in code reviews or code
 development.

  Out of our core team of 9 members [1], everyone has been active in some
 way except for Dmitry.  I propose removing him from the core team.  Dmitry
 is welcome to participate in the future if he chooses, and will be held to the
 same high standards as our last 4 new core members, who didn’t get an initial
 opt-in but were voted in by their peers.

  Please vote (-1 remove, abstain, +1 keep in core team) - a vote of +1
 from any core acts as a veto meaning Dmitry will remain in the core team.

  [1] https://review.openstack.org/#/admin/groups/473,members





Re: [openstack-dev] [heat] multicloud support for ec2

2015-01-28 Thread Hongbin Lu
Hi,

I would appreciate it if someone could reply to the email below. Thanks.

Best regards,
Hongbin

On Sun, Jan 25, 2015 at 12:03 AM, Hongbin Lu hongbin...@gmail.com wrote:

 Hi Heat team,

 I am looking for a solution to bridge between OpenStack and EC2. According
 to the documentation, it seems that Heat has multicloud support, but the
 remote cloud(s) must be OpenStack. I wonder if Heat supports multicloud in
 the context of a remote EC2 cloud. For example, does Heat support a remote
 stack that contains resources from an EC2 cloud? As a result, creating a
 stack would provision local OpenStack resources along with remote EC2
 resources.

 If this feature is not supported, will the dev team accept a blueprint
 and/or contributions for it?

 Thanks,
 Hongbin



[openstack-dev] [heat] multicloud support for ec2

2015-01-24 Thread Hongbin Lu
Hi Heat team,

I am looking for a solution to bridge between OpenStack and EC2. According
to the documentation, it seems that Heat has multicloud support, but the
remote cloud(s) must be OpenStack. I wonder if Heat supports multicloud in
the context of a remote EC2 cloud. For example, does Heat support a remote
stack that contains resources from an EC2 cloud? As a result, creating a
stack would provision local OpenStack resources along with remote EC2
resources.

If this feature is not supported, will the dev team accept a blueprint and/or
contributions for it?

Thanks,
Hongbin


Re: [openstack-dev] [magnum] Memory recommendation for running magnum with devstack

2015-03-19 Thread Hongbin Lu
Hi Surojit,

I think 8G of RAM and 80G of disk should be considered the minimum. The
guide will create 3 m1.small VMs (each with 2G of RAM and 20G of disk), and
2 volumes (5G each).

In your case, I am not sure why you got the memory error. You could probably
work around it by creating a flavor with fewer compute resources, then use
the new flavor to create the cluster:

# create a new flavor with 1G of RAM and 10G of disk
$ nova flavor-create m2.small 1234 1024 10 1

$ magnum baymodel-create --name testbaymodel --image-id fedora-21-atomic \
    --keypair-id testkey \
    --external-network-id $NIC_ID \
    --dns-nameserver 8.8.8.8 --flavor-id m2.small \
    --docker-volume-size 5

Thanks,
Hongbin

On Thu, Mar 19, 2015 at 11:06 PM, Surojit Pathak suro.p...@gmail.com
wrote:

 Team,

 Do we have a ballpark amount for the memory of the devstack machine to run
 magnum? I am running devstack as a VM with (4 VCPU/50G-Disk/8G-Mem) and
 running magnum on it as per[1].

 I am observing that the kube nodes often go into SHUTOFF state. If I do 'nova
 reset-state', the instance goes into ERROR state with a message indicating
 that it has run out of memory [2].

 Do we have any recommendation on the size of the RAM for the deployment
 described in[1]?

 --
 Regards,
 SURO

 [1] - https://github.com/stackforge/magnum/blob/master/doc/source/dev/dev-quickstart.rst
 [2] - internal error: process exited while connecting to monitor: Cannot
 set up guest memory 'pc.ram'





Re: [openstack-dev] [Magnum][Heat] Expression of Bay Status

2015-03-10 Thread Hongbin Lu
Hi Adrian,

On Mon, Mar 9, 2015 at 6:53 PM, Adrian Otto adrian.o...@rackspace.com
wrote:

 Magnum Team,

 In the following review, we have the start of a discussion about how to
 tackle bay status:

 https://review.openstack.org/159546

 I think a key issue here is that we are not subscribing to an event feed
 from Heat to tell us about each state transition, so we have a low degree
 of confidence that our state will match the actual state of the stack in
 real-time. At best, we have an eventually consistent state for Bay
 following a bay creation.

 Here are some options for us to consider to solve this:

 1) Propose enhancements to Heat (or learn about existing features) to emit
 a set of notifications upon state changes to stack resources so the state
 can be mirrored in the Bay resource.


A drawback of this option is that it increases the difficulty of
troubleshooting. In my experience with Heat (SoftwareDeployments in
particular), Ironic, and Trove, one of the most frequent errors I
encountered is that a provisioning resource stayed in a deploying state and
never went to completed. The reason is that it was waiting for a callback
signal from the provisioned resource to indicate completion, but the
callback signal was blocked for various reasons (e.g. incorrect firewall
rules, incorrect configs, etc.). Troubleshooting such a problem is
generally harder.



 2) Spawn a task to poll the Heat stack resource for state changes, and
 express them in the Bay status, and allow that task to exit once the stack
 reaches its terminal (completed) state.

 3) Don’t store any state in the Bay object, and simply query the heat
 stack for status as needed.
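
For option 2, a rough sketch of the polling task could look like the
following (this assumes python-heatclient; the bay object, its stack_id
attribute and its save() method are hypothetical, not existing Magnum code):

import time

TERMINAL_STATES = ('CREATE_COMPLETE', 'CREATE_FAILED',
                   'UPDATE_COMPLETE', 'UPDATE_FAILED',
                   'DELETE_COMPLETE', 'DELETE_FAILED')

def sync_bay_status(heat, bay, poll_interval=10):
    # Mirror the Heat stack status into the bay until the stack reaches a
    # terminal state, then let the task exit.
    while True:
        stack = heat.stacks.get(bay.stack_id)
        if bay.status != stack.stack_status:
            bay.status = stack.stack_status
            bay.save()  # persist the mirrored state
        if stack.stack_status in TERMINAL_STATES:
            return
        time.sleep(poll_interval)

The main downside I see is the extra polling load on Heat when many bays are
being created at once.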


 Are each of these options viable? Are there other options to consider?
 What are the pro/con arguments for each?

 Thanks,

 Adrian






Re: [openstack-dev] [magnum] swagger-codegen generated code for python-k8sclient

2015-03-23 Thread Hongbin Lu
Hi Madhuri,

Amazing work! I wouldn't be too concerned about the code duplication and
modularity issues, since the code is generated. However, there is another
concern here: if we find a bug in, or an improvement for, the generated code,
we will probably need to modify the generator. The question is whether
upstream will accept the modifications, and if so, how fast such patches will
go through.

I would prefer to maintain a fork of the generator. That way, we would have
full control of the generated code. Thoughts?

Thanks,
Hongbin

On Mon, Mar 23, 2015 at 10:11 AM, Steven Dake (stdake) std...@cisco.com
wrote:



   From: Madhuri Rai madhuri.ra...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Monday, March 23, 2015 at 1:53 AM
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [magnum] swagger-codegen generated code for
 python-k8sclient

   Hi All,

 This is to have a discussion on the blueprint for implementing
 python-k8client for magnum.

 https://blueprints.launchpad.net/magnum/+spec/python-k8sclient

 I have committed the code generated by swagger-codegen at
 https://review.openstack.org/#/c/166720/.
 But I feel the quality of the code generated by swagger-codegen is not
 good.

 Some of the points:
 1) There is lot of code duplication. If we want to generate code for two
 or more versions, same code is duplicated for each API version.
 2) There is no modularity. CLI code for all the APIs are written in same
 file.

 So, I would like your opinion on this. How should we proceed further?


  Madhuri,

  First off, spectacular that you figured out how to do this!  Great great
 job!  I suspected the swagger code would be a bunch of garbage.  Just
 looking over the review, the output isn’t too terribly bad.  It has some
 serious pep8 problems.

  Now that we have seen the swagger code generator works, we need to see
 if it produces useable output.  In other words, can the API be used by the
 magnum backend.  Google is “all-in” on swagger for their API model.
 Realistically maintaining a python binding would be a huge job.  If we
 could just use swagger for the short term, even though it's less than ideal,
 that would be my preference.  Even if it's suboptimal.  We can put a readme
 in the TLD saying the code was generated by a code generator and explain
 how to generate the API.

  One last question.  I didn’t see immediately by looking at the api, but
 does it support TLS auth?  We will need that.

  Super impressed!

  Regards
 -steve



 Regards,
 Madhuri Kumari






Re: [openstack-dev] [magnum] Issue on going through the quickstart guide

2015-02-22 Thread Hongbin Lu
Thanks Jay,

I checked the kubelet log. There are a lot of "Watch closed" errors like the
ones below. Here is the full log: http://fpaste.org/188964/46261561/ .

Status:Failure, Message:unexpected end of JSON input, Reason:
Status:Failure, Message:501: All the given peers are not reachable

Please note that my environment was set up by following the quickstart
guide. It seems that all the kube components were running (checked using the
systemctl status command), and all nodes can ping each other. Any further
suggestions?

Thanks,
Hongbin


On Sun, Feb 22, 2015 at 3:58 AM, Jay Lau jay.lau@gmail.com wrote:

 Can you check the kubelet log on your minions? Seems the container failed
 to start, there might be something wrong for your minions node. Thanks.

 2015-02-22 15:08 GMT+08:00 Hongbin Lu hongbin...@gmail.com:

 Hi all,

 I tried to go through the new redis example in the quickstart guide [1],
 but was not able to complete it. I was blocked when connecting to the
 redis slave container:

 $ docker exec -i -t $REDIS_ID redis-cli
 Could not connect to Redis at 127.0.0.1:6379: Connection refused

 Here is the container log:

 $ docker logs $REDIS_ID
 Error: Server closed the connection
 Failed to find master.

 It looks like the redis master disappeared at some point. I checked the
 status about every minute. Below is the output.

 $ kubectl get pod
 NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel   Pending
 redis-master                           kubernetes/redis:v1   10.0.0.4/   name=redis,redis-sentinel=true,role=master              Pending
                                        kubernetes/redis:v1
 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Pending

 $ kubectl get pod
 NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running
 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel   Running
 redis-master                           kubernetes/redis:v1   10.0.0.4/   name=redis,redis-sentinel=true,role=master              Running
                                        kubernetes/redis:v1

 $ kubectl get pod
 NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
 redis-master                           kubernetes/redis:v1   10.0.0.4/   name=redis,redis-sentinel=true,role=master              Running
                                        kubernetes/redis:v1
 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Failed
 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel   Running
 233fa7d1-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running

 $ kubectl get pod
 NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running
 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel   Running
 233fa7d1-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running
 3b164230-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.4/   name=redis-sentinel,redis-sentinel=true,role=sentinel   Pending

 Is anyone able to reproduce the problem above? If yes, I am going to file
 a bug.

 Thanks,
 Hongbin

 [1] https://github.com/stackforge/magnum/blob/master/doc/source/dev/dev-quickstart.rst#exercising-the-services-using-devstack





 --
 Thanks,

 Jay Lau (Guangya Liu)


[openstack-dev] [magnum] Issue on going through the quickstart guide

2015-02-21 Thread Hongbin Lu
Hi all,

I tried to go through the new redis example in the quickstart guide [1],
but was not able to complete it. I was blocked when connecting to the redis
slave container:

$ docker exec -i -t $REDIS_ID redis-cli
Could not connect to Redis at 127.0.0.1:6379: Connection refused

Here is the container log:

$ docker logs $REDIS_ID
Error: Server closed the connection
Failed to find master.

It looks like the redis master disappeared at some point. I checked the
status about every minute. Below is the output.

$ kubectl get pod
NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel   Pending
redis-master                           kubernetes/redis:v1   10.0.0.4/   name=redis,redis-sentinel=true,role=master              Pending
                                       kubernetes/redis:v1
512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Pending

$ kubectl get pod
NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running
51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel   Running
redis-master                           kubernetes/redis:v1   10.0.0.4/   name=redis,redis-sentinel=true,role=master              Running
                                       kubernetes/redis:v1

$ kubectl get pod
NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
redis-master                           kubernetes/redis:v1   10.0.0.4/   name=redis,redis-sentinel=true,role=master              Running
                                       kubernetes/redis:v1
512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Failed
51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel   Running
233fa7d1-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running

$ kubectl get pod
NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running
51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel   Running
233fa7d1-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running
3b164230-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.4/   name=redis-sentinel,redis-sentinel=true,role=sentinel   Pending

Is anyone able to reproduce the problem above? If yes, I am going to file a
bug.

Thanks,
Hongbin

[1] https://github.com/stackforge/magnum/blob/master/doc/source/dev/dev-quickstart.rst#exercising-the-services-using-devstack


Re: [openstack-dev] [magnum] Issue on going through the quickstart guide

2015-02-22 Thread Hongbin Lu
Hi Jay,

I tried the native k8s commands (in a fresh bay):

kubectl create -s http://192.168.1.249:8080 -f ./redis-master.yaml
kubectl create -s http://192.168.1.249:8080 -f ./redis-sentinel-service.yaml
kubectl create -s http://192.168.1.249:8080 -f ./redis-controller.yaml
kubectl create -s http://192.168.1.249:8080 -f ./redis-sentinel-controller.yaml

It still didn't work (same symptom as before). I cannot spot any difference
between the original yaml file and the parsed yaml file. Any other ideas?

Thanks,
Hongbin

On Sun, Feb 22, 2015 at 8:38 PM, Jay Lau jay.lau@gmail.com wrote:

 I suspect that there is some error after the pods/services are parsed. Can
 you please try the native k8s commands first, then debug the k8s API part to
 check the difference between the original json file and the parsed json
 file? Thanks!

 kubectl create -f .json xxx



 2015-02-23 1:40 GMT+08:00 Hongbin Lu hongbin...@gmail.com:

 Thanks Jay,

 I checked the kubelet log. There are a lot of "Watch closed" errors like the
 ones below. Here is the full log: http://fpaste.org/188964/46261561/ .

 Status:Failure, Message:unexpected end of JSON input, Reason:
 Status:Failure, Message:501: All the given peers are not reachable

 Please note that my environment was set up by following the quickstart
 guide. It seems that all the kube components were running (checked using
 the systemctl status command), and all nodes can ping each other. Any
 further suggestions?

 Thanks,
 Hongbin


 On Sun, Feb 22, 2015 at 3:58 AM, Jay Lau jay.lau@gmail.com wrote:

 Can you check the kubelet log on your minions? Seems the container
 failed to start, there might be something wrong for your minions node.
 Thanks.

 2015-02-22 15:08 GMT+08:00 Hongbin Lu hongbin...@gmail.com:

 Hi all,

 I tried to go through the new redis example in the quickstart guide
 [1], but was not able to complete it. I was blocked when connecting to the
 redis slave container:

 $ docker exec -i -t $REDIS_ID redis-cli
 Could not connect to Redis at 127.0.0.1:6379: Connection refused

 Here is the container log:

 $ docker logs $REDIS_ID
 Error: Server closed the connection
 Failed to find master.

 It looks like the redis master disappeared at some point. I checked the
 status about every minute. Below is the output.

 $ kubectl get pod
 NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel   Pending
 redis-master                           kubernetes/redis:v1   10.0.0.4/   name=redis,redis-sentinel=true,role=master              Pending
                                        kubernetes/redis:v1
 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Pending

 $ kubectl get pod
 NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running
 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel   Running
 redis-master                           kubernetes/redis:v1   10.0.0.4/   name=redis,redis-sentinel=true,role=master              Running
                                        kubernetes/redis:v1

 $ kubectl get pod
 NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
 redis-master                           kubernetes/redis:v1   10.0.0.4/   name=redis,redis-sentinel=true,role=master              Running
                                        kubernetes/redis:v1
 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Failed
 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel   Running
 233fa7d1-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running

 $ kubectl get pod
 NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running
 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/

Re: [openstack-dev] [magnum] Proposal for Madhuri Kumari to join Core Team

2015-04-30 Thread Hongbin Lu
+1!

On Apr 28, 2015, at 11:14 PM, Steven Dake (stdake) std...@cisco.com wrote:

 Hi folks,
 
 I would like to nominate Madhuri Kumari  to the core team for Magnum.  Please 
 remember a +1 vote indicates your acceptance.  A –1 vote acts as a complete 
 veto.
 
 Why Madhuri for core?
 * She participates on IRC heavily
 * She has been heavily involved in a really difficult project to remove
   Kubernetes kubectl and replace it with a native python language binding,
   which is really close to being done (TM)
 * She provides helpful reviews and her reviews are of good quality
 Some of Madhuri’s stats, where she performs in the pack with the rest of the 
 core team:
 
 reviews: http://stackalytics.com/?release=kilo&module=magnum-group
 commits: http://stackalytics.com/?release=kilo&module=magnum-group&metric=commits
 
 Please feel free to vote if you're a Magnum core contributor.
 
 Regards
 -steve
 


Re: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need a vote from the core team

2015-06-04 Thread Hongbin Lu
Could we have a new group magnum-ui-core and include magnum-core as a
subgroup, like the heat-coe-templates-core group?

Thanks,
Hongbin

From: Steven Dake (stdake) [mailto:std...@cisco.com]
Sent: June-04-15 1:58 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum][horizon] Making a dashboard for Magnum - need 
a vote from the core team

Hey folks,

I think it is critical for self-service needs that we have a Horizon dashboard
to represent Magnum.  I know the entire Magnum team has no experience in UI
development, but I have found at least one volunteer, Bradley Jones, to tackle
the work.

I am looking for more volunteers to tackle this high-impact effort to bring
Containers to OpenStack, either in the existing Magnum core team or as new
contributors.  If you're interested, please chime in on this thread.

As far as how to get patches approved, there are two models we can go with.

Option #1:
We add these UI folks to the magnum-core team and trust them not to +2/+A 
Magnum infrastructure code.  This also preserves us as one team with one 
mission.

Option #2:
We make a new core team magnum-ui-core.  This presents special problems if the 
UI contributor team isn't large enough to get reviews in.  I suspect Option #2 
will be difficult to execute.

Cores, please vote on Option #1, or Option #2, and Adrian can make a decision 
based upon the results.

Regards
-steve



[openstack-dev] [heat] Question about retrieving resource_list from ResourceGroup

2015-06-26 Thread Hongbin Lu
Hi team,

I would like to start my question by using a sample template:

heat_template_version: 2014-10-16
parameters:
  count:
    type: number
    default: 5
  removal_list:
    type: comma_delimited_list
    default: []
resources:
  sample_group:
    type: OS::Heat::ResourceGroup
    properties:
      count: {get_param: count}
      removal_policies: [{resource_list: {get_param: removal_list}}]
      resource_def:
        type: testnested.yaml
outputs:
  resource_list:
    value: # How to output a list of resources of sample_group? Like
           # resource_list: ['0', '1', '2', '3', '4']?

As shown above, this template has a resource group that contains resources
defined in a nested template. First, I am going to use this template to create
a stack. Then, I am going to update the stack to scale down the resource group
by specifying (through parameters) a subset of resources that I want to remove.
For example:

$ heat stack-create -f test.yaml test

$ heat stack-show test

$ heat stack-update -f test.yaml -P "count=3;removal_list=1,3" test

I want to know if it is possible to output a resource_list that lists all the
removal candidates, so that I can programmatically process that list to
compile another list (the removal_list) which will be passed back to the
template as a parameter. Any help will be appreciated.
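
For reference, the kind of processing I have in mind looks roughly like the
sketch below (assuming python-heatclient; the wrapper function itself is only
for illustration). The member names of a ResourceGroup are its indices, e.g.
'0', '1', ..., which is what removal_policies expects:

def removal_candidates(heat, stack_id, group_name='sample_group'):
    # The group's physical resource id is the nested stack that holds the
    # members, so listing that stack's resources yields the member names.
    group = heat.resources.get(stack_id, group_name)
    members = heat.resources.list(group.physical_resource_id)
    return [r.resource_name for r in members]

The returned names can then be filtered and passed back through the
removal_list parameter, but an in-template output would be much cleaner.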

Thanks,
Hongbin


Re: [openstack-dev] [Magnum] Discuss configurable-coe-api-port Blueprint

2015-06-13 Thread Hongbin Lu
Thanks Adrian. Sounds good.

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: June-13-15 2:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] Discuss configurable-coe-api-port 
Blueprint

Hongbin,

Good use case. I suggest that we add a parameter to magnum bay-create that will 
allow the user to override the baymodel.apiserver_port attribute with a new 
value that will end up in the bay.api_address attribute as part of the URL. 
This approach assumes implementation of the magnum-api-address-url blueprint. 
This way we solve for the use case, and don't need a new attribute on the bay 
resource that requires users to concatenate multiple attribute values in order 
to get a native client tool working.
Adrian

On Jun 12, 2015, at 6:32 PM, Hongbin Lu hongbin...@huawei.com wrote:
A use case could be that the cloud is behind a proxy and the API port is
filtered. In this case, users have to start the service on an alternative port.

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: June-12-15 2:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] Discuss configurable-coe-api-port 
Blueprint

Thanks for raising this for discussion. Although I do think that the API port 
number should be expressed in a URL that the local client can immediately use 
for connecting a native client to the API, I am not convinced that this needs 
to be a separate attribute on the Bay resource.

In general, I think it’s a reasonable assumption that nova instances will have 
unique IP addresses assigned to them (public or private is not an issue here) 
so unique port numbers for running the API services on alternate ports seems 
like it may not be needed. I’d like to have input from at least one Magnum user 
explaining an actual use case for this feature before accepting this blueprint.

One possible workaround for this would be to instruct those who want to run 
nonstandard ports to copy the heat template, and specify a new heat template as 
an alternate when creating the BayModel, which can implement the port number as 
a parameter. If we learn that this happens a lot, we should revisit this as a 
feature in Magnum rather than allowing it through an external workaround.

I’d like to have a generic feature that allows for arbitrary key/value pairs 
for parameters and values to be passed to the heat stack create call so that 
this, and other values can be passed in using the standard magnum client and 
API without further modification. I’m going to look to see if we have a BP for 
this, and if not, I will make one.

Adrian



On Jun 11, 2015, at 6:05 PM, Kai Qiang Wu (Kennan) wk...@cn.ibm.com wrote:

If I understand the bp correctly,

the apiserver_port is for the public access or API call service endpoint. If
that is the case, users would use that info:

http(s)://ip:port

so the port is good information for users.


If we believe the above assumption is right, then:

1) Some users do not need to change the port, since the heat template has a
default hard-coded port.

2) If some users want to change the port (through heat, we can do that), we
need to add such flexibility for users. That's what the bp
https://blueprints.launchpad.net/magnum/+spec/configurable-coe-api-port tries
to solve.

It depends on how end users use magnum.


More input about this is welcome. If many of us think it is not necessary to
customize the ports, we can drop the bp.


Thanks


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!

Jay Lau ---06/11/2015 01:17:42 PM---I think that we have a similar
bp before: https://blueprints.launchpad.net/magnum/+spec/override-nat

From: Jay Lau jay.lau@gmail.com
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: 06/11/2015 01:17 PM
Subject: Re: [openstack-dev] [Magnum] Discuss configurable-coe-api-port 
Blueprint




I think that we have a similar bp before: 
https://blueprints.launchpad.net/magnum/+spec/override-native-rest-port

 I had some discussion with Larsks before; it seems that it does not make much
sense to customize this port, as the kubernetes/swarm/mesos cluster will be
created by heat and end users do not need to care about the ports. Different
kubernetes/swarm/mesos clusters will have different IP addresses, so

Re: [openstack-dev] [Magnum] Discuss configurable-coe-api-port Blueprint

2015-06-12 Thread Hongbin Lu
A use case could be that the cloud is behind a proxy and the API port is
filtered. In this case, users have to start the service on an alternative port.

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: June-12-15 2:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] Discuss configurable-coe-api-port 
Blueprint

Thanks for raising this for discussion. Although I do think that the API port 
number should be expressed in a URL that the local client can immediately use 
for connecting a native client to the API, I am not convinced that this needs 
to be a separate attribute on the Bay resource.

In general, I think it’s a reasonable assumption that nova instances will have 
unique IP addresses assigned to them (public or private is not an issue here) 
so unique port numbers for running the API services on alternate ports seems 
like it may not be needed. I’d like to have input from at least one Magnum user 
explaining an actual use case for this feature before accepting this blueprint.

One possible workaround for this would be to instruct those who want to run 
nonstandard ports to copy the heat template, and specify a new heat template as 
an alternate when creating the BayModel, which can implement the port number as 
a parameter. If we learn that this happens a lot, we should revisit this as a 
feature in Magnum rather than allowing it through an external workaround.

I’d like to have a generic feature that allows for arbitrary key/value pairs 
for parameters and values to be passed to the heat stack create call so that 
this, and other values can be passed in using the standard magnum client and 
API without further modification. I’m going to look to see if we have a BP for 
this, and if not, I will make one.

Adrian



On Jun 11, 2015, at 6:05 PM, Kai Qiang Wu (Kennan) wk...@cn.ibm.com wrote:

If I understand the bp correctly,

the apiserver_port is for the public access or API call service endpoint. If
that is the case, users would use that info:

http(s)://ip:port

so the port is good information for users.


If we believe the above assumption is right, then:

1) Some users do not need to change the port, since the heat template has a
default hard-coded port.

2) If some users want to change the port (through heat, we can do that), we
need to add such flexibility for users. That's what the bp
https://blueprints.launchpad.net/magnum/+spec/configurable-coe-api-port tries
to solve.

It depends on how end users use magnum.


More input about this is welcome. If many of us think it is not necessary to
customize the ports, we can drop the bp.


Thanks


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!

Jay Lau ---06/11/2015 01:17:42 PM---I think that we have a similar
bp before: https://blueprints.launchpad.net/magnum/+spec/override-nat

From: Jay Lau jay.lau@gmail.com
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: 06/11/2015 01:17 PM
Subject: Re: [openstack-dev] [Magnum] Discuss configurable-coe-api-port 
Blueprint




I think that we have a similar bp before: 
https://blueprints.launchpad.net/magnum/+spec/override-native-rest-port

 I had some discussion with Larsks before; it seems that it does not make much
sense to customize this port, as the kubernetes/swarm/mesos cluster will be
created by heat and end users do not need to care about the ports. Different
kubernetes/swarm/mesos clusters will have different IP addresses, so there
will be no port conflict.

2015-06-11 9:35 GMT+08:00 Kai Qiang Wu wk...@cn.ibm.com:
I’m moving this whiteboard to the ML so we can have some discussion to refine 
it, and then go back and update the whiteboard.

Source:https://blueprints.launchpad.net/magnum/+spec/configurable-coe-api-port


@Sdake and I have some discussion now, but may need more input from your side.


1. Keep apiserver_port in baymodel; it may only allow admins (if we involve
policy) to create that baymodel, which gives less flexibility to other users.


2. apiserver_port was designed into baymodel; moving it from baymodel to bay
is a big change, and we may have other, better ways. (This may also apply to
other configuration fields, like dns-nameserver, etc.)



Thanks



Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China 

Re: [openstack-dev] [Magnum] Add periodic task threading for conductor server

2015-06-14 Thread Hongbin Lu
I think option #3 is the most desirable choice from a performance point of
view, because Magnum is going to support multiple conductors and all
conductors share the same DB. However, if each conductor runs its own thread
for the periodic task, we will end up with multiple instances of the task
doing the same job (syncing Heat's state to Magnum's DB). I think Magnum
should have only one instance of the periodic task, since replicated
instances of the task will stress computing and networking resources.
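
Just to illustrate the shape of such a task, whichever process ends up owning
it, a rough sketch using oslo.service's periodic_task helpers could look like
the following (the class name, spacing value, and sync body are hypothetical,
not existing Magnum code):

from oslo_config import cfg
from oslo_service import periodic_task


class BaySyncTasks(periodic_task.PeriodicTasks):
    # Single owner of the Heat-to-Magnum state sync.

    @periodic_task.periodic_task(spacing=60, run_immediately=True)
    def sync_bay_status(self, context):
        # Hypothetical: compare each bay's Heat stack status with the bay
        # status stored in the DB, and update the DB when they differ.
        pass


# In the owning service, run the tasks on its thread group, e.g.:
# tg.add_dynamic_timer(BaySyncTasks(cfg.CONF).run_periodic_tasks,
#                      context=admin_context)

Whatever the mechanism, the point is that exactly one of these should be
running at a time.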

Best regards,
Hongbin

From: Qiao,Liyong [mailto:liyong.q...@intel.com]
Sent: June-14-15 9:38 PM
To: openstack-dev@lists.openstack.org
Cc: qiaoliy...@gmail.com
Subject: [openstack-dev] [Magnum] Add periodic task threading for conductor 
server

hi magnum team,

I am planning to add a periodic task for the magnum conductor service; it will
be good to sync task status with heat and the container service. I already
have a WIP patch [1], and I'd like to start a discussion on the implementation.

Currently, the conductor service is an rpc server, and it has several handlers:

endpoints = [
    docker_conductor.Handler(),
    k8s_conductor.Handler(),
    bay_conductor.Handler(),
    conductor_listener.Handler(),
]

All handlers run in the rpc server.

1. My patch [1] adds periodic task functions to each handler (if it requires
such tasks), sets these functions up when the rpc server starts, and adds
them to a thread group. So, for example:

if we have tasks in bay_conductor.Handler() and docker_conductor.Handler(),
then 2 threads are added to the current service's thread group, and each
thread runs its own periodic tasks.

The advantage is that each handler's task job is separated into its own
thread, but Hongbin's concern is whether this will have some impact on
horizontal scalability.

2. Another implementation is to put all tasks in one thread; this thread will
run all tasks (for bay, k8s, docker, etc.), just like sahara does; see [2].

3. The last one is to start a new service in a separate process to run the
tasks (I think this will be too heavy/wasteful).

I'd like to hear your suggestions; thanks in advance.

[1] https://review.openstack.org/#/c/187090/4
[2] 
https://github.com/openstack/sahara/blob/master/sahara/service/periodic.py#L118


--

BR, Eli(Li Yong)Qiao


Re: [openstack-dev] [magnum] Proposing Kai Qiang Wu (Kennan) for Core for Magnum

2015-05-31 Thread Hongbin Lu
+1!

From: Steven Dake (stdake) [mailto:std...@cisco.com]
Sent: May-31-15 1:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Proposing Kai Qiang Wu (Kennan) for Core for 
Magnum

Hi core team,

Kennan (Kai Qiang Wu's nickname) has really done a nice job in Magnum 
contributions.  I would like to propose Kennan for the core reviewer team.  I 
don't think we necessarily need more core reviewers on Magnum, but Kennan has 
demonstrated a big commitment to Magnum and is a welcome addition in my opinion.

For the lifetime of the project, Kennan has contributed 8% of the reviews, and 
8% of the commits.  Kennan is also active in IRC.  He meets my definition of 
what a core developer for Magnum should be.

Consider my proposal to be a +1 vote.

Please vote +1 to approve or vote -1 to veto.  A single -1 vote acts as a veto, 
meaning Kennan would not be approved for the core team.  I believe we require 3 
+1 core votes presently, but since our core team is larger now then when we 
started, I'd like to propose at our next team meeting we formally define the 
process by which we accept new cores post this proposal for Magnum into the 
magnum-core group and document it in our wiki.

I'll leave voting open for 1 week until June 6th to permit appropriate time for 
the core team to vote.  If there is a unanimous decision or veto prior to that 
date, I'll close voting and make the appropriate changes in gerrit as 
appropriate.

Thanks
-steve



Re: [openstack-dev] [Magnum] Discuss mesos-bay-type Blueprint

2015-06-01 Thread Hongbin Lu
Hi Jay,

For your question “what is the mesos object that we want to manage”, the short 
answer is that it depends. There are two options I can think of:

1. Don’t manage any object from Marathon directly. Instead, we can focus on 
the existing Magnum objects (i.e. container), and implement them by using 
Marathon APIs where possible. Take the abstraction ‘container’ as an example: 
for a swarm bay, container will be implemented by calling docker APIs; for a 
mesos bay, container could be implemented by using Marathon APIs (it looks 
like Marathon’s ‘app’ object can be leveraged to operate a docker container). 
The effect is that Magnum will have a set of common abstractions that are 
implemented differently by each bay type (see the rough sketch below).

2. Do manage a few Marathon objects (i.e. app). The effect is that Magnum 
will have additional API object(s) from Marathon (like what we have for the 
existing k8s objects: pod/service/rc).
Thoughts?
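
To make option 1 a bit more concrete, the mapping could look roughly like the
sketch below. The payload fields follow Marathon's documented /v2/apps REST
API; the wrapper function and its parameters are hypothetical, not proposed
Magnum code:

import requests

def create_container_via_marathon(marathon_url, name, image, memory_mb=256):
    # Run a docker image as a Marathon 'app', which is roughly what Magnum's
    # 'container' abstraction would map to in a mesos bay.
    app = {
        'id': name,
        'mem': memory_mb,
        'container': {
            'type': 'DOCKER',
            'docker': {'image': image, 'network': 'BRIDGE'},
        },
    }
    resp = requests.post(marathon_url + '/v2/apps', json=app)
    resp.raise_for_status()
    return resp.json()

The same kind of thin mapping should be possible for delete/list/show, which
is why option 1 feels doable without exposing Marathon objects in the Magnum
API.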

Thanks
Hongbin

From: Jay Lau [mailto:jay.lau@gmail.com]
Sent: May-29-15 1:35 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] Discuss mesos-bay-type Blueprint

I want to mention that there is another mesos framework named as chronos: 
https://github.com/mesos/chronos , it is used for job orchestration.

For others, please refer to my comments in line.

2015-05-29 7:45 GMT+08:00 Adrian Otto adrian.o...@rackspace.com:
I’m moving this whiteboard to the ML so we can have some discussion to refine 
it, and then go back and update the whiteboard.

Source: https://blueprints.launchpad.net/magnum/+spec/mesos-bay-type

My comments in-line below.


Begin forwarded message:

From: hongbin hongbin...@huawei.com
Subject: COMMERCIAL:[Blueprint mesos-bay-type] Add support for mesos bay type
Date: May 28, 2015 at 2:11:29 PM PDT
To: adrian.o...@rackspace.com
Reply-To: hongbin hongbin...@huawei.com

Blueprint changed by hongbin:

Whiteboard set to:

I did a preliminary research on possible implementations. I think this BP can 
be implemented in two steps.
1. Develop a heat template for provisioning mesos cluster.
2. Implement a magnum conductor for managing the mesos cluster.

Agreed, thanks for filing this blueprint!
For 2, the conductor is mainly used to manage objects for CoE, k8s has pod, 
service, rc, what is the mesos object that we want to manage? IMHO, mesos is a 
resource manager and it needs to be worked with some frameworks to provide 
services.


First, I want to emphasize that mesos is not a service (it looks like a
library). Therefore, mesos doesn't have a web API, and most users don't
use mesos directly. Instead, they use a mesos framework that sits on top
of mesos. Therefore, a mesos bay needs to have a mesos framework pre-
configured so that magnum can talk to the framework to manage the bay.
There are several framework choices. Below is a list of frameworks that
look like a fit (in my opinion). An exhaustive list of frameworks can be
found here [1].

1. Marathon [2]
This is a framework controlled by a company (Mesosphere [3]). It is open
source, though. It supports running apps on clusters of docker containers. It
is probably the most widely used mesos framework for long-running applications.

Marathon offers a REST API, whereas Aurora does not (unless one has 
materialized in the last month). This was the one we discussed at our Vancouver 
design summit, and we agreed that those wanting to use Apache Mesos are 
probably expecting this framework.


2. Aurora [4]
This is a framework governed by Apache Software Foundation. It looks very 
similar to Marathon, but maybe more advanced in nature. It has been used by 
Twitter at scale. Here [5] is a detailed comparison between Marathon and Aurora.

We should have an alternate bay template for Aurora in our contrib directory. 
If users like Aurora better than Marathon, we can discuss making it the default 
template, and put the Marathon template in the contrib directory.


3. Kubernetes/Docker swarm
It looks like swarm-mesos is not ready yet. I cannot find anything about it
(besides several videos on YouTube). The kubernetes-mesos integration is there
[6]. In theory, magnum should be able to deploy a mesos bay and talk to the
bay through the kubernetes API. An advantage is that we can reuse the
kubernetes conductor. A disadvantage is that it is not a 'mesos' way to manage
containers. Users from the mesos community are probably more comfortable
managing containers through Marathon/Aurora.

If you want Kubernetes, you should use the Kubernetes bay type. If you want 
Kubernetes controlling Mesos, make a custom Heat template for that, and we can 
put it into contrib.
Agree, even using Mesos as resource manager, end user can still use magnum API 
to create pod, service, and rc.

If you want Swarm controlling Mesos, then you want BOTH a Swarm bay *and* a 
Mesos bay, 

Re: [openstack-dev] [openstackclient] [magnum] Review of object and actions for magnumclient implementation

2015-05-24 Thread Hongbin Lu
Hi Ronald,

I think the “update” action is definitely appropriate to use, since it is not 
specific to magnum (Heat and Ironic use it as well). Looking through the 
existing list of actions [1], it looks like the start/stop actions fit into 
the openstackclient resume/suspend actions. The “execute” action looks to be 
magnum-specific, and it is rarely used. We could either skip adding it to 
openstackclient or propose a new action there.

Thanks,
Hongbin

[1] http://docs.openstack.org/developer/python-openstackclient/commands.html

From: Ronald Bradford [mailto:m...@ronaldbradford.com]
Sent: May-24-15 2:14 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [openstackclient] [magnum] Review of object and 
actions for magnumclient implementation


I have outlined in the blueprint  
(https://blueprints.launchpad.net/python-openstackclient/+spec/magnum-support) 
the object and actions mappings that are currently available in the 
magnumclient.

I have separated the list of actions that are presently used and actions that 
are not for review and discussion. Specifically These actions DO NOT match.

bay [ update ]
container [ execute | start | stop ]
pod [ update ]
replication controller [ update ]
service [ update ]

I would appreciate feedback on whether the actions update, execute, start,
and stop are appropriate to use.

Regards

Ronald


Re: [openstack-dev] [Magnum] Continuing with heat-coe-templates

2015-06-29 Thread Hongbin Lu
Agree. The motivation for pulling the templates out of the Magnum tree was the 
hope that they could be leveraged by a larger community and get more feedback. 
However, that is unlikely to be the case in practice, because different people 
have their own versions of the templates for addressing different use cases. 
It has proven hard to consolidate different templates even when they share a 
large amount of duplicated code (recall that we had to copy-and-paste the 
original template to add support for Ironic and CoreOS). So, +1 for stopping 
usage of heat-coe-templates.

Best regards,
Hongbin

-Original Message-
From: Tom Cammann [mailto:tom.camm...@hp.com] 
Sent: June-29-15 11:16 AM
To: openstack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Magnum] Continuing with heat-coe-templates

Hello team,

I've been doing work in Magnum recently to align our templates with the 
upstream templates from larsks/heat-kubernetes[1]. I've also been porting 
these changes to the stackforge/heat-coe-templates[2] repo.

I'm currently not convinced that maintaining a separate repo for Magnum 
templates (stackforge/heat-coe-templates) is beneficial for Magnum or the 
community.

Firstly it is very difficult to draw a line on what should be allowed into the 
heat-coe-templates. We are currently taking out changes[3] that introduced 
useful autoscaling capabilities in the templates but that didn't fit the 
Magnum plan. If we are going to treat the heat-coe-templates in that way then 
this extra repo will not allow organic development of new and old container 
engine templates that are not tied into Magnum.
Another recent change[4] in development is smart autoscaling of bays which 
introduces parameters that don't make a lot of sense outside of Magnum.

There are also difficult interdependency problems between the templates and the 
Magnum project such as the parameter fields. If a required parameter is added 
into the template the Magnum code must be also updated in the same commit to 
avoid functional test failures. This can be avoided using Depends-On: 
#xx
feature of gerrit, but it is an additional overhead and will require some CI 
setup.

Additionally we would have to version the templates, which I assume would be 
necessary to allow for packaging. This brings with it is own problems.

As far as I am aware there are no other people using the heat-coe-templates 
beyond the Magnum team, if we want independent growth of this repo it will need 
to be adopted by other people rather than Magnum commiters.

I don't see the heat templates as a dependency of Magnum, I see them as a truly 
fundamental part of Magnum which is going to be very difficult to cut out and 
make reusable without compromising Magnum's development process.

I would propose to delete/deprecate the usage of heat-coe-templates and 
continue with the usage of the templates in the Magnum repo. How does the team 
feel about that?

If we do continue with the large effort required to try and pull out the 
templates as a dependency, then we will need to increase the visibility of the 
repo and greatly increase the reviews/commits on it. We also have a fairly 
significant backlog of work to align the heat-coe-templates with the templates 
in the Magnum tree.

Thanks,
Tom

[1] https://github.com/larsks/heat-kubernetes
[2] https://github.com/stackforge/heat-coe-templates
[3] https://review.openstack.org/#/c/184687/
[4] https://review.openstack.org/#/c/196505/




Re: [openstack-dev] [magnum][blueprint] magnum-service-list

2015-07-29 Thread Hongbin Lu
Suro,

I think service/pod/rc are k8s-specific. +1 for Jay’s suggestion about 
renaming the COE-specific commands, since the new naming style looks 
consistent with other OpenStack projects. In addition, it will eliminate name 
collisions between different COEs. Also, if we are going to support pluggable 
COEs, adding a prefix to COE-specific commands is unavoidable.

Best regards,
Hongbin

From: SURO [mailto:suro.p...@gmail.com]
Sent: July-29-15 4:03 PM
To: Jay Lau
Cc: s...@yahoo-inc.com; OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [openstack-dev] [magnum][blueprint] magnum-service-list

Hi Jay,

'service'/'pod'/'rc' are conceptual abstraction at magnum level. Yes, the 
abstraction was inspired from the same in kubernetes, but the data stored in DB 
about a 'service' is properly abstracted and not k8s-specific at the top level.

If we plan to change this to 'k8s-service-list', the same applies for even 
creation and other actions. This will give rise to COE-specific command and 
concepts and which may proliferate further. Instead, we can abstract swarm's 
service concept under the umbrella of magnum's 'service' concept without 
creating k8s-service and swarm-service.

I suggest we should keep the concept/abstraction at Magnum level, as it is.


Regards,

SURO

irc//freenode: suro-patz
On 7/28/15 7:59 PM, Jay Lau wrote:
Hi Suro,
Sorry for late. IMHO, even the magnum service-list is getting data from DB, 
but the DB is actually persisting some data for Kubernetes service, so my 
thinking is it possible to change magnum service-list to magnum 
k8s-service-list, same for pod and rc.
I know this might bring some trouble for backward compatibility issue, not sure 
if it is good to do such modification at this time. Comments?
Thanks

2015-07-27 20:12 GMT-04:00 SURO suro.p...@gmail.com:
Hi all,
As we did not hear back further on the requirements of this blueprint, I propose 
to keep the existing behavior without any modification.

We would like to settle the decision on this blueprint at our next weekly IRC 
meeting [1].


Regards,

SURO

irc//freenode: suro-patz



[1] - https://wiki.openstack.org/wiki/Meetings/Containers (2015-07-28, UTC 2200 Tuesday)



On 7/21/15 4:54 PM, SURO wrote:
Hi all, [special attention: Jay Lau] The registered bp [1] asks for the 
following implementation:

  *   'magnum service-list' should be similar to 'nova service-list'
  *   'magnum service-list' should be moved to 'magnum k8s-service-list'. The 
same holds true for 'pod-list'/'rc-list'.
As I dug into the details, I found:

  *   'magnum service-list' fetches data from the OpenStack DB [2] instead of 
the COE endpoint, so technically it is not k8s-specific. Magnum is serving data 
for objects modeled as 'service', just the way we are catering for 'magnum 
container-list' in the case of a swarm bay.
  *   If magnum provides a way to get the COE endpoint details, users can use 
native tools to fetch the status of the COE-specific objects, viz. 'kubectl get 
services' here.
  *   nova has a lot more backend services, e.g. cert, scheduler, consoleauth, 
compute, etc., in comparison to magnum's conductor only. Also, not all the APIs 
have this 'service-list' available.
With these arguments in view, can we have some more explanation/clarification 
in favor of the ask in the blueprint?
[1] - https://blueprints.launchpad.net/magnum/+spec/magnum-service-list
[2] - https://github.com/openstack/magnum/blob/master/magnum/objects/service.py#L114

--

Regards,

SURO

irc//freenode: suro-patz



--
Thanks,
Jay Lau (Guangya Liu)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum]horizontal scalability

2015-08-03 Thread Hongbin Lu
Adrian,

If the reason to avoid leader election is that it is complicated and error 
prone, that argument may not hold. Leader election is complicated in a pure 
distributed system in which there is no centralized storage. However, Magnum 
has a centralized database, so it is possible to implement a very simple leader 
election algorithm. For example, we can let each conductor register itself in 
the DB and elect the first registered conductor to be the leader. How about 
that?
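
To make that concrete, here is a rough sketch of the idea. The Conductor model 
and field names below are hypothetical (not the actual Magnum schema), so treat 
this as illustrative only:

import socket
from datetime import datetime, timedelta

from sqlalchemy import Column, DateTime, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Conductor(Base):
    # Hypothetical table; Magnum's real schema may differ.
    __tablename__ = 'conductor'
    id = Column(Integer, primary_key=True)
    host = Column(String(255), unique=True)
    created_at = Column(DateTime)
    heartbeat_at = Column(DateTime)

HEARTBEAT_TIMEOUT = timedelta(seconds=60)

def register(session):
    # Each conductor registers itself and periodically refreshes a heartbeat.
    host = socket.gethostname()
    row = session.query(Conductor).filter_by(host=host).first()
    if row is None:
        row = Conductor(host=host, created_at=datetime.utcnow())
        session.add(row)
    row.heartbeat_at = datetime.utcnow()
    session.commit()

def is_leader(session):
    # The earliest-registered conductor that still heartbeats is the leader.
    cutoff = datetime.utcnow() - HEARTBEAT_TIMEOUT
    first = (session.query(Conductor)
             .filter(Conductor.heartbeat_at > cutoff)
             .order_by(Conductor.created_at.asc())
             .first())
    return first is not None and first.host == socket.gethostname()

The only subtle parts are picking a sensible heartbeat timeout and keeping the 
registration update transactional, which is still far simpler than a 
distributed election protocol.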

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: August-03-15 12:49 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum]horizontal scalability


On Aug 2, 2015, at 7:40 PM, 王华 
wanghua.hum...@gmail.commailto:wanghua.hum...@gmail.com wrote:

Hi all,

As discussed in the Vancouver Summit, we are going to drop the bay lock 
implementation. Instead, each conductor will call Heat concurrently and rely on 
heat for concurrency control. However, I think we need an approach for state 
convergence from heat to magnum. Either periodic task [1] or heat notification 
[2] looks like a candidate.
[1] https://blueprints.launchpad.net/magnum/+spec/add-periodic-task
[2] http://lists.openstack.org/pipermail/openstack-dev/2015-March/058898.html
--hongbin
If we use a periodic task to sync state from heat to magnum, I think we should 
make the periodic task an independent process and have magnum-conductor only 
operate on heat stacks.
How do we make the periodic task highly available?
1. We can run several periodic tasks.
2. Or we can use a leader election mechanism so that only one periodic task is 
running and the other periodic tasks are waiting.
Shall we make the periodic task an independent process? How do we make it 
highly available?
Good question. The traditional solution for handling this in a distributed 
system is to hold a leader election. The elected leader is responsible for 
dispatching the job to a queue that one available worker will pick up and run. 
However, that may not actually be needed in our case. Consider the question:

What harm will come if all master nodes in the cluster perform the same 
periodic task?

One drawback is that more resources will be consumed than necessary 
(efficiency). As long as updates to the Magnum database are transactional, 
having concurrent updates to the same bay is actually not something we expect 
would result in data corruption. Worst case the same update would be processed 
multiple times. The advantage of using this approach is that we would not need 
to implement any form of leader selection. This would keep our implementation 
simpler, and less error prone.

We could still supervise each periodic task process so that if it ends up 
crashing on a node, it is restarted. This is simple to do from a parent 
process that calls os.wait() on the child task.
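
For example, a bare-bones supervisor along those lines might look like the 
following. This is purely illustrative; run_periodic_tasks() is a stand-in for 
whatever the child actually does (e.g. syncing bay state from Heat):

import os
import time

def run_periodic_tasks():
    # Placeholder for the real periodic task loop.
    while True:
        time.sleep(60)

def supervise():
    # Fork the periodic task and restart it whenever it exits.
    while True:
        pid = os.fork()
        if pid == 0:
            # Child: run the periodic task loop and never return normally.
            try:
                run_periodic_tasks()
            finally:
                os._exit(1)
        # Parent: block in os.waitpid() (the per-child form of os.wait())
        # until the child dies, then loop around and restart it.
        os.waitpid(pid, 0)
        time.sleep(5)  # back off briefly before restarting

if __name__ == '__main__':
    supervise()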

Thoughts?

Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]password for registry v2

2015-08-13 Thread Hongbin Lu
Hi Wanghua,

For the question about how to pass user password to bay nodes, there are 
several options here:

1.   Directly inject the password into bay nodes via cloud-init. This might 
be the simplest solution. I am not sure if it is OK from a security point of 
view.

2.   Inject a scoped Keystone trust into bay nodes and use it to fetch the user 
password from Barbican (suggested by Adrian).

3.   Leverage the solution proposed by Kevin Fox [1]. This might be a 
long-term solution.

For the security concerns about storing a credential in a config file, I need 
clarification. What is the config file? Is it a docker registry v2 config file? 
I guess the credential stored there will be used to talk to swift. Is that 
correct? In general, it is insecure to store user credentials inside a VM, 
because anyone can take a snapshot of the VM and boot another VM from the 
snapshot. Maybe storing a scoped credential in the config file could mitigate 
the security risk. Not sure if there is a better solution.

[1] https://review.openstack.org/#/c/186617/
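
On option 2, a rough sketch of creating a scoped trust with python-keystoneclient 
might look like the following. The parameter names follow the v3 trusts API as I 
remember it, and the role name is just a placeholder, so please double-check 
both against the real client and your deployment:

from keystoneclient.v3 import client as ks_client

def create_scoped_trust(auth_url, user_token, project_id,
                        trustor_user_id, trustee_user_id):
    ks = ks_client.Client(token=user_token, auth_url=auth_url,
                          project_id=project_id)
    # Delegate only the role needed to read the secret from Barbican,
    # scoped to the user's project.
    trust = ks.trusts.create(trustor_user=trustor_user_id,
                             trustee_user=trustee_user_id,
                             project=project_id,
                             role_names=['member'],
                             impersonation=True)
    return trust.id

The bay node would then use a trust-scoped token to pull the user's password 
from Barbican, instead of having the password itself baked into the image or 
config file.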

Best regards,
Hongbin

From: 王华 [mailto:wanghua.hum...@gmail.com]
Sent: August-13-15 4:06 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum]password for registry v2

Hi all,

In order to add registry v2 to bay nodes[1], authentication information is 
needed for the registry to upload and download files from swift. The swift 
storage-driver in registry now needs the parameters as described in [2]. User 
password is needed. How can we get the password?

1. Let user pass password in baymodel-create.
2. Use user token to get password from keystone

Is it suitable to store user password in db?

It may be insecure to store the password in the DB and expose it to the user in 
a config file even if the password is encrypted. Heat stored the user password 
in its DB before, and has now changed to Keystone trusts [3]. But if we use a 
Keystone trust, the swift storage-driver does not support it. If we use a trust, 
we expose the magnum user's credential in a config file, which is also insecure.

Is there a secure way to implement this bp?

[1] https://blueprints.launchpad.net/magnum/+spec/registryv2-in-master
[2] 
https://github.com/docker/distribution/blob/master/docs/storage-drivers/swift.md
[3] https://wiki.openstack.org/wiki/Keystone/Trusts

Regards,
Wanghua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal withHyper

2015-07-21 Thread Hongbin Lu
Adrian,

I definitely agree with #1 - #5. I am just trying to understand the nova virt 
driver for Hyper approach. As Peng mentioned, Hyper is a hypervisor-based 
substitute for containers, but magnum is not making a special virt driver for 
container host creation (instead, magnum leverages the existing virt drivers to 
do that, such as libvirt and ironic). What I don’t understand is why we need a 
dedicated nova-hyper virt driver for host creation, but we are not doing the 
same for docker host creation. What makes Hyper special so that we have to make 
a virt driver for it? Or am I missing something?

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: July-19-15 11:11 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal withHyper

Peng,

You are not the first to think this way, and it's one of the reasons we did not 
integrate Containers with OpenStack in a meaningful way a full year earlier. 
Please pay close attention.

1) OpenStack's key influencers care about two personas: 1.1) Cloud Operators 
1.2) Cloud Consumers. If you only think in terms of 1.2, then your idea will 
get killed. Operators matter.

2) Cloud Operators need a consistent way to bill for the IaaS services they 
provide. Nova emits all of the RPC messages needed to do this. Having a second 
nova that does this slightly differently is a really annoying problem that will 
make Operators hate the software. It's better to use nova, have things work 
consistently, and plug virt drivers into it.

3) Creation of a host is only part of the problem. That's the easy part. Nova 
also does a bunch of other things too. For example, say you want to live 
migrate a guest from one host to another. There is already functionality in 
Nova for doing that.

4) Resources need to be capacity managed. We call this scheduling. Nova has a 
pluggable scheduler to help with the placement of guests on hosts. Magnum will 
not.

5) Hosts in a cloud need to integrate with a number of other services, such as 
an image service, messaging, networking, storage, etc. If you only think in 
terms of host creation, and do something without nova, then you need to 
re-integrate with all of these things.

Now, I probably left out examples of lots of other things that Nova does. What 
I have mentioned is enough to make my point that there are a lot of things that 
Magnum is intentionally NOT doing that we expect to get from Nova, and I will 
block all code that gratuitously duplicates functionality that I believe 
belongs in Nova. I promised our community I would not duplicate existing 
functionality without a very good reason, and I will keep that promise.

Let's find a good way to fit Hyper with OpenStack in a way that best leverages 
what exists today, and is least likely to be rejected. Please note that the 
proposal needs to be changed from where it is today to achieve this fit.

My first suggestion is to find a way to make a nova virt driver for Hyper, which 
could allow it to be used with all of our current Bay types in Magnum.
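
For anyone unfamiliar with what that entails: a nova virt driver is essentially 
a subclass of nova.virt.driver.ComputeDriver that implements the guest lifecycle 
operations nova expects. A heavily abbreviated skeleton is below; the method 
signatures are from memory and trimmed, so treat it as a sketch rather than a 
working driver:

from nova.virt import driver

class HyperDriver(driver.ComputeDriver):
    # Illustrative skeleton only -- not a working driver.

    def init_host(self, host):
        # Connect to the Hyper daemon on this compute host.
        pass

    def list_instances(self):
        # Return the guests Hyper is running on this host.
        return []

    def spawn(self, context, instance, image_meta, injected_files,
              admin_password, network_info=None, block_device_info=None):
        # Create and start a HyperVM for the nova instance.
        raise NotImplementedError()

    def destroy(self, context, instance, network_info,
                block_device_info=None, destroy_disks=True,
                migrate_data=None):
        # Tear down the HyperVM and its resources.
        raise NotImplementedError()

    def get_info(self, instance):
        # Report power state and resource usage back to nova.
        raise NotImplementedError()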

Thanks,

Adrian


 Original message 
From: Peng Zhao p...@hyper.shmailto:p...@hyper.sh
Date: 07/19/2015 5:36 AM (GMT-08:00)
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal withHyper
Thanks Jay.

Hongbin, yes, it will be a scheduling system, either swarm, k8s or mesos. I 
just think bay isn't a must in this case, and we don't need nova to provision 
BM hosts, which makes things more complicated imo.

Peng


-- Original --
From:  Jay Laujay.lau@gmail.commailto:jay.lau@gmail.com;
Date:  Sun, Jul 19, 2015 10:36 AM
To:  OpenStack Development Mailing List (not for usage 
questions)openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org;
Subject:  Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal 
withHyper

Hong Bin,
I have had some online discussion with Peng; it seems hyper is now integrating 
with Kubernetes and also has a plan to integrate with mesos for scheduling. Once 
the mesos integration is finished, we can treat mesos+hyper as another kind of 
bay.
Thanks

2015-07-19 4:15 GMT+08:00 Hongbin Lu 
hongbin...@huawei.commailto:hongbin...@huawei.com:
Peng,

Several questions here. You mentioned that HyperStack is a single big “bay”. 
Then, who is doing the multi-host scheduling, Hyper or something else? Were you 
suggesting to integrate Hyper with Magnum directly? Or were you suggesting to 
integrate Hyper with Magnum indirectly (i.e. through k8s, mesos and/or Nova)?

Best regards,
Hongbin

From: Peng Zhao [mailto:p...@hyper.shmailto:p...@hyper.sh]
Sent: July-17-15 12:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal

Re: [openstack-dev] [magnum] Magnum template manage use platform VS others as a type?

2015-07-15 Thread Hongbin Lu
Kai,

Sorry for the confusion. To clarify, I was thinking how to name the field you 
proposed in baymodel [1]. I prefer to drop it and use the existing field 
‘flavor’ to map the Heat template.

[1] https://review.openstack.org/#/c/198984/6

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: July-15-15 10:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?


Hi HongBin,

I think flavor introduces more confusion than nova_instance_type or 
instance_type.


As flavors have no binding with 'vm' or 'baremetal',

let me summarize the initial question:
  We have two kinds of templates for kubernetes now
(as templates in heat are not as flexible as a programming language, with 
if/else etc., and separate templates are easy to maintain).
Of the two kinds of kubernetes templates, one boots VMs and the other boots 
Baremetal. 'VM' or Baremetal here is just used for heat template selection.


1 If we used flavor, it is a nova-specific concept: take two as examples, 
m1.small or m1.middle.
   m1.small -> 'VM', m1.middle -> 'VM'
   Both m1.small and m1.middle can be used in a 'VM' environment.
So we should not use m1.small as a template identification. That's why I think 
flavor is not good to be used.


2 @Adrian, we have the --flavor-id field for baymodel now; it would be picked 
up by the heat templates, which boot instances with that flavor.


3 Finally, I think instance_type is better.  instance_type can be used as the 
heat template identification parameter.

instance_type = 'vm' means such templates fit a normal 'VM' heat stack deploy.

instance_type = 'baremetal' means such templates fit an ironic baremetal heat 
stack deploy.





Thanks!


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.commailto:wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


From: Hongbin Lu hongbin...@huawei.commailto:hongbin...@huawei.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: 07/16/2015 04:44 AM
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?





+1 for the idea of using Nova flavor directly.

Why we introduced the “platform” field to indicate “vm” or “baremetal” is that 
magnum needs to map a bay to a Heat template (which will be used to provision 
the bay). Currently, Magnum has three layers of mapping:
* platform: vm or baremetal
* os: atomic, coreos, …
* coe: kubernetes, swarm or mesos

I think we could just replace “platform” with “flavor”, if we can populate a 
list of flavors for VM and another list of flavors for baremetal (We may need
an additional list of flavors for container in the future for the nested 
container use case). Then, the new three layers would be:
* flavor: baremetal, m1.small, m1.medium,  …
* os: atomic, coreos, ...
* coe: kubernetes, swarm or mesos

This approach can avoid introducing a new field in baymodel to indicate what 
Nova flavor already indicates.

Best regards,
Hongbin

From: Fox, Kevin M [mailto:kevin@pnnl.gov]
Sent: July-15-15 12:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?

Maybe somehow I missed the point, but why not just use raw Nova flavors? They 
already abstract away ironic vs kvm vs hyperv/etc.

Thanks,
Kevin

From: Daneyon Hansen (danehans) [daneh...@cisco.com]
Sent: Wednesday, July 15, 2015 9:20 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?
All,

IMO virt_type does not properly describe bare metal deployments.  What about 
using the compute_driver parameter?

compute_driver = None


(StrOpt) Driver to use for controlling virtualization. Options include: 
libvirt.LibvirtDriver, xenapi.XenAPIDriver, fake.FakeDriver, 
baremetal.BareMetalDriver, vmwareapi.VMwareVCDriver, hyperv.HyperVDriver


http://docs.openstack.org/kilo/config-reference/content/list-of-compute-config-options.html
http://docs.openstack.org/developer/ironic/deploy/install

Re: [openstack-dev] [magnum] Magnum template manage use platform VS others as a type?

2015-07-16 Thread Hongbin Lu
I am OK with server_type as well.

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: July-16-15 3:22 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?


+ 1 about server_type.

I also think it is OK.


Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.commailto:wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


From: Adrian Otto adrian.o...@rackspace.commailto:adrian.o...@rackspace.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: 07/16/2015 03:18 PM
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?





I’d be comfortable with server_type.

Adrian
On Jul 15, 2015, at 11:51 PM, Jay Lau 
jay.lau@gmail.commailto:jay.lau@gmail.com wrote:

After more thinking, I agree with Hongbin that instance_type might make 
customers confuse it with flavor; what about using server_type?

Actually, nova has the concept of a server group, and the servers in such a 
group can be vm, pm or container.

Thanks!

2015-07-16 11:58 GMT+08:00 Kai Qiang Wu 
wk...@cn.ibm.commailto:wk...@cn.ibm.com:
Hi Hong Bin,

Thanks for your reply.


I think it is better to discuss the 'platform' Vs instance_type Vs others case 
first.
Attach:  initial patch (about the discussion): 
https://review.openstack.org/#/c/200401/

My other patches all depend on the above patch. If the above patch cannot 
reach a meaningful agreement, my following patches will be blocked.



Thanks


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.commailto:wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
   No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


From: Hongbin Lu hongbin...@huawei.commailto:hongbin...@huawei.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: 07/16/2015 11:47 AM

Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?





Kai,

Sorry for the confusion. To clarify, I was thinking how to name the field you 
proposed in baymodel [1]. I prefer to drop it and use the existing field 
‘flavor’ to map the Heat template.

[1] https://review.openstack.org/#/c/198984/6

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: July-15-15 10:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?


Hi HongBin,

I think flavor introduces more confusion than nova_instance_type or 
instance_type.


As flavors have no binding with 'vm' or 'baremetal',

let me summarize the initial question:
We have two kinds of templates for kubernetes now
(as templates in heat are not as flexible as a programming language, with 
if/else etc., and separate templates are easy to maintain).
Of the two kinds of kubernetes templates, one boots VMs and the other boots 
Baremetal. 'VM' or Baremetal here is just used for heat template selection.


1 If we used flavor, it is a nova-specific concept: take two as examples, 
m1.small or m1.middle.
  m1.small -> 'VM', m1.middle -> 'VM'
  Both m1.small and m1.middle can be used in a 'VM' environment.
So we should not use m1.small as a template identification. That's why I think 
flavor is not good to be used.


2 @Adrian, we have the --flavor-id field for baymodel now; it would be picked 
up by the heat templates, which boot instances with that flavor.


3 Finally, I think instance_type is better.  instance_type can be used as the 
heat template identification parameter.

instance_type = 'vm' means such templates fit a normal 'VM' heat stack deploy.

instance_type = 'baremetal' means such templates fit an ironic baremetal heat 
stack deploy

Re: [openstack-dev] [magnum] Magnum template manage use platform VS others as a type?

2015-07-14 Thread Hongbin Lu
I am going to propose a third option:

3. virt_type

I have concerns about options 1 and 2, because “instance_type” and flavor were 
used interchangeably before [1]. If we use “instance_type” to indicate “vm” or 
“baremetal”, it may cause confusion.

[1] https://blueprints.launchpad.net/nova/+spec/flavor-instance-type-dedup

Best regards,
Hongbin

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: July-14-15 9:35 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [magnum] Magnum template manage use platform VS others 
as a type?


Hi Magnum Guys,


I want to raise this question through ML.


In this patch https://review.openstack.org/#/c/200401/


For some historical reason, we use platform to indicate 'vm' or 'baremetal'.
This does not seem proper, so @Adrian proposed nova_instance_type, and some 
prefer other names; let me summarize below:


1. nova_instance_type  2 votes

2. instance_type 2 votes

3. others (1 vote, but not proposed any name)


Let's try to reach an agreement ASAP. I think counting the final vote winner as 
the proper name is the best solution (considering community diversity).


BTW, if you have not proposed any better name and just vote to disagree with 
all, I think that vote is not valid and not helpful for solving the issue.


Please help to vote for that name.


Thanks




Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.commailto:wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Magnum template manage use platform VS others as a type?

2015-07-15 Thread Hongbin Lu
+1 for the idea of using Nova flavor directly.

Why we introduced the “platform” field to indicate “vm” or “baremetal” is that 
magnum needs to map a bay to a Heat template (which will be used to provision 
the bay). Currently, Magnum has three layers of mapping:

* platform: vm or baremetal

* os: atomic, coreos, …

* coe: kubernetes, swarm or mesos

I think we could just replace “platform” with “flavor”, if we can populate a 
list of flavors for VM and another list of flavors for baremetal (We may need
an additional list of flavors for container in the future for the nested 
container use case). Then, the new three layers would be:

* flavor: baremetal, m1.small, m1.medium, …

* os: atomic, coreos, ...

* coe: kubernetes, swarm or mesos

This approach can avoid introducing a new field in baymodel to indicate what 
Nova flavor already indicates.
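
To illustrate the mapping I have in mind, something as simple as a lookup table 
would do. The template file names and flavor names below are made up for 
illustration only, not the actual Magnum templates:

# Hypothetical template selection -- names are illustrative only.
TEMPLATE_MAP = {
    # (server_type, os, coe) -> heat template
    ('vm', 'fedora-atomic', 'kubernetes'): 'kubecluster.yaml',
    ('baremetal', 'fedora-atomic', 'kubernetes'): 'kubecluster-ironic.yaml',
    ('vm', 'fedora-atomic', 'swarm'): 'swarmcluster.yaml',
    ('vm', 'ubuntu', 'mesos'): 'mesoscluster.yaml',
}

def server_type_from_flavor(flavor, baremetal_flavors=('baremetal',)):
    # Derive vm/baremetal from the Nova flavor instead of a new baymodel field.
    return 'baremetal' if flavor in baremetal_flavors else 'vm'

def select_template(flavor, os_distro, coe):
    key = (server_type_from_flavor(flavor), os_distro, coe)
    try:
        return TEMPLATE_MAP[key]
    except KeyError:
        raise ValueError('No template for %s/%s/%s' % key)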

Best regards,
Hongbin

From: Fox, Kevin M [mailto:kevin@pnnl.gov]
Sent: July-15-15 12:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?

Maybe somehow I missed the point, but why not just use raw Nova flavors? They 
already abstract away ironic vs kvm vs hyperv/etc.

Thanks,
Kevin

From: Daneyon Hansen (danehans) [daneh...@cisco.com]
Sent: Wednesday, July 15, 2015 9:20 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?
All,

IMO virt_type does not properly describe bare metal deployments.  What about 
using the compute_driver parameter?

compute_driver = None


(StrOpt) Driver to use for controlling virtualization. Options include: 
libvirt.LibvirtDriver, xenapi.XenAPIDriver, fake.FakeDriver, 
baremetal.BareMetalDriver, vmwareapi.VMwareVCDriver, hyperv.HyperVDriver


http://docs.openstack.org/kilo/config-reference/content/list-of-compute-config-options.html
http://docs.openstack.org/developer/ironic/deploy/install-guide.html

From: Adrian Otto adrian.o...@rackspace.commailto:adrian.o...@rackspace.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Tuesday, July 14, 2015 at 7:44 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?

One drawback to virt_type if not seen in the context of the acceptable values, 
is that it should be set to values like libvirt, xen, ironic, etc. That might 
actually be good. Instead of using the values 'vm' or 'baremetal', we use the 
name of the nova virt driver, and interpret those to be vm or baremetal types. 
So if I set the value to 'xen', I know the nova instance type is a vm, and 
'ironic' means a baremetal nova instance.
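
A tiny helper capturing that interpretation might look like this (the 
driver-name sets are only the obvious examples and would need to be maintained):

# Interpret a nova virt driver name as a vm or baremetal server type.
VM_DRIVERS = {'libvirt', 'xen', 'vmwareapi', 'hyperv'}
BAREMETAL_DRIVERS = {'ironic'}

def server_type_for(virt_driver):
    if virt_driver in BAREMETAL_DRIVERS:
        return 'baremetal'
    if virt_driver in VM_DRIVERS:
        return 'vm'
    raise ValueError('Unknown virt driver: %s' % virt_driver)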

Adrian


 Original message 
From: Hongbin Lu hongbin...@huawei.commailto:hongbin...@huawei.com
Date: 07/14/2015 7:20 PM (GMT-08:00)
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?
I am going to propose a third option:

3. virt_type

I have concerns about options 1 and 2, because “instance_type” and flavor were 
used interchangeably before [1]. If we use “instance_type” to indicate “vm” or 
“baremetal”, it may cause confusion.

[1] https://blueprints.launchpad.net/nova/+spec/flavor-instance-type-dedup

Best regards,
Hongbin

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: July-14-15 9:35 PM
To: openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: [openstack-dev] [magnum] Magnum template manage use platform VS others 
as a type?


Hi Magnum Guys,


I want to raise this question through ML.


In this patch https://review.openstack.org/#/c/200401/


For some historical reason, we use platform to indicate 'vm' or 'baremetal'.
This does not seem proper, so @Adrian proposed nova_instance_type, and some 
prefer other names; let me summarize below:


1. nova_instance_type  2 votes

2. instance_type 2 votes

3. others (1 vote, but not proposed any name)


Let's try to reach an agreement ASAP. I think counting the final vote winner as 
the proper name is the best solution (considering community diversity).


BTW, if you have not proposed any better name and just vote to disagree with 
all, I think that vote is not valid and not helpful for solving the issue.


Please help to vote for that name.


Thanks




Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk

Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper

2015-07-18 Thread Hongbin Lu
Peng,

Several questions here. You mentioned that HyperStack is a single big “bay”. 
Then, who is doing the multi-host scheduling, Hyper or something else? Were you 
suggesting to integrate Hyper with Magnum directly? Or were you suggesting to 
integrate Hyper with Magnum indirectly (i.e. through k8s, mesos and/or Nova)?

Best regards,
Hongbin

From: Peng Zhao [mailto:p...@hyper.sh]
Sent: July-17-15 12:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with 
Hyper

Hi, Adrian, Jay and all,

There could be a much longer version of this, but let me try to explain in a 
minimalist way.

Bay currently has two modes: VM-based, BM-based. In both cases, Bay helps to 
isolate different tenants' containers. In other words, bay is single-tenancy. 
For BM-based bay, the single tenancy is a worthy tradeoff, given the 
performance merits of LXC vs VM. However, for a VM-based bay, there is no 
performance gain, but single tenancy seems a must, due to the lack of isolation 
in container. Hyper, as a hypervisor-based substitute for container, brings the 
much-needed isolation, and therefore enables multi-tenancy. In HyperStack, we 
don't really need Ironic to provision multiple Hyper bays. On the other hand,  
the entire HyperStack cluster is a single big bay. Pretty similar to how Nova 
works.

Also, HyperStack is able to leverage Cinder, Neutron for SDS/SDN functionality. 
So when someone submits a Docker Compose app, HyperStack would launch HyperVMs 
and call Cinder/Neutron to setup the volumes and network. The architecture is 
quite simple.

Here are a blog I'd like to recommend: 
https://hyper.sh/blog/post/2015/06/29/docker-hyper-and-the-end-of-guest-os.html

Let me know your questions.

Thanks,
Peng

-- Original --
From:  Adrian 
Ottoadrian.o...@rackspace.commailto:adrian.o...@rackspace.com;
Date:  Thu, Jul 16, 2015 11:02 PM
To:  OpenStack Development Mailing List (not for usage 
questions)openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org;
Subject:  Re: [openstack-dev] [magnum][bp] Power Magnum to run onmetalwith Hyper

Jay,

Hyper is a substitute for a Docker host, so I expect it could work equally well 
for all of the current bay types. Hyper’s idea of a “pod” and a Kubernetes 
“pod” are similar, but different. I’m not yet convinced that integrating Hyper 
host creation directly with Magnum (and completely bypassing nova) is a good 
idea. It probably makes more sense to use nova with the ironic virt driver to 
provision Hyper hosts so we can use those as substitutes for Bay nodes in our 
various Bay types. This would fit in the place where we use Fedora Atomic 
today. We could still rely on nova to do all of the machine instance management 
and accounting like we do today, but produce bays that use Hyper instead of a 
Docker host. Everywhere we currently offer CoreOS as an option we could also 
offer Hyper as an alternative, with some caveats.

There may be some caveats/drawbacks to consider before committing to a Hyper 
integration. I’ll be asking those of Peng also on this thread, so keep an eye 
out.

Thanks,

Adrian

On Jul 16, 2015, at 3:23 AM, Jay Lau 
jay.lau@gmail.commailto:jay.lau@gmail.com wrote:

Thanks Peng. Then I can see two integration points for Magnum and Hyper:
1) Once the Hyper and k8s integration is finished, we can deploy k8s in two 
modes: docker mode and hyper mode, and the end user can select which mode they 
want to use. For such a case, we do not need to create a new bay but may need 
some enhancements to the current k8s bay.
2) After the mesos and hyper integration, we can treat mesos+hyper as a new bay 
type in magnum, just like what we are doing now for mesos+marathon.
Thanks!

2015-07-16 17:38 GMT+08:00 Peng Zhao p...@hyper.shmailto:p...@hyper.sh:
Hi Jay,

Yes, we are working with the community to integrate Hyper with Mesos and K8S. 
Since Hyper uses Pod as the default job unit, it is quite easy to integrate 
with K8S. Mesos takes a bit more efforts, but still straightforward.

We expect to finish both integration in v0.4 early August.

Best,
Peng

-
Hyper - Make VM run like Container



On Thu, Jul 16, 2015 at 3:47 PM, Jay Lau 
jay.lau@gmail.commailto:jay.lau@gmail.com wrote:
Hi Peng,

Just want to learn more about Hyper. If we create a hyper bay, can I set up 
multiple hosts in the hyper bay? If so, who will do the scheduling; does mesos 
or something else integrate with hyper?
I did not find much info on hyper cluster management.

Thanks.

2015-07-16 9:54 GMT+08:00 Peng Zhao p...@hyper.shmailto:p...@hyper.sh:






-- Original --
From:  “Adrian 
Otto”adrian.o...@rackspace.commailto:adrian.o...@rackspace.com;
Date:  Wed, Jul 15, 2015 02:31 AM
To:  “OpenStack Development Mailing List (not for usage 

[openstack-dev] [magnum][devstack] Request actions on a suspicious commit

2015-07-13 Thread Hongbin Lu
Hi,

I sent this email to request an investigation of a suspicious commit [1] in 
devstack, which possibly breaks magnum's functional gate tests. The first 
breakage on the Magnum side occurred at Jul 10 4:18 PM [2], about half an hour 
after the suspicious commit was merged. Digging into the code, I found that the 
suspicious commit injected several domain variables, which are incompatible 
with Keystone v2, into the rc files. The presence of the domain variables is 
proven to be the cause of the breakage [3]. So, could I respectfully ask 
someone from the devstack team to look into the suspicious commit and take 
appropriate action on it if necessary?

Thanks,
Hongbin

[1] https://review.openstack.org/#/c/193935/3
[2] https://review.openstack.org/#/c/199212/
[3]  https://review.openstack.org/#/c/200835/6
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][kolla] Stepping down as a Magnum core reviewer

2015-10-25 Thread Hongbin Lu
Hi Steve,

Thanks for your contributions. Personally, I would like to thank you for your 
mentorship and guidance when I was new to Magnum. It helped me a lot to pick 
everything up. Best wishes for your adventure in Kolla.

Best regards,
Hongbin

From: Steven Dake (stdake) [mailto:std...@cisco.com]
Sent: October-25-15 8:18 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum][kolla] Stepping down as a Magnum core reviewer

Hey folks,

It is with sadness that I find myself in the situation of having to write this 
message.  I have the privilege of being involved in two of the most successful 
and growing projects (Magnum, Kolla) in OpenStack.  I chose to get involved in 
two major initiatives on purpose, to see if I could do the job; to see if I 
could deliver two major initiatives at the same time.  I also wanted it to be a 
length of time that was significant - 1+ year.  I found that indeed I was able 
to deliver both Magnum and Kolla; however, the impact on my personal life has 
not been ideal.

The Magnum engineering team is truly a world class example of how an Open 
Source project should be constructed and organized.  I hope some young academic 
writes a case study on it some day but until then, my gratitude to the Magnum 
core reviewer team is warranted by the level of  their sheer commitment.

I am officially focusing all of my energy on Kolla going forward.  The Kolla 
core team elected me as PTL (or more accurately didn't elect anyone else;) and 
I really want to be effective for them, especially in what I feel is Kolla's 
most critical phase of growth.

I will continue to fight for engineering resources for Magnum internally in 
Cisco.  Some of these have borne fruit already, including the Heat resources, 
the Horizon plugin, and of course the Networking plugin system.  I will also 
continue to support Magnum from a resources POV where I can do so (like the 
fedora image storage, for example).  What I won't be doing is reviewing Magnum 
code (serving as a gate), or likely making much technical contribution to 
Magnum in the future.  On the plus side I've replaced myself with many many 
more engineers from Cisco who should be much more productive combined than I 
could have been alone ;)

Just to be clear, I am not abandoning Magnum because I dislike the people or 
the technology.  I think the people are fantastic! And the technology - well I 
helped design the entire architecture!  I am letting Magnum grow up without me 
as I have other children that need more direct attention.  I think this 
viewpoint shows trust in the core reviewer team, but feel free to make your own 
judgements ;)

Finally I want to thank Perry Myers for influencing me to excel at multiple 
disciplines at once.  Without Perry as a role model, Magnum may have never 
happened (or would certainly be much different than it is today). Being a solid
hybrid engineer has a long ramp up time and is really difficult, but also very 
rewarding.  The community has Perry to blame for that ;)

Regards
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum][Testing] Reduce Functional testing ongate.

2015-11-13 Thread Hongbin Lu
I am going to share something that might be off the topic a bit.

Yesterday, I was pulled into the #openstack-infra channel to participate in a 
discussion related to the atomic image download in Magnum. It looks like the 
infra team is not satisfied with the large image size. In particular, they need 
to double the timeout to accommodate the job [1] [2], which made them unhappy. 
Is there a way to reduce the image size? Or even better, is it possible to 
build the image locally instead of downloading it?

[1] https://review.openstack.org/#/c/242742/
[2] https://review.openstack.org/#/c/244847/

Best regards,
Hongbin

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: November-13-15 12:33 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum][Testing] Reduce Functional testing ongate.


Right now, it seems we cannot reduce the devstack runtime. And @Ton, yes, the 
image download time seems OK in the jenkins job; it was found to be about 4~5 
mins.

But the bay-creation time is an interesting topic; it seems related to heat or 
VM setup time. It needs some investigation.



Thanks

Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


From: "Ton Ngo" >
To: "OpenStack Development Mailing List \(not for usage questions\)" 
>
Date: 13/11/2015 01:13 pm
Subject: Re: [openstack-dev] [Magnum][Testing] Reduce Functional testing on 
gate.





Thanks Eli for the analysis. I notice that the time to download the image is 
only around 1:15 mins out of some 21 mins to set up devstack. So it seems 
trying to reduce the size of the image won't make a significant improvement in 
the devstack time. I wonder how the image size affects the VM creation time for 
the cluster. If we can look at the Heat event stream, we might get an idea.
Ton,



From: Egor Guz
To: "openstack-dev@lists.openstack.org"
Date: 11/12/2015 05:25 PM
Subject: Re: [openstack-dev] [Magnum][Testing] Reduce Functional testing on 
gate.




Eli,

First of all I would like to say thank you for your effort (I have never seen 
so many patch sets ;)), but I don’t think we should remove “tls_disabled=True” 
tests from the gates now (maybe in L).
It’s still a very commonly used feature and a backup plan if TLS doesn’t work 
for some reason.

I think it’s a good idea to group tests per pipeline; we should definitely 
follow it.

―
Egor

From: "Qiao,Liyong" >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Wednesday, November 11, 2015 at 23:02
To: 
"openstack-dev@lists.openstack.org" 
>
Subject: [openstack-dev] [Magnum][Testing] Reduce Functional testing on gate.

Hello all:

I will give an update on the Magnum functional testing status. 
Functional/integration testing is important to us: since we change/modify the 
Heat templates rapidly, we need to verify that the modifications are correct, 
so we need to cover all the templates Magnum has.
Currently we only have k8s testing (only tested with the atomic image); we need 
to add more, like swarm (WIP) and mesos (under plan). Also, we may need to 
support the COS image.
Lots of work needs to be done.

For the functional testing time cost, we discussed it during the Tokyo summit; 
Adrian expected that we can reduce the time cost to 20 min.

I did some analysis of the functional/integration testing in the gate pipeline.
The stages are as follows:
Taking k8s functional testing as an example, we ran the following test cases:

1) baymodel creation
2) 

Re: [openstack-dev] [magnum] Failed to create swarm bay with fedora-21-atomic-5-d181.qcow2

2015-10-16 Thread Hongbin Lu
Hi Mars,

I cannot reproduce the error. My best guess is that your VMs don’t have 
external internet access (could you verify it by sshing into one of your VMs 
and typing “curl openstack.org”?). If that is not the cause, please create a 
bug to report the error (https://bugs.launchpad.net/magnum).

Thanks,
Hongbin

From: Mars Ma [mailto:wenc...@gmail.com]
Sent: October-16-15 2:37 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [magnum] Failed to create swarm bay with 
fedora-21-atomic-5-d181.qcow2

Hi,

I used the image fedora-21-atomic-5-d181.qcow2 to create a swarm bay, but the 
bay went to failed status with status reason: Resource CREATE failed: 
WaitConditionFailure: 
resources.swarm_nodes.resources[0].resources.node_agent_wait_condition: 
swarm-agent service failed to start.
Debugging inside the swarm node, I found that docker failed to start, which led 
to the swarm-agent and swarm-manager services failing to start.
[fedora@sw-d7cum4a5z5a-0-dx4eksy72u4q-swarm-node-3d7bwzm7fso7 ~]$ docker -v
Docker version 1.8.1.fc21, build 32b8b25/1.8.1

detailed debug log, I pasted here :
http://paste.openstack.org/show/476450/




Thanks & Best regards !
Mars Ma
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Creating pods results in "EOF occurred in violation of protocol" exception

2015-10-14 Thread Hongbin Lu
Hi Bertrand,

Thanks for reporting the error. I confirmed that this error was consistently 
reproducible. A bug ticket was created for that.

https://bugs.launchpad.net/magnum/+bug/1506226

Best regards,
Hongbin

-Original Message-
From: Bertrand NOEL [mailto:bertrand.n...@cern.ch] 
Sent: October-14-15 8:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Creating pods results in "EOF occurred in 
violation of protocol" exception

Hi,
I am trying Magnum, following the instructions on the quickstart page [1]. I 
successfully created the baymodel and the bay. When I run the command to create 
the redis pods (_magnum pod-create --manifest ./redis-master.yaml --bay 
k8sbay_), on the client side it times out, and on the server side (m-cond.log) 
I get the following stack trace. It also happens with other Kubernetes examples.
I am testing with Ubuntu 14.04, with Magnum at commit 
fc8f412c87ea0f9dc0fc1c24963013e6d6209f27.


2015-10-14 12:16:40.877 ERROR oslo_messaging.rpc.dispatcher
[req-960570cf-17b2-489f-9376-81890e2bf2d8 admin admin] Exception during message 
handling: [Errno 8] _ssl.c:510: EOF occurred in violation of protocol
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher Traceback (most 
recent call last):
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py",
line 142, in _dispatch_and_reply
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher
executor_callback))
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py",
line 186, in _dispatch
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher
executor_callback)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py",
line 129, in _do_dispatch
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher result = func(ctxt, 
**new_args)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/conductor/handlers/k8s_conductor.py", line 89, in 
pod_create
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher
namespace='default')
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/apis/apiv_api.py",
line 3596, in create_namespaced_pod
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher
callback=params.get('callback'))
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/api_client.py",
line 320, in call_api
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher response_type, 
auth_settings, callback)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/api_client.py",
line 148, in __call_api
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher 
post_params=post_params, body=body)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/api_client.py",
line 350, in request
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher body=body)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/rest.py", line 
265, in POST
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher return 
self.IMPL.POST(*n, **kw)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/rest.py", line 
187, in POST
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher body=body)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/rest.py", line 
133, in request
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher headers=headers)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/urllib3/request.py", line 72, in request
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher **urlopen_kw)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/urllib3/request.py", line 149, in 
request_encode_body
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher return 
self.urlopen(method, url, **extra_kw)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/urllib3/poolmanager.py", line 161, in 
urlopen
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher response = 
conn.urlopen(method, u.request_uri, **kw)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py", line 588, 
in urlopen
2015-10-14 12:16:40.877 

[openstack-dev] [magnum] Document adding --memory option to create containers

2015-10-08 Thread Hongbin Lu
Hi team,

I want to move the discussion in the review below to here, so that we can get 
more feedback

https://review.openstack.org/#/c/232175/

In summary, magnum recently added support for specifying the memory size of 
containers. The specification of the memory size is optional, and the COE won't 
reserve any memory for containers with an unspecified memory size. The debate 
is whether we should document this optional parameter in the quickstart guide. 
Below are the positions of both sides:

Pros:

* It is good practice to always specify the memory size, because containers 
with an unspecified memory size won't have a QoS guarantee.

* The in-development autoscaling feature [1] will query the memory size of 
each container to estimate the residual capacity and trigger scaling 
accordingly. Containers with an unspecified memory size will be treated as 
taking 0 memory, which negatively affects the scaling decision (a rough sketch 
of that estimate follows the list below).
Cons:

* The quickstart guide should be kept as simple as possible, so it is 
not a good idea to have the optional parameter in the guide.
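
To make the autoscaling point concrete, the residual-capacity estimate 
presumably boils down to something like the sketch below. This is purely 
illustrative; the field names are made up and are not from the actual 
blueprint code:

def residual_memory_mb(bay_capacity_mb, containers):
    # Containers that did not specify a memory size count as 0 here, which is
    # exactly why unspecified sizes skew the scaling decision.
    reserved = sum(c.get('memory_mb') or 0 for c in containers)
    return bay_capacity_mb - reserved

print(residual_memory_mb(4096, [{'memory_mb': 512}, {'memory_mb': None}]))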

Thoughts?

[1] https://blueprints.launchpad.net/magnum/+spec/autoscale-bay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Tom Cammann for core

2015-07-10 Thread Hongbin Lu
+1 Welcome Tom!

-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com] 
Sent: July-09-15 10:21 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [magnum] Tom Cammann for core

Team,

Tom Cammann (tcammann) has become a valued Magnum contributor, and consistent 
reviewer helping us to shape the direction and quality of our new 
contributions. I nominate Tom to join the magnum-core team as our newest core 
reviewer. Please respond with a +1 vote if you agree. Alternatively, vote -1 to 
disagree, and include your rationale for consideration.

Thanks,

Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] versioned objects changes

2015-08-27 Thread Hongbin Lu
-1 from me.

IMHO, the rolling upgrade feature makes sense for a mature project (like Nova), 
but not for a young project like Magnum. It incurs overhead for contributors 
and reviewers to check the object compatibility in each patch. As you 
mentioned, the key benefit of this feature is supporting different versions of 
magnum components running at the same time (i.e. running magnum-api 1.0 with 
magnum-conductor 1.1). I don't think supporting this advanced use case is a 
must at the current stage.

However, I don't mean to be against merging patches for this feature. I just 
disagree with enforcing the rule of object version changes in the near future.

Best regards,
Hongbin

From: Grasza, Grzegorz [mailto:grzegorz.gra...@intel.com]
Sent: August-26-15 4:47 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] versioned objects changes

Hi,

I noticed that right now, when we make changes (adding/removing fields) in 
https://github.com/openstack/magnum/tree/master/magnum/objects , we don't 
change object versions.

The idea of objects is that each change in their fields should be versioned, 
documentation about the change should also be written in a comment inside the 
object and the obj_make_compatible method should be implemented or updated. See 
an example here:
https://github.com/openstack/nova/commit/ad6051bb5c2b62a0de6708cd2d7ac1e3cfd8f1d3#diff-7c6fefb09f0e1b446141d4c8f1ac5458L27
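
For readers who have not used oslo.versionedobjects before, the pattern looks 
roughly like this. The Bay object below is a made-up example with a newly added 
field, not the actual Magnum code:

from oslo_versionedobjects import base
from oslo_versionedobjects import fields

@base.VersionedObjectRegistry.register
class Bay(base.VersionedObject):
    # Version 1.0: Initial version
    # Version 1.1: Added 'discovery_url' field
    VERSION = '1.1'

    fields = {
        'uuid': fields.UUIDField(),
        'name': fields.StringField(nullable=True),
        'discovery_url': fields.StringField(nullable=True),
    }

    def obj_make_compatible(self, primitive, target_version):
        super(Bay, self).obj_make_compatible(primitive, target_version)
        # When talking to a service that only understands 1.0, drop the
        # field it does not know about.
        if target_version == '1.0':
            primitive.pop('discovery_url', None)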

The question is, do you think magnum should support rolling upgrades from next 
release or maybe it's still too early?

If yes, I think core reviewers should start checking for these incompatible 
changes.

To clarify, rolling upgrades means support for running magnum services at 
different versions at the same time.
In Nova, there is an RPC call in the conductor to backport objects, which is 
called when older code gets an object it doesn't understand. This patch does 
this in Magnum: https://review.openstack.org/#/c/184791/ .

I can report bugs and propose patches with version changes for this release, to 
get the effort started.

In Mitaka, when Grenade gets multi-node support, it can be used to add CI tests 
for rolling upgrades in Magnum.


/ Greg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Steps to upload magnum images

2015-09-06 Thread Hongbin Lu
Hi team,

As you may know, magnum is tested with pre-built Fedora Atomic images. 
Basically, these images are standard atomic images with the k8s packages 
pre-installed. The images can be downloaded from fedorapeople.org [1]. In most 
cases, you are able to test magnum by using the images there. If you are not 
satisfied with the existing images, you are welcome to build a new image and 
share it with the team. Here [2] are the instructions for how to build a new 
atomic image. After you successfully build an image, you may want to upload it 
to the public file server, which is what I am going to talk about.

Below are the steps to upload an image:

1.   Register an account in here https://admin.fedoraproject.org/accounts/

2.   Sign the contributor agreement (On the home page after you login: "My 
Account" -> "Contributor Agreement").

3.   Upload your public key ("My Account" -> "Public SSH Key").

4.   Apply to join the magnum group ("Join a group" -> search "magnum" -> 
"apply"). If you cannot find the "apply" link under "Status" (I didn't), you 
can wait a few minutes or skip this step and ask Steven Dake to add you to the 
group instead.

5.   Ping Steven Dake (std...@cisco.com) to approve your application.

6.   After 30-60 minutes, you should be able to SSH to the file server (ssh 
@fedorapeople.org). Our images are stored in /srv/groups/magnum.

Notes on using the file server:

* Avoid typing "sudo ...".

* Activities there are logged, so don't expect any privacy.

* Not all content is allowed. Please make sure your uploaded content is 
acceptable before uploading it.

[1] https://fedorapeople.org/groups/magnum/
[2] 
https://github.com/openstack/magnum/blob/master/doc/source/dev/dev-build-atomic-image.rst
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Vote for our weekly meeting schedule

2015-09-10 Thread Hongbin Lu
Hi team,

Currently, the magnum weekly team meeting is scheduled at Tuesday UTC 1600 and 
UTC 2200. As our team grows, contributors from different timezones have joined 
and actively participated. I worried that our current meeting schedule (which 
was decided a long time ago) might not be up to date with respect to the 
convenience of our contributors. Therefore, a doodle poll was created:

http://doodle.com/poll/76ix26i2pdz89vz4

The proposed time slots were made according to an estimation of the current 
geographic distribution of active contributors. Please feel free to ping me if 
you want to propose additional time slots. Finally, please vote for your 
preferred meeting time. The team will decide whether to adjust the current 
schedule according to the results. Thanks.

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Maintaining cluster API in upgrades

2015-09-14 Thread Hongbin Lu
Hi Ryan,

I think pushing the python k8sclient out of the magnum tree (option 3) is the
decision that was made at the Vancouver Summit (if I remember correctly). It
definitely helps solve the k8s versioning problems.

Best regards,
Hongbin

From: Ryan Rossiter [mailto:rlros...@linux.vnet.ibm.com]
Sent: September-14-15 6:49 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [magnum] Maintaining cluster API in upgrades

I have some food for thought with regards to upgrades that was provoked by some 
incorrect usage of Magnum which led me to finding [1].

Let's say we're running a cloud with Liberty Magnum, which works with 
Kubernetes API v1. During the Mitaka release, Kubernetes released v2, so now 
Magnum conductor in Mitaka works with Kubernetes v2 API. What would happen if I 
upgrade from L to M with Magnum? My existing Magnum/k8s stuff will be on v1, so 
having Mitaka conductor attempt to interact with that stuff will cause it to 
blow up right? The k8s API calls will fail because the communicating components 
are using differing versions of the API (assuming there are backwards 
incompatibilities).

I'm running through some suggestions in my head in order to handle this:

1. Have conductor maintain all supported older versions of k8s, and do API 
discovery to figure out which version of the API to use (see the sketch after 
this list)
  - This one sounds like a total headache from a code management standpoint

2. Do some sort of heat stack update to upgrade all existing clusters to use 
the current version of the API
  - In my head, this would work kind of like a database migration, but it seems 
like it would be a lot harder

3. Maintain cluster clients outside of the Magnum tree
  - This would make maintaining the client compatibilities a lot easier
  - Would help eliminate the cruft of merging 48k lines for a swagger generated 
client [2]
  - Having the client outside of tree would allow for a simple pip install
  - Not sure if this *actually* solves the problem above
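As a rough illustration of the API discovery in option 1 (a sketch, not Magnum
code; it assumes the requests library and the standard k8s /api discovery
endpoint, and ignores auth/TLS details):

import requests


def supported_k8s_versions(api_server_url):
    # The k8s API server answers GET /api with an APIVersions document,
    # e.g. {"kind": "APIVersions", "versions": ["v1"]}.
    resp = requests.get('%s/api' % api_server_url.rstrip('/'), timeout=10)
    resp.raise_for_status()
    return resp.json().get('versions', [])


def pick_client_version(api_server_url, client_versions=('v2', 'v1')):
    # Pick the newest version both the conductor and the bay understand.
    server_versions = supported_k8s_versions(api_server_url)
    for version in client_versions:
        if version in server_versions:
            return version
    raise RuntimeError('No mutually supported Kubernetes API version')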

This isn't meant to be a "we need to change this" topic, it's just meant to be 
more of a "what if" discussion. I am also up for suggestions other than the 3 
above.

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074448.html
[2] https://review.openstack.org/#/c/217427/


--

Thanks,



Ryan Rossiter (rlrossit)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum][ptl] Magnum PTL Candidacy

2015-09-17 Thread Hongbin Lu
Hi all,

I would like to announce my candidacy for the PTL position of Magnum.

I have been involved in the Magnum project since December 2014. At that time,
Magnum's code base was much smaller than it is now. Since then, I have worked
with a diverse set of team members to land features, discuss the roadmap, fix
the gate, do pre-release testing, fix the documentation, etc. Thanks to our
team's efforts, over the past few months I have seen important features land
one after another, which I am really proud of.

To address the question of why I am a good candidate for Magnum PTL, here are
the key reasons:
* I contributed a lot to Magnum's code base and feature set.
* I am familiar with every aspect of the project, and understand both the big
picture and the technical details.
* I will have the time and resources to take on the PTL responsibility, mostly
because I am allocated full time to this project.
* I love containers.
* I care about the project.
Here are more details of my involvement in the project:
http://stackalytics.com/?module=magnum-group
https://github.com/openstack/magnum/graphs/contributors

In my opinion, Magnum needs to focus on the following in the next cycle:
* Production ready: Work on everything that is needed to make it happen.
* Baremetal: Complete and optimize our Ironic integration to enable running
containers on baremetal.
* Container network: Deliver a network solution that is high-performance and
customizable.
* UI: Integrate with Horizon to give users a friendly interface to use Magnum.
* Pluggable COE: A pluggable design is a key feature for users to customize
Magnum, which is always in demand.
* Grow community: Attract more contributors to contribute to Magnum.

If elected, I will strictly follow the principles of being an OpenStack
project, especially the Four Opens. The direction of the project will be
community-driven, as always.

I hope you will give me an opportunity to serve as Magnum's PTL in the next 
cycle.

Best regards,
Hongbin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][elections] PTL nomination period is now over

2015-09-17 Thread Hongbin Lu
Hi,

I am fine with having an election with Adrian Otto, and potentially with other
candidates who are also late.

Best regards,
Hongbin

From: Kyle Mestery [mailto:mest...@mestery.com]
Sent: September-17-15 4:24 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][elections] PTL nomination period is now over

On Thu, Sep 17, 2015 at 3:19 PM, Anne Gentle wrote:

On Thu, Sep 17, 2015 at 3:15 PM, John Griffith wrote:

On Thu, Sep 17, 2015 at 2:00 PM, Doug Hellmann wrote:
Excerpts from Morgan Fainberg's message of 2015-09-17 12:51:33 -0700:

> I think this is all superfluous however and we should simply encourage
> people to not wait until the last minute. Waiting to see who is
> running/what the field looks like isn't as important as standing up and
> saying you're interested in running.

+1

Doug


​My dog ate my homework...
My car wouldn't start...
I couldn't figure out what UTC time was...

The guidelines seemed pretty clear:
Any member of an election electorate can propose their candidacy for the same 
election until September 17, 05:59 UTC​

That being said, a big analysis of date/time selection, etc., or harping on the
fact that something 'went wrong', doesn't really seem warranted here.  I as a TC
member have no problem saying "things happen", and those that have submitted a
candidacy, albeit late, and are unopposed are in... no muss, no fuss.  I *think*
we're all reasonable adults, and I don't think anybody had in mind that the TC
would arbitrarily assign somebody that wasn't even listed as a PTL candidate for
one of the mentioned projects.


It's not so simple for Magnum, with 2 late candidacies. We'll figure it out but 
yes, we have work to do.


It could be simple: Let magnum have an election with both candidates. As Monty 
said:

"... this is a great example of places where human judgement is better than 
rules."
Thanks,
Kyle

Anne


Moving on,
John







--
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] New Core Reviewers

2015-09-30 Thread Hongbin Lu
+1 for both. Welcome!

From: Davanum Srinivas [mailto:dava...@gmail.com]
Sent: September-30-15 7:00 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] New Core Reviewers

+1 from me for both Vilobh and Hua.

Thanks,
Dims

On Wed, Sep 30, 2015 at 6:47 PM, Adrian Otto wrote:
Core Reviewers,

I propose the following additions to magnum-core:

+Vilobh Meshram (vilobhmm)
+Hua Wang (humble00)

Please respond with +1 to agree or -1 to veto. This will be decided by either a 
simple majority of existing core reviewers, or by lazy consensus concluding on 
2015-10-06 at 00:00 UTC, in time for our next team meeting.

Thanks,

Adrian Otto



--
Davanum Srinivas :: https://twitter.com/dims
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-10-01 Thread Hongbin Lu
Kris,

I think the proposal of hierarchical projects is out of the scope of magnum,
and you might need to bring it up at a keystone or cross-project meeting. I am
going to propose a workaround that might work for you with the existing tenancy
model.

Suppose there is a department (department A) with two subteams (team 1 and team 
2). You can create three projects:

· Project A

· Project A-1

· Project A-2

Then you can assign users to projects in the following ways:

· Assign team 1 members to both Project A and Project A-1

· Assign team 2 members to both Project A and Project A-2

Then you can create a bay in project A, which is shared by the whole
department. In addition, each subteam can create their own bays in project A-X
if they want. Does this address your use cases?

Best regards,
Hongbin

From: Kris G. Lindgren [mailto:klindg...@godaddy.com]
Sent: September-30-15 7:26 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

We are looking at deploying magnum as an answer for how we do containers
company-wide at GoDaddy.  I am going to agree with both you and Josh.

I agree that managing one large system is going to be a pain, and past
experience tells me this won't be practical or scale; however, from experience
I also know exactly the pain Josh is talking about.

We currently have ~4k projects in our internal openstack cloud; about 1/4 of
the projects are currently doing some form of containers on their own, with
more joining every day.  If all of these projects were to convert over to the
current magnum configuration we would suddenly be attempting to
support/configure ~1k magnum clusters.  Considering that everyone will want it
HA, we are looking at a minimum of 2 kube nodes per cluster + lbaas vips +
floating ips.  From a capacity standpoint this is an excessive amount of
duplicated infrastructure to spin up in projects where people may be running
10-20 containers per project.  From an operator support perspective this is a
special level of hell that I do not want to get into.  Even if I am off by
75%, 250 clusters still sucks.

From my point of view an ideal use case for companies like ours (Yahoo/GoDaddy)
would be to support hierarchical projects in magnum.  That way we could
create a project for each department, and then the subteams of those
departments can have their own projects.  We create a bay per department.
Sub-projects, if they want to, can support creation of their own bays (but
support of the kube cluster would then fall to that team).  When a sub-project
spins up a pod on a bay, minions get created inside that team's sub-project and
the containers in that pod run on the capacity that was spun up under that
project; the minions for each pod would be in a scaling group and as such
grow/shrink as dictated by load.

The above would make it so we support a minimal, yet imho reasonable,
number of kube clusters, give people who can't/don't want to fall in line with
the provided resources a way to make their own, and still offer a "good enough
for a single company" level of multi-tenancy.

>Joshua,
>
>If you share resources, you give up multi-tenancy.  No COE system has the
>concept of multi-tenancy (kubernetes has some basic implementation but it
>is totally insecure).  Not only does multi-tenancy have to “look like” it
>offers multiple tenants isolation, but it actually has to deliver the
>goods.
>
>I understand that at first glance a company like Yahoo may not want
>separate bays for their various applications because of the perceived
>administrative overhead.  I would then challenge Yahoo to go deploy a COE
>like kubernetes (which has no multi-tenancy or a very basic implementation
>of such) and get it to work with hundreds of different competing
>applications.  I would speculate the administrative overhead of getting
>all that to work would be greater than the administrative overhead of
>simply doing a bay create for the various tenants.
>
>Placing tenancy inside a COE seems interesting, but no COE does that
>today.  Maybe in the future they will.  Magnum was designed to present an
>integration point between COEs and OpenStack today, not five years down
>the road.  It's not as if we took shortcuts to get to where we are.
>
>I will grant you that density is lower with the current design of Magnum
>vs a full-on integration with OpenStack within the COE itself.  However,
>that model, which is what I believe you proposed, is a huge design change to
>each COE which would overly complicate the COE at the gain of increased
>density.  I personally don’t feel that pain is worth the gain.



___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-10-01 Thread Hongbin Lu
Do you mean this proposal:
http://specs.openstack.org/openstack/keystone-specs/specs/juno/hierarchical_multitenancy.html
? It looks like support for hierarchical roles/privileges, and I couldn't find
anything related to resource sharing. I am not sure if it can address the use
cases Kris mentioned.

Best regards,
Hongbin

From: Fox, Kevin M [mailto:kevin@pnnl.gov]
Sent: October-01-15 11:58 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

I believe keystone already supports hierarchical projects

Thanks,
Kevin

From: Hongbin Lu [hongbin...@huawei.com]
Sent: Thursday, October 01, 2015 7:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
Kris,

I think the proposal of hierarchical projects is out of the scope of magnum,
and you might need to bring it up at a keystone or cross-project meeting. I am
going to propose a workaround that might work for you with the existing tenancy
model.

Suppose there is a department (department A) with two subteams (team 1 and team 
2). You can create three projects:

* Project A

* Project A-1

* Project A-2

Then you can assign users to projects in the following ways:

* Assign team 1 members to both Project A and Project A-1

* Assign team 2 members to both Project A and Project A-2

Then you can create a bay in project A, which is shared by the whole
department. In addition, each subteam can create their own bays in project A-X
if they want. Does this address your use cases?

Best regards,
Hongbin

From: Kris G. Lindgren [mailto:klindg...@godaddy.com]
Sent: September-30-15 7:26 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

We are looking at deploying magnum as an answer for how we do containers
company-wide at GoDaddy.  I am going to agree with both you and Josh.

I agree that managing one large system is going to be a pain, and past
experience tells me this won't be practical or scale; however, from experience
I also know exactly the pain Josh is talking about.

We currently have ~4k projects in our internal openstack cloud; about 1/4 of
the projects are currently doing some form of containers on their own, with
more joining every day.  If all of these projects were to convert over to the
current magnum configuration we would suddenly be attempting to
support/configure ~1k magnum clusters.  Considering that everyone will want it
HA, we are looking at a minimum of 2 kube nodes per cluster + lbaas vips +
floating ips.  From a capacity standpoint this is an excessive amount of
duplicated infrastructure to spin up in projects where people may be running
10-20 containers per project.  From an operator support perspective this is a
special level of hell that I do not want to get into.  Even if I am off by
75%, 250 clusters still sucks.

From my point of view an ideal use case for companies like ours (Yahoo/GoDaddy)
would be to support hierarchical projects in magnum.  That way we could
create a project for each department, and then the subteams of those
departments can have their own projects.  We create a bay per department.
Sub-projects, if they want to, can support creation of their own bays (but
support of the kube cluster would then fall to that team).  When a sub-project
spins up a pod on a bay, minions get created inside that team's sub-project and
the containers in that pod run on the capacity that was spun up under that
project; the minions for each pod would be in a scaling group and as such
grow/shrink as dictated by load.

The above would make it so we support a minimal, yet imho reasonable,
number of kube clusters, give people who can't/don't want to fall in line with
the provided resources a way to make their own, and still offer a "good enough
for a single company" level of multi-tenancy.

>Joshua,
>
>If you share resources, you give up multi-tenancy.  No COE system has the
>concept of multi-tenancy (kubernetes has some basic implementation but it
>is totally insecure).  Not only does multi-tenancy have to "look like" it
>offers multiple tenants isolation, but it actually has to deliver the
>goods.
>
>I understand that at first glance a company like Yahoo may not want
>separate bays for their various applications because of the perceived
>administrative overhead.  I would then challenge Yahoo to go deploy a COE
>like kubernetes (which has no multi-tenancy or a very basic implementation
>of such) and get it to work with hundreds of different competing
>applications.  I would speculate the administrative overhead of getting
>all that to work would be greater than the administrative overhead of
>simply doing a bay create for the various tenants.

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Hongbin Lu
+1 from me as well.

I think what makes Magnum appealing is the promise to provide
containers-as-a-service. I see COE deployment as a helper to achieve that
promise, not as the main goal.

Best regards,
Hongbin

From: Jay Lau [mailto:jay.lau@gmail.com]
Sent: September-29-15 10:57 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

+1 to Egor, I think that the final goal of Magnum is container-as-a-service,
not COE-deployment-as-a-service. ;-)

Especially since we are also working on the Magnum UI: the Magnum UI should
export some interfaces that enable end users to create container applications,
not only COE deployments.
I hope that Magnum can be treated as another "Nova" which focuses on container
service. I know it is difficult to unify all of the concepts in the different
COEs (k8s has pod, service, rc; swarm only has container; nova only has VM/PM
with different hypervisors), but this deserves some deep dive and thinking to
see how we can move forward.

On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz wrote:
definitely ;), but there are some thoughts on Tom's email.

I agree that we shouldn't reinvent APIs, but I don't think Magnum should only
focus on deployment (I feel we will become another Puppet/Chef/Ansible module
if we do that :))
I believe our goal should be to seamlessly integrate Kub/Mesos/Swarm into the
OpenStack ecosystem (Neutron/Cinder/Barbican/etc.), even if we need to step
into the Kub/Mesos/Swarm communities for that.

—
Egor

From: Adrian Otto
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Tuesday, September 29, 2015 at 08:44
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This is definitely a topic we should cover in Tokyo.

On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans) wrote:


+1

From: Tom Cammann
Reply-To: "openstack-dev@lists.openstack.org"
Date: Tuesday, September 29, 2015 at 2:22 AM
To: "openstack-dev@lists.openstack.org"
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This has been my thinking in the last couple of months: to completely deprecate
the COE-specific APIs such as pod/service/rc and container.

As we now support Mesos, Kubernetes and Docker Swarm, it's going to be very
difficult and probably a wasted effort trying to consolidate their separate
APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration Engine 
Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote:
Would it make sense to ask the opposite of Wanghua's question: should 
pod/service/rc be deprecated if the user can easily get to the k8s api?
Even if we want to orchestrate these in a Heat template, the corresponding heat 
resources can just interface with k8s instead of Magnum.
Ton Ngo,

From: Egor Guz
To: "openstack-dev@lists.openstack.org"
Date: 09/28/2015 10:20 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?




Also I believe docker compose is just a command-line tool which doesn't have
any API or scheduling features

Re: [openstack-dev] [magnum] Handling password for k8s

2015-09-20 Thread Hongbin Lu
Hi Ton,

If I understand your proposal correctly, it means the inputted password will be
exposed to users in the same tenant (since the password is passed as a stack
parameter, which is visible within the tenant). If users are not admin, they
don't have the privilege to create a temp user. As a result, users have to
expose their own password to create a bay, which is suboptimal.

A slight amendment is to have the operator create a user that is dedicated to
the communication between k8s and the neutron load balancer service. The
password of that user can be written into the config file, picked up by the
conductor and passed to heat. The drawback is that there is no multi-tenancy
for the openstack load balancer service, since all bays will share the same
credential.

Another solution I can think of is to have magnum create a keystone domain [1]
for each bay (using the admin credential in the config file), and assign the
bay's owner to that domain. As a result, the user will have the privilege to
create a bay user within that domain. It seems Heat supports native keystone
resources [2], which makes the administration of keystone users much easier.
The drawback is that the implementation is more complicated.

[1] https://wiki.openstack.org/wiki/Domains
[2] 
http://specs.openstack.org/openstack/heat-specs/specs/kilo/keystone-resources.html
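To make the domain-per-bay idea a bit more concrete, here is a rough sketch
(using python-keystoneclient v3; the endpoint, names, chosen role and helper
name are illustrative, and error handling is omitted) of creating a per-bay
domain and a dedicated bay user:

from keystoneauth1.identity import v3
from keystoneauth1 import session
from keystoneclient.v3 import client


def create_bay_credentials(auth_url, admin_user, admin_password,
                           bay_uuid, bay_owner_id):
    # Authenticate with the admin credential from magnum's config file.
    auth = v3.Password(auth_url=auth_url, username=admin_user,
                       password=admin_password, project_name='admin',
                       user_domain_id='default', project_domain_id='default')
    ks = client.Client(session=session.Session(auth=auth))

    # One domain per bay; the bay owner gets admin rights on that domain
    # so they can manage users inside it.
    domain = ks.domains.create(name='magnum-bay-%s' % bay_uuid)
    admin_role = ks.roles.find(name='admin')
    ks.roles.grant(admin_role, user=bay_owner_id, domain=domain)

    # A dedicated user whose only purpose is the k8s -> neutron calls.
    bay_user = ks.users.create(name='bay-%s-lb' % bay_uuid,
                               domain=domain,
                               password='generated-password')  # placeholder
    return domain, bay_user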

Best regards,
Hongbin

From: Ton Ngo [mailto:t...@us.ibm.com]
Sent: September-20-15 2:08 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Handling password for k8s


Hi everyone,
I am running into a potential issue in implementing the support for load 
balancer in k8s services. After a chat with sdake, I would like to run this by 
the team for feedback/suggestion.
First let me give a little background for context. In the current k8s cluster,
all k8s pods and services run within a private subnet (on Flannel); they can
access each other but they cannot be accessed from the external network. The
way to
publish an endpoint to the external network is by specifying this attribute in 
your service manifest:
type: LoadBalancer
Then k8s will talk to OpenStack Neutron to create the load balancer pool, 
members, VIP, monitor. The user would associate the VIP with a floating IP and 
then the endpoint of the service would be accessible from the external internet.
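For reference, the kind of service manifest being described looks roughly like
this (expressed here as a Python dict rather than YAML; the names and ports
are made up):

# A k8s v1 Service of type LoadBalancer; k8s asks the cloud provider
# (Neutron LBaaS in this setup) to create the pool/members/VIP for it.
service_manifest = {
    'apiVersion': 'v1',
    'kind': 'Service',
    'metadata': {'name': 'frontend'},
    'spec': {
        'type': 'LoadBalancer',
        'selector': {'app': 'frontend'},
        'ports': [{'port': 80, 'targetPort': 8080}],
    },
}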
To talk to Neutron, k8s needs the user credential and this is stored in a 
config file on the master node. This includes the username, tenant name, 
password. When k8s starts up, it will load the config file and create an 
authenticated client with Keystone.
The issue we need to find a good solution for is how to handle the password. 
With the current effort on security to make Magnum production-ready, we want to 
make sure to handle the password properly.
Ideally, the best solution is to pass the authenticated token to k8s to use,
but this will require a sizeable change upstream in k8s. We have good reason to
pursue this but it will take time.
For now, my current implementation is as follows:

  1.  In a bay-create, magnum client adds the password to the API call 
(normally it authenticates and sends the token)
  2.  The conductor picks it up and uses it as an input parameter to the heat 
templates
  3.  When configuring the master node, the password is saved in the config 
file for k8s services.
  4.  Magnum does not store the password internally.

This is probably not ideal, but it would let us proceed for now. We can
deprecate it later when we have a better solution. So leaving aside the issue
of how k8s should be changed, the question is: is this approach reasonable for
the time being, or is there a better approach?

Ton Ngo,
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Associating patches with bugs/bps (Please don't hurt me)

2015-09-18 Thread Hongbin Lu
Regarding the guidance, I think the judgement is a bit subjective. It could
happen that a contributor thinks his/her patch is trivial (or that it is not
fixing a functional defect), but a reviewer thinks the opposite. For example, I
found it hard to judge when I reviewed the following patches:

https://review.openstack.org/#/c/224183/
https://review.openstack.org/#/c/224198/
https://review.openstack.org/#/c/224184/

It could be helpful if the guide provided some examples of what is a trivial
patch and what is not. OpenStack uses this approach to define what is a
good/bad commit message, which I find quite helpful.

https://wiki.openstack.org/wiki/GitCommitMessages#Examples_of_bad_practice
https://wiki.openstack.org/wiki/GitCommitMessages#Examples_of_good_practice

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: September-17-15 5:09 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Associating patches with bugs/bps (Please 
don't hurt me)

For posterity, I have recorded this guidance in our Contributing Wiki:

See the NOTE section under:

https://wiki.openstack.org/wiki/Magnum/Contributing#Identify_bugs

Excerpt:

"NOTE: If you are fixing something trivial, that is not actually a functional 
defect in the software, you can do that without filing a bug ticket, if you 
don't want it to be tracked when we tally this work between releases. If you do 
this, just mention it in the commit message that it's a trivial change that 
does not require a bug ticket. You can reference this guideline if it comes up 
in discussion during the review process. Functional defects should be tracked 
in bug tickets. New features should be tracked in blueprints. Trivial features 
may be tracked using a bug ticket marked as 'Wishlist' importance."

I hope that helps.

Adrian

On Sep 17, 2015, at 2:01 PM, Adrian Otto wrote:

Let’s apply sensible reasoning. If it’s a new feature or a bug, it should be
tracked against an artifact like a bug ticket or a blueprint. If it’s truly
trivial, we don’t care. I can tell you that some of the worst bugs I have ever
seen in my career had fixes that were about 4 bytes long. That did not make
them any less serious.

If you are fixing an actual legitimate bug that has a three character fix, and 
you don’t want it to be tracked as the reviewer, then you can say so in the 
commit message. We can act accordingly going forward.

Adrian

On Sep 17, 2015, at 1:53 PM, Assaf Muller wrote:


On Thu, Sep 17, 2015 at 4:09 PM, Jeff Peeler wrote:

On Thu, Sep 17, 2015 at 3:28 PM, Fox, Kevin M wrote:
I agree. Lots of projects have this issue. I submitted a bug fix once that
literally was 3 characters long, and it took: a short commit message, a long
commit message, and a full bug report being filed and cross-linked. The amount
of time spent writing it up was orders of magnitude longer than the actual fix.

Seems a bit much...

Looking at this review, I'd go a step further and argue that code cleanups like
this one should be really, really easy to get through. No one likes to do them,
so we should be encouraging folks that actually do them, not piling up
roadblocks.

It is indeed frustrating. I've had a few similar reviews (in other projects - 
hopefully it's okay I comment here) as well. Honestly, I think if a given team 
is willing to draw the line as for what is permissible to commit without bug 
creation, then they should be permitted that freedom.

However, that said, I'm sure somebody is going to point out that come release 
time having the list of bugs fixed in a given release is handy, spelling errors 
included.

We've had the same debate in Neutron and we relaxed the rules. We don't require 
bugs for trivial changes. In fact, my argument has always been: Come release
time, when we say that the Neutron community fixed so and so bugs, we would be 
lying if we were to include fixing spelling issues in comments. That's not a 
bug.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] versioned objects changes

2015-08-28 Thread Hongbin Lu
Ryan,

Thanks for sharing your input. Looking through your response, I couldn't find
the reasoning for why a young project is the perfect time to enforce a strict
object-versioning rule. I think a young project often starts with a static (or
infrequently changing) version until the project reaches a certain level of
maturity, doesn't it? As a core reviewer of Magnum, I observe that the project
is under fast development and objects are changing from time to time. It is
very heavyweight to do all the work required to strictly enforce versioning
(bump the version number, document the changed fields, re-generate the hashes,
implement the compatibility check, etc.). Instead, I would prefer to let all
objects stay at a beta version until some time in the future when the team
decides to start bumping them.

Best regards,
Hongbin

From: Ryan Rossiter [mailto:rlros...@linux.vnet.ibm.com]
Sent: August-27-15 2:41 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] versioned objects changes

If you want my inexperienced opinion, a young project is the perfect time to 
start this. Nova has had a bunch of problems with versioned objects that don't 
get realized until the next release (because that's the point in time at which 
grenade (or worse, operators) catch this). At that point, you then need to hack 
things around and backport them in order to get them working in the old branch. 
[1] is an excellent example of Nova having to backport a fix to an object 
because we weren't using strict object testing.

I don't feel that this should be adding overhead for contributors and
reviewers. With [2], this test absolutely helps both contributors and
reviewers. Yes, it requires fixing things when a change happens to an object.
Learning to do this fix to update object hashes is extremely easy, and I hope
my updated comment on there makes it even easier (also be aware I am new to
OpenStack & Nova as of about 2 months ago, so this stuff was new to me too not
very long ago).

I understand that something like [2] will cause a test to fail when you make a
major change to a versioned object. But you *want* that. It helps reviewers
more easily catch contributors to say "You need to update the version, because
the hash changed." The sooner you start using versioned objects in the way they
are designed, the smaller the upfront cost, and it will also be a major savings
later on if something like [1] pops up.

[1]: https://bugs.launchpad.net/nova/+bug/1474074
[2]: https://review.openstack.org/#/c/217342/
On 8/27/2015 9:46 AM, Hongbin Lu wrote:
-1 from me.

IMHO, the rolling upgrade feature makes sense for a mature project (like Nova),
but not for a young project like Magnum. It incurs overhead for contributors
and reviewers to check object compatibility in each patch. As you mentioned,
the key benefit of this feature is supporting different versions of magnum
components running at the same time (i.e. running magnum-api 1.0 with
magnum-conductor 1.1). I don't think supporting this advanced use case is a
must at the current stage.

However, I don't mean to be against merging patches for this feature. I just
disagree with enforcing the object-version-change rule in the near future.

Best regards,
Hongbin

From: Grasza, Grzegorz [mailto:grzegorz.gra...@intel.com]
Sent: August-26-15 4:47 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] versioned objects changes

Hi,

I noticed that right now, when we make changes (adding/removing fields) in 
https://github.com/openstack/magnum/tree/master/magnum/objects , we don't 
change object versions.

The idea of objects is that each change in their fields should be versioned, 
documentation about the change should also be written in a comment inside the 
object and the obj_make_compatible method should be implemented or updated. See 
an example here:
https://github.com/openstack/nova/commit/ad6051bb5c2b62a0de6708cd2d7ac1e3cfd8f1d3#diff-7c6fefb09f0e1b446141d4c8f1ac5458L27

The question is, do you think magnum should support rolling upgrades from next 
release or maybe it's still too early?

If yes, I think core reviewers should start checking for these incompatible 
changes.

To clarify, rolling upgrades means support for running magnum services at 
different versions at the same time.
In Nova, there is an RPC call in the conductor to backport objects, which is 
called when older code gets an object it doesn't understand. This patch does 
this in Magnum: https://review.openstack.org/#/c/184791/ .

I can report bugs and propose patches with version changes for this release, to 
get the effort started.

In Mitaka, when Grenade gets multi-node support, it can be used to add CI tests 
for rolling upgrades in Magnum.


/ Greg





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [magnum] Mesos Conductor

2015-12-07 Thread Hongbin Lu
2015 10:47:33 +0800
Subject: Re: [openstack-dev] [magnum] Mesos Conductor

@bharath,

1) Actually, if you mean using container-create(delete) on a mesos bay for
apps, I am not sure how different the docker interface and the mesos interface
are. One point: when you introduce that feature, please do not make the docker
container interface more complicated than it is now. I worry about that because
it would confuse end-users more than the unified interface benefits them (maybe
add an optional parameter to pass one json file to create containers in mesos).

2) For the unified interface, I think it needs more thought; we should not make
end-users learn new concepts or interfaces unless we can offer a clearer
interface, but different COEs vary a lot. It is very challenging.



Thanks

Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


From: bharath thiruveedula <bharath_...@hotmail.com>
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev@lists.openstack.org>
Date: 19/11/2015 10:31 am
Subject: Re: [openstack-dev] [magnum] Mesos Conductor





@hongbin, @adrian I agree with you. So can we go ahead with magnum
container-create(delete) ... for the mesos bay (which actually creates a
mesos(marathon) app internally)?

@jay, yes, we have multiple frameworks which use the mesos lib. But the mesos
bay we are creating uses marathon. And we had a discussion on IRC on this
topic, and I was asked to implement an initial version for marathon. And I
agree with you on having a unified client interface for creating pod/app.

Regards
Bharath T


Date: Thu, 19 Nov 2015 10:01:35 +0800
From: jay.lau@gmail.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] Mesos Conductor

+1.

One problem I want to mention is that for the mesos integration, we cannot be
limited to Marathon + Mesos, as there are many frameworks that can run on top
of Mesos, such as Chronos, Kubernetes, etc. We may need to consider more for
the Mesos integration as there is a huge ecosystem built on top of Mesos.

On Thu, Nov 19, 2015 at 8:26 AM, Adrian Otto <adrian.o...@rackspace.com> wrote:

Bharath,

I agree with Hongbin on this. Let’s not expand magnum to deal with apps or 
appgroups in the near term. If there is a strong desire to add these things, we 
could allow it by having a plugin/extensions interface for the Magnum API to 
allow additional COE specific features. Honestly, it’s just going to be a 
nuisance to keep up with the various upstreams until they become completely 
stable from an API perspective, and no additional changes are likely. All of 
our COE’s still have plenty of maturation ahead of them, so this is the wrong 
time to wrap them.

If someone really wants apps and appgroups, (s)he could add that to an 
experimental branch of the magnum client, and have it interact with the 
marathon API directly rather than trying to represent those resources in 
Magnum. If that tool became popular, then we could revisit this topic for 
further consideration.

Adrian

> On Nov 18, 2015, at 3:21 PM, Hongbin Lu <hongbin...@huawei.com> wrote:
>
> Hi Bharath,
>
> I agree on the “container” part. We can implement “magnum container-create ..”
> for the mesos bay in the way you mentioned. Personally, I don’t like to introduce
> “apps” and “appgroups” resources to Magnum, because they are already provided

Re: [openstack-dev] Mesos Conductor using container-create operations

2015-12-09 Thread Hongbin Lu
As Bharath mentioned, I am +1 on extending the "container" object to the Mesos
bay. In addition, I propose to extend "container" to k8s as well (the details
are described in this BP [1]). The goal is to promote this API resource to be
technology-agnostic and make it portable across all COEs. I am going to justify
this proposal with a use case.

Use case:
I have an app. I used to deploy my app to a VM in OpenStack. Right now, I want
to deploy my app to a container. I have basic knowledge of containers but am
not familiar with any specific container tech. I want a simple and intuitive
API to operate a container (i.e. CRUD), like how I operated a VM before. I find
it hard to learn the DSL introduced by a specific COE (k8s/marathon). Most
importantly, I want my deployment to be portable regardless of the choice of
cluster management system and/or container runtime. I want OpenStack to be the
only integration point, because I don't want to be locked in to a specific
container tech. I want to avoid the risk of a specific container tech being
replaced by another in the future. Optimally, I want Keystone to be the only
authentication system that I need to deal with. I don't want the extra
complexity of dealing with an additional authentication system introduced by a
specific COE.

Solution:
Implement "container" object for k8s and mesos bay (and all the COEs introduced 
in the future).

That's it. I would appreciate if you can share your thoughts on this proposal.

[1] https://blueprints.launchpad.net/magnum/+spec/unified-containers

Best regards,
Hongbin

From: bharath thiruveedula [mailto:bharath_...@hotmail.com]
Sent: December-08-15 11:40 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Mesos Conductor using container-create operations

Hi,

As we discussed in the last meeting, we cannot continue with the changes to
container-create [1] until we have a suitable use case. But I honestly feel we
should have some kind of support for mesos + marathon apps, because magnum
supports COE-related functionality for docker swarm (container-create) and k8s
(pod-create, rc-create, ...) but not for mesos bays.

As hongbin suggested, we can use the existing container-create functionality
and support it in the mesos conductor. Currently we have container-create only
for the docker swarm bay. Let's support the same command for the mesos bay
without any changes on the client side.

Let me know your suggestions.

Regards
Bharath T
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Magnum conductor async container operations

2015-12-17 Thread Hongbin Lu
Suro,

FYI, previously we tried a distributed lock implementation for bay operations
(here are the patches [1,2,3,4,5]). However, after several discussions online
and offline, we decided to drop the blocking implementation for bay operations
in favor of a non-blocking implementation (which is not implemented yet). You
can find more discussion here [6,7].

For the async container operations, I would suggest considering a non-blocking
approach first. If that is impossible and we need a blocking implementation, I
suggest using the bay operation patches below as a reference.

[1] https://review.openstack.org/#/c/171921/
[2] https://review.openstack.org/#/c/172603/
[3] https://review.openstack.org/#/c/172772/
[4] https://review.openstack.org/#/c/172773/
[5] https://review.openstack.org/#/c/172774/
[6] https://blueprints.launchpad.net/magnum/+spec/horizontal-scale
[7] https://etherpad.openstack.org/p/liberty-work-magnum-horizontal-scale

Best regards,
Hongbin

-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com] 
Sent: December-16-15 10:20 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: s...@yahoo-inc.com
Subject: Re: [openstack-dev] [magnum] Magnum conductor async container 
operations


> On Dec 16, 2015, at 6:24 PM, Joshua Harlow wrote:
> 
> SURO wrote:
>> Hi all,
>> Please review and provide feedback on the following design proposal 
>> for implementing the blueprint[1] on async-container-operations -
>> 
>> 1. Magnum-conductor would have a pool of threads for executing the 
>> container operations, viz. executor_threadpool. The size of the 
>> executor_threadpool will be configurable. [Phase0] 2. Every time, 
>> Magnum-conductor(Mcon) receives a container-operation-request from 
>> Magnum-API(Mapi), it will do the initial validation, housekeeping and 
>> then pick a thread from the executor_threadpool to execute the rest 
>> of the operations. Thus Mcon will return from the RPC request context 
>> much faster without blocking the Mapi. If the executor_threadpool is 
>> empty, Mcon will execute in a manner it does today, i.e. 
>> synchronously - this will be the rate-limiting mechanism - thus 
>> relaying the feedback of exhaustion.
>> [Phase0]
>> How often we are hitting this scenario, may be indicative to the 
>> operator to create more workers for Mcon.
>> 3. Blocking class of operations - There will be a class of 
>> operations, which can not be made async, as they are supposed to 
>> return result/content inline, e.g. 'container-logs'. [Phase0] 4. 
>> Out-of-order considerations for NonBlocking class of operations - 
>> there is a possible race around condition for create followed by 
>> start/delete of a container, as things would happen in parallel. To 
>> solve this, we will maintain a map of a container and executing 
>> thread, for current execution. If we find a request for an operation 
>> for a container-in-execution, we will block till the thread completes 
>> the execution. [Phase0]
> 
> Does whatever do these operations (mcon?) run in more than one process?

Yes, there may be multiple copies of magnum-conductor running on separate hosts.

> Can it be requested to create in one process then delete in another? 
> If so is that map some distributed/cross-machine/cross-process map 
> that will be inspected to see what else is manipulating a given 
> container (so that the thread can block until that is not the case... 
> basically the map is acting like a operation-lock?)

That’s how I interpreted it as well. This is a race prevention technique so 
that we don’t attempt to act on a resource until it is ready. Another way to 
deal with this is check the state of the resource, and return a “not ready” 
error if it’s not ready yet. If this happens in a part of the system that is 
unattended by a user, we can re-queue the call to retry after a minimum delay 
so that it proceeds only when the ready state is reached in the resource, or 
terminated after a maximum number of attempts, or if the resource enters an 
error state. This would allow other work to proceed while the retry waits in 
the queue.
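To illustrate only the single-process form of that container-to-operation map
(the distributed, multi-process case discussed below is harder and not covered
here), a sketch using just the Python standard library might look like this;
the function name and pool size are made up:

import collections
import threading
from concurrent import futures

# Pool size would come from configuration (item 1 of the proposal).
_executor = futures.ThreadPoolExecutor(max_workers=10)
_locks = collections.defaultdict(threading.Lock)   # container uuid -> lock
_locks_guard = threading.Lock()


def submit_container_op(container_uuid, op, *args, **kwargs):
    with _locks_guard:
        lock = _locks[container_uuid]

    def _run():
        # Serializes operations on the same container; operations on other
        # containers proceed in parallel on the remaining pool threads.
        with lock:
            return op(*args, **kwargs)

    return _executor.submit(_run)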

> If it's just local in one process, then I have a library for u that 
> can solve the problem of correctly ordering parallel operations ;)

What we are aiming for is a bit more distributed. 

Adrian

>> This mechanism can be further refined to achieve more asynchronous 
>> behavior. [Phase2] The approach above puts a prerequisite that 
>> operations for a given container on a given Bay would go to the same 
>> Magnum-conductor instance.
>> [Phase0]
>> 5. The hand-off between Mcon and a thread from executor_threadpool 
>> can be reflected through new states on the 'container' object. These 
>> states can be helpful to recover/audit, in case of Mcon restart. 
>> [Phase1]
>> 
>> Other considerations -
>> 1. Using eventlet.greenthread instead of real threads => This 
>> approach would require further refactoring the execution 

Re: [openstack-dev] [magnum] Removing pod, rcs and service APIs

2015-12-16 Thread Hongbin Lu
Hi Tom,

If I remember correctly, the decision is to drop the COE-specific APIs (Pod,
Service, Replication Controller) in the next API version. I think a good way to
do that is to put a deprecation warning in the current API version (v1) for the
resources to be removed, and remove them in the next API version (v2).

An alternative is to drop them in the current API version. If we decide to do
that, we need to bump the microversion [1] and ask users to specify the
microversion as part of their requests when they want the removed APIs.

[1] 
http://docs.openstack.org/developer/nova/api_microversions.html#removing-an-api-method

Best regards,
Hongbin

-Original Message-
From: Cammann, Tom [mailto:tom.camm...@hpe.com] 
Sent: December-16-15 8:21 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Removing pod, rcs and service APIs

I have been noticing a fair amount of redundant work going on in magnum,
python-magnumclient and magnum-ui with regards to APIs we have been intending
to drop support for. During the Tokyo summit it was decided that we should
support only COE APIs that all COEs can support, which means dropping support
for the Kubernetes-specific APIs for Pod, Service and Replication Controller.

Egor has submitted a blueprint[1] “Unify container actions between all COEs” 
which has been approved to cover this work and I have submitted the first of 
many patches that will be needed to unify the APIs.

The controversial patches are here: https://review.openstack.org/#/c/258485/ 
and https://review.openstack.org/#/c/258454/

These patches are more a forcing function for our team to decide how to
correctly deprecate these APIs; as I mentioned, there is a lot of redundant
work going on around these APIs. Please let me know your thoughts.

Tom

[1] https://blueprints.launchpad.net/magnum/+spec/unified-containers
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mesos Conductor using container-create operations

2015-12-10 Thread Hongbin Lu
Hi Ton,

Thanks for the feedback. Here is a clarification. The proposal is neither for
using an existing DSL to express a container, nor for inventing a new DSL.
Instead, I proposed to hide the complexity of existing DSLs and expose a simple
API to users. For example, if users want to create a container, they could type
something like:

magnum container-create --name XXX --image XXX --command XXX

Magnum will process the request and translate it to COE-specific API calls. For
k8s, we could dynamically generate a pod with a single container and fill the
pod with the inputted values (image, command, etc.). Similarly, in marathon, we
could generate an app based on the inputs. A key advantage of this is that it
is simple and doesn't require COE-specific knowledge.

Best regards,
Hongbin

From: Ton Ngo [mailto:t...@us.ibm.com]
Sent: December-10-15 8:18 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Mesos Conductor using container-create operations


I think extending the container object to Mesos via a command like
container-create is a fine idea. Going into the details, however, we run into
some complications.
1. The user would still have to choose a DSL to express the container. This
would have to be a kube and/or swarm DSL since we don't want to invent a new
one.
2. For the Mesos bay in particular, kube or swarm may be running on top of
Mesos alongside Marathon, so somewhere along the line, Magnum has to be able to
make the distinction and handle things appropriately.

We should think through the scenarios carefully to come to agreement on how 
this would work.

Ton Ngo,



From: Hongbin Lu <hongbin...@huawei.com>
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: 12/09/2015 03:09 PM
Subject: Re: [openstack-dev] Mesos Conductor using container-create operations





As Bharath mentioned, I am +1 to extend the “container” object to Mesos bay. In 
addition, I propose to extend “container” to k8s as well (the details are 
described in this BP [1]). The goal is to promote this API resource to be 
technology-agnostic and make it portable across all COEs. I am going to justify 
this proposal by a use case.

Use case:
I have an app. I used to deploy my app to a VM in OpenStack. Right now, I want 
to deploy my app to a container. I have basic knowledge of container but not 
familiar with specific container tech. I want a simple and intuitive API to 
operate a container (i.e. CRUD), like how I operated a VM before. I find it 
hard to learn the DSL introduced by a specific COE (k8s/marathon). Most 
importantly, I want my deployment to be portable regardless of the choice of 
cluster management system and/or container runtime. I want OpenStack to be the 
only integration point, because I don’t want to be locked-in to specific 
container tech. I want to avoid the risk that a specific container tech being 
replaced by another in the future. Optimally, I want Keystone to be the only 
authentication system that I need to deal with. I don't want the extra 
complexity to deal with additional authentication system introduced by specific 
COE.

Solution:
Implement "container" object for k8s and mesos bay (and all the COEs introduced 
in the future).

That's it. I would appreciate if you can share your thoughts on this proposal.

[1] https://blueprints.launchpad.net/magnum/+spec/unified-containers

Best regards,
Hongbin

From: bharath thiruveedula [mailto:bharath_...@hotmail.com]
Sent: December-08-15 11:40 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Mesos Conductor using container-create operations

Hi,

As we have discussed in last meeting, we cannot continue with changes in 
container-create[1] as long as we have suitable use case. But I honestly feel 
to have some kind of support for mesos + marathon apps, because magnum supports 
COE related functionalities for docker swarm (container-create) and k8s 
(pod-create, rc-create..) but not for mesos bays.

As hongbin suggested, we use the existing functionality of container-create and 
support in mesos-conductor. Currently we have container-create only for docker 
swarm bay. Let's have support for the same command for mesos bay with out any 
changes in client side.

Let me know your suggestions.

Regards
Bharath T
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Infra][nova][magnum] Jenkins failed quite often for "Cannot set up guest memory 'pc.ram': Cannot allocate memory"

2015-12-12 Thread Hongbin Lu
Hi,

As Kai Qiang mentioned, the magnum gate recently had a bunch of random
failures, which occurred when creating a nova instance with 2G of RAM.
According to the error message, it seems that the hypervisor tried to allocate
memory to the nova instance but couldn't find enough free memory on the host.
However, adding a few "nova hypervisor-show XX" calls before, during, and right
after the test showed that the host has 6G of free RAM, which is far more than
2G. Here is a snapshot of the output [1]. You can find the full log here [2].

Another observation is that most of the failures happened on nodes with names
matching "devstack-trusty-ovh-*" (you can verify this by entering a query [3]
at http://logstash.openstack.org/ ). It seems that the jobs will be fine if
they are allocated to a node other than "ovh".

Any hints to debug this issue further? Suggestions are greatly appreciated.

[1] http://paste.openstack.org/show/481746/
[2] 
http://logs.openstack.org/48/256748/1/check/gate-functional-dsvm-magnum-swarm/56d79c3/console.html
[3] https://review.openstack.org/#/c/254370/2/queries/1521237.yaml

Best regards,
Hongbin

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: December-09-15 7:23 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Infra][nova][magnum] Jenkins failed quite often for 
"Cannot set up guest memory 'pc.ram': Cannot allocate memory"


Hi All,

I am not sure what changed these days. We have found that, quite often now, Jenkins 
fails with:


http://logs.openstack.org/07/244907/5/check/gate-functional-dsvm-magnum-k8s/5305d7a/logs/libvirt/libvirtd.txt.gz

2015-12-09 08:52:27.892+0000: 22957: debug : qemuMonitorJSONCommandWithFd:264 : Send command '{"execute":"qmp_capabilities","id":"libvirt-1"}' for write with FD -1
2015-12-09 08:52:27.892+0000: 22957: debug : qemuMonitorSend:959 : QEMU_MONITOR_SEND_MSG: mon=0x7fa66400c6f0 msg={"execute":"qmp_capabilities","id":"libvirt-1"} fd=-1
2015-12-09 08:52:27.941+0000: 22951: debug : virNetlinkEventCallback:347 : dispatching to max 0 clients, called from event watch 6
2015-12-09 08:52:27.941+0000: 22951: debug : virNetlinkEventCallback:360 : event not handled.
2015-12-09 08:52:27.941+0000: 22951: debug : virNetlinkEventCallback:347 : dispatching to max 0 clients, called from event watch 6
2015-12-09 08:52:27.941+0000: 22951: debug : virNetlinkEventCallback:360 : event not handled.
2015-12-09 08:52:27.941+0000: 22951: debug : virNetlinkEventCallback:347 : dispatching to max 0 clients, called from event watch 6
2015-12-09 08:52:27.941+0000: 22951: debug : virNetlinkEventCallback:360 : event not handled.
2015-12-09 08:52:28.070+0000: 22951: error : qemuMonitorIORead:554 : Unable to read from monitor: Connection reset by peer
2015-12-09 08:52:28.070+0000: 22951: error : qemuMonitorIO:690 : internal error: early end of file from monitor: possible problem:
Cannot set up guest memory 'pc.ram': Cannot allocate memory




We did not hit such resource issues before. I am not sure if the Infra or nova folks know 
about it?


Thanks

Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!

Re: [openstack-dev] [magnum] Temporarily remove swarm func test from gate

2016-01-07 Thread Hongbin Lu
Clark,

That is true. The check pipeline must pass in order to enter the gate pipeline. 
Here is the problem we are facing. A patch that was able to pass the check 
pipeline is blocked in gate pipeline, due to the instability of the test. The 
removal of unstable test from gate pipeline aims to unblock the patches that 
already passed the check.

An alternative is to remove the unstable test from check pipeline as well or 
mark it as non-voting test. If that is what the team prefers, I will adjust the 
review accordingly.

Best regards,
Hongbin

-Original Message-
From: Clark Boylan [mailto:cboy...@sapwetik.org] 
Sent: January-07-16 6:04 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test from 
gate

On Thu, Jan 7, 2016, at 02:59 PM, Hongbin Lu wrote:
> Hi folks,
> 
> It looks the swarm func test is currently unstable, which negatively 
> impacts the patch submission workflow. I proposed to remove it from 
> Jenkins gate (but keep it in Jenkins check), until it becomes stable.
> Please find the details in the review
> (https://review.openstack.org/#/c/264998/) and let me know if you have 
> any concern.
> 
Removing it from gate but not from check doesn't necessarily help much because 
you can only enter the gate pipeline once the change has a +1 from Jenkins. 
Jenkins applies the +1 after check tests pass.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Temporarily remove swarm func test from gate

2016-01-07 Thread Hongbin Lu
Hi folks,

It looks the swarm func test is currently unstable, which negatively impacts 
the patch submission workflow. I proposed to remove it from Jenkins gate (but 
keep it in Jenkins check), until it becomes stable. Please find the details in 
the review (https://review.openstack.org/#/c/264998/) and let me know if you 
have any concern.

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-21 Thread Hongbin Lu
If we decide to support quotas in CaaS layer (i.e. limit the # of bays), its 
implementation doesn't have to be redundant to IaaS layer (from Nova, Cinder, 
etc.). The implementation could be a layer on top of IaaS, in which requests 
need to pass two layers of quotas to succeed. There would be three cases:

1.   A request exceeds CaaS layer quota. Then, magnum rejects the request.

2.   A request is within the CaaS layer quota and is accepted by magnum. Magnum 
calls Heat to create a stack, which will fail if the stack exceeds an IaaS layer 
quota. In this case, magnum catches and re-throws the exception to the user.

3.   A request is within both CaaS and IaaS layer quota, and the request 
succeeds.
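
To illustrate the flow above, here is a minimal sketch of the two-layer check 
(all names are hypothetical; count_bays and heat_create_stack stand in for the 
real DB query and Heat client call):

class QuotaExceeded(Exception):
    pass


def create_bay(project_id, bay, caas_bay_limit, count_bays, heat_create_stack):
    # Layer 1: CaaS-level quota, enforced by Magnum itself (case 1).
    if count_bays(project_id) >= caas_bay_limit:
        raise QuotaExceeded('bay quota exceeded for project %s' % project_id)

    # Layer 2: IaaS-level quotas, enforced by Nova/Cinder/Neutron when Heat
    # creates the stack (case 2). Catch and re-throw so the user sees the
    # real root cause, e.g. "Floating IP quota exceeded".
    try:
        return heat_create_stack(bay)   # succeeds if both layers pass (case 3)
    except Exception as exc:
        raise QuotaExceeded(str(exc))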

I think the debate here is whether it would be useful to implement an extra 
layer of quota management in Magnum. My guess is "yes" if operators 
want to hide the underlying infrastructure and expose a pure CaaS solution to 
their end-users. If operators don't use Magnum in this way, then I will vote 
for "no".

Also, we can reference other platform-level services (like Trove and Sahara) to 
see if they implemented an extra layer of quota management system, and we could 
use it as a decision point.

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: December-20-15 12:50 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

This sounds like a source-of-truth concern. From my perspective the solution is 
not to create redundant quotas. Simply quota the Magnum resources. Lower level 
limits *could* be queried by magnum prior to acting to CRUD the lower level 
resources. In the case we could check the maximum allowed number of (or access 
rate of) whatever lower level resource before requesting it, and raising an 
understandable error. I see that as an enhancement rather than a must-have. In 
all honesty that feature is probably more complicated than it's worth in terms 
of value.

--
Adrian

On Dec 20, 2015, at 6:36 AM, Jay Lau wrote:
I also have the same concern as Lee, since Magnum depends on Heat, and Heat needs 
to call nova, cinder, and neutron to create the Bay resources. But Nova and 
Cinder each have their own quota policy; if we define quotas again in Magnum, then how 
do we handle the conflict? Another point is that limiting Bays by quota seems a 
bit coarse-grained, as different bays may have different configurations and 
resource requests. Comments? Thanks.

On Thu, Dec 17, 2015 at 4:10 AM, Lee Calcote wrote:
Food for thought - there is a cost to FIPs (in the case of public IP 
addresses), security groups (to a lesser extent, but in terms of the 
computation of many hundreds of them), etc. Administrators may wish to enforce 
quotas on a variety of resources that are direct costs or indirect costs (e.g. 
# of bays, where a bay consists of a number of multi-VM / multi-host pods and 
services, which consume CPU, mem, etc.).

If Magnum quotas are brought forward, they should govern (enforce quota) on 
Magnum-specific constructs only, correct? Resources created by Magnum COEs 
should be governed by existing quota policies governing said resources (e.g. 
Nova and vCPUs).

Lee

On Dec 16, 2015, at 1:56 PM, Tim Bell wrote:

-Original Message-
From: Clint Byrum [mailto:cl...@fewbar.com]
Sent: 15 December 2015 22:40
To: openstack-dev
Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum
Resources

Hi! Can I offer a counter point?

Quotas are for _real_ resources.

The CERN container specialist agrees with you ... it would be good to
reflect on the needs given that ironic, neutron and nova are policing the
resource usage. Quotas in the past have been used for things like key pairs
which are not really real.


Memory, CPU, disk, bandwidth. These are all _closely_ tied to things that cost
real money and cannot be conjured from thin air. As such, the user being
able to allocate 1 billion or 2 containers is not limited by Magnum, but by real
things that they must pay for. If they have enough Nova quota to allocate 1
billion tiny pods, why would Magnum stop them? Who actually benefits from
that limitation?

So I suggest that you not add any detailed, complicated quota system to
Magnum. If there are real limitations to the implementation that Magnum
has chosen, such as we had in Heat (the entire stack must fit in memory),
then make that the limit. Otherwise, let their vcpu, disk, bandwidth, and
memory quotas be the limit, and enjoy the profit margins that having an
unbound force multiplier like Magnum in your cloud gives you and your
users!

Excerpts from Vilobh Meshram's message of 2015-12-14 16:58:54 -0800:

Hi All,

Currently, it is possible to create 

Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-21 Thread Hongbin Lu
Jay,

I think we should agree on a general direction before asking for a spec. It is 
bad to have contributors spend time working on something that might not be 
accepted.

Best regards,
Hongbin

From: Jay Lau [mailto:jay.lau@gmail.com]
Sent: December-20-15 6:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

Thanks Adrian and Tim, I saw that @Vilobh already uploaded a patch 
https://review.openstack.org/#/c/259201/ here, perhaps we can first have a spec 
and discuss there. ;-)

On Mon, Dec 21, 2015 at 2:44 AM, Tim Bell wrote:
Given the lower level quotas in Heat, Neutron, Nova etc., the error feedback is 
very important. A Magnum “cannot create” message requires a lot of debugging 
whereas a “Floating IP quota exceeded” gives a clear root cause.

Whether we quota Magnum resources or not, some error scenarios and appropriate 
testing+documentation would be a great help for operators.

Tim

From: Adrian Otto 
[mailto:adrian.o...@rackspace.com]
Sent: 20 December 2015 18:50
To: OpenStack Development Mailing List (not for usage questions)

Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

This sounds like a source-of-truth concern. From my perspective the solution is 
not to create redundant quotas. Simply quota the Magnum resources. Lower level 
limits *could* be queried by magnum prior to acting to CRUD the lower level 
resources. In the case we could check the maximum allowed number of (or access 
rate of) whatever lower level resource before requesting it, and raising an 
understandable error. I see that as an enhancement rather than a must-have. In 
all honesty that feature is probably more complicated than it's worth in terms 
of value.

--
Adrian

On Dec 20, 2015, at 6:36 AM, Jay Lau wrote:
I also have the same concern as Lee, since Magnum depends on Heat, and Heat needs 
to call nova, cinder, and neutron to create the Bay resources. But Nova and 
Cinder each have their own quota policy; if we define quotas again in Magnum, then how 
do we handle the conflict? Another point is that limiting Bays by quota seems a 
bit coarse-grained, as different bays may have different configurations and 
resource requests. Comments? Thanks.

On Thu, Dec 17, 2015 at 4:10 AM, Lee Calcote wrote:
Food for thought - there is a cost to FIPs (in the case of public IP 
addresses), security groups (to a lesser extent, but in terms of the 
computation of many hundreds of them), etc. Administrators may wish to enforce 
quotas on a variety of resources that are direct costs or indirect costs (e.g. 
# of bays, where a bay consists of a number of multi-VM / multi-host pods and 
services, which consume CPU, mem, etc.).

If Magnum quotas are brought forward, they should govern (enforce quota) on 
Magnum-specific constructs only, correct? Resources created by Magnum COEs 
should be governed by existing quota policies governing said resources (e.g. 
Nova and vCPUs).

Lee

On Dec 16, 2015, at 1:56 PM, Tim Bell wrote:

-Original Message-
From: Clint Byrum [mailto:cl...@fewbar.com]
Sent: 15 December 2015 22:40
To: openstack-dev
Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum
Resources

Hi! Can I offer a counter point?

Quotas are for _real_ resources.

The CERN container specialist agrees with you ... it would be good to
reflect on the needs given that ironic, neutron and nova are policing the
resource usage. Quotas in the past have been used for things like key pairs
which are not really real.

Memory, CPU, disk, bandwidth. These are all _closely_ tied to things that cost
real money and cannot be conjured from thin air. As such, the user being
able to allocate 1 billion or 2 containers is not limited by Magnum, but by real
things that they must pay for. If they have enough Nova quota to allocate 1
billion tiny pods, why would Magnum stop them? Who actually benefits from
that limitation?

So I suggest that you not add any detailed, complicated quota system to
Magnum. If there are real limitations to the implementation that Magnum
has chosen, such as we had in Heat (the entire stack must fit in memory),
then make that the limit. Otherwise, let their vcpu, disk, bandwidth, and
memory quotas be the limit, and enjoy the profit margins that having an
unbound force multiplier like Magnum in your cloud gives you and your
users!

Excerpts from Vilobh Meshram's message of 2015-12-14 16:58:54 -0800:
Hi All,

Currently, it is possible to create unlimited number of resource like
bay/pod/service/. In Magnum, there should be a 

Re: [openstack-dev] [openstack][magnum]a problem about trust

2015-12-23 Thread Hongbin Lu
+1

Thanks Steven for pointing out the pitfall.

Best regards,
Hongbin

-Original Message-
From: Steven Hardy [mailto:sha...@redhat.com] 
Sent: December-23-15 3:30 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [openstack][magnum]a problem about trust

On Tue, Dec 22, 2015 at 04:55:17PM +0800, 王华 wrote:
>Hi all,
>When we create a trust to a trustee with a role, the trustor must have
>this role. Here is a problem I meet in my bp [1]. I need to create a trust
>with a role, with the trust docker registry can access Swift to store
>images. But the trustor (the user who uses magnum) may not have the role.
>How can we address this problem?

FYI we had this exact same issue in Heat some time ago:

https://bugs.launchpad.net/heat/+bug/1376562

>There are two ways.
>1. Add the role to the trustor with the magnum admin user privilege. But
>when we need to delete the role, we can not know whether the role is added
>by magnum or is owned by the trustor.
>2. Let magnum trust the trustee with the role. We can assign the role to
>magnum before we start magnum.

As you'll see from the bug report above, neither of these solutions work, 
because you can't know at the point of delegation what roles may be needed in 
the future when impersonating the trustor.

So, having a special role which is always delegated (as you are proposing, and 
Heat used to do) doesn't work very well in environments where RBAC is actually 
used.

The solution we reached in Heat is to delegate all roles, as determined by 
keystone/authtoken, with the option for deployers to specify some specific role 
or list of roles if they have an RBAC scheme where the expected roles are 
predictable:

See https://review.openstack.org/128509 for the Heat solution.

HTH!

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][magnum] Jenkins failed quite often for "Cannot set up guest memory 'pc.ram': Cannot allocate memory"

2015-12-21 Thread Hongbin Lu
Hi Jeremy,

If you can make the swap size consistent, that would be terrific. Consistent 
settings across test nodes improve the predictability of the test results. 
Thanks to the infra team for the assistance in locating the cause of this error. We 
greatly appreciate it.

Best regards,
Hongbin

-Original Message-
From: Jeremy Stanley [mailto:fu...@yuggoth.org] 
Sent: December-21-15 9:49 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Infra][nova][magnum] Jenkins failed quite often 
for "Cannot set up guest memory 'pc.ram': Cannot allocate memory"

On 2015-12-20 12:35:34 -0800 (-0800), Clark Boylan wrote:
> Looking at the dstat logs for a recent fail [0], it did help in that 
> more memory is available. You now have over 1GB available but still 
> less than 2GB.
[...]

As Clark also pointed out in IRC, the workers where this is failing lack a swap 
memory device. I have a feeling if you swapoff before this point in the job 
you'll see it fail everywhere.

For consistency, we probably should make sure that our workers have similarly 
sized swap devices (using a swapfile on the rootfs if necessary).
--
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][nova][magnum] Jenkins failed quite often for "Cannot set up guest memory 'pc.ram': Cannot allocate memory"

2015-12-20 Thread Hongbin Lu
Hi Clark,

Thanks for the fix. Unfortunately, it doesn't seem to help. The error still 
occurred [1] after you increased the memory restriction, and as before, most of 
the failures occurred on OVH hosts. Any further suggestions?

[1] http://status.openstack.org/elastic-recheck/#1521237

Best regards,
Hongbin

-Original Message-
From: Clark Boylan [mailto:cboy...@sapwetik.org] 
Sent: December-15-15 5:41 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Infra][nova][magnum] Jenkins failed quite often 
for "Cannot set up guest memory 'pc.ram': Cannot allocate memory"

On Sun, Dec 13, 2015, at 10:51 AM, Clark Boylan wrote:
> On Sat, Dec 12, 2015, at 02:16 PM, Hongbin Lu wrote:
> > Hi,
> > 
> > As Kai Qiang mentioned, magnum gate recently had a bunch of random 
> > failures, which occurred on creating a nova instance with 2G of RAM.
> > According to the error message, it seems that the hypervisor tried 
> > to allocate memory to the nova instance but couldn’t find enough 
> > free memory in the host. However, by adding a few “nova 
> > hypervisor-show XX” before, during, and right after the test, it 
> > showed that the host has 6G of free RAM, which is far more than 2G. 
> > Here is a snapshot of the output [1]. You can find the full log here [2].
> If you look at the dstat log
> http://logs.openstack.org/07/244907/5/check/gate-functional-dsvm-magnu
> m-k8s/5305d7a/logs/screen-dstat.txt.gz
> the host has nowhere near 6GB free memory and less than 2GB. I think 
> you actually are just running out of memory.
> > 
> > Another observation is that most of the failure happened on a node 
> > with name “devstack-trusty-ovh-*” (You can verify it by entering a 
> > query [3] at http://logstash.openstack.org/ ). It seems that the 
> > jobs will be fine if they are allocated to a node other than “ovh”.
> I have just done a quick spot check of the total memory on 
> devstack-trusty hosts across HPCloud, Rackspace, and OVH using `free 
> -m` and the results are 7480, 7732, and 6976 megabytes respectively. 
> Despite using 8GB flavors in each case there is variation and OVH 
> comes in on the low end for some reason. I am guessing that you fail 
> here more often because the other hosts give you just enough extra 
> memory to boot these VMs.
To follow up on this we seem to have tracked this down to how the linux kernel 
restricts memory at boot when you don't have a contiguous chunk of system 
memory. We have worked around this by increasing the memory restriction to 
9023M at boot which gets OVH inline with Rackspace and slightly increases 
available memory on HPCloud (because it actually has more of it).

You should see this fix in action after image builds complete tomorrow (they 
start at 1400UTC ish).
> 
> We will have to look into why OVH has less memory despite using 
> flavors that should be roughly equivalent.
> > 
> > Any hints to debug this issue further? Suggestions are greatly 
> > appreciated.
> > 
> > [1] http://paste.openstack.org/show/481746/
> > [2]
> > http://logs.openstack.org/48/256748/1/check/gate-functional-dsvm-mag
> > num-swarm/56d79c3/console.html [3] 
> > https://review.openstack.org/#/c/254370/2/queries/1521237.yaml


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]storage for docker-bootstrap

2015-11-25 Thread Hongbin Lu
Here is a bit more context.

Currently, in the k8s and swarm bays, some required binaries (i.e. etcd and flannel) 
are built into the image and run on the host. We are exploring the possibility of 
containerizing some of these system components. The rationales are (i) it is 
infeasible to build custom packages into an atomic image and (ii) it is 
infeasible to upgrade an individual component. For example, if there is a bug in 
the current version of flannel and we know the bug was fixed in the next version, 
we have to upgrade flannel by building a new image, which is a tedious process.

To containerize flannel, we need a second docker daemon, called 
docker-bootstrap [1]. In this setup, pods run on the main docker 
daemon, and flannel and etcd run on the second docker daemon. The 
reason is that flannel needs to manage the network of the main docker daemon, 
so it needs to run on a separate daemon.

Daneyon, I think it requires separate storage because it needs to run a 
separate docker daemon (unless there is a way to make two docker daemons share 
the same storage).

Wanghua, is it possible to leverage a Cinder volume for that? Leveraging external 
storage is always preferred [2].

[1] 
http://kubernetes.io/v1.1/docs/getting-started-guides/docker-multinode.html#bootstrap-docker
[2] http://www.projectatomic.io/docs/docker-storage-recommendation/

Best regards,
Hongbin

From: Daneyon Hansen (danehans) [mailto:daneh...@cisco.com]
Sent: November-25-15 11:10 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]storage for docker-bootstrap



From: 王华
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Wednesday, November 25, 2015 at 5:00 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [magnum]storage for docker-bootstrap

Hi all,

I am working on containerizing etcd and flannel. But I met a problem. As 
described in [1], we need a docker-bootstrap. Docker and docker-bootstrap can 
not use the same storage, so we need some disk space for it.

I reviewed [1] and I do not see where the bootstrap docker instance requires 
separate storage.

The docker daemon on the master node stores data in /dev/mapper/atomicos-docker--data and 
metadata in /dev/mapper/atomicos-docker--meta. The disk space left is too small 
for docker-bootstrap. Even if the root_gb of the instance flavor is 20G, only 
8G can be used in our image. I want to make it bigger. One way is to add 
the disk space left on vda as vda3 into the atomicos VG after the instance 
starts, and then allocate two logical volumes for docker-bootstrap. Another way is 
to allocate two logical volumes for docker-bootstrap when we create the image. 
The second way has an advantage: it doesn't have to create the filesystem when the 
instance boots, which is time consuming.

What is your opinion?

Best Regards
Wanghua

[1] 
http://kubernetes.io/v1.1/docs/getting-started-guides/docker-multinode/master.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Using docker container to run COE daemons

2015-11-26 Thread Hongbin Lu
Jay,

Agree and disagree. Containerizing some COE daemons will facilitate version 
upgrades and maintenance. However, I don't think it is correct to blindly 
containerize everything unless an investigation is performed to 
understand the benefits and costs of doing that. Quoting Egor, the common 
practice in k8s is to containerize everything except kubelet, because it seems 
it is just too hard to containerize absolutely everything. In the case of mesos, I am not 
sure if it is a good idea to move everything to containers, given the fact that 
it is relatively easy to manage and upgrade debian packages on Ubuntu. However, 
in the new CoreOS mesos bay [1], mesos daemons will run in containers.

In summary, I think the correct strategy is to selectively containerize some 
COE daemons, but we don’t have to containerize *all* COE daemons.

[1] https://blueprints.launchpad.net/magnum/+spec/mesos-bay-with-coreos

Best regards,
Hongbin

From: Jay Lau [mailto:jay.lau@gmail.com]
Sent: November-26-15 2:06 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Using docker container to run COE daemons

Thanks Kai Qing, I filed a bp for mesos bay here 
https://blueprints.launchpad.net/magnum/+spec/mesos-in-container

On Thu, Nov 26, 2015 at 8:11 AM, Kai Qiang Wu wrote:

Hi Jay,

For the Kubernetes COE container ways, I think @Hua Wang is doing that.

For the swarm COE, swarm already has the master and agent running in containers.

For mesos, there is no containerization work yet. Maybe someone has already 
drafted a BP for it? Not quite sure.



Thanks

Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


From: Jay Lau
To: OpenStack Development Mailing List
Date: 26/11/2015 07:15 am
Subject: [openstack-dev] [magnum] Using docker container to run COE daemons





Hi,

It is becoming more and more popular to use docker containers to run 
applications, so what about leveraging this in Magnum?

What I want to do is run all COE daemons in docker 
containers, because Kubernetes, Mesos and Swarm now support running in docker 
containers and there are already some existing docker images/dockerfiles which 
we can leverage.

So what about updating all COE templates to use docker containers to run the COE 
daemons, and maintaining some dockerfiles for the different COEs in Magnum? This can 
reduce the maintenance effort for each COE: if there is a new version and we want to 
upgrade, updating the dockerfile is enough. Comments?

--
Thanks,
Jay Lau (Guangya Liu)
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Thanks,
Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Mesos Conductor

2015-11-18 Thread Hongbin Lu
Hi Bharath,

I agree on the "container" part. We can implement "magnum container-create ..." for 
the mesos bay in the way you mentioned. Personally, I don't like introducing 
"apps" and "appgroups" resources to Magnum, because they are already provided 
by the native tool [1]. I can't see the benefit of implementing a wrapper API to 
offer what the native tool already offers. However, if you can point out a valid 
use case for wrapping the API, I will give it more thought.
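
For what it's worth, the mapping itself should be fairly mechanical. A rough 
sketch (the request fields are assumptions, not an existing Magnum schema, and 
the output follows Marathon's app JSON as far as I understand it):

def container_to_marathon_app(container):
    """Translate a container-create request into a single-instance
    Marathon app (container == app with 1 instance)."""
    return {
        'id': '/%s' % container['name'],
        'instances': 1,
        'cpus': container.get('cpus', 0.5),
        'mem': container.get('memory_mb', 256),
        'cmd': container.get('command'),
        'container': {
            'type': 'DOCKER',
            'docker': {'image': container['image'], 'network': 'BRIDGE'},
        },
    }

Create/delete/show would then map to POST/DELETE/GET against Marathon's /v2/apps 
endpoint, so no client-side changes should be needed.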

Best regards,
Hongbin

[1] https://docs.mesosphere.com/using/cli/marathonsyntax/

From: bharath thiruveedula [mailto:bharath_...@hotmail.com]
Sent: November-18-15 1:20 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [magnum] Mesos Conductor

Hi all,

I am working on the blueprint 
[1]. As per my 
understanding, we have two resources/objects in mesos+marathon:

1) Apps: a combination of instances/containers running on multiple hosts, 
representing a service. [2]
2) Application Groups: a group of apps; for example, we can have a database 
application group which consists of a mongoDB app and a MySQL app. [3]

So I think we need to have two resources, 'apps' and 'appgroups', in the mesos 
conductor, like we have pod and rc for k8s. And regarding the 'magnum container' 
command, we can create, delete and retrieve container details as part of the mesos 
app itself (container = app with 1 instance). Though I think in the mesos case 
'magnum app-create ...' and 'magnum container-create ...' will use the same 
REST API.

Let me know your opinion/comments on this and correct me if I am wrong

[1]https://blueprints.launchpad.net/magnum/+spec/mesos-conductor.
[2]https://mesosphere.github.io/marathon/docs/application-basics.html
[3]https://mesosphere.github.io/marathon/docs/application-groups.html


Regards
Bharath T
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Issue on history of renamed file/folder

2015-11-18 Thread Hongbin Lu
Hi team,

I would like to start this ML thread to discuss the git rename issue. Here is the 
problem. In Git, it is handy to retrieve the commit history of a file/folder. There 
are several ways to do that. On the CLI, you can run "git log ..." to show the 
history. On GitHub, you can click the "History" button at the top of the file. The 
history of a file is traced back to the commit in which the file was created or 
renamed. In other words, renaming a file will cut the commit history of the 
file. If you want to trace the full history of a renamed file, on the CLI you can 
use "git log --follow ...". However, this feature is not supported on GitHub.

A way to mitigate the issue is to avoid renaming a file/folder if it is not for 
fixing a functional defect (e.g. renaming only to improve the naming style). If we do 
that, we sacrifice the quality of file/folder names to get a more traceable 
history. On the other hand, if we don't, we have to tolerate the 
history disconnection on GitHub. I would like to discuss which solution is preferred, 
or whether there is a better way to handle it.

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Temporarily remove swarm func test from gate

2016-01-08 Thread Hongbin Lu
Done: https://review.openstack.org/#/c/264998/

Best regards,
Hongbin

-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com] 
Sent: January-07-16 10:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test from 
gate

Hongbin,

I’m not aware of any viable options besides using a nonvoting gate job. Are 
there other alternatives to consider? If not, let’s proceed with that approach.

Adrian

> On Jan 7, 2016, at 3:34 PM, Hongbin Lu <hongbin...@huawei.com> wrote:
> 
> Clark,
> 
> That is true. The check pipeline must pass in order to enter the gate 
> pipeline. Here is the problem we are facing. A patch that was able to pass 
> the check pipeline is blocked in gate pipeline, due to the instability of the 
> test. The removal of unstable test from gate pipeline aims to unblock the 
> patches that already passed the check.
> 
> An alternative is to remove the unstable test from check pipeline as well or 
> mark it as non-voting test. If that is what the team prefers, I will adjust 
> the review accordingly.
> 
> Best regards,
> Hongbin
> 
> -Original Message-
> From: Clark Boylan [mailto:cboy...@sapwetik.org]
> Sent: January-07-16 6:04 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func 
> test from gate
> 
> On Thu, Jan 7, 2016, at 02:59 PM, Hongbin Lu wrote:
>> Hi folks,
>> 
>> It looks the swarm func test is currently unstable, which negatively 
>> impacts the patch submission workflow. I proposed to remove it from 
>> Jenkins gate (but keep it in Jenkins check), until it becomes stable.
>> Please find the details in the review
>> (https://review.openstack.org/#/c/264998/) and let me know if you 
>> have any concern.
>> 
> Removing it from gate but not from check doesn't necessarily help much 
> because you can only enter the gate pipeline once the change has a +1 from 
> Jenkins. Jenkins applies the +1 after check tests pass.
> 
> Clark
> 
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-03 Thread Hongbin Lu
I agree that a heterogeneous cluster is more advanced and harder to control, but 
I don't see why we (as service developers/providers) should care about that. If a 
significant portion of users ask for advanced topologies (i.e. 
heterogeneous clusters) and are willing to deal with the complexities, Magnum should 
just provide them (unless there are technical difficulties or other valid 
arguments). From my point of view, Magnum should support the basic use cases 
well (i.e. homogeneous), *and* be flexible enough to accommodate various advanced use 
cases if we can.

Best regards,
Hongbin

> -Original Message-
> From: Adrian Otto [mailto:adrian.o...@rackspace.com]
> Sent: June-02-16 7:24 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> managing the bay nodes
> 
> I am really struggling to accept the idea of heterogeneous clusters. My
> experience causes me to question whether a heterogeneus cluster makes
> sense for Magnum. I will try to explain why I have this hesitation:
> 
> 1) If you have a heterogeneous cluster, it suggests that you are using
> external intelligence to manage the cluster, rather than relying on it
> to be self-managing. This is an anti-pattern that I refer to as “pets"
> rather than “cattle”. The anti-pattern results in brittle deployments
> that rely on external intelligence to manage (upgrade, diagnose, and
> repair) the cluster. The automation of the management is much harder
> when a cluster is heterogeneous.
> 
> 2) If you have a heterogeneous cluster, it can fall out of balance.
> This means that if one of your “important” or “large” members fail,
> there may not be adequate remaining members in the cluster to continue
> operating properly in the degraded state. The logic of how to track and
> deal with this needs to be handled. It’s much simpler in the
> heterogeneous case.
> 
> 3) Heterogeneous clusters are complex compared to homogeneous clusters.
> They are harder to work with, and that usually means that unplanned
> outages are more frequent, and last longer than they with a homogeneous
> cluster.
> 
> Summary:
> 
> Heterogeneous:
>   - Complex
>   - Prone to imbalance upon node failure
>   - Less reliable
> 
> Homogeneous:
>   - Simple
>   - Don’t get imbalanced when a min_members concept is supported by the
> cluster controller
>   - More reliable
> 
> My bias is to assert that applications that want a heterogeneous mix of
> system capacities at a node level should be deployed on multiple
> homogeneous bays, not a single heterogeneous one. That way you end up
> with a composition of simple systems rather than a larger complex one.
> 
> Adrian
> 
> 
> > On Jun 1, 2016, at 3:02 PM, Hongbin Lu <hongbin...@huawei.com> wrote:
> >
> > Personally, I think this is a good idea, since it can address a set
> of similar use cases like below:
> > * I want to deploy a k8s cluster to 2 availability zone (in future 2
> regions/clouds).
> > * I want to spin up N nodes in AZ1, M nodes in AZ2.
> > * I want to scale the number of nodes in specific AZ/region/cloud.
> For example, add/remove K nodes from AZ1 (with AZ2 untouched).
> >
> > The use case above should be very common and universal everywhere. To
> address the use case, Magnum needs to support provisioning
> heterogeneous set of nodes at deploy time and managing them at runtime.
> It looks the proposed idea (manually managing individual nodes or
> individual group of nodes) can address this requirement very well.
> Besides the proposed idea, I cannot think of an alternative solution.
> >
> > Therefore, I vote to support the proposed idea.
> >
> > Best regards,
> > Hongbin
> >
> >> -Original Message-
> >> From: Hongbin Lu
> >> Sent: June-01-16 11:44 AM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually
> >> managing the bay nodes
> >>
> >> Hi team,
> >>
> >> A blueprint was created for tracking this idea:
> >> https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-
> >> nodes . I won't approve the BP until there is a team decision on
> >> accepting/rejecting the idea.
> >>
> >> From the discussion in design summit, it looks everyone is OK with
> >> the idea in general (with some disagreements in the API style).
> >> However, from the last team meeting, it looks some people disagree
> >> with the idea fundamentally. so I re-raised this ML to re-discuss.
> >>
> >> If you agree or disagree w

[openstack-dev] Announcing Higgins -- Container management service for OpenStack

2016-06-03 Thread Hongbin Lu
Hi all,

We would like to introduce you a new container project for OpenStack called 
Higgins (might be renamed later [1]).

Higgins is a Container Management service for OpenStack. The key objective of 
the Higgins project is to enable tight integration between OpenStack and 
container technologies. Until now, there has been no perfect solution that can 
effectively bring containers to OpenStack. Magnum provides a service to provision 
and manage Container Orchestration Engines (COEs), such as Kubernetes, Docker 
Swarm, and Apache Mesos, on top of Nova instances, but container management is 
out of its scope [2]. Nova-docker enables operating Docker containers through the 
existing Nova APIs, but it can't support container features that go beyond the 
compute model. The Heat docker plugin allows using Docker containers as Heat 
resources, but it has a similar limitation to nova-docker. Generally speaking, 
OpenStack lacks a container management service that can integrate 
containers with OpenStack, and Higgins was created to fill the gap.

Higgins aims to provide an OpenStack-native API for launching and managing 
containers backed by different container technologies, such as Docker, Rocket, 
etc. Higgins doesn't require calling other services/tools to provision the 
container infrastructure. Instead, it relies on existing infrastructure that is 
set up by operators. In our vision, the key value Higgins brings to OpenStack is 
enabling one platform for provisioning and managing VMs, baremetals, and 
containers as compute resources. In particular, VMs, baremetals, and containers 
will share the following:
- Single authentication and authorization system: Keystone
- Single UI Dashboard: Horizon
- Single resource and quota management
- Single block storage pools: Cinder
- Single networking layer: Neutron
- Single CLI: OpenStackClient
- Single image management: Glance
- Single Heat template for orchestration
- Single resource monitoring and metering system: Telemetry

For more information, please find below:
Wiki: https://wiki.openstack.org/wiki/Higgins
The core team: https://review.openstack.org/#/admin/groups/1382,members
Team meeting: Every Tuesday 0300 UTC at #openstack-meeting

NOTE: we are looking for feedback to shape the project roadmap. If you're 
interested in this project, we appreciate your inputs in the etherpad: 
https://etherpad.openstack.org/p/container-management-service

Best regards,
The Higgins team

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-May/095746.html
[2] https://review.openstack.org/#/c/311476/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] The Magnum Midcycle

2016-06-07 Thread Hongbin Lu
Hi all,

Please find the Doodle poll below for selecting the Magnum midcycle date. 
Presumably, it will be a 2-day event. The location is undecided for now. The 
previous midcycles were hosted in the Bay Area, so I guess we will stay there 
this time.

http://doodle.com/poll/5tbcyc37yb7ckiec

In addition, the Magnum team is looking for a host for the midcycle. Please let us 
know if you are interested in hosting us.

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][heat] spawn a group of nodes on different availability zones

2016-06-07 Thread Hongbin Lu
Hi Heat team,

A question inline.

Best regards,
Hongbin

> -Original Message-
> From: Steven Hardy [mailto:sha...@redhat.com]
> Sent: March-03-16 3:57 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][heat] spawn a group of nodes on
> different availability zones
> 
> On Wed, Mar 02, 2016 at 05:40:20PM -0500, Zane Bitter wrote:
> > On 02/03/16 05:50, Mathieu Velten wrote:
> > >Hi all,
> > >
> > >I am looking at a way to spawn nodes in different specified
> > >availability zones when deploying a cluster with Magnum.
> > >
> > >Currently Magnum directly uses predefined Heat templates with Heat
> > >parameters to handle configuration.
> > >I tried to reach my goal by sticking to this model, however I
> > >couldn't find a suitable Heat construct that would allow that.
> > >
> > >Here are the details of my investigation :
> > >- OS::Heat::ResourceGroup doesn't allow to specify a list as a
> > >variable that would be iterated over, so we would need one
> > >ResourceGroup by AZ
> > >- OS::Nova::ServerGroup only allows restriction at the hypervisor
> > >level
> > >- OS::Heat::InstanceGroup has an AZs parameter but it is marked
> > >unimplemented , and is CFN specific.
> > >- OS::Nova::HostAggregate only seems to allow adding some metadatas
> > >to a group of hosts in a defined availability zone
> > >- repeat function only works inside the properties section of a
> > >resource and can't be used at the resource level itself, hence
> > >something like that is not allowed :
> > >
> > >resources:
> > >   repeat:
> > > for_each:
> > >   <%az%>: { get_param: availability_zones }
> > > template:
> > >   rg-<%az%>:
> > > type: OS::Heat::ResourceGroup
> > > properties:
> > >   count: 2
> > >   resource_def:
> > > type: hot_single_server.yaml
> > > properties:
> > >   availability_zone: <%az%>
> > >
> > >
> > >The only possibility that I see is generating a ResourceGroup by AZ,
> > >but it would induce some big changes in Magnum to handle
> > >modification/generation of templates.
> > >
> > >Any ideas ?
> >
> > This is a long-standing missing feature in Heat. There are two
> > blueprints for this (I'm not sure why):
> >
> > https://blueprints.launchpad.net/heat/+spec/autoscaling-
> availabilityzo
> > nes-impl
> > https://blueprints.launchpad.net/heat/+spec/implement-
> autoscalinggroup
> > -availabilityzones
> >
> > The latter had a spec with quite a lot of discussion:
> >
> > https://review.openstack.org/#/c/105907
> >
> > And even an attempted implementation:
> >
> > https://review.openstack.org/#/c/116139/
> >
> > which was making some progress but is long out of date and would need
> > serious work to rebase. The good news is that some of the changes I
> > made in Liberty like https://review.openstack.org/#/c/213555/ should
> > hopefully make it simpler.
> >
> > All of which is to say, if you want to help then I think it would be
> > totally do-able to land support for this relatively early in Newton :)
> >
> >
> > Failing that, the only think I can think to try is something I am
> > pretty sure won't work: a ResourceGroup with something like:
> >
> >   availability_zone: {get_param: [AZ_map, "%i"]}
> >
> > where AZ_map looks something like {"0": "az-1", "1": "az-2", "2":
> > "az-1", ...} and you're using the member index to pick out the AZ to
> > use from the parameter. I don't think that works (if "%i" is resolved
> > after get_param then it won't, and I suspect that's the case) but
> it's
> > worth a try if you need a solution in Mitaka.
> 
> Yeah, this won't work if you attempt to do the map/index lookup in the
> top-level template where the ResourceGroup is defined, but it *does*
> work if you pass both the map and the index into the nested stack, e.g
> something like this (untested):
> 
> $ cat rg_az_map.yaml
> heat_template_version: 2015-04-30
> 
> parameters:
>   az_map:
> type: json
> default:
>   '0': az1
>   '1': az2
> 
> resources:
>  AGroup:
> type: OS::Heat::ResourceGroup
> properties:
>   count: 2
>   resource_def:
> type: server_mapped_az.yaml
> properties:
>   availability_zone_map: {get_param: az_map}
>   index: '%index%'
> 
> $ cat server_mapped_az.yaml
> heat_template_version: 2015-04-30
> 
> parameters:
>   availability_zone_map:
> type: json
>   index:
> type: string
> 
> resources:
>  server:
> type: OS::Nova::Server
> properties:
>   image: the_image
>   flavor: m1.foo
>   availability_zone: {get_param: [availability_zone_map, {get_param:
> index}]}
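
(Side note on the quoted example: if Magnum generated the az_map parameter 
itself, a simple round-robin over the requested AZs would produce it. A 
hypothetical helper, just to show the expected shape:)

def build_az_map(availability_zones, node_count):
    # e.g. (['az1', 'az2'], 4) -> {'0': 'az1', '1': 'az2', '2': 'az1', '3': 'az2'}
    return dict((str(i), availability_zones[i % len(availability_zones)])
                for i in range(node_count))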

This is nice. It seems to address our heterogeneity requirement at *deploy* 
time. However, I wonder what the runtime behavior is. For example, I deploy a 
stack by:
$ heat stack-create -f rg_az_map.yaml -P az_map='{"0":"az1","1":"az2"}'

Then, I want to remove a server by:
$ heat stack-update -f rg_az_map.yaml 

Re: [openstack-dev] [Higgins] Proposing Eli Qiao to be a Higgins core

2016-06-07 Thread Hongbin Lu
Hi all,

Thanks for your votes. Eli Qiao has been added to the core team: 
https://review.openstack.org/#/admin/groups/1382,members .

Best regards,
Hongbin

> -Original Message-
> From: Chandan Kumar [mailto:chku...@redhat.com]
> Sent: June-01-16 12:27 AM
> To: Sheel Rana Insaan
> Cc: Hongbin Lu; adit...@nectechnologies.in;
> vivek.jain.openst...@gmail.com; Shuu Mutou; Davanum Srinivas; hai-
> x...@xr.jp.nec.com; Yuanying; Kumari, Madhuri; yanya...@cn.ibm.com;
> flw...@catalyst.net.nz; OpenStack Development Mailing List (not for
> usage questions); Qi Ming Teng; sitlani.namr...@yahoo.in;
> qiaoliy...@gmail.com
> Subject: Re: [Higgins] Proposing Eli Qiao to be a Higgins core
> 
> Hello,
> 
> 
> > On Jun 1, 2016 3:09 AM, "Hongbin Lu" <hongbin...@huawei.com> wrote:
> >>
> >> Hi team,
> >>
> >>
> >>
> >> I wrote this email to propose Eli Qiao (taget-9) to be a Higgins
> core.
> >> Normally, the requirement to join the core team is to consistently
> >> contribute to the project for a certain period of time. However,
> >> given the fact that the project is new and the initial core team was
> >> formed based on a commitment, I am fine to propose a new core based
> >> on a strong commitment to contribute plus a few useful
> >> patches/reviews. In addition, Eli Qiao is currently a Magnum core
> and
> >> I believe his expertise will be an asset of Higgins team.
> >>
> >>
> 
> +1 from my side.
> 
> Thanks,
> 
> Chandan Kumar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [higgins] Should we rename "Higgins"?

2016-06-07 Thread Hongbin Lu
Hi all,

According to the decision at the last team meeting, we will rename the project 
to “Zun”. Eli Qiao has submitted a rename request: 
https://review.openstack.org/#/c/326306/ . The infra team will rename the 
project in gerrit and git in the next maintenance window (possibly a couple of 
months from now). In the meanwhile, I propose that we start using the new name 
ourselves. That includes:
- Use the new launchpad project: https://launchpad.net/zun (we need help copying 
your bugs and BPs over to the new LP project)
- Send emails with “[Zun]” in the subject
- Use the new IRC channel #openstack-zun
(others if you can think of)

If you have any concern or suggestion, please don’t hesitate to contact us. 
Thanks.

Best regards,
Hongbin

From: Yanyan Hu [mailto:huyanya...@gmail.com]
Sent: June-02-16 3:31 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?

Aha, it's pretty interesting, I vote for Zun as well :)

2016-06-02 12:56 GMT+08:00 Fei Long Wang 
<feil...@catalyst.net.nz<mailto:feil...@catalyst.net.nz>>:
+1 for Zun, I love it and it's definitely a good container :)


On 02/06/16 15:46, Monty Taylor wrote:
> On 06/02/2016 06:29 AM, 秀才 wrote:
>> i suggest a name Zun :)
>> please see the reference: https://en.wikipedia.org/wiki/Zun
> It's available on pypi and launchpad. I especially love that one of the
> important examples is the "Four-goat square Zun"
>
> https://en.wikipedia.org/wiki/Zun#Four-goat_square_Zun
>
> I don't get a vote - but I vote for this one.
>
>> -- Original --
>> *From: * "Rochelle 
>> Grober";<rochelle.gro...@huawei.com<mailto:rochelle.gro...@huawei.com>>;
>> *Date: * Thu, Jun 2, 2016 09:47 AM
>> *To: * "OpenStack Development Mailing List (not for usage
>> questions)"<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>;
>> *Cc: * "Haruhiko 
>> Katou"<har-ka...@ts.jp.nec.com<mailto:har-ka...@ts.jp.nec.com>>;
>> *Subject: * Re: [openstack-dev] [higgins] Should we rename "Higgins"?
>>
>> Well, you could stick with the wine bottle analogy  and go with a bigger
>> size:
>>
>> Jeroboam
>> Methuselah
>> Salmanazar
>> Balthazar
>> Nabuchadnezzar
>>
>> --Rocky
>>
>> -Original Message-
>> From: Kumari, Madhuri 
>> [mailto:madhuri.kum...@intel.com<mailto:madhuri.kum...@intel.com>]
>> Sent: Wednesday, June 01, 2016 3:44 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Cc: Haruhiko Katou
>> Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?
>>
>> Thanks Shu for providing suggestions.
>>
>> I wanted the new name to be related to containers as Magnum is also
>> synonym for containers. So I have few options here.
>>
>> 1. Casket
>> 2. Canister
>> 3. Cistern
>> 4. Hutch
>>
>> All above options are free to be taken on pypi and Launchpad.
>> Thoughts?
>>
>> Regards
>> Madhuri
>>
>> -Original Message-
>> From: Shuu Mutou 
>> [mailto:shu-mu...@rf.jp.nec.com<mailto:shu-mu...@rf.jp.nec.com>]
>> Sent: Wednesday, June 1, 2016 11:11 AM
>> To: 
>> openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>
>> Cc: Haruhiko Katou <har-ka...@ts.jp.nec.com<mailto:har-ka...@ts.jp.nec.com>>
>> Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?
>>
>> I found container related names and checked whether other project uses.
>>
>> https://en.wikipedia.org/wiki/Straddle_carrier
>> https://en.wikipedia.org/wiki/Suezmax
>> https://en.wikipedia.org/wiki/Twistlock
>>
>> These words are not used by other project on PYPI and Launchpad.
>>
>> ex.)
>> https://pypi.python.org/pypi/straddle
>> https://launchpad.net/straddle
>>
>>
>> However, since the renaming for the N cycle will be done by the Infra team
>> this Friday, we would not meet the deadline. So
>>
>> 1. use 'Higgins' ('python-higgins' for package name) 2. consider other
>> name for next renaming chance (after a half year)
>>
>> Thoughts?
>>
>>
>> Regards,
>> Shu
>>
>>
>>> -Original Message-
>>> From: Hongbin Lu 
>>> [mailto:hongbin...@huawei.com<mailto:hongbin...@huawei.com>]
>>> Sent: Wednesday, June 01, 2016 11:37 AM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> <openstack-dev@lists.openst

[openstack-dev] [Higgins][Zun] Project roadmap

2016-06-12 Thread Hongbin Lu
Hi team,

During the team meetings over the past weeks, we collaborated on the initial project 
roadmap. I have summarized it below. Please review.

* Implement a common container abstraction for different container runtimes. 
The initial implementation will focus on supporting basic container operations 
(i.e. CRUD).
* Focus on non-nested container use cases (running containers on physical 
hosts), and revisit nested container use cases (running containers on VMs) 
later.
* Provide two set of APIs to access containers: The Nova APIs and the 
Zun-native APIs. In particular, the Zun-native APIs will expose full container 
capabilities, and Nova APIs will expose capabilities that are shared between 
containers and VMs.
* Leverage Neutron (via Kuryr) for container networking.
* Leverage Cinder for container data volume.
* Leverage Glance for storing container images. If necessary, contribute to 
Glance to add missing features (i.e. support for layers of container images).
* Support enforcing multi-tenancy by doing the following:
** Add configurable scheduler options to enforce that neighboring containers 
belong to the same tenant.
** Support hypervisor-based container runtimes.

The following topics have been discussed, but the team cannot reach consensus 
on including them into the short-term project scope. We skipped them for now 
and might revisit them later.
* Support proxying API calls to COEs.
* Advanced container operations (i.e. keep container alive, load balancer 
setup, rolling upgrade).
* Nested containers use cases (i.e. provision container hosts).
* Container composition (i.e. support docker-compose like DSL).

NOTE: I might have forgotten or misunderstood something. Please feel free to point 
out anything that is wrong or missing.

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Higgins] Call for contribution for Higgins API design

2016-06-10 Thread Hongbin Lu
Yuanying,

The etherpads you pointed to are a few years old and the information looks a 
bit outdated. I think we can collaborate on a similar etherpad with updated 
information (i.e. remove container runtimes that we don’t care about, add container 
runtimes that we do care about). The existing etherpad can be used as a starting point. 
What do you think?

Best regards,
Hongbin

From: Yuanying OTSUKA [mailto:yuany...@oeilvert.org]
Sent: June-01-16 12:43 AM
To: OpenStack Development Mailing List (not for usage questions); Sheel Rana 
Insaan
Cc: adit...@nectechnologies.in; yanya...@cn.ibm.com; flw...@catalyst.net.nz; Qi 
Ming Teng; sitlani.namr...@yahoo.in; Yuanying; Chandan Kumar
Subject: Re: [openstack-dev] [Higgins] Call for contribution for Higgins API 
design

Just F.Y.I.

When Magnum wanted to become “Container as a Service”,
there were some discussions about the API design.

* https://etherpad.openstack.org/p/containers-service-api
* https://etherpad.openstack.org/p/openstack-containers-service-api



2016年6月1日(水) 12:09 Hongbin Lu 
<hongbin...@huawei.com<mailto:hongbin...@huawei.com>>:
Sheel,

Thanks for taking the responsibility. Assigned the BP to you. As discussed, 
please submit a spec for the API design. Feel free to let us know if you need 
any help.

Best regards,
Hongbin

From: Sheel Rana Insaan 
[mailto:ranasheel2...@gmail.com<mailto:ranasheel2...@gmail.com>]
Sent: May-31-16 9:23 PM
To: Hongbin Lu
Cc: adit...@nectechnologies.in<mailto:adit...@nectechnologies.in>; 
vivek.jain.openst...@gmail.com<mailto:vivek.jain.openst...@gmail.com>; 
flw...@catalyst.net.nz<mailto:flw...@catalyst.net.nz>; Shuu Mutou; Davanum 
Srinivas; OpenStack Development Mailing List (not for usage questions); Chandan 
Kumar; hai...@xr.jp.nec.com<mailto:hai...@xr.jp.nec.com>; Qi Ming Teng; 
sitlani.namr...@yahoo.in<mailto:sitlani.namr...@yahoo.in>; Yuanying; Kumari, 
Madhuri; yanya...@cn.ibm.com<mailto:yanya...@cn.ibm.com>
Subject: Re: [Higgins] Call for contribution for Higgins API design


Dear Hongbin,

I am interested in this.
Thanks!!

Best Regards,
Sheel Rana
On Jun 1, 2016 3:53 AM, "Hongbin Lu" 
<hongbin...@huawei.com<mailto:hongbin...@huawei.com>> wrote:
Hi team,

As discussed in the last team meeting, we agreed to define core use cases for 
the API design. I have created a blueprint for that. We need an owner of the 
blueprint, and it requires a spec to clarify the API design. Please let me know 
if you are interested in this work (it might require a significant amount of time to 
work on the spec).

https://blueprints.launchpad.net/python-higgins/+spec/api-design

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] The Magnum Midcycle

2016-06-09 Thread Hongbin Lu
Thanks to CERN for offering to host. We will discuss the dates and location in 
the next team meeting [1].

[1] 
https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2016-06-14_1600_UTC

Best regards,
Hongbin

From: Tim Bell [mailto:tim.b...@cern.ch]
Sent: June-09-16 2:27 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle

If we can confirm the dates and location, there is a reasonable chance we could 
also offer remote conferencing using Vidyo at CERN. While it is not the same as 
an F2F experience, it would provide the possibility for remote participation 
for those who could not make it to Geneva.

We may also be able to organize tours, such as to the anti-matter factory and 
super conducting magnet test labs prior or afterwards if anyone is interested…

Tim

From: Spyros Trigazis <strig...@gmail.com<mailto:strig...@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday 8 June 2016 at 16:43
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle

Hi Hongbin.

CERN's location: https://goo.gl/maps/DWbDVjnAvJJ2

Cheers,
Spyros


On 8 June 2016 at 16:01, Hongbin Lu 
<hongbin...@huawei.com<mailto:hongbin...@huawei.com>> wrote:
Ricardo,

Thanks for the offer. Could you let me know the exact location?

Best regards,
Hongbin

> -Original Message-
> From: Ricardo Rocha 
> [mailto:rocha.po...@gmail.com<mailto:rocha.po...@gmail.com>]
> Sent: June-08-16 5:43 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle
>
> Hi Hongbin.
>
> Not sure how this fits everyone, but we would be happy to host it at
> CERN. How do people feel about it? We can add a nice tour of the place
> as a bonus :)
>
> Let us know.
>
> Ricardo
>
>
>
> On Tue, Jun 7, 2016 at 10:32 PM, Hongbin Lu 
> <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
> wrote:
> > Hi all,
> >
> >
> >
> > Please find the Doodle pool below for selecting the Magnum midcycle
> date.
> > Presumably, it will be a 2 days event. The location is undecided for
> now.
> > The previous midcycles were hosted in bay area so I guess we will
> stay
> > there at this time.
> >
> >
> >
> > http://doodle.com/poll/5tbcyc37yb7ckiec
> >
> >
> >
> > In addition, the Magnum team is finding a host for the midcycle.
> > Please let us know if you interest to host us.
> >
> >
> >
> > Best regards,
> >
> > Hongbin
> >
> >
> >
> __
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe<http://requ...@lists.openstack.org?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr][magnum] Installing kuryr for multinode openstack setup

2016-05-25 Thread Hongbin Lu


From: Antoni Segura Puimedon [mailto:toni+openstac...@midokura.com]
Sent: May-25-16 6:55 AM
To: OpenStack Development Mailing List (not for usage questions); Gal Sagie; 
openstack-operators
Subject: Re: [openstack-dev] [kuryr][magnum] Installing kuryr for multinode 
openstack setup



On Wed, May 25, 2016 at 11:20 AM, Jaume Devesa 
<devv...@gmail.com<mailto:devv...@gmail.com>> wrote:
Hello Akshay,

responses inline:

On Wed, 25 May 2016 10:48, Akshay Kumar Sanghai wrote:
> Hi,
> I have a 4 node openstack setup (1 controller, 1 network, 2 compute nodes).
> I want to install kuryr in liberty version. I cannot find a package in
> ubuntu repo.

There is no official release of Kuryr yet. You'll need to install it from the
current master branch of the repo [1] (clone it, install the dependencies and run
`python setup.py install`).

Or you could run it dockerized. Read the "repo info" in [2].
We are working on having the packaging ready, but we are splitting the repos first,
so it will take a while for plain distro packages.

> -How do i install kuryr?
If the README.rst file of the repository is not enough for you in terms of
installation and configuration, please let us know what's not clear.

> - what are the components that need to be installed on the respective
> nodes?

You need to run the kuryr-libnetwork service on all the nodes that you use as
Docker 'workers',

and your chosen vendor's neutron agents. For example, for MidoNet it's
midolman, for ovs it would be the neutron ovs agent.


> - Do i need to install magnum for docker swarm?

Not familiar with Magnum.. Can not help you here.


If you want to run docker swarm on bare metal, you do not need Magnum, only
Keystone and Neutron.
You'd run docker swarm, Neutron and Keystone on one node, and then
have N nodes with the docker engine, kuryr/libnetwork and the neutron agents of
the vendor of your choice.
[Hongbin Lu] Yes, Magnum is optional if you prefer to install swarm/k8s/mesos 
manually or with other tools. What Magnum offers is basically automation of the 
deployment plus a few management operations (e.g. scaling the cluster at 
runtime). From my point of view, if you prefer to skip Magnum, the main 
disadvantage is losing the ability to get a tenant-scoped swarm/k8s/mesos 
cluster on demand. In that case, you might have a static k8s/swarm/mesos 
cluster that is shared across multiple tenants.

> - Can i use docker swarm, kubernetes, mesos in openstack without using
> kuryr?

You can use swarm and kubernetes in OpenStack with Kuryr using Magnum. It will
use neutron networking to provide networks to the VMs that will run the 
swarm/kubernetes cluster. Inside the VMs, another overlay done by flannel will 
be used (in k8s; in swarm I have not tried it).

[Hongbin Lu] Yes, I think Flannel is an alternative to Kuryr. If you use Magnum, 
Flannel is supported in the k8s and swarm bays. Magnum supports 3 Flannel backends: udp, 
vxlan, and host-gw. If you want an overlay solution, you can choose udp or 
vxlan. If you want a high-performance solution, the host-gw backend should work 
well.


What will be the disadvantages?

The disadvantages are that you do not get explicit Neutron networking for your 
containers,
you get less networking isolation for your VMs/containers, and if you want the 
highest
performance, you have to change the default flannel mode.
[Hongbin Lu] That is true. If you use Magnum, the default flannel backend is 
“udp”. Users need to turn on the “host-gw” backend to get the highest 
performance. We are discussing whether it makes sense to change the default to 
“host-gw”, so that users get non-overlay performance by default. In 
addition, the Magnum team is working with the Kuryr team to bring in Kuryr as a 
second network driver. Comparing Flannel and Kuryr, I think the main 
disadvantage of Flannel is not performance (the flannel host-gw backend should 
provide performance similar to Kuryr's), but the lack of tight integration 
between containers and Neutron.
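
As a hedged illustration, switching a bay to the host-gw backend amounts to passing a 
label when the baymodel is created. The label name "flannel_backend" matches what the 
Magnum templates use at the time of writing, but the python-magnumclient import, client 
constructor and baymodel field names below follow the usual OpenStack client pattern and 
should be treated as assumptions to verify against your release, not a recipe:

    # Hedged sketch: label name taken from the Magnum templates; the client
    # constructor and baymodel field names are assumptions, not verified
    # against a specific python-magnumclient release.
    from keystoneauth1 import loading, session
    from magnumclient import client as magnum_client

    loader = loading.get_plugin_loader("password")
    auth = loader.load_from_options(auth_url="http://controller:5000/v3",
                                    username="admin", password="secret",
                                    project_name="admin",
                                    user_domain_name="Default",
                                    project_domain_name="Default")
    sess = session.Session(auth=auth)
    magnum = magnum_client.Client("1", session=sess)

    magnum.baymodels.create(
        name="k8s-host-gw",
        coe="kubernetes",
        image_id="fedora-atomic-latest",
        keypair_id="mykey",
        external_network_id="public",
        network_driver="flannel",
        labels={"flannel_backend": "host-gw"},  # instead of the default "udp"
    )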



Only docker swarm right now. The kubernetes one will be addressed soon.

>
> Thanks
> Akshay
Thanks to you for giving it a try!


> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

There are a bunch of people much more experienced than me in Kuryr. I hope I
haven't said anything stupid.

Best regards,

[1]: http://github.com/openstack/kuryr
 [2] https://hub.docker.com/r/kuryr/libnetwork/


--
Jaume Devesa
Software Engineer at Midokura
PGP key: 35C2D6B2 @ keyserver.ubuntu.com<http://keyserver.ubuntu.com>

__

Re: [openstack-dev] [magnum-ui][magnum] Proposed Core addition, and removal notice

2016-06-10 Thread Hongbin Lu
+1

> -Original Message-
> From: Shuu Mutou [mailto:shu-mu...@rf.jp.nec.com]
> Sent: June-10-16 5:33 AM
> To: openstack-dev@lists.openstack.org
> Cc: Haruhiko Katou
> Subject: [openstack-dev] [magnum-ui][magnum] Proposed Core addition,
> and removal notice
> 
> Hi team,
> 
> I propose the following changes to the magnum-ui core group.
> 
> + Thai Tran
>   http://stackalytics.com/report/contribution/magnum-ui/90
>   I'm so happy to propose Thai as a core reviewer.
>   His reviews have been extremely valuable for us.
>   And he is active Horizon core member.
>   I believe his help will lead us to the correct future.
> 
> - David Lyle
> 
> http://stackalytics.com/?metric=marks_type=openstack=al
> l=magnum-ui_id=david-lyle
>   No activities for Magnum-UI since Mitaka cycle.
> 
> - Harsh Shah
>   http://stackalytics.com/report/users/hshah
>   No activities for OpenStack in this year.
> 
> - Ritesh
>   http://stackalytics.com/report/users/rsritesh
>   No activities for OpenStack in this year.
> 
> Please respond with your +1 votes to approve this change or -1 votes to
> oppose.
> 
> Thanks,
> Shu
> 
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Question about Openstack Containers and Magnum

2016-06-11 Thread Hongbin Lu
Hi,

It looks like Spyros already answered your question: 
http://lists.openstack.org/pipermail/openstack-dev/2016-June/097083.html . Is 
there anything else we can help with, or do you have further questions?

Best regards,
Hongbin

From: zhihao wang [mailto:wangzhihao...@hotmail.com]
Sent: June-11-16 1:03 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Question about Openstack Containers and Magnum


Dear Openstack Dev Members:

I would like to install Magnum on OpenStack to manage Docker containers.
I have an OpenStack Liberty production setup: one controller node and a few 
compute nodes.

I am wondering how I can install Magnum on OpenStack Liberty in a 
distributed production environment (1 controller node and some compute nodes)?

I know I can install Magnum using devstack, but I don't want the developer 
version.

Is there a way/guide to install it in a production environment?

Thanks
Wally
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Higgins][Zun] Project roadmap

2016-06-13 Thread Hongbin Lu


From: Antoni Segura Puimedon [mailto:toni+openstac...@midokura.com]
Sent: June-13-16 3:22 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: yanya...@cn.ibm.com; Qi Ming Teng; adit...@nectechnologies.in; 
sitlani.namr...@yahoo.in; flw...@catalyst.net.nz; Chandan Kumar; Sheel Rana 
Insaan; Yuanying
Subject: Re: [openstack-dev] [Higgins][Zun] Project roadmap



On Mon, Jun 13, 2016 at 12:10 AM, Hongbin Lu 
<hongbin...@huawei.com<mailto:hongbin...@huawei.com>> wrote:
Hi team,

During the team meetings these weeks, we collaborated the initial project 
roadmap. I summarized it as below. Please review.

* Implement a common container abstraction for different container runtimes. 
The initial implementation will focus on supporting basic container operations 
(i.e. CRUD).
* Focus on non-nested containers use cases (running containers on physical 
hosts), and revisit nested containers use cases (running containers on VMs) 
later.
* Provide two sets of APIs to access containers: the Nova APIs and the 
Zun-native APIs. In particular, the Zun-native APIs will expose full container 
capabilities, and Nova APIs will expose capabilities that are shared between 
containers and VMs.
* Leverage Neutron (via Kuryr) for container networking.

Great! Let us know anytime we can help

* Leverage Cinder for container data volume.
Have you considered fuxi?

https://github.com/openstack/fuxi
[Hongbin Lu] We discussed whether we should leverage Kuryr/Fuxi for storage, but we 
are unclear on what this project offers exactly and how it works. The maturity of 
the project is also a concern, but we will revisit it later.


* Leverage Glance for storing container images. If necessary, contribute to 
Glance for missing features (i.e. support layer of container images).
* Support enforcing multi-tenancy by doing the following:
** Add configurable options for scheduler to enforce neighboring containers 
belonging to the same tenant.

What about having the scheduler pluggable instead of having a lot of 
configuration options?
[Hongbin Lu] In the short term, no. We will implement a very simple scheduler to 
start. In the long term, we will wait for the scheduler-as-a-service project: 
https://wiki.openstack.org/wiki/Gantt . I believe Gantt will have a pluggable 
architecture, so your requirement should be satisfied. If not, we will 
revisit it.
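
To make the tenant-affinity option quoted above a bit more concrete, here is a very 
rough sketch of what such a "very simple scheduler" filter could look like. The data 
structures and the function are invented purely for illustration; this is not Zun code.

    # Illustration only, not Zun code: keep a container on hosts that are
    # empty or already dedicated to the requesting tenant.
    def filter_hosts_by_tenant(hosts, tenant_id):
        acceptable = []
        for host in hosts:
            tenants_on_host = {c["tenant_id"] for c in host["containers"]}
            if not tenants_on_host or tenants_on_host == {tenant_id}:
                acceptable.append(host)
        return acceptable

    hosts = [
        {"name": "node-1", "containers": [{"tenant_id": "t1"}]},
        {"name": "node-2", "containers": []},
        {"name": "node-3", "containers": [{"tenant_id": "t2"}]},
    ]
    print([h["name"] for h in filter_hosts_by_tenant(hosts, "t1")])
    # -> ['node-1', 'node-2']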


** Support hypervisor-based container runtimes.

Is that hyper.sh?
[Hongbin Lu] It could be, or Clear Container, or something similar.



The following topics have been discussed, but the team cannot reach consensus 
on including them into the short-term project scope. We skipped them for now 
and might revisit them later.
* Support proxying API calls to COEs.
* Advanced container operations (i.e. keep container alive, load balancer 
setup, rolling upgrade).
* Nested containers use cases (i.e. provision container hosts).
* Container composition (i.e. support docker-compose like DSL).

Will it have ordering primitives, i.e. this container won't start until that 
one is up and running?
I also wonder whether the Higgins container abstraction will have rich status 
reporting that can be used in ordering.
For example, whether it can differentiate started containers from those that 
are already listening on their exposed
ports.
[Hongbin Lu] I am open to that, but we need to discuss the idea further.


NOTE: I might have forgotten or misunderstood something. Please feel free to point 
out anything that is wrong or missing.

Best regards,
Hongbin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Notes for Magnum design summit

2016-06-13 Thread Hongbin Lu
 address the use case of heterogeneity of bay nodes (i.e. different 
availability zones, flavors), but need to elaborate the details further.

The idea revolves around creating a heat stack for each node in the bay. This 
idea shows a lot of promise but needs more investigation and isn’t a current 
priority.

Tom


From: Hongbin Lu <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Saturday, 30 April 2016 at 05:05
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [magnum] Notes for Magnum design summit

Hi team,

For reference, below is a summary of the discussions/decisions at the Austin design 
summit. Please feel free to point out anything that is incorrect or incomplete. 
Thanks.

1. Bay driver: https://etherpad.openstack.org/p/newton-magnum-bay-driver
- Refactor existing code into bay drivers
- Each bay driver will be versioned
- Individual bay driver can have API extension and magnum CLI could load the 
extensions dynamically
- Work incrementally and support same API before and after the driver change

2. Bay lifecycle operations: 
https://etherpad.openstack.org/p/newton-magnum-bays-lifecycle-operations
- Support the following operations: reset the bay, rebuild the bay, rotate TLS 
certificates in the bay, adjust storage of the bay, scale the bay.

3. Scalability: https://etherpad.openstack.org/p/newton-magnum-scalability
- Implement Magnum plugin for Rally
- Implement the spec to address the scalability of deploying multiple bays 
concurrently: https://review.openstack.org/#/c/275003/

4. Container storage: 
https://etherpad.openstack.org/p/newton-magnum-container-storage
- Allow choice of storage driver
- Allow choice of data volume driver
- Work with Kuryr/Fuxi team to have data volume driver available in COEs 
upstream

5. Container network: 
https://etherpad.openstack.org/p/newton-magnum-container-network
- Discuss how to scope/pass/store OpenStack credential in bays (needed by Kuryr 
to communicate with Neutron).
- Several options were explored. No perfect solution was identified.

6. Ironic Integration: 
https://etherpad.openstack.org/p/newton-magnum-ironic-integration
- Start the implementation immediately
- Prefer quick work-around for identified issues (cinder volume attachment, 
variation of number of ports, etc.)

7. Magnum adoption challenges: 
https://etherpad.openstack.org/p/newton-magnum-adoption-challenges
- The challenges is listed in the etherpad above

8. Unified abstraction for COEs: 
https://etherpad.openstack.org/p/newton-magnum-unified-abstraction
- Create a new project for this efforts
- Alter Magnum mission statement to clarify its goal (Magnum is not a container 
service, it is sort of a COE management service)

9. Magnum Heat template version: 
https://etherpad.openstack.org/p/newton-magnum-heat-template-versioning
- In each bay driver, version the template and template definition.
- Bump template version for minor changes, and bump bay driver version for 
major changes.

10. Monitoring: https://etherpad.openstack.org/p/newton-magnum-monitoring
- Add support for sending notifications to Ceilometer (a hedged sketch follows after this list)
- Revisit bay monitoring and self-healing later
- Container monitoring should not be done by Magnum, but it can be done by 
cAdvisor, Heapster, etc.

11. Others: https://etherpad.openstack.org/p/newton-magnum-meetup
- Clear Container support: Clear Container needs to integrate with COEs first. 
After the integration is done, Magnum team will revisit bringing the Clear 
Container COE to Magnum.
- Enhance mesos bay to DCOS bay: Need to do it step-by-step: First, create a 
new DCOS bay type. Then, deprecate and delete the mesos bay type.
- Start enforcing API deprecation policy: 
https://governance.openstack.org/reference/tags/assert_follows-standard-deprecation.html
- Freeze API v1 after some patches are merged.
- Multi-tenancy within a bay: not the priority in Newton cycle
- Manually manage bay nodes (instead of having them managed by a Heat ResourceGroup): 
It can address the use case of heterogeneous bay nodes (i.e. different 
availability zones, flavors), but the details need to be elaborated further.
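
Referring back to item 10 (notifications to Ceilometer), below is a hedged sketch of 
what emitting such a notification could look like with oslo.messaging. The event type 
and payload fields are invented examples, not the notification format Magnum will 
actually emit.

    # Hedged sketch: oslo.messaging notification emission; the event_type and
    # payload are illustrative placeholders, not Magnum's real format.
    from oslo_config import cfg
    import oslo_messaging

    transport = oslo_messaging.get_notification_transport(cfg.CONF)
    notifier = oslo_messaging.Notifier(transport,
                                       driver="messagingv2",
                                       publisher_id="magnum.conductor",
                                       topics=["notifications"])

    notifier.info({},  # request context
                  "magnum.bay.create.end",
                  {"bay_uuid": "<bay-uuid>", "status": "CREATE_COMPLETE"})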
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [Higgins][Zun] Project roadmap

2016-06-14 Thread Hongbin Lu


> -Original Message-
> From: Flavio Percoco [mailto:fla...@redhat.com]
> Sent: June-14-16 3:44 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Higgins][Zun] Project roadmap
> 
> On 13/06/16 18:46 +, Hongbin Lu wrote:
> >
> >
> >> -Original Message-
> >> From: Sudipto Biswas [mailto:sbisw...@linux.vnet.ibm.com]
> >> Sent: June-13-16 1:43 PM
> >> To: openstack-dev@lists.openstack.org
> >> Subject: Re: [openstack-dev] [Higgins][Zun] Project roadmap
> >>
> >>
> >>
> >> On Monday 13 June 2016 06:57 PM, Flavio Percoco wrote:
> >> > On 12/06/16 22:10 +, Hongbin Lu wrote:
> >> >> Hi team,
> >> >>
> >> >> During the team meetings these weeks, we collaborated the initial
> >> >> project roadmap. I summarized it as below. Please review.
> >> >>
> >> >> * Implement a common container abstraction for different
> container
> >> >> runtimes. The initial implementation will focus on supporting
> >> >> basic container operations (i.e. CRUD).
> >> >
> >> > What COE's are being considered for the first implementation? Just
> >> > docker and kubernetes?
> >[Hongbin Lu] Container runtimes, docker in particular, are being
> considered for the first implementation. We discussed how to support
> COEs in Zun but cannot reach an agreement on the direction. I will
> leave it for further discussion.
> >
> >> >
> >> >> * Focus on non-nested containers use cases (running containers on
> >> >> physical hosts), and revisit nested containers use cases (running
> >> >> containers on VMs) later.
> >> >> * Provide two set of APIs to access containers: The Nova APIs and
> >> the
> >> >> Zun-native APIs. In particular, the Zun-native APIs will expose
> >> >> full container capabilities, and Nova APIs will expose
> >> >> capabilities that are shared between containers and VMs.
> >> >
> >> > - Is the nova side going to be implemented in the form of a Nova
> >> > driver (like ironic's?)? What do you mean by APIs here?
> >[Hongbin Lu] Yes, the plan is to implement a Zun virt-driver for Nova.
> The idea is similar to Ironic.
> >
> >> >
> >> > - What operations are we expecting this to support (just CRUD
> >> > operations on containers?)?
> >[Hongbin Lu] We are working on finding the list of operations to
> support. There is a BP for tracking this effort:
> https://blueprints.launchpad.net/zun/+spec/api-design .
> >
> >> >
> >> > I can see this driver being useful for specialized services like
> >> Trove
> >> > but I'm curious/concerned about how this will be used by end users
> >> > (assuming that's the goal).
> >[Hongbin Lu] I agree that end users might not be satisfied by basic
> container operations like CRUD. We will discuss how to offer more to
> make the service to be useful in production.
> 
> I'd probably leave this out for now but this is just my opinion.
> Personally, I think users, if presented with both APIs - nova's and
> Zun's - they'll prefer Zun's.
> 
> Specifically, you don't interact with a container the same way you
> interact with a VM (but I'm sure you know all these way better than me).
> I guess my concern is that I don't see too much value in this other
> than allowing specialized services to run containers through Nova.

ACK

> 
> 
> >> >
> >> >
> >> >> * Leverage Neutron (via Kuryr) for container networking.
> >> >> * Leverage Cinder for container data volume.
> >> >> * Leverage Glance for storing container images. If necessary,
> >> >> contribute to Glance for missing features (i.e. support layer of
> >> >> container images).
> >> >
> >> > Are you aware of https://review.openstack.org/#/c/249282/ ?
> >> This support is very minimalistic in nature, since it doesn't do
> >> anything beyond just storing a docker FS tar ball.
> >> I think it was felt that, further support for docker FS was needed.
> >> While there were suggestions of private docker registry, having
> >> something in band (w.r.t openstack) maybe desirable.
> >[Hongbin Lu] Yes, Glance doesn't support layer of container images
> which is a missing feature.
> 
> Yup, I didn't mean to imply that would do it all for you rather that
> there's been some progress there. As far 

Re: [openstack-dev] [Magnum] The Magnum Midcycle

2016-06-14 Thread Hongbin Lu
Hi Tim,

Thanks for offering to host. We discussed the midcycle location at the last 
team meeting. It looks like a significant number of Magnum team members have 
difficulties traveling to Geneva, so we are not able to hold the midcycle at 
CERN. Thanks again for the willingness to host us.

Best regards,
Hongbin

From: Tim Bell [mailto:tim.b...@cern.ch]
Sent: June-09-16 2:27 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle

If we can confirm the dates and location, there is a reasonable chance we could 
also offer remote conferencing using Vidyo at CERN. While it is not the same as 
an F2F experience, it would provide the possibility for remote participation 
for those who could not make it to Geneva.

We may also be able to organize tours, such as to the anti-matter factory and 
super conducting magnet test labs prior or afterwards if anyone is interested…

Tim

From: Spyros Trigazis <strig...@gmail.com<mailto:strig...@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday 8 June 2016 at 16:43
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle

Hi Hongbin.

CERN's location: https://goo.gl/maps/DWbDVjnAvJJ2

Cheers,
Spyros


On 8 June 2016 at 16:01, Hongbin Lu 
<hongbin...@huawei.com<mailto:hongbin...@huawei.com>> wrote:
Ricardo,

Thanks for the offer. Could you let me know the exact location?

Best regards,
Hongbin

> -Original Message-
> From: Ricardo Rocha 
> [mailto:rocha.po...@gmail.com<mailto:rocha.po...@gmail.com>]
> Sent: June-08-16 5:43 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle
>
> Hi Hongbin.
>
> Not sure how this fits everyone, but we would be happy to host it at
> CERN. How do people feel about it? We can add a nice tour of the place
> as a bonus :)
>
> Let us know.
>
> Ricardo
>
>
>
> On Tue, Jun 7, 2016 at 10:32 PM, Hongbin Lu 
> <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
> wrote:
> > Hi all,
> >
> >
> >
> > Please find the Doodle pool below for selecting the Magnum midcycle
> date.
> > Presumably, it will be a 2 days event. The location is undecided for
> now.
> > The previous midcycles were hosted in bay area so I guess we will
> stay
> > there at this time.
> >
> >
> >
> > http://doodle.com/poll/5tbcyc37yb7ckiec
> >
> >
> >
> > In addition, the Magnum team is finding a host for the midcycle.
> > Please let us know if you interest to host us.
> >
> >
> >
> > Best regards,
> >
> > Hongbin
> >
> >
> >
> __
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe<http://requ...@lists.openstack.org?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] The Magnum Midcycle

2016-06-14 Thread Hongbin Lu
Hi team,

As discussed in the team meeting, we are going to choose between Austin and San 
Francisco. A Doodle poll was created to select the location: 
http://doodle.com/poll/2x9utspir7vk8ter . Please cast your vote there. On 
behalf of the Magnum team, thanks to Rackspace for offering to host.

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: June-09-16 3:56 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle

Rackspace is willing to host in Austin, TX or San Antonio, TX, or San 
Francisco, CA.

--
Adrian

On Jun 7, 2016, at 1:35 PM, Hongbin Lu 
<hongbin...@huawei.com<mailto:hongbin...@huawei.com>> wrote:
Hi all,

Please find the Doodle poll below for selecting the Magnum midcycle date. 
Presumably, it will be a 2-day event. The location is undecided for now. The 
previous midcycles were hosted in the Bay Area, so I guess we will stay there 
this time.

http://doodle.com/poll/5tbcyc37yb7ckiec

In addition, the Magnum team is looking for a host for the midcycle. Please let us 
know if you are interested in hosting us.

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org<mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][lbaas][docs] Operator-facing installation guide

2016-06-14 Thread Hongbin Lu
Hi neutron-lbaas team,

Could anyone confirm whether there is an operator-facing install guide for 
neutron-lbaas? So far, the closest one we could find is 
https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun , which doesn't seem to 
be a comprehensive install guide. I ask because there are several users 
who want to install Magnum but couldn't find instructions for installing 
neutron-lbaas. Although we are working on decoupling from neutron-lbaas, we 
still need to provide instructions for users who want a load balancer. If the 
install guide is missing, is there any plan to create one?

Best regards,
Hongbin

> -Original Message-
> From: Adrian Otto [mailto:adrian.o...@rackspace.com]
> Sent: June-02-16 6:50 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][lbaas] Operator-facing
> installation guide
> 
> Brandon,
> 
> Magnum uses neutron’s LBaaS service to allow for multi-master bays. We
> can balance connections between multiple kubernetes masters, for
> example. It’s not needed for single master bays, which are much more
> common. We have a blueprint that is in design stage for de-coupling
> magnum from neutron LBaaS for use cases that don’t require it:
> 
> https://blueprints.launchpad.net/magnum/+spec/decouple-lbaas
> 
> Adrian
> 
> > On Jun 2, 2016, at 2:48 PM, Brandon Logan
> <brandon.lo...@rackspace.com> wrote:
> >
> > Call me ignorant, but I'm surprised at neutron-lbaas being a
> > dependency of magnum.  Why is this?  Sorry if it has been asked
> before
> > and I've just missed that answer?
> >
> > Thanks,
> > Brandon
> > On Wed, 2016-06-01 at 14:39 +, Hongbin Lu wrote:
> >> Hi lbaas team,
> >>
> >>
> >>
> >> I wonder if there is an operator-facing installation guide for
> >> neutron-lbaas. I asked that because Magnum is working on an
> >> installation guide [1] and neutron-lbaas is a dependency of Magnum.
> >> We want to link to an official lbaas guide so that our users will
> >> have a completed instruction. Any pointer?
> >>
> >>
> >>
> >> [1] https://review.openstack.org/#/c/319399/
> >>
> >>
> >>
> >> Best regards,
> >>
> >> Hongbin
> >>
> >>
> >>
> _
> >> _ OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> __
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][lbaas][docs] Operator-facing installation guide

2016-06-14 Thread Hongbin Lu
Thanks.

Actually, we were looking for lbaas v1 and the linked document seems to mainly 
talk about v2, but we are migrating to v2 so I am satisfied.

Best regards,
Hongbin

From: Anne Gentle [mailto:annegen...@justwriteclick.com]
Sent: June-14-16 6:06 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][lbaas][docs] Operator-facing installation 
guide

Let us know if this is what you're looking for:

http://docs.openstack.org/mitaka/networking-guide/adv-config-lbaas.html

Thanks Major Hayden for writing it up.
Anne

On Tue, Jun 14, 2016 at 3:54 PM, Hongbin Lu 
<hongbin...@huawei.com<mailto:hongbin...@huawei.com>> wrote:
Hi neutron-lbaas team,

Could anyone confirm whether there is an operator-facing install guide for 
neutron-lbaas? So far, the closest one we could find is 
https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun , which doesn't seem to 
be a comprehensive install guide. I ask because there are several users 
who want to install Magnum but couldn't find instructions for installing 
neutron-lbaas. Although we are working on decoupling from neutron-lbaas, we 
still need to provide instructions for users who want a load balancer. If the 
install guide is missing, is there any plan to create one?

Best regards,
Hongbin

> -Original Message-
> From: Adrian Otto 
> [mailto:adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>]
> Sent: June-02-16 6:50 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][lbaas] Operator-facing
> installation guide
>
> Brandon,
>
> Magnum uses neutron’s LBaaS service to allow for multi-master bays. We
> can balance connections between multiple kubernetes masters, for
> example. It’s not needed for single master bays, which are much more
> common. We have a blueprint that is in design stage for de-coupling
> magnum from neutron LBaaS for use cases that don’t require it:
>
> https://blueprints.launchpad.net/magnum/+spec/decouple-lbaas
>
> Adrian
>
> > On Jun 2, 2016, at 2:48 PM, Brandon Logan
> <brandon.lo...@rackspace.com<mailto:brandon.lo...@rackspace.com>> wrote:
> >
> > Call me ignorant, but I'm surprised at neutron-lbaas being a
> > dependency of magnum.  Why is this?  Sorry if it has been asked
> before
> > and I've just missed that answer?
> >
> > Thanks,
> > Brandon
> > On Wed, 2016-06-01 at 14:39 +, Hongbin Lu wrote:
> >> Hi lbaas team,
> >>
> >>
> >>
> >> I wonder if there is an operator-facing installation guide for
> >> neutron-lbaas. I asked that because Magnum is working on an
> >> installation guide [1] and neutron-lbaas is a dependency of Magnum.
> >> We want to link to an official lbaas guide so that our users will
> >> have a completed instruction. Any pointer?
> >>
> >>
> >>
> >> [1] https://review.openstack.org/#/c/319399/
> >>
> >>
> >>
> >> Best regards,
> >>
> >> Hongbin
> >>
> >>
> >>
> _
> >> _ OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> __
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe<http://requ...@lists.openstack.org?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Anne Gentle
www.justwriteclick.com<http://www.justwriteclick.com>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Higgins][Zun] Project roadmap

2016-06-13 Thread Hongbin Lu


> -Original Message-
> From: Sudipto Biswas [mailto:sbisw...@linux.vnet.ibm.com]
> Sent: June-13-16 1:43 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Higgins][Zun] Project roadmap
> 
> 
> 
> On Monday 13 June 2016 06:57 PM, Flavio Percoco wrote:
> > On 12/06/16 22:10 +, Hongbin Lu wrote:
> >> Hi team,
> >>
> >> During the team meetings these weeks, we collaborated the initial
> >> project roadmap. I summarized it as below. Please review.
> >>
> >> * Implement a common container abstraction for different container
> >> runtimes. The initial implementation will focus on supporting basic
> >> container operations (i.e. CRUD).
> >
> > What COE's are being considered for the first implementation? Just
> > docker and kubernetes?
[Hongbin Lu] Container runtimes, docker in particular, are being considered for 
the first implementation. We discussed how to support COEs in Zun but could not 
reach an agreement on the direction. I will leave it for further discussion.

> >
> >> * Focus on non-nested containers use cases (running containers on
> >> physical hosts), and revisit nested containers use cases (running
> >> containers on VMs) later.
> >> * Provide two set of APIs to access containers: The Nova APIs and
> the
> >> Zun-native APIs. In particular, the Zun-native APIs will expose full
> >> container capabilities, and Nova APIs will expose capabilities that
> >> are shared between containers and VMs.
> >
> > - Is the nova side going to be implemented in the form of a Nova
> > driver (like ironic's?)? What do you mean by APIs here?
[Hongbin Lu] Yes, the plan is to implement a Zun virt-driver for Nova. The idea 
is similar to Ironic.

> >
> > - What operations are we expecting this to support (just CRUD
> > operations on containers?)?
[Hongbin Lu] We are working on finding the list of operations to support. There 
is a BP for tracking this effort: 
https://blueprints.launchpad.net/zun/+spec/api-design .

> >
> > I can see this driver being useful for specialized services like
> Trove
> > but I'm curious/concerned about how this will be used by end users
> > (assuming that's the goal).
[Hongbin Lu] I agree that end users might not be satisfied by basic container 
operations like CRUD. We will discuss how to offer more to make the service 
useful in production.

> >
> >
> >> * Leverage Neutron (via Kuryr) for container networking.
> >> * Leverage Cinder for container data volume.
> >> * Leverage Glance for storing container images. If necessary,
> >> contribute to Glance for missing features (i.e. support layer of
> >> container images).
> >
> > Are you aware of https://review.openstack.org/#/c/249282/ ?
> This support is very minimalistic in nature, since it doesn't do
> anything beyond just storing a docker FS tar ball.
> I think it was felt that, further support for docker FS was needed.
> While there were suggestions of private docker registry, having
> something in band (w.r.t openstack) maybe desirable.
[Hongbin Lu] Yes, Glance doesn't support layered container images, which is a 
missing feature.

> >> * Support enforcing multi-tenancy by doing the following:
> >> ** Add configurable options for scheduler to enforce neighboring
> >> containers belonging to the same tenant.
> >> ** Support hypervisor-based container runtimes.
> >>
> >> The following topics have been discussed, but the team cannot reach
> >> consensus on including them into the short-term project scope. We
> >> skipped them for now and might revisit them later.
> >> * Support proxying API calls to COEs.
> >
> > Any link to what this proxy will do and what service it'll talk to?
> > I'd generally advice against having proxy calls in services. We've
> > just done work in Nova to deprecate the Nova Image proxy.
[Hongbin Lu] Maybe "proxy" is not the right word. What I mean is to translate 
the request into API calls to the COEs. For example, a user requests to create a 
container in Zun, and Zun creates a single-container pod in k8s (a hedged sketch of 
that translation follows).
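
A minimal sketch of that translation, assuming a made-up Zun-side request format on 
input and the standard Kubernetes v1 Pod schema on output:

    # Illustration only: "spec" is a made-up Zun-side request; the result is a
    # standard single-container Kubernetes v1 Pod manifest that a translating
    # layer could submit to the k8s API on the user's behalf.
    def zun_request_to_pod(spec):
        return {
            "apiVersion": "v1",
            "kind": "Pod",
            "metadata": {"name": spec["name"]},
            "spec": {
                "containers": [{
                    "name": spec["name"],
                    "image": spec["image"],
                    "command": spec.get("command", []),
                }],
            },
        }

    print(zun_request_to_pod({"name": "web", "image": "nginx:latest"}))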

> >
> >> * Advanced container operations (i.e. keep container alive, load
> >> balancer setup, rolling upgrade).
> >> * Nested containers use cases (i.e. provision container hosts).
> >> * Container composition (i.e. support docker-compose like DSL).
> >>
> >> NOTE: I might forgot and mis-understood something. Please feel free
> >> to point out if anything is wrong or missing.
> >
> > It sounds you've got more than enough to work on for now,

Re: [openstack-dev] [Magnum] The Magnum Midcycle

2016-06-08 Thread Hongbin Lu
Ricardo,

Thanks for the offer. Could you let me know the exact location?

Best regards,
Hongbin

> -Original Message-
> From: Ricardo Rocha [mailto:rocha.po...@gmail.com]
> Sent: June-08-16 5:43 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle
> 
> Hi Hongbin.
> 
> Not sure how this fits everyone, but we would be happy to host it at
> CERN. How do people feel about it? We can add a nice tour of the place
> as a bonus :)
> 
> Let us know.
> 
> Ricardo
> 
> 
> 
> On Tue, Jun 7, 2016 at 10:32 PM, Hongbin Lu <hongbin...@huawei.com>
> wrote:
> > Hi all,
> >
> >
> >
> > Please find the Doodle pool below for selecting the Magnum midcycle
> date.
> > Presumably, it will be a 2 days event. The location is undecided for
> now.
> > The previous midcycles were hosted in bay area so I guess we will
> stay
> > there at this time.
> >
> >
> >
> > http://doodle.com/poll/5tbcyc37yb7ckiec
> >
> >
> >
> > In addition, the Magnum team is finding a host for the midcycle.
> > Please let us know if you interest to host us.
> >
> >
> >
> > Best regards,
> >
> > Hongbin
> >
> >
> >
> __
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [higgins] Continued discussion from the last team meeting

2016-05-28 Thread Hongbin Lu
Joshua,

Good point. It is optimal to start off a project with a greater vision and work on 
getting there. However, it will take time to build consensus within the team. 
You are welcome to participate in building the project vision. Right now, it looks 
like the team doesn't have consensus on the long-term scope yet, but we do have 
consensus on the short-term objectives. In that case, I would suggest the team 
focus on the basics at the current stage. In the meanwhile, we are holding weekly 
meetings to let everyone share their long-term vision and drive consensus on it.

Best regards,
Hongbin

> -Original Message-
> From: Joshua Harlow [mailto:harlo...@fastmail.com]
> Sent: May-27-16 4:46 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [higgins] Continued discussion from the
> last team meeting
> 
> I get this idea, I just want to bring up the option that if u only
> start off with a basic vision, then u basically get that as a result,
> vs IMHO where u start off with a bigger/greater vision and work on
> getting there.
> 
> I'd personally rather work on a project that has a advanced vision vs
> one that has 'just do the basics' but thats just my 2 cents...
> 
> Hongbin Lu wrote:
> > I agree with you and Qiming. The Higgins project should start with
> > basic functionalities and revisit advanced features later.
> >
> > Best regards,
> >
> > Hongbin
> >
> > *From:*Yanyan Hu [mailto:huyanya...@gmail.com]
> > *Sent:* May-24-16 11:06 PM
> > *To:* OpenStack Development Mailing List (not for usage questions)
> > *Subject:* Re: [openstack-dev] [higgins] Continued discussion from
> the
> > last team meeting
> >
> > Hi, Hongbin, thanks a lot for the summary! The following are my
> > thoughts on the two questions left:
> >
> > About container composition, it is a really useful and important
> > feature for enduser. But based on my understanding, user can actually
> > achieve the same goal by leveraging other high level OpenStack
> services, e.g.
> > defining a Heat template with Higgins container resources and
> > app/service (softwareconfig/softwaredeployment resources) running
> > inside containers. In future we can implement related functionality
> > inside Higgins to better support this kind of use cases natively. But
> > in current stage, I suggest we focus on container primitive and its
> > basic operations.
> >
> > For container host management, I agree we should expose related API
> > interfaces to operator(admin). Ideally, Higgins should be able to
> > manage all container hosts(baremetal and VM) automatically, but
> manual
> > intervention could be necessary in many pratical use cases. But I
> > suggest to hide these API interfaces from endusers since it's not
> > their responsibility to manage those hosts.
> >
> > Thanks.
> >
> > 2016-05-25 4:55 GMT+08:00 Hongbin Lu <hongbin...@huawei.com
> > <mailto:hongbin...@huawei.com>>:
> >
> > Hi all,
> >
> > At the last team meeting, we tried to define the scope of the Higgins
> > project. In general, we agreed to focus on the following features as
> > an initial start:
> >
> > ·Build a container abstraction and use docker as the first
> implementation.
> >
> > ·Focus on basic container operations (i.e. CRUD), and leave advanced
> > operations (i.e. keep container alive, rolling upgrade, etc.) to
> users
> > or other projects/services.
> >
> > ·Start with non-nested container use cases (e.g. containers on
> > physical hosts), and revisit nested container use cases (e.g.
> > containers on VMs) later.
> >
> > The items below need further discussion so I started this ML to
> discuss it.
> >
> > 1.Container composition: implement a docker compose like feature
> >
> > 2.Container host management: abstract container host
> >
> > For #1, it seems we broadly agreed that this is a useful feature. The
> > argument is where this feature belongs to. Some people think this
> > feature belongs to other projects, such as Heat, and others think it
> > belongs to Higgins so we should implement it. For #2, we were mainly
> > debating two things: where the container hosts come from (provisioned
> > by Nova or provided by operators); should we expose host management
> > APIs to end-users? Thoughts?
> >
> > Best regards,
> >
> > Hongbin
> >
> >
> >
> __
> >  OpenStack Development Mailing List (not for usage questions)

Re: [openstack-dev] [TripleO][Kolla][Heat][Higgins][Magnum][Kuryr] Gap analysis: Heat as a k8s orchestrator

2016-05-28 Thread Hongbin Lu


> -Original Message-
> From: Zane Bitter [mailto:zbit...@redhat.com]
> Sent: May-27-16 6:31 PM
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] [TripleO][Kolla][Heat][Higgins][Magnum][Kuryr]
> Gap analysis: Heat as a k8s orchestrator
> 
> I spent a bit of time exploring the idea of using Heat as an external
> orchestration layer on top of Kubernetes - specifically in the case of
> TripleO controller nodes but I think it could be more generally useful
> too - but eventually came to the conclusion it doesn't work yet, and
> probably won't for a while. Nevertheless, I think it's helpful to
> document a bit to help other people avoid going down the same path, and
> also to help us focus on working toward the point where it _is_
> possible, since I think there are other contexts where it would be
> useful too.
> 
> We tend to refer to Kubernetes as a "Container Orchestration Engine"
> but it does not actually do any orchestration, unless you count just
> starting everything at roughly the same time as 'orchestration'. Which
> I wouldn't. You generally handle any orchestration requirements between
> services within the containers themselves, possibly using external
> services like etcd to co-ordinate. (The Kubernetes project refer to
> this as "choreography", and explicitly disclaim any attempt at
> orchestration.)
> 
> What Kubernetes *does* do is more like an actively-managed version of
> Heat's SoftwareDeploymentGroup (emphasis on the _Group_). Brief recap:
> SoftwareDeploymentGroup is a type of ResourceGroup; you give it a map
> of resource names to server UUIDs and it creates a SoftwareDeployment
> for each server. You have to generate the list of servers somehow to
> give it (the easiest way is to obtain it from the output of another
> ResourceGroup containing the servers). If e.g. a server goes down you
> have to detect that externally, and trigger a Heat update that removes
> it from the templates, redeploys a replacement server, and regenerates
> the server list before a replacement SoftwareDeployment is created. In
> constrast, Kubernetes is running on a cluster of servers, can use rules
> to determine where to run containers, and can very quickly redeploy
> without external intervention in response to a server or container
> falling over. (It also does rolling updates, which Heat can also do
> albeit in a somewhat hacky way when it comes to SoftwareDeployments -
> which we're planning to fix.)
> 
> So this seems like an opportunity: if the dependencies between services
> could be encoded in Heat templates rather than baked into the
> containers then we could use Heat as the orchestration layer following
> the dependency-based style I outlined in [1]. (TripleO is already
> moving in this direction with the way that composable-roles uses
> SoftwareDeploymentGroups.) One caveat is that fully using this style
> likely rules out for all practical purposes the current Pacemaker-based
> HA solution. We'd need to move to a lighter-weight HA solution, but I
> know that TripleO is considering that anyway.
> 
> What's more though, assuming this could be made to work for a
> Kubernetes cluster, a couple of remappings in the Heat environment file
> should get you an otherwise-equivalent single-node non-HA deployment
> basically for free. That's particularly exciting to me because there
> are definitely deployments of TripleO that need HA clustering and
> deployments that don't and which wouldn't want to pay the complexity
> cost of running Kubernetes when they don't make any real use of it.
> 
> So you'd have a Heat resource type for the controller cluster that maps
> to either an OS::Nova::Server or (the equivalent of) an OS::Magnum::Bay,
> and a bunch of software deployments that map to either a
> OS::Heat::SoftwareDeployment that calls (I assume) docker-compose
> directly or a Kubernetes Pod resource to be named later.
> 
> The first obstacle is that we'd need that Kubernetes Pod resource in
> Heat. Currently there is no such resource type, and the OpenStack API
> that would be expected to provide that API (Magnum's /container
> endpoint) is being deprecated, so that's not a long-term solution.[2]
> Some folks from the Magnum community may or may not be working on a
> separate project (which may or may not be called Higgins) to do that.
> It'd be some time away though.
> 
> An alternative, though not a good one, would be to create a Kubernetes
> resource type in Heat that has the credentials passed in somehow. I'm
> very against that though. Heat is just not good at handling credentials
> other than Keystone ones. We haven't ever created a resource type like
> this before, except for the Docker one in /contrib that serves as a
> prime example of what *not* to do. And if it doesn't make sense to wrap
> an OpenStack API around this then IMO it isn't going to make any more
> sense to wrap a Heat resource around it.

There are ways to alleviate the credential handling issue. First, 

Re: [openstack-dev] [higgins] Docker-compose support

2016-05-31 Thread Hongbin Lu
I don’t think it is a good idea to re-invent docker-compose in Higgins. Instead, we 
should leverage existing libraries/tools if we can.

Frankly, I don’t think Higgins should interpret any docker-compose-like DSL on the 
server side, but maybe it is a good idea to have a CLI extension that interprets a 
specific DSL and translates it into a set of REST API calls to the Higgins server. The 
solution should be generic enough that we can re-use it to interpret other 
DSLs (e.g. pod, TOSCA, etc.) in the future.
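
To illustrate what I have in mind (everything below is hypothetical and not an
agreed API; the endpoint, port and field names are invented), such a CLI
extension could be a thin client-side translator along these lines:

  # Hypothetical sketch only: translate a minimal compose-like YAML file into
  # plain REST calls against a Higgins endpoint. The resource path and request
  # fields are invented for illustration; the real API is still to be defined.
  import requests
  import yaml

  HIGGINS_URL = 'http://controller:9517/v1'  # hypothetical endpoint

  def compose_to_rest(compose_file):
      with open(compose_file) as f:
          spec = yaml.safe_load(f)
      # Each entry under 'services' becomes one container-create call.
      for name, svc in spec.get('services', {}).items():
          body = {
              'name': name,
              'image': svc['image'],
              'command': svc.get('command'),
              'environment': svc.get('environment', {}),
          }
          resp = requests.post(HIGGINS_URL + '/containers', json=body)
          resp.raise_for_status()

  # compose_to_rest('docker-compose.yml')

A different DSL (pod, TOSCA, etc.) would then only need its own loader in front
of the same set of REST calls.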

Best regards,
Hongbin

From: Denis Makogon [mailto:lildee1...@gmail.com]
Sent: May-31-16 3:25 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [higgins] Docker-compose support

Hello.

It is hard to tell if the given API will be the final version, but I tried to make it 
similar to the CLI and its capabilities. So, why not?

2016-05-31 22:02 GMT+03:00 Joshua Harlow:
Cool good to know,

I see 
https://github.com/docker/compose/pull/3535/files#diff-1d1516ea1e61cd8b44d000c578bbd0beR66

Would that be the primary API? Hard to tell what the API there actually is, 
haha. Is it the run() method?

I was thinking more along the lines that higgins could be an 'interpreter' of the 
same docker-compose format (or a similar format); if the library that is being 
created takes a docker-compose file and turns it into an 'intermediate' 
version/format, that'd be cool. The compiled version would then be 'executable' 
(and introspectable) by, say, higgins (which could traverse over that 
intermediate version and activate its own code to turn the intermediate 
version's primitives into reality), or a docker-compose service could, or ...
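
Roughly the kind of 'intermediate' format I'm picturing (purely illustrative, no
existing library here, all names invented):

  # Compile a compose-style document into a flat list of primitive steps that
  # any engine (higgins, docker-compose itself, ...) could introspect and
  # execute; the step kinds and fields are made up for this example.
  import collections
  import yaml

  Step = collections.namedtuple('Step', ['kind', 'name', 'params'])

  def compile_compose(compose_yaml):
      spec = yaml.safe_load(compose_yaml)
      steps = []
      for name, svc in spec.get('services', {}).items():
          steps.append(Step('pull_image', name, {'image': svc['image']}))
          steps.append(Step('create_container', name,
                            {'image': svc['image'],
                             'command': svc.get('command')}))
          steps.append(Step('start_container', name, {}))
      return steps

  # An engine then just walks the steps and maps each 'kind' onto its own
  # calls, e.g.:
  #   for step in compile_compose(open('docker-compose.yml').read()):
  #       engine.handlers[step.kind](step.name, **step.params)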

What about TOSCA? From my own perspective the compose format is too limited, so it 
is really worth considering the use of TOSCA in Higgins workflows.


Libcompose also seems to be targeted at being a higher-level library, at least from 
reading the summary. Neither seems to take a compose yaml file, turn it 
into an intermediate format, and expose that intermediate format to others for 
introspection/execution (while also likely providing a default execution engine 
that understands that format); instead both just provide an equivalent of:

That's why I've started this thread: as a community we have use cases for Higgins 
itself and for compose, but most of them are not formalized or even written down. 
Isn't this a good time to define them?

  project = make_project(yaml_file)
  project.run()  # or project.up()

Which probably isn't the best API for something like a web service that uses 
that same library. IMHO having a long-running run() method

Well, compose allows running detached executions for most of its API calls. By 
using events, we can track service/container statuses (but it is not really 
trivial).

exposed, without the necessary state tracking and the ability to 
interrupt/pause/resume that run() method, is not going to end well for 
users of that lib (especially a web service that needs to periodically be 
stopped via `service webservice stop`, restarted, or ...).

Yes, agreed. But docker or swarm by itself doesn't provide such an API (can't say 
the same for K8s).
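
(On the events point: a rough sketch of what status tracking through the Docker
events stream could look like, assuming the docker SDK for Python is available;
the event payload keys can differ between daemon/API versions.)

  # Rough sketch: watch the Docker events stream and record the last observed
  # action per container. Assumes the "docker" SDK for Python; keys such as
  # 'Type', 'Action' and 'Actor' may vary across API versions.
  import docker

  def watch_statuses():
      client = docker.from_env()
      statuses = {}
      for event in client.events(decode=True):
          if event.get('Type') != 'container':
              continue
          name = event.get('Actor', {}).get('Attributes', {}).get('name')
          statuses[name] = event.get('Action')  # e.g. 'start', 'die', 'stop'

  # watch_statuses()  # blocks; a service would run this in a worker thread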

Denis Makogon wrote:
Hello Stackers.


As part of the discussions around what Higgins is and what its mission is, there
were a couple of you who mentioned docker-compose [1] and the necessity of
doing the same thing for Higgins, but from scratch.

I don't think that going in that direction is the best way to spend
development cycles. That's why I ask you to take a look at a recent
patchset submitted to docker-compose upstream [2] that makes this tool
(initially designed as a CLI) become a library with a Python API. The
whole idea is to make docker-compose look similar to libcompose [3]
(written in Go).

If we need to utilize docker-compose features in Higgins, I'd recommend
working on this with the Docker community and convincing them to land that
patch upstream.

If you have any questions, please let me know.

[1] https://docs.docker.com/compose/
[2] https://github.com/docker/compose/pull/3535
[3] https://github.com/docker/libcompose


Kind regards,
Denys Makogon

[openstack-dev] [Higgins] Proposing Eli Qiao to be a Higgins core

2016-05-31 Thread Hongbin Lu
Hi team,

I wrote this email to propose Eli Qiao (taget-9) to be a Higgins core. 
Normally, the requirement to join the core team is to contribute consistently 
to the project for a certain period of time. However, given the fact that the 
project is new and the initial core team was formed based on a commitment, I am 
fine with proposing a new core based on a strong commitment to contribute plus a 
few useful patches/reviews. In addition, Eli Qiao is currently a Magnum core 
and I believe his expertise will be an asset to the Higgins team.

According to the OpenStack Governance process [1], we require a minimum of 4 +1 
votes from the existing Higgins core team within a 1-week voting window (consider 
this proposal as a +1 vote from me). A vote of -1 is a veto. If we cannot get 
enough votes, or there is a veto vote prior to the end of the voting window, Eli 
will not be able to join the core team and will need to wait 30 days to reapply.

The voting is open until Tuesday, June 7th.

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Best regards,
Hongbin


[openstack-dev] [Higgins] Call for contribution for Higgins API design

2016-05-31 Thread Hongbin Lu
Hi team,

As discussed in the last team meeting, we agreed to define core use cases for 
the API design. I have created a blueprint for that. We need an owner for the 
blueprint, and it requires a spec to clarify the API design. Please let me know 
if you are interested in this work (it might require a significant amount of time to 
work on the spec).

https://blueprints.launchpad.net/python-higgins/+spec/api-design

Best regards,
Hongbin


Re: [openstack-dev] [Higgins] Call for contribution for Higgins API design

2016-05-31 Thread Hongbin Lu
Sheel,

Thanks for taking on the responsibility. I have assigned the BP to you. As discussed, 
please submit a spec for the API design. Feel free to let us know if you need 
any help.

Best regards,
Hongbin

From: Sheel Rana Insaan [mailto:ranasheel2...@gmail.com]
Sent: May-31-16 9:23 PM
To: Hongbin Lu
Cc: adit...@nectechnologies.in; vivek.jain.openst...@gmail.com; 
flw...@catalyst.net.nz; Shuu Mutou; Davanum Srinivas; OpenStack Development 
Mailing List (not for usage questions); Chandan Kumar; hai...@xr.jp.nec.com; Qi 
Ming Teng; sitlani.namr...@yahoo.in; Yuanying; Kumari, Madhuri; 
yanya...@cn.ibm.com
Subject: Re: [Higgins] Call for contribution for Higgins API design


Dear Hongbin,

I am interested in this.
Thanks!!

Best Regards,
Sheel Rana
On Jun 1, 2016 3:53 AM, "Hongbin Lu" <hongbin...@huawei.com> wrote:
Hi team,

As discussed in the last team meeting, we agreed to define core use cases for 
the API design. I have created a blueprint for that. We need an owner for the 
blueprint, and it requires a spec to clarify the API design. Please let me know 
if you are interested in this work (it might require a significant amount of time to 
work on the spec).

https://blueprints.launchpad.net/python-higgins/+spec/api-design

Best regards,
Hongbin


Re: [openstack-dev] [higgins] Should we rename "Higgins"?

2016-05-31 Thread Hongbin Lu
Shu,

According to the feedback from the last team meeting, Gatling doesn't seem to 
be a suitable name. Are you able to find an alternative name?

Best regards,
Hongbin

> -Original Message-
> From: Shuu Mutou [mailto:shu-mu...@rf.jp.nec.com]
> Sent: May-24-16 4:30 AM
> To: openstack-dev@lists.openstack.org
> Cc: Haruhiko Katou
> Subject: [openstack-dev] [higgins] Should we rename "Higgins"?
> 
> Hi all,
> 
> Unfortunately "higgins" is used by media server project on Launchpad
> and CI software on PYPI. Now, we use "python-higgins" for our project
> on Launchpad.
> 
> IMO, we should rename project to prevent increasing points to patch.
> 
> How about "Gatling"? It's only association from Magnum. It's not used
> on both Launchpad and PYPI.
> Is there any idea?
> 
> Renaming opportunity will come (it seems only twice in a year) on
> Friday, June 3rd. Few projects will rename on this date.
> http://markmail.org/thread/ia3o3vz7mzmjxmcx
> 
> And if project name issue will be fixed, I'd like to propose UI
> subproject.
> 
> Thanks,
> Shu
> 
> 


Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-02 Thread Hongbin Lu
Madhuri,

It looks like both of us agree on the idea of having a heterogeneous set of nodes. For 
the implementation, I am open to alternatives (I supported the work-around idea 
because I cannot think of a feasible implementation purely using Heat, 
unless Heat supports "for" logic, which is very unlikely to happen. However, if 
anyone can think of a pure Heat implementation, I am totally fine with that).

Best regards,
Hongbin

> -Original Message-
> From: Kumari, Madhuri [mailto:madhuri.kum...@intel.com]
> Sent: June-02-16 12:24 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> managing the bay nodes
> 
> Hi Hongbin,
> 
> I also liked the idea of having a heterogeneous set of nodes, but IMO such
> features should not be implemented in Magnum, as that would deviate Magnum
> again from its roadmap. Instead, we should leverage Heat (or maybe
> Senlin) APIs for the same.
> 
> I vote +1 for this feature.
> 
> Regards,
> Madhuri
> 
> -Original Message-
> From: Hongbin Lu [mailto:hongbin...@huawei.com]
> Sent: Thursday, June 2, 2016 3:33 AM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> managing the bay nodes
> 
> Personally, I think this is a good idea, since it can address a set of
> similar use cases like below:
> * I want to deploy a k8s cluster to 2 availability zone (in future 2
> regions/clouds).
> * I want to spin up N nodes in AZ1, M nodes in AZ2.
> * I want to scale the number of nodes in specific AZ/region/cloud. For
> example, add/remove K nodes from AZ1 (with AZ2 untouched).
> 
> The use case above should be very common and universal everywhere. To
> address the use case, Magnum needs to support provisioning
> heterogeneous set of nodes at deploy time and managing them at runtime.
> It looks the proposed idea (manually managing individual nodes or
> individual group of nodes) can address this requirement very well.
> Besides the proposed idea, I cannot think of an alternative solution.
> 
> Therefore, I vote to support the proposed idea.
> 
> Best regards,
> Hongbin
> 
> > -Original Message-
> > From: Hongbin Lu
> > Sent: June-01-16 11:44 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually
> > managing the bay nodes
> >
> > Hi team,
> >
> > A blueprint was created for tracking this idea:
> > https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-
> > nodes . I won't approve the BP until there is a team decision on
> > accepting/rejecting the idea.
> >
> > From the discussion in design summit, it looks everyone is OK with
> the
> > idea in general (with some disagreements in the API style). However,
> > from the last team meeting, it looks some people disagree with the
> > idea fundamentally. so I re-raised this ML to re-discuss.
> >
> > If you agree or disagree with the idea of manually managing the Heat
> > stacks (that contains individual bay nodes), please write down your
> > arguments here. Then, we can start debating on that.
> >
> > Best regards,
> > Hongbin
> >
> > > -Original Message-
> > > From: Cammann, Tom [mailto:tom.camm...@hpe.com]
> > > Sent: May-16-16 5:28 AM
> > > To: OpenStack Development Mailing List (not for usage questions)
> > > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> > > managing the bay nodes
> > >
> > > The discussion at the summit was very positive around this
> > requirement
> > > but as this change will make a large impact to Magnum it will need
> a
> > > spec.
> > >
> > > On the API of things, I was thinking a slightly more generic
> > > approach to incorporate other lifecycle operations into the same
> API.
> > > Eg:
> > > magnum bay-manage <bay> <operation>
> > >
> > > magnum bay-manage <bay> reset --hard
> > > magnum bay-manage <bay> rebuild
> > > magnum bay-manage <bay> node-delete <node>
> > > magnum bay-manage <bay> node-add --flavor <flavor>
> > > magnum bay-manage <bay> node-reset <node>
> > > magnum bay-manage <bay> node-list
> > >
> > > Tom
> > >
> > > From: Yuanying OTSUKA <yuany...@oeilvert.org>
> > > Reply-To: "OpenStack Development Mailing List (not for usage
> > > questions)" <openstack-dev@lists.openstack.org>
> > > Date: Monday, 16 May 2016 at 01:07
> > >

Re: [openstack-dev] [higgins] Continued discussion from the last team meeting

2016-05-26 Thread Hongbin Lu
I agree with you and Qiming. The Higgins project should start with basic 
functionalities and revisit advanced features later.

Best regards,
Hongbin

From: Yanyan Hu [mailto:huyanya...@gmail.com]
Sent: May-24-16 11:06 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [higgins] Continued discussion from the last team 
meeting

Hi Hongbin, thanks a lot for the summary! The following are my thoughts on 
the two questions left:
About container composition, it is a really useful and important feature for 
end users. But based on my understanding, users can actually achieve the same goal 
by leveraging other high-level OpenStack services, e.g. defining a Heat 
template with Higgins container resources and app/service 
(softwareconfig/softwaredeployment resources) running inside containers. In 
the future we can implement related functionality inside Higgins to better support 
this kind of use case natively. But at the current stage, I suggest we focus on 
the container primitive and its basic operations.

For container host management, I agree we should expose related API interfaces 
to the operator (admin). Ideally, Higgins should be able to manage all container 
hosts (baremetal and VM) automatically, but manual intervention could be 
necessary in many practical use cases. However, I suggest hiding these API 
interfaces from end users since it's not their responsibility to manage those 
hosts.
Thanks.

2016-05-25 4:55 GMT+08:00 Hongbin Lu <hongbin...@huawei.com>:
Hi all,

At the last team meeting, we tried to define the scope of the Higgins project. 
In general, we agreed to focus on the following features as an initial start:

• Build a container abstraction and use docker as the first 
implementation.

• Focus on basic container operations (i.e. CRUD), and leave advanced 
operations (i.e. keep container alive, rolling upgrade, etc.) to users or other 
projects/services.

• Start with non-nested container use cases (e.g. containers on 
physical hosts), and revisit nested container use cases (e.g. containers on 
VMs) later.
The items below need further discussion, so I started this ML thread to discuss them.

1.   Container composition: implement a docker-compose-like feature

2.   Container host management: abstract container host
For #1, it seems we broadly agreed that this is a useful feature. The argument 
is about where this feature belongs. Some people think this feature belongs to 
other projects, such as Heat, and others think it belongs to Higgins so we 
should implement it. For #2, we were mainly debating two things: where the 
container hosts come from (provisioned by Nova or provided by operators), and 
whether we should expose host management APIs to end users. Thoughts?

Best regards,
Hongbin




--
Best regards,

Yanyan


Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-01 Thread Hongbin Lu
Personally, I think this is a good idea, since it can address a set of similar 
use cases like the ones below:
* I want to deploy a k8s cluster to 2 availability zones (in the future, 2 
regions/clouds).
* I want to spin up N nodes in AZ1 and M nodes in AZ2.
* I want to scale the number of nodes in a specific AZ/region/cloud. For example, 
add/remove K nodes from AZ1 (with AZ2 untouched).

The use cases above should be very common and universal everywhere. To address 
them, Magnum needs to support provisioning a heterogeneous set of nodes 
at deploy time and managing them at runtime. It looks like the proposed idea 
(manually managing individual nodes or individual groups of nodes) can address 
this requirement very well. Besides the proposed idea, I cannot think of an 
alternative solution.
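
To make this a bit more concrete, here is a sketch of the shape I have in mind
(this is not Magnum code; it assumes python-heatclient and a Magnum-owned HOT
template per node group, and all names are illustrative): one Heat stack per
node group, so each group (per AZ/flavor) can be scaled or deleted independently.

  # Sketch only. 'heat' is an authenticated python-heatclient client, e.g.
  #   from heatclient import client as heat_client
  #   heat = heat_client.Client('1', session=keystone_session)
  # NODEGROUP_TEMPLATE is a placeholder for a Magnum-owned HOT template.
  NODEGROUP_TEMPLATE = '...'

  def create_nodegroup_stack(heat, bay_name, group):
      # 'group' is e.g. {'name': 'az1-minions', 'flavor': 'm1.small',
      #                  'count': 2, 'availability_zone': 'us-east-1'}
      return heat.stacks.create(
          stack_name='%s-%s' % (bay_name, group['name']),
          template=NODEGROUP_TEMPLATE,
          parameters={
              'flavor': group['flavor'],
              'number_of_nodes': group['count'],
              'availability_zone': group['availability_zone'],
          })

Scaling AZ1 only would then be a stack update on that one group's stack, leaving
the other groups untouched.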

Therefore, I vote to support the proposed idea.

Best regards,
Hongbin

> -Original Message-
> From: Hongbin Lu
> Sent: June-01-16 11:44 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually
> managing the bay nodes
> 
> Hi team,
> 
> A blueprint was created for tracking this idea:
> https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-
> nodes . I won't approve the BP until there is a team decision on
> accepting/rejecting the idea.
> 
> From the discussion in design summit, it looks everyone is OK with the
> idea in general (with some disagreements in the API style). However,
> from the last team meeting, it looks some people disagree with the idea
> fundamentally. so I re-raised this ML to re-discuss.
> 
> If you agree or disagree with the idea of manually managing the Heat
> stacks (that contains individual bay nodes), please write down your
> arguments here. Then, we can start debating on that.
> 
> Best regards,
> Hongbin
> 
> > -Original Message-
> > From: Cammann, Tom [mailto:tom.camm...@hpe.com]
> > Sent: May-16-16 5:28 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> > managing the bay nodes
> >
> > The discussion at the summit was very positive around this
> requirement
> > but as this change will make a large impact to Magnum it will need a
> > spec.
> >
> > On the API of things, I was thinking a slightly more generic approach
> > to incorporate other lifecycle operations into the same API.
> > Eg:
> > magnum bay-manage <bay> <operation>
> >
> > magnum bay-manage <bay> reset --hard
> > magnum bay-manage <bay> rebuild
> > magnum bay-manage <bay> node-delete <node>
> > magnum bay-manage <bay> node-add --flavor <flavor>
> > magnum bay-manage <bay> node-reset <node>
> > magnum bay-manage <bay> node-list
> >
> > Tom
> >
> > From: Yuanying OTSUKA <yuany...@oeilvert.org>
> > Reply-To: "OpenStack Development Mailing List (not for usage
> > questions)" <openstack-dev@lists.openstack.org>
> > Date: Monday, 16 May 2016 at 01:07
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > <openstack-dev@lists.openstack.org>
> > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> > managing the bay nodes
> >
> > Hi,
> >
> > I think users also want to specify which node to delete.
> > So we should manage “node” individually.
> >
> > For example:
> > $ magnum node-create --bay …
> > $ magnum node-list --bay
> > $ magnum node-delete $NODE_UUID
> >
> > Anyway, if Magnum wants to manage the lifecycle of container
> > infrastructure, this feature is necessary.
> >
> > Thanks
> > -yuanying
> >
> >
> > May 16, 2016 (Mon) 7:50 Hongbin Lu <hongbin...@huawei.com>:
> > Hi all,
> >
> > This is a continued discussion from the design summit. For recap,
> > Magnum manages bay nodes by using ResourceGroup from Heat. This
> > approach works but it is infeasible to manage the heterogeneity
> across
> > bay nodes, which is a frequently demanded feature. As an example,
> > there is a request to provision bay nodes across availability zones
> [1].
> > There is another request to provision bay nodes with different set of
> > flavors [2]. For the request features above, ResourceGroup won’t work
> > very well.
> >
> > The proposal is to remove the usage of ResourceGroup and manually
> > create Heat stack for each bay nodes. For example, for creating a
> > cluster with 2 masters and 3 minions, Magnum is going to manage 6
> Heat
> > stacks (instead of 1 big Heat stack as right now):
> > * A kube cluster stack that manages the g

[openstack-dev] [magnum][lbaas] Operator-facing installation guide

2016-06-01 Thread Hongbin Lu
Hi lbaas team,

I wonder if there is an operator-facing installation guide for neutron-lbaas. I 
am asking because Magnum is working on an installation guide [1] and 
neutron-lbaas is a dependency of Magnum. We want to link to an official lbaas 
guide so that our users will have complete instructions. Any pointers?

[1] https://review.openstack.org/#/c/319399/

Best regards,
Hongbin


Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-01 Thread Hongbin Lu
Hi team,

A blueprint was created for tracking this idea: 
https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-nodes . I 
won't approve the BP until there is a team decision on accepting/rejecting the 
idea.

From the discussion at the design summit, it looks like everyone is OK with the idea in 
general (with some disagreements on the API style). However, from the last team 
meeting, it looks like some people disagree with the idea fundamentally, so I 
re-raised it on the ML to re-discuss.

If you agree or disagree with the idea of manually managing the Heat stacks 
(that contain individual bay nodes), please write down your arguments here. 
Then, we can start debating it.

Best regards,
Hongbin

> -Original Message-
> From: Cammann, Tom [mailto:tom.camm...@hpe.com]
> Sent: May-16-16 5:28 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> managing the bay nodes
> 
> The discussion at the summit was very positive around this requirement
> but as this change will make a large impact to Magnum it will need a
> spec.
> 
> On the API of things, I was thinking a slightly more generic approach
> to incorporate other lifecycle operations into the same API.
> Eg:
> magnum bay-manage <bay> <operation>
> 
> magnum bay-manage <bay> reset --hard
> magnum bay-manage <bay> rebuild
> magnum bay-manage <bay> node-delete <node>
> magnum bay-manage <bay> node-add --flavor <flavor>
> magnum bay-manage <bay> node-reset <node>
> magnum bay-manage <bay> node-list
> 
> Tom
> 
> From: Yuanying OTSUKA <yuany...@oeilvert.org>
> Reply-To: "OpenStack Development Mailing List (not for usage
> questions)" <openstack-dev@lists.openstack.org>
> Date: Monday, 16 May 2016 at 01:07
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> managing the bay nodes
> 
> Hi,
> 
> I think users also want to specify which node to delete.
> So we should manage “node” individually.
> 
> For example:
> $ magnum node-create --bay …
> $ magnum node-list --bay
> $ magnum node-delete $NODE_UUID
> 
> Anyway, if Magnum wants to manage the lifecycle of container
> infrastructure, this feature is necessary.
> 
> Thanks
> -yuanying
> 
> 
> May 16, 2016 (Mon) 7:50 Hongbin Lu <hongbin...@huawei.com>:
> Hi all,
> 
> This is a continued discussion from the design summit. For recap,
> Magnum manages bay nodes by using ResourceGroup from Heat. This
> approach works but it is infeasible to manage the heterogeneity across
> bay nodes, which is a frequently demanded feature. As an example, there
> is a request to provision bay nodes across availability zones [1].
> There is another request to provision bay nodes with different set of
> flavors [2]. For the request features above, ResourceGroup won’t work
> very well.
> 
> The proposal is to remove the usage of ResourceGroup and manually
> create Heat stack for each bay nodes. For example, for creating a
> cluster with 2 masters and 3 minions, Magnum is going to manage 6 Heat
> stacks (instead of 1 big Heat stack as right now):
> * A kube cluster stack that manages the global resources
> * Two kube master stacks that manage the two master nodes
> * Three kube minion stacks that manage the three minion nodes
> 
> The proposal might require an additional API endpoint to manage nodes
> or a group of nodes. For example:
> $ magnum nodegroup-create --bay XXX --flavor m1.small --count 2 --
> availability-zone us-east-1 ….
> $ magnum nodegroup-create --bay XXX --flavor m1.medium --count 3 --
> availability-zone us-east-2 …
> 
> Thoughts?
> 
> [1] https://blueprints.launchpad.net/magnum/+spec/magnum-availability-
> zones
> [2] https://blueprints.launchpad.net/magnum/+spec/support-multiple-
> flavor
> 
> Best regards,
> Hongbin