Re: [openstack-dev] [openstack][Magnum] ways to get CA certificate in make-cert.sh from Magnum

2016-02-04 Thread Guz Egor
Wanghua,
Could you elaborate on why using a token is a problem? Provisioning a cluster takes a deterministic amount of time, so token expiration shouldn't be an issue (e.g. we can always assume that provisioning shouldn't take more than an hour). Also, we can generate a new token every time we update the stack, can't we?

--- Egor
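
For illustration, a minimal sketch (not Magnum's actual code) of that suggestion: mint a brand-new Keystone token with keystoneauth1 right before each stack create/update, so the token handed to the stack is always fresh. The endpoint and credentials below are placeholders.

    # mint a fresh token immediately before heat stack-create / stack-update;
    # all endpoint/credential values are placeholders
    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    auth = v3.Password(auth_url='http://keystone.example:5000/v3',
                       username='magnum-service-user',     # placeholder user
                       password='secret',
                       project_name='services',
                       user_domain_id='default',
                       project_domain_id='default')
    sess = session.Session(auth=auth)

    fresh_token = sess.get_token()   # pass this as the stack parameter instead of
                                     # reusing the token issued at bay creation time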
  From: Corey O'Brien 
 To: OpenStack Development Mailing List (not for usage questions) 
 
 Sent: Thursday, February 4, 2016 8:24 PM
 Subject: Re: [openstack-dev] [openstack][Magnum] ways to get CA certificate in 
make-cert.sh from Magnum
   
There currently isn't a way to distinguish between the user who creates the bay and the nodes in the bay, because the user is root on those nodes. Any credential that a node uses to communicate with Magnum is going to be accessible to that user.
Since we already have the trust, that seems like the best way to proceed for now, just to get something working.

Corey
On Thu, Feb 4, 2016 at 10:53 PM 王华  wrote:

Hi all,
Magnum currently uses a token to get the CA certificate in make-cert.sh. Tokens have an expiration time, so we should change this method. Here are two proposals.
1. Use the trust I introduced in [1]. This approach has a disadvantage: we can't limit access to certain APIs. For example, if we want some APIs to be accessible only from the bay and not by outside users, we need a way to distinguish those users, i.e. whether a request comes from the bay or from outside.
2. Create a user with a role that can access Magnum. This approach is used in Heat: Heat creates a user for each stack to communicate with Heat. We can add to that user the role already introduced in [1]. The user can directly access a limited set of Magnum APIs, and with the trust ID it can access other services.
[1] https://review.openstack.org/#/c/268852/
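
For illustration, a rough sketch of how proposal 2 could look from the node side (this is not existing Magnum code): the per-bay user authenticates with the trust, so it never depends on a pre-issued, expiring token, and then fetches the CA certificate from Magnum. The user, trust, endpoints, and the /certificates/<bay_uuid> path with its 'pem' field are assumptions based on the current make-cert.sh flow.

    # hypothetical replacement for the token-based call in make-cert.sh;
    # all names and endpoints are placeholders
    import requests
    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    auth = v3.Password(auth_url='http://keystone.example:5000/v3',
                       username='bay-user',            # per-bay user from [1]
                       password='bay-password',
                       user_domain_id='default',
                       trust_id='TRUST_ID')            # trust created by Magnum
    sess = session.Session(auth=auth)

    magnum_url = 'http://magnum-api.example:9511/v1'   # placeholder endpoint
    resp = requests.get('%s/certificates/%s' % (magnum_url, 'BAY_UUID'),
                        headers={'X-Auth-Token': sess.get_token()})
    resp.raise_for_status()
    ca_pem = resp.json().get('pem')                    # expected PEM-encoded CA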
Regards,
Wanghua


Re: [openstack-dev] [Magnum] gate issues

2016-02-04 Thread Guz Egor
Corey, I think we should do more investigation before applying any "hot" patches. E.g. I looked at several failures today and honestly there is no way to find out the reasons. I believe we are not copying logs
(https://github.com/openstack/magnum/blob/master/magnum/tests/functional/python_client_base.py#L163)
during test failures: we register the handler in setUp
(https://github.com/openstack/magnum/blob/master/magnum/tests/functional/python_client_base.py#L244),
but the Swarm tests create the bay in setUpClass
(https://github.com/openstack/magnum/blob/master/magnum/tests/functional/swarm/test_swarm_python_client.py#L48),
which is called before setUp. So there is no way to see any logs from the VM.
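
To make the gap concrete, here is a rough illustration (not the actual Magnum test code) of one possible shape of a fix: guard the class-level bay creation and copy the logs before re-raising. _create_bay() and _copy_logs() are hypothetical stand-ins for the real helpers.

    import testtools

    class SwarmFunctionalTest(testtools.TestCase):

        @classmethod
        def _create_bay(cls):
            pass    # stand-in for the real bay creation done in setUpClass

        @classmethod
        def _copy_logs(cls):
            pass    # stand-in for the real log-collection helper

        @classmethod
        def setUpClass(cls):
            super(SwarmFunctionalTest, cls).setUpClass()
            try:
                cls._create_bay()
            except Exception:
                # without this, a setUpClass failure leaves no logs behind,
                # because the per-test handler below is never registered
                cls._copy_logs()
                raise

        def setUp(self):
            super(SwarmFunctionalTest, self).setUp()
            # the existing handler only covers failures inside individual tests
            self.addOnException(lambda exc_info: self._copy_logs())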
Sorry, I cannot submit a patch or debug this myself because I will only get my laptop back on Tuesday ):
---  Egor
  From: Corey O'Brien 
 To: OpenStack Development Mailing List (not for usage questions) 
 
 Sent: Thursday, February 4, 2016 9:03 PM
 Subject: [openstack-dev] [Magnum] gate issues
   
So as we're all aware, the gate is a mess right now. I wanted to sum up some of the issues so we can figure out solutions.
1. The functional-api job sometimes fails because bays time out building after 1 hour. The logs look something like this:
magnum.tests.functional.api.v1.test_bay.BayTest.test_create_list_and_delete_bays [3733.626171s] ... FAILED
I can reproduce this hang on my devstack with etcdctl 2.0.10, as described in this bug (https://bugs.launchpad.net/magnum/+bug/1541105), but apparently either my fix of using 2.2.5 (https://review.openstack.org/#/c/275994/) is incomplete or there is another intermittent problem, because it happened again even with that fix:
(http://logs.openstack.org/94/275994/1/check/gate-functional-dsvm-magnum-api/32aacb1/console.html)
2. The k8s job has some sort of intermittent hang as well, causing a similar symptom to the swarm one: https://bugs.launchpad.net/magnum/+bug/1541964
3. When the functional-api job runs, it frequently destroys the VM, causing the Jenkins slave agent to die. Example: http://logs.openstack.org/03/275003/6/check/gate-functional-dsvm-magnum-api/a9a0eb9//console.html
When this happens, zuul re-queues a new build from the start on a new VM. This can happen many times in a row before the job completes. I chatted with openstack-infra about this, and after taking a look at one of the VMs, it looks like memory over-consumption leading to thrashing was a possible culprit. The sshd daemon was also dead, but the console showed things like "INFO: task kswapd0:77 blocked for more than 120 seconds". A cursory glance and following some of the jobs seem to indicate that this doesn't happen on RAX VMs, which have swap devices, unlike the OVH VMs.
4. In general, even when things work, the gate is really slow. The sequential master-then-node build process, in combination with underpowered VMs, makes bay builds take 25-30 minutes when they do succeed. Since we're already close to tipping over a VM, we run functional tests with concurrency=1, so 2 bay builds consume almost the entire allotted devstack testing time (generally 75 minutes of actual test time seems to be available).
Corey
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2016-02-12 Thread Guz Egor
Hongbin,
I am not sure that it's a good idea; it looks like you are proposing that Magnum enter the "scheduler wars" (personally, I am tired of the Mesos vs Kubernetes vs Swarm debates). If your concern is just utilization, you can always run the control plane on the "agent/slave" nodes; the main reason why operators (at least in our case) keep them separate is that they need different attention (e.g. I almost don't care why/when an "agent/slave" node died, but I always double-check that a master node was repaired or replaced).
One use case I do see for a shared COE (at least in our environment) is when developers want to run just a docker container without installing anything locally (e.g. docker-machine). But in most cases it's just examples from the internet or their own experiments ):
But we definitely should discuss it during the midcycle next week.

--- Egor
  From: Hongbin Lu 
 To: OpenStack Development Mailing List (not for usage questions) 
 
 Sent: Thursday, February 11, 2016 8:50 PM
 Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
   
Hi team,

Sorry for bringing up this old thread, but a recent debate on container resources [1] reminded me of the use case Kris mentioned below. I am going to propose a preliminary idea to address that use case. Of course, we could continue the discussion in the team meeting or at the midcycle.

Idea: Introduce a docker-native COE that consists of only minion/worker/slave nodes (no master nodes).
Goal: Eliminate duplicated IaaS resources (master node VMs, lbaas vips, floating ips, etc.).
Details: A traditional COE (k8s/swarm/mesos) consists of master nodes and worker nodes. In these COEs, control services (e.g. the scheduler) run on master nodes, and containers run on worker nodes. If we can port the COE control services to the Magnum control plane and share them across all tenants, we eliminate the need for master nodes, thus improving resource utilization. In the new COE, users create/manage containers through Magnum API endpoints. Magnum is responsible for spinning up tenant VMs, scheduling containers onto those VMs, and managing the life-cycle of those containers. Unlike other COEs, containers created by this COE are considered OpenStack-managed resources. That means they will be tracked in the Magnum DB and accessible by other OpenStack services (e.g. Horizon, Heat, etc.).

How do you feel about this proposal? Let's discuss.

[1] https://etherpad.openstack.org/p/magnum-native-api

Best regards,
Hongbin
  From: Kris G. Lindgren [mailto:klindg...@godaddy.com]
Sent: September-30-15 7:26 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

We are looking at deploying magnum as an answer for how we do containers company-wide at Godaddy. I am going to agree with both you and Josh.

I agree that managing one large system is going to be a pain, and past experience tells me this won't be practical or scale; however, from experience I also know exactly the pain Josh is talking about.

We currently have ~4k projects in our internal openstack cloud, and about 1/4 of the projects are currently doing some form of containers on their own, with more joining every day. If all of these projects were to convert over to the current magnum configuration, we would suddenly be attempting to support/configure ~1k magnum clusters. Considering that everyone will want it HA, we are looking at a minimum of 2 kube nodes per cluster + lbaas vips + floating ips. From a capacity standpoint this is an excessive amount of duplicated infrastructure to spin up in projects where people may be running 10-20 containers per project. From an operator support perspective this is a special level of hell that I do not want to get into. Even if I am off by 75%, 250 clusters still sucks.

From my point of view, an ideal use case for companies like ours (yahoo/godaddy) would be the ability to support hierarchical projects in magnum. That way we could create a project for each department, and then the subteams of those departments can have their own projects. We create a bay per department. Sub-projects, if they want to, can support creation of their own bays (but support of the kube cluster would then fall to that team). When a sub-project spins up a pod on a bay, minions get created inside that team's sub-projects, and the containers in that pod run on the capacity that was spun up under that project; the minions for each pod would be in a scaling group and as such grow/shrink as dictated by load.

The above would make it so that we support a minimal, yet imho reasonable, number of kube clusters, give people who can't/don't want to fall in line with the provided resource a way to make their own, and still offer a "good enough for a single company" level of multi-tenancy.
>Joshua,
>
>If you share resources, you give up multi-tenancy. No COE system has the
>concept of multi-tenancy (kubernetes has some basic

Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-01 Thread Guz Egor
Adrian,
I disagree; the host OS is very important for operators because of integration with all their internal tools/repos/etc.
I think it makes sense to limit OS support in the Magnum main source tree, but I am not sure Fedora Atomic is the right choice. First of all, there is no documentation about it, and I don't think it's used/tested much by the Docker/Kubernetes/Mesos communities. It makes sense to go with Ubuntu (I believe it's still the most widely adopted platform across all three COEs and OpenStack deployments) and CoreOS (it is highly adopted/tested in the Kubernetes community, and Mesosphere DCOS uses it as well). We could implement CoreOS support as a driver, and users could use it as a reference implementation.
--- Egor
  From: Adrian Otto 
 To: OpenStack Development Mailing List (not for usage questions) 
 
 Sent: Monday, February 29, 2016 10:36 AM
 Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro
   
Consider this: which OS runs on the bay nodes is not important to end users. What matters to users is the environments their containers execute in, which has only one thing in common with the bay node OS: the kernel. The linux syscall interface is stable enough that the various linux distributions can all run concurrently in neighboring containers sharing the same kernel. There is really no material reason why the bay OS choice must match the distro the container is based on. Although I’m persuaded by Hongbin’s concern to mitigate the risk of future changes WRT whatever OS distro is the prevailing one for bay nodes, there are a few items of concern about duality I’d like to zero in on:
1) Participation from Magnum contributors to support the CoreOS specific 
template features has been weak in recent months. By comparison, participation 
relating to Fedora/Atomic have been much stronger.
2) Properly testing multiple bay node OS distros would significantly increase the run time and complexity of our functional tests.
3) Having support for multiple bay node OS choices requires more extensive 
documentation, and more comprehensive troubleshooting details.
If we proceed with just one supported distro for bay nodes, and offer extensibility points to allow alternates to be used in place of it, we should be able to address the risk concern about the chosen distro by selecting an alternate when that change is needed, using those extensibility points. These include the ability to specify your own bay image, and the ability to use your own associated Heat template.
I see value in risk mitigation; it may make sense to simplify in the short term and address that need when it becomes necessary. My point of view might be different if we had contributors willing and ready to address the variety of drawbacks that accompany the strategy of supporting multiple bay node OS choices. In the absence of such community interest, my preference is to simplify to increase our velocity. This also seems to be a relatively easy way to reduce complexity around heat template versioning. What do you think?
Thanks,
Adrian

On Feb 29, 2016, at 8:40 AM, Hongbin Lu  wrote:
Hi team,

This is a continued discussion from a review [1]. Corey O'Brien suggested having Magnum support a single OS distro (Atomic). I disagreed. I think we should bring the discussion here to get a broader set of inputs.

Corey O'Brien: From the midcycle, we decided we weren't going to continue to support 2 different versions of the k8s template. Instead, we were going to maintain the Fedora Atomic version of k8s and remove the coreos templates from the tree. I don't think we should continue to develop features for coreos k8s if that is true. In addition, I don't think we should break the coreos template by adding the trust token as a heat parameter.

Hongbin Lu: I was at the midcycle and I don't remember any decision to remove CoreOS support. Why do you want to remove the CoreOS templates from the tree? Please note that this is a very big decision; please discuss it with the team thoughtfully and make sure everyone agrees.

Corey O'Brien: Removing the coreos templates was a part of the COE drivers decision. Since each COE driver will only support 1 distro+version+coe, we discussed which ones to support in tree. The decision was that instead of trying to support every distro and every version for every coe, the magnum tree would only have support for 1 version of 1 distro for each of the 3 COEs (swarm/k8s/mesos). Since we are already going to support Atomic for swarm, removing coreos and keeping Atomic for kubernetes was the favored choice.

Hongbin Lu: Strongly disagree. It is a huge risk to support a single distro. The selected distro could die in the future, who knows. Why make Magnum take this huge risk? Again, the decision to support a single distro is a very big decision. Please bring it up to the team and have it discussed thoughtfully before making any

Re: [openstack-dev] [magnum] Are Floating IPs really needed for all nodes?

2016-03-30 Thread Guz Egor
-1
Who is going to run/support this proxy? Also keep in mind that Kubernetes Service/NodePort functionality (http://kubernetes.io/docs/user-guide/services/#type-nodeport) is not going to work without a public IP, and this is a very handy feature.
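
To make the NodePort point concrete, here is a minimal Service of type NodePort (written as a Python dict for readability; the JSON would be fed to kubectl). NodePort exposes the service on a port of every node's IP, which is only reachable from outside the cloud if those node IPs are public/floating. Names and ports are just examples.

    import json

    nodeport_service = {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": "web"},            # example service name
        "spec": {
            "type": "NodePort",
            "selector": {"app": "web"},         # example pod selector
            "ports": [{
                "port": 80,                     # service port inside the cluster
                "targetPort": 8080,             # container port
                "nodePort": 30080,              # port opened on every node's IP
            }],
        },
    }

    print(json.dumps(nodeport_service, indent=2))   # e.g. kubectl create -f <file>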
--- Egor
  From: 王华 
 To: OpenStack Development Mailing List (not for usage questions) 
 
 Sent: Wednesday, March 30, 2016 8:41 PM
 Subject: Re: [openstack-dev] [magnum] Are Floating IPs really needed for all 
nodes?
   
Hi yuanying,
I agree with reducing the usage of floating IPs. But as far as I know, if we need to pull docker images from docker hub on the nodes, floating IPs are needed. To reduce the usage of floating IPs, we can use a proxy: only some nodes have floating IPs, and the other nodes access docker hub through the proxy.
Best Regards,
Wanghua
On Thu, Mar 31, 2016 at 11:19 AM, Eli Qiao  wrote:

Hi Yuanying,
+1
I think we can add an option for whether to use a floating IP address, since IP addresses are the kind of resource it is not wise to waste.
 
On 2016年03月31日 10:40, 大塚元央 wrote:
Hi team,
Previously, we had a reason why all nodes should have floating ips [1]. But now we have LoadBalancer features for masters [2] and minions [3], and minions do not necessarily need to have floating ips [4]. I think it's time to remove floating ips from all nodes.
I know we are using floating ips in the gate to get log files, so it's not a good idea to remove floating ips entirely.
I want to introduce a `disable-floating-ips-to-nodes` parameter to the bay model.
Thoughts?
[1]: http://lists.openstack.org/pipermail/openstack-dev/2015-June/067213.html
[2]: https://blueprints.launchpad.net/magnum/+spec/make-master-ha
[3]: https://blueprints.launchpad.net/magnum/+spec/external-lb
[4]: http://lists.openstack.org/pipermail/openstack-dev/2015-June/067280.html
Thanks,
-yuanying
  
 __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 -- 
Best Regards, Eli Qiao (乔立勇)
Intel OTC China 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

2016-04-11 Thread Guz Egor
+1 for "#1: Mesos and Marathon". Most deployments that I am aware of has this 
setup. Also we can provide several line instructions how to run Chronos on top 
of Marathon.
honestly I don't see how #2 will work, because Marathon installation is 
different from Aurora installation. 
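
As a sketch of that (not an official recipe), Chronos can be submitted to Marathon as an ordinary app over the Marathon REST API; the Marathon endpoint, ZooKeeper addresses, image tag and flag values below are placeholders for whatever the bay actually uses.

    import json
    import requests

    chronos_app = {
        "id": "/chronos",
        "cpus": 0.5,
        "mem": 512,
        "instances": 1,
        "container": {
            "type": "DOCKER",
            "docker": {"image": "mesosphere/chronos",   # placeholder image/tag
                       "network": "HOST"},
        },
        # illustrative flags; adjust to the bay's Mesos/ZooKeeper layout
        "args": ["--master", "zk://10.0.0.5:2181/mesos",
                 "--zk_hosts", "10.0.0.5:2181",
                 "--http_port", "4400"],
    }

    resp = requests.post("http://marathon.example:8080/v2/apps",   # placeholder endpoint
                         data=json.dumps(chronos_app),
                         headers={"Content-Type": "application/json"})
    resp.raise_for_status()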
--- Egor
  From: Kai Qiang Wu <wk...@cn.ibm.com>
 To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org> 
 Sent: Sunday, April 10, 2016 6:59 PM
 Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay
   
#2 seems more flexible, and if it can be proven to "make the SAME mesos bay work with multiple frameworks", that would be great. In other words, one mesos bay should support multiple frameworks.




Thanks


Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park, 
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle! 


From: Hongbin Lu <hongbin...@huawei.com>
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: 11/04/2016 12:06 am
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay



My preference is #1, but I don't feel strongly about excluding #2. I would agree to go with #2 for now and switch back to #1 if there is demand from users. As for Ton's suggestion to push Marathon into the introduced configuration hook, I think it is a good idea.
 
Best regards,
Hongbin
 
From: Ton Ngo [mailto:t...@us.ibm.com] 
Sent: April-10-16 11:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay
 I would agree that #2 is the most flexible option, providing a well defined 
path for additional frameworks such as Kubernetes and Swarm. 
I would suggest that the current Marathon framework be refactored to use this 
new hook, to serve as an example and to be the supported
framework in Magnum. This will also be useful to users who want other 
frameworks but not Marathon.
Ton,


From: Adrian Otto <adrian.o...@rackspace.com>
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: 04/08/2016 08:49 PM
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay


   
   
   
   
On Apr 8, 2016, at 3:15 PM, Hongbin Lu <hongbin...@huawei.com> wrote:

Hi team,
I would like to give an update on this thread. In the last team meeting, we discussed several options for introducing Chronos to our mesos bay:
1. Add Chronos to the mesos bay. With this option, the mesos bay will have two 
mesos frameworks by default (Marathon and Chronos).
2. Add a configuration hook for users to configure additional mesos frameworks, such as Chronos. With this option, the Magnum team doesn't need to maintain extra framework configuration; however, users need to do it themselves.
This is my preference.

Adrian
3. Create a dedicated bay type for Chronos. With this option, we separate Marathon and Chronos into two different bay types. As a result, each bay type becomes easier to maintain, but those two mesos frameworks cannot share resources (a key feature of mesos is to have different frameworks running on the same cluster to increase resource utilization).
Which option do you prefer? Or do you have other suggestions? Advice is welcome.

Best regards,
Hongbin

From: Guz Egor [mailto:guz_e...@yahoo.com] 
Sent: March-28-16 12:19 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

Jay,

just keep in mind that Chronos can be run by Marathon. 

--- 
Egor

From: Jay Lau <jay.lau@gmail.com>
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org> 
Sent: Friday, March 25, 2016 7:01 PM
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

Yes, that's exactly what I want to do: adding the dcos cli and also adding Chronos to the Mesos bay so that it can handle both long-running services and batch jobs.

Thanks,

On Fri, Mar 25, 2016 at 5:25 PM, Michal Rostecki <michal.roste...@gmail.com> 
wrote:

On 03/25/2016 07:57 AM, Jay Lau wrote:

Hi Magnum,

The current mesos bay only include mesos and marathon, it i

Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

2016-03-25 Thread Guz Egor
Jay, I think we should check the license first. I believe DCOS is a commercial product from Mesosphere. And you can use the community version on AWS for free, but only because Mesosphere allows it, and there is no source code.
--- Egor
  From: Jay Lau 
 To: OpenStack Development Mailing List  
 Sent: Thursday, March 24, 2016 11:57 PM
 Subject: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay
   
Hi Magnum,
The current mesos bay only includes mesos and marathon; it would be better to enhance the mesos bay with more components and finally turn it into a DCOS focused on container service based on mesos. For more detail, please refer to https://docs.mesosphere.com/getting-started/installing/installing-enterprise-edition/
Mesosphere now has a template on AWS which can help customers deploy a DCOS on AWS; it would be great if Magnum could also support this based on OpenStack. I filed a bp here: https://blueprints.launchpad.net/magnum/+spec/mesos-dcos , please share your comments if any.
-- 
Thanks,

Jay Lau (Guangya Liu)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Are Floating IPs really needed for all nodes?

2016-03-31 Thread Guz Egor
Hongbin,
That's correct. I was involved in two big OpenStack private cloud deployments, and we never had public IPs. In such a case Magnum shouldn't create any private networks; the operator needs to provide a network id/name, or it should use a default (we used to have network-selection logic in the scheduler).
--- Egor
  From: Hongbin Lu <hongbin...@huawei.com>
 To: Guz Egor <guz_e...@yahoo.com>; OpenStack Development Mailing List (not for 
usage questions) <openstack-dev@lists.openstack.org> 
 Sent: Thursday, March 31, 2016 7:29 AM
 Subject: RE: [openstack-dev] [magnum] Are Floating IPs really needed for all 
nodes?
   
Egor,

I agree with what you said, but I think we need to address the problem that some clouds lack public IP addresses. It is not uncommon for a private cloud to run without public IP addresses, and such clouds have already figured out how to route traffic in and out. In that case, a bay doesn't need to have floating IPs, and the NodePort feature seems to work with the private IP address.

Generally speaking, I think it is useful to have a feature that allows bays to work without public IP addresses. I don't want to end up in a situation where Magnum is unusable because the cloud doesn't have enough public IP addresses.

Best regards,
Hongbin

From: Guz Egor [mailto:guz_e...@yahoo.com]
Sent: March-31-16 12:08 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Are Floating IPs really needed for all 
nodes?

-1
Who is going to run/support this proxy? Also keep in mind that Kubernetes Service/NodePort functionality (http://kubernetes.io/docs/user-guide/services/#type-nodeport) is not going to work without a public IP, and this is a very handy feature.

--- Egor

From: 王华 <wanghua.hum...@gmail.com>
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Sent: Wednesday, March 30, 2016 8:41 PM
Subject: Re: [openstack-dev] [magnum] Are Floating IPs really needed for all 
nodes?

Hi yuanying,
I agree with reducing the usage of floating IPs. But as far as I know, if we need to pull docker images from docker hub on the nodes, floating IPs are needed. To reduce the usage of floating IPs, we can use a proxy: only some nodes have floating IPs, and the other nodes access docker hub through the proxy.

Best Regards,
Wanghua

On Thu, Mar 31, 2016 at 11:19 AM, Eli Qiao
<liyong.q...@intel.com> wrote:

Hi Yuanying,
+1
I think we can add an option for whether to use a floating IP address, since IP addresses are the kind of resource it is not wise to waste.

On 2016年03月31日 10:40, 大塚元央 wrote:
Hi team,
Previously, we had a reason why all nodes should have floating ips [1]. But now we have LoadBalancer features for masters [2] and minions [3], and minions do not necessarily need to have floating ips [4]. I think it's time to remove floating ips from all nodes.
I know we are using floating ips in the gate to get log files, so it's not a good idea to remove floating ips entirely.
I want to introduce a `disable-floating-ips-to-nodes` parameter to the bay model.
Thoughts?
[1]: http://lists.openstack.org/pipermail/openstack-dev/2015-June/067213.html
[2]: https://blueprints.launchpad.net/magnum/+spec/make-master-ha
[3]: https://b

Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

2016-04-19 Thread Guz Egor
Jay,
Mesosphere open sourced DC/OS today under the Apache 2 license:
https://mesosphere.com/blog/2016/04/19/open-source-dcos/
--- Egor
  From: Jay Lau 
 To: OpenStack Development Mailing List (not for usage questions) 
 
 Sent: Friday, March 25, 2016 7:01 PM
 Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay
   
Yes, that's exactly what I want to do: adding the dcos cli and also adding Chronos to the Mesos bay so that it can handle both long-running services and batch jobs.
Thanks,
On Fri, Mar 25, 2016 at 5:25 PM, Michal Rostecki  
wrote:

On 03/25/2016 07:57 AM, Jay Lau wrote:

Hi Magnum,

The current mesos bay only includes mesos and marathon; it would be better to
enhance the mesos bay with more components and finally turn it into a
DCOS focused on container service based on mesos.

For more detail, please refer to
https://docs.mesosphere.com/getting-started/installing/installing-enterprise-edition/

Mesosphere now has a template on AWS which can help customers deploy
a DCOS on AWS; it would be great if Magnum could also support this based on
OpenStack.

I filed a bp here:
https://blueprints.launchpad.net/magnum/+spec/mesos-dcos , please share
your comments if any.

--
Thanks,

Jay Lau (Guangya Liu)



Sorry if I'm missing something, but isn't DCOS closed-source software?

However, the "DCOS cli"[1] seems to be working perfectly with Marathon and 
Mesos installed by any way if you configure it well. I think that the thing 
which can be done in Magnum is to make the experience with "DOCS" tools as easy 
as possible by using open source components from Mesosphere.

Cheers,
Michal

[1] https://github.com/mesosphere/dcos-cli

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum]Cache docker images

2016-04-19 Thread Guz Egor
Kevin,
I agree this is not an ideal solution, but it's probably the best option to deal with public cloud "stability" (e.g. we switched to the same model at AWS and got a really good boost in provisioning time and a reduction in the number of failures during cluster provisioning). And if an application needs a guaranteed "fresh" image, it uses the force-pull option in Marathon.
--- Egor
  From: "Fox, Kevin M" 
 To: OpenStack Development Mailing List (not for usage questions) 
 
 Sent: Tuesday, April 19, 2016 1:04 PM
 Subject: Re: [openstack-dev] [Magnum]Cache docker images
   
I'm kind of uncomfortable as an op with the prebundled stuff. How do you upgrade things when needed if there is no way to pull updated images from a central place?

Thanks,
Kevin
From: Hongbin Lu [hongbin...@huawei.com]
Sent: Tuesday, April 19, 2016 11:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum]Cache docker images

Eli,
The approach of pre-pulling docker images has a problem: it only works for a specific docker storage driver. In comparison, the tar file approach is portable across different storage drivers.

Best regards,
Hongbin

From: taget
[mailto:qiaoliy...@gmail.com]
Sent: April-19-16 4:26 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Magnum]Cache docker images

hi, hello again

I believe you are talking about this bp: https://blueprints.launchpad.net/magnum/+spec/cache-docker-images
If so, ignore my previous reply; that may be another topic, about solving the limited-network problem.

I think you are on the right track for building docker images, but the image can only bootstrap via cloud-init; without cloud-init the container image tar files are not loaded at all, and this may not be the best way.

I'd suggest that maybe the best way is to pull the docker images while building the atomic image. Per my understanding, the image build process mounts the image read/write to some tmp directory and chroots into that directory, so we can do some custom operations there.

I can give it a try in the build process (I guess rpm-ostree should support some hook scripts).

On 2016年04月19日 11:41, Eli Qiao wrote:
@wanghua

I think there was some discussion already; check
https://blueprints.launchpad.net/magnum/+spec/support-private-registry
and https://blueprints.launchpad.net/magnum/+spec/allow-user-softwareconfig

On 2016年04月19日 10:57, 王华 wrote:
Hi all,

We want to eliminate pulling docker images over the Internet on bay provisioning. There are two problems with that approach:
1. Pulling docker images over the Internet is slow and fragile.
2. Some clouds don't have external Internet access.
It has been suggested to build all the required images into the cloud images to resolve the issue. Here is a solution: we export the docker images as tar files and put them into a directory in the image when we build the image, and we add scripts to load the tar files in cloud-init, so that we don't need to download the docker images. Any advice on this solution, or any better solution?

Regards,
Wanghua
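
As a rough sketch of that tar-file approach (paths and image names are only placeholders, and the docker CLI is assumed to be on PATH): at image-build time each required image is exported with "docker save", and a small script run from cloud-init loads everything it finds at boot.

    import glob
    import subprocess

    IMAGE_DIR = '/opt/magnum/docker-images'      # placeholder dir baked into the image

    def export_images(images, image_dir=IMAGE_DIR):
        """Run at image-build time: dump each required image to a tar file."""
        for image in images:
            tar_name = image.replace('/', '_').replace(':', '_') + '.tar'
            subprocess.check_call(['docker', 'save', '-o',
                                   '%s/%s' % (image_dir, tar_name), image])

    def load_images(image_dir=IMAGE_DIR):
        """Run from cloud-init on first boot: load every bundled tar file."""
        for tar_path in glob.glob('%s/*.tar' % image_dir):
            subprocess.check_call(['docker', 'load', '-i', tar_path])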


__OpenStack
 Development Mailing List (not for usage questions)Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Best Regards, Eli Qiao (乔立勇)
Intel OTC China


__OpenStack
 Development Mailing List (not for usage questions)Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Best Regards, Eli Qiao (乔立勇)
__
OpenStack Development Mailing List (not for usage questions)