Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Adrian Otto
Dean, Thanks for your reply. > On Mar 20, 2017, at 2:18 PM, Dean Troyer wrote: > > On Mon, Mar 20, 2017 at 3:37 PM, Adrian Otto > wrote: >> the argument is actually the service name, such as “ec2”. This is >> the same way the openstack cli

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Dean Troyer
On Mon, Mar 20, 2017 at 3:37 PM, Adrian Otto wrote: > the argument is actually the service name, such as “ec2”. This is > the same way the openstack cli works. Perhaps there is another tool that you > are referring to. Have I misunderstood something? I am going to

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Adrian Otto
Jay, On Mar 20, 2017, at 12:35 PM, Jay Pipes > wrote: On 03/20/2017 03:08 PM, Adrian Otto wrote: Team, Stephen Watson has been working on a magnum feature to add magnum commands to the openstack client by implementing a plugin:

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Adrian Otto
That might be burdensome for interactive use, so I wanted to find something shorter that would still make sense. Thanks, Adrian > > Best regards, > Hongbin > >> -Original Message- >> From: Jay Pipes [mailto:jaypi...@gmail.com] >> Sent: March-20-17 3:35 PM

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Hongbin Lu
which is the approach adopted by AWS. Best regards, Hongbin > -Original Message- > From: Jay Pipes [mailto:jaypi...@gmail.com] > Sent: March-20-17 3:35 PM > To: openstack-dev@lists.openstack.org > Subject: Re: [openstack-dev] [magnum][osc] What name to use for magnum > comman

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Jay Pipes
On 03/20/2017 03:08 PM, Adrian Otto wrote: Team, Stephen Watson has been working on a magnum feature to add magnum commands to the openstack client by implementing a plugin: https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclient+osc In review of this work, a

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Adrian Otto
> Thanks, > Kevin > > From: Adrian Otto [adrian.o...@rackspace.com] > Sent: Monday, March 20, 2017 12:08 PM > To: OpenStack Development Mailing List (not for usage questions) > Subject: [openstack-dev] [magnum][osc] What name to use for magnum commands > in osc? >

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Fox, Kevin M
What about coe? Thanks, Kevin From: Adrian Otto [adrian.o...@rackspace.com] Sent: Monday, March 20, 2017 12:08 PM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [magnum][osc] What name to use for magnum commands

[openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Adrian Otto
Team, Stephen Watson has been working on a magnum feature to add magnum commands to the openstack client by implementing a plugin: https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclient+osc In review of this work, a question has resurfaced as to what the client

Re: [openstack-dev] [magnum] [ocata] after installation, magnum is not found

2017-03-09 Thread Spyros Trigazis
Hi, You haven't installed the magnum client. The service is running but you don't have the client. You need to install the client, then create and source the RC file. Spyros On 9 March 2017 at 07:49, Yu Wei wrote: > Hi guys, > > After installing openstack ocata magnum,
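Spyros's advice can be sketched as follows; the RC file name and the service-list check are illustrative assumptions, and the exact steps depend on your distribution and cloud:

```shell
# Install the magnum CLI client (PyPI package python-magnumclient).
pip install python-magnumclient

# Create or download your credentials (RC) file from Horizon
# (Project -> Access & Security -> Download OpenStack RC File),
# then source it so the client can authenticate against Keystone.
source admin-openrc.sh

# Sanity check: the client should now reach the magnum API.
magnum service-list
```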

[openstack-dev] [magnum] [ocata] after installation, magnum is not found

2017-03-08 Thread Yu Wei
Hi guys, After installing openstack ocata magnum, magnum is not found. However, magnum-api and magnum-conductor are running well. How could I fix this problem? Is this a bug in ocata? [root@controller bin]# systemctl status openstack-magnum-api.service openstack-magnum-conductor.service

Re: [openstack-dev] [magnum][kuryr] python-k8sclient vs client-python (was Fwd: client-python Beta Release)

2017-02-10 Thread Davanum Srinivas
Dear Magnum Team, Please see review: https://review.openstack.org/#/c/432421/ It depends on the requirements review: https://review.openstack.org/#/c/432409/ Thanks, Dims On Mon, Jan 30, 2017 at 11:54 AM, Antoni Segura Puimedon wrote: > > > On Thu, Jan 26, 2017 at 12:41

Re: [openstack-dev] [magnum] devstack/heat problem with master_wait_condition

2017-02-10 Thread Syed Armani
Hello Stanisław, Were you able to solve this issue? Cheers, Syed On Wed, Aug 26, 2015 at 2:14 PM, Sergey Kraynev wrote: > Hi Stanislaw, > > Your host with Fedora should have special config file, which will send > signal to WaitCondition. > For good example please take a

Re: [openstack-dev] [magnum][kuryr] python-k8sclient vs client-python (was Fwd: client-python Beta Release)

2017-01-30 Thread Antoni Segura Puimedon
On Thu, Jan 26, 2017 at 12:41 PM, Davanum Srinivas wrote: > Team, > > A bit of history, we had a client generated from swagger definition for a > while in Magnum, we plucked it out into python-k8sclient which then got > used by fuel-ccp, kuryr etc. Recently the kubernetes team

[openstack-dev] [magnum] PTL Candidacy for Pike

2017-01-28 Thread Adrian Otto
Team, I announce my candidacy for, and respectfully request your support to serve as your Magnum PTL again for the Pike release cycle. Here are my achievements and OpenStack experience that make me the best choice for this role: * Founder of the OpenStack Containers Team * Established

[openstack-dev] [magnum][kuryr] python-k8sclient vs client-python (was Fwd: client-python Beta Release)

2017-01-26 Thread Davanum Srinivas
Team, A bit of history, we had a client generated from swagger definition for a while in Magnum, we plucked it out into python-k8sclient which then got used by fuel-ccp, kuryr etc. Recently the kubernetes team started an effort called client-python. Please see 1.0.0b1 announcement. * It's on

Re: [openstack-dev] [magnum] CoreOS template v2

2017-01-25 Thread Kevin Lefevre
overhead to deprecate the old version and roll out the new version. > > > > Best regards, > > Hongbin > > > > From: Spyros Trigazis [mailto:strig...@gmail.com] > Sent: January-24-17 3:47 PM > To: OpenStack Development Mailing List (not for usage questions) >

Re: [openstack-dev] [magnum] CoreOS template v2

2017-01-24 Thread Spyros Trigazis
but it saves your overhead to deprecate the old version and roll > out the new version. > > > > Best regards, > > Hongbin > > > > *From:* Spyros Trigazis [mailto:strig...@gmail.com] > *Sent:* January-24-17 3:47 PM > *To:* OpenStack Development Mailing L

Re: [openstack-dev] [magnum] CoreOS template v2

2017-01-24 Thread Hongbin Lu
but it saves your overhead to deprecate the old version and roll out the new version. Best regards, Hongbin From: Spyros Trigazis [mailto:strig...@gmail.com] Sent: January-24-17 3:47 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [magnum] CoreOS template

Re: [openstack-dev] [magnum] CoreOS template v2

2017-01-24 Thread Spyros Trigazis
Hi. IMO, you should add a BP and start by adding a v2 driver in /contrib. Cheers, Spyros On Jan 24, 2017 20:44, "Kevin Lefevre" wrote: > Hi, > > The CoreOS template is not really up to date and in sync with upstream > CoreOS « Best Practice »

[openstack-dev] [magnum] CoreOS template v2

2017-01-24 Thread Kevin Lefevre
Hi, The CoreOS template is not really up to date and in sync with upstream CoreOS « Best Practice » (https://github.com/coreos/coreos-kubernetes), it is more a port of the fedora atomic template, but CoreOS has its own Kubernetes deployment method. I’d like to implement the changes to sync

[openstack-dev] [magnum] PTL nomination is open until Jan 29

2017-01-23 Thread Hongbin Lu
Hi all, I sent this email to encourage you to run for the Magnum PTL for Pike [1]. I think most of the audience are in this ML so I sent the message here. First, I would like to thank you for your interest in the Magnum project. It is great to work with you to build the project and make it

[openstack-dev] [Magnum] Feature freeze coming today

2017-01-23 Thread Adrian Otto
Team, I will be starting our feature freeze today. We have a few more patches to consider for merge before we enter the freeze. I’ll let you all know when each has been considered, and we are ready to begin the freeze. Thanks, Adrian

Re: [openstack-dev] [magnum] Managing cluster drivers as individual distro packages

2017-01-03 Thread Adrian Otto
Subject: [openstack-dev] [magnum] Managing cluster drivers as individual distro packages Hi all, In magnum, we implement cluster drivers for the different combinations of COEs (Container Orchestration Engines) and Operating Systems. The reasoning behind it is to b

Re: [openstack-dev] [magnum-ui][horizon] use json-schema-form on workflow

2016-11-29 Thread Shuu Mutou
<openstack-dev@lists.openstack.org> > Subject: Re: [openstack-dev] [magnum-ui][horizon] use json-schema-form on > workflow > > From a quick scan, it looks like you're using it several times in the same > workflow? Why not just use the existing tabs type and create a single form? > Have a look

Re: [openstack-dev] [magnum] Managing cluster drivers as individual distro packages

2016-11-26 Thread Yatin Karel
Sent: Friday, November 18, 2016 8:04 PM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [magnum] Managing cluster drivers as individual distro packages Hi all, In magnum, we implement cluster drivers for the different combinations of COEs (Container

Re: [openstack-dev] [magnum-ui][horizon] use json-schema-form on workflow

2016-11-25 Thread Rob Cresswell
From a quick scan, it looks like you're using it several times in the same workflow? Why not just use the existing tabs type and create a single form? Have a look at https://review.openstack.org/#/c/348969/ for reference. Rob On 25 Nov 2016 08:28, "Shuu Mutou"

[openstack-dev] [magnum-ui][horizon] use json-schema-form on workflow

2016-11-25 Thread Shuu Mutou
Hi Thai, We're trying to use json-schema-form in workflow, but 'required' attribute doesn't work. So we can push 'create' button without input. Could you check following patch? https://review.openstack.org/#/c/400701/ best regards, Shu Muto

Re: [openstack-dev] [magnum] Managing cluster drivers as individual distro packages

2016-11-22 Thread Ricardo Rocha
> Date: Friday, November 18, 2016 at 8:34 AM > To: "OpenStack Development Mailing List (not for usage questions)" > <openstack-dev@lists.openstack.org> > Subject: [openstack-dev] [magnum] Managing cluster drivers as individual > distro packages > > Hi all, >

Re: [openstack-dev] [Magnum] Question of k8s multiple master support.

2016-11-21 Thread 渥美 慶彦
November 22, 2016 7:40 AM To: openstack-dev@lists.openstack.org Subject: [openstack-dev] [Magnum] Question of k8s multiple master support. Hi all, Can I create the bay of Kubernetes which has multiple master? I'm able to create the bay of Kubernetes single master, but failed in creatin

Re: [openstack-dev] [Magnum] Question of k8s multiple master support.

2016-11-21 Thread Yatin Karel
for further queries. Thanks and Regards Yatin Karel From: 渥美 慶彦 [atsumi.yoshih...@po.ntts.co.jp] Sent: Tuesday, November 22, 2016 7:40 AM To: openstack-dev@lists.openstack.org Subject: [openstack-dev] [Magnum] Question of k8s multiple master support. Hi all, Can

Re: [openstack-dev] [Magnum] Question of k8s multiple master support.

2016-11-21 Thread 峰北红
Hi atsumi, Multiple masters must be created together with a load balancer. Create a cluster template with --master-lb-enabled. magnum cluster-template-create --name k8s-cluster-template --image-id test.qcow2 \ --keypair-id testkey \
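The command above is cut off in the archive; a full invocation might look like the sketch below. The flavor, network, and image names are site-specific assumptions, not values from the original message:

```shell
# Create a template with the load-balancer label enabled so that
# clusters built from it can have more than one master node.
# (Image, keypair, network and flavor names below are illustrative.)
magnum cluster-template-create --name k8s-cluster-template \
  --image-id test.qcow2 \
  --keypair-id testkey \
  --external-network-id public \
  --flavor-id m1.small \
  --coe kubernetes \
  --master-lb-enabled

# Then request a multi-master cluster from that template.
magnum cluster-create --name k8s-cluster \
  --cluster-template k8s-cluster-template \
  --master-count 2 \
  --node-count 3
```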

[openstack-dev] [Magnum] Question of k8s multiple master support.

2016-11-21 Thread 渥美 慶彦
Hi all, Can I create a Kubernetes bay which has multiple masters? I can create a single-master Kubernetes bay, but failed to create a multi-master one. I'm using CoreOS-1010.3.0, Magnum 2.0.1. Best regards, --

Re: [openstack-dev] [magnum] Managing cluster drivers as individual distro packages

2016-11-18 Thread Drago Rosson
Date: Friday, November 18, 2016 at 8:34 AM To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>> Subject: [openstack-dev] [magnum] Managing cluster driver

[openstack-dev] [magnum] Managing cluster drivers as individual distro packages

2016-11-18 Thread Spyros Trigazis
Hi all, In magnum, we implement cluster drivers for the different combinations of COEs (Container Orchestration Engines) and Operating Systems. The reasoning behind it is to better encapsulate driver-specific logic and to allow operators deploy custom drivers with their deployment specific

Re: [openstack-dev] [Magnum] New Core Reviewers

2016-11-14 Thread Grant, Jaycen V
(not for usage questions) <openstack-dev@lists.openstack.org> Subject: Re: [openstack-dev] [Magnum] New Core Reviewers Hi All, Thanks all for your votes and support for this new Role. It is an honour to work with all of you and will continue my participation in the community. Regards,

Re: [openstack-dev] [Magnum] New Core Reviewers

2016-11-13 Thread Yatin Karel
To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Magnum] New Core Reviewers Jaycen and Yatin, You have each been added as new core reviewers. Congratulations to you both, and thanks for stepping up to take on this new role! Cheers, Adrian

Re: [openstack-dev] [Magnum] New Core Reviewers

2016-11-13 Thread Adrian Otto
Jaycen and Yatin, You have each been added as new core reviewers. Congratulations to you both, and thanks for stepping up to take on this new role! Cheers, Adrian > On Nov 7, 2016, at 11:06 AM, Adrian Otto wrote: > > Magnum Core Team, > > I propose Jaycen Grant

Re: [openstack-dev] [Magnum] New Core Reviewers

2016-11-13 Thread taget
+1 for both. On 2016年11月08日 03:06, Adrian Otto wrote: Magnum Core Team, I propose Jaycen Grant (jvgrant) and Yatin Karel (yatin) as new Magnum Core Reviewers. Please respond with your votes. Thanks, -- Best Regards, Eli Qiao (乔立勇), Intel OTC.

Re: [openstack-dev] [Magnum] New Core Reviewers

2016-11-13 Thread Kai Qiang Wu
Mailing List (not for usage questions) < > openstack-dev@lists.openstack.org> > Subject: [openstack-dev] [Magnum] New Core Reviewers > > Magnum Core Team, > > I propose Jaycen Grant (jvgrant) and Yatin Karel (yatin) as new Magnum > Core Reviewers. Please r

Re: [openstack-dev] [Magnum] New Core Reviewers

2016-11-08 Thread Kumari, Madhuri
+1 for both. -Original Message- From: Adrian Otto [mailto:adrian.o...@rackspace.com] Sent: Tuesday, November 8, 2016 12:36 AM To: OpenStack Development Mailing List (not for usage questions) <openstack-dev@lists.openstack.org> Subject: [openstack-dev] [Magnum] New Core Reviewers

Re: [openstack-dev] [Magnum] New Core Reviewers

2016-11-08 Thread Spyros Trigazis
Sent: November-07-16 2:06 PM >> > To: OpenStack Development Mailing List (not for usage questions) >> > Subject: [openstack-dev] [Magnum] New Core Reviewers >> > >> > Magnum Core Team, >> > >> > I

Re: [openstack-dev] [Magnum] New Core Reviewers

2016-11-07 Thread Yuanying OTSUKA
-Original Message- > > From: Adrian Otto [mailto:adrian.o...@rackspace.com] > > Sent: November-07-16 2:06 PM > > To: OpenStack Development Mailing List (not for usage questions) > > Subject: [openstack-dev] [Magnum] New Core Reviewers > > > > Magnum Co

Re: [openstack-dev] [Magnum] New Core Reviewers

2016-11-07 Thread Hongbin Lu
Development Mailing List (not for usage questions) > Subject: [openstack-dev] [Magnum] New Core Reviewers > > Magnum Core Team, > > I propose Jaycen Grant (jvgrant) and Yatin Karel (yatin) as new Magnum > Core Reviewers. Please respond with your votes.

[openstack-dev] [Magnum] New Core Reviewers

2016-11-07 Thread Adrian Otto
Magnum Core Team, I propose Jaycen Grant (jvgrant) and Yatin Karel (yatin) as new Magnum Core Reviewers. Please respond with your votes. Thanks, Adrian Otto __ OpenStack Development Mailing List (not for usage questions)

Re: [openstack-dev] [magnum]Is internet-access necessary for Magnum + CoreOS?

2016-11-02 Thread Rikimaru Honjo
Hi Yuanying, Thank you for explaining. I will consider changing my environment or OS. Regards, On 2016/11/01 19:13, Yuanying OTSUKA wrote: Hi, Rikimaru. Currently, the k8s-CoreOS driver doesn't have a way to disable internet access. But the k8s-fedora driver has. See the blueprint below. *

Re: [openstack-dev] [magnum]Is internet-access necessary for Magnum + CoreOS?

2016-11-01 Thread Yuanying OTSUKA
Hi, Rikimaru. Currently, the k8s-CoreOS driver doesn't have a way to disable internet access. But the k8s-fedora driver has. See the blueprint below. * https://blueprints.launchpad.net/magnum/+spec/support-insecure-registry Maybe you can bring this feature to the k8s-coreos driver. Thanks -yuanying

[openstack-dev] [magnum]Is internet-access necessary for Magnum + CoreOS?

2016-11-01 Thread Rikimaru Honjo
Hi all, Can I use magnum + CoreOS in an environment that cannot access the internet? I'm trying it, but CoreOS often accesses "quay.io". Please share any knowledge you have about this. I'm using CoreOS, kubernetes, Magnum 2.0.1. Regards, -- Rikimaru Honjo

Re: [openstack-dev] [magnum]What version of coreos should I use for stable/mitaka?

2016-10-26 Thread Rikimaru Honjo
Hi Hongbin, Thanks a lot! I try to use the version 1030.0.0! Best regards, On 2016/10/25 22:48, Hongbin Lu wrote: As recorded in this bug report [1]. The version 1030.0.0 was reported to work with mitaka. [1] https://bugs.launchpad.net/magnum/+bug/1615854 On Mon, Oct 24, 2016 at 3:58 AM,

Re: [openstack-dev] [magnum]What version of coreos should I use for stable/mitaka?

2016-10-25 Thread Hongbin Lu
As recorded in this bug report [1]. The version 1030.0.0 was reported to work with mitaka. [1] https://bugs.launchpad.net/magnum/+bug/1615854 On Mon, Oct 24, 2016 at 3:58 AM, Rikimaru Honjo < honjo.rikim...@po.ntts.co.jp> wrote: > Hello, > > I'm using magnum which is stable/mitaka. > And, I
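To use that CoreOS version with stable/mitaka Magnum, the image has to be registered in Glance with the os_distro property that Magnum uses to pick a driver. A minimal sketch follows; the local file name is an assumption:

```shell
# Register a CoreOS 1030.0.0 image in Glance for use with Magnum.
# Magnum selects its cluster driver via the os_distro image property,
# so it must be set to "coreos" for the CoreOS templates to apply.
glance image-create --name coreos-1030.0.0 \
  --disk-format qcow2 \
  --container-format bare \
  --property os_distro=coreos \
  --file coreos_production_openstack_image.img
```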

[openstack-dev] [Magnum] Magnum Sessions for Barcelona Summit Attendees

2016-10-24 Thread Adrian Otto
Team, For those of you attending the Barcelona summit this week, please add the following sessions to your calendar, in addition to the Containers track:

[openstack-dev] [magnum]What version of coreos should I use for stable/mitaka?

2016-10-24 Thread Rikimaru Honjo
Hello, I'm using magnum which is stable/mitaka. And, I failed to create a bay due to the following bug. (I chose coreos as OS, and kubernetes as COE.) https://bugs.launchpad.net/magnum/+bug/1605554 But I'd like to keep using stable/mitaka. What version of coreos should I use? Best regards, --

Re: [openstack-dev] [Magnum] Draft logo & a sneak peek

2016-10-20 Thread Josh Berkus
On 10/19/2016 12:12 PM, Hongbin Lu wrote: > Please find below for the draft of Magnum mascot. > Huh. I was expecting a big bottle of bubbly. ;-) -- -- Josh Berkus Project Atomic Red Hat OSAS

Re: [openstack-dev] [Magnum][Kuryr][Keystone] Securing services in container orchestration

2016-10-20 Thread Adam Young
On 10/09/2016 10:57 PM, Ton Ngo wrote: Hi Keystone team, We have a scenario that involves securing services for container and this has turned out to be rather difficult to solve, so we would like to bring to the larger team for ideas. Examples of this scenario: 1. Kubernetes cluster: To

[openstack-dev] [Magnum] Draft logo & a sneak peek

2016-10-19 Thread Hongbin Lu
Hi team, Please find below for the draft of Magnum mascot. Best regards, Hongbin From: Heidi Joy Tretheway [mailto:heidi...@openstack.org] Sent: October-19-16 2:54 PM To: Hongbin Lu Subject: Your draft logo & a sneak peek Hi Hongbin, We're excited to show you the draft version of your project

Re: [openstack-dev] [Magnum]

2016-10-10 Thread Qiao, Liyong
not for usage questions)" <openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>> Subject: Re: [openstack-dev] [Magnum] Hi zhangshuai, We can only tell from the screenshots that the k8s master node failed. You will likely need to use the CLI for further debuggi

[openstack-dev] [Magnum][Kuryr][Keystone] Securing services in container orchestration

2016-10-09 Thread Ton Ngo
Hi Keystone team, We have a scenario that involves securing services for container and this has turned out to be rather difficult to solve, so we would like to bring to the larger team for ideas. Examples of this scenario: 1. Kubernetes cluster: To support the load balancer and

Re: [openstack-dev] [Magnum] What is Magnum's behaviour wrt Images in Glance and which ones are downloaded to COE ?

2016-10-05 Thread Waines, Greg
openstack-dev@lists.openstack.org> Subject: Re: [openstack-dev] [Magnum] What is Magnum's behaviour wrt Images in Glance and which ones are downloaded to COE ? On 2016年07月20日 00:35, Waines, Greg wrote: I created a Docker-Swarm bay and successfully launched a simple container. What is the behavior of MAG

Re: [openstack-dev] [magnum] Swarm Mode -- to come?

2016-10-03 Thread Fabrizio Soppelsa
> <openstack-dev@lists.openstack.org> > Date: 10/02/2016 11:26 AM > Subject: [openstack-dev] [magnum] Swarm Mode -- to come? > > > > > Hello, > how about the (newest) Swarm Mode (Docker 1.12+) support in Magnum? > I don’

Re: [openstack-dev] Magnum:

2016-10-03 Thread Spyros Trigazis
Hi Kamal. On 3 October 2016 at 09:33, kamalakannan sanjeevan < chirukamalakan...@gmail.com> wrote: > Hi All, > > I have installed Mitaka on ubuntu14.04. I have tried an all in one > installation along with cinder using dd and then creating the > cinder-volumes at /dev/loop2. The network neutron

[openstack-dev] Magnum:

2016-10-03 Thread kamalakannan sanjeevan
Hi All, I have installed Mitaka on ubuntu14.04. I have tried an all in one installation along with cinder using dd and then creating the cinder-volumes at /dev/loop2. The network neutron is using linuxbridge with vxlan. I am able to create instances that do not have internet reachability for

Re: [openstack-dev] [magnum] Swarm Mode -- to come?

2016-10-03 Thread Ton Ngo
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org> Date: 10/02/2016 11:26 AM Subject: [openstack-dev] [magnum] Swarm Mode -- to come? Hello, how about the (newest) Swarm Mode (Docker 1.12+) supp

[openstack-dev] [magnum] Swarm Mode -- to come?

2016-10-02 Thread Fabrizio Soppelsa
Hello, how about the (newest) Swarm Mode (Docker 1.12+) support in Magnum? I don't find any blueprint on Launchpad on the matter yet; is this going to be worked on? Ta, Fabrizio.

Re: [openstack-dev] [magnum] Fedora Atomic image that supports kubernetes external load balancer (for stable/mitaka)

2016-09-28 Thread Hongbin Lu
> > > > *From: *Ton Ngo <t...@us.ibm.com> > *Reply-To: *"OpenStack Development Mailing List (not for usage > questions)" <openstack-dev@lists.openstack.org> > *Date: *Tuesday, September 27, 2016 at 10:58 PM > *To: *"OpenStack Development Mailing List (not fo

Re: [openstack-dev] [magnum] Fedora Atomic image that supports kubernetes external load balancer (for stable/mitaka)

2016-09-28 Thread Steven Dake (stdake)
Date: Tuesday, September 27, 2016 at 10:58 PM To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org> Subject: Re: [openstack-dev] [magnum] Fedora Atomic image that supports kubernetes external load balancer (for stable/mitak

Re: [openstack-dev] [magnum] Fedora Atomic image that supports kubernetes external load balancer (for stable/mitaka)

2016-09-28 Thread Ton Ngo
. Ton, From: "Steven Dake (stdake)" <std...@cisco.com> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org> Date: 09/27/2016 10:18 PM Subject: Re: [openstack-dev] [magnum] F

Re: [openstack-dev] [magnum] Fedora Atomic image that supports kubernetes external load balancer (for stable/mitaka)

2016-09-27 Thread Steven Dake (stdake)
"OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org> Subject: [openstack-dev] [magnum] Fedora Atomic image that supports kubernetes external load balancer (for stable/mitaka) Does anyone have a pointer to a Fedora Atomic image tha

[openstack-dev] [Magnum] PTL Candidacy

2016-09-16 Thread Adrian Otto
I announce my candidacy for, and respectfully request your support to serve as your Magnum PTL for the Ocata release cycle. Here are my achievements and OpenStack experience that make me the best choice for this role: * Founder of the OpenStack Containers Team * Established vision and

[openstack-dev] [magnum] PTL candidacy

2016-09-13 Thread Hongbin Lu
Hi, I would like to announce my candidacy for re-election as Magnum PTL. My involvement in Magnum began in December 2014, in which the project was at a very early stage. Since then, I have been working with the team to explore the roadmap, implement and refine individual components, and

[openstack-dev] [magnum] Fedora Atomic image that supports kubernetes external load balancer (for stable/mitaka)

2016-09-08 Thread Dane Leblanc (leblancd)
Does anyone have a pointer to a Fedora Atomic image that works with stable/mitaka Magnum, and supports the kubernetes external load balancer feature [1]? I'm trying to test the kubernetes external load balancer feature with stable/mitaka Magnum. However, when I try to bring up a load-balanced

Re: [openstack-dev] [Magnum] Release schedule of magnumclient

2016-08-24 Thread Hongbin Lu
openstack-dev@lists.openstack.org > Cc: Haruhiko Katou > Subject: Re: [openstack-dev] [Magnum] Release schedule of magnumclient > > Hi Hongbin, > > Also, Magnum-UI will meet "Soft StringFreeze" next week. > > If the following BP [1] can reach release in this cyc

Re: [openstack-dev] [Magnum] Release schedule of magnumclient

2016-08-24 Thread Shuu Mutou
pec/rename-bay-to-cluster Thanks, Shu > -Original Message- > From: Hongbin Lu [mailto:hongbin...@huawei.com] > Sent: Wednesday, August 24, 2016 6:32 AM > To: openstack-dev@lists.openstack.org > Subject: [openstack-dev] [Magnum] Release schedule of magnumclient > > H

[openstack-dev] [Magnum] Release schedule of magnumclient

2016-08-23 Thread Hongbin Lu
Hi team, As discussed at the team meeting, Aug 29 - Sep 02 (next week) is the final release window for client libraries [1]. We are going to freeze the python-magnumclient repo to prepare the client release. If you have *client* patches for the newton release, please submit them by the end of this week.

[openstack-dev] [Magnum] about word "baymodel"

2016-08-19 Thread Shuu Mutou
Hi folks, I recognize that "baymodel" or "Baymodel" is correct, and "bay model" or "BayModel" is not correct. Magnum-UI has been implemented using the former since Rob's last patch. Before the implementation, Rob seemed to ask on IRC. What is the truth? And please check

Re: [openstack-dev] [Magnum] Next auto-scaling feature design?

2016-08-19 Thread hie...@vn.fujitsu.com
Subject: Re: [openstack-dev] [Magnum] Next auto-scaling feature design? We have had numerous discussions on this topic, including a presentation and a design session in Tokyo, but we have not really arrived at a consensus yet. Part of the problem is that auto-scaling at the container level is still

Re: [openstack-dev] [Magnum] Next auto-scaling feature design?

2016-08-18 Thread Ton Ngo
Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org> Date: 08/18/2016 12:26 PM Subject: Re: [openstack-dev] [Magnum] Next auto-scaling feature design? > -Original Message- > From: hie...@vn.fujitsu.com [mailto:hie...@vn.fujitsu

Re: [openstack-dev] [Magnum] Next auto-scaling feature design?

2016-08-18 Thread Hongbin Lu
> -Original Message- > From: hie...@vn.fujitsu.com [mailto:hie...@vn.fujitsu.com] > Sent: August-18-16 3:57 AM > To: openstack-dev@lists.openstack.org > Subject: [openstack-dev] [Magnum] Next auto-scaling feature design? > > Hi Magnum folks, > > I have

Re: [openstack-dev] [Magnum] Next auto-scaling feature design?

2016-08-18 Thread hie...@vn.fujitsu.com
-Original Message- From: Tim Bell [mailto:tim.b...@cern.ch] Sent: Thursday, August 18, 2016 3:19 PM To: OpenStack Development Mailing List (not for usage questions) <openstack-dev@lists.openstack.org> Subject: Re: [openstack-dev] [Magnum] Next auto-scaling feature design? > On 18 Aug

Re: [openstack-dev] [Magnum] Next auto-scaling feature design?

2016-08-18 Thread Tim Bell
> On 18 Aug 2016, at 09:56, hie...@vn.fujitsu.com wrote: > > Hi Magnum folks, > > I have some interest in our auto scaling features and am currently testing > some container monitoring solutions such as heapster, telegraf and > prometheus. I have seen the PoC session in cooperation with Senlin

[openstack-dev] [Magnum] Next auto-scaling feature design?

2016-08-18 Thread hie...@vn.fujitsu.com
Hi Magnum folks, I have some interest in our auto scaling features and am currently testing some container monitoring solutions such as heapster, telegraf and prometheus. I have seen the PoC session in cooperation with Senlin in Austin and have some questions regarding this design: - We have

[openstack-dev] [Magnum] Using common tooling for API docs

2016-08-12 Thread Hongbin Lu
Hi team, As mentioned in the email below, Magnum is not using common tooling for generating API docs, so we are excluded from the common navigation of OpenStack API. I think we need to prioritize the work to fix it. BTW, I notice there is a WIP patch [1] for generating API docs by using

Re: [openstack-dev] [magnum] ssh and http connection lost

2016-08-11 Thread Ton Ngo
Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org> Date: 08/11/2016 10:52 AM Subject: Re: [openstack-dev] [magnum] ssh and http connection lost Hi, I want to use a docker swarm bay. I must configure it because of our network system configuration, docke

Re: [openstack-dev] [magnum] ssh and http connection lost

2016-08-11 Thread yasemin
OpenStack Development Mailing List (not for usage questions)" > <openstack-dev@lists.openstack.org> > Date: 08/11/2016 03:39 AM > Subject: Re: [openstack-dev] [magnum] ssh and http connection lost > > > > > > docker0 bridge is 172.24.. network, it is default.

Re: [openstack-dev] [magnum] ssh and http connection lost

2016-08-11 Thread Ton Ngo
) <yasemin.demi...@tubitak.gov.tr> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org> Date: 08/11/2016 03:39 AM Subject: Re: [openstack-dev] [magnum] ssh and http connection lost docker0 bridge is 172.24..

Re: [openstack-dev] [magnum] ssh and http connection lost

2016-08-11 Thread BİLGEM BTE
docker0 bridge is 172.24.. network, it is default. How can I configure the network settings? - Original Message - From: "taget" <qiaoliy...@gmail.com> To: openstack-dev@lists.openstack.org Sent: Thursday, 11 August 2016 11:26:07 Subject: Re: [openstack-d

Re: [openstack-dev] [magnum] ssh and http connection lost

2016-08-11 Thread taget
Seems there's no relationship with the Magnum service at all; you may need to figure out why you cannot access your test machine. As far as I know, bay-create won't block your network access. - Eli. On 2016年08月11日 16:13, Yasemin DEMİRAL (BİLGEM BTE) wrote: Hi I installed magnum on

[openstack-dev] [magnum] ssh and http connection lost

2016-08-11 Thread BİLGEM BTE
Hi, I installed magnum on devstack successfully. I tried the "magnum bay-create --name k8sbay --baymodel k8sbaymodel --node-count 1" command in the guide, but I lost my ssh connection to my test machine and the http connection for horizon. What can I do? I cannot connect to the terminal

Re: [openstack-dev] [magnum][heat] 2 million requests / sec, 100s of nodes

2016-08-09 Thread Zane Bitter
On 07/08/16 19:52, Clint Byrum wrote: Excerpts from Steve Baker's message of 2016-08-08 10:11:29 +1200: On 05/08/16 21:48, Ricardo Rocha wrote: Hi. Quick update is 1000 nodes and 7 million reqs/sec :) - and the number of requests should be higher but we had some internal issues. We have a

Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-09 Thread Ricardo Rocha
On Tue, Aug 9, 2016 at 10:00 PM, Clint Byrum wrote: > Excerpts from Ricardo Rocha's message of 2016-08-08 11:51:00 +0200: >> Hi. >> >> On Mon, Aug 8, 2016 at 1:52 AM, Clint Byrum wrote: >> > Excerpts from Steve Baker's message of 2016-08-08 10:11:29 +1200: >>

Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-09 Thread Clint Byrum
Excerpts from Ricardo Rocha's message of 2016-08-08 11:51:00 +0200: > Hi. > > On Mon, Aug 8, 2016 at 1:52 AM, Clint Byrum wrote: > > Excerpts from Steve Baker's message of 2016-08-08 10:11:29 +1200: > >> On 05/08/16 21:48, Ricardo Rocha wrote: > >> > Hi. > >> > > >> > Quick

Re: [openstack-dev] [magnum][heat] 2 million requests / sec, 100s of nodes

2016-08-08 Thread Zane Bitter
On 08/08/16 17:09, Ricardo Rocha wrote: * trying the convergence_engine: as far as i could see this is already there, just not enabled by default. We can give it a try and let you know how it goes if there's no obvious drawback. Would it just work with the current schema? We're running heat
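The convergence_engine question above is a deployment-time toggle rather than a code change. A hedged sketch, assuming a typical heat.conf layout; the option name is the real Heat setting, but the service unit name varies by distro and is an assumption here:

```shell
# Enable Heat's convergence engine. In /etc/heat/heat.conf set:
#
#   [DEFAULT]
#   convergence_engine = true
#
# then restart the engine so new stacks use the convergence path.
# Stacks created under the legacy engine keep using it until updated.
sudo systemctl restart heat-engine   # unit name is an assumption
```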

Re: [openstack-dev] [magnum][heat] 2 million requests / sec, 100s of nodes

2016-08-08 Thread Ricardo Rocha
> *From:* Ricardo Rocha [mailto:rocha.po...@gmail.com] >> *Sent:* August-05-16 5:48 AM >> *To:* OpenStack Development Mailing List (not for usage questions) >> *Subject:* Re: [openstack-dev] [magnum] 2 million requests / sec, 100s >> of nodes >> >> >> >

Re: [openstack-dev] [magnum][heat] 2 million requests / sec, 100s of nodes

2016-08-08 Thread Zane Bitter
On 05/08/16 12:01, Hongbin Lu wrote: Add [heat] to the title to get more feedback. Best regards, Hongbin *From:*Ricardo Rocha [mailto:rocha.po...@gmail.com] *Sent:* August-05-16 5:48 AM *To:* OpenStack Development Mailing List (not for usage questions) *Subject:* Re: [openstack-dev

Re: [openstack-dev] [Magnum] Adding opensuse as new driver to Magnum

2016-08-08 Thread Michal Jura
-Original Message- From: Murali Allada [mailto:murali.all...@rackspace.com] Sent: August-04-16 12:38 PM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Magnum] Adding opensuse as new driver to Magnum Michal, The right place for drivers is the /drivers folder. Take a look

Re: [openstack-dev] [magnum][heat] 2 million requests / sec, 100s of nodes

2016-08-08 Thread Tim Bell
On 08 Aug 2016, at 11:51, Ricardo Rocha wrote: Hi. On Mon, Aug 8, 2016 at 1:52 AM, Clint Byrum wrote: Excerpts from Steve Baker's message of 2016-08-08 10:11:29 +1200: On 05/08/16 21:48, Ricardo

Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-08 Thread Ricardo Rocha
On Mon, Aug 8, 2016 at 11:51 AM, Ricardo Rocha wrote: > Hi. > > On Mon, Aug 8, 2016 at 1:52 AM, Clint Byrum wrote: >> Excerpts from Steve Baker's message of 2016-08-08 10:11:29 +1200: >>> On 05/08/16 21:48, Ricardo Rocha wrote: >>> > Hi. >>> > >>> > Quick

Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-08 Thread Ricardo Rocha
Hi. On Mon, Aug 8, 2016 at 1:52 AM, Clint Byrum wrote: > Excerpts from Steve Baker's message of 2016-08-08 10:11:29 +1200: >> On 05/08/16 21:48, Ricardo Rocha wrote: >> > Hi. >> > >> > Quick update is 1000 nodes and 7 million reqs/sec :) - and the number >> > of requests should

Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-07 Thread Ton Ngo
com> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org> Date: 08/07/2016 12:59 PM Subject: Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes Hi Ton. I think we should. Also i

Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-07 Thread Clint Byrum
Excerpts from Steve Baker's message of 2016-08-08 10:11:29 +1200: > On 05/08/16 21:48, Ricardo Rocha wrote: > > Hi. > > > > Quick update is 1000 nodes and 7 million reqs/sec :) - and the number > > of requests should be higher but we had some internal issues. We have > > a submission for

Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-07 Thread Steve Baker
ng List (not for usage questions)" <openstack-dev@lists.openstack.org <mailto:openstack-dev@lists.openstack.org>> Date: 06/17/2016 12:10 PM Subject: Re: [openstack-dev] [magnum] 2 m
