Re: [openstack-dev] [magnum] kubernetes images for magnum rocky

2018-12-03 Thread Spyros Trigazis
Magnum Queens uses Kubernetes 1.9.3 by default. You can upgrade to v1.10.11-1. From a quick test, v1.11.5-1 is also compatible with 1.9.x. We are working to make this painless; sorry you have to ssh into the nodes for now. Cheers, Spyros On Mon, 3 Dec 2018 at 23:24, Spyros Trigazis wrote: >

[openstack-dev] [magnum] kubernetes images for magnum rocky

2018-12-03 Thread Spyros Trigazis
Hello all, Following the vulnerability [0], with magnum rocky and the kubernetes driver on fedora atomic you can use this tag "v1.11.5-1" [1] for new clusters. To upgrade the apiserver in existing clusters, on the master node(s) you can run: sudo atomic pull --storage ostree
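The command in the message is cut off mid-line; as a sketch only, the following shows how such an upgrade might look on a master node. The image and container names are assumptions (based on the openstackmagnum namespace on Docker Hub) and are not confirmed by the thread:

```shell
# Sketch, not confirmed by the thread: pull the patched apiserver image
# into ostree-backed system-container storage on the master node...
sudo atomic pull --storage ostree \
  docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1

# ...then move the running system container to the new tag (subcommand,
# flag, and container name are assumptions for a Fedora Atomic host)
sudo atomic containers update \
  --rebase docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1 \
  kube-apiserver
```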

Re: [openstack-dev] [magnum] [Rocky] K8 deployment on fedora-atomic is failed

2018-11-29 Thread Vikrant Aggarwal
Hi Feilong, Thanks for your reply. Kindly find the below outputs. [root@packstack1 ~]# rpm -qa | grep -i magnum python-magnum-7.0.1-1.el7.noarch openstack-magnum-conductor-7.0.1-1.el7.noarch openstack-magnum-ui-5.0.1-1.el7.noarch openstack-magnum-api-7.0.1-1.el7.noarch

Re: [openstack-dev] [magnum] [Rocky] K8 deployment on fedora-atomic is failed

2018-11-29 Thread Feilong Wang
Hi Vikrant, Before we dig more, it would be nice if you can let us know the version of your Magnum and Heat. Cheers. On 30/11/18 12:12 AM, Vikrant Aggarwal wrote: > Hello Team, > > Trying to deploy on K8 on fedora atomic. > > Here is the output of cluster template: > ~~~ > [root@packstack1

[openstack-dev] [magnum] [Rocky] K8 deployment on fedora-atomic is failed

2018-11-29 Thread Vikrant Aggarwal
Hello Team, Trying to deploy on K8 on fedora atomic. Here is the output of cluster template: ~~~ [root@packstack1 k8s_fedora_atomic_v1(keystone_admin)]# magnum cluster-template-show 16eb91f7-18fe-4ce3-98db-c732603f2e57 WARNING: The magnum client is deprecated and will be removed in a future

[openstack-dev] [magnum][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter
Greetings, Magnum team! As you may be aware, I've been working with other folks in the community on documenting a vision for OpenStack clouds (formerly known as the 'Technical Vision') - essentially to interpret the mission statement in long-form, in a way that we can use to actually help

[openstack-dev] [magnum] Upcoming meeting 2018-09-11 Tuesday UTC 2100

2018-09-11 Thread Spyros Trigazis
Hello team, This is a reminder for the upcoming magnum meeting [0]. For convenience you can import this from here [1] or view it in html here [2]. Cheers, Spyros [0] https://wiki.openstack.org/wiki/Meetings/Containers#Weekly_Magnum_Team_Meeting [1]

Re: [openstack-dev] [magnum] supported OS images and magnum spawn failures for Swarm and Kubernetes

2018-08-23 Thread Tobias Urdin
Now with Fedora 26 I have etcd available but etcd fails. [root@swarm-u2rnie4d4ik6-master-0 ~]# /usr/bin/etcd --name="${ETCD_NAME}" --data-dir="${ETCD_DATA_DIR}" --listen-client-urls="${ETCD_LISTEN_CLIENT_URLS}" --debug 2018-08-23 14:34:15.596516 E | etcdmain: error verifying flags,
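When etcd is launched by hand like this, the `${ETCD_*}` variables that the systemd unit normally injects via `EnvironmentFile` are empty, which is one plausible cause of the "error verifying flags" message. A hedged sketch (the config path is an assumption for a Fedora Atomic host):

```shell
# Export the variables the unit file would normally provide, then retry.
set -a                  # auto-export everything sourced below
. /etc/etcd/etcd.conf   # assumed EnvironmentFile path
set +a

/usr/bin/etcd --name="${ETCD_NAME}" \
  --data-dir="${ETCD_DATA_DIR}" \
  --listen-client-urls="${ETCD_LISTEN_CLIENT_URLS}" --debug
```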

Re: [openstack-dev] [magnum] supported OS images and magnum spawn failures for Swarm and Kubernetes

2018-08-23 Thread Tobias Urdin
Found the issue, I assume I have to use Fedora Atomic 26 until Rocky, where I can start using Fedora Atomic 27. Will Fedora Atomic 28 be supported for Rocky? https://bugs.launchpad.net/magnum/+bug/1735381 (Run etcd and flanneld in system containers, In Fedora Atomic 27 etcd and flanneld are
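For background, the bug above means that on Fedora Atomic 27 etcd and flanneld ship as system containers rather than in the base image. A sketch of installing one manually; the image URL is an assumption:

```shell
# Install etcd as a system container on Fedora Atomic 27 (sketch;
# registry path is an assumption, not confirmed by the thread)
sudo atomic install --system --storage ostree \
  registry.fedoraproject.org/f27/etcd
sudo systemctl start etcd
```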

Re: [openstack-dev] [magnum] supported OS images and magnum spawn failures for Swarm and Kubernetes

2018-08-23 Thread Tobias Urdin
Thanks for all of your help everyone, I've been busy with other things but was able to pick up where I left off regarding Magnum. After fixing some issues I have been able to provision a working Kubernetes cluster. I'm still having issues with getting Docker Swarm working, I've tried with both

[openstack-dev] [magnum] [magnum-ui] show certificate button bug requesting reviews

2018-08-23 Thread Tobias Urdin
Hello, Requesting reviews from the magnum-ui core team for https://review.openstack.org/#/c/595245/ I'm hoping that we could make quick due of this and be able to backport it to the stable/rocky release, would be ideal to backport it for stable/queens as well. Best regards Tobias

Re: [openstack-dev] [magnum] K8s Conformance Testing

2018-08-21 Thread Mohammed Naser
Hi Chris, This is an awesome effort. We can provide nested virt resources which are leveraged by Kata at the moment. Thanks! Mohammed Sent from my iPhone > On Aug 21, 2018, at 6:38 PM, Chris Hoge wrote: > > As discussed at the Vancouver SIG-K8s and Copenhagen SIG-OpenStack sessions, >

[openstack-dev] [magnum] K8s Conformance Testing

2018-08-21 Thread Chris Hoge
As discussed at the Vancouver SIG-K8s and Copenhagen SIG-OpenStack sessions, we're moving forward with obtaining Kubernetes Conformance certification for Magnum. While conformance test jobs aren't reliably running in the gate yet, the requirements of the program make submitting results manually on

Re: [openstack-dev] [magnum] supported OS images and magnum spawn failures for Swarm and Kubernetes

2018-08-04 Thread Joe Topjian
We recently deployed Magnum and I've been making my way through getting both Swarm and Kubernetes running. I also ran into some initial issues. These notes may or may not help, but thought I'd share them in case: * We're using Barbican for SSL. I have not tried with the internal x509keypair. * I

Re: [openstack-dev] [magnum] supported OS images and magnum spawn failures for Swarm and Kubernetes

2018-08-03 Thread Bogdan Katynski
> On 3 Aug 2018, at 13:46, Tobias Urdin wrote: > > Kubernetes: > * Master etcd does not start because /run/etcd does not exist This could be an issue with etcd rpm. With Systemd, /run is an in-memory tmpfs and is wiped on reboots. We’ve come across a similar issue in mariadb rpm on CentOS 7:
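The usual fix for a unit that needs a directory under the tmpfs-backed /run is a systemd-tmpfiles entry, so the directory is recreated on every boot. A sketch (path, mode, and ownership are assumptions):

```
# /etc/tmpfiles.d/etcd.conf — recreate /run/etcd at boot
d /run/etcd 0755 etcd etcd -
```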

[openstack-dev] [magnum] supported OS images and magnum spawn failures for Swarm and Kubernetes

2018-08-03 Thread Tobias Urdin
Hello, I'm testing around with Magnum and have so far only had issues. I've tried deploying Docker Swarm (on Fedora Atomic 27, Fedora Atomic 28) and Kubernetes (on Fedora Atomic 27) and haven't been able to get it working. Running Queens, is there any information about supported images? Is

Re: [openstack-dev] [magnum] PTL Candidacy for Stein

2018-07-27 Thread T. Nichole Williams
+1, you’ve got my vote :D T. Nichole Williams tribe...@tribecc.us > On Jul 27, 2018, at 6:35 AM, Spyros Trigazis wrote: > > Hello OpenStack community! > > I would like to nominate myself as PTL for the Magnum project for the > Stein cycle. > > In the last cycle magnum became more stable

[openstack-dev] [magnum] PTL Candidacy for Stein

2018-07-27 Thread Spyros Trigazis
Hello OpenStack community! I would like to nominate myself as PTL for the Magnum project for the Stein cycle. In the last cycle magnum became more stable and is reaching the point of becoming a feature complete solution for providing managed container clusters for private or public OpenStack

Re: [openstack-dev] [magnum] New temporary meeting on Thursdays 1700UTC

2018-07-24 Thread Spyros Trigazis
Hello list, After trial and error this is the new layout of the magnum meetings plus office hours. 1. The meeting moves to Tuesdays 2100 UTC starting today 2.1 Office hours for strigazi Tuesdays: 1300 to 1400 UTC 2.2 Office hours for flwang Wednesdays: 2200 to 2300 UTC Cheers, Spyros [0]

Re: [openstack-dev] [magnum] Nominate Feilong Wang for Core Reviewer

2018-07-17 Thread Lingxian Kong
Huge +1 Cheers, Lingxian Kong On Tue, Jul 17, 2018 at 7:04 PM, Yatin Karel wrote: > +2 Well deserved. > > Welcome Feilong and Thanks for all the Great Work!!! > > > Regards > Yatin Karel > > On Tue, Jul 17, 2018 at 12:27 PM, Spyros Trigazis > wrote: > > Hello list, > > > > I'm excited to

Re: [openstack-dev] [magnum] Nominate Feilong Wang for Core Reviewer

2018-07-17 Thread Yatin Karel
+2 Well deserved. Welcome Feilong and Thanks for all the Great Work!!! Regards Yatin Karel On Tue, Jul 17, 2018 at 12:27 PM, Spyros Trigazis wrote: > Hello list, > > I'm excited to nominate Feilong as Core Reviewer for the Magnum project. > > Feilong has contributed many features like Calico

[openstack-dev] [magnum] Nominate Feilong Wang for Core Reviewer

2018-07-17 Thread Spyros Trigazis
Hello list, I'm excited to nominate Feilong as Core Reviewer for the Magnum project. Feilong has contributed many features like Calico as an alternative CNI for kubernetes, make coredns scale proportionally to the cluster, improved admin operations on clusters and improved multi-master

Re: [openstack-dev] [magnum] Problems with multi-regional OpenStack installation

2018-06-28 Thread Fei Long Wang
Hi Andrei, Thanks for raising this issue. I'm keen to review and happy to help. I just did a quick look at https://review.openstack.org/#/c/578356 and it looks good to me. As for the heat-container-engine issue, it's probably a bug. I will test and propose a patch, which needs to release a new image

[openstack-dev] [magnum] Problems with multi-regional OpenStack installation

2018-06-28 Thread Andrei Ozerov
Greetings. Has anyone successfully deployed Magnum in a multi-regional OpenStack installation? In my case different services (Nova, Heat) have different public endpoints in every region. I couldn't start kube-apiserver until I added "region" to kube_openstack_config. I created a story with

Re: [openstack-dev] [magnum] New temporary meeting on Thursdays 1700UTC

2018-06-25 Thread Fei Long Wang
Hi Spyros, Thanks for posting the discussion output. I'm not sure I follow the idea of simplifying the CNI configuration. Though we have both Calico and Flannel for k8s, if we put both of them into a single config script, that script could become very complex. That's why I think we should define

Re: [openstack-dev] [magnum] New temporary meeting on Thursdays 1700UTC

2018-06-25 Thread Spyros Trigazis
Hello again, After Thursday's meeting I want to summarize what we discussed and add some pointers. - Work on using the out-of-tree cloud provider and move to the new model of defining it https://storyboard.openstack.org/#!/story/1762743 https://review.openstack.org/#/c/577477/ -

Re: [openstack-dev] [magnum] K8S apiserver key sync

2018-06-20 Thread Remo Mattei
Thanks Fei, I did post the question on that channel, though there was not much noise there. I would really like to get this configured since we are pushing for production. Thanks > On Jun 20, 2018, at 8:27 PM, Fei Long Wang wrote: > > Hi Remo, > > I can't see obvious issue from the log you posted.

Re: [openstack-dev] [magnum] K8S apiserver key sync

2018-06-20 Thread Fei Long Wang
Hi Remo, I can't see an obvious issue from the log you posted. You can pop into the #openstack-containers IRC channel for Magnum questions. Cheers. On 21/06/18 08:56, Remo Mattei wrote: > Hello guys, what will be the right channel to as a question about > having K8 (magnum working with Tripleo)?

Re: [openstack-dev] [magnum] K8S apiserver key sync

2018-06-20 Thread Remo Mattei
Hello guys, what would be the right channel to ask a question about having K8s (Magnum working with TripleO)? I have the following errors.. http://pastebin.mattei.co/index.php/view/2d1156f1 Any tips are appreciated. Thanks Remo > On Jun 19, 2018, at 2:13 PM, Fei Long Wang wrote: > > Hi

[openstack-dev] [magnum] New temporary meeting on Thursdays 1700UTC

2018-06-20 Thread Spyros Trigazis
Hello list, We are going to have a second weekly meeting for magnum for 3 weeks as a test to reach out to contributors in the Americas. You can join us tomorrow (or today for some?) at 1700UTC in #openstack-containers . Cheers, Spyros

Re: [openstack-dev] [magnum] K8S apiserver key sync

2018-06-19 Thread Fei Long Wang
Hi there, For people who may still be interested in this issue: I have proposed a patch, see https://review.openstack.org/576029 And I have verified with Sonobuoy for both multi-master (3 master nodes) and single-master clusters; all worked. Any comments will be appreciated. Thanks. On 21/05/18
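The Sonobuoy verification mentioned can be reproduced roughly as follows, using the upstream Sonobuoy CLI against the cluster's kubeconfig; exact subcommands and flags vary by Sonobuoy version, so treat this as a sketch:

```shell
sonobuoy run --wait            # launch the conformance test run
results=$(sonobuoy retrieve)   # download the results tarball
sonobuoy results "$results"    # summarize pass/fail
sonobuoy delete --wait         # clean up the test namespace
```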

Re: [openstack-dev] [magnum] K8S apiserver key sync

2018-05-20 Thread Sergey Filatov
Hi! I’d like to initiate a discussion about this bug: [1]. To resolve this issue we need to generate a secret cert and pass it to master nodes. We also need to store it somewhere to support scaling. This issue is specific for kubernetes drivers. Currently in magnum we have a general cert manager

Re: [openstack-dev] [magnum] K8S apiserver key sync

2018-04-23 Thread Spyros Trigazis
Hi Sergey, In Magnum Queens we can set the private CA as the service account key. Here [1] we can set the ca.key file, when the label cert_manager_api is set to true. Cheers, Spyros [1]
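As a sketch of the label Spyros mentions (the template name, flavors, and image are placeholders, not from the thread):

```shell
# Queens-era template with the cert_manager_api label enabled, so the
# cluster CA key can be distributed to masters (sketch; names are
# placeholders)
openstack coe cluster template create k8s-atomic \
  --coe kubernetes \
  --image fedora-atomic-27 \
  --external-network public \
  --master-flavor m1.small --flavor m1.small \
  --labels cert_manager_api=true
```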

[openstack-dev] [magnum] K8S apiserver key sync

2018-04-20 Thread Sergey Filatov
Hello, I looked into the k8s drivers for magnum and I see that each api-server on a master node generates its own service-account-key-file. This causes issues with service accounts authenticating on the api-server (in case the api-server endpoint moves). As far as I understand we should have either all

Re: [openstack-dev] [magnum][keystone] clusters, trustees and projects

2018-03-01 Thread Ricardo Rocha
Hi. I had added an item for this: https://bugs.launchpad.net/magnum/+bug/1752433 after the last reply and a bit of searching around. It's not urgent but we already got a couple cases in our deployment. Cheers, Ricardo On Thu, Mar 1, 2018 at 3:44 PM, Spyros Trigazis wrote:

Re: [openstack-dev] [magnum][keystone] clusters, trustees and projects

2018-03-01 Thread Spyros Trigazis
Hello, After discussion with the keystone team at the above session, keystone will not provide a way to transfer trusts nor application credentials, since it doesn't address the above problem (the member that leaves the team can auth with keystone if he has the trust/app-creds). In magnum we

Re: [openstack-dev] [magnum][keystone] clusters, trustees and projects

2018-02-27 Thread Ricardo Rocha
Hi Lance. On Mon, Feb 26, 2018 at 4:45 PM, Lance Bragstad wrote: > > > On 02/26/2018 10:17 AM, Ricardo Rocha wrote: >> Hi. >> >> We have an issue on the way Magnum uses keystone trusts. >> >> Magnum clusters are created in a given project using HEAT, and require >> a trust

Re: [openstack-dev] [magnum][keystone] clusters, trustees and projects

2018-02-26 Thread Lance Bragstad
On 02/26/2018 10:17 AM, Ricardo Rocha wrote: > Hi. > > We have an issue on the way Magnum uses keystone trusts. > > Magnum clusters are created in a given project using HEAT, and require > a trust token to communicate back with OpenStack services - there is > also integration with Kubernetes

[openstack-dev] [magnum][keystone] clusters, trustees and projects

2018-02-26 Thread Ricardo Rocha
Hi. We have an issue on the way Magnum uses keystone trusts. Magnum clusters are created in a given project using HEAT, and require a trust token to communicate back with OpenStack services - there is also integration with Kubernetes via a cloud provider. This trust belongs to a given user,
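The per-cluster delegation described here can be illustrated with the plain keystone trust API; all names below are placeholders:

```shell
# A trust lets the trustee act in the project with the trustor's roles,
# which is how a cluster keeps talking to OpenStack services after
# creation (sketch; user and project names are placeholders).
openstack trust create \
  --project my-project \
  --role member \
  cluster-owner magnum-trustee-user
```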

[openstack-dev] [magnum] Example bringup of Istio on Magnum k8s + Octavia

2018-02-19 Thread Timothy Swanson (tiswanso)
In case anyone is interested in the details, I went through the exercise of a basic bringup of Istio on Magnum k8s (with stable/pike): https://tiswanso.github.io/istio/istio_on_magnum.html I hope to update with follow-on items that may also be explored, such as: - Istio automatic side-car

Re: [openstack-dev] [magnum][release] release-post job for openstack/releases failed

2018-02-08 Thread Jeremy Stanley
On 2018-02-08 18:29:18 -0500 (-0500), Doug Hellmann wrote: [...] > Another alternative is to change the job configuration for magnum to use > release-openstack-server instead of publish-to-pypi, at least for the > near term. That would give the magnum team more time to make the changes > need to

Re: [openstack-dev] [magnum][release] release-post job for openstack/releases failed

2018-02-08 Thread Doug Hellmann
Excerpts from Sean McGinnis's message of 2018-02-08 13:00:52 -0600: > The release job for magnum failed, but luckily it was after tagging and > branching the release. It was not able to get to the point of uploading a > tarball to http://tarballs.openstack.org/magnum/ though. > > The problem the

[openstack-dev] [magnum] Release of openstack/magnum failed

2018-02-08 Thread Sean McGinnis
Apologies, I forwarded the wrong one just a bit ago. See below for the actual links to the magnum release job failures if you wish to take a look. Sean - Forwarded message from z...@openstack.org - Date: Thu, 08 Feb 2018 18:06:54 + From: z...@openstack.org To:

[openstack-dev] [magnum][release] release-post job for openstack/releases failed

2018-02-08 Thread Sean McGinnis
The release job for magnum failed, but luckily it was after tagging and branching the release. It was not able to get to the point of uploading a tarball to http://tarballs.openstack.org/magnum/ though. The problem the job encountered is that magnum is now configured to publish to Pypi. The

[openstack-dev] [magnum] New meeting time Tue 1000UTC

2018-02-05 Thread Spyros Trigazis
Hello, Heads up, the containers team meeting has changed from 1600UTC to 1000UTC. See you there tomorrow at #openstack-meeting-alt ! Spyros __ OpenStack Development Mailing List (not for usage questions) Unsubscribe:

[openstack-dev] [magnum] Rocky Magnum PTL candidacy

2018-02-03 Thread Spyros Trigazis
Dear Stackers, I would like to nominate myself as PTL for the Magnum project for the Rocky cycle. I have been consistently contributing to Magnum since February 2016 and I am a core reviewer since August 2016. Since then, I have contributed to significant features like cluster drivers, add

Re: [openstack-dev] [magnum] [ironic] Why does magnum create instances with ports using 'fixed-ips' ?

2018-01-30 Thread Waines, Greg
Hey there, We have just recently integrated MAGNUM into our OpenStack Distribution. QUESTION: When

[openstack-dev] [magnum] Any plan to resume nodegroup work?

2018-01-29 Thread Wan-yen Hsu
Hi, I saw magnum nodegroup specs https://review.openstack.org/425422, https://review.openstack.org/433680, and https://review.openstack.org/425431 were last updated a year ago. is there any plan to resume this work or is it superseded by other specs or features? Thanks! Regards, Wan-yen

[openstack-dev] [magnum] [ironic] Why does magnum create instances with ports using 'fixed-ips' ?

2018-01-19 Thread Waines, Greg
Hey there, We have just recently integrated MAGNUM into our OpenStack Distribution. QUESTION: When MAGNUM is creating the ‘instances’ for the COE master and minion nodes, WHY does it create the instances with ports using ‘fixed-ips’ ? - instead of just letting the instance’s port

Re: [openstack-dev] [magnum] fedora atomic image with kubernetes with a CRI = frakti or clear containers

2018-01-09 Thread Spyros Trigazis
Hi Greg, You can try to build an image with this process [1]. I haven't used it for some time since we rely on the upstream image. Another option that I would like to investigate is to build a system container with frakti or clear containers, similar to these container images [2] [3] [4]. Then you

[openstack-dev] [magnum] fedora atomic image with kubernetes with a CRI = frakti or clear containers

2018-01-08 Thread Waines, Greg
Hey there, I am currently running magnum with the fedora-atomic image that is installed as part of the devstack installation of magnum. This fedora-atomic image has kubernetes with a CRI of the standard docker container. Where can I find (or how do I build) a fedora-atomic image with

Re: [openstack-dev] [magnum] Questions about Caas with Magnum

2017-11-28 Thread Sergio Morales Acuña
Can you help explain or point me to more information about your comments on this: "For RBAC, you need 1.8 and with Pike you can get it. just by changing one parameter." I checked the repo on github and RBAC was referenced only in a comment. No labels. What parameter? "In fedora atomic 27
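The thread never names the parameter; one assumption consistent with Magnum Pike is the `kube_tag` label, which selects the kubernetes version run in the cluster:

```shell
# Hypothetical: pick a kubernetes release new enough for RBAC (>= 1.8).
# The label name and tag value are assumptions, not from the thread.
openstack coe cluster template create k8s-rbac \
  --coe kubernetes \
  --image fedora-atomic-27 \
  --external-network public \
  --labels kube_tag=v1.9.3
```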

Re: [openstack-dev] [magnum] Questions about Caas with Magnum

2017-11-24 Thread Spyros Trigazis
Hi Sergio, On 22 November 2017 at 20:37, Sergio Morales Acuña wrote: > Dear Spyros: > > Thanks for your answer. I'm moving my cloud to Pike!. > > The problems I encountered were with the TCP listeners for the etcd's > LoadBalancer and the "curl -sf" from the nodes to the etcd

Re: [openstack-dev] [magnum] Questions about Caas with Magnum

2017-11-22 Thread Sergio Morales Acuña
Dear Spyros: Thanks for your answer. I'm moving my cloud to Pike! The problems I encountered were with the TCP listeners for the etcd LoadBalancer and the "curl -sf" from the nodes to the etcd LB (I had to add a -k). I'm using Kolla Binary with CentOS 7, so I also have problems with

Re: [openstack-dev] [magnum] Questions about Caas with Magnum

2017-11-22 Thread Hongbin Lu
For the record, if the Magnum team is not interested in maintaining the CoreOS driver, that is an indication that the driver should be split out and maintained by another team. CoreOS is one of the prevailing container OSes. I believe there will be a lot of interest after the split. Disclaimer: I am an author

Re: [openstack-dev] [magnum] Questions about Caas with Magnum

2017-11-22 Thread Spyros Trigazis
I forgot to include the Pike release notes https://docs.openstack.org/releasenotes/magnum/pike.html Spyros On 22 November 2017 at 09:29, Spyros Trigazis wrote: > Hi Sergio, > > On 22 November 2017 at 03:31, Sergio Morales Acuña wrote: >> I'm using

Re: [openstack-dev] [magnum] Questions about Caas with Magnum

2017-11-22 Thread Spyros Trigazis
Hi Sergio, On 22 November 2017 at 03:31, Sergio Morales Acuña wrote: > I'm using Openstack Ocata and trying Magnum. > > I encountered a lot of problems but I been able to solved many of them. Which problems did you encounter? Can you be more specific? Can we solve them for

[openstack-dev] [magnum] Questions about Caas with Magnum

2017-11-21 Thread Sergio Morales Acuña
I'm using OpenStack Ocata and trying Magnum. I encountered a lot of problems but I have been able to solve many of them. Now I'm curious about some aspects of Magnum: Do I need a newer version of Magnum to run K8S 1.7? Or do I just need to create a custom fedora-atomic-27? What about RBAC? Has anyone

Re: [openstack-dev] [Magnum] Docker Swarm Mode Support

2017-11-02 Thread Spyros Trigazis
Hi Vahric, A very important reason that we use Fedora Atomic is that we are not maintaining our own special image. We use the upstream operating system, we rely on the Fedora Project and we contribute back to it. If we used Ubuntu we would need to maintain our own special qcow image. We also use the

Re: [openstack-dev] [Magnum] Docker Swarm Mode Support

2017-11-02 Thread Ricardo Rocha
Hi again. On Wed, Nov 1, 2017 at 9:47 PM, Vahric MUHTARYAN wrote: > Hello Ricardo , > > Thanks for your explanation and answers. > One more question, what is the possibility to keep using Newton (right now i > have it) and use latest Magnum features like swarm mode without

Re: [openstack-dev] [Magnum] Docker Swarm Mode Support

2017-11-01 Thread Vahric MUHTARYAN
Hello Ricardo, Thanks for your explanation and answers. One more question: is it possible to keep using Newton (which I have right now) and use the latest Magnum features, like swarm mode, without upgrading OpenStack? Regards VM On 30.10.2017 01:19, "Ricardo Rocha"

Re: [openstack-dev] [Magnum] Docker Swarm Mode Support

2017-10-29 Thread Ricardo Rocha
Hi Vahric. On Fri, Oct 27, 2017 at 9:51 PM, Vahric MUHTARYAN wrote: > Hello All , > > > > I found some blueprint about supporting Docker Swarm Mode > https://blueprints.launchpad.net/magnum/+spec/swarm-mode-support > > > > I understood that related development is not over

[openstack-dev] [Magnum] Docker Swarm Mode Support

2017-10-27 Thread Vahric MUHTARYAN
Hello All, I found a blueprint about supporting Docker Swarm mode: https://blueprints.launchpad.net/magnum/+spec/swarm-mode-support I understand that the related development is not finished yet, there is no OpenStack or Magnum version to test it, and it looks like there is more to do.

[openstack-dev] [magnum] docker registry in minion node didn't work.

2017-10-10 Thread KiYoun Sung
Hello, Magnum team. I installed OpenStack Newton and Magnum (Magnum from source). I want to use docker-registry and connect to the "admin" account object store, but I don't want to expose the admin password. I created a cluster template with the options below. - coe: kubernetes - os:
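For context, magnum's cluster templates have a registry option that stores images in the object store using the cluster's own trustee credentials, which avoids embedding the admin password. A sketch (the template name and image are placeholders):

```shell
# Enable the per-cluster docker registry backed by the object store
# (sketch; names are placeholders, not from the thread)
openstack coe cluster template create k8s-registry \
  --coe kubernetes \
  --image fedora-atomic \
  --external-network public \
  --registry-enabled
```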

Re: [openstack-dev] [magnum] issue with admin_osc.keystone().trustee_domain_id

2017-09-22 Thread Spyros Trigazis

Re: [openstack-dev] [magnum] issue with admin_osc.keystone().trustee_domain_id

2017-09-22 Thread Waines, Greg

[openstack-dev] [magnum] issue with admin_osc.keystone().trustee_domain_id

2017-09-20 Thread Waines, Greg
We are in the process of integrating MAGNUM into our OpenStack distribution. We are working with NEWTON version of MAGNUM. We have the MAGNUM processes up and running and configured. However we are seeing the following error (see stack trace below) on virtually all MAGNUM CLI calls. The code

Re: [openstack-dev] [magnum] Weekly meetings

2017-08-28 Thread Spyros Trigazis
Hello, As discussed in last week's meeting [0], this week and next we will discuss plans about Queens and review blueprints. So, if you want to add discussion items please bring them up tomorrow or next week in our weekly meeting. If for any reason, you can't attend you can start a thread in the

[openstack-dev] [magnum] Weekly meetings

2017-08-22 Thread Spyros Trigazis
Hello, Recently we decided to have bi-weekly meetings. Starting from today we will have weekly meetings again. From now on, we will have our meeting every Tuesday at 1600 UTC in #openstack-meeting-alt . For today, that is in 13 minutes. Cheers, Spyros

[openstack-dev] [magnum] PTL Candidacy for Queens

2017-08-04 Thread Spyros Trigazis
Hello! I would like to nominate myself as PTL for the Magnum project for the Queens cycle. I have been consistently contributing to Magnum since February 2016 and I am a core reviewer since August 2016. Since then, I have contributed to significant features like cluster drivers, add Magnum tests

[openstack-dev] [magnum] spec for cluster federation

2017-08-03 Thread Ricardo Rocha
Hi. We've recently started looking at federating kubernetes clusters, using some of our internal Magnum clusters and others deployed in external clouds. With kubernetes 1.7 most of the functionality we need is already available. Looking forward we submitted a spec to integrate this into Magnum:

Re: [openstack-dev] [magnum] Interface configuration and assumptions for masters/minions launched by magnum

2017-08-01 Thread Mark Goddard

Re: [openstack-dev] [magnum] Interface configuration and assumptions for masters/minions launched by magnum

2017-07-28 Thread Waines, Greg
Hi Greg, Magnum clusters currently support using only a single network for all comm

Re: [openstack-dev] [magnum] Interface configuration and assumptions for masters/minions launched by magnum

2017-07-28 Thread Mark Goddard
Hi Greg, Magnum clusters currently support using only a single network for all communication. See the heat templates[1][2] in the drivers. . On the bare metal side, currently ironic effectively supports using only a single network interface due to a lack of support for physical network awareness.

Re: [openstack-dev] [magnum] Architecture support for either VM or Ironic instance as Containers' Host ?

2017-07-20 Thread Mark Goddard
Hi Greg, You're correct - magnum features support for running on top of VMs or baremetal. Currently baremetal is supported for kubernetes on Fedora core only[1]. There is a cluster template parameter 'server_type', which should be set to 'BM' for baremetal clusters. In terms of how this works

[openstack-dev] [magnum] Interface configuration and assumptions for masters/minions launched by magnum

2017-07-17 Thread Waines, Greg
When MAGNUM launches a VM or Ironic instance for a COE master or minion node, with the COE Image, What is the interface configuration and assumptions for these nodes ? e.g. - only a single interface ? - master and minion communication over that interface ? - communication to Docker Registry or

[openstack-dev] [magnum] Architecture support for either VM or Ironic instance as Containers' Host ?

2017-07-17 Thread Waines, Greg
I believe the MAGNUM architecture supports using either a VM Instance or an Ironic Instance as the Host for the COE’s masters and minions. How is this done / abstracted within the MAGNUM Architecture ? i.e. is there a ‘container-host-driver API’ that is defined; and implemented for both VM and

Re: [openstack-dev] [magnum] after create cluster for kubernetes, kubectl create command failed.

2017-05-17 Thread KiYoun Sung
Hello, Spyros, Thank you for your reply. I executed the "kubectl create" command on my OpenStack controller node. I downloaded the kubectl binary; its version is 2.5.1. Below are my steps. 1) install openstack newton by fuel 10.0 2) install magnum by source (master branch) in controller node 3) install

Re: [openstack-dev] [magnum] after create cluster for kubernetes, kubectl create command failed.

2017-05-17 Thread Spyros Trigazis
On 17 May 2017 at 13:58, Spyros Trigazis wrote: > > > On 17 May 2017 at 06:25, KiYoun Sung wrote: > >> Hello, >> Magnum team. >> >> I Installed Openstack newton and magnum. >> I installed Magnum by source(master branch). >> >> I have two questions. >>

Re: [openstack-dev] [magnum] after create cluster for kubernetes, kubectl create command failed.

2017-05-17 Thread Spyros Trigazis
On 17 May 2017 at 06:25, KiYoun Sung wrote: > Hello, > Magnum team. > > I Installed Openstack newton and magnum. > I installed Magnum by source(master branch). > > I have two questions. > > 1. > After installation, > I created kubernetes cluster and it's CREATE_COMPLETE, >

[openstack-dev] [magnum] after create cluster for kubernetes, kubectl create command failed.

2017-05-16 Thread KiYoun Sung
Hello, Magnum team. I installed OpenStack Newton and Magnum (Magnum from source, master branch). I have two questions. 1. After installation, I created a kubernetes cluster and it reached CREATE_COMPLETE, and I want to create a kubernetes pod. My create script is below.
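The flow the message describes might look like the following sketch; the cluster and pod names are placeholders, and on Newton-era installs the equivalent `magnum cluster-config` command would be used instead of the OSC form:

```shell
# Fetch credentials for the cluster, then create a pod with kubectl
mkdir -p ~/cluster-config
eval "$(openstack coe cluster config my-cluster --dir ~/cluster-config)"
kubectl run nginx --image=nginx --restart=Never
kubectl get pods
```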

Re: [openstack-dev] [magnum] magnum cluster-create for kubernetes-template failed.

2017-05-12 Thread Mark Goddard
Hi, I also hit the loopingcall error while running magnum 4.1.1 (ocata). It is tracked by this bug: https://bugs.launchpad.net/magnum/+bug/1666790. I cherry picked the fix to ocata locally, but this needs to be done upstream as well. I think that the heat stack create timeout is unrelated to

[openstack-dev] [magnum] magnum cluster-create for kubernetes-template failed.

2017-05-11 Thread KiYoun Sung
Hello, Magnum Team. I installed Magnum on OpenStack Ocata (by Fuel 11.0). I referred to this guide: https://docs.openstack.org/project-install-guide/container-infrastructure-management/ocata/install.html. Below is my installation information. root@controller:~# dpkg -l | grep magnum magnum-api
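For context, cluster creation with the Ocata-era magnum client followed this general shape. This is a sketch only: the image, keypair, and network names are placeholders, and exact flag names vary between client releases:

```shell
# Register a cluster template for the Kubernetes COE (values illustrative).
magnum cluster-template-create kubernetes-template \
  --image fedora-atomic-latest \
  --keypair mykey \
  --external-network public \
  --dns-nameserver 8.8.8.8 \
  --flavor m1.small \
  --network-driver flannel \
  --coe kubernetes

# Create a cluster from the template and watch for CREATE_COMPLETE.
magnum cluster-create kubernetes-cluster \
  --cluster-template kubernetes-template \
  --node-count 1
magnum cluster-show kubernetes-cluster
```

When creation fails, the underlying Heat stack (`openstack stack list` / `openstack stack resource list <stack>`) usually shows which resource timed out or errored.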

Re: [openstack-dev] [magnum][containers] Size of userdata in drivers

2017-05-04 Thread Ricardo Rocha
Hi Kevin. We've hit this locally in the past, and adding core-dns I see the same for kubernetes atomic. Spyros is dropping some fragments that are not needed, to temporarily get around the issue. Is there any trick in Heat we can use? Zipping the fragments should give some gain; is this

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-27 Thread Adrian Otto
> On Mar 22, 2017, at 5:48 AM, Ricardo Rocha wrote: > > Hi. > > One simplification would be: > openstack coe create/list/show/config/update > openstack coe template create/list/show/update > openstack coe ca show/sign I like Ricardo’s suggestion above. I think we should

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-22 Thread Ricardo Rocha
Hi. One simplification would be: openstack coe create/list/show/config/update openstack coe template create/list/show/update openstack coe ca show/sign This covers all the required commands and is a bit less verbose. The cluster word is too generic and probably adds no useful info. Whatever it
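Spelled out, Ricardo's proposed command set would read as follows. These forms were still under discussion at the time, so the flags shown are hypothetical, not a shipped interface:

```shell
# Proposed unqualified "coe" namespace (illustrative, not final).
openstack coe template create kubernetes-template --coe kubernetes
openstack coe create my-cluster --template kubernetes-template
openstack coe list
openstack coe show my-cluster
openstack coe config my-cluster
openstack coe ca show my-cluster
```

The appeal is brevity: "cluster" carries no information once the `coe` namespace already scopes the command to Magnum.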

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-21 Thread Qiming Teng
On Tue, Mar 21, 2017 at 10:50:13AM -0400, Jay Pipes wrote: > On 03/20/2017 09:24 PM, Qiming Teng wrote: > >On Mon, Mar 20, 2017 at 03:35:18PM -0400, Jay Pipes wrote: > >>On 03/20/2017 03:08 PM, Adrian Otto wrote: > >>>Team, > >>> > >>>Stephen Watson has been working on an magnum feature to add

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-21 Thread Kumari, Madhuri
: Tuesday, March 21, 2017 7:25 PM To: OpenStack Development Mailing List (not for usage questions) <openstack-dev@lists.openstack.org> Subject: Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc? IMO, coe is a little confusing. It is a term used by people related s

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-21 Thread Spyros Trigazis
IMO, coe is a little confusing. It is a term used only by people somehow related to the magnum community. When I describe to users how to use magnum, I spend a few moments explaining what we call coe. I prefer one of the following: * openstack magnum cluster create|delete|... * openstack mcluster

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-21 Thread Jay Pipes
On 03/20/2017 09:24 PM, Qiming Teng wrote: On Mon, Mar 20, 2017 at 03:35:18PM -0400, Jay Pipes wrote: On 03/20/2017 03:08 PM, Adrian Otto wrote: Team, Stephen Watson has been working on an magnum feature to add magnum commands to the openstack client by implementing a plugin:

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-21 Thread Anne Gentle
On Mon, Mar 20, 2017 at 4:38 PM, Dean Troyer wrote: > On Mon, Mar 20, 2017 at 4:36 PM, Adrian Otto > wrote: > > So, to be clear, this would result in the following command for what we > currently use “magnum cluster create” for: > > > > openstack

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-21 Thread Monty Taylor
On 03/20/2017 08:16 PM, Dean Troyer wrote: > On Mon, Mar 20, 2017 at 5:52 PM, Monty Taylor wrote: >>> [Hongbin Lu] >>> I think the style would be more consistent if all the resources are >>> qualified or un-qualified, not the mix of both. > >> So - swift got here first, it

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Qiming Teng
On Mon, Mar 20, 2017 at 03:35:18PM -0400, Jay Pipes wrote: > On 03/20/2017 03:08 PM, Adrian Otto wrote: > >Team, > > > >Stephen Watson has been working on an magnum feature to add magnum commands > >to the openstack client by implementing a plugin: > > >

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Dean Troyer
On Mon, Mar 20, 2017 at 5:52 PM, Monty Taylor wrote: >> [Hongbin Lu] >> I think the style would be more consistent if all the resources are >> qualified or un-qualified, not the mix of both. > So - swift got here first, it wins, it gets container. The fine folks in >

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Monty Taylor
On 03/20/2017 05:39 PM, Hongbin Lu wrote: > > >> -Original Message- >> From: Dean Troyer [mailto:dtro...@gmail.com] >> Sent: March-20-17 5:19 PM >> To: OpenStack Development Mailing List (not for usage questions) >> Subject: Re: [openstack-dev] [magn

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Hongbin Lu
> -Original Message- > From: Dean Troyer [mailto:dtro...@gmail.com] > Sent: March-20-17 5:19 PM > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [magnum][osc] What name to use for magnum > commands in osc? > > On M

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Clint Byrum
Excerpts from Adrian Otto's message of 2017-03-20 22:19:14 +: > I was unsure, so I found him on IRC to clarify, and he pointed me to the > openstack/service-types-authority repository, where I submitted patch 445694 > for review. We have three distinct identifiers in play: > > 1) Our

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Adrian Otto
Clint, On Mar 20, 2017, at 3:02 PM, Clint Byrum > wrote: Excerpts from Adrian Otto's message of 2017-03-20 21:16:09 +: Jay, On Mar 20, 2017, at 12:35 PM, Jay Pipes > wrote:

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Clint Byrum
Excerpts from Adrian Otto's message of 2017-03-20 21:16:09 +: > Jay, > > On Mar 20, 2017, at 12:35 PM, Jay Pipes > > wrote: > > On 03/20/2017 03:08 PM, Adrian Otto wrote: > Team, > > Stephen Watson has been working on an magnum feature to add

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Dean Troyer
On Mon, Mar 20, 2017 at 4:36 PM, Adrian Otto wrote: > So, to be clear, this would result in the following command for what we > currently use “magnum cluster create” for: > > openstack coe cluster create … > > Is this right? Yes. dt -- Dean Troyer
