Re: [openstack-dev] [magnum][keystone] clusters, trustees and projects

2018-03-01 Thread Ricardo Rocha
... should check with the heat team what is their plan. Cheers, Spyros. On 27 February 2018 at 20:53, Ricardo Rocha <rocha.po...@gmail.com> wrote: Hi Lance. On Mon, Feb 26, 2018 at 4:45 PM, Lance Bragstad <lbrags...@gmail.com> ...

Re: [openstack-dev] [magnum][keystone] clusters, trustees and projects

2018-02-27 Thread Ricardo Rocha
Hi Lance. On Mon, Feb 26, 2018 at 4:45 PM, Lance Bragstad <lbrags...@gmail.com> wrote: On 02/26/2018 10:17 AM, Ricardo Rocha wrote: Hi. We have an issue with the way Magnum uses keystone trusts. Magnum clusters are created in a given ...

[openstack-dev] [magnum][keystone] clusters, trustees and projects

2018-02-26 Thread Ricardo Rocha
Hi. We have an issue with the way Magnum uses keystone trusts. Magnum clusters are created in a given project using Heat, and require a trust token to communicate back with OpenStack services - there is also integration with Kubernetes via a cloud provider. This trust belongs to a given user, ...
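
For readers less familiar with trusts: a trust lets one user (the trustor) delegate a subset of their roles to another user (the trustee), who can then obtain tokens scoped to the trustor's project. A minimal sketch with the OpenStack CLI - project, role and user names are illustrative, not from the thread:

    # Hypothetical example: delegate the 'member' role on 'k8s-project'
    # from the cluster owner (trustor) to a magnum trustee user.
    openstack trust create --project k8s-project --role member \
        cluster-owner magnum-trustee

Magnum sets up such a trust internally at cluster creation time, which is why the trust ends up tied to the creating user.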

Re: [openstack-dev] [Magnum] Docker Swarm Mode Support

2017-11-02 Thread Ricardo Rocha
... control plane for Magnum or are able to split it. Cheers, Ricardo. Regards, VM. On 30.10.2017 01:19, "Ricardo Rocha" <rocha.po...@gmail.com> wrote: Hi Vahric. On Fri, Oct 27, 2017 at 9:51 PM, Vahric MUHTARYAN <vah...@doruk.net.tr> ...

Re: [openstack-dev] [Magnum] Docker Swarm Mode Support

2017-10-29 Thread Ricardo Rocha
Hi Vahric. On Fri, Oct 27, 2017 at 9:51 PM, Vahric MUHTARYAN wrote: Hello all, I found a blueprint about supporting Docker Swarm Mode: https://blueprints.launchpad.net/magnum/+spec/swarm-mode-support. I understood that the related development is not over ...
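
For context, swarm mode is built into the Docker engine itself, unlike the standalone Swarm that the existing Magnum driver deploys. A rough sketch of what the mode provides (service name and address are illustrative):

    # Turn the first node into a swarm-mode manager.
    docker swarm init --advertise-addr 10.0.0.1
    # Run a replicated service scheduled across the swarm.
    docker service create --name web --replicas 3 -p 80:80 nginx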

[openstack-dev] [magnum] spec for cluster federation

2017-08-03 Thread Ricardo Rocha
Hi. We've recently started looking at federating kubernetes clusters, using some of our internal Magnum clusters and others deployed in external clouds. With kubernetes 1.7 most of the functionality we need is already available. Looking forward, we submitted a spec to integrate this into Magnum: ...
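
For reference, the kubernetes 1.7 federation workflow revolved around the kubefed tool; a rough sketch, with federation, context and zone names invented for illustration:

    # Deploy the federation control plane into a host cluster.
    kubefed init myfed --host-cluster-context=magnum-cluster-1 \
        --dns-provider=google-clouddns --dns-zone-name=example.com.
    # Join further clusters (internal or external) to the federation.
    kubefed join external-1 --host-cluster-context=magnum-cluster-1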

Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

2017-06-02 Thread Ricardo Rocha
Hi Hongbin. Regarding your comments below, some quick clarifications for people less familiar with Magnum. 1. Rexray / Cinder integration - Magnum uses an alpine-based rexray image, compressed size is 33MB (the download size), so pretty good - Deploying a full Magnum cluster of 128 nodes takes ...
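
As a rough illustration of what the rexray integration provides on a cluster node (volume name and size invented for the example):

    # Create a Cinder-backed volume through the rexray volume driver.
    docker volume create --driver rexray --name data01 --opt size=10
    # Mounting it attaches the Cinder volume to whichever host runs the container.
    docker run -d -v data01:/data alpine sleep 3600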

Re: [openstack-dev] [magnum][containers] Size of userdata in drivers

2017-05-04 Thread Ricardo Rocha
Hi Kevin. We've hit this locally in the past, and adding core-dns I see the same for kubernetes atomic. Spyros is dropping some fragments that are not needed, to temporarily get around the issue. Is there any trick in Heat we can use? Zipping the fragments should give some gain; is this ...

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-22 Thread Ricardo Rocha
Hi. One simplification would be:

    openstack coe create/list/show/config/update
    openstack coe template create/list/show/update
    openstack coe ca show/sign

This covers all the required commands and is a bit less verbose. The word 'cluster' is too generic and probably adds no useful info. Whatever it ...
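A sketch of how the proposal might read in practice - flags and names are invented for illustration only, and the commands eventually merged into osc may differ:

    openstack coe template create --name k8s-tmpl --coe kubernetes
    openstack coe create --name mycluster --template k8s-tmpl --node-count 3
    openstack coe config mycluster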

Re: [openstack-dev] [neutron]

2017-01-27 Thread Ricardo Rocha
Hi. Do you have a pointer to how you extended the driver to have this? Thanks! Ricardo. On Fri, Nov 18, 2016 at 2:02 PM, ZZelle wrote: Hello, AFAIK it's not possible. I did a similar thing by extending the neutron iptables driver in order to set "pre-rules". Best ...

Re: [openstack-dev] [containers][magnum] Magnum team at Summit?

2017-01-19 Thread Ricardo Rocha
Hi. It would be great to meet in any case. We've been exploring Atomic system containers (as in 'atomic install --system ...') for our internal plugins at CERN, and are having some issues with runc and selinux definitions, plus some atomic command bugs. It's mostly due to the config.json being a hard ...
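
For reference, installing a system container with the atomic CLI looks roughly like this (image and container name are illustrative):

    # Install an image as a system container: it runs under runc and
    # systemd, outside the docker daemon.
    atomic install --system --name=my-plugin registry.example.com/my-plugin:latest
    systemctl start my-plugin

The config.json mentioned above is the runc runtime spec that atomic generates at install time.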

Re: [openstack-dev] [magnum] Managing cluster drivers as individual distro packages

2016-11-22 Thread Ricardo Rocha
Hi. I think option 1 is the best one right now, mostly to reduce the impact on the ongoing developments. Upgrades, flattening, template versioning and node groups are expected to land a lot of patches in the next couple of months; moving the drivers into separate repos now could be a distraction.

Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-09 Thread Ricardo Rocha
... Excerpts from Steve Baker's message of 2016-08-08 10:11:29 +1200: On 05/08/16 21:48, Ricardo Rocha wrote: Hi. Quick update is 1000 nodes and 7 million reqs/sec :) - and the number of requests should be ...

Re: [openstack-dev] [magnum][heat] 2 million requests / sec, 100s of nodes

2016-08-08 Thread Ricardo Rocha
Hi. On Mon, Aug 8, 2016 at 6:17 PM, Zane Bitter <zbit...@redhat.com> wrote: On 05/08/16 12:01, Hongbin Lu wrote: Add [heat] to the title to get more feedback. Best regards, Hongbin ...

Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-08 Thread Ricardo Rocha
On Mon, Aug 8, 2016 at 11:51 AM, Ricardo Rocha <rocha.po...@gmail.com> wrote: Hi. On Mon, Aug 8, 2016 at 1:52 AM, Clint Byrum <cl...@fewbar.com> wrote: Excerpts from Steve Baker's message of 2016-08-08 10:11:29 +1200: On 05/08/16 21:48, Ricardo Rocha ...

Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-08 Thread Ricardo Rocha
Hi. On Mon, Aug 8, 2016 at 1:52 AM, Clint Byrum <cl...@fewbar.com> wrote: Excerpts from Steve Baker's message of 2016-08-08 10:11:29 +1200: On 05/08/16 21:48, Ricardo Rocha wrote: Hi. Quick update is 1000 nodes and 7 million reqs/sec :) ...

Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-07 Thread Ricardo Rocha
... It should be fairly straightforward. You just need to make sure the local storage of the flavor is sufficient to host the containers in the benchmark. If you think this is a common scenario, we can open a blueprint for this option. Ton ...

Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-05 Thread Ricardo Rocha
... "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>, Date: 06/17/2016 12:10 PM, Subject: Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes -- Thanks Ricardo for sharing the data, ...

[openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-06-17 Thread Ricardo Rocha
Hi. Just thought the Magnum team would be happy to hear :) We had access to some hardware for the last couple of days, and tried some tests with Magnum and Kubernetes - following an original blog post from the kubernetes team. Got a 200-node kubernetes bay (800 cores) reaching 2 million requests / sec ...
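
For scale, a bay of that size would be created with the magnum CLI of the time along these lines (bay and baymodel names invented for the example):

    # Create a 200-node kubernetes bay from an existing baymodel.
    magnum bay-create --name perf-bay --baymodel k8s-model --node-count 200
    magnum bay-show perf-bay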

Re: [openstack-dev] [Magnum] The Magnum Midcycle

2016-06-15 Thread Ricardo Rocha
... CERN's location: https://goo.gl/maps/DWbDVjnAvJJ2. Cheers, Spyros. On 8 June 2016 at 16:01, Hongbin Lu <hongbin...@huawei.com> wrote: Ricardo, thanks for the offer. Would I know where is the exact location ...

Re: [openstack-dev] [Magnum] The Magnum Midcycle

2016-06-08 Thread Ricardo Rocha
Hi Hongbin. Not sure how this fits everyone, but we would be happy to host it at CERN. How do people feel about it? We can add a nice tour of the place as a bonus :) Let us know. Ricardo. On Tue, Jun 7, 2016 at 10:32 PM, Hongbin Lu wrote: Hi all, please find ...

Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-07 Thread Ricardo Rocha
+1 on this. Another use case would be 'fast storage' for DBs and 'any storage' for memcache and web servers. Relying on labels for this makes it really simple. The alternative of doing it with multiple clusters adds complexity to the cluster(s) description by users. On Fri, Jun 3, 2016 at 1:54 AM, ...

Re: [openstack-dev] [magnum] Notes for Magnum design summit

2016-05-03 Thread Ricardo Rocha
Hi. On Mon, May 2, 2016 at 7:11 PM, Cammann, Tom wrote: Thanks for the write-up Hongbin and thanks to all those who contributed to the design summit. A few comments on the summaries below. 6. Ironic Integration: ...

Re: [openstack-dev] [magnum] High Availability

2016-04-21 Thread Ricardo Rocha
.../2016/04/containers-and-cern-cloud.html. Hopefully the pointers to the relevant blueprints for some of the issues we found will be useful for others. Cheers, Ricardo. On Fri, Mar 18, 2016 at 3:42 PM, Ricardo Rocha <rocha.po...@gmail.com> wrote: Hi. We're running a Magnum pilot ...

Re: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to provision minion nodes

2016-04-20 Thread Ricardo Rocha
Hi Hongbin. On Wed, Apr 20, 2016 at 8:13 PM, Hongbin Lu wrote: From: Duan, Li-Gong (Gary, HPServers-Core-OE-PSC) [mailto:li-gong.d...@hpe.com], Sent: April-20-16 3:39 AM, To: OpenStack Development Mailing List (not for usage questions), Subject: ...

Re: [openstack-dev] [Magnum]Cache docker images

2016-04-20 Thread Ricardo Rocha
Hi. On Wed, Apr 20, 2016 at 5:43 PM, Fox, Kevin M wrote: If the ops are deploying a cloud big enough to run into that problem, I think they can deploy a scaled-out docker registry of some kind too, that the images can point to? Last I looked, it didn't seem very ...
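
One way to run such a cache is the stock registry image in pull-through proxy mode; a minimal sketch, with deployment details (TLS, storage, scaling) left to the operator:

    # Run a local pull-through cache in front of Docker Hub.
    docker run -d -p 5000:5000 --name registry-cache \
        -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
        registry:2
    # Bay nodes would then point their daemons at it, e.g.:
    #   dockerd --registry-mirror=http://registry-cache.example.com:5000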

Re: [openstack-dev] [magnum] Discuss the blueprint "support-private-registry"

2016-03-30 Thread Ricardo Rocha
Hi. On Wed, Mar 30, 2016 at 3:59 AM, Eli Qiao wrote: Hi Hongbin, thanks for starting this thread. I initially proposed this bp because I am in China, behind the great firewall, and cannot access gcr.io directly; after checking our cloud-init ...

Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Ricardo Rocha
Hi. We're on the way: the API is using haproxy load balancing in the same way all openstack services do here - this part seems to work fine. For the conductor we're stuck on bay certificates - we don't currently have barbican, so local was the only option. To get them accessible on all ...

Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Ricardo Rocha
... us to think creatively about how to strike the right balance between re-implementing existing technology and making that technology easily accessible. Thanks, Adrian. Best regards, ...

Re: [openstack-dev] [magnum] containers across availability zones

2016-02-24 Thread Ricardo Rocha
... adding tags to the docker daemon on the bay nodes as part of the swarm heat template. That would allow the filter selection you described. Adrian. On Feb 23, 2016, at 4:11 PM, Ricardo Rocha <rocha.po...@gmail.com> wrote: ...

[openstack-dev] [magnum] containers across availability zones

2016-02-23 Thread Ricardo Rocha
Hi. Has anyone looked into having magnum bay nodes deployed in different availability zones? The goal would be to have multiple instances of a container running on nodes across multiple AZs. Looking at docker swarm, this could be achieved using (for example) affinity filters based on labels.
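
With standalone swarm, that could look roughly like this - the label name is illustrative, and (as discussed in the reply above) the daemon flag would be set by the heat template on each bay node:

    # Each bay node's docker daemon is started with an AZ label, e.g.:
    #   dockerd --label az=az-1
    # A constraint filter then pins a replica to each zone:
    docker run -d -e constraint:az==az-1 nginx
    docker run -d -e constraint:az==az-2 nginx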

Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

2016-01-19 Thread Ricardo Rocha
Hi. I agree with this. It's great that magnum does the setup and config of the container cluster backends, but we could also call heat ourselves if that were all it did. Taking a common use case we have: create and expose a volume using an nfs backend, so that multiple clients can access the same data ...
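
In kubernetes terms that use case maps to an NFS-backed persistent volume shared read-write by many pods; a minimal sketch with a made-up server and export path:

    # Register an NFS share as a PersistentVolume multiple pods can mount.
    kubectl create -f - <<EOF
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: shared-data
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteMany
      nfs:
        server: nfs.example.com
        path: /exports/shared
    EOF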

Re: [openstack-dev] New [puppet] module for Magnum project

2015-11-25 Thread Ricardo Rocha
Hi. We've started implementing a similar module here; I just pushed it to https://github.com/cernops/puppet-magnum. It already sets up a working magnum-api/conductor, and we'll add configuration for additional conf options this week - to allow alternate heat templates for the bays. I've done some ...