should check with the heat team what their plan is.
>
> Cheers,
> Spyros
>
> On 27 February 2018 at 20:53, Ricardo Rocha <rocha.po...@gmail.com> wrote:
Hi Lance.
On Mon, Feb 26, 2018 at 4:45 PM, Lance Bragstad <lbrags...@gmail.com> wrote:
>
>
> On 02/26/2018 10:17 AM, Ricardo Rocha wrote:
Hi.
We have an issue with the way Magnum uses keystone trusts.
Magnum clusters are created in a given project using Heat, and require
a trust token to communicate back with OpenStack services - there is
also integration with Kubernetes via a cloud provider.
This trust belongs to a given user,
control plane for Magnum or are able to split
it.
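As a quick illustration for people less familiar with the mechanism,
here is a minimal sketch (not from this thread) of creating such a
delegation with python-keystoneclient - the endpoint, credentials, IDs
and role name below are placeholders:

# Hedged sketch: creating the kind of keystone trust Magnum relies on.
# All values are placeholders, not taken from this thread.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from keystoneclient.v3 import client

auth = v3.Password(auth_url='https://keystone.example.org:5000/v3',
                   username='trustor', password='secret',
                   project_name='my-project',
                   user_domain_id='default', project_domain_id='default')
keystone = client.Client(session=session.Session(auth=auth))

# The trustor delegates a role on the cluster's project to the trustee,
# the identity the cluster components use to call back into OpenStack.
trust = keystone.trusts.create(
    trustor_user='<trustor-user-id>',
    trustee_user='<trustee-user-id>',
    project='<project-id>',
    role_names=['Member'],
    impersonation=True)
print(trust.id)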
Cheers,
Ricardo
>
> Regards
> VM
>
> On 30.10.2017 01:19, "Ricardo Rocha" <rocha.po...@gmail.com> wrote:
>
Hi Vahric.
On Fri, Oct 27, 2017 at 9:51 PM, Vahric MUHTARYAN wrote:
> Hello All,
>
>
>
> I found a blueprint about supporting Docker Swarm Mode
> https://blueprints.launchpad.net/magnum/+spec/swarm-mode-support
>
>
>
> I understood that the related development is not over
Hi.
We've recently started looking at federating kubernetes clusters,
using some of our internal Magnum clusters and others deployed in
external clouds. With kubernetes 1.7, most of the functionality we need
is already available.
Looking forward, we submitted a spec to integrate this into Magnum:
Hi Hongbin.
Regarding your comments below, some quick clarifications for people
less familiar with Magnum.
1. Rexray / Cinder integration
- Magnum uses an alpine-based rexray image; the compressed size is 33MB
(the download size), so pretty good
- Deploying a full Magnum cluster of 128 nodes takes
Hi Kevin.
We've hit this locally in the past, and after adding core-dns I see the
same for kubernetes atomic.
Spyros is dropping some fragments that are not needed, to temporarily
get around the issue. Is there any trick in Heat we can use? Zipping
the fragments should give some gain, is this
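As a rough illustration of the possible gain from zipping (a sketch, not
from this thread - the fragment path is hypothetical), cloud-init accepts
gzip-compressed user-data, so each fragment could be compressed before
being embedded in the template:

# Hedged sketch: measure the size gain from gzipping a fragment before
# embedding it. The path below is a hypothetical example.
import base64
import gzip

with open('fragments/configure-kubernetes-master.sh', 'rb') as f:
    raw = f.read()

compressed = gzip.compress(raw)
# base64 is only needed if the compressed payload must travel as text
# inside the template / user-data.
encoded = base64.b64encode(compressed)

print('raw: %d bytes, gzip: %d bytes, gzip+base64: %d bytes'
      % (len(raw), len(compressed), len(encoded)))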
Hi.
One simplification would be:
openstack coe create/list/show/config/update
openstack coe template create/list/show/update
openstack coe ca show/sign
This covers all the required commands and is a bit less verbose. The
word 'cluster' is too generic and probably adds no useful info.
Whatever it
Hi.
Do you have a pointer to how you extended the driver to have this?
Thanks!
Ricardo
On Fri, Nov 18, 2016 at 2:02 PM, ZZelle wrote:
> Hello,
>
> AFAIK, it's not possible.
>
> I did a similar thing by extending the neutron iptables driver in order
> to set "pre-rules".
>
> Best
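For context, below is a minimal sketch of the kind of extension ZZelle
describes, assuming the IptablesFirewallDriver class and its
prepare_port_filter/add_rule interfaces from neutron.agent.linux at the
time - this is not ZZelle's actual code:

# Hedged sketch: extend the neutron iptables firewall driver to inject
# "pre-rules" ahead of the generated security group rules. Class and
# method names are assumptions, not code from this thread.
from neutron.agent.linux import iptables_firewall


class PreRuleIptablesFirewallDriver(iptables_firewall.IptablesFirewallDriver):
    """Firewall driver that prepends site-specific rules for every port."""

    # Hypothetical example rule: always accept traffic from an internal net.
    PRE_RULES = ['-s 10.0.0.0/8 -j ACCEPT']

    def prepare_port_filter(self, port):
        super(PreRuleIptablesFirewallDriver, self).prepare_port_filter(port)
        for rule in self.PRE_RULES:
            # top=True pushes the rule to the head of the chain, so it is
            # evaluated before the security group generated rules.
            self.iptables.ipv4['filter'].add_rule('FORWARD', rule, top=True)
        self.iptables.apply()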
Hi.
It would be great to meet in any case.
We've been exploring Atomic system containers (as in 'atomic install
--system ...') for our internal plugins at CERN, and are having some
issues with runc and selinux definitions, plus some atomic command
bugs. It's mostly due to the config.json being a hard
Hi.
I think option 1 is the best one right now, mostly to reduce the
impact on the ongoing development.
Upgrades, flattening, template versioning and node groups are supposed
to land a lot of patches in the next couple of months; moving the drivers
into separate repos now could be a distraction.
>> > Excerpts from Steve Baker's message of 2016-08-08 10:11:29 +1200:
>> >> On 05/08/16 21:48, Ricardo Rocha wrote:
>> >> > Hi.
>> >> >
>> >> > Quick update is 1000 nodes and 7 million reqs/sec :) - and the number
>> >> > of requests should b
Hi.
On Mon, Aug 8, 2016 at 6:17 PM, Zane Bitter <zbit...@redhat.com> wrote:
> On 05/08/16 12:01, Hongbin Lu wrote:
>>
>> Add [heat] to the title to get more feedback.
>>
>>
>>
>> Best regards,
>>
>> Hongbin
>>
>>
>>
On Mon, Aug 8, 2016 at 11:51 AM, Ricardo Rocha <rocha.po...@gmail.com> wrote:
> Hi.
>
> On Mon, Aug 8, 2016 at 1:52 AM, Clint Byrum <cl...@fewbar.com> wrote:
>> Excerpts from Steve Baker's message of 2016-08-08 10:11:29 +1200:
>>> On 05/08/16 21:48, Ricardo Rocha wrote:
Hi.
On Mon, Aug 8, 2016 at 1:52 AM, Clint Byrum <cl...@fewbar.com> wrote:
> Excerpts from Steve Baker's message of 2016-08-08 10:11:29 +1200:
>> On 05/08/16 21:48, Ricardo Rocha wrote:
>> > Hi.
>> >
>> > Quick update is 1000 nodes and 7 million reqs/sec :)
> It should be fairly
> straightforward. You just need to make
> sure the local storage of the flavor is sufficient to host the containers
> in the benchmark.
> If you think this is a common scenario, we can open a blueprint for this
> option.
> Ton,
>
> "OpenStack Development Mailing List \(not for usage questions\)" <
> openstack-dev@lists.openstack.org>
> Date: 06/17/2016 12:10 PM
> Subject: Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of
> nodes
>
> --
>
>
>
> Thanks Ricardo for sharing the data,
Hi.
Just thought the Magnum team would be happy to hear :)
We had access to some hardware for the last couple of days, and tried some
tests with Magnum and Kubernetes - following the original blog post
from the kubernetes team.
Got a 200 node kubernetes bay (800 cores) reaching 2 million requests / sec
> CERN's location: https://goo.gl/maps/DWbDVjnAvJJ2
>
>
>
> Cheers,
>
> Spyros
>
>
>
>
>
> On 8 June 2016 at 16:01, Hongbin Lu <hongbin...@huawei.com> wrote:
>
> Ricardo,
>
> Thanks for the offer. May I know where the exact loca
Hi Hongbin.
Not sure how this fits everyone, but we would be happy to host it at
CERN. How do people feel about it? We can add a nice tour of the place
as a bonus :)
Let us know.
Ricardo
On Tue, Jun 7, 2016 at 10:32 PM, Hongbin Lu wrote:
> Hi all,
>
>
>
> Please find
+1 on this. Another use case would be 'fast storage' for DBs and 'any
storage' for memcache and web servers. Relying on labels for this
makes it really simple.
The alternative of doing it with multiple clusters adds complexity to
the cluster description(s) users have to maintain.
On Fri, Jun 3, 2016 at 1:54 AM,
Hi.
On Mon, May 2, 2016 at 7:11 PM, Cammann, Tom wrote:
> Thanks for the write-up Hongbin, and thanks to all those who contributed to
> the design summit. A few comments on the summaries below.
>
> 6. Ironic Integration:
>
/2016/04/containers-and-cern-cloud.html
Hopefully the pointers to the relevant blueprints for some of the
issues we found will be useful for others.
Cheers,
Ricardo
On Fri, Mar 18, 2016 at 3:42 PM, Ricardo Rocha <rocha.po...@gmail.com> wrote:
> Hi.
>
> We're running a Magnum pilot
Hi Hongbin.
On Wed, Apr 20, 2016 at 8:13 PM, Hongbin Lu wrote:
>
>
>
>
> From: Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)
> [mailto:li-gong.d...@hpe.com]
> Sent: April-20-16 3:39 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject:
Hi.
On Wed, Apr 20, 2016 at 5:43 PM, Fox, Kevin M wrote:
> If the ops are deploying a cloud big enough to run into that problem, I
> think they can deploy a scaled-out docker registry of some kind too, that
> the images can point to? Last I looked, it didn't seem very
Hi.
On Wed, Mar 30, 2016 at 3:59 AM, Eli Qiao wrote:
>
> Hi Hongbin
>
> Thanks for starting this thread,
>
>
>
> I initially proposed this bp because I am in China, behind the Great
> Firewall, and cannot access gcr.io directly; after checking our
> cloud-init
Hi.
We're on the way: the API is using haproxy load balancing in the same
way all openstack services do here - this part seems to work fine.
For the conductor we're blocked due to bay certificates - we don't
currently have barbican, so local storage was the only option. To get
them accessible on all
us to think creatively about how to strike the right balance
>>> between re-implementing existing technology, and making that technology
>>> easily accessible.
>>>
>>> Thanks,
>>>
>>> Adrian
>>>
>>>>
>>>> Best regards,
>>
>> adding tags to the docker daemon on the bay nodes as part of the swarm heat
>> template. That would allow the filter selection you described.
>>
>> Adrian
>>
>> > On Feb 23, 2016, at 4:11 PM, Ricardo Rocha <rocha.po...@gmail.com>
>> > wrote:
Hi.
Has anyone looked into having magnum bay nodes deployed in different
availability zones? The goal would be to have multiple instances of a
container running on nodes across multiple AZs.
Looking at docker swarm, this could be achieved using (for example)
affinity filters based on labels.
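As an illustration of the label-based filters mentioned above, here is a
sketch using the Docker SDK for Python against a hypothetical standalone
swarm endpoint - in standalone swarm, scheduling constraints are passed
as environment variables and match labels set on each node's docker
daemon (which could be set per node, e.g. through the Heat template):

# Hedged sketch: schedule a container only on bay nodes whose docker
# daemon was started with --label zone=az1. Endpoint and label values
# are placeholders.
import docker

client = docker.DockerClient(base_url='tcp://swarm-manager.example.org:2375')

client.containers.run(
    'nginx',
    detach=True,
    # standalone swarm reads scheduling constraints from environment vars
    environment=['constraint:zone==az1'])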
Hi.
I agree with this. It's great that magnum does the setup and config of the
container cluster backends, but if that were all, we could just call heat
ourselves.
Taking a common use case we have:
- create and expose a volume using an nfs backend so that multiple
clients can access the same data
Hi.
We've started implementing a similar module here, I just pushed it to:
https://github.com/cernops/puppet-magnum
It already sets up a working magnum-api/conductor, and we'll add
configuration for additional conf options this week - to allow
alternate heat templates for the bays.
I've done some