Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-09 Thread Ricardo Rocha
On Tue, Aug 9, 2016 at 10:00 PM, Clint Byrum  wrote:
> Excerpts from Ricardo Rocha's message of 2016-08-08 11:51:00 +0200:
>> [...]
>>
>> We have it already at -1 for these tests. As you say, a malicious user
>> could DoS; right now this is manageable in our environment. But maybe
>> move it to a per-tenant value, or some special policy? The stacks are
>> created under a separate domain for Magnum (for trustees); we could
>> also use that for separation.
>>
>> A separate Heat instance sounds like overkill.
>>
>
> It does, but there's really no way around it. If Magnum users are going
> to create massive stacks, then all of the heat engines will need to be
> able to handle massive stacks anyway, and a quota system would just mean
> that only Magnum gets to fully utilize those engines, which doesn't
> really make much sense at all, does it?

The best might be to see if there are improvements possible either in
the Heat engine (a lot of what Zane mentioned seems like it would help,
and we're willing to try that) or in the way Magnum creates the stacks.

In any case, things work right now, just not perfectly yet. It's still
OK to get 1000 node clusters deployed in < 25min; people can handle that :)

Thanks!

Ricardo



Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-09 Thread Clint Byrum
Excerpts from Ricardo Rocha's message of 2016-08-08 11:51:00 +0200:
> Hi.
> 
> [...]
> 
> We have it already at -1 for these tests. As you say, a malicious user
> could DoS; right now this is manageable in our environment. But maybe
> move it to a per-tenant value, or some special policy? The stacks are
> created under a separate domain for Magnum (for trustees); we could
> also use that for separation.
> 
> A separate Heat instance sounds like overkill.
> 

It does, but there's really no way around it. If Magnum users are going
to create massive stacks, then all of the heat engines will need to be
able to handle massive stacks anyway, and a quota system would just mean
that only Magnum gets to fully utilize those engines, which doesn't
really make much sense at all, does it?



Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-08 Thread Ricardo Rocha
On Mon, Aug 8, 2016 at 11:51 AM, Ricardo Rocha  wrote:
> Hi.
>
> [...]
>
> We have it already at -1 for these tests. As you say, a malicious user
> could DoS; right now this is manageable in our environment. But maybe
> move it to a per-tenant value, or some special policy? The stacks are
> created under a separate domain for Magnum (for trustees); we could
> also use that for separation.

For reference, we also changed max_stacks_per_tenant, which is:
# Maximum number of stacks any one tenant may have active at one time.
# (integer value)

For the 1000 node bay test we had to increase it.
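
In heat.conf terms the combination looks something like this (a sketch;
the values are illustrative, not the exact ones from our deployment):

    [DEFAULT]
    # disable the per-stack resource count check entirely
    max_resources_per_stack = -1
    # a bay adds one nested stack per node, so keep this well above the
    # largest planned cluster size (the default is 100)
    max_stacks_per_tenant = 10000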



Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-08 Thread Ricardo Rocha
Hi.

On Mon, Aug 8, 2016 at 1:52 AM, Clint Byrum  wrote:
> Excerpts from Steve Baker's message of 2016-08-08 10:11:29 +1200:
>> [...]
>> Yes, we recommend you set the following in /etc/heat/heat.conf [DEFAULT]:
>> max_resources_per_stack = -1
>>
>> Enforcing this for large stacks has a very high overhead; we make this
>> change in the TripleO undercloud too.
>>
>
> Wouldn't this necessitate having a private Heat just for Magnum? Not
> having a resource limit per stack would leave your Heat engines
> vulnerable to being DoS'd by malicious users, since one can create many,
> many thousands of resources, and thus Python objects, in just a couple
> of cleverly crafted templates (which is why I added the setting).
>
> This makes perfect sense in the undercloud of TripleO, which is a
> private, single-tenant OpenStack. But for Magnum... now you're talking
> about the Heat that users have access to.

We have it already at -1 for these tests. As you say, a malicious user
could DoS; right now this is manageable in our environment. But maybe
move it to a per-tenant value, or some special policy? The stacks are
created under a separate domain for Magnum (for trustees); we could
also use that for separation.

A separate Heat instance sounds like overkill.

Cheers,
Ricardo



Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-07 Thread Ton Ngo

Hi Ricardo,
Great to have feedback from a real use case. Spyros and I had a
discussion on this in Austin and we sketched out the implementation.
Once you open the blueprint, we will add the details and consider
additional scenarios.
Ton,



From: Ricardo Rocha <rocha.po...@gmail.com>
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: 08/07/2016 12:59 PM
Subject: Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

Hi Ton.

I think we should. Also in cases where multiple volume types are available
(in our case with different IOPS), there would be additional parameters
required to select the volume type. I'll add it this week.

It's a detail though; spawning container clusters with Magnum is now super
easy (and fast!).

Cheers,
  Ricardo

Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-07 Thread Clint Byrum
Excerpts from Steve Baker's message of 2016-08-08 10:11:29 +1200:
> On 05/08/16 21:48, Ricardo Rocha wrote:
> > [...]
> >
> Yes, we recommend you set the following in /etc/heat/heat.conf [DEFAULT]:
> max_resources_per_stack = -1
>
> Enforcing this for large stacks has a very high overhead; we make this
> change in the TripleO undercloud too.
> 

Wouldn't this necessitate having a private Heat just for Magnum? Not
having a resource limit per stack would leave your Heat engines
vulnerable to being DoS'd by malicious users, since one can create many,
many thousands of resources, and thus Python objects, in just a couple
of cleverly crafted templates (which is why I added the setting).

This makes perfect sense in the undercloud of TripleO, which is a
private, single-tenant OpenStack. But for Magnum... now you're talking
about the Heat that users have access to.
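
For instance, a couple of nested OS::Heat::ResourceGroup levels multiply
fast. A sketch of the kind of template the limit guards against (names
are illustrative; inner.yaml would be a second small template supplied
by the same user, again expanding to ~1000 tiny resources each):

    heat_template_version: 2014-10-16

    resources:
      outer:
        type: OS::Heat::ResourceGroup
        properties:
          count: 1000
          resource_def:
            # ~1000 x ~1000 = a million resource objects from a few lines
            type: inner.yaml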



Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-07 Thread Steve Baker

On 05/08/16 21:48, Ricardo Rocha wrote:

> [...]
>
> This delay is already visible in clusters of 512 nodes, but 40% of the
> time in 1000 nodes seems like something we could improve. Any hints on
> Heat configuration optimizations for large stacks are very welcome.



Yes, we recommend you set the following in /etc/heat/heat.conf [DEFAULT]:
max_resources_per_stack = -1

Enforcing this for large stacks has a very high overhead; we make this
change in the TripleO undercloud too.
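
In config-file form, that amounts to (a minimal sketch; -1 disables the
check, whose default limit is 1000 resources per stack):

    [DEFAULT]
    # do not enforce a resource count limit per stack
    max_resources_per_stack = -1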




Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-07 Thread Roman Vasilets
Hi,
Great to hear it! From the view of the Rally team =)

-Best regards, Roman Vasylets

On Sun, Aug 7, 2016 at 10:55 PM, Ricardo Rocha <rocha.po...@gmail.com>
wrote:

> Hi Ton.
>
> I think we should. Also in cases where multiple volume types are available
> (in our case with different IOPS), there would be additional parameters
> required to select the volume type. I'll add it this week.
>
> It's a detail though; spawning container clusters with Magnum is now super
> easy (and fast!).
>
> Cheers,
>   Ricardo

Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-07 Thread Ricardo Rocha
Hi Ton.

I think we should. Also in cases where multiple volume types are available
(in our case with different IOPS), there would be additional parameters
required to select the volume type. I'll add it this week.

It's a detail though; spawning container clusters with Magnum is now super
easy (and fast!).
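
As a sketch of how this could look on the client side, with the existing
size flag plus a hypothetical flag for the type (--docker-volume-type
does not exist; it is the parameter to be added):

    magnum baymodel-create --name k8sbaymodel \
        --image-id fedora-atomic-latest \
        --coe kubernetes \
        --docker-volume-size 25 \
        --docker-volume-type high-iops   # hypothetical new parameter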

Cheers,
  Ricardo

On Fri, Aug 5, 2016 at 5:11 PM, Ton Ngo <t...@us.ibm.com> wrote:

> Hi Ricardo,
> For your question 1, you can modify the Heat template to not create the
> Cinder volume and tweak the call to
> configure-docker-storage.sh to use local storage. It should be fairly
> straightforward. You just need to make
> sure the local storage of the flavor is sufficient to host the containers
> in the benchmark.
> If you think this is a common scenario, we can open a blueprint for this
> option.
> Ton,

Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-05 Thread Ton Ngo

Hi Ricardo,
For your question 1, you can modify the Heat template to not create the
Cinder volume and tweak the call to configure-docker-storage.sh to use
local storage. It should be fairly straightforward. You just need to
make sure the local storage of the flavor is sufficient to host the
containers in the benchmark.
If you think this is a common scenario, we can open a blueprint for
this option.
Ton,
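
To make the shape of that change concrete, a minimal sketch of a minion
with the volume dropped (parameter names and the script flag are
illustrative, not the actual Magnum template):

    heat_template_version: 2014-10-16

    parameters:
      server_image:
        type: string
      server_flavor:
        type: string

    resources:
      # the OS::Cinder::Volume and OS::Cinder::VolumeAttachment
      # resources are removed entirely; docker storage is set up on
      # the flavor's local disk instead
      kube_minion:
        type: OS::Nova::Server
        properties:
          image: { get_param: server_image }
          flavor: { get_param: server_flavor }
          user_data: |
            #!/bin/sh
            # hypothetical flag; the real script is normally pointed
            # at the attached volume device
            configure-docker-storage.sh --local-storage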




Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-05 Thread Ricardo Rocha
Hi.

Quick update is 1000 nodes and 7 million reqs/sec :) - and the number of
requests should have been higher, but we had some internal issues. We have
a submission for Barcelona to provide a lot more details.

But a couple of questions came up during the exercise:

1. Do we really need a volume in the VMs? On large clusters this is a
burden, and local storage alone should be enough?

2. We observe a significant delay (~10 min, which is half the total time
to deploy the cluster) in Heat while it seems to be crunching the
kube_minions nested stacks. Once that's done, it still adds new stacks
gradually, so it doesn't look like it precomputed all the info in advance.

Anyone tried to scale Heat to stacks this size? We end up with a stack with:
* 1000 nested stacks (depth 2)
* 22000 resources
* 47008 events

And we already changed most of the timeout/retry values for RPC to get
this working.
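
For reference, that tuning lives in heat.conf as well; a sketch with
illustrative values (option names from the Heat/oslo.messaging options
of this era, not our exact settings):

    [DEFAULT]
    # allow slow stack operations to finish before RPC calls time out
    # (the default is 60 seconds)
    rpc_response_timeout = 600
    # more heat-engine workers help crunch the nested stacks in parallel
    num_engine_workers = 8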

This delay is already visible in clusters of 512 nodes, but 40% of the
total time at 1000 nodes seems like something we could improve. Any hints
on Heat configuration optimizations for large stacks are very welcome.

Cheers,
  Ricardo

On Sun, Jun 19, 2016 at 10:59 PM, Brad Topol <bto...@us.ibm.com> wrote:

> Thanks Ricardo! This is very exciting progress!
>
> --Brad


Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-06-19 Thread Brad Topol

Thanks Ricardo! This is very exciting progress!

--Brad


Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680





Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-06-17 Thread Steven Dake (stdake)
Ricardo,

As one of the original authors of Magnum, I'm super pleased to hear Magnum 
works at this scale with a kubernetes bay.  I don't think it would have done 
that when I finished working on Magnum - a testament to the great community 
around Magnum.

Regards
-steve

From: Ton Ngo <t...@us.ibm.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Friday, June 17, 2016 at 9:06 AM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes


Thanks Ricardo for sharing the data, this is really encouraging!
Ton,



Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-06-17 Thread Hongbin Lu
Ricardo,

Thanks for sharing. It is good to hear that Magnum works well with a
200-node cluster.

Best regards,
Hongbin



Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-06-17 Thread Kumari, Madhuri
Hi Ricardo,

Thanks for sharing it. The results seem great, and we will surely try to
fix the issues.

Cheers!

Regards,
Madhuri



Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-06-17 Thread Ton Ngo

Thanks Ricardo for sharing the data, this is really encouraging!
Ton,





[openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-06-17 Thread Ricardo Rocha
Hi.

Just thought the Magnum team would be happy to hear :)

We had access to some hardware the last couple days, and tried some
tests with Magnum and Kubernetes - following an original blog post
from the kubernetes team.

Got a 200 node kubernetes bay (800 cores) reaching 2 million requests / sec.

Check here for some details:
https://openstack-in-production.blogspot.ch/2016/06/scaling-magnum-and-kubernetes-2-million.html

We'll try bigger in a couple weeks, also using the Rally work from
Winnie, Ton and Spyros to see where it breaks. Already identified a
couple issues, will add bugs or push patches for those. If you have
ideas or suggestions for the next tests let us know.

Magnum is looking pretty good!

Cheers,
Ricardo
