On 05/08/16 21:48, Ricardo Rocha wrote:
Hi.

Quick update: 1000 nodes and 7 million reqs/sec :) - and the number of requests should be higher, but we had some internal issues. We have a submission for Barcelona to provide a lot more details.

But a couple questions came during the exercise:

1. Do we really need a volume in the VMs? On large clusters this is a burden; shouldn't local storage be enough?

2. We observe a significant delay (~10 min, which is half the total time to deploy the cluster) in Heat while it seems to be crunching the kube_minions nested stacks. Once it's done, it still adds new stacks gradually, so it doesn't look like it precomputed all the info in advance.

Anyone tried to scale Heat to stacks this size? We end up with a stack with:
* 1000 nested stacks (depth 2)
* 22000 resources
* 47008 events
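For a rough sense of scale, here is the per-minion footprint implied by those counts (a back-of-the-envelope sketch only; the even split across nested stacks is an assumption, not something measured):

```python
# Back-of-the-envelope sizing from the stack counts quoted above.
nodes = 1000            # nested kube_minion stacks (depth 2)
total_resources = 22000
total_events = 47008

resources_per_minion = total_resources / nodes  # ~22 resources per nested stack
events_per_minion = total_events / nodes        # ~47 events per nested stack

print(resources_per_minion, events_per_minion)
```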

And we already changed most of the timeout/retry values for RPC to get this working.

This delay is already visible in clusters of 512 nodes, but 40% of the deployment time at 1000 nodes seems like something we could improve. Any hints on Heat configuration optimizations for large stacks are very welcome.

Yes, we recommend you set the following in /etc/heat/heat.conf under [DEFAULT]:
max_resources_per_stack = -1

Enforcing this limit on large stacks has a very high overhead; we make this change in the TripleO undercloud too.
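For reference, the snippet would look like this in heat.conf (a minimal sketch of the setting recommended above):

```ini
# /etc/heat/heat.conf
[DEFAULT]
# Skip the per-stack resource count check; enforcing it on very large
# stacks is expensive. -1 disables the limit entirely.
max_resources_per_stack = -1
```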

Cheers,
  Ricardo

On Sun, Jun 19, 2016 at 10:59 PM, Brad Topol <bto...@us.ibm.com <mailto:bto...@us.ibm.com>> wrote:

    Thanks Ricardo! This is very exciting progress!

    --Brad


    Brad Topol, Ph.D.
    IBM Distinguished Engineer
    OpenStack
    (919) 543-0646
    Internet: bto...@us.ibm.com <mailto:bto...@us.ibm.com>
    Assistant: Kendra Witherspoon (919) 254-0680


    From: Ton Ngo/Watson/IBM@IBMUS
    To: "OpenStack Development Mailing List \(not for usage
    questions\)" <openstack-dev@lists.openstack.org
    <mailto:openstack-dev@lists.openstack.org>>
    Date: 06/17/2016 12:10 PM
    Subject: Re: [openstack-dev] [magnum] 2 million requests / sec,
    100s of nodes


    ------------------------------------------------------------------------



    Thanks Ricardo for sharing the data, this is really encouraging!
    Ton,


    From: Ricardo Rocha <rocha.po...@gmail.com
    <mailto:rocha.po...@gmail.com>>
    To: "OpenStack Development Mailing List (not for usage questions)"
    <openstack-dev@lists.openstack.org
    <mailto:openstack-dev@lists.openstack.org>>
    Date: 06/17/2016 08:16 AM
    Subject: [openstack-dev] [magnum] 2 million requests / sec, 100s
    of nodes
    ------------------------------------------------------------------------



    Hi.

    Just thought the Magnum team would be happy to hear :)

    We had access to some hardware the last couple days, and tried some
    tests with Magnum and Kubernetes - following an original blog post
    from the kubernetes team.

    Got a 200 node kubernetes bay (800 cores) reaching 2 million
    requests / sec.
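As a quick sanity check on those numbers (simple arithmetic, nothing Magnum-specific):

```python
# Throughput per core and per node for the 200-node bay described above.
nodes = 200
cores = 800
requests_per_sec = 2_000_000

per_core = requests_per_sec / cores   # 2500 req/s per core
per_node = requests_per_sec / nodes   # 10000 req/s per node
print(per_core, per_node)
```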

    Check here for some details:
    https://openstack-in-production.blogspot.ch/2016/06/scaling-magnum-and-kubernetes-2-million.html

    We'll try bigger in a couple weeks, also using the Rally work from
    Winnie, Ton and Spyros to see where it breaks. Already identified a
    couple issues, will add bugs or push patches for those. If you have
    ideas or suggestions for the next tests let us know.

    Magnum is looking pretty good!

    Cheers,
    Ricardo

    __________________________________________________________________________
    OpenStack Development Mailing List (not for usage questions)
    Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
    http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev













