The slides for the Tokyo talk are available on SlideShare:
http://www.slideshare.net/huengo965921/exploring-magnum-and-senlin-integration-for-autoscaling-containers

Ton,




From:    Jay Lau <[email protected]>
To:      "OpenStack Development Mailing List (not for usage questions)" <[email protected]>
Date:    11/17/2015 10:05 PM
Subject: Re: [openstack-dev] [magnum] Autoscaling both clusters and containers



It's great that we're discussing this on the mailing list. I filed a blueprint here:
https://blueprints.launchpad.net/magnum/+spec/two-level-auto-scaling
and I am planning a spec for it. You can get some early ideas from the talk Ton
pointed to here:
https://www.openstack.org/summit/tokyo-2015/videos/presentation/exploring-magnum-and-senlin-integration-for-autoscaling-containers


@Ton, would it be possible to publish the slides to SlideShare? ;-)

Our thinking was to introduce an autoscaler service to Magnum, just like what
GCE does now; I will keep you updated when a spec is ready for review.

On Wed, Nov 18, 2015 at 1:22 PM, Egor Guz <[email protected]> wrote:
  Ryan

  I haven't seen any proposals/implementations from Mesos/Swarm (but I am
  not following the Mesos and Swarm communities very closely these days).
  But Kubernetes 1.1 has pod autoscaling
  (https://github.com/kubernetes/kubernetes/blob/master/docs/design/horizontal-pod-autoscaler.md),
  which should cover container auto-scaling. Also, there is a PR for cluster
  auto-scaling (https://github.com/kubernetes/kubernetes/pull/15304), which
  has an implementation for GCE, but OpenStack support could be added as well.
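
  To make the pod-autoscaling part concrete, the rule in that design doc
  boils down to roughly the following (a rough Python sketch, not the actual
  controller code; the 80% CPU target is only an illustrative assumption):

  import math

  def target_pod_count(per_pod_cpu_utilization, target=0.8):
      # Scaling rule from the horizontal-pod-autoscaler design doc:
      #   TargetNumOfPods = ceil(sum(CurrentPodsCPUUtilization) / Target)
      # Utilization is expressed as a fraction of each pod's CPU request.
      return int(math.ceil(sum(per_pod_cpu_utilization) / target))

  # Three pods running hot against an (assumed) 80% target -> scale to 4 pods.
  print(target_pod_count([0.95, 1.10, 0.90]))  # prints 4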

  —
  Egor

  From: Ton Ngo <[email protected]>
  Reply-To: "OpenStack Development Mailing List (not for usage questions)" <[email protected]>
  Date: Tuesday, November 17, 2015 at 16:58
  To: "OpenStack Development Mailing List (not for usage questions)" <[email protected]>
  Subject: Re: [openstack-dev] [magnum] Autoscaling both clusters and containers


  Hi Ryan,
  There was a talk at the last Summit on this topic, exploring the options
  with Magnum, Senlin, Heat, and Kubernetes:
  https://www.openstack.org/summit/tokyo-2015/videos/presentation/exploring-magnum-and-senlin-integration-for-autoscaling-containers

  A demo was shown with Senlin interfacing with Magnum to autoscale.
  There was also a Magnum design session to discuss this same topic. The
  use cases are similar to what you describe. Because the subject is
  complex, there are many moving parts, and multiple teams/projects are
  involved, one outcome of the design session is that we will write a spec
  on autoscaling containers and clusters. A patch should be coming soon, so
  it would be great to have your input on the spec.
  Ton,


  From: Ryan Rossiter <[email protected]>
  To: [email protected]
  Date: 11/17/2015 02:05 PM
  Subject: [openstack-dev] [magnum] Autoscaling both clusters and containers




  Hi all,

  I was having a discussion with a teammate with respect to container
  scaling. He likes the aspect of nova-docker that allows you to scale
  (essentially) infinitely almost instantly, assuming you are using a
  large pool of compute hosts. In the case of Magnum, if I'm a container
  user, I don't want to be paying for a ton of vms that just sit idle, but
  I also want to have enough vms to handle my scale when I infrequently
  need it. But above all, when I need scale, I don't want to suddenly have
  to go boot vms and wait for them to start up when I really need it.

  I saw [1] which discusses container scaling, but I'm thinking we can
  take this one step further. If I don't want to pay for a lot of vms when
  I'm not using them, could I set up an autoscale policy that allows my
  cluster to expand when my container concentration gets too high on my
  existing cluster? It's kind of a case of nested autoscaling. The
  containers are scaled based on request demand, and the cluster vms are
  scaled based on container count.
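
  To make that nesting concrete, here is a rough sketch of the outer-level
  decision I have in mind (the numbers and the 80% headroom are made up, and
  the actual Magnum call is left abstract):

  import math

  def desired_node_count(container_count, containers_per_node, headroom=0.8):
      # Size the bay so current containers fill at most `headroom` of its
      # capacity, keeping spare room so we rarely wait for new VMs to boot.
      usable_per_node = containers_per_node * headroom
      return max(1, int(math.ceil(container_count / usable_per_node)))

  # Inner level: the COE (e.g. Kubernetes) scales containers on request demand.
  # Outer level: scale the bay's VM count on container count.
  current_nodes = 5
  desired = desired_node_count(container_count=45, containers_per_node=10)
  if desired > current_nodes:
      # Here the autoscaler would grow the bay, e.g. by updating the bay's
      # node count through the Magnum API (call left abstract on purpose).
      print("scale bay from %d to %d nodes" % (current_nodes, desired))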

  I'm unsure of the details of Senlin, but at least looking at Heat
  autoscaling [2], this would not be very hard to add to the Magnum
  templates, and we would forward those on through the bay API. (I figure
  we would do this through the bay, not baymodel, because I can see
  similar clusters that would want to be scaled differently).
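
  For what it's worth, once the templates expose a scale-out
  OS::Heat::ScalingPolicy, an external monitor could trigger it with a plain
  POST to the policy's pre-signed webhook URL (a minimal sketch; the URL
  below is a placeholder, not a real endpoint):

  import requests

  # Pre-signed webhook URL that Heat generates for the stack's scale-out
  # OS::Heat::ScalingPolicy (placeholder value below).
  scale_out_url = "https://heat.example.com:8000/v1/signal/..."

  # An empty POST to the pre-signed URL fires the policy, which adds
  # instances to the OS::Heat::AutoScalingGroup backing the bay.
  resp = requests.post(scale_out_url)
  resp.raise_for_status()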

  Let me know if I'm totally crazy or if this is a good idea (or if you
  guys have already talked about this before). I would be interested in
  your feedback.

  [1]
  http://lists.openstack.org/pipermail/openstack-dev/2015-November/078628.html

  [2] https://wiki.openstack.org/wiki/Heat/AutoScaling#AutoScaling_API

  --
  Thanks,

  Ryan Rossiter (rlrossit)



--
Thanks,

Jay Lau (Guangya Liu)

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: [email protected]?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
