Hi Martin, Thanks for your input on this. Really sorry for the late response. Please find my answers inline.
On Tue, Sep 23, 2014 at 10:50 PM, Martin Eppel (meppel) <[email protected]> wrote:

> Hi Rekha,
>
> · Conceptually we are suggesting to have 3 different types of autoscaling policies:
>
>   o scaling by statistics,
>   o scaling by group member, and
>   o scaling by group.
>
> Based on this, the general algorithm would be like this (in order):
>
> 1. Scale VMs until the cluster maximum is reached (in at least one of the clusters within the group - scale by statistics)

+1. This will align with the current Stratos behaviour.

> 2. Scale up a new cluster of the same type as the one which has reached the maximum of VMs, until the max member number is reached (scale by group member)

I'm not sure whether we can scale the cluster that has maxed out by creating a new cluster. In Stratos, a cluster is created when you subscribe (in our case, when deploying an application). After that, you can expand the cluster with any number of instances using the deployment policy. If we introduce a new cluster, it may bring up several complications, since the LB keeps track of members on a per-cluster basis and Stratos is tightly coupled to a one-to-one mapping between cluster and subscription. I believe we can instead keep enough room in the deployment policy and let the cluster span the available partitions/network partitions. We could come up with an algorithm such as round robin within a network partition, and one after another between network partitions (currently not supported in Stratos). This is just a thought. Otherwise, AFAIK, we need to find a way to do this with only one cluster in order to align with the current model.

> 3. Scale up a new group instance of the same group type (or definition), including all the respective dependencies (scale by group)

This should be possible, as we maintain a group policy saying min 1 - max 3.

> Defining ratios between dependent clusters would be an extra property in perhaps the group scaling policy?
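To make the idea above concrete, here is a minimal sketch of the suggested placement order: round robin among the partitions within one network partition, and only moving to the next network partition once the current one is full. The function and data-structure names are hypothetical for illustration, not actual Stratos APIs:

```python
from itertools import cycle

def build_partition_iterator(network_partitions):
    """Yield (network_partition, partition_id) placements: round robin
    among the partitions of one network partition, and only move to the
    next network partition once the current one is at capacity."""
    for np_name, partitions in network_partitions:
        rr = cycle(range(len(partitions)))      # round robin within this network partition
        capacity = sum(p["max"] for p in partitions)
        placed = {p["id"]: 0 for p in partitions}
        total = 0
        while total < capacity:
            p = partitions[next(rr)]
            if placed[p["id"]] < p["max"]:      # skip partitions already full
                placed[p["id"]] += 1
                total += 1
                yield np_name, p["id"]
        # "One after another": the next network partition is only used
        # after this one has been exhausted.

network_partitions = [
    ("np-1", [{"id": "p1", "max": 2}, {"id": "p2", "max": 1}]),
    ("np-2", [{"id": "p3", "max": 2}]),
]
placements = list(build_partition_iterator(network_partitions))
# [('np-1', 'p1'), ('np-1', 'p2'), ('np-1', 'p1'), ('np-2', 'p3'), ('np-2', 'p3')]
```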
Yes. Since defining a ratio is for the siblings, we can define it in the group scaling policy.

> As we have to keep track of each group instance, I think it is important to distinguish in the autoscaling group model the
>
> · Group type (or group definition)
> · Group instance of a particular type – unique for each group instance
>
> which requires us to maintain two parameters in the group model (which, if I am not mistaken, would be group name and group alias, correct?)

Yes, we already maintain those two parameters. As you mentioned, we can keep a list of group aliases when scaling by group. That is a good point. Anyway, we haven't gone into much detail on scale by group member and scale by group; I will start a separate discussion on that.

> In the model below would we create a Group monitor for each group instance or only for a group type?
>
> Also, I would suggest for a cluster to become "ACTIVE" the number of VMs in active state have to reach a configurable min number of active VMs, WDYT?

+1. This is what is currently implemented in order to send the ClusterActivatedEvent.

Thanks,
Reka

> Thanks
>
> Martin
>
> *From:* Reka Thirunavukkarasu [mailto:[email protected]]
> *Sent:* Monday, September 22, 2014 12:24 AM
> *To:* dev
> *Subject:* Re: [Grouping][Part-1] Decision making in Autoscaler with Composite Application
>
> Hi Lahiru,
>
> On Mon, Sep 22, 2014 at 12:30 PM, Lahiru Sandaruwan <[email protected]> wrote:
>
> Hi Reka,
>
> On Mon, Sep 22, 2014 at 11:35 AM, Reka Thirunavukkarasu <[email protected]> wrote:
>
> Hi,
>
> This is to discuss how the autoscaler takes decisions based on the composite application and its dependencies.
>
> Problem
> =======
>
> - Autoscaler has to receive the ApplicationCreatedEvent and build up its own logical model based on the apps and their dependencies.
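The group type vs. group instance distinction above could be modelled with two identifiers per group, a shared name and a per-instance alias. This is an illustrative sketch only, not the actual Stratos model; all names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class GroupDefinition:
    """The group type (definition): shared by every instance of the group."""
    name: str           # group type / definition name
    min_instances: int  # e.g. min 1
    max_instances: int  # e.g. max 3

@dataclass
class GroupInstance:
    """One running instance of a group type, identified by a unique alias."""
    definition: GroupDefinition
    alias: str          # unique per instance

# Scaling by group keeps a list of aliases per definition, so each
# instance (and its monitor) can be tracked separately.
definition = GroupDefinition(name="my-group", min_instances=1, max_instances=3)
instances = [GroupInstance(definition, alias=f"my-group-{i}") for i in range(1, 3)]
aliases = [g.alias for g in instances]   # ['my-group-1', 'my-group-2']
```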
> - It has to make sure the cluster is active before starting the dependent cluster.
>
> - It has to make sure, if something happens to a dependent cluster/group, what is to be done for the dependent cluster/group or for the parent, according to the termination behaviour.
>
> - It has to make a decision, when a scale-up decision is taken on a cluster, on how to handle the dependents according to the scale-up dependencies, such as a scale-up ratio between dependents like 3cluster1:2cluster2.
>
> How is the scaling up/down logic based on statistics handled with this? Which one gets the preference? The stat-based # of instances or the dependency-based # of instances?
>
> If no dependencies are defined for a cluster, then it is purely based on stats. But if scale-up dependencies are there, then the dependency gets priority over the stats. In this case, let's say you received stats and decided to spin up 2 instances of cluster1, which depends on cluster2 with a 2cluster1:3cluster2 ratio. Then the relevant GroupMonitor should decide to spin up 3 instances in cluster2, irrespective of whatever stats were received for cluster2. This is what I understood.
>
> @Lakmal/Martin, can you also confirm this?
>
> Thanks,
> Reka
>
> - Make sure to follow the termination order when killing an instance.
>
> Eg: kill-dependent: kill the child cluster when its dependent goes away
>
> kill-none: don't do anything to the children
>
> kill-all: needs discussion on how to handle this, i.e. whether to kill all the parents, or to kill them (or not) according to the parent's termination behaviour.
>
> Proposed Solution
> ===============
>
> Part-1: Introduction to hierarchy of monitors
>
> - Separate monitors are required in the Autoscaler to handle a composite application, which can be achieved by constructing the following monitor hierarchy in the autoscaler.
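The ratio example quoted above (stats pick 2 instances for cluster1; cluster2 then follows the 2cluster1:3cluster2 dependency regardless of its own stats) can be sketched as a small decision function. The names are illustrative only, not actual Stratos code:

```python
import math

def dependent_instance_count(scaled_count, ratio_from, ratio_to):
    """Instances for the dependent cluster, given the scaled cluster's
    count and the from:to ratio between them (rounded up)."""
    return math.ceil(scaled_count * ratio_to / ratio_from)

def decide_count(stats_count, dependency=None):
    """A scale-up dependency (if any) takes priority over the stats count."""
    if dependency is None:
        return stats_count   # no dependencies defined: purely stats-driven
    scaled_count, ratio_from, ratio_to = dependency
    return dependent_instance_count(scaled_count, ratio_from, ratio_to)

# cluster1 is scaled to 2 instances by stats; cluster2 follows the
# 2cluster1:3cluster2 ratio, irrespective of cluster2's own stats.
cluster2_count = decide_count(stats_count=1, dependency=(2, 2, 3))  # -> 3
cluster1_count = decide_count(stats_count=2)                        # -> 2
```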
> - As illustrated, the ApplicationMonitor and GroupMonitors have their own behaviour, such as monitoring a set of child monitors, which can be either GroupMonitors or AbstractClusterMonitors, according to their dependencies and according to the monitors' status changes such as ACTIVE, IN_MAINTENANCE, TERMINATED, etc. That's why they have been identified as sharing an abstract Monitor with common behaviours. A standalone AbstractClusterMonitor, on the other hand, will monitor the members in the cluster and is responsible for taking decisions based on autoscale parameters such as RIF, CPU usage and memory usage. Hence AbstractClusterMonitor is different from Monitor.
>
> - The ApplicationMonitor will consist of a set of GroupMonitors and AbstractClusterMonitors, and a GroupMonitor will likewise consist of a set of GroupMonitors and AbstractClusterMonitors.
>
> - The ApplicationMonitor will be responsible for starting its child monitors, and a GroupMonitor will also be responsible for starting its child monitors once it is started.
>
> Will continue to update with the rest of the solution on how to build up the dependencies based on startup order, kill behaviour and scale dependencies, and the event-driven ApplicationMonitor and GroupMonitor.
>
> Please share your suggestions on this.
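The monitor hierarchy described above (ApplicationMonitor and GroupMonitor holding child monitors, cluster monitors at the leaves) is essentially a composite pattern. A rough sketch, with a single ParentMonitor standing in for both ApplicationMonitor and GroupMonitor; class and method names are illustrative only:

```python
class Monitor:
    """Common behaviour: tracks the status (ACTIVE, IN_MAINTENANCE,
    TERMINATED, ...) of the entity being monitored."""
    def __init__(self, monitor_id):
        self.id = monitor_id
        self.status = "CREATED"

class ParentMonitor(Monitor):
    """Stands in for ApplicationMonitor/GroupMonitor: monitors a set of
    child monitors, which can themselves be parents or cluster monitors."""
    def __init__(self, monitor_id):
        super().__init__(monitor_id)
        self.children = []

    def start_children(self):
        # A parent starts its child monitors once it is started itself;
        # each started GroupMonitor would then start its own children.
        for child in self.children:
            child.status = "ACTIVE"

class ClusterMonitor(Monitor):
    """Leaf monitor: watches the members of one cluster and takes scaling
    decisions from stats such as RIF, CPU usage and memory usage."""

app = ParentMonitor("app-1")
group = ParentMonitor("group-1")
group.children.append(ClusterMonitor("cluster-1"))
app.children.append(group)
app.start_children()
# group.status == 'ACTIVE'
```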
> > > > > > -- > > Reka Thirunavukkarasu > Senior Software Engineer, > WSO2, Inc.:http://wso2.com, > > Mobile: +94776442007 > > > > > > > > -- > > -- > Lahiru Sandaruwan > > Committer and PMC member, Apache Stratos, > Senior Software Engineer, > WSO2 Inc., http://wso2.com > > lean.enterprise.middleware > > email: [email protected] cell: (+94) 773 325 954 > blog: http://lahiruwrites.blogspot.com/ > twitter: http://twitter.com/lahirus > linked-in: http://lk.linkedin.com/pub/lahiru-sandaruwan/16/153/146 > > > > > > > > -- > > Reka Thirunavukkarasu > Senior Software Engineer, > WSO2, Inc.:http://wso2.com, > > Mobile: +94776442007 > > > -- Reka Thirunavukkarasu Senior Software Engineer, WSO2, Inc.:http://wso2.com, Mobile: +94776442007
