Re: [ClusterLabs] Q: Resource balancing operation

2016-04-20 Thread Ken Gaillot
On 04/20/2016 01:17 AM, Ulrich Windl wrote:
> Hi!
> 
> I'm wondering: if you boot a node in a cluster, most resources will go
> to another node (if possible). Due to the configured stickiness, those
> resources will stay there.
> So I'm wondering whether (or how) I could cause a rebalance of
> resources on the cluster. I must admit that I don't understand the
> details of stickiness in relation to other parameters. In my
> understanding, stickiness should be related dynamically to a node's
> percentage of utilization, so that a resource running on a node that is
> "almost full" would dynamically lower its stickiness to allow resource
> migration.
> 
> So if you were going to implement a manual resource-rebalance
> operation, could you dynamically lower the stickiness of each resource
> (by some amount or by some factor), wait to see whether anything
> happens, and then repeat the process until the resources look balanced?
> "Looking balanced" should be no worse than the placement you would get
> if all resources were started while all cluster nodes are up.
> 
> Any spontaneous pros and cons of "resource rebalancing"?
> 
> Regards,
> Ulrich

Pacemaker gives you a few levers to pull. Stickiness and utilization
attributes (with a placement strategy) are the main ones.

Normally, pacemaker *will* continually rebalance according to what nodes
are available. Stickiness tells the cluster not to do that.

Whether you should use stickiness (and how much) depends mainly on how
significant the interruption is when a service is moved. For
a large database supporting a high-traffic website, stopping and
starting can take a long time and cost a lot of business -- so maybe you
want an infinite stickiness in that case, and only rebalance manually
during a scheduled window. For a small VM that can live-migrate quickly
and doesn't affect any of your customer-facing services, maybe you don't
mind setting a small or zero stickiness.
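For concreteness (a minimal sketch; the ids are arbitrary, and with
crmsh this is roughly "crm configure rsc_defaults
resource-stickiness=INFINITY"): a cluster-wide stickiness default lives
in rsc_defaults in the CIB, and any individual resource can override it
with its own resource-stickiness meta-attribute:

  <rsc_defaults>
    <meta_attributes id="rsc-options">
      <!-- default for every resource that doesn't set its own value -->
      <nvpair id="rsc-options-stickiness"
              name="resource-stickiness" value="INFINITY"/>
    </meta_attributes>
  </rsc_defaults>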

You can also use rules to make the process intelligent. For example, for
a server that provides office services, you could set a rule that sets
infinite stickiness during business hours, and small or zero stickiness
otherwise. That way, you'd get no disruptions when people are actually
using the service during the day, and at night, it would automatically
rebalance.
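A sketch of such a rule (ids are arbitrary; this closely follows the
rsc_defaults example in "Pacemaker Explained"): the higher-scored
meta_attributes block wins while its date_spec rule matches
(hours="9-16" covers 09:00 through 16:59, weekdays 1-5 are Mon-Fri),
and the lower-scored block applies the rest of the time:

  <rsc_defaults>
    <meta_attributes id="core-hours" score="2">
      <rule id="core-hours-rule" score="0">
        <date_expression id="core-hours-date" operation="date_spec">
          <date_spec id="core-hours-spec" hours="9-16" weekdays="1-5"/>
        </date_expression>
      </rule>
      <!-- no moves during business hours -->
      <nvpair id="core-hours-stickiness"
              name="resource-stickiness" value="INFINITY"/>
    </meta_attributes>
    <meta_attributes id="after-hours" score="1">
      <!-- free to rebalance at night and on weekends -->
      <nvpair id="after-hours-stickiness"
              name="resource-stickiness" value="0"/>
    </meta_attributes>
  </rsc_defaults>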

Normally, pacemaker's idea of "balancing" is simply to distribute
resources across the nodes as equally as possible by count. Utilization
attributes and placement strategies let you add more intelligence. For
example, you can define the number of cores per node or the amount of
RAM per node, along with how much each resource is expected to use, and
let pacemaker balance by that instead of just counting the number of
resources.
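Roughly, that looks like this in the CIB (a sketch; the attribute names
"cpu" and "memory" are just conventions that have to agree between node
and resource definitions, and the capacity/demand numbers are made up):

  <!-- in crm_config: pick a utilization-aware placement strategy -->
  <nvpair id="opt-placement" name="placement-strategy" value="balanced"/>

  <!-- in the node entry: what the node provides -->
  <node id="1" uname="node1">
    <utilization id="node1-utilization">
      <nvpair id="node1-cpu" name="cpu" value="8"/>
      <nvpair id="node1-memory" name="memory" value="16384"/>
    </utilization>
  </node>

  <!-- in the resource definition: what the resource consumes -->
  <primitive id="big-db" class="ocf" provider="heartbeat" type="mysql">
    <utilization id="big-db-utilization">
      <nvpair id="big-db-cpu" name="cpu" value="4"/>
      <nvpair id="big-db-memory" name="memory" value="8192"/>
    </utilization>
  </primitive>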

___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Q: Resource balancing operation

2016-04-20 Thread Klaus Wenninger
On 04/20/2016 08:17 AM, Ulrich Windl wrote:
> Hi!
>
> I'm wondering: if you boot a node in a cluster, most resources will go
> to another node (if possible). Due to the configured stickiness, those
> resources will stay there.
> So I'm wondering whether (or how) I could cause a rebalance of
> resources on the cluster. I must admit that I don't understand the
> details of stickiness in relation to other parameters. In my
> understanding, stickiness should be related dynamically to a node's
> percentage of utilization, so that a resource running on a node that is
> "almost full" would dynamically lower its stickiness to allow resource
> migration.
The only aim of stickiness is to prevent resources from switching back
and forth. But you are free to balance it against other score-based
mechanisms by tuning the score values. Just be careful not to create a
configuration that constantly switches resources between nodes because
the stickiness has become too small in comparison.
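To make the comparison concrete (a made-up example): with
resource-stickiness=100, a location preference like

  <rsc_location id="prefer-node1" rsc="dummy" node="node1" score="50"/>

loses against the stickiness (50 < 100), so the resource stays where it
is; a score above 100 would move it. Keep the gap between such scores
and the stickiness large enough that small score changes cannot flip
the decision back and forth.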
>
> So if you were going to implement a manual resource-rebalance
> operation, could you dynamically lower the stickiness of each resource
> (by some amount or by some factor), wait to see whether anything
> happens, and then repeat the process until the resources look balanced?
> "Looking balanced" should be no worse than the placement you would get
> if all resources were started while all cluster nodes are up.
The main issue I see is how the cluster should know whether a node is
just rebooting (and it should perhaps wait before starting to
rebalance) or whether the node has failed (and rebalancing would be
appreciated).
You could either inform the cluster ahead of a scheduled reboot
(maintenance-mode would be one way to do that), or the cluster would
have to wait a certain time before taking certain actions, to leave the
node enough time to come back online.
For the latter, I don't know whether there are meanwhile pacemaker
features that can be used directly for this. (Configuring the corosync
timing accordingly would certainly be a bad idea ;-) ) I remember once
implementing node-loss detection in a daemon which then - after a
certain time - would modify a pacemaker attribute which in turn was
used in resource location rules.
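A hypothetical reconstruction of that pattern (the attribute name,
resource name, and score are invented, not my original code): the
daemon maintains a transient node attribute, e.g. with
"attrd_updater -n rebalance-ok -U 1", and a location rule turns that
attribute into a placement preference:

  <rsc_location id="loc-rebalance" rsc="big-db">
    <rule id="loc-rebalance-rule" score="100">
      <!-- prefer nodes the daemon has marked as stable again -->
      <expression id="loc-rebalance-expr" attribute="rebalance-ok"
                  operation="eq" value="1"/>
    </rule>
  </rsc_location>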
 
>
> Any spontaneous pros and cons of "resource rebalancing"?
>
> Regards,
> Ulrich
>
>
>
> ___
> Users mailing list: Users@clusterlabs.org
> http://clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org


___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] Q: Resource balancing operation

2016-04-20 Thread Ulrich Windl
Hi!

I'm wondering: if you boot a node in a cluster, most resources will go to
another node (if possible). Due to the configured stickiness, those resources
will stay there.
So I'm wondering whether (or how) I could cause a rebalance of resources on
the cluster. I must admit that I don't understand the details of stickiness in
relation to other parameters. In my understanding, stickiness should be
related dynamically to a node's percentage of utilization, so that a resource
running on a node that is "almost full" would dynamically lower its stickiness
to allow resource migration.

So if you were going to implement a manual resource-rebalance operation, could
you dynamically lower the stickiness of each resource (by some amount or by
some factor), wait to see whether anything happens, and then repeat the
process until the resources look balanced? "Looking balanced" should be no
worse than the placement you would get if all resources were started while all
cluster nodes are up.

Any spontaneous pros and cons of "resource rebalancing"?

Regards,
Ulrich



___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org