Re: Unable to get Rebalance Delay to work using the distributed lock manager recipe

2018-03-08 Thread Utsav Kanani
Thanks a ton for your help with this

Had another quick question

How can I configure it so that there is no rebalancing and the same process
always gets the same lock?
Is that done with the SEMI_AUTO mode?
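
For illustration only, here is a minimal sketch of what pinning each lock to a fixed
owner might look like with SEMI_AUTO mode, assuming the IdealState preference-list API
(setPreferenceList) behaves as in the standard Helix IdealState model; the partition and
instance names are simply the ones from the demo output quoted further down:

// Sketch (assumption): in SEMI_AUTO mode Helix only drives state transitions,
// while placement comes from the preference lists set explicitly below, so each
// lock partition stays with the instance listed for it and is never moved.
IdealState idealState = admin.getResourceIdealState(clusterName, lockGroupName);
idealState.setRebalanceMode(RebalanceMode.SEMI_AUTO);
idealState.setReplicas("1"); // one owner per lock
idealState.setPreferenceList("lock-group_0",
    java.util.Collections.singletonList("localhost_12000"));
idealState.setPreferenceList("lock-group_1",
    java.util.Collections.singletonList("localhost_12001"));
// ...and so on for the remaining partitions...
admin.setResourceIdealState(clusterName, lockGroupName, idealState);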

On Thu, Mar 8, 2018 at 6:11 PM, Utsav Kanani  wrote:

> Ignore what I said; it works with
>
> idealState.setRebalancerClassName(DelayedAutoRebalancer.class.getName());
>
>
>
> On Thu, Mar 8, 2018 at 6:08 PM, Utsav Kanani 
> wrote:
>
>> Sorry for the late response.
>> Hey guys, this does not help either.
>> Do you want me to send you my code files?
>>
>> This is the code change I made in the ZKHelixAdmin class.
>> I tried it both with
>>
>> idealState.setRebalancerClassName(DelayedAutoRebalancer.class.getName());
>>
>> and without it:
>>
>> @Override
>> public void addResource(String clusterName, String resourceName, int partitions,
>>     String stateModelRef, String rebalancerMode, String rebalanceStrategy, int bucketSize,
>>     int maxPartitionsPerInstance) {
>>   if (!ZKUtil.isClusterSetup(clusterName, _zkClient)) {
>>     throw new HelixException("cluster " + clusterName + " is not setup yet");
>>   }
>>
>>   IdealState idealState = new IdealState(resourceName);
>>   idealState.setNumPartitions(partitions);
>>   idealState.setStateModelDefRef(stateModelRef);
>>   RebalanceMode mode =
>>       idealState.rebalanceModeFromString(rebalancerMode, RebalanceMode.SEMI_AUTO);
>>   idealState.setRebalanceMode(mode);
>>   idealState.setRebalanceStrategy(rebalanceStrategy);
>>   idealState.setMinActiveReplicas(0);
>>   idealState.setStateModelFactoryName(HelixConstants.DEFAULT_STATE_MODEL_FACTORY);
>>   idealState.setRebalanceDelay(10);
>>   idealState.setDelayRebalanceEnabled(true);
>>   //idealState.setRebalancerClassName(DelayedAutoRebalancer.class.getName());
>>
>>   if (maxPartitionsPerInstance > 0 && maxPartitionsPerInstance < Integer.MAX_VALUE) {
>>     idealState.setMaxPartitionsPerInstance(maxPartitionsPerInstance);
>>   }
>>   if (bucketSize > 0) {
>>     idealState.setBucketSize(bucketSize);
>>   }
>>   addResource(clusterName, resourceName, idealState);
>> }
>>
>>
>>
>> On Thu, Mar 1, 2018 at 11:20 AM, kishore g  wrote:
>>
>>> We should have a recipe for delayed rebalancer
>>>
>>> On Thu, Mar 1, 2018 at 9:39 AM, Lei Xia  wrote:
>>>
 Hi, Utsav

   Sorry to get back to you late.  There is one more thing to config,

 idealstate.setMinActiveReplicas(0);

  This tells Helix the minimum number of replicas it needs to maintain; by default
 it is set to 1, which means Helix maintains at least 1 replica regardless of
 delayed rebalancing. For your case, you want to set it to 0.


 Lei

 On Mon, Feb 26, 2018 at 11:38 AM, Utsav Kanani 
 wrote:

> Hi Lei,
>
> That did not work
> Seeing the same behavior
> Added the following method to ZKHelixAdmin Class
>
> public void enableClusterDelayMode(String clusterName) {
>   ConfigAccessor configAccessor = new ConfigAccessor(_zkClient);
>   ClusterConfig clusterConfig = configAccessor.getClusterConfig(clusterName);
>   clusterConfig.setDelayRebalaceEnabled(true);
>   clusterConfig.setRebalanceDelayTime(10);
>   configAccessor.setClusterConfig(clusterName, clusterConfig);
> }
>
> and calling it in the demo class
>
> HelixAdmin admin = new ZKHelixAdmin(zkAddress);
> admin.addCluster(clusterName, true);
> ((ZKHelixAdmin) admin).enableClusterDelayMode(clusterName);
> StateModelConfigGenerator generator = new StateModelConfigGenerator();
> admin.addStateModelDef(clusterName, "OnlineOffline",
>     new StateModelDefinition(generator.generateConfigForOnlineOffline()));
>
> admin.addResource(clusterName, lockGroupName, numPartitions, "OnlineOffline",
>     RebalanceMode.FULL_AUTO.toString());
> admin.rebalance(clusterName, lockGroupName, 1);
>
>
>
>
>
> STARTING localhost_12000
> STARTING localhost_12001
> STARTING localhost_12002
> STARTED localhost_12000
> STARTED localhost_12002
> STARTED localhost_12001
> localhost_12000 acquired lock:lock-group_0
> localhost_12002 acquired lock:lock-group_3
> localhost_12002 acquired lock:lock-group_9
> localhost_12001 acquired lock:lock-group_2
> localhost_12001 acquired lock:lock-group_5
> localhost_12000 acquired lock:lock-group_11
> localhost_12002 acquired lock:lock-group_6
> localhost_12000 acquired lock:lock-group_7
> localhost_12002 acquired lock:lock-group_10
> localhost_12001 acquired lock:lock-group_8
> localhost_12001 acquired lock:lock-group_1
> localhost_12000 acquired lock:lock-group_4
> lockName acquired By
> ==
> lock-group_0 localhost_12000
> lock-group_1 

Re: Unable to get Rebalance Delay to work using the distributed lock manager recipe

2018-03-08 Thread Utsav Kanani
Sorry for the late response.
Hey guys, this does not help either.
Do you want me to send you my code files?

This is the code change I made in the ZKHelixAdmin class.
I tried it both with

idealState.setRebalancerClassName(DelayedAutoRebalancer.class.getName());

and without it:

@Override
public void addResource(String clusterName, String resourceName, int partitions,
    String stateModelRef, String rebalancerMode, String rebalanceStrategy, int bucketSize,
    int maxPartitionsPerInstance) {
  if (!ZKUtil.isClusterSetup(clusterName, _zkClient)) {
    throw new HelixException("cluster " + clusterName + " is not setup yet");
  }

  IdealState idealState = new IdealState(resourceName);
  idealState.setNumPartitions(partitions);
  idealState.setStateModelDefRef(stateModelRef);
  RebalanceMode mode =
      idealState.rebalanceModeFromString(rebalancerMode, RebalanceMode.SEMI_AUTO);
  idealState.setRebalanceMode(mode);
  idealState.setRebalanceStrategy(rebalanceStrategy);
  idealState.setMinActiveReplicas(0);
  idealState.setStateModelFactoryName(HelixConstants.DEFAULT_STATE_MODEL_FACTORY);
  idealState.setRebalanceDelay(10);
  idealState.setDelayRebalanceEnabled(true);
  //idealState.setRebalancerClassName(DelayedAutoRebalancer.class.getName());

  if (maxPartitionsPerInstance > 0 && maxPartitionsPerInstance < Integer.MAX_VALUE) {
    idealState.setMaxPartitionsPerInstance(maxPartitionsPerInstance);
  }
  if (bucketSize > 0) {
    idealState.setBucketSize(bucketSize);
  }
  addResource(clusterName, resourceName, idealState);
}



On Thu, Mar 1, 2018 at 11:20 AM, kishore g  wrote:

> We should have a recipe for delayed rebalancer
>
> On Thu, Mar 1, 2018 at 9:39 AM, Lei Xia  wrote:
>
>> Hi, Utsav
>>
>>   Sorry to get back to you late.  There is one more thing to config,
>>
>> idealstate.setMinActiveReplicas(0);
>>
>>  This tells Helix the minimum number of replicas it needs to maintain; by default it
>> is set to 1, which means Helix maintains at least 1 replica regardless of delayed
>> rebalancing. For your case, you want to set it to 0.
>>
>>
>> Lei
>>
>> On Mon, Feb 26, 2018 at 11:38 AM, Utsav Kanani 
>> wrote:
>>
>>> Hi Lei,
>>>
>>> That did not work
>>> Seeing the same behavior
>>> Added the following method to ZKHelixAdmin Class
>>>
>>> public void enableClusterDelayMode(String clusterName) {
>>>   ConfigAccessor configAccessor = new ConfigAccessor(_zkClient);
>>>   ClusterConfig clusterConfig = configAccessor.getClusterConfig(clusterName);
>>>   clusterConfig.setDelayRebalaceEnabled(true);
>>>   clusterConfig.setRebalanceDelayTime(10);
>>>   configAccessor.setClusterConfig(clusterName, clusterConfig);
>>> }
>>>
>>> and calling it in the demo class
>>>
>>> HelixAdmin admin = new ZKHelixAdmin(zkAddress);
>>> admin.addCluster(clusterName, true);
>>> ((ZKHelixAdmin) admin).enableClusterDelayMode(clusterName);
>>> StateModelConfigGenerator generator = new StateModelConfigGenerator();
>>> admin.addStateModelDef(clusterName, "OnlineOffline",
>>>     new StateModelDefinition(generator.generateConfigForOnlineOffline()));
>>>
>>> admin.addResource(clusterName, lockGroupName, numPartitions, "OnlineOffline",
>>>     RebalanceMode.FULL_AUTO.toString());
>>> admin.rebalance(clusterName, lockGroupName, 1);
>>>
>>>
>>>
>>>
>>>
>>> STARTING localhost_12000
>>> STARTING localhost_12001
>>> STARTING localhost_12002
>>> STARTED localhost_12000
>>> STARTED localhost_12002
>>> STARTED localhost_12001
>>> localhost_12000 acquired lock:lock-group_0
>>> localhost_12002 acquired lock:lock-group_3
>>> localhost_12002 acquired lock:lock-group_9
>>> localhost_12001 acquired lock:lock-group_2
>>> localhost_12001 acquired lock:lock-group_5
>>> localhost_12000 acquired lock:lock-group_11
>>> localhost_12002 acquired lock:lock-group_6
>>> localhost_12000 acquired lock:lock-group_7
>>> localhost_12002 acquired lock:lock-group_10
>>> localhost_12001 acquired lock:lock-group_8
>>> localhost_12001 acquired lock:lock-group_1
>>> localhost_12000 acquired lock:lock-group_4
>>> lockName acquired By
>>> ==
>>> lock-group_0 localhost_12000
>>> lock-group_1 localhost_12001
>>> lock-group_10 localhost_12002
>>> lock-group_11 localhost_12000
>>> lock-group_2 localhost_12001
>>> lock-group_3 localhost_12002
>>> lock-group_4 localhost_12000
>>> lock-group_5 localhost_12001
>>> lock-group_6 localhost_12002
>>> lock-group_7 localhost_12000
>>> lock-group_8 localhost_12001
>>> lock-group_9 localhost_12002
>>> Stopping localhost_12000
>>> localhost_12000Interrupted
>>> localhost_12001 acquired lock:lock-group_11
>>> localhost_12001 acquired lock:lock-group_0
>>> localhost_12002 acquired lock:lock-group_7
>>> localhost_12002 acquired lock:lock-group_4
>>> lockName acquired By
>>> ==
>>> lock-group_0 localhost_12001
>>> lock-group_1 localhost_12001
>>> lock-group_10 localhost_12002
>>> lock-group_11 

Re: Unable to get Rebalance Delay to work using the distributed lock manager recipe

2018-03-01 Thread kishore g
We should have a recipe for delayed rebalancer

On Thu, Mar 1, 2018 at 9:39 AM, Lei Xia  wrote:

> Hi, Utsav
>
>   Sorry to get back to you late.  There is one more thing to config,
>
> idealstate.setMinActiveReplicas(0);
>
>  This tells Helix the minimum number of replicas it needs to maintain; by default it
> is set to 1, which means Helix maintains at least 1 replica regardless of delayed
> rebalancing. For your case, you want to set it to 0.
>
>
> Lei
>
> On Mon, Feb 26, 2018 at 11:38 AM, Utsav Kanani 
> wrote:
>
>> Hi Lei,
>>
>> That did not work
>> Seeing the same behavior
>> Added the following method to ZKHelixAdmin Class
>>
>> public void enableClusterDelayMode(String clusterName) {
>>   ConfigAccessor configAccessor = new ConfigAccessor(_zkClient);
>>   ClusterConfig clusterConfig = configAccessor.getClusterConfig(clusterName);
>>   clusterConfig.setDelayRebalaceEnabled(true);
>>   clusterConfig.setRebalanceDelayTime(10);
>>   configAccessor.setClusterConfig(clusterName, clusterConfig);
>> }
>>
>> and calling it in the demo class
>>
>> HelixAdmin admin = new ZKHelixAdmin(zkAddress);
>> admin.addCluster(clusterName, true);
>> ((ZKHelixAdmin) admin).enableClusterDelayMode(clusterName);
>> StateModelConfigGenerator generator = new StateModelConfigGenerator();
>> admin.addStateModelDef(clusterName, "OnlineOffline",
>> new StateModelDefinition(generator.generateConfigForOnlineOffline()));
>>
>> admin.addResource(clusterName, lockGroupName, numPartitions, "OnlineOffline",
>> RebalanceMode.FULL_AUTO.toString());
>> admin.rebalance(clusterName, lockGroupName, 1);
>>
>>
>>
>>
>>
>> STARTING localhost_12000
>> STARTING localhost_12001
>> STARTING localhost_12002
>> STARTED localhost_12000
>> STARTED localhost_12002
>> STARTED localhost_12001
>> localhost_12000 acquired lock:lock-group_0
>> localhost_12002 acquired lock:lock-group_3
>> localhost_12002 acquired lock:lock-group_9
>> localhost_12001 acquired lock:lock-group_2
>> localhost_12001 acquired lock:lock-group_5
>> localhost_12000 acquired lock:lock-group_11
>> localhost_12002 acquired lock:lock-group_6
>> localhost_12000 acquired lock:lock-group_7
>> localhost_12002 acquired lock:lock-group_10
>> localhost_12001 acquired lock:lock-group_8
>> localhost_12001 acquired lock:lock-group_1
>> localhost_12000 acquired lock:lock-group_4
>> lockName acquired By
>> ==
>> lock-group_0 localhost_12000
>> lock-group_1 localhost_12001
>> lock-group_10 localhost_12002
>> lock-group_11 localhost_12000
>> lock-group_2 localhost_12001
>> lock-group_3 localhost_12002
>> lock-group_4 localhost_12000
>> lock-group_5 localhost_12001
>> lock-group_6 localhost_12002
>> lock-group_7 localhost_12000
>> lock-group_8 localhost_12001
>> lock-group_9 localhost_12002
>> Stopping localhost_12000
>> localhost_12000Interrupted
>> localhost_12001 acquired lock:lock-group_11
>> localhost_12001 acquired lock:lock-group_0
>> localhost_12002 acquired lock:lock-group_7
>> localhost_12002 acquired lock:lock-group_4
>> lockName acquired By
>> ==
>> lock-group_0 localhost_12001
>> lock-group_1 localhost_12001
>> lock-group_10 localhost_12002
>> lock-group_11 localhost_12001
>> lock-group_2 localhost_12001
>> lock-group_3 localhost_12002
>> lock-group_4 localhost_12002
>> lock-group_5 localhost_12001
>> lock-group_6 localhost_12002
>> lock-group_7 localhost_12002
>> lock-group_8 localhost_12001
>> lock-group_9 localhost_12002
>> ===Starting localhost_12000
>> STARTING localhost_12000
>> localhost_12000 acquired lock:lock-group_11
>> localhost_12000 acquired lock:lock-group_0
>> STARTED localhost_12000
>> localhost_12000 acquired lock:lock-group_7
>> localhost_12000 acquired lock:lock-group_4
>> localhost_12001 releasing lock:lock-group_11
>> localhost_12001 releasing lock:lock-group_0
>> localhost_12002 releasing lock:lock-group_7
>> localhost_12002 releasing lock:lock-group_4
>> lockName acquired By
>> ==
>> lock-group_0 localhost_12000
>> lock-group_1 localhost_12001
>> lock-group_10 localhost_12002
>> lock-group_11 localhost_12000
>> lock-group_2 localhost_12001
>> lock-group_3 localhost_12002
>> lock-group_4 localhost_12000
>> lock-group_5 localhost_12001
>> lock-group_6 localhost_12002
>> lock-group_7 localhost_12000
>> lock-group_8 localhost_12001
>> lock-group_9 localhost_12002
>>
>>
>> On Sat, Feb 24, 2018 at 8:26 PM, Lei Xia  wrote:
>>
>>> Hi, Utsav
>>>
>>>   The delayed rebalancer is disabled at the cluster level by default (to keep
>>> backward compatibility), so you need to enable it in the ClusterConfig, e.g.
>>>
>>> ConfigAccessor configAccessor = new ConfigAccessor(zkClient);
>>> ClusterConfig clusterConfig = configAccessor.getClusterConfig(clusterName);
>>> clusterConfig.setDelayRebalaceEnabled(enabled);
>>> configAccessor.setClusterConfig(clusterName, clusterConfig);
>>>
>>>
>>>   Could you 

Re: Unable to get Rebalance Delay to work using the distributed lock manager recipe

2018-03-01 Thread Lei Xia
Hi, Utsav

  Sorry to get back to you late.  There is one more thing to config,

idealstate.setMinActiveReplicas(0);

 This tells Helix the minimum number of replicas it needs to maintain; by default it
is set to 1, which means Helix maintains at least 1 replica regardless of delayed
rebalancing. For your case, you want to set it to 0.
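
Pulling the thread's pieces together, here is a minimal sketch of the settings that need
to line up for the delayed rebalancer to take effect (the delay value 10 is simply the
one used in the earlier snippets; this is an illustration, not the recipe's official
code):

// Cluster level: delayed rebalancing is disabled by default, so enable it.
ConfigAccessor configAccessor = new ConfigAccessor(_zkClient);
ClusterConfig clusterConfig = configAccessor.getClusterConfig(clusterName);
clusterConfig.setDelayRebalaceEnabled(true);
configAccessor.setClusterConfig(clusterName, clusterConfig);

// Resource level: allow replicas to stay down during the delay window and
// use the delayed rebalancer for this resource.
IdealState idealState = admin.getResourceIdealState(clusterName, lockGroupName);
idealState.setMinActiveReplicas(0);
idealState.setRebalanceDelay(10);
idealState.setDelayRebalanceEnabled(true);
idealState.setRebalancerClassName(DelayedAutoRebalancer.class.getName());
admin.setResourceIdealState(clusterName, lockGroupName, idealState);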


Lei

On Mon, Feb 26, 2018 at 11:38 AM, Utsav Kanani 
wrote:

> Hi Lei,
>
> That did not work
> Seeing the same behavior
> Added the following method to ZKHelixAdmin Class
>
> public void enableClusterDelayMode(String clusterName) {
>   ConfigAccessor configAccessor = new ConfigAccessor(_zkClient);
>   ClusterConfig clusterConfig = configAccessor.getClusterConfig(clusterName);
>   clusterConfig.setDelayRebalaceEnabled(true);
>   clusterConfig.setRebalanceDelayTime(10);
>   configAccessor.setClusterConfig(clusterName, clusterConfig);
> }
>
> and calling it in the demo class
>
> HelixAdmin admin = new ZKHelixAdmin(zkAddress);
> admin.addCluster(clusterName, true);
> ((ZKHelixAdmin) admin).enableClusterDelayMode(clusterName);
> StateModelConfigGenerator generator = new StateModelConfigGenerator();
> admin.addStateModelDef(clusterName, "OnlineOffline",
> new StateModelDefinition(generator.generateConfigForOnlineOffline()));
>
> admin.addResource(clusterName, lockGroupName, numPartitions, "OnlineOffline",
> RebalanceMode.FULL_AUTO.toString());
> admin.rebalance(clusterName, lockGroupName, 1);
>
>
>
>
>
> STARTING localhost_12000
> STARTING localhost_12001
> STARTING localhost_12002
> STARTED localhost_12000
> STARTED localhost_12002
> STARTED localhost_12001
> localhost_12000 acquired lock:lock-group_0
> localhost_12002 acquired lock:lock-group_3
> localhost_12002 acquired lock:lock-group_9
> localhost_12001 acquired lock:lock-group_2
> localhost_12001 acquired lock:lock-group_5
> localhost_12000 acquired lock:lock-group_11
> localhost_12002 acquired lock:lock-group_6
> localhost_12000 acquired lock:lock-group_7
> localhost_12002 acquired lock:lock-group_10
> localhost_12001 acquired lock:lock-group_8
> localhost_12001 acquired lock:lock-group_1
> localhost_12000 acquired lock:lock-group_4
> lockName acquired By
> ==
> lock-group_0 localhost_12000
> lock-group_1 localhost_12001
> lock-group_10 localhost_12002
> lock-group_11 localhost_12000
> lock-group_2 localhost_12001
> lock-group_3 localhost_12002
> lock-group_4 localhost_12000
> lock-group_5 localhost_12001
> lock-group_6 localhost_12002
> lock-group_7 localhost_12000
> lock-group_8 localhost_12001
> lock-group_9 localhost_12002
> Stopping localhost_12000
> localhost_12000Interrupted
> localhost_12001 acquired lock:lock-group_11
> localhost_12001 acquired lock:lock-group_0
> localhost_12002 acquired lock:lock-group_7
> localhost_12002 acquired lock:lock-group_4
> lockName acquired By
> ==
> lock-group_0 localhost_12001
> lock-group_1 localhost_12001
> lock-group_10 localhost_12002
> lock-group_11 localhost_12001
> lock-group_2 localhost_12001
> lock-group_3 localhost_12002
> lock-group_4 localhost_12002
> lock-group_5 localhost_12001
> lock-group_6 localhost_12002
> lock-group_7 localhost_12002
> lock-group_8 localhost_12001
> lock-group_9 localhost_12002
> ===Starting localhost_12000
> STARTING localhost_12000
> localhost_12000 acquired lock:lock-group_11
> localhost_12000 acquired lock:lock-group_0
> STARTED localhost_12000
> localhost_12000 acquired lock:lock-group_7
> localhost_12000 acquired lock:lock-group_4
> localhost_12001 releasing lock:lock-group_11
> localhost_12001 releasing lock:lock-group_0
> localhost_12002 releasing lock:lock-group_7
> localhost_12002 releasing lock:lock-group_4
> lockName acquired By
> ==
> lock-group_0 localhost_12000
> lock-group_1 localhost_12001
> lock-group_10 localhost_12002
> lock-group_11 localhost_12000
> lock-group_2 localhost_12001
> lock-group_3 localhost_12002
> lock-group_4 localhost_12000
> lock-group_5 localhost_12001
> lock-group_6 localhost_12002
> lock-group_7 localhost_12000
> lock-group_8 localhost_12001
> lock-group_9 localhost_12002
>
>
> On Sat, Feb 24, 2018 at 8:26 PM, Lei Xia  wrote:
>
>> Hi, Utsav
>>
>>   The delayed rebalancer is disabled at the cluster level by default (to keep
>> backward compatibility), so you need to enable it in the ClusterConfig, e.g.
>>
>> ConfigAccessor configAccessor = new ConfigAccessor(zkClient);
>> ClusterConfig clusterConfig = configAccessor.getClusterConfig(clusterName);
>> clusterConfig.setDelayRebalaceEnabled(enabled);
>> configAccessor.setClusterConfig(clusterName, clusterConfig);
>>
>>
>>   Could you please have a try and let me know whether it works or not?
>> Thanks
>>
>>
>> Lei
>>
>>
>> On Fri, Feb 23, 2018 at 2:33 PM, Utsav Kanani 
>> wrote:
>>
>>> I am trying to expand the Lockmanager example http://helix.apache.
>>> 

Re: Unable to get Rebalance Delay to work using the distributed lock manager recipe

2018-02-26 Thread Utsav Kanani
Hi Lei,

That did not work
Seeing the same behavior
Added the following method to ZKHelixAdmin Class

public void enableClusterDelayMode(String clusterName) {
  ConfigAccessor configAccessor = new ConfigAccessor(_zkClient);
  ClusterConfig clusterConfig = configAccessor.getClusterConfig(clusterName);
  clusterConfig.setDelayRebalaceEnabled(true);
  clusterConfig.setRebalanceDelayTime(10);
  configAccessor.setClusterConfig(clusterName, clusterConfig);
}

and calling it in the demo class

HelixAdmin admin = new ZKHelixAdmin(zkAddress);
admin.addCluster(clusterName, true);
((ZKHelixAdmin) admin).enableClusterDelayMode(clusterName);
StateModelConfigGenerator generator = new StateModelConfigGenerator();
admin.addStateModelDef(clusterName, "OnlineOffline",
new StateModelDefinition(generator.generateConfigForOnlineOffline()));

admin.addResource(clusterName, lockGroupName, numPartitions, "OnlineOffline",
RebalanceMode.FULL_AUTO.toString());
admin.rebalance(clusterName, lockGroupName, 1);





STARTING localhost_12000
STARTING localhost_12001
STARTING localhost_12002
STARTED localhost_12000
STARTED localhost_12002
STARTED localhost_12001
localhost_12000 acquired lock:lock-group_0
localhost_12002 acquired lock:lock-group_3
localhost_12002 acquired lock:lock-group_9
localhost_12001 acquired lock:lock-group_2
localhost_12001 acquired lock:lock-group_5
localhost_12000 acquired lock:lock-group_11
localhost_12002 acquired lock:lock-group_6
localhost_12000 acquired lock:lock-group_7
localhost_12002 acquired lock:lock-group_10
localhost_12001 acquired lock:lock-group_8
localhost_12001 acquired lock:lock-group_1
localhost_12000 acquired lock:lock-group_4
lockName acquired By
==
lock-group_0 localhost_12000
lock-group_1 localhost_12001
lock-group_10 localhost_12002
lock-group_11 localhost_12000
lock-group_2 localhost_12001
lock-group_3 localhost_12002
lock-group_4 localhost_12000
lock-group_5 localhost_12001
lock-group_6 localhost_12002
lock-group_7 localhost_12000
lock-group_8 localhost_12001
lock-group_9 localhost_12002
Stopping localhost_12000
localhost_12000Interrupted
localhost_12001 acquired lock:lock-group_11
localhost_12001 acquired lock:lock-group_0
localhost_12002 acquired lock:lock-group_7
localhost_12002 acquired lock:lock-group_4
lockName acquired By
==
lock-group_0 localhost_12001
lock-group_1 localhost_12001
lock-group_10 localhost_12002
lock-group_11 localhost_12001
lock-group_2 localhost_12001
lock-group_3 localhost_12002
lock-group_4 localhost_12002
lock-group_5 localhost_12001
lock-group_6 localhost_12002
lock-group_7 localhost_12002
lock-group_8 localhost_12001
lock-group_9 localhost_12002
===Starting localhost_12000
STARTING localhost_12000
localhost_12000 acquired lock:lock-group_11
localhost_12000 acquired lock:lock-group_0
STARTED localhost_12000
localhost_12000 acquired lock:lock-group_7
localhost_12000 acquired lock:lock-group_4
localhost_12001 releasing lock:lock-group_11
localhost_12001 releasing lock:lock-group_0
localhost_12002 releasing lock:lock-group_7
localhost_12002 releasing lock:lock-group_4
lockName acquired By
==
lock-group_0 localhost_12000
lock-group_1 localhost_12001
lock-group_10 localhost_12002
lock-group_11 localhost_12000
lock-group_2 localhost_12001
lock-group_3 localhost_12002
lock-group_4 localhost_12000
lock-group_5 localhost_12001
lock-group_6 localhost_12002
lock-group_7 localhost_12000
lock-group_8 localhost_12001
lock-group_9 localhost_12002


On Sat, Feb 24, 2018 at 8:26 PM, Lei Xia  wrote:

> Hi, Utsav
>
>   The delayed rebalancer is disabled at the cluster level by default (to keep
> backward compatibility), so you need to enable it in the ClusterConfig, e.g.
>
> ConfigAccessor configAccessor = new ConfigAccessor(zkClient);
> ClusterConfig clusterConfig = configAccessor.getClusterConfig(clusterName);
> clusterConfig.setDelayRebalaceEnabled(enabled);
> configAccessor.setClusterConfig(clusterName, clusterConfig);
>
>
>   Could you please have a try and let me know whether it works or not?
> Thanks
>
>
> Lei
>
>
> On Fri, Feb 23, 2018 at 2:33 PM, Utsav Kanani 
> wrote:
>
>> I am trying to expand the Lockmanager example
>> http://helix.apache.org/0.6.2-incubating-docs/recipes/lock_manager.html to
>> introduce delay
>>
>> tried doing something like this
>> IdealState state = admin.getResourceIdealState(clusterName, lockGroupName);
>> state.setRebalanceDelay(10);
>> state.setDelayRebalanceEnabled(true);
>> state.setRebalancerClassName(DelayedAutoRebalancer.class.getName());
>> admin.setResourceIdealState(clusterName, lockGroupName, state);
>> admin.rebalance(clusterName, lockGroupName, 1);
>>
>> On killing a node, rebalancing takes place immediately. I was hoping for a
>> delay of 100 seconds before rebalancing, but I am not seeing that behavior.
>>
>>
>> On Stopping 

Unable to get Rebalance Delay to work using the distributed lock manager recipe

2018-02-23 Thread Utsav Kanani
I am trying to expand the Lockmanager example
http://helix.apache.org/0.6.2-incubating-docs/recipes/lock_manager.html to
introduce delay

tried doing something like this
IdealState state = admin.getResourceIdealState(clusterName, lockGroupName);
 state.setRebalanceDelay(10);
 state.setDelayRebalanceEnabled(true);
 state.setRebalancerClassName(DelayedAutoRebalancer.class.getName());
 admin.setResourceIdealState(clusterName, lockGroupName, state);
 admin.rebalance(clusterName, lockGroupName, 1);

On killing a node, rebalancing takes place immediately. I was hoping for a
delay of 100 seconds before rebalancing, but I am not seeing that behavior.
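
As a side note on the numbers: the snippet above passes 10 to setRebalanceDelay while the
expectation is a 100-second delay. Assuming the delay values are interpreted as
milliseconds (an assumption, not something stated in this thread), a 100-second window
would look more like this sketch:

// Assumption: rebalance delay values are in milliseconds.
long delayMs = 100 * 1000L; // 100 seconds

IdealState state = admin.getResourceIdealState(clusterName, lockGroupName);
state.setRebalanceDelay(delayMs);
state.setDelayRebalanceEnabled(true);
state.setRebalancerClassName(DelayedAutoRebalancer.class.getName());
admin.setResourceIdealState(clusterName, lockGroupName, state);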


On Stopping localhost_12000 the locks are acquired immediately by
localhost_12001 and localhost_12002

on STARTING localhost_12000 the rebalance is again immediate.

localhost_12000 acquired lock:lock-group_11
localhost_12000 acquired lock:lock-group_7
localhost_12000 acquired lock:lock-group_0
localhost_12000 acquired lock:lock-group_4
STARTED localhost_12000
localhost_12001 releasing lock:lock-group_0
localhost_12001 releasing lock:lock-group_11
localhost_12002 releasing lock:lock-group_4
localhost_12002 releasing lock:lock-group_7


Here is the output
=

STARTING localhost_12000
STARTING localhost_12001
STARTING localhost_12002
STARTED localhost_12001
STARTED localhost_12002
STARTED localhost_12000
localhost_12000 acquired lock:lock-group_11
localhost_12002 acquired lock:lock-group_10
localhost_12002 acquired lock:lock-group_9
localhost_12002 acquired lock:lock-group_3
localhost_12001 acquired lock:lock-group_2
localhost_12001 acquired lock:lock-group_1
localhost_12001 acquired lock:lock-group_8
localhost_12002 acquired lock:lock-group_6
localhost_12000 acquired lock:lock-group_4
localhost_12000 acquired lock:lock-group_0
localhost_12000 acquired lock:lock-group_7
localhost_12001 acquired lock:lock-group_5
lockName acquired By
==
lock-group_0 localhost_12000
lock-group_1 localhost_12001
lock-group_10 localhost_12002
lock-group_11 localhost_12000
lock-group_2 localhost_12001
lock-group_3 localhost_12002
lock-group_4 localhost_12000
lock-group_5 localhost_12001
lock-group_6 localhost_12002
lock-group_7 localhost_12000
lock-group_8 localhost_12001
lock-group_9 localhost_12002
Stopping localhost_12000
localhost_12000Interrupted
localhost_12002 acquired lock:lock-group_4
localhost_12001 acquired lock:lock-group_11
localhost_12002 acquired lock:lock-group_7
localhost_12001 acquired lock:lock-group_0
lockName acquired By
==
lock-group_0 localhost_12001
lock-group_1 localhost_12001
lock-group_10 localhost_12002
lock-group_11 localhost_12001
lock-group_2 localhost_12001
lock-group_3 localhost_12002
lock-group_4 localhost_12002
lock-group_5 localhost_12001
lock-group_6 localhost_12002
lock-group_7 localhost_12002
lock-group_8 localhost_12001
lock-group_9 localhost_12002
===Starting localhost_12000
STARTING localhost_12000
localhost_12000 acquired lock:lock-group_11
localhost_12000 acquired lock:lock-group_7
localhost_12000 acquired lock:lock-group_0
localhost_12000 acquired lock:lock-group_4
STARTED localhost_12000
localhost_12001 releasing lock:lock-group_0
localhost_12001 releasing lock:lock-group_11
localhost_12002 releasing lock:lock-group_4
localhost_12002 releasing lock:lock-group_7
lockName acquired By
==
lock-group_0 localhost_12000
lock-group_1 localhost_12001
lock-group_10 localhost_12002
lock-group_11 localhost_12000
lock-group_2 localhost_12001
lock-group_3 localhost_12002
lock-group_4 localhost_12000
lock-group_5 localhost_12001
lock-group_6 localhost_12002
lock-group_7 localhost_12000
lock-group_8 localhost_12001
lock-group_9 localhost_12002


LockManagerCustomerStrategyDemo.java
Description: Binary data