I also thought that we do not need to call rebalance().

But when I remove the call to rebalance(), it does not work, even though 
REBALANCE_MODE is set to FULL_AUTO.

Is it possible that the current Shard Controller (master node) is not able to 
automatically detect the IdealState config change?

Here is the config (in ZK):


"IDEAL_STATE_MODE" : "AUTO_REBALANCE",
"REBALANCE_MODE" : "FULL_AUTO"
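
For reference, the sequence that does work (from the code further down in this 
thread) is to bump the partition count and then trigger the rebalance 
explicitly. A minimal sketch, where zkConnectString, clusterName, resourceName, 
and newPartitionCount are placeholder names and the replica count of 1 matches 
the lock-per-partition setup:

```java
// Sketch: increase the partition count, then explicitly rebalance so the
// running instances pick up the new partitions. In this thread, the explicit
// rebalance() call was observed to be required even with REBALANCE_MODE
// set to FULL_AUTO.
HelixAdmin admin = new ZKHelixAdmin(zkConnectString);

IdealState is = admin.getResourceIdealState(clusterName, resourceName);
is.setNumPartitions(newPartitionCount);
admin.setResourceIdealState(clusterName, resourceName, is);

// Recompute the partition-to-instance mapping; 1 = number of replicas.
admin.rebalance(clusterName, resourceName, 1);
```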

> On Jan 31, 2017, at 10:20 PM, kishore g <[email protected]> wrote:
> 
> Glad that worked. Calling rebalance is not needed if it's running in AUTO mode.
> 
> On Tue, Jan 31, 2017 at 10:03 PM, Tejeswar Das <[email protected]> wrote:
> Hi Kishore,
> 
> Thanks for your response!
> 
> Yep that worked! 
> 
> So basically I enhanced our service config (CLI) tool so that it uses 
> HelixAdmin to increase the number of shards and rebalance the cluster, so 
> that the newly added shards are picked up by the currently running instances. 
> It works as expected.
> 
> 
>         final HelixAdmin admin = new ZKHelixAdmin(config.getZookeeperConnectString());
> 
>         final IdealState is = admin.getResourceIdealState(clusterName, shardGroupName);
>         is.setNumPartitions(updatedPartitionCount);
> 
>         admin.setResourceIdealState(clusterName, shardGroupName, is);
> 
>         admin.rebalance(clusterName, shardGroupName, 1);
> 
> Thanks a lot for your help!
> 
> Regards
> Tej
> 
>> On Jan 31, 2017, at 8:55 PM, kishore g <[email protected]> wrote:
>> 
>> We don't have any explicit API in the lock manager recipe to change the 
>> number of shards. But increasing the number of shards is as simple as 
>> updating the number of partitions in the IdealState.
>> 
>> IdealState is = helixAdmin.getResourceIdealState(cluster, resource);
>> is.setNumPartitions(X);
>> helixAdmin.setResourceIdealState(cluster, resource, is);
>> 
>> Let us know if that works.
>> 
>> 
>> Thanks,
>> Kishore G
>>  
>> 
>> On Tue, Jan 31, 2017 at 3:29 PM, Tejeswar Das <[email protected]> wrote:
>> Hi,
>> 
>> I am using Helix’s Distributed Lock Manager recipe in my project, and each 
>> lock represents a shard or partition. It has been working pretty well. I am 
>> able to run my service as a cluster of multiple instances, and I see the 
>> shards getting evenly distributed when a new service instance joins or 
>> leaves the cluster.
>> 
>> I have a use case where I want to increase the number of shards in the 
>> cluster.
>> 
>> That is, I would like to be able to dynamically increase the number of 
>> locks that the Lock Manager is managing. Does the Lock Manager provide such 
>> a capability?
>> 
>> Please let me know.
>> 
>> Thanks and regards
>> Tej
>> 
> 
> 
