Thanks Kishore for your reply!

What I see is that a resource's ideal state becomes empty after the
external view converges with the assignment.

When I create a resource I compute an initial ideal state and attach
a USER_DEFINED rebalancer.  After Helix stabilizes the resource, the
mapFields under its "IDEALSTATES" znode are wiped out in ZooKeeper, so
when the next round of rebalancing starts, computeResourceMapping()
always gets an empty ideal state.
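
For reference, here is roughly how I seed the initial assignment when
I create the resource. This is a simplified sketch: the resource,
partition and instance names are placeholders, and in my real code
this nested map is written into the resource's ideal state (rebalance
mode USER_DEFINED) through HelixAdmin.

import java.util.HashMap;
import java.util.Map;

// Builds the initial partition -> instance -> state assignment that gets
// written into the ideal state map fields when the resource is created.
// Resource/instance names below are placeholders.
public class InitialAssignmentSketch {
  public static Map<String, Map<String, String>> buildInitialAssignment() {
    Map<String, Map<String, String>> mapFields = new HashMap<>();

    Map<String, String> p0 = new HashMap<>();
    p0.put("node1_12913", "MASTER");
    p0.put("node2_12913", "SLAVE");
    mapFields.put("MyResource_0", p0);

    Map<String, String> p1 = new HashMap<>();
    p1.put("node2_12913", "MASTER");
    p1.put("node1_12913", "SLAVE");
    mapFields.put("MyResource_1", p1);

    return mapFields;
  }
}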




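Based on your suggestion below, this is the per-partition logic I am
planning inside computeResourceMapping(). It is a plain-Java sketch:
the previous assignment and the live-instance set are passed in as
ordinary collections, since the exact Helix accessors differ between
versions.

import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Keep replicas whose instances are still alive; if the MASTER's instance
// died, promote one of the surviving SLAVEs instead of assigning a brand
// new master. Dead instances never show up in currentState, which is why
// the previous assignment (instance -> state) is needed here.
public class FailoverSketch {
  public static Map<String, String> computePartitionMapping(
      Map<String, String> previousReplicas, Set<String> liveInstances) {

    Map<String, String> newReplicas = new HashMap<>();
    boolean masterAlive = false;

    for (Map.Entry<String, String> e : previousReplicas.entrySet()) {
      if (liveInstances.contains(e.getKey())) {
        newReplicas.put(e.getKey(), e.getValue());
        if ("MASTER".equals(e.getValue())) {
          masterAlive = true;
        }
      }
    }

    if (!masterAlive) {
      // The master's instance is dead: fail over to an existing slave.
      for (Map.Entry<String, String> e : newReplicas.entrySet()) {
        if ("SLAVE".equals(e.getValue())) {
          e.setValue("MASTER");
          break;
        }
      }
    }

    // A real implementation would also top up missing replicas from spare
    // live instances here.
    return newReplicas;
  }
}
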
On Fri, Apr 29, 2016 at 1:16 PM, kishore g <g.kish...@gmail.com> wrote:
> Hi,
>
> Current state will not show dead replicas. You need to use the previous
> idealstate to derive that info. The logic will be something like this:
>
> computeResource(...) {
>   List<Instance> instances =
>       previousIdealState.getInstancesForPartition(P0);
>   for (Instance instance : instances) {
>     if (!liveInstances.contains(instance)) {
>       // NEED TO ASSIGN ANOTHER INSTANCE FOR THIS PARTITION
>     }
>   }
> }
>
> This allows your logic to be idempotent and not depend on incremental
> changes.
>
> thanks,
> Kishore G
>
> On Thu, Apr 28, 2016 at 4:27 PM, Neutron sharc <neutronsh...@gmail.com>
> wrote:
>
>> Hi team,
>>
>> In USER_DEFINED rebalance mode, the callback computeResourceMapping()
>> accepts a “currentState”.  Does this variable include replicas on a
>> dead participant?
>>
>> For example, my resource has a partition P1 with a master replica on
>> participant node1 and a slave replica on participant node2.  When
>> node1 dies, I retrieve P1’s replicas in the computeResourceMapping()
>> callback:
>>
>> Map<ParticipantId, State> replicas =
>> currentState.getCurrentStateMap(resourceId, partitionId);
>>
>>
>> Here “replicas” includes only node2; there is no entry for node1.
>>
>> However, I want to know all replicas, including dead ones, so that I
>> can tell that the master replica is gone and fail over to an existing
>> slave instead of starting a new master.
>>
>>
>> Appreciate any comments!
>>
>>
>> -Neutron
>>
