Oh, looks like you are not removing the entries from the IdealState.
You can do something like this:
IdealState is = helixAdmin.getResourceIdealState(clusterName, resourceName);
// update the ideal state: drop each partition from both list and map fields
List<String> partitionsToDrop = ....
for (String partition : partitionsToDrop) {
  is.getRecord().getListFields().remove(partition);
  is.getRecord().getMapFields().remove(partition);
}
is.setNumPartitions(is.getRecord().getMapFields().size());
// write the updated ideal state back so the change is persisted
helixAdmin.setResourceIdealState(clusterName, resourceName, is);
Can you try this?
thanks
Kishore G
On Fri, Feb 5, 2016 at 4:52 AM, ShaoFeng Shi <[email protected]> wrote:
> I see there is a JIRA, HELIX-132
> <https://issues.apache.org/jira/browse/HELIX-132>, which fixed that
> issue in 0.7.1; but in my case, after dropping the resource and adding it
> back, the external view still has the old partitions (as reported by
> helix-admin.sh). Interesting...
>
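> For reference, by "drop and add back" I mean roughly the following (just a
> sketch; zkAddress, clusterName, and the other variables are placeholders
> from my code):
>
>     HelixAdmin admin = new ZKHelixAdmin(zkAddress);
>     admin.dropResource(clusterName, resourceName);
>     admin.addResource(clusterName, resourceName, numPartitions,
>         "LeaderStandby", IdealState.RebalanceMode.FULL_AUTO.toString());
>     // trigger assignment of the recreated partitions
>     admin.rebalance(clusterName, resourceName, numReplicas);
>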
> 2016-02-05 10:49 GMT+08:00 ShaoFeng Shi <[email protected]>:
>
>> I didn't; but after adding the transitions between DROPPED and OFFLINE, I
>> got the same behavior.
>> I also changed my code to drop the resource and create it again when its
>> partition number reaches a limit; but the external view wasn't deleted,
>> which makes Helix think all the partitions have already been properly
>> assigned. Following is the state after the resource was recreated; you can
>> see that although its "NUM_PARTITIONS" is 8, the external view has 20
>> partitions (which is the upper limit):
>>
>> IdealState for Resource_Stream:
>> {
>>   "id" : "Resource_Stream",
>>   "mapFields" : {
>>     "Resource_Stream_0" : { },
>>     "Resource_Stream_1" : { },
>>     "Resource_Stream_10" : { },
>>     "Resource_Stream_11" : { },
>>     "Resource_Stream_12" : { },
>>     "Resource_Stream_13" : { },
>>     "Resource_Stream_14" : { },
>>     "Resource_Stream_15" : { },
>>     "Resource_Stream_16" : { },
>>     "Resource_Stream_17" : { },
>>     "Resource_Stream_18" : { },
>>     "Resource_Stream_19" : { },
>>     "Resource_Stream_2" : { },
>>     "Resource_Stream_3" : { },
>>     "Resource_Stream_4" : { },
>>     "Resource_Stream_5" : { },
>>     "Resource_Stream_6" : { },
>>     "Resource_Stream_7" : { },
>>     "Resource_Stream_8" : { },
>>     "Resource_Stream_9" : { }
>>   },
>>   "listFields" : {
>>     "Resource_Stream_0" : [ ],
>>     "Resource_Stream_1" : [ ],
>>     "Resource_Stream_10" : [ ],
>>     "Resource_Stream_11" : [ ],
>>     "Resource_Stream_12" : [ ],
>>     "Resource_Stream_13" : [ ],
>>     "Resource_Stream_14" : [ ],
>>     "Resource_Stream_15" : [ ],
>>     "Resource_Stream_16" : [ ],
>>     "Resource_Stream_17" : [ ],
>>     "Resource_Stream_18" : [ ],
>>     "Resource_Stream_19" : [ ],
>>     "Resource_Stream_2" : [ ],
>>     "Resource_Stream_3" : [ ],
>>     "Resource_Stream_4" : [ ],
>>     "Resource_Stream_5" : [ ],
>>     "Resource_Stream_6" : [ ],
>>     "Resource_Stream_7" : [ ],
>>     "Resource_Stream_8" : [ ],
>>     "Resource_Stream_9" : [ ]
>>   },
>>   "simpleFields" : {
>>     "IDEAL_STATE_MODE" : "AUTO_REBALANCE",
>>     "INSTANCE_GROUP_TAG" : "Tag_StreamBuilder",
>>     "NUM_PARTITIONS" : "8",
>>     "REBALANCE_MODE" : "FULL_AUTO",
>>     "REPLICAS" : "2",
>>     "STATE_MODEL_DEF_REF" : "LeaderStandby",
>>     "STATE_MODEL_FACTORY_NAME" : "DEFAULT"
>>   }
>> }
>>
>> ExternalView for Resource_Stream:
>> {
>>   "id" : "Resource_Stream",
>>   "mapFields" : {
>>     "Resource_Stream_0" : {
>>       "kylin-dev3-eaz-969995.phx01.eaz.ebayc3.com_8080" : "LEADER",
>>       "kylin-dev4-eaz-687945.phx01.eaz.ebayc3.com_8080" : "STANDBY"
>>     },
>>     "Resource_Stream_1" : {
>>       "kylin-dev3-eaz-969995.phx01.eaz.ebayc3.com_8080" : "STANDBY",
>>       "kylin-dev4-eaz-687945.phx01.eaz.ebayc3.com_8080" : "LEADER"
>>     },
>>     "Resource_Stream_10" : {
>>       "kylin-dev3-eaz-969995.phx01.eaz.ebayc3.com_8080" : "LEADER",
>>       "kylin-dev4-eaz-687945.phx01.eaz.ebayc3.com_8080" : "STANDBY"
>>     },
>>     "Resource_Stream_11" : {
>>       "kylin-dev3-eaz-969995.phx01.eaz.ebayc3.com_8080" : "STANDBY",
>>       "kylin-dev4-eaz-687945.phx01.eaz.ebayc3.com_8080" : "LEADER"
>>     },
>>     "Resource_Stream_12" : {
>>       "kylin-dev3-eaz-969995.phx01.eaz.ebayc3.com_8080" : "LEADER",
>>       "kylin-dev4-eaz-687945.phx01.eaz.ebayc3.com_8080" : "STANDBY"
>>     },
>>     "Resource_Stream_13" : {
>>       "kylin-dev3-eaz-969995.phx01.eaz.ebayc3.com_8080" : "STANDBY",
>>       "kylin-dev4-eaz-687945.phx01.eaz.ebayc3.com_8080" : "LEADER"
>>     },
>>     "Resource_Stream_14" : {
>>       "kylin-dev3-eaz-969995.phx01.eaz.ebayc3.com_8080" : "LEADER",
>>       "kylin-dev4-eaz-687945.phx01.eaz.ebayc3.com_8080" : "STANDBY"
>>     },
>>     "Resource_Stream_15" : {
>>       "kylin-dev3-eaz-969995.phx01.eaz.ebayc3.com_8080" : "STANDBY",
>>       "kylin-dev4-eaz-687945.phx01.eaz.ebayc3.com_8080" : "LEADER"
>>     },
>>     "Resource_Stream_16" : {
>>       "kylin-dev3-eaz-969995.phx01.eaz.ebayc3.com_8080" : "LEADER",
>>       "kylin-dev4-eaz-687945.phx01.eaz.ebayc3.com_8080" : "STANDBY"
>>     },
>>     "Resource_Stream_17" : {
>>       "kylin-dev3-eaz-969995.phx01.eaz.ebayc3.com_8080" : "STANDBY",
>>       "kylin-dev4-eaz-687945.phx01.eaz.ebayc3.com_8080" : "LEADER"
>>     },
>>     "Resource_Stream_18" : {
>>       "kylin-dev3-eaz-969995.phx01.eaz.ebayc3.com_8080" : "LEADER",
>>       "kylin-dev4-eaz-687945.phx01.eaz.ebayc3.com_8080" : "STANDBY"
>>     },
>>     "Resource_Stream_19" : {
>>       "kylin-dev3-eaz-969995.phx01.eaz.ebayc3.com_8080" : "STANDBY",
>>       "kylin-dev4-eaz-687945.phx01.eaz.ebayc3.com_8080" : "LEADER"
>>     },
>>     "Resource_Stream_2" : {
>>       "kylin-dev3-eaz-969995.phx01.eaz.ebayc3.com_8080" : "LEADER",
>>       "kylin-dev4-eaz-687945.phx01.eaz.ebayc3.com_8080" : "STANDBY"
>>     },
>>     "Resource_Stream_3" : {
>>       "kylin-dev3-eaz-969995.phx01.eaz.ebayc3.com_8080" : "STANDBY",
>>       "kylin-dev4-eaz-687945.phx01.eaz.ebayc3.com_8080" : "LEADER"
>>     },
>>     "Resource_Stream_4" : {
>>       "kylin-dev3-eaz-969995.phx01.eaz.ebayc3.com_8080" : "LEADER",
>>       "kylin-dev4-eaz-687945.phx01.eaz.ebayc3.com_8080" : "STANDBY"
>>     },
>>     "Resource_Stream_5" : {
>>       "kylin-dev3-eaz-969995.phx01.eaz.ebayc3.com_8080" : "STANDBY",
>>       "kylin-dev4-eaz-687945.phx01.eaz.ebayc3.com_8080" : "LEADER"
>>     },
>>     "Resource_Stream_6" : {
>>       "kylin-dev3-eaz-969995.phx01.eaz.ebayc3.com_8080" : "LEADER",
>>       "kylin-dev4-eaz-687945.phx01.eaz.ebayc3.com_8080" : "STANDBY"
>>     },
>>     "Resource_Stream_7" : {
>>       "kylin-dev3-eaz-969995.phx01.eaz.ebayc3.com_8080" : "STANDBY",
>>       "kylin-dev4-eaz-687945.phx01.eaz.ebayc3.com_8080" : "LEADER"
>>     },
>>     "Resource_Stream_8" : {
>>       "kylin-dev3-eaz-969995.phx01.eaz.ebayc3.com_8080" : "LEADER",
>>       "kylin-dev4-eaz-687945.phx01.eaz.ebayc3.com_8080" : "STANDBY"
>>     },
>>     "Resource_Stream_9" : {
>>       "kylin-dev3-eaz-969995.phx01.eaz.ebayc3.com_8080" : "STANDBY",
>>       "kylin-dev4-eaz-687945.phx01.eaz.ebayc3.com_8080" : "LEADER"
>>     }
>>   },
>>   "listFields" : { },
>>   "simpleFields" : {
>>     "BUCKET_SIZE" : "0"
>>   }
>> }
>>
>> 2016-02-04 23:08 GMT+08:00 kishore g <[email protected]>:
>>
>>> Have you defined the OFFLINE to DROPPED state transition in your state
>>> transition handler?
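>>> Something like this in your state model class (a rough sketch;
>>> MyStateModel and the cleanup body are placeholders):
>>>
>>>     @StateModelInfo(initialState = "OFFLINE",
>>>         states = {"LEADER", "STANDBY", "OFFLINE", "DROPPED"})
>>>     public class MyStateModel extends StateModel {
>>>       @Transition(from = "OFFLINE", to = "DROPPED")
>>>       public void onBecomeDroppedFromOffline(Message message,
>>>           NotificationContext context) {
>>>         // release local state for this partition so Helix can remove it
>>>       }
>>>     }
>>>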
>>> On Feb 3, 2016 11:31 PM, "ShaoFeng Shi" <[email protected]> wrote:
>>>
>>>> I found an issue: the numberOfPartitions can only be increased, not
>>>> decreased; even if I set it to 0 and then do a rebalance, the old
>>>> partitions still exist in the ideal state and external view. So, how can
>>>> I decrease or drop a partition?
>>>>
>>>>
>>>>
>>>> 2016-02-04 15:28 GMT+08:00 ShaoFeng Shi <[email protected]>:
>>>>
>>>>> Nice, thank you Kishore!
>>>>>
>>>>> 2016-02-04 10:35 GMT+08:00 kishore g <[email protected]>:
>>>>>
>>>>>> This is needed for another project I am working on. There is no
>>>>>> reason for Helix to depend on this convention. I will fix this.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Wed, Feb 3, 2016 at 5:51 PM, ShaoFeng Shi <[email protected]>
>>>>>> wrote:
>>>>>>
>>>>>>> Hello,
>>>>>>>
>>>>>>> I'm trying to use Helix (0.7.1) to manage our resource partitions on
>>>>>>> a cluster. My scenario is that every 5 minutes a partition will be
>>>>>>> added. What I do now is get the ideal state, add 1 to the partition
>>>>>>> number, and then update it with ZKHelixAdmin (a sketch of this flow
>>>>>>> follows the mapping table below). With a rebalance action, the new
>>>>>>> partition will be created and assigned to an instance. What the
>>>>>>> instance can get from Helix is the resource name and partition id. To
>>>>>>> map the partition to my logical data, I maintain a mapping table in
>>>>>>> our datastore, which looks like:
>>>>>>>
>>>>>>> {
>>>>>>>   "resource_0": 201601010000_201601010005,
>>>>>>>   "resource_1": 201601010005_201601010010,
>>>>>>>   ...
>>>>>>>   "resource_11": 201601010055_201601010100
>>>>>>> }
>>>>>>>
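>>>>>>> The grow-by-one flow mentioned above is roughly this (a sketch; the
>>>>>>> variable names are placeholders from my code):
>>>>>>>
>>>>>>>     HelixAdmin admin = new ZKHelixAdmin(zkAddress);
>>>>>>>     IdealState is = admin.getResourceIdealState(clusterName, resourceName);
>>>>>>>     // bump the partition count by one and persist it
>>>>>>>     is.setNumPartitions(is.getNumPartitions() + 1);
>>>>>>>     admin.setResourceIdealState(clusterName, resourceName, is);
>>>>>>>     admin.rebalance(clusterName, resourceName, numReplicas);
>>>>>>>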
>>>>>>> The table above has 12 partitions. Now I want to discard some old
>>>>>>> partitions, say the first 6; it seems that in Helix the partitions
>>>>>>> must start from 0, so after an update on the IdealState that sets the
>>>>>>> number of partitions to 6, the partitions on the cluster would become:
>>>>>>>
>>>>>>> resource_0, resource_1, ..., resource_5
>>>>>>>
>>>>>>> To make sure the partitions aren't wrongly mapped, I need to update my
>>>>>>> mapping table before the rebalance, but that cannot guarantee
>>>>>>> atomicity between the two updates.
>>>>>>>
>>>>>>> So my question is: what's the suggested way to do the resource
>>>>>>> partition mapping? Does Helix allow users to attach additional
>>>>>>> information to a partition (with that, I wouldn't need to maintain the
>>>>>>> mapping outside)? Could we have simple APIs like
>>>>>>> addPartition(String partitionName) and dropPartition(String
>>>>>>> partitionName), just like those for resources? The numeric partition
>>>>>>> id could then be an internal ID that is not exposed to the user.
>>>>>>>
>>>>>>> I guess many new users will have such questions; I'm just raising
>>>>>>> this for a broad discussion. Any comments and suggestions are welcome.
>>>>>>> Thanks for your time!
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Best regards,
>>>>>>>
>>>>>>> Shaofeng Shi
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Best regards,
>>>>>
>>>>> Shaofeng Shi
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Best regards,
>>>>
>>>> Shaofeng Shi
>>>>
>>>>
>>
>>
>> --
>> Best regards,
>>
>> Shaofeng Shi
>>
>>
>
>
> --
> Best regards,
>
> Shaofeng Shi
>
>