[ https://issues.apache.org/jira/browse/KAFKA-1665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Manikumar resolved KAFKA-1665.
------------------------------
    Resolution: Auto Closed

Closing inactive issue, as per the above comments.

> controller state gets stuck in message after execute
> ----------------------------------------------------
>
>                 Key: KAFKA-1665
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1665
>             Project: Kafka
>          Issue Type: Bug
>          Components: controller
>            Reporter: Joe Stein
>            Priority: Major
>
> I had a 0.8.1.1 Kafka broker go down, and I was trying to use the partition 
> reassignment script to move topics off that broker. When I describe the topics, 
> I see the following:
> Topic: mini__022____active_120__33__mini Partition: 0 Leader: 2131118 Replicas: 2131118,2166601,2163421 Isr: 2131118,2166601
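> (For context, that describe output comes from the topics tool's describe 
> mode, along these lines; zk1:2181 is just a placeholder for our actual 
> ZooKeeper connect string:)
> # show leader, replica list, and ISR for the topic
> ./kafka-topics.sh --describe --zookeeper zk1:2181 --topic mini__022____active_120__33__mini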
> This shows that the broker “2163421” is down. So I create the following file 
> /tmp/move_topic.json:
> {
>     "version": 1,
>     "partitions": [
>         {
>             "topic": "mini__022____active_120__33__mini",
>             "partition": 0,
>             "replicas": [
>                 2131118, 2166601,  2156998
>             ]
>         }
>     ]
> }
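> (For what it's worth, an assignment like this can also be generated rather 
> than hand-written; zk1:2181 and /tmp/topics.json below are placeholders:)
> # /tmp/topics.json would list the topics to move, e.g.:
> # {"version": 1, "topics": [{"topic": "mini__022____active_120__33__mini"}]}
> ./kafka-reassign-partitions.sh --generate --zookeeper zk1:2181 --topics-to-move-json-file /tmp/topics.json --broker-list "2131118,2166601,2156998"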
> And then do this:
> ./kafka-reassign-partitions.sh --execute --reassignment-json-file /tmp/move_topic.json
> Successfully started reassignment of partitions {"version":1,"partitions":[{"topic":"mini__022____active_120__33__mini","partition":0,"replicas":[2131118,2166601,2156998]}]}
> However, when I try to verify this, I get the following error:
> ./kafka-reassign-partitions.sh --verify --reassignment-json-file /tmp/move_topic.json
> Status of partition reassignment:
> ERROR: Assigned replicas (2131118,2166601,2156998,2163421) don't match the list of replicas for reassignment (2131118,2166601,2156998) for partition [mini__022____active_120__33__mini,0]
> Reassignment of partition [mini__022____active_120__33__mini,0] failed
> If I describe the topic, I now see there are 4 replicas. It has been this way 
> for many hours now, so the partition seems to have permanently moved to 4 
> replicas for some reason.
> Topic:mini__022____active_120__33__mini PartitionCount:1 ReplicationFactor:4 Configs:
> Topic: mini__022____active_120__33__mini Partition: 0 Leader: 2131118 Replicas: 2131118,2166601,2156998,2163421 Isr: 2131118,2166601
> If I re-execute and re-verify, I get the same error. So it seems to be wedged.
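> (In case it helps with diagnosing the stuck state: the in-flight reassignment 
> is kept in the /admin/reassign_partitions znode, so it can be inspected from 
> a ZooKeeper shell. This is only a sketch; zk1:2181 is a placeholder, and 
> deleting the znode is a last-resort workaround, not a fix for the bug itself:)
> # open a ZooKeeper shell (ships in Kafka's bin/) against the ensemble
> ./zookeeper-shell.sh zk1:2181
> # at the prompt, show the reassignment the controller is still holding
> get /admin/reassign_partitions
> # last resort only: clear the wedged reassignment, then re-run --execute
> delete /admin/reassign_partitions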


