I will create a new post for that.
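
In the meantime, for anyone hitting the same symptom: the cancel/restart loop in the logs below can be watched directly, and allocation can be paused while investigating. A minimal sketch, assuming a node is reachable on localhost:9200 (host/port are placeholders; the endpoints are the stock cluster APIs available in this ES generation):

```shell
# Watch ongoing shard recoveries; a looping shard shows up as an
# entry that keeps reappearing instead of completing.
curl -s 'localhost:9200/_cat/recovery?v'

# Temporarily stop shard allocation/rebalancing while investigating.
curl -s -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.enable": "none" }
}'

# Re-enable allocation when done.
curl -s -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.enable": "all" }
}'
```

This only pauses the loop; it does not fix the underlying cause.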

On Saturday, 28 March 2015 at 02:47:44 UTC-3, Mark Walkom wrote:
>
> Please don't put such large logs directly into emails like this, use 
> gist/pastebin/etc :)
>
> If you can do that, someone might be able to read things a little more 
> easily and provide assistance.
>
> On 28 March 2015 at 04:27, Marcelo Paes Rech <marcelo...@gmail.com> wrote:
>
> Hi guys,
>
> Our cluster has 7 data nodes. Last week one of them was shut down. We 
> restarted the node and then the cluster started rebalancing. The problem is 
> that it never finishes rebalancing.
>
> The node logs show an update coming from the master node during the 
> rebalancing; seconds later the rebalancing is cancelled, and then it starts 
> again.
>
> I have created #10281 
> <https://github.com/elastic/elasticsearch/issues/10281> on GitHub, but I am 
> including the problem here too. Maybe somebody has already run into this 
> problem and fixed it.
>
> The logs were collected at different moments, but they show the same 
> behaviour.
>
> The log follows:
>
> Data Node:
>
> [2015-03-24 15:26:05,244][DEBUG][indices.cluster          ] [THE_CHOSEN_ONE] 
> [my_index][0] creating shard
> [2015-03-24 15:26:05,244][DEBUG][index.service            ] [THE_CHOSEN_ONE] 
> [my_index] creating shard_id [0]
> [2015-03-24 15:26:05,274][DEBUG][index.deletionpolicy     ] [THE_CHOSEN_ONE] 
> [my_index][0] Using [keep_only_last] deletion policy
> [2015-03-24 15:26:05,275][DEBUG][index.merge.policy       ] [THE_CHOSEN_ONE] 
> [my_index][0] using [tiered] merge policy with expunge_deletes_allowed[10.0], 
> floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], 
> max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0]
> [2015-03-24 15:26:05,276][DEBUG][index.merge.scheduler    ] [THE_CHOSEN_ONE] 
> [my_index][0] using [concurrent] merge scheduler with max_thread_count[3]
> [2015-03-24 15:26:05,278][DEBUG][index.shard.service      ] [THE_CHOSEN_ONE] 
> [my_index][0] state: [CREATED]
> [2015-03-24 15:26:05,278][DEBUG][index.translog           ] [THE_CHOSEN_ONE] 
> [my_index][0] interval [5s], flush_threshold_ops [5000], flush_threshold_size 
> [200mb], flush_threshold_period [30m]
> [2015-03-24 15:26:05,280][DEBUG][index.shard.service      ] [THE_CHOSEN_ONE] 
> [my_index][0] state: [CREATED]->[RECOVERING], reason [from 
> [VEROIA][Fto2cgJ-RB-IjV3EfLm1Sw][es2.mydomain][inet[/172.31.234.165:9300]]{master=false}]
> [2015-03-24 15:26:05,281][DEBUG][indices.cluster          ] [THE_CHOSEN_ONE] 
> [my_index][2] creating shard
> [2015-03-24 15:26:05,281][DEBUG][index.service            ] [THE_CHOSEN_ONE] 
> [my_index] creating shard_id [2]
> [2015-03-24 15:26:05,358][DEBUG][index.deletionpolicy     ] [THE_CHOSEN_ONE] 
> [my_index][2] Using [keep_only_last] deletion policy
> [2015-03-24 15:26:05,361][DEBUG][index.merge.policy       ] [THE_CHOSEN_ONE] 
> [my_index][2] using [tiered] merge policy with expunge_deletes_allowed[10.0], 
> floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], 
> max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0]
> [2015-03-24 15:26:05,363][DEBUG][index.merge.scheduler    ] [THE_CHOSEN_ONE] 
> [my_index][2] using [concurrent] merge scheduler with max_thread_count[3]
> [2015-03-24 15:26:05,371][DEBUG][index.shard.service      ] [THE_CHOSEN_ONE] 
> [my_index][2] state: [CREATED]
> [2015-03-24 15:26:05,373][DEBUG][index.translog           ] [THE_CHOSEN_ONE] 
> [my_index][2] interval [5s], flush_threshold_ops [5000], flush_threshold_size 
> [200mb], flush_threshold_period [30m]
> [2015-03-24 15:26:05,378][DEBUG][index.shard.service      ] [THE_CHOSEN_ONE] 
> [my_index][2] state: [CREATED]->[RECOVERING], reason [from 
> [PERAIA][bOUh4S-fSmW8vcD8Gw0sQw][es4.mydomain][inet[/172.31.234.167:9300]]{master=false}]
> [2015-03-24 15:26:05,380][DEBUG][cluster.service          ] [THE_CHOSEN_ONE] 
> processing [zen-disco-receive(from master 
> [[MR3][4LssjGYPTX6EL7Kv-ylgMQ][master3.mydomain][inet[/172.31.234.181:9300]]{data=false,
>  master=true}])]: done applying updated cluster_state (version: 116348)
> [2015-03-24 15:26:05,682][DEBUG][discovery.zen.publish    ] [THE_CHOSEN_ONE] 
> received cluster state version 116349
> [2015-03-24 15:26:05,684][DEBUG][discovery.zen            ] [THE_CHOSEN_ONE] 
> received cluster state from 
> [[MR3][4LssjGYPTX6EL7Kv-ylgMQ][master3.mydomain][inet[/172.31.234.181:9300]]{data=false,
>  master=true}] which is also master but with cluster name [Cluster [gvtmusic]]
> [2015-03-24 15:26:05,685][DEBUG][cluster.service          ] [THE_CHOSEN_ONE] 
> processing [zen-disco-receive(from master 
> [[MR3][4LssjGYPTX6EL7Kv-ylgMQ][master3.mydomain][inet[/172.31.234.181:9300]]{data=false,
>  master=true}])]: execute
> [2015-03-24 15:26:05,685][DEBUG][cluster.service          ] [THE_CHOSEN_ONE] 
> cluster state updated, version [116349], source [zen-disco-receive(from 
> master 
> [[MR3][4LssjGYPTX6EL7Kv-ylgMQ][master3.mydomain][inet[/172.31.234.181:9300]]{data=false,
>  master=true}])]
> [2015-03-24 15:26:05,686][DEBUG][cluster.service          ] [THE_CHOSEN_ONE] 
> set local cluster state to version 116349
> [2015-03-24 15:26:05,687][DEBUG][indices.cluster          ] [THE_CHOSEN_ONE] 
> [my_index][0] removing shard (different instance of it allocated on this 
> node, current [[my_index][0], node[OI3VgsmPSBekiVF7WhSm4Q], relocating 
> [gBhKyL1gQ1WnHLUFvR_BPQ], [R], s[INITIALIZING]], global [[my_index][0], 
> node[OI3VgsmPSBekiVF7WhSm4Q], [R], s[INITIALIZING]])
> [2015-03-24 15:26:05,807][DEBUG][index.shard.service      ] [THE_CHOSEN_ONE] 
> [my_index][0] state: [RECOVERING]->[CLOSED], reason [removing shard 
> (different instance of it allocated on this node)]
> [2015-03-24 15:26:05,808][DEBUG][indices.cluster          ] [THE_CHOSEN_ONE] 
> [my_index][0] creating shard
> [2015-03-24 15:26:05,808][DEBUG][index.service            ] [THE_CHOSEN_ONE] 
> [my_index] creating shard_id [0]
> [2015-03-24 15:26:05,856][DEBUG][index.deletionpolicy     ] [THE_CHOSEN_ONE] 
> [my_index][0] Using [keep_only_last] deletion policy
> [2015-03-24 15:26:05,857][DEBUG][index.merge.policy       ] [THE_CHOSEN_ONE] 
> [my_index][0] using [tiered] merge policy with expunge_deletes_allowed[10.0], 
> floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], 
> max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0]
> [2015-03-24 15:26:05,858][DEBUG][index.merge.scheduler    ] [THE_CHOSEN_ONE] 
> [my_index][0] using [concurrent] merge scheduler with max_thread_count[3]
> [2015-03-24 15:26:05,861][DEBUG][index.shard.service      ] [THE_CHOSEN_ONE] 
> [my_index][0] state: [CREATED]
> [2015-03-24 15:26:05,862][DEBUG][index.translog           ] [THE_CHOSEN_ONE] 
> [my_index][0] interval [5s], flush_threshold_ops [5000], flush_threshold_size 
> [200mb], flush_threshold_period [30m]
> [2015-03-24 15:26:05,865][DEBUG][index.shard.service      ] [THE_CHOSEN_ONE] 
> [my_index][0] state: [CREATED]->[RECOVERING], reason [from 
> [VEROIA][Fto2cgJ-RB-IjV3EfLm1Sw][es2.mydomain][inet[/172.31.234.165:9300]]{master=false}]
> [2015-03-24 15:26:05,868][DEBUG][cluster.service          ] [THE_CHOSEN_ONE] 
> processing [zen-disco-receive(from master 
> [[MR3][4LssjGYPTX6EL7Kv-ylgMQ][master3.mydomain][inet[/172.31.234.181:9300]]{data=false,
>  master=true}])]: done applying updated cluster_state (version: 116349)
> [2015-03-24 15:26:06,514][DEBUG][discovery.zen.publish    ] [THE_CHOSEN_ONE] 
> received cluster state version 116350
> [2015-03-24 15:26:06,514][DEBUG][discovery.zen            ] [THE_CHOSEN_ONE] 
> received cluster state from 
> [[MR3][4LssjGYPTX6EL7Kv-ylgMQ][master3.mydomain][inet[/172.31.234.181:9300]]{data=false,
>  master=true}] which is also master but with cluster name [Cluster [gvtmusic]]
> [2015-03-24 15:26:06,515][DEBUG][cluster.service          ] [THE_CHOSEN_ONE] 
> processing [zen-disco-receive(from master 
> [[MR3][4LssjGYPTX6EL7Kv-ylgMQ][master3.mydomain][inet[/172.31.234.181:9300]]{data=false,
>  master=true}])]: execute
> [2015-03-24 15:26:06,515][DEBUG][cluster.service          ] [THE_CHOSEN_ONE] 
> cluster state updated, version [116350], source [zen-disco-receive(from 
> master 
> [[MR3][4LssjGYPTX6EL7Kv-ylgMQ][master3.mydomain][inet[/172.31.234.181:9300]]{data=false,
>  master=true}])]
> [2015-03-24 15:26:06,516][DEBUG][cluster.service          ] [THE_CHOSEN_ONE] 
> set local cluster state to version 116350
> [2015-03-24 15:26:06,518][DEBUG][indices.cluster          ] [THE_CHOSEN_ONE] 
> [my_index][2] removing shard (different instance of it allocated on this 
> node, current [[my_index][2], node[OI3VgsmPSBekiVF7WhSm4Q], relocating 
> [Fto2cgJ-RB-IjV3EfLm1Sw], [R], s[INITIALIZING]], global [[my_index][2], 
> node[OI3VgsmPSBekiVF7WhSm4Q], [R], s[INITIALIZING]])
> [2015-03-24 15:26:06,622][DEBUG][index.shard.service      ] [THE_CHOSEN_ONE] 
> [my_index][2] state: [RECOVERING]->[CLOSED], reason [removing shard 
> (different instance of it allocated on this node)]
> [2015-03-24 15:26:06,623][DEBUG][indices.cluster          ] [THE_CHOSEN_ONE] 
> [my_index][2] creating shard
> [2015-03-24 15:26:06,623][DEBUG][index.service            ] [THE_CHOSEN_ONE] 
> [my_index] creating shard_id [2]
> [2015-03-24 15:26:06,687][DEBUG][index.deletionpolicy     ] [THE_CHOSEN_ONE] 
> [my_index][2] Using [keep_only_last] deletion policy
> [2015-03-24 15:26:06,688][DEBUG][index.merge.policy       ] [THE_CHOSEN_ONE] 
> [my_index][2] using [tiered] merge policy with expunge_deletes_allowed[10.0], 
> floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], 
> max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0]
> [2015-03-24 15:26:06,690][DEBUG][index.merge.scheduler    ] [THE_CHOSEN_ONE] 
> [my_index][2] using [concurrent] merge scheduler with max_thread_count[3]
> [2015-03-24 15:26:06,692][DEBUG][index.shard.service      ] [THE_CHOSEN_ONE] 
> [my_index][2] state: [CREATED]
> [2015-03-24 15:26:06,692][DEBUG][index.translog           ] [THE_CHOSEN_ONE] 
> [my_index][2] interval [5s], flush_threshold_ops [5000], flush_threshold_size 
> [200mb], flush_threshold_period [30m]
> [2015-03-24 15:26:06,695][DEBUG][index.shard.service      ] [THE_CHOSEN_ONE] 
> [my_index][2] state: [CREATED]->[RECOVERING], reason [from 
> [PERAIA][bOUh4S-fSmW8vcD8Gw0sQw][es4.mydomain][inet[/172.31.234.167:9300]]{master=false}]
> [2015-03-24 15:26:06,697][DEBUG][cluster.service          ] [THE_CHOSEN_ONE] 
> processing [zen-disco-receive(from master 
> [[MR3][4LssjGYPTX6EL7Kv-ylgMQ][master3.mydomain][inet[/172.31.234.181:9300]]{data=false,
>  master=true}])]: done applying updated cluster_state (version: 116350)
> [2015-03-24 15:26:08,215][DEBUG][discovery.zen.publish    ] [THE_CHOSEN_ONE] 
> received cluster state version 116351
> [2015-03-24 15:26:08,215][DEBUG][discovery.zen            ] [THE_CHOSEN_ONE] 
> received cluster state from 
> [[MR3][4LssjGYPTX6EL7Kv-ylgMQ][master3.mydomain][inet[/172.31.234.181:9300]]{data=false,
>  master=true}] which is also master but with cluster name [Cluster [gvtmusic]]
> [2015-03-24 15:26:08,216][DEBUG][cluster.service          ] [THE_CHOSEN_ONE] 
> processing [zen-disco-receive(from master 
> [[MR3][4LssjGYPTX6EL7Kv-ylgMQ][master3.mydomain][inet[/172.31.234.181:9300]]{data=false,
>  master=true}])]: execute
> [2015-03-24 15:26:08,217][DEBUG][cluster.service          ] [THE_CHOSEN_ONE] 
> cluster state updated, version [116351], source [zen-disco-receive(from 
> master 
> [[MR3][4LssjGYPTX6EL7Kv-ylgMQ][master3.mydomain][inet[/172.31.234.181:9300]]{data=false,
>  master=true}])]
> [2015-03-24 15:26:08,218][DEBUG][cluster.service          ] [THE_CHOSEN_ONE] 
> set local cluster state to version 116351
> [2015-03-24 15:26:08,224][DEBUG][indices.cluster          ] [THE_CHOSEN_ONE] 
> [my_index][0] removing shard (not allocated)
> [2015-03-24 15:26:08,297][DEBUG][index.shard.service      ] [THE_CHOSEN_ONE] 
> [my_index][0] state: [RECOVERING]->[CLOSED], reason [removing shard (not 
> allocated)]
> [2015-03-24 15:26:08,304][DEBUG][cluster.service          ] [THE_CHOSEN_ONE] 
> processing [zen-disco-receive(from master 
> [[MR3][4LssjGYPTX6EL7Kv-ylgMQ][master3.mydomain][inet[/172.31.234.181:9300]]{data=false,
>  master=true}])]: done applying updated cluster_state (version: 116351)
> [2015-03-24 15:26:08,812][DEBUG][discovery.zen.publish    ] [THE_CHOSEN_ONE] 
> received cluster state version 116352
> [2015-03-24 15:26:08,813][DEBUG][discovery.zen            ] [THE_CHOSEN_ONE] 
> received cluster state from 
> [[MR3][4LssjGYPTX6EL7Kv-ylgMQ][master3.mydomain][inet[/172.31.234.181:9300]]{data=false,
>  master=true}] which is also master but with cluster name [Cluster [gvtmusic]]
> [2015-03-24 15:26:08,813][DEBUG][cluster.service          ] [THE_CHOSEN_ONE] 
> processing [zen-disco-receive(from master 
> [[MR3][4LssjGYPTX6EL7Kv-ylgMQ][master3.mydomain][inet[/172.31.234.181:9300]]{data=false,
>  master=true}])]: execute
> [2015-03-24 15:26:08,814][DEBUG][cluster.service          ] [THE_CHOSEN_ONE] 
> cluster state updated, version [116352], source [zen-disco-receive(from 
> master 
> [[MR3][4LssjGYPTX6EL7Kv-ylgMQ][master3.mydomain][inet[/172.31.234.181:9300]]{data=false,
>  master=true}])]
> [2015-03-24 15:26:08,814][DEBUG][cluster.service          ] [THE_CHOSEN_ONE] 
> set local cluster state to version 116352
> [2015-03-24 15:26:08,816][DEBUG][indices.cluster          ] [THE_CHOSEN_ONE] 
> [my_index][2] removing shard (not allocated)
> [2015-03-24 15:26:08,820][DEBUG][index.shard.service      ] [THE_CHOSEN_ONE] 
> [my_index][2] state: [RECOVERING]->[CLOSED], reason [removing shard (not 
> allocated)]
> [2015-03-24 15:26:08,821][DEBUG][indices.cluster          ] [THE_CHOSEN_ONE] 
> [my_index] cleaning index (no shards allocated)
> [2015-03-24 15:26:08,821][DEBUG][index.cache.filter.weighted] 
> [THE_CHOSEN_ONE] [my_index] full cache clear, reason [close]
> [2015-03-24 15:26:08,827][DEBUG][cluster.service          ] [THE_CHOSEN_ONE] 
> processing [zen-disco-receive(from master 
> [[MR3][4LssjGYPTX6EL7Kv-ylgMQ][master3.mydomain][inet[/172.31.234.181:9300]]{data=false,
>  master=true}])]: done applying updated cluster_state (version: 116352)
>
>
> Master node:
>
> [2015-03-26 15:23:08,468][WARN ][cluster.action.shard     ] [MR3] 
> [my_index][1] received shard failed for [my_index][1], 
> node[bOUh4S-fSmW8vcD8Gw0sQw], relocating [4UkjJGtNQdCvJH3OuXEyUQ], [R], 
> s[RELOCATING], indexUUID [evIJHzdjQP6x0vr3Tve2-w], reason [Failed to perform 
> [indices/index/b_shard/delete] on replica, message 
> [RemoteTransportException[Failed to deserialize exception response from 
> stream]; nested: TransportSerializationException[Failed to deserialize 
> exception response from stream]; nested: StreamCorruptedException[unexpected 
> end of block data]; ]]
> [editor's note: the preceding WARN entry is repeated four more times with 
> identical content, at timestamps 15:23:08,506 through 15:23:08,534]
> [2015-03-26 15:23:08,545][DEBUG][cluster.service          ] [MR3] set local 
> cluster state to version 142212
> [2015-03-26 15:23:08,545][DEBUG][river.cluster            ] [MR3] processing 
> [reroute_rivers_node_changed]: execute
> [2015-03-26 15:23:08,545][DEBUG][river.cluster            ] [MR3] processing 
> [reroute_rivers_node_changed]: no change in cluster_state
> [2015-03-26 15:23:08,547][DEBUG][cluster.service          ] [MR3] processing 
> [shard-started ([my_index][1], node[bOUh4S-fSmW8vcD8Gw0sQw], [R], 
> s[INITIALIZING]), reason [after recovery (replica) from node 
> [[KOZANI][_dLxnGwlTt6P4i7PqzNEUQ][pves1-6.popvono][inet[/172.31.234.187:9300]]{master=false}]]]:
>  done applying updated cluster_state (version: 142212)
> [2015-03-26 15:23:08,547][DEBUG][cluster.service          ] [MR3] processing 
> [shard-failed ([my_index][1], node[bOUh4S-fSmW8vcD8Gw0sQw], relocating 
> [4UkjJGtNQdCvJH3OuXEyUQ], [R], s[RELOCATING]), reason [Failed to perform 
> [indices/index/b_shard/delete] on replica, message 
> [RemoteTransportException[Failed to deserialize exception response from 
> stream]; nested: TransportSerializationException[Failed to deserialize 
> exception response from stream]; nested: StreamCorruptedException[unexpected 
> end of block data]; ]]]: execute
> [2015-03-26 15:23:08,547][DEBUG][cluster.action.shard     ] [MR3] 
> [my_index][1] will apply shard failed [my_index][1], 
> node[bOUh4S-fSmW8vcD8Gw0sQw], relocating [4UkjJGtNQdCvJH3OuXEyUQ], [R], 
> s[RELOCATING], indexUUID [evIJHzdjQP6x0vr3Tve2-w], reason [Failed to perform 
> [indices/index/b_shard/delete] on replica, message 
> [RemoteTransportException[Failed to deserialize exception response from 
> stream]; nested: TransportSerializationException[Failed to deserialize 
> exception response from stream]; nested: StreamCorruptedException[unexpected 
> end of block data]; ]]
> [editor's note: the preceding DEBUG entry is repeated 19 more times with 
> identical content, at timestamps 15:23:08,547 through 15:23:08,549]
> [2015-03-26 15:23:08,564][DEBUG][cluster.service          ] [MR3] cluster 
> state updated, version [142213], source [shard-failed ([my_index][1], 
> node[bOUh4S-fSmW8vcD8Gw0sQw], relocating [4UkjJGtNQdCvJH3OuXEyUQ], [R], 
> s[RELOCATING]), reason [Failed to perform [indices/index/b_shard/delete] on 
> replica, message [RemoteTransportException[Failed to deserialize exception 
> response from stream]; nested: TransportSerializationException[Failed to 
> deserialize exception response from stream]; nested: 
> StreamCorruptedException[unexpected end of block data]; ]]]
> [2015-03-26 15:23:08,564][DEBUG][cluster.service          ] [MR3] publishing 
> cluster state version 142213
> [2015-03-26 15:23:08,601][WARN ][cluster.action.shard     ] [MR3] 
> [my_index][1] received shard failed for [my_index][1], 
> node[bOUh4S-fSmW8vcD8Gw0sQw], relocating [4UkjJGtNQdCvJH3OuXEyUQ], [R], 
> s[RELOCATING], indexUUID [evIJHzdjQP6x0vr3Tve2-w], reason [Failed to perform 
> [index] on replica, message [RemoteTransportException[Failed to deserialize 
> exception response from stream]; nested: 
> TransportSerializationException[Failed to deserialize exception response from 
> stream]; nested: StreamCorruptedException[unexpected end of block data]; ]]
> [2015-03-26 15:23:08,619][WARN ][cluster.action.shard     ] [MR3] 
> [my_index][1] received shard failed for [my_index][1], 
> node[4UkjJGtNQdCvJH3OuXEyUQ], [R], s[INITIALIZING], indexUUID 
> [evIJHzdjQP6x0vr3Tve2-w], reason [Failed to perform [index] on replica, 
> message [RemoteTransportException[Failed to deserialize exception response 
> from stream]; nested: TransportSerializationException[Failed to deserialize 
> exception response from stream]; nested: StreamCorruptedException[unexpected 
> end of block data]; ]]
> [2015-03-26 15:23:08,668][WARN ][cluster.action.shard     ] [MR3] 
> [my_index][1] received shard failed for [my_index][1], 
> node[4UkjJGtNQdCvJH3OuXEyUQ], [R], s[INITIALIZING], indexUUID 
> [evIJHzdjQP6x0vr3Tve2-w], reason [Failed to perform [index] on replica, 
> message [RemoteTransportException[Failed to deserialize exception response 
> from stream]; nested: TransportSerializationException[Failed to deserialize 
> exception response from stream]; nested: StreamCorruptedException[unexpected 
> end of block data]; ]]
> [2015-03-26 15:23:08,724][DEBUG][cluster.service          ] [MR3] set local 
> cluster state to version 142213
> [2015-03-26 15:23:08,725][DEBUG][river.cluster            ] [MR3] processing 
> [reroute_rivers_node_changed]: execute
> [2015-03-26 15:23:08,725][DEBUG][river.cluster            ] [MR3] processing 
> [reroute_rivers_node_changed]: no change in cluster_state
> [2015-03-26 15:23:08,725][WARN ][cluster.action.shard     ] [MR3] 
> [my_index][1] received shard failed for [my_index][1], 
> node[4UkjJGtNQdCvJH3OuXEyUQ], [R], s[INITIALIZING], indexUUID 
> [evIJHzdjQP6x0vr3Tve2-w], reason [Failed to perform [index] on replica, 
> message [RemoteTransportException[Failed to deserialize exception response 
> from stream]; nested: TransportSerializationException[Failed to deserialize 
> exception response from stream]; nested: StreamCorruptedException[unexpected 
> end of block data]; ]]
> [2015-03-26 15:23:08,727][DEBUG][cluster.service          ] [MR3] processing 
> [shard-failed ([my_index][1], node[bOUh4S-fSmW8vcD8Gw0sQw], relocating 
> [4UkjJGtNQdCvJH3OuXEyUQ], [R], s[RELOCATING]), reason [Failed to perform 
> [indices/index/b_shard/delete] on replica, message 
> [RemoteTransportException[Failed to deserialize exception response from 
> stream]; nested: TransportSerializationException[Failed to deserialize 
> exception response from stream]; nested: StreamCorruptedException[unexpected 
> end of block data]; ]]]: done applying updated cluster_state (version: 142213)
> [2015-03-26 15:23:08,727][DEBUG][cluster.service          ] [MR3] processing 
> [shard-failed ([my_index][1], node[bOUh4S-fSmW8vcD8Gw0sQw], relocating 
> [4UkjJGtNQdCvJH3OuXEyUQ], [R], s[RELOCATING]), reason [Failed to perform 
> [indices/index/b_shard/delete] on replica, message 
> [RemoteTransportException[Failed to deserialize exception response from 
> stream]; nested: TransportSerializationException[Failed to deserialize 
> exception response from stream]; nested: StreamCorruptedException[unexpected 
> end of block data]; ]]]: execute
> [2015-03-26 15:23:08,727][DEBUG][cluster.service          ] [MR3] processing 
> [shard-failed ([my_index][1], node[bOUh4S-fSmW8vcD8Gw0sQw], relocating 
> [4UkjJGtNQdCvJH3OuXEyUQ], [R], s[RELOCATING]), reason [Failed to perform 
> [indices/index/b_shard/delete] on replica, message 
> [RemoteTransportException[Failed to deserialize exception response from 
> stream]; nested: TransportSerializationException[Failed to deserialize 
> exception response from stream]; nested: StreamCorruptedException[unexpected 
> end of block data]; ]]]: no change in cluster_state
> [the same shard-failed task for [my_index][1] is then processed repeatedly, 
> each "processing [shard-failed (...)]: execute" entry followed by 
> ": no change in cluster_state", with identical timestamps and the same 
> StreamCorruptedException[unexpected end of block data] reason; the log was 
> truncated here]
>
> ...

-- 
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/956fd660-0a6f-441a-aecd-736537898b49%40googlegroups.com.