[ https://issues.apache.org/jira/browse/HADOOP-3050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hairong Kuang updated HADOOP-3050:
----------------------------------

    Attachment: blockReport2.patch

This patch makes sure that the initial block report is sent once and only once.
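
For illustration only, a minimal Java sketch of a "once and only once" guard, assuming a simple boolean flag checked in the datanode's service loop; the class and method names (DataNodeSketch, offerService, sendBlockReport) are illustrative and do not reflect the actual patch:

    // Minimal sketch, not the actual patch: guard the initial block report
    // with a flag so it is sent exactly once, even if the service loop
    // re-enters the reporting branch.
    public class DataNodeSketch {
        private boolean initialBlockReportSent = false;

        void offerService() throws InterruptedException {
            while (true) {
                if (!initialBlockReportSent) {
                    sendBlockReport();              // full report of all local replicas
                    initialBlockReportSent = true;  // guard: never resend the initial report
                }
                Thread.sleep(1000);                 // heartbeats and periodic work elided
            }
        }

        private void sendBlockReport() {
            // placeholder: in HDFS this would RPC the namenode with the replica list
        }
    }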

> Cluster falls into infinite loop trying to replicate a block to a target that 
> already has this replica.
> -----------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-3050
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3050
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.2
>            Reporter: Konstantin Shvachko
>            Assignee: Hairong Kuang
>            Priority: Blocker
>             Fix For: 0.17.0
>
>         Attachments: blockReport.patch, blockReport1.patch, 
> blockReport2.patch, FailedTestDecommission.log
>
>
> This happened during a test run by Hudson, so fortunately we have all the 
> logs.
> http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1987/console
> Search for TestDecommission and look for block blk_167544198419718831, which 
> is being replicated to node 127.0.0.1:65168 over and over again.
> The issue needs to be investigated. I am making it a blocker until it is.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
