[ https://issues.apache.org/jira/browse/HDFS-1526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13022464#comment-13022464 ]

Hudson commented on HDFS-1526:
------------------------------

Integrated in Hadoop-Hdfs-trunk #643 (See [https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk/643/])

> Dfs client name for a map/reduce task should have some randomness
> -----------------------------------------------------------------
>
>                 Key: HDFS-1526
>                 URL: https://issues.apache.org/jira/browse/HDFS-1526
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs client
>            Reporter: Hairong Kuang
>            Assignee: Hairong Kuang
>             Fix For: 0.23.0
>
>         Attachments: clientName.patch, randClientId1.patch, 
> randClientId2.patch, randClientId3.patch
>
>
> Fsck shows that one of the files in our dfs cluster is corrupt:
> /bin/hadoop fsck aFile -files -blocks -locations
> aFile: 4633 bytes, 2 block(s):
> aFile: CORRUPT block blk_-4597378336099313975
> 0. blk_-4597378336099313975_2284630101 len=0 repl=3 [...]
> 1. blk_5024052590403223424_2284630107 len=4633 repl=3 [...]
> Status: CORRUPT
> On disk, these two blocks have the same size and the same content. It turns
> out the writer of the file is a multi-threaded map task, in which each thread
> may write to the same file. One possible interleaving of two threads could
> make this happen:
> [T1: create aFile] [T2: delete aFile] [T2: create aFile] [T1: addBlock 0 to
> aFile] [T2: addBlock 1 to aFile]...
> Because T1 and T2 have the same client name, which is the map task id, the
> above interleaving proceeds without any lease exception, eventually leading
> to a corrupt file. To solve the problem, a mapreduce task's client name
> could be formed from its task id followed by a random number.
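
A minimal sketch of the proposed naming scheme (not the actual patch; the
class and method names below are hypothetical): build the DFS client name
from the task id plus a random suffix, so that two client instances started
with the same task id never present the same name to the namenode.

    import java.util.Random;

    // Hypothetical sketch: derive a DFS client name from the map/reduce
    // task id plus a random suffix, so distinct client instances created
    // for the same task id never share a name (and hence a lease).
    public final class RandomizedClientName {
        // java.util.Random is thread-safe, so one shared instance suffices.
        private static final Random RAND = new Random();

        private RandomizedClientName() {}

        // taskId is the task attempt id,
        // e.g. "attempt_201104220000_0001_m_000000_0".
        public static String newClientName(String taskId) {
            return "DFSClient_" + taskId + "_" + RAND.nextInt(Integer.MAX_VALUE);
        }
    }

Assuming each writer obtains its own client instance, the two writers in the
interleaving above would then hold distinct names, so the second create would
hit a lease conflict for another client's name and fail fast, instead of both
writers silently adding blocks to the same file.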

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
