[ https://issues.apache.org/jira/browse/HDFS-3701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13461744#comment-13461744 ]

Tsz Wo (Nicholas), SZE commented on HDFS-3701:
----------------------------------------------

- fetchLocatedBlocksAndGetLastBlockLength() returns the last block length, but 
the length is only used for checking whether it equals -1.  So how about 
changing the return type to boolean (false means the location is unavailable) 
and renaming it to fetchLocatedBlocks()?  Then openInfo could be simplified as 
below:
{code}
    synchronized void openInfo() throws IOException {
      for(int retries = 3; retries > 0; retries--) {
        if (fetchLocatedBlocks()) {
          // fetched the located blocks successfully
          return;
        } else {
          // The last block location is unavailable. When a cluster restarts,
          // DNs may not report their blocks immediately, so the NN cannot
          // yet provide the locations needed to compute the length.
          // Let's retry a few times to get the length.
          DFSClient.LOG.warn("Last block locations unavailable. "
              + "Datanodes might not have reported blocks completely."
              + " Will retry " + (retries - 1) + " more times.");
          waitFor(4000);
        }
      }
      throw new IOException("Could not obtain the last block locations.");
    }
{code}
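
For reference, a minimal sketch of the suggested signature change; the fetch 
logic itself is unchanged, and getLastBlockLength() below is only a 
hypothetical stand-in for the existing body of 
fetchLocatedBlocksAndGetLastBlockLength():
{code}
    // Sketch only: getLastBlockLength() is a hypothetical stand-in for the
    // existing fetch logic; only the name and return type change.
    private boolean fetchLocatedBlocks() throws IOException {
      final long lastBlockLength = getLastBlockLength();
      // false means the last block location is unavailable
      return lastBlockLength != -1;
    }
{code}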

- We may also change the return type of updateBlockInfo(..) to boolean since 
the length is not used.  I am fine if you want to keep the length.

- In waitFor(..), throw InterruptedIOException, a subclass of IOException, 
instead of plain IOException, as sketched below.
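
A minimal sketch of that change, assuming waitFor(..) simply sleeps (needs 
import java.io.InterruptedIOException):
{code}
    // Sketch only, assuming waitFor(..) just sleeps.
    // Requires: import java.io.InterruptedIOException;
    private static void waitFor(int waitTime) throws InterruptedIOException {
      try {
        Thread.sleep(waitTime);
      } catch (InterruptedException e) {
        // restore the interrupt status and surface it as an IOException subclass
        Thread.currentThread().interrupt();
        throw new InterruptedIOException(
            "Interrupted while waiting for the last block locations.");
      }
    }
{code}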

                
> HDFS may miss the final block when reading a file opened for writing if one 
> of the datanodes is dead
> ---------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-3701
>                 URL: https://issues.apache.org/jira/browse/HDFS-3701
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs client
>    Affects Versions: 1.0.3
>            Reporter: nkeywal
>            Priority: Critical
>         Attachments: HDFS-3701.branch-1.v2.merged.patch, 
> HDFS-3701.ontopof.v1.patch, HDFS-3701.patch
>
>
> When a file is opened for writing, the DFSClient calls one of the datanodes 
> owning the last block to get its size. If this datanode is dead, the socket 
> exception is swallowed and the size of this last block is reported as zero. 
> This seems to be fixed on trunk, but I didn't find a related Jira. On 1.0.3, 
> it's not fixed. It's in the same area as HDFS-1950 or HDFS-3222.
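
For illustration, a hypothetical sketch of the failure pattern described in 
the report (not the actual branch-1 code): the RPC failure against the dead 
datanode is swallowed, so the reader silently sees a last-block length of 
zero.
{code}
// Hypothetical illustration only, not the actual DFSClient code:
// getReplicaLengthFrom(..) stands in for the call to the datanode.
private long readLastBlockLength(DatanodeInfo datanode) {
  try {
    return getReplicaLengthFrom(datanode); // hypothetical RPC to the DN
  } catch (IOException e) {
    // bug pattern: the socket exception is swallowed and 0 is returned
    // instead of failing over to another replica or propagating the error
    return 0;
  }
}
{code}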
