[ 
https://issues.apache.org/jira/browse/HDFS-404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HDFS-404.
------------------------------

    Resolution: Invalid
      Assignee: Todd Lipcon

Hi Qianyu,

This is not the forum for questions about the code.

To quickly answer your question: the comparison is between the Blocks, not the 
LocatedBlocks. It is essentially an assertion that the actual block IDs in the 
file have not changed - if they have, it indicates that the file has been 
swapped with another file underneath the reader. Checking that the blocks have 
the same IDs causes the client to throw an error instead of silently starting 
to read the other file.
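To illustrate the idea, here is a minimal, hypothetical sketch of that check. It is not the actual DFSClient code: `Block` is reduced to a bare ID (a `Long`) rather than the real `org.apache.hadoop.hdfs.protocol.Block`, and `verifyUnchanged` is an invented helper name.

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Hypothetical sketch of the blocklist consistency check described above.
// Block IDs are modeled as plain Longs for illustration only.
public class BlocklistCheck {
    static void verifyUnchanged(List<Long> oldIds, List<Long> newIds, String src)
            throws IOException {
        Iterator<Long> oldIter = oldIds.iterator();
        Iterator<Long> newIter = newIds.iterator();
        // The same positions must hold the same block IDs; a mismatch means
        // the file at 'src' was replaced underneath the reader.
        while (oldIter.hasNext() && newIter.hasNext()) {
            if (!oldIter.next().equals(newIter.next())) {
                throw new IOException("Blocklist for " + src + " has changed!");
            }
        }
    }

    public static void main(String[] args) {
        try {
            // Unchanged file: same IDs at the same positions, no error.
            verifyUnchanged(Arrays.asList(1L, 2L), Arrays.asList(1L, 2L), "/a");
            System.out.println("same file: ok");
            // Swapped file: ID 7 where ID 1 used to be, so the check throws.
            verifyUnchanged(Arrays.asList(1L, 2L), Arrays.asList(7L, 2L), "/a");
        } catch (IOException e) {
            System.out.println("detected: " + e.getMessage());
        }
    }
}
```

The point of the design is that failing loudly is preferable to returning data from a different file than the one the reader opened.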

Resolving as invalid.

> Why open method in class DFSClient would compare old LocatedBlocks and new 
> LocatedBlocks?
> -----------------------------------------------------------------------------------------
>
>                 Key: HDFS-404
>                 URL: https://issues.apache.org/jira/browse/HDFS-404
>             Project: Hadoop HDFS
>          Issue Type: Wish
>            Reporter: qianyu
>            Assignee: Todd Lipcon
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> This is in the package org.apache.hadoop.hdfs, DFSClient.openInfo():
>
>       if (locatedBlocks != null) {
>         Iterator<LocatedBlock> oldIter = locatedBlocks.getLocatedBlocks().iterator();
>         Iterator<LocatedBlock> newIter = newInfo.getLocatedBlocks().iterator();
>         while (oldIter.hasNext() && newIter.hasNext()) {
>           if (!oldIter.next().getBlock().equals(newIter.next().getBlock())) {
>             throw new IOException("Blocklist for " + src + " has changed!");
>           }
>         }
>       }
> Why do we need to compare the old LocatedBlocks and the new LocatedBlocks, 
> and in what case does this happen?
> Why not just do "this.locatedBlocks = newInfo" directly?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
