[ https://issues.apache.org/jira/browse/HDFS-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16542275#comment-16542275 ]

Shweta commented on HDFS-13663:
-------------------------------

Hi Xiao,

I have updated the patch as you suggested above.
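To illustrate the direction of the fix, here is a minimal, self-contained sketch of replacing the assert with a thrown IOException. The class and method names below are hypothetical stand-ins for the relevant part of BlockRecoveryWorker#syncBlock, not the actual HDFS API; it only shows the control flow the patch is aiming for.

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

public class SyncBlockSketch {
    // Hypothetical stand-in: the lengths of replicas whose state matched
    // bestState (the participating replicas in syncBlock).
    static long minParticipatingLength(List<Long> matchingLengths) throws IOException {
        long minLength = Long.MAX_VALUE;
        for (long len : matchingLengths) {
            minLength = Math.min(minLength, len);
        }
        // Instead of `assert minLength != Long.MAX_VALUE`, fail loudly even
        // when assertions are disabled (the usual case in production JVMs).
        // Otherwise Long.MAX_VALUE propagates as the block length and only
        // surfaces later as the misleading "on-disk length is shorter than
        // NameNode recorded length" WARN.
        if (minLength == Long.MAX_VALUE) {
            throw new IOException("Incorrect block length: no replica in the "
                + "sync list matched the best state");
        }
        return minLength;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(minParticipatingLength(
            Arrays.asList(11852203L, 11852300L))); // prints 11852203
    }
}
```

With an empty matching list, the method now throws instead of silently returning Long.MAX_VALUE.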

> Should throw exception when incorrect block size is set
> -------------------------------------------------------
>
>                 Key: HDFS-13663
>                 URL: https://issues.apache.org/jira/browse/HDFS-13663
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Yongjun Zhang
>            Assignee: Shweta
>            Priority: Major
>         Attachments: HDFS-13663.001.patch, HDFS-13663.002.patch, 
> HDFS-13663.003.patch
>
>
> See
> ./hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
> {code}
> void syncBlock(List<BlockRecord> syncList) throws IOException {
>         ...
>         newBlock.setNumBytes(finalizedLength);
>         break;
>       case RBW:
>       case RWR:
>         long minLength = Long.MAX_VALUE;
>         for(BlockRecord r : syncList) {
>           ReplicaState rState = r.rInfo.getOriginalReplicaState();
>           if(rState == bestState) {
>             minLength = Math.min(minLength, r.rInfo.getNumBytes());
>             participatingList.add(r);
>           }
>           if (LOG.isDebugEnabled()) {
>             LOG.debug("syncBlock replicaInfo: block=" + block +
>                 ", from datanode " + r.id + ", receivedState=" + rState.name() +
>                 ", receivedLength=" + r.rInfo.getNumBytes() + ", bestState=" +
>                 bestState.name());
>           }
>         }
>         // recover() guarantees syncList will have at least one replica with RWR
>         // or better state.
>         assert minLength != Long.MAX_VALUE : "wrong minLength"; // <= should throw exception
>         newBlock.setNumBytes(minLength);
>         break;
>       case RUR:
>       case TEMPORARY:
>         assert false : "bad replica state: " + bestState;
>       default:
>         break; // we have 'case' all enum values
>       }
> {code}
> When minLength is Long.MAX_VALUE, the code should throw an exception. There 
> may be other places like this. Otherwise, we see the following WARN in the 
> datanode log:
> {code}
> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Can't replicate block xyz because on-disk length 11852203 is shorter than NameNode recorded length 9223372036854775807
> {code}
> where 9223372036854775807 is Long.MAX_VALUE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
