[
https://issues.apache.org/jira/browse/HDFS-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16542180#comment-16542180
]
Xiao Chen edited comment on HDFS-13663 at 7/12/18 8:35 PM:
-----------------------------------------------------------
Hi Shweta,
Thanks for the update. Patch 3 looks really close. Could you remove the extra
line above the {{break}}?
was (Author: xiaochen):
Hi Shweta,
Patch 3 looks really close. Could you remove the extra line above the {{break}}?
> Should throw exception when incorrect block size is set
> -------------------------------------------------------
>
> Key: HDFS-13663
> URL: https://issues.apache.org/jira/browse/HDFS-13663
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Yongjun Zhang
> Assignee: Shweta
> Priority: Major
> Attachments: HDFS-13663.001.patch, HDFS-13663.002.patch,
> HDFS-13663.003.patch, HDFS-13663.004.patch
>
>
> See
> ./hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
> {code}
> void syncBlock(List<BlockRecord> syncList) throws IOException {
>   ...
>     newBlock.setNumBytes(finalizedLength);
>     break;
>   case RBW:
>   case RWR:
>     long minLength = Long.MAX_VALUE;
>     for(BlockRecord r : syncList) {
>       ReplicaState rState = r.rInfo.getOriginalReplicaState();
>       if(rState == bestState) {
>         minLength = Math.min(minLength, r.rInfo.getNumBytes());
>         participatingList.add(r);
>       }
>       if (LOG.isDebugEnabled()) {
>         LOG.debug("syncBlock replicaInfo: block=" + block +
>             ", from datanode " + r.id + ", receivedState=" + rState.name() +
>             ", receivedLength=" + r.rInfo.getNumBytes() + ", bestState=" +
>             bestState.name());
>       }
>     }
>     // recover() guarantees syncList will have at least one replica with RWR
>     // or better state.
>     assert minLength != Long.MAX_VALUE : "wrong minLength";   <= should throw exception
>     newBlock.setNumBytes(minLength);
>     break;
>   case RUR:
>   case TEMPORARY:
>     assert false : "bad replica state: " + bestState;
>   default:
>     break; // we have 'case' all enum values
>   }
> {code}
> When minLength is Long.MAX_VALUE, an exception should be thrown instead of
> relying on the assert (see the sketch after the log snippet below). There might
> be other places like this.
> Otherwise, we would see the following WARN in the datanode log:
> {code}
> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Can't replicate block
> xyz because on-disk length 11852203 is shorter than NameNode recorded length
> 9223372036854775807
> {code}
> where 9223372036854775807 is Long.MAX_VALUE.
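> One possible shape of the fix (a sketch only, reusing {{minLength}}, {{bestState}},
> {{block}} and {{newBlock}} from the excerpt above; the actual patch may differ):
> replace the assert with an explicit {{IOException}}, since assertions are disabled
> at runtime unless the JVM is started with {{-ea}}.
> {code}
> // Hypothetical replacement for the assert in syncBlock(): fail the recovery
> // loudly instead of letting Long.MAX_VALUE flow into the new block length.
> if (minLength == Long.MAX_VALUE) {
>   throw new IOException("Incorrect block size: no replica in syncList had "
>       + "state " + bestState + " for block " + block);
> }
> newBlock.setNumBytes(minLength);
> {code}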