Tony Wu created HDFS-9236:
-----------------------------
Summary: Add sanity check for block size during block recovery
Key: HDFS-9236
URL: https://issues.apache.org/jira/browse/HDFS-9236
Project: Hadoop HDFS
Issue Type: Bug
Affects Versions: 2.7.1
Reporter: Tony Wu
Assignee: Tony Wu
Ran into an issue while running a test against faulty DataNode code.
Currently in DataNode.java:
{code:java}
  /** Block synchronization */
  void syncBlock(RecoveringBlock rBlock,
                 List<BlockRecord> syncList) throws IOException {
    …
    // Calculate the best available replica state.
    ReplicaState bestState = ReplicaState.RWR;
    …
    // Calculate list of nodes that will participate in the recovery
    // and the new block size
    List<BlockRecord> participatingList = new ArrayList<BlockRecord>();
    final ExtendedBlock newBlock = new ExtendedBlock(bpid, blockId,
        -1, recoveryId);
    switch(bestState) {
    …
    case RBW:
    case RWR:
      long minLength = Long.MAX_VALUE;
      for(BlockRecord r : syncList) {
        ReplicaState rState = r.rInfo.getOriginalReplicaState();
        if(rState == bestState) {
          minLength = Math.min(minLength, r.rInfo.getNumBytes());
          participatingList.add(r);
        }
      }
      newBlock.setNumBytes(minLength);
      break;
    …
    }
    …
    nn.commitBlockSynchronization(block,
        newBlock.getGenerationStamp(), newBlock.getNumBytes(), true, false,
        datanodes, storages);
  }
{code}
This code is called by the DN coordinating the block recovery. In the above
case, it is possible for none of the rStates (reported by the DNs holding
copies of the replica being recovered) to match bestState. This can be caused
either by faulty DN code or by stale/modified/corrupted replica files on the
DNs. When this happens, the coordinating DN ends up reporting a minLength of
Long.MAX_VALUE.
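For instance, a minimal DN-side guard (just a sketch of the idea, with the
exact error handling left open, not a tested patch) could fail the recovery in
syncBlock instead of propagating the bogus length:
{code:java}
      // Sketch: if no replica matched bestState, minLength is still
      // Long.MAX_VALUE and the recovery should be aborted rather than
      // committed with a garbage length.
      if (minLength == Long.MAX_VALUE) {
        throw new IOException("No replica of " + rBlock.getBlock()
            + " matched bestState=" + bestState
            + "; refusing to set block size to Long.MAX_VALUE");
      }
      newBlock.setNumBytes(minLength);
      break;
{code}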
Unfortunately there is no check on the NN side for the new replica length. See
FSNamesystem.java:
{code:java}
  void commitBlockSynchronization(ExtendedBlock oldBlock,
      long newgenerationstamp, long newlength,
      boolean closeFile, boolean deleteblock, DatanodeID[] newtargets,
      String[] newtargetstorages) throws IOException {
    …
    if (deleteblock) {
      Block blockToDel = ExtendedBlock.getLocalBlock(oldBlock);
      boolean remove = iFile.removeLastBlock(blockToDel) != null;
      if (remove) {
        blockManager.removeBlock(storedBlock);
      }
    } else {
      // update last block
      if(!copyTruncate) {
        storedBlock.setGenerationStamp(newgenerationstamp);
        //>>>> XXX block length is updated without any check <<<<//
        storedBlock.setNumBytes(newlength);
      }
      …
      if (closeFile) {
        LOG.info("commitBlockSynchronization(oldBlock=" + oldBlock
            + ", file=" + src
            + (copyTruncate ? ", newBlock=" + truncatedBlock
                : ", newgenerationstamp=" + newgenerationstamp)
            + ", newlength=" + newlength
            + ", newtargets=" + Arrays.asList(newtargets) + ") successful");
      } else {
        LOG.info("commitBlockSynchronization(" + oldBlock + ") successful");
      }
    }
{code}
After this point the block length stored on the NN becomes Long.MAX_VALUE. Any
subsequent block report (even one with the correct length) will cause the block
to be marked as corrupt. Since this block could be the last block of the file,
if the client then goes away, the NN won't be able to recover the lease and
close the file because the last block remains under-replicated.
I believe we need a sanity check on the block size on both the DN and the NN
to prevent such cases from happening.
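On the NN side, commitBlockSynchronization could reject an obviously bogus
length before overwriting the stored block (again only a sketch; the exact
bounds and failure mode are open for discussion):
{code:java}
      // update last block
      if(!copyTruncate) {
        // Sketch: validate the recovered length instead of blindly
        // accepting whatever the coordinating DN reported.
        if (newlength < 0 || newlength == Long.MAX_VALUE) {
          throw new IOException("commitBlockSynchronization(" + oldBlock
              + ") rejected: invalid newlength=" + newlength);
        }
        storedBlock.setGenerationStamp(newgenerationstamp);
        storedBlock.setNumBytes(newlength);
      }
{code}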