[ https://issues.apache.org/jira/browse/HDFS-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13852561#comment-13852561 ]
Hadoop QA commented on HDFS-5667:
---------------------------------

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12619429/h5667.03.patch
  against trunk revision .

    {color:green}+1 @author{color}. The patch does not contain any @author tags.

    {color:green}+1 tests included{color}. The patch appears to include 8 new or modified test files.

    {color:red}-1 javac{color}. The applied patch generated 1545 javac compiler warnings (more than the trunk's current 1543 warnings).

    {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.

    {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.

    {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.

    {color:red}-1 core tests{color}. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

                  org.apache.hadoop.hdfs.server.datanode.TestDiskError

    {color:green}+1 contrib tests{color}. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/5764//testReport/
Javac warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/5764//artifact/trunk/patchprocess/diffJavacWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5764//console

This message is automatically generated.
> Include DatanodeStorage in StorageReport
> ----------------------------------------
>
>                 Key: HDFS-5667
>                 URL: https://issues.apache.org/jira/browse/HDFS-5667
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode
>    Affects Versions: 3.0.0
>            Reporter: Eric Sirianni
>            Assignee: Arpit Agarwal
>             Fix For: 3.0.0
>
>         Attachments: h5667.02.patch, h5667.03.patch
>
>
> The fix for HDFS-5484 was accidentally regressed by the following change made via HDFS-5542:
> {code}
>   DatanodeStorageInfo updateStorage(DatanodeStorage s) {
>     synchronized (storageMap) {
>       DatanodeStorageInfo storage = storageMap.get(s.getStorageID());
>       if (storage == null) {
> @@ -670,8 +658,6 @@
>             " for DN " + getXferAddr());
>         storage = new DatanodeStorageInfo(this, s);
>         storageMap.put(s.getStorageID(), storage);
> -     } else {
> -       storage.setState(s.getState());
>       }
>       return storage;
>     }
> {code}
> By removing the 'else' branch and no longer updating the state in the block report
> processing path, we are effectively left with the bogus state & type that were set via
> the first heartbeat (see the fix for HDFS-5455):
> {code}
> +    if (storage == null) {
> +      // This is seen during cluster initialization when the heartbeat
> +      // is received before the initial block reports from each storage.
> +      storage = updateStorage(new DatanodeStorage(report.getStorageID()));
> {code}
> Even reverting the change and reintroducing the 'else' branch leaves the state &
> type temporarily inaccurate until the first block report arrives.
> As discussed with [~arpitagarwal], a better fix would be to simply include
> the full {{DatanodeStorage}} object in the {{StorageReport}} (as opposed to
> only the Storage ID). This requires adding the {{DatanodeStorage}} object to
> {{StorageReportProto}}. It needs to be a new optional field, and we cannot
> remove the existing {{StorageUuid}} for protocol compatibility.

--
This message was sent by Atlassian JIRA
(v6.1.4#6159)
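The proposed fix can be illustrated with a minimal, self-contained sketch. The class names below mirror the Hadoop ones ({{DatanodeStorage}}, {{StorageReport}}, {{updateStorage}}) but are simplified stand-ins, not the actual HDFS classes or signatures; the point is only to show why carrying the whole {{DatanodeStorage}} in each report lets heartbeat processing record the correct state & type immediately, instead of placeholder values that persist until the first block report.

```java
import java.util.HashMap;
import java.util.Map;

public class StorageReportSketch {

    enum State { NORMAL, FAILED }
    enum StorageType { DISK, SSD }

    // Simplified stand-in for the Hadoop DatanodeStorage class:
    // a storage ID plus its state and media type.
    static class DatanodeStorage {
        final String storageID;
        final State state;
        final StorageType type;
        DatanodeStorage(String storageID, State state, StorageType type) {
            this.storageID = storageID;
            this.state = state;
            this.type = type;
        }
    }

    // Proposed shape: the report carries the whole DatanodeStorage
    // object, not just the storage ID string.
    static class StorageReport {
        final DatanodeStorage storage;
        final long capacity;
        StorageReport(DatanodeStorage storage, long capacity) {
            this.storage = storage;
            this.capacity = capacity;
        }
    }

    // Simplified analogue of updateStorage: create the map entry on
    // first sight, otherwise refresh it if the reported state changed.
    static class DatanodeDescriptor {
        final Map<String, DatanodeStorage> storageMap = new HashMap<>();

        DatanodeStorage updateStorage(DatanodeStorage s) {
            synchronized (storageMap) {
                DatanodeStorage existing = storageMap.get(s.storageID);
                if (existing == null || existing.state != s.state) {
                    storageMap.put(s.storageID, s);
                    return s;
                }
                return existing;
            }
        }
    }

    public static void main(String[] args) {
        DatanodeDescriptor dn = new DatanodeDescriptor();
        // First heartbeat already carries state & type, so no block
        // report is needed before they are accurate.
        StorageReport heartbeat = new StorageReport(
            new DatanodeStorage("DS-1", State.NORMAL, StorageType.SSD), 1024L);
        DatanodeStorage stored = dn.updateStorage(heartbeat.storage);
        System.out.println(stored.type);
        System.out.println(stored.state);
    }
}
```

With only the storage ID in the report (the pre-fix behavior), the first heartbeat would have to construct a DatanodeStorage with default state and type, which is exactly the transient inaccuracy described above.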