[ https://issues.apache.org/jira/browse/HDFS-15714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17272751#comment-17272751 ]

Hadoop QA commented on HDFS-15714:
----------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
27s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
4s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:blue}0{color} | {color:blue} buf {color} | {color:blue}  0m  1s{color} 
| {color:blue}{color} | {color:blue} buf was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:green}+1{color} | {color:green} {color} | {color:green}  0m  0s{color} 
| {color:green}test4tests{color} | {color:green} The patch appears to include 
79 new or modified test files. {color} |
|| || || || {color:brown} HDFS-15714 Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 13m 
53s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for 
branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
50s{color} | {color:green}{color} | {color:green} HDFS-15714 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
54s{color} | {color:green}{color} | {color:green} HDFS-15714 passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
22s{color} | {color:green}{color} | {color:green} HDFS-15714 passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
 9s{color} | {color:green}{color} | {color:green} HDFS-15714 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m  
3s{color} | {color:green}{color} | {color:green} HDFS-15714 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
27m 51s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
30s{color} | {color:green}{color} | {color:green} HDFS-15714 passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
53s{color} | {color:green}{color} | {color:green} HDFS-15714 passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
46s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs 
config; consider switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 11m 
26s{color} | {color:green}{color} | {color:green} HDFS-15714 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for 
patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
31s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m  
7s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 21m  7s{color} | 
{color:red}https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/diff-compile-cc-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt{color}
 | {color:red} root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 generated 30 new + 142 unchanged - 30 
fixed = 172 total (was 172) {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 21m  7s{color} 
| 
{color:red}https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/diff-compile-javac-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt{color}
 | {color:red} root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 generated 67 new + 2006 unchanged - 27 
fixed = 2073 total (was 2033) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
21s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 22m 21s{color} | 
{color:red}https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/diff-compile-cc-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt{color}
 | {color:red} root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 with 
JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 generated 33 new + 139 
unchanged - 33 fixed = 172 total (was 172) {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 22m 21s{color} 
| 
{color:red}https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/diff-compile-javac-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt{color}
 | {color:red} root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 with 
JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 generated 67 new + 
1901 unchanged - 27 fixed = 1968 total (was 1928) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
4m 43s{color} | 
{color:orange}https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/diff-checkstyle-root.txt{color}
 | {color:orange} root: The patch generated 144 new + 4280 unchanged - 35 fixed 
= 4424 total (was 4315) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m 
18s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | 
{color:red}https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/whitespace-eol.txt{color}
 | {color:red} The patch has 6 line(s) that end in whitespace. Use git apply 
--whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
7s{color} | {color:green}{color} | {color:green} The patch has no ill-formed 
XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 31s{color} | {color:green}{color} | {color:green} patch has no errors when 
building and testing our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m  
9s{color} | 
{color:red}https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt{color}
 | {color:red} hadoop-common in the patch failed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
55s{color} | 
{color:red}https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt{color}
 | {color:red} hadoop-hdfs-client in the patch failed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m 
19s{color} | 
{color:red}https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt{color}
 | {color:red} hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m 
34s{color} | 
{color:red}https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt{color}
 | {color:red} 
hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
 with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 generated 2 new 
+ 1 unchanged - 0 fixed = 3 total (was 1) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
31s{color} | 
{color:red}https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/new-findbugs-hadoop-common-project_hadoop-common.html{color}
 | {color:red} hadoop-common-project/hadoop-common generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
50s{color} | 
{color:red}https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html{color}
 | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 25 new + 0 unchanged - 
0 fixed = 25 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} || ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m  9s{color} 
| 
{color:red}https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt{color}
 | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
30s{color} | {color:green}{color} | {color:green} hadoop-hdfs-client in the 
patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}127m 42s{color} 
| 
{color:red}https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt{color}
 | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 44s{color} 
| 
{color:red}https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt{color}
 | {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 46s{color} 
| 
{color:red}https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt{color}
 | {color:red} hadoop-aws in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 27s{color} 
| 
{color:red}https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/patch-unit-hadoop-tools_hadoop-fs2img.txt{color}
 | {color:red} hadoop-fs2img in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  1m 
52s{color} | 
{color:red}https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/patch-asflicense-problems.txt{color}
 | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}409m 22s{color} | 
{color:black}{color} | {color:black}{color} |
\\
\\
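The "X new + Y unchanged - Z fixed = T total (was W)" counts in the compile, cc, and checkstyle rows above follow a simple accounting identity. As a sketch of my reading of the Yetus report format (not its specification), using the JDK 11 javac row as the worked case:

```java
// Yetus warning accounting, worked through for the javac (JDK 11) row above:
// "generated 67 new + 2006 unchanged - 27 fixed = 2073 total (was 2033)".
public class YetusWarningMath {
  // Post-patch total = warnings the patch introduced + warnings left untouched.
  static int total(int added, int unchanged) { return added + unchanged; }
  // Pre-patch count ("was") = untouched warnings + warnings the patch removed.
  static int was(int unchanged, int fixed) { return unchanged + fixed; }

  public static void main(String[] args) {
    System.out.println(total(67, 2006) + " (was " + was(2006, 27) + ")");
    // prints: 2073 (was 2033)
  }
}
```

So a patch can fix warnings and still net out worse: here 27 javac warnings were removed but 67 were added, raising the total from 2033 to 2073.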
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Found reliance on default encoding in 
org.apache.hadoop.fs.impl.FileSystemMultipartUploader.lambda$innerComplete$3(Map$Entry):in
 
org.apache.hadoop.fs.impl.FileSystemMultipartUploader.lambda$innerComplete$3(Map$Entry):
 String.getBytes()  At FileSystemMultipartUploader.java:[line 217] |
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Switch statement found in 
org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(DatanodeProtocolProtos$ProvidedVolCommandProto)
 where default case is missing  At PBHelper.java:where default case is missing  
At PBHelper.java:[lines 833-838] |
|  |  Switch statement found in 
org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(ProvidedVolumeCommand) where 
default case is missing  At PBHelper.java:where default case is missing  At 
PBHelper.java:[lines 640-645] |
|  |  
org.apache.hadoop.hdfs.server.blockmanagement.ProvidedStorageMap.addBlocksToAliasMap(Map)
 makes inefficient use of keySet iterator instead of entrySet iterator  At 
ProvidedStorageMap.java:keySet iterator instead of entrySet iterator  At 
ProvidedStorageMap.java:[line 274] |
|  |  Redundant nullcheck of nnProxy, which is known to be non-null in 
org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.NamenodeInMemoryAliasMapClient.setConf(Configuration)
  Redundant null check at NamenodeInMemoryAliasMapClient.java:is known to be 
non-null in 
org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.NamenodeInMemoryAliasMapClient.setConf(Configuration)
  Redundant null check at NamenodeInMemoryAliasMapClient.java:[line 51] |
|  |  Redundant nullcheck of 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaMap.replicas(String),
 which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.deepCopyReplica(String)
  Redundant null check at FsDatasetImpl.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.deepCopyReplica(String)
  Redundant null check at FsDatasetImpl.java:[line 229] |
|  |  
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getVolumeMap(RamDiskReplicaTracker)
 makes inefficient use of keySet iterator instead of entrySet iterator  At 
FsVolumeImpl.java:keySet iterator instead of entrySet iterator  At 
FsVolumeImpl.java:[line 1074] |
|  |  Redundant nullcheck of 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.SynchronousReadThroughInputStream.localReplicaInfo,
 which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.SynchronousReadThroughInputStream.createLocalReplica(FsVolumeImpl)
  Redundant null check at SynchronousReadThroughInputStream.java:is known to be 
non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.SynchronousReadThroughInputStream.createLocalReplica(FsVolumeImpl)
  Redundant null check at SynchronousReadThroughInputStream.java:[line 175] |
|  |  instanceof will always return true for all non-null values in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.VolumeReplicaMap.addAll(VolumeReplicaMap),
 since all 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.VolumeReplicaMap are 
instances of 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.VolumeReplicaMap  At 
VolumeReplicaMap.java:for all non-null values in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.VolumeReplicaMap.addAll(VolumeReplicaMap),
 since all 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.VolumeReplicaMap are 
instances of 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.VolumeReplicaMap  At 
VolumeReplicaMap.java:[line 170] |
|  |  Redundant nullcheck of xConfig, which is known to be non-null in 
org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$AddMountOp.writeFields(DataOutputStream)
  Redundant null check at FSEditLogOp.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$AddMountOp.writeFields(DataOutputStream)
  Redundant null check at FSEditLogOp.java:[line 4466] |
|  |  Possible null pointer dereference of r in 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.truncate(String, long, 
String, String, long)  Dereferenced at FSNamesystem.java:r in 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.truncate(String, long, 
String, String, long)  Dereferenced at FSNamesystem.java:[line 2440] |
|  |  Found reliance on default encoding in 
org.apache.hadoop.hdfs.server.namenode.MountManager.prepareMount(String, 
String, MountMode, Map, Configuration):in 
org.apache.hadoop.hdfs.server.namenode.MountManager.prepareMount(String, 
String, MountMode, Map, Configuration): String.getBytes()  At 
MountManager.java:[line 177] |
|  |  Found reliance on default encoding in 
org.apache.hadoop.hdfs.server.namenode.SyncMountManager.lambda$getXattrValueByName$3(XAttr):in
 
org.apache.hadoop.hdfs.server.namenode.SyncMountManager.lambda$getXattrValueByName$3(XAttr):
 new String(byte[])  At SyncMountManager.java:[line 324] |
|  |  Found reliance on default encoding in 
org.apache.hadoop.hdfs.server.namenode.SyncMountManager.performInitialDiff(String,
 String):in 
org.apache.hadoop.hdfs.server.namenode.SyncMountManager.performInitialDiff(String,
 String): String.getBytes()  At SyncMountManager.java:[line 227] |
|  |  Found reliance on default encoding in 
org.apache.hadoop.hdfs.server.namenode.SyncMountManager.storeSnapshotNameAsXAttr(String,
 String, String, XAttrSetFlag):in 
org.apache.hadoop.hdfs.server.namenode.SyncMountManager.storeSnapshotNameAsXAttr(String,
 String, String, XAttrSetFlag): String.getBytes()  At 
SyncMountManager.java:[line 282] |
|  |  Incorrect lazy initialization of static field 
org.apache.hadoop.hdfs.server.namenode.SyncMountManager.manager in 
org.apache.hadoop.hdfs.server.namenode.SyncMountManager.getInstance(Configuration,
 FSNamesystem)  At SyncMountManager.java:field 
org.apache.hadoop.hdfs.server.namenode.SyncMountManager.manager in 
org.apache.hadoop.hdfs.server.namenode.SyncMountManager.getInstance(Configuration,
 FSNamesystem)  At SyncMountManager.java:[lines 104-105] |
|  |  
org.apache.hadoop.hdfs.server.namenode.mountmanager.SimpleReadCacheManager.findBlocksToEvict(long)
 makes inefficient use of keySet iterator instead of entrySet iterator  At 
SimpleReadCacheManager.java:keySet iterator instead of entrySet iterator  At 
SimpleReadCacheManager.java:[line 320] |
|  |  Found reliance on default encoding in 
org.apache.hadoop.hdfs.server.namenode.syncservice.SyncMonitor.getSourceSnapshotId(SnapshotDiffReport):in
 
org.apache.hadoop.hdfs.server.namenode.syncservice.SyncMonitor.getSourceSnapshotId(SnapshotDiffReport):
 String.getBytes()  At SyncMonitor.java:[line 303] |
|  |  Found reliance on default encoding in 
org.apache.hadoop.hdfs.server.namenode.syncservice.SyncMonitor.getTargetSnapshotId(SnapshotDiffReport):in
 
org.apache.hadoop.hdfs.server.namenode.syncservice.SyncMonitor.getTargetSnapshotId(SnapshotDiffReport):
 String.getBytes()  At SyncMonitor.java:[line 315] |
|  |  Inconsistent synchronization of 
org.apache.hadoop.hdfs.server.namenode.syncservice.SyncServiceSatisfier.syncServiceSatisfierThread;
 locked 70% of time  Unsynchronized access at SyncServiceSatisfier.java:70% of 
time  Unsynchronized access at SyncServiceSatisfier.java:[line 182] |
|  |  Incorrect lazy initialization of static field 
org.apache.hadoop.hdfs.server.namenode.syncservice.WriteCacheEvictor.writeCacheEvictor
 in 
org.apache.hadoop.hdfs.server.namenode.syncservice.WriteCacheEvictor.getInstance(Configuration,
 FSNamesystem)  At WriteCacheEvictor.java:field 
org.apache.hadoop.hdfs.server.namenode.syncservice.WriteCacheEvictor.writeCacheEvictor
 in 
org.apache.hadoop.hdfs.server.namenode.syncservice.WriteCacheEvictor.getInstance(Configuration,
 FSNamesystem)  At WriteCacheEvictor.java:[lines 84-92] |
|  |  Found reliance on default encoding in 
org.apache.hadoop.hdfs.server.namenode.syncservice.planner.DirectoryPlanner.convertPathToAbsoluteFile(byte[],
 Path):in 
org.apache.hadoop.hdfs.server.namenode.syncservice.planner.DirectoryPlanner.convertPathToAbsoluteFile(byte[],
 Path): new String(byte[])  At DirectoryPlanner.java:[line 63] |
|  |  Found reliance on default encoding in 
org.apache.hadoop.hdfs.server.namenode.syncservice.planner.DirectoryPlanner.convertPathToAbsoluteFile(byte[],
 Path, String):in 
org.apache.hadoop.hdfs.server.namenode.syncservice.planner.DirectoryPlanner.convertPathToAbsoluteFile(byte[],
 Path, String): new String(byte[])  At DirectoryPlanner.java:[line 73] |
|  |  Found reliance on default encoding in 
org.apache.hadoop.hdfs.server.namenode.syncservice.planner.DirectoryPlanner.createPlanForDirectory(SnapshotDiffReport$DiffReportEntry,
 String, ProvidedVolumeInfo, int):in 
org.apache.hadoop.hdfs.server.namenode.syncservice.planner.DirectoryPlanner.createPlanForDirectory(SnapshotDiffReport$DiffReportEntry,
 String, ProvidedVolumeInfo, int): String.getBytes()  At 
DirectoryPlanner.java:[line 103] |
|  |  Format string should use %n rather than \n in 
org.apache.hadoop.hdfs.tools.DFSAdmin.addMount(String[])  At 
DFSAdmin.java:rather than \n in 
org.apache.hadoop.hdfs.tools.DFSAdmin.addMount(String[])  At 
DFSAdmin.java:[line 2711] |
|  |  Format string should use %n rather than \n in 
org.apache.hadoop.hdfs.tools.DFSAdmin.listMounts(String[])  At 
DFSAdmin.java:rather than \n in 
org.apache.hadoop.hdfs.tools.DFSAdmin.listMounts(String[])  At 
DFSAdmin.java:[line 2746] |
| Failed junit tests | hadoop.ha.TestZKFailoverController |
|   | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
|   | hadoop.hdfs.tools.TestDFSAdminWithHA |
|   | hadoop.hdfs.TestDatanodeRegistration |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.TestAppendSnapshotTruncate |
|   | hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer |
|   | hadoop.hdfs.TestReplaceDatanodeFailureReplication |
|   | hadoop.hdfs.TestGetFileChecksum |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.server.blockmanagement.TestHeartbeatHandling |
|   | hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes |
|   | hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup |
|   | hadoop.tools.TestJMXGet |
|   | hadoop.hdfs.server.blockmanagement.TestBlockReportRateLimiting |
|   | hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer |
|   | hadoop.hdfs.TestBlocksScheduledCounter |
|   | hadoop.hdfs.TestDistributedFileSystem |
|   | hadoop.hdfs.TestReadStripedFileWithDecodingCorruptData |
|   | hadoop.hdfs.security.TestDelegationToken |
|   | hadoop.hdfs.server.blockmanagement.TestDatanodeManager |
|   | hadoop.hdfs.TestFileLengthOnClusterRestart |
|   | hadoop.hdfs.TestFileAppend3 |
|   | hadoop.hdfs.TestSafeMode |
|   | hadoop.hdfs.TestAppendDifferentChecksum |
|   | hadoop.hdfs.TestFileAppend2 |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | hadoop.hdfs.web.TestWebHDFSAcl |
|   | hadoop.hdfs.tools.TestDFSAdmin |
|   | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.server.blockmanagement.TestCorruptionWithFailover |
|   | hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs |
|   | hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.hdfs.server.blockmanagement.TestComputeInvalidateWork |
|   | hadoop.hdfs.TestUnsetAndChangeDirectoryEcPolicy |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages |
|   | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
|   | hadoop.hdfs.TestAclsEndToEnd |
|   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshot |
|   | hadoop.hdfs.server.blockmanagement.TestBlockReportLease |
|   | hadoop.fs.TestFcHdfsPermission |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.TestLeaseRecoveryStriped |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor |
|   | hadoop.hdfs.server.balancer.TestBalancerService |
|   | hadoop.hdfs.server.blockmanagement.TestPendingReconstruction |
|   | 
hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness
 |
|   | hadoop.hdfs.TestMissingBlocksAlert |
|   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
|   | hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
|   | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
|   | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
|   | hadoop.hdfs.server.namenode.TestMultiRootProvidedCluster |
|   | hadoop.hdfs.server.namenode.TestSingleUGIResolver |
|   | hadoop.hdfs.server.namenode.TestFailuresDuringMount |
\\
\\
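Several of the FindBugs findings above are instances of a few recurring patterns: reliance on the platform default charset, iterating `keySet()` where `entrySet()` avoids a lookup per key, and `\n` instead of `%n` in format strings. The typical fixes look like the following generic sketches (illustrative only, not the actual Hadoop sources; all names here are hypothetical):

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Generic illustrations of fixes for the recurring FindBugs patterns above.
public class FindbugsPatternFixes {

  // "Reliance on default encoding": String.getBytes() uses the platform
  // charset, which varies between hosts; pass an explicit charset instead.
  static byte[] encode(String s) {
    return s.getBytes(StandardCharsets.UTF_8);
  }

  // "Inefficient use of keySet iterator": when both key and value are
  // needed, iterate entrySet() to avoid a second map lookup per key.
  static long sumLengths(Map<String, byte[]> replicas) {
    long total = 0;
    for (Map.Entry<String, byte[]> e : replicas.entrySet()) {
      total += e.getKey().length() + e.getValue().length;
    }
    return total;
  }

  // "Format string should use %n": in format methods, %n expands to the
  // platform line separator, unlike a hard-coded \n.
  static String banner(String mount) {
    return String.format("Mounted %s%n", mount);
  }

  public static void main(String[] args) {
    Map<String, byte[]> m = new HashMap<>();
    m.put("blk_1", encode("data"));
    System.out.println(sumLengths(m)); // 5 (key) + 4 (value bytes) = 9
    System.out.print(banner("/mnt/provided"));
  }
}
```

The same shape applies to the other flagged sites, e.g. the `String.getBytes()` calls in SyncMountManager and MountManager would take `StandardCharsets.UTF_8` as an argument.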
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/2655 |
| JIRA Issue | HDFS-15714 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle xml cc buflint bufcompat |
| uname | Linux 9a595f5ac4c5 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | HDFS-15714 / d82009599a2 |
| Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
|  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/testReport/ |
| Max. process+thread count | 2226 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-rbf hadoop-tools/hadoop-aws 
hadoop-tools/hadoop-fs2img U: . |
| Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/console |
| versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |


This message was automatically generated.



> HDFS Provided Storage Read/Write Mount Support On-the-fly
> ---------------------------------------------------------
>
>                 Key: HDFS-15714
>                 URL: https://issues.apache.org/jira/browse/HDFS-15714
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: datanode, namenode
>    Affects Versions: 3.4.0
>            Reporter: Feilong He
>            Assignee: Feilong He
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HDFS-15714-01.patch, 
> HDFS_Provided_Storage_Design-V1.pdf, HDFS_Provided_Storage_Performance-V1.pdf
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> HDFS Provided Storage (PS) is a feature that tiers HDFS over other file 
> systems. HDFS-9806 introduced the PROVIDED storage type to HDFS. By 
> configuring external storage with the PROVIDED tag on a DataNode, users can 
> let applications access externally stored data through HDFS. However, two 
> issues need to be addressed. First, mounting external storage on-the-fly 
> (dynamic mount) is not supported, yet it is needed to flexibly combine HDFS 
> with an external storage at runtime. Second, current HDFS does not support 
> PS write, although in real applications it is common to transfer data 
> bi-directionally, reading and writing between HDFS and external storage.
> In this JIRA, we present our work on PS write support and on dynamic mount 
> support for both read & write. Note that several JIRAs have already been 
> filed in the community for these topics. Our work builds on that previous 
> community work, with a new design & implementation that supports the 
> so-called writeBack mount and enables an admin to add any mount on-the-fly. 
> We appreciate those folks in the community for their great contributions! 
> See their pending JIRAs: HDFS-14805 & HDFS-12090.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
