[
https://issues.apache.org/jira/browse/HDFS-15714?focusedWorklogId=542754&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-542754
]
ASF GitHub Bot logged work on HDFS-15714:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 27/Jan/21 10:32
Start Date: 27/Jan/21 10:32
Worklog Time Spent: 10m
Work Description: hadoop-yetus commented on pull request #2655:
URL: https://github.com/apache/hadoop/pull/2655#issuecomment-768192307
:broken_heart: **-1 overall**
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 1m 27s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 4s | | No case conflicting files
found. |
| +0 :ok: | buf | 0m 1s | | buf was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain
any @author tags. |
| +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch
appears to include 79 new or modified test files. |
|||| _ HDFS-15714 Compile Tests _ |
| +0 :ok: | mvndep | 13m 53s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 23m 50s | | HDFS-15714 passed |
| +1 :green_heart: | compile | 21m 54s | | HDFS-15714 passed with JDK
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | compile | 18m 22s | | HDFS-15714 passed with JDK
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | checkstyle | 4m 9s | | HDFS-15714 passed |
| +1 :green_heart: | mvnsite | 6m 3s | | HDFS-15714 passed |
| +1 :green_heart: | shadedclient | 27m 51s | | branch has no errors
when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 4m 30s | | HDFS-15714 passed with JDK
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javadoc | 5m 53s | | HDFS-15714 passed with JDK
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +0 :ok: | spotbugs | 0m 46s | | Used deprecated FindBugs config;
considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 11m 26s | | HDFS-15714 passed |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 26s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 4m 31s | | the patch passed |
| +1 :green_heart: | compile | 21m 7s | | the patch passed with JDK
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| -1 :x: | cc | 21m 7s |
[/diff-compile-cc-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/diff-compile-cc-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt)
| root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 with JDK
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 generated 30 new + 142 unchanged - 30
fixed = 172 total (was 172) |
| -1 :x: | javac | 21m 7s |
[/diff-compile-javac-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/diff-compile-javac-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt)
| root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 with JDK
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 generated 67 new + 2006 unchanged - 27
fixed = 2073 total (was 2033) |
| +1 :green_heart: | compile | 22m 21s | | the patch passed with JDK
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| -1 :x: | cc | 22m 21s |
[/diff-compile-cc-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/diff-compile-cc-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt)
| root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 with JDK
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 generated 33 new + 139
unchanged - 33 fixed = 172 total (was 172) |
| -1 :x: | javac | 22m 21s |
[/diff-compile-javac-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/diff-compile-javac-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt)
| root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 with JDK
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 generated 67 new + 1901
unchanged - 27 fixed = 1968 total (was 1928) |
| -0 :warning: | checkstyle | 4m 43s |
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/diff-checkstyle-root.txt)
| root: The patch generated 144 new + 4280 unchanged - 35 fixed = 4424 total
(was 4315) |
| +1 :green_heart: | mvnsite | 9m 18s | | the patch passed |
| -1 :x: | whitespace | 0m 0s |
[/whitespace-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/whitespace-eol.txt)
| The patch has 6 line(s) that end in whitespace. Use git apply
--whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply |
| +1 :green_heart: | xml | 0m 7s | | The patch has no ill-formed XML
file. |
| +1 :green_heart: | shadedclient | 19m 31s | | patch has no errors
when building and testing our client artifacts. |
| -1 :x: | javadoc | 1m 9s |
[/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt)
| hadoop-common in the patch failed with JDK
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04. |
| -1 :x: | javadoc | 0m 55s |
[/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt)
| hadoop-hdfs-client in the patch failed with JDK
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04. |
| -1 :x: | javadoc | 1m 19s |
[/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt)
| hadoop-hdfs in the patch failed with JDK
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04. |
| -1 :x: | javadoc | 1m 34s |
[/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt)
|
hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 generated 2 new
+ 1 unchanged - 0 fixed = 3 total (was 1) |
| -1 :x: | findbugs | 2m 31s |
[/new-findbugs-hadoop-common-project_hadoop-common.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/new-findbugs-hadoop-common-project_hadoop-common.html)
| hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed
= 1 total (was 0) |
| -1 :x: | findbugs | 3m 50s |
[/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html)
| hadoop-hdfs-project/hadoop-hdfs generated 25 new + 0 unchanged - 0 fixed =
25 total (was 0) |
|||| _ Other Tests _ |
| -1 :x: | unit | 17m 9s |
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
| hadoop-common in the patch passed. |
| +1 :green_heart: | unit | 2m 30s | | hadoop-hdfs-client in the patch
passed. |
| -1 :x: | unit | 127m 42s |
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
| hadoop-hdfs in the patch passed. |
| -1 :x: | unit | 0m 44s |
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
| hadoop-hdfs-rbf in the patch failed. |
| -1 :x: | unit | 0m 46s |
[/patch-unit-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt)
| hadoop-aws in the patch failed. |
| -1 :x: | unit | 16m 27s |
[/patch-unit-hadoop-tools_hadoop-fs2img.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/patch-unit-hadoop-tools_hadoop-fs2img.txt)
| hadoop-fs2img in the patch passed. |
| -1 :x: | asflicense | 1m 52s |
[/patch-asflicense-problems.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/patch-asflicense-problems.txt)
| The patch generated 1 ASF License warnings. |
| | | 409m 22s | | |
| Reason | Tests |
|-------:|:------|
| FindBugs | module:hadoop-common-project/hadoop-common |
| | Found reliance on default encoding in
org.apache.hadoop.fs.impl.FileSystemMultipartUploader.lambda$innerComplete$3(Map$Entry):in
org.apache.hadoop.fs.impl.FileSystemMultipartUploader.lambda$innerComplete$3(Map$Entry):
String.getBytes() At FileSystemMultipartUploader.java:[line 217] |
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
| | Switch statement found in
org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(DatanodeProtocolProtos$ProvidedVolCommandProto)
where default case is missing At PBHelper.java:where default case is missing
At PBHelper.java:[lines 833-838] |
| | Switch statement found in
org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(ProvidedVolumeCommand) where
default case is missing At PBHelper.java:where default case is missing At
PBHelper.java:[lines 640-645] |
| |
org.apache.hadoop.hdfs.server.blockmanagement.ProvidedStorageMap.addBlocksToAliasMap(Map)
makes inefficient use of keySet iterator instead of entrySet iterator At
ProvidedStorageMap.java:keySet iterator instead of entrySet iterator At
ProvidedStorageMap.java:[line 274] |
| | Redundant nullcheck of nnProxy, which is known to be non-null in
org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.NamenodeInMemoryAliasMapClient.setConf(Configuration)
Redundant null check at NamenodeInMemoryAliasMapClient.java:is known to be
non-null in
org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.NamenodeInMemoryAliasMapClient.setConf(Configuration)
Redundant null check at NamenodeInMemoryAliasMapClient.java:[line 51] |
| | Redundant nullcheck of
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaMap.replicas(String),
which is known to be non-null in
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.deepCopyReplica(String)
Redundant null check at FsDatasetImpl.java:is known to be non-null in
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.deepCopyReplica(String)
Redundant null check at FsDatasetImpl.java:[line 229] |
| |
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getVolumeMap(RamDiskReplicaTracker)
makes inefficient use of keySet iterator instead of entrySet iterator At
FsVolumeImpl.java:keySet iterator instead of entrySet iterator At
FsVolumeImpl.java:[line 1074] |
| | Redundant nullcheck of
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.SynchronousReadThroughInputStream.localReplicaInfo,
which is known to be non-null in
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.SynchronousReadThroughInputStream.createLocalReplica(FsVolumeImpl)
Redundant null check at SynchronousReadThroughInputStream.java:is known to be
non-null in
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.SynchronousReadThroughInputStream.createLocalReplica(FsVolumeImpl)
Redundant null check at SynchronousReadThroughInputStream.java:[line 175] |
| | instanceof will always return true for all non-null values in
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.VolumeReplicaMap.addAll(VolumeReplicaMap),
since all
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.VolumeReplicaMap are
instances of
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.VolumeReplicaMap At
VolumeReplicaMap.java:for all non-null values in
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.VolumeReplicaMap.addAll(VolumeReplicaMap),
since all
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.VolumeReplicaMap are
instances of
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.VolumeReplicaMap At
VolumeReplicaMap.java:[line 170] |
| | Redundant nullcheck of xConfig, which is known to be non-null in
org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$AddMountOp.writeFields(DataOutputStream)
Redundant null check at FSEditLogOp.java:is known to be non-null in
org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$AddMountOp.writeFields(DataOutputStream)
Redundant null check at FSEditLogOp.java:[line 4466] |
| | Possible null pointer dereference of r in
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.truncate(String, long,
String, String, long) Dereferenced at FSNamesystem.java:r in
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.truncate(String, long,
String, String, long) Dereferenced at FSNamesystem.java:[line 2440] |
| | Found reliance on default encoding in
org.apache.hadoop.hdfs.server.namenode.MountManager.prepareMount(String,
String, MountMode, Map, Configuration):in
org.apache.hadoop.hdfs.server.namenode.MountManager.prepareMount(String,
String, MountMode, Map, Configuration): String.getBytes() At
MountManager.java:[line 177] |
| | Found reliance on default encoding in
org.apache.hadoop.hdfs.server.namenode.SyncMountManager.lambda$getXattrValueByName$3(XAttr):in
org.apache.hadoop.hdfs.server.namenode.SyncMountManager.lambda$getXattrValueByName$3(XAttr):
new String(byte[]) At SyncMountManager.java:[line 324] |
| | Found reliance on default encoding in
org.apache.hadoop.hdfs.server.namenode.SyncMountManager.performInitialDiff(String,
String):in
org.apache.hadoop.hdfs.server.namenode.SyncMountManager.performInitialDiff(String,
String): String.getBytes() At SyncMountManager.java:[line 227] |
| | Found reliance on default encoding in
org.apache.hadoop.hdfs.server.namenode.SyncMountManager.storeSnapshotNameAsXAttr(String,
String, String, XAttrSetFlag):in
org.apache.hadoop.hdfs.server.namenode.SyncMountManager.storeSnapshotNameAsXAttr(String,
String, String, XAttrSetFlag): String.getBytes() At
SyncMountManager.java:[line 282] |
| | Incorrect lazy initialization of static field
org.apache.hadoop.hdfs.server.namenode.SyncMountManager.manager in
org.apache.hadoop.hdfs.server.namenode.SyncMountManager.getInstance(Configuration,
FSNamesystem) At SyncMountManager.java:field
org.apache.hadoop.hdfs.server.namenode.SyncMountManager.manager in
org.apache.hadoop.hdfs.server.namenode.SyncMountManager.getInstance(Configuration,
FSNamesystem) At SyncMountManager.java:[lines 104-105] |
| |
org.apache.hadoop.hdfs.server.namenode.mountmanager.SimpleReadCacheManager.findBlocksToEvict(long)
makes inefficient use of keySet iterator instead of entrySet iterator At
SimpleReadCacheManager.java:keySet iterator instead of entrySet iterator At
SimpleReadCacheManager.java:[line 320] |
| | Found reliance on default encoding in
org.apache.hadoop.hdfs.server.namenode.syncservice.SyncMonitor.getSourceSnapshotId(SnapshotDiffReport):in
org.apache.hadoop.hdfs.server.namenode.syncservice.SyncMonitor.getSourceSnapshotId(SnapshotDiffReport):
String.getBytes() At SyncMonitor.java:[line 303] |
| | Found reliance on default encoding in
org.apache.hadoop.hdfs.server.namenode.syncservice.SyncMonitor.getTargetSnapshotId(SnapshotDiffReport):in
org.apache.hadoop.hdfs.server.namenode.syncservice.SyncMonitor.getTargetSnapshotId(SnapshotDiffReport):
String.getBytes() At SyncMonitor.java:[line 315] |
| | Inconsistent synchronization of
org.apache.hadoop.hdfs.server.namenode.syncservice.SyncServiceSatisfier.syncServiceSatisfierThread;
locked 70% of time Unsynchronized access at SyncServiceSatisfier.java:70% of
time Unsynchronized access at SyncServiceSatisfier.java:[line 182] |
| | Incorrect lazy initialization of static field
org.apache.hadoop.hdfs.server.namenode.syncservice.WriteCacheEvictor.writeCacheEvictor
in
org.apache.hadoop.hdfs.server.namenode.syncservice.WriteCacheEvictor.getInstance(Configuration,
FSNamesystem) At WriteCacheEvictor.java:field
org.apache.hadoop.hdfs.server.namenode.syncservice.WriteCacheEvictor.writeCacheEvictor
in
org.apache.hadoop.hdfs.server.namenode.syncservice.WriteCacheEvictor.getInstance(Configuration,
FSNamesystem) At WriteCacheEvictor.java:[lines 84-92] |
| | Found reliance on default encoding in
org.apache.hadoop.hdfs.server.namenode.syncservice.planner.DirectoryPlanner.convertPathToAbsoluteFile(byte[],
Path):in
org.apache.hadoop.hdfs.server.namenode.syncservice.planner.DirectoryPlanner.convertPathToAbsoluteFile(byte[],
Path): new String(byte[]) At DirectoryPlanner.java:[line 63] |
| | Found reliance on default encoding in
org.apache.hadoop.hdfs.server.namenode.syncservice.planner.DirectoryPlanner.convertPathToAbsoluteFile(byte[],
Path, String):in
org.apache.hadoop.hdfs.server.namenode.syncservice.planner.DirectoryPlanner.convertPathToAbsoluteFile(byte[],
Path, String): new String(byte[]) At DirectoryPlanner.java:[line 73] |
| | Found reliance on default encoding in
org.apache.hadoop.hdfs.server.namenode.syncservice.planner.DirectoryPlanner.createPlanForDirectory(SnapshotDiffReport$DiffReportEntry,
String, ProvidedVolumeInfo, int):in
org.apache.hadoop.hdfs.server.namenode.syncservice.planner.DirectoryPlanner.createPlanForDirectory(SnapshotDiffReport$DiffReportEntry,
String, ProvidedVolumeInfo, int): String.getBytes() At
DirectoryPlanner.java:[line 103] |
| | Format string should use %n rather than \n in
org.apache.hadoop.hdfs.tools.DFSAdmin.addMount(String[]) At
DFSAdmin.java:rather than \n in
org.apache.hadoop.hdfs.tools.DFSAdmin.addMount(String[]) At
DFSAdmin.java:[line 2711] |
| | Format string should use %n rather than \n in
org.apache.hadoop.hdfs.tools.DFSAdmin.listMounts(String[]) At
DFSAdmin.java:rather than \n in
org.apache.hadoop.hdfs.tools.DFSAdmin.listMounts(String[]) At
DFSAdmin.java:[line 2746] |
| Failed junit tests | hadoop.ha.TestZKFailoverController |
| | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
| | hadoop.hdfs.tools.TestDFSAdminWithHA |
| | hadoop.hdfs.TestDatanodeRegistration |
| | hadoop.hdfs.TestDFSUpgradeFromImage |
| | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
| | hadoop.hdfs.TestAppendSnapshotTruncate |
| | hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer |
| | hadoop.hdfs.TestReplaceDatanodeFailureReplication |
| | hadoop.hdfs.TestGetFileChecksum |
| | hadoop.hdfs.TestRollingUpgrade |
| | hadoop.hdfs.server.blockmanagement.TestHeartbeatHandling |
| | hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes |
| | hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup |
| | hadoop.tools.TestJMXGet |
| | hadoop.hdfs.server.blockmanagement.TestBlockReportRateLimiting |
| | hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer |
| | hadoop.hdfs.TestBlocksScheduledCounter |
| | hadoop.hdfs.TestDistributedFileSystem |
| | hadoop.hdfs.TestReadStripedFileWithDecodingCorruptData |
| | hadoop.hdfs.security.TestDelegationToken |
| | hadoop.hdfs.server.blockmanagement.TestDatanodeManager |
| | hadoop.hdfs.TestFileLengthOnClusterRestart |
| | hadoop.hdfs.TestFileAppend3 |
| | hadoop.hdfs.TestSafeMode |
| | hadoop.hdfs.TestAppendDifferentChecksum |
| | hadoop.hdfs.TestFileAppend2 |
| | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
| | hadoop.hdfs.web.TestWebHDFSAcl |
| | hadoop.hdfs.tools.TestDFSAdmin |
| | hadoop.tools.TestHdfsConfigFields |
| | hadoop.hdfs.server.blockmanagement.TestCorruptionWithFailover |
| | hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs |
| | hadoop.hdfs.server.blockmanagement.TestNodeCount |
| | hadoop.hdfs.server.blockmanagement.TestComputeInvalidateWork |
| | hadoop.hdfs.TestUnsetAndChangeDirectoryEcPolicy |
| | hadoop.hdfs.server.balancer.TestBalancer |
| | hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages |
| | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
| | hadoop.hdfs.TestAclsEndToEnd |
| | hadoop.hdfs.TestErasureCodingPolicyWithSnapshot |
| | hadoop.hdfs.server.blockmanagement.TestBlockReportLease |
| | hadoop.fs.TestFcHdfsPermission |
| | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
| | hadoop.hdfs.TestLeaseRecoveryStriped |
| | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
| | hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor |
| | hadoop.hdfs.server.balancer.TestBalancerService |
| | hadoop.hdfs.server.blockmanagement.TestPendingReconstruction |
| |
hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness
|
| | hadoop.hdfs.TestMissingBlocksAlert |
| | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
| | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
| | hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
| | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
| | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
| | hadoop.hdfs.server.namenode.TestMultiRootProvidedCluster |
| | hadoop.hdfs.server.namenode.TestSingleUGIResolver |
| | hadoop.hdfs.server.namenode.TestFailuresDuringMount |
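Several of the FindBugs items above flag reliance on the default encoding (`String.getBytes()` and `new String(byte[])`, e.g. in MountManager and SyncMountManager). A minimal sketch of the flagged pattern and its conventional fix, not taken from the patch itself:

```java
import java.nio.charset.StandardCharsets;

public class EncodingExample {
    // Flagged pattern: String.getBytes() with no argument uses the JVM's
    // default charset, which varies by platform and locale settings.
    static byte[] platformDependentBytes(String s) {
        return s.getBytes();
    }

    // Conventional fix: name the charset explicitly so the byte
    // representation is identical on every host.
    static byte[] utf8Bytes(String s) {
        return s.getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(utf8Bytes("mount").length);
    }
}
```
The same fix applies to the `new String(byte[])` findings: pass `StandardCharsets.UTF_8` to the constructor.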
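Three findings report "inefficient use of keySet iterator instead of entrySet iterator" (ProvidedStorageMap, FsVolumeImpl, SimpleReadCacheManager). A generic sketch of the pattern, with a hypothetical map of block sizes rather than the patch's actual types:

```java
import java.util.HashMap;
import java.util.Map;

public class EntrySetExample {
    // Flagged pattern: iterating keySet() and calling get(key) does a
    // second hash lookup for every key.
    static long sumViaKeySet(Map<String, Long> blockSizes) {
        long total = 0;
        for (String key : blockSizes.keySet()) {
            total += blockSizes.get(key); // redundant lookup
        }
        return total;
    }

    // Fix: entrySet() hands back each key/value pair in one pass.
    static long sumViaEntrySet(Map<String, Long> blockSizes) {
        long total = 0;
        for (Map.Entry<String, Long> e : blockSizes.entrySet()) {
            total += e.getValue();
        }
        return total;
    }

    public static void main(String[] args) {
        Map<String, Long> m = new HashMap<>();
        m.put("blk_1", 10L);
        m.put("blk_2", 32L);
        System.out.println(sumViaEntrySet(m));
    }
}
```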
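Two findings flag "incorrect lazy initialization of static field" in the `getInstance` methods of SyncMountManager and WriteCacheEvictor. The usual issue is an unsynchronized null-check-then-assign, which can construct two instances under concurrent callers. A hedged sketch of one standard remedy (the class name here is illustrative, not from the patch):

```java
public class LazySingletonExample {
    private static LazySingletonExample instance;

    private LazySingletonExample() { }

    // Flagged pattern would be: if (instance == null) instance = new ...
    // with no synchronization. Making the accessor synchronized ensures
    // the check and the assignment happen atomically.
    static synchronized LazySingletonExample getInstance() {
        if (instance == null) {
            instance = new LazySingletonExample();
        }
        return instance;
    }
}
```
An initialization-on-demand holder class is another common fix when the instance needs no constructor arguments; here the managers take a `Configuration` and an `FSNamesystem`, so the synchronized accessor is the closer fit.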
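The two DFSAdmin findings ("format string should use %n") refer to `java.util.Formatter` conventions: inside a format string, `%n` emits the platform line separator, whereas a literal newline escape hard-codes `\n`. A small illustration, with a made-up message rather than DFSAdmin's actual output:

```java
public class FormatExample {
    // Flagged pattern: a raw "\n" escape inside a format string.
    static String withEscape(String mount) {
        return String.format("Added mount %s\n", mount);
    }

    // Fix: %n lets the Formatter pick the platform's line separator.
    static String withFormatSpecifier(String mount) {
        return String.format("Added mount %s%n", mount);
    }

    public static void main(String[] args) {
        System.out.print(withFormatSpecifier("/backup"));
    }
}
```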
| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base:
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/Dockerfile
|
| GITHUB PR | https://github.com/apache/hadoop/pull/2655 |
| JIRA Issue | HDFS-15714 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall
mvnsite unit shadedclient findbugs checkstyle xml cc buflint bufcompat |
| uname | Linux 9a595f5ac4c5 4.15.0-101-generic #102-Ubuntu SMP Mon May 11
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | HDFS-15714 / d82009599a2 |
| Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| Multi-JDK versions |
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04
/usr/lib/jvm/java-8-openjdk-amd64:Private
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| Test Results |
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/testReport/ |
| Max. process+thread count | 2226 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common
hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs
hadoop-hdfs-project/hadoop-hdfs-rbf hadoop-tools/hadoop-aws
hadoop-tools/hadoop-fs2img U: . |
| Console output |
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/console |
| versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
This message was automatically generated.
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 542754)
Time Spent: 20m (was: 10m)
> HDFS Provided Storage Read/Write Mount Support On-the-fly
> ---------------------------------------------------------
>
> Key: HDFS-15714
> URL: https://issues.apache.org/jira/browse/HDFS-15714
> Project: Hadoop HDFS
> Issue Type: New Feature
> Components: datanode, namenode
> Affects Versions: 3.4.0
> Reporter: Feilong He
> Assignee: Feilong He
> Priority: Major
> Labels: pull-request-available
> Attachments: HDFS-15714-01.patch,
> HDFS_Provided_Storage_Design-V1.pdf, HDFS_Provided_Storage_Performance-V1.pdf
>
> Time Spent: 20m
> Remaining Estimate: 0h
>
> HDFS Provided Storage (PS) is a feature to tier HDFS over other file systems.
> In HDFS-9806, the PROVIDED storage type was introduced to HDFS. By
> configuring external storage with the PROVIDED tag for a DataNode, users can
> let applications access externally stored data through HDFS. However, two
> issues need to be addressed. First, mounting external storage on-the-fly
> (i.e., dynamic mount) is not yet supported, although it is needed to
> flexibly combine HDFS with an external store at runtime. Second, PS write is
> not supported by current HDFS, even though real applications commonly
> transfer data bi-directionally, reading and writing between HDFS and
> external storage.
> Through this JIRA, we present our work on PS write support and on dynamic
> mount support for both read & write. Note that several JIRAs have already
> been filed in the community on these topics. Our work builds on that
> previous community work, with a new design & implementation that supports a
> so-called writeBack mount and lets an admin add any mount on-the-fly. We
> appreciate those folks in the community for their great contributions! See
> their pending JIRAs: HDFS-14805 & HDFS-12090.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]