[ https://issues.apache.org/jira/browse/HDFS-15484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165633#comment-17165633 ]
Hadoop QA commented on HDFS-15484:
----------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} prototool {color} | {color:blue} 0m 0s{color} | {color:blue} prototool was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 3m 18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 11s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 2m 20s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 3m 28s{color} | {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 3m 28s{color} | {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 3m 28s{color} | {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 58s{color} | {color:orange} hadoop-hdfs-project: The patch generated 12 new + 394 unchanged - 0 fixed = 406 total (was 394) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 36s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 25s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 3m 1s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 3s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 130m 18s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 47s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}222m 54s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
| | Invocation of toString on dst in org.apache.hadoop.hdfs.DistributedFileSystem.batchRename(String[], String[], Options$Rename[]) At DistributedFileSystem.java:in org.apache.hadoop.hdfs.DistributedFileSystem.batchRename(String[], String[], Options$Rename[]) At DistributedFileSystem.java:[line 973] |
| | Invocation of toString on src in org.apache.hadoop.hdfs.DistributedFileSystem.batchRename(String[], String[], Options$Rename[]) At DistributedFileSystem.java:in org.apache.hadoop.hdfs.DistributedFileSystem.batchRename(String[], String[], Options$Rename[]) At DistributedFileSystem.java:[line 973] |
| | Dead store to startTime in org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.batchRename(String[], String[], Options$Rename[]) At ClientNamenodeProtocolTranslatorPB.java:org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.batchRename(String[], String[], Options$Rename[]) At ClientNamenodeProtocolTranslatorPB.java:[line 659] |
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
| | Invocation of toString on dsts in org.apache.hadoop.hdfs.server.namenode.FSNamesystem.batchRename(String[], String[], boolean, Options$Rename[]) At FSNamesystem.java:in org.apache.hadoop.hdfs.server.namenode.FSNamesystem.batchRename(String[], String[], boolean, Options$Rename[]) At FSNamesystem.java:[line 3364] |
| | Invocation of toString on srcs in org.apache.hadoop.hdfs.server.namenode.FSNamesystem.batchRename(String[], String[], boolean, Options$Rename[]) At FSNamesystem.java:in org.apache.hadoop.hdfs.server.namenode.FSNamesystem.batchRename(String[], String[], boolean, Options$Rename[]) At FSNamesystem.java:[line 3364] |
| | Invocation of toString on dsts in org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.batchRename(String[], String[], Options$Rename[]) At NameNodeRpcServer.java:in org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.batchRename(String[], String[], Options$Rename[]) At NameNodeRpcServer.java:[line 1104] |
| | Invocation of toString on srcs in org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.batchRename(String[], String[], Options$Rename[]) At NameNodeRpcServer.java:in org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.batchRename(String[], String[], Options$Rename[]) At NameNodeRpcServer.java:[line 1104] |
| Failed junit tests | hadoop.hdfs.TestFileChecksumCompositeCrc |
| | hadoop.tools.TestHdfsConfigFields |
| | hadoop.hdfs.TestStripedFileAppend |
| | hadoop.hdfs.TestReadStripedFileWithDNFailure |
| | hadoop.hdfs.TestFileChecksum |
| | hadoop.hdfs.server.balancer.TestBalancer |
| | hadoop.hdfs.TestReconstructStripedFile |
| | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
| | hadoop.hdfs.TestFileConcurrentReader |
| | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy |
| | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
| | hadoop.hdfs.TestErasureCodingPolicies |
| | hadoop.hdfs.server.namenode.ha.TestHAAppend |
| | hadoop.hdfs.TestErasureCodingPolicyWithSnapshotWithRandomECPolicy |
| | hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor |
| | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
| | hadoop.hdfs.TestDFSClientRetries |
| | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
| | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
| | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
\\
\\
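The two kinds of FindBugs warnings reported above are quick to illustrate: invoking toString() on a String[] prints the array's identity (e.g. [Ljava.lang.String;@1a2b3c) rather than its contents, and a "dead store" is a local variable that is assigned but never read. Below is a minimal, hedged sketch of the usual fixes; it reuses the srcs/dsts/startTime names from the warnings but is illustrative code, not the actual patch.

{code:java}
import java.util.Arrays;

class BatchRenameLoggingSketch {
  // Stand-in for the real class's logger; illustrative only.
  private static void log(String msg) {
    System.out.println(msg);
  }

  void batchRename(String[] srcs, String[] dsts) {
    // Fix for "Invocation of toString on srcs/dsts": Arrays.toString(...)
    // renders the actual paths instead of the array's identity hash.
    log("batchRename " + Arrays.toString(srcs) + " -> " + Arrays.toString(dsts));

    // Fix for "Dead store to startTime": only keep the variable if it is
    // actually read afterwards, e.g. to report elapsed time.
    long startTime = System.currentTimeMillis();
    // ... perform the rename work here ...
    log("batchRename took " + (System.currentTimeMillis() - startTime) + " ms");
  }
}
{code}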
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/3/artifact/out/Dockerfile |
| JIRA Issue | HDFS-15484 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13008466/HDFS-15484.new_method.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc prototool |
| uname | Linux 91dd6ba45ce7 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 60a254621a3 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| compile | https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/3/artifact/out/patch-compile-hadoop-hdfs-project.txt |
| cc | https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/3/artifact/out/patch-compile-hadoop-hdfs-project.txt |
| javac | https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/3/artifact/out/patch-compile-hadoop-hdfs-project.txt |
| checkstyle | https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/3/artifact/out/diff-checkstyle-hadoop-hdfs-project.txt |
| findbugs | https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/3/artifact/out/new-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.html |
| findbugs | https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/3/artifact/out/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html |
| unit | https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/3/testReport/ |
| Max. process+thread count | 3465 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
| Console output | https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/3/console |
| versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
This message was automatically generated.
> Add option in enum Rename to support batch rename
> ------------------------------------------------
>
> Key: HDFS-15484
> URL: https://issues.apache.org/jira/browse/HDFS-15484
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: dfsclient, namenode, performance
> Reporter: Yang Yun
> Assignee: Yang Yun
> Priority: Minor
> Attachments: HDFS-15484.001.patch, HDFS-15484.new_method.patch
>
>
> Sometimes we need to rename many files after a task. Add a new option to
> enum Rename to support batch rename, which needs only one RPC and one lock.
> For example:
> rename(new Path("/dir1/f1::/dir2/f2"), new Path("/dir3/f1::/dir4/f4"),
> Rename.BATCH)
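Based on the example above and on the batchRename signature visible in the QA report's FindBugs section, here is a hedged usage sketch. Rename.BATCH does not exist in released Hadoop; it is the option this issue proposes, and whether it is exposed through FileContext, FileSystem, or a new DistributedFileSystem#batchRename depends on which attached patch is applied.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Options.Rename;
import org.apache.hadoop.fs.Path;

public class BatchRenameExample {
  public static void main(String[] args) throws Exception {
    FileContext fc = FileContext.getFileContext(new Configuration());

    // Proposed batch form: several source and destination paths are packed
    // into a single Path argument with the "::" separator, and the proposed
    // Rename.BATCH option asks the NameNode to perform all renames in one
    // RPC under a single lock.
    fc.rename(new Path("/dir1/f1::/dir2/f2"),
              new Path("/dir3/f1::/dir4/f4"),
              Rename.BATCH);
  }
}
{code}

With the alternative new_method patch, the same operation would instead go through a dedicated DistributedFileSystem#batchRename(String[], String[], Options.Rename...) call, per the method names in the FindBugs section above.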