[ https://issues.apache.org/jira/browse/HADOOP-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15494236#comment-15494236 ]
Hadoop QA commented on HADOOP-12502:
------------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 31s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 3 new + 450 unchanged - 0 fixed = 453 total (was 450) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 34s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 42s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.token.delegation.TestZKDelegationTokenSecretManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-12502 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12828694/HADOOP-12502-04.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 2ceeb759c351 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / fcbac00 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/10520/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/10520/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10520/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10520/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
> SetReplication OutOfMemoryError
> -------------------------------
>
> Key: HADOOP-12502
> URL: https://issues.apache.org/jira/browse/HADOOP-12502
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.3.0
> Reporter: Philipp Schuegerl
> Assignee: Vinayakumar B
> Attachments: HADOOP-12502-01.patch, HADOOP-12502-02.patch,
> HADOOP-12502-03.patch, HADOOP-12502-04.patch
>
>
> Setting the replication of an HDFS folder recursively can run out of memory.
> E.g. with a large /var/log directory:
> hdfs dfs -setrep -R -w 1 /var/log
> Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
> at java.util.Arrays.copyOfRange(Arrays.java:2694)
> at java.lang.String.<init>(String.java:203)
> at java.lang.String.substring(String.java:1913)
> at java.net.URI$Parser.substring(URI.java:2850)
> at java.net.URI$Parser.parse(URI.java:3046)
> at java.net.URI.<init>(URI.java:753)
> at org.apache.hadoop.fs.Path.initialize(Path.java:203)
> at org.apache.hadoop.fs.Path.<init>(Path.java:116)
> at org.apache.hadoop.fs.Path.<init>(Path.java:94)
> at org.apache.hadoop.hdfs.protocol.HdfsFileStatus.getFullPath(HdfsFileStatus.java:222)
> at org.apache.hadoop.hdfs.protocol.HdfsFileStatus.makeQualified(HdfsFileStatus.java:246)
> at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:689)
> at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:102)
> at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:712)
> at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:708)
> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:708)
> at org.apache.hadoop.fs.shell.PathData.getDirectoryContents(PathData.java:268)
> at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
> at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
> at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
> at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
> at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
> at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:278)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:260)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:244)
> at org.apache.hadoop.fs.shell.SetReplication.processArguments(SetReplication.java:76)
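The stack trace above shows where the heap goes: the shell's recursion (Command.recursePath -> PathData.getDirectoryContents -> DistributedFileSystem.listStatus) materializes each directory's entire FileStatus[] and fully-qualified Paths before descending, which does not fit in memory for a very wide tree like /var/log. As a rough illustration only, and not the contents of the attached HADOOP-12502-04.patch (which is not shown here), the following is a minimal sketch of the same traversal written against FileSystem.listStatusIterator(), which on HDFS streams directory entries in batches instead of returning the whole listing at once. The class and method names (SetReplicationSketch, setRepRecursive) are made up for the example.

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

/**
 * Hypothetical sketch: recursive setReplication using an incremental
 * directory listing rather than listStatus(), which loads every child
 * of a directory into memory at once (the failure mode in this issue).
 */
public class SetReplicationSketch {

  public static void setRepRecursive(FileSystem fs, Path dir, short replication)
      throws IOException {
    // listStatusIterator() on DistributedFileSystem pages entries from the
    // NameNode on demand, so only one batch of FileStatus objects per
    // directory level is resident at a time.
    RemoteIterator<FileStatus> it = fs.listStatusIterator(dir);
    while (it.hasNext()) {
      FileStatus stat = it.next();
      if (stat.isDirectory()) {
        setRepRecursive(fs, stat.getPath(), replication);
      } else {
        fs.setReplication(stat.getPath(), replication);
      }
    }
  }

  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    // Equivalent intent to: hdfs dfs -setrep -R 1 /var/log (without -w)
    setRepRecursive(fs, new Path("/var/log"), (short) 1);
  }
}
{code}

Note that this only changes how the listing is obtained: on HDFS the iterator is backed by paged getListing calls, so a directory with millions of children no longer has to fit in the client heap, whereas the plain listStatus() path shown in the trace builds the full array (and a Path/URI per entry) up front.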