[
https://issues.apache.org/jira/browse/HADOOP-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16309507#comment-16309507
]
Vinayakumar B edited comment on HADOOP-12502 at 1/19/18 6:06 AM:
-----------------------------------------------------------------
Fixed the testcase by keeping the old way (non-iterator, sorted listing) for the *-getmerge*
command.
Without this change the test fails due to the change in the order of elements returned by
listStatusIterator().
LocalFileSystem returns items sorted on Windows, but they might not be sorted on
Linux, so the test fails there.
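The ordering point above can be illustrated with plain JDK file APIs (this is a sketch of the idea, not the Hadoop code): directory iteration order is file-system dependent, so a command that concatenates files in order, like -getmerge, must sort the listing explicitly to get deterministic output.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.*;

public class SortedListing {
    // Return directory entries in a deterministic (lexicographic) order,
    // regardless of what order the underlying file system yields them in.
    static List<Path> listSorted(Path dir) throws IOException {
        List<Path> entries = new ArrayList<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
            for (Path p : stream) {
                entries.add(p); // iteration order is file-system dependent
            }
        }
        entries.sort(Comparator.comparing(p -> p.getFileName().toString()));
        return entries;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("getmerge-demo");
        Files.createFile(dir.resolve("b.txt"));
        Files.createFile(dir.resolve("a.txt"));
        Files.createFile(dir.resolve("c.txt"));
        for (Path p : listSorted(dir)) {
            System.out.println(p.getFileName());
        }
    }
}
```

With the explicit sort, the names print as a.txt, b.txt, c.txt on every platform; without it, the order depends on the OS and file system, which is exactly why the test behaved differently on Windows and Linux.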
> SetReplication OutOfMemoryError
> -------------------------------
>
> Key: HADOOP-12502
> URL: https://issues.apache.org/jira/browse/HADOOP-12502
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.3.0
> Reporter: Philipp Schuegerl
> Assignee: Vinayakumar B
> Priority: Major
> Attachments: HADOOP-12502-01.patch, HADOOP-12502-02.patch,
> HADOOP-12502-03.patch, HADOOP-12502-04.patch, HADOOP-12502-05.patch,
> HADOOP-12502-06.patch, HADOOP-12502-07.patch, HADOOP-12502-08.patch,
> HADOOP-12502-09.patch
>
>
> Setting the replication of an HDFS folder recursively can run out of memory,
> e.g. with a large /var/log directory:
> hdfs dfs -setrep -R -w 1 /var/log
> Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
> at java.util.Arrays.copyOfRange(Arrays.java:2694)
> at java.lang.String.<init>(String.java:203)
> at java.lang.String.substring(String.java:1913)
> at java.net.URI$Parser.substring(URI.java:2850)
> at java.net.URI$Parser.parse(URI.java:3046)
> at java.net.URI.<init>(URI.java:753)
> at org.apache.hadoop.fs.Path.initialize(Path.java:203)
> at org.apache.hadoop.fs.Path.<init>(Path.java:116)
> at org.apache.hadoop.fs.Path.<init>(Path.java:94)
> at org.apache.hadoop.hdfs.protocol.HdfsFileStatus.getFullPath(HdfsFileStatus.java:222)
> at org.apache.hadoop.hdfs.protocol.HdfsFileStatus.makeQualified(HdfsFileStatus.java:246)
> at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:689)
> at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:102)
> at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:712)
> at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:708)
> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:708)
> at org.apache.hadoop.fs.shell.PathData.getDirectoryContents(PathData.java:268)
> at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
> at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
> at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
> at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
> at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
> at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:278)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:260)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:244)
> at org.apache.hadoop.fs.shell.SetReplication.processArguments(SetReplication.java:76)
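The trace above shows the shell recursing with listStatus, which materializes every child FileStatus of a directory into an array at once; for very wide directories that array alone can exhaust the heap. The iterator-based approach streams children instead. A JDK-only sketch of the same idea (not the Hadoop code), with the per-entry count standing in for the hypothetical per-file setrep work:

```java
import java.io.IOException;
import java.nio.file.*;

public class LazyRecurse {
    // Recurse a tree one entry at a time instead of loading each
    // directory's full child list into memory up front.
    static long recurse(Path dir) throws IOException {
        long count = 0;
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
            for (Path p : stream) {            // entries fetched lazily
                count++;                       // stand-in for per-file work
                if (Files.isDirectory(p, LinkOption.NOFOLLOW_LINKS)) {
                    count += recurse(p);
                }
            }
        }
        return count;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("setrep-demo");
        Files.createFile(dir.resolve("x"));
        Path sub = Files.createDirectory(dir.resolve("sub"));
        Files.createFile(sub.resolve("y"));
        System.out.println(recurse(dir)); // 3 entries: x, sub, sub/y
    }
}
```

Memory use here is bounded by tree depth rather than by the width of any single directory, which is the property the iterator-based listing gives the shell commands.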
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]