[
https://issues.apache.org/jira/browse/HDFS-4461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13569083#comment-13569083
]
Andy Isaacson commented on HDFS-4461:
-------------------------------------
The actual OOM backtrace is on the DN thread:
{noformat}
at java.lang.OutOfMemoryError.<init>()V (OutOfMemoryError.java:25)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat()Lorg/apache/hadoop/hdfs/server/protocol/HeartbeatResponse; (BPServiceActor.java:434)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService()V (BPServiceActor.java:520)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run()V (BPServiceActor.java:673)
at java.lang.Thread.run()V (Thread.java:662)
{noformat}
> DirectoryScanner: volume path prefix takes up memory for every block that is scanned
> -------------------------------------------------------------------------------------
>
> Key: HDFS-4461
> URL: https://issues.apache.org/jira/browse/HDFS-4461
> Project: Hadoop HDFS
> Issue Type: Improvement
> Affects Versions: 2.0.3-alpha
> Reporter: Colin Patrick McCabe
> Assignee: Colin Patrick McCabe
> Priority: Minor
> Attachments: HDFS-4461.002.patch, HDFS-4461.003.patch,
> memory-analysis.png
>
>
> In the {{DirectoryScanner}}, we create a class {{ScanInfo}} for every block.
> This object contains two File objects-- one for the metadata file, and one
> for the block file. Since those File objects contain full paths, users who
> pick a lengthy path for their volume roots will end up using an extra
> path_prefix bytes per block scanned (N_blocks * path_prefix in total). We
> also don't really need to
> store File objects-- storing strings and then creating File objects as needed
> would be cheaper. This would be a nice efficiency improvement.
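The fix described above, keeping one shared volume-root File per volume and storing only path suffixes per block, building File objects on demand, can be sketched roughly as follows. Class and field names here are illustrative, not what the attached patches actually use:

```java
import java.io.File;

// Hypothetical sketch of the idea in HDFS-4461: rather than holding two
// full-path File objects per block, each ScanInfo-like record keeps only
// the suffixes relative to the volume root. The root File is shared by
// every block on the volume, so the path prefix is stored once, not
// N_blocks times.
class LeanScanInfo {
    private final File volumeRoot;     // shared across all blocks on this volume
    private final String blockSuffix;  // e.g. "current/blk_1001"
    private final String metaSuffix;   // e.g. "current/blk_1001.meta"

    LeanScanInfo(File volumeRoot, String blockSuffix, String metaSuffix) {
        this.volumeRoot = volumeRoot;
        this.blockSuffix = blockSuffix;
        this.metaSuffix = metaSuffix;
    }

    // File objects are materialized only when a caller actually needs them.
    File getBlockFile() {
        return new File(volumeRoot, blockSuffix);
    }

    File getMetaFile() {
        return new File(volumeRoot, metaSuffix);
    }
}
```

With a long volume root like /mnt/some/deeply/nested/datanode/dir, the per-block cost is just the two short suffix strings; the prefix bytes are paid once per volume instead of once per block.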