[
https://issues.apache.org/jira/browse/HDFS-4461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568320#comment-13568320
]
Hadoop QA commented on HDFS-4461:
---------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12567472/memory-analysis.png
against trunk revision .
{color:red}-1 patch{color}. The patch command could not apply the patch.
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3927//console
This message is automatically generated.
> DirectoryScanner: volume path prefix takes up memory for every block that is
> scanned
> -------------------------------------------------------------------------------------
>
> Key: HDFS-4461
> URL: https://issues.apache.org/jira/browse/HDFS-4461
> Project: Hadoop HDFS
> Issue Type: Improvement
> Affects Versions: 2.0.3-alpha
> Reporter: Colin Patrick McCabe
> Assignee: Colin Patrick McCabe
> Priority: Minor
> Attachments: 002.patch, memory-analysis.png
>
>
> In the {{DirectoryScanner}}, we create a {{ScanInfo}} object for every block.
> This object contains two File objects-- one for the metadata file, and one
> for the block file. Since those File objects contain full paths, users who
> pick a lengthy path for their volume roots will end up using an extra
> path_prefix bytes for every block scanned-- N_blocks * path_prefix bytes in
> total. We also don't really need to store File objects-- storing strings and
> creating File objects only as needed would be cheaper. This has been causing
> out-of-memory conditions for users who pick such long volume paths.
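The idea above can be sketched as follows. This is a hypothetical illustration, not the actual HDFS-4461 patch: the class name {{ScanInfoSketch}} and its fields are invented for the example. The point is that each per-block object keeps only the path suffix, shares a single reference to the volume root, and builds a File lazily instead of holding two full-path File objects per block.

```java
import java.io.File;

// Hypothetical sketch (not the real HDFS ScanInfo): each instance stores
// only the block's path suffix; the volume root string is shared by all
// blocks on the same volume, so the long prefix is held once, not N times.
class ScanInfoSketch {
    private final String volumeRoot;   // shared across blocks on this volume
    private final String blockSuffix;  // e.g. "subdir0/blk_1001"
    private final String metaSuffix;   // e.g. "subdir0/blk_1001_1001.meta"

    ScanInfoSketch(String volumeRoot, String blockSuffix, String metaSuffix) {
        this.volumeRoot = volumeRoot;
        this.blockSuffix = blockSuffix;
        this.metaSuffix = metaSuffix;
    }

    // Construct File objects on demand instead of storing them per block.
    File getBlockFile() {
        return new File(volumeRoot, blockSuffix);
    }

    File getMetaFile() {
        return new File(volumeRoot, metaSuffix);
    }
}
```

With this layout, a 100-character volume root costs roughly 100 bytes once per volume rather than once (or twice, for block plus metadata) per scanned block.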
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira