[ https://issues.apache.org/jira/browse/HDFS-5995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Nauroth updated HDFS-5995:
--------------------------------

    Attachment: HDFS-5995.1.patch

This patch puts a soft cap on the size of the array that we pre-allocate in the 
loader.  I call it a soft cap, because it's still an {{ArrayList}} capable of 
growing dynamically.  This has eliminated the heap dumps in my environment.
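
For illustration only, here is a minimal sketch of the soft-cap idea; the class, method, and constant names below are hypothetical and are not taken from the actual patch:

{code:java}
import java.util.ArrayList;
import java.util.List;

class SoftCapSketch {
  // Hypothetical cap on the pre-allocated capacity; the value used in
  // HDFS-5995.1.patch may differ.
  private static final int INITIAL_CAPACITY_CAP = 1024;

  // Build the list that will hold per-transaction data.  A corrupt edit
  // log can report an absurdly large transaction count, so only the
  // initial capacity hint is clamped; the ArrayList itself can still
  // grow dynamically if the log really contains more transactions.
  static List<Long> preallocate(long reportedNumTxns) {
    int initialCapacity =
        (int) Math.min(Math.max(reportedNumTxns, 0), INITIAL_CAPACITY_CAP);
    return new ArrayList<>(initialCapacity);
  }
}
{code}

The point is that only the pre-allocation hint is capped; correctness is unaffected because the list can still grow to whatever size the edit log actually requires.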

It seems a shame to have to change the main loader code to accommodate the 
test, but I haven't been able to come up with an alternative fix in the test 
code.

> TestFSEditLogLoader#testValidateEditLogWithCorruptBody gets OutOfMemoryError 
> and dumps heap.
> --------------------------------------------------------------------------------------------
>
>                 Key: HDFS-5995
>                 URL: https://issues.apache.org/jira/browse/HDFS-5995
>             Project: Hadoop HDFS
>          Issue Type: Test
>          Components: namenode, test
>    Affects Versions: 3.0.0
>            Reporter: Chris Nauroth
>            Priority: Minor
>         Attachments: HDFS-5995.1.patch
>
>
> {{TestFSEditLogLoader#testValidateEditLogWithCorruptBody}} has been experiencing 
> {{OutOfMemoryError}} and dumping heap since the merge of HDFS-4685.  This 
> doesn't actually cause the test to fail, because it's a failure test that 
> corrupts an edit log intentionally.  Still, this might cause confusion if 
> someone reviews the build logs and thinks this is a more serious problem.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
