[ https://issues.apache.org/jira/browse/HDFS-5995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918863#comment-13918863 ]

Chris Nauroth commented on HDFS-5995:
-------------------------------------

It definitely wouldn't help if a NameNode legitimately produced a humongous 
array in its edits, but then I'd expect {{dfs.namenode.max.op.size}} to kick in 
and guard against that indirectly.  (I forgot to mention that 
{{dfs.namenode.max.op.size}} ought to be enforced while reading into the 
intermediate payload buffer.)  This proposal would, however, protect against 
random corruption that changes a length field to a huge integer: that 
corruption would also invalidate the checksum, and we'd have a way to verify 
the checksum before deserialization and array allocation, not after.
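
To make the ordering concrete, here is a minimal, self-contained sketch of the idea.  This is not the actual {{FSEditLogOp.Reader}} code; the record layout, the {{readOp}}/{{maxOpSize}} names, and the use of {{java.util.zip.CRC32}} are illustrative assumptions.  The point is only the order of operations: validate the length field against the configured maximum before allocating the intermediate buffer, and verify the checksum over the raw bytes before handing anything to the deserializer.

{code:java}
import java.io.DataInputStream;
import java.io.IOException;
import java.util.zip.CRC32;

public class ChecksummedOpReader {

  /**
   * Reads one length-prefixed, checksummed record (illustrative layout).
   * Throws before any large allocation if the length field exceeds
   * maxOpSize, and throws before deserialization if the checksum mismatches.
   */
  public static byte[] readOp(DataInputStream in, int maxOpSize)
      throws IOException {
    int length = in.readInt();
    // Guard against a corrupted (or legitimately huge) length field
    // *before* allocating the intermediate payload buffer.
    if (length < 0 || length > maxOpSize) {
      throw new IOException("Op length " + length
          + " exceeds max op size " + maxOpSize);
    }
    byte[] payload = new byte[length];
    in.readFully(payload);

    long expectedChecksum = in.readLong();
    CRC32 crc = new CRC32();
    crc.update(payload, 0, payload.length);
    // Verify the checksum over the raw payload bytes before any
    // deserialization, so corruption inside the payload is caught here.
    if (crc.getValue() != expectedChecksum) {
      throw new IOException("Checksum mismatch: expected " + expectedChecksum
          + ", computed " + crc.getValue());
    }
    // Only now would the payload be handed to the op deserializer.
    return payload;
  }
}
{code}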

> TestFSEditLogLoader#testValidateEditLogWithCorruptBody gets OutOfMemoryError 
> and dumps heap.
> --------------------------------------------------------------------------------------------
>
>                 Key: HDFS-5995
>                 URL: https://issues.apache.org/jira/browse/HDFS-5995
>             Project: Hadoop HDFS
>          Issue Type: Test
>          Components: namenode, test
>    Affects Versions: 3.0.0
>            Reporter: Chris Nauroth
>            Assignee: Chris Nauroth
>            Priority: Minor
>         Attachments: HDFS-5995.1.patch
>
>
> {{TestFSEditLogLoader#testValidateEditLogWithCorruptBody}} is experiencing 
> {{OutOfMemoryError}} and dumping heap since the merge of HDFS-4685.  This 
> doesn't actually cause the test to fail, because it's a failure test that 
> corrupts an edit log intentionally.  Still, this might cause confusion if 
> someone reviews the build logs and thinks this is a more serious problem.


