[ https://issues.apache.org/jira/browse/HDFS-5995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918452#comment-13918452 ]

Colin Patrick McCabe commented on HDFS-5995:
--------------------------------------------

bq. Currently almost all ops assume that the data is not corrupted. 
Practically, it is also difficult for the code to tell whether the data is 
corrupted.

Haohui, we did a lot of work in years past to figure out how to avoid crashing 
the NameNode when it encounters invalid data in the edit log.  Crashing on 
invalid data could take down the JN, NN, or another important daemon.  Disks 
are unreliable, and we never know quite what they're going to show us.

That's why we came up with {{dfs.namenode.max.op.size}} -- see HDFS-3440 for 
details.  [~cnauroth], it sounds like the ACL code is not honoring this 
parameter.  We should fix it.
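
For illustration, here is a minimal sketch of the kind of bound that 
{{dfs.namenode.max.op.size}} is meant to enforce; it is not the actual 
{{FSEditLogOp}} reader, and the class and method names are made up.  The idea 
is to validate any length field read from the edit log against the configured 
maximum before allocating, so corrupt bytes cannot trigger a huge allocation 
or an {{OutOfMemoryError}}.

{code:java}
import java.io.DataInputStream;
import java.io.IOException;

/**
 * Hypothetical example, not the real reader: bound any length read from the
 * edit log by a configured maximum before allocating a buffer for it.
 */
public class BoundedOpReader {
  // In practice this limit would come from dfs.namenode.max.op.size.
  private final int maxOpSize;

  public BoundedOpReader(int maxOpSize) {
    this.maxOpSize = maxOpSize;
  }

  /**
   * Reads one length-prefixed record.  If the length field is negative or
   * larger than maxOpSize, treat the stream as corrupt and throw, rather than
   * allocating a buffer of whatever size the corrupt bytes happen to claim.
   */
  public byte[] readOp(DataInputStream in) throws IOException {
    int length = in.readInt();
    if (length < 0 || length > maxOpSize) {
      throw new IOException("Op claims length " + length
          + ", which exceeds the configured maximum of " + maxOpSize);
    }
    byte[] body = new byte[length];  // bounded allocation
    in.readFully(body);
    return body;
  }
}
{code}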

> TestFSEditLogLoader#testValidateEditLogWithCorruptBody gets OutOfMemoryError 
> and dumps heap.
> --------------------------------------------------------------------------------------------
>
>                 Key: HDFS-5995
>                 URL: https://issues.apache.org/jira/browse/HDFS-5995
>             Project: Hadoop HDFS
>          Issue Type: Test
>          Components: namenode, test
>    Affects Versions: 3.0.0
>            Reporter: Chris Nauroth
>            Assignee: Chris Nauroth
>            Priority: Minor
>         Attachments: HDFS-5995.1.patch
>
>
> {{TestFSEditLogLoader#testValidateEditLogWithCorruptBody}} has been 
> experiencing {{OutOfMemoryError}} and dumping heap since the merge of 
> HDFS-4685.  This doesn't actually cause the test to fail, because it's a 
> failure test that corrupts an edit log intentionally.  Still, it might cause 
> confusion if someone reviews the build logs and thinks it's a more serious 
> problem.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
