[ https://issues.apache.org/jira/browse/HDFS-5995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918948#comment-13918948 ]
Colin Patrick McCabe commented on HDFS-5995:
--------------------------------------------

So, if I understand correctly, you'd like to verify the checksum prior to attempting to deserialize the op. That would give us some confidence against random bit flips, which seems reasonable to me. As a side note, I don't think you need to do another buffer copy to do it: you could probably just clone a ByteBuffer from the existing one and set a different mark or limit. In any case, it's good form to guard against malformed inputs at every level (defense in depth), so we should probably fix this JIRA, and then file a follow-on if it seems like a good idea.

> TestFSEditLogLoader#testValidateEditLogWithCorruptBody gets OutOfMemoryError
> and dumps heap.
> --------------------------------------------------------------------------------------------
>
>                 Key: HDFS-5995
>                 URL: https://issues.apache.org/jira/browse/HDFS-5995
>             Project: Hadoop HDFS
>          Issue Type: Test
>          Components: namenode, test
>    Affects Versions: 3.0.0
>            Reporter: Chris Nauroth
>            Assignee: Chris Nauroth
>            Priority: Minor
>         Attachments: HDFS-5995.1.patch
>
>
> {{TestFSEditLogLoader#testValidateEditLogWithCorruptBody}} is experiencing
> {{OutOfMemoryError}} and dumping heap since the merge of HDFS-4685. This
> doesn't actually cause the test to fail, because it's a failure test that
> corrupts an edit log intentionally. Still, this might cause confusion if
> someone reviews the build logs and thinks this is a more serious problem.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
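The "clone a ByteBuffer and set a different mark or limit" suggestion in the comment above can be sketched roughly as follows. This is a minimal illustration, not HDFS code: the class name `ChecksumPeek`, the `verify` helper, and the layout (op bytes followed by a 4-byte CRC32) are all hypothetical. The key point is that `ByteBuffer.duplicate()` shares the underlying bytes but has independent position/limit, so the checksum can be computed over the op body without copying it or disturbing the original buffer.

```java
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

public class ChecksumPeek {
    /**
     * Hypothetical sketch: verify that the 4-byte CRC32 stored immediately
     * after an op's bytes matches the op body, without copying the bytes.
     * The original buffer's position and limit are left untouched.
     */
    static boolean verify(ByteBuffer buf, int opStart, int opLen) {
        // duplicate() shares content but has its own position/limit/mark
        ByteBuffer body = buf.duplicate();
        body.position(opStart);
        body.limit(opStart + opLen);

        CRC32 crc = new CRC32();
        crc.update(body);  // consumes the duplicate; 'buf' is unaffected

        // Read the stored checksum from a second duplicate
        ByteBuffer tail = buf.duplicate();
        tail.position(opStart + opLen);
        int stored = tail.getInt();

        return (int) crc.getValue() == stored;
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(16);
        byte[] op = {1, 2, 3, 4};
        buf.put(op);
        CRC32 crc = new CRC32();
        crc.update(op, 0, op.length);
        buf.putInt((int) crc.getValue());  // append checksum after the op
        buf.flip();
        System.out.println(verify(buf, 0, op.length));
    }
}
```

Only after the checksum passes would the reader go on to deserialize the op, which is the "defense in depth" ordering the comment describes.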