[
https://issues.apache.org/jira/browse/HDFS-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240565#comment-13240565
]
Colin Patrick McCabe commented on HDFS-3134:
--------------------------------------------
Hi Suresh,
I'm sorry if my description was unclear. I am not talking about blindly
translating unchecked exceptions into something else. I'm talking about fixing
the code so it doesn't generate those unchecked exceptions in the first place.
Hope this helps.
Colin
> harden edit log loader against malformed or malicious input
> -----------------------------------------------------------
>
> Key: HDFS-3134
> URL: https://issues.apache.org/jira/browse/HDFS-3134
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Colin Patrick McCabe
> Assignee: Colin Patrick McCabe
>
> Currently, the edit log loader does not handle bad or malicious input
> sensibly.
> We can often cause OutOfMemoryErrors, NullPointerExceptions, or other
> unchecked exceptions to be thrown by feeding the edit log loader bad input.
> In some environments, an out of memory error can cause the JVM process to be
> terminated.
> It's clear that we want these exceptions to be thrown as IOException instead
> of as unchecked exceptions. We also want to avoid out of memory situations.
> The main task here is to put a sensible upper limit on the lengths of the
> arrays and strings we allocate on demand. The other task is to avoid
> creating unchecked exceptions (by dereferencing potentially-null
> references, for example); instead, we should validate the input ahead of
> time and give a sensible error message that reflects the actual problem.
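A minimal sketch of the idea described above (not the actual HDFS patch; the class name, method, and size cap are illustrative assumptions): read a length field, reject it before allocating if it is negative or implausibly large, and surface the failure as an IOException with a message describing the bad input rather than letting an OutOfMemoryError or NullPointerException escape.

```java
import java.io.*;

// Hypothetical example, not HDFS code: a reader that validates a
// length prefix before allocating a buffer for untrusted input.
class SafeEditLogReader {
    // Assumed cap; a real limit would come from protocol constraints.
    static final int MAX_OP_SIZE = 1024 * 1024;

    static byte[] readBlob(DataInputStream in) throws IOException {
        int len = in.readInt();
        if (len < 0 || len > MAX_OP_SIZE) {
            // Fail before allocating: a negative or huge length means
            // the input is corrupt or malicious. This is a checked
            // IOException, not an OutOfMemoryError.
            throw new IOException("invalid blob length " + len +
                " (expected 0.." + MAX_OP_SIZE + ")");
        }
        byte[] buf = new byte[len];
        in.readFully(buf);  // throws EOFException (an IOException) if truncated
        return buf;
    }
}
```

A caller feeding this a four-byte length of 0x7fffffff gets a descriptive IOException immediately, instead of the JVM attempting a 2 GB allocation.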
--
This message is automatically generated by JIRA.