[ https://issues.apache.org/jira/browse/HADOOP-8361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13268771#comment-13268771 ]
Todd Lipcon commented on HADOOP-8361:
-------------------------------------
Also, does BlockLocation's Writable implementation still get used anywhere?
Block locations aren't serialized in the fsimage, so I don't know that we need
to change it at this point. Perhaps we could file a separate JIRA to remove it
entirely if it's unused.
> avoid out-of-memory problems when deserializing strings
> -------------------------------------------------------
>
> Key: HADOOP-8361
> URL: https://issues.apache.org/jira/browse/HADOOP-8361
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Colin Patrick McCabe
> Assignee: Colin Patrick McCabe
> Priority: Minor
> Attachments: HADOOP-8361.001.patch, HADOOP-8361.002.patch
>
>
> In HDFS, we want to be able to read the edit log without crashing on an OOM
> condition. Unfortunately, we currently cannot do this, because there are no
> limits on the length of certain data types we pull from the edit log. We
> often read strings without setting any upper limit on the length we're
> prepared to accept.
> It's not that we don't have limits on strings; for example, HDFS limits the
> maximum path length to 8000 UCS-2 characters. Linux limits the maximum user
> name length to either 64 or 128 bytes, depending on what version you are
> running. It's just that we're not exposing these limits to the
> deserialization functions that need to be aware of them.