[ 
https://issues.apache.org/jira/browse/HADOOP-8361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8361:
-----------------------------------------

    Attachment: HADOOP-8361.003.patch
    
> avoid out-of-memory problems when deserializing strings
> -------------------------------------------------------
>
>                 Key: HADOOP-8361
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8361
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8361.001.patch, HADOOP-8361.002.patch, 
> HADOOP-8361.003.patch
>
>
> In HDFS, we want to be able to read the edit log without crashing on an OOM 
> condition.  Unfortunately, we currently cannot do this, because there are no 
> limits on the length of certain data types we pull from the edit log.  We 
> often read strings without setting any upper limit on the length we're 
> prepared to accept.
> We do have limits on strings: for example, HDFS limits the maximum path 
> length to 8000 UCS-2 characters, and Linux limits the maximum user name 
> length to either 64 or 128 bytes, depending on the version you are running.  
> The problem is that these limits are not exposed to the deserialization 
> functions that need to enforce them.
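
The idea, conceptually, is to thread a maximum length into the string-reading
routine so an oversized length prefix is rejected before any allocation
happens.  Below is a minimal sketch of that approach in Java; it assumes a
4-byte length prefix followed by UTF-8 bytes, and the class and method names
are hypothetical, not taken from the attached patches.

    import java.io.DataInput;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;

    public final class BoundedStringReader {
      private BoundedStringReader() {}

      /**
       * Read a length-prefixed UTF-8 string, refusing to allocate
       * anything if the declared length exceeds maxLength bytes.
       */
      public static String readString(DataInput in, int maxLength)
          throws IOException {
        int length = in.readInt();  // assumed 4-byte length prefix
        if (length < 0 || length > maxLength) {
          // Fail fast with a normal exception instead of attempting a
          // huge allocation and dying with an OutOfMemoryError.
          throw new IOException("string length " + length +
              " is out of range (maximum " + maxLength + " bytes)");
        }
        byte[] bytes = new byte[length];  // bounded allocation
        in.readFully(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
      }
    }

With a helper like this, each call site would pass a limit appropriate to the
field being read, e.g. a limit derived from the 8000-character path maximum
for paths, or 64/128 bytes for user names, rather than accepting whatever
length the stream claims.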

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
