[ https://issues.apache.org/jira/browse/HDFS-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13278814#comment-13278814 ]

Hudson commented on HDFS-3440:
------------------------------

Integrated in Hadoop-Mapreduce-trunk #1083 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1083/])
    HDFS-3440. More effectively limit stream memory consumption when reading 
corrupt edit logs. Contributed by Colin Patrick McCabe. (Revision 1339978)

     Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1339978
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/main/java/org/apache/hadoop/contrib/bkjournal/BookKeeperEditLogInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogBackupInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/StreamLimiter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSEditLogLoader.java

                
> should more effectively limit stream memory consumption when reading corrupt 
> edit logs
> --------------------------------------------------------------------------------------
>
>                 Key: HDFS-3440
>                 URL: https://issues.apache.org/jira/browse/HDFS-3440
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1
>
>         Attachments: HDFS-3440.001.patch, HDFS-3440.002.patch
>
>
> Currently, we do in.mark(100MB) before reading an opcode out of the edit log. 
>  However, this could result in us using all of those 100 MB when reading bogus 
> data, which is not what we want.  It could also easily make some corrupt edit 
> log files unreadable.
> We should have a stream limiter interface that causes a clean IOException 
> when we're in this situation, and does not result in huge memory consumption.
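The limiter described above can be pictured with a short sketch. This is illustrative only, not the actual HDFS-3440 patch: the class `LimitedInputStream` and the exact method signatures are assumptions; the real change touches `StreamLimiter.java`, `FSEditLogOp.java`, and the edit log input stream classes listed in the commit. The idea is to replace a large `in.mark(100MB)` buffer with a per-opcode byte budget that fails fast with a clean `IOException`:

```java
import java.io.IOException;
import java.io.InputStream;

// Illustrative sketch only -- not the actual HDFS-3440 code.
// A limiter lets the caller cap how many bytes the next reads may consume.
interface StreamLimiter {
    void setLimit(long limit);   // cap subsequent reads at 'limit' bytes
    void clearLimit();           // remove the cap
}

// Wrapper stream enforcing the cap: instead of buffering up to 100 MB
// via mark(), it throws a clean IOException once the budget is exhausted,
// so a bogus opcode length in a corrupt edit log cannot balloon memory use.
class LimitedInputStream extends InputStream implements StreamLimiter {
    private final InputStream in;
    private long remaining = Long.MAX_VALUE;

    LimitedInputStream(InputStream in) {
        this.in = in;
    }

    @Override
    public void setLimit(long limit) {
        remaining = limit;
    }

    @Override
    public void clearLimit() {
        remaining = Long.MAX_VALUE;
    }

    @Override
    public int read() throws IOException {
        if (remaining <= 0) {
            throw new IOException("Tried to read past the stream limit");
        }
        int b = in.read();
        if (b != -1) {
            remaining--;
        }
        return b;
    }
}
```

A caller reading one opcode would do `setLimit(maxOpSize)`, read, then `clearLimit()`; a corrupt length field then surfaces as an `IOException` rather than a 100 MB allocation.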

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
