[
https://issues.apache.org/jira/browse/HDFS-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13278472#comment-13278472
]
Todd Lipcon commented on HDFS-3440:
-----------------------------------
- StreamLimiter either needs to be package-private or marked with a private
interface annotation (i.e. {{@InterfaceAudience.Private}})
- we don't generally mark interface methods as "abstract"; in fact, I didn't
know that was legal Java
- can you refactor out the code that checks curPos+len against the limit into a
{{checkLimit(int bytesToRead)}} call? (see the sketch after this list)
- would be good to add a simple unit test of this functionality, e.g. construct
an FSEditLogOp.Reader and give it a header that would cause it to try to read
more than MAX_OP_SIZE, then verify it throws the expected exception (a test
sketch follows below)
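
For concreteness, here is a minimal sketch of what that checkLimit refactoring
could look like. Only {{checkLimit(int bytesToRead)}} comes from the comment
above; the interface methods and the {{LimitedInputStream}} wrapper class are
hypothetical names, not taken from the actual patch:

{code:java}
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

/** Hypothetical limiter interface; the real names may differ in the patch. */
interface StreamLimiter {
  /** Set a limit, in bytes past the current position, that reads must not cross. */
  void setLimit(long limit);
  /** Clear any active limit. */
  void clearLimit();
}

/** Hypothetical wrapper stream that enforces the limit. */
class LimitedInputStream extends FilterInputStream implements StreamLimiter {
  private long curPos = 0;
  private long limitPos = Long.MAX_VALUE;

  LimitedInputStream(InputStream in) {
    super(in);
  }

  @Override
  public void setLimit(long limit) {
    limitPos = curPos + limit;
  }

  @Override
  public void clearLimit() {
    limitPos = Long.MAX_VALUE;
  }

  /** The refactored check: one place that compares curPos + len to the limit. */
  private void checkLimit(int bytesToRead) throws IOException {
    if (curPos + bytesToRead > limitPos) {
      throw new IOException("Tried to read " + bytesToRead
          + " byte(s) past the stream limit at position " + curPos);
    }
  }

  @Override
  public int read() throws IOException {
    checkLimit(1);
    int b = super.read();
    if (b != -1) {
      curPos++;
    }
    return b;
  }

  @Override
  public int read(byte[] buf, int off, int len) throws IOException {
    checkLimit(len);
    int n = super.read(buf, off, len);
    if (n > 0) {
      curPos += n;
    }
    return n;
  }
}
{code}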
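
And a sketch of the kind of unit test suggested above, written against the
hypothetical {{LimitedInputStream}} from the previous sketch; the real test
would construct an FSEditLogOp.Reader and feed it a header whose length field
exceeds MAX_OP_SIZE:

{code:java}
import static org.junit.Assert.fail;

import java.io.ByteArrayInputStream;
import java.io.IOException;

import org.junit.Test;

public class TestStreamLimiter {
  @Test
  public void testReadPastLimitThrows() throws IOException {
    // 64 bytes of data, but the limiter only allows 16 to be read.
    LimitedInputStream in =
        new LimitedInputStream(new ByteArrayInputStream(new byte[64]));
    in.setLimit(16);  // stand-in for MAX_OP_SIZE

    byte[] buf = new byte[32];
    try {
      in.read(buf, 0, buf.length);  // asks for 32 bytes, past the 16-byte limit
      fail("expected IOException when reading past the limit");
    } catch (IOException e) {
      // expected: a clean failure instead of unbounded buffering
    }
  }
}
{code}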
> should more effectively limit stream memory consumption when reading corrupt
> edit logs
> --------------------------------------------------------------------------------------
>
> Key: HDFS-3440
> URL: https://issues.apache.org/jira/browse/HDFS-3440
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Colin Patrick McCabe
> Assignee: Colin Patrick McCabe
> Priority: Minor
> Attachments: HDFS-3440.001.patch
>
>
> Currently, we do in.mark(100MB) before reading an opcode out of the edit log.
> However, this could result in us using all of those 100 MB when reading bogus
> data, which is not what we want. It also could easily make some corrupt edit
> log files unreadable.
> We should have a stream limiter interface that causes a clean IOException
> when we're in this situation, and does not result in huge memory consumption.
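
To illustrate the failure mode described in the issue: {{mark(readlimit)}} on a
{{BufferedInputStream}} obliges the stream to retain up to readlimit bytes so
that {{reset()}} can rewind, so a corrupt length field can drag the whole
100 MB into memory before anything fails. A schematic sketch (the class and
method names here are illustrative, not from the actual code):

{code:java}
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;

class MarkResetExample {
  static final int MAX_OP_SIZE = 100 * 1024 * 1024;  // the ~100 MB readlimit

  static void readOneOp(InputStream raw) throws IOException {
    BufferedInputStream in = new BufferedInputStream(raw);
    // mark() promises that reset() can rewind up to MAX_OP_SIZE bytes, so the
    // buffer is allowed to grow to 100 MB while decoding a single op.
    in.mark(MAX_OP_SIZE);
    // ... decode opcode and length here; a bogus length field can pull in
    // (and buffer) up to 100 MB of garbage before the failure is noticed ...
    in.reset();  // rewind on failure, possible only because it was all buffered
  }
}
{code}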