[
https://issues.apache.org/jira/browse/HDFS-4304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13544489#comment-13544489
]
Colin Patrick McCabe commented on HDFS-4304:
--------------------------------------------
Last comment should read "the JN that is actually going to be writing the bytes
to disk."
> Make FSEditLogOp.MAX_OP_SIZE configurable
> -----------------------------------------
>
> Key: HDFS-4304
> URL: https://issues.apache.org/jira/browse/HDFS-4304
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Affects Versions: 3.0.0, 2.0.3-alpha
> Reporter: Todd Lipcon
> Assignee: Colin Patrick McCabe
> Attachments: HDFS-4304.001.patch, HDFS-4304.002.patch,
> HDFS-4304.003.patch, HDFS-4304.004.patch, HDFS-4304.005.patch
>
>
> Today we ran into an issue where a NN had logged a very large op, greater
> than the 1.5MB MAX_OP_SIZE constant. In order to successfully load the edits,
> we had to patch the NN with a larger constant. This constant should be
> configurable so that we don't have to recompile in these odd cases.
> Additionally, I think the default should be bumped a bit higher, since it's
> only a safeguard against OOME, and people tend to run NNs with multi-GB heaps.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira