[
https://issues.apache.org/jira/browse/HDFS-4304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Colin Patrick McCabe updated HDFS-4304:
---------------------------------------
Attachment: HDFS-4304.005.patch
This version of the patch makes MAX_OP_SIZE configurable in production and not
just in recovery mode.
I didn't implement the warning when writing an over-long op. It would be
tricky to do that correctly: for example, if you're using QJM, the maximum op
size configured on your local NameNode may not be the same as on the NN that
actually writes the bytes to disk. I think that would get messy.
This is just a minimal change to make configurable something that wasn't
configurable before.
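To make that concrete, here is a minimal sketch of what a configurable op
size limit could look like with a Hadoop Configuration lookup; the key name
dfs.namenode.max.op.size, the class name, and the method are illustrative
placeholders, not necessarily what this patch actually uses:

    // Minimal sketch (assumed names, not the actual patch): read the maximum
    // edit-log op size from the configuration, keeping the historical 1.5 MB
    // hard-coded value as the default when the key is unset.
    import org.apache.hadoop.conf.Configuration;

    public class MaxOpSizeSketch {
      // Hypothetical configuration key; the real key may be named differently.
      public static final String MAX_OP_SIZE_KEY = "dfs.namenode.max.op.size";
      // The previous hard-coded limit (1.5 MB), used as the default.
      public static final int DEFAULT_MAX_OP_SIZE = 3 * 512 * 1024;

      public static int getMaxOpSize(Configuration conf) {
        return conf.getInt(MAX_OP_SIZE_KEY, DEFAULT_MAX_OP_SIZE);
      }
    }

With a lookup along these lines, both the normal edit-log loading path and
recovery mode can be handed the same configured value instead of a
compiled-in constant.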
> Make FSEditLogOp.MAX_OP_SIZE configurable
> -----------------------------------------
>
> Key: HDFS-4304
> URL: https://issues.apache.org/jira/browse/HDFS-4304
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Affects Versions: 3.0.0, 2.0.3-alpha
> Reporter: Todd Lipcon
> Assignee: Colin Patrick McCabe
> Attachments: HDFS-4304.001.patch, HDFS-4304.002.patch,
> HDFS-4304.003.patch, HDFS-4304.004.patch, HDFS-4304.005.patch
>
>
> Today we ran into an issue where a NN had logged a very large op, greater
> than the 1.5MB MAX_OP_SIZE constant. In order to successfully load the edits,
> we had to patch with a larger constant. This constant should be configurable
> so that we don't have to recompile in these odd cases. Additionally, I
> think the default should be bumped a bit higher, since it's only a safeguard
> against OOME, and people tend to run NNs with multi-GB heaps.
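The OOME safeguard mentioned in the description is, in general terms, a cap on
how large a buffer the reader will allocate for a single op. A rough sketch of
that kind of guard, with assumed names rather than the actual
FSEditLogOp.Reader code:

    // Rough sketch (assumed names, not the real edit-log reader): refuse to
    // allocate a buffer for an op whose declared size exceeds the configured
    // maximum, so a corrupt or oversized length field cannot trigger an OOME.
    import java.io.DataInputStream;
    import java.io.IOException;

    class OpSizeGuardSketch {
      static byte[] readOpBody(DataInputStream in, int maxOpSize)
          throws IOException {
        int length = in.readInt();        // declared size of the serialized op
        if (length < 0 || length > maxOpSize) {
          throw new IOException("Op size " + length
              + " exceeds configured maximum of " + maxOpSize + " bytes");
        }
        byte[] body = new byte[length];   // only allocated after the check
        in.readFully(body);
        return body;
      }
    }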