[
https://issues.apache.org/jira/browse/HDFS-10312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15249299#comment-15249299
]
Chris Nauroth commented on HDFS-10312:
--------------------------------------
It appears the discussions in those other JIRAs missed the point that
{{ipc.maximum.data.length}} controls only the maximum payload accepted by the
RPC server. Without this patch, raising that setting is not sufficient,
because protobuf enforces its own size limit during decoding, as demonstrated
by the stack trace I included in prior comments. Asking admins to repartition
blocks across multiple storages on the same drive isn't a viable workaround.
HDFS-9011 is a much deeper change that will require further review. This
patch is a simple way to unblock clusters that have already gotten into this
state accidentally.
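For context, protobuf's {{CodedInputStream}} enforces its own 64 MB size
limit by default, independent of the RPC layer, which is why raising
{{ipc.maximum.data.length}} alone does not help. Below is a minimal sketch of
the kind of override applied in the block list decode path; the class and
method names here are illustrative, not the actual patch:

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import com.google.protobuf.CodedInputStream;

public class BlockListDecodeSketch {
  /**
   * Decode a varint64-encoded block list, raising protobuf's internal
   * size limit (64 MB by default) to the configured maxDataLength.
   * Without the setSizeLimit call, CodedInputStream fails with
   * "Protocol message was too large" once decoding crosses 64 MB,
   * no matter what ipc.maximum.data.length is set to.
   */
  public static List<Long> decode(byte[] serialized, int maxDataLength)
      throws IOException {
    CodedInputStream cis = CodedInputStream.newInstance(serialized);
    cis.setSizeLimit(maxDataLength);  // mirror the RPC server's bound
    List<Long> longs = new ArrayList<>();
    while (!cis.isAtEnd()) {
      longs.add(cis.readRawVarint64());
    }
    return longs;
  }
}
{code}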
> Large block reports may fail to decode at NameNode due to 64 MB protobuf
> maximum length restriction.
> ----------------------------------------------------------------------------------------------------
>
> Key: HDFS-10312
> URL: https://issues.apache.org/jira/browse/HDFS-10312
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Reporter: Chris Nauroth
> Assignee: Chris Nauroth
> Attachments: HDFS-10312.001.patch, HDFS-10312.002.patch,
> HDFS-10312.003.patch, HDFS-10312.004.patch
>
>
> Our RPC server caps the maximum size of incoming messages at 64 MB by
> default. For exceptional circumstances, this limit can be raised using
> {{ipc.maximum.data.length}}. However, for block reports, there is still an
> internal maximum length restriction of 64 MB enforced by protobuf. (Sample
> stack trace to follow in comments.) This issue proposes to apply the same
> override to our block list decoding, so that large block reports can proceed.
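> For reference, a minimal sketch of reading the configured limit so the same
> bound can be applied on the decode side (assuming the constant names in
> {{CommonConfigurationKeys}}; the wiring here is illustrative):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.CommonConfigurationKeys;
>
> public class MaxDataLengthExample {
>   public static void main(String[] args) {
>     // ipc.maximum.data.length defaults to 64 MB; core-site.xml can raise
>     // it, and the same value should then bound protobuf block report
>     // decoding as well.
>     Configuration conf = new Configuration();
>     int maxDataLength = conf.getInt(
>         CommonConfigurationKeys.IPC_MAXIMUM_DATA_LENGTH,
>         CommonConfigurationKeys.IPC_MAXIMUM_DATA_LENGTH_DEFAULT);
>     System.out.println("Effective max RPC payload: " + maxDataLength);
>   }
> }
> {code}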