[
https://issues.apache.org/jira/browse/HDFS-10312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15249906#comment-15249906
]
Chris Nauroth commented on HDFS-10312:
--------------------------------------
bq. With your patch, to get out of the current situation, ipc.maximum.data.length
should be changed on both the NN and DN sides.
The slightly strange thing is that protobuf's 64 MB enforcement appears to
happen only at the time of decoding a message, not at the time of creating it.
In my testing, I only saw problems on the server side consuming the message
(the NameNode). I'm not sure it would be strictly required to make the
configuration change on DataNodes, but there is also no harm in doing it that
way.
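For reference, the limit lives in protobuf's {{CodedInputStream}} and is
checked while parsing, which is why only the decoding side (the NameNode)
trips it. Below is a minimal sketch of raising the decode-side limit, assuming
the protobuf 2.x Java API that Hadoop bundles; the helper and variable names
are illustrative, not the patch's exact code:
{code:java}
import com.google.protobuf.CodedInputStream;

class BlockReportDecodeSketch {
  // Hypothetical helper: build a protobuf decoder whose size limit is
  // raised above the 64 MB default before any parsing happens.
  static CodedInputStream newDecoder(byte[] serialized, int maxDataLength) {
    // Encoding never runs this check, so a DataNode can build an oversized
    // block report without error; the limit is enforced here, at decode time.
    CodedInputStream cis = CodedInputStream.newInstance(serialized);
    cis.setSizeLimit(maxDataLength); // e.g. the ipc.maximum.data.length value
    return cis;
  }
}
{code}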
> Large block reports may fail to decode at NameNode due to 64 MB protobuf
> maximum length restriction.
> ----------------------------------------------------------------------------------------------------
>
> Key: HDFS-10312
> URL: https://issues.apache.org/jira/browse/HDFS-10312
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Reporter: Chris Nauroth
> Assignee: Chris Nauroth
> Attachments: HDFS-10312.001.patch, HDFS-10312.002.patch,
> HDFS-10312.003.patch, HDFS-10312.004.patch
>
>
> Our RPC server caps the maximum size of incoming messages at 64 MB by
> default. For exceptional circumstances, this cap can be increased using
> {{ipc.maximum.data.length}}. However, for block reports, there is still an
> internal maximum length restriction of 64 MB enforced by protobuf. (Sample
> stack trace to follow in comments.) This issue proposes to apply the same
> override to our block list decoding, so that large block reports can
> proceed. For illustration, a minimal sketch of raising the RPC cap follows.
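> This sketch sets the same {{ipc.maximum.data.length}} key programmatically
> (in practice it is usually set in core-site.xml); the 128 MB value is
> illustrative only, not a recommendation:
> {code:java}
> import org.apache.hadoop.conf.Configuration;
>
> class RpcCapSketch {
>   static Configuration withLargerRpcCap() {
>     Configuration conf = new Configuration();
>     // Raise the RPC server's maximum incoming message size above the
>     // 64 MB default; pick a value large enough for the biggest report.
>     conf.setInt("ipc.maximum.data.length", 128 * 1024 * 1024);
>     return conf;
>   }
> }
> {code}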