[ https://issues.apache.org/jira/browse/HDFS-10312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15249030#comment-15249030 ]
Arpit Agarwal commented on HDFS-10312:
--------------------------------------
Yeah, I agree that is rather unfortunate, since the change to the message length
is not plumbed through without your patch.
I think the missing code paths can be tested with a targeted precondition that
ensures any change to the config setting is propagated to the BufferDecoder
(and the CodedInputStream); that precondition will fail without your
src/main changes. However, it's okay to evaluate that in a follow-up Jira, and we
don't need to hold up this one.
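Roughly what I have in mind, as a sketch only (the {{newDecoderStream}} factory below is hypothetical and just stands in for the patched code path that wires the config into the decoder's stream; it assumes Hadoop's {{CommonConfigurationKeys.IPC_MAXIMUM_DATA_LENGTH}} key and protobuf's {{CodedInputStream#setSizeLimit}}):
{code:java}
import static org.junit.Assert.assertEquals;

import com.google.protobuf.CodedInputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeys;
import org.junit.Test;

public class TestBlockReportDecoderLimit {

  // Hypothetical factory standing in for the patched code path that
  // builds the decoder's CodedInputStream from the configuration.
  private static CodedInputStream newDecoderStream(byte[] buf, Configuration conf) {
    CodedInputStream cis = CodedInputStream.newInstance(buf);
    cis.setSizeLimit(conf.getInt(
        CommonConfigurationKeys.IPC_MAXIMUM_DATA_LENGTH,
        CommonConfigurationKeys.IPC_MAXIMUM_DATA_LENGTH_DEFAULT));
    return cis;
  }

  @Test
  public void testConfiguredLimitReachesCodedInputStream() {
    Configuration conf = new Configuration();
    conf.setInt(CommonConfigurationKeys.IPC_MAXIMUM_DATA_LENGTH, 128 * 1024 * 1024);

    CodedInputStream cis = newDecoderStream(new byte[0], conf);

    // setSizeLimit returns the previously effective limit, so we can
    // observe what the factory installed without needing a getter.
    int installed = cis.setSizeLimit(Integer.MAX_VALUE);
    assertEquals(128 * 1024 * 1024, installed);
  }
}
{code}
Without the src/main changes the protobuf default of 64 MB would still be in effect and the assertion would fail.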
+1 from me.
> Large block reports may fail to decode at NameNode due to 64 MB protobuf
> maximum length restriction.
> ----------------------------------------------------------------------------------------------------
>
> Key: HDFS-10312
> URL: https://issues.apache.org/jira/browse/HDFS-10312
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Reporter: Chris Nauroth
> Assignee: Chris Nauroth
> Attachments: HDFS-10312.001.patch, HDFS-10312.002.patch,
> HDFS-10312.003.patch, HDFS-10312.004.patch
>
>
> Our RPC server caps the maximum size of incoming messages at 64 MB by
> default. For exceptional circumstances, this limit can be raised using
> {{ipc.maximum.data.length}}. However, for block reports, there is still an
> internal maximum length restriction of 64 MB enforced by protobuf. (Sample
> stack trace to follow in comments.) This issue proposes to apply the same
> override to our block list decoding, so that large block reports can proceed.
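> A minimal sketch of the proposed override, assuming protobuf's
> {{CodedInputStream#setSizeLimit}} (the method and variable names below are
> illustrative, not the exact patch):
> {code:java}
> import com.google.protobuf.CodedInputStream;
>
> /** Illustrative: build a block list decoder whose cap matches the RPC cap. */
> static CodedInputStream newBlockListStream(byte[] blockListBytes,
>     int maxDataLength) {
>   CodedInputStream cis = CodedInputStream.newInstance(blockListBytes);
>   // Without this call, protobuf enforces its internal 64 MB default and
>   // fails with InvalidProtocolBufferException on larger block reports.
>   cis.setSizeLimit(maxDataLength);
>   return cis;
> }
> {code}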