[
https://issues.apache.org/jira/browse/HDFS-10312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15248984#comment-15248984
]
Chris Nauroth commented on HDFS-10312:
--------------------------------------
[~arpitagarwal], I like your suggestion for speeding up the test.
Unfortunately, I think this doesn't quite give us the same test coverage. To
demonstrate this, apply patch v004, then revert the src/main changes, and then
run the test. It will fail on a protobuf decoding exception. That's exactly
the condition we want to test, and the src/main changes make the test pass.
After applying the delta, that's no longer true. The test passes with or
without the src/main changes. That's because with the smaller block report
sizes, we never hit protobuf's internal default maximum of 64 MB. With a block
report size of 6,000,000 blocks, the RPC message definitely exceeds 64 MB, so
we reliably trigger the right condition.
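A back-of-envelope estimate shows why 6,000,000 blocks is comfortably past the limit. The sketch below assumes each replica is reported as roughly three uint64 fields (block ID, length, generation stamp), each a one-byte tag plus a varint payload; the actual wire format of BlockListAsLongs differs in detail, so treat this as an illustrative lower bound, not the exact encoding.

```java
// Rough estimate of a protobuf-encoded block report's size, assuming
// three varint-encoded uint64 fields per replica. Field shapes and the
// sample values are illustrative, not Hadoop's exact wire format.
public class BlockReportSize {

    // Size in bytes of a protobuf varint encoding of v (treated as unsigned).
    static int varintSize(long v) {
        int bits = 64 - Long.numberOfLeadingZeros(v | 1);
        return (bits + 6) / 7;
    }

    // Estimated bytes per replica: a one-byte field tag plus the varint
    // payload for each of the three fields.
    static long perReplicaBytes(long blockId, long numBytes, long genStamp) {
        return (1 + varintSize(blockId))
             + (1 + varintSize(numBytes))
             + (1 + varintSize(genStamp));
    }

    public static void main(String[] args) {
        long replicas = 6_000_000L;                  // block report size from the test
        long perReplica = perReplicaBytes(
                1_000_000_000_000_000_000L,          // hypothetical 60-bit block ID
                128L * 1024 * 1024,                  // 128 MB block length
                100_000L);                           // hypothetical generation stamp
        long total = replicas * perReplica;          // ~19 bytes/replica -> ~114 MB
        long protobufLimit = 64L * 1024 * 1024;      // protobuf's 64 MB default
        System.out.println("estimated report size: " + total + " bytes");
        System.out.println("exceeds 64 MB limit: " + (total > protobufLimit));
    }
}
```

Even with these conservative per-replica sizes, the total lands near 114 MB, well past the 64 MB default, which is why the 6,000,000-block report reliably trips the decoder.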
> Large block reports may fail to decode at NameNode due to 64 MB protobuf
> maximum length restriction.
> ----------------------------------------------------------------------------------------------------
>
> Key: HDFS-10312
> URL: https://issues.apache.org/jira/browse/HDFS-10312
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Reporter: Chris Nauroth
> Assignee: Chris Nauroth
> Attachments: HDFS-10312.001.patch, HDFS-10312.002.patch,
> HDFS-10312.003.patch, HDFS-10312.004.patch
>
>
> Our RPC server caps the maximum size of incoming messages at 64 MB by
> default. For exceptional circumstances, this limit can be raised using
> {{ipc.maximum.data.length}}. However, for block reports, there is still an
> internal maximum length restriction of 64 MB enforced by protobuf. (Sample
> stack trace to follow in comments.) This issue proposes to apply the same
> override to our block list decoding, so that large block reports can proceed.
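The pattern the description proposes can be sketched as follows. This is an illustrative stand-in, not Hadoop's actual code: the config map replaces Hadoop's Configuration class, and the check stands in for the limit that protobuf-java enforces internally (which its CodedInputStream exposes via setSizeLimit). The point is that the same {{ipc.maximum.data.length}} override must reach every place a 64 MB default is enforced, including block list decoding.

```java
import java.util.Map;

// Illustrative sketch of a configurable message-length cap. A plain
// Map stands in for Hadoop's Configuration; the names here are
// hypothetical except for the ipc.maximum.data.length key and the
// 64 MB default, which come from the issue description.
public class MaxDataLength {

    static final int DEFAULT_MAX_DATA_LENGTH = 64 * 1024 * 1024; // 64 MB

    // Resolve the effective limit, honoring an ipc.maximum.data.length override.
    static int maxDataLength(Map<String, String> conf) {
        String v = conf.get("ipc.maximum.data.length");
        return v == null ? DEFAULT_MAX_DATA_LENGTH : Integer.parseInt(v);
    }

    // The same check must guard both the RPC layer and the protobuf
    // decoding of the block list; raising only one of them still fails.
    static boolean withinLimit(int messageLength, Map<String, String> conf) {
        return messageLength <= maxDataLength(conf);
    }

    public static void main(String[] args) {
        Map<String, String> defaults = Map.of();
        Map<String, String> tuned =
                Map.of("ipc.maximum.data.length", String.valueOf(128 * 1024 * 1024));
        int bigReport = 100 * 1024 * 1024; // a hypothetical 100 MB block report
        System.out.println("default config accepts: " + withinLimit(bigReport, defaults));
        System.out.println("tuned config accepts:   " + withinLimit(bigReport, tuned));
    }
}
```

With the default configuration the 100 MB report is rejected; once the override is honored at the decoding site as well, the same tuned value admits it.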
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)