[ https://issues.apache.org/jira/browse/HDFS-10312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Nauroth updated HDFS-10312:
---------------------------------
    Attachment: HDFS-10312.001.patch

The attached patch passes the value of {{ipc.maximum.data.length}} through to 
the block list decoding layer and applies it as an override to the protobuf 
classes.  I considered introducing a new configuration property, but ultimately 
decided against it, because an admin hitting this problem would then have to 
keep two settings tuned in sync.  I kept a few of the old method signatures 
that omit the max length and annotated them {{VisibleForTesting}} to limit the 
impact on existing tests.  The new test suite demonstrates both the problem and 
the fix.
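For readers unfamiliar with the pattern: protobuf's {{CodedInputStream}} exposes {{setSizeLimit(int)}} for exactly this kind of override.  The sketch below models the idea with plain Java streams so it stays self-contained; the class and method names are illustrative, not the ones in the patch.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

// Illustrative sketch: thread a configurable maximum message length
// (analogous to ipc.maximum.data.length) down to the layer that decodes
// a serialized block list, instead of relying on the decoder's
// hard-coded 64 MB default.
public class BoundedDecoder {
    // Mirrors protobuf's default size limit of 64 MB.
    static final int DEFAULT_MAX_LENGTH = 64 * 1024 * 1024;

    private final int maxLength;

    public BoundedDecoder(int maxLength) {
        this.maxLength = maxLength;
    }

    public BoundedDecoder() {
        this(DEFAULT_MAX_LENGTH);
    }

    // Decodes a length-prefixed payload, rejecting anything over
    // maxLength before any allocation or parsing happens.
    public byte[] decode(byte[] framed) throws IOException {
        DataInputStream in =
            new DataInputStream(new ByteArrayInputStream(framed));
        int length = in.readInt();
        if (length < 0 || length > maxLength) {
            throw new IOException("Message length " + length
                + " exceeds maximum " + maxLength);
        }
        byte[] payload = new byte[length];
        in.readFully(payload);
        return payload;
    }
}
```

The key point the patch makes is that the limit is supplied by the caller from the server's existing configuration rather than introduced as a second knob.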

> Large block reports may fail to decode at NameNode due to 64 MB protobuf 
> maximum length restriction.
> ----------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-10312
>                 URL: https://issues.apache.org/jira/browse/HDFS-10312
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>            Reporter: Chris Nauroth
>            Assignee: Chris Nauroth
>         Attachments: HDFS-10312.001.patch
>
>
> Our RPC server caps the maximum size of incoming messages at 64 MB by 
> default.  For exceptional circumstances, this limit can be raised using 
> {{ipc.maximum.data.length}}.  However, for block reports, protobuf still 
> enforces an internal maximum length restriction of 64 MB.  (A sample stack 
> trace follows in the comments.)  This issue proposes applying the same 
> override to our block list decoding, so that large block reports can proceed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
