[ https://issues.apache.org/jira/browse/HDFS-10312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Nauroth updated HDFS-10312:
---------------------------------
    Attachment: HDFS-10312.003.patch

Here is patch v003 with one more change in the test.  I found that all of the 
bogus block IDs were causing a lot of log spam and slowing down the test, 
particularly the block state change messages and the {{FsDatasetImpl}} 
"Failed to delete replica" messages.  I've changed the test to set the log 
level to WARN for these loggers, which suppresses the spam and speeds up the 
test considerably.
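A minimal sketch of that kind of test change, assuming a log4j 1.x backend (what Hadoop tests of this era typically use); the logger names shown are the usual sources of these messages but are assumptions here, not taken from the patch:

{code:java}
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class TestBlockReportSetup {
  static void quietNoisyLoggers() {
    // Block state change messages are logged under the "BlockStateChange"
    // logger; raising it to WARN suppresses the per-block INFO spam.
    Logger.getLogger("BlockStateChange").setLevel(Level.WARN);
    // The "Failed to delete replica" messages come from FsDatasetImpl.
    Logger.getLogger(
        "org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl")
        .setLevel(Level.WARN);
  }
}
{code}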

> Large block reports may fail to decode at NameNode due to 64 MB protobuf 
> maximum length restriction.
> ----------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-10312
>                 URL: https://issues.apache.org/jira/browse/HDFS-10312
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>            Reporter: Chris Nauroth
>            Assignee: Chris Nauroth
>         Attachments: HDFS-10312.001.patch, HDFS-10312.002.patch, 
> HDFS-10312.003.patch
>
>
> Our RPC server caps the maximum size of incoming messages at 64 MB by 
> default.  For exceptional circumstances, this can be increased using 
> {{ipc.maximum.data.length}}.  However, for block reports, there is still an 
> internal maximum length restriction of 64 MB enforced by protobuf.  (Sample 
> stack trace to follow in comments.)  This issue proposes to apply the same 
> override to our block list decoding, so that large block reports can proceed.


