[ 
https://issues.apache.org/jira/browse/HDFS-10312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15249247#comment-15249247
 ] 

Brahma Reddy Battula commented on HDFS-10312:
---------------------------------------------

We've seen the same issue and reported HDFS-8574. As per the discussion there, it 
could be solved by HDFS-9011, but we haven't seen any progress on that.
As Colin suggested there, "It would be simpler for the admin to create two (or 
more) storages on the same drive, and it wouldn't involve any code modification 
by us." 

Even now, the number of blocks per volume is exposed (HDFS-9425), so the admin 
can monitor this.
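The workaround Colin describes can be sketched as a DataNode configuration that lists two directories on the same physical drive (the paths below are hypothetical), so each is treated as a separate storage with its own block report:

```xml
<!-- hdfs-site.xml (sketch): two storage directories on one physical drive.
     /data1/hdfs/dn-a and /data1/hdfs/dn-b are hypothetical example paths. -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/data1/hdfs/dn-a,/data1/hdfs/dn-b</value>
</property>
```

Since each storage's block list is reported and decoded separately, splitting one very large volume into multiple storages keeps each list smaller, without any code change.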


> Large block reports may fail to decode at NameNode due to 64 MB protobuf 
> maximum length restriction.
> ----------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-10312
>                 URL: https://issues.apache.org/jira/browse/HDFS-10312
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>            Reporter: Chris Nauroth
>            Assignee: Chris Nauroth
>         Attachments: HDFS-10312.001.patch, HDFS-10312.002.patch, 
> HDFS-10312.003.patch, HDFS-10312.004.patch
>
>
> Our RPC server caps the maximum size of incoming messages at 64 MB by 
> default.  For exceptional circumstances, this can be uptuned using 
> {{ipc.maximum.data.length}}.  However, for block reports, there is still an 
> internal maximum length restriction of 64 MB enforced by protobuf.  (Sample 
> stack trace to follow in comments.)  This issue proposes to apply the same 
> override to our block list decoding, so that large block reports can proceed.
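For reference, the RPC-level override mentioned in the description is a server-side setting in core-site.xml; a sketch of raising it (128 MB here is just an example value):

```xml
<!-- core-site.xml (sketch): raise the RPC server's maximum incoming
     message size from the 64 MB default to 128 MB (example value). -->
<property>
  <name>ipc.maximum.data.length</name>
  <value>134217728</value>
</property>
```

As the description notes, before this fix the protobuf decoder's own 64 MB limit still applied to the block list, so raising this setting alone was not sufficient for large block reports.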



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
