[https://issues.apache.org/jira/browse/HADOOP-9676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697774#comment-13697774]
Hudson commented on HADOOP-9676:
--------------------------------
Integrated in Hadoop-Mapreduce-trunk #1475 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1475/])
HADOOP-9676. Make maximum RPC buffer size configurable (Colin Patrick McCabe) (Revision 1498737)
Result = FAILURE
cmccabe :
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1498737
Files :
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestProtoBufRpc.java
> make maximum RPC buffer size configurable
> -----------------------------------------
>
> Key: HADOOP-9676
> URL: https://issues.apache.org/jira/browse/HADOOP-9676
> Project: Hadoop Common
> Issue Type: Improvement
> Affects Versions: 2.1.0-beta
> Reporter: Colin Patrick McCabe
> Assignee: Colin Patrick McCabe
> Priority: Minor
> Attachments: HADOOP-9676.001.patch, HADOOP-9676.003.patch
>
>
> Currently the RPC server allocates however much memory the client asks
> for, without validating the requested size. It would be nice to make the
> maximum RPC buffer size configurable. This would prevent a rogue client
> from bringing down the NameNode (or another Hadoop daemon) with a few
> requests for 2 GB buffers. It would also make it easier to debug issues
> with super-large RPCs or malformed headers, since OOMs can be difficult
> for developers to reproduce.
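The validation described above can be sketched roughly as follows: read the client-declared length prefix and reject it before allocating anything if it exceeds a configured cap. The class, method, and constant names here are illustrative assumptions, not the exact identifiers from the committed patch.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

public class RpcLengthCheck {
    // Hypothetical default cap (64 MB); the patch makes the actual limit configurable.
    static final int MAX_DATA_LENGTH = 64 * 1024 * 1024;

    // Validate the client-declared payload length BEFORE allocating a buffer for it.
    static ByteBuffer readRpcData(DataInputStream in, int maxDataLength)
            throws IOException {
        int dataLength = in.readInt();  // 4-byte length prefix sent by the client
        if (dataLength < 0 || dataLength > maxDataLength) {
            // Rejecting here means a rogue "2 GB" declaration never triggers an OOM.
            throw new IOException("RPC data length " + dataLength
                    + " exceeds maximum " + maxDataLength);
        }
        ByteBuffer buf = ByteBuffer.allocate(dataLength);  // safe: bounded above
        in.readFully(buf.array());
        return buf;
    }

    public static void main(String[] args) throws IOException {
        // A well-formed frame: 4-byte length prefix (8) followed by 8 payload bytes.
        byte[] frame = ByteBuffer.allocate(12).putInt(8).put(new byte[8]).array();
        ByteBuffer ok = readRpcData(
                new DataInputStream(new ByteArrayInputStream(frame)), MAX_DATA_LENGTH);
        System.out.println("accepted " + ok.capacity() + " bytes");

        // A rogue frame declaring a ~2 GB payload is rejected before allocation.
        byte[] bad = ByteBuffer.allocate(4).putInt(Integer.MAX_VALUE).array();
        try {
            readRpcData(new DataInputStream(new ByteArrayInputStream(bad)),
                    MAX_DATA_LENGTH);
        } catch (IOException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Checking the declared length before `allocate` is the key point: the server's exposure is then bounded by the configured maximum rather than by whatever a client chooses to claim.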
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira