[jira] [Commented] (HADOOP-9676) make maximum RPC buffer size configurable

2013-07-26 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13721116#comment-13721116
 ] 

Vinod Kumar Vavilapalli commented on HADOOP-9676:
-------------------------------------------------

[~cmccabe], can you please set the fix-version for this one? 2.1.0-beta? 
(I haven't seen your other commits, but calling it out just in case - we set 
target-version pre-commit and fix-version at commit time.)

 make maximum RPC buffer size configurable
 -----------------------------------------

 Key: HADOOP-9676
 URL: https://issues.apache.org/jira/browse/HADOOP-9676
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-9676.001.patch, HADOOP-9676.003.patch


 Currently the RPC server just allocates however much memory the client asks 
 for, without validating.  It would be nice to make the maximum RPC buffer 
 size configurable.  This would prevent a rogue client from bringing down the 
 NameNode (or other Hadoop daemon) with a few requests for 2 GB buffers.  It 
 would also make it easier to debug issues with super-large RPCs or malformed 
 headers, since OOMs can be difficult for developers to reproduce.
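
A minimal sketch of the guard being proposed, with invented names 
(RpcLengthCheck, maxDataLength, and allocateChecked are illustrative, not the 
patch's actual identifiers):

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;

// Minimal sketch: validate the client-declared RPC length against a
// configured ceiling instead of blindly allocating whatever was requested.
public class RpcLengthCheck {
  private final int maxDataLength;  // configured ceiling, e.g. 64 MB

  public RpcLengthCheck(int maxDataLength) {
    this.maxDataLength = maxDataLength;
  }

  public ByteBuffer allocateChecked(int dataLength) throws IOException {
    if (dataLength < 0 || dataLength > maxDataLength) {
      throw new IOException("Requested data length " + dataLength
          + " is longer than maximum configured RPC length " + maxDataLength);
    }
    return ByteBuffer.allocate(dataLength);  // safe to allocate now
  }
}
{code}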

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9676) make maximum RPC buffer size configurable

2013-07-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697682#comment-13697682
 ] 

Hudson commented on HADOOP-9676:


Integrated in Hadoop-Yarn-trunk #258 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/258/])
HADOOP-9676.  Make maximum RPC buffer size configurable (Colin Patrick 
McCabe) (Revision 1498737)

 Result = FAILURE
cmccabe : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1498737
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestProtoBufRpc.java
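
The file list above suggests the shape of the change: a constant in 
CommonConfigurationKeys that Server reads before allocating a call buffer. A 
sketch of that wiring, assuming a key named ipc.maximum.data.length with a 
64 MB default (both are assumptions here, not copied from revision 1498737):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Sketch of the configuration wiring implied by the changed files. The key
// name and default below are assumptions for illustration.
public class MaxDataLengthConfigSketch {
  public static final String IPC_MAXIMUM_DATA_LENGTH =
      "ipc.maximum.data.length";                        // assumed key name
  public static final int IPC_MAXIMUM_DATA_LENGTH_DEFAULT = 64 * 1024 * 1024;

  private final int maxDataLength;

  public MaxDataLengthConfigSketch(Configuration conf) {
    // Server would read the ceiling once at construction time.
    this.maxDataLength =
        conf.getInt(IPC_MAXIMUM_DATA_LENGTH, IPC_MAXIMUM_DATA_LENGTH_DEFAULT);
  }

  public int getMaxDataLength() {
    return maxDataLength;
  }
}
{code}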




[jira] [Commented] (HADOOP-9676) make maximum RPC buffer size configurable

2013-07-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697774#comment-13697774
 ] 

Hudson commented on HADOOP-9676:


Integrated in Hadoop-Mapreduce-trunk #1475 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1475/])
HADOOP-9676.  Make maximum RPC buffer size configurable (Colin Patrick 
McCabe) (Revision 1498737)

 Result = FAILURE
cmccabe : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1498737
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestProtoBufRpc.java




[jira] [Commented] (HADOOP-9676) make maximum RPC buffer size configurable

2013-07-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697126#comment-13697126
 ] 

Hadoop QA commented on HADOOP-9676:
-----------------------------------

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12590290/HADOOP-9676.003.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test file.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2713//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2713//console

This message is automatically generated.



[jira] [Commented] (HADOOP-9676) make maximum RPC buffer size configurable

2013-07-01 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697267#comment-13697267
 ] 

Suresh Srinivas commented on HADOOP-9676:
-----------------------------------------

Can you please merge this to 2.1.0-beta, since rc2 is not yet out?



[jira] [Commented] (HADOOP-9676) make maximum RPC buffer size configurable

2013-07-01 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697266#comment-13697266
 ] 

Suresh Srinivas commented on HADOOP-9676:
-----------------------------------------

+1 for the patch.



[jira] [Commented] (HADOOP-9676) make maximum RPC buffer size configurable

2013-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697327#comment-13697327
 ] 

Hudson commented on HADOOP-9676:


Integrated in Hadoop-trunk-Commit #4027 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4027/])
HADOOP-9676.  Make maximum RPC buffer size configurable (Colin Patrick 
McCabe) (Revision 1498737)

 Result = SUCCESS
cmccabe : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1498737
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestProtoBufRpc.java




[jira] [Commented] (HADOOP-9676) make maximum RPC buffer size configurable

2013-07-01 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697368#comment-13697368
 ] 

Roman Shaposhnik commented on HADOOP-9676:
------------------------------------------

Tested this patch on top of branch-2.1 with Bigtop -- the biggest issue (the 
NN OOMing) is now gone, but a few subtests from TestCLI still fail. A big +1 
for including this patch in 2.1.



[jira] [Commented] (HADOOP-9676) make maximum RPC buffer size configurable

2013-06-28 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695847#comment-13695847
 ] 

Suresh Srinivas commented on HADOOP-9676:
-----------------------------------------

+1 for the patch.

Could you move all the dataLength checks into a static method and add a test 
for that?
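
A hypothetical sketch of that refactor, with invented names (checkDataLength 
and the test class below are illustrative only, not the patch's code):

{code:java}
import java.io.IOException;
import org.junit.Test;
import static org.junit.Assert.fail;

// Sketch of the suggested refactor: the length check pulled out into a
// static method, plus a unit test that exercises both branches.
public class TestDataLengthCheck {

  // Would live in Server; throws if the declared length is out of range.
  static void checkDataLength(int dataLength, int maxDataLength)
      throws IOException {
    if (dataLength < 0 || dataLength > maxDataLength) {
      throw new IOException("Requested data length " + dataLength
          + " is longer than maximum configured RPC length " + maxDataLength);
    }
  }

  @Test
  public void testDataLengthCheck() throws Exception {
    checkDataLength(1024, 4096);      // within the limit: accepted
    try {
      checkDataLength(8192, 4096);    // over the limit: must throw
      fail("expected oversized data length to be rejected");
    } catch (IOException expected) {
      // expected path
    }
  }
}
{code}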
