[jira] [Updated] (HADOOP-9676) make maximum RPC buffer size configurable

2013-07-26 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-9676:
-----------------------------------------

Fix Version/s: 2.1.0-beta

 make maximum RPC buffer size configurable
 -----------------------------------------

 Key: HADOOP-9676
 URL: https://issues.apache.org/jira/browse/HADOOP-9676
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.1.0-beta

 Attachments: HADOOP-9676.001.patch, HADOOP-9676.003.patch


 Currently the RPC server allocates however much memory the client asks 
 for, without validating it.  It would be nice to make the maximum RPC buffer 
 size configurable.  This would prevent a rogue client from bringing down the 
 NameNode (or another Hadoop daemon) with a few requests for 2 GB buffers.  It 
 would also make it easier to debug issues with super-large RPCs or malformed 
 headers, since OOMs can be difficult for developers to reproduce.
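
 The fix described above can be sketched roughly as follows. This is an
 illustrative Java sketch, not the actual patch: the 64 MB cap, the
 {{checkDataLength}} method name, and the framing are assumptions made for
 the example. The idea is simply to validate the client-declared length
 against a configured maximum *before* allocating a buffer for it.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class RpcLengthCheck {
    // Hypothetical cap; the real value would come from configuration.
    static final int MAX_DATA_LENGTH = 64 * 1024 * 1024; // 64 MB

    /** Read one length-prefixed frame, validating the length first. */
    static byte[] readFrame(DataInputStream in) throws IOException {
        int dataLength = in.readInt(); // length prefix declared by the client
        checkDataLength(dataLength);   // reject BEFORE allocating
        byte[] buf = new byte[dataLength];
        in.readFully(buf);
        return buf;
    }

    /** Separate validation method, as the patch notes suggest. */
    static void checkDataLength(int dataLength) throws IOException {
        if (dataLength < 0) {
            throw new IOException("negative frame length " + dataLength);
        }
        if (dataLength > MAX_DATA_LENGTH) {
            throw new IOException("frame length " + dataLength
                + " exceeds maximum " + MAX_DATA_LENGTH);
        }
    }

    public static void main(String[] args) throws IOException {
        // Frame a small payload and read it back through the checked path.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        byte[] payload = "hello".getBytes("UTF-8");
        out.writeInt(payload.length);
        out.write(payload);
        DataInputStream in = new DataInputStream(
            new ByteArrayInputStream(bos.toByteArray()));
        System.out.println(new String(readFrame(in), "UTF-8"));

        // A rogue ~2 GB declared length is rejected without allocation.
        ByteArrayOutputStream bos2 = new ByteArrayOutputStream();
        new DataOutputStream(bos2).writeInt(Integer.MAX_VALUE);
        DataInputStream in2 = new DataInputStream(
            new ByteArrayInputStream(bos2.toByteArray()));
        try {
            readFrame(in2);
            System.out.println("not rejected");
        } catch (IOException e) {
            System.out.println("rejected");
        }
    }
}
```

 Without the check, {{new byte[Integer.MAX_VALUE]}} would attempt a ~2 GB
 allocation per request, which is exactly the OOM scenario the issue
 describes.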

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9676) make maximum RPC buffer size configurable

2013-07-01 Thread Colin Patrick McCabe (JIRA)


Colin Patrick McCabe updated HADOOP-9676:
-----------------------------------------

Affects Version/s: 2.2.0
   Status: Patch Available  (was: Open)



[jira] [Updated] (HADOOP-9676) make maximum RPC buffer size configurable

2013-07-01 Thread Colin Patrick McCabe (JIRA)


Colin Patrick McCabe updated HADOOP-9676:
-----------------------------------------

Attachment: HADOOP-9676.003.patch

* move dataLength check to a separate method

* add {{TestProtoBufRpc#testExtraLongRpc}}

Will commit today or tomorrow if there are no further comments.



[jira] [Updated] (HADOOP-9676) make maximum RPC buffer size configurable

2013-07-01 Thread Colin Patrick McCabe (JIRA)


Colin Patrick McCabe updated HADOOP-9676:
-----------------------------------------

 Target Version/s: 2.1.0-beta
Affects Version/s: 2.1.0-beta  (was: 2.2.0)



[jira] [Updated] (HADOOP-9676) make maximum RPC buffer size configurable

2013-07-01 Thread Colin Patrick McCabe (JIRA)


Colin Patrick McCabe updated HADOOP-9676:
-----------------------------------------

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Merged to 2.1-beta, branch-2, and trunk.



[jira] [Updated] (HADOOP-9676) make maximum RPC buffer size configurable

2013-06-28 Thread Colin Patrick McCabe (JIRA)


Colin Patrick McCabe updated HADOOP-9676:
-----------------------------------------

Attachment: HADOOP-9676.001.patch
