[
https://issues.apache.org/jira/browse/HADOOP-9441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13616487#comment-13616487
]
Suresh Srinivas commented on HADOOP-9441:
-----------------------------------------
We could enforce a size limit using CodedInputStream#setSizeLimit().
[~sanjay.radia]?
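For illustration, a minimal sketch of that idea (BoundedRpcParser and MAX_RPC_MESSAGE_SIZE are made-up names here, not the actual Server.java code): wrap the connection's input stream in a CodedInputStream and cap it before the parser touches it, so an oversized payload fails fast instead of driving a huge allocation.

{code:java}
import com.google.protobuf.CodedInputStream;
import com.google.protobuf.Message;

import java.io.IOException;
import java.io.InputStream;

// Sketch only, not the actual Server.java code: cap how many bytes the
// protobuf parser will accept from a connection.
public final class BoundedRpcParser {
  // Illustrative cap, not an existing Hadoop constant or config key.
  private static final int MAX_RPC_MESSAGE_SIZE = 64 * 1024 * 1024; // 64 MB

  public static Message parseBounded(InputStream in, Message.Builder builder)
      throws IOException {
    CodedInputStream cis = CodedInputStream.newInstance(in);
    // With the limit set, a payload larger than the cap fails with
    // InvalidProtocolBufferException instead of forcing a huge allocation.
    cis.setSizeLimit(MAX_RPC_MESSAGE_SIZE);
    builder.mergeFrom(cis);
    return builder.build();
  }
}
{code}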
> Denial of Service in IPC Server.java
> ------------------------------------
>
> Key: HADOOP-9441
> URL: https://issues.apache.org/jira/browse/HADOOP-9441
> Project: Hadoop Common
> Issue Type: Bug
> Components: ipc
> Affects Versions: 1.1.2
> Reporter: Wouter de Bie
> Priority: Minor
>
> When experimenting with a pure Python client for HDFS, I noticed that there
> is a DoS in the IPC Server. The IPC packet specifies the size (a 32-bit int)
> of the protobuf payload, and that size is used directly to allocate the buffer
> into which the protobuf message is parsed. This means that with malformed
> packets, clients can make the server allocate up to 4 GB on the heap (which,
> in my case, blew the heap on my test cluster).
> I haven't looked into a good way of solving this, but wanted to raise the
> issue.
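As a reference point, here is a hedged sketch of the kind of guard the report implies is missing (LengthCheckedReader and MAX_DATA_LENGTH are illustrative names, not Hadoop code): validate the client-declared length before trusting it for an allocation.

{code:java}
import java.io.DataInputStream;
import java.io.IOException;

// Sketch of a length check on the client-supplied frame size; not the
// actual Server.java patch.
public final class LengthCheckedReader {
  // Illustrative cap on accepted payload size.
  private static final int MAX_DATA_LENGTH = 64 * 1024 * 1024; // 64 MB

  public static byte[] readFrame(DataInputStream in) throws IOException {
    int dataLength = in.readInt(); // 32-bit size claimed by the client
    if (dataLength < 0 || dataLength > MAX_DATA_LENGTH) {
      // Reject instead of allocating whatever the client asked for,
      // which is what allows the ~4 GB heap allocation described above.
      throw new IOException("RPC data length " + dataLength
          + " exceeds maximum " + MAX_DATA_LENGTH);
    }
    byte[] data = new byte[dataLength];
    in.readFully(data);
    return data;
  }
}
{code}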