[ 
https://issues.apache.org/jira/browse/KAFKA-5062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15967368#comment-15967368
 ] 

Ismael Juma commented on KAFKA-5062:
------------------------------------

[~rsivaram], yes, it was on PLAINTEXT. The weird thing is that `NetworkReceive` 
checks that the request size is within socket.request.max.bytes before 
allocating the buffer. I don't know if [~apurva] has the data, so I'll leave it 
to him to answer that.
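For context, the guard being described works roughly like the sketch below: validate the 4-byte size prefix against the configured limit before allocating anything. This is an illustrative sketch, not Kafka's actual `NetworkReceive` code; the class name, `MAX_BYTES` constant, and `allocateForSize` method are made up for the example.

```java
import java.nio.ByteBuffer;

public class SizeCheckedReceive {
    // Hypothetical limit standing in for socket.request.max.bytes (100MB).
    static final int MAX_BYTES = 100 * 1024 * 1024;

    // Read the 4-byte big-endian size prefix and allocate only if it is
    // within bounds; otherwise reject the request outright.
    static ByteBuffer allocateForSize(ByteBuffer sizeHeader) {
        int size = sizeHeader.getInt(0);
        if (size < 0 || size > MAX_BYTES)
            throw new IllegalArgumentException("Invalid request size: " + size);
        return ByteBuffer.allocate(size);
    }

    public static void main(String[] args) {
        ByteBuffer header = ByteBuffer.allocate(4).putInt(0, 1024);
        System.out.println(allocateForSize(header).capacity()); // 1024
    }
}
```

If the bug is real, some code path must be allocating the buffer without passing through a check like this, or the check runs after the allocation.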

> Kafka brokers can accept malformed requests which allocate gigabytes of memory
> ------------------------------------------------------------------------------
>
>                 Key: KAFKA-5062
>                 URL: https://issues.apache.org/jira/browse/KAFKA-5062
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: Apurva Mehta
>
> In some circumstances, it is possible to cause a Kafka broker to allocate 
> massive amounts of memory by writing malformed bytes to the broker's port. 
> While investigating an issue, we saw byte arrays on the Kafka heap of up to 
> 1.8 gigabytes, the first 360 bytes of which were non-Kafka requests -- an 
> application was writing the wrong data to Kafka, causing the broker to 
> interpret the request size as 1.8GB and then allocate that amount. Apart from 
> the first 360 bytes, the rest of the 1.8GB byte array was null. 
> We have socket.request.max.bytes set to 100MB to protect against this kind 
> of thing, but somehow that limit is not always respected. We need to 
> investigate why and fix it.
> cc [~rnpridgeon], [~ijuma], [~gwenshap], [~cmccabe]
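The failure mode described above is easy to reproduce in miniature: a broker that frames requests with a 4-byte big-endian length prefix will decode the first four bytes of any foreign protocol as a size. The payload below is a made-up example, not the actual data from the incident; note that any lowercase-ASCII prefix happens to decode to a value between roughly 1.6GB and 2.0GB, which is consistent with the 1.8GB allocations reported.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class MisreadSizePrefix {
    public static void main(String[] args) {
        // Hypothetical non-Kafka bytes written to the broker port; the
        // broker treats the first four bytes as a big-endian request length.
        byte[] garbage = "lengthy plain-text data ...".getBytes(StandardCharsets.US_ASCII);
        int declaredSize = ByteBuffer.wrap(garbage).getInt();
        System.out.println(declaredSize); // "leng" -> 1818586727, about 1.8GB
    }
}
```

Without a size-limit check applied before allocation, the broker would try to allocate this ~1.8GB buffer and then block waiting for bytes that never arrive.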



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
