[ https://issues.apache.org/jira/browse/KAFKA-5062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15968257#comment-15968257 ]
James Cheng commented on KAFKA-5062:
------------------------------------

I agree with Jun about how to try to reproduce it.

According to http://kafka.apache.org/protocol.html#protocol_common, a RequestOrResponse is Size (int32) followed by the rest of the bytes of the Request|Response. If you take a valid request, change the first 4 bytes to something huge, and send it in, what would happen? That's the scenario I mentioned in the link in my first comment: some system parsed the first 4 bytes, "HTTP", as a request size, which is the decimal value 1213486160, causing the application to attempt to allocate 1.2GB of memory for a single request.

> Kafka brokers can accept malformed requests which allocate gigabytes of memory
> ------------------------------------------------------------------------------
>
>            Key: KAFKA-5062
>            URL: https://issues.apache.org/jira/browse/KAFKA-5062
>        Project: Kafka
>     Issue Type: Bug
>       Reporter: Apurva Mehta
>
> In some circumstances, it is possible to cause a Kafka broker to allocate
> massive amounts of memory by writing malformed bytes to the broker's port.
> While investigating an issue, we saw byte arrays on the Kafka heap of up to 1.8
> gigabytes, the first 360 bytes of which were non-Kafka requests -- an
> application was writing the wrong data to Kafka, causing the broker to
> interpret the request size as 1.8GB and then allocate that amount. Apart from
> the first 360 bytes, the rest of the 1.8GB byte array was null.
> We have socket.request.max.bytes set at 100MB to protect against this kind
> of thing, but somehow that limit is not always respected. We need to
> investigate why and fix it.
> cc [~rnpridgeon], [~ijuma], [~gwenshap], [~cmccabe]

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
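The "HTTP" scenario described in the comment can be sketched as follows. This is a hypothetical illustration, not broker code: it only shows that when Kafka's framing (a big-endian int32 Size, then the payload) is applied to a stream that actually contains a plaintext HTTP request, the first four ASCII bytes "HTTP" decode to 1213486160, i.e. roughly 1.2GB.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class FrameSizeDemo {
    public static void main(String[] args) {
        // First 4 bytes of a response line such as "HTTP/1.1 200 OK"
        byte[] prefix = "HTTP".getBytes(StandardCharsets.US_ASCII);

        // Kafka frames every request as: Size (int32, big-endian) + payload.
        // ByteBuffer.getInt() is big-endian by default, matching the protocol.
        int claimedSize = ByteBuffer.wrap(prefix).getInt();

        // 'H'=0x48, 'T'=0x54, 'T'=0x54, 'P'=0x50 -> 0x48545450
        System.out.println(claimedSize); // prints 1213486160 (~1.2GB)
    }
}
```

A broker that trusts this size field and allocates the buffer up front would try to reserve ~1.2GB for a single "request", which is why validating the size against socket.request.max.bytes before allocating matters.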