[jira] [Commented] (KAFKA-3990) Kafka New Producer may raise an OutOfMemoryError
[ https://issues.apache.org/jira/browse/KAFKA-3990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15887957#comment-15887957 ]

Adrian McCague commented on KAFKA-3990:
---

I have narrowed my particular issue down to the Confluent monitoring interceptors, so it may not be Kafka specific.

> Kafka New Producer may raise an OutOfMemoryError
>
> Key: KAFKA-3990
> URL: https://issues.apache.org/jira/browse/KAFKA-3990
> Project: Kafka
> Issue Type: Bug
> Components: clients
> Affects Versions: 0.9.0.1
> Environment: Docker (base image: CentOS), Java 8u77, Marathon
> Reporter: Brice Dutheil
> Attachments: app-producer-config.log, kafka-broker-logs.zip
>
> We are regularly seeing OOME errors on a Kafka producer. We first saw:
> {code}
> java.lang.OutOfMemoryError: Java heap space
> 	at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) ~[na:1.8.0_77]
> 	at java.nio.ByteBuffer.allocate(ByteBuffer.java:335) ~[na:1.8.0_77]
> 	at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:93) ~[kafka-clients-0.9.0.1.jar:na]
> 	at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71) ~[kafka-clients-0.9.0.1.jar:na]
> 	at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:153) ~[kafka-clients-0.9.0.1.jar:na]
> 	at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:134) ~[kafka-clients-0.9.0.1.jar:na]
> 	at org.apache.kafka.common.network.Selector.poll(Selector.java:286) ~[kafka-clients-0.9.0.1.jar:na]
> 	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256) ~[kafka-clients-0.9.0.1.jar:na]
> 	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216) ~[kafka-clients-0.9.0.1.jar:na]
> 	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128) ~[kafka-clients-0.9.0.1.jar:na]
> 	at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_77]
> {code}
> This line refers to the buffer allocation {{ByteBuffer.allocate(receiveSize)}} (see https://github.com/apache/kafka/blob/0.9.0.1/clients/src/main/java/org/apache/kafka/common/network/NetworkReceive.java#L93).
> Usually the app runs fine within a 200-400 MB heap and a 64 MB Metaspace, and we produce small messages, 500 B at most.
> The error also does not appear in the development environment. To identify the issue we tweaked the code to log the actual allocation size, and got this stack:
> {code}
> 09:55:49.484 [auth] [kafka-producer-network-thread | producer-1] WARN o.a.k.c.n.NetworkReceive HEAP-ISSUE: constructor : Integer='-1', String='-1'
> 09:55:49.485 [auth] [kafka-producer-network-thread | producer-1] WARN o.a.k.c.n.NetworkReceive HEAP-ISSUE: method : NetworkReceive.readFromReadableChannel.receiveSize=1213486160
> java.lang.OutOfMemoryError: Java heap space
> Dumping heap to /tmp/tomcat.hprof ...
> Heap dump file created [69583827 bytes in 0.365 secs]
> 09:55:50.324 [auth] [kafka-producer-network-thread | producer-1] ERROR o.a.k.c.utils.KafkaThread Uncaught exception in kafka-producer-network-thread | producer-1:
> java.lang.OutOfMemoryError: Java heap space
> 	at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) ~[na:1.8.0_77]
> 	... (same stack as above)
> {code}
> Notice the size to allocate, {{1213486160}}, ~1.2 GB. I'm not yet sure how this size is initialised.
> Notice as well that every time this OOME appears, the {{NetworkReceive}} constructor at https://github.com/apache/kafka/blob/0.9.0.1/clients/src/main/java/org/apache/kafka/common/network/NetworkReceive.java#L49 receives the parameters {{maxSize=-1}}, {{source="-1"}}.
> We may have missed configuration in our setup, but Kafka clients shouldn't raise an OOME.
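An aside on where {{1213486160}} can come from: the Kafka protocol frames every response with a 4-byte big-endian size prefix, which {{NetworkReceive}} reads before calling {{ByteBuffer.allocate(receiveSize)}}. If the socket is actually talking to an HTTP server (the root cause later identified in this thread), the first four reply bytes are the ASCII characters "HTTP", and read as one int they are exactly 1,213,486,160, i.e. the ~1.2 GB allocation seen above. A minimal sketch (class name hypothetical):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class ReceiveSizeDemo {
    public static void main(String[] args) {
        // Pretend the peer is an HTTP server: the reply starts with "HTTP/1.1 ...".
        ByteBuffer reply = ByteBuffer.wrap(
                "HTTP/1.1 200 OK".getBytes(StandardCharsets.US_ASCII));
        // The client interprets the first 4 bytes ('H','T','T','P')
        // as the big-endian size prefix of a Kafka response.
        int receiveSize = reply.getInt();
        System.out.println(receiveSize); // prints 1213486160 (0x48545450), ~1.2 GB
    }
}
```

So the huge {{receiveSize}} is not corruption inside the client; it is a well-formed read of the wrong protocol.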
[ https://issues.apache.org/jira/browse/KAFKA-3990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886466#comment-15886466 ]

Adrian McCague commented on KAFKA-3990:
---

For what it's worth, we are witnessing this as well; the received size is always 352,518,912 bytes. Still trying to track down exactly where that's coming from. Using Streams, brokers 0.10.1.1:

{code}
2017-02-27 19:28:16 ERROR KafkaThread:30 - Uncaught exception in kafka-producer-network-thread | confluent.monitoring.interceptor.app-2-StreamThread-2-producer:
java.lang.OutOfMemoryError: Java heap space
	at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) ~[?:1.8.0_112]
	at java.nio.ByteBuffer.allocate(ByteBuffer.java:335) ~[?:1.8.0_112]
	at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:93) ~[kafka-clients-0.10.1.1.jar:?]
	at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71) ~[kafka-clients-0.10.1.1.jar:?]
	at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:154) ~[kafka-clients-0.10.1.1.jar:?]
	at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:135) ~[kafka-clients-0.10.1.1.jar:?]
	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:343) ~[kafka-clients-0.10.1.1.jar:?]
	at org.apache.kafka.common.network.Selector.poll(Selector.java:291) ~[kafka-clients-0.10.1.1.jar:?]
	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260) ~[kafka-clients-0.10.1.1.jar:?]
	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:236) ~[kafka-clients-0.10.1.1.jar:?]
	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:135) ~[kafka-clients-0.10.1.1.jar:?]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
{code}

Sometimes the exception message names the consumer (from the producer thread):

{code}
Uncaught exception in kafka-producer-network-thread | confluent.monitoring.interceptor.app-2-StreamThread-2-consumer
{code}

The failure appears on the first processed message when the application is started, and then all is fine for some time. We have seen it trigger later in the logs but have not linked it to anything. Will update if I find anything out, and I will investigate whether we have any other services bound to the broker port, as other comments have found.
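The repeated value 352,518,912 also decodes to a suspicious byte pattern. This interpretation is an assumption, not confirmed in this thread: its big-endian bytes are {{0x15 0x03 0x03 0x00}}, which is how a TLS record header begins (content type 21 = alert, version 3.3 = TLS 1.2). That is the classic signature of a plaintext client connecting to an SSL/TLS listener. A quick check (class name hypothetical):

```java
import java.nio.ByteBuffer;

public class DecodeReceiveSize {
    public static void main(String[] args) {
        // Turn the reported receiveSize back into the 4 bytes the
        // client actually read off the wire (big-endian).
        byte[] b = ByteBuffer.allocate(4).putInt(352518912).array();
        // b[0] = 0x15 -> TLS record content type 21 (alert)
        // b[1],b[2] = 0x03,0x03 -> record version TLS 1.2
        // b[3] = 0x00 -> high byte of the alert record length
        System.out.printf("%02x %02x %02x %02x%n", b[0], b[1], b[2], b[3]);
        // prints: 15 03 03 00
    }
}
```

If that reading is right, the fix is a listener/security-protocol mismatch rather than anything in the producer itself.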
[ https://issues.apache.org/jira/browse/KAFKA-3990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531011#comment-15531011 ]

Andrew Olson commented on KAFKA-3990:
---

I think this Jira can be closed as a duplicate of KAFKA-2512; the version and magic byte verification should address this. We saw the same thing with the new consumer when its {{bootstrap.servers}} was accidentally set to the host:port of a Kafka Offset Monitor (https://github.com/quantifind/KafkaOffsetMonitor) service.
[ https://issues.apache.org/jira/browse/KAFKA-3990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399572#comment-15399572 ]

Brice Dutheil commented on KAFKA-3990:
---

Hi, after further investigation we found that the issue appeared because we switched from Bamboo to marathon-lb, and marathon-lb opens the 9091 HTTP port (https://github.com/mesosphere/marathon-lb#operational-best-practices); we missed that during the upgrade.

{code}
> curl -v dockerhost:9091
* About to connect() to dockerhost port 9091 (#0)
*   Trying 172.17.42.1...
* Connected to dockerhost (172.17.42.1) port 9091 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: dockerhost:9091
> Accept: */*
>
* Empty reply from server
* Connection #0 to host dockerhost left intact
curl: (52) Empty reply from server
{code}

However, I'm surprised the Kafka clients don't check the validity of the payload.
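On the point about validating the payload: the client could bound the size prefix before allocating. Below is a simplified sketch of such a guard, not the actual {{NetworkReceive}} code; note that in the report above the producer path constructs {{NetworkReceive}} with {{maxSize=-1}} (unlimited), so no bound applies there.

```java
import java.nio.ByteBuffer;

public class BoundedReceive {
    static final int UNLIMITED = -1;

    // Reject implausible size prefixes instead of allocating blindly.
    static ByteBuffer allocateChecked(int receiveSize, int maxSize) {
        if (receiveSize < 0)
            throw new IllegalStateException(
                    "Invalid receive size: " + receiveSize);
        if (maxSize != UNLIMITED && receiveSize > maxSize)
            throw new IllegalStateException(
                    "Receive size " + receiveSize + " larger than maxSize " + maxSize);
        return ByteBuffer.allocate(receiveSize);
    }

    public static void main(String[] args) {
        try {
            // "HTTP" read as a size prefix, bounded at 100 MB:
            allocateChecked(1213486160, 100 * 1024 * 1024);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // rejected instead of an OOME
        }
    }
}
```

A bound like this turns a misdirected connection into a clean connection error rather than a fatal heap exhaustion on the network thread.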
[ https://issues.apache.org/jira/browse/KAFKA-3990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15399370#comment-15399370 ]

Brice Dutheil commented on KAFKA-3990:
---

Hi all, sorry for the delayed response, I have been busy with other stuff. Yes, the broker is 0.9.0.1 as well; it runs in a Docker container too. I attached the broker logs. We restarted the single-instance cluster (~13:20), and a few minutes later (~13:34) we ran the application; it faced the same problem with this big message. This got me curious: I had only looked at server.log, but controller.log shows an OOME as well, right at broker start:

{code}
[2016-07-29 13:20:34,366] WARN [Controller-1-to-broker-1-send-thread], Controller 1 epoch 1 fails to send request {controller_id=1,controller_epoch=1,partition_states=[],live_brokers=[{id=1,end_points=[{port=9091,host=dockerhost,security_protocol_type=0}]}]} to broker Node(1, dockerhost, 9091). Reconnecting to broker. (kafka.controller.RequestSendThread)
java.lang.OutOfMemoryError: Java heap space
	at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
	at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
	at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:93)
	at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
	at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:153)
	at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:134)
	at org.apache.kafka.common.network.Selector.poll(Selector.java:286)
	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256)
	at kafka.utils.NetworkClientBlockingOps$.recurse$1(NetworkClientBlockingOps.scala:128)
	at kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntilFound$extension(NetworkClientBlockingOps.scala:139)
	at kafka.utils.NetworkClientBlockingOps$.blockingSendAndReceive$extension(NetworkClientBlockingOps.scala:80)
	at kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:180)
	at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:171)
	at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
{code}
[ https://issues.apache.org/jira/browse/KAFKA-3990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398290#comment-15398290 ]

Ismael Juma commented on KAFKA-3990:
---

[~jkreps], that's the idea, but bugs have been reported, e.g. https://issues.apache.org/jira/browse/KAFKA-3550
[ https://issues.apache.org/jira/browse/KAFKA-3990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398283#comment-15398283 ]

Jay Kreps commented on KAFKA-3990:
---

[~hachikuji] But the response format is dictated by the producer format, so it shouldn't be the case that you ever get an unknown format back in a response, right?
[jira] [Commented] (KAFKA-3990) Kafka New Producer may raise an OutOfMemoryError
[ https://issues.apache.org/jira/browse/KAFKA-3990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398115#comment-15398115 ]

Jason Gustafson commented on KAFKA-3990:
----------------------------------------

[~bric3] This kind of error is typically the result of an incompatible message format returned by the broker. Can you confirm the version of your brokers?
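To illustrate the failure mode under discussion: the client reads a 4-byte length prefix and then allocates a buffer of exactly that size, so any bogus length field (from a corrupted frame or an incompatible response format) turns directly into a huge heap allocation. The sketch below is a simplified, hypothetical version of that read path, not the actual Kafka client code; the `MAX_RECEIVE_SIZE` bound is an assumption for illustration, since (as the reporter observed) the 0.9.0.1 producer constructs its `NetworkReceive` with `maxSize=-1`, i.e. unbounded.

```java
import java.nio.ByteBuffer;

// Simplified sketch (NOT the actual Kafka client code) of reading a
// size-prefixed response frame. A defensive bound like MAX_RECEIVE_SIZE
// is one way to fail fast instead of attempting a multi-GB allocation.
public class SizePrefixedRead {
    // Hypothetical bound, for illustration only.
    static final int MAX_RECEIVE_SIZE = 100 * 1024 * 1024; // 100 MB

    static ByteBuffer allocateForFrame(byte[] sizePrefix) {
        // The 4-byte length prefix is interpreted as a big-endian int.
        int receiveSize = ByteBuffer.wrap(sizePrefix).getInt();
        if (receiveSize < 0 || receiveSize > MAX_RECEIVE_SIZE)
            throw new IllegalStateException("Invalid receive size: " + receiveSize);
        // Without the bound above, this is the allocation that OOMEs
        // (NetworkReceive.java:93 in the reported stack traces).
        return ByteBuffer.allocate(receiveSize);
    }

    public static void main(String[] args) {
        // A sane frame size allocates normally: {0, 0, 2, 0} decodes to 512.
        System.out.println(allocateForFrame(new byte[]{0, 0, 2, 0}).capacity());
        // A garbage prefix is rejected instead of allocating gigabytes.
        try {
            allocateForFrame(new byte[]{0x48, 0x54, 0x54, 0x50});
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```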
[jira] [Commented] (KAFKA-3990) Kafka New Producer may raise an OutOfMemoryError
[ https://issues.apache.org/jira/browse/KAFKA-3990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15393625#comment-15393625 ]

Ismael Juma commented on KAFKA-3990:
------------------------------------

The buffer is allocated based on the size returned by the broker. It's unlikely that the broker would return such a big payload to the producer, so perhaps the message got corrupted on the way? Is there anything of interest in the broker logs?
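One concrete way the reported value can arise, worth noting alongside the corruption hypothesis: the length prefix is just four bytes interpreted as a big-endian int, and the ASCII bytes "HTTP" decode to exactly 1213486160 (0x48545450). The demo below only shows the arithmetic; whether a non-Kafka reply (e.g. from an HTTP endpoint or an intermediate proxy) was actually involved here is a hypothesis not confirmed in this ticket.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Demo: the wire protocol's 4-byte length prefix is a big-endian int,
// so whatever four bytes arrive first are taken as the frame size.
public class ReceiveSizeDecode {
    static int decodeSize(byte[] firstFourBytes) {
        return ByteBuffer.wrap(firstFourBytes).getInt();
    }

    public static void main(String[] args) {
        // The ASCII bytes of "HTTP" (0x48 0x54 0x54 0x50) decode to the
        // exact allocation size seen in the reporter's logs.
        int size = decodeSize("HTTP".getBytes(StandardCharsets.US_ASCII));
        System.out.println(size); // prints 1213486160
    }
}
```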