[
https://issues.apache.org/jira/browse/KAFKA-14088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17572413#comment-17572413
]
microle.dong edited comment on KAFKA-14088 at 7/28/22 12:20 PM:
----------------------------------------------------------------
[~jackin853]
It can be avoided by using SASL.
These abnormal packets are not buffered by Kafka when Kerberos is used; the broker rejects them instead:
{code:java}
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = -2147483608)
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:102)
    at org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:242)
    at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:127)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:487)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:425)
    at kafka.network.Processor.poll(SocketServer.scala:808)
    at kafka.network.Processor.run(SocketServer.scala:712)
    at java.lang.Thread.run(Thread.java:748){code}
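For reference, the "size" in that exception is simply the first four bytes arriving on the socket decoded as a big-endian int, so any non-Kafka traffic hitting the listener produces nonsense values. A minimal standalone sketch (plain JDK only, not Kafka code; the class name is made up) showing how four garbage bytes decode to exactly the negative value in the log above:
{code:java}
import java.nio.ByteBuffer;

// Minimal sketch (not Kafka's implementation): decode the first four bytes of
// an incoming payload as a big-endian size prefix, as a length-prefixed
// protocol would.
public class SizePrefixDemo {
    public static void main(String[] args) {
        // Arbitrary non-Kafka bytes arriving on the listener; 0x80 00 00 28
        // decodes to -2147483608, the value reported in the log above.
        byte[] garbage = new byte[] {(byte) 0x80, 0x00, 0x00, 0x28};
        int declaredSize = ByteBuffer.wrap(garbage).getInt();
        System.out.println("declared frame size = " + declaredSize); // -2147483608

        // A negative (or absurdly large) declared size is not a valid frame,
        // so the broker fails the connection instead of buffering the payload.
    }
}
{code}
As the trace above shows, on a SASL/Kerberos listener this bogus size is seen on the authenticator's read path and the connection fails with InvalidReceiveException before any large buffer is held.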
> KafkaChannel memory leak
> ------------------------
>
> Key: KAFKA-14088
> URL: https://issues.apache.org/jira/browse/KAFKA-14088
> Project: Kafka
> Issue Type: Bug
> Components: network
> Affects Versions: 2.2.1
> Environment: Current system environment:
> kafka version: 2.2.1
> openjdk(openj9): jdk1.8
> Heap memory: 6.4GB
> MaxDirectSize: 8GB
> Total number of topics: about 150+, each with about 3 partitions
> Reporter: Gao Fei
> Priority: Minor
>
> The Kafka broker reports OutOfMemoryError: Java heap space and
> OutOfMemoryError: Direct buffer memory at the same time. A memory dump shows
> that the largest objects are KafkaChannel->NetworkReceive->HeapByteBuffer:
> there are about 4 such KafkaChannels, each holding around 1.5 GB, while the
> total heap allocation is only 6.4 GB.
> It is strange that a single KafkaChannel occupies so much heap memory. Isn't
> each batch request written to disk gradually by the RequestHandler threads?
> Normally this memory in KafkaChannel should be released continuously, but it
> is not.
> Why is there such a large HeapByteBuffer in a KafkaChannel, and what does it
> store? Shouldn't the socket communication here mostly use direct memory?
> Why is so much heap memory used instead, and why is it never released?
> The business data volume is not very large and differs per customer; some
> customers hit this OOM in their environments, while other customers with
> larger data volumes do not.
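> For illustration, a rough sketch of the general size-prefixed read pattern
> (plain JDK only, not Kafka's actual NetworkReceive code, and the class name
> is made up): the buffer is allocated up front from the size the peer
> declares, which is why one connection can pin a large heap buffer long
> before the payload has fully arrived. The MemoryPool$1.tryAllocate frame in
> the second trace below corresponds to the allocation step of this pattern.
> {code:java}
> import java.io.DataInputStream;
> import java.io.IOException;
> import java.net.Socket;
> import java.nio.ByteBuffer;
>
> // Rough sketch of a length-prefixed receive (not Kafka's implementation).
> public class SizePrefixedReceive {
>     // Read one frame: a 4-byte big-endian size, then that many payload bytes.
>     static ByteBuffer readFrame(Socket socket) throws IOException {
>         DataInputStream in = new DataInputStream(socket.getInputStream());
>         int size = in.readInt();                        // size claimed by the peer
>         ByteBuffer payload = ByteBuffer.allocate(size); // heap buffer of that size, up front
>         in.readFully(payload.array());                  // buffer stays referenced until the payload completes
>         return payload;
>     }
> }
> {code}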
> java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:693)
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:174)
> at sun.nio.ch.IOUtil.read(IOUtil.java:195)
> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
> at org.apache.kafka.common.network.PlaintextTransportLayer.read(PlaintextTransportLayer.java:103)
> at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:117)
> at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424)
> at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385)
> at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651)
> at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
> at kafka.network.Processor.poll(SocketServer.scala:863)
> at kafka.network.Processor.run(SocketServer.scala:762)
> at java.lang.Thread.run(Thread.java:745)
> java.lang.OutOfMemoryError: Java heap space
> at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
> at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
> at org.apache.kafka.common.MemoryPool$1.tryAllocate(MemoryPool.java:30)
> at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:112)
> at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424)
> at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385)
> at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651)
> at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
> at kafka.network.Processor.poll(SocketServer.scala:863)
> at kafka.network.Processor.run(SocketServer.scala:762)
> at java.lang.Thread.run(Thread.java:745)