The error stack trace is as follows:
Caused by: java.lang.OutOfMemoryError: Direct buffer memory
	at java.nio.Bits.reserveMemory(Bits.java:693)
	at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
	at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
	at sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:241)
	at sun.nio.ch.IOUtil.read(IOUtil.java:195)
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
	at org.apache.kafka.common.network.PlaintextTransportLayer.read(PlaintextTransportLayer.java:110)
	at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:97)
	at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
	at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:169)
	at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:150)
	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:355)
	at org.apache.kafka.common.network.Selector.poll(Selector.java:303)
	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:349)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:226)
	at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1047)
	at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)


Versions
  Flink: 1.9.1
  kafka-client: 0.10.0.1
Environment
  on YARN
JVM arguments
  -Xms14336m
  -Xmx14336m
  -XX:MaxDirectMemorySize=6144m
flink-conf.yaml
  using the default parameters
  Streaming job, not using RocksDB
 
My initial suspicion is that Flink's own off-heap memory footprint is too large, so the Kafka consumer cannot allocate direct memory and hits the OOM. But going by the configuration described in the official documentation:
taskmanager.memory.fraction=0.7 (this should not take effect in my job, since it is a streaming job without RocksDB)
taskmanager.network.memory.fraction=0.1

With this configuration, the off-heap memory available to user code should be 6144m * 0.9 = 5529.6m.
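The arithmetic above can be written as a tiny self-check (the class and variable names here are my own illustration, not any Flink API):

```java
public class DirectMemoryBudget {
    public static void main(String[] args) {
        long maxDirectMemoryMb = 6144;   // -XX:MaxDirectMemorySize=6144m
        double networkFraction = 0.1;    // taskmanager.network.memory.fraction
        // Direct memory I expect to remain for user code,
        // including the Kafka consumer's socket read buffers:
        double userDirectMb = maxDirectMemoryMb * (1 - networkFraction);
        System.out.println(userDirectMb + " MB");  // 5529.6 MB
    }
}
```

If this reasoning is right, ~5.4 GB of direct memory should have been available to the consumer, which makes the OOM surprising.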
My questions are:
1. In my current environment, is there a Flink off-heap memory configuration I have overlooked, or does Flink itself need off-heap memory that I am not aware of?
2. Apart from throttling the Kafka consumer's traffic, is there any other way to tune this?
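As a diagnostic step (not a fix), I was planning to dump the JVM's direct buffer pool statistics from inside the TaskManager using the standard BufferPoolMXBean, roughly like this sketch:

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;

public class DirectBufferStats {
    public static void main(String[] args) {
        // The platform exposes one MXBean per pool; the "direct" pool backs
        // ByteBuffer.allocateDirect, which is what the Kafka consumer uses.
        List<BufferPoolMXBean> pools =
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
        for (BufferPoolMXBean pool : pools) {
            System.out.println(pool.getName()
                    + ": used=" + pool.getMemoryUsed()
                    + " bytes, capacity=" + pool.getTotalCapacity()
                    + " bytes, count=" + pool.getCount());
        }
    }
}
```

Comparing the "direct" pool's capacity against MaxDirectMemorySize just before the OOM should show who is actually consuming the budget.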


Best
Aven
