??????????????????????, ????????????;
taskmanager.memory.task.off-heap.size ????????taskmanager??????????
streamTableEnv.getConfig().getConfiguration().setString(key, value);
????????????????????????
------------------ ???????? ------------------
??????:
"user-zh"
<[email protected]>;
????????: 2020??11??16??(??????) ????7:14
??????: "[email protected]"<[email protected]>;
????: ????: flink-1.11.2 ?? ????????????
???????????????????????? taskmanager.memory.task.off-heap.size
???????????????? ????????????????????
streamTableEnv.getConfig().getConfiguration().setString(key, value);
________________________________
??????: Xintong Song <[email protected]>
????????: 2020??11??16?? 10:59
??????: user-zh <[email protected]>
????: Re: flink-1.11.2 ?? ????????????
???????????????????????????????????? job ?????? direct ????????????
?????????????????????????????? `taskmanager.memory.task.off-heap.size`
??????????
?????? TM ?????????????????????????? job ?????? direct ??????
Thank you~
Xintong Song
On Mon, Nov 16, 2020 at 6:38 PM ?? ???? <[email protected]> wrote:
> flink-on-yarn . per-job????????????kafka??group.id
>
????????????????offset????????????????????????????????????????????????????????????????
> ________________________________
> ??????: Xintong Song <[email protected]>
> ????????: 2020??11??16?? 10:11
> ??????: user-zh <[email protected]>
> ????: Re: flink-1.11.2 ?? ????????????
>
> ??????????????????standalone??
> ?????????????????????????????????????????????????? TM
???????????????????????????? TM??
>
> Thank you~
>
> Xintong Song
>
>
>
> On Mon, Nov 16, 2020 at 5:53 PM ?? ???? <[email protected]>
wrote:
>
> > ????????rocksdb, ????????5??1??tm, 5??slot??tm ??????
> >
10G????????????????????????????????????????????????????????????????????????????????????????offset????????????????????????????
> >
> > 2020-11-16 17:44:52
> > java.lang.OutOfMemoryError: Direct buffer memory. The direct
> out-of-memory
> > error has occurred. This can mean two things: either job(s)
require(s) a
> > larger size of JVM direct memory or there is a direct memory leak. The
> > direct memory can be allocated by user code or some of its
dependencies.
> In
> > this case 'taskmanager.memory.task.off-heap.size' configuration option
> > should be increased. Flink framework and its dependencies also consume
> the
> > direct memory, mostly for network communication. The most of network
> memory
> > is managed by Flink and should not result in out-of-memory error. In
> > certain special cases, in particular for jobs with high parallelism,
the
> > framework may require more direct memory which is not managed by
Flink.
> In
> > this case 'taskmanager.memory.framework.off-heap.size' configuration
> option
> > should be increased. If the error persists then there is probably a
> direct
> > memory leak in user code or some of its dependencies which has to be
> > investigated and fixed. The task executor has to be shutdown...
> > at java.nio.Bits.reserveMemory(Bits.java:658)
> > at
java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
> > at
java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> > at
sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:174)
> > at sun.nio.ch.IOUtil.read(IOUtil.java:195)
> > at
sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
> > at
> >
>
org.apache.flink.kafka011.shaded.org.apache.kafka.common.network.PlaintextTransportLayer.read(PlaintextTransportLayer.java:109)
> > at
> >
>
org.apache.flink.kafka011.shaded.org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:101)
> > at
> >
>
org.apache.flink.kafka011.shaded.org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:75)
> > at
> >
>
org.apache.flink.kafka011.shaded.org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:203)
> > at
> >
>
org.apache.flink.kafka011.shaded.org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:167)
> > at
> >
>
org.apache.flink.kafka011.shaded.org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:381)
> > at
> >
>
org.apache.flink.kafka011.shaded.org.apache.kafka.common.network.Selector.poll(Selector.java:326)
> > at
> >
>
org.apache.flink.kafka011.shaded.org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:433)
> > at
> >
>
org.apache.flink.kafka011.shaded.org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:232)
> > at
> >
>
org.apache.flink.kafka011.shaded.org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:208)
> > at
> >
>
org.apache.flink.kafka011.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1096)
> > at
> >
>
org.apache.flink.kafka011.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
> > at
> >
>
org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerThread.getRecordsFromKafka(KafkaConsumerThread.java:535)
> > at
> >
>
org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerThread.run(KafkaConsumerThread.java:264)
> >
> >
> >
>