[
https://issues.apache.org/jira/browse/FLINK-15981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17034102#comment-17034102
]
Xintong Song commented on FLINK-15981:
--------------------------------------
Thanks for creating this ticket, [~lzljs3620320].
+1 for obtaining memory from network buffer pool.
I think the alternative of limiting the number of partitions read concurrently
probably reduces the chance of a direct memory OOM. Even so, these read
buffers are still not accounted for in {{-XX:MaxDirectMemorySize}}. It would be
good to account for these read buffers in something that is already covered by
{{-XX:MaxDirectMemorySize}}, i.e. the network buffer pool.
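The two ideas (drawing read buffers from a shared, pre-accounted pool, and bounding how many readers hold a buffer at once) can be sketched together in plain Java. This is a generic illustration only, not Flink's actual NetworkBufferPool API; the class and method names are hypothetical:

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/**
 * Sketch of a fixed-size pool of direct read buffers. All direct memory is
 * allocated once at startup, so the footprint is known up front instead of
 * growing with the number of FileBufferReader instances.
 */
public class PooledReadBuffers {
    private final BlockingQueue<ByteBuffer> pool;

    public PooledReadBuffers(int numBuffers, int bufferSize) {
        pool = new ArrayBlockingQueue<>(numBuffers);
        for (int i = 0; i < numBuffers; i++) {
            // Allocated eagerly: the total is capped at numBuffers * bufferSize.
            pool.add(ByteBuffer.allocateDirect(bufferSize));
        }
    }

    /**
     * Blocks when all buffers are in use, which naturally limits the number
     * of subpartitions being read at the same time.
     */
    public ByteBuffer take() {
        try {
            return pool.take();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted while waiting for a read buffer", e);
        }
    }

    public void recycle(ByteBuffer buffer) {
        buffer.clear();
        pool.add(buffer);
    }

    public int available() {
        return pool.size();
    }
}
```

A reader would {{take()}} a buffer before reading a subpartition and {{recycle()}} it afterwards; exhausting the pool makes further readers wait rather than allocate.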
> Control the direct memory in FileChannelBoundedData.FileBufferReader
> --------------------------------------------------------------------
>
> Key: FLINK-15981
> URL: https://issues.apache.org/jira/browse/FLINK-15981
> Project: Flink
> Issue Type: Improvement
> Components: Runtime / Network
> Affects Versions: 1.10.0
> Reporter: Jingsong Lee
> Priority: Critical
> Fix For: 1.10.1, 1.11.0
>
>
> Currently, the default blocking BoundedData implementation is
> FileChannelBoundedData. Its reader creates a new 64KB direct buffer.
> When parallelism is greater than 100, users need to configure
> "taskmanager.memory.task.off-heap.size" to avoid a direct memory OOM. This is
> hard to configure and costs a lot of memory: with a parallelism of 1000, a
> task manager may need 1GB+.
> This is not conducive to scenarios with few slots and large parallelism.
> Batch jobs could run little by little, but these read buffers would still
> consume a lot of memory.
> If we provide N-input operators, things may get worse: the number of
> subpartitions that can be requested at the same time will be even larger,
> and we have no idea how much memory that requires.
> Here are my rough thoughts:
> * Obtain memory from the network buffer pool.
> * Provide a configuration for "the maximum number of subpartitions that can
> be requested at the same time".
--
This message was sent by Atlassian Jira
(v8.3.4#803005)