[
https://issues.apache.org/jira/browse/FLINK-9904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804879#comment-16804879
]
Till Rohrmann commented on FLINK-9904:
--------------------------------------
Yes, please provide a fix for this problem [~hroongta].
> Allow users to control MaxDirectMemorySize
> ------------------------------------------
>
> Key: FLINK-9904
> URL: https://issues.apache.org/jira/browse/FLINK-9904
> Project: Flink
> Issue Type: Improvement
> Components: Runtime / Coordination
> Affects Versions: 1.4.2, 1.5.1
> Reporter: Himanshu Roongta
> Priority: Minor
>
> For people who use the Docker image and run Flink in pods, there is currently
> no way to set
> {{MaxDirectMemorySize}}
> (well, one can create a custom version of
> [taskmanager.sh|https://github.com/apache/flink/blob/master/flink-dist/src/main/flink-bin/bin/taskmanager.sh]).
>
> As a result, the task manager starts with a value of 8388607T. If
> {{taskmanager.memory.preallocate}} is set to false (the default), direct memory
> is only cleaned up once the MaxDirectMemorySize limit is hit and a full GC
> cycle kicks in. With pods, especially on Kubernetes, the process gets killed
> long before that, because pods are not given anywhere near that much memory
> (in our case we run 8 GB per pod; a minimal sketch of this behaviour follows
> below).
>
> The fix would be to make this configurable via {{flink-conf.yaml}}. We can keep
> the default of 8388607T to avoid a breaking change.
>
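For reference, a minimal, self-contained sketch of the behaviour described in the issue. This is plain JDK code, not Flink code; the class name and buffer size are made up for illustration:

{code:java}
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Run with, e.g.:  java -XX:MaxDirectMemorySize=128m DirectMemoryDemo
public class DirectMemoryDemo {
    public static void main(String[] args) {
        List<ByteBuffer> buffers = new ArrayList<>();
        try {
            while (true) {
                // Each call reserves 16 MB of native (off-heap) memory that is
                // only released when the owning buffer object is garbage-collected.
                buffers.add(ByteBuffer.allocateDirect(16 * 1024 * 1024));
            }
        } catch (OutOfMemoryError e) {
            // With a bounded -XX:MaxDirectMemorySize the JVM fails fast here with
            // "Direct buffer memory". With the effectively unlimited 8388607T
            // default, a Kubernetes pod's memory limit is reached first and the
            // process is killed externally, with no JVM-level error at all.
            System.out.println("Hit direct memory limit after " + buffers.size() + " buffers");
        }
    }
}
{code}

With a bounded {{MaxDirectMemorySize}}, the failure surfaces as a JVM {{OutOfMemoryError}} that can be diagnosed from the logs, whereas with the current default the pod is simply killed by the container runtime.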
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)