[
https://issues.apache.org/jira/browse/FLINK-9904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Chesnay Schepler closed FLINK-9904.
-----------------------------------
Resolution: Duplicate
Subsumed by FLIP-49/FLIP-116.
> Allow users to control MaxDirectMemorySize
> ------------------------------------------
>
> Key: FLINK-9904
> URL: https://issues.apache.org/jira/browse/FLINK-9904
> Project: Flink
> Issue Type: Improvement
> Components: Deployment / Scripts
> Affects Versions: 1.4.2, 1.5.1, 1.7.2, 1.8.0, 1.9.0
> Reporter: Himanshu Roongta
> Assignee: Ji Liu
> Priority: Minor
> Labels: pull-request-available
> Time Spent: 1.5h
> Remaining Estimate: 0h
>
> For people who use the Docker image and run Flink in pods, there is currently
> no way to configure
> {{MaxDirectMemorySize}}
> (short of maintaining a custom version of
> [taskmanager.sh|https://github.com/apache/flink/blob/master/flink-dist/src/main/flink-bin/bin/taskmanager.sh]).
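>
> For illustration, such a custom copy of the script would have to override the
> hard-coded cap itself. A minimal sketch (the {{FLINK_TM_MAX_OFFHEAP_SIZE}}
> environment variable is invented here for the example; only
> {{-XX:MaxDirectMemorySize}} is a real JVM flag):
> {code:bash}
> # Hypothetical patch in a custom taskmanager.sh: let an environment variable
> # override the hard-coded 8388607T direct-memory cap.
> TM_MAX_OFFHEAP_SIZE="${FLINK_TM_MAX_OFFHEAP_SIZE:-8388607T}"
> export JVM_ARGS="${JVM_ARGS} -XX:MaxDirectMemorySize=${TM_MAX_OFFHEAP_SIZE}"
> {code}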
>
> As a result, the TaskManager starts with a value of 8388607T. If
> {{taskmanager.memory.preallocate}} is set to false (the default), direct memory
> is only cleaned up once the {{MaxDirectMemorySize}} limit is hit and a full GC
> cycle kicks in. However, pods, especially on Kubernetes, will get killed long
> before that because they run with a much lower memory limit (in our case 8GB per pod).
>
> The fix would be to make it configurable via {{flink-conf}}, as sketched below. We can
> keep a default of 8388607T to avoid a breaking change.
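>
> A minimal sketch of what the knob could look like in {{flink-conf.yaml}} (the
> key name used here is hypothetical, not an existing Flink option):
> {code}
> # Hypothetical option; falling back to 8388607T when unset would keep today's behaviour.
> taskmanager.memory.max-direct-size: 2g
> {code}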
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)