[ 
https://issues.apache.org/jira/browse/FLINK-8809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16415388#comment-16415388
 ] 

Kirill A. Korinskiy commented on FLINK-8809:
--------------------------------------------

[~greghogan] I get your point, thanks for the explanation, and I agree that changing 
a default value on customer request is a bad idea.

Meanwhile, I'd like to point out that it is not only Flink's managed memory 
segments that come out of this allocation. Direct memory is also used by any 
consumer or producer that relies on NIO, for example the Kafka producer, which 
uses NIO for network communication. If I need to process a large burst of 
messages at peak (1 GB and bigger), this can become an issue and a reason for 
the OOM Killer to kill the whole JVM. The job then gets restarted, and the next 
peak may trigger the OOM Killer again ;)
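To illustrate the point above, here is a minimal sketch (not from the issue, just an illustration with an arbitrary buffer size) of how NIO direct buffers take memory outside the Java heap, where it counts against -XX:MaxDirectMemorySize rather than -Xmx:

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // allocateDirect reserves native memory outside the Java heap; this
        // allocation is bounded by -XX:MaxDirectMemorySize (which defaults to
        // the -Xmx value), not by the heap limit itself.
        ByteBuffer buf = ByteBuffer.allocateDirect(8 * 1024 * 1024); // 8 MiB off-heap

        buf.putInt(0, 42);
        System.out.println("direct buffer? " + buf.isDirect()); // prints "direct buffer? true"
        System.out.println("value: " + buf.getInt(0));          // prints "value: 42"
    }
}
```

Exceeding the limit raises java.lang.OutOfMemoryError: Direct buffer memory inside the JVM, but native allocations that grow the process footprint can also draw the attention of the operating system's OOM Killer, which is the failure mode described above.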

> Decrease maximum value of DirectMemory at default config
> --------------------------------------------------------
>
>                 Key: FLINK-8809
>                 URL: https://issues.apache.org/jira/browse/FLINK-8809
>             Project: Flink
>          Issue Type: Bug
>          Components: TaskManager
>            Reporter: Kirill A. Korinskiy
>            Priority: Major
>
> Good day!
>  
> As I can see, since this 
> [commit|https://github.com/apache/flink/commit/6c44d93d0a9da725ef8b1ad2a94889f79321db73]
>  the TaskManager uses 8,388,607 terabytes as the maximum off-heap memory. I guess 
> that no system has so much memory, and it may be a reason for the OOM Killer 
> to kill the Java process.
>  
> I suggest decreasing this value to a reasonable default.
>  
> Right now I see only one way to override this hardcoded value: set 
> FLINK_TM_HEAP to 0, and specify the heap size by hand via 
> FLINK_ENV_JAVA_OPTS_TM. 
> Thanks
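The workaround quoted above can be sketched as environment configuration set before starting the TaskManager. FLINK_TM_HEAP and FLINK_ENV_JAVA_OPTS_TM come from the issue; the concrete sizes below are assumptions for illustration only:

```shell
# Disable the script-computed heap size (from the issue's workaround)...
export FLINK_TM_HEAP=0

# ...and pass heap and direct-memory limits by hand. The 2g/1g values
# here are illustrative assumptions, not recommendations.
export FLINK_ENV_JAVA_OPTS_TM="-Xms2g -Xmx2g -XX:MaxDirectMemorySize=1g"
```

Capping -XX:MaxDirectMemorySize explicitly bounds the total process footprint, which is the point of the issue: without it, direct memory can grow until the OS OOM Killer intervenes.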



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
