[ https://issues.apache.org/jira/browse/FLINK-29178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17599278#comment-17599278 ]

Martijn Visser commented on FLINK-29178:
----------------------------------------

[~zhangyang93] This question is better suited for the Flink mailing list or 
Slack channel, since Jira is reserved for (confirmed) bugs. Can you ask your 
question there?

> flink-on-yarn out-of-memory
> ----------------------------
>
>                 Key: FLINK-29178
>                 URL: https://issues.apache.org/jira/browse/FLINK-29178
>             Project: Flink
>          Issue Type: Bug
>          Components: API / DataStream, Deployment / YARN
>    Affects Versions: 1.14.2
>         Environment: thread num: 24*2
>                      Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
>                      mem: 64 GB, 1333 MHz
>                      disk size: 1497 GB, RW: 6.0 Gb/s (600 MB/s)
>                      NIC: BCM5709*4
>                      Java build 1.8.0_281-b09
>            Reporter: zhangyang
>            Priority: Major
>         Attachments: flink.gc_05.log.0 (2).current, job_manager.txt
>
>
> Hello,
> My job fails with an "Out Of Memory" error after running for 3 hours. The 
> cluster version is 1.14.2 and the TaskManager memory allocation is 2g. The 
> attached file is the JVM heap memory analysis log, in which the Flink 
> framework is involved. Asking for help, thanks!
> [^flink.gc_05.log.0 (2).current]
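
A minimal sketch for orientation, not part of the original report: it only shows the Flink configuration option that the "TaskManager memory allocation is 2g" statement above normally refers to (taskmanager.memory.process.size). The class name and printed output are hypothetical; on a YARN deployment this option is usually set in flink-conf.yaml or passed with -D at submission time rather than in code.

    // Sketch only: prints the config key that governs total TaskManager process
    // memory (heap + managed + network/off-heap + JVM metaspace and overhead).
    // The 2g value mirrors the figure quoted in the report; it is not a recommendation.
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.configuration.MemorySize;
    import org.apache.flink.configuration.TaskManagerOptions;

    public class TaskManagerMemorySketch {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            conf.set(TaskManagerOptions.TOTAL_PROCESS_MEMORY, MemorySize.parse("2g"));
            System.out.println(TaskManagerOptions.TOTAL_PROCESS_MEMORY.key()
                    + " = " + conf.get(TaskManagerOptions.TOTAL_PROCESS_MEMORY));
        }
    }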



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
