[ https://issues.apache.org/jira/browse/FLINK-10928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691632#comment-16691632 ]

Daniel Harper edited comment on FLINK-10928 at 11/19/18 12:20 PM:
------------------------------------------------------------------

h1. Why does YARN kill the containers with out-of-memory errors?

We run Flink on EMR with the following memory settings:

{code}
--taskManagerMemory 6500 
--jobManagerMemory 6272 
--detached 
-Dcontainerized.heap-cutoff-ratio=0.15 
-Dclassloader.resolve-order=parent-first 
-Dparallelism.default=32 
-Dstate.backend=filesystem 
-Dyarn.maximum-failed-containers=-1 
-Djobmanager.web.checkpoints.history=1000 
"-Dakka.ask.timeout=60 s" 
"-Denv.java.opts=-Xloggc:/var/log/hadoop-yarn/flink_gc_$(basename <LOG_DIR> | 
egrep -o 'container_[0-9]+_[0-9]+')_%p.log -XX:GCLogFileSize=200M 
-XX:NumberOfGCLogFiles=10 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps 
-XX:+PrintTenuringDistribution -XX:+PrintGCCause -XX:+PrintGCDateStamps 
-XX:+UseG1GC" 
-Dstate.backend.fs.checkpointdir=s3a://...
-Dstate.checkpoints.dir=s3a://...
-Dstate.savepoints.dir=s3a://...
{code}

Through YARN we can see that each container is allocated 6528 MB (heap 4995 MB, 
off-heap 1533 MB).
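
As a sanity check, this split lines up with the {{-Xmx4995m}} and {{-XX:MaxDirectMemorySize=1533m}} flags in the container dump quoted further down. A minimal sketch of the arithmetic, assuming the 0.15 cutoff ratio is applied to the 6528 MB container and a default 0.1 network buffer fraction is then taken out of the remainder (both are assumptions on my part, they just happen to reproduce the numbers exactly):

{code:java}
// Rough sketch of where -Xmx4995m and -XX:MaxDirectMemorySize=1533m appear to come from.
// Assumptions: 0.15 heap cutoff applied to the 6528 MB container, then a 0.1 network
// buffer fraction carved out of the remainder.
public class MemorySplitSketch {
    public static void main(String[] args) {
        long containerMb = 6528;
        long cutoffMb  = (long) (containerMb * 0.15);             // 979 MB "off heap" cutoff
        long networkMb = (long) ((containerMb - cutoffMb) * 0.1); // 554 MB network buffers
        long heapMb    = containerMb - cutoffMb - networkMb;      // 4995 MB -> -Xmx4995m
        long directMb  = cutoffMb + networkMb;                    // 1533 MB -> -XX:MaxDirectMemorySize=1533m
        System.out.printf("heap=%dm direct=%dm%n", heapMb, directMb);
    }
}
{code}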

The question is, why does the YARN container get killed after a few restarts? 

One avenue I investigated was restricting the Hadoop S3 connection pool size to 
force the job to restart.
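
For illustration only, a minimal sketch of how the S3A pool can be shrunk so that concurrent checkpoint uploads exhaust it and surface the "Timeout waiting for connection from pool" exception (the key names are standard Hadoop S3A settings; the values here are arbitrary, not necessarily what we used):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class SmallS3aPool {
    // Returns a Hadoop configuration with a deliberately tiny S3A connection pool,
    // so parallel checkpoint uploads quickly hit
    // "Timeout waiting for connection from pool" and trigger a job restart.
    public static Configuration restrictedS3aConf() {
        Configuration conf = new Configuration();
        conf.setInt("fs.s3a.connection.maximum", 2);    // default is much larger
        conf.setInt("fs.s3a.connection.timeout", 5000); // milliseconds
        return conf;
    }
}
{code}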

h2. Simulating restarts on TEST

After deploying this to TEST, we observed the following on one of the task 
managers by connecting via JMX:

* Upon each restart, the metaspace size and the number of loaded classes increased.
* Prior to YARN killing the container, the job was restarting roughly every 30 
seconds, which seemed to accelerate the growth in metaspace usage (see the sketch 
after this list).
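
The same two data points can be polled without JVisualVM via the standard JMX beans; a minimal sketch (run inside the task manager JVM, or adapt it to a remote JMX connection):

{code:java}
import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MetaspaceProbe {
    public static void main(String[] args) throws InterruptedException {
        ClassLoadingMXBean classes = ManagementFactory.getClassLoadingMXBean();
        while (true) {
            // The HotSpot memory pool named "Metaspace" is what JVisualVM graphs.
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                if ("Metaspace".equals(pool.getName())) {
                    System.out.printf("metaspace used=%d MB, loaded classes=%d%n",
                            pool.getUsage().getUsed() / (1024 * 1024),
                            classes.getLoadedClassCount());
                }
            }
            Thread.sleep(30_000); // roughly the observed restart interval
        }
    }
}
{code}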

Screenshots from JVisualVM are shown below.

heap
!Screen Shot 2018-11-16 at 15.49.15.png! 

metaspace
!Screen Shot 2018-11-16 at 15.49.03.png!  

h2. Is this a problem?

This is what we are not sure about. 

Is it possible for the task manager to allocate memory outside of the 'off 
heap' allocation (i.e. beyond what -XX:MaxDirectMemorySize covers), which would 
cause YARN to kill the container?

The metaspace size is currently unbounded, so my assumption is that this is the 
cause, but I'm happy to be corrected.

I noticed there is a ticket, FLINK-10317, about setting an upper bound on the 
metaspace size, but it looks like there's some concern about what that bound 
should be.
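
For what it's worth, a stand-alone way to see the same failure mode is to keep defining the same class in fresh classloaders, which is roughly what repeated restarts appear to do with the user-code classloader (my assumption). Run the sketch below with something like -XX:MaxMetaspaceSize=64m and the process dies with java.lang.OutOfMemoryError: Metaspace instead of being killed by YARN, which is essentially the trade-off FLINK-10317 is discussing:

{code:java}
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.List;

// Hypothetical repro of unbounded metaspace growth: each loader has no parent,
// so there is no delegation and every loader defines its own copy of this class.
// Keeping the loaders referenced prevents class unloading, so metaspace only grows.
public class MetaspaceLeakSketch {
    public static void main(String[] args) throws Exception {
        URL[] ownClasspath = { MetaspaceLeakSketch.class
                .getProtectionDomain().getCodeSource().getLocation() };
        List<ClassLoader> leaked = new ArrayList<>();
        for (int i = 0; ; i++) {
            URLClassLoader loader = new URLClassLoader(ownClasspath, null);
            loader.loadClass("MetaspaceLeakSketch");
            leaked.add(loader); // simulate a reference surviving the "restart"
            if (i % 1_000 == 0) {
                System.out.println("class copies defined: " + i);
            }
        }
    }
}
{code}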



> Job unable to stabilise after restart 
> --------------------------------------
>
>                 Key: FLINK-10928
>                 URL: https://issues.apache.org/jira/browse/FLINK-10928
>             Project: Flink
>          Issue Type: Bug
>         Environment: AWS EMR 5.17.0
> FLINK 1.5.2
> BEAM 2.7.0
>            Reporter: Daniel Harper
>            Priority: Major
>         Attachments: Screen Shot 2018-11-16 at 15.49.03.png, Screen Shot 
> 2018-11-16 at 15.49.15.png
>
>
> We've seen a few instances of this occurring in production now (it's 
> difficult to reproduce) but essentially we've seen the following sequence of 
> events: 
> 1. Job restarts due to exception
> 2. Job restores from a checkpoint but we get the exception
> {code}
> Caused by: com.amazonaws.SdkClientException: Unable to execute HTTP request: Timeout waiting for connection from pool
> {code}
> 3. Job restarts
> 4. Job restores from a checkpoint but we get the same exception
> .... repeat a few times within 2-3 minutes....
> 5. YARN kills containers with out of memory
> {code}
> 2018-11-14 00:16:04,430 INFO  org.apache.flink.yarn.YarnResourceManager                     - Closing TaskExecutor connection container_1541433014652_0001_01_000716 because: Container [pid=7725,containerID=container_1541433014652_0001_01_000716] is running beyond physical memory limits. Current usage: 6.4 GB of 6.4 GB physical memory used; 8.4 GB of 31.9 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1541433014652_0001_01_000716 :
>         |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>         |- 7725 7723 7725 7725 (bash) 0 0 115863552 696 /bin/bash -c /usr/lib/jvm/java-openjdk/bin/java -Xms4995m -Xmx4995m -XX:MaxDirectMemorySize=1533m -Xloggc:/var/log/hadoop-yarn/flink_gc_container_1541433014652_0001_%p.log -XX:GCLogFileSize=200M -XX:NumberOfGCLogFiles=10 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -XX:+PrintGCCause -XX:+PrintGCDateStamps -XX:+UseG1GC -Dlog.file=/var/log/hadoop-yarn/containers/application_1541433014652_0001/container_1541433014652_0001_01_000716/taskmanager.log -Dlog4j.configuration=file:./log4j.properties org.apache.flink.yarn.YarnTaskExecutorRunner --configDir . 1> /var/log/hadoop-yarn/containers/application_1541433014652_0001/container_1541433014652_0001_01_000716/taskmanager.out 2> /var/log/hadoop-yarn/containers/application_1541433014652_0001/container_1541433014652_0001_01_000716/taskmanager.err
>         |- 7738 7725 7725 7725 (java) 6959576 976377 8904458240 1671684 /usr/lib/jvm/java-openjdk/bin/java -Xms4995m -Xmx4995m -XX:MaxDirectMemorySize=1533m -Xloggc:/var/log/hadoop-yarn/flink_gc_container_1541433014652_0001_%p.log -XX:GCLogFileSize=200M -XX:NumberOfGCLogFiles=10 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -XX:+PrintGCCause -XX:+PrintGCDateStamps -XX:+UseG1GC -Dlog.file=/var/log/hadoop-yarn/containers/application_1541433014652_0001/container_1541433014652_0001_01_000716/taskmanager.log -Dlog4j.configuration=file:./log4j.properties org.apache.flink.yarn.YarnTaskExecutorRunner --configDir .
>  
> Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143
> {code}
> 6. YARN allocates new containers, but the job is never able to get back into a 
> stable state, with constant restarts until eventually the job is cancelled.
> We've also seen FLINK-10848 occurring, with some task managers allocated but 
> sitting in an 'idle' state.


