[ https://issues.apache.org/jira/browse/YARN-4714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16726879#comment-16726879 ]
Andres Namm commented on YARN-4714:
-----------------------------------

I have put together another answer, which depends on whether you are running Spark in client or cluster mode.

* In cluster mode the job failed when I set *Driver Memory* (--driver-memory) to 512m. The default setting requested 2 GB of AM resources (driver memory plus the overhead requested for the Application Master), which was enough.
* In client mode the setting that mattered was spark.yarn.am.memory: by default it requests only 1024m for the AM, which is too little because Java 8 uses a lot of virtual memory. Values above 1024m seemed to work. A spark-submit sketch of both modes follows the quoted issue below.

The answer is described [here|https://github.com/AndresNamm/SparkConfAndDebugging/blob/master/Debug/SparkMemoryIssue.md].

> [Java 8] Over usage of virtual memory
> -------------------------------------
>
>                 Key: YARN-4714
>                 URL: https://issues.apache.org/jira/browse/YARN-4714
>             Project: Hadoop YARN
>          Issue Type: Bug
>            Reporter: Mohammad Kamrul Islam
>            Assignee: Mohammad Kamrul Islam
>            Priority: Blocker
>         Attachments: HADOOP-11364.01.patch
>
> In our Hadoop 2 + Java 8 effort, we found a few jobs being killed by Hadoop due to excessive virtual memory allocation, although the physical memory usage is low.
> The most common error message is "Container [pid=??,containerID=container_??] is running beyond virtual memory limits. Current usage: 365.1 MB of 1 GB physical memory used; 3.2 GB of 2.1 GB virtual memory used. Killing container."
> We see this problem for MR jobs as well as in the Spark driver/executor.
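For concreteness, a minimal spark-submit sketch of the two settings above, one per deploy mode. The application jar, the main class, and the 2g values are illustrative placeholders, not tested recommendations:

{code}
# Cluster mode: the driver runs inside the YARN Application Master, so
# --driver-memory (plus the AM memory overhead) sizes the AM container.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 2g \
  --class com.example.MyApp \
  my-app.jar

# Client mode: the driver runs locally and the AM container is sized by
# spark.yarn.am.memory instead; the 1024m default proved too small under
# Java 8, so request more.
spark-submit \
  --master yarn \
  --deploy-mode client \
  --conf spark.yarn.am.memory=2g \
  --class com.example.MyApp \
  my-app.jar
{code}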
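For reference, the "2.1 GB" ceiling in the error message is derived, not configured directly: YARN multiplies the container's physical memory allocation (1 GB here) by yarn.nodemanager.vmem-pmem-ratio, which defaults to 2.1. A hedged yarn-site.xml sketch of the two NodeManager properties commonly adjusted for this Java 8 behavior (the value 4 is illustrative):

{code}
<!-- yarn-site.xml, NodeManager side -->

<!-- Raise the virtual-to-physical memory ratio from its 2.1 default so a
     1 GB container may use up to 4 GB of virtual memory. -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>

<!-- Or skip the virtual-memory check entirely; the physical-memory check
     still applies. -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
{code}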