[ 
https://issues.apache.org/jira/browse/SPARK-4598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228646#comment-14228646
 ] 

meiyoula commented on SPARK-4598:
---------------------------------

[~joshrosen],

I used the GitHub master code from the last two days to test this, and just ran the 
SparkPi example with the default driver memory. This is the command: ./spark-submit 
--class org.apache.spark.examples.SparkPi --master yarn-client 
../lib/spark-examples*.jar 100000

When the application was running and had executed 50,000 tasks, I opened the 
stage page in the SparkUI, and the web UI shut down;
When the application was finished, I opened the stage page in the HistoryServer, and 
it shut down as well. Note that the HistoryServer memory was also left at its default value.
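For reference, a possible stopgap until pagination lands is to give the driver and the 
HistoryServer larger heaps. This is only a sketch, assuming the standard spark-submit 
--driver-memory option and the SPARK_DAEMON_MEMORY setting (normally placed in 
conf/spark-env.sh); the 4g value is purely illustrative, not a recommendation:

# rerun the example with a larger driver heap (4g here is illustrative)
./spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client \
  --driver-memory 4g ../lib/spark-examples*.jar 100000

# give the HistoryServer daemon a larger heap, then restart it
export SPARK_DAEMON_MEMORY=4g
../sbin/stop-history-server.sh
../sbin/start-history-server.sh

This only delays the OOM for larger task counts; it does not address the root cause, 
which is keeping and rendering every task row on a single stage page.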
  

> Paginate stage page to avoid OOM with > 100,000 tasks
> -----------------------------------------------------
>
>                 Key: SPARK-4598
>                 URL: https://issues.apache.org/jira/browse/SPARK-4598
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>            Reporter: meiyoula
>            Priority: Critical
>
> On the HistoryServer stage page, clicking the task link in the Description column 
> causes a GC error. The detailed error message is:
> 2014-11-17 16:36:30,851 | WARN  | [qtp1083955615-352] | Error for 
> /history/application_1416206401491_0010/stages/stage/ | 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:590)
> java.lang.OutOfMemoryError: GC overhead limit exceeded
> 2014-11-17 16:36:30,851 | WARN  | [qtp1083955615-364] | handle failed | 
> org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:697)
> java.lang.OutOfMemoryError: GC overhead limit exceeded


