[ https://issues.apache.org/jira/browse/SPARK-24414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16494083#comment-16494083 ]

Thomas Graves commented on SPARK-24414:
---------------------------------------

To reproduce, simply start a shell:

$SPARK_HOME/bin/spark-shell --num-executors 5 --master yarn --deploy-mode client

Run something that gets some task failures, but not all:

sc.parallelize(1 to 10000, 10).map { x =>
  if (SparkEnv.get.executorId.toInt >= 1 && SparkEnv.get.executorId.toInt <= 4)
    throw new RuntimeException("Bad executor")
  else
    (x % 3, x)
}.reduceByKey((a, b) => a + b).collect()

 

Go to the stages page and you will only see 10 tasks rendered, when it should 
have 21 total between succeeded and failed.
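As a side check (plain Scala, no Spark required — this snippet is not part of the original report), the result the job should eventually produce once retries land on the healthy executor can be computed directly, since the job just groups 1 to 10000 by x % 3 and sums each group:

```scala
// Compute the same reduceByKey result locally: group 1..10000 by x % 3,
// then sum the values within each key.
val expected = (1 to 10000)
  .groupBy(_ % 3)
  .map { case (k, xs) => (k, xs.sum) }
// expected: Map(0 -> 16668333, 1 -> 16671667, 2 -> 16665000)
```

The three sums total 50005000 (the sum of 1 to 10000), so the job itself completes correctly; only the UI's task-attempt rendering is wrong.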

> Stages page doesn't show all task attempts when failures
> --------------------------------------------------------
>
>                 Key: SPARK-24414
>                 URL: https://issues.apache.org/jira/browse/SPARK-24414
>             Project: Spark
>          Issue Type: Bug
>          Components: Web UI
>    Affects Versions: 2.3.0
>            Reporter: Thomas Graves
>            Priority: Critical
>
> If you have task failures, the StagePage doesn't render all the task attempts 
> properly.  It seems to make the table the size of the total number of 
> successful tasks rather than including all the failed tasks.
> Even though the table size is smaller, if you sort by various columns you can 
> see that all the tasks are actually there; it just seems the size of the 
> table is wrong.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)