[ https://issues.apache.org/jira/browse/SPARK-1529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924940#comment-16924940 ]

Zheng Shao commented on SPARK-1529:
-----------------------------------

For huge Spark jobs, we are seeing frequent failures in the shuffle phase: 
local disks filling up, or executors being unable to connect to the External 
Shuffle Service.

What makes this frustrating is that for these huge jobs, even a single task 
retry can take a very long time.

It would be great to have the option to write the temporary shuffle data to 
reliable storage, e.g. HDFS, but potentially other distributed file systems as well.

Maybe it should be a different Shuffle Manager implementation altogether.
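
As a rough illustration (not a concrete proposal): Spark already resolves 
spark.shuffle.manager to a ShuffleManager class at runtime, so a DFS-backed 
implementation could in principle be plugged in through configuration alone. 
The class name and the extra setting below are hypothetical, purely for 
illustration:

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: "com.example.shuffle.DfsShuffleManager" is a hypothetical class
// implementing org.apache.spark.shuffle.ShuffleManager; Spark instantiates
// whatever class spark.shuffle.manager points to.
val conf = new SparkConf()
  .setAppName("dfs-shuffle-sketch")
  .set("spark.shuffle.manager", "com.example.shuffle.DfsShuffleManager")
  // Hypothetical option such a manager might read for its shuffle root directory.
  .set("spark.shuffle.dfs.rootDir", "hdfs:///tmp/spark-shuffle")

val sc = new SparkContext(conf)
{code}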

Any thoughts?

> Support DFS based shuffle in addition to Netty shuffle
> ------------------------------------------------------
>
>                 Key: SPARK-1529
>                 URL: https://issues.apache.org/jira/browse/SPARK-1529
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>            Reporter: Patrick Wendell
>            Assignee: Kannan Rajah
>            Priority: Major
>         Attachments: Spark Shuffle using HDFS.pdf
>
>
> In some environments, like with MapR, local volumes are accessed through the 
> Hadoop filesystem interface. Shuffle is implemented by writing intermediate 
> data to local disk and serving it to remote nodes using Netty as a transport 
> mechanism. We want to provide an HDFS-based shuffle such that data can be 
> written to HDFS (instead of local disk) and served using the HDFS API on the 
> remote nodes. This could involve exposing a file system abstraction to Spark 
> shuffle and having two modes of running it: in the default mode, it writes to 
> local disk, and in the DFS mode, it writes to HDFS.
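
For what it's worth, here is a minimal sketch of the file system abstraction 
described in the issue, assuming Hadoop's FileSystem API is used so that the 
default and DFS modes differ only in the root URI scheme. The mode flag and 
file naming below are assumptions for illustration, not the attached design:

{code:scala}
import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// The shuffle writer only sees a FileSystem, so local disk vs. HDFS is just a
// matter of which root URI is configured (file:// vs. hdfs://).
def shuffleFileSystem(hadoopConf: Configuration, dfsMode: Boolean): (FileSystem, Path) = {
  val root = if (dfsMode) "hdfs:///tmp/spark-shuffle" else "file:///tmp/spark-shuffle"
  (FileSystem.get(URI.create(root), hadoopConf), new Path(root))
}

// Hypothetical naming scheme; a real implementation would follow Spark's
// shuffle block id conventions.
def writeShuffleBlock(fs: FileSystem, root: Path, shuffleId: Int, mapId: Int,
                      bytes: Array[Byte]): Path = {
  val out = new Path(root, s"shuffle_${shuffleId}_${mapId}.data")
  val stream = fs.create(out, true) // overwrite so a task retry replaces stale output
  try stream.write(bytes) finally stream.close()
  out
}
{code}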


