[ https://issues.apache.org/jira/browse/SPARK-6112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zhan Zhang updated SPARK-6112:
------------------------------
    Description: 
HDFS's LAZY_PERSIST storage policy makes it possible to cache RDDs off-heap in 
HDFS. We may want to provide a capability similar to Tachyon's by leveraging 
the HDFS RAM_DISK feature, for user environments that do not have Tachyon 
deployed. 
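A minimal sketch of what the HDFS side could look like, assuming a Hadoop 2.6+ 
client where DistributedFileSystem.setStoragePolicy is available; the 
directory and block file name below are hypothetical, not part of this 
proposal:

{code:scala}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.hdfs.DistributedFileSystem

object LazyPersistSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical HDFS directory for off-heap RDD blocks.
    val dir = new Path("hdfs:///tmp/spark_offheap")
    dir.getFileSystem(new Configuration()) match {
      case dfs: DistributedFileSystem =>
        if (!dfs.exists(dir)) dfs.mkdirs(dir)
        // LAZY_PERSIST places the single replica on RAM_DISK first and
        // flushes it to persistent storage asynchronously.
        dfs.setStoragePolicy(dir, "LAZY_PERSIST")
        // Write a (hypothetical) serialized RDD block under that policy.
        val out = dfs.create(new Path(dir, "rdd_0_0"))
        try out.write("serialized block bytes".getBytes("UTF-8"))
        finally out.close()
      case other =>
        sys.error(s"LAZY_PERSIST requires HDFS, got ${other.getClass.getName}")
    }
  }
}
{code}

The same policy can also be set from the command line with 
hdfs storagepolicies -setStoragePolicy -path /tmp/spark_offheap -policy LAZY_PERSIST.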

This feature could also make it possible to share in-memory RDDs across 
different jobs, and even with non-Spark jobs, and to avoid RDD recomputation 
when executors crash. 
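On the Spark side, the user-facing hook would presumably remain the existing 
OFF_HEAP storage level, with only the backing store swapped out. A sketch 
against the current (Tachyon-backed) API; the app name is hypothetical:

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

object OffHeapDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("offheap-demo"))
    val rdd = sc.parallelize(1 to 1000000)
    // OFF_HEAP keeps cached blocks outside the executor JVM heap; in current
    // Spark that means Tachyon, and this proposal would let HDFS RAM_DISK
    // serve the same role. Because blocks live outside the executor process,
    // a restarted executor (or another job) could read them without
    // recomputing the RDD.
    rdd.persist(StorageLevel.OFF_HEAP)
    rdd.count()
    sc.stop()
  }
}
{code}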

  was:
HDFS's LAZY_PERSIST storage policy makes it possible to cache RDDs off-heap in 
HDFS. We may want to provide a capability similar to Tachyon's by leveraging 
the HDFS RAM_DISK feature, for user environments that do not have Tachyon 
deployed. 

This feature could also make it possible to share in-memory RDDs across 
different jobs, and even with non-Spark jobs.


> Leverage HDFS RAM_DISK capacity to provide an off_heap feature similar to 
> Tachyon 
> -------------------------------------------------------------------------------
>
>                 Key: SPARK-6112
>                 URL: https://issues.apache.org/jira/browse/SPARK-6112
>             Project: Spark
>          Issue Type: New Feature
>          Components: Spark Core
>            Reporter: Zhan Zhang
>
> HDFS's LAZY_PERSIST storage policy makes it possible to cache RDDs off-heap 
> in HDFS. We may want to provide a capability similar to Tachyon's by 
> leveraging the HDFS RAM_DISK feature, for user environments that do not have 
> Tachyon deployed. 
> This feature could also make it possible to share in-memory RDDs across 
> different jobs, and even with non-Spark jobs, and to avoid RDD recomputation 
> when executors crash. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
