I'm not sure what you mean by "its parents have to reuse it by creating new
RDDs".

Since SparkPlan.execute returns a new RDD every time it is called, you can't
expect the cached RDD to be reused automatically, even if you reuse the same
SparkPlan across several queries.
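
To make this concrete, here is a minimal sketch of what I mean (assuming some `plan: SparkPlan`; `cachedExecute` is a hypothetical helper for illustration, not an existing Spark API):

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.execution.SparkPlan

// Each call to plan.execute() builds a fresh RDD, so caching only helps
// if you keep and reuse the *same* RDD reference yourself.
def cachedExecute(plan: SparkPlan): RDD[InternalRow] = {
  val rdd = plan.execute() // a brand-new RDD on every call
  rdd.cache()              // marks this particular RDD for caching
  rdd                      // reuse this reference; calling plan.execute()
                           // again would yield a different, uncached RDD
}
```

In other words, the cache is attached to one concrete RDD instance, not to the SparkPlan, which is why reusing the plan alone doesn't reuse the cached data.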

By the way, are there any existing ways to reuse a SparkPlan?



summerDG wrote
> Thank you very much. The reason the output is empty is that the query
> involves a join; I forgot to mention that in the question. So even if I
> succeed in caching the RDD, the following SparkPlans in the query will not
> reuse it.
> If a SparkPlan in the query has several "parent" nodes, do its "parents"
> have to reuse it by creating new RDDs?





-----
Liang-Chi Hsieh | @viirya 
Spark Technology Center 
http://www.spark.tc/ 
--
View this message in context: 
http://apache-spark-developers-list.1001551.n3.nabble.com/How-to-cache-SparkPlan-execute-for-reusing-tp21097p21100.html
Sent from the Apache Spark Developers List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
