Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/5714#discussion_r29208973
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala ---
@@ -1121,23 +1121,23 @@ class SQLContext(@transient val sparkContext: SparkContext)
def assertAnalyzed(): Unit = analyzer.checkAnalysis(analyzed)
lazy val analyzed: LogicalPlan = analyzer.execute(logical)
- lazy val withCachedData: LogicalPlan = {
+ def withCachedData: LogicalPlan = {
assertAnalyzed()
cacheManager.useCachedData(analyzed)
}
- lazy val optimizedPlan: LogicalPlan = optimizer.execute(withCachedData)
+ def optimizedPlan: LogicalPlan = optimizer.execute(withCachedData)
// TODO: Don't just pick the first one...
- lazy val sparkPlan: SparkPlan = {
+ def sparkPlan: SparkPlan = {
SparkPlan.currentContext.set(self)
planner(optimizedPlan).next()
}
// executedPlan should not be used to initialize any SparkPlan. It should be
// only used for execution.
- lazy val executedPlan: SparkPlan = prepareForExecution.execute(sparkPlan)
+ def executedPlan: SparkPlan = prepareForExecution.execute(sparkPlan)
/** Internal version of the RDD. Avoids copies and has no schema */
- lazy val toRdd: RDD[Row] = executedPlan.execute()
+ def toRdd: RDD[Row] = executedPlan.execute()
--- End diff --
Yes, thanks for reminding me. I will submit code using the other approach.
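As context for the diff above: the change from `lazy val` to `def` matters because a `lazy val` evaluates its body once on first access and caches the result, while a `def` re-evaluates on every access. A minimal, self-contained sketch (not Spark code; the object and member names here are hypothetical) of that difference:

```scala
// Illustrates the lazy val vs def semantics the diff trades on.
object LazyValVsDef {
  var evalCount = 0

  // `lazy val`: body runs once, on first access; the result is cached.
  lazy val cached: Int = { evalCount += 1; evalCount }

  // `def`: body runs again on every call.
  def fresh: Int = { evalCount += 1; evalCount }

  def main(args: Array[String]): Unit = {
    println(cached) // first access evaluates the body: prints 1
    println(cached) // cached result, body not re-run: prints 1
    println(fresh)  // re-evaluated: prints 2
    println(fresh)  // re-evaluated again: prints 3
  }
}
```

In the Spark context, making these members `def`s means a fresh plan is produced on each access instead of being cached on the `QueryExecution`-style holder, which is the behavioral trade-off the reviewers are discussing.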