sarutak commented on issue #25280: [SPARK-28548][SQL] explain() shows wrong 
result for persisted DataFrames after some operations
URL: https://github.com/apache/spark/pull/25280#issuecomment-516919505
 
 
   @viirya With my change, we get the following result.
   
   ```
   val df = spark.range(10)
   df.explain  // shows the query plan without cache
   df.collect  // executes without cache
   df.persist
   df.explain  // still shows the query plan without cache (persist is ignored)
   df.collect  // still executes without cache
   df.queryExecution.executedPlan.find(_.isInstanceOf[InMemoryTableScanExec])  // None
   ```
   
   After `collect`, `persist` is still ignored, but this result differs from that of `2.4.3` and matches the `master` branch.
   As you mentioned, `df.collect` and some other operations materialize the `executedPlan`, which causes this problem.
   
   Some operations, including `df.show`, don't hit this problem because they implicitly create a new root plan at execution time, so I wonder whether creating a dummy root plan when executing `collect` or similar operations would resolve this type of problem.
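   To make the contrast concrete, here is a hedged sketch (assuming a live `SparkSession` named `spark`; the comments describe the behavior discussed above, not verified output):
   
   ```scala
   import org.apache.spark.sql.execution.columnar.InMemoryTableScanExec
   
   val df = spark.range(10)
   df.collect()  // materializes and memoizes executedPlan, without cache
   df.persist()
   df.show()     // goes through a fresh execution, so it can pick up the cache
   // The memoized plan, however, may still miss the cache:
   df.queryExecution.executedPlan.find(_.isInstanceOf[InMemoryTableScanExec])
   ```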
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
