[ https://issues.apache.org/jira/browse/SPARK-21484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Liang-Chi Hsieh updated SPARK-21484:
------------------------------------
    Description: 
After a call to persist/unpersist, the query plans of a Dataset should change 
accordingly. But currently the query plans stay the same, so you will see 
inconsistent query plans like:
{code}
scala> val x1 = Seq(1).toDF()
x1: org.apache.spark.sql.DataFrame = [value: int]
scala> println(x1.queryExecution.executedPlan) // query plans are materialized before persist()
LocalTableScan [value#1]
scala> x1.persist()
scala> x1.count()
scala> println(x1.queryExecution.executedPlan)
LocalTableScan [value#1]
{code}

{code}
scala> val x1 = Seq(1).toDF()
x1: org.apache.spark.sql.DataFrame = [value: int]
scala> x1.persist()
scala> x1.count()
scala> println(x1.queryExecution.executedPlan) // query plans are materialized after persist()
InMemoryTableScan [value#24]
   +- InMemoryRelation [value#24], true, 10000, StorageLevel(disk, memory, deserialized, 1 replicas)
         +- LocalTableScan [value#1]
scala> x1.unpersist()
scala> println(x1.queryExecution.executedPlan)
InMemoryTableScan [value#24]
   +- InMemoryRelation [value#24], true, 10000, StorageLevel(disk, memory, deserialized, 1 replicas)
         +- LocalTableScan [value#1]
{code}
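The behavior above is consistent with the executed plan being computed once and memoized on first access, so later changes to the cache state are not reflected. The following is a toy sketch in plain Scala (not Spark's actual classes; {{StalePlanDemo}}, {{cached}}, and {{freshPlan}} are hypothetical names for illustration only):
{code}
// Toy model of why a memoized plan goes stale after persist()/unpersist().
object StalePlanDemo {
  var cached = false // stands in for the session's cache state

  class Dataset {
    // Memoized on first access, like a lazy-val query execution:
    lazy val executedPlan: String = plan()
    private def plan(): String =
      if (cached) "InMemoryTableScan" else "LocalTableScan"
    // A fresh planning pass reflects the current cache state:
    def freshPlan(): String = plan()
  }

  def main(args: Array[String]): Unit = {
    val ds = new Dataset
    println(ds.executedPlan) // plan materialized before "persist": LocalTableScan
    cached = true            // simulate persist() + count()
    println(ds.executedPlan) // still LocalTableScan: the memoized plan is stale
    println(ds.freshPlan())  // InMemoryTableScan: re-planning sees the cache
  }
}
{code}
In this toy model, forcing a fresh planning pass after the cache state changes would produce the consistent plans the description asks for.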



> Wrong query plans of Dataset after persist/unpersist
> ----------------------------------------------------
>
>                 Key: SPARK-21484
>                 URL: https://issues.apache.org/jira/browse/SPARK-21484
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.2.0
>            Reporter: Liang-Chi Hsieh
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
