Github user yhuai commented on a diff in the pull request:

    https://github.com/apache/spark/pull/9404#discussion_r43662693
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/CachedTableSuite.scala ---
    @@ -353,4 +354,44 @@ class CachedTableSuite extends QueryTest with SharedSQLContext {
         assert(sparkPlan.collect { case e: InMemoryColumnarTableScan => e }.size === 3)
         assert(sparkPlan.collect { case e: PhysicalRDD => e }.size === 0)
       }
    +
    +  /**
    +   * Verifies that the plan for `df` contains `expected` number of Exchange operators.
    +   */
    +  private def verifyNumExchanges(df: DataFrame, expected: Int): Unit = {
    +    assert(df.queryExecution.executedPlan.collect { case e: Exchange => e }.size == expected)
    +  }
    +
    +  test("A cached table preserves the partitioning and ordering of its cached SparkPlan") {
    +    val table3x = testData.unionAll(testData).unionAll(testData)
    +    table3x.registerTempTable("testData3x")
    +
    +    sql("SELECT key, value FROM testData3x ORDER BY key").registerTempTable("orderedTable")
    +    sqlContext.cacheTable("orderedTable")
    +    assertCached(sqlContext.table("orderedTable"))
    +    // Should not have an exchange as the query is already sorted on the group by key.
    +    verifyNumExchanges(sql("SELECT key, count(*) FROM orderedTable GROUP BY key"), 0)
    +    checkAnswer(
    +      sql("SELECT key, count(*) FROM orderedTable GROUP BY key ORDER BY key"),
    +      sql("SELECT key, count(*) FROM testData3x GROUP BY key ORDER BY key").collect())
    +    sqlContext.uncacheTable("orderedTable")
    +
    +    // Set up two tables distributed in the same way.
    +    testData.distributeBy(Column("key") :: Nil, 5).registerTempTable("t1")
    +    testData2.distributeBy(Column("a") :: Nil, 5).registerTempTable("t2")
    +    sqlContext.cacheTable("t1")
    +    sqlContext.cacheTable("t2")
    +
    +    // Joining them should result in no exchanges.
    +    verifyNumExchanges(sql("SELECT * FROM t1 t1 JOIN t2 t2 ON t1.key = t2.a"), 0)
    --- End diff --
    
    Ah, it seems that partitioning the data into `5` partitions does the trick here (the default parallelism is set to 5 in our tests). If you change it to something like `10`, this test will fail. Unfortunately, we do not have the concept of an equivalence class right now. So, at https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/Exchange.scala#L229-L240, the `allCompatible` method does not really do what we want here (btw, the `allCompatible` method tries to make sure that the partitioning schemes of all children are compatible with each other, i.e. that they partition the data with the same partitioner and with the same number of partitions).

