Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21529#discussion_r194863357
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/execution/PlannerSuite.scala ---
    @@ -679,6 +679,17 @@ class PlannerSuite extends SharedSQLContext {
         }
         assert(rangeExecInZeroPartition.head.outputPartitioning == UnknownPartitioning(0))
       }
    +
    +  test("SPARK-24495: EnsureRequirements can return wrong plan when reusing 
the same key in join") {
    +    withSQLConf(("spark.sql.shuffle.partitions", "1"),
    +      ("spark.sql.constraintPropagation.enabled", "false"),
    +      ("spark.sql.autoBroadcastJoinThreshold", "-1")) {
    +      val df1 = spark.range(100).repartition(2, $"id", $"id")
    --- End diff ---
    
No, the issue can also happen with range partitioning (because of the double transformation issue). The code in the ticket can reproduce the bug, and it involves no hash partitioning.

