wangyum opened a new pull request #35214:
URL: https://github.com/apache/spark/pull/35214


   ### What changes were proposed in this pull request?
   
   This PR pushes deterministic projections down through SQL UNION and then combines the adjacent unions; a sketch of the idea follows the plans below. For example:
   ```scala
   spark.range(1).selectExpr("CAST(id AS decimal(18, 1)) AS id").write.saveAsTable("t1")
   spark.range(2).selectExpr("CAST(id AS decimal(18, 2)) AS id").write.saveAsTable("t2")
   spark.range(3).selectExpr("CAST(id AS decimal(18, 3)) AS id").write.saveAsTable("t3")
   spark.range(4).selectExpr("CAST(id AS decimal(18, 4)) AS id").write.saveAsTable("t4")
   spark.range(5).selectExpr("CAST(id AS decimal(18, 5)) AS id").write.saveAsTable("t5")
   
   spark.sql("SELECT id FROM t1 UNION SELECT id FROM t2 UNION SELECT id FROM t3 UNION SELECT id FROM t4 UNION SELECT id FROM t5").explain(true)
   ```
   
   Before this PR:
   ```
   AdaptiveSparkPlan isFinalPlan=false
   +- HashAggregate(keys=[id#36], functions=[], output=[id#36])
      +- Exchange hashpartitioning(id#36, 5), ENSURE_REQUIREMENTS, [id=#159]
         +- HashAggregate(keys=[id#36], functions=[], output=[id#36])
            +- Union
               :- HashAggregate(keys=[id#34], functions=[], output=[id#36])
               :  +- Exchange hashpartitioning(id#34, 5), ENSURE_REQUIREMENTS, [id=#154]
               :     +- HashAggregate(keys=[id#34], functions=[], output=[id#34])
               :        +- Union
               :           :- HashAggregate(keys=[id#32], functions=[], output=[id#34])
               :           :  +- Exchange hashpartitioning(id#32, 5), ENSURE_REQUIREMENTS, [id=#149]
               :           :     +- HashAggregate(keys=[id#32], functions=[], output=[id#32])
               :           :        +- Union
               :           :           :- HashAggregate(keys=[id#30], functions=[], output=[id#32])
               :           :           :  +- Exchange hashpartitioning(id#30, 5), ENSURE_REQUIREMENTS, [id=#144]
               :           :           :     +- HashAggregate(keys=[id#30], functions=[], output=[id#30])
               :           :           :        +- Union
               :           :           :           :- Project [cast(id#25 as decimal(19,2)) AS id#30]
               :           :           :           :  +- FileScan parquet default.t1[id#25]
               :           :           :           +- Project [cast(id#26 as decimal(19,2)) AS id#31]
               :           :           :              +- FileScan parquet default.t2[id#26]
               :           :           +- Project [cast(id#27 as decimal(20,3)) AS id#33]
               :           :              +- FileScan parquet default.t3[id#27]
               :           +- Project [cast(id#28 as decimal(21,4)) AS id#35]
               :              +- FileScan parquet default.t4[id#28]
               +- Project [cast(id#29 as decimal(22,5)) AS id#37]
                  +- FileScan parquet default.t5[id#29]
   
   ```
   
   After this PR:
   ```
   AdaptiveSparkPlan isFinalPlan=false
   +- HashAggregate(keys=[id#36], functions=[], output=[id#36])
      +- Exchange hashpartitioning(id#36, 5), ENSURE_REQUIREMENTS, [id=#111]
         +- HashAggregate(keys=[id#36], functions=[], output=[id#36])
            +- Union
               :- Project [cast(cast(cast(cast(id#25 as decimal(19,2)) as decimal(20,3)) as decimal(21,4)) as decimal(22,5)) AS id#36]
               :  +- FileScan parquet default.t1[id#25]
               :- Project [cast(cast(cast(cast(id#26 as decimal(19,2)) as decimal(20,3)) as decimal(21,4)) as decimal(22,5)) AS id#49]
               :  +- FileScan parquet default.t2[id#26]
               :- Project [cast(cast(cast(id#27 as decimal(20,3)) as decimal(21,4)) as decimal(22,5)) AS id#47]
               :  +- FileScan parquet default.t3[id#27]
               :- Project [cast(cast(id#28 as decimal(21,4)) as decimal(22,5)) AS id#44]
               :  +- FileScan parquet default.t4[id#28]
               +- Project [cast(id#29 as decimal(22,5)) AS id#37]
                  +- FileScan parquet default.t5[id#29]
   ```
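   
   The optimization can be pictured as a Catalyst rule that pushes a deterministic `Project` into every branch of a `Union`, after which Spark's existing `CombineUnions` rule can flatten the nested unions into the single `Union` seen above. The following is a minimal, hypothetical sketch of that idea (names and details are illustrative, not the exact code in this PR):
   ```scala
   import org.apache.spark.sql.catalyst.expressions.{Alias, Attribute, AttributeMap, NamedExpression}
   import org.apache.spark.sql.catalyst.plans.logical.{LogicalPlan, Project, Union}
   import org.apache.spark.sql.catalyst.rules.Rule
   
   // Hypothetical sketch: push a deterministic projection into each Union branch.
   object PushProjectionThroughUnionSketch extends Rule[LogicalPlan] {
     override def apply(plan: LogicalPlan): LogicalPlan = plan.transformDown {
       // Only deterministic projections are safe to duplicate: evaluating them
       // once per branch must give the same result as once above the Union.
       case Project(projectList, u: Union)
           if projectList.forall(_.deterministic) && u.children.nonEmpty =>
         // The Union's output attributes come from its first child, so the
         // projection can be pushed into the first branch unchanged.
         val newFirstChild = Project(projectList, u.children.head)
         // For the other branches, rewrite attribute references from the first
         // child's output to the corresponding branch's output, re-aliasing so
         // expression IDs stay unique across branches.
         val newOtherChildren = u.children.tail.map { child =>
           val rewrites = AttributeMap(u.children.head.output.zip(child.output))
           val pushed = projectList.map { e =>
             e.transform { case a: Attribute => rewrites.getOrElse(a, a) } match {
               case a: Alias => Alias(a.child, a.name)()
               case other    => other.asInstanceOf[NamedExpression]
             }
           }
           Project(pushed, child)
         }
         u.copy(children = newFirstChild +: newOtherChildren)
     }
   }
   ```
   Once the projections sit directly above the scans, the adjacent `Union` nodes have no operators between them, which is exactly the shape `CombineUnions` needs in order to merge them.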
   
   ### Why are the changes needed?
   
   Improve query performance by reducing shuffles. As the plans above show, each nested `UNION` previously added its own `HashAggregate`/`Exchange` pair for deduplication; after this change a single aggregation over one flattened `Union` suffices.
   
   ### Does this PR introduce _any_ user-facing change?
   
   No.
   
   ### How was this patch tested?
   
   Unit tests.

