EnricoMi opened a new pull request, #40334:
URL: https://github.com/apache/spark/pull/40334

   ### What changes were proposed in this pull request?
A `DataSourceV2` that reports a `KeyGroupedPartitioning` through 
`SupportsReportPartitioning` no longer has to implement `HasPartitionKey` on its 
input partitions, and is therefore not limited to a single key per partition. A 
minimal sketch of such a connector is shown below.
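
   For illustration, here is a minimal sketch of such a connector against the 
DataSource V2 read API (`Scan`, `Batch`, `SupportsReportPartitioning`). The 
`MyScan` / `MyPartition` names, the `key` column and the `Int` key type are 
hypothetical and not part of this PR:

   ```scala
   import org.apache.spark.sql.connector.expressions.{Expression, Expressions}
   import org.apache.spark.sql.connector.read.{Batch, InputPartition, PartitionReaderFactory, Scan, SupportsReportPartitioning}
   import org.apache.spark.sql.connector.read.partitioning.{KeyGroupedPartitioning, Partitioning}
   import org.apache.spark.sql.types.StructType

   // Hypothetical input partition: it deliberately does NOT implement
   // HasPartitionKey, so it may hold rows for more than one value of "key".
   case class MyPartition(keyValues: Seq[Int]) extends InputPartition

   // Hypothetical scan that tells Spark its output is already grouped by "key".
   class MyScan(schema: StructType, partitions: Array[MyPartition])
       extends Scan with Batch with SupportsReportPartitioning {

     override def readSchema(): StructType = schema
     override def toBatch: Batch = this

     override def planInputPartitions(): Array[InputPartition] =
       partitions.toArray[InputPartition]

     // Reader omitted; it is not needed to illustrate the partitioning report.
     override def createReaderFactory(): PartitionReaderFactory = ???

     // Report that the output is grouped by the "key" column. With this change
     // the report does not require the partitions above to implement
     // HasPartitionKey, i.e. each partition may contain multiple keys.
     override def outputPartitioning(): Partitioning =
       new KeyGroupedPartitioning(
         Array[Expression](Expressions.identity("key")),
         partitions.length)
   }
   ```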
   
   ### Why are the changes needed?
Before Spark 3.3, `DataSourceV2` implementations could report a 
`ClusteredDistribution`, which allowed Spark to exploit the existing 
partitioning and avoid an extra hash-partitioning shuffle. Transformations 
like `groupBy` or window functions (with matching grouping / partitioning keys) 
would then execute on the partitioning provided by the data source. Since Spark 
3.3 the partitioning has to be reported as a `KeyGroupedPartitioning`, which so 
far could only be exploited when every input partition implements 
`HasPartitionKey`, i.e. contains a single key; this PR lifts that restriction. 
See the query-side illustration below.
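
   As a query-side illustration (assumed setup, not part of this PR): with a 
table backed by a connector that reports such a `KeyGroupedPartitioning` on a 
`key` column, a grouping on that column should be plannable without an extra 
hash-partitioning exchange. The table and catalog names below are hypothetical:

   ```scala
   import org.apache.spark.sql.SparkSession

   val spark = SparkSession.builder().appName("key-grouped-demo").getOrCreate()

   // Hypothetical table backed by a connector reporting a
   // KeyGroupedPartitioning on the "key" column.
   val events = spark.table("my_catalog.db.events")

   // If the reported partitioning is picked up, no
   // "Exchange hashpartitioning(key, ...)" node should appear below the
   // aggregate in the physical plan.
   events.groupBy("key").count().explain()
   ```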
   
   ### Does this PR introduce _any_ user-facing change?
   This improves performance, as the existing partitioning is reused and an 
extra shuffle step is avoided.
   
   ### How was this patch tested?
   Existing tests have been fixed: they incorrectly used `HasPartitionKey` for 
partitions that actually contain multiple keys.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

