cloud-fan commented on PR #36995:
URL: https://github.com/apache/spark/pull/36995#issuecomment-1225904328
In general, this feature looks reasonable, but it's worth discussing how
it interacts with the existing "v2 write required distribution" feature.
Let's assume the required distribution is `ClusteredDistribution`, whose doc
says
```
/**
 * A distribution where tuples that share the same values for clustering expressions are
 * co-located in the same partition.
 *
 * @since 3.2.0
 */
@Experimental
public interface ClusteredDistribution extends Distribution
```
This means the clustering expressions are the keys, and Spark makes sure
records with the same keys go to the same partition. Concretely, for each
record, Spark evaluates the clustering expressions to get the keys, hashes the
keys, and assigns the record a partition ID based on that hash.
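A minimal sketch of the idea (this simplifies Spark's actual internals, which use a Murmur3-based hash):
```
// Not Spark's real implementation: the partition ID is the hash of the
// clustering keys, mapped into [0, numPartitions) with a non-negative mod.
static int assignPartition(Object[] clusteringKeys, int numPartitions) {
  int hash = java.util.Arrays.hashCode(clusteringKeys);
  return ((hash % numPartitions) + numPartitions) % numPartitions;
}
```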
How can we use this feature to implement bucketed writing? One option is to use
the expression (a v2 function) that calculates the bucket ID as the clustering
expression. Spark will then make sure records with the same bucket ID end up
in the same partition. The problem with this approach is low parallelism
(at most the number of buckets).
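For illustration, a rough sketch of this option with the DSv2 write API; the bucket spec (16 buckets on a column `id`) is made up:
```
import org.apache.spark.sql.connector.distributions.Distribution;
import org.apache.spark.sql.connector.distributions.Distributions;
import org.apache.spark.sql.connector.expressions.Expression;
import org.apache.spark.sql.connector.expressions.Expressions;
import org.apache.spark.sql.connector.expressions.SortOrder;
import org.apache.spark.sql.connector.write.RequiresDistributionAndOrdering;
import org.apache.spark.sql.connector.write.Write;

// Hypothetical Write for a table bucketed by bucket(16, id).
class BucketIdClusteredWrite implements Write, RequiresDistributionAndOrdering {
  @Override
  public Distribution requiredDistribution() {
    // Cluster directly by the bucket transform: all records with the same
    // bucket ID go to one partition, so parallelism is capped at 16.
    return Distributions.clustered(
        new Expression[] { Expressions.bucket(16, "id") });
  }

  @Override
  public SortOrder[] requiredOrdering() {
    return new SortOrder[0]; // no ordering requirement in this sketch
  }
}
```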
A different approach is to use the bucket columns themselves as the clustering
expressions. Spark will make sure records with the same bucket column values
end up in the same partition. The v2 write can then additionally require a
local sort by bucket ID (a v2 function) so that records with the same bucket ID
are grouped together within each partition.
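A sketch of this variant, again with a made-up bucket spec and column name:
```
import org.apache.spark.sql.connector.distributions.Distribution;
import org.apache.spark.sql.connector.distributions.Distributions;
import org.apache.spark.sql.connector.expressions.Expression;
import org.apache.spark.sql.connector.expressions.Expressions;
import org.apache.spark.sql.connector.expressions.SortDirection;
import org.apache.spark.sql.connector.expressions.SortOrder;
import org.apache.spark.sql.connector.write.RequiresDistributionAndOrdering;
import org.apache.spark.sql.connector.write.Write;

// Hypothetical Write that clusters by the raw bucket column and locally sorts
// by the bucket transform, keeping full write parallelism.
class BucketColumnClusteredWrite implements Write, RequiresDistributionAndOrdering {
  @Override
  public Distribution requiredDistribution() {
    // Cluster by the bucket column itself: records with equal column values
    // co-locate, while records of different values (even in the same bucket)
    // can spread across many partitions.
    return Distributions.clustered(
        new Expression[] { Expressions.column("id") });
  }

  @Override
  public SortOrder[] requiredOrdering() {
    // Local sort by bucket ID so each task sees one bucket's records together.
    return new SortOrder[] {
        Expressions.sort(Expressions.bucket(16, "id"), SortDirection.ASCENDING)
    };
  }
}
```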
That said, I think most users will not use a bucket transform as the
clustering expression. If they do, it's their choice and Spark won't do
anything wrong.
What do you think? @sunchao @aokolnychyi
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]