aokolnychyi commented on PR #36995:
URL: https://github.com/apache/spark/pull/36995#issuecomment-1227738065

   > This means the clustering expressions are the keys, and Spark makes sure 
records with the same keys go to the same partition. For each record, Spark 
calculates the keys, hashes them, and assigns a partition ID to the record 
based on that hash.
   >
   > How can we use this feature to implement bucket writing? We can use the 
expression (a v2 function) that calculates the bucket ID as the clustering 
expression. Spark will then make sure records with the same bucket ID end up 
in the same partition. The problem with this approach, however, is low 
parallelism (at most the number of buckets).
   >
   > A different approach is to use the bucket columns as the clustering 
expressions. Spark will make sure records with the same bucket column values 
end up in the same partition. The v2 write can then require a local sort by 
bucket ID (a v2 function) so that records with the same bucket ID are 
grouped together.
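
   To make the quoted trade-off concrete, here is a tiny sketch in plain 
Python. Everything in it is my own illustration: `spark_hash` is a CRC32 
stand-in for Spark's real Murmur3 hash, and `bucket_id`/`partition_of` are 
hypothetical helpers, not Spark code.

```python
import zlib

NUM_BUCKETS = 4       # buckets declared by the table
NUM_PARTITIONS = 8    # shuffle partitions Spark could use

def spark_hash(x):
    # stand-in for Spark's Murmur3; any deterministic hash illustrates the point
    return zlib.crc32(repr(x).encode())

def bucket_id(key):
    # hypothetical v2 bucket function: hash the key, mod the number of buckets
    return spark_hash(key) % NUM_BUCKETS

def partition_of(cluster_key):
    # Spark derives the partition ID from the hash of the clustering expressions
    return spark_hash(cluster_key) % NUM_PARTITIONS

records = [("user%d" % i, i) for i in range(1000)]

# Approach 1: cluster by bucket_id(key). There are at most NUM_BUCKETS
# distinct clustering values, so at most NUM_BUCKETS partitions receive data.
parts_by_bucket = {partition_of(bucket_id(k)) for k, _ in records}
assert len(parts_by_bucket) <= NUM_BUCKETS

# Approach 2: cluster by the raw bucket column, then locally sort each
# partition by bucket ID so records of one bucket are grouped together.
partitions = {}
for k, v in records:
    partitions.setdefault(partition_of(k), []).append((k, v))
for rows in partitions.values():
    rows.sort(key=lambda r: bucket_id(r[0]))   # the requested local sort
parts_by_column = set(partitions)              # can use up to NUM_PARTITIONS
```

   In the sketch, clustering by bucket ID caps the number of non-empty output 
partitions at `NUM_BUCKETS`, while clustering by the column itself fans 
records out across all shuffle partitions and relies on the local sort to 
keep each bucket's records contiguous within a task.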
   
   @cloud-fan, I agree with your summary. It looks like a classic trade-off 
between fewer files (clustering by bucket ID) and better parallelism 
(clustering by the bucket columns with a local sort by bucket ID). I believe 
the current Spark API is flexible enough that data sources can request 
either, depending on their internal logic. A third alternative is to 
leverage an ordered distribution by bucket ID plus some other key; in that 
case, Spark will perform skew estimation while determining the ranges.
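
   As a rough illustration of that third option (plain Python again; the 
sampling below is a toy stand-in for the reservoir-sampling Spark's range 
partitioner uses, and all names are my own, not Spark APIs):

```python
import bisect
import random
import zlib

NUM_BUCKETS = 4
NUM_PARTITIONS = 8

def bucket_id(key):
    # hypothetical v2 bucket function (CRC32 stands in for Spark's real hash)
    return zlib.crc32(key.encode()) % NUM_BUCKETS

def ordering_key(r):
    # ordered distribution: bucket ID first, then some other key
    return (bucket_id(r[0]), r[1])

# skewed input: one hot key dominates the data
records = [("hot", i) for i in range(900)] + [("k%d" % i, i) for i in range(100)]

# "skew estimation": sample the data, sort the sample, pick range boundaries
random.seed(0)
sample = sorted(ordering_key(r) for r in random.sample(records, 100))
step = len(sample) // NUM_PARTITIONS
bounds = [sample[(i + 1) * step] for i in range(NUM_PARTITIONS - 1)]

def partition_of(r):
    # range partitioning: index of the first boundary >= the ordering key
    return bisect.bisect_left(bounds, ordering_key(r))

sizes = [0] * NUM_PARTITIONS
for r in records:
    sizes[partition_of(r)] += 1
```

   Because the boundaries come from a sorted sample, a bucket whose records 
dominate the data gets split across several adjacent ranges instead of 
landing in a single task, at the cost of a full shuffle ordered by the sort 
key.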
   
   To sum up, I feel the logic in this PR works and the existing API should 
cover all discussed cases.
   
   What do you think, @cloud-fan @sunchao?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

