cloud-fan commented on pull request #35657:
URL: https://github.com/apache/spark/pull/35657#issuecomment-1067928403


   IIUC, the required framework-level changes are:
   1. Add a new `KeyGroupedPartitioning` (or whatever name) which puts all the 
records sharing the same key in one partition, where each partition holds 
exactly one key. This new partitioning can satisfy `ClusteredDistribution` and 
is compatible with another `KeyGroupedPartitioning` if the key expressions are 
the same.
   2. Add a new `DataSourceTransform` expression which takes some input columns 
and computes its result with a v2 function. Two `DataSourceTransform` 
expressions are semantically equal if their inputs and v2 functions are the 
same.
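   To make the two proposed abstractions concrete, here is a minimal, 
self-contained sketch of the semantics described above. All names and shapes 
(`ColumnRef`, `V2Function`, the `satisfies`/`isCompatibleWith` signatures) are 
illustrative assumptions, not the actual Spark API:

   ```java
   import java.util.List;

   class Sketch {
     // Stand-in for a bound input column (assumed name, for illustration).
     record ColumnRef(int ordinal) {}

     // Stand-in for a v2 function's identity; real Spark would resolve this
     // through a v2 FunctionCatalog.
     record V2Function(String name) {}

     // (2) A transform expression: some input columns plus a v2 function.
     // Two instances are semantically equal iff the inputs and the v2
     // functions are the same.
     record DataSourceTransform(V2Function function, List<ColumnRef> inputs) {
       boolean semanticEquals(DataSourceTransform other) {
         return function.equals(other.function) && inputs.equals(other.inputs);
       }
     }

     // Stand-in for the required distribution: records sharing the clustering
     // keys must end up in the same partition.
     record ClusteredDistribution(List<DataSourceTransform> clustering) {}

     // (1) KeyGroupedPartitioning: all records sharing the same key land in
     // one partition, and each partition holds exactly one key.
     record KeyGroupedPartitioning(List<DataSourceTransform> expressions,
                                   int numPartitions) {
       // Satisfied when every clustering key is covered by a partition key.
       boolean satisfies(ClusteredDistribution d) {
         return d.clustering().stream().allMatch(
             c -> expressions.stream().anyMatch(e -> e.semanticEquals(c)));
       }

       // Compatible with another KeyGroupedPartitioning when both sides group
       // by the same key expressions, position by position.
       boolean isCompatibleWith(KeyGroupedPartitioning other) {
         if (expressions.size() != other.expressions().size()) return false;
         for (int i = 0; i < expressions.size(); i++) {
           if (!expressions.get(i).semanticEquals(other.expressions().get(i)))
             return false;
         }
         return true;
       }
     }
   }
   ```

   The point of the sketch is that compatibility reduces to semantic equality 
of the key expressions, which is exactly what lets two bucketed sides of a 
join avoid a shuffle.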
   
   I think the above should be sufficient to support data source bucketed 
join, while more changes are needed to fully support storage-partitioned join. 
IIUC we don't plan to support storage-partitioned join in this PR, so it seems 
to me that all the code about the partition values is dead code for now, right?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
