lst-codes commented on issue #1404:
URL: https://github.com/apache/arrow-datafusion/issues/1404#issuecomment-990621702


   Isn't this more a problem that you have to specify the number of partitions 
ahead of time? If the partition count is smaller than the number of distinct 
combinations of the expressions you're hashing on, you'll end up with multiple 
combinations of that expression set within the same partition.
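
   A minimal, self-contained Rust sketch of that pigeonhole effect (the key 
values, hash function, and partition count below are made up for illustration 
and are not DataFusion's actual hashing code):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

fn main() {
    // Six distinct key values, but only four hash partitions.
    let keys = ["a", "b", "c", "d", "e", "f"];
    let num_partitions: u64 = 4;

    let mut assignment: HashMap<u64, Vec<&str>> = HashMap::new();
    for key in keys {
        let mut hasher = DefaultHasher::new();
        key.hash(&mut hasher);
        // hash(key) % num_partitions decides the target partition.
        assignment
            .entry(hasher.finish() % num_partitions)
            .or_default()
            .push(key);
    }

    // With more distinct keys than partitions, at least one partition holds
    // more than one key, so rows with different key values are co-located.
    for (partition, keys) in &assignment {
        println!("partition {partition}: {keys:?}");
    }
}
```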
   
   It sounds like what you're asking for is the Spark equivalent of 
`partitionBy(...cols)`, so this issue might be better thought of as a feature 
request for a new logical partitioning scheme.
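
   For reference, here is a rough sketch in plain Rust of what that kind of 
scheme would compute: one partition per distinct key combination, with the 
partition count derived from the data rather than fixed up front. The row 
layout and key column are invented for illustration; this is not a proposed 
DataFusion API.

```rust
use std::collections::HashMap;

// Sketch of a "one partition per distinct key" scheme, roughly what
// Spark's partitionBy(...cols) does when writing out a DataFrame.
fn partition_by_key(rows: Vec<(String, i64)>) -> HashMap<String, Vec<(String, i64)>> {
    let mut partitions: HashMap<String, Vec<(String, i64)>> = HashMap::new();
    for row in rows {
        // Use the first column as the partitioning key; a real implementation
        // would evaluate the user-supplied partition expressions per row.
        partitions.entry(row.0.clone()).or_default().push(row);
    }
    partitions
}

fn main() {
    let rows = vec![
        ("us".to_string(), 1),
        ("eu".to_string(), 2),
        ("us".to_string(), 3),
    ];
    // The number of partitions follows the data (here: "us" and "eu")
    // instead of being specified ahead of time.
    for (key, rows) in partition_by_key(rows) {
        println!("{key}: {rows:?}");
    }
}
```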

