HeartSaVioR edited a comment on pull request #32875:
URL: https://github.com/apache/spark/pull/32875#issuecomment-1023854865


   I suspect we will make the same mistake unintentionally if we don't call 
it out explicitly. Please bring back a way for stateful operators to 
explicitly require Spark's internal hash partitioning when they specify 
their distribution requirement. This would also be future-proof for when 
state partitioning becomes flexible - at that point the operators would 
require Spark to match the partitioning of the existing state (even if it 
is not Spark's internal hash partitioning).

