WangGuangxin commented on pull request #35460:
URL: https://github.com/apache/spark/pull/35460#issuecomment-1040409122
> Isn't this why you shouldn't partition, shuffle, etc. on some random value? Use a hash?
Data analysts regularly have needs such as `distribute by rand()` to redistribute data evenly, or a query like

`select * from (select concat(key1, rand()) as key1 from tbl1) a join (select key2 from tbl2) b on a.key1 = b.key2`

to work around skewed data; both are valid SQL in Spark.

Both of these queries generate a `HashPartitioning` with non-deterministic expressions.
If we don't intend to support shuffling by random values, we should disable it explicitly.
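
For reference, here is a minimal, self-contained sketch (my own illustration, not code from this PR; the app name and view name are made up) of the first pattern. Running it locally and looking at `explain()` shows the Exchange whose `HashPartitioning` key is derived from `rand()`:

```scala
// Minimal sketch: DISTRIBUTE BY rand() produces a shuffle keyed on a
// non-deterministic expression. Names are illustrative, not from the PR.
import org.apache.spark.sql.SparkSession

object RandShuffleSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("rand-shuffle-sketch")
      .master("local[*]")
      .getOrCreate()

    // Tiny stand-in for the analyst's (possibly skewed) table.
    spark.range(0, 1000).createOrReplaceTempView("tbl1")

    // Hash-partitioning on rand() spreads rows evenly across partitions;
    // explain() prints a physical plan containing an Exchange whose
    // hashpartitioning key is derived from the rand() value.
    spark.sql("SELECT id FROM tbl1 DISTRIBUTE BY rand()").explain()

    spark.stop()
  }
}
```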