Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/14498
Is Spark's hashing function semantically equivalent to Hive's? AFAIK, it's
not. I think it would be better to have a mode that lets users opt into Hive's
hash method. An example case where this would be needed: users running a query
in Hive want to switch to Spark. During that migration, you want to verify
whether the data produced by both engines is the same. Also, for a brief period
the pipeline would run on both engines, and upstream consumers of the generated
data should not see differences caused by running on different engines.
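
To illustrate the kind of mismatch being discussed, here is a minimal sketch, assuming current behavior where Spark's built-in `hash()` expression is Murmur3-based while Hive's `hash()` UDF follows Java-style `hashCode` semantics (e.g. an INT hashes to its own value). The app name and setup are illustrative only, not part of this PR:

```scala
// Sketch: comparing Spark's built-in hash() with what Hive would produce
// for the same input. Assumes a local SparkSession for demonstration.
import org.apache.spark.sql.SparkSession

object HashComparison {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hash-comparison")
      .master("local[*]")
      .getOrCreate()

    // Spark's hash() is Murmur3-based, so hash(42) is generally NOT 42.
    spark.sql("SELECT hash(42) AS spark_hash").show()

    // Hive's hash(42) would return 42, since Hive hashes an INT to the
    // value itself. A "Hive hash" mode in Spark would need to reproduce
    // that behavior so outputs from both engines can be compared directly.
    spark.stop()
  }
}
```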