Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13847
Do we need to hash all values? This could be a performance issue if
`hashCode` is called frequently on very large arrays.
For context: MLlib once hit a performance problem caused by `Vector.hashCode`,
which is called during Pyrolite serialization. Pyrolite uses the Vector as the
key in a hash map to avoid re-serializing the same object, but computing the
`hashCode` cost almost as much as the re-serialization it was meant to avoid.
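One common mitigation is to hash only a bounded prefix of the array, so `hashCode` stays O(1) regardless of array length while equal arrays still hash equally. A minimal Java sketch of that idea (the `boundedHashCode` helper and the limit of 16 are illustrative assumptions, not the actual Spark implementation):

```java
class BoundedHash {
    // Hash at most the first 16 elements plus the length, so the cost is
    // bounded even for very large arrays. Equal arrays always produce equal
    // hashes; arrays that differ only past the prefix may collide, which the
    // hashCode contract permits.
    static int boundedHashCode(double[] values) {
        int limit = Math.min(values.length, 16);
        int result = 1;
        for (int i = 0; i < limit; i++) {
            long bits = Double.doubleToLongBits(values[i]);
            result = 31 * result + (int) (bits ^ (bits >>> 32));
        }
        // Mix in the length so arrays with identical prefixes but different
        // sizes tend to hash differently.
        return 31 * result + values.length;
    }
}
```

The trade-off is more collisions for arrays sharing a long common prefix, but for a cache keyed by recently serialized objects that is usually far cheaper than touching every element on each lookup.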