Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13847#discussion_r68100206
--- Diff: sql/catalyst/src/main/java/org/apache/spark/sql/catalyst/expressions/UnsafeArrayData.java ---
@@ -298,6 +298,10 @@ public UnsafeMapData getMap(int ordinal) {
return map;
}
+ // This `hashCode` computation can consume a lot of CPU time for large data.
+ // If it becomes a bottleneck, we can switch to a lighter-weight scheme that
+ // hashes only the first fixed number of bytes (see `Vector.hashCode`).
+ // The same issue exists in `UnsafeMapData.hashCode`.
--- End diff ---
The same issue also exists for UnsafeRow...
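To make the "first fixed bytes" idea concrete, here is a minimal, hypothetical sketch of a bounded hash: it mixes only a fixed-size prefix of the array plus the length, so the cost stays constant no matter how large the data is. The class and constant names (`PrefixHash`, `MAX_HASHED_BYTES`) are illustrative and not part of Spark's API; this is not the actual `UnsafeArrayData` implementation.

```java
// Hypothetical sketch of a bounded hashCode, as hinted at in the comment:
// hash only a fixed-size prefix instead of every byte, trading some hash
// quality for constant-time computation on large arrays.
public class PrefixHash {
    // Only the first MAX_HASHED_BYTES bytes contribute to the hash.
    private static final int MAX_HASHED_BYTES = 64;

    public static int hashCode(byte[] data) {
        int n = Math.min(data.length, MAX_HASHED_BYTES);
        int h = 1;
        for (int i = 0; i < n; i++) {
            h = 31 * h + data[i];  // same mixing step as java.util.Arrays.hashCode
        }
        // Fold the full length in so arrays that share a prefix but differ
        // in size still tend to hash differently.
        return 31 * h + data.length;
    }

    public static void main(String[] args) {
        byte[] small = {1, 2, 3};
        byte[] large = new byte[1 << 20];
        large[0] = 1; large[1] = 2; large[2] = 3;
        // Cost is bounded by MAX_HASHED_BYTES regardless of array size.
        System.out.println(PrefixHash.hashCode(small));
        System.out.println(PrefixHash.hashCode(large));
    }
}
```

The obvious trade-off, which is why the comment leaves the full-scan version in place until profiling shows a bottleneck: arrays that differ only beyond the hashed prefix (and have the same length) collide, so `equals` must still compare all bytes.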