GitHub user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14464#discussion_r73224507
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/joins/HashedRelation.scala ---
@@ -459,8 +459,8 @@ private[execution] final class LongToUnsafeRowMap(val mm: TaskMemoryManager, cap
   */
  def getValue(key: Long, resultRow: UnsafeRow): UnsafeRow = {
    if (isDense) {
-      val idx = (key - minKey).toInt
-      if (idx >= 0 && key <= maxKey && array(idx) > 0) {
+      val idx = (key - minKey).toInt // could overflow
+      if (key >= minKey && key <= maxKey && array(idx) > 0) {
--- End diff ---
Yeah, I see where this is going, but I don't think this totally eliminates
the problem. `key - minKey` could still overflow such that the resulting
`int` is positive, even when `key >= minKey` holds. It seems like we need to
test the keys against each other as longs, and only then convert to an `int`
to index into the array?
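As a minimal, self-contained sketch of the arithmetic (illustrative values only, not the Spark code): with an extreme `minKey` the long subtraction wraps, and the `Int` truncation can then produce a small positive index for a key that is far out of range. Comparing the keys as longs first keeps the narrowing safe, assuming (as in the dense case) that `maxKey - minKey` fits in an `Int`, which it must for the backing array to exist.

```scala
object KeyOffsetOverflowSketch {
  def main(args: Array[String]): Unit = {
    // Part 1: the failure mode. The true distance between the keys exceeds
    // Long.MaxValue, so the subtraction wraps around.
    val key        = 1L
    val extremeMin = Long.MinValue
    val offset     = key - extremeMin   // wraps to -9223372036854775807
    val idx        = offset.toInt       // keeps the low 32 bits only: 1
    println(s"offset = $offset, idx = $idx") // idx is positive and plausible,
                                             // so "idx >= 0" alone won't catch it

    // Part 2: the suggested pattern. Range-check the keys as longs first;
    // only then narrow the offset to an Int. Here maxKey - minKey fits in
    // an Int by construction, as it would for a dense backing array.
    val minKey = 10L
    val maxKey = 1000L
    val k      = 42L
    if (k >= minKey && k <= maxKey) {
      val safeIdx = (k - minKey).toInt  // guaranteed in [0, maxKey - minKey]
      println(s"safeIdx = $safeIdx")    // prints 32
    }
  }
}
```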