Github user kiszk commented on a diff in the pull request:
https://github.com/apache/spark/pull/21311#discussion_r187857950
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/joins/HashedRelation.scala ---
@@ -568,13 +568,16 @@ private[execution] final class LongToUnsafeRowMap(val mm: TaskMemoryManager, cap
}
// There is 8 bytes for the pointer to next value
- if (cursor + 8 + row.getSizeInBytes > page.length * 8L + Platform.LONG_ARRAY_OFFSET) {
+ val needSize = cursor + 8 + row.getSizeInBytes
+ val nowSize = page.length * 8L + Platform.LONG_ARRAY_OFFSET
+ if (needSize > nowSize) {
val used = page.length
if (used >= (1 << 30)) {
sys.error("Can not build a HashedRelation that is larger than 8G")
}
- ensureAcquireMemory(used * 8L * 2)
- val newPage = new Array[Long](used * 2)
+ val multiples = math.max(math.ceil(needSize.toDouble / (used * 8L)).toInt, 2)
+ ensureAcquireMemory(used * 8L * multiples)
--- End diff ---
Should we move the size check to before the allocation? IIUC, we now have to
check `used * multiples <= ByteArrayMethods.MAX_ROUNDED_ARRAY_LENGTH`.
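
For illustration only, a rough sketch of what I mean (not the actual patch): the check would run before `ensureAcquireMemory` and the new array allocation. `needSize`, `used`, and `multiples` are the names from the diff above; the import of `ByteArrayMethods` is assumed to be available in this file.

```scala
// Hypothetical sketch, not the proposed change itself.
import org.apache.spark.unsafe.array.ByteArrayMethods

val multiples = math.max(math.ceil(needSize.toDouble / (used * 8L)).toInt, 2)
// Fail fast if the grown page would exceed the maximum array length,
// before any memory is acquired or the new Array[Long] is allocated.
if (used.toLong * multiples > ByteArrayMethods.MAX_ROUNDED_ARRAY_LENGTH) {
  sys.error("Can not build a HashedRelation that is larger than 8G")
}
ensureAcquireMemory(used * 8L * multiples)
val newPage = new Array[Long](used * multiples)
```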
---