Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/21311#discussion_r189910750
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/joins/HashedRelationSuite.scala
---
@@ -254,6 +254,30 @@ class HashedRelationSuite extends SparkFunSuite with SharedSQLContext {
map.free()
}
+ test("LongToUnsafeRowMap with big values") {
+ val taskMemoryManager = new TaskMemoryManager(
+ new StaticMemoryManager(
+ new SparkConf().set(MEMORY_OFFHEAP_ENABLED.key, "false"),
+ Long.MaxValue,
+ Long.MaxValue,
+ 1),
+ 0)
+ val unsafeProj = UnsafeProjection.create(Seq(BoundReference(0, StringType, false)))
+ val keys = Seq(0L)
+ val map = new LongToUnsafeRowMap(taskMemoryManager, 1)
+ val bigStr = UTF8String.fromString("x" * 1024 * 1024 * 2)
--- End diff ---
let's add a comment to say that the page array is initialized with length `1 <<
17`, so here we need a value larger than `1 << 18`
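
A minimal sketch of the size arithmetic behind this suggestion. Assumptions (taken from the review comment, not verified against LongToUnsafeRowMap's source): the map's page is an `Array[Long]` created with length `1 << 17`, so its initial capacity in bytes is `(1 << 17) * 8 = 1 << 20` (1 MB), and the test's `"x" * 1024 * 1024 * 2` value is 2 MB, which exceeds that capacity and forces the page to grow:

```scala
// Sketch only: checks that the test value in the diff is larger than the
// assumed initial page capacity of LongToUnsafeRowMap.
object PageSizeSketch {
  def main(args: Array[String]): Unit = {
    val initialPageLongs = 1 << 17              // assumed initial Array[Long] length
    val initialPageBytes = initialPageLongs * 8L // 8 bytes per Long = 1 << 20 bytes
    val bigStrBytes = 1024L * 1024 * 2          // "x" * 1024 * 1024 * 2 is 2 MB
    println(s"page capacity: $initialPageBytes bytes, value: $bigStrBytes bytes")
    assert(bigStrBytes > initialPageBytes)      // value does not fit in the initial page
  }
}
```

Under these assumptions, the 2 MB string comfortably clears the 1 MB initial capacity, so appending it exercises the grow path the test is meant to cover.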
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]