GitHub user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18855#discussion_r133009803
--- Diff: core/src/test/scala/org/apache/spark/storage/BlockManagerSuite.scala ---
@@ -1415,6 +1415,79 @@ class BlockManagerSuite extends SparkFunSuite with Matchers with BeforeAndAfterE
super.fetchBlockSync(host, port, execId, blockId)
}
}
+
+ def testGetOrElseUpdateForLargeBlock(storageLevel: StorageLevel) {
--- End diff ---
Have you measured how long these tests take? I've seen this tried before in
other changes related to the 2GB limit, and this kind of test was always
ridiculously slow.
You can avoid this kind of test by making the chunk size configurable, e.g.
in this line you're adding above:
val chunkSize = math.min(remaining, Int.MaxValue)
Then your test can run fast and not use a lot of memory. You just need to
add extra checks that the data is being chunked properly, instead of relying on
the JVM not throwing errors at you.
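To make that concrete, here is a minimal sketch of what such a test could
look like, assuming the chunking logic is factored into a helper that takes
the chunk size as a parameter instead of hard-coding Int.MaxValue. The names
toChunks and chunkSize are illustrative only, not from this PR:

    import java.nio.ByteBuffer

    // Hypothetical helper: the chunk size is a parameter rather than a
    // hard-coded Int.MaxValue, so a test can force the multi-chunk path
    // with tiny buffers instead of allocating more than 2GB.
    def toChunks(data: Array[Byte], chunkSize: Int): Seq[ByteBuffer] = {
      data.grouped(chunkSize).map(a => ByteBuffer.wrap(a)).toSeq
    }

    // Check the chunking explicitly instead of relying on the JVM not
    // throwing: 100 bytes with a 32-byte chunk size must produce chunks
    // of 32, 32, 32 and 4 bytes, and concatenating them must round-trip.
    val data = Array.tabulate[Byte](100)(_.toByte)
    val chunks = toChunks(data, chunkSize = 32)
    assert(chunks.map(_.remaining()).toList == List(32, 32, 32, 4))
    assert(chunks.flatMap { c =>
      val a = new Array[Byte](c.remaining()); c.duplicate().get(a); a
    } == data.toSeq)

With a small configurable chunk size the test runs in milliseconds and still
exercises the same boundary arithmetic as the multi-gigabyte case.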