squito commented on a change in pull request #23453: [SPARK-26089][CORE] Handle corruption in large shuffle blocks
URL: https://github.com/apache/spark/pull/23453#discussion_r253133156
##########
File path: core/src/test/scala/org/apache/spark/util/UtilsSuite.scala
##########
@@ -211,6 +213,49 @@ class UtilsSuite extends SparkFunSuite with ResetSystemProperties with Logging {
assert(os.toByteArray.toList.equals(bytes.toList))
}
+ test("copyStreamUpTo") {
+ // input array initialization
+ val bytes = Array.ofDim[Byte](1200)
+ Random.nextBytes(bytes)
+
+ val limit = 1000
+ // testing for inputLength less than, equal to and greater than limit
+ List(900, 1000, 1100).foreach { inputLength =>
Review comment:
sorry I'm being very nitpicky here, but I really want to make sure we're
testing the exact edge conditions, i.e. where the limit is at -2, -1, 0, +1, and +2
relative to the full input length. It's OK if you don't free the buffer until you are
past the limit, but it should happen shortly after it.
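
In case it helps, here's roughly the shape I have in mind. This is only a sketch: the `copyStreamUpTo` signature and the assumption that the returned stream still yields every input byte (whether or not the input exceeds the limit) are taken from my reading of this PR, and Guava's `ByteStreams` is used here just to drain the returned stream.

```scala
import java.io.ByteArrayInputStream

import scala.util.Random

import com.google.common.io.ByteStreams

import org.apache.spark.util.Utils

val limit = 1000
// enough bytes to cover the largest edge case (limit + 2)
val bytes = Array.ofDim[Byte](limit + 2)
Random.nextBytes(bytes)

// exercise the exact edge conditions around the limit: limit - 2 up to limit + 2
(-2 to 2).map(limit + _).foreach { inputLength =>
  val in = new ByteArrayInputStream(bytes, 0, inputLength)
  // assumed API under review: copies up to `limit` bytes eagerly, but the
  // returned stream should still produce the full input when read to the end
  val copied = Utils.copyStreamUpTo(in, limit)
  assert(ByteStreams.toByteArray(copied).toList == bytes.take(inputLength).toList)
}
```

That way the test makes the boundary behavior explicit instead of only sampling lengths well below and well above the limit.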