Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14551#discussion_r74393311
--- Diff: core/src/test/scala/org/apache/spark/util/UtilsSuite.scala ---
@@ -874,4 +874,38 @@ class UtilsSuite extends SparkFunSuite with ResetSystemProperties with Logging {
     }
   }
 }
+
+  test("chi square test of randomizeInPlace") {
+    // Parameters
+    val arraySize = 10
+    val numTrials = 1000
+    val threshold = 0.05
+    val seed = 1L
+
+    // results[i][j]: how many times Utils.randomize moves an element from position j to position i
+    val results: Array[Array[Long]] = Array.ofDim(arraySize, arraySize)
+
+    // This must be seeded because even a fair random process will fail this test with
+    // probability equal to the value of `threshold`, which is inconvenient for a unit test.
+    val rand = new java.util.Random(seed)
+    val range = 0 until arraySize
+
+    for {
+      _ <- 0 until numTrials
+      trial = Utils.randomizeInPlace(range.toArray, rand)
--- End diff --
Hm, on second thought, perhaps your original version was easier to read than this chained form of nested loops. I've actually never seen this type of expression, even in Scala, so I'm not sure I'd call it well-known. I'm having trouble with the assignment mixed in among the loop generators... aren't you technically generating a tuple at each iteration of each loop this way, when conceptually each step only produces a couple of small index values? I desugared it to check, and that seems to be true. And everything but the body ends up inside the braces.
Digression: the version I suggested is certainly more like Java/C++/C#, and it's great that this form is possible in Scala too; that has some value to readers, though a limited one. Lots of things are possible in Scala, and some are genuinely more compact, and therefore more readable and less error-prone; those should be used. But I think this is just deviation from a standard expression for its own sake, using syntax merely because Scala permits it. Lots of things can be written in a complicated way in Scala.
It's also not consistent with how the Spark code base is written.
I know it's a minor digression, but sometimes these are worthwhile. I'd favor a "compromise" like your original version, which felt a little more like the rest of the code base. I'd prefer a conventional loop construct, like the rest of the code, but I don't feel strongly about _that_.
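For concreteness, this is the kind of conventional loop I mean. It's only a sketch: the diff is truncated above, so the body that updates `results` is my guess at the intended bookkeeping, not code from this PR.

```scala
for (_ <- 0 until numTrials) {
  val trial = Utils.randomizeInPlace(range.toArray, rand)
  // Hypothetical tally: the array starts as (0 until arraySize), so element
  // trial(i) began at position trial(i) and ended up at position i.
  for (i <- range) {
    results(i)(trial(i)) += 1
  }
}
```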