LuciferYang commented on PR #36393:
URL: https://github.com/apache/spark/pull/36393#issuecomment-1112256605
> I'm not sure this is worth doing, it just doesn't save any memory
I ran a simple size test:
```scala
println(s"Size of Arrays.asList(1) is " +
  s"${SizeEstimator.estimate(java.util.Arrays.asList(1))}")
println(s"Size of Collections.singletonList(1) is " +
  s"${SizeEstimator.estimate(java.util.Collections.singletonList(1))}")
```
The results are as follows:
```
Size of Arrays.asList(1) is 64
Size of Collections.singletonList(1) is 40
```
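The gap comes from the backing structures: `Arrays.asList` wraps the whole varargs array in a private `Arrays$ArrayList`, while `Collections.singletonList` stores the element in a single field. A quick sketch to confirm the concrete classes at runtime (class names assume OpenJDK):

```scala
// Inspect the concrete classes behind the two factory methods.
val viaAsList = java.util.Arrays.asList(1)
val viaSingleton = java.util.Collections.singletonList(1)

// Arrays.asList keeps a reference to the varargs array, so the
// wrapper object plus the array cost more than a single field.
println(viaAsList.getClass.getName)    // java.util.Arrays$ArrayList
println(viaSingleton.getClass.getName) // java.util.Collections$SingletonList
```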
And for performance:
```scala
val valuesPerIteration = 1000000
val benchmark = new Benchmark(
  "Test singletonList",
  valuesPerIteration,
  output = output)
benchmark.addCase("Arrays.asList") { _: Int =>
  for (i <- 0L until valuesPerIteration) {
    java.util.Arrays.asList(i)
  }
}
benchmark.addCase("Collections.singletonList") { _: Int =>
  for (i <- 0L until valuesPerIteration) {
    java.util.Collections.singletonList(i)
  }
}
benchmark.run()
```
The results are as follows:
```
OpenJDK 64-Bit Server VM 1.8.0_322-b06 on Mac OS X 11.4
Apple M1
Test singletonList:                       Best Time(ms)   Avg Time(ms)   Stdev(ms)    Rate(M/s)   Per Row(ns)   Relative
------------------------------------------------------------------------------------------------------------------------
Arrays.asList                                         5              5           0        213.4           4.7       1.0X
Collections.singletonList                             4              4           0        252.7           4.0       1.2X
```
Although the advantage is small, `Collections.singletonList` is better for both performance and memory.
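One behavioral caveat worth noting (my observation, not a claim from the PR): the list returned by `Arrays.asList` is fixed-size but supports `set`, while `Collections.singletonList` is fully immutable, so the swap is only safe at call sites that never mutate the list:

```scala
import java.util.{Arrays, Collections}

// Arrays.asList: fixed-size, but elements can still be replaced.
val mutable = Arrays.asList(1)
mutable.set(0, 2) // allowed, writes through to the backing array

// Collections.singletonList: any structural or element mutation throws.
val immutable = Collections.singletonList(1)
try {
  immutable.set(0, 2)
} catch {
  case _: UnsupportedOperationException =>
    println("singletonList is immutable")
}
```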
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]