gagafunctor commented on a change in pull request #23983: [SPARK-26881][core]
Heuristic for tree aggregate depth
URL: https://github.com/apache/spark/pull/23983#discussion_r262952742
##########
File path:
mllib/src/main/scala/org/apache/spark/mllib/linalg/distributed/RowMatrix.scala
##########
@@ -117,6 +118,7 @@ class RowMatrix @Since("1.0.0") (
// Computes n*(n+1)/2, avoiding overflow in the multiplication.
// This succeeds when n <= 65535, which is checked above
val nt = if (n % 2 == 0) ((n / 2) * (n + 1)) else (n * ((n + 1) / 2))
+ val grammianSizeInMb = (SizeEstimator.estimate(new BDV[Double](nt)) / 1000).toInt
Review comment:
I'm not sure this is the right way to do it here. I was hesitating
between:
- Doing this, i.e. using a standard way of estimating the object's size
(SizeEstimator.estimate), at the cost of allocating the dense vector
- Not allocating the dense vector, at the cost of estimating its size with
an ad-hoc computation
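For reference, the second option (no allocation) could look roughly like the
sketch below. The helper name and the JVM overhead constants are assumptions
for illustration, not Spark API; a real implementation would need to match
whatever object layout SizeEstimator would report:

```scala
// Hypothetical ad-hoc size estimate for a dense vector of `nt` Doubles,
// avoiding the allocation that SizeEstimator.estimate(new BDV[Double](nt))
// would require. Overhead constants are rough JVM assumptions.
def estimateDenseVectorSizeBytes(nt: Long): Long = {
  val doubleSizeBytes = 8L       // each element is a 64-bit Double
  val arrayOverheadBytes = 16L   // approximate array header (object header + length)
  val wrapperOverheadBytes = 32L // approximate wrapper-object fields and padding
  wrapperOverheadBytes + arrayOverheadBytes + nt * doubleSizeBytes
}

// Worst case allowed by the check above: n = 65535 columns,
// nt = n * (n + 1) / 2 upper-triangular entries.
val n = 65535L
val nt = if (n % 2 == 0) (n / 2) * (n + 1) else n * ((n + 1) / 2)
val sizeInMb = (estimateDenseVectorSizeBytes(nt) / 1000000L).toInt
```

The trade-off is accuracy versus cost: SizeEstimator measures the actual
object graph but has to build it first, while the arithmetic version is free
but bakes in layout assumptions that can drift across JVMs.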
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services