srowen commented on a change in pull request #26948: [SPARK-30120][ML] LSH
approxNearestNeighbors should use BoundedPriorityQueue when numNearestNeighbors
is small
URL: https://github.com/apache/spark/pull/26948#discussion_r360402434
##########
File path: mllib/src/main/scala/org/apache/spark/ml/feature/LSH.scala
##########
@@ -138,21 +137,37 @@ private[ml] abstract class LSHModel[T <: LSHModel[T]]
    // Limit the use of hashDist since it's controversial
    val hashDistUDF = udf((x: Seq[Vector]) => hashDistance(x, keyHash), DataTypes.DoubleType)
    val hashDistCol = hashDistUDF(col($(outputCol)))
-
-    // Compute threshold to get around k elements.
-    // To guarantee to have enough neighbors in one pass, we need (p - err) * N >= M
-    // so we pick quantile p = M / N + err
-    // M: the number of nearest neighbors; N: the number of elements in dataset
-    val relativeError = 0.05
-    val approxQuantile = numNearestNeighbors.toDouble / count + relativeError
    val modelDatasetWithDist = modelDataset.withColumn(distCol, hashDistCol)
-    if (approxQuantile >= 1) {
-      modelDatasetWithDist
+
+    if (numNearestNeighbors < 1000) {
Review comment:
Hm, why would we predicate this on numNearestNeighbors? I'm not clear on why a
priority queue helps particularly when this is small, vs. a quantile; both are
doing something fairly similar, so I'd expect one or the other to be
consistently faster or slower. I also generally imagine this argument will be
smallish, so if this approach is good for < 1000 and not bad for 10000 or so,
why not just use the queue?
I understood it when the idea was to collect() small data sets and directly
pick the top k.
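
For context, a minimal sketch of the bounded-priority-queue idea under
discussion (this is not the PR's code; TopKSmallest, the (distance, rowId)
pairing, and the insert/merge names are illustrative). The point is that a
size-k max-heap retains the k smallest hash distances in a single pass over N
rows in O(N log k), whereas the quantile path estimates a distance threshold
with approxQuantile and then filters.

import scala.collection.mutable

// Illustrative only: retain the k smallest (distance, rowId) pairs seen so
// far with a size-bounded max-heap, the same idea behind Spark's internal
// BoundedPriorityQueue and RDD.takeOrdered.
final class TopKSmallest(k: Int) {
  // Max-heap on distance: the largest retained distance sits at the head
  // and is evicted when a closer candidate arrives.
  private val heap =
    mutable.PriorityQueue.empty[(Double, Long)](Ordering.by[(Double, Long), Double](_._1))

  def insert(dist: Double, rowId: Long): this.type = {
    if (heap.size < k) {
      heap.enqueue((dist, rowId))
    } else if (dist < heap.head._1) {
      heap.dequeue()          // drop the current worst candidate
      heap.enqueue((dist, rowId))
    }
    this
  }

  def merge(other: TopKSmallest): this.type = {
    other.heap.foreach { case (d, id) => insert(d, id) }
    this
  }

  // The k nearest candidates, closest first.
  def result: Seq[(Double, Long)] = heap.toSeq.sortBy(_._1)
}

In the distributed case, one such queue per partition could be combined with a
treeAggregate-style merge, which is roughly what RDD.takeOrdered does
internally; in the collect() variant mentioned above, the driver would simply
feed every (distance, id) pair into a single queue.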