srowen commented on a change in pull request #26948: [SPARK-30120][ML] LSH approxNearestNeighbors should use BoundedPriorityQueue when numNearestNeighbors is small
URL: https://github.com/apache/spark/pull/26948#discussion_r360901804
 
 

 ##########
 File path: mllib/src/main/scala/org/apache/spark/ml/feature/LSH.scala
 ##########
 @@ -138,21 +137,37 @@ private[ml] abstract class LSHModel[T <: LSHModel[T]]
       // Limit the use of hashDist since it's controversial
       val hashDistUDF = udf((x: Seq[Vector]) => hashDistance(x, keyHash), DataTypes.DoubleType)
       val hashDistCol = hashDistUDF(col($(outputCol)))
-
-      // Compute threshold to get around k elements.
-      // To guarantee to have enough neighbors in one pass, we need (p - err) * N >= M
-      // so we pick quantile p = M / N + err
-      // M: the number of nearest neighbors; N: the number of elements in dataset
-      val relativeError = 0.05
-      val approxQuantile = numNearestNeighbors.toDouble / count + relativeError
       val modelDatasetWithDist = modelDataset.withColumn(distCol, hashDistCol)
-      if (approxQuantile >= 1) {
-        modelDatasetWithDist
+
+      if (numNearestNeighbors < 1000) {
 
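 (For context, a minimal sketch of the quantile-threshold path being removed, reconstructed from the diff and its comments rather than taken verbatim from LSH.scala; it assumes the surrounding method's modelDataset, hashDistCol, distCol and numNearestNeighbors are in scope:)

```scala
import org.apache.spark.sql.functions.col

// Pick quantile p = M / N + err so that, even with approxQuantile's relative
// error err, at least M = numNearestNeighbors rows fall at or below the
// returned threshold. Example: M = 10, N = 1,000,000, err = 0.05 gives
// p = 0.00001 + 0.05 = 0.05001.
val relativeError = 0.05
val count = modelDataset.count()                              // extra Spark job 1
val approxQuantile = numNearestNeighbors.toDouble / count + relativeError
val modelDatasetWithDist = modelDataset.withColumn(distCol, hashDistCol)
if (approxQuantile >= 1) {
  modelDatasetWithDist
} else {
  val hashThreshold = modelDatasetWithDist.stat
    .approxQuantile(distCol, Array(approxQuantile), relativeError)  // extra job 2
  modelDatasetWithDist.filter(col(distCol) <= hashThreshold(0))
}
```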
 Review comment:
   I think I have the same question as in the other PR: if this is faster than 
the quantile approach for a small number of neighbors, then I'd expect it to be 
faster for everything. I don't know whether it actually is, though; my guess is 
that it wouldn't be. You save the count(), but the count() isn't particularly 
expensive. The question might be how much that saves, and at what scale.
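
For comparison, a minimal sketch of the kind of single-pass top-k retrieval the new numNearestNeighbors branch is aiming at, under the same in-scope assumptions as the sketch above. It illustrates the BoundedPriorityQueue idea via RDD.takeOrdered, which uses one per partition internally; it is not necessarily the exact code this PR adds:

```scala
// Small-k path (sketch): find the k-th smallest hash distance in one pass.
// RDD.takeOrdered keeps a bounded priority queue of at most k elements per
// partition and merges them on the driver, so neither count() nor
// approxQuantile() is needed. Assumes a non-empty dataset.
val kSmallest: Array[Double] = modelDatasetWithDist
  .select(distCol)
  .rdd
  .map(_.getDouble(0))
  .takeOrdered(numNearestNeighbors)
val hashThreshold = kSmallest.last

// Keep every row within the k-th smallest hash distance (ties included);
// the rest of the method can then rank these candidates by the true distance.
modelDatasetWithDist.filter(col(distCol) <= hashThreshold)
```

Either way, the savings relative to the quantile path is the count() job plus the approxQuantile() pass, which is the trade-off the comment above asks to quantify.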
