zhengruifeng commented on a change in pull request #26415: [SPARK-18409][ML]
LSH approxNearestNeighbors should use approxQuantile instead of sort
URL: https://github.com/apache/spark/pull/26415#discussion_r348883859
##########
File path: mllib/src/main/scala/org/apache/spark/ml/feature/LSH.scala
##########
@@ -112,7 +112,9 @@ private[ml] abstract class LSHModel[T <: LSHModel[T]]
numNearestNeighbors: Int,
singleProbe: Boolean,
distCol: String): Dataset[_] = {
- require(numNearestNeighbors > 0, "The number of nearest neighbors cannot be less than 1")
+ val count = dataset.count()
Review comment:
Sorry for the late reply.
1. Since `approxNearestNeighbors` queries with only a single key, it is likely to be called many times in practice. Is `val count = dataset.count()` too expensive to recompute on every call? Could it be precomputed somewhere? (A rough sketch of what I mean is at the end of this comment.)
2. Do we really need to require `numNearestNeighbors <= count`? Compare with the behavior of Scala's and RDD's `take`, which simply return fewer elements when fewer are available:
```
scala> Array(1).take(2)
res1: Array[Int] = Array(1)
```
```
scala> val rdd = sc.range(0, 1)
rdd: org.apache.spark.rdd.RDD[Long] = MapPartitionsRDD[1] at range at <console>:24

scala> rdd.count
res0: Long = 1

scala> rdd.take(3)
res1: Array[Long] = Array(0)
```
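For point 1, here is a minimal sketch of the kind of precomputation I have in mind. The class and member names (`CountedQueries`, `datasetCount`, `cachedCount`) are hypothetical and not part of the current `LSHModel` API, and the sketch assumes the same dataset is queried repeatedly:
```
import org.apache.spark.sql.Dataset

// Hypothetical sketch only: lazily cache the dataset size so that repeated
// single-key approxNearestNeighbors calls do not each pay for a full count() job.
class CountedQueries(dataset: Dataset[_]) {
  @transient private var cachedCount: Long = -1L

  // Runs one Spark job on the first call, then reuses the cached value.
  def datasetCount: Long = {
    if (cachedCount < 0) {
      cachedCount = dataset.count()
    }
    cachedCount
  }
}
```
The require/threshold logic could then read the cached value instead of calling `dataset.count()` on every query; whether that caching belongs on the model or on the caller's side is open for discussion.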