[
https://issues.apache.org/jira/browse/FLINK-1745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15101101#comment-15101101
]
ASF GitHub Bot commented on FLINK-1745:
---------------------------------------
Github user chiwanpark commented on the pull request:
https://github.com/apache/flink/pull/1220#issuecomment-171852976
@danielblazevski, I think we can use the `crossWithTiny` and `crossWithHuge`
methods to reduce the shuffle cost. The best approach would be to count the
elements in both datasets and pick the cross method automatically, but for now
we can simply add a parameter to decide it, like the following:
```scala
import org.apache.flink.api.common.operators.base.CrossOperatorBase.CrossHint

class KNN {
  // ...
  def setSizeHint(sizeHint: CrossHint): KNN = {
    parameters.add(SizeHint, sizeHint)
    this
  }
  // ...
}

object KNN {
  // ...
  case object SizeHint extends Parameter[CrossHint] {
    val defaultValue: Option[CrossHint] = None
  }
  // ...
}
```
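For context, the parameter pattern used above can be sketched in plain Scala. Everything here is a simplified stand-in, not the real FlinkML code: the actual classes are `org.apache.flink.ml.common.{Parameter, ParameterMap}` and `CrossOperatorBase.CrossHint`.

```scala
// Self-contained sketch of a FlinkML-style Parameter/ParameterMap pattern.
// All names are illustrative stand-ins for the real Flink classes.
object ParameterSketch {
  trait Parameter[T] { val defaultValue: Option[T] }

  class ParameterMap {
    private var map = Map.empty[Parameter[_], Any]

    def add[T](parameter: Parameter[T], value: T): ParameterMap = {
      map += (parameter -> value)
      this
    }

    // Falls back to the parameter's default when no value was set.
    def get[T](parameter: Parameter[T]): Option[T] =
      map.get(parameter).map(_.asInstanceOf[T]).orElse(parameter.defaultValue)
  }

  // Stand-in for CrossOperatorBase.CrossHint.
  sealed trait CrossHint
  case object FirstIsSmall extends CrossHint

  // Defaulting to None means "no hint": the caller falls back to a plain cross.
  case object SizeHint extends Parameter[CrossHint] {
    val defaultValue: Option[CrossHint] = None
  }
}
```

Keeping the default as `None` (rather than picking a hint) leaves the behavior unchanged for existing users who never call `setSizeHint`.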
We can then use the parameter in the `predictValues` method:
```scala
val crossTuned = sizeHint match {
  case Some(hint) if hint == CrossHint.FIRST_IS_SMALL =>
    trainingSet.crossWithHuge(inputSplit)
  case Some(hint) if hint == CrossHint.SECOND_IS_SMALL =>
    trainingSet.crossWithTiny(inputSplit)
  case _ => trainingSet.cross(inputSplit)
}

val crossed = crossTuned.mapPartition {
  // ...
}
// ...
```
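The match reads: if the first (training) side is small, the second side must be treated as the huge one, and vice versa. That selection logic can be checked in isolation with a self-contained sketch (plain Scala; Flink's `CrossHint` and `DataSet` are replaced by illustrative stand-ins, and the function returns the name of the cross variant instead of calling it):

```scala
// Self-contained sketch of the hint-to-strategy selection above.
object CrossStrategySketch {
  // Stand-ins for CrossOperatorBase.CrossHint values.
  sealed trait CrossHint
  case object FIRST_IS_SMALL extends CrossHint
  case object SECOND_IS_SMALL extends CrossHint

  // Mirrors the match in predictValues: the hint describes which operand is
  // small, while crossWithTiny/crossWithHuge describe the *second* operand.
  def chooseStrategy(sizeHint: Option[CrossHint]): String = sizeHint match {
    case Some(FIRST_IS_SMALL)  => "crossWithHuge" // second side is the big one
    case Some(SECOND_IS_SMALL) => "crossWithTiny" // second side is the small one
    case _                     => "cross"         // no hint: plain cross
  }
}
```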
We still have to settle on a name for the added parameter (`SizeHint`) and add
documentation explaining which dataset is the first (training) and which is the
second (testing).
By the way, there is no documentation for k-NN yet. Could you add documentation
to the `docs/ml` directory?
> Add exact k-nearest-neighbours algorithm to machine learning library
> --------------------------------------------------------------------
>
> Key: FLINK-1745
> URL: https://issues.apache.org/jira/browse/FLINK-1745
> Project: Flink
> Issue Type: New Feature
> Components: Machine Learning Library
> Reporter: Till Rohrmann
> Assignee: Daniel Blazevski
> Labels: ML, Starter
>
> Even though the k-nearest-neighbours (kNN) [1,2] algorithm is quite trivial,
> it is still used as a means to classify data and to do regression. This issue
> focuses on the implementation of an exact kNN (H-BNLJ, H-BRJ) algorithm as
> proposed in [2].
> Could be a starter task.
> Resources:
> [1] [http://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm]
> [2] [https://www.cs.utah.edu/~lifeifei/papers/mrknnj.pdf]
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)