[ https://issues.apache.org/jira/browse/FLINK-1745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14978968#comment-14978968 ]
ASF GitHub Bot commented on FLINK-1745:
---------------------------------------
Github user danielblazevski commented on the pull request:
https://github.com/apache/flink/pull/1220#issuecomment-151942911
@tillrohrmann I now have more time to go back and try to finalize this PR
in the next couple of weeks. After debugging a bit, I noticed that in your
modification of `partitionBox`, the variable `center` differs before and
after the call to `partitionBox` in `makeChildren`. For example, in
`makeChildren` I added a few lines to print to the console:
```scala
println("center before partitioning = " + center)
val cPart = partitionBox(center, width)
println("cPart = " + cPart)
val mappedWidth = 0.5 * width.asBreeze
children = cPart.map(p => new Node(p, mappedWidth.fromBreeze, null))
println("center after partitioning = " + center)
```
The output to console is
```
center before partitioning = DenseVector(0.0, 0.0)
cPart = List(DenseVector(-0.5, -0.25), DenseVector(-0.5, 0.25), DenseVector(0.5, -0.25), DenseVector(0.5, 0.25))
center after partitioning = DenseVector(0.5, 0.25)
```
So the output `cPart` looks good, but the value of `center` after
partitioning should still be `(0.0, 0.0)`. I'm confused as to how it even gets
changed to `(0.5, 0.25)`, the final entry of `cPart`, and hence it's not clear
to me how to fix this. I imagine it should be an easy fix; of course I could use
a hack and update `center` to be the average of `cPart`, but that seems wasteful,
since `center` for a given node should not change.
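One plausible explanation (just a guess, since I haven't traced `partitionBox` line by line): Breeze `DenseVector`s are mutable, and in-place operators such as `:+=` modify the underlying array. If the child centers are built by repeatedly shifting the `center` argument in place, the caller's `center` ends up pointing at the same data as the last child center, which would match the `(0.5, 0.25)` output above. A minimal, self-contained sketch of that aliasing effect (the object name `CenterMutationDemo` is just for illustration, not code from the PR):
```scala
import breeze.linalg.DenseVector

object CenterMutationDemo {
  def main(args: Array[String]): Unit = {
    val center = DenseVector(0.0, 0.0)
    val alias  = center                  // no copy: both names share one array

    alias :+= DenseVector(0.5, 0.25)     // in-place addition mutates the shared data

    println("alias  = " + alias)         // DenseVector(0.5, 0.25)
    println("center = " + center)        // DenseVector(0.5, 0.25) -- also changed

    // Working on a defensive copy leaves the caller's vector untouched.
    val center2 = DenseVector(0.0, 0.0)
    val local   = center2.copy
    local :+= DenseVector(0.5, 0.25)
    println("center2 = " + center2)      // still DenseVector(0.0, 0.0)
  }
}
```
If that is indeed what happens, having `partitionBox` start from `center.copy` (or build each child center with the non-mutating `+`) should keep a node's `center` stable without resorting to the averaging hack.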
> Add exact k-nearest-neighbours algorithm to machine learning library
> --------------------------------------------------------------------
>
> Key: FLINK-1745
> URL: https://issues.apache.org/jira/browse/FLINK-1745
> Project: Flink
> Issue Type: New Feature
> Components: Machine Learning Library
> Reporter: Till Rohrmann
> Assignee: Daniel Blazevski
> Labels: ML, Starter
>
> Even though the k-nearest-neighbours (kNN) [1, 2] algorithm is quite trivial,
> it is still used as a means to classify data and to do regression. This issue
> focuses on the implementation of an exact kNN (H-BNLJ, H-BRJ) algorithm as
> proposed in [2].
> Could be a starter task.
> Resources:
> [1] [http://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm]
> [2] [https://www.cs.utah.edu/~lifeifei/papers/mrknnj.pdf]