[ https://issues.apache.org/jira/browse/FLINK-1745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14942703#comment-14942703 ]

ASF GitHub Bot commented on FLINK-1745:
---------------------------------------

Github user danielblazevski commented on the pull request:

    https://github.com/apache/flink/pull/1220#issuecomment-145364401
  
    Thanks @chiwanpark for the very useful comments.  I have made changes in 
response to the comments, which can be found here:
    
https://github.com/danielblazevski/flink/tree/FLINK-1745/flink-staging/flink-ml/src/main/scala/org/apache/flink/ml/nn
    
    I also updated the tests for KNN + QuadTree, which can be found here:
    
https://github.com/danielblazevski/flink/tree/FLINK-1745/flink-staging/flink-ml/src/test/scala/org/apache/flink/ml/nn
    
    Since useQuadTree is now a parameter, KNNQuadTreeSuite is no longer 
needed, so I removed it.
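
    For reference, here is a minimal sketch of how the useQuadTree parameter 
might be declared in flink-ml's Parameter style (names and the empty default 
are illustrative, not necessarily the exact PR code):

    ```scala
    import org.apache.flink.ml.common.Parameter

    // Hypothetical declaration; the PR may name or default this differently.
    case object UseQuadTree extends Parameter[Boolean] {
      // No default here: when unset, the choice can be made automatically
      // from the training data (see comment 6 below).
      val defaultValue: Option[Boolean] = None
    }
    ```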
    
    I have not addressed comment 6 yet.  I need the training set before I 
can determine a default useQuadTree when the user does not specify one, so 
the main `if (useQuadTree)` branch has to live inside 
`val crossed = trainingSet.cross(inputSplit).mapPartition {`
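
    Concretely, something along these lines is what I have in mind (a sketch 
only; the knnQuery* helpers and the dimension heuristic are placeholders, 
not the actual PR code):

    ```scala
    import org.apache.flink.api.scala._
    import org.apache.flink.ml.math.Vector
    import org.apache.flink.util.Collector

    // Simplified sketch: assume the cross yields (training point, query point)
    // pairs and the knnQuery* helpers implement the two search strategies.
    val crossed = trainingSet.cross(inputSplit).mapPartition {
      (iter: Iterator[(Vector, Vector)], out: Collector[(Vector, Vector)]) =>
        val pairs = iter.toVector
        val training = pairs.map(_._1)
        // The automatic choice can only be made here, after the cross,
        // because it depends on the training points in this partition.
        val useQuadTree = training.nonEmpty && training.head.size <= 4
        if (useQuadTree) {
          knnQueryWithQuadTree(training, pairs, out) // placeholder helper
        } else {
          knnQueryBasic(training, pairs, out)        // placeholder helper
        }
    }
    ```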
    
    About your last "P.S." comment: creating the quadtree after the cross 
operation is likely more efficient -- each CPU/node forms its own quadtree, 
which is what is suggested for the R-tree here:
    https://www.cs.utah.edu/~lifeifei/papers/mrknnj.pdf
    
    This results in less communication overhead than building a single 
global quadtree, if that is what you were referring to.
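
    To illustrate the local construction (illustrative only; the QuadTree 
names follow the classes under flink-ml/.../nn, but the exact constructor 
signature and the boundingBox helper are assumptions):

    ```scala
    // Inside the mapPartition above: each parallel instance builds its own
    // tree from the training points it received via the cross, so no tree
    // structure is ever shipped between nodes.
    val (minVec, maxVec) = boundingBox(training) // assumed helper: the box
                                                 // enclosing this partition
    val tree = new QuadTree(minVec, maxVec)
    training.foreach(tree.insert)
    // kNN queries in this partition then probe only this local tree.
    ```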


> Add exact k-nearest-neighbours algorithm to machine learning library
> --------------------------------------------------------------------
>
>                 Key: FLINK-1745
>                 URL: https://issues.apache.org/jira/browse/FLINK-1745
>             Project: Flink
>          Issue Type: New Feature
>          Components: Machine Learning Library
>            Reporter: Till Rohrmann
>            Assignee: Daniel Blazevski
>              Labels: ML, Starter
>
> Even though the k-nearest-neighbours (kNN) [1,2] algorithm is quite 
> trivial, it is still used as a means to classify data and to do regression. 
> This issue focuses on the implementation of an exact kNN (H-BNLJ, H-BRJ) 
> algorithm as proposed in [2].
> Could be a starter task.
> Resources:
> [1] [http://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm]
> [2] [https://www.cs.utah.edu/~lifeifei/papers/mrknnj.pdf]


