Github user avulanov commented on a diff in the pull request:

    https://github.com/apache/spark/pull/1290#discussion_r19719009
  
    --- Diff: docs/mllib-ann.md ---
    @@ -0,0 +1,223 @@
    +---
    +layout: global
    +title: Artificial Neural Networks - MLlib
    +displayTitle: <a href="mllib-guide.html">MLlib</a> - Artificial Neural Networks
    +---
    +
    +# Introduction
    +
    +This document describes MLlib's Artificial Neural Network (ANN) implementation.
    +
    +The implementation currently consists of the following files:
    +
    +* 'ArtificialNeuralNetwork.scala': implements the ANN
    +* 'ANNSuite': implements automated tests for the ANN and its gradient
    +* 'ANNDemo': a demo that approximates three functions and shows a graphical representation of
    +the result
    +
    +# Summary of usage
    +
    +The "ArtificialNeuralNetwork" object is used as an interface to the neural network. It is
    +called as follows:
    +
    +```
    +val annModel = ArtificialNeuralNetwork.train(rdd, hiddenLayersTopology, maxNumIterations)
    --- End diff --
    
    @manishamde given the size of the PR, @mengxr suggested splitting it into multiple PRs. There is an implementation of a classifier based on this artificial neural network: https://github.com/avulanov/spark/tree/annclassifier. It employs `RDD[LabeledPoint]` and implements MLlib's `Classifier`. A softmax output and cross-entropy error are usually used for better classification performance, but they are not yet implemented. We've discussed this issue with @bgreeven, and our thinking is to have an interface in this PR that allows setting a different error function and optimizer. Does that make sense?
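    To illustrate what a softmax output with cross-entropy error would compute, here is a minimal, self-contained Scala sketch. It is not part of the PR's `ArtificialNeuralNetwork` code; the object and function names are hypothetical and Spark-free, shown only to make the discussed error function concrete:
    
    ```scala
    // Hypothetical sketch of the softmax output layer and cross-entropy error
    // discussed above; not the actual MLlib implementation.
    object SoftmaxSketch {
    
      // Softmax: shift by the max for numerical stability, exponentiate, normalize.
      def softmax(z: Array[Double]): Array[Double] = {
        val m = z.max
        val exps = z.map(v => math.exp(v - m))
        val s = exps.sum
        exps.map(_ / s)
      }
    
      // Cross-entropy error of predicted probabilities p against a one-hot target.
      def crossEntropy(p: Array[Double], target: Array[Double]): Double =
        -p.zip(target).map { case (pi, ti) => ti * math.log(pi) }.sum
    }
    ```
    
    The pairing is attractive because the gradient of cross-entropy with respect to the softmax inputs reduces to the simple difference `p - target`, which is one reason it tends to train classifiers better than a squared-error output.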

