GitHub user avulanov commented on the issue:

    https://github.com/apache/spark/pull/13621
  
    @sethah Thank you for posting the results of your experiment! They look interesting. It is hard to say how well it works without numerical results for a particular application, e.g. the classification error rate. Could you compute the classification error rate on the MNIST test data with and without autoencoder pre-training in Spark? I did this a while ago for a network with two hidden layers of 300 and 100 neurons. Autoencoder pre-training made it possible to improve over standard training and reach the error rate reported at http://yann.lecun.com/exdb/mnist/. Another useful application of the autoencoder is unsupervised learning; in that case, it would be interesting to compare the losses of the sigmoid and relu autoencoders on the validation set. Would you mind checking this?
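
    For reference, a minimal sketch of the "without pre-training" baseline using the existing spark.ml MultilayerPerceptronClassifier, runnable in spark-shell. The MNIST LibSVM paths are placeholders, and the topology matches the 300/100 network mentioned above:

    ```scala
    import org.apache.spark.ml.classification.MultilayerPerceptronClassifier
    import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("mnist-error-rate").getOrCreate()

    // MNIST converted to LibSVM format (784 features, labels 0-9);
    // the file paths are placeholders.
    val train = spark.read.format("libsvm").load("mnist.train.libsvm")
    val test  = spark.read.format("libsvm").load("mnist.test.libsvm")

    // 784 inputs, hidden layers of 300 and 100 neurons, 10 output classes.
    val mlp = new MultilayerPerceptronClassifier()
      .setLayers(Array(784, 300, 100, 10))
      .setBlockSize(128)
      .setMaxIter(100)
      .setSeed(11L)

    val model = mlp.fit(train)

    val accuracy = new MulticlassClassificationEvaluator()
      .setMetricName("accuracy")
      .evaluate(model.transform(test))

    // Error rate = 1 - accuracy, comparable to the table at
    // http://yann.lecun.com/exdb/mnist/
    println(s"Test error rate: ${(1.0 - accuracy) * 100}%")
    ```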
    
    Autoencoder pre-training is also used for deep networks that do not converge otherwise due to the vanishing gradient issue. There is an example of this use case in the unit test, and a rough sketch of the idea below.
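
    A sketch of that idea, assuming `pretrainedWeights` is a flattened weight vector produced by autoencoder pre-training (hypothetical here; it stands in for whatever this PR produces, see the unit test for the actual wiring). `setInitialWeights` is the existing spark.ml parameter:

    ```scala
    import org.apache.spark.ml.classification.MultilayerPerceptronClassifier
    import org.apache.spark.ml.linalg.Vector

    // Hypothetical placeholder: a flattened vector of all layer weights
    // produced by greedy layer-wise autoencoder pre-training
    // (not this PR's actual API).
    val pretrainedWeights: Vector = ???

    // A deeper topology that tends not to converge from a random init;
    // seeding it through the existing initialWeights param, instead of
    // random initialization, is the point of pre-training.
    val deepMlp = new MultilayerPerceptronClassifier()
      .setLayers(Array(784, 500, 300, 100, 10))
      .setInitialWeights(pretrainedWeights)
      .setMaxIter(100)
    ```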

