Hi there,

Just wondering how you would fine-tune a DBN for a completely unsupervised 
task, i.e. a practical implementation of "Fine-tune all the parameters of 
this deep architecture with respect to a proxy for the DBN log-likelihood". 

Would this be something like, for example, minimising a negative 
log-likelihood between the original input and the reconstruction obtained 
by propagating the data all the way up and then back down the network, 
with the final layer kept as an RBM and the rest treated as ordinary 
directed layers (e.g. something like the sketch below)? Or is the only way 
to do this to completely unroll the network and fine-tune it as a deep 
autoencoder (as in "Reducing the Dimensionality of Data with Neural 
Networks")?
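To make the first option concrete, here is a minimal Theano sketch of what 
I mean: a deterministic up pass, then a down pass with tied (transposed) 
weights, fine-tuned on the cross-entropy reconstruction cost as the NLL 
proxy. The layer sizes and variable names are made up, and the weights 
would really be loaded from pre-training; note that with tied weights this 
is arguably already the unrolled-autoencoder view, so the two options may 
coincide in practice:

import numpy as np
import theano
import theano.tensor as T

rng = np.random.RandomState(0)
floatX = theano.config.floatX

def init_W(n_in, n_out, name):
    # Placeholder for weights you would load from RBM pre-training.
    return theano.shared(np.asarray(
        rng.uniform(-0.01, 0.01, (n_in, n_out)), dtype=floatX), name=name)

sizes = [784, 500, 500, 100]   # hypothetical layer sizes
Ws = [init_W(a, b, 'W%d' % i)
      for i, (a, b) in enumerate(zip(sizes[:-1], sizes[1:]))]
b_up   = [theano.shared(np.zeros(n, dtype=floatX)) for n in sizes[1:]]
b_down = [theano.shared(np.zeros(n, dtype=floatX)) for n in sizes[:-1]]

x = T.matrix('x')

# Up pass: deterministic mean-field activations, layer by layer.
h = x
for W, b in zip(Ws, b_up):
    h = T.nnet.sigmoid(T.dot(h, W) + b)

# Down pass with tied (transposed) weights back to a reconstruction.
v = h
for W, b in zip(reversed(Ws), reversed(b_down)):
    v = T.nnet.sigmoid(T.dot(v, W.T) + b)

# Cross-entropy between input and reconstruction: the NLL proxy in question.
cost = T.nnet.binary_crossentropy(v, x).sum(axis=1).mean()

params  = Ws + b_up + b_down
grads   = T.grad(cost, params)
updates = [(p, p - 0.01 * g) for p, g in zip(params, grads)]
finetune = theano.function([x], cost, updates=updates)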

Many thanks,
Jim


