So I'm going to reply to my own question in case it helps anyone else out. Had another look at the paper and I had forgotten about the contrastive wake-sleep algorithm. That's what's used to fine-tune the network completely unsupervised.
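For anyone who lands on this thread later, here is a rough NumPy sketch of what one contrastive wake-sleep (up-down) fine-tuning step looks like for a small DBN with a single directed layer below a top-level RBM. The layer sizes, learning rate and variable names are made up for illustration, CD-1 stands in for a longer Gibbs chain, and it's a sketch of the idea rather than a drop-in implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
sample = lambda p: (rng.random(p.shape) < p).astype(p.dtype)

# Illustrative sizes: visible -> h1 (directed) -> h2 (top-level RBM with h1).
n_v, n_h1, n_h2, lr = 784, 500, 250, 0.01

R1 = rng.normal(0, 0.01, (n_v, n_h1))   # recognition weights v -> h1
G1 = rng.normal(0, 0.01, (n_h1, n_v))   # generative weights h1 -> v
W  = rng.normal(0, 0.01, (n_h1, n_h2))  # undirected weights of the top RBM
b_v, b_h1_rec, b_h1_gen, b_h2 = (np.zeros(n_v), np.zeros(n_h1),
                                 np.zeros(n_h1), np.zeros(n_h2))

def up_down_step(v0):
    """One contrastive wake-sleep (up-down) update for a batch v0."""
    global R1, G1, W, b_v, b_h1_rec, b_h1_gen, b_h2  # globals just for brevity
    n = v0.shape[0]

    # Wake phase: recognition pass upwards, then train the generative
    # weights to reconstruct the layer below (simple delta rule).
    h1_p = sigmoid(v0 @ R1 + b_h1_rec)
    h1_s = sample(h1_p)
    v_gen = sigmoid(h1_s @ G1 + b_v)
    G1  += lr * h1_s.T @ (v0 - v_gen) / n
    b_v += lr * (v0 - v_gen).mean(axis=0)

    # Top-level RBM: one step of contrastive divergence between h1 and h2.
    h2_p  = sigmoid(h1_s @ W + b_h2)
    h2_s  = sample(h2_p)
    h1_np = sigmoid(h2_s @ W.T + b_h1_gen)   # "negative" h1
    h1_ns = sample(h1_np)
    h2_np = sigmoid(h1_ns @ W + b_h2)
    W    += lr * (h1_s.T @ h2_p - h1_ns.T @ h2_np) / n
    b_h2 += lr * (h2_p - h2_np).mean(axis=0)
    b_h1_gen += lr * (h1_s - h1_ns).mean(axis=0)

    # Sleep phase: generate a "dream" sample downwards, then train the
    # recognition weights to infer the states that generated it.
    v_sleep = sample(sigmoid(h1_ns @ G1 + b_v))
    h1_rec  = sigmoid(v_sleep @ R1 + b_h1_rec)
    R1       += lr * v_sleep.T @ (h1_ns - h1_rec) / n
    b_h1_rec += lr * (h1_ns - h1_rec).mean(axis=0)
```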
On Tuesday, 12 July 2016 15:40:48 UTC+1, Jim O' Donoghue wrote:
> Hi there,
>
> Just wondering how you would fine-tune a DBN for a completely unsupervised task, i.e. a practical implementation of "Fine-tune all the parameters of this deep architecture with respect to a proxy for the DBN log-likelihood".
>
> Would this be something like, for example, a negative log likelihood between the original input and the reconstruction of the data when propagated entirely up and down the network? What makes the final layer an RBM and the rest just normally directed? Or would the only way you can do this be to completely un-roll the network and fine-tune it like a deep autoencoder (as in "Reducing the Dimensionality of Data with Neural Networks")?
>
> Many thanks,
> Jim
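For completeness, the other route mentioned in the question (un-rolling the DBN into a deep autoencoder and fine-tuning on a reconstruction cost, as in the Hinton & Salakhutdinov paper) would look roughly like the Theano sketch below. The layer sizes and names are invented, and to keep it short I tie the decoder to the transposed encoder weights, whereas the paper unties them into separate copies before fine-tuning.

```python
import numpy as np
import theano
import theano.tensor as T

rng = np.random.default_rng(0)

# Hypothetical pre-trained DBN weights/biases, one entry per layer
# (in practice these come from the greedy layer-wise RBM pre-training).
sizes = [784, 500, 250, 30]
Ws = [theano.shared(rng.normal(0, 0.01, (a, b)))
      for a, b in zip(sizes[:-1], sizes[1:])]
bs_enc = [theano.shared(np.zeros(b)) for b in sizes[1:]]
bs_dec = [theano.shared(np.zeros(a)) for a in sizes[:-1]]

x = T.matrix('x')

# Encoder: the DBN's bottom-up pass.
h = x
for W, b in zip(Ws, bs_enc):
    h = T.nnet.sigmoid(T.dot(h, W) + b)

# Decoder: the "un-rolled" top-down pass using the transposed weights.
r = h
for W, b in zip(reversed(Ws), reversed(bs_dec)):
    r = T.nnet.sigmoid(T.dot(r, W.T) + b)

# Fine-tune everything on the reconstruction cross-entropy,
# a proxy for the negative log-likelihood of binary inputs.
cost = T.nnet.binary_crossentropy(r, x).sum(axis=1).mean()
params = Ws + bs_enc + bs_dec
grads = T.grad(cost, params)
lr = 0.01
updates = [(p, p - lr * g) for p, g in zip(params, grads)]
finetune_step = theano.function([x], cost, updates=updates)
```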