Prasen,

I thought the whole point of the RBM approach to autoencoders / dimensionality reduction was the stacked approach: you don't need to train each layer to full convergence, but instead train layer by layer with the "contrastive divergence" technique Hinton advocates, and then fine-tune at the end. I wouldn't imagine you'd get very good relevance out of a single layer.
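For reference, the layer-by-layer scheme described above can be sketched as follows. This is a minimal, illustrative CD-1 implementation in numpy — not Mahout code and not Hinton's reference implementation; all names (`cd1_update`, `pretrain_stack`, `lr`, etc.) are made up for this sketch, and it assumes binary visible and hidden units:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b_vis, b_hid, lr=0.1, rng=None):
    """One contrastive-divergence (CD-1) step on a batch of visible vectors."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Positive phase: hidden probabilities given the data, then a binary sample.
    h0_prob = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    # Negative phase: one Gibbs step back to a reconstruction.
    v1_prob = sigmoid(h0 @ W.T + b_vis)
    h1_prob = sigmoid(v1_prob @ W + b_hid)
    # Approximate gradient: data correlations minus reconstruction correlations.
    n = v0.shape[0]
    W += lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / n
    b_vis += lr * (v0 - v1_prob).mean(axis=0)
    b_hid += lr * (h0_prob - h1_prob).mean(axis=0)
    return W, b_vis, b_hid

def pretrain_stack(data, layer_sizes, epochs=10, lr=0.1):
    """Greedy layer-by-layer pretraining: each RBM trains on the hidden
    activations of the layer below -- no full convergence per layer."""
    rng = np.random.default_rng(0)
    v, layers = data, []
    for n_hid in layer_sizes:
        W = 0.01 * rng.standard_normal((v.shape[1], n_hid))
        b_vis, b_hid = np.zeros(v.shape[1]), np.zeros(n_hid)
        for _ in range(epochs):
            W, b_vis, b_hid = cd1_update(v, W, b_vis, b_hid, lr, rng)
        layers.append((W, b_hid))
        v = sigmoid(v @ W + b_hid)  # propagate up as input to the next layer
    return layers
```

The point of the sketch is the outer loop: each layer only gets a handful of cheap CD-1 updates rather than full maximum-likelihood convergence, and supervised fine-tuning (not shown) would happen once after all layers are pretrained.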
-jake

On Fri, Dec 4, 2009 at 8:37 AM, prasenjit mukherjee <[email protected]> wrote:
> I did try out on some sample data where my visible layer was Linear
> and hidden layer was StochasticBinary. Using a single-layer RBM
> didn't give me great results. I guess I should try out the stacked
> RBM approach.
>
> BTW, has anybody used a single-layer RBM on a doc X term probability
> matrix (aka continuous visible layer) with values 0-1 for
> collaborative filtering?
>
> -Prasen
>
> On Thu, Dec 3, 2009 at 12:40 AM, Olivier Grisel
> <[email protected]> wrote:
> > 2009/12/2 Jake Mannix <[email protected]>:
> >> Prasen,
> >>
> >> I was just talking about this on here last week. Yes, RBM-based
> >> clustering can be viewed as a nonlinear SVD. I'm pretty interested
> >> in your findings on this. Do you have any RBM code you care to
> >> contribute to Mahout?
> >
> > Hi,
> >
> > I have some C + Python code for stacking autoencoders, which shares
> > similar features with DBNs (stacked RBMs), here:
> > http://bitbucket.org/ogrisel/libsgd/wiki/Home
> >
> > This is still very much a work in progress; I will let you know when
> > I have easy-to-run sample demos.
> >
> > However, this algo is not trivially mapreducable, but I plan to
> > investigate that in the coming weeks. Would be nice to have a pure
> > JVM version too. I am also planning to play with Clojure + Incanter
> > (with the parallelcolt library as a backend for linear algebra) to
> > make it easier to work with Hadoop.
> >
> > --
> > Olivier
> > http://twitter.com/ogrisel - http://code.oliviergrisel.name
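The setup Prasen describes in the quoted thread — a linear (Gaussian) visible layer over 0-1 values with stochastic binary hidden units — changes only the negative phase: the reconstruction is a linear function of the hidden states instead of a sigmoid. A hedged one-step sketch, assuming unit-variance Gaussian visibles (all names illustrative, not from Mahout or Prasen's code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gaussian_cd1(v0, W, b_vis, b_hid, lr=0.01, rng=None):
    """CD-1 step for a Gaussian-visible / binary-hidden RBM (unit variance)."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Hidden units stay stochastic binary, as in the binary-binary case.
    h0_prob = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    # Linear visible reconstruction: just the mean, no squashing.
    v1 = h0 @ W.T + b_vis
    h1_prob = sigmoid(v1 @ W + b_hid)
    n = v0.shape[0]
    W += lr * (v0.T @ h0_prob - v1.T @ h1_prob) / n
    b_vis += lr * (v0 - v1).mean(axis=0)
    b_hid += lr * (h0_prob - h1_prob).mean(axis=0)
    return W, b_vis, b_hid
```

Note the smaller learning rate: with an unbounded linear visible layer, Gaussian-visible RBMs are typically less stable to train than the binary-binary variant, which may be one reason the single-layer results were disappointing.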
