Hi everyone,

I am generating samples from a DBN that has been pretrained by stacking k 
RBMs on top of one another in an unsupervised way.
After pretraining, to generate samples from the whole network I think I 
should do a forward pass, sampling each RBM's hidden units given its 
visible units, and then a backward pass, sampling the visible units given 
the hidden values.
I start the chain by feeding either a random visible configuration or a 
zero vector into the first RBM.
This counts as one step of my Gibbs chain; I would first perform several 
burn-in steps and only start taking samples after that.
Is this the correct way to generate N samples from a DBN?

Mimicking the code in DeepLearningTutorials/code/DBN.py, I would do 
something like this:
def gibbs_vhv(self, v0_sample):
    # Upward pass: sample each RBM's hidden layer given the layer below.
    sample = v0_sample
    for rbm in self.rbm_layers:
        pre_sigmoid_h, h_mean, h_sample = rbm.sample_h_given_v(sample)
        sample = h_sample

    # Downward pass: sample each RBM's visible layer given the layer above.
    for rbm in reversed(self.rbm_layers):
        pre_sigmoid_v, v_mean, v_sample = rbm.sample_v_given_h(sample)
        sample = v_sample

    # `sample` is now a visible-layer configuration of the bottom RBM.
    return sample
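
To actually collect the N samples, I would drive this with something like the 
loop below. This is just a sketch of what I have in mind: sample_from_dbn, 
n_burn_in and sample_every are names I made up, and I assume gibbs_vhv takes 
and returns a numeric visible vector (e.g. a NumPy array) rather than a 
symbolic Theano expression:

import numpy as np

def sample_from_dbn(dbn, n_samples, n_visible,
                    n_burn_in=1000, sample_every=100, rng=None):
    # Hypothetical driver loop, not part of the tutorial code.
    rng = rng if rng is not None else np.random.RandomState(123)

    # Start the chain from a random binary visible configuration
    # (a zero vector would work as well).
    v = rng.binomial(n=1, p=0.5, size=n_visible).astype('float32')

    # Burn-in: run the chain so it forgets its starting point.
    for _ in range(n_burn_in):
        v = dbn.gibbs_vhv(v)

    # Keep one state every `sample_every` steps to reduce the
    # correlation between consecutive samples.
    samples = []
    for _ in range(n_samples):
        for _ in range(sample_every):
            v = dbn.gibbs_vhv(v)
        samples.append(v.copy())
    return np.stack(samples)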

My main issue is that the resulting samples are very similar to one another. 
For instance, when I train the DBN on MNIST, the generated digits are only 
slight variations of the same digit class.
Am I doing something wrong? How could I generate more diverse samples?
