First of all, AutoRec updates only the parts of the weight matrix that are 
connected to observed visible units, not the entire weight matrix on every update.
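
The masked-update idea can be sketched without `scan` at all, as a plain NumPy forward pass over a single rating row. This is only an illustrative sketch, not the AutoRec paper's exact formulation; the names (`V`, `W`, `mu`, `b`) mirror the code below, and the sizes are toy values:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy sizes: n visible units, k hidden units (values are illustrative).
rng = np.random.default_rng(0)
n, k = 5, 3
V = rng.normal(scale=0.1, size=(k, n))   # encoder weights
W = rng.normal(scale=0.1, size=(n, k))   # decoder weights
mu = np.zeros(k)
b = np.zeros(n)

r = np.array([3.0, 0.0, 5.0, 0.0, 1.0])  # 0 = unobserved rating
mask = (r != 0).astype(r.dtype)

# Forward pass: unobserved inputs are zero, so they contribute nothing
# to the hidden activation; the mask then zeros their reconstruction error.
h = sigmoid(V @ r + mu)
out = sigmoid(W @ h + b)
err = (out - r) * mask                   # error only at observed entries

# Gradient w.r.t. W: rows belonging to unobserved entries come out exactly
# zero, i.e. only weights connected to observed visible units get updated.
d_out = err * out * (1.0 - out)
grad_W = np.outer(d_out, h)
```

Because `err` is zeroed at the unobserved positions, the corresponding rows of `grad_W` are identically zero, which is exactly the selective update described above.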

On Tuesday, 28 June 2016 12:40:30 UTC+2, shashank gupta wrote:
>
> Hi all,
>
> I am trying to implement a sparse autoencoder in Theano for collaborative 
> filtering. Since the data in the CF setting is sparse, the main idea in 
> AutoRec-based CF is to backpropagate the error only to weights whose 
> corresponding input is known. Initially I planned to use Theano's sparse 
> tensors, but I read that they cannot be ported to the GPU, so I decided to 
> implement it using Theano's scan functionality. 
>
>
> I am using the following code for autorec:
>
> input is : user-user rating matrix : n * n (in batches with unknown values 
> filled with zeros)
> Output : user-user dense matrix
>
>    def AE(self, n, k):
>         # Glorot-style uniform initialisation for both weight matrices.
>         w = np.random.uniform(low=-np.sqrt(6 / float(self.n + self.k)),
>                               high=np.sqrt(6 / float(self.n + self.k)),
>                               size=(self.n, self.k)).astype(np.float32)
>         v = np.random.uniform(low=-np.sqrt(6 / float(self.r + self.k)),
>                               high=np.sqrt(6 / float(self.r + self.k)),
>                               size=(self.k, self.r)).astype(np.float32)
>         MU = np.zeros((self.k)).astype(np.float32)
>         B = np.zeros((self.n)).astype(np.float32)
>         # Creating theano shared variables from these
>         W = theano.shared(w, name='W', borrow=True)
>         V = theano.shared(v, name='V', borrow=True)
>         mu = theano.shared(MU, name='mu', borrow=True)
>         b = theano.shared(B, name='b', borrow=True)
>         self.param = [W, V, mu, b]
>         
>         rating = T.matrix()
>         
>         ######################## Theano scan part ##############################
>         def step(rat, W, V, mu, b):
>             res = T.zeros_like(rat)
>             # Indices of the observed (non-zero) ratings in this row.
>             rat_nz = T.neq(rat, 0).nonzero()[0]
>             # Encode and decode using only the observed entries.
>             hidden_activation = T.nnet.sigmoid(T.dot(V[:, rat_nz],
>                                                      rat[rat_nz]) + mu)
>             output_activation = T.nnet.sigmoid(T.dot(W[rat_nz, :],
>                                                      hidden_activation)
>                                                + b[rat_nz])
>             res = T.set_subtensor(res[rat_nz], output_activation)
>             return res
>             
>
>         scan_res, scan_updates = theano.scan(fn=step, outputs_info=None, \
>                                              sequences=[rating],
>                                              non_sequences=[W, V, \
>                                                             mu, b])
>
>         self.loss = T.sum((scan_res - rating) ** 2) + \
>             0.001 * T.sum(W ** 2) + 0.001 * T.sum(V ** 2)
>         grads = T.grad(self.loss, self.param)
>         # lr is the learning rate, assumed to be defined elsewhere.
>         updates = [(param, param - lr * grad)
>                    for (param, grad) in zip(self.param, grads)]
>
>         self.ae_batch = theano.function([rating], self.loss, 
> updates=updates)
>
> Now I am not very familiar with Theano's scan, so could anyone please verify 
> that I am doing the right thing? Thanks.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.