Hi, did you solve this? Could you tell us how you did it? I would 
appreciate it.

On Tuesday, December 8, 2015 at 21:57:35 (UTC+1), J Zam wrote:
>
> Hi,
>
> This may have been asked before, but I haven't found an answer for it in 
> the existing topics. I'm trying to apply dropout to an MLP with a linear 
> regression layer as output. My question is about the dropout component; 
> after looking around, I have my dropout function as:
>
> import theano
> import theano.tensor as T
> from theano.tensor.shared_randomstreams import RandomStreams
>
> def drop(input, rng, p=0.5):
>     # rng is a numpy RandomState, used only to seed Theano's RNG
>     srng = RandomStreams(rng.randint(999999))
>     # keep each unit with probability (1 - p), then rescale the survivors
>     # so the expected value of the output matches the input
>     mask = srng.binomial(n=1, p=1.-p, size=input.shape)
>     return input * T.cast(mask, theano.config.floatX) / (1.-p)
>
> I'm not sure I understand correctly, but why is there a need to divide by 
> (1. - p)?
>
> Also, I have been reading that there is a need for re-scaling of weights 
> when dropout is applied:
>
>
> http://christianherta.de/lehre/dataScience/machineLearning/neuralNetworks/Dropout.php
> http://arxiv.org/pdf/1207.0580v1.pdf (A.1)
>
> and I'm not sure at what step to do this or what it accomplishes.
>
> I'm trying to get my head around it, and any help would be appreciated.
>
> Thanks!
>
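
For anyone else landing on this thread, here is how I understand the two 
questions, written as a minimal sketch rather than a definitive answer; the 
dense_layer helper and the seed value below are made up for illustration and 
are not code from the thread. The division by (1. - p) is what is often 
called "inverted dropout": each unit is kept with probability (1 - p), so 
dividing the surviving activations by (1 - p) keeps their expected value 
equal to the un-dropped activation, and the trained network can then be used 
unchanged at test time. The weight re-scaling described in the Hinton et al. 
paper (A.1) is the alternative scheme: drop units during training without 
dividing, and instead multiply the outgoing weights (or, equivalently, the 
activations) by (1 - p) once at test time, so the next layer sees the same 
expected input it saw during training. You do one or the other, not both.

import numpy
import theano
import theano.tensor as T
from theano.tensor.shared_randomstreams import RandomStreams

rng = numpy.random.RandomState(1234)        # arbitrary seed
srng = RandomStreams(rng.randint(999999))

def drop(x, p=0.5):
    # Inverted dropout: keep each unit with probability (1 - p) and
    # rescale the survivors by 1/(1 - p), so E[drop(x)] == x and no
    # extra rescaling is needed at test time.
    mask = srng.binomial(n=1, p=1. - p, size=x.shape,
                         dtype=theano.config.floatX)
    return x * mask / (1. - p)

def dense_layer(x, W, b, p=0.5, train=True):
    # Hypothetical hidden layer showing where the two options differ.
    h = T.tanh(T.dot(x, W) + b)
    if train:
        # Option 1 (inverted dropout, as in drop() above): divide by
        # (1 - p) here and do nothing special at test time.
        return drop(h, p)
    # Test time with option 1: use the layer as-is.
    # With option 2 (the paper's A.1 scheme) drop() would not divide
    # during training, and here you would instead return h * (1. - p),
    # or equivalently scale W by (1 - p) once after training.
    return h

Since the binomial mask is created directly with floatX dtype, the explicit 
T.cast from the original function is not needed in this version, though 
keeping it does no harm.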
