Hi, 

On Tuesday, April 11, 2017 at 8:43:48 PM UTC+5:30, nouiz wrote:
>
> It would be great to know why they don't like that implementation.
>

While training an outdated GAN implementation, I used the relu from Theano 
(tensor.nnet.relu) in the generator, and at around epoch 500 the generator 
loss saturated at 100%. Could it be that the dying-ReLU problem is more 
likely to occur with Theano's implementation? 
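
For context, my understanding of the difference: tensor.nnet.relu computes 
0.5 * (x + abs(x)) when alpha is 0, whereas the improved-gan helper uses a 
maximum-based form. In float32, the intermediate x + abs(x) doubles x and 
can overflow to inf for inputs near the float32 maximum, even though the 
mathematical result is representable. A minimal sketch of what I mean (the 
input value is only illustrative):

    import numpy as np
    import theano
    import theano.tensor as T

    x = T.fvector('x')  # float32 vector, so nothing silently upcasts

    # Theano's formulation: 0.5 * (x + abs(x)) when alpha == 0.
    # The intermediate x + abs(x) doubles x and can overflow float32.
    f_nnet = theano.function([x], T.nnet.relu(x))

    # Maximum-based formulation, in the style of improved-gan's nn.py:
    # no intermediate doubling, so large inputs pass through unchanged.
    f_max = theano.function([x], T.maximum(x, 0))

    big = np.array([3e38], dtype=np.float32)  # float32 max is ~3.4e38
    print(f_nnet(big))  # expect [inf]: 3e38 + 3e38 overflows before the 0.5
    print(f_max(big))   # expect [3.e+38]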


> I don't know why you get this error. Can you delete your Theano cache and 
> try again?
>
> Fred
>
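For anyone else who hits this: the cache can be cleared from the shell with 
the bundled theano-cache script (`theano-cache clear`, or the more 
aggressive `theano-cache purge`). A rough Python equivalent, as a sketch; 
deleting the directory while another Theano process is running can race 
with its lock files:

    import shutil
    import theano

    # Compiled ops live in a cache directory whose location is exposed as
    # theano.config.compiledir. Deleting it forces a full recompile on the
    # next run, similar to what `theano-cache purge` does.
    print(theano.config.compiledir)
    shutil.rmtree(theano.config.compiledir, ignore_errors=True)
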
> On Mon, Apr 10, 2017 at 8:44 AM Ramana Subramanyam <[email protected]> 
> wrote:
>
>> Hi,
>> This is the traceback I'm getting when I tried to compute ReLU with 
>> bigger values (it was reported in OpenAI's improved-gan repository that 
>> the ReLU from tensor.nnet.relu isn't stable, 
>> https://github.com/openai/improved-gan/blob/master/mnist_svhn_cifar10/nn.py#L12 
>> ): http://dpaste.com/28DM3WX
>> I tried on CPU and it works as expected.
>>
>> Regards, 
>> Ramana
>>
