Hi,
Somewhere else I saw a comment saying it doesn't perform well with 
bigger values, though I can't recall where. I will try to reproduce with 
some big random values and cross-check against a NumPy implementation; 
if the results don't match, I will ask Salimans. 
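Concretely, the cross-check I have in mind looks something like the sketch below (pure NumPy; `relu_maximum` and `relu_half_abs` are just my placeholder names, and the `0.5 * (x + abs(x))` form is what I understand `tensor.nnet.relu` computes when `alpha=0` -- please correct me if that's wrong):

```python
import numpy as np

# Big float32 values near the dtype limit, plus ordinary ones.
x = np.array([3e38, -1.0, 2.5], dtype=np.float32)

def relu_maximum(v):
    # Reference formulation: elementwise max with zero.
    return np.maximum(v, 0)

def relu_half_abs(v):
    # Mirrors the 0.5 * (x + abs(x)) formulation; v + abs(v)
    # overflows float32 once v exceeds half the dtype max.
    return 0.5 * (v + abs(v))

with np.errstate(over="ignore"):
    ref = relu_maximum(x)
    alt = relu_half_abs(x)

print(ref)  # finite everywhere
print(alt)  # first entry overflows to inf
```

So a mismatch on big inputs would show up as `inf` in the second formulation while the plain `maximum` stays finite.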
Deleting the cache solves this, but it happens quite often, and today it 
affected the training process: 20 epochs run as expected, then on the 
21st I see this error. I trained the network in a notebook and haven't 
faced the issue when running my code as a Python file. 

Regards, 
Ramana

On Tuesday, April 11, 2017 at 8:43:48 PM UTC+5:30, nouiz wrote:
>
> It would be great to know why they don't like that implementation.
>
> I don't know why you get this error. Can you delete your Theano cache and 
> try again?
>
> Fred
>
> On Mon, Apr 10, 2017 at 8:44 AM Ramana Subramanyam <[email protected]> wrote:
>
>> Hi,
>> This is the traceback I'm getting when I tried to compute ReLU with 
>> bigger values (the OpenAI improved-gan code notes that ReLU from 
>> tensor.nnet.relu isn't stable, 
>> https://github.com/openai/improved-gan/blob/master/mnist_svhn_cifar10/nn.py#L12
>>  
>> ): http://dpaste.com/28DM3WX
>> I tried on CPU and it works as expected.
>>
>> Regards, 
>> Ramana
>>
>> -- 
>>
>> --- 
>> You received this message because you are subscribed to the Google Groups 
>> "theano-users" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to [email protected].
>> For more options, visit https://groups.google.com/d/optout.
>>
>
