Hi, 
No, I compile outside the loop; the call to the function worked perfectly 
fine for 20 epochs, and on the 21st this happened. Does Theano wipe its 
cache on its own? If so, I think that could explain what happened. 
Also, I resumed my training and it has reached around 30 epochs with no 
problems in the current run. 
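For context on the stable-ReLU workaround discussed below: max(x, 0) is mathematically equivalent to 0.5 * (x + |x|), and reformulations of that kind are sometimes used when a backend's maximum-based ReLU misbehaves on large inputs. A minimal sketch in plain NumPy (illustrative only — this is not the Theano GPU code path, and the exact workaround in the linked OpenAI file may differ):

```python
import numpy as np

def relu_max(x):
    # The usual formulation: elementwise maximum with zero.
    return np.maximum(x, 0.0)

def relu_abs(x):
    # Equivalent reformulation: 0.5 * (x + |x|).
    return 0.5 * (x + np.abs(x))

# Both agree even for very large magnitudes.
x = np.array([-1e30, -2.5, 0.0, 3.0, 1e30])
print(relu_max(x))
print(relu_abs(x))
```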

Ramana

On Wednesday, April 12, 2017 at 2:41:09 AM UTC+5:30, nouiz wrote:
>
> The error happens during the compilation of a Theano function. 
>
> Do you compile a new Theano function at each epoch?
>
> On Mon, Apr 10, 2017 at 8:44 AM Ramana Subramanyam <[email protected]> 
> wrote:
>
>> Hi,
>> This is the traceback I'm getting when I tried to compute ReLU with 
>> larger values (it was reported in OpenAI Gym that the ReLU from 
>> tensor.nnet.relu isn't stable, 
>> https://github.com/openai/improved-gan/blob/master/mnist_svhn_cifar10/nn.py#L12
>>  
>> ): http://dpaste.com/28DM3WX
>> I tried it on the CPU and it works as expected.
>>
>> Regards, 
>> Ramana
>>
>> -- 
>>
>> --- 
>> You received this message because you are subscribed to the Google Groups 
>> "theano-users" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to [email protected].
>> For more options, visit https://groups.google.com/d/optout.
>>
>
