Also, note that this happens during the compilation of a Theano function. It seems related to this code:
<ipython-input-50-125142517723> in <module>()
      2 print(np.random.uniform(10000, 10001, 96).shape)
      3 test_var = theano.shared(np.random.uniform(10000, 10001, 96).reshape(2, 3, 4, 4))
----> 4 theano.function([], theano.tensor.nnet.relu(test_var))()

Fred

On Mon, Apr 10, 2017 at 8:44 AM Ramana Subramanyam <[email protected]> wrote:
> Hi,
> This is the traceback I'm getting when I tried to compute ReLU with bigger
> values (as it was reported in OpenAI Gym that ReLU from tensor.nnet.relu
> isn't stable,
> https://github.com/openai/improved-gan/blob/master/mnist_svhn_cifar10/nn.py#L12
> ): http://dpaste.com/28DM3WX
> I tried on CPU and it works as expected.
>
> Regards,
> Ramana
>
> --
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> For more options, visit https://groups.google.com/d/optout.
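As a hedged sketch of what the snippet above exercises, the same test data can be checked on CPU with plain NumPy, comparing the direct `maximum(x, 0)` formulation of ReLU against the algebraically equivalent `0.5 * (x + |x|)` form (the latter is one way ReLU is sometimes expressed in graph libraries; whether Theano's GPU path uses it is an assumption, not confirmed by this thread):

```python
import numpy as np

def relu_max(x):
    # Direct formulation: elementwise max(x, 0)
    return np.maximum(x, 0)

def relu_abs(x):
    # Alternative formulation: 0.5 * (x + |x|)
    # (hypothetical stand-in for an internal rewrite; for illustration only)
    return 0.5 * (x + np.abs(x))

# Same data as the reported reproduction: values in [10000, 10001)
x = np.random.uniform(10000, 10001, 96).reshape(2, 3, 4, 4)

# In float64 on CPU both forms agree exactly for these magnitudes
print(np.array_equal(relu_max(x), relu_abs(x)))  # → True
```

On CPU in float64 the two forms match, consistent with Ramana's report that the CPU path works as expected; the failure in the thread occurs only when compiling the Theano function for GPU.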
