There is some cleanup, but only at process shutdown and only for old modules. We stop using a module for 1 week before deleting it. So if you have an experiment running for more than 1 week, in theory, it could happen.
Tell us if it happens again.

Fred

On Mon, Apr 10, 2017 at 8:44 AM Ramana Subramanyam <[email protected]> wrote:
> Hi,
> This is the traceback I'm getting when I tried to compute ReLU with bigger
> values (as it was reported in OpenAI Gym that ReLU from tensor.nnet.relu
> isn't stable,
> https://github.com/openai/improved-gan/blob/master/mnist_svhn_cifar10/nn.py#L12
> ): http://dpaste.com/28DM3WX
> I tried on CPU and it works as expected.
>
> Regards,
> Ramana
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> For more options, visit https://groups.google.com/d/optout.
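For readers wondering what "ReLU isn't stable for bigger values" can mean in practice: the thread doesn't show Theano's internal GPU implementation, but one common way a rectifier is written is 0.5 * (x + |x|), and that form can overflow before the final halving, while maximum(x, 0) cannot. Here is a minimal NumPy sketch of that general concern (an illustration only, not a claim about Theano's actual code), using float16 so the overflow is easy to trigger:

```python
import numpy as np

# Illustrative only: the "doubled" rectifier 0.5 * (x + |x|) computes
# x + |x| = 2x for positive x, which can overflow in low precision
# before the multiplication by 0.5 brings it back into range.
# maximum(x, 0) never produces a value larger than x, so it is safe.
x = np.float16(60000.0)  # close to float16's maximum of ~65504

with np.errstate(over="ignore"):
    doubled_form = np.float16(0.5) * (x + np.abs(x))  # 2x overflows to inf
stable_form = np.maximum(x, np.float16(0.0))          # stays finite

print(doubled_form)  # inf
print(stable_form)   # 60000.0
```

In float32 the same overflow needs much larger inputs, which is consistent with a problem that only shows up "with bigger values" and may behave differently between CPU and GPU code paths.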
