Hi all, Theano newbie here.

I'm trying to train an MLP on the MNIST dataset with mini-batch SGD (on 
CPU), following the deeplearning.net tutorial.
The default batch size is 20, but when I launch the script, after 2 
epochs I run out of RAM.
I noticed that every time train_model(index) is called, something is stored 
in RAM. If I train on all of the training data at once, this doesn't happen.
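For context, the loop I'm running is essentially the tutorial's index-based mini-batch SGD. Here is a minimal NumPy sketch of the same pattern (illustrative only; the real script uses Theano shared variables and a compiled train_model(index) function, and the sizes below are made up):

```python
import numpy as np

# Minimal NumPy sketch of the tutorial's mini-batch SGD loop on a
# softmax classifier; the actual script compiles train_model(index)
# with Theano and slices shared variables via `givens`.
rng = np.random.default_rng(0)
n_samples, n_features, n_classes = 200, 50, 10
batch_size = 20
n_batches = n_samples // batch_size

X = rng.standard_normal((n_samples, n_features))
y = rng.integers(0, n_classes, size=n_samples)

W = np.zeros((n_features, n_classes))
b = np.zeros(n_classes)
lr = 0.1

for epoch in range(2):
    for index in range(n_batches):
        # Slice one mini-batch by index, as train_model(index) does
        xb = X[index * batch_size:(index + 1) * batch_size]
        yb = y[index * batch_size:(index + 1) * batch_size]
        # Softmax forward pass (stabilized by subtracting the row max)
        logits = xb @ W + b
        logits -= logits.max(axis=1, keepdims=True)
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        # Gradient of the mean negative log-likelihood over the batch
        p[np.arange(batch_size), yb] -= 1.0
        W -= lr * (xb.T @ p) / batch_size
        b -= lr * p.mean(axis=0)
```

The point is just that each call processes one fixed-size slice, so memory use per call should be constant rather than growing.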

What am I missing?

Thanks.

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.