This is more of a Python question, I suppose, but since I have to feed the 
data into a Theano CNN, I think it's ultimately relevant here. 

I have 14k training and 167k test images, flattened to vectors of size 200*200 = 40k.
 
I am trying to pickle this data. I tried just the training set initially, but 
the process terminates without completing. 
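For context on why pickling might fail, here is a rough size estimate (assuming float32 pixels, which is my guess, not stated above), plus a sketch of saving with `np.save` instead of pickle; the file path is hypothetical:

```python
import os
import tempfile
import numpy as np

# Rough memory estimate for the 14k training set described above
# (assuming float32 pixels; adjust dtype to match the real data).
n_train, dim = 14_000, 200 * 200
train_bytes = n_train * dim * np.dtype(np.float32).itemsize
print(f"train set: {train_bytes / 1e9:.1f} GB")  # ~2.2 GB

# np.save writes the raw array without pickle's overhead;
# the path below is just an illustrative temp-file location.
X_train = np.zeros((100, dim), dtype=np.float32)  # stand-in for real data
path = os.path.join(tempfile.gettempdir(), "train.npy")
np.save(path, X_train)

# np.load with mmap_mode reads lazily from disk, so the full
# array never has to fit in memory at once.
X = np.load(path, mmap_mode="r")
print(X.shape)
```

With `mmap_mode="r"`, minibatches can be sliced out of `X` on demand rather than loading everything up front.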

Is this too much data? Should I be doing some sort of resizing/dimensionality 
reduction before trying to train and test on it?

If so, any recommended methods?
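For instance, one option I've considered (just an illustration, not something confirmed to work for this task) is downscaling each image before flattening, e.g. with Pillow:

```python
import numpy as np
from PIL import Image

# Downscale a 200x200 image to 64x64 before flattening, cutting the
# vector size from 40k to 64*64 = 4096. The target size is arbitrary
# here; it would need tuning against verification accuracy.
img = Image.fromarray(np.random.randint(0, 256, (200, 200), dtype=np.uint8))
small = img.resize((64, 64), Image.BILINEAR)
vec = np.asarray(small, dtype=np.float32).ravel()
print(vec.shape)  # (4096,)
```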

Also, each resultant image is the difference of two images (verification for 
face recognition), so if I apply some sort of dimensionality reduction, should 
that be done before or after taking the difference?
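One observation that might bear on this (my reasoning, not an established answer): a purely linear reduction commutes with taking differences, so the order wouldn't matter in that case; centering or whitening steps would break this. A small numeric check, with arbitrary illustrative shapes:

```python
import numpy as np

# A linear projection P (e.g. PCA components without centering)
# satisfies P @ (a - b) == P @ a - P @ b, so projecting the
# difference equals the difference of the projections.
rng = np.random.default_rng(0)
P = rng.standard_normal((50, 40_000))   # hypothetical 40k -> 50 projection
a = rng.standard_normal(40_000)
b = rng.standard_normal(40_000)
lhs = P @ (a - b)
rhs = P @ a - P @ b
print(np.allclose(lhs, rhs))  # True
```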

Any help would be appreciated!

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
