Hi there,

I have been reading the documentation, as well as the discussions, on using 
float16 instead of float32 to reduce the memory footprint on a single GPU.

My task is to train a 3D convolutional neural network, which has a large 
memory footprint (as a result of the feature cubes and their gradients).

How should I switch from my current float32 setup to float16, if that is 
possible?

Is there a step-by-step guide for doing so?

Simply updating the configuration file with floatX = float16 breaks the 
script; the exact change I made, and a sketch of the failing pattern, are 
below.
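For reference, the only change in my .theanorc is:

    [global]
    floatX = float16

(I have not touched the device setting; if float16 also requires the new 
gpuarray backend, i.e. device = cuda, that may be part of my problem.)

And here is a stripped-down sketch of the kind of code that works for me 
under float32 but fails after the change (illustrative only; this is not my 
actual 3D network, and all names are placeholders):

    import numpy as np
    import theano
    import theano.tensor as T

    # symbolic input; T.matrix picks its dtype up from config.floatX
    x = T.matrix('x')

    # shared weights cast to the configured dtype
    w = theano.shared(
        np.random.randn(64, 32).astype(theano.config.floatX), name='w')

    # toy forward pass, loss, and gradient step
    loss = T.nnet.sigmoid(T.dot(x, w)).sum()
    g = T.grad(loss, w)
    train = theano.function([x], loss, updates=[(w, w - 0.01 * g)])

    # one training step on a random batch
    batch = np.random.randn(8, 64).astype(theano.config.floatX)
    train(batch)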


Many thanks,

Wilson
