For float16, float32 and float64, just cast twice. From memory, Theano won't remove the first cast. If it does, you could disable that optimization.
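A minimal sketch of the double cast, assuming a Theano version with float16 support (the variable names are just for illustration):

    import theano
    import theano.tensor as T

    x = T.dmatrix('x')  # float64 input
    # Round-trip through float16: the values snap to the float16 grid,
    # but the result is float64 again.
    x_q = T.cast(T.cast(x, 'float16'), 'float64')
    # If the optimizer ever merged the two casts, compiling with a less
    # aggressive mode (e.g. mode='FAST_COMPILE') is one way to keep them.
    f = theano.function([x], x_q)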
For the other, can you show me the Python code?

On Thu, Feb 2, 2017 at 12:39, Michael Klachko <[email protected]> wrote:

> I'm also interested in how to do that efficiently. Currently, when I want
> to quantize weights, I pull them from the GPU using get_value(), quantize
> them in Python, and then import them back to the GPU with set_value().
> But, of course, this is very slow. For binary quantization, I can use a
> Theano expression:
>
>     Wb = T.cast(T.switch(W, 1, -1), theano.config.floatX)
>
> Any suggestions?
>
> On Sunday, July 10, 2016 at 8:45:18 PM UTC-7, Kan Kawabata wrote:
>
> > Hello, I am trying to study the effect of quantization error in the
> > input and was wondering if there is any Theano function that allows me
> > to round a tensor value to its x-bit representation (e.g. round from
> > float64 to float16 representation but keep the tensor as float64 type).
> > I'm not sure what is the best way to go about this in Theano and would
> > appreciate any insight.
> >
> > Thank you,
> > Kan Kawabata
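For Michael's question, here is a minimal sketch of how the quantization could stay on the GPU, assuming W is a theano.shared variable (the name and shape below are made up for illustration). Compiling the quantization step into a Theano function with an updates pair avoids the get_value()/set_value() round-trip through host memory. Note that it binarizes by sign using T.ge(W, 0); T.switch(W, 1, -1) as written maps every nonzero entry to 1 and only exact zeros to -1.

    import numpy as np
    import theano
    import theano.tensor as T

    # Hypothetical weight matrix stored as a shared variable (resident on
    # the GPU when a GPU backend is active).
    W = theano.shared(
        np.random.randn(784, 256).astype(theano.config.floatX), name='W')

    # Binarize by sign, entirely inside the graph: +1 where W >= 0, else -1.
    Wb = T.cast(T.switch(T.ge(W, 0), 1, -1), theano.config.floatX)

    # The updates pair makes the whole quantization run on the device,
    # with no transfer back to Python.
    quantize_W = theano.function([], [], updates=[(W, Wb)])

    quantize_W()  # W now holds the binarized values

The same pattern should extend to the double-cast rounding above: put the quantized expression on the right-hand side of the updates pair.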
