I'm also interested in how to do that efficiently. Currently, when I want 
to quantize weights, I pull them from the GPU with get_value(), quantize 
them in Python, and then push them back to the GPU with set_value(). But, 
of course, this is very slow. For binary quantization, I can use a Theano 
expression:

Wb = T.cast(T.switch(T.ge(W, 0), 1, -1), theano.config.floatX)  # +1 where W >= 0, else -1
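
One idea for keeping the whole round trip on the GPU is to compile a Theano 
function that updates the shared variable in place. A rough sketch (assuming 
W is a theano.shared weight matrix; the shape here is just for illustration):

import numpy as np
import theano
import theano.tensor as T

W = theano.shared(np.random.randn(784, 256).astype(theano.config.floatX))

# binarize by sign entirely on the GPU by updating the shared variable
Wb = T.cast(T.switch(T.ge(W, 0), 1, -1), theano.config.floatX)
quantize_W = theano.function([], updates=[(W, Wb)])

quantize_W()  # no get_value()/set_value() round trip through host memory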


Any suggestions?


On Sunday, July 10, 2016 at 8:45:18 PM UTC-7, Kan Kawabata wrote:
>
> Hello, I am trying to study the effect of quantization error in the input 
> and was wondering if there is any Theano function that allows me to round 
> a tensor value to its x-bit representation (e.g., round from float64 to 
> float16 representation but keep the tensor as float64 type). I'm not sure 
> of the best way to go about this in Theano and would appreciate any 
> insight.
>
> Thank you,
>
> Kan Kawabata
>
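
Incidentally, for the original question above: one way to round to a 
lower-precision representation while keeping the tensor's dtype might be 
to cast down and back up. A sketch (float16 support may depend on your 
Theano version, and the k-bit fixed-point variant is only an illustration):

import theano
import theano.tensor as T

x = T.dmatrix('x')  # stays float64 throughout

# round to the nearest float16-representable value, keep dtype float64
x_f16 = T.cast(T.cast(x, 'float16'), 'float64')

# alternatively, simulate a k-bit fixed-point fraction (k chosen arbitrarily)
k = 8
x_fix = T.round(x * 2 ** k) / 2 ** k

f = theano.function([x], [x_f16, x_fix])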
