Doing computation with Float16s is not really reasonable; the IEEE standard 
describes it as a format intended only for storage.
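
If memory is the concern, one pattern that usually works (a rough sketch, not 
benchmarked; the sizes and array names below are just illustrative) is to keep 
the data in Float16 for storage and widen to Float32 for the actual arithmetic. 
As far as I know, Float32 matrix multiplication goes through BLAS, while the 
Float16 version falls back to a generic Julia routine:

    # Store the data as Float16 to halve the memory footprint.
    A16 = rand(Float16, 1000, 1000)
    B16 = rand(Float16, 1000, 1000)

    # Widen once, do the arithmetic in single precision, narrow the result.
    A32 = convert(Matrix{Float32}, A16)
    B32 = convert(Matrix{Float32}, B16)
    C16 = convert(Matrix{Float16}, A32 * B32)   # Float32 matmul uses BLAS

Whether the widen/narrow round trip pays off depends on the sizes involved, so 
it is worth timing both versions on your own data.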


> On Dec 23, 2014, at 9:23 AM, Tobias Knopp <[email protected]> wrote:
> 
> I suppose that the fft limitation is due to FFTW supporting only Float32 
> and Float64.
> 
> I am not sure if SIMD supports Float16. If not, you should not expect any 
> speed gains.
> 
> Cheers
> 
> Tobi
> 
> On Tuesday, December 23, 2014 11:56:46 UTC+1, Mark B wrote:
>> 
>> I was wondering how Julia supports half-precision operations. It seems it 
>> does (more or less), but I'm not sure whether there's a lot of type 
>> conversion going on behind the scenes, with associated overhead. Would it 
>> be more efficient to store and crunch on Float32s?
>> 
>> julia> rand(Float16,2,2) * rand(Float16,2,2)
>> 2x2 Array{Float16,2}:
>>  0.58301  1.0508
>>  0.48145  0.73438
>> 
>> julia> sparse(rand(Float16,2,2))
>> 2x2 sparse matrix with 4 Float16 entries:
>>         [1, 1]  =  0.448
>>         [2, 1]  =  0.15771
>>         [1, 2]  =  0.79932
>>         [2, 2]  =  0.50928
>> 
>> julia> fft(rand(Float16,2,2))
>> 2x2 Array{Complex{Float64},2}:
>>    1.76245+0.0im  -0.0603027+0.0im
>>  -0.129639+0.0im   -0.390869+0.0im
>> 
>> Oops for the last one - is fft always double precision?
>> 
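
On the fft question: as Tobi says, FFTW only has single- and double-precision 
backends, so a Float16 input apparently gets promoted all the way to Float64, 
which is what your output shows. If single precision is enough, converting to 
Float32 first should keep the whole transform in that precision (a sketch 
against the same fft call used above; I have not timed it):

    x16 = rand(Float16, 2, 2)

    # Promote only to Float32 so FFTW's single-precision routines are used;
    # the result should then be Complex{Float32} rather than Complex{Float64}.
    x32 = convert(Matrix{Float32}, x16)
    y   = fft(x32)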
