Re: [julia-users] Int Matrix multiplication versus Float Matrix multiplication

2016-09-21 Thread Erik Schnetter
As matrix multiplication costs O(N^3) operations while touching only O(N^2)
elements, and since integers with magnitude up to 2^53 can be represented
exactly as Float64 values, one approach is to convert the matrices to
Float64, multiply them, and then convert back. For 64-bit integers this
might even be the fastest option common hardware allows. For 32-bit
integers, you could investigate whether Float32 suffices as the
intermediate representation (its integers are exact only up to 2^24).
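
A minimal sketch of that round-trip (the helper name is mine, and it assumes
every entry and every accumulated sum stays below 2^53, so the Float64
intermediates are exact):

    # Sketch: Int matmul via a Float64 round-trip. Exact only while every
    # entry and accumulated dot product stays below 2^53.
    function int_matmul_via_float(a::Matrix{Int}, b::Matrix{Int})
        c = Float64.(a) * Float64.(b)   # BLAS-backed Float64 multiply
        return round.(Int, c)           # convert the result back to Int
    end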

-erik

On Wed, Sep 21, 2016 at 7:18 PM, Lutfullah Tomak wrote:

> Float matrix multiplication uses the heavily optimized OpenBLAS, but integer
> matrix multiplication falls back to a generic routine written in Julia. There
> is not much you can do to improve it a lot, because not all CPUs have SIMD
> multiplication and addition for integers.




-- 
Erik Schnetter 
http://www.perimeterinstitute.ca/personal/eschnetter/


[julia-users] Int Matrix multiplication versus Float Matrix multiplication

2016-09-21 Thread Lutfullah Tomak
Float matrix multiplication uses the heavily optimized OpenBLAS, but integer
matrix multiplication falls back to a generic routine written in Julia. There
is not much you can do to improve it a lot, because not all CPUs have SIMD
multiplication and addition for integers.
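
A quick way to see the two code paths side by side (a sketch, assuming the
BenchmarkTools package is installed; the Int case goes through Julia's
generic matmul, the Float64 case through OpenBLAS):

    using BenchmarkTools

    n = 500
    a = rand(1:10, n, n)   # Matrix{Int64}   -> generic Julia matmul
    b = rand(n, n)         # Matrix{Float64} -> OpenBLAS dgemm

    @btime $a * $a;        # integer path
    @btime $b * $b;        # float path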

[julia-users] Int Matrix multiplication versus Float Matrix multiplication

2016-09-21 Thread Joaquim Masset Lacombe Dias Garcia

Does anyone know why the float matrix multiplication is much faster in the
following code?

function test()

  n = 2000
  a = rand(1:10, n, n)   # Matrix{Int64}
  b = rand(n, n)         # Matrix{Float64}
  @time a*a              # integer multiply
  @time b*b              # float multiply

  nothing
end

I get:


julia> test()
  6.715335 seconds (9 allocations: 30.518 MB, 0.00% gc time)
  0.120801 seconds (3 allocations: 30.518 MB, 7.88% gc time)

Thanks!

(btw, I tried using A_mul_B! and the time improvement was not significant...)