(and I realize that my code is really just resultmx[r,c] = !b[r,c], since r only ranges over rows where a[r,c] is a stored true entry, but I wanted to focus on the timing of equivalent boolean operators.)
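
Concretely, that simplified version is sketched below. This is my sketch only, using the same 0.4-era calls as the quoted code (sub, spzeros); the function name is mine, and it assumes every stored entry of a is true, which is what sprandbool produces:

function nonimpl_simplified(a::SparseMatrixCSC{Bool}, b::SparseMatrixCSC{Bool})
    (m, n) = size(a)
    resultmx = spzeros(Bool, m, n)
    for c = 1:n
        # rows stored in column c of a; a[r,c] is true for each of them,
        # so a[r,c] & !b[r,c] reduces to !b[r,c]
        for r in sub(a.rowval, a.colptr[c]:a.colptr[c+1]-1)
            resultmx[r, c] = !b[r, c]
        end
    end
    return resultmx
end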

On Tuesday, September 8, 2015 at 3:21:35 PM UTC-7, Seth wrote:
>
> Hi all,
>
> I ran into some puzzling performance today with sparse matrices. I defined
>
> _column(a::AbstractSparseArray, i::Integer) = sub(a.rowval, a.colptr[i]:a.colptr[i+1]-1)
>
> # material nonimplication
> ⊅(p::Bool, q::Bool) = p & !q 
>
> function ⊅(a::SparseMatrixCSC, b::SparseMatrixCSC) 
>     (m,n) = size(a) 
>     resultmx = spzeros(Bool,m,n) 
>     for c = 1:n 
>         for r in _column(a,c) 
>             # info("row $r, col $c") 
>             resultmx[r,c] = ⊅(a[r,c], b[r,c]) 
>         end 
>     end 
>     return resultmx 
> end
>
> a = sprandbool(1000000,1000000,0.0001) 
> b = sprandbool(1000000,1000000,0.0001)
>
>
> and ran it in comparison with the & operator:
>
> julia> @time z = a ⊅ b;
>  15.272250 seconds (1.10 M allocations: 73.870 MB)
>
> julia> @time z = a & b;   # this is still going ~10 minutes later.
>
>
> It seems strange that my home-grown function is orders of magnitude more 
> efficient than a built-in boolean primitive. Am I missing something?
>

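One more note on the quoted loop: each resultmx[r,c] = ... write is a sparse setindex!, which inserts a new stored entry by shifting the stored rowval/nzval arrays, so the cost of each insertion grows with the number of entries already stored. A sketch of how one could build the result's CSC arrays directly instead is below. This is my own sketch, not how Base implements &; the function name is mine, and it again assumes all stored entries of a and b are true:

function nonimpl_direct(a::SparseMatrixCSC{Bool}, b::SparseMatrixCSC{Bool})
    (m, n) = size(a)
    colptr = Array(Int, n + 1)   # column pointers for the result
    rowval = Int[]               # row indices of stored (true) results
    colptr[1] = 1
    for c = 1:n
        ia = a.colptr[c]
        ib = b.colptr[c]
        while ia < a.colptr[c+1]
            ra = a.rowval[ia]
            # skip b's stored rows in this column that sort before ra
            while ib < b.colptr[c+1] && b.rowval[ib] < ra
                ib += 1
            end
            # keep ra only if b has no stored (true) entry at (ra, c)
            if ib >= b.colptr[c+1] || b.rowval[ib] != ra
                push!(rowval, ra)
            end
            ia += 1
        end
        colptr[c+1] = length(rowval) + 1
    end
    return SparseMatrixCSC(m, n, colptr, rowval, fill(true, length(rowval)))
end

Because rowval is sorted within each column, a single forward pass over the two columns gives the set difference of their stored rows.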