>
> But speaking of writing parallel matrix vector products in native
> Julia, this might be a great use case for shared arrays (although
> right now I think only dense shared arrays exist). Amit, can you
> comment on this?
>

Actually you could use a shared array for both the sparse matrix and output
vector. A CSC/CSR sparse matrix is just 3 dense arrays. If you load up the
matrix row-wise into a shared array, you could assign each worker to do a
subset of row-vector dot products, outputting into a shared vector. This
could potentially work reasonably well as long as the memory bandwidth
scales. You'll have to drop down to a low-level coding style (no more A*b),
but this can probably be implemented in < 15 lines.
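As a rough sketch of what that low-level style might look like (the names `rowptr`, `colval`, `nzval` are illustrative, assuming a CSR layout and that workers sharing memory on one machine have been added with `addprocs`):

```julia
using Distributed, SharedArrays

# CSR sparse matrix stored as three dense shared arrays.
n = 4
rowptr = SharedArray{Int}(n + 1)
rowptr .= [1, 3, 4, 6, 7]               # row i spans rowptr[i]:rowptr[i+1]-1
colval = SharedArray{Int}(6);     colval .= [1, 3, 2, 1, 4, 3]
nzval  = SharedArray{Float64}(6); nzval  .= [2.0, 1.0, 3.0, 4.0, 1.0, 5.0]

b = SharedArray{Float64}(n); b .= [1.0, 2.0, 3.0, 4.0]
y = SharedArray{Float64}(n)             # shared output vector

# Each worker takes a contiguous subset of rows and does
# row-vector dot products, writing into the shared output.
@sync @distributed for i in 1:n
    s = 0.0
    for k in rowptr[i]:(rowptr[i+1] - 1)
        s += nzval[k] * b[colval[k]]
    end
    y[i] = s
end
```

The `@distributed` loop body is the whole trick: because the three CSR arrays and `b` are shared, each worker reads them without copying, and each row of `y` is written by exactly one worker, so no locking is needed.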



> > I see it's possible to link Julia with MKL. I haven't tried this yet,
> but if
> > I do, will A*b (where A is sparse) call MKL to perform the matrix vector
> > product?
>

No, it will not. There was a PR for this (
https://github.com/JuliaLang/julia/pull/4525), so you could probably use
that code. I can't comment on how efficient MKL is for multicore
sparse mat-vec.
