FWIW, the Julia sparse matrix-vector multiplication is very simple. I expect that further speedups are possible by using libraries such as OSKI or MKL. In an ideal world, we would have autotuning kernels in Julia itself.
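[For reference, the "simple" CSC mat-vec mentioned above amounts to iterating the stored entries column by column. This is a hypothetical sketch of the pattern, not the actual Base implementation; `csc_matvec` is a made-up name.]

```julia
using SparseArrays

# Sketch of a CSC matrix-vector product: walk each column's stored
# entries and accumulate into the output vector.
function csc_matvec(A::SparseMatrixCSC, x::Vector)
    y = zeros(promote_type(eltype(A), eltype(x)), size(A, 1))
    rows = rowvals(A)      # row index of each stored entry
    vals = nonzeros(A)     # value of each stored entry
    for j in 1:size(A, 2)
        xj = x[j]
        for k in nzrange(A, j)   # stored-entry range of column j
            y[rows[k]] += vals[k] * xj
        end
    end
    return y
end
```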
-viral

On Monday, June 1, 2015 at 9:12:01 AM UTC-4, Eduardo Lenz wrote:
>
> Hi Andreas. It's me again :0)
>
> It's just an example. I agree that it is quite unfair :0)
>
> But we are solving very large 3D homogenization problems (basically, 3D
> finite elements) and the difference is still very impressive, especially
> if you consider the amount of memory needed to solve such systems with
> cholfact when compared to this iterative method.
>
> This code is a very basic modification of the traditional CG method,
> where the coefficient matrix is changed in order to scale the difference
> between the largest and the smallest eigenvalues. I am really amazed by
> the speed of sparse multiplications in Julia... it's very fast!
>
> On Monday, June 1, 2015 at 10:05:18 AM UTC-3, Andreas Noack wrote:
>>
>> I think the chosen matrix has very good convergence properties for
>> iterative methods, but I agree that iterative methods are very useful to
>> have in Julia. There are already quite a few implementations in
>>
>> https://github.com/JuliaLang/IterativeSolvers.jl
>>
>> I'm not sure if these methods cover the one you chose, so you could have
>> a look and see if there is something to contribute there.
>>
>> On Sunday, May 31, 2015 at 9:37:23 PM UTC-4, Eduardo Lenz wrote:
>>>
>>> Hi.
>>>
>>> One of my students is solving some large sparse systems (more than 20K
>>> equations). The coefficient matrix is symmetric and positive definite,
>>> and very sparse (1% nonzero elements in some cases).
>>>
>>> After playing around a little bit with cholfact, we decided to compare
>>> the time against a very simple implementation of the conjugate gradient
>>> method with diagonal scaling.
>>>
>>> The code is in
>>>
>>> https://gist.github.com/CodeLenz/92086ba37035fe8d9ed8#file-gistfile1-txt
>>>
>>> As an example, the solution of Ax = b for
>>>
>>> julia> A = sprand(10000,10000,0.01); A = A'+A;
>>> A = A + 100*rand()*speye(10000,10000)
>>>
>>> takes 16 seconds with cholfact(A) and 600 milliseconds(!) with DCGC
>>> (tol=1E-10).
>>>
>>> Also, as expected, the memory consumption with CG is very low, allowing
>>> the solution of very large systems.
>>>
>>> The same pattern is observed for different levels of sparsity and for
>>> different random matrices.
>>>
>>> I would like to thank the Julia developers for such an amazing tool!
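[The actual DCGC code is in the gist linked above. As a rough sketch of the general idea, "CG with diagonal scaling" is usually Jacobi-preconditioned CG; the function below is a hypothetical minimal version in current Julia syntax (`cholfact` and `speye` from the thread are pre-1.0 names), not the author's code.]

```julia
using LinearAlgebra, SparseArrays

# Jacobi (diagonal) preconditioned conjugate gradient for SPD A.
# Converges when the residual norm drops below tol * norm(b).
function diag_cg(A::SparseMatrixCSC, b::Vector; tol=1e-10, maxiter=10_000)
    Minv = 1.0 ./ Vector(diag(A))   # inverse of the diagonal preconditioner
    x = zeros(length(b))
    r = b - A * x                   # initial residual (= b, since x = 0)
    z = Minv .* r                   # preconditioned residual
    p = copy(z)
    rz = dot(r, z)
    for _ in 1:maxiter
        Ap = A * p
        alpha = rz / dot(p, Ap)
        x .+= alpha .* p
        r .-= alpha .* Ap
        norm(r) <= tol * norm(b) && return x
        z = Minv .* r
        rz_new = dot(r, z)
        beta = rz_new / rz
        rz = rz_new
        p = z .+ beta .* p
    end
    return x
end
```

As Andreas notes above, a shifted random matrix like this is strongly diagonally dominant, which is close to a best case for diagonal preconditioning; a direct factorization pays for fill-in that CG never incurs.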
