Following your advice, I tried the code again; this time I also used the MUMPS 
solver from https://github.com/lruthotto/MUMPS.jl 
on a 42x43x44 grid. These are the results:

MUMPS: elapsed time: 2.09091471 seconds
lufact: elapsed time: 5.01038297 seconds (9952832 bytes allocated)
backslash: elapsed time: 16.604061696 seconds (80189136 bytes allocated, 0.45% gc time)

and in Matlab:
Elapsed time is 5.423656 seconds.
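
For anyone who wants to reproduce the comparison, this is roughly the shape of 
the timing script (a sketch only: the file name is illustrative, lufact is 
spelled lu in current Julia, and the exact MUMPS.jl call should be checked 
against that package's README):

using MAT                     # to read the .mat file with the system
# using MUMPS                 # https://github.com/lruthotto/MUMPS.jl

d = matread("M_RHS.mat")      # illustrative file name; "M" and "RHS" as in the thread
M, rhs = d["M"], d["RHS"]

@time x1 = M \ rhs                        # backslash polyalgorithm
@time (F = lufact(M); x2 = F \ rhs)       # UMFPACK via lufact (lu in current Julia)
# @time x3 = solveMUMPS(M, rhs)           # MUMPS.jl; check its README for the exact call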

Thanks a lot Tim and Viral for your quick and helpful comments.

Kind regards,
Ehsan


On Monday, January 5, 2015 9:56:12 PM UTC+1, Viral Shah wrote:
>
> Thanks, that is great. I was wondering about the symmetry checker - we 
> have the naive one currently, but I can just use the CHOLMOD one now. 
>
> -viral 
>
>
>
> > On 06-Jan-2015, at 2:22 am, Tim Davis <[email protected]> wrote: 
> > 
> > oops.  Yes, your factorize function is broken.  You might try mine 
> instead, in my 
> > factorize package. 
> > 
> > I have a symmetry-checker in CHOLMOD.  It checks whether the matrix is 
> > symmetric and has positive diagonal entries.  I think I have a MATLAB interface for it too. 
>  The code is efficient, 
> > since it doesn't form A transpose, and it quits early as soon as 
> asymmetry is detected. 
> > 
> > It does rely on the fact that MATLAB requires its sparse matrices to 
> have sorted row indices 
> > in each column, however. 
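
A rough Julia sketch of that idea (this is not the CHOLMOD code, just an 
illustration of an early-exit check that never forms the transpose and relies 
on sorted row indices in each column; current SparseArrays API, helper name 
is made up):

using SparseArrays   # stdlib in current Julia; in 0.3 this lived in Base

# Binary search for row index i among the stored rows of column j;
# returns the position in rowvals/nonzeros, or 0 if (i, j) is not stored.
function findentry(A::SparseMatrixCSC, i::Integer, j::Integer)
    rv = rowvals(A)
    lo, hi = first(nzrange(A, j)), last(nzrange(A, j))
    while lo <= hi
        mid = (lo + hi) >>> 1
        if rv[mid] < i
            lo = mid + 1
        elseif rv[mid] > i
            hi = mid - 1
        else
            return mid
        end
    end
    return 0
end

# Early-exit check that A is symmetric with a positive stored diagonal.
# It never forms transpose(A) and stops at the first violation it finds.
function is_sym_posdiag(A::SparseMatrixCSC{<:Real})
    m, n = size(A)
    m == n || return false
    rv, nz = rowvals(A), nonzeros(A)
    for j in 1:n
        seen_diag = false
        for p in nzrange(A, j)
            i = rv[p]
            if i == j
                nz[p] > 0 || return false        # diagonal entry must be positive
                seen_diag = true
            else
                q = findentry(A, j, i)           # mirror entry (j, i) in column i
                q == 0 && return false           # structurally unsymmetric: quit early
                nz[q] == nz[p] || return false   # values must match
            end
        end
        seen_diag || return false                # no stored diagonal entry
    end
    return true
end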
> > 
> > On Mon, Jan 5, 2015 at 2:43 PM, Viral Shah <[email protected]> wrote: 
> > Tim - thanks for the reference. The paper will come in handy. This is a 
> longstanding issue that we just haven’t got around to addressing yet, but 
> perhaps now is a good time. 
> > 
> > https://github.com/JuliaLang/julia/issues/3295 
> > 
> > We have a very simplistic factorize() for sparse matrices that must have 
> been implemented as a stopgap. This is what it currently does and that 
> explains everything. 
> > 
> > # placing factorize here for now. Maybe add a new file 
> > function factorize(A::SparseMatrixCSC) 
> >     m, n = size(A) 
> >     if m == n 
> >         Ac = cholfact(A) 
> >         Ac.c.minor == m && ishermitian(A) && return Ac 
> >     end 
> >     return lufact(A) 
> > end 
> > 
> > -viral 
> > 
> > 
> > 
> > > On 06-Jan-2015, at 1:57 am, Tim Davis <[email protected]> wrote: 
> > > 
> > > That does sound like a glitch in the "\" algorithm, rather than in 
> UMFPACK.  The OpenBLAS is pretty good. 
> > > 
> > > This is very nice in Julia: 
> > > 
> > > F = lufact(d["M"]) ; F \ d["RHS"] 
> > > 
> > > That's a great idea to have a factorization object like that.  I have 
> a MATLAB toolbox that does 
> > > the same thing, but it's not a built-in function inside MATLAB.  It's 
> written in M, so it can be slow for 
> > > small matrices.   With it, however, I can do: 
> > > 
> > > F = factorize (A) ;    % does an LU, Cholesky, QR, SVD, or whatever. 
>  Uses my polyalgorithm for "\". 
> > > x = F\b ; 
> > > 
> > > I can do S = inverse(A); which returns a factorization, not an 
> inverse, but with a flag 
> > > set so that S*b does A\b (and yes, S\b would do A*b, since S keeps a 
> copy of A inside it, as well). 
> > > 
> > > You can also specify the factorization, such as 
> > > 
> > >  F = factorize(A, 'lu') ; 
> > > F = factorize(A, 'svd') ; etc. 
> > > 
> > > It's in SuiteSparse/MATLAB_Tools/Factorize, if you're interested. 
>  I've suggested the same 
> > > feature to The MathWorks. 
> > > 
> > > My factorize function includes a backslash polyalgorithm, if you're 
> interested in taking a look. 
> > > 
> > > Algorithm 930: FACTORIZE: an object-oriented linear system solver for 
> MATLAB, T. A. Davis, ACM Transactions on Mathematical Software, Vol. 39, 
> Issue 4, pp. 28:1-28:18, 2013. 
> > > 
> http://faculty.cse.tamu.edu/davis/publications_files/Factorize_an_object_oriented_linear_system_solver_for_MATLAB.pdf
>  
> > > 
> > > On Mon, Jan 5, 2015 at 2:11 PM, Viral Shah <[email protected]> wrote: 
> > > The BLAS will certainly make a difference, but OpenBLAS is reasonably 
> good. 
> > > 
> > > I also wonder what is happening in our \ polyalgorithm. The profile 
> suggests the code is trying Cholesky decomposition, but it really shouldn't 
> since the matrix is not symmetric. If I just do the lufact(), which 
> essentially calls Umfpack, I can match Matlab timing: 
> > > 
> > > @time F = lufact(d["M"]); F \ d["RHS"]; 
> > > 
> > > -viral 
> > > 
> > > 
> > > On Tuesday, January 6, 2015 12:31:34 AM UTC+5:30, Tim Davis wrote: 
> > > The difference could be the BLAS.  MATLAB comes with its own BLAS 
> library, and the performance 
> > > of the BLAS has a huge impact on the performance of UMFPACK, 
> particularly for 3D discretizations. 
> > > 
> > > On Mon, Jan 5, 2015 at 6:21 AM, Ehsan Eftekhari <[email protected]> wrote: 
> > > I'm solving a diffusion equation on a 3D uniform grid (31x32x33) in 
> Matlab and Julia. I use "\" to solve the linear system of 
> equations. Here is the performance of the linear solver in Julia: 
> > > elapsed time: 2.743971424 seconds (35236720 bytes allocated) 
> > > 
> > > and Matlab (I used spparms('spumoni',1) to see what "\" does in 
> Matlab): 
> > > sp\: bandwidth = 1056+1+1056. 
> > > sp\: is A diagonal? no. 
> > > sp\: is band density (0.00) > bandden (0.50) to try banded solver? no. 
> > > sp\: is A triangular? no. 
> > > sp\: is A morally triangular? no. 
> > > sp\: is A a candidate for Cholesky (symmetric, real positive 
> diagonal)? no. 
> > > sp\: use Unsymmetric MultiFrontal PACKage with automatic reordering. 
> > > sp\: UMFPACK's factorization was successful. 
> > > sp\: UMFPACK's solve was successful. 
> > > Elapsed time is 0.819120 seconds. 
> > > 
> > > I have uploaded the sparse matrix (M) and the right-hand side (RHS) 
> vectors in a mat file here: 
> > > 
> https://drive.google.com/open?id=0B8OOfC6oWXEPV2xYTWFMZTljU00&authuser=0 
> > > 
> > > I read in the documentation that Julia uses UMFPACK for sparse matrices. 
> My question is: why is UMFPACK faster when it is called from Matlab? 
> > > 
> > > The matlab and julia codes are here: 
> > > 
> https://drive.google.com/open?id=0B8OOfC6oWXEPbXFnYlh2TFBKV1k&authuser=0 
> > > 
> https://drive.google.com/open?id=0B8OOfC6oWXEPdlNfOEFKbnV5MlE&authuser=0 
> > > 
> > > and the FVM codes are here: 
> > > https://github.com/simulkade/FVTool 
> > > https://github.com/simulkade/JFVM 
> > > 
> > > Thanks a lot in advance, 
> > > 
> > > Ehsan 
> > > 
> > > 
> > > On Wednesday, June 5, 2013 8:39:15 AM UTC+2, Viral Shah wrote: 
> > > I guess it is the last 20 years of sparse solver work packed into one 
> function. Not many fields can boast of providing this level of usability 
> out of their work. :-) 
> > > 
> > > There is a class of people who believe that things like \ encourage 
> black-box usage, with people doing stuff they do not understand, and there 
> are others who believe in standing on the shoulders of giants. 
> > > 
> > > I find that we have taken a good approach in Julia, where we have \ 
> and it will have the perfect polyalgorithm at some point. But, you also 
> have the option of digging deeper with interfaces such as lufact(), 
> cholfact(), qrfact(), and finally, even if that does not work out for you, 
> call the LAPACK and SuiteSparse functions directly. 
> > > 
> > > -viral 
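
A small sketch of the "digging deeper" route described above: factor once, then 
reuse the factorization for several right-hand sides (the matrices and vectors 
here are illustrative; lufact and cholfact are the era's spellings of today's 
lu and cholesky):

using SparseArrays, LinearAlgebra   # stdlibs in current Julia; in 0.3 all of this was in Base

A  = sprandn(1000, 1000, 5e-3) + 10I     # illustrative sparse, non-symmetric system
b1, b2 = randn(1000), randn(1000)

F  = lu(A)            # UMFPACK LU (lufact(A) in the 0.3-era spelling)
x1 = F \ b1
x2 = F \ b2           # the factorization is reused, not recomputed

S = spdiagm(-1 => fill(-1.0, 999), 0 => fill(4.0, 1000), 1 => fill(-1.0, 999))
C = cholesky(Symmetric(S))   # CHOLMOD (cholfact in the 0.3-era spelling); S is SPD
y = C \ b1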
> > > 
> > > On Wednesday, June 5, 2013 9:42:12 AM UTC+5:30, Stefan Karpinski 
> wrote: 
> > > Goodness. This is why there needs to be a polyalgorithm – no mortal 
> user could know all of this stuff! 
> > > 
> > > 
> > > On Tue, Jun 4, 2013 at 11:11 PM, Viral Shah <[email protected]> wrote: 
> > > Doug, 
> > > 
> > > Ideally, the backslash needs to look for diagonal matrices, triangular 
> matrices and permutations thereof, banded matrices, and least squares 
> problems (non-square). If it is square and symmetric/hermitian, with 
> a heavy diagonal(?), Cholesky can be attempted, with a fallback to LU. I 
> believe we do some of this in the dense \ polyalgorithm, but I am not sure 
> if we look for the banded cases yet. 
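
A rough sketch of that kind of dispatch for the sparse case (not the actual 
Julia implementation: the individual checks and factorizations are real 
functions in current Julia, but the ordering is only illustrative, and the 
banded branch mentioned above is omitted):

using SparseArrays, LinearAlgebra

function backslash_sketch(A::SparseMatrixCSC{<:Real}, b::AbstractVector)
    m, n = size(A)
    m != n && return qr(A) \ b                    # non-square: least squares via sparse QR
    isdiag(A) && return Diagonal(Vector(diag(A))) \ b
    istril(A) && return LowerTriangular(A) \ b    # triangular solve, no factorization needed
    istriu(A) && return UpperTriangular(A) \ b
    if ishermitian(A)
        F = cholesky(Hermitian(A); check = false) # CHOLMOD; may fail if not positive definite
        issuccess(F) && return F \ b
    end
    return lu(A) \ b                              # UMFPACK LU as the general fallback
end

MATLAB's sparse backslash makes similar structural checks before settling on a 
factorization, as the spparms('spumoni',1) trace quoted later in this thread shows.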
> > > 
> > > This is what Octave does: 
> > > 
> http://www.gnu.org/software/octave/doc/interpreter/Sparse-Linear-Algebra.html#Sparse-Linear-Algebra
>  
> > > 
> > > This is Tim's Factorize for solving linear and least squares systems: 
> > > 
> http://www.cise.ufl.edu/research/sparse/SuiteSparse/current/SuiteSparse/MATLAB_Tools/Factorize/Doc/factorize_demo.html
>  
> > > 
> > > -viral 
> > > 
> > > 
> > > On Tuesday, June 4, 2013 8:18:39 PM UTC+5:30, Douglas Bates wrote: 
> > > On Thursday, May 30, 2013 10:10:59 PM UTC-5, Mingming Wang wrote: 
> > > Hi, 
> > > 
> > > I am trying to port my MATLAB program to Julia. The for loop is about 
> 25% faster, but the backslash is about 10 times slower. It seems that in MATLAB 
> the backslash is parallelized automatically. Is there any plan in Julia to 
> do this? BTW, the matrix I am solving is sparse and symmetric. 
> > > 
> > > For a sparse symmetric matrix try 
> > > 
> > > cholfact(A)\b 
> > > 
> > > The simple 
> > > 
> > > A\b 
> > > 
> > > call will always use an LU decomposition from UMFPACK. 
> > > 
> > > 
> > > 
> > > 
> > 
> > 
>
>
