I'm solving the diffusion equation on a 3D uniform grid (31x32x33) in both
Matlab and Julia. I use "\" to solve the resulting linear system of equations.
Here is the performance of the linear solver in Julia:
elapsed time: 2.743971424 seconds (35236720 bytes allocated)
and Matlab (I used spparms('spumoni',1) to see what "\" does in Matlab):
sp\: bandwidth = 1056+1+1056.
sp\: is A diagonal? no.
sp\: is band density (0.00) > bandden (0.50) to try banded solver? no.
sp\: is A triangular? no.
sp\: is A morally triangular? no.
sp\: is A a candidate for Cholesky (symmetric, real positive diagonal)? no.
sp\: use Unsymmetric MultiFrontal PACKage with automatic reordering.
sp\: UMFPACK's factorization was successful.
sp\: UMFPACK's solve was successful.
Elapsed time is 0.819120 seconds.
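To see where Julia spends its time, the factorization and the triangular
solves can be timed separately (a minimal sketch, assuming the matrix M and
the vector RHS from the .mat file below are already loaded into the session):

```julia
F = lufact(M)    # UMFPACK LU factorization of the sparse matrix M
x = F \ RHS      # forward/back substitution with the cached factors

@time lufact(M)  # time the factorization alone
@time F \ RHS    # time the solve alone
```

If most of the 2.7 seconds is in lufact, the gap versus Matlab is in the
factorization step (e.g. the fill-reducing ordering), not in the solve.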
I have uploaded the sparse matrix (M) and the right-hand side (RHS) vectors
in a mat file here:
https://drive.google.com/open?id=0B8OOfC6oWXEPV2xYTWFMZTljU00&authuser=0
I read in the documentation that Julia uses UMFPACK for sparse matrices. My
question is: why is UMFPACK faster when it is called from Matlab?
The Matlab and Julia codes are here:
https://drive.google.com/open?id=0B8OOfC6oWXEPbXFnYlh2TFBKV1k&authuser=0
https://drive.google.com/open?id=0B8OOfC6oWXEPdlNfOEFKbnV5MlE&authuser=0
and the FVM codes are here:
https://github.com/simulkade/FVTool
https://github.com/simulkade/JFVM
Thanks a lot in advance,
Ehsan
On Wednesday, June 5, 2013 8:39:15 AM UTC+2, Viral Shah wrote:
>
> I guess it is the last 20 years of sparse solver work packed into one
> function. Not many fields can boast of providing this level of usability
> out of their work. :-)
>
> There is a class of people who believe that things like \ encourage
> black-box usage, with people doing stuff they do not understand, and there
> are others who believe in standing on the shoulders of giants.
>
> I find that we have taken a good approach in Julia: we have \, and it
> will have the perfect polyalgorithm at some point. But you also have the
> option of digging deeper with interfaces such as lufact(), cholfact(), and
> qrfact(), and finally, even if those do not work out for you, you can call
> the LAPACK and SuiteSparse functions directly.
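> For instance (a sketch; A a sparse matrix, b and b2 right-hand sides),
> the factorization objects can be reused across several right-hand sides:
>
> ```julia
> F = lufact(A)   # factor once (UMFPACK for sparse A)
> x = F \ b       # solve with the cached factorization
> y = F \ b2      # a second right-hand side reuses the same factors
> ```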
>
> -viral
>
> On Wednesday, June 5, 2013 9:42:12 AM UTC+5:30, Stefan Karpinski wrote:
>>
>> Goodness. This is why there needs to be a polyalgorithm – no mortal user
>> could know all of this stuff!
>>
>>
>> On Tue, Jun 4, 2013 at 11:11 PM, Viral Shah <[email protected]> wrote:
>>
>>> Doug,
>>>
>>> Ideally, the backslash needs to look for diagonal matrices, triangular
>>> matrices and permutations thereof, banded matrices, and least squares
>>> problems (non-square systems). In case it is square and symmetric or
>>> Hermitian with a heavy diagonal(?), Cholesky can be attempted, with a
>>> fallback to LU. I believe we do some of this in the dense \ polyalgorithm,
>>> but I am not sure if we look for the banded cases yet.
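>>> A sketch of what such a polyalgorithm could look like (hypothetical
>>> structure and name, not the actual implementation):
>>>
>>> ```julia
>>> function backslash(A, b)
>>>     if istril(A) || istriu(A)
>>>         return A \ b                 # triangular solve, no factorization
>>>     elseif issym(A)
>>>         try
>>>             return cholfact(A) \ b   # attempt Cholesky first
>>>         catch
>>>             # not positive definite; fall through to LU
>>>         end
>>>     end
>>>     return lufact(A) \ b             # general fallback
>>> end
>>> ```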
>>>
>>> This is what Octave does:
>>>
>>> http://www.gnu.org/software/octave/doc/interpreter/Sparse-Linear-Algebra.html#Sparse-Linear-Algebra
>>>
>>> This is Tim's Factorize for solving linear and least squares systems:
>>>
>>> http://www.cise.ufl.edu/research/sparse/SuiteSparse/current/SuiteSparse/MATLAB_Tools/Factorize/Doc/factorize_demo.html
>>>
>>> -viral
>>>
>>>
>>> On Tuesday, June 4, 2013 8:18:39 PM UTC+5:30, Douglas Bates wrote:
>>>>
>>>> On Thursday, May 30, 2013 10:10:59 PM UTC-5, Mingming Wang wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I am trying to port my MATLAB program to Julia. The for loop is about
>>>>> 25% faster, but the backslash is about 10 times slower. It seems that in
>>>>> MATLAB the backslash is parallelized automatically. Is there any plan for
>>>>> Julia to do this? BTW, the matrix I am solving is sparse and symmetric.
>>>>>
>>>>
>>>> For a sparse symmetric matrix try
>>>>
>>>> cholfact(A)\b
>>>>
>>>> The simple
>>>>
>>>> A\b
>>>>
>>>> call will always use an LU decomposition from UMFPACK.
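>>>> For example (a sketch, assuming A is sparse symmetric positive definite
>>>> and b is a dense vector):
>>>>
>>>> ```julia
>>>> @time A \ b            # LU via UMFPACK; ignores symmetry
>>>> @time cholfact(A) \ b  # Cholesky via CHOLMOD; exploits symmetry
>>>> ```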
>>>>
>>>>
>>>
>>