Hi Dominique.

I was not aware of this library. Thanks!

On Thursday, June 4, 2015 at 10:31:24 AM UTC-3, Dominique Orban wrote:
>
> FWIW, on your example, here's the time taken by the CG implemented in 
> Krylov.jl (https://github.com/optimizers/Krylov.jl)
>
> julia> A = sprand(10000,10000,0.01); A = A'+A; 
> A=A+100*rand()*speye(10000,10000);
> julia> b = A * ones(10000);
> julia> cg(A, b, rtol=1.0e-10, atol=1.0e-10);
> julia> @time cg(A, b, rtol=1.0e-10, atol=1.0e-10);
> elapsed time: 0.047595623 seconds (1136352 bytes allocated)
>
>
> Preallocation doesn't save much time in this case, but quite a bit of 
> memory:
>
> julia> using LinearOperators
> julia> Ap = zeros(10000);
> julia> op = LinearOperator(10000, Float64, p -> A_mul_B!(1.0,  A, p, 0.0, 
> Ap))
> julia> cg(op, b, rtol=1.0e-10, atol=1.0e-10);
> julia> @time cg(op, b, rtol=1.0e-10, atol=1.0e-10)
> elapsed time: 0.0450219 seconds (241552 bytes allocated)
>
>
> This CG doesn't support preconditioning, but that would be quite trivial 
> to add.
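[Adding Jacobi (diagonal) preconditioning to a hand-rolled CG is indeed only a few lines. A minimal sketch in current Julia syntax; the name `jacobi_pcg` and the loop structure are my own, not Krylov.jl's API:]

```julia
using LinearAlgebra

# Jacobi-preconditioned CG: a minimal sketch, assuming A is symmetric
# positive definite. The diagonal of A serves as the preconditioner M,
# so each "solve" with M is just an elementwise division.
function jacobi_pcg(A, b; rtol=1.0e-10, maxiter=length(b))
    Minv = 1.0 ./ diag(A)        # inverse of the diagonal preconditioner
    x = zeros(length(b))
    r = copy(b)                  # residual b - A*x for x = 0
    z = Minv .* r                # preconditioned residual
    p = copy(z)
    rz = dot(r, z)
    tol = rtol * norm(b)
    for _ = 1:maxiter
        Ap = A * p
        alpha = rz / dot(p, Ap)
        x += alpha * p
        r -= alpha * Ap
        norm(r) <= tol && break
        z = Minv .* r
        rz_new = dot(r, z)
        p = z + (rz_new / rz) * p
        rz = rz_new
    end
    return x
end
```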
>
>
>
> On Monday, June 1, 2015 at 3:12:01 PM UTC+2, Eduardo Lenz wrote:
>>
>> Hi Andreas... It's me again :0)
>>
>> It's just an example... I agree that it is quite unfair :0)
>>
>> But we are solving very large 3D homogenization problems (basically, 3D 
>> finite elements) and the difference is still very impressive. Especially 
>> if you consider the amount of memory needed to solve such systems with 
>> cholfact when compared to this iterative method.
>>
>> This code is a very basic modification of the traditional CG method, 
>> where the coefficient matrix is changed in order to scale the difference 
>> between the largest and the smallest eigenvalues. I am really amazed by 
>> the speed of sparse multiplications in Julia... it's very fast!
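[For the record, the diagonal scaling described above amounts to solving the symmetrically scaled system (S*A*S)y = S*b with S = diag(A)^(-1/2), then recovering x = S*y. A sketch in current Julia syntax; the variable names are my own, and it assumes the diagonal entries of A are positive, which holds for an SPD matrix:]

```julia
using LinearAlgebra, SparseArrays

# Small SPD example standing in for the large sprand matrix in the thread.
A = sparse([4.0 1.0; 1.0 3.0])
b = A * ones(2)

# Symmetric diagonal scaling: S*A*S has unit diagonal, which compresses
# the spread between the largest and smallest eigenvalues.
d = diag(A)                        # diagonal entries (positive, since A is SPD)
S = spdiagm(0 => 1.0 ./ sqrt.(d))  # sparse scaling matrix diag(A)^(-1/2)
As = S * A * S                     # scaled coefficient matrix (still SPD)
bs = S * b
# Run any CG solver on the scaled system, then undo the scaling:
# y = cg(As, bs); x = S * y
```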
>>
>>
>> On Monday, June 1, 2015 at 10:05:18 AM UTC-3, Andreas Noack wrote:
>>>
>>> I think the chosen matrix has very good convergence properties for 
>>> iterative methods, but I agree that iterative methods are very useful to 
>>> have in Julia. There are already quite a few implementations in
>>>
>>> https://github.com/JuliaLang/IterativeSolvers.jl
>>>
>>> I'm not sure if these methods cover the one you chose, so you could have 
>>> a look and see if there is something to contribute there.
>>>
>>> Den søndag den 31. maj 2015 kl. 21.37.23 UTC-4 skrev Eduardo Lenz:
>>>>
>>>> Hi.
>>>>
>>>> One of my students is solving some large sparse systems (more than 20K 
>>>> equations). The coefficient matrix is symmetric and positive definite, 
>>>> and very sparse (1% nonzero elements in some cases).
>>>>
>>>> After playing around a little bit with cholfact we decided to compare 
>>>> the time with a very simple implementation
>>>> of the conjugate gradient method with diagonal scaling. 
>>>>
>>>> The code is in
>>>>
>>>> https://gist.github.com/CodeLenz/92086ba37035fe8d9ed8#file-gistfile1-txt
>>>>
>>>> As an example, the solution of Ax=b for
>>>>
>>>> julia> A = sprand(10000,10000,0.01); A = A'+A; 
>>>> A=A+100*rand()*speye(10000,10000)
>>>>
>>>> takes 16 seconds with cholfact(A) and only 600 milliseconds (!) with 
>>>> DCGC (tol=1E-10).
>>>>
>>>> Also, as expected, the memory consumption with CG is very low, allowing 
>>>> the solution 
>>>> of very large systems. 
>>>>
>>>> The same pattern is observed for different levels of sparsity and for 
>>>> different random matrices.
>>>>
>>>> I would like to thank the Julia developers for such an amazing tool!
>>>>
>>>>
>>>>
>>>>
