It really depends on what he means by "large" and "sparse". There is no 
indication from the OP that he is deliberately choosing a direct over an 
iterative method, just that he knows \ is the go-to for solving Ax=b, so 
that's what he tried. It's worth noting that dense direct solvers are 
O(n^3), and that sparse factorizations suffer fill-in (the factors are not 
necessarily sparse), so depending on what those terms mean here, an 
algorithm that relies on a factorization solver may not be tenable.
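
To make the fill-in point concrete, here is a quick sketch (post-0.7 Julia syntax, with a made-up random test matrix) comparing the nonzero count of a sparse matrix to that of its LU factors:

```julia
using SparseArrays, LinearAlgebra

# Hypothetical test matrix: sparse random plus the identity, so it is
# nonsingular but has an irregular sparsity pattern.
n = 2_000
A = sprand(n, n, 0.001) + I

F = lu(A)   # UMFPACK sparse LU, the same factorization \ would use
# Fill-in: the factors can hold far more nonzeros than A itself.
fill_ratio = (nnz(F.L) + nnz(F.U)) / nnz(A)
println("nnz(A) = ", nnz(A), ", factor fill ratio ≈ ",
        round(fill_ratio, digits=1))
```

On unstructured patterns the fill ratio can be large, which is exactly the regime where an iterative method starts to win.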

Another point is that he probably shouldn't be using BigFloats for this. He 
should try ArbFloats or DoubleDoubles instead to make the computation faster.
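
To put a rough number on that: BigFloat is arbitrary-precision and heap-allocated, so every arithmetic operation allocates, while Float64 (and fixed-width extended types) stay in registers. A quick, informal comparison:

```julia
# Rough sketch of why BigFloat is slow: each element-wise multiply
# allocates a fresh heap object, unlike the Float64 case.
mul(x, y) = x .* y

n = 10^5
a, b = rand(n), rand(n)
ab, bb = big.(a), big.(b)

mul(a, b); mul(ab, bb)          # warm up so timings exclude compilation
t64  = @elapsed mul(a, b)
tbig = @elapsed mul(ab, bb)
println("BigFloat slowdown ≈ ", round(tbig / t64, digits=1), "x")
```

That per-operation allocation cost is part of why the fixed-width alternatives above tend to be much faster at comparable precision.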

On Wednesday, August 10, 2016 at 9:14:56 PM UTC-7, Ralph Smith wrote:
>
> The OP wants extremely high precision and indicated that he was willing to 
> factor the matrix.  I recommended iterative refinement which converges very 
> quickly, and exploits the state-of-the-art direct solvers.  The solvers in 
> IterativeSolvers.jl are for a different domain, where the matrix is too 
> large or expensive to factor.  To get high accuracy with them generally 
> requires tailored preconditioners, which are not "out of the box". In fact 
> one usually needs a preconditioner to get any convergence with the 
> non-symmetric ones for interesting ranks.  (I've been struggling for months 
> to find a good preconditioner for an application of GMRES in my work, so 
> this is a sore point.)
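
For readers unfamiliar with the term: a preconditioner M approximates A but is cheap to invert, and one solves the equivalent system M⁻¹A x = M⁻¹b, whose operator is closer to the identity. The simplest "out of the box" choice is Jacobi (diagonal) scaling; a minimal sketch on a hypothetical 3x3 system:

```julia
using LinearAlgebra, SparseArrays

# Hypothetical 3x3 system, just to show the mechanics of left (Jacobi)
# preconditioning: M is the diagonal of A.
A = sparse([4.0 1.0 0.0; 1.0 5.0 1.0; 0.0 1.0 6.0])
b = [1.0, 2.0, 3.0]

M  = Diagonal(diag(A))     # the preconditioner: trivial to invert
Ap = M \ Matrix(A)         # preconditioned operator M⁻¹A (dense here)
bp = M \ b

x = Ap \ bp                # same solution as A \ b, by construction
```

Jacobi rarely suffices for hard nonsymmetric problems (hence the struggle mentioned above), but it shows the shape of the idea.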
>
> On Wednesday, August 10, 2016 at 10:56:22 PM UTC-4, Chris Rackauckas wrote:
>>
>> Yes, the textbook answer is: why do you want to use `\`? Iterative 
>> techniques are likely better suited for the problem. There's no need to 
>> roll your own; the package IterativeSolvers.jl has a good number of 
>> techniques implemented which are well-suited to the problem, since A is a 
>> large sparse matrix. 
>> Their methods should work out of the box with Bigs, though you will likely 
>> want to adjust the tolerances.
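
The reason these methods work with Bigs out of the box is that Krylov iterations are built entirely from `*`, `+`, `dot`, and `norm`, all of which are generic in Julia. A minimal unpreconditioned GMRES (a hypothetical helper written for illustration, not the IterativeSolvers.jl API; post-0.7 syntax) makes the point:

```julia
using LinearAlgebra, SparseArrays

# Minimal GMRES sketch: Arnoldi with modified Gram-Schmidt, then a small
# least-squares solve. Nothing here is Float64-specific.
function gmres_simple(A, b; maxiter=length(b),
                      tol=sqrt(eps(real(eltype(b)))))
    T = eltype(b)
    n, m = length(b), maxiter
    Q = zeros(T, n, m + 1)           # Arnoldi basis
    H = zeros(T, m + 1, m)           # Hessenberg matrix
    β = norm(b)
    Q[:, 1] = b / β
    k = m
    for j in 1:m
        v = A * Q[:, j]
        for i in 1:j                 # orthogonalize against the basis
            H[i, j] = dot(Q[:, i], v)
            v -= H[i, j] * Q[:, i]
        end
        H[j+1, j] = norm(v)
        if H[j+1, j] < tol           # happy breakdown: subspace is exact
            k = j
            break
        end
        Q[:, j+1] = v / H[j+1, j]
    end
    e1 = zeros(T, k + 1); e1[1] = β
    Hk = H[1:k+1, 1:k]
    y = (Hk' * Hk) \ (Hk' * e1)      # small LS via normal equations
    Q[:, 1:k] * y
end

# BigFloat works unchanged, since the method is fully generic:
setprecision(BigFloat, 128)
A = SparseMatrixCSC{BigFloat,Int}(sparse([2.0 1 0; 1 3 1; 0 1 2]))
b = BigFloat[1, 2, 3]
x = gmres_simple(A, b)
```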
>>
>> On Wednesday, August 10, 2016 at 7:37:35 PM UTC-7, Ralph Smith wrote:
>>>
>>> Here is a textbook answer.  The appropriate choice of n depends on the 
>>> condition number of A.
>>>
>>> """
>>>     iterimprove(A,b,n=1,verbose=true)
>>>
>>> Solve `A x = b` for `x` using iterative improvement
>>> """
>>> function iterimprove{T<:AbstractFloat}(A::SparseMatrixCSC{T},
>>>                                        b::Vector{T},n=1,verbose=true)
>>>     eps(T) < eps(Float64) || throw(ArgumentError("wrong implementation"))
>>>     A0 = SparseMatrixCSC{Float64}(A)
>>>     F = factorize(A0)
>>>     x = zeros(b)
>>>     r = copy(b)
>>>     for iter = 1:n+1
>>>         y = F \ Vector{Float64}(r)
>>>         for i in eachindex(x)
>>>             x[i] += y[i]
>>>         end
>>>         r = b - A * x
>>>         if verbose
>>>             @printf "at iter %d resnorm = %.3g\n" iter norm(r)
>>>         end
>>>     end
>>>     x
>>> end
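
For anyone reading this thread on a newer Julia: the routine above uses pre-1.0 parametric method syntax (`function f{T}(...)`) and `zeros(b)`, both since removed. A sketch of the same routine in post-0.7 syntax, with a small usage example at 128-bit precision:

```julia
using SparseArrays, LinearAlgebra, Printf

# Post-0.7 transcription of the routine above: the `where` clause replaces
# the old `function f{T}(...)` form, and `zeros(b)` becomes zeros(T, n).
function iterimprove(A::SparseMatrixCSC{T}, b::Vector{T},
                     n = 1, verbose = true) where {T<:AbstractFloat}
    eps(T) < eps(Float64) || throw(ArgumentError("wrong implementation"))
    A0 = SparseMatrixCSC{Float64,Int}(A)
    F = factorize(A0)               # factor once, in double precision
    x = zeros(T, length(b))
    r = copy(b)
    for iter = 1:n+1
        y = F \ Vector{Float64}(r)  # cheap correction solve in Float64
        x .+= y
        r = b - A * x               # residual in full working precision
        verbose && @printf "at iter %d resnorm = %.3g\n" iter Float64(norm(r))
    end
    x
end

# Usage with BigFloat at 128 bits (a made-up 3x3 test system):
setprecision(BigFloat, 128)
A = SparseMatrixCSC{BigFloat,Int}(sparse([4.0 1 0; 1 5 1; 0 1 6]))
b = BigFloat[1, 2, 3]
x = iterimprove(A, b, 3, false)
```

Each refinement pass gains roughly the sixteen digits of the double-precision correction solve, so a few iterations reach near-working-precision accuracy on well-conditioned systems.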
>>>
>>>
>>>
>>> On Wednesday, August 10, 2016 at 3:47:10 PM UTC-4, Nicklas Andersen 
>>> wrote:
>>>>
>>>> Hello
>>>>
>>>> I'm trying to solve a large, sparse and unsymmetrical linear system Ax 
>>>> = b.
>>>> For this task I'm using Julia's *SparseMatrixCSC* type for the 
>>>> definition of my matrices and Julia's built-in backslash ' \ ' operator 
>>>> for the solution of the system.
>>>> I need *quadruple precision* and thus I've been trying to implement my 
>>>> routine with the *BigFloat* type together with the SparseMatrixCSC 
>>>> type.
>>>>
>>>> To illustrate this, I give a simple example here:
>>>> set_bigfloat_precision(128);
>>>> A  = speye(BigFloat, 2, 2);
>>>> b = ones(BigFloat, 2, 1);
>>>> x = A\b;
>>>>
>>>> If I do this I either get a StackOverflowError:
>>>> ERROR: StackOverflowError:
>>>>  in copy at array.jl:100
>>>>  in float at sparse/sparsematrix.jl:234
>>>>  in call at essentials.jl:57 (repeats 254 times)
>>>>
>>>> or the solver seems to run forever and never terminates. As the stack 
>>>> trace indicates, it seems the sparse solver only accepts the ordinary 
>>>> *float* types.
>>>> My question is then: is there a way to get quadruple precision with the 
>>>> standard solvers in Julia (in this case UMFPACK, I assume)? Or should I 
>>>> look for something else (if so, any suggestions :) )?
>>>>
>>>> Regards Nicklas A.
>>>>
>>>>
