Re: [julia-users] Re: Problems with A\b and BigFloat

2016-08-11 Thread Nicklas Andersen
Hi, this actually helped a lot and made good sense. 
The differential equation I'm working with does include a set of singular 
matrices, and with the application of a higher-order time integration 
method I do end up with a matrix A that is badly conditioned.
Computing the condition number in Julia with cond(A) yields an enormous result 
(it is nearly too embarrassing to state, actually - I get 4.8e11), and 
computing det(A) gave me Inf too.
I guess I've been too naive in thinking I could reuse old code for lower-order 
time integration methods, which gave me quite good results even with a 
relatively high condition number (1.6e10)?

Seems like my next step now would be to "unravel" my matrix A.
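For reference, the singular-value check Tamas suggests can be done in double 
precision, since conditioning is a property of the matrix rather than of the 
precision used to inspect it (a sketch; `A` stands for the assembled sparse 
system matrix):

A0 = full(SparseMatrixCSC{Float64}(A))   # dense double-precision copy
s  = svdvals(A0)                         # singular values, largest first
@printf "largest = %.3g  smallest = %.3g  cond = %.3g\n" s[1] s[end] s[1]/s[end]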
 

On Thursday, August 11, 2016 at 2:47:24 PM UTC+2, Tamas Papp wrote:
>
> I don't know anything about your problem domain, but are you sure that 
> the errors are not a conditioning problem? Increasing precision can 
> mitigate this to a limited extent, but when you increase the dimension 
> you quickly run out of precision, so it is rarely the solution. Have you 
> checked the singular values of A? (Sorry if you already did this as the 
> first thing, just thought I would ask). 
>
> On Thu, Aug 11 2016, Nicklas Andersen wrote: 
>
> > I know I might be contradicting myself by saying *"I would like not to 
> > introduce too much error by the use of an iterative solver"* and then going 
> > on with propagating errors, direct solvers and a wish for quadruple 
> > precision. 
> > In theory direct solvers give an exact solution, while iterative solvers 
> > give an approximation. When doing the further analysis, it would be a lot 
> > easier for me to just argue for a direct solver than for an iterative 
> > solver. 
> > I hope you somehow get what I'm trying to say. 
> > 
> > Thank you again :) 
> > 
> > 
> > On Thursday, August 11, 2016 at 2:04:40 PM UTC+2, Nicklas Andersen wrote: 
> >> 
> >> Hey again. 
> >> 
> >> Thank you all for the nice answers. I was in a bit of a hurry and didn't 
> >> have time to go into too much detail, so to clarify: 
> >> The system I'm trying to solve arises from the space discretization of a 
> >> *linear* partial differential algebraic equation. 
> >> To advance the solution in time I need to solve a system Ax=b at each 
> >> time step. 
> >> Large is a bit loosely formulated, since the system more or less only 
> >> has size around 500x500 to 2000x2000, but it needs to be solved, let's 
> >> say, at most 640 times. 
> >> I would prefer a direct solver since I need the results for an analysis 
> >> of the time integration method and would like not to introduce too much 
> >> error by the use of an iterative solver. 
> >> That said, speed is not my no. 1 priority, but it would be nice. 
> >> 
> >> The reason I need quadruple precision is that it seems like some 
> >> components introduce round-off errors, and these errors propagate such 
> >> that in the end I get negative convergence of my method. 
> >> 
> >> Regards Nicklas 
> >> 
>
>

Re: [julia-users] Re: Problems with A\b and BigFloat

2016-08-11 Thread Tamas Papp
I don't know anything about your problem domain, but are you sure that
the errors are not a conditioning problem? Increasing precision can
mitigate this to a limited extent, but when you increase the dimension
you quickly run out of precision, so it is rarely the solution. Have you
checked the singular values of A? (Sorry if you already did this as the
first thing, just thought I would ask).

On Thu, Aug 11 2016, Nicklas Andersen wrote:

> I know I might be contradicting myself by saying *"I would like not to 
> introduce too much error by the use of an iterative solver"* and then going 
> on with propagating errors, direct solvers and a wish for quadruple 
> precision. 
> In theory direct solvers give an exact solution, while iterative solvers 
> give an approximation. When doing the further analysis, it would be a lot 
> easier for me to just argue for a direct solver than for an iterative 
> solver.
> I hope you somehow get what I'm trying to say.
>
> Thank you again :)
>
> On Thursday, August 11, 2016 at 2:04:40 PM UTC+2, Nicklas Andersen wrote:
>>
>> Hey again.
>>
>> Thank you all for the nice answers. I was in a bit of a hurry and didn't 
>> have time to go into too much detail, so to clarify:
>> The system I'm trying to solve arises from the space discretization of a 
>> *linear* partial differential algebraic equation.
>> To advance the solution in time I need to solve a system Ax=b at each time 
>> step. 
>> Large is a bit loosely formulated, since the system more or less only has 
>> size around 500x500 to 2000x2000, but it needs to be solved, let's say, at 
>> most 640 times.
>> I would prefer a direct solver since I need the results for an analysis of 
>> the time integration method and would like not to introduce too much error 
>> by the use of an iterative solver.
>> That said, speed is not my no. 1 priority, but it would be nice. 
>>
>> The reason I need quadruple precision is that it seems like some 
>> components introduce round-off errors, and these errors propagate such 
>> that in the end I get negative convergence of my method.
>>
>> Regards Nicklas
>>



[julia-users] Re: Problems with A\b and BigFloat

2016-08-11 Thread Nicklas Andersen
I know I might be contradicting myself by saying *"I would like not to 
introduce too much error by the use of an iterative solver"* and then going 
on with propagating errors, direct solvers and a wish for quadruple 
precision. 
In theory direct solvers give an exact solution, while iterative solvers give 
an approximation. When doing the further analysis, it would be a lot easier 
for me to just argue for a direct solver than for an iterative solver.
I hope you somehow get what I'm trying to say.

Thank you again :)

On Thursday, August 11, 2016 at 2:04:40 PM UTC+2, Nicklas Andersen wrote:
>
> Hey again.
>
> Thank you all for the nice answers. I was in a bit of a hurry and didn't 
> have time to go into too much detail, so to clarify:
> The system I'm trying to solve arises from the space discretization of a 
> *linear* partial differential algebraic equation.
> To advance the solution in time I need to solve a system Ax=b at each time 
> step. 
> Large is a bit loosely formulated, since the system more or less only has 
> size around 500x500 to 2000x2000, but it needs to be solved, let's say, at 
> most 640 times.
> I would prefer a direct solver since I need the results for an analysis of 
> the time integration method and would like not to introduce too much error 
> by the use of an iterative solver.
> That said, speed is not my no. 1 priority, but it would be nice. 
>
> The reason I need quadruple precision is that it seems like some 
> components introduce round-off errors, and these errors propagate such 
> that in the end I get negative convergence of my method.
>
> Regards Nicklas
>


[julia-users] Re: Problems with A\b and BigFloat

2016-08-11 Thread Nicklas Andersen
Hey again.

Thank you all for the nice answers. I was in a bit of a hurry and didn't have 
time to go into too much detail, so to clarify:
The system I'm trying to solve arises from the space discretization of a 
*linear* partial differential algebraic equation.
To advance the solution in time I need to solve a system Ax=b at each time 
step. 
Large is a bit loosely formulated, since the system more or less only has 
size around 500x500 to 2000x2000, but it needs to be solved, let's say, at 
most 640 times.
I would prefer a direct solver since I need the results for an analysis of 
the time integration method and would like not to introduce too much error 
by the use of an iterative solver.
That said, speed is not my no. 1 priority, but it would be nice. 
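If the system matrix is the same at every time step (constant coefficients and 
a fixed step size - an assumption here), a direct solver only needs to factor 
once, so even ~640 solves stay cheap. A minimal sketch using Julia's generic 
dense LU, which does accept BigFloat; `assemble_rhs` is a hypothetical helper 
building the right-hand side at step k:

F = lufact(full(A))        # factor once; the generic dense LU is pure Julia
for k = 1:640
    b = assemble_rhs(k)    # hypothetical: right-hand side for this time step
    x = F \ b              # each solve just reuses the factorization
end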

The reason I need quadruple precision is that it seems like some components 
introduce round-off errors, and these errors propagate such that in the end I 
get negative convergence of my method.

Regards Nicklas


[julia-users] Re: Problems with A\b and BigFloat

2016-08-10 Thread Chris Rackauckas
It really depends on what he means by "large" and "sparse". There is no 
indication from the OP that he is specifically choosing a direct method over 
an iterative one, just that he knows \ is the go-to for solving Ax=b and that 
is what he tried. It should be mentioned that direct solvers are O(n^3) and 
that factorizations are not necessarily sparse, so depending on what is meant 
by those terms, an algorithm which relies on a factorization may not be 
tenable.

Another point is that he likely shouldn't be using Bigs for this. He should 
likely try ArbFloats or DoubleDoubles to make the computation faster.  

On Wednesday, August 10, 2016 at 9:14:56 PM UTC-7, Ralph Smith wrote:
>
> The OP wants extremely high precision and indicated that he was willing to 
> factor the matrix.  I recommended iterative refinement which converges very 
> quickly, and exploits the state-of-the-art direct solvers.  The solvers in 
> IterativeSolvers.jl are for a different domain, where the matrix is too 
> large or expensive to factor.  To get high accuracy with them generally 
> requires tailored preconditioners, which are not "out of the box". In fact 
> one usually need a preconditioner to get any convergence with the 
> non-symmetric ones for interesting ranks.  (I've been struggling for months 
> to find a good preconditioner for an application of GMRES in my work, so 
> this is a sore point.)
>
> On Wednesday, August 10, 2016 at 10:56:22 PM UTC-4, Chris Rackauckas wrote:
>>
>> Yes, the textbook answer is: why do you want to use `\`? Iterative techniques 
>> are likely better suited for the problem. There's no need to roll your own: 
>> the package IterativeSolvers.jl has a good number of techniques implemented 
>> which are well-suited for the problem since A is a large sparse matrix. 
>> Their methods should work out of the box with Bigs, though you will likely 
>> want to adjust the tolerances.
>>
>> On Wednesday, August 10, 2016 at 7:37:35 PM UTC-7, Ralph Smith wrote:
>>>
>>> Here is a textbook answer.  Appropriate choice of n depends on condition 
>>> of A.
>>>
>>> """
>>>
>>> iterimprove(A,b,n=1,verbose=true)
>>>
>>>  
>>>
>>> Solve `A x = b` for `x` using iterative improvement 
>>>
>>> """ 
>>>
>>> function iterimprove{T<:AbstractFloat}(A::SparseMatrixCSC{T},
b::Vector{T},n=1,verbose=true)

>>>  eps(T) < eps(Float64) || throw(ArgumentError("wrong 
 implementation")) 
>>>
>>>  A0 = SparseMatrixCSC{Float64}(A)
 F = factorize(A0)
 x = zeros(b)
 r = copy(b)
 for iter = 1:n+1
 y = F \ Vector{Float64}(r)
 for i in eachindex(x)
 x[i] += y[i]
 end
 r = b - A * x
 if verbose
 @printf "at iter %d resnorm = %.3g\n" iter norm(r)
 end
 end
 x
 end
>>>
>>>
>>>
>>> On Wednesday, August 10, 2016 at 3:47:10 PM UTC-4, Nicklas Andersen 
>>> wrote:

>>>> Hello
>>>>
>>>> I'm trying to solve a large, sparse and unsymmetrical linear system Ax = b.
>>>> For this task I'm using Julia's *SparseMatrixCSC* type for the definition 
>>>> of my matrices and Julia's built-in backslash ' \ ' operator for the 
>>>> solution of the system.
>>>> I need *quadruple precision* and thus I've been trying to implement my 
>>>> routine with the *BigFloat* type together with the SparseMatrixCSC type.
>>>>
>>>> To illustrate this, I give a simple example here:
>>>> set_bigfloat_precision(128);
>>>> A  = speye(BigFloat, 2, 2);
>>>> b = ones(BigFloat, 2, 1);
>>>> x = A\b;
>>>>
>>>> If I do this I either get a StackOverflowError:
>>>> ERROR: StackOverflowError:
>>>>  in copy at array.jl:100
>>>>  in float at sparse/sparsematrix.jl:234
>>>>  in call at essentials.jl:57 (repeats 254 times)
>>>>
>>>> or the solver seems to run forever and never terminates. As the error 
>>>> trace indicates, it seems like the sparse solver only accepts the standard 
>>>> *float* types.
>>>> My question is then: is there a way to get quadruple precision with the 
>>>> standard solvers in Julia, in this case UMFPACK I assume? Or should I look 
>>>> for something else (in that case, any suggestions :) )?
>>>>
>>>> Regards Nicklas A.



[julia-users] Re: Problems with A\b and BigFloat

2016-08-10 Thread Ralph Smith
The OP wants extremely high precision and indicated that he was willing to 
factor the matrix.  I recommended iterative refinement which converges very 
quickly, and exploits the state-of-the-art direct solvers.  The solvers in 
IterativeSolvers.jl are for a different domain, where the matrix is too 
large or expensive to factor.  To get high accuracy with them generally 
requires tailored preconditioners, which are not "out of the box". In fact 
one usually needs a preconditioner to get any convergence with the 
non-symmetric ones for interesting ranks.  (I've been struggling for months 
to find a good preconditioner for an application of GMRES in my work, so 
this is a sore point.)

On Wednesday, August 10, 2016 at 10:56:22 PM UTC-4, Chris Rackauckas wrote:
>
> Yes, the textbook answer is: why do you want to use `\`? Iterative techniques 
> are likely better suited for the problem. There's no need to roll your own: 
> the package IterativeSolvers.jl has a good number of techniques implemented 
> which are well-suited for the problem since A is a large sparse matrix. 
> Their methods should work out of the box with Bigs, though you will likely 
> want to adjust the tolerances.
>
> On Wednesday, August 10, 2016 at 7:37:35 PM UTC-7, Ralph Smith wrote:
>>
>> Here is a textbook answer.  Appropriate choice of n depends on condition 
>> of A.
>>
>> """
>>
>> iterimprove(A,b,n=1,verbose=true)
>>
>>  
>>
>> Solve `A x = b` for `x` using iterative improvement 
>>
>> """ 
>>
>> function iterimprove{T<:AbstractFloat}(A::SparseMatrixCSC{T},
>>>b::Vector{T},n=1,verbose=true)
>>>
>>  eps(T) < eps(Float64) || throw(ArgumentError("wrong 
>>> implementation")) 
>>
>>  A0 = SparseMatrixCSC{Float64}(A)
>>> F = factorize(A0)
>>> x = zeros(b)
>>> r = copy(b)
>>> for iter = 1:n+1
>>> y = F \ Vector{Float64}(r)
>>> for i in eachindex(x)
>>> x[i] += y[i]
>>> end
>>> r = b - A * x
>>> if verbose
>>> @printf "at iter %d resnorm = %.3g\n" iter norm(r)
>>> end
>>> end
>>> x
>>> end
>>
>>
>>
>> On Wednesday, August 10, 2016 at 3:47:10 PM UTC-4, Nicklas Andersen wrote:
>>>
>>> Hello
>>>
>>> I'm trying to solve a large, sparse and unsymmetrical linear system Ax = b.
>>> For this task I'm using Julia's *SparseMatrixCSC* type for the definition 
>>> of my matrices and Julia's built-in backslash ' \ ' operator for the 
>>> solution of the system.
>>> I need *quadruple precision* and thus I've been trying to implement my 
>>> routine with the *BigFloat* type together with the SparseMatrixCSC type.
>>>
>>> To illustrate this, I give a simple example here:
>>> set_bigfloat_precision(128);
>>> A  = speye(BigFloat, 2, 2);
>>> b = ones(BigFloat, 2, 1);
>>> x = A\b;
>>>
>>> If I do this I either get a StackOverflowError:
>>> ERROR: StackOverflowError:
>>>  in copy at array.jl:100
>>>  in float at sparse/sparsematrix.jl:234
>>>  in call at essentials.jl:57 (repeats 254 times)
>>>
>>> or the solver seems to run forever and never terminates. As the error 
>>> trace indicates, it seems like the sparse solver only accepts the standard 
>>> *float* types.
>>> My question is then: is there a way to get quadruple precision with the 
>>> standard solvers in Julia, in this case UMFPACK I assume? Or should I look 
>>> for something else (in that case, any suggestions :) )?
>>>
>>> Regards Nicklas A.
>>>
>>>

[julia-users] Re: Problems with A\b and BigFloat

2016-08-10 Thread Chris Rackauckas
Yes, the textbook answer is: why do you want to use `\`? Iterative techniques 
are likely better suited for the problem. There's no need to roll your own: 
the package IterativeSolvers.jl has a good number of techniques implemented 
which are well-suited for the problem since A is a large sparse matrix. 
Their methods should work out of the box with Bigs, though you will likely 
want to adjust the tolerances.
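A rough sketch of that route (hedged: the exact return value and tolerance 
keywords differ between IterativeSolvers.jl versions, so check the docs of the 
version you have installed):

using IterativeSolvers

set_bigfloat_precision(128)
A = speye(BigFloat, 100, 100)   # stand-in for the real sparse BigFloat matrix
b = ones(BigFloat, 100)

x = gmres(A, b)   # some versions return (x, history) instead of just x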

On Wednesday, August 10, 2016 at 7:37:35 PM UTC-7, Ralph Smith wrote:
>
> Here is a textbook answer.  Appropriate choice of n depends on condition 
> of A.
>
> """
>
> iterimprove(A,b,n=1,verbose=true)
>
>  
>
> Solve `A x = b` for `x` using iterative improvement 
>
> """ 
>
> function iterimprove{T<:AbstractFloat}(A::SparseMatrixCSC{T},
>>b::Vector{T},n=1,verbose=true)
>>
>  eps(T) < eps(Float64) || throw(ArgumentError("wrong implementation")) 
>
>  A0 = SparseMatrixCSC{Float64}(A)
>> F = factorize(A0)
>> x = zeros(b)
>> r = copy(b)
>> for iter = 1:n+1
>> y = F \ Vector{Float64}(r)
>> for i in eachindex(x)
>> x[i] += y[i]
>> end
>> r = b - A * x
>> if verbose
>> @printf "at iter %d resnorm = %.3g\n" iter norm(r)
>> end
>> end
>> x
>> end
>
>
>
> On Wednesday, August 10, 2016 at 3:47:10 PM UTC-4, Nicklas Andersen wrote:
>>
>> Hello
>>
>> I'm trying to solve a large, sparse and unsymmetrical linear system Ax = b.
>> For this task I'm using Julia's *SparseMatrixCSC* type for the definition 
>> of my matrices and Julia's built-in backslash ' \ ' operator for the 
>> solution of the system.
>> I need *quadruple precision* and thus I've been trying to implement my 
>> routine with the *BigFloat* type together with the SparseMatrixCSC type.
>>
>> To illustrate this, I give a simple example here:
>> set_bigfloat_precision(128);
>> A  = speye(BigFloat, 2, 2);
>> b = ones(BigFloat, 2, 1);
>> x = A\b;
>>
>> If I do this I either get a StackOverflowError:
>> ERROR: StackOverflowError:
>>  in copy at array.jl:100
>>  in float at sparse/sparsematrix.jl:234
>>  in call at essentials.jl:57 (repeats 254 times)
>>
>> or the solver seems to run forever and never terminates. As the error 
>> trace indicates, it seems like the sparse solver only accepts the standard 
>> *float* types.
>> My question is then: is there a way to get quadruple precision with the 
>> standard solvers in Julia, in this case UMFPACK I assume? Or should I look 
>> for something else (in that case, any suggestions :) )?
>>
>> Regards Nicklas A.
>>
>>

[julia-users] Re: Problems with A\b and BigFloat

2016-08-10 Thread Ralph Smith
Here is a textbook answer.  Appropriate choice of n depends on condition of 
A.

"""

iterimprove(A,b,n=1,verbose=true)

 

Solve `A x = b` for `x` using iterative improvement 

""" 

function iterimprove{T<:AbstractFloat}(A::SparseMatrixCSC{T},
>b::Vector{T},n=1,verbose=true)
>
 eps(T) < eps(Float64) || throw(ArgumentError("wrong implementation")) 

 A0 = SparseMatrixCSC{Float64}(A)
> F = factorize(A0)
> x = zeros(b)
> r = copy(b)
> for iter = 1:n+1
> y = F \ Vector{Float64}(r)
> for i in eachindex(x)
> x[i] += y[i]
> end
> r = b - A * x
> if verbose
> @printf "at iter %d resnorm = %.3g\n" iter norm(r)
> end
> end
> x
> end
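
For example, called on a BigFloat system in the spirit of the OP's (a sketch: 
the test matrix here is made up, and note that `b` must be a Vector, not the 
n-by-1 Matrix from the original example):

set_bigfloat_precision(128)
A = SparseMatrixCSC{BigFloat}(speye(1000, 1000) + 0.01 * sprand(1000, 1000, 0.005))
b = ones(BigFloat, 1000)     # a Vector{BigFloat}
x = iterimprove(A, b, 3)     # three refinement sweeps
println(norm(A * x - b))     # residual in full BigFloat precision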



On Wednesday, August 10, 2016 at 3:47:10 PM UTC-4, Nicklas Andersen wrote:
>
> Hello
>
> I'm trying to solve a large, sparse and unsymmetrical linear system Ax = b.
> For this task I'm using Julia's *SparseMatrixCSC* type for the definition 
> of my matrices and Julia's built-in backslash ' \ ' operator for the 
> solution of the system.
> I need *quadruple precision* and thus I've been trying to implement my 
> routine with the *BigFloat* type together with the SparseMatrixCSC type.
>
> To illustrate this, I give a simple example here:
> set_bigfloat_precision(128);
> A  = speye(BigFloat, 2, 2);
> b = ones(BigFloat, 2, 1);
> x = A\b;
>
> If I do this I either get a StackOverflowError:
> ERROR: StackOverflowError:
>  in copy at array.jl:100
>  in float at sparse/sparsematrix.jl:234
>  in call at essentials.jl:57 (repeats 254 times)
>
> or the solver seems to run forever and never terminates. As the error 
> trace indicates, it seems like the sparse solver only accepts the standard 
> *float* types.
> My question is then: is there a way to get quadruple precision with the 
> standard solvers in Julia, in this case UMFPACK I assume? Or should I look 
> for something else (in that case, any suggestions :) )?
>
> Regards Nicklas A.
>
>

[julia-users] Re: Problems with A\b and BigFloat

2016-08-10 Thread Chris Rackauckas
Though I don't know if they have sparse algorithms. But they give you a good 
base to get started on making one...

On Wednesday, August 10, 2016 at 2:20:54 PM UTC-7, Chris Rackauckas wrote:
>
> GenericSVD.jl  has linear 
> solver routines which work for generic number types (like BigFloat). You 
> can use an SVD to solve the linear system. It's not as fast as other 
> methods, but you may find this useful.
>
> On Wednesday, August 10, 2016 at 12:47:10 PM UTC-7, Nicklas Andersen wrote:
>>
>> Hello
>>
>> I'm trying to solve a large, sparse and unsymmetrical linear system Ax = b.
>> For this task I'm using Julia's *SparseMatrixCSC* type for the definition 
>> of my matrices and Julia's built-in backslash ' \ ' operator for the 
>> solution of the system.
>> I need *quadruple precision* and thus I've been trying to implement my 
>> routine with the *BigFloat* type together with the SparseMatrixCSC type.
>>
>> To illustrate this, I give a simple example here:
>> set_bigfloat_precision(128);
>> A  = speye(BigFloat, 2, 2);
>> b = ones(BigFloat, 2, 1);
>> x = A\b;
>>
>> If I do this I either get a StackOverflowError:
>> ERROR: StackOverflowError:
>>  in copy at array.jl:100
>>  in float at sparse/sparsematrix.jl:234
>>  in call at essentials.jl:57 (repeats 254 times)
>>
>> or the solver seems to run forever and never terminates. As the error 
>> trace indicates, it seems like the sparse solver only accepts the standard 
>> *float* types.
>> My question is then: is there a way to get quadruple precision with the 
>> standard solvers in Julia, in this case UMFPACK I assume? Or should I look 
>> for something else (in that case, any suggestions :) )?
>>
>> Regards Nicklas A.
>>
>>

[julia-users] Re: Problems with A\b and BigFloat

2016-08-10 Thread Chris Rackauckas
GenericSVD.jl  has linear 
solver routines which work for generic number types (like BigFloat). You 
can use an SVD to solve the linear system. It's not as fast as other 
methods, but you may find this useful.
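For instance, something along these lines (a sketch: it assumes GenericSVD.jl 
extends `svd`/`svdfact` to generic element types as its README describes, and 
it works on a dense matrix, since the SVD is a dense factorization):

using GenericSVD

set_bigfloat_precision(128)
A = map(BigFloat, randn(4, 4))   # small dense test matrix, just for illustration
b = ones(BigFloat, 4)

U, S, V = svd(A)                 # generic SVD, available for BigFloat with GenericSVD loaded
x = V * ((U' * b) ./ S)          # solve A x = b from the factors

println(norm(A * x - b))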

On Wednesday, August 10, 2016 at 12:47:10 PM UTC-7, Nicklas Andersen wrote:
>
> Hello
>
> I'm trying to solve a large, sparse and unsymmetrical linear system Ax = b.
> For this task I'm using Julia's *SparseMatrixCSC* type for the definition 
> of my matrices and Julia's built-in backslash ' \ ' operator for the 
> solution of the system.
> I need *quadruple precision* and thus I've been trying to implement my 
> routine with the *BigFloat* type together with the SparseMatrixCSC type.
>
> To illustrate this, I give a simple example here:
> set_bigfloat_precision(128);
> A  = speye(BigFloat, 2, 2);
> b = ones(BigFloat, 2, 1);
> x = A\b;
>
> If I do this I either get a StackOverflowError:
> ERROR: StackOverflowError:
>  in copy at array.jl:100
>  in float at sparse/sparsematrix.jl:234
>  in call at essentials.jl:57 (repeats 254 times)
>
> or the solver seems to run forever and never terminates. As the error 
> trace indicates, it seems like the sparse solver only accepts the standard 
> *float* types.
> My question is then: is there a way to get quadruple precision with the 
> standard solvers in Julia, in this case UMFPACK I assume? Or should I look 
> for something else (in that case, any suggestions :) )?
>
> Regards Nicklas A.
>
>

[julia-users] Re: Problems with A\b and BigFloat

2016-08-10 Thread Kristoffer Carlsson
The sparse solvers use UMFPACK and CHOLMOD, which are C libraries and thus 
only support the standard number types. You would need a solver written in 
pure Julia that could take any number type.

The StackOverflowError was fixed here: 
https://github.com/JuliaLang/julia/pull/14902 
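
As a stopgap, Base's generic dense LU is written in pure Julia and does accept 
BigFloat, so a dense fallback along these lines works for the small example (a 
sketch, and only viable while the matrix fits in memory as a dense array):

set_bigfloat_precision(128)
A = speye(BigFloat, 2, 2)
b = ones(BigFloat, 2)

x = lufact(full(A)) \ b   # generic LU on a dense copy; no C library involved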

On Wednesday, August 10, 2016 at 9:47:10 PM UTC+2, Nicklas Andersen wrote:
>
> Hello
>
> I'm trying to solve a large, sparse and unsymmetrical linear system Ax = b.
> For this task I'm using Julia's *SparseMatrixCSC* type for the definition 
> of my matrices and Julia's built-in backslash ' \ ' operator for the 
> solution of the system.
> I need *quadruple precision* and thus I've been trying to implement my 
> routine with the *BigFloat* type together with the SparseMatrixCSC type.
>
> To illustrate this, I give a simple example here:
> set_bigfloat_precision(128);
> A  = speye(BigFloat, 2, 2);
> b = ones(BigFloat, 2, 1);
> x = A\b;
>
> If I do this I either get a StackOverflowError:
> ERROR: StackOverflowError:
>  in copy at array.jl:100
>  in float at sparse/sparsematrix.jl:234
>  in call at essentials.jl:57 (repeats 254 times)
>
> or the solver seems to run forever and never terminates. As the error 
> trace indicates, it seems like the sparse solver only accepts the standard 
> *float* types.
> My question is then: is there a way to get quadruple precision with the 
> standard solvers in Julia, in this case UMFPACK I assume? Or should I look 
> for something else (in that case, any suggestions :) )?
>
> Regards Nicklas A.
>
>