Once you have the LU factorization, LAPACK (xGECON) can give an
estimate of the condition number.
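
In Julia, a rough equivalent of that estimate is a one-liner. A hedged
sketch (cond(A, 1) is Base's 1-norm condition number, the quantity
xGECON estimates):

    A = [2.0 1 -1; 1 -2 -3; -3 -1 2]   # the singular matrix from below
    rcond = 1 / cond(A, 1)             # reciprocal condition number
    # rcond is effectively zero here, flagging near-singularity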

Perhaps routines that use an LU factorization could take a tolerance
parameter with a small default value (for the reciprocal condition
number). When this parameter is a Real, the routine would check that the
estimated reciprocal condition number does not fall below it, i.e. that
the matrix is not too badly conditioned. When the parameter is some
non-number value (e.g. NA), the condition number would not be calculated
and the routine would proceed regardless.

The same applies to routines using a QR factorization or an SVD (in
which case you can of course get the exact condition number, not only an
estimate). A rough sketch of the idea is below.
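
To make it concrete, a hypothetical sketch (checked_inv and rcondtol are
made-up names, not existing API; nothing plays the role of NA here):

    # Hypothetical API sketch: a Real tolerance triggers the check,
    # while a non-number (here nothing) skips it entirely.
    function checked_inv(A; rcondtol = sqrt(eps()))
        if isa(rcondtol, Real)
            rcond = 1 / cond(A, 1)   # estimated, as from xGECON
            # 0 is a dummy index; SingularException carries a pivot index
            rcond < rcondtol && throw(SingularException(0))
        end
        inv(A)
    end

    checked_inv(A)                      # conditioning check by default
    checked_inv(A, rcondtol = nothing)  # skip the check, proceed anyway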

Best,

Tamas

On Mon, Dec 29 2014, Andreas Noack <[email protected]> wrote:

> Our inv uses the LU factorization, and if the matrix is singular this
> can give an inverse with very large errors instead of a singular
> exception. MATLAB gives the same result, but with a warning that the
> result might be inaccurate. We have considered also giving a warning
> (sketched below), but haven't made a decision yet.
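>
> For the warning variant, something along these lines (warn_inv is just
> an illustration, not a proposed name, and the threshold is arbitrary):
>
>     function warn_inv(A)
>         B = inv(A)
>         # mimic MATLAB: warn when the estimated condition number is
>         # near the limit of what the element type can distinguish
>         if cond(A, 1) > 1 / eps(eltype(B))
>             warn("matrix is ill-conditioned; inverse may be inaccurate")
>         end
>         B
>     end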
>
> Note that you have the option of doing the calculation in rational
> instead of floating-point arithmetic, e.g.
>
> julia> Ar = A//1
> 3x3 Array{Rational{Int64},2}:
>   2//1   1//1  -1//1
>   1//1  -2//1  -3//1
>  -3//1  -1//1   2//1
>
> julia> inv(Ar)
> ERROR: SingularException(3)
>  in naivesub! at linalg/triangular.jl:257
>  in A_ldiv_B! at linalg/bidiag.jl:143
>  in inv at linalg/factorization.jl:812
>  in inv at linalg/dense.jl:328
>
> which will detect the singularity.
>
> 2014-12-29 4:52 GMT+01:00 Tomas Lycken <[email protected]>:
>
>> That probably confirms my hypothesis: when you move down to Float32,
>> the tiny rounding errors from the arithmetic vanish, so that the exact
>> singularity of the matrix is preserved.
>>
>> // T
>>
>>
>> On Monday, December 29, 2014 4:42:01 AM UTC+1, Ben Zeckel wrote:
>>>
>>> Interesting. If I move down to Float32 (not the default on my 64-bit
>>> version), I get a better result.
>>>
>>> julia> versioninfo()
>>> Julia Version 0.3.4
>>> Commit 3392026* (2014-12-26 10:42 UTC)
>>> Platform Info:
>>>   System: Windows (x86_64-w64-mingw32)
>>>   CPU: Intel(R) Core(TM) i5-4200U CPU @ 1.60GHz
>>>   WORD_SIZE: 64
>>>   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell)
>>>   LAPACK: libopenblas
>>>   LIBM: libopenlibm
>>>   LLVM: libLLVM-3.3
>>>
>>> julia> A = [2 1 -1; 1 -2 -3; -3 -1 2];
>>>
>>> julia> A
>>> 3x3 Array{Float64,2}:
>>>   2.0   1.0  -1.0
>>>   1.0  -2.0  -3.0
>>>  -3.0  -1.0   2.0
>>>
>>> julia> det(A)
>>> 1.5543122344752192e-15
>>>
>>> julia> eps(Float64)
>>> 2.220446049250313e-16
>>>
>>> julia> nextfloat(0.0)
>>> 5.0e-324
>>>
>>> julia> A = float32(A)
>>> 3x3 Array{Float32,2}:
>>>   2.0   1.0  -1.0
>>>   1.0  -2.0  -3.0
>>>  -3.0  -1.0   2.0
>>>
>>> julia> det(A)
>>> 0.0f0
>>>
>>> julia> inv(A)
>>> ERROR: SingularException(3)
>>>  in inv at linalg/lu.jl:149
>>>  in inv at linalg/dense.jl:328
>>>
>>> julia>
>>>
>>> On Sunday, December 28, 2014 10:05:15 PM UTC-5, Tomas Lycken wrote:
>>>>
>>>> The determinant of your failing A is nonzero:
>>>>
>>>> julia> det(A)
>>>> 1.5543122344752192e-15
>>>>
>>>> Of course, given its magnitude, that's probably just floating-point
>>>> noise, but it makes me guess that the matrix inversion goes through
>>>> when it shouldn't for the same reason: somewhere along the
>>>> calculations, enough floating-point error accumulates to make the
>>>> matrix numerically non-singular (albeit still very ill-conditioned,
>>>> as seen by the monstrously large numbers in the inverse). I'm not
>>>> confident enough to definitely rule out a bug somewhere, but I'd
>>>> consider it pretty likely that floating-point arithmetic is the root
>>>> cause in this instance =)
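>>>>
>>>> A quick way to see the ill-conditioning directly is cond(A), which
>>>> defaults to the 2-norm:
>>>>
>>>>     cond(A)   # enormous for this A, on the order of 1/eps(Float64)
>>>>
>>>> i.e. the matrix is singular to working precision.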
>>>>
>>>> // Tomas
>>>>
>>>> On Monday, December 29, 2014 3:17:06 AM UTC+1, Ben Zeckel wrote:
>>>>>
>>>>>
>>>>> I am attempting to learn Julia and am experimenting with the matrix
>>>>> function inv to calculate the inverse of some matrices, some
>>>>> singular and some nonsingular.
>>>>> One case (below) gives me the wrong result, but it is such a simple
>>>>> example that I must be misusing Julia. (I found a similar issue with
>>>>> the basic factorial function from the documentation returning 0 for
>>>>> large numbers, which came down to me not immediately realizing the
>>>>> overflow behavior.)
>>>>>
>>>>> These attempts work as expected
>>>>>
>>>>> julia> A = [1 2 3; 0 2 2; 1 2 3]; inv(A)
>>>>> ERROR: SingularException(3)
>>>>>  in inv at linalg/lu.jl:149
>>>>>  in inv at linalg/dense.jl:328
>>>>>
>>>>> julia> A = [1 1 1; 0 2 3; 5 5 1]; inv(A)
>>>>> 3x3 Array{Float64,2}:
>>>>>   1.625  -0.5  -0.125
>>>>>  -1.875   0.5   0.375
>>>>>   1.25    0.0  -0.25
>>>>>
>>>>> julia> inv(A) * A
>>>>> 3x3 Array{Float64,2}:
>>>>>  1.0  0.0  0.0
>>>>>  0.0  1.0  0.0
>>>>>  0.0  0.0  1.0
>>>>>
>>>>>
>>>>> This does not
>>>>>
>>>>> julia> A = [2 1 -1; 1 -2 -3; -3 -1 2]; inv(A)
>>>>> 3x3 Array{Float64,2}:
>>>>>  -4.5036e15  -6.43371e14  -3.21686e15
>>>>>   4.5036e15   6.43371e14   3.21686e15
>>>>>  -4.5036e15  -6.43371e14  -3.21686e15
>>>>>
>>>>> julia> A * inv(A)
>>>>> 3x3 Array{Float64,2}:
>>>>>  0.0  -0.125  -0.5
>>>>>  0.0   0.75    0.0
>>>>>  0.0   0.0     0.0
>>>>>
>>>>> Double-checking by hand, in Julia with rref, and on wolframalpha.com
>>>>> all give the expected result for this singular matrix
>>>>>
>>>>> julia> A = [2 1 -1; 1 -2 -3; -3 -1 2]; rref([A eye(3)])
>>>>> 3x6 Array{Float64,2}:
>>>>>  1.0  0.0  -1.0  0.0   0.142857  -0.285714
>>>>>  0.0  1.0   1.0  0.0  -0.428571  -0.142857
>>>>>  0.0  0.0   0.0  1.0   0.142857   0.714286
>>>>>
>>>>> inv {{2,1,-1}, {1,-2,-3}, {-3,-1,2}}
>>>>> http://www.wolframalpha.com/input/?i=inv+%7B%7B2%2C1%2C-1%7D%2C+%7B1%2C-2%2C-3%7D%2C+%7B-3%2C-1%2C2%7D%7D
>>>>>
>>>>> Thanks for any help,
>>>>>
>>>>> Ben
>>>>>
>>>>>
