Dear Andreas,
Thank you for your response, and I apologize for not writing back earlier;
I got swamped with teaching.
Your suggestion of using cholfact together with the "shift" argument is a
reasonable one. I need to test it and see what results I get for the
specific problem at hand. Also, I should probably take the leap and install
v0.4.
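Concretely, I expect to start with something along these lines (the shift
value below is only a placeholder that I would tune on my data):
# regularized solve via the shifted sparse Cholesky factorization
# (1e-8 is just a placeholder value for the shift)
hat_x_chol = cholfact(A, shift = 1e-8) \ b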
I would like to ask a related question about a slightly different approach.
While the solution is in general not unique when A is low rank, it seems
that in my problem the vector b lies in the range of A (this is an empirical
observation). The following approach seems to result in an estimator with
good statistical properties.
# A -- sparse, symmetric positive semi-definite matrix
# b -- vector (empirically in the range of A)
D, U, _ = eigs(A; nev = size(A, 1) - 1)   # largest-magnitude eigenpairs; D is sorted by magnitude
indC = findfirst(x -> x < 5e-5, D)        # first eigenvalue below the cutoff (0 if there is none)
r = indC == 0 ? length(D) : indC - 1      # number of eigenvalues kept
D[r+1:end] = 0.0                          # drop the small eigenvalues
for j = 1:r
    D[j] = 1.0 / D[j]                     # invert only the retained eigenvalues
end
hat_x = U * diagm(D) * U' * b             # pseudo-inverse of the rank-r approximation applied to b
In the snippet above, an eigendecomposition of A is computed first. Next, a
low-rank approximation of A is formed by keeping all eigenvalues larger
than 5e-5 (this is a tuning parameter). Finally, x is estimated by applying
the pseudo-inverse of this low-rank approximation to b.
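As a quick sanity check that b is (numerically) in the range of A, I also
look at the relative residual of the estimate:
# relative residual; small values are consistent with b lying in the range of A
norm(A * hat_x - b) / norm(b)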
The function eigs accepts the parameter "nev"; however, in general I do not
know how many eigenvalues will lie above a given cutoff (5e-5 in the
snippet above). Is it possible to pass a parameter so that all eigenvalues
and eigenvectors above a certain threshold are returned? Or should I simply
identify one eigenvalue at a time and deflate A by removing the
corresponding eigenvector?
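To make the second option concrete, the kind of deflation loop I have in
mind is roughly the following (the explicit rank-one update densifies A, so
this is only a sketch of the idea, not something I would run at full scale):
# one-at-a-time deflation sketch: repeatedly extract the largest eigenpair
# and subtract it off until the next eigenvalue falls below the cutoff
lams = Float64[]
vecs = Vector{Float64}[]
B = full(A)                        # dense copy; the rank-one updates destroy sparsity
while true
    d, v, _ = eigs(B; nev = 1)     # largest-magnitude eigenpair of the deflated matrix
    d[1] < 5e-5 && break           # stop once we drop below the cutoff
    push!(lams, d[1])
    push!(vecs, vec(v))
    B = B - d[1] * v * v'          # deflate the eigenpair just found
end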
Thanks, Mladen
On Tuesday, April 21, 2015 at 9:09:29 AM UTC-5, Andreas Noack wrote:
>
> I'm not sure what the best solution is here because I don't fully
> understand your objective. If A has low rank then the solution is not
> unique and if it has almost low rank, the solution is very ill conditioned.
> A solution could be to use our new "shift" argument to our sparse Cholesky
> factorization. This will introduce some regularization into the problem. This
> requires the latest master of Julia. Then you can do
> julia> A = sprandn(20,20,0.1);A = A'A;
>
> julia> b = A*ones(size(A, 1));
>
> julia> A\b
> ERROR: ArgumentError: matrix has one or more zero pivots
> in ldltfact at sparse/cholmod.jl:1042
> in ldltfact at sparse/cholmod.jl:1048
> in factorize at sparse/linalg.jl:688
> in \ at linalg/generic.jl:244
>
> julia> cholfact(A, shift = 1e-16)\b
> 20-element Array{Float64,1}:
> 1.0
> 1.0
> 1.0
> 1.0
> 1.0
> 1.0
> 1.0
> 1.0
> 1.0
> 1.0
> 1.0
> 1.0
> 1.38612
> 0.0
> 1.0
> 1.0
> 1.0
> 1.0
> 1.0
> 1.0
>
> 2015-04-20 20:32 GMT-04:00 Mladen Kolar <[email protected]>:
>
>> Hi,
>>
>> I am interested in solving a linear system Ax = b where A is a sparse,
>> symmetric positive semi-definite matrix.
>> A is not exactly low rank; however, for the estimation problem at hand,
>> one can approximate A with a matrix of rank around sqrt(p), where A is
>> p x p, and still obtain an estimator of x that has good statistical
>> properties.
>>
>> Is there an incomplete Cholesky factorization for sparse symmetric
>> matrices in Julia? Or an SVD implementation for sparse matrices?
>> What would be a recommended way of solving the above problem?
>>
>> Kind regards,
>> Mladen
>>
>
>