> On Feb 1, 2024, at 6:57 AM, Niclas Götting <[email protected]>
> wrote:
>
> Thank you very much for the input!
>
> I've spent a lot of time compressing the linear system to a quarter of its
> size. This resulted in a form, however, that cannot be represented by
> Kronecker products. Maybe I should return to the original form...
>
> The new structure of the linear system is as follows:
>
> +---+---+--------+
> | A | 0 | -2*B^T |
> +---+---+--------+
> | 0 | D | -C^T   |
> +---+---+--------+
> | B | C | D      |
> +---+---+--------+
>
> The matrices D and C are huge and should therefore account for most of the
> computational cost. Looking at the visual structure of C, I suspect it can
> be written as a sum of (skew-)symmetric matrices. A, B, and D are not
> symmetric at all, though.
>
> Can the block substructure alone be helpful for solving the linear system in
> a smarter manner?
Possibly; it depends on the details of A and D. If good preconditioners are
available for A and D separately, then PCFIELDSPLIT can be used to construct a
preconditioner for the entire matrix. If A is small, then LU can be used for
the A block inside the fieldsplit, so A does not need a good preconditioner of
its own. So what is the structure of D?
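
As a rough sketch of what that could look like from the command line (the
split numbering and the inner solvers below are placeholders to adapt to your
blocks; with three blocks you would define three splits, or group two of them,
via PCFieldSplitSetIS() in code or -pc_fieldsplit_<n>_fields for a strided or
DM-defined layout):

    -ksp_type gmres
    -pc_type fieldsplit -pc_fieldsplit_type multiplicative
    -fieldsplit_0_ksp_type preonly -fieldsplit_0_pc_type lu
    -fieldsplit_1_ksp_type preonly -fieldsplit_1_pc_type ilu

Running with -ksp_view will show whether the splits got set up the way you
intend, and -ksp_monitor_true_residual lets you watch the convergence.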
>
> On 1/31/24 18:45, Barry Smith wrote:
>> For large problems, preconditioners have to take advantage of some
>> underlying mathematical structure of the operator to perform well (require
>> few iterations). Just black-boxing the system with simple preconditioners
>> will not be effective.
>>
>> So, one needs to look at the Liouvillian superoperator's structure to see
>> what one can take advantage of. I first noticed that it can be represented
>> as a Kronecker product, A x I, or perhaps a combination of Kronecker
>> products. In theory, one can take advantage of Kronecker structure to solve
>> such systems much more efficiently than by naively solving the huge system
>> directly. In addition, it may be possible to use the Kronecker structure of
>> the operator to perform matrix-vector products with the operator much more
>> efficiently than by first explicitly forming the huge matrix representation
>> and multiplying with that. I suggest some googling for "linear solver",
>> "preconditioning", "Kronecker product".
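
To spell out the mechanism I had in mind: with column-stacking vectorization,
the standard identity is

    (A x B) vec(X) = vec(B X A^T),   so in particular (A x I) vec(X) = vec(X A^T),

which means a matrix-vector product with a Kronecker factor never requires
forming the big matrix explicitly; one reshapes the vector into a matrix,
multiplies by the small factors, and reshapes back.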
>>
>>> On Jan 31, 2024, at 6:51 AM, Niclas Götting <[email protected]>
>>> wrote:
>>>
>>> Hi all,
>>>
>>> I've been trying for the last couple of days to solve a linear system using
>>> iterative methods. The system size scales exponentially (64^N) with the
>>> number of components, so I get system sizes of
>>>
>>> * (64, 64) for one component
>>> * (4096, 4096) for two components
>>> * (262144, 262144) for three components
>>>
>>> I can solve the first two cases with direct solvers and don't run into any
>>> problems; however, the last case is the first nontrivial one, and it's too
>>> large for a direct solution, which is why I believe I need an iterative
>>> solver.
>>>
>>> As I know the solution for the first two cases, I tried to reproduce them
>>> using GMRES and failed on the second, because GMRES didn't converge and
>>> seems to have been going in the wrong direction (the vector to which it
>>> "tries" to converge is totally different from the correct solution). I went
>>> as far as -ksp_max_it 1000000, which takes orders of magnitude longer than
>>> the LU solution, and intuitively I'd think that GMRES should not take
>>> *that* much longer than LU. Here is the information I have about this
>>> (4096, 4096) system:
>>>
>>> * not symmetric (which is why I went for GMRES)
>>> * not singular (SVD: condition number 1.427743623238e+06, 0 of 4096
>>> singular values are (nearly) zero)
>>> * solving without preconditioning does not converge (DIVERGED_ITS)
>>> * solving with ILU and natural ordering fails due to zeros on the diagonal
>>> * solving with ILU and RCM ordering does not converge (DIVERGED_ITS)
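
Just to make sure we are looking at the same thing: I assume the ILU runs
above correspond to options along the lines of

    -ksp_type gmres -pc_type ilu -pc_factor_mat_ordering_type rcm

(plus -ksp_monitor_true_residual -ksp_converged_reason to see what actually
happens); if not, please post the exact options that were used.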
>>>
>>> After some searching I also found this paper
>>> (http://arxiv.org/abs/1504.06768), which mentions the use of ILUTP. I
>>> believe that in PETSc this would be used via hypre, which, however, threw a
>>> SEGV for me, and I'm not sure it's worth debugging at this point, because I
>>> might be missing something entirely different.
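
If you want to retry the hypre route, I believe the relevant spelling is
-pc_type hypre -pc_hypre_type pilut (hypre's threshold-based ILU) or
-pc_hypre_type euclid (hypre's parallel ILU(k)); a stack trace from the SEGV
would help tell whether it is a hypre problem or a problem on the PETSc side.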
>>>
>>> Does anybody have an idea how this system could be solved in finite time,
>>> such that the method also scales to the three-component problem?
>>>
>>> Thank you all very much in advance!
>>>
>>> Best regards
>>> Niclas
>>>