John,
The individual block tridiagonal systems are pretty small for solving in
parallel with MPI; you are unlikely to get much improvement from focusing
on them.
Some general comments/suggestions:
1) ADI methods generally are difficult to parallelize with MPI
2)
Hello,
The block matrices tend to be dense and can be large, depending on the
number of unknowns. The overall block tridiagonal system can be large as
well, since its size depends on the number of grid points in a given index
direction. It would not be unheard of to have greater than 100 rows in the
global
Thanks for the help.
I didn't want to get too far into the weeds about the numerical method;
the short version is that I have a block tridiagonal system that needs to
be solved. If
it helps any, the system comes from an ADI scheme on the Navier-Stokes
equations. The [A], [B], and [C] block matrices
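For concreteness, the generic form of such a system (my notation, not
tied to the particular scheme) is

  A_i x_{i-1} + B_i x_i + C_i x_{i+1} = f_i,    i = 1, ..., N,

with A_1 = C_N = 0, where x_i collects the unknowns at grid line i and
each A_i, B_i, C_i is a dense block.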
John,
How large are your blocks, and are they dense? Also, roughly how many
blocks do you have? The BAIJ formats are for the case where the blocks
are dense. As Jed notes, we don't have specific parallel block
tridiagonal solvers.
You can use the parallel direct solvers such as MUMPS,
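A minimal sketch of driving such a solver through KSP (assuming a recent
PETSc configured with --download-mumps, and an already assembled Mat A
and Vecs b, x; error checking omitted):

  KSP ksp;
  PC  pc;

  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A);
  KSPSetType(ksp, KSPPREONLY);   /* no Krylov iterations, pure direct solve */
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCLU);
  PCFactorSetMatSolverType(pc, MATSOLVERMUMPS);  /* PCFactorSetMatSolverPackage in older PETSc */
  KSPSetFromOptions(ksp);
  KSPSolve(ksp, b, x);

Equivalently at runtime: -ksp_type preonly -pc_type lu
-pc_factor_mat_solver_type mumps.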
Where do your tridiagonal systems come from? Do you need to solve one
at a time, or batches of tridiagonal problems?
Although it is not in PETSc, we have some work on solving the sort of
tridiagonal systems that arise in compact discretizations, which, it
turns out, can be solved much faster than
Hello,
I need a parallel block tridiagonal solver and thought PETSc would be
perfect. However, there seems to be no specific example showing exactly
which VecCreate and MatCreate functions to use. I searched the archive
and the web, and there are no explicit block tridiagonal examples
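For reference, a minimal sketch of the sort of creation calls involved,
under assumed sizes (bs = 5 unknowns per grid point, N = 100 block rows)
and placeholder block values; error checking omitted:

  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    Mat         A;
    Vec         x, b;
    PetscInt    bs = 5, N = 100, rstart, rend, i, col;
    PetscScalar lower[25], diag[25], upper[25]; /* bs*bs entries each, row-major */

    PetscInitialize(&argc, &argv, NULL, NULL);
    /* ... fill lower/diag/upper from the discretization ... */

    /* one dense bs x bs block for [A], [B], [C] on each block row */
    MatCreateBAIJ(PETSC_COMM_WORLD, bs, PETSC_DECIDE, PETSC_DECIDE,
                  N*bs, N*bs,
                  3, NULL,   /* at most 3 blocks per block row on-process */
                  2, NULL,   /* at most 2 neighbor blocks off-process     */
                  &A);
    MatGetOwnershipRange(A, &rstart, &rend);  /* point rows ...              */
    rstart /= bs; rend /= bs;                 /* ... converted to block rows */
    for (i = rstart; i < rend; i++) {
      if (i > 0)     { col = i - 1; MatSetValuesBlocked(A, 1, &i, 1, &col, lower, INSERT_VALUES); }
      col = i;       MatSetValuesBlocked(A, 1, &i, 1, &col, diag, INSERT_VALUES);
      if (i < N - 1) { col = i + 1; MatSetValuesBlocked(A, 1, &i, 1, &col, upper, INSERT_VALUES); }
    }
    MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

    VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, N*bs, &b);
    VecDuplicate(b, &x);

    /* ... solve, e.g. with the KSP/MUMPS setup shown above ... */

    MatDestroy(&A);
    VecDestroy(&b);
    VecDestroy(&x);
    PetscFinalize();
    return 0;
  }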