On Mon, May 19, 2014 at 1:42 PM, Jonathan Wong <[email protected]> wrote:

> Thanks for the input. To clarify, I'm trying to compare GPU algorithms to
> PETSc, and the GPU codes only have cg/jacobi for what I'm comparing at the
> moment. This is why I'm not using gmres (which also works well).
>
> I can solve the problem with the GPU (custom code) using CG + jacobi for
> all the meshes. On the CPU side, I can solve everything with cg/bjacobi and
> almost all of my meshes with cg/jacobi except for my 50k node mesh. I can
> solve the problem with my finite element code's built-in direct solver (it
> just takes a while) on one processor. I've been reading that by default the bjacobi pc
> uses one block per processor. So I had assumed that for one processor
> block-jacobi and jacobi would give similar results. cg+bjacobi works fine.
> cg+jacobi does not.
>

"Jacobi" means preconditioning by the inverse of the diagonal of the
matrix. Block-Jacobi means using a preconditioner
formed from each of the blocks, in this case 1 block. By default the inner
preconditioner is ILU(0), not jacobi. You can
make them equivalent using -sub_pc_type jacobi.
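On one process you can check this directly from the options database; a
minimal sketch (./app stands in for your executable, which is a placeholder;
the option names are the standard PETSc ones):

    # default bjacobi: 1 block per process, ILU(0) inside each block
    ./app -ksp_type cg -pc_type bjacobi

    # Jacobi inside the (single) block -- on one process this should match
    # plain -pc_type jacobi
    ./app -ksp_type cg -pc_type bjacobi -sub_pc_type jacobi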

   Matt


> I'll just look into the preconditioner code and use KSPView (see the
> option sketch below the quoted thread) to try to figure out what the
> differences are for one processor. I'm not sure why the GPU can
> consistently solve the problem with cg/jacobi; I'm assuming this is due
> to round-off or order-of-operations differences between the two.
>
>
> On Mon, May 19, 2014 at 6:35 AM, Jed Brown <[email protected]> wrote:
>
>> Matthew Knepley <[email protected]> writes:
>> > No, Block-Jacobi and Jacobi are completely different. If you are not
>> > positive definite, you should be using MINRES.
>>
>> MINRES requires an SPD preconditioner.
>>
>
>
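For the one-process comparison above, a sketch of the diagnostic run (again
./app is a placeholder; the options are standard PETSc monitoring options):

    ./app -ksp_type cg -pc_type jacobi \
          -ksp_view -ksp_monitor_true_residual -ksp_converged_reason

-ksp_view prints the solver and preconditioner configuration that was
actually used, and the true-residual monitor can make round-off-driven
stagnation easier to spot than the preconditioned residual alone.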


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
