"Sun, Hui" <[email protected]> writes:

> Thanks Jed. It converges now. With a 32 by 32 grid, it takes 3.1076 seconds 
> on 8 cores and 7.13586 seconds on 2 cores. With a 64 by 64 grid, it takes 
> 18.1767s and 55.0017s respectively. That seems quite reasonable. 
>
> By the way, how do I know which matrix solver and which preconditioner is 
> being called? 

-ksp_view (or -snes_view, which includes the same information once per 
nonlinear solve).
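
For example (the executable name and grid options are placeholders for
your own run):

```shell
# -ksp_view prints the KSP type, preconditioner type, and their options
# once the solve is configured; with SNES, -snes_view includes the same.
mpiexec -n 8 ./your_solver -da_grid_x 64 -da_grid_y 64 -ksp_view
```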

> Besides, I have another question: I am trying to program finite
> differences for 2D Stokes flow with Dirichlet or Neumann boundary
> conditions, using a staggered MAC grid. I looked up all the examples in
> snes; there are three Stokes flow examples, all of which are finite
> element. I was thinking about naming (i-1/2,j), (i,j-1/2) and (i,j) all
> as (i,j), then defining u, v, p as three PetscScalars at (i,j), but in
> that case u will have one more column than p and v will have one more
> row than p. If there is already something in PETSc for the MAC grid,
> then I don't have to worry about those details. Do you know any
> examples or references doing that?

What you describe is a common approach.  You set trivial "boundary
conditions" for those silent dofs (e.g. an identity diagonal with zero
right-hand side, so the matrix stays nonsingular) and otherwise ignore
them.
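
A minimal NumPy sketch of that bookkeeping, independent of PETSc (the
array names, the nx/ny sizes, and the collocated layout are illustrative
assumptions, not PETSc API):

```python
import numpy as np

# Collocated storage for a MAC grid: u(i-1/2,j), v(i,j-1/2), and p(i,j)
# are all stored at index (i,j) on an (nx+1) x (ny+1) array, so each
# field carries a few "silent" entries beyond its true staggered extent.
nx, ny = 4, 3  # number of pressure cells in x and y (illustrative)

# One padded mask per field; everything starts silent (True).
silent_u = np.ones((nx + 1, ny + 1), dtype=bool)
silent_v = np.ones((nx + 1, ny + 1), dtype=bool)
silent_p = np.ones((nx + 1, ny + 1), dtype=bool)

# Mark the active dofs of the true MAC layout:
silent_u[: nx + 1, :ny] = False   # u lives on (nx+1) x ny vertical faces
silent_v[:nx, : ny + 1] = False   # v lives on nx x (ny+1) horizontal faces
silent_p[:nx, :ny] = False        # p lives on nx x ny cell centers

# The active counts match the staggered grid exactly:
assert (~silent_u).sum() == (nx + 1) * ny
assert (~silent_v).sum() == nx * (ny + 1)
assert (~silent_p).sum() == nx * ny

# When assembling the operator, each silent dof gets a trivial row:
# identity on the diagonal, zero right-hand side. The matrix stays
# nonsingular and the solver simply carries the silent values along.
n = 3 * (nx + 1) * (ny + 1)       # total collocated dofs
A = np.zeros((n, n))
silent = np.concatenate(
    [silent_u.ravel(), silent_v.ravel(), silent_p.ravel()]
)
for k in np.flatnonzero(silent):
    A[k, k] = 1.0                 # trivial "boundary condition" row
```

The same idea carries over to a DMDA with three dofs per node: the
physics rows are assembled only at active (i,j) pairs, and silent
entries get the identity rows above.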
