Re: [petsc-users] PETSc User Meeting 2017, June 14-16 in Boulder, Colorado

2017-05-25 Thread Matthew Knepley
On Thu, May 25, 2017 at 11:26 PM, Jed Brown  wrote:

> The program is up on the website:
>
>   https://www.mcs.anl.gov/petsc/meetings/2017/


Put Toby on the panel.

  Matt


>
> If you haven't registered yet, we can still accommodate you, but please
> register soon.  If you haven't booked lodging, please do that soon --
> the on-campus lodging option will close on *Tuesday, May 30*.
>
>   https://confreg.colorado.edu/CSM2017
>
> We are looking forward to seeing you in Boulder!
>
> Jed Brown  writes:
>
> > We'd like to invite you to join us at the 2017 PETSc User Meeting held
> > at the University of Colorado Boulder on June 14-16, 2017.
> >
> >   http://www.mcs.anl.gov/petsc/meetings/2017/
> >
> > The first day consists of tutorials on various aspects and features of
> > PETSc. The second and third days will be devoted to exchange,
> > discussions, and a refinement of strategies for the future with our
> > users. We encourage you to present work illustrating your own use of
> > PETSc, for example in applications or in libraries built on top of
> > PETSc.
> >
> > Registration for the PETSc User Meeting 2017 is free for students and
> > $75 for non-students. We can host a maximum of 150 participants, so
> > register soon (and by May 15).
> >
> >   http://www.eventzilla.net/web/e/petsc-user-meeting-2017-2138890185
> >
> > We are also offering low-cost lodging on campus.  A lodging registration
> > site will be available soon and announced here and on the website.
> >
> > Thanks to the generosity of Intel, we will be able to offer a limited
> > number of student travel grants. We are also soliciting additional
> > sponsors -- please contact us if you are interested.
> >
> >
> > We are looking forward to seeing you in Boulder!
> >
> > Please contact us at petsc2...@mcs.anl.gov if you have any questions or
> > comments.
>



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

http://www.caam.rice.edu/~mk51/


Re: [petsc-users] PETSc User Meeting 2017, June 14-16 in Boulder, Colorado

2017-05-25 Thread Jed Brown
The program is up on the website:

  https://www.mcs.anl.gov/petsc/meetings/2017/

If you haven't registered yet, we can still accommodate you, but please
register soon.  If you haven't booked lodging, please do that soon --
the on-campus lodging option will close on *Tuesday, May 30*.

  https://confreg.colorado.edu/CSM2017

We are looking forward to seeing you in Boulder!

Jed Brown  writes:

> We'd like to invite you to join us at the 2017 PETSc User Meeting held
> at the University of Colorado Boulder on June 14-16, 2017.
>
>   http://www.mcs.anl.gov/petsc/meetings/2017/
>
> The first day consists of tutorials on various aspects and features of
> PETSc. The second and third days will be devoted to exchange,
> discussions, and a refinement of strategies for the future with our
> users. We encourage you to present work illustrating your own use of
> PETSc, for example in applications or in libraries built on top of
> PETSc.
>
> Registration for the PETSc User Meeting 2017 is free for students and
> $75 for non-students. We can host a maximum of 150 participants, so
> register soon (and by May 15).
>
>   http://www.eventzilla.net/web/e/petsc-user-meeting-2017-2138890185
>
> We are also offering low-cost lodging on campus.  A lodging registration
> site will be available soon and announced here and on the website.
>
> Thanks to the generosity of Intel, we will be able to offer a limited
> number of student travel grants. We are also soliciting additional
> sponsors -- please contact us if you are interested.
>
>
> We are looking forward to seeing you in Boulder!
>
> Please contact us at petsc2...@mcs.anl.gov if you have any questions or
> comments.




Re: [petsc-users] PCFactorSetShiftType does not work in code but -pc_factor_set_shift_type works

2017-05-25 Thread Danyang Su

Hi Hong,

It works like a charm. I really appreciate your help.

Regards,

Danyang


On 17-05-25 07:49 AM, Hong wrote:

Danyang:
You must access the inner pc, then set the shift there. See
petsc/src/ksp/ksp/examples/tutorials/ex7.c

For example, I add the following to
petsc/src/ksp/ksp/examples/tutorials/ex2.c, at line 191:

  PetscBool isbjacobi;
  PC        pc;
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PetscObjectTypeCompare((PetscObject)pc,PCBJACOBI,&isbjacobi);CHKERRQ(ierr);

  if (isbjacobi) {
    PetscInt nlocal;
    KSP      *subksp;
    PC       subpc;

    ierr = KSPSetUp(ksp);CHKERRQ(ierr);
    ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);

    /* Extract the array of KSP contexts for the local blocks */
    ierr = PCBJacobiGetSubKSP(pc,&nlocal,NULL,&subksp);CHKERRQ(ierr);
    printf("isbjacobi, nlocal %D, set option to subpc...\n",nlocal);
    for (i=0; i<nlocal; i++) { /* assumed loop body: apply the shift to each block's sub-PC */
      ierr = KSPGetPC(subksp[i],&subpc);CHKERRQ(ierr);
      ierr = PCFactorSetShiftType(subpc,MAT_SHIFT_NONZERO);CHKERRQ(ierr);
    }
  }

On Wed, May 24, 2017 at 3:06 PM, Danyang Su wrote:

Dear Hong,

I just tested with different numbers of processors for the
same matrix. It sometimes got "ERROR: Arguments are
incompatible" for different numbers of processors. It works
fine using 4, 8, or 24 processors, but failed with "ERROR:
Arguments are incompatible" using 16 or 48 processors. The
error information is attached. I tested this on my local
computer with 6 cores 12 threads. Any suggestion on this?

Thanks,

Danyang


On 17-05-24 12:28 PM, Danyang Su wrote:


Hi Hong,

Awesome. Thanks for testing the case. I will try your
options for the code and get back to you later.


Re: [petsc-users] DMPlex distribution with FVM adjacency

2017-05-25 Thread Matthew Knepley
On Thu, May 25, 2017 at 2:22 PM, Lawrence Mitchell <
lawrence.mitch...@imperial.ac.uk> wrote:

>
> > On 25 May 2017, at 20:03, Matthew Knepley  wrote:
> >
> >
> > Hmm, I thought I made adjacency per field. I have to look. That way, no
> problem with the Stokes example. DG is still weird.
>
> You might, we don't right now.  We just make the topological adjacency
> that is "large enough", and then make fields on that.
>
> >
> > That seems baroque. So this is just another adjacency pattern. You
> should be able to easily define it, or if you are a patient person,
> > wait for me to do it. It's here
> >
> > https://bitbucket.org/petsc/petsc/src/01c3230e040078628f5e559992965c1c4b6f473d/src/dm/impls/plex/plexdistribute.c?at=master&fileviewer=file-view-default#plexdistribute.c-239
> >
> > I am more than willing to make this overridable by the user through
> function composition or another mechanism.
>
> Hmm, that naive thing of just modifying the XXX_Support_Internal to
> compute with DMPlexGetTransitiveClosure rather than DMPlexGetCone didn't do
> what I expected, but I don't understand the way this bootstrapping is done
> very well.
>

It should do the right thing. Notice that you have to be careful about the
arrays that you use since I reuse them for efficiency here.
What is going wrong?

   Matt


> Cheers,
>
> Lawrence
>
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

http://www.caam.rice.edu/~mk51/


Re: [petsc-users] DMPlex distribution with FVM adjacency

2017-05-25 Thread Lawrence Mitchell

> On 25 May 2017, at 20:03, Matthew Knepley  wrote:
> 
> 
> Hmm, I thought I made adjacency per field. I have to look. That way, no 
> problem with the Stokes example. DG is still weird.

You might, we don't right now.  We just make the topological adjacency that is 
"large enough", and then make fields on that.

> 
> That seems baroque. So this is just another adjacency pattern. You should be 
> able to easily define it, or if you are a patient person,
> wait for me to do it. It's here
> 
> https://bitbucket.org/petsc/petsc/src/01c3230e040078628f5e559992965c1c4b6f473d/src/dm/impls/plex/plexdistribute.c?at=master&fileviewer=file-view-default#plexdistribute.c-239
> 
> I am more than willing to make this overridable by the user through function 
> composition or another mechanism.

Hmm, that naive thing of just modifying the XXX_Support_Internal to compute 
with DMPlexGetTransitiveClosure rather than DMPlexGetCone didn't do what I 
expected, but I don't understand the way this bootstrapping is done very well.

Cheers,

Lawrence




Re: [petsc-users] DMPlex distribution with FVM adjacency

2017-05-25 Thread Matthew Knepley
On Thu, May 25, 2017 at 1:58 PM, Lawrence Mitchell <
lawrence.mitch...@imperial.ac.uk> wrote:

>
> > On 25 May 2017, at 19:46, Matthew Knepley  wrote:
> >
> > Sounds like DG. I will get out my dead chicken for the incantation
>
> Actually no!  Mixed H(div)-L2 for Stokes.  Which has facet integrals for
> partially discontinuous fields.  If you do redundant compute for such
> terms, you need a depth-2 FEM adjacency, which is just grim.  Equally we
> have some strange users who have jump terms in CG formulations.


Hmm, I thought I made adjacency per field. I have to look. That way, no
problem with the Stokes example. DG is still weird.

  Matt


>
> Lawrence




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

http://www.caam.rice.edu/~mk51/


Re: [petsc-users] DMPlex distribution with FVM adjacency

2017-05-25 Thread Lawrence Mitchell

> On 25 May 2017, at 19:46, Matthew Knepley  wrote:
> 
> Sounds like DG. I will get out my dead chicken for the incantation

Actually no!  Mixed H(div)-L2 for Stokes.  Which has facet integrals for 
partially discontinuous fields.  If you do redundant compute for such terms, 
you need a depth-2 FEM adjacency, which is just grim.  Equally we have some 
strange users who have jump terms in CG formulations.

Lawrence

Re: [petsc-users] DMPlex distribution with FVM adjacency

2017-05-25 Thread Matthew Knepley
On Thu, May 25, 2017 at 1:38 PM, Lawrence Mitchell <
lawrence.mitch...@imperial.ac.uk> wrote:

>
> > On 25 May 2017, at 19:23, Matthew Knepley  wrote:
> >
> > Ok, let me clarify.
> >
> > Given shared facets, I'd like closure(support(facet)) this is a subset
> of the fem adjacency. "Add in the cell and its closure from the remote
> rank". This doesn't include remote cells I can only see through vertices.
> Without sending data evaluated at facet quad points, I think this is the
> adjacency I need to compute facet integrals: all the dofs in
> closure(support(facet)).
> >
> > This seems incoherent to me. For FV, dofs reside in the cells, so you
> should only need the cell for adjacency. If you
> > need dofs defined at vertices, then you should also need cells which are
> only attached by vertices. How could this
> > scheme be consistent without this?
>
> OK, so what I think is this:
>
> I need to compute integrals over cells and facets.
>

Sounds like DG. I will get out my dead chicken for the incantation.


> So I do:
>
> GlobalToLocal(INSERT_VALUES)
> ComputeIntegralsOnOwnedEntities
> LocalToGlobal(ADD_VALUES)
>
> That way, an integration is performed on every entity exactly once, and
> LocalToGlobal ensures that I get a consistent assembled Vec.
>
> OK, so if I only compute cell integrals, then the zero overlap
> distribution with all the points in the closure of the cell (including some
> remote points) is sufficient.
>

Yep.


> If I compute facet integrals, I need both cells (and their closure) in the
> support of the facet.  Again, each facet is only integrated by one process,
> and the LocalToGlobal adds in contributions to remote dofs.  This is the
> same as cell integrals, just I need a bit more data, no?
>
> The other option is to notice that what I actually need when I compute a
> facet integral is the test function and/or any coefficients evaluated at
> quadrature points on the facet.  So if I don't want the extra overlapped
> halo, then what I need to do is for the remote process to evaluate any
> coefficients at the quad points, then send the evaluated data to the facet
> owner.  Now the facet owner can compute the integral, and again
> LocalToGlobal adds in contributions to remote dofs.


That seems baroque. So this is just another adjacency pattern. You should
be able to easily define it, or if you are a patient person,
wait for me to do it. It's here


https://bitbucket.org/petsc/petsc/src/01c3230e040078628f5e559992965c1c4b6f473d/src/dm/impls/plex/plexdistribute.c?at=master&fileviewer=file-view-default#plexdistribute.c-239

I am more than willing to make this overridable by the user through
function composition or another mechanism.

  Thanks,

 Matt


>
> Lawrence




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

http://www.caam.rice.edu/~mk51/


Re: [petsc-users] DMPlex distribution with FVM adjacency

2017-05-25 Thread Lawrence Mitchell

> On 25 May 2017, at 19:23, Matthew Knepley  wrote:
> 
> Ok, let me clarify. 
> 
> Given shared facets, I'd like closure(support(facet)) this is a subset of the 
> fem adjacency. "Add in the cell and its closure from the remote rank". This 
> doesn't include remote cells I can only see through vertices. Without sending 
> data evaluated at facet quad points, I think this is the adjacency I need to 
> compute facet integrals: all the dofs in closure(support(facet)).
> 
> This seems incoherent to me. For FV, dofs reside in the cells, so you should 
> only need the cell for adjacency. If you
> need dofs defined at vertices, then you should also need cells which are only 
> attached by vertices. How could this
> scheme be consistent without this?

OK, so what I think is this:

I need to compute integrals over cells and facets.

So I do:

GlobalToLocal(INSERT_VALUES)
ComputeIntegralsOnOwnedEntities
LocalToGlobal(ADD_VALUES)

That way, an integration is performed on every entity exactly once, and 
LocalToGlobal ensures that I get a consistent assembled Vec.
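
In PETSc C terms, that pattern is roughly the sketch below (a sketch only:
ComputeIntegralsOnOwnedEntities stands for the user integration routine, not
a PETSc call, and a configured DM dm plus PetscErrorCode ierr are assumed to
be in scope):

  Vec xg, xl, fg, fl;   /* x: input field, f: assembled integrals */
  ierr = DMGetGlobalVector(dm,&xg);CHKERRQ(ierr);
  ierr = DMGetLocalVector(dm,&xl);CHKERRQ(ierr);
  ierr = DMGetGlobalVector(dm,&fg);CHKERRQ(ierr);
  ierr = DMGetLocalVector(dm,&fl);CHKERRQ(ierr);

  /* GlobalToLocal(INSERT_VALUES): pull owned and ghost values into the local Vec */
  ierr = DMGlobalToLocalBegin(dm,xg,INSERT_VALUES,xl);CHKERRQ(ierr);
  ierr = DMGlobalToLocalEnd(dm,xg,INSERT_VALUES,xl);CHKERRQ(ierr);

  /* each cell/facet integral is computed by exactly one rank */
  ierr = VecSet(fl,0.0);CHKERRQ(ierr);
  ierr = ComputeIntegralsOnOwnedEntities(dm,xl,fl);CHKERRQ(ierr);  /* placeholder user routine */

  /* LocalToGlobal(ADD_VALUES): sum contributions, including those that land
     on remotely owned dofs, into a consistent global Vec */
  ierr = VecSet(fg,0.0);CHKERRQ(ierr);
  ierr = DMLocalToGlobalBegin(dm,fl,ADD_VALUES,fg);CHKERRQ(ierr);
  ierr = DMLocalToGlobalEnd(dm,fl,ADD_VALUES,fg);CHKERRQ(ierr);

  ierr = DMRestoreLocalVector(dm,&fl);CHKERRQ(ierr);
  ierr = DMRestoreGlobalVector(dm,&fg);CHKERRQ(ierr);
  ierr = DMRestoreLocalVector(dm,&xl);CHKERRQ(ierr);
  ierr = DMRestoreGlobalVector(dm,&xg);CHKERRQ(ierr);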

OK, so if I only compute cell integrals, then the zero overlap distribution 
with all the points in the closure of the cell (including some remote points) 
is sufficient.

If I compute facet integrals, I need both cells (and their closure) in the 
support of the facet.  Again, each facet is only integrated by one process, and 
the LocalToGlobal adds in contributions to remote dofs.  This is the same as 
cell integrals, just I need a bit more data, no?

The other option is to notice that what I actually need when I compute a facet 
integral is the test function and/or any coefficients evaluated at quadrature 
points on the facet.  So if I don't want the extra overlapped halo, then what I 
need to do is for the remote process to evaluate any coefficients at the quad 
points, then send the evaluated data to the facet owner.  Now the facet owner 
can compute the integral, and again LocalToGlobal adds in contributions to 
remote dofs.

Lawrence

Re: [petsc-users] DMPlex distribution with FVM adjacency

2017-05-25 Thread Matthew Knepley
On Thu, May 25, 2017 at 1:10 PM, Lawrence Mitchell <
lawrence.mitch...@imperial.ac.uk> wrote:

>
>
> On 25 May 2017, at 18:05, Matthew Knepley  wrote:
>
> If you want that, is there a reason you cannot use the FEM style
> FALSE+TRUE?
> If you already want the closure, usually the star is not really adding
> anything new.
>
>
> Ok, let me clarify.
>
> Given shared facets, I'd like closure(support(facet)) this is a subset of
> the fem adjacency. "Add in the cell and its closure from the remote rank".
> This doesn't include remote cells I can only see through vertices. Without
> sending data evaluated at facet quad points, I think this is the adjacency
> I need to compute facet integrals: all the dofs in closure(support(facet)).
>

This seems incoherent to me. For FV, dofs reside in the cells, so you
should only need the cell for adjacency. If you
need dofs defined at vertices, then you should also need cells which are
only attached by vertices. How could this
scheme be consistent without this?

  Thanks,

Matt


> I thought this was what the fv adjacency was, but I think I was mistaken.
> That is support(cone(p)) for all p that I have.
> Now I do a rendezvous to gather everything in the closure of these new
> points. But I think that means I still don't have some cells?
>
> Make sense?
>
> Lawrence
>



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

http://www.caam.rice.edu/~mk51/


Re: [petsc-users] DMPlex distribution with FVM adjacency

2017-05-25 Thread Lawrence Mitchell


> On 25 May 2017, at 18:05, Matthew Knepley  wrote:
> 
> If you want that, is there a reason you cannot use the FEM style FALSE+TRUE?
> If you already want the closure, usually the star is not really adding 
> anything new.

Ok, let me clarify. 

Given shared facets, I'd like closure(support(facet)) this is a subset of the 
fem adjacency. "Add in the cell and its closure from the remote rank". This 
doesn't include remote cells I can only see through vertices. Without sending 
data evaluated at facet quad points, I think this is the adjacency I need to 
compute facet integrals: all the dofs in closure(support(facet)).

I thought this was what the fv adjacency was, but I think I was mistaken. That 
is support(cone(p)) for all p that I have.
Now I do a rendezvous to gather everything in the closure of these new points. 
But I think that means I still don't have some cells?

Make sense?

Lawrence

Re: [petsc-users] DMPlex distribution with FVM adjacency

2017-05-25 Thread Matthew Knepley
On Thu, May 25, 2017 at 11:27 AM, Lawrence Mitchell <
lawrence.mitch...@imperial.ac.uk> wrote:

> On 25/05/17 16:25, Matthew Knepley wrote:
> > On Thu, May 25, 2017 at 9:27 AM, Lawrence Mitchell
> > wrote:
> >
> > Dear petsc-users,
> >
> > I am trying to distribute a triangle mesh with a cell halo defined by
> > FVM adjacency (i.e. if I have a facet in the initial (0-overlap)
> > distribution, I want the cell on the other side of it).
> >
> > Reading the documentation, I think I do:
> >
> > DMPlexSetAdjacencyUseCone(PETSC_TRUE)
> > DMPlexSetAdjacencyUseClosure(PETSC_FALSE)
> >
> > and then
> DMPlexDistribute(..., overlap=1)
> >
> > If I do this for a simple mesh and then try and do anything on it, I
> > run into all sorts of problems because I have a plex where I have
> some
> > facets, but not even one cell in the support of the facet.  Is this
> to
> > be expected?
> >
> >
> > Hmm. I don't think so. You should have at least one cell in the
> > support of every facet.
> > TS ex11 works exactly this way.
> >
> > When using that adjacency, the overlap cells you get will not have
> > anything but the
> > facet connecting them to that partition. Although, if you have
> > adjacent cells in that overlap layer,
> > you can get ghost faces between those.
> >
> > With the code below, do you get an interpolated mesh when you create
> > it there. That call in C
> > has another argument
> >
> >   http://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DMPLEX/
> DMPlexCreateFromCellList.html
>
> The mesh is interpolated.
>
>
> OK, so let's see if I can understand what the different adjacency
> relations are:
>
> usecone=False, useclosure=False:
>
> adj(p) => cone(p) + cone(support(p))
>
> usecone=True, useclosure=False:
>
> adj(p) => support(p) + support(cone(p))
>
> usecone=False, useclosure=True
>
> adj(p) => closure(star(p))
>
> usecone=True, useclosure=True
>
> adj(p) => star(closure(p))
>
> So let's imagine I have a facet f, the adjacent points are the
> support(cone(f)) so the support of the vertices in 2D, so those are
> some new facets.
>

If you want that, is there a reason you cannot use the FEM style FALSE+TRUE?
If you already want the closure, usually the star is not really adding
anything new.

   Matt


> So now, following https://arxiv.org/pdf/1506.06194.pdf, I need to
> complete this new mesh, so I ask for the closure of these new facets.
> But that might mean I won't ask for cells, right?  So I think I would
> end up with some facets that don't have any support.  And empirically
> I observe that:
>
> e.g. the code attached:
>
> $ mpiexec -n 3 python bar.py
> [0] 7 [0]
> [0] 8 [0]
> [0] 9 [0 1]
> [0] 10 [1]
> [0] 11 [1]
> [0] 12 []
> [1] 10 [0 2]
> [1] 11 [0 1]
> [1] 12 [0]
> [1] 13 [1]
> [1] 14 [2]
> [1] 15 [2]
> [1] 16 [1 3]
> [1] 17 [3]
> [1] 18 [3]
> [2] 7 [0 1]
> [2] 8 [0]
> [2] 9 [0]
> [2] 10 [1]
> [2] 11 []
> [2] 12 [1]
>
>
> What I would like (although I'm not sure if this is supported right
> now), is the overlap to contain closure(support(facet)) for all shared
> facets.  I think that's equivalent to closure(support(p)) \forall p.
>
> That way on any shared facets, I have both cells and their closure.
>
> Is that easy to do?
>
> Lawrence
>
> import sys, petsc4py
> petsc4py.init(sys.argv)
> from petsc4py import PETSc
> import numpy as np
> Lx = Ly = 1
> nx = 1
> ny = 2
>
> xcoords = np.linspace(0.0, Lx, nx + 1, dtype=PETSc.RealType)
> ycoords = np.linspace(0.0, Ly, ny + 1, dtype=PETSc.RealType)
> coords = np.asarray(np.meshgrid(xcoords, ycoords)).swapaxes(0,
> 2).reshape(-1, 2)
>
> # cell vertices
> i, j = np.meshgrid(np.arange(nx, dtype=PETSc.IntType), np.arange(ny,
> dtype=PETSc.IntType))
> cells = [i*(ny+1) + j, i*(ny+1) + j+1, (i+1)*(ny+1) + j+1,
> (i+1)*(ny+1) + j]
> cells = np.asarray(cells, dtype=PETSc.IntType).swapaxes(0,
> 2).reshape(-1, 4)
> idx = [0, 1, 3, 1, 2, 3]
> cells = cells[:, idx].reshape(-1, 3)
>
> comm = PETSc.COMM_WORLD
> if comm.rank == 0:
> dm = PETSc.DMPlex().createFromCellList(2, cells, coords,
> interpolate=True, comm=comm)
> else:
> dm = PETSc.DMPlex().createFromCellList(2, np.zeros((0,
> cells.shape[1]), dtype=PETSc.IntType),
>np.zeros((0, 2),
> dtype=PETSc.RealType),
>interpolate=True,
>comm=comm)
>
> dm.setAdjacencyUseClosure(False)
> dm.setAdjacencyUseCone(True)
>
> dm.distribute(overlap=1)
> sf = dm.getPointSF()
>
> for p in range(*dm.getDepthStratum(dm.getDepth()-1)):
> PETSc.Sys.syncPrint("[%d] %d %s" % (comm.rank, p, dm.getSupport(p)))
>
> PETSc.Sys.syncFlush()
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

http://www.caam.rice.edu/~mk51/


Re: [petsc-users] DMPlex distribution with FVM adjacency

2017-05-25 Thread Lawrence Mitchell


On 25/05/17 16:25, Matthew Knepley wrote:
> On Thu, May 25, 2017 at 9:27 AM, Lawrence Mitchell
> wrote:
> 
> Dear petsc-users,
> 
> I am trying to distribute a triangle mesh with a cell halo defined by
> FVM adjacency (i.e. if I have a facet in the initial (0-overlap)
> distribution, I want the cell on the other side of it).
> 
> Reading the documentation, I think I do:
> 
> DMPlexSetAdjacencyUseCone(PETSC_TRUE)
> DMPlexSetAdjacencyUseClosure(PETSC_FALSE)
> 
> and then
> DMPlexDistribute(..., overlap=1)
> 
> If I do this for a simple mesh and then try and do anything on it, I
> run into all sorts of problems because I have a plex where I have some
> facets, but not even one cell in the support of the facet.  Is this to
> be expected?
> 
> 
> Hmm. I don't think so. You should have at least one cell in the
> support of every facet.
> TS ex11 works exactly this way.
> 
> When using that adjacency, the overlap cells you get will not have
> anything but the
> facet connecting them to that partition. Although, if you have
> adjacent cells in that overlap layer,
> you can get ghost faces between those.
> 
> With the code below, do you get an interpolated mesh when you create
> it there. That call in C
> has another argument
> 
>   
> http://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DMPLEX/DMPlexCreateFromCellList.html

The mesh is interpolated.


OK, so let's see if I can understand what the different adjacency
relations are:

usecone=False, useclosure=False:

adj(p) => cone(p) + cone(support(p))

usecone=True, useclosure=False:

adj(p) => support(p) + support(cone(p))

usecone=False, useclosure=True

adj(p) => closure(star(p))

usecone=True, useclosure=True

adj(p) => star(closure(p))
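
In C, selecting one of these and then building the overlapped distribution
is roughly the sketch below, mirroring the setAdjacencyUseCone/UseClosure
calls in the attached script (dm and ierr assumed in scope; the FV-style
choice discussed here is cone=TRUE, closure=FALSE):

  DM dmDist = NULL;

  /* usecone=TRUE, useclosure=FALSE: adj(p) => support(p) + support(cone(p)) */
  ierr = DMPlexSetAdjacencyUseCone(dm,PETSC_TRUE);CHKERRQ(ierr);
  ierr = DMPlexSetAdjacencyUseClosure(dm,PETSC_FALSE);CHKERRQ(ierr);
  /* for the FEM-style adj(p) => closure(star(p)), use FALSE/TRUE instead */

  ierr = DMPlexDistribute(dm,1,NULL,&dmDist);CHKERRQ(ierr);  /* overlap = 1 */
  if (dmDist) {ierr = DMDestroy(&dm);CHKERRQ(ierr); dm = dmDist;}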

So let's imagine I have a facet f, the adjacent points are the
support(cone(f)) so the support of the vertices in 2D, so those are
some new facets.

So now, following https://arxiv.org/pdf/1506.06194.pdf, I need to
complete this new mesh, so I ask for the closure of these new facets.
But that might mean I won't ask for cells, right?  So I think I would
end up with some facets that don't have any support.  And empirically
I observe that:

e.g. the code attached:

$ mpiexec -n 3 python bar.py
[0] 7 [0]
[0] 8 [0]
[0] 9 [0 1]
[0] 10 [1]
[0] 11 [1]
[0] 12 []
[1] 10 [0 2]
[1] 11 [0 1]
[1] 12 [0]
[1] 13 [1]
[1] 14 [2]
[1] 15 [2]
[1] 16 [1 3]
[1] 17 [3]
[1] 18 [3]
[2] 7 [0 1]
[2] 8 [0]
[2] 9 [0]
[2] 10 [1]
[2] 11 []
[2] 12 [1]


What I would like (although I'm not sure if this is supported right
now), is the overlap to contain closure(support(facet)) for all shared
facets.  I think that's equivalent to closure(support(p)) \forall p.

That way on any shared facets, I have both cells and their closure.

Is that easy to do?

Lawrence

import sys, petsc4py
petsc4py.init(sys.argv)
from petsc4py import PETSc
import numpy as np

Lx = Ly = 1
nx = 1
ny = 2

xcoords = np.linspace(0.0, Lx, nx + 1, dtype=PETSc.RealType)
ycoords = np.linspace(0.0, Ly, ny + 1, dtype=PETSc.RealType)
coords = np.asarray(np.meshgrid(xcoords, ycoords)).swapaxes(0, 2).reshape(-1, 2)

# cell vertices
i, j = np.meshgrid(np.arange(nx, dtype=PETSc.IntType), np.arange(ny, dtype=PETSc.IntType))
cells = [i*(ny+1) + j, i*(ny+1) + j+1, (i+1)*(ny+1) + j+1, (i+1)*(ny+1) + j]
cells = np.asarray(cells, dtype=PETSc.IntType).swapaxes(0, 2).reshape(-1, 4)
idx = [0, 1, 3, 1, 2, 3]
cells = cells[:, idx].reshape(-1, 3)

comm = PETSc.COMM_WORLD
if comm.rank == 0:
    dm = PETSc.DMPlex().createFromCellList(2, cells, coords,
                                           interpolate=True, comm=comm)
else:
    dm = PETSc.DMPlex().createFromCellList(2, np.zeros((0, cells.shape[1]), dtype=PETSc.IntType),
                                           np.zeros((0, 2), dtype=PETSc.RealType),
                                           interpolate=True,
                                           comm=comm)

dm.setAdjacencyUseClosure(False)
dm.setAdjacencyUseCone(True)

dm.distribute(overlap=1)
sf = dm.getPointSF()

for p in range(*dm.getDepthStratum(dm.getDepth()-1)):
    PETSc.Sys.syncPrint("[%d] %d %s" % (comm.rank, p, dm.getSupport(p)))

PETSc.Sys.syncFlush()





Re: [petsc-users] DMPlex distribution with FVM adjacency

2017-05-25 Thread Matthew Knepley
On Thu, May 25, 2017 at 9:27 AM, Lawrence Mitchell <
lawrence.mitch...@imperial.ac.uk> wrote:

> Dear petsc-users,
>
> I am trying to distribute a triangle mesh with a cell halo defined by
> FVM adjacency (i.e. if I have a facet in the initial (0-overlap)
> distribution, I want the cell on the other side of it).
>
> Reading the documentation, I think I do:
>
> DMPlexSetAdjacencyUseCone(PETSC_TRUE)
> DMPlexSetAdjacencyUseClosure(PETSC_FALSE)
>
> and then
> DMPlexDistribute(..., overlap=1)
>
> If I do this for a simple mesh and then try and do anything on it, I
> run into all sorts of problems because I have a plex where I have some
> facets, but not even one cell in the support of the facet.  Is this to
> be expected?
>

Hmm. I don't think so. You should have at least one cell in the support of
every facet.
TS ex11 works exactly this way.

When using that adjacency, the overlap cells you get will not have anything
but the
facet connecting them to that partition. Although, if you have adjacent
cells in that overlap layer,
you can get ghost faces between those.

With the code below, do you get an interpolated mesh when you create it
there? That call in C
has another argument:


http://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DMPLEX/DMPlexCreateFromCellList.html

If it's just cells and vertices, you could get some bizarre things like you
see.
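
For reference, the C call with that extra flag looks roughly like the sketch
below (argument order as I recall from the linked manual page, so double-check
it there; numCells, numVertices, cells, and vertexCoords are illustrative
names, and a 2D triangle mesh is assumed):

  DM dm;
  ierr = DMPlexCreateFromCellList(PETSC_COMM_WORLD,
                                  2,            /* topological dimension */
                                  numCells, numVertices,
                                  3,            /* corners per cell */
                                  PETSC_TRUE,   /* interpolate: also create edges/faces */
                                  cells,        /* cell-to-vertex connectivity */
                                  2,            /* spatial dimension */
                                  vertexCoords, /* vertex coordinates */
                                  &dm);CHKERRQ(ierr);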

Matt


> For example the following petsc4py code breaks when run on 3 processes:
>
> $ mpiexec -n 3 python bork.py
> [1] DMPlexGetOrdering() line 133 in
> /data/lmitche1/src/deps/petsc/src/dm/impls/plex/plexreorder.c
> [1] DMPlexCreateOrderingClosure_Static() line 41 in
> /data/lmitche1/src/deps/petsc/src/dm/impls/plex/plexreorder.c
> [1] Petsc has generated inconsistent data
> [1] Number of depth 2 faces 34 does not match permuted nubmer 29
> : error code 77
> [2] DMPlexGetOrdering() line 133 in
> /data/lmitche1/src/deps/petsc/src/dm/impls/plex/plexreorder.c
> [2] DMPlexCreateOrderingClosure_Static() line 41 in
> /data/lmitche1/src/deps/petsc/src/dm/impls/plex/plexreorder.c
> [2] Petsc has generated inconsistent data
> [2] Number of depth 2 faces 33 does not match permuted nubmer 28
> : error code 77
> [0] DMPlexGetOrdering() line 133 in
> /data/lmitche1/src/deps/petsc/src/dm/impls/plex/plexreorder.c
> [0] DMPlexCreateOrderingClosure_Static() line 41 in
> /data/lmitche1/src/deps/petsc/src/dm/impls/plex/plexreorder.c
> [0] Petsc has generated inconsistent data
> [0] Number of depth 2 faces 33 does not match permuted nubmer 31
>
> $ cat > bork.py<<\EOF
> from petsc4py import PETSc
> import numpy as np
> Lx = Ly = 1
> nx = ny = 4
>
> xcoords = np.linspace(0.0, Lx, nx + 1, dtype=PETSc.RealType)
> ycoords = np.linspace(0.0, Ly, ny + 1, dtype=PETSc.RealType)
> coords = np.asarray(np.meshgrid(xcoords, ycoords)).swapaxes(0,
> 2).reshape(-1, 2)
>
> # cell vertices
> i, j = np.meshgrid(np.arange(nx, dtype=PETSc.IntType), np.arange(ny,
> dtype=PETSc.IntType))
> cells = [i*(ny+1) + j, i*(ny+1) + j+1, (i+1)*(ny+1) + j+1,
> (i+1)*(ny+1) + j]
> cells = np.asarray(cells, dtype=PETSc.IntType).swapaxes(0,
> 2).reshape(-1, 4)
> idx = [0, 1, 3, 1, 2, 3]
> cells = cells[:, idx].reshape(-1, 3)
>
> comm = PETSc.COMM_WORLD
> if comm.rank == 0:
> dm = PETSc.DMPlex().createFromCellList(2, cells, coords, comm=comm)
> else:
> dm = PETSc.DMPlex().createFromCellList(2, np.zeros((0, 4),
> dtype=PETSc.IntType),
>np.zeros((0, 2),
> dtype=PETSc.RealType),
>comm=comm)
>
> dm.setAdjacencyUseClosure(False)
> dm.setAdjacencyUseCone(True)
>
> dm.distribute(overlap=1)
>
> dm.getOrdering(PETSc.Mat.OrderingType.RCM)
>
> dm.view()
> EOF
>
> Am I doing something wrong?  Is this not expected to work?
>
> Cheers,
>
> Lawrence
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

http://www.caam.rice.edu/~mk51/


Re: [petsc-users] PCFactorSetShiftType does not work in code but -pc_factor_set_shift_type works

2017-05-25 Thread Hong
Danyang:
You must access the inner pc, then set the shift there. See
petsc/src/ksp/ksp/examples/tutorials/ex7.c

For example, I add the following to petsc/src/ksp/ksp/examples/tutorials/ex2.c,
at line 191:
  PetscBool isbjacobi;
  PC        pc;
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PetscObjectTypeCompare((PetscObject)pc,PCBJACOBI,&isbjacobi);CHKERRQ(ierr);
  if (isbjacobi) {
    PetscInt nlocal;
    KSP      *subksp;
    PC       subpc;

    ierr = KSPSetUp(ksp);CHKERRQ(ierr);
    ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);

    /* Extract the array of KSP contexts for the local blocks */
    ierr = PCBJacobiGetSubKSP(pc,&nlocal,NULL,&subksp);CHKERRQ(ierr);
    printf("isbjacobi, nlocal %D, set option to subpc...\n",nlocal);
    for (i=0; i<nlocal; i++) { /* assumed loop body: apply the shift to each block's sub-PC */
      ierr = KSPGetPC(subksp[i],&subpc);CHKERRQ(ierr);
      ierr = PCFactorSetShiftType(subpc,MAT_SHIFT_NONZERO);CHKERRQ(ierr);
    }
  }

Danyang Su writes:
> I have implemented this option in the code, as we also need to use
> configuration from file for convenience. When I run the code using options,
> it works fine; however, when I run the code using a configuration file, it
> does not work. The code has two sets of equations, flow and reactive, with
> prefixes set to "flow_" and "react_". When I run the code using
>
> mpiexec -n 4 ../executable -flow_sub_pc_factor_shift_type nonzero
> -react_sub_pc_factor_shift_type nonzero
>
> it works. However, if I run using
>
> mpiexec -n 4 ../executable
>
> and let the executable file read the options from file, it just does not
> work at "call PCFactorSetShiftType(pc_flow,MAT_SHIFT_NONZERO, ierr)  or
> none, positive_definite ...". Do I miss something here?
>
> Below is the pseudo code I have used for flow equations, similar for
> reactive equations.
>
>   call MatCreateAIJ(Petsc_Comm_World,nndof,nndof,nngbldof, &
> nngbldof,d_nz,PETSC_NULL_INTEGER,o_nz, &
> PETSC_NULL_INTEGER,a_flow,ierr)
>   CHKERRQ(ierr)
>
> call MatSetFromOptions(a_flow,ierr)
> CHKERRQ(ierr)
>
> call KSPCreate(Petsc_Comm_World, ksp_flow, ierr)
> CHKERRQ(ierr)
>
> call KSPAppendOptionsPrefix(ksp_flow,"flow_",ierr)
> CHKERRQ(ierr)
>
> call KSPSetInitialGuessNonzero(ksp_flow,   &
> b_initial_guess_nonzero_flow, ierr)
> CHKERRQ(ierr)
>
> call KSPSetInitialGuessNonzero(ksp_flow,   &
> b_initial_guess_nonzero_flow, ierr)
> CHKERRQ(ierr)
>
> call KSPSetDM(ksp_flow,dmda_flow%da,ierr)
> CHKERRQ(ierr)
> call KSPSetDMActive(ksp_flow,PETSC_FALSE,ierr)
> CHKERRQ(ierr)
>
> *CHECK IF READ OPTION FROM FILE*
> if (read_option_from_file) then
>
>   call KSPSetType(ksp_flow, KSPGMRES, ierr)!or KSPBCGS or
> others...
>   CHKERRQ(ierr)
>
>   call KSPGetPC(ksp_flow, pc_flow, ierr)
>   CHKERRQ(ierr)
>
>   call PCSetType(pc_flow,PCBJACOBI, ierr)   !or PCILU or
> PCJACOBI or PCHYPRE ...
>   CHKERRQ(ierr)
>
>   call PCFactorSetShiftType(pc_flow,MAT_SHIFT_NONZERO, ierr)  or
> none, positive_definite ...
>   CHKERRQ(ierr)
>
> end if
>
> call PCFactorGetMatSolverPackage(pc_flow,solver_pkg_flow,ierr)
> CHKERRQ(ierr)
>
> call compute_jacobian(rank,dmda_flow%da,   &
>   a_flow,a_in,ia_in,ja_in,nngl_in, &
>   row_idx_l2pg,col_idx_l2pg,   &
>   b_non_interlaced)
> call KSPSetFromOptions(ksp_flow,ierr)
> CHKERRQ(ierr)
>
> call KSPSetUp(ksp_flow,ierr)
> CHKERRQ(ierr)
>
> call KSPSetUpOnBlocks(ksp_flow,ierr)
> CHKERRQ(ierr)
>
> call KSPSolve(ksp_flow,b_flow,x_flow,ierr)
> CHKERRQ(ierr)
>
>
> Thanks and Regards,
>
> Danyang
> On 17-05-24 06:32 PM, Hong wrote:
>
> Remove your option '-vecload_block_size 10'.
> Hong
>
> On Wed, May 24, 2017 at 3:06 PM, Danyang Su  wrote:
>
>> Dear Hong,
>>
>> I just tested with different numbers of processors for the same matrix. It
>> sometimes got "ERROR: Arguments are incompatible" for different numbers of
>> processors. It works fine using 4, 8, or 24 processors, but failed with
>> "ERROR: Arguments are incompatible" using 16 or 48 processors. The error
>> information is attached. I tested this on my local computer with 6 cores 12
>> threads. Any suggestion on this?
>>
>> Thanks,
>>
>> Danyang
>>
>> On 17-05-24 12:28 PM, Danyang Su wrote:
>>
>> Hi Hong,
>>
>> Awesome. Thanks for testing the case. I will try your options for the
>> code and get back to you later.
>>
>> Regards,
>>
>> Danyang
>>
>> On 17-05-24 12:21 PM, Hong wrote:
>>
>> Danyang :
>> I tested your data.
>> Your matrices encountered zero pivots, e.g.
>> petsc/src/ksp/ksp/examples/tutorials (master)
>> $ mpiexec -n 

[petsc-users] DMPlex distribution with FVM adjacency

2017-05-25 Thread Lawrence Mitchell
Dear petsc-users,

I am trying to distribute a triangle mesh with a cell halo defined by
FVM adjacency (i.e. if I have a facet in the initial (0-overlap)
distribution, I want the cell on the other side of it).

Reading the documentation, I think I do:

DMPlexSetAdjacencyUseCone(PETSC_TRUE)
DMPlexSetAdjacencyUseClosure(PETSC_FALSE)

and then
DMPlexDistribute(..., overlap=1)

If I do this for a simple mesh and then try and do anything on it, I
run into all sorts of problems because I have a plex where I have some
facets, but not even one cell in the support of the facet.  Is this to
be expected?

For example the following petsc4py code breaks when run on 3 processes:

$ mpiexec -n 3 python bork.py
[1] DMPlexGetOrdering() line 133 in
/data/lmitche1/src/deps/petsc/src/dm/impls/plex/plexreorder.c
[1] DMPlexCreateOrderingClosure_Static() line 41 in
/data/lmitche1/src/deps/petsc/src/dm/impls/plex/plexreorder.c
[1] Petsc has generated inconsistent data
[1] Number of depth 2 faces 34 does not match permuted nubmer 29
: error code 77
[2] DMPlexGetOrdering() line 133 in
/data/lmitche1/src/deps/petsc/src/dm/impls/plex/plexreorder.c
[2] DMPlexCreateOrderingClosure_Static() line 41 in
/data/lmitche1/src/deps/petsc/src/dm/impls/plex/plexreorder.c
[2] Petsc has generated inconsistent data
[2] Number of depth 2 faces 33 does not match permuted nubmer 28
: error code 77
[0] DMPlexGetOrdering() line 133 in
/data/lmitche1/src/deps/petsc/src/dm/impls/plex/plexreorder.c
[0] DMPlexCreateOrderingClosure_Static() line 41 in
/data/lmitche1/src/deps/petsc/src/dm/impls/plex/plexreorder.c
[0] Petsc has generated inconsistent data
[0] Number of depth 2 faces 33 does not match permuted nubmer 31

$ cat > bork.py <<\EOF
from petsc4py import PETSc
import numpy as np

Lx = Ly = 1
nx = ny = 4

xcoords = np.linspace(0.0, Lx, nx + 1, dtype=PETSc.RealType)
ycoords = np.linspace(0.0, Ly, ny + 1, dtype=PETSc.RealType)
coords = np.asarray(np.meshgrid(xcoords, ycoords)).swapaxes(0, 2).reshape(-1, 2)

# cell vertices
i, j = np.meshgrid(np.arange(nx, dtype=PETSc.IntType), np.arange(ny, dtype=PETSc.IntType))
cells = [i*(ny+1) + j, i*(ny+1) + j+1, (i+1)*(ny+1) + j+1, (i+1)*(ny+1) + j]
cells = np.asarray(cells, dtype=PETSc.IntType).swapaxes(0, 2).reshape(-1, 4)
idx = [0, 1, 3, 1, 2, 3]
cells = cells[:, idx].reshape(-1, 3)

comm = PETSc.COMM_WORLD
if comm.rank == 0:
    dm = PETSc.DMPlex().createFromCellList(2, cells, coords, comm=comm)
else:
    dm = PETSc.DMPlex().createFromCellList(2, np.zeros((0, 4), dtype=PETSc.IntType),
                                           np.zeros((0, 2), dtype=PETSc.RealType),
                                           comm=comm)

dm.setAdjacencyUseClosure(False)
dm.setAdjacencyUseCone(True)

dm.distribute(overlap=1)

dm.getOrdering(PETSc.Mat.OrderingType.RCM)

dm.view()
EOF

Am I doing something wrong?  Is this not expected to work?

Cheers,

Lawrence





Re: [petsc-users] PETSC OO C guide/standard?

2017-05-25 Thread John Chludzinski
Thanks.

C++ has now become the apotheosis of "no value-added complexity".

Even Bjarne Stroustrup admits to understanding only a small fraction of the
whole.

On Wed, May 24, 2017 at 9:53 AM, Matthew Knepley  wrote:

> On Wed, May 24, 2017 at 8:50 AM, John Chludzinski 
> wrote:
>
>> Considering that the current C++ standard is >1600 pages and counting
>> (still glomming on new "features"), I'm planning to try an OO style of C
>> coding style.
>>
>> The standard's size (number of pages) being the best (and only
>> *practical*) means to measure language complexity.
>>
>
> Here is another thing I wrote talking about OO in PETSc:
>
>   https://arxiv.org/abs/1209.1711
>
> Matt
>
>
>> On Wed, May 24, 2017 at 9:11 AM, Matthew Knepley 
>> wrote:
>>
>>> On Wed, May 24, 2017 at 8:03 AM, John Chludzinski <
>>> jchludzin...@gmail.com> wrote:
>>>
 Is there a guide for how to write/develop PETSC OO C code? How is a
 "class" defined/implemented? How do you implement inheritance? Memory
 management? Etc.?

>>>
>>> We have a guide: http://www.mcs.anl.gov/petsc/developers/developers.pdf
>>>
>>> If its not in there, you can mail the list.
>>>
>>>   Thanks,
>>>
>>>  Matt
>>>
>>>
 ---John

>>>
>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> -- Norbert Wiener
>>>
>>> http://www.caam.rice.edu/~mk51/
>>>
>>
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> http://www.caam.rice.edu/~mk51/
>


[petsc-users] Postdoc Position in the Computational Seismology Group at ETH Zurich.

2017-05-25 Thread Michael Afanasiev
Hi everyone,

The computational seismology group at ETH Zurich is looking for a postdoc to 
work with us on Salvus (www.salvus.io) - a
spectral-element software package for full-waveform modelling and inversion. 
The exact focus of the job is tied to the applicant's strengths and interests, 
and ranges from HPC engineering to tackling large-scale frequency domain 
(Helmholtz) applications. The code is currently integrated with PETSc, and 
utilizes DMPLEX for unstructured mesh management. Please find more details 
below.

Cheers,
Mike.
_

Postdoctoral research position: Full-waveform modeling and inversion across the 
scales
 
The Computational Seismology Group at ETH Zürich is seeking to appoint a 
postdoctoral researcher to work on Salvus, an open-source framework for 
full-waveform modeling and inversion (http://salvus.io).
The position is full-time (100%) for a duration of 24 months, with possibility 
for extension. Earliest starting date is 1 June 2017.
 
 Background:
 
Salvus is a modular open-source code package for large-scale waveform modelling 
and inversion built on the basis of modern programming principles. This project 
will enable Salvus to (1) harness large homogeneous and various heterogeneous 
HPC architectures that are available today, and (2) easily adapt to future 
architectures, requiring minimal code modifications.
 
The project is intended to position Salvus as a top wavefield modelling and 
inversion package in the exascale era. To ensure performance of Salvus on 
today's and tomorrow's supercomputing platforms, work will focus on 
cross-architecture developments, code and I/O optimisation, and systematic 
testing and validation. This will be complemented by actions to increase and 
broaden the usability and impact of Salvus. They include workflow developments, 
the implementation of frequency-domain solvers, and extensions of the physics 
that can be modelled.
 
The successful candidate will be embedded into the team of Salvus developers 
and users covering a wide range of fields, including Computational Science, 
Applied Mathematics, Seismology, Exploration and Environmental Geophysics, 
Geothermal Energy, and Geofluids. She or he will have access to Piz Daint, 
currently Europe’s fastest supercomputer, located at the Swiss National 
Supercomputing Center (CSCS, www.cscs.ch).
 
Apart from the core responsibilities listed below, the successful candidate 
will have considerable freedom of research in order to develop an independent 
scientific career. Topics of interest to the group include, but are not limited 
to real-world waveform modeling and inversion applications, the development of 
methods for uncertainty analysis, and the transfer of Salvus to new domains 
outside traditional seismology.
 
 Core responsibilities:
 
Cross-architecture developments, leveraging Salvus’ mixin-based design to 
implement hardware-specific versions of compute-intensive code segments, while 
leaving most of the code unchanged.
General code optimisations to achieve maximal performance from single nodes to 
full machine runs.
I/O optimisation to handle the enormous data volumes needed in adjoint 
simulations. Sub-tasks include the incorporation and extension of a previously 
developed wavefield compression library, and the interfacing to modern parallel 
seismic data formats.
Workflow developments to facilitate the solution of large-scale inverse 
problems, including the automatic orchestration of a large number of HPC jobs.

Expected qualifications:
 
The ideal candidate should have the following attributes:
 
PhD degree in geophysics, computer science, physics, applied mathematics or a 
related field,
strong programming skills in C or C++,
experience developing software which exploits large scale HPC platforms with a 
strong knowledge of MPI and experience with at least one other parallel 
paradigm (OpenMP, CUDA, OpenCL),
experience with collaborative software development (i.e. continuous integration 
services),
experience with finite element methods, numerical wave propagation, and/or 
inverse problems,
experience with Krylov methods and preconditioners - specifically either 
domain-decomposition methods and or multigrid methods (geometric, algebraic). 
 
Furthermore, the successful candidate is expected to have excellent 
organizational, communication and interpersonal skills that allow her or him to 
work in a highly collaborative and interdisciplinary environment.
 
 Application:
 
To apply for this position, please send your full resume, cover letter and the 
names of three references to Prof. Andreas Fichtner 
(andreas.ficht...@erdw.ethz.ch). If
possible, please also attach a link to one or more software packages you have 
been involved with (GitHub, GitLab, Bitbucket, …). The position will remain 
open until filled.

[petsc-users] PCFactorSetShiftType does not work in code but -pc_factor_set_shift_type works

2017-05-25 Thread Danyang Su

Dear Hong and Barry,

I have implemented this option in the code, as we also need to use 
configuration from file for convenience. When I run the code using 
options, it works fine; however, when I run the code using a configuration 
file, it does not work. The code has two sets of equations, flow and 
reactive, with prefixes set to "flow_" and "react_". When I run the 
code using


mpiexec -n 4 ../executable -flow_sub_pc_factor_shift_type nonzero 
-react_sub_pc_factor_shift_type nonzero


it works. However, if I run using

mpiexec -n 4 ../executable

and let the executable file read the options from file, it just does not 
work at "call PCFactorSetShiftType(pc_flow,MAT_SHIFT_NONZERO, ierr)  or 
none, positive_definite ...". Do I miss something here?


Below is the pseudo code I have used for flow equations, similar for 
reactive equations.


  call MatCreateAIJ(Petsc_Comm_World,nndof,nndof,nngbldof, &
nngbldof,d_nz,PETSC_NULL_INTEGER,o_nz, &
PETSC_NULL_INTEGER,a_flow,ierr)
  CHKERRQ(ierr)

call MatSetFromOptions(a_flow,ierr)
CHKERRQ(ierr)

call KSPCreate(Petsc_Comm_World, ksp_flow, ierr)
CHKERRQ(ierr)

call KSPAppendOptionsPrefix(ksp_flow,"flow_",ierr)
CHKERRQ(ierr)

call KSPSetInitialGuessNonzero(ksp_flow,   &
b_initial_guess_nonzero_flow, ierr)
CHKERRQ(ierr)

call KSPSetInitialGuessNonzero(ksp_flow,   &
b_initial_guess_nonzero_flow, ierr)
CHKERRQ(ierr)

call KSPSetDM(ksp_flow,dmda_flow%da,ierr)
CHKERRQ(ierr)
call KSPSetDMActive(ksp_flow,PETSC_FALSE,ierr)
CHKERRQ(ierr)

*CHECK IF READ OPTION FROM FILE*
if (read_option_from_file) then

  call KSPSetType(ksp_flow, KSPGMRES, ierr)!or KSPBCGS or 
others...

  CHKERRQ(ierr)

  call KSPGetPC(ksp_flow, pc_flow, ierr)
  CHKERRQ(ierr)

  call PCSetType(pc_flow,PCBJACOBI, ierr)   !or PCILU or 
PCJACOBI or PCHYPRE ...

  CHKERRQ(ierr)

  call PCFactorSetShiftType(pc_flow,MAT_SHIFT_NONZERO, ierr)  
or none, positive_definite ...

  CHKERRQ(ierr)

end if

call PCFactorGetMatSolverPackage(pc_flow,solver_pkg_flow,ierr)
CHKERRQ(ierr)

call compute_jacobian(rank,dmda_flow%da,   &
a_flow,a_in,ia_in,ja_in,nngl_in, &
row_idx_l2pg,col_idx_l2pg,   &
  b_non_interlaced)
call KSPSetFromOptions(ksp_flow,ierr)
CHKERRQ(ierr)

call KSPSetUp(ksp_flow,ierr)
CHKERRQ(ierr)

call KSPSetUpOnBlocks(ksp_flow,ierr)
CHKERRQ(ierr)

call KSPSolve(ksp_flow,b_flow,x_flow,ierr)
CHKERRQ(ierr)


Thanks and Regards,

Danyang

On 17-05-24 06:32 PM, Hong wrote:

Remove your option '-vecload_block_size 10'.
Hong

On Wed, May 24, 2017 at 3:06 PM, Danyang Su wrote:


Dear Hong,

I just tested with different numbers of processors for the same
matrix. It sometimes got "ERROR: Arguments are incompatible" for
different numbers of processors. It works fine using 4, 8, or 24
processors, but failed with "ERROR: Arguments are incompatible"
using 16 or 48 processors. The error information is attached. I
tested this on my local computer with 6 cores 12 threads. Any
suggestion on this?

Thanks,

Danyang


On 17-05-24 12:28 PM, Danyang Su wrote:


Hi Hong,

Awesome. Thanks for testing the case. I will try your options for
the code and get back to you later.

Regards,

Danyang


On 17-05-24 12:21 PM, Hong wrote:

Danyang :
I tested your data.
Your matrices encountered zero pivots, e.g.
petsc/src/ksp/ksp/examples/tutorials (master)
$ mpiexec -n 24 ./ex10 -f0 a_react_in_2.bin -rhs
b_react_in_2.bin -ksp_monitor -ksp_error_if_not_converged

[15]PETSC ERROR: Zero pivot in LU factorization:
http://www.mcs.anl.gov/petsc/documentation/faq.html#zeropivot

[15]PETSC ERROR: Zero pivot row 1249 value 2.05808e-14 tolerance
2.22045e-14
...

Adding option '-sub_pc_factor_shift_type nonzero', I got
mpiexec -n 24 ./ex10 -f0 a_react_in_2.bin -rhs b_react_in_2.bin
-ksp_monitor -ksp_error_if_not_converged
-sub_pc_factor_shift_type nonzero -mat_view ascii::ascii_info

Mat Object: 24 MPI processes
  type: mpiaij
  rows=45, cols=45
  total: nonzeros=6991400, allocated nonzeros=6991400
  total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
  0 KSP Residual norm 5.84911755e+01
  1 KSP Residual norm 6.824179430230e-01
  2 KSP Residual norm 3.994483555787e-02
  3 KSP Residual norm