Re: [petsc-users] mpi_aij MatGetSubMatrix with mat_block_size!=1

2015-11-30 Thread Eric Chamberland
On 2015-11-30 11:18, Lawrence Mitchell wrote: > The block size of the submatrix comes from the block size that lives on the IS used to define it. So set a block size on the IS you make (ISSetBlockSize). Great! It works! :) Thanks! Eric > Cheers, > Lawrence
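For reference, a minimal sketch of the workflow discussed in this thread, assuming PETSc 3.5-era names (MatGetSubMatrix was later renamed MatCreateSubMatrix); the helper name ExtractBlock00 and the row-range arguments are placeholders, not from the thread:

#include <petscmat.h>

PetscErrorCode ExtractBlock00(Mat A, PetscInt nLocalRows, PetscInt firstRow, Mat *A00)
{
  IS             is;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  /* Local rows of the 0-0 block owned by this rank */
  ierr = ISCreateStride(PetscObjectComm((PetscObject)A), nLocalRows, firstRow, 1, &is);CHKERRQ(ierr);
  /* Without this call the extracted submatrix comes back with block size 1 */
  ierr = ISSetBlockSize(is, 3);CHKERRQ(ierr);
  ierr = MatGetSubMatrix(A, is, is, MAT_INITIAL_MATRIX, A00);CHKERRQ(ierr);
  ierr = ISDestroy(&is);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}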

Re: [petsc-users] DMPlex: Ghost points after DMRefine

2015-11-30 Thread Morten Nobel-Jørgensen
Hi Matt, I don’t think the problem is within PETSc - rather somewhere in my code. When I dump the DMPlex using DMView (ascii_info_detail) the ghost mapping seems to be set up correctly. Is there a better way to determine if a local point is a ghost point? The way I iterate the DMPlex is like
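One possible way to test whether a local point is a ghost (i.e. owned by another rank) is to check whether it appears as a leaf of the point SF. The sketch below is an assumption along those lines, not code from the thread; the helper name PointIsGhost is made up:

#include <petscdmplex.h>

PetscErrorCode PointIsGhost(DM dm, PetscInt point, PetscBool *isGhost)
{
  PetscSF         sf;
  PetscInt        nleaves, l;
  const PetscInt *ilocal;
  PetscErrorCode  ierr;

  PetscFunctionBegin;
  *isGhost = PETSC_FALSE;
  ierr = DMGetPointSF(dm, &sf);CHKERRQ(ierr);
  ierr = PetscSFGetGraph(sf, NULL, &nleaves, &ilocal, NULL);CHKERRQ(ierr);
  for (l = 0; l < nleaves; ++l) {
    /* ilocal == NULL means the leaves are the contiguous range [0, nleaves) */
    PetscInt leaf = ilocal ? ilocal[l] : l;
    if (leaf == point) { *isGhost = PETSC_TRUE; break; }
  }
  PetscFunctionReturn(0);
}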

Re: [petsc-users] mpi_aij MatGetSubMatrix with mat_block_size!=1

2015-11-30 Thread Lawrence Mitchell
On 30/11/15 16:14, Eric Chamberland wrote: > Hi, > > Using PETSc 3.5.3. > > We have an "A" matrix, mpi_aij with block_size=3. > > We create an IS with ISCreateStride, then extract A_00 with > MatGetSubMatrix(..., MAT_INITIAL_MATRIX,...). > > We know that A_00 is block_size = 3 and mpi_aij,

[petsc-users] mpi_aij MatGetSubMatrix with mat_block_size!=1

2015-11-30 Thread Eric Chamberland
Hi, Using PETSc 3.5.3. We have an "A" matrix, mpi_aij with block_size=3. We create an IS with ISCreateStride, then extract A_00 with MatGetSubMatrix(..., MAT_INITIAL_MATRIX,...). We know that A_00 is block_size = 3 and mpi_aij, however the matrix created by PETSc doesn't have the

[petsc-users] Output newton step

2015-11-30 Thread Alex Lindsay
Is there an option for outputting the Newton step after my linear solve? Alex

Re: [petsc-users] PETSC error: Caught signal number 8 FPE

2015-11-30 Thread Soumya Mukherjee
Thanks for the reply. The error message shows:
[0]PETSC ERROR: Invalid argument
[0]PETSC ERROR: Scalar value must be same on all processes, argument # 3
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.6.1, Jul,

Re: [petsc-users] [petsc-maint] Memory usage function: output for all ranks

2015-11-30 Thread Jed Brown
Andrey Ovsyannikov writes: > Thanks for your quick response. I like the Massif tool and I have been using it > recently. However, I was not able to run Valgrind for large jobs. I am > interested in memory analysis of large scale runs with more than 1000 MPI > ranks.

Re: [petsc-users] [petsc-maint] Memory usage function: output for all ranks

2015-11-30 Thread Barry Smith
PETSc reporting of memory usage for objects is unfortunately not that great; for example distinguishing between temporary work space allocation vs memory that is kept for the life of the object is not always clear. Associating memory with particular objects requires the PETSc source code to

Re: [petsc-users] Output newton step

2015-11-30 Thread Barry Smith
> On Nov 30, 2015, at 2:19 PM, Alex Lindsay wrote: > > Is there an option for outputting the Newton step after my linear solve? > > Alex Do you want the solution of the linear system before the line search (the line search may shrink the vector)? Use -ksp_view_solution or
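A minimal sketch of an alternative: a custom SNES monitor that views the Newton update, assuming SNESGetSolutionUpdate() holds the most recent update direction. This is an illustration, not a suggestion from the thread; the name ViewNewtonStep is made up.

#include <petscsnes.h>

static PetscErrorCode ViewNewtonStep(SNES snes, PetscInt it, PetscReal fnorm, void *ctx)
{
  MPI_Comm       comm;
  Vec            dx;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  if (!it) PetscFunctionReturn(0);   /* no update has been taken yet at iteration 0 */
  ierr = PetscObjectGetComm((PetscObject)snes, &comm);CHKERRQ(ierr);
  ierr = SNESGetSolutionUpdate(snes, &dx);CHKERRQ(ierr);
  ierr = PetscPrintf(comm, "Newton update at iteration %D:\n", it);CHKERRQ(ierr);
  ierr = VecView(dx, PETSC_VIEWER_STDOUT_(comm));CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

/* Register before SNESSolve with: SNESMonitorSet(snes, ViewNewtonStep, NULL, NULL); */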

[petsc-users] Memory usage function: output for all ranks

2015-11-30 Thread Andrey Ovsyannikov
Dear PETSc team, I am working on optimization of the Chombo-Crunch CFD code for next-generation supercomputer architectures at NERSC (Berkeley Lab), and we use the PETSc AMG solver. During a memory analysis study I had difficulty getting memory usage data from PETSc for all MPI ranks. I am looking

Re: [petsc-users] [petsc-maint] Memory usage function: output for all ranks

2015-11-30 Thread Andrey Ovsyannikov
Hi Matt, Thanks for your quick response. I like the Massif tool and I have been using it recently. However, I was not able to run Valgrind for large jobs. I am interested in memory analysis of large scale runs with more than 1000 MPI ranks. PetscMemoryGetCurrentUsage() works fine for this purpose
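A minimal sketch of per-rank memory reporting built on PetscMemoryGetCurrentUsage(), printed rank by rank so the output stays readable at large MPI counts; the helper name ReportMemoryPerRank is a placeholder:

#include <petscsys.h>

PetscErrorCode ReportMemoryPerRank(MPI_Comm comm)
{
  PetscLogDouble mem;
  PetscMPIInt    rank;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = MPI_Comm_rank(comm, &rank);CHKERRQ(ierr);
  ierr = PetscMemoryGetCurrentUsage(&mem);CHKERRQ(ierr);  /* resident set size in bytes */
  ierr = PetscSynchronizedPrintf(comm, "[%d] current memory usage: %g MB\n", rank, mem/1048576.0);CHKERRQ(ierr);
  ierr = PetscSynchronizedFlush(comm, PETSC_STDOUT);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}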

Re: [petsc-users] [petsc-maint] Memory usage function: output for all ranks

2015-11-30 Thread Richard Mills
Andrey, Maybe this is what you tried, but did you try running only a handful of MPI ranks (out of your 1000) with Massif? I've had success doing things that way. You won't know what every rank is doing, but you may be able to get a good idea from your sample. --Richard On Mon, Nov 30, 2015 at

Re: [petsc-users] [petsc-maint] Memory usage function: output for all ranks

2015-11-30 Thread Matthew Knepley
On Mon, Nov 30, 2015 at 5:20 PM, Andrey Ovsyannikov wrote: > Dear PETSc team, > > I am working on optimization of Chombo-Crunch CFD code for next-generation > supercomputer architectures at NERSC (Berkeley Lab) and we use PETSc AMG > solver. During memory analysis study I

[petsc-users] Weighted Jacobi

2015-11-30 Thread Timothée Nicolas
Hi all, Is weighted Jacobi available as a preconditioner? I can't find it in the list of preconditioners. If not, what is the rationale behind this choice? It is pretty straightforward to code, so if it is not available I can do it myself without a problem, I guess, but I am just wondering. In the
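A minimal sketch of weighted (damped) Jacobi, y = omega * D^{-1} x, written as a PCSHELL; the names WeightedJacobiCtx and ApplyWeightedJacobi are placeholders, not PETSc's. (Alternatively, a damped Jacobi sweep can be obtained from the command line with -ksp_type richardson -ksp_richardson_scale <omega> -pc_type jacobi.)

#include <petscksp.h>

typedef struct { Vec diag; PetscScalar omega; } WeightedJacobiCtx;

static PetscErrorCode ApplyWeightedJacobi(PC pc, Vec x, Vec y)
{
  WeightedJacobiCtx *ctx;
  PetscErrorCode     ierr;

  PetscFunctionBegin;
  ierr = PCShellGetContext(pc, (void**)&ctx);CHKERRQ(ierr);
  ierr = VecPointwiseDivide(y, x, ctx->diag);CHKERRQ(ierr); /* y = D^{-1} x       */
  ierr = VecScale(y, ctx->omega);CHKERRQ(ierr);             /* y = omega D^{-1} x */
  PetscFunctionReturn(0);
}

/* Setup sketch:
   MatCreateVecs(A, &ctx.diag, NULL); MatGetDiagonal(A, ctx.diag); ctx.omega = 2.0/3.0;
   PCSetType(pc, PCSHELL); PCShellSetContext(pc, &ctx);
   PCShellSetApply(pc, ApplyWeightedJacobi); */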

Re: [petsc-users] PETSC error: Caught signal number 8 FPE

2015-11-30 Thread Soumya Mukherjee
It is a PETSc error. And I just wanted to know if it runs without an error on your machine. On Nov 30, 2015 4:34 AM, "Jose E. Roman" wrote: > > I am not going to run your code. We are not a free debugging service. You have to debug the code yourself, and let us know only if the

Re: [petsc-users] PETSC error: Caught signal number 8 FPE

2015-11-30 Thread Matthew Knepley
On Mon, Nov 30, 2015 at 7:59 AM, Soumya Mukherjee wrote: > It is a PETSc error. And I just wanted to know if runs without an error in > your machine. > This is not a PETSc error, as such. PETSc installs a signal handler so that we can try and get more information

[petsc-users] DMPlex: Ghost points after DMRefine

2015-11-30 Thread Morten Nobel-Jørgensen
I have a very simple unstructured mesh composed of two triangles (four vertices) with one shared edge using a DMPlex:

 /|\
/ | \
\ | /
 \|/

After distributing this mesh to two processes, each process owns a triangle. However, one process owns three vertices, while the last vertex is owned by the
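For reference, a sketch (assumed, not from the thread) of how such a two-triangle mesh could be built and distributed, using DMPlexCreateFromCellList with PETSc 3.6-era signatures; the helper name and coordinates are placeholders:

#include <petscdmplex.h>

PetscErrorCode CreateTwoTriangleMesh(MPI_Comm comm, DM *dmDist)
{
  /* Four vertices, two counterclockwise triangles sharing the edge between vertices 1 and 2 */
  const int    cells[]  = {0, 2, 1,  1, 2, 3};
  const double coords[] = {0.0, 0.0,  1.0, 1.0,  1.0, -1.0,  2.0, 0.0};
  DM             dm;
  PetscMPIInt    rank;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = MPI_Comm_rank(comm, &rank);CHKERRQ(ierr);
  /* Only rank 0 supplies the serial mesh; the other ranks pass empty lists */
  ierr = DMPlexCreateFromCellList(comm, 2, rank ? 0 : 2, rank ? 0 : 4, 3, PETSC_TRUE,
                                  cells, 2, coords, &dm);CHKERRQ(ierr);
  ierr = DMPlexDistribute(dm, 0, NULL, dmDist);CHKERRQ(ierr);
  if (*dmDist) { ierr = DMDestroy(&dm);CHKERRQ(ierr); }
  else         { *dmDist = dm; }
  PetscFunctionReturn(0);
}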

Re: [petsc-users] DMPlex: Ghost points after DMRefine

2015-11-30 Thread Matthew Knepley
On Mon, Nov 30, 2015 at 7:01 AM, Morten Nobel-Jørgensen wrote:
> I have a very simple unstructured mesh composed of two triangles (four
> vertices) with one shared edge using a DMPlex:
>
>  /|\
> / | \
> \ | /
>  \|/
>
> After distributing this mesh to two processes, each