Re: [petsc-users] read argument from an XML file

2020-08-20 Thread Jed Brown
Alex Fleeter  writes:

> Thanks, we will try that. I have never used YAML before.

It's meant to be more human-readable than XML, and is widely used these days.

> Anyway, we feel using command line arguments is a bit old fashioned. It can
> be quite desirable to set parameters from a human-readable file.

My usual workflow is to build up comprehensive options by experimenting on the 
command line, then put them in a file (either using the basic format that Barry 
mentioned or YAML).
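
For example, the file can be handed to PETSc right at startup (a hedged C
sketch; "solver.opts" is just a placeholder name):

  #include <petscsys.h>

  int main(int argc, char **argv)
  {
    PetscErrorCode ierr;

    /* third argument of PetscInitialize(): an extra options file, read in
       addition to the command line and any petscrc */
    ierr = PetscInitialize(&argc, &argv, "solver.opts", NULL); if (ierr) return ierr;
    /* ... create KSP/SNES/TS; the *SetFromOptions() calls pick the options up ... */
    ierr = PetscFinalize();
    return ierr;
  }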


Re: [petsc-users] read argument from an XML file

2020-08-20 Thread Barry Smith

  Alex,

  If you are ok with plain text you can also put all your "command line 
arguments" in a text file (using # for comment lines), like

-ksp_rtol 1.e-3
# very tight tolerance for SNES
-snes_rtol 1.e-12

and provide the file to PETSc via

  - the command line argument -options_file filename :-)

  - a file with the standard name petscrc in the running directory, or

  - PetscOptionsInsertFile()
    https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscOptionsInsertFile.html
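
A hedged sketch of the last route (C; the file name is hypothetical, NULL means 
the global options database, and PETSC_TRUE makes a missing file an error):

  ierr = PetscOptionsInsertFile(PETSC_COMM_WORLD, NULL, "solver_options.txt", PETSC_TRUE);CHKERRQ(ierr);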
 


Barry


> On Aug 20, 2020, at 11:44 PM, Alex Fleeter  wrote:
> 
> Thanks, we will try that. I have never used YAML before.
> 
> Anyway, we feel using command line arguments is a bit old fashioned. It can 
> be quite desirable to set parameters from a human-readable file.
> 
> On Thu, Aug 20, 2020 at 9:38 PM Jed Brown  > wrote:
> You can read from a YAML file with -options_file_yaml file.yaml or via
> 
> https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/Sys/PetscOptionsInsertFileYAML.html
>  
> 
> 
> Would this work for you or do you really want XML?
> 
> Alex Fleeter  writes:
> 
> > Hi:
> >
> > Does PETSc have or plan to enable reading arguments from an XML file?
> > Something like the Teuchos package in Trilinos.
> >
> > Thanks,
> >
> > AF



Re: [petsc-users] read argument from an XML file

2020-08-20 Thread Alex Fleeter
Thanks, we will try that. I have never used YAML before.

Anyway, we feel using command line arguments is a bit old fashioned. It can
be quite desirable to set parameters from a human-readable file.

On Thu, Aug 20, 2020 at 9:38 PM Jed Brown  wrote:

> You can read from a YAML file with -options_file_yaml file.yaml or via
>
>
> https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/Sys/PetscOptionsInsertFileYAML.html
>
> Would this work for you or do you really want XML?
>
> Alex Fleeter  writes:
>
> > Hi:
> >
> > Does PETSc have or plan to enable reading arguments from an XML file?
> > Something like the Teuchos package in Trilinos.
> >
> > Thanks,
> >
> > AF
>


Re: [petsc-users] read argument from an XML file

2020-08-20 Thread Jed Brown
You can read from a YAML file with -options_file_yaml file.yaml or via

https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/Sys/PetscOptionsInsertFileYAML.html

Would this work for you or do you really want XML?

Alex Fleeter  writes:

> Hi:
>
> Does PETSc have or plan to enable reading arguments from an XML file?
> Something like the Teuchos package in Trilinos.
>
> Thanks,
>
> AF


[petsc-users] read argument from an XML file

2020-08-20 Thread Alex Fleeter
Hi:

Does PETSc have or plan to enable reading arguments from an XML file?
Something like the Teuchos package in Trilinos.

Thanks,

AF


Re: [petsc-users] MatAssemblyEnd taking too long

2020-08-20 Thread Junchao Zhang
Manav,
 I downloaded your petsc_mat.tgz but could not reproduce the problem on
either Linux or Mac. I used the petsc commit id df0e4300 you mentioned.
 On Linux, I have openmpi-4.0.2 + gcc-8.3.0, and petsc is configured
--with-debugging --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpifort
--COPTFLAGS="-g -O0" --FOPTFLAGS="-g -O0" --CXXOPTFLAGS="-g -O0"
--PETSC_ARCH=linux-host-dbg
 On Mac, I have mpich-3.3.1 + clang-11.0.0-apple, and petsc is
configured --with-debugging=1 --with-cc=mpicc --with-cxx=mpicxx
--with-fc=mpifort --with-ctable=0 COPTFLAGS="-O0 -g" CXXOPTFLAGS="-O0 -g"
PETSC_ARCH=mac-clang-dbg

mpirun -n 8 ./test
rank: 1 : stdout.processor.1
rank: 4 : stdout.processor.4
rank: 0 : stdout.processor.0
rank: 5 : stdout.processor.5
rank: 6 : stdout.processor.6
rank: 7 : stdout.processor.7
rank: 3 : stdout.processor.3
rank: 2 : stdout.processor.2
rank: 1 : Beginning reading nnz...
rank: 4 : Beginning reading nnz...
rank: 0 : Beginning reading nnz...
rank: 5 : Beginning reading nnz...
rank: 7 : Beginning reading nnz...
rank: 2 : Beginning reading nnz...
rank: 3 : Beginning reading nnz...
rank: 6 : Beginning reading nnz...
rank: 5 : Finished reading nnz
rank: 5 : Beginning mat preallocation...
rank: 3 : Finished reading nnz
rank: 3 : Beginning mat preallocation...
rank: 4 : Finished reading nnz
rank: 4 : Beginning mat preallocation...
rank: 7 : Finished reading nnz
rank: 7 : Beginning mat preallocation...
rank: 1 : Finished reading nnz
rank: 1 : Beginning mat preallocation...
rank: 0 : Finished reading nnz
rank: 0 : Beginning mat preallocation...
rank: 2 : Finished reading nnz
rank: 2 : Beginning mat preallocation...
rank: 6 : Finished reading nnz
rank: 6 : Beginning mat preallocation...
rank: 5 : Finished preallocation
rank: 5 : Beginning reading and setting matrix values...
rank: 1 : Finished preallocation
rank: 1 : Beginning reading and setting matrix values...
rank: 7 : Finished preallocation
rank: 7 : Beginning reading and setting matrix values...
rank: 2 : Finished preallocation
rank: 2 : Beginning reading and setting matrix values...
rank: 4 : Finished preallocation
rank: 4 : Beginning reading and setting matrix values...
rank: 0 : Finished preallocation
rank: 0 : Beginning reading and setting matrix values...
rank: 3 : Finished preallocation
rank: 3 : Beginning reading and setting matrix values...
rank: 6 : Finished preallocation
rank: 6 : Beginning reading and setting matrix values...
rank: 1 : Finished reading and setting matrix values
rank: 1 : Beginning mat assembly...
rank: 5 : Finished reading and setting matrix values
rank: 5 : Beginning mat assembly...
rank: 4 : Finished reading and setting matrix values
rank: 4 : Beginning mat assembly...
rank: 2 : Finished reading and setting matrix values
rank: 2 : Beginning mat assembly...
rank: 3 : Finished reading and setting matrix values
rank: 3 : Beginning mat assembly...
rank: 7 : Finished reading and setting matrix values
rank: 7 : Beginning mat assembly...
rank: 6 : Finished reading and setting matrix values
rank: 6 : Beginning mat assembly...
rank: 0 : Finished reading and setting matrix values
rank: 0 : Beginning mat assembly...
rank: 1 : Finished mat assembly
rank: 3 : Finished mat assembly
rank: 7 : Finished mat assembly
rank: 0 : Finished mat assembly
rank: 5 : Finished mat assembly
rank: 2 : Finished mat assembly
rank: 4 : Finished mat assembly
rank: 6 : Finished mat assembly

--Junchao Zhang


On Thu, Aug 20, 2020 at 5:29 PM Junchao Zhang 
wrote:

> I will have a look and report back to you. Thanks.
> --Junchao Zhang
>
>
> On Thu, Aug 20, 2020 at 5:23 PM Manav Bhatia 
> wrote:
>
>> I have created a standalone test that demonstrates the problem at my end.
>> I have stored the indices, etc.  from my problem in a text file for each
>> rank, which I use to initialize the matrix.
>> Please note that the test is specifically for 8 ranks.
>>
>> The .tgz file is on my google drive:
>> https://drive.google.com/file/d/1R-WjS36av3maXX3pUyiR3ndGAxteTVj-/view?usp=sharing
>>
>>
>> This contains a README file with instructions on running. Please note
>> that the work directory needs the index files.
>>
>> Please let me know if I can provide any further information.
>>
>> Thank you all for your help.
>>
>> Regards,
>> Manav
>>
>> On Aug 20, 2020, at 12:54 PM, Jed Brown  wrote:
>>
>> Matthew Knepley  writes:
>>
>> On Thu, Aug 20, 2020 at 11:09 AM Manav Bhatia 
>> wrote:
>>
>>
>>
>> On Aug 20, 2020, at 8:31 AM, Stefano Zampini 
>> wrote:
>>
>> Can you add a MPI_Barrier before
>>
>> ierr = MatAssemblyBegin(aij->A,mode);CHKERRQ(ierr);
>>
>>
>> With a MPI_Barrier before this function call:
>> —  three of the processes have already hit this barrier,
>> —  the other 5 are inside MatStashScatterGetMesg_Private ->
>> MatStashScatterGetMesg_BTS -> MPI_Waitsome(2 processes)/MPI_Waitall(3
>> processes)
>>
>>
>> This is not itself evidence of inconsistent state.  You can use
>>
>>  -build_twosided allreduce
>>
>> to avoid the nonblocking sparse algorithm.
>>
>>
>> 

Re: [petsc-users] On re usability of A matrix and b vector

2020-08-20 Thread Barry Smith


> On Aug 20, 2020, at 7:45 PM, baikadi pranay  wrote:
> 
> Thank you for the clarification. 
> 
>  I also want to get two more things clarified. After i^th call to KSPSolve(), 
> I get x^i as the solution vector. I do a bunch of updates to A matrix and b 
> vector using this x^i and copy x^i into a vector called x_old. 
> 
> 1) When I call KSPSolve for the (i+1)th time using KSPSolve(ksp,b,x,ierr), 
> the variable x already has the solution of the previous iteration in it. Does 
> that mean I need to create a new x vector each time I call KSPSolve? 
> 2) On the similar lines, do I need to create new x_old after each iteration? 

  No, you do not have to do anything special with the vectors; the KSP does not 
track them the way it tracks the matrix.

  Separate note: if you are using the previous solution as the initial guess for 
the next solve, you need to call KSPSetInitialGuessNonzero() to tell KSP to use 
the initial guess (otherwise it always starts with a zero initial guess).
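
  In C the whole loop can therefore look roughly like the hedged sketch below 
(UpdateSystem() is a hypothetical stand-in for your MatSetValues()/VecSetValues() 
plus assembly updates; the Fortran calls are analogous):

  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetInitialGuessNonzero(ksp, PETSC_TRUE);CHKERRQ(ierr); /* reuse x as the starting guess */
  for (it = 0; it < maxit; it++) {
    ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);        /* the new solution overwrites x in place */
    ierr = VecCopy(x, x_old);CHKERRQ(ierr);          /* same x_old vector every iteration */
    ierr = UpdateSystem(A, b, x_old);CHKERRQ(ierr);  /* hypothetical update of A and b */
    /* the state counter on A tells KSP to rebuild the preconditioner at the next solve */
  }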

   Barry

> 
> Please let me know if the questions are unclear, so that I can elaborate. 
> ᐧ
> 
> On Wed, Aug 19, 2020 at 9:06 PM Barry Smith  > wrote:
> 
>   KSP knows if the matrix has changed and rebuilds the parts of the 
> preconditioner it needs if the matrix has changed. When the matrix has not 
> changed it uses the exact same preconditioner as before.
> 
> 
>Barry
> 
>  This "trick" is done by having an integer state variable inside the matrix 
> object. Each time the matrix is changed by MatSetValues() etc. the state 
> variable is incremented. KSPSolve() keeps a record of the state variable of 
> the matrix for each call to SNESSolve(), if the state variable has increased 
> it knows the matrix has changed so updates the preconditioner. 
> 
>> On Aug 19, 2020, at 9:05 PM, baikadi pranay > > wrote:
>> 
>> Hello,
>> 
>> I am trying to solve the poisson equation iteratively using BiCGStab in 
>> FORTRAN 90. After every call to KSPSolve, I update the central coefficients 
>> of A matrix and the b vector (and then solve the new linear equation system, 
>> repeating the process until convergence is achieved). I want to know whether 
>> the A matrix and b vector that are created initially can be used in the 
>> iteration process or do I need to create a new A matrix and b vector in each 
>> iteration.
>> 
>> Please let me know if you need any further information.
>> 
>> Thank you.
>> 
>> Sincerely,
>> Pranay.
>> ᐧ
> 



Re: [petsc-users] On re usability of A matrix and b vector

2020-08-20 Thread baikadi pranay
Thank you for the clarification.

 I also want to get two more things clarified. After the i-th call to
KSPSolve(), I get x^i as the solution vector. I do a bunch of updates to the A
matrix and the b vector using this x^i and copy x^i into a vector called x_old.

1) When I call KSPSolve for the (i+1)-th time using KSPSolve(ksp,b,x,ierr),
the variable x already has the solution of the previous iteration in it.
Does that mean I need to create a new x vector each time I call KSPSolve?
2) Along the same lines, do I need to create a new x_old after each
iteration?

Please let me know if the questions are unclear, so that I can elaborate.
ᐧ

On Wed, Aug 19, 2020 at 9:06 PM Barry Smith  wrote:

>
>   KSP knows if the matrix has changed and rebuilds the parts of the
> preconditioner it needs if the matrix has changed. When the matrix has not
> changed it uses the exact same preconditioner as before.
>
>
>Barry
>
>  This "trick" is done by having an integer state variable inside the
> matrix object. Each time the matrix is changed by MatSetValues() etc. the
> state variable is incremented. KSPSolve() keeps a record of the state
> variable of the matrix for each call to SNESSolve(), if the state variable
> has increased it knows the matrix has changed so updates the
> preconditioner.
>
> On Aug 19, 2020, at 9:05 PM, baikadi pranay 
> wrote:
>
> Hello,
>
> I am trying to solve the poisson equation iteratively using BiCGStab in
> FORTRAN 90. After every call to KSPSolve, I update the central coefficients
> of A matrix and the b vector (and then solve the new linear equation
> system, repeating the process until convergence is achieved). I want to
> know whether the A matrix and b vector that are created initially can be
> used in the iteration process or do I need to create a new A matrix and b
> vector in each iteration.
>
> Please let me know if you need any further information.
>
> Thank you.
>
> Sincerely,
> Pranay.
> ᐧ
>
>
>


Re: [petsc-users] MatAssemblyEnd taking too long

2020-08-20 Thread Junchao Zhang
I will have a look and report back to you. Thanks.
--Junchao Zhang


On Thu, Aug 20, 2020 at 5:23 PM Manav Bhatia  wrote:

> I have created a standalone test that demonstrates the problem at my end.
> I have stored the indices, etc.  from my problem in a text file for each
> rank, which I use to initialize the matrix.
> Please note that the test is specifically for 8 ranks.
>
> The .tgz file is on my google drive:
> https://drive.google.com/file/d/1R-WjS36av3maXX3pUyiR3ndGAxteTVj-/view?usp=sharing
>
>
> This contains a README file with instructions on running. Please note that
> the work directory needs the index files.
>
> Please let me know if I can provide any further information.
>
> Thank you all for your help.
>
> Regards,
> Manav
>
> On Aug 20, 2020, at 12:54 PM, Jed Brown  wrote:
>
> Matthew Knepley  writes:
>
> On Thu, Aug 20, 2020 at 11:09 AM Manav Bhatia 
> wrote:
>
>
>
> On Aug 20, 2020, at 8:31 AM, Stefano Zampini 
> wrote:
>
> Can you add a MPI_Barrier before
>
> ierr = MatAssemblyBegin(aij->A,mode);CHKERRQ(ierr);
>
>
> With a MPI_Barrier before this function call:
> —  three of the processes have already hit this barrier,
> —  the other 5 are inside MatStashScatterGetMesg_Private ->
> MatStashScatterGetMesg_BTS -> MPI_Waitsome(2 processes)/MPI_Waitall(3
> processes)
>
>
> This is not itself evidence of inconsistent state.  You can use
>
>  -build_twosided allreduce
>
> to avoid the nonblocking sparse algorithm.
>
>
> Okay, you should run this with -matstash_legacy just to make sure it is not
> a bug in your MPI implementation. But it looks like
> there is inconsistency in the parallel state. This can happen because we
> have a bug, or it could be that you called a collective
> operation on a subset of the processes. Is there any way you could cut down
> the example (say put all 1s in the matrix, etc) so
> that you could give it to us to run?
>
>
>


Re: [petsc-users] MatAssemblyEnd taking too long

2020-08-20 Thread Manav Bhatia
I have created a standalone test that demonstrates the problem at my end. I 
have stored the indices, etc.  from my problem in a text file for each rank, 
which I use to initialize the matrix.
Please note that the test is specifically for 8 ranks. 

The .tgz file is on my google drive: 
https://drive.google.com/file/d/1R-WjS36av3maXX3pUyiR3ndGAxteTVj-/view?usp=sharing
 

 

This contains a README file with instructions on running. Please note that the 
work directory needs the index files. 

Please let me know if I can provide any further information. 

Thank you all for your help. 

Regards,
Manav

> On Aug 20, 2020, at 12:54 PM, Jed Brown  wrote:
> 
> Matthew Knepley  writes:
> 
>> On Thu, Aug 20, 2020 at 11:09 AM Manav Bhatia  wrote:
>> 
>>> 
>>> 
>>> On Aug 20, 2020, at 8:31 AM, Stefano Zampini 
>>> wrote:
>>> 
>>> Can you add a MPI_Barrier before
>>> 
>>> ierr = MatAssemblyBegin(aij->A,mode);CHKERRQ(ierr);
>>> 
>>> 
>>> With a MPI_Barrier before this function call:
>>> —  three of the processes have already hit this barrier,
>>> —  the other 5 are inside MatStashScatterGetMesg_Private ->
>>> MatStashScatterGetMesg_BTS -> MPI_Waitsome(2 processes)/MPI_Waitall(3
>>> processes)
> 
> This is not itself evidence of inconsistent state.  You can use
> 
>  -build_twosided allreduce
> 
> to avoid the nonblocking sparse algorithm.
> 
>> 
>> Okay, you should run this with -matstash_legacy just to make sure it is not
>> a bug in your MPI implementation. But it looks like
>> there is inconsistency in the parallel state. This can happen because we
>> have a bug, or it could be that you called a collective
>> operation on a subset of the processes. Is there any way you could cut down
>> the example (say put all 1s in the matrix, etc) so
>> that you could give it to us to run?



Re: [petsc-users] MatAssemblyEnd taking too long

2020-08-20 Thread Jed Brown
Matthew Knepley  writes:

> On Thu, Aug 20, 2020 at 11:09 AM Manav Bhatia  wrote:
>
>>
>>
>> On Aug 20, 2020, at 8:31 AM, Stefano Zampini 
>> wrote:
>>
>> Can you add a MPI_Barrier before
>>
>> ierr = MatAssemblyBegin(aij->A,mode);CHKERRQ(ierr);
>>
>>
>> With a MPI_Barrier before this function call:
>> —  three of the processes have already hit this barrier,
>> —  the other 5 are inside MatStashScatterGetMesg_Private ->
>> MatStashScatterGetMesg_BTS -> MPI_Waitsome(2 processes)/MPI_Waitall(3
>> processes)

This is not itself evidence of inconsistent state.  You can use

  -build_twosided allreduce

to avoid the nonblocking sparse algorithm.

>
> Okay, you should run this with -matstash_legacy just to make sure it is not
> a bug in your MPI implementation. But it looks like
> there is inconsistency in the parallel state. This can happen because we
> have a bug, or it could be that you called a collective
> operation on a subset of the processes. Is there any way you could cut down
> the example (say put all 1s in the matrix, etc) so
> that you could give it to us to run?


Re: [petsc-users] MatAssemblyEnd taking too long

2020-08-20 Thread Manav Bhatia


> On Aug 20, 2020, at 11:21 AM, Stefano Zampini  
> wrote:
> 
> 
> 
>> On Aug 20, 2020, at 5:59 PM, Manav Bhatia > > wrote:
>> 
>> 
>> 
>>> On Aug 20, 2020, at 8:31 AM, Stefano Zampini >> > wrote:
>>> 
>>> ((Mat_SeqAIJ*)aij->B->data)->nonew
>>> mat->was_assembled
>>> aij->donotstash
>>> mat->nooffprocentries
>>> 
>> 
>> The values for the last three variables are all False on all 8 processes. 
> 
> Thanks, it seems a bug in MPI or on our side. As Matt said, can you run with 
> -matstash_legacy?

I did that with the result that a smaller case that had been running got stuck 
in the MatAssemblyEnd_MPIAIJ routine with -matstash_legacy. Not sure what this 
implies. 

> Also, make sure to run with a debug version of PETSc (configure using 
> --with-debugging=1).

Trying that now. 

> How feasible is to write a driver code to run with the same mesh, same 
> discretization and same equations to solve but with the matrix assembly only? 
> Does it hang in that case too?

My code is on GitHub (https://github.com/MASTmultiphysics/MAST3), including the 
specific example that is producing this error 
(https://github.com/MASTmultiphysics/MAST3/tree/master/examples/structural/example_6),
 but has multiple dependencies, including libMesh/PETSc/SLEPc/Eigen. 

The mesh generation and sparsity pattern creation is currently done by libMesh. 

If needed, I may be able to store all the necessary information (mesh, nnz, 
noz) in a text file, remove dependencies on libMesh and directly try to 
initialize with information from a file. Would that help? 

> 
>> 
>> Regards,
>> Manav



Re: [petsc-users] PetscFV and TS implicit

2020-08-20 Thread Jed Brown
Matthew Knepley  writes:

> I could never get the FVM stuff to make sense to me for implicit methods.
> Here is my problem understanding. If you have an FVM method, it decides
> to move "stuff" from one cell to its neighboring cells depending on the
> solution to the Riemann problem on each face, which computed the flux. This
> is
> fine unless the timestep is so big that material can flow through into the
> cells beyond the neighbor. Then I should have considered the effect of the
> Riemann problem for those interfaces. That would be in the Jacobian, but I
> don't know how to compute that Jacobian. I guess you could do everything
> matrix-free, but without a preconditioner it seems hard.

So long as we're using method of lines, the flux is just instantaneous flux, 
not integrated over some time step.  It has the same meaning for implicit and 
explicit.

An explicit method would be unstable if you took such a large time step (CFL) 
and an implicit method will not simultaneously be SSP and higher than first 
order, but it's still a consistent discretization of the problem.

It's common (done in FUN3D and others) to precondition with a first-order 
method, where gradient reconstruction/limiting is skipped.  That's what I'd 
recommend because limiting creates nasty nonlinearities and the resulting 
discretizations lack h-ellipticity which makes them very hard to solve.
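
A hedged sketch of how that is often wired up with TS/SNES (C fragment; 
FormIJacobianFirstOrder is a hypothetical user routine that assembles the 
first-order linearization, i.e. no reconstruction or limiting):

  Mat Jpre;
  ierr = DMCreateMatrix(dm, &Jpre);CHKERRQ(ierr);  /* sparsity of the compact first-order stencil */
  ierr = TSSetIJacobian(ts, Jpre, Jpre, FormIJacobianFirstOrder, user);CHKERRQ(ierr);
  /* run with -snes_mf_operator so the Krylov operator is the true Jacobian applied
     matrix-free, while Jpre is what the preconditioner is built from */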


Re: [petsc-users] PetscFV and TS implicit

2020-08-20 Thread Matthew Knepley
On Thu, Aug 20, 2020 at 7:45 AM Thibault Bridel-Bertomeu <
thibault.bridelberto...@gmail.com> wrote:

> Dear all,
>
> I have a finite volume code inspired from TS example ex11.c (with a
> riemann solver, etc...).
> So far, I used only explicit time stepping through the TSSSP, and to set
> the RHS of my hyperbolic system I used :
>
> TSSetType(ts, TSSSP);
> DMTSSetRHSFunctionLocal(dm, DMPlexTSComputeRHSFunctionFVM 
> ,
>  );
>
> after setting the right Riemann solver in the TS associated to the DM.
> Now, in some cases where the physics is stationary, I would like to reach
> the steady state faster and use an implicit timestepping operator to do so.
> After looking through the examples, especially TS examples ex48.c and
> ex53.c, I simply tried to set
>
> TSSetType(ts, TSBEULER);
> DMTSSetIFunctionLocal(dm, DMPlexTSComputeIFunctionFEM 
> ,
>  );
> DMTSSetIJacobianLocal(dm, DMPlexTSComputeIJacobianFEM 
> ,
>  );
>
> instead of the previous calls. It compiles fine, and it runs. However,
> nothing happens : it behaves like there is no time evolution at all, the
> solution does not change from its initial state.
> From the source code, it is my understanding that the
> DMPlexTSComputeIFunctionFEM and DMPlexTSComputeIJacobianFEM methods, in
> spite of their names, call respectively DMPlexComputeResidual_Internal and
> DMPlexComputeJacobian_Internal that can handle a FVM discretization.
>
> What am I missing ? Are there other steps to take before I can simply try
> to run with a finite volume discretization and an implicit time stepping
> algorithm ?
>
> Thank you very much for your help in advance !
>

I could never get the FVM stuff to make sense to me for implicit methods.
Here is my problem understanding it. If you have an FVM method, it decides
to move "stuff" from one cell to its neighboring cells depending on the
solution to the Riemann problem on each face, which computes the flux. This
is
fine unless the timestep is so big that material can flow through into the
cells beyond the neighbor. Then I should have considered the effect of the
Riemann problem for those interfaces. That would be in the Jacobian, but I
don't know how to compute that Jacobian. I guess you could do everything
matrix-free, but without a preconditioner it seems hard.

Operationally, I always mark FVM things explicit, so they are only handled
by RHSFunction/Jacobian, except for the u_t term which I stick in the
IJacobian.
I was never successful in running an implicit FVM and still do not
understand it. I could not find any literature that I understood either.
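
For concreteness, a hedged sketch of that split (C fragment; the My* names are
hypothetical placeholders, and an IMEX integrator such as TSARKIMEX keeps the
RHS explicit while treating the IFunction implicitly). The implicit callbacks
are trivial:

  PetscErrorCode MyIFunction(TS ts, PetscReal t, Vec U, Vec Udot, Vec F, void *ctx)
  {
    PetscErrorCode ierr;
    ierr = VecCopy(Udot, F);CHKERRQ(ierr);   /* only the u_t term lives here */
    return 0;
  }

  PetscErrorCode MyIJacobian(TS ts, PetscReal t, Vec U, Vec Udot, PetscReal a, Mat A, Mat P, void *ctx)
  {
    PetscErrorCode ierr;
    ierr = MatZeroEntries(P);CHKERRQ(ierr);  /* assumes the diagonal is in the preallocated pattern */
    ierr = MatShift(P, a);CHKERRQ(ierr);     /* a*dF/dUdot + dF/dU = a*I */
    ierr = MatAssemblyBegin(P, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(P, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    return 0;
  }

and the setup then reads:

  ierr = TSSetType(ts, TSARKIMEX);CHKERRQ(ierr);
  ierr = TSSetRHSFunction(ts, NULL, MyFVMFluxRHS, user);CHKERRQ(ierr); /* Riemann fluxes, explicit */
  ierr = TSSetIFunction(ts, NULL, MyIFunction, user);CHKERRQ(ierr);    /* F(t,U,Udot) = Udot */
  ierr = TSSetIJacobian(ts, J, J, MyIJacobian, user);CHKERRQ(ierr);    /* J preallocated with a diagonal */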

If there is a definite thing you want changed, I might be able to do it.

  Thanks,

 Matt


> Thibault Bridel-Bertomeu
> —
> Eng, MSc, PhD
> Research Engineer
> CEA/CESTA
> 33114 LE BARP
> Tel.: (+33)557046924
> Mob.: (+33)611025322
> Mail: thibault.bridelberto...@gmail.com
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] MatAssemblyEnd taking too long

2020-08-20 Thread Stefano Zampini


> On Aug 20, 2020, at 5:59 PM, Manav Bhatia  wrote:
> 
> 
> 
>> On Aug 20, 2020, at 8:31 AM, Stefano Zampini > > wrote:
>> 
>> ((Mat_SeqAIJ*)aij->B->data)->nonew
>> mat->was_assembled
>> aij->donotstash
>> mat->nooffprocentries
>> 
> 
> The values for the last three variables are all False on all 8 processes. 

Thanks, it seems to be a bug in MPI or on our side. As Matt said, can you run with 
-matstash_legacy? Also, make sure to run with a debug version of PETSc 
(configure using --with-debugging=1).
How feasible is it to write a driver code that runs with the same mesh, same 
discretization, and same equations to solve, but with the matrix assembly only? 
Does it hang in that case too?
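
Such a driver could be as small as the following hedged sketch (C fragment; 
nlocal, d_nnz, o_nnz, ncols, cols, and vals are placeholders for whatever is read 
back from the dumped per-rank files):

  Mat      A;
  PetscInt i, rstart, rend;
  ierr = MatCreateAIJ(PETSC_COMM_WORLD, nlocal, nlocal, PETSC_DETERMINE, PETSC_DETERMINE,
                      0, d_nnz, 0, o_nnz, &A);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
  for (i = rstart; i < rend; i++) {
    /* ncols, cols, vals for row i come from the file */
    ierr = MatSetValues(A, 1, &i, ncols, cols, vals, ADD_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);   /* the call that hangs */
  ierr = MatDestroy(&A);CHKERRQ(ierr);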

> 
> Regards,
> Manav
> 



Re: [petsc-users] MatAssemblyEnd taking too long

2020-08-20 Thread Manav Bhatia


> On Aug 20, 2020, at 8:31 AM, Stefano Zampini  
> wrote:
> 
> ((Mat_SeqAIJ*)aij->B->data)->nonew
> mat->was_assembled
> aij->donotstash
> mat->nooffprocentries
> 

The values for the last three variables are all False on all 8 processes. 

Regards,
Manav



Re: [petsc-users] MatAssemblyEnd taking too long

2020-08-20 Thread Junchao Zhang
Then could you provide a test case and build instructions?
--Junchao Zhang


On Thu, Aug 20, 2020 at 10:25 AM Manav Bhatia  wrote:

>
>
> On Aug 20, 2020, at 10:22 AM, Junchao Zhang 
> wrote:
>
> See if you could reproduce the problem on another machine, e.g., a Linux
> workstation with MPICH?
>
>
> Yes, the same behavior happened on another machine with Centos 7 with an
> older build of PETSc.
>


Re: [petsc-users] MatAssemblyEnd taking too long

2020-08-20 Thread Manav Bhatia


> On Aug 20, 2020, at 10:22 AM, Junchao Zhang  wrote:
> 
> See if you could reproduce the problem on another machine, e.g., a Linux 
> workstation with MPICH?

Yes, the same behavior happened on another machine with Centos 7 with an older 
build of PETSc. 

Re: [petsc-users] Disable PETSC_HAVE_CLOSURE

2020-08-20 Thread Jed Brown
Barry, this is a side-effect of your Swift experiment.  Does that need to be in 
a header (even if it's a private header)?

The issue may be that you test with a C compiler and it gets included in C++ 
source.

Fande Kong  writes:

> Hi All,
>
> We (moose team) hit an error message when compiling PETSc, recently. The
> error is related to "PETSC_HAVE_CLOSURE." Everything runs well if I am
> going to turn this flag off by making the following changes:
>
>
> git diff
> diff --git a/config/BuildSystem/config/utilities/closure.py
> b/config/BuildSystem/config/utilities/closure.py
> index 6341ddf271..930e5b3b1b 100644
> --- a/config/BuildSystem/config/utilities/closure.py
> +++ b/config/BuildSystem/config/utilities/closure.py
> @@ -19,8 +19,8 @@ class Configure(config.base.Configure):
>   includes = '#include \n'
>   body = 'int (^closure)(int);'
>   self.pushLanguage('C')
> - if self.checkLink(includes, body):
> - self.addDefine('HAVE_CLOSURE','1')
> +# if self.checkLink(includes, body):
> +# self.addDefine('HAVE_CLOSURE','1')
>  def configure(self):
>   self.executeTest(self.configureClosure)
>
>
> I was wondering if there exists a configuration option to disable "Closure"
> C syntax?  I did not find one by running "configuration --help"
>
> Please let me know if you need more information.
>
>
> Thanks,
>
> Fande,
>
>
> In file included from
> /Users/milljm/projects/moose/scripts/../libmesh/src/solvers/petscdmlibmesh.C:25:
> /Users/milljm/projects/moose/petsc/include/petsc/private/petscimpl.h:15:29:
> warning: 'PetscVFPrintfSetClosure' initialized and declared 'extern'
>   15 | PETSC_EXTERN PetscErrorCode PetscVFPrintfSetClosure(int (^)(const
> char*));
>|  ^~~
> /Users/milljm/projects/moose/petsc/include/petsc/private/petscimpl.h:15:53:
> error: expected primary-expression before 'int'
>   15 | PETSC_EXTERN PetscErrorCode PetscVFPrintfSetClosure(int (^)(const
> char*));
>|  ^~~
>  CXX   src/systems/libmesh_opt_la-equation_systems_io.lo
> In file included from
> /Users/milljm/projects/moose/petsc/include/petsc/private/dmimpl.h:7,
>  from
> /Users/milljm/projects/moose/scripts/../libmesh/src/solvers/petscdmlibmeshimpl.C:26:
> /Users/milljm/projects/moose/petsc/include/petsc/private/petscimpl.h:15:29:
> warning: 'PetscVFPrintfSetClosure' initialized and declared 'extern'
>   15 | PETSC_EXTERN PetscErrorCode PetscVFPrintfSetClosure(int (^)(const
> char*));
>|  ^~~
> /Users/milljm/projects/moose/petsc/include/petsc/private/petscimpl.h:15:53:
> error: expected primary-expression before 'int'
>   15 | PETSC_EXTERN PetscErrorCode PetscVFPrintfSetClosure(int (^)(const
> char*));


Re: [petsc-users] MatAssemblyEnd taking too long

2020-08-20 Thread Matthew Knepley
On Thu, Aug 20, 2020 at 11:09 AM Manav Bhatia  wrote:

>
>
> On Aug 20, 2020, at 8:31 AM, Stefano Zampini 
> wrote:
>
> Can you add a MPI_Barrier before
>
> ierr = MatAssemblyBegin(aij->A,mode);CHKERRQ(ierr);
>
>
> With a MPI_Barrier before this function call:
> —  three of the processes have already hit this barrier,
> —  the other 5 are inside MatStashScatterGetMesg_Private ->
> MatStashScatterGetMesg_BTS -> MPI_Waitsome(2 processes)/MPI_Waitall(3
> processes)
>

Okay, you should run this with -matstash_legacy just to make sure it is not
a bug in your MPI implementation. But it looks like
there is inconsistency in the parallel state. This can happen because we
have a bug, or it could be that you called a collective
operation on a subset of the processes. Is there any way you could cut down
the example (say put all 1s in the matrix, etc) so
that you could give it to us to run?

  Thanks,

 Matt


> Also, in order to assess where the issue is, we need to see the values
> (per rank) of
>
> ((Mat_SeqAIJ*)aij->B->data)->nonew
> mat->was_assembled
> aij->donotstash
> mat->nooffprocentries
>
>
> I am working to get this information.
>
> Another question: is this the first matrix assembly of the code?
>
>
> Yes, this is the first matrix assembly in the code.
>
> If you change to pc_none, do you get the same issue?
>
>
> Yes, with "-pc_type none” the code is stuck at the same spot.
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


[petsc-users] Disable PETSC_HAVE_CLOSURE

2020-08-20 Thread Fande Kong
Hi All,

We (moose team) hit an error message when compiling PETSc, recently. The
error is related to "PETSC_HAVE_CLOSURE." Everything runs well if I am
going to turn this flag off by making the following changes:


git diff
diff --git a/config/BuildSystem/config/utilities/closure.py
b/config/BuildSystem/config/utilities/closure.py
index 6341ddf271..930e5b3b1b 100644
--- a/config/BuildSystem/config/utilities/closure.py
+++ b/config/BuildSystem/config/utilities/closure.py
@@ -19,8 +19,8 @@ class Configure(config.base.Configure):
  includes = '#include \n'
  body = 'int (^closure)(int);'
  self.pushLanguage('C')
- if self.checkLink(includes, body):
- self.addDefine('HAVE_CLOSURE','1')
+# if self.checkLink(includes, body):
+# self.addDefine('HAVE_CLOSURE','1')
 def configure(self):
  self.executeTest(self.configureClosure)


I was wondering if there exists a configuration option to disable "Closure"
C syntax?  I did not find one by running "configuration --help"

Please let me know if you need more information.


Thanks,

Fande,


In file included from
/Users/milljm/projects/moose/scripts/../libmesh/src/solvers/petscdmlibmesh.C:25:
/Users/milljm/projects/moose/petsc/include/petsc/private/petscimpl.h:15:29:
warning: 'PetscVFPrintfSetClosure' initialized and declared 'extern'
  15 | PETSC_EXTERN PetscErrorCode PetscVFPrintfSetClosure(int (^)(const
char*));
   |  ^~~
/Users/milljm/projects/moose/petsc/include/petsc/private/petscimpl.h:15:53:
error: expected primary-expression before 'int'
  15 | PETSC_EXTERN PetscErrorCode PetscVFPrintfSetClosure(int (^)(const
char*));
   |  ^~~
 CXX   src/systems/libmesh_opt_la-equation_systems_io.lo
In file included from
/Users/milljm/projects/moose/petsc/include/petsc/private/dmimpl.h:7,
 from
/Users/milljm/projects/moose/scripts/../libmesh/src/solvers/petscdmlibmeshimpl.C:26:
/Users/milljm/projects/moose/petsc/include/petsc/private/petscimpl.h:15:29:
warning: 'PetscVFPrintfSetClosure' initialized and declared 'extern'
  15 | PETSC_EXTERN PetscErrorCode PetscVFPrintfSetClosure(int (^)(const
char*));
   |  ^~~
/Users/milljm/projects/moose/petsc/include/petsc/private/petscimpl.h:15:53:
error: expected primary-expression before 'int'
  15 | PETSC_EXTERN PetscErrorCode PetscVFPrintfSetClosure(int (^)(const
char*));


Re: [petsc-users] MatAssemblyEnd taking too long

2020-08-20 Thread Manav Bhatia


> On Aug 20, 2020, at 8:31 AM, Stefano Zampini  
> wrote:
> 
> Can you add a MPI_Barrier before
> 
> ierr = MatAssemblyBegin(aij->A,mode);CHKERRQ(ierr);
> 

With a MPI_Barrier before this function call:
—  three of the processes have already hit this barrier, 
—  the other 5 are inside MatStashScatterGetMesg_Private -> 
MatStashScatterGetMesg_BTS -> MPI_Waitsome(2 processes)/MPI_Waitall(3 processes)


> Also, in order to assess where the issue is, we need to see the values (per 
> rank) of 
> 
> ((Mat_SeqAIJ*)aij->B->data)->nonew
> mat->was_assembled
> aij->donotstash
> mat->nooffprocentries
> 

I am working to get this information. 

> Another question: is this the first matrix assembly of the code?

Yes, this is the first matrix assembly in the code. 

> If you change to pc_none, do you get the same issue?


Yes, with "-pc_type none” the code is stuck at the same spot. 

Re: [petsc-users] MatAssemblyEnd taking too long

2020-08-20 Thread Manav Bhatia
Stefano, 

I will report the results to these shortly. To your second question, this is 
the first matrix assembly of the code. 

-Manav

> On Aug 20, 2020, at 8:31 AM, Stefano Zampini  
> wrote:
> 
> Manav
> 
> Can you add a MPI_Barrier before
> 
> ierr = MatAssemblyBegin(aij->A,mode);CHKERRQ(ierr);
> 
> Also, in order to assess where the issue is, we need to see the values (per 
> rank) of 
> 
> ((Mat_SeqAIJ*)aij->B->data)->nonew
> mat->was_assembled
> aij->donotstash
> mat->nooffprocentries
> 
> Another question: is this the first matrix assembly of the code?
> If you change to pc_none, do you get the same issue?
> 
>> On Aug 20, 2020, at 3:10 PM, Manav Bhatia > > wrote:
>> 
>> 
>> 
>>> On Aug 19, 2020, at 9:39 PM, Matthew Knepley >> > wrote:
>>> 
>>> Jed is more knowledgeable about the communication, but I have a simple 
>>> question about the FEM method. Normally, the way
>>> we divide unknowns is that the only unknowns which might have entries 
>>> computed off-process are those on the partition boundary.
>>> However, it sounds like you have a huge number of communicated values. Is 
>>> it possible that the division of rows in your matrix does
>>> not match the division of the cells you compute element matrices for?
>> 
>> 
>> I hope that is not the case. I am using libMesh to manage the mesh and 
>> creation of sparsity pattern, which uses Parmetis to create the partitions. 
>> libMesh ensures that off-process entries are only at the partition boundary 
>> (unless an extra set of DoFs are marked for coupling. 
>> 
>> I also printed and looked at the n_nz and n_oz values on each rank and it 
>> does not seem to raise any flags.  
>> 
>> I will try to dig in a bit further to make sure everything checks out. 
>> 
>> Looking at the screenshots I had shared yesterday, all processes are in this 
>> function: 
>> 
>> PetscErrorCode MatAssemblyEnd_MPIAIJ(Mat mat,MatAssemblyType mode)
>> {
>>   Mat_MPIAIJ *aij = (Mat_MPIAIJ*)mat->data;
>>   Mat_SeqAIJ *a   = (Mat_SeqAIJ*)aij->A->data;
>>   PetscErrorCode ierr;
>>   PetscMPIInt    n;
>>   PetscInt   i,j,rstart,ncols,flg;
>>   PetscInt   *row,*col;
>>   PetscBool  other_disassembled;
>>   PetscScalar    *val;
>> 
>>   /* do not use 'b = (Mat_SeqAIJ*)aij->B->data' as B can be reset in 
>> disassembly */
>> 
>>   PetscFunctionBegin;
>>   if (!aij->donotstash && !mat->nooffprocentries) {
>> while (1) {
>>   ierr = 
>> MatStashScatterGetMesg_Private(&mat->stash,&n,&row,&col,&val,&flg);CHKERRQ(ierr);
>>   if (!flg) break;
>> 
>>   for (i=0; i<n; ) {
>> /* Now identify the consecutive vals belonging to the same row */
>> for (j=i,rstart=row[j]; j<n; j++) {
>>   if (row[j] != rstart) break;
>> }
>> if (j < n) ncols = j-i;
>> else   ncols = n-i;
>> /* Now assemble all these values with a single function call */
>> ierr = 
>> MatSetValues_MPIAIJ(mat,1,row+i,ncols,col+i,val+i,mat->insertmode);CHKERRQ(ierr);
>> 
>> i = j;
>>   }
>> }
>> ierr = MatStashScatterEnd_Private(&mat->stash);CHKERRQ(ierr);
>>   }
>>   ierr = MatAssemblyBegin(aij->A,mode);CHKERRQ(ierr);
>>   ierr = MatAssemblyEnd(aij->A,mode);CHKERRQ(ierr);
>> 
>>   /* determine if any processor has disassembled, if so we must
>>  also disassemble ourselfs, in order that we may reassemble. */
>>   /*
>>  if nonzero structure of submatrix B cannot change then we know that
>>  no processor disassembled thus we can skip this stuff
>>   */
>>   if (!((Mat_SeqAIJ*)aij->B->data)->nonew) {
>> ierr = 
>> MPIU_Allreduce(&mat->was_assembled,&other_disassembled,1,MPIU_BOOL,MPI_PROD,PetscObjectComm((PetscObject)mat));CHKERRQ(ierr);
>> if (mat->was_assembled && !other_disassembled) {
>>   ierr = MatDisAssemble_MPIAIJ(mat);CHKERRQ(ierr);
>> }
>>   }
>>   if (!mat->was_assembled && mode == MAT_FINAL_ASSEMBLY) {
>> ierr = MatSetUpMultiply_MPIAIJ(mat);CHKERRQ(ierr);
>>   }
>>   ierr = MatSetOption(aij->B,MAT_USE_INODES,PETSC_FALSE);CHKERRQ(ierr);
>>   ierr = MatAssemblyBegin(aij->B,mode);CHKERRQ(ierr);
>>   ierr = MatAssemblyEnd(aij->B,mode);CHKERRQ(ierr);
>> 
>>   ierr = PetscFree2(aij->rowvalues,aij->rowindices);CHKERRQ(ierr);
>> 
>>   aij->rowvalues = 0;
>> 
>>   ierr = VecDestroy(&aij->diag);CHKERRQ(ierr);
>>   if (a->inode.size) mat->ops->multdiagonalblock = 
>> MatMultDiagonalBlock_MPIAIJ;
>> 
>>   /* if no new nonzero locations are allowed in matrix then only set the 
>> matrix state the first time through */
>>   if ((!mat->was_assembled && mode == MAT_FINAL_ASSEMBLY) || 
>> !((Mat_SeqAIJ*)(aij->A->data))->nonew) {
>> PetscObjectState state = aij->A->nonzerostate + aij->B->nonzerostate;
>> ierr = 
>> MPIU_Allreduce(&state,&mat->nonzerostate,1,MPIU_INT64,MPI_SUM,PetscObjectComm((PetscObject)mat));CHKERRQ(ierr);
>>   }
>>   PetscFunctionReturn(0);
>> }
>> 
>> 
>>  I noticed that of the 8 MPI processes, 2 were stuck at 
>>   ierr = 
>> 

Re: [petsc-users] MatAssemblyEnd taking too long

2020-08-20 Thread Stefano Zampini
Manav

Can you add a MPI_Barrier before

ierr = MatAssemblyBegin(aij->A,mode);CHKERRQ(ierr);

Also, in order to assess where the issue is, we need to see the values (per 
rank) of 

((Mat_SeqAIJ*)aij->B->data)->nonew
mat->was_assembled
aij->donotstash
mat->nooffprocentries

Another question: is this the first matrix assembly of the code?
If you change to pc_none, do you get the same issue?

> On Aug 20, 2020, at 3:10 PM, Manav Bhatia  wrote:
> 
> 
> 
>> On Aug 19, 2020, at 9:39 PM, Matthew Knepley > > wrote:
>> 
>> Jed is more knowledgeable about the communication, but I have a simple 
>> question about the FEM method. Normally, the way
>> we divide unknowns is that the only unknowns which might have entries 
>> computed off-process are those on the partition boundary.
>> However, it sounds like you have a huge number of communicated values. Is it 
>> possible that the division of rows in your matrix does
>> not match the division of the cells you compute element matrices for?
> 
> 
> I hope that is not the case. I am using libMesh to manage the mesh and 
> creation of sparsity pattern, which uses Parmetis to create the partitions. 
> libMesh ensures that off-process entries are only at the partition boundary 
> (unless an extra set of DoFs are marked for coupling. 
> 
> I also printed and looked at the n_nz and n_oz values on each rank and it 
> does not seem to raise any flags.  
> 
> I will try to dig in a bit further to make sure everything checks out. 
> 
> Looking at the screenshots I had shared yesterday, all processes are in this 
> function: 
> 
> PetscErrorCode MatAssemblyEnd_MPIAIJ(Mat mat,MatAssemblyType mode)
> {
>   Mat_MPIAIJ *aij = (Mat_MPIAIJ*)mat->data;
>   Mat_SeqAIJ *a   = (Mat_SeqAIJ*)aij->A->data;
>   PetscErrorCode ierr;
>   PetscMPIInt    n;
>   PetscInt   i,j,rstart,ncols,flg;
>   PetscInt   *row,*col;
>   PetscBool  other_disassembled;
>   PetscScalar    *val;
> 
>   /* do not use 'b = (Mat_SeqAIJ*)aij->B->data' as B can be reset in 
> disassembly */
> 
>   PetscFunctionBegin;
>   if (!aij->donotstash && !mat->nooffprocentries) {
> while (1) {
>   ierr = 
> MatStashScatterGetMesg_Private(&mat->stash,&n,&row,&col,&val,&flg);CHKERRQ(ierr);
>   if (!flg) break;
> 
>   for (i=0; i<n; ) {
> /* Now identify the consecutive vals belonging to the same row */
> for (j=i,rstart=row[j]; j<n; j++) {
>   if (row[j] != rstart) break;
> }
> if (j < n) ncols = j-i;
> else   ncols = n-i;
> /* Now assemble all these values with a single function call */
> ierr = 
> MatSetValues_MPIAIJ(mat,1,row+i,ncols,col+i,val+i,mat->insertmode);CHKERRQ(ierr);
> 
> i = j;
>   }
> }
> ierr = MatStashScatterEnd_Private(&mat->stash);CHKERRQ(ierr);
>   }
>   ierr = MatAssemblyBegin(aij->A,mode);CHKERRQ(ierr);
>   ierr = MatAssemblyEnd(aij->A,mode);CHKERRQ(ierr);
> 
>   /* determine if any processor has disassembled, if so we must
>  also disassemble ourselfs, in order that we may reassemble. */
>   /*
>  if nonzero structure of submatrix B cannot change then we know that
>  no processor disassembled thus we can skip this stuff
>   */
>   if (!((Mat_SeqAIJ*)aij->B->data)->nonew) {
> ierr = 
> MPIU_Allreduce(&mat->was_assembled,&other_disassembled,1,MPIU_BOOL,MPI_PROD,PetscObjectComm((PetscObject)mat));CHKERRQ(ierr);
> if (mat->was_assembled && !other_disassembled) {
>   ierr = MatDisAssemble_MPIAIJ(mat);CHKERRQ(ierr);
> }
>   }
>   if (!mat->was_assembled && mode == MAT_FINAL_ASSEMBLY) {
> ierr = MatSetUpMultiply_MPIAIJ(mat);CHKERRQ(ierr);
>   }
>   ierr = MatSetOption(aij->B,MAT_USE_INODES,PETSC_FALSE);CHKERRQ(ierr);
>   ierr = MatAssemblyBegin(aij->B,mode);CHKERRQ(ierr);
>   ierr = MatAssemblyEnd(aij->B,mode);CHKERRQ(ierr);
> 
>   ierr = PetscFree2(aij->rowvalues,aij->rowindices);CHKERRQ(ierr);
> 
>   aij->rowvalues = 0;
> 
>   ierr = VecDestroy(&aij->diag);CHKERRQ(ierr);
>   if (a->inode.size) mat->ops->multdiagonalblock = 
> MatMultDiagonalBlock_MPIAIJ;
> 
>   /* if no new nonzero locations are allowed in matrix then only set the 
> matrix state the first time through */
>   if ((!mat->was_assembled && mode == MAT_FINAL_ASSEMBLY) || 
> !((Mat_SeqAIJ*)(aij->A->data))->nonew) {
> PetscObjectState state = aij->A->nonzerostate + aij->B->nonzerostate;
> ierr = 
> MPIU_Allreduce(&state,&mat->nonzerostate,1,MPIU_INT64,MPI_SUM,PetscObjectComm((PetscObject)mat));CHKERRQ(ierr);
>   }
>   PetscFunctionReturn(0);
> }
> 
> 
>  I noticed that of the 8 MPI processes, 2 were stuck at 
>   ierr = 
> MatStashScatterGetMesg_Private(&mat->stash,&n,&row,&col,&val,&flg);CHKERRQ(ierr);
> 
> Other two were stuck at
>  ierr = MatStashScatterEnd_Private(&mat->stash);CHKERRQ(ierr);
> 
> And remaining four were under
>  ierr = MatSetUpMultiply_MPIAIJ(mat);CHKERRQ(ierr);
> 
> Is it expected for processes to be at different stages in this function? 
> 
> -Manav
> 
> 



Re: [petsc-users] MatAssemblyEnd taking too long

2020-08-20 Thread Manav Bhatia


> On Aug 19, 2020, at 9:39 PM, Matthew Knepley  wrote:
> 
> Jed is more knowledgeable about the communication, but I have a simple 
> question about the FEM method. Normally, the way
> we divide unknowns is that the only unknowns which might have entries 
> computed off-process are those on the partition boundary.
> However, it sounds like you have a huge number of communicated values. Is it 
> possible that the division of rows in your matrix does
> not match the division of the cells you compute element matrices for?


I hope that is not the case. I am using libMesh to manage the mesh and creation 
of sparsity pattern, which uses Parmetis to create the partitions. 
libMesh ensures that off-process entries are only at the partition boundary 
(unless an extra set of DoFs is marked for coupling). 

I also printed and looked at the n_nz and n_oz values on each rank and it does 
not seem to raise any flags.  

I will try to dig in a bit further to make sure everything checks out. 

Looking at the screenshots I had shared yesterday, all processes are in this 
function: 

PetscErrorCode MatAssemblyEnd_MPIAIJ(Mat mat,MatAssemblyType mode)
{
  Mat_MPIAIJ *aij = (Mat_MPIAIJ*)mat->data;
  Mat_SeqAIJ *a   = (Mat_SeqAIJ*)aij->A->data;
  PetscErrorCode ierr;
  PetscMPIInt    n;
  PetscInt       i,j,rstart,ncols,flg;
  PetscInt       *row,*col;
  PetscBool      other_disassembled;
  PetscScalar    *val;

  /* do not use 'b = (Mat_SeqAIJ*)aij->B->data' as B can be reset in 
disassembly */

  PetscFunctionBegin;
  if (!aij->donotstash && !mat->nooffprocentries) {
while (1) {
      ierr = MatStashScatterGetMesg_Private(&mat->stash,&n,&row,&col,&val,&flg);CHKERRQ(ierr);
      if (!flg) break;

      for (i=0; i<n; ) {
        /* Now identify the consecutive vals belonging to the same row */
        for (j=i,rstart=row[j]; j<n; j++) {
          if (row[j] != rstart) break;
        }
        if (j < n) ncols = j-i;
        else       ncols = n-i;
        /* Now assemble all these values with a single function call */
        ierr = MatSetValues_MPIAIJ(mat,1,row+i,ncols,col+i,val+i,mat->insertmode);CHKERRQ(ierr);

        i = j;
      }
    }
    ierr = MatStashScatterEnd_Private(&mat->stash);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(aij->A,mode);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(aij->A,mode);CHKERRQ(ierr);

  /* determine if any processor has disassembled, if so we must
 also disassemble ourselfs, in order that we may reassemble. */
  /*
 if nonzero structure of submatrix B cannot change then we know that
 no processor disassembled thus we can skip this stuff
  */
  if (!((Mat_SeqAIJ*)aij->B->data)->nonew) {
    ierr = MPIU_Allreduce(&mat->was_assembled,&other_disassembled,1,MPIU_BOOL,MPI_PROD,PetscObjectComm((PetscObject)mat));CHKERRQ(ierr);
if (mat->was_assembled && !other_disassembled) {
  ierr = MatDisAssemble_MPIAIJ(mat);CHKERRQ(ierr);
}
  }
  if (!mat->was_assembled && mode == MAT_FINAL_ASSEMBLY) {
ierr = MatSetUpMultiply_MPIAIJ(mat);CHKERRQ(ierr);
  }
  ierr = MatSetOption(aij->B,MAT_USE_INODES,PETSC_FALSE);CHKERRQ(ierr);
  ierr = MatAssemblyBegin(aij->B,mode);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(aij->B,mode);CHKERRQ(ierr);

  ierr = PetscFree2(aij->rowvalues,aij->rowindices);CHKERRQ(ierr);

  aij->rowvalues = 0;

  ierr = VecDestroy(&aij->diag);CHKERRQ(ierr);
  if (a->inode.size) mat->ops->multdiagonalblock = MatMultDiagonalBlock_MPIAIJ;

  /* if no new nonzero locations are allowed in matrix then only set the matrix 
state the first time through */
  if ((!mat->was_assembled && mode == MAT_FINAL_ASSEMBLY) || 
!((Mat_SeqAIJ*)(aij->A->data))->nonew) {
PetscObjectState state = aij->A->nonzerostate + aij->B->nonzerostate;
    ierr = MPIU_Allreduce(&state,&mat->nonzerostate,1,MPIU_INT64,MPI_SUM,PetscObjectComm((PetscObject)mat));CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}


 I noticed that of the 8 MPI processes, 2 were stuck at 
  ierr = MatStashScatterGetMesg_Private(&mat->stash,&n,&row,&col,&val,&flg);CHKERRQ(ierr);

Other two were stuck at
 ierr = MatStashScatterEnd_Private(&mat->stash);CHKERRQ(ierr);

And remaining four were under
 ierr = MatSetUpMultiply_MPIAIJ(mat);CHKERRQ(ierr);

Is it expected for processes to be at different stages in this function? 

-Manav




[petsc-users] PetscFV and TS implicit

2020-08-20 Thread Thibault Bridel-Bertomeu
Dear all,

I have a finite volume code inspired by TS example ex11.c (with a Riemann
solver, etc.).
So far, I used only explicit time stepping through the TSSSP, and to set
the RHS of my hyperbolic system I used :

TSSetType(ts, TSSSP);
DMTSSetRHSFunctionLocal(dm, DMPlexTSComputeRHSFunctionFVM
,
);

after setting the right Riemann solver in the TS associated to the DM.
Now, in some cases where the physics is stationary, I would like to reach
the steady state faster and use an implicit timestepping operator to do so.
After looking through the examples, especially TS examples ex48.c and
ex53.c, I simply tried to set

TSSetType(ts, TSBEULER);
DMTSSetIFunctionLocal(dm, DMPlexTSComputeIFunctionFEM
,
);
DMTSSetIJacobianLocal(dm, DMPlexTSComputeIJacobianFEM
,
);

instead of the previous calls. It compiles fine, and it runs. However,
nothing happens: it behaves like there is no time evolution at all; the
solution does not change from its initial state.
From the source code, it is my understanding that the
DMPlexTSComputeIFunctionFEM and DMPlexTSComputeIJacobianFEM methods, in
spite of their names, call respectively DMPlexComputeResidual_Internal and
DMPlexComputeJacobian_Internal, which can handle an FVM discretization.

What am I missing? Are there other steps to take before I can simply try
to run with a finite volume discretization and an implicit time stepping
algorithm?

Thank you very much for your help in advance !

Thibault Bridel-Bertomeu
—
Eng, MSc, PhD
Research Engineer
CEA/CESTA
33114 LE BARP
Tel.: (+33)557046924
Mob.: (+33)611025322
Mail: thibault.bridelberto...@gmail.com