[petsc-users] Fourth annual PETSc users meeting, Imperial College London, UK, June 4-6, 2018.

2018-02-14 Thread Smith, Barry F.
PETSc users, We are pleased to announce that the fourth annual PETSc users meeting will take place at Imperial College London, UK, June 4-6, 2018. For more information and to register, please go to http://www.mcs.anl.gov/petsc/meetings/2018/index.html There is some money available for travel

Re: [petsc-users] SNESQN number of past states

2018-02-14 Thread Smith, Barry F.
I stuck the line PetscOptionsSetValue(NULL,"-snes_qn_m","50"); into src/snes/examples/tutorials/ex19.c and ran it with -da_refine 2 -snes_monitor -snes_type qn -snes_view, and the output showed "Stored subspace size: 50", so I am afraid it is something specific to your code that
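
A minimal sketch of the pattern Barry describes, assuming a standalone QN driver rather than the actual ex19.c; error checking follows the usual CHKERRQ convention:

    #include <petscsnes.h>

    int main(int argc, char **argv)
    {
      SNES           snes;
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
      ierr = SNESCreate(PETSC_COMM_WORLD, &snes); CHKERRQ(ierr);
      ierr = SNESSetType(snes, SNESQN); CHKERRQ(ierr);
      /* must run before SNESSetFromOptions(), or the option is never read */
      ierr = PetscOptionsSetValue(NULL, "-snes_qn_m", "50"); CHKERRQ(ierr);
      ierr = SNESSetFromOptions(snes); CHKERRQ(ierr);
      /* ... set the function/Jacobian and call SNESSolve(); -snes_view
         should now report "Stored subspace size: 50" ... */
      ierr = SNESDestroy(&snes); CHKERRQ(ierr);
      ierr = PetscFinalize();
      return ierr;
    }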

Re: [petsc-users] How to efficiently represent a diagonal matrix?

2018-02-14 Thread Smith, Barry F.
Fande, I think you should just use AIJ. All the algorithms (MatMult, MatFactor, MatSolve) are order-n work with a relatively small constant when the matrix is diagonal, and the overhead of using AIJ instead of a custom format is probably at most a factor of three, and since the work is order
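
A minimal sketch of that suggestion (the names BuildDiagonalAIJ, nlocal, and diag are illustrative, not PETSc API): a diagonal matrix is just AIJ preallocated with one nonzero per row.

    #include <petscmat.h>

    PetscErrorCode BuildDiagonalAIJ(MPI_Comm comm, PetscInt nlocal,
                                    const PetscScalar *diag, Mat *A)
    {
      PetscInt       i, rstart, rend;
      PetscErrorCode ierr;

      /* 1 nonzero in the diagonal block, 0 in the off-diagonal block, per row */
      ierr = MatCreateAIJ(comm, nlocal, nlocal, PETSC_DETERMINE, PETSC_DETERMINE,
                          1, NULL, 0, NULL, A); CHKERRQ(ierr);
      ierr = MatGetOwnershipRange(*A, &rstart, &rend); CHKERRQ(ierr);
      for (i = rstart; i < rend; i++) {
        ierr = MatSetValue(*A, i, i, diag[i - rstart], INSERT_VALUES); CHKERRQ(ierr);
      }
      ierr = MatAssemblyBegin(*A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
      ierr = MatAssemblyEnd(*A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
      return 0;
    }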

Re: [petsc-users] SNESQN number of past states

2018-02-14 Thread Bikash Kanungo
Thanks Barry and Matthew. @Barry: I'm following the same procedure you've mentioned - the PetscOptionsSetValue() calls precede SNESSetFromOptions(). Here's the snippet from my code: --- error =

Re: [petsc-users] How to efficiently represent a diagonal matrix?

2018-02-14 Thread Jed Brown
Fande Kong writes:
> On Wed, Feb 14, 2018 at 4:35 PM, Smith, Barry F. wrote:
>> What are you doing with the matrix?
> We are doing an explicit method. PDEs are discretized using a finite
> element method, so there is a mass matrix. The mass
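
A hedged sketch of one alternative for the explicit-method case, not necessarily what the thread settled on: if the mass matrix is lumped, keep only its diagonal in a Vec and apply M^{-1} as a pointwise divide, skipping a Mat entirely. Mdiag, r, and u are hypothetical names:

    Vec Mdiag, r, u;                 /* lumped mass diagonal, residual, result */
    /* ... create and fill Mdiag and r ... */
    VecDuplicate(r, &u);
    VecPointwiseDivide(u, r, Mdiag); /* u_i = r_i / M_ii, i.e. u = M^{-1} r */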

Re: [petsc-users] How to efficiently represent a diagonal matrix?

2018-02-14 Thread Matthew Knepley
On Wed, Feb 14, 2018 at 7:11 PM, Fande Kong wrote:
> On Wed, Feb 14, 2018 at 4:35 PM, Smith, Barry F. wrote:
>> What are you doing with the matrix?
> We are doing an explicit method. PDEs are discretized using a finite
> element method, so

Re: [petsc-users] SNESQN number of past states

2018-02-14 Thread Matthew Knepley
On Wed, Feb 14, 2018 at 6:43 PM, Smith, Barry F. wrote:
> Hmm,
> 1) make sure you call PetscOptionsSetValue() before you call SNESSetFromOptions()
> 2) make sure you call SNESSetFromOptions()
> 3) did you add a prefix to the SNES object? If so, make sure you

Re: [petsc-users] Fwd: what is the equivalent DMDAVecRestoreArray() function in petsc4py?

2018-02-14 Thread Matthew Knepley
On Wed, Feb 14, 2018 at 6:05 PM, HeeHo Park wrote:
> I just found a user group on the PETSc website. Can someone please answer
> the question below?
I think it will work using

    with da.getVecArray(U) as u:
        for i in range(mstart, mend):
            u[i] =

Re: [petsc-users] SNESQN number of past states

2018-02-14 Thread Smith, Barry F.
Hmm,
1) make sure you call PetscOptionsSetValue() before you call SNESSetFromOptions()
2) make sure you call SNESSetFromOptions()
3) did you add a prefix to the SNES object? If so, make sure you include it in the PetscOptionsSetValue() call.
I can't see a reason why it won't work.
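
A short sketch of item 3, where "mysolver_" is a hypothetical prefix:

    SNESSetOptionsPrefix(snes, "mysolver_");
    /* the prefix must also appear in the option name */
    PetscOptionsSetValue(NULL, "-mysolver_snes_qn_m", "50");
    SNESSetFromOptions(snes);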

Re: [petsc-users] FEM & conformal mesh

2018-02-14 Thread Matthew Knepley
On Tue, Jan 23, 2018 at 11:14 AM, Yann Jobic wrote:
> Hello,
> I'm trying to understand the numbering of quadrature points in order to
> solve the FEM system, and how you manage this numbering in order to allow
> conformal meshes. I looked in several files in order to

Re: [petsc-users] How to efficiently represent a diagonal matrix?

2018-02-14 Thread Smith, Barry F.
What are you doing with the matrix? We don't have a diagonal matrix type, but it would be easy to add such a beast if it were performance-critical, which it probably isn't. Barry
> On Feb 14, 2018, at 3:57 PM, Fande Kong wrote:
> Hi All,
> If a matrix is always

[petsc-users] SNESQN number of past states

2018-02-14 Thread Bikash Kanungo
Hi, I'm using the L-BFGS QN solver. In order to set the number of past states (and the restart size if I use periodic restart) to, say, 50, I'm using PetscOptionsSetValue("-snes_qn_m", "50"). However, while running, it still shows "Stored subspace size: 10", i.e., the default value of 10 is not

[petsc-users] Fwd: what is the equivalent DMDAVecRestoreArray() function in petsc4py?

2018-02-14 Thread HeeHo Park
I just found a user group on the PETSc website. Can someone please answer the question below? Thanks!
-- Forwarded message --
From: HeeHo Park
Date: Wed, Feb 14, 2018 at 5:04 PM
Subject: what is the equivalent DMDAVecRestoreArray() function in petsc4py?
To:

Re: [petsc-users] with-openmp error with hypre

2018-02-14 Thread Mark Adams
And we found that the code runs fine on Haswell: a KNL compiler bug, not a PETSc/hypre bug. Mark
On Wed, Feb 14, 2018 at 3:58 PM, Mark Adams wrote:
>> Your point about data decomposition is a good one. Even if you want to
>> run with threads, you must decompose your data

[petsc-users] How to efficiently represent a diagonal matrix?

2018-02-14 Thread Fande Kong
Hi All, If a matrix is always diagonal, what is a good way to represent it? Still MPIAIJ or MPIBAIJ? Could we have a specific implementation for this? Fande,

Re: [petsc-users] with-openmp error with hypre

2018-02-14 Thread Mark Adams
> Your point about data decomposition is a good one. Even if you want to run
> with threads, you must decompose your data intelligently to get good
> performance. Can't you do the MPI shared work and still pass it off as
> work necessary for threading anyway?
We don't have any resources to

Re: [petsc-users] multiply a mpibaij matrix by its block diagonal inverse

2018-02-14 Thread Xiangdong
Thanks a lot, Barry! I see that you have implemented the bs=3 special case. I will play with this code, add at least the bs=2 case, and try to get it working for parallel BAIJ. I will keep you updated. Thank you. Xiangdong
On Wed, Feb 14, 2018 at 2:57 PM, Smith, Barry F.

Re: [petsc-users] multiply a mpibaij matrix by its block diagonal inverse

2018-02-14 Thread Smith, Barry F.
In the PETSc git branch barry/feature-baij-blockdiagonal-scale I have done the "heavy lifting" for what you need. See https://bitbucket.org/petsc/petsc/branch/barry/feature-baij-blockdiagonal-scale
It scales the SeqBAIJ matrix by its block diagonal. You will need to write a routine to

Re: [petsc-users] multiply a mpibaij matrix by its block diagonal inverse

2018-02-14 Thread Xiangdong
The idea goes back to the alternate-block-factorization (ABF) method (https://link.springer.com/article/10.1007/BF01932753) and is widely used in reservoir simulation, where the unknowns are pressure and saturation. Although the coupled equations are parabolic, the pressure equations/variables

Re: [petsc-users] multiply a mpibaij matrix by its block diagonal inverse

2018-02-14 Thread Smith, Barry F.
Hmm, I have never had this idea presented to me, so I have no way to know whether it is particularly good or bad. So essentially you transform the matrix, "decoupling the physics along the diagonal", and then apply PCFIELDSPLIT, instead of using PCFIELDSPLIT directly on the original equations. Maybe in the

Re: [petsc-users] multiply a mpibaij matrix by its block diagonal inverse

2018-02-14 Thread Xiangdong
The reason for the operation invdiag(A)*A is to get a decoupled matrix/physics for preconditioning. For example, after the transformation each diagonal block is the identity matrix (e.g. [1,0,0; 0,1,0; 0,0,1] for bs=3). One can extract a submatrix (e.g. corresponding to only the first unknown) and apply
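
A hedged sketch of one building block for this transformation, assuming a BAIJ matrix A: MatInvertBlockDiagonal() returns the inverted point-block diagonal, which can then be applied to each block row to form invdiag(A)*A.

    const PetscScalar *idiag;
    PetscInt           bs;

    MatGetBlockSize(A, &bs);
    MatInvertBlockDiagonal(A, &idiag); /* one inverted bs*bs block per block row */
    /* ... scale each block row of A by its inverted diagonal block ... */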

Re: [petsc-users] with-openmp error with hypre

2018-02-14 Thread Matthew Knepley
On Wed, Feb 14, 2018 at 5:36 AM, Mark Adams wrote:
>> We have been tracking down what look like compiler bugs, and we have only
>> looked at peak performance to make sure we are not wasting our time with
>> threads.
> You are wasting your time. There are better

Re: [petsc-users] with-openmp error with hypre

2018-02-14 Thread Mark Adams
>> We have been tracking down what look like compiler bugs, and we have only
>> looked at peak performance to make sure we are not wasting our time with
>> threads.
> You are wasting your time. There are better ways to deal with global
> metadata than with threads.
OK, while I agree with