Re: [petsc-users] Issue using multi-grid as a pre-conditioner with KSP for a Poisson problem

2017-07-03 Thread Barry Smith
> On Jul 3, 2017, at 12:19 PM, Jed Brown wrote:
> Scaling by the volume element causes the rediscretized coarse grid problem to be scaled like a Galerkin coarse operator. This is done automatically when you use finite element methods.
Galerkin coarse grid operator

Re: [petsc-users] Issue using multi-grid as a pre-conditioner with KSP for a Poisson problem

2017-07-03 Thread Jed Brown
Scaling by the volume element causes the rediscretized coarse grid problem to be scaled like a Galerkin coarse operator. This is done automatically when you use finite element methods.
Jason Lefley writes:
>> On Jun 26, 2017, at 7:52 PM, Matthew Knepley
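For context, the Galerkin coarse operator Jed refers to is the standard variational construction (a sketch in the usual notation; P is prolongation, and R = P^T is assumed for the symmetric case):

    A_H = P^\top A_h P, \qquad (A_h)_{ij} = \int_\Omega \nabla\phi_i \cdot \nabla\phi_j \, dx

In a finite element discretization the entries of A_h carry the volume element through the integral, so rediscretizing on the coarse mesh reproduces this scaling automatically; a pointwise finite difference operator does not, hence the manual scaling.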

Re: [petsc-users] Issue using multi-grid as a pre-conditioner with KSP for a Poisson problem

2017-07-03 Thread Matthew Knepley
On Mon, Jul 3, 2017 at 10:06 AM, Jason Lefley wrote:
> On Jun 26, 2017, at 7:52 PM, Matthew Knepley wrote:
> On Mon, Jun 26, 2017 at 8:37 PM, Jason Lefley wrote:
>> Okay, when you say a Poisson problem, I assumed

Re: [petsc-users] Issue using multi-grid as a pre-conditioner with KSP for a Poisson problem

2017-07-03 Thread Jason Lefley
> On Jun 26, 2017, at 7:52 PM, Matthew Knepley wrote:
> On Mon, Jun 26, 2017 at 8:37 PM, Jason Lefley wrote:
>> Okay, when you say a Poisson problem, I assumed you meant
>>
>> div grad phi = f
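A minimal petsc4py sketch of the kind of solve discussed in this thread, assuming the operator A for div grad phi = f has been assembled elsewhere (the variable names are illustrative, not from the original posts):

    from petsc4py import PETSc

    # CG Krylov solve with multigrid as the preconditioner; A is assumed
    # to be an assembled PETSc Mat for the discretized Poisson operator.
    ksp = PETSc.KSP().create(comm=PETSc.COMM_WORLD)
    ksp.setOperators(A)
    ksp.setType(PETSc.KSP.Type.CG)
    ksp.getPC().setType(PETSc.PC.Type.MG)
    ksp.setFromOptions()      # picks up -pc_mg_levels, -mg_levels_* etc.
    x, b = A.createVecs()     # b would be filled with the discretized f
    ksp.solve(b, x)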

Re: [petsc-users] Is OpenMP still available for PETSc?

2017-07-03 Thread Matthew Knepley
On Mon, Jul 3, 2017 at 9:23 AM, Damian Kaliszan wrote:
> Hi,
>> 1) You can call Bcast on PETSC_COMM_WORLD
> To be honest I can't find a Bcast method in petsc4py.PETSc.Comm (I'm using petsc4py)
>> 2) If you are using WORLD, the number of iterates will be the

Re: [petsc-users] Is OpenMP still available for PETSc?

2017-07-03 Thread Lawrence Mitchell
> On 3 Jul 2017, at 15:23, Damian Kaliszan wrote:
> Hi,
>>> 1) You can call Bcast on PETSC_COMM_WORLD
> To be honest I can't find a Bcast method in petsc4py.PETSc.Comm (I'm using petsc4py)
Use PETSc.COMM_WORLD.tompi4py() to get an mpi4py communicator that
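In code, Lawrence's suggestion looks roughly like this (a sketch; the broadcast payload is hypothetical, and lowercase bcast handles pickled Python objects while uppercase Bcast is the buffer variant):

    from petsc4py import PETSc

    # tompi4py() converts the PETSc communicator into an mpi4py one,
    # which does provide bcast/Bcast.
    comm = PETSc.COMM_WORLD.tompi4py()
    info = {"matrix_file": "A.dat"} if comm.rank == 0 else None  # hypothetical payload
    info = comm.bcast(info, root=0)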

Re: [petsc-users] Is OpenMP still available for PETSc?

2017-07-03 Thread Damian Kaliszan
Hi,
>> 1) You can call Bcast on PETSC_COMM_WORLD
To be honest I can't find a Bcast method in petsc4py.PETSc.Comm (I'm using petsc4py)
>> 2) If you are using WORLD, the number of iterates will be the same on each process since iteration is collective.
Yes, this is how it should be.

Re: [petsc-users] Is OpenMP still available for PETSc?

2017-07-03 Thread Damian Kaliszan
Hi,
In a message dated 3 July 2017 (15:50:17) it was written:
On Mon, Jul 3, 2017 at 8:47 AM, Damian Kaliszan wrote:
Hi, I use MPI.COMM_WORLD as the communicator because I use bcast to send around some other info

Re: [petsc-users] Is OpenMP still available for PETSc?

2017-07-03 Thread Matthew Knepley
On Mon, Jul 3, 2017 at 8:47 AM, Damian Kaliszan wrote:
> Hi,
> I use MPI.COMM_WORLD as the communicator because I use bcast to send around some other info to all processes before reading in the matrices and running the solver.
> (I can't call bcast on PETSc.COMM_WORLD).
1)

Re: [petsc-users] Is OpenMP still available for PETSc?

2017-07-03 Thread Damian Kaliszan
Hi, I use MPI.COMM_WORLD as the communicator because I use bcast to send around some other info to all processes before reading in the matrices and running the solver. (I can't call bcast on PETSc.COMM_WORLD). Best, Damian
In a message

Re: [petsc-users] Is OpenMP still available for PETSc?

2017-07-03 Thread Matthew Knepley
On Mon, Jul 3, 2017 at 4:18 AM, Damian Kaliszan wrote:
> Hi,
> OK. So this is clear now.
> Maybe you will be able to help answer the question I raised some time ago: why, when submitting a slurm job and setting a different number of

Re: [petsc-users] Is OpenMP still available for PETSc?

2017-07-03 Thread Damian Kaliszan
Hi, OK. So this is clear now. Maybe you will be able to help answer the question I raised some time ago: why, when submitting a slurm job and setting a different number of --cpus-per-task, the execution times of the KSP solver (gmres type) vary when the max number of
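A quick way to see what such a job actually runs with (a sketch; SLURM_NTASKS and SLURM_CPUS_PER_TASK are standard slurm environment variables, and a pure-MPI PETSc run gets its parallelism from --ntasks, while --cpus-per-task only reserves cores for threads):

    import os
    from petsc4py import PETSc

    # Compare the slurm allocation with the MPI ranks PETSc actually uses.
    print("slurm ntasks:       ", os.environ.get("SLURM_NTASKS"))
    print("slurm cpus-per-task:", os.environ.get("SLURM_CPUS_PER_TASK"))
    print("PETSc MPI ranks:    ", PETSc.COMM_WORLD.getSize())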