Re: [petsc-users] Is OpenMP still available for PETSc?

2017-07-06 Thread Franck Houssen
aliszan" <dam...@man.poznan.pl> > À: "Franck Houssen" <franck.hous...@inria.fr> > Cc: petsc-users@mcs.anl.gov > Envoyé: Jeudi 6 Juillet 2017 09:56:58 > Objet: Re: [petsc-users] Is OpenMP still available for PETSc? > > Dear Franck, > > Thank you fo

Re: [petsc-users] Is OpenMP still available for PETSc?

2017-07-06 Thread Damian Kaliszan
<dam...@man.poznan.pl> >> To: "Franck Houssen" <franck.hous...@inria.fr>, "Barry Smith" >> <bsm...@mcs.anl.gov> >> Cc: petsc-users@mcs.anl.gov >> Sent: Wednesday, 5 July 2017 10:50:39 >> Subject: Re: [petsc-users] Is OpenMP sti

Re: [petsc-users] Is OpenMP still available for PETSc?

2017-07-05 Thread Franck Houssen
n" <franck.hous...@inria.fr>, "Barry Smith" > <bsm...@mcs.anl.gov> > Cc: petsc-users@mcs.anl.gov > Envoyé: Mercredi 5 Juillet 2017 10:50:39 > Objet: Re: [petsc-users] Is OpenMP still available for PETSc? > > Thank you:) > > Few notes on what you

Re: [petsc-users] Is OpenMP still available for PETSc?

2017-07-05 Thread Damian Kaliszan
Hope this may help!... > Franck > Note: activating/deactivating hyper-threading (if available - > generally in BIOS when possible) may also change performance. > - Original Message - >> From: "Barry Smith" <bsm...@mcs.anl.gov> >> To: "Damian Kal

Re: [petsc-users] Is OpenMP still available for PETSc?

2017-07-05 Thread Franck Houssen
anl.gov> > Sent: Tuesday, 4 July 2017 19:04:36 > Subject: Re: [petsc-users] Is OpenMP still available for PETSc? > > >You may need to ask a slurm expert. I have no idea what cpus-per-task > means > > > > On Jul 4, 2017, at 4:16 AM, Damian Kaliszan <dam...@

Re: [petsc-users] Is OpenMP still available for PETSc?

2017-07-04 Thread Barry Smith
You may need to ask a slurm expert. I have no idea what cpus-per-task means > On Jul 4, 2017, at 4:16 AM, Damian Kaliszan wrote: > > Hi, > > Yes, this is exactly what I meant. > Please find attached output for 2 input datasets and for 2 different slurm > configs each:

Re: [petsc-users] Is OpenMP still available for PETSc?

2017-07-03 Thread Matthew Knepley
On Mon, Jul 3, 2017 at 9:23 AM, Damian Kaliszan wrote: > Hi, > > > >> 1) You can call Bcast on PETSC_COMM_WORLD > > To be honest I can't find Bcast method in petsc4py.PETSc.Comm (I'm > using petsc4py) > > >> 2) If you are using WORLD, the number of iterates will be the

Re: [petsc-users] Is OpenMP still available for PETSc?

2017-07-03 Thread Lawrence Mitchell
> On 3 Jul 2017, at 15:23, Damian Kaliszan wrote: > > Hi, > > >>> 1) You can call Bcast on PETSC_COMM_WORLD > > To be honest I can't find Bcast method in petsc4py.PETSc.Comm (I'm > using petsc4py) Use PETSc.COMM_WORLD.tompi4py() to get an mpi4py communicator that
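A minimal petsc4py sketch of the tompi4py() suggestion above (the broadcast payload and variable names are illustrative assumptions, not taken from the thread):

    from petsc4py import PETSc

    # petsc4py's Comm does not expose Bcast directly, but tompi4py() returns
    # the matching mpi4py communicator, which does.
    comm = PETSc.COMM_WORLD.tompi4py()

    # Broadcast an arbitrary Python object from rank 0 to all ranks.
    meta = {"matrix_file": "A.dat"} if comm.rank == 0 else None
    meta = comm.bcast(meta, root=0)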

Re: [petsc-users] Is OpenMP still available for PETSc?

2017-07-03 Thread Damian Kaliszan
Hi, >> 1) You can call Bcast on PETSC_COMM_WORLD To be honest I can't find Bcast method in petsc4py.PETSc.Comm (I'm using petsc4py) >> 2) If you are using WORLD, the number of iterates will be the same on each >> process since iteration is collective. Yes, this is how it should be.

Re: [petsc-users] Is OpenMP still available for PETSc?

2017-07-03 Thread Damian Kaliszan
Hi, In a message dated 3 July 2017 (15:50:17) it was written: On Mon, Jul 3, 2017 at 8:47 AM, Damian Kaliszan <dam...@man.poznan.pl> wrote: Hi, I use MPI.COMM_WORLD as communicator because I use bcast to send around some othe

Re: [petsc-users] Is OpenMP still available for PETSc?

2017-07-03 Thread Matthew Knepley
On Mon, Jul 3, 2017 at 8:47 AM, Damian Kaliszan wrote: > Hi, > I use MPI.COMM_WORLD as communicator because I use bcast to send around > some other info to all processes before reading in the matrices and running > solver. > (I can't call bcast on PETSc.COMM_WORLD). > 1)

Re: [petsc-users] Is OpenMP still available for PETSc?

2017-07-03 Thread Damian Kaliszan
Hi, I use MPI.COMM_WORLD as the communicator because I use bcast to send around some other info to all processes before reading in the matrices and running the solver. (I can't call bcast on PETSc.COMM_WORLD.) Best, Damian In a message
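A hedged sketch of the workflow described above: broadcast auxiliary info with mpi4py, then load the matrix and run the solver with petsc4py. The file name, binary viewer usage, and the GMRES choice (mentioned later in the thread) are assumptions for illustration:

    from mpi4py import MPI
    from petsc4py import PETSc

    comm = MPI.COMM_WORLD

    # Broadcast run metadata before creating any PETSc objects; by default
    # PETSc.COMM_WORLD spans the same processes as MPI.COMM_WORLD.
    info = {"matrix_file": "A.petsc"} if comm.rank == 0 else None
    info = comm.bcast(info, root=0)

    # Load the matrix from a PETSc binary file (illustrative).
    viewer = PETSc.Viewer().createBinary(info["matrix_file"], "r")
    A = PETSc.Mat().load(viewer)

    # Build an illustrative right-hand side and solve with GMRES.
    b = A.createVecRight()
    b.set(1.0)
    x = b.duplicate()
    ksp = PETSc.KSP().create()
    ksp.setOperators(A)
    ksp.setType("gmres")
    ksp.setFromOptions()
    ksp.solve(b, x)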

Re: [petsc-users] Is OpenMP still available for PETSc?

2017-07-03 Thread Matthew Knepley
On Mon, Jul 3, 2017 at 4:18 AM, Damian Kaliszan wrote: > Hi, > > OK. So this is clear now. > Maybe you will be able to help answer the question I raised > some time ago: why, when > submitting a slurm job and setting a different number of >

Re: [petsc-users] Is OpenMP still available for PETSc?

2017-07-03 Thread Damian Kaliszan
Hi, OK. So this is clear now. Maybe you will be able to help answer the question I raised some time ago: why, when submitting a slurm job and setting different numbers of --cpus-per-task, do the execution times of the ksp solver (gmres type) vary when the max number of
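Not something suggested in the thread, but a hedged diagnostic sketch for runs like these: logging the MPI size and the thread-related environment variables each job actually received makes it easier to correlate the KSP timings with the slurm settings (OMP_NUM_THREADS and SLURM_CPUS_PER_TASK are standard environment variables; their relevance here is an assumption):

    import os
    from petsc4py import PETSc

    comm = PETSc.COMM_WORLD
    if comm.rank == 0:
        # Print once per run: how many MPI ranks and how many threads/cores
        # the scheduler and the OpenMP runtime were told to use.
        print("MPI ranks:", comm.size,
              "| OMP_NUM_THREADS:", os.environ.get("OMP_NUM_THREADS"),
              "| SLURM_CPUS_PER_TASK:", os.environ.get("SLURM_CPUS_PER_TASK"))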

Re: [petsc-users] Is OpenMP still available for PETSc?

2017-07-01 Thread Barry Smith
> On Jul 1, 2017, at 3:15 PM, Damian Kaliszan wrote: > > Hi, > So... --with-openmp=0/1 configuration option seems to be useless?... It merely enables the compiler flags for compiling with OpenMP; for example if your code has OpenMP in it. > In one of my previous

Re: [petsc-users] Is OpenMP still available for PETSc?

2017-07-01 Thread Damian Kaliszan
Hi, So... the --with-openmp=0/1 configuration option seems to be useless?... In one of my previous messages I wrote that, when OpenMP is enabled and OMP_NUM_THREADS is set, I notice different timings for the ksp solver. Strange? Best, Damian On 1 Jul 2017, at 00:50, Barry Smith

Re: [petsc-users] Is OpenMP still available for PETSc?

2017-06-30 Thread Danyang Su
Hi Barry, Thanks for the quick response. What I want to test is whether OpenMP has any benefit when the total degrees of freedom per processor drop below 5k. When using pure MPI my code shows good speedup if the total degrees of freedom per processor are above 10k. But below this value, the

Re: [petsc-users] Is OpenMP still available for PETSc?

2017-06-30 Thread Barry Smith
The current version of PETSc does not use OpenMP; you are free to use OpenMP in your own portions of the code, of course. If you want PETSc to use OpenMP, you have to use the old, unsupported version of PETSc. We never found any benefit to using OpenMP. Barry > On Jun 30, 2017, at 5:40 PM,