On Wed, 27 Jun 2018, Mark Adams wrote:
> On Wed, Jun 27, 2018 at 12:40 PM Satish Balay wrote:
>
> > Configure Options: --configModules=PETSc.Configure
> > --optionsModule=config.compilerOptions --with-cc=cc --with-cxx=CC
> > --with-fc=ftn COPTFLAGS=" -g -fp-model fast -xMIC-AVX512 -qopt-report=5
> > -hcpu=mic-knl -qopenmp-simd" CXXOPTFLAGS="-g -fp-model fast -xMIC-AVX512
> > -qopt-report=5 -hcpu…
I was able to build hypre on theta (KNL) by adding the following two lines
of code to hypre.py:

    args.append('--host')
    args.append('--host-alias')

It might help.
Fande Kong
On Wed, Jun 27, 2018 at 10:39 AM, Satish Balay wrote:
> >
> Configure Options: --configModules=PETSc.Configure
>
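For reference, the hypre configure arguments in a PETSc build are assembled in the BuildSystem package file hypre.py. The sketch below shows roughly where Fande's two appended flags would land; the class shape and the placeholder flag are simplified stand-ins for the real config.package subclass, not a verbatim copy of that file:

```python
# Simplified stand-in for PETSc's BuildSystem hypre.py package file.
# Only the two args.append(...) lines are the actual change from the
# thread; everything else here is an illustrative assumption.
class Configure:
    def formGNUConfigureArgs(self):
        args = []
        args.append('--with-usr-incl=...')  # placeholder for the real flags
        # Fande's addition, needed when cross-compiling on theta (KNL),
        # where hypre's configure cannot guess the host triple:
        args.append('--host')
        args.append('--host-alias')
        return args

print(Configure().formGNUConfigureArgs()[-2:])  # ['--host', '--host-alias']
```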
Configure Options: --configModules=PETSc.Configure
--optionsModule=config.compilerOptions --with-cc=cc --with-cxx=CC --with-fc=ftn
COPTFLAGS=" -g -fp-model fast -xMIC-AVX512 -qopt-report=5 -hcpu=mic-knl
-qopenmp-simd" CXXOPTFLAGS="-g -fp-model fast -xMIC-AVX512 -qopt-report=5
-hcpu…
I did not set OMP_NUM_THREADS in my .bashrc or job script, and the job ran out
of time.
If I exported OMP_NUM_THREADS=1 in the job script on Cori, the job ran very
slowly, i.e., it finished in 200 seconds, compared to 1 second without
--with-openmp.
--Junchao Zhang
On Tue, Jun 26, 2018 at 12:05 PM, Balay, Satish …
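A note on why leaving OMP_NUM_THREADS unset can be so costly: most OpenMP runtimes then default to one thread per hardware thread, so every MPI rank tries to use the whole node. A hedged sketch of that behaviour (the 272 hardware threads of a 68-core, 4-way-hyperthreaded KNL node are an illustrative assumption, not a figure from the thread):

```python
import os

def effective_omp_threads(hw_threads_per_node):
    """Thread count one MPI rank will typically spawn: OMP_NUM_THREADS
    if set, else (for most OpenMP runtimes) the node's full
    hardware-thread count."""
    val = os.environ.get("OMP_NUM_THREADS")
    return int(val) if val is not None else hw_threads_per_node

os.environ.pop("OMP_NUM_THREADS", None)
print(effective_omp_threads(272))   # 272: every rank grabs the whole node
os.environ["OMP_NUM_THREADS"] = "1"
print(effective_omp_threads(272))   # 1: the setting Junchao used on Cori
```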
Mark Adams writes:
> On Tue, Jun 26, 2018 at 1:06 PM Satish Balay wrote:
>
>> I wonder if these jobs are scheduled in such a way so that they are not
>> oversubscribed.
>>
>> i.e number_mpi_jobs_per_node * number_of_openmp_threads_per_node <=
>> no_of_cores_per_node
>
>
> You are right(ish). This fixed the problem: …
I hate OpenMP with a passion
On Tue, Jun 26, 2018 at 1:06 PM Satish Balay wrote:
> I wonder if these jobs are scheduled in such a way so that they are not
> oversubscribed.
>
> i.e number_mpi_jobs_per_node * number_of_openmp_threads_per_node <=
> no_of_cores_per_node
You are right(ish). This fixed the problem:
export OMP_…
I wonder if these jobs are scheduled in such a way that they are not
oversubscribed, i.e.:
number_mpi_jobs_per_node * number_of_openmp_threads_per_node <=
no_of_cores_per_node
Satish
On Tue, 26 Jun 2018, Mark Adams wrote:
> Interesting, I am seeing the same thing with ksp/ex56 (elasticity) w…
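Satish's inequality can be written down as a quick sanity check; the 68 physical cores per Cori KNL node used below are an assumption for illustration, not a figure from the thread:

```python
def oversubscribed(ranks_per_node, threads_per_rank, cores_per_node):
    """Satish's condition, inverted: the node is oversubscribed when
    MPI ranks times OpenMP threads exceeds the physical cores."""
    return ranks_per_node * threads_per_rank > cores_per_node

# Illustrative numbers for a 68-core KNL node:
print(oversubscribed(8, 1, 68))     # False: 8 threads total fit easily
print(oversubscribed(8, 272, 68))   # True: OMP_NUM_THREADS left unset
```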
>
> Download the following packages to /home/balay/tmp
>
> hypre ['git://https://github.com/LLNL/hypre',
> 'https://github.com/LLNL/hypre/archive/v2.14.0.tar.gz']
>
Satish, dumb question, but I'm not sure how to get this. I did a 'git clone
https://github.com/LLNL/hypre' but I don't see an arc…
BTW, this is from that super slow --with-openmp run on 8 procs. Barrier looks
sad:
Average time to get PetscTime(): 5.00679e-07
Average time for MPI_Barrier(): 0.1064
Average time …
On Tue, 26 Jun 2018, Mark Adams wrote:
> On Tue, Jun 26, 2018 at 12:29 AM Balay, Satish wrote:
>
> > Perhaps petsc built with openmp is triggering the problem.
> >
> >
> >
> > You might want to install hypre separately with openmp. And then install
> > petsc with this prebuilt hypre and see if t…
Interesting, I am seeing the same thing with ksp/ex56 (elasticity) with a
30^3 grid on each process. One process runs fine (1.5 sec) but 8 processes
with 30^3 on each process took 156 sec.
And, PETSc's log_view is running extremely slowly. I have the total time
(156) but each event is taking like a minute.
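Since each process keeps the same 30^3 subdomain, this is a weak-scaling run, so the 8-process time would ideally stay near the 1-process time; the reported figures instead work out to roughly a hundredfold slowdown:

```python
t_1proc = 1.5     # seconds, 1 process, 30^3 grid per process
t_8proc = 156.0   # seconds, 8 processes, same per-process grid
print(round(t_8proc / t_1proc))   # 104, vs ~1 for ideal weak scaling
```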
On Tue, Jun 26, 2018 at 8:26 AM, Mark Adams wrote:
>
> On Tue, Jun 26, 2018 at 12:19 AM Junchao Zhang wrote:
>
>> Mark,
>> Your email reminded me my recent experiments. My PETSc was configured
>> --with-openmp=1.
>> With hypre, my job ran out of time. That was on an Argonne Xeon cluster.
or KNL?
Thanks,
On Tue, Jun 26, 2018 at 12:19 AM Junchao Zhang wrote:
> Mark,
> Your email reminded me my recent experiments. My PETSc was configured
> --with-openmp=1.
> With hypre, my job ran out of time. That was on an Argonne Xeon cluster.
>
Interesting. I tested on Cori's Haswell nodes and it looked fine …
Sent: Monday, June 25, 2018 8:20:58 PM
To: Mark Adams; Smith, Barry F.
Cc: For users of the development version of PETSc; t...@lbl.gov Trebotich
Subject: Re: [petsc-dev] problem with hypre with '--with-openmp=1'
Hi Mark,
I don’t have an account on KNL and have not tested hypre there with OpenMP.
Mark,
Your email reminded me of my recent experiments. My PETSc was
configured --with-openmp=1.
With hypre, my job ran out of time. That was on an Argonne Xeon cluster.
I repeated the experiments on Cori's Haswell nodes. With --with-openmp=1:
"Linear solve converged due to CONVERGED_RTOL iterations 5"
On Fri, Jun 22, 2018 at 4:39 PM Smith, Barry F. wrote:
>
> > On Jun 22, 2018, at 3:33 PM, Mark Adams wrote:
> >
> > We are using KNL (Cori) and hypre is not working when configured with
> > '--with-openmp=1', even when not using threads (as far as I can tell, I
> > never use threads).
>
> It does seem to run correctly without the --with-openmp option?
…
> On Jun 22, 2018, at 3:33 PM, Mark Adams wrote:
>
> We are using KNL (Cori) and hypre is not working when configured with
> '--with-openmp=1', even when not using threads (as far as I can tell, I never
> use threads).
It does seem to run correctly without the --with-openmp option?
We are using KNL (Cori) and hypre is not working when configured
with '--with-openmp=1', even when not using threads (as far as I can tell,
I never use threads).
Hypre is not converging; for instance, with an optimized build:
srun -n 1 ./ex56 -pc_type hypre -ksp_monitor -ksp_converged_reason
-ksp…