Performance-wise, I would suggest using "-xAVX" instead of "-axcore-avx2". In our experience running PETSc on a variety of Xeon processors (including KNL), AVX2 yields performance comparable to, and sometimes worse than, AVX. But if your machine supports AVX-512, it is definitely beneficial to use AVX-512.
Hong (Mr.)

> On Apr 5, 2018, at 10:03 AM, Randall Mackie <rlmackie...@gmail.com> wrote:
>
> Dear PETSc users,
>
> I’m curious if anyone else experiences problems using DMDAVecGetArrayF90 in
> conjunction with Intel compilers?
> We have had many problems (typically 11 SEGV segmentation violations) when
> PETSc is compiled in optimize mode (with various combinations of options).
> These same codes run valgrind clean with gfortran, so I assume this is an
> Intel bug, but before we submit a bug report I wanted to see if anyone else
> had similar experiences?
> We have basically gone back and replaced our calls to DMDAVecGetArrayF90 with
> calls to VecGetArrayF90 and pass those pointers into a “local” subroutine
> that works fine.
>
> In case anyone is curious, the attached test code shows this behavior when
> PETSc is compiled with the following options:
>
> ./configure \
>   --with-clean=1 \
>   --with-debugging=0 \
>   --with-fortran=1 \
>   --with-64-bit-indices \
>   --download-mpich=../mpich-3.3a2.tar.gz \
>   --with-blas-lapack-dir=/opt/intel/mkl \
>   --with-cc=icc \
>   --with-fc=ifort \
>   --with-cxx=icc \
>   --FOPTFLAGS='-O2 -xSSSE3 -axcore-avx2' \
>   --COPTFLAGS='-O2 -xSSSE3 -axcore-avx2' \
>   --CXXOPTFLAGS='-O2 -xSSSE3 -axcore-avx2' \
>
> Thanks,
> Randy M.
>
> <cmd_test><makefile><test.F90>
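For anyone hitting the same issue, the workaround Randy describes (VecGetArrayF90 plus a "local" subroutine instead of DMDAVecGetArrayF90) can be sketched roughly as below. This is a minimal, untested illustration: the routine names (do_work, local_work) are made up for this example, and it assumes a 3D DMDA with a local (ghosted) vector, re-declaring the flat pointer with the bounds returned by DMDAGetGhostCorners.

#include <petsc/finclude/petscdmda.h>

subroutine do_work(da, xlocal, ierr)
  use petscdmda
  implicit none
  DM             :: da
  Vec            :: xlocal
  PetscErrorCode :: ierr
  PetscScalar, pointer :: flat(:)
  PetscInt :: xs, ys, zs, xm, ym, zm

  ! Ghosted corners of this rank's portion of the DMDA
  call DMDAGetGhostCorners(da, xs, ys, zs, xm, ym, zm, ierr)
  ! Flat 1D pointer instead of the 3D pointer from DMDAVecGetArrayF90
  call VecGetArrayF90(xlocal, flat, ierr)
  ! The "local" subroutine re-shapes it via its dummy-argument bounds
  call local_work(flat, xs, ys, zs, xm, ym, zm)
  call VecRestoreArrayF90(xlocal, flat, ierr)
end subroutine do_work

subroutine local_work(a, xs, ys, zs, xm, ym, zm)
  use petscsys
  implicit none
  PetscInt    :: xs, ys, zs, xm, ym, zm
  ! Declared with global (ghosted) index ranges, so a(i,j,k) can be
  ! addressed with the same natural indices DMDAVecGetArrayF90 would give
  PetscScalar :: a(xs:xs+xm-1, ys:ys+ym-1, zs:zs+zm-1)

  a(xs, ys, zs) = 1.0
end subroutine local_work

Since local_work only sees an ordinary explicit-shape array, the Intel compiler's handling of the F90 pointer remapping in DMDAVecGetArrayF90 is taken out of the picture entirely.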