Re: [petsc-dev] [petsc-maint] Incident INC0122538 MKL on Cori/KNL

2018-07-03 Thread Jeff Hammond
s would > just be valgrind not supporting the instruction set. You could > downgrade the target arch via compiler flags or see if the latest (dev > version?) Valgrind supports the instruction. > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] GAMG error with MKL

2018-07-03 Thread Jeff Hammond
raph_AGG() line 832 in >>> /global/u2/m/madams/petsc_install/petsc/src/ksp/pc/impls/gamg/agg.c >>> [0]PETSC ERROR: #4 PCSetUp_GAMG() line 517 in >>> /global/u2/m/madams/petsc_install/petsc/src/ksp/pc/impls/gamg/gamg.c >>> [0]PETSC ERROR: #5 PCSetUp() line 932 in >>> /global/u2/m/madams/petsc_install/petsc/src/ksp/pc/interface/precon.c >>> [0]PETSC ERROR: #6 KSPSetUp() line 381 in >>> /global/u2/m/madams/petsc_install/petsc/src/ksp/ksp/interface/itfunc.c >>> [0]PETSC ERROR: #7 KSPSolve() line 612 in >>> /global/u2/m/madams/petsc_install/petsc/src/ksp/ksp/interface/itfunc.c >>> [0]PETSC ERROR: #8 SNESSolve_NEWTONLS() line 224 in >>> /global/u2/m/madams/petsc_install/petsc/src/snes/impls/ls/ls.c >>> [0]PETSC ERROR: #9 SNESSolve() line 4350 in >>> /global/u2/m/madams/petsc_install/petsc/src/snes/interface/snes.c >>> [0]PETSC ERROR: #10 main() line 161 in >>> /global/homes/m/madams/petsc_install/petsc/src/snes/examples/tutorials/ex19.c >>> [0]PETSC ERROR: PETSc Option Table entries: >>> [0]PETSC ERROR: -ksp_monitor_short >>> [0]PETSC ERROR: -mat_type aijmkl >>> [0]PETSC ERROR: -options_left >>> [0]PETSC ERROR: -pc_type gamg >>> [0]PETSC ERROR: -snes_monitor_short >>> >>> >> -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] GAMG error with MKL

2018-07-04 Thread Jeff Hammond
On Wed, Jul 4, 2018 at 6:31 AM Matthew Knepley wrote: > On Tue, Jul 3, 2018 at 10:32 PM Jeff Hammond > wrote: > >> >> >> On Tue, Jul 3, 2018 at 4:35 PM Mark Adams wrote: >> >>> On Tue, Jul 3, 2018 at 1:00 PM Richard Tran Mills >>> wrote: >

Re: [petsc-dev] GAMG error with MKL

2018-07-09 Thread Jeff Hammond
e MPI+OpenMP application code will be appreciated, though I don't know if > there are any good solutions. > > > > --Richard > > > > On Wed, Jul 4, 2018 at 11:38 PM, Smith, Barry F. > wrote: > > > >Jed, > > > > You could use your

Re: [petsc-dev] GAMG error with MKL

2018-07-09 Thread Jeff Hammond
On Mon, Jul 9, 2018 at 6:38 AM, Matthew Knepley wrote: > On Mon, Jul 9, 2018 at 9:34 AM Jeff Hammond > wrote: > >> On Fri, Jul 6, 2018 at 4:28 PM, Smith, Barry F. >> wrote: >> >>> >>> Richard, >>> >>> The problem is that

Re: [petsc-dev] GAMG error with MKL

2018-07-09 Thread Jeff Hammond
r model. >>> >> >> If this implies that BoxLib will use omp-parallel and then use explicit >> threading in a manner similar to MPI (omp_get_num_threads=MPI_Comm_size >> and omp_get_thread_num=MPI_Comm_rank), then this is the Right Way to >> write Op
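
A minimal sketch of the SPMD-style OpenMP pattern endorsed above, in which one parallel region spans the whole computation and each thread owns a block of work the way an MPI rank owns a subdomain. The decomposition and problem size are illustrative, not BoxLib's actual code:

    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
      /* One long-lived parallel region, not a fork/join around every loop. */
      #pragma omp parallel
      {
        int nt = omp_get_num_threads();   /* analogous to MPI_Comm_size */
        int id = omp_get_thread_num();    /* analogous to MPI_Comm_rank */
        int n  = 1000;                    /* illustrative problem size  */
        int lo = (n * id) / nt;
        int hi = (n * (id + 1)) / nt;
        printf("thread %d of %d owns rows [%d,%d)\n", id, nt, lo, hi);
      }
      return 0;
    }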

Re: [petsc-dev] GAMG error with MKL

2018-07-09 Thread Jeff Hammond
On Mon, Jul 9, 2018 at 11:36 AM, Smith, Barry F. wrote: > > > > On Jul 9, 2018, at 8:33 AM, Jeff Hammond wrote: > > > > > > > > On Fri, Jul 6, 2018 at 4:28 PM, Smith, Barry F. > wrote: > > > > Richard, > > > > The

Re: [petsc-dev] GAMG error with MKL

2018-07-10 Thread Jeff Hammond
On Tue, Jul 10, 2018 at 9:33 AM, Jed Brown wrote: > Mark Adams writes: > > > On Mon, Jul 9, 2018 at 7:19 PM Jeff Hammond > wrote: > > > >> > >> > >> On Mon, Jul 9, 2018 at 7:38 AM, Mark Adams wrote: > >> > >>

Re: [petsc-dev] GAMG error with MKL

2018-07-10 Thread Jeff Hammond
On Tue, Jul 10, 2018 at 11:27 AM, Richard Tran Mills wrote: > On Mon, Jul 9, 2018 at 10:04 AM, Jed Brown wrote: > >> Jeff Hammond writes: >> >> > This is the textbook Wrong Way to write OpenMP and the reason that the >> > thread-scalability of DOE appl

Re: [petsc-dev] How to enforce private data/methods in PETSc?

2018-08-11 Thread Jeff Hammond
example, is an array assumed > to be > >>>>>> sorted (that is hard to know). With C++, one can use private to > minimize > >>>>>> data exposure. > >>>>>> > >>>>> > >>>>> This just has to be coding disciplin

Re: [petsc-dev] PETSc - MPI3 functionality

2018-09-10 Thread Jeff Hammond
but let's discuss the > communication pattern first. You said you are working with a FEM model, > but also mention "igatherv". Is this for some sequential mesh > processing task or is it related to the solver? There isn't a > neighborhood igatherv and MPI_Igatherv isn't a pattern that should ever > be needed in a FEM solver. > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] First call to cudaMalloc or cudaFree is very slow on summit

2020-02-08 Thread Jeff Hammond
0200023a6) >> >>>libevent_pthreads-2.1.so.6 => >> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/pgi-19.4/spectrum-mpi-10.3.0.1-20190611-4ymaahbai7ehhw4rves5jjiwon2laz3a/lib/libevent_pthreads-2.1.so.6 >> (0x200023ae) >> >>>libopen-rte.so.3 => >> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/pgi-19.4/spectrum-mpi-10.3.0.1-20190611-4ymaahbai7ehhw4rves5jjiwon2laz3a/lib/libopen-rte.so.3 >> (0x200023b1) >> >>>libopen-pal.so.3 => >> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/pgi-19.4/spectrum-mpi-10.3.0.1-20190611-4ymaahbai7ehhw4rves5jjiwon2laz3a/lib/libopen-pal.so.3 >> (0x200023c2) >> >>>libXau.so.6 => /usr/lib64/libXau.so.6 (0x200023d1) >> >>> >> >>> >> >>>> On Feb 7, 2020, at 2:31 PM, Smith, Barry F. >> wrote: >> >>>> >> >>>> >> >>>> ldd -o on the executable of both linkings of your code. >> >>>> >> >>>> My guess is that without PETSc it is linking the static version of >> the needed libraries and with PETSc the shared. And, in typical fashion, >> the shared libraries are off on some super slow file system so take a long >> time to be loaded and linked in on demand. >> >>>> >> >>>> Still a performance bug in Summit. >> >>>> >> >>>> Barry >> >>>> >> >>>> >> >>>>> On Feb 7, 2020, at 12:23 PM, Zhang, Hong via petsc-dev < >> petsc-dev@mcs.anl.gov> wrote: >> >>>>> >> >>>>> Hi all, >> >>>>> >> >>>>> Previously I have noticed that the first call to a CUDA function >> such as cudaMalloc and cudaFree in PETSc takes a long time (7.5 seconds) on >> summit. Then I prepared a simple example as attached to help OCLF reproduce >> the problem. It turned out that the problem was caused by PETSc. The >> 7.5-second overhead can be observed only when the PETSc lib is linked. If I >> do not link PETSc, it runs normally. Does anyone have any idea why this >> happens and how to fix it? >> >>>>> >> >>>>> Hong (Mr.) >> >>>>> >> >>>>> bash-4.2$ cat ex_simple.c >> >>>>> #include >> >>>>> #include >> >>>>> #include >> >>>>> >> >>>>> int main(int argc,char **args) >> >>>>> { >> >>>>> clock_t start,s1,s2,s3; >> >>>>> double cputime; >> >>>>> double *init,tmp[100] = {0}; >> >>>>> >> >>>>> start = clock(); >> >>>>> cudaFree(0); >> >>>>> s1 = clock(); >> >>>>> cudaMalloc((void **)&init,100*sizeof(double)); >> >>>>> s2 = clock(); >> >>>>> cudaMemcpy(init,tmp,100*sizeof(double),cudaMemcpyHostToDevice); >> >>>>> s3 = clock(); >> >>>>> printf("free time =%lf malloc time =%lf copy time =%lf\n",((double) >> (s1 - start)) / CLOCKS_PER_SEC,((double) (s2 - s1)) / >> CLOCKS_PER_SEC,((double) (s3 - s2)) / CLOCKS_PER_SEC); >> >>>>> >> >>>>> return 0; >> >>>>> } >> >>>>> >> >>>>> >> >>>> >> >>> >> >> >> > >> >> -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/
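
The #include targets in the quoted reproducer were swallowed by the archive's HTML rendering (they appear as "#include >"). A cleaned-up reconstruction of the timing program follows; the three header names are assumptions, since they are lost above:

    #include <stdio.h>        /* assumed header */
    #include <time.h>         /* assumed header */
    #include <cuda_runtime.h> /* assumed header; cudaMalloc/cudaFree/cudaMemcpy */

    int main(int argc, char **args)
    {
      clock_t start, s1, s2, s3;
      double *init, tmp[100] = {0};

      start = clock();
      cudaFree(0);            /* first CUDA call: triggers context creation */
      s1 = clock();
      cudaMalloc((void **)&init, 100 * sizeof(double));
      s2 = clock();
      cudaMemcpy(init, tmp, 100 * sizeof(double), cudaMemcpyHostToDevice);
      s3 = clock();
      printf("free time =%lf malloc time =%lf copy time =%lf\n",
             ((double)(s1 - start)) / CLOCKS_PER_SEC,
             ((double)(s2 - s1)) / CLOCKS_PER_SEC,
             ((double)(s3 - s2)) / CLOCKS_PER_SEC);
      return 0;
    }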

Re: [petsc-dev] Fwd: [Xlab] El Capitan CPU announcement

2020-03-10 Thread Jeff Hammond
>> >> AMD >> >> https://www.anandtech.com/show/15581/el-capitan-supercomputer-detailed-amd-cpus-gpus-2-exaflops >> >> -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] Request for comments: allow C99 internally

2020-03-10 Thread Jeff Hammond
> > Definitely +1 for variadic macros and for-loop declarations, but not VLAs. > > > -- > Lisandro Dalcin > > Research Scientist > Extreme Computing Research Center (ECRC) > King Abdullah University of Science and Technology (KAUST) > http://ecrc.kaust.edu.sa/ > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

[petsc-dev] why does the Ubuntu PETSc package install coarray Fortran?

2020-05-31 Thread Jeff Hammond
will be used. -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] why does the Ubuntu PETSc package install coarray Fortran?

2020-06-01 Thread Jeff Hammond
> octave-iso2mesh-0:1.9.1-5.fc32.x86_64 > > petsc-0:3.12.3-2.fc32.x86_64 > > > > I don't know what the equivalent in deb world is. > > > > So I would manually check on some of the petsc dependencies > > > > apt install libblas-dev > > > >

Re: [petsc-dev] Meaning of PETSc matrices with zero rows but nonzero columns?

2020-06-01 Thread Jeff Hammond
ive > > A 3x0, B 0x8, C 8x7 -> (ABC) is a valid 3x7 matrix (empty) > > If I understand you right, (AB) would be a 0x0 matrix, and it can no longer > be multiplied against C > > > Richard, what is the hardship in preserving the shape relations? > > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/
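
For reference, the shape bookkeeping defended here follows the usual rule (m x k)(k x n) -> (m x n) even when k = 0: with A of size 3x0 and B of size 0x8, AB is a 3x8 matrix whose every entry is an empty sum, (AB)_ij = sum over k in the empty set of A_ik * B_kj = 0. So AB is the 3x8 zero matrix, not 0x0, and (AB)C is a valid 3x7 product, agreeing with A(BC).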

[petsc-dev] MatCreateTranspose semantics

2020-06-01 Thread Jeff Hammond
---send entire error message to petsc-ma...@mcs.anl.gov-- What do I need to do to use a transpose view properly outside of M*V? Thanks, Jeff -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/
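
A minimal sketch of the transpose-view usage in question, in the error-checking style PETSc used at the time; A, x, and y are assumed to be created and assembled elsewhere:

    #include <petscmat.h>

    /* y = A^T x through a transpose view, without forming A^T explicitly */
    PetscErrorCode ApplyTranspose(Mat A, Vec x, Vec y)
    {
      Mat            At;
      PetscErrorCode ierr;

      ierr = MatCreateTranspose(A, &At);CHKERRQ(ierr); /* view: no data copied */
      ierr = MatMult(At, x, y);CHKERRQ(ierr);          /* dispatches to MatMultTranspose(A, x, y) */
      ierr = MatDestroy(&At);CHKERRQ(ierr);
      return 0;
    }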

Re: [petsc-dev] MatCreateTranspose semantics

2020-06-01 Thread Jeff Hammond
We've had limited demand for more > sophisticated operations on objects of type MATTRANSPOSE and there > probably isn't much benefit in fusing the parallel version here anyway. > > Jeff Hammond writes: > > > I am trying to understand how to use a transpos

Re: [petsc-dev] MatCreateTranspose semantics

2020-06-01 Thread Jeff Hammond
Jeff dumb. Make copy paste error. Sorry. Jeff On Mon, Jun 1, 2020 at 5:02 PM Jeff Hammond wrote: > I'm still unable to get a basic matrix transpose working. I may be > stupid, but I cannot figure out why the object is in the wrong state, no > matter what I do. > > T

Re: [petsc-dev] Googling PetscMatlabEngine leads to 3.7 version

2020-06-17 Thread Jeff Hammond
ish > > On Mon, 8 Jun 2020, Barry Smith wrote: > > > > >Googling PetscMatlabEngine leads to 3.7 version but > PetscMatlabEngineDestroy leads to 3.13. > > > > Any idea why some of the manual pages don't point to the latest > version? > > > >Barry > > > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] Question about MPICH device we use

2020-07-23 Thread Jeff Hammond
, Matthew Knepley wrote: > > > >> We default to ch3:sock. Scott MacLachlan just had a long thread on the > >> Firedrake list where it ended up that reconfiguring using ch3:nemesis > had a > >> 2x performance boost on his 16-core proc, and noticeable effect on the

Re: [petsc-dev] Question about MPICH device we use

2020-07-26 Thread Jeff Hammond
On Thu, Jul 23, 2020 at 9:35 PM Satish Balay wrote: > On Thu, 23 Jul 2020, Jeff Hammond wrote: > > > Open-MPI refuses to let users over subscribe without an extra flag to > > mpirun. > > Yes - and when using this flag - it lets the run through - but there is > still

Re: [petsc-dev] PetscMallocAlign for Cuda

2020-09-03 Thread Jeff Hammond
> >> >> >> > On Sep 2, 2020, at 8:58 AM, Mark Adams wrote: >> >> >> > >> >> >> > PETSc mallocs seem to boil down to PetscMallocAlign. There are switches >> in here but I don't see a Cuda malloc. This would see

Re: [petsc-dev] -with-kokkos-cuda-arch=AMPERE80 nonsense

2021-04-05 Thread Jeff Hammond
t > some of these names even mean. What does "PASCAL60" vs. "PASCAL61" even > mean? Do you know of where this is even documented? I can't really find > anything about it in the Kokkos documentation. The only thing I can really > find is an issue or two a

Re: [petsc-dev] -with-kokkos-cuda-arch=AMPERE80 nonsense

2021-04-05 Thread Jeff Hammond
okkos etc for THAT > system. > I will try to write something for you tomorrow. For NVIDIA hardware, the sole dependency will be nvcc. Jeff -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] NEVER put // into PETSc code. PETSc is C89, the only real C.

2016-06-22 Thread Jeff Hammond
, Barry Smith wrote: > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] NEVER put // into PETSc code. PETSc is C89, the only real C.

2016-06-23 Thread Jeff Hammond
, which has useful features like atomics in the language (which are useful not just for threaded programs, but also MPI+shared-memory, which I know the PETSc team likes). I too am a huge fan of VLAs, but VLA for non-POD types are "hard", which is why C++ doesn't sup
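
A minimal sketch of the C11 atomics feature referred to above, usable from plain C (and hence from processes sharing an MPI shared-memory window):

    #include <stdatomic.h>
    #include <stdio.h>

    int main(void)
    {
      atomic_int counter = 0;
      atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);
      printf("counter = %d\n", atomic_load(&counter));
      return 0;
    }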

Re: [petsc-dev] WTF

2016-06-29 Thread Jeff Hammond
Matt > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > > -- Norbert Wiener > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] WTF

2016-06-30 Thread Jeff Hammond
On Wed, Jun 29, 2016 at 8:18 PM, Barry Smith wrote: > > > On Jun 29, 2016, at 10:06 PM, Jeff Hammond > wrote: > > > > > > > > On Wednesday, June 29, 2016, Barry Smith wrote: > > > >Who are these people and why do they have this webpage?

Re: [petsc-dev] WTF

2016-06-30 Thread Jeff Hammond
> The default settings often don’t work well for 3D problems. Are 2D (1D?) problems really the common case for PDE solvers? Aren't interesting problems 3D? Shouldn't the defaults be set to optimize for 3D? Jeff -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] Soliciting suggestions for linear solver work under SciDAC 4 Institutes

2016-07-01 Thread Jeff Hammond
be in, just not for the very large scale. > > Rough ideas and pointers to publications are all useful. There is an > extremely short fuse so the sooner the better, > > Thanks > > Barry > > > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] Soliciting suggestions for linear solver work under SciDAC 4 Institutes

2016-07-07 Thread Jeff Hammond
hierarchy of fast memory (e.g. HBM), regular memory (e.g. DRAM), and slow (likely nonvolatile) memory on a node. Xeon Phi and some GPUs have caches, but it is unclear to me if it actually benefits software like PETSc to consider them. Figuring out how to run PETSc effectively on KNL should be generally useful... Jeff -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] Soliciting suggestions for linear solver work under SciDAC 4 Institutes

2016-07-07 Thread Jeff Hammond
On Thu, Jul 7, 2016 at 4:34 PM, Richard Mills wrote: > On Fri, Jul 1, 2016 at 4:13 PM, Jeff Hammond > wrote: > >> [...] >> >> Maybe I am just biased because I spend all of my time reading >> www.nextplatform.com, but I hear machine learning is becoming an >

Re: [petsc-dev] Soliciting suggestions for linear solver work under SciDAC 4 Institutes

2016-07-08 Thread Jeff Hammond
t your adjoint matrices, provided the NVM bandwidth is sufficient. *Disclaimer: All of these are academic comments. Do not use them to try to influence others or make any decisions. Do your own research and be skeptical of everything I derived from the Internet.* Jeff -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] Soliciting suggestions for linear solver work under SciDAC 4 Institutes

2016-07-08 Thread Jeff Hammond
On Fri, Jul 8, 2016 at 9:48 AM, Richard Mills wrote: > > > On Fri, Jul 8, 2016 at 9:40 AM, Jeff Hammond > wrote: > >> >>> > 1) How do we run at bandwidth peak on new architectures like Cori or >>> Aurora? >>> >>> Huh, there is a

Re: [petsc-dev] Soliciting suggestions for linear solver work under SciDAC 4 Institutes

2016-07-08 Thread Jeff Hammond
On Friday, July 8, 2016, Barry Smith wrote: > > > On Jul 8, 2016, at 12:17 PM, Jeff Hammond > wrote: > > > > > > > > On Fri, Jul 8, 2016 at 9:48 AM, Richard Mills > wrote: > > > > > > On Fri, Jul 8, 2016 at 9:40 AM, Jeff Hammond >

Re: [petsc-dev] Elemental for sparse, distributed-memory, direct, quad-precision linear solves?

2016-08-23 Thread Jeff Hammond
face. I would > > check first. > > > >Matt > > > > -- > > What most experimenters take for granted before they begin their > experiments > > is infinitely more interesting than any results to which their > experiments > > lead. > > -- Norbert Wiener > > > > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] Question regarding updating PETSc Fortran examples to embrace post F77 constructs

2016-08-26 Thread Jeff Hammond
If PETSc has symbols greater than 6 characters, it has never been Fortran 77 compliant anyways, so it's a bit strange to ask permission to break it in another way in the examples. Sorry for being a pedant but we've had this debate in the MPI Forum and it is false to conflate punchcard Fortran

Re: [petsc-dev] Fwd: hbw_malloc on KNL

2016-08-31 Thread Jeff Hammond
just absurd. I am wondering if you have any clue > on why this happens and how to fix it. FYI, I attached the driver for the > PETSc example. > > Thanks, > Hong > > > > > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] fortran literals

2016-09-01 Thread Jeff Hammond
https://gcc.gnu.org/onlinedocs/gfortran/ISO_005fFORTRAN_005fENV.html has real{32,64,128} that does a nice job for this situation. Jeff Sent from my iPhone > On Sep 1, 2016, at 4:15 PM, Blaise A Bourdin wrote: > > Hi, > > If I recall correctly, fortran does not mandate that "selected_real_kin

Re: [petsc-dev] fortran literals

2016-09-01 Thread Jeff Hammond
1, 2016, Blaise A Bourdin wrote: > Neat. > I was not aware of this. > > Blaise > > > On Sep 1, 2016, at 9:26 PM, Jeff Hammond > wrote: > > > > https://gcc.gnu.org/onlinedocs/gfortran/ISO_005fFORTRAN_005fENV.html has > real{32,64,128} that does a nice

Re: [petsc-dev] fortran literals

2016-09-02 Thread Jeff Hammond
> University of Illinois Urbana-Champaign > > > > > > On Sep 1, 2016, at 6:15 PM, Blaise A Bourdin > wrote: > > If I recall correctly, fortran does not mandate that > "selected_real_kind(5)" means the same across compilers, so that hardcoding > kind v

Re: [petsc-dev] exascale applications and PETSc usefulness

2016-09-07 Thread Jeff Hammond
some of them. If anyone has > useful information on the needs/approaches of these projects please let us > know so we can determine if there any place for PETSc. > >Thanks > > Barry > > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] KNL MatMult performance and unrolling.

2016-09-28 Thread Jeff Hammond
and non-unrolled also on Xeon. > >We've never done a good job of managing our unrolling, where, how and > when we do it and macros for unrolling such as PetscSparseDensePlusDot. > Intel would say just throw it all away. > >Barry > > > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] Configure fails on macOS with MKL

2016-12-21 Thread Jeff Hammond
Is the PETSc build system not capable of running "find" on $MKLROOT as a hedge against subdirectory reorganization? At least with Intel compilers, "-mkl" is the easy button unless you're trying to get threaded BLAS/LAPACK with ScaLAPACK. Jeff On Wed, Dec 21, 2016 at 7:37 AM Pierre Jolivet wrote:

Re: [petsc-dev] segfaults with mpich 3.2 and clang (Apple LLVM 8.0.0, clang 800.0.42.1)

2017-02-10 Thread Jeff Hammond
g PETSc via the PyLith installer which builds mpich, NetCDF, HDF5, > etc from scratch using the Apple's clang. If I swap mpich3.2 for > mpich3.1.3, these segfaults go away, so I am inclined to blame mpich3.2. > > > > If this is a unknown or undiagnosed issue, I will try a sta

Re: [petsc-dev] download problem p4est on Cori at NERSC

2017-02-18 Thread Jeff Hammond
e I should do > with-batch=1, > > > On Wed, Feb 15, 2017 at 4:36 PM, Mark Adams wrote: > >> I get this error on the KNL partition at NERSC. P4est works on the >> Haswell partition and other downloads seem to work on the KNL partition. >> Any ideas? >>

Re: [petsc-dev] Compilation with ESSL SMP

2017-02-20 Thread Jeff Hammond
h the option > --with-blas-lapack-lib option. > > Wrt libmkl_intel_thread vs libmkl_sequential - configure defaults to > libmkl_sequential for regular use. > > libmkl_intel_thread is picked up only when MKL pardiso is requested > [so the assumption here is aware of pardiso requirements wrt

Re: [petsc-dev] VecScatter scaling problem on KNL

2017-03-09 Thread Jeff Hammond
ALUES,is,ierr) > > call VecScatterCreate(this%xVec,PETSC_NULL_OBJECT,vec,is,this% > from_petsc,ierr) > > ! reverse scatter object > > > > If we want to make this change then I could help a developer or you > > can get me set up with a (small) test problem and a branch and I can > > do it at NERSC. > > > > Thanks, > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] Using multiple mallocs with PETSc

2017-03-09 Thread Jeff Hammond
etween). One consequence of using libnuma to manage MCDRAM is that one can call numa_move_pages, which Jed has asserted is the single most important function call in the history of memory management ;-) Jeff > Perhaps I should try memkind calls since they may become much better. > &g
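
numa_move_pages is libnuma's thin wrapper over the move_pages(2) system call; a minimal sketch of the latter follows, with the target node number being an assumption (node 1, e.g. MCDRAM in KNL flat mode). Link with -lnuma:

    #include <numaif.h>   /* move_pages(2) and MPOL_MF_MOVE */
    #include <stdlib.h>

    int main(void)
    {
      void *pages[1];
      int   nodes[1]  = {1};   /* assumed target node */
      int   status[1];

      double *buf = aligned_alloc(4096, 4096);   /* one page-aligned page */
      pages[0] = buf;
      /* pid 0 = calling process; MPOL_MF_MOVE moves pages private to it;
         status[] reports per-page success or an errno */
      move_pages(0, 1, pages, nodes, status, MPOL_MF_MOVE);
      free(buf);
      return 0;
    }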

Re: [petsc-dev] Using multiple mallocs with PETSc

2017-03-10 Thread Jeff Hammond
. As I understand it, calloc != malloc+memset, and the > differences might be important in multicore+multithreading scenarios and > the first-touch policy. > > My intuition is that any HPC code that benefits from mapping the zero page vs memset is doing something wrong. Jeff -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/
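
A small sketch of the distinction at issue: calloc may return copy-on-write zero pages that have not been faulted in yet, whereas malloc followed by memset faults every page on the calling thread, which under first-touch placement pins those pages to that thread's NUMA node:

    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
      size_t n = (size_t)1 << 24;

      double *a = calloc(n, sizeof(double));  /* pages may be zero-mapped, untouched */
      double *b = malloc(n * sizeof(double));
      memset(b, 0, n * sizeof(double));       /* faults every page, on this thread */

      /* In threaded code the corresponding first touch is parallel, e.g.
         #pragma omp parallel for
         for (size_t i = 0; i < n; i++) b[i] = 0.0;  */

      free(a);
      free(b);
      return 0;
    }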

Re: [petsc-dev] Using multiple mallocs with PETSc

2017-03-10 Thread Jeff Hammond
> > Disappointing on my Mac. > > WARNING: Could not locate OpenMP support. This warning can also occur if > compiler already supports OpenMP > checking omp.h usability... no > checking omp.h presence... no > checking for omp.h... no > omp.h is required for this library > Blame Apple for not s

Re: [petsc-dev] Using multiple mallocs with PETSc

2017-03-10 Thread Jeff Hammond
> On Mar 10, 2017, at 12:42 PM, Jed Brown wrote: > > Jeff Hammond writes: >> My intuition is that any HPC code that benefits from mapping the zero page >> vs memset is doing something wrong. > > The relevant issue is the interface, from > > a = malloc(size

Re: [petsc-dev] Using multiple mallocs with PETSc

2017-03-12 Thread Jeff Hammond
On Sat, Mar 11, 2017 at 9:00 AM Jed Brown wrote: > Jeff Hammond writes: > > I agree 100% that multithreaded codes that fault pages from the main > thread in a NUMA environment are doing something wrong ;-) > > > > Does calloc *guarantee* pages are not mapped? If I calloc

Re: [petsc-dev] Using multiple mallocs with PETSc

2017-03-13 Thread Jeff Hammond
s malloc, which also > doesn't promise unfaulted pages. This is one reason some of us keep saying > that OpenMP sucks. It's a shitty standard that obstructs better standards > from being created. > > > On March 12, 2017 11:19:49 AM MDT, Jeff Hammond > wrote: > >

[petsc-dev] fork for programming models debate (was "Using multiple mallocs with PETSc")

2017-03-14 Thread Jeff Hammond
On Mon, Mar 13, 2017 at 8:08 PM, Jed Brown wrote: > > Jeff Hammond writes: > > > OpenMP did not prevent OpenCL, > > This programming model isn't really intended for architectures with > persistent caches. > It's not clear to me how much this should matter

Re: [petsc-dev] fork for programming models debate (was "Using multiple mallocs with PETSc")

2017-03-15 Thread Jeff Hammond
On Tue, Mar 14, 2017 at 8:52 PM Jed Brown wrote: > Jeff Hammond writes: > > > On Mon, Mar 13, 2017 at 8:08 PM, Jed Brown wrote: > >> > >> Jeff Hammond writes: > >> > >> > OpenMP did not prevent OpenCL, > >> > >>

Re: [petsc-dev] PETSc statistics

2017-06-06 Thread Jeff Hammond
> > >> I usually say tens of thousands. Our users meeting attracts 100+ > presenters > >> each year. > >> > >> Matt > >> > >> > >>> Any other metrics would be welcome as well. > >>> > >>> Thanks, > >>> Dave > >>> > >> > >> > >> -- > >> What most experimenters take for granted before they begin their > >> experiments is infinitely more interesting than any results to which > their > >> experiments lead. > >> -- Norbert Wiener > >> > >> http://www.caam.rice.edu/~mk51/ > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] configure error at NERSC

2017-08-10 Thread Jeff Hammond
pilers/conftest.c >>>> > > Possible ERROR while running compiler: >>>> > > stderr: >>>> > > ModuleCmd_Load.c(244):ERROR:105: Unable to locate a modulefile for >>>> > > 'autoconf' >>>> > > Mod

Re: [petsc-dev] IMPORTANT nightly builds are down do not add to next

2017-12-30 Thread Jeff Hammond
w and Ubuntu platforms. You can get almost all compiler versions and MPI libraries in both, with reasonable effort. I can do an initial implementation if you aren’t inclined to do it yourselves. Jeff > >Basically another Satish, ideas on how to find such a person? > > Impossible to find "another Satish". > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] IMPORTANT nightly builds are down do not add to next

2017-12-30 Thread Jeff Hammond
On Sat, Dec 30, 2017 at 3:34 PM Jed Brown wrote: > Jeff Hammond writes: > > > On Sat, Dec 30, 2017 at 12:04 PM Jed Brown wrote: > > > >> "Smith, Barry F." writes: > >> > >> >> On Dec 30, 2017, at 3:53 AM, Matthew Knepley > wrot

Re: [petsc-dev] MPI_Attr_get test fails

2018-02-09 Thread Jeff Hammond
lg; > ierr = > MPI_Attr_get(PETSC_COMM_WORLD,Petsc_Counter_keyval,&counter,&flg);CHKERRQ(ierr); > if (!flg) SETERRQ(PETSC_COMM_SELF,PETSC_ERR_ARG_CORRUPT,"Bad MPI > communicator supplied ???"); > } > > Any ideas? > > Mark > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] petsc/master: unable to link in C++ with last night PETSC_WITH_EXTERNAL_LIB variable changes

2018-02-10 Thread Jeff Hammond
> > >> Is this a normal and definitive change or an unwanted/unobserved bug? >> >> Thanks, >> >> Eric >> >> ps: here are the logs: >> >> this night: >> >> - >> >> http://www.giref.ulaval.ca/~cmpgiref/petsc-master-debug/2018.02.10.02h00m01s_configure.log http://www.giref.ulaval.ca/~cmpgiref/petsc-master-debug/2018.02.10.02h00m01s_make.log >> >> >> a day before: >> >> http://www.giref.ulaval.ca/~cmpgiref/petsc-master-debug/2018.02.09.02h00m02s_configure.log http://www.giref.ulaval.ca/~cmpgiref/petsc-master-debug/2018.02.09.02h00m02s_make.log >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ <http://www.caam.rice.edu/%7Emk51/> > > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] petsc/master: unable to link in C++ with last night PETSC_WITH_EXTERNAL_LIB variable changes

2018-02-10 Thread Jeff Hammond
t > file or don't include it before mpi.h is included. > Indeed, everybody should compiler MPI codes with "-DMPICH_SKIP_MPICXX=1 -DOMPI_SKIP_MPICXX=1" now. I'll ask MPICH and Open-MPI to switch the default to exclude C++ bindings. Jeff -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/
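
The same effect can be had in the source instead of on the compile line; MPICH_SKIP_MPICXX and OMPI_SKIP_MPICXX are the real switches recognized by the MPICH and Open-MPI headers:

    /* Equivalent to -DMPICH_SKIP_MPICXX=1 -DOMPI_SKIP_MPICXX=1,
       if placed before the first inclusion of mpi.h */
    #define MPICH_SKIP_MPICXX 1
    #define OMPI_SKIP_MPICXX  1
    #include <mpi.h>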

Re: [petsc-dev] More CMake hatred

2018-04-06 Thread Jeff Hammond
> > > > > *** > > > > > > [knepley@rush:/projects/academic/knepley/PETSc3/petsc]$ which mpicc > > > > /util/common/openmpi/3.0.0/gcc-4.8.5/bin/mpicc > > > > [kneple

Re: [petsc-dev] GCC8 Fortran length changes from int to size_t

2018-05-02 Thread Jeff Hammond
he > PETSC_MIXED_LEN / PETSC_END_LEN to use size_t instead of int. > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] GCC8 Fortran length changes from int to size_t

2018-05-03 Thread Jeff Hammond
er arguments manually and it would > be great to remove that manual step. > >Thanks > > Barry > > > > On May 2, 2018, at 11:42 PM, Jeff Hammond > wrote: > > > > Or you could just use ISO_C_BINDING. Decent compilers should support it. > >

Re: [petsc-dev] building with MKL

2018-07-01 Thread Jeff Hammond
etsc/src/snes/examples/tutorials/ex19.c >> > > [0]PETSC ERROR: PETSc Option Table entries: >> > > [0]PETSC ERROR: -da_refine 3 >> > > [0]PETSC ERROR: -ksp_monitor >> > > [0]PETSC ERROR: -mat_type aijmkl >> > > [0]PETSC ERROR: -options_left >> > > [0]PETSC ERROR: -pc_type gamg >> > > [0]PETSC ERROR: -snes_monitor_short >> > > [0]PETSC ERROR: -snes_view >> > > [0]PETSC ERROR: End of Error Message ---se >> > >> > On Sat, Jun 30, 2018 at 3:08 PM Mark Adams > > <mailto:mfad...@lbl.gov>> wrote: >> > >> > OK, that got further. >> > >> > On Sat, Jun 30, 2018 at 3:03 PM Mark Adams > > <mailto:mfad...@lbl.gov>> wrote: >> > >> > Like this? >> > >> > >> > >> >> '--with-blaslapack-lib=/opt/intel/compilers_and_libraries_2018.1.163/linux/mkl/lib/intel64/libmkl_intel_thread.a', >> > >> > >> > On Sat, Jun 30, 2018 at 3:00 PM Mark Adams > > <mailto:mfad...@lbl.gov>> wrote: >> > >> > >> > Specify either "--with-blaslapack-dir" or >> > "--with-blaslapack-lib --with-blaslapack-include". >> > But not both! >> > >> > >> > Get rid of the dir option, and give the full path to the >> > library. >> > >> > >> > What is the syntax for giving the full path? >> > >> > > > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] building with MKL

2018-07-01 Thread Jeff Hammond
On Sun, Jul 1, 2018 at 10:43 AM Victor Eijkhout wrote: > > > On Jul 1, 2018, at 12:30 PM, Jeff Hammond wrote: > > If you really want to do this, then replace COMMON with CORE to specialize > for SKX. There’s not point to using COMMON if you’ve got the MIC path > already. &

Re: [petsc-dev] [mpich-discuss] turning off MPI abort messages

2014-02-20 Thread Jeff Hammond
-- Jeff Hammond jeff.scie...@gmail.com

Re: [petsc-dev] [mpich-discuss] turning off MPI abort messages

2014-02-21 Thread Jeff Hammond
>>>> = EXIT CODE: 56 >>>> = CLEANING UP REMAINING PROCESSES >>>> = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES >>>> >>>> ======= >>>> >>>> >>>> >>>> __

Re: [petsc-dev] [mpich-discuss] turning off MPI abort messages

2014-02-21 Thread Jeff Hammond
: >> (1) the MPICH library ("application called MPI_Abort...") and (2) the job >> launcher ("BAD TERMINATION..."). You can eliminate the messages from the >> job launcher by providing an error code of 0 in MPI_Abort. >> >> ~Jim. >> >> >
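
A minimal sketch of the workaround described in the quote, aborting with error code 0 so the Hydra launcher suppresses its "BAD TERMINATION" banner:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
      MPI_Init(&argc, &argv);
      /* error code 0: the launcher stays quiet; nonzero prints the banner */
      MPI_Abort(MPI_COMM_WORLD, 0);
      return 0;   /* not reached */
    }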

Re: [petsc-dev] [mpich-discuss] turning off MPI abort messages

2014-02-21 Thread Jeff Hammond
m two >> > sources: (1) the MPICH library ("application called MPI_Abort...") and (2) >> > the job launcher ("BAD TERMINATION..."). You can eliminate the messages >> > from the job launcher by providing an error code of 0 in MPI_Abort. >> > >

Re: [petsc-dev] [mpich-discuss] turning off MPI abort messages

2014-02-21 Thread Jeff Hammond
's confusing to people when you suggest any other relationship than "Jeff is a dude that responds to threads on the MPICH discuss list". Jeff On Fri, Feb 21, 2014 at 1:54 PM, Munson, Todd S. wrote: > > Sounds like someone needs to look up the definition of "customer s

Re: [petsc-dev] [mpich-discuss] turning off MPI abort messages

2014-02-21 Thread Jeff Hammond
CESSES > = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES > === -- Jeff Hammond jeff.scie...@gmail.com

Re: [petsc-dev] [mpich-discuss] turning off MPI abort messages

2014-02-21 Thread Jeff Hammond
> = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES >> =========== -- Jeff Hammond jeff.scie...@gmail.com

Re: [petsc-dev] [mpich-discuss] turning off MPI abort messages

2014-02-21 Thread Jeff Hammond
https://trac.mpich.org/projects/mpich/ticket/2038 has the patches. Jeff On Fri, Feb 21, 2014 at 3:47 PM, Jeff Hammond wrote: > Barry: > > Can you tolerate the following workaround for Hydra's error cleanup or > do you need it to be internal? I presume you know enough bash to

Re: [petsc-dev] [mpich-discuss] turning off MPI abort messages

2014-02-21 Thread Jeff Hammond
let them decide how to handle it, but he learned MPI from Bill Gropp, so he might not know anything ;-) I apologize for being unpleasant earlier. Best, Jeff > > Barry > > On Feb 21, 2014, at 3:10 PM, Jeff Hammond wrote: > >> Barry: >> >> Would the following beha

Re: [petsc-dev] [mpich-discuss] turning off MPI abort messages

2014-02-21 Thread Jeff Hammond
Jed, Did you look at my patch or the demonstration yet? I posted all the details this afternoon. I tried very hard to support verbosity suppression in a reasonable way at runtime. Do you really want an MPIX call that is equivalent to setenv("")? Is the extra code worth it? (These are seriou

Re: [petsc-dev] [mpich-discuss] turning off MPI abort messages

2014-02-22 Thread Jeff Hammond
I'll look into a general setenv-like mechanism for CVARs rather than an abort-specific one-off. Might be there already through MPI_T interface (which is standard, unlike MPIX). Jeff Sent from my iPhone > On Feb 21, 2014, at 11:35 PM, Jed Brown wrote: > > Jeff Hammond writes:

Re: [petsc-dev] [mpich-discuss] turning off MPI abort messages

2014-02-22 Thread Jeff Hammond
Thanks for catching this. I tested my patch but not through valgrind. Thanks also for figuring out the line break issue. I figured that was coming from the device but didn't track it down. Jeff On Sat, Feb 22, 2014 at 2:07 PM, Jed Brown wrote: > Jeff Hammond writes:

[petsc-dev] PetscSFFetchAndOpBegin_Window

2015-03-11 Thread Jeff Hammond
the rest of the RMA calls use ranks[i]. I assume this is innocuous, but unless you have a mutex on 'sf', it is theoretically possible that sf->ranks[i] could be changed by another thread in such a way as to lead to an undefined or incorrect program. If you prohibit this behavior as

Re: [petsc-dev] PetscSFFetchAndOpBegin_Window

2015-03-11 Thread Jeff Hammond
> incorrect program. If you prohibit this behavior as part of the >> internal design contract, then it should be noted. > > sf->ranks is not mutated after setup. Except due to DRAM SDC, rowhammer exploits, gamma ray muons... :-D In any case, it will probably save you 0-10 cycles to use the automatic variable rather than to dereference the struct pointer again :-) Best, Jeff -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/
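
The 0-10-cycle point, as a generic sketch; the struct and field names below are hypothetical stand-ins, not the actual PetscSF internals:

    /* Hypothetical stand-in for the PetscSF pattern discussed above */
    typedef struct { const int *ranks; int nranks; } SF;

    static void begin_ops(SF *sf)
    {
      const int *ranks = sf->ranks;   /* dereference the struct once */
      for (int i = 0; i < sf->nranks; i++) {
        int r = ranks[i];             /* use the cached pointer thereafter */
        (void)r;                      /* ... issue the RMA call targeting rank r ... */
      }
    }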

Re: [petsc-dev] Adding support memkind allocators in PETSc

2015-06-03 Thread Jeff Hammond
/memkind#fork-destination-box. I'm sure that the memkind developers would be willing to review your pull request once you've implemented memkind_move_pages(). Jeff -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] Adding support memkind allocators in PETSc

2015-06-03 Thread Jeff Hammond
r heads totally up their asses > > formatted source code with astyle --style=linux --indent=spaces=4 -y -S > > when everyone knows that any indent that is not 2 characters is totally > insane :-) > > Barry > > >> On Jun 3, 2015, at 9:37 PM, Jeff Hammond wrote:

Re: [petsc-dev] Adding support memkind allocators in PETSc

2015-06-03 Thread Jeff Hammond
On Wed, Jun 3, 2015 at 9:58 PM, Jed Brown wrote: > Jeff Hammond writes: >> The beauty of git/github is one can make branches to try out anything >> they want even if Jed thinks that he knows better than Intel how to >> write system software for Intel's hardware

Re: [petsc-dev] Adding support memkind allocators in PETSc

2015-06-03 Thread Jeff Hammond
If everyone would just indent with tabs, we could just set the indent spacing with our editors ;-) On Wed, Jun 3, 2015 at 10:01 PM, Barry Smith wrote: > >> On Jun 3, 2015, at 9:58 PM, Jeff Hammond wrote: >> >> http://git.mpich.org/mpich.git/blob/HEAD:/src/mpi/init/init.c

Re: [petsc-dev] Error message while make test petsc-master

2015-08-28 Thread Jeff Hammond
mkl_intel_lp64.a,/home/hector/mkl_static/libmkl_core.a,/home/hector/mkl_static/libmkl_intel_thread.a] > --with-scalapack-include=/share/apps/intel2/mkl/include > --with-scalapack-lib=[/home/hector/mkl_static/libmkl_scalapack_lp64.a,/home/hector/mkl_static/libmkl_blacs_openmpi_lp64.a] > --with-valgrind=1 --with-valgrind-dir=/home/hector/installed > --with-shared-libraries=0 --with-fortran-interfaces=1 > --FC_LINKER_FLAGS="-openmp -openmp-link static" --FFLAGS="-openmp > -openmp-link static" --LIBS="-Wl,--start-group > /home/hector/mkl_static/libmkl_intel_lp64.a > /home/hector/mkl_static/libmkl_core.a > /home/hector/mkl_static/libmkl_intel_thread.a -Wl,--end-group -liomp5 -ldl > -lpthread -lm" > > > > Thanks for your help! > > > > Hector > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] broken options handling with intel 16.0 compilers on mac OS

2015-09-17 Thread Jeff Hammond
Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 > http://www.math.lsu.edu/~bourdin > > > > > > > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] broken options handling with intel 16.0 compilers on mac OS

2015-09-23 Thread Jeff Hammond
: > Satish Balay writes: > > And then the MPI c++ libraries.. > > The current MPI standard does not contain C++ bindings. > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] broken options handling with intel 16.0 compilers on mac OS

2015-09-23 Thread Jeff Hammond
ntel-16.0/compilers_and_libraries_2016.0.083/mac/compiler/lib/libifcore.a > make a difference? > > And what do you have for: > > cd /opt/HPC/mpich-3.1.4-intel16.0/lib > ls > nm -Ao libmpiifort.* |grep get_command_argument > > BTW: I'm assuming you have sept 20 or newer petsc master branch. > > Satish > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] Do we still need to support python 2.4?

2015-10-13 Thread Jeff Hammond
overflow.com/questions/1465036/install-python-2-6-in-centos > [install prebuilt binary from 'fedora epel' > > > http://bda.ath.cx/blog/2009/04/08/installing-python-26-in-centos-5-or-rhel5/ > [install some devel packages from rhel - before attempting to compile > python from

Re: [petsc-dev] Do we still need to support python 2.4?

2015-10-13 Thread Jeff Hammond
On Tuesday, October 13, 2015, Satish Balay wrote: > On Tue, 13 Oct 2015, Jeff Hammond wrote: > > > PETSc can build MUMPS. Why not Python? :-) > > and gcc :) > > I have GCC builds almost completely automated for similar reasons as Python 2.4 (eg my CentOS box is stuck with

Re: [petsc-dev] Bug in MatShift_MPIAIJ ?

2015-10-22 Thread Jeff Hammond
s changed on one process but > not on others triggering disaster in the MatAssemblyEnd_MPIA(). > >> > >> This is now fixed in the maint, master and next branches and will be in > the next patch release. I have also attached the patch to this email. > >> > >> Barry > >> > > > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] Ugly feature of __float128

2015-10-28 Thread Jeff Hammond
done this way. Who wants to convert a > code to __float128 and have to then label all floating point numbers with a > q which of course also means the code is not compilable with other > compilers in double. > > It's ugly, but you can put it in a macro: > > #define NUM(a) a ## q > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/
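
Usage of the macro quoted above, assuming GCC's libquadmath and its q literal suffix (link with -lquadmath):

    #include <quadmath.h>
    #include <stdio.h>

    #define NUM(a) a ## q   /* pastes GCC's quad-precision suffix onto the literal */

    int main(void)
    {
      __float128 x = NUM(1.5);   /* expands to 1.5q */
      char buf[64];
      quadmath_snprintf(buf, sizeof buf, "%.30Qg", x);
      printf("x = %s\n", buf);
      return 0;
    }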

Re: [petsc-dev] SuperLU build error

2015-12-18 Thread Jeff Hammond
overwriting the change I make. > > There is a similar bug in SuperLU_dist that I also fix by hand. > > Garth > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] thread communicator

2016-01-12 Thread Jeff Hammond
> P.O. Box 808, L-561, Livermore, CA 94551 > > Phone - (925) 422-4377, Email - rfalg...@llnl.gov, Web - > http://people.llnl.gov/falgout2 > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] valgrind errors in the SUPERLU*

2016-01-16 Thread Jeff Hammond
- use this file for your next run - to catch the actual issues. > > https://wiki.wxwidgets.org/Valgrind_Suppression_File_Howto has some of > this info.. > > > We normally recommend using valgrind on linux with mpich built with > '--enable-g=meminit' [--download-mpich o

Re: [petsc-dev] MKL issue

2016-01-28 Thread Jeff Hammond
we can check for at configure time or we just wait that > > Intel ships MKL with a shared version of libmkl_blacs_(i)lp64? > > You can write a test for it or wait. Since the test needs to execute > code, it would need to run as part of conftest on a batch system. > Jeff, who works for Intel but is not responsible for any of our software products -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/
