Re: [petsc-dev] -with-kokkos-cuda-arch=AMPERE80 nonsense

2021-04-05 Thread Jeff Hammond
PETSc, Kokkos etc for THAT > system. > I will try to write something for you tomorrow. For NVIDIA hardware, the sole dependency will be nvcc. Jeff -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] -with-kokkos-cuda-arch=AMPERE80 nonsense

2021-04-05 Thread Jeff Hammond
to > trivially map to information you can get from a particular GPU when you > logged into it. For example nvidia-smi doesn't use these names directly. Is > there some mapping from nvidia-smi to these names we could use? If we are > serious about having a non-trivial number of user
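One way to recover that mapping programmatically, sketched against the CUDA runtime API; the arch-name table below is illustrative, not Kokkos's actual detection logic:

  #include <stdio.h>
  #include <cuda_runtime.h>

  int main(void)
  {
    struct cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) return 1;
    /* Kokkos arch names encode the compute capability, e.g. 8.0 -> AMPERE80 */
    printf("compute capability %d.%d\n", prop.major, prop.minor);
    if (prop.major == 8 && prop.minor == 0) printf("use AMPERE80\n");      /* e.g. A100 */
    else if (prop.major == 7 && prop.minor == 0) printf("use VOLTA70\n");  /* e.g. V100 */
    else printf("extend the table for other devices\n");
    return 0;
  }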

Re: [petsc-dev] PetscMallocAlign for Cuda

2020-09-03 Thread Jeff Hammond
>> >> >> > On Sep 2, 2020, at 8:58 AM, Mark Adams wrote: >> >> >> > >> >> >> > PETSc mallocs seem to boil down to PetscMallocAlign. There are switches >> in here but I don't see a Cuda malloc. This would seem to be conveni
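A hypothetical sketch of the kind of switch being asked about: routing allocations to CUDA managed memory behind a malloc-like entry point. This is not PETSc's actual PetscMallocAlign; the use_cuda flag and the error convention are invented for illustration.

  #include <stdlib.h>
  #include <cuda_runtime.h>

  /* Returns 0 on success. Host path uses 64-byte alignment; device path
     uses managed memory so the pointer is usable on host and device. */
  static int MyMallocAlign(size_t size, int use_cuda, void **result)
  {
    if (use_cuda) return cudaMallocManaged(result, size, cudaMemAttachGlobal) != cudaSuccess;
    return posix_memalign(result, 64, size);
  }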

Re: [petsc-dev] Question about MPICH device we use

2020-07-26 Thread Jeff Hammond
On Thu, Jul 23, 2020 at 9:35 PM Satish Balay wrote: > On Thu, 23 Jul 2020, Jeff Hammond wrote: > > > Open-MPI refuses to let users over subscribe without an extra flag to > > mpirun. > > Yes - and when using this flag - it lets the run through - but there is > st

Re: [petsc-dev] Question about MPICH device we use

2020-07-23 Thread Jeff Hammond
, Matthew Knepley wrote: > > > >> We default to ch3:sock. Scott MacLachlan just had a long thread on the > >> Firedrake list where it ended up that reconfiguring using ch3:nemesis > had a > >> 2x performance boost on his 16-core proc, and noticeable effect on the

Re: [petsc-dev] Googling PetscMatlabEngine leads to 3.7 version

2020-06-17 Thread Jeff Hammond
> On Mon, 8 Jun 2020, Barry Smith wrote: > > > >Googling PetscMatlabEngine leads to 3.7 version but > PetscMatlabEngineDestroy leads to 3.13. > > > >Any idea why some of the manual pages don't point to the latest > version? > > > >Barry > > > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] MatCreateTranspose semantics

2020-06-01 Thread Jeff Hammond
Jeff dumb. Make copy paste error. Sorry. Jeff On Mon, Jun 1, 2020 at 5:02 PM Jeff Hammond wrote: > I'm still unable to get a basic matrix transpose working. I may be > stupid, but I cannot figure out why the object is in the wrong state, no > matter what I do. > > This i

Re: [petsc-dev] MatCreateTranspose semantics

2020-06-01 Thread Jeff Hammond
ited demand for more > sophisticated operations on objects of type MATTRANSPOSE and there > probably isn't much benefit in fusing the parallel version here anyway. > > Jeff Hammond writes: > > > I am trying to understand how to use a transposed matrix view along the

[petsc-dev] MatCreateTranspose semantics

2020-06-01 Thread Jeff Hammond
age to petsc-ma...@mcs.anl.gov-- What do I need to do to use a transpose view properly outside of M*V? Thanks, Jeff -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/
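For readers landing on this thread: a minimal sketch of the distinction at issue. MatCreateTranspose() yields a logical view that supports MatMult()-style operations; anything beyond that generally wants an explicit MatTranspose(). Error checking follows the usual ierr/CHKERRQ pattern.

  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    Mat            A, Atview, Atexplicit;
    Vec            x, y;
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
    ierr = MatCreateSeqDense(PETSC_COMM_SELF, 3, 2, NULL, &A);CHKERRQ(ierr);
    ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

    ierr = MatCreateTranspose(A, &Atview);CHKERRQ(ierr);   /* 2x3 view; no data copied */
    ierr = MatCreateVecs(Atview, &x, &y);CHKERRQ(ierr);
    ierr = VecSet(x, 1.0);CHKERRQ(ierr);
    ierr = MatMult(Atview, x, y);CHKERRQ(ierr);            /* applies A^T via MatMultTranspose(A) */

    /* For operations the view does not implement, form the transpose for real: */
    ierr = MatTranspose(A, MAT_INITIAL_MATRIX, &Atexplicit);CHKERRQ(ierr);

    ierr = VecDestroy(&x);CHKERRQ(ierr);
    ierr = VecDestroy(&y);CHKERRQ(ierr);
    ierr = MatDestroy(&Atview);CHKERRQ(ierr);
    ierr = MatDestroy(&Atexplicit);CHKERRQ(ierr);
    ierr = MatDestroy(&A);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }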

Re: [petsc-dev] Meaning of PETSc matrices with zero rows but nonzero columns?

2020-06-01 Thread Jeff Hammond
A 3x0, B 0x8, C 8x7 -> (ABC) is a valid 3x7 matrix (empty) > > If I understand you right, (AB) would be a 0x0 matrix, and it can no longer > be multiplied against C > > > Richard, what is the hardship in preserving the shape relations? > > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] why does the Ubuntu PETSc package install coarray Fortran?

2020-06-01 Thread Jeff Hammond
octave-iso2mesh-0:1.9.1-5.fc32.x86_64 > > petsc-0:3.12.3-2.fc32.x86_64 > > > > I don't know what the equivalent in deb world is. > > > > So I would manually check on some of the petsc dependencies > > > > apt install libblas-dev > > > > apt insta

[petsc-dev] why does the Ubuntu PETSc package install coarray Fortran?

2020-05-31 Thread Jeff Hammond
will be used. -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] Request for comments: allow C99 internally

2020-03-10 Thread Jeff Hammond
for variadic macros and for-loop declarations, but not VLAs. > > > -- > Lisandro Dalcin > > Research Scientist > Extreme Computing Research Center (ECRC) > King Abdullah University of Science and Technology (KAUST) > http://ecrc.kaust.edu.sa/ > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] Fwd: [Xlab] El Capitan CPU announcement

2020-03-10 Thread Jeff Hammond
> >> AMD >> >> https://www.anandtech.com/show/15581/el-capitan-supercomputer-detailed-amd-cpus-gpus-2-exaflops >> >> >> _______ >> Xlab mailing list >> x...@lists.cels.anl.gov >> https://lists.cels.anl.gov/mailman/listinfo/xlab >> >> >> -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] First call to cudaMalloc or cudaFree is very slow on summit

2020-02-08 Thread Jeff Hammond
0x200023a6)
>> >>> libevent_pthreads-2.1.so.6 => /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/pgi-19.4/spectrum-mpi-10.3.0.1-20190611-4ymaahbai7ehhw4rves5jjiwon2laz3a/lib/libevent_pthreads-2.1.so.6 (0x200023ae)
>> >>> libopen-rte.so.3 => /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/pgi-19.4/spectrum-mpi-10.3.0.1-20190611-4ymaahbai7ehhw4rves5jjiwon2laz3a/lib/libopen-rte.so.3 (0x200023b1)
>> >>> libopen-pal.so.3 => /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/pgi-19.4/spectrum-mpi-10.3.0.1-20190611-4ymaahbai7ehhw4rves5jjiwon2laz3a/lib/libopen-pal.so.3 (0x200023c2)
>> >>> libXau.so.6 => /usr/lib64/libXau.so.6 (0x200023d1)
>> >>>
>> >>>> On Feb 7, 2020, at 2:31 PM, Smith, Barry F. wrote:
>> >>>>
>> >>>> ldd -o on the executable of both linkings of your code.
>> >>>>
>> >>>> My guess is that without PETSc it is linking the static version of the needed libraries and with PETSc the shared. And, in typical fashion, the shared libraries are off on some super slow file system so take a long time to be loaded and linked in on demand.
>> >>>>
>> >>>> Still a performance bug in Summit.
>> >>>>
>> >>>> Barry
>> >>>>
>> >>>>> On Feb 7, 2020, at 12:23 PM, Zhang, Hong via petsc-dev <petsc-dev@mcs.anl.gov> wrote:
>> >>>>>
>> >>>>> Hi all,
>> >>>>>
>> >>>>> Previously I have noticed that the first call to a CUDA function such as cudaMalloc and cudaFree in PETSc takes a long time (7.5 seconds) on summit. Then I prepared a simple example as attached to help OLCF reproduce the problem. It turned out that the problem was caused by PETSc. The 7.5-second overhead can be observed only when the PETSc lib is linked. If I do not link PETSc, it runs normally. Does anyone have any idea why this happens and how to fix it?
>> >>>>>
>> >>>>> Hong (Mr.)
>> >>>>>
>> >>>>> bash-4.2$ cat ex_simple.c
>> >>>>> #include <stdio.h>
>> >>>>> #include <time.h>
>> >>>>> #include <cuda_runtime.h>
>> >>>>>
>> >>>>> int main(int argc,char **args)
>> >>>>> {
>> >>>>>   clock_t start,s1,s2,s3;
>> >>>>>   double cputime;
>> >>>>>   double *init,tmp[100] = {0};
>> >>>>>
>> >>>>>   start = clock();
>> >>>>>   cudaFree(0);
>> >>>>>   s1 = clock();
>> >>>>>   cudaMalloc((void **)&init,100*sizeof(double));
>> >>>>>   s2 = clock();
>> >>>>>   cudaMemcpy(init,tmp,100*sizeof(double),cudaMemcpyHostToDevice);
>> >>>>>   s3 = clock();
>> >>>>>   printf("free time =%lf malloc time =%lf copy time =%lf\n",((double) (s1 - start)) / CLOCKS_PER_SEC,((double) (s2 - s1)) / CLOCKS_PER_SEC,((double) (s3 - s2)) / CLOCKS_PER_SEC);
>> >>>>>
>> >>>>>   return 0;
>> >>>>> }
-- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] https://www.dursi.ca/post/hpc-is-dying-and-mpi-is-killing-it.html

2019-03-18 Thread Jeff Hammond via petsc-dev
On Sun, Mar 17, 2019 at 6:55 PM Jed Brown wrote: > Jeff Hammond writes: > > > When this was written, I was convinced that Dursi was wrong about > > everything because one of the key arguments against MPI was > > fault-intolerance, which I was sure was going to b

Re: [petsc-dev] https://www.dursi.ca/post/hpc-is-dying-and-mpi-is-killing-it.html

2019-03-17 Thread Jeff Hammond via petsc-dev
> Matt > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] PETSc - MPI3 functionality

2018-09-10 Thread Jeff Hammond
discuss the > communication pattern first. You said you are working with a FEM model, > but also mention "igatherv". Is this for some sequential mesh > processing task or is it related to the solver? There isn't a > neighborhood igatherv and MPI_Igatherv isn't a pattern that should ever > be needed in a FEM solver. > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] How to enforce private data/methods in PETSc?

2018-08-11 Thread Jeff Hammond
to know). With C++, one can use private to > minimize > >>>>>> data exposure. > >>>>>> > >>>>> > >>>>> This just has to be coding discipline. People should not be accessing > >>>>> private memb

Re: [petsc-dev] GAMG error with MKL

2018-07-10 Thread Jeff Hammond
On Tue, Jul 10, 2018 at 11:27 AM, Richard Tran Mills wrote: > On Mon, Jul 9, 2018 at 10:04 AM, Jed Brown wrote: > >> Jeff Hammond writes: >> >> > This is the textbook Wrong Way to write OpenMP and the reason that the >> > thread-scalability of DOE

Re: [petsc-dev] GAMG error with MKL

2018-07-10 Thread Jeff Hammond
On Tue, Jul 10, 2018 at 9:33 AM, Jed Brown wrote: > Mark Adams writes: > > > On Mon, Jul 9, 2018 at 7:19 PM Jeff Hammond > wrote: > > > >> > >> > >> On Mon, Jul 9, 2018 at 7:38 AM, Mark Adams wrote: > >> > >>

Re: [petsc-dev] GAMG error with MKL

2018-07-09 Thread Jeff Hammond
On Mon, Jul 9, 2018 at 11:36 AM, Smith, Barry F. wrote: > > > > On Jul 9, 2018, at 8:33 AM, Jeff Hammond wrote: > > > > > > > > On Fri, Jul 6, 2018 at 4:28 PM, Smith, Barry F. > wrote: > > > > Richard, > > > > The

Re: [petsc-dev] GAMG error with MKL

2018-07-09 Thread Jeff Hammond
llel and then use explicit >> threading in a manner similar to MPI (omp_get_num_threads=MPI_Comm_size >> and omp_get_thread_num=MPI_Comm_rank), then this is the Right Way to >> write OpenMP. >> > > Note, Chombo (Phil Collela) split from BoxLib (John Bell) ab

Re: [petsc-dev] GAMG error with MKL

2018-07-09 Thread Jeff Hammond
On Mon, Jul 9, 2018 at 6:38 AM, Matthew Knepley wrote: > On Mon, Jul 9, 2018 at 9:34 AM Jeff Hammond > wrote: > >> On Fri, Jul 6, 2018 at 4:28 PM, Smith, Barry F. >> wrote: >> >>> >>> Richard, >>> >>> The problem is that

Re: [petsc-dev] GAMG error with MKL

2018-07-09 Thread Jeff Hammond
ny good solutions. > > > > --Richard > > > > On Wed, Jul 4, 2018 at 11:38 PM, Smith, Barry F. > wrote: > > > >Jed, > > > > You could use your same argument to argue PETSc should do > "something" to help people who have (righ

Re: [petsc-dev] GAMG error with MKL

2018-07-04 Thread Jeff Hammond
On Wed, Jul 4, 2018 at 6:31 AM Matthew Knepley wrote: > On Tue, Jul 3, 2018 at 10:32 PM Jeff Hammond > wrote: > >> >> >> On Tue, Jul 3, 2018 at 4:35 PM Mark Adams wrote: >> >>> On Tue, Jul 3, 2018 at 1:00 PM Richard Tran Mills >>> wrote: >

Re: [petsc-dev] GAMG error with MKL

2018-07-03 Thread Jeff Hammond
/pc/impls/gamg/agg.c
>>> [0]PETSC ERROR: #4 PCSetUp_GAMG() line 517 in /global/u2/m/madams/petsc_install/petsc/src/ksp/pc/impls/gamg/gamg.c
>>> [0]PETSC ERROR: #5 PCSetUp() line 932 in /global/u2/m/madams/petsc_install/petsc/src/ksp/pc/interface/precon.c
>>> [0]PETSC ERROR: #6 KSPSetUp() line 381 in /global/u2/m/madams/petsc_install/petsc/src/ksp/ksp/interface/itfunc.c
>>> [0]PETSC ERROR: #7 KSPSolve() line 612 in /global/u2/m/madams/petsc_install/petsc/src/ksp/ksp/interface/itfunc.c
>>> [0]PETSC ERROR: #8 SNESSolve_NEWTONLS() line 224 in /global/u2/m/madams/petsc_install/petsc/src/snes/impls/ls/ls.c
>>> [0]PETSC ERROR: #9 SNESSolve() line 4350 in /global/u2/m/madams/petsc_install/petsc/src/snes/interface/snes.c
>>> [0]PETSC ERROR: #10 main() line 161 in /global/homes/m/madams/petsc_install/petsc/src/snes/examples/tutorials/ex19.c
>>> [0]PETSC ERROR: PETSc Option Table entries:
>>> [0]PETSC ERROR: -ksp_monitor_short
>>> [0]PETSC ERROR: -mat_type aijmkl
>>> [0]PETSC ERROR: -options_left
>>> [0]PETSC ERROR: -pc_type gamg
>>> [0]PETSC ERROR: -snes_monitor_short
>>>
-- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] [petsc-maint] Incident INC0122538 MKL on Cori/KNL

2018-07-03 Thread Jeff Hammond
s would > just be valgrind not supporting the instruction set. You could > downgrade the target arch via compiler flags or see if the latest (dev > version?) Valgrind supports the instruction. > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] building with MKL

2018-07-01 Thread Jeff Hammond
On Sun, Jul 1, 2018 at 10:43 AM Victor Eijkhout wrote: > > > On Jul 1, 2018, at 12:30 PM, Jeff Hammond wrote: > > If you really want to do this, then replace COMMON with CORE to specialize > for SKX. There’s no point to using COMMON if you’ve got the MIC path > already.

Re: [petsc-dev] building with MKL

2018-07-01 Thread Jeff Hammond
orials/ex19.c
>> > > [0]PETSC ERROR: PETSc Option Table entries:
>> > > [0]PETSC ERROR: -da_refine 3
>> > > [0]PETSC ERROR: -ksp_monitor
>> > > [0]PETSC ERROR: -mat_type aijmkl
>> > > [0]PETSC ERROR: -options_left
>> > > [0]PETSC ERROR: -pc_type gamg
>> > > [0]PETSC ERROR: -snes_monitor_short
>> > > [0]PETSC ERROR: -snes_view
>> > > [0]PETSC ERROR: End of Error Message ---se
>> >
>> > On Sat, Jun 30, 2018 at 3:08 PM Mark Adams wrote:
>> >
>> >     OK, that got further.
>> >
>> >     On Sat, Jun 30, 2018 at 3:03 PM Mark Adams wrote:
>> >
>> >         Like this?
>> >
>> >         '--with-blaslapack-lib=/opt/intel/compilers_and_libraries_2018.1.163/linux/mkl/lib/intel64/libmkl_intel_thread.a',
>> >
>> >         On Sat, Jun 30, 2018 at 3:00 PM Mark Adams wrote:
>> >
>> >             Specify either "--with-blaslapack-dir" or "--with-blaslapack-lib --with-blaslapack-include". But not both!
>> >
>> >             Get rid of the dir option, and give the full path to the library.
>> >
>> >             What is the syntax for giving the full path?
>> >
-- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] GCC8 Fortran length changes from int to size_t

2018-05-03 Thread Jeff Hammond
ions that have character arguments manually and it would > be great to remove that manual step. > >Thanks > > Barry > > > > On May 2, 2018, at 11:42 PM, Jeff Hammond <jeff.scie...@gmail.com> > wrote: > > > > Or you could just use ISO_C_BINDING. De

Re: [petsc-dev] GCC8 Fortran length changes from int to size_t

2018-05-02 Thread Jeff Hammond
r version) and change the > PETSC_MIXED_LEN / PETSC_END_LEN to use size_t instead of int. > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] More CMake hatred

2018-04-06 Thread Jeff Hammond
.0/gcc-4.8.5/bin/mpicc > > > > [knepley@rush:/projects/academic/knepley/PETSc3/petsc]$ echo $PATH > > > > > /util/common/openmpi/3.0.0/gcc-4.8.5/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/lpp/mmfs/bin:/opt/dell/srvadmin/bin:/user/knepley/bin > > > > I cannot see why it would say this. > > It's complaining that the CC is two words instead of a single path. You > can make a link > > ln -s `which ccache` ~/bin/mpicc > > and then CC=$HOME/bin/mpicc will work. Yes, it's irritating. While we are proposing features that place no value on users’ time, why not just ask them to implement their own MPI wrapper scripts with ccache on the inside? Jeff -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] petsc/master: unable to link in C++ with last night PETSC_WITH_EXTERNAL_LIB variable changes

2018-02-10 Thread Jeff Hammond
in petscsys.h, but you might not include it in that > file or don't include it before mpi.h is included. > Indeed, everybody should compile MPI codes with "-DMPICH_SKIP_MPICXX=1 -DOMPI_SKIP_MPICXX=1" now. I'll ask MPICH and Open-MPI to switch the default to exclude C++ bindings. Jeff -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/
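The guard in question, sketched; defining the macros before the first include has the same effect as the compile-line flags (the macro spellings are the ones MPICH and Open-MPI document):

  /* Define these before the first #include <mpi.h> anywhere in the project,
     or pass -DMPICH_SKIP_MPICXX=1 -DOMPI_SKIP_MPICXX=1 on the compile line. */
  #if !defined(MPICH_SKIP_MPICXX)
  #define MPICH_SKIP_MPICXX 1
  #endif
  #if !defined(OMPI_SKIP_MPICXX)
  #define OMPI_SKIP_MPICXX 1
  #endif
  #include <mpi.h>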

Re: [petsc-dev] petsc/master: unable to link in C++ with last night PETSC_WITH_EXTERNAL_LIB variable changes

2018-02-10 Thread Jeff Hammond
e or an unwanted/unobserved bug? >> >> Thanks, >> >> Eric >> >> ps: here are the logs: >> >> this night: >> >> - >> >> http://www.giref.ulaval.ca/~cmpgiref/petsc-master-debug/2018.02.10.02h00m01s_configure.log >> >> http://www.giref.ulaval.ca/~cmpgiref/petsc-master-debug/2018.02.10.02h00m01s_make.log >> >> >> a day before: >> >> http://www.giref.ulaval.ca/~cmpgiref/petsc-master-debug/2018.02.09.02h00m02s_configure.log >> >> http://www.giref.ulaval.ca/~cmpgiref/petsc-master-debug/2018.02.09.02h00m02s_make.log >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] MPI_Attr_get test fails

2018-02-09 Thread Jeff Hammond
> ierr = > MPI_Attr_get(PETSC_COMM_WORLD,Petsc_Counter_keyval,&counter,&flg);CHKERRQ(ierr); > if (!flg) SETERRQ(PETSC_COMM_SELF,PETSC_ERR_ARG_CORRUPT,"Bad MPI > communicator supplied ???"); > } > > Any ideas? > > Mark > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/
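For context, the attribute-caching pattern this snippet exercises, in its MPI-3 spelling (MPI_Attr_get is the deprecated name for MPI_Comm_get_attr); a self-contained sketch:

  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char **argv)
  {
    int keyval, flag, value = 42, *read;
    MPI_Init(&argc, &argv);
    /* Create a keyval, attach a pointer to the communicator, read it back. */
    MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, MPI_COMM_NULL_DELETE_FN, &keyval, NULL);
    MPI_Comm_set_attr(MPI_COMM_WORLD, keyval, &value);
    MPI_Comm_get_attr(MPI_COMM_WORLD, keyval, &read, &flag);
    if (flag) printf("attribute = %d\n", *read);   /* prints 42 */
    MPI_Comm_free_keyval(&keyval);
    MPI_Finalize();
    return 0;
  }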

Re: [petsc-dev] IMPORTANT nightly builds are down do not add to next

2017-12-30 Thread Jeff Hammond
On Sat, Dec 30, 2017 at 3:34 PM Jed Brown <j...@jedbrown.org> wrote: > Jeff Hammond <jeff.scie...@gmail.com> writes: > > > On Sat, Dec 30, 2017 at 12:04 PM Jed Brown <j...@jedbrown.org> wrote: > > > >> "Smith, Barry F." <bsm...@mcs

Re: [petsc-dev] IMPORTANT nightly builds are down do not add to next

2017-12-30 Thread Jeff Hammond
l you mirror Petsc there. Examples of both are easily found with an obvious google search. Travis CI is great for covering generic Mac/Homebrew and Ubuntu platforms. You can get almost all compiler versions and MPI libraries in both, with reasonable effort. I can do an initial implementation if you aren’t inclined to do it yourselves. Jeff > >Basically another Satish, ideas on how to find such a person? > > Impossible to find "another Satish". > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] configure error at NERSC

2017-08-10 Thread Jeff Hammond
ModuleCmd_Load.c(244):ERROR:105: Unable to locate a modulefile for 'autoconf'
>>>> > > ModuleCmd_Load.c(244):ERROR:105: Unable to locate a modulefile for 'automake'
>>>> > > ModuleCmd_Load.c(244):ERROR:105: Unable to locate a modulefile for 'valgrind'
>>>> > >
>>>> > >    Matt
>>>> > >
>>>> > >> Thanks
>>>> > >
>>>> > > --
>>>> > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>>>> > > -- Norbert Wiener
>>>> > >
>>>> > > http://www.caam.rice.edu/~mk51/
-- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] PETSc statistics

2017-06-06 Thread Jeff Hammond
;>> > >>> * any estimates of the number of actual users > >>> > >> > >> I usually say tens of thousands. Our users meeting attracts 100+ > presenters > >> each year. > >> > >> Matt > >> > >> > >>> Any other metrics would be welcome as well. > >>> > >>> Thanks, > >>> Dave > >>> > >> > >> > >> > >> -- > >> What most experimenters take for granted before they begin their > >> experiments is infinitely more interesting than any results to which > their > >> experiments lead. > >> -- Norbert Wiener > >> > >> http://www.caam.rice.edu/~mk51/ > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

[petsc-dev] fork for programming models debate (was "Using multiple mallocs with PETSc")

2017-03-14 Thread Jeff Hammond
On Mon, Mar 13, 2017 at 8:08 PM, Jed Brown <j...@jedbrown.org> wrote: > > Jeff Hammond <jeff.scie...@gmail.com> writes: > > > OpenMP did not prevent OpenCL, > > This programming model isn't really intended for architectures with > persistent caches. > It

Re: [petsc-dev] Using multiple mallocs with PETSc

2017-03-13 Thread Jeff Hammond
s exactly the same as malloc, which also > doesn't promise unfaulted pages. This is one reason some of us keep saying > that OpenMP sucks. It's a shitty standard that obstructs better standards > from being created. > > > On March 12, 2017 11:19:49 AM MDT, Jeff Hammond <j

Re: [petsc-dev] Using multiple mallocs with PETSc

2017-03-12 Thread Jeff Hammond
On Sat, Mar 11, 2017 at 9:00 AM Jed Brown <j...@jedbrown.org> wrote: > Jeff Hammond <jeff.scie...@gmail.com> writes: > > I agree 100% that multithreaded codes that fault pages from the main > thread in a NUMA environment are doing something wrong ;-) > > >

Re: [petsc-dev] Using multiple mallocs with PETSc

2017-03-10 Thread Jeff Hammond
> On Mar 10, 2017, at 12:42 PM, Jed Brown <j...@jedbrown.org> wrote: > > Jeff Hammond <jeff.scie...@gmail.com> writes: >> My intuition is that any HPC code that benefits from mapping the zero page >> vs memset is doing something wrong. > > Th

Re: [petsc-dev] Using multiple mallocs with PETSc

2017-03-10 Thread Jeff Hammond
d implement > PetscCalloc properly. As I understand it, calloc != malloc+memset, and the > differences might be important in multicore+multithreading scenarios and > the first-touch policy. > > My intuition is that any HPC code that benefits from mapping the zero
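The first-touch point in concrete form: a sketch assuming OpenMP (compile with -fopenmp) and a static schedule, so initialization and later use fault pages from the same threads:

  #include <stdlib.h>

  double *alloc_first_touch(size_t n)
  {
    double *buf = malloc(n * sizeof(double));
    if (!buf) return NULL;
    /* memset from one thread would fault every page on the allocating
       thread's NUMA node; calloc may defer to kernel zero-fill, leaving
       placement to whoever touches first. Touching in parallel places
       pages near the threads that will later use them. */
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < (long)n; i++) buf[i] = 0.0;
    return buf;
  }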

Re: [petsc-dev] Using multiple mallocs with PETSc

2017-03-09 Thread Jeff Hammond
etween). One consequence of using libnuma to manage MCDRAM is that one can call numa_move_pages, which Jed has asserted is the single most important function call in the history of memory management ;-) Jeff > Perhaps I should try memkind calls since they may become much better.
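And the call being praised, sketched; assumes libnuma (link with -lnuma), a page-aligned buffer, and an illustrative target node (passing nodes=NULL would merely query current placement):

  #include <stdio.h>
  #include <numaif.h>   /* MPOL_MF_MOVE */
  #include <numa.h>     /* numa_move_pages, from libnuma */

  int move_to_node(void *buf, int node)
  {
    void *pages[1]  = { buf };    /* should be page-aligned */
    int   nodes[1]  = { node };   /* e.g. the MCDRAM node on a flat-mode KNL */
    int   status[1];
    long rc = numa_move_pages(0 /* self */, 1, pages, nodes, status, MPOL_MF_MOVE);
    if (rc < 0) { perror("numa_move_pages"); return -1; }
    printf("page now on node %d\n", status[0]);
    return 0;
  }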

Re: [petsc-dev] VecScatter scaling problem on KNL

2017-03-09 Thread Jeff Hammond
> > call VecScatterCreate(this%xVec,PETSC_NULL_OBJECT,vec,is,this% > from_petsc,ierr) > > ! reverse scatter object > > > > If we want to make this change then I could help a developer or you > > can get me set up with a (small) test problem and a branch and I can > > do it at NERSC. > > > > Thanks, > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] Compilation with ESSL SMP

2017-02-20 Thread Jeff Hammond
--with-blas-lapack-lib option. > > Wrt libmkl_intel_thread vs libmkl_sequential - configure defaults to > libmkl_sequential for regular use. > > libmkl_intel_thread is picked up only when MKL pardiso is requested > [so the assumption here is aware of pardiso requirements w

Re: [petsc-dev] download problem p4est on Cori at NERSC

2017-02-18 Thread Jeff Hammond
rk on KNL. Maybe I should do > with-batch=1, > > > On Wed, Feb 15, 2017 at 4:36 PM, Mark Adams <mfad...@lbl.gov> wrote: > >> I get this error on the KNL partition at NERSC. P4est works on the >> Haswell partition and other downloads seem to work on the KNL partitio

Re: [petsc-dev] segfaults with mpich 3.2 and clang (Apple LLVM 8.0.0, clang 800.0.42.1)

2017-02-10 Thread Jeff Hammond
If this is a unknown or undiagnosed issue, I will try a standalone PETSc > install to try to reproduce it. > > > > Thanks, > > Brad > > > > > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] Configure fails on macOS with MKL

2016-12-21 Thread Jeff Hammond
Is the PETSc build system not capable of running "find" on $MKLROOT as a hedge against subdirectory reorganization? At least with Intel compilers, "-mkl" is the easy button unless you're trying to get threaded BLAS/LAPACK with ScaLAPACK. Jeff On Wed, Dec 21, 2016 at 7:37 AM Pierre Jolivet

Re: [petsc-dev] KNL MatMult performance and unrolling.

2016-09-28 Thread Jeff Hammond
led > and non-unrolled also on Xeon. > >We've never done a good job of managing our unrolling, where, how and > when we do it and macros for unrolling such as PetscSparseDensePlusDot. > Intel would say just throw it all away. > >Barry > > > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] fortran literals

2016-09-02 Thread Jeff Hammond
"selected_real_kind(5)" means the same across compilers, so that hardcoding > kind values may not be portable. > > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] fortran literals

2016-09-01 Thread Jeff Hammond
<bour...@lsu.edu> wrote: > Neat. > I was not aware of this. > > Blaise > > > On Sep 1, 2016, at 9:26 PM, Jeff Hammond <jeff.scie...@gmail.com> wrote: > > > > https://gcc.gnu.org/onlinedocs/gfortran/ISO_005fFORTRAN_

Re: [petsc-dev] fortran literals

2016-09-01 Thread Jeff Hammond
https://gcc.gnu.org/onlinedocs/gfortran/ISO_005fFORTRAN_005fENV.html has real{32,64,128} that does a nice job for this situation. Jeff Sent from my iPhone > On Sep 1, 2016, at 4:15 PM, Blaise A Bourdin wrote: > > Hi, > > If I recall correctly, fortran does not mandate that

Re: [petsc-dev] Fwd: hbw_malloc on KNL

2016-08-31 Thread Jeff Hammond
verything on MCDRAM). Although it works somehow, adding a test call to > hbw_malloc() like this is just absurd. I am wondering if you have any clue > on why this happens and how to fix it. FYI, I attached the driver for the > PETSc example. > > Thanks, > Hong > > > > > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/
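For reference, the two public allocation APIs this thread revolves around, sketched; assumes the memkind library (link with -lmemkind). With the default preferred policy, hbw_malloc falls back to DRAM when MCDRAM is absent.

  #include <stdio.h>
  #include <hbwmalloc.h>   /* hbw_* convenience layer */
  #include <memkind.h>     /* general kinds */

  int main(void)
  {
    double *a = hbw_malloc(1000 * sizeof(double));            /* MCDRAM if available */
    double *b = memkind_malloc(MEMKIND_HBW, 1000 * sizeof(double));
    printf("hbw availability: %d\n", hbw_check_available());  /* 0 means available */
    hbw_free(a);
    memkind_free(MEMKIND_HBW, b);
    return 0;
  }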

Re: [petsc-dev] Question regarding updating PETSc Fortran examples to embrace post F77 constructs

2016-08-26 Thread Jeff Hammond
If PETSc has symbols greater than 6 characters, it has never been Fortran 77 compliant anyways, so it's a bit strange to ask permission to break it in another way in the examples. Sorry for being a pedant but we've had this debate in the MPI Forum and it is false to conflate punchcard Fortran

Re: [petsc-dev] Elemental for sparse, distributed-memory, direct, quad-precision linear solves?

2016-08-23 Thread Jeff Hammond
>> external package for PETSc? > > > > > > I think you need Clique and I am not sure we have an interface. I would > > check first. > > > >Matt > > > > -- > > What most experimenters take for granted before they begin their > experiments > > is infinitely more interesting than any results to which their > experiments > > lead. > > -- Norbert Wiener > > > > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] Soliciting suggestions for linear solver work under SciDAC 4 Institutes

2016-07-08 Thread Jeff Hammond
On Friday, July 8, 2016, Barry Smith <bsm...@mcs.anl.gov> wrote: > > > On Jul 8, 2016, at 12:17 PM, Jeff Hammond <jeff.scie...@gmail.com > <javascript:;>> wrote: > > > > > > > > On Fri, Jul 8, 2016 at 9:48 AM, Richard Mills <richardtmi...@g

Re: [petsc-dev] Soliciting suggestions for linear solver work under SciDAC 4 Institutes

2016-07-08 Thread Jeff Hammond
On Fri, Jul 8, 2016 at 9:48 AM, Richard Mills <richardtmi...@gmail.com> wrote: > > > On Fri, Jul 8, 2016 at 9:40 AM, Jeff Hammond <jeff.scie...@gmail.com> > wrote: > >> >>> > 1) How do we run at bandwidth peak on new architectures like Cori or &

Re: [petsc-dev] Soliciting suggestions for linear solver work under SciDAC 4 Institutes

2016-07-08 Thread Jeff Hammond
sufficient. *Disclaimer: All of these are academic comments. Do not use them to try to influence others or make any decisions. Do your own research and be skeptical of everything I derived from the Internet.* Jeff -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] Soliciting suggestions for linear solver work under SciDAC 4 Institutes

2016-07-07 Thread Jeff Hammond
On Thu, Jul 7, 2016 at 4:34 PM, Richard Mills <richardtmi...@gmail.com> wrote: > On Fri, Jul 1, 2016 at 4:13 PM, Jeff Hammond <jeff.scie...@gmail.com> > wrote: > >> [...] >> >> Maybe I am just biased because I spend all of my time reading >> www.

Re: [petsc-dev] Soliciting suggestions for linear solver work under SciDAC 4 Institutes

2016-07-07 Thread Jeff Hammond
erarchy of shared LLC and DRAM to a 3-tier hierarchy of fast memory (e.g. HBM), regular memory (e.g. DRAM), and slow (likely nonvolatile) memory on a node. Xeon Phi and some GPUs have caches, but it is unclear to me if it actually benefits software like PETSc to consider them. Figuring out how to run PETSc effectively on KNL should be generally useful... Jeff -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] Soliciting suggestions for linear solver work under SciDAC 4 Institutes

2016-07-01 Thread Jeff Hammond
r new codes could be in, just not for the very large scale. > > Rough ideas and pointers to publications are all useful. There is an > extremely short fuse so the sooner the better, > > Thanks > > Barry > > > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] WTF

2016-06-30 Thread Jeff Hammond
> The default settings often don’t work well for 3D problems. Are 2D (1D?) problems really the common case for PDE solvers? Aren't interesting problems 3D? Shouldn't the defaults be set to optimize for 3D? Jeff -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] WTF

2016-06-30 Thread Jeff Hammond
On Wed, Jun 29, 2016 at 8:18 PM, Barry Smith <bsm...@mcs.anl.gov> wrote: > > > On Jun 29, 2016, at 10:06 PM, Jeff Hammond <jeff.scie...@gmail.com> > wrote: > > > > > > > > On Wednesday, June 29, 2016, Barry Smith <bsm...@mcs.anl.gov> wrote

Re: [petsc-dev] WTF

2016-06-29 Thread Jeff Hammond
NL since I think this cannot be true. > > > >Matt > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > > -- Norbert Wiener > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] NEVER put // into PETSc code. PETSc is C89, the only real C.

2016-06-23 Thread Jeff Hammond
Then, in like 2028, PETSc can move to C11, which has useful features like atomics in the language (which are useful not just for threaded programs, but also MPI+shared-memory, which I know the PETSc team likes). I too am a huge fan of VLAs, but VLA for non-POD types are "hard", which is
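The C11 feature in question, sketched with C11 <threads.h> (glibc 2.28+; with pthreads the atomics usage is identical):

  #include <stdio.h>
  #include <stdatomic.h>
  #include <threads.h>

  static atomic_int counter;   /* zero-initialized */

  static int worker(void *arg)
  {
    (void)arg;
    for (int i = 0; i < 1000; i++) atomic_fetch_add(&counter, 1);
    return 0;
  }

  int main(void)
  {
    thrd_t t[4];
    for (int i = 0; i < 4; i++) thrd_create(&t[i], worker, NULL);
    for (int i = 0; i < 4; i++) thrd_join(t[i], NULL);
    printf("%d\n", atomic_load(&counter));  /* always 4000; no locks needed */
    return 0;
  }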

Re: [petsc-dev] NEVER put // into PETSc code. PETSc is C89, the only real C.

2016-06-22 Thread Jeff Hammond
, Barry Smith <bsm...@mcs.anl.gov> wrote: > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] technical C question

2016-04-07 Thread Jeff Hammond
>> >> ncols_upper++;
>> >> }
>> >> }
>> >> -ierr = PetscIncompleteLLAdd(ncols_upper,cols,levels,cols_lvl,am,nlnk,lnk,lnk_lvl,lnkbt);CHKERRQ(ierr);
>> >> +ierr = PetscIncompleteLLAdd(ncols,cols,leve

Re: [petsc-dev] Need help with Fortran code/compiler issue

2016-04-07 Thread Jeff Hammond
; /Users/petsc/petsc.clone-4/src/sys/objects/f2003-src/fsrc/optionenum.F:35:0: > >CArray = (/(c_loc(list1(i)),i=1,Len),c_loc(nullc)/) > ^ > Warning: 'list1.data' may be used uninitialized in this function > [-Wmaybe-uninitialized] > > Do any Fortran programmers know if the code is correct or if this is an > incorrect warning from the compiler? > >Thanks > > Barry > > > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] Not possible to do a VecPlaceArray for veccusp

2016-02-29 Thread Jeff Hammond
ormance, but about providing the ability for users > to 'implant' their own memory buffers. CUSP doesn't allow it (which was the > initial point of this thread). > > Thanks. Sorry I missed that. Given CPU memcpy is an order of magnitude more bandwidth than PCI offload, I still don't g
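The "implant their own memory buffers" capability being contrasted, sketched for the plain CPU Vec:

  #include <petscvec.h>

  int main(int argc, char **argv)
  {
    Vec            v;
    PetscScalar    mine[4] = {1, 2, 3, 4};   /* user-owned storage */
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
    ierr = VecCreateSeq(PETSC_COMM_SELF, 4, &v);CHKERRQ(ierr);
    ierr = VecPlaceArray(v, mine);CHKERRQ(ierr);   /* v now reads/writes 'mine' */
    ierr = VecScale(v, 2.0);CHKERRQ(ierr);         /* mine becomes {2,4,6,8} */
    ierr = VecResetArray(v);CHKERRQ(ierr);         /* restore v's own storage */
    ierr = VecDestroy(&v);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }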

Re: [petsc-dev] PETSc release soon, request for input on needed fixes or enhancements

2016-02-28 Thread Jeff Hammond
sh > > On Sun, 28 Feb 2016, Jeff Hammond wrote: > > > On Sunday, February 28, 2016, Matthew Knepley <knep...@gmail.com> wrote: > > > > > On Sun, Feb 28, 2016 at 3:12 PM, Barry Smith <bsm...@mcs.anl.gov> >

Re: [petsc-dev] PETSc release soon, request for input on needed fixes or enhancements

2016-02-28 Thread Jeff Hammond
> >> Hi Barry,
> >>
> >> the configuration of hypre looks broken since last night...?
> >>
> >> Here is the log issued from our automatic compilation of petsc-master.tar.gz at 2016-02-28 02h00:
> >>
> >> http://www.giref.ulaval.ca/~cmpgiref/petsc-master-debug/configure_20160228_0200.log
> >>
> >> Everything was fine until last night...
> >>
> >> Thanks,
> >>
> >> Eric
> >>
> >> On 2016-02-27 15:36, Barry Smith wrote:
> >>> PETSc Users,
> >>>
> >>> We are planning the PETSc release 3.7 shortly. If you know of any bugs that need to be fixed or enhancements added before the release please let us know.
> >>>
> >>> You can think of the master branch of the PETSc repository obtainable with
> >>>
> >>> git clone https://bitbucket.org/petsc/petsc petsc
> >>>
> >>> as a release candidate for 3.7. Changes for the release are listed at http://www.mcs.anl.gov/petsc/documentation/changes/dev.html
> >>>
> >>> Thanks
> >>>
> >>> Barry
> >>
-- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener
-- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] Not possible to do a VecPlaceArray for veccusp

2016-02-28 Thread Jeff Hammond
I missing? Jeff > Best regards, > Karli > > > > >>>>> If there is interest we can help in adding this stuff. >>>>> >>>> >>>> What are your time constraints? >>>> >>>> Best regards, >>>> Karli >>>> >>>> >>>> >>> -- >>> Dominic Meiser >>> Tech-X Corporation - 5621 Arapahoe Avenue - Boulder, CO 80303 >>> >> >> > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] MKL issue

2016-01-28 Thread Jeff Hammond
entially imperfect. > > Is it something we can check for at configure time or we just wait that > > Intel ships MKL with a shared version of libmkl_blacs_(i)lp64? > > You can write a test for it or wait. Since the test needs to execute > code, it would need to run as part of co

Re: [petsc-dev] valgrind errors in the SUPERLU*

2016-01-16 Thread Jeff Hammond
> And then - use this file for your next run - to catch the actual issues. > > https://wiki.wxwidgets.org/Valgrind_Suppression_File_Howto has some of > this info.. > > > We normally recommend using valgrind on linux with mpich built with > '--enable-g=meminit' [--downloa

Re: [petsc-dev] thread communicator

2016-01-12 Thread Jeff Hammond
Lawrence Livermore National Laboratory > > P.O. Box 808, L-561, Livermore, CA 94551 > > Phone - (925) 422-4377, Email - rfalg...@llnl.gov, Web - > http://people.llnl.gov/falgout2 > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] SuperLU build error

2015-12-18 Thread Jeff Hammond
ternal packages is overwriting the change I make. > > There is a similar bug in SuperLU_dist that I also fix by hand. > > Garth > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] Ugly feature of __float128

2015-10-28 Thread Jeff Hammond
>I cannot understand why this is done this way. Who wants to convert a > code to __float128 and have to then label all floating point numbers with a > q which of course also means the code is not compilable with other > compilers in double. > > It's ugly, but you can put it in
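A common workaround for the literal-suffix ugliness, sketched: hide the suffix behind a macro so the same source still compiles as double. The macro and config names here are illustrative; later PETSc versions ship PetscRealConstant() in this spirit.

  #if defined(USE_FLOAT128)          /* illustrative config macro */
  typedef __float128 Real;
  #define REALC(x) x##Q              /* 1.5 -> 1.5Q, GCC quad literal */
  #else
  typedef double Real;
  #define REALC(x) x                 /* plain double literal elsewhere */
  #endif

  static const Real third = REALC(1.0) / REALC(3.0);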

Re: [petsc-dev] Bug in MatShift_MPIAIJ ?

2015-10-22 Thread Jeff Hammond
the value of nonew was changed on one process but > not on others triggering disaster in the MatAssemblyEnd_MPIAIJ(). > >> > >> This is now fixed in the maint, master and next branches and will be in > the next patch release. I have also attached the patch to this email. > >> > >> Barry > >> > > > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] Do we still need to support python 2.4?

2015-10-13 Thread Jeff Hammond
ckoverflow.com/questions/1465036/install-python-2-6-in-centos > [install prebuilt binary from 'fedora epel' > > > http://bda.ath.cx/blog/2009/04/08/installing-python-26-in-centos-5-or-rhel5/ > [install some devel packages from rhel - before attempting to compile > python from sources] &

Re: [petsc-dev] Do we still need to support python 2.4?

2015-10-13 Thread Jeff Hammond
On Tuesday, October 13, 2015, Satish Balay <ba...@mcs.anl.gov> wrote: > On Tue, 13 Oct 2015, Jeff Hammond wrote: > > > PETSc can build MUMPS. Why not Python? :-) > > and gcc :) > > I have GCC builds almost completely automated for similar reasons as Python

Re: [petsc-dev] broken options handling with intel 16.0 compilers on mac OS

2015-09-23 Thread Jeff Hammond
s > > /opt/intel-16.0/compilers_and_libraries_2016.0.083/mac/compiler/lib/libifcore.a > make a difference? > > And what do you have for: > > cd /opt/HPC/mpich-3.1.4-intel16.0/lib > ls > nm -Ao libmpiifort.* |grep get_command_argument > > BTW: I'm assuming you have sept 20 or newer petsc master branch. > > Satish > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] Error message while make test petsc-master

2015-08-28 Thread Jeff Hammond
-link static --LIBS=-Wl,--start-group /home/hector/mkl_static/libmkl_intel_lp64.a /home/hector/mkl_static/libmkl_core.a /home/hector/mkl_static/libmkl_intel_thread.a -Wl,--end-group -liomp5 -ldl -lpthread -lm Thanks for your help! Hector -- Jeff Hammond jeff.scie...@gmail.com http

Re: [petsc-dev] Adding support memkind allocators in PETSc

2015-06-03 Thread Jeff Hammond
would be willing to review your pull request once you've implemented memkind_move_pages(). Jeff -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [petsc-dev] Adding support memkind allocators in PETSc

2015-06-03 Thread Jeff Hammond
On Wed, Jun 3, 2015 at 9:58 PM, Jed Brown j...@jedbrown.org wrote: Jeff Hammond jeff.scie...@gmail.com writes: The beauty of git/github is one can make branches to try out anything they want even if Jed thinks that he knows better than Intel how to write system software for Intel's hardware

Re: [petsc-dev] Adding support memkind allocators in PETSc

2015-06-03 Thread Jeff Hammond
If everyone would just indent with tabs, we could just set the indent spacing with our editors ;-) On Wed, Jun 3, 2015 at 10:01 PM, Barry Smith bsm...@mcs.anl.gov wrote: On Jun 3, 2015, at 9:58 PM, Jeff Hammond jeff.scie...@gmail.com wrote: http://git.mpich.org/mpich.git/blob/HEAD:/src/mpi

[petsc-dev] PetscSFFetchAndOpBegin_Window

2015-03-11 Thread Jeff Hammond
, but unless you have a mutex on 'sf', it is theoretically possible that sf->ranks[i] could be changed by another thread in such a way as to lead to an undefined or incorrect program. If you prohibit this behavior as part of the internal design contract, then it should be noted. Best, Jeff -- Jeff Hammond

Re: [petsc-dev] PetscSFFetchAndOpBegin_Window

2015-03-11 Thread Jeff Hammond
save you 0-10 cycles to use the automatic variable rather than to dereference the struct pointer again :-) Best, Jeff -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/
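The idiom under discussion, reduced to a sketch (the SF struct here is a stand-in, not PETSc's actual PetscSF layout):

  #include <stddef.h>

  typedef struct { const int *ranks; size_t nranks; } SF;  /* stand-in type */

  long sum_ranks(const SF *sf)
  {
    long sum = 0;
    for (size_t i = 0; i < sf->nranks; i++) {
      const int r = sf->ranks[i];  /* load once into an automatic variable;    */
      sum += r;                    /* later uses cannot see a concurrent write */
    }
    return sum;
  }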

Re: [petsc-dev] [mpich-discuss] turning off MPI abort messages

2014-02-22 Thread Jeff Hammond
I'll look into a general setenv-like mechanism for CVARs rather than an abort-specific one-off. Might be there already through MPI_T interface (which is standard, unlike MPIX). Jeff Sent from my iPhone On Feb 21, 2014, at 11:35 PM, Jed Brown j...@jedbrown.org wrote: Jeff Hammond jeff.scie
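The standard interface alluded to, sketched: enumerate the control variables an MPI library exposes. Variable names are implementation-specific, so whether an abort-verbosity knob appears depends on the library.

  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char **argv)
  {
    int provided, num, i;
    char name[256], desc[1024];
    (void)argc; (void)argv;
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);  /* usable even before MPI_Init */
    MPI_T_cvar_get_num(&num);
    for (i = 0; i < num; i++) {
      int nlen = sizeof(name), dlen = sizeof(desc), verbosity, binding, scope;
      MPI_Datatype dt;
      MPI_T_enum   et;
      MPI_T_cvar_get_info(i, name, &nlen, &verbosity, &dt, &et, desc, &dlen, &binding, &scope);
      printf("%s\n", name);  /* grep this list for abort/verbosity knobs */
    }
    MPI_T_finalize();
    return 0;
  }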

Re: [petsc-dev] [mpich-discuss] turning off MPI abort messages

2014-02-22 Thread Jeff Hammond
Thanks for catching this. I tested my patch but not through valgrind. Thanks also for figuring out the line break issue. I figured that was coming from the device but didn't track it down. Jeff On Sat, Feb 22, 2014 at 2:07 PM, Jed Brown j...@jedbrown.org wrote: Jeff Hammond jeff.scie

Re: [petsc-dev] [mpich-discuss] turning off MPI abort messages

2014-02-21 Thread Jeff Hammond
___ discuss mailing list disc...@mpich.org To manage subscription options or unsubscribe: https://lists.mpich.org/mailman/listinfo/discuss -- Jeff Hammond

Re: [petsc-dev] [mpich-discuss] turning off MPI abort messages

2014-02-21 Thread Jeff Hammond
(application called MPI_Abort...) and (2) the job launcher (BAD TERMINATION...). You can eliminate the messages from the job launcher by providing an error code of 0 in MPI_Abort. ~Jim. On Fri, Feb 21, 2014 at 1:19 PM, Jeff Hammond jeff.scie...@gmail.com wrote: Just configure MPICH

Re: [petsc-dev] [mpich-discuss] turning off MPI abort messages

2014-02-21 Thread Jeff Hammond
...). You can eliminate the messages from the job launcher by providing an error code of 0 in MPI_Abort. ~Jim. On Fri, Feb 21, 2014 at 1:19 PM, Jeff Hammond jeff.scie...@gmail.com wrote: Just configure MPICH such that snprintf isn't discovered by configure and you won't see

Re: [petsc-dev] [mpich-discuss] turning off MPI abort messages

2014-02-21 Thread Jeff Hammond
sources: (1) the MPICH library (application called MPI_Abort...) and (2) the job launcher (BAD TERMINATION...). You can eliminate the messages from the job launcher by providing an error code of 0 in MPI_Abort. ~Jim. On Fri, Feb 21, 2014 at 1:19 PM, Jeff Hammond jeff.scie...@gmail.com

Re: [petsc-dev] [mpich-discuss] turning off MPI abort messages

2014-02-21 Thread Jeff Hammond
=== ___ discuss mailing list disc...@mpich.org To manage subscription options or unsubscribe: https://lists.mpich.org/mailman/listinfo/discuss -- Jeff Hammond jeff.scie...@gmail.com

Re: [petsc-dev] [mpich-discuss] turning off MPI abort messages

2014-02-21 Thread Jeff Hammond
=== ___ discuss mailing list disc...@mpich.org To manage subscription options or unsubscribe: https://lists.mpich.org/mailman/listinfo/discuss -- Jeff Hammond

Re: [petsc-dev] [mpich-discuss] turning off MPI abort messages

2014-02-21 Thread Jeff Hammond
https://trac.mpich.org/projects/mpich/ticket/2038 has the patches. Jeff On Fri, Feb 21, 2014 at 3:47 PM, Jeff Hammond jeff.scie...@gmail.com wrote: Barry: Can you tolerate the following workaround for Hydra's error cleanup or do you need it to be internal? I presume you know enough bash

Re: [petsc-dev] [mpich-discuss] turning off MPI abort messages

2014-02-21 Thread Jeff Hammond
it, but he learned MPI from Bill Gropp, so he might not know anything ;-) I apologize for being unpleasant earlier. Best, Jeff Barry On Feb 21, 2014, at 3:10 PM, Jeff Hammond jeff.scie...@gmail.com wrote: Barry: Would the following behavior be acceptable to you? I have only made the changes
