On Fri, Jul 18, 2014 at 8:16 PM, John Yawney <jyawney...@gmail.com> wrote:
> Hello,
>
> I had a question about PETSc installations. On my local computer I
> configured PETSc (v 3.4.2) using the options:
>
> ./configure --with-cc=mpicc --with-cxx=mpic++ --download-f-blas-lapack
> --download-mpich --download-hypre
>
> I wrote a test program that defines a vector using DMDAs, computes a dot
> product, exchanges halo elements, and computes a low-order FD derivative of
> the vector. Under my installation of PETSc everything works fine. For some
> reason, when my colleagues run the program, they get segmentation fault
> errors. If they change the y and z boundary types to GHOSTED as well, the
> program runs to the end (though it still seg faults there), but they get a
> local value of the dot product. I've attached the main.cpp file for this
> program.
>
> When they installed their versions of PETSc they didn't use the
> --download-mpich option but instead configured with either:
>
> ./configure --download-f-blas-lapack --with-scalar-type=complex
>
> or with the option:
>
> --with-mpi-dir=/home/kim/anaconda/pkgs/mpich2-1.3-py27_0
>
> Could this be causing a problem with the parallelization under PETSc?

I have run your code on my machine up to P=8 and used valgrind. No problems
turned up, other than that a "return 0" is missing from main(). If there are
still problems, please have them send a stack trace.

  Thanks,

     Matt

> Thanks for the help and sorry for the long question.
>
> Best regards,
> John

--
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead.

-- Norbert Wiener
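[Editor's note: for readers following the thread, below is a minimal sketch of
the kind of DMDA test described above. It is not the attached main.cpp; the
grid size, boundary-type combination (GHOSTED in x, NONE in y and z), and the
values set in the vector are assumptions, and it is written against the
PETSc 3.4-era API (DMDA_BOUNDARY_* enum names) used in this thread.]

/* Hypothetical sketch, not the attached main.cpp: create a 3-D DMDA vector,
   compute a (collective) dot product, and exchange halo elements. */
#include <petscdmda.h>

int main(int argc, char **argv)
{
  DM             da;
  Vec            g, l;
  PetscScalar    dot;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); CHKERRQ(ierr);

  /* 3-D DMDA, 1 dof, stencil width 1; x is GHOSTED, y and z are NONE,
     mirroring the boundary-type combination described in the question.
     The 16^3 grid size is arbitrary. */
  ierr = DMDACreate3d(PETSC_COMM_WORLD,
                      DMDA_BOUNDARY_GHOSTED, DMDA_BOUNDARY_NONE, DMDA_BOUNDARY_NONE,
                      DMDA_STENCIL_STAR,
                      16, 16, 16,
                      PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
                      1, 1, NULL, NULL, NULL, &da); CHKERRQ(ierr);

  ierr = DMCreateGlobalVector(da, &g); CHKERRQ(ierr);
  ierr = DMCreateLocalVector(da, &l);  CHKERRQ(ierr);

  ierr = VecSet(g, 2.0); CHKERRQ(ierr);

  /* VecDot is collective, so every rank should report the same global value.
     Seeing only a rank-local value often means the ranks are not in the same
     MPI world, e.g. the code was linked against one MPI implementation but
     launched with another's mpiexec. */
  ierr = VecDot(g, g, &dot); CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "dot = %g\n",
                     (double)PetscRealPart(dot)); CHKERRQ(ierr);

  /* Halo exchange: scatter the global vector into the ghosted local vector. */
  ierr = DMGlobalToLocalBegin(da, g, INSERT_VALUES, l); CHKERRQ(ierr);
  ierr = DMGlobalToLocalEnd(da, g, INSERT_VALUES, l);   CHKERRQ(ierr);

  ierr = VecDestroy(&l); CHKERRQ(ierr);
  ierr = VecDestroy(&g); CHKERRQ(ierr);
  ierr = DMDestroy(&da); CHKERRQ(ierr);
  ierr = PetscFinalize(); CHKERRQ(ierr);
  return 0;   /* the "return 0" Matt notes is missing from the posted main() */
}

One way to produce the stack trace Matt asks for is to run a debugging build
(PETSc configures with debugging on by default) under a debugger, for example
with PETSc's -on_error_attach_debugger runtime option:

  mpiexec -n 4 ./main -on_error_attach_debugger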