On Sat, 19 Jul 2014, Francis Poulin wrote:

> Hello Barry,
> 
> I was one of the two people that had difficulties with getting the correct 
> results with John's code.  Previously, I didn't have valgrind installed so I 
> installed it using apt-get.  Then I configured it using the following:
> 
> ./configure --with-scalar-type=complex --with-cc=gcc --with-cxx=c++ 
> --with-fc=gfortran --with-c2html=0 --download-mpich --download-scalapack 
> --download-hypre
> 
> This is on Ubuntu and differs from what I tried before in that I am now 
> downloading MPICH, ScaLAPACK, and HYPRE.  I decided to download ScaLAPACK 
> since it seems like it could be useful.  I was told that HYPRE doesn't work 
> with complex variables.  Too bad, but not a big deal.

PETSc does not use ScaLAPACK. It's useful only if you are also using MUMPS:

--download-metis --download-parmetis --download-scalapack --download-mumps
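
e.g. a full configure line merging these with the options you listed (a 
sketch, not tested here):

./configure --with-scalar-type=complex --with-cc=gcc --with-cxx=c++ \
  --with-fc=gfortran --with-c2html=0 --download-mpich \
  --download-metis --download-parmetis --download-scalapack --download-mumps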

> 
> It completes configure, make all, and make test, and even gives me the 
> figures of the parallel efficiency (or something close to efficiency).  I 
> didn't catch any errors, but there are possible errors in the log.  When I 
> went to try building an example, I found that I can't use petscmpiexec to 
> run anything in serial or parallel.
> 
> fpoulin@vortex:~/software/petsc/src/ts/examples/tutorials$ 
> /home/fpoulin/software/petsc/bin/petscmpiexec -n 1 ./ex1
> -bash: /home/fpoulin/software/petsc/bin/petscmpiexec: /bin/csh: bad 
> interpreter: No such file or directory

Perhaps you do not have csh installed on this machine. You can use mpiexec 
directly:

./ex1
/home/fpoulin/software/petsc/arch-linux2-c-debug/bin/mpiexec -n 1 ./ex1
/home/fpoulin/software/petsc/arch-linux2-c-debug/bin/mpiexec -n 2 ./ex1
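
If you want petscmpiexec itself to work, installing csh should clear the
"bad interpreter" error, e.g. on Ubuntu:

sudo apt-get install csh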

Satish

> 
> I am sorry to bother you with this.  I am also having issues with my 
> installation on my Mac, but I thought that if I can figure this one out then 
> maybe I will have a better idea of what's wrong with the other.
> 
> Thanks,
> Francis
> 
> 
> ------------------
> Francis Poulin
> Associate Professor
> Associate Chair, Undergraduate Studies
> Department of Applied Mathematics
> University of Waterloo
> 
> email:           fpou...@uwaterloo.ca
> Web:            https://uwaterloo.ca/poulin-research-group/
> Telephone:  +1 519 888 4567 x32637
> 
> 
> ________________________________________
> From: Barry Smith [bsm...@mcs.anl.gov]
> Sent: Friday, July 18, 2014 9:57 PM
> To: John Yawney
> Cc: petsc-users@mcs.anl.gov; Francis Poulin; Kim Usi
> Subject: Re: [petsc-users] Question about PETSc installs and MPI
> 
>    I ran the program on linux with 1,2, 4 processes under valgrind for both 
> types of boundary conditions and it ran fine.
> 
>    Suggest your colleagues do a test configure of PETSc using --download-mpich 
> and see if they still get the problem or if it runs ok.
> 
>    They can also run with valgrind and see what it reports: 
> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
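
[A typical invocation along the lines of that FAQ entry, with the program 
name and its options as placeholders:

mpiexec -n 2 valgrind --tool=memcheck -q --num-callers=20 \
  --log-file=valgrind.log.%p ./yourprogram -your_options ]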
> 
>    Barry
> 
> On Jul 18, 2014, at 8:16 PM, John Yawney <jyawney...@gmail.com> wrote:
> 
> > Hello,
> >
> > I had a question about PETSc installations. On my local computer I 
> > configured PETSc (v 3.4.2) using the options:
> >
> > ./configure --with-cc=mpicc --with-cxx=mpic++ --download-f-blas-lapack 
> > --download-mpich --download-hypre
> >
> > I wrote a test program that defines a vector using DMDAs, computes a dot 
> > product, exchanges halo elements, and computes a low-order FD derivative of 
> > the vector. Under my installation of PETSc everything works fine. For some 
> > reason, when my colleagues run the program, they get segmentation fault 
> > errors. If they change the y and z boundary types to GHOSTED as well, the 
> > program runs to the end (though it still seg faults at the end), but the 
> > dot product they compute is only the local value. I've attached the 
> > main.cpp file for this program.
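
[A minimal sketch of the program structure described above (not the attached 
main.cpp); the grid size, boundary types, and stencil width are illustrative 
assumptions, written against the PETSc 3.4-era DMDA API:]

#include <petscdmda.h>

int main(int argc, char **argv)
{
  DM             da;
  Vec            gvec, lvec;
  PetscScalar    dot;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);CHKERRQ(ierr);

  /* 3D DMDA, ghosted boundaries in all directions, 1 dof, stencil width 1 */
  ierr = DMDACreate3d(PETSC_COMM_WORLD,
                      DMDA_BOUNDARY_GHOSTED, DMDA_BOUNDARY_GHOSTED,
                      DMDA_BOUNDARY_GHOSTED, DMDA_STENCIL_STAR,
                      16, 16, 16,                       /* global grid size */
                      PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
                      1, 1, NULL, NULL, NULL, &da);CHKERRQ(ierr);

  ierr = DMCreateGlobalVector(da, &gvec);CHKERRQ(ierr);
  ierr = DMCreateLocalVector(da, &lvec);CHKERRQ(ierr);
  ierr = VecSet(gvec, 1.0);CHKERRQ(ierr);

  /* VecDot is collective: every rank gets the same global result.
     Summing a local array by hand gives only the local contribution. */
  ierr = VecDot(gvec, gvec, &dot);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "dot = %g\n",
                     (double)PetscRealPart(dot));CHKERRQ(ierr);

  /* halo exchange: scatter global values, including ghost points,
     into the local (ghosted) vector; a finite-difference derivative
     over lvec would follow here (omitted) */
  ierr = DMGlobalToLocalBegin(da, gvec, INSERT_VALUES, lvec);CHKERRQ(ierr);
  ierr = DMGlobalToLocalEnd(da, gvec, INSERT_VALUES, lvec);CHKERRQ(ierr);

  ierr = VecDestroy(&lvec);CHKERRQ(ierr);
  ierr = VecDestroy(&gvec);CHKERRQ(ierr);
  ierr = DMDestroy(&da);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return 0;
}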
> >
> > When they installed their versions of PETSc they didn't use the 
> > --download-mpich option but instead used either:
> > ./configure --download-f-blas-lapack --with-scalar-type=complex
> > or with the option: --with-mpi-dir=/home/kim/anaconda/pkgs/mpich2-1.3-py27_0
> >
> > Could this be causing a problem with the parallelization under PETSc?
> >
> > Thanks for the help and sorry for the long question.
> >
> > Best regards,
> > John
> > <main.cpp>
> 
> 
