Re: [petsc-users] Warning while compiling Fortran with PETSc

2022-02-09 Thread Balay, Satish via petsc-users
Are you using the same MPI to build both PETSc and your application?

Satish
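
A quick way to narrow this down is to compare the mpif90 your makefile uses with
the MPI that PETSc was built against. A minimal sketch (paths are illustrative;
with --download-openmpi the wrappers are installed under
$PETSC_DIR/$PETSC_ARCH/bin):

  which mpif90                                # wrapper currently used to build the application
  mpif90 --version
  ls $PETSC_DIR/$PETSC_ARCH/bin/              # mpicc/mpif90 installed by --download-openmpi
  make FC=$PETSC_DIR/$PETSC_ARCH/bin/mpif90   # rebuild the application with PETSc's own wrapper

If the two wrappers point to different MPI installations, mixed MPI module files
can produce exactly this kind of COMMON-block size warning.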

On Wed, 2022-02-09 at 05:21 +0100, Bojan Niceno wrote:
> To whom it may concern,
> 
> 
> I am working on a Fortran (2003) computational fluid dynamics solver
> which is quite mature, has been parallelized with MPI from the very
> beginning, and comes with its own suite of Krylov solvers.  Although the
> code is self-contained, I am inclined to believe that it would be better
> to use PETSc instead of my own home-grown solvers.
> 
> In the attempt to do so, I have installed PETSc 3.16.4 with the following
> options:
> 
> ./configure --with-debugging=yes --download-openmpi=yes --download-
> fblaslapack=yes --download-metis=yes --download-parmetis=yes --
> download-cmake=yes
> 
> on a workstation running Ubuntu 20.04 LTS.  The mpif90 command which
> I use to compile the code wraps gfortran with OpenMPI, hence the
> option "--download-openmpi=yes" when configuring PETSc.
> 
> Anyhow, the installation of PETSc went fine and I managed to link and run
> it with my code, but I am getting the following messages during
> compilation:
> 
> Petsc_Mod.f90:18:6:
> 
>    18 |   use PetscMat, only: tMat, MAT_FINAL_ASSEMBLY
>       |      1
> Warning: Named COMMON block ‘mpi_fortran_bottom’ at (1) shall be of
> the same size as elsewhere (4 vs 8 bytes)
> 
> Petsc_Mod.f90 is a module I wrote for interfacing with PETSc.  Everything
> works, but these messages give me reason to worry.
> 
> Can you tell what causes these warnings?  I would guess they might
> appear if one mixes OpenMPI with MPICH, but I don't think I even have
> MPICH on my system.
> 
> Please let me know what you think.
> 
>     Cheers,
> 
>     Bojan
> 
> 
> 
> 



Re: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin

2020-01-21 Thread Balay, Satish via petsc-users
I would suggest installing a regular 32-bit-int BLAS/LAPACK - and then using it
with the --with-blaslapack-lib option.

[We don't know what -fdefault-integer-8 does with --download-fblaslapack - whether
it really creates the --known-64-bit-blas-indices variant of BLAS/LAPACK or not.]

Satish
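
A hedged sketch of what such a configure invocation could look like (the library
path and names are illustrative - point them at whatever 32-bit-integer
BLAS/LAPACK is installed on the machine):

  ./configure --with-64-bit-indices \
    --with-blaslapack-lib="-L/usr/lib/x86_64-linux-gnu -llapack -lblas"

With a 32-bit-integer BLAS/LAPACK, PetscBLASInt remains 32-bit even when
PetscInt is 64-bit, which is the combination being recommended here.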

On Tue, 21 Jan 2020, Smith, Barry F. via petsc-users wrote:

> 
>    I would avoid OpenBLAS; it just introduces one new variable that could
> cause problems.
> 
>    PetscErrorCode is ALWAYS 32 bit, PetscInt becomes 64 bit with
> --with-64-bit-indices, PetscMPIInt is ALWAYS 32 bit, and PetscBLASInt is
> usually 32 bit unless you build with a special BLAS that supports 64 bit
> indices.
> 
>    In theory ex5f should be fine; we test it all the time with all
> possible values of the integer. Please redo the ./configure with
> --with-64-bit-indices --download-fblaslapack and send the configure.log; this
> provides the most useful information on the decisions configure has made.
> 
> Barry
> 
> 
> > On Jan 21, 2020, at 4:28 AM, Дмитрий Мельничук 
> >  wrote:
> > 
> > > First you need to figure out what is triggering:
> > 
> > > C:/MPI/Bin/mpiexec.exe: error while loading shared libraries: ?: cannot 
> > > open shared object file: No such file or directory
> > 
> > > Googling it finds all kinds of suggestions for Linux. But Windows? Maybe 
> > > the debugger will help.
> > 
> > >   Second
> > >   VecNorm_Seq line 221 
> > > /cygdrive/d/Computational_geomechanics/installation/petsc-barry/src/vec/vec/impls/seq/bvec2.c
> > 
> > 
> > >  Debugger is best to find out what is triggering this. Since it is the C 
> > > side of things it would be odd that the Fortran change affects it.
> > 
> > >   Barry
> >  
> > 
> > I am in the process of finding out the causes of these errors.
> > 
> > I'm inclined to think that BLAS still has some influence on what is
> > happening, because testing the 32-bit version of PETSc gives this weird error
> > with mpiexec.exe, while the Fortran example ex5f completes successfully.
> > 
> > I need to say that my solver compiled with the 64-bit version of PETSc failed
> > with a Segmentation Violation error (the same as ex5f) when calling
> > KSPSolve(Krylov,Vec_F,Vec_U,ierr).
> > During execution KSPSolve calls VecNorm_Seq in bvec2.c. VecNorm_Seq
> > uses several integer types: PetscErrorCode, PetscInt, PetscBLASInt.
> > I suspect that PetscBLASInt may conflict with PetscInt.
> > Also I noted that the execution of KSPSolve() does not even start, so the
> > arguments (Krylov,Vec_F,Vec_U,ierr) cannot be passed to KSPSolve().
> > (I inserted an fprintf() at the top of KSPSolve and saw no output.)
> > 
> >  
> > So I tried to configure PETSc with --download-fblaslapack 
> > --with-64-bit-blas-indices, but got an error that
> >  
> > fblaslapack does not support -with-64-bit-blas-indices
> >  
> > Switching to the flags --download-openblas -with-64-bit-blas-indices was
> > unsuccessful too because of the error:
> >  
> > Error during download/extract/detection of OPENBLAS:
> > Unable to download openblas
> > Could not execute "['git clone https://github.com/xianyi/OpenBLAS.git 
> > /cygdrive/d/Computational_geomechanics/installation/petsc-barry/arch-mswin-c-debug/externalpackages/git.openblas']":
> > fatal: destination path 
> > '/cygdrive/d/Computational_geomechanics/installation/petsc-barry/arch-mswin-c-debug/externalpackages/git.openblas'
> >  already exists and is not an empty directory.
> > Unable to download package OPENBLAS from: 
> > git://https://github.com/xianyi/OpenBLAS.git
> > * If URL specified manually - perhaps there is a typo?
> > * If your network is disconnected - please reconnect and rerun ./configure
> > * Or perhaps you have a firewall blocking the download
> > * You can run with --with-packages-download-dir=/adirectory and ./configure 
> > will instruct you what packages to download manually
> > * or you can download the above URL manually, to /yourselectedlocation
> >   and use the configure option:
> >   --download-openblas=/yourselectedlocation
> > Unable to download openblas
> > Could not execute "['git clone https://github.com/xianyi/OpenBLAS.git 
> > /cygdrive/d/Computational_geomechanics/installation/petsc-barry/arch-mswin-c-debug/externalpackages/git.openblas']":
> > fatal: destination path 
> > '/cygdrive/d/Computational_geomechanics/installation/petsc-barry/arch-mswin-c-debug/externalpackages/git.openblas'
> >  already exists and is not an empty directory.
> > Unable to download package OPENBLAS from: 
> > git://https://github.com/xianyi/OpenBLAS.git
> > * If URL specified manually - perhaps there is a typo?
> > * If your network is disconnected - please reconnect and rerun ./configure
> > * Or perhaps you have a firewall blocking the download
> > * You can run with --with-packages-download-dir=/adirectory and ./configure 
> > will instruct you what packages to download manually
> > * or you can download the above URL manually, to /yourselectedlocation
> >   and use 

Re: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin

2020-01-15 Thread Balay, Satish via petsc-users
I have some changes (incomplete) here - 

my hack to bfort.

diff --git a/src/bfort/bfort.c b/src/bfort/bfort.c
index 0efe900..31ff154 100644
--- a/src/bfort/bfort.c
+++ b/src/bfort/bfort.c
@@ -1654,7 +1654,7 @@ void PrintDefinition( FILE *fout, int is_function, char 
*name, int nstrings,
 
 /* Add a "decl/result(name) for functions */
 if (useFerr) {
-   OutputFortranToken( fout, 7, "integer" );
+   OutputFortranToken( fout, 7, "PetscErrorCode" );
OutputFortranToken( fout, 1, errArgNameParm);
 } else if (is_function) {
OutputFortranToken( fout, 7, ArgToFortran( rt->name ) );


And my changes to petsc are on branch balay/fix-ftn-i8/maint

Satish

On Wed, 15 Jan 2020, Smith, Barry F. via petsc-users wrote:

> 
>   Working on it now; may be doable
> 
> 
> 
> > On Jan 15, 2020, at 11:55 AM, Matthew Knepley  wrote:
> > 
> > On Wed, Jan 15, 2020 at 10:26 AM Дмитрий Мельничук 
> >  wrote:
> > > And I'm not sure why you are having to use PetscInt for ierr. All PETSc 
> > > routines should be using 'PetscErrorCode' for ierr
> > 
> > If I define ierr as PetscErrorCode for all subroutines given below
> > 
> > call VecDuplicate(Vec_U,Vec_Um,ierr)
> > call VecCopy(Vec_U,Vec_Um,ierr)
> > call VecGetLocalSize(Vec_U,j,ierr)
> > call VecGetOwnershipRange(Vec_U,j1,j2,ierr)
> > 
> > then errors occur with first three subroutines:
> > Error: Type mismatch in argument «z» at (1); passed INTEGER(4) to 
> > INTEGER(8).
> > 
> > Barry,
> > 
> > It looks like the ftn-auto interfaces are using 'integer' for the error 
> > code, whereas the ftn-custom is using PetscErrorCode.
> > Could we make the generated ones use integer?
> > 
> >   Thanks,
> > 
> >  Matt
> >  
> > Therefore I was forced to define ierr as PetscInt for the VecDuplicate,
> > VecCopy, and VecGetLocalSize subroutines to fix these errors.
> > Why some subroutines use the 8-byte integer type for ierr (PetscInt), while
> > others use the 4-byte integer type (PetscErrorCode), remains a mystery
> > to me.
> > 
> > > What version of PETSc are you using?
> > 
> > version 3.12.2
> > 
> > > Are you seeing this issue with a PETSc example?
> > 
> > I will check it tomorrow  and let you know.
> > 
> > Kind regards,
> > Dmitry Melnichuk
> > 
> >  
> >  
> > 15.01.2020, 17:14, "Balay, Satish" :
> > -fdefault-integer-8 is likely to break things [esp with MPI - where 
> > 'integer' is used everywhere for ex - MPI_Comm etc - so MPI includes become 
> > incompatible with the MPI library with -fdefault-integer-8.]
> > 
> > And I'm not sure why you are having to use PetscInt for ierr. All PETSc 
> > routines should be using 'PetscErrorCode' for ierr
> > 
> > What version of PETSc are you using? Are you seeing this issue with a PETSc 
> > example?
> > 
> > Satish
> > 
> > On Wed, 15 Jan 2020, Дмитрий Мельничук wrote:
> >  
> > 
> >  Hello all!
> >  At present I need to compile a solver called Defmod
> > (https://bitbucket.org/stali/defmod/wiki/Home), which is written in Fortran
> > 95.
> >  Defmod uses PETSc for solving the linear algebra system.
> >  Compiling the solver with the 32-bit version of PETSc does not cause any
> > problem.
> >  But compiling the solver with the 64-bit version of PETSc produces an error
> > related to the size of the ierr PETSc variable.
> >   
> >  1. For example, consider the following statements written in Fortran:
> >   
> >   
> >  PetscErrorCode :: ierr_m
> >  PetscInt :: ierr
> >  ...
> >  ...
> >  call VecDuplicate(Vec_U,Vec_Um,ierr) 
> >  call VecCopy(Vec_U,Vec_Um,ierr)
> >  call VecGetLocalSize(Vec_U,j,ierr)
> >  call VecGetOwnershipRange(Vec_U,j1,j2,ierr_m)
> >   
> >   
> >  As can be seen, the first three subroutines require ierr to be of size
> > INTEGER(8), while the last subroutine (VecGetOwnershipRange) requires ierr
> > to be of size INTEGER(4).
> >  Using the same integer format gives an error:
> >   
> >  There is no specific subroutine for the generic ‘vecgetownershiprange’ at 
> > (1)
> >   
> >  2. Another example is:
> >   
> >   
> >  call MatAssemblyBegin(Mat_K,Mat_Final_Assembly,ierr)
> >  CHKERRA(ierr)
> >  call MatAssemblyEnd(Mat_K,Mat_Final_Assembly,ierr)
> >   
> >   
> >  I am not able to define an appropriate size of ierr in CHKERRA(ierr). If I 
> > choose INTEGER(8), the error "Type mismatch in argument ‘ierr’ at (1); 
> > passed INTEGER(8) to
> >  INTEGER(4)" occurs.
> >  If I define ierr  as INTEGER(4), the error "Type mismatch in argument 
> > ‘ierr’ at (1); passed INTEGER(4) to INTEGER(8)" appears.
> >
> >  3. If I change the sizes of the ierr variables as the error messages require,
> > the compilation completes successfully, but an error occurs when calculating
> > the RHS vector with the
> >  following message:
> >  [0]PETSC ERROR: Out of range index value -4 cannot be negative 
> >   
> > 
> >  Command to configure 32-bit version of PETSc under Windows 10 using Cygwin:
> >  ./configure --with-cc=x86_64-w64-mingw32-gcc 
> > --with-cxx=x86_64-w64-mingw32-g++ --with-fc=x86_64-w64-mingw32-gfortran 
> > 

Re: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin

2020-01-15 Thread Balay, Satish via petsc-users
On Wed, 15 Jan 2020, Matthew Knepley wrote:

> On Wed, Jan 15, 2020 at 10:26 AM Дмитрий Мельничук <
> dmitry.melnic...@geosteertech.com> wrote:
> 
> > > And I'm not sure why you are having to use PetscInt for ierr. All PETSc
> > routines should be using 'PetscErrorCode' for ierr
> >
> > If I define *ierr *as *PetscErrorCode *for all subroutines given below
> >
> > call VecDuplicate(Vec_U,Vec_Um,ierr)
> > call VecCopy(Vec_U,Vec_Um,ierr)
> > call VecGetLocalSize(Vec_U,j,ierr)
> > call VecGetOwnershipRange(Vec_U,j1,j2,ierr)
> >
> > then errors occur with first three subroutines:
> > *Error: Type mismatch in argument «z» at (1); passed INTEGER(4) to
> > INTEGER(8).*
> >
> 
> Barry,
> 
> It looks like the ftn-auto interfaces are using 'integer' for the error
> code, whereas the ftn-custom is using PetscErrorCode.
> Could we make the generated ones use integer?

Well it needs a fix to bfort. But then there are a bunch of other issues wrt 
MPI - it's not clear [to me] how to fix them [wrt -fdefault-integer-8]

Satish

> 
>   Thanks,
> 
>  Matt
> 
> 
> > Therefore I was forced to define *ierr *as *PetscInt *for VecDuplicate,
> > VecCopy, VecGetLocalSize subroutines to fix these errors.
> > Why some subroutines use the 8-byte integer type for *ierr* (*PetscInt*),
> > while others use the 4-byte integer type (*PetscErrorCode*), remains
> > a mystery to me.
> >
> > > What version of PETSc are you using?
> >
> > version 3.12.2
> >
> > > Are you seeing this issue with a PETSc example?
> >
> > I will check it tomorrow  and let you know.
> >
> > Kind regards,
> > Dmitry Melnichuk
> >
> >
> >
> > 15.01.2020, 17:14, "Balay, Satish" :
> >
> > -fdefault-integer-8 is likely to break things [esp with MPI - where
> > 'integer' is used everywhere for ex - MPI_Comm etc - so MPI includes become
> > incompatible with the MPI library with -fdefault-integer-8.]
> >
> > And I'm not sure why you are having to use PetscInt for ierr. All PETSc
> > routines should be using 'PetscErrorCode' for ierr
> >
> > What version of PETSc are you using? Are you seeing this issue with a
> > PETSc example?
> >
> > Satish
> >
> > On Wed, 15 Jan 2020, Дмитрий Мельничук wrote:
> >
> >
> >  Hello all!
> >  At present I need to compile a solver called Defmod (
> > https://bitbucket.org/stali/defmod/wiki/Home), which is written in
> > Fortran 95.
> >  Defmod uses PETSc for solving the linear algebra system.
> >  Compiling the solver with the 32-bit version of PETSc does not cause any
> > problem.
> >  But compiling the solver with the 64-bit version of PETSc produces an error
> > related to the size of the ierr PETSc variable.
> >
> >  1. For example, consider the following statements written in Fortran:
> >
> >
> >  PetscErrorCode :: ierr_m
> >  PetscInt :: ierr
> >  ...
> >  ...
> >  call VecDuplicate(Vec_U,Vec_Um,ierr)
> >  call VecCopy(Vec_U,Vec_Um,ierr)
> >  call VecGetLocalSize(Vec_U,j,ierr)
> >  call VecGetOwnershipRange(Vec_U,j1,j2,ierr_m)
> >
> >
> >  As can be seen, the first three subroutines require ierr to be of size
> > INTEGER(8), while the last subroutine (VecGetOwnershipRange)
> > requires ierr to be of size INTEGER(4).
> >  Using the same integer format gives an error:
> >
> >  There is no specific subroutine for the generic ‘vecgetownershiprange’ at
> > (1)
> >
> >  2. Another example is:
> >
> >
> >  call MatAssemblyBegin(Mat_K,Mat_Final_Assembly,ierr)
> >  CHKERRA(ierr)
> >  call MatAssemblyEnd(Mat_K,Mat_Final_Assembly,ierr)
> >
> >
> >  I am not able to define an appropriate size of ierr in CHKERRA(ierr). If
> > I choose INTEGER(8), the error "Type mismatch in argument ‘ierr’ at (1);
> > passed INTEGER(8) to
> >  INTEGER(4)" occurs.
> >  If I define ierr  as INTEGER(4), the error "Type mismatch in argument
> > ‘ierr’ at (1); passed INTEGER(4) to INTEGER(8)" appears.
> >
> >  3. If I change the sizes of the ierr variables as the error messages require,
> > the compilation completes successfully, but an error occurs when calculating
> > the RHS vector with the
> >  following message:
> >  [0]PETSC ERROR: Out of range index value -4 cannot be negative
> >
> >
> >  Command to configure 32-bit version of PETSc under Windows 10 using
> > Cygwin:
> >  ./configure --with-cc=x86_64-w64-mingw32-gcc
> > --with-cxx=x86_64-w64-mingw32-g++ --with-fc=x86_64-w64-mingw32-gfortran
> > --download-fblaslapack
> >  --with-mpi-include=/cygdrive/c/MPISDK/Include
> > --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a
> > --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes
> >  -CFLAGS='-O2' -CXXFLAGS='-O2' -FFLAGS='-O2 -static-libgfortran -static
> > -lpthread -fno-range-check' --with-shared-libraries=no
> >   Command to configure 64-bit version of PETSc under Windows 10 using
> > Cygwin:./configure --with-cc=x86_64-w64-mingw32-gcc
> > --with-cxx=x86_64-w64-mingw32-g++
> >  --with-fc=x86_64-w64-mingw32-gfortran --download-fblaslapack
> > --with-mpi-include=/cygdrive/c/MPISDK/Include
> > --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a
> >  

Re: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin

2020-01-15 Thread Balay, Satish via petsc-users
-fdefault-integer-8 is likely to break things [esp with MPI - where 'integer' 
is used everywhere for ex - MPI_Comm etc - so MPI includes become incompatible 
with the MPI library with -fdefault-integer-8.]

And I'm not sure why you are having to use PetscInt for ierr. All PETSc 
routines should be using 'PetscErrorCode' for ierr

What version of PETSc are you using? Are you seeing this issue with a PETSc 
example?

Satish
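
As a minimal sketch of the recommended usage (the Vec_U/Vec_Um names are just the
ones from this thread; with --with-64-bit-indices PetscInt is INTEGER(8) while
PetscErrorCode stays INTEGER(4), so the two should not be interchanged for ierr):

  PetscErrorCode :: ierr          ! error return for every PETSc call
  PetscInt       :: n, j1, j2     ! sizes/indices - 64-bit in a --with-64-bit-indices build

  call VecDuplicate(Vec_U, Vec_Um, ierr)
  CHKERRA(ierr)
  call VecGetLocalSize(Vec_U, n, ierr)
  CHKERRA(ierr)
  call VecGetOwnershipRange(Vec_U, j1, j2, ierr)
  CHKERRA(ierr)

The type mismatches reported above come from the generated ftn-auto interfaces
declaring the error argument as plain 'integer', which the branch mentioned here
(balay/fix-ftn-i8/maint) was addressing.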

On Wed, 15 Jan 2020, Дмитрий Мельничук wrote:

> Hello all!
>  At present I need to compile a solver called Defmod
> (https://bitbucket.org/stali/defmod/wiki/Home), which is written in Fortran
> 95.
> Defmod uses PETSc for solving the linear algebra system.
> Compiling the solver with the 32-bit version of PETSc does not cause any problem.
> But compiling the solver with the 64-bit version of PETSc produces an error
> related to the size of the ierr PETSc variable.
>  
> 1. For example, consider the following statements written in Fortran:
>  
>  
> PetscErrorCode :: ierr_m
> PetscInt :: ierr
> ...
> ...
> call VecDuplicate(Vec_U,Vec_Um,ierr) 
> call VecCopy(Vec_U,Vec_Um,ierr)
> call VecGetLocalSize(Vec_U,j,ierr)
> call VecGetOwnershipRange(Vec_U,j1,j2,ierr_m)
>  
>  
> As can be seen, the first three subroutines require ierr to be of size INTEGER(8),
> while the last subroutine (VecGetOwnershipRange) requires ierr to be of size
> INTEGER(4).
> Using the same integer format gives an error:
>  
> There is no specific subroutine for the generic ‘vecgetownershiprange’ at (1)
>  
> 2. Another example is:
>  
>  
> call MatAssemblyBegin(Mat_K,Mat_Final_Assembly,ierr)
> CHKERRA(ierr)
> call MatAssemblyEnd(Mat_K,Mat_Final_Assembly,ierr)
>  
>  
> I am not able to define an appropriate size of ierr in CHKERRA(ierr). If I 
> choose INTEGER(8), the error "Type mismatch in argument ‘ierr’ at (1); passed 
> INTEGER(8) to
> INTEGER(4)" occurs.
> If I define ierr  as INTEGER(4), the error "Type mismatch in argument ‘ierr’ 
> at (1); passed INTEGER(4) to INTEGER(8)" appears.
>   
> 3. If I change the sizes of the ierr variables as the error messages require, the
> compilation completes successfully, but an error occurs when calculating the
> RHS vector with the
> following message:
> [0]PETSC ERROR: Out of range index value -4 cannot be negative 
>  
> 
> Command to configure 32-bit version of PETSc under Windows 10 using Cygwin:
> ./configure --with-cc=x86_64-w64-mingw32-gcc 
> --with-cxx=x86_64-w64-mingw32-g++ --with-fc=x86_64-w64-mingw32-gfortran 
> --download-fblaslapack
> --with-mpi-include=/cygdrive/c/MPISDK/Include 
> --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a 
> --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes
> -CFLAGS='-O2' -CXXFLAGS='-O2' -FFLAGS='-O2 -static-libgfortran -static 
> -lpthread -fno-range-check' --with-shared-libraries=no
>  Command to configure 64-bit version of PETSc under Windows 10 using 
> Cygwin:./configure --with-cc=x86_64-w64-mingw32-gcc 
> --with-cxx=x86_64-w64-mingw32-g++
> --with-fc=x86_64-w64-mingw32-gfortran --download-fblaslapack 
> --with-mpi-include=/cygdrive/c/MPISDK/Include 
> --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a
> --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes 
> -CFLAGS='-O2' -CXXFLAGS='-O2' -FFLAGS='-O2 -static-libgfortran -static 
> -lpthread -fno-range-check
> -fdefault-integer-8' --with-shared-libraries=no --with-64-bit-indices 
> --known-64-bit-blas-indices
> 
>  
> Kind regards,
> Dmitry Melnichuk
> 
> 


Re: [petsc-users] petsc without MPI

2019-11-19 Thread Balay, Satish via petsc-users
Not sure why you are looking at this flag and interpreting it - PETSc code uses 
the flag PETSC_HAVE_MPIUNI to check for a sequential build.

[This one just states that the MPI module in configure - similar to BLASLAPACK
etc. - is enabled.]

Satish
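
For reference, a minimal sketch of how a sequential (MPI-uni) build is detected
in code that includes the PETSc headers - PETSC_HAVE_MPI in petscconf.h is not
the right flag to test for this:

  #include <petscsys.h>

  /* PETSC_HAVE_MPIUNI is defined when PETSc was configured with --with-mpi=0 */
  #if defined(PETSC_HAVE_MPIUNI)
    /* sequential build: MPI calls are satisfied by PETSc's MPI-uni stubs */
  #else
    /* build against a real MPI library */
  #endif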

On Tue, 19 Nov 2019, Povolotskyi, Mykhailo via petsc-users wrote:

> Let me explain the problem.
> 
> This log file has
> 
> #ifndef PETSC_HAVE_MPI
> #define PETSC_HAVE_MPI 1
> #endif
> 
> while I need to have PETSC without MPI.
> 
> On 11/19/2019 2:55 PM, Matthew Knepley wrote:
The log you sent has configure completing successfully. Please retry and send 
the log for a failed run.
> 
>   Thanks,
> 
>  Matt
> 
> On Tue, Nov 19, 2019 at 2:53 PM Povolotskyi, Mykhailo via petsc-users 
> <petsc-users@mcs.anl.gov> wrote:
> Why it did not work then?
> 
> On 11/19/2019 2:51 PM, Balay, Satish wrote:
> > And I see from configure.log - you are using the correct option.
> >
> > Configure Options: --configModules=PETSc.Configure 
> > --optionsModule=config.compilerOptions --with-scalar-type=real --with-x=0 
> > --with-hdf5=0 --with-single-library=1 --with-shared-libraries=0 
> > --with-log=0 --with-mpi=0 --with-clanguage=C++ --with-cxx-dialect=C++11 
> > --CXXFLAGS="-fopenmp -fPIC" --CFLAGS="-fopenmp -fPIC" --with-fortran=0 
> > --FFLAGS="-fopenmp -fPIC" --with-64-bit-indices=0 --with-debugging=0 
> > --with-cc=gcc --with-fc=gfortran --with-cxx=g++ COPTFLAGS= CXXOPTFLAGS= 
> > FOPTFLAGS= --download-metis=0 --download-superlu_dist=0 
> > --download-parmetis=0 
> > --with-valgrind-dir=/apps/brown/valgrind/3.13.0_gcc-4.8.5 
> > --download-mumps=1 --with-mumps-serial=1 --with-fortran-kernels=0 
> > --with-blaslapack-lib="-Wl,-rpath,/apps/cent7/intel/compilers_and_libraries_2017.1.132/linux/mkl/lib/intel64
> >   
> > -L/apps/cent7/intel/compilers_and_libraries_2017.1.132/linux/mkl/lib/intel64
> >  -lmkl_intel_lp64 -lmkl_gnu_thread -lmkl_core " 
> > --with-blacs-lib=/apps/cent7/intel/compilers_and_libraries_2017.1.132/linux/mkl/lib/intel64/libmkl_blacs_intelmpi_lp64.so
> >  
> > --with-blacs-include=/apps/cent7/intel/compilers_and_libraries_2017.1.132/linux/mkl/include
> >  --with-scalapack=0
> > <<<<<<<
> >
> > And configure completed successfully. What issue are you encountering? Why 
> > do you think its activating MPI?
> >
> > Satish
> >
> >
> > On Tue, 19 Nov 2019, Balay, Satish via petsc-users wrote:
> >
> >> On Tue, 19 Nov 2019, Povolotskyi, Mykhailo via petsc-users wrote:
> >>
> >>> Hello,
> >>>
> >>> I'm trying to build PETSC without MPI.
> >>>
> >>> Even if I specify --with_mpi=0, the configuration script still activates
> >>> MPI.
> >>>
> >>> I attach the configure.log.
> >>>
> >>> What am I doing wrong?
> >> The option is --with-mpi=0
> >>
> >> Satish
> >>
> >>
> >>> Thank you,
> >>>
> >>> Michael.
> >>>
> >>>
> 
> 
> 
> --
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/
> 


Re: [petsc-users] petsc without MPI

2019-11-19 Thread Balay, Satish via petsc-users
And I see from configure.log - you are using the correct option.

>>>>>>>
Configure Options: --configModules=PETSc.Configure 
--optionsModule=config.compilerOptions --with-scalar-type=real --with-x=0 
--with-hdf5=0 --with-single-library=1 --with-shared-libraries=0 --with-log=0 
--with-mpi=0 --with-clanguage=C++ --with-cxx-dialect=C++11 --CXXFLAGS="-fopenmp 
-fPIC" --CFLAGS="-fopenmp -fPIC" --with-fortran=0 --FFLAGS="-fopenmp -fPIC" 
--with-64-bit-indices=0 --with-debugging=0 --with-cc=gcc --with-fc=gfortran 
--with-cxx=g++ COPTFLAGS= CXXOPTFLAGS= FOPTFLAGS= --download-metis=0 
--download-superlu_dist=0 --download-parmetis=0 
--with-valgrind-dir=/apps/brown/valgrind/3.13.0_gcc-4.8.5 --download-mumps=1 
--with-mumps-serial=1 --with-fortran-kernels=0 
--with-blaslapack-lib="-Wl,-rpath,/apps/cent7/intel/compilers_and_libraries_2017.1.132/linux/mkl/lib/intel64
  -L/apps/cent7/intel/compilers_and_libraries_2017.1.132/linux/mkl/lib/intel64 
-lmkl_intel_lp64 -lmkl_gnu_thread -lmkl_core " 
--with-blacs-lib=/apps/cent7/intel/compilers_and_libraries_2017.1.132/linux/mkl/lib/intel64/libmkl_blacs_intelmpi_lp64.so
 
--with-blacs-include=/apps/cent7/intel/compilers_and_libraries_2017.1.132/linux/mkl/include
 --with-scalapack=0
<<<<<<<

And configure completed successfully. What issue are you encountering? Why do 
you think its activating MPI?

Satish


On Tue, 19 Nov 2019, Balay, Satish via petsc-users wrote:

> On Tue, 19 Nov 2019, Povolotskyi, Mykhailo via petsc-users wrote:
> 
> > Hello,
> > 
> > I'm trying to build PETSC without MPI.
> > 
> > Even if I specify --with_mpi=0, the configuration script still activates 
> > MPI.
> > 
> > I attach the configure.log.
> > 
> > What am I doing wrong?
> 
> The option is --with-mpi=0
> 
> Satish
> 
> 
> > 
> > Thank you,
> > 
> > Michael.
> > 
> > 
> 


Re: [petsc-users] petsc without MPI

2019-11-19 Thread Balay, Satish via petsc-users
On Tue, 19 Nov 2019, Povolotskyi, Mykhailo via petsc-users wrote:

> Hello,
> 
> I'm trying to build PETSC without MPI.
> 
> Even if I specify --with_mpi=0, the configuration script still activates 
> MPI.
> 
> I attach the configure.log.
> 
> What am I doing wrong?

The option is --with-mpi=0

Satish


> 
> Thank you,
> 
> Michael.
> 
> 



Re: [petsc-users] PETSc 3.12 with .f90 files

2019-10-29 Thread Balay, Satish via petsc-users
On Tue, 29 Oct 2019, Matthew Knepley via petsc-users wrote:

> On Tue, Oct 29, 2019 at 2:35 PM Smith, Barry F.  wrote:
> 
> >
> >The problem is that this change DOES use the preprocessor on the f90
> > file, does it not? We need a rule that does not use the preprocessor.
> >
> 
> This change just calls the Fortran compiler on it. The compiler decides to
> use the preprocessor based on the extension (I thought).

With older releases, the primary difference between the .f.o and .F.o rules is
the flags related to FPP:

${PETSC_FC_INCLUDES} ${PETSCFLAGS} ${FPP_FLAGS} ${FPPFLAGS}

Perhaps this difference won't matter with most compilers [when compiling .f
sources].

Satish
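
For users who hit this with 3.12, a minimal sketch of a rule that could be added
to their own PETSc-style makefile (one that includes
${PETSC_DIR}/lib/petsc/conf/variables and rules) to compile .f90 sources without
preprocessing - it mirrors the rule older releases generated; the recipe line
must start with a tab:

  %.o : %.f90
	${PETSC_MAKE_STOP_ON_ERROR}${FC} -c ${FC_FLAGS} ${FFLAGS} -o $@ $<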

> 
>   Thanks,
> 
> Matt
> 
> 
> >Barry
> >
> >
> > > On Oct 29, 2019, at 10:50 AM, Matthew Knepley via petsc-users <
> > petsc-users@mcs.anl.gov> wrote:
> > >
> > > On Tue, Oct 29, 2019 at 11:38 AM Randall Mackie 
> > wrote:
> > > Hi Matt,
> > >
> > > That worked and everything compiles correctly now.
> > >
> > > Great.
> > >
> > >   https://gitlab.com/petsc/petsc/merge_requests/2236
> > >
> > >   Thanks,
> > >
> > > Matt
> > >
> > > Thanks,
> > >
> > > Randy
> > >
> > >
> > >> On Oct 29, 2019, at 8:29 AM, Matthew Knepley  wrote:
> > >>
> > >> On Tue, Oct 29, 2019 at 11:25 AM Randall Mackie 
> > wrote:
> > >> Correct, no preprocessing.
> > >>
> > >> Okay. I am not sure why it would have been removed, but you can try
> > adding .f90 to lib/petsc/conf/rules line 273
> > >> and see if that fixes it.
> > >>
> > >>   THanks,
> > >>
> > >> Matt
> > >>> On Oct 29, 2019, at 8:24 AM, Matthew Knepley 
> > wrote:
> > >>>
> > >>> On Tue, Oct 29, 2019 at 10:54 AM Randall Mackie via petsc-users <
> > petsc-users@mcs.anl.gov> wrote:
> > >>> Dear PETSc users:
> > >>>
> > >>> In our code, we have one or two small .f90 files that are part of the
> > software, and they have always compiled without any issues with previous
> > versions of PETSc, using standard PETSc make files.
> > >>>
> > >>> However, starting with PETSc 3.12, they no longer compile.
> > >>>
> > >>> Was there some reasons for this change and any suggestion as to how to
> > deal with this?
> > >>>
> > >>> My cursory look cannot find a compile rule for .f90, only .F90. Did you
> > not want preprocessing on that file?
> > >>>
> > >>>   Thanks,
> > >>>
> > >>> Matt
> > >>>
> > >>> Thanks, Randy
> > >>>
> > >>>
> > >>> --
> > >>> What most experimenters take for granted before they begin their
> > experiments is infinitely more interesting than any results to which their
> > experiments lead.
> > >>> -- Norbert Wiener
> > >>>
> > >>> https://www.cse.buffalo.edu/~knepley/
> > >>
> > >>
> > >>
> > >> --
> > >> What most experimenters take for granted before they begin their
> > experiments is infinitely more interesting than any results to which their
> > experiments lead.
> > >> -- Norbert Wiener
> > >>
> > >> https://www.cse.buffalo.edu/~knepley/
> > >
> > >
> > >
> > > --
> > > What most experimenters take for granted before they begin their
> > experiments is infinitely more interesting than any results to which their
> > experiments lead.
> > > -- Norbert Wiener
> > >
> > > https://www.cse.buffalo.edu/~knepley/
> >
> >
> 
> 



Re: [petsc-users] PETSc 3.12 with .f90 files

2019-10-29 Thread Balay, Satish via petsc-users
On Tue, 29 Oct 2019, Randall Mackie via petsc-users wrote:

> Dear PETSc users:
> 
> In our code, we have one or two small .f90 files that are part of the 
> software, and they have always compiled without any issues with previous 
> versions of PETSc, using standard PETSc make files.
> 
> However, starting with PETSc 3.12, they no longer compile.
> 
> Was there some reasons for this change and any suggestion as to how to deal 
> with this?
> 
> Thanks, Randy


Hm - Looks like we did have something earlier - and there was some makefile 
reorg..

>>
  self.addMakeRule('.f.o .f90.o .f95.o','',['${PETSC_MAKE_STOP_ON_ERROR}${FC} -c ${FC_FLAGS} ${FFLAGS} -o $@ $<'])
<<<

However - this should not be needed - as the default make targets should 
compile .f sources.

[We don't support using petsc from .f sources anyway - so the
non-petsc .f sources should compile without petsc targets]

Satish

--

balay@sb /home/balay/tmp
$ ls
bug.f  makefile
balay@sb /home/balay/tmp
$ cat makefile 
include ${PETSC_DIR}/lib/petsc/conf/variables
include ${PETSC_DIR}/lib/petsc/conf/rules
#include ${PETSC_DIR}/lib/petsc/conf/test

bug: bug.o
balay@sb /home/balay/tmp
$ make bug.o
mpif90   -c -o bug.o bug.f
balay@sb /home/balay/tmp
$ 





Re: [petsc-users] anaconda installation of petsc

2019-10-23 Thread Balay, Satish via petsc-users
Likely this install is broken, as mpicc [used here] is unable to find the 'clang'
used to build it.
And we have no idea how the petsc install in anaconda works.

Suggest installing PETSc from source - this is what we support if you encounter 
problems.

Satish

On Wed, 23 Oct 2019, Gideon Simpson via petsc-users wrote:

> I have an anaconda installation of petsc and I was trying to use it with
> some existing petsc codes, with the makefile:
> 
> include ${PETSC_DIR}/conf/variables
> include ${PETSC_DIR}/conf/rules
> 
> all: ex1
> 
> hello: ex1.o
> ${CLINKER} -o ex1 ex1.o ${LIBS} ${PETSC_LIB}
> 
> and PETSC_DIR=/Users/gideonsimpson/anaconda3/lib/petsc
> 
> However, when i try to call make on this, I get
> 
> mpicc -o ex1.o -c -march=core2 -mtune=haswell -mssse3 -ftree-vectorize
> -fPIC -fPIE -fstack-protector-strong -O2 -pipe -isystem
> /Users/gideonsimpson/anaconda3/include -O3 -D_FORTIFY_SOURCE=2
> -mmacosx-version-min=10.9 -isystem /Users/gideonsimpson/anaconda3/include
>  -I/Users/gideonsimpson/anaconda3/include  -D_FORTIFY_SOURCE=2
> -mmacosx-version-min=10.9 -isystem /Users/gideonsimpson/anaconda3/include
>  `pwd`/ex1.c
> /Users/gideonsimpson/anaconda3/bin/mpicc: line 282:
> x86_64-apple-darwin13.4.0-clang: command not found
> make: *** [ex1.o] Error 127
> 
> 
> 



[petsc-users] petsc-3.12.1.tar.gz now available

2019-10-22 Thread Balay, Satish via petsc-users
Dear PETSc users,

The patch release petsc-3.12.1 is now available for download,
with change list at 'PETSc-3.12 Changelog'

http://www.mcs.anl.gov/petsc/download/index.html

Satish




Re: [petsc-users] Compile petsc with a simple code

2019-10-21 Thread Balay, Satish via petsc-users
Try compiling a PETSc example with the corresponding PETSc makefile -
to check the compile/link command that you need to use.

It's best to use a PETSc-formatted makefile for user code [check the users
manual for an example].

Satish
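
A minimal sketch of such a makefile for the hello.c mentioned below (the include
lines pull in the compiler wrappers, include paths and ${PETSC_LIB}, so no
-I/-L flags have to be written by hand; PETSC_DIR, and PETSC_ARCH for a
non-installed build, must be set in the environment, and the recipe line must
start with a tab):

  include ${PETSC_DIR}/lib/petsc/conf/variables
  include ${PETSC_DIR}/lib/petsc/conf/rules

  hello: hello.o
	-${CLINKER} -o hello hello.o ${PETSC_LIB}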

On Mon, 21 Oct 2019, Jianhua Qin via petsc-users wrote:

> Hello everyone,
> 
> I am new to petsc and am trying to play with it. However, I met a problem
> linking petsc to my code. For example, I have a simple code named
> hello.c which includes petscvec.h, but I don't know how to include it. I always
> get the following error:
> 
> 
> hello.c:1:10: fatal error: petscvec.h: No such file or directory
>  #include <petscvec.h>
> 
> I compile with the following command:
> 
> gcc -L/home/lucky/Desktop/software/petsc-3.7.7/arch-linux2-c-debug/lib
>  hello.c -o hello
> 
> It would be great if any of you could give suggestions.
> 
> Best,
> Jianhua
> 



Re: [petsc-users] Makefile problems: environmental variables, includes paths and libraries.

2019-10-15 Thread Balay, Satish via petsc-users
On Tue, 15 Oct 2019, Matthew Agius via petsc-users wrote:

> Dear PETSC users,
> 
> This is my first attempt at the PETSc package.
> 
> First step: I think I have installed PETSc, well at least with no errors.
> Now I am trying to run a simple makefile as suggested in the manual.
> I am confused about undeclared variables in the example makefiles that I am
> coming across; variables such as:
> 
> > ${CLINKER}   ${PETSC_LIB}   ${PETSC_SYS_LIB}   ${PETSC_VEC_LIB}
> > ${PETSC_MAT_LIB}   ${PETSC_DM_LIB}   ${PETSC_KSP_LIB}   ${PETSC_SNES_LIB}
> > or ${PETSC_TS_LIB}.
> 
> Are these environmental variables?

nope - they are declared in other makefiles - and picked up with:

include ${PETSC_DIR}/lib/petsc/conf/variables
include ${PETSC_DIR}/lib/petsc/conf/rules


> So far I only have PETSC_DIR declared in my environment.
> Do I have to declare each one? What are they?
> 
> The error I am actually getting is
> 
> > error #6404: This name does not have a type, and must have an explicit
> > type.   [PETSC_NULL_CHARACTER]
> 
> This is probably due to a missing include file in the compilation.
> Any hint which one it is?

Presumably this is not a petsc example with petsc makefile.

Please compare the output of compiling your code/makefile - with a petsc 
example/makefile and check for differences.

i.e
cd src/ksp/ksp/examples/tutorials
make ex2

Satish
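
An undeclared PETSC_NULL_CHARACTER usually means the PETSc Fortran definitions
were never pulled in. A minimal sketch of what a PETSc Fortran source needs
(module-based style of recent releases; the file should have a .F90 extension so
the preprocessor runs, and older releases use a different finclude layout):

  #include <petsc/finclude/petscsys.h>
        program main
        use petscsys
        implicit none
        PetscErrorCode ierr
        call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
        ! ... PETSc calls ...
        call PetscFinalize(ierr)
        end program main

With the header and module in place, PETSC_NULL_CHARACTER and friends are
declared, and a PETSc-style makefile supplies the matching include paths.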



Re: [petsc-users] Makefile change for PETSc3.12.0???

2019-10-02 Thread Balay, Satish via petsc-users
Can you retry with this fix:

https://gitlab.com/petsc/petsc/commit/3ae65d51d08dba2e118033664acfd64a46c9bf1d

[You can use maint branch for it]

Satish

On Wed, 2 Oct 2019, Danyang Su via petsc-users wrote:

> Dear All,
> 
> I installed the PETSc 3.12.0 version and got a problem compiling my code (Fortran
> and C++). The code and makefile are the same as I used for the previous PETSc
> version.
> 
> The error information suggests that the make command does not know the compiler
> information. I tested this on two linux workstations and both return the same
> error.
> 
> make: *** No rule to make target '../../usg/math_common.o', needed by 'exe'. 
> Stop.
> 
> The makefile I use is shown below:
> 
> #PETSc variables for development version, version V3.6.0 and later
> include ${PETSC_DIR}/lib/petsc/conf/variables
> include ${PETSC_DIR}/lib/petsc/conf/rules
> 
> CFLAGS =
> CXXFLAGS = -std=c++11 -O3
> CPPFLAGS = -DUSECGAL_NO
> FFLAGS = -frounding-math -O3
> FPPFLAGS = -DLINUX -DRELEASE -DRELEASE_X64 -DPETSC -DPETSC_HAVE_MUMPS
> -DPETSC_HAVE_SUPERLU
> CLEANFILES = executable-linux
> 
> SRC =./../../
> 
> OBJS = $(SRC)usg/math_common.o\
>     $(SRC)usg/geometry_definition.o\
>     ...
>     $(SRC)updtrootdensity.o
> 
> exe: $(OBJS) chkopts
>     -${FLINKER} $(FFLAGS) $(FPPFLAGS) $(CPPFLAGS) -o executable-linux $(OBJS)
> ${PETSC_LIB}
> 
> Any idea on this?
> 
> Thanks,
> 
> Danyang
> 


Re: [petsc-users] reproduced the problem

2019-09-20 Thread Balay, Satish via petsc-users
As the message says - you need to use the configure option --with-cxx-dialect=C++11
with --download-superlu_dist.

[This requirement is automated in petsc/master, so the extra configure option is no
longer required there.]

Satish
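
A hedged sketch of the reconfigure (all other options exactly as in the original
command line; only the dialect flag is added):

  ./configure --download-superlu_dist --with-cxx-dialect=C++11 [...other options unchanged...]

SuperLU_DIST requires C++11, so the C++ dialect has to be enabled explicitly on
this release.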

On Fri, 20 Sep 2019, Povolotskyi, Mykhailo via petsc-users wrote:

> Hello Satish,
> 
> I did what you suggested, now the error is different:
> 
> 
>      UNABLE to CONFIGURE with GIVEN OPTIONS    (see configure.log for 
> details):
> ---
> Cannot use SuperLU_DIST without enabling C++11, see --with-cxx-dialect=C++11
> ***
> 
> The updated configure.log is here:
> 
> https://www.dropbox.com/s/tmkksemu294j719/configure.log?dl=0
> 
> On 9/20/2019 4:32 PM, Balay, Satish wrote:
> > 
> > TEST checkRuntimeIssues from 
> > config.packages.BlasLapack(/depot/kildisha/apps/brown/nemo5/libs/petsc/build-real3.11/config/BuildSystem/config/packages/BlasLapack.py:579)
> > TESTING: checkRuntimeIssues from 
> > config.packages.BlasLapack(config/BuildSystem/config/packages/BlasLapack.py:579)
> >Determines if BLAS/LAPACK routines use 32 or 64 bit integers
> > Checking if BLAS/LAPACK routines use 32 or 64 bit integersExecuting: mpicc 
> > -c -o /tmp/petsc-wf99X2/config.packages.BlasLapack/conftest.o 
> > -I/tmp/petsc-wf99X2/config.setCompilers 
> > -I/tmp/petsc-wf99X2/config.compilers 
> > -I/tmp/petsc-wf99X2/config.utilities.closure 
> > -I/tmp/petsc-wf99X2/config.headers 
> > -I/tmp/petsc-wf99X2/config.utilities.cacheDetails 
> > -I/tmp/petsc-wf99X2/config.atomics -I/tmp/petsc-wf99X2/config.libraries 
> > -I/tmp/petsc-wf99X2/config.functions 
> > -I/tmp/petsc-wf99X2/config.utilities.featureTestMacros 
> > -I/tmp/petsc-wf99X2/config.utilities.missing 
> > -I/tmp/petsc-wf99X2/config.types -I/tmp/petsc-wf99X2/config.packages.MPI 
> > -I/tmp/petsc-wf99X2/config.packages.valgrind 
> > -I/tmp/petsc-wf99X2/config.packages.pthread 
> > -I/tmp/petsc-wf99X2/config.packages.metis 
> > -I/tmp/petsc-wf99X2/config.packages.hdf5 
> > -I/tmp/petsc-wf99X2/config.packages.BlasLapack -fopenmp -fPIC  
> > /tmp/petsc-wf99X2/config.packages.BlasLapack/conftest.c
> > Successful compile:
> > Source:
> > #include "confdefs.h"
> > #include "conffix.h"
> > #include 
> > #if STDC_HEADERS
> > #include 
> > #include 
> > #include 
> > #endif
> >
> > int main() {
> > FILE *output = fopen("runtimetestoutput","w");
> > extern double ddot_(const int*,const double*,const int *,const 
> > double*,const int*);
> >double x1mkl[4] = {3.0,5.0,7.0,9.0};
> >int one1mkl = 1,nmkl = 2;
> >double dotresultmkl = 0;
> >    dotresultmkl = ddot_(&nmkl,x1mkl,&one1mkl,x1mkl,&one1mkl);
> >fprintf(output, 
> > "-known-64-bit-blas-indices=%d",dotresultmkl != 34);;
> >return 0;
> > }
> > Executing: mpicc  -o /tmp/petsc-wf99X2/config.packages.BlasLapack/conftest  
> >  -fopenmp -fPIC /tmp/petsc-wf99X2/config.packages.BlasLapack/conftest.o 
> > -L/apps/cent7/intel/compilers_and_libraries_2017.1.132/linux/mkl/lib/intel64
> >  -lmkl_intel_lp64 -lmkl_gnu_thread -lmkl_core -lm -lstdc++ -ldl 
> > -L/apps/brown/openmpi.20190215/2.1.6_gcc-5.2.0/lib -lmpi_usempif08 
> > -lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi -lgfortran -lm 
> > -L/apps/cent7/gcc/5.2.0/lib/gcc/x86_64-unknown-linux-gnu/5.2.0 
> > -L/apps/cent7/gcc/5.2.0/lib64 -L/apps/cent7/gcc/5.2.0/lib 
> > -Wl,-rpath,/apps/brown/openmpi.20190215/2.1.6_gcc-5.2.0/lib -lgfortran -lm 
> > -lgomp -lgcc_s -lquadmath -lpthread -lstdc++ -ldl
> > Testing executable /tmp/petsc-wf99X2/config.packages.BlasLapack/conftest to 
> > see if it can be run
> > Executing: /tmp/petsc-wf99X2/config.packages.BlasLapack/conftest
> > Executing: /tmp/petsc-wf99X2/config.packages.BlasLapack/conftest
> > ERROR while running executable: Could not execute 
> > "['/tmp/petsc-wf99X2/config.packages.BlasLapack/conftest']":
> > /tmp/petsc-wf99X2/config.packages.BlasLapack/conftest: error while loading 
> > shared libraries: libmkl_intel_lp64.so: cannot open shared object file: No 
> > such file or directory
> >
> >  Defined "HAVE_64BIT_BLAS_INDICES" to "1"
> > Checking for 64 bit blas indices: program did not return therefor assuming 
> > 64 bit blas indices
> >  Defined "HAVE_LIBMKL_INTEL_ILP64" to "1"
> >
> > 
> >
> > So this test has an error but yet the flag HAVE_64BIT_BLAS_INDICES is set.
> >
> > Is your compiler not returning correct error codes?
> >
> > Does it make a difference if you also specify -Wl,-rpath along with -L in 
> > --with-blaslapack-lib option?
> >
> >
> > Satish
> >
> > On Fri, 20 Sep 2019, Povolotskyi, Mykhailo wrote:
> >
> >> Dear Matthew and Satish,
> >>
> >> I just wrote that the error disappeared, but it still exists (I had to
> >> wait longer).
> >>
> >> The configuration log 

Re: [petsc-users] reproduced the problem

2019-09-20 Thread Balay, Satish via petsc-users
>

TEST checkRuntimeIssues from 
config.packages.BlasLapack(/depot/kildisha/apps/brown/nemo5/libs/petsc/build-real3.11/config/BuildSystem/config/packages/BlasLapack.py:579)
TESTING: checkRuntimeIssues from 
config.packages.BlasLapack(config/BuildSystem/config/packages/BlasLapack.py:579)
  Determines if BLAS/LAPACK routines use 32 or 64 bit integers
Checking if BLAS/LAPACK routines use 32 or 64 bit integersExecuting: mpicc -c 
-o /tmp/petsc-wf99X2/config.packages.BlasLapack/conftest.o 
-I/tmp/petsc-wf99X2/config.setCompilers -I/tmp/petsc-wf99X2/config.compilers 
-I/tmp/petsc-wf99X2/config.utilities.closure -I/tmp/petsc-wf99X2/config.headers 
-I/tmp/petsc-wf99X2/config.utilities.cacheDetails 
-I/tmp/petsc-wf99X2/config.atomics -I/tmp/petsc-wf99X2/config.libraries 
-I/tmp/petsc-wf99X2/config.functions 
-I/tmp/petsc-wf99X2/config.utilities.featureTestMacros 
-I/tmp/petsc-wf99X2/config.utilities.missing -I/tmp/petsc-wf99X2/config.types 
-I/tmp/petsc-wf99X2/config.packages.MPI 
-I/tmp/petsc-wf99X2/config.packages.valgrind 
-I/tmp/petsc-wf99X2/config.packages.pthread 
-I/tmp/petsc-wf99X2/config.packages.metis 
-I/tmp/petsc-wf99X2/config.packages.hdf5 
-I/tmp/petsc-wf99X2/config.packages.BlasLapack -fopenmp -fPIC  
/tmp/petsc-wf99X2/config.packages.BlasLapack/conftest.c 
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
#include 
#if STDC_HEADERS
#include 
#include 
#include 
#endif

int main() {
FILE *output = fopen("runtimetestoutput","w");
extern double ddot_(const int*,const double*,const int *,const double*,const 
int*);
  double x1mkl[4] = {3.0,5.0,7.0,9.0};
  int one1mkl = 1,nmkl = 2;
  double dotresultmkl = 0;
  dotresultmkl = ddot_(&nmkl,x1mkl,&one1mkl,x1mkl,&one1mkl);
  fprintf(output, "-known-64-bit-blas-indices=%d",dotresultmkl 
!= 34);;
  return 0;
}
Executing: mpicc  -o /tmp/petsc-wf99X2/config.packages.BlasLapack/conftest   
-fopenmp -fPIC /tmp/petsc-wf99X2/config.packages.BlasLapack/conftest.o 
-L/apps/cent7/intel/compilers_and_libraries_2017.1.132/linux/mkl/lib/intel64 
-lmkl_intel_lp64 -lmkl_gnu_thread -lmkl_core -lm -lstdc++ -ldl 
-L/apps/brown/openmpi.20190215/2.1.6_gcc-5.2.0/lib -lmpi_usempif08 
-lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi -lgfortran -lm 
-L/apps/cent7/gcc/5.2.0/lib/gcc/x86_64-unknown-linux-gnu/5.2.0 
-L/apps/cent7/gcc/5.2.0/lib64 -L/apps/cent7/gcc/5.2.0/lib 
-Wl,-rpath,/apps/brown/openmpi.20190215/2.1.6_gcc-5.2.0/lib -lgfortran -lm 
-lgomp -lgcc_s -lquadmath -lpthread -lstdc++ -ldl 
Testing executable /tmp/petsc-wf99X2/config.packages.BlasLapack/conftest to see 
if it can be run
Executing: /tmp/petsc-wf99X2/config.packages.BlasLapack/conftest
Executing: /tmp/petsc-wf99X2/config.packages.BlasLapack/conftest
ERROR while running executable: Could not execute 
"['/tmp/petsc-wf99X2/config.packages.BlasLapack/conftest']":
/tmp/petsc-wf99X2/config.packages.BlasLapack/conftest: error while loading 
shared libraries: libmkl_intel_lp64.so: cannot open shared object file: No such 
file or directory

Defined "HAVE_64BIT_BLAS_INDICES" to "1"
Checking for 64 bit blas indices: program did not return therefor assuming 64 
bit blas indices
Defined "HAVE_LIBMKL_INTEL_ILP64" to "1"



So this test has an error but yet the flag HAVE_64BIT_BLAS_INDICES is set.

Is your compiler not returning correct error codes?

Does it make a difference if you also specify -Wl,-rpath along with -L in 
--with-blaslapack-lib option?


Satish
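
A hedged sketch of the suggested change (the MKL path is the one from the log;
adding the rpath lets the configure test executables locate libmkl_intel_lp64.so
at run time instead of failing and being misclassified as 64-bit-index BLAS):

  --with-blaslapack-lib="-Wl,-rpath,/apps/cent7/intel/compilers_and_libraries_2017.1.132/linux/mkl/lib/intel64 -L/apps/cent7/intel/compilers_and_libraries_2017.1.132/linux/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_gnu_thread -lmkl_core"

Alternatively, adding that MKL lib directory to LD_LIBRARY_PATH before running
./configure should have the same effect on the runtime test.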

On Fri, 20 Sep 2019, Povolotskyi, Mykhailo wrote:

> Dear Matthew and Satish,
> 
> I just wrote that the error disappeared, but it still exists (I had to 
> wait longer).
> 
> The configuration log can be accessed here:
> 
> https://www.dropbox.com/s/tmkksemu294j719/configure.log?dl=0
> 
> Sorry for the last e-mail.
> 
> Michael.
> 
> 
> On 09/20/2019 03:53 PM, Balay, Satish wrote:
> > --with-64-bit-indices=1 => PetscInt = int64_t
> > --known-64-bit-blas-indices=1 => blas specified uses 64bit indices.
> >
> > What is your requirement (use case)?
> >
> > Satish
> >
> > On Fri, 20 Sep 2019, Povolotskyi, Mykhailo via petsc-users wrote:
> >
> >> Does it mean I have to configure petsc with --with-64-bit-indices=1 ?
> >>
> >> On 09/20/2019 03:41 PM, Matthew Knepley wrote:
> >> On Fri, Sep 20, 2019 at 1:55 PM Povolotskyi, Mykhailo via petsc-users 
> >> <petsc-users@mcs.anl.gov> wrote:
> >> Hello,
> >>
> >> I'm upgrading petsc from 3.8 to 3.11.
> >>
> >> In doing so, I see an error message:
> >>
> >>UNABLE to CONFIGURE with GIVEN OPTIONS(see configure.log for 
> >> details):
> >> ---
> >> Cannot use SuperLU_DIST with 64 bit BLAS/Lapack indices
> >> ***
> >>
> >> I wonder why this configuration step worked well for 3.8?  

Re: [petsc-users] question about installing petsc3.11

2019-09-20 Thread Balay, Satish via petsc-users
--with-64-bit-indices=1 => PetscInt = int64_t
--known-64-bit-blas-indices=1 => blas specified uses 64bit indices.

What is your requirement (use case)?

Satish
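
In configure terms, a sketch of the distinction (only the relevant flags shown):

  # 64-bit PetscInt with a 32-bit-index BLAS/LAPACK - works with
  # --download-fblaslapack and with --download-superlu_dist:
  ./configure --with-64-bit-indices --download-fblaslapack ...

  # declaring that the supplied BLAS/LAPACK itself uses 64-bit indices
  # (e.g. an ILP64 MKL) - this is the case SuperLU_DIST cannot be combined with:
  ./configure --known-64-bit-blas-indices=1 --with-blaslapack-lib=... ...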

On Fri, 20 Sep 2019, Povolotskyi, Mykhailo via petsc-users wrote:

> Does it mean I have to configure petsc with --with-64-bit-indices=1 ?
> 
> On 09/20/2019 03:41 PM, Matthew Knepley wrote:
> On Fri, Sep 20, 2019 at 1:55 PM Povolotskyi, Mykhailo via petsc-users 
> <petsc-users@mcs.anl.gov> wrote:
> Hello,
> 
> I'm upgrading petsc from 3.8 to 3.11.
> 
> In doing so, I see an error message:
> 
>   UNABLE to CONFIGURE with GIVEN OPTIONS(see configure.log for details):
> ---
> Cannot use SuperLU_DIST with 64 bit BLAS/Lapack indices
> ***
> 
> I wonder why this configuration step worked well for 3.8?  I did not
> change anything else but the version of petsc.
> 
> This never worked. We are just checking now.
> 
>   Thanks,
> 
> Matt
> 
> Thank you,
> 
> Michael.
> 
> 
> 
> --
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/
> 
> 



Re: [petsc-users] [petsc-dev] IMPORTANT PETSc repository changed from Bucketbit to GitLab

2019-09-06 Thread Balay, Satish via petsc-users
On Thu, 5 Sep 2019, Balay, Satish via petsc-dev wrote:

> On Thu, 22 Aug 2019, Balay, Satish via petsc-users wrote:

> All,
> 
> We now have a preliminary CI in place that can process merge requests (MRs) - 
> so please go ahead and submit them.

We have a notice of a 'Data Center Outage, starting 5PM CST on Sep 6 to
sometime Sep 9'.

So PETSc CI will likely not be working during this time [as (some of) the test
machines go down], i.e. MRs won't get tested during this outage.

Satish


Re: [petsc-users] [petsc-dev] IMPORTANT PETSc repository changed from Bucketbit to GitLab

2019-09-04 Thread Balay, Satish via petsc-users
On Thu, 22 Aug 2019, Balay, Satish via petsc-users wrote:

> > 
> > Please do not make pull requests to the Gitlab site yet; we will be 
> > manually processing the PR from the BitBucket site over the next couple of
> > days as we implement the testing.
> > 
> > Please be patient, this is all new to us and it may take a few days to 
> > work out all the glitches.
> 
> Just an update:
> 
> We are still in the process of setting up the CI at gitlab. So we are
> not yet ready to process PRs [or Merge Requests (MRs) in gitlab terminology]
> 
> As of now - we have the old jenkins equivalent [and a few additional]
> tests working with gitlab setup. i.e
> 
> https://gitlab.com/petsc/petsc/pipelines/77669506
> 
> But we are yet to migrate all the regular [aka next] tests to this
> infrastructure.

All,

We now have a preliminary CI in place that can process merge requests (MRs) - 
so please go ahead and submit them.

Satish


Re: [petsc-users] configuring on OSX

2019-09-03 Thread Balay, Satish via petsc-users
On Wed, 4 Sep 2019, Balay, Satish via petsc-users wrote:

> On Tue, 3 Sep 2019, Brian Van Straalen via petsc-users wrote:
> 
> > pulling from git PETSC and on master branch.
> > 
> > ./configure CPP=/usr/bin/cpp
> > ===
> >  Configuring PETSc to compile on your system
> > 
> > ===
> > TESTING: checkCPreprocessor from
> > config.setCompilers(config/BuildSystem/config/setCompilers.py:592)
> > 
> > ***
> >  UNABLE to CONFIGURE with GIVEN OPTIONS(see configure.log for
> > details):
> > ---
> > Cannot find a C preprocessor
> > ***
> 
> Its best to send configure.log
> 
> > 
> > 
> > 
> > Brian
> > 
> > my configure script
> > 
> > configure_options = [
> > #   '--with-mpi-dir=/usr/local/opt/open-mpi',

Hm - this is commented out anyway. So there is no mpi specified for this build 
(it's a mandatory dependency)?

> >'--with-cc=/usr/bin/clang',
> >'--with-cpp=/usr/bin/cpp',
> >'--with-cxx=/usr/bin/clang++',
> 
> The above 3 options are redundant - when --with-mpi-dir is provided. PETSc 
> configure will pick up mpicc etc from the specified location.
> 
> >'--with-fc=0',
> 
> Hm - this conflicts with --download-mumps etc that require fortran
> 
> Satish
> 
> > 'COPTFLAGS=-g -framework Accelerate',
> > 'CXXOPTFLAGS=-g -framework Accelerate',
> > 'FOPTFLAGS=-g',
> > #  '--with-memalign=64',
> >   '--download-hypre=1',
> >   '--download-metis=1',
> >   '--download-parmetis=1',
> >   '--download-c2html=1',
> >   '--download-ctetgen',
> > #  '--download-viennacl',
> > #  '--download-ml=1',
> >   '--download-p4est=1',
> >   '--download-superlu_dist',
> >   '--download-superlu',
> >   '--with-cxx-dialect=C++11',
> >   '--download-mumps=1',
> >   '--download-scalapack=1',
> > #  '--download-exodus=1',
> > #  '--download-ctetgen=1',
> >   '--download-triangle=1',
> > #  '--download-pragmatic=1',
> > #  '--download-eigen=1',
> >   '--download-zlib',
> >   '--with-x=1',
> >   '--with-sowing=0',
> >   '--with-debugging=1',
> >   '--with-precision=double',
> >   'PETSC_ARCH=arch-macosx-gnu-g',
> >   '--download-chaco'
> >   ]
> > 
> > if __name__ == '__main__':
> >   import sys,os
> >   sys.path.insert(0,os.path.abspath('config'))
> >   import configure
> >   configure.petsc_configure(configure_options)
> > 
> > 
> > 
> 
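
Putting these comments together, a hedged sketch of a cleaned-up options list
(assuming Open MPI really is installed at the path that was commented out, and
Fortran is kept enabled so that --download-mumps/--download-scalapack can build):

  configure_options = [
      '--with-mpi-dir=/usr/local/opt/open-mpi',   # supplies mpicc/mpicxx/mpif90
      '--download-hypre=1',
      '--download-metis=1',
      '--download-parmetis=1',
      '--download-superlu_dist',
      '--download-superlu',
      '--with-cxx-dialect=C++11',
      '--download-mumps=1',
      '--download-scalapack=1',
      '--download-zlib',
      '--with-debugging=1',
      'PETSC_ARCH=arch-macosx-gnu-g',
  ]

If no Fortran compiler is available, keep '--with-fc=0' but drop the MUMPS and
ScaLAPACK downloads, since those packages require Fortran.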



Re: [petsc-users] configuring on OSX

2019-09-03 Thread Balay, Satish via petsc-users
On Tue, 3 Sep 2019, Brian Van Straalen via petsc-users wrote:

> pulling from git PETSC and on master branch.
> 
> ./configure CPP=/usr/bin/cpp
> ===
>  Configuring PETSc to compile on your system
> 
> ===
> TESTING: checkCPreprocessor from
> config.setCompilers(config/BuildSystem/config/setCompilers.py:592)
> 
> ***
>  UNABLE to CONFIGURE with GIVEN OPTIONS(see configure.log for
> details):
> ---
> Cannot find a C preprocessor
> ***

Its best to send configure.log

> 
> 
> 
> Brian
> 
> my configure script
> 
> configure_options = [
> #   '--with-mpi-dir=/usr/local/opt/open-mpi',
>'--with-cc=/usr/bin/clang',
>'--with-cpp=/usr/bin/cpp',
>'--with-cxx=/usr/bin/clang++',

The above 3 options are redundant - when --with-mpi-dir is provided. PETSc 
configure will pick up mpicc etc from the specified location.

>'--with-fc=0',

Hm - this conflicts with --download-mumps etc that require fortran

Satish

> 'COPTFLAGS=-g -framework Accelerate',
> 'CXXOPTFLAGS=-g -framework Accelerate',
> 'FOPTFLAGS=-g',
> #  '--with-memalign=64',
>   '--download-hypre=1',
>   '--download-metis=1',
>   '--download-parmetis=1',
>   '--download-c2html=1',
>   '--download-ctetgen',
> #  '--download-viennacl',
> #  '--download-ml=1',
>   '--download-p4est=1',
>   '--download-superlu_dist',
>   '--download-superlu',
>   '--with-cxx-dialect=C++11',
>   '--download-mumps=1',
>   '--download-scalapack=1',
> #  '--download-exodus=1',
> #  '--download-ctetgen=1',
>   '--download-triangle=1',
> #  '--download-pragmatic=1',
> #  '--download-eigen=1',
>   '--download-zlib',
>   '--with-x=1',
>   '--with-sowing=0',
>   '--with-debugging=1',
>   '--with-precision=double',
>   'PETSC_ARCH=arch-macosx-gnu-g',
>   '--download-chaco'
>   ]
> 
> if __name__ == '__main__':
>   import sys,os
>   sys.path.insert(0,os.path.abspath('config'))
>   import configure
>   configure.petsc_configure(configure_options)
> 
> 
> 



Re: [petsc-users] petsc on windows

2019-08-30 Thread Balay, Satish via petsc-users
Thanks for the update.

Yes - having the wrong variant of libpetsc.dll in PATH can cause problems.

Satish
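
For reference, a hedged sketch of the two alternatives on MS-Windows (paths are
illustrative):

  # make the intended libpetsc.dll visible to the application, e.g. from a
  # Cygwin shell before launching it:
  export PATH=/cygdrive/c/path/to/petsc/arch-mswin-c-opt/lib:$PATH

or avoid the issue entirely with a static build:

  ./configure --with-shared-libraries=0 ...

With two PETSc builds installed (here a serial and a parallel one), whichever
lib directory appears first in PATH decides which libpetsc.dll actually gets
loaded.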

On Fri, 30 Aug 2019, Sam Guo via petsc-users wrote:

> Thanks a lot for your help. It is my pilot error: I have both the serial
> version and the parallel version of petsc. It turns out the serial version was
> always being loaded. Now the parallel petsc is working.
> 
> On Thu, Aug 29, 2019 at 5:51 PM Balay, Satish  wrote:
> 
> > On MS-Windows - you need the location of the DLLs in PATH
> >
> > Or use --with-shared-libraries=0
> >
> > Satish
> >
> > On Thu, 29 Aug 2019, Sam Guo via petsc-users wrote:
> >
> > > When I use intel mpi, configuration, compile and test all work fine but I
> > > cannot use dll in my application.
> > >
> > > On Thu, Aug 29, 2019 at 3:46 PM Sam Guo  wrote:
> > >
> > > > After I removed the following lines in
> > config/BuildSystem/config/package.py,
> > > > configuration finished without error.
> > > >  self.executeTest(self.checkDependencies)
> > > >  self.executeTest(self.configureLibrary)
> > > >  self.executeTest(self.checkSharedLibrary)
> > > >
> > > > I then add my mpi wrapper to
> > ${PETSC_ARCH}/lib/petsc/conf/petscvariables:
> > > > PCC_LINKER_FLAGS =-MD -wd4996 -Z7
> > > >
> > /home/xianzhongg/dev/star/lib/win64/intel18.3vc14/lib/StarMpiWrapper.lib
> > > >
> > > > On Thu, Aug 29, 2019 at 3:28 PM Balay, Satish 
> > wrote:
> > > >
> > > >> On Thu, 29 Aug 2019, Sam Guo via petsc-users wrote:
> > > >>
> > > >> > I can link when I add my wrapper to
> > > >> > PCC_LINKER_FLAGS =-MD -wd4996 -Z7
> > > >> >
> > /home/xianzhongg/dev/star/lib/win64/intel18.3vc14/lib/StarMpiWrapper.lib
> > > >>
> > > >> I don't understand what you mean here. Add PCC_LINKER_FLAGS to where?
> > > >> This is a variable in configure generated makefile
> > > >>
> > > >> Since PETSc is not built [as configure failed] - there should be no
> > > >> configure generated makefiles.
> > > >>
> > > >> > (I don't understand why configure does not include my wrapper)
> > > >>
> > > >> Well the compiler gives the error below. Can you try to compile
> > > >> manually [i.e without PETSc or any petsc makefiles] a simple MPI code
> > > >> - say cpi.c from MPICH and see if it works?  [and copy/paste the log
> > > >> from this compile attempt.]
> > > >>
> > > >> Satish
> > > >>
> > > >> >
> > > >> >
> > > >> > On Thu, Aug 29, 2019 at 1:28 PM Matthew Knepley 
> > > >> wrote:
> > > >> >
> > > >> > > On Thu, Aug 29, 2019 at 4:02 PM Sam Guo 
> > > >> wrote:
> > > >> > >
> > > >> > >> Thanks for the quick response. Attached please find the
> > configure.log
> > > >> > >> containing the configure error.
> > > >> > >>
> > > >> > >
> > > >> > > Executing:
> > > >> /home/xianzhongg/petsc-3.11.3/lib/petsc/bin/win32fe/win32fe cl
> > > >> > > -c -o /tmp/petsc-6DsCEk/config.libraries/conftest.o
> > > >> > > -I/tmp/petsc-6DsCEk/config.compilers
> > > >> > > -I/tmp/petsc-6DsCEk/config.setCompilers
> > > >> > > -I/tmp/petsc-6DsCEk/config.utilities.closure
> > > >> > > -I/tmp/petsc-6DsCEk/config.headers
> > > >> > > -I/tmp/petsc-6DsCEk/config.utilities.cacheDetails
> > > >> > > -I/tmp/petsc-6DsCEk/config.types
> > -I/tmp/petsc-6DsCEk/config.atomics
> > > >> > > -I/tmp/petsc-6DsCEk/config.functions
> > > >> > > -I/tmp/petsc-6DsCEk/config.utilities.featureTestMacros
> > > >> > > -I/tmp/petsc-6DsCEk/config.utilities.missing
> > > >> > > -I/tmp/petsc-6DsCEk/PETSc.options.scalarTypes
> > > >> > > -I/tmp/petsc-6DsCEk/config.libraries  -MD -wd4996 -Z7
> > > >> > >  /tmp/petsc-6DsCEk/config.libraries/conftest.c
> > > >> > > stdout: conftest.c
> > > >> > > Successful compile:
> > > >> > > Source:
> > > >> > > #include "confdefs.h"
> > > >> > > #include "conffix.h"
> > > >> > > /* Override any gcc2 internal prototype to avoid an error. */
> > > >> > > char MPI_Init();
> > > >> > > static void _check_MPI_Init() { MPI_Init(); }
> > > >> > > char MPI_Comm_create();
> > > >> > > static void _check_MPI_Comm_create() { MPI_Comm_create(); }
> > > >> > >
> > > >> > > int main() {
> > > >> > > _check_MPI_Init();
> > > >> > > _check_MPI_Comm_create();;
> > > >> > >   return 0;
> > > >> > > }
> > > >> > > Executing:
> > > >> /home/xianzhongg/petsc-3.11.3/lib/petsc/bin/win32fe/win32fe cl
> > > >> > >  -o /tmp/petsc-6DsCEk/config.libraries/conftest.exe-MD
> > -wd4996 -Z7
> > > >> > > /tmp/petsc-6DsCEk/config.libraries/conftest.o
> > > >> > >
> > > >>
> > /home/xianzhongg/dev/star/lib/win64/intel18.3vc14/lib/StarMpiWrapper.lib
> > > >> > > Ws2_32.lib
> > > >> > > stdout:
> > > >> > > LINK : C:\cygwin64\tmp\PE81BA~1\CONFIG~1.LIB\conftest.exe not
> > found
> > > >> or not
> > > >> > > built by the last incremental link; performing full link
> > > >> > > conftest.obj : error LNK2019: unresolved external symbol MPI_Init
> > > >> > > referenced in function _check_MPI_Init
> > > >> > > conftest.obj : error LNK2019: unresolved external symbol
> > > >> MPI_Comm_create
> > > >> > > referenced in function _check_MPI_Comm_create
> > > >> > > 

Re: [petsc-users] petsc on windows

2019-08-29 Thread Balay, Satish via petsc-users
On MS-Windows - you need the location of the DLLs in PATH

Or use --with-shared-libraries=0
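For example, from the Cygwin shell used to run the application (a sketch; assumes the usual PETSC_DIR/PETSC_ARCH install layout):

  # make the directory containing libpetsc.dll visible to the Windows loader
  export PATH=$PETSC_DIR/$PETSC_ARCH/lib:$PATH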

Satish

On Thu, 29 Aug 2019, Sam Guo via petsc-users wrote:

> When I use intel mpi, configuration, compile and test all work fine but I
> cannot use dll in my application.
> 
> On Thu, Aug 29, 2019 at 3:46 PM Sam Guo  wrote:
> 
> > After I removed following lines inin config/BuildSystem/config/package.py,
> > configuration finished without error.
> >  self.executeTest(self.checkDependencies)
> >  self.executeTest(self.configureLibrary)
> >  self.executeTest(self.checkSharedLibrary)
> >
> > I then add my mpi wrapper to ${PTESTC_ARCH}/lib/petsc/conf/petscvariables:
> > PCC_LINKER_FLAGS =-MD -wd4996 -Z7
> > /home/xianzhongg/dev/star/lib/win64/intel18.3vc14/lib/StarMpiWrapper.lib
> >
> > On Thu, Aug 29, 2019 at 3:28 PM Balay, Satish  wrote:
> >
> >> On Thu, 29 Aug 2019, Sam Guo via petsc-users wrote:
> >>
> >> > I can link when I add my wrapper to
> >> > PCC_LINKER_FLAGS =-MD -wd4996 -Z7
> >> > /home/xianzhongg/dev/star/lib/win64/intel18.3vc14/lib/StarMpiWrapper.lib
> >>
> >> I don't understand what you mean here. Add PCC_LINKER_FLAGS to where?
> >> This is a variable in configure generated makefile
> >>
> >> Since PETSc is not built [as configure failed] - there should be no
> >> configure generated makefiles.
> >>
> >> > (I don't understand why configure does not include my wrapper)
> >>
> >> Well the compiler gives the error below. Can you try to compile
> >> manually [i.e without PETSc or any petsc makefiles] a simple MPI code
> >> - say cpi.c from MPICH and see if it works?  [and copy/paste the log
> >> from this compile attempt.
> >>
> >> Satish
> >>
> >> >
> >> >
> >> > On Thu, Aug 29, 2019 at 1:28 PM Matthew Knepley 
> >> wrote:
> >> >
> >> > > On Thu, Aug 29, 2019 at 4:02 PM Sam Guo 
> >> wrote:
> >> > >
> >> > >> Thanks for the quick response. Attached please find the configure.log
> >> > >> containing the configure error.
> >> > >>
> >> > >
> >> > > Executing:
> >> /home/xianzhongg/petsc-3.11.3/lib/petsc/bin/win32fe/win32fe cl
> >> > > -c -o /tmp/petsc-6DsCEk/config.libraries/conftest.o
> >> > > -I/tmp/petsc-6DsCEk/config.compilers
> >> > > -I/tmp/petsc-6DsCEk/config.setCompilers
> >> > > -I/tmp/petsc-6DsCEk/config.utilities.closure
> >> > > -I/tmp/petsc-6DsCEk/config.headers
> >> > > -I/tmp/petsc-6DsCEk/config.utilities.cacheDetails
> >> > > -I/tmp/petsc-6DsCEk/config.types -I/tmp/petsc-6DsCEk/config.atomics
> >> > > -I/tmp/petsc-6DsCEk/config.functions
> >> > > -I/tmp/petsc-6DsCEk/config.utilities.featureTestMacros
> >> > > -I/tmp/petsc-6DsCEk/config.utilities.missing
> >> > > -I/tmp/petsc-6DsCEk/PETSc.options.scalarTypes
> >> > > -I/tmp/petsc-6DsCEk/config.libraries  -MD -wd4996 -Z7
> >> > >  /tmp/petsc-6DsCEk/config.libraries/conftest.c
> >> > > stdout: conftest.c
> >> > > Successful compile:
> >> > > Source:
> >> > > #include "confdefs.h"
> >> > > #include "conffix.h"
> >> > > /* Override any gcc2 internal prototype to avoid an error. */
> >> > > char MPI_Init();
> >> > > static void _check_MPI_Init() { MPI_Init(); }
> >> > > char MPI_Comm_create();
> >> > > static void _check_MPI_Comm_create() { MPI_Comm_create(); }
> >> > >
> >> > > int main() {
> >> > > _check_MPI_Init();
> >> > > _check_MPI_Comm_create();;
> >> > >   return 0;
> >> > > }
> >> > > Executing:
> >> /home/xianzhongg/petsc-3.11.3/lib/petsc/bin/win32fe/win32fe cl
> >> > >  -o /tmp/petsc-6DsCEk/config.libraries/conftest.exe-MD -wd4996 -Z7
> >> > > /tmp/petsc-6DsCEk/config.libraries/conftest.o
> >> > >
> >> /home/xianzhongg/dev/star/lib/win64/intel18.3vc14/lib/StarMpiWrapper.lib
> >> > > Ws2_32.lib
> >> > > stdout:
> >> > > LINK : C:\cygwin64\tmp\PE81BA~1\CONFIG~1.LIB\conftest.exe not found
> >> or not
> >> > > built by the last incremental link; performing full link
> >> > > conftest.obj : error LNK2019: unresolved external symbol MPI_Init
> >> > > referenced in function _check_MPI_Init
> >> > > conftest.obj : error LNK2019: unresolved external symbol
> >> MPI_Comm_create
> >> > > referenced in function _check_MPI_Comm_create
> >> > > C:\cygwin64\tmp\PE81BA~1\CONFIG~1.LIB\conftest.exe : fatal error
> >> LNK1120:
> >> > > 2 unresolved externals
> >> > > Possible ERROR while running linker: exit code 2
> >> > > stdout:
> >> > > LINK : C:\cygwin64\tmp\PE81BA~1\CONFIG~1.LIB\conftest.exe not found
> >> or not
> >> > > built by the last incremental link; performing full link
> >> > > conftest.obj : error LNK2019: unresolved external symbol MPI_Init
> >> > > referenced in function _check_MPI_Init
> >> > > conftest.obj : error LNK2019: unresolved external symbol
> >> MPI_Comm_create
> >> > > referenced in function _check_MPI_Comm_create
> >> > > C:\cygwin64\tmp\PE81BA~1\CONFIG~1.LIB\conftest.exe : fatal error
> >> LNK1120:
> >> > > 2 unresolved externals
> >> > >
> >> > > The link is definitely failing. Does it work if you do it by hand?
> >> > >
> >> > >   Thanks,
> >> > >
> >> > >  Matt
> >> > >
> >> > >
> >> > >> Regarding our 

Re: [petsc-users] petsc on windows

2019-08-29 Thread Balay, Satish via petsc-users
On Thu, 29 Aug 2019, Sam Guo via petsc-users wrote:

> I can link when I add my wrapper to
> PCC_LINKER_FLAGS =-MD -wd4996 -Z7
> /home/xianzhongg/dev/star/lib/win64/intel18.3vc14/lib/StarMpiWrapper.lib

I don't understand what you mean here. Add PCC_LINKER_FLAGS to where? This is a
variable in a configure-generated makefile.

Since PETSc is not built [as configure failed] - there should be no
configure-generated makefiles.

> (I don't understand why configure does not include my wrapper)

Well the compiler gives the error below. Can you try to compile
manually [i.e. without PETSc or any petsc makefiles] a simple MPI code
- say cpi.c from MPICH and see if it works? [and copy/paste the log
from this compile attempt.]
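
The manual check might look something like this [a rough sketch only, reusing the compiler wrapper and MPI wrapper-library paths from your configure invocation; cpi.c is the example program shipped with MPICH]:

  # build and run cpi.c by hand, outside of PETSc
  /home/xianzhongg/petsc-3.11.3/lib/petsc/bin/win32fe/win32fe cl -o cpi.exe cpi.c \
    -I/home/xianzhongg/dev/star/base/src/mpi/include \
    /home/xianzhongg/dev/star/lib/win64/intel18.3vc14/lib/StarMpiWrapper.lib Ws2_32.lib
  ./cpi.exe

If this link fails with the same unresolved MPI_Init/MPI_Comm_create symbols, the problem is likely in the wrapper library itself rather than in configure.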

Satish

> 
> 
> On Thu, Aug 29, 2019 at 1:28 PM Matthew Knepley  wrote:
> 
> > On Thu, Aug 29, 2019 at 4:02 PM Sam Guo  wrote:
> >
> >> Thanks for the quick response. Attached please find the configure.log
> >> containing the configure error.
> >>
> >
> > Executing: /home/xianzhongg/petsc-3.11.3/lib/petsc/bin/win32fe/win32fe cl
> > -c -o /tmp/petsc-6DsCEk/config.libraries/conftest.o
> > -I/tmp/petsc-6DsCEk/config.compilers
> > -I/tmp/petsc-6DsCEk/config.setCompilers
> > -I/tmp/petsc-6DsCEk/config.utilities.closure
> > -I/tmp/petsc-6DsCEk/config.headers
> > -I/tmp/petsc-6DsCEk/config.utilities.cacheDetails
> > -I/tmp/petsc-6DsCEk/config.types -I/tmp/petsc-6DsCEk/config.atomics
> > -I/tmp/petsc-6DsCEk/config.functions
> > -I/tmp/petsc-6DsCEk/config.utilities.featureTestMacros
> > -I/tmp/petsc-6DsCEk/config.utilities.missing
> > -I/tmp/petsc-6DsCEk/PETSc.options.scalarTypes
> > -I/tmp/petsc-6DsCEk/config.libraries  -MD -wd4996 -Z7
> >  /tmp/petsc-6DsCEk/config.libraries/conftest.c
> > stdout: conftest.c
> > Successful compile:
> > Source:
> > #include "confdefs.h"
> > #include "conffix.h"
> > /* Override any gcc2 internal prototype to avoid an error. */
> > char MPI_Init();
> > static void _check_MPI_Init() { MPI_Init(); }
> > char MPI_Comm_create();
> > static void _check_MPI_Comm_create() { MPI_Comm_create(); }
> >
> > int main() {
> > _check_MPI_Init();
> > _check_MPI_Comm_create();;
> >   return 0;
> > }
> > Executing: /home/xianzhongg/petsc-3.11.3/lib/petsc/bin/win32fe/win32fe cl
> >  -o /tmp/petsc-6DsCEk/config.libraries/conftest.exe-MD -wd4996 -Z7
> > /tmp/petsc-6DsCEk/config.libraries/conftest.o
> >  /home/xianzhongg/dev/star/lib/win64/intel18.3vc14/lib/StarMpiWrapper.lib
> > Ws2_32.lib
> > stdout:
> > LINK : C:\cygwin64\tmp\PE81BA~1\CONFIG~1.LIB\conftest.exe not found or not
> > built by the last incremental link; performing full link
> > conftest.obj : error LNK2019: unresolved external symbol MPI_Init
> > referenced in function _check_MPI_Init
> > conftest.obj : error LNK2019: unresolved external symbol MPI_Comm_create
> > referenced in function _check_MPI_Comm_create
> > C:\cygwin64\tmp\PE81BA~1\CONFIG~1.LIB\conftest.exe : fatal error LNK1120:
> > 2 unresolved externals
> > Possible ERROR while running linker: exit code 2
> > stdout:
> > LINK : C:\cygwin64\tmp\PE81BA~1\CONFIG~1.LIB\conftest.exe not found or not
> > built by the last incremental link; performing full link
> > conftest.obj : error LNK2019: unresolved external symbol MPI_Init
> > referenced in function _check_MPI_Init
> > conftest.obj : error LNK2019: unresolved external symbol MPI_Comm_create
> > referenced in function _check_MPI_Comm_create
> > C:\cygwin64\tmp\PE81BA~1\CONFIG~1.LIB\conftest.exe : fatal error LNK1120:
> > 2 unresolved externals
> >
> > The link is definitely failing. Does it work if you do it by hand?
> >
> >   Thanks,
> >
> >  Matt
> >
> >
> >> Regarding our dup, our wrapper does support it. In fact, everything works
> >> fine on Linux. I suspect on windows, PETSc picks the system mpi.h somehow.
> >> I am investigating it.
> >>
> >> Thanks,
> >> Sam
> >>
> >> On Thu, Aug 29, 2019 at 3:39 PM Matthew Knepley 
> >> wrote:
> >>
> >>> On Thu, Aug 29, 2019 at 3:33 PM Sam Guo via petsc-users <
> >>> petsc-users@mcs.anl.gov> wrote:
> >>>
>  Dear PETSc dev team,
> I am looking some tips porting petsc to windows. We have our mpi
>  wrapper (so we can switch different mpi). I configure petsc using
>  --with-mpi-lib and --with-mpi-include
>   ./configure --with-cc="win32fe cl" --with-fc=0
>  --download-f2cblaslapack
>  --with-mpi-lib=/home/xianzhongg/dev/star/lib/win64/intel18.3vc14/lib/StarMpiWrapper.lib
>  --with-mpi-include=/home/xianzhongg/dev/star/base/src/mpi/include
>  --with-shared-libaries=1
> 
>  But I got error
> 
>  ===
>   Configuring PETSc to compile on your system
> 
>  ===
>  TESTING: check from
>  config.libraries(config/BuildSystem/config/libraries.py:154)
>  ***
>    

Re: [petsc-users] Unable to compile super_dist using an intel compiler

2019-08-27 Thread Balay, Satish via petsc-users
The attached zip file has the same/old configure.log.

Can you use PETSC_ARCH=arch-test - so that the old build files are not
reused, and resend the new configure.log
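
i.e. something like this (same options as in the configure line quoted below, just with a fresh arch directory; 'arch-test' is only a label):

  ./configure PETSC_ARCH=arch-test --with-cc=mpiicc --with-fc=mpiifort --with-cxx=mpiicpc \
    --with-debugging=no -with-blas-lapack-dir=$MKLROOT --with-cxx-dialect=C++11 \
    --download-superlu_dist=1 --download-metis --download-parmetis --download-cmake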

Satish

On Tue, 27 Aug 2019, Fande Kong via petsc-users wrote:

> No, I hit exactly the same error using your script.
> 
> Thanks,
> 
> Fande,
> 
> On Tue, Aug 27, 2019 at 2:54 PM Balay, Satish  wrote:
> 
> > On Tue, 27 Aug 2019, Fande Kong via petsc-users wrote:
> >
> > > Hi All,
> > >
> > > I was trying to compile PETSc with "--download-superlu_dist" using an
> > intel
> > > compiler. I have explored with different options, but did not get PETSc
> > > built successfully so far. Any help would be appreciated.
> > >
> > > The log file is attached.
> >
> > I attempted one and that worked.
> >
> > Can you try the same on  your machine and see if it still gives errors?
> >
> > Satish
> >
> > -
> >
> > -bash-4.2$ ./configure --with-cc=mpiicc --with-fc=mpiifort
> > --with-cxx=mpiicpc --with-debugging=no -with-blas-lapack-dir=$MKLROOT
> > --with-cxx-dialect=C++11 --download-superlu_dist=1 --download-metis
> > --download-parmetis --download-cmake
> >
> > ===
> >  Configuring PETSc to compile on your system
> >
> >
> > ===
> > ===
> >   * WARNING: Using default optimization C
> > flags -g -O3 You might
> > consider manually setting optimal optimization flags for your system with
> >  COPTFLAGS="optimization flags" see
> > config/examples/arch-*-opt.py for examples
> >  
> > ===
> >
> > ===
> >   * WARNING: Using default C++ optimization
> > flags -g -O3   You might
> > consider manually setting optimal optimization flags for your system with
> >  CXXOPTFLAGS="optimization flags" see
> > config/examples/arch-*-opt.py for examples
> >  
> > ===
> >
> > ===
> >   * WARNING: Using default FORTRAN
> > optimization flags -g -O3   You
> > might consider manually setting optimal optimization flags for your system
> > with   FOPTFLAGS="optimization flags" see
> > config/examples/arch-*-opt.py for examples
> >  
> > ===
> >
> > ===
> >   It appears you do not have valgrind installed
> > on your system.We HIGHLY
> > recommend you install it from www.valgrind.org
> >  Or install valgrind-devel or equivalent using your
> > package manager.  Then rerun
> > ./configure
> >
> >  
> > ===
> >
> > ===
> >   Trying to download git://
> > https://bitbucket.org/petsc/pkg-sowing.git for SOWING
> >  
> > ===
> >
> > ===
> >   Running configure on SOWING; this may take
> > several minutes
> >  
> > ===
> >
> > ===
> >   Running make on SOWING; this may take several
> > minutes
> > ===
> >
> > ===
> >   Running make install on SOWING; this may take
> > several minutes
> > ===
> >
> > ===
> >   Trying to download
> > https://cmake.org/files/v3.9/cmake-3.9.6.tar.gz for CMAKE
> >
> >  
> > ===
> >
> > ===
> >   Running 

Re: [petsc-users] Unable to compile super_dist using an intel compiler

2019-08-27 Thread Balay, Satish via petsc-users
On Tue, 27 Aug 2019, Fande Kong via petsc-users wrote:

> Hi All,
> 
> I was trying to compile PETSc with "--download-superlu_dist" using an intel
> compiler. I have explored with different options, but did not get PETSc
> built successfully so far. Any help would be appreciated.
> 
> The log file is attached.

I attempted one and that worked.

Can you try the same on  your machine and see if it still gives errors?

Satish

-

-bash-4.2$ ./configure --with-cc=mpiicc --with-fc=mpiifort --with-cxx=mpiicpc 
--with-debugging=no -with-blas-lapack-dir=$MKLROOT --with-cxx-dialect=C++11 
--download-superlu_dist=1 --download-metis --download-parmetis --download-cmake
===
 Configuring PETSc to compile on your system   
===
=== 
   * WARNING: Using default optimization C 
flags -g -O3 You might consider 
manually setting optimal optimization flags for your system with
   COPTFLAGS="optimization flags" see config/examples/arch-*-opt.py for 
examples 
=== 
 
=== 
   * WARNING: Using default C++ optimization 
flags -g -O3   You might consider 
manually setting optimal optimization flags for your system with
   CXXOPTFLAGS="optimization flags" see config/examples/arch-*-opt.py for 
examples   
=== 
 
=== 
   * WARNING: Using default FORTRAN 
optimization flags -g -O3   You might 
consider manually setting optimal optimization flags for your system with   
FOPTFLAGS="optimization flags" see 
config/examples/arch-*-opt.py for examples 
=== 
 
=== 
   It appears you do not have valgrind installed on 
your system.We HIGHLY recommend you 
install it from www.valgrind.org
 Or install valgrind-devel or equivalent using your package manager.
  Then rerun ./configure
 
=== 
 
=== 
   Trying to download 
git://https://bitbucket.org/petsc/pkg-sowing.git for SOWING 
=== 
 
=== 
   Running configure on SOWING; this may take 
several minutes 
=== 
 
=== 
   Running make on SOWING; this may take several 
minutes  
=== 
 
=== 
   Running make install on SOWING; this may take 
several minutes  
=== 
 
=== 
   Trying to download 
https://cmake.org/files/v3.9/cmake-3.9.6.tar.gz for CMAKE   
=== 
 
=== 
   Running configure on CMAKE; this may take 
several minutes  

Re: [petsc-users] errors when using elemental with petsc3.10.5

2019-08-22 Thread Balay, Satish via petsc-users
On Fri, 23 Aug 2019, Smith, Barry F. via petsc-users wrote:

> bsmith@es:~$ ldd /usr/lib/libparmetis.so.3.1
>   linux-vdso.so.1 =>  (0x7fff9115e000)
>   libmpi.so.1 => /usr/lib/libmpi.so.1 (0x7faf95f87000)
>   libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7faf95c81000)
>   libmetis.so.3.1 => /usr/lib/libmetis.so.3.1 (0x7faf95a34000)
>   libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7faf9566b000)
>   libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x7faf95468000)
>   libhwloc.so.5 => /usr/lib/x86_64-linux-gnu/libhwloc.so.5 
> (0x7faf95228000)
>   libltdl.so.7 => /usr/lib/x86_64-linux-gnu/libltdl.so.7 
> (0x7faf9501e000)
>   libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 
> (0x7faf94e0)
>   /lib64/ld-linux-x86-64.so.2 (0x7faf9654e000)
>   libnuma.so.1 => 
> /soft/com/packages/pgi/19.3/linux86-64/19.3/lib/libnuma.so.1 
> (0x7faf94bf5000)
>   libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7faf949f1000)
> 
> You have something in /usr/lib referring to something in 
> /soft/com/packages/pgi/  ??

Must be due to LD_LIBRARY_PATH.

Checking back the original issue:


LD_LIBRARY_PATH=/home/lailai/nonroot/mpi/mpich/3.3-intel19/lib:/opt/intel/lib/intel64:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib:

Ok - all this stuff in LD_LIBRARY_PATH is the trigger of the original issue.
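
As a quick check (a sketch only - which entries you actually still need depends on your MPI/compiler setup):

  # rerun the failing example with LD_LIBRARY_PATH trimmed to just the MPI libs
  env LD_LIBRARY_PATH=/home/lailai/nonroot/mpi/mpich/3.3-intel19/lib ./ex19
  # and see which parmetis the loader resolves with the current environment
  ldd ./ex19 | grep parmetis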

Satish


Re: [petsc-users] errors when using elemental with petsc3.10.5

2019-08-22 Thread Balay, Satish via petsc-users
Compilers are supposed to prefer libraries in specified -L path before system 
stuff.

>>>>>>
balay@es^~ $ ls /usr/lib/lib*metis*
/usr/lib/libmetis.a/usr/lib/libmetis.so.3.1  /usr/lib/libparmetis.so@ 
/usr/lib/libscotchmetis-5.1.so  /usr/lib/libscotchmetis.so@
/usr/lib/libmetis.so@  /usr/lib/libparmetis.a/usr/lib/libparmetis.so.3.1  
/usr/lib/libscotchmetis.a
balay@es^~ $ 
<<<<

And we have these files installed and they don't cause problems. And it's not
always practical to uninstall system stuff
[esp on multi-user machines]

Satish


On Thu, 22 Aug 2019, Smith, Barry F. wrote:

> 
>   You have a copy of parmetis installed in /usr/lib this is a systems 
> directory and many compilers and linkers automatically find libraries in that 
> location and it is often difficult to avoid have the compilers/linkers use 
> these.   In general you never want to install external software such as 
> parmetis, PETSc, MPI,  etc in systems directories (/usr/  and /usr/local) 
> 
>   You should delete this library (and the includes in /usr/include)
> 
>   Barry
> 
> 
>   
> 
> > On Aug 22, 2019, at 5:17 PM, Balay, Satish via petsc-users 
> >  wrote:
> > 
> > 
> >> ./ex19: symbol lookup error: /usr/lib/libparmetis.so: undefined symbol: 
> >> ompi_mpi_comm_world
> > 
> > For some reason the wrong parmetis library is getting picked up. I don't 
> > know why.
> > 
> > Can you copy/paste the log from the following?
> > 
> > cd src/snes/examples/tutorials
> > make PETSC_DIR=/home/lailai/nonroot/petsc/petsc3.11.3_intel19_mpich3.3 ex19
> > ldd ex19
> > 
> > cd 
> > /home/lailai/nonroot/petsc/petsc3.11.3_intel19_mpich3.3/pet3.11.3-intel19-mpich3.3/lib
> > ldd *.so
> > 
> > Satish
> > 
> > On Thu, 22 Aug 2019, Lailai Zhu via petsc-users wrote:
> > 
> >> hi, Satish,
> >> 
> >> as you have suggested, i compiled a new version using 3.11.3,
> >> it compiles well, the errors occur in checking. i also attach
> >> the errors of check. thanks very much,
> >> 
> >> lailai
> >> 
> >> On 8/22/19 4:16 PM, Balay, Satish wrote:
> >>> Any reason for using  petsc-3.10.5 and not latest petsc-3.11?
> >>> 
> >>> I suggest starting from scatch and rebuilding.
> >>> 
> >>> And if you still have issues - send corresponding configure.log and 
> >>> make.log
> >>> 
> >>> Satish
> >>> 
> >>> On Thu, 22 Aug 2019, Lailai Zhu via petsc-users wrote:
> >>> 
> >>>> sorry, Satish,
> >>>> 
> >>>> but it does not seem to solve the problem.
> >>>> 
> >>>> best,
> >>>> lailai
> >>>> 
> >>>> On 8/22/19 12:41 AM, Balay, Satish wrote:
> >>>>> Can you run 'make' again and see if this error goes away?
> >>>>> 
> >>>>> Satish
> >>>>> 
> >>>>> On Wed, 21 Aug 2019, Lailai Zhu via petsc-users wrote:
> >>>>> 
> >>>>>> hi, Satish,
> >>>>>> i tried to do it following your suggestion, i get the following errors
> >>>>>> when
> >>>>>> installing.
> >>>>>> here is my configuration,
> >>>>>> 
> >>>>>> any ideas?
> >>>>>> 
> >>>>>> best,
> >>>>>> lailai
> >>>>>> 
> >>>>>> ./config/configure.py --with-c++-support --known-mpi-shared-libraries=1
> >>>>>> --with-batch=0  --with-mpi=1 --with-debugging=0  CXXOPTFLAGS="-g -O3"
> >>>>>> COPTFLAGS="-O3 -ip -axCORE-AVX2 -xSSE4.2" FOPTFLAGS="-O3 -ip 
> >>>>>> -axCORE-AVX2
> >>>>>> -xSSE4.2" --with-blas-lapack-dir=/opt/intel/mkl --download-elemental=1
> >>>>>> --download-blacs=1  --download-scalapack=1  --download-hypre=1
> >>>>>> --download-plapack=1 --with-cc=mpicc --with-cxx=mpic++ 
> >>>>>> --with-fc=mpifort
> >>>>>> --download-amd=1 --download-anamod=1 --download-blopex=1
> >>>>>> --download-dscpack=1 --download-sprng=1 --download-superlu=1
> >>>>>> --with-cxx-dialect=C++11 --download-metis --download-parmetis
> >>>>>> 
> >>>>>> 
> >>>>>> 
> >>>>>> pet3.10.5-intel19-mpich3.3/obj/mat/impls/sbaij/seq/s

Re: [petsc-users] errors when using elemental with petsc3.10.5

2019-08-22 Thread Balay, Satish via petsc-users


> ./ex19: symbol lookup error: /usr/lib/libparmetis.so: undefined symbol: 
> ompi_mpi_comm_world

For some reason the wrong parmetis library is getting picked up. I don't know 
why.

Can you copy/paste the log from the following?

cd src/snes/examples/tutorials
make PETSC_DIR=/home/lailai/nonroot/petsc/petsc3.11.3_intel19_mpich3.3 ex19
ldd ex19

cd 
/home/lailai/nonroot/petsc/petsc3.11.3_intel19_mpich3.3/pet3.11.3-intel19-mpich3.3/lib
ldd *.so

Satish

On Thu, 22 Aug 2019, Lailai Zhu via petsc-users wrote:

> hi, Satish,
> 
> as you have suggested, i compiled a new version using 3.11.3,
> it compiles well, the errors occur in checking. i also attach
> the errors of check. thanks very much,
> 
> lailai
> 
> On 8/22/19 4:16 PM, Balay, Satish wrote:
> > Any reason for using  petsc-3.10.5 and not latest petsc-3.11?
> >
> > I suggest starting from scatch and rebuilding.
> >
> > And if you still have issues - send corresponding configure.log and make.log
> >
> > Satish
> >
> > On Thu, 22 Aug 2019, Lailai Zhu via petsc-users wrote:
> >
> >> sorry, Satish,
> >>
> >> but it does not seem to solve the problem.
> >>
> >> best,
> >> lailai
> >>
> >> On 8/22/19 12:41 AM, Balay, Satish wrote:
> >>> Can you run 'make' again and see if this error goes away?
> >>>
> >>> Satish
> >>>
> >>> On Wed, 21 Aug 2019, Lailai Zhu via petsc-users wrote:
> >>>
>  hi, Satish,
>  i tried to do it following your suggestion, i get the following errors
>  when
>  installing.
>  here is my configuration,
> 
>  any ideas?
> 
>  best,
>  lailai
> 
>  ./config/configure.py --with-c++-support --known-mpi-shared-libraries=1
>  --with-batch=0  --with-mpi=1 --with-debugging=0  CXXOPTFLAGS="-g -O3"
>  COPTFLAGS="-O3 -ip -axCORE-AVX2 -xSSE4.2" FOPTFLAGS="-O3 -ip -axCORE-AVX2
>  -xSSE4.2" --with-blas-lapack-dir=/opt/intel/mkl --download-elemental=1
>  --download-blacs=1  --download-scalapack=1  --download-hypre=1
>  --download-plapack=1 --with-cc=mpicc --with-cxx=mpic++ --with-fc=mpifort
>  --download-amd=1 --download-anamod=1 --download-blopex=1
>  --download-dscpack=1 --download-sprng=1 --download-superlu=1
>  --with-cxx-dialect=C++11 --download-metis --download-parmetis
> 
> 
> 
>  pet3.10.5-intel19-mpich3.3/obj/mat/impls/sbaij/seq/sbaij.o: In function
>  `MatCreate_SeqSBAIJ':
>  sbaij.c:(.text+0x1bc45): undefined reference to
>  `MatConvert_SeqSBAIJ_Elemental'
>  ld: pet3.10.5-intel19-mpich3.3/obj/mat/impls/sbaij/seq/sbaij.o:
>  relocation
>  R_X86_64_PC32 against undefined hidden symbol
>  `MatConvert_SeqSBAIJ_Elemental'
>  can not be used when making a shared object
>  ld: final link failed: Bad value
>  gmakefile:86: recipe for target
>  'pet3.10.5-intel19-mpich3.3/lib/libpetsc.so.3.10.5' failed
>  make[2]: *** [pet3.10.5-intel19-mpich3.3/lib/libpetsc.so.3.10.5] Error 1
>  make[2]: Leaving directory
>  '/usr/nonroot/petsc/petsc3.10.5_intel19_mpich3.3'
>  ../petsc3.10.5_intel19_mpich3.3/lib/petsc/conf/rules:81:
>  recipe for target 'gnumake' failed
>  make[1]: *** [gnumake] Error 2
>  make[1]: Leaving directory
>  '/usr/nonroot/petsc/petsc3.10.5_intel19_mpich3.3'
>  **ERROR*
>    Error during compile, check
>  pet3.10.5-intel19-mpich3.3/lib/petsc/conf/make.log
>    Send it and pet3.10.5-intel19-mpich3.3/lib/petsc/conf/configure.log to
>  petsc-ma...@mcs.anl.gov
> 
>  On 8/21/19 10:58 PM, Balay, Satish wrote:
> > To install elemental - you use: --download-elemental=1 [not
> > --download-elemental-commit=v0.87.7]
> >
> > Satish
> >
> >
> > On Wed, 21 Aug 2019, Lailai Zhu via petsc-users wrote:
> >
> >> hi, dear petsc developers,
> >>
> >> I am having a problem when using the external solver elemental.
> >> I installed petsc3.10.5 version with the flag
> >> --download-elemental-commit=v0.87.7
> >> the installation seems to be ok. However, it seems that i may not be
> >> able
> >> to use the elemental solver though.
> >>
> >> I followed this page
> >> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MATELEMENTAL.html
> >> to interface the elemental solver, namely,
> >> MatSetType(A,MATELEMENTAL);
> >> or set it via the command line '*-mat_type elemental*',
> >>
> >> in either case, i will get the following error,
> >>
> >> [0]PETSC ERROR: - Error Message
> >> --
> >> [0]PETSC ERROR: Unknown type. Check for miss-spelling or missing
> >> package:
> >> http://www.mcs.anl.gov/petsc/documentation/installation.html#external
> >> [0]PETSC ERROR: Unknown Mat type given: elemental
> >> [0]PETSC ERROR: See 

Re: [petsc-users] errors when using elemental with petsc3.10.5

2019-08-22 Thread Balay, Satish via petsc-users
Any reason for using petsc-3.10.5 and not the latest petsc-3.11?

I suggest starting from scratch and rebuilding.

And if you still have issues - send corresponding configure.log and make.log

Satish

On Thu, 22 Aug 2019, Lailai Zhu via petsc-users wrote:

> sorry, Satish,
> 
> but it does not seem to solve the problem.
> 
> best,
> lailai
> 
> On 8/22/19 12:41 AM, Balay, Satish wrote:
> > Can you run 'make' again and see if this error goes away?
> >
> > Satish
> >
> > On Wed, 21 Aug 2019, Lailai Zhu via petsc-users wrote:
> >
> >> hi, Satish,
> >> i tried to do it following your suggestion, i get the following errors when
> >> installing.
> >> here is my configuration,
> >>
> >> any ideas?
> >>
> >> best,
> >> lailai
> >>
> >> ./config/configure.py --with-c++-support --known-mpi-shared-libraries=1
> >> --with-batch=0  --with-mpi=1 --with-debugging=0  CXXOPTFLAGS="-g -O3"
> >> COPTFLAGS="-O3 -ip -axCORE-AVX2 -xSSE4.2" FOPTFLAGS="-O3 -ip -axCORE-AVX2
> >> -xSSE4.2" --with-blas-lapack-dir=/opt/intel/mkl --download-elemental=1
> >> --download-blacs=1  --download-scalapack=1  --download-hypre=1
> >> --download-plapack=1 --with-cc=mpicc --with-cxx=mpic++ --with-fc=mpifort
> >> --download-amd=1 --download-anamod=1 --download-blopex=1
> >> --download-dscpack=1 --download-sprng=1 --download-superlu=1
> >> --with-cxx-dialect=C++11 --download-metis --download-parmetis
> >>
> >>
> >>
> >> pet3.10.5-intel19-mpich3.3/obj/mat/impls/sbaij/seq/sbaij.o: In function
> >> `MatCreate_SeqSBAIJ':
> >> sbaij.c:(.text+0x1bc45): undefined reference to
> >> `MatConvert_SeqSBAIJ_Elemental'
> >> ld: pet3.10.5-intel19-mpich3.3/obj/mat/impls/sbaij/seq/sbaij.o: relocation
> >> R_X86_64_PC32 against undefined hidden symbol
> >> `MatConvert_SeqSBAIJ_Elemental'
> >> can not be used when making a shared object
> >> ld: final link failed: Bad value
> >> gmakefile:86: recipe for target
> >> 'pet3.10.5-intel19-mpich3.3/lib/libpetsc.so.3.10.5' failed
> >> make[2]: *** [pet3.10.5-intel19-mpich3.3/lib/libpetsc.so.3.10.5] Error 1
> >> make[2]: Leaving directory
> >> '/usr/nonroot/petsc/petsc3.10.5_intel19_mpich3.3'
> >> ../petsc3.10.5_intel19_mpich3.3/lib/petsc/conf/rules:81:
> >> recipe for target 'gnumake' failed
> >> make[1]: *** [gnumake] Error 2
> >> make[1]: Leaving directory
> >> '/usr/nonroot/petsc/petsc3.10.5_intel19_mpich3.3'
> >> **ERROR*
> >>   Error during compile, check
> >> pet3.10.5-intel19-mpich3.3/lib/petsc/conf/make.log
> >>   Send it and pet3.10.5-intel19-mpich3.3/lib/petsc/conf/configure.log to
> >> petsc-ma...@mcs.anl.gov
> >>
> >> On 8/21/19 10:58 PM, Balay, Satish wrote:
> >>> To install elemental - you use: --download-elemental=1 [not
> >>> --download-elemental-commit=v0.87.7]
> >>>
> >>> Satish
> >>>
> >>>
> >>> On Wed, 21 Aug 2019, Lailai Zhu via petsc-users wrote:
> >>>
>  hi, dear petsc developers,
> 
>  I am having a problem when using the external solver elemental.
>  I installed petsc3.10.5 version with the flag
>  --download-elemental-commit=v0.87.7
>  the installation seems to be ok. However, it seems that i may not be able
>  to use the elemental solver though.
> 
>  I followed this page
>  https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MATELEMENTAL.html
>  to interface the elemental solver, namely,
>  MatSetType(A,MATELEMENTAL);
>  or set it via the command line '*-mat_type elemental*',
> 
>  in either case, i will get the following error,
> 
>  [0]PETSC ERROR: - Error Message
>  --
>  [0]PETSC ERROR: Unknown type. Check for miss-spelling or missing package:
>  http://www.mcs.anl.gov/petsc/documentation/installation.html#external
>  [0]PETSC ERROR: Unknown Mat type given: elemental
>  [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
>  for
>  trouble shooting.
>  [0]PETSC ERROR: Petsc Release Version 3.10.5, Mar, 28, 2019
> 
>  May i ask whether there will be a way or some specific petsc versions
>  that
>  are
>  able to use the elemental solver?
> 
>  Thanks in advance,
> 
>  best,
>  lailai
> 
> >>
> 
> 

Re: [petsc-users] [petsc-dev] IMPORTANT PETSc repository changed from Bucketbit to GitLab

2019-08-22 Thread Balay, Satish via petsc-users
On Mon, 19 Aug 2019, Smith, Barry F. via petsc-dev wrote:

> 
> 
>PETSc folks.
> 
> This announcement is for people who access PETSc from the BitBucket 
> repository or post issues or have other activities with the Bitbucket 
> repository
> 
>We have changed  the location of the PETSc repository from BitBucket to 
> GitLab. For each copy of the repository you need to do
> 
>git remote set-url origin 
> g...@gitlab.com:petsc/petsc.git
>or
>git remote set-url origin https://gitlab.com/petsc/petsc.git
> 
>You will likely also want to set up an account on gitlab and remember to 
> set the ssh key information
> 
>if you previously had write permission to the petsc repository and cannot 
> write to the new repository please email 
> bsm...@mcs.anl.gov with your GitLab
>username and the email used
> 
> Please do not make pull requests to the Gitlab site yet; we will be 
> manually processing the PR from the BitBucket site over the next couple of
> days as we implement the testing.
> 
> Please be patient, this is all new to use and it may take a few days to 
> get out all the glitches.

Just an update:

We are still in the process of setting up the CI at gitlab. So we are
not yet ready to process PRs [or Merge Requests (MRs) in gitlab terminology]

As of now - we have the old jenkins equivalent [and a few additional]
tests working with gitlab setup. i.e

https://gitlab.com/petsc/petsc/pipelines/77669506

But we are yet to migrate all the regular [aka next] tests to this
infrastructure.

Satish


> 
> Thanks for your support
> 
> Barry
> 
>The reason for switching to GitLab is that it has a better testing system 
> than BitBucket and Gitlab. We hope that will allow us to test and manage
>pull requests more rapidly, efficiently and accurately, thus allowing us 
> to improve and add to PETSc more quickly.
> 



Re: [petsc-users] errors when using elemental with petsc3.10.5

2019-08-21 Thread Balay, Satish via petsc-users
Can you run 'make' again and see if this error goes away?

Satish

On Wed, 21 Aug 2019, Lailai Zhu via petsc-users wrote:

> hi, Satish,
> i tried to do it following your suggestion, i get the following errors when
> installing.
> here is my configuration,
> 
> any ideas?
> 
> best,
> lailai
> 
> ./config/configure.py --with-c++-support --known-mpi-shared-libraries=1 
> --with-batch=0  --with-mpi=1 --with-debugging=0  CXXOPTFLAGS="-g -O3" 
> COPTFLAGS="-O3 -ip -axCORE-AVX2 -xSSE4.2" FOPTFLAGS="-O3 -ip -axCORE-AVX2
> -xSSE4.2" --with-blas-lapack-dir=/opt/intel/mkl --download-elemental=1
> --download-blacs=1  --download-scalapack=1  --download-hypre=1
> --download-plapack=1 --with-cc=mpicc --with-cxx=mpic++ --with-fc=mpifort 
> --download-amd=1 --download-anamod=1 --download-blopex=1
> --download-dscpack=1 --download-sprng=1 --download-superlu=1
> --with-cxx-dialect=C++11 --download-metis --download-parmetis
> 
> 
> 
> pet3.10.5-intel19-mpich3.3/obj/mat/impls/sbaij/seq/sbaij.o: In function
> `MatCreate_SeqSBAIJ':
> sbaij.c:(.text+0x1bc45): undefined reference to
> `MatConvert_SeqSBAIJ_Elemental'
> ld: pet3.10.5-intel19-mpich3.3/obj/mat/impls/sbaij/seq/sbaij.o: relocation
> R_X86_64_PC32 against undefined hidden symbol `MatConvert_SeqSBAIJ_Elemental'
> can not be used when making a shared object
> ld: final link failed: Bad value
> gmakefile:86: recipe for target
> 'pet3.10.5-intel19-mpich3.3/lib/libpetsc.so.3.10.5' failed
> make[2]: *** [pet3.10.5-intel19-mpich3.3/lib/libpetsc.so.3.10.5] Error 1
> make[2]: Leaving directory '/usr/nonroot/petsc/petsc3.10.5_intel19_mpich3.3'
> ../petsc3.10.5_intel19_mpich3.3/lib/petsc/conf/rules:81:
> recipe for target 'gnumake' failed
> make[1]: *** [gnumake] Error 2
> make[1]: Leaving directory '/usr/nonroot/petsc/petsc3.10.5_intel19_mpich3.3'
> **ERROR*
>   Error during compile, check
> pet3.10.5-intel19-mpich3.3/lib/petsc/conf/make.log
>   Send it and pet3.10.5-intel19-mpich3.3/lib/petsc/conf/configure.log to
> petsc-ma...@mcs.anl.gov
> 
> On 8/21/19 10:58 PM, Balay, Satish wrote:
> > To install elemental - you use: --download-elemental=1 [not
> > --download-elemental-commit=v0.87.7]
> >
> > Satish
> >
> >
> > On Wed, 21 Aug 2019, Lailai Zhu via petsc-users wrote:
> >
> >> hi, dear petsc developers,
> >>
> >> I am having a problem when using the external solver elemental.
> >> I installed petsc3.10.5 version with the flag
> >> --download-elemental-commit=v0.87.7
> >> the installation seems to be ok. However, it seems that i may not be able
> >> to use the elemental solver though.
> >>
> >> I followed this page
> >> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MATELEMENTAL.html
> >> to interface the elemental solver, namely,
> >> MatSetType(A,MATELEMENTAL);
> >> or set it via the command line '*-mat_type elemental*',
> >>
> >> in either case, i will get the following error,
> >>
> >> [0]PETSC ERROR: - Error Message
> >> --
> >> [0]PETSC ERROR: Unknown type. Check for miss-spelling or missing package:
> >> http://www.mcs.anl.gov/petsc/documentation/installation.html#external
> >> [0]PETSC ERROR: Unknown Mat type given: elemental
> >> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
> >> trouble shooting.
> >> [0]PETSC ERROR: Petsc Release Version 3.10.5, Mar, 28, 2019
> >>
> >> May i ask whether there will be a way or some specific petsc versions that
> >> are
> >> able to use the elemental solver?
> >>
> >> Thanks in advance,
> >>
> >> best,
> >> lailai
> >>
> 
> 


Re: [petsc-users] errors when using elemental with petsc3.10.5

2019-08-21 Thread Balay, Satish via petsc-users
To install elemental - you use: --download-elemental=1 [not 
--download-elemental-commit=v0.87.7]

Satish


On Wed, 21 Aug 2019, Lailai Zhu via petsc-users wrote:

> hi, dear petsc developers,
> 
> I am having a problem when using the external solver elemental.
> I installed petsc3.10.5 version with the flag
> --download-elemental-commit=v0.87.7
> the installation seems to be ok. However, it seems that i may not be able
> to use the elemental solver though.
> 
> I followed this page
> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MATELEMENTAL.html
> to interface the elemental solver, namely,
> MatSetType(A,MATELEMENTAL);
> or set it via the command line '*-mat_type elemental*',
> 
> in either case, i will get the following error,
> 
> [0]PETSC ERROR: - Error Message
> --
> [0]PETSC ERROR: Unknown type. Check for miss-spelling or missing package:
> http://www.mcs.anl.gov/petsc/documentation/installation.html#external
> [0]PETSC ERROR: Unknown Mat type given: elemental
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
> trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.10.5, Mar, 28, 2019
> 
> May i ask whether there will be a way or some specific petsc versions that are
> able to use the elemental solver?
> 
> Thanks in advance,
> 
> best,
> lailai
> 



Re: [petsc-users] [petsc-dev] IMPORTANT PETSc repository changed from Bucketbit to GitLab

2019-08-19 Thread Balay, Satish via petsc-users
A note:

The bitbucket repository is saved at 
https://bitbucket.org/petsc/petsc-pre-gitlab 

The git part is now read-only. The other parts [issues, PRs, wiki etc] are 
perhaps writable - but we should avoid that.

Satish

On Mon, 19 Aug 2019, Smith, Barry F. via petsc-dev wrote:

> 
> 
>PETSc folks.
> 
> This announcement is for people who access PETSc from the BitBucket 
> repository or post issues or have other activities with the Bitbucket 
> repository
> 
>We have changed  the location of the PETSc repository from BitBucket to 
> GitLab. For each copy of the repository you need to do
> 
>git remote set-url origin 
> g...@gitlab.com:petsc/petsc.git
>or
>git remote set-url origin https://gitlab.com/petsc/petsc.git
> 
>You will likely also want to set up an account on gitlab and remember to 
> set the ssh key information
> 
>if you previously had write permission to the petsc repository and cannot 
> write to the new repository please email 
> bsm...@mcs.anl.gov with your GitLab
>username and the email used
> 
> Please do not make pull requests to the Gitlab site yet; we will be 
> manually processing the PR from the BitBucket site over the next couple of
> days as we implement the testing.
> 
> Please be patient, this is all new to use and it may take a few days to 
> get out all the glitches.
> 
> Thanks for your support
> 
> Barry
> 
>The reason for switching to GitLab is that it has a better testing system 
> than BitBucket and Gitlab. We hope that will allow us to test and manage
>pull requests more rapidly, efficiently and accurately, thus allowing us 
> to improve and add to PETSc more quickly.
> 



Re: [petsc-users] When building PETSc with --prefix, reference to temporary build directory remains

2019-08-01 Thread Balay, Satish via petsc-users
On Thu, 1 Aug 2019, Smith, Barry F. via petsc-users wrote:

> 
>   Please consider upgrading to the latest PETSc 3.11 it has many new features 
> and fewer bugs etc.
> 
> > 5. From now on set PETSC_DIR=/readonly PETSC_ARCH=''
> > step 4 moves the compiled PETSc to /readonly/ and it works, but when I 
> > compile a program with it the following line pops up in the linking command:
> > -Wl,-rpath,/temporary/-Xlinker
> 
> 
>How are you compiling the program? 
> 
>If you are using the PETSc make facilities the easiest fix would be to 
> remove this offend -Wl,-rpath,/temporary/-Xlinker which isn't needed. It is 
> likely in /readonly/lib/petsc/conf/petscvariables 
> 
>I may have gotten the file or location wrong.  You can use find in 
> /readonly to locate any use of -Wl,-rpath,/temporary/-Xlinker and remove them.


Actually it's best to replace -Wl,-rpath,/temporary/-Xlinker with
-Wl,-rpath,/readonly/-Xlinker or the appropriate value.

Likely the 'prefix' install code for this version of petsc is buggy. An alternative
is to do an in-place install (i.e. do not use --prefix - but keep the sources and
build) in /readonly
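
For the first option, something like this (a sketch - /temporary and /readonly stand for your actual build and install locations):

  # fix up the installed makefile fragment so it no longer references the build tree
  sed -i 's|/temporary|/readonly|g' /readonly/lib/petsc/conf/petscvariables
  # check for any remaining references
  grep -rl '/temporary' /readonly/lib/petsc/conf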

Satish

> 
>Barry
> 
> 
> 
> 
> > On Aug 1, 2019, at 6:07 AM, Bastian Löhrer via petsc-users 
> >  wrote:
> > 
> > Dear all,
> > 
> > I'm struggling to compile PETSc 3.3-p6 on a cluster where it is to be 
> > provided in a read-only folder.
> > 
> > My scenario is the following:
> > PETSc shall end up in a folder into which I can write from a login node but 
> > which is read-only on compute nodes: I'll call it /readonly/ below.
> > So, using a compute node, I need to compile PETSc in a different location, 
> > which I'll call /temporary/
> > I have read numerous instructions on the web and here are the steps that I 
> > came up with:
> > 
> > 1. on a compute node: unpack the PETSc source to /temporary/ and navigate 
> > there.
> > 2. configure:
> > ./configure 
> >   \
> >  --prefix=/readonly/
> >   \
> >  --with-gnu-compilers=0 
> >   \
> >   --with-vendor-compilers=intel 
> >   \
> >  --with-large-file-io=1 
> >   \
> >  --CFLAGS="-fPIC -L${I_MPI_ROOT}/intel64/lib 
> > -I${I_MPI_ROOT}/intel64/include -lmpi"   \
> >--CXXFLAGS="-fPIC -L${I_MPI_ROOT}/intel64/lib 
> > -I${I_MPI_ROOT}/intel64/include -lmpi -lmpicxx"  \
> >  --FFLAGS="-fPIC -L${I_MPI_ROOT}/intel64/lib 
> > -I${I_MPI_ROOT}/intel64/include -lmpi"   \
> > --LDFLAGS="-L${I_MPI_ROOT}/intel64/lib 
> > -I${I_MPI_ROOT}/intel64/include -lmpi" \
> > COPTFLAGS="-O3 -axCORE-AVX2 -xSSE4.2 -fp-model consistent 
> > -fp-model source -fp-speculation=safe -ftz"  \
> >   CXXOPTFLAGS="-O3 -axCORE-AVX2 -xSSE4.2 -fp-model consistent 
> > -fp-model source -fp-speculation=safe -ftz"  \
> > FOPTFLAGS="-O3 -axCORE-AVX2 -xSSE4.2 -fp-model consistent 
> > -fp-model source -fp-speculation=safe -ftz"  \
> >--with-blas-lapack-dir="${MKLROOT}/lib/intel64"  
> >\
> >  --download-hypre   
> >   \
> >  --with-debugging=no
> > 
> > 3. make all
> > 4. on a login node: make install
> > 5. From now on set PETSC_DIR=/readonly PETSC_ARCH=''
> > step 4 moves the compiled PETSc to /readonly/ and it works, but when I 
> > compile a program with it the following line pops up in the linking command:
> > -Wl,-rpath,/temporary/-Xlinker
> > 
> > This is a problem when the drive on which /temporary/ is placed is not 
> > reachable which is the case right now due to technical issues. This causes 
> > the linking process to get stuck.
> > The folder /temporary/ is to be deleted anyway so I do not see why it 
> > should be referenced here.
> > 
> > Am I missing something?
> > 
> > - Bastian
> 
> 


[petsc-users] petsc-3.11.3.tar.gz now available

2019-06-26 Thread Balay, Satish via petsc-users
Dear PETSc users,

The patch release petsc-3.11.3 is now available for download,
with change list at 'PETSc-3.11 Changelog'

http://www.mcs.anl.gov/petsc/download/index.html

Satish



Re: [petsc-users] ERROR_WITH_EXAMPLE_BUILD

2019-06-16 Thread Balay, Satish via petsc-users
Can you send configure.log, make.log for this build?

Also copy/paste complete output from the 'make' command below.

Satish

On Sat, 15 Jun 2019, govind sharma via petsc-users wrote:

> Hello,
> 
> I am trying to build one example of tutorial on snes with following command
> "make PETSC_DIR=/opt/petsc/3.8.4 PETSC_ARCH=linux-gnu-dbg ex1" but it gives
> some error like
> 
> /usr/bin/ld: cannot find -lgfortran
> collect2: error: ld returned 1 exit status
> makefile:29: recipe for target 'ex1' failed
> make: [ex1] Error 1 (ignored)
> /bin/rm -f ex1.o
> 
> It does not build. Can anyone help me to understand to this to build ex1
> and run it?
> 
> Regards,
> Govind Sharma
> Phd Student, IIT Delhi
> 



Re: [petsc-users] Configuration process of Petsc hanging

2019-05-31 Thread Balay, Satish via petsc-users
PETSc configure is attempting to run some MPI binaries - and that is hanging 
with this MPI.

You can retry with the options:

 --batch=1 --known-64-bit-blas-indices=0 --known-mpi-shared-libraries=0

[and follow instructions provided by configure]

Satish



On Fri, 31 May 2019, Ma, Xiao via petsc-users wrote:

> Hi ,
> 
> I am trying to install Pylith which is a earthquake simulator using Petsc 
> library, I am building it in PSC bridge cluster, during the steps of building 
> Petsc, the configuration hanging at
> 
> TESTING: configureMPITypes from 
> config.packages.MPI(config/BuildSystem/config/packages/MPI.py:247)
> 
> 
> I am not sure if this has to with the configuration setup of the mpi version 
> I am using.
> 
> Any help would be deeply appreciated.
> 
> I am attaching the configure options here:
> 
> 
> Saving to: ‘petsc-pylith-2.2.1.tgz’
> 
> 100%[===>] 10,415,016 
>  37.3MB/s   in 0.3s
> 
> 2019-05-31 14:03:13 (37.3 MB/s) - ‘petsc-pylith-2.2.1.tgz’ saved 
> [10415016/10415016]
> 
> FINISHED --2019-05-31 14:03:13--
> Total wall clock time: 1.1s
> Downloaded: 1 files, 9.9M in 0.3s (37.3 MB/s)
> /usr/bin/tar -zxf petsc-pylith-2.2.1.tgz
> cd petsc-pylith && \
> ./configure --prefix=/home/xm12345/pylith \
> --with-c2html=0 --with-x=0 \
> --with-clanguage=C \
> --with-mpicompilers=1 \
> --with-shared-libraries=1 --with-64-bit-points=1 
> --with-large-file-io=1 \
> --download-chaco=1 --download-ml=1 --download-f2cblaslapack=1 
> --with-hdf5=1--with -debugging=0  --with-fc=0 
> CPPFLAGS="-I/home/xm12345/pylith/include -I/home/xm12345/pylith/include " L   
>   DFLAGS="-L/home/xm12345/pylith/lib 
> -L/home/xm12345/pylith/lib64 -L/home/xm12345/pylith/lib -L/home/xm
>  12345/pylith/lib64 " CFLAGS="-g -O2" CXXFLAGS="-g -O2 
> -DMPICH_IGNORE_CXX_SEEK" FCFLAGS="" \
> PETSC_DIR=/home/xm12345/build/pylith/petsc-pylith 
> PETSC_ARCH=arch-pylith && \
> make -f gmakefile -j2 
> PETSC_DIR=/home/xm12345/build/pylith/petsc-pylith PETSC_ARCH=arch-pylit   
>   h && \
> make PETSC_DIR=/home/xm12345/build/pylith/petsc-pylith install && \
> make PETSC_DIR=/home/xm12345/build/pylith/petsc-pylith test && \
> touch ../installed_petsc
> ===
>  Configuring PETSc to compile on your system
> ===
> ===
>  * WARNING: MAKEFLAGS 
> (set to w) found in environment variables - ignoring  
> use ./configure MAKEFLAGS=$MAKEFLAGS if you really want 
> to use that value **   
> ===
>
> ===
>WARNING! Compiling 
> PETSc with no debugging, this should  
>only be done for timing and production runs. 
> All development should   
> be done when configured using --with-debugging=1  
>   
> ===
>TESTING: configureMPITypes from 
> config.packages.MPI(config/BuildSystem/config/packages/MPI.py:247)
> 
> 
> 


Re: [petsc-users] How do I supply the compiler PIC flag via CFLAGS, CXXXFLAGS, and FCFLAGS

2019-05-28 Thread Balay, Satish via petsc-users
Configure.log shows '--with-pic=1' - hence this error.

Remove '--with-pic=1' and retry.

Satish

On Tue, 28 May 2019, Inge Gutheil via petsc-users wrote:

> Dear PETSc list,
> when I try to install the petsc-3.11.2 library as a static library - for
> some reasons I do not want the dynamic library -
> I suddenly get the error
> 
> Cannot determine compiler PIC flags if shared libraries is turned off
> Either run using --with-shared-libraries or --with-pic=0 and supply the
> compiler PIC flag via CFLAGS, CXXXFLAGS, and FCFLAGS
> 
> Attached find the configure.log
> I added --with-pic=0 as can seen from configure.log but I do not know
> where I can find how to set the compiler PIC flag via CFLAGS, CXXXFLAGS,
> and FCFLAGS, at least -fPIC seems to be not sufficient, so what can I do?
> 
> Wit 3.11.1 I did not have that problem.
> 
> Regards
> Inge
> 
> 



Re: [petsc-users] With-batch (new) flags

2019-05-20 Thread Balay, Satish via petsc-users
I'm not yet sure what the correct fix is - but the following change should get 
this going..

diff --git a/config/BuildSystem/config/packages/BlasLapack.py b/config/BuildSystem/config/packages/BlasLapack.py
index e0310da4b0..7355f1a369 100644
--- a/config/BuildSystem/config/packages/BlasLapack.py
+++ b/config/BuildSystem/config/packages/BlasLapack.py
@@ -42,7 +42,7 @@ class Configure(config.package.Package):
     help.addArgument('BLAS/LAPACK', '-with-lapack-lib=',nargs.ArgLibrary(None, None, 'Indicate the library(s) containing LAPACK'))
     help.addArgument('BLAS/LAPACK', '-with-blaslapack-suffix=',nargs.ArgLibrary(None, None, 'Indicate a suffix for BLAS/LAPACK subroutine names.'))
     help.addArgument('BLAS/LAPACK', '-with-64-bit-blas-indices', nargs.ArgBool(None, 0, 'Try to use 64 bit integers for BLAS/LAPACK; will error if not available'))
-#help.addArgument('BLAS/LAPACK', '-known-64-bit-blas-indices=', nargs.ArgBool(None, 0, 'Indicate if using 64 bit integer BLAS'))
+help.addArgument('BLAS/LAPACK', '-known-64-bit-blas-indices=', nargs.ArgBool(None, 0, 'Indicate if using 64 bit integer BLAS'))
     return
 
   def getPrefix(self):

Satish

On Mon, 20 May 2019, Mark Adams via petsc-users wrote:

> On Mon, May 20, 2019 at 3:55 PM Balay, Satish  wrote:
> 
> > for ex:  ilp version of mkl is --known-64-bit-blas-indices=1 while lp mkl
> > is --known-64-bit-blas-indices=0
> >
> > Default blas we normally use is --known-64-bit-blas-indices=0 [they don't
> > use 64bit indices]
> >
> 
> Humm, that is what Dylan (in the log that I sent). He is downloading blas
> and has --known-64-bit-blas-indices=0. Should this be correct?
> 
> 
> >
> > Satish
> >
> > On Mon, 20 May 2019, Mark Adams via petsc-users wrote:
> >
> > > We are getting this failure. This a bit frustrating in that the first
> > error
> > > message "Must give a default value for known-mpi-shared-libraries.." OK,
> > I
> > > google it and find that =0 is suggested. That seemed to work. Then we
> > got a
> > > similar error about -known-64-bit-blas-indices. It was clear from the
> > > documentation what to use so we tried =0 and that failed (attached). This
> > > is little frustrating having to use try and error for each of these
> > 'known"
> > > things.
> > >
> > > Dylan is trying  --known-64-bit-blas-indices=1 now. I trust that will
> > work,
> > > but I think the error are not very informative. All this known stuff is
> > new
> > > to me. Perhaps put an FAQ for this and list all of the "known"s that we
> > > need to add in batch.
> > >
> > > Thanks,
> > > Mark
> > >
> >
> >
> 



Re: [petsc-users] With-batch (new) flags

2019-05-20 Thread Balay, Satish via petsc-users
for ex:  ilp version of mkl is --known-64-bit-blas-indices=1 while lp mkl is 
--known-64-bit-blas-indices=0

Default blas we normally use is --known-64-bit-blas-indices=0 [they don't use 
64bit indices]
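
So for a standard 32-bit-index BLAS (e.g. --download-fblaslapack) the batch flags would look something like this (a sketch only - add your usual compiler/MPI options):

  ./configure --with-batch --known-mpi-shared-libraries=0 --known-64-bit-blas-indices=0 \
    --download-fblaslapack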

Satish

On Mon, 20 May 2019, Mark Adams via petsc-users wrote:

> We are getting this failure. This a bit frustrating in that the first error
> message "Must give a default value for known-mpi-shared-libraries.." OK, I
> google it and find that =0 is suggested. That seemed to work. Then we got a
> similar error about -known-64-bit-blas-indices. It was clear from the
> documentation what to use so we tried =0 and that failed (attached). This
> is little frustrating having to use try and error for each of these 'known"
> things.
> 
> Dylan is trying  --known-64-bit-blas-indices=1 now. I trust that will work,
> but I think the error are not very informative. All this known stuff is new
> to me. Perhaps put an FAQ for this and list all of the "known"s that we
> need to add in batch.
> 
> Thanks,
> Mark
> 



[petsc-users] petsc-3.11.2.tar.gz now available

2019-05-18 Thread Balay, Satish via petsc-users
Dear PETSc users,

The patch release petsc-3.11.2 is now available for download,
with change list at 'PETSc-3.11 Changelog'

http://www.mcs.anl.gov/petsc/download/index.html

Satish



Re: [petsc-users] ``--with-clanguage=c++" turns on "PETSC_HAVE_COMPLEX"?

2019-05-03 Thread Balay, Satish via petsc-users
On Fri, 3 May 2019, Fande Kong via petsc-users wrote:

> On Fri, May 3, 2019 at 8:02 PM Balay, Satish  wrote:
> 
> > On Fri, 3 May 2019, Fande Kong via petsc-users wrote:
> >
> > > It looks like mpicxx from openmpi does not handle this correctly.
> >
> > Perhaps my earlier messages was not clear. The problem is not with
> > OpenMPI - but your build of it. Its installed with 'clang' as the C++
> > compiler - it should be built with 'clang++' as the c++ compiler.
> >
> 
> Oh, I see. Thanks.

I've added a check to configure

https://bitbucket.org/petsc/petsc/pull-requests/1622/configure-when-with-clanguage-cxx-is-used/diff

Satish


Re: [petsc-users] ``--with-clanguage=c++" turns on "PETSC_HAVE_COMPLEX"?

2019-05-03 Thread Balay, Satish via petsc-users
On Fri, 3 May 2019, Fande Kong via petsc-users wrote:

> It looks like mpicxx from openmpi does not handle this correctly.

Perhaps my earlier message was not clear. The problem is not with
OpenMPI - but with your build of it. It's installed with 'clang' as the C++
compiler - it should be built with 'clang++' as the C++ compiler.
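
i.e. check what the wrapper actually invokes, and rebuild Open MPI with a real C++ compiler for CXX (a sketch - the install prefix here is just the one from this thread):

  mpicxx -show            # currently shows 'clang ... -lmpi'
  # when rebuilding Open MPI, point CXX at an actual C++ compiler
  ./configure CC=clang CXX=clang++ --prefix=/Users/kongf/projects/openmpi-2.1.1_installed
  make all install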

> I switched to mpich, and it works now.
> 
> However there is till some warnings:
> 
> *clang-6.0: warning: treating 'c' input as 'c++' when in C++ mode, this
> behavior is deprecated [-Wdeprecated]*
> * CXX arch-linux2-c-opt-memory/obj/dm/impls/plex/glexg.o*
> * CXX arch-linux2-c-opt-memory/obj/dm/impls/plex/petscpartmatpart.o*
> *clang-6.0: warning: treating 'c' input as 'c++' when in C++ mode, this
> behavior is dep*

Yes - because PETSc sources are in C - and you are building with
--with-clanguage=cxx - and this compiler thinks one should not compile
.c sources as c++.

Satish

> 
> Fande,
> 
> 
> On Fri, May 3, 2019 at 7:29 PM Balay, Satish  wrote:
> 
> > 
> > Executing: mpicxx -show
> >
> > stdout: clang
> > -I/Users/kongf/projects/openmpi-2.1.1_installed/include
> > -L/Users/kongf/projects/openmpi-2.1.1_installed/lib -lmpi
> > 
> >
> > Hm - I think this [specifying a C compiler as c++] is the trigger of this
> > problem.
> >
> > configure checks if the c++ compiler supports complex. This test was
> > successful [as it was done with .cxx file - and presumably clang switches
> > to c++ mocd for a .cxx file.
> >
> > However PETSc sources a .c - so its likely compiling PETSc as C -  so
> > things are now inconsistant - and broken..
> >
> > Note:
> > PETSC_HAVE_COMPLEX => compilers support complex - so define a complex
> > datatype
> > PETSC_USE_COMPLEX => build PETSc with PetscScalar=complex
> >
> >
> > Satish
> >
> > On Fri, 3 May 2019, Fande Kong via petsc-users wrote:
> >
> > > Hi All,
> > >
> > > Comping PETSc with ``--with-clanguage=c"  works fine. But I could not
> > > compile PETSc with "--with-clanguage=c++" since the flag
> > > "PETSC_HAVE_COMPLEX" was wrongly set on by this option.
> > >
> > > */Users/kongf/projects/petsc/src/sys/objects/pinit.c:913:21: error:
> > > expected parameter declarator*
> > > *PetscComplex ic(0.0,1.0);*
> > > *^*
> > > */Users/kongf/projects/petsc/src/sys/objects/pinit.c:913:21: error:
> > > expected ')'*
> > > */Users/kongf/projects/petsc/src/sys/objects/pinit.c:913:20: note: to
> > match
> > > this '('*
> > > *PetscComplex ic(0.0,1.0);*
> > > *   ^*
> > > */Users/kongf/projects/petsc/src/sys/objects/pinit.c:914:13: error:
> > > assigning to 'PetscComplex' (aka '_Complex double') from incompatible
> > type
> > > 'PetscComplex ()' (aka '_Complex double ()')*
> > > *PETSC_i = ic;*
> > > *^ ~~*
> > > *3 errors generated.*
> > > *make[2]: *** [arch-linux2-c-opt-memory/obj/sys/objects/pinit.o] Error 1*
> > > *make[2]: *** Waiting for unfinished jobs*
> > > *make[2]: Leaving directory `/Users/kongf/projects/petsc'*
> > > *make[1]: *** [gnumake] Error 2*
> > > *make[1]: Leaving directory `/Users/kongf/projects/petsc'*
> > > ***ERROR**
> > > *  Error during compile, check
> > > arch-linux2-c-opt-memory/lib/petsc/conf/make.log*
> > > *  Send it and arch-linux2-c-opt-memory/lib/petsc/conf/configure.log to
> > > petsc-ma...@mcs.anl.gov *
> > > **
> > >
> > > The make and configure logs are attached.
> > >
> > > Fande,
> > >
> >
> >
> 



Re: [petsc-users] ``--with-clanguage=c++" turns on "PETSC_HAVE_COMPLEX"?

2019-05-03 Thread Balay, Satish via petsc-users

Executing: mpicxx -show 

   stdout: clang 
-I/Users/kongf/projects/openmpi-2.1.1_installed/include 
-L/Users/kongf/projects/openmpi-2.1.1_installed/lib -lmpi


Hm - I think this [specifying a C compiler as c++] is the trigger of this 
problem.

configure checks if the c++ compiler supports complex. This test was successful
[as it was done with a .cxx file - and presumably clang switches to c++ mode for
a .cxx file].

However PETSc sources are .c - so it's likely compiling PETSc as C - so things
are now inconsistent - and broken.

Note:
PETSC_HAVE_COMPLEX => compilers support complex - so define a complex datatype
PETSC_USE_COMPLEX => build PETSc with PetscScalar=complex


Satish
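
For illustration, a minimal sketch of the distinction (assumes a translation
unit that includes petscsys.h; the function name is illustrative):

  #include <petscsys.h>

  static void complex_flags_sketch(void)
  {
  #if defined(PETSC_HAVE_COMPLEX)
    /* the compilers support a complex type, so PetscComplex is usable
       even in a real-scalar build of PETSc */
    PetscComplex z = PETSC_i;   /* imaginary unit */
    (void)z;
  #endif
  #if defined(PETSC_USE_COMPLEX)
    /* PETSc itself was configured with complex scalars (PetscScalar is complex) */
    PetscScalar s = PETSC_i;
    (void)s;
  #endif
  }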

On Fri, 3 May 2019, Fande Kong via petsc-users wrote:

> Hi All,
> 
> Compiling PETSc with ``--with-clanguage=c"  works fine. But I could not
> compile PETSc with "--with-clanguage=c++" since the flag
> "PETSC_HAVE_COMPLEX" was wrongly set on by this option.
> 
> */Users/kongf/projects/petsc/src/sys/objects/pinit.c:913:21: error:
> expected parameter declarator*
> *PetscComplex ic(0.0,1.0);*
> *^*
> */Users/kongf/projects/petsc/src/sys/objects/pinit.c:913:21: error:
> expected ')'*
> */Users/kongf/projects/petsc/src/sys/objects/pinit.c:913:20: note: to match
> this '('*
> *PetscComplex ic(0.0,1.0);*
> *   ^*
> */Users/kongf/projects/petsc/src/sys/objects/pinit.c:914:13: error:
> assigning to 'PetscComplex' (aka '_Complex double') from incompatible type
> 'PetscComplex ()' (aka '_Complex double ()')*
> *PETSC_i = ic;*
> *^ ~~*
> *3 errors generated.*
> *make[2]: *** [arch-linux2-c-opt-memory/obj/sys/objects/pinit.o] Error 1*
> *make[2]: *** Waiting for unfinished jobs*
> *make[2]: Leaving directory `/Users/kongf/projects/petsc'*
> *make[1]: *** [gnumake] Error 2*
> *make[1]: Leaving directory `/Users/kongf/projects/petsc'*
> ***ERROR**
> *  Error during compile, check
> arch-linux2-c-opt-memory/lib/petsc/conf/make.log*
> *  Send it and arch-linux2-c-opt-memory/lib/petsc/conf/configure.log to
> petsc-ma...@mcs.anl.gov *
> **
> 
> The make and configure logs are attached.
> 
> Fande,
> 



Re: [petsc-users] Iterative solver behavior with increasing number of mpi

2019-04-17 Thread Balay, Satish via petsc-users
Yes - the default preconditioner is block-jacobi - with one block on
each processor.

So when run on 1 proc vs 8 procs - the preconditioner is different
(with 1 block for bjacobi vs 8 blocks for bjacobi) - hence the difference in
convergence.

Satish
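
For illustration, a minimal way to see this (assuming the standard KSP
tutorial src/ksp/ksp/examples/tutorials/ex2.c built in its directory; the
grid sizes and options are illustrative):

  mpiexec -n 1 ./ex2 -m 100 -n 100 -pc_type bjacobi -sub_pc_type ilu -ksp_converged_reason -ksp_view
  mpiexec -n 4 ./ex2 -m 100 -n 100 -pc_type bjacobi -sub_pc_type ilu -ksp_converged_reason -ksp_view

-ksp_view shows the preconditioner layout: bjacobi defaults to one block per
MPI rank, so the two runs use different preconditioners and hence need
different iteration counts.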

On Wed, 17 Apr 2019, Marian Greg via petsc-users wrote:

> Hi All,
> 
> I am facing strange behavior of the ksp solvers with increasing number of
> MPI. The solver is taking more and more iterations with increase in number
> of MPIs. Is that a normal situation? I was expecting to get the same number
> of iterations with whatever number of MPIs I use.
> 
> E.g.
> My matrix has about 2 million dofs
> Solving with np 1 takes about 3500 iteration while solving with np 4 takes
> 6500 iterations for the same convergence criteria.
> 
> Thanks
> Mari
> 



Re: [petsc-users] VecView to hdf5 broken for large (complex) vectors

2019-04-17 Thread Balay, Satish via petsc-users
On Wed, 17 Apr 2019, Smith, Barry F. via petsc-users wrote:

>   This is fine for "hacking" on PETSc but worthless for any other package. 
> Here is my concern, when someone 
> realizes there is a problem with a package they are using through a package 
> manager they think, crud I have to
> 
> 1) find the git repository for this package
> 2) git clone the package 
> 3) figure out how to build the package from source, is it ./configure, cmake, 
> what are the needed arguments,... 
> 4) wait for the entire thing to build 
> 
> then I can go in and investigate the problem and provide and test the fix via 
> a pull request. Heck I'm not going to bother.
> 
> Thus a lot of potential contributions of small fixes that everyone in the 
> community would benefit from are lost. This is why, for 
> me, an ideal HPC package manager provides a trivial process for providing 
> fixes/improvements to other packages.
> 
> For example Sajid could have easily figured out the VecView_MPI_HDF5() bug 
> and provided a fix but just the hassle of 
> logistics (not ability to solve the problem) prevented him from providing the 
> bug fix to everyone rapidly. 


Even without spack and multiple packages - this is not an easy thing to
do. For ex: most of our users install petsc from tarball.

And if they find a bug - they have to go through similar complicated
process [create a bitbucket account, get a fork - learn the petsc PR
process - make a PR etc].

With spack - I stick to the usual process - and don't get bogged down
by 'spack' support for this process.

If I see a breakage - I do 'spack build-env package' [this has its own
issues] - attempt a fix - get it working first with a spack build.

[Alternative is to just edit the package file to get my fix - if its a patch I 
can find]


Once I have it working [the major issue is taken care of]. Then I
have a diff/patch and then worry about how to submit this diff/patch
to upstream.

Sure it's a multi-step model - and has many trip points. But it's not
as if our current petsc-only model doesn't have any.

Satish


Re: [petsc-users] VecView to hdf5 broken for large (complex) vectors

2019-04-17 Thread Balay, Satish via petsc-users
On Wed, 17 Apr 2019, Satish Balay wrote:

> Its not ideal - but having local changes in our spack clones (change
> git url, add appropriate version lines to branches that one is
> working on) is possible [for a group working in this mode].

[balay@pj03 petsc]$ pwd
/home/balay/petsc
[balay@pj03 petsc]$ git branch
* barry/fix-vecview-mpi-hdf5/maint
  maint
  master

[balay@pj03 spack]$ git diff
diff --git a/var/spack/repos/builtin/packages/petsc/package.py 
b/var/spack/repos/builtin/packages/petsc/package.py
index 3d9686eb2..57932f93c 100644
--- a/var/spack/repos/builtin/packages/petsc/package.py
+++ b/var/spack/repos/builtin/packages/petsc/package.py
@@ -17,13 +17,16 @@ class Petsc(Package):
 
 homepage = "http://www.mcs.anl.gov/petsc/index.html"
 url  = 
"http://ftp.mcs.anl.gov/pub/petsc/release-snapshots/petsc-3.5.3.tar.gz;
-git  = "https://bitbucket.org/petsc/petsc.git;
+git  = "/home/balay/petsc"
 
 maintainers = ['balay', 'barrysmith', 'jedbrown']
 
 version('develop', branch='master')
 version('xsdk-0.2.0', tag='xsdk-0.2.0')
 
+version('3.11.99-maint', branch='maint')
+version('3.11.98-fix-vecview-mpi-hdf5', 
branch='barry/fix-vecview-mpi-hdf5/maint')
+
 version('3.11.1', 
'cb627f99f7ce1540ebbbf338189f89a5f1ecf3ab3b5b0e357f9e46c209f1fb23')
 version('3.11.0', 
'b3bed2a9263193c84138052a1b92d47299c3490dd24d1d0bf79fb884e71e678a')
 version('3.10.5', 
'3a81c8406410e0ffa8a3e9f8efcdf2e683cc40613c9bb5cb378a6498f595803e')
[balay@pj03 spack]$ spack install petsc@develop~hdf5~hypre~superlu-dist~metis
==> mpich@3.3b1 : externally installed in /home/petsc/soft/mpich-3.3b1
==> mpich@3.3b1 : already registered in DB
==> openblas is already installed in 
/home/balay/spack/opt/spack/linux-fedora29-x86_64/gcc-8.3.1/openblas-0.3.5-xokonl5jhgandxuwswkaabijra337l2d
==> python@3.7.2 : externally installed in /usr
==> python@3.7.2 : already registered in DB
==> sowing is already installed in 
/home/balay/spack/opt/spack/linux-fedora29-x86_64/gcc-8.3.1/sowing-1.1.25-p1-6wiuwedg5ufoabro3ip4j7ssd43a66je
==> Installing petsc
==> Searching for binary cache of petsc
==> Warning: No Spack mirrors are currently configured
==> No binary for petsc found: installing from source
==> Cloning git repository: /home/balay/petsc on branch master
warning: --depth is ignored in local clones; use file:// instead.
==> No checksum needed when fetching with git
==> Already staged petsc-develop-qcfn35x2lvixtk4u5reczf5y3pjq6a3r in 
/home/balay/spack/var/spack/stage/petsc-develop-qcfn35x2lvixtk4u5reczf5y3pjq6a3r
==> No patches needed for petsc
==> Building petsc [Package]
==> Executing phase: 'install'
==> [2019-04-17-01:18:20.792426] 'make' '-j16' 'install'
==> Successfully installed petsc
  Fetch: 0.82s.  Build: 2m 22.90s.  Total: 2m 23.71s.
[+] 
/home/balay/spack/opt/spack/linux-fedora29-x86_64/gcc-8.3.1/petsc-develop-qcfn35x2lvixtk4u5reczf5y3pjq6a3r
[balay@pj03 spack]$ spack install 
petsc@3.11.98-fix-vecview-mpi-hdf5~hdf5~hypre~superlu-dist~metis
==> mpich@3.3b1 : externally installed in /home/petsc/soft/mpich-3.3b1
==> mpich@3.3b1 : already registered in DB
==> openblas is already installed in 
/home/balay/spack/opt/spack/linux-fedora29-x86_64/gcc-8.3.1/openblas-0.3.5-xokonl5jhgandxuwswkaabijra337l2d
==> python@3.7.2 : externally installed in /usr
==> python@3.7.2 : already registered in DB
==> Installing petsc
==> Searching for binary cache of petsc
==> Warning: No Spack mirrors are currently configured
==> No binary for petsc found: installing from source
==> Cloning git repository: /home/balay/petsc on branch 
barry/fix-vecview-mpi-hdf5/maint
warning: --depth is ignored in local clones; use file:// instead.
==> No checksum needed when fetching with git
==> Already staged 
petsc-3.11.98-fix-vecview-mpi-hdf5-2i6pmomvyacyefcq7hxqe22gogepf6os in 
/home/balay/spack/var/spack/stage/petsc-3.11.98-fix-vecview-mpi-hdf5-2i6pmomvyacyefcq7hxqe22gogepf6os
==> No patches needed for petsc
==> Building petsc [Package]
==> Executing phase: 'install'
==> Successfully installed petsc
  Fetch: 0.79s.  Build: 2m 31.08s.  Total: 2m 31.88s.
[+] 
/home/balay/spack/opt/spack/linux-fedora29-x86_64/gcc-8.3.1/petsc-3.11.98-fix-vecview-mpi-hdf5-2i6pmomvyacyefcq7hxqe22gogepf6os
[balay@pj03 spack]$ spack install 
petsc@3.11.99-maint~hdf5~hypre~superlu-dist~metis
==> mpich@3.3b1 : externally installed in /home/petsc/soft/mpich-3.3b1
==> mpich@3.3b1 : already registered in DB
==> openblas is already installed in 
/home/balay/spack/opt/spack/linux-fedora29-x86_64/gcc-8.3.1/openblas-0.3.5-xokonl5jhgandxuwswkaabijra337l2d
==> python@3.7.2 : externally installed in /usr
==> python@3.7.2 : already registered in DB
==> Installing petsc
==> Searching for binary cache of petsc
==> Warning: No Spack mirrors are currently configured
==> No binary for petsc found: installing from source
==> Cloning git repository: /home/balay/petsc on branch maint
warning: --depth is ignored in local clones; use file:// instead.
==> No 

Re: [petsc-users] VecView to hdf5 broken for large (complex) vectors

2019-04-16 Thread Balay, Satish via petsc-users
On Wed, 17 Apr 2019, Smith, Barry F. wrote:

> 
>   So it sounds like spack is still mostly a "package manager" where people 
> use "static" packages and don't hack the package's code. This is not 
> unreasonable, no other package manager supports hacking a package's code 
> easily, presumably. The problem is that in the HPC world a "packaged"
> code is always incomplete and may need hacking or application of newly 
> generated patches and this is painful with static package managers so people 
> want to use the git repository directly and mange the build themselves which 
> negates the advantages of using a package manager.
> 
>Thanks
> 
> Barry
> 
> Perhaps if spack had an easier mechanism to allow the user to "point to" 
> local git clones it could get closer to the best of both worlds. Maybe spack 
> could support a list of local repositories and branches in the yaml file. But 
> yes the issue of rerunning the "./configure" stage still comes up.

$ spack help --all| grep diy
  diy   do-it-yourself: build from an existing source directory

I haven't explored this mode though. [but useful for packages that are
not already represented in repo]

This mode was in the instructions for one of the packages - but then I
couldn't figure out the equivalent of 'spack spec' vs 'spack install'
[query and check dependencies before installing] with diy

It's not ideal - but having local changes in our spack clones (change
git url, add appropriate version lines to branches that one is working
on) is possible [for a group working in this mode]. But it might not be
ideal for folks who want to easily do a one-off PR - in this
mode.

Satish


Re: [petsc-users] VecView to hdf5 broken for large (complex) vectors

2019-04-16 Thread Balay, Satish via petsc-users
On Tue, 16 Apr 2019, Sajid Ali via petsc-users wrote:

> > develop > 3.11.99 > 3.10.xx > maint (or other strings)
> Just discovered this issue when trying to build with my fork of spack at [1
> ].
> 
> 
> So, ideally each developer has to have their develop point to the branch
> they want to build ? That would make communication a little confusing since
> spack's develop version is some package's master and now everyone wants a
> different develop so as to not let spack apply any patches for string
> version sorted lower than lowest numeric version.

There is some issue filed [with PR?] regarding the sorting order of
string versions vs numerical versions. This might improve in the
future. But for now 'bugfix-vecduplicate-fftw-vec' will sort lower than
version 0.1.

Also 'develop' might not be appropriate for all branches.

For ex: - petsc has maint, maint-3.10 etc branches. - so if one is
creating a bugfix for maint - (i.e start a branch off maint) it would
be inappropriate to call it 'develop' - as it will be marked > version
3.11.99 and break some of the version comparisons.

> 
> >Even if you change commit from 'abc' to 'def'spack won't recognize this
> change and use the cached tarball.
> True, but since checksum changes and the user has to constantly zip and
> unzip, I personally find git cloning easier to deal with so it's just a
> matter of preference.
> 

Here you are referring to tarballs - where the sha256sum is listed.

 url  = 
"http://ftp.mcs.anl.gov/pub/petsc/release-snapshots/petsc-3.5.3.tar.gz;
 version('3.11.1', 
'cb627f99f7ce1540ebbbf338189f89a5f1ecf3ab3b5b0e357f9e46c209f1fb23')

However - one can also say:

 git  = "https://bitbucket.org/sajid__ali/petsc.git;
 version('3.11.1', commit='f3d32574624d5351549675da8733a2646265404f')

Here - spack downloads the git snapshot as tarball (saves in tarball
cache as petsc-3.11.1.tar.gz - and reuses it) - and there is no
sha256sum listed here to check. If you change this to some-other
commit (perhaps to test a fix) - spack will use the cached tarball -
and not download the snapshot corresponding to the changed commit.

Satish


Re: [petsc-users] VecView to hdf5 broken for large (complex) vectors

2019-04-16 Thread Balay, Satish via petsc-users
On Wed, 17 Apr 2019, Balay, Satish via petsc-users wrote:

> On Tue, 16 Apr 2019, Smith, Barry F. wrote:
> 
> > 
> > 
> > > On Apr 16, 2019, at 10:30 PM, Sajid Ali 
> > >  wrote:
> > > 
> > > @Barry: Thanks for the bugfix! 
> > > 
> > > @Satish: Thanks for pointing out this method!
> > > 
> > > My preferred way previously was to download the source code, unzip, edit, 
> > > zip. Now ask spack to not checksum (because my edit has changed stuff) 
> > > and build. Lately, spack has added git support and now I create a branch 
> > > of spack where I add my bugfix branch as the default build git repo 
> > > instead of master to now deal with checksum headaches. 
> > 
> >With the PETSc build system directly it handles dependencies, that is if 
> > you use a PETSC_ARCH and edit one PETSc file it will only recompile that 
> > one file and add it to the library instead of insisting on recompiling all 
> > of PETSc (as developers of course we rely on this or we'd go insane waiting 
> > for builds to complete when we are adding code).
> 
> Yeah but this is within a single package - and only if we don't redo a 
> configure.
> 
> And some of our code to avoid rebuilding external packages have corner cases 
> - so we have to occasionally ask users to do 'rm -rf PETSC_ARCH'
> 
> > 
> >  Is this possible with spack?
> 
> Spack tries to do this [avoid rebuilds] at a package level.
> 
> However within a package - it doesn't keep build files. [and if the
> user forces spack to not delete them with '--dont-restage
> --keep-stage' - it doesn't check if the package need to run configure
> again or not etc..] I'm not sure if this is possible to do
> consistently without error cases across the package collection spack
> has.

One additional note: spack has a way to get into a mode where one can
do the builds manually - primarily for debugging [when builds fail]

spack build-env petsc bash
spack cd petsc
make

But I haven't checked on how to replicate all the steps [configure;
make all; make install; delete] exactly as spack would do in
'spack install'

However - in this 'build-env' mode - one could use incremental build
feature provided by any given package.

Satish


Re: [petsc-users] VecView to hdf5 broken for large (complex) vectors

2019-04-16 Thread Balay, Satish via petsc-users
On Tue, 16 Apr 2019, Smith, Barry F. wrote:

> 
> 
> > On Apr 16, 2019, at 10:30 PM, Sajid Ali  
> > wrote:
> > 
> > @Barry: Thanks for the bugfix! 
> > 
> > @Satish: Thanks for pointing out this method!
> > 
> > My preferred way previously was to download the source code, unzip, edit, 
> > zip. Now ask spack to not checksum (because my edit has changed stuff) and 
> > build. Lately, spack has added git support and now I create a branch of 
> > spack where I add my bugfix branch as the default build git repo instead of 
> > master to now deal with checksum headaches. 
> 
>With the PETSc build system directly it handles dependencies, that is if 
> you use a PETSC_ARCH and edit one PETSc file it will only recompile that one 
> file and add it to the library instead of insisting on recompiling all of 
> PETSc (as developers of course we rely on this or we'd go insane waiting for 
> builds to complete when we are adding code).

Yeah but this is within a single package - and only if we don't redo a 
configure.

And some of our code to avoid rebuilding external packages has corner cases - 
so we have to occasionally ask users to do 'rm -rf PETSC_ARCH'

> 
>  Is this possible with spack?

Spack tries to do this [avoid rebuilds] at a package level.

However within a package - it doesn't keep build files. [and if the
user forces spack to not delete them with '--dont-restage
--keep-stage' - it doesn't check if the package needs to run configure
again or not etc.]
consistently without error cases across the package collection spack
has.

Satish


Re: [petsc-users] VecView to hdf5 broken for large (complex) vectors

2019-04-16 Thread Balay, Satish via petsc-users
On Tue, 16 Apr 2019, Sajid Ali wrote:

> Lately, spack has added git support and now I create a branch of
> spack where I add my bugfix branch as the default build git repo instead of
> master to now deal with checksum headaches.

Some good and bad here..

version('develop', branch='master')
version('3.11.99', branch='maint')
version('maint', branch='maint')

git branch is the only way to get a rebuild to pick up package changes (in 
branch)

However it's best to set appropriate version numbers here. i.e.
'3.11.99' should be preferable over 'maint'. Otherwise spack version
comparison logic will give unwanted results. It does stuff like:

develop > 3.11.99 > 3.10.xx > maint (or other strings)

Wrt tarballs and commit-ids - spack saves them as tarballs in cache
and reuses them. For ex: - the download below will be saved as
petsc-3.10.1.tar.gz.  Even if you change commit from 'abc' to 'def'
spack won't recognize this change and use the cached tarball.

However - the bad part wrt branch is - each time you do a 'spack
install' - it does a git clone.  [i.e there is no local git clone
which does a 'fetch' to minimize the clone overhead]

Satish


Re: [petsc-users] VecView to hdf5 broken for large (complex) vectors

2019-04-16 Thread Balay, Satish via petsc-users
On Wed, 17 Apr 2019, Smith, Barry F. via petsc-users wrote:

> 
>   Funny you should ask, I just found the bug. 
> 
> > On Apr 16, 2019, at 9:47 PM, Sajid Ali  
> > wrote:
> > 
> > Quick question : To drop a print statement at the required location, I need 
> > to modify the source code, build petsc from source and compile with this 
> > new version of petsc, right or is there an easier way? (Just to confirm 
> > before putting in the effort)
> 
>Yes. But perhaps spack has a way to handle this as well; it should. 
> Satish? If you can get spack to use the git repository then you could edit in 
> that and somehow have spack rebuild using your edited repository.


$ spack help install |grep stage
  --keep-stage  don't remove the build stage if installation succeeds
  --dont-restageif a partial install is detected, don't delete prior 
state

Here is how it works.

- By default - spack downloads the tarball/git-snapshots and saves them in 
var/spack/cache
- and it stages them for build in var/spack/stage [i.e untar and ready to 
compile]
- after the build is complete - it installs in opt/.. and deletes the 
staged/build files.
 [if the build breaks - it leaves the stage alone]

So if we want to add some modifications to a broken build and rebuild - I would:

- 'spack stage' or 'spack install --keep-stage' [to get the package files 
staged but not deleted]
- edit files in stage
- 'spack install --dont-restage --keep-stage'
  i.e use the currently staged files and build from it. And don't delete them 
even if the build succeeds

Satish


Re: [petsc-users] Link broken for: PETSc users manual - pdf (fully searchable with hyperlinks)

2019-04-15 Thread Balay, Satish via petsc-users
Looks like the manual didn't get generated correctly for petsc-3.11.1.

For now - I've restored the 3.11 manual - so the URL should work now.

Satish

On Mon, 15 Apr 2019, Xiang Huang via petsc-users wrote:

> http://www.mcs.anl.gov/petsc/petsc-current/docs/manual.pdf
> 
> Error 404
> The web server cannot find the file for which you are looking.
> 
> Best,
> Xiang
> 



[petsc-users] petsc-3.11.1.tar.gz now available

2019-04-12 Thread Balay, Satish via petsc-users
Dear PETSc users,

The patch release petsc-3.11.1 is now available for download,
with change list at 'PETSc-3.11 Changelog'

http://www.mcs.anl.gov/petsc/download/index.html

Satish


Re: [petsc-users] How to build FFTW3 interface?

2019-04-12 Thread Balay, Satish via petsc-users
Great! The change is now in spack 'develop' branch.

Satish

On Fri, 12 Apr 2019, Sajid Ali via petsc-users wrote:

> Hi Balay,
> 
> Confirming that the spack variant works. Thanks for adding it.
> 



Re: [petsc-users] How to build FFTW3 interface?

2019-04-11 Thread Balay, Satish via petsc-users
On Thu, 11 Apr 2019, Sajid Ali via petsc-users wrote:

> Hi PETSc Developers,
> 
> To run an example that involves the petsc-fftw interface, I loaded both
> petsc and fftw modules (linked of course to the same mpi) but the compiler
> complains of having no knowledge of functions like MatCreateVecsFFTW which
> happens to be defined at (in the source repo)
> petsc/src/mat/impls/fft/fftw.c. 

Sure. Unless petsc is built with fftw enabled - this interface is not enabled.

> I don't see a corresponding definition in
> the install folder (I may be wrong, but i just did a simple grep to find
> the definition of the function I'm looking for and didn't find it while it
> was present in the header and example files).
> 
> >From previous threads on this list-serv I see that the developers asked
> users to use --download-fftw at configure time, but for users that already
> have an fftw installed, is there an option to ask petsc to build the
> interfaces as well (I didn't see any such option listed either here:
> https://www.mcs.anl.gov/petsc/documentation/installation.html or a variant
> in spack) ?

If you are building petsc from source the configure option is 
--with-fftw-dir=/location

or let petsc install it with --download-fftw.

> 
> Also, could the fftw version to download be bumped to 3.3.8 (here :
> petsc/config/BuildSystem/config/packages/fftw.py) since 3.3.7 gives
> erroneous results with gcc-8.
> 
> Bug in fftw-3.3.7+gcc-8 :
> https://github.com/FFTW/fftw3/commit/19eeeca592f63413698f23dd02b9961f22581803

Wrt petsc configure - you can try:
./configure --download-fftw=http://www.fftw.org/fftw-3.3.8.tar.gz

Wrt spack - you can try the branch 'balay/petsc-fftw' and see if it works for 
you

spack install petsc+fftw

Satish


Re: [petsc-users] Problem coupling Petsc into OpenFOAM

2019-04-10 Thread Balay, Satish via petsc-users
Runtime error? You might have to add the path to $PETSC_ARCH/lib to the
LD_LIBRARY_PATH env variable
or - to your link command. If using linux/gcc - the linker option is 
-Wl,-rpath,$PETSC_ARCH/lib

If not - send detail logs.

Satish
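
A minimal sketch of the two alternatives (paths are illustrative - substitute
the PETSC_DIR/PETSC_ARCH used for the build):

  # at run time
  export LD_LIBRARY_PATH=$PETSC_DIR/$PETSC_ARCH/lib:$LD_LIBRARY_PATH

  # or bake the search path in at link time (linux/gcc)
  mpicxx -o mySolver mySolver.o -L$PETSC_DIR/$PETSC_ARCH/lib -lpetsc \
      -Wl,-rpath,$PETSC_DIR/$PETSC_ARCH/lib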

On Wed, 10 Apr 2019, Vu Do Quoc via petsc-users wrote:

> Hi all,
> 
> I am trying to insert Petsc to OpenFOAM opensource software.
> I have been successfully compiling Petsc with an available solver in
> OpenFOAM by linking it with the shared library libpetsc.so. However, when I
> call the solver to run a test case, I got an error saying that:
> "libpetsc.so cannot be found", even though the library still exists in the
> $PETSC_ARCH/lib folder.
> 
> I have been struggling for weeks but still, have not been able to figure it
> out. Therefore I would be very grateful for any suggestion to solve this
> problem.
> 
> Thanks in advance for your time,
> 
> Best regards,
> 
> Vu Do
> 



Re: [petsc-users] Error with parallel solve

2019-04-08 Thread Balay, Satish via petsc-users
On Mon, 8 Apr 2019, Manav Bhatia via petsc-users wrote:

> 
> 
> > On Apr 8, 2019, at 2:19 PM, Stefano Zampini  
> > wrote:
> > 
> > You can circumvent the problem by using a sequential solver for it. There's 
> > a command line option in petsc as well as API that allows you to do so. 
> > -mat_mumps_icntl_13 1
> 
> Stefano, 
> 
>   Do you know if there is a performance penalty to using this option as 
> opposed to fixing it with the patch? 

I would suggest first trying both the fixes and seeing if either of them works for you.

Satish
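
For reference, a minimal sketch of setting ICNTL(13) from code instead of via
-mat_mumps_icntl_13 1 (assumes a KSP using an LU preconditioner with MUMPS;
the function name is illustrative):

  #include <petscksp.h>

  static PetscErrorCode UseSequentialMumpsRoot(KSP ksp)
  {
    PC             pc;
    Mat            F;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
    ierr = PCSetType(pc,PCLU);CHKERRQ(ierr);
    ierr = PCFactorSetMatSolverType(pc,MATSOLVERMUMPS);CHKERRQ(ierr);
    ierr = PCFactorSetUpMatSolverType(pc);CHKERRQ(ierr);  /* create the MUMPS factor matrix */
    ierr = PCFactorGetMatrix(pc,&F);CHKERRQ(ierr);
    ierr = MatMumpsSetIcntl(F,13,1);CHKERRQ(ierr);        /* sequential root node */
    PetscFunctionReturn(0);
  }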


Re: [petsc-users] Error with parallel solve

2019-04-08 Thread Balay, Satish via petsc-users
https://github.com/spack/spack/pull/11132

If this works - please add a comment on the PR

Satish

On Mon, 8 Apr 2019, Balay, Satish via petsc-users wrote:

> spack update to use mumps-5.1.2 - with this patch is in branch 
> 'balay/mumps-5.1.2'
> 
> Satish
> 
> On Mon, 8 Apr 2019, Satish Balay wrote:
> 
> > Yes - mumps via spack is unlikely to have this patch - but it can be added.
> > 
> > https://bitbucket.org/petsc/pkg-mumps/commits/5fe5b9e56f78de2b7b1c199688f6c73ff3ff4c2d
> > 
> > Satish
> > 
> > On Mon, 8 Apr 2019, Manav Bhatia wrote:
> > 
> > > This is helpful, Thibaut. Thanks! 
> > > 
> > > For reference: all my Linux installs are using Spack, while my Mac 
> > > install is through a petsc config where I let it download and install 
> > > mumps. 
> > > 
> > > Could this be a source of difference in patch level for Mumps? 
> > > 
> > > 
> > > > On Apr 8, 2019, at 1:56 PM, Appel, Thibaut  
> > > > wrote:
> > > > 
> > > > Hi Manav,
> > > > This seems to be the bug in MUMPS that I reported to their developers 
> > > > last summer.
> > > > But I thought Satish Balay had issued a patch in the maint branch of 
> > > > PETSc to correct that a few months ago?
> > > > The temporary workaround was to disable the ScaLAPACK root node, 
> > > > ICNTL(13)=1
> > > > One of the developers said later
> > > >> A workaround consists in modifying the file src/dtype3_root.F near 
> > > >> line 808
> > > >> and replace the lines:
> > > >> 
> > > >>   SUBROUTINE DMUMPS_INIT_ROOT_FAC( N, root, FILS, IROOT,
> > > >>  & KEEP, INFO )
> > > >>   IMPLICIT NONE
> > > >>   INCLUDE 'dmumps_root.h'
> > > >> by:
> > > >> 
> > > >>   SUBROUTINE DMUMPS_INIT_ROOT_FAC( N, root, FILS, IROOT,
> > > >>  & KEEP, INFO )
> > > >>   USE DMUMPS_STRUC_DEF
> > > >>   IMPLICIT NONE
> > > >> 
> > > > 
> > > > Weird that you’re getting this now if it has been corrected in PETSc?
> > > > 
> > > > Thibaut
> > > >> 
> > > >> > On Apr 8, 2019, at 1:33 PM, Mark Adams  > > >> > <https://lists.mcs.anl.gov/mailman/listinfo/petsc-users>> wrote:
> > > >> > 
> > > >> > Are you able to run the exact same job on your Mac? ie, same number 
> > > >> > of processes, etc.
> > > >> 
> > > >> This is what I am trying to dig into now. 
> > > >> 
> > > >> My Mac has 4 cores. 
> > > >> 
> > > >> I have used several different Linux machines with different number of 
> > > >> processors: 4, 12, 10, 20. They all eventually crash. 
> > > >> 
> > > >> I am trying to establish if the point of crash is the same across 
> > > >> machines. 
> > > >> 
> > > >> -Manav
> > > > 
> > > > 
> > > >> On 8 Apr 2019, at 20:24, petsc-users-requ...@mcs.anl.gov 
> > > >> <mailto:petsc-users-requ...@mcs.anl.gov> wrote:
> > > >> 
> > > >> Send petsc-users mailing list submissions to
> > > >> petsc-users@mcs.anl.gov <mailto:petsc-users@mcs.anl.gov>
> > > >> 
> > > >> To subscribe or unsubscribe via the World Wide Web, visit
> > > >> https://lists.mcs.anl.gov/mailman/listinfo/petsc-users
> > > >> or, via email, send a message with subject or body 'help' to
> > > >> petsc-users-requ...@mcs.anl.gov
> > > >> 
> > > >> You can reach the person managing the list at
> > > >> petsc-users-ow...@mcs.anl.gov
> > > >> 
> > > >> When replying, please edit your Subject line so it is more specific
> > > >> than "Re: Contents of petsc-users digest..."
> > > >> 
> > > >> 
> > > >> Today's Topics:
> > > >> 
> > > >>   1.  Error with parallel solve (Manav Bhatia)
> > > >>   2. Re:  Error with parallel solve (Smith, Barry F.)
> > > >>   3. Re:  Error with parallel solve (Mark Adams)
> > > >>   4. Re:  Error with parallel solve (Manav Bhatia)
> > > >> 
> > 

Re: [petsc-users] Error with parallel solve

2019-04-08 Thread Balay, Satish via petsc-users
The spack update to use mumps-5.1.2 - with this patch - is in branch 
'balay/mumps-5.1.2'

Satish

On Mon, 8 Apr 2019, Satish Balay wrote:

> Yes - mumps via spack is unlikely to have this patch - but it can be added.
> 
> https://bitbucket.org/petsc/pkg-mumps/commits/5fe5b9e56f78de2b7b1c199688f6c73ff3ff4c2d
> 
> Satish
> 
> On Mon, 8 Apr 2019, Manav Bhatia wrote:
> 
> > This is helpful, Thibaut. Thanks! 
> > 
> > For reference: all my Linux installs are using Spack, while my Mac install 
> > is through a petsc config where I let it download and install mumps. 
> > 
> > Could this be a source of difference in patch level for Mumps? 
> > 
> > 
> > > On Apr 8, 2019, at 1:56 PM, Appel, Thibaut  
> > > wrote:
> > > 
> > > Hi Manav,
> > > This seems to be the bug in MUMPS that I reported to their developers 
> > > last summer.
> > > But I thought Satish Balay had issued a patch in the maint branch of 
> > > PETSc to correct that a few months ago?
> > > The temporary workaround was to disable the ScaLAPACK root node, 
> > > ICNTL(13)=1
> > > One of the developers said later
> > >> A workaround consists in modifying the file src/dtype3_root.F near line 
> > >> 808
> > >> and replace the lines:
> > >> 
> > >>   SUBROUTINE DMUMPS_INIT_ROOT_FAC( N, root, FILS, IROOT,
> > >>  & KEEP, INFO )
> > >>   IMPLICIT NONE
> > >>   INCLUDE 'dmumps_root.h'
> > >> by:
> > >> 
> > >>   SUBROUTINE DMUMPS_INIT_ROOT_FAC( N, root, FILS, IROOT,
> > >>  & KEEP, INFO )
> > >>   USE DMUMPS_STRUC_DEF
> > >>   IMPLICIT NONE
> > >> 
> > > 
> > > Weird that you’re getting this now if it has been corrected in PETSc?
> > > 
> > > Thibaut
> > >> 
> > >> > On Apr 8, 2019, at 1:33 PM, Mark Adams  > >> > > wrote:
> > >> > 
> > >> > Are you able to run the exact same job on your Mac? ie, same number of 
> > >> > processes, etc.
> > >> 
> > >> This is what I am trying to dig into now. 
> > >> 
> > >> My Mac has 4 cores. 
> > >> 
> > >> I have used several different Linux machines with different number of 
> > >> processors: 4, 12, 10, 20. They all eventually crash. 
> > >> 
> > >> I am trying to establish if the point of crash is the same across 
> > >> machines. 
> > >> 
> > >> -Manav
> > > 
> > > 
> > >> On 8 Apr 2019, at 20:24, petsc-users-requ...@mcs.anl.gov 
> > >>  wrote:
> > >> 
> > >> Send petsc-users mailing list submissions to
> > >> petsc-users@mcs.anl.gov 
> > >> 
> > >> To subscribe or unsubscribe via the World Wide Web, visit
> > >> https://lists.mcs.anl.gov/mailman/listinfo/petsc-users
> > >> or, via email, send a message with subject or body 'help' to
> > >> petsc-users-requ...@mcs.anl.gov
> > >> 
> > >> You can reach the person managing the list at
> > >> petsc-users-ow...@mcs.anl.gov
> > >> 
> > >> When replying, please edit your Subject line so it is more specific
> > >> than "Re: Contents of petsc-users digest..."
> > >> 
> > >> 
> > >> Today's Topics:
> > >> 
> > >>   1.  Error with parallel solve (Manav Bhatia)
> > >>   2. Re:  Error with parallel solve (Smith, Barry F.)
> > >>   3. Re:  Error with parallel solve (Mark Adams)
> > >>   4. Re:  Error with parallel solve (Manav Bhatia)
> > >> 
> > >> 
> > >> --
> > >> 
> > >> Message: 1
> > >> Date: Mon, 8 Apr 2019 12:12:06 -0500
> > >> From: Manav Bhatia 
> > >> To: Evan Um via petsc-users 
> > >> Subject: [petsc-users] Error with parallel solve
> > >> Message-ID: 
> > >> Content-Type: text/plain; charset="us-ascii"
> > >> 
> > >> 
> > >> Hi,
> > >> 
> > >>I am running a code a nonlinear simulation using mesh-refinement on 
> > >> libMesh. The code runs without issues on a Mac (can run for days without 
> > >> issues), but crashes on Linux (Centos 6). I am using version 3.11 on 
> > >> Linux with openmpi 3.1.3 and gcc8.2. 
> > >> 
> > >>I tried to use the -on_error_attach_debugger, but it only gave me 
> > >> this message. Does this message imply something to the more experienced 
> > >> eyes? 
> > >> 
> > >>I am going to try to build a debug version of petsc to figure out 
> > >> what is going wrong. I will get and share more detailed logs in a bit. 
> > >> 
> > >> Regards,
> > >> Manav
> > >> 
> > >> [8]PETSC ERROR: 
> > >> 
> > >> [8]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, 
> > >> probably memory access out of range
> > >> [8]PETSC ERROR: Try option -start_in_debugger or 
> > >> -on_error_attach_debugger
> > >> [8]PETSC ERROR: or see 
> > >> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
> > >> [8]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS 
> > >> X to find memory corruption errors
> > >> [8]PETSC ERROR: configure using --with-debugging=yes, 

Re: [petsc-users] Strange compiling error in DMPlexDistribute after updating PETSc to V3.11.0

2019-04-05 Thread Balay, Satish via petsc-users
A complete simple code showing this error would be useful.

wrt 'Error: Non-variable expression in variable definition context' google 
gives:

https://www.queryxchange.com/q/27_45961166/non-variable-expression-in-variable-definition-context-compilation-error/

But I don't see an expression in the code snippet here..

There must be some difference between your usage of this function vs the petsc 
example..

Satish

On Fri, 5 Apr 2019, Danyang Su via petsc-users wrote:

> Hi Satish,
> 
> Because of the intent(out) declaration, I use a temporary solution that
> passing a PetscSF type variable to DMPlexDistribute instead of passing
> PETSC_NULL_SF. But I am still confused how the example works without problem.
> 
> Thanks,
> 
> Danyang
> 
> On 2019-04-05 2:00 p.m., Balay, Satish wrote:
> > Ah - the message about distributed_dm - not PETSC_NULL_SF. So I'm off base
> > here..
> >
> > Satish
> >
> >
> > On Fri, 5 Apr 2019, Balay, Satish via petsc-users wrote:
> >
> >> A fortran interface definition was added in petsc-3.11 so the compile now
> >> checks if its used correctly.
> >>
> >> http://bitbucket.org/petsc/petsc/commits/fdb49207a8b58c421782c7e45b1394c0a6567048
> >>
> >> +  PetscSF, intent(out) :: sf
> >>
> >> So this should be inout?
> >>
> >> [All auto-generated stubs don't quantify intent()]
> >>
> >> Satish
> >>
> >> On Fri, 5 Apr 2019, Danyang Su via petsc-users wrote:
> >>
> >>> Hi All,
> >>>
> >>> I got a strange error in calling DMPlexDistribute after updating PETSc to
> >>> V3.11.0. There sounds no change in the interface of DMPlexDistribute as
> >>> documented in
> >>>
> >>> https://www.mcs.anl.gov/petsc/petsc-3.10/docs/manualpages/DMPLEX/DMPlexDistribute.html#DMPlexDistribute
> >>>
> >>> https://www.mcs.anl.gov/petsc/petsc-3.11/docs/manualpages/DMPLEX/DMPlexDistribute.html#DMPlexDistribute
> >>>
> >>> The code section is shown below.
> >>>
> >>>    !c distribute mesh over processes
> >>>    call
> >>> DMPlexDistribute(dm,stencil_width,PETSC_NULL_SF,distributed_dm,ierr)
> >>>
> >>> When I use PETSc V3.10 and earlier versions, it works fine. After updating
> >>> to
> >>> latest PETSc V3.11.0, I got the following error during compiling
> >>>
> >>>   call
> >>> DMPlexDistribute(dm,stencil_width,PETSC_NULL_SF,distributed_dm,ierr)
> >>>  1
> >>> Error: Non-variable expression in variable definition context (actual
> >>> argument
> >>> to INTENT = OUT/INOUT) at (1)
> >>> /home/dsu/Soft/PETSc/petsc-3.11.0/linux-gnu-opt/lib/petsc/conf/petscrules:31:
> >>> recipe for target '../../solver/solver_ddmethod.o' failed
> >>>
> >>> The fortran example
> >>> /home/dsu/Soft/PETSc/petsc-3.11.0/src/dm/label/examples/tutorials/ex1f90.F90
> >>> which also uses DMPlexDistribute can be compiled without problem. Is there
> >>> any
> >>> updates in the compiler flags I need to change?
> >>>
> >>> Thanks,
> >>>
> >>> Danyang
> >>>
> >>>
> >>>
> 


Re: [petsc-users] Strange compiling error in DMPlexDistribute after updating PETSc to V3.11.0

2019-04-05 Thread Balay, Satish via petsc-users
Ah - the message about distributed_dm - not PETSC_NULL_SF. So I'm off base 
here..

Satish


On Fri, 5 Apr 2019, Balay, Satish via petsc-users wrote:

> A fortran interface definition was added in petsc-3.11 so the compile now 
> checks if its used correctly.
> 
> http://bitbucket.org/petsc/petsc/commits/fdb49207a8b58c421782c7e45b1394c0a6567048
> 
> +  PetscSF, intent(out) :: sf
> 
> So this should be inout?
> 
> [All auto-generated stubs don't quantify intent()]
> 
> Satish
> 
> On Fri, 5 Apr 2019, Danyang Su via petsc-users wrote:
> 
> > Hi All,
> > 
> > I got a strange error in calling DMPlexDistribute after updating PETSc to
> > V3.11.0. There sounds no change in the interface of DMPlexDistribute as
> > documented in
> > 
> > https://www.mcs.anl.gov/petsc/petsc-3.10/docs/manualpages/DMPLEX/DMPlexDistribute.html#DMPlexDistribute
> > 
> > https://www.mcs.anl.gov/petsc/petsc-3.11/docs/manualpages/DMPLEX/DMPlexDistribute.html#DMPlexDistribute
> > 
> > The code section is shown below.
> > 
> >   !c distribute mesh over processes
> >   call 
> > DMPlexDistribute(dm,stencil_width,PETSC_NULL_SF,distributed_dm,ierr)
> > 
> > When I use PETSc V3.10 and earlier versions, it works fine. After updating 
> > to
> > latest PETSc V3.11.0, I got the following error during compiling
> > 
> >   call
> > DMPlexDistribute(dm,stencil_width,PETSC_NULL_SF,distributed_dm,ierr)
> >  1
> > Error: Non-variable expression in variable definition context (actual 
> > argument
> > to INTENT = OUT/INOUT) at (1)
> > /home/dsu/Soft/PETSc/petsc-3.11.0/linux-gnu-opt/lib/petsc/conf/petscrules:31:
> > recipe for target '../../solver/solver_ddmethod.o' failed
> > 
> > The fortran example
> > /home/dsu/Soft/PETSc/petsc-3.11.0/src/dm/label/examples/tutorials/ex1f90.F90
> > which also uses DMPlexDistribute can be compiled without problem. Is there 
> > any
> > updates in the compiler flags I need to change?
> > 
> > Thanks,
> > 
> > Danyang
> > 
> > 
> > 
> 

Re: [petsc-users] Strange compiling error in DMPlexDistribute after updating PETSc to V3.11.0

2019-04-05 Thread Balay, Satish via petsc-users
A fortran interface definition was added in petsc-3.11 so the compile now 
checks if it's used correctly.

http://bitbucket.org/petsc/petsc/commits/fdb49207a8b58c421782c7e45b1394c0a6567048

+  PetscSF, intent(out) :: sf

So this should be inout?

[All auto-generated stubs don't quantify intent()]

Satish

On Fri, 5 Apr 2019, Danyang Su via petsc-users wrote:

> Hi All,
> 
> I got a strange error in calling DMPlexDistribute after updating PETSc to
> V3.11.0. There sounds no change in the interface of DMPlexDistribute as
> documented in
> 
> https://www.mcs.anl.gov/petsc/petsc-3.10/docs/manualpages/DMPLEX/DMPlexDistribute.html#DMPlexDistribute
> 
> https://www.mcs.anl.gov/petsc/petsc-3.11/docs/manualpages/DMPLEX/DMPlexDistribute.html#DMPlexDistribute
> 
> The code section is shown below.
> 
>   !c distribute mesh over processes
>   call 
> DMPlexDistribute(dm,stencil_width,PETSC_NULL_SF,distributed_dm,ierr)
> 
> When I use PETSc V3.10 and earlier versions, it works fine. After updating to
> latest PETSc V3.11.0, I got the following error during compiling
> 
>   call
> DMPlexDistribute(dm,stencil_width,PETSC_NULL_SF,distributed_dm,ierr)
>  1
> Error: Non-variable expression in variable definition context (actual argument
> to INTENT = OUT/INOUT) at (1)
> /home/dsu/Soft/PETSc/petsc-3.11.0/linux-gnu-opt/lib/petsc/conf/petscrules:31:
> recipe for target '../../solver/solver_ddmethod.o' failed
> 
> The fortran example
> /home/dsu/Soft/PETSc/petsc-3.11.0/src/dm/label/examples/tutorials/ex1f90.F90
> which also uses DMPlexDistribute can be compiled without problem. Is there any
> updates in the compiler flags I need to change?
> 
> Thanks,
> 
> Danyang
> 
> 
> 


Re: [petsc-users] 3.11 configure error on pleiades

2019-03-30 Thread Balay, Satish via petsc-users
Hm - '--with-mpi-exec' is wrong and ignored - but configure should default to 
'mpiexec' from PATH anyway.

It would be good to check configure.log for details on this error.

Satish

On Sat, 30 Mar 2019, Matthew Knepley via petsc-users wrote:

> On Sat, Mar 30, 2019 at 4:31 PM Kokron, Daniel S. (ARC-606.2)[InuTeq, LLC]
> via petsc-users  wrote:
> 
> > Last time I built PETSc on Pleiades it was version 3.8.3.  Using the same
> > build procedure with the same compilers and MPI libraries with 3.11 does
> > not work.  Is there a way to enable more verbose diagnostics during the
> > configure phase so I can figure out what executable was being run and how
> > it was compiled?
> >
> 
> This is not the right option:
> 
>   --with-mpi-exec=mpiexec
> 
> it is
> 
>   --with-mpiexec=mpiexec
> 
>   Thanks,
> 
>   Matt
> 
> PBS r147i6n10 24> ./configure --prefix=/nobackupp8/XXX
> > /Projects/CHEM/BoA_Case/Codes-2018.3.222/binaries/petsc-3.11+
> > --with-debugging=0 --with-shared-libraries=1 --with-cc=mpicc
> > --with-fc=mpif90 --with-cxx=mpicxx
> > --with-blas-lapack-dir=$MKLROOT/lib/intel64
> > --with-scalapack-include=$MKLROOT/include
> > --with-scalapack-lib="$MKLROOT/lib/intel64/libmkl_scalapack_lp64.so
> > $MKLROOT/lib/intel64/libmkl_blacs_sgimpt_lp64.so" --with-cpp=/usr/bin/cpp
> > --with-gnu-compilers=0 --with-vendor-compilers=intel -COPTFLAGS="-g -O3
> > -xCORE-AVX2 -diag-disable=cpu-dispatch" -CXXOPTFLAGS="-g -O3 -xCORE-AVX2
> > -diag-disable=cpu-dispatch" -FOPTFLAGS="-g -O3 -xCORE-AVX2
> > -diag-disable=cpu-dispatch" --with-mpi=true --with-mpi-exec=mpiexec
> > --with-mpi-compilers=1 --with-precision=double --with-scalar-type=real
> > --with-x=0 --with-x11=0 --with-memalign=32
> >
> >
> >
> > I get this which usually means that an executable was linked with libmpi,
> > but was not launched with mpiexec.
> >
> >
> >
> > TESTING: configureMPITypes from
> > config.packages.MPI(/nobackupp8/dkokron/Projects/CHEM/BoA_Case/Codes-2018.3.222/petsc/config/BuildSystem/config/packages/MPI.py:283)
> >
> > CMPT ERROR: mpiexec_mpt must be used to launch all MPI applications
> >
> > CMPT ERROR: mpiexec_mpt must be used to launch all MPI applications
> >
> > CMPT ERROR: mpiexec_mpt must be used to launch all MPI applications
> >
> >
> >
> > If I let it continue, configure reports that MPI is empty.
> >
> >
> >
> > make:
> >
> > BLAS/LAPACK:
> > -Wl,-rpath,/nasa/intel/Compiler/2018.3.222/compilers_and_libraries_2018.3.222/linux/mkl/lib/intel64
> > -L/nasa/intel/Compiler/2018.3.222/compilers_and_libraries_2018.3.222/linux/mkl/lib/intel64
> > -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread
> >
> > MPI:
> >
> > cmake:
> >
> > pthread:
> >
> > scalapack:
> >
> >
> >
> > Daniel Kokron
> > Redline Performance Solutions
> > SciCon/APP group
> >
> > --
> >
> >
> >
> 
> 
> 



Re: [petsc-users] 3.11 configure error on pleiades

2019-03-30 Thread Balay, Satish via petsc-users
configure creates configure.log with all the debugging details.

It's best to compare configure.log from the successful 3.8.3 with the
current one - and see what changed between these 2 builds

[you can send us both logs at petsc-maint]

Satish

On Sat, 30 Mar 2019, Kokron, Daniel S. (ARC-606.2)[InuTeq, LLC] via petsc-users 
wrote:

> Last time I built PETSc on Pleiades it was version 3.8.3.  Using the same 
> build procedure with the same compilers and MPI libraries with 3.11 does not 
> work.  Is there a way to enable more verbose diagnostics during the configure 
> phase so I can figure out what executable was being run and how it was 
> compiled?
> 
> PBS r147i6n10 24> ./configure --prefix=/nobackupp8/XXX 
> /Projects/CHEM/BoA_Case/Codes-2018.3.222/binaries/petsc-3.11+ 
> --with-debugging=0 --with-shared-libraries=1 --with-cc=mpicc --with-fc=mpif90 
> --with-cxx=mpicxx --with-blas-lapack-dir=$MKLROOT/lib/intel64 
> --with-scalapack-include=$MKLROOT/include 
> --with-scalapack-lib="$MKLROOT/lib/intel64/libmkl_scalapack_lp64.so 
> $MKLROOT/lib/intel64/libmkl_blacs_sgimpt_lp64.so" --with-cpp=/usr/bin/cpp 
> --with-gnu-compilers=0 --with-vendor-compilers=intel -COPTFLAGS="-g -O3 
> -xCORE-AVX2 -diag-disable=cpu-dispatch" -CXXOPTFLAGS="-g -O3 -xCORE-AVX2 
> -diag-disable=cpu-dispatch" -FOPTFLAGS="-g -O3 -xCORE-AVX2 
> -diag-disable=cpu-dispatch" --with-mpi=true --with-mpi-exec=mpiexec 
> --with-mpi-compilers=1 --with-precision=double --with-scalar-type=real 
> --with-x=0 --with-x11=0 --with-memalign=32
> 
> I get this which usually means that an executable was linked with libmpi, but 
> was not launched with mpiexec.
> 
> TESTING: configureMPITypes from 
> config.packages.MPI(/nobackupp8/dkokron/Projects/CHEM/BoA_Case/Codes-2018.3.222/petsc/config/BuildSystem/config/packages/MPI.py:283)
> CMPT ERROR: mpiexec_mpt must be used to launch all MPI applications
> CMPT ERROR: mpiexec_mpt must be used to launch all MPI applications
> CMPT ERROR: mpiexec_mpt must be used to launch all MPI applications
> 
> If I let it continue, configure reports that MPI is empty.
> 
> make:
> BLAS/LAPACK: 
> -Wl,-rpath,/nasa/intel/Compiler/2018.3.222/compilers_and_libraries_2018.3.222/linux/mkl/lib/intel64
>  
> -L/nasa/intel/Compiler/2018.3.222/compilers_and_libraries_2018.3.222/linux/mkl/lib/intel64
>  -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread
> MPI:
> cmake:
> pthread:
> scalapack:
> 
> Daniel Kokron
> Redline Performance Solutions
> SciCon/APP group
> --
> 
> 



Re: [petsc-users] [Ext] Re: error: identifier "MatCreateMPIAIJMKL" is undefined in 3.10.4

2019-03-26 Thread Balay, Satish via petsc-users
Please apply the patch I sent earlier and retry.

Satish
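
A minimal sketch of applying the change and rebuilding the prefix install
(the patch file name is illustrative; PETSC_ARCH matches the configure
options quoted below):

  cd petsc-3.10.4                               # the configured source tree
  patch -p1 < petscmat-mkl-interface.patch      # the include/petscmat.h change from the earlier mail
  make PETSC_DIR=$PWD PETSC_ARCH=linux-gnu-intel all
  make PETSC_DIR=$PWD PETSC_ARCH=linux-gnu-intel install

The 'make install' step matters because a prefix install keeps its own copy
of include/petscmat.h - reinstalling is what makes the added prototype
visible to codes compiled against the installed PETSc.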

On Tue, 26 Mar 2019, Kun Jiao via petsc-users wrote:

> Strange things, when I compile my code in the test dir in PETSC, it works. 
> After I "make install" PETSC, and try to compile my code against the 
> installed PETSC, it doesn't work any more.
> 
> I guess this is what you means. 
> 
> Is there any way to reenable MatCreateMPIAIJMKL public interface?
> 
> And, I am using intel MKL, here is my configure option:
> 
> Configure Options: --configModules=PETSc.Configure 
> --optionsModule=config.compilerOptions PETSC_ARCH=linux-gnu-intel 
> --with-precision=single --with-cc=mpiicc --with-cxx=mpiicc --with-fc=mpiifort 
> --with-mpi-include=/wgdisk/hy3300/source_code_dev/imaging/kjiao/software/intel/compilers_and_libraries_2019.2.187/linux/mpi/intel64/include
>  
> --with-mpi-lib="-L/wgdisk/hy3300/source_code_dev/imaging/kjiao/software/intel//compilers_and_libraries_2019.2.187/linux/mpi/intel64/lib
>  -lmpifort -lmpi_ilp64" 
> --with-blaslapack-lib="-L/wgdisk/hy3300/source_code_dev/imaging/kjiao/software/intel/compilers_and_libraries_2019.2.187/linux/mkl/lib/intel64
>  -Wl, --no-as-needed -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread 
> -lm -ldl" 
> --with-scalapack-lib="-L/wgdisk/hy3300/source_code_dev/imaging/kjiao/software/intel/compilers_and_libraries_2019.2.187/linux/mkl/lib/intel64
>  -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64" 
> --with-scalapack-include=/wgdisk/hy3300/source_code_dev/imaging/kjiao/software/intel/compilers_and_libraries_2019.2.187/linux/mkl/include
>  
> --with-mkl_pardiso-dir=/wgdisk/hy3300/source_code_dev/imaging/kjiao/software/intel/compilers_and_libraries_2019.2.187/linux/mkl
>  --with-mkl_sparse=1 
> --with-mkl_sparse-dir=/wgdisk/hy3300/source_code_dev/imaging/kjiao/software/intel/compilers_and_libraries_2019.2.187/linux/mkl
>  --with-mkl_cpardiso=1 
> --with-mkl_cpardiso-dir=/wgdisk/hy3300/source_code_dev/imaging/kjiao/software/intel/compilers_and_libraries_2019.2.187/linux/mkl
>  --with-mkl_sparse_optimize=1 
> --with-mkl_sparse_optimize-dir=/wgdisk/hy3300/source_code_dev/imaging/kjiao/software/intel/compilers_and_libraries_2019.2.187/linux/mkl
>  --with-mkl_sparse_sp2m=1 
> --with-mkl_sparse_sp2m-dir=/wgdisk/hy3300/source_code_dev/imaging/kjiao/software/intel/compilers_and_libraries_2019.2.187/linux/mkl
>  --with-cmake=1 
> --prefix=/wgdisk/hy3300/source_code_dev/imaging/kjiao/software/petsc_3.9.4 
> --known-endian=big --with-debugging=0 --COPTFLAGS=" -Ofast -xHost" 
> --CXXOPTFLAGS=" -Ofast -xHost" --FOPTFLAGS=" -Ofast -xHost" --with-x=0
> Working directory: /wgdisk/hy3300/source_code_dev/imaging/kjiao/petsc-3.10.4
> 
> 
> 
> Schlumberger-Private
> 
> -Original Message-
> From: Balay, Satish  
> Sent: Tuesday, March 26, 2019 10:19 AM
> To: Kun Jiao 
> Cc: Mark Adams ; petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] [Ext] Re: error: identifier "MatCreateMPIAIJMKL" 
> is undefined in 3.10.4
> 
> >>>
> balay@sb /home/balay/petsc (maint=)
> $ git grep MatCreateMPIAIJMKL maint-3.8
> maint-3.8:src/mat/impls/aij/mpi/aijmkl/mpiaijmkl.c:   MatCreateMPIAIJMKL - 
> Creates a sparse parallel matrix whose local
> maint-3.8:src/mat/impls/aij/mpi/aijmkl/mpiaijmkl.c:PetscErrorCode  
> MatCreateMPIAIJMKL(MPI_Comm comm,PetscInt m,PetscInt n,PetscInt M,PetscInt 
> N,PetscInt d_nz,const PetscInt d_nnz[],PetscInt o_nz,const PetscInt 
> o_nnz[],Mat *A)
> maint-3.8:src/mat/impls/aij/mpi/aijmkl/mpiaijmkl.c:.seealso: 
> MatCreateMPIAIJMKL(), MATSEQAIJMKL, MATMPIAIJMKL
> maint-3.8:src/mat/impls/aij/seq/aijmkl/aijmkl.c:.seealso: MatCreate(), 
> MatCreateMPIAIJMKL(), MatSetValues() balay@sb /home/balay/petsc (maint=) $ 
> git grep MatCreateMPIAIJMKL maint
> maint:src/mat/impls/aij/mpi/aijmkl/mpiaijmkl.c:   MatCreateMPIAIJMKL - 
> Creates a sparse parallel matrix whose local
> maint:src/mat/impls/aij/mpi/aijmkl/mpiaijmkl.c:PetscErrorCode  
> MatCreateMPIAIJMKL(MPI_Comm comm,PetscInt m,PetscInt n,PetscInt M,PetscInt 
> N,PetscInt d_nz,const PetscInt d_nnz[],PetscInt o_nz,const PetscInt 
> o_nnz[],Mat *A)
> maint:src/mat/impls/aij/mpi/aijmkl/mpiaijmkl.c:.seealso: 
> MatCreateMPIAIJMKL(), MATSEQAIJMKL, MATMPIAIJMKL
> maint:src/mat/impls/aij/seq/aijmkl/aijmkl.c:.seealso: MatCreate(), 
> MatCreateMPIAIJMKL(), MatSetValues() balay@sb /home/balay/petsc (maint=) $ 
> <<<
> 
> MatCreateMPIAIJMKL() exists in both petsc-3.8 and petsc-3.10. However the 
> public interface is missing from both of these versions. So I'm surprised you 
> don't get the same error with petsc-3.8
> 
> Can you try the following change?
> 
> diff --git a/include/petscmat.h b/include/petscmat.h index 
> 1b8ac69377..c66f727994 100644
> --- a/include/petscmat.h
> +++ b/include/petscmat.h
> @@ -223,7 +223,8 @@ typedef enum 
> {DIFFERENT_NONZERO_PATTERN,SUBSET_NONZERO_PATTERN,SAME_NONZERO_PATT
>  
>  #if defined PETSC_HAVE_MKL_SPARSE
>  PETSC_EXTERN PetscErrorCode 
> 

Re: [petsc-users] [Ext] Re: error: identifier "MatCreateMPIAIJMKL" is undefined in 3.10.4

2019-03-26 Thread Balay, Satish via petsc-users
>>>
balay@sb /home/balay/petsc (maint=)
$ git grep MatCreateMPIAIJMKL maint-3.8
maint-3.8:src/mat/impls/aij/mpi/aijmkl/mpiaijmkl.c:   MatCreateMPIAIJMKL - 
Creates a sparse parallel matrix whose local
maint-3.8:src/mat/impls/aij/mpi/aijmkl/mpiaijmkl.c:PetscErrorCode  
MatCreateMPIAIJMKL(MPI_Comm comm,PetscInt m,PetscInt n,PetscInt M,PetscInt 
N,PetscInt d_nz,const PetscInt d_nnz[],PetscInt o_nz,const PetscInt o_nnz[],Mat 
*A)
maint-3.8:src/mat/impls/aij/mpi/aijmkl/mpiaijmkl.c:.seealso: 
MatCreateMPIAIJMKL(), MATSEQAIJMKL, MATMPIAIJMKL
maint-3.8:src/mat/impls/aij/seq/aijmkl/aijmkl.c:.seealso: MatCreate(), 
MatCreateMPIAIJMKL(), MatSetValues()
balay@sb /home/balay/petsc (maint=)
$ git grep MatCreateMPIAIJMKL maint
maint:src/mat/impls/aij/mpi/aijmkl/mpiaijmkl.c:   MatCreateMPIAIJMKL - Creates 
a sparse parallel matrix whose local
maint:src/mat/impls/aij/mpi/aijmkl/mpiaijmkl.c:PetscErrorCode  
MatCreateMPIAIJMKL(MPI_Comm comm,PetscInt m,PetscInt n,PetscInt M,PetscInt 
N,PetscInt d_nz,const PetscInt d_nnz[],PetscInt o_nz,const PetscInt o_nnz[],Mat 
*A)
maint:src/mat/impls/aij/mpi/aijmkl/mpiaijmkl.c:.seealso: MatCreateMPIAIJMKL(), 
MATSEQAIJMKL, MATMPIAIJMKL
maint:src/mat/impls/aij/seq/aijmkl/aijmkl.c:.seealso: MatCreate(), 
MatCreateMPIAIJMKL(), MatSetValues()
balay@sb /home/balay/petsc (maint=)
$ 
<<<

MatCreateMPIAIJMKL() exists in both petsc-3.8 and petsc-3.10. However
the public interface is missing from both of these versions. So I'm
surprised you don't get the same error with petsc-3.8

Can you try the following change?

diff --git a/include/petscmat.h b/include/petscmat.h
index 1b8ac69377..c66f727994 100644
--- a/include/petscmat.h
+++ b/include/petscmat.h
@@ -223,7 +223,8 @@ typedef enum 
{DIFFERENT_NONZERO_PATTERN,SUBSET_NONZERO_PATTERN,SAME_NONZERO_PATT
 
 #if defined PETSC_HAVE_MKL_SPARSE
 PETSC_EXTERN PetscErrorCode 
MatCreateBAIJMKL(MPI_Comm,PetscInt,PetscInt,PetscInt,PetscInt,PetscInt,PetscInt,const
 PetscInt[],PetscInt,const PetscInt[],Mat*);
-PETSC_EXTERN PetscErrorCode MatCreateSeqBAIJMKL(MPI_Comm comm,PetscInt 
bs,PetscInt m,PetscInt n,PetscInt nz,const PetscInt nnz[],Mat *A);
+PETSC_EXTERN PetscErrorCode 
MatCreateSeqBAIJMKL(MPI_Comm,PetscInt,PetscInt,PetscInt,PetscInt,const 
PetscInt[],Mat*);
+PETSC_EXTERN PetscErrorCode  
MatCreateMPIAIJMKL(MPI_Comm,PetscInt,PetscInt,PetscInt,PetscInt,PetscInt,const 
PetscInt[],PetscInt,const PetscInt[],Mat*);
 #endif
 
 PETSC_EXTERN PetscErrorCode 
MatCreateSeqSELL(MPI_Comm,PetscInt,PetscInt,PetscInt,const PetscInt[],Mat*);


Also note - this routine is available only when PETSc is built with Intel MKL

Satish

On Tue, 26 Mar 2019, Kun Jiao via petsc-users wrote:

> [kjiao@hyi0016 src/lsqr]% make
> [ 50%] Building CXX object lsqr/CMakeFiles/p_lsqr.dir/lsqr.cc.o
> /wgdisk/hy3300/source_code_dev/imaging/kjiao/src/git/src/lsqr/lsqr.cc(318): 
> error: identifier "MatCreateMPIAIJMKL" is undefined
> ierr = 
> MatCreateMPIAIJMKL(comm,m,n,M,N,maxnz,dialens,maxnz,offlens,);CHKERRQ(ierr);
>^
> 
> /wgdisk/hy3300/source_code_dev/imaging/kjiao/src/git/src/lsqr/lsqr.cc(578): 
> error: identifier "MatCreateMPIAIJMKL" is undefined
> ierr = 
> MatCreateMPIAIJMKL(comm,m,n,M,N,maxnz,dialens,maxnz,offlens,);CHKERRQ(ierr);
>^
> 
> compilation aborted for 
> /wgdisk/hy3300/source_code_dev/imaging/kjiao/src/git/src/lsqr/lsqr.cc (code 2)
> 
> Thanks.
> 
> 
> From: Mark Adams 
> Sent: Tuesday, March 26, 2019 9:22 AM
> To: Kun Jiao 
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [Ext] Re: [petsc-users] error: identifier "MatCreateMPIAIJMKL" 
> is undefined in 3.10.4
> 
> I assume the whole error message will have the line of code. Please send the 
> whole error message and line of offending code if not included.
> 
> On Tue, Mar 26, 2019 at 10:08 AM Kun Jiao 
> mailto:kj...@slb.com>> wrote:
> It is compiling error, error message is:
> 
> error: identifier "MatCreateMPIAIJMKL" is undefined.
> 
> 
> 
> 
> 
> From: Mark Adams mailto:mfad...@lbl.gov>>
> Sent: Tuesday, March 26, 2019 6:48 AM
> To: Kun Jiao mailto:kj...@slb.com>>
> Cc: petsc-users@mcs.anl.gov
> Subject: [Ext] Re: [petsc-users] error: identifier "MatCreateMPIAIJMKL" is 
> undefined in 3.10.4
> 
> Please send the output of the error (runtime, compile time, link time?)
> 
> On Mon, Mar 25, 2019 at 10:50 PM Kun Jiao via petsc-users 
> mailto:petsc-users@mcs.anl.gov>> wrote:
> Hi Petsc Experts,
> 
> Is MatCreateMPIAIJMKL retired in 3.10.4?
> 
> I got this error with my code which works fine in 3.8.3 version.
> 
> Regards,
> Kun
> 
> 
> 
> Schlumberger-Private
> 
> 
> Schlumberger-Private
> 
> 
> Schlumberger-Private
> 



Re: [petsc-users] Valgrind Issue With Ghosted Vectors

2019-03-25 Thread Balay, Satish via petsc-users
I see you are using "0e667e8fea4aa from December 23rd" - which is an old
petsc 'master' snapshot.

1. After your fix for 'bad input file' - do you still get these
valgrind messages?

2. You should be able to easily apply Stefano's potential fix to your
snapshot [without upgrading to latest petsc].

git cherry-pick 0b85991cae8259fd283ce3f99b399b38f1dcd7b4

And then rebuild petsc and - rerun with valgrind - and see if the
messages persist.

Satish
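
A minimal sketch of that sequence (the application name and rank count are
illustrative):

  cd $PETSC_DIR
  git cherry-pick 0b85991cae8259fd283ce3f99b399b38f1dcd7b4
  make all      # with PETSC_DIR/PETSC_ARCH set; this is an incremental rebuild
  mpiexec -n 4 valgrind -q --tool=memcheck --track-origins=yes \
      --num-callers=20 --log-file=valgrind.log.%p ./myapp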


On Mon, 25 Mar 2019, Derek Gaston via petsc-users wrote:

> Stefano: the stupidity was all mine and had nothing to do with PETSc.
> Valgrind helped me track down a memory corruption issue that ultimately was
> just about a bad input file to my code (and obviously not enough error
> checking for input files!).
> 
> The issue is fixed.
> 
> Now - I'd like to understand a bit more about what happened here on the
> PETSc side.  Was this valgrind issue something that was known and you
> already had a fix for it - but it wasn't on maint yet?  Or was it just that
> I was using too old of a version of PETSc so I didn't have the fix?
> 
> Derek
> 
> On Fri, Mar 22, 2019 at 4:29 AM Stefano Zampini 
> wrote:
> 
> >
> >
> > On Mar 21, 2019, at 7:59 PM, Derek Gaston  wrote:
> >
> > It sounds like you already tracked this down... but for completeness here
> > is what track-origins gives:
> >
> > ==262923== Conditional jump or move depends on uninitialised value(s)
> > ==262923==at 0x73C6548: VecScatterMemcpyPlanCreate_Index (vscat.c:294)
> > ==262923==by 0x73DBD97: VecScatterMemcpyPlanCreate_PtoP
> > (vpscat_mpi1.c:312)
> > ==262923==by 0x73DE6AE: VecScatterCreateCommon_PtoS_MPI1
> > (vpscat_mpi1.c:2328)
> > ==262923==by 0x73DFFEA: VecScatterCreateLocal_PtoS_MPI1
> > (vpscat_mpi1.c:2202)
> > ==262923==by 0x73C7A51: VecScatterCreate_PtoS (vscat.c:608)
> > ==262923==by 0x73C9E8A: VecScatterSetUp_vectype_private (vscat.c:857)
> > ==262923==by 0x73CBE5D: VecScatterSetUp_MPI1 (vpscat_mpi1.c:2543)
> > ==262923==by 0x7413D39: VecScatterSetUp (vscatfce.c:212)
> > ==262923==by 0x7412D73: VecScatterCreateWithData (vscreate.c:333)
> > ==262923==by 0x747A232: VecCreateGhostWithArray (pbvec.c:685)
> > ==262923==by 0x747A90D: VecCreateGhost (pbvec.c:741)
> > ==262923==by 0x5C7FFD6: libMesh::PetscVector::init(unsigned
> > long, unsigned long, std::vector > long> > const&, bool, libMesh::ParallelType) (petsc_vector.h:752)
> > ==262923==  Uninitialised value was created by a heap allocation
> > ==262923==at 0x402DDC6: memalign (vg_replace_malloc.c:899)
> > ==262923==by 0x7359702: PetscMallocAlign (mal.c:41)
> > ==262923==by 0x7359C70: PetscMallocA (mal.c:390)
> > ==262923==by 0x73DECF0: VecScatterCreateLocal_PtoS_MPI1
> > (vpscat_mpi1.c:2061)
> > ==262923==by 0x73C7A51: VecScatterCreate_PtoS (vscat.c:608)
> > ==262923==by 0x73C9E8A: VecScatterSetUp_vectype_private (vscat.c:857)
> > ==262923==by 0x73CBE5D: VecScatterSetUp_MPI1 (vpscat_mpi1.c:2543)
> > ==262923==by 0x7413D39: VecScatterSetUp (vscatfce.c:212)
> > ==262923==by 0x7412D73: VecScatterCreateWithData (vscreate.c:333)
> > ==262923==by 0x747A232: VecCreateGhostWithArray (pbvec.c:685)
> > ==262923==by 0x747A90D: VecCreateGhost (pbvec.c:741)
> > ==262923==by 0x5C7FFD6: libMesh::PetscVector::init(unsigned
> > long, unsigned long, std::vector > long> > const&, bool, libMesh::ParallelType) (petsc_vector.h:752)
> >
> >
> > BTW: This turned out not to be my actual problem.  My actual problem was
> > just some stupidity on my part... just a simple input parameter issue to my
> > code (should have had better error checking!).
> >
> > But: It sounds like my digging may have uncovered something real here...
> > so it wasn't completely useless :-)
> >
> >
> > Derek,
> >
> > I don’t understand. Is your problem fixed or not? Would be nice to
> > understand what was the “some stupidity on your part”, and if it was still
> > leading to valid PETSc code or just to a broken setup.
> > In the first case, then we should investigate the valgrind error you
> > reported.
> > In the second case, this is not a PETSc issue.
> >
> >
> > Thanks for your help everyone!
> >
> > Derek
> >
> >
> >
> > On Thu, Mar 21, 2019 at 10:38 AM Stefano Zampini <
> > stefano.zamp...@gmail.com> wrote:
> >
> >>
> >>
> >> Il giorno mer 20 mar 2019 alle ore 23:40 Derek Gaston via petsc-users <
> >> petsc-users@mcs.anl.gov> ha scritto:
> >>
> >>> Trying to track down some memory corruption I'm seeing on larger scale
> >>> runs (3.5B+ unknowns).
> >>>
> >>
> >> Uhm are you using 32bit indices? is it possible there's integer
> >> overflow somewhere?
> >>
> >>
> >>
> >>> Was able to run Valgrind on it... and I'm seeing quite a lot of
> >>> uninitialized value errors coming from ghost updating.  Here are some of
> >>> the traces:
> >>>
> >>> ==87695== Conditional jump or move depends on uninitialised value(s)
> >>> ==87695==at 0x73236D3: PetscMallocAlign (mal.c:28)
> >>> ==87695==by 

Re: [petsc-users] About Configuring PETSc

2019-03-22 Thread Balay, Satish via petsc-users
Did you build PETSc with PETSC_ARCH=arch-opt?

Btw - I used PETSC_ARCH=arch-debug to illustrate - but you already
have a build with PETSC_ARCH=arch-linux2-c-debug - so you should stick
to that.

Satish

On Fri, 22 Mar 2019, Maahi Talukder via petsc-users wrote:

> Hi,
> 
> I tried to run the command 'make PETSC_ARCH=arch-opt wholetest1' but it
> shows me the following error-
> 
> ..
> [maahi@CB272PP-THINK1 Desktop]$ make PETSC_ARCH=arch-opt wholetest1
> /home/maahi/petsc/lib/petsc/conf/rules:960:
> /home/maahi/petsc/arch-opt/lib/petsc/conf/petscrules: No such file or directory
> make: *** No rule to make target
> '/home/maahi/petsc/arch-opt/lib/petsc/conf/petscrules'.  Stop.
> 
> 
> My .bashrc is the following -
> 
> .
> 
> PATH=$PATH:$HOME/.local/bin:$HOME/bin:/usr/lib64/openmpi/bin
> 
> export PATH
> 
> export PETSC_DIR=$HOME/petsc
> export PETSC_ARCH=arch-linux2-c-debug
> #export PETSC_ARCH=arch-debug
> 
> ..
> 
> Could you please tell me what went wrong?
> 
> Regards,
> Maahi Talukder
> 
> 
> On Thu, Mar 21, 2019 at 11:55 PM Maahi Talukder 
> wrote:
> 
> > Thank you so much for your reply. That clear things up!
> >
> > On Thu, Mar 21, 2019 at 10:43 PM Balay, Satish  wrote:
> >
> >> On Thu, 21 Mar 2019, Maahi Talukder via petsc-users wrote:
> >>
> >> > Thank you for your reply.
> >> >
> >> > So do I need to set the value of PETSC_ARCH as needed in .bashrc  as I
> >> did
> >> > in case of PETSC_DIR ?
> >>
> >> You can specify PETSC_ARCH as an option to make. You can have a default
> >> value set in .bashrc - and change to a different value on command line.
> >>
> >> For ex: in .bashrc
> >>
> >> export PETSC_ARCH=arch-debug
> >>
> >> Now if you want to build with debug libraries:
> >>
> >> make wholetest1
> >>
> >> Now If you want to build with optimized libraries:
> >>
> >> make PETSC_ARCH=arch-opt wholetest1
> >>
> >>
> >> >  And by PETSC_ARCH=arch-opt, do you mean the
> >> > non-debugging mode?
> >>
> >> Yes. You can use whatever name you think is appropriate here.
> >>
> >> ./configure PETSC_ARCH=a-name-i-can-easily-associate-with-this-build
> >> [other configure options.]
> >>
> >> >
> >> > And I am using the following makefile with my code-
> >> >
> >> > CFLAGS =
> >> > FFLAGS =-I/home/maahi/petsc/include
> >> > -I/home/maahi/petsc/arch-linux2-c-debug/include -cpp -mcmodel=large
> >>
> >> Hm - you shouldn't be needing these options here. You should switch your
> >> source files from .f to .F and .f90 to .F90 - and remove the above FFLAGS
> >>
> >> Satish
> >>
> >> > CPPFLAGS =
> >> > FPPFLAGS =
> >> >
> >> >
> >> > include ${PETSC_DIR}/lib/petsc/conf/variables
> >> > include ${PETSC_DIR}/lib/petsc/conf/rules
> >> >
> >> > wholetest1: wholetest1.o
> >> > -${FLINKER} -o wholetest1 wholetest1.o ${PETSC_LIB}
> >> > ${RM} wholetest1.o
> >> >
> >> > So where do I add that PETSC_ARCH?
> >> >
> >> > Thanks,
> >> > Maahi Talukder
> >> >
> >> > On Thu, Mar 21, 2019 at 10:14 PM Balay, Satish 
> >> wrote:
> >> >
> >> > > PETSc uses the concept of PETSC_ARCH to enable multiple in-place
> >> > > builds.
> >> > >
> >> > > So you can have a debug build with PETSC_ARCH=arch-debug, and a
> >> > > optimized build with PETSC_ARCH=arch-opt etc.
> >> > >
> >> > > And if you are using a petsc formatted makefile with your code - you
> >> > > can switch between these builds by just switching PETSC_ARCH.
> >> > >
> >> > > Satish
> >> > >
> >> > > On Thu, 21 Mar 2019, Maahi Talukder via petsc-users wrote:
> >> > >
> >> > > > Dear All,
> >> > > >
> >> > > > Currently, I am running PETSc with debugging option. And it says
> >> that if
> >> > > I
> >> > > > run ./configure --with-debugging=no, the performance would be
> >> faster. My
> >> > > > question is: what would I do if I want to go back to debugging
> >> mode, and
> >> > > If
> >> > > > I configure it now with no debugging option, would it make any
> >> changes to
> >> > > > my current setting?
> >> > > >
> >> > > > Regards,
> >> > > > Maahi Talukder
> >> > > >
> >> > >
> >> > >
> >> >
> >>
> >>
> 



Re: [petsc-users] About Configuring PETSc

2019-03-21 Thread Balay, Satish via petsc-users
On Thu, 21 Mar 2019, Maahi Talukder via petsc-users wrote:

> Thank you for your reply.
> 
> So do I need to set the value of PETSC_ARCH as needed in .bashrc  as I did
> in case of PETSC_DIR ?

You can specify PETSC_ARCH as an option to make. You can have a default value 
set in .bashrc - and change to a different value on command line.

For ex: in .bashrc

export PETSC_ARCH=arch-debug

Now if you want to build with debug libraries:

make wholetest1

Now If you want to build with optimized libraries:

make PETSC_ARCH=arch-opt wholetest1


>  And by PETSC_ARCH=arch-opt, do you mean the
> non-debugging mode?

Yes. You can use whatever name you think is appropriate here.

./configure PETSC_ARCH=a-name-i-can-easily-associate-with-this-build [other 
configure options.]

> 
> And I am using the following makefile with my code-
> 
> CFLAGS =
> FFLAGS =-I/home/maahi/petsc/include
> -I/home/maahi/petsc/arch-linux2-c-debug/include -cpp -mcmodel=large

Hm - you shouldn't be needing these options here. You should switch your source 
files from .f to .F and .f90 to .F90 - and remove the above FFLAGS

Satish

> CPPFLAGS =
> FPPFLAGS =
> 
> 
> include ${PETSC_DIR}/lib/petsc/conf/variables
> include ${PETSC_DIR}/lib/petsc/conf/rules
> 
> wholetest1: wholetest1.o
> -${FLINKER} -o wholetest1 wholetest1.o ${PETSC_LIB}
> ${RM} wholetest1.o
> 
> So where do I add that PETSC_ARCH?
> 
> Thanks,
> Maahi Talukder
> 
> On Thu, Mar 21, 2019 at 10:14 PM Balay, Satish  wrote:
> 
> > PETSc uses the concept of PETSC_ARCH to enable multiple in-place
> > builds.
> >
> > So you can have a debug build with PETSC_ARCH=arch-debug, and a
> > optimized build with PETSC_ARCH=arch-opt etc.
> >
> > And if you are using a petsc formatted makefile with your code - you
> > can switch between these builds by just switching PETSC_ARCH.
> >
> > Satish
> >
> > On Thu, 21 Mar 2019, Maahi Talukder via petsc-users wrote:
> >
> > > Dear All,
> > >
> > > Currently, I am running PETSc with debugging option. And it says that if
> > I
> > > run ./configure --with-debugging=no, the performance would be faster. My
> > > question is: what would I do if I want to go back to debugging mode, and
> > If
> > > I configure it now with no debugging option, would it make any changes to
> > > my current setting?
> > >
> > > Regards,
> > > Maahi Talukder
> > >
> >
> >
> 



Re: [petsc-users] About Configuring PETSc

2019-03-21 Thread Balay, Satish via petsc-users
PETSc uses the concept of PETSC_ARCH to enable multiple in-place
builds.

So you can have a debug build with PETSC_ARCH=arch-debug, and a
optimized build with PETSC_ARCH=arch-opt etc.

And if you are using a petsc formatted makefile with your code - you
can switch between these builds by just switching PETSC_ARCH.

Satish

On Thu, 21 Mar 2019, Maahi Talukder via petsc-users wrote:

> Dear All,
> 
> Currently, I am running PETSc with debugging option. And it says that if I
> run ./configure --with-debugging=no, the performance would be faster. My
> question is: what would I do if I want to go back to debugging mode, and If
> I configure it now with no debugging option, would it make any changes to
> my current setting?
> 
> Regards,
> Maahi Talukder
> 



Re: [petsc-users] Valgrind Issue With Ghosted Vectors

2019-03-21 Thread Balay, Satish via petsc-users
Ok - cherrypicked and pushed to maint.

Satish
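
For context, a minimal sketch of the ghosted-vector update pattern that the valgrind traces quoted below come from. The local size, global size and ghost indices here are illustrative placeholders, not taken from Derek's code:

#include <petscvec.h>

static PetscErrorCode GhostUpdateExample(MPI_Comm comm)
{
  PetscErrorCode ierr;
  PetscInt       nlocal = 4, nglobal = PETSC_DECIDE;
  PetscInt       nghost = 2;
  PetscInt       ghosts[2] = {0, 1};   /* global indices of the ghost entries (placeholder) */
  Vec            v;

  ierr = VecCreateGhost(comm, nlocal, nglobal, nghost, ghosts, &v);CHKERRQ(ierr);
  ierr = VecSet(v, 1.0);CHKERRQ(ierr); /* set owned entries before scattering them to the ghost copies */
  ierr = VecGhostUpdateBegin(v, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecGhostUpdateEnd(v, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecDestroy(&v);CHKERRQ(ierr);
  return 0;
}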

On Thu, 21 Mar 2019, Zhang, Junchao via petsc-users wrote:

> Yes, it does.  It is a bug.
> --Junchao Zhang
> 
> 
> On Thu, Mar 21, 2019 at 11:16 AM Balay, Satish <ba...@mcs.anl.gov> wrote:
> Does maint also need this fix?
> 
> Satish
> 
> On Thu, 21 Mar 2019, Stefano Zampini via petsc-users wrote:
> 
> > Derek
> >
> > I have fixed the optimized plan few weeks ago
> >
> > https://bitbucket.org/petsc/petsc/commits/c3caad8634d376283f7053f3b388606b45b3122c
> >
> > Maybe this will fix your problem too?
> >
> > Stefano
> >
> >
> > Il Gio 21 Mar 2019, 04:21 Zhang, Junchao via petsc-users <
> > petsc-users@mcs.anl.gov> ha scritto:
> >
> > > Hi, Derek,
> > >   Try to apply this tiny (but dirty) patch on your version of PETSc to
> > > disable the VecScatterMemcpyPlan optimization to see if it helps.
> > >   Thanks.
> > > --Junchao Zhang
> > >
> > > On Wed, Mar 20, 2019 at 6:33 PM Junchao Zhang <jczh...@mcs.anl.gov> wrote:
> > >
> > >> Did you see the warning with small scale runs?  Is it possible to provide
> > >> a test code?
> > >> You mentioned "changing PETSc now would be pretty painful". Is it because
> > >> it will affect your performance (but not your code)?  If yes, could you 
> > >> try
> > >> PETSc master and run you code with or without -vecscatter_type sf.  I 
> > >> want
> > >> to isolate the problem and see if it is due to possible bugs in 
> > >> VecScatter.
> > >> If the above suggestion is not feasible, I will disable VecScatterMemcpy.
> > >> It is an optimization I added. Sorry I did not have an option to turn off
> > >> it because I thought it was always useful:)  I will provide you a patch
> > >> later to disable it. With that you can run again to isolate possible bugs
> > >> in VecScatterMemcpy.
> > >> Thanks.
> > >> --Junchao Zhang
> > >>
> > >>
> > >> On Wed, Mar 20, 2019 at 5:40 PM Derek Gaston via petsc-users <
> > >> petsc-users@mcs.anl.gov> wrote:
> > >>
> > >>> Trying to track down some memory corruption I'm seeing on larger scale
> > >>> runs (3.5B+ unknowns).  Was able to run Valgrind on it... and I'm seeing
> > >>> quite a lot of uninitialized value errors coming from ghost updating.  
> > >>> Here
> > >>> are some of the traces:
> > >>>
> > >>> ==87695== Conditional jump or move depends on uninitialised value(s)
> > >>> ==87695==at 0x73236D3: PetscMallocAlign (mal.c:28)
> > >>> ==87695==by 0x7323C70: PetscMallocA (mal.c:390)
> > >>> ==87695==by 0x739048E: VecScatterMemcpyPlanCreate_Index 
> > >>> (vscat.c:284)
> > >>> ==87695==by 0x73A5D97: VecScatterMemcpyPlanCreate_PtoP
> > >>> (vpscat_mpi1.c:312)
> > >>> ==64730==by 0x7393E8A: VecScatterSetUp_vectype_private (vscat.c:857)
> > >>> ==64730==by 0x7395E5D: VecScatterSetUp_MPI1 (vpscat_mpi1.c:2543)
> > >>> ==64730==by 0x73DDD39: VecScatterSetUp (vscatfce.c:212)
> > >>> ==64730==by 0x73DCD73: VecScatterCreateWithData (vscreate.c:333)
> > >>> ==64730==by 0x7444232: VecCreateGhostWithArray (pbvec.c:685)
> > >>> ==64730==by 0x744490D: VecCreateGhost (pbvec.c:741)
> > >>>
> > >>> ==133582== Conditional jump or move depends on uninitialised value(s)
> > >>> ==133582==at 0x4030384: memcpy@@GLIBC_2.14
> > >>> (vg_replace_strmem.c:1034)
> > >>> ==133582==by 0x739E4F9: PetscMemcpy (petscsys.h:1649)
> > >>> ==133582==by 0x739E4F9: VecScatterMemcpyPlanExecute_Pack
> > >>> (vecscatterimpl.h:150)
> > >>> ==133582==by 0x739E4F9: VecScatterBeginMPI1_1 (vpscat_mpi1.h:69)
> > >>> ==133582==by 0x73DD964: VecScatterBegin (vscatfce.c:110)
> > >>> ==133582==by 0x744E195: VecGhostUpdateBegin (commonmpvec.c:225)
> > >>>
> > >>> This is from a Git checkout of PETSc... the hash I branched from is:
> > >>> 0e667e8fea4aa from December 23rd (updating would be really hard at this
> > >>> point as I've completed 90% of my dissertation with this version... and
> > >>> changing PETSc now would be pretty painful!).
> > >>>
> > >>> Any ideas?  Is it possible it's in my code?  Is it possible that there
> > >>> are later PETSc commits that already fix this?
> > >>>
> > >>> Thanks for any help,
> > >>> Derek
> > >>>
> > >>>
> >
> 
> 



Re: [petsc-users] Valgrind Issue With Ghosted Vectors

2019-03-21 Thread Balay, Satish via petsc-users
Does maint also need this fix?

Satish

On Thu, 21 Mar 2019, Stefano Zampini via petsc-users wrote:

> Derek
> 
> I have fixed the optimized plan few weeks ago
> 
> https://bitbucket.org/petsc/petsc/commits/c3caad8634d376283f7053f3b388606b45b3122c
> 
> Maybe this will fix your problem too?
> 
> Stefano
> 
> 
> Il Gio 21 Mar 2019, 04:21 Zhang, Junchao via petsc-users <
> petsc-users@mcs.anl.gov> ha scritto:
> 
> > Hi, Derek,
> >   Try to apply this tiny (but dirty) patch on your version of PETSc to
> > disable the VecScatterMemcpyPlan optimization to see if it helps.
> >   Thanks.
> > --Junchao Zhang
> >
> > On Wed, Mar 20, 2019 at 6:33 PM Junchao Zhang  wrote:
> >
> >> Did you see the warning with small scale runs?  Is it possible to provide
> >> a test code?
> >> You mentioned "changing PETSc now would be pretty painful". Is it because
> >> it will affect your performance (but not your code)?  If yes, could you try
> >> PETSc master and run you code with or without -vecscatter_type sf.  I want
> >> to isolate the problem and see if it is due to possible bugs in VecScatter.
> >> If the above suggestion is not feasible, I will disable VecScatterMemcpy.
> >> It is an optimization I added. Sorry I did not have an option to turn off
> >> it because I thought it was always useful:)  I will provide you a patch
> >> later to disable it. With that you can run again to isolate possible bugs
> >> in VecScatterMemcpy.
> >> Thanks.
> >> --Junchao Zhang
> >>
> >>
> >> On Wed, Mar 20, 2019 at 5:40 PM Derek Gaston via petsc-users <
> >> petsc-users@mcs.anl.gov> wrote:
> >>
> >>> Trying to track down some memory corruption I'm seeing on larger scale
> >>> runs (3.5B+ unknowns).  Was able to run Valgrind on it... and I'm seeing
> >>> quite a lot of uninitialized value errors coming from ghost updating.  
> >>> Here
> >>> are some of the traces:
> >>>
> >>> ==87695== Conditional jump or move depends on uninitialised value(s)
> >>> ==87695==at 0x73236D3: PetscMallocAlign (mal.c:28)
> >>> ==87695==by 0x7323C70: PetscMallocA (mal.c:390)
> >>> ==87695==by 0x739048E: VecScatterMemcpyPlanCreate_Index (vscat.c:284)
> >>> ==87695==by 0x73A5D97: VecScatterMemcpyPlanCreate_PtoP
> >>> (vpscat_mpi1.c:312)
> >>> ==64730==by 0x7393E8A: VecScatterSetUp_vectype_private (vscat.c:857)
> >>> ==64730==by 0x7395E5D: VecScatterSetUp_MPI1 (vpscat_mpi1.c:2543)
> >>> ==64730==by 0x73DDD39: VecScatterSetUp (vscatfce.c:212)
> >>> ==64730==by 0x73DCD73: VecScatterCreateWithData (vscreate.c:333)
> >>> ==64730==by 0x7444232: VecCreateGhostWithArray (pbvec.c:685)
> >>> ==64730==by 0x744490D: VecCreateGhost (pbvec.c:741)
> >>>
> >>> ==133582== Conditional jump or move depends on uninitialised value(s)
> >>> ==133582==at 0x4030384: memcpy@@GLIBC_2.14
> >>> (vg_replace_strmem.c:1034)
> >>> ==133582==by 0x739E4F9: PetscMemcpy (petscsys.h:1649)
> >>> ==133582==by 0x739E4F9: VecScatterMemcpyPlanExecute_Pack
> >>> (vecscatterimpl.h:150)
> >>> ==133582==by 0x739E4F9: VecScatterBeginMPI1_1 (vpscat_mpi1.h:69)
> >>> ==133582==by 0x73DD964: VecScatterBegin (vscatfce.c:110)
> >>> ==133582==by 0x744E195: VecGhostUpdateBegin (commonmpvec.c:225)
> >>>
> >>> This is from a Git checkout of PETSc... the hash I branched from is:
> >>> 0e667e8fea4aa from December 23rd (updating would be really hard at this
> >>> point as I've completed 90% of my dissertation with this version... and
> >>> changing PETSc now would be pretty painful!).
> >>>
> >>> Any ideas?  Is it possible it's in my code?  Is it possible that there
> >>> are later PETSc commits that already fix this?
> >>>
> >>> Thanks for any help,
> >>> Derek
> >>>
> >>>
> 



Re: [petsc-users] Cross-compilation cluster

2019-03-14 Thread Balay, Satish via petsc-users
As these warnings indicate - the 64-bit compiler ignores the 32-bit libraries [so
they don't get used].

i.e. mixing 32-bit and 64-bit libraries is not the cause of your
problems on this machine.

Satish


On Wed, 13 Mar 2019, Amneet Bhalla via petsc-users wrote:

> Hi Folks,
> 
> I am on a cluster that has -L/lib dir with 32-bit libraries and -L/lib64
> with 64-bit libraries. During compilation of some of libraries required for
> my code (such as SAMRAI and libMesh) both paths
> get picked  -L/lib and -L/lib64.
> 
> I am seeing some sporadic behavior in runtime when at some timesteps PETSc
> does not converge. The same code with the same number of processors run
> just fine on my workstation that has just 64-bit version of libraries.
> 
> Even during the final linking stage of the executable, the linker gives
> warnings like
> 
> ld: skipping incompatible //lib/libm.so when searching for -lm
> 
> ld: skipping incompatible /lib/libm.so when searching for -lm
> 
> ld: skipping incompatible /lib/libm.so when searching for -lm
> 
> ld: skipping incompatible //lib/libpthread.so when searching for -lpthread
> 
> ld: skipping incompatible /lib/libpthread.so when searching for -lpthread
> 
> ld: skipping incompatible /lib/libpthread.so when searching for -lpthread
> 
> ld: skipping incompatible //lib/libdl.so when searching for -ldl
> 
> ld: skipping incompatible //lib/libc.so when searching for -lc
> 
> ld: skipping incompatible /lib/libc.so when searching for -lc
> 
> ld: skipping incompatible /lib/libc.so when searching for -lc
> but the executable runs.
> 
> 
> This is during config of SAMRAI when it picks both -L/lib and -L/lib64:
> 
> checking whether we are using the GNU Fortran 77 compiler... no
> 
> checking whether ifort accepts -g... yes
> 
> checking how to get verbose linking output from ifort... -v
> 
> checking for Fortran 77 libraries of ifort...
> -L/opt/intel/compilers_and_libraries_2018.2.199/linux/tbb/lib/intel64/gcc4.4
> -L/opt/intel/compilers_and_libraries_2018.2.199/linux/mkl/lib/intel64
> -L/opt/intel/compilers_and_libraries_2018.2.199/linux/compiler/lib/intel64
> -L/opt/intel/compilers_and_libraries_2018.2.199/linux/ipp/lib/intel64
> -L/opt/intel/compilers_and_libraries_2018.2.199/linux/compiler/lib/intel64_lin
> -L/usr/lib/gcc/x86_64-redhat-linux/4.8.5/
> -L/usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../lib64
> -L/usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../lib64/ -L/lib/../lib64
> -L/lib/../lib64/ -L/usr/lib/../lib64 -L/usr/lib/../lib64/
> -L/opt/intel/compilers_and_libraries_2018.2.199/linux/tbb/lib/intel64/gcc4.4/
> -L/opt/intel/compilers_and_libraries_2018.2.199/linux/mkl/lib/intel64/
> -L/opt/intel/compilers_and_libraries_2018.2.199/linux/compiler/lib/intel64/
> -L/opt/intel/compilers_and_libraries_2018.2.199/linux/ipp/lib/intel64/
> -L/usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../ -L/lib64 -L/lib/
> -L/usr/lib64 -L/usr/lib -lifport -lifcoremt -limf -lsvml -lm -lipgo -lirc
> -lpthread -lgcc -lgcc_s -lirc_s -ldl
> 
> libMesh is also picking that path
> 
> libmesh_optional_LIBS : -lhdf5 -lhdf5_cpp -lz
> -L/home/asbhalla/softwares/PETSc-BitBucket/PETSc/linux-opt/lib
> -Wl,-rpath,/home/asbhalla/softwares/PETSc-BitBucket/PETSc/linux-opt/lib
> -Wl,-rpath,/opt/intel/mkl/lib/intel64 -L/opt/intel/mkl/lib/intel64
> -Wl,-rpath,/opt/mellanox/hcoll/lib -L/opt/mellanox/hcoll/lib
> -Wl,-rpath,/opt/mellanox/mxm/lib -L/opt/mellanox/mxm/lib
> -Wl,-rpath,/opt/intel/compilers_and_libraries_2018.2.199/linux/tbb/lib/intel64/gcc4.4
> -L/opt/intel/compilers_and_libraries_2018.2.199/linux/tbb/lib/intel64/gcc4.4
> -Wl,-rpath,/opt/intel/compilers_and_libraries_2018.2.199/linux/mkl/lib/intel64
> -L/opt/intel/compilers_and_libraries_2018.2.199/linux/mkl/lib/intel64
> -Wl,-rpath,/opt/intel/compilers_and_libraries_2018.2.199/linux/compiler/lib/intel64
> -L/opt/intel/compilers_and_libraries_2018.2.199/linux/compiler/lib/intel64
> -Wl,-rpath,/opt/intel/compilers_and_libraries_2018.2.199/linux/ipp/lib/intel64
> -L/opt/intel/compilers_and_libraries_2018.2.199/linux/ipp/lib/intel64
> -Wl,-rpath,/opt/intel/compilers_and_libraries_2018.2.199/linux/compiler/lib/intel64_lin
> -L/opt/intel/compilers_and_libraries_2018.2.199/linux/compiler/lib/intel64_lin
> -lpetsc -lHYPRE -lmkl_intel_lp64 -lmkl_sequential -lmkl_core
> -lmpi_usempif08 -lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi -lifport
> -lifcoremt_pic -limf -lsvml -lm -lipgo -lirc -lpthread -lgcc_s -lirc_s
> -lstdc++ -ldl -L/lib -Wl,-rpath,/lib
> -Wl,-rpath,/usr/local/mpi/intel/openmpi-4.0.0/lib64
> -L/usr/local/mpi/intel/openmpi-4.0.0/lib64
> -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.8.5
> -L/usr/lib/gcc/x86_64-redhat-linux/4.8.5
> 
> Perhaps PETSc also picks up both versions (and there is a way to query it
> from PETSc?), but I can't confirm this. Is there a way to instruct make to
> select only -L/lib64? I want to rule out that 32-bit dynamic library is not
> a culprit for the random non-convergence of PETSc solvers and the eventual
> crash of the 

Re: [petsc-users] Compiling Fortran Code

2019-03-13 Thread Balay, Satish via petsc-users
check petsc makefile format - for ex: 
src/tao/unconstrained/examples/tutorials/makefile

Also rename your fortran sources that have petsc calls from .f to .F


On Wed, 13 Mar 2019, Matthew Knepley via petsc-users wrote:

> On Wed, Mar 13, 2019 at 7:36 PM Maahi Talukder via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
> 
> > Dear All,
> >
> > I am trying to compile a Fortran code. The make is as it follows-
> >
> >
> > 
> > # Makefile for egrid2d
> >
> > OBJS = main.o egrid2d.o
> >
> > FFLAGS = -I/home/maahi/petsc/include
> > -I/home/maahi/petsc/arch-linux2-c-debug/include -Ofast -fdefault-real-8
> >
> > #
> > # link
> > #
> > include ${PETSC_DIR}/lib/petsc/conf/variables
> > include ${PETSC_DIR}/lib/petsc/conf/rules
> >
> > egrid2d: $(OBJS)
> >
> > ${FLINKER}  $(OBJS)  -o egrid2d ${PETSC_LIB}
> >
> 
> Move this above your includes
> 
The location is fine. Can you change OBJS to a different name - say OBJ [or 
something else] and see if that works.

Satish

> 
> >
> > #
> > # compile
> > #
> > main.o:
> >${FLINKER} -c $(FFLAGS) main.f  ${PETSC_LIB}
> >
> 
> You should not need this rule.
> 
>   Thanks,
> 
> Matt
> 
> 
> > #
> > # Common and Parameter Dependencies
> > #
> >
> > main.o:    main.f    par2d.f
> > egrid2d.o: egrid2d.f par2d.f
> >
> > .
> >
> > But I get the following error-
> >
> >
> > ..
> > /home/maahi/petsc/arch-linux2-c-debug/bin/mpif90 -Wall
> > -ffree-line-length-0 -Wno-unused-dummy-argument -g
> > -I/home/maahi/petsc/include -I/home/maahi/petsc/arch-linux2-c-debug/include
> > -Ofast -fdefault-real-8  -o egrid2d
> > -Wl,-rpath,/home/maahi/petsc/arch-linux2-c-debug/lib
> > -L/home/maahi/petsc/arch-linux2-c-debug/lib
> > -Wl,-rpath,/home/maahi/petsc/arch-linux2-c-debug/lib
> > -L/home/maahi/petsc/arch-linux2-c-debug/lib
> > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/7
> > -L/usr/lib/gcc/x86_64-redhat-linux/7 -lpetsc -lflapack -lfblas -lm
> > -lpthread -lstdc++ -ldl -lmpifort -lmpi -lgfortran -lm -lgfortran -lm
> > -lgcc_s -lquadmath -lstdc++ -ldl
> > /usr/lib/gcc/x86_64-redhat-linux/7/../../../../lib64/crt1.o: In function
> > `_start':
> > (.text+0x20): undefined reference to `main'
> > collect2: error: ld returned 1 exit status
> > make: *** [makefile:18: egrid2d] Error 1
> >
> > 
> >
> > Any idea how to fix it ?
> >
> > Thanks,
> > Maahi Talukder
> >
> >
> >
> >
> 
> 



Re: [petsc-users] What's the correct syntax for PetscOptionsSetValue("-pc_hypre_boomeramg_no_CF","true"); ?

2019-02-28 Thread Balay, Satish via petsc-users
On Thu, 28 Feb 2019, Klaus Burkart via petsc-users wrote:

> Hello,
> I try to use the Hypre boomeramg preconditioner but I keep getting type 
> conversion errors like this one for PetscOptionsSetValue(...):
> error: »const char*« cannot be converted to »PetscOptions« {aka »_n_PetscOptions*«}
>  PetscOptionsSetValue("-pc_hypre_boomeramg_no_CF","true");
> 
> This is my setup:
> 
>     PCSetType(pc,PCHYPRE);
>     PCHYPRESetType(pc,"boomeramg");
>     PetscOptionsSetValue("-pc_hypre_boomeramg_no_CF","true");
> 
> What's the correct syntax for 
> PetscOptionsSetValue("-pc_hypre_boomeramg_no_CF","true"); ?

Check examples at: 
https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscOptionsSetValue.html

  PetscOptionsSetValue(NULL,"-pc_hypre_boomeramg_no_CF","true");

Satish
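
A minimal sketch of the corrected setup in context, assuming the PC object comes from an existing KSP and that PETSc was configured with hypre. Whether an explicit PCSetFromOptions() call is needed depends on how the solver is driven; it is shown here for completeness:

#include <petscksp.h>

static PetscErrorCode SetupBoomerAMG(PC pc)
{
  PetscErrorCode ierr;

  ierr = PCSetType(pc, PCHYPRE);CHKERRQ(ierr);
  ierr = PCHYPRESetType(pc, "boomeramg");CHKERRQ(ierr);
  /* First argument is the options database object; NULL selects the global default database. */
  ierr = PetscOptionsSetValue(NULL, "-pc_hypre_boomeramg_no_CF", "true");CHKERRQ(ierr);
  ierr = PCSetFromOptions(pc);CHKERRQ(ierr); /* let the PC read the option back */
  return 0;
}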

Re: [petsc-users] Error with petsc on OSX with libpng

2019-02-27 Thread Balay, Satish via petsc-users
On Wed, 27 Feb 2019, Amneet Pal Bhalla via petsc-users wrote:

> Ah, I see. I will try to make sure that my Xcode is working. Did you try to
> update XCode yet.

Hm - currently petsc is tested with:


balay@ipro^~ $ xcodebuild -version
Xcode 10.1
Build version 10B61
balay@ipro^~ $ clang --version
Apple LLVM version 10.0.0 (clang-1000.11.45.5)
Target: x86_64-apple-darwin18.2.0
Thread model: posix
InstalledDir: 
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
<<<

Your compiler appears to be a bit older.

>>>
Apple LLVM version 10.0.0 (clang-1000.10.44.4)


I don't see any such issues. [It's likely I don't have the command line tools
installed - so I don't get into this issue]

Satish


Re: [petsc-users] Error with petsc on OSX with libpng

2019-02-27 Thread Balay, Satish via petsc-users
On Thu, 28 Feb 2019, Balay, Satish via petsc-users wrote:

> For some reason your xcode compiler is giving these warnings during linktime 
> - and I don't know why. These warnings are confusing petsc configure.
> 
> >>>>>>>
> ld: warning: text-based stub file 
> /System/Library/Frameworks//OpenCL.framework/Versions/A/OpenCL.tbd and 
> library file /System/Library/Frameworks//OpenCL.framework/Versions/A/OpenCL 
> are out of sync. Falling back to library file for linking.
> <<<<<

Here are some potential fixes you can try..

https://stackoverflow.com/questions/51314888/ld-warning-text-based-stub-file-are-out-of-sync-falling-back-to-library-file#51619171
https://forums.developer.apple.com/thread/97850

>>>>>>>>>>>
This is due to an out-of-sync Xcode/command-line tools issue. Reinstalling the 
command-line tools seems to fix the issue per 
http://sd.jtimothyking.com/2018/07/26/stub-file-and-library-file-out-of-sync/:

 

$ sudo mv /Library/Developer/CommandLineTools 
/Library/Developer/CommandLineTools.old

$ xcode-select --install

$ sudo rm -rf /Library/Developer/CommandLineTools.old
<<<<<<<<<<<

Satish


Re: [petsc-users] Error with petsc on OSX with libpng

2019-02-27 Thread Balay, Satish via petsc-users
Can you try the following and see if it makes a difference?

./configure --CC=mpicc --CXX=mpicxx --FC=mpif90 --PETSC_ARCH=sierra-dbg 
--with-debugging=1 --download-hypre=1 --with-x=0 
LIBS='-L/usr/local/Cellar/mpich/3.3/lib -lmpifort 
-L/usr/local/Cellar/gcc/8.3.0/lib/gcc/8 -lgfortran'

For some reason your xcode compiler is giving these warnings during linktime - 
and I don't know why. These warnings are confusing petsc configure.

>>>
ld: warning: text-based stub file 
/System/Library/Frameworks//OpenCL.framework/Versions/A/OpenCL.tbd and library 
file /System/Library/Frameworks//OpenCL.framework/Versions/A/OpenCL are out of 
sync. Falling back to library file for linking.
<

Satish

On Wed, 27 Feb 2019, Amneet Pal Bhalla via petsc-users wrote:


  [NON-Text Body part not included]




Re: [petsc-users] (no subject)

2019-02-27 Thread Balay, Satish via petsc-users


Can you send configure.log from this build?

Satish

On Thu, 28 Feb 2019, DAFNAKIS PANAGIOTIS via petsc-users wrote:

> Hi everybody,
> 
> I am trying to install PETSc version 3.10.3 on OSX Sierra 10.13.6 with the
> following configure options
> ./configure --CC=mpicc --CXX=mpicxx --FC=mpif90 --PETSC_ARCH=sierra-dbg
> --with-debugging=1 --download-hypre=1 --with-x=0
> 
> however I am getting the following error messages when I do 'make check'. See
> below the resulting message. Any suggestions?
> 
> Thanks,
> 
> --Panos
> 
> panos@Sierra-iMac:~/Softwares/PETSc-Bitbucket/PETSc$ make
> PETSC_DIR=/Users/panos/Softwares/PETSc-Bitbucket/PETSc PETSC_ARCH=sierra-dbg
> check
> Running test examples to verify correct installation
> Using PETSC_DIR=/Users/panos/Softwares/PETSc-Bitbucket/PETSc and
> PETSC_ARCH=sierra-dbg
> make[2]: [ex19.PETSc] Error 2 (ignored)
> ***Error detected during compile or link!***
> See http://www.mcs.anl.gov/petsc/documentation/faq.html
> /Users/panos/Softwares/PETSc-Bitbucket/PETSc/src/snes/examples/tutorials ex19
> *
> mpicc -o ex19.o -c -Wall -Wwrite-strings -Wno-strict-aliasing
> -Wno-unknown-pragmas -Qunused-arguments -fvisibility=hidden -g3
> -I/Users/panos/Softwares/PETSc-Bitbucket/PETSc/include
> -I/Users/panos/Softwares/PETSc-Bitbucket/PETSc/sierra-dbg/include
> `pwd`/ex19.c
> mpicc -Wl,-multiply_defined,suppress -Wl,-multiply_defined -Wl,suppress
> -Wl,-commons,use_dylibs -Wl,-search_paths_first -Wl,-no_compact_unwind
> -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas
> -Qunused-arguments -fvisibility=hidden -g3  -o ex19 ex19.o
> -Wl,-rpath,/Users/panos/Softwares/PETSc-Bitbucket/PETSc/sierra-dbg/lib
> -L/Users/panos/Softwares/PETSc-Bitbucket/PETSc/sierra-dbg/lib
> -Wl,-rpath,/Users/panos/Softwares/PETSc-Bitbucket/PETSc/sierra-dbg/lib
> -L/Users/panos/Softwares/PETSc-Bitbucket/PETSc/sierra-dbg/lib
> -Wl,-rpath,/usr/local/Cellar/mpich/3.3/lib -L/usr/local/Cellar/mpich/3.3/lib
> -Wl,-rpath,/usr/local/Cellar/gcc/8.3.0/lib/gcc/8/gcc/x86_64-apple-darwin17.7.0/8.3.0
> -L/usr/local/Cellar/gcc/8.3.0/lib/gcc/8/gcc/x86_64-apple-darwin17.7.0/8.3.0
> -Wl,-rpath,/usr/local/Cellar/gcc/8.3.0/lib/gcc/8
> -L/usr/local/Cellar/gcc/8.3.0/lib/gcc/8
> -Wl,-rpath,/System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries
> -L/System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries
> -Wl,-rpath,/System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ATS.framework/Versions/A/Resources
> -L/System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ATS.framework/Versions/A/Resources
> -Wl,-rpath,/System/Library/Frameworks/ImageIO.framework/Versions/A/Resources
> -L/System/Library/Frameworks/ImageIO.framework/Versions/A/Resources -lpetsc
> -lHYPRE -ldl -lmpifort -lmpi -lpmpi -lgfortran -lquadmath -lm -lGFXShared
> -lGLU -lGL -lGLImage -lCVMSPluginSupport -lFontParser -lFontRegistry -lJPEG
> -lTIFF -lPng -lGIF -lJP2 -lRadiance -lCoreVMClient -ldl
> ld: warning: text-based stub file
> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGFXShared.tbd
> and library file
> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGFXShared.dylib
> are out of sync. Falling back to library file for linking.
> ld: warning: text-based stub file
> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGLU.tbd
> and library file
> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGLU.dylib
> are out of sync. Falling back to library file for linking.
> ld: warning: text-based stub file
> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGL.tbd and
> library file
> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGL.dylib
> are out of sync. Falling back to library file for linking.
> ld: warning: text-based stub file
> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGLImage.tbd
> and library file
> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGLImage.dylib
> are out of sync. Falling back to library file for linking.
> ld: warning: text-based stub file
> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libCVMSPluginSupport.tbd
> and library file
> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libCVMSPluginSupport.dylib
> are out of sync. Falling back to library file for linking.
> ld: warning: text-based stub file
> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ATS.framework/Versions/A/Resources/libFontParser.tbd
> and library file
> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ATS.framework/Versions/A/Resources/libFontParser.dylib
> are out of sync. Falling back to library file for linking.
> ld: warning: text-based stub file
> 

Re: [petsc-users] Fwd: PETSC installation issues

2019-02-27 Thread Balay, Satish via petsc-users
On Wed, 27 Feb 2019, Alexander Lindsay wrote:

> Is this part of the log relevant?

Here is the relevant part.


Do not need to rebuild FBLASLAPACK
<<<

Here configure thinks FBLASLAPACK is already built.

>
TEST checkLib from 
config.packages.BlasLapack(/home/ryan/projects/moose/petsc/config/BuildSystem/config/packages/BlasLapack.py:114)
TESTING: checkLib from 
config.packages.BlasLapack(config/BuildSystem/config/packages/BlasLapack.py:114)
  Checking for BLAS and LAPACK symbols
Checking for functions [ddot_] in library 
['/home/ryan/projects/moose/petsc/linux-c-opt/lib/libfblas.a'] ['libm.a', 
'-lstdc++', '-ldl', '-Wl,-rpa\
th,/usr/lib/openmpi', '-L/usr/lib/openmpi', '-lmpi_usempif08', 
'-lmpi_usempi_ignore_tkr', '-lmpi_mpifh', '-lmpi', '-lgfortran', '-lm', 
'-Wl,-rpath,/usr/lib/gcc/x86_6\
4-pc-linux-gnu/8.2.1', '-L/usr/lib/gcc/x86_64-pc-linux-gnu/8.2.1', 
'-Wl,-rpath,/usr/lib/openmpi', '-lgfortran', '-lm', '-lgomp', '-lgcc_s', 
'-lquadmath', '-lpthread'\
]
  Pushing language C
Executing: mpicc -c -o /tmp/petsc-VuReVk/config.libraries/conftest.o 
-I/tmp/petsc-VuReVk/config.setCompilers -I/tmp/petsc-VuReVk/config.compilers 
-I/tmp/petsc-VuReVk\
/config.utilities.closure -I/tmp/petsc-VuReVk/config.headers 
-I/tmp/petsc-VuReVk/config.utilities.cacheDetails 
-I/tmp/petsc-VuReVk/config.atomics -I/tmp/petsc-VuReVk\
/config.functions -I/tmp/petsc-VuReVk/config.utilities.featureTestMacros 
-I/tmp/petsc-VuReVk/config.utilities.missing 
-I/tmp/petsc-VuReVk/PETSc.options.scalarTypes -\
I/tmp/petsc-VuReVk/config.packages.MPI -I/tmp/petsc-VuReVk/config.types 
-I/tmp/petsc-VuReVk/config.packages.valgrind 
-I/tmp/petsc-VuReVk/config.packages.pthread -I/t\
mp/petsc-VuReVk/config.packages.metis -I/tmp/petsc-VuReVk/config.libraries 
-fPIC -fopenmp   -g -O  /tmp/petsc-VuReVk/config.libraries/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
/* Override any gcc2 internal prototype to avoid an error. */
char ddot_();
static void _check_ddot_() { ddot_(); }

int main() {
_check_ddot_();;
  return 0;
}
  Pushing language C
  Popping language C
Executing: mpicc  -o /tmp/petsc-VuReVk/config.libraries/conftest   -fPIC 
-fopenmp   -g -O /tmp/petsc-VuReVk/config.libraries/conftest.o  
-Wl,-rpath,/home/ryan/projects/moose/petsc/linux-c-opt/lib 
-L/home/ryan/projects/moose/petsc/linux-c-opt/lib -lfblas -lm -lstdc++ -ldl 
-Wl,-rpath,/usr/lib/openmpi -L/usr/lib/openmpi -lmpi_usempif08 
-lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi -lgfortran -lm 
-Wl,-rpath,/usr/lib/gcc/x86_64-pc-linux-gnu/8.2.1 
-L/usr/lib/gcc/x86_64-pc-linux-gnu/8.2.1 -Wl,-rpath,/usr/lib/openmpi -lgfortran 
-lm -lgomp -lgcc_s -lquadmath -lpthread -lstdc++ -ldl
Possible ERROR while running linker: exit code 256
stderr:
/usr/bin/ld: cannot find -lfblas
collect2: error: ld returned 1 exit status
<<

But here - the compiler is not finding the library.

So the library is not likely built.

You can either do:

rm -rf /home/ryan/projects/moose/petsc/linux-c-opt/

i.e delete all build files - and start a fresh build.

Or delete the file configure looks for and assumes this package is built - i.e

rm -f 
/home/ryan/projects/moose/petsc/linux-c-opt/lib/petsc/conf/pkg.conf.fblaslapack

and rerun configure.


Satish



Re: [petsc-users] Fwd: PETSC installation issues

2019-02-27 Thread Balay, Satish via petsc-users
>>>
stderr:
/usr/bin/ld: cannot find -lfblas
<<<

Looks like a broken/incomplete build - that configure thinks is successful.

Suggest doing a fresh build after 'rm -rf 
/home/ryan/projects/moose/petsc/linux-c-opt'

Also if using threads - might want to use --download-openblas [or MKL]

Satish

On Wed, 27 Feb 2019, Fande Kong wrote:

> Hi Satish,
> 
> Do you have any idea why the configure failed?
> 
> Thanks,
> 
> Fande
> 
> On Mon, Feb 25, 2019 at 2:57 PM  wrote:
> 
> > Greetings,
> > I am having trouble installing Petsc with the given configuration options
> > in the scripts/update_and_rebuild_petsc.sh. I have attached the
> > confiugure.log. I am running Arch linux - kernel 4.20.12, mpi: gcc 8.2.1,
> > mpif77/90: gnu fortran gcc 8.2.1.
> >
> > The main problem is that I keep getting errors for superlu_dist and
> > scalapack that the downloaded file cannot be installed with the given
> > configuration. Are there some other flags I can specify to compile and link
> > the packages with Petsc?
> >
> >
> > --
> > You received this message because you are subscribed to the Google Groups
> > "moose-users" group.
> > To unsubscribe from this group and stop receiving emails from it, send an
> > email to moose-users+unsubscr...@googlegroups.com.
> > Visit this group at https://groups.google.com/group/moose-users.
> > To view this discussion on the web visit
> > https://groups.google.com/d/msgid/moose-users/09180386-e88b-4776-94cc-1f0cc00e00f6%40googlegroups.com
> > 
> > .
> > For more options, visit https://groups.google.com/d/optout.
> >
> 



Re: [petsc-users] Link to the latest tutorials broken

2019-02-07 Thread Balay, Satish via petsc-users
fixed now.

Satish

On Thu, 7 Feb 2019, Sajid Ali via petsc-users wrote:

> Hi,
> 
> The links to the Jan 2019 presentations at
> https://www.mcs.anl.gov/petsc/documentation/tutorials/index.html are
> broken. Could these be fixed ?
> 
> Thank You,
> Sajid Ali
> Applied Physics
> Northwestern University
> 



Re: [petsc-users] Installing PETSc

2019-02-05 Thread Balay, Satish via petsc-users
On Tue, 5 Feb 2019, Fazlul Huq via petsc-users wrote:

> Hello PETSc Developers,
> 
> may be this is a trivial question!
> 
> I usually run PETSc code from Home/petsc-3.10.2 directory. Last day I tried
> to run the code from Documents/petsc directory but I can't.

I don't know what you mean here - please copy/paste your attempt - and errors 
you've got.

> As far as I can
> recall, I have installed PETSc in the Home directory. Is it the reason why
> I can't run PETSc code from other directory?

nope

> Shall I install PETSc in the root directory?

Once you have a working install of petsc library - you should be able to use it 
from anywhere
in the file-system.

> 
> Again, if I run command "which petsc" I don't get any echo on the terminal.

There is no 'petsc' binary in petsc library.

Satish
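
To make this concrete - a minimal sketch of a stand-alone program that uses the installed library. There is nothing location-specific about it; it builds and runs from any directory, provided PETSC_DIR/PETSC_ARCH point at a working install and a PETSc-format makefile is used:

#include <petscsys.h>

int main(int argc, char **argv)
{
  PetscErrorCode ierr;

  /* Your own program initializes and finalizes the library - there is no 'petsc' executable. */
  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = PetscPrintf(PETSC_COMM_WORLD, "PETSc library is working\n");CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}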


Re: [petsc-users] C++ compilation error

2019-01-13 Thread Balay, Satish via petsc-users
Glad you figured this out. Thanks for the update.

Satish

On Sun, 13 Jan 2019, Choudhary, Devyani D wrote:

> It was the issue with the compiler, and is resolved now. I can run the 
> examples now.
> 
> Thank you for your help.
> 
> 
> Devyani
> 
> 
> From: Balay, Satish 
> Sent: Sunday, January 13, 2019 5:18:22 PM
> To: Choudhary, Devyani D; petsc-users
> Subject: RE: [petsc-users] C++ compilation error
> 
> 
Do you get this error with any petsc example – say src/ksp/ksp/examples/tutorials/ex2 ?
> 
> 
> 
> Also, copy paste the complete output from make – when building example or 
> your Test code.
> 
> 
> 
> Satish
> 
> 
> 
> 
> From: petsc-users  on behalf of Choudhary, 
> Devyani D via petsc-users 
> Sent: Sunday, January 13, 2019 2:27:05 PM
> To: petsc-users
> Subject: Re: [petsc-users] C++ compilation error
> 
> 
> Thank you for the response Satish, but I am still encountering the same issue.
> 
> Following is my makefile:
> 
> 
> PETSC_DIR=/usr/local/pacerepov1/petsc/3.8.3/mvapich2-2.1/intel-15.0/opt/lib/petsc/
> 
> include ${PETSC_DIR}/conf/variables
> include ${PETSC_DIR}/conf/rules
> include ${PETSC_DIR}/conf/test
> 
> hello: hello.o  chkopts
> -${CLINKER} -o hello hello.o  ${PETSC_LIB}
> ${RM} hello.o
> 
> It is in the format of petsc examples. I am not sure why I am still getting 
> the error.
> 
> Devyani
> 
> 
> 
> 
> From: Balay, Satish 
> Sent: Sunday, January 13, 2019 3:07:58 PM
> To: Choudhary, Devyani D
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] C++ compilation error
> 
> Presumably PETSc is buit with intel compilers - but somehow gcc/g++ is
> getting used via your makefile.
> 
> Do you get this error when you use PETSc example with a petsc
> makefile? Perhaps you need to format your makefile using petsc
> makefile format. For ex: check
> src/tao/leastsquares/examples/tutorials/makefile
> 
> Satish
> 
> On Sun, 13 Jan 2019, Choudhary, Devyani D via petsc-users wrote:
> 
> > Hi,
> >
> >
> > I am trying to make a simple hello world script using a makefile that 
> > includes petsc, and am getting the error
> >
> > "g++: error: unrecognized command line option ‘-wd1572’"
> >
> > I am not sure how to fix it.
> >
> > Please let me know if you have some ideas on how to fix this.
> >
> >
> > Thank you
> >
> 


Re: [petsc-users] C++ compilation error

2019-01-13 Thread Balay, Satish via petsc-users
Do you get this error with any petsc example – say src/ksp/ksp/examples/tutorials/ex2 ?

Also, copy paste the complete output from make – when building example or your 
Test code.

Satish


From: petsc-users  on behalf of Choudhary, 
Devyani D via petsc-users 
Sent: Sunday, January 13, 2019 2:27:05 PM
To: petsc-users
Subject: Re: [petsc-users] C++ compilation error


Thank you for the response Satish, but I am still encountering the same issue.

Following is my makefile:


PETSC_DIR=/usr/local/pacerepov1/petsc/3.8.3/mvapich2-2.1/intel-15.0/opt/lib/petsc/

include ${PETSC_DIR}/conf/variables
include ${PETSC_DIR}/conf/rules
include ${PETSC_DIR}/conf/test

hello: hello.o  chkopts
-${CLINKER} -o hello hello.o  ${PETSC_LIB}
${RM} hello.o

It is in the format of petsc examples. I am not sure why I am still getting the 
error.

Devyani




From: Balay, Satish 
Sent: Sunday, January 13, 2019 3:07:58 PM
To: Choudhary, Devyani D
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] C++ compilation error

Presumably PETSc is buit with intel compilers - but somehow gcc/g++ is
getting used via your makefile.

Do you get this error when you use PETSc example with a petsc
makefile? Perhaps you need to format your makefile using petsc
makefile format. For ex: check
src/tao/leastsquares/examples/tutorials/makefile

Satish

On Sun, 13 Jan 2019, Choudhary, Devyani D via petsc-users wrote:

> Hi,
>
>
> I am trying to make a simple hello world script using a makefile that 
> includes petsc, and am getting the error
>
> "g++: error: unrecognized command line option ‘-wd1572’"
>
> I am not sure how to fix it.
>
> Please let me know if you have some ideas on how to fix this.
>
>
> Thank you
>


Re: [petsc-users] C++ compilation error

2019-01-13 Thread Balay, Satish via petsc-users
Presumably PETSc is built with intel compilers - but somehow gcc/g++ is
getting used via your makefile.

Do you get this error when you use PETSc example with a petsc
makefile? Perhaps you need to format your makefile using petsc
makefile format. For ex: check
src/tao/leastsquares/examples/tutorials/makefile

Satish

On Sun, 13 Jan 2019, Choudhary, Devyani D via petsc-users wrote:

> Hi,
> 
> 
> I am trying to make a simple hello world script using a makefile that 
> includes petsc, and am getting the error
> 
> "g++: error: unrecognized command line option ‘-wd1572’"
> 
> I am not sure how to fix it.
> 
> Please let me know if you have some ideas on how to fix this.
> 
> 
> Thank you
> 


Re: [petsc-users] PETSc configure error

2019-01-10 Thread Balay, Satish via petsc-users
Can you send configure.log from this failure?

Do you have any old version of bfort in your PATH?

Satish

On Thu, 10 Jan 2019, Yaqi Wang via petsc-users wrote:

> Hello,
> 
> I am having difficulty on configuring PETSc. Running configure crashed in
> the middle. At the end of configure.log, I have
> ---
> 
> ***
> 
> CONFIGURATION CRASH  (Please send configure.log to
> petsc-ma...@mcs.anl.gov)
> 
> ***
> 
> 'NoneType' object has no attribute 'group'  File "./config/configure.py",
> line 394, in petsc_configure
> 
> framework.configure(out = sys.stdout)
> 
>   File
> "/Users/wangy2/projects/moose/petsc/config/BuildSystem/config/framework.py",
> line 1092, in configure
> 
> self.processChildren()
> 
>   File
> "/Users/wangy2/projects/moose/petsc/config/BuildSystem/config/framework.py",
> line 1081, in processChildren
> 
> self.serialEvaluation(self.childGraph)
> 
>   File
> "/Users/wangy2/projects/moose/petsc/config/BuildSystem/config/framework.py",
> line 1062, in serialEvaluation
> 
> child.configure()
> 
>   File
> "/Users/wangy2/projects/moose/petsc/config/BuildSystem/config/packages/sowing.py",
> line 119, in configure
> 
> self.checkBfortVersion(1,1,25)
> 
>   File
> "/Users/wangy2/projects/moose/petsc/config/BuildSystem/config/packages/sowing.py",
> line 59, in checkBfortVersion
> 
> major= int(ver.group(1))
> 
> 
> 
> Finishing configure run at Thu, 10 Jan 2019 13:04:21 -0700
> 
> 
> 
> --
> 
> Does anyone have the similar error before? How should I fix it?
> 
> 
> Your help is appreciated.
> 
> 
> Best,
> 
> Yaqi
> 



Re: [petsc-users] METIS double precision

2019-01-10 Thread Balay, Satish via petsc-users
https://bitbucket.org/petsc/petsc/pull-requests/1316/metis-provide-download-metis-use/

Satish
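
For code that calls METIS directly, a small compile-time sketch of checking which widths the installed metis.h provides, using the IDXTYPEWIDTH/REALTYPEWIDTH macros mentioned in this thread (the program itself is illustrative):

#include <stdio.h>
#include <metis.h>

int main(void)
{
  /* Both macros are defined in metis.h; REALTYPEWIDTH 64 corresponds to METIS_USE_DOUBLEPRECISION. */
  printf("METIS idx_t  width: %d bits\n", IDXTYPEWIDTH);
  printf("METIS real_t width: %d bits\n", REALTYPEWIDTH);
#if REALTYPEWIDTH == 64
  printf("real_t is double precision\n");
#else
  printf("real_t is single precision\n");
#endif
  return 0;
}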

On Thu, 10 Jan 2019, Balay, Satish via petsc-users wrote:

> Well - this was the previous default - and was removed as it was deemed 
> unnecessary.
> 
> https://bitbucket.org/petsc/petsc/commits/2d4f01c230fe350f0ab5a28d1f5ef05ceab7ea3d
> 
> The attached patch provides --download-metis-use-doubleprecision option
> 
> Satish
> 
> On Thu, 10 Jan 2019, Matthew Overholt via petsc-users wrote:
> 
> > Hello,
> > 
> > How does one configure the PETSc installation to download METIS and have it
> > use REALTYPEWIDTH 64 (as defined in metis.h)?  I am using:
> > 
> > --with-64-bit-indices --download-metis=yes
> > 
> > to get IDXTYPEWIDTH 64.  If I were installing METIS independently I would
> > set the following near the top of metis.h:  #define
> > METIS_USE_DOUBLEPRECISION
> > 
> > The reason for this is I want to call METIS routines outside of PETSc and
> > prefer double precision.
> > 
> > Thanks,
> > Matt Overholt
> > CapeSym, Inc.
> > 
> 



Re: [petsc-users] Configuring PETSc with OpenMPI

2019-01-08 Thread Balay, Satish via petsc-users
Should have clarified: when using
--with-cc=/usr/local/depot/openmpi-3.1.1-gcc-7.3.0/bin/mpicc, remove the
-with-mpi-dir option.

i.e - try:

./configure PETSC_ARCH=linux-cumulus-hpc 
--with-cc=/usr/local/depot/openmpi-3.1.1-gcc-7.3.0/bin/mpicc 
--with-fc=/usr/local/depot/openmpi-3.1.1-gcc-7.3.0/bin/mpifort 
--with-cxx=/usr/local/depot/openmpi-3.1.1-gcc-7.3.0/bin/mpicxx 
--download-parmetis --download-metis --download-ptscotch 
--download-superlu_dist --download-mumps --with-scalar-type=complex 
--with-debugging=no --download-scalapack --download-superlu 
--download-fblaslapack=1 --download-cmake

Satish

On Tue, 8 Jan 2019, Sal Am via petsc-users wrote:

> Thanks Satish for quick response,
> 
> I tried both of the above, first removing the options --with-cc etc. and
> then explicitly specifying the path e.g.
> --with-cc=/usr/local/depot/openmpi-3.1.1-gcc-7.3.0/bin/mpicc etc..
> Netiher worked, the error is still the same telling me "did not work" I
> have attached the log file.
> 
> Thanks
> 
> On Mon, Jan 7, 2019 at 4:39 PM Balay, Satish  wrote:
> 
> > Configure Options: --configModules=PETSc.Configure
> > --optionsModule=config.compilerOptions PETSC_ARCH=linux-cumulus-hpc
> > --with-cc=gcc --with-fc=gfortran --with-cxx=g++
> > --with-mpi-dir=/usr/local/depot/openmpi-3.1.1-gcc-7.3.0/bin/
> > --download-parmetis --download-metis --download-ptscotch
> > --download-superlu_dist --download-mumps --with-scalar-type=complex
> > --with-debugging=no --download-scalapack --download-superlu
> > --download-fblaslapack=1 --download-cmake
> >
> > ' --with-cc=gcc --with-fc=gfortran --with-cxx=g++' prevents usage of mpicc
> > etc - so remove these options when using with-mpi-dir
> >
> > Or use:
> >
> > --with-cc=/usr/local/depot/openmpi-3.1.1-gcc-7.3.0/bin/mpicc etc..
> >
> > Satish
> >
> >
> > On Mon, 7 Jan 2019, Sal Am via petsc-users wrote:
> >
> > > Added the log file.
> > >
> > > >From OpenMPI:
> > >
> > > > The only special configuration that you need to build PETSc is to
> > ensure
> > > > that Open MPI's wrapper compilers (i.e., mpicc and mpif77) are in your
> > > > $PATH before running the PETSc configure.py script.
> > > >
> > > > PETSc should then automatically find Open MPI's wrapper compilers and
> > > > correctly build itself using Open MPI.
> > > >
> > > The OpenMPI dir is on my PATH which contain mpicc and mpif77.
> > >
> > > This is on a HPC, if that matters.
> > >
> >
> >
> 



Re: [petsc-users] Question about correctly catching fp_trap

2019-01-07 Thread Balay, Satish via petsc-users
On Mon, 7 Jan 2019, Sajid Ali via petsc-users wrote:

> Hi Satish,
> 
> Please find attached the logs for the previous patch for petsc+debug.

>>>
Compilers:
  C Compiler: 
/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/mpich-3.3-z5uiwmx24jylnivuhlnqjjmm674ozj6x/bin/mpicc
  -fPIC  -g3
  C++ Compiler:   
/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/mpich-3.3-z5uiwmx24jylnivuhlnqjjmm674ozj6x/bin/mpic++
  -g   -fPIC
  Fortran Compiler:   
/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/mpich-3.3-z5uiwmx24jylnivuhlnqjjmm674ozj6x/bin/mpif90
  -fPIC -g
Linkers:
  Shared linker:   
/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/mpich-3.3-z5uiwmx24jylnivuhlnqjjmm674ozj6x/bin/mpicc
  -shared  -fPIC  -g3
  Dynamic linker:   
/raid/home/sajid/packages/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/mpich-3.3-z5uiwmx24jylnivuhlnqjjmm674ozj6x/bin/mpicc
  -shared  -fPIC  -g3



So the build did use the '-g' flag - so it looks good to me.

Satish


Re: [petsc-users] Configuring PETSc with OpenMPI

2019-01-07 Thread Balay, Satish via petsc-users
Configure Options: --configModules=PETSc.Configure 
--optionsModule=config.compilerOptions PETSC_ARCH=linux-cumulus-hpc 
--with-cc=gcc --with-fc=gfortran --with-cxx=g++ 
--with-mpi-dir=/usr/local/depot/openmpi-3.1.1-gcc-7.3.0/bin/ 
--download-parmetis --download-metis --download-ptscotch 
--download-superlu_dist --download-mumps --with-scalar-type=complex 
--with-debugging=no --download-scalapack --download-superlu 
--download-fblaslapack=1 --download-cmake

' --with-cc=gcc --with-fc=gfortran --with-cxx=g++' prevents usage of mpicc etc 
- so remove these options when using with-mpi-dir

Or use:

--with-cc=/usr/local/depot/openmpi-3.1.1-gcc-7.3.0/bin/mpicc etc..

Satish


On Mon, 7 Jan 2019, Sal Am via petsc-users wrote:

> Added the log file.
> 
> >From OpenMPI:
> 
> > The only special configuration that you need to build PETSc is to ensure
> > that Open MPI's wrapper compilers (i.e., mpicc and mpif77) are in your
> > $PATH before running the PETSc configure.py script.
> >
> > PETSc should then automatically find Open MPI's wrapper compilers and
> > correctly build itself using Open MPI.
> >
> The OpenMPI dir is on my PATH which contain mpicc and mpif77.
> 
> This is on a HPC, if that matters.
> 



Re: [petsc-users] Question about correctly catching fp_trap

2019-01-07 Thread Balay, Satish via petsc-users
On Mon, 7 Jan 2019, Sajid Ali wrote:

> @Satish Balay  : I tried building with the patch and
> don't see any difference. Do you want me to send you the config and build
> logs to investigate further?

Hm - you should see '-g' for 'petsc+debug'. Send the logs. 

I have - in 
/home/balay/spack/opt/spack/linux-fedora29-x86_64/gcc-8.2.1/petsc-3.10.3-q5v3hfhsecvhf3geffeyrm5j3bm6orup/.spack/build.out

Compilers:
  C Compiler:         /home/petsc/soft/mpich-3.3b1/bin/mpicc  -g3
  C++ Compiler:       /home/petsc/soft/mpich-3.3b1/bin/mpic++  -g
  Fortran Compiler:   /home/petsc/soft/mpich-3.3b1/bin/mpif90  -g
Linkers:
  Shared linker:      /home/petsc/soft/mpich-3.3b1/bin/mpicc  -shared  -g3
  Dynamic linker:     /home/petsc/soft/mpich-3.3b1/bin/mpicc  -shared  -g3
<

Note: the patch I sent earlier fails for 'petsc~debug' - but the
attached updated patch should work for it.

> Apart from the -g flag, as I've stated above
> another bug is that petsc uses the system gdb (rhel-7) and not the gdb
> associated with the gcc that was used to build petsc.

It uses gdb from your PATH - so you can setup your PATH to use the appropriate 
gdb.

Satish
diff --git a/var/spack/repos/builtin/packages/petsc/package.py b/var/spack/repos/builtin/packages/petsc/package.py
index 99800b501..b40173fe6 100644
--- a/var/spack/repos/builtin/packages/petsc/package.py
+++ b/var/spack/repos/builtin/packages/petsc/package.py
@@ -195,10 +195,7 @@ class Petsc(Package):
                    '--download-hwloc=0',
                    'CFLAGS=%s' % ' '.join(spec.compiler_flags['cflags']),
                    'FFLAGS=%s' % ' '.join(spec.compiler_flags['fflags']),
-                   'CXXFLAGS=%s' % ' '.join(spec.compiler_flags['cxxflags']),
-                   'COPTFLAGS=',
-                   'FOPTFLAGS=',
-                   'CXXOPTFLAGS=']
+                   'CXXFLAGS=%s' % ' '.join(spec.compiler_flags['cxxflags'])]
         options.extend(self.mpi_dependent_options())
         options.extend([
             '--with-precision=%s' % (
@@ -209,6 +206,13 @@ class Petsc(Package):
             '--with-debugging=%s' % ('1' if '+debug' in spec else '0'),
             '--with-64-bit-indices=%s' % ('1' if '+int64' in spec else '0')
         ])
+        if '+debug' in spec:
+            options.append('--with-debugging=%s')
+        else:
+            options.extend(['COPTFLAGS=',
+                            'FOPTFLAGS=',
+                            'CXXOPTFLAGS='])
+
         # Make sure we use exactly the same Blas/Lapack libraries
         # across the DAG. To that end list them explicitly
         lapack_blas = spec['lapack'].libs + spec['blas'].libs
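
If this updated patch is applied to the spack tree, rebuilding and rechecking the debug variant is roughly the following (a sketch; the variant name follows the spec used in this thread):

spack install petsc+debug
# then check the generated .spack/build.out for '-g' in the compiler lines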


Re: [petsc-users] VecGetArrayF90 in PETSc/3.10 (Fortran)

2019-01-06 Thread Balay, Satish via petsc-users
This code is missing:

#include <petsc/finclude/petscvec.h>
  use petscvec
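
For illustration, a minimal sketch of how the subroutine quoted below would start once the missing include/use are added. The petscmat include and module are added here too, since the routine declares a Mat; mat_vec_mult/matvec are the user's own module and routine from the report, and the file needs a .F90 suffix so the preprocessor sees the includes.

#include <petsc/finclude/petscvec.h>
#include <petsc/finclude/petscmat.h>
      subroutine mymult(A,x,y,loco_ierr)
        use petscvec
        use petscmat
        use mat_vec_mult, only: matvec   ! user module from the report
        implicit none
        Mat, intent(in)             :: A          ! required by PETSc
        Vec, intent(in)             :: x
        Vec, intent(out)            :: y
        PetscErrorCode, intent(out) :: loco_ierr
        PetscScalar, pointer        :: xx(:), yy(:)

        ! access the raw arrays, call the user's mat-vec, then restore them
        call VecGetArrayReadF90(x, xx, loco_ierr)
        call VecGetArrayF90(y, yy, loco_ierr)
        call matvec(xx, yy)
        call VecRestoreArrayReadF90(x, xx, loco_ierr)
        call VecRestoreArrayF90(y, yy, loco_ierr)
      end subroutine mymult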

Check:

https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/UsingFortran.html

Satish

On Sun, 6 Jan 2019, Chang Liu via petsc-users wrote:

> Hi All,
> Recently, while upgrading our code from 3.8 to 3.10, it always runs
> into an error during the VecGetArrayF90 call (it is a Fortran90-based code). The
> reason we use it is to access the array for our user-defined matrix-vector
> multiplication subroutine, for example:
> 
>   subroutine mymult(A,x,y,loco_ierr)
>     !!! matrix-vector multiplication  y=A.x !!!
>     use mat_vec_mult,only:matvec
>     implicit none
>     Mat,intent(in)::A ! required by PETSc
>     Vec,intent(in)::x
>     Vec,intent(out)::y
>     PetscErrorCode,intent(out)::loco_ierr
>     PetscScalar,pointer::xx(:),yy(:)
> 
>     call VecGetArrayReadF90(x,xx,loco_ierr)
>     call VecGetArrayF90(y,yy,loco_ierr)
>     call matvec(xx,yy)
>     call VecRestoreArrayReadF90(x,xx,loco_ierr)
>     call VecRestoreArrayF90(y,yy,loco_ierr)
>     return
>   end subroutine mymult
> 
> I checked the change log on the website and saw a statement:
> TAO:
> 
> - Added VecLock{Push|Pop} calls around user callbacks; use of
>   VecGetArray in user callbacks is now prohibited.
> 
> Is there any relation to my issue? All the online PETSc examples and
> descriptions for VecGetArray/VecGetArrayF90 are the same as for PETSc/3.8.  I
> tried to add/use VecLockPop/Push, but it still doesn't work.
> 
> I am confused and not sure what the problem is.
> Could I get some help?
> Thanks,
> 



Re: [petsc-users] Question about correctly catching fp_trap

2019-01-03 Thread Balay, Satish via petsc-users
I guess the primary bug here is that the 'petsc+debug' option is broken in spack.

It's not clear to me if a package in spack should have a 'debug' option. [So
should I remove this?]

since the spack way is perhaps:

spack install CFLAGS=-g FFLAGS=-g CXXFLAGS=-g petsc

Satish

On Fri, 4 Jan 2019, Smith, Barry F. via petsc-users wrote:

> 
>This appears to be more a spack question than a PETSc question. That make 
> (that doesn't have the -g) is controlled by spack, not PETSc.
> 
>Barry
> 
> 
> > On Jan 3, 2019, at 6:15 PM, Sajid Ali via petsc-users 
> >  wrote:
> > 
> > Hi, 
> > 
> > I've compiled a program using petsc with debugging enabled. 
> > Confirmation of the same: https://pastebin.com/aa0XDheD (check that the 
> > loaded module is indeed petsc+debug, then compile)
> > 
> > When I run it in a debugger, I get the error as shown below: 
> > https://pastebin.com/GJtB2Ghz
> > 
> > What am I missing?
> > 
> > Thank You,
> > Sajid Ali
> > Applied Physics
> > Northwestern University
> 



Re: [petsc-users] Installation error on macOS Mojave using GNU compiler

2019-01-03 Thread Balay, Satish via petsc-users
On Thu, 3 Jan 2019, Matthew Knepley via petsc-users wrote:

> On Thu, Jan 3, 2019 at 7:02 PM Danyang Su via petsc-users <petsc-users@mcs.anl.gov> wrote:
> 
> > Hi All,
> >
> > I am trying to install PETSc on macOS Mojave using GNU compiler.
> > First, I tried the debug version using the following configuration and it
> > works fine.
> >
> > ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran
> > --download-mpich --download-scalapack --download-parmetis --download-metis
> > --download-ptscotch --download-fblaslapack --download-hypre
> > --download-superlu_dist --download-hdf5=yes --download-ctetgen
> >
> > After testing debug version, I reconfigured PETSc with optimization turned
> > on using the following configuration. However, I got error during this step.
> >
> 
> Your optimization flags are not right because the compiler is producing
> AVX-512, but your linker cannot handle it. However, it looks like
> it might be that your Fortran compiler can't handle it. Do you need
> Fortran? If not, turn it off (configure with --with-fc=0) and try again.

Or use 'FOPTFLAGS=-O3' [if it's indeed the Fortran sources causing grief]

If '-march=native -mtune=native' gives compiler/linker errors - don't use them.
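
Concretely, a reconfigure that keeps the aggressive flags for C/C++ but uses a plain -O3 for Fortran could look like this (a sketch based on the option list quoted below; drop -march/-mtune everywhere if the errors persist):

./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran \
  --with-cxx-dialect=C++11 --download-mpich --download-scalapack \
  --download-parmetis --download-metis --download-ptscotch \
  --download-fblaslapack --download-hypre --download-superlu_dist \
  --download-hdf5=yes --download-ctetgen --with-debugging=0 \
  COPTFLAGS="-O3 -march=native -mtune=native" \
  CXXOPTFLAGS="-O3 -march=native -mtune=native" \
  FOPTFLAGS="-O3"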

Satish

> 
>   Thanks,
> 
> Matt
> 
> 
> > ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran
> > --with-cxx-dialect=C++11 --download-mpich --download-scalapack
> > --download-parmetis --download-metis --download-ptscotch
> > --download-fblaslapack --download-hypre --download-superlu_dist
> > --download-hdf5=yes --download-ctetgen --with-debugging=0 COPTFLAGS="-O3
> > -march=native -mtune=native" CXXOPTFLAGS="-O3 -march=native -mtune=native"
> > FOPTFLAGS="-O3 -march=native -mtune=native"
> >
> > The error information is
> >
> > ctoolchain/usr/bin/ranlib: file: .libs/libmpl.a(mpl_dbg.o) has no symbols
> > /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib:
> > file: .libs/libmpl.a(mpl_dbg.o) has no symbols
> > /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib:
> > file: .libs/libmpl.a(mpl_dbg.o) has no symbols
> > /var/folders/jm/wcm4mv8s3v1gqz383tcf_4c0gp/T//ccrlfFuo.s:14:2: error:
> > instruction requires: AVX-512 ISA AVX-512 VL ISA
> > vmovdqu64   (%rdi), %xmm0
> > ^
> > make[2]: *** [src/binding/fortran/use_mpi/mpi_constants.mod-stamp] Error 1
> > make[2]: *** Waiting for unfinished jobs
> > make[1]: *** [all-recursive] Error 1
> > make: *** [all] Error 2
> >
> >
> > Thanks,
> >
> > Danyang
> >
> 
> 
> 



Re: [petsc-users] Configure error after upgrading GCC

2018-12-10 Thread Balay, Satish via petsc-users
The fix for the petsc build error with openmpi-4.0.0 is in the branch
'barry/remove-mpi-handler-function/maint'.

You can always get petsc to build mpich or openmpi - with --download-mpich or 
--download-openmpi
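
For example, letting PETSc build and use its own OpenMPI could look roughly like this (a sketch that keeps the other options from the original report quoted later in this thread; the ARCH name is just a label):

./configure --download-openmpi --download-hypre=1 --with-x=0 \
  --with-debugging=1 --PETSC_ARCH=linux-dbg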

Satish

On Mon, 10 Dec 2018, Amneet Bhalla via petsc-users wrote:

> That still caused the same problem.
> 
> But I have resolved the issue. I removed system mpich and built a local
> version of openmpi. That made it work. It appears that apt-get install
> mpich installs default GCC-5 headers even when I made GCC-7 the default
> compiler.
> 
> By the way, I built openmpi 4.0.0 and PETSc was not compatible with it:
> some API errors occurred. So I installed openmpi 1.10 which I know works
> with PETSc. Just want to confirm that it is indeed the case.
> 
> On Mon, Dec 10, 2018 at 6:12 AM Balay, Satish  wrote:
> 
> > > Configure Options: --configModules=PETSc.Configure
> > --optionsModule=config.compilerOptions --CC=mpicc --FC=mpif90
> > --with-debugging=1 --download-hypre=1 --with-x=0 --PETSC_ARCH=linux-dbg
> >
> > If specifying compilers - also specify a C++ compiler.
> >
> > --CC=mpicc --FC=mpif90  --CXX=mpicxx
> >
> > Satish
> >
> >
> > On Sun, 9 Dec 2018, Amneet Bhalla via petsc-users wrote:
> >
> > > Hi Folks,
> > >
> > > I upgraded GCC (GCC-5 to GCC-7) on Ubuntu following the first suggestion
> > > given here:
> > > https://gist.github.com/application2000/73fd6f4bf1be6600a2cf9f56315a2d91
> > >
> > > I then removed system mpich (which was using GCC-5) and reinstalled it
> > > using Ubuntu package manager. However, now I am getting the following
> > > configure error with PETSc
> > >
> > >
> > > ===============================================================================
> > > TESTING: CxxMPICheck from config.packages.MPI(config/BuildSystem/config/packages/MPI.py:381)
> > > *******************************************************************************
> > >          UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details):
> > > -------------------------------------------------------------------------------
> > > C++ error! mpi.h could not be located at: []
> > > *******************************************************************************
> > >
> > >
> > >
> > > I am attaching the configure.log as well. Any hints on what I should be
> > > doing?
> > >
> > > Thanks,
> > >
> >
> >
> 
> 


