Re: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin

2020-01-17 Thread Smith, Barry F. via petsc-users


   I am working on it. It requires a large number of fixes; I hope to have it 
working by tonight.

   Barry


> On Jan 17, 2020, at 2:40 AM, Дмитрий Мельничук 
>  wrote:
> 
> Thank you for your replies.
> 
> Tried to configure the committed version of PETSc from Satish Balay's branch 
> (balay/fix-ftn-i8/maint) and faced the same error when running the test 
> example ex5f:
>  
> 
> call SNESCreate(PETSC_COMM_WORLD,snes,ierr)
>  
> Error: Type mismatch in argument «z» at (1); passed INTEGER(4) to INTEGER(8)
>  
> 
> At the moment, some subroutines (such as PetscInitialize, PetscFinalize, 
> MatSetValue, VecSetValue) work with the correct size of the variable ierr 
> defined as PetscErrorCode, and some do not.
> The following subroutines still require ierr to be of type INTEGER(8) (a 
> sketch of the mismatch follows the list):
>  
> 
> VecGetSubVector, VecAssemblyBegin, VecAssemblyEnd, VecScatterBegin, 
> VecScatterEnd, VecScatterDestroy, VecCreateMPI, VecDuplicate, VecZeroEntries, 
> VecAYPX, VecWAXPY,
> MatMult, MatDestroy, MatAssemblyBegin, MatAssemblyEnd, MatZeroEntries, 
> MatCreateSubMatrix, MatScale, MatDiagonalSet, MatGetDiagonal, MatDuplicate, 
> MatSetSizes, MatSetFromOptions
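> 
> For illustration, a minimal sketch of the mismatch (a hypothetical fragment; 
> v stands for a previously created Vec):
> 
>     PetscErrorCode :: ierr    ! 4-byte; rejected by these interfaces
>     PetscInt       :: ierr8   ! 8-byte under --with-64-bit-indices
>     call VecAssemblyBegin(v, ierr)   ! Error: passed INTEGER(4) to INTEGER(8)
>     call VecAssemblyBegin(v, ierr8)  ! compiles, though conceptually wrong type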
> 
> Unfortunately, I'm not sure if this is the only issue that occurs when 
> switching to the 64-bit version of PETSc.
> I can set the sizes of the ierr variables so that the solver compiles 
> successfully, but then I get the following error when solving the linear 
> algebra system with the KSPSolve subroutine:
>  
>  
> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, 
> probably memory access out of range
>  
> Because the solver works properly with the 32-bit version of PETSc, I suppose 
> that the cause of the errors (with the 64-bit version) is an inappropriate 
> variable size somewhere.
> I compiled PETSc with the flags --with-64-bit-indices and -fdefault-integer-8. 
> I also changed MPI_Integer to MPI_Integer8:
> MPI_Bcast(npart,nnds,MPI_Integer8,0,MPI_Comm_World,ierr).
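> 
> A hedged sketch of the alternative that avoids -fdefault-integer-8 (build with 
> --with-64-bit-indices only, so MPI counts stay 4-byte; npart, nnds, and ierr 
> as in the call above, and nnds is assumed to fit in 32 bits):
> 
>     PetscInt    :: nnds       ! 8-byte index with --with-64-bit-indices
>     PetscMPIInt :: n          ! 4-byte, matches MPI_Integer
>     n = int(nnds, kind=4)     ! explicit narrowing
>     call MPI_Bcast(npart, n, MPI_Integer, 0, MPI_Comm_World, ierr)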
>  
> I am probably missing something else.
> 
> Kind regards,
> Dmitry Melnichuk
>  
> 16.01.2020, 01:26, "Balay, Satish" :
> I have some changes (incomplete) here -
> 
> my hack to bfort.
> 
> diff --git a/src/bfort/bfort.c b/src/bfort/bfort.c
> index 0efe900..31ff154 100644
> --- a/src/bfort/bfort.c
> +++ b/src/bfort/bfort.c
> @@ -1654,7 +1654,7 @@ void PrintDefinition( FILE *fout, int is_function, char *name, int nstrings,
> 
>      /* Add a "decl/result(name) for functions */
>      if (useFerr) {
> -        OutputFortranToken( fout, 7, "integer" );
> +        OutputFortranToken( fout, 7, "PetscErrorCode" );
>          OutputFortranToken( fout, 1, errArgNameParm);
>      } else if (is_function) {
>          OutputFortranToken( fout, 7, ArgToFortran( rt->name ) );
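> 
> A sketch of the kind of generated interface this changes (hypothetical; the 
> actual ftn-auto output differs in detail):
> 
>     subroutine VecDuplicate(v, newv, z)
>       Vec :: v, newv
>       ! before the patch: "integer :: z", which -fdefault-integer-8
>       ! widens to INTEGER(8); after the patch:
>       PetscErrorCode :: z
>     end subroutine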
> 
> 
> And my changes to petsc are on branch balay/fix-ftn-i8/maint
> 
> Satish
> 
> On Wed, 15 Jan 2020, Smith, Barry F. via petsc-users wrote:
>  
> 
> 
>    Working on it now; may be doable
> 
> 
> 
>  > On Jan 15, 2020, at 11:55 AM, Matthew Knepley  wrote:
>  >
>  > On Wed, Jan 15, 2020 at 10:26 AM Дмитрий Мельничук 
>  wrote:
>  > > And I'm not sure why you are having to use PetscInt for ierr. All PETSc 
> routines should be using 'PetscErrorCode' for ierr
>  >
>  > If I define ierr as PetscErrorCode for all subroutines given below
>  >
>  > call VecDuplicate(Vec_U,Vec_Um,ierr)
>  > call VecCopy(Vec_U,Vec_Um,ierr)
>  > call VecGetLocalSize(Vec_U,j,ierr)
>  > call VecGetOwnershipRange(Vec_U,j1,j2,ierr)
>  >
>  > then errors occur with the first three subroutines:
>  > Error: Type mismatch in argument «z» at (1); passed INTEGER(4) to 
> INTEGER(8).
>  >
>  > Barry,
>  >
>  > It looks like the ftn-auto interfaces are using 'integer' for the error 
> code, whereas the ftn-custom ones use PetscErrorCode.
>  > Could we make the generated ones use PetscErrorCode?
>  >
>  > Thanks,
>  >
>  > Matt
>  >
>  > Therefore I was forced to define ierr as PetscInt for the VecDuplicate, 
> VecCopy, and VecGetLocalSize subroutines to fix these errors.
>  > Why some subroutines use an 8-byte integer type for ierr (PetscInt) while 
> others use a 4-byte integer type (PetscErrorCode) remains a mystery to 
> me.
>  >
>  > > What version of PETSc are you using?
>  >
>  > version 3.12.2
>  >
>  > > Are you seeing this issue with a PETSc example?
>  >
>  > I will check it tomorrow and let you know.
>  >
>  > Kind regards,
>  > Dmitry Melnichuk
>  >
>  >
>  >
>  > 15.01.2020, 17:14, "Balay, Satish" :
>  > -fdefault-integer-8 is likely to break things [esp. with MPI, where 
> 'integer' is used everywhere, e.g. for MPI_Comm, so the MPI include files 
> become incompatible with the MPI library under -fdefault-integer-8.]
>  >
>  > And I'm not sure why you are having to use PetscInt for ierr. All PETSc 
> routines should be using 'PetscErrorCode' for ierr
>  >
>  > What version of PETSc are you using? Are you seeing this issue with a 
> PETSc example?
>  >
>  > Satish
>  >
>  > On Wed, 15 Jan 2020, Дмитрий Мельничук wrote:
>  >
>  > Hello all!
>  > At present I need to compile a solver called Defmod 
>  > (https://bitbucket.org/stali/defmod/wiki/Home), which is written in 
>  > Fortran 95. Defmod uses PETSc for solving linear algebra systems.
>  > Compiling the solver with the 32-bit version of PETSc does not cause any 
>  > problems, but compiling it with the 64-bit version produces an error with 
>  > the size of the ierr PETSc variable.
>  >
>  > 1. For example, consider the following statements written in Fortran:
>  >
>  > PetscErrorCode :: ierr_m
>  > PetscInt :: ierr
>  > ...
>  > ...
>  > call
> 

Re: [petsc-users] checking max iteration reached

2020-01-17 Thread Matthew Knepley
You want
https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPGetConvergedReason.html
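
A minimal Fortran sketch of the check (hedged; ksp, b, x, and ierr are assumed
to come from the calling code):

    KSPConvergedReason :: reason
    call KSPSolve(ksp, b, x, ierr)
    call KSPGetConvergedReason(ksp, reason, ierr)
    if (reason == KSP_DIVERGED_ITS) then
       ! the solve stopped because the maximum iteration count was reached
    end if

KSPGetConvergedReason returns a negative value on divergence; KSP_DIVERGED_ITS
is the one that specifically indicates the iteration limit.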

  Thanks,

Matt

On Fri, Jan 17, 2020 at 1:48 PM Sam Guo  wrote:

> Dear PETSc dev team,
>    How can I check whether the max iterations have been reached? I notice
> there is PETSC_ERR_NOT_CONVERGED, but I am not sure whether this error is
> issued for max iterations reached or not.
>    If yes, how can I tell that the max iterations were reached, since this
> error can be issued for many other reasons?
>    If not, should I check the number of iterations myself?
>
> Thanks,
> Sam
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


[petsc-users] checking max iteration reached

2020-01-17 Thread Sam Guo
Dear PETSc dev team,
   How can I check whether the max iterations have been reached? I notice
there is PETSC_ERR_NOT_CONVERGED, but I am not sure whether this error is
issued for max iterations reached or not.
   If yes, how can I tell that the max iterations were reached, since this
error can be issued for many other reasons?
   If not, should I check the number of iterations myself?

Thanks,
Sam


Re: [petsc-users] use superlu and hypre's gpu features through PETSc

2020-01-17 Thread Mark Adams
Stefano ported hypre to SUMMIT to use CUDA in
branch stefanozampini/hypre-cuda-rebased.

It was fragile, and performance was poor.

On Thu, Jan 16, 2020 at 9:44 PM Smith, Barry F. via petsc-users <
petsc-users@mcs.anl.gov> wrote:

>
>    That is superlu_dist and hypre.
>
>    Yes, but both backends are rather primitive and will be a bit of a
> struggle to use.
>
>    For superlu_dist you need to get the branch
> barry/fix-superlu_dist-py-for-gpus and rebase it against master.
>
>    I only recommend trying them if you are adventuresome. Note that
> PETSc's GAMG can also utilize the GPU.
>
>    Barry
>
>
> > On Jan 16, 2020, at 4:31 PM, Xiangdong  wrote:
> >
> > Dear Developers,
> >
> > From the online documentation, both superlu and hypre have some gpu
> functionalities. Can we use these gpu features through PETSc's interface?
> >
> > Thank you.
> >
> > Best,
> > Xiangdong
>
>


Re: [petsc-users] Cross compilation of PETSc using MXE

2020-01-17 Thread Matthew Knepley
On Fri, Jan 17, 2020 at 7:00 AM Lars Odsæter  wrote:

> Dear PETSc users,
>
> First, thanks to the developers for your great effort with the PETSc
> library, which I have benefited from several times.
>
> I want to share with you that I recently was able to cross compile PETSc
> with the MXE (mxe.cc) cross compiler, and then link it into an application
> (also built with the mxe compiler) to produce an executable that I
> successfully ran on my Windows computer. In doing this I realized that
> there is very little documentation of this on the web, so maybe others
> could benefit from my approach. It contains a few hacks that might be
> solved more elegantly, but that is probably out of my range.
>
> First, I installed the mxe cross compiler following the tutorial (
> mxe.cc/#tutorial):
>
> git clone https://github.com/mxe/mxe.git
> make cc
> make blas lapack
> export PATH=~/code/mxe/usr/bin:$PATH
>
> Then I compiled PETSc (3.11.3 tarball):
>
> wget
> http://ftp.mcs.anl.gov/pub/petsc/release-snapshots/petsc-3.11.3.tar.gz
> gunzip -c petsc-3.11.3.tar.gz | tar -xof -
> cd petsc-3.11.3/
> ./configure PETSC_ARCH=arch-mxe-static \
>   --with-mpi=0 --host=i686-w64-mingw32.static \
>   --enable-static --disable-shared \
>   --with-cc=i686-w64-mingw32.static-gcc \
>   --with-cxx=i686-w64-mingw32.static-g++ \
>   --with-fc=i686-w64-mingw32.static-gfortran \
>   --with-ld=i686-w64-mingw32.static-ld \
>   --with-ar=i686-w64-mingw32.static-ar \
>   --with-pkg-config=i686-w64-mingw32.static-pkg-config \
>   --with-batch --known-64-bit-blas-indices
>
> Next, I did the reconfigure step that was explained in the output from the
> call to configure:
>
> * Copy 'conftest-arch-mxe-static' to your Windows machine
> * Rename it with extension '.exe'
> * Run the application in Windows. This generates
> 'reconfigure-arch-mxe-static.py'
> * Copy 'reconfigure-arch-mxe-static.py' back to the Linux machine
> * Run the python script:
>
> python reconfigure-arch-mxe-static.py
>
> Now, 'make all' failed to compile, but I applied two hacks to work around it:
>
> 1) In '~/code/petsc-3.11.3/arch-mxe-static/include/petscconf.h', add the
> following lines:
> #ifndef PETSC_HAVE_DIRECT_H
> #define PETSC_HAVE_DIRECT_H 1
> #endif
>
> 2) In ~/code/petsc-3.11.3/src/sys/error/fp.c, comment out lines 405-406:
> // #elif defined PETSC_HAVE_XMMINTRIN_H
>
> // _MM_SET_EXCEPTION_MASK(_MM_MASK_INEXACT | _MM_MASK_UNDERFLOW);
>
> After this 'make all' ran successfully.
> I was finally able to compile my code (linking PETSc) following step 5 of
> the mxe tutorial: mxe.cc/#tutorial
> Then, simply copy to Windows and double-click.
>

Great! It sounds like we can make a couple of fixes that make this easier:

1) Name the batch executable .exe for this

2) Find direct.h correctly

3) Do not find xmmintrin.h

Can you send me your configure.log so I can see what went wrong with the
header location?

  Thanks,

 Matt


> Best regards,
>
> Lars Hov Odsæter
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


[petsc-users] Cross compilation of PETSc using MXE

2020-01-17 Thread Lars Odsæter
Dear PETSc users,

First, thanks to the developers for your great effort with the PETSc library, 
which I have benefited from several times.

I want to share with you that I recently was able to cross compile PETSc with 
the MXE (mxe.cc) cross compiler, and then link it into an application (also 
built with the mxe compiler) to produce an executable that I successfully ran 
on my Windows computer. In doing this I realized that there is very little 
documentation of this on the web, so maybe others could benefit from my 
approach. It contains a few hacks that might be solved more elegantly, but that 
is probably out of my range.

First, I installed the mxe cross compiler following the tutorial 
(mxe.cc/#tutorial):

git clone https://github.com/mxe/mxe.git
make cc
make blas lapack
export PATH=~/code/mxe/usr/bin:$PATH

Then I compiled PETSc (3.11.3 tarball):

wget http://ftp.mcs.anl.gov/pub/petsc/release-snapshots/petsc-3.11.3.tar.gz
gunzip -c petsc-3.11.3.tar.gz | tar -xof -
cd petsc-3.11.3/
./configure PETSC_ARCH=arch-mxe-static \
  --with-mpi=0 --host=i686-w64-mingw32.static \
  --enable-static --disable-shared \
  --with-cc=i686-w64-mingw32.static-gcc \
  --with-cxx=i686-w64-mingw32.static-g++ \
  --with-fc=i686-w64-mingw32.static-gfortran \
  --with-ld=i686-w64-mingw32.static-ld \
  --with-ar=i686-w64-mingw32.static-ar \
  --with-pkg-config=i686-w64-mingw32.static-pkg-config \
  --with-batch --known-64-bit-blas-indices

Next, I did the reconfigure step that was explained in the output from the call 
to configure:

* Copy 'conftest-arch-mxe-static' to your Windows machine
* Rename it with extension '.exe'
* Run the application in Windows. This generates 
'reconfigure-arch-mxe-static.py'
* Copy 'reconfigure-arch-mxe-static.py' back to the Linux machine
* Run the python script:

python reconfigure-arch-mxe-static.py

Now, 'make all' failed to compile, but I applied two hacks to work around it:

1) In '~/code/petsc-3.11.3/arch-mxe-static/include/petscconf.h', add the 
following lines:
#ifndef PETSC_HAVE_DIRECT_H
#define PETSC_HAVE_DIRECT_H 1
#endif

2) In ~/code/petsc-3.11.3/src/sys/error/fp.c, comment out lines 405-406:
// #elif defined PETSC_HAVE_XMMINTRIN_H
// _MM_SET_EXCEPTION_MASK(_MM_MASK_INEXACT | _MM_MASK_UNDERFLOW);

After this 'make all' ran successfully.
I was finally able to compile my code (linking PETSc) following step 5 of the 
mxe tutorial: mxe.cc/#tutorial
Then, simply copy to Windows and double-click.


Best regards,

Lars Hov Odsæter


