> On Jan 21, 2020, at 8:09 AM, Balay, Satish <[email protected]> wrote:
>
> I would suggest installing a regular 32-bit-int blas/lapack, and then using
> it with the --with-blaslapack-lib option.
>
> [we don't know what -fdefault-integer-8 does with --download-fblaslapack -
> whether it really creates a --known-64-bit-blas-indices variant of
> blas/lapack or not]
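>
> e.g. something like the following, where the library paths are only an
> illustration (point them at wherever your 32-bit-int BLAS/LAPACK lives):
>
>   ./configure --with-blaslapack-lib="[/usr/lib/liblapack.a,/usr/lib/libblas.a]" ...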
Satish,
The intention is that package.py strips out these options before passing
them to the external packages, but it is possible I made a mistake and it
does not strip them out properly.
Barry
>
> Satish
>
> On Tue, 21 Jan 2020, Smith, Barry F. via petsc-users wrote:
>
>>
>> I would avoid OpenBLAS; it just introduces one more variable that could
>> cause problems.
>>
>> PetscErrorCode is ALWAYS 32 bit, PetscInt becomes 64 bit with
>> --with-64-bit-indices, PetscMPIInt is ALWAYS 32 bit, and PetscBLASInt is
>> usually 32 bit unless you build with a special BLAS that supports 64 bit
>> indices.
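>>
>> For example, a minimal sketch of the corresponding Fortran declarations
>> (variable names are illustrative):
>>
>>    PetscErrorCode :: ierr   ! always 32 bit
>>    PetscInt       :: n      ! 64 bit when configured --with-64-bit-indices
>>    PetscMPIInt    :: rank   ! always 32 bit; safe to pass to MPI
>>    PetscBLASInt   :: lda    ! matches the BLAS/LAPACK the build uses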
>>
>> In theory ex5f should be fine; we test it all the time with all
>> possible values of the integer. Please redo the ./configure with
>> --with-64-bit-indices --download-fblaslapack and send the configure.log;
>> this provides the most useful information on the decisions configure has
>> made.
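>>
>> That is, your earlier configure line with those two options, roughly:
>>
>>    ./configure --with-cc=x86_64-w64-mingw32-gcc ... \
>>        --with-64-bit-indices --download-fblaslapack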
>>
>> Barry
>>
>>
>>> On Jan 21, 2020, at 4:28 AM, Дмитрий Мельничук
>>> <[email protected]> wrote:
>>>
>>>> First you need to figure out what is triggering:
>>>
>>>> C:/MPI/Bin/mpiexec.exe: error while loading shared libraries: ?: cannot
>>>> open shared object file: No such file or directory
>>>
>>>> Googling it finds all kinds of suggestions for Linux. But Windows? Maybe
>>>> the debugger will help.
>>>
>>>> Second
>>>> VecNorm_Seq line 221
>>>> /cygdrive/d/Computational_geomechanics/installation/petsc-barry/src/vec/vec/impls/seq/bvec2.c
>>>
>>>
>>>> Debugger is best to find out what is triggering this. Since it is the C
>>>> side of things it would be odd that the Fortran change affects it.
>>>
>>>> Barry
>>>
>>>
>>> I am in the process of tracking down the causes of these errors.
>>>
>>> I am inclined to think that BLAS still has some influence on what is
>>> happening, because testing the 32-bit version of PETSc gives this weird
>>> error with mpiexec.exe, while the Fortran example ex5f completes
>>> successfully.
>>>
>>> I should mention that my solver compiled with the 64-bit version of PETSc
>>> failed with a Segmentation Violation error (the same as ex5f) when calling
>>> KSPSolve(Krylov,Vec_F,Vec_U,ierr).
>>> During execution, KSPSolve calls VecNorm_Seq in bvec2.c. VecNorm_Seq uses
>>> several integer types: PetscErrorCode, PetscInt, PetscBLASInt.
>>> I suspect that PetscBLASInt may conflict with PetscInt.
>>> I also noted that the execution of KSPSolve() does not even start, so the
>>> arguments (Krylov,Vec_F,Vec_U,ierr) cannot be passed to KSPSolve().
>>> (I inserted a print statement at the top of KSPSolve and saw no output.)
>>>
>>>
>>> So I tried to configure PETSc with --download-fblaslapack
>>> --with-64-bit-blas-indices, but got the error:
>>>
>>> fblaslapack does not support -with-64-bit-blas-indices
>>>
>>> Switching to the flags --download-openblas --with-64-bit-blas-indices was
>>> unsuccessful too, because of this error:
>>>
>>> Error during download/extract/detection of OPENBLAS:
>>> Unable to download openblas
>>> Could not execute "['git clone https://github.com/xianyi/OpenBLAS.git
>>> /cygdrive/d/Computational_geomechanics/installation/petsc-barry/arch-mswin-c-debug/externalpackages/git.openblas']":
>>> fatal: destination path
>>> '/cygdrive/d/Computational_geomechanics/installation/petsc-barry/arch-mswin-c-debug/externalpackages/git.openblas'
>>> already exists and is not an empty directory.
>>> Unable to download package OPENBLAS from:
>>> git://https://github.com/xianyi/OpenBLAS.git
>>> * If URL specified manually - perhaps there is a typo?
>>> * If your network is disconnected - please reconnect and rerun ./configure
>>> * Or perhaps you have a firewall blocking the download
>>> * You can run with --with-packages-download-dir=/adirectory and ./configure
>>> will instruct you what packages to download manually
>>> * or you can download the above URL manually, to /yourselectedlocation
>>> and use the configure option:
>>> --download-openblas=/yourselectedlocation
>>> Could not locate downloaded package OPENBLAS in
>>> /cygdrive/d/Computational_geomechanics/installation/petsc-barry/arch-mswin-c-debug/externalpackages
>>>
>>> But I checked the last location (.../externalpackages) and saw that
>>> OpenBLAS had been downloaded and unzipped.
>>>
>>>
>>>
>>> Kind regards,
>>> Dmitry Melnichuk
>>>
>>>
>>> 20.01.2020, 16:32, "Smith, Barry F." <[email protected]>:
>>>
>>> First you need to figure out what is triggering:
>>>
>>> C:/MPI/Bin/mpiexec.exe: error while loading shared libraries: ?: cannot
>>> open shared object file: No such file or directory
>>>
>>> Googling it finds all kinds of suggestions for Linux. But Windows? Maybe
>>> the debugger will help.
>>>
>>> Second
>>>
>>> VecNorm_Seq line 221
>>> /cygdrive/d/Computational_geomechanics/installation/petsc-barry/src/vec/vec/impls/seq/bvec2.c
>>>
>>> Debugger is best to find out what is triggering this. Since it is the C
>>> side of things it would be odd that the Fortran change affects it.
>>>
>>> Barry
>>>
>>> On Jan 20, 2020, at 4:43 AM, Дмитрий Мельничук
>>> <[email protected]> wrote:
>>>
>>> Thank you so much for your assistance!
>>>
>>> As far as I have been able to find out, the "Type mismatch in argument
>>> ‘ierr’" errors have been successfully fixed.
>>> But running the command "make PETSC_DIR=/cygdrive/d/...
>>> PETSC_ARCH=arch-mswin-c-debug check" leads to the appearance of a
>>> Segmentation Violation error.
>>>
>>> I compiled PETSc with Microsoft MPI v10.
>>> Does it make sense to compile PETSc with another MPI implementation (such
>>> as MPICH) in order to resolve the issue?
>>>
>>> Error message:
>>> Running test examples to verify correct installation
>>> Using
>>> PETSC_DIR=/cygdrive/d/Computational_geomechanics/installation/petsc-barry
>>> and PETSC_ARCH=arch-mswin-c-debug
>>> Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 MPI
>>> process
>>> See http://www.mcs.anl.gov/petsc/documentation/faq.html
>>> C:/MPI/Bin/mpiexec.exe: error while loading shared libraries: ?: cannot
>>> open shared object file: No such file or directory
>>> Possible error running C/C++ src/snes/examples/tutorials/ex19 with 2 MPI
>>> processes
>>> See http://www.mcs.anl.gov/petsc/documentation/faq.html
>>> C:/MPI/Bin/mpiexec.exe: error while loading shared libraries: ?: cannot
>>> open shared object file: No such file or directory
>>> Possible error running Fortran example src/snes/examples/tutorials/ex5f
>>> with 1 MPI process
>>> See http://www.mcs.anl.gov/petsc/documentation/faq.html
>>> [0]PETSC ERROR:
>>> ------------------------------------------------------------------------
>>> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
>>> probably memory access out of range
>>> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
>>> [0]PETSC ERROR: or see
>>> https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
>>> [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X
>>> to find memory corruption errors
>>> [0]PETSC ERROR: likely location of problem given in stack below
>>> [0]PETSC ERROR: --------------------- Stack Frames
>>> ------------------------------------
>>> [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
>>> [0]PETSC ERROR: INSTEAD the line number of the start of the function
>>> [0]PETSC ERROR: is given.
>>> [0]PETSC ERROR: [0] VecNorm_Seq line 221
>>> /cygdrive/d/Computational_geomechanics/installation/petsc-barry/src/vec/vec/impls/seq/bvec2.c
>>> [0]PETSC ERROR: [0] VecNorm line 213
>>> /cygdrive/d/Computational_geomechanics/installation/petsc-barry/src/vec/vec/interface/rvector.c
>>> [0]PETSC ERROR: [0] SNESSolve_NEWTONLS line 144
>>> /cygdrive/d/Computational_geomechanics/installation/petsc-barry/src/snes/impls/ls/ls.c
>>> [0]PETSC ERROR: [0] SNESSolve line 4375
>>> /cygdrive/d/Computational_geomechanics/installation/petsc-barry/src/snes/interface/snes.c
>>> [0]PETSC ERROR: --------------------- Error Message
>>> --------------------------------------------------------------
>>> [0]PETSC ERROR: Signal received
>>> [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html
>>> for trouble shooting.
>>> [0]PETSC ERROR: Petsc Development GIT revision: unknown GIT Date: unknown
>>> [0]PETSC ERROR: ./ex5f on a arch-mswin-c-debug named DESKTOP-R88IMOB by
>>> useruser Mon Jan 20 09:18:34 2020
>>> [0]PETSC ERROR: Configure options --with-cc=x86_64-w64-mingw32-gcc
>>> --with-cxx=x86_64-w64-mingw32-g++ --with-fc=x86_64-w64-mingw32-gfortran
>>> --with-mpi-include=/cygdrive/c/MPISDK/Include
>>> --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a
>>> --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes
>>> -CFLAGS=-O2 -CXXFLAGS=-O2 -FFLAGS="-O2 -static-libgfortran -static
>>> -lpthread -fno-range-check -fdefault-integer-8" --download-fblaslapack
>>> --with-shared-libraries=no --with-64-bit-indices --force
>>> [0]PETSC ERROR: #1 User provided function() line 0 in unknown file
>>>
>>> job aborted:
>>> [ranks] message
>>>
>>> [0] application aborted
>>> aborting MPI_COMM_WORLD (comm=0x44000000), error 50152059, comm rank 0
>>>
>>> ---- error analysis -----
>>>
>>> [0] on DESKTOP-R88IMOB
>>> ./ex5f aborted the job. abort code 50152059
>>>
>>> ---- error analysis -----
>>> Completed test examples
>>>
>>> Kind regards,
>>> Dmitry Melnichuk
>>>
>>> 19.01.2020, 07:47, "Smith, Barry F." <[email protected]>:
>>>
>>> Dmitry,
>>>
>>> I have completed and tested the branch
>>> barry/2020-01-15/support-default-integer-8; it is undergoing further
>>> testing now:
>>> https://gitlab.com/petsc/petsc/merge_requests/2456
>>>
>>> Please give it a try. Note that MPI has no support for integer promotion,
>>> so YOU must ensure that any MPI calls from Fortran pass 4-byte integers,
>>> not promoted 8-byte integers.
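>>>
>>> A minimal sketch of what that means in practice (variable names are
>>> illustrative, and this assumes mpif.h or "use mpi" is in scope):
>>>
>>> integer(kind=4) :: rank, mpierr   ! explicit 4-byte integers for MPI
>>> call MPI_Comm_rank(MPI_COMM_WORLD, rank, mpierr)
>>>
>>> PETSc's PetscMPIInt type exists for exactly this purpose.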
>>>
>>> I have tested it with recent versions of MPICH and OpenMPI; it is
>>> fragile at compile time and may fail to compile with different versions
>>> of MPI.
>>>
>>> Good luck,
>>>
>>> Barry
>>>
>>> I do not recommend this approach for integer promotion in Fortran. Just
>>> blindly promoting all integers can often lead to problems. I recommend
>>> using the kind mechanism of Fortran to ensure that each variable is the
>>> type you want; you can then recompile with different options to promote
>>> the kind-declared variables you wish. Of course this is more intrusive
>>> and requires changes to the Fortran code.
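>>>
>>> A minimal sketch of that kind-based approach (the module and kind names
>>> are illustrative, not part of PETSc):
>>>
>>> module app_kinds
>>> ! change this one parameter and recompile to promote exactly
>>> ! the variables declared with kind ip
>>> integer, parameter :: ip = selected_int_kind(18)   ! 8-byte integers
>>> ! integer, parameter :: ip = selected_int_kind(9)  ! 4-byte integers
>>> end module app_kinds
>>>
>>> ! elsewhere in the code:
>>> use app_kinds
>>> integer(ip) :: ncells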
>>>
>>>
>>> On Jan 15, 2020, at 7:00 AM, Дмитрий Мельничук
>>> <[email protected]> wrote:
>>>
>>> Hello all!
>>>
>>> At present I need to compile a solver called Defmod
>>> (https://bitbucket.org/stali/defmod/wiki/Home), which is written in
>>> Fortran 95.
>>> Defmod uses PETSc for solving linear algebra systems.
>>> Compiling the solver with the 32-bit version of PETSc does not cause any
>>> problems, but compiling it with the 64-bit version of PETSc produces an
>>> error related to the size of the PETSc ierr variable.
>>>
>>> 1. For example, consider the following statements written in Fortran:
>>>
>>>
>>> PetscErrorCode :: ierr_m
>>> PetscInt :: ierr
>>> ...
>>> ...
>>> call VecDuplicate(Vec_U,Vec_Um,ierr)
>>> call VecCopy(Vec_U,Vec_Um,ierr)
>>> call VecGetLocalSize(Vec_U,j,ierr)
>>> call VecGetOwnershipRange(Vec_U,j1,j2,ierr_m)
>>>
>>>
>>> As can be seen, the first three subroutines require ierr to be of size
>>> INTEGER(8), while the last subroutine (VecGetOwnershipRange) requires ierr
>>> to be of size INTEGER(4).
>>> Using the same integer kind for all of them gives an error:
>>>
>>> There is no specific subroutine for the generic ‘vecgetownershiprange’ at
>>> (1)
>>>
>>> 2. Another example is:
>>>
>>>
>>> call MatAssemblyBegin(Mat_K,Mat_Final_Assembly,ierr)
>>> CHKERRA(ierr)
>>> call MatAssemblyEnd(Mat_K,Mat_Final_Assembly,ierr)
>>>
>>>
>>> I am not able to define an appropriate size of ierr in CHKERRA(ierr). If I
>>> choose INTEGER(8), the error "Type mismatch in argument ‘ierr’ at (1);
>>> passed INTEGER(8) to INTEGER(4)" occurs.
>>> If I define ierr as INTEGER(4), the error "Type mismatch in argument
>>> ‘ierr’ at (1); passed INTEGER(4) to INTEGER(8)" appears.
>>>
>>>
>>> 3. If I change the sizes of the ierr variables as the error messages
>>> require, the compilation completes successfully, but an error occurs when
>>> calculating the RHS vector, with the following message:
>>>
>>> [0]PETSC ERROR: Out of range index value -4 cannot be negative
>>>
>>>
>>> Command to configure 32-bit version of PETSc under Windows 10 using Cygwin:
>>> ./configure --with-cc=x86_64-w64-mingw32-gcc
>>> --with-cxx=x86_64-w64-mingw32-g++ --with-fc=x86_64-w64-mingw32-gfortran
>>> --download-fblaslapack --with-mpi-include=/cygdrive/c/MPISDK/Include
>>> --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a
>>> --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes
>>> -CFLAGS='-O2' -CXXFLAGS='-O2' -FFLAGS='-O2 -static-libgfortran -static
>>> -lpthread -fno-range-check' --with-shared-libraries=no
>>>
>>> Command to configure 64-bit version of PETSc under Windows 10 using Cygwin:
>>> ./configure --with-cc=x86_64-w64-mingw32-gcc
>>> --with-cxx=x86_64-w64-mingw32-g++ --with-fc=x86_64-w64-mingw32-gfortran
>>> --download-fblaslapack --with-mpi-include=/cygdrive/c/MPISDK/Include
>>> --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a
>>> --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes
>>> -CFLAGS='-O2' -CXXFLAGS='-O2' -FFLAGS='-O2 -static-libgfortran -static
>>> -lpthread -fno-range-check -fdefault-integer-8' --with-shared-libraries=no
>>> --with-64-bit-indices --known-64-bit-blas-indices
>>>
>>>
>>> Kind regards,
>>> Dmitry Melnichuk
>>>
>>>
>>