Re: [petsc-users] PETSc initialization error

2020-06-29 Thread Junchao Zhang
On Mon, Jun 29, 2020 at 1:00 PM Sam Guo  wrote:

> Hi Junchao,
>    I'll test ex53. In the meantime, I use the following work-around:
> my program calls MPI initialize once for the entire program
> PetscInitialize once for the entire program
> SlepcInitialize once for the entire program (I think I can skip
> PetscInitialize above)
> calling slepc multiple times
> my program calls MPI finalize before ending the program
>
>    You can see I skip PetscFinalize/SlepcFinalize. I am uneasy about
> skipping them since I am not sure what the consequences are. Can you comment
> on it?
>
It should be fine. MPI_Finalize does not free objects created by MPI.  But
since you end your program after MPI_Finalize, there should be no memory
leaks. In general, one needs to call PetscFinalize/SlepcFinalize.
Try to get a minimal working example and then we can have a look.
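
For reference, the work-around described above amounts to roughly the sketch
below (illustrative only, not Sam's actual code; solve_with_slepc() is a
hypothetical placeholder for the repeated SLEPc calls, and error checking is
trimmed):

static char help[] = "Sketch: initialize once, call SLEPc repeatedly.\n";

#include <slepceps.h>   /* SlepcInitialize() also initializes PETSc */

int main(int argc, char **argv)
{
  PetscErrorCode ierr;
  int            i;

  MPI_Init(&argc, &argv);                           /* once, by the application */
  ierr = SlepcInitialize(&argc, &argv, NULL, help); if (ierr) return ierr;

  for (i = 0; i < 10; i++) {
    /* solve_with_slepc();   <- placeholder for the repeated SLEPc solves */
  }

  /* SlepcFinalize()/PetscFinalize() are skipped in this work-around; the
     recommended pattern is to call them here, before MPI_Finalize(). */
  return MPI_Finalize();
}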


>
> Thanks,
> Sam
>
>
>
> On Fri, Jun 26, 2020 at 6:58 PM Junchao Zhang 
> wrote:
>
>> Did the test included in that commit fail in your environment? You can
>> also change the test by adding calls to SlepcInitialize/SlepcFinalize
>> between PetscInitializeNoPointers/PetscFinalize as in my previous email.
>>
>> --Junchao Zhang
>>
>>
>> On Fri, Jun 26, 2020 at 5:54 PM Sam Guo  wrote:
>>
>>> Hi Junchao,
>>>If you are talking about this commit of yours
>>> https://gitlab.com/petsc/petsc/-/commit/f0463fa09df52ce43e7c5bf47a1c87df0c9e5cbb
>>>
>>> Recycle keyvals and fix bugs in MPI_Comm creation
>>>I think I got it. It fixes the serial one, but the parallel one is still
>>> crashing.
>>>
>>> Thanks,
>>> Sam
>>>
>>> On Fri, Jun 26, 2020 at 3:43 PM Sam Guo  wrote:
>>>
 Hi Junchao,
 I am not ready to upgrade petsc yet (due to the lengthy technical and
 legal approval process of our internal policy). Can you send me the diff
 file so I can apply it to petsc 3.11.3?

 Thanks,
 Sam

 On Fri, Jun 26, 2020 at 3:33 PM Junchao Zhang 
 wrote:

> Sam,
>   Please discard the original patch I sent you. A better fix is already
> in maint/master. A test is at src/sys/tests/ex53.c.
>   I modified that test at the end with
>
>   for (i=0; i<500; i++) {
> ierr = PetscInitializeNoPointers(argc,argv,NULL,help);if (ierr) return ierr;
> ierr = SlepcInitialize(&argc,&argv,NULL,help);if (ierr) return ierr;
> ierr = SlepcFinalize();if (ierr) return ierr;
> ierr = PetscFinalize();if (ierr) return ierr;
>   }
>
>
>  then I ran it with multiple MPI ranks and it ran correctly. So try
> your program with petsc master first. If that does not work, see if you can
> come up with a test example for us.
>
>  Thanks.
> --Junchao Zhang
>
>
> On Fri, Jun 26, 2020 at 3:37 PM Sam Guo  wrote:
>
>> One work around for me is to call PetscInitialize once for my entire
>> program and skip PetscFinalize (since I don't have a good place to call
>> PetscFinalize   before ending the program).
>>
>> On Fri, Jun 26, 2020 at 1:33 PM Sam Guo 
>> wrote:
>>
>>> I get the crash after calling Initialize/Finalize multiple times.
>>> Junchao fixed the bug for serial but parallel still crashes.
>>>
>>> On Fri, Jun 26, 2020 at 1:28 PM Barry Smith 
>>> wrote:
>>>

   Ah, so you get the crash the second time you call
 PetscInitialize()?  That is a problem because we do intend to support that
 capability (but you must call PetscFinalize() each time also).

   Barry


 On Jun 26, 2020, at 3:25 PM, Sam Guo  wrote:

 Hi Barry,
Thanks for the quick response.
I will call PetscInitialize once and skip the PetscFinalize for
 now to avoid the crash. The crash is actually in PetscInitialize, not
 PetscFinalize.

 Thanks,
 Sam

 On Fri, Jun 26, 2020 at 1:21 PM Barry Smith 
 wrote:

>
>   Sam,
>
>   You can skip PetscFinalize() so long as you only call
> PetscInitialize() once. It is not desirable in general to skip the 
> finalize
> because PETSc can't free all its data structures and you cannot see 
> the
> PETSc logging information with -log_view but in terms of the code 
> running
> correctly you do not need to call PetscFinalize.
>
>If your code crashes in PetscFinalize() please send the full
> error output and we can try to help you debug it.
>
>
>Barry
>
> On Jun 26, 2020, at 3:14 PM, Sam Guo 
> wrote:
>
> To clarify, we have an MPI wrapper (so we can switch to a different
> MPI at runtime). I compile petsc using our MPI wrapper.
> If I just call PETSc initialize once without calling finalize, it
> is ok. My question to you is: can I skip finalize?

Re: [petsc-users] PETSc and Windows 10

2020-06-29 Thread Pierre Jolivet


> On 29 Jun 2020, at 9:37 PM, Pierre Jolivet  wrote:
> 
> I do not give up easily on Windows problems:
> 1) that’s around 50% of our (FreeFEM) user-base (and I want them to use PETSc 
> and SLEPc, ofc…)
> 2) most people I work with from corporations just have Windows 
> laptops/desktops and I always recommend MSYS because it’s very lightweight 
> and you can pass .exe around
> 3) I’ve bothered enough Satish, Jed, and Matt on GitLab to take (at least 
> partially) the blame now when it doesn’t work on MSYS
> 
> That being said, the magic keyword is the added flag 
> FFLAGS="-fallow-invalid-boz" (see, I told you ./configure issues were easier 
> to deal with than the others).
> Here you’ll see that everything goes through just fine (sorry, it took me a 
> long time to post this because everything is slow on my VM):
> 1) http://jolivet.perso.enseeiht.fr/win10/configure.log
> 2) http://jolivet.perso.enseeiht.fr/win10/make.log (both steps #1 and #2 in
> the MSYS terminal, gcc/gfortran 10, MS-MPI; see screenshot)
> 3) http://jolivet.perso.enseeiht.fr/win10/ex2.txt (Command Prompt, 4 processes
> + MUMPS; I can send you the .exe if you want to try it on your machine)
> I just realized that I didn’t generate the Fortran bindings, but you can see I
> compiled MUMPS and ScaLAPACK, so that shouldn’t be a problem.
> Or if there is a problem, we will need to fix this in PETSc.
> 
> I’ll push this added flag to the FreeFEM repo

Sorry for the noise, but maybe it’s better to put this in PETSc ./configure,
like you did here, Satish:
https://gitlab.com/petsc/petsc/-/commit/2cd8068296b34e127f055bb32f556e3599f17523 ?
If gfortran 10 && MS-MPI, then FFLAGS += "-fallow-invalid-boz".
WDY(PETSc-)GT?

Thanks,
Pierre

> thanks for reminding me of the brokenness of gcc/gfortran 10 + MS-MPI.
> Here’s hoping this won’t affect PETSc ./configure with previous
> gcc/gfortran versions (unlikely, this option is apparently 13 years old:
> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=29471)
> 
> Let me know of the next hiccup, if any.
> Thanks,
> Pierre
> 
>> On 29 Jun 2020, at 8:09 PM, Paolo Lampitella wrote:
>> 
>> Dear Pierre,
>>  
>> thanks again for your time
>>  
>> I guess there is no way for me to use the toolchain you are using (I don’t 
>> remember having any choice on which version of MSYS or GCC I could install)
>>  
>> Paolo
>>  
>> Sent from Mail for Windows 10
>>  
>> From: Pierre Jolivet
>> Sent: Monday, June 29, 2020 20:01
>> To: Matthew Knepley
>> Cc: Paolo Lampitella; petsc-users
>> Subject: Re: [petsc-users] PETSc and Windows 10
>>  
>>  
>> 
>> 
>> On 29 Jun 2020, at 7:47 PM, Matthew Knepley wrote:
>>  
>> On Mon, Jun 29, 2020 at 1:35 PM Paolo Lampitella wrote:
>> Dear Pierre, sorry to bother you, but I already have some issues. What I did:
>>  
>> pacman -R mingw-w64-x86_64-python mingw-w64-x86_64-gdb (is gdb also 
>> troublesome?)
>> Followed points 6 and 7 at 
>> https://doc.freefem.org/introduction/installation.html#compilation-on-windows
>>  
>> 
>> I first got a warning on the configure at point 6, as –disable-hips is not 
>> recognized. Then, on make ‘petsc-slepc’ of point 7 (no SUDO=sudo flag was 
>> necessary) I got to this point:
>>  
>> tar xzf ../pkg/petsc-lite-3.13.0.tar.gz
>> patch -p1 < petsc-suitesparse.patch
>> patching file petsc-3.13.0/config/BuildSystem/config/packages/SuiteSparse.py
>> touch petsc-3.13.0/tag-tar
>> cd petsc-3.13.0 && ./configure MAKEFLAGS='' \
>> --prefix=/home/paolo/freefem/ff-petsc//r \
>> --with-debugging=0 COPTFLAGS='-O3 -mtune=generic' CXXOPTFLAGS='-O3 
>> -mtune=generic' FOPTFLAGS='-O3 -mtune=generic' --with-cxx-dialect=C++11 
>> --with-ssl=0 --with-x=0 --with-fortran-bindings=0 --with-shared-libraries=0 
>> --with-cc='gcc' --with-cxx='g++' --with-fc='gfortran' 
>> CXXFLAGS='-fno-stack-protector' CFLAGS='-fno-stack-protector' 
>> --with-scalar-type=real --with-mpi-lib='/c/Windows/System32/msmpi.dll' 
>> --with-mpi-include='/home/paolo/FreeFem-sources/3rdparty/include/msmpi' 
>> --with-mpiexec='/C/Program\ Files/Microsoft\ MPI/Bin/mpiexec' 
>> --with-blaslapack-include='' 
>> --with-blaslapack-lib='/mingw64/bin/libopenblas.dll' --download-scalapack 
>> --download-metis --download-ptscotch --download-mumps --download-hypre 
>> --download-parmetis --download-superlu --download-suitesparse 
>> 

Re: [petsc-users] PETSc and Windows 10

2020-06-29 Thread Pierre Jolivet
I do not give up easily on Windows problems:
1) that’s around 50% of our (FreeFEM) user-base (and I want them to use PETSc 
and SLEPc, ofc…)
2) most people I work with from corporations just have Windows laptops/desktops 
and I always recommend MSYS because it’s very lightweight and you can pass .exe 
around
3) I’ve bothered enough Satish, Jed, and Matt on GitLab to take (at least 
partially) the blame now when it doesn’t work on MSYS

That being said, the magic keyword is the added flag 
FFLAGS="-fallow-invalid-boz" (see, I told you ./configure issues were easier to 
deal with than the others).
Here you’ll see that everything goes through just fine (sorry, it took me a 
long time to post this because everything is slow on my VM):
1) http://jolivet.perso.enseeiht.fr/win10/configure.log
2) http://jolivet.perso.enseeiht.fr/win10/make.log (both steps #1 and #2 in the MSYS
terminal, gcc/gfortran 10, MS-MPI; see screenshot)
3) http://jolivet.perso.enseeiht.fr/win10/ex2.txt (Command Prompt, 4 processes +
MUMPS; I can send you the .exe if you want to try it on your machine)
I just realized that I didn’t generate the Fortran bindings, but you can see I 
compiled MUMPS and ScaLAPACK, so that shouldn’t be a problem.
Or if there is a problem, we will need to fix this in PETSc.
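
Concretely, the fix boils down to passing that flag to ./configure. A trimmed
sketch (illustrative only; just the compiler- and MPI-related options are shown,
with the paths taken from Paolo's full invocation quoted below):

./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran \
  FFLAGS="-fallow-invalid-boz" \
  --with-mpi-include=/home/paolo/FreeFem-sources/3rdparty/include/msmpi \
  --with-mpi-lib=/c/Windows/System32/msmpi.dll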

I’ll push this added flag to the FreeFEM repo, thanks for reminding me of the 
brokenness of gcc/gfortran 10 + MS-MPI.
Here’s hoping this won’t affect PETSc ./configure with previous
gcc/gfortran versions (unlikely, this option is apparently 13 years old:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=29471)

Let me know of the next hiccup, if any.
Thanks,
Pierre

> On 29 Jun 2020, at 8:09 PM, Paolo Lampitella  
> wrote:
> 
> Dear Pierre,
>  
> thanks again for your time
>  
> I guess there is no way for me to use the toolchain you are using (I don’t 
> remember having any choice on which version of MSYS or GCC I could install)
>  
> Paolo
>  
> Sent from Mail for Windows 10
>  
> From: Pierre Jolivet
> Sent: Monday, June 29, 2020 20:01
> To: Matthew Knepley
> Cc: Paolo Lampitella; petsc-users
> Subject: Re: [petsc-users] PETSc and Windows 10
>  
>  
> 
> 
> On 29 Jun 2020, at 7:47 PM, Matthew Knepley wrote:
>  
> On Mon, Jun 29, 2020 at 1:35 PM Paolo Lampitella wrote:
> Dear Pierre, sorry to bother you, but I already have some issues. What I did:
>  
> pacman -R mingw-w64-x86_64-python mingw-w64-x86_64-gdb (is gdb also 
> troublesome?)
> Followed points 6 and 7 at 
> https://doc.freefem.org/introduction/installation.html#compilation-on-windows 
> 
> I first got a warning on the configure at point 6, as –disable-hips is not 
> recognized. Then, on make ‘petsc-slepc’ of point 7 (no SUDO=sudo flag was 
> necessary) I got to this point:
>  
> tar xzf ../pkg/petsc-lite-3.13.0.tar.gz
> patch -p1 < petsc-suitesparse.patch
> patching file petsc-3.13.0/config/BuildSystem/config/packages/SuiteSparse.py
> touch petsc-3.13.0/tag-tar
> cd petsc-3.13.0 && ./configure MAKEFLAGS='' \
> --prefix=/home/paolo/freefem/ff-petsc//r \
> --with-debugging=0 COPTFLAGS='-O3 -mtune=generic' CXXOPTFLAGS='-O3 
> -mtune=generic' FOPTFLAGS='-O3 -mtune=generic' --with-cxx-dialect=C++11 
> --with-ssl=0 --with-x=0 --with-fortran-bindings=0 --with-shared-libraries=0 
> --with-cc='gcc' --with-cxx='g++' --with-fc='gfortran' 
> CXXFLAGS='-fno-stack-protector' CFLAGS='-fno-stack-protector' 
> --with-scalar-type=real --with-mpi-lib='/c/Windows/System32/msmpi.dll' 
> --with-mpi-include='/home/paolo/FreeFem-sources/3rdparty/include/msmpi' 
> --with-mpiexec='/C/Program\ Files/Microsoft\ MPI/Bin/mpiexec' 
> --with-blaslapack-include='' 
> --with-blaslapack-lib='/mingw64/bin/libopenblas.dll' --download-scalapack 
> --download-metis --download-ptscotch --download-mumps --download-hypre 
> --download-parmetis --download-superlu --download-suitesparse 
> --download-tetgen --download-slepc '--download-metis-cmake-arguments=-G "MSYS 
> Makefiles"' '--download-parmetis-cmake-arguments=-G "MSYS Makefiles"' 
> '--download-superlu-cmake-arguments=-G "MSYS Makefiles"' 
> '--download-hypre-configure-arguments=--build=x86_64-linux-gnu 
> --host=x86_64-linux-gnu' PETSC_ARCH=fr
> ===
>  Configuring PETSc to compile on your system
> ===
> TESTING: FortranMPICheck from 
> 

[petsc-users] R: PETSc and Windows 10

2020-06-29 Thread Paolo Lampitella
Dear Pierre,

thanks again for your time

I guess there is no way for me to use the toolchain you are using (I don’t 
remember having any choice on which version of MSYS or GCC I could install)

Paolo

Sent from Mail for Windows 10

From: Pierre Jolivet
Sent: Monday, June 29, 2020 20:01
To: Matthew Knepley
Cc: Paolo Lampitella; petsc-users
Subject: Re: [petsc-users] PETSc and Windows 10




On 29 Jun 2020, at 7:47 PM, Matthew Knepley wrote:

On Mon, Jun 29, 2020 at 1:35 PM Paolo Lampitella wrote:
Dear Pierre, sorry to bother you, but I already have some issues. What I did:


  *   pacman -R mingw-w64-x86_64-python mingw-w64-x86_64-gdb (is gdb also 
troublesome?)
  *   Followed points 6 and 7 at 
https://doc.freefem.org/introduction/installation.html#compilation-on-windows
I first got a warning on the configure at point 6, as –disable-hips is not 
recognized. Then, on make ‘petsc-slepc’ of point 7 (no SUDO=sudo flag was 
necessary) I got to this point:

tar xzf ../pkg/petsc-lite-3.13.0.tar.gz
patch -p1 < petsc-suitesparse.patch
patching file petsc-3.13.0/config/BuildSystem/config/packages/SuiteSparse.py
touch petsc-3.13.0/tag-tar
cd petsc-3.13.0 && ./configure MAKEFLAGS='' \
--prefix=/home/paolo/freefem/ff-petsc//r \
--with-debugging=0 COPTFLAGS='-O3 -mtune=generic' CXXOPTFLAGS='-O3 
-mtune=generic' FOPTFLAGS='-O3 -mtune=generic' --with-cxx-dialect=C++11 
--with-ssl=0 --with-x=0 --with-fortran-bindings=0 --with-shared-libraries=0 
--with-cc='gcc' --with-cxx='g++' --with-fc='gfortran' 
CXXFLAGS='-fno-stack-protector' CFLAGS='-fno-stack-protector' 
--with-scalar-type=real --with-mpi-lib='/c/Windows/System32/msmpi.dll' 
--with-mpi-include='/home/paolo/FreeFem-sources/3rdparty/include/msmpi' 
--with-mpiexec='/C/Program\ Files/Microsoft\ MPI/Bin/mpiexec' 
--with-blaslapack-include='' 
--with-blaslapack-lib='/mingw64/bin/libopenblas.dll' --download-scalapack 
--download-metis --download-ptscotch --download-mumps --download-hypre 
--download-parmetis --download-superlu --download-suitesparse --download-tetgen 
--download-slepc '--download-metis-cmake-arguments=-G "MSYS Makefiles"' 
'--download-parmetis-cmake-arguments=-G "MSYS Makefiles"' 
'--download-superlu-cmake-arguments=-G "MSYS Makefiles"' 
'--download-hypre-configure-arguments=--build=x86_64-linux-gnu 
--host=x86_64-linux-gnu' PETSC_ARCH=fr
===
 Configuring PETSc to compile on your system
===
TESTING: FortranMPICheck from 
config.packages.MPI(config/BuildSystem/config/pack***
 UNABLE to CONFIGURE with GIVEN OPTIONS(see configure.log for 
details):
---
Fortran error! mpi_init() could not be located!
***

make: *** [Makefile:210: petsc-3.13.0/tag-conf-real] Errore 1

Note that I didn’t add anything to any PATH variable, because this is not 
mentioned in your documentation.

On a side note, this is the same error I got when trying to build PETSc in 
Cygwin with the default OpenMPI available in Cygwin.

I am attaching the configure.log… it seems to me that the error comes from
configure trying to include the mpif.h in your folder without using the
-fallow-invalid-boz flag that I had to use, for example, to compile mpi.f90
into mpi.mod.

But I’m not sure why this is happening

Pierre,

Could this be due to gcc 10?

Sorry, I’m slow. You are right. Our workers use gcc 9 and everything is fine, but 
I see on my VM, which I updated, that I use gcc 10 and had to disable Fortran; I 
guess the MUMPS run I showcased was with a prior PETSc build.
I’ll try to resolve this and will keep you posted.
They really caught a lot of people off guard with gfortran 10…

Thanks,
Pierre


Executing: gfortran -c -o /tmp/petsc-ur0cff6a/config.libraries/conftest.o 
-I/tmp/petsc-ur0cff6a/config.compilers 
-I/tmp/petsc-ur0cff6a/config.setCompilers 
-I/tmp/petsc-ur0cff6a/config.compilersFortran 
-I/tmp/petsc-ur0cff6a/config.libraries  -Wall -ffree-line-length-0 
-Wno-unused-dummy-argument -O3 -mtune=generic   
-I/home/paolo/FreeFem-sources/3rdparty/include/msmpi 
/tmp/petsc-ur0cff6a/config.libraries/conftest.F90
Possible ERROR while running compiler: exit code 1
stderr:
C:/msys64/home/paolo/FreeFem-sources/3rdparty/include/msmpi/mpif.h:227:36:

  227 |PARAMETER (MPI_DATATYPE_NULL=z'0c00')
  |1
Error: BOZ literal constant at (1) is neither a data-stmt-constant nor an 
actual argument 

Re: [petsc-users] PETSc and Windows 10

2020-06-29 Thread Pierre Jolivet


> On 29 Jun 2020, at 7:47 PM, Matthew Knepley  wrote:
> 
> On Mon, Jun 29, 2020 at 1:35 PM Paolo Lampitella wrote:
> Dear Pierre, sorry to bother you, but I already have some issues. What I did:
> 
>  
> 
> pacman -R mingw-w64-x86_64-python mingw-w64-x86_64-gdb (is gdb also 
> troublesome?)
> Followed points 6 and 7 at 
> https://doc.freefem.org/introduction/installation.html#compilation-on-windows 
> 
> I first got a warning on the configure at point 6, as –disable-hips is not 
> recognized. Then, on make ‘petsc-slepc’ of point 7 (no SUDO=sudo flag was 
> necessary) I got to this point:
> 
>  
> 
> tar xzf ../pkg/petsc-lite-3.13.0.tar.gz
> 
> patch -p1 < petsc-suitesparse.patch
> 
> patching file petsc-3.13.0/config/BuildSystem/config/packages/SuiteSparse.py
> 
> touch petsc-3.13.0/tag-tar
> 
> cd petsc-3.13.0 && ./configure MAKEFLAGS='' \
> 
> --prefix=/home/paolo/freefem/ff-petsc//r \
> 
> --with-debugging=0 COPTFLAGS='-O3 -mtune=generic' CXXOPTFLAGS='-O3 
> -mtune=generic' FOPTFLAGS='-O3 -mtune=generic' --with-cxx-dialect=C++11 
> --with-ssl=0 --with-x=0 --with-fortran-bindings=0 --with-shared-libraries=0 
> --with-cc='gcc' --with-cxx='g++' --with-fc='gfortran' 
> CXXFLAGS='-fno-stack-protector' CFLAGS='-fno-stack-protector' 
> --with-scalar-type=real --with-mpi-lib='/c/Windows/System32/msmpi.dll' 
> --with-mpi-include='/home/paolo/FreeFem-sources/3rdparty/include/msmpi' 
> --with-mpiexec='/C/Program\ Files/Microsoft\ MPI/Bin/mpiexec' 
> --with-blaslapack-include='' 
> --with-blaslapack-lib='/mingw64/bin/libopenblas.dll' --download-scalapack 
> --download-metis --download-ptscotch --download-mumps --download-hypre 
> --download-parmetis --download-superlu --download-suitesparse 
> --download-tetgen --download-slepc '--download-metis-cmake-arguments=-G "MSYS 
> Makefiles"' '--download-parmetis-cmake-arguments=-G "MSYS Makefiles"' 
> '--download-superlu-cmake-arguments=-G "MSYS Makefiles"' 
> '--download-hypre-configure-arguments=--build=x86_64-linux-gnu 
> --host=x86_64-linux-gnu' PETSC_ARCH=fr
> 
> ===
> 
>  Configuring PETSc to compile on your system
> 
> ===
> 
> TESTING: FortranMPICheck from 
> config.packages.MPI(config/BuildSystem/config/pack***
> 
>  UNABLE to CONFIGURE with GIVEN OPTIONS(see configure.log for 
> details):
> 
> ---
> 
> Fortran error! mpi_init() could not be located!
> 
> ***
> 
>  
> 
> make: *** [Makefile:210: petsc-3.13.0/tag-conf-real] Errore 1
> 
>  
> 
> Note that I didn’t add anything to any PATH variable, because this is not 
> mentioned in your documentation.
> 
>  
> 
> On a side note, this is the same error I got when trying to build PETSc in 
> Cygwin with the default OpenMPI available in Cygwin.
> 
>  
> 
> I am attaching the configure.log… it seems to me that the error comes from 
> the configure trying to include the mpif.h in your folder and not using the 
> -fallow-invalid-boz flag that I had to use, for example, to compile mpi.f90 
> into mpi.mod
> 
>  
> 
> But I’m not sure why this is happening
> 
> 
> Pierre,
> 
> Could this be due to gcc 10?

Sorry, I’m slow. You are right. Our workers use gcc 9 and everything is fine, but 
I see on my VM, which I updated, that I use gcc 10 and had to disable Fortran; I 
guess the MUMPS run I showcased was with a prior PETSc build.
I’ll try to resolve this and will keep you posted.
They really caught a lot of people off guard with gfortran 10…

Thanks,
Pierre

> Executing: gfortran -c -o /tmp/petsc-ur0cff6a/config.libraries/conftest.o 
> -I/tmp/petsc-ur0cff6a/config.compilers 
> -I/tmp/petsc-ur0cff6a/config.setCompilers 
> -I/tmp/petsc-ur0cff6a/config.compilersFortran 
> -I/tmp/petsc-ur0cff6a/config.libraries  -Wall -ffree-line-length-0 
> -Wno-unused-dummy-argument -O3 -mtune=generic   
> -I/home/paolo/FreeFem-sources/3rdparty/include/msmpi 
> /tmp/petsc-ur0cff6a/config.libraries/conftest.F90 
> Possible ERROR while running compiler: exit code 1
> stderr:
> C:/msys64/home/paolo/FreeFem-sources/3rdparty/include/msmpi/mpif.h:227:36:
> 
>   227 |PARAMETER (MPI_DATATYPE_NULL=z'0c00')
>   |1
> Error: BOZ literal constant at (1) is neither a data-stmt-constant nor an 
> actual argument to INT, REAL, DBLE, or CMPLX intrinsic function [see 
> '-fno-allow-invalid-boz']
> C:/msys64/home/paolo/FreeFem-sources/3rdparty/include/msmpi/mpif.h:303:27:
> 
>   303 |PARAMETER (MPI_CHAR=z'4c000101')
>   |   1
> Error: BOZ literal 

Re: [petsc-users] PETSc initialization error

2020-06-29 Thread Sam Guo
Hi Junchao,
   I'll test ex53. In the meantime, I use the following work-around:
my program calls MPI initialize once for the entire program
PetscInitialize once for the entire program
SlepcInitialize once for the entire program (I think I can skip PetscInitialize
above)
calling slepc multiple times
my program calls MPI finalize before ending the program

   You can see I skip PetscFinalize/SlepcFinalize. I am uneasy about
skipping them since I am not sure what the consequences are. Can you comment
on it?

Thanks,
Sam



On Fri, Jun 26, 2020 at 6:58 PM Junchao Zhang 
wrote:

> Did the test included in that commit fail in your environment? You can
> also change the test by adding calls to SlepcInitialize/SlepcFinalize
> between PetscInitializeNoPointers/PetscFinalize as in my previous email.
>
> --Junchao Zhang
>
>
> On Fri, Jun 26, 2020 at 5:54 PM Sam Guo  wrote:
>
>> Hi Junchao,
>>If you are talking about this commit of yours
>> https://gitlab.com/petsc/petsc/-/commit/f0463fa09df52ce43e7c5bf47a1c87df0c9e5cbb
>>
>> Recycle keyvals and fix bugs in MPI_Comm creation
>>I think I got it. It fixes the serial one, but the parallel one is still
>> crashing.
>>
>> Thanks,
>> Sam
>>
>> On Fri, Jun 26, 2020 at 3:43 PM Sam Guo  wrote:
>>
>>> Hi Junchao,
>>>I am not ready to upgrade petsc yet (due to the lengthy technical and
>>> legal approval process of our internal policy). Can you send me the diff
>>> file so I can apply it to petsc 3.11.3?
>>>
>>> Thanks,
>>> Sam
>>>
>>> On Fri, Jun 26, 2020 at 3:33 PM Junchao Zhang 
>>> wrote:
>>>
 Sam,
   Please discard the original patch I sent you. A better fix is already
 in maint/master. A test is at src/sys/tests/ex53.c.
   I modified that test at the end with

   for (i=0; i<500; i++) {
 ierr = PetscInitializeNoPointers(argc,argv,NULL,help);if (ierr) return ierr;
 ierr = SlepcInitialize(&argc,&argv,NULL,help);if (ierr) return ierr;
 ierr = SlepcFinalize();if (ierr) return ierr;
 ierr = PetscFinalize();if (ierr) return ierr;
   }


  then I ran it with multiple MPI ranks and it ran correctly. So try
 your program with petsc master first. If that does not work, see if you can
 come up with a test example for us.

  Thanks.
 --Junchao Zhang


 On Fri, Jun 26, 2020 at 3:37 PM Sam Guo  wrote:

> One work around for me is to call PetscInitialize once for my entire
> program and skip PetscFinalize (since I don't have a good place to call
> PetscFinalize   before ending the program).
>
> On Fri, Jun 26, 2020 at 1:33 PM Sam Guo  wrote:
>
>> I get the crash after calling Initialize/Finalize multiple times.
>> Junchao fixed the bug for serial but parallel still crashes.
>>
>> On Fri, Jun 26, 2020 at 1:28 PM Barry Smith  wrote:
>>
>>>
>>>   Ah, so you get the crash the second time you call
>>> PetscInitialize()?  That is a problem because we do intend to support that
>>> capability (but you must call PetscFinalize() each time also).
>>>
>>>   Barry
>>>
>>>
>>> On Jun 26, 2020, at 3:25 PM, Sam Guo  wrote:
>>>
>>> Hi Barry,
>>>Thanks for the quick response.
>>>I will call PetscInitialize once and skip the PetscFinalize for
>>> now to avoid the crash. The crash is actually in PetscInitialize, not
>>> PetscFinalize.
>>>
>>> Thanks,
>>> Sam
>>>
>>> On Fri, Jun 26, 2020 at 1:21 PM Barry Smith 
>>> wrote:
>>>

   Sam,

   You can skip PetscFinalize() so long as you only call
 PetscInitialize() once. It is not desirable in general to skip the 
 finalize
 because PETSc can't free all its data structures and you cannot see the
 PETSc logging information with -log_view but in terms of the code 
 running
 correctly you do not need to call PetscFinalize.

If your code crashes in PetscFinalize() please send the full
 error output and we can try to help you debug it.


Barry

 On Jun 26, 2020, at 3:14 PM, Sam Guo  wrote:

 To clarify, we have an MPI wrapper (so we can switch to a different
 MPI at runtime). I compile petsc using our MPI wrapper.
 If I just call PETSc initialize once without calling finalize, it
 is ok. My question to you is: can I skip finalize?
 Our program calls mpi_finalize at end anyway.

 On Fri, Jun 26, 2020 at 1:09 PM Sam Guo 
 wrote:

> Hi Junchao,
>Attached please find the configure.log.
>I also attach the pinit.c which contains your patch (I am
> currently using 3.11.3. I've applied your patch to 3.11.3). Your patch
> fixes the serial version. The error now is with the parallel version.
>Here is the error log:
>
> [1]PETSC ERROR: #1 

Re: [petsc-users] R: PETSc and Windows 10

2020-06-29 Thread Matthew Knepley
On Mon, Jun 29, 2020 at 1:35 PM Paolo Lampitella <
paololampite...@hotmail.com> wrote:

> Dear Pierre, sorry to bother you, but I already have some issues. What I
> did:
>
>
>
>- pacman -R mingw-w64-x86_64-python mingw-w64-x86_64-gdb (is gdb also
>troublesome?)
>- Followed points 6 and 7 at
>
> https://doc.freefem.org/introduction/installation.html#compilation-on-windows
>
> I first got a warning on the configure at point 6, as –disable-hips is not
> recognized. Then, on make ‘petsc-slepc’ of point 7 (no SUDO=sudo flag was
> necessary) I got to this point:
>
>
>
> tar xzf ../pkg/petsc-lite-3.13.0.tar.gz
>
> patch -p1 < petsc-suitesparse.patch
>
> patching file
> petsc-3.13.0/config/BuildSystem/config/packages/SuiteSparse.py
>
> touch petsc-3.13.0/tag-tar
>
> cd petsc-3.13.0 && ./configure MAKEFLAGS='' \
>
> --prefix=/home/paolo/freefem/ff-petsc//r \
>
> --with-debugging=0 COPTFLAGS='-O3 -mtune=generic' CXXOPTFLAGS='-O3
> -mtune=generic' FOPTFLAGS='-O3 -mtune=generic' --with-cxx-dialect=C++11
> --with-ssl=0 --with-x=0 --with-fortran-bindings=0 --with-shared-libraries=0
> --with-cc='gcc' --with-cxx='g++' --with-fc='gfortran'
> CXXFLAGS='-fno-stack-protector' CFLAGS='-fno-stack-protector'
> --with-scalar-type=real --with-mpi-lib='/c/Windows/System32/msmpi.dll'
> --with-mpi-include='/home/paolo/FreeFem-sources/3rdparty/include/msmpi'
> --with-mpiexec='/C/Program\ Files/Microsoft\ MPI/Bin/mpiexec'
> --with-blaslapack-include=''
> --with-blaslapack-lib='/mingw64/bin/libopenblas.dll' --download-scalapack
> --download-metis --download-ptscotch --download-mumps --download-hypre
> --download-parmetis --download-superlu --download-suitesparse
> --download-tetgen --download-slepc '--download-metis-cmake-arguments=-G
> "MSYS Makefiles"' '--download-parmetis-cmake-arguments=-G "MSYS Makefiles"'
> '--download-superlu-cmake-arguments=-G "MSYS Makefiles"'
> '--download-hypre-configure-arguments=--build=x86_64-linux-gnu
> --host=x86_64-linux-gnu' PETSC_ARCH=fr
>
>
> ===
>
>  Configuring PETSc to compile on your system
>
>
> ===
>
> TESTING: FortranMPICheck from
> config.packages.MPI(config/BuildSystem/config/pack***
>
>  UNABLE to CONFIGURE with GIVEN OPTIONS(see configure.log for
> details):
>
>
> ---
>
> Fortran error! mpi_init() could not be located!
>
>
> ***
>
>
>
> make: *** [Makefile:210: petsc-3.13.0/tag-conf-real] Errore 1
>
>
>
> Note that I didn’t add anything to any PATH variable, because this is not
> mentioned in your documentation.
>
>
>
> On a side note, this is the same error I got when trying to build PETSc in
> Cygwin with the default OpenMPI available in Cygwin.
>
>
>
> I am attaching the configure.log… it seems to me that the error comes from
> the configure trying to include the mpif.h in your folder and not using the
> -fallow-invalid-boz flag that I had to use, for example, to compile mpi.f90
> into mpi.mod
>
>
>
> But I’m not sure why this is happening
>

Pierre,

Could this be due to gcc 10?

Executing: gfortran -c -o /tmp/petsc-ur0cff6a/config.libraries/conftest.o
-I/tmp/petsc-ur0cff6a/config.compilers
-I/tmp/petsc-ur0cff6a/config.setCompilers
-I/tmp/petsc-ur0cff6a/config.compilersFortran
-I/tmp/petsc-ur0cff6a/config.libraries  -Wall -ffree-line-length-0
-Wno-unused-dummy-argument -O3 -mtune=generic
-I/home/paolo/FreeFem-sources/3rdparty/include/msmpi
/tmp/petsc-ur0cff6a/config.libraries/conftest.F90
Possible ERROR while running compiler: exit code 1
stderr:
C:/msys64/home/paolo/FreeFem-sources/3rdparty/include/msmpi/mpif.h:227:36:

  227 |PARAMETER (MPI_DATATYPE_NULL=z'0c00')
  |1
Error: BOZ literal constant at (1) is neither a data-stmt-constant nor an
actual argument to INT, REAL, DBLE, or CMPLX intrinsic function [see
'-fno-allow-invalid-boz']
C:/msys64/home/paolo/FreeFem-sources/3rdparty/include/msmpi/mpif.h:303:27:

  303 |PARAMETER (MPI_CHAR=z'4c000101')
  |   1
Error: BOZ literal constant at (1) is neither a data-stmt-constant nor an
actual argument to INT, REAL, DBLE, or CMPLX intrinsic function [see
'-fno-allow-invalid-boz']
C:/msys64/home/paolo/FreeFem-sources/3rdparty/include/msmpi/mpif.h:305:36:

  305 |PARAMETER (MPI_UNSIGNED_CHAR=z'4c000102')
  |1

  Thanks,

 Matt


> Thanks
>
>
>
> Paolo
>
>
>
> Sent from Mail for Windows 10
>
>
>
> *From: *Pierre Jolivet
> *Sent: *Monday, June 29, 2020 18:34
> *To: *Paolo Lampitella
> *Cc: *Satish Balay; petsc-users
> 

Re: [petsc-users] PETSc and Windows 10

2020-06-29 Thread Pierre Jolivet


> On 29 Jun 2020, at 6:27 PM, Paolo Lampitella wrote:
> 
> I think I made the first step of having mingw64 from msys2 working with 
> ms-mpi.
>  
> I found that the issue I was having was related to:
>  
> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91556 
> 
>  
> and, probably (but impossible to check now), I was using an msys2 and/or 
> mingw mpi package before this fix:
>  
> https://github.com/msys2/MINGW-packages/commit/11b4cff3d2ec7411037b692b0ad5a9f3e9b9978d#diff-eac59989e3096be97d940c8f47b50fba
>  
> 
>  
> Admittedly, I never used gcc 10 before on any machine. Still, I feel that 
> reporting that sort of error in that way is,
> at least, misleading (I would have preferred the initial implementation as 
> mentioned in the gcc bug track).
>  
> A second thing that I was not used to, and made me more uncertain of the 
> procedure I was following, is having to compile myself the mpi module. There 
> are several version of this out there, but I decided to stick with this one:
>  
> https://www.scivision.dev/windows-mpi-msys2/ 
> 
>  
> even if there seems to be no need to include -fno-range-check and the current 
> mpi.f90 version is different from the mpif.h as reported here:
>  
> https://github.com/microsoft/Microsoft-MPI/issues/33 
> 
>  
> which, to me, are both signs of lack of attention on the fortran side by 
> those that maintain this thing.
>  
> In summary, this is the procedure I followed so far (on a 64 bit machine with 
> Windows 10):
>  
> Install MSYS2 from https://www.msys2.org/  and just 
> follow the install wizard
> Open the MSYS2 terminal and execute: pacman -Syuu
> Close the terminal when asked and reopen it
> Keep executing ‘pacman -Syuu’ until nothing else needs to be updated
> Close the MSYS2 terminal and reopen it (I guess because was in paranoid 
> mode), then install packages with:
>  
> pacman -S base-devel git gcc gcc-fortran bsdcpio lndir pax-git unzip
> pacman -S mingw-w64-x86_64-toolchain
> pacman -S mingw-w64-x86_64-msmpi
> pacman -S mingw-w64-x86_64-cmake
> pacman -S mingw-w64-x86_64-freeglut
> pacman -S mingw-w64-x86_64-gsl
> pacman -S mingw-w64-x86_64-libmicroutils
> pacman -S mingw-w64-x86_64-hdf5
> pacman -S mingw-w64-x86_64-openblas
> pacman -S mingw-w64-x86_64-arpack
> pacman -S mingw-w64-x86_64-jq
>  
> This set should include all the libraries mentioned by Pierre and/or used by 
> his Jenkins, as the final scope here is to have PETSc and dependencies 
> working. But I think that for pure MPI one could stop to msmpi (even, maybe, 
> just install msmpi and have the dependencies figured out by pacman). 
> Honestly, I don’t remember the exact order I used to install the packages, 
> but this should not affect things. Also, as I was still in paranoid mode, I 
> kept executing ‘pacman -Syuu’ after each package was installed. After this, 
> close the MSYS2 terminal.
>  
> Open the MINGW64 terminal and create the .mod file out of the mpi.f90 file, 
> as mentioned here https://www.scivision.dev/windows-mpi-msys2/, with:
>  
> cd /mingw64/include
> gfortran mpi.f90 -c -fno-range-check -fallow-invalid-boz

Ah, yes, that’s new to gfortran 10 (we use gfortran 9 on our workers), which is 
now what ships with MSYS2 (we haven’t updated yet). Sorry that I forgot about 
that.

> This is needed to ‘USE mpi’ (as opposed to INCLUDE ‘mpif.h’)
>  
> Install the latest MS-MPI (both sdk and setup) from 
> https://www.microsoft.com/en-us/download/details.aspx?id=100593 
> 
>  
> At this point I’ve been able to compile (using the MINGW64 terminal) 
> different mpi test programs and they run as expected in the classical Windows 
> prompt. I added this function to my .bashrc in MSYS2 in order to easily copy 
> the required dependencies out of MSYS:
>  
> function copydep() { ldd $1 | grep "=> /$2" | awk '{print $3}' | xargs -I 
> '{}' cp -v '{}' .; }
>  
> which can be used, with the MINGW64 terminal, by navigating to the folder 
> where the final executable, say, my.exe, resides (even if under a Windows 
> path) and executing:
>  
> copydep my.exe mingw64
>  
> This, of course, must be done before actually trying to execute the .exe in 
> the windows cmd prompt.
>  
> Hopefully, I should now be able to follow Pierre’s instructions for PETSc 
> (but first I wanna give a try to the system python before removing it)

Looks like the hard part is over. It’s usually easier to deal with ./configure 
issues.
If you have weird errors like “incomplete Cygwin install” or whatever, these are 
the kinds of issues I was referring to earlier.
In that case, what I’d suggest is just, as 

[petsc-users] R: PETSc and Windows 10

2020-06-29 Thread Paolo Lampitella
I think I made the first step of having mingw64 from msys2 working with ms-mpi.

I found that the issue I was having was related to:

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91556

and, probably (but impossible to check now), I was using an msys2 and/or mingw 
mpi package before this fix:

https://github.com/msys2/MINGW-packages/commit/11b4cff3d2ec7411037b692b0ad5a9f3e9b9978d#diff-eac59989e3096be97d940c8f47b50fba

Admittedly, I never used gcc 10 before on any machine. Still, I feel that 
reporting that sort of error in that way is,
at least, misleading (I would have preferred the initial implementation as 
mentioned in the gcc bug track).
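
For context, the construct at stake (the same one that shows up later in this
thread in the MS-MPI mpif.h errors) can be reproduced with a tiny file; this is
only an illustration, not something from the original message:

! boz.f90 -- mirrors the mpif.h style shown in the configure errors
!   gfortran -c boz.f90                      -> rejected by gfortran 10 (BOZ error)
!   gfortran -c -fallow-invalid-boz boz.f90  -> accepted
      INTEGER MPI_DATATYPE_NULL
      PARAMETER (MPI_DATATYPE_NULL=z'0c00')
      END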

A second thing that I was not used to, and which made me more uncertain of the 
procedure I was following, is having to compile the mpi module myself. There 
are several versions of this out there, but I decided to stick with this one:

https://www.scivision.dev/windows-mpi-msys2/

even if there seems to be no need to include -fno-range-check and the current 
mpi.f90 version is different from the mpif.h as reported here:

https://github.com/microsoft/Microsoft-MPI/issues/33

which, to me, are both signs of a lack of attention on the Fortran side by those 
who maintain this thing.

In summary, this is the procedure I followed so far (on a 64 bit machine with 
Windows 10):


  *   Install MSYS2 from https://www.msys2.org/ and just follow the install 
wizard
  *   Open the MSYS2 terminal and execute: pacman -Syuu
  *   Close the terminal when asked and reopen it
  *   Keep executing ‘pacman -Syuu’ until nothing else needs to be updated
  *   Close the MSYS2 terminal and reopen it (I guess because I was in paranoid 
mode), then install packages with:



pacman -S base-devel git gcc gcc-fortran bsdcpio lndir pax-git unzip

pacman -S mingw-w64-x86_64-toolchain

pacman -S mingw-w64-x86_64-msmpi

pacman -S mingw-w64-x86_64-cmake

pacman -S mingw-w64-x86_64-freeglut

pacman -S mingw-w64-x86_64-gsl

pacman -S mingw-w64-x86_64-libmicroutils

pacman -S mingw-w64-x86_64-hdf5

pacman -S mingw-w64-x86_64-openblas

pacman -S mingw-w64-x86_64-arpack

pacman -S mingw-w64-x86_64-jq



This set should include all the libraries mentioned by Pierre and/or used by 
his Jenkins, as the final goal here is to have PETSc and dependencies working. 
But I think that for pure MPI one could stop at msmpi (even, maybe, just 
install msmpi and have the dependencies figured out by pacman). Honestly, I 
don’t remember the exact order I used to install the packages, but this should 
not affect things. Also, as I was still in paranoid mode, I kept executing 
‘pacman -Syuu’ after each package was installed. After this, close the MSYS2 
terminal.



  *   Open the MINGW64 terminal and create the .mod file out of the mpi.f90 
file, as mentioned here https://www.scivision.dev/windows-mpi-msys2/, with:



cd /mingw64/include

gfortran mpi.f90 -c -fno-range-check -fallow-invalid-boz



This is needed to ‘USE mpi’ (as opposed to INCLUDE ‘mpif.h’)



  *   Install the latest MS-MPI (both sdk and setup) from 
https://www.microsoft.com/en-us/download/details.aspx?id=100593

At this point I’ve been able to compile (using the MINGW64 terminal) different 
mpi test programs and they run as expected in the classical Windows prompt. I 
added this function to my .bashrc in MSYS2 in order to easily copy the required 
dependencies out of MSYS:

function copydep() { ldd $1 | grep "=> /$2" | awk '{print $3}' | xargs -I '{}' 
cp -v '{}' .; }

which can be used, with the MINGW64 terminal, by navigating to the folder where 
the final executable, say, my.exe, resides (even if under a Windows path) and 
executing:

copydep my.exe mingw64

This, of course, must be done before actually trying to execute the .exe in the 
windows cmd prompt.
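
As an illustration (not part of the original procedure), a minimal 'USE mpi'
test of the resulting toolchain might look like the following; the -I option
points at the mpi.mod built above and -lmsmpi links against the MSYS2 MS-MPI
import library:

! hello_mpi.f90 -- compile in the MINGW64 terminal, run from a Windows prompt:
!   gfortran hello_mpi.f90 -o hello_mpi.exe -I/mingw64/include -lmsmpi
!   mpiexec -n 4 hello_mpi.exe
program hello_mpi
  use mpi
  implicit none
  integer :: ierr, rank, nprocs
  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
  print '(a,i0,a,i0)', 'Hello from rank ', rank, ' of ', nprocs
  call MPI_Finalize(ierr)
end program hello_mpi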

Hopefully, I should now be able to follow Pierre’s instructions for PETSc (but 
first I want to give the system python a try before removing it)

Thanks

Paolo



[petsc-users] R: PETSc and Windows 10

2020-06-29 Thread Paolo Lampitella
As a follow-up on the OpenMPI matter in Cygwin, I wasn’t actually able to use 
the Cygwin version at all, not even to compile a simple MPI test.
And PETSc fails to use it as well, as it seems unable to find MPI_Init.

I might try having PETSc install it, as it did with MPICH, but this is just for 
future reference if anyone is interested.

Paolo

Sent from Mail for Windows 10

From: Satish Balay
Sent: Sunday, June 28, 2020 18:17
To: Satish Balay via petsc-users
Cc: Paolo Lampitella; Pierre Jolivet
Subject: Re: [petsc-users] PETSc and Windows 10

On Sun, 28 Jun 2020, Satish Balay via petsc-users wrote:

> On Sun, 28 Jun 2020, Paolo Lampitella wrote:

> >  *   For my Cygwin-GNU route (basically what is mentioned in PFLOTRAN 
> > documentation), am I expected to then run from the cygwin terminal or 
> > should the windows prompt work as well? Is the fact that I require a second 
> > Enter hit and the mismanagement of serial executables the sign of something 
> > wrong with the Windows prompt?
>
> I would think Cygwin-GNU route should work. I'll have to see if I can 
> reproduce the issues you have.

I attempted a couple of builds - one with mpich and the other with 
cygwin-openmpi

The mpich-compiled petsc example works sequentially - however mpiexec appears to 
require the cygwin env.


C:\petsc-install\bin>ex5f
Number of SNES iterations = 4

C:\petsc-install\bin>mpiexec -n 1 ex5f
[cli_0]: write_line error; fd=448 buf=:cmd=init pmi_version=1 pmi_subversion=1
:
system msg for write_line failure : Bad file descriptor
[cli_0]: Unable to write to PMI_fd
[cli_0]: write_line error; fd=448 buf=:cmd=get_appnum
:
system msg for write_line failure : Bad file descriptor
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(467):
MPID_Init(140)...: channel initialization failed
MPID_Init(421)...: PMI_Get_appnum returned -1
[cli_0]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(467):
MPID_Init(140)...: channel initialization failed
MPID_Init(421)...: PMI_Get_appnum returned -1

C:\petsc-install\bin>
<<

The cygwin-openmpi-compiled petsc example binary gives errors even for a sequential 
run


C:\Users\balay\test>ex5f
Warning: '/dev/shm' does not exists or is not a directory.

POSIX shared memory objects require the existance of this directory.
Create the directory '/dev/shm' and set the permissions to 01777.
For instance on the command line: mkdir -m 01777 /dev/shm
[ps5:00560] [[INVALID],INVALID] ORTE_ERROR_LOG: A system-required executable 
either could not be found or was not executable by this user in file 
/cygdrive/d/cyg_pub/devel/openmpi/v3.1/prova/openmpi-3.1.6-1.x86_64/src/openmpi-3.1.6/orte/mca/ess/singleton/ess_singleton_module.c
 at line 388
[ps5:00560] [[INVALID],INVALID] ORTE_ERROR_LOG: A system-required executable 
either could not be found or was not executable by this user in file 
/cygdrive/d/cyg_pub/devel/openmpi/v3.1/prova/openmpi-3.1.6-1.x86_64/src/openmpi-3.1.6/orte/mca/ess/singleton/ess_singleton_module.c
 at line 166
--
Sorry!  You were supposed to get help about:
orte_init:startup:internal-failure
But I couldn't open the help file:
/usr/share/openmpi/help-orte-runtime: No such file or directory.  Sorry!
--
<<<

So it looks like you would need Cygwin installed to run Cygwin-MPI binaries. Also 
I don't know how the cygwin/windows interaction overhead will affect parallel 
performance.

Satish