[petsc-users] R: R: How to use Intel OneApi mpi wrappers on Linux

2022-10-06 Thread Paolo Lampitella
Hi Eric,

With the previous Intel version I was able to configure without the MPI wrappers
and had no problems.
Using Mark’s suggestion (CFLAGS, FFLAGS, CXXFLAGS), I also managed to use the MPI
wrappers, roughly as sketched below.
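
For reference, a minimal sketch of the kind of configure line this refers to (untested; it assumes the oneAPI environment has already been sourced, and the -O2 levels are purely illustrative):

# untested sketch: oneAPI environment assumed already sourced; -O2 purely illustrative
./configure --with-cc=mpiicc --with-cxx=mpiicpc --with-fc=mpiifort \
  CFLAGS="-cc=icx" CXXFLAGS="-cxx=icpx" FFLAGS="-fc=ifx" \
  COPTFLAGS="-O2" CXXOPTFLAGS="-O2" FOPTFLAGS="-O2"

The point, per Mark’s advice further down in this archive, is that the -cc/-cxx/-fc selectors go into the FLAGS variables, which configure passes on every compile, not into the OPTFLAGS variables, which only carry optimization options.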

Unfortunately, as you seem to have noticed, things break down on hypre and that
-loopopt flag. I have a lead on a possible solution, namely using Autoconf 2.70
or higher, but this is untested.

However, in an attempt to clarify the procedure better, I started from scratch
and got stuck on the new Intel version, which has now deprecated the classical
C/C++ compilers, and passing “-diag-disable=10441” in the C/CXX FLAGS is not
working for me.

So, as a matter of fact, I am stuck too and have had to abandon the Intel route
for the moment.

Paolo

Sent from Mail<https://go.microsoft.com/fwlink/?LinkId=550986> for Windows

From: Eric Chamberland <eric.chamberl...@giref.ulaval.ca>
Sent: Thursday, October 6, 2022 00:13
To: Paolo Lampitella <paololampite...@hotmail.com>; Barry Smith <bsm...@petsc.dev>
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] R: How to use Intel OneApi mpi wrappers on Linux


Hi,

FWIW, I tried to compile with icpx too, without the MPI wrappers...

However, I had other problems... check here:
https://gitlab.com/petsc/petsc/-/issues/1255

Has anyone compiled PETSc with the latest Intel OneAPI release?

Can you give a working configure line?

Thanks,

Eric


On 2022-10-03 15:58, Paolo Lampitella wrote:
Hi Barry,

thanks for the suggestion. I tried this, but it doesn’t seem to work as expected.
That is, configure actually works, but only because it is not seeing the LLVM-based
compilers, just the classical Intel ones. Yet the variables seem to be exported
correctly.

Paolo


From: Barry Smith <bsm...@petsc.dev>
Sent: Monday, October 3, 2022 15:19
To: Paolo Lampitella <paololampite...@hotmail.com>
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] How to use Intel OneApi mpi wrappers on Linux


bsmith@petsc-01:~$ mpicc
This script invokes an appropriate specialized C MPI compiler driver.
The following ways (priority order) can be used for changing default
compiler name (gcc):
   1. Command line option:  -cc=<compiler_name>
   2. Environment variable: I_MPI_CC (current value '')
   3. Environment variable: MPICH_CC (current value '')



So
export I_MPI_CC=icx
export I_MPI_CXX=icpx
export I_MPI_FC=ifx

should do the trick.
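
A quick way to double-check that the exports are actually honored (assuming these MPICH-derived wrappers accept the usual -show option, which prints the underlying command instead of running it):

# sketch: after the exports, -show should report a command line starting with icx/icpx/ifx
export I_MPI_CC=icx
export I_MPI_CXX=icpx
export I_MPI_FC=ifx
mpicc -show
mpicxx -show
mpifc -show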




On Oct 3, 2022, at 5:43 AM, Paolo Lampitella
<paololampite...@hotmail.com> wrote:

Dear PETSc users and developers,

as per the title, I recently installed the base and HPC Intel OneApi toolkits 
on a machine running Ubuntu 20.04.

As you probably know, OneApi comes with the classical compilers (icc, icpc,
ifort) and the corresponding MPI wrappers (mpiicc, mpiicpc, mpiifort), as well as
the new LLVM-based compilers (icx, icpx, ifx).

My experience so far with PETSc on Linux has been trouble-free, using both the
GCC compilers with either MPICH or OpenMPI, and the classical Intel compilers
with Intel MPI.

However, I now have trouble using MPI with the new LLVM compilers because, in
fact, there are no dedicated MPI wrappers for them. Instead, they can be used
through certain flags of the classical wrappers:

mpiicc -cc=icx
mpiicpc -cxx=icpx
mpiifort -fc=ifx

The problem I have is that I have no idea how to pass them correctly to
configure and to whatever comes after that.

Admittedly, I am just starting to use the new compilers, so I have no clue how
I would use them in other projects either.

I started with an alias in my .bash_aliases, of the kind sketched below, which
works for simple compilation tests from the command line but does not work with
configure.
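
A hypothetical example of such aliases (illustrative only):

# illustrative ~/.bash_aliases entries; shell aliases are only expanded by
# interactive shells, so the subprocesses that configure spawns to run the
# compilers never see them
alias mpiicc='mpiicc -cc=icx'
alias mpiicpc='mpiicpc -cxx=icpx'
alias mpiifort='mpiifort -fc=ifx'

which is consistent with the behavior reported here: the aliases work for interactive compilation tests but are invisible to configure.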

I also tried adding the flags to COPTFLAGS, CXXOPTFLAGS and FOPTFLAGS, but that
didn’t work either.

Do you have any experience with the new Intel compilers and, if so, could you
share how to properly use them with MPI?

Thanks

Paolo



--

Eric Chamberland, ing., M. Ing

Professionnel de recherche

GIREF/Université Laval

(418) 656-2131 poste 41 22 42



[petsc-users] R: How to use Intel OneApi mpi wrappers on Linux

2022-10-03 Thread Paolo Lampitella
Not that I know of; today is the first time I’ve read about it.

It actually came up a few hours ago while googling for this issue, and the
results with the most in common with my case were 3 now-closed issues on the
Spack repository (which I had never heard of). It seems to be something related
to Autoconf up to 2.69 (2.70 has a patch).

I verified that I do have the last offending Autoconf version (2.69), but I
didn’t really understand anything else of what I read, so I couldn’t make any
further progress.

I guess this more or less confirms that this is my current problem with the new
OneAPI compilers and hypre on my Ubuntu 20.04 machine.

Thanks

Paolo

From: Mark Adams <mfad...@lbl.gov>
Sent: Monday, October 3, 2022 19:10
To: Paolo Lampitella <paololampite...@hotmail.com>
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] How to use Intel OneApi mpi wrappers on Linux

You are getting a "-loopopt=0" added to your link line.
No idea what that is or where it comes from.
I don't see it in our repo.
Does this come from your environment somehow?
https://dl.acm.org/doi/abs/10.1145/3493229.3493301
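
One generic way to track down where a stray flag like this enters the build (a sketch, assuming the default configure.log name in the PETSc directory) is to search the configure log for it:

# untested sketch: show the first places the flag appears in PETSc's configure log
grep -n "loopopt" configure.log | head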

On Mon, Oct 3, 2022 at 9:20 AM Paolo Lampitella
<paololampite...@hotmail.com> wrote:
Hi Mark,

thank you very much, problem solved!

I was indeed confusing the OPTFLAGS with the FLAGS variables.

Now, I know this is probably not the place for it but, as I still owe you a
configure.log, what happened next is that I added hypre to the previous (now
working) configuration and ran into configure problems again (log file
attached). If I remove “--download-hypre” from the configure command, as I
said, everything works as expected. It also worked with the classical Intel
compilers (that is, if I again remove the CFLAGS, CXXFLAGS and FFLAGS options
that fixed my configure without hypre).

My take here is that hypre seems to interpret the C/CXX compilers as GNU
(instead of Intel), and later fails in linking C with Fortran.

I don’t actually need hypre for now, but if you have any clue about where to
look next, that would be helpful.

Thanks again

Paolo

From: Mark Adams <mfad...@lbl.gov>
Sent: Monday, October 3, 2022 13:20
To: Paolo Lampitella <paololampite...@hotmail.com>
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] How to use Intel OneApi mpi wrappers on Linux

Hi Paolo,

You can use things like this in your configure file to set compilers and  
options.

And you want to send us your configure.log file if it fails.

Mark

'--with-cc=gcc-11',
'--with-cxx=g++-11',
'--with-fc=gfortran-11',
'CFLAGS=-g',
'CXXFLAGS=-g',
'COPTFLAGS=-O0',
    'CXXOPTFLAGS=-O0',


On Mon, Oct 3, 2022 at 5:43 AM Paolo Lampitella
<paololampite...@hotmail.com> wrote:
Dear PETSc users and developers,

as per the title, I recently installed the base and HPC Intel OneApi toolkits 
on a machine running Ubuntu 20.04.

As you probably know, OneApi comes with the classical compilers (icc, icpc,
ifort) and the corresponding MPI wrappers (mpiicc, mpiicpc, mpiifort), as well as
the new LLVM-based compilers (icx, icpx, ifx).

My experience so far with PETSc on Linux has been trouble-free, using both the
GCC compilers with either MPICH or OpenMPI, and the classical Intel compilers
with Intel MPI.

However, I now have trouble using MPI with the new LLVM compilers because, in
fact, there are no dedicated MPI wrappers for them. Instead, they can be used
through certain flags of the classical wrappers:

mpiicc -cc=icx
mpiicpc -cxx=icpx
mpiifort -fc=ifx

The problem I have is that I have no idea how to pass them correctly to
configure and to whatever comes after that.

Admittedly, I am just starting to use the new compilers, so I have no clue how
I would use them in other projects either.

I started with an alias in my .bash_aliases (which works for simple compilation
tests from the command line), but it doesn’t work with configure.

I also tried adding the flags to COPTFLAGS, CXXOPTFLAGS and FOPTFLAGS, but that
didn’t work either.

Do you have any experience with the new Intel compilers and, if so, could you
share how to properly use them with MPI?

Thanks

Paolo




[petsc-users] R: How to use Intel OneApi mpi wrappers on Linux

2022-10-03 Thread Paolo Lampitella
Hi Barry,

thanks for the suggestion. I tried this, but it doesn’t seem to work as expected.
That is, configure actually works, but only because it is not seeing the LLVM-based
compilers, just the classical Intel ones. Yet the variables seem to be exported
correctly.

Paolo


From: Barry Smith <bsm...@petsc.dev>
Sent: Monday, October 3, 2022 15:19
To: Paolo Lampitella <paololampite...@hotmail.com>
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] How to use Intel OneApi mpi wrappers on Linux


bsmith@petsc-01:~$ mpicc
This script invokes an appropriate specialized C MPI compiler driver.
The following ways (priority order) can be used for changing default
compiler name (gcc):
   1. Command line option:  -cc=<compiler_name>
   2. Environment variable: I_MPI_CC (current value '')
   3. Environment variable: MPICH_CC (current value '')


So
export I_MPI_CC=icx
export I_MPI_CXX=icpx
export I_MPI_FC=ifx

should do the trick.



On Oct 3, 2022, at 5:43 AM, Paolo Lampitella
<paololampite...@hotmail.com> wrote:

Dear PETSc users and developers,

as per the title, I recently installed the base and HPC Intel OneApi toolkits 
on a machine running Ubuntu 20.04.

As you probably know, OneApi comes with the classical compilers (icc, icpc,
ifort) and the corresponding MPI wrappers (mpiicc, mpiicpc, mpiifort), as well as
the new LLVM-based compilers (icx, icpx, ifx).

My experience so far with PETSc on Linux has been trouble-free, using both the
GCC compilers with either MPICH or OpenMPI, and the classical Intel compilers
with Intel MPI.

However, I now have trouble using MPI with the new LLVM compilers because, in
fact, there are no dedicated MPI wrappers for them. Instead, they can be used
through certain flags of the classical wrappers:

mpiicc -cc=icx
mpiicpc -cxx=icpx
mpiifort -fc=ifx

The problem I have is that I have no idea how to pass them correctly to
configure and to whatever comes after that.

Admittedly, I am just starting to use the new compilers, so I have no clue how
I would use them in other projects either.

I started with an alias in my .bash_aliases (which works for simple compilation
tests from the command line), but it doesn’t work with configure.

I also tried adding the flags to COPTFLAGS, CXXOPTFLAGS and FOPTFLAGS, but that
didn’t work either.

Do you have any experience with the new Intel compilers and, if so, could you
share how to properly use them with MPI?

Thanks

Paolo




[petsc-users] How to use Intel OneApi mpi wrappers on Linux

2022-10-03 Thread Paolo Lampitella
Dear PETSc users and developers,

as per the title, I recently installed the base and HPC Intel OneApi toolkits 
on a machine running Ubuntu 20.04.

As you probably know, OneApi comes with the classical compilers (icc, icpc,
ifort) and the corresponding MPI wrappers (mpiicc, mpiicpc, mpiifort), as well as
the new LLVM-based compilers (icx, icpx, ifx).

My experience so far with PETSc on Linux has been trouble-free, using both the
GCC compilers with either MPICH or OpenMPI, and the classical Intel compilers
with Intel MPI.

However, I now have trouble using MPI with the new LLVM compilers because, in
fact, there are no dedicated MPI wrappers for them. Instead, they can be used
through certain flags of the classical wrappers:

mpiicc -cc=icx
mpiicpc -cxx=icpx
mpiifort -fc=ifx

The problem I have is that I have no idea how to pass them correctly to
configure and to whatever comes after that.

Admittedly, I am just starting to use the new compilers, so I have no clue how
I would use them in other projects either.

I started with an alias in my .bash_aliases (which works for simple compilation
tests from the command line), but it doesn’t work with configure.

I also tried adding the flags to COPTFLAGS, CXXOPTFLAGS and FOPTFLAGS, but that
didn’t work either.

Do you have any experience with the new Intel compilers and, if so, could you
share how to properly use them with MPI?

Thanks

Paolo


[petsc-users] R: R: PETSc and Windows 10

2020-07-08 Thread Paolo Lampitella
OK, I see, but this seems to translate into compiling MS-MPI with MinGW in Cygwin
by myself.

It should be doable, maybe by following how they managed to do it in MSYS2 in
the first place (the instructions are available for each package).

Actually, now I see that I might have messed up when working in Cygwin, because
the MinGW toolchain there might still be a cross compiler (while it isn’t in
MSYS2), so it might have required something like “--host=x86_64-w64-mingw32”.

I will update you if I make any progress on this.

Paolo

Sent from Mail<https://go.microsoft.com/fwlink/?LinkId=550986> for Windows 10

From: Satish Balay <ba...@mcs.anl.gov>
Sent: Monday, July 6, 2020 20:31
To: Paolo Lampitella <paololampite...@hotmail.com>
Cc: petsc-users <petsc-users@mcs.anl.gov>; Pierre Jolivet <pierre.joli...@enseeiht.fr>
Subject: Re: [petsc-users] R: PETSc and Windows 10

I was thinking in terms of: If using mingw-gcc from cygwin - then it could be 
used in the same way as mingw-gcc in msys2 is used - i.e with MS-MPI etc..

[one can install mingw-gcc in cygwin - which is different than cygwin native 
gcc - perhaps this is similar to mingw-gcc install in msys2]

I haven't tried this though..

Likely cygwin doesn't have the equivalent of mingw-w64-x86_64-msmpi - for easy 
use of MS-MPI from mingw-gfortran

Satish

On Mon, 6 Jul 2020, Paolo Lampitella wrote:

> Dear Satish,
>
> Yes indeed, or at least that is my understanding. Still, my experience so far
> with Cygwin has been, let’s say, mixed.
>
> I wasn’t able to compile MPICH myself, with either gcc or mingw.
>
> When having PETSc compile MPICH as well, I was successful only with gcc, not
> mingw.
>
> I didn’t even try compiling OpenMPI with mingw, as PETSc compilation already
> failed using the OpenMPI available through the cygwin libraries (which is
> based on gcc and not mingw).
>
> Not sure if this is my fault, but in the end it didn’t go well
>
> Paolo
>
> Sent from Mail<https://go.microsoft.com/fwlink/?LinkId=550986> for Windows
> 10
>
> From: Satish Balay <ba...@mcs.anl.gov>
> Sent: Sunday, July 5, 2020 23:50
> To: Paolo Lampitella <paololampite...@hotmail.com>
> Cc: Pierre Jolivet <pierre.joli...@enseeiht.fr>; petsc-users <petsc-users@mcs.anl.gov>
> Subject: Re: [petsc-users] PETSc and Windows 10
>
> Sounds like there are different mingw tools and msys2 tools.
>
> So I guess one could use mingw compilers even from cygwin [using cygwin 
> tools] - i.e mingw compilers don't really need msys2 tools to work.
>
> Satish
>
> On Sun, 5 Jul 2020, Paolo Lampitella wrote:
>
> > Unfortunately, even PETSC_ARCH=i didn't work out. And while 
> > with-single-library=0 wasn't really appealing to me, it worked but only to 
> > later fail on make test.
> >
> > I guess all these differences are due to the fortran bindings and/or gcc 10.
> >
> > However, until I discover how they are different, I guess I'll be fine with 
> > /usr/bin/ar
> >
> > Paolo
> >
> >
> >
> > Sent from my Samsung Galaxy smartphone.
> >
> >
> >
> >  Original message 
> > From: Paolo Lampitella 
> > Date: 05/07/20 14:00 (GMT+01:00)
> > To: Pierre Jolivet 
> > Cc: Matthew Knepley , petsc-users 
> > 
> > Subject: RE: [petsc-users] PETSc and Windows 10
> >
> > Thank you very much Pierre.
> >
> > I'll keep you informed in case I see any relevant change from the tests 
> > when using your suggestion.
> >
> > Paolo
> >
> >
> >
> > Sent from my Samsung Galaxy smartphone.
> >
> >
> >
> >  Original message 
> > From: Pierre Jolivet 
> > Date: 05/07/20 13:45 (GMT+01:00)
> > To: Paolo Lampitella 
> > Cc: Matthew Knepley , petsc-users 
> > 
> > Subject: Re: [petsc-users] PETSc and Windows 10
> >
> > Hello Paolo,
> >
> > On 5 Jul 2020, at 1:15 PM, Paolo Lampitella
> > <paololampite...@hotmail.com> wrote:
> >
> > Dear all,
> >
> > I just want to update you on my journey to PETSc compilation in Windows 
> > under MSYS2+MINGW64
> >
> > Unfortunately, I haven’t been able to compile petsc-slepc through Freefem
> > but, as my final goal required also Fortran bindings (but I only needed 
> > blas, lapack, metis and hypre), I decided to follow my own route using the 
> > useful information from Pierre.
> >
> >
> >   *   I started by installing MPI from 
> > https://www.microsoft.com/en-us/download/details.aspx?id=100593. I don’t 
> > think the SDK is actually needed in my specific workf

[petsc-users] R: PETSc and Windows 10

2020-07-06 Thread Paolo Lampitella
Dear Satish,

Yes indeed, or at least that is my understanding. Still, my experience so far
with Cygwin has been, let’s say, mixed.

I wasn’t able to compile MPICH myself, with either gcc or mingw.

When having PETSc compile MPICH as well, I was successful only with gcc, not
mingw.

I didn’t even try compiling OpenMPI with mingw, as PETSc compilation already
failed using the OpenMPI available through the cygwin libraries (which is based
on gcc and not mingw).

Not sure if this is my fault, but in the end it didn’t go well

Paolo

Sent from Mail<https://go.microsoft.com/fwlink/?LinkId=550986> for Windows 10

From: Satish Balay <ba...@mcs.anl.gov>
Sent: Sunday, July 5, 2020 23:50
To: Paolo Lampitella <paololampite...@hotmail.com>
Cc: Pierre Jolivet <pierre.joli...@enseeiht.fr>; petsc-users <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] PETSc and Windows 10

Sounds like there are different mingw tools and msys2 tools.

So I guess one could use mingw compilers even from cygwin [using cygwin tools] 
- i.e mingw compilers don't really need msys2 tools to work.

Satish

On Sun, 5 Jul 2020, Paolo Lampitella wrote:

> Unfortunately, even PETSC_ARCH=i didn't work out. And while 
> with-single-library=0 wasn't really appealing to me, it worked but only to 
> later fail on make test.
>
> I guess all these differences are due to the fortran bindings and/or gcc 10.
>
> However, until I discover how they are different, I guess I'll be fine with 
> /usr/bin/ar
>
> Paolo
>
>
>
> Sent from my Samsung Galaxy smartphone.
>
>
>
>  Original message 
> From: Paolo Lampitella 
> Date: 05/07/20 14:00 (GMT+01:00)
> To: Pierre Jolivet 
> Cc: Matthew Knepley , petsc-users 
> Subject: RE: [petsc-users] PETSc and Windows 10
>
> Thank you very much Pierre.
>
> I'll keep you informed in case I see any relevant change from the tests when 
> using your suggestion.
>
> Paolo
>
>
>
> Sent from my Samsung Galaxy smartphone.
>
>
>
>  Original message 
> From: Pierre Jolivet 
> Date: 05/07/20 13:45 (GMT+01:00)
> To: Paolo Lampitella 
> Cc: Matthew Knepley , petsc-users 
> Subject: Re: [petsc-users] PETSc and Windows 10
>
> Hello Paolo,
>
> On 5 Jul 2020, at 1:15 PM, Paolo Lampitella
> <paololampite...@hotmail.com> wrote:
>
> Dear all,
>
> I just want to update you on my journey to PETSc compilation in Windows under 
> MSYS2+MINGW64
>
> Unfortunately, I haven’t been able to compile petsc-slepc through Freefem but,
> as my final goal required also Fortran bindings (but I only needed blas, 
> lapack, metis and hypre), I decided to follow my own route using the useful 
> information from Pierre.
>
>
>   *   I started by installing MPI from 
> https://www.microsoft.com/en-us/download/details.aspx?id=100593. I don’t 
> think the SDK is actually needed in my specific workflow, but I installed it 
> as well together with mpisetup.
>   *   Then I installed MSYS2 just following the wizard. Opened the MSYS2 
> terminal and updated with pacman -Syuu, closed if asked, reopened it and used 
> again pacman -Syuu several times until no more updates were available. Closed 
> it and opened it back.
>   *   Under the MSYS2 terminal installed just the following packages:
>
>
>
>  *   pacman -S base-devel git gcc gcc-fortran
>  *   pacman -S mingw-w64-x86_64-toolchain
>  *   pacman -S mingw-w64-x86_64-cmake
>  *   pacman -S mingw-w64-x86_64-msmpi
>
>
>
>   *   Closed the MSYS2 terminal and opened the MINGW64 one, went to 
> /mingw64/include and compiled my mpi module following 
> https://www.scivision.dev/windows-mpi-msys2/:
>
>
>
>  *   gfortran -c mpi.f90 -fno-range-check -fallow-invalid-boz
>
>
> However, I will keep an eye on the MS-MPI GitHub repository because the 
> fortran side seems to be far from perfect.
>
>
>   *   Then I downloaded the 3.13.3 version of petsc and configured it, still 
> under the MINGW64 terminal, with the following command:
>
>
> /usr/bin/python ./configure --prefix=/home/paolo/petsc --with-ar=/usr/bin/ar
> --with-shared-libraries=0 --with-debugging=0 --with-windows-graphics=0 
> --with-x=0
> COPTFLAGS="-O3 -mtune=native"
> CXXOPTFLAGS="-O3 -mtune=native"
> FOPTFLAGS="-O3 -mtune=native"
> FFLAGS=-fallow-invalid-boz
> --with-mpiexec="/C/Program\ Files/Microsoft\ MPI/Bin/mpiexec"
> --download-fblaslapack --download-metis --download-hypre
> --download-metis-cmake-arguments='-G "MSYS Makefiles"'
> --download-hypre-configure-arguments="--build=x86_64-linux-gnu 
> --host=x86_64-linux-gnu"
>
> Note that I just bypassed uni

Re: [petsc-users] PETSc and Windows 10

2020-07-05 Thread Paolo Lampitella
Unfortunately, even PETSC_ARCH=i didn't work out. And while
with-single-library=0 wasn't really appealing to me, it did work, only to fail
later on make test.

I guess all these differences are due to the fortran bindings and/or gcc 10.

However, until I discover how they are different, I guess I'll be fine with 
/usr/bin/ar

Paolo



Sent from my Samsung Galaxy smartphone.



 Original message 
From: Paolo Lampitella 
Date: 05/07/20 14:00 (GMT+01:00)
To: Pierre Jolivet 
Cc: Matthew Knepley , petsc-users 
Subject: RE: [petsc-users] PETSc and Windows 10

Thank you very much Pierre.

I'll keep you informed in case I see any relevant change from the tests when 
using your suggestion.

Paolo



Sent from my Samsung Galaxy smartphone.



 Original message 
From: Pierre Jolivet 
Date: 05/07/20 13:45 (GMT+01:00)
To: Paolo Lampitella 
Cc: Matthew Knepley , petsc-users 
Subject: Re: [petsc-users] PETSc and Windows 10

Hello Paolo,

On 5 Jul 2020, at 1:15 PM, Paolo Lampitella
<paololampite...@hotmail.com> wrote:

Dear all,

I just want to update you on my journey to PETSc compilation in Windows under 
MSYS2+MINGW64

Unfortunately, I haven’t been able to compile petsc-slepc through Freefem but,
as my final goal required also Fortran bindings (but I only needed blas, 
lapack, metis and hypre), I decided to follow my own route using the useful 
information from Pierre.


  *   I started by installing MPI from 
https://www.microsoft.com/en-us/download/details.aspx?id=100593. I don’t think 
the SDK is actually needed in my specific workflow, but I installed it as well 
together with mpisetup.
  *   Then I installed MSYS2 just following the wizard. Opened the MSYS2 
terminal and updated with pacman -Syuu, closed if asked, reopened it and used 
again pacman -Syuu several times until no more updates were available. Closed 
it and opened it back.
  *   Under the MSYS2 terminal installed just the following packages:



 *   pacman -S base-devel git gcc gcc-fortran
 *   pacman -S mingw-w64-x86_64-toolchain
 *   pacman -S mingw-w64-x86_64-cmake
 *   pacman -S mingw-w64-x86_64-msmpi



  *   Closed the MSYS2 terminal and opened the MINGW64 one, went to 
/mingw64/include and compiled my mpi module following 
https://www.scivision.dev/windows-mpi-msys2/:



 *   gfortran -c mpi.f90 -fno-range-check -fallow-invalid-boz


However, I will keep an eye on the MS-MPI GitHub repository because the fortran 
side seems to be far from perfect.


  *   Then I downloaded the 3.13.3 version of petsc and configured it, still 
under the MINGW64 terminal, with the following command:


/usr/bin/python ./configure --prefix=/home/paolo/petsc --with-ar=/usr/bin/ar
--with-shared-libraries=0 --with-debugging=0 --with-windows-graphics=0 
--with-x=0
COPTFLAGS="-O3 -mtune=native"
CXXOPTFLAGS="-O3 -mtune=native"
FOPTFLAGS="-O3 -mtune=native"
FFLAGS=-fallow-invalid-boz
--with-mpiexec="/C/Program\ Files/Microsoft\ MPI/Bin/mpiexec"
--download-fblaslapack --download-metis --download-hypre
--download-metis-cmake-arguments='-G "MSYS Makefiles"'
--download-hypre-configure-arguments="--build=x86_64-linux-gnu 
--host=x86_64-linux-gnu"

Note that I just bypassed uninstalling python in mingw64 (which doesn’t work) 
by using /usr/bin/python and that, as opposed to Pierre, I needed to also use 
the MSYS2 archiver (/usr/bin/ar) as opposed to the mingw64 one (/mingw64/bin/ar 
that shows up in the Pierre configure) as also mentioned here 
http://hillyuan.blogspot.com/2017/11/build-petsc-in-windows-under-mingw64.html, 
probably because of this issue 
https://stackoverflow.com/questions/37504625/ar-on-msys2-shell-receives-truncated-paths-when-called-from-makefile.

You are right that you can avoid deinstalling mingw-w64-x86_64-python if you 
can supply the proper Python yourself (we don’t have that luxury in our 
Makefile).
If you want to avoid using that AR, and stick to /mingw64/bin/ar (not sure what 
the pros and cons are), you can either:
- use another PETSC_ARCH (very short, like pw, for petsc-windows);
- use --with-single-library=0.
See this post on GitLab 
https://gitlab.com/petsc/petsc/-/issues/647#note_373507681
The OS I’m referring to is indeed my Windows + MSYS2 box.

Thanks,
Pierre

Then make all, make install and make check all went smooth. Also, I don’t know 
exactly what with-x=0 and with-windows-graphics=0 do, but I think it is stuff 
that I don’t need (yet configure worked with windows-graphics as well).


  *   Finally I launched make test. As some tests failed, I replicated the same 
install procedure on all the systems I have available on this same Windows 
machine (Ubuntu 20.04 and Centos 8 under a VirtualBox 6.0.22 VM, Ubuntu 20.04 
under WSL1 and the MSYS2-MINGW64 toolchain). I am attaching a file with the 
results printed to screen (not sure about which file should be used for a 
comparison/check). Note, however, that the 

Re: [petsc-users] PETSc and Windows 10

2020-07-05 Thread Paolo Lampitella
Thank you very much Pierre.

I'll keep you informed in case I see any relevant change from the tests when 
using your suggestion.

Paolo



Sent from my Samsung Galaxy smartphone.



 Original message 
From: Pierre Jolivet 
Date: 05/07/20 13:45 (GMT+01:00)
To: Paolo Lampitella 
Cc: Matthew Knepley , petsc-users 
Subject: Re: [petsc-users] PETSc and Windows 10

Hello Paolo,

On 5 Jul 2020, at 1:15 PM, Paolo Lampitella
<paololampite...@hotmail.com> wrote:

Dear all,

I just want to update you on my journey to PETSc compilation in Windows under 
MSYS2+MINGW64

Unfortunately, I haven’t been able to compile petsc-slepc through Freefem but,
as my final goal required also Fortran bindings (but I only needed blas, 
lapack, metis and hypre), I decided to follow my own route using the useful 
information from Pierre.


  *   I started by installing MPI from 
https://www.microsoft.com/en-us/download/details.aspx?id=100593. I don’t think 
the SDK is actually needed in my specific workflow, but I installed it as well 
together with mpisetup.
  *   Then I installed MSYS2 just following the wizard. Opened the MSYS2 
terminal and updated with pacman -Syuu, closed if asked, reopened it and used 
again pacman -Syuu several times until no more updates were available. Closed 
it and opened it back.
  *   Under the MSYS2 terminal installed just the following packages:



 *   pacman -S base-devel git gcc gcc-fortran
 *   pacman -S mingw-w64-x86_64-toolchain
 *   pacman -S mingw-w64-x86_64-cmake
 *   pacman -S mingw-w64-x86_64-msmpi



  *   Closed the MSYS2 terminal and opened the MINGW64 one, went to 
/mingw64/include and compiled my mpi module following 
https://www.scivision.dev/windows-mpi-msys2/:



 *   gfortran -c mpi.f90 -fno-range-check -fallow-invalid-boz


However, I will keep an eye on the MS-MPI GitHub repository because the fortran 
side seems to be far from perfect.


  *   Then I downloaded the 3.13.3 version of petsc and configured it, still 
under the MINGW64 terminal, with the following command:


/usr/bin/python ./configure --prefix=/home/paolo/petsc --with-ar=/usr/bin/ar
--with-shared-libraries=0 --with-debugging=0 --with-windows-graphics=0 
--with-x=0
COPTFLAGS="-O3 -mtune=native"
CXXOPTFLAGS="-O3 -mtune=native"
FOPTFLAGS="-O3 -mtune=native"
FFLAGS=-fallow-invalid-boz
--with-mpiexec="/C/Program\ Files/Microsoft\ MPI/Bin/mpiexec"
--download-fblaslapack --download-metis --download-hypre
--download-metis-cmake-arguments='-G "MSYS Makefiles"'
--download-hypre-configure-arguments="--build=x86_64-linux-gnu 
--host=x86_64-linux-gnu"

Note that I just bypassed uninstalling python in mingw64 (which doesn’t work) 
by using /usr/bin/python and that, as opposed to Pierre, I needed to also use 
the MSYS2 archiver (/usr/bin/ar) as opposed to the mingw64 one (/mingw64/bin/ar 
that shows up in the Pierre configure) as also mentioned here 
http://hillyuan.blogspot.com/2017/11/build-petsc-in-windows-under-mingw64.html, 
probably because of this issue 
https://stackoverflow.com/questions/37504625/ar-on-msys2-shell-receives-truncated-paths-when-called-from-makefile.

You are right that you can avoid deinstalling mingw-w64-x86_64-python if you 
can supply the proper Python yourself (we don’t have that luxury in our 
Makefile).
If you want to avoid using that AR, and stick to /mingw64/bin/ar (not sure what 
the pros and cons are), you can either:
- use another PETSC_ARCH (very short, like pw, for petsc-windows);
- use --with-single-library=0.
See this post on GitLab 
https://gitlab.com/petsc/petsc/-/issues/647#note_373507681
The OS I’m referring to is indeed my Windows + MSYS2 box.

Thanks,
Pierre

Then make all, make install and make check all went smooth. Also, I don’t know 
exactly what with-x=0 and with-windows-graphics=0 do, but I think it is stuff 
that I don’t need (yet configure worked with windows-graphics as well).


  *   Finally I launched make test. As some tests failed, I replicated the same 
install procedure on all the systems I have available on this same Windows 
machine (Ubuntu 20.04 and Centos 8 under a VirtualBox 6.0.22 VM, Ubuntu 20.04 
under WSL1 and the MSYS2-MINGW64 toolchain). I am attaching a file with the 
results printed to screen (not sure about which file should be used for a 
comparison/check). Note, however, that the tests in MSYS2 started with some 
cyclic reference issues for some .mod files, but this doesn’t show up in any 
file I could check.


I am still left with some doubts about the archiver, the cyclic reference 
errors and the differences in the test results, but I am able to link my code 
with petsc. Unfortunately, as this Windows porting is part of a large code 
restructuring, I can’t do much more with it, now, from my code. But if you can 
suggest some specific tutorial to use as test also for the parallel, I would be 
glad to dig deeper into the matter.

Best regards

Paolo

[petsc-users] R: PETSc and Windows 10

2020-07-05 Thread Paolo Lampitella
Dear all,

I just want to update you on my journey to PETSc compilation in Windows under 
MSYS2+MINGW64

Unfortunately, I haven’t been able to compile petsc-slepc through Freefem but,
as my final goal required also Fortran bindings (but I only needed blas, 
lapack, metis and hypre), I decided to follow my own route using the useful 
information from Pierre.


  *   I started by installing MPI from 
https://www.microsoft.com/en-us/download/details.aspx?id=100593. I don’t think 
the SDK is actually needed in my specific workflow, but I installed it as well 
together with mpisetup.
  *   Then I installed MSYS2 just following the wizard. Opened the MSYS2 
terminal and updated with pacman -Syuu, closed if asked, reopened it and used 
again pacman -Syuu several times until no more updates were available. Closed 
it and opened it back.
  *   Under the MSYS2 terminal installed just the following packages:



 *   pacman -S base-devel git gcc gcc-fortran
 *   pacman -S mingw-w64-x86_64-toolchain
 *   pacman -S mingw-w64-x86_64-cmake
 *   pacman -S mingw-w64-x86_64-msmpi


  *   Closed the MSYS2 terminal and opened the MINGW64 one, went to 
/mingw64/include and compiled my mpi module following 
https://www.scivision.dev/windows-mpi-msys2/:



 *   gfortran -c mpi.f90 -fno-range-check -fallow-invalid-boz


However, I will keep an eye on the MS-MPI GitHub repository because the fortran 
side seems to be far from perfect.



  *   Then I downloaded the 3.13.3 version of petsc and configured it, still 
under the MINGW64 terminal, with the following command:



/usr/bin/python ./configure --prefix=/home/paolo/petsc --with-ar=/usr/bin/ar

--with-shared-libraries=0 --with-debugging=0 --with-windows-graphics=0 
--with-x=0

COPTFLAGS="-O3 -mtune=native"

CXXOPTFLAGS="-O3 -mtune=native"

FOPTFLAGS="-O3 -mtune=native"

FFLAGS=-fallow-invalid-boz

--with-mpiexec="/C/Program\ Files/Microsoft\ MPI/Bin/mpiexec"

--download-fblaslapack --download-metis --download-hypre

--download-metis-cmake-arguments='-G "MSYS Makefiles"'

--download-hypre-configure-arguments="--build=x86_64-linux-gnu 
--host=x86_64-linux-gnu"



Note that I just bypassed uninstalling python in mingw64 (which doesn’t work) 
by using /usr/bin/python and that, as opposed to Pierre, I needed to also use 
the MSYS2 archiver (/usr/bin/ar) as opposed to the mingw64 one (/mingw64/bin/ar 
that shows up in the Pierre configure) as also mentioned here 
http://hillyuan.blogspot.com/2017/11/build-petsc-in-windows-under-mingw64.html, 
probably because of this issue 
https://stackoverflow.com/questions/37504625/ar-on-msys2-shell-receives-truncated-paths-when-called-from-makefile.
 Then make all, make install and make check all went smooth. Also, I don’t know 
exactly what with-x=0 and with-windows-graphics=0 do, but I think it is stuff 
that I don’t need (yet configure worked with windows-graphics as well).



  *   Finally I launched make test. As some tests failed, I replicated the same 
install procedure on all the systems I have available on this same Windows 
machine (Ubuntu 20.04 and Centos 8 under a VirtualBox 6.0.22 VM, Ubuntu 20.04 
under WSL1 and the MSYS2-MINGW64 toolchain). I am attaching a file with the 
results printed to screen (not sure about which file should be used for a 
comparison/check). Note, however, that the tests in MSYS2 started with some 
cyclic reference issues for some .mod files, but this doesn’t show up in any 
file I could check.

I am still left with some doubts about the archiver, the cyclic reference 
errors and the differences in the test results, but I am able to link my code 
with petsc. Unfortunately, as this Windows porting is part of a large code 
restructuring, I can’t do much more with it, now, from my code. But if you can 
suggest some specific tutorial to use as test also for the parallel, I would be 
glad to dig deeper into the matter.

Best regards

Paolo

Sent from Mail<https://go.microsoft.com/fwlink/?LinkId=550986> for Windows 10

From: Pierre Jolivet <pierre.joli...@enseeiht.fr>
Sent: Tuesday, June 30, 2020 15:22
To: Paolo Lampitella <paololampite...@hotmail.com>
Cc: Matthew Knepley <knep...@gmail.com>; petsc-users <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] PETSc and Windows 10

Please use the 3.13.2 tarball, this was fixed by Satish in the previous commit 
I already linked 
(https://gitlab.com/petsc/petsc/-/commit/2cd8068296b34e127f055bb32f556e3599f17523).
(If you want FreeFEM to do the dirty work for you, just switch to the develop 
branch, and redo “make petsc-slepc”)
But I think you’ve got everything you need now for a smooth compilation :)

Thanks,
Pierre


On 30 Jun 2020, at 3:09 PM, Paolo Lampitella
<paololampite...@hotmail.com> wrote:

Dear Pierre,

thanks for the fast response. Unfortunately it still fails, but now in the 
configure of ScaLAPACK
(wh

[petsc-users] R: PETSc and Windows 10

2020-06-29 Thread Paolo Lampitella
Dear Pierre,

thanks again for your time

I guess there is no way for me to use the toolchain you are using (I don’t 
remember having any choice on which version of MSYS or GCC I could install)

Paolo

Sent from Mail<https://go.microsoft.com/fwlink/?LinkId=550986> for Windows 10

From: Pierre Jolivet <pierre.joli...@enseeiht.fr>
Sent: Monday, June 29, 2020 20:01
To: Matthew Knepley <knep...@gmail.com>
Cc: Paolo Lampitella <paololampite...@hotmail.com>; petsc-users <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] PETSc and Windows 10




On 29 Jun 2020, at 7:47 PM, Matthew Knepley
<knep...@gmail.com> wrote:

On Mon, Jun 29, 2020 at 1:35 PM Paolo Lampitella
<paololampite...@hotmail.com> wrote:
Dear Pierre, sorry to bother you, but I already have some issues. What I did:


  *   pacman -R mingw-w64-x86_64-python mingw-w64-x86_64-gdb (is gdb also 
troublesome?)
  *   Followed points 6 and 7 at 
https://doc.freefem.org/introduction/installation.html#compilation-on-windows
I first got a warning on the configure at point 6, as --disable-hips is not
recognized. Then, on make ‘petsc-slepc’ of point 7 (no SUDO=sudo flag was 
necessary) I got to this point:

tar xzf ../pkg/petsc-lite-3.13.0.tar.gz
patch -p1 < petsc-suitesparse.patch
patching file petsc-3.13.0/config/BuildSystem/config/packages/SuiteSparse.py
touch petsc-3.13.0/tag-tar
cd petsc-3.13.0 && ./configure MAKEFLAGS='' \
--prefix=/home/paolo/freefem/ff-petsc//r \
--with-debugging=0 COPTFLAGS='-O3 -mtune=generic' CXXOPTFLAGS='-O3 
-mtune=generic' FOPTFLAGS='-O3 -mtune=generic' --with-cxx-dialect=C++11 
--with-ssl=0 --with-x=0 --with-fortran-bindings=0 --with-shared-libraries=0 
--with-cc='gcc' --with-cxx='g++' --with-fc='gfortran' 
CXXFLAGS='-fno-stack-protector' CFLAGS='-fno-stack-protector' 
--with-scalar-type=real --with-mpi-lib='/c/Windows/System32/msmpi.dll' 
--with-mpi-include='/home/paolo/FreeFem-sources/3rdparty/include/msmpi' 
--with-mpiexec='/C/Program\ Files/Microsoft\ MPI/Bin/mpiexec' 
--with-blaslapack-include='' 
--with-blaslapack-lib='/mingw64/bin/libopenblas.dll' --download-scalapack 
--download-metis --download-ptscotch --download-mumps --download-hypre 
--download-parmetis --download-superlu --download-suitesparse --download-tetgen 
--download-slepc '--download-metis-cmake-arguments=-G "MSYS Makefiles"' 
'--download-parmetis-cmake-arguments=-G "MSYS Makefiles"' 
'--download-superlu-cmake-arguments=-G "MSYS Makefiles"' 
'--download-hypre-configure-arguments=--build=x86_64-linux-gnu 
--host=x86_64-linux-gnu' PETSC_ARCH=fr
===============================================================================
             Configuring PETSc to compile on your system
===============================================================================
TESTING: FortranMPICheck from config.packages.MPI(config/BuildSystem/config/packages/MPI.py)
*******************************************************************************
      UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details):
-------------------------------------------------------------------------------
Fortran error! mpi_init() could not be located!
*******************************************************************************

make: *** [Makefile:210: petsc-3.13.0/tag-conf-real] Error 1

Note that I didn’t add anything to any PATH variable, because this is not 
mentioned in your documentation.

On a side note, this is the same error I got when trying to build PETSc in 
Cygwin with the default OpenMPI available in Cygwin.

I am attaching the configure.log… it seems to me that the error comes from
configure trying to include the mpif.h in your folder without using the
-fallow-invalid-boz flag that I had to use, for example, to compile mpi.f90
into mpi.mod.

But I’m not sure why this is happening

Pierre,

Could this be due to gcc 10?

Sorry, I’m slow. You are right. Our workers use gcc 9, everything is fine, but 
I see on my VM which I updated that I use gcc 10 and had to disable Fortran, I 
guess the MUMPS run I showcased was with a prior PETSc build.
I’ll try to resolve this and will keep you posted.
They really caught a lot of people off guard with gfortran 10…

Thanks,
Pierre


Executing: gfortran -c -o /tmp/petsc-ur0cff6a/config.libraries/conftest.o 
-I/tmp/petsc-ur0cff6a/config.compilers 
-I/tmp/petsc-ur0cff6a/config.setCompilers 
-I/tmp/petsc-ur0cff6a/config.compilersFortran 
-I/tmp/petsc-ur0cff6a/config.libraries  -Wall -ffree-line-length-0 
-Wno-unused-dummy-argument -O3 -mtune=generic   
-I/home/paolo/FreeFem-sources/3rdparty/include/msmpi 
/tmp/petsc-ur0cff6a/config.libraries/conftest.F90
Possible ERROR while running compiler: exit code 1
stderr:
C:/msys64/home/paolo/FreeFem-sources/3rdparty/include/msmpi/mpif.h:227:36:

  227 |PARAMETER (MPI_DATATYPE_NULL=z'0c00')
  |1
Error: 

[petsc-users] R: PETSc and Windows 10

2020-06-29 Thread Paolo Lampitella
I think I made the first step of having mingw64 from msys2 working with ms-mpi.

I found that the issue I was having was related to:

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91556

and, probably (but impossible to check now), I was using an msys2 and/or mingw 
mpi package before this fix:

https://github.com/msys2/MINGW-packages/commit/11b4cff3d2ec7411037b692b0ad5a9f3e9b9978d#diff-eac59989e3096be97d940c8f47b50fba

Admittedly, I had never used gcc 10 before on any machine. Still, I feel that
reporting that sort of error in that way is, at least, misleading (I would have
preferred the initial implementation as mentioned in the gcc bug tracker).

A second thing that I was not used to, and that made me more uncertain about
the procedure I was following, is having to compile the MPI module myself.
There are several versions of this out there, but I decided to stick with this
one:

https://www.scivision.dev/windows-mpi-msys2/

even if there seems to be no need to include -fno-range-check and the current 
mpi.f90 version is different from the mpif.h as reported here:

https://github.com/microsoft/Microsoft-MPI/issues/33

which, to me, are both signs of lack of attention on the fortran side by those 
that maintain this thing.

In summary, this is the procedure I followed so far (on a 64 bit machine with 
Windows 10):


  *   Install MSYS2 from https://www.msys2.org/ and just follow the install 
wizard
  *   Open the MSYS2 terminal and execute: pacman -Syuu
  *   Close the terminal when asked and reopen it
  *   Keep executing ‘pacman -Syuu’ until nothing else needs to be updated
  *   Close the MSYS2 terminal and reopen it (I guess because I was in paranoid
mode), then install packages with:



pacman -S base-devel git gcc gcc-fortran bsdcpio lndir pax-git unzip

pacman -S mingw-w64-x86_64-toolchain

pacman -S mingw-w64-x86_64-msmpi

pacman -S mingw-w64-x86_64-cmake

pacman -S mingw-w64-x86_64-freeglut

pacman -S mingw-w64-x86_64-gsl

pacman -S mingw-w64-x86_64-libmicroutils

pacman -S mingw-w64-x86_64-hdf5

pacman -S mingw-w64-x86_64-openblas

pacman -S mingw-w64-x86_64-arpack

pacman -S mingw-w64-x86_64-jq



This set should include all the libraries mentioned by Pierre and/or used by
his Jenkins, as the final goal here is to have PETSc and its dependencies
working. But I think that for pure MPI one could stop at msmpi (even, maybe,
just install msmpi and have the dependencies figured out by pacman). Honestly,
I don’t remember the exact order I used to install the packages, but this
should not affect things. Also, as I was still in paranoid mode, I kept
executing ‘pacman -Syuu’ after each package was installed. After this, close
the MSYS2 terminal.



  *   Open the MINGW64 terminal and create the .mod file out of the mpi.f90 
file, as mentioned here https://www.scivision.dev/windows-mpi-msys2/, with:



cd /mingw64/include

gfortran -c mpi.f90 -fno-range-check -fallow-invalid-boz



This is needed to ‘USE mpi’ (as opposed to INCLUDE ‘mpif.h’)



  *   Install the latest MS-MPI (both sdk and setup) from 
https://www.microsoft.com/en-us/download/details.aspx?id=100593

At this point I’ve been able to compile (using the MINGW64 terminal) different 
mpi test programs and they run as expected in the classical Windows prompt. I 
added this function to my .bashrc in MSYS2 in order to easily copy the required 
dependencies out of MSYS:

function copydep() { ldd $1 | grep "=> /$2" | awk '{print $3}' | xargs -I '{}' 
cp -v '{}' .; }

which can be used, with the MINGW64 terminal, by navigating to the folder where 
the final executable, say, my.exe, resides (even if under a Windows path) and 
executing:

copydep my.exe mingw64

This, of course, must be done before actually trying to execute the .exe in the 
windows cmd prompt.
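
As an extra sanity check along the same lines (a sketch; my.exe is the same placeholder name used above), one can re-run ldd from the MINGW64 terminal after copying and confirm that nothing is still being resolved from /mingw64:

# sketch: any remaining /mingw64 paths indicate DLLs that were not copied
ldd my.exe | grep "/mingw64"

If this prints nothing, all the MinGW64 dependencies have been copied next to the executable (system DLLs under the Windows directories are expected and fine).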

Hopefully, I should now be able to follow Pierre’s instructions for PETSc (but 
first I wanna give a try to the system python before removing it)

Thanks

Paolo



[petsc-users] R: PETSc and Windows 10

2020-06-29 Thread Paolo Lampitella
As a follow-up on the OpenMPI matter in Cygwin: I wasn’t actually able to use
the Cygwin version at all, not even to compile a simple MPI test.
And PETSc fails to use it as well, as it seems unable to find MPI_Init.

I might try having PETSc install it, as it did with MPICH, but this is just for
future reference in case anyone is interested.

Paolo

Sent from Mail<https://go.microsoft.com/fwlink/?LinkId=550986> for Windows 10

From: Satish Balay <ba...@mcs.anl.gov>
Sent: Sunday, June 28, 2020 18:17
To: Satish Balay via petsc-users <petsc-users@mcs.anl.gov>
Cc: Paolo Lampitella <paololampite...@hotmail.com>; Pierre Jolivet <pierre.joli...@enseeiht.fr>
Subject: Re: [petsc-users] PETSc and Windows 10

On Sun, 28 Jun 2020, Satish Balay via petsc-users wrote:

> On Sun, 28 Jun 2020, Paolo Lampitella wrote:

> >  *   For my Cygwin-GNU route (basically what is mentioned in PFLOTRAN 
> > documentation), am I expected to then run from the cygwin terminal or 
> > should the windows prompt work as well? Is the fact that I require a second 
> > Enter hit and the mismanagement of serial executables the sign of something 
> > wrong with the Windows prompt?
>
> I would think Cygwin-GNU route should work. I'll have to see if I can 
> reproduce the issues you have.

I attempted a couple of builds - one with mpich and the other with 
cygwin-openmpi

mpich compiled petsc example works sequentially - however mpiexec appears to 
require cygwin env.

>>>>>>>>
C:\petsc-install\bin>ex5f
Number of SNES iterations = 4

C:\petsc-install\bin>mpiexec -n 1 ex5f
[cli_0]: write_line error; fd=448 buf=:cmd=init pmi_version=1 pmi_subversion=1
:
system msg for write_line failure : Bad file descriptor
[cli_0]: Unable to write to PMI_fd
[cli_0]: write_line error; fd=448 buf=:cmd=get_appnum
:
system msg for write_line failure : Bad file descriptor
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(467):
MPID_Init(140)...: channel initialization failed
MPID_Init(421)...: PMI_Get_appnum returned -1
[cli_0]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(467):
MPID_Init(140)...: channel initialization failed
MPID_Init(421)...: PMI_Get_appnum returned -1

C:\petsc-install\bin>
<<<<<<

cygwin-openmpi compiled petsc example binary gives errors even for sequential 
run

>>>>>>>>
C:\Users\balay\test>ex5f
Warning: '/dev/shm' does not exists or is not a directory.

POSIX shared memory objects require the existance of this directory.
Create the directory '/dev/shm' and set the permissions to 01777.
For instance on the command line: mkdir -m 01777 /dev/shm
[ps5:00560] [[INVALID],INVALID] ORTE_ERROR_LOG: A system-required executable 
either could not be found or was not executable by this user in file 
/cygdrive/d/cyg_pub/devel/openmpi/v3.1/prova/openmpi-3.1.6-1.x86_64/src/openmpi-3.1.6/orte/mca/ess/singleton/ess_singleton_module.c
 at line 388
[ps5:00560] [[INVALID],INVALID] ORTE_ERROR_LOG: A system-required executable 
either could not be found or was not executable by this user in file 
/cygdrive/d/cyg_pub/devel/openmpi/v3.1/prova/openmpi-3.1.6-1.x86_64/src/openmpi-3.1.6/orte/mca/ess/singleton/ess_singleton_module.c
 at line 166
--
Sorry!  You were supposed to get help about:
orte_init:startup:internal-failure
But I couldn't open the help file:
/usr/share/openmpi/help-orte-runtime: No such file or directory.  Sorry!
--
<<<<<<<

So looks like you would need cygwin installed to run Cygwin-MPI binaries.. Also 
I don't know how cygwin/windows interaction overhead will affect parallel 
performance.

Satish



Re: [petsc-users] PETSc and Windows 10

2020-06-28 Thread Paolo Lampitella
Not sure if I did the same. I first made the petsc install in a folder in my 
cygwin home and then copied the mpich executables, their .dll dependencies, my 
executable and its dependencies in a folder on my desktop.

Then I went there with both terminals (cygwin and Windows) and launched using
mpiexec.hydra.exe in the folder (noting the difference between using ./ and .\ 
prepended to both executables in the two terminals).

With the cygwin terminal things worked as expected. It is kind of premature now
for testing performance, but I feel that some compromise here can be accepted,
considering the different constraints. I didn't pay too much attention in this
phase, but I haven't seen anything suspiciously slow either (the point is that
I don't have a native linux install right now to make a meaningful comparison).

However, running from the Windows terminal, things worked differently for me.
It seems that it worked, but I had to hit Enter a second time... maybe I'm
missing something behind the scenes.

I still have to recompile with OpenMPI to have a meaningful comparison

Thanks

Paolo



Sent from my Samsung Galaxy smartphone.

 Original message 
From: Satish Balay 
Date: 28/06/20 18:17 (GMT+01:00)
To: Satish Balay via petsc-users 
Cc: Paolo Lampitella , Pierre Jolivet 

Subject: Re: [petsc-users] PETSc and Windows 10

On Sun, 28 Jun 2020, Satish Balay via petsc-users wrote:

> On Sun, 28 Jun 2020, Paolo Lampitella wrote:

> >  *   For my Cygwin-GNU route (basically what is mentioned in PFLOTRAN 
> > documentation), am I expected to then run from the cygwin terminal or 
> > should the windows prompt work as well? Is the fact that I require a second 
> > Enter hit and the mismanagement of serial executables the sign of something 
> > wrong with the Windows prompt?
>
> I would think Cygwin-GNU route should work. I'll have to see if I can 
> reproduce the issues you have.

I attempted a couple of builds - one with mpich and the other with 
cygwin-openmpi

mpich compiled petsc example works sequentially - however mpiexec appears to 
require cygwin env.

>>>>>>>>
C:\petsc-install\bin>ex5f
Number of SNES iterations = 4

C:\petsc-install\bin>mpiexec -n 1 ex5f
[cli_0]: write_line error; fd=448 buf=:cmd=init pmi_version=1 pmi_subversion=1
:
system msg for write_line failure : Bad file descriptor
[cli_0]: Unable to write to PMI_fd
[cli_0]: write_line error; fd=448 buf=:cmd=get_appnum
:
system msg for write_line failure : Bad file descriptor
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(467):
MPID_Init(140)...: channel initialization failed
MPID_Init(421)...: PMI_Get_appnum returned -1
[cli_0]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(467):
MPID_Init(140)...: channel initialization failed
MPID_Init(421)...: PMI_Get_appnum returned -1

C:\petsc-install\bin>
<<<<<<

cygwin-openmpi compiled petsc example binary gives errors even for sequential 
run

>>>>>>>>
C:\Users\balay\test>ex5f
Warning: '/dev/shm' does not exists or is not a directory.

POSIX shared memory objects require the existance of this directory.
Create the directory '/dev/shm' and set the permissions to 01777.
For instance on the command line: mkdir -m 01777 /dev/shm
[ps5:00560] [[INVALID],INVALID] ORTE_ERROR_LOG: A system-required executable 
either could not be found or was not executable by this user in file 
/cygdrive/d/cyg_pub/devel/openmpi/v3.1/prova/openmpi-3.1.6-1.x86_64/src/openmpi-3.1.6/orte/mca/ess/singleton/ess_singleton_module.c
 at line 388
[ps5:00560] [[INVALID],INVALID] ORTE_ERROR_LOG: A system-required executable 
either could not be found or was not executable by this user in file 
/cygdrive/d/cyg_pub/devel/openmpi/v3.1/prova/openmpi-3.1.6-1.x86_64/src/openmpi-3.1.6/orte/mca/ess/singleton/ess_singleton_module.c
 at line 166
--
Sorry!  You were supposed to get help about:
orte_init:startup:internal-failure
But I couldn't open the help file:
/usr/share/openmpi/help-orte-runtime: No such file or directory.  Sorry!
--
<<<<<<<

So looks like you would need cygwin installed to run Cygwin-MPI binaries.. Also 
I don't know how cygwin/windows interaction overhead will affect parallel 
performance.

Satish


[petsc-users] R: PETSc and Windows 10

2020-06-28 Thread Paolo Lampitella
Hello Pierre,

thank you very much. Knowing that you actually test it on a daily basis is
already enough for me to focus on the MSYS2-MinGW64 toolchain, which would be
more straightforward to deploy (instead of having someone install Cygwin) and
more valuable to reuse.

I already had the impression that your work on it was recent but, knowing that 
your actual code is C++, and seeing some recent issues with MS-MPI and 
gfortran, say

https://github.com/microsoft/Microsoft-MPI/issues/33

gave me the impression that the overall toolchain is poorly maintained/tested
by Microsoft on the Fortran side, and that maybe this could go undetected in
non-Fortran projects.

I can also confirm that my problems with MSYS2 and MinGW64 already started at 
the MPI level and had nothing to do with PETSc… yet.

At this point, I guess, we can either go off radar (if there really isn’t much 
love for MSYS2 here ) or keep it going.

I will try to rework everything from scratch with MSYS2 and first make 
extensive MPI tests again. Maybe expect to be bothered again when I try to 
reuse your Makefile 

Thanks again

Paolo

Sent from Mail<https://go.microsoft.com/fwlink/?LinkId=550986> for Windows 10

From: Pierre Jolivet <pierre.joli...@enseeiht.fr>
Sent: Sunday, June 28, 2020 16:42
To: Paolo Lampitella <paololampite...@hotmail.com>
Cc: Satish Balay <ba...@mcs.anl.gov>; petsc-users <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] PETSc and Windows 10

Hello Paolo,


On 28 Jun 2020, at 4:19 PM, Satish Balay
<ba...@mcs.anl.gov> wrote:

On Sun, 28 Jun 2020, Paolo Lampitella wrote:


 1.  MSYS2+MinGW64 compilers. I understood that MinGW is not well supported,
probably because of how it handles paths, but I wanted to give it a try,
because it should be more “native” and there seem to be relevant examples out
there that managed to do it. I first tried with the msys2 mpi distribution,
produced the .mod file out of the mpi.f90 file in the distribution (I tried my
best with different hacks around known limitations of this file, also present
in the official MS-MPI distribution) and tried it with my code without petsc,
but it failed to compile the code with some strange MPI-related error (an
argument mismatch between two unrelated MPI calls in the code, which is
nonsense to me). In contrast, simple mpi tests (hello-world like) worked as
expected. Then I decided to follow this:



https://doc.freefem.org/introduction/installation.html#compilation-on-windows

Sorry, our (FreeFEM) documentation is not the best…

MSYS2+MinGW64 is a fantastic tool to deploy .exe with PETSc.
For example, in this .exe 
https://github.com/FreeFem/FreeFem-sources/releases/download/v4.6/FreeFEM-4.6-win7-64.exe,
 we ship PETSc + SLEPc (in real + complex) with MS-MPI, hypre, MUMPS, 
ScaLAPACK, SuperLU, SuiteSparse, ParMETIS, METIS, SCOTCH, TetGen, HPDDM, all 
compiled by PETSc, needless to say :)
There are some tricks, that you can copy/paste from 
https://github.com/FreeFem/FreeFem-sources/blob/master/3rdparty/ff-petsc/Makefile#L99-L120
Basically, hypre + MinGW64 does not work if you don’t supply 
'--download-hypre-configure-arguments=--build=x86_64-linux-gnu 
--host=x86_64-linux-gnu' and all CMake packages need an additional flag as well:
'--download-metis-cmake-arguments=-G "MSYS Makefiles"' 
'--download-parmetis-cmake-arguments=-G "MSYS Makefiles"' 
'--download-superlu-cmake-arguments=-G "MSYS Makefiles"'
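
For instance, a minimal configure sketch combining these (run from an MSYS2
MinGW64 shell) might look like the following; the prefix and package selection
here are purely illustrative, and only the hypre/CMake arguments above are the
tested part (the full set of options is in the FreeFEM Makefile linked earlier):

  ./configure --prefix=$HOME/petsc-mingw64 --with-debugging=0 \
    --download-metis --download-parmetis --download-hypre \
    '--download-hypre-configure-arguments=--build=x86_64-linux-gnu --host=x86_64-linux-gnu' \
    '--download-metis-cmake-arguments=-G "MSYS Makefiles"' \
    '--download-parmetis-cmake-arguments=-G "MSYS Makefiles"'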

This is tested on a daily basis on Windows 7 and Windows 10, so I’m a little 
puzzled by your MPI problems.
I’d suggest you stick to MS-MPI (that’s what we use and it’s trivial to install 
on MSYS2 https://packages.msys2.org/package/mingw-w64-x86_64-msmpi).
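(With a standard MSYS2 setup that package would presumably be pulled in with
something like

  pacman -S mingw-w64-x86_64-msmpi

while, if I remember correctly, the MS-MPI runtime itself, i.e. mpiexec and
smpd, still comes from Microsoft's own installer.)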

I’m not sure this is specific to PETSc, so feel free to have a chat in private.
But I guess we can continue on the mailing list as well, it’s just that there 
is not much love for MSYS2 over here, sadly.

Thanks,
Pierre



but the exact same type of error came up (the MPI calls in my code were different,
but the error was the same). Trying again from scratch (i.e., without all the
things I did at the beginning to compile my code), the same error came up when
compiling some of the FreeFEM dependencies (this time not even in MPI calls).



As a side note, there seems to be an official effort to port PETSc to MSYS2
(https://github.com/okhlybov/MINGW-packages/tree/whpc/mingw-w64-petsc), but it
hasn't made it into the official packages yet, which I interpret as a warning.



 1.  Didn’t try cross-compiling with MinGW from Linux, as I thought it
couldn’t be any better than doing it from MSYS2.
 2.  Didn’t try PGI, as I didn’t know whether I would then have been able to make
PETSc work.

So, here are some questions I have with respect to where I stand now and
the points above:


*   I haven’t seen the MSYS2-MinGW64 toolchain mentioned at all in official 
documentation/discussions. Should I definitely abandon it (despite someone 
mentioning it as working) because of known issues?

[petsc-users] R: PETSc and Windows 10

2020-06-28 Thread Paolo Lampitella
Dear Satish,

let me first mention that using the OpenMPI runtime to run the executable
built on top of the PETSc-MPICH toolchain just came as an act of despair and
was just a command away (despite knowing the ABI initiative is based on MPICH).
I already had OpenMPI in Cygwin because I was planning to move there in case
PETSc failed to install MPICH. But as PETSc managed to handle that, that route
is on hold for now (and I have not even thought about a plan for distributing
it).

Also, I didn’t mess with any PATH variable, nor did I install anything
in system paths, and I always checked in both environments (Cygwin terminal and
Windows prompt) that the mpiexec being used was the correct one.

From what I can tell, just copying all the MPI executables built by PETSc into
the same folder as my executable (say, a folder on the desktop), going there
with the Cygwin terminal or the Windows prompt, and running from there actually
worked (yet with the differences I described in the first mail). So, in this
very case, the idea would be that the user just installs Cygwin and runs my
executable from within the folder I send them.

If MS-MPI or Intel MPI had worked, it wouldn’t have been a problem (in my view)
to have users install one of them, as long as a trivial installation worked.

Thanks

Sent from Mail<https://go.microsoft.com/fwlink/?LinkId=550986> for Windows 10

From: Satish Balay<mailto:ba...@mcs.anl.gov>
Sent: Sunday, June 28, 2020 16:30
To: Satish Balay via petsc-users<mailto:petsc-users@mcs.anl.gov>
Cc: Paolo Lampitella<mailto:paololampite...@hotmail.com>; Pierre 
Jolivet<mailto:pierre.joli...@enseeiht.fr>
Subject: Re: [petsc-users] PETSc and Windows 10

BTW: How does redistributing the MPI runtime work with all the choices you have?

For example, with MS-MPI or Intel MPI - wouldn't the user have to install these
packages? [i.e. you can't just copy them over to a folder and have mpiexec work
- from what I can tell]

And how did you plan on installing MPICH - but make mpiexec from OpenMPI
redistributable? Did you use OpenMPI from Cygwin - or install it manually?

And presumably you don't want users installing Cygwin.

Satish

On Sun, 28 Jun 2020, Satish Balay via petsc-users wrote:

> On Sun, 28 Jun 2020, Paolo Lampitella wrote:
>
> > Dear PETSc users,
> >
> > I’ve been a happy PETSc user since version 3.3, using it both under Ubuntu 
> > (from 14.04 up to 20.04) and CentOS (from 5 to 8).
> >
> > I use it as an optional component for a parallel Fortran code (that, BTW, 
> > also uses METIS) and, wherever allowed, I used to install MPI myself (both 
> > MPICH and OpenMPI) and PETSc on top of it without any trouble ever (besides 
> > being, myself, as dumb as one can be in this).
> >
> > I did this on top of gnu compilers and, less extensively, intel compilers, 
> > both on a range of different systems (from virtual machines, to 
> > workstations to actual clusters).
> >
> > So far so good.
> >
> > Today I find myself in the need of deploying my application to Windows 10 
> > users, which means giving them a folder with all the executables and 
> > libraries to make them run in it, including the mpi runtime. Unfortunately, 
> > I also have to rely on free tools (can’t afford Intel for the moment).
> >
> > To the best of my knowledge, considering also far from optimal solutions, 
> > my options would then be: Virtual machines and WSL1, Cygwin, MSYS2-MinGW64, 
> > Cross compiling with MinGW64 from within Linux, PGI + Visual Studio + 
> > Cygwin (not sure about this one)
> >
> > I know this is largely unsupported, but I was wondering if there is, 
> > nonetheless, some general (and more official) knowledge available on the 
> > matter. What I tried so far:
> >
> >
> >   1.  Virtual machines and WSL1: both work like a charm, just like in the 
> > native OS, but very far from ideal for the distribution purpose
> >
> >
> >   1.  Cygwin with GNU compilers (as opposed to using Intel and Visual 
> > Studio): I was unable to compile MPI myself as I am used to doing on Linux, 
> > so I just tried going all in and let PETSc do everything for me (using 
> > static linking): download and install MPICH, BLAS, LAPACK, METIS and HYPRE. 
> > Everything just worked (for now, compiling and running trivial tests) and I 
> > am able to use everything from within a Cygwin terminal (even with 
> > executables and dependencies outside Cygwin). Still, even within Cygwin, I 
> > can’t switch to using, say, the Cygwin OpenMPI mpirun/mpiexec for an MPI 
> > program compiled with PETSc MPICH (things run but not as expected). Some 
> > troubles start when I try to use cmd.exe (which I pictured as the more natural way 
>

[petsc-users] PETSc and Windows 10

2020-06-28 Thread Paolo Lampitella
 processes produce more 
copies of this. However, both Intel MPI and MS-MPI are able to run a serial Fortran 
executable built with Cygwin. I think I did everything correctly, and adding 
-localhost didn’t help (actually, it caused more problems with the interpretation 
of the command-line arguments for mpiexec).


  1.  Cygwin with MinGW64 compilers. Never managed to compile MPI, not even 
through PETSc.



  1.  MSYS2+MinGW64 compilers. I understood that MinGW is not well supported,
probably because of how it handles paths, but I wanted to give it a try,
because it should be more “native” and there seem to be relevant examples out
there that managed to do it. I first tried with the MSYS2 MPI distribution,
produced the .mod file out of the mpi.f90 file in the distribution (I tried my
best with different hacks for known limitations of this file, which are also
present in the official MS-MPI distribution) and tried with my code without
PETSc, but it failed to compile the code with some strange MPI-related error
(argument mismatch between two unrelated MPI calls in the code, which is
nonsense to me). In contrast, simple MPI tests (hello-world-like) worked as
expected. Then I decided to follow this:



https://doc.freefem.org/introduction/installation.html#compilation-on-windows



but the exact same type of error came up (the MPI calls in my code were different,
but the error was the same). Trying again from scratch (i.e., without all the
things I did at the beginning to compile my code), the same error came up when
compiling some of the FreeFEM dependencies (this time not even in MPI calls).



As a side note, there seems to be an official effort to port PETSc to MSYS2
(https://github.com/okhlybov/MINGW-packages/tree/whpc/mingw-w64-petsc), but it
hasn't made it into the official packages yet, which I interpret as a warning.



  1.  Didn’t try cross-compiling with MinGW from Linux, as I thought
it couldn’t be any better than doing it from MSYS2.
  2.  Didn’t try PGI, as I didn’t know whether I would then have been able to 
make PETSc work.

So, here are some questions I have with respect to where I stand now and 
the points above:


 *   I haven’t seen the MSYS2-MinGW64 toolchain mentioned at all in 
official documentation/discussions. Should I definitely abandon it (despite 
someone mentioning it as working) because of known issues?
 *   What about the PGI route? I don’t see it mentioned either. I guess it 
would require some work on win32fe.
 *   For my Cygwin-GNU route (basically what is mentioned in the PFLOTRAN 
documentation), am I expected to run from the Cygwin terminal, or should 
the Windows prompt work as well? Is the fact that I need a second Enter hit, 
and the mishandling of serial executables, a sign of something wrong with 
the Windows prompt?
 *   More generally, is there some known working, albeit unofficial, 
route given my constraints (free+fortran+windows+mpi+petsc)?

Thanks for your attention and your great work on PETSc

Best regards

Paolo Lampitella


[petsc-users] R: Best workflow for different systems with different block sizes

2017-02-02 Thread Paolo Lampitella
Barry,

thank you very much.

Hopefully, I will soon be able to come back with some real numbers on some real 
machine, to compare the two approaches.

Paolo

-Original Message-
From: Barry Smith [mailto:bsm...@mcs.anl.gov] 
Sent: Wednesday, February 1, 2017 17:53
To: Paolo Lampitella <paololampite...@hotmail.com>
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] Best workflow for different systems with different 
block sizes


   Paolo,

 If the various matrices keep the same nonzero structure throughout the 
simulation, there is a slight performance gain from reusing them vs. creating and 
destroying them in each outer iteration. Likely not more than, say, 10% of the 
run time if the linear solves are relatively difficult (i.e. most of the time 
is spent in the linear solves), but it could be a bit more, maybe 20%, if the 
linear solves are not dominating the time.

I am not basing this on any quantitative information I have. So it really 
comes down to whether you want to run larger problems that require reusing the 
memory.

 Regarding creating the right-hand side in your memory and copying it to 
PETSc: creating the right-hand side directly in PETSc vectors (with 
VecGetArrayF90()) will be a little faster and require slightly less memory, so 
it is a "good thing to do", but I suspect you will barely be able to measure the 
difference.
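
A minimal sketch of what that looks like in Fortran (the attached petsc_solve.f90
will of course differ; nloc and the fill expression are placeholders, and the
include/module style follows the current PETSc Fortran conventions):

subroutine fill_rhs_in_place(b, nloc, ierr)
#include <petsc/finclude/petscvec.h>
  use petscvec
  implicit none
  Vec                  b
  PetscInt             nloc, i
  PetscErrorCode       ierr
  PetscScalar, pointer :: barray(:)

  call VecGetArrayF90(b, barray, ierr)      ! direct access to the local part of b
  do i = 1, nloc
     barray(i) = 0.0                        ! compute the local RHS entry here
  end do
  call VecRestoreArrayF90(b, barray, ierr)  ! hand the array back to PETSc
end subroutine fill_rhs_in_place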

   Barry



> On Feb 1, 2017, at 9:21 AM, Paolo Lampitella <paololampite...@hotmail.com> 
> wrote:
> 
> Dear PETSc users and developers,
>  
> I successfully, and very satisfactorily, use PETSc for the linear 
> algebra part of a coupled, preconditioned, density-based, unstructured, 
> cell-centered, finite volume CFD code. In short, the method is the one 
> presented in the paper:
>  
> J.M. Weiss, J.P. Maruszewski, W.A. Smith: Implicit Solution of 
> Preconditioned Navier-Stokes Equations Using Algebraic Multigrid
> http://dx.doi.org/10.2514/2.689
>  
> except for the AMG part as, at the moment, I use GMRES + point Jacobi through 
> PETSc (this will probably be the subject for another post).
> However, I also have a very simple, internally coded, symmetric block 
> Gauss-Seidel solver.
>  
> The code, written in Fortran with MPI, manages all the aspects of the 
> solver, including outer iterations, with PETSc just handling the 
> resolution of the linear systems at each outer iteration. In particular, note 
> that, for certain combinations of models, the solver can end up having 
> different systems of equations to solve, in sequence, at each outer 
> iteration. For example, I might have:
>  
> -  Main equations (with block size = 5 + n species)
> -  Turbulence equations (with block size = number of equations 
> implied by the turbulence model)
> -  Additional Transported Scalar Equations (with block size = number 
> of required scalars)
> -  Etc.
>  
> The way I manage the workflow with the internal GS solver is such that, for 
> each block of equations, at each outer iteration, I do the following:
>  
> -  allocate matrix, solution and rhs
> -  fill the matrix and rhs
> -  solve the system
> -  update the independent variables (system solution is in delta form)
> -  deallocate matrix, solution and rhs
>  
> so that the allocated memory is kept as low as possible at any given 
> time. However, for legacy reasons now obsolete, the PETSc workflow 
> used in the code is different as all the required matrices and rhs are 
> instead created in advance with the routine petsc_create.f90 in attachment. 
> Then iterations start, and at each iteration, each system is solved with the 
> routine petsc_solve.f90, also in attachment (both are included in a dedicated 
> petsc module).
> At the end of the iterations, before the finalization, a petsc_destroy 
> subroutine (not attached) is obviously also called for each matrix/rhs 
> allocated.
> So, in conclusion, I keep in memory all the matrices for the whole time (the 
> matrix structure, of course, doesn’t change with the iterations).
> My first question then is: 
>  
> 1) Is this approach recommended? Wouldn’t, instead, calling my petsc_create 
> inside my petsc_solve be a better option in my case?
> In this case I could avoid storing any petsc matrix or rhs outside the 
> petsc_solve routine.
> Would the overhead implied by the routines called in my petsc_create be 
> sustainable if that subroutine is called at every outer iteration for every 
> system?
>  
> Also, note that the way I fill the RHS and the matrix of the systems 
> for PETSc are different. For the RHS I always allocate mine in the 
> code, which is then copied in the petsc one in the petsc_solve routine. For 
> the matrix, instead, I directly fill the petsc one outside

[petsc-users] Best workflow for different systems with different block sizes

2017-02-01 Thread Paolo Lampitella
Dear PETSc users and developers,

I successfully, and very satisfactorily, use PETSc for the linear algebra part 
of a coupled, preconditioned, density-based,
unstructured, cell-centered, finite volume CFD code. In short, the method is 
the one presented in the paper:

J.M. Weiss, J.P. Maruszewski, W.A. Smith: Implicit Solution of Preconditioned 
Navier-Stokes Equations Using Algebraic Multigrid
http://dx.doi.org/10.2514/2.689

except for the AMG part as, at the moment, I use GMRES + point Jacobi through 
PETSc (this will probably be the subject for another post).
However, I also have a very simple, internally coded, symmetric block 
Gauss-Seidel solver.

The code, written in Fortran with MPI, manages all the aspects of the solver, 
including outer iterations, with PETSc just handling the
resolution of the linear systems at each outer iteration. In particular, note 
that, for certain combinations of models, the solver
can end up having different systems of equations to solve, in sequence, at each 
outer iteration. For example, I might have:


-  Main equations (with block size = 5 + n species)

-  Turbulence equations (with block size = number of equations implied 
by the turbulence model)

-  Additional Transported Scalar Equations (with block size = number of 
required scalars)

-  Etc.

The way I manage the workflow with the internal GS solver is such that, for 
each block of equations, at each outer iteration, I do the following:


-  allocate matrix, solution and rhs

-  fill the matrix and rhs

-  solve the system

-  update the independent variables (system solution is in delta form)

-  deallocate matrix, solution and rhs

so that the allocated memory is kept as low as possible at any given time. 
However, for legacy reasons that are now obsolete, the PETSc workflow used in the code
is different: all the required matrices and RHS vectors are instead created in 
advance with the attached routine petsc_create.f90. Then iterations start,
and at each iteration, each system is solved with the routine petsc_solve.f90, 
also attached (both are included in a dedicated petsc module).
At the end of the iterations, before finalization, a petsc_destroy 
subroutine (not attached) is of course also called for each matrix/RHS 
allocated.
So, in conclusion, I keep in memory all the matrices for the whole time (the 
matrix structure, of course, doesn't change with the iterations).
My first question then is:

1) Is this approach recommended? Wouldn't, instead, calling my petsc_create 
inside my petsc_solve be a better option in my case?
In this case I could avoid storing any petsc matrix or rhs outside the 
petsc_solve routine.
Would the overhead implied by the routines called in my petsc_create be 
sustainable if that subroutine is called at every outer iteration for every 
system?
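
For reference, a bare-bones sketch of the per-iteration variant asked about in (1),
i.e. create, fill, solve and destroy all inside one routine, could look like the
following (this is not the attached petsc_create.f90/petsc_solve.f90; the BAIJ
format, the preallocation numbers and the assembly loops are just placeholders,
and the include/module style follows recent PETSc Fortran conventions):

subroutine solve_block_system(nloc, bs, ierr)
#include <petsc/finclude/petsc.h>
  use petsc
  implicit none
  PetscInt       nloc, bs
  PetscErrorCode ierr
  Mat            A
  Vec            x, b
  KSP            ksp

  ! create the matrix and vectors for this block of equations
  ! (block size bs, nloc local block rows; preallocation numbers are guesses)
  call MatCreateBAIJ(PETSC_COMM_WORLD, bs, nloc*bs, nloc*bs,                 &
                     PETSC_DETERMINE, PETSC_DETERMINE,                       &
                     10, PETSC_NULL_INTEGER, 5, PETSC_NULL_INTEGER, A, ierr)
  call MatCreateVecs(A, x, b, ierr)

  ! ... fill A and b (MatSetValuesBlocked / VecSetValues), then assemble ...
  call MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY, ierr)
  call MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY, ierr)
  call VecAssemblyBegin(b, ierr)
  call VecAssemblyEnd(b, ierr)

  call KSPCreate(PETSC_COMM_WORLD, ksp, ierr)
  call KSPSetOperators(ksp, A, A, ierr)
  call KSPSetFromOptions(ksp, ierr)   ! e.g. -ksp_type gmres -pc_type jacobi
  call KSPSolve(ksp, b, x, ierr)

  ! ... x holds the delta-form correction: update the independent variables ...

  call KSPDestroy(ksp, ierr)
  call VecDestroy(x, ierr)
  call VecDestroy(b, ierr)
  call MatDestroy(A, ierr)
end subroutine solve_block_system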

Also, note that the RHS and the matrix of the systems are filled for PETSc in 
different ways. For the RHS, I always allocate my own array in the code, which is 
then copied into the PETSc one inside the petsc_solve routine. For the matrix, 
instead, I directly fill the PETSc one outside the subroutine,
which is then passed, already filled, to petsc_solve. So, the second question is:

2) Independently of the approach in question (1), which method do you suggest 
adopting? Storing the RHS and the matrix directly in the PETSc objects,
or copying them at solve time? Is there a mainstream approach in such cases?

I don't know if this is relevant but, consider that, in my case, every 
process always writes only its own matrix and RHS entries.

Unfortunately, at the moment, I have no access to a system where a 
straightforward test would give a clear answer to this. Still, I guess, the 
matter
is more conceptual than practical.

Thank you and sorry for the long mail


petsc_solve.f90
Description: petsc_solve.f90


petsc_create.f90
Description: petsc_create.f90