Re: [petsc-users] Field split degree of freedom ordering

2022-11-01 Thread Jed Brown
In most circumstances, you can and should interlace in some form so that each
block in fieldsplit is distributed across all ranks. If you interlace at scalar
granularity as described, then every block must be the same size. So for the
Stokes equations with equal-order elements (like stabilized P1-P1), you can
interlace (u,v,w,p), but for mixed elements (like Q2-P1^discontinuous) you
can't interlace in that way. You can still distribute pressure and velocity
over all processes, but you will need index sets to identify the
velocity-pressure splits.
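
As a hedged illustration of that last point (a minimal C sketch; the counts nu
and np and the global index arrays uidx and pidx are hypothetical placeholders
for the application's actual dof layout):

    IS is_u, is_p;
    /* nu/np: local dof counts; uidx/pidx: global row indices of the velocity
       and pressure dofs (application-provided). The named splits tell
       PCFIELDSPLIT which rows belong to each field when the dofs cannot be
       interlaced at scalar granularity. */
    PetscCall(ISCreateGeneral(PETSC_COMM_WORLD, nu, uidx, PETSC_COPY_VALUES, &is_u));
    PetscCall(ISCreateGeneral(PETSC_COMM_WORLD, np, pidx, PETSC_COPY_VALUES, &is_p));
    PetscCall(PCFieldSplitSetIS(pc, "velocity", is_u));
    PetscCall(PCFieldSplitSetIS(pc, "pressure", is_p));
    PetscCall(ISDestroy(&is_u));
    PetscCall(ISDestroy(&is_p));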

Alexander Lindsay writes:

> In the block matrices documentation, it's stated: "Note that for interlaced
> storage the number of rows/columns of each block must be the same size." Is
> interlacing defined in a global sense or a process-local sense? Explicitly:
> if I don't want the same-size restriction, do I need to ensure that globally
> all of my block 1 dofs are numbered after my block 0 dofs, or do I need to
> follow that on a process-local level? In libMesh we essentially always follow
> rank-major ordering. I'm asking whether, for unequal row sizes, we would need
> to strictly follow variable-major ordering in order to split (splitting here
> meaning splitting by variable).
>
> Alex


[petsc-users] Field split degree of freedom ordering

2022-11-01 Thread Alexander Lindsay
In the block matrices documentation, it's stated: "Note that for interlaced
storage the number of rows/columns of each block must be the same size." Is
interlacing defined in a global sense or a process-local sense? Explicitly:
if I don't want the same-size restriction, do I need to ensure that globally
all of my block 1 dofs are numbered after my block 0 dofs, or do I need to
follow that on a process-local level? In libMesh we essentially always follow
rank-major ordering. I'm asking whether, for unequal row sizes, we would need
to strictly follow variable-major ordering in order to split (splitting here
meaning splitting by variable).

Alex


Re: [petsc-users] PETSc Windows Installation

2022-11-01 Thread Mohammad Ali Yaqteen
What if I use Code::Blocks to run PETSc? Would I still need to reinstall PETSc,
or will the Cygwin installation work?

Thanks 
Ali

-Original Message-
From: Satish Balay  
Sent: Wednesday, November 2, 2022 12:13 AM
To: Mohammad Ali Yaqteen 
Cc: petsc-users 
Subject: Re: [petsc-users] PETSc Windows Installation

If you need to use PETSc from Visual Studio, you need to follow the instructions
at
https://petsc.org/release/install/windows/#installation-with-microsoft-intel-windows-compilers

[i.e., install with MS compilers/MPI, not Cygwin compilers/MPI]

Also check the "Project Files" section on how to set up the compiler environment
for Visual Studio.
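
A hedged note on the 'petscconf.h' error quoted below: one common cause (an
assumption here, not something diagnosed from a log) is that the Visual Studio
project's include path is missing the generated headers. With PETSC_DIR and
PETSC_ARCH standing in for the actual build locations, both of these
directories would need to be on the include path, since petscconf.h is
generated into the arch-specific one:

    $PETSC_DIR/include
    $PETSC_DIR/$PETSC_ARCH/include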

Note: Most external packages won't work with MS compilers.

Satish

On Tue, 1 Nov 2022, Mohammad Ali Yaqteen wrote:

> The above commands worked, but I get an error message when I include petsc.h
> in Visual Studio. The error message is "Cannot open include file:
> 'petscconf.h': No such file or directory".
> 
> Thanks,
> Ali
> -Original Message-
> From: Satish Balay  
> Sent: Tuesday, November 1, 2022 2:40 PM
> To: Mohammad Ali Yaqteen 
> Cc: petsc-users 
> Subject: Re: [petsc-users] PETSc Windows Installation
> 
> > /usr/lib/gcc/x86_64-pc-cygwin/11/../../../../x86_64-pc-cygwin/bin/ld: 
> > cannot find -lhwloc: No such file or directory
> > /usr/lib/gcc/x86_64-pc-cygwin/11/../../../../x86_64-pc-cygwin/bin/ld: 
> > cannot find -levent_core: No such file or directory
> > /usr/lib/gcc/x86_64-pc-cygwin/11/../../../../x86_64-pc-cygwin/bin/ld: 
> > cannot find -levent_pthreads: No such file or directory
> > /usr/lib/gcc/x86_64-pc-cygwin/11/../../../../x86_64-pc-cygwin/bin/ld: 
> > cannot find -lz: No such file or directory
> 
> For some reason Cygwin has broken dependencies here. Run Cygwin setup and
> install the following packages.
> 
> $ cygcheck.exe -f /usr/lib/libhwloc.dll.a /usr/lib/libevent_core.dll.a 
> /usr/lib/libevent_pthreads.dll.a /usr/lib/libz.dll.a 
> libevent-devel-2.1.12-1
> libevent-devel-2.1.12-1
> libhwloc-devel-2.6.0-2
> zlib-devel-1.2.12-1
> 
> BTW: you can attach the file from 
> PETSC_DIR/PETSC_ARCH/lib/petsc/conf/configure.log
> 
> Satish
> 
> On Tue, 1 Nov 2022, Mohammad Ali Yaqteen wrote:
> 
> > I am unable to attach the configure.log file. Hence, I have copied the
> > following text after executing the command (less configure.log) in the
> > Cygwin64 terminal:
> > 
> > Executing: uname -s
> > stdout: CYGWIN_NT-10.0-19044
> > =
> >  Configuring PETSc to compile on your system
> > =
> > 
> > 
> > 
> > Starting configure run at Tue, 01 Nov 2022 13:06:06 +0900
> > Configure Options: --configModules=PETSc.Configure 
> > --optionsModule=config.compilerOptions --with-cc=mpicc --with-cxx=mpicxx 
> > --with-fc=mpif90
> > Working directory: /home/SEJONG/petsc-3.18.1
> > Machine platform:
> > uname_result(system='CYGWIN_NT-10.0-19044', node='DESKTOP-R1C768B', 
> > release='3.3.6-341.x86_64', version='2022-09-05 11:15 UTC', 
> > machine='x86_64')
> > Python version:
> > 3.9.10 (main, Jan 20 2022, 21:37:52)
> > [GCC 11.2.0]
> > 
> >   Environmental variables
> > USERDOMAIN=DESKTOP-R1C768B
> > OS=Windows_NT
> > COMMONPROGRAMFILES=C:\Program Files\Common Files
> > PROCESSOR_LEVEL=6
> > PSModulePath=C:\Users\SEJONG\Documents\WindowsPowerShell\Modules;C:\Program 
> > Files\WindowsPowerShell\Modules;C:\Windows\system32\WindowsPowerShell\v1.0\Modules
> > CommonProgramW6432=C:\Program Files\Common Files
> > CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files
> > LANG=en_US.UTF-8
> > TZ=Asia/Seoul
> > HOSTNAME=DESKTOP-R1C768B
> > PUBLIC=C:\Users\Public
> > OLDPWD=/home/SEJONG
> > USERNAME=SEJONG
> > LOGONSERVER=\\DESKTOP-R1C768B
> > PROCESSOR_ARCHITECTURE=AMD64
> > LOCALAPPDATA=C:\Users\SEJONG\AppData\Local
> > COMPUTERNAME=DESKTOP-R1C768B
> > USER=SEJONG
> > !::=::\
> > SYSTEMDRIVE=C:
> > USERPROFILE=C:\Users\SEJONG
> > PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC;.CPL
> > SYSTEMROOT=C:\Windows
> > USERDOMAIN_ROAMINGPROFILE=DESKTOP-R1C768B
> > OneDriveCommercial=C:\Users\SEJONG\OneDrive - Sejong University
> > PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 165 Stepping 5, GenuineIntel
> > GNUPLOT_LIB=C:\Program Files\gnuplot\demo;C:\Program 
> > Files\gnuplot\demo\games;C:\Program Files\gnuplot\share
> > PWD=/home/SEJONG/petsc-3.18.1
> > MSMPI_BIN=C:\Program Files\Microsoft MPI\Bin\
> > HOME=/home/SEJONG
> > TMP=/tmp
> > OneDrive=C:\Users\SEJONG\OneDrive - Sejong University
> > ZES_ENABLE_SYSMAN=1
> > !C:=C:\cygwin64\bin
> > PROCESSOR_REVISION=a505
> > PROFILEREAD=true
> > PROMPT=$P$G
> > 

Re: [petsc-users] KSP on GPU

2022-11-01 Thread Carl-Johan Thore via petsc-users
Yes, I'm calling VecRestoreArray, but I now realize that I exported the vectors
to MATLAB before doing that. Apparently that worked anyway on the CPU, but on
the GPU it didn't. If I call VecRestoreArray before exporting, then everything
works fine on the GPU as well. Thanks for pointing this out!
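
A minimal sketch of the ordering that fixed this (nlocal and viewer are
hypothetical names; VecView stands in for the MATLAB export):

    PetscScalar *df;
    PetscCall(VecGetArray(dfdx, &df));
    for (PetscInt i = 0; i < nlocal; i++) df[i] = 0.0; /* fill gradient entries */
    /* Restore BEFORE handing the Vec to anything else; on the GPU this also
       marks the host array as modified so it is synchronized to the device. */
    PetscCall(VecRestoreArray(dfdx, &df));
    PetscCall(VecView(dfdx, viewer)); /* e.g., a MATLAB viewer */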

From: Stefano Zampini 
Sent: 1 November 2022 17:11
To: Carl-Johan Thore 
Cc: Mark Adams ; PETSc users list 
Subject: Re: [petsc-users] KSP on GPU

Are you calling VecRestoreArray when you are done inserting the values?

On Tue, Nov 1, 2022, 18:42 Carl-Johan Thore via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Thanks for the tips!

The suggested settings for GAMG did not yield better results,
but hypre worked well right away, giving very good convergence!

A follow-up question then (I hope that's ok; and it could be related to GAMG
not working, I'll check that). Once everything was running I discovered that my
gradient vector dfdx, which I populate via an array df obtained from
VecGetArray(dfdx, &df), doesn't get filled properly; it always contains only
zeros. This is not the case when I run on the CPU, and the array df itself gets
filled as it should even on the GPU, suggesting that either I'm not using
VecGetArray properly, or I shouldn't use it at all for GPU computations?

Kind regards,
Carl-Johan

From: Mark Adams <mfad...@lbl.gov>
Sent: 31 October 2022 13:30
To: Carl-Johan Thore <carl-johan.th...@liu.se>
Cc: Matthew Knepley <knep...@gmail.com>; Barry Smith <bsm...@petsc.dev>;
petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] KSP on GPU
Subject: Re: [petsc-users] KSP on GPU

* You could try hypre or another preconditioner that you can afford, like LU or 
ASM, that works.
* If this matrix is SPD, you want to use -fieldsplit_0_pc_gamg_esteig_ksp_type 
cg -fieldsplit_0_pc_gamg_esteig_ksp_max_it 10
 These will give better eigen estimates, and that is important.
The differences between these estimates are not too bad.
There is a safety factor (1.05 is the default) that you could increase with: 
-fieldsplit_0_mg_levels_ksp_chebyshev_esteig 0,0.05,0,1.1
* Finally you could try -fieldsplit_0_pc_gamg_reuse_interpolation 1, if GAMG is 
still not working.

Use -fieldsplit_0_ksp_converged_reason and check the iteration count.
And it is a good idea to check with hypre to make sure something is not going 
badly in terms of performance anyway. AMG is hard and hypre is a good solver.
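
A hedged aside: the same flags can be injected from code via the global options
database if that is more convenient than the command line (a sketch, assuming
it runs before KSPSetFromOptions):

    /* NULL selects the global options database */
    PetscCall(PetscOptionsInsertString(NULL,
      "-fieldsplit_0_pc_gamg_esteig_ksp_type cg "
      "-fieldsplit_0_pc_gamg_esteig_ksp_max_it 10 "
      "-fieldsplit_0_mg_levels_ksp_chebyshev_esteig 0,0.05,0,1.1"));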

Mark

On Mon, Oct 31, 2022 at 1:56 AM Carl-Johan Thore via petsc-users
<petsc-users@mcs.anl.gov> wrote:
The GPU supports double precision and I didn't explicitly tell PETSc to use
float when compiling, so I guess it uses double? What's the easiest way to
check?
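
One quick check, as a sketch: PETSc's precision is fixed at configure time via
--with-precision (double by default), so printing the scalar size settles it.

    /* 8 bytes => double precision (the default build);
       4 bytes => PETSc was configured with --with-precision=single */
    PetscCall(PetscPrintf(PETSC_COMM_WORLD, "sizeof(PetscScalar) = %d\n",
                          (int)sizeof(PetscScalar)));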

Barry, running -ksp_view shows that the solver options are the same for CPU and
GPU. The only difference is the coarse grid solver for gamg ("the package used
to perform factorization:"), which is petsc for CPU and cusparse for GPU. I
tried forcing the GPU to use petsc via
-fieldsplit_0_mg_coarse_sub_pc_factor_mat_solver_type, but then KSP failed to
converge even on the first topology optimization iteration.

-ksp_view also shows differences in the eigenvalues from the Chebyshev 
smoother. For example,

GPU:
   Down solver (pre-smoother) on level 2 ---
  KSP Object: (fieldsplit_0_mg_levels_2_) 1 MPI process
type: chebyshev
  eigenvalue targets used: min 0.109245, max 1.2017
  eigenvalues provided (min 0.889134, max 1.09245) with

CPU:
  eigenvalue targets used: min 0.112623, max 1.23886
  eigenvalues provided (min 0.879582, max 1.12623)

But I guess such differences are expected?

/Carl-Johan

From: Matthew Knepley <knep...@gmail.com>
Sent: 30 October 2022 22:00
To: Barry Smith <bsm...@petsc.dev>
Cc: Carl-Johan Thore <carl-johan.th...@liu.se>; petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] KSP on GPU

On Sun, Oct 30, 2022 at 3:52 PM Barry Smith <bsm...@petsc.dev> wrote:

   In general you should expect similar but not identical convergence behavior.

I suggest running with all the monitoring you can. 
-ksp_monitor_true_residual -fieldsplit_0_monitor_true_residual 
-fieldsplit_1_monitor_true_residual and compare the various convergence between 
the CPU and GPU. Also run with -ksp_view and check that the various solver 
options are the same (they should be).

Is the GPU using float or double?

   Matt

  Barry


On Oct 30, 2022, at 11:02 AM, Carl-Johan Thore via petsc-users
<petsc-users@mcs.anl.gov> wrote:

Hi,

I'm solving a topology optimization problem with Stokes flow discretized by a
stabilized Q1-Q0 finite element method, and using BiCGStab with the fieldsplit
preconditioner to solve the linear systems. The implementation is based on
DMStag, runs on Ubuntu via WSL2, and works fine with PETSc-3.18.1 on multiple
CPU cores and the following options for the 

Re: [petsc-users] KSP on GPU

2022-11-01 Thread Stefano Zampini
Are you calling VecRestoreArray when you are done inserting the values?

On Tue, Nov 1, 2022, 18:42 Carl-Johan Thore via petsc-users <
petsc-users@mcs.anl.gov> wrote:

> Thanks for the tips!
>
>
>
> The suggested settings for GAMG did not yield better results,
>
> but hypre worked well right away, giving very good convergence!
>
>
>
> A follow-up question then (I hope that’s ok; and it could be related to
> GAMG
>
> not working, I’ll check that). Once everything was running I discovered
> that my gradient vector
>
> dfdx which I populate via an array df obtained from VecGetArray(dfdx, &df)
> doesn’t get
>
> filled properly; it always contains only zeros. This is not the case when
> I run on the CPU,
>
> and df gets filled as it should even on the GPU, suggesting that either
> I’m not using
>
> VecGetArray properly, or I shouldn’t use it at all for GPU computations?
>
>
>
> Kind regards,
>
> Carl-Johan
>
>
>
> *From:* Mark Adams 
> *Sent:* 31 October 2022 13:30
> *To:* Carl-Johan Thore 
> *Cc:* Matthew Knepley ; Barry Smith ;
> petsc-users@mcs.anl.gov
> *Subject:* Re: [petsc-users] KSP on GPU
>
>
>
> * You could try hypre or another preconditioner that you can afford,
> like LU or ASM, that works.
>
> * If this matrix is SPD, you want to use
> -fieldsplit_0_pc_gamg_esteig_ksp_type cg
> -fieldsplit_0_pc_gamg_esteig_ksp_max_it 10
>
>  These will give better eigen estimates, and that is important.
>
> The differences between these estimates are not too bad.
>
> There is a safety factor (1.05 is the default) that you could increase
> with: -fieldsplit_0_mg_levels_ksp_chebyshev_esteig 0,0.05,0,1.1
>
> * Finally you could try -fieldsplit_0_pc_gamg_reuse_interpolation 1, if
> GAMG is still not working.
>
>
>
> Use -fieldsplit_0_ksp_converged_reason and check the iteration count.
>
> And it is a good idea to check with hypre to make sure something is not
> going badly in terms of performance anyway. AMG is hard and hypre is a good
> solver.
>
>
>
> Mark
>
>
>
> On Mon, Oct 31, 2022 at 1:56 AM Carl-Johan Thore via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
> The GPU supports double precision and I didn’t explicitly tell PETSc to
> use float when compiling, so
>
> I guess it uses double? What’s the easiest way to check?
>
>
>
> Barry, running -ksp_view shows that the solver options are the same for
> CPU and GPU. The only
>
> difference is the coarse grid solver for gamg (“the package used to
> perform factorization:”) which
>
> is petsc for CPU and cusparse for GPU. I tried forcing the GPU to use
> petsc via
>
> -fieldsplit_0_mg_coarse_sub_pc_factor_mat_solver_type, but then ksp failed
> to converge
>
> even on the first topology optimization iteration.
>
>
>
> -ksp_view also shows differences in the eigenvalues from the Chebyshev
> smoother. For example,
>
>
>
> GPU:
>
>Down solver (pre-smoother) on level 2 ---
>
>   KSP Object: (fieldsplit_0_mg_levels_2_) 1 MPI process
>
> type: chebyshev
>
>   eigenvalue targets used: min 0.109245, max 1.2017
>
>   eigenvalues provided (min 0.889134, max 1.09245) with
>
>
>
> CPU:
>
>   eigenvalue targets used: min 0.112623, max 1.23886
>
>   eigenvalues provided (min 0.879582, max 1.12623)
>
>
>
> But I guess such differences are expected?
>
>
>
> /Carl-Johan
>
>
>
> *From:* Matthew Knepley 
> *Sent:* 30 October 2022 22:00
> *To:* Barry Smith 
> *Cc:* Carl-Johan Thore ; petsc-users@mcs.anl.gov
> *Subject:* Re: [petsc-users] KSP on GPU
>
>
>
> On Sun, Oct 30, 2022 at 3:52 PM Barry Smith  wrote:
>
>
>
>In general you should expect similar but not identical convergence
> behavior.
>
>
>
> I suggest running with all the monitoring you can.
> -ksp_monitor_true_residual
> -fieldsplit_0_monitor_true_residual -fieldsplit_1_monitor_true_residual and
> compare the various convergence between the CPU and GPU. Also run with
> -ksp_view and check that the various solver options are the same (they
> should be).
>
>
>
> Is the GPU using float or double?
>
>
>
>Matt
>
>
>
>   Barry
>
>
>
>
>
> On Oct 30, 2022, at 11:02 AM, Carl-Johan Thore via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
>
>
> Hi,
>
>
>
> I'm solving a topology optimization problem with Stokes flow discretized
> by a stabilized Q1-Q0 finite element method
>
> and using BiCGStab with the fieldsplit preconditioner to solve the linear
> systems. The implementation
>
> is based on DMStag, runs on Ubuntu via WSL2, and works fine with
> PETSc-3.18.1 on multiple CPU cores and the following
>
> options for the preconditioner:
>
>
>
> -fieldsplit_0_ksp_type preonly \
>
> -fieldsplit_0_pc_type gamg \
>
> -fieldsplit_0_pc_gamg_reuse_interpolation 0 \
>
> -fieldsplit_1_ksp_type preonly \
>
> -fieldsplit_1_pc_type jacobi
>
>
>
> However, when I enable GPU computations by adding two options -
>
>
>
> ...
>
> -dm_vec_type cuda \
>
> -dm_mat_type aijcusparse \
>
> -fieldsplit_0_ksp_type preonly \
>
> 

[petsc-users] AMD vs Intel mobile CPU performance

2022-11-01 Thread D.J. Nolte
Hi all,
I'm looking for a small laptop which I'll (also) be using for small-scale
PETSc (KSP & SNES) simulations. In this setting performance is not that
important, but still, I wonder if the community has any experience with AMD
Ryzen CPUs (specifically the Ryzen 5 Pro 6650U) compared to Intel i7 12th gen.
Should I expect significant performance differences?

Thanks!

David


Re: [petsc-users] KSP on GPU

2022-11-01 Thread Carl-Johan Thore via petsc-users
Thanks for the tips!

The suggested settings for GAMG did not yield better results,
but hypre worked well right away, giving very good convergence!

A follow-up question then (I hope that's ok; and it could be related to GAMG
not working, I'll check that). Once everything was running I discovered that my
gradient vector dfdx, which I populate via an array df obtained from
VecGetArray(dfdx, &df), doesn't get filled properly; it always contains only
zeros. This is not the case when I run on the CPU, and the array df itself gets
filled as it should even on the GPU, suggesting that either I'm not using
VecGetArray properly, or I shouldn't use it at all for GPU computations?

Kind regards,
Carl-Johan

From: Mark Adams 
Sent: 31 October 2022 13:30
To: Carl-Johan Thore 
Cc: Matthew Knepley ; Barry Smith ; 
petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] KSP on GPU

* You could try hypre or another preconditioner that you can afford, like LU or 
ASM, that works.
* If this matrix is SPD, you want to use -fieldsplit_0_pc_gamg_esteig_ksp_type 
cg -fieldsplit_0_pc_gamg_esteig_ksp_max_it 10
 These will give better eigen estimates, and that is important.
The differences between these estimates are not too bad.
There is a safety factor (1.05 is the default) that you could increase with: 
-fieldsplit_0_mg_levels_ksp_chebyshev_esteig 0,0.05,0,1.1
* Finally you could try -fieldsplit_0_pc_gamg_reuse_interpolation 1, if GAMG is 
still not working.

Use -fieldsplit_0_ksp_converged_reason and check the iteration count.
And it is a good idea to check with hypre to make sure something is not going 
badly in terms of performance anyway. AMG is hard and hypre is a good solver.

Mark

On Mon, Oct 31, 2022 at 1:56 AM Carl-Johan Thore via petsc-users
<petsc-users@mcs.anl.gov> wrote:
The GPU supports double precision and I didn't explicitly tell PETSc to use
float when compiling, so I guess it uses double? What's the easiest way to
check?

Barry, running -ksp_view shows that the solver options are the same for CPU and
GPU. The only difference is the coarse grid solver for gamg ("the package used
to perform factorization:"), which is petsc for CPU and cusparse for GPU. I
tried forcing the GPU to use petsc via
-fieldsplit_0_mg_coarse_sub_pc_factor_mat_solver_type, but then KSP failed to
converge even on the first topology optimization iteration.

-ksp_view also shows differences in the eigenvalues from the Chebyshev 
smoother. For example,

GPU:
   Down solver (pre-smoother) on level 2 ---
  KSP Object: (fieldsplit_0_mg_levels_2_) 1 MPI process
type: chebyshev
  eigenvalue targets used: min 0.109245, max 1.2017
  eigenvalues provided (min 0.889134, max 1.09245) with

CPU:
  eigenvalue targets used: min 0.112623, max 1.23886
  eigenvalues provided (min 0.879582, max 1.12623)

But I guess such differences are expected?

/Carl-Johan

From: Matthew Knepley <knep...@gmail.com>
Sent: 30 October 2022 22:00
To: Barry Smith <bsm...@petsc.dev>
Cc: Carl-Johan Thore <carl-johan.th...@liu.se>; petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] KSP on GPU

On Sun, Oct 30, 2022 at 3:52 PM Barry Smith <bsm...@petsc.dev> wrote:

   In general you should expect similar but not identical convergence behavior.

I suggest running with all the monitoring you can. 
-ksp_monitor_true_residual -fieldsplit_0_monitor_true_residual 
-fieldsplit_1_monitor_true_residual and compare the various convergence between 
the CPU and GPU. Also run with -ksp_view and check that the various solver 
options are the same (they should be).

Is the GPU using float or double?

   Matt

  Barry


On Oct 30, 2022, at 11:02 AM, Carl-Johan Thore via petsc-users
<petsc-users@mcs.anl.gov> wrote:

Hi,

I'm solving a topology optimization problem with Stokes flow discretized by a
stabilized Q1-Q0 finite element method, and using BiCGStab with the fieldsplit
preconditioner to solve the linear systems. The implementation is based on
DMStag, runs on Ubuntu via WSL2, and works fine with PETSc-3.18.1 on multiple
CPU cores and the following options for the preconditioner:

-fieldsplit_0_ksp_type preonly \
-fieldsplit_0_pc_type gamg \
-fieldsplit_0_pc_gamg_reuse_interpolation 0 \
-fieldsplit_1_ksp_type preonly \
-fieldsplit_1_pc_type jacobi

However, when I enable GPU computations by adding two options -

...
-dm_vec_type cuda \
-dm_mat_type aijcusparse \
-fieldsplit_0_ksp_type preonly \
-fieldsplit_0_pc_type gamg \
-fieldsplit_0_pc_gamg_reuse_interpolation 0 \
-fieldsplit_1_ksp_type preonly \
-fieldsplit_1_pc_type jacobi

- KSP still works fine for the first couple of topology optimization iterations
but then stops with "Linear solve did not converge due to DIVERGED_DTOL ...".
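
For reference, a hedged sketch of the programmatic equivalent of those two GPU
options (assuming dm is the application's DMStag and this runs before any
vectors or matrices are created from it):

    PetscCall(DMSetVecType(dm, VECCUDA));        /* -dm_vec_type cuda        */
    PetscCall(DMSetMatType(dm, MATAIJCUSPARSE)); /* -dm_mat_type aijcusparse */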

My question is whether I should expect the GPU versions of the linear solvers
and preconditioners to function exactly as their CPU counterparts (I got 

Re: [petsc-users] PETSc Windows Installation

2022-11-01 Thread Satish Balay via petsc-users
If you need to use PETSc from Visual Studio, you need to follow the instructions
at
https://petsc.org/release/install/windows/#installation-with-microsoft-intel-windows-compilers

[i.e., install with MS compilers/MPI, not Cygwin compilers/MPI]

Also check the "Project Files" section on how to set up the compiler environment
for Visual Studio.

Note: Most external packages won't work with MS compilers.

Satish

On Tue, 1 Nov 2022, Mohammad Ali Yaqteen wrote:

> The above commands worked, but I get an error message when I include petsc.h
> in Visual Studio. The error message is "Cannot open include file:
> 'petscconf.h': No such file or directory".
> 
> Thanks,
> Ali
> -Original Message-
> From: Satish Balay  
> Sent: Tuesday, November 1, 2022 2:40 PM
> To: Mohammad Ali Yaqteen 
> Cc: petsc-users 
> Subject: Re: [petsc-users] PETSc Windows Installation
> 
> > /usr/lib/gcc/x86_64-pc-cygwin/11/../../../../x86_64-pc-cygwin/bin/ld: 
> > cannot find -lhwloc: No such file or directory
> > /usr/lib/gcc/x86_64-pc-cygwin/11/../../../../x86_64-pc-cygwin/bin/ld: 
> > cannot find -levent_core: No such file or directory
> > /usr/lib/gcc/x86_64-pc-cygwin/11/../../../../x86_64-pc-cygwin/bin/ld: 
> > cannot find -levent_pthreads: No such file or directory
> > /usr/lib/gcc/x86_64-pc-cygwin/11/../../../../x86_64-pc-cygwin/bin/ld: 
> > cannot find -lz: No such file or directory
> 
> For some reason Cygwin has broken dependencies here. Run Cygwin setup and
> install the following packages.
> 
> $ cygcheck.exe -f /usr/lib/libhwloc.dll.a /usr/lib/libevent_core.dll.a 
> /usr/lib/libevent_pthreads.dll.a /usr/lib/libz.dll.a 
> libevent-devel-2.1.12-1
> libevent-devel-2.1.12-1
> libhwloc-devel-2.6.0-2
> zlib-devel-1.2.12-1
> 
> BTW: you can attach the file from 
> PETSC_DIR/PETSC_ARCH/lib/petsc/conf/configure.log
> 
> Satish
> 
> On Tue, 1 Nov 2022, Mohammad Ali Yaqteen wrote:
> 
> > I am unable to attach the configure.log file. Hence, I have copied the
> > following text after executing the command (less configure.log) in the
> > Cygwin64 terminal:
> > 
> > Executing: uname -s
> > stdout: CYGWIN_NT-10.0-19044
> > =
> >  Configuring PETSc to compile on your system
> > =
> > 
> > 
> > 
> > Starting configure run at Tue, 01 Nov 2022 13:06:06 +0900
> > Configure Options: --configModules=PETSc.Configure 
> > --optionsModule=config.compilerOptions --with-cc=mpicc --with-cxx=mpicxx 
> > --with-fc=mpif90
> > Working directory: /home/SEJONG/petsc-3.18.1
> > Machine platform:
> > uname_result(system='CYGWIN_NT-10.0-19044', node='DESKTOP-R1C768B', 
> > release='3.3.6-341.x86_64', version='2022-09-05 11:15 UTC', 
> > machine='x86_64')
> > Python version:
> > 3.9.10 (main, Jan 20 2022, 21:37:52)
> > [GCC 11.2.0]
> > 
> >   Environmental variables
> > USERDOMAIN=DESKTOP-R1C768B
> > OS=Windows_NT
> > COMMONPROGRAMFILES=C:\Program Files\Common Files
> > PROCESSOR_LEVEL=6
> > PSModulePath=C:\Users\SEJONG\Documents\WindowsPowerShell\Modules;C:\Program 
> > Files\WindowsPowerShell\Modules;C:\Windows\system32\WindowsPowerShell\v1.0\Modules
> > CommonProgramW6432=C:\Program Files\Common Files
> > CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files
> > LANG=en_US.UTF-8
> > TZ=Asia/Seoul
> > HOSTNAME=DESKTOP-R1C768B
> > PUBLIC=C:\Users\Public
> > OLDPWD=/home/SEJONG
> > USERNAME=SEJONG
> > LOGONSERVER=\\DESKTOP-R1C768B
> > PROCESSOR_ARCHITECTURE=AMD64
> > LOCALAPPDATA=C:\Users\SEJONG\AppData\Local
> > COMPUTERNAME=DESKTOP-R1C768B
> > USER=SEJONG
> > !::=::\
> > SYSTEMDRIVE=C:
> > USERPROFILE=C:\Users\SEJONG
> > PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC;.CPL
> > SYSTEMROOT=C:\Windows
> > USERDOMAIN_ROAMINGPROFILE=DESKTOP-R1C768B
> > OneDriveCommercial=C:\Users\SEJONG\OneDrive - Sejong University
> > PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 165 Stepping 5, GenuineIntel
> > GNUPLOT_LIB=C:\Program Files\gnuplot\demo;C:\Program 
> > Files\gnuplot\demo\games;C:\Program Files\gnuplot\share
> > PWD=/home/SEJONG/petsc-3.18.1
> > MSMPI_BIN=C:\Program Files\Microsoft MPI\Bin\
> > HOME=/home/SEJONG
> > TMP=/tmp
> > OneDrive=C:\Users\SEJONG\OneDrive - Sejong University
> > ZES_ENABLE_SYSMAN=1
> > !C:=C:\cygwin64\bin
> > PROCESSOR_REVISION=a505
> > PROFILEREAD=true
> > PROMPT=$P$G
> > NUMBER_OF_PROCESSORS=16
> > ProgramW6432=C:\Program Files
> > COMSPEC=C:\Windows\system32\cmd.exe
> > APPDATA=C:\Users\SEJONG\AppData\Roaming
> > SHELL=/bin/bash
> > TERM=xterm-256color
> > WINDIR=C:\Windows
> > ProgramData=C:\ProgramData
> > SHLVL=1
> > PRINTER=\\210.107.220.119\HP Color LaserJet Pro MFP M377 PCL 6
> > 

Re: [petsc-users] PETSc Windows Installation

2022-11-01 Thread Mohammad Ali Yaqteen
The above commands worked, but I get an error message when I include petsc.h in
Visual Studio. The error message is "Cannot open include file: 'petscconf.h':
No such file or directory".

Thanks,
Ali
-Original Message-
From: Satish Balay  
Sent: Tuesday, November 1, 2022 2:40 PM
To: Mohammad Ali Yaqteen 
Cc: petsc-users 
Subject: Re: [petsc-users] PETSc Windows Installation

> /usr/lib/gcc/x86_64-pc-cygwin/11/../../../../x86_64-pc-cygwin/bin/ld: cannot 
> find -lhwloc: No such file or directory
> /usr/lib/gcc/x86_64-pc-cygwin/11/../../../../x86_64-pc-cygwin/bin/ld: cannot 
> find -levent_core: No such file or directory
> /usr/lib/gcc/x86_64-pc-cygwin/11/../../../../x86_64-pc-cygwin/bin/ld: cannot 
> find -levent_pthreads: No such file or directory
> /usr/lib/gcc/x86_64-pc-cygwin/11/../../../../x86_64-pc-cygwin/bin/ld: cannot 
> find -lz: No such file or directory

For some reason Cygwin has broken dependencies here. Run Cygwin setup and
install the following packages.

$ cygcheck.exe -f /usr/lib/libhwloc.dll.a /usr/lib/libevent_core.dll.a 
/usr/lib/libevent_pthreads.dll.a /usr/lib/libz.dll.a 
libevent-devel-2.1.12-1
libevent-devel-2.1.12-1
libhwloc-devel-2.6.0-2
zlib-devel-1.2.12-1

BTW: you can attach the file from 
PETSC_DIR/PETSC_ARCH/lib/petsc/conf/configure.log

Satish

On Tue, 1 Nov 2022, Mohammad Ali Yaqteen wrote:

> I am unable to attach the configure.log file. Hence, I have copied the
> following text after executing the command (less configure.log) in the
> Cygwin64 terminal:
> 
> Executing: uname -s
> stdout: CYGWIN_NT-10.0-19044
> =
>  Configuring PETSc to compile on your system
> =
> 
> 
> 
> Starting configure run at Tue, 01 Nov 2022 13:06:06 +0900
> Configure Options: --configModules=PETSc.Configure 
> --optionsModule=config.compilerOptions --with-cc=mpicc --with-cxx=mpicxx 
> --with-fc=mpif90
> Working directory: /home/SEJONG/petsc-3.18.1
> Machine platform:
> uname_result(system='CYGWIN_NT-10.0-19044', node='DESKTOP-R1C768B', 
> release='3.3.6-341.x86_64', version='2022-09-05 11:15 UTC', machine='x86_64')
> Python version:
> 3.9.10 (main, Jan 20 2022, 21:37:52)
> [GCC 11.2.0]
> 
>   Environmental variables
> USERDOMAIN=DESKTOP-R1C768B
> OS=Windows_NT
> COMMONPROGRAMFILES=C:\Program Files\Common Files
> PROCESSOR_LEVEL=6
> PSModulePath=C:\Users\SEJONG\Documents\WindowsPowerShell\Modules;C:\Program 
> Files\WindowsPowerShell\Modules;C:\Windows\system32\WindowsPowerShell\v1.0\Modules
> CommonProgramW6432=C:\Program Files\Common Files
> CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files
> LANG=en_US.UTF-8
> TZ=Asia/Seoul
> HOSTNAME=DESKTOP-R1C768B
> PUBLIC=C:\Users\Public
> OLDPWD=/home/SEJONG
> USERNAME=SEJONG
> LOGONSERVER=\\DESKTOP-R1C768B
> PROCESSOR_ARCHITECTURE=AMD64
> LOCALAPPDATA=C:\Users\SEJONG\AppData\Local
> COMPUTERNAME=DESKTOP-R1C768B
> USER=SEJONG
> !::=::\
> SYSTEMDRIVE=C:
> USERPROFILE=C:\Users\SEJONG
> PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC;.CPL
> SYSTEMROOT=C:\Windows
> USERDOMAIN_ROAMINGPROFILE=DESKTOP-R1C768B
> OneDriveCommercial=C:\Users\SEJONG\OneDrive - Sejong University
> PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 165 Stepping 5, GenuineIntel
> GNUPLOT_LIB=C:\Program Files\gnuplot\demo;C:\Program 
> Files\gnuplot\demo\games;C:\Program Files\gnuplot\share
> PWD=/home/SEJONG/petsc-3.18.1
> MSMPI_BIN=C:\Program Files\Microsoft MPI\Bin\
> HOME=/home/SEJONG
> TMP=/tmp
> OneDrive=C:\Users\SEJONG\OneDrive - Sejong University
> ZES_ENABLE_SYSMAN=1
> !C:=C:\cygwin64\bin
> PROCESSOR_REVISION=a505
> PROFILEREAD=true
> PROMPT=$P$G
> NUMBER_OF_PROCESSORS=16
> ProgramW6432=C:\Program Files
> COMSPEC=C:\Windows\system32\cmd.exe
> APPDATA=C:\Users\SEJONG\AppData\Roaming
> SHELL=/bin/bash
> TERM=xterm-256color
> WINDIR=C:\Windows
> ProgramData=C:\ProgramData
> SHLVL=1
> PRINTER=\\210.107.220.119\HP Color LaserJet Pro MFP M377 PCL 6
> PROGRAMFILES=C:\Program Files
> ALLUSERSPROFILE=C:\ProgramData
> TEMP=/tmp
> DriverData=C:\Windows\System32\Drivers\DriverData
> SESSIONNAME=Console
> ProgramFiles(x86)=C:\Program Files (x86)
> PATH=/usr/local/bin:/usr/bin:/cygdrive/c/SIMULIA/Commands:/cygdrive/c/Program 
> Files/Microsoft 
> MPI/Bin:/cygdrive/c/Windows/system32:/cygdrive/c/Windows:/cygdrive/c/Windows/System32/Wbem:/cygdrive/c/Windows/System32/WindowsPowerShell/v1.0:/cygdrive/c/Windows/System32/OpenSSH:/cygdrive/c/Program
>  Files/MATLAB/R2020b/bin:/cygdrive/c/Program Files/Microsoft SQL 
> Server/130/Tools/Binn:/cygdrive/c/Program Files/Microsoft SQL Server/Client 
> SDK/ODBC/170/Tools/Binn:/cygdrive/c/Program 
>