Re: [petsc-users] Can't compile code after upgrading to VS2017 and Intel One API + PETSc 3.15

2021-04-19 Thread TAY wee-beng

Hi,

Sorry, I didn't notice your email. Anyway, I uninstalled Intel MPI and 
replaced it with MS MPI together with Intel ifort. It's working correctly now.


So I guess the problem lies with Intel MPI.

Thank you very much.

Yours sincerely,


TAY Wee-Beng 郑伟明 (Zheng Weiming)
Personal research webpage: http://tayweebeng.wixsite.com/website
Youtube research showcase: https://goo.gl/PtvdwQ
LinkedIn: https://www.linkedin.com/in/tay-weebeng


On 17/4/2021 10:08 pm, Satish Balay wrote:

Can you try editing this example and changing PETSC_COMM_WORLD to 
MPI_COMM_WORLD - and
see if that works?

Satish

On Fri, 16 Apr 2021, TAY wee-beng wrote:


Hi,

There's no error.log.

I have removed the LP64 lib but the error is still the same during Fortran
test below. Any other solution?

$ make PETSC_DIR=/cygdrive/e/wtay/Downloads/Source_codes/petsc-main
PETSC_ARCH=petsc-dev_win64_impi_vs2017 check
Running check examples to verify correct installation
Using PETSC_DIR=/cygdrive/e/wtay/Downloads/Source_codes/petsc-main and
PETSC_ARCH=petsc-dev_win64_impi_vs2017
C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI process
C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI processes
make[3]: [/cygdrive/e/wtay/Downloads/Source_codes/petsc-main/lib/petsc/conf/rules:350: ex5f.PETSc] Error 2 (ignored)
***Error detected during compile or link!***
See http://www.mcs.anl.gov/petsc/documentation/faq.html
/cygdrive/e/wtay/Downloads/Source_codes/petsc-main/src/snes/tutorials ex5f
*
/cygdrive/e/wtay/Downloads/Source_codes/petsc-main/lib/petsc/bin/win32fe/win32fe ifort -MD -Z7 -fpp -MD -Z7 -fpp -I/cygdrive/e/wtay/Downloads/Source_codes/petsc-main/include -I/cygdrive/e/wtay/Downloads/Source_codes/petsc-main/petsc-dev_win64_impi_vs2017/include -I/cygdrive/c/Program\ Files\ \(x86\)/Intel/oneAPI/mpi/latest/include ex5f.F90 -L/cygdrive/e/wtay/Downloads/Source_codes/petsc-main/petsc-dev_win64_impi_vs2017/lib -L/cygdrive/e/wtay/Downloads/Source_codes/petsc-main/petsc-dev_win64_impi_vs2017/lib -L/cygdrive/e/wtay/Downloads/Source_codes/petsc-main/petsc-dev_win64_impi_vs2017/lib -lpetsc -lflapack -lfblas /cygdrive/c/Program\ Files\ \(x86\)/Intel/oneAPI/mpi/latest/lib/debug/impi.lib /cygdrive/c/Program\ Files\ \(x86\)/Intel/oneAPI/mpi/latest/lib/debug/impicxx.lib Gdi32.lib User32.lib Advapi32.lib Kernel32.lib Ws2_32.lib -o ex5f
ex5f.F90(83): error #6405: The same named entity from different modules and/or
program units cannot be referenced. [PETSC_COMM_WORLD]
   call MPI_Comm_size(PETSC_COMM_WORLD,size,ierr)
-^
ex5f.F90(83): error #7112: This actual argument must not be the name of a
procedure.   [PETSC_COMM_WORLD]
   call MPI_Comm_size(PETSC_COMM_WORLD,size,ierr)
-^
ex5f.F90(84): error #6405: The same named entity from different modules and/or
program units cannot be referenced. [PETSC_COMM_WORLD]
   call MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr)
-^
ex5f.F90(84): error #7112: This actual argument must not be the name of a
procedure.   [PETSC_COMM_WORLD]
   call MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr)
-^
ex5f.F90(96): error #6405: The same named entity from different modules and/or
program units cannot be referenced. [PETSC_COMM_WORLD]
     ierr = PETSC_ERR_ARG_OUTOFRANGE; call
PetscError(PETSC_COMM_WORLD,ierr,0,'Lambda'); call
MPIU_Abort(PETSC_COMM_WORLD,ierr)
-^
ex5f.F90(96): error #6405: The same named entity from different modules and/or
program units cannot be referenced. [PETSC_COMM_WORLD]
     ierr = PETSC_ERR_ARG_OUTOFRANGE; call
PetscError(PETSC_COMM_WORLD,ierr,0,'Lambda'); call
MPIU_Abort(PETSC_COMM_WORLD,ierr)
^
ex5f.F90(96): error #7112: This actual argument must not be the name of a
procedure.   [PETSC_COMM_WORLD]
     ierr = PETSC_ERR_ARG_OUTOFRANGE; call
PetscError(PETSC_COMM_WORLD,ierr,0,'Lambda'); call
MPIU_Abort(PETSC_COMM_WORLD,ierr)
^
ex5f.F90(103): error #6405: The same named entity from different modules
and/or program units cannot be referenced. [PETSC_COMM_WORLD]
   call SNESCreate(PETSC_COMM_WORLD,snes,ierr)
--^
ex5f.F90(103): error #7112: This actual argument must not be the name of a
procedure.   [PETSC_COMM_WORLD]
   call SNESCreate(PETSC_COMM_WORLD,snes,ierr)
--^
ex5f.F90(120): error #6405: The same named entity from different modules
and/or program units cannot be referenced. [PETSC_COMM_WORLD]
   call DMDACreate2d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE,DM_BOUNDARY_NON

Re: [petsc-users] Question about PCFieldSplit

2021-04-19 Thread Tang, Qi
Patrick,
We were able to get field split working in DMStag with the dummy solver 
approach I described. Let us know when you add the capability and we will be 
happy to test it. Meanwhile, please take your time to merge it.

Thanks,
Qi



On Apr 18, 2021, at 11:51 PM, Tang, Qi <tan...@msu.edu> wrote:

Thanks a lot, Patrick. We appreciate your help.

Qi



On Apr 18, 2021, at 11:30 PM, Patrick Sanan <patrick.sa...@gmail.com> wrote:

We have this functionality in a branch, which I'm working on cleaning up to get 
to master. It doesn't use PetscSection. Sorry about the delay!

You can only use PCFieldSplitSetDetectSaddlePoint when the pattern of zero and 
non-zero diagonal entries defines the splits correctly.
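
A minimal sketch of the two forms of that detection (assuming a PC object pc is already in hand; this only applies when zero diagonal entries mark one of the splits):

  PCFieldSplitSetDetectSaddlePoint(pc, PETSC_TRUE);
  /* equivalently, at runtime: -pc_fieldsplit_detect_saddle_point */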

On 17.04.2021 at 21:09, Matthew Knepley <knep...@gmail.com> wrote:

On Fri, Apr 16, 2021 at 8:39 PM Jorti, Zakariae via petsc-users 
<petsc-users@mcs.anl.gov> wrote:

Hello,


I have a DMStag grid with one dof on each edge and face center.

I want to use a PCFieldSplit preconditioner on a Jacobian matrix that I assume 
is already split but I am not sure how to determine the fields.

In the DMStag examples (ex2.c and ex3.c), the function 
PCFieldSplitSetDetectSaddlePoint is used to determine those fields based on 
zero diagonal entries. In my case, I have a Jacobian matrix that does not have 
zero diagonal entries.

Can I use PCFieldSplitSetDetectSaddlePoint in this case?

If not, how should I proceed?

Should I do like this example 
(https://www.mcs.anl.gov/petsc/petsc-master/src/ksp/ksp/tutorials/ex43.c.html):

const PetscInt Bfields[1] = {0},Efields[1] = {1};

KSPGetPC(ksp,&pc);

PCFieldSplitSetBlockSize(pc,2);

PCFieldSplitSetFields(pc,"B",1,Bfields,Bfields); 
PCFieldSplitSetFields(pc,"E",1,Efields,Efields);

where my B unknowns are defined on face centers and E unknowns are defined on 
edge centers?

That will not work. That interface only works for colocated fields that you get 
from DMDA.

Patrick, does DMSTAG use PetscSection? If so, the field split would be 
calculated automatically. If not, does it maintain the
field division so that it could be given to PCFIELDSPLIT as ISes?

  Thanks,

 Matt
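
A minimal sketch of the IS route mentioned above, assuming index sets isB (face-centered dofs) and isE (edge-centered dofs) over the locally owned rows have already been built by hand from the DMStag layout (isB, isE, ksp, and pc are placeholder names, not part of the DMStag support Patrick describes):

  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCFIELDSPLIT);
  PCFieldSplitSetIS(pc, "B", isB);  /* face-centered unknowns */
  PCFieldSplitSetIS(pc, "E", isE);  /* edge-centered unknowns */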

One last thing: I do not know which field comes first. Is it the one defined 
for face dofs or the one for edge dofs?


Thank you.

Best regards,


Zakariae



--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/





Re: [petsc-users] Preconditioner for stokes flow with mixed boundary conditions.

2021-04-19 Thread Barry Smith

  You might consider a discretization that replaces the direct 4th order 
discretization with a coupled set of second order discretizations and then using 
PCFIELDSPLIT to organize appropriate algebraic multigrid on the 2nd order 
discretizations. The thing is, traditional linear iterative solvers are totally 
focused on 2nd order systems, and if you blindly give them a 4th order system 
they will choke 99% of the time. PCFIELDSPLIT is our attempt to allow one to 
split up a system and use appropriate preconditioners on the parts, but the 
parts likely need to be second order for the traditional preconditioners to 
work.
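
  As a sketch only (assuming the rewritten system exposes the two second-order blocks to PCFIELDSPLIT as fields 0 and 1), the runtime options might look like:

  -pc_type fieldsplit -pc_fieldsplit_type multiplicative
  -fieldsplit_0_ksp_type preonly -fieldsplit_0_pc_type gamg
  -fieldsplit_1_ksp_type preonly -fieldsplit_1_pc_type gamg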

  Direct solvers don't care about the order, but don't scale. There are a few 
specialized 4th order iterative solvers but it is generally easier to make 
multiple 2nd order systems and use the traditional iterative solvers.
  
  Barry




> On Apr 19, 2021, at 1:55 PM, Abhinav Singh  wrote:
> 
> The only reference that I know of which solves these equations is :
> https://www.pks.mpg.de/fileadmin/user_upload/MPIPKS/group_pages/BiologicalPhysics/juelicher/publications/2015/AHp-MMfIAPVG2015.pdf
>  
> 
> 
> There is a coordinate-free form in the appendix. They have been solved using 
> the UMFPACK solver on staggered grids. I am trying a new approach (pressure 
> correction with an auxiliary potential). In 2 dimensions, the approach worked 
> well with GMRES. In 3d, GMRES again works, but only with Dirichlet boundary 
> conditions.
> 
> On Mon, 19 Apr 2021 at 20:45, Matthew Knepley wrote:
> On Mon, Apr 19, 2021 at 2:38 PM Abhinav Singh wrote:
> Hello,
> 
> The Stokes flow equations are a 3d version of the equations attached 
> (Stokes-Leslie Flow). The only variables/unknowns are v and 
> u_xy = 0.5(Dx(vy) + Dy(vx)).
> 
> This is not what I have referred to as Stokes flow (Google gave me no results 
> for Stokes-Leslie flow). The Stokes operator is elliptic, but I have no idea 
> if
> what you have written is. It has fourth order derivatives in it, with mixed 
> nonlinear terms, so nothing is clear to me. Is there a coordinate-independent 
> form?
> From what I see in the image, I have no idea what solvers might work. Do you 
> have any reference where people have solved this before?
> 
>   Thanks,
> 
>  Matt
>  
> I am trying to solve them iteratively by correcting the pressure to reach a 
> steady state. I start with 0 pressure. Currently, I am unable to solve the 
> first iteration (periodic in X and Z. V=0 at Y=0 and Dx(vy)+Dy(vx)=g(x) at 
> Y=10).
> 
> I think the equations might be singular, but I am not sure, as in my 
> experience the problem is well posed if the solution is known at certain 
> boundaries.
> 
> 
> 
> 
> On Mon, 19 Apr 2021 at 15:59, Matthew Knepley wrote:
> On Mon, Apr 19, 2021 at 9:37 AM Abhinav Singh wrote:
> What does this mean? Stokes means using an incompressibility constraint, for 
> which we often introduce a pressure.
> Yes, what I mean is solving the momentum block, say with known pressure. 
> The viscosity is constant; however, the momentum equation has both Laplacian and 
> gradient terms.
> 
> That does not make any sense to me. Can you write the equation?
> 
>   Thanks,
> 
>  Matt
>  
> You should use a good Laplacian preconditioner, like -pc_type gamg or 
> -pc_type ml.
> I tried gamg and it seems to diverge as the solution is NaN. The KSP residual 
> message is " 0 KSP Residual norm 1.131782747169e+01  ".
> When using -pc_type ml, I get Aggregate Warning and then some faulty address 
> which stops the code.
> 
> 
> 
> On Mon, 19 Apr 2021 at 12:31, Matthew Knepley wrote:
> On Mon, Apr 19, 2021 at 6:18 AM Abhinav Singh wrote:
> Hello all,
> 
> I am trying to solve for incompressible stokes flow on a particle based 
> discretization. I use a pressure correction technique along with Particle 
> strength exchange like operators.  
> 
> I call Petsc to solve the Stokes Equation without the pressure term.
> 
> What does this mean? Stokes means using an incompressibility constraint, for 
> which we often introduce a pressure.
> 
> Do you mean you are solving only the momentum block? If so, do you have a 
> constant viscosity? If so, then this is just the Laplace equation.
> You should use a good Laplacian preconditioner, like -pc_type gamg or 
> -pc_type ml.
> 
>   Thanks
> 
>  Matt
>  
> GMRES usually works great, but only with Dirichlet boundary conditions. When I use 
> a mixed boundary condition in Y (Dirichlet on the bottom and Neumann on the top) 
> with periodicity in X and Z, GMRES fails to converge when the size of the matrix 
> increases. For smaller sizes (up to 27*27*5), only GMRES works, and that too 
> only with the option 'pc_type none'. I was unable to find any pre

Re: [petsc-users] Data transfer between DMDA-managed Vecs

2021-04-19 Thread Dave May
On Tue, 20 Apr 2021 at 01:06, Constantine Khrulev 
wrote:

> Hi,
>
> I would like to transfer values from one DMDA-managed Vec (i.e. created
> using DMCreateGlobalVector() or equivalent) to a Vec managed using a
> different DMDA instance (same number of elements, same number of degrees
> of freedom, *different* domain decomposition).
>
> What approach would you recommend?
>
>
VecScatter.


> Bonus question: what about DMDAs using different MPI communicators?
>

VecScatter. :D

We do exactly this in PCTELESCOPE using VecScatter.
It might be worth snooping through telescope_dmda.c which you can find here

https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/impls/telescope/telescope_dmda.c.html

Thanks,
Dave
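
A minimal sketch of that idea for the same-communicator case, assuming the two DMDAs (da1 and da2 below, with global vectors g1 and g2) have matching global sizes and dof; the natural (grid) ordering is used as the common index space, and the different-communicator case is essentially what telescope_dmda.c handles:

  Vec        nat1, nat2;
  VecScatter scatter;
  IS         isfrom;
  PetscInt   n, rstart;

  DMDACreateNaturalVector(da1, &nat1);
  DMDACreateNaturalVector(da2, &nat2);
  /* source global vector -> natural (grid) ordering */
  DMDAGlobalToNaturalBegin(da1, g1, INSERT_VALUES, nat1);
  DMDAGlobalToNaturalEnd(da1, g1, INSERT_VALUES, nat1);
  /* both natural vectors index the grid identically, so a stride IS over
     the local range of nat2 says which entries to pull from nat1 */
  VecGetLocalSize(nat2, &n);
  VecGetOwnershipRange(nat2, &rstart, NULL);
  ISCreateStride(PetscObjectComm((PetscObject)da2), n, rstart, 1, &isfrom);
  VecScatterCreate(nat1, isfrom, nat2, NULL, &scatter);
  VecScatterBegin(scatter, nat1, nat2, INSERT_VALUES, SCATTER_FORWARD);
  VecScatterEnd(scatter, nat1, nat2, INSERT_VALUES, SCATTER_FORWARD);
  /* natural ordering -> target DMDA's global ordering */
  DMDANaturalToGlobalBegin(da2, nat2, INSERT_VALUES, g2);
  DMDANaturalToGlobalEnd(da2, nat2, INSERT_VALUES, g2);
  ISDestroy(&isfrom);
  VecScatterDestroy(&scatter);
  VecDestroy(&nat1);
  VecDestroy(&nat2);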


> Thanks!
>
> --
> Constantine
>
>


[petsc-users] Data transfer between DMDA-managed Vecs

2021-04-19 Thread Constantine Khrulev

Hi,

I would like to transfer values from one DMDA-managed Vec (i.e. created 
using DMCreateGlobalVector() or equivalent) to a Vec managed using a 
different DMDA instance (same number of elements, same number of degrees 
of freedom, *different* domain decomposition).


What approach would you recommend?

Bonus question: what about DMDAs using different MPI communicators?

Thanks!

--
Constantine



[petsc-users] convergence slow

2021-04-19 Thread Sepideh Kavousi
Hello,
I want to solve PF solidification + Navier-Stokes using the finite difference method. 
The Poisson equation is much more complicated than the one for a plain 
fluid-flow problem (because it includes additional order-parameter terms).
The code converges but the convergence rate is very slow. I have tried 
different options. The following are the best options I found, but the convergence is 
still very slow. I did not include the ksp_convergence data because it takes 
around 1000 linear solver steps for each SNES step.
I looked in the literature, but I did not find any data on PF + Navier-Stokes solutions 
using PETSc. Do you have any suggestions on how I should improve the convergence?
Best,
Sepideh


ibrun -np 136 ./one.out -ts_monitor -snes_fd_color -ts_max_snes_failures -1  
-ts_type bdf -ts_bdf_adapt -snes_linesearch_type l2 -snes_type ksponly -pc_type 
fieldsplit -pc_fieldsplit_0_fields 0,1 -pc_fieldsplit_1_fields 2,3 
-pc_fieldsplit_2_fields 4 -pc_fieldsplit_type multiplicative 
-fieldsplit_0_pc_type bjacobi -fieldsplit_1_pc_type bjacobi 
-fieldsplit_2_pc_type ilu -fieldsplit_0_ksp_type preonly -fieldsplit_1_ksp_type 
preonly -fieldsplit_2_ksp_type preonly -sub_pc_type ilu -sub_ksp_type preonly 
-snes_monitor -log_view log.txt

TACC:  Starting up job 7605521
TACC:  Starting parallel tasks...
nu_nd=14948.511 !
tau/w=26225.459!
initial!
0 TS dt 0.0001 time 0.
copy!
Write output at step= 0!
0 SNES Function norm 1.465357113711e+01
1 SNES Function norm 1.131764004117e+02
0 SNES Function norm 5.657793773935e+01
1 SNES Function norm 9.539772669908e-02
1 TS dt 0.0002 time 0.0001
copy!
0 SNES Function norm 1.734774194628e+02
1 SNES Function norm 1.009134558763e+00
2 TS dt 0.000324128 time 0.0003
copy!
0 SNES Function norm 1.199860231725e+01
1 SNES Function norm 1.698885201785e-02
3 TS dt 0.000470679 time 0.000624128
copy!
0 SNES Function norm 8.344934987594e+00
1 SNES Function norm 4.082958856578e-03
4 TS dt 0.000769357 time 0.00109481
copy!
0 SNES Function norm 9.338443428763e+00
1 SNES Function norm 2.472903267511e-03
5 TS dt 0.00153871 time 0.00186416
copy!
0 SNES Function norm 9.836384162921e+00
1 SNES Function norm 6.383053421790e-03
6 TS dt 0.00215245 time 0.00340288
copy!
0 SNES Function norm 5.043289760603e+00
1 SNES Function norm 2.168332362305e-03
7 TS dt 0.0022353 time 0.0033
copy!
0 SNES Function norm 1.421255264272e+00
1 SNES Function norm 1.750393691475e-03
8 TS dt 0.0044706 time 0.00779063
copy!
0 SNES Function norm 1.216746389225e+00
1 SNES Function norm 5.339123694240e-04
0 SNES Function norm 2.168489818312e+00
1 SNES Function norm 1.928809773415e-03
9 TS dt 0.00310217 time 0.0110885
copy!
0 SNES Function norm 8.021629529051e-01
1 SNES Function norm 2.161901383228e-04
10 TS dt 0.00300302 time 0.0141906
copy!
0 SNES Function norm 7.731190774223e-01
1 SNES Function norm 2.807020573793e-04
11 TS dt 0.00367968 time 0.0171937
copy!
0 SNES Function norm 1.263830181443e+00
1 SNES Function norm 1.049133009231e-02
12 TS dt 0.00704796 time 0.0208733
copy!
0 SNES Function norm 5.619818748359e+00
1 SNES Function norm 1.569218805659e-01
13 TS dt 0.00742462 time 0.0279213
copy!
0 SNES Function norm 2.268051808662e+01
1 SNES Function norm 7.055762930884e+00
0 SNES Function norm 2.173806325395e+01
1 SNES Function norm 1.564571270312e+00
0 SNES Function norm 1.208038223084e+01
1 SNES Function norm 5.192087605517e-02
14 TS dt 0.00186927 time 0.0299244
copy!
0 SNES Function norm 4.797722218343e+00
1 SNES Function norm 3.975928994156e-03
15 TS dt 0.00189622 time 0.0317937
copy!
0 SNES Function norm 5.510229536082e+00
1 SNES Function norm 4.496811351139e-03
16 TS dt 0.00172953 time 0.0336899
copy!
0 SNES Function norm 6.945417639798e+00
1 SNES Function norm 4.569326335543e-03
17 TS dt 0.00218527 time 0.0354194
copy!
0 SNES Function norm 1.351474156680e+01
1 SNES Function norm 1.719775600164e-02
18 TS dt 0.00224386 time 0.0376047
copy!
0 SNES Function norm 2.174969556700e+01
1 SNES Function norm 3.814212178889e-02
0 SNES Function norm 4.891415655138e+01
1 SNES Function norm 6.772042718901e-02
0 SNES Function norm 2.828143585009e+01
1 SNES Function norm 6.965842377755e-03
0 SNES Function norm 2.168193335917e+01
1 SNES Function norm 2.463212234975e-03
19 TS dt 0.000205482 time 0.0378175
copy!
0 SNES Function norm 1.409435206457e+00
1 SNES Function norm 1.859628755245e-02
20 TS dt 0.000188785 time 0.0380229
copy!
Write output at step= 20!
0 SNES Function norm 5.900261363044e-01
1 SNES Function norm 2.653576084845e-02
0 SNES Function norm 5.122905363140e+00
1 SNES Function norm 4.297068425492e-03
0 SNES Function norm 3.428448792746e+00
1 SNES Function norm 1.000610664096e-02
21 TS dt 6.50511e-05 time 0.0380738
copy!
0 SNES Function norm 3.700405789863e-01
1 SNES Function norm 6.2636423

Re: [petsc-users] Rather different matrix product results on multiple processes

2021-04-19 Thread Zhang, Hong via petsc-users
Peder,
I tested your code on a linux machine. I got
$ ./acorr_mwe
Data matrix norm: 5.0538e+01
Autocorrelation matrix norm: 1.0473e+03

mpiexec -n 40 ./acorr_mwe -matmattransmult_mpidense_mpidense_via allgatherv 
(default)
Data matrix norm: 5.0538e+01
Autocorrelation matrix norm: 1.0363e+03

mpiexec -n 20 ./acorr_mwe
Data matrix norm: 5.0538e+01
Autocorrelation matrix norm: 1.0897e+03

mpiexec -n 40 ./acorr_mwe -matmattransmult_mpidense_mpidense_via cyclic
Data matrix norm: 5.0538e+01
Autocorrelation matrix norm: 1.0363e+03

I use petsc 'main' branch (same as the latest release). You can remove 
MatAssemblyBegin/End calls after MatMatTransposeMult():
MatMatTransposeMult(data_mat, data_mat, MAT_INITIAL_MATRIX, PETSC_DEFAULT, 
&corr_mat);
//ierr = MatAssemblyBegin(corr_mat, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
//ierr = MatAssemblyEnd(corr_mat, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);

The communication patterns of the parallel implementation lead to a different order of 
floating-point operations, and thus a slightly different matrix norm for R.
Hong


From: petsc-users  on behalf of Peder 
Jørgensgaard Olesen via petsc-users 
Sent: Monday, April 19, 2021 7:57 AM
To: petsc-users@mcs.anl.gov 
Subject: [petsc-users] Rather different matrix product results on multiple 
processes


Hello,


When computing a matrix product of the type R = D.DT using 
MatMatTransposeMult() I find I get rather different results depending on the 
number of processes. In one example using a data set that is small compared to 
the application I get Frobenius norms |R| = 1.047e3 on a single process, 
1.0363e3 on a single HPC node (40 cores), and 9.7307e2 on two nodes.


I have ascertained that the single process result is indeed the correct one 
(i.e., eigenvectors of R form a proper basis for the columns of D), so 
naturally I'd love to be able to reproduce this result across different 
parallel setups. How might I achieve this?


I'm attaching MWE code and the data set used for the example.


Thanks in advance!


Best Regards


Peder Jørgensgaard Olesen

PhD Student, Turbulence Research Lab

Dept. of Mechanical Engineering

Technical University of Denmark

Niels Koppels Allé

Bygning 403, Rum 105

DK-2800 Kgs. Lyngby


Re: [petsc-users] Preconditioner for stokes flow with mixed boundary conditions.

2021-04-19 Thread Matthew Knepley
On Mon, Apr 19, 2021 at 9:37 AM Abhinav Singh 
wrote:

> What does this mean? Stokes means using an incompressibility constraint,
>> for which we often introduce a pressure.
>
> Yes, what I mean is solving the momentum block, say with known pressure.
> The viscosity is constant; however, the momentum equation has both Laplacian and
> gradient terms.
>

That does not make any sense to me. Can you write the equation?

  Thanks,

 Matt


> You should use a good Laplacian preconditioner, like -pc_type gamg or
>> -pc_type ml.
>
> I tried gamg and it seems to diverge as the solution is NaN. The KSP
> residual message is " 0 KSP Residual norm 1.131782747169e+01  ".
> When using -pc_type ml, I get Aggregate Warning and then some faulty
> address which stops the code.
>
>
>
> On Mon, 19 Apr 2021 at 12:31, Matthew Knepley  wrote:
>
>> On Mon, Apr 19, 2021 at 6:18 AM Abhinav Singh 
>> wrote:
>>
>>> Hello all,
>>>
>>> I am trying to solve for incompressible stokes flow on a particle based
>>> discretization. I use a pressure correction technique along with Particle
>>> strength exchange like operators.
>>>
>>> I call Petsc to solve the Stokes Equation without the pressure term.
>>>
>>
>> What does this mean? Stokes means using an incompressibility constraint,
>> for which we often introduce a pressure.
>>
>> Do you mean you are solving only the momentum block? If so, do you have a
>> constant viscosity? If so, then this is just the Laplace equation.
>> You should use a good Laplacian preconditioner, like -pc_type gamg or
>> -pc_type ml.
>>
>>   Thanks
>>
>>  Matt
>>
>>
>>> GMRES usually works great, but only with Dirichlet boundary conditions. When I
>>> use a mixed boundary condition in Y (Dirichlet on the bottom and Neumann on
>>> the top) with periodicity in X and Z, GMRES fails to converge when the size of
>>> the matrix increases. For smaller sizes (up to 27*27*5), only GMRES works, and
>>> that too only with the option 'pc_type none'. I was unable to find any
>>> preconditioner which worked. Eventually, it also fails for bigger sizes.
>>> UMFPACK works, but the LU decomposition fails after a certain size and is very
>>> slow.
>>>
>>> It would be great if you could suggest a way or a preconditioner which
>>> suits this problem.
>>>
>>> Kind regards,
>>> Abhinav
>>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>> 
>>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] Preconditioner for stokes flow with mixed boundary conditions.

2021-04-19 Thread Abhinav Singh
>
> What does this mean? Stokes means using an incompressibility constraint,
> for which we often introduce a pressure.

Yes, what I mean is solving the momentum block, say with known pressure.
The viscosity is constant; however, the momentum equation has both Laplacian and
gradient terms.

You should use a good Laplacian preconditioner, like -pc_type gamg or
> -pc_type ml.

I tried gamg and it seems to diverge as the solution is NaN. The KSP
residual message is " 0 KSP Residual norm 1.131782747169e+01  ".
When using -pc_type ml, I get Aggregate Warning and then some faulty
address which stops the code.



On Mon, 19 Apr 2021 at 12:31, Matthew Knepley  wrote:

> On Mon, Apr 19, 2021 at 6:18 AM Abhinav Singh 
> wrote:
>
>> Hello all,
>>
>> I am trying to solve for incompressible stokes flow on a particle based
>> discretization. I use a pressure correction technique along with Particle
>> strength exchange like operators.
>>
>> I call Petsc to solve the Stokes Equation without the pressure term.
>>
>
> What does this mean? Stokes means using an incompressibility constraint,
> for which we often introduce a pressure.
>
> Do you mean you are solving only the momentum block? If so, do you have a
> constant viscosity? If so, then this is just the Laplace equation.
> You should use a good Laplacian preconditioner, like -pc_type gamg or
> -pc_type ml.
>
>   Thanks
>
>  Matt
>
>
>> GMRES usually works great, but only with Dirichlet boundary conditions. When I
>> use a mixed boundary condition in Y (Dirichlet on the bottom and Neumann on
>> the top) with periodicity in X and Z, GMRES fails to converge when the size of
>> the matrix increases. For smaller sizes (up to 27*27*5), only GMRES works, and
>> that too only with the option 'pc_type none'. I was unable to find any
>> preconditioner which worked. Eventually, it also fails for bigger sizes.
>> UMFPACK works, but the LU decomposition fails after a certain size and is very
>> slow.
>>
>> It would be great if you could suggest a way or a preconditioner which
>> suits this problem.
>>
>> Kind regards,
>> Abhinav
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> 
>


Re: [petsc-users] Serial solution of system in MPI FEM application

2021-04-19 Thread Matthew Knepley
On Mon, Apr 19, 2021 at 9:04 AM Ivano Barletta 
wrote:

> Thanks
>
> I've tried the first option, I get this compilation error
>
> undefined reference to `matcreatesubmatrices_'
>
> My PETSc version is 3.7.5, is this a feature of newer versions?
>

It was renamed from MatGetSubMatrices():

  https://www.mcs.anl.gov/petsc/documentation/changes/38.html

  Thanks,

 Matt
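
A minimal sketch (in C, with error checking omitted) of the MatCreateSubMatrices route quoted below, assuming a parallel matrix A; rank 0 requests every row and column while the other ranks request empty index sets, which is a common pattern rather than anything mandated by the API:

  Mat        *submats;   /* submats[0]: full matrix on rank 0, empty elsewhere */
  IS          isrow, iscol;
  PetscInt    M, N;
  PetscMPIInt rank;

  MPI_Comm_rank(PetscObjectComm((PetscObject)A), &rank);
  MatGetSize(A, &M, &N);
  ISCreateStride(PETSC_COMM_SELF, rank ? 0 : M, 0, 1, &isrow);
  ISCreateStride(PETSC_COMM_SELF, rank ? 0 : N, 0, 1, &iscol);
  MatCreateSubMatrices(A, 1, &isrow, &iscol, MAT_INITIAL_MATRIX, &submats);
  /* ... debug with the sequential matrix submats[0] on rank 0 ... */
  MatDestroySubMatrices(1, &submats);
  ISDestroy(&isrow);
  ISDestroy(&iscol);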


> Ivano
>
> Il giorno ven 16 apr 2021 alle ore 16:06 Matthew Knepley <
> knep...@gmail.com> ha scritto:
>
>> On Fri, Apr 16, 2021 at 9:46 AM Ivano Barletta 
>> wrote:
>>
>>> Dear all,
>>>
>>> I have an MPI FEM application with an elliptic problem that I
>>> solve with PETSc. For debugging purposes I want to
>>> let the master process to solve the global system.
>>>
>>> Is there any simple way in PETSc to gather the distributed matrix
>>> (the type is MATMPIAIJ) on a single MPI process?
>>>
>>
>> You could use
>>
>>
>> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateSubMatrices.html
>>
>> to extract a single, serial matrix on process 0. However, you can do this
>> with the solver automatically using
>>
>>
>> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCTELESCOPE.html
>>
>> If you have the memory, you can do this more simply with
>>
>>
>> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCREDUNDANT.html
>>
>>   Thanks,
>>
>>  Matt
>>
>>
>>> Thanks,
>>> Ivano
>>>
>>>
>>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>> 
>>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] Serial solution of system in MPI FEM application

2021-04-19 Thread Ivano Barletta
Thanks

I've tried the first option, I get this compilation error

undefined reference to `matcreatesubmatrices_'

My PETSc version is 3.7.5, is this a feature of newer versions?

Ivano

Il giorno ven 16 apr 2021 alle ore 16:06 Matthew Knepley 
ha scritto:

> On Fri, Apr 16, 2021 at 9:46 AM Ivano Barletta 
> wrote:
>
>> Dear all,
>>
>> I have an MPI FEM application with an elliptic problem that I
>> solve with PETSc. For debugging purposes I want to
>> let the master process to solve the global system.
>>
>> Is there any simple way in PETSc to gather the distributed matrix
>> (the type is MATMPIAIJ) on a single MPI process?
>>
>
> You could use
>
>
> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateSubMatrices.html
>
> to extract a single, serial matrix on process 0. However, you can do this
> with the solver automatically using
>
>
> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCTELESCOPE.html
>
> If you have the memory, you can do this more simply with
>
>
> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCREDUNDANT.html
>
>   Thanks,
>
>  Matt
>
>
>> Thanks,
>> Ivano
>>
>>
>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> 
>


Re: [petsc-users] Preconditioner for stokes flow with mixed boundary conditions.

2021-04-19 Thread Matthew Knepley
On Mon, Apr 19, 2021 at 6:18 AM Abhinav Singh 
wrote:

> Hello all,
>
> I am trying to solve for incompressible stokes flow on a particle based
> discretization. I use a pressure correction technique along with Particle
> strength exchange like operators.
>
> I call Petsc to solve the Stokes Equation without the pressure term.
>

What does this mean? Stokes means using an incompressibility constraint,
for which we often introduce a pressure.

Do you mean you are solving only the momentum block? If so, do you have a
constant viscosity? If so, then this is just the Laplace equation.
You should use a good Laplacian preconditioner, like -pc_type gamg or
-pc_type ml.

  Thanks

 Matt


> GMRES usually works great, but only with Dirichlet boundary conditions. When I
> use a mixed boundary condition in Y (Dirichlet on the bottom and Neumann on
> the top) with periodicity in X and Z, GMRES fails to converge when the size of
> the matrix increases. For smaller sizes (up to 27*27*5), only GMRES works, and
> that too only with the option 'pc_type none'. I was unable to find any
> preconditioner which worked. Eventually, it also fails for bigger sizes.
> UMFPACK works, but the LU decomposition fails after a certain size and is very
> slow.
>
> It would be great if you could suggest a way or a preconditioner which
> suits this problem.
>
> Kind regards,
> Abhinav
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] Nesting splits based on IS

2021-04-19 Thread Matthew Knepley
On Sun, Apr 18, 2021 at 11:54 PM Barry Smith  wrote:

>
>   Matt,
>
>I agree it can be done with DMs, but I am not convinced that is
> necessarily a superior way when one has to make DMSHELLS and mess around
> with their behavior.
>
>One way to look at PCFIELDSPLIT is that its basic splitting tool is IS
> and DM is a way of generating IS that can be used for PCFIELDSPLIT. In this
> approach PCFIELDSPLIT needs a bit more support for IS and nesting. To also
> help using multiple levels of DM.
>
>The other way is to view DM and subDM as the basic splitting tool for
> PCFIELDSPLIT but then one has to ask the question how does DM communicate
> its splits to PCFIELDSPLIT?   I am too lazy to look at the code but
> presumably with IS, hence the approach above.
>
>In the traditional Unix software stack approach to code, each layer
> uses the simpler layer below to communicate; so DM uses IS to communicate
> to the layer below (the linear algebra and preconditioners).  DM is a
> hugely complicated layer and making it communicate directly with the vector
> level is complex; I like having the IS level in between to simplify the
> software stack and programming model.
>
>PetscSection makes life more complicated since it is a bit disjoint
> from IS. I've never resolved in my mind what role PetscSection plays: is it
> a level above IS in the software stack that generates IS, or does it sometimes
> "skip over" IS to talk directly to linear algebra?
>
>If you cannot make a cartoon picture of the software stack, with all
> the objects, then I think the software stack is not well designed or
> defined. I fear we cannot make such a cartoon currently.
>

DM _does_ communicate with PCFIELDSPLIT using IS. I agree with you that IS
is a good object for communication. In PETSc, IS is just a nice way to pass
a list of integers.

I don't think DM is hugely complicated. It does a few simple jobs. Here
its job is to remember which field each dof belongs to. That is all we
have to worry about.

PetscSection semantically is a linear space with some structure. We already
know we want some structure like this, since we break all linear spaces in
PETSc into processes. Section allows you to break it down a little finer
into the pieces of the space for each "point", where you can use a point to
mark anything you want, like a process, cell, edge, another dof, etc.
Sections can make an IS when asked a question, such as "which dofs lie in
the closure of this cell", or "which dofs are in this field", or "which
dofs are owned by this process". I wrote this up in the manual years
ago.

Matt
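
A minimal sketch of that DM-generates-IS path, assuming a DM dm that already knows its fields and a KSP ksp whose PC should be split (error checking and cleanup omitted); the ISes the DM hands back are exactly what PCFIELDSPLIT consumes:

  PC       pc;
  PetscInt nfields, i;
  char   **names;
  IS      *fields;

  DMCreateFieldDecomposition(dm, &nfields, &names, &fields, NULL);
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCFIELDSPLIT);
  for (i = 0; i < nfields; i++) {
    PCFieldSplitSetIS(pc, names[i], fields[i]);  /* attach each split by name */
  }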


>  Barry
>
>
>
>
> On Apr 18, 2021, at 8:54 AM, Matthew Knepley  wrote:
>
> On Sat, Apr 17, 2021 at 6:13 PM Barry Smith  wrote:
>
>>
>>   So you would like to be able to create three IS in your code and attach
>> them with names to the PC.  Then have -pc_fieldsplit_XXX_fields be able to
>> utilize the attached IS by name and use them to define the blocks.
>>
>>   This is all doable and could be added to PCFIELDSPLIT without too much
>> code, new code. The code would be largely like
>> PCFieldSplitSetRuntimeSplits_Private.
>>
>>The recursive part may also be doable but I think your syntax below is
>> not exactly right. You would need something like
>>
>> -fieldsplit_0_pc_type fieldsplit   // split the first PC into a fieldsplit
>>  -fieldsplit_0_pc_fieldsplit_0_fields xxx
>> -fieldsplit_0_fieldsplit_0_pc_type jacobi
>>  -fieldsplit_0_pc_fieldsplit_1_fields yyy
>>  etc
>>
>> this would split the first field into two fields and use jacobi on the
>> first field.
>>
>> The problem is what to use for labelling the xxx and the yyy?
>>
>> I think one could achieve this by having the PCFIELDSPLIT attach to each
>> of its split PCs the appropriate modified IS with names attached to them.
>> There are two cases:
>>
>>   when building the split, it first uses all the entries from fieldsplit_v_,
>> then those from fieldsplit_p_; the new ISs it needs to attach to the first
>> split PC are then two sets of integers, the first from 0 to len(v)-1 and the
>> second from len(v) to len(v)+len(p)-1.
>>
>>   when building the split, it interlaces the indices from v and p
>> (interlacing only makes sense if the sizes of v and p are the same). Then the
>> new v would be {0,2,4,...} and p would be {1,3,...}.
>>
>>   If you are ambitious and would like to add this to fieldsplit.c we'd be
>> very happy to receive an MR. It might even lead to allowing us to simplify
>> how the PCFIELDSPLIT interacts with DMs, if all the split types (stride,
>> named, etc.) are handled in a single consistent manner.
>>
>
> Barry, this is already working with DMs, which I think is the right way to
> do this.
>
> Here is the code:
>
>
> https://gitlab.com/petsc/petsc/-/blob/main/src/ksp/pc/impls/fieldsplit/fieldsplit.c#L420
>
> The DM must respond to DMCreateSubDM(). The interface is that the call
> provides a list of fields [f_0, f_1, ...]
> and the DM returns an IS for that combi

[petsc-users] Preconditioner for stokes flow with mixed boundary conditions.

2021-04-19 Thread Abhinav Singh
Hello all,

I am trying to solve for incompressible Stokes flow on a particle-based
discretization. I use a pressure correction technique along with particle
strength exchange-like operators.

I call PETSc to solve the Stokes equation without the pressure term. GMRES
usually works great, but only with Dirichlet boundary conditions. When I use a
mixed boundary condition in Y (Dirichlet on the bottom and Neumann on the top)
with periodicity in X and Z, GMRES fails to converge when the size of the matrix
increases. For smaller sizes (up to 27*27*5), only GMRES works, and that too
only with the option 'pc_type none'. I was unable to find any
preconditioner which worked. Eventually, it also fails for bigger sizes.
UMFPACK works, but the LU decomposition fails after a certain size and is very
slow.

It would be great if you could suggest a way or a preconditioner which
suits this problem.

Kind regards,
Abhinav