Dear Jed,

ILU with BAIJ works, and it reduces the condition number slightly better than 
PCVPBJACOBI does. Thanks for your guidance.

Best wishes,
Ali 

-----Original Message-----
From: Jed Brown <[email protected]> 
Sent: Tuesday, December 04, 2018 9:40 PM
To: Ali Reza Khaz'ali <[email protected]>; 'Smith, Barry F.' 
<[email protected]>
Cc: [email protected]
Subject: RE: Re[2]: [petsc-dev] Implementing of a variable block size BILU 
preconditioner

Ali Reza Khaz'ali <[email protected]> writes:

> Dear Jed,
>
> Thanks for your kind answer. I thought scalar BJACOBI does not need 
> data from the other domains, but ILU does.

There is no parallel ILU in PETSc.

$ mpiexec -n 2 mpich-clang/tests/ksp/ksp/examples/tutorials/ex2 -pc_type ilu
[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html for possible LU and Cholesky solvers
[0]PETSC ERROR: Could not locate a solver package. Perhaps you must ./configure with --download-<package>
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision: v3.10.2-19-g217b8b62e2  GIT Date: 2018-10-17 10:34:59 +0200
[0]PETSC ERROR: mpich-clang/tests/ksp/ksp/examples/tutorials/ex2 on a mpich-clang named joule by jed Tue Dec  4 11:02:53 2018
[0]PETSC ERROR: Configure options --download-chaco --download-p4est --download-sundials --download-triangle --with-fc=0 --with-mpi-dir=/home/jed/usr/ccache/mpich-clang --with-visibility --with-x --with-yaml PETSC_ARCH=mpich-clang
[0]PETSC ERROR: #1 MatGetFactor() line 4485 in /home/jed/petsc/src/mat/interface/matrix.c
[0]PETSC ERROR: #2 PCSetUp_ILU() line 142 in /home/jed/petsc/src/ksp/pc/impls/factor/ilu/ilu.c
[0]PETSC ERROR: #3 PCSetUp() line 932 in /home/jed/petsc/src/ksp/pc/interface/precon.c
[0]PETSC ERROR: #4 KSPSetUp() line 391 in /home/jed/petsc/src/ksp/ksp/interface/itfunc.c
[0]PETSC ERROR: #5 KSPSolve() line 723 in /home/jed/petsc/src/ksp/ksp/interface/itfunc.c
[0]PETSC ERROR: #6 main() line 201 in /home/jed/petsc/src/ksp/ksp/examples/tutorials/ex2.c
[0]PETSC ERROR: PETSc Option Table entries:
[0]PETSC ERROR: -malloc_test
[0]PETSC ERROR: -pc_type ilu
[0]PETSC ERROR: ----------------End of Error Message -------send entire error message to [email protected]
application called MPI_Abort(MPI_COMM_WORLD, 92) - process 0

> I have tested my code with scalar ILU. However, no KSP could converge.

There are no guarantees.  See src/ksp/pc/examples/tutorials/ex1.c which tests 
with Kershaw's matrix, a 4x4 sparse SPD matrix where incomplete factorization 
yields an indefinite preconditioner.
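
For reference, Kershaw's example is (if I remember it right) the SPD matrix

   [  3  -2   0   2 ]
   [ -2   3  -2   0 ]
   [  0  -2   3  -2 ]
   [  2   0  -2   3 ]

which is positive definite, yet ILU(0)/IC(0) produces a negative pivot, so the 
incomplete factor is indefinite.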

> Also, there are no zeros on the diagonal, at least in the cases that I 
> am currently simulating. However, I will recheck it. Additionally, I am 
> going to do a limited test with the available BILU (ILU with BAIJ 
> matrices) to see whether it can work if I keep my block sizes constant.

Good.  We should understand why that works or doesn't work before proceeding.
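
In case it is useful while you test, here is a minimal sketch (untested, error 
checking omitted) of what "ILU with BAIJ" means in practice: assemble the 
matrix with a constant block size and PCILU will use the blocked factorization 
kernels.  The 2x2 blocks, the 1D stencil, and all sizes below are purely 
illustrative.

#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat         A;
  Vec         x, b;
  KSP         ksp;
  PC          pc;
  PetscInt    bs = 2, nb = 8, i;
  /* 2x2 blocks stored row-major: a diagonal block and an off-diagonal block */
  PetscScalar diag[] = {4, -1, -1, 4}, off[] = {-1, 0, 0, -1};

  PetscInitialize(&argc, &argv, NULL, NULL);

  /* Constant block size bs; preallocate 3 block nonzeros per block row */
  MatCreateSeqBAIJ(PETSC_COMM_SELF, bs, nb*bs, nb*bs, 3, NULL, &A);
  for (i = 0; i < nb; i++) {
    MatSetValuesBlocked(A, 1, &i, 1, &i, diag, INSERT_VALUES);
    if (i > 0)    {PetscInt j = i-1; MatSetValuesBlocked(A, 1, &i, 1, &j, off, INSERT_VALUES);}
    if (i < nb-1) {PetscInt j = i+1; MatSetValuesBlocked(A, 1, &i, 1, &j, off, INSERT_VALUES);}
  }
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

  MatCreateVecs(A, &x, &b);
  VecSet(b, 1.0);

  KSPCreate(PETSC_COMM_SELF, &ksp);
  KSPSetOperators(ksp, A, A);
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCILU);     /* block ILU(0), because A is BAIJ */
  KSPSetFromOptions(ksp);
  KSPSolve(ksp, b, x);

  KSPDestroy(&ksp); VecDestroy(&x); VecDestroy(&b); MatDestroy(&A);
  PetscFinalize();
  return 0;
}

Swapping in an AIJ matrix with the same entries gives scalar ILU(0), which is 
the comparison worth making here.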

> Since PCVPBJACOBI had limited success, I feel it is going to work.
>
> Best wishes, Ali
>
> -----Original Message-----
> From: Jed Brown <[email protected]>
> Sent: Tuesday, December 04, 2018 6:22 PM
> To: Alireza Khazali <[email protected]>; Smith, Barry F. 
> <[email protected]>
> Cc: [email protected]
> Subject: Re: Re[2]: [petsc-dev] Implementing of a variable block size 
> BILU preconditioner
>
> Alireza Khazali <[email protected]> writes:
>
>> Dear Barry,
>>
>>
>> Thanks for your kind answer. You are right. I will try to write my own block 
>> ILU with the properties I need. However, my primary problem is the matrix 
>> storage. As I understand it, each process keeps a portion of the matrix and 
>> has access to that portion only. Additionally, I have not found any routine 
>> that lets one process access the portion of the matrix stored in the memory 
>> of another process. However, some algorithms like ILU need such access, and 
>> despite spending much time investigating the code, I still do not understand 
>> how this is done.
>
> Even scalar ILU in PETSc is used inside domain decomposition, such as block 
> Jacobi or additive Schwarz.  Hypre has a parallel ILU, but the parallel 
> scalability is bad.  This is pretty fundamental to ILU: it is not a good 
> parallel algorithm.
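>
> (Concretely, the usual way to run ILU in parallel is as the subdomain solver 
> inside one of those methods, e.g. something like
>
>   -ksp_type gmres -pc_type bjacobi -sub_pc_type ilu
>
> or -pc_type asm -sub_pc_type ilu for overlapping Schwarz. Each process then 
> factors only its own diagonal block, so no off-process matrix entries are 
> needed.)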
>
>> I am a very experienced programmer in the field of numerical analysis, but 
>> PETSc is a huge and complicated code. Therefore, I have to apologize for 
>> taking your precious time if you find the solution to my problem too 
>> obvious, but I will be really grateful if you could give me a hint.
>>
>>
>>
>>
>> Dear Jed,
>>
>>
>> Thank you for your kind answer. I have a multi-component fluid flow 
>> simulator, which produces valid results using a direct solver (like MKL 
>> DGESV). However, direct solvers become useless as the problem size 
>> increases, and no other available combination of PC/KSP could solve the 
>> system. The simulator must solve a system of nonlinear PDEs for each node, 
>> and the primary unknowns are pressure and fluid compositions. However, at 
>> some pressures the fluid is vaporized/liquefied (i.e., undergoes a phase 
>> change), and the number of PDEs for that node increases (phase equilibrium 
>> equations have to be solved for that node, too). Therefore, in the 
>> discretized and then linearized system we have a block of equations for each 
>> node, but the block size is variable, depending on the fluid phase status at 
>> that node. Block preconditioners can handle the problem, but only if they 
>> are designed for such a variable-block-size matrix. Thanks to Barry, we have 
>> a variable-block-size BJacobi preconditioner (PCVPBJACOBI), but it does not 
>> provide the required precision in a few cases, and more effective 
>> preconditioning is needed. Also, I have found that others may need such 
>> variable block size handling, as can be seen in the PETSc mailing lists:
>
> I understand that you have variable sized blocks, but why not use scalar ILU? 
>  Is it failing due to zeros on the diagonal and you have evidence that 
> blocking fixes that?  If so, what ordering are you using for your fields?  
> Putting the dual variable (often pressure) last often works.
> Note that incomplete factorization can break down even for SPD matrices.
>
> I keep asking about this because block ILU(0) is algebraically equivalent to 
> scalar ILU(0) on a matrix with the same nonzero pattern, modulo the handling 
> of singular blocks, which is hard to achieve with scalar ILU.
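>
> (If the blocking is mainly about avoiding zero pivots, two cheap things to 
> try before writing new code: order the unknowns within each node so the 
> constraint-like equation comes last, e.g. (composition_1, ..., composition_m, 
> pressure) rather than pressure first, and/or try an option like 
> -pc_factor_nonzeros_along_diagonal, which permutes during factorization to 
> keep nonzeros on the diagonal.)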
>
>> https://lists.mcs.anl.gov/pipermail/petsc-users/2011-October/010491.html
>>
>> https://lists.mcs.anl.gov/pipermail/petsc-users/2018-August/036028.html
>>
>> Since I have always loved to contribute to open source projects, and PETSc 
>> has helped me a lot in my other research, I decided to add a 
>> variable-block-size ILU preconditioner to PETSc. However, PETSc is too 
>> complicated, and I cannot accomplish such a task efficiently without help.
>>
>>
>>
>>
>> Many thanks,
>>
>> Ali
>> ----- Original Message -----
>>
>>
>> From: Jed Brown ([email protected])
>> Date: 13/09/97 05:09
>> To: Smith, Barry F. ([email protected]), Ali Reza Khaz'ali
>> ([email protected])
>> Cc: [email protected]
>> Subject: Re: [petsc-dev] Implementing of a variable block size BILU 
>> preconditioner
>>
>>
>>
>>
>>
>> "Smith, Barry F. via petsc-dev" <[email protected]> writes:
>>
>>>> On Dec 3, 2018, at 4:49 PM, Ali Reza Khaz'ali <[email protected]> 
>>>> wrote:
>>>> 
>>>> Hi,
>>>> 
>>>> I think that this topic is more suited to the PETSc developers than to its 
>>>> users; therefore, I am moving it to the dev list.
>>>> 
>>>> Continuing the discussion on implementing a variable-block-size BILU 
>>>> preconditioner, would it be possible to change the block size parameter 
>>>> (bs) of the BAIJ format so that it can handle variable block sizes (i.e., 
>>>> instead of being a scalar, it could be an array)? Although BILU does not 
>>>> necessarily require rectangular blocks, I think it leads to less messy 
>>>> code.
>>>
>>>    That is an alternative to using the AIJ format. The problem with this 
>>> approach is that you will need to write a lot of code for the variable 
>>> block size BAIJ: MatSetValues_SeqVBAIJ, MatMult_SeqVBAIJ, etc. Whereas if 
>>> you reuse AIJ, you only need to write new factorization and solve routines 
>>> (much less code).
>>
>> Sure, but the result isn't really different (modulo associativity) 
>> from normal ILU applied to a block matrix (at least unless you start 
>> considering fill with incomplete blocks).
>>
>> Ali, what are you hoping to achieve with variable block ILU?
>>
>>>> Also, being new to the PETSc code, I do not understand some parts of it, 
>>>> especially the distributed matrix storage and some of the implemented 
>>>> numerical algorithms. Is there any reference that I can use for this?
>>>
>>>    There is a little discussion in the final chapters of the users manual, 
>>> plus you should read the developers manual. But there is not a lot of 
>>> detail except in the actual code.
>>>
>>>     Barry
>>>
>>>> 
>>>>  
>>>> --
>>>> Ali Reza Khaz’ali
>>>> Assistant Professor of Petroleum Engineering
>>>> Department of Chemical Engineering
>>>> Isfahan University of Technology
>>>> Isfahan, Iran
>>>>
