Re: [petsc-users] Parallelism of the Mat.convert() function

2024-04-23 Thread Pierre Jolivet
The code is behaving as it should, IMHO.
Here is a way to have the Mat stored the same way regardless of the number of processes.
[…]
global_rows, global_cols = input_array.T.shape
size = ((None, global_rows), (0 if COMM_WORLD.Get_rank() < 
COMM_WORLD.Get_size() - 1 else global_cols, global_cols))
[…]
I.e., you want to enforce that the lone column is stored by the last process, 
otherwise, it will be stored by the first one, and interleaved with the rest of 
the (0,0) block.
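For completeness, here is a fuller version of that second block, adapted from your script quoted below (a sketch only, I have not re-run it here):

import numpy as np
from petsc4py import PETSc
from petsc4py.PETSc import COMM_WORLD

input_array = np.array([[-1, -2, -3, -4, -5, -6, -7]], dtype=np.float64)
global_rows, global_cols = input_array.T.shape
# Give the lone column to the last process (all other processes own 0 columns),
# so that the converted MPIAIJ keeps it as the last global column.
last = COMM_WORLD.Get_rank() == COMM_WORLD.Get_size() - 1
size = ((None, global_rows), (global_cols if last else 0, global_cols))
mat_2 = PETSc.Mat().createAIJ(size=size, comm=COMM_WORLD)
mat_2.setUp()
mat_2.setValues(range(global_rows), range(global_cols), input_array)
mat_2.assemblyBegin()
mat_2.assemblyEnd()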

Thanks,
Pierre

> On 23 Apr 2024, at 6:13 PM, Miguel Angel Salazar de Troya 
>  wrote:
> 
> Hello,
> 
> The following code returns a different answer depending on how many 
> processors I use. With one processor, the last MPIAIJ matrix is correctly 
> formed:
> 
> row 0: (0, 1.)  (1, 2.)  (2, 3.)  (3, 4.)  (4, -1.)
> row 1: (0, 5.)  (1, 6.)  (2, 7.)  (3, 8.)  (4, -2.)
> row 2: (0, 9.)  (1, 10.)  (2, 11.)  (3, 12.)  (4, -3.)
> row 3: (0, 13.)  (1, 14.)  (2, 15.)  (3, 16.)  (4, -4.)
> row 4: (0, 17.)  (1, 18.)  (2, 19.)  (3, 20.)  (4, -5.)
> row 5: (0, 21.)  (1, 22.)  (2, 23.)  (3, 24.)  (4, -6.)
> row 6: (0, 25.)  (1, 26.)  (2, 27.)  (3, 28.)  (4, -7.)
> 
> With two processors though, the column matrix is placed in between:
> 
> row 0: (0, 1.)  (1, 2.)  (2, -1.)  (3, 3.)  (4, 4.)
> row 1: (0, 5.)  (1, 6.)  (2, -2.)  (3, 7.)  (4, 8.)
> row 2: (0, 9.)  (1, 10.)  (2, -3.)  (3, 11.)  (4, 12.)
> row 3: (0, 13.)  (1, 14.)  (2, -4.)  (3, 15.)  (4, 16.)
> row 4: (0, 17.)  (1, 18.)  (2, -5.)  (3, 19.)  (4, 20.)
> row 5: (0, 21.)  (1, 22.)  (2, -6.)  (3, 23.)  (4, 24.)
> row 6: (0, 25.)  (1, 26.)  (2, -7.)  (3, 27.)  (4, 28.)
> 
> Am I not building the nested matrix correctly, perhaps? I am using the 
> Firedrake PETSc fork. Can you reproduce it?
> 
> Thanks,
> Miguel
> 
> 
> ```python
> import numpy as np
> from petsc4py import PETSc
> from petsc4py.PETSc import COMM_WORLD
> 
> 
> input_array = np.array(
> [
> [1, 2, 3, 4],
> [5, 6, 7, 8],
> [9, 10, 11, 12],
> [13, 14, 15, 16],
> [17, 18, 19, 20],
> [21, 22, 23, 24],
> [25, 26, 27, 28],
> ],
> dtype=np.float64,
> )
> 
> 
> n_11_global_rows, n_11_global_cols = input_array.shape
> size = ((None, n_11_global_rows), (None, n_11_global_cols))
> mat = PETSc.Mat().createAIJ(size=size, comm=COMM_WORLD)
> mat.setUp()
> mat.setValues(range(n_11_global_rows), range(n_11_global_cols), input_array)
> mat.assemblyBegin()
> mat.assemblyEnd()
> mat.view()
> 
> input_array = np.array([[-1, -2, -3, -4, -5, -6, -7]], dtype=np.float64)
> global_rows, global_cols = input_array.T.shape
> size = ((None, global_rows), (None, global_cols))
> mat_2 = PETSc.Mat().createAIJ(size=size, comm=COMM_WORLD)
> mat_2.setUp()
> mat_2.setValues(range(global_rows), range(global_cols), input_array)
> mat_2.assemblyBegin()
> mat_2.assemblyEnd()
> 
> N = PETSc.Mat().createNest([[mat, mat_2]], comm=COMM_WORLD)
> N.assemblyBegin()
> N.assemblyEnd()
> 
> PETSc.Sys.Print(f"N sizes: {N.getSize()}")
> 
> N.convert("mpiaij").view()
> ```
> 
> 
> -- 
> Miguel Angel Salazar de Troya
> Head of Software Engineering
> EPFL Innovation Park Building C
> 1015 Lausanne
> Email: miguel.sala...@corintis.com
> Website: http://www.corintis.com



Re: [petsc-users] MatCreateTranspose

2024-04-12 Thread Pierre Jolivet


> On 12 Apr 2024, at 11:51 AM, Carl-Johan Thore  
> wrote:
> 
> On Fri, Apr 12, 2024 at 11:16 AM Pierre Jolivet  <mailto:pie...@joliv.et>> wrote:
>> 
>> 
>>> On 12 Apr 2024, at 11:10 AM, Carl-Johan Thore >> <mailto:carljohanth...@gmail.com>> wrote:
>>> 
>>> Pierre, I see that you've already done a merge request for this. Thanks!
>>> I have tested this and it works nicely in my application
>> 
>> I guess your matrix is symmetric in pattern?
>> Because otherwise, I don’t think this should work.
>> But if it’s OK for your use case, I could simply add a PetscCheck() that the 
>> input Mat is symmetric and then get this integrated (better to have 
>> something partially working than nothing at all, I guess).
>> Please let me know.
>> 
>> Thanks,
>> Pierre
> 
> Yes, unless I'm mistaken it's structurally symmetric. Would be great to have 
> this integrated.

OK, I’ll work on finalizing this.

> (I don't now what you have in mind for the check, but 
> MatIsStructurallySymmetric did not work for me)

You’ll have to call MatSetOption(A, MAT_STRUCTURALLY_SYMMETRIC, PETSC_TRUE) 
before KSPSolveTranspose().
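The call sequence is then roughly the following (a sketch, assuming A, b, and x are your already assembled Mat and Vec objects; error checking and creation code omitted):

KSP ksp;
PetscCall(MatSetOption(A, MAT_STRUCTURALLY_SYMMETRIC, PETSC_TRUE));
PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
PetscCall(KSPSetOperators(ksp, A, A));
PetscCall(KSPSetFromOptions(ksp));       /* e.g., -pc_type redistribute */
PetscCall(KSPSolveTranspose(ksp, b, x)); /* solves A^T x = b            */
PetscCall(KSPDestroy(&ksp));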

Thanks,
Pierre

> /Carl-Johan
> 
> 
>  
>>> /Carl-Johan
>>> 
>>> On Wed, Apr 10, 2024 at 8:24 AM Carl-Johan Thore >> <mailto:carljohanth...@gmail.com>> wrote:
>>>> 
>>>> 
>>>> On Tue, Apr 9, 2024 at 5:31 PM Pierre Jolivet >>> <mailto:pie...@joliv.et>> wrote:
>>>>> 
>>>>>> On 9 Apr 2024, at 4:19 PM, Carl-Johan Thore >>>>> <mailto:carljohanth...@gmail.com>> wrote:
>>>>>> 
>>>>>> Thanks for the suggestion. I don't have a factored matrix (and can't 
>>>>>> really use direct linear solvers) so MatSolveTranspose doesn't seem to 
>>>>>> be an option. 
>>>>>> I should have mentioned that I've also tried KSPSolveTranspose but that 
>>>>>> doesn't work with pcredistribute
>>>>> 
>>>>> I’m not a frequent PCREDISTRIBUTE user, but it looks like 
>>>>> https://petsc.org/release/src/ksp/pc/impls/redistribute/redistribute.c.html#line332
>>>>>   could be copy/paste’d into PCApplyTranspose_Redistribute() by just 
>>>>> changing a MatMult() to MatMultTranspose() and KSPSolve() to 
>>>>> KSPSolveTranspose().
>>>>> Would you be willing to contribute (and test) this?
>>>>> Then, KSPSolveTranspose() — which should be the function you call — will 
>>>>> work.
>>>>> 
>>>>> Thanks,
>>>>> Pierre
>>>> 
>>>> Thanks, that sounds promising. Yes, I'll try to make a contribution
>>>> /Carl-Johan
>> 



Re: [petsc-users] MatCreateTranspose

2024-04-12 Thread Pierre Jolivet


> On 12 Apr 2024, at 11:10 AM, Carl-Johan Thore  
> wrote:
> 
> Pierre, I see that you've already done a merge request for this. Thanks!
> I have tested this and it works nicely in my application

I guess your matrix is symmetric in pattern?
Because otherwise, I don’t think this should work.
But if it’s OK for your use case, I could simply add a PetscCheck() that the 
input Mat is symmetric and then get this integrated (better to have something 
partially working than nothing at all, I guess).
Please let me know.

Thanks,
Pierre

> /Carl-Johan
> 
> On Wed, Apr 10, 2024 at 8:24 AM Carl-Johan Thore  <mailto:carljohanth...@gmail.com>> wrote:
>> 
>> 
>> On Tue, Apr 9, 2024 at 5:31 PM Pierre Jolivet > <mailto:pie...@joliv.et>> wrote:
>>> 
>>>> On 9 Apr 2024, at 4:19 PM, Carl-Johan Thore >>> <mailto:carljohanth...@gmail.com>> wrote:
>>>> 
>>>> Thanks for the suggestion. I don't have a factored matrix (and can't 
>>>> really use direct linear solvers) so MatSolveTranspose doesn't seem to be 
>>>> an option. 
>>>> I should have mentioned that I've also tried KSPSolveTranspose but that 
>>>> doesn't work with pcredistribute
>>> 
>>> I’m not a frequent PCREDISTRIBUTE user, but it looks like 
>>> https://petsc.org/release/src/ksp/pc/impls/redistribute/redistribute.c.html#line332
>>>   could be copy/paste’d into PCApplyTranspose_Redistribute() by just 
>>> changing a MatMult() to MatMultTranspose() and KSPSolve() to 
>>> KSPSolveTranspose().
>>> Would you be willing to contribute (and test) this?
>>> Then, KSPSolveTranspose() — which should be the function you call — will 
>>> work.
>>> 
>>> Thanks,
>>> Pierre
>> 
>> Thanks, that sounds promising. Yes, I'll try to make a contribution
>> /Carl-Johan



Re: [petsc-users] MatCreateTranspose

2024-04-09 Thread Pierre Jolivet

> On 9 Apr 2024, at 4:19 PM, Carl-Johan Thore  wrote:
> 
> Thanks for the suggestion. I don't have a factored matrix (and can't really 
> use direct linear solvers) so MatSolveTranspose doesn't seem to be an option. 
> I should have mentioned that I've also tried KSPSolveTranspose but that 
> doesn't work with pcredistribute

I’m not a frequent PCREDISTRIBUTE user, but it looks like 
https://petsc.org/release/src/ksp/pc/impls/redistribute/redistribute.c.html#line332
  could be copy/paste’d into PCApplyTranspose_Redistribute() by just changing a 
MatMult() to MatMultTranspose() and KSPSolve() to KSPSolveTranspose().
Would you be willing to contribute (and test) this?
Then, KSPSolveTranspose() — which should be the function you call — will work.
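Schematically, the new function would be nothing more than this outline (not actual PETSc code, just the shape of it; the body is the one of PCApply_Redistribute() from the link above):

static PetscErrorCode PCApplyTranspose_Redistribute(PC pc, Vec b, Vec x)
{
  PetscFunctionBegin;
  /* same body as PCApply_Redistribute(), except that every MatMult()     */
  /* becomes MatMultTranspose() and the inner KSPSolve() on the reduced   */
  /* system becomes KSPSolveTranspose()                                   */
  PetscFunctionReturn(PETSC_SUCCESS);
}

plus wiring it up through pc->ops->applytranspose where the other ops of PCREDISTRIBUTE are set.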

Thanks,
Pierre

> /Carl-Johan
> 
> On Tue, Apr 9, 2024 at 3:59 PM Zhang, Hong  > wrote:
>> Carl-Johan,
>> You can use MatSolveTranspose() to solve A'*x = b. See 
>> petsc/src/ksp/ksp/tutorials/ex79.c
>> 
>> `MatCreateTranspose()` is used if you only need a matrix that behaves like 
>> the transpose, but don't need the storage to be changed, i.e., A and A' 
>> share same matrix storage, thus MatGetRow() needs to get columns of A, which 
>> is not supported.
>> 
>> Hong
>> From: petsc-users > > on behalf of Carl-Johan Thore 
>> mailto:carljohanth...@gmail.com>>
>> Sent: Tuesday, April 9, 2024 6:38 AM
>> To: petsc-users mailto:petsc-users@mcs.anl.gov>>
>> Subject: [petsc-users] MatCreateTranspose
>>  
>>  
>> Hi,
>> 
>> I have a matrix A with transpose A' and would like to solve the linear 
>> system A'*x = b using the pcredistribute preconditioner. It seemed like a 
>> good idea to use MatCreateTranspose, but this leads to
>> 
>> [0]PETSC ERROR: - Error Message 
>> --
>> [0]PETSC ERROR: No support for this operation for this object type
>> [0]PETSC ERROR: No method getrow for Mat of type transpose
>> [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
>> [0]PETSC ERROR: Petsc Release Version 3.21.0, unknown
>> [0]PETSC ERROR: Configure options COPTFLAGS="-O3 -march=native" 
>> CXXOPTFLAGS="-O3 -march=native" --with-fortran-bindings=0 FOPTFLAGS="-O3 
>> -march=native" CUDAOPTFLAGS=-O3 --with-cuda --with-cusp --with-debugging=0 
>> --download-scalapack --download-hdf5 --download-zlib --download-mumps 
>> --download-parmetis --download-metis --download-ptscotch --download-hypre 
>> --download-spai
>> [0]PETSC ERROR: #1 MatGetRow() at 
>> /mnt/c/mathware/petsc/petsc-v3-21-0/src/mat/interface/matrix.c:573
>> [0]PETSC ERROR: #2 PCSetUp_Redistribute() at 
>> /mnt/c/mathware/petsc/petsc-v3-21-0/src/ksp/pc/impls/redistribute/redistribute.c:111
>> [0]PETSC ERROR: #3 PCSetUp() at 
>> /mnt/c/mathware/petsc/petsc-v3-21-0/src/ksp/pc/interface/precon.c:1079
>> [0]PETSC ERROR: #4 KSPSetUp() at 
>> /mnt/c/mathware/petsc/petsc-v3-21-0/src/ksp/ksp/interface/itfunc.c:415
>> 
>> MatTranspose is a working alternative, but MatCreateTranspose would be 
>> preferable. In principle the solution seems straightforward -- just add a 
>> getrow method -- but is it, and is it a good idea (performancewise etc)?
>> 
>> Kind regards,
>> Carl-Johan 



Re: [petsc-users] Install PETSc with option `--with-shared-libraries=1` failed on MacOS

2024-03-18 Thread Pierre Jolivet


> On 18 Mar 2024, at 7:59 PM, Satish Balay via petsc-users 
>  wrote:
> 
> On Mon, 18 Mar 2024, Satish Balay via petsc-users wrote:
> 
>> On Mon, 18 Mar 2024, Pierre Jolivet wrote:
>> 
>>> 
>>> 
>>>> On 18 Mar 2024, at 5:13 PM, Satish Balay via petsc-users 
>>>>  wrote:
>>>> 
>>>> Ah - the compiler did flag code bugs.
>>>> 
>>>>> (current version is 0.3.26 but we can’t update because there is a huge 
>>>>> performance regression which makes the pipeline timeout)
>>>> 
>>>> maybe we should retry - updating to the latest snapshot and see if this 
>>>> issue persists.
>>> 
>>> Well, that’s easy to see it is _still_ broken: 
>>> https://gitlab.com/petsc/petsc/-/jobs/6419779589
>>> The infamous gcc segfault that can’t let us run the pipeline, but that 
>>> builds fine when it’s you that connect to the machine (I bothered you about 
>>> this a couple of months ago in case you don’t remember, see 
>>> https://gitlab.com/petsc/petsc/-/merge_requests/7143).
>> 
>>> make[2]: *** [../../Makefile.tail:46: libs] Bus error (core dumped)
>> 
>> Ah - ok - that's a strange error. I'm not sure how to debug it. [it fails 
>> when the build is invoked from configure - but not when its invoked directly 
>> from bash/shell.]
> 
> Pushed a potential workaround to jolivet/test-openblas

And here we go: 
https://gitlab.com/petsc/petsc/-/jobs/6420606887
20 minutes in, and still in the dm_* tests with timeouts right, left, and 
center.
For reference, this prior job 
https://gitlab.com/petsc/petsc/-/jobs/6418468279
  completed in 3 minutes (OK, maybe add a couple of minutes to rebuild the 
packages to have a fair comparison).
What did they do to OpenBLAS? Add a sleep() in their axpy?

Thanks,
Pierre

> Note: The failure comes up on same OS (Fedora 39) on X64 aswell.
> 
> Satish
> 
>> 
>> Satish
>> 
>>> 
>>> Thanks,
>>> Pierre
>>> 
>>>> 
>>>> Satish
>>>> 
>>>> On Mon, 18 Mar 2024, Zongze Yang wrote:
>>>> 
>>>>> The issue of openblas was resolved by this pr 
>>>>> https://github.com/OpenMathLib/OpenBLAS/pull/4565
>>>>> 
>>>>> Best wishes,
>>>>> Zongze
>>>>> 
>>>>>> On 18 Mar 2024, at 00:50, Zongze Yang  wrote:
>>>>>> 
>>>>>> It can be resolved by adding CFLAGS=-Wno-int-conversion. Perhaps the 
>>>>>> default behaviour of the new version compiler has been changed?
>>>>>> 
>>>>>> Best wishes,
>>>>>> Zongze
>>>>>>> On 18 Mar 2024, at 00:23, Satish Balay  wrote:
>>>>>>> 
>>>>>>> Hm - I just tried a build with balay/xcode15-mpich - and that goes 
>>>>>>> through fine for me. So don't know what the difference here is.
>>>>>>> 
>>>>>>> One difference is - I have a slightly older xcode. However your 
>>>>>>> compiler appears to behave as using -Werror. Perhaps 
>>>>>>> CFLAGS=-Wno-int-conversion will help here?
>>>>>>> 
>>>>>>> Satish
>>>>>>> 
>>>>>>> 
>>>>>>> Executing: gcc --version
>>>>>>> stdout:
>>>>>>> Apple clang version 15.0.0 (clang-1500.3.9.4)
>>>>>>> 
>>>>>>> Executing: 
>>>>>>> /Users/zzyang/workspace/repos/petsc/arch-darwin-c-debug/bin/mpicc -show
>>>>>>> stdout: gcc -fPIC -fno-stack-check -Qunused-arguments -g -O0 
>>>>>>> -Wno-implicit-function-declaration -fno-common 
>>>>>>> -I/Users/zzyang/workspace/repos/petsc/arch-darwin-c-debug/include 
>>>>>>> -

Re: [petsc-users] Install PETSc with option `--with-shared-libraries=1` failed on MacOS

2024-03-18 Thread Pierre Jolivet
 -DMAX_STACK_ALLOC=2048 -Wall -DF_INTERFACE_GFORT -fPIC -DNO_WARMUP 
>>>> -DMAX_CPU_NUMBER=24 -DMAX_PARALLEL_NUMBER=1 -DBUILD_SINGLE=1 
>>>> -DBUILD_DOUBLE=1 -DBUILD_COMPLEX=1 -DBUILD_COMPLEX16=1 
>>>> -DVERSION=\"0.3.21\" -march=armv8-a -UASMNAME -UASMFNAME -UNAME -UCNAME 
>>>> -UCHAR_NAME -UCHAR_CNAME -DASMNAME=_lapack_wrappers 
>>>> -DASMFNAME=_lapack_wrappers_ -DNAME=lapack_wrappers_ 
>>>> -DCNAME=lapack_wrappers -DCHAR_NAME=\"lapack_wrappers_\" 
>>>> -DCHAR_CNAME=\"lapack_wrappers\" -DNO_AFFINITY -I.. -c 
>>>> src/lapack_wrappers.c -o src/lapack_wrappers.o
>>>> src/lapack_wrappers.c:570:81: warning: incompatible integer to pointer 
>>>> conversion passing 'blasint' (aka 'int') to parameter of type 'const 
>>>> blasint *' (aka 'const int *'); take the address with & [-Wint-conversion]
>>>>   RELAPACK_sgemmt(uplo, transA, transB, n, k, alpha, A, ldA, B, ldB, beta, 
>>>> C, info);
>>>> 
>>>> 
>>>> 
>>>> 
>>>> On Sun, 17 Mar 2024, Pierre Jolivet wrote:
>>>> 
>>>>> Ah, my bad, I misread linux-opt-arm as a macOS runner, no wonder the 
>>>>> option is not helping…
>>>>> Take Barry’s advice.
>>>>> Furthermore, it looks like OpenBLAS people are steering in the opposite 
>>>>> direction as us, by forcing the use of ld-classic 
>>>>> https://github.com/OpenMathLib/OpenBLAS/commit/103d6f4e42fbe532ae4ea48e8d90d7d792bc93d2
>>>>>  , so that’s another good argument in favor of -framework Accelerate.
>>>>> 
>>>>> Thanks,
>>>>> Pierre
>>>>> 
>>>>> PS: anyone benchmarked those 
>>>>> https://developer.apple.com/documentation/accelerate/sparse_solvers
>>>>>   ? I didn’t even know they existed.
>>>>> 
>>>>>> On 17 Mar 2024, at 3:06 PM, Zongze Yang >>>>> <mailto:yangzon...@gmail.com>> wrote:
>>>>>> 
>>>>>> Understood. Thank you for your advice.
>>>>>> 
>>>>>> Best wishes,
>>>>>> Zongze
>>>>>> 
>>>>>>> On 17 Mar 2024, at 22:04, Barry Smith >>>>>> <mailto:bsm...@petsc.dev> <mailto:bsm...@petsc.dev>> wrote:
>>>>>>> 
>>>>>>> 
>>>>>>> I would just avoid the --download-openblas  option. The BLAS/LAPACK 
>>>>>>> provided by Apple should perform fine, perhaps even better than 
>>>>>>> OpenBLAS on your system.
>>>>>>> 
>>>>>>> 
>>>>>>>> On Mar 17, 2024, at 9:58 AM, Zongze Yang >>>>>>> <mailto:yangzon...@gmail.com> <mailto:yangzon...@gmail.com>> wrote:
>>>>>>>> 
>>>>>>>> Adding the flag `--download-openblas-make-options=TARGET=GENERIC` did 
>>>>>>>> not resolve the issue. The same error persisted.
>>>>>>>> 
>>>>>>>> Best wishes,
>>>>>>>> Zongze
>>>>>>>> 
>>>>>>>>> On 17 Mar 2024, at 20:58, Pierre Jolivet >>>>>>>> <mailto:pie...@joliv.et> <mailto:pie...@joliv.et>> wrote:
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>>> On 17 Mar 2024, at 1:04 PM, Zongze Yang >>>>>>>>> <mailto:yangzon...@gmail.com> <mailto:yangzon...@gmail.com>> wrote:
>>>>>>>>>> 
>>>>>>>>>> Thank you for providing the instructions. I try the first option

Re: [petsc-users] Install PETSc with option `--with-shared-libraries=1` failed on MacOS

2024-03-17 Thread Pierre Jolivet
 here
> void RELAPACK_zgemmt(const char *, const char *, const char *, const 
> blasint *, const blasint *, const double *, const double *, const blasint *, 
> const double *, const blasint *, const double *, double *, const blasint *);
> 4 errors generated.
> ```
> 
> Best wishes,
> Zongze
> 
> 
> 
>> On 17 Mar 2024, at 18:48, Pierre Jolivet  wrote:
>> 
>> You need this MR 
>> https://gitlab.com/petsc/petsc/-/merge_requests/7365
>> main has been broken for macOS since 
>> https://gitlab.com/petsc/petsc/-/merge_requests/7341
>>  , so the alternative is to revert to the commit prior.
>> It should work either way.
>> 
>> Thanks,
>> Pierre
>> 
>>> On 17 Mar 2024, at 11:31 AM, Zongze Yang >> <mailto:yangzon...@gmail.com>> wrote:
>>> 
>>> 
>>> Hi, PETSc Team,
>>> 
>>> I am trying to install petsc with the following configuration
>>> ```
>>> ./configure \
>>> --download-bison \
>>> --download-mpich \
>>> --download-mpich-configure-arguments=--disable-opencl \
>>> --download-hwloc \
>>> --download-hwloc-configure-arguments=--disable-opencl \
>>> --download-openblas \
>>> --download-openblas-make-options="'USE_THREAD=0 USE_LOCKING=1 
>>> USE_OPENMP=0'" \
>>> --with-shared-libraries=1 \
>>> --with-fortran-bindings=0 \
>>> --with-zlib \
>>> LDFLAGS=-Wl,-ld_classic
>>> ```
>>> 
>>> The log shows that
>>> ```
>>>Exhausted all shared linker guesses. Could not determine how to create a 
>>> shared library!
>>> ```
>>> 
>>> I recently updated the system and Xcode, as well as homebrew.
>>> 
>>> The configure.log is attached.
>>> 
>>> Thanks for your attention to this matter.
>>> 
>>> Best wishes,
>>> Zongze
>>> 
> 



Re: [petsc-users] Install PETSc with option `--with-shared-libraries=1` failed on MacOS

2024-03-17 Thread Pierre Jolivet
You need this MR 
https://gitlab.com/petsc/petsc/-/merge_requests/7365
main has been broken for macOS since 
https://gitlab.com/petsc/petsc/-/merge_requests/7341, so the alternative is to revert to the commit prior.
It should work either way.
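One way to test the MR locally, assuming your clone uses the GitLab repository as the remote named origin (adjust otherwise):

git fetch origin refs/merge-requests/7365/head:mr-7365
git checkout mr-7365
# then re-run configure/make as usual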

Thanks,
Pierre

> On 17 Mar 2024, at 11:31 AM, Zongze Yang  wrote:
> 
> 
> Hi, PETSc Team,
> 
> I am trying to install petsc with the following configuration
> ```
> ./configure \
> --download-bison \
> --download-mpich \
> --download-mpich-configure-arguments=--disable-opencl \
> --download-hwloc \
> --download-hwloc-configure-arguments=--disable-opencl \
> --download-openblas \
> --download-openblas-make-options="'USE_THREAD=0 USE_LOCKING=1 
> USE_OPENMP=0'" \
> --with-shared-libraries=1 \
> --with-fortran-bindings=0 \
> --with-zlib \
> LDFLAGS=-Wl,-ld_classic
> ```
> 
> The log shows that
> ```
>Exhausted all shared linker guesses. Could not determine how to create a 
> shared library!
> ```
> 
> I recently updated the system and Xcode, as well as homebrew.
> 
> The configure.log is attached.
> 
> Thanks for your attention to this matter.
> 
> Best wishes,
> Zongze
> 
> 
> 


Re: [petsc-users] Help Needed Debugging Installation Issue for PETSc with SLEPc

2024-03-15 Thread Pierre Jolivet
This was fixed 5 days ago in 
https://gitlab.com/slepc/slepc/-/merge_requests/638, so you need to use an up-to-date release branch of SLEPc.
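For a standalone git clone of SLEPc, that boils down to something like (a sketch, adjust to your setup; with --download-slepc the clone sits under $PETSC_DIR/$PETSC_ARCH/externalpackages/git.slepc):

cd $SLEPC_DIR
git checkout release
git pull
# then rebuild SLEPc as you usually do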

Thanks,
Pierre

> On 15 Mar 2024, at 3:44 PM, Zongze Yang  wrote:
> 
> Hi,
> 
> I am currently facing an issue while attempting to install PETSc with SLEPc. 
> Despite not encountering any errors in the log generated by the 'make' 
> command, I am receiving an error message stating "Error during compile".
> 
> I would greatly appreciate it if someone could provide me with some guidance 
> on debugging this issue. 
> 
> I have attached the configure logs and make logs for both PETSc and SLEPc for 
> your reference.
> 
> Below is an excerpt from the make.log file of SLEPc:
> ```
> CLINKER default/lib/libslepc.3.020.1.dylib
> ld: warning: -commons use_dylibs is no longer supported, using error 
> treatment instead
> ld: warning: -commons use_dylibs is no longer supported, using error 
> treatment instead
> ld: warning: -commons use_dylibs is no longer supported, using error 
> treatment instead
> ld: warning: duplicate -rpath 
> '/Users/zzyang/opt/firedrake/firedrake-real-int32-debug/src/petsc/default/lib'
>  ignored
> ld: warning: dylib 
> (/opt/homebrew/Cellar/gcc/13.2.0/lib/gcc/current/libgfortran.dylib) was built 
> for newer macOS version (14.0) than being linked (12.0)
> ld: warning: dylib 
> (/opt/homebrew/Cellar/gcc/13.2.0/lib/gcc/current/libquadmath.dylib) was built 
> for newer macOS version (14.0) than being linked (12.0)
>DSYMUTIL default/lib/libslepc.3.020.1.dylib
> gmake[6]: Leaving directory 
> '/Users/zzyang/opt/firedrake/firedrake-real-int32-debug/src/petsc/default/externalpackages/git.slepc'
> gmake[5]: Leaving directory 
> '/Users/zzyang/opt/firedrake/firedrake-real-int32-debug/src/petsc/default/externalpackages/git.slepc'
> ***ERROR
>  Error during compile, check default/lib/slepc/conf/make.log
>  Send all contents of ./default/lib/slepc/conf to slepc-ma...@upv.es 
> 
> 
> Finishing make run at 五, 15  3 2024 21:04:17 +0800
> ```
> 
> Thank you very much for your attention and support.
> 
> Best wishes,
> Zongze
> 
> 



Re: [petsc-users] Help with SLEPc eigenvectors convergence.

2024-03-06 Thread Pierre Jolivet
It seems your A is rank-deficient.
If you slightly regularize the GEVP, e.g., -st_target 1.0E-6, you’ll get errors 
closer to 0.
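For instance (a sketch; your_gevp stands for whatever driver you already run, with your other options unchanged, and the exact option names are worth double-checking with -help):

./your_gevp -st_type sinvert -st_target 1.0E-6 -eps_error_relative ::ascii_info_detail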

Thanks,
Pierre

> On 6 Mar 2024, at 8:57 PM, Eric Chamberland via petsc-users 
>  wrote:
> 
> Hi,
> 
> we have a simple generalized Hermitian problem  (Kirchhoff plate 
> vibration) for which we are comparing SLEPc results with Matlab results.
> 
> SLEPc computes eigenvalues correctly, as Matlab does.
> 
> However, the output eigenvectors are not fully converged and we are 
> trying to understand where we have missed a convergence parameter or 
> anything else about eigenvectors.
> 
> SLEPc warns us at the end of EPSSolve with this message:
> 
> ---
>   Problem: some of the first 10 relative errors are higher than the 
> tolerance
> ---
> 
> And in fact, when we import the resulting vectors into Matlab, 
> "A*x-B*Lambda*x" isn't close to 0.
> 
> Here are attached the EPS view output as the A and B matrices used.
> 
> Any help or insights will be appreciated! :)
> 
> Thanks,
> Eric
> 



Re: [petsc-users] MUMPS Metis options

2024-03-05 Thread Pierre Jolivet

> On 5 Mar 2024, at 2:20 PM, Michal Habera  wrote:
> 
> Dear all,
> 
> MUMPS allows custom configuration of the METIS library which it uses
> for symmetric permutations using "mumps_par%METIS OPTIONS",
> see p. 35 of 
> https://mumps-solver.org/doc/userguide_5.6.2.pdf.
> 
> Is it possible to provide these options to MUMPS via PETSc API? I can
> only find a way to control integer and real control parameters.

It is not possible, but I guess we could add something like 
MatMumpsSetMetisOptions(Mat A, PetscInt index, PetscInt value), just like 
MatMumpsSetIcntl().
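To fix ideas (a sketch; PCFactorGetMatrix() and MatMumpsSetIcntl() are existing API, MatMumpsSetMetisOptions() is only the hypothetical addition mentioned above):

Mat F;
PetscCall(PCFactorGetMatrix(pc, &F)); /* factored Mat from a MATSOLVERMUMPS factorization */
PetscCall(MatMumpsSetIcntl(F, 7, 5)); /* existing API: ICNTL(7)=5, i.e., METIS ordering    */
/* PetscCall(MatMumpsSetMetisOptions(F, METIS_OPTION_UFACTOR, 30)); hypothetical new call  */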

Thanks,
Pierre

> -- 
> Kind regards,
> 
> Michal Habera



Re: [petsc-users] PAMI error on Summit

2024-02-29 Thread Pierre Jolivet


> On 29 Feb 2024, at 5:06 PM, Matthew Knepley  wrote:
> 
> On Thu, Feb 29, 2024 at 11:03 AM Blondel, Sophie via petsc-users 
> mailto:petsc-users@mcs.anl.gov>> wrote:
>>  
>> Thank you Junchao,
>> 
>> Yes, I am using gpu-aware MPI.
>> 
>> Is "-use_gpu_aware_mpi 0" a runtime option or a compile option?
> 
> That is a configure option, so

No, it’s a runtime option, you don’t need to reconfigure, just add it to your 
command line arguments.
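I.e., something like (substitute your actual launcher, executable, and arguments):

jsrun -n 6 ./xolotl <your usual arguments> -use_gpu_aware_mpi 0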

Thanks,
Pierre

>   cd $PETSC_DIR
>   ./${PETSC_ARCH}/lib./petsc/conf/reconfigure-${PETSC_ARCH}.py 
> -use-gpu_aware_mpi 0
>   make all
> 
>   Thanks,
> 
>  Matt
>  
>> Best,
>> 
>> Sophie
>> From: Junchao Zhang > >
>> Sent: Thursday, February 29, 2024 10:50
>> To: Blondel, Sophie mailto:sblon...@utk.edu>>
>> Cc: xolotl-psi-developm...@lists.sourceforge.net 
>>  
>> > >; 
>> petsc-users@mcs.anl.gov  
>> mailto:petsc-users@mcs.anl.gov>>
>> Subject: Re: [petsc-users] PAMI error on Summit
>>  
>> Hi Sophie,
>>   PetscSFBcastEnd() was calling MPI_Waitall() to finish the communication in 
>> DMGlobalToLocal. 
>>   I guess you used gpu-aware MPI.  The error you saw might be due to it.  
>> You can try without it with a petsc option  -use_gpu_aware_mpi 0
>>   But we generally recommend gpu-aware mpi.  You can try on other GPU 
>> machines to see if it is just an IBM Spectrum MPI problem. 
>> 
>>Thanks.
>> --Junchao Zhang
>> 
>> 
>> On Thu, Feb 29, 2024 at 9:17 AM Blondel, Sophie via petsc-users 
>> mailto:petsc-users@mcs.anl.gov>> wrote:
>>  
>> Hi,
>> 
>> I am using PETSc build with the Kokkos CUDA backend on Summit but when I run 
>> my code with multiple MPI tasks I get the following error:
>> 0 TS dt 1e-12 time 0.
>> errno 14 pid 864558
>> xolotl: 
>> /__SMPI_build_dir__/ibmsrc/pami/ibm-pami/buildtools/pami_build_port/../pami/components/devices/shmem/shaddr/CMAShaddr.h:164:
>>  size_t PAMI::Dev
>> ice::Shmem::CMAShaddr::read_impl(PAMI::Memregion*, size_t, PAMI::Memregion*, 
>> size_t, size_t, bool*): Assertion `cbytes > 0' failed.
>> errno 14 pid 864557
>> xolotl: 
>> /__SMPI_build_dir__/ibmsrc/pami/ibm-pami/buildtools/pami_build_port/../pami/components/devices/shmem/shaddr/CMAShaddr.h:164:
>>  size_t PAMI::Dev
>> ice::Shmem::CMAShaddr::read_impl(PAMI::Memregion*, size_t, PAMI::Memregion*, 
>> size_t, size_t, bool*): Assertion `cbytes > 0' failed.
>> [e28n07:864557] *** Process received signal ***
>> [e28n07:864557] Signal: Aborted (6)
>> [e28n07:864557] Signal code:  (-6)
>> [e28n07:864557] [ 0] 
>> linux-vdso64.so.1(__kernel_sigtramp_rt64+0x0)[0x200604d8]
>> [e28n07:864557] [ 1] /lib64/glibc-hwcaps/power9/libc-2.28.so 
>> (gsignal+0xd8)[0x25d796f8]
>> [e28n07:864557] [ 2] /lib64/glibc-hwcaps/power9/libc-2.28.so 
>> (abort+0x164)[0x25d53ff4]
>> [e28n07:864557] [ 3] /lib64/glibc-hwcaps/power9/libc-2.28.so 
>> (+0x3d280)[0x25d6d280]
>> [e28n07:864557] [ 4] [e28n07:864558] *** Process received signal ***
>> [e28n07:864558] Signal: Aborted (6)
>> [e28n07:864558] Signal code:  (-6)
>> [e28n07:864558] [ 0] 
>> linux-vdso64.so.1(__kernel_sigtramp_rt64+0x0)[0x200604d8]
>> [e28n07:864558] [ 1] /lib64/glibc-hwcaps/power9/libc-2.28.so 
>> (gsignal+0xd8)[0x25d796f8]
>> [e28n07:864558] [ 2] /lib64/glibc-hwcaps/power9/libc-2.28.so 
>> (abort+0x164)[0x25d53ff4]
>> [e28n07:864558] [ 3] /lib64/glibc-hwcaps/power9/libc-2.28.so 
>> 

Re: [petsc-users] question on PCLSC with matrix blocks of type MATNEST

2024-02-13 Thread Pierre Jolivet


> On 13 Feb 2024, at 9:21 PM, Zhang, Hong via petsc-users 
>  wrote:
> 
> Pierre,
> I can repeat your change in ex27.c on petsc-release. However, replacing 
> +Mat D;
> +PetscCall(MatMatMult(C, C, MAT_INITIAL_MATRIX, PETSC_DECIDE, &D));
> +PetscCall(MatDestroy(&D));
> with
>  PetscCall(MatCreateNest(PETSC_COMM_WORLD, 2, NULL, 2, NULL, array, &C));
> +Mat D;
> +PetscCall(MatProductCreate(C, C, NULL, &D));
> +PetscCall(MatProductSetType(D, MATPRODUCT_AB));
> +PetscCall(MatProductSetFromOptions(D));
> +PetscCall(MatProductSymbolic(D));
> +PetscCall(MatProductNumeric(D));
> +PetscCall(MatDestroy(&D));

Sure, I fixed a single MatProduct bug; my gut tells me there are other unhandled corner cases, and you indeed found another one.

> ./ex27 -f  farzad_B_rhs -truncate -solve_augmented
> ...
> [0]PETSC ERROR: Petsc has generated inconsistent data
> [0]PETSC ERROR: Unspecified symbolic phase for product AB with A nest, B 
> nest. Call MatProductSetFromOptions() first or the product is not supported
> ...
> [0]PETSC ERROR: #1 MatProductSymbolic() at 
> /Users/hongzhang-sun/soft/petsc/src/mat/interface/matproduct.c:807
> [0]PETSC ERROR: #2 main() at ex27.c:250
> 
> i.e., same confusing error message as reported by Hana, because this calling 
> process does not call MatProduct_Private() with your fix. A fix to this is to 
> modify the error message in MatProductSymbolic():
> 
> --- a/src/mat/interface/matproduct.c
> +++ b/src/mat/interface/matproduct.c
> @@ -804,7 +804,7 @@ PetscErrorCode MatProductSymbolic(Mat mat)
>  ...
> -PetscCheck(!missing, PetscObjectComm((PetscObject)mat), PETSC_ERR_PLIB, 
> "Unspecified symbolic phase for product %s. Call MatProductSetFromOptions() 
> first", errstr);
> +PetscCheck(!missing, PetscObjectComm((PetscObject)mat), PETSC_ERR_PLIB, 
> "Unspecified symbolic phase for product %s. Call MatProductSetFromOptions() 
> first or the product is not supported", errstr);

I don’t see how this is less confusing.
In fact, to me, an error message with conditionals in the sentence, but without PETSc reporting the value of each condition, is infuriating.
How do I know whether MatProductSetFromOptions() has not been called, or whether the product is not supported?
This is very difficult to debug, and if it were possible to catch this with two different checks, it would be much better.
But the current design of MatProduct may not allow us to do it, so I will not be opposed to such a change.
Maybe add a Boolean à la pc->setupcalled or pc->setfromoptionscalled in the MatProduct structure to better distinguish the cause of the failure?
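Something along these lines, just to fix ideas (pseudo-code: a setfromoptionscalled field does not exist in the MatProduct structure today, and missing/errstr are the variables already used by the current check):

PetscCheck(product->setfromoptionscalled, PetscObjectComm((PetscObject)mat), PETSC_ERR_ORDER,
           "Call MatProductSetFromOptions() before MatProductSymbolic()");
PetscCheck(!missing, PetscObjectComm((PetscObject)mat), PETSC_ERR_SUP,
           "Unsupported product %s for these Mat types", errstr);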

Thanks,
Pierre

> with this fix, I get 
> ./ex27 -f farzad_B_rhs -truncate -solve_augmented
> ...
> [0]PETSC ERROR: Petsc has generated inconsistent data
> [0]PETSC ERROR: Unspecified symbolic phase for product AB with A nest, B 
> nest. Call MatProductSetFromOptions() first or the product is not supported
> 
> If you agree with this fix, I'll create a MR for it.
> Hong
> 
> 
> From: Pierre Jolivet 
> Sent: Tuesday, February 13, 2024 12:08 AM
> To: Zhang, Hong 
> Cc: Hana Honnerová ; petsc-users@mcs.anl.gov 
> 
> Subject: Re: [petsc-users] question on PCLSC with matrix blocks of type 
> MATNEST
>  
> 
> 
>> On 13 Feb 2024, at 12:33 AM, Zhang, Hong  wrote:
>> 
>> Pierre,
>> I just modified the error message of MatProductSymbolic() and added a 
>> testing segment in src/mat/tests/ex195.c. I have not pushed my change yet.
>> 
>> Your fix at 
>> https://gitlab.com/petsc/petsc/-/commit/9dcea022de3b0309e5c16b8c554ad9c85dea29cf?merge_request_iid=7283
>>  is more general. Has this fix merged to release and main? With latest main 
>> and release, I get same previous error message.
> 
> I don’t (anymore, but could prior to my fix).
> The trigger is MatMatMult() with MAT_INITIAL_MATRIX in PCSetUp_LSC().
> Reproducible with:
> diff --git a/src/ksp/ksp/tutorials/ex27.c b/src/ksp/ksp/tutorials/ex27.c
> index 116b7df8522..9bdf4d7334a 100644
> --- a/src/ksp/ksp/tutorials/ex27.c
> +++ b/src/ksp/ksp/tutorials/ex27.c
> @@ -245,2 +245,5 @@ int main(int argc, char **args)
>  PetscCall(MatCreateNest(PETSC_COMM_WORLD, 2, NULL, 2, NULL, array, &C));
> +Mat D;
> +PetscCall(MatMatMult(C, C, MAT_INITIAL_MATRIX, PETSC_DECIDE, &D));
> +PetscCall(MatDestroy(&D));
>  if (!sbaij) PetscCall(MatNestSetVecType(C, VECNEST));
> Which now generates:
> $ ./ex27 -f ${DATAFILESPATH}/matrices/farzad_B_rhs -truncate -solve_augmented 
> Failed to load RHS, so use a vector of all ones.
> Failed to load initial guess, so use a vector of all zeros.
> [0]PETSC ERROR: - Error Messa

Re: [petsc-users] question on PCLSC with matrix blocks of type MATNEST

2024-02-12 Thread Pierre Jolivet


> On 13 Feb 2024, at 12:33 AM, Zhang, Hong  wrote:
> 
> Pierre,
> I just modified the error message of MatProductSymbolic() and added a testing 
> segment in src/mat/tests/ex195.c. I have not pushed my change yet.
> 
> Your fix at 
> https://gitlab.com/petsc/petsc/-/commit/9dcea022de3b0309e5c16b8c554ad9c85dea29cf?merge_request_iid=7283
>  is more general. Has this fix merged to release and main? With latest main 
> and release, I get same previous error message.

I don’t (anymore, but could prior to my fix).
The trigger is MatMatMult() with MAT_INITIAL_MATRIX in PCSetUp_LSC().
Reproducible with:
diff --git a/src/ksp/ksp/tutorials/ex27.c b/src/ksp/ksp/tutorials/ex27.c
index 116b7df8522..9bdf4d7334a 100644
--- a/src/ksp/ksp/tutorials/ex27.c
+++ b/src/ksp/ksp/tutorials/ex27.c
@@ -245,2 +245,5 @@ int main(int argc, char **args)
 PetscCall(MatCreateNest(PETSC_COMM_WORLD, 2, NULL, 2, NULL, array, &C));
+Mat D;
+PetscCall(MatMatMult(C, C, MAT_INITIAL_MATRIX, PETSC_DECIDE, &D));
+PetscCall(MatDestroy(&D));
 if (!sbaij) PetscCall(MatNestSetVecType(C, VECNEST));
Which now generates:
$ ./ex27 -f ${DATAFILESPATH}/matrices/farzad_B_rhs -truncate -solve_augmented 
Failed to load RHS, so use a vector of all ones.
Failed to load initial guess, so use a vector of all zeros.
[0]PETSC ERROR: - Error Message 
--
[0]PETSC ERROR: No support for this operation for this object type
[0]PETSC ERROR: MatProduct AB not supported for nest and nest
[0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.

Thanks,
Pierre

> Hong
> 
> From: Pierre Jolivet 
> Sent: Sunday, February 11, 2024 7:43 AM
> To: Zhang, Hong 
> Cc: Hana Honnerová ; petsc-users@mcs.anl.gov 
> 
> Subject: Re: [petsc-users] question on PCLSC with matrix blocks of type 
> MATNEST
>  
> 
> On 8 Feb 2024, at 5:37 PM, Zhang, Hong via petsc-users 
>  wrote:
> 
> Hana,
> "product AB with A nest, B nest" is not supported by PETSc. I do not know why 
> PETSc does not display such an error message. I'll check it.
> 
> Did you?
> A naive fix is to simply add the missing PetscCheck() in MatProduct_Private() 
> right after 
> MatProductSetFromOptions() https://petsc.org/release/src/mat/interface/matrix.c.html#line10026
> (notice that it is there, at line 10048, in the other code branch)
> I have this at 
> https://gitlab.com/petsc/petsc/-/commit/9dcea022de3b0309e5c16b8c554ad9c85dea29cf?merge_request_iid=7283
>  (coupled with code refactoring to avoid missing any other operations), but 
> maybe we could do things more elegantly.
> 
> Thanks,
> Pierre
> 
> Hong
> From: petsc-users  on behalf of Hana 
> Honnerová 
> Sent: Thursday, February 8, 2024 4:45 AM
> To: petsc-users@mcs.anl.gov 
> Subject: [petsc-users] question on PCLSC with matrix blocks of type MATNEST
>  
> Hi all,
> I am trying to solve linear systems arising from isogeometric discretization 
> (similar to FEM) of the Navier-Stokes equations in parallel using PETSc. The 
> linear systems are of saddle-point type, so I would like to use the 
> PCFIELDSPLIT preconditioner with the -pc_fieldsplit_detect_saddle_point 
> option, Schur complement factorization and the LSC Schur complement 
> preconditioner. I do not provide any user-defined operators for PCLSC in my 
> codes (at least for now).
> I store the matrix as a MATNEST consisting of 4 blocks (F for 
> velocity-velocity part, Bt for velocity-pressure part, B for 
> pressure-velocity part and NULL for pressure-pressure part). It is also 
> convenient for me to store the blocks F, Bt and B as another MATNEST 
> consisting of blocks corresponding to individual velocity components. 
> 
> However, in this setting, I get the following error message:
> [0]PETSC ERROR: Petsc has generated inconsistent data
> [0]PETSC ERROR: Unspecified symbolic phase for product AB with A nest, B 
> nest. Call MatProductSetFromOptions() first
> [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.20.4, unknown 
> [0]PETSC ERROR: 
> /home/hhornik/gismo/build-petsc-mpi/RelWithDebInfo/bin/gsINSSolverPETScTest 
> on a arch-debug named ThinkPad-T14 by hhornik Thu Feb  8 11:04:17 2024
> [0]PETSC ERROR: Configure options PETSC_ARCH=arch-debug --with-debugging=1 
> --download-mumps --download-scalapack
> [0]PETSC ERROR: #1 MatProductSymbolic() at 
> /home/hhornik/Software/PETSc/src/mat/interface/matproduct.c:807
> [0]PETSC ERROR: #2 MatProduct_Private() at 
> /home/hhornik/Software/PETSc/src/mat/interface/matrix.c:10027
> [0]PETSC ERROR: #3 MatMatMult() at 
> /home/hhornik/Software/PETSc/src/mat/interface/matrix.c:10103
> [0]PETSC ERROR: #4 PCSetUp_LSC() at 
> /home/hhornik/Softw

Re: [petsc-users] question on PCLSC with matrix blocks of type MATNEST

2024-02-11 Thread Pierre Jolivet

> On 8 Feb 2024, at 5:37 PM, Zhang, Hong via petsc-users 
>  wrote:
> 
> Hana,
> "product AB with A nest, B nest" is not supported by PETSc. I do not know why 
> PETSc does not display such an error message. I'll check it.

Did you?
A naive fix is to simply add the missing PetscCheck() in MatProduct_Private() 
right after MatProductSetFromOptions() 
https://petsc.org/release/src/mat/interface/matrix.c.html#line10026 (notice
that it is there, at line 10048, in the other code branch)
I have this at 
https://gitlab.com/petsc/petsc/-/commit/9dcea022de3b0309e5c16b8c554ad9c85dea29cf?merge_request_iid=7283
 (coupled with code refactoring to avoid missing any other operations), but 
maybe we could do things more elegantly.

Thanks,
Pierre

> Hong
> From: petsc-users  on behalf of Hana 
> Honnerová 
> Sent: Thursday, February 8, 2024 4:45 AM
> To: petsc-users@mcs.anl.gov 
> Subject: [petsc-users] question on PCLSC with matrix blocks of type MATNEST
>  
> Hi all,
> I am trying to solve linear systems arising from isogeometric discretization 
> (similar to FEM) of the Navier-Stokes equations in parallel using PETSc. The 
> linear systems are of saddle-point type, so I would like to use the 
> PCFIELDSPLIT preconditioner with the -pc_fieldsplit_detect_saddle_point 
> option, Schur complement factorization and the LSC Schur complement 
> preconditioner. I do not provide any user-defined operators for PCLSC in my 
> codes (at least for now).
> I store the matrix as a MATNEST consisting of 4 blocks (F for 
> velocity-velocity part, Bt for velocity-pressure part, B for 
> pressure-velocity part and NULL for pressure-pressure part). It is also 
> convenient for me to store the blocks F, Bt and B as another MATNEST 
> consisting of blocks corresponding to individual velocity components. 
> 
> However, in this setting, I get the following error message:
> [0]PETSC ERROR: Petsc has generated inconsistent data
> [0]PETSC ERROR: Unspecified symbolic phase for product AB with A nest, B 
> nest. Call MatProductSetFromOptions() first
> [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.20.4, unknown 
> [0]PETSC ERROR: 
> /home/hhornik/gismo/build-petsc-mpi/RelWithDebInfo/bin/gsINSSolverPETScTest 
> on a arch-debug named ThinkPad-T14 by hhornik Thu Feb  8 11:04:17 2024
> [0]PETSC ERROR: Configure options PETSC_ARCH=arch-debug --with-debugging=1 
> --download-mumps --download-scalapack
> [0]PETSC ERROR: #1 MatProductSymbolic() at 
> /home/hhornik/Software/PETSc/src/mat/interface/matproduct.c:807
> [0]PETSC ERROR: #2 MatProduct_Private() at 
> /home/hhornik/Software/PETSc/src/mat/interface/matrix.c:10027
> [0]PETSC ERROR: #3 MatMatMult() at 
> /home/hhornik/Software/PETSc/src/mat/interface/matrix.c:10103
> [0]PETSC ERROR: #4 PCSetUp_LSC() at 
> /home/hhornik/Software/PETSc/src/ksp/pc/impls/lsc/lsc.c:79
> [0]PETSC ERROR: #5 PCSetUp() at 
> /home/hhornik/Software/PETSc/src/ksp/pc/interface/precon.c:1080
> [0]PETSC ERROR: #6 KSPSetUp() at 
> /home/hhornik/Software/PETSc/src/ksp/ksp/interface/itfunc.c:415
> [0]PETSC ERROR: #7 KSPSolve_Private() at 
> /home/hhornik/Software/PETSc/src/ksp/ksp/interface/itfunc.c:832
> [0]PETSC ERROR: #8 KSPSolve() at 
> /home/hhornik/Software/PETSc/src/ksp/ksp/interface/itfunc.c:1079
> [0]PETSC ERROR: #9 PCApply_FieldSplit_Schur() at 
> /home/hhornik/Software/PETSc/src/ksp/pc/impls/fieldsplit/fieldsplit.c:1165
> [0]PETSC ERROR: #10 PCApply() at 
> /home/hhornik/Software/PETSc/src/ksp/pc/interface/precon.c:498
> [0]PETSC ERROR: #11 KSP_PCApply() at 
> /home/hhornik/Software/PETSc/include/petsc/private/kspimpl.h:383
> [0]PETSC ERROR: #12 KSPFGMRESCycle() at 
> /home/hhornik/Software/PETSc/src/ksp/ksp/impls/gmres/fgmres/fgmres.c:123
> [0]PETSC ERROR: #13 KSPSolve_FGMRES() at 
> /home/hhornik/Software/PETSc/src/ksp/ksp/impls/gmres/fgmres/fgmres.c:235
> [0]PETSC ERROR: #14 KSPSolve_Private() at 
> /home/hhornik/Software/PETSc/src/ksp/ksp/interface/itfunc.c:906
> [0]PETSC ERROR: #15 KSPSolve() at 
> /home/hhornik/Software/PETSc/src/ksp/ksp/interface/itfunc.c:1079
> [0]PETSC ERROR: #16 applySolver() at 
> /home/hhornik/gismo/optional/gsIncompressibleFlow/src/gsINSSolver.hpp:531
> 
> I could not find any solution for this so far. My question is: Is it possible 
> to use the LSC preconditioner in such case, where the matrix blocks are of 
> type MATNEST? And if so, how?
> Thank you,
> Hana Honnerova



Re: [petsc-users] PETSc init question

2024-01-31 Thread Pierre Jolivet


> On 31 Jan 2024, at 11:31 AM, Alain O' Miniussi  wrote:
> 
> Hi,
> 
> It is indicated in:
> https://petsc.org/release/manualpages/Sys/PetscInitialize/
> that the init function will call MPI_Init.
> 
> What if MPI_Init was already called (as it is the case in my application) and 
> what about MPI_Init_thread.

Then, MPI_Init() is not called, see the call to MPI_Initialized() in 
https://petsc.org/release/src/sys/objects/pinit.c.html#PetscInitialize.

> Wouldn't it be more convenient to have a Petsc init function taking a already 
> initialized communicator as argument ?
> 
> Also, that initialization seems to imply that it is not possible to have 
> multiple instance of PETSc on different communicators. Is that the case ?

No, you can initialize MPI yourself and then set PETSC_COMM_WORLD to whatever 
you need before calling PetscInitialize().
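A minimal sketch of that pattern (error checking trimmed, and the communicator split is just an example):

#include <petscsys.h>

int main(int argc, char **argv)
{
  MPI_Comm subcomm;
  int      rank;

  MPI_Init(&argc, &argv);                     /* or MPI_Init_thread()         */
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_split(MPI_COMM_WORLD, rank % 2, rank, &subcomm);
  PETSC_COMM_WORLD = subcomm;                 /* set before PetscInitialize() */
  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  /* ... PETSc objects created on PETSC_COMM_WORLD now live on subcomm ...    */
  PetscCall(PetscFinalize());
  MPI_Comm_free(&subcomm);
  MPI_Finalize();
  return 0;
}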

Thanks,
Pierre

> Thanks
> 
> 
> Alain Miniussi
> DSI, Pôles Calcul et Genie Log.
> Observatoire de la Côte d'Azur
> Tél. : +33609650665



Re: [petsc-users] Bug in VecNorm, 3.20.3

2024-01-26 Thread Pierre Jolivet

> On 26 Jan 2024, at 3:11 PM, Pierre Jolivet  wrote:
> 
>> 
>> On 26 Jan 2024, at 3:03 PM, mich...@paraffinalia.co.uk wrote:
>> 
>> On 2024-01-23 18:09, Junchao Zhang wrote:
>>> Do you have an example to reproduce it?
>>> --Junchao Zhang
>> 
>> I have put a minimum example on github:
>> 
>> https://github.com/mjcarley/petsc-test
>> 
>> It does seem that the problem occurs if I do not use the PETSc interface to 
>> do a matrix multiplication.
>> 
>> In the original code, the PETSc matrix is a wrapper for a Fast Multipole 
>> Method evaluation; in the minimum example I have simulated this by using an 
>> array as a matrix. The sample code generates a randomised matrix A and 
>> reference solution vector ref, and generates a right hand side
>> 
>> b = A*ref
>> 
>> which is then supplied as the right hand side for the GMRES solver. If I use 
>> the PETSc matrix multiplication, the solver behaves as expected; if I 
>> generate b directly from the underlying array for the matrix, I get the 
>> result
> 
> You should not use VecGetArrayRead() if you change the Vec, but instead, 
> VecGetArrayWrite().
> Does that solve the issue?

Sorry, I sent the message too fast: you are also missing a couple of calls to VecRestoreArray[Read,Write]().
These are the calls that let PETSc know that the Vec has had its state increased, and that the cached norms are no longer valid, see
https://petsc.org/release/src/vec/vec/interface/rvector.c.html#line2177
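Concretely, the pattern for filling the right-hand side from your own array-based product is (a sketch, b being the Vec you overwrite):

PetscScalar *rhs;
PetscCall(VecGetArrayWrite(b, &rhs));     /* write-only access, previous content is discarded    */
/* ... fill rhs[] with the result of your own matrix-vector product ... */
PetscCall(VecRestoreArrayWrite(b, &rhs)); /* bumps the Vec state, so cached norms are recomputed */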

Thanks,
Pierre

> Thanks,
> Pierre
> 
>> 0 KSP Residual norm < 1.e-11
>> Linear solve converged due to CONVERGED_ATOL iterations 0



Re: [petsc-users] Bug in VecNorm, 3.20.3

2024-01-26 Thread Pierre Jolivet


> On 26 Jan 2024, at 3:03 PM, mich...@paraffinalia.co.uk wrote:
> 
> On 2024-01-23 18:09, Junchao Zhang wrote:
>> Do you have an example to reproduce it?
>> --Junchao Zhang
> 
> I have put a minimum example on github:
> 
> https://github.com/mjcarley/petsc-test
> 
> It does seem that the problem occurs if I do not use the PETSc interface to 
> do a matrix multiplication.
> 
> In the original code, the PETSc matrix is a wrapper for a Fast Multipole 
> Method evaluation; in the minimum example I have simulated this by using an 
> array as a matrix. The sample code generates a randomised matrix A and 
> reference solution vector ref, and generates a right hand side
> 
> b = A*ref
> 
> which is then supplied as the right hand side for the GMRES solver. If I use 
> the PETSc matrix multiplication, the solver behaves as expected; if I 
> generate b directly from the underlying array for the matrix, I get the result

You should not use VecGetArrayRead() if you change the Vec, but instead, 
VecGetArrayWrite().
Does that solve the issue?

Thanks,
Pierre

>  0 KSP Residual norm < 1.e-11
> Linear solve converged due to CONVERGED_ATOL iterations 0



Re: [petsc-users] Hypre BoomerAMG settings options database

2024-01-06 Thread Pierre Jolivet


> On 6 Jan 2024, at 3:15 PM, Mark Adams  wrote:
> 
> Does this work for you?
> -pc_hypre_boomeramg_grid_sweeps_all 2
> The comment in our code says SSOR is the default but it looks like it is 
> really "hSGS"
> I thought it was an L1 Jacobi, but you would want to ask Hypre about this.
HYPRE’s default settings are not the same as the ones we set in PETSc as 
default, so do not ask HYPRE people (about this particular issue).
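For the original question (two pre- and two post-smoothing sweeps of SSOR), a possible set of keys, to be double-checked against -help | grep boomeramg, is:

-pc_hypre_boomeramg_relax_type_down symmetric-sor/jacobi
-pc_hypre_boomeramg_relax_type_up symmetric-sor/jacobi
-pc_hypre_boomeramg_grid_sweeps_down 2
-pc_hypre_boomeramg_grid_sweeps_up 2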

Thanks,
Pierre
> Mark
> 
> 
> On Fri, Jan 5, 2024 at 10:21 AM Barry Smith  > wrote:
>> 
>>   Yes, the handling of BoomerAMG options starts at line 365. If we don't 
>> support what you want but hypre has a function call that allows one to set 
>> the values then the option could easily be added to the PETSc options 
>> database here either by you (with a merge request) or us. So I would say 
>> check the hypre docs.
>> 
>>   Just let us know what BoomerAMG function is missing from the code.
>> 
>>   Barry
>> 
>> 
>>> On Jan 5, 2024, at 7:52 AM, Khurana, Parv >> > wrote:
>>> 
>>> Hello PETSc users community,
>>>  
>>> Happy new year! Thank you for the community support as always.
>>>  
>>> I am using BoomerAMG for my research, and it is interfaced to my software 
>>> via PETSc. I can only use options database keys as of now to tweak the 
>>> settings I want for the AMG solve.
>>>  
>>> I want to control the number of smoothener iterations at pre/post step for 
>>> a given AMG cycle. I am looking for an options database key which helps me 
>>> control this. I am not sure whether this is possible directly via the keys 
>>> (Line 365: 
>>> https://www.mcs.anl.gov/petsc/petsc-3.5.4/src/ksp/pc/impls/hypre/hypre.c.html).
>>>  My comprehension of the current setup is that I have 1 smoothener 
>>> iteration at every coarsening step. My aim is to do two pre and 2 post 
>>> smoothening steps using the SSOR smoothener.
>>>  
>>> BoomerAMG SOLVER PARAMETERS:
>>>  
>>>   Maximum number of cycles: 1 
>>>   Stopping Tolerance:   0.00e+00 
>>>   Cycle type (1 = V, 2 = W, etc.):  1
>>>  
>>>   Relaxation Parameters:
>>>    Visiting Grid:                     down   up  coarse
>>>     Number of sweeps:                     1    1     1
>>>     Type 0=Jac, 3=hGS, 6=hSGS, 9=GE:      6    6     9
>>>Point types, partial sweeps (1=C, -1=F):
>>>   Pre-CG relaxation (down):   1  -1
>>>Post-CG relaxation (up):  -1   1
>>>  Coarsest grid:   0
>>>  
>>> PETSC settings I am using currently: 
>>>  
>>> -ksp_type preonly
>>> -pc_type hypre
>>> -pc_hypre_type boomeramg
>>> -pc_hypre_boomeramg_coarsen_type hmis
>>> -pc_hypre_boomeramg_relax_type_all symmetric-sor/jacobi
>>> -pc_hypre_boomeramg_strong_threshold 0.7
>>> -pc_hypre_boomeramg_interp_type ext+i
>>> -pc_hypre_boomeramg_P_max 2
>>> -pc_hypre_boomeramg_truncfactor 0.3
>>>  
>>> Thanks and Best
>>> Parv Khurana
>> 



Re: [petsc-users] Matvecs and KSPSolves with multiple vectors

2023-12-21 Thread Pierre Jolivet


> On 21 Dec 2023, at 7:38 PM, Junchao Zhang  wrote:
> 
> 
> 
> 
> On Thu, Dec 21, 2023 at 5:54 AM Matthew Knepley  <mailto:knep...@gmail.com>> wrote:
>> On Thu, Dec 21, 2023 at 6:46 AM Sreeram R Venkat > <mailto:srven...@utexas.edu>> wrote:
>>> Ok, I think the error I'm getting has something to do with how the multiple 
>>> solves are being done in succession. I'll try to see if there's anything 
>>> I'm doing wrong there. 
>>> 
>>> One question about the -pc_type lu -ksp_type preonly method: do you know 
>>> which parts of the solve (factorization/triangular solves) are done on host 
>>> and which are done on device?
>> 
>> For SEQDENSE, I believe both the factorization and solve is on device. It is 
>> hard to see, but I believe the dispatch code is here:
> Yes, it is correct.

But Sreeram's matrix is sparse, so this does not really matter.
Sreeram, I don't know enough about the internals of CHOLMOD (and its interface in 
PETSc) to know which part is done on host and which part is done on device.
By the way, you mentioned a very high number of right-hand sides (> 1E4) for a 
moderately-sized problem (~ 1E6).
Is there a particular reason why you need so many of them?
Have you considered doing some sort of deflation to reduce the number of solves?

Thanks,
Pierre

>> 
>>   
>> https://gitlab.com/petsc/petsc/-/blob/main/src/mat/impls/dense/seq/cupm/matseqdensecupm.hpp?ref_type=heads#L368
>> 
>>   Thanks,
>> 
>>  Matt
>>  
>>> Thanks,
>>> Sreeram
>>> 
>>> On Sat, Dec 16, 2023 at 10:56 PM Pierre Jolivet >> <mailto:pie...@joliv.et>> wrote:
>>>> Unfortunately, I am not able to reproduce such a failure with your input 
>>>> matrix.
>>>> I’ve used ex79 that I linked previously and the system is properly solved.
>>>> $ ./ex79 -pc_type hypre -ksp_type hpddm -ksp_hpddm_type cg 
>>>> -ksp_converged_reason -ksp_view_mat ascii::ascii_info -ksp_view_rhs 
>>>> ascii::ascii_info
>>>> Linear solve converged due to CONVERGED_RTOL iterations 6
>>>> Mat Object: 1 MPI process
>>>>   type: seqaijcusparse
>>>>   rows=289, cols=289
>>>>   total: nonzeros=2401, allocated nonzeros=2401
>>>>   total number of mallocs used during MatSetValues calls=0
>>>> not using I-node routines
>>>> Mat Object: 1 MPI process
>>>>   type: seqdensecuda
>>>>   rows=289, cols=10
>>>>   total: nonzeros=2890, allocated nonzeros=2890
>>>>   total number of mallocs used during MatSetValues calls=0
>>>> 
>>>> You mentioned in a subsequent email that you are interested in systems 
>>>> with at most 1E6 unknowns, and up to 1E4 right-hand sides.
>>>> I’m not sure you can expect significant gains from using GPU for such 
>>>> systems.
>>>> Probably, the fastest approach would indeed be -pc_type lu -ksp_type 
>>>> preonly -ksp_matsolve_batch_size 100 or something, depending on the memory 
>>>> available on your host.
>>>> 
>>>> Thanks,
>>>> Pierre
>>>> 
>>>>> On 15 Dec 2023, at 9:52 PM, Sreeram R Venkat >>>> <mailto:srven...@utexas.edu>> wrote:
>>>>> 
>>>>> Here are the ksp_view files.  I set the options 
>>>>> -ksp_error_if_not_converged to try to get the vectors that caused the 
>>>>> error. I noticed that some of the KSPMatSolves converge while others 
>>>>> don't. In the code, the solves are called as:
>>>>> 
>>>>> input vector v --> insert data of v into a dense mat --> KSPMatSolve() 
>>>>> --> MatMatMult() --> KSPMatSolve() --> insert data of dense mat into 
>>>>> output vector w -- output w
>>>>> 
>>>>> The operator used in the KSP is a Laplacian-like operator, and the 
>>>>> MatMatMult is with a Mass Matrix. The whole thing is supposed to be a 
>>>>> solve with a biharmonic-like operator. I can also run it with only the 
>>>>> first KSPMatSolve (i.e. just a Laplacian-like operator). In that case, 
>>>>> the KSP reportedly converges after 0 iterations (see the next line), but 
>>>>> this causes problems in other parts of the code later on. 
>>>>> 
>>>>> I saw that sometimes the first KSPMatSolve "converges" after 0 iterations 
>>>>> due to CONVERGED_RTOL. Then, the second KSPMatSolve produces a NaN/Inf. I 
>>>>> 

Re: [petsc-users] Matvecs and KSPSolves with multiple vectors

2023-12-19 Thread Pierre Jolivet


> On 20 Dec 2023, at 8:42 AM, Sreeram R Venkat  wrote:
> 
> Ok, I think the error I'm getting has something to do with how the multiple 
> solves are being done in succession. I'll try to see if there's anything I'm 
> doing wrong there. 
> 
> One question about the -pc_type lu -ksp_type preonly method: do you know 
> which parts of the solve (factorization/triangular solves) are done on host 
> and which are done on device?

I think only the triangular solves can be done on device.
Since you have many right-hand sides, it may not be that bad.
GPU people will hopefully give you a more insightful answer.

Thanks,
Pierre

> Thanks,
> Sreeram
> 
> On Sat, Dec 16, 2023 at 10:56 PM Pierre Jolivet  <mailto:pie...@joliv.et>> wrote:
>> Unfortunately, I am not able to reproduce such a failure with your input 
>> matrix.
>> I’ve used ex79 that I linked previously and the system is properly solved.
>> $ ./ex79 -pc_type hypre -ksp_type hpddm -ksp_hpddm_type cg 
>> -ksp_converged_reason -ksp_view_mat ascii::ascii_info -ksp_view_rhs 
>> ascii::ascii_info
>> Linear solve converged due to CONVERGED_RTOL iterations 6
>> Mat Object: 1 MPI process
>>   type: seqaijcusparse
>>   rows=289, cols=289
>>   total: nonzeros=2401, allocated nonzeros=2401
>>   total number of mallocs used during MatSetValues calls=0
>> not using I-node routines
>> Mat Object: 1 MPI process
>>   type: seqdensecuda
>>   rows=289, cols=10
>>   total: nonzeros=2890, allocated nonzeros=2890
>>   total number of mallocs used during MatSetValues calls=0
>> 
>> You mentioned in a subsequent email that you are interested in systems with 
>> at most 1E6 unknowns, and up to 1E4 right-hand sides.
>> I’m not sure you can expect significant gains from using GPU for such 
>> systems.
>> Probably, the fastest approach would indeed be -pc_type lu -ksp_type preonly 
>> -ksp_matsolve_batch_size 100 or something, depending on the memory available 
>> on your host.
>> 
>> Thanks,
>> Pierre
>> 
>>> On 15 Dec 2023, at 9:52 PM, Sreeram R Venkat >> <mailto:srven...@utexas.edu>> wrote:
>>> 
>>> Here are the ksp_view files.  I set the options -ksp_error_if_not_converged 
>>> to try to get the vectors that caused the error. I noticed that some of the 
>>> KSPMatSolves converge while others don't. In the code, the solves are 
>>> called as:
>>> 
>>> input vector v --> insert data of v into a dense mat --> KSPMatSolve() --> 
>>> MatMatMult() --> KSPMatSolve() --> insert data of dense mat into output 
>>> vector w --> output w
>>> 
>>> The operator used in the KSP is a Laplacian-like operator, and the 
>>> MatMatMult is with a Mass Matrix. The whole thing is supposed to be a solve 
>>> with a biharmonic-like operator. I can also run it with only the first 
>>> KSPMatSolve (i.e. just a Laplacian-like operator). In that case, the KSP 
>>> reportedly converges after 0 iterations (see the next line), but this 
>>> causes problems in other parts of the code later on. 
>>> 
>>> I saw that sometimes the first KSPMatSolve "converges" after 0 iterations 
>>> due to CONVERGED_RTOL. Then, the second KSPMatSolve produces a NaN/Inf. I 
>>> tried setting ksp_min_it, but that didn't seem to do anything. 
>>> 
>>> I'll keep trying different options and also try to get the MWE made (this 
>>> KSPMatSolve is pretty performance critical for us). 
>>> 
>>> Thanks for all your help,
>>> Sreeram
>>> 
>>> On Fri, Dec 15, 2023 at 1:01 AM Pierre Jolivet >> <mailto:pie...@joliv.et>> wrote:
>>>> 
>>>>> On 14 Dec 2023, at 11:45 PM, Sreeram R Venkat >>>> <mailto:srven...@utexas.edu>> wrote:
>>>>> 
>>>>> Thanks, I will try to create a minimal reproducible example. This may 
>>>>> take me some time though, as I need to figure out how to extract only the 
>>>>> relevant parts (the full program this solve is used in is getting quite 
>>>>> complex).
>>>> 
>>>> You could just do -ksp_view_mat binary:Amat.bin -ksp_view_pmat 
>>>> binary:Pmat.bin -ksp_view_rhs binary:rhs.bin and send me those three files 
>>>> (I’m guessing your are using double-precision scalars with 32-bit 
>>>> PetscInt).
>>>> 
>>>>> I'll also try out some of the BoomerAMG options to see if that helps.
>>>> 
>>>> These should work (this is where all “PCMatApply()-ready” PC a

Re: [petsc-users] Matvecs and KSPSolves with multiple vectors

2023-12-16 Thread Pierre Jolivet
Unfortunately, I am not able to reproduce such a failure with your input matrix.
I’ve used ex79 that I linked previously and the system is properly solved.
$ ./ex79 -pc_type hypre -ksp_type hpddm -ksp_hpddm_type cg 
-ksp_converged_reason -ksp_view_mat ascii::ascii_info -ksp_view_rhs 
ascii::ascii_info
Linear solve converged due to CONVERGED_RTOL iterations 6
Mat Object: 1 MPI process
  type: seqaijcusparse
  rows=289, cols=289
  total: nonzeros=2401, allocated nonzeros=2401
  total number of mallocs used during MatSetValues calls=0
not using I-node routines
Mat Object: 1 MPI process
  type: seqdensecuda
  rows=289, cols=10
  total: nonzeros=2890, allocated nonzeros=2890
  total number of mallocs used during MatSetValues calls=0

You mentioned in a subsequent email that you are interested in systems with at 
most 1E6 unknowns, and up to 1E4 right-hand sides.
I’m not sure you can expect significant gains from using GPU for such systems.
Probably, the fastest approach would indeed be -pc_type lu -ksp_type preonly 
-ksp_matsolve_batch_size 100 or something, depending on the memory available on 
your host.
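
From petsc4py, a rough sketch of that suggestion (only a sketch: ksp is assumed 
to already exist, and B, X are dense Mats holding the right-hand sides and 
solutions; the option names are exactly the ones above):

from petsc4py import PETSc

opts = PETSc.Options()
opts['pc_type'] = 'lu'
opts['ksp_type'] = 'preonly'
opts['ksp_matsolve_batch_size'] = 100   # solve the columns in chunks of 100
ksp.setFromOptions()
ksp.matSolve(B, X)                      # maps to KSPMatSolve()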

Thanks,
Pierre

> On 15 Dec 2023, at 9:52 PM, Sreeram R Venkat  wrote:
> 
> Here are the ksp_view files.  I set the options -ksp_error_if_not_converged 
> to try to get the vectors that caused the error. I noticed that some of the 
> KSPMatSolves converge while others don't. In the code, the solves are called 
> as:
> 
> input vector v --> insert data of v into a dense mat --> KSPMatSolve() --> 
> MatMatMult() --> KSPMatSolve() --> insert data of dense mat into output 
> vector w --> output w
> 
> The operator used in the KSP is a Laplacian-like operator, and the MatMatMult 
> is with a Mass Matrix. The whole thing is supposed to be a solve with a 
> biharmonic-like operator. I can also run it with only the first KSPMatSolve 
> (i.e. just a Laplacian-like operator). In that case, the KSP reportedly 
> converges after 0 iterations (see the next line), but this causes problems in 
> other parts of the code later on. 
> 
> I saw that sometimes the first KSPMatSolve "converges" after 0 iterations due 
> to CONVERGED_RTOL. Then, the second KSPMatSolve produces a NaN/Inf. I tried 
> setting ksp_min_it, but that didn't seem to do anything. 
> 
> I'll keep trying different options and also try to get the MWE made (this 
> KSPMatSolve is pretty performance critical for us). 
> 
> Thanks for all your help,
> Sreeram
> 
> On Fri, Dec 15, 2023 at 1:01 AM Pierre Jolivet  <mailto:pie...@joliv.et>> wrote:
>> 
>>> On 14 Dec 2023, at 11:45 PM, Sreeram R Venkat >> <mailto:srven...@utexas.edu>> wrote:
>>> 
>>> Thanks, I will try to create a minimal reproducible example. This may take 
>>> me some time though, as I need to figure out how to extract only the 
>>> relevant parts (the full program this solve is used in is getting quite 
>>> complex).
>> 
>> You could just do -ksp_view_mat binary:Amat.bin -ksp_view_pmat 
>> binary:Pmat.bin -ksp_view_rhs binary:rhs.bin and send me those three files 
>> (I’m guessing your are using double-precision scalars with 32-bit PetscInt).
>> 
>>> I'll also try out some of the BoomerAMG options to see if that helps.
>> 
>> These should work (this is where all “PCMatApply()-ready” PC are being 
>> tested): https://petsc.org/release/src/ksp/ksp/tutorials/ex79.c.html#line215
>> You can see it’s also testing PCHYPRE + KSPHPDDM on device (but not with 
>> HIP).
>> I’m aware the performance should not be optimal (see your comment about 
>> host/device copies), I’ve money to hire someone to work on this but: a) I 
>> need to find the correct engineer/post-doc, b) I currently don’t have good 
>> use cases (of course, I could generate a synthetic benchmark, for science).
>> So even if you send me the three Mat, a MWE would be appreciated if the 
>> KSPMatSolve() is performance-critical for you (see point b) from above).
>> 
>> Thanks,
>> Pierre
>> 
>>> Thanks,
>>> Sreeram
>>> 
>>> On Thu, Dec 14, 2023, 1:12 PM Pierre Jolivet >> <mailto:pie...@joliv.et>> wrote:
>>>> 
>>>> 
>>>>> On 14 Dec 2023, at 8:02 PM, Sreeram R Venkat >>>> <mailto:srven...@utexas.edu>> wrote:
>>>>> 
>>>>> Hello Pierre,
>>>>> 
>>>>> Thank you for your reply. I tried out the HPDDM CG as you said, and it 
>>>>> seems to be doing the batched solves, but the KSP is not converging due 
>>>>> to a NaN or Inf being generated. I also noticed there are a lot of 
>>>>> host-to-device and device-

Re: [petsc-users] Matvecs and KSPSolves with multiple vectors

2023-12-14 Thread Pierre Jolivet

> On 14 Dec 2023, at 11:45 PM, Sreeram R Venkat  wrote:
> 
> Thanks, I will try to create a minimal reproducible example. This may take me 
> some time though, as I need to figure out how to extract only the relevant 
> parts (the full program this solve is used in is getting quite complex).

You could just do -ksp_view_mat binary:Amat.bin -ksp_view_pmat binary:Pmat.bin 
-ksp_view_rhs binary:rhs.bin and send me those three files (I'm guessing you 
are using double-precision scalars with 32-bit PetscInt).
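
For reference, a rough petsc4py sketch (not part of the original exchange) of 
how such binary dumps can be read back; the helper name is made up, the file 
names are the ones from the options above:

from petsc4py import PETSc

def load_mat(filename, mtype=None):
    # read a matrix written with -ksp_view_mat/-ksp_view_pmat/-ksp_view_rhs binary:...
    viewer = PETSc.Viewer().createBinary(filename, 'r')
    A = PETSc.Mat().create()
    if mtype is not None:
        A.setType(mtype)
    A.load(viewer)
    return A

Amat = load_mat('Amat.bin')
Pmat = load_mat('Pmat.bin')
rhs  = load_mat('rhs.bin', 'dense')  # the block of right-hand sides is dense here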

> I'll also try out some of the BoomerAMG options to see if that helps.

These should work (this is where all “PCMatApply()-ready” PC are being tested): 
https://petsc.org/release/src/ksp/ksp/tutorials/ex79.c.html#line215
You can see it’s also testing PCHYPRE + KSPHPDDM on device (but not with HIP).
I’m aware the performance should not be optimal (see your comment about 
host/device copies), I’ve money to hire someone to work on this but: a) I need 
to find the correct engineer/post-doc, b) I currently don’t have good use cases 
(of course, I could generate a synthetic benchmark, for science).
So even if you send me the three Mat, a MWE would be appreciated if the 
KSPMatSolve() is performance-critical for you (see point b) from above).

Thanks,
Pierre

> Thanks,
> Sreeram
> 
> On Thu, Dec 14, 2023, 1:12 PM Pierre Jolivet  <mailto:pie...@joliv.et>> wrote:
>> 
>> 
>>> On 14 Dec 2023, at 8:02 PM, Sreeram R Venkat >> <mailto:srven...@utexas.edu>> wrote:
>>> 
>>> Hello Pierre,
>>> 
>>> Thank you for your reply. I tried out the HPDDM CG as you said, and it 
>>> seems to be doing the batched solves, but the KSP is not converging due to 
>>> a NaN or Inf being generated. I also noticed there are a lot of 
>>> host-to-device and device-to-host copies of the matrices (the non-batched 
>>> KSP solve did not have any memcopies). I have attached dump.0 again. Could 
>>> you please take a look?
>> 
>> Yes, but you’d need to send me something I can run with your set of options 
>> (if you are more confident doing this in private, you can remove the list 
>> from c/c).
>> Not all BoomerAMG smoothers handle blocks of right-hand sides, and there is 
>> not much error checking, so instead of erroring out, this may be the reason 
>> why you are getting garbage.
>> 
>> Thanks,
>> Pierre
>> 
>>> Thanks,
>>> Sreeram
>>> 
>>> On Thu, Dec 14, 2023 at 12:42 AM Pierre Jolivet >> <mailto:pie...@joliv.et>> wrote:
>>>> Hello Sreeram,
>>>> KSPCG (PETSc implementation of CG) does not handle solves with multiple 
>>>> columns at once.
>>>> There is only a single native PETSc KSP implementation which handles 
>>>> solves with multiple columns at once: KSPPREONLY.
>>>> If you use --download-hpddm, you can use a CG (or GMRES, or more advanced 
>>>> methods) implementation which handles solves with multiple columns at once 
>>>> (via -ksp_type hpddm -ksp_hpddm_type cg or KSPSetType(ksp, KSPHPDDM); 
>>>> KSPHPDDMSetType(ksp, KSP_HPDDM_TYPE_CG);).
>>>> I’m the main author of HPDDM, there is preliminary support for device 
>>>> matrices, but if it’s not working as intended/not faster than column by 
>>>> column, I’d be happy to have a deeper look (maybe in private), because 
>>>> most (if not all) of my users interested in (pseudo-)block Krylov solvers 
>>>> (i.e., solvers that treat right-hand sides in a single go) are using plain 
>>>> host matrices.
>>>> 
>>>> Thanks,
>>>> Pierre
>>>> 
>>>> PS: you could have a look at 
>>>> https://www.sciencedirect.com/science/article/abs/pii/S089812212155 to 
>>>> understand the philosophy behind block iterative methods in PETSc (and in 
>>>> HPDDM), src/mat/tests/ex237.c, the benchmark I mentioned earlier, was 
>>>> developed in the context of this paper to produce Figures 2-3. Note that 
>>>> this paper is now slightly outdated, since then, PCHYPRE and PCMG (among 
>>>> others) have been made “PCMatApply()-ready”.
>>>> 
>>>>> On 13 Dec 2023, at 11:05 PM, Sreeram R Venkat >>>> <mailto:srven...@utexas.edu>> wrote:
>>>>> 
>>>>> Hello Pierre,
>>>>> 
>>>>> I am trying out the KSPMatSolve with the BoomerAMG preconditioner. 
>>>>> However, I am noticing that it is still solving column by column (this is 
>>>>> stated explicitly in the info dump attached). I looked at the code for 

Re: [petsc-users] Matvecs and KSPSolves with multiple vectors

2023-12-14 Thread Pierre Jolivet


> On 14 Dec 2023, at 8:02 PM, Sreeram R Venkat  wrote:
> 
> Hello Pierre,
> 
> Thank you for your reply. I tried out the HPDDM CG as you said, and it seems 
> to be doing the batched solves, but the KSP is not converging due to a NaN or 
> Inf being generated. I also noticed there are a lot of host-to-device and 
> device-to-host copies of the matrices (the non-batched KSP solve did not have 
> any memcopies). I have attached dump.0 again. Could you please take a look?

Yes, but you’d need to send me something I can run with your set of options (if 
you are more confident doing this in private, you can remove the list from c/c).
Not all BoomerAMG smoothers handle blocks of right-hand sides, and there is not 
much error checking, so instead of erroring out, this may be the reason why you 
are getting garbage.
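
As a purely illustrative sketch (from petsc4py, though the same options work on 
the command line), changing the BoomerAMG smoother is one knob to experiment 
with; which smoothers actually handle blocked right-hand sides is precisely 
what needs to be checked, so the value below is an assumption, not a 
recommendation:

from petsc4py import PETSc

opts = PETSc.Options()
opts['pc_type'] = 'hypre'
opts['pc_hypre_type'] = 'boomeramg'
# hypothetical smoother choice; verify with -info and timings
opts['pc_hypre_boomeramg_relax_type_all'] = 'Jacobi'
ksp.setFromOptions()   # ksp is the already-created KSP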

Thanks,
Pierre

> Thanks,
> Sreeram
> 
> On Thu, Dec 14, 2023 at 12:42 AM Pierre Jolivet  <mailto:pie...@joliv.et>> wrote:
>> Hello Sreeram,
>> KSPCG (PETSc implementation of CG) does not handle solves with multiple 
>> columns at once.
>> There is only a single native PETSc KSP implementation which handles solves 
>> with multiple columns at once: KSPPREONLY.
>> If you use --download-hpddm, you can use a CG (or GMRES, or more advanced 
>> methods) implementation which handles solves with multiple columns at once 
>> (via -ksp_type hpddm -ksp_hpddm_type cg or KSPSetType(ksp, KSPHPDDM); 
>> KSPHPDDMSetType(ksp, KSP_HPDDM_TYPE_CG);).
>> I’m the main author of HPDDM, there is preliminary support for device 
>> matrices, but if it’s not working as intended/not faster than column by 
>> column, I’d be happy to have a deeper look (maybe in private), because most 
>> (if not all) of my users interested in (pseudo-)block Krylov solvers (i.e., 
>> solvers that treat right-hand sides in a single go) are using plain host 
>> matrices.
>> 
>> Thanks,
>> Pierre
>> 
>> PS: you could have a look at 
>> https://www.sciencedirect.com/science/article/abs/pii/S089812212155 to 
>> understand the philosophy behind block iterative methods in PETSc (and in 
>> HPDDM), src/mat/tests/ex237.c, the benchmark I mentioned earlier, was 
>> developed in the context of this paper to produce Figures 2-3. Note that 
>> this paper is now slightly outdated, since then, PCHYPRE and PCMG (among 
>> others) have been made “PCMatApply()-ready”.
>> 
>>> On 13 Dec 2023, at 11:05 PM, Sreeram R Venkat >> <mailto:srven...@utexas.edu>> wrote:
>>> 
>>> Hello Pierre,
>>> 
>>> I am trying out the KSPMatSolve with the BoomerAMG preconditioner. However, 
>>> I am noticing that it is still solving column by column (this is stated 
>>> explicitly in the info dump attached). I looked at the code for 
>>> KSPMatSolve_Private() and saw that as long as ksp->ops->matsolve is true, 
>>> it should do the batched solve, though I'm not sure where that gets set. 
>>> 
>>> I am using the options -pc_type hypre -pc_hypre_type boomeramg when running 
>>> the code.
>>> 
>>> Can you please help me with this?
>>> 
>>> Thanks,
>>> Sreeram
>>> 
>>> 
>>> On Thu, Dec 7, 2023 at 4:04 PM Mark Adams >> <mailto:mfad...@lbl.gov>> wrote:
>>>> N.B., AMGX interface is a bit experimental.
>>>> Mark
>>>> 
>>>> On Thu, Dec 7, 2023 at 4:11 PM Sreeram R Venkat >>> <mailto:srven...@utexas.edu>> wrote:
>>>>> Oh, in that case I will try out BoomerAMG. Getting AMGX to build 
>>>>> correctly was also tricky so hopefully the HYPRE build will be easier.
>>>>> 
>>>>> Thanks,
>>>>> Sreeram
>>>>> 
>>>>> On Thu, Dec 7, 2023, 3:03 PM Pierre Jolivet >>>> <mailto:pie...@joliv.et>> wrote:
>>>>>> 
>>>>>> 
>>>>>>> On 7 Dec 2023, at 9:37 PM, Sreeram R Venkat >>>>>> <mailto:srven...@utexas.edu>> wrote:
>>>>>>> 
>>>>>>> Thank you Barry and Pierre; I will proceed with the first option. 
>>>>>>> 
>>>>>>> I want to use the AMGX preconditioner for the KSP. I will try it out 
>>>>>>> and see how it performs.
>>>>>> 
>>>>>> Just FYI, AMGX does not handle systems with multiple RHS, and thus has 
>>>>>> no PCMatApply() implementation.
>>>>>> BoomerAMG does, and there is a PCMatApply_HYPRE_BoomerAMG() 
>>>>>> implementation.

Re: [petsc-users] Some question about compiling c++ program including PETSc using cmake

2023-12-13 Thread Pierre Jolivet


> On 14 Dec 2023, at 4:13 AM, 291--- via petsc-users 
>  wrote:
> 
> Dear SLEPc Developers,
> 
> I am a student from Tongji University. Recently I have been trying to write a C++ 
> program for matrix solving, which requires importing the PETSc library that 
> you have developed. However, a lot of errors occur in the cpp file when I use 
> #include  directly. I also tried to use extern "C" but it gives me 
> the error in the picture below. Is there a good way to use the PETSc library 
> in a C++ program? (I compiled using cmake and my compiler is g++ (GCC) 4.8.5 
> 20150623 (Red Hat 4.8.5-44)).

This compiler (gcc 4.8.5) is known not to be C++11 compliant, yet you are using 
the -std=c++11 flag.
Furthermore, since version 3.18 (or maybe slightly later), PETSc requires a 
C++11-compliant compiler if using C++.
Could you switch to a newer compiler, or try to reconfigure?
Also, you should not put all the includes inside an extern "C" { } block.
In any case, you’ll need to send the compilation error log and configure.log to 
petsc-ma...@mcs.anl.gov  if you want further 
help, as we won’t be able to give a better diagnosis with just the currently 
provided information.

Thanks,
Pierre

> My cmakelists.txt is:
> 
> cmake_minimum_required(VERSION 3.1.0)
> 
> set(CMAKE_INSTALL_RPATH_USE_LINK_PATH TRUE)
> 
> set(PETSC $ENV{PETSC_DIR}/$ENV{PETSC_ARCH})
> set(SLEPC $ENV{SLEPC_DIR}/$ENV{PETSC_ARCH})
> set(ENV{PKG_CONFIG_PATH} ${PETSC}/lib/pkgconfig:${SLEPC}/lib/pkgconfig)
> 
> set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")  
> set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -std=c99")  
> 
> project(test)
> 
> add_executable(${PROJECT_NAME} eigen_test2.cpp)
> find_package(PkgConfig REQUIRED)
> 
> pkg_search_module(PETSc REQUIRED IMPORTED_TARGET PETSc)
> target_link_libraries(${PROJECT_NAME} PkgConfig::PETSc)
> 
> The testing code is:eigen_test2.cpp
> extern "C"{
>   //#include 
>   #include 
>   #include 
>   #include 
>   #include 
> }
> 
> int main(int argc,char **argv)
> { 
>   return 0;
> }
> 
> 
> 
> Best regards
> 
> Weijie Xu



Re: [petsc-users] Matvecs and KSPSolves with multiple vectors

2023-12-13 Thread Pierre Jolivet
Hello Sreeram,
KSPCG (PETSc implementation of CG) does not handle solves with multiple columns 
at once.
There is only a single native PETSc KSP implementation which handles solves 
with multiple columns at once: KSPPREONLY.
If you use --download-hpddm, you can use a CG (or GMRES, or more advanced 
methods) implementation which handles solves with multiple columns at once (via 
-ksp_type hpddm -ksp_hpddm_type cg or KSPSetType(ksp, KSPHPDDM); 
KSPHPDDMSetType(ksp, KSP_HPDDM_TYPE_CG);).
I’m the main author of HPDDM, there is preliminary support for device matrices, 
but if it’s not working as intended/not faster than column by column, I’d be 
happy to have a deeper look (maybe in private), because most (if not all) of my 
users interested in (pseudo-)block Krylov solvers (i.e., solvers that treat 
right-hand sides in a single go) are using plain host matrices.
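
A minimal petsc4py sketch of that setup (only a sketch: A is the operator, and 
B, X are dense Mats holding the right-hand sides and solutions column-wise; 
matSolve() maps to KSPMatSolve(), and --download-hpddm is assumed):

from petsc4py import PETSc

opts = PETSc.Options()
opts['ksp_type'] = 'hpddm'      # -ksp_type hpddm
opts['ksp_hpddm_type'] = 'cg'   # -ksp_hpddm_type cg
ksp = PETSc.KSP().create()
ksp.setOperators(A)
ksp.setFromOptions()
ksp.matSolve(B, X)              # all right-hand sides treated in a single block solve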

Thanks,
Pierre

PS: you could have a look at 
https://www.sciencedirect.com/science/article/abs/pii/S089812212155 to 
understand the philosophy behind block iterative methods in PETSc (and in 
HPDDM), src/mat/tests/ex237.c, the benchmark I mentioned earlier, was developed 
in the context of this paper to produce Figures 2-3. Note that this paper is 
now slightly outdated, since then, PCHYPRE and PCMG (among others) have been 
made “PCMatApply()-ready”.

> On 13 Dec 2023, at 11:05 PM, Sreeram R Venkat  wrote:
> 
> Hello Pierre,
> 
> I am trying out the KSPMatSolve with the BoomerAMG preconditioner. However, I 
> am noticing that it is still solving column by column (this is stated 
> explicitly in the info dump attached). I looked at the code for 
> KSPMatSolve_Private() and saw that as long as ksp->ops->matsolve is true, it 
> should do the batched solve, though I'm not sure where that gets set. 
> 
> I am using the options -pc_type hypre -pc_hypre_type boomeramg when running 
> the code.
> 
> Can you please help me with this?
> 
> Thanks,
> Sreeram
> 
> 
> On Thu, Dec 7, 2023 at 4:04 PM Mark Adams  <mailto:mfad...@lbl.gov>> wrote:
>> N.B., AMGX interface is a bit experimental.
>> Mark
>> 
>> On Thu, Dec 7, 2023 at 4:11 PM Sreeram R Venkat > <mailto:srven...@utexas.edu>> wrote:
>>> Oh, in that case I will try out BoomerAMG. Getting AMGX to build correctly 
>>> was also tricky so hopefully the HYPRE build will be easier.
>>> 
>>> Thanks,
>>> Sreeram
>>> 
>>> On Thu, Dec 7, 2023, 3:03 PM Pierre Jolivet >> <mailto:pie...@joliv.et>> wrote:
>>>> 
>>>> 
>>>>> On 7 Dec 2023, at 9:37 PM, Sreeram R Venkat >>>> <mailto:srven...@utexas.edu>> wrote:
>>>>> 
>>>>> Thank you Barry and Pierre; I will proceed with the first option. 
>>>>> 
>>>>> I want to use the AMGX preconditioner for the KSP. I will try it out and 
>>>>> see how it performs.
>>>> 
>>>> Just FYI, AMGX does not handle systems with multiple RHS, and thus has no 
>>>> PCMatApply() implementation.
>>>> BoomerAMG does, and there is a PCMatApply_HYPRE_BoomerAMG() implementation.
>>>> But let us know if you need assistance figuring things out.
>>>> 
>>>> Thanks,
>>>> Pierre
>>>> 
>>>>> Thanks,
>>>>> Sreeram
>>>>> 
>>>>> On Thu, Dec 7, 2023 at 2:02 PM Pierre Jolivet >>>> <mailto:pie...@joliv.et>> wrote:
>>>>>> To expand on Barry’s answer, we have observed repeatedly that MatMatMult 
>>>>>> with MatAIJ performs better than MatMult with MatMAIJ, you can reproduce 
>>>>>> this on your own with 
>>>>>> https://petsc.org/release/src/mat/tests/ex237.c.html.
>>>>>> Also, I’m guessing you are using some sort of preconditioner within your 
>>>>>> KSP.
>>>>>> Not all are “KSPMatSolve-ready”, i.e., they may treat blocks of 
>>>>>> right-hand sides column by column, which is very inefficient.
>>>>>> You could run your code with -info dump and send us dump.0 to see what 
>>>>>> needs to be done on our end to make things more efficient, should you 
>>>>>> not be satisfied with the current performance of the code.
>>>>>> 
>>>>>> Thanks,
>>>>>> Pierre
>>>>>> 
>>>>>>> On 7 Dec 2023, at 8:34 PM, Barry Smith >>>>>> <mailto:bsm...@petsc.dev>> wrote:
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>>> On Dec 7, 2023, at 1:17 PM, Sreera

Re: [petsc-users] Bug report VecNorm

2023-12-10 Thread Pierre Jolivet


> On 10 Dec 2023, at 8:40 AM, Stephan Köhler 
>  wrote:
> 
> Dear PETSc/Tao team, 
> 
> there is a bug in the vector interface: in the function VecNorm (see, e.g., 
> https://petsc.org/release/src/vec/vec/interface/rvector.c.html#VecNorm, line 
> 197), the check for consistency in line 214 is done on the wrong communicator. 
> The communicator should be PETSC_COMM_SELF.
> Otherwise the program may hang when PetscCheck is executed.

I think the communicator should not be changed, but instead, the 
check/conditional should be changed, à la PetscValidLogicalCollectiveBool().
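
Roughly, the pattern of such a collective check, sketched here with mpi4py 
rather than the actual PETSc macro (an illustration of the idea only, not 
PETSc code; the helper name is made up):

from mpi4py import MPI

def collective_check(comm, ok, message):
    # reduce the local flag first, so that every rank sees the same result and
    # takes the same branch; a purely rank-local check lets some ranks error
    # out while others keep going, which is what makes the program hang
    n_bad = comm.allreduce(0 if ok else 1, op=MPI.SUM)
    if n_bad > 0:
        raise RuntimeError(f"{message} (check failed on {n_bad} rank(s))")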

Thanks,
Pierre

> Please find a minimal example attached.
> 
> 
> Kind regards, 
> Stephan Köhler
> -- 
> Stephan Köhler
> TU Bergakademie Freiberg
> Institut für numerische Mathematik und Optimierung
> 
> Akademiestraße 6
> 09599 Freiberg
> Gebäudeteil Mittelbau, Zimmer 2.07
> 
> Telefon: +49 (0)3731 39-3188 (Büro)
> 



Re: [petsc-users] Matvecs and KSPSolves with multiple vectors

2023-12-07 Thread Pierre Jolivet


> On 7 Dec 2023, at 9:37 PM, Sreeram R Venkat  wrote:
> 
> Thank you Barry and Pierre; I will proceed with the first option. 
> 
> I want to use the AMGX preconditioner for the KSP. I will try it out and see 
> how it performs.

Just FYI, AMGX does not handle systems with multiple RHS, and thus has no 
PCMatApply() implementation.
BoomerAMG does, and there is a PCMatApply_HYPRE_BoomerAMG() implementation.
But let us know if you need assistance figuring things out.

Thanks,
Pierre

> Thanks,
> Sreeram
> 
> On Thu, Dec 7, 2023 at 2:02 PM Pierre Jolivet  <mailto:pie...@joliv.et>> wrote:
>> To expand on Barry’s answer, we have observed repeatedly that MatMatMult 
>> with MatAIJ performs better than MatMult with MatMAIJ, you can reproduce 
>> this on your own with https://petsc.org/release/src/mat/tests/ex237.c.html.
>> Also, I’m guessing you are using some sort of preconditioner within your KSP.
>> Not all are “KSPMatSolve-ready”, i.e., they may treat blocks of right-hand 
>> sides column by column, which is very inefficient.
>> You could run your code with -info dump and send us dump.0 to see what needs 
>> to be done on our end to make things more efficient, should you not be 
>> satisfied with the current performance of the code.
>> 
>> Thanks,
>> Pierre
>> 
>>> On 7 Dec 2023, at 8:34 PM, Barry Smith >> <mailto:bsm...@petsc.dev>> wrote:
>>> 
>>> 
>>> 
>>>> On Dec 7, 2023, at 1:17 PM, Sreeram R Venkat >>> <mailto:srven...@utexas.edu>> wrote:
>>>> 
>>>> I have 2 sequential matrices M and R (both MATSEQAIJCUSPARSE of size n x 
>>>> n) and a vector v of size n*m. v = [v_1 , v_2 ,... , v_m] where v_i has 
>>>> size n. The data for v can be stored either in column-major or row-major 
>>>> order.  Now, I want to do 2 types of operations:
>>>> 
>>>> 1. Matvecs of the form M*v_i = w_i, for i = 1..m. 
>>>> 2. KSPSolves of the form R*x_i = v_i, for i = 1..m.
>>>> 
>>>> From what I have read on the documentation, I can think of 2 approaches. 
>>>> 
>>>> 1. Get the pointer to the data in v (column-major) and use it to create a 
>>>> dense matrix V. Then do a MatMatMult with M*V = W, and take the data 
>>>> pointer of W to create the vector w. For KSPSolves, use KSPMatSolve with R 
>>>> and V.
>>>> 
>>>> 2. Create a MATMAIJ using M/R and use that for matvecs directly with the 
>>>> vector v. I don't know if KSPSolve with the MATMAIJ will know that it is a 
>>>> multiple RHS system and act accordingly.
>>>> 
>>>> Which would be the more efficient option?
>>> 
>>> Use 1. 
>>>> 
>>>> As a side-note, I am also wondering if there is a way to use row-major 
>>>> storage of the vector v.
>>> 
>>> No
>>> 
>>>> The reason is that this could allow for more coalesced memory access when 
>>>> doing matvecs.
>>> 
>>>   PETSc matrix-vector products use BLAS GEMV matrix-vector products for the 
>>> computation, so in theory they should already be well-optimized
>>> 
>>>> 
>>>> Thanks,
>>>> Sreeram
>> 



Re: [petsc-users] Matvecs and KSPSolves with multiple vectors

2023-12-07 Thread Pierre Jolivet
To expand on Barry’s answer, we have observed repeatedly that MatMatMult with 
MatAIJ performs better than MatMult with MatMAIJ, you can reproduce this on 
your own with https://petsc.org/release/src/mat/tests/ex237.c.html.
Also, I’m guessing you are using some sort of preconditioner within your KSP.
Not all are “KSPMatSolve-ready”, i.e., they may treat blocks of right-hand 
sides column by column, which is very inefficient.
You could run your code with -info dump and send us dump.0 to see what needs to 
be done on our end to make things more efficient, should you not be satisfied 
with the current performance of the code.
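
As a rough petsc4py sketch of option 1 from the quoted message below (names are 
illustrative and assumed to exist: M and R are the AIJ matrices, ksp is an 
already-configured KSP, and rhs is a local NumPy array of shape (n_local, m) 
holding the right-hand sides in column-major/Fortran order):

import numpy as np
from petsc4py import PETSc

V = PETSc.Mat().createDense(size=((n_local, n), (None, m)),
                            array=np.asfortranarray(rhs))
W = M.matMult(V)      # W = M * V, i.e. all matvecs M*v_i at once
X = V.duplicate()     # same layout as V, will hold the solutions
ksp.setOperators(R)
ksp.matSolve(V, X)    # KSPMatSolve: R * x_i = v_i for every column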

Thanks,
Pierre

> On 7 Dec 2023, at 8:34 PM, Barry Smith  wrote:
> 
> 
> 
>> On Dec 7, 2023, at 1:17 PM, Sreeram R Venkat  wrote:
>> 
>> I have 2 sequential matrices M and R (both MATSEQAIJCUSPARSE of size n x n) 
>> and a vector v of size n*m. v = [v_1 , v_2 ,... , v_m] where v_i has size n. 
>> The data for v can be stored either in column-major or row-major order.  
>> Now, I want to do 2 types of operations:
>> 
>> 1. Matvecs of the form M*v_i = w_i, for i = 1..m. 
>> 2. KSPSolves of the form R*x_i = v_i, for i = 1..m.
>> 
>> From what I have read on the documentation, I can think of 2 approaches. 
>> 
>> 1. Get the pointer to the data in v (column-major) and use it to create a 
>> dense matrix V. Then do a MatMatMult with M*V = W, and take the data pointer 
>> of W to create the vector w. For KSPSolves, use KSPMatSolve with R and V.
>> 
>> 2. Create a MATMAIJ using M/R and use that for matvecs directly with the 
>> vector v. I don't know if KSPSolve with the MATMAIJ will know that it is a 
>> multiple RHS system and act accordingly.
>> 
>> Which would be the more efficient option?
> 
> Use 1. 
>> 
>> As a side-note, I am also wondering if there is a way to use row-major 
>> storage of the vector v.
> 
> No
> 
>> The reason is that this could allow for more coalesced memory access when 
>> doing matvecs.
> 
>   PETSc matrix-vector products use BLAS GEMV matrix-vector products for the 
> computation, so in theory they should already be well-optimized
> 
>> 
>> Thanks,
>> Sreeram



Re: [petsc-users] Error using Metis with PETSc installed with MUMPS

2023-11-07 Thread Pierre Jolivet


> On 7 Nov 2023, at 8:47 PM, Victoria Rolandi  
> wrote:
> 
> Hi Pierre,
> 
> Thanks for your reply. I am now trying to configure PETSc with the same 
> METIS/ParMETIS of my main code. 
> 
> I get the following error, and I still get it even if I change the option 
> --with-precision=double/--with-precision=single
> 
> Metis specified is incompatible!
> IDXTYPEWIDTH=64 metis build appears to be specified for a default 
> 32-bit-indices build of PETSc.
> Suggest using --download-metis for a compatible metis
> ***
> 
> In the cofigure.log I have: 
> 
> compilation aborted for /tmp/petsc-yxtl_gwd/config.packages.metis/conftest.c 
> (code 2)
> Source:
> #include "confdefs.h"
> #include "conffix.h"
> #include "metis.h"
> 
> int main() {
> #if (IDXTYPEWIDTH != 32)
> #error incompatible IDXTYPEWIDTH
> #endif;
>   return 0;
> }
> 
> 
> How could I proceed?

I would use --download-metis and then have your code use METIS from PETSc, not 
the other way around.

Thanks,
Pierre

> Thanks,
> Victoria 
> 
> 
> 
> Il giorno ven 3 nov 2023 alle ore 11:34 Pierre Jolivet  <mailto:pie...@joliv.et>> ha scritto:
>> 
>> 
>>> On 3 Nov 2023, at 7:28 PM, Victoria Rolandi >> <mailto:victoria.roland...@gmail.com>> wrote:
>>> 
>>> Pierre, 
>>> 
>>> Sure, I have now installed PETSc with MUMPS and PT-SCOTCH, I got some 
>>> errors at the beginning but then it worked after adding 
>>> --COPTFLAGS="-D_POSIX_C_SOURCE=199309L" to the configuration. 
>>> Also, I have compilation errors when I try to use newer versions, so I kept 
>>> the 3.17.0 for the moment.
>> 
>> You should ask for assistance to get the latest version.
>> (Par)METIS snapshots may have not changed, but the MUMPS one did, with 
>> performance improvements.
>> 
>>> Now the parallel ordering works with PT-SCOTCH; however, is it normal that 
>>> I do not see any difference in the performance compared to sequential 
>>> ordering?
>> 
>> Impossible to tell without you providing actual figures (number of nnz, 
>> number of processes, timings with sequential ordering, etc.), but 699k is 
>> not that big of a problem, so that is not extremely surprising.
>> 
>>> Also, could the error using Metis/Parmetis be due to the fact that my main 
>>> code (to which I linked PETSc) uses a different ParMetis than the one 
>>> separately installed by PETSC during the configuration?
>> 
>> Yes.
>> 
>>> Hence should I configure PETSc linking ParMetis to the same library used by 
>>> my main code?
>> 
>> Yes.
>> 
>> Thanks,
>> Pierre
>> 
>>> Thanks,
>>> Victoria 
>>> 
>>> Il giorno gio 2 nov 2023 alle ore 09:35 Pierre Jolivet >> <mailto:pie...@joliv.et>> ha scritto:
>>>> 
>>>>> On 2 Nov 2023, at 5:29 PM, Victoria Rolandi >>>> <mailto:victoria.roland...@gmail.com>> wrote:
>>>>> 
>>>>> Pierre, 
>>>>> Yes, sorry, I'll keep the list in copy.
>>>>> Launching with those options (-mat_mumps_icntl_28 2 -mat_mumps_icntl_29 
>>>>> 2) I get an error during the analysis step. I also launched increasing 
>>>>> the memory and I still have the error.
>>>> 
>>>> Oh, OK, that’s bad.
>>>> Would you be willing to give SCOTCH and/or PT-SCOTCH a try?
>>>> You’d need to reconfigure/recompile with --download-ptscotch (and maybe 
>>>> --download-bison depending on your system).
>>>> Then, the option would become either -mat_mumps_icntl_28 2 
>>>> -mat_mumps_icntl_29 2 (PT-SCOTCH) or -mat_mumps_icntl_7 3 (SCOTCH).
>>>> It may be worth updating PETSc as well (you are using 3.17.0, we are at 
>>>> 3.20.1), though I’m not sure we updated the METIS/ParMETIS snapshots since 
>>>> then, so it may not fix the present issue.
>>>> 
>>>> Thanks,
>>>> Pierre
>>>> 
>>>>> The calculations stops at :
>>>>> 
>>>>> Entering CMUMPS 5.4.1 from C interface with JOB, N =   1  699150
>>>>>   executing #MPI =  2, without OMP
>>>>> 
>>>>>  =
>>>>>  MUMPS compiled with option -Dmetis
>>>>>  MUMPS compiled with option -Dparmetis
>>>>>  ==

Re: [petsc-users] Error using Metis with PETSc installed with MUMPS

2023-11-03 Thread Pierre Jolivet


> On 3 Nov 2023, at 7:28 PM, Victoria Rolandi  
> wrote:
> 
> Pierre, 
> 
> Sure, I have now installed PETSc with MUMPS and PT-SCOTCH, I got some 
> errors at the beginning but then it worked after adding 
> --COPTFLAGS="-D_POSIX_C_SOURCE=199309L" to the configuration. 
> Also, I have compilation errors when I try to use newer versions, so I kept 
> the 3.17.0 for the moment.

You should ask for assistance to get the latest version.
(Par)METIS snapshots may have not changed, but the MUMPS one did, with 
performance improvements.

> Now the parallel ordering works with PT-SCOTCH; however, is it normal that I 
> do not see any difference in the performance compared to sequential ordering?

Impossible to tell without you providing actual figures (number of nnz, number 
of processes, timings with sequential ordering, etc.), but 699k is not that big 
of a problem, so that is not extremely surprising.

> Also, could the error using Metis/Parmetis be due to the fact that my main 
> code (to which I linked PETSc) uses a different ParMetis than the one 
> separately installed by PETSC during the configuration?

Yes.

> Hence should I configure PETSc linking ParMetis to the same library used by 
> my main code?

Yes.

Thanks,
Pierre

> Thanks,
> Victoria 
> 
> Il giorno gio 2 nov 2023 alle ore 09:35 Pierre Jolivet  <mailto:pie...@joliv.et>> ha scritto:
>> 
>>> On 2 Nov 2023, at 5:29 PM, Victoria Rolandi >> <mailto:victoria.roland...@gmail.com>> wrote:
>>> 
>>> Pierre, 
>>> Yes, sorry, I'll keep the list in copy.
>>> Launching with those options (-mat_mumps_icntl_28 2 -mat_mumps_icntl_29 2) 
>>> I get an error during the analysis step. I also launched increasing the 
>>> memory and I still have the error.
>> 
>> Oh, OK, that’s bad.
>> Would you be willing to give SCOTCH and/or PT-SCOTCH a try?
>> You’d need to reconfigure/recompile with --download-ptscotch (and maybe 
>> --download-bison depending on your system).
>> Then, the option would become either -mat_mumps_icntl_28 2 
>> -mat_mumps_icntl_29 2 (PT-SCOTCH) or -mat_mumps_icntl_7 3 (SCOTCH).
>> It may be worth updating PETSc as well (you are using 3.17.0, we are at 
>> 3.20.1), though I’m not sure we updated the METIS/ParMETIS snapshots since 
>> then, so it may not fix the present issue.
>> 
>> Thanks,
>> Pierre
>> 
>>> The calculations stops at :
>>> 
>>> Entering CMUMPS 5.4.1 from C interface with JOB, N =   1  699150
>>>   executing #MPI =  2, without OMP
>>> 
>>>  =
>>>  MUMPS compiled with option -Dmetis
>>>  MUMPS compiled with option -Dparmetis
>>>  =
>>> L U Solver for unsymmetric matrices
>>> Type of parallelism: Working host
>>> 
>>>  ** ANALYSIS STEP 
>>> 
>>>  ** Maximum transversal (ICNTL(6)) not allowed because matrix is distributed
>>>  Using ParMETIS for parallel ordering
>>>  Structural symmetry is: 90%
>>> 
>>> 
>>> The error:
>>> 
>>> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, 
>>> probably memory access out of range
>>> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
>>> [0]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind
>>> [0]PETSC ERROR: or try http://valgrind.org <http://valgrind.org/> on 
>>> GNU/linux and Apple MacOS to find memory corruption errors
>>> [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and 
>>> run
>>> [0]PETSC ERROR: to get more information on the crash.
>>> [0]PETSC ERROR: - Error Message 
>>> --
>>> [0]PETSC ERROR: Signal received
>>> [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
>>> [0]PETSC ERROR: Petsc Release Version 3.17.0, unknown
>>> [0]PETSC ERROR: ./charlin.exe on a  named n1056 by vrolandi Wed Nov  1 
>>> 11:38:28 2023
>>> [0]PETSC ERROR: Configure options 
>>> --prefix=/u/home/v/vrolandi/CODES/LIBRARY/packages/petsc/installationDir 
>>> --with-cc=mpiicc --with-cxx=mpiicpc --with-fc=mpiifort CXXOPTFLAGS=-O3 
>>> --with-scalar-type=complex --with-debugging=0 --with-precision=single 
>>> --download-mumps --download-scalapack --download-parmetis --download-metis
>>> 
>>> [0]PETSC ERROR: #1 User provided function() at unknown file:0

Re: [petsc-users] Error using Metis with PETSc installed with MUMPS

2023-11-02 Thread Pierre Jolivet

> On 2 Nov 2023, at 5:29 PM, Victoria Rolandi  
> wrote:
> 
> Pierre, 
> Yes, sorry, I'll keep the list in copy.
> Launching with those options (-mat_mumps_icntl_28 2 -mat_mumps_icntl_29 2) I 
> get an error during the analysis step. I also launched increasing the memory 
> and I still have the error.

Oh, OK, that’s bad.
Would you be willing to give SCOTCH and/or PT-SCOTCH a try?
You’d need to reconfigure/recompile with --download-ptscotch (and maybe 
--download-bison depending on your system).
Then, the option would become either -mat_mumps_icntl_28 2 -mat_mumps_icntl_29 
2 (PT-SCOTCH) or -mat_mumps_icntl_7 3 (SCOTCH).
It may be worth updating PETSc as well (you are using 3.17.0, we are at 
3.20.1), though I’m not sure we updated the METIS/ParMETIS snapshots since 
then, so it may not fix the present issue.

Thanks,
Pierre

> The calculations stops at :
> 
> Entering CMUMPS 5.4.1 from C interface with JOB, N =   1  699150
>   executing #MPI =  2, without OMP
> 
>  =
>  MUMPS compiled with option -Dmetis
>  MUMPS compiled with option -Dparmetis
>  =
> L U Solver for unsymmetric matrices
> Type of parallelism: Working host
> 
>  ** ANALYSIS STEP 
> 
>  ** Maximum transversal (ICNTL(6)) not allowed because matrix is distributed
>  Using ParMETIS for parallel ordering
>  Structural symmetry is: 90%
> 
> 
> The error:
> 
> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, 
> probably memory access out of range
> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
> [0]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind
> [0]PETSC ERROR: or try http://valgrind.org <http://valgrind.org/> on 
> GNU/linux and Apple MacOS to find memory corruption errors
> [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
> [0]PETSC ERROR: to get more information on the crash.
> [0]PETSC ERROR: - Error Message 
> --
> [0]PETSC ERROR: Signal received
> [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.17.0, unknown
> [0]PETSC ERROR: ./charlin.exe on a  named n1056 by vrolandi Wed Nov  1 
> 11:38:28 2023
> [0]PETSC ERROR: Configure options 
> --prefix=/u/home/v/vrolandi/CODES/LIBRARY/packages/petsc/installationDir 
> --with-cc=mpiicc --with-cxx=mpiicpc --with-fc=mpiifort CXXOPTFLAGS=-O3 
> --with-scalar-type=complex --with-debugging=0 --with-precision=single 
> --download-mumps --download-scalapack --download-parmetis --download-metis
> 
> [0]PETSC ERROR: #1 User provided function() at unknown file:0
> [0]PETSC ERROR: Run with -malloc_debug to check if memory corruption is 
> causing the crash.
> Abort(59) on node 0 (rank 0 in comm 0): application called 
> MPI_Abort(MPI_COMM_WORLD, 59) - process 0
> 
> 
> Thanks, 
> Victoria 
> 
> Il giorno mer 1 nov 2023 alle ore 10:33 Pierre Jolivet  <mailto:pie...@joliv.et>> ha scritto:
>> Victoria, please keep the list in copy.
>> 
>>> I am not understanding how I can switch to ParMetis if it does not appear 
>>> in the options of -mat_mumps_icntl_7. In the options I only have Metis and 
>>> not ParMetis.
>> 
>> 
>> You need to use -mat_mumps_icntl_28 2 -mat_mumps_icntl_29 2
>> 
>> Barry, I don’t think we can programmatically shut off this warning, it’s 
>> guarded by a bunch of KEEP() values, see src/dana_driver.F:4707, which are 
>> only settable/gettable by people with access to consortium releases.
>> I’ll ask the MUMPS people for confirmation.
>> Note that this warning is only printed to screen with the option 
>> -mat_mumps_icntl_4 2 (or higher), so this won’t show up for standard runs.
>> 
>> Thanks,
>> Pierre
>> 
>>> On 1 Nov 2023, at 5:52 PM, Barry Smith >> <mailto:bsm...@petsc.dev>> wrote:
>>> 
>>> 
>>>   Pierre,
>>> 
>>>Could the PETSc MUMPS interface "turn-off" ICNTL(6) in this situation so 
>>> as to not trigger the confusing warning message from MUMPS?
>>> 
>>>   Barry
>>> 
>>>> On Nov 1, 2023, at 12:17 PM, Pierre Jolivet >>> <mailto:pie...@joliv.et>> wrote:
>>>> 
>>>> 
>>>> 
>>>>> On 1 Nov 2023, at 3:33 PM, Zhang, Hong via petsc-users 
>>>>> mailto:petsc-users@mcs.anl.gov>> wrote:
>>>>> 
>>>>> Victoria,
>>>>

Re: [petsc-users] Error using Metis with PETSc installed with MUMPS

2023-11-02 Thread Pierre Jolivet

> On 1 Nov 2023, at 8:02 PM, Barry Smith  wrote:
> 
> 
>   Pierre,
> 
>Sorry, I was not clear. What I meant was that the PETSc code that calls 
> MUMPS could change the value of ICNTL(6) under certain conditions before 
> calling MUMPS, thus the MUMPS warning might not be triggered.

Again, I’m not sure it is possible, as the message is not guarded by the value 
of ICNTL(6), but by some other internal parameters.

Thanks,
Pierre

$ for i in {1..7}; do echo "ICNTL(6) = ${i}"; \
../../../../arch-darwin-c-debug-real/bin/mpirun -n 2 ./ex2 -pc_type lu \
-mat_mumps_icntl_4 2 -mat_mumps_icntl_28 2 -mat_mumps_icntl_29 2 \
-mat_mumps_icntl_6 ${i} | grep -i "not allowed"; done
ICNTL(6) = 1
 ** Maximum transversal (ICNTL(6)) not allowed because matrix is distributed
ICNTL(6) = 2
 ** Maximum transversal (ICNTL(6)) not allowed because matrix is distributed
ICNTL(6) = 3
 ** Maximum transversal (ICNTL(6)) not allowed because matrix is distributed
ICNTL(6) = 4
 ** Maximum transversal (ICNTL(6)) not allowed because matrix is distributed
ICNTL(6) = 5
 ** Maximum transversal (ICNTL(6)) not allowed because matrix is distributed
ICNTL(6) = 6
 ** Maximum transversal (ICNTL(6)) not allowed because matrix is distributed
ICNTL(6) = 7
 ** Maximum transversal (ICNTL(6)) not allowed because matrix is distributed

> I am basing this on a guess from looking at the MUMPS manual and the warning 
> message that the particular value of ICNTL(6) is incompatible with the given 
> matrix state. But I could easily be wrong.
> 
>   Barry
> 
> 
>> On Nov 1, 2023, at 1:33 PM, Pierre Jolivet  wrote:
>> 
>> Victoria, please keep the list in copy.
>> 
>>> I am not understanding how I can switch to ParMetis if it does not appear 
>>> in the options of -mat_mumps_icntl_7. In the options I only have Metis and 
>>> not ParMetis.
>> 
>> 
>> You need to use -mat_mumps_icntl_28 2 -mat_mumps_icntl_29 2
>> 
>> Barry, I don’t think we can programmatically shut off this warning, it’s 
>> guarded by a bunch of KEEP() values, see src/dana_driver.F:4707, which are 
>> only settable/gettable by people with access to consortium releases.
>> I’ll ask the MUMPS people for confirmation.
>> Note that this warning is only printed to screen with the option 
>> -mat_mumps_icntl_4 2 (or higher), so this won’t show up for standard runs.
>> 
>> Thanks,
>> Pierre
>> 
>>> On 1 Nov 2023, at 5:52 PM, Barry Smith  wrote:
>>> 
>>> 
>>>   Pierre,
>>> 
>>>Could the PETSc MUMPS interface "turn-off" ICNTL(6) in this situation so 
>>> as to not trigger the confusing warning message from MUMPS?
>>> 
>>>   Barry
>>> 
>>>> On Nov 1, 2023, at 12:17 PM, Pierre Jolivet  wrote:
>>>> 
>>>> 
>>>> 
>>>>> On 1 Nov 2023, at 3:33 PM, Zhang, Hong via petsc-users 
>>>>>  wrote:
>>>>> 
>>>>> Victoria,
>>>>> "** Maximum transversal (ICNTL(6)) not allowed because matrix is 
>>>>> distributed
>>>>> Ordering based on METIS"
>>>> 
>>>> This warning is benign and appears for every run using a sequential 
>>>> partitioner in MUMPS with a MATMPIAIJ.
>>>> (I’m not saying switching to ParMETIS will not make the issue go away)
>>>> 
>>>> Thanks,
>>>> Pierre
>>>> 
>>>> $ ../../../../arch-darwin-c-debug-real/bin/mpirun -n 2 ./ex2 -pc_type lu 
>>>> -mat_mumps_icntl_4 2
>>>> Entering DMUMPS 5.6.2 from C interface with JOB, N =   1  56
>>>>   executing #MPI =  2, without OMP
>>>> 
>>>>  =
>>>>  MUMPS compiled with option -Dmetis
>>>>  MUMPS compiled with option -Dparmetis
>>>>  MUMPS compiled with option -Dpord
>>>>  MUMPS compiled with option -Dptscotch
>>>>  MUMPS compiled with option -Dscotch
>>>>  =
>>>> L U Solver for unsymmetric matrices
>>>> Type of parallelism: Working host
>>>> 
>>>>  ** ANALYSIS STEP 
>>>> 
>>>>  ** Maximum transversal (ICNTL(6)) not allowed because matrix is 
>>>> distributed
>>>>  Processing a graph of size:56 with   194 edges
>>>>  Ordering based on AMF 
>>>>  WARNING: Largest root node of size26 not selected for parallel 
>>>> execution
>>>> 
>>>> Leaving analysis phase with  ...
>>>

Re: [petsc-users] Error using Metis with PETSc installed with MUMPS

2023-11-01 Thread Pierre Jolivet
Victoria, please keep the list in copy.

> I am not understanding how I can switch to ParMetis if it does not appear in 
> the options of -mat_mumps_icntl_7. In the options I only have Metis and not 
> ParMetis.


You need to use -mat_mumps_icntl_28 2 -mat_mumps_icntl_29 2
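
For completeness, the same options set from petsc4py (a rough sketch only; ksp 
is the already-created KSP and PETSc is assumed to be configured with MUMPS):

from petsc4py import PETSc

opts = PETSc.Options()
opts['pc_type'] = 'lu'
opts['pc_factor_mat_solver_type'] = 'mumps'
opts['mat_mumps_icntl_28'] = 2   # parallel analysis
opts['mat_mumps_icntl_29'] = 2   # parallel ordering tool, as above
ksp.setFromOptions()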

Barry, I don’t think we can programmatically shut off this warning, it’s 
guarded by a bunch of KEEP() values, see src/dana_driver.F:4707, which are only 
settable/gettable by people with access to consortium releases.
I’ll ask the MUMPS people for confirmation.
Note that this warning is only printed to screen with the option 
-mat_mumps_icntl_4 2 (or higher), so this won’t show up for standard runs.

Thanks,
Pierre

> On 1 Nov 2023, at 5:52 PM, Barry Smith  wrote:
> 
> 
>   Pierre,
> 
>Could the PETSc MUMPS interface "turn-off" ICNTL(6) in this situation so 
> as to not trigger the confusing warning message from MUMPS?
> 
>   Barry
> 
>> On Nov 1, 2023, at 12:17 PM, Pierre Jolivet  wrote:
>> 
>> 
>> 
>>> On 1 Nov 2023, at 3:33 PM, Zhang, Hong via petsc-users 
>>>  wrote:
>>> 
>>> Victoria,
>>> "** Maximum transversal (ICNTL(6)) not allowed because matrix is distributed
>>> Ordering based on METIS"
>> 
>> This warning is benign and appears for every run using a sequential 
>> partitioner in MUMPS with a MATMPIAIJ.
>> (I’m not saying switching to ParMETIS will not make the issue go away)
>> 
>> Thanks,
>> Pierre
>> 
>> $ ../../../../arch-darwin-c-debug-real/bin/mpirun -n 2 ./ex2 -pc_type lu 
>> -mat_mumps_icntl_4 2
>> Entering DMUMPS 5.6.2 from C interface with JOB, N =   1  56
>>   executing #MPI =  2, without OMP
>> 
>>  =
>>  MUMPS compiled with option -Dmetis
>>  MUMPS compiled with option -Dparmetis
>>  MUMPS compiled with option -Dpord
>>  MUMPS compiled with option -Dptscotch
>>  MUMPS compiled with option -Dscotch
>>  =
>> L U Solver for unsymmetric matrices
>> Type of parallelism: Working host
>> 
>>  ** ANALYSIS STEP 
>> 
>>  ** Maximum transversal (ICNTL(6)) not allowed because matrix is distributed
>>  Processing a graph of size:56 with   194 edges
>>  Ordering based on AMF 
>>  WARNING: Largest root node of size26 not selected for parallel 
>> execution
>> 
>> Leaving analysis phase with  ...
>>  INFOG(1)   =   0
>>  INFOG(2)   =   0
>> […]
>> 
>>> Try parmetis.
>>> Hong
>>> From: petsc-users  on behalf of Victoria 
>>> Rolandi 
>>> Sent: Tuesday, October 31, 2023 10:30 PM
>>> To: petsc-users@mcs.anl.gov 
>>> Subject: [petsc-users] Error using Metis with PETSc installed with MUMPS
>>>  
>>> Hi, 
>>> 
>>> I'm solving a large sparse linear system in parallel and I am using PETSc 
>>> with MUMPS. I am trying to test different options, like the ordering of the 
>>> matrix. Everything works if I use the -mat_mumps_icntl_7 2  or 
>>> -mat_mumps_icntl_7 0 options (with the first one, AMF, performing better 
>>> than AMD), however when I test METIS -mat_mumps_icntl_7 5 I get an error 
>>> (reported at the end of the email).
>>> 
>>> I have configured PETSc with the following options: 
>>> 
>>> --with-cc=mpiicc --with-cxx=mpiicpc --with-fc=mpiifort  
>>> --with-scalar-type=complex --with-debugging=0 --with-precision=single 
>>> --download-mumps --download-scalapack --download-parmetis --download-metis
>>> 
>>> and the installation didn't give any problems.
>>> 
>>> Could you help me understand why metis is not working? 
>>> 
>>> Thank you in advance,
>>> Victoria 
>>> 
>>> Error:
>>> 
>>>  ** ANALYSIS STEP 
>>>  ** Maximum transversal (ICNTL(6)) not allowed because matrix is distributed
>>>  Processing a graph of size:699150 with  69238690 edges
>>>  Ordering based on METIS
>>> 510522 37081376 [100] [10486 699150]
>>> Error! Unknown CType: -1
>> 
> 



Re: [petsc-users] Error using Metis with PETSc installed with MUMPS

2023-11-01 Thread Pierre Jolivet


> On 1 Nov 2023, at 3:33 PM, Zhang, Hong via petsc-users 
>  wrote:
> 
> Victoria,
> "** Maximum transversal (ICNTL(6)) not allowed because matrix is distributed
> Ordering based on METIS"

This warning is benign and appears for every run using a sequential partitioner 
in MUMPS with a MATMPIAIJ.
(I’m not saying switching to ParMETIS will not make the issue go away)

Thanks,
Pierre

$ ../../../../arch-darwin-c-debug-real/bin/mpirun -n 2 ./ex2 -pc_type lu 
-mat_mumps_icntl_4 2
Entering DMUMPS 5.6.2 from C interface with JOB, N =   1  56
  executing #MPI =  2, without OMP

 =
 MUMPS compiled with option -Dmetis
 MUMPS compiled with option -Dparmetis
 MUMPS compiled with option -Dpord
 MUMPS compiled with option -Dptscotch
 MUMPS compiled with option -Dscotch
 =
L U Solver for unsymmetric matrices
Type of parallelism: Working host

 ** ANALYSIS STEP 

 ** Maximum transversal (ICNTL(6)) not allowed because matrix is distributed
 Processing a graph of size:56 with   194 edges
 Ordering based on AMF 
 WARNING: Largest root node of size26 not selected for parallel 
execution

Leaving analysis phase with  ...
 INFOG(1)   =   0
 INFOG(2)   =   0
[…]

> Try parmetis.
> Hong
> From: petsc-users  on behalf of Victoria 
> Rolandi 
> Sent: Tuesday, October 31, 2023 10:30 PM
> To: petsc-users@mcs.anl.gov 
> Subject: [petsc-users] Error using Metis with PETSc installed with MUMPS
>  
> Hi, 
> 
> I'm solving a large sparse linear system in parallel and I am using PETSc 
> with MUMPS. I am trying to test different options, like the ordering of the 
> matrix. Everything works if I use the -mat_mumps_icntl_7 2  or 
> -mat_mumps_icntl_7 0 options (with the first one, AMF, performing better than 
> AMD), however when I test METIS -mat_mumps_icntl_7 5 I get an error (reported 
> at the end of the email).
> 
> I have configured PETSc with the following options: 
> 
> --with-cc=mpiicc --with-cxx=mpiicpc --with-fc=mpiifort  
> --with-scalar-type=complex --with-debugging=0 --with-precision=single 
> --download-mumps --download-scalapack --download-parmetis --download-metis
> 
> and the installation didn't give any problems.
> 
> Could you help me understand why metis is not working? 
> 
> Thank you in advance,
> Victoria 
> 
> Error:
> 
>  ** ANALYSIS STEP 
>  ** Maximum transversal (ICNTL(6)) not allowed because matrix is distributed
>  Processing a graph of size:699150 with  69238690 edges
>  Ordering based on METIS
> 510522 37081376 [100] [10486 699150]
> Error! Unknown CType: -1



Re: [petsc-users] Galerkin projection using petsc4py

2023-10-11 Thread Pierre Jolivet

> On 11 Oct 2023, at 9:13 AM, Thanasis Boutsikakis 
>  wrote:
> 
> Very good catch Pierre, thanks a lot!
> 
> This made everything work: the two-step process and the ptap(). I mistakenly 
> thought that I should not let the local number of columns be None, since 
> the matrix is only partitioned row-wise. Could you please explain what 
> went wrong when I set the local column number explicitly, so that I get the 
> philosophy behind this partitioning?

Hopefully this should make things clearer to you: 
https://petsc.org/release/manual/mat/#sec-matlayout
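
As a small petsc4py illustration of that section of the manual (sizes as in 
your script; only a sketch):

from petsc4py import PETSc
from petsc4py.PETSc import COMM_WORLD

m, k = 100, 7
# size = ((local_rows, global_rows), (local_cols, global_cols)); None = PETSC_DECIDE
A   = PETSc.Mat().createAIJ(size=((None, m), (None, m)), comm=COMM_WORLD)
Phi = PETSc.Mat().createAIJ(size=((None, m), (None, k)), comm=COMM_WORLD)
A.setUp()
Phi.setUp()
# with None in both places, PETSc splits the "m" dimension the same way for the
# columns of A and the rows of Phi, which is what products such as ptap() require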

Thanks,
Pierre

> Thanks again,
> Thanos
> 
>> On 11 Oct 2023, at 09:04, Pierre Jolivet  wrote:
>> 
>> That’s because:
>> size = ((None, global_rows), (global_cols, global_cols)) 
>> should be:
>> size = ((None, global_rows), (None, global_cols)) 
>> Then, it will work.
>> $ ~/repo/petsc/arch-darwin-c-debug-real/bin/mpirun -n 4 python3.12 test.py 
>> && echo $?
>> 0
>> 
>> Thanks,
>> Pierre
>> 
>>> On 11 Oct 2023, at 8:58 AM, Thanasis Boutsikakis 
>>>  wrote:
>>> 
>>> Pierre, I see your point, but my experiment shows that it does not even run 
>>> due to size mismatch, so I don’t see how being sparse would change things 
>>> here. There must be some kind of problem with the parallel ptap(), because 
>>> it does run sequentially. In order to test that, I changed the flags of the 
>>> matrix creation to sparse=True and ran it again. Here is the code
>>> 
>>> """Experimenting with PETSc mat-mat multiplication"""
>>> 
>>> import numpy as np
>>> from firedrake import COMM_WORLD
>>> from firedrake.petsc import PETSc
>>> 
>>> from utilities import Print
>>> 
>>> nproc = COMM_WORLD.size
>>> rank = COMM_WORLD.rank
>>> 
>>> 
>>> def create_petsc_matrix(input_array, sparse=True):
>>> """Create a PETSc matrix from an input_array
>>> 
>>> Args:
>>> input_array (np array): Input array
>>> partition_like (PETSc mat, optional): Petsc matrix. Defaults to 
>>> None.
>>> sparse (bool, optional): Toggle for sparse or dense. Defaults to 
>>> True.
>>> 
>>> Returns:
>>> PETSc mat: PETSc mpi matrix
>>> """
>>> # Check if input_array is 1D and reshape if necessary
>>> assert len(input_array.shape) == 2, "Input array should be 
>>> 2-dimensional"
>>> global_rows, global_cols = input_array.shape
>>> size = ((None, global_rows), (global_cols, global_cols))
>>> 
>>> # Create a sparse or dense matrix based on the 'sparse' argument
>>> if sparse:
>>> matrix = PETSc.Mat().createAIJ(size=size, comm=COMM_WORLD)
>>> else:
>>> matrix = PETSc.Mat().createDense(size=size, comm=COMM_WORLD)
>>> matrix.setUp()
>>> 
>>> local_rows_start, local_rows_end = matrix.getOwnershipRange()
>>> 
>>> for counter, i in enumerate(range(local_rows_start, local_rows_end)):
>>> # Calculate the correct row in the array for the current process
>>> row_in_array = counter + local_rows_start
>>> matrix.setValues(
>>> i, range(global_cols), input_array[row_in_array, :], addv=False
>>> )
>>> 
>>> # Assembly the matrix to compute the final structure
>>> matrix.assemblyBegin()
>>> matrix.assemblyEnd()
>>> 
>>> return matrix
>>> 
>>> 
>>> # 
>>> # EXP: Galerkin projection of an mpi PETSc matrix A with an mpi PETSc 
>>> matrix Phi
>>> #  A' = Phi.T * A * Phi
>>> # [k x k] <- [k x m] x [m x m] x [m x k]
>>> # 
>>> 
>>> m, k = 100, 7
>>> # Generate the random numpy matrices
>>> np.random.seed(0)  # sets the seed to 0
>>> A_np = np.random.randint(low=0, high=6, size=(m, m))
>>> Phi_np = np.random.randint(low=0, high=6, size=(m, k))
>>> 
>>> # 
>>> # TEST: Galerking projection of numpy matrices A_np and Phi_np
>>> # 
>>> Aprime_np = Phi_np.T @ A_np @ Phi_np
>>> 
>>> # Create A as an mpi matrix distributed on each process
>>> A = create_petsc_matrix(

Re: [petsc-users] Galerkin projection using petsc4py

2023-10-11 Thread Pierre Jolivet
petsc4py.PETSc.Mat.ptap
> petsc4py.PETSc.Error: error code 60
> [0] MatPtAP() at 
> /Users/boutsitron/firedrake/src/petsc/src/mat/interface/matrix.c:9896
> [0] MatProductSetFromOptions() at 
> /Users/boutsitron/firedrake/src/petsc/src/mat/interface/matproduct.c:541
> [0] MatProductSetFromOptions_Private() at 
> /Users/boutsitron/firedrake/src/petsc/src/mat/interface/matproduct.c:435
> [0] MatProductSetFromOptions_MPIAIJ() at 
> /Users/boutsitron/firedrake/src/petsc/src/mat/impls/aij/mpi/mpimatmatmult.c:2372
> [0] MatProductSetFromOptions_MPIAIJ_PtAP() at 
> /Users/boutsitron/firedrake/src/petsc/src/mat/impls/aij/mpi/mpimatmatmult.c:2266
> [0] Nonconforming object sizes
> [0] Matrix local dimensions are incompatible, Acol (0, 100) != Prow (0,34)
> Abort(1) on node 0 (rank 0 in comm 496): application called 
> MPI_Abort(PYOP2_COMM_WORLD, 1) - process 0
> petsc4py.PETSc.Error: error code 60
> [1] MatPtAP() at 
> /Users/boutsitron/firedrake/src/petsc/src/mat/interface/matrix.c:9896
> [1] MatProductSetFromOptions() at 
> /Users/boutsitron/firedrake/src/petsc/src/mat/interface/matproduct.c:541
> [1] MatProductSetFromOptions_Private() at 
> /Users/boutsitron/firedrake/src/petsc/src/mat/interface/matproduct.c:435
> [1] MatProductSetFromOptions_MPIAIJ() at 
> /Users/boutsitron/firedrake/src/petsc/src/mat/impls/aij/mpi/mpimatmatmult.c:2372
> [1] MatProductSetFromOptions_MPIAIJ_PtAP() at 
> /Users/boutsitron/firedrake/src/petsc/src/mat/impls/aij/mpi/mpimatmatmult.c:2266
> [1] Nonconforming object sizes
> [1] Matrix local dimensions are incompatible, Acol (100, 200) != Prow (34,67)
> Abort(1) on node 1 (rank 1 in comm 496): application called 
> MPI_Abort(PYOP2_COMM_WORLD, 1) - process 1
> petsc4py.PETSc.Error: error code 60
> [2] MatPtAP() at 
> /Users/boutsitron/firedrake/src/petsc/src/mat/interface/matrix.c:9896
> [2] MatProductSetFromOptions() at 
> /Users/boutsitron/firedrake/src/petsc/src/mat/interface/matproduct.c:541
> [2] MatProductSetFromOptions_Private() at 
> /Users/boutsitron/firedrake/src/petsc/src/mat/interface/matproduct.c:435
> [2] MatProductSetFromOptions_MPIAIJ() at 
> /Users/boutsitron/firedrake/src/petsc/src/mat/impls/aij/mpi/mpimatmatmult.c:2372
> [2] MatProductSetFromOptions_MPIAIJ_PtAP() at 
> /Users/boutsitron/firedrake/src/petsc/src/mat/impls/aij/mpi/mpimatmatmult.c:2266
> [2] Nonconforming object sizes
> [2] Matrix local dimensions are incompatible, Acol (200, 300) != Prow (67,100)
> Abort(1) on node 2 (rank 2 in comm 496): application called 
> MPI_Abort(PYOP2_COMM_WORLD, 1) - process 2
> 
>> On 11 Oct 2023, at 07:18, Pierre Jolivet  wrote:
>> 
>> I disagree with what Mark and Matt are saying: your code is fine, the error 
>> message is fine, petsc4py is fine (in this instance).
>> It’s not a typical use case of MatPtAP(), which is mostly designed for 
>> MatAIJ, not MatDense.
>> On the one hand, in the MatDense case, indeed there will be a mismatch 
>> between the number of columns of A and the number of rows of P, as written 
>> in the error message.
>> On the other hand, there is not much to optimize when computing C = P’ A P 
>> with everything being dense.
>> I would just write this as B = A P and then C = P’ B (but then you may face 
>> the same issue as initially reported, please let us know then).
>> 
>> Thanks,
>> Pierre
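
A minimal petsc4py sketch of the two-step product suggested above, assuming A (m x m) and Phi (m x k) are already-assembled Mats as in the scripts quoted below; as noted, it may still hit the same MatProduct issue originally reported:

```python
# Two-step Galerkin projection instead of a single PtAP call.
# A and Phi are assumed to be assembled petsc4py Mats (dense or AIJ).
B = A.matMult(Phi)                  # B  = A * Phi   -> m x k
A_prime = Phi.transposeMatMult(B)   # A' = Phi^T * B -> k x k
```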
>> 
>>> On 11 Oct 2023, at 2:42 AM, Mark Adams  wrote:
>>> 
>>> This looks like a false positive or there is some subtle bug here that we 
>>> are not seeing.
>>> Could this be the first time parallel PtAP has been used (and reported) in 
>>> petsc4py?
>>> 
>>> Mark
>>> 
>>> On Tue, Oct 10, 2023 at 8:27 PM Matthew Knepley >> <mailto:knep...@gmail.com>> wrote:
>>>> On Tue, Oct 10, 2023 at 5:34 PM Thanasis Boutsikakis 
>>>> >>> <mailto:thanasis.boutsika...@corintis.com>> wrote:
>>>>> Hi all,
>>>>> 
>>>>> Revisiting my code and the proposed solution from Pierre, I realized this 
>>>>> works only sequentially. The reason is that PETSc partitions those 
>>>>> matrices only row-wise, which leads to an error due to the mismatch 
>>>>> between the number of columns of A (non-partitioned) and the number of rows 
>>>>> of Phi (partitioned).
>>>> 
>>>> Are you positive about this? P^T A P is designed to run in this scenario, 
>>>> so either we have a bug or the diagnosis is wrong.
>>>> 
>>>>   Thanks,
>>>> 
>>>>  Matt
>>>>  
>>>>> """Experimenting with PETSc mat-mat multip

Re: [petsc-users] Galerkin projection using petsc4py

2023-10-10 Thread Pierre Jolivet
>>> # TEST: Galerkin projection of numpy matrices A_np and Phi_np
>>> # 
>>> Aprime_np = Phi_np.T @ A_np @ Phi_np
>>> Print(f"MATRIX Aprime_np [{Aprime_np.shape[0]}x{Aprime_np.shape[1]}]")
>>> Print(f"{Aprime_np}")
>>> 
>>> # Create A as an mpi matrix distributed on each process
>>> A = create_petsc_matrix(A_np, sparse=False)
>>> 
>>> # Create Phi as an mpi matrix distributed on each process
>>> Phi = create_petsc_matrix(Phi_np, sparse=False)
>>> 
>>> # Create an empty PETSc matrix object to store the result of the PtAP 
>>> operation.
>>> # This will hold the result A' = Phi.T * A * Phi after the computation.
>>> A_prime = create_petsc_matrix(np.zeros((k, k)), sparse=False)
>>> 
>>> # Perform the PtAP (Phi Transpose times A times Phi) operation.
>>> # In mathematical terms, this operation is A' = Phi.T * A * Phi.
>>> # A_prime will store the result of the operation.
>>> A_prime = A.ptap(Phi)
>>> 
>>> Here is the error
>>> 
>>> MATRIX mpiaij A [100x100]
>>> Assembled
>>> 
>>> Partitioning for A:
>>>   Rank 0: Rows [0, 34)
>>>   Rank 1: Rows [34, 67)
>>>   Rank 2: Rows [67, 100)
>>> 
>>> MATRIX mpiaij Phi [100x7]
>>> Assembled
>>> 
>>> Partitioning for Phi:
>>>   Rank 0: Rows [0, 34)
>>>   Rank 1: Rows [34, 67)
>>>   Rank 2: Rows [67, 100)
>>> 
>>> Traceback (most recent call last):
>>>   File "/Users/boutsitron/work/galerkin_projection.py", line 87, in 
>>> A_prime = A.ptap(Phi)
>>>   ^^^
>>>   File "petsc4py/PETSc/Mat.pyx", line 1525, in petsc4py.PETSc.Mat.ptap
>>> petsc4py.PETSc.Error: error code 60
>>> [0] MatPtAP() at 
>>> /Users/boutsitron/firedrake/src/petsc/src/mat/interface/matrix.c:9896
>>> [0] MatProductSetFromOptions() at 
>>> /Users/boutsitron/firedrake/src/petsc/src/mat/interface/matproduct.c:541
>>> [0] MatProductSetFromOptions_Private() at 
>>> /Users/boutsitron/firedrake/src/petsc/src/mat/interface/matproduct.c:435
>>> [0] MatProductSetFromOptions_MPIAIJ() at 
>>> /Users/boutsitron/firedrake/src/petsc/src/mat/impls/aij/mpi/mpimatmatmult.c:2372
>>> [0] MatProductSetFromOptions_MPIAIJ_PtAP() at 
>>> /Users/boutsitron/firedrake/src/petsc/src/mat/impls/aij/mpi/mpimatmatmult.c:2266
>>> [0] Nonconforming object sizes
>>> [0] Matrix local dimensions are incompatible, Acol (0, 100) != Prow (0,34)
>>> Abort(1) on node 0 (rank 0 in comm 496): application called 
>>> MPI_Abort(PYOP2_COMM_WORLD, 1) - process 0
>>> 
>>> Any thoughts?
>>> 
>>> Thanks,
>>> Thanos
>>> 
>>>> On 5 Oct 2023, at 14:23, Thanasis Boutsikakis 
>>>> >>> <mailto:thanasis.boutsika...@corintis.com>> wrote:
>>>> 
>>>> This works Pierre. Amazing input, thanks a lot!
>>>> 
>>>>> On 5 Oct 2023, at 14:17, Pierre Jolivet >>>> <mailto:pie...@joliv.et>> wrote:
>>>>> 
>>>>> Not a petsc4py expert here, but you may want to try instead:
>>>>> A_prime = A.ptap(Phi)
>>>>> 
>>>>> Thanks,
>>>>> Pierre
>>>>> 
>>>>>> On 5 Oct 2023, at 2:02 PM, Thanasis Boutsikakis 
>>>>>> >>>>> <mailto:thanasis.boutsika...@corintis.com>> wrote:
>>>>>> 
>>>>>> Thanks Pierre! So I tried this and got a segmentation fault. Is this 
>>>>>> supposed to work right off the bat or am I missing sth?
>>>>>> 
>>>>>> [0]PETSC ERROR: 
>>>>>> 
>>>>>> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, 
>>>>>> probably memory access out of range
>>>>>> [0]PETSC ERROR: Try option -start_in_debugger or 
>>>>>> -on_error_attach_debugger
>>>>>> [0]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and 
>>>>>> https://petsc.org/release/faq/
>>>>>> [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, 
>>>>>> and run
>>>>>> [0]PETSC ERROR: to get more information on the crash.
>>>>>> [0]PETSC ERROR: Run with -malloc_debug t

Re: [petsc-users] Galerkin projection using petsc4py

2023-10-05 Thread Pierre Jolivet
Not a petsc4py expert here, but you may want to try instead:
A_prime = A.ptap(Phi)

Thanks,
Pierre

> On 5 Oct 2023, at 2:02 PM, Thanasis Boutsikakis 
>  wrote:
> 
> Thanks Pierre! So I tried this and got a segmentation fault. Is this supposed 
> to work right off the bat or am I missing sth?
> 
> [0]PETSC ERROR: 
> 
> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, 
> probably memory access out of range
> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
> [0]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and 
> https://petsc.org/release/faq/
> [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
> [0]PETSC ERROR: to get more information on the crash.
> [0]PETSC ERROR: Run with -malloc_debug to check if memory corruption is 
> causing the crash.
> Abort(59) on node 0 (rank 0 in comm 0): application called 
> MPI_Abort(MPI_COMM_WORLD, 59) - process 0
> 
> """Experimenting with PETSc mat-mat multiplication"""
> 
> import time
> 
> import numpy as np
> from colorama import Fore
> from firedrake import COMM_SELF, COMM_WORLD
> from firedrake.petsc import PETSc
> from mpi4py import MPI
> from numpy.testing import assert_array_almost_equal
> 
> from utilities import (
> Print,
> create_petsc_matrix,
> print_matrix_partitioning,
> )
> 
> nproc = COMM_WORLD.size
> rank = COMM_WORLD.rank
> 
> # 
> # EXP: Galerkin projection of an mpi PETSc matrix A with an mpi PETSc matrix 
> Phi
> #  A' = Phi.T * A * Phi
> # [k x k] <- [k x m] x [m x m] x [m x k]
> # 
> 
> m, k = 11, 7
> # Generate the random numpy matrices
> np.random.seed(0)  # sets the seed to 0
> A_np = np.random.randint(low=0, high=6, size=(m, m))
> Phi_np = np.random.randint(low=0, high=6, size=(m, k))
> 
> # 
> # TEST: Galerkin projection of numpy matrices A_np and Phi_np
> # 
> Aprime_np = Phi_np.T @ A_np @ Phi_np
> Print(f"MATRIX Aprime_np [{Aprime_np.shape[0]}x{Aprime_np.shape[1]}]")
> Print(f"{Aprime_np}")
> 
> # Create A as an mpi matrix distributed on each process
> A = create_petsc_matrix(A_np, sparse=False)
> 
> # Create Phi as an mpi matrix distributed on each process
> Phi = create_petsc_matrix(Phi_np, sparse=False)
> 
> # Create an empty PETSc matrix object to store the result of the PtAP 
> operation.
> # This will hold the result A' = Phi.T * A * Phi after the computation.
> A_prime = create_petsc_matrix(np.zeros((k, k)), sparse=False)
> 
> # Perform the PtAP (Phi Transpose times A times Phi) operation.
> # In mathematical terms, this operation is A' = Phi.T * A * Phi.
> # A_prime will store the result of the operation.
> Phi.PtAP(A, A_prime)
> 
>> On 5 Oct 2023, at 13:22, Pierre Jolivet  wrote:
>> 
>> How about using ptap which will use MatPtAP?
>> It will be more efficient (and it will help you bypass the issue).
>> 
>> Thanks,
>> Pierre
>> 
>>> On 5 Oct 2023, at 1:18 PM, Thanasis Boutsikakis 
>>>  wrote:
>>> 
>>> Sorry, forgot function create_petsc_matrix()
>>> 
>>> def create_petsc_matrix(input_array, sparse=True):
>>> """Create a PETSc matrix from an input_array
>>> 
>>> Args:
>>> input_array (np array): Input array
>>> partition_like (PETSc mat, optional): Petsc matrix. Defaults to 
>>> None.
>>> sparse (bool, optional): Toggle for sparse or dense. Defaults to 
>>> True.
>>> 
>>> Returns:
>>> PETSc mat: PETSc matrix
>>> """
>>> # Check that input_array is 2-dimensional
>>> assert len(input_array.shape) == 2, "Input array should be 
>>> 2-dimensional"
>>> global_rows, global_cols = input_array.shape
>>> 
>>> size = ((None, global_rows), (global_cols, global_cols))
>>> 
>>> # Create a sparse or dense matrix based on the 'sparse' argument
>>> if sparse:
>>> matrix = PETSc.Mat().createAIJ(size=size, comm=COMM_WORLD)
>>> else:
>>> matrix = PETSc.Mat().createDense(size=size, comm=COMM_WORLD)
>>> matrix.setUp()
>>> 
>>> local_rows_start, local_rows_end = matrix.getOwnershipRange()
>>> 
>>> for counter, i in e

Re: [petsc-users] Galerkin projection using petsc4py

2023-10-05 Thread Pierre Jolivet
How about using ptap which will use MatPtAP?
It will be more efficient (and it will help you bypass the issue).

Thanks,
Pierre

> On 5 Oct 2023, at 1:18 PM, Thanasis Boutsikakis 
>  wrote:
> 
> Sorry, forgot function create_petsc_matrix()
> 
> def create_petsc_matrix(input_array, sparse=True):
> """Create a PETSc matrix from an input_array
> 
> Args:
> input_array (np array): Input array
> partition_like (PETSc mat, optional): Petsc matrix. Defaults to None.
> sparse (bool, optional): Toggle for sparse or dense. Defaults to 
> True.
> 
> Returns:
> PETSc mat: PETSc matrix
> """
> # Check that input_array is 2-dimensional
> assert len(input_array.shape) == 2, "Input array should be 2-dimensional"
> global_rows, global_cols = input_array.shape
> 
> size = ((None, global_rows), (global_cols, global_cols))
> 
> # Create a sparse or dense matrix based on the 'sparse' argument
> if sparse:
> matrix = PETSc.Mat().createAIJ(size=size, comm=COMM_WORLD)
> else:
> matrix = PETSc.Mat().createDense(size=size, comm=COMM_WORLD)
> matrix.setUp()
> 
> local_rows_start, local_rows_end = matrix.getOwnershipRange()
> 
> for counter, i in enumerate(range(local_rows_start, local_rows_end)):
> # Calculate the correct row in the array for the current process
> row_in_array = counter + local_rows_start
> matrix.setValues(
> i, range(global_cols), input_array[row_in_array, :], addv=False
> )
> 
> # Assemble the matrix to compute the final structure
> matrix.assemblyBegin()
> matrix.assemblyEnd()
> 
> return matrix
> 
>> On 5 Oct 2023, at 13:09, Thanasis Boutsikakis 
>>  wrote:
>> 
>> Hi everyone,
>> 
>> I am trying a Galerkin projection (see MFE below) and I cannot get 
>> Phi.transposeMatMult(A, A1) to work. The error is
>> 
>> Phi.transposeMatMult(A, A1)
>>   File "petsc4py/PETSc/Mat.pyx", line 1514, in 
>> petsc4py.PETSc.Mat.transposeMatMult
>> petsc4py.PETSc.Error: error code 56
>> [0] MatTransposeMatMult() at 
>> /Users/boutsitron/firedrake/src/petsc/src/mat/interface/matrix.c:10135
>> [0] MatProduct_Private() at 
>> /Users/boutsitron/firedrake/src/petsc/src/mat/interface/matrix.c:9989
>> [0] No support for this operation for this object type
>> [0] Call MatProductCreate() first
>> 
>> Do you know if these are exposed to petsc4py, or maybe there is another way? I 
>> cannot get the MFE to work (neither in sequential nor in parallel)
>> 
>> """Experimenting with PETSc mat-mat multiplication"""
>> 
>> import time
>> 
>> import numpy as np
>> from colorama import Fore
>> from firedrake import COMM_SELF, COMM_WORLD
>> from firedrake.petsc import PETSc
>> from mpi4py import MPI
>> from numpy.testing import assert_array_almost_equal
>> 
>> from utilities import (
>> Print,
>> create_petsc_matrix,
>> )
>> 
>> nproc = COMM_WORLD.size
>> rank = COMM_WORLD.rank
>> 
>> # 
>> # EXP: Galerkin projection of an mpi PETSc matrix A with an mpi PETSc matrix 
>> Phi
>> #  A' = Phi.T * A * Phi
>> # [k x k] <- [k x m] x [m x m] x [m x k]
>> # 
>> 
>> m, k = 11, 7
>> # Generate the random numpy matrices
>> np.random.seed(0)  # sets the seed to 0
>> A_np = np.random.randint(low=0, high=6, size=(m, m))
>> Phi_np = np.random.randint(low=0, high=6, size=(m, k))
>> 
>> # Create A as an mpi matrix distributed on each process
>> A = create_petsc_matrix(A_np)
>> 
>> # Create Phi as an mpi matrix distributed on each process
>> Phi = create_petsc_matrix(Phi_np)
>> 
>> A1 = create_petsc_matrix(np.zeros((k, m)))
>> 
>> # A1 will contain the result of Phi^T * A
>> Phi.transposeMatMult(A, A1)
>> 
> 



Re: [petsc-users] Error when configure cmake

2023-09-29 Thread Pierre Jolivet
You are using g++ (GCC) 4.4.7 20120313 (Red Hat 4.4.7-23).
You need to use a less ancient C++ compiler.
Please send logs to petsc-maint, not petsc-users.

Thanks,
Pierre

> On 30 Sep 2023, at 7:00 AM, Ivan Luthfi  wrote:
> 
> I am trying to configure my petsc-3.13.6, but I have an error when running 
> configure on cmake. Please help me with this. Here is the 
> configure.log.
> 
> 



Re: [petsc-users] [petsc-maint] PETSc with Xcode 15

2023-09-21 Thread Pierre Jolivet
This is enough to bypass most of the warnings.
There is still the "ld: warning: ignoring duplicate libraries:", but I think these should be filtered (unlike the other warnings which should be fixed).

Thanks,
Pierre

diff --git a/config/BuildSystem/config/setCompilers.py 
b/config/BuildSystem/config/setCompilers.py
index 152272a9709..2d73fb20660 100644
--- a/config/BuildSystem/config/setCompilers.py
+++ b/config/BuildSystem/config/setCompilers.py
@@ -2316,6 +2316,6 @@ class Configure(config.base.Configure):
   if 'with-shared-ld' in self.argDB:
-yield (self.argDB['with-shared-ld'], ['-dynamiclib -single_module', 
'-undefined dynamic_lookup', '-multiply_defined suppress', 
'-no_compact_unwind'], 'dylib')
+yield (self.argDB['with-shared-ld'], ['-dynamiclib', '-undefined 
dynamic_lookup', '-no_compact_unwind'], 'dylib')
   if hasattr(self, 'CXX') and self.mainLanguage == 'Cxx':
-yield (self.CXX, ['-dynamiclib -single_module', '-undefined 
dynamic_lookup', '-multiply_defined suppress', '-no_compact_unwind'], 'dylib')
-  yield (self.CC, ['-dynamiclib -single_module', '-undefined 
dynamic_lookup', '-multiply_defined suppress', '-no_compact_unwind'], 'dylib')
+yield (self.CXX, ['-dynamiclib', '-undefined dynamic_lookup', 
'-no_compact_unwind'], 'dylib')
+  yield (self.CC, ['-dynamiclib', '-undefined dynamic_lookup', 
'-no_compact_unwind'], 'dylib')
 if hasattr(self, 'CXX') and self.mainLanguage == 'Cxx':
@@ -2437,3 +2437,3 @@ class Configure(config.base.Configure):
   self.pushLanguage(language)
-  for testFlag in ['-Wl,-bind_at_load','-Wl,-multiply_defined,suppress', 
'-Wl,-multiply_defined -Wl,suppress', '-Wl,-commons,use_dylibs', 
'-Wl,-search_paths_first', '-Wl,-no_compact_unwind']:
+  for testFlag in ['-Wl,-search_paths_first', '-Wl,-no_compact_unwind']:
 if self.checkLinkerFlag(testFlag):
@@ -2536,6 +2536,6 @@ class Configure(config.base.Configure):
   if 'with-dynamic-ld' in self.argDB:
-yield (self.argDB['with-dynamic-ld'], ['-dynamiclib -single_module 
-undefined dynamic_lookup -multiply_defined suppress'], 'dylib')
+yield (self.argDB['with-dynamic-ld'], ['-dynamiclib'], 'dylib')
   if hasattr(self, 'CXX') and self.mainLanguage == 'Cxx':
-yield (self.CXX, ['-dynamiclib -single_module -undefined 
dynamic_lookup -multiply_defined suppress'], 'dylib')
-  yield (self.CC, ['-dynamiclib -single_module -undefined dynamic_lookup 
-multiply_defined suppress'], 'dylib')
+yield (self.CXX, ['-dynamiclib'], 'dylib')
+  yield (self.CC, ['-dynamiclib'], 'dylib')
 # Shared default
diff --git a/config/PETSc/options/sharedLibraries.py 
b/config/PETSc/options/sharedLibraries.py
index fae390456e8..c1e7861c290 100755
--- a/config/PETSc/options/sharedLibraries.py
+++ b/config/PETSc/options/sharedLibraries.py
@@ -68,3 +68,3 @@ class Configure(config.base.Configure):
 self.addMakeMacro('SONAME_FUNCTION', '$(1).$(2).dylib')
-self.addMakeMacro('SL_LINKER_FUNCTION', '-dynamiclib -install_name 
$(call SONAME_FUNCTION,$(1),$(2)) -compatibility_version $(2) -current_version 
$(3) -single_module -multiply_defined suppress -undefined dynamic_lookup')
+self.addMakeMacro('SL_LINKER_FUNCTION', '-dynamiclib -install_name 
$(call SONAME_FUNCTION,$(1),$(2)) -compatibility_version $(2) -current_version 
$(3) -undefined dynamic_lookup')
   elif self.setCompilers.CC.find('win32fe') >=0:

Re: [petsc-users] PCFIELDSPLIT with MATSBAIJ

2023-09-06 Thread Pierre Jolivet
(switching back to petsc-users with the previous attachment stripped)
I'm assuming your problem is kind of ill-posed?
PCICC and PCILU have different default parameters, and thus can yield different convergence curves, especially for difficult problems.
When switching from AIJ to SBAIJ, ILU will switch to ICC.
If I switch to something more "robust" (PCCHOLESKY from MUMPS), then I get the same convergence (up to machine precision).
I think this highlights that there is no issue with the new PCFIELDSPLIT/MatCreateSubMatri[x,ces]() code, but merely that you need to be a bit careful with the inner solvers.
Thanks,
Pierre
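
For reference, a hedged petsc4py sketch of forcing a more robust inner solver on one split, along the lines of the PCCHOLESKY/MUMPS remark above; the "fieldsplit_0_" option prefix is an assumption and depends on how the splits were actually named:

```python
# Select Cholesky from MUMPS for the first fieldsplit block via the options
# database; ksp.setFromOptions() must run afterwards for this to take effect.
from petsc4py import PETSc

opts = PETSc.Options()
opts["fieldsplit_0_pc_type"] = "cholesky"
opts["fieldsplit_0_pc_factor_mat_solver_type"] = "mumps"
```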

asm_sbaij.log
Description: Binary data


asm_aij.log
Description: Binary data
On 6 Sep 2023, at 11:27 AM, Carl-Johan Thore  wrote:
Ok, here's data from a small instance of my problem. It's in Matlab format, which is what I usually use, but let me know if you want something else. I can send smaller versions of the problem if you prefer that (really small versions will take a bit of time to code so that one doesn't just get the solution 0 everywhere). One IS, ISO, is included because in my code PETSc computes the other IS as the complement. Also included is the output from PETSc for aij and sbaij, and a Matlab script to check the data.
Kind regards,
Carl-Johan

Re: [petsc-users] PCFIELDSPLIT with MATSBAIJ

2023-09-06 Thread Pierre Jolivet


> On 6 Sep 2023, at 10:10 AM, Carl-Johan Thore  wrote:
> 
> 
> "Naïve question, but is your problem really symmetric?
> For example, if you do FEM, how do you impose Dirichlet boundary conditions?"
> 
> The structure of the matrix is K =  [A B; B^T C] with A and C symmetric, so 
> the problem should be symmetric. Numerically, the maximum relative difference 
> between K and K^T is
> on the order of, let's say, 1e-18, which I guess is as good as can be expected? 
> I'm using FEM with (homogeneous) Dirichlet boundary conditions imposed by 
> zeroing rows and columns 
> for the fixed DOFs and adding ones to the corresponding diagonal elements, 
> i.e. same as what MatZeroRowsColumns does.  MatZeroRowsColumns doesn't work 
> for sbaij but we already have an implementation in which rows and columns are 
> cancelled in the element matrices before assembly which is well-tested for 
> the aij-case. We've also verified numerically that aij and sbaij yield the 
> same upper triangular part, and that the RHS is the same in both cases.
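
For the AIJ case, a minimal petsc4py sketch of that row/column elimination (fixed_dofs is a hypothetical array of global indices of the constrained DOFs; as stated above, MatZeroRowsColumns is not available for SBAIJ):

```python
# Zero the rows and columns of the constrained DOFs and put 1.0 on the diagonal,
# mirroring the element-level elimination described above (AIJ matrices only).
K.zeroRowsColumns(fixed_dofs, diag=1.0)
```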
> 
> "If you can reproduce this on a smaller example and are at liberty to share 
> the Mat, IS, and RHS Vec, feel free to send them at petsc-ma...@mcs.anl.gov, 
> I can have a look."
> 
> Yes, this is reproducible on smaller problems. In case I send you Mat, IS, 
> and RHS, which format is preferable?

If the problem is small, it doesn’t make much difference, as long as you tell 
me the scalar type and whether you are using 32- or 64-bit indices.

Thanks,
Pierre

> Kind regards,
> Carl-Johan
> 
> 
> On Wed, Sep 6, 2023 at 8:21 AM Pierre Jolivet  wrote:
>> Naïve question, but is your problem really symmetric?
>> For example, if you do FEM, how do you impose Dirichlet boundary conditions?
>> If you can reproduce this on a smaller example and are at liberty to share 
>> the Mat, IS, and RHS Vec, feel free to send them at petsc-ma...@mcs.anl.gov, 
>> I can have a look.
>> 
>> Thanks,
>> Pierre
>> 
>>> On 6 Sep 2023, at 8:14 AM, Carl-Johan Thore  
>>> wrote:
>>> 
>>> 
>>> Thanks for this!
>>> I now get convergence with sbaij and fieldsplit. However, as can be seen in 
>>> the attached file, it requires around three times as many outer iterations 
>>> as with aij to reach the same point (to within rtol). Switching from 
>>> -pc_fieldsplit_schur_fact_type LOWER to FULL (which is the "correct" 
>>> choice?) helps quite a bit, but still sbaij takes many more iterations. So 
>>> it seems that your sbaij-implementation works but that I'm doing something 
>>> wrong when setting up the fieldsplit pc, or that some of the subsolvers 
>>> doesn't work properly with sbaij. I've tried bjacobi as you had in your 
>>> logs, but not hypre yet. But anyway, one doesn't expect the matrix format 
>>> to have this much impact on convergence? Any other suggestions?
>>> /Carl-Johan
>>> 
>>> On Mon, Sep 4, 2023 at 9:31 PM Pierre Jolivet  
>>> wrote:
>>>> The branch should now be good to go 
>>>> (https://gitlab.com/petsc/petsc/-/merge_requests/6841).
>>>> Sorry, I made a mistake before, hence the error on PetscObjectQuery().
>>>> I’m not sure the code will be covered by the pipeline, but I have tested 
>>>> this on a Raviart—Thomas discretization with PCFIELDSPLIT.
>>>> You’ll see in the attached logs that:
>>>> 1) the numerics match
>>>> 2) in the SBAIJ case, PCFIELDSPLIT extracts the (non-symmetric) A_{01} 
>>>> block from the global (symmetric) A and we get the A_{10} block cheaply by 
>>>> just using MatCreateHermitianTranspose(), instead of calling another time 
>>>> MatCreateSubMatrix()
>>>> Please let me know if you have some time to test the branch and whether it 
>>>> fails or succeeds on your test cases.
>>>> 
>>>> Also, I do not agree with what Hong said.
>>>> Sometimes, the assembly of a coefficient can be more expensive than the 
>>>> communication of the said coefficient.
>>>> So there are instances where SBAIJ would be more efficient than AIJ even if 
>>>> it would require more communication; it is not a black-and-white picture.
>>>> 
>>>> Thanks,
>>>> Pierre
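
A rough petsc4py illustration of that A_{10}-from-A_{01} idea, assuming real scalars (so the Hermitian transpose coincides with the plain transpose) and an already-extracted A01 block; the branch itself does this at the C level with MatCreateHermitianTranspose():

```python
# Represent A10 = A01^T as a lightweight virtual transpose, avoiding a second
# MatCreateSubMatrix() call; no values are copied.
A10 = PETSc.Mat().createTranspose(A01)
```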
>>>> 
>>>> 
>>>>> On 28 Aug 2023, at 12:12 PM, Pierre Jolivet  
>>>>> wrote:
>>>>> 
>>>>> 
>>>>>>> On 28 Aug 2023, at 6:50 PM, Carl-Johan Thore  
>>>>>>> wrote:
>>>>>>> 
>>>>>>&

Re: [petsc-users] PCFIELDSPLIT with MATSBAIJ

2023-09-06 Thread Pierre Jolivet
Naïve question, but is your problem really symmetric?
For example, if you do FEM, how do you impose Dirichlet boundary conditions?
If you can reproduce this on a smaller example and are at liberty to share the 
Mat, IS, and RHS Vec, feel free to send them at petsc-ma...@mcs.anl.gov, I can 
have a look.

Thanks,
Pierre

> On 6 Sep 2023, at 8:14 AM, Carl-Johan Thore  wrote:
> 
> 
> Thanks for this!
> I now get convergence with sbaij and fieldsplit. However, as can be seen in 
> the attached file, it requires around three times as many outer iterations as 
> with aij to reach the same point (to within rtol). Switching from 
> -pc_fieldsplit_schur_fact_type LOWER to FULL (which is the "correct" choice?) 
> helps quite a bit, but still sbaij takes many more iterations. So it seems 
> that your sbaij-implementation works but that I'm doing something wrong when 
> setting up the fieldsplit pc, or that some of the subsolvers don't work 
> properly with sbaij. I've tried bjacobi as you had in your logs, but not 
> hypre yet. But anyway, one doesn't expect the matrix format to have this much 
> impact on convergence? Any other suggestions?
> /Carl-Johan
> 
>> On Mon, Sep 4, 2023 at 9:31 PM Pierre Jolivet  wrote:
>> The branch should now be good to go 
>> (https://gitlab.com/petsc/petsc/-/merge_requests/6841).
>> Sorry, I made a mistake before, hence the error on PetscObjectQuery().
>> I’m not sure the code will be covered by the pipeline, but I have tested 
>> this on a Raviart—Thomas discretization with PCFIELDSPLIT.
>> You’ll see in the attached logs that:
>> 1) the numerics match
>> 2) in the SBAIJ case, PCFIELDSPLIT extracts the (non-symmetric) A_{01} block 
>> from the global (symmetric) A and we get the A_{10} block cheaply by just 
>> using MatCreateHermitianTranspose(), instead of calling another time 
>> MatCreateSubMatrix()
>> Please let me know if you have some time to test the branch and whether it 
>> fails or succeeds on your test cases.
>> 
>> Also, I do not agree with what Hong said.
>> Sometimes, the assembly of a coefficient can be more expensive than the 
>> communication of the said coefficient.
>> So there are instances where SBAIJ would be more efficient than AIJ even if 
>> it would require more communication; it is not a black-and-white picture.
>> 
>> Thanks,
>> Pierre
>> 
>> 
>>>> On 28 Aug 2023, at 12:12 PM, Pierre Jolivet  wrote:
>>>> 
>>>> 
>>>> On 28 Aug 2023, at 6:50 PM, Carl-Johan Thore  
>>>> wrote:
>>>> 
>>>> I've tried the new files, and with them, PCFIELDSPLIT now gets set up 
>>>> without crashes (but the setup is significantly slower than for MATAIJ)
>>> 
>>> I’ll be back from Japan at the end of this week, my schedule is too packed 
>>> to get anything done in the meantime.
>>> But I’ll let you know when things are working properly (last I checked, I 
>>> think it was working, but I may have forgotten about a corner case or two).
>>> But yes, though one would expect things to be faster and less memory 
>>> intensive with SBAIJ, it’s unfortunately not always the case.
>>> 
>>> Thanks,
>>> Pierre
>>> 
>>>> Unfortunately I still get errors later in the process:
>>>> 
>>>> [0]PETSC ERROR: Null argument, when expecting valid pointer
>>>> [0]PETSC ERROR: Null Pointer: Parameter # 1
>>>> [0]PETSC ERROR: Petsc Development GIT revision: v3.19.4-1023-ga6d78fcba1d  
>>>> GIT Date: 2023-08-22 20:32:33 -0400
>>>> [0]PETSC ERROR: Configure options -f --with-fortran-bindings=0 --with-cuda 
>>>> --with-cusp --download-scalapack --download-hdf5 --download-zlib 
>>>> --download-mumps --download-parmetis --download-metis --download-ptscotch 
>>>> --download-hypre --download-spai
>>>> [0]PETSC ERROR: #1 PetscObjectQuery() at 
>>>> /mnt/c/mathware/petsc/petsc-v3-19-4/src/sys/objects/inherit.c:742
>>>> [0]PETSC ERROR: #2 MatCreateSubMatrix_MPISBAIJ() at 
>>>> /mnt/c/mathware/petsc/petsc-v3-19-4/src/mat/impls/sbaij/mpi/mpisbaij.c:1414
>>>> [0]PETSC ERROR: #3 MatCreateSubMatrix() at 
>>>> /mnt/c/mathware/petsc/petsc-v3-19-4/src/mat/interface/matrix.c:8476
>>>> [0]PETSC ERROR: #4 PCSetUp_FieldSplit() at 
>>>> /mnt/c/mathware/petsc/petsc-v3-19-4/src/ksp/pc/impls/fieldsplit/fieldsplit.c:826
>>>> [0]PETSC ERROR: #5 PCSetUp() at 
>>>> /mnt/c/mathware/petsc/petsc-v3-19-4/src/ksp/pc/interface/precon.c:1069
>>>> [0]PETSC ERROR: #6 KSPSetUp() at 
>>>&g

Re: [petsc-users] PCFIELDSPLIT with MATSBAIJ

2023-09-04 Thread Pierre Jolivet
  rows=5712, cols=5712
  total: nonzeros=134208, allocated nonzeros=134208
  total number of mallocs used during MatSetValues calls=0
using I-node (on process 0) routines: found 470 nodes, limit used 
is 5
  linear system matrix = precond matrix:
  Mat Object: 4 MPI processes
type: mpiaij
rows=15828, cols=15828
total: nonzeros=407340, allocated nonzeros=407340
total number of mallocs used during MatSetValues calls=0
  using I-node (on process 0) routines: found 965 nodes, limit used is 5

Re: [petsc-users] PCFIELDSPLIT with MATSBAIJ

2023-08-28 Thread Pierre Jolivet

> On 28 Aug 2023, at 6:50 PM, Carl-Johan Thore  wrote:
> 
> I've tried the new files, and with them, PCFIELDSPLIT now gets set up without 
> crashes (but the setup is significantly slower than for MATAIJ)

I’ll be back from Japan at the end of this week, my schedule is too packed to 
get anything done in the meantime.
But I’ll let you know when things are working properly (last I checked, I think 
it was working, but I may have forgotten about a corner case or two).
But yes, though one would expect things to be faster and less memory intensive 
with SBAIJ, it’s unfortunately not always the case.

Thanks,
Pierre

> Unfortunately I still get errors later in the process:
> 
> [0]PETSC ERROR: Null argument, when expecting valid pointer
> [0]PETSC ERROR: Null Pointer: Parameter # 1
> [0]PETSC ERROR: Petsc Development GIT revision: v3.19.4-1023-ga6d78fcba1d  
> GIT Date: 2023-08-22 20:32:33 -0400
> [0]PETSC ERROR: Configure options -f --with-fortran-bindings=0 --with-cuda 
> --with-cusp --download-scalapack --download-hdf5 --download-zlib 
> --download-mumps --download-parmetis --download-metis --download-ptscotch 
> --download-hypre --download-spai
> [0]PETSC ERROR: #1 PetscObjectQuery() at 
> /mnt/c/mathware/petsc/petsc-v3-19-4/src/sys/objects/inherit.c:742
> [0]PETSC ERROR: #2 MatCreateSubMatrix_MPISBAIJ() at 
> /mnt/c/mathware/petsc/petsc-v3-19-4/src/mat/impls/sbaij/mpi/mpisbaij.c:1414
> [0]PETSC ERROR: #3 MatCreateSubMatrix() at 
> /mnt/c/mathware/petsc/petsc-v3-19-4/src/mat/interface/matrix.c:8476
> [0]PETSC ERROR: #4 PCSetUp_FieldSplit() at 
> /mnt/c/mathware/petsc/petsc-v3-19-4/src/ksp/pc/impls/fieldsplit/fieldsplit.c:826
> [0]PETSC ERROR: #5 PCSetUp() at 
> /mnt/c/mathware/petsc/petsc-v3-19-4/src/ksp/pc/interface/precon.c:1069
> [0]PETSC ERROR: #6 KSPSetUp() at 
> /mnt/c/mathware/petsc/petsc-v3-19-4/src/ksp/ksp/interface/itfunc.c:415
> 
> The code I'm running here works without any problems for MATAIJ. To run it 
> with MATSBAIJ I've simply used the command-line option
> -dm_mat_type sbaij
> 
> 
> Kind regards,
> Carl-Johan
> 
> 
> On Sat, Aug 26, 2023 at 5:21 PM Pierre Jolivet via petsc-users 
> mailto:petsc-users@mcs.anl.gov>> wrote:
>> 
>> 
>>> On 27 Aug 2023, at 12:14 AM, Carl-Johan Thore >> <mailto:carl-johan.th...@liu.se>> wrote:
>>> 
>>> “Well, your A00 and A11 will possibly be SBAIJ also, so you’ll end up with 
>>> the same issue.”
>>> I’m not sure I follow. Does PCFIELDSPLIT extract further submatrices from 
>>> these blocks, or is there
>>> somewhere else in the code that things will go wrong?
>> 
>> Ah, no, you are right, in that case it should work.
>> 
>>> For the MATNEST I was thinking to get some savings from the block-symmetry 
>>> at least
>>> even if symmetry in A00 and A11 cannot be exploited; using SBAIJ for them 
>>> would just be a
>>> (pretty big) bonus.
>>>  
>>> “I’ll rebase on top of main and try to get it integrated if it could be 
>>> useful to you (but I’m traveling
>>> right now so everything gets done more slowly, sorry).”
>>> Sound great, Thanks again!
>> 
>> The MR is there https://gitlab.com/petsc/petsc/-/merge_requests/6841.
>> I need to add a new code path in MatCreateRedundantMatrix() to make sure the 
>> resulting Mat is indeed SBAIJ, but that is orthogonal to the PCFIELDSPLIT 
>> issue.
>> The branch should be usable in its current state.
>> 
>> Thanks,
>> Pierre
>> 
>>>  
>>> From: Pierre Jolivet >> <mailto:pierre.joli...@lip6.fr>> 
>>> Sent: Saturday, August 26, 2023 4:36 PM
>>> To: Carl-Johan Thore >> <mailto:carl-johan.th...@liu.se>>
>>> Cc: Carl-Johan Thore >> <mailto:carljohanth...@gmail.com>>; petsc-users@mcs.anl.gov 
>>> <mailto:petsc-users@mcs.anl.gov>
>>> Subject: Re: [petsc-users] PCFIELDSPLIT with MATSBAIJ
>>>  
>>>  
>>> 
>>> 
>>> On 26 Aug 2023, at 11:16 PM, Carl-Johan Thore >> <mailto:carl-johan.th...@liu.se>> wrote:
>>>  
>>> "(Sadly) MATSBAIJ is extremely broken, in particular, it cannot be used to 
>>> retrieve rectangular blocks in MatCreateSubMatrices, thus you cannot get 
>>> the A01 and A10 blocks in PCFIELDSPLIT.
>>> I have a branch that fixes this, but I haven’t rebased in a while (and I’m 
>>> AFK right now), would you want me to rebase and give it a go, or must you 
>>> stick to a release tarball?"
>>> 
>>> Ok, would be great if you could look at this! I don't need to stick to any 
>>> partic

Re: [petsc-users] PCFIELDSPLIT with MATSBAIJ

2023-08-26 Thread Pierre Jolivet via petsc-users


> On 27 Aug 2023, at 12:14 AM, Carl-Johan Thore  wrote:
> 
> “Well, your A00 and A11 will possibly be SBAIJ also, so you’ll end up with 
> the same issue.”
> I’m not sure I follow. Does PCFIELDSPLIT extract further submatrices from 
> these blocks, or is there
> somewhere else in the code that things will go wrong?

Ah, no, you are right, in that case it should work.

> For the MATNEST I was thinking to get some savings from the block-symmetry at 
> least
> even if symmetry in A00 and A11 cannot be exploited; using SBAIJ for them 
> would just be a
> (pretty big) bonus.
>  
> “I’ll rebase on top of main and try to get it integrated if it could be 
> useful to you (but I’m traveling
> right now so everything gets done more slowly, sorry).”
> Sound great, Thanks again!

The MR is there https://gitlab.com/petsc/petsc/-/merge_requests/6841.
I need to add a new code path in MatCreateRedundantMatrix() to make sure the 
resulting Mat is indeed SBAIJ, but that is orthogonal to the PCFIELDSPLIT issue.
The branch should be usable in its current state.

Thanks,
Pierre

>  
> From: Pierre Jolivet  
> Sent: Saturday, August 26, 2023 4:36 PM
> To: Carl-Johan Thore 
> Cc: Carl-Johan Thore ; petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] PCFIELDSPLIT with MATSBAIJ
>  
>  
> 
> 
> On 26 Aug 2023, at 11:16 PM, Carl-Johan Thore  <mailto:carl-johan.th...@liu.se>> wrote:
>  
> "(Sadly) MATSBAIJ is extremely broken, in particular, it cannot be used to 
> retrieve rectangular blocks in MatCreateSubMatrices, thus you cannot get the 
> A01 and A10 blocks in PCFIELDSPLIT.
> I have a branch that fixes this, but I haven’t rebased in a while (and I’m 
> AFK right now), would you want me to rebase and give it a go, or must you 
> stick to a release tarball?"
> 
> Ok, would be great if you could look at this! I don't need to stick to any 
> particular branch.
> 
> Do you think MATNEST could be an alternative here?
>  
> Well, your A00 and A11 will possibly be SBAIJ also, so you’ll end up with the 
> same issue.
> I’m using both approaches (monolithic SBAIJ or Nest + SBAIJ), it was crashing 
> but I think it was thoroughly fixed in 
> https://gitlab.com/petsc/petsc/-/commits/jolivet/feature-matcreatesubmatrices-rectangular-sbaij/
> It is ugly code on top of ugly code, so I didn’t try to get it integrated and 
> just used the branch locally, and then moved to some other stuff.
> I’ll rebase on top of main and try to get it integrated if it could be useful 
> to you (but I’m traveling right now so everything gets done more slowly, 
> sorry).
>  
> Thanks,
> Pierre
> 
> 
> My matrix is
> [A00 A01;
> A01^t A11]
> so perhaps with MATNEST I can make use of the block-symmetry at least, and 
> then use MATSBAIJ for 
> A00 and A11 if it's possible to combine matrix types which the manual seems 
> to imply. 
> 
> Kind regards
> Carl-Johan
> 
> 
> 
> On 26 Aug 2023, at 10:09 PM, Carl-Johan Thore  <mailto:carljohanth...@gmail.com>> wrote:
> 
> Hi,
> 
> I'm trying to use PCFIELDSPLIT with MATSBAIJ in PETSc 3.19.4. 
> According to the manual "[t]he fieldsplit preconditioner cannot 
> currently be used with the MATBAIJ or MATSBAIJ data formats if the 
> blocksize is larger than 1". Since my blocksize is exactly 1 it would seem 
> that I can use PCFIELDSPLIT. But this fails with "PETSC ERROR: For symmetric 
> format, iscol must equal isrow"
> from MatCreateSubMatrix_MPISBAIJ. Tracing backwards one ends up in 
> fieldsplit.c at
> 
> /* extract the A01 and A10 matrices */
> ilink = jac->head;
> PetscCall(ISComplement(ilink->is_col, rstart, rend, &ccis));
> if (jac->offdiag_use_amat) {
>   PetscCall(MatCreateSubMatrix(pc->mat, ilink->is, ccis, MAT_INITIAL_MATRIX, &ilink->B));
> } else {
>   PetscCall(MatCreateSubMatrix(pc->pmat, ilink->is, ccis, MAT_INITIAL_MATRIX, &ilink->B));
> }
> 
> This, since my A01 and A10 are not square, seems to explain why iscol is not 
> equal to isrow.
> From this I gather that it is in general NOT possible to use 
> PCFIELDSPLIT with MATSBAIJ even with block size 1?
> 
> Kind regards,
> Carl-Johan



Re: [petsc-users] PCFIELDSPLIT with MATSBAIJ

2023-08-26 Thread Pierre Jolivet


> On 26 Aug 2023, at 11:16 PM, Carl-Johan Thore  wrote:
> 
> "(Sadly) MATSBAIJ is extremely broken, in particular, it cannot be used to 
> retrieve rectangular blocks in MatCreateSubMatrices, thus you cannot get the 
> A01 and A10 blocks in PCFIELDSPLIT.
> I have a branch that fixes this, but I haven’t rebased in a while (and I’m 
> AFK right now), would you want me to rebase and give it a go, or must you 
> stick to a release tarball?"
> 
> Ok, would be great if you could look at this! I don't need to stick to any 
> particular branch.
> 
> Do you think MATNEST could be an alternative here?

Well, your A00 and A11 will possibly be SBAIJ also, so you’ll end up with the 
same issue.
I’m using both approaches (monolithic SBAIJ or Nest + SBAIJ), it was crashing 
but I think it was thoroughly fixed in 
https://gitlab.com/petsc/petsc/-/commits/jolivet/feature-matcreatesubmatrices-rectangular-sbaij/
It is ugly code on top of ugly code, so I didn’t try to get it integrated and 
just used the branch locally, and then moved to some other stuff.
I’ll rebase on top of main and try to get it integrated if it could be useful 
to you (but I’m traveling right now so everything gets done more slowly, sorry).

Thanks,
Pierre

> My matrix is
> [A00 A01;
> A01^t A11]
> so perhaps with MATNEST I can make use of the block-symmetry at least, and 
> then use MATSBAIJ for 
> A00 and A11 if it's possible to combine matrix types which the manual seems 
> to imply. 
> 
> Kind regards
> Carl-Johan
> 
> 
>> On 26 Aug 2023, at 10:09 PM, Carl-Johan Thore  
>> wrote:
>> 
>> Hi,
>> 
>> I'm trying to use PCFIELDSPLIT with MATSBAIJ in PETSc 3.19.4. 
>> According to the manual "[t]he fieldsplit preconditioner cannot 
>> currently be used with the MATBAIJ or MATSBAIJ data formats if the 
>> blocksize is larger than 1". Since my blocksize is exactly 1 it would seem 
>> that I can use PCFIELDSPLIT. But this fails with "PETSC ERROR: For symmetric 
>> format, iscol must equal isrow"
>> from MatCreateSubMatrix_MPISBAIJ. Tracing backwards one ends up in 
>> fieldsplit.c at
>> 
>> /* extract the A01 and A10 matrices */
>> ilink = jac->head;
>> PetscCall(ISComplement(ilink->is_col, rstart, rend, &ccis));
>> if (jac->offdiag_use_amat) {
>>   PetscCall(MatCreateSubMatrix(pc->mat, ilink->is, ccis, MAT_INITIAL_MATRIX, &ilink->B));
>> } else {
>>   PetscCall(MatCreateSubMatrix(pc->pmat, ilink->is, ccis, MAT_INITIAL_MATRIX, &ilink->B));
>> }
>> 
>> This, since my A01 and A10 are not square, seems to explain why iscol is not 
>> equal to isrow.
>> From this I gather that it is in general NOT possible to use 
>> PCFIELDSPLIT with MATSBAIJ even with block size 1?
>> 
>> Kind regards,
>> Carl-Johan
> 



Re: [petsc-users] PCFIELDSPLIT with MATSBAIJ

2023-08-26 Thread Pierre Jolivet
(Sadly) MATSBAIJ is extremely broken, in particular, it cannot be used to 
retrieve rectangular blocks in MatCreateSubMatrices, thus you cannot get the 
A01 and A10 blocks in PCFIELDSPLIT.
I have a branch that fixes this, but I haven’t rebased in a while (and I’m AFK 
right now), would you want me to rebase and give it a go, or must you stick to 
a release tarball?

Thanks,
Pierre

> On 26 Aug 2023, at 10:09 PM, Carl-Johan Thore  
> wrote:
> 
> Hi,
> 
> I'm trying to use PCFIELDSPLIT with MATSBAIJ in PETSc 3.19.4. According to 
> the manual
> "[t]he fieldsplit preconditioner cannot currently be used with the MATBAIJ or 
> MATSBAIJ data 
> formats if the blocksize is larger than 1". Since my blocksize is exactly 1 
> it would seem that I can
> use PCFIELDSPLIT. But this fails with "PETSC ERROR: For symmetric format, 
> iscol must equal isrow"
> from MatCreateSubMatrix_MPISBAIJ. Tracing backwards one ends up in 
> fieldsplit.c at
> 
> /* extract the A01 and A10 matrices */
> ilink = jac->head;
> PetscCall(ISComplement(ilink->is_col, rstart, rend, &ccis));
> if (jac->offdiag_use_amat) {
>   PetscCall(MatCreateSubMatrix(pc->mat, ilink->is, ccis, MAT_INITIAL_MATRIX, &ilink->B));
> } else {
>   PetscCall(MatCreateSubMatrix(pc->pmat, ilink->is, ccis, MAT_INITIAL_MATRIX, &ilink->B));
> }
> 
> This, since my A01 and A10 are not square, seems to explain why iscol is not 
> equal to isrow.
> From this I gather that it is in general NOT possible to use PCFIELDSPLIT 
> with MATSBAIJ even
> with block size 1?
> 
> Kind regards,
> Carl-Johan



Re: [petsc-users] Multiplication of partitioned with non-partitioned (sparse) PETSc matrices

2023-08-23 Thread Pierre Jolivet
_end - local_rows_start
> 
> print("local_rows_start:", local_rows_start)
> print("local_rows_end:", local_rows_end)
> print("local_rows:", local_rows)
> 
> local_A = PETSc.Mat().createAIJ(size=(local_rows, k), comm=PETSc.COMM_SELF)
> 
> # pdb.set_trace()
> 
> comm = A.getComm()
> rows = PETSc.IS().createStride(local_rows, first=0, step=1, comm=comm)
> cols = PETSc.IS().createStride(k, first=0, step=1, comm=comm)
> 
> print("rows indices:", rows.getIndices())
> print("cols indices:", cols.getIndices())
> 
> # pdb.set_trace()
> 
> # Create the local to global mapping for rows and columns
> l2g_rows = PETSc.LGMap().create(rows.getIndices(), comm=comm)
> l2g_cols = PETSc.LGMap().create(cols.getIndices(), comm=comm)
> 
> print("l2g_rows type:", type(l2g_rows))
> print("l2g_rows:", l2g_rows.view())
> print("l2g_rows type:", type(l2g_cols))
> print("l2g_cols:", l2g_cols.view())
> 
> # pdb.set_trace()
> 
> # Set the local-to-global mapping for the matrix
> A.setLGMap(l2g_rows, l2g_cols)
> 
> # pdb.set_trace()
> 
> # Now you can get the local submatrix
> local_A = A.getLocalSubMatrix(rows, cols)
> 
> # Assemble the matrix to compute the final structure
> local_A.assemblyBegin()
> local_A.assemblyEnd()
> 
> Print(local_A.getType())
> Print(local_A.getSizes())
> 
> # pdb.set_trace()
> 
> # Multiply the two matrices
> local_C = local_A.matMult(B_seq)
> 
> 
>> On 23 Aug 2023, at 12:56, Mark Adams  wrote:
>> 
>> 
>> 
>> On Wed, Aug 23, 2023 at 5:36 AM Thanasis Boutsikakis 
>> > <mailto:thanasis.boutsika...@corintis.com>> wrote:
>>> Thanks for the suggestion Pierre.
>>> 
>>> Yes B is duplicated by all processes.
>>> 
>>> In this case, should B be created as a sequential sparse matrix using 
>>> COMM_SELF?
>> 
>> Yes, that is what Pierre said,
>> 
>> Mark
>>  
>>> I guess if not, the multiplication of B with the output of 
>>> https://petsc.org/main/manualpages/Mat/MatMPIAIJGetLocalMat/ would not go 
>>> through, right? 
>>> 
>>> Thanks,
>>> Thanos
>>>  
>>>> On 23 Aug 2023, at 10:47, Pierre Jolivet >>> <mailto:pierre.joli...@lip6.fr>> wrote:
>>>> 
>>>> 
>>>> 
>>>>> On 23 Aug 2023, at 5:35 PM, Thanasis Boutsikakis 
>>>>> >>>> <mailto:thanasis.boutsika...@corintis.com>> wrote:
>>>>> 
>>>>> Hi all,
>>>>> 
>>>>> I am trying to multiply two Petsc matrices as C = A * B, where A is a 
>>>>> tall matrix and B is a relatively small matrix.
>>>>> 
>>>>> I have taken the decision to create A as a (row-)partitioned matrix and B 
>>>>> as a non-partitioned matrix that is entirely shared by all procs (to 
>>>>> avoid unnecessary communication).
>>>>> 
>>>>> Here is my code:
>>>>> 
>>>>> import numpy as np
>>>>> from firedrake import COMM_WORLD
>>>>> from firedrake.petsc import PETSc
>>>>> from numpy.testing import assert_array_almost_equal
>>>>> 
>>>>> nproc = COMM_WORLD.size
>>>>> rank = COMM_WORLD.rank
>>>>> 
>>>>> def create_petsc_matrix_non_partitioned(input_array):
>>>>> """Building a mpi non-partitioned petsc matrix from an array
>>>>> 
>>>>> Args:
>>>>> input_array (np array): Input array
>>>>> sparse (bool, optional): Toggle for sparse or dense. Defaults to 
>>>>> True.
>>>>> 
>>>>> Returns:
>>>>> mpi mat: PETSc matrix
>>>>> """
>>>>> assert len(input_array.shape) == 2
>>>>> 
>>>>> m, n = input_array.shape
>>>>> 
>>>>> matrix = PETSc.Mat().createAIJ(size=((m, n), (m, n)), comm=COMM_WORLD)
>>>>> 
>>>>> # Set the values of the matrix
>>>>> matrix.setValues(range(m), range(n), input_array[:, :], addv=False)
>>>>> 
>>>>> # Assemble the matrix to compute the final structure
>>>>> matrix.assemblyBegin()
>>>>> matrix.assemblyEnd()
>>>>> 
>>>>> return matrix
>>>>> 
>>>>> 
>>>>&g

Re: [petsc-users] Multiplication of partitioned with non-partitioned (sparse) PETSc matrices

2023-08-23 Thread Pierre Jolivet


> On 23 Aug 2023, at 5:35 PM, Thanasis Boutsikakis 
>  wrote:
> Hi all,
> 
> I am trying to multiply two Petsc matrices as C = A * B, where A is a tall 
> matrix and B is a relatively small matrix.
> 
> I have taken the decision to create A as a (row-)partitioned matrix and B as a 
> non-partitioned matrix that is entirely shared by all procs (to avoid 
> unnecessary communication).
> 
> Here is my code:
> 
> import numpy as np
> from firedrake import COMM_WORLD
> from firedrake.petsc import PETSc
> from numpy.testing import assert_array_almost_equal
> 
> nproc = COMM_WORLD.size
> rank = COMM_WORLD.rank
> 
> def create_petsc_matrix_non_partitioned(input_array):
> """Building a mpi non-partitioned petsc matrix from an array
> 
> Args:
> input_array (np array): Input array
> sparse (bool, optional): Toggle for sparse or dense. Defaults to 
> True.
> 
> Returns:
> mpi mat: PETSc matrix
> """
> assert len(input_array.shape) == 2
> 
> m, n = input_array.shape
> 
> matrix = PETSc.Mat().createAIJ(size=((m, n), (m, n)), comm=COMM_WORLD)
> 
> # Set the values of the matrix
> matrix.setValues(range(m), range(n), input_array[:, :], addv=False)
> 
> # Assemble the matrix to compute the final structure
> matrix.assemblyBegin()
> matrix.assemblyEnd()
> 
> return matrix
> 
> 
> def create_petsc_matrix(input_array, partition_like=None):
> """Create a PETSc matrix from an input_array
> 
> Args:
> input_array (np array): Input array
> partition_like (petsc mat, optional): Petsc matrix. Defaults to None.
> sparse (bool, optional): Toggle for sparse or dense. Defaults to 
> True.
> 
> Returns:
> petsc mat: PETSc matrix
> """
> # Check if input_array is 1D and reshape if necessary
> assert len(input_array.shape) == 2, "Input array should be 2-dimensional"
> global_rows, global_cols = input_array.shape
> 
> comm = COMM_WORLD
> if partition_like is not None:
> local_rows_start, local_rows_end = partition_like.getOwnershipRange()
> local_rows = local_rows_end - local_rows_start
> 
> # No parallelization in the columns, set local_cols = None to 
> parallelize
> size = ((local_rows, global_rows), (global_cols, global_cols))
> else:
> size = ((None, global_rows), (global_cols, global_cols))
> 
> matrix = PETSc.Mat().createAIJ(size=size, comm=comm)
> matrix.setUp()
> 
> local_rows_start, local_rows_end = matrix.getOwnershipRange()
> 
> for counter, i in enumerate(range(local_rows_start, local_rows_end)):
> # Calculate the correct row in the array for the current process
> row_in_array = counter + local_rows_start
> matrix.setValues(
> i, range(global_cols), input_array[row_in_array, :], addv=False
> )
> 
> # Assemble the matrix to compute the final structure
> matrix.assemblyBegin()
> matrix.assemblyEnd()
> 
> return matrix
> 
> 
> m, k = 10, 3
> # Generate the random numpy matrices
> np.random.seed(0)  # sets the seed to 0
> A_np = np.random.randint(low=0, high=6, size=(m, k))
> B_np = np.random.randint(low=0, high=6, size=(k, k))
> 
> 
> A = create_petsc_matrix(A_np)
> 
> B = create_petsc_matrix_non_partitioned(B_np)
> 
> # Now perform the multiplication
> C = A * B
> 
> The problem with this is that there is a mismatch between the local rows of A 
> (depend on the partitioning) and the global rows of B (3 for all procs), so 
> that the multiplication cannot happen in parallel. Here is the error:
> 
> [0]PETSC ERROR: 
> 
> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, 
> probably memory access out of range
> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
> [0]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and 
> https://petsc.org/release/faq/
> [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
> [0]PETSC ERROR: to get more information on the crash.
> [0]PETSC ERROR: Run with -malloc_debug to check if memory corruption is 
> causing the crash.
> application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0
> [unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=59
> :
> system msg for write_line failure : Bad file descriptor
> 
> 
> Is there a standard way to achieve this?

Your B is duplicated by all processes?
If so, then call https://petsc.org/main/manualpages/Mat/MatMPIAIJGetLocalMat/, 
do a sequential product with B on COMM_SELF, not COMM_WORLD, and use 
https://petsc.org/main/manualpages/Mat/MatCreateMPIMatConcatenateSeqMat/ with 
the output.
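In C, that sequence is roughly the following (an untested sketch; names are illustrative):

```c
#include <petscmat.h>

/* C = A * B, with A distributed (MPIAIJ) and B duplicated on every process
   as a sequential matrix B_seq living on PETSC_COMM_SELF */
static PetscErrorCode MatMatMultLocalThenConcatenate(Mat A, Mat B_seq, Mat *C)
{
  Mat A_loc, C_loc;

  PetscFunctionBeginUser;
  /* the local rows of A, as a sequential matrix */
  PetscCall(MatMPIAIJGetLocalMat(A, MAT_INITIAL_MATRIX, &A_loc));
  /* purely local product, no communication */
  PetscCall(MatMatMult(A_loc, B_seq, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &C_loc));
  /* stitch the per-process blocks back into a single parallel matrix */
  PetscCall(MatCreateMPIMatConcatenateSeqMat(PETSC_COMM_WORLD, C_loc, PETSC_DECIDE, MAT_INITIAL_MATRIX, C));
  PetscCall(MatDestroy(&C_loc));
  PetscCall(MatDestroy(&A_loc));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```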

Thanks,
Pierre

> Thanks,
> Thanos


Re: [petsc-users] eigenvalue problem involving inverse of a matrix

2023-08-14 Thread Pierre Jolivet


> On 14 Aug 2023, at 10:39 AM, maitri ksh  wrote:
> 
> 
> Hi, 
> I need to solve an eigenvalue problem  Ax=lmbda*x, where A=(B^-H)*Q*B^-1 is a 
> hermitian matrix, 'B^-H' refers to the hermitian of the inverse of the matrix 
> B. Theoretically it would take around 1.8TB to explicitly compute the matrix 
> B^-1 . A feasible way to solve this eigenvalue problem would be to use the LU 
> factors of the B matrix instead. So the problem looks something like this: 
>  (((LU)^-H)*Q*(LU)^-1)*x = lmbda*x
> For a guess value of the (normalised) eigen-vector 'x', 
> 1) one would require to solve two linear equations to get 'Ax', 
> (LU)*y=x, solve for 'y',
>((LU)^H)*z=Q*y,   solve for 'z' 
> then one can follow the conventional power-iteration procedure
> 2) update eigenvector: x= z/||z||
> 3) get eigenvalue using the Rayleigh quotient 
> 4) go to step-1 and loop through with a conditional break.
> 
> Is there any example in petsc that does not require explicit declaration of 
> the matrix 'A' (Ax=lmbda*x) and instead takes a vector 'Ax' as input for an 
> iterative algorithm (like the one above). I looked into some of the examples 
> of eigenvalue problems ( it's highly possible that I might have overlooked, I 
> am new to petsc) but I couldn't find a way to circumvent the explicit 
> declaration of matrix A.

You could use SLEPc with a MatShell; that’s the very purpose of this MatType.
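For instance, something along these lines in C (an untested sketch, assuming real arithmetic so that the Hermitian-transpose solve reduces to MatSolveTranspose(); ctx->F is the LU factor of B obtained with MatGetFactor()/MatLUFactorSymbolic()/MatLUFactorNumeric(), and m/M are assumed local/global sizes):

```c
#include <slepceps.h>

typedef struct {
  Mat F;    /* factored B */
  Mat Q;
  Vec y, z; /* work vectors */
} ShellCtx;

/* w = A x = (LU)^-T Q (LU)^-1 x, applied with two solves and one MatMult */
static PetscErrorCode MatMult_Shell(Mat A, Vec x, Vec w)
{
  ShellCtx *ctx;

  PetscFunctionBeginUser;
  PetscCall(MatShellGetContext(A, &ctx));
  PetscCall(MatSolve(ctx->F, x, ctx->y));          /* y = B^{-1} x */
  PetscCall(MatMult(ctx->Q, ctx->y, ctx->z));      /* z = Q y      */
  PetscCall(MatSolveTranspose(ctx->F, ctx->z, w)); /* w = B^{-T} z */
  PetscFunctionReturn(PETSC_SUCCESS);
}

static PetscErrorCode SolveEigenproblem(ShellCtx *ctx, PetscInt m, PetscInt M)
{
  Mat A;
  EPS eps;

  PetscFunctionBeginUser;
  PetscCall(MatCreateShell(PETSC_COMM_WORLD, m, m, M, M, ctx, &A));
  PetscCall(MatShellSetOperation(A, MATOP_MULT, (void (*)(void))MatMult_Shell));
  PetscCall(EPSCreate(PETSC_COMM_WORLD, &eps));
  PetscCall(EPSSetOperators(eps, A, NULL));
  PetscCall(EPSSetProblemType(eps, EPS_HEP)); /* the operator is Hermitian */
  PetscCall(EPSSetFromOptions(eps));          /* e.g., -eps_nev 1 -eps_largest_real */
  PetscCall(EPSSolve(eps));
  PetscCall(EPSDestroy(&eps));
  PetscCall(MatDestroy(&A));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```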

Thanks,
Pierre

> Maitri
> 
> 
> 


Re: [petsc-users] performance regression with GAMG

2023-08-10 Thread Pierre Jolivet


> On 11 Aug 2023, at 1:14 AM, Mark Adams  wrote:
> 
> BTW, nice bug report ...
>> 
>> So in the first step it coarsens from 150e6 to 5.4e6 DOFs instead of to 
>> 2.6e6 DOFs.
> 
> Yes, this is the critical place to see what is different and going wrong.
> 
> My 3D tests were not that different and I see you lowered the threshold.
> Note, you can set the threshold to zero, but your test is running so much 
> differently than mine there is something else going on.
> Note, the new, bad, coarsening rate of 30:1 is what we tend to shoot for in 
> 3D.
> 
> So it is not clear what the problem is.  Some questions:
> 
> * do you have a picture of this mesh to show me?
> * what do you mean by Q1-Q2 elements?
> 
> It would be nice to see if the new and old codes are similar without 
> aggressive coarsening.
> This was the intended change of the major change in this time frame as you 
> noticed.
> If these jobs are easy to run, could you check that the old and new versions 
> are similar with "-pc_gamg_square_graph  0 ",  ( and you only need one time 
> step).
> All you need to do is check that the first coarse grid has about the same 
> number of equations (large).
> 
> BTW, I am starting to think I should add the old method back as an option. I 
> did not think this change would cause large differences.

Not op, but that would be extremely valuable, IMHO.
This is impacting codes left, right, and center (see, e.g., another research 
group left wondering https://github.com/feelpp/feelpp/issues/2138).

Mini-rant: as developers, we are being asked to maintain backward compatibility 
of the API/headers, but there is no such enforcement for the numerics.
A breakage in the API is “easy” to fix: you get a compilation error, and you 
either fix your code or stick to an older version of PETSc.
Changes in the numerics trigger silent errors which are much more delicate to 
fix because users do not know whether something needs to be addressed in their 
code or if there is a change in PETSc.
I don’t see the point of enforcing one backward compatibility but not the other.

Thanks,
Pierre

> Thanks,
> Mark
> 
> 
>  
>> Note that we are providing the rigid body near nullspace, 
>> hence the bs=3 to bs=6.
>> We have tried different values for the gamg_threshold but it doesn't 
>> really seem to significantly alter the coarsening amount in that first step.
>> 
>> Do you have any suggestions for further things we should try/look at? 
>> Any feedback would be much appreciated
>> 
>> Best wishes
>> Stephan Kramer
>> 
>> Full logs including log_view timings available from 
>> https://github.com/stephankramer/petsc-scaling/
>> 
>> In particular:
>> 
>> https://github.com/stephankramer/petsc-scaling/blob/main/before/Level_5/output_2.dat
>> https://github.com/stephankramer/petsc-scaling/blob/main/after/Level_5/output_2.dat
>> https://github.com/stephankramer/petsc-scaling/blob/main/before/Level_6/output_2.dat
>> https://github.com/stephankramer/petsc-scaling/blob/main/after/Level_6/output_2.dat
>> https://github.com/stephankramer/petsc-scaling/blob/main/before/Level_7/output_2.dat
>> https://github.com/stephankramer/petsc-scaling/blob/main/after/Level_7/output_2.dat
>>  
>> 



Re: [petsc-users] compiler related error (configuring Petsc)

2023-08-01 Thread Pierre Jolivet
Right, so configure did its job properly and told you that your compiler does not 
(fully) support C++11, so there is no point in trying to add extra flags to 
bypass this limitation.
As Satish suggested: either use a newer g++ or configure with --with-cxx=0.

Thanks,
Pierre

> On 2 Aug 2023, at 6:42 AM, maitri ksh  wrote:
> 
> sure, attached.
> 
> On Tue, Aug 1, 2023 at 10:36 PM Jacob Faibussowitsch  > wrote:
>> >> Initially I got an error related
>> >> to 'C++11' flag,
>> 
>> Can you send the configure.log for this as well
>> 
>> Best regards,
>> 
>> Jacob Faibussowitsch
>> (Jacob Fai - booss - oh - vitch)
>> 
>> > On Aug 1, 2023, at 14:42, Satish Balay via petsc-users 
>> > mailto:petsc-users@mcs.anl.gov>> wrote:
>> > 
>> >> gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)
>> > 
>> > Is it possible for you to use a newer version GNU compilers?
>> > 
>> > If not - your alternative is to build PETSc with --with-cxx=0 option
>> > 
>> > But then - you can't use --download-superlu_dist or any pkgs that need
>> > c++ [you could try building them separately though]
>> > 
>> > Satish
>> > 
>> > 
>> > On Tue, 1 Aug 2023, maitri ksh wrote:
>> > 
>> >> I am trying to compile petsc on a cluster ( x86_64-redhat-linux, '
>> >> *configure.log'*  is attached herewith) . Initially I got an error related
>> >> to 'C++11' flag, to troubleshoot this issue, I used 'CPPFLAGS' and
>> >> 'CXXFLAGS' and could surpass the non-compliant error related to c++ 
>> >> compiler
>> >> but now it gives me another error 'cannot find a C preprocessor'. How to
>> >> fix this?
>> >> 
>> > 
>> 
> 



Re: [petsc-users] MUMPS Error 'INFOG(1)=-3 INFO(2)=3' (SPARSE MATRIX INVERSE)

2023-07-27 Thread Pierre Jolivet
MUMPS errors are documented in section 8 of 
https://mumps-solver.org/doc/userguide_5.6.1.pdf

Thanks,
Pierre

> On 27 Jul 2023, at 3:50 PM, maitri ksh  wrote:
> 
> I am using 'MatMumpsGetInverse()' to get the inverse of a sparse matrix. I am 
> using parts of the ex214.c code to get the inverse, but I get an error that 
> seems to be coming from the MUMPS library. Any suggestions?
> 
> ERROR:
> [0]PETSC ERROR: - Error Message 
> --
> [0]PETSC ERROR: Error in external library
> [0]PETSC ERROR: Error reported by MUMPS in solve phase: INFOG(1)=-3 INFO(2)=3
> [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.19.3, unknown
> [0]PETSC ERROR: ./MatInv_MUMPS on a arch-linux-c-debug named LAPTOP-0CP4FI1T 
> by maitri Thu Jul 27 16:35:02 2023
> [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ 
> --with-fc=gfortran --download-mpich --download-fblaslapack --with-matlab 
> --with-matlab-dir=/usr/local/MATLAB/R2022a --download-hdf5 --with-hdf5=1 
> --download-mumps --download-scalapack --download-parmetis --download-metis 
> --download-ptscotch --download-bison --download-cmake
> [0]PETSC ERROR: #1 MatMumpsGetInverse_MUMPS() at 
> /home/maitri/petsc/src/mat/impls/aij/mpi/mumps/mumps.c:2720
> [0]PETSC ERROR: #2 MatMumpsGetInverse() at 
> /home/maitri/petsc/src/mat/impls/aij/mpi/mumps/mumps.c:2753
> [0]PETSC ERROR: #3 main() at MatInv_MUMPS.c:74
> [0]PETSC ERROR: No PETSc Option Table entries
> [0]PETSC ERROR: End of Error Message ---send entire error 
> message to petsc-ma...@mcs.anl.gov--
> application called MPI_Abort(MPI_COMM_SELF, 76) - process 0
> [unset]: PMIU_write error; fd=-1 buf=:cmd=abort exitcode=76 
> message=application called MPI_Abort(MPI_COMM_SELF, 76) - process 0
> :
> system msg for write_line failure : Bad file descriptor
> 
> 
> Maitri
> 



Re: [petsc-users] MPICH C++ compilers when using PETSC --with-cxx=0

2023-07-21 Thread Pierre Jolivet


> On 21 Jul 2023, at 5:11 PM, Robert Crockett via petsc-users 
>  wrote:
> 
> Hello,
> I built PETSc with –with-cxx=0 in order to get around a likely Intel C++ 
> compiler bug.
> However, the MPICH that also gets built by PETSc then picks up the wrong C++ 
> compiler; mpicxx -show indicates that it is using G++, while mpicc is 
> correctly using icc.
>  
> Is there a way to get PETSc to pass the correct C++ compiler for the MPICH 
> build when using –with-cxx=0? I need to compile parts of my own program with 
> mpicxx/icpc.

You could use 
$ export MPICH_CXX=icpc
But the fact that there is a mismatch is not a great sign.

Thanks,
Pierre

> Robert Crockett 
> Plasma Simulation Engineer | OCTO - Computational Products
> P: 617.648.8349  M: 415.205.4567
> 
> LAM RESEARCH
> 4650 Cushing Pkwy, Fremont CA 94538 USA 
> lamresearch.com 
> 
>  
> 
> 



Re: [petsc-users] [EXTERNAL] PETSc Installation Assistance

2023-07-17 Thread Pierre Jolivet
> [0]PETSC ERROR: The line numbers in the error traceback are not always exact. 
> [0]PETSC ERROR: #1 PetscEventRegLogGetEvent() at 
> /home/jesus/Desktop/JAF_NML/3rd_Party/PETSc/petsc/src/sys/logging/u
> tils/eventlog.c:622 
> [0]PETSC ERROR: #2 PetscLogEventRegister() at 
> /home/jesus/Desktop/JAF_NML/3rd_Party/PETSc/petsc/src/sys/logging/plog
> .c:802 
> [0]PETSC ERROR: #3 VecInitializePackage() at 
> /home/jesus/Desktop/JAF_NML/3rd_Party/PETSc/petsc/src/vec/vec/interface
> /dlregisvec.c:187 
> [0]PETSC ERROR: #4 VecCreate() at 
> /home/jesus/Desktop/JAF_NML/3rd_Party/PETSc/petsc/src/vec/vec/interface/veccreate.
> c:32 
> [0]PETSC ERROR: #5 DMCreateLocalVector_Section_Private() at 
> /home/jesus/Desktop/JAF_NML/3rd_Party/PETSc/petsc/src/dm
> /interface/dmi.c:80 
> [0]PETSC ERROR: #6 DMCreateLocalVector_Plex() at 
> /home/jesus/Desktop/JAF_NML/3rd_Party/PETSc/petsc/src/dm/impls/plex
> /plexcreate.c:4432 
> [0]PETSC ERROR: #7 DMCreateLocalVector() at 
> /home/jesus/Desktop/JAF_NML/3rd_Party/PETSc/petsc/src/dm/interface/dm.c:
> 1056 
> [0]PETSC ERROR: #8 DMPlexCreateGmsh() at 
> /home/jesus/Desktop/JAF_NML/3rd_Party/PETSc/petsc/src/dm/impls/plex/plexgms
> h.c:1933 
> [0]PETSC ERROR: #9 DMPlexCreateGmshFromFile() at 
> /home/jesus/Desktop/JAF_NML/3rd_Party/PETSc/petsc/src/dm/impls/plex
> /plexgmsh.c:1433 
> [0]PETSC ERROR: #10 JAF_DMPlexCreateFromMesh() at 
> /home/jesus/Desktop/JAF_NML/ApplicationCode/PETSc/PETScCGH5.c:5845 
> application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 
> [unset]: PMIU_write error; fd=-1 buf=:cmd=abort exitcode=59 
> message=application called MPI_Abort(MPI_COMM_WORLD, 59)
> - process 0 
> : 
> system msg for write_line failure : Bad file descriptor 
> [0]PETSC ERROR: 
>  
> [0]PETSC ERROR: Caught signal number 15 Terminate: Some process (or the batch 
> system) has told this process to end 
> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger 
> [0]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and 
> https://petsc.org/release/faq/ 
> [0]PETSC ERROR: -  Stack Frames 
>  
> [0]PETSC ERROR: The line numbers in the error traceback are not always exact. 
> [0]PETSC ERROR: #1 PetscStrcasecmp() at 
> /home/jesus/Desktop/JAF_NML/3rd_Party/PETSc/petsc/src/sys/utils/str.c:285 
> [0]PETSC ERROR: #2 PetscEventRegLogGetEvent() at 
> /home/jesus/Desktop/JAF_NML/3rd_Party/PETSc/petsc/src/sys/logging/u
> tils/eventlog.c:622 
> [0]PETSC ERROR: #3 PetscLogEventRegister() at 
> /home/jesus/Desktop/JAF_NML/3rd_Party/PETSc/petsc/src/sys/logging/plog
> .c:802 
> [0]PETSC ERROR: #4 VecInitializePackage() at 
> /home/jesus/Desktop/JAF_NML/3rd_Party/PETSc/petsc/src/vec/vec/interface
> /dlregisvec.c:188 
> [0]PETSC ERROR: #5 VecCreate() at 
> /home/jesus/Desktop/JAF_NML/3rd_Party/PETSc/petsc/src/vec/vec/interface/veccreate.
> c:32 
> [0]PETSC ERROR: #6 DMCreateLocalVector_Section_Private() at 
> /home/jesus/Desktop/JAF_NML/3rd_Party/PETSc/petsc/src/dm
> /interface/dmi.c:80 
> [0]PETSC ERROR: #7 DMCreateLocalVector_Plex() at 
> /home/jesus/Desktop/JAF_NML/3rd_Party/PETSc/petsc/src/dm/impls/plex
> /plexcreate.c:4432 
> [0]PETSC ERROR: #8 DMCreateLocalVector() at 
> /home/jesus/Desktop/JAF_NML/3rd_Party/PETSc/petsc/src/dm/interface/dm.c:
> 1056 
> [0]PETSC ERROR: #9 DMPlexCreateGmsh() at 
> /home/jesus/Desktop/JAF_NML/3rd_Party/PETSc/petsc/src/dm/impls/plex/plexgms
> h.c:1933 
> [0]PETSC ERROR: #10 DMPlexCreateGmshFromFile() at 
> /home/jesus/Desktop/JAF_NML/3rd_Party/PETSc/petsc/src/dm/impls/ple
> x/plexgmsh.c:1433 
> [0]PETSC ERROR: #11 JAF_DMPlexCreateFromMesh() at 
> /home/jesus/Desktop/JAF_NML/ApplicationCode/PETSc/PETScCGH5.c:5845 
> application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 
> [unset]: PMIU_write error; fd=-1 buf=:cmd=abort exitcode=59 
> message=application called MPI_Abort(MPI_COMM_WORLD, 59)
> - process 0 
> : 
> system msg for write_line failure : Bad file descriptor 
> -- 
> mpiexec detected that one or more processes exited with non-zero status, thus 
> causing 
> the job to be terminated. The first process to do so was: 
> 
>  Process name: [[33478,1],2] 
>  Exit code:15
> ===
> 
> Here are the options I give ./configure:
> 
> ./configure --download-mpich=yes --download-viennacl=yes --download-hdf5=yes 
> --download-chaco=yes --download-metis=yes --download-parmetis=yes 
> --download-cgns=yes
> From: Pierre Jolivet mailto:pierre.joli...@lip6.fr>>
> Sent: Monday, July 17, 2023 1:58 PM
> To: Ferrand, Jesus A. mail

Re: [petsc-users] PETSc Installation Assistance

2023-07-17 Thread Pierre Jolivet
https://petsc.org/release/faq/#what-does-the-message-hwloc-linux-ignoring-pci-device-with-non-16bit-domain-mean

Thanks,
Pierre

> On 17 Jul 2023, at 7:51 PM, Ferrand, Jesus A.  wrote:
> 
> Greetings.
> 
> I recently changed operating systems (Ubuntu 20.04 -> Debian 12 "Bookworm") 
> and tried to reinstall PETSc.
> I tried doing the usual as described in 
> (https://petsc.org/release/install/download/#recommended-obtain-release-version-with-git):
> git clone/pull
> ./configure -- ... --
> make install
> make check
> Everything proceeds smoothly until the "make check" step, where I get the 
> following error:
> ==
> Running check examples to verify correct installation 
> Using PETSC_DIR=/home/jesus/Desktop/JAF_NML/3rd_Party/PETSc/petsc and 
> PETSC_ARCH=arch-linux-c-debug 
> Possible error running C/C++ src/snes/tutorials/ex19 with 1 MPI process 
> See https://petsc.org/release/faq/ 
> hwloc/linux: Ignoring PCI device with non-16bit domain. 
> Pass --enable-32bits-pci-domain to configure to support such devices 
> (warning: it would break the library ABI, don't enable unless really needed). 
> lid velocity = 0.0016, prandtl # = 1., grashof # = 1. 
> Number of SNES iterations = 2 
> Possible error running C/C++ src/snes/tutorials/ex19 with 2 MPI processes 
> See https://petsc.org/release/faq/ 
> hwloc/linux: Ignoring PCI device with non-16bit domain. 
> Pass --enable-32bits-pci-domain to configure to support such devices 
> (warning: it would break the library ABI, don't enable unless really needed). 
> lid velocity = 0.0016, prandtl # = 1., grashof # = 1. 
> Number of SNES iterations = 2 
> 0a1,3 
> > hwloc/linux: Ignoring PCI device with non-16bit domain. 
> > Pass --enable-32bits-pci-domain to configure to support such devices 
> > (warning: it would break the library ABI, don't enable unless really 
> > needed). 
> /home/jesus/Desktop/JAF_NML/3rd_Party/PETSc/petsc/src/vec/vec/tests 
> Possible problem with ex47 running with hdf5, diffs above 
> = 
> Possible error running Fortran example src/snes/tutorials/ex5f with 1 MPI 
> process 
> See https://petsc.org/release/faq/ 
> hwloc/linux: Ignoring PCI device with non-16bit domain. 
> Pass --enable-32bits-pci-domain to configure to support such devices 
> (warning: it would break the library ABI, don't enable unless really needed). 
> Number of SNES iterations = 3 
> Completed test examples 
> Error while running make check 
> gmake[1]: *** [makefile:123: check] Error 1 
> make: *** [GNUmakefile:17: check] Error 2
> ==
> 
> 
> I tried reinstalling the same version I was able to use prior to changing 
> OS's (PETSc 3.18.3, via tarball) and get a similar error.
> ==
> make PETSC_DIR=/home/jesus/Desktop/JAF_NML/3rd_P
> arty/newPETSC/petsc-3.18.3 PETSC_ARCH=arch-linux-c-debug check 
> Running check examples to verify correct installation 
> Using PETSC_DIR=/home/jesus/Desktop/JAF_NML/3rd_Party/newPETSC/petsc-3.18.3 
> and PETSC_ARCH=arch-linux-c-debug 
> Possible error running C/C++ src/snes/tutorials/ex19 with 1 MPI process 
> See https://petsc.org/release/faq/ 
> hwloc/linux: Ignoring PCI device with non-16bit domain. 
> Pass --enable-32bits-pci-domain to configure to support such devices 
> (warning: it would break the library ABI, don't enable unless really needed). 
> lid velocity = 0.0016, prandtl # = 1., grashof # = 1. 
> Number of SNES iterations = 2 
> Possible error running C/C++ src/snes/tutorials/ex19 with 2 MPI processes 
> See https://petsc.org/release/faq/ 
> hwloc/linux: Ignoring PCI device with non-16bit domain. 
> Pass --enable-32bits-pci-domain to configure to support such devices 
> (warning: it would break the library ABI, don't enable unless really needed). 
> hwloc/linux: Ignoring PCI device with non-16bit domain. 
> Pass --enable-32bits-pci-domain to configure to support such devices 
> (warning: it would break the library ABI, don't enable unless really needed). 
> lid velocity = 0.0016, prandtl # = 1., grashof # = 1. 
> Number of SNES iterations = 2 
> 0a1,3 
> > hwloc/linux: Ignoring PCI device with non-16bit domain. 
> > Pass --enable-32bits-pci-domain to configure to support such devices 
> > (warning: it would break the library ABI, don't enable unless really 
> > needed). 
> /home/jesus/Desktop/JAF_NML/3rd_Party/newPETSC/petsc-3.18.3/src/vec/vec/tests 
> Possible problem with ex47 running with hdf5, diffs above 
> = 
> Possible error running Fortran example src/snes/tutorials/ex5f with 1 MPI 
> process 
> See https://petsc.org/release/faq/ 
> hwloc/linux: Ignoring PCI device with non-16bit domain. 
> Pass --enable-32bits-pci-domain to configure to support such devices 
> (warning: it would break the library ABI, don't 

Re: [petsc-users] Near null space for a fieldsplit in petsc4py

2023-07-13 Thread Pierre Jolivet
Dear Nicolas,

> On 13 Jul 2023, at 10:17 AM, TARDIEU Nicolas  wrote:
> 
> Dear Pierre,
> 
> You are absolutely right. I was using a --with-debugging=0 (aka release) 
> install and this is definitely an error.
> Once I used my debug install, I found the way to fix my problem. The solution 
> is in the attached script: I first need to extract the correct block from the 
> PC operator's MatNest and then append the null space to it.
> Anyway this is a bit tricky...

Yep, it’s the same with all “nested” solvers: fieldsplit, ASM, MG, you name it.
You first need the initial PCSetUp() so that the bare minimum is put in place; 
then you have to fetch things yourself and adapt them to your needs.
We had a similar discussion with the MEF++ people last week; there is currently 
no way around this, AFAIK.
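As a rough illustration of that pattern in C (the petsc4py sequence follows the same logic; the 2x2 MatNest layout and the coords vector are assumptions):

```c
Mat          Kmat, Kpc, A00;
MatNullSpace nns;

PetscCall(PCSetUp(pc));                               /* put the bare minimum in place */
PetscCall(PCGetOperators(pc, &Kmat, &Kpc));           /* here assumed to be a MatNest  */
PetscCall(MatNestGetSubMat(Kpc, 0, 0, &A00));         /* the physical block            */
PetscCall(MatNullSpaceCreateRigidBody(coords, &nns)); /* or MatNullSpaceCreate() with your own basis */
PetscCall(MatSetNearNullSpace(A00, nns));
PetscCall(MatNullSpaceDestroy(&nns));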

Thanks,
Pierre

> Regards, 
> Nicolas
> 
> From: pierre.joli...@lip6.fr 
> Sent: Wednesday, July 12, 2023, 19:52
> To: TARDIEU Nicolas 
> Cc: petsc-users@mcs.anl.gov 
> Subject: Re: [petsc-users] Near null space for a fieldsplit in petsc4py
>  
> 
> > On 12 Jul 2023, at 6:04 PM, TARDIEU Nicolas via petsc-users 
> >  wrote:
> > 
> > Dear PETSc team,
> > 
> > In the attached example, I set up a block pc for a saddle-point problem in 
> > petsc4py. The IS define the unknowns, namely some physical quantity (phys) 
> > and a Lagrange multiplier (lags).
> > I would like to attach a near null space to the physical block, in order to 
> > get the best performance from an AMG pc. 
> > I have been trying hard, attaching it to the initial block, to the IS but 
> > no matter what I am doing, when it comes to "ksp_view", no near null space 
> > is attached to the matrix.
> > 
> > Could you please help me figure out what I am doing wrong ?
> 
> Are you using a double-precision 32-bit integers real build of PETSc?
> Is it --with-debugging=0?
> Because with my debug build, I get the following error (thus explaining why 
> it’s not attached to the KSP).
> Traceback (most recent call last):
>   File "/Volumes/Data/Downloads/test/test_NullSpace.py", line 35, in 
> ns = NullSpace().create(True, [v], comm=comm)
>  
>   File "petsc4py/PETSc/Mat.pyx", line 5611, in petsc4py.PETSc.NullSpace.create
> petsc4py.PETSc.Error: error code 62
> [0] MatNullSpaceCreate() at 
> /Volumes/Data/repositories/petsc/src/mat/interface/matnull.c:249
> [0] Invalid argument
> [0] Vector 0 must have 2-norm of 1.0, it is 22.3159
> 
> Furthermore, if you set yourself the constant vector in the near null-space, 
> then the first argument of create() must be False, otherwise, you’ll have 
> twice the same vector, and you’ll end up with another error (the vectors in 
> the near null-space must be orthonormal).
> If things still don’t work after those couple of fixes, please feel free to 
> send an up-to-date reproducer.
> 
> Thanks,
> Pierre
> 
> > Thanks,
> > Nicolas
> > 
> > 
> > 
> > 

Re: [petsc-users] Near null space for a fieldsplit in petsc4py

2023-07-12 Thread Pierre Jolivet


> On 12 Jul 2023, at 6:04 PM, TARDIEU Nicolas via petsc-users 
>  wrote:
> 
> Dear PETSc team,
> 
> In the attached example, I set up a block pc for a saddle-point problem in 
> petsc4py. The IS define the unknowns, namely some physical quantity (phys) 
> and a Lagrange multiplier (lags).
> I would like to attach a near null space to the physical block, in order to 
> get the best performance from an AMG pc. 
> I have been trying hard, attaching it to the initial block, to the IS but no 
> matter what I am doing, when it comes to "ksp_view", no near null space is 
> attached to the matrix.
> 
> Could you please help me figure out what I am doing wrong ?

Are you using a double-precision 32-bit integers real build of PETSc?
Is it --with-debugging=0?
Because with my debug build, I get the following error (thus explaining why 
it’s not attached to the KSP).
Traceback (most recent call last):
  File "/Volumes/Data/Downloads/test/test_NullSpace.py", line 35, in 
ns = NullSpace().create(True, [v], comm=comm)
 
  File "petsc4py/PETSc/Mat.pyx", line 5611, in petsc4py.PETSc.NullSpace.create
petsc4py.PETSc.Error: error code 62
[0] MatNullSpaceCreate() at 
/Volumes/Data/repositories/petsc/src/mat/interface/matnull.c:249
[0] Invalid argument
[0] Vector 0 must have 2-norm of 1.0, it is 22.3159

Furthermore, if you set the constant vector in the near null-space yourself, 
then the first argument of create() must be False; otherwise, you’ll have the 
same vector twice, and you’ll end up with another error (the vectors in the 
near null-space must be orthonormal).
If things still don’t work after those couple of fixes, please feel free to 
send an up-to-date reproducer.
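Concretely, the corrected sequence amounts to something like this (shown in C; v is whatever near-kernel vector you provide, and A00 is the block the AMG will see):

```c
Vec          v;
MatNullSpace nns;

PetscCall(VecNormalize(v, NULL)); /* the vectors must have unit 2-norm */
/* PETSC_FALSE: do not add the constant vector on top of the one provided */
PetscCall(MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_FALSE, 1, &v, &nns));
PetscCall(MatSetNearNullSpace(A00, nns));
PetscCall(MatNullSpaceDestroy(&nns));
```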

Thanks,
Pierre

> Thanks,
> Nicolas
> 
> 
> 
> 
> 



Re: [petsc-users] Scalable Solver for Incompressible Flow

2023-06-23 Thread Pierre Jolivet

> On 23 Jun 2023, at 10:06 PM, Pierre Jolivet  wrote:
> 
> 
>> On 23 Jun 2023, at 9:39 PM, Alexander Lindsay  
>> wrote:
>> 
>> Ah, I see that if I use Pierre's new 'full' option for 
>> -mat_schur_complement_ainv_type
> 
> That was not initially done by me

Oops, sorry for the noise, looks like it was done by me indeed in 
9399e4fd88c6621aad8fe9558ce84df37bd6fada…

Thanks,
Pierre

> (though I recently tweaked MatSchurComplementComputeExplicitOperator() a bit 
> to use KSPMatSolve(), so that if you have a small Schur complement — which is 
> not really the case for NS — this could be a viable option, it was previously 
> painfully slow).
> 
> Thanks,
> Pierre
> 
>> that I get a single iteration for the Schur complement solve with LU. That's 
>> a nice testing option
>> 
>> On Fri, Jun 23, 2023 at 12:02 PM Alexander Lindsay > <mailto:alexlindsay...@gmail.com>> wrote:
>>> I guess it is because the inverse of the diagonal form of A00 becomes a 
>>> poor representation of the inverse of A00? I guess naively I would have 
>>> thought that the blockdiag form of A00 is A00
>>> 
>>> On Fri, Jun 23, 2023 at 10:18 AM Alexander Lindsay 
>>> mailto:alexlindsay...@gmail.com>> wrote:
>>>> Hi Jed, I will come back with answers to all of your questions at some 
>>>> point. I mostly just deal with MOOSE users who come to me and tell me 
>>>> their solve is converging slowly, asking me how to fix it. So I generally 
>>>> assume they have built an appropriate mesh and problem size for the 
>>>> problem they want to solve and added appropriate turbulence modeling 
>>>> (although my general assumption is often violated).
>>>> 
>>>> > And to confirm, are you doing a nonlinearly implicit velocity-pressure 
>>>> > solve?
>>>> 
>>>> Yes, this is our default.
>>>> 
>>>> A general question: it seems that it is well known that the quality of 
>>>> selfp degrades with increasing advection. Why is that?
>>>> 
>>>> On Wed, Jun 7, 2023 at 8:01 PM Jed Brown >>> <mailto:j...@jedbrown.org>> wrote:
>>>>> Alexander Lindsay >>>> <mailto:alexlindsay...@gmail.com>> writes:
>>>>> 
>>>>> > This has been a great discussion to follow. Regarding
>>>>> >
>>>>> >> when time stepping, you have enough mass matrix that cheaper 
>>>>> >> preconditioners are good enough
>>>>> >
>>>>> > I'm curious what some algebraic recommendations might be for high Re in
>>>>> > transients. 
>>>>> 
>>>>> What mesh aspect ratio and streamline CFL number? Assuming your model is 
>>>>> turbulent, can you say anything about momentum thickness Reynolds number 
>>>>> Re_θ? What is your wall normal spacing in plus units? (Wall resolved or 
>>>>> wall modeled?)
>>>>> 
>>>>> And to confirm, are you doing a nonlinearly implicit velocity-pressure 
>>>>> solve?
>>>>> 
>>>>> > I've found one-level DD to be ineffective when applied monolithically 
>>>>> > or to the momentum block of a split, as it scales with the mesh size. 
>>>>> 
>>>>> I wouldn't put too much weight on "scaling with mesh size" per se. You 
>>>>> want an efficient solver for the coarsest mesh that delivers sufficient 
>>>>> accuracy in your flow regime. Constants matter.
>>>>> 
>>>>> Refining the mesh while holding time steps constant changes the advective 
>>>>> CFL number as well as cell Peclet/cell Reynolds numbers. A meaningful 
>>>>> scaling study is to increase Reynolds number (e.g., by growing the 
>>>>> domain) while keeping mesh size matched in terms of plus units in the 
>>>>> viscous sublayer and Kolmogorov length in the outer boundary layer. That 
>>>>> turns out to not be a very automatic study to do, but it's what matters 
>>>>> and you can spend a lot of time chasing ghosts with naive scaling studies.
> 



Re: [petsc-users] Scalable Solver for Incompressible Flow

2023-06-23 Thread Pierre Jolivet

> On 23 Jun 2023, at 9:39 PM, Alexander Lindsay  
> wrote:
> 
> Ah, I see that if I use Pierre's new 'full' option for 
> -mat_schur_complement_ainv_type

That was not initially done by me (though I recently tweaked 
MatSchurComplementComputeExplicitOperator() a bit to use KSPMatSolve(), so that 
if you have a small Schur complement — which is not really the case for NS — 
this could be a viable option, it was previously painfully slow).

Thanks,
Pierre

> that I get a single iteration for the Schur complement solve with LU. That's 
> a nice testing option
> 
> On Fri, Jun 23, 2023 at 12:02 PM Alexander Lindsay  > wrote:
>> I guess it is because the inverse of the diagonal form of A00 becomes a poor 
>> representation of the inverse of A00? I guess naively I would have thought 
>> that the blockdiag form of A00 is A00
>> 
>> On Fri, Jun 23, 2023 at 10:18 AM Alexander Lindsay > > wrote:
>>> Hi Jed, I will come back with answers to all of your questions at some 
>>> point. I mostly just deal with MOOSE users who come to me and tell me their 
>>> solve is converging slowly, asking me how to fix it. So I generally assume 
>>> they have built an appropriate mesh and problem size for the problem they 
>>> want to solve and added appropriate turbulence modeling (although my 
>>> general assumption is often violated).
>>> 
>>> > And to confirm, are you doing a nonlinearly implicit velocity-pressure 
>>> > solve?
>>> 
>>> Yes, this is our default.
>>> 
>>> A general question: it seems that it is well known that the quality of 
>>> selfp degrades with increasing advection. Why is that?
>>> 
>>> On Wed, Jun 7, 2023 at 8:01 PM Jed Brown >> > wrote:
 Alexander Lindsay >>> > writes:
 
 > This has been a great discussion to follow. Regarding
 >
 >> when time stepping, you have enough mass matrix that cheaper 
 >> preconditioners are good enough
 >
 > I'm curious what some algebraic recommendations might be for high Re in
 > transients. 
 
 What mesh aspect ratio and streamline CFL number? Assuming your model is 
 turbulent, can you say anything about momentum thickness Reynolds number 
 Re_θ? What is your wall normal spacing in plus units? (Wall resolved or 
 wall modeled?)
 
 And to confirm, are you doing a nonlinearly implicit velocity-pressure 
 solve?
 
 > I've found one-level DD to be ineffective when applied monolithically or 
 > to the momentum block of a split, as it scales with the mesh size. 
 
 I wouldn't put too much weight on "scaling with mesh size" per se. You 
 want an efficient solver for the coarsest mesh that delivers sufficient 
 accuracy in your flow regime. Constants matter.
 
 Refining the mesh while holding time steps constant changes the advective 
 CFL number as well as cell Peclet/cell Reynolds numbers. A meaningful 
 scaling study is to increase Reynolds number (e.g., by growing the domain) 
 while keeping mesh size matched in terms of plus units in the viscous 
 sublayer and Kolmogorov length in the outer boundary layer. That turns out 
 to not be a very automatic study to do, but it's what matters and you can 
 spend a lot of time chasing ghosts with naive scaling studies.



Re: [petsc-users] parallel computing error

2023-05-05 Thread Pierre Jolivet


> On 5 May 2023, at 2:00 PM, ­권승리 / 학생 / 항공우주공학과  wrote:
> 
> Dear Pierre Jolivet
> 
> Thank you for your explanation.
> 
> I will try to use a converting matrix.
> 
> I know it's really inefficient, but I need an inverse matrix (inv(A)) itself 
> for my research.
> 
> If parallel computing is difficult to get inv(A),  can I run the part related 
> to MatMatSolve with a single core?

Yes.

Thanks,
Pierre

> Best,
> Seung Lee Kwon
> 
> On Fri, May 5, 2023 at 8:35 PM, Pierre Jolivet <pierre.joli...@lip6.fr> wrote:
>> 
>> 
>>> On 5 May 2023, at 1:25 PM, ­권승리 / 학생 / 항공우주공학과 >> <mailto:ksl7...@snu.ac.kr>> wrote:
>>> 
>>> Dear Matthew Knepley
>>> 
>>> However, I've already installed ScaLAPACK.
>>> cd $PETSC_DIR
>>> ./configure --download-mpich --with-debugging=0 COPTFLAGS='-O3 
>>> -march=native -mtune=native' CXXOPTFLAGS='-O3 -march=native -mtune=native' 
>>> FOPTFLAGS='-O3 -march=native -mtune=native' --download-mumps 
>>> --download-scalapack --download-parmetis --download-metis 
>>> --download-parmetis --download-hpddm --download-slepc
>>> 
>>> Is there some way to use ScaLAPCK?
>> 
>> You need to convert your MatDense to MatSCALAPACK before the call to 
>> MatMatSolve().
>> This library (ScaLAPACK, but also Elemental) has severe limitations with 
>> respect to the matrix distribution.
>> Depending on what you are doing, you may be better of using KSPMatSolve() 
>> and computing only an approximation of the solution with a cheap 
>> preconditioner (I don’t recall you telling us why you need to do such an 
>> operation even though we told you it was not practical — or maybe I’m being 
>> confused by another thread).
>> 
>> Thanks,
>> Pierre
>> 
>>> Or, Can I run the part related to MatMatSolve with a single core? 
>>> 
>>> On Fri, May 5, 2023 at 6:21 PM, Matthew Knepley <knep...@gmail.com> wrote:
>>>> On Fri, May 5, 2023 at 3:49 AM ­권승리 / 학생 / 항공우주공학과 >>> <mailto:ksl7...@snu.ac.kr>> wrote:
>>>>> Dear Barry Smith
>>>>> 
>>>>> Thanks to you, I knew the difference between MATAIJ and MATDENSE.
>>>>> 
>>>>> However, I still have some problems.
>>>>> 
>>>>> There is no problem when I run with a single core. But, MatGetFactor 
>>>>> error occurs when using multi-core.
>>>>> 
>>>>> Could you give me some advice?
>>>>> 
>>>>> The error message is
>>>>> 
>>>>> [0]PETSC ERROR: - Error Message 
>>>>> --
>>>>> [0]PETSC ERROR: See 
>>>>> https://petsc.org/release/overview/linear_solve_table/ for possible LU 
>>>>> and Cholesky solvers
>>>>> [0]PETSC ERROR: MatSolverType petsc does not support matrix type mpidense
>>>> 
>>>> PETSc uses 3rd party packages for parallel dense factorization. You would 
>>>> need to reconfigure with either ScaLAPACK
>>>> or Elemental.
>>>> 
>>>>   Thanks,
>>>> 
>>>>  Matt
>>>>  
>>>>> [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
>>>>> [0]PETSC ERROR: Petsc Release Version 3.18.5, unknown 
>>>>> [0]PETSC ERROR: ./app on a arch-linux-c-opt named ubuntu by ksl Fri May  
>>>>> 5 00:35:23 2023
>>>>> [0]PETSC ERROR: Configure options --download-mpich --with-debugging=0 
>>>>> COPTFLAGS="-O3 -march=native -mtune=native" CXXOPTFLAGS="-O3 
>>>>> -march=native -mtune=native" FOPTFLAGS="-O3 -march=native -mtune=native" 
>>>>> --download-mumps --download-scalapack --download-parmetis 
>>>>> --download-metis --download-parmetis --download-hpddm --download-slepc
>>>>> [0]PETSC ERROR: #1 MatGetFactor() at 
>>>>> /home/ksl/petsc/src/mat/interface/matrix.c:4757
>>>>> [0]PETSC ERROR: #2 main() at 
>>>>> /home/ksl/Downloads/coding_test/coding/a1.c:66
>>>>> [0]PETSC ERROR: No PETSc Option Table entries
>>>>> [0]PETSC ERROR: End of Error Message ---send entire 
>>>>> error message to petsc-ma...@mcs.anl.gov--
>>>>> application called MPI_Abort(MPI_COMM_SELF, 92) - process 0
>>>>> 
>>>>> My code is below

Re: [petsc-users] parallel computing error

2023-05-05 Thread Pierre Jolivet


> On 5 May 2023, at 1:25 PM, ­권승리 / 학생 / 항공우주공학과  wrote:
> 
> Dear Matthew Knepley
> 
> However, I've already installed ScaLAPACK.
> cd $PETSC_DIR
> ./configure --download-mpich --with-debugging=0 COPTFLAGS='-O3 -march=native 
> -mtune=native' CXXOPTFLAGS='-O3 -march=native -mtune=native' FOPTFLAGS='-O3 
> -march=native -mtune=native' --download-mumps --download-scalapack 
> --download-parmetis --download-metis --download-parmetis --download-hpddm 
> --download-slepc
> 
> Is there some way to use ScaLAPCK?

You need to convert your MatDense to MatSCALAPACK before the call to 
MatMatSolve().
This library (ScaLAPACK, but also Elemental) has severe limitations with 
respect to the matrix distribution.
Depending on what you are doing, you may be better off using KSPMatSolve() and 
computing only an approximation of the solution with a cheap preconditioner (I 
don’t recall you telling us why you need to do such an operation even though we 
told you it was not practical — or maybe I’m being confused by another thread).
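For the ScaLAPACK route, the conversion itself is just MatConvert(A, MATSCALAPACK, MAT_INITIAL_MATRIX, &A_sc) before calling MatGetFactor() on the converted matrix. For the KSPMatSolve() route, a rough sketch, reusing A, E, and A_temp from the code quoted below:

```c
KSP ksp;
PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
PetscCall(KSPSetOperators(ksp, A, A));
PetscCall(KSPSetFromOptions(ksp));      /* e.g., -ksp_type gmres -pc_type bjacobi -ksp_rtol 1e-8 */
PetscCall(KSPMatSolve(ksp, E, A_temp)); /* approximates A^{-1} E, one solve per column of E      */
PetscCall(KSPDestroy(&ksp));
```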

Thanks,
Pierre

> Or, Can I run the part related to MatMatSolve with a single core? 
> 
> On Fri, May 5, 2023 at 6:21 PM, Matthew Knepley wrote:
>> On Fri, May 5, 2023 at 3:49 AM ­권승리 / 학생 / 항공우주공학과 > > wrote:
>>> Dear Barry Smith
>>> 
>>> Thanks to you, I knew the difference between MATAIJ and MATDENSE.
>>> 
>>> However, I still have some problems.
>>> 
>>> There is no problem when I run with a single core. But, MatGetFactor error 
>>> occurs when using multi-core.
>>> 
>>> Could you give me some advice?
>>> 
>>> The error message is
>>> 
>>> [0]PETSC ERROR: - Error Message 
>>> --
>>> [0]PETSC ERROR: See https://petsc.org/release/overview/linear_solve_table/ 
>>> for possible LU and Cholesky solvers
>>> [0]PETSC ERROR: MatSolverType petsc does not support matrix type mpidense
>> 
>> PETSc uses 3rd party packages for parallel dense factorization. You would 
>> need to reconfigure with either ScaLAPACK
>> or Elemental.
>> 
>>   Thanks,
>> 
>>  Matt
>>  
>>> [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
>>> [0]PETSC ERROR: Petsc Release Version 3.18.5, unknown 
>>> [0]PETSC ERROR: ./app on a arch-linux-c-opt named ubuntu by ksl Fri May  5 
>>> 00:35:23 2023
>>> [0]PETSC ERROR: Configure options --download-mpich --with-debugging=0 
>>> COPTFLAGS="-O3 -march=native -mtune=native" CXXOPTFLAGS="-O3 -march=native 
>>> -mtune=native" FOPTFLAGS="-O3 -march=native -mtune=native" --download-mumps 
>>> --download-scalapack --download-parmetis --download-metis 
>>> --download-parmetis --download-hpddm --download-slepc
>>> [0]PETSC ERROR: #1 MatGetFactor() at 
>>> /home/ksl/petsc/src/mat/interface/matrix.c:4757
>>> [0]PETSC ERROR: #2 main() at /home/ksl/Downloads/coding_test/coding/a1.c:66
>>> [0]PETSC ERROR: No PETSc Option Table entries
>>> [0]PETSC ERROR: End of Error Message ---send entire 
>>> error message to petsc-ma...@mcs.anl.gov--
>>> application called MPI_Abort(MPI_COMM_SELF, 92) - process 0
>>> 
>>> My code is below:
>>> 
>>> int main(int argc, char** args)
>>> {
>>> Mat A, E, A_temp, A_fac;
>>> PetscMPIInt size;
>>> int n = 15;
>>> PetscInitialize(&argc, &args, NULL, NULL);
>>> PetscCallMPI(MPI_Comm_size(PETSC_COMM_WORLD, &size));
>>> 
>>> PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
>>> PetscCall(MatSetType(A,MATDENSE));
>>> PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n));
>>> PetscCall(MatSetFromOptions(A));
>>> PetscCall(MatSetUp(A));
>>> // Insert values
>>> double val;
>>> for (int i = 0; i < n; i++) {
>>> for (int j = 0; j < n; j++) {
>>> if (i == j){
>>> val = 2.0;
>>> }
>>> else{
>>> val = 1.0;
>>> }
>>> PetscCall(MatSetValue(A, i, j, val, INSERT_VALUES));
>>> }
>>> }
>>> PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
>>> PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
>>> 
>>> // Make Identity matrix
>>> PetscCall(MatCreate(PETSC_COMM_WORLD, &E));
>>> PetscCall(MatSetType(E,MATDENSE));
>>> PetscCall(MatSetSizes(E, PETSC_DECIDE, PETSC_DECIDE, n, n));
>>> PetscCall(MatSetFromOptions(E));
>>> PetscCall(MatSetUp(E));
>>> PetscCall(MatShift(E,1.0));
>>> PetscCall(MatAssemblyBegin(E, MAT_FINAL_ASSEMBLY));
>>> PetscCall(MatAssemblyEnd(E, MAT_FINAL_ASSEMBLY));
>>> 
>>> PetscCall(MatDuplicate(A, MAT_DO_NOT_COPY_VALUES, &A_temp));
>>> PetscCall(MatGetFactor(A, MATSOLVERPETSC, MAT_FACTOR_LU, &A_fac));
>>> 
>>> IS isr, isc; MatFactorInfo info;
>>> MatGetOrdering(A, MATORDERINGNATURAL, &isr, &isc);
>>> PetscCall(MatLUFactorSymbolic(A_fac, A, isr, isc, &info));
>>> PetscCall(MatLUFactorNumeric(A_fac, A, &info));
>>> MatMatSolve(A_fac, E, A_temp);
>>> 
>>> PetscCall(MatView(A_temp, PETSC_VIEWER_STDOUT_WORLD));
>>> 

Re: [petsc-users] 'mpirun' run not found error

2023-05-02 Thread Pierre Jolivet

> On 2 May 2023, at 8:56 AM, ­권승리 / 학생 / 항공우주공학과  wrote:
> 
> Dear developers
> 
> I'm trying to use the mpi, but I'm encountering error messages like below:
> 
> 
> Command 'mpirun' not found, but can be installed with:
> sudo apt install lam-runtime   # version 7.1.4-6build2, or
> sudo apt install mpich # version 3.3.2-2build1
> sudo apt install openmpi-bin   # version 4.0.3-0ubuntu1
> sudo apt install slurm-wlm-torque  # version 19.05.5-1
> //
> 
> However, I've already installed the mpich.
> cd $PETSC_DIR
> ./configure --download-mpich --with-debugging=0 COPTFLAGS='-O3 -march=native 
> -mtune=native' CXXOPTFLAGS='-O3 -march=native -mtune=native' FOPTFLAGS='-O3 
> -march=native -mtune=native' --download-mumps --download-scalapack 
> --download-parmetis --download-metis --download-parmetis --download-hpddm 
> --download-slepc
> 
> Could you recommend some advice related to this?

Most likely you do not want to run mpirun, but 
${PETSC_DIR}/${PETSC_ARCH}/bin/mpirun instead.
Or add ${PETSC_DIR}/${PETSC_ARCH}/bin to your PATH environment variable.

Thanks,
Pierre

> Best,
> Seung Lee Kwon
> -- 
> Seung Lee Kwon, Ph.D.Candidate
> Aerospace Structures and Materials Laboratory
> Department of Mechanical and Aerospace Engineering
> Seoul National University
> Building 300 Rm 503, Gwanak-ro 1, Gwanak-gu, Seoul, South Korea, 08826
> E-mail : ksl7...@snu.ac.kr 
> Office : +82-2-880-7389
> C. P : +82-10-4695-1062



Re: [petsc-users] Question about linking LAPACK library

2023-04-25 Thread Pierre Jolivet

> On 25 Apr 2023, at 11:43 AM, Matthew Knepley  wrote:
> 
> On Mon, Apr 24, 2023 at 11:47 PM ­권승리 / 학생 / 항공우주공학과  > wrote:
>> Dear all
>> 
>> It depends on the problem. It can have hundreds of thousands of degrees of 
>> freedom.
> 
> Suppose your matrix was dense and had 1e6 dofs. The work to invert a matrix 
> is O(N^3) with a small
> constant, so it would take 1e18 = 1 exaflop to invert this matrix and about 
> 10 Terabytes of RAM to store
> it. Is this available to you? PETSc's supports Elemental and SCALAPACK for 
> this kind of calculation.
> 
> If the system is sparse, you could invert it using MUMPS, SuperLU_dist, or 
> Pardiso. Then the work and
> storage depend on the density. There are good estimates for connectivity 
> based on regular grids of given
> dimension. The limiting resource here is usually memory, which motivates 
> people to try iterative methods.
> The convergence of iterative methods depend on detailed properties of your 
> system, like the operator spectrum.

And to wrap this up, if your operator is truly dense, e.g., from BEM or non-local 
discretizations, there are hierarchical formats available such as MatH2Opus and 
MatHtool.
They have efficient matrix-vector product implementations such that you can 
solve linear systems without having to invert (or even store) the coefficient 
matrix explicitly.

Thanks,
Pierre

>   Thanks,
> 
>  Matt
>  
>> best,
>> 
>> Seung Lee Kwon
>> 
>> On Tue, Apr 25, 2023 at 12:32 PM, Barry Smith wrote:
>>> 
>>>   How large are the dense matrices you would like to invert?
>>> 
 On Apr 24, 2023, at 11:27 PM, ­권승리 / 학생 / 항공우주공학과 >>> > wrote:
 
 Dear all
 
 Hello.
 I want to make an inverse matrix like inv(A) in MATLAB.
 
 Are there some methods to inverse matrix in petsc?
 
 If not, I want to use the inverse function in the LAPACK library.
 
 Then, how to use the LAPACK library in petsc? I use the C language.
 
 Best,
 
 Seung Lee Kwon
 
 -- 
 Seung Lee Kwon, Ph.D.Candidate
 Aerospace Structures and Materials Laboratory
 Department of Mechanical and Aerospace Engineering
 Seoul National University
 Building 300 Rm 503, Gwanak-ro 1, Gwanak-gu, Seoul, South Korea, 08826
 E-mail : ksl7...@snu.ac.kr 
 Office : +82-2-880-7389
 C. P : +82-10-4695-1062
>>> 
>> 
>> 
>> -- 
>> Seung Lee Kwon, Ph.D.Candidate
>> Aerospace Structures and Materials Laboratory
>> Department of Mechanical and Aerospace Engineering
>> Seoul National University
>> Building 300 Rm 503, Gwanak-ro 1, Gwanak-gu, Seoul, South Korea, 08826
>> E-mail : ksl7...@snu.ac.kr 
>> Office : +82-2-880-7389
>> C. P : +82-10-4695-1062
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/ 



Re: [petsc-users] question about MatSetLocalToGlobalMapping

2023-04-20 Thread Pierre Jolivet

> On 20 Apr 2023, at 10:28 PM, Zhang, Hong  wrote:
> 
> Pierre,
> 1) Is there any hope to get PDIPM to use a MatNest?
> 
> KKT matrix is indefinite and ill-conditioned, which must be solved using a 
> direct matrix factorization method.

But you are using PCBJACOBI in the paper you attached?
In any case, there are many such systems, e.g., discretizations of the Stokes 
equations, that can be solved with something other than a direct factorization.

> For the current implementation, we use MUMPS Cholesky as default. To use 
> MatNest, what direct solver to use, SCHUR_FACTOR? I do not know how to get it 
> work. 

On the one hand, MatNest can efficiently convert to AIJ or SBAIJ if you want to 
stick to PCLU or PCCHOLESKY.
On the other hand, it makes it easy to switch to PCFIELDSPLIT, which can be used 
to solve saddle-point problems.
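For instance (an untested sketch; pc is the preconditioner and is_phys/is_mult the index sets of the two fields):

```c
/* either flatten the Nest when a direct solver is wanted ... */
Mat Kaij;
PetscCall(MatConvert(K, MATAIJ, MAT_INITIAL_MATRIX, &Kaij));

/* ... or keep the Nest and define the splits for PCFIELDSPLIT */
PetscCall(PCSetType(pc, PCFIELDSPLIT));
PetscCall(PCFieldSplitSetIS(pc, "phys", is_phys));
PetscCall(PCFieldSplitSetIS(pc, "mult", is_mult));
```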

> 2) Is this fixed 
> https://lists.mcs.anl.gov/pipermail/petsc-dev/2020-September/026398.html ?
> I cannot get users to transition away from Ipopt because of these two missing 
> features.
> 
> The existing pdipm is the result of a MS student intern project. None of us 
> involved are experts on the optimization solvers. We made a straightforward 
> parallelization of Ipopt. It indeed needs further work, e.g., more features, 
> better matrix storage, convergence criteria... To our knowledge, parallel 
> pdipm is not available other than our pdipm.

Ipopt can use MUMPS and PARDISO internally, so it’s in some sense parallel 
(using shared memory).
Also, this is not a very potent selling point.
My users who are satisfied with Ipopt as a "non-parallel" black box don’t want 
to have to touch part of their code just to stick it in a parallel black box 
that is limited to the same kind of linear solver and that has severe 
limitations with respect to Hessian/Jacobian/constraint distributions.

Thanks,
Pierre

> We should improve our pdipm. 
> Hong
> 
>> On 20 Apr 2023, at 5:47 PM, Zhang, Hong via petsc-users 
>> mailto:petsc-users@mcs.anl.gov>> wrote:
>> 
>> Karthik,
>> We built a KKT matrix in  TaoSNESJacobian_PDIPM() (see 
>> petsc/src/tao/constrained/impls/ipm/pdipm.c) which assembles several small 
>> matrices into a large KKT matrix in mpiaij format. You could take the same 
>> approach to insert P and P^T into your K.
>> FYI, I attached our paper.
>> Hong
>>  
>> From: petsc-users > > on behalf of Matthew Knepley 
>> mailto:knep...@gmail.com>>
>> Sent: Thursday, April 20, 2023 5:37 AM
>> To: Karthikeyan Chockalingam - STFC UKRI 
>> > >
>> Cc: petsc-users@mcs.anl.gov  
>> mailto:petsc-users@mcs.anl.gov>>
>> Subject: Re: [petsc-users] question about MatSetLocalToGlobalMapping
>>  
>> On Thu, Apr 20, 2023 at 6:13 AM Karthikeyan Chockalingam - STFC UKRI via 
>> petsc-users mailto:petsc-users@mcs.anl.gov>> wrote:
>> Hello,
>>  
>> I created a new thread, thought would it be more appropriate (and is a 
>> continuation of my previous post). I want to construct the below K matrix 
>> (which is composed of submatrices)
>>  
>> K = [A P^T
>>P   0]
>>  
>> Where K is of type MatMPIAIJ. I first constructed the top left [A] using 
>> MatSetValues().
>>  
>> Now, I would like to construct the bottom left [p] and top right [p^T] using 
>> MatSetValuesLocal().
>>  
>> To use  MatSetValuesLocal(),  I first have to create a local-to-global 
>> mapping using ISLocalToGlobalMappingCreate. I have created two mapping 
>> row_mapping and column_mapping.
>> 
>> I do not understand why they are not the same map. Maybe I was unclear 
>> before. It looks like you have two fields, say phi and lambda, where lambda 
>> is a Lagrange multiplier imposing some constraint. Then you get a saddle 
>> point like this. You can imagine matrices
>> 
>>   (phi, phi)--> A
>>   (phi, lambda) --> P^T
>>   (lambda, phi) --> P
>> 
>> So you make a L2G map for the phi field and the lambda field. Oh, you are 
>> calling them row and col map, but they are my phi and lambda
>> maps. I do not like the row and col names since in P they reverse.
>>  
>> Q1) At what point should I declare MatSetLocalToGlobalMapping – is it just 
>> before I use MatSetValuesLocal()?
>> 
>> Okay, it is good you are asking this because my thinking was somewhat 
>> confused. I think the precise steps are:
>> 
>>   1) Create the large saddle point matrix K
>> 
>> 1a) We must call 
>> https://petsc.org/main/manualpages/Mat/MatSetLocalToGlobalMapping/ on it. In 
>> the simplest case, this just maps
>>the local rows numbers [0, Nrows) to the global rows numbers 
>> [rowStart, rowStart + Nrows).
>> 
>>   2) To form each piece:
>> 
>> 2a) Extract that block using 
>> https://petsc.org/main/manualpages/Mat/MatGetLocalSubMatrix/
>> 
>>This gives back a Mat object that you subsequently restore using 
>> https://petsc.org/main/manualpages/Mat/MatRestoreLocalSubMatrix/
>> 
>>  2b) Insert values 

Re: [petsc-users] question about MatSetLocalToGlobalMapping

2023-04-20 Thread Pierre Jolivet
Hong,
1) Is there any hope to get PDIPM to use a MatNest?
2) Is this fixed 
https://lists.mcs.anl.gov/pipermail/petsc-dev/2020-September/026398.html ?
I cannot get users to transition away from Ipopt because of these two missing 
features.

Thanks,
Pierre

> On 20 Apr 2023, at 5:47 PM, Zhang, Hong via petsc-users 
>  wrote:
> 
> Karthik,
> We built a KKT matrix in  TaoSNESJacobian_PDIPM() (see 
> petsc/src/tao/constrained/impls/ipm/pdipm.c) which assembles several small 
> matrices into a large KKT matrix in mpiaij format. You could take the same 
> approach to insert P and P^T into your K.
> FYI, I attached our paper.
> Hong
> From: petsc-users  on behalf of Matthew 
> Knepley 
> Sent: Thursday, April 20, 2023 5:37 AM
> To: Karthikeyan Chockalingam - STFC UKRI 
> Cc: petsc-users@mcs.anl.gov 
> Subject: Re: [petsc-users] question about MatSetLocalToGlobalMapping
>  
> On Thu, Apr 20, 2023 at 6:13 AM Karthikeyan Chockalingam - STFC UKRI via 
> petsc-users mailto:petsc-users@mcs.anl.gov>> wrote:
> Hello,
>  
> I created a new thread, thought would it be more appropriate (and is a 
> continuation of my previous post). I want to construct the below K matrix 
> (which is composed of submatrices)
>  
> K = [A P^T
>P   0]
>  
> Where K is of type MatMPIAIJ. I first constructed the top left [A] using 
> MatSetValues().
>  
> Now, I would like to construct the bottom left [p] and top right [p^T] using 
> MatSetValuesLocal().
>  
> To use  MatSetValuesLocal(),  I first have to create a local-to-global 
> mapping using ISLocalToGlobalMappingCreate. I have created two mapping 
> row_mapping and column_mapping.
> 
> I do not understand why they are not the same map. Maybe I was unclear 
> before. It looks like you have two fields, say phi and lambda, where lambda 
> is a Lagrange multiplier imposing some constraint. Then you get a saddle 
> point like this. You can imagine matrices
> 
>   (phi, phi)--> A
>   (phi, lambda) --> P^T
>   (lambda, phi) --> P
> 
> So you make a L2G map for the phi field and the lambda field. Oh, you are 
> calling them row and col map, but they are my phi and lambda
> maps. I do not like the row and col names since in P they reverse.
>  
> Q1) At what point should I declare MatSetLocalToGlobalMapping – is it just 
> before I use MatSetValuesLocal()?
> 
> Okay, it is good you are asking this because my thinking was somewhat 
> confused. I think the precise steps are:
> 
>   1) Create the large saddle point matrix K
> 
> 1a) We must call 
> https://petsc.org/main/manualpages/Mat/MatSetLocalToGlobalMapping/ on it. In 
> the simplest case, this just maps
>the local rows numbers [0, Nrows) to the global rows numbers 
> [rowStart, rowStart + Nrows).
> 
>   2) To form each piece:
> 
> 2a) Extract that block using 
> https://petsc.org/main/manualpages/Mat/MatGetLocalSubMatrix/
> 
>This gives back a Mat object that you subsequently restore using 
> https://petsc.org/main/manualpages/Mat/MatRestoreLocalSubMatrix/
> 
>  2b) Insert values using 
> https://petsc.org/main/manualpages/Mat/MatSetValuesLocal/
> 
> The local indices used for insertion here are indices relative to 
> the block itself, and the L2G map for this matrix
> has been rewritten to insert into that block in the larger 
> matrix. Thus this looks like just calling MatSetValuesLocal()
> on the smaller matrix block, but inserts correctly into the 
> larger matrix.
> 
> Therefore, the code you write code in 2) could work equally well making the 
> large matrix from 1), or independent smaller matrix blocks.
> 
> Does this make sense?
> 
>   Thanks,
> 
>  Matt
>  
> I will use MatSetLocalToGlobalMapping(K, row_mapping, column_mapping) to 
> build the bottom left [P].
>  
>  
> Q2) Can now I reset the mapping as MatSetLocalToGlobalMapping(K, 
> column_mapping, row_mapping) to build the top right [P^T]? 
>  
>  
> Many thanks!
>  
> Kind regards,
> Karthik.
>  
>  
>  
>  
>  
>  
>  
>  
>  
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/ 
>  interior point method for the solution of dynamic.pdf>
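
For completeness, below is a minimal C sketch of the steps outlined above (the sizes, 
index sets, and the inserted value are purely illustrative and not taken from this thread):

```c
#include <petscmat.h>

/* Assemble a saddle-point matrix K = [A P^T; P 0] block by block.
   nPhi and nLambda are the local sizes of the phi and lambda fields. */
PetscErrorCode AssembleSaddlePoint(MPI_Comm comm, PetscInt nPhi, PetscInt nLambda, Mat *K)
{
  Mat      P;                /* the (lambda, phi) block of K */
  IS       isPhi, isLambda;
  PetscInt i, rStart;

  PetscFunctionBeginUser;
  /* 1) create the large saddle-point matrix K */
  PetscCall(MatCreate(comm, K));
  PetscCall(MatSetSizes(*K, nPhi + nLambda, nPhi + nLambda, PETSC_DECIDE, PETSC_DECIDE));
  PetscCall(MatSetType(*K, MATAIJ));
  PetscCall(MatSetUp(*K));
  /* 1a) local-to-global mapping for K: here simply [0, nPhi+nLambda) -> [rStart, rStart+nPhi+nLambda) */
  PetscCall(MatGetOwnershipRange(*K, &rStart, NULL));
  {
    ISLocalToGlobalMapping l2g;
    PetscInt              *idx;
    PetscCall(PetscMalloc1(nPhi + nLambda, &idx));
    for (i = 0; i < nPhi + nLambda; ++i) idx[i] = rStart + i;
    PetscCall(ISLocalToGlobalMappingCreate(comm, 1, nPhi + nLambda, idx, PETSC_OWN_POINTER, &l2g));
    PetscCall(MatSetLocalToGlobalMapping(*K, l2g, l2g));
    PetscCall(ISLocalToGlobalMappingDestroy(&l2g));
  }
  /* index sets describing where each field lives, in the local numbering of K */
  PetscCall(ISCreateStride(PETSC_COMM_SELF, nPhi, 0, 1, &isPhi));        /* phi:    local rows [0, nPhi)            */
  PetscCall(ISCreateStride(PETSC_COMM_SELF, nLambda, nPhi, 1, &isLambda)); /* lambda: local rows [nPhi, nPhi+nLambda) */
  /* 2a) extract the (lambda, phi) block, 2b) insert values with indices relative to the block */
  PetscCall(MatGetLocalSubMatrix(*K, isLambda, isPhi, &P));
  {
    PetscInt    row = 0, col = 0; /* block-local indices, purely illustrative */
    PetscScalar v   = 1.0;        /* illustrative value */
    PetscCall(MatSetValuesLocal(P, 1, &row, 1, &col, &v, ADD_VALUES));
  }
  PetscCall(MatRestoreLocalSubMatrix(*K, isLambda, isPhi, &P));
  /* the (phi, lambda) block P^T is handled the same way, with the two ISes swapped */
  PetscCall(ISDestroy(&isPhi));
  PetscCall(ISDestroy(&isLambda));
  PetscCall(MatAssemblyBegin(*K, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(*K, MAT_FINAL_ASSEMBLY));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```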



Re: [petsc-users] CG fails to converge in parallel

2023-04-20 Thread Pierre Jolivet


> On 20 Apr 2023, at 7:53 AM, Bojan Niceno  
> wrote:
> 
> Dear all,
> 
> 
> I am solving a Laplace equation with finite volume method on an unstructured 
> grid with a Fortran code I have developed, and PETSc 3.19 library.
> 
> I first used cg solver with asm preconditioner, which converges nicely when 
> executed sequentially, but fails for MPI parallel version.  I believed that 
> there must be an error in which I set up and assemble parallel matrices for 
> PETSc, but I soon noticed that if I use bicg with asm, everything works fine, 
> the parallel bicg/asm shows almost the same convergence as sequential version.

KSPCG requires a symmetric PC.
By default, PCASMType is PC_ASM_RESTRICT, which yields a non-symmetric 
preconditioner.
With a single process, this does not matter, but with more than one process, it 
does.
If you switch to -pc_asm_type basic, KSPCG should converge.
That being said, for the Laplace equation, there are much faster alternatives to 
PCASM, e.g., PCGAMG.
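
For reference, the corresponding option sets would look something like this (a rough 
sketch; the exact option prefix depends on how the solver is set up in your code):

  -ksp_type cg -pc_type asm -pc_asm_type basic
  -ksp_type cg -pc_type gamg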

Thanks,
Pierre

> I could carry on with bicg, but I am still worried a little bit.  To my 
> knowledge of Krylov solvers, which is admittedly basic since I am a physicist 
> only using linear algebra, the convergence of bicg should be very similar to 
> that of cg when symmetric systems are solved.  When I run my cases 
> sequentially, I see that's indeed the case.  But in parallel, bicg converges 
> and cg fails.
> 
> Do you see the above issues as an anomaly and if so, could you advise how to 
> search for a cause?  
> 
> 
> Kind regards,
> 
> 
> Bojan
> 
> 
> 
> 
> 



Re: [petsc-users] PCHPDDM and matrix type

2023-04-17 Thread Pierre Jolivet
1) PCHPDDM handles AIJ, BAIJ, SBAIJ, IS, NORMAL, NORMALHERMITIAN, 
SCHURCOMPLEMENT, HTOOL
2) This PC is based on domain decomposition, with no support yet for “over 
decomposition”. If you run with a single process, it’s like PCASM or PCBJACOBI: 
you’ll get the same behavior as if you were just using the sub PC (in this 
case, an exact factorization)
3) The error you are seeing is likely due to a failure while coarsening; I will 
ask you for some info
4) Unrelated, but you should probably not use --with-cxx-dialect=C++11 and 
instead stick to --with-cxx-dialect=11 (unless you have a good reason to)
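
For reference, a typical way to try the PC from the command line looks something like 
this (a rough sketch only; the tuning is problem-dependent):

  -pc_type hpddm -pc_hpddm_levels_1_sub_pc_type lu -pc_hpddm_levels_1_eps_nev 20 -pc_hpddm_coarse_pc_type lu

where -pc_hpddm_levels_1_eps_nev sets the number of local eigenvectors used to build 
the coarse space.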

Thanks,
Pierre

> On 18 Apr 2023, at 1:26 AM, Alexander Lindsay  
> wrote:
> 
> I don't really get much more of a stack trace out:
> 
> [0]PETSC ERROR: [1]PETSC ERROR: - Error Message 
> --
> [0]PETSC ERROR: Invalid argument
> [0]PETSC ERROR: - Error Message 
> --
>  
> [0]PETSC ERROR: WARNING! There are option(s) set that were not used! Could be 
> the program crashed before they were used or a spelling mistake, etc!
> [1]PETSC ERROR: Invalid argument
> [1]PETSC ERROR:  
> [0]PETSC ERROR:   Option left: name:-i value: full_upwinding_2D.i source: 
> command line
> [0]PETSC ERROR: [1]PETSC ERROR: WARNING! There are option(s) set that were 
> not used! Could be the program crashed before they were used or a spelling 
> mistake, etc!
>   Option left: name:-ksp_converged_reason value: ::failed source: code
> [0]PETSC ERROR:   Option left: name:-pc_hpddm_coarse_mat_type value: baij 
> source: command line
> [0]PETSC ERROR:   Option left: name:-pc_hpddm_coarse_pc_type value: lu 
> source: command line
> [1]PETSC ERROR:   Option left: name:-i value: full_upwinding_2D.i source: 
> command line
> [1]PETSC ERROR:   Option left: name:-ksp_converged_reason value: ::failed 
> source: code
> [0]PETSC ERROR:   Option left: name:-snes_converged_reason value: ::failed 
> source: code
> [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
> [0]PETSC ERROR: Petsc Development GIT revision: v3.17.4-3368-g5a48edb989d  
> GIT Date: 2023-04-16 17:35:24 +
> [1]PETSC ERROR:   Option left: name:-pc_hpddm_coarse_mat_type value: baij 
> source: command line
> [1]PETSC ERROR:   Option left: name:-pc_hpddm_coarse_pc_type value: lu 
> source: command line
> [0]PETSC ERROR: ../../../moose_test-opt on a arch-moose named rod.hpc.inl.gov 
>  by lindad Mon Apr 17 16:11:09 2023
> [0]PETSC ERROR: Configure options --download-hypre=1 
> --with-shared-libraries=1 --download-hdf5=1 --with-hdf5-fortran-bindings=0   
> --with-debugging=no --download-fblaslapack=1 --download-metis=1 
> --download-ptscotch=1 --download-parmetis=1 --download-superlu_dist=1 
> --download-mumps=1 --download-strumpack=1 --download-scalapack=1 
> --download-slepc=1 --with-mpi=1 --with-openmp=1 --with-cxx-dialect=C++11 
> --with-fortran-bindings=0 --with-sowing=0 --with-64-bit-indices  
> --with-make-np=256 --download-hpddm
> [1]PETSC ERROR:   Option left: name:-snes_converged_reason value: ::failed 
> source: code
> [1]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
> [1]PETSC ERROR: [0]PETSC ERROR: #1 buildTwo() at 
> /raid/lindad/moose/petsc/arch-moose/include/HPDDM_schwarz.hpp:1012
> Petsc Development GIT revision: v3.17.4-3368-g5a48edb989d  GIT Date: 
> 2023-04-16 17:35:24 +
> [1]PETSC ERROR: ../../../moose_test-opt on a arch-moose named rod.hpc.inl.gov 
>  by lindad Mon Apr 17 16:11:09 2023
> [1]PETSC ERROR: Configure options --download-hypre=1 
> --with-shared-libraries=1 --download-hdf5=1 --with-hdf5-fortran-bindings=0   
> --with-debugging=no --download-fblaslapack=1 --download-metis=1 
> --download-ptscotch=1 --download-parmetis=1 --download-superlu_dist=1 
> --download-mumps=1 --download-strumpack=1 --download-scalapack=1 
> --download-slepc=1 --with-mpi=1 --with-openmp=1 --with-cxx-dialect=C++11 
> --with-fortran-bindings=0 --with-sowing=0 --with-64-bit-indices  
> --with-make-np=256 --download-hpddm
> [1]PETSC ERROR: #1 buildTwo() at 
> /raid/lindad/moose/petsc/arch-moose/include/HPDDM_schwarz.hpp:1012
> 
> On Mon, Apr 17, 2023 at 4:55 PM Matthew Knepley  > wrote:
>> I don't think so. Can you show the whole stack?
>> 
>>   THanks,
>> 
>> Matt
>> 
>> On Mon, Apr 17, 2023 at 6:24 PM Alexander Lindsay > > wrote:
>>> If it helps: if I use those exact same options in serial, then no errors 
>>> and the linear solve is beautiful :-) 
>>> 
>>> On Mon, Apr 17, 2023 at 4:22 PM Alexander Lindsay >> > wrote:
 I'm likely revealing a lot of ignorance, but in order to use HPDDM as a 
 preconditioner does my system matrix (I am using the same matrix for A and 
 P) need to be block type, e.g. baij or sbaij ? In MOOSE 

Re: [petsc-users] PETSc error only in debug build

2023-04-17 Thread Pierre Jolivet


> On 17 Apr 2023, at 6:22 PM, Matteo Semplice  
> wrote:
> 
> Dear PETSc users,
> 
> I am investigating a strange error occurring when using my code on a 
> cluster; I managed to reproduce it on my machine as well and it's weird:
> 
> - on petsc3.19, optimized build, the code runs fine, serial and parallel
> 
> - on petsc 3,19, --with=debugging=1, the code crashes without giving me a 
> meaningful message. The output is
> 
> $ ../levelSet -options_file ../test.opts  
> Converting from ../pointClouds/2d/ptCloud_cerchio.txt in binary format: this 
> is slow! 
> Pass in the .info file instead! 
> Read 50 particles from ../pointClouds/2d/ptCloud_cerchio.txt 
> Bounding box: [-0.665297, 0.67] x [-0.666324, 0.666324] 
> [0]PETSC ERROR: - Error Message 
> -- 
> [0]PETSC ERROR: Petsc has generated inconsistent data 
> [0]PETSC ERROR: Invalid stack size 0, pop convertCloudTxt clouds.cpp:139. 
> 
> [0]PETSC ERROR: WARNING! There are option(s) set that were not used! Could be 
> the program crashed before they were used or a spell
> ing mistake, etc! 
> [0]PETSC ERROR:   Option left: name:-delta value: 1.0 source: file 
> [0]PETSC ERROR:   Option left: name:-dx value: 0.1 source: file 
> [0]PETSC ERROR:   Option left: name:-extraCells value: 5 source: file 
> [0]PETSC ERROR:   Option left: name:-maxIter value: 200 source: file 
> [0]PETSC ERROR:   Option left: name:-p value: 1.0 source: file 
> [0]PETSC ERROR:   Option left: name:-tau value: 0.1 source: file 
> [0]PETSC ERROR:   Option left: name:-u0tresh value: 0.3 source: file 
> [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. 
> [0]PETSC ERROR: Petsc Release Version 3.19.0, unknown  
> [0]PETSC ERROR: ../levelSet on a  named signalkuppe by matteo Mon Apr 17 
> 18:04:03 2023 
> [0]PETSC ERROR: Configure options --download-ml \ --with-metis 
> --with-parmetis \ --download-hdf5 \ --with-triangle --with-gmsh \ P
> ETSC_DIR=/home/matteo/software/petsc --PETSC_ARCH=dbg --with-debugging=1 
> --COPTFLAGS=-O --CXXOPTFLAGS=-O --FOPTFLAGS=-O --prefix=/
> home/matteo/software/petsc/3.19-dbg/ 
> [0]PETSC ERROR: #1 convertCloudTxt() at clouds.cpp:139 
> -- 
> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_SELF 
> with errorcode 77. 
> 
> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. 
> You may or may not see output from other processes, depending on 
> exactly when Open MPI kills them. 
> --
> 
> Now, line 139 of clouds.cpp is PetscFunctionReturn(PETSC_SUCCESS), so I 
> cannot understand what is the offending operation in that routine. (Note: 
> this is a convertion routine and, skipping it, just make the next routine 
> fail in a similar way...)
> 
> My student has also tried to compile PETSc with --with-strict-petscerrorcode 
> and fixing all the compilation errors that were raised, but it didn't help.
> 
> Do you have any guess on what to look for?
> 
There may be a PetscFunctionBeginUser; missing at the beginning of the 
convertCloudTxt() function.
Could you double-check this?
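
In other words, the expected pattern is something like the following (the function 
signature is just a placeholder, only the Begin/Return pairing matters):

```c
PetscErrorCode convertCloudTxt(/* ... */)
{
  PetscFunctionBeginUser;             /* pushes the routine onto the debug stack */
  /* ... body of the routine ... */
  PetscFunctionReturn(PETSC_SUCCESS); /* pops it; with no matching Begin, a debug build
                                         reports "Invalid stack size 0" at this line */
}
```

Every PetscFunctionReturn() needs a matching PetscFunctionBegin/PetscFunctionBeginUser 
at the top of the routine; optimized builds do not check this, which would explain why 
only the debug build crashes.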

Thanks,
Pierre
> Bonus question to assess the cluster output what is the default value for 
> --with-debugging? I that option is not specified during PETSc configure, does 
> one get optimized or debug build?
> 
> Thanks
> 
> Matteo
> 
> -- 
> Professore Associato in Analisi Numerica
> Dipartimento di Scienza e Alta Tecnologia
> Università degli Studi dell'Insubria
> Via Valleggio, 11 - Como



Re: [petsc-users] Using nonzero -pc_hypre_boomeramg_restriction_type in field split

2023-04-16 Thread Pierre Jolivet

> On 17 Apr 2023, at 1:10 AM, Alexander Lindsay  
> wrote:
> 
> Are there any plans to get the missing hook into PETSc for AIR? Just curious 
> if there’s an issue I can subscribe to or anything.

Not that I know of, but it would make for a nice contribution if you feel like 
creating a PR.

Thanks,
Pierre 

> (Independently I’m excited to test HPDDM out tomorrow)
> 
>> On Apr 13, 2023, at 10:29 PM, Pierre Jolivet  wrote:
>> 
>> 
>>> On 14 Apr 2023, at 7:02 AM, Alexander Lindsay  
>>> wrote:
>>> 
>>> Pierre,
>>> 
>>> This is very helpful information. Thank you. Yes I would appreciate those 
>>> command line options if you’re willing to share!
>> 
>> No problem, I’ll get in touch with you in private first, because it may 
>> require some extra work (need a couple of extra options in PETSc 
>> ./configure), and this is not very related to the problem at hand, so best 
>> not to spam the mailing list.
>> 
>> Thanks,
>> Pierre
>> 
>>>> On Apr 13, 2023, at 9:54 PM, Pierre Jolivet  wrote:
>>>> 
>>>> 
>>>> 
>>>>> On 13 Apr 2023, at 10:33 PM, Alexander Lindsay  
>>>>> wrote:
>>>>> 
>>>>> Hi, I'm trying to solve steady Navier-Stokes for different Reynolds 
>>>>> numbers. My options table
>>>>> 
>>>>> -dm_moose_fieldsplit_names u,p
>>>>> -dm_moose_nfieldsplits 2
>>>>> -fieldsplit_p_dm_moose_vars pressure
>>>>> -fieldsplit_p_ksp_type preonly
>>>>> -fieldsplit_p_pc_type jacobi
>>>>> -fieldsplit_u_dm_moose_vars vel_x,vel_y
>>>>> -fieldsplit_u_ksp_type preonly
>>>>> -fieldsplit_u_pc_hypre_type boomeramg
>>>>> -fieldsplit_u_pc_type hypre
>>>>> -pc_fieldsplit_schur_fact_type full
>>>>> -pc_fieldsplit_schur_precondition selfp
>>>>> -pc_fieldsplit_type schur
>>>>> -pc_type fieldsplit
>>>>> 
>>>>> works wonderfully for a low Reynolds number of 2.2. The solver 
>>>>> performance crushes LU as I scale up the problem. However, not 
>>>>> surprisingly this options table struggles when I bump the Reynolds number 
>>>>> to 220. I've read that use of AIR (approximate ideal restriction) can 
>>>>> improve performance for advection dominated problems. I've tried setting 
>>>>> -pc_hypre_boomeramg_restriction_type 1 for a simple diffusion problem and 
>>>>> the option works fine. However, when applying it to my field-split 
>>>>> preconditioned Navier-Stokes system, I get immediate non-convergence:
>>>>> 
>>>>>  0 Nonlinear |R| = 1.033077e+03
>>>>>   0 Linear |R| = 1.033077e+03
>>>>>   Linear solve did not converge due to DIVERGED_NANORINF iterations 0
>>>>> Nonlinear solve did not converge due to DIVERGED_LINEAR_SOLVE iterations 0
>>>>> 
>>>>> Does anyone have an idea as to why this might be happening?
>>>> 
>>>> Do not use this option, even when not part of PCFIELDSPLIT.
>>>> There is some missing plumbing in PETSc which makes it unusable, see Ben’s 
>>>> comment here 
>>>> https://github.com/hypre-space/hypre/issues/764#issuecomment-1353452417.
>>>> In fact, it’s quite easy to make HYPRE generate NaN with a very simple 
>>>> stabilized convection—diffusion problem near the pure convection limit 
>>>> (something that ℓAIR is supposed to handle).
>>>> Even worse, you can make HYPRE fill your terminal with printf-style 
>>>> debugging messages 
>>>> https://github.com/hypre-space/hypre/blob/5546cc22d46b3dba253849f258786da47c9a7b21/src/parcsr_ls/par_lr_restr.c#L1416
>>>>  with this option turned on.
>>>> As a result, I have been unable to reproduce any of the ℓAIR results.
>>>> This also explains why I have been using plain BoomerAMG instead of ℓAIR 
>>>> for the comparison in page 9 of https://arxiv.org/pdf/2201.02250.pdf (if 
>>>> you would like to try the PC we are using, I could send you the command 
>>>> line options).
>>>> 
>>>> Thanks,
>>>> Pierre
>>>> 
>>>>> If not, I'd take a suggestion on where to set a breakpoint to start my 
>>>>> own investigation. Alternatively, I welcome other preconditioning 
>>>>> suggestions for an advection dominated problem.
>>>>> 
>>>>> Alex
>>>> 
>> 



Re: [petsc-users] Using nonzero -pc_hypre_boomeramg_restriction_type in field split

2023-04-13 Thread Pierre Jolivet

> On 14 Apr 2023, at 7:02 AM, Alexander Lindsay  
> wrote:
> 
> Pierre,
> 
> This is very helpful information. Thank you. Yes I would appreciate those 
> command line options if you’re willing to share!

No problem, I’ll get in touch with you in private first, because it may require 
some extra work (need a couple of extra options in PETSc ./configure), and this 
is not very related to the problem at hand, so best not to spam the mailing 
list.

Thanks,
Pierre

>> On Apr 13, 2023, at 9:54 PM, Pierre Jolivet  wrote:
>> 
>> 
>> 
>>> On 13 Apr 2023, at 10:33 PM, Alexander Lindsay  
>>> wrote:
>>> 
>>> Hi, I'm trying to solve steady Navier-Stokes for different Reynolds 
>>> numbers. My options table
>>> 
>>> -dm_moose_fieldsplit_names u,p
>>> -dm_moose_nfieldsplits 2
>>> -fieldsplit_p_dm_moose_vars pressure
>>> -fieldsplit_p_ksp_type preonly
>>> -fieldsplit_p_pc_type jacobi
>>> -fieldsplit_u_dm_moose_vars vel_x,vel_y
>>> -fieldsplit_u_ksp_type preonly
>>> -fieldsplit_u_pc_hypre_type boomeramg
>>> -fieldsplit_u_pc_type hypre
>>> -pc_fieldsplit_schur_fact_type full
>>> -pc_fieldsplit_schur_precondition selfp
>>> -pc_fieldsplit_type schur
>>> -pc_type fieldsplit
>>> 
>>> works wonderfully for a low Reynolds number of 2.2. The solver performance 
>>> crushes LU as I scale up the problem. However, not surprisingly this 
>>> options table struggles when I bump the Reynolds number to 220. I've read 
>>> that use of AIR (approximate ideal restriction) can improve performance for 
>>> advection dominated problems. I've tried setting 
>>> -pc_hypre_boomeramg_restriction_type 1 for a simple diffusion problem and 
>>> the option works fine. However, when applying it to my field-split 
>>> preconditioned Navier-Stokes system, I get immediate non-convergence:
>>> 
>>>  0 Nonlinear |R| = 1.033077e+03
>>>   0 Linear |R| = 1.033077e+03
>>>   Linear solve did not converge due to DIVERGED_NANORINF iterations 0
>>> Nonlinear solve did not converge due to DIVERGED_LINEAR_SOLVE iterations 0
>>> 
>>> Does anyone have an idea as to why this might be happening?
>> 
>> Do not use this option, even when not part of PCFIELDSPLIT.
>> There is some missing plumbing in PETSc which makes it unusable, see Ben’s 
>> comment here 
>> https://github.com/hypre-space/hypre/issues/764#issuecomment-1353452417.
>> In fact, it’s quite easy to make HYPRE generate NaN with a very simple 
>> stabilized convection—diffusion problem near the pure convection limit 
>> (something that ℓAIR is supposed to handle).
>> Even worse, you can make HYPRE fill your terminal with printf-style 
>> debugging messages 
>> https://github.com/hypre-space/hypre/blob/5546cc22d46b3dba253849f258786da47c9a7b21/src/parcsr_ls/par_lr_restr.c#L1416
>>  with this option turned on.
>> As a result, I have been unable to reproduce any of the ℓAIR results.
>> This also explains why I have been using plain BoomerAMG instead of ℓAIR for 
>> the comparison in page 9 of https://arxiv.org/pdf/2201.02250.pdf (if you 
>> would like to try the PC we are using, I could send you the command line 
>> options).
>> 
>> Thanks,
>> Pierre
>> 
>>> If not, I'd take a suggestion on where to set a breakpoint to start my own 
>>> investigation. Alternatively, I welcome other preconditioning suggestions 
>>> for an advection dominated problem.
>>> 
>>> Alex
>> 



Re: [petsc-users] Using nonzero -pc_hypre_boomeramg_restriction_type in field split

2023-04-13 Thread Pierre Jolivet


> On 13 Apr 2023, at 10:33 PM, Alexander Lindsay  
> wrote:
> 
> Hi, I'm trying to solve steady Navier-Stokes for different Reynolds numbers. 
> My options table
> 
> -dm_moose_fieldsplit_names u,p
> -dm_moose_nfieldsplits 2
> -fieldsplit_p_dm_moose_vars pressure
> -fieldsplit_p_ksp_type preonly
> -fieldsplit_p_pc_type jacobi
> -fieldsplit_u_dm_moose_vars vel_x,vel_y
> -fieldsplit_u_ksp_type preonly
> -fieldsplit_u_pc_hypre_type boomeramg
> -fieldsplit_u_pc_type hypre
> -pc_fieldsplit_schur_fact_type full
> -pc_fieldsplit_schur_precondition selfp
> -pc_fieldsplit_type schur
> -pc_type fieldsplit
> 
> works wonderfully for a low Reynolds number of 2.2. The solver performance 
> crushes LU as I scale up the problem. However, not surprisingly this options 
> table struggles when I bump the Reynolds number to 220. I've read that use of 
> AIR (approximate ideal restriction) can improve performance for advection 
> dominated problems. I've tried setting -pc_hypre_boomeramg_restriction_type 1 
> for a simple diffusion problem and the option works fine. However, when 
> applying it to my field-split preconditioned Navier-Stokes system, I get 
> immediate non-convergence:
> 
>  0 Nonlinear |R| = 1.033077e+03
>   0 Linear |R| = 1.033077e+03
>   Linear solve did not converge due to DIVERGED_NANORINF iterations 0
> Nonlinear solve did not converge due to DIVERGED_LINEAR_SOLVE iterations 0
> 
> Does anyone have an idea as to why this might be happening?

Do not use this option, even when not part of PCFIELDSPLIT.
There is some missing plumbing in PETSc which makes it unusable; see Ben’s 
comment here: 
https://github.com/hypre-space/hypre/issues/764#issuecomment-1353452417.
In fact, it’s quite easy to make HYPRE generate NaN with a very simple 
stabilized convection-diffusion problem near the pure convection limit 
(something that ℓAIR is supposed to handle).
Even worse, you can make HYPRE fill your terminal with printf-style debugging 
messages 
https://github.com/hypre-space/hypre/blob/5546cc22d46b3dba253849f258786da47c9a7b21/src/parcsr_ls/par_lr_restr.c#L1416
 with this option turned on.
As a result, I have been unable to reproduce any of the ℓAIR results.
This also explains why I have been using plain BoomerAMG instead of ℓAIR for 
the comparison on page 9 of https://arxiv.org/pdf/2201.02250.pdf (if you would 
like to try the PC we are using, I could send you the command line options).

Thanks,
Pierre

> If not, I'd take a suggestion on where to set a breakpoint to start my own 
> investigation. Alternatively, I welcome other preconditioning suggestions for 
> an advection dominated problem.
> 
> Alex



Re: [petsc-users] PETSc build asks for network connections

2023-03-20 Thread Pierre Jolivet

> On 20 Mar 2023, at 2:45 AM, Barry Smith  wrote:
> 
> 
>   I found a bit more information in gmakefile.test which has the magic sauce 
> used by make test to stop the firewall popups while running the test suite.
> 
> # MACOS FIREWALL HANDLING
> # - if run with MACOS_FIREWALL=1
> #   (automatically set in $PETSC_ARCH/lib/petsc/conf/petscvariables if 
> configured --with-macos-firewall-rules),
> #   ensure mpiexec and test executable is on firewall list
> #
> ifeq ($(MACOS_FIREWALL),1)
> FW := /usr/libexec/ApplicationFirewall/socketfilterfw
> # There is no reliable realpath command in macOS without need for 3rd party 
> tools like homebrew coreutils
> # Using Python's realpath seems like the most robust way here
> realpath-py = $(shell $(PYTHON) -c 'import os, sys; 
> print(os.path.realpath(sys.argv[1]))' $(1))
> #
> define macos-firewall-register
>   @APP=$(call realpath-py, $(1)); \
> if ! sudo -n true 2>/dev/null; then printf "Asking for sudo password to 
> add new firewall rule for\n  $$APP\n"; fi; \
> sudo $(FW) --remove $$APP --add $$APP --blockapp $$APP
> endef
> endif
> 
> and below. When building each executable it automatically calls 
> socketfilterfw on that executable so it won't popup.
> 
> From this I think you can reverse engineer how to turn it off for your 
> executables.
> 
> Perhaps PETSc's make ex1 etc should also apply this magic sauce, Pierre?

This configure option was added in 
https://gitlab.com/petsc/petsc/-/merge_requests/3131 but it never worked on my 
machines.
I just tried a make check again this morning with MACOS_FIREWALL=1; it asks 
for my password to register MPICH in the firewall, but the popups still appear 
afterwards.
That’s why I’ve never used that configure option and why I’m not sure I can 
trust this code from gmakefile.test, but I’m probably being paranoid.
Prior to Ventura, when I was running the test suite, I manually disabled the 
firewall https://support.apple.com/en-gb/guide/mac-help/mh11783/12.0/mac/12.0
Apple has yet again done Apple things, and even if you disable the firewall on 
Ventura (https://support.apple.com/en-gb/guide/mac-help/mh11783/13.0/mac/13.0), 
the popups still appear.
Right now, I don’t have a solution, except for not using my machine while the 
test suite runs…
I don’t recall whether this has been mentioned by any of the other devs, but 
this is a completely harmless (though frustrating) message: MPI and/or PETSc 
cannot be used by others to get access to your machine without an explicit 
action from the user allowing it.
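
If you want to silence the popups for a given binary by hand, the same command the 
makefile uses can be run directly (a sketch, the path is illustrative):

  sudo /usr/libexec/ApplicationFirewall/socketfilterfw --remove /path/to/executable --add /path/to/executable --blockapp /path/to/executable

but, as said above, I cannot vouch for it on every macOS version.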

Thanks,
Pierre

>> On Mar 19, 2023, at 8:10 PM, Amneet Bhalla  wrote:
>> 
>> This helped only during the configure stage, and not during the check stage 
>> and during executing the application built on PETSc. Do you think it is 
>> because I built mpich locally and not with PETSc?
>> 
>> On Sun, Mar 19, 2023 at 3:51 PM Barry Smith > > wrote:
>>> 
>>>   ./configure option with-macos-firewall-rules
>>> 
>>> 
 On Mar 19, 2023, at 5:25 PM, Amneet Bhalla >>> > wrote:
 
 Yes, this is MPI that is triggering the apple firewall. If I allow it it 
 gets added to the allowed list (see the screenshot) and it does not 
 trigger the firewall again. However, this needs to be done for all 
 executables (there will be several main2d's in the list). Any way to 
 suppress it for all executables linked to mpi in the first place?
 
 
 
 On Sun, Mar 19, 2023 at 11:01 AM Matthew Knepley >>> > wrote:
> On Sun, Mar 19, 2023 at 1:59 PM Amneet Bhalla  > wrote:
>> I'm building PETSc without mpi (I built mpich v 4.1.1 locally). Here is 
>> the configure command line that I used:
>> 
>> ./configure --CC=mpicc --CXX=mpicxx --FC=mpif90 --PETSC_ARCH=darwin-dbg 
>> --with-debugging=1 --download-hypre=1 --with-x=0
>> 
> 
> No, this uses MPI, it just does not built it. Configuring with 
> --with-mpi=0 will shut off any use of MPI, which is what Satish thinks is 
> bugging the firewall.
> 
>   Thanks,
> 
> Matt
>  
>> On Sun, Mar 19, 2023 at 10:56 AM Satish Balay > > wrote:
>>> I think its due to some of the system calls from MPI.
>>> 
>>> You can verify this with a '--with-mpi=0' build.
>>> 
>>> I wonder if there is a way to build mpich or openmpi - that doesn't 
>>> trigger Apple's firewall..
>>> 
>>> Satish
>>> 
>>> On Sun, 19 Mar 2023, Amneet Bhalla wrote:
>>> 
>>> > Hi Folks,
>>> > 
>>> > I'm trying to build PETSc on MacOS Ventura (Apple M2) with hypre. I'm 
>>> > using
>>> > the latest version (v3.18.5). During the configure and make check 
>>> > stage I
>>> > get a request about accepting network connections. The configure and 
>>> > check
>>> > proceeds without my input but the dialog box stays in 

Re: [petsc-users] Random Error of mumps: out of memory: INFOG(1)=-9

2023-03-04 Thread Pierre Jolivet


> On 4 Mar 2023, at 3:26 PM, Zongze Yang  wrote:
> 
> 
> 
> 
>> On Sat, 4 Mar 2023 at 22:03, Pierre Jolivet  wrote:
>> 
>> 
>>>> On 4 Mar 2023, at 2:51 PM, Zongze Yang  wrote:
>>>> 
>>>> 
>>>> 
>>>> On Sat, 4 Mar 2023 at 21:37, Pierre Jolivet  wrote:
>>>>> 
>>>>> 
>>>>> > On 4 Mar 2023, at 2:30 PM, Zongze Yang  wrote:
>>>>> > 
>>>>> > Hi, 
>>>>> > 
>>>>> > I am writing to seek your advice regarding a problem I encountered 
>>>>> > while using multigrid to solve a certain issue.
>>>>> > I am currently using multigrid with the coarse problem solved by PCLU. 
>>>>> > However, the PC failed randomly with the error below (the value of 
>>>>> > INFO(2) may differ):
>>>>> > ```shell
>>>>> > [ 0] Error reported by MUMPS in numerical factorization phase: 
>>>>> > INFOG(1)=-9, INFO(2)=36
>>>>> > ```
>>>>> > 
>>>>> > Upon checking the documentation of MUMPS, I discovered that increasing 
>>>>> > the value of ICNTL(14) may help resolve the issue. Specifically, I set 
>>>>> > the option -mat_mumps_icntl_14 to a higher value (such as 40), and the 
>>>>> > error seemed to disappear after I set the value of ICNTL(14) to 80. 
>>>>> > However, I am still curious as to why MUMPS failed randomly in the 
>>>>> > first place.
>>>>> > 
>>>>> > Upon further inspection, I found that the number of nonzeros of the 
>>>>> > PETSc matrix and the MUMPS matrix were different every time I ran the 
>>>>> > code. I am now left with the following questions:
>>>>> > 
>>>>> > 1. What could be causing the number of nonzeros of the MUMPS matrix to 
>>>>> > change every time I run the code?
>>>>> 
>>>>> Is the Mat being fed to MUMPS distributed on a communicator of size 
>>>>> greater than one?
>>>>> If yes, then, depending on the pivoting and the renumbering, you may get 
>>>>> non-deterministic results.
>>>>  
>>>> Hi, Pierre,
>>>> Thank you for your prompt reply. Yes, the size of the communicator is 
>>>> greater than one. 
>>>> Even if the size of the communicator is equal, are the results still 
>>>> non-deterministic?
>>> 
>>> In the most general case, yes.
>>> 
>>> Can I assume the Mat being fed to MUMPS is the same in this case?
>> 
>> Are you doing algebraic or geometric multigrid?
>> Are the prolongation operators computed by Firedrake or by PETSc, e.g., 
>> through GAMG?
>> If it’s the latter, I believe the Mat being fed to MUMPS should always be 
>> the same.
>> If it’s the former, you’ll have to ask the Firedrake people if there may be 
>> non-determinism in the coarsening process.
> 
> I am using geometric multigrid, and the prolongation operators, I think, are 
> computed by Firedrake. 
> Thanks for your suggestion, I will ask the Firedrake people.
>  
>> 
>>> Is the pivoting and renumbering all done by MUMPS other than PETSc?
>> 
>> You could provide your own numbering, but by default, this is outsourced to 
>> MUMPS indeed, which will itself outsourced this to METIS, AMD, etc.
> 
> I think I won't do this.
> By the way, does the result of superlu_dist  have a similar non-deterministic?

SuperLU_DIST uses static pivoting as far as I know, so it may be more 
deterministic.

Thanks,
Pierre

> Thanks,
> Zongze
> 
>> 
>> Thanks,
>> Pierre
>> 
>>>> 
>>>> > 2. Why is the number of nonzeros of the MUMPS matrix significantly 
>>>> > greater than that of the PETSc matrix (as seen in the output of 
>>>> > ksp_view, 115025949 vs 7346177)?
>>>> 
>>>> Exact factorizations introduce fill-in.
>>>> The number of nonzeros you are seeing for MUMPS is the number of nonzeros 
>>>> in the factors.
>>>> 
>>>> > 3. Is it possible that the varying number of nonzeros of the MUMPS 
>>>> > matrix is the cause of the random failure?
>>>> 
>>>> Yes, MUMPS uses dynamic scheduling, which will depend on numerical 
>>>> pivoting, and which may generate factors with different number of nonzeros.
>>> 
>>> Got it. Thank you for your clear explana

Re: [petsc-users] Random Error of mumps: out of memory: INFOG(1)=-9

2023-03-04 Thread Pierre Jolivet


> On 4 Mar 2023, at 2:51 PM, Zongze Yang  wrote:
> 
> 
> 
> On Sat, 4 Mar 2023 at 21:37, Pierre Jolivet  <mailto:pie...@joliv.et>> wrote:
>> 
>> 
>> > On 4 Mar 2023, at 2:30 PM, Zongze Yang > > <mailto:yangzon...@gmail.com>> wrote:
>> > 
>> > Hi, 
>> > 
>> > I am writing to seek your advice regarding a problem I encountered while 
>> > using multigrid to solve a certain issue.
>> > I am currently using multigrid with the coarse problem solved by PCLU. 
>> > However, the PC failed randomly with the error below (the value of INFO(2) 
>> > may differ):
>> > ```shell
>> > [ 0] Error reported by MUMPS in numerical factorization phase: 
>> > INFOG(1)=-9, INFO(2)=36
>> > ```
>> > 
>> > Upon checking the documentation of MUMPS, I discovered that increasing the 
>> > value of ICNTL(14) may help resolve the issue. Specifically, I set the 
>> > option -mat_mumps_icntl_14 to a higher value (such as 40), and the error 
>> > seemed to disappear after I set the value of ICNTL(14) to 80. However, I 
>> > am still curious as to why MUMPS failed randomly in the first place.
>> > 
>> > Upon further inspection, I found that the number of nonzeros of the PETSc 
>> > matrix and the MUMPS matrix were different every time I ran the code. I am 
>> > now left with the following questions:
>> > 
>> > 1. What could be causing the number of nonzeros of the MUMPS matrix to 
>> > change every time I run the code?
>> 
>> Is the Mat being fed to MUMPS distributed on a communicator of size greater 
>> than one?
>> If yes, then, depending on the pivoting and the renumbering, you may get 
>> non-deterministic results.
>  
> Hi, Pierre,
> Thank you for your prompt reply. Yes, the size of the communicator is greater 
> than one. 
> Even if the size of the communicator is equal, are the results still 
> non-deterministic?

In the most general case, yes.

> Can I assume the Mat being fed to MUMPS is the same in this case?

Are you doing algebraic or geometric multigrid?
Are the prolongation operators computed by Firedrake or by PETSc, e.g., through 
GAMG?
If it’s the latter, I believe the Mat being fed to MUMPS should always be the 
same.
If it’s the former, you’ll have to ask the Firedrake people if there may be 
non-determinism in the coarsening process.

> Is the pivoting and renumbering all done by MUMPS other than PETSc?

You could provide your own numbering, but by default, this is outsourced to 
MUMPS indeed, which will itself outsource this to METIS, AMD, etc.

Thanks,
Pierre

>> 
>> > 2. Why is the number of nonzeros of the MUMPS matrix significantly greater 
>> > than that of the PETSc matrix (as seen in the output of ksp_view, 
>> > 115025949 vs 7346177)?
>> 
>> Exact factorizations introduce fill-in.
>> The number of nonzeros you are seeing for MUMPS is the number of nonzeros in 
>> the factors.
>> 
>> > 3. Is it possible that the varying number of nonzeros of the MUMPS matrix 
>> > is the cause of the random failure?
>> 
>> Yes, MUMPS uses dynamic scheduling, which will depend on numerical pivoting, 
>> and which may generate factors with different number of nonzeros.
> 
> Got it. Thank you for your clear explanation.
> Zongze 
> 
>> 
>> Thanks,
>> Pierre
>> 
>> > I have attached a test example written in Firedrake. The output of 
>> > `ksp_view` after running the code twice is included below for your 
>> > reference.
>> > In the output, the number of nonzeros of the MUMPS matrix was 115025949 
>> > and 115377847, respectively, while that of the PETSc matrix was only 
>> > 7346177.
>> > 
>> > ```shell
>> > (complex-int32-mkl) $ mpiexec -n 32 python test_mumps.py -ksp_view 
>> > ::ascii_info_detail | grep -A3 "type: "
>> >   type: preonly
>> >   maximum iterations=1, initial guess is zero
>> >   tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
>> >   left preconditioning
>> > --
>> >   type: lu
>> > out-of-place factorization
>> > tolerance for zero pivot 2.22045e-14
>> > matrix ordering: external
>> > --
>> >   type: mumps
>> >   rows=1050625, cols=1050625
>> >   package used to perform factorization: mumps
>> >   total: nonzeros=115025949, allocated nonzeros=115025949
>> > --
>> > type: mpiaij
>> > rows=1050625, cols=1050625
>> > tot

Re: [petsc-users] Random Error of mumps: out of memory: INFOG(1)=-9

2023-03-04 Thread Pierre Jolivet



> On 4 Mar 2023, at 2:30 PM, Zongze Yang  wrote:
> 
> Hi, 
> 
> I am writing to seek your advice regarding a problem I encountered while 
> using multigrid to solve a certain issue.
> I am currently using multigrid with the coarse problem solved by PCLU. 
> However, the PC failed randomly with the error below (the value of INFO(2) 
> may differ):
> ```shell
> [ 0] Error reported by MUMPS in numerical factorization phase: INFOG(1)=-9, 
> INFO(2)=36
> ```
> 
> Upon checking the documentation of MUMPS, I discovered that increasing the 
> value of ICNTL(14) may help resolve the issue. Specifically, I set the option 
> -mat_mumps_icntl_14 to a higher value (such as 40), and the error seemed to 
> disappear after I set the value of ICNTL(14) to 80. However, I am still 
> curious as to why MUMPS failed randomly in the first place.
> 
> Upon further inspection, I found that the number of nonzeros of the PETSc 
> matrix and the MUMPS matrix were different every time I ran the code. I am 
> now left with the following questions:
> 
> 1. What could be causing the number of nonzeros of the MUMPS matrix to change 
> every time I run the code?

Is the Mat being fed to MUMPS distributed on a communicator of size greater 
than one?
If yes, then, depending on the pivoting and the renumbering, you may get 
non-deterministic results.

> 2. Why is the number of nonzeros of the MUMPS matrix significantly greater 
> than that of the PETSc matrix (as seen in the output of ksp_view, 115025949 
> vs 7346177)?

Exact factorizations introduce fill-in.
The number of nonzeros you are seeing for MUMPS is the number of nonzeros in 
the factors.

> 3. Is it possible that the varying number of nonzeros of the MUMPS matrix is 
> the cause of the random failure?

Yes, MUMPS uses dynamic scheduling, which will depend on numerical pivoting, 
and which may generate factors with different numbers of nonzeros.
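
In practice, the usual workaround is exactly what you did, i.e., giving MUMPS more 
slack for this variability (the value is illustrative):

  -mat_mumps_icntl_14 80

which increases the percentage of extra working space MUMPS is allowed to allocate 
on top of its estimate.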

Thanks,
Pierre

> I have attached a test example written in Firedrake. The output of `ksp_view` 
> after running the code twice is included below for your reference.
> In the output, the number of nonzeros of the MUMPS matrix was 115025949 and 
> 115377847, respectively, while that of the PETSc matrix was only 7346177.
> 
> ```shell
> (complex-int32-mkl) $ mpiexec -n 32 python test_mumps.py -ksp_view 
> ::ascii_info_detail | grep -A3 "type: "
>   type: preonly
>   maximum iterations=1, initial guess is zero
>   tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
>   left preconditioning
> --
>   type: lu
> out-of-place factorization
> tolerance for zero pivot 2.22045e-14
> matrix ordering: external
> --
>   type: mumps
>   rows=1050625, cols=1050625
>   package used to perform factorization: mumps
>   total: nonzeros=115025949, allocated nonzeros=115025949
> --
> type: mpiaij
> rows=1050625, cols=1050625
> total: nonzeros=7346177, allocated nonzeros=7346177
> total number of mallocs used during MatSetValues calls=0
> (complex-int32-mkl) $ mpiexec -n 32 python test_mumps.py -ksp_view 
> ::ascii_info_detail | grep -A3 "type: "
>   type: preonly
>   maximum iterations=1, initial guess is zero
>   tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
>   left preconditioning
> --
>   type: lu
> out-of-place factorization
> tolerance for zero pivot 2.22045e-14
> matrix ordering: external
> --
>   type: mumps
>   rows=1050625, cols=1050625
>   package used to perform factorization: mumps
>   total: nonzeros=115377847, allocated nonzeros=115377847
> --
> type: mpiaij
> rows=1050625, cols=1050625
> total: nonzeros=7346177, allocated nonzeros=7346177
> total number of mallocs used during MatSetValues calls=0
> ```
> 
> I would greatly appreciate any insights you may have on this matter. Thank 
> you in advance for your time and assistance.
> 
> Best wishes,
> Zongze
> 



Re: [petsc-users] Inquiry regarding DMAdaptLabel function

2023-02-27 Thread Pierre Jolivet


> On 27 Feb 2023, at 4:42 PM, Matthew Knepley  wrote:
> 
> On Mon, Feb 27, 2023 at 10:26 AM Pierre Jolivet  <mailto:pie...@joliv.et>> wrote:
>>> On 27 Feb 2023, at 4:16 PM, Matthew Knepley >> <mailto:knep...@gmail.com>> wrote:
>>> 
>>> On Mon, Feb 27, 2023 at 10:13 AM Pierre Jolivet >> <mailto:pie...@joliv.et>> wrote:
>>>>> On 27 Feb 2023, at 3:59 PM, Matthew Knepley >>>> <mailto:knep...@gmail.com>> wrote:
>>>>> 
>>>>> On Mon, Feb 27, 2023 at 9:53 AM Zongze Yang >>>> <mailto:yangzon...@gmail.com>> wrote:
>>>>>> Hi, Matt,
>>>>>> 
>>>>>> I tested coarsening a mesh by using ParMMg without firedrake, and found 
>>>>>> some issues:
>>>>>>  see the code and results here:  
>>>>>> https://gitlab.com/petsc/petsc/-/issues/1331
>>>>>> 
>>>>>> Could you have a look and give some comments or suggestions?
>>>>> 
>>>>> I replied on the issue. More generally, the adaptive refinement software 
>>>>> has not seen wide use
>>>> 
>>>> :)
>>>> Matt probably meant “the _DMPlex interface_ to adaptive refinement 
>>>> software has not seen wide use”, Mmg has been rather widely used for 10+ 
>>>> years (here is a 13-year old presentation 
>>>> https://www.ljll.math.upmc.fr/hecht/ftp/ff++days/2010/exposes/Morice-MeshMetric.pdf).
>>> 
>>> The interface is certainly new, but even ParMMG is only from Nov 2016, 
>>> which is very new if you are an old person :)
>> 
>> Indeed. In fact, I do believe we should add a DMPlex mechanism to centralize 
>> (redistribute on a single process) a DMPlex and to call Mmg instead of 
>> ParMmg.
>> It would certainly not be scalable for large meshes but:
>> 1) there is no need for ParMmg on small-/medium-scale meshes
>> 2) Mmg is more robust than ParMmg at this point in time
>> 3) Mmg has more feature than ParMmg at this point in time, e.g., implicit 
>> remeshing using a level-set
>> 4) there is more industry money funnelled into Mmg than into ParMmg 
>> I think the mechanism I mentioned initially was in the TODO list of the 
>> Firedrake people (or yours?), maybe it’s already done, but in any case it’s 
>> not hooked in the Mmg adaptor code, though it should (erroring out in the 
>> case where the communicator is of size greater than one would then not 
>> happen anymore).
> 
> Yes, we used to do the same thing with partitioners. We can use 
> DMPlexGather().
> 
> I thought MMG only did 2D and ParMMG only did 3D, but this must be wrong now. 
> Can MMG do both?

Mmg does 2D, 3D, and 3D surfaces.
ParMmg only does 3D (with no short-term plan for 2D or 3D surfaces).

Thanks,
Pierre

>   Thanks,
> 
>  Matt
>  
>> Thanks,
>> Pierre
>> 
>>>   Thanks,
>>> 
>>> Matt
>>>  
>>>> Thanks,
>>>> Pierre
>>>> 
>>>>> , and I expect
>>>>> more of these kinds of bugs until more people use it.
>>>>> 
>>>>>   Thanks,
>>>>> 
>>>>>  Matt
>>>>>  
>>>>>> Best wishes,
>>>>>> Zongze
>>>>>> 
>>>>>> 
>>>>>> On Mon, 27 Feb 2023 at 20:19, Matthew Knepley >>>>> <mailto:knep...@gmail.com>> wrote:
>>>>>>> On Sat, Feb 18, 2023 at 6:41 AM Zongze Yang >>>>>> <mailto:yangzon...@gmail.com>> wrote:
>>>>>>>> Another question on mesh coarsening is about `DMCoarsen` which will 
>>>>>>>> fail when running in parallel.
>>>>>>>> 
>>>>>>>> I generate a mesh in Firedrake, and then create function space and 
>>>>>>>> functions, after that, I get the dmplex and coarsen it.
>>>>>>>> When running in serials, I get the mesh coarsened correctly. But it 
>>>>>>>> failed with errors in ParMMG when running parallel.
>>>>>>>> 
>>>>>>>> However, If I did not create function space and functions on the 
>>>>>>>> original mesh, everything works fine too.
>>>>>>>> 
>>>>>>>> The code and the error logs are attached.
>>>>>>> 
>>>>>>> I believe the problem is that Firedrake and PETS

Re: [petsc-users] Inquiry regarding DMAdaptLabel function

2023-02-27 Thread Pierre Jolivet


> On 27 Feb 2023, at 4:16 PM, Matthew Knepley  wrote:
> 
> On Mon, Feb 27, 2023 at 10:13 AM Pierre Jolivet  <mailto:pie...@joliv.et>> wrote:
>>> On 27 Feb 2023, at 3:59 PM, Matthew Knepley >> <mailto:knep...@gmail.com>> wrote:
>>> 
>>> On Mon, Feb 27, 2023 at 9:53 AM Zongze Yang >> <mailto:yangzon...@gmail.com>> wrote:
>>>> Hi, Matt,
>>>> 
>>>> I tested coarsening a mesh by using ParMMg without firedrake, and found 
>>>> some issues:
>>>>  see the code and results here:  
>>>> https://gitlab.com/petsc/petsc/-/issues/1331
>>>> 
>>>> Could you have a look and give some comments or suggestions?
>>> 
>>> I replied on the issue. More generally, the adaptive refinement software 
>>> has not seen wide use
>> 
>> :)
>> Matt probably meant “the _DMPlex interface_ to adaptive refinement software 
>> has not seen wide use”, Mmg has been rather widely used for 10+ years (here 
>> is a 13-year old presentation 
>> https://www.ljll.math.upmc.fr/hecht/ftp/ff++days/2010/exposes/Morice-MeshMetric.pdf).
> 
> The interface is certainly new, but even ParMMG is only from Nov 2016, which 
> is very new if you are an old person :)

Indeed. In fact, I do believe we should add a DMPlex mechanism to centralize 
(redistribute on a single process) a DMPlex and to call Mmg instead of ParMmg.
It would certainly not be scalable for large meshes but:
1) there is no need for ParMmg on small-/medium-scale meshes
2) Mmg is more robust than ParMmg at this point in time
3) Mmg has more features than ParMmg at this point in time, e.g., implicit 
remeshing using a level-set
4) there is more industry money funnelled into Mmg than into ParMmg 
I think the mechanism I mentioned initially was in the TODO list of the 
Firedrake people (or yours?), maybe it’s already done, but in any case it’s not 
hooked into the Mmg adaptor code, though it should be (erroring out in the case 
where the communicator is of size greater than one would then no longer 
happen).

Thanks,
Pierre

>   Thanks,
> 
> Matt
>  
>> Thanks,
>> Pierre
>> 
>>> , and I expect
>>> more of these kinds of bugs until more people use it.
>>> 
>>>   Thanks,
>>> 
>>>  Matt
>>>  
>>>> Best wishes,
>>>> Zongze
>>>> 
>>>> 
>>>> On Mon, 27 Feb 2023 at 20:19, Matthew Knepley >>> <mailto:knep...@gmail.com>> wrote:
>>>>> On Sat, Feb 18, 2023 at 6:41 AM Zongze Yang >>>> <mailto:yangzon...@gmail.com>> wrote:
>>>>>> Another question on mesh coarsening is about `DMCoarsen` which will fail 
>>>>>> when running in parallel.
>>>>>> 
>>>>>> I generate a mesh in Firedrake, and then create function space and 
>>>>>> functions, after that, I get the dmplex and coarsen it.
>>>>>> When running in serials, I get the mesh coarsened correctly. But it 
>>>>>> failed with errors in ParMMG when running parallel.
>>>>>> 
>>>>>> However, If I did not create function space and functions on the 
>>>>>> original mesh, everything works fine too.
>>>>>> 
>>>>>> The code and the error logs are attached.
>>>>> 
>>>>> I believe the problem is that Firedrake and PETSc currently have 
>>>>> incompatible coordinate spaces. We are working
>>>>> to fix this, and I expect it to work by this summer.
>>>>> 
>>>>>   Thanks,
>>>>> 
>>>>>  Matt
>>>>>  
>>>>>> Thank you for your time and attention。
>>>>>> 
>>>>>> Best wishes,
>>>>>> Zongze
>>>>>> 
>>>>>> 
>>>>>> On Sat, 18 Feb 2023 at 15:24, Zongze Yang >>>>> <mailto:yangzon...@gmail.com>> wrote:
>>>>>>> Dear PETSc Group,
>>>>>>> 
>>>>>>> I am writing to inquire about the function DMAdaptLabel in PETSc. 
>>>>>>> I am trying to use it coarse a mesh, but the resulting mesh is refined.
>>>>>>> 
>>>>>>> In the following code, all of the `adpat` label values were set to 2 
>>>>>>> (DM_ADAPT_COARSEN).
>>>>>>> There must be something wrong. Could you give some suggestions?
>>>>>>>  
>>>>>>> ```python
>>>>>>> f

Re: [petsc-users] Inquiry regarding DMAdaptLabel function

2023-02-27 Thread Pierre Jolivet


> On 27 Feb 2023, at 3:59 PM, Matthew Knepley  wrote:
> 
> On Mon, Feb 27, 2023 at 9:53 AM Zongze Yang  > wrote:
>> Hi, Matt,
>> 
>> I tested coarsening a mesh by using ParMMg without firedrake, and found some 
>> issues:
>>  see the code and results here:  https://gitlab.com/petsc/petsc/-/issues/1331
>> 
>> Could you have a look and give some comments or suggestions?
> 
> I replied on the issue. More generally, the adaptive refinement software has 
> not seen wide use

:)
Matt probably meant “the _DMPlex interface_ to adaptive refinement software has 
not seen wide use”; Mmg has been rather widely used for 10+ years (here is a 
13-year-old presentation 
https://www.ljll.math.upmc.fr/hecht/ftp/ff++days/2010/exposes/Morice-MeshMetric.pdf).

Thanks,
Pierre

> , and I expect
> more of these kinds of bugs until more people use it.
> 
>   Thanks,
> 
>  Matt
>  
>> Best wishes,
>> Zongze
>> 
>> 
>> On Mon, 27 Feb 2023 at 20:19, Matthew Knepley > > wrote:
>>> On Sat, Feb 18, 2023 at 6:41 AM Zongze Yang >> > wrote:
 Another question on mesh coarsening is about `DMCoarsen` which will fail 
 when running in parallel.
 
 I generate a mesh in Firedrake, and then create function space and 
 functions, after that, I get the dmplex and coarsen it.
 When running in serials, I get the mesh coarsened correctly. But it failed 
 with errors in ParMMG when running parallel.
 
 However, If I did not create function space and functions on the original 
 mesh, everything works fine too.
 
 The code and the error logs are attached.
>>> 
>>> I believe the problem is that Firedrake and PETSc currently have 
>>> incompatible coordinate spaces. We are working
>>> to fix this, and I expect it to work by this summer.
>>> 
>>>   Thanks,
>>> 
>>>  Matt
>>>  
 Thank you for your time and attention。
 
 Best wishes,
 Zongze
 
 
 On Sat, 18 Feb 2023 at 15:24, Zongze Yang >>> > wrote:
> Dear PETSc Group,
> 
> I am writing to inquire about the function DMAdaptLabel in PETSc. 
> I am trying to use it coarse a mesh, but the resulting mesh is refined.
> 
> In the following code, all of the `adpat` label values were set to 2 
> (DM_ADAPT_COARSEN).
> There must be something wrong. Could you give some suggestions?
>  
> ```python
> from firedrake import *
> from firedrake.petsc import PETSc
> 
> def mark_all_cells(mesh):
> plex = mesh.topology_dm
> with PETSc.Log.Event("ADD_ADAPT_LABEL"):
> plex.createLabel('adapt')
> cs, ce = plex.getHeightStratum(0)
> for i in range(cs, ce):
> plex.setLabelValue('adapt', i, 2)
> 
> return plex
> 
> mesh = RectangleMesh(10, 10, 1, 1)
> 
> x = SpatialCoordinate(mesh)
> V = FunctionSpace(mesh, 'CG', 1)
> f = Function(V).interpolate(10 + 10*sin(x[0]))
> triplot(mesh)
> 
> plex = mark_all_cells(mesh)
> new_plex = plex.adaptLabel('adapt')
> mesh = Mesh(new_plex)
> triplot(mesh)
> ```
> 
> Thank you very much for your time.
> 
> Best wishes,
> Zongze
>>> 
>>> 
>>> -- 
>>> What most experimenters take for granted before they begin their 
>>> experiments is infinitely more interesting than any results to which their 
>>> experiments lead.
>>> -- Norbert Wiener
>>> 
>>> https://www.cse.buffalo.edu/~knepley/ 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] DMPlex Halo Communication or Graph Partitioner Issue

2023-02-26 Thread Pierre Jolivet


> On 26 Feb 2023, at 8:07 PM, Mike Michell  wrote:
> 
> I cannot agree with this argument, unless you also tested with petsc 3.18.4 
> tarball from https://petsc.org/release/install/download/. 
> If library has issue, it is trivial that I will see an error from my code. 
> 
> I ran my code with valgrind and see no error if it is with petsc 3.18.4. You 
> can test with my code with valgrind or address sanitizer with this version of 
> petsc-3.18.4.tar.gz from (https://petsc.org/release/install/download/). I 
> expect you see no error.
> 
> 
> Let me ask my question differently: 
> Has any change been made on DMPlexMarkBoundaryFaces() recently?

Yes, and it may break your application if you are not careful about it: 
https://gitlab.com/petsc/petsc/-/commit/a29bf4df3e5335fbd3b27b552a624c7f2a5a2f0a

Thanks,
Pierre

> I found that the latest petsc does not recognize parallel (but not physical) 
> boundary as boundary for distributed dm (line 235 of my example code). 
> Because of this, you saw the error from the arrays:
> 
> ! midpoint of median-dual face for inner face
>  axrf(ifa,1) = 0.5d0*(yc(nc1)+yfc(ifa)) ! for nc1 cell
>  axrf(ifa,2) = 0.5d0*(yc(nc2)+yfc(ifa)) ! for nc2 cell
> 
> and these were allocated here
> 
>  allocate(xc(ncell))
>  allocate(yc(ncell))
> 
> as you pointed out. Or any change made on distribution of dm over procs?
> 
> Thanks,
> Mike
> 
>> 
>> On Sun, Feb 26, 2023 at 11:32 AM Mike Michell > > wrote:
>>> This is what I get from petsc main which is not correct:
>>> Overall volume computed from median-dual ...
>>>6.37050098781844 
>>> Overall volume computed from PETSc ...
>>>3.1547005380
>>> 
>>> 
>>> This is what I get from petsc 3.18.4 which is correct:
>>> Overall volume computed from median-dual ...
>>>3.1547005380 
>>> Overall volume computed from PETSc ...
>>>3.1547005380
>>> 
>>> 
>>> If there is a problem in the code, it is also strange for me that petsc 
>>> 3.18.4 gives the correct answer
>> 
>> As I said, this can happen due to different layouts in memory. If you run it 
>> under valgrind, or address sanitizer, you will see
>> that there is a problem.
>> 
>>   Thanks,
>> 
>>  Matt
>>  
>>> Thanks,
>>> Mike
>>> 
 
 On Sun, Feb 26, 2023 at 11:19 AM Mike Michell >>> > wrote:
> Which version of petsc you tested? With petsc 3.18.4, median duan volume 
> gives the same value with petsc from DMPlexComputeCellGeometryFVM(). 
 
 This is only an accident of the data layout. The code you sent writes over 
 memory in the local Fortran arrays.
 
   Thanks,
 
  Matt
  
>> 
>> On Sat, Feb 25, 2023 at 3:11 PM Mike Michell > > wrote:
>>> My apologies for the late follow-up. There was a time conflict. 
>>> 
>>> A simple example code related to the issue I mentioned is attached 
>>> here. The sample code does: (1) load grid on dm, (2) compute 
>>> vertex-wise control volume for each node in a median-dual way, (3) halo 
>>> exchange among procs to have complete control volume values, and (4) 
>>> print out its field as a .vtu file. To make sure, the computed control 
>>> volume is also compared with PETSc-computed control volume via 
>>> DMPlexComputeCellGeometryFVM() (see lines 771-793). 
>>> 
>>> Back to the original problem, I can get a proper control volume field 
>>> with PETSc 3.18.4, which is the latest stable release. However, if I 
>>> use PETSc from the main repo, it gives a strange control volume field. 
>>> Something is certainly strange around the parallel boundaries, thus I 
>>> think something went wrong with halo communication. To help understand, 
>>> a comparing snapshot is also attached. I guess a certain part of my 
>>> code is no longer compatible with PETSc unless there is a bug in the 
>>> library. Could I get comments on it?
>> 
>> I can run your example. The numbers I get for "median-dual volume" do 
>> not match the "PETSc volume", and the PETSc volume is correct. Moreover, 
>> the median-dual numbers change, which suggests a memory fault. I 
>> compiled it using address sanitizer, and it found an error:
>> 
>>  Number of physical boundary edge ...4   0  
>>  Number of physical and parallel boundary edge ...4  
>>  0  
>>  Number of parallel boundary edge ...0   0  
>>  Number of physical boundary edge ...4   1  
>>  Number of physical and parallel boundary edge ...4  
>>  1  
>>  Number of parallel boundary edge ...0   1  
>> =
>> ==36587==ERROR: AddressSanitizer: heap-buffer-overflow on address 
>> 0x60322d40 at pc 0x0001068e12a8 bp 

Re: [petsc-users] petsc compiled without MPI

2023-02-25 Thread Pierre Jolivet


> On 25 Feb 2023, at 11:44 PM, Long, Jianbo  wrote:
> 
> Hello,
> 
> For some of my applications, I need to use PETSc without MPI, or use it 
> sequentially. I wonder where I can find examples/tutorials for this?

You can run sequentially with just a single MPI process (mpiexec -n 1).
If you need to run without MPI whatsoever, you’ll need a separate PETSc 
installation configured with --with-mpi=0.
In both cases, the same user-code will run, i.e., all PETSc examples available 
with the sources will work (though some are designed purely for parallel 
experiments and may error out early on purpose).
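
For reference, such an MPI-free build could be configured along these lines (the compilers are only an example, adjust them to your toolchain):

./configure --with-mpi=0 --with-cc=gcc --with-cxx=g++ --with-fc=gfortran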

Thanks,
Pierre

> Thanks very much,
> Jianbo Long



Re: [petsc-users] HYPRE requires C++ compiler

2023-02-20 Thread Pierre Jolivet


> On 21 Feb 2023, at 4:07 AM, Park, Heeho via petsc-users 
>  wrote:
> 
> 
> Hi PETSc developers,
>  
> I’m using the same configure script on my system to compile the petsc main branch 
> as for petsc-v3.17.2, but now I am receiving this message.
> I’ve tried it several different ways, but the HYPRE installer does not recognize 
> the mpicxx I’m using. I can send you the configure log and file if you would 
> like to see them.

Send those files to petsc-ma...@mcs.anl.gov

Thanks,
Pierre

>  
> =
>   Trying to download https://github.com/hypre-space/hypre for 
> HYPRE
> =
>  
> *
>UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for 
> details):
> -
>   Error: Hypre requires C++ compiler. None specified
> *
>  
>  
> Currently Loaded Modules:
>   1) intel/21.3.0   2) mkl/21.3.0   3) openmpi-intel/4.0   4) anaconda3/5.2.0 
>   5) gcc/4.9.3
>  
> Thanks,
>  
> Heeho D. Park, Ph.D.
>Computational Hydrologist & Nuclear Engineer
> Center for Energy & Earth Systems
> Applied Systems Analysis and Research Dept.
>Sandia National Laboratories 
>Email: heep...@sandia.gov
>Web: Homepage
>  


Re: [petsc-users] Question about rank of matrix

2023-02-17 Thread Pierre Jolivet

> On 17 Feb 2023, at 8:56 AM, Stefano Zampini  wrote:
> 
> 
> On Fri, Feb 17, 2023, 10:43 user_gong Kim  > wrote:
>> Hello,
>> 
>> I have a question about the rank of a matrix.
>> For the problem 
>> Au = b, 
>> 
>> the global matrix A is sometimes not full rank.
>> In this case, the global matrix A is singular, and then the problem 
>> cannot be solved even with a direct solver.
>> I haven't solved the problem with an iterative solver yet, but I would like 
>> to ask someone who has experienced this kind of problem.
>> 
>> 1. If it is not full rank, is there a numerical technique to solve it by 
>> identifying the rank-deficient rows and columns in advance?
>> 
>> 2.If anyone has solved it in a different way than the above numerical 
>> analysis method, please tell me your experience.
>> 
>> Thanks,
>> Hyung Kim
> 
> 
> My experience with this is usually associated with reading a book and finding the 
> solution I'm looking for. 

On top of that, some exact factorization packages can solve singular systems, 
unlike what you are stating.
E.g., MUMPS, together with the option -mat_mumps_icntl_24, see 
https://mumps-solver.org/doc/userguide_5.5.1.pdf
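
For reference, a minimal petsc4py sketch of where that option goes (this assumes a PETSc/petsc4py build configured with MUMPS; the tiny singular system is only for illustration):

from petsc4py import PETSc

# 3x3 matrix whose last row and column are left empty, i.e., structurally singular
A = PETSc.Mat().createAIJ([3, 3], comm=PETSc.COMM_SELF)
A.setUp()
A.setValue(0, 0, 1.0)
A.setValue(1, 1, 2.0)
A.assemble()

b = A.createVecLeft()
b.setValue(0, 1.0)
b.assemble()
x = A.createVecRight()

opts = PETSc.Options()
opts["pc_factor_mat_solver_type"] = "mumps"  # assumes PETSc was configured with MUMPS
opts["mat_mumps_icntl_24"] = 1               # ICNTL(24): null-pivot detection

ksp = PETSc.KSP().create(PETSc.COMM_SELF)
ksp.setOperators(A)
ksp.setType("preonly")
ksp.getPC().setType("lu")
ksp.setFromOptions()
ksp.solve(b, x)
x.view()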

Thanks,
Pierre

Re: [petsc-users] Question about preconditioner

2023-02-16 Thread Pierre Jolivet


> On 16 Feb 2023, at 8:43 AM, user_gong Kim  wrote:
> 
> 
>  
> 
> Hello,
> 
>  
> 
> There are some questions about some preconditioners.
> 
> The questions concern the problem Au = b. The global matrix A has zero-valued 
> diagonal entries.
> 
> 1. Which preconditioner is preferred for a matrix A that has zeros on the 
> diagonal?
> 

This question does not have a single answer. It all depends on where your A and b are 
coming from.

> The 2 most frequently used basic preconditioners are Jacobi and SOR 
> (Gauss-Seidel).
> 
They are not the most frequently used. And rightfully so, as they very often 
can’t handle non-trivial systems.

> As people know, both methods require nonzero diagonal terms. Although the 
> improved variant applied in PETSc lets Jacobi also handle the case of a 
> zero diagonal term, I ask because I know that it is not recommended.
> 
> 2. The second question is about running the code with the two sets of command-line 
> options below on a single process.
> 1st command : -ksp_type gmres -pc_type bjacobi -sub_pc_type jacobi
> 2nd command : -ksp_type gmres -pc_type hpddm -sub_pc_type jacobi
> When domain decomposition methods such as bjacobi or hpddm are run in parallel, the 
> global matrix is divided among the processes. As far as I know, running on a 
> single process should eventually produce the same result if the sub-PC type 
> is the same. However, with the second option, the KSP did not converge.
> 
1st command: it’s pointless to couple PCBJACOBI with PCJACOBI, it’s equivalent 
to only using PCJACOBI.
2nd command: it’s pointless to use PCHPDDM if you don’t specify in some way how 
to coarsen your problem (either algebraically or via an auxiliary operator). 
You just have a single level (equivalent to PCBJACOBI), but its options are 
prefixed by -pc_hpddm_coarse_ instead of -sub_.
Again, neither set of options makes sense.
If you want, you could share your A and b (or tell us what you are 
discretizing) and we will be able to provide better feedback.
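
To see what each option set actually builds (and to monitor the difference, as asked below), a minimal petsc4py sketch such as the following can be run once per option set; the 1D Laplacian is only a stand-in operator, and the hpddm run additionally assumes a PETSc build with HPDDM. ksp.view() prints the preconditioner hierarchy that was actually set up, and adding -ksp_monitor_true_residual -ksp_converged_reason on the command line shows the convergence history:

from petsc4py import PETSc

n = 10
A = PETSc.Mat().createAIJ([n, n], comm=PETSc.COMM_WORLD)
A.setUp()
rstart, rend = A.getOwnershipRange()
for i in range(rstart, rend):        # simple 1D Laplacian as a stand-in operator
    A.setValue(i, i, 2.0)
    if i > 0:
        A.setValue(i, i - 1, -1.0)
    if i < n - 1:
        A.setValue(i, i + 1, -1.0)
A.assemble()

b = A.createVecLeft()
b.set(1.0)
x = A.createVecRight()

ksp = PETSc.KSP().create(PETSc.COMM_WORLD)
ksp.setOperators(A)
ksp.setFromOptions()                 # picks up -ksp_type/-pc_type/-sub_pc_type/... from the command line
ksp.solve(b, x)
ksp.view()                           # prints the solver/preconditioner hierarchy actually used
PETSc.Sys.Print(f"iterations: {ksp.getIterationNumber()}, reason: {ksp.getConvergedReason()}")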

Thanks,
Pierre

> In this case, I wonder how to analyze the situation.
> How can I monitor and see the difference between the two?
> 
>  
> 
>  
> 
> Thanks,
> 
> Hyung Kim


Re: [petsc-users] MatMatMul inefficient

2023-02-15 Thread Pierre Jolivet
Thank you for the reproducer.
I didn’t realize your test case was _this_ small.
Still, you are not setting the MatType of Q, and PETSc tends to only like AIJ, 
so everything defaults to that type.
So instead of doing C = AB with a sparse A and a dense B, it does a sparse-sparse 
product, which is much costlier.
If you add  call MatSetType(Q,MATDENSE,ierr) before the MatLoad(), you will 
then get:
 Running with1  processors
 AQ time using MatMatMul   1.062000471919E-003
 AQ time using 6 MatMul   1.427001488370E-003
Not an ideal efficiency (still greater than 1 though, so we are in the clear), 
but things will get better if you either increase the size of A or Q.
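
In petsc4py, the same idea would look like this (the binary file names are just placeholders for however A and Q are actually read in):

from petsc4py import PETSc

viewer = PETSc.Viewer().createBinary("A.dat", "r", comm=PETSc.COMM_WORLD)  # placeholder file name
A = PETSc.Mat().create(comm=PETSc.COMM_WORLD)
A.setType(PETSc.Mat.Type.AIJ)       # sparse
A.load(viewer)

viewer = PETSc.Viewer().createBinary("Q.dat", "r", comm=PETSc.COMM_WORLD)  # placeholder file name
Q = PETSc.Mat().create(comm=PETSc.COMM_WORLD)
Q.setType(PETSc.Mat.Type.DENSE)     # the key line: declare Q dense *before* loading it
Q.load(viewer)

AQ = A.matMult(Q)                   # sparse-times-dense product instead of sparse-times-sparse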

Thanks,
Pierre

> On 15 Feb 2023, at 4:34 PM, Guido Margherita  wrote:
> 
> Hi, 
> 
> You can find the reproducer at this link 
> https://github.com/margheguido/Miniapp_MatMatMul , including the matrices I 
> used.
> I have trouble understanding what is different in my case from the one you 
> referred me to. 
> 
> Thank you so much,
> Margherita 
> 
>> On 13 Feb 2023, at 3:51 PM, Pierre Jolivet wrote:
>> 
>> Could you please share a reproducer?
>> What you are seeing is not typical of the performance of such a kernel, both 
>> from a theoretical and a practical (see fig. 2 of 
>> https://joliv.et/article.pdf) point of view.
>> 
>> Thanks,
>> Pierre
>> 
>>> On 13 Feb 2023, at 3:38 PM, Guido Margherita via petsc-users 
>>>  wrote:
>>> 
>>> A is a sparse MATSEQAIJ, Q is dense.
>>> 
>>> Thanks,
>>> Margherita 
>>> 
>>>> On 13 Feb 2023, at 3:27 PM, knep...@gmail.com wrote:
>>>> 
>>>> On Mon, Feb 13, 2023 at 9:21 AM Guido Margherita via petsc-users 
>>>>  wrote:
>>>> Hi all, 
>>>> 
>>>> I realised that performing a matrix-matrix multiplication using the 
>>>> function MatMatMult is not at all computationally efficient compared 
>>>> to performing N matrix-vector multiplications with MatMult, N being 
>>>> the number of columns of the second matrix in the product. 
>>>> When I multiply a matrix A (46816 x 46816) by a matrix Q (46816 x 6), the 
>>>> MatMatMult function is indeed 6 times more expensive than 6 calls to 
>>>> MatMult, when performed sequentially (0.04056 s vs 0.0062 s). When the 
>>>> same code is run in parallel the gap grows even more, being 10 times more 
>>>> expensive.
>>>>  Is there an explanation for it?
>>>> 
>>>> So we can reproduce this, what kind of matrix is A? I am assuming that Q 
>>>> is dense.
>>>> 
>>>>  Thanks,
>>>> 
>>>> Matt
>>>> 
>>>> 
>>>> t1 = MPI_Wtime()
>>>> call MatMatMult(A,Q,MAT_INITIAL_MATRIX, PETSC_DEFAULT_REAL, AQ, ierr )
>>>> t2 = MPI_Wtime() 
>>>> t_MatMatMul = t2-t1
>>>> 
>>>> t_MatMul=0.0
>>>> do j = 0, m-1
>>>>call MatGetColumnVector(Q, q_vec, j,ierr)
>>>> 
>>>>t1 = MPI_Wtime()
>>>>call MatMult(A, q_vec, aq_vec, ierr) 
>>>>t2 = MPI_Wtime()
>>>> 
>>>>t_MatMul = t_MatMul + t2-t1
>>>> end do
>>>> 
>>>> Thank you, 
>>>> Margherita Guido
>>>> 
>>>> 
>>>> 
>>>> -- 
>>>> What most experimenters take for granted before they begin their 
>>>> experiments is infinitely more interesting than any results to which their 
>>>> experiments lead.
>>>> -- Norbert Wiener
>>>> 
>>>> https://www.cse.buffalo.edu/~knepley/
>>> 
> 



Re: [petsc-users] MatMatMul inefficient

2023-02-13 Thread Pierre Jolivet
Could you please share a reproducer?
What you are seeing is not typical of the performance of such a kernel, both 
from a theoretical and a practical (see fig. 2 of https://joliv.et/article.pdf) 
point of view.

Thanks,
Pierre

> On 13 Feb 2023, at 3:38 PM, Guido Margherita via petsc-users 
>  wrote:
> A is a sparse MATSEQAIJ, Q is dense.
> 
> Thanks,
> Margherita 
> 
>> On 13 Feb 2023, at 3:27 PM, knep...@gmail.com wrote:
>> 
>> On Mon, Feb 13, 2023 at 9:21 AM Guido Margherita via petsc-users 
>>  wrote:
>> Hi all, 
>> 
>> I realised that performing a matrix-matrix multiplication using the function 
>> MatMatMult is not at all computationally efficient compared to 
>> performing N matrix-vector multiplications with MatMult, N being the 
>> number of columns of the second matrix in the product. 
>> When I multiply a matrix A (46816 x 46816) by a matrix Q (46816 x 6), the 
>> MatMatMult function is indeed 6 times more expensive than 6 calls to 
>> MatMult, when performed sequentially (0.04056 s vs 0.0062 s). When the same 
>> code is run in parallel the gap grows even more, being 10 times more 
>> expensive.
>>  Is there an explanation for it?
>> 
>> So we can reproduce this, what kind of matrix is A? I am assuming that Q is 
>> dense.
>> 
>>  Thanks,
>> 
>> Matt
>> 
>> 
>> t1 = MPI_Wtime()
>> call MatMatMult(A,Q,MAT_INITIAL_MATRIX, PETSC_DEFAULT_REAL, AQ, ierr )
>> t2 = MPI_Wtime() 
>> t_MatMatMul = t2-t1
>> 
>> t_MatMul=0.0
>> do j = 0, m-1
>>call MatGetColumnVector(Q, q_vec, j,ierr)
>> 
>>t1 = MPI_Wtime()
>>call MatMult(A, q_vec, aq_vec, ierr) 
>>t2 = MPI_Wtime()
>> 
>>t_MatMul = t_MatMul + t2-t1
>> end do
>> 
>> Thank you, 
>> Margherita Guido
>> 
>> 
>> 
>> -- 
>> What most experimenters take for granted before they begin their experiments 
>> is infinitely more interesting than any results to which their experiments 
>> lead.
>> -- Norbert Wiener
>> 
>> https://www.cse.buffalo.edu/~knepley/


Re: [petsc-users] MatConvert changes distribution of local rows

2023-01-13 Thread Pierre Jolivet


> On 13 Jan 2023, at 9:18 AM, Marius Buerkle  wrote:
> 
> The conversion is from MATMPIDENSE to MATSCALAPACK,

OK, that’s not possible, because PETSc and ScaLAPACK use different 
distributions for dense matrices.

> but I think it happens also for other matrix types IIRC.

Which one?

Thanks,
Pierre

>   
> Sent: Friday, 13 January 2023 at 16:58
> From: "Pierre Jolivet" 
> To: "Marius Buerkle" 
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] MatConvert changes distribution of local rows
>  
> On 13 Jan 2023, at 8:49 AM, Marius Buerkle  wrote:
>  
> Hi,
>  
> I have a matrix A for which I defined the number of local rows per process 
> manually using MatSetSizes. When I use MatConvert to change the matrix type, 
> it changes the number of local rows (to what one would get if MatSetSizes is 
> called with PETSC_DECIDE for the number of local rows), which causes problems 
> when doing MatVec products and the like. Is there any way to preserve 
> the number of local rows when using MatConvert?
>  
> This is most likely a bug, it’s not handled properly in some MatConvert() 
> implementations.
> Could you please share either the matrix types or a minimal working example?
>  
> Thanks,
> Pierre
>  
>  
> Best,
> Marius



Re: [petsc-users] MatConvert changes distribution of local rows

2023-01-12 Thread Pierre Jolivet

> On 13 Jan 2023, at 8:49 AM, Marius Buerkle  wrote:
> 
> Hi,
>  
> I have a matrix A for which I defined the number of local rows per process 
> manually using MatSetSizes. When I use MatConvert to change the matrix type, 
> it changes the number of local rows (to what one would get if MatSetSizes is 
> called with PETSC_DECIDE for the number of local rows), which causes problems 
> when doing MatVec products and the like. Is there any way to preserve 
> the number of local rows when using MatConvert?

This is most likely a bug, it’s not handled properly in some MatConvert() 
implementations.
Could you please share either the matrix types or a minimal working example?
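
For reference, a minimal working example could be as small as the following petsc4py sketch (the sizes and the target type are made up; it simply prints the local row distribution before and after the conversion):

from petsc4py import PETSc

comm = PETSc.COMM_WORLD
rank, nproc = comm.getRank(), comm.getSize()

N = 8
m = N - (nproc - 1) if rank == 0 else 1        # deliberately uneven local row distribution
A = PETSc.Mat().createDense(size=((m, N), (None, N)), comm=comm)
A.setUp()
A.assemble()

B = A.convert("aij")                           # or any other target type

PETSc.Sys.syncPrint(f"[{rank}] local size before: {A.getLocalSize()}, after: {B.getLocalSize()}")
PETSc.Sys.syncFlush()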

Thanks,
Pierre

>  
> Best,
> Marius



Re: [petsc-users] Error running configure on HDF5 in PETSc-3.18.3

2023-01-06 Thread Pierre Jolivet


> On 6 Jan 2023, at 4:49 PM, Danyang Su  wrote:
> 
> Hi All,
>  
> I get ‘Error running configure on HDF5’ in PETSc-3.18.3 on MacOS, but no 
> problem on Ubuntu. Attached is the configuration log file. 
>  
> ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-mumps 
> --download-scalapack --download-parmetis --download-metis --download-ptscotch 
> --download-fblaslapack --download-mpich --download-hypre 
> --download-superlu_dist --download-hdf5=yes --with-debugging=0 
> --download-cmake --with-hdf5-fortran-bindings
>  
> Any idea on this?

Could you try to reconfigure in a shell without conda being activated?
You have 
PATH=/Users/danyangsu/Soft/Anaconda3/bin:/Users/danyangsu/Soft/Anaconda3/condabin:[…]
 which typically results in a broken configuration.

Thanks,
Pierre

> Thanks,
>  
> Danyang



Re: [petsc-users] error when trying to compile with HPDDM

2023-01-05 Thread Pierre Jolivet


> On 5 Jan 2023, at 7:06 PM, Matthew Knepley  wrote:
> 
> On Thu, Jan 5, 2023 at 11:36 AM Alfredo Jaramillo  > wrote:
>> Dear developers,
>> I'm trying to compile PETSc together with the HPDDM library. A series of 
>> errors appeared:
>> 
>> /home/ajaramillo/petsc/x64-openmpi-aldaas2021/include/HPDDM_specifications.hpp:
>>  In static member function ‘static constexpr __float128 
>> std::numeric_limits<__float128>::min()’:
>> /home/ajaramillo/petsc/x64-openmpi-aldaas2021/include/HPDDM_specifications.hpp:54:57:
>>  error: unable to find numeric literal operator ‘operator""Q’
>>54 | static constexpr __float128 min() noexcept { return FLT128_MIN; }
>> 
>> I'm attaching the log files to this email.
> 
> Pierre,
> 
> It looks like we may need to test for FLT_MIN and FLT_MAX in configure since 
> it looks like Alfredo's headers do not have them.
> Is this correct?

We could do that, but I bet this is a side effect of the fact that Alfredo is 
using --with-cxx-dialect=C++11.
Alfredo, did you get that flag from someone else’s configure, or do you know 
what that flag is doing?
- If yes, do you really need to stick to -std=c++11?
- If no, please look at 
https://gitlab.com/petsc/petsc/-/issues/1284#note_1173803107 and consider 
removing that flag, or at least changing the option to --with-cxx-dialect=11. 
If compilation still fails, please send the up-to-date configure.log/make.log

Thanks,
Pierre

>   Thanks,
> 
>  Matt
>  
>> Could you please help me with this?
>> 
>> bests regards
>> Alfredo
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/ 



Re: [petsc-users] Installation With MSYS2 and MinGW Compilers

2022-12-06 Thread Pierre Jolivet

> On 6 Dec 2022, at 7:50 PM, Singh, Rajesh K  wrote:
> 
> Hi Pierre,
>  
> Thank you again for prompt response. 
>  
> 1) MinGW does not handle %td and %zu, so a default build triggers tons of 
> warnings. Rajesh, you can add CFLAGS="-Wno-format-extra-args 
> -Wno-stringop-overflow -Wformat=0" CXXFLAGS="-Wno-format-extra-args 
> -Wno-stringop-overflow -Wformat=0" to have much fewer gibberish printed on 
> screen
>  
>    Where should I add these flags? In the makefile or somewhere else?

On the ./configure command line.

> I also followed the steps suggested by Satish. I got errors. 
>  
> Sing***@WE* MINGW64 ~/petsc-3.18.2/src/snes/tutorials
> $ mpiexe -n 2 ex5f90.exe
> bash: mpiexe: command not found

Either add Microsoft MPI to your path, or use  /C/Program\ Files/Microsoft\ 
MPI/Bin/mpiexec.exe instead of mpiexe

Thanks,
Pierre

> Then I tried command 
>  
> mpif90.exe -n 2 ./ex5f90.exe
>  
> I received many errors. 
>  
> Your help in fixing these issues would be appreciated.
>  
> Thanks,
> Rajesh
>  
>  
>  
>  
>  
> From: Pierre Jolivet  
> Sent: Tuesday, December 6, 2022 12:01 AM
> To: petsc-users 
> Cc: Singh, Rajesh K 
> Subject: Re: [petsc-users] Installation With MSYS2 and MinGW Compilers
>  
>  
> On 6 Dec 2022, at 6:43 AM, Satish Balay  <mailto:ba...@mcs.anl.gov>> wrote:
>  
> The build log looks fine. Its not clear why these warnings [and
> errors] are coming up in make check [while there are no such warnings
> in the build].
>  
> 1) MinGW does not handle %td and %zu, so a default build triggers tons of 
> warnings. Rajesh, you can add CFLAGS="-Wno-format-extra-args 
> -Wno-stringop-overflow -Wformat=0" CXXFLAGS="-Wno-format-extra-args 
> -Wno-stringop-overflow -Wformat=0" to have much fewer gibberish printed on 
> screen
> 2) I’m able to reproduce the “Circular […] dependency dropped.” warnings 
> after doing two successive make, but I know too little about Fortran (or 
> Sowing?) to understand what could trigger this.
> But I agree with Satish, the build looks good otherwise and should work.
> I made the necessary changes to Sowing, I hope I’ll be able to finish the MR 
> (https://gitlab.com/petsc/petsc/-/merge_requests/5903/)
>  by the end of the day so you can switch back to using the repository, if 
> need be.
>  
> Thanks,
> Pierre
> 
> 
> Since you are using PETSc fromfortran - Can you try compiling/running
> a test manually and see if that works.
> 
> cd src/snes/tutorials
> make ex5f90
> ./ex5f90
> mpiexec -n 2 ./ex5f90
> 
> Satish
> 
> On Tue, 6 Dec 2022, Singh, Rajesh K wrote:
> 
> 
> Hi Satish,
> 
> Thank you so much for offering help for installing PETSc in MSYS2 and 
> mingw64. Attached are configure.log and make.log files. I am newb in this. 
> Please let me know if you need further information. I will be glad to 
> provide. 
> 
> Regards,
> Rajesh
> 
> -Original Message-
> From: Satish Balay mailto:ba...@mcs.anl.gov>> 
> Sent: Monday, December 5, 2022 6:53 PM
> To: Singh, Rajesh K mailto:rajesh.si...@pnnl.gov>>
> Cc: Pierre Jolivet mailto:pie...@joliv.et>>; 
> petsc-users@mcs.anl.gov <mailto:petsc-users@mcs.anl.gov>
> Subject: Re: [petsc-users] Installation With MSYS2 and MinGW Compilers
> 
> Can you send corresponding configure.log and make.log? [should be in 
> PETSC_DIR/PETSC_ARCH/lib/petsc/conf
> 
> Also what is your requirement wrt using PETSc on windows?
> - need to link with other MS compiler libraries?
> - Can you use WSL?
> 
> Our primary instructions for windows usage is with MS/Intel compilers [with 
> cygwin tools] not MSYS2. 
> 
> https://petsc.org/release/install/windows/
> 
> But As Pierre mentioned - MSYS2 should also work.
> 
> Satish
> 
> On Mon, 5 Dec 2022, Singh, Rajesh K via petsc-users wrote:
> 
> 
> Hi Pierre,
> 
> I got f

Re: [petsc-users] Installation With MSYS2 and MinGW Compilers

2022-12-06 Thread Pierre Jolivet

> On 6 Dec 2022, at 6:43 AM, Satish Balay  wrote:
> 
> The build log looks fine. Its not clear why these warnings [and
> errors] are coming up in make check [while there are no such warnings
> in the build].

1) MinGW does not handle %td and %zu, so a default build triggers tons of 
warnings. Rajesh, you can add CFLAGS="-Wno-format-extra-args 
-Wno-stringop-overflow -Wformat=0" CXXFLAGS="-Wno-format-extra-args 
-Wno-stringop-overflow -Wformat=0" to have much fewer gibberish printed on 
screen
2) I’m able to reproduce the “Circular […] dependency dropped.” warnings after 
doing two successive make, but I know too little about Fortran (or Sowing?) to 
understand what could trigger this.
But I agree with Satish, the build looks good otherwise and should work.
I made the necessary changes to Sowing, I hope I’ll be able to finish the MR 
(https://gitlab.com/petsc/petsc/-/merge_requests/5903/) by the end of the day 
so you can switch back to using the repository, if need be.

Thanks,
Pierre

> Since you are using PETSc fromfortran - Can you try compiling/running
> a test manually and see if that works.
> 
> cd src/snes/tutorials
> make ex5f90
> ./ex5f90
> mpiexec -n 2 ./ex5f90
> 
> Satish
> 
> On Tue, 6 Dec 2022, Singh, Rajesh K wrote:
> 
>> Hi Satish,
>> 
>> Thank you so much for offering help for installing PETSc in MSYS2 and 
>> mingw64. Attached are configure.log and make.log files. I am newb in this. 
>> Please let me know if you need further information. I will be glad to 
>> provide. 
>> 
>> Regards,
>> Rajesh
>> 
>> -Original Message-
>> From: Satish Balay  
>> Sent: Monday, December 5, 2022 6:53 PM
>> To: Singh, Rajesh K 
>> Cc: Pierre Jolivet ; petsc-users@mcs.anl.gov
>> Subject: Re: [petsc-users] Installation With MSYS2 and MinGW Compilers
>> 
>> Can you send corresponding configure.log and make.log? [should be in 
>> PETSC_DIR/PETSC_ARCH/lib/petsc/conf
>> 
>> Also what is your requirement wrt using PETSc on windows?
>> - need to link with other MS compiler libraries?
>> - Can you use WSL?
>> 
>> Our primary instructions for windows usage is with MS/Intel compilers [with 
>> cygwin tools] not MSYS2. 
>> 
>> https://petsc.org/release/install/windows/
>> 
>> But As Pierre mentioned - MSYS2 should also work.
>> 
>> Satish
>> 
>> On Mon, 5 Dec 2022, Singh, Rajesh K via petsc-users wrote:
>> 
>>> Hi Pierre,
>>> 
>>> I got following error while compiling PETSc.
>>> 
>>> make all check
>>> 
>>> [inline screenshot of the make check error omitted]
>>> 
>>> Help for this would be appreciated.
>>> 
>>> Thanks,
>>> Rajesh
>>> 
>>> From: Pierre Jolivet 
>>> Sent: Monday, December 5, 2022 1:15 PM
>>> To: Singh, Rajesh K 
>>> Cc: petsc-users@mcs.anl.gov
>>> Subject: Re: [petsc-users] Installation With MSYS2 and MinGW Compilers
>>> 
>>> 
>>> On 5 Dec 2022, at 9:50 PM, Singh, Rajesh K 
>>> mailto:rajesh.si...@pnnl.gov>> wrote:
>>> 
>>> Hi Pierre,
>>> 
>>> Thank you so much for the prompt response. I will run Fortran-based code with 
>>> PETSc. Therefore I guess I will need the Fortran bindings.
>>> 
>>> OK, so, two things:
>>> 1) as said earlier, Sowing is broken with MinGW, but I'm sadly one of 
>>> the few PETSc people using this environment, so I'm one of the few who 
>>> can fix it, but I can't tell you when I'll be able to deliver
>>> 2) if you stick to an official tarball, the Fortran bindings should be 
>>> shipped in. While I work on 1), could you stick to, e.g., 
>>> https://ftp.mcs.anl.gov/pub/petsc/release-snapshots/petsc-3.18.2.tar.gz?

Re: [petsc-users] Installation With MSYS2 and MinGW Compilers

2022-12-05 Thread Pierre Jolivet

> On 5 Dec 2022, at 9:50 PM, Singh, Rajesh K  wrote:
> 
> Hi Pierre,
>  
> Thank you so much for the prompt response. I will run Fortran-based code with 
> PETSc. Therefore I guess I will need the Fortran bindings.

OK, so, two things:
1) as said earlier, Sowing is broken with MinGW, but I’m sadly one of the few 
PETSc people using this environment, so I’m one of the few who can fix it, but 
I can’t tell you when I’ll be able to deliver
2) if you stick to an official tarball, the Fortran bindings should be shipped 
in. While I work on 1), could you stick to, e.g., 
https://ftp.mcs.anl.gov/pub/petsc/release-snapshots/petsc-3.18.2.tar.gz?

Thanks,
Pierre

> Regards
> Rajesh
>  
> From: Pierre Jolivet mailto:pie...@joliv.et>> 
> Sent: Monday, December 5, 2022 12:41 PM
> To: Singh, Rajesh K mailto:rajesh.si...@pnnl.gov>>
> Cc: petsc-users@mcs.anl.gov <mailto:petsc-users@mcs.anl.gov>
> Subject: Re: [petsc-users] Installation With MSYS2 and MinGW Compilers
>  
> Check twice before you click! This email originated from outside PNNL.
>  
> Hello Rajesh, 
> Do you need Fortran bindings?
> Otherwise, ./configure --with-fortran-bindings=0 should do the trick.
> Sowing compilation is broken with MinGW compiler.
> If you need Fortran bindings, we could try to fix it.
>  
> Thanks,
> Pierre
> 
> 
> On 5 Dec 2022, at 9:22 PM, Singh, Rajesh K via petsc-users 
> mailto:petsc-users@mcs.anl.gov>> wrote:
>  
> Dear All:
>  
> I am having difficulty installing PETSc on a Windows system. I went through 
> the steps explained on the web site and got the following error.
>  
> 
>  
> Help in resolving this issue would be really appreciated.
>  
> Thanks,
> Rajesh



Re: [petsc-users] Installation With MSYS2 and MinGW Compilers

2022-12-05 Thread Pierre Jolivet
Hello Rajesh,
Do you need Fortran bindings?
Otherwise, ./configure --with-fortran-bindings=0 should do the trick.
Sowing compilation is broken with MinGW compiler.
If you need Fortran bindings, we could try to fix it.

Thanks,
Pierre

> On 5 Dec 2022, at 9:22 PM, Singh, Rajesh K via petsc-users 
>  wrote:
> 
> Dear All:
>  
> I am having difficulty installing PETSc on a Windows system. I went through 
> the steps explained on the web site and got the following error.
>  
> 
>  
> Help in resolving this issue would be really appreciated.
>  
> Thanks,
> Rajesh



Re: [petsc-users] Example for MatSeqAIJKron?

2022-12-05 Thread Pierre Jolivet
Dear Yuyun,
Here is the simple example that I wrote to test the function: 
https://petsc.org/release/src/mat/tests/ex248.c.html
You can stick to MatCreate(), but this will only work if the type of your Mat 
is indeed MatSeqAIJ.
If you need this for other types, let us know.

Thanks,
Pierre

> On 5 Dec 2022, at 9:12 AM, Yuyun Yang  wrote:
> 
> Dear PETSc team,
>  
> Is there an example for using MatSeqAIJKron? I’m using MatCreate for all 
> matrices in the code, and wonder if I need to switch to MatCreateSeqAIJ in 
> order to use this function? Just want to compute simple Kronecker products of 
> a sparse matrix with an identity matrix.
>  
> Thank you,
> Yuyun



Re: [petsc-users] About MatMumpsSetIcntl function

2022-11-30 Thread Pierre Jolivet


> On 30 Nov 2022, at 3:54 PM, Matthew Knepley  wrote:
> 
> On Wed, Nov 30, 2022 at 9:31 AM 김성익  > wrote:
>> After following the comment,  ./app -pc_type lu -ksp_type preonly 
>> -ksp_monitor_true_residual -ksp_converged_reason -ksp_view 
>> -mat_mumps_icntl_7 5
> 
> Okay, you can see that it is using METIS:
> 
>   INFOG(7) (ordering option effectively used after analysis): 5
> 
> It looks like the server stuff was not seeing the option. Put it back in and 
> send the output.

With a small twist, the option should now read -mpi_mat_mumps_icntl_7 5, cf. 
https://petsc.org/release/src/ksp/pc/impls/mpi/pcmpi.c.html#line126

Thanks,
Pierre

>   Thanks,
> 
>  Matt
> 
>> The outputs are as below.
>> 
>>   0 KSP none resid norm 2.e+00 true resid norm 
>> 4.241815708566e-16 ||r(i)||/||b|| 2.120907854283e-16
>>   1 KSP none resid norm 4.241815708566e-16 true resid norm 
>> 4.241815708566e-16 ||r(i)||/||b|| 2.120907854283e-16
>> Linear solve converged due to CONVERGED_ITS iterations 1
>> KSP Object: 1 MPI process
>>   type: preonly
>>   maximum iterations=1, initial guess is zero
>>   tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
>>   left preconditioning
>>   using NONE norm type for convergence test
>> PC Object: 1 MPI process
>>   type: lu
>> out-of-place factorization
>> tolerance for zero pivot 2.22045e-14
>> matrix ordering: external
>> factor fill ratio given 0., needed 0.
>>   Factored matrix follows:
>> Mat Object: 1 MPI process
>>   type: mumps
>>   rows=24, cols=24
>>   package used to perform factorization: mumps
>>   total: nonzeros=576, allocated nonzeros=576
>> MUMPS run parameters:
>>   Use -ksp_view ::ascii_info_detail to display information for 
>> all processes
>>   RINFOG(1) (global estimated flops for the elimination after 
>> analysis): 8924.
>>   RINFOG(2) (global estimated flops for the assembly after 
>> factorization): 0.
>>   RINFOG(3) (global estimated flops for the elimination after 
>> factorization): 8924.
>>   (RINFOG(12) RINFOG(13))*2^INFOG(34) (determinant): 
>> (0.,0.)*(2^0)
>>   INFOG(3) (estimated real workspace for factors on all 
>> processors after analysis): 576
>>   INFOG(4) (estimated integer workspace for factors on all 
>> processors after analysis): 68
>>   INFOG(5) (estimated maximum front size in the complete tree): 
>> 24
>>   INFOG(6) (number of nodes in the complete tree): 1
>>   INFOG(7) (ordering option effectively used after analysis): 5
>>   INFOG(8) (structural symmetry in percent of the permuted 
>> matrix after analysis): 100
>>   INFOG(9) (total real/complex workspace to store the matrix 
>> factors after factorization): 576
>>   INFOG(10) (total integer space store the matrix factors after 
>> factorization): 68
>>   INFOG(11) (order of largest frontal matrix after 
>> factorization): 24
>>   INFOG(12) (number of off-diagonal pivots): 0
>>   INFOG(13) (number of delayed pivots after factorization): 0
>>   INFOG(14) (number of memory compress after factorization): 0
>>   INFOG(15) (number of steps of iterative refinement after 
>> solution): 0
>>   INFOG(16) (estimated size (in MB) of all MUMPS internal data 
>> for factorization after analysis: value on the most memory consuming 
>> processor): 0
>>   INFOG(17) (estimated size of all MUMPS internal data for 
>> factorization after analysis: sum over all processors): 0
>>   INFOG(18) (size of all MUMPS internal data allocated during 
>> factorization: value on the most memory consuming processor): 0
>>   INFOG(19) (size of all MUMPS internal data allocated during 
>> factorization: sum over all processors): 0
>>   INFOG(20) (estimated number of entries in the factors): 576
>>   INFOG(21) (size in MB of memory effectively used during 
>> factorization - value on the most memory consuming processor): 0
>>   INFOG(22) (size in MB of memory effectively used during 
>> factorization - sum over all processors): 0
>>   INFOG(23) (after analysis: value of ICNTL(6) effectively 
>> used): 0
>>   INFOG(24) (after analysis: value of ICNTL(12) effectively 
>> used): 1
>>   INFOG(25) (after factorization: number of pivots modified by 
>> static pivoting): 0
>>   INFOG(28) (after factorization: number of null pivots 
>> encountered): 0
>>   INFOG(29) (after factorization: effective number of entries in 
>> the factors (sum over all processors)): 576
>>   INFOG(30, 31) (after solution: size in Mbytes of memory used 
>> during solution phase): 0, 0
>>   INFOG(32) (after analysis: type of analysis done): 

Re: [petsc-users] Fortran DMLoad bindings

2022-11-25 Thread Pierre Jolivet
That example has no DMLoad(), and the interface is indeed not automatically 
generated https://gitlab.com/petsc/petsc/-/blob/main/src/dm/interface/dm.c#L4075
I’m not sure why, though.

Thanks,
Pierre

> On 25 Nov 2022, at 6:42 PM, Mark Adams  wrote:
> 
> It looks like it is available with an example here: 
> 
> https://petsc.org/main/src/dm/impls/plex/tutorials/ex3f90.F90.html
> 
> Try 'cd src/dm/impls/plex/tutorials; make ex3f90'
> 
> Mark
> 
> 
> 
> 
> On Fri, Nov 25, 2022 at 6:32 AM Nicholas Arnold-Medabalimi 
> mailto:narno...@umich.edu>> wrote:
>> Good Morning
>> 
>> I am adding some PETSc mesh management into an existing Fortran solver. 
>> I'd like to use the DMLoad() function to read in a generated DMPlex (using 
>> DMView from a companion C code I've been using to debug). It appears there 
>> isn't an existing binding for that function (or I might be making a 
>> mistake.) 
>> 
>>  I noticed some outdated user posts about using the more general 
>> PetscObjectView to achieve the result, but I can't seem to replicate it (and 
>> it might be outdated information).
>> 
>> Any assistance on this would be appreciated.
>> 
>> Happy Thanksgiving & Sincerely
>> Nicholas
>> 
>> -- 
>> Nicholas Arnold-Medabalimi
>> 
>> Ph.D. Candidate
>> Computational Aeroscience Lab
>> University of Michigan


