Re: [petsc-users] MPI barrier issue using MatZeroRows

2023-11-29 Thread Amneet Bhalla
Awesome! MatZeroRows is very useful and simplified the code logic. On Wed, Nov 29, 2023 at 5:14 PM Matthew Knepley wrote: > On Wed, Nov 29, 2023 at 7:27 PM Amneet Bhalla > wrote: > >> Ah, I also tried without step 2 (i.e., manually doing MPI_allgatherv for >> Dirichlet rows), and that also

Re: [petsc-users] MPI barrier issue using MatZeroRows

2023-11-29 Thread Amneet Bhalla
Ah, I also tried without step 2 (i.e., manually doing MPI_allgatherv for Dirichlet rows), and that also works. So it seems that each processor needs to send in its own Dirichlet rows, not a union of them. Is that correct? On Wed, Nov 29, 2023 at 3:48 PM Amneet Bhalla wrote: > Thanks
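A minimal sketch of that conclusion, assuming each rank already holds a list of candidate Dirichlet rows (the function and variable names below are illustrative, not taken from the actual code): every rank keeps only the rows it owns and passes just those to the single collective MatZeroRows() call; no MPI_allgatherv union is needed.

    #include <petscmat.h>
    #include <vector>

    // Sketch: each rank zeroes only the Dirichlet rows it owns.
    PetscErrorCode zero_local_dirichlet_rows(Mat mat, const std::vector<PetscInt>& dirichlet_rows)
    {
        PetscErrorCode ierr;
        PetscInt       rstart, rend;
        ierr = MatGetOwnershipRange(mat, &rstart, &rend); CHKERRQ(ierr);

        // Keep only the rows that live on this rank.
        std::vector<PetscInt> owned;
        for (PetscInt row : dirichlet_rows)
            if (row >= rstart && row < rend) owned.push_back(row);

        // Collective: every rank makes this call once, even if 'owned' is empty.
        ierr = MatZeroRows(mat, (PetscInt)owned.size(),
                           owned.empty() ? NULL : owned.data(),
                           1.0, NULL, NULL); CHKERRQ(ierr);
        return 0;
    }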

Re: [petsc-users] MPI barrier issue using MatZeroRows

2023-11-29 Thread Amneet Bhalla
Thanks Barry! I tried that and it seems to be working. This is what I did. It would be great if you could take a look at it and let me know if this is what you had in mind. 1. Collected Dirichlet rows locally
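A hedged sketch of step 1 as described here (every identifier is a stand-in for the real patch/boundary loop, and the std::set is just one possible way to avoid recording the same row twice when neighbouring boxes share cells): while sweeping this rank's boundary cells, record the global row index of each Dirichlet DOF in a per-rank container, and defer the MatZeroRows() call until after the sweep.

    #include <petscsys.h>
    #include <set>
    #include <vector>

    // Hypothetical predicate and cell-to-row mapping supplied by the application.
    extern bool     cell_has_dirichlet_bc(int comp, int cell);
    extern PetscInt dirichlet_row_index(int comp, int cell);

    // Step 1 (sketch): collect this rank's Dirichlet row indices.
    std::vector<PetscInt> collect_dirichlet_rows(int ncomp, int ncells)
    {
        std::set<PetscInt> rows; // a set avoids duplicates from shared cells
        for (int comp = 0; comp < ncomp; ++comp)
            for (int cell = 0; cell < ncells; ++cell)
                if (cell_has_dirichlet_bc(comp, cell))
                    rows.insert(dirichlet_row_index(comp, cell));
        return std::vector<PetscInt>(rows.begin(), rows.end());
    }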

Re: [petsc-users] MPI barrier issue using MatZeroRows

2023-11-29 Thread Barry Smith
> On Nov 29, 2023, at 2:11 PM, Matthew Knepley wrote: > > On Wed, Nov 29, 2023 at 1:55 PM Amneet Bhalla > wrote: >> So the code logic is: after the matrix is assembled, I iterate over all >> distributed patches in the domain to see which of the patches is abutting

Re: [petsc-users] MPI barrier issue using MatZeroRows

2023-11-29 Thread Matthew Knepley
On Wed, Nov 29, 2023 at 1:55 PM Amneet Bhalla wrote: > So the code logic is: after the matrix is assembled, I iterate over all > distributed patches in the domain to see which of the patches abuts a > Dirichlet boundary. Depending upon which patch abuts a physical and > Dirichlet boundary, a

Re: [petsc-users] MPI barrier issue using MatZeroRows

2023-11-29 Thread Amneet Bhalla
So the code logic is: after the matrix is assembled, I iterate over all distributed patches in the domain to see which of the patches abuts a Dirichlet boundary. Depending upon which patch abuts a physical and Dirichlet boundary, a processor will call this routine. However, that same processor
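A sketch of the pattern being described, with all names invented for illustration: the collective MatZeroRows() call is guarded by a rank-dependent condition, so ranks whose patches do not touch a Dirichlet boundary never enter the matching MPI reduction while the others block inside it, which is consistent with the hang.

    #include <petscmat.h>

    // Problematic shape (sketch): a collective call guarded by a condition
    // that is true on some ranks and false on others.
    PetscErrorCode apply_dirichlet_rows(Mat mat, PetscBool patch_touches_dirichlet_bdry,
                                        PetscInt nrows, const PetscInt rows[])
    {
        PetscErrorCode ierr;
        if (patch_touches_dirichlet_bdry) // rank-dependent
        {
            ierr = MatZeroRows(mat, nrows, rows, 1.0, NULL, NULL); CHKERRQ(ierr); // collective
        }
        return 0;
    }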

Re: [petsc-users] MPI barrier issue using MatZeroRows

2023-11-29 Thread Matthew Knepley
On Wed, Nov 29, 2023 at 12:30 PM Amneet Bhalla wrote: > Ok, I added both, but it still hangs. Here is the bt from all three tasks: > It looks like two processes are calling AllReduce, but one is not. Are all procs not calling MatZeroRows? Thanks, Matt > Task 1: > >

Re: [petsc-users] MPI barrier issue using MatZeroRows

2023-11-29 Thread Barry Smith
> On Nov 29, 2023, at 1:16 AM, Amneet Bhalla wrote: > > BTW, I think you meant using MatSetOption(mat, MAT_NO_OFF_PROC_ZERO_ROWS, > PETSC_TRUE) Yes > instead of MatSetOption(mat, MAT_NO_OFF_PROC_ENTRIES, PETSC_TRUE) ?? Please try setting both flags. > However, that also did not help to
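For reference, a small sketch of setting both flags on the matrix discussed here (the function name is illustrative); the comments paraphrase what each option promises PETSc, which is why they allow the corresponding communication to be skipped.

    #include <petscmat.h>

    // Sketch: set both options before calling MatZeroRows().
    PetscErrorCode set_no_off_proc_hints(Mat mat)
    {
        PetscErrorCode ierr;
        // This rank will not set values in rows owned by other ranks.
        ierr = MatSetOption(mat, MAT_NO_OFF_PROC_ENTRIES, PETSC_TRUE); CHKERRQ(ierr);
        // This rank will not ask MatZeroRows() to zero rows owned by other ranks.
        ierr = MatSetOption(mat, MAT_NO_OFF_PROC_ZERO_ROWS, PETSC_TRUE); CHKERRQ(ierr);
        return 0;
    }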

Re: [petsc-users] MPI barrier issue using MatZeroRows

2023-11-28 Thread Amneet Bhalla
I added that option, but the code still gets stuck at the same MatZeroRows call with 3 processors. On Tue, Nov 28, 2023 at 7:23 PM Amneet Bhalla wrote: > > > On Tue, Nov 28, 2023 at 6:42 PM Barry Smith wrote: > >> >> for (int comp = 0; comp < 2; ++comp) >> { >>

Re: [petsc-users] MPI barrier issue using MatZeroRows

2023-11-28 Thread Amneet Bhalla
On Tue, Nov 28, 2023 at 6:42 PM Barry Smith wrote:
>
> for (int comp = 0; comp < 2; ++comp)
> {
>     ...
>     for (Box::Iterator bc(bc_coef_box); bc; bc++)
>     {
>         ..
>         if

Re: [petsc-users] MPI barrier issue using MatZeroRows

2023-11-28 Thread Barry Smith
for (int comp = 0; comp < 2; ++comp)
{
    ...
    for (Box::Iterator bc(bc_coef_box); bc; bc++)
    {
        ..
        if (IBTK::abs_equal_eps(b, 0.0))
        {
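A sketch of the restructuring the rest of the thread converges on (everything other than the PETSc calls is a stand-in for the real loop excerpted above): record the Dirichlet rows inside the nested boundary loops instead of zeroing them there, then make one collective MatZeroRows() call per rank after the loops, so every rank makes the same number of collective calls.

    #include <petscmat.h>
    #include <vector>

    // Hypothetical predicate and cell-to-row mapping supplied by the application.
    extern bool     cell_is_dirichlet(int comp, int cell);   // the abs_equal_eps(b, 0.0) case above
    extern PetscInt dirichlet_row_index(int comp, int cell);

    PetscErrorCode zero_dirichlet_rows(Mat mat, int ncells)
    {
        PetscErrorCode        ierr;
        std::vector<PetscInt> rows;
        for (int comp = 0; comp < 2; ++comp)
            for (int cell = 0; cell < ncells; ++cell)   // stands in for the Box::Iterator loop
                if (cell_is_dirichlet(comp, cell))
                    rows.push_back(dirichlet_row_index(comp, cell));

        // One collective call per rank, after the loops; ranks with nothing to zero pass 0 rows.
        ierr = MatZeroRows(mat, (PetscInt)rows.size(),
                           rows.empty() ? NULL : rows.data(), 1.0, NULL, NULL); CHKERRQ(ierr);
        return 0;
    }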