I'm truly sorry, my mistake.
I set nDof = 1 for simplicity. You can find my code in the attachment below. In 
that code I tried to find an example of a cell that is a neighbor of a cell on 
another processor and print them.
Here is my output:
(base) ➜ build git:(main) ✗ /petsc/lib/petsc/bin/petscmpiexec -n 2 ./ex1_eg 
-dm_plex_dim 3 -dm_plex_simplex 0 -dm_plex_box_faces 3,3,3
Before Distribution Rank: 0, cStart: 0, cEndInterior: 27, cEnd: 27
Before Distribution Rank: 1, cStart: 0, cEndInterior: 0, cEnd: 0
After Distribution Rank: 0, cStart: 0, cEndInterior: 13, cEnd: 27
After Distribution Rank: 1, cStart: 0, cEndInterior: 14, cEnd: 24
[0] m: 13 n: 13
[1] m: 14 n: 14
[1] Face: 94, Center Cell: 7, Ghost Neighbor Cell: 23
[0] Face: 145, Center Cell: 12, Ghost Neighbor Cell: 20

For example, if I'm writing the residual for cell 12 on rank 0, I thought I 
would also need to write to entry (12, 20) of the matrix. But it looks like 
that isn't the case.
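
To make it concrete, this is the kind of update I had in mind for that pair, 
e.g. right after DMCreateMatrix() in the attached code (a minimal sketch, 
assuming nDof = 1 so local dof indices equal local cell numbers, and that the 
Mat from DMCreateMatrix() carries the DM's local-to-global mapping):

    PetscInt    row = 12;  /* locally owned cell on rank 0      */
    PetscInt    col = 20;  /* its overlap (ghost) neighbor cell */
    PetscScalar val = 1.0; /* placeholder Jacobian entry        */

    PetscCall(MatSetValuesLocal(J, 1, &row, 1, &col, &val, ADD_VALUES));
    PetscCall(MatAssemblyBegin(J, MAT_FINAL_ASSEMBLY));
    PetscCall(MatAssemblyEnd(J, MAT_FINAL_ASSEMBLY));
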
Thanks,
Guer

------- Original Message -------
On Monday, October 16th, 2023 at 4:26 PM, Matthew Knepley <[email protected]> 
wrote:

> On Mon, Oct 16, 2023 at 9:22 AM erdemguer <[email protected]> wrote:
>
>> Thank you for your many responses. It looks like I'm missing something, 
>> sorry for my confusion, but let's take processor 0 in your first output: 
>> cEndInterior: 16 and cEnd: 24.
>> I'm calculating the Jacobian for cell=14, dof=0 (row = 42), and cell=18, 
>> dof=2 (col = 56) has influence on it. (Cell 18 is on the processor boundary.)
>> Shouldn't I have to write values at (42, 56)?
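>>
>> (I computed the row and column as cell * nDof + dof with nDof = 3, i.e. 
>> 14 * 3 + 0 = 42 and 18 * 3 + 2 = 56.)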
>
> Imagine you are me getting this mail. When I mail you, I show you _exactly_ 
> what I ran and which command line options I used. You do not. I provide you 
> all the output. You do not. You can see that someone would only be guessing 
> when replying to this email. Also note that you have two dofs per cell, so 
> the cell numbers are not the row numbers for the Jacobian. Please send 
> something reproducible when you want help on running.
>
> Thanks,
>
> Matt
>
>> Thanks,
>> Guer
>>
>> ------- Original Message -------
>> On Monday, October 16th, 2023 at 4:11 PM, Matthew Knepley 
>> <[email protected]> wrote:
>>
>>> On Mon, Oct 16, 2023 at 6:54 AM erdemguer <[email protected]> wrote:
>>>
>>>> Hey again.
>>>>
>>>> This code outputs for example:
>>>>
>>>> After Distribution Rank: 1, cStart: 0, cEndInterior: 14, cEnd: 24
>>>> After Distribution Rank: 0, cStart: 0, cEndInterior: 13, cEnd: 27
>>>> [0] m: 39 n: 39
>>>> [1] m: 42 n: 42
>>>>
>>>> Shouldn't it be 39 x 81 and 42 x 72 because of the overlapping cells on 
>>>> processor boundaries?
>>>
>>> Here is my output
>>>
>>> master *:~/Downloads/tmp/Guer$ /PETSc3/petsc/apple/bin/mpiexec -n 2 ./ex1 
>>> -malloc_debug 0 -dm_refine 1
>>> Before Distribution Rank: 1, cStart: 0, cEndInterior: 0, cEnd: 0
>>> Before Distribution Rank: 0, cStart: 0, cEndInterior: 32, cEnd: 32
>>> After Distribution Rank: 1, cStart: 0, cEndInterior: 16, cEnd: 24
>>> After Distribution Rank: 0, cStart: 0, cEndInterior: 16, cEnd: 24
>>> [0] m: 48 n: 48
>>> [1] m: 48 n: 48
>>>
>>> The mesh is 4x4, with each quad split into two triangles, so 32 triangles. 
>>> Then we distribute it and have 8 overlap cells on each side. You can get quads using
>>>
>>> master *:~/Downloads/tmp/Guer$ /PETSc3/petsc/apple/bin/mpiexec -n 2 ./ex1 
>>> -malloc_debug 0 -dm_plex_simplex 0 -dm_refine 1 -dm_view
>>> Before Distribution Rank: 1, cStart: 0, cEndInterior: 0, cEnd: 0
>>> Before Distribution Rank: 0, cStart: 0, cEndInterior: 16, cEnd: 16
>>> After Distribution Rank: 1, cStart: 0, cEndInterior: 8, cEnd: 12
>>> After Distribution Rank: 0, cStart: 0, cEndInterior: 8, cEnd: 12
>>> [0] m: 24 n: 24
>>> [1] m: 24 n: 24
>>>
>>> It is the same 4x4 mesh, but now with quads.
>>>
>>> Thanks,
>>>
>>> Matt
>>>
>>>> P.S. It looks like I should have used PetscFV or something like that in 
>>>> the first place. At first I thought, "I will just use SNES and compute 
>>>> only the residual and Jacobian on cells, so why bother with PetscFV?"
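>>>>
>>>> Something like this, I guess (just a sketch, not tested):
>>>>
>>>> PetscFV fv;
>>>> PetscCall(PetscFVCreate(PETSC_COMM_WORLD, &fv));
>>>> PetscCall(PetscFVSetType(fv, PETSCFVLEASTSQUARES));
>>>> PetscCall(PetscFVSetNumComponents(fv, 1));
>>>> PetscCall(PetscFVSetSpatialDimension(fv, 3));
>>>> PetscCall(DMSetField(dm, 0, NULL, (PetscObject)fv));
>>>> PetscCall(DMCreateDS(dm));
>>>> PetscCall(PetscFVDestroy(&fv));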
>>>>
>>>> Thanks,
>>>> E.
>>>>
>>>> ------- Original Message -------
>>>> On Friday, October 13th, 2023 at 3:00 PM, Matthew Knepley 
>>>> <[email protected]> wrote:
>>>>
>>>>> On Fri, Oct 13, 2023 at 7:26 AM erdemguer <[email protected]> wrote:
>>>>>
>>>>>> Hi, unfortunately it's me again.
>>>>>>
>>>>>> I have some weird trouble creating a matrix with DMPlex. Actually I 
>>>>>> might not need to create the matrix explicitly, but SNESSolve crashes 
>>>>>> there too, so I updated the code you provided. When I first tried to use 
>>>>>> DMCreateMatrix(), I got the error "Unknown discretization type for 
>>>>>> field 0"; after I applied DMSetLocalSection() that error went away. But 
>>>>>> this time, when I run the code with multiple processors, I sometimes get 
>>>>>> output like:
>>>>>
>>>>> Some setup was out of order so the section size on proc1 was 0, and I was 
>>>>> not good about checking this.
>>>>> I have fixed it and attached.
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Matt
>>>>>
>>>>>> Before Distribution Rank: 0, cStart: 0, cEndInterior: 27, cEnd: 27
>>>>>> Before Distribution Rank: 1, cStart: 0, cEndInterior: 0, cEnd: 0
>>>>>> [1] ghost cell 14
>>>>>> [1] ghost cell 15
>>>>>> [1] ghost cell 16
>>>>>> [1] ghost cell 17
>>>>>> [1] ghost cell 18
>>>>>> [1] ghost cell 19
>>>>>> [1] ghost cell 20
>>>>>> [1] ghost cell 21
>>>>>> [1] ghost cell 22
>>>>>> After Distribution Rank: 1, cStart: 0, cEndInterior: 14, cEnd: 23
>>>>>> [0] ghost cell 13
>>>>>> [0] ghost cell 14
>>>>>> [0] ghost cell 15
>>>>>> [0] ghost cell 16
>>>>>> [0] ghost cell 17
>>>>>> [0] ghost cell 18
>>>>>> [0] ghost cell 19
>>>>>> [0] ghost cell 20
>>>>>> [0] ghost cell 21
>>>>>> [0] ghost cell 22
>>>>>> [0] ghost cell 23
>>>>>> After Distribution Rank: 0, cStart: 0, cEndInterior: 13, cEnd: 24
>>>>>> Fatal error in internal_Waitall: Unknown error class, error stack:
>>>>>> internal_Waitall(82)......................: MPI_Waitall(count=1, 
>>>>>> array_of_requests=0xaaaaf5f72264, array_of_statuses=0x1) failed
>>>>>> MPIR_Waitall(1099)........................:
>>>>>> MPIR_Waitall_impl(1011)...................:
>>>>>> MPIR_Waitall_state(976)...................:
>>>>>> MPIDI_CH3i_Progress_wait(187).............: an error occurred while 
>>>>>> handling an event returned by MPIDI_CH3I_Sock_Wait()
>>>>>> MPIDI_CH3I_Progress_handle_sock_event(411):
>>>>>> ReadMoreData(744).........................: ch3|sock|immedread 0xffff8851c5c0 0xaaaaf5e81cd0 0xaaaaf5e8a880
>>>>>> MPIDI_CH3I_Sock_readv(2553)...............: the supplied buffer contains invalid memory (set=0,sock=1,errno=14:Bad address)
>>>>>>
>>>>>> Sometimes the error message doesn't appear, but then, for example, I try 
>>>>>> to print the size of the matrix and it doesn't work.
>>>>>> If necessary, my configure options are: --download-mpich --download-hwloc 
>>>>>> --download-pastix --download-hypre --download-ml --download-ctetgen 
>>>>>> --download-triangle --download-exodusii --download-netcdf 
>>>>>> --download-zlib --download-pnetcdf --download-ptscotch --download-hdf5 
>>>>>> --with-cc=clang-16 --with-cxx=clang++-16 COPTFLAGS="-g -O2" 
>>>>>> CXXOPTFLAGS="-g -O2" FOPTFLAGS="-g -O2" --with-debugging=1
>>>>>>
>>>>>> Version: Petsc Release Version 3.20.0
>>>>>>
>>>>>> Thank you,
>>>>>> Guer
>>>>>>
>>>>>> ------- Original Message -------
>>>>>> On Thursday, October 12th, 2023 at 12:59 AM, erdemguer 
>>>>>> <[email protected]> wrote:
>>>>>>
>>>>>>> Thank you! That's exactly what I need.
>>>>>>>
>>>>>>> ------- Original Message -------
>>>>>>> On Wednesday, October 11th, 2023 at 4:17 PM, Matthew Knepley 
>>>>>>> <[email protected]> wrote:
>>>>>>>
>>>>>>>> On Wed, Oct 11, 2023 at 4:42 AM erdemguer <[email protected]> wrote:
>>>>>>>>
>>>>>>>>> Hi again,
>>>>>>>>
>>>>>>>> I see the problem. FV ghosts mean extra boundary cells added in FV 
>>>>>>>> methods using DMPlexCreateGhostCells() in order to impose boundary 
>>>>>>>> conditions. They are not the "ghost" cells for overlapping parallel 
>>>>>>>> decompositions. I have changed your code to give you what you want. It 
>>>>>>>> is attached.
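>>>>>>>>
>>>>>>>> As a minimal sketch of the difference (the FV-ghost routine in the current API is, I believe, DMPlexConstructGhostCells()):
>>>>>>>>
>>>>>>>> DM gdm, dmDist;
>>>>>>>> /* FV ghosts: extra cells added outside the boundary faces, used to impose BCs */
>>>>>>>> PetscCall(DMPlexConstructGhostCells(dm, NULL, NULL, &gdm));
>>>>>>>> /* Parallel ghosts: overlap cells copied from neighboring ranks during distribution */
>>>>>>>> PetscCall(DMPlexDistribute(dm, 1, NULL, &dmDist));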
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>>
>>>>>>>> Matt
>>>>>>>>
>>>>>>>>> Here is my code:
>>>>>>>>> #include <petsc.h>
>>>>>>>>> static char help[] = "dmplex";
>>>>>>>>>
>>>>>>>>> int main(int argc, char **argv)
>>>>>>>>> {
>>>>>>>>> PetscCall(PetscInitialize(&argc, &argv, NULL, help));
>>>>>>>>> DM dm, dm_dist;
>>>>>>>>> PetscSection section;
>>>>>>>>> PetscInt cStart, cEndInterior, cEnd, rank;
>>>>>>>>> PetscInt nc[3] = {3, 3, 3};
>>>>>>>>> PetscReal upper[3] = {1, 1, 1};
>>>>>>>>>
>>>>>>>>> PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
>>>>>>>>>
>>>>>>>>> DMPlexCreateBoxMesh(PETSC_COMM_WORLD, 3, PETSC_FALSE, nc, NULL, 
>>>>>>>>> upper, NULL, PETSC_TRUE, &dm);
>>>>>>>>> DMViewFromOptions(dm, NULL, "-dm1_view");
>>>>>>>>> PetscCall(DMSetFromOptions(dm));
>>>>>>>>> DMViewFromOptions(dm, NULL, "-dm2_view");
>>>>>>>>>
>>>>>>>>> PetscCall(DMPlexGetDepthStratum(dm, 3, &cStart, &cEnd));
>>>>>>>>> DMPlexComputeCellTypes(dm);
>>>>>>>>> PetscCall(DMPlexGetCellTypeStratum(dm, DM_POLYTOPE_INTERIOR_GHOST, 
>>>>>>>>> &cEndInterior, NULL));
>>>>>>>>> PetscPrintf(PETSC_COMM_SELF, "Before Distribution Rank: %d, cStart: 
>>>>>>>>> %d, cEndInterior: %d, cEnd: %d\n", rank, cStart,
>>>>>>>>> cEndInterior, cEnd);
>>>>>>>>>
>>>>>>>>> PetscInt nField = 1, nDof = 3, field = 0;
>>>>>>>>> PetscCall(PetscSectionCreate(PETSC_COMM_WORLD, &section));
>>>>>>>>> PetscSectionSetNumFields(section, nField);
>>>>>>>>> PetscCall(PetscSectionSetChart(section, cStart, cEnd));
>>>>>>>>> for (PetscInt p = cStart; p < cEnd; p++)
>>>>>>>>> {
>>>>>>>>> PetscCall(PetscSectionSetFieldDof(section, p, field, nDof));
>>>>>>>>> PetscCall(PetscSectionSetDof(section, p, nDof));
>>>>>>>>> }
>>>>>>>>>
>>>>>>>>> PetscCall(PetscSectionSetUp(section));
>>>>>>>>>
>>>>>>>>> DMSetLocalSection(dm, section);
>>>>>>>>> DMViewFromOptions(dm, NULL, "-dm3_view");
>>>>>>>>>
>>>>>>>>> DMSetAdjacency(dm, field, PETSC_TRUE, PETSC_TRUE);
>>>>>>>>> DMViewFromOptions(dm, NULL, "-dm4_view");
>>>>>>>>> PetscCall(DMPlexDistribute(dm, 1, NULL, &dm_dist));
>>>>>>>>> if (dm_dist)
>>>>>>>>> {
>>>>>>>>> DMDestroy(&dm);
>>>>>>>>> dm = dm_dist;
>>>>>>>>> }
>>>>>>>>> DMViewFromOptions(dm, NULL, "-dm5_view");
>>>>>>>>> PetscCall(DMPlexGetDepthStratum(dm, 3, &cStart, &cEnd));
>>>>>>>>> DMPlexComputeCellTypes(dm);
>>>>>>>>> PetscCall(DMPlexGetCellTypeStratum(dm, DM_POLYTOPE_FV_GHOST, 
>>>>>>>>> &cEndInterior, NULL));
>>>>>>>>> PetscPrintf(PETSC_COMM_SELF, "After Distribution Rank: %d, cStart: 
>>>>>>>>> %d, cEndInterior: %d, cEnd: %d\n", rank, cStart,
>>>>>>>>> cEndInterior, cEnd);
>>>>>>>>>
>>>>>>>>> DMDestroy(&dm);
>>>>>>>>> PetscCall(PetscFinalize());
>>>>>>>>> }
>>>>>>>>>
>>>>>>>>> This codes output is currently (on 2 processors) is:
>>>>>>>>> Before Distribution Rank: 1, cStart: 0, cEndInterior: -1, cEnd: 14
>>>>>>>>> Before Distribution Rank: 0, cStart: 0, cEndInterior: -1, cEnd: 13
>>>>>>>>> After Distribution Rank: 0, cStart: 0, cEndInterior: -1, cEnd: 27
>>>>>>>>> After Distribution Rank: 1, cStart: 0, cEndInterior: -1, cEnd: 24
>>>>>>>>>
>>>>>>>>> DMView outputs:
>>>>>>>>> dm1_view (after creation):
>>>>>>>>> DM Object: 2 MPI processes
>>>>>>>>> type: plex
>>>>>>>>> DM_0x84000004_0 in 3 dimensions:
>>>>>>>>> Number of 0-cells per rank: 64 0
>>>>>>>>> Number of 1-cells per rank: 144 0
>>>>>>>>> Number of 2-cells per rank: 108 0
>>>>>>>>> Number of 3-cells per rank: 27 0
>>>>>>>>> Labels:
>>>>>>>>> marker: 1 strata with value/size (1 (218))
>>>>>>>>> Face Sets: 6 strata with value/size (6 (9), 5 (9), 3 (9), 4 (9), 1 
>>>>>>>>> (9), 2 (9))
>>>>>>>>> depth: 4 strata with value/size (0 (64), 1 (144), 2 (108), 3 (27)) 
>>>>>>>>> celltype: 4 strata with value/size (7 (27), 0 (64), 4 (108), 1 (144))
>>>>>>>>>
>>>>>>>>> dm2_view (after setfromoptions):
>>>>>>>>> DM Object: 2 MPI processes
>>>>>>>>> type: plex
>>>>>>>>> DM_0x84000004_0 in 3 dimensions:
>>>>>>>>> Number of 0-cells per rank: 40 46
>>>>>>>>> Number of 1-cells per rank: 83 95
>>>>>>>>> Number of 2-cells per rank: 57 64
>>>>>>>>> Number of 3-cells per rank: 13 14
>>>>>>>>> Labels:
>>>>>>>>> depth: 4 strata with value/size (0 (40), 1 (83), 2 (57), 3 (13))
>>>>>>>>> marker: 1 strata with value/size (1 (109))
>>>>>>>>> Face Sets: 5 strata with value/size (1 (6), 2 (1), 3 (7), 5 (5), 6 (4))
>>>>>>>>> celltype: 4 strata with value/size (0 (40), 1 (83), 4 (57), 7 (13))
>>>>>>>>>
>>>>>>>>> dm3_view (after setting local section):
>>>>>>>>> DM Object: 2 MPI processes
>>>>>>>>> type: plex
>>>>>>>>> DM_0x84000004_0 in 3 dimensions:
>>>>>>>>> Number of 0-cells per rank: 40 46
>>>>>>>>> Number of 1-cells per rank: 83 95
>>>>>>>>> Number of 2-cells per rank: 57 64
>>>>>>>>> Number of 3-cells per rank: 13 14
>>>>>>>>> Labels:
>>>>>>>>> depth: 4 strata with value/size (0 (40), 1 (83), 2 (57), 3 (13))
>>>>>>>>> marker: 1 strata with value/size (1 (109))
>>>>>>>>> Face Sets: 5 strata with value/size (1 (6), 2 (1), 3 (7), 5 (5), 6 
>>>>>>>>> (4))
>>>>>>>>> celltype: 4 strata with value/size (0 (40), 1 (83), 4 (57), 7 (13))
>>>>>>>>> Field Field_0: adjacency FEM
>>>>>>>>>
>>>>>>>>> dm4_view (after setting adjacency):
>>>>>>>>> DM Object: 2 MPI processes
>>>>>>>>> type: plex
>>>>>>>>> DM_0x84000004_0 in 3 dimensions:
>>>>>>>>> Number of 0-cells per rank: 40 46
>>>>>>>>> Number of 1-cells per rank: 83 95
>>>>>>>>> Number of 2-cells per rank: 57 64
>>>>>>>>> Number of 3-cells per rank: 13 14
>>>>>>>>> Labels:
>>>>>>>>> depth: 4 strata with value/size (0 (40), 1 (83), 2 (57), 3 (13))
>>>>>>>>> marker: 1 strata with value/size (1 (109))
>>>>>>>>> Face Sets: 5 strata with value/size (1 (6), 2 (1), 3 (7), 5 (5), 6 
>>>>>>>>> (4))
>>>>>>>>> celltype: 4 strata with value/size (0 (40), 1 (83), 4 (57), 7 (13))
>>>>>>>>> Field Field_0: adjacency FVM++
>>>>>>>>>
>>>>>>>>> dm5_view (after distribution):
>>>>>>>>> DM Object: Parallel Mesh 2 MPI processes
>>>>>>>>> type: plex
>>>>>>>>> Parallel Mesh in 3 dimensions:
>>>>>>>>> Number of 0-cells per rank: 64 60
>>>>>>>>> Number of 1-cells per rank: 144 133
>>>>>>>>> Number of 2-cells per rank: 108 98
>>>>>>>>> Number of 3-cells per rank: 27 24
>>>>>>>>> Labels:
>>>>>>>>> depth: 4 strata with value/size (0 (64), 1 (144), 2 (108), 3 (27))
>>>>>>>>> marker: 1 strata with value/size (1 (218))
>>>>>>>>> Face Sets: 6 strata with value/size (1 (9), 2 (9), 3 (9), 4 (9), 5 
>>>>>>>>> (9), 6 (9))
>>>>>>>>> celltype: 4 strata with value/size (0 (64), 1 (144), 4 (108), 7 (27))
>>>>>>>>> Field Field_0: adjacency FVM++
>>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>> Guer.
>>>>>>>>>
>>>>>>>>> ------- Original Message -------
>>>>>>>>> On Wednesday, October 11th, 2023 at 3:33 AM, Matthew Knepley 
>>>>>>>>> <[email protected]> wrote:
>>>>>>>>>
>>>>>>>>>> On Tue, Oct 10, 2023 at 7:01 PM erdemguer <[email protected]> 
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi,
>>>>>>>>>>> Sorry for my late response. I tried your suggestions and I think I 
>>>>>>>>>>> made some progress, but I still have issues. Let me explain my 
>>>>>>>>>>> latest mesh routine:
>>>>>>>>>>>
>>>>>>>>>>> - DMPlexCreateBoxMesh
>>>>>>>>>>>
>>>>>>>>>>> - DMSetFromOptions
>>>>>>>>>>> - PetscSectionCreate
>>>>>>>>>>> - PetscSectionSetNumFields
>>>>>>>>>>> - PetscSectionSetFieldDof
>>>>>>>>>>>
>>>>>>>>>>> - PetscSectionSetDof
>>>>>>>>>>>
>>>>>>>>>>> - PetscSectionSetUp
>>>>>>>>>>> - DMSetLocalSection
>>>>>>>>>>> - DMSetAdjacency
>>>>>>>>>>> - DMPlexDistribute
>>>>>>>>>>>
>>>>>>>>>>> It's still not working, but it's promising: if I call 
>>>>>>>>>>> DMPlexGetDepthStratum for cells, I can see that after distribution 
>>>>>>>>>>> the processors have more cells.
>>>>>>>>>>
>>>>>>>>>> Please send the output of DMPlexView() for each incarnation of the 
>>>>>>>>>> mesh. What I do is put
>>>>>>>>>>
>>>>>>>>>> DMViewFromOptions(dm, NULL, "-dm1_view")
>>>>>>>>>>
>>>>>>>>>> with a different string after each call.
>>>>>>>>>>
>>>>>>>>>>> But I couldn't figure out how to decide where the ghost/processor 
>>>>>>>>>>> boundary cells start.
>>>>>>>>>>
>>>>>>>>>> Please send the actual code because the above is not specific 
>>>>>>>>>> enough. For example, you will not have
>>>>>>>>>> "ghost cells" unless you partition with overlap. This is because by 
>>>>>>>>>> default cells are the partitioned quantity,
>>>>>>>>>> so each process gets a unique set.
>>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>>
>>>>>>>>>> Matt
>>>>>>>>>>
>>>>>>>>>>> In older mails I saw there is a function DMPlexGetHybridBounds but 
>>>>>>>>>>> I think that function is deprecated. I tried to use 
>>>>>>>>>>> DMPlexGetCellTypeStratum as in ts/tutorials/ex11_sa.c but I'm 
>>>>>>>>>>> getting -1 as cEndInterior before and after distribution. I tried 
>>>>>>>>>>> it for DM_POLYTOPE_FV_GHOST, DM_POLYTOPE_INTERIOR_GHOST polytope 
>>>>>>>>>>> types. I also tried calling DMPlexComputeCellTypes before 
>>>>>>>>>>> DMPlexGetCellTypeStratum but nothing changed. I think I can 
>>>>>>>>>>> calculate the ghost cell indices using cStart/cEnd before & after 
>>>>>>>>>>> distribution but I think there is a better way I'm currently 
>>>>>>>>>>> missing.
>>>>>>>>>>>
>>>>>>>>>>> Thanks again,
>>>>>>>>>>> Guer.
>>>>>>>>>>>
>>>>>>>>>>> ------- Original Message -------
>>>>>>>>>>> On Thursday, September 28th, 2023 at 10:42 PM, Matthew Knepley 
>>>>>>>>>>> <[email protected]> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> On Thu, Sep 28, 2023 at 3:38 PM erdemguer via petsc-users 
>>>>>>>>>>>> <[email protected]> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>
>>>>>>>>>>>>> I am currently using DMPlex in my code. It runs serially at the 
>>>>>>>>>>>>> moment, but I'm interested in adding parallel options. Here is my 
>>>>>>>>>>>>> workflow:
>>>>>>>>>>>>>
>>>>>>>>>>>>> - Create a DMPlex mesh from GMSH.
>>>>>>>>>>>>> - Reorder it with DMPlexPermute.
>>>>>>>>>>>>> - Create necessary pre-processing arrays related to the mesh/problem.
>>>>>>>>>>>>> - Create field(s) with multi-dofs.
>>>>>>>>>>>>> - Create residual vectors.
>>>>>>>>>>>>> - Define a function to calculate the residual for each cell and use SNES.
>>>>>>>>>>>>> As you can see, I'm not using FV or FE structures (most examples 
>>>>>>>>>>>>> do). Now, I'm trying to implement this in parallel using a 
>>>>>>>>>>>>> similar approach. However, I'm struggling to understand how to 
>>>>>>>>>>>>> create corresponding vectors and how to obtain index sets for 
>>>>>>>>>>>>> each processor. Is there a tutorial or paper that covers this 
>>>>>>>>>>>>> topic?
>>>>>>>>>>>>
>>>>>>>>>>>> The intention was that there is enough information in the manual 
>>>>>>>>>>>> to do this.
>>>>>>>>>>>>
>>>>>>>>>>>> Using PetscFE/PetscFV is not required. However, I strongly 
>>>>>>>>>>>> encourage you to use PetscSection. Without this, it would be 
>>>>>>>>>>>> incredibly hard to do what you want. Once the DM has a Section, it 
>>>>>>>>>>>> can do things like automatically create vectors and matrices for 
>>>>>>>>>>>> you. It can redistribute them, subset them, etc. The Section 
>>>>>>>>>>>> describes how dofs are assigned to pieces of the mesh (mesh 
>>>>>>>>>>>> points). This is in the manual, and there are a few examples that 
>>>>>>>>>>>> do it by hand.
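>>>>>>>>>>>>
>>>>>>>>>>>> For example, a minimal sketch once the DM has a local Section (via DMSetLocalSection()):
>>>>>>>>>>>>
>>>>>>>>>>>> Vec F;
>>>>>>>>>>>> Mat J;
>>>>>>>>>>>> PetscCall(DMCreateGlobalVector(dm, &F)); /* sized from the Section                */
>>>>>>>>>>>> PetscCall(DMCreateMatrix(dm, &J));       /* preallocated from Section + adjacency */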
>>>>>>>>>>>>
>>>>>>>>>>>> So I suggest changing your code to use PetscSection, and then 
>>>>>>>>>>>> letting us know if things still do not work.
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>
>>>>>>>>>>>> Matt
>>>>>>>>>>>>
>>>>>>>>>>>>> Thank you.
>>>>>>>>>>>>> Guer.
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>>
>>>>>>>>>>>> What most experimenters take for granted before they begin their 
>>>>>>>>>>>> experiments is infinitely more interesting than any results to 
>>>>>>>>>>>> which their experiments lead.
>>>>>>>>>>>> -- Norbert Wiener
>>>>>>>>>>>>
>>>>>>>>>>>> https://www.cse.buffalo.edu/~knepley/
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>>
>>>>>>>>>> What most experimenters take for granted before they begin their 
>>>>>>>>>> experiments is infinitely more interesting than any results to which 
>>>>>>>>>> their experiments lead.
>>>>>>>>>> -- Norbert Wiener
>>>>>>>>>>
>>>>>>>>>> https://www.cse.buffalo.edu/~knepley/
>>>>>>>>
>>>>>>>> --
>>>>>>>>
>>>>>>>> What most experimenters take for granted before they begin their 
>>>>>>>> experiments is infinitely more interesting than any results to which 
>>>>>>>> their experiments lead.
>>>>>>>> -- Norbert Wiener
>>>>>>>>
>>>>>>>> https://www.cse.buffalo.edu/~knepley/
>>>>>
>>>>> --
>>>>>
>>>>> What most experimenters take for granted before they begin their 
>>>>> experiments is infinitely more interesting than any results to which 
>>>>> their experiments lead.
>>>>> -- Norbert Wiener
>>>>>
>>>>> https://www.cse.buffalo.edu/~knepley/
>>>
>>> --
>>>
>>> What most experimenters take for granted before they begin their 
>>> experiments is infinitely more interesting than any results to which their 
>>> experiments lead.
>>> -- Norbert Wiener
>>>
>>> https://www.cse.buffalo.edu/~knepley/
>
> --
>
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/

--- attachment ---

#include <petsc.h>
static char help[] = "dmplex";

// Ghost cells are in the point SF
static PetscErrorCode GetLeafCells(DM dm, PetscInt *cEndInterior)
{
    PetscSF sf;
    const PetscInt *leaves;
    PetscInt Nl, cStart, cEnd;
    PetscMPIInt rank;

    PetscFunctionBeginUser;
    PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
    PetscCall(DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd));
    *cEndInterior = cEnd;
    PetscCall(DMGetPointSF(dm, &sf));
    PetscCall(PetscSFGetGraph(sf, NULL, &Nl, &leaves, NULL));
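    /* Leaves of the point SF are local points owned by another rank, so any
       cell that appears as a leaf is an overlap (ghost) cell. The loop below
       takes the smallest such cell index as the end of the locally owned
       (interior) cells. */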
    for (PetscInt l = 0; l < Nl; ++l)
    {
        const PetscInt leaf = leaves[l];

        if (leaf >= cStart && leaf < cEnd)
        {
            *cEndInterior = PetscMin(leaf, *cEndInterior);
            // PetscCall(PetscPrintf(PETSC_COMM_SELF, "[%d] ghost cell %" PetscInt_FMT "\n", rank, leaf));
        }
    }
    PetscFunctionReturn(0);
}

int main(int argc, char **argv)
{
    Mat J;
    DM dm, dm_dist;
    PetscSection section;
    PetscInt cStart, cEndInterior, cEnd, m, n;
    PetscMPIInt rank;
    PetscInt nField = 1, nDof = 1, field = 0;

    PetscCall(PetscInitialize(&argc, &argv, NULL, help));
    PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));

    PetscCall(DMCreate(PETSC_COMM_WORLD, &dm));
    PetscCall(DMSetType(dm, DMPLEX));
    PetscCall(DMPlexDistributeSetDefault(dm, PETSC_FALSE));
    PetscCall(DMSetFromOptions(dm));

    PetscCall(GetLeafCells(dm, &cEndInterior));

    PetscCall(DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd));
    PetscCall(PetscPrintf(PETSC_COMM_SELF, "Before Distribution Rank: %d, cStart: %d, cEndInterior: %d, cEnd: %d\n",
                          rank, cStart, cEndInterior, cEnd));

    PetscCall(DMPlexDistribute(dm, 1, NULL, &dm_dist));
    if (dm_dist)
    {
        PetscCall(DMDestroy(&dm));
        dm = dm_dist;
    }

    PetscCall(DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd));
    PetscCall(PetscSectionCreate(PETSC_COMM_WORLD, &section));
    PetscCall(PetscSectionSetNumFields(section, nField));
    PetscCall(PetscSectionSetChart(section, cStart, cEnd));
    for (PetscInt p = cStart; p < cEnd; p++)
    {
        PetscCall(PetscSectionSetFieldDof(section, p, field, nDof));
        PetscCall(PetscSectionSetDof(section, p, nDof));
    }
    PetscCall(PetscSectionSetUp(section));
    PetscCall(DMSetLocalSection(dm, section));
    // PetscCall(PetscSectionView(section, NULL));
    PetscCall(PetscSectionDestroy(&section));
    PetscCall(DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd));
    PetscCall(GetLeafCells(dm, &cEndInterior));
    PetscCall(PetscPrintf(PETSC_COMM_SELF, "After Distribution Rank: %d, cStart: %d, cEndInterior: %d, cEnd: %d\n",
                          rank, cStart, cEndInterior, cEnd));

    /* useCone = PETSC_TRUE, useClosure = PETSC_TRUE requests the extended
       "FVM++" adjacency for this field, which controls how DMCreateMatrix()
       preallocates couplings between neighboring cells (including the
       overlap cells). */
    PetscCall(DMSetAdjacency(dm, field, PETSC_TRUE, PETSC_TRUE));
    PetscCall(DMCreateMatrix(dm, &J));
    PetscCall(MatGetLocalSize(J, &m, &n));
    PetscCall(PetscPrintf(PETSC_COMM_SELF, "[%d] m: %d n: %d\n", rank, m, n));

    PetscInt fStart, fEnd, supportSize, cellCenter = -1, cellGhostNeighbor = -1, face = -1;
    const PetscInt *support;
    PetscCall(DMPlexGetHeightStratum(dm, 1, &fStart, &fEnd));

    /* Look for one face whose two supporting cells straddle cEndInterior:
       one locally owned (interior) cell and one overlap (ghost) cell. */
    for (PetscInt p = fStart; p < fEnd; p++)
    {
        PetscCall(DMPlexGetSupportSize(dm, p, &supportSize));
        if (supportSize == 2)
        {
            PetscCall(DMPlexGetSupport(dm, p, &support));
            if ((support[0] < cEndInterior) && (support[1] >= cEndInterior))
            {
                cellCenter = support[0];
                cellGhostNeighbor = support[1];
                face = p;
                break;
            }
            if ((support[0] >= cEndInterior) && (support[1] < cEndInterior))
            {
                cellCenter = support[1];
                cellGhostNeighbor = support[0];
                face = p;
                break;
            }
        }
    }

    PetscCall(PetscPrintf(PETSC_COMM_SELF, "[%d] Face: %d, Center Cell: %d, Ghost Neighbor Cell: %d\n", rank, face,
                          cellCenter, cellGhostNeighbor));

    PetscCall(DMDestroy(&dm));
    PetscCall(MatDestroy(&J));
    PetscCall(PetscFinalize());
    return 0;
}

/*TEST

  test:
    args: -dm_plex_dim 3 -dm_plex_simplex 0 -dm_plex_box_faces 3,3,3

TEST*/
