Re: [petsc-users] Field split degree of freedom ordering

2022-11-02 Thread Jed Brown
Yes, the normal approach is to partition your mesh once and then, for each
field, resolve ownership of any interface dofs with respect to that element
partition (so a velocity dof at a shared vertex can land on any process that
owns an adjacent element, though even this isn't strictly necessary).

Alexander Lindsay  writes:

> So, in the latter case, if I understand correctly, we can keep how we
> distribute data among the processes (the partitioning of elements) such
> that nothing changes with respect to `-ksp_view_pmat` and our velocity
> and pressure dofs stay interlaced on a global scale (i.e. each process
> owns some velocity and some pressure dofs), but in order to leverage
> field split we need those index sets so as to avoid the equal-size
> constraint?
>
> On Tue, Nov 1, 2022 at 11:57 PM Jed Brown  wrote:
>
>> In most circumstances, you can and should interlace in some form such
>> that each block in fieldsplit is distributed across all ranks. If you
>> interlace at scalar granularity as described, then each block needs to
>> be able to support that interlacing. So for the Stokes equations with
>> equal-order elements (like P1-P1 stabilized), you can interlace
>> (u,v,w,p), but for mixed elements (like Q2-P1^discontinuous) you can't
>> interlace in that way. You can still distribute pressure and velocity
>> over all processes, but you will need index sets to identify the
>> velocity-pressure splits.
>>
>> Alexander Lindsay  writes:
>>
>> > In the block matrices documentation, it's stated: "Note that for
>> > interlaced storage the number of rows/columns of each block must be
>> > the same size". Is interlacing defined in a global sense or a
>> > process-local sense? Explicitly: if I don't want the same-size
>> > restriction, do I need to ensure that globally all of my block 1 dofs
>> > are numbered after my block 0 dofs, or do I need to follow that only
>> > on a process-local level? Essentially, in libMesh we always follow
>> > rank-major ordering. I'm asking whether, for unequal row sizes, we
>> > would need to strictly follow variable-major ordering in order to
>> > split (splitting here meaning splitting by variable).
>> >
>> > Alex
>>


Re: [petsc-users] Field split degree of freedom ordering

2022-11-02 Thread Alexander Lindsay
So, in the latter case, if I understand correctly, we can keep how we
distribute data among the processes (the partitioning of elements) such
that nothing changes with respect to `-ksp_view_pmat` and our velocity
and pressure dofs stay interlaced on a global scale (i.e. each process
owns some velocity and some pressure dofs), but in order to leverage
field split we need those index sets so as to avoid the equal-size
constraint?
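
For concreteness, a minimal sketch of registering such index sets, assuming
`ksp` already holds the assembled system and that `vel_dofs`/`pres_dofs`
(placeholder names, with lengths `n_vel`/`n_pres`) hold the locally owned
global dof indices of each field; the matrix layout, and hence
`-ksp_view_pmat`, is untouched:

  PC pc;
  IS is_vel, is_pres;

  PetscCall(KSPGetPC(ksp, &pc));
  PetscCall(PCSetType(pc, PCFIELDSPLIT));
  /* vel_dofs/pres_dofs: application-provided global row indices, in whatever
     numbering the assembled matrix already uses */
  PetscCall(ISCreateGeneral(PETSC_COMM_WORLD, n_vel, vel_dofs, PETSC_COPY_VALUES, &is_vel));
  PetscCall(ISCreateGeneral(PETSC_COMM_WORLD, n_pres, pres_dofs, PETSC_COPY_VALUES, &is_pres));
  PetscCall(PCFieldSplitSetIS(pc, "u", is_vel));
  PetscCall(PCFieldSplitSetIS(pc, "p", is_pres));
  PetscCall(ISDestroy(&is_vel));
  PetscCall(ISDestroy(&is_pres));

With named splits, each inner solver can then be configured at runtime through
the -fieldsplit_u_ and -fieldsplit_p_ option prefixes.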

On Tue, Nov 1, 2022 at 11:57 PM Jed Brown  wrote:

> In most circumstances, you can and should interlace in some form such
> that each block in fieldsplit is distributed across all ranks. If you
> interlace at scalar granularity as described, then each block needs to
> be able to support that interlacing. So for the Stokes equations with
> equal-order elements (like P1-P1 stabilized), you can interlace
> (u,v,w,p), but for mixed elements (like Q2-P1^discontinuous) you can't
> interlace in that way. You can still distribute pressure and velocity
> over all processes, but you will need index sets to identify the
> velocity-pressure splits.
>
> Alexander Lindsay  writes:
>
> > In the block matrices documentation, it's stated: "Note that for
> > interlaced storage the number of rows/columns of each block must be
> > the same size". Is interlacing defined in a global sense or a
> > process-local sense? Explicitly: if I don't want the same-size
> > restriction, do I need to ensure that globally all of my block 1 dofs
> > are numbered after my block 0 dofs, or do I need to follow that only
> > on a process-local level? Essentially, in libMesh we always follow
> > rank-major ordering. I'm asking whether, for unequal row sizes, we
> > would need to strictly follow variable-major ordering in order to
> > split (splitting here meaning splitting by variable).
> >
> > Alex
>


Re: [petsc-users] Field split degree of freedom ordering

2022-11-01 Thread Jed Brown
In most circumstances, you can and should interlace in some form such
that each block in fieldsplit is distributed across all ranks. If you
interlace at scalar granularity as described, then each block needs to
be able to support that interlacing. So for the Stokes equations with
equal-order elements (like P1-P1 stabilized), you can interlace
(u,v,w,p), but for mixed elements (like Q2-P1^discontinuous) you can't
interlace in that way. You can still distribute pressure and velocity
over all processes, but you will need index sets to identify the
velocity-pressure splits.
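
For the equal-order, scalar-interlaced (u,v,w,p) case, a minimal sketch of
the no-index-set route (assuming `pc` was obtained from the KSP and the
global ordering really does repeat 4 dofs per node):

  const PetscInt ufields[] = {0, 1, 2}, pfields[] = {3};

  PetscCall(PCSetType(pc, PCFIELDSPLIT));
  /* 4 interlaced scalars per node: components 0-2 are velocity, 3 is pressure */
  PetscCall(PCFieldSplitSetBlockSize(pc, 4));
  PetscCall(PCFieldSplitSetFields(pc, "u", 3, ufields, ufields));
  PetscCall(PCFieldSplitSetFields(pc, "p", 1, pfields, pfields));

The equivalent runtime options are -pc_fieldsplit_block_size 4
-pc_fieldsplit_0_fields 0,1,2 -pc_fieldsplit_1_fields 3. The mixed Q2-P1 case
instead needs PCFieldSplitSetIS with explicit index sets, as described above.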

Alexander Lindsay  writes:

> In the block matrices documentation, it's stated: "Note that for
> interlaced storage the number of rows/columns of each block must be
> the same size". Is interlacing defined in a global sense or a
> process-local sense? Explicitly: if I don't want the same-size
> restriction, do I need to ensure that globally all of my block 1 dofs
> are numbered after my block 0 dofs, or do I need to follow that only
> on a process-local level? Essentially, in libMesh we always follow
> rank-major ordering. I'm asking whether, for unequal row sizes, we
> would need to strictly follow variable-major ordering in order to
> split (splitting here meaning splitting by variable).
>
> Alex


[petsc-users] Field split degree of freedom ordering

2022-11-01 Thread Alexander Lindsay
In the block matrices documentation, it's stated: "Note that for
interlaced storage the number of rows/columns of each block must be
the same size". Is interlacing defined in a global sense or a
process-local sense? Explicitly: if I don't want the same-size
restriction, do I need to ensure that globally all of my block 1 dofs
are numbered after my block 0 dofs, or do I need to follow that only
on a process-local level? Essentially, in libMesh we always follow
rank-major ordering. I'm asking whether, for unequal row sizes, we
would need to strictly follow variable-major ordering in order to
split (splitting here meaning splitting by variable).

Alex
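
For reference, a minimal way to exercise the interlaced, equal-block-size
path entirely from the command line (assuming 4 dofs per node, u,v,w,p):

  -pc_type fieldsplit -pc_fieldsplit_block_size 4 \
    -pc_fieldsplit_0_fields 0,1,2 -pc_fieldsplit_1_fields 3 -ksp_view

-ksp_view then reports each split and its size, which is a quick way to check
that the fields were picked up as intended.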