Re: [petsc-users] FEM Implementation of NS with SUPG Stabilization

2023-10-11 Thread Brandon Denton via petsc-users
How exactly does the aux data work? What is typically available there? Is it 
something the user can populate?



From: Matthew Knepley 
Sent: Wednesday, October 11, 2023 8:07 PM
To: Brandon Denton 
Cc: Jed Brown ; petsc-users 
Subject: Re: [petsc-users] FEM Implementation of NS with SUPG Stabilization

On Wed, Oct 11, 2023 at 4:15 PM Brandon Denton <blden...@buffalo.edu> wrote:
By natural coordinates, I am referring to the reference element coordinates. 
Usually these are represented as (xi, eta, zeta) in the literature.

Yes. I would like to have the Jacobian and the derivatives of the map available 
within PetscDSSetResidual() f0 and f1 functions.

Yes, we can get these passed as aux data.

  I believe the DMPlexComputeCellGeometryFEM() function provides this information. 
Is there a way to get the cell shape functions as well? If not, can we talk 
about this more? I would like to understand how the shape functions are 
addressed within PETSc. Dr. Kirk's approach uses the shape function gradients 
in its SUPG parameter. I'd love to talk with you about this in more detail.

There should be a way to formulate this in a basis independent way.  I would 
much prefer that to
explicit inclusion of the basis.

  Thanks,

 Matt
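
For reference, a minimal C sketch (not from this thread) of the pointwise
residual callback registered with PetscDSSetResidual(), showing where the
auxiliary fields enter; packing the map Jacobian into an auxiliary field, as
discussed above, is an assumption about a possible approach rather than
existing behavior:

#include <petscds.h>

/* u[], u_t[], u_x[] are the solution, its time derivative, and its gradient at
   the quadrature point; a[], a_x[] are the auxiliary fields and their gradients.
   If the map Jacobian were supplied as an auxiliary field (an assumption), it
   would arrive through a[] starting at aOff[jacField]. */
static void f0_example(PetscInt dim, PetscInt Nf, PetscInt NfAux,
                       const PetscInt uOff[], const PetscInt uOff_x[],
                       const PetscScalar u[], const PetscScalar u_t[], const PetscScalar u_x[],
                       const PetscInt aOff[], const PetscInt aOff_x[],
                       const PetscScalar a[], const PetscScalar a_t[], const PetscScalar a_x[],
                       PetscReal t, const PetscReal x[], PetscInt numConstants,
                       const PetscScalar constants[], PetscScalar f0[])
{
  /* placeholder residual; SUPG terms built from u_x[], a[], and a length scale
     would be added here */
  for (PetscInt d = 0; d < dim; ++d) f0[d] = u_t ? u_t[uOff[0] + d] : 0.0;
}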

From: Matthew Knepley <knep...@gmail.com>
Sent: Wednesday, October 11, 2023 3:13 PM
To: Brandon Denton <blden...@buffalo.edu>
Cc: Jed Brown <j...@jedbrown.org>; petsc-users <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] FEM Implementation of NS with SUPG Stabilization

On Wed, Oct 11, 2023 at 2:09 PM Brandon Denton <blden...@buffalo.edu> wrote:
Thank you for the discussion.

Are we agreed then that the derivatives of the natural coordinates are required 
for the described approach? If so, is this something PETSc can currently do 
within the point-wise residual functions?

I am not sure what natural coordinates are. Do we just mean the Jacobian, 
derivatives of the map between reference and real coordinates? If so, yes the 
Jacobian is available. Right now I do not pass it
directly, but passing it is easy.

  Thanks,

 Matt

Matt - Thank you for the command line option for the 2nd derivatives. Those 
will be needed to implement the discussed approach. Specifically in the 
stabilization and shock capture parameters. (Ref.: B. Kirk's Thesis). What is a 
good reference for the usual SUPG method you are referencing? I've been looking 
through my textbooks but haven't found a good reference.

Jed - Thank you for the link. I will review the information on it.

Sorry about the attachment. I will upload it to this thread later (I'm at work 
right now and I can't do it from here).

From: Jed Brown <j...@jedbrown.org>
Sent: Wednesday, October 11, 2023 1:38 PM
To: Matthew Knepley <knep...@gmail.com>
Cc: Brandon Denton <blden...@buffalo.edu>; petsc-users <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] FEM Implementation of NS with SUPG Stabilization

Matthew Knepley <knep...@gmail.com> writes:

> On Wed, Oct 11, 2023 at 1:03 PM Jed Brown <j...@jedbrown.org> wrote:
>
>> I don't see an attachment, but his thesis used conservative variables and
>> defined an effective length scale in a way that seemed to assume constant
>> shape function gradients. I'm not aware of systematic literature comparing
>> the covariant and contravariant length measures on anisotropic meshes, but
>> I believe most people working in the Shakib/Hughes approach use the
>> covariant measure. Our docs have a brief discussion of this choice.
>>
>> https://libceed.org/en/latest/examples/fluids/#equation-eq-peclet
>>
>> Matt, I don't understand how the second derivative comes into play as a
>> length measure on anisotropic meshes -- the second derivatives can be
>> uniformly zero and yet you still need a length measure.
>>
>
> I was talking about the usual SUPG where we just penalize the true residual.

I think you're focused on computing the strong diffusive flux (which can be 
done using second derivatives or by a projection; the latter produces somewhat 
better results). But you still need a length scale and that's most naturally 
computed using the derivative of reference coordinates with respect to physical 
(or equivalently, the associated metric tensor).


--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener


Re: [petsc-users] FEM Implementation of NS with SUPG Stabilization

2023-10-11 Thread Matthew Knepley
On Wed, Oct 11, 2023 at 4:15 PM Brandon Denton  wrote:

> By natural coordinates, I am referring to the reference element
> coordinates. Usually these are represented as (xi, eta, zeta) in the
> literature.
>
> Yes. I would like to have the Jacobian and the derivatives of the map
> available within PetscDSSetResidual() f0 and f1 functions.
>

Yes, we can get these passed as aux data.


>   I believe the DMPlexComputeCellGeometryFEM() function provides this
> information. Is there a way to get the cell shape functions as well? If
> not, can we talk about this more? I would like to understand how the shape
> functions are addressed within PETSc. Dr. Kirk's approach uses the shape
> function gradients in its SUPG parameter. I'd love to talk with you about
> this in more detail.
>

There should be a way to formulate this in a basis independent way.  I
would much prefer that to
explicit inclusion of the basis.

  Thanks,

 Matt


> *From:* Matthew Knepley 
> *Sent:* Wednesday, October 11, 2023 3:13 PM
> *To:* Brandon Denton 
> *Cc:* Jed Brown ; petsc-users 
> *Subject:* Re: [petsc-users] FEM Implementation of NS with SUPG
> Stabilization
>
> On Wed, Oct 11, 2023 at 2:09 PM Brandon Denton 
> wrote:
>
> Thank you for the discussion.
>
> Are we agreed then that the derivatives of the natural coordinates are
> required for the described approach? If so, is this something PETSc can
> currently do within the point-wise residual functions?
>
>
> I am not sure what natural coordinates are. Do we just mean the Jacobian,
> derivatives of the map between reference and real coordinates? If so, yes
> the Jacobian is available. Right now I do not pass it
> directly, but passing it is easy.
>
>   Thanks,
>
>  Matt
>
>
> Matt - Thank you for the command line option for the 2nd derivatives.
> Those will be needed to implement the discussed approach. Specifically in
> the stabilization and shock capture parameters. (Ref.: B. Kirk's Thesis).
> What is a good reference for the usual SUPG method you are referencing?
> I've been looking through my textbooks but haven't found a good reference.
>
> Jed - Thank you for the link. I will review the information on it.
>
> Sorry about the attachment. I will upload it to this thread later (I'm at
> work right now and I can't do it from here).
> --
> *From:* Jed Brown 
> *Sent:* Wednesday, October 11, 2023 1:38 PM
> *To:* Matthew Knepley 
> *Cc:* Brandon Denton ; petsc-users <
> petsc-users@mcs.anl.gov>
> *Subject:* Re: [petsc-users] FEM Implementation of NS with SUPG
> Stabilization
>
> Matthew Knepley  writes:
>
> > On Wed, Oct 11, 2023 at 1:03 PM Jed Brown  wrote:
> >
> >> I don't see an attachment, but his thesis used conservative variables
> and
> >> defined an effective length scale in a way that seemed to assume
> constant
> >> shape function gradients. I'm not aware of systematic literature
> comparing
> >> the covariant and contravariant length measures on anisotropic meshes,
> but
> >> I believe most people working in the Shakib/Hughes approach use the
> >> covariant measure. Our docs have a brief discussion of this choice.
> >>
> >>
> https://libceed.org/en/latest/examples/fluids/#equation-eq-peclet
> >>
> >> Matt, I don't understand how the second derivative comes into play as a
> >> length measure on anisotropic meshes -- the second derivatives can be
> >> uniformly zero and yet you still need a length measure.
> >>
> >
> > I was talking about the usual SUPG where we just penalize the true
> residual.
>
> I think you're focused on computing the strong diffusive flux (which can
> be done using second derivatives or by a projection; the latter produces
> somewhat better results). But you still need a length scale and that's most
> naturally computed using the derivative of reference coordinates with
> respect to physical (or equivalently, the associated metric tensor).
>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> 
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] Parallel DMPlex

2023-10-11 Thread erdemguer via petsc-users
Thank you! That's exactly what I need.

Sent with [Proton Mail](https://proton.me/) secure email.

--- Original Message ---
On Wednesday, October 11th, 2023 at 4:17 PM, Matthew Knepley 
 wrote:

> On Wed, Oct 11, 2023 at 4:42 AM erdemguer  wrote:
>
>> Hi again,
>
> I see the problem. FV ghosts mean extra boundary cells added in FV methods 
> using DMPlexCreateGhostCells() in order to impose boundary conditions. They 
> are not the "ghost" cells for overlapping parallel decompositions. I have 
> changed your code to give you what you want. It is attached.
>
> Thanks,
>
> Matt
>
>> Here is my code:
>> #include <petscdmplex.h>
>> static char help[] = "dmplex";
>>
>> int main(int argc, char **argv)
>> {
>> PetscCall(PetscInitialize(&argc, &argv, NULL, help));
>> DM dm, dm_dist;
>> PetscSection section;
>> PetscInt cStart, cEndInterior, cEnd, rank;
>> PetscInt nc[3] = {3, 3, 3};
>> PetscReal upper[3] = {1, 1, 1};
>>
>> PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
>>
>> DMPlexCreateBoxMesh(PETSC_COMM_WORLD, 3, PETSC_FALSE, nc, NULL, upper, NULL, 
>> PETSC_TRUE, &dm);
>> DMViewFromOptions(dm, NULL, "-dm1_view");
>> PetscCall(DMSetFromOptions(dm));
>> DMViewFromOptions(dm, NULL, "-dm2_view");
>>
>> PetscCall(DMPlexGetDepthStratum(dm, 3, &cStart, &cEnd));
>> DMPlexComputeCellTypes(dm);
>> PetscCall(DMPlexGetCellTypeStratum(dm, DM_POLYTOPE_INTERIOR_GHOST, 
>> &cEndInterior, NULL));
>> PetscPrintf(PETSC_COMM_SELF, "Before Distribution Rank: %d, cStart: %d, 
>> cEndInterior: %d, cEnd: %d\n", rank, cStart,
>> cEndInterior, cEnd);
>>
>> PetscInt nField = 1, nDof = 3, field = 0;
>> PetscCall(PetscSectionCreate(PETSC_COMM_WORLD, &section));
>> PetscSectionSetNumFields(section, nField);
>> PetscCall(PetscSectionSetChart(section, cStart, cEnd));
>> for (PetscInt p = cStart; p < cEnd; p++)
>> {
>> PetscCall(PetscSectionSetFieldDof(section, p, field, nDof));
>> PetscCall(PetscSectionSetDof(section, p, nDof));
>> }
>>
>> PetscCall(PetscSectionSetUp(section));
>>
>> DMSetLocalSection(dm, section);
>> DMViewFromOptions(dm, NULL, "-dm3_view");
>>
>> DMSetAdjacency(dm, field, PETSC_TRUE, PETSC_TRUE);
>> DMViewFromOptions(dm, NULL, "-dm4_view");
>> PetscCall(DMPlexDistribute(dm, 1, NULL, &dm_dist));
>> if (dm_dist)
>> {
>> DMDestroy(&dm);
>> dm = dm_dist;
>> }
>> DMViewFromOptions(dm, NULL, "-dm5_view");
>> PetscCall(DMPlexGetDepthStratum(dm, 3, &cStart, &cEnd));
>> DMPlexComputeCellTypes(dm);
>> PetscCall(DMPlexGetCellTypeStratum(dm, DM_POLYTOPE_FV_GHOST, &cEndInterior, 
>> NULL));
>> PetscPrintf(PETSC_COMM_SELF, "After Distribution Rank: %d, cStart: %d, 
>> cEndInterior: %d, cEnd: %d\n", rank, cStart,
>> cEndInterior, cEnd);
>>
>> DMDestroy(&dm);
>> PetscCall(PetscFinalize());
>> }
>>
>> This codes output is currently (on 2 processors) is:
>> Before Distribution Rank: 1, cStart: 0, cEndInterior: -1, cEnd: 14
>> Before Distribution Rank: 0, cStart: 0, cEndInterior: -1, cEnd: 13
>> After Distribution Rank: 0, cStart: 0, cEndInterior: -1, cEnd: 27
>> After Distribution Rank: 1, cStart: 0, cEndInterior: -1, cEnd: 24
>>
>> DMView outputs:
>> dm1_view (after creation):
>> DM Object: 2 MPI processes
>> type: plex
>> DM_0x8404_0 in 3 dimensions:
>> Number of 0-cells per rank: 64 0
>> Number of 1-cells per rank: 144 0
>> Number of 2-cells per rank: 108 0
>> Number of 3-cells per rank: 27 0
>> Labels:
>> marker: 1 strata with value/size (1 (218))
>> Face Sets: 6 strata with value/size (6 (9), 5 (9), 3 (9), 4 (9), 1 (9), 2 
>> (9))
>> depth: 4 strata with value/size (0 (64), 1 (144), 2 (108), 3 (27)) celltype: 
>> 4 strata with value/size (7 (27), 0 (64), 4 (108), 1 (144))
>>
>> dm2_view (after setfromoptions):
>> DM Object: 2 MPI processes
>> type: plex
>> DM_0x8404_0 in 3 dimensions:
>> Number of 0-cells per rank: 40 46
>> Number of 1-cells per rank: 83 95
>> Number of 2-cells per rank: 57 64
>> Number of 3-cells per rank: 13 14
>> Labels:
>> depth: 4 strata with value/size (0 (40), 1 (83), 2 (57), 3 (13))
>> marker: 1 strata with value/size (1 (109))
>> Face Sets: 5 strata with value/size (1 (6), 2 (1), 3 (7), 5 (5), 6 (4)) 
>> celltype: 4 strata with value/size (0 (40), 1 (83), 4 (57), 7 (13))
>>
>> dm3_view (after setting local section):
>> DM Object: 2 MPI processes
>> type: plex
>> DM_0x8404_0 in 3 dimensions:
>> Number of 0-cells per rank: 40 46
>> Number of 1-cells per rank: 83 95
>> Number of 2-cells per rank: 57 64
>> Number of 3-cells per rank: 13 14
>> Labels:
>> depth: 4 strata with value/size (0 (40), 1 (83), 2 (57), 3 (13))
>> marker: 1 strata with value/size (1 (109))
>> Face Sets: 5 strata with value/size (1 (6), 2 (1), 3 (7), 5 (5), 6 (4))
>> celltype: 4 strata with value/size (0 (40), 1 (83), 4 (57), 7 (13))
>> Field Field_0: adjacency FEM
>>
>> dm4_view (after setting adjacency):
>> DM Object: 2 MPI processes
>> type: plex
>> DM_0x8404_0 in 3 dimensions:
>> Number of 0-cells per rank: 40 46
>> Number of 1-cells per rank: 83 95
>> Number of 2-cells per rank: 57 64
>> Number of 3-cells per rank: 13 14
>> Labels:
>> depth: 4 strata with value/size (0 (40), 1 (83), 2 (57), 3 (13))
>> marker: 

Re: [petsc-users] FEM Implementation of NS with SUPG Stabilization

2023-10-11 Thread Brandon Denton via petsc-users
By natural coordinates, I am referring to the reference element coordinates. 
Usually these are represented as (xi, eta, zeta) in the literature.

Yes. I would like to have the Jacobian and the derivatives of the map available 
within PetscDSSetResidual() f0 and f1 functions. I believe the 
DMPlexComputeCellGeometryFEM() function provides this information. Is there a 
way to get the cell shape functions as well? If not, can we talk about this 
more? I would like to understand how the shape functions are addressed within 
PETSc. Dr. Kirk's approach uses the shape function gradients in its SUPG 
parameter. I'd love to talk with you about this in more detail.
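
A minimal sketch of querying the reference-to-real map at the quadrature points
with DMPlexComputeCellGeometryFEM(); an existing dm, a PetscFE fe, and a cell
number c are assumed, and the basis functions themselves are not returned by
this call:

PetscQuadrature q;
PetscReal      *v, *J, *invJ, *detJ;
PetscInt        Nq, cdim = 3, c = 0; /* cdim: coordinate dimension; c: any cell in [cStart, cEnd) */

PetscCall(PetscFEGetQuadrature(fe, &q));
PetscCall(PetscQuadratureGetData(q, NULL, NULL, &Nq, NULL, NULL));
PetscCall(PetscMalloc4(Nq * cdim, &v, Nq * cdim * cdim, &J, Nq * cdim * cdim, &invJ, Nq, &detJ));
/* v: real coordinates of the quadrature points, J: dx/dxi, invJ: dxi/dx, detJ: |J| */
PetscCall(DMPlexComputeCellGeometryFEM(dm, c, q, v, J, invJ, detJ));
PetscCall(PetscFree4(v, J, invJ, detJ));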







From: Matthew Knepley 
Sent: Wednesday, October 11, 2023 3:13 PM
To: Brandon Denton 
Cc: Jed Brown ; petsc-users 
Subject: Re: [petsc-users] FEM Implementation of NS with SUPG Stabilization

On Wed, Oct 11, 2023 at 2:09 PM Brandon Denton <blden...@buffalo.edu> wrote:
Thank you for the discussion.

Are we agreed then that the derivatives of the natural coordinates are required 
for the described approach? If so, is this something PETSc can currently do 
within the point-wise residual functions?

I am not sure what natural coordinates are. Do we just mean the Jacobian, 
derivatives of the map between reference and real coordinates? If so, yes the 
Jacobian is available. Right now I do not pass it
directly, but passing it is easy.

  Thanks,

 Matt

Matt - Thank you for the command line option for the 2nd derivatives. Those 
will be needed to implement the discussed approach. Specifically in the 
stabilization and shock capture parameters. (Ref.: B. Kirk's Thesis). What is a 
good reference for the usual SUPG method you are referencing? I've been looking 
through my textbooks but haven't found a good reference.

Jed - Thank you for the link. I will review the information on it.

Sorry about the attachment. I will upload it to this thread later (I'm at work 
right now and I can't do it from here).

From: Jed Brown <j...@jedbrown.org>
Sent: Wednesday, October 11, 2023 1:38 PM
To: Matthew Knepley <knep...@gmail.com>
Cc: Brandon Denton <blden...@buffalo.edu>; petsc-users <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] FEM Implementation of NS with SUPG Stabilization

Matthew Knepley <knep...@gmail.com> writes:

> On Wed, Oct 11, 2023 at 1:03 PM Jed Brown <j...@jedbrown.org> wrote:
>
>> I don't see an attachment, but his thesis used conservative variables and
>> defined an effective length scale in a way that seemed to assume constant
>> shape function gradients. I'm not aware of systematic literature comparing
>> the covariant and contravariant length measures on anisotropic meshes, but
>> I believe most people working in the Shakib/Hughes approach use the
>> covariant measure. Our docs have a brief discussion of this choice.
>>
>> https://libceed.org/en/latest/examples/fluids/#equation-eq-peclet
>>
>> Matt, I don't understand how the second derivative comes into play as a
>> length measure on anisotropic meshes -- the second derivatives can be
>> uniformly zero and yet you still need a length measure.
>>
>
> I was talking about the usual SUPG where we just penalize the true residual.

I think you're focused on computing the strong diffusive flux (which can be 
done using second derivatives or by a projection; the latter produces somewhat 
better results). But you still need a length scale and that's most naturally 
computed using the derivative of reference coordinates with respect to physical 
(or equivalently, the associated metric tensor).


--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/


Re: [petsc-users] FEM Implementation of NS with SUPG Stabilization

2023-10-11 Thread Matthew Knepley
On Wed, Oct 11, 2023 at 2:09 PM Brandon Denton  wrote:

> Thank you for the discussion.
>
> Are we agreed then that the derivatives of the natural coordinates are
> required for the described approach? If so, is this something PETSc can
> currently do within the point-wise residual functions?
>

I am not sure what natural coordinates are. Do we just mean the Jacobian,
derivatives of the map between reference and real coordinates? If so, yes
the Jacobian is available. Right now I do not pass it
directly, but passing it is easy.

  Thanks,

 Matt


> Matt - Thank you for the command line option for the 2nd derivatives.
> Those will be needed to implement the discussed approach. Specifically in
> the stabilization and shock capture parameters. (Ref.: B. Kirk's Thesis).
> What is a good reference for the usual SUPG method you are referencing?
> I've been looking through my textbooks but haven't found a good reference.
>
> Jed - Thank you for the link. I will review the information on it.
>
> Sorry about the attachment. I will upload it to this thread later (I'm at
> work right now and I can't do it from here).
> --
> *From:* Jed Brown 
> *Sent:* Wednesday, October 11, 2023 1:38 PM
> *To:* Matthew Knepley 
> *Cc:* Brandon Denton ; petsc-users <
> petsc-users@mcs.anl.gov>
> *Subject:* Re: [petsc-users] FEM Implementation of NS with SUPG
> Stabilization
>
> Matthew Knepley  writes:
>
> > On Wed, Oct 11, 2023 at 1:03 PM Jed Brown  wrote:
> >
> >> I don't see an attachment, but his thesis used conservative variables
> and
> >> defined an effective length scale in a way that seemed to assume
> constant
> >> shape function gradients. I'm not aware of systematic literature
> comparing
> >> the covariant and contravariant length measures on anisotropic meshes,
> but
> >> I believe most people working in the Shakib/Hughes approach use the
> >> covariant measure. Our docs have a brief discussion of this choice.
> >>
> >>
> https://libceed.org/en/latest/examples/fluids/#equation-eq-peclet
> >>
> >> Matt, I don't understand how the second derivative comes into play as a
> >> length measure on anisotropic meshes -- the second derivatives can be
> >> uniformly zero and yet you still need a length measure.
> >>
> >
> > I was talking about the usual SUPG where we just penalize the true
> residual.
>
> I think you're focused on computing the strong diffusive flux (which can
> be done using second derivatives or by a projection; the latter produces
> somewhat better results). But you still need a length scale and that's most
> naturally computed using the derivative of reference coordinates with
> respect to physical (or equivalently, the associated metric tensor).
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] FEM Implementation of NS with SUPG Stabilization

2023-10-11 Thread Brandon Denton via petsc-users
Thank you for the discussion.

Are we agreed then that the derivatives of the natural coordinates are required 
for the described approach? If so, is this something PETSc can currently do 
within the point-wise residual functions?

Matt - Thank you for the command line option for the 2nd derivatives. Those 
will be needed to implement the discussed approach. Specifically in the 
stabilization and shock capture parameters. (Ref.: B. Kirk's Thesis). What is a 
good reference for the usual SUPG method you are referencing? I've been looking 
through my textbooks but haven't found a good reference.

Jed - Thank you for the link. I will review the information on it.

Sorry about the attachment. I will upload it to this thread later (I'm at work 
right now and I can't do it from here).

From: Jed Brown 
Sent: Wednesday, October 11, 2023 1:38 PM
To: Matthew Knepley 
Cc: Brandon Denton ; petsc-users 
Subject: Re: [petsc-users] FEM Implementation of NS with SUPG Stabilization

Matthew Knepley  writes:

> On Wed, Oct 11, 2023 at 1:03 PM Jed Brown  wrote:
>
>> I don't see an attachment, but his thesis used conservative variables and
>> defined an effective length scale in a way that seemed to assume constant
>> shape function gradients. I'm not aware of systematic literature comparing
>> the covariant and contravariant length measures on anisotropic meshes, but
>> I believe most people working in the Shakib/Hughes approach use the
>> covariant measure. Our docs have a brief discussion of this choice.
>>
>> https://libceed.org/en/latest/examples/fluids/#equation-eq-peclet
>>
>> Matt, I don't understand how the second derivative comes into play as a
>> length measure on anisotropic meshes -- the second derivatives can be
>> uniformly zero and yet you still need a length measure.
>>
>
> I was talking about the usual SUPG where we just penalize the true residual.

I think you're focused on computing the strong diffusive flux (which can be 
done using second derivatives or by a projection; the latter produces somewhat 
better results). But you still need a length scale and that's most naturally 
computed using the derivative of reference coordinates with respect to physical 
(or equivalently, the associated metric tensor).


Re: [petsc-users] FEM Implementation of NS with SUPG Stabilization

2023-10-11 Thread Jed Brown
Matthew Knepley  writes:

> On Wed, Oct 11, 2023 at 1:03 PM Jed Brown  wrote:
>
>> I don't see an attachment, but his thesis used conservative variables and
>> defined an effective length scale in a way that seemed to assume constant
>> shape function gradients. I'm not aware of systematic literature comparing
>> the covariant and contravariant length measures on anisotropic meshes, but
>> I believe most people working in the Shakib/Hughes approach use the
>> covariant measure. Our docs have a brief discussion of this choice.
>>
>> https://libceed.org/en/latest/examples/fluids/#equation-eq-peclet
>>
>> Matt, I don't understand how the second derivative comes into play as a
>> length measure on anisotropic meshes -- the second derivatives can be
>> uniformly zero and yet you still need a length measure.
>>
>
> I was talking about the usual SUPG where we just penalize the true residual.

I think you're focused on computing the strong diffusive flux (which can be 
done using second derivatives or by a projection; the latter produces somewhat 
better results). But you still need a length scale and that's most naturally 
computed using the derivative of reference coordinates with respect to physical 
(or equivalently, the associated metric tensor).


Re: [petsc-users] FEM Implementation of NS with SUPG Stabilization

2023-10-11 Thread Matthew Knepley
On Wed, Oct 11, 2023 at 1:03 PM Jed Brown  wrote:

> I don't see an attachment, but his thesis used conservative variables and
> defined an effective length scale in a way that seemed to assume constant
> shape function gradients. I'm not aware of systematic literature comparing
> the covariant and contravariant length measures on anisotropic meshes, but
> I believe most people working in the Shakib/Hughes approach use the
> covariant measure. Our docs have a brief discussion of this choice.
>
> https://libceed.org/en/latest/examples/fluids/#equation-eq-peclet
>
> Matt, I don't understand how the second derivative comes into play as a
> length measure on anisotropic meshes -- the second derivatives can be
> uniformly zero and yet you still need a length measure.
>

I was talking about the usual SUPG where we just penalize the true residual.

  Matt


> Brandon Denton via petsc-users  writes:
>
> > I was thinking about trying to implement Ben Kirk's approach to
> Navier-Stokes (see attached paper; Section 5). His approach uses these
> quantities to align the orientation of the unstructured element/cell with
> the fluid velocity to apply the stabilization/upwinding and to detect
> shocks.
> >
> > If you have an example of the approach you mentioned, could you please
> send it over so I can review it?
> >
> > On Oct 11, 2023 6:02 AM, Matthew Knepley  wrote:
> > On Tue, Oct 10, 2023 at 9:34 PM Brandon Denton via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
> > Good Evening,
> >
> > I am looking to implement a form of Navier-Stokes with SUPG
> Stabilization and shock capturing using PETSc's FEM infrastructure. In this
> implementation, I need access to the cell's shape function gradients and
> natural coordinate gradients for calculations within the point-wise
> residual calculations. How do I get these quantities at the quadrature
> points? The signatures for f0 and f1 don't seem to contain this information.
> >
> > Are you sure you need those? Darsh and I implemented SUPG without that.
> You would need local second derivative information, which you can get using
> -dm_ds_jet_degree 2. If you check in an example, I can go over it.
> >
> >   Thanks,
> >
> >  Matt
> >
> > Thank you in advance for your time.
> > Brandon
> >
> >
> > --
> > What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> > -- Norbert Wiener
> >
> > https://www.cse.buffalo.edu/~knepley/
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] FEM Implementation of NS with SUPG Stabilization

2023-10-11 Thread Jed Brown
I don't see an attachment, but his thesis used conservative variables and 
defined an effective length scale in a way that seemed to assume constant shape 
function gradients. I'm not aware of systematic literature comparing the 
covariant and contravariant length measures on anisotropic meshes, but I 
believe most people working in the Shakib/Hughes approach use the covariant 
measure. Our docs have a brief discussion of this choice.

https://libceed.org/en/latest/examples/fluids/#equation-eq-peclet

Matt, I don't understand how the second derivative comes into play as a length 
measure on anisotropic meshes -- the second derivatives can be uniformly zero 
and yet you still need a length measure.

Brandon Denton via petsc-users  writes:

> I was thinking about trying to implement Ben Kirk's approach to Navier-Stokes 
> (see attached paper; Section 5). His approach uses these quantities to align 
> the orientation of the unstructured element/cell with the fluid velocity to 
> apply the stabilization/upwinding and to detect shocks.
>
> If you have an example of the approach you mentioned, could you please send 
> it over so I can review it?
>
> On Oct 11, 2023 6:02 AM, Matthew Knepley  wrote:
> On Tue, Oct 10, 2023 at 9:34 PM Brandon Denton via petsc-users <petsc-users@mcs.anl.gov> wrote:
> Good Evening,
>
> I am looking to implement a form of Navier-Stokes with SUPG Stabilization and 
> shock capturing using PETSc's FEM infrastructure. In this implementation, I 
> need access to the cell's shape function gradients and natural coordinate 
> gradients for calculations within the point-wise residual calculations. How 
> do I get these quantities at the quadrature points? The signatures for f0 and 
> f1 don't seem to contain this information.
>
> Are you sure you need those? Darsh and I implemented SUPG without that. You 
> would need local second derivative information, which you can get using 
> -dm_ds_jet_degree 2. If you check in an example, I can go over it.
>
>   Thanks,
>
>  Matt
>
> Thank you in advance for your time.
> Brandon
>
>
> --
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/


Re: [petsc-users] [EXTERNAL] Re: Unexpected performance losses switching to COO interface

2023-10-11 Thread Fackler, Philip via petsc-users
I'm on it.

Philip Fackler
Research Software Engineer, Application Engineering Group
Advanced Computing Systems Research Section
Computer Science and Mathematics Division
Oak Ridge National Laboratory

From: Junchao Zhang 
Sent: Wednesday, October 11, 2023 10:14
To: Fackler, Philip 
Cc: petsc-users@mcs.anl.gov ; 
xolotl-psi-developm...@lists.sourceforge.net 
; Blondel, Sophie 

Subject: Re: [EXTERNAL] Re: [petsc-users] Unexpected performance losses 
switching to COO interface

Hi,  Philip,
  Could you try this branch 
jczhang/2023-10-05/feature-support-matshift-aijkokkos ?

  Thanks.
--Junchao Zhang


On Thu, Oct 5, 2023 at 4:52 PM Fackler, Philip <fackle...@ornl.gov> wrote:
Aha! That makes sense. Thank you.

Philip Fackler
Research Software Engineer, Application Engineering Group
Advanced Computing Systems Research Section
Computer Science and Mathematics Division
Oak Ridge National Laboratory

From: Junchao Zhang <junchao.zh...@gmail.com>
Sent: Thursday, October 5, 2023 17:29
To: Fackler, Philip <fackle...@ornl.gov>
Cc: petsc-users@mcs.anl.gov <petsc-users@mcs.anl.gov>; 
xolotl-psi-developm...@lists.sourceforge.net 
<xolotl-psi-developm...@lists.sourceforge.net>; Blondel, Sophie <sblon...@utk.edu>
Subject: [EXTERNAL] Re: [petsc-users] Unexpected performance losses switching 
to COO interface

Wait a moment, it seems it was because we do not have a GPU implementation of 
MatShift...
Let me see how to add it.
--Junchao Zhang


On Thu, Oct 5, 2023 at 10:58 AM Junchao Zhang <junchao.zh...@gmail.com> wrote:
Hi, Philip,
  I looked at the hpcdb-NE_3-cuda file. It seems you used MatSetValues() 
instead of the COO interface?  MatSetValues() needs to copy the data from 
device to host and thus is expensive.
  Do you have profiling results with COO enabled?

[Screenshot 2023-10-05 at 10.55.29 AM.png]
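
For reference, a minimal sketch of the COO path being asked about (array names
and sizes are placeholders, and A is an existing aijkokkos Mat): the (i, j)
pattern is preallocated once, and each assembly then only pushes values, which
the Kokkos/CUDA backends can, as I understand it, take directly from device
memory.

/* once, after the nonzero pattern is known: ncoo (row, column) pairs */
PetscCall(MatSetPreallocationCOO(A, ncoo, coo_i, coo_j));
/* every assembly: coo_v has one entry per (i, j) pair, in the same order */
PetscCall(MatSetValuesCOO(A, coo_v, INSERT_VALUES));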


--Junchao Zhang


On Mon, Oct 2, 2023 at 9:52 AM Junchao Zhang <junchao.zh...@gmail.com> wrote:
Hi, Philip,
  I will look into the tarballs and get back to you.
   Thanks.
--Junchao Zhang


On Mon, Oct 2, 2023 at 9:41 AM Fackler, Philip via petsc-users <petsc-users@mcs.anl.gov> wrote:
We finally have xolotl ported to use the new COO interface and the aijkokkos 
implementation for Mat (and kokkos for Vec). Comparing this port to our 
previous version (using MatSetValuesStencil and the default Mat and Vec 
implementations), we expected to see an improvement in performance for both the 
"serial" and "cuda" builds (here I'm referring to the kokkos configuration).

Attached are two plots that show timings for three different cases. All of 
these were run on Ascent (the Summit-like training system) with 6 MPI tasks (on 
a single node). The CUDA cases were given one GPU per task (and used CUDA-aware 
MPI). The labels on the blue bars indicate speedup. In all cases we used 
"-fieldsplit_0_pc_type jacobi" to keep the comparison as consistent as possible.

The performance of RHSJacobian (where the bulk of computation happens in 
xolotl) behaved basically as expected (better than expected in the serial 
build). NE_3 case in CUDA was the only one that performed worse, but not 
surprisingly, since its workload for the GPUs is much smaller. We've still got 
more optimization to do on this.

The real surprise was how much worse the overall solve times were. This seems 
to be due simply to switching to the kokkos-based implementation. I'm wondering 
if there are any changes we can make in configuration or runtime arguments to 
help with PETSc's performance here. Any help looking into this would be 
appreciated.

The tarballs linked here and here are profiling databases which, once extracted, 
can be viewed with hpcviewer. I don't know how helpful that will be, but 
hopefully it can give you some direction.

Thanks for your help,

Philip Fackler
Research Software Engineer, Application Engineering Group
Advanced Computing Systems Research Section
Computer Science and Mathematics Division
Oak Ridge National Laboratory


Re: [petsc-users] SLEPc/NEP for shell matrice T(lambda) and T'(lambda)

2023-10-11 Thread Kenneth C Hall
Jose,

Thanks very much for your help with this. Greatly appreciated. I will look at 
the MR. Please let me know if you do get the Fortran example working.

Thanks, and best regards,
Kenneth


From: Jose E. Roman 
Date: Wednesday, October 11, 2023 at 2:41 AM
To: Kenneth C Hall 
Cc: petsc-users@mcs.anl.gov 
Subject: Re: [petsc-users] SLEPc/NEP for shell matrice T(lambda) and T'(lambda)
Kenneth,

The MatDuplicate issue should be fixed in the following MR: 
https://gitlab.com/petsc/petsc/-/merge_requests/6912

Note that the NLEIGS solver internally uses MatDuplicate for creating multiple 
copies of the shell matrix, each one with its own value of lambda. Hence your 
implementation of the shell matrix is not appropriate, since you have a single 
global lambda within the module. I have attempted to write a Fortran example 
that duplicates the lambda correctly (see the MR), but does not work yet.

Jose
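
For illustration, a minimal C sketch (the names are invented, and Kenneth's code
is Fortran) of the pattern Jose describes: lambda lives in a per-matrix shell
context, and a user MATOP_DUPLICATE gives every copy created by NLEIGS its own
context. A matching MATOP_DESTROY that frees the context would also be needed.

#include <petscmat.h>

typedef struct {
  PetscScalar lambda; /* per-matrix copy of the eigenvalue parameter */
  PetscInt    n;      /* local problem size */
} TCtx;

static PetscErrorCode MatMult_T(Mat A, Vec x, Vec y)
{
  TCtx *ctx;

  PetscFunctionBeginUser;
  PetscCall(MatShellGetContext(A, &ctx));
  /* y = T(ctx->lambda) x would go here; placeholder: y = lambda * x */
  PetscCall(VecCopy(x, y));
  PetscCall(VecScale(y, ctx->lambda));
  PetscFunctionReturn(PETSC_SUCCESS);
}

static PetscErrorCode MatDuplicate_T(Mat A, MatDuplicateOption op, Mat *B)
{
  TCtx *ctx, *newctx;

  PetscFunctionBeginUser;
  PetscCall(MatShellGetContext(A, &ctx));
  PetscCall(PetscNew(&newctx));
  *newctx = *ctx; /* each duplicate carries its own lambda from now on */
  PetscCall(MatCreateShell(PetscObjectComm((PetscObject)A), ctx->n, ctx->n,
                           PETSC_DETERMINE, PETSC_DETERMINE, newctx, B));
  PetscCall(MatShellSetOperation(*B, MATOP_MULT, (void (*)(void))MatMult_T));
  PetscCall(MatShellSetOperation(*B, MATOP_DUPLICATE, (void (*)(void))MatDuplicate_T));
  PetscFunctionReturn(PETSC_SUCCESS);
}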


> On 6 Oct 2023, at 22:28, Kenneth C Hall  wrote:
>
> Jose,
>
> Unfortunately, I was unable to implement the MATOP_DUPLICATE operation in 
> fortran (and I do not know enough c to work in c).  Here is the error message 
> I get:
>
> [0]PETSC ERROR: #1 MatShellSetOperation_Fortran() at 
> /Users/hall/Documents/Fortran_Codes/Packages/petsc/src/mat/impls/shell/ftn-custom/zshellf.c:283
> [0]PETSC ERROR: #2 src/test_nep.f90:62
>
> When I look at zshellf.c, MATOP_DUPLICATE is not one of the supported 
> operations. See below.
>
> Kenneth
>
>
> /**
>  * Subset of MatOperation that is supported by the Fortran wrappers.
>  */
> enum FortranMatOperation {
>   FORTRAN_MATOP_MULT   = 0,
>   FORTRAN_MATOP_MULT_ADD   = 1,
>   FORTRAN_MATOP_MULT_TRANSPOSE = 2,
>   FORTRAN_MATOP_MULT_TRANSPOSE_ADD = 3,
>   FORTRAN_MATOP_SOR= 4,
>   FORTRAN_MATOP_TRANSPOSE  = 5,
>   FORTRAN_MATOP_GET_DIAGONAL   = 6,
>   FORTRAN_MATOP_DIAGONAL_SCALE = 7,
>   FORTRAN_MATOP_ZERO_ENTRIES   = 8,
>   FORTRAN_MATOP_AXPY   = 9,
>   FORTRAN_MATOP_SHIFT  = 10,
>   FORTRAN_MATOP_DIAGONAL_SET   = 11,
>   FORTRAN_MATOP_DESTROY= 12,
>   FORTRAN_MATOP_VIEW   = 13,
>   FORTRAN_MATOP_CREATE_VECS= 14,
>   FORTRAN_MATOP_GET_DIAGONAL_BLOCK = 15,
>   FORTRAN_MATOP_COPY   = 16,
>   FORTRAN_MATOP_SCALE  = 17,
>   FORTRAN_MATOP_SET_RANDOM = 18,
>   FORTRAN_MATOP_ASSEMBLY_BEGIN = 19,
>   FORTRAN_MATOP_ASSEMBLY_END   = 20,
>   FORTRAN_MATOP_SIZE   = 21
> };
>
>
> From: Jose E. Roman 
> Date: Friday, October 6, 2023 at 7:01 AM
> To: Kenneth C Hall 
> Cc: petsc-users@mcs.anl.gov 
> Subject: Re: [petsc-users] SLEPc/NEP for shell matrice T(lambda) and 
> T'(lambda)
>
> I am getting an error in a different place than you. I started to debug, but 
> don't have much time at the moment.
> Can you try something? Comparing to ex21.c, I see that a difference that may 
> be relevant is the MATOP_DUPLICATE operation. Can you try defining it for 
> your A matrix?
>
> Note: If you plan to use the NLEIGS solver, there is no need to define the 
> derivative T' so you can skip the call to NEPSetJacobian().
>
> Jose
>
>
> > On 6 Oct 2023, at 0:37, Kenneth C Hall  wrote:
> >
> > Hi all,
> >
> > I have a very large eigenvalue problem of the form T(\lambda).x = 0. The 
> > eigenvalues appear in a complicated way, and I must use a matrix-free 
> > approach to compute the products T.x and T’.x.
> >
> > I am trying to implement in SLEPc/NEP.  To get started, I have defined a 
> > much smaller and simpler system of the form
> > A.x - \lambda x = 0 where A is a 10x10 matrix. This is of course a simple 
> > standard eigenvalue problem, but I am using it as a surrogate to understand 
> > how to use NEP.
> >
> > I have set the problem up using shell matrices (as that is my ultimate 
> > goal).  The full code is attached, but here is a smaller snippet of code:
> >
> > ! Create matrix-free operators for A and B
> >   
> > PetscCall(MatCreateShell(PETSC_COMM_SELF,n,n,PETSC_DETERMINE,PETSC_DETERMINE,
> >  PETSC_NULL_INTEGER, A, ierr))
> >   
> > PetscCall(MatCreateShell(PETSC_COMM_SELF,n,n,PETSC_DETERMINE,PETSC_DETERMINE,
> >  PETSC_NULL_INTEGER, B, ierr))
> >   PetscCall(MatShellSetOperation(A, MATOP_MULT, MatMult_A, ierr))
> >   PetscCall(MatShellSetOperation(B, MATOP_MULT, MatMult_B, ierr))
> >
> > ! Create nonlinear eigensolver
> >   PetscCall(NEPCreate(PETSC_COMM_SELF, nep, ierr))
> >
> > ! Set the problem type
> >   PetscCall(NEPSetProblemType(nep, NEP_GENERAL, ierr))
> > !
> > ! set the solver type
> >   PetscCall(NEPSetType(nep, NEPNLEIGS, ierr))
> > !
> > ! 

Re: [petsc-users] Configuration of PETSc with Intel OneAPI and Intel MPI fails

2023-10-11 Thread Satish Balay via petsc-users
The same docs should be available in 
https://web.cels.anl.gov/projects/petsc/download/release-snapshots/petsc-with-docs-3.20.0.tar.gz

Satish

On Wed, 11 Oct 2023, Richter, Roland wrote:

> Hi,
> Thank you very much for the answer! I looked it up, but petsc.org seems to
> be a bit unstable here, quite often I can't reach petsc.org. 
> Regards,
> Roland Richter
> 
> -----Original Message-----
> From: Satish Balay  
> Sent: Monday, 9 October 2023 17:29
> To: Barry Smith 
> Cc: Richter, Roland ; petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] Configuration of PETSc with Intel OneAPI and
> Intel MPI fails
> 
> Will note - OneAPI MPI usage is documented at
> https://petsc.org/release/install/install/#mpi
> 
> Satish
> 
> On Mon, 9 Oct 2023, Barry Smith wrote:
> 
> > 
> >   Instead of using the mpiicc -cc=icx style, use --with-cc=mpiicc (etc.)
> > and 
> > 
> > export I_MPI_CC=icx
> > export I_MPI_CXX=icpx
> > export I_MPI_F90=ifx
> > 
> > 
> > > On Oct 9, 2023, at 8:32 AM, Richter, Roland  wrote:
> > > 
> > > Hi,
> > > I'm currently trying to install PETSc on a server (Ubuntu 22.04) with
> Intel MPI and Intel OneAPI. To combine both, I have to use f. ex. "mpiicc
> -cc=icx" as C-compiler, as described by
> https://stackoverflow.com/a/76362396. Therefore, I adapted the
> configure-line as follow:
> > >  
> > > ./configure --prefix=/media/storage/local_opt/petsc
> --with-scalar-type=complex --with-cc="mpiicc -cc=icx" --with-cxx="mpiicpc
> -cxx=icpx" --CPPFLAGS="-fPIC -march=native -mavx2" --CXXFLAGS="-fPIC
> -march=native -mavx2" --with-fc="mpiifort -fc=ifx" --with-pic=true
> --with-mpi=true
> --with-blaslapack-dir=/opt/intel/oneapi/mkl/latest/lib/intel64/
> --with-openmp=true --download-hdf5=yes --download-netcdf=yes
> --download-chaco=no --download-metis=yes --download-slepc=yes
> --download-suitesparse=yes --download-eigen=yes --download-parmetis=yes
> --download-ptscotch=yes --download-mumps=yes --download-scalapack=yes
> --download-superlu=yes --download-superlu_dist=yes --with-mkl_pardiso=1
> --with-boost=1 --with-boost-dir=/media/storage/local_opt/boost
> --download-opencascade=yes --with-fftw=1
> --with-fftw-dir=/media/storage/local_opt/fftw3 --download-kokkos=yes
> --with-mkl_sparse=1 --with-mkl_cpardiso=1 --with-mkl_sparse_optimize=1
> --download-muparser=no --download-p4est=yes --download-sowing=yes
> --download-viennalcl=yes --with-zlib --force=1 --with-clean=1
> --with-cuda=1
> > >  
> > > The configuration, however, fails with 
> > >  
> > > The CMAKE_C_COMPILER:
> > >  
> > > mpiicc -cc=icx
> > >  
> > >   is not a full path and was not found in the PATH
> > >  
> > > for all additional modules which use a cmake-based configuration
> approach (such as OPENCASCADE). How could I solve that problem?
> > >  
> > > Thank you!
> > > Regards,
> > > Roland Richter
> > > 
> > 
> > 
> 


Re: [petsc-users] Parallel DMPlex

2023-10-11 Thread Matthew Knepley
On Wed, Oct 11, 2023 at 4:42 AM erdemguer  wrote:

> Hi again,
>

I see the problem. FV ghosts mean extra boundary cells added in FV methods
using DMPlexCreateGhostCells() in order to impose boundary conditions. They
are not the "ghost" cells for overlapping parallel decompositions. I have
changed your code to give you what you want. It is attached.

  Thanks,

 Matt
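
For contrast with the distribution overlap, a minimal sketch of how those FV
boundary ghost cells are normally added (assuming DMPlexConstructGhostCells()
is the call meant above, applied to the default "Face Sets" label):

DM       dmGhost;
PetscInt numGhost;

/* adds one ghost cell outside each boundary face so FV boundary conditions
   can be imposed there */
PetscCall(DMPlexConstructGhostCells(dm, "Face Sets", &numGhost, &dmGhost));
PetscCall(DMDestroy(&dm));
dm = dmGhost;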


> Here is my code:
> #include <petscdmplex.h>
> static char help[] = "dmplex";
>
> int main(int argc, char **argv)
> {
> PetscCall(PetscInitialize(&argc, &argv, NULL, help));
> DM dm, dm_dist;
> PetscSection section;
> PetscInt cStart, cEndInterior, cEnd, rank;
> PetscInt nc[3] = {3, 3, 3};
> PetscReal upper[3] = {1, 1, 1};
>
> PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
>
> DMPlexCreateBoxMesh(PETSC_COMM_WORLD, 3, PETSC_FALSE, nc, NULL, upper,
> NULL, PETSC_TRUE, &dm);
> DMViewFromOptions(dm, NULL, "-dm1_view");
> PetscCall(DMSetFromOptions(dm));
> DMViewFromOptions(dm, NULL, "-dm2_view");
>
> PetscCall(DMPlexGetDepthStratum(dm, 3, &cStart, &cEnd));
> DMPlexComputeCellTypes(dm);
> PetscCall(DMPlexGetCellTypeStratum(dm, DM_POLYTOPE_INTERIOR_GHOST,
> &cEndInterior, NULL));
> PetscPrintf(PETSC_COMM_SELF, "Before Distribution Rank: %d, cStart:
> %d, cEndInterior: %d, cEnd: %d\n", rank, cStart,
> cEndInterior, cEnd);
>
> PetscInt nField = 1, nDof = 3, field = 0;
> PetscCall(PetscSectionCreate(PETSC_COMM_WORLD, &section));
> PetscSectionSetNumFields(section, nField);
> PetscCall(PetscSectionSetChart(section, cStart, cEnd));
> for (PetscInt p = cStart; p < cEnd; p++)
> {
> PetscCall(PetscSectionSetFieldDof(section, p, field, nDof));
> PetscCall(PetscSectionSetDof(section, p, nDof));
> }
>
> PetscCall(PetscSectionSetUp(section));
>
> DMSetLocalSection(dm, section);
> DMViewFromOptions(dm, NULL, "-dm3_view");
>
> DMSetAdjacency(dm, field, PETSC_TRUE, PETSC_TRUE);
> DMViewFromOptions(dm, NULL, "-dm4_view");
> PetscCall(DMPlexDistribute(dm, 1, NULL, &dm_dist));
> if (dm_dist)
> {
> DMDestroy(&dm);
> dm = dm_dist;
> }
> DMViewFromOptions(dm, NULL, "-dm5_view");
> PetscCall(DMPlexGetDepthStratum(dm, 3, &cStart, &cEnd));
> DMPlexComputeCellTypes(dm);
> PetscCall(DMPlexGetCellTypeStratum(dm, DM_POLYTOPE_FV_GHOST,
> &cEndInterior, NULL));
> PetscPrintf(PETSC_COMM_SELF, "After Distribution Rank: %d, cStart: %d,
> cEndInterior: %d, cEnd: %d\n", rank, cStart,
> cEndInterior, cEnd);
>
> DMDestroy(&dm);
> PetscCall(PetscFinalize());
> }
>
> This codes output is currently (on 2 processors) is:
> Before Distribution Rank: 1, cStart: 0, cEndInterior: -1, cEnd: 14
> Before Distribution Rank: 0, cStart: 0, cEndInterior: -1, cEnd: 13
> After Distribution Rank: 0, cStart: 0, cEndInterior: -1, cEnd: 27
> After Distribution Rank: 1, cStart: 0, cEndInterior: -1, cEnd: 24
>
> DMView outputs:
> dm1_view (after creation):
> DM Object: 2 MPI processes
>   type: plex
> DM_0x8404_0 in 3 dimensions:
>   Number of 0-cells per rank: 64 0
>   Number of 1-cells per rank: 144 0
>   Number of 2-cells per rank: 108 0
>   Number of 3-cells per rank: 27 0
> Labels:
>   marker: 1 strata with value/size (1 (218))
>   Face Sets: 6 strata with value/size (6 (9), 5 (9), 3 (9), 4 (9), 1 (9),
> 2 (9))
>   depth: 4 strata with value/size (0 (64), 1 (144), 2 (108), 3 (27))
>   celltype: 4 strata with value/size (7 (27), 0 (64), 4 (108), 1 (144))
>
> dm2_view (after setfromoptions):
> DM Object: 2 MPI processes
>   type: plex
> DM_0x8404_0 in 3 dimensions:
>   Number of 0-cells per rank: 40 46
>   Number of 1-cells per rank: 83 95
>   Number of 2-cells per rank: 57 64
>   Number of 3-cells per rank: 13 14
> Labels:
>   depth: 4 strata with value/size (0 (40), 1 (83), 2 (57), 3 (13))
>   marker: 1 strata with value/size (1 (109))
>   Face Sets: 5 strata with value/size (1 (6), 2 (1), 3 (7), 5 (5), 6 (4))
>   celltype: 4 strata with value/size (0 (40), 1 (83), 4 (57), 7 (13))
>
> dm3_view (after setting local section):
> DM Object: 2 MPI processes
>   type: plex
> DM_0x8404_0 in 3 dimensions:
>   Number of 0-cells per rank: 40 46
>   Number of 1-cells per rank: 83 95
>   Number of 2-cells per rank: 57 64
>   Number of 3-cells per rank: 13 14
> Labels:
>   depth: 4 strata with value/size (0 (40), 1 (83), 2 (57), 3 (13))
>   marker: 1 strata with value/size (1 (109))
>   Face Sets: 5 strata with value/size (1 (6), 2 (1), 3 (7), 5 (5), 6 (4))
>   celltype: 4 strata with value/size (0 (40), 1 (83), 4 (57), 7 (13))
> Field Field_0:
>   adjacency FEM
>
> dm4_view (after setting adjacency):
> DM Object: 2 MPI processes
>   type: plex
> DM_0x8404_0 in 3 dimensions:
>   Number of 0-cells per rank: 40 46
>   Number of 1-cells per rank: 83 95
>   Number of 2-cells per rank: 57 64
>   Number of 3-cells per rank: 13 14
> Labels:
>   depth: 4 strata with value/size (0 (40), 1 (83), 2 (57), 3 (13))
>   marker: 1 strata with value/size (1 (109))
>   Face Sets: 5 strata 

Re: [petsc-users] FEM Implementation of NS with SUPG Stabilization

2023-10-11 Thread Brandon Denton via petsc-users
I was thinking about trying to implement Ben Kirk's approach to Navier-Stokes 
(see attached paper; Section 5). His approach uses these quantities to align 
the orientation of the unstructured element/cell with the fluid velocity to 
apply the stabilization/upwinding and to detect shocks.

If you have an example of the approach you mentioned, could you please send it 
over so I can review it?

On Oct 11, 2023 6:02 AM, Matthew Knepley  wrote:
On Tue, Oct 10, 2023 at 9:34 PM Brandon Denton via petsc-users <petsc-users@mcs.anl.gov> wrote:
Good Evening,

I am looking to implement a form of Navier-Stokes with SUPG Stabilization and 
shock capturing using PETSc's FEM infrastructure. In this implementation, I 
need access to the cell's shape function gradients and natural coordinate 
gradients for calculations within the point-wise residual calculations. How do 
I get these quantities at the quadrature points? The signatures for fo and f1 
don't seem to contain this information.

Are you sure you need those? Darsh and I implemented SUPG without that. You 
would need local second derivative information, which you can get using 
-dm_ds_jet_degree 2. If you check in an example, I can go over it.

  Thanks,

 Matt

Thank you in advance for your time.
Brandon


--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/


Re: [petsc-users] Compilation failure of PETSc with "The procedure name of the INTERFACE block conflicts with a name in the encompassing scoping unit"

2023-10-11 Thread Matthew Knepley
On Wed, Oct 11, 2023 at 4:22 AM Richter, Roland  wrote:

> Hi,
>
> following my last question I managed to configure PETSc with Intel MPI and
> Intel OneAPI using the following configure-line:
>
>
>
> *./configure --prefix=/media/storage/local_opt/petsc
> --with-scalar-type=complex --with-cc=mpiicc --with-cxx=mpiicpc
> --CPPFLAGS="-fPIC -march=native -mavx2" --CXXFLAGS="-fPIC -march=native
> -mavx2" --with-fc=mpiifort --with-pic=true --with-mpi=true
> --with-blaslapack-dir=/opt/intel/oneapi/mkl/latest/lib/intel64/
> --with-openmp=true --download-hdf5=yes --download-netcdf=yes
> --download-chaco=no --download-metis=yes --download-slepc=yes
> --download-suitesparse=yes --download-eigen=yes --download-parmetis=yes
> --download-ptscotch=yes --download-mumps=yes --download-scalapack=yes
> --download-superlu=yes --download-superlu_dist=yes --with-mkl_pardiso=1
> --with-boost=1 --with-boost-dir=/media/storage/local_opt/boost
> --download-opencascade=yes --with-fftw=1
> --with-fftw-dir=/media/storage/local_opt/fftw3 --download-kokkos=yes
> --with-mkl_sparse=1 --with-mkl_cpardiso=1 --with-mkl_sparse_optimize=1
> --download-muparser=yes --download-p4est=yes --download-sowing=yes
> --download-viennalcl=yes --with-zlib --force=1 --with-clean=1 --with-cuda=0*
>
>
>
> Now, however, compilation fails with the following error:
>
> /home/user/Downloads/git-files/petsc/include/../src/ksp/f90-mod/ftn-auto-interfaces/petscpc.h90(699):
> error #6623: The procedure name of the INTERFACE block conflicts with a
> name in the encompassing scoping unit.   [PCGASMCREATESUBDOMAINS2D]
>
>   subroutine PCGASMCreateSubdomains2D(a,b,c,d,e,f,g,h,i,j,z)
>
> -^
>
> /home/user/Downloads/git-files/petsc/include/../src/ksp/f90-mod/ftn-auto-interfaces/petscpc.h90(1199):
> error #6623: The procedure name of the INTERFACE block conflicts with a
> name in the encompassing scoping unit.   [PCASMCREATESUBDOMAINS2D]
>
>   subroutine PCASMCreateSubdomains2D(a,b,c,d,e,f,g,h,i,z)
>
> -^
>
> I'm on the latest version of origin/main, but can't figure out how to fix
> that issue by myself. Therefore, I'd appreciate additional insight.
>

You have old build files in the tree. We changed the Fortran stubs to be
generated in the PETSC_ARCH tree so that you can build the stubs for
different branches in the same PETSc tree. You have old stubs in the src
tree. You can get rid of these using

  git clean -f -d -x

unless you have your own files in the source tree, in which case you need
to remove the ftn-auto-interfaces directories yourself.

  Thanks,

 Matt


> Thanks!
>
> Regards,
>
> Roland Richter
>
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] FEM Implementation of NS with SUPG Stabilization

2023-10-11 Thread Matthew Knepley
On Tue, Oct 10, 2023 at 9:34 PM Brandon Denton via petsc-users <
petsc-users@mcs.anl.gov> wrote:

> Good Evening,
>
> I am looking to implement a form of Navier-Stokes with SUPG Stabilization
> and shock capturing using PETSc's FEM infrastructure. In this
> implementation, I need access to the cell's shape function gradients and
> natural coordinate gradients for calculations within the point-wise
> residual calculations. How do I get these quantities at the quadrature
> points? The signatures for f0 and f1 don't seem to contain this information.
>

Are you sure you need those? Darsh and I implemented SUPG without that. You
would need local second derivative information, which you can get using
-dm_ds_jet_degree 2. If you check in an example, I can go over it.

  Thanks,

 Matt


> Thank you in advance for your time.
> Brandon
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] Parallel DMPlex

2023-10-11 Thread erdemguer via petsc-users
Hi again,
Here is my code:
#include <petscdmplex.h>
static char help[] = "dmplex";

int main(int argc, char **argv)
{
PetscCall(PetscInitialize(&argc, &argv, NULL, help));
DM dm, dm_dist;
PetscSection section;
PetscInt cStart, cEndInterior, cEnd, rank;
PetscInt nc[3] = {3, 3, 3};
PetscReal upper[3] = {1, 1, 1};

PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));

DMPlexCreateBoxMesh(PETSC_COMM_WORLD, 3, PETSC_FALSE, nc, NULL, upper, NULL, 
PETSC_TRUE, &dm);
DMViewFromOptions(dm, NULL, "-dm1_view");
PetscCall(DMSetFromOptions(dm));
DMViewFromOptions(dm, NULL, "-dm2_view");

PetscCall(DMPlexGetDepthStratum(dm, 3, &cStart, &cEnd));
DMPlexComputeCellTypes(dm);
PetscCall(DMPlexGetCellTypeStratum(dm, DM_POLYTOPE_INTERIOR_GHOST, 
&cEndInterior, NULL));
PetscPrintf(PETSC_COMM_SELF, "Before Distribution Rank: %d, cStart: %d, 
cEndInterior: %d, cEnd: %d\n", rank, cStart,
cEndInterior, cEnd);

PetscInt nField = 1, nDof = 3, field = 0;
PetscCall(PetscSectionCreate(PETSC_COMM_WORLD, &section));
PetscSectionSetNumFields(section, nField);
PetscCall(PetscSectionSetChart(section, cStart, cEnd));
for (PetscInt p = cStart; p < cEnd; p++)
{
PetscCall(PetscSectionSetFieldDof(section, p, field, nDof));
PetscCall(PetscSectionSetDof(section, p, nDof));
}

PetscCall(PetscSectionSetUp(section));

DMSetLocalSection(dm, section);
DMViewFromOptions(dm, NULL, "-dm3_view");

DMSetAdjacency(dm, field, PETSC_TRUE, PETSC_TRUE);
DMViewFromOptions(dm, NULL, "-dm4_view");
PetscCall(DMPlexDistribute(dm, 1, NULL, &dm_dist));
if (dm_dist)
{
DMDestroy(&dm);
dm = dm_dist;
}
DMViewFromOptions(dm, NULL, "-dm5_view");
PetscCall(DMPlexGetDepthStratum(dm, 3, &cStart, &cEnd));
DMPlexComputeCellTypes(dm);
PetscCall(DMPlexGetCellTypeStratum(dm, DM_POLYTOPE_FV_GHOST, &cEndInterior, 
NULL));
PetscPrintf(PETSC_COMM_SELF, "After Distribution Rank: %d, cStart: %d, 
cEndInterior: %d, cEnd: %d\n", rank, cStart,
cEndInterior, cEnd);

DMDestroy(&dm);
PetscCall(PetscFinalize());
}

This codes output is currently (on 2 processors) is:
Before Distribution Rank: 1, cStart: 0, cEndInterior: -1, cEnd: 14
Before Distribution Rank: 0, cStart: 0, cEndInterior: -1, cEnd: 13
After Distribution Rank: 0, cStart: 0, cEndInterior: -1, cEnd: 27
After Distribution Rank: 1, cStart: 0, cEndInterior: -1, cEnd: 24

DMView outputs:
dm1_view (after creation):
DM Object: 2 MPI processes
type: plex
DM_0x8404_0 in 3 dimensions:
Number of 0-cells per rank: 64 0
Number of 1-cells per rank: 144 0
Number of 2-cells per rank: 108 0
Number of 3-cells per rank: 27 0
Labels:
marker: 1 strata with value/size (1 (218))
Face Sets: 6 strata with value/size (6 (9), 5 (9), 3 (9), 4 (9), 1 (9), 2 (9))
depth: 4 strata with value/size (0 (64), 1 (144), 2 (108), 3 (27))
celltype: 4 strata with value/size (7 (27), 0 (64), 4 (108), 1 (144))

dm2_view (after setfromoptions):
DM Object: 2 MPI processes
type: plex
DM_0x8404_0 in 3 dimensions:
Number of 0-cells per rank: 40 46
Number of 1-cells per rank: 83 95
Number of 2-cells per rank: 57 64
Number of 3-cells per rank: 13 14
Labels:
depth: 4 strata with value/size (0 (40), 1 (83), 2 (57), 3 (13))
marker: 1 strata with value/size (1 (109))
Face Sets: 5 strata with value/size (1 (6), 2 (1), 3 (7), 5 (5), 6 (4))
celltype: 4 strata with value/size (0 (40), 1 (83), 4 (57), 7 (13))

dm3_view (after setting local section):
DM Object: 2 MPI processes
type: plex
DM_0x8404_0 in 3 dimensions:
Number of 0-cells per rank: 40 46
Number of 1-cells per rank: 83 95
Number of 2-cells per rank: 57 64
Number of 3-cells per rank: 13 14
Labels:
depth: 4 strata with value/size (0 (40), 1 (83), 2 (57), 3 (13))
marker: 1 strata with value/size (1 (109))
Face Sets: 5 strata with value/size (1 (6), 2 (1), 3 (7), 5 (5), 6 (4))
celltype: 4 strata with value/size (0 (40), 1 (83), 4 (57), 7 (13))
Field Field_0: adjacency FEM

dm4_view (after setting adjacency):
DM Object: 2 MPI processes
type: plex
DM_0x8404_0 in 3 dimensions:
Number of 0-cells per rank: 40 46
Number of 1-cells per rank: 83 95
Number of 2-cells per rank: 57 64
Number of 3-cells per rank: 13 14
Labels:
depth: 4 strata with value/size (0 (40), 1 (83), 2 (57), 3 (13))
marker: 1 strata with value/size (1 (109))
Face Sets: 5 strata with value/size (1 (6), 2 (1), 3 (7), 5 (5), 6 (4))
celltype: 4 strata with value/size (0 (40), 1 (83), 4 (57), 7 (13))
Field Field_0: adjacency FVM++

dm5_view (after distribution):
DM Object: Parallel Mesh 2 MPI processes
type: plex
Parallel Mesh in 3 dimensions:
Number of 0-cells per rank: 64 60
Number of 1-cells per rank: 144 133
Number of 2-cells per rank: 108 98
Number of 3-cells per rank: 27 24
Labels:
depth: 4 strata with value/size (0 (64), 1 (144), 2 (108), 3 (27))
marker: 1 strata with value/size (1 (218))
Face Sets: 6 strata with value/size (1 (9), 2 (9), 3 (9), 4 (9), 5 (9), 6 (9))
celltype: 4 strata with value/size (0 (64), 1 (144), 4 (108), 7 (27))
Field Field_0: adjacency FVM++

Thanks,
Guer.

--- Original Message ---
On Wednesday, October 11th, 2023 at 3:33 AM, Matthew Knepley 
 wrote:

> On Tue, 

Re: [petsc-users] Galerkin projection using petsc4py

2023-10-11 Thread Pierre Jolivet

> On 11 Oct 2023, at 9:13 AM, Thanasis Boutsikakis 
>  wrote:
> 
> Very good catch Pierre, thanks a lot!
> 
> This made everything work: the two-step process and the ptap(). I mistakenly 
> thought that I should not let the local number of columns be None, since 
> the matrix is only partitioned row-wise. Could you please explain what 
> happened because I set the local column number, so that I understand the 
> philosophy behind this partitioning?

Hopefully this should make things clearer to you: 
https://petsc.org/release/manual/mat/#sec-matlayout

Thanks,
Pierre
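
For reference, here is a small self-contained petsc4py sketch of the layout convention described in that manual section (illustrative only, not taken from the code in this thread): the size argument is ((local_rows, global_rows), (local_cols, global_cols)), and None stands for PETSC_DECIDE, so PETSc chooses local shares that sum to the global size. Passing (global_cols, global_cols) instead tells every rank that it owns global_cols columns, which is what produced the mismatched layouts seen below.

from petsc4py import PETSc

comm = PETSc.COMM_WORLD
m, k = 100, 7

# size = ((local_rows, global_rows), (local_cols, global_cols));
# None means PETSC_DECIDE, i.e. PETSc picks local sizes that sum to the global one.
Phi = PETSc.Mat().createAIJ(size=((None, m), (None, k)), comm=comm)
Phi.setUp()
Phi.assemble()

(local_rows, global_rows), (local_cols, global_cols) = Phi.getSizes()
r0, r1 = Phi.getOwnershipRange()
PETSc.Sys.syncPrint(f"[{comm.getRank()}] owns rows {r0}:{r1} of {global_rows}, "
                    f"{local_cols} local columns of {global_cols}")
PETSc.Sys.syncFlush()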

> Thanks again,
> Thanos
> 
>> On 11 Oct 2023, at 09:04, Pierre Jolivet  wrote:
>> 
>> That’s because:
>> size = ((None, global_rows), (global_cols, global_cols)) 
>> should be:
>> size = ((None, global_rows), (None, global_cols)) 
>> Then, it will work.
>> $ ~/repo/petsc/arch-darwin-c-debug-real/bin/mpirun -n 4 python3.12 test.py 
>> && echo $?
>> 0
>> 
>> Thanks,
>> Pierre
>> 
>>> On 11 Oct 2023, at 8:58 AM, Thanasis Boutsikakis 
>>>  wrote:
>>> 
>>> Pierre, I see your point, but my experiment shows that it does not even run 
>>> due to size mismatch, so I don’t see how being sparse would change things 
>>> here. There must be some kind of problem with the parallel ptap(), because 
>>> it does run sequentially. In order to test that, I changed the flags of the 
>>> matrix creation to sparse=True and ran it again. Here is the code
>>> 
>>> """Experimenting with PETSc mat-mat multiplication"""
>>> 
>>> import numpy as np
>>> from firedrake import COMM_WORLD
>>> from firedrake.petsc import PETSc
>>> 
>>> from utilities import Print
>>> 
>>> nproc = COMM_WORLD.size
>>> rank = COMM_WORLD.rank
>>> 
>>> 
>>> def create_petsc_matrix(input_array, sparse=True):
>>> """Create a PETSc matrix from an input_array
>>> 
>>> Args:
>>> input_array (np array): Input array
>>> partition_like (PETSc mat, optional): Petsc matrix. Defaults to 
>>> None.
>>> sparse (bool, optional): Toggle for sparese or dense. Defaults to 
>>> True.
>>> 
>>> Returns:
>>> PETSc mat: PETSc mpi matrix
>>> """
>>> # Check if input_array is 1D and reshape if necessary
>>> assert len(input_array.shape) == 2, "Input array should be 
>>> 2-dimensional"
>>> global_rows, global_cols = input_array.shape
>>> size = ((None, global_rows), (global_cols, global_cols))
>>> 
>>> # Create a sparse or dense matrix based on the 'sparse' argument
>>> if sparse:
>>> matrix = PETSc.Mat().createAIJ(size=size, comm=COMM_WORLD)
>>> else:
>>> matrix = PETSc.Mat().createDense(size=size, comm=COMM_WORLD)
>>> matrix.setUp()
>>> 
>>> local_rows_start, local_rows_end = matrix.getOwnershipRange()
>>> 
>>> for counter, i in enumerate(range(local_rows_start, local_rows_end)):
>>> # Calculate the correct row in the array for the current process
>>> row_in_array = counter + local_rows_start
>>> matrix.setValues(
>>> i, range(global_cols), input_array[row_in_array, :], addv=False
>>> )
>>> 
>>> # Assembly the matrix to compute the final structure
>>> matrix.assemblyBegin()
>>> matrix.assemblyEnd()
>>> 
>>> return matrix
>>> 
>>> 
>>> # 
>>> # EXP: Galerkin projection of an mpi PETSc matrix A with an mpi PETSc 
>>> matrix Phi
>>> #  A' = Phi.T * A * Phi
>>> # [k x k] <- [k x m] x [m x m] x [m x k]
>>> # 
>>> 
>>> m, k = 100, 7
>>> # Generate the random numpy matrices
>>> np.random.seed(0)  # sets the seed to 0
>>> A_np = np.random.randint(low=0, high=6, size=(m, m))
>>> Phi_np = np.random.randint(low=0, high=6, size=(m, k))
>>> 
>>> # 
>>> # TEST: Galerking projection of numpy matrices A_np and Phi_np
>>> # 
>>> Aprime_np = Phi_np.T @ A_np @ Phi_np
>>> 
>>> # Create A as an mpi matrix distributed on each process
>>> A = create_petsc_matrix(A_np, sparse=True)
>>> 
>>> # Create Phi as an mpi matrix distributed on each process
>>> Phi = create_petsc_matrix(Phi_np, sparse=True)
>>> 
>>> # Create an empty PETSc matrix object to store the result of the PtAP 
>>> operation.
>>> # This will hold the result A' = Phi.T * A * Phi after the computation.
>>> A_prime = create_petsc_matrix(np.zeros((k, k)), sparse=True)
>>> 
>>> # Perform the PtAP (Phi Transpose times A times Phi) operation.
>>> # In mathematical terms, this operation is A' = Phi.T * A * Phi.
>>> # A_prime will store the result of the operation.
>>> A_prime = A.ptap(Phi)
>>> 
>>> I got
>>> 
>>> Traceback (most recent call last):
>>>   File "/Users/boutsitron/petsc-experiments/mat_vec_multiplication2.py", 
>>> line 89, in 
>>> Traceback (most recent call last):
>>>   File "/Users/boutsitron/petsc-experiments/mat_vec_multiplication2.py", 
>>> line 89, in 
>>> Traceback (most recent call last):
>>>   File 

[petsc-users] Compilation failure of PETSc with "The procedure name of the INTERFACE block conflicts with a name in the encompassing scoping unit"

2023-10-11 Thread Richter, Roland
Hi,

Following my last question, I managed to configure PETSc with Intel MPI and
Intel OneAPI using the following configure line:

 

./configure --prefix=/media/storage/local_opt/petsc
--with-scalar-type=complex --with-cc=mpiicc --with-cxx=mpiicpc
--CPPFLAGS="-fPIC -march=native -mavx2" --CXXFLAGS="-fPIC -march=native
-mavx2" --with-fc=mpiifort --with-pic=true --with-mpi=true
--with-blaslapack-dir=/opt/intel/oneapi/mkl/latest/lib/intel64/
--with-openmp=true --download-hdf5=yes --download-netcdf=yes
--download-chaco=no --download-metis=yes --download-slepc=yes
--download-suitesparse=yes --download-eigen=yes --download-parmetis=yes
--download-ptscotch=yes --download-mumps=yes --download-scalapack=yes
--download-superlu=yes --download-superlu_dist=yes --with-mkl_pardiso=1
--with-boost=1 --with-boost-dir=/media/storage/local_opt/boost
--download-opencascade=yes --with-fftw=1
--with-fftw-dir=/media/storage/local_opt/fftw3 --download-kokkos=yes
--with-mkl_sparse=1 --with-mkl_cpardiso=1 --with-mkl_sparse_optimize=1
--download-muparser=yes --download-p4est=yes --download-sowing=yes
--download-viennalcl=yes --with-zlib --force=1 --with-clean=1 --with-cuda=0

 

Now, however, compilation fails with the following error:

/home/user/Downloads/git-files/petsc/include/../src/ksp/f90-mod/ftn-auto-interfaces/petscpc.h90(699):
error #6623: The procedure name of the INTERFACE block conflicts with a name
in the encompassing scoping unit. [PCGASMCREATESUBDOMAINS2D]

  subroutine PCGASMCreateSubdomains2D(a,b,c,d,e,f,g,h,i,j,z)

-^

/home/user/Downloads/git-files/petsc/include/../src/ksp/f90-mod/ftn-auto-interfaces/petscpc.h90(1199):
error #6623: The procedure name of the INTERFACE block conflicts with a name
in the encompassing scoping unit. [PCASMCREATESUBDOMAINS2D]

  subroutine PCASMCreateSubdomains2D(a,b,c,d,e,f,g,h,i,z)

-^

I'm on the latest version of origin/main, but can't figure out how to fix
that issue by myself. Therefore, I'd appreciate additional insight. 

Thanks!

Regards,

Roland Richter

 



compilation_log.log
Description: Binary data



Re: [petsc-users] Galerkin projection using petsc4py

2023-10-11 Thread Thanasis Boutsikakis
Very good catch Pierre, thanks a lot!

This made everything work: the two-step process and the ptap(). I mistakenly 
thought that I should not let the local number of columns be None, since the 
matrix is only partitioned row-wise. Could you please explain what happened 
because I set the local column number, so that I understand the philosophy 
behind this partitioning?

Thanks again,
Thanos

> On 11 Oct 2023, at 09:04, Pierre Jolivet  wrote:
> 
> That’s because:
> size = ((None, global_rows), (global_cols, global_cols)) 
> should be:
> size = ((None, global_rows), (None, global_cols)) 
> Then, it will work.
> $ ~/repo/petsc/arch-darwin-c-debug-real/bin/mpirun -n 4 python3.12 test.py && 
> echo $?
> 0
> 
> Thanks,
> Pierre
> 
>> On 11 Oct 2023, at 8:58 AM, Thanasis Boutsikakis 
>>  wrote:
>> 
>> Pierre, I see your point, but my experiment shows that it does not even run 
>> due to size mismatch, so I don’t see how being sparse would change things 
>> here. There must be some kind of problem with the parallel ptap(), because 
>> it does run sequentially. In order to test that, I changed the flags of the 
>> matrix creation to sparse=True and ran it again. Here is the code
>> 
>> """Experimenting with PETSc mat-mat multiplication"""
>> 
>> import numpy as np
>> from firedrake import COMM_WORLD
>> from firedrake.petsc import PETSc
>> 
>> from utilities import Print
>> 
>> nproc = COMM_WORLD.size
>> rank = COMM_WORLD.rank
>> 
>> 
>> def create_petsc_matrix(input_array, sparse=True):
>> """Create a PETSc matrix from an input_array
>> 
>> Args:
>> input_array (np array): Input array
>> partition_like (PETSc mat, optional): Petsc matrix. Defaults to None.
>> sparse (bool, optional): Toggle for sparese or dense. Defaults to 
>> True.
>> 
>> Returns:
>> PETSc mat: PETSc mpi matrix
>> """
>> # Check if input_array is 1D and reshape if necessary
>> assert len(input_array.shape) == 2, "Input array should be 2-dimensional"
>> global_rows, global_cols = input_array.shape
>> size = ((None, global_rows), (global_cols, global_cols))
>> 
>> # Create a sparse or dense matrix based on the 'sparse' argument
>> if sparse:
>> matrix = PETSc.Mat().createAIJ(size=size, comm=COMM_WORLD)
>> else:
>> matrix = PETSc.Mat().createDense(size=size, comm=COMM_WORLD)
>> matrix.setUp()
>> 
>> local_rows_start, local_rows_end = matrix.getOwnershipRange()
>> 
>> for counter, i in enumerate(range(local_rows_start, local_rows_end)):
>> # Calculate the correct row in the array for the current process
>> row_in_array = counter + local_rows_start
>> matrix.setValues(
>> i, range(global_cols), input_array[row_in_array, :], addv=False
>> )
>> 
>> # Assembly the matrix to compute the final structure
>> matrix.assemblyBegin()
>> matrix.assemblyEnd()
>> 
>> return matrix
>> 
>> 
>> # 
>> # EXP: Galerkin projection of an mpi PETSc matrix A with an mpi PETSc matrix 
>> Phi
>> #  A' = Phi.T * A * Phi
>> # [k x k] <- [k x m] x [m x m] x [m x k]
>> # 
>> 
>> m, k = 100, 7
>> # Generate the random numpy matrices
>> np.random.seed(0)  # sets the seed to 0
>> A_np = np.random.randint(low=0, high=6, size=(m, m))
>> Phi_np = np.random.randint(low=0, high=6, size=(m, k))
>> 
>> # 
>> # TEST: Galerking projection of numpy matrices A_np and Phi_np
>> # 
>> Aprime_np = Phi_np.T @ A_np @ Phi_np
>> 
>> # Create A as an mpi matrix distributed on each process
>> A = create_petsc_matrix(A_np, sparse=True)
>> 
>> # Create Phi as an mpi matrix distributed on each process
>> Phi = create_petsc_matrix(Phi_np, sparse=True)
>> 
>> # Create an empty PETSc matrix object to store the result of the PtAP 
>> operation.
>> # This will hold the result A' = Phi.T * A * Phi after the computation.
>> A_prime = create_petsc_matrix(np.zeros((k, k)), sparse=True)
>> 
>> # Perform the PtAP (Phi Transpose times A times Phi) operation.
>> # In mathematical terms, this operation is A' = Phi.T * A * Phi.
>> # A_prime will store the result of the operation.
>> A_prime = A.ptap(Phi)
>> 
>> I got
>> 
>> Traceback (most recent call last):
>>   File "/Users/boutsitron/petsc-experiments/mat_vec_multiplication2.py", 
>> line 89, in 
>> Traceback (most recent call last):
>>   File "/Users/boutsitron/petsc-experiments/mat_vec_multiplication2.py", 
>> line 89, in 
>> Traceback (most recent call last):
>>   File "/Users/boutsitron/petsc-experiments/mat_vec_multiplication2.py", 
>> line 89, in 
>> A_prime = A.ptap(Phi)
>> A_prime = A.ptap(Phi)
>>   ^^^
>>   File "petsc4py/PETSc/Mat.pyx", line 1525, in petsc4py.PETSc.Mat.ptap
>> A_prime = A.ptap(Phi)
>>   ^^^
>>   ^^^
>>   File 

Re: [petsc-users] Galerkin projection using petsc4py

2023-10-11 Thread Pierre Jolivet
That’s because:
size = ((None, global_rows), (global_cols, global_cols)) 
should be:
size = ((None, global_rows), (None, global_cols)) 
Then, it will work.
$ ~/repo/petsc/arch-darwin-c-debug-real/bin/mpirun -n 4 python3.12 test.py && 
echo $?
0

Thanks,
Pierre
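
For completeness, a condensed sketch of the script with this fix applied, checked against the numpy reference (illustrative only; it uses plain petsc4py instead of the firedrake wrappers, and mat_from_array is just a made-up helper name):

import numpy as np
from petsc4py import PETSc

comm = PETSc.COMM_WORLD
m, k = 100, 7
np.random.seed(0)
A_np = np.random.randint(low=0, high=6, size=(m, m)).astype(PETSc.ScalarType)
Phi_np = np.random.randint(low=0, high=6, size=(m, k)).astype(PETSc.ScalarType)

def mat_from_array(arr):
    """Row-distributed AIJ matrix; note (None, global) for both rows and columns."""
    M, N = arr.shape
    mat = PETSc.Mat().createAIJ(size=((None, M), (None, N)), comm=comm)
    mat.setUp()
    r0, r1 = mat.getOwnershipRange()
    for i in range(r0, r1):
        mat.setValues(i, range(N), arr[i, :])
    mat.assemble()
    return mat

A = mat_from_array(A_np)
Phi = mat_from_array(Phi_np)
A_prime = A.ptap(Phi)  # A' = Phi^T * A * Phi, k x k

# Compare the locally owned rows of A' with the numpy reference
ref = Phi_np.T @ A_np @ Phi_np
r0, r1 = A_prime.getOwnershipRange()
ok = np.allclose(A_prime.getValues(range(r0, r1), range(k)), ref[r0:r1, :])
PETSc.Sys.syncPrint(f"[{comm.getRank()}] local rows match numpy: {ok}")
PETSc.Sys.syncFlush()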

> On 11 Oct 2023, at 8:58 AM, Thanasis Boutsikakis 
>  wrote:
> 
> Pierre, I see your point, but my experiment shows that it does not even run 
> due to size mismatch, so I don’t see how being sparse would change things 
> here. There must be some kind of problem with the parallel ptap(), because it 
> does run sequentially. In order to test that, I changed the flags of the 
> matrix creation to sparse=True and ran it again. Here is the code
> 
> """Experimenting with PETSc mat-mat multiplication"""
> 
> import numpy as np
> from firedrake import COMM_WORLD
> from firedrake.petsc import PETSc
> 
> from utilities import Print
> 
> nproc = COMM_WORLD.size
> rank = COMM_WORLD.rank
> 
> 
> def create_petsc_matrix(input_array, sparse=True):
> """Create a PETSc matrix from an input_array
> 
> Args:
> input_array (np array): Input array
> partition_like (PETSc mat, optional): Petsc matrix. Defaults to None.
> sparse (bool, optional): Toggle for sparese or dense. Defaults to 
> True.
> 
> Returns:
> PETSc mat: PETSc mpi matrix
> """
> # Check if input_array is 1D and reshape if necessary
> assert len(input_array.shape) == 2, "Input array should be 2-dimensional"
> global_rows, global_cols = input_array.shape
> size = ((None, global_rows), (global_cols, global_cols))
> 
> # Create a sparse or dense matrix based on the 'sparse' argument
> if sparse:
> matrix = PETSc.Mat().createAIJ(size=size, comm=COMM_WORLD)
> else:
> matrix = PETSc.Mat().createDense(size=size, comm=COMM_WORLD)
> matrix.setUp()
> 
> local_rows_start, local_rows_end = matrix.getOwnershipRange()
> 
> for counter, i in enumerate(range(local_rows_start, local_rows_end)):
> # Calculate the correct row in the array for the current process
> row_in_array = counter + local_rows_start
> matrix.setValues(
> i, range(global_cols), input_array[row_in_array, :], addv=False
> )
> 
> # Assembly the matrix to compute the final structure
> matrix.assemblyBegin()
> matrix.assemblyEnd()
> 
> return matrix
> 
> 
> # 
> # EXP: Galerkin projection of an mpi PETSc matrix A with an mpi PETSc matrix 
> Phi
> #  A' = Phi.T * A * Phi
> # [k x k] <- [k x m] x [m x m] x [m x k]
> # 
> 
> m, k = 100, 7
> # Generate the random numpy matrices
> np.random.seed(0)  # sets the seed to 0
> A_np = np.random.randint(low=0, high=6, size=(m, m))
> Phi_np = np.random.randint(low=0, high=6, size=(m, k))
> 
> # 
> # TEST: Galerking projection of numpy matrices A_np and Phi_np
> # 
> Aprime_np = Phi_np.T @ A_np @ Phi_np
> 
> # Create A as an mpi matrix distributed on each process
> A = create_petsc_matrix(A_np, sparse=True)
> 
> # Create Phi as an mpi matrix distributed on each process
> Phi = create_petsc_matrix(Phi_np, sparse=True)
> 
> # Create an empty PETSc matrix object to store the result of the PtAP 
> operation.
> # This will hold the result A' = Phi.T * A * Phi after the computation.
> A_prime = create_petsc_matrix(np.zeros((k, k)), sparse=True)
> 
> # Perform the PtAP (Phi Transpose times A times Phi) operation.
> # In mathematical terms, this operation is A' = Phi.T * A * Phi.
> # A_prime will store the result of the operation.
> A_prime = A.ptap(Phi)
> 
> I got
> 
> Traceback (most recent call last):
>   File "/Users/boutsitron/petsc-experiments/mat_vec_multiplication2.py", line 
> 89, in 
> Traceback (most recent call last):
>   File "/Users/boutsitron/petsc-experiments/mat_vec_multiplication2.py", line 
> 89, in 
> Traceback (most recent call last):
>   File "/Users/boutsitron/petsc-experiments/mat_vec_multiplication2.py", line 
> 89, in 
> A_prime = A.ptap(Phi)
> A_prime = A.ptap(Phi)
>   ^^^
>   File "petsc4py/PETSc/Mat.pyx", line 1525, in petsc4py.PETSc.Mat.ptap
> A_prime = A.ptap(Phi)
>   ^^^
>   ^^^
>   File "petsc4py/PETSc/Mat.pyx", line 1525, in petsc4py.PETSc.Mat.ptap
>   File "petsc4py/PETSc/Mat.pyx", line 1525, in petsc4py.PETSc.Mat.ptap
> petsc4py.PETSc.Error: error code 60
> [0] MatPtAP() at 
> /Users/boutsitron/firedrake/src/petsc/src/mat/interface/matrix.c:9896
> [0] MatProductSetFromOptions() at 
> /Users/boutsitron/firedrake/src/petsc/src/mat/interface/matproduct.c:541
> [0] MatProductSetFromOptions_Private() at 
> /Users/boutsitron/firedrake/src/petsc/src/mat/interface/matproduct.c:435
> [0] MatProductSetFromOptions_MPIAIJ() at 
> 

Re: [petsc-users] Galerkin projection using petsc4py

2023-10-11 Thread Thanasis Boutsikakis
Furthermore, I tried to perform the Galerkin projection in two steps by 
substituting

> A_prime = A.ptap(Phi)

With 

AL = Phi.transposeMatMult(A)
A_prime = AL.matMult(Phi)

Running this with 3 procs results in the creation of a matrix AL whose 
dimensions are 3 times bigger than they should be (A is of size 100x100 and Phi 
of size 100x7):

MATRIX mpiaij AL [21x300]
Assembled

Partitioning for AL:
  Rank 0: Rows [0, 7)
  Rank 1: Rows [7, 14)
  Rank 2: Rows [14, 21)

And naturally, this results in another dimension incompatibility:

Traceback (most recent call last):
  File "/Users/boutsitron/petsc-experiments/mat_vec_multiplication2.py", line 
85, in 
Traceback (most recent call last):
  File "/Users/boutsitron/petsc-experiments/mat_vec_multiplication2.py", line 
85, in 
A_prime = AL.matMult(Phi)
A_prime = AL.matMult(Phi)
  ^^^
  File "petsc4py/PETSc/Mat.pyx", line 1492, in petsc4py.PETSc.Mat.matMult
  ^^^
  File "petsc4py/PETSc/Mat.pyx", line 1492, in petsc4py.PETSc.Mat.matMult
Traceback (most recent call last):
  File "/Users/boutsitron/petsc-experiments/mat_vec_multiplication2.py", line 
85, in 
petsc4py.PETSc.Error: error code 60
[2] MatMatMult() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matrix.c:10053
[2] MatProduct_Private() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matrix.c:9976
[2] MatProductSetFromOptions() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matproduct.c:541
[2] MatProductSetFromOptions_Private() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matproduct.c:421
[2] Nonconforming object sizes
[2] Matrix dimensions of A and B are incompatible for MatProductType AB: A 
21x300, B 100x7
Abort(1) on node 2 (rank 2 in comm 496): application called 
MPI_Abort(PYOP2_COMM_WORLD, 1) - process 2
petsc4py.PETSc.Error: error code 60
[1] MatMatMult() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matrix.c:10053
[1] MatProduct_Private() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matrix.c:9976
[1] MatProductSetFromOptions() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matproduct.c:541
[1] MatProductSetFromOptions_Private() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matproduct.c:421
[1] Nonconforming object sizes
[1] Matrix dimensions of A and B are incompatible for MatProductType AB: A 
21x300, B 100x7
Abort(1) on node 1 (rank 1 in comm 496): application called 
MPI_Abort(PYOP2_COMM_WORLD, 1) - process 1
A_prime = AL.matMult(Phi)
  ^^^
  File "petsc4py/PETSc/Mat.pyx", line 1492, in petsc4py.PETSc.Mat.matMult
petsc4py.PETSc.Error: error code 60
[0] MatMatMult() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matrix.c:10053
[0] MatProduct_Private() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matrix.c:9976
[0] MatProductSetFromOptions() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matproduct.c:541
[0] MatProductSetFromOptions_Private() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matproduct.c:421
[0] Nonconforming object sizes
[0] Matrix dimensions of A and B are incompatible for MatProductType AB: A 
21x300, B 100x7
Abort(1) on node 0 (rank 0 in comm 496): application called 
MPI_Abort(PYOP2_COMM_WORLD, 1) - process 0

> On 10 Oct 2023, at 23:33, Thanasis Boutsikakis 
>  wrote:
> 
> Hi all,
> 
> Revisiting my code and the proposed solution from Pierre, I realized this 
> works only in sequential. The reason is that PETSc partitions those matrices 
> only row-wise, which leads to an error due to the mismatch between number of 
> columns of A (non-partitioned) and the number of rows of Phi (partitioned).
> 
> """Experimenting with PETSc mat-mat multiplication"""
> 
> import time
> 
> import numpy as np
> from colorama import Fore
> from firedrake import COMM_SELF, COMM_WORLD
> from firedrake.petsc import PETSc
> from mpi4py import MPI
> from numpy.testing import assert_array_almost_equal
> 
> from utilities import Print
> 
> nproc = COMM_WORLD.size
> rank = COMM_WORLD.rank
> 
> def create_petsc_matrix(input_array, sparse=True):
> """Create a PETSc matrix from an input_array
> 
> Args:
> input_array (np array): Input array
> partition_like (PETSc mat, optional): Petsc matrix. Defaults to None.
> sparse (bool, optional): Toggle for sparese or dense. Defaults to 
> True.
> 
> Returns:
> PETSc mat: PETSc mpi matrix
> """
> # Check if input_array is 1D and reshape if necessary
> assert len(input_array.shape) == 2, "Input array should be 2-dimensional"
> global_rows, global_cols = input_array.shape
> size = ((None, global_rows), (global_cols, global_cols))
> 
> # Create a sparse or dense matrix based on the 'sparse' argument
> if sparse:
> matrix = PETSc.Mat().createAIJ(size=size, comm=COMM_WORLD)
> else:
> matrix = PETSc.Mat().createDense(size=size, comm=COMM_WORLD)

Re: [petsc-users] Galerkin projection using petsc4py

2023-10-11 Thread Thanasis Boutsikakis
Pierre, I see your point, but my experiment shows that it does not even run due 
to size mismatch, so I don’t see how being sparse would change things here. 
There must be some kind of problem with the parallel ptap(), because it does 
run sequentially. In order to test that, I changed the flags of the matrix 
creation to sparse=True and ran it again. Here is the code

"""Experimenting with PETSc mat-mat multiplication"""

import numpy as np
from firedrake import COMM_WORLD
from firedrake.petsc import PETSc

from utilities import Print

nproc = COMM_WORLD.size
rank = COMM_WORLD.rank


def create_petsc_matrix(input_array, sparse=True):
"""Create a PETSc matrix from an input_array

Args:
input_array (np array): Input array
partition_like (PETSc mat, optional): Petsc matrix. Defaults to None.
sparse (bool, optional): Toggle for sparese or dense. Defaults to True.

Returns:
PETSc mat: PETSc mpi matrix
"""
# Check if input_array is 1D and reshape if necessary
assert len(input_array.shape) == 2, "Input array should be 2-dimensional"
global_rows, global_cols = input_array.shape
size = ((None, global_rows), (global_cols, global_cols))

# Create a sparse or dense matrix based on the 'sparse' argument
if sparse:
matrix = PETSc.Mat().createAIJ(size=size, comm=COMM_WORLD)
else:
matrix = PETSc.Mat().createDense(size=size, comm=COMM_WORLD)
matrix.setUp()

local_rows_start, local_rows_end = matrix.getOwnershipRange()

for counter, i in enumerate(range(local_rows_start, local_rows_end)):
# Calculate the correct row in the array for the current process
row_in_array = counter + local_rows_start
matrix.setValues(
i, range(global_cols), input_array[row_in_array, :], addv=False
)

# Assembly the matrix to compute the final structure
matrix.assemblyBegin()
matrix.assemblyEnd()

return matrix


# 
# EXP: Galerkin projection of an mpi PETSc matrix A with an mpi PETSc matrix Phi
#  A' = Phi.T * A * Phi
# [k x k] <- [k x m] x [m x m] x [m x k]
# 

m, k = 100, 7
# Generate the random numpy matrices
np.random.seed(0)  # sets the seed to 0
A_np = np.random.randint(low=0, high=6, size=(m, m))
Phi_np = np.random.randint(low=0, high=6, size=(m, k))

# 
# TEST: Galerking projection of numpy matrices A_np and Phi_np
# 
Aprime_np = Phi_np.T @ A_np @ Phi_np

# Create A as an mpi matrix distributed on each process
A = create_petsc_matrix(A_np, sparse=True)

# Create Phi as an mpi matrix distributed on each process
Phi = create_petsc_matrix(Phi_np, sparse=True)

# Create an empty PETSc matrix object to store the result of the PtAP operation.
# This will hold the result A' = Phi.T * A * Phi after the computation.
A_prime = create_petsc_matrix(np.zeros((k, k)), sparse=True)

# Perform the PtAP (Phi Transpose times A times Phi) operation.
# In mathematical terms, this operation is A' = Phi.T * A * Phi.
# A_prime will store the result of the operation.
A_prime = A.ptap(Phi)

I got

Traceback (most recent call last):
  File "/Users/boutsitron/petsc-experiments/mat_vec_multiplication2.py", line 
89, in 
Traceback (most recent call last):
  File "/Users/boutsitron/petsc-experiments/mat_vec_multiplication2.py", line 
89, in 
Traceback (most recent call last):
  File "/Users/boutsitron/petsc-experiments/mat_vec_multiplication2.py", line 
89, in 
A_prime = A.ptap(Phi)
A_prime = A.ptap(Phi)
  ^^^
  File "petsc4py/PETSc/Mat.pyx", line 1525, in petsc4py.PETSc.Mat.ptap
A_prime = A.ptap(Phi)
  ^^^
  ^^^
  File "petsc4py/PETSc/Mat.pyx", line 1525, in petsc4py.PETSc.Mat.ptap
  File "petsc4py/PETSc/Mat.pyx", line 1525, in petsc4py.PETSc.Mat.ptap
petsc4py.PETSc.Error: error code 60
[0] MatPtAP() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matrix.c:9896
[0] MatProductSetFromOptions() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matproduct.c:541
[0] MatProductSetFromOptions_Private() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matproduct.c:435
[0] MatProductSetFromOptions_MPIAIJ() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/impls/aij/mpi/mpimatmatmult.c:2372
[0] MatProductSetFromOptions_MPIAIJ_PtAP() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/impls/aij/mpi/mpimatmatmult.c:2266
[0] Nonconforming object sizes
[0] Matrix local dimensions are incompatible, Acol (0, 100) != Prow (0,34)
Abort(1) on node 0 (rank 0 in comm 496): application called 
MPI_Abort(PYOP2_COMM_WORLD, 1) - process 0
petsc4py.PETSc.Error: error code 60
[1] MatPtAP() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matrix.c:9896
[1] MatProductSetFromOptions() at 

Re: [petsc-users] Configuration of PETSc with Intel OneAPI and Intel MPI fails

2023-10-11 Thread Richter, Roland
Hi,
Thank you very much for the answer! I looked it up, but petsc.org seems to
be a bit unstable here; quite often I can't reach it.
Regards,
Roland Richter

-Original Message-
From: Satish Balay  
Sent: Monday, 9 October 2023 17:29
To: Barry Smith 
Cc: Richter, Roland ; petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] Configuration of PETSc with Intel OneAPI and
Intel MPI fails

Will note - OneAPI MPI usage is documented at
https://petsc.org/release/install/install/#mpi

Satish

On Mon, 9 Oct 2023, Barry Smith wrote:

> 
>   Instead of using the mpiicc -cc=icx style use --with-cc=mpiicc (etc) and 
> 
> export I_MPI_CC=icx
> export I_MPI_CXX=icpx
> export I_MPI_F90=ifx
> 
> 
> > On Oct 9, 2023, at 8:32 AM, Richter, Roland 
wrote:
> > 
> > Hei,
> > I'm currently trying to install PETSc on a server (Ubuntu 22.04) with
Intel MPI and Intel OneAPI. To combine both, I have to use f. ex. "mpiicc
-cc=icx" as C-compiler, as described by
https://stackoverflow.com/a/76362396. Therefore, I adapted the
configure-line as follow:
> >  
> > ./configure --prefix=/media/storage/local_opt/petsc
--with-scalar-type=complex --with-cc="mpiicc -cc=icx" --with-cxx="mpiicpc
-cxx=icpx" --CPPFLAGS="-fPIC -march=native -mavx2" --CXXFLAGS="-fPIC
-march=native -mavx2" --with-fc="mpiifort -fc=ifx" --with-pic=true
--with-mpi=true
--with-blaslapack-dir=/opt/intel/oneapi/mkl/latest/lib/intel64/
--with-openmp=true --download-hdf5=yes --download-netcdf=yes
--download-chaco=no --download-metis=yes --download-slepc=yes
--download-suitesparse=yes --download-eigen=yes --download-parmetis=yes
--download-ptscotch=yes --download-mumps=yes --download-scalapack=yes
--download-superlu=yes --download-superlu_dist=yes --with-mkl_pardiso=1
--with-boost=1 --with-boost-dir=/media/storage/local_opt/boost
--download-opencascade=yes --with-fftw=1
--with-fftw-dir=/media/storage/local_opt/fftw3 --download-kokkos=yes
--with-mkl_sparse=1 --with-mkl_cpardiso=1 --with-mkl_sparse_optimize=1
--download-muparser=no --download-p4est=yes --download-sowing=yes
--download-viennalcl=yes --with-zlib --force=1 --with-clean=1
--with-cuda=1
> >  
> > The configuration, however, fails with 
> >  
> > The CMAKE_C_COMPILER:
> >  
> > mpiicc -cc=icx
> >  
> >   is not a full path and was not found in the PATH
> >  
> > for all additional modules which use a cmake-based configuration
approach (such as OPENCASCADE). How could I solve that problem?
> >  
> > Thank you!
> > Regards,
> > Roland Richter
> > 
> 
> 



Re: [petsc-users] SLEPc/NEP for shell matrice T(lambda) and T'(lambda)

2023-10-11 Thread Jose E. Roman
Kenneth,

The MatDuplicate issue should be fixed in the following MR 
https://gitlab.com/petsc/petsc/-/merge_requests/6912

Note that the NLEIGS solver internally uses MatDuplicate for creating multiple 
copies of the shell matrix, each one with its own value of lambda. Hence your 
implementation of the shell matrix is not appropriate, since you have a single 
global lambda within the module. I have attempted to write a Fortran example 
that duplicates the lambda correctly (see the MR), but does not work yet.

Jose
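
As a side note, here is a rough petsc4py sketch (Python rather than Fortran, not using NLEIGS, and not addressing the MATOP_DUPLICATE limitation of the Fortran wrappers discussed below) of the per-matrix-state idea: lambda lives in a context object attached to each shell matrix, rather than in a module-level variable, so independent copies can carry independent values. The class and helper names here are invented for illustration.

import numpy as np
from petsc4py import PETSc

class TShell:
    """Matrix-free T(lambda) = A - lambda*I for a small dense test block A."""
    def __init__(self, A_np, lmbda):
        self.A_np = A_np
        self.lmbda = lmbda

    def mult(self, mat, x, y):
        # y = (A - lambda*I) x, using the lambda stored in *this* context
        xa = x.getArray(readonly=True)
        ya = y.getArray()
        ya[:] = self.A_np @ xa - self.lmbda * xa

n = 10
A_np = np.random.default_rng(0).standard_normal((n, n))

def make_T(lmbda):
    T = PETSc.Mat().createPython((n, n), context=TShell(A_np, lmbda),
                                 comm=PETSc.COMM_SELF)
    T.setUp()
    return T

T1, T2 = make_T(0.5), make_T(2.0)   # two copies, each with its own lambda
x = T1.createVecRight(); x.set(1.0)
y1, y2 = T1.createVecLeft(), T2.createVecLeft()
T1.mult(x, y1); T2.mult(x, y2)
print(y1.norm(), y2.norm())         # differ, since each matrix keeps its own lambda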


> On 6 Oct 2023, at 22:28, Kenneth C Hall  wrote:
> 
> Jose,
>  
> Unfortunately, I was unable to implement the MATOP_DUPLICATE operation in 
> Fortran (and I do not know enough C to work in C).  Here is the error message 
> I get:
>  
> [0]PETSC ERROR: #1 MatShellSetOperation_Fortran() at 
> /Users/hall/Documents/Fortran_Codes/Packages/petsc/src/mat/impls/shell/ftn-custom/zshellf.c:283
> [0]PETSC ERROR: #2 src/test_nep.f90:62
>  
> When I look at zshellf.c, MATOP_DUPLICATE is not one of the supported 
> operations. See below.
>  
> Kenneth
>  
>  
> /* Subset of MatOperation that is supported by the Fortran wrappers. */
> enum FortranMatOperation {
>   FORTRAN_MATOP_MULT   = 0,
>   FORTRAN_MATOP_MULT_ADD   = 1,
>   FORTRAN_MATOP_MULT_TRANSPOSE = 2,
>   FORTRAN_MATOP_MULT_TRANSPOSE_ADD = 3,
>   FORTRAN_MATOP_SOR= 4,
>   FORTRAN_MATOP_TRANSPOSE  = 5,
>   FORTRAN_MATOP_GET_DIAGONAL   = 6,
>   FORTRAN_MATOP_DIAGONAL_SCALE = 7,
>   FORTRAN_MATOP_ZERO_ENTRIES   = 8,
>   FORTRAN_MATOP_AXPY   = 9,
>   FORTRAN_MATOP_SHIFT  = 10,
>   FORTRAN_MATOP_DIAGONAL_SET   = 11,
>   FORTRAN_MATOP_DESTROY= 12,
>   FORTRAN_MATOP_VIEW   = 13,
>   FORTRAN_MATOP_CREATE_VECS= 14,
>   FORTRAN_MATOP_GET_DIAGONAL_BLOCK = 15,
>   FORTRAN_MATOP_COPY   = 16,
>   FORTRAN_MATOP_SCALE  = 17,
>   FORTRAN_MATOP_SET_RANDOM = 18,
>   FORTRAN_MATOP_ASSEMBLY_BEGIN = 19,
>   FORTRAN_MATOP_ASSEMBLY_END   = 20,
>   FORTRAN_MATOP_SIZE   = 21
> };
>  
>  
> From: Jose E. Roman 
> Date: Friday, October 6, 2023 at 7:01 AM
> To: Kenneth C Hall 
> Cc: petsc-users@mcs.anl.gov 
> Subject: Re: [petsc-users] SLEPc/NEP for shell matrice T(lambda) and 
> T'(lambda)
> 
> I am getting an error in a different place than you. I started to debug, but 
> don't have much time at the moment.
> Can you try something? Comparing to ex21.c, I see that a difference that may 
> be relevant is the MATOP_DUPLICATE operation. Can you try defining it for 
> your A matrix?
> 
> Note: If you plan to use the NLEIGS solver, there is no need to define the 
> derivative T' so you can skip the call to NEPSetJacobian().
> 
> Jose
> 
> 
> > On 6 Oct 2023, at 0:37, Kenneth C Hall  wrote:
> > 
> > Hi all,
> >  
> > I have a very large eigenvalue problem of the form T(\lambda).x = 0. The 
> > eigenvalues appear in a complicated way, and I must use a matrix-free 
> > approach to compute the products T.x and T’.x.
> >  
> > I am trying to implement in SLEPc/NEP.  To get started, I have defined a 
> > much smaller and simpler system of the form
> > A.x - \lambda x = 0 where A is a 10x10 matrix. This is of course a simple 
> > standard eigenvalue problem, but I am using it as a surrogate to understand 
> > how to use NEP.
> >  
> > I have set the problem up using shell matrices (as that is my ultimate 
> > goal).  The full code is attached, but here is a smaller snippet of code:
> >  
> > ! Create matrix-free operators for A and B
> >   
> > PetscCall(MatCreateShell(PETSC_COMM_SELF,n,n,PETSC_DETERMINE,PETSC_DETERMINE,
> >  PETSC_NULL_INTEGER, A, ierr))
> >   
> > PetscCall(MatCreateShell(PETSC_COMM_SELF,n,n,PETSC_DETERMINE,PETSC_DETERMINE,
> >  PETSC_NULL_INTEGER, B, ierr))
> >   PetscCall(MatShellSetOperation(A, MATOP_MULT, MatMult_A, ierr))
> >   PetscCall(MatShellSetOperation(B, MATOP_MULT, MatMult_B, ierr))
> >  
> > ! Create nonlinear eigensolver
> >   PetscCall(NEPCreate(PETSC_COMM_SELF, nep, ierr))
> >  
> > ! Set the problem type
> >   PetscCall(NEPSetProblemType(nep, NEP_GENERAL, ierr))
> > !
> > ! set the solver type
> >   PetscCall(NEPSetType(nep, NEPNLEIGS, ierr))
> > !
> > ! Set functions and Jacobians for NEP
> >   PetscCall(NEPSetFunction(nep, A, A, MyNEPFunction, 
> > PETSC_NULL_INTEGER, ierr))
> >   PetscCall(NEPSetJacobian(nep, B,