Re: [petsc-users] Redefining MPI functions as macros can break C++ code

2022-11-02 Thread Satish Balay via petsc-users
You can define 'PETSC_HAVE_BROKEN_RECURSIVE_MACRO' and then include
petsc.h in your sources to avoid these macros in amrex/application
codes.
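
For example (a minimal sketch, assuming the application uses the umbrella
header petsc.h as above):

  /* Opt this translation unit out of the MPI logging macros, as suggested
     above; the define must appear before any PETSc header is included. */
  #define PETSC_HAVE_BROKEN_RECURSIVE_MACRO
  #include <petsc.h>

  /* ... rest of the amrex/application source file: MPI calls in
     template-heavy C++ code are no longer rewritten by the macros ... */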

PETSc logging is one of the important features, so it's best not to
disable it (globally for everyone) because of this issue.

Satish

On Wed, 2 Nov 2022, Erik Schnetter wrote:

> PETSc redefines MPI functions as macros when logging is enabled. This
> breaks some C++ code; see, e.g.,
> <https://github.com/AMReX-Codes/amrex/pull/3005> for an example. The reason
> is that the macros get confused by commas in template arguments.
> 
> It would be convenient if PETSc used a different way to log MPI function
> calls, but I can't think of a good way. Alternatively, logging could be
> disabled by default, or MPI logging could be disabled by default, or there
> could be a simple way to opt in (e.g. use `#define PETSC_LOG_MPI` after
> `#include <petsc.h>` to enable it for a source file).
> 
> -erik
> 
> 



[petsc-users] Redefining MPI functions as macros can break C++ code

2022-11-02 Thread Erik Schnetter
PETSc redefines MPI functions as macros when logging is enabled. This
breaks some C++ code; see, e.g.,
<https://github.com/AMReX-Codes/amrex/pull/3005> for an example. The reason
is that the macros get confused by commas in template arguments.
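
To make the failure mode concrete, here is an illustrative-only C++ sketch
(not PETSc code; the traits class and the logging macro are stand-ins):

  #include <mpi.h>
  #include <utility>

  // Stand-in for a traits class mapping C++ types to MPI datatypes.
  template <class T> struct mpi_type;
  template <> struct mpi_type<std::pair<int, double>> {
    static MPI_Datatype value() { return MPI_BYTE; }  // placeholder datatype
  };

  // Stand-in for the kind of logging macro that shadows the MPI call:
  static int mpi_send_count = 0;
  #define MPI_Send(buf, cnt, dt, dest, tag, comm) \
    (++mpi_send_count, MPI_Send(buf, cnt, dt, dest, tag, comm))

  void demo(const std::pair<int, double> *buf, int n) {
    // Fails to preprocess: the comma inside <int, double> is not protected
    // by parentheses, so the macro sees 7 arguments instead of 6:
    //   MPI_Send(buf, n, mpi_type<std::pair<int, double>>::value(), 0, 0, MPI_COMM_WORLD);

    // Local workaround: extra parentheses hide the comma from the preprocessor.
    MPI_Send(buf, n, (mpi_type<std::pair<int, double>>::value()), 0, 0, MPI_COMM_WORLD);
  }

Parenthesizing the offending argument works around it locally, but has to be
done at every such call site, which motivates the global options discussed in
this thread.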

It would be convenient if PETSc used a different way to log MPI function
calls, but I can't think of a good way. Alternatively, logging could be
disabled by default, or MPI logging could be disabled by default, or there
could be a simple way to opt in (e.g. use `#define PETSC_LOG_MPI` after
`#include <petsc.h>` to enable it for a source file).

-erik

-- 
Erik Schnetter 
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [petsc-users] Bug report LMVM matrix class

2022-11-02 Thread Barry Smith

  Thanks for the bug report with a reproducing example. I have a fix in 
https://gitlab.com/petsc/petsc/-/merge_requests/5797

  Barry


> On Nov 2, 2022, at 6:52 AM, Stephan Köhler wrote:
> 
> Dear PETSc/Tao team,
> 
> it seems that there is a bug in the LMVM matrix class:
> 
> In the function MatCreateVecs_LMVM (see, e.g., 
> https://petsc.org/release/src/ksp/ksp/utils/lmvm/lmvmimpl.c.html at line 214), 
> it is not checked whether the vectors *L or *R are NULL.  This is, in 
> particular, a problem if this matrix class is combined with the Schur 
> complement matrix class, since MatMult_SchurComplement
> calls this function with NULL as *R; see, e.g., 
> https://petsc.org/release/src/ksp/ksp/utils/schurm/schurm.c.html at line 66.
> 
> I attach a minimal example.  You need to modify the paths to the PETSc 
> installation in the makefile.
> 
> Best regards
> Stephan Köhler
> 
> -- 
> Stephan Köhler
> TU Bergakademie Freiberg
> Institut für numerische Mathematik und Optimierung
> 
> Akademiestraße 6
> 09599 Freiberg
> Gebäudeteil Mittelbau, Zimmer 2.07
> 
> Telefon: +49 (0)3731 39-3173 (Büro)
> 
> 



Re: [petsc-users] Report Bug TaoALMM class

2022-11-02 Thread Barry Smith


  Stephan,

I have located the troublesome line in TaoSetUp_ALMM(); it has the line

  auglag->Px = tao->solution;

and in almm.h it has 

Vec  Px, LgradX, Ce, Ci, G; /* aliased vectors (do not destroy!) */

Now auglag->Px in some situations aliases auglag->P, and in some cases auglag->Px 
serves to hold a portion of auglag->P. So then in 
TaoALMMSubsolverObjective_Private()
the lines

  PetscCall(VecCopy(P, auglag->P));
  PetscCall((*auglag->sub_obj)(auglag->parent));

cause, just as you said, tao->solution to be overwritten by the P at which the 
objective function is being computed. In other words, the solution of the outer 
Tao is aliased with the solution of the inner Tao, by design. 

You are definitely correct: the use of TaoALMMSubsolverObjective_Private and 
TaoALMMSubsolverObjectiveAndGradient_Private in a line search would be 
problematic. 
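
To make the mechanism concrete, here is an illustrative-only sketch in plain
C++ (no PETSc calls; the names are stand-ins) of why copying the evaluation
point into an aliased solution breaks a backtracking line search:

  #include <cstdio>

  static double x_solution = 0.0;  // plays the role of tao->solution / auglag->Px

  // Objective evaluation that, like the subsolver objective described above,
  // copies the evaluation point into the stored solution before evaluating.
  static double objective(double p) {
    x_solution = p;                  // the problematic copy
    return (p - 3.0) * (p - 3.0);    // toy objective with minimizer at 3
  }

  int main() {
    double xk = 0.0, dxk = 10.0;     // current iterate and (too large) update
    x_solution = xk;

    // Backtracking: intended trial points are xk + alpha*dxk, alpha = 1, 1/2, 1/4, ...
    for (double alpha = 1.0; alpha > 0.1; alpha *= 0.5) {
      double trial = x_solution + alpha * dxk;  // base point should stay xk,
      double f     = objective(trial);          // but the copy above moves it
      std::printf("alpha=%.3f trial=%.2f f=%.2f\n", alpha, trial, f);
    }
    // Intended trial points: 10, 5, 2.5, 1.25; actually evaluated: 10, 15, 17.5, 18.75.
    return 0;
  }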

I am not an expert in these methods or their implementations. Could you point 
to an actual use case within Tao that triggers the problem? Is there a set of 
command-line options or code calls to Tao that fail due to this "design 
feature"? Within the standard use of ALMM I do not see how the objective 
function would be used within a line search. The TaoSolve_ALMM() code is 
self-correcting in that if a trust-region check fails, it automatically rolls 
back the solution.

  Barry




> On Oct 28, 2022, at 4:27 AM, Stephan Köhler wrote:
> 
> Dear PETSc/Tao team,
> 
> it seems that there is a bug in the TaoALMM class:
> 
> In the methods TaoALMMSubsolverObjective_Private and 
> TaoALMMSubsolverObjectiveAndGradient_Private, the vector at which the 
> augmented Lagrangian is evaluated is copied into the current solution; see, 
> e.g., https://petsc.org/release/src/tao/constrained/impls/almm/almm.c.html, 
> line 672 or 682.  This causes the subsolver to not converge if its line 
> search rejects the step length 1 for some update.  In detail:
> 
> Suppose the current iterate is xk and the current update is dxk. The line 
> search now evaluates the augmented Lagrangian at (xk + dxk).  This causes 
> the value (xk + dxk) to be copied into the current solution.  If the point 
> (xk + dxk) is rejected, the line search should try the point 
> (xk + alpha * dxk), where alpha < 1.  But due to the copying, the point 
> ((xk + dxk) + alpha * dxk) is evaluated instead; see, e.g., 
> https://petsc.org/release/src/tao/linesearch/impls/armijo/armijo.c.html, 
> line 191.
> 
> Best regards
> Stephan Köhler
> 
> -- 
> Stephan Köhler
> TU Bergakademie Freiberg
> Institut für numerische Mathematik und Optimierung
> 
> Akademiestraße 6
> 09599 Freiberg
> Gebäudeteil Mittelbau, Zimmer 2.07
> 
> Telefon: +49 (0)3731 39-3173 (Büro)
> 
> 



Re: [petsc-users] Can I use PETSc DMPlex to output the surface mesh?

2022-11-02 Thread Matthew Knepley
On Wed, Nov 2, 2022 at 8:57 AM Gong Yujie  wrote:

> Dear development team,
>
> Now I'm doing a project about visualization. In the process of
> visualization, the surface mesh is preferred. I have two questions about
> the DMPlex mesh.
>
>
>    1. Can I output the 3D volume mesh in DMPlex as a .obj or .fbx file?
>    Both of these formats contain only surface meshes.
>
1) These are undocumented, proprietary formats. We would have to link against
the Autodesk libraries in order to write them.

>
>    2. If not, can I output part of the mesh? Since I have labels for this
>    part of the mesh, can I output it (the boundary surface mesh) separately?
>
2) Yes, you can output the surface or the volume to VTK or HDF5. To get
just the surface, you can use

  https://petsc.org/main/docs/manualpages/DMPlex/DMPlexCreateSubmesh/

with the boundary label.
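
A minimal sketch (the label name "boundary", the label value 1, and the mesh
options are assumptions for illustration, not part of your setup):

  #include <petscdmplex.h>
  #include <petscviewer.h>

  int main(int argc, char **argv)
  {
    DM          dm, subdm;
    DMLabel     label;
    PetscViewer viewer;

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
    PetscCall(DMCreate(PETSC_COMM_WORLD, &dm));
    PetscCall(DMSetType(dm, DMPLEX));
    PetscCall(DMSetFromOptions(dm));  /* e.g. -dm_plex_filename mesh.msh */

    /* Mark all boundary faces with value 1 in a label named "boundary" */
    PetscCall(DMCreateLabel(dm, "boundary"));
    PetscCall(DMGetLabel(dm, "boundary", &label));
    PetscCall(DMPlexMarkBoundaryFaces(dm, 1, label));

    /* Extract the labeled faces as a surface submesh */
    PetscCall(DMPlexCreateSubmesh(dm, label, 1, PETSC_TRUE, &subdm));

    /* Write the surface mesh to VTK */
    PetscCall(PetscViewerVTKOpen(PETSC_COMM_WORLD, "surface.vtu", FILE_MODE_WRITE, &viewer));
    PetscCall(DMView(subdm, viewer));
    PetscCall(PetscViewerDestroy(&viewer));

    PetscCall(DMDestroy(&subdm));
    PetscCall(DMDestroy(&dm));
    PetscCall(PetscFinalize());
    return 0;
  }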

  Thanks,

 Matt


> Best Regards,
> Jerry
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] Field split degree of freedom ordering

2022-11-02 Thread Jed Brown
Yes, the normal approach is to partition your mesh once and then, for each field, 
resolve ownership of any interface dofs with respect to the element partition 
(so a shared vertex's velocity dofs can land on any process that owns an adjacent 
element, though even this isn't strictly necessary).
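
A minimal sketch of wiring such splits into PCFIELDSPLIT via index sets (the
function and the arrays velIdx/presIdx, holding the global dof numbers owned
by this rank for each field, are hypothetical stand-ins for the application's
own numbering):

  #include <petscksp.h>

  static PetscErrorCode SetupSplits(KSP ksp, PetscInt nvel, const PetscInt velIdx[],
                                    PetscInt npres, const PetscInt presIdx[])
  {
    PC pc;
    IS isVel, isPres;

    PetscFunctionBeginUser;
    PetscCall(KSPGetPC(ksp, &pc));
    PetscCall(PCSetType(pc, PCFIELDSPLIT));
    /* The index sets list the global rows of each field owned by this rank;
       the two splits need not be interlaced or equally sized. */
    PetscCall(ISCreateGeneral(PetscObjectComm((PetscObject)ksp), nvel, velIdx, PETSC_COPY_VALUES, &isVel));
    PetscCall(ISCreateGeneral(PetscObjectComm((PetscObject)ksp), npres, presIdx, PETSC_COPY_VALUES, &isPres));
    PetscCall(PCFieldSplitSetIS(pc, "velocity", isVel));
    PetscCall(PCFieldSplitSetIS(pc, "pressure", isPres));
    PetscCall(ISDestroy(&isVel));
    PetscCall(ISDestroy(&isPres));
    PetscFunctionReturn(0);
  }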

Alexander Lindsay  writes:

> So, in the latter case, IIUC we can maintain how we distribute data among
> the processes (partitioning of elements) such that with respect to a
> `-ksp_view_pmat` nothing changes and our velocity and pressure dofs are
> interlaced on a global scale (e.g. each process has some velocity and
> pressure dofs) ... but in order to leverage field split we need those index
> sets in order to avoid the equal size constraint?
>
> On Tue, Nov 1, 2022 at 11:57 PM Jed Brown  wrote:
>
>> In most circumstances, you can and should interlace in some form such that
>> each block in fieldsplit is distributed across all ranks. If you interlace
>> at scalar granularity as described, then each block needs to be able to do
>> that. So for the Stokes equations with equal order elements (like P1-P1
>> stabilized), you can interlace (u,v,w,p), but for mixed elements (like
>> Q2-P1^discontinuous) you can't interlace in that way. You can still
>> distribute pressure and velocity over all processes, but will need index
>> sets to identify the velocity-pressure splits.
>>
>> Alexander Lindsay  writes:
>>
>> > In the block matrices documentation, it's stated: "Note that for
>> > interlaced storage the number of rows/columns of each block must be the
>> > same size." Is interlacing defined in a global sense, or a process-local
>> > sense? So explicitly, if I don't want the same-size restriction, do I need
>> > to ensure that globally all of my block 1 dofs are numbered after my
>> > block 0 dofs? Or do I need to follow that on a process-local level?
>> > Essentially in libMesh we always follow rank-major ordering. I'm asking
>> > whether, for unequal row sizes, in order to split, we would need to
>> > strictly follow variable-major ordering (splitting here meaning splitting
>> > by variable)?
>> >
>> > Alex
>>


[petsc-users] Can I use PETSc DMPlex to output the surface mesh?

2022-11-02 Thread Gong Yujie
Dear development team,

Now I'm doing a project about visualization. In the process of visualization, 
the surface mesh is preferred. I have two questions about the DMPlex mesh.


  1.  Can I output the 3D volume mesh in DMPlex as a .obj or .fbx file? Both 
of these formats contain only surface meshes.
  2.  If not, can I output part of the mesh? Since I have labels for this part 
of the mesh, can I output it (the boundary surface mesh) separately?

Best Regards,
Jerry


Re: [petsc-users] Field split degree of freedom ordering

2022-11-02 Thread Alexander Lindsay
So, in the latter case, IIUC we can maintain how we distribute data among
the processes (partitioning of elements) such that with respect to a
`-ksp_view_pmat` nothing changes and our velocity and pressure dofs are
interlaced on a global scale (e.g. each process has some velocity and
pressure dofs) ... but in order to leverage field split we need those index
sets in order to avoid the equal size constraint?

On Tue, Nov 1, 2022 at 11:57 PM Jed Brown  wrote:

> In most circumstances, you can and should interlace in some form such that
> each block in fieldsplit is distributed across all ranks. If you interlace
> at scalar granularity as described, then each block needs to be able to do
> that. So for the Stokes equations with equal order elements (like P1-P1
> stabilized), you can interlace (u,v,w,p), but for mixed elements (like
> Q2-P1^discontinuous) you can't interlace in that way. You can still
> distribute pressure and velocity over all processes, but will need index
> sets to identify the velocity-pressure splits.
>
> Alexander Lindsay  writes:
>
> > In the block matrices documentation, it's stated: "Note that for
> > interlaced storage the number of rows/columns of each block must be the
> > same size." Is interlacing defined in a global sense, or a process-local
> > sense? So explicitly, if I don't want the same-size restriction, do I need
> > to ensure that globally all of my block 1 dofs are numbered after my
> > block 0 dofs? Or do I need to follow that on a process-local level?
> > Essentially in libMesh we always follow rank-major ordering. I'm asking
> > whether, for unequal row sizes, in order to split, we would need to
> > strictly follow variable-major ordering (splitting here meaning splitting
> > by variable)?
> >
> > Alex
>


[petsc-users] Bug report LMVM matrix class

2022-11-02 Thread Stephan Köhler

Dear PETSc/Tao team,

it seems that there is a bug in the LMVM matrix class:

In the function MatCreateVecs_LMVM (see, e.g., 
https://petsc.org/release/src/ksp/ksp/utils/lmvm/lmvmimpl.c.html at line 214), 
it is not checked whether the vectors *L or *R are NULL.  This is, in 
particular, a problem if this matrix class is combined with the Schur 
complement matrix class, since MatMult_SchurComplement
calls this function with NULL as *R; see, e.g., 
https://petsc.org/release/src/ksp/ksp/utils/schurm/schurm.c.html at line 66.


I attach a minimal example.  You need to modify the paths to the PETSc 
installation in the makefile.
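
For reference, the kind of guard that appears to be missing looks roughly like
the sketch below (illustrative only; it is neither the LMVM code nor the fix in
the merge request mentioned in the reply): a MatCreateVecs-style routine should
only touch *R / *L when the caller asked for them.

  #include <petscmat.h>

  static PetscErrorCode MatCreateVecs_Guarded(Mat B, Vec *R, Vec *L)
  {
    PetscInt m, n, M, N;

    PetscFunctionBegin;
    PetscCall(MatGetLocalSize(B, &m, &n));
    PetscCall(MatGetSize(B, &M, &N));
    if (R) { /* right vector requested: matches the column layout */
      PetscCall(VecCreateMPI(PetscObjectComm((PetscObject)B), n, N, R));
    }
    if (L) { /* left vector requested: matches the row layout */
      PetscCall(VecCreateMPI(PetscObjectComm((PetscObject)B), m, M, L));
    }
    PetscFunctionReturn(0);
  }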


Best regards
Stephan Köhler

--
Stephan Köhler
TU Bergakademie Freiberg
Institut für numerische Mathematik und Optimierung

Akademiestraße 6
09599 Freiberg
Gebäudeteil Mittelbau, Zimmer 2.07

Telefon: +49 (0)3731 39-3173 (Büro)



Minimal_example_schur_lmvm.tar.gz
Description: application/gzip

