[petsc-users] Bug report VecNorm

2023-12-09 Thread Stephan Köhler

Dear PETSc/Tao team,

there is a bug in the vector interface:  In the function
VecNorm, see, e.g.,
https://petsc.org/release/src/vec/vec/interface/rvector.c.html#VecNorm
line 197, the consistency check in line 214 is done on the wrong
communicator.  The communicator should be PETSC_COMM_SELF.

Otherwise the program may hang when the PetscCheck is triggered, since the
cached norm can be available on some MPI processes but not on others.
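
In rough outline, the problematic pattern looks like the following helper
(a sketch based on the attached minimal example, not the exact PETSc source;
the function name CheckNormCacheConsistency and the variable names are only
illustrative, and petscvec.h is assumed to be included):

  static PetscErrorCode CheckNormCacheConsistency(Vec x, NormType type)
  {
    PetscBool avail, allavail;
    PetscReal norm;

    PetscFunctionBeginUser;
    PetscCall(VecNormAvailable(x, type, &avail, &norm));
    PetscCall(MPIU_Allreduce(&avail, &allavail, 1, MPIU_BOOL, MPI_LAND, PetscObjectComm((PetscObject)x)));
    /* availability of the cached norm is a per-process property, so a failing
       check should abort on PETSC_COMM_SELF, not on the vector's communicator */
    PetscCheck(avail == allavail, PETSC_COMM_SELF, PETSC_ERR_ARG_WRONGSTATE,
               "Some MPI processes have cached norm, others do not");
    PetscFunctionReturn(PETSC_SUCCESS);
  }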

Please find a minimal example attached.


Kind regards,
Stephan Köhler

--
Stephan Köhler
TU Bergakademie Freiberg
Institut für numerische Mathematik und Optimierung

Akademiestraße 6
09599 Freiberg
Gebäudeteil Mittelbau, Zimmer 2.07

Telefon: +49 (0)3731 39-3188 (Büro)

#include "petscvec.h"

int main(int argc, char **args)
{
  PetscMPIInt size, rank;
  Vec vec;
  PetscReal   norm;
  PetscBool   flg = PETSC_FALSE, minflg = PETSC_FALSE;
  MPI_Comm    comm;
  PetscScalar *xx;

  PetscCall(PetscInitialize(&argc, &args, PETSC_NULLPTR, PETSC_NULLPTR));

  comm = PETSC_COMM_WORLD;
  PetscCallMPI(MPI_Comm_size(comm, &size));
  PetscCallMPI(MPI_Comm_rank(comm, &rank));
  PetscCheck(size > 1, comm, PETSC_ERR_ARG_WRONG, "example should be called with more than 1 MPI rank.");

  PetscCall(VecCreateMPI(comm, (rank + 1) * 10, PETSC_DETERMINE, &vec));
  PetscCall(VecSet(vec, 1.0));
  PetscCall(VecNorm(vec, NORM_INFINITY, &norm));

  PetscCall(PetscSynchronizedPrintf(comm, "rank = %d, size = %d, norm = %lf\n", rank, size, (double)norm));
  PetscCall(PetscSynchronizedFlush(comm, PETSC_STDOUT));

  /* invalidate the cached norm on rank 0 only */
  if (rank == 0) {
    PetscCall(VecGetArrayWrite(vec, &xx));
    PetscCall(VecRestoreArrayWrite(vec, &xx));
  }

  PetscCall(VecNormAvailable(vec, NORM_INFINITY, &flg, &norm));

  PetscCall(PetscSynchronizedPrintf(comm, "rank = %d, size = %d, flg = %d, norm = %lf\n", rank, size, flg, (double)norm));
  PetscCall(PetscSynchronizedFlush(comm, PETSC_STDOUT));

  PetscCall(MPIU_Allreduce(&flg, &minflg, 1, MPIU_BOOL, MPI_LAND, PetscObjectComm((PetscObject)vec)));
  /* wrong */
  PetscCheck(flg == minflg, PetscObjectComm((PetscObject)vec), PETSC_ERR_ARG_WRONGSTATE, "Some MPI processes have cached norm, others do not. This may happen when some MPI processes call VecGetArray() and some others do not.");
  /* this is correct */
  // PetscCheck(flg == minflg, PETSC_COMM_SELF, PETSC_ERR_ARG_WRONGSTATE, "Some MPI processes have cached norm, others do not. This may happen when some MPI processes call VecGetArray() and some others do not.");

  PetscCall(VecDestroy(&vec));

  PetscCall(PetscFinalize());

  return 0;

}




Re: [petsc-users] Bug Report TaoALMM class

2023-12-09 Thread Stephan Köhler

Dear PETSc/Tao team,

this is still an open issue, and so far I have not heard anything
indicating that I am wrong.

Kind regards,
Stephan Köhler

On 18.07.23 at 02:21, Matthew Knepley wrote:

Toby and Hansol,

Has anyone looked at this?

   Thanks,

  Matt

On Mon, Jun 12, 2023 at 8:24 AM Stephan Köhler <
stephan.koeh...@math.tu-freiberg.de> wrote:


Dear PETSc/Tao team,

I think there might be a bug in the Tao ALMM class:  In the function
TaoALMMComputeAugLagAndGradient_Private(), see, e.g.,

https://petsc.org/release/src/tao/constrained/impls/almm/almm.c.html#TAOALMM
line 648, the gradient seems to be wrong.

The function value and gradient are currently computed as
Lc = F + Ye^TCe + Yi^T(Ci - S) + 0.5*mu*[Ce^TCe + (Ci - S)^T(Ci - S)],
dLc/dX = dF/dX + Ye^TAe + Yi^TAi + 0.5*mu*[Ce^TAe + (Ci - S)^TAi],

but I think the gradient should be (without 0.5)

dLc/dX = dF/dX + Ye^TAe + Yi^TAi + mu*[Ce^TAe + (Ci - S)^TAi].
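
For reference, applying the chain rule to the quadratic penalty terms, written
in the notation above with Ae = dCe/dX and Ai = dCi/dX (a sketch of the
derivation, not taken from the PETSc source):

  \frac{\partial}{\partial X}\Big[\tfrac{1}{2}\mu\, C_e^{T} C_e\Big]
    = \tfrac{1}{2}\mu \big(2\, C_e^{T} A_e\big) = \mu\, C_e^{T} A_e,
  \qquad
  \frac{\partial}{\partial X}\Big[\tfrac{1}{2}\mu\, (C_i - S)^{T} (C_i - S)\Big]
    = \mu\, (C_i - S)^{T} A_i,

so the factor 0.5 cancels against the 2 coming from differentiating the square,
consistent with the corrected gradient above.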

Kind regards,
Stephan Köhler

--
Stephan Köhler
TU Bergakademie Freiberg
Institut für numerische Mathematik und Optimierung

Akademiestraße 6
09599 Freiberg
Gebäudeteil Mittelbau, Zimmer 2.07

Telefon: +49 (0)3731 39-3173 (Büro)




--
Stephan Köhler
TU Bergakademie Freiberg
Institut für numerische Mathematik und Optimierung

Akademiestraße 6
09599 Freiberg
Gebäudeteil Mittelbau, Zimmer 2.07

Telefon: +49 (0)3731 39-3188 (Büro)





Re: [petsc-users] PETSc and MPI-3/RMA

2023-12-09 Thread Jed Brown
PETSc's communication layer (PetscSF) uses nonblocking point-to-point by default,
since that tends to perform better and is less prone to MPI implementation bugs,
but you can select `-sf_type window` to try MPI-3 RMA, or use one of the other
strategies listed below, depending on the sort of problem you're working with.

#define PETSCSFBASIC  "basic"
#define PETSCSFNEIGHBOR   "neighbor"
#define PETSCSFALLGATHERV "allgatherv"
#define PETSCSFALLGATHER  "allgather"
#define PETSCSFGATHERV"gatherv"
#define PETSCSFGATHER "gather"
#define PETSCSFALLTOALL   "alltoall"
#define PETSCSFWINDOW "window"
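
For illustration, a minimal sketch of selecting the window (MPI-3 RMA)
implementation programmatically for a standalone PetscSF (the communication
graph setup is elided; for SFs created internally by other PETSc objects, the
-sf_type option above is the usual way to change the type):

  #include <petscsf.h>

  int main(int argc, char **argv)
  {
    PetscSF sf;

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
    PetscCall(PetscSFCreate(PETSC_COMM_WORLD, &sf));
    PetscCall(PetscSFSetType(sf, PETSCSFWINDOW)); /* request MPI-3 one-sided (RMA) communication */
    PetscCall(PetscSFSetFromOptions(sf));         /* lets -sf_type override the choice at run time */
    /* ... describe the communication graph with PetscSFSetGraph() and use the SF ... */
    PetscCall(PetscSFDestroy(&sf));
    PetscCall(PetscFinalize());
    return 0;
  }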

PETSc does try to use GPU-aware MPI, though implementation bugs are present on 
many machines and it often requires a delicate environment arrangement.

"Maeder  Alexander"  writes:

> I am a new user of PETSc and want to know more about the underlying
> implementation of matrix-vector multiplication (Ax = y).
>
> PETSc utilizes a 1D distribution and communicates only the parts of the
> vector x that are needed, depending on the sparsity pattern of A.
>
> Is the communication of x done with MPI-3 RMA, and does it utilize
> CUDA-aware MPI for RMA?
>
> Best regards,
>
>
> Alexander Maeder


[petsc-users] PETSc and MPI-3/RMA

2023-12-09 Thread Maeder Alexander
I am a new user of PETSc and want to know more about the underlying
implementation of matrix-vector multiplication (Ax = y).

PETSc utilizes a 1D distribution and communicates only the parts of the
vector x that are needed, depending on the sparsity pattern of A.

Is the communication of x done with MPI-3 RMA, and does it utilize
CUDA-aware MPI for RMA?


Best regards,


Alexander Maeder
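
For context, a minimal sketch of the setup being asked about: a row-distributed
(1D) AIJ matrix and a MatMult, where PETSc handles the communication of the
needed entries of x internally (matrix size, stencil, and preallocation values
here are arbitrary illustration choices):

  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    Mat      A;
    Vec      x, y;
    PetscInt rstart, rend, N = 100;

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
    /* 1D (row) distribution: each process owns a contiguous block of rows */
    PetscCall(MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, N, N, 2, NULL, 1, NULL, &A));
    PetscCall(MatGetOwnershipRange(A, &rstart, &rend));
    for (PetscInt i = rstart; i < rend; i++) {
      PetscCall(MatSetValue(A, i, i, 2.0, INSERT_VALUES));                     /* diagonal entry */
      if (i + 1 < N) PetscCall(MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES)); /* one off-diagonal entry */
    }
    PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
    PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

    PetscCall(MatCreateVecs(A, &x, &y));
    PetscCall(VecSet(x, 1.0));
    PetscCall(MatMult(A, x, y)); /* only the required off-process entries of x are communicated */

    PetscCall(VecDestroy(&x));
    PetscCall(VecDestroy(&y));
    PetscCall(MatDestroy(&A));
    PetscCall(PetscFinalize());
    return 0;
  }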