Re: [petsc-users] Verifying matching PetscFunctionBeginUser() and PetscFunctionReturn()

2023-07-12 Thread Barry Smith


  Ok, I don't understand the logic behind this, but if you use 
PetscFunctionBeginUser (not PetscFunctionBegin) the check on exit from the 
function is not done.

 PetscCheckAbort(!stack__.check || stack__.petscroutine[stack__.currentsize] != 1 || stack__.function[stack__.currentsize] == (const char *)(func__), PETSC_COMM_SELF, PETSC_ERR_PLIB, "Invalid stack: push from %s %s:%d. Pop from %s %s:%d.\n", \

since petscroutine[] has been set to 2, the middle || clause is true, so 
PetscCheckAbort() is not triggered.

  Jacob, do you remember the logic behind skipping the checks here for user 
routines?  

  Seems like it should check it: if the user buys into using 
PetscFunctionBeginUser/PetscFunctionReturn, we should have checks to make sure 
they are doing it correctly.

  Barry




> On Jul 12, 2023, at 1:30 PM, Aagaard, Brad T  wrote:
> 
> I created a small toy example (attached) that suggests that the verification 
> of matching PetscFunctionBeginUser() and PetscFunctionReturn() fails when 
> PetscFunctionReturn() is missing or in some cases when different functions 
> are missing PetscFunctionBeginUser() or PetscFunctionReturn(). The cases 
> where the code crashes are much more complex and not easily reproduced in a 
> small toy example.
> 
> Here is the output for 4 cases of running my toy example:
> 
> CASE 1: Correct stack. No errors.
> 
> ./check_petscstack
> Testing correct stack...
> Layer 0: Calling layer 1
> Layer 1: Noop
> 
> CASE 2: Missing PetscFunctionBeginUser() in layer 0 is correctly detected and 
> error message is generated.
> 
> ./check_petscstack missing_begin
> Testing missing PetscFunctionBeginUser() in layer 0...
> Layer 0: Calling layer 1
> Layer 1: Noop
> [0]PETSC ERROR: - Error Message 
> --
> [0]PETSC ERROR: Petsc has generated inconsistent data
> [0]PETSC ERROR: Invalid stack size 0, pop layer0 
> /home/baagaard/src/cig/pylith/tests/libtests/utils/TestPetscStack.cc:54.
> 
> [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
> [0]PETSC ERROR: Petsc Development GIT revision: v3.19.2-475-g71252c2aa25  GIT 
> Date: 2023-06-15 13:06:55 -0600
> [0]PETSC ERROR: 
> /home/baagaard/scratch/build/gcc-9.3/cig/pylith-debug/tests/libtests/utils/.libs/check_petscstack
>  on a arch-gcc-9.3_debug named igskcicguwarlng by baagaard Wed Jul 12 
> 11:20:50 2023
> [0]PETSC ERROR: Configure options --PETSC_ARCH=arch-gcc-9.3_debug 
> --with-debugging=1 --with-clanguage=c --with-mpi-compilers=1 
> --with-shared-libraries=1 --with-64-bit-points=1 --with-large-file-io=1 
> --with-lgrind=0 --download-chaco=1 --download-parmetis=1 --download-metis=1 
> --download-triangle --download-ml=1 --download-superlu=1 --with-fc=0 
> --download-f2cblaslapack --with-hdf5=1 
> --with-hdf5-dir=/software/baagaard/hdf5-1.12.1/gcc-9.3 --with-zlib=1 CFLAGS=-g
> [0]PETSC ERROR: #1 layer0() at 
> /home/baagaard/src/cig/pylith/tests/libtests/utils/TestPetscStack.cc:54
> --
> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_SELF
> with errorcode 77.
> 
> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> You may or may not see output from other processes, depending on
> exactly when Open MPI kills them.
> --
> 
> CASE 3: Missing PetscFunctionReturn() in layer 1 is not detected.
> 
> ./check_petscstack missing_return
> Testing missing PetscFunctionReturn() in layer 1...
> Layer 0: Calling layer 1
> Layer 1: Noop
> 
> CASE 4: Missing PetscFunctionBeginUser() in layer 0 and missing 
> PetscFunctionReturn() in layer 1 are not detected.
> 
> ./check_petscstack missing_both
> Testing missing PetscFunctionBeginUser() in layer 0 and PetscFunctionReturn() 
> in layer 1...
> Layer 0: Calling layer 1
> Layer 1: Noop
> 
> 
> From: Barry Smith 
> Sent: Tuesday, July 11, 2023 10:45 PM
> To: Aagaard, Brad T
> Cc: petsc-users@mcs.anl.gov
> Subject: [EXTERNAL] Re: [petsc-users] Verifying matching 
> PetscFunctionBeginUser() and PetscFunctionReturn()
> 
> 
> 
> 
> 
> 
> #define PetscStackPop_Private(stack__, func__) \
>   do { \
>     PetscCheckAbort(!stack__.check || stack__.currentsize > 0, PETSC_COMM_SELF, PETSC_ERR_PLIB, "Invalid stack size %d, pop %s %s:%d.\n", stack__.currentsize, func__, __FILE__, __LINE__); \
>     if (--stack__.currentsize < PETSCSTACKSIZE) { \
>       PetscCheckAbort(!stack__.check || stack__.petscroutine[stack__.currentsize] != 1 || stack__.function[stack__.currentsize] == (const char *)(func__), PETSC_COMM_SELF, PETSC_ERR_PLIB, "Invalid stack: push from %s %s:%d. Pop from %s %s:%d.\n", \
>         stack__.function[stack__.currentsize], 

Re: [petsc-users] Near null space for a fieldsplit in petsc4py

2023-07-12 Thread Pierre Jolivet


> On 12 Jul 2023, at 6:04 PM, TARDIEU Nicolas via petsc-users 
>  wrote:
> 
> Dear PETSc team,
> 
> In the attached example, I set up a block PC for a saddle-point problem in 
> petsc4py. The index sets (IS) define the unknowns, namely some physical 
> quantity (phys) and a Lagrange multiplier (lags).
> I would like to attach a near null space to the physical block, in order to 
> get the best performance from an AMG PC.
> I have been trying hard, attaching it to the initial block and to the IS, 
> but no matter what I do, when it comes to "ksp_view", no near null space is 
> attached to the matrix.
> 
> Could you please help me figure out what I am doing wrong?

Are you using a double-precision, 32-bit-integer, real build of PETSc?
Is it --with-debugging=0?
Because with my debug build, I get the following error (thus explaining why 
it’s not attached to the KSP).
Traceback (most recent call last):
  File "/Volumes/Data/Downloads/test/test_NullSpace.py", line 35, in 
    ns = NullSpace().create(True, [v], comm=comm)
  File "petsc4py/PETSc/Mat.pyx", line 5611, in petsc4py.PETSc.NullSpace.create
petsc4py.PETSc.Error: error code 62
[0] MatNullSpaceCreate() at 
/Volumes/Data/repositories/petsc/src/mat/interface/matnull.c:249
[0] Invalid argument
[0] Vector 0 must have 2-norm of 1.0, it is 22.3159

Furthermore, if you set the constant vector in the near null space yourself, 
then the first argument of create() must be False; otherwise you’ll have the 
same vector twice, and you’ll end up with another error (the vectors in the 
near null space must be orthonormal).
If things still don’t work after those couple of fixes, please feel free to 
send an up-to-date reproducer.

Thanks,
Pierre
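
To make the orthonormality requirement concrete, here is a minimal 
self-contained Python sketch (plain lists, not petsc4py API; the helper name 
is made up) of preparing a basis that would pass the checks 
MatNullSpaceCreate() performs: each vector must have a 2-norm of 1 and the 
vectors must be mutually orthogonal. In petsc4py the normalization itself can 
be done with vec.normalize() on each vector before calling 
NullSpace().create().

```python
import math

def orthonormalize(vectors):
    """Modified Gram-Schmidt on plain Python lists (a generic sketch, not
    petsc4py). The output satisfies the two properties MatNullSpaceCreate()
    checks: each vector has 2-norm 1, and the vectors are mutually
    orthogonal."""
    basis = []
    for v in vectors:
        w = list(v)
        for q in basis:
            dot = sum(a * b for a, b in zip(w, q))
            w = [a - dot * b for a, b in zip(w, q)]
        norm = math.sqrt(sum(a * a for a in w))
        if norm > 1e-12:  # drop vectors that are numerically dependent
            basis.append([a / norm for a in w])
    return basis

# The vector in the traceback above had 2-norm 22.3159 and was rejected;
# after normalization the same direction is accepted.
raw = [[3.0, 4.0, 0.0], [1.0, 1.0, 1.0]]
basis = orthonormalize(raw)
for q in basis:
    print(round(math.sqrt(sum(a * a for a in q)), 10))  # each line: 1.0
```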

> Thanks,
> Nicolas
> 
> 
> 
> 
> 
> This message and any attachments (the 'Message') are intended solely for the 
> addressees. The information contained in this Message is confidential. Any 
> use of information contained in this Message not in accord with its purpose, 
> any dissemination or disclosure, either whole or partial, is prohibited 
> except formal approval.
> 
> If you are not the addressee, you may not copy, forward, disclose or use any 
> part of it. If you have received this message in error, please delete it and 
> all copies from your system and notify the sender immediately by return 
> message.
> 
> E-mail communication cannot be guaranteed to be timely secure, error or 
> virus-free.
> 



Re: [petsc-users] Verifying matching PetscFunctionBeginUser() and PetscFunctionReturn()

2023-07-12 Thread Aagaard, Brad T via petsc-users
I created a small toy example (attached) that suggests that the verification of 
matching PetscFunctionBeginUser() and PetscFunctionReturn() fails when 
PetscFunctionReturn() is missing or in some cases when different functions are 
missing PetscFunctionBeginUser() or PetscFunctionReturn(). The cases where the 
code crashes are much more complex and not easily reproduced in a small toy 
example.

Here is the output for 4 cases of running my toy example:

CASE 1: Correct stack. No errors.

./check_petscstack
Testing correct stack...
Layer 0: Calling layer 1
Layer 1: Noop

CASE 2: Missing PetscFunctionBeginUser() in layer 0 is correctly detected and 
error message is generated.

./check_petscstack missing_begin
Testing missing PetscFunctionBeginUser() in layer 0...
Layer 0: Calling layer 1
Layer 1: Noop
[0]PETSC ERROR: - Error Message 
--
[0]PETSC ERROR: Petsc has generated inconsistent data
[0]PETSC ERROR: Invalid stack size 0, pop layer0 
/home/baagaard/src/cig/pylith/tests/libtests/utils/TestPetscStack.cc:54.

[0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision: v3.19.2-475-g71252c2aa25  GIT 
Date: 2023-06-15 13:06:55 -0600
[0]PETSC ERROR: 
/home/baagaard/scratch/build/gcc-9.3/cig/pylith-debug/tests/libtests/utils/.libs/check_petscstack
 on a arch-gcc-9.3_debug named igskcicguwarlng by baagaard Wed Jul 12 11:20:50 
2023
[0]PETSC ERROR: Configure options --PETSC_ARCH=arch-gcc-9.3_debug 
--with-debugging=1 --with-clanguage=c --with-mpi-compilers=1 
--with-shared-libraries=1 --with-64-bit-points=1 --with-large-file-io=1 
--with-lgrind=0 --download-chaco=1 --download-parmetis=1 --download-metis=1 
--download-triangle --download-ml=1 --download-superlu=1 --with-fc=0 
--download-f2cblaslapack --with-hdf5=1 
--with-hdf5-dir=/software/baagaard/hdf5-1.12.1/gcc-9.3 --with-zlib=1 CFLAGS=-g
[0]PETSC ERROR: #1 layer0() at 
/home/baagaard/src/cig/pylith/tests/libtests/utils/TestPetscStack.cc:54
--
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_SELF
with errorcode 77.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--

CASE 3: Missing PetscFunctionReturn() in layer 1 is not detected.

./check_petscstack missing_return
Testing missing PetscFunctionReturn() in layer 1...
Layer 0: Calling layer 1
Layer 1: Noop

CASE 4: Missing PetscFunctionBeginUser() in layer 0 and missing 
PetscFunctionReturn() in layer 1 are not detected.

./check_petscstack missing_both
Testing missing PetscFunctionBeginUser() in layer 0 and PetscFunctionReturn() in 
layer 1...
Layer 0: Calling layer 1
Layer 1: Noop


From: Barry Smith 
Sent: Tuesday, July 11, 2023 10:45 PM
To: Aagaard, Brad T
Cc: petsc-users@mcs.anl.gov
Subject: [EXTERNAL] Re: [petsc-users] Verifying matching 
PetscFunctionBeginUser() and PetscFunctionReturn()






#define PetscStackPop_Private(stack__, func__) \
  do { \
    PetscCheckAbort(!stack__.check || stack__.currentsize > 0, PETSC_COMM_SELF, PETSC_ERR_PLIB, "Invalid stack size %d, pop %s %s:%d.\n", stack__.currentsize, func__, __FILE__, __LINE__); \
    if (--stack__.currentsize < PETSCSTACKSIZE) { \
      PetscCheckAbort(!stack__.check || stack__.petscroutine[stack__.currentsize] != 1 || stack__.function[stack__.currentsize] == (const char *)(func__), PETSC_COMM_SELF, PETSC_ERR_PLIB, "Invalid stack: push from %s %s:%d. Pop from %s %s:%d.\n", \
        stack__.function[stack__.currentsize], stack__.file[stack__.currentsize], stack__.line[stack__.currentsize], func__, __FILE__, __LINE__); \


  It is checking on each pop (return) that the current function is the same 
function in which the begin was called.

  You do not have to call through PetscCall() to have this stuff work (though 
we strongly recommend adding it).

  Even if you have a return and no outstanding begins, it should not crash.

  Maybe we need to see a couple of the crashes so we can try to figure out 
what is going on.


On Jul 11, 2023, at 11:29 PM, Aagaard, Brad T via petsc-users 
 wrote:

PETSc developers,

When I fail to have matching PetscFunctionBeginUser() and PetscFunctionReturn() 
in my code, I get segfaults and valgrind reports invalid writes at places in 
PETSc where memory is freed. As a result, it is difficult to track down the 
actual source of the error. I know there used to be a command line argument for 
checking for mismatches in PetscFunctionBeginUser() and PetscFunctionReturn(), 
but Matt said it is no longer implemented and 

Re: [petsc-users] Matrix-free generalised eigenvalue problem

2023-07-12 Thread Jose E. Roman
By default, it is solving the problem as B^{-1}*A*x=lambda*x (see chapter on 
Spectral Transformation). That is why A can be a shell matrix without problem. 
But B needs to be an explicit matrix in order to compute an LU factorization. 
If B is also a shell matrix then you should set an iterative solver for the 
associated KSP (see examples in the chapter).

An alternative is to create a shell matrix M that computes the action of 
B^{-1}*A, then pass M to the EPS solver as a standard eigenproblem.

Jose
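
The shell-operator alternative can be sketched without SLEPc. Below is a 
self-contained Python toy (2x2 matrices, a hand-rolled direct solve standing 
in for the KSP attached to B, and a power iteration playing the role of the 
EPS); it only illustrates the structure, since with slepc4py one would 
instead wrap the operator in a shell Mat and pass it to EPS as a standard 
eigenproblem.

```python
import math

def solve2x2(B, rhs):
    """Tiny direct solve standing in for the KSP attached to B."""
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    return [(B[1][1] * rhs[0] - B[0][1] * rhs[1]) / det,
            (B[0][0] * rhs[1] - B[1][0] * rhs[0]) / det]

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

class ShellOp:
    """Matrix-free operator whose action is y = B^{-1} (A x): the eigensolver
    only ever needs this product, never B^{-1} explicitly."""
    def __init__(self, A, B):
        self.A, self.B = A, B
    def mult(self, x):
        return solve2x2(self.B, matvec(self.A, x))

A = [[2.0, 0.0], [0.0, 1.0]]
B = [[1.0, 0.0], [0.0, 2.0]]   # B^{-1} A has eigenvalues 2 and 0.5
op = ShellOp(A, B)

x = [1.0, 1.0]
for _ in range(50):             # power iteration on the shell operator
    y = op.mult(x)
    norm = math.sqrt(sum(v * v for v in y))
    x = [v / norm for v in y]
lam = sum(a * b for a, b in zip(op.mult(x), x))  # Rayleigh quotient
print(round(lam, 6))            # → 2.0
```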


> On 12 Jul 2023, at 19:04, Quentin Chevalier 
>  wrote:
> 
> Hello PETSc Users,
> 
> I have a generalised eigenvalue problem: Ax = lambda Bx.
> Previously only A was matrix-free; I used MUMPS and an LU preconditioner, 
> and everything worked fine.
> 
> Now B is matrix-free as well, and my solver is returning an error: 
> "MatSolverType mumps does not support matrix type python", which is ironic 
> given it seems to handle A quite fine.
> 
> I have read in the user manual that some methods may require additional 
> operations to be supplied for B, like MATOP_GET_DIAGONAL, but it's unclear 
> to me exactly what I should be implementing and what is the best solver for 
> my case.
> 
> A is hermitian, B is hermitian positive but not positive-definite or real. 
> Therefore I have specified a GHEP problem type to the EPS object.
> 
> I use PETSc in complex mode through the petsc4py bridge.
> 
> Any help on how to get EPS to work for a generalised matrix-free case would 
> be welcome. Performance is not a key issue here - I have a tractable high 
> value case on hand.
> 
> Thank you for your time,
> 
> Quentin



[petsc-users] Matrix-free generalised eigenvalue problem

2023-07-12 Thread Quentin Chevalier
Hello PETSc Users,

I have a generalised eigenvalue problem: Ax = lambda Bx.
Previously only A was matrix-free; I used MUMPS and an LU preconditioner,
and everything worked fine.

Now B is matrix-free as well, and my solver is returning an error:
"MatSolverType mumps does not support matrix type python", which is ironic
given it seems to handle A quite fine.

I have read in the user manual that some methods may require additional
operations to be supplied for B, like MATOP_GET_DIAGONAL, but it's unclear
to me exactly what I should be implementing and what is the best solver for
my case.

A is hermitian, B is hermitian positive but not positive-definite or real.
Therefore I have specified a GHEP problem type to the EPS object.

I use PETSc in complex mode through the petsc4py bridge.

Any help on how to get EPS to work for a generalised matrix-free case would
be welcome. Performance is not a key issue here - I have a tractable high
value case on hand.

Thank you for your time,

Quentin


[petsc-users] Near null space for a fieldsplit in petsc4py

2023-07-12 Thread TARDIEU Nicolas via petsc-users
Dear PETSc team,

In the attached example, I set up a block PC for a saddle-point problem in
petsc4py. The index sets (IS) define the unknowns, namely some physical
quantity (phys) and a Lagrange multiplier (lags).
I would like to attach a near null space to the physical block, in order to
get the best performance from an AMG PC.
I have been trying hard, attaching it to the initial block and to the IS,
but no matter what I do, when it comes to "ksp_view", no near null space is
attached to the matrix.

Could you please help me figure out what I am doing wrong?

Thanks,
Nicolas






test.tgz
Description: test.tgz


Re: [petsc-users] [SLEPc] With the increasing of processors, result change in 1e-5 and solving time increase. krylovschur for MATSBAIJ matrix's smallest eigenvalue

2023-07-12 Thread Jose E. Roman
The computed eigenvalue has 7 matching digits, which agrees with the used 
tolerance. If you want more matching digits you have to reduce the tolerance.

The performance seems reasonable for up to 64 processes, so yes, the problem 
may be too small for more processes. But performance also depends a lot on the 
sparsity pattern of the matrix. Take into account that the scalability of 
matrix-vector products with SBAIJ matrices is expected to be worse than with 
AIJ. Anyway, to answer questions about performance it is better that you send 
the output of -log_view.

Jose
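
As a quick sanity check of the "7 matching digits" statement, the relative 
spread of the eigenvalues reported in the table below can be computed directly 
(a small Python sketch; the values are copied from the quoted message):

```python
# Eigenvalues reported for 16..640 processes in the table below.
vals = [-302.06881196, -302.06881892, -302.06881989, -302.06881236,
        -302.06881938, -302.06881029, -302.06882377]

spread = max(vals) - min(vals)           # absolute spread across runs
rel = spread / abs(min(vals))            # relative spread

print(f"absolute spread: {spread:.3e}")  # ~1.3e-05
print(f"relative spread: {rel:.3e}")     # ~4.5e-08, i.e. ~7 matching digits
```

The relative spread is about 4e-8, roughly the 1e-8 convergence tolerance, so 
the run-to-run variation is exactly what that tolerance allows.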


> On 12 Jul 2023, at 10:18, Runfeng Jin  wrote:
> 
> Hi,
>  When I try to increase the number of processors used to solve the same 
> matrix (to acquire the smallest eigenvalue), I find that all the results 
> differ from each other within about 1e-5 (though ||Ax-kx||/||kx|| reaches 
> 1e-8 in every case), and the solve time first decreases and then increases.
> My questions are:
> (1) Is there any way to make the result more consistent?
> (2) Does the solve time stop decreasing because the matrix dimension is too 
> small for so many processors? Is there any way to reduce the solve time when 
> increasing the number of processors?
> 
> Thank you!
> Runfeng
> 
> 
> matrix type  MATSBAIJ
> matrix dimension  2078802
> solver krylovschur
> blocksize 1
> dimension of the subspace   PETSC_DEFAULT
> number of eigenvalues   6
> the maximum dimension allowed for the projected problem   PETSC_DEFAULT
> -eps_non_hermitian 
> 
> number of processors   result          solve time
> 16                     -302.06881196   526
> 32                     -302.06881892   224
> 64                     -302.06881989   139
> 128                    -302.06881236   122
> 256                    -302.06881938   285
> 512                    -302.06881029   510
> 640                    -302.06882377   291



[petsc-users] [SLEPc] With the increasing of processors, result change in 1e-5 and solving time increase. krylovschur for MATSBAIJ matrix's smallest eigenvalue

2023-07-12 Thread Runfeng Jin
Hi,
 When I try to increase the number of processors used to solve the same
matrix (to acquire the smallest eigenvalue), I find that all the results
differ from each other within about 1e-5 (though ||Ax-kx||/||kx|| reaches
1e-8 in every case), and the solve time first decreases and then increases.
My questions are:
(1) Is there any way to make the result more consistent?
(2) Does the solve time stop decreasing because the matrix dimension is too
small for so many processors? Is there any way to reduce the solve time when
increasing the number of processors?

Thank you!
Runfeng


matrix type  MATSBAIJ
matrix dimension  2078802
solver krylovschur
blocksize 1
dimension of the subspace   PETSC_DEFAULT
number of eigenvalues   6
the maximum dimension allowed for the projected problem   PETSC_DEFAULT
-eps_non_hermitian

number of processors   result          solve time
16                     -302.06881196   526
32                     -302.06881892   224
64                     -302.06881989   139
128                    -302.06881236   122
256                    -302.06881938   285
512                    -302.06881029   510
640                    -302.06882377   291