Re: [petsc-users] Running PETSc/SLEPc tests inside/outside a singularity container

2021-01-13 Thread Satish Balay via petsc-users
On Wed, 13 Jan 2021, Jan Grießer via petsc-users wrote: > Hello all, I have a question about the tests for PETSc/PETSc4Py and SLEPc4Py when these libraries are compiled inside a Singularity container (The .def file for compilation is attached). The compilation of the libraries is no

Re: [petsc-users] counter->tag = *maxval - 128

2021-01-13 Thread Barry Smith
Yes, definitely not easy to debug. If you have concerns in your code about running out of tags and then producing bugs, you can fix it by keeping an array of returned tags and reusing those tags when needed. You'll also have to add something like PetscObject/CommRestoreTag() to return ones no
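
For context, a minimal sketch (not PETSc-internal code) of how tags are handed out: PetscCommDuplicate() attaches the counter and returns a first tag counted down from MPI_TAG_UB (the *maxval in the subject line), and PetscCommGetNewTag() hands out further ones. The PetscObject/CommRestoreTag() Barry mentions would be new; it does not exist in the current API. Written against the ~3.14-era C API:

  #include <petscsys.h>

  int main(int argc, char **argv)
  {
    PetscErrorCode ierr;
    MPI_Comm       pcomm;
    PetscMPIInt    first_tag, tag;
    void          *attrval;
    int            flag, maxval = 0;

    ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
    /* MPI_TAG_UB is the *maxval the tag counter starts from */
    MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_TAG_UB, &attrval, &flag);
    if (flag) maxval = *(int *)attrval;
    ierr = PetscPrintf(PETSC_COMM_WORLD, "MPI_TAG_UB = %d\n", maxval);CHKERRQ(ierr);

    /* PetscCommDuplicate() attaches PETSc's tag counter and returns the first tag */
    ierr = PetscCommDuplicate(PETSC_COMM_WORLD, &pcomm, &first_tag);CHKERRQ(ierr);
    /* further tags come from PetscCommGetNewTag(); tags only count down, they are never handed back */
    ierr = PetscCommGetNewTag(pcomm, &tag);CHKERRQ(ierr);
    ierr = PetscPrintf(PETSC_COMM_WORLD, "first tag %d, next tag %d\n", first_tag, tag);CHKERRQ(ierr);

    ierr = PetscCommDestroy(&pcomm);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }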

Re: [petsc-users] DMPlex with dof in cells AND at nodes

2021-01-13 Thread Thibault Bridel-Bertomeu
Hello Matt, thank you for the follow-up. Here are the printouts for the two DMs as you suggested (the smallest tetrahedral mesh possible in a cube).
:: [DEBUG] Visualizing DM in console ::
DM Object: 2 MPI processes
  type: plex
DM_0x7faa62d416f0_1 in 3 dimensions:
  0-cells: 14 14
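
For reference, a minimal sketch (not Thibault's attached code) of the pattern under discussion: a PetscSection that places dofs both in cells and at vertices of a DMPlex. The mesh, and the dof counts of 5 per cell and 1 per vertex, are made up for illustration, written against the ~3.14-era C API:

  #include <petscdmplex.h>

  int main(int argc, char **argv)
  {
    DM             dm;
    PetscSection   s;
    PetscInt       pStart, pEnd, cStart, cEnd, vStart, vEnd, p;
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
    /* small interpolated tetrahedral box mesh, just to have cells and vertices to hang dofs on */
    ierr = DMPlexCreateBoxMesh(PETSC_COMM_WORLD, 3, PETSC_TRUE, NULL, NULL, NULL, NULL, PETSC_TRUE, &dm);CHKERRQ(ierr);

    ierr = DMPlexGetChart(dm, &pStart, &pEnd);CHKERRQ(ierr);
    ierr = DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd);CHKERRQ(ierr); /* cells    */
    ierr = DMPlexGetDepthStratum(dm, 0, &vStart, &vEnd);CHKERRQ(ierr);  /* vertices */
    ierr = PetscSectionCreate(PetscObjectComm((PetscObject)dm), &s);CHKERRQ(ierr);
    ierr = PetscSectionSetChart(s, pStart, pEnd);CHKERRQ(ierr);
    for (p = cStart; p < cEnd; ++p) {ierr = PetscSectionSetDof(s, p, 5);CHKERRQ(ierr);} /* cell-centered dofs */
    for (p = vStart; p < vEnd; ++p) {ierr = PetscSectionSetDof(s, p, 1);CHKERRQ(ierr);} /* nodal dofs         */
    ierr = PetscSectionSetUp(s);CHKERRQ(ierr);
    ierr = DMSetLocalSection(dm, s);CHKERRQ(ierr); /* vectors created from dm now carry both sets of dofs */
    ierr = PetscSectionDestroy(&s);CHKERRQ(ierr);

    ierr = DMDestroy(&dm);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }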

Re: [petsc-users] counter->tag = *maxval - 128

2021-01-13 Thread Fande Kong
On Tue, Jan 12, 2021 at 6:49 PM Barry Smith wrote: > Fande, /* hope that any still active tags were issued right at the beginning of the run */ PETSc actually starts with *maxval (see line 130). It is only when it runs out that it does this silly thing for the reason

[petsc-users] PetscAllreduceBarrierCheck is valgrind clean?

2021-01-13 Thread Fande Kong
Hi all, I ran valgrind with mvapich-2.3.5 for a MOOSE simulation. The motivation was that we have a few non-deterministic parallel simulations in MOOSE, and I wanted to check whether we have any memory issues. I got some complaints from PetscAllreduceBarrierCheck. Thanks, Fande ==98001== 88 (24

Re: [petsc-users] PetscAllreduceBarrierCheck is valgrind clean?

2021-01-13 Thread Barry Smith
Fande, Look at https://scm.mvapich.cse.ohio-state.edu/svn/mpi/mvapich2/trunk/src/mpid/ch3/channels/common/src/detect/arch/mv2_arch_detect.c: cpubind_set = hwloc_bitmap_alloc(); but I don't find a corresponding hwloc_bitmap_free(cpubind_set); in get_socket_bound_info().
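
The allocation/free pairing Barry is pointing at, as a standalone hwloc sketch (this is not the actual mvapich2 code in get_socket_bound_info(), just the pattern):

  #include <hwloc.h>

  int main(void)
  {
    hwloc_topology_t topology;
    hwloc_bitmap_t   cpubind_set;

    hwloc_topology_init(&topology);
    hwloc_topology_load(topology);

    cpubind_set = hwloc_bitmap_alloc();                              /* allocated here ...        */
    hwloc_get_cpubind(topology, cpubind_set, HWLOC_CPUBIND_PROCESS); /* query the process binding */
    /* ... inspect the bitmap to detect the socket/core binding ... */
    hwloc_bitmap_free(cpubind_set);                                  /* ... and the matching free that appears to be missing upstream */

    hwloc_topology_destroy(topology);
    return 0;
  }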

[petsc-users] Running PETSc/SLEPc tests inside/outside a singularity container

2021-01-13 Thread Jan Grießer via petsc-users
Hello all, I have a question about the tests for PETSc/PETSc4Py and SLEPc4Py when these libraries are compiled inside a Singularity container (The .def file for compilation is attached). The compilation of the libraries is no problem; the issue only appears when you call "make all check" for PETSc and "make check"

Re: [petsc-users] Running PETSc/SLEPc tests inside/outside a singularity container

2021-01-13 Thread Jose E. Roman
> On 13 Jan 2021, at 18:17, Satish Balay via petsc-users wrote: > On Wed, 13 Jan 2021, Jan Grießer via petsc-users wrote: >> Hello all, I have a question about the tests for PETSc/PETSc4Py and SLEPc4Py when these libraries are compiled inside a Singularity container (The

Re: [petsc-users] PetscAllreduceBarrierCheck is valgrind clean?

2021-01-13 Thread Fande Kong
On Wed, Jan 13, 2021 at 11:49 AM Barry Smith wrote: > Fande, Look at https://scm.mvapich.cse.ohio-state.edu/svn/mpi/mvapich2/trunk/src/mpid/ch3/channels/common/src/detect/arch/mv2_arch_detect.c: cpubind_set = hwloc_bitmap_alloc(); but I don't find a corresponding

[petsc-users] Understanding global and local rows in a distributed dense matrix

2021-01-13 Thread Roland Richter
Hi, I am currently struggling a bit with my understanding of local and global rows/columns in distributed dense matrices in PETSc. The attached short program creates a local matrix in every process with [matrix_size_rows, matrix_size_cols] as the global size. Afterwards, I flatten the matrix

Re: [petsc-users] Understanding global and local rows in a distributed dense matrix

2021-01-13 Thread Stefano Zampini
MATMPIDENSE does not implement any cyclic distribution. In parallel, a dense matrix is split by rows: each process owns localrows*globalcols entries. Local sizes should be understood as the sizes of the right and left vectors used in matvec operations, and are not strictly related to storage
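
A small sketch of what Stefano describes (names and sizes made up for illustration): let PETSc pick the row split with PETSC_DECIDE, then ask the matrix what it actually owns instead of assuming a layout. Written against the ~3.14-era C API:

  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    Mat            A;
    PetscInt       M = 8, N = 5; /* global size */
    PetscInt       rstart, rend, mlocal, nlocal;
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;

    /* rows are distributed across processes; each rank stores localrows*globalcols entries */
    ierr = MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, M, N, NULL, &A);CHKERRQ(ierr);

    ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr); /* global rows [rstart, rend) live on this rank   */
    ierr = MatGetLocalSize(A, &mlocal, &nlocal);CHKERRQ(ierr);    /* sizes of the left/right vectors, not the storage */
    ierr = PetscSynchronizedPrintf(PETSC_COMM_WORLD, "rows %D..%D (local size %D x %D)\n", rstart, rend, mlocal, nlocal);CHKERRQ(ierr);
    ierr = PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT);CHKERRQ(ierr);

    ierr = MatDestroy(&A);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }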

Re: [petsc-users] DMPlex with dof in cells AND at nodes

2021-01-13 Thread Matthew Knepley
On Tue, Jan 12, 2021 at 4:12 PM Thibault Bridel-Bertomeu <thibault.bridelberto...@gmail.com> wrote: > Good evening Matthew, thank you for your answer, and sorry for the delay in mine! I am trying to figure things out, but when one gets into the distribution of a DMPlex, it becomes quite

Re: [petsc-users] Understanding global and local rows in a distributed dense matrix

2021-01-13 Thread Matthew Knepley
On Wed, Jan 13, 2021 at 8:26 AM Stefano Zampini wrote: > MATMPIDENSE does not implement any cyclic distribution. In parallel, a dense matrix is split by rows: each process owns localrows*globalcols entries. Local sizes should be understood as the sizes of the right and left vectors used in