On Wed, 13 Jan 2021, Jan Grießer via petsc-users wrote:
> Hello all,
> I have a question about the tests for PETSc/PETSc4Py and SLEPc4Py when these
> libraries are compiled inside a Singularity container (The .def file for
> compilation is attached). The compilation of the libraries is no
Yes, definitely not easy to debug. If you have concerns in your code about
running out and then producing bugs, you can fix it by keeping an array of
returned tags and reusing these tags when needed. You'll also have to add
something like PetscObject/CommRestoreTag() to return ones no longer needed.
Hello Matt,
Thank you for the follow up.
Here are the printouts for the two DMs as you suggested (the smallest
tetrahedral mesh possible in a cube).
:: [DEBUG] Visualizing DM in console ::
DM Object: 2 MPI processes
type: plex
DM_0x7faa62d416f0_1 in 3 dimensions:
0-cells: 14 14
On Tue, Jan 12, 2021 at 6:49 PM Barry Smith wrote:
>
>Fande,
>
>/* hope that any still active tags were issued right at the beginning
> of the run */
>
>PETSc actually starts with *maxval (see line 130). It is only when it
> runs out that it does this silly thing for the reason
Hi All,
I ran valgrind with mvapich-2.3.5 for a MOOSE simulation. The motivation
was that we have a few non-deterministic parallel simulations in MOOSE, and I
wanted to check whether we have any memory issues. I got some complaints from
PetscAllreduceBarrierCheck
Thanks,
Fande
==98001== 88 (24
Fande,
Look at
https://scm.mvapich.cse.ohio-state.edu/svn/mpi/mvapich2/trunk/src/mpid/ch3/channels/common/src/detect/arch/mv2_arch_detect.c
cpubind_set = hwloc_bitmap_alloc();
but I don't find a corresponding hwloc_bitmap_free(cpubind_set); in
get_socket_bound_info().
Hello all,
I have a question about the tests for PETSc/PETSc4Py and SLEPc4Py when these
libraries are compiled inside a Singularity container (The .def file for
compilation is attached). Compiling the libraries is no problem; the issue only
appears when you call "make all check" for PETSc and "make check"
> On 13 Jan 2021, at 18:17, Satish Balay via petsc-users
> wrote:
>
> On Wed, 13 Jan 2021, Jan Grießer via petsc-users wrote:
>
>> Hello all,
>> I have a question about the tests for PETSc/PETSc4Py and SLEPc4Py when these
>> libraries are compiled inside a Singularity container (The
On Wed, Jan 13, 2021 at 11:49 AM Barry Smith wrote:
>
> Fande,
>
> Look at
> https://scm.mvapich.cse.ohio-state.edu/svn/mpi/mvapich2/trunk/src/mpid/ch3/channels/common/src/detect/arch/mv2_arch_detect.c
>
> cpubind_set = hwloc_bitmap_alloc();
>
> but I don't find a corresponding
Hi,
I am currently struggling a bit with my understanding of local and
global rows/columns in distributed dense matrices in PETSc.
The attached short program creates a local matrix in every thread
with [matrix_size_rows, matrix_size_cols] as the global size. Afterwards, I
flatten the matrix
MATMPIDENSE does not implement any cyclic distribution. In parallel, a
dense matrix is split by rows. Each process owns localrows*globalcols
entries. Local sizes should be understood as the sizes of the right and left
vectors used in matvec operations, and are not strictly related to
storage
On Tue, Jan 12, 2021 at 4:12 PM Thibault Bridel-Bertomeu <
thibault.bridelberto...@gmail.com> wrote:
> Good evening Matthew,
>
> Thank you for your answer, and sorry for the delay in mine! I am trying
> to figure things out, but when one gets into the distribution of a DMPlex, it
> becomes quite
On Wed, Jan 13, 2021 at 8:26 AM Stefano Zampini
wrote:
> MATMPIDENSE does not implement any cyclic distribution. In parallel, a
> dense matrix is split by rows. Each process owns localrows*globalcols
> entries. Local sizes are to be intended as the size of the right and left
> vectors used in