Hi All,
I want to collect MUMPS memory estimates based on the initial
symbolic factorization analysis before the actual numerical factorization
starts, to check whether the estimated memory requirements fit within the
available memory.
I am following the steps from
Thanks a lot, all! So given that there's still some debate about whether we
should even use MATPREALLOCATOR or a better integration of that hash logic,
as in Issue 852, I'll proceed with simply aping what DMDA does (with
apologies for all this code duplication).
One thing I had missed, which I
Hi Samar,
Thanks for your suggestion. Unfortunately, it does not work. I checked the
mpif90 wrapper and the option "-Wl,-flat_namespace" is present.
(base) ➜ bin ./mpif90 -show
ifort -I/Users/danyangsu/Soft/PETSc/petsc-3.16.3/macos-intel-dbg/include
-Wl,-flat_namespace
Hello Barry,
To answer your question, both eigenvectors contain only two distinct values:
the entries differ between the two eigenvectors, but they are consistent with
which sub-domain each entry belongs to.
However, I was able to get the same behavior of MatTestNullSpace using the
PCFactorSetMatSolverType(pc,MATSOLVERMUMPS);
PCFactorSetUpMatSolverType(pc);
PCFactorGetMatrix(pc,&F);
MatLUFactorSymbolic(F,A,...)
You must provide row and column permutations, etc.;
petsc/src/mat/tests/ex125.c may give you a clue on how to get these inputs.
Hong
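
For reference, a minimal sketch of the sequence above (untested; error
checking omitted, ksp, pc, and A are assumed to already exist, and the
natural ordering is used as a placeholder for the permutations):

  Mat           F;
  IS            isrow,iscol;
  MatFactorInfo info;
  PetscInt      estMaxMB,estTotalMB;

  KSPGetPC(ksp,&pc);
  PCSetType(pc,PCLU);
  PCFactorSetMatSolverType(pc,MATSOLVERMUMPS);
  PCFactorSetUpMatSolverType(pc);             /* creates the factor matrix F */
  PCFactorGetMatrix(pc,&F);
  MatGetOrdering(A,MATORDERINGNATURAL,&isrow,&iscol);
  MatFactorInfoInitialize(&info);
  MatLUFactorSymbolic(F,A,isrow,iscol,&info); /* analysis phase only */
  MatMumpsGetInfog(F,16,&estMaxMB);   /* INFOG(16): est. max memory over ranks, MB */
  MatMumpsGetInfog(F,17,&estTotalMB); /* INFOG(17): est. total memory, MB */

The INFOG(16)/INFOG(17) indices are the analysis-phase memory estimates
documented in the MUMPS manual.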
Calling PCSetUp() before KSPSetUp()?
--Junchao Zhang
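
That is, roughly this ordering (a sketch; A and error checking omitted):

  KSPSetOperators(ksp,A,A);
  KSPGetPC(ksp,&pc);
  PCSetUp(pc);   /* sets up the factorization, so the MUMPS statistics
                    can be queried before KSPSetUp()/KSPSolve() */
  KSPSetUp(ksp);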
On Wed, Jan 12, 2022 at 3:00 AM Varun Hiremath wrote:
> Hi All,
>
> I want to collect MUMPS memory estimates based on the initial
> symbolic factorization analysis before the actual numerical factorization
> starts, to check whether the estimated memory requirements fit within the
> available memory.
Hi Danyang,
I had trouble configuring PETSc on MacOS Monterey with ifort when using mpich
(which I was building myself). I tracked it down to an errant
"-Wl,-flat_namespace”
option in the mpif90 wrapper. I rebuilt mpich with the
"--enable-two-level-namespace” configuration option and the
Hi All,
I got an error configuring PETSc on macOS Monterey with Intel
oneAPI using the following options:
./configure --with-cc=icc --with-cxx=icpc --with-fc=ifort
--with-blas-lapack-dir=/opt/intel/oneapi/mkl/2022.0.0/lib/
--with-debugging=1 PETSC_ARCH=macos-intel-dbg --download-mumps
Dear PETSc Team:
Hi! I'm working on a parallel version of a PETSc script that I wrote in serial
using DMPlex. After calling DMPlexDistribute() each rank is assigned its own
DAG where the points are numbered locally. For example, if I split a 100-cell
mesh over 4 processors, each process ends up with roughly 25 cells, numbered
locally starting from zero.
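
For what it's worth, a minimal sketch (untested, PETSc 3.16-style error
handling) of distributing a DMPlex built from options and recovering a
global point numbering, assuming that is the mapping you are after; entries
for points owned by another rank come back encoded as -(global+1):

  #include <petscdmplex.h>

  int main(int argc,char **argv)
  {
    DM             dm,dmDist = NULL;
    IS             globalNum;
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc,&argv,NULL,NULL);if (ierr) return ierr;
    ierr = DMCreate(PETSC_COMM_WORLD,&dm);CHKERRQ(ierr);
    ierr = DMSetType(dm,DMPLEX);CHKERRQ(ierr);
    ierr = DMSetFromOptions(dm);CHKERRQ(ierr);  /* e.g. -dm_plex_box_faces 10,10 */
    ierr = DMPlexDistribute(dm,0,NULL,&dmDist);CHKERRQ(ierr);
    if (dmDist) {ierr = DMDestroy(&dm);CHKERRQ(ierr); dm = dmDist;}
    /* One global number per point in the local DAG */
    ierr = DMPlexCreatePointNumbering(dm,&globalNum);CHKERRQ(ierr);
    ierr = ISView(globalNum,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
    ierr = ISDestroy(&globalNum);CHKERRQ(ierr);
    ierr = DMDestroy(&dm);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }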