Do you need Fortran? If not, just run configure again, adding --with-fc=0
--with-sowing=0. If you need Fortran, send configure.log.
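For example, reusing the configure line quoted later in this thread (the other
flags are just carried over from that message; --with-fortran-kernels=1 should
also be dropped since it needs a Fortran compiler), the Fortran-free rerun
would look roughly like

  ./configure PETSC_ARCH=config-debug --with-scalar-type=complex \
    --with-debugging=0 --with-logging=0 --with-fc=0 --with-sowing=0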
> On Apr 29, 2024, at 3:45 PM, Yongzhong Li wrote:
>
> Hi Barry,
>
> Thanks for your reply. I checked out the git branch
> barry/2023-09-15/fix-log-pcmpi but got some errors when configuring PETSc.
Hi Barry,
Thanks for your reply. I checked out the git branch
barry/2023-09-15/fix-log-pcmpi but got some errors when configuring PETSc;
below is the error message:
You should use the git branch barry/2023-09-15/fix-log-pcmpi. It is still a
work in progress but much better than what is currently in the main PETSc
branch.
By default, the MPI linear solver server requires 10,000 unknowns per MPI
process, so for smaller problems it will only run on one MPI process.
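As a rough sketch of how the ex1 tutorial would then be launched in server
mode (the option names are my recollection of the PCMPI manual page, in
particular -mpi_linear_solver_server_minimum_count_per_rank as the knob that
lowers the 10,000-unknown threshold, so please double-check them with -help):

  mpiexec -n 4 ./ex1 -mpi_linear_solver_server -mpi_linear_solver_server_view \
    -mpi_linear_solver_server_minimum_count_per_rank 5000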
Hi Barry,
Thanks, I am interested in this PCMPI solution provided by PETSc!
I tried src/ksp/ksp/tutorials/ex1.c, which is configured in my CMakeLists as
follows:
./configure PETSC_ARCH=config-debug --with-scalar-type=complex \
  --with-fortran-kernels=1 --with-debugging=0 --with-logging=0
On Tue, Apr 23, 2024 at 4:00 PM Yongzhong Li wrote:
> Thanks Barry! Does this mean that the sparse matrix-vector products, which
> actually constitute the majority of the computations in my GMRES routine in
> PETSc, don’t utilize multithreading? Only basic vector operations such as
> VecAXPY and VecDot or dense matrix operations in PETSc will benefit?
Yes, only the routines that can explicitly use BLAS have multi-threading.
PETSc does support using any MPI linear solvers from a sequential (or OpenMP)
main program using the MPI linear solver server (PCMPI).
No, I think sparse matrix-vector products (MatMult in PETSc) can be
accelerated with multithreading. PETSc does not do that, but one can use
other libraries, such as MKL, for that.
--Junchao Zhang
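As an illustration (my sketch, assuming your PETSc build was configured
against MKL so that the aijmkl matrix type is available), MKL's threaded
sparse kernels can be tried for MatMult without code changes by switching the
matrix type at runtime:

  MKL_NUM_THREADS=8 ./ex1 -mat_type aijmkl

MatMult then goes through MKL's sparse BLAS, which can use threads.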
On Tue, Apr 23, 2024 at 3:00 PM Yongzhong Li wrote:
> Thanks Barry! Does this mean that the sparse matrix-vector products, which
> actually constitute the majority of the computations in my GMRES routine in
> PETSc, don't utilize multithreading? Only basic vector operations such as
> VecAXPY and VecDot or dense matrix operations in PETSc will benefit?
Intel MKL or OpenBLAS are the best bet, but for vector operations the gain
will not be significant, since the vector operations do not dominate the
computations.
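For reference (an assumption on my part, not from the original message):
selecting MKL as the BLAS/LAPACK is normally done at configure time, e.g.

  ./configure ... --with-blaslapack-dir=$MKLROOT

with MKLROOT set by Intel's environment scripts; for OpenBLAS I believe
--download-openblas is the corresponding shortcut.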
> On Apr 23, 2024, at 3:23 PM, Yongzhong Li wrote:
>
> Hi Barry,
>
> Thank you for the information provided!
>
> Do you think
Hi Barry,
Thank you for the information provided!
Do you think a different BLAS implementation will affect the multithreading
performance of some vector operations in GMRES in PETSc?
I am now using OpenBLAS but didn't see much improvement when multithreading
is enabled; do you think
PETSc-provided solvers do not directly use threads.
The BLAS used by LAPACK and PETSc may use threads, depending on which BLAS is
being used and how it was configured.
Some of the vector operations in GMRES in PETSc use BLAS that can use
threads, including axpy, dot, etc.
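As an illustration (mine, not from the original message), the BLAS thread
count is controlled by an environment variable that depends on which BLAS the
build uses, e.g.

  export OPENBLAS_NUM_THREADS=8   # OpenBLAS
  export MKL_NUM_THREADS=8        # Intel MKL
  export OMP_NUM_THREADS=8        # BLAS builds driven by OpenMP

Note that OpenBLAS only honors this if it was built with threading enabled.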