Hi, Philip,
I could reproduce the error. I need to find a way to debug it. Thanks.
/home/jczhang/xolotl/test/system/SystemTestCase.cpp(317): fatal error: in
"System/PSI_1": absolute value of diffNorm{0.19704848134353209} exceeds
1e-10
*** 1 failure is detected in the test module "Regression"
Thanks, yes, I figured out the OMP_NUM_THREADS=1 way while triaging it, and
the --download-fblaslapack way occurred to me.
I was hoping for something that "just worked" (refuse to build in this
case) but I don't know if it's programmatically possible for petsc to tell
whether or not it's linking
There would need to be, for example, some symbol in all the threaded BLAS
libraries that is not in the unthreaded libraries. Or at least in some of the
threaded libraries but never in the unthreaded ones.
BlasLapack.py could check for the special symbol(s) to determine.
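A rough sketch of what such a check could look like. The marker symbol names below are assumptions drawn from common threaded BLAS builds (OpenBLAS, GotoBLAS2, MKL); a real BlasLapack.py test would need to confirm which symbols reliably distinguish the threaded builds.

```python
# Sketch: classify a BLAS library as threaded by looking for marker symbols.
# The symbol names are assumptions, not a verified list.
THREADED_MARKERS = {
    "openblas_set_num_threads",  # OpenBLAS built with threading
    "goto_set_num_threads",      # GotoBLAS2
    "mkl_set_num_threads",       # Intel MKL threaded layer
}

def looks_threaded(exported_symbols):
    """Return True if any known threading-control symbol is exported."""
    return not THREADED_MARKERS.isdisjoint(exported_symbols)
```

In practice the exported symbols could be gathered with `nm` or `ctypes`; a sequential reference BLAS (e.g. the one from --download-fblaslapack) would export none of these markers.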
Barry
Thank you for the help.
I think the last piece of the puzzle is how do I create the "expanded IS"
from the subpoint IS using the section?
Sincerely
Nicholas
On Wed, Dec 7, 2022 at 7:06 AM Matthew Knepley wrote:
On Wed, Dec 7, 2022 at 9:21 PM Nicholas Arnold-Medabalimi <
narno...@umich.edu> wrote:
> Thank you for the help.
>
> I think the last piece of the puzzle is how do I create the "expanded IS"
> from the subpoint IS using the section?
>
Loop over the points in the IS. For each point, get the dof
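As a plain-Python analog of that loop (illustrative only, not the PETSc API): model the section as a map from point to (offset, dof), and collect every dof index covered by the points of the sub-point IS.

```python
# Plain-Python analog of the suggested loop. In PETSc proper the offset and
# dof per point would come from PetscSectionGetOffset()/PetscSectionGetDof(),
# and the result would go into ISCreateGeneral(); here the section is just
# a dict {point: (offset, dof)}.
def expand_is(points, section):
    expanded = []
    for p in points:                      # loop over the points in the IS
        offset, dof = section[p]          # offset and dof for this point
        expanded.extend(range(offset, offset + dof))
    return expanded
```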
I ran into an unexpected issue -- on an NP-core machine, each MPI rank of
my application was launching NP threads, such that when running with
multiple ranks the machine was quickly oversubscribed and performance
tanked.
The root cause of this was petsc linking against the system-provided
library
If you don't specify a blas to use - petsc configure will guess and use what it
can find.
So the only way to force it to use a particular blas is to specify one [one way
is --download-fblaslapack]
Wrt multi-threaded openblas - you can force it to run single-threaded [with one
of these 2 env variables]
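For instance, from a Python driver the override can be applied before the numerical libraries load. OMP_NUM_THREADS=1 was mentioned earlier in this thread; OPENBLAS_NUM_THREADS is OpenBLAS's own library-specific variable. (Whether these are the two variables meant above is an assumption.)

```python
import os

# Force a threaded OpenBLAS to run single-threaded. These must be set
# before the library is loaded, i.e. before importing anything that
# links against BLAS.
os.environ["OMP_NUM_THREADS"] = "1"       # honored by an OpenMP-built OpenBLAS
os.environ["OPENBLAS_NUM_THREADS"] = "1"  # OpenBLAS-specific override
```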
We don't have configure code to detect if the BLAS is thread parallel, nor do
we have code to tell it not to use a thread parallel version.
The exception is MKL: if PETSc is using MKL, we do force it not to use the
threaded BLAS.
A "cheat" would be for you to just set the environment variable BLAS
It isn't always wrong to link threaded BLAS. For example, a user might need to
call threaded BLAS on the side (but the application can only link one) or a
sparse direct solver might want threading for the supernode. We could test at
runtime whether child threads exist/are created when calling
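The runtime idea can be sketched as a before/after comparison of the live thread count around a call. In C one would count native threads (e.g. entries in /proc/self/task); the Python analog below uses threading.active_count() purely to illustrate the approach, and only catches threads that persist after the call (the usual worker-pool pattern of a threaded BLAS), not threads joined inside it.

```python
import threading

def spawns_threads(fn, *args):
    """Rough runtime check: does calling fn leave extra live threads behind?

    Compares the live thread count before and after the call, so it only
    detects threads that outlive the call, e.g. a persistent worker pool.
    """
    before = threading.active_count()
    fn(*args)
    return threading.active_count() > before
```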
Hi Matthew
Thank you for the help. This clarified a great deal.
I have a follow-up question related to DMPlexFilter. It may be better to
describe what I'm trying to achieve.
I have a general mesh I am solving which has a section with cell center
finite volume states, as described in my initial
On Wed, Dec 7, 2022 at 5:13 AM 김성익 wrote:
> I want to use METIS for ordering.
> I heard that MUMPS has good performance with METIS ordering.
>
> However, there are some things I wonder about.
> 1. With option "-mpi_linear_solver_server -ksp_type preonly -pc_type mpi
> -mpi_pc_type lu " the MUMPS solving
Hi
Thank you so much for your patience. One thing to note: I don't have any
need to go from the filtered distributed mapping back to the full one, but
it is good to know.
One aside question.
1) Are the natural and global orderings the same in this context?
As far as implementing what you have
I want to use METIS for ordering.
I heard that MUMPS has good performance with METIS ordering.
However, there are some things I wonder about.
1. With option "-mpi_linear_solver_server -ksp_type preonly -pc_type mpi
-mpi_pc_type lu " the MUMPS solving is slower than with option
"-mpi_linear_solver_server
On Wed, Dec 7, 2022 at 3:35 AM Nicholas Arnold-Medabalimi <
narno...@umich.edu> wrote:
> Hi Matthew
>
> Thank you for the help. This clarified a great deal.
>
> I have a follow-up question related to DMPlexFilter. It may be better to
> describe what I'm trying to achieve.
>
> I have a general
I think I don't understand the meaning of
-pc_type mpi
-mpi_pc_type lu
What's the exact meaning of -pc_type mpi and -mpi_pc_type lu?
Does this difference come from the 'mpi_linear_solver_server' option?
Thanks,
Hyung Kim
On Wed, Dec 7, 2022 at 8:05 PM, Matthew Knepley wrote:
> On Wed, Dec 7, 2022 at
On Wed, Dec 7, 2022 at 6:51 AM Nicholas Arnold-Medabalimi <
narno...@umich.edu> wrote:
> Hi
>
> Thank you so much for your patience. One thing to note: I don't have any
> need to go from the filtered distributed mapping back to the full one, but
> it is good to know.
>
> One aside question.
> 1)
On Wed, Dec 7, 2022 at 5:03 AM 김성익 wrote:
> This error was caused by an inconsistent index count passed to VecGetValues
> in the mpirun case.
>
> For example, for a problem where the global vector size is 4, when running
> mpirun -np 2, the number of values each process should obtain with
> VecGetValues is 2, but in
This error was caused by an inconsistent index count passed to VecGetValues
in the mpirun case.
For example, for a problem where the global vector size is 4, when running
mpirun -np 2, the number of values each process should obtain with
VecGetValues is 2, but my code tried to get 4 values, so it became a problem.
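The per-rank count follows PETSc's default layout when the local size is left to PETSC_DECIDE: the global size is split as evenly as possible, with the first global_size % comm_size ranks taking one extra entry. A small sketch of that split (a mirror of the documented default; the authoritative values come from VecGetLocalSize/VecGetOwnershipRange):

```python
def default_local_size(global_size, comm_size, rank):
    """PETSc-style default layout: split the global size as evenly as
    possible, with the first (global_size % comm_size) ranks taking one
    extra entry."""
    base, extra = divmod(global_size, comm_size)
    return base + (1 if rank < extra else 0)
```

So with a global size of 4 and mpirun -np 2, each rank owns 2 entries, and VecGetValues on each rank should request exactly those 2.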
On Wed, Dec 7, 2022 at 6:15 AM 김성익 wrote:
> I think I don't understand the meaning of
> -pc_type mpi
>
This option says to use the PCMPI preconditioner. This allows you to
parallelize the
solver in what is otherwise a serial code.
> -mpi_pc_type lu
>
This tells the underlying solver in PCMPI
Following your comments,
I used the command below
mpirun -np 4 ./app -ksp_type preonly -pc_type mpi -mpi_linear_solver_server
-mpi_pc_type lu -mpi_pc_factor_mat_solver_type mumps -mpi_mat_mumps_icntl_7
5 -mpi_ksp_view
so the output is as below
KSP Object: (mpi_) 1 MPI process
type: gmres
On Wed, Dec 7, 2022 at 8:15 AM 김성익 wrote:
> Following your comments,
> I used the command below
> mpirun -np 4 ./app -ksp_type preonly -pc_type mpi
> -mpi_linear_solver_server -mpi_pc_type lu -mpi_pc_factor_mat_solver_type
> mumps -mpi_mat_mumps_icntl_7 5 -mpi_ksp_view
>
>
> so the output is as