Hi,

I'm solving a topology optimization problem with Stokes flow discretized by a
stabilized Q1-Q0 finite element method, and I use BiCGStab with a fieldsplit
preconditioner to solve the linear systems. The implementation is based on
DMStag, runs on Ubuntu via WSL2, and works fine with PETSc 3.18.1 on multiple
CPU cores with the following preconditioner options:

-fieldsplit_0_ksp_type preonly \
-fieldsplit_0_pc_type gamg \
-fieldsplit_0_pc_gamg_reuse_interpolation 0 \
-fieldsplit_1_ksp_type preonly \
-fieldsplit_1_pc_type jacobi

However, when I enable GPU computations by adding two options -

...
-dm_vec_type cuda \
-dm_mat_type aijcusparse \
-fieldsplit_0_ksp_type preonly \
-fieldsplit_0_pc_type gamg \
-fieldsplit_0_pc_gamg_reuse_interpolation 0 \
-fieldsplit_1_ksp_type preonly \
-fieldsplit_1_pc_type jacobi

- KSP still works fine for the first couple of topology optimization
iterations, but then stops with "Linear solve did not converge due to
DIVERGED_DTOL ..".
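To compare the CPU and GPU runs, I've been looking at PETSc's standard convergence monitoring. A sketch of the invocation (the executable name and process count are placeholders for my actual setup):

```shell
# Hypothetical invocation; ./stokes_topopt stands in for the real binary.
# -ksp_converged_reason reports why the outer solve stopped, and
# -ksp_monitor_true_residual prints per-iteration true residual norms,
# which can be diffed between the CPU and GPU runs.
mpiexec -n 4 ./stokes_topopt \
  -dm_vec_type cuda \
  -dm_mat_type aijcusparse \
  -ksp_converged_reason \
  -ksp_monitor_true_residual \
  -fieldsplit_0_ksp_type preonly \
  -fieldsplit_0_pc_type gamg \
  -fieldsplit_0_pc_gamg_reuse_interpolation 0 \
  -fieldsplit_1_ksp_type preonly \
  -fieldsplit_1_pc_type jacobi
```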

My question is whether I should expect the GPU versions of the linear solvers
and preconditioners to behave exactly like their CPU counterparts (I got this
impression from the documentation), in which case I've probably made some
mistake in my own code, or whether there are other or additional settings or
modifications I should use to run on the GPU (an NVIDIA Quadro T2000)?

Kind regards,

Carl-Johan
