Thanks, Wolfgang. I'll look into it some more and eventually try with a
larger problem.
On Tuesday, January 30, 2024 at 11:18:01 PM UTC-5 Wolfgang Bangerth wrote:
>
> Alex:
>
> > I am running a solid-mechanics job where I import a large mesh and run
> > using parallel fullydistributed with [...]
Dear deal.II community,
I've had relative success running with parallel::fullydistributed, but now
I've encountered some strange preconditioner/solver behavior.
I am running a solid-mechanics job where I import a large mesh and run
using parallel::fullydistributed with MPI. I had been trying to run my job
using a CG solver with the BoomerAMG preconditioner (based on the example
in step-40). I ran my mesh with 116,000 nodes and the solver took [...]
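Roughly, the solver setup looks like the following (only a sketch along the
lines of step-40 with the PETSc wrappers; the function signature and the
tolerance are placeholders, not the exact values from the actual code):

#include <deal.II/lac/petsc_precondition.h>
#include <deal.II/lac/petsc_solver.h>
#include <deal.II/lac/petsc_sparse_matrix.h>
#include <deal.II/lac/petsc_vector.h>
#include <deal.II/lac/solver_control.h>

using namespace dealii;

// Solve the (symmetric positive definite) linear system with CG
// preconditioned by BoomerAMG, as in step-40. Assembly, constraints,
// and vector setup are omitted here.
void solve_system(const PETScWrappers::MPI::SparseMatrix &system_matrix,
                  PETScWrappers::MPI::Vector             &solution,
                  const PETScWrappers::MPI::Vector       &system_rhs)
{
  PETScWrappers::PreconditionBoomerAMG::AdditionalData amg_data;
  amg_data.symmetric_operator = true; // the elasticity operator is symmetric

  PETScWrappers::PreconditionBoomerAMG preconditioner;
  preconditioner.initialize(system_matrix, amg_data);

  SolverControl solver_control(system_matrix.m(),
                               1e-12 * system_rhs.l2_norm());
  PETScWrappers::SolverCG cg(solver_control);

  cg.solve(system_matrix, solution, system_rhs, preconditioner);
}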
Thanks for your response, Wolfgang.
Your last sentence seems to be my solution. I was planning to use
parallel::fullydistributed due to the large sizes of my imported meshes
(per a suggestion on a previous post of mine:
https://groups.google.com/g/dealii/c/V5HH2pZ0Kow). I wanted to run [...]
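The workflow I am looking at is, as far as I understand it, the one from the
parallel::fullydistributed::Triangulation documentation, roughly as sketched
below (the dimension, the file name, and the plain read_abaqus() call are
placeholders for my actual setup):

#include <deal.II/base/mpi.h>
#include <deal.II/distributed/fully_distributed_tria.h>
#include <deal.II/grid/grid_in.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/grid/tria.h>
#include <deal.II/grid/tria_description.h>

#include <fstream>

using namespace dealii;

// Build a parallel::fullydistributed::Triangulation from an imported mesh.
// Note: the serial mesh is read on every rank here, which is exactly the
// memory cost that becomes a problem for large meshes.
template <int dim>
void make_fully_distributed(
  parallel::fullydistributed::Triangulation<dim> &tria,
  const MPI_Comm                                   comm)
{
  Triangulation<dim> serial_tria;
  GridIn<dim>        grid_in(serial_tria);
  std::ifstream      input("mesh.inp"); // placeholder file name
  grid_in.read_abaqus(input);

  // Assign each coarse cell to an MPI rank ...
  GridTools::partition_triangulation(
    Utilities::MPI::n_mpi_processes(comm), serial_tria);

  // ... then hand each process only its local cells plus ghosts.
  const auto description =
    TriangulationDescription::Utilities::create_description_from_triangulation(
      serial_tria, comm);

  tria.create_triangulation(description);
}

If reading the whole serial mesh on every rank becomes the bottleneck,
TriangulationDescription::Utilities::create_description_from_triangulation_in_groups()
should allow keeping the serial mesh on only one rank per group, if I read
the documentation correctly.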
Alex:
You've hit on one of those bugs that every once in a while someone trips over,
but that nobody really ever takes/has the time to fully debug. For sure, we
would love to get some help with this issue, and the github bug report already
contains a relatively small test case that should [...]
Dear deal.II community,
I am working with a mesh that is imported via a modified version of
GridIn::read_abaqus(). I'm able to run my mesh and job with
parallel::shared without any issues.
However, when I go to use parallel::distributed, I run into an issue with
p8est connectivity:
void [...]
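For context, the pattern that triggers the p8est connectivity construction is
roughly the following (a sketch only; the dimension and file name are
placeholders, and the plain read_abaqus() call stands in for my modified
reader):

#include <deal.II/distributed/tria.h>
#include <deal.II/grid/grid_in.h>

#include <fstream>

using namespace dealii;

// Read the Abaqus mesh directly into a parallel::distributed::Triangulation.
// The coarse mesh is handed to p8est when the triangulation is created,
// which is typically where connectivity errors surface.
void read_mesh(const MPI_Comm comm)
{
  parallel::distributed::Triangulation<3> triangulation(comm);

  GridIn<3> grid_in;
  grid_in.attach_triangulation(triangulation);

  std::ifstream input("mesh.inp"); // placeholder file name
  grid_in.read_abaqus(input);
}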