Arthur:
> I am getting segmentation faults when using WorkStream::run and 2+
> threads when the worker function involves reinitializing a scratch_data
> and interpolating the current FE solution with FEValues, for instance:
>
>   fe_values[pressure].get_function_values(current_solution,
>                                           present_pressure_values);
Is the worker function resizing a global vector? Scratch objects are
supposed to be self-contained with everything a worker needs, and workers
are not supposed to *change* anything that isn't part of the scratch
object. They may only *read* data that lives outside the scratch object.

So the question here is where `present_pressure_values` lives. Is it
shared between threads?
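
To illustrate, here is a minimal sketch (the names are placeholders, not
taken from your code) of the arrangement that works: the vector that
get_function_values() fills is a member of the scratch object, so every
thread gets its own copy, and only the solution vector is read from
outside:

#include <deal.II/base/quadrature.h>
#include <deal.II/fe/fe.h>
#include <deal.II/fe/fe_values.h>
#include <vector>

using namespace dealii;

template <int dim>
struct ScratchData
{
  ScratchData(const FiniteElement<dim> &fe,
              const Quadrature<dim>    &quadrature)
    : fe_values(fe, quadrature, update_values | update_JxW_values)
    , present_pressure_values(quadrature.size())
  {}

  // WorkStream copies the scratch object once per thread, so the copy
  // constructor has to build a new FEValues rather than share one:
  ScratchData(const ScratchData &other)
    : fe_values(other.fe_values.get_fe(),
                other.fe_values.get_quadrature(),
                other.fe_values.get_update_flags())
    , present_pressure_values(other.present_pressure_values)
  {}

  FEValues<dim>       fe_values;
  std::vector<double> present_pressure_values;
};

// Inside the worker, everything that is written to belongs to 'scratch':
//   scratch.fe_values.reinit(cell);
//   scratch.fe_values[pressure].get_function_values(
//     current_solution, scratch.present_pressure_values);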
> The current solution is a distributed PETSc vector. Overall, and
> following this post (https://groups.google.com/g/dealii/c/Jvt36NOXM4o/m/
> tytRf3N9f4gJ), does it still hold that multithreaded matrix/rhs assembly
> with the PETSc wrappers is not thread safe? (I think so, since PETSc
> itself is still documented as not thread safe, but I'm asking in case I
> missed something.)
I believe you should be able to *read* from multiple threads in
parallel, but not write. Either way, it would be useful to see the
backtrace you get from your segfaults.
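
For reference, the pattern that should be safe looks roughly like the
sketch below. The names are placeholders (dof_handler, constraints,
system_matrix, fe, and so on stand for whatever your program already has),
current_solution is assumed to be a ghosted vector so the workers only
read from it, and the important point is that all writes into the global
PETSc matrix and right-hand side happen in the copier, which WorkStream
runs on one thread at a time:

#include <deal.II/base/work_stream.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/vector.h>
#include <vector>

struct CopyData
{
  CopyData(const unsigned int dofs_per_cell)
    : cell_matrix(dofs_per_cell, dofs_per_cell)
    , cell_rhs(dofs_per_cell)
    , local_dof_indices(dofs_per_cell)
  {}

  FullMatrix<double>                   cell_matrix;
  Vector<double>                       cell_rhs;
  std::vector<types::global_dof_index> local_dof_indices;
};

// ... inside a dim-templated assemble_system():
WorkStream::run(
  dof_handler.begin_active(),
  dof_handler.end(),
  // worker: runs on several threads; reads global data,
  // writes only into 'scratch' and 'copy'
  [&](const typename DoFHandler<dim>::active_cell_iterator &cell,
      ScratchData<dim> &scratch,
      CopyData         &copy)
  {
    scratch.fe_values.reinit(cell);
    scratch.fe_values[pressure].get_function_values(
      current_solution, scratch.present_pressure_values);
    // ... fill copy.cell_matrix and copy.cell_rhs ...
    cell->get_dof_indices(copy.local_dof_indices);
  },
  // copier: run on one thread at a time; the only place that writes
  // into the global PETSc matrix and right-hand side
  [&](const CopyData &copy)
  {
    constraints.distribute_local_to_global(copy.cell_matrix,
                                           copy.cell_rhs,
                                           copy.local_dof_indices,
                                           system_matrix,
                                           system_rhs);
  },
  ScratchData<dim>(fe, quadrature),
  CopyData(fe.n_dofs_per_cell()));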
> I am currently testing with Mumps (through PETSc): if I'm not mistaken,
> the other possibilities to use it with distributed matrices/vectors and
> threaded assembly are deal.II's own distributed vectors + standalone
> Mumps, or the Trilinos wrappers + Mumps through Amesos/Amesos2. Is that
> correct?
"...to use it..." -- what is "it" here?
Best
W.