Hello Arthur,

the code for serialization and transfer across refinements for hp discretizations has been implemented for serial, parallel::shared, and parallel::distributed Triangulations. At the time it was implemented, the parallel::fullydistributed Triangulation did not exist yet, so the convenient interface of `DoFHandler::prepare_for_serialization_of_active_fe_indices()` and `DoFHandler::deserialize_active_fe_indices()` has never been adapted to it. Essentially, we would need to make CellDataTransfer available to all Triangulation types, which would require a significant amount of work (see also #15280 <https://github.com/dealii/dealii/issues/15280>).
For the moment and for your application, I would suggest using the MPI interface to write the FE indices to disk directly via `MPI_File_write_at()` and `MPI_File_read_at()`. You might be able to use the DoFHandler interface to get and set the active FE indices (`DoFHandler::get_active_fe_indices()` & `DoFHandler::set_active_fe_indices()`); a rough sketch of this approach is appended below the quoted message.

Let me know if that helps!
Best,
Marc

On Monday, January 26, 2026 at 10:16:13 PM UTC+1 [email protected] wrote:

> Hello deal.II community,
>
> I wrote a small application which uses a fully distributed (simplicial)
> mesh, PETSc wrappers, and handles a Lagrange multiplier on a boundary
> with the hp machinery. With the non-hp version of this solver, I can
> checkpoint the current and previous solutions and restart without any
> issue, as done in step-83, but the restart fails in the hp version.
>
> As I understand it, I also have to serialize the FE indices with
> dof_handler.prepare_for_serialization_of_active_fe_indices(), but this
> function is only implemented for distributed triangulations.
>
> I have attached a minimal example which compares a checkpoint/restart
> for both the non-hp and hp settings, for quads and simplices. On my end,
> it succeeds on a single MPI rank, but either reads mismatched vectors or
> segfaults on more ranks, without showing a stack trace in debug mode. I
> tried to reproduce the exact same behavior as in the solver, but they
> differ slightly and I'm not sure why. In both cases, they fail in
> restart(): the attached example fails in
> SolutionTransfer::deserialize(...) after successfully loading the
> triangulation, in interpolate(...) from solution_transfer.templates.h,
> whereas the actual solver fails when loading the triangulation and
> throws an std::bad_alloc at line 1051 of grid/tria.cc in the current
> master (dest_data_variable.resize(size_on_proc);).
>
> Assuming the issue is indeed that I need to serialize/deserialize the FE
> indices, is there a workaround that would work for fully distributed
> triangulations? I saw that
> prepare_for_serialization_of_active_fe_indices() uses CellDataTransfer,
> which does not seem to exist for fully distributed meshes, so I'm
> assuming it is not that trivial.
>
> Thank you for your time,
> Arthur
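P.S. Here is the rough, untested sketch of the MPI I/O approach I mentioned above. It assumes the checkpoint is restored with the same number of MPI ranks and the same partition of the parallel::fullydistributed::Triangulation, so that each rank enumerates its locally owned active cells in the same order at write and read time. The function names `save_active_fe_indices`/`load_active_fe_indices` and the file layout are my own choices; for simplicity it loops over the locally owned cells with the per-cell `active_fe_index()`/`set_active_fe_index()` accessors instead of the whole-vector DoFHandler calls.

```cpp
#include <deal.II/dofs/dof_handler.h>

#include <mpi.h>

#include <string>
#include <vector>

template <int dim>
void save_active_fe_indices(const dealii::DoFHandler<dim> &dof_handler,
                            const MPI_Comm                 comm,
                            const std::string             &filename)
{
  // Collect the active FE indices of the locally owned cells, in the
  // order of the active cell iterators.
  std::vector<unsigned int> indices;
  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->is_locally_owned())
      indices.push_back(cell->active_fe_index());

  // The offset of this rank's chunk in the file is the total number of
  // indices written by all lower ranks (exclusive prefix sum).
  const unsigned long long local_size = indices.size();
  unsigned long long       offset     = 0;
  MPI_Exscan(&local_size, &offset, 1, MPI_UNSIGNED_LONG_LONG, MPI_SUM, comm);
  int rank = 0;
  MPI_Comm_rank(comm, &rank);
  if (rank == 0) // MPI_Exscan leaves the result on rank 0 undefined
    offset = 0;

  MPI_File file;
  MPI_File_open(comm,
                filename.c_str(),
                MPI_MODE_CREATE | MPI_MODE_WRONLY,
                MPI_INFO_NULL,
                &file);
  MPI_File_write_at(file,
                    static_cast<MPI_Offset>(offset * sizeof(unsigned int)),
                    indices.data(),
                    static_cast<int>(indices.size()),
                    MPI_UNSIGNED,
                    MPI_STATUS_IGNORE);
  MPI_File_close(&file);
}

template <int dim>
void load_active_fe_indices(dealii::DoFHandler<dim> &dof_handler,
                            const MPI_Comm           comm,
                            const std::string       &filename)
{
  // Count the locally owned active cells to know how much to read.
  unsigned long long local_size = 0;
  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->is_locally_owned())
      ++local_size;

  unsigned long long offset = 0;
  MPI_Exscan(&local_size, &offset, 1, MPI_UNSIGNED_LONG_LONG, MPI_SUM, comm);
  int rank = 0;
  MPI_Comm_rank(comm, &rank);
  if (rank == 0)
    offset = 0;

  std::vector<unsigned int> indices(local_size);
  MPI_File file;
  MPI_File_open(comm, filename.c_str(), MPI_MODE_RDONLY, MPI_INFO_NULL, &file);
  MPI_File_read_at(file,
                   static_cast<MPI_Offset>(offset * sizeof(unsigned int)),
                   indices.data(),
                   static_cast<int>(local_size),
                   MPI_UNSIGNED,
                   MPI_STATUS_IGNORE);
  MPI_File_close(&file);

  // Hand the indices back to the locally owned cells, in the same
  // iteration order used when writing. This has to happen after the
  // triangulation has been loaded and before distribute_dofs().
  std::size_t c = 0;
  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->is_locally_owned())
      cell->set_active_fe_index(indices[c++]);
}
```

Note that this file layout depends on the number of ranks and the partition; a restart that is independent of the rank count would need to key the indices to global cell ids instead. After loading, call `DoFHandler::distribute_dofs()` as usual; to my knowledge it also takes care of making the active FE indices on ghost cells consistent.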
