On Mon, Nov 25, 2019, at 00:23 CST, vachan potluri
<[email protected]> wrote:
> I was able to reproduce this behaviour with the following code (also
> attached); the CMakeLists file is also attached. The code hangs after
> printing 'Scaled variable 0'.
The problem is the communication pattern:
90   for (auto &cell : dof_handler.active_cell_iterators()) {
91     if (!(cell->is_locally_owned())) continue;
92
93     cell->get_dof_indices(dof_ids);
97     for (uint var = 0; var < 4; var++) {
101      vecs[var].compress(VectorOperation::insert);
102    }
106    for (uint var = 0; var < 4; var++) {
110      vecs[var].compress(VectorOperation::add);
111    }
115    for (uint var = 0; var < 4; var++) {
119      vecs[var].compress(VectorOperation::add);
120    }
121  } // loop over owned cells
The call to compress() is a collective operation: every MPI rank has to
participate in it. But here you are calling compress() inside a loop over
the active cells of a distributed triangulation. The number of locally
owned cells generally differs between ranks, so the ranks end up calling
compress() a different number of times, and the program hangs as soon as
one rank enters a compress() that the other ranks never reach.
All your MPI communication should instead happen outside the cell loop, so
that every rank executes it the same number of times:
for (auto &cell : dof_handler.active_cell_iterators()) {
  // do something, but no compress() in here
} /* end of for loop */
[...].compress(...); // communicate -- reached once on every rank
Best,
Matthias
--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see
https://groups.google.com/d/forum/dealii?hl=en
---
You received this message because you are subscribed to the Google Groups
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To view this discussion on the web visit
https://groups.google.com/d/msgid/dealii/8736eckmmj.fsf%4043-1.org.