Dear Daniel,

Thanks a lot. I tried the MPI_MAXLOC approach and it works.
Unfortunately, though, there is a slight problem I haven't been able to
fully understand yet.

int disp_size = triangulation.n_vertices();

struct
{
  double value;
  int    rank;
} in[disp_size], out[disp_size];

typename DoFHandler<dim>::active_cell_iterator cell = dof_handler.begin_active(),
                                               endc = dof_handler.end();
std::vector<unsigned int> local_dof_indices(fe.dofs_per_cell);
unsigned int cell_number = 0;

for (; cell != endc; ++cell, ++cell_number)
  {
    if (cell->is_locally_owned())
      {
        cell->get_dof_indices(local_dof_indices);

        // Walk the x/y DoF pairs of the cell and store the displacement
        // norm at the corresponding vertex.
        for (unsigned int i = 0, j = 1; i < fe.dofs_per_cell; i += 2, j += 2)
          {
            unsigned int v = i / dim;

            double displacement_x = locally_relevant_solution(local_dof_indices[i]);
            double displacement_y = locally_relevant_solution(local_dof_indices[j]);

            double displacement_norm = std::sqrt(displacement_x * displacement_x
                                                 + displacement_y * displacement_y);

            in[cell->vertex_index(v)].value = displacement_norm;
            in[cell->vertex_index(v)].rank  = Utilities::MPI::this_mpi_process(mpi_com);
          }
      }
  }

MPI_Reduce(in, out, disp_size, MPI_DOUBLE_INT, MPI_MAXLOC, 0, mpi_com);

std::vector<double> maximum_disp_norms(disp_size);
std::vector<int>    disp_ranks(disp_size), disp_nodes(disp_size);
double max_displacement;
int    max_index, max_rank, max_node_id;

if (Utilities::MPI::this_mpi_process(mpi_com) == 0)
  {
    for (int i = 0; i < disp_size; ++i)
      {
        maximum_disp_norms[i] = out[i].value;
        disp_ranks[i]         = out[i].rank;
        disp_nodes[i]         = i;

        // Work around the spurious large values coming out of the reduction.
        if (out[i].value > 1e10)
          {
            maximum_disp_norms[i] = 0;
            disp_ranks[i]         = 0;
          }
      }

    // Maximum norm of displacements
    max_displacement = *std::max_element(maximum_disp_norms.begin(),
                                         maximum_disp_norms.end());
    max_index = std::distance(maximum_disp_norms.begin(),
                              std::max_element(maximum_disp_norms.begin(),
                                               maximum_disp_norms.end()));
    max_rank    = disp_ranks[max_index];
    max_node_id = disp_nodes[max_index];
  }

// Maximum displacement position
cell        = dof_handler.begin_active();
endc        = dof_handler.end();
cell_number = 0;

for (; cell != endc; ++cell, ++cell_number)
  {
    if (cell->is_locally_owned())
      {
        // This is just a test:
        if (Utilities::MPI::this_mpi_process(mpi_com) == max_rank)
          std::cout << Utilities::MPI::this_mpi_process(mpi_com) << std::endl;

        for (unsigned int v = 0; v < GeometryInfo<dim>::vertices_per_cell; ++v)
          {
            if (cell->vertex_index(v) == max_node_id)
              {
                // std::cout << "VERTEX ID: " << cell->vertex_index(v)
                //           << " WITH COORDINATES: " << cell->vertex(v) << std::endl;
              }
          }
      }
  }

Now the funny part: if I set max_rank manually, e.g. max_rank = 3, it works 
and I receive output in the terminal for the matching rank. Another thing is 
that MPI_Reduce produces some very large values somewhere, which is why I 
worked around it by zeroing out values greater than 1e10. Finding the maximum 
value is placed inside the if ( Utilities::MPI::this_mpi_process(mpi_com) == 0 ) 
block, so I can check whether the result depends on the processor I compute 
it on. Surprisingly, this works only in the first step; for higher loading 
steps I receive no output. If I instead use the 3rd processor as the root of 
the reduction, i.e. MPI_Reduce(in, out, disp_size, MPI_DOUBLE_INT, 
MPI_MAXLOC, *3*, mpi_com), and then check max_rank again, it works. I am a 
bit confused...

What could be the error here?


Kind regards,
Seyed Ali


-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.