>
> However, when I run the command "mpirun -np 5 ./MPI_TEST", I get the 
> following message,
>
> Hello world from process 0 of 1
> Hello world from process 0 of 1
> Hello world from process 0 of 1
> Hello world from process 0 of 1
> Hello world from process 0 of 1
>
> It seems that 5 CPU cores are used; however, they all seem to have the 
> same rank.
> Also, my operating system is Ubuntu 16.04. Looking forward to hearing 
> from you soon. Thank you!
>

Hi Yang Liu,

Try the following code (explanations below):

#include <deal.II/base/mpi.h>
#include <deal.II/base/conditional_ostream.h>

#include <iostream>

using namespace dealii;

template <int dim>
class MPI_TEST
{
public:
  MPI_TEST();
  void print_test();

private:
  MPI_Comm           mpi_communicator;
  ConditionalOStream pcout; // forwards output only on rank 0
};

template <int dim>
MPI_TEST<dim>::MPI_TEST()
  : mpi_communicator(MPI_COMM_WORLD)
  , pcout(std::cout,
          (Utilities::MPI::this_mpi_process(mpi_communicator) == 0))
{}

template <int dim>
void MPI_TEST<dim>::print_test()
{
  // This loop runs on *every* process, so with n processes you get
  // n * n lines of output in total.
  for (unsigned int i = 0;
       i < Utilities::MPI::n_mpi_processes(mpi_communicator);
       ++i)
    {
      std::cout << "Hello world from process "
                << Utilities::MPI::this_mpi_process(mpi_communicator)
                << " of "
                << Utilities::MPI::n_mpi_processes(mpi_communicator)
                << std::endl;
    }
}

int main(int argc, char **argv)
{
  // Initializes MPI here and finalizes it automatically when this
  // object goes out of scope at the end of main().
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

  MPI_TEST<2> hw_mpi;
  hw_mpi.print_test();

  return 0;
}

The output of mpirun -np 3 ./MPI_TEST is:

Hello world from process 1 of 3
Hello world from process 1 of 3
Hello world from process 1 of 3
Hello world from process 0 of 3
Hello world from process 0 of 3
Hello world from process 0 of 3
Hello world from process 2 of 3
Hello world from process 2 of 3
Hello world from process 2 of 3

This looks weird if you are not used to MPI, but it is exactly what the 
code asks for: each of the 3 processes executes the loop 3 times, so you 
get 9 lines in total, interleaved in whatever order the processes happen 
to reach std::cout.

1. You must create the Utilities::MPI::MPI_InitFinalize object before you 
invoke any MPI functionality. It calls MPI_Init upon construction and 
MPI_Finalize in its destructor, i.e., when the object goes out of scope at 
the end of main(). Only then do Utilities::MPI::this_mpi_process() and 
n_mpi_processes() report the real rank and process count; without 
initialization they fall back to 0 and 1, which is exactly the "process 0 
of 1" you saw: mpirun started 5 copies of your program, and each one ran 
as an independent single process. (A plain-MPI sketch of what this object 
does follows after point 2.)

2. When you write MPI code it is important to get into the right mindset: 
you are not programming code for one compute node only; the same program 
runs on all processes at the same time. In your example the for loop is 
therefore executed by every process (3 of them here), and each iteration 
prints one line, giving the 3 x 3 = 9 lines above. If you want each line 
printed only once, see the second sketch below.
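
To make point 1 concrete, here is roughly what MPI_InitFinalize does, 
written in plain MPI (just a sketch for intuition; the real deal.II class 
also sets up thread limits and, if configured, PETSc/Trilinos, so prefer 
it over hand-rolling this):

#include <mpi.h>
#include <iostream>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);   // what the constructor does

  // Only after MPI_Init do rank and size have their real values.
  int rank = 0, size = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  std::cout << "Hello from process " << rank
            << " of " << size << std::endl;

  MPI_Finalize();           // what the destructor does
  return 0;
}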

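And to make point 2 concrete: this is what the ConditionalOStream member 
pcout in the class above is for. It is constructed so that it forwards 
output on rank 0 and swallows it on every other rank. A minimal variant of 
print_test() using it (my suggestion; your original code never actually 
uses pcout) prints exactly once, no matter how many processes you start:

template <int dim>
void MPI_TEST<dim>::print_test()
{
  // pcout discards output on every rank except 0, so this line
  // appears once in total rather than once per process.
  pcout << "Hello world from "
        << Utilities::MPI::n_mpi_processes(mpi_communicator)
        << " processes" << std::endl;
}
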
Hope that helps.

Best,
Konrad
