Re: [deal.II] transitioning from parallel::shared::Triangulation to parallel::distributed::Triangulation

2017-07-07 Thread Wolfgang Bangerth


Alberto,

I am bothering you today with a code I have been writing in the 
parallel::shared::Triangulation framework and its transition to the 
parallel::distributed::Triangulation framework.


The shared version works just fine, and so does the distributed version when 
run with a single process. I therefore gather that the numerics work OK, but 
since I am having issues with more than one process, I am likely not 
understanding some fundamentals of the 
parallel::distributed::Triangulation framework.



In particular it seems that a call of type


std::vector< Vector<double> > old_solution_values ( quadrature_formula.size(),
                                                    Vector<double>(dofs.GPDofs) );


fe_values.get_function_values ( locally_accumulated_displacement,
                                old_solution_values );


produces NaNs that propagate and make the code fail.


Are you running in debug or release mode? The former may give you an error 
already at an earlier time.



I wonder if this fact 
is due to DoFs that are not locally owned within cells that are indeed owned 
locally.


That is correct -- not all DoFs on locally owned cells are locally owned DoFs. 
That's because some DoFs are located on vertices or faces between processor 
subdomains and they can only be owned by one of the two processors. That's 
exactly the difference between "locally owned" and "locally active" DoFs -- 
take a look at the glossary entry on these two terms.
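
For illustration, the two index sets can be compared directly -- a minimal 
sketch, assuming 'dof_handler' is your distributed DoFHandler:

IndexSet locally_owned_set  = dof_handler.locally_owned_dofs ();
IndexSet locally_active_set;
DoFTools::extract_locally_active_dofs (dof_handler, locally_active_set);

// DoFs that live on cells this process owns but that are owned
// by a neighboring process:
IndexSet active_but_not_owned = locally_active_set;
active_but_not_owned.subtract_set (locally_owned_set);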


Given the code snippet above, I wonder whether 
locally_accumulated_displacement is a vector that only has the locally owned 
dofs, or whether it is a vector that has at least the locally active elements 
(or maybe also the ghost/locally relevant elements). If you call 
fe_values.get_function_values() on a locally owned cell, the 
locally_accumulated_displacement vector needs to have at least the locally 
active vector elements.
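
In other words, you would typically keep a ghosted copy around and evaluate 
that one on the cells -- a minimal sketch, reusing the member variables from 
your class and assuming locally_owned_dofs and locally_relevant_dofs have 
been filled in setup_system():

LA::MPI::Vector ghosted_displacement (locally_owned_dofs,
                                      locally_relevant_dofs,
                                      mpi_communicator);
ghosted_displacement = locally_accumulated_displacement; // imports ghost values

fe_values.get_function_values ( ghosted_displacement,
                                old_solution_values );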


Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/

--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups "deal.II User Group" group.

To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[deal.II] transitioning from parallel::shared::Triangulation to parallel::distributed::Triangulation

2017-07-06 Thread Alberto Salvadori


Hi deal.ii community,


I am bothering you today with a code I have been writing in the 
parallel::shared::Triangulation framework and its transition to the 
parallel::distributed::Triangulation framework.

The shared version works just fine, and so does the distributed version when 
run with a single process. I therefore gather that the numerics work OK, but 
since I am having issues with more than one process, I am likely not 
understanding some fundamentals of the 
parallel::distributed::Triangulation 
framework.


In particular it seems that a call of type


std::vector< Vector<double> > old_solution_values ( quadrature_formula.size(),
                                                    Vector<double>(dofs.GPDofs) );

fe_values.get_function_values ( locally_accumulated_displacement,
                                old_solution_values );



produces NaNs that propagate and make the code fail. I wonder if this is due 
to DoFs that are not locally owned within cells that are indeed locally 
owned.

If that is the case, my implementation is not appropriate and I wonder how 
to do it properly.

More details follow.


I appreciate your help as usual.


Alberto





The code is a standard three-field mechanics formulation at small strains, 
for which I have implemented a Newton-Raphson scheme. The interesting part of 
the class definition is:



template <int dim>

class SmallStrainMechanicalProblem

{

public:

  

  SmallStrainMechanicalProblem ( const std::string, const unsigned = 1, const unsigned = 1 );

  ~SmallStrainMechanicalProblem ();

  

  void run ( unsigned, bool, bool=false );


private:



  // Main methods

...

 

  // quadrature point integrators

...

  


  // MPI data structure

  

  MPI_Comm mpi_communicator;

  

  const unsigned int n_mpi_processes;

  const unsigned int this_mpi_process;

  

  ConditionalOStream pcout;

  

  std::vector<types::global_dof_index> local_dofs_per_process;

  std::vector<types::global_dof_index> starting_index_per_process;

  std::vector<types::global_dof_index> ending_index_per_process;


  IndexSet locally_owned_dofs;

  IndexSet locally_relevant_dofs;

  unsigned int n_local_cells;


  // FEM member variables

  

  parallel::distributed::Triangulation<dim>   triangulation;

  DoFHandler<dim>  dof_handler;

  

  FESystem<dim>  fe;

  

  ConstraintMatrix hanging_node_constraints;

  

  const QGauss<dim>    quadrature_formula;

  const QGauss<dim-1>  face_quadrature_formula;

  

  LA::MPI::SparseMatrix system_matrix;

  LA::MPI::Vector   locally_relevant_solution;         // stores owned elements and also ghost entries. Initialized in setup_system().

  LA::MPI::Vector   locally_incremental_displacement;  // stores owned elements only. Initialized in setup_system(), in refine_initial_grid() and in create_coarse_grid()

  LA::MPI::Vector   locally_accumulated_displacement;  // stores owned elements only. Initialized in setup_system(), in refine_initial_grid() and in create_coarse_grid()

  LA::MPI::Vector   system_rhs;

  

  // Constitutive laws

  ...

  // IO

  ...

  debug_parallel_IO dbgIO;

  

};


Here is the constructor:


template <int dim>

SmallStrainMechanicalProblem<dim>::SmallStrainMechanicalProblem (
                                             const std::string pathstr,
                                             const unsigned poly_degree,
                                             const unsigned printEveryNSteps)

:

mpi_communicator (MPI_COMM_WORLD),

n_mpi_processes (Utilities::MPI::n_mpi_processes(mpi_communicator)),

this_mpi_process (Utilities::MPI::this_mpi_process(mpi_communicator)),

pcout (std::cout, this_mpi_process == 0),

starting_index_per_process( n_mpi_processes ),

ending_index_per_process( n_mpi_processes ),

triangulation(mpi_communicator,

              typename Triangulation<dim>::MeshSmoothing

              (Triangulation<dim>::smoothing_on_refinement |

               Triangulation<dim>::smoothing_on_coarsening)),

dof_handler (triangulation),

fe(FE_Q<dim>(poly_degree), dim, // displacement

   FE_DGPMonomial<dim>(poly_degree - 1), 1, // pressure

   FE_DGPMonomial<dim>(poly_degree - 1), 1), // dilatation

quadrature_formula (poly_degree + 1),

face_quadrature_formula (4),

PrintEveryNSteps( printEveryNSteps ),

mmIO( pathstr,  this_mpi_process ),

IO( pathstr + "IO",  this_mpi_process ),

dbgIO( pathstr,  this_mpi_process )

{

...

}


the setup:



template <int dim>

void SmallStrainMechanicalProblem<dim>::setup_system ()

{


  

  dof_handler.distribute_dofs (fe);

  

  locally_owned_dofs = dof_handler.locally_owned_dofs();

  DoFTools::extract_locally_relevant_dofs (dof_handler, locally_relevant_dofs);

  

  local_dofs_per_process = dof_handler.n_locally_owned_dofs_per_processor();

  

  starting_index_per_process[ 0 ] = 0;

  ending_index_per_process[ 0 ] = local_dofs_per_process[ 0 ] - 1;

  for (unsigned ii=1; ii< n_mpi_processes; ii++)

  {

    starting_index_per_process[ ii ] = local_dofs_per_process[ ii-1 ] + starting_index_per_process[ ii-1 ];

ending_index_per_process[ ii ] =