Re: [deal.II] Re: step-77 in parallel - SUNDIALS::KINSOL

2022-06-29 Thread Bruno Turcksin
Jose,

First, I have never used KINSOL, so I don't know what it requires. That
being said, this call

x.reinit(locally_relevant_dofs, MPI_COMM_WORLD);

is wrong. Either you only need the locally_owned_dofs, in which case you
write

x.reinit(locally_owned_dofs, MPI_COMM_WORLD);

or you also need the locally_relevant_dofs, in which case you write

x.reinit(locally_owned_dofs, locally_relevant_dofs, MPI_COMM_WORLD);

The problem with locally_relevant_dofs is that a single degree of freedom
can appear on multiple processors, so you also need locally_owned_dofs to
know which processor owns that degree of freedom. This is basically what
the error message is telling you: there is an overlap of the locally owned
indices, i.e., two processors own the same index.
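
As a minimal sketch (assuming the usual step-32-style setup with Trilinos
vectors; "dof_handler" here just stands in for your own object):

#include <deal.II/base/index_set.h>
#include <deal.II/base/mpi.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/lac/trilinos_vector.h>

void reinit_examples(const dealii::DoFHandler<2> &dof_handler)
{
  const dealii::IndexSet locally_owned_dofs =
    dof_handler.locally_owned_dofs();
  dealii::IndexSet locally_relevant_dofs;
  dealii::DoFTools::extract_locally_relevant_dofs(dof_handler,
                                                  locally_relevant_dofs);

  // Purely distributed vector: every index is owned by exactly one process.
  dealii::TrilinosWrappers::MPI::Vector distributed;
  distributed.reinit(locally_owned_dofs, MPI_COMM_WORLD);

  // Ghosted vector: also stores the locally relevant (ghost) entries; the
  // owned/ghost split is made explicit by passing both index sets.
  dealii::TrilinosWrappers::MPI::Vector ghosted;
  ghosted.reinit(locally_owned_dofs, locally_relevant_dofs, MPI_COMM_WORLD);
}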

Best,

Bruno

On Wed, Jun 29, 2022 at 05:03, jose.a...@gmail.com wrote:

> Hi Bruno,
>
> sorry for the late response. I get the following error:
>
> An error occurred in line <1469> of file
> 
> in function
> dealii::IndexSet
> dealii::TrilinosWrappers::MPI::Vector::locally_owned_elements() const
> The violated condition was:
> owned_elements.size() == size()
> Additional information:
> The locally owned elements have not been properly initialized! This
> happens for example if this object has been initialized with exactly
> one overlapping IndexSet.
>
> at the bolded line in this code snippet of kinsol.cc
>
> template <typename VectorType>
> unsigned int
> KINSOL<VectorType>::solve(VectorType &initial_guess_and_solution)
> {
>   unsigned int system_size = initial_guess_and_solution.size();
>   // The solution is stored in
>   // solution. Here we take only a
>   // view of it.
> # ifdef DEAL_II_WITH_MPI
>   if (is_serial_vector<VectorType>::value == false)
>     {
>       *const IndexSet is =
>         initial_guess_and_solution.locally_owned_elements();*
>       const unsigned int local_system_size = is.n_elements();
>       solution =
>         N_VNew_Parallel(communicator, local_system_size, system_size);
>       u_scale = N_VNew_Parallel(communicator, local_system_size, system_size);
>       N_VConst_Parallel(1.e0, u_scale);
>       f_scale = N_VNew_Parallel(communicator, local_system_size, system_size);
>       N_VConst_Parallel(1.e0, f_scale);
>     }
>   else
>
> which is expected given how I reinit my solution vector and my
> distributed vectors:
>
> solution.reinit(locally_relevant_dofs,
>                 MPI_COMM_WORLD);
>
> distributed_vector.reinit(locally_owned_dofs,
>                           locally_relevant_dofs,
>                           MPI_COMM_WORLD,
>                           true);
>
> Nonetheless, this is the standard reinit call I use in my parallel
> applications, following the scheme of step-32. I thought it might have
> had to do with the reinit lambda passed to KINSOL:
>
> nonlinear_solver.reinit_vector =
>   [&](dealii::LinearAlgebraTrilinos::MPI::Vector &x)
>   {
>     x.reinit(locally_relevant_dofs, MPI_COMM_WORLD);
>   };
>
> but I get the above error even if I use the reinit call for distributed
> vectors.
>
> So my question is: how should one reinit the vectors in parallel
> applications using KINSOL? As a side question, how should one reinit the
> vector in the reinit_vector lambda: as the solution vector or as a
> distributed vector?
>
> Attached is the source code for the modified step-77.
>
> Cheers,
> Jose
>
> On Thursday, June 9, 2022 at 2:53:56 PM UTC+2 bruno.t...@gmail.com wrote:
>
>> Jose,
>>
>> What's the error that you get? Is there a message? Is it a segfault?
>>
>> Best,
>>
>> Bruno
>>
>> On Wednesday, June 8, 2022 at 1:03:06 PM UTC-4 jose.a...@gmail.com wrote:
>>
>>> Dear dealii community,
>>>
>>> I am solving a nonlinear problem that has a regularized sign function.
>>> The classic Newton-Raphson method converges too slowly near the limit of
>>> the regularization parameter. Before writing a line search algorithm, I
>>> wanted to try the KINSOL solver out.
>>>
>>> I modified step-77 using the parallel scheme I have always used, but I
>>> get an error when running in parallel, apparently at line 386 of
>>> kinsol.cc, due to how I reinit my solution vector. I follow the reinit
>>> commands used in step-32:
>>>
>>> solution.reinit(locally_relevant_dofs,
>>>                 MPI_COMM_WORLD);
>>>
>>> distributed_vector.reinit(locally_owned_dofs,
>>>                           locally_relevant_dofs,
>>>                           MPI_COMM_WORLD,
>>>                           true);
>>>
>>> It is not clear to me which reinit I should use in this case. I tried
>>> the second reinit method with the flag set to false for the solution
>>> vector, as seen in another step, but I then get an access error during
>>> the WorkStream::run call.
>>>
>>> Attached is the aforementioned modification of step-77.
>>>
>>> Cheers,
>>> Jose
>>>

[deal.II] deal.II Newsletter #217

2022-06-29 Thread 'Rene Gassmoeller' via deal.II User Group
Hello everyone!

This is deal.II newsletter #217.
It automatically reports recently merged features and discussions about the 
deal.II finite element library.


## Below you find a list of recently proposed or merged features:

#14078: Remove remaining deprecations (proposed by masterleinad) 
https://github.com/dealii/dealii/pull/14078

#14077: Fix warning about possibly uninitialized variables (proposed by 
kronbichler) https://github.com/dealii/dealii/pull/14077

#14076: Use faster data structure to compute matrix-free renumbering (proposed 
by kronbichler) https://github.com/dealii/dealii/pull/14076

#14075: MGTransferBlockMatrixFree: allow to copy from/to other type of block 
vector (proposed by peterrum; merged) 
https://github.com/dealii/dealii/pull/14075

#14074: Add MGTransferGlobalCoarsening::min_level() and max_level() (proposed 
by peterrum; merged) https://github.com/dealii/dealii/pull/14074

#14073: Communicate refinement flags in p:s:T (proposed by peterrum) 
https://github.com/dealii/dealii/pull/14073

#14072: p:d:GridRefinement: accept other types of triangulations (proposed by 
peterrum) https://github.com/dealii/dealii/pull/14072

#14071: [WIP] Implement helpers for constitutive laws parameterised by 
(coupled) invariants (proposed by jppelteret) 
https://github.com/dealii/dealii/pull/14071

#14070: Remove CellId::to_cell (proposed by masterleinad) 
https://github.com/dealii/dealii/pull/14070

#14068: Remove the cell dof indices cache (proposed by kronbichler) 
https://github.com/dealii/dealii/pull/14068

#14067: Speed up ReferenceCell::standard_vs_true_line_orientation (proposed by 
kronbichler; merged) https://github.com/dealii/dealii/pull/14067

#14066: Speed up DoFAccessorImplementation::get_dof_indices (proposed by 
kronbichler; merged) https://github.com/dealii/dealii/pull/14066

#14064: Remove deprecated XDMFEntry::get_xdmf_content (proposed by 
masterleinad; merged) https://github.com/dealii/dealii/pull/14064

#14063: DoFAccessor: Streamline access to dof indices (proposed by kronbichler; 
merged) https://github.com/dealii/dealii/pull/14063

#14062: Simplify assertion by AssertIndexRange (proposed by kronbichler; 
merged) https://github.com/dealii/dealii/pull/14062

#14061: Fix some VectorizedArrayTypes for non-default vectorization (proposed 
by sebproell; merged) https://github.com/dealii/dealii/pull/14061

#14060: Remove deprecated header file lac/parallel_block_vector.h  (proposed by 
masterleinad; merged) https://github.com/dealii/dealii/pull/14060

#14059: find_active_cell_around_point should only find cells with marked 
vertices (proposed by jh66637) https://github.com/dealii/dealii/pull/14059

#14058: Fix a few MatrixFree tests (proposed by kronbichler; merged) 
https://github.com/dealii/dealii/pull/14058

#14057: Fix bug in MatrixFree::reinit for MG levels with empty process 
(proposed by kronbichler; merged) https://github.com/dealii/dealii/pull/14057

#14056: HDF XDMF entry: store reference cell (proposed by tjhei) 
https://github.com/dealii/dealii/pull/14056

#14055: Add cross-references for global indices (proposed by peterrum) 
https://github.com/dealii/dealii/pull/14055

#14054: Remove trailing const (proposed by peterrum; merged) 
https://github.com/dealii/dealii/pull/14054

#14052: Small improvement for IndexSet::add_ranges_internal (proposed by 
kronbichler; merged) https://github.com/dealii/dealii/pull/14052

#14051: DoFHandlerPolicy: Speed up function renumber_face_mg_dofs (proposed by 
kronbichler; merged) https://github.com/dealii/dealii/pull/14051

#14050: User projects: Make sure that CMAKE_CUDA_HOST_COMPILER is set. 
(proposed by tamiko; merged) https://github.com/dealii/dealii/pull/14050

#14049: fix a warning when compiling with cuda support (proposed by tamiko; 
merged) https://github.com/dealii/dealii/pull/14049

#14048: Remove unqualified ConstantFunction and ZeroFunction (proposed by 
masterleinad; merged) https://github.com/dealii/dealii/pull/14048

#14047: Remove DoFHandler's initialize and set_fe (proposed by masterleinad; 
merged) https://github.com/dealii/dealii/pull/14047

#14046: Mark two tests that depend on p4est as such. (proposed by drwells; 
merged) https://github.com/dealii/dealii/pull/14046

#14045: Include index_set.h in two more tests. (proposed by drwells; merged) 
https://github.com/dealii/dealii/pull/14045

#14044: step-81: Mention example step in the tutorial lists (proposed by 
tamiko; merged) https://github.com/dealii/dealii/pull/14044

#14043: Remove deprecated QProjector member functions (proposed by 
masterleinad; merged) https://github.com/dealii/dealii/pull/14043

#14042: Slightly reword DataOutResample's documentation. (proposed by drwells; 
merged) https://github.com/dealii/dealii/pull/14042

#14041: `QSimplex::compute_affine_transformation()` in co-dimension one 
(proposed by fdrmrc) https://github.com/dealii/dealii/pull/14041

#14040: Remove deprecated DoFTools functions (proposed by masterleinad; merged) 
https://github

[deal.II] DoFTools::make_periodicity_constraints error when working with FEColleciton

2022-06-29 Thread jose.a...@gmail.com
Dear dealii community,

I am working on a gradient-enhanced crystal plasticity code where the 
displacement field is continuous but the slip fields can be discontinuous. 
To this end, I am using an hp::FECollection. For example, in the case 
of two crystals with one slip system each, I have an hp::FECollection<2> with

fe_collection[0] = [FE_Q<2> FE_Q<2> FE_Q<2>  FE_Nothing<2>]
fe_collection[1] = [FE_Q<2> FE_Q<2> FE_Nothing<2> FE_Q<2> ]

where the first two FE_Q<2> correspond to the displacement field in 2D and 
the remaining two entries to the slip fields of the two crystals.
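
In condensed form, the collection is built roughly like this (a sketch of
my setup; the FESystem grouping and the polynomial degree 1 are just for
illustration):

#include <deal.II/fe/fe_nothing.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_system.h>
#include <deal.II/hp/fe_collection.h>

dealii::hp::FECollection<2> fe_collection;

// Crystal 0: displacement (2 components) plus its own slip field; the
// slip field of crystal 1 is switched off with FE_Nothing.
fe_collection.push_back(
  dealii::FESystem<2>(dealii::FE_Q<2>(1), 2,
                      dealii::FE_Q<2>(1), 1,
                      dealii::FE_Nothing<2>(), 1));

// Crystal 1: the two slip fields swap roles.
fe_collection.push_back(
  dealii::FESystem<2>(dealii::FE_Q<2>(1), 2,
                      dealii::FE_Nothing<2>(), 1,
                      dealii::FE_Q<2>(1), 1));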

My reference problem consists of the unit square under simple shear with 
periodic boundary conditions in the x-direction. When I call 
DoFTools::make_periodicity_constraints, I get the following error whenever 
there are subdomains with different values of fe_index:

An error occurred in line <1690> of file
 
in function
  const dealii::FiniteElement<dim, spacedim>&
  dealii::DoFAccessor<structdim, dim, spacedim, level_dof_access>::get_fe(
    unsigned int) const
  [with int structdim = 1; int dim = 2; int spacedim = 2; bool 
  level_dof_access = false]
The violated condition was: 
  fe_index_is_active(fe_index) == true
Additional information: 
  This function can only be called for active FE indices

I wrote a minimum working example based on my code to showcase the error, 
which I have attached to this post. The executable takes a true or false 
argument that controls whether different material identifiers are assigned 
to different subdomains. If there are no subdomains, and therefore, more 
importantly, the size of the hp::FECollection is 1, the code runs with no 
problem.
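
For reference, the relevant calls in the example are, in condensed form (a
sketch; the surrounding setup and the AffineConstraints handling are
abbreviated):

// Match periodic faces (boundary ids 0 and 1, direction x) on the
// parallel triangulation before refining...
std::vector<dealii::GridTools::PeriodicFacePair<
  dealii::parallel::distributed::Triangulation<2>::cell_iterator>>
  periodicity_vector;
dealii::GridTools::collect_periodic_faces(
  triangulation, 0, 1, 0, periodicity_vector);
triangulation.add_periodicity(periodicity_vector);

// ...and, after distributing DoFs, build the matching constraints. This
// is the call that aborts once the hp::FECollection has more than one
// element.
dealii::AffineConstraints<double> constraints;
dealii::DoFTools::make_periodicity_constraints(
  dof_handler, 0, 1, 0, constraints);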

My question is whether my algorithm is wrong, or whether 
DoFTools::make_periodicity_constraints does not currently support an 
hp::FECollection with a size bigger than one.

Cheers,
Jose

// (Include list reconstructed from the code below; the original header
// names were lost when the archive stripped the angle brackets.)
#include <deal.II/base/mpi.h>

#include <deal.II/distributed/tria.h>

#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_renumbering.h>
#include <deal.II/dofs/dof_tools.h>

#include <deal.II/fe/fe_nothing.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_system.h>
#include <deal.II/hp/fe_collection.h>

#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_tools.h>

#include <deal.II/lac/affine_constraints.h>

#include <iostream>

#include <string>

namespace Tests
{



template <int dim>
class MakePeriodicityConstraints
{
public:

  MakePeriodicityConstraints(const bool flag_set_material_ids);

  void run();

private:
  dealii::parallel::distributed::Triangulation<dim> triangulation;

  dealii::DoFHandler<dim>       dof_handler;

  dealii::hp::FECollection<dim> fe_collection;

  const double                  width;

  const double                  height;

  const double                  n_fields_per_domain;

  double                        n_domains;

  const bool                    flag_set_material_ids;

  void make_grid();

  void setup_dofs();
};



template <int dim>
MakePeriodicityConstraints<dim>::MakePeriodicityConstraints(
  const bool flag_set_material_ids)
:
triangulation(MPI_COMM_WORLD,
              typename dealii::Triangulation<dim>::MeshSmoothing(
                dealii::Triangulation<dim>::smoothing_on_refinement |
                dealii::Triangulation<dim>::smoothing_on_coarsening)),
dof_handler(triangulation),
width(1.0),
height(1.0),
n_fields_per_domain(2),
flag_set_material_ids(flag_set_material_ids)
{}



template <int dim>
void MakePeriodicityConstraints<dim>::run()
{
  make_grid();

  setup_dofs();
}



template <int dim>
void MakePeriodicityConstraints<dim>::make_grid()
{
  std::vector<unsigned int> repetitions(2, 10);

  dealii::GridGenerator::subdivided_hyper_rectangle(
    triangulation,
    repetitions,
    dealii::Point<dim>(0, 0),
    dealii::Point<dim>(width, height),
    true);

  std::vector<dealii::GridTools::PeriodicFacePair<
    typename dealii::parallel::distributed::Triangulation<dim>::cell_iterator>>
    periodicity_vector;

  dealii::GridTools::collect_periodic_faces(triangulation,
                                            0,
                                            1,
                                            0,
                                            periodicity_vector);

  triangulation.add_periodicity(periodicity_vector);

  triangulation.refine_global(1);

  // Set material ids
  for (const auto &cell : triangulation.active_cell_iterators())
if (cell->is_locally_owned())
{
  if (flag_set_material_ids)
  {
if (std::fabs(cell->center()[1]) < height/3.0)
  cell->set_material_id(0);
else if (std::fabs(cell->center()[1]) < 2.0*height/3.0)
  cell->set_material_id(1);
else
  cell->set_material_id(2);
  }
  else
cell->set_material_id(0);
}

  // Set the active finite element index of each cell
  for (const auto &cell : dof_handler.active_cell_iterators())
if (cell->is_locally_owned())
  cell->set_act

[deal.II] Re: step-77 in parallel - SUNDIALS::KINSOL

2022-06-29 Thread jose.a...@gmail.com
Hi Bruno,

sorry for the late response. I get the following error:

An error occurred in line <1469> of file 

 
in function
dealii::IndexSet 
dealii::TrilinosWrappers::MPI::Vector::locally_owned_elements() const
The violated condition was: 
owned_elements.size() == size()
Additional information: 
The locally owned elements have not been properly initialized! This
happens for example if this object has been initialized with exactly
one overlapping IndexSet.

at the bolded line in this code snippet of kinsol.cc 

template <typename VectorType>
unsigned int
KINSOL<VectorType>::solve(VectorType &initial_guess_and_solution)
{
  unsigned int system_size = initial_guess_and_solution.size();
  // The solution is stored in
  // solution. Here we take only a
  // view of it.
# ifdef DEAL_II_WITH_MPI
  if (is_serial_vector<VectorType>::value == false)
    {
      *const IndexSet is =
        initial_guess_and_solution.locally_owned_elements();*
      const unsigned int local_system_size = is.n_elements();
      solution =
        N_VNew_Parallel(communicator, local_system_size, system_size);
      u_scale = N_VNew_Parallel(communicator, local_system_size, system_size);
      N_VConst_Parallel(1.e0, u_scale);
      f_scale = N_VNew_Parallel(communicator, local_system_size, system_size);
      N_VConst_Parallel(1.e0, f_scale);
    }
  else

which is expected given how I reinit my solution vector and distributed vectors:

solution.reinit(locally_relevant_dofs,
                MPI_COMM_WORLD);

distributed_vector.reinit(locally_owned_dofs,
                          locally_relevant_dofs,
                          MPI_COMM_WORLD,
                          true);

Nonetheless, this is the standard reinit call I use in my parallel 
applications, following the scheme of step-32. I thought it might have had 
to do with the reinit lambda passed to KINSOL:

nonlinear_solver.reinit_vector =
  [&](dealii::LinearAlgebraTrilinos::MPI::Vector &x)
  {
    x.reinit(locally_relevant_dofs, MPI_COMM_WORLD);
  };

but I get the above error even if I use the reinit call for distributed 
vectors.

So my question is: how should one reinit the vectors in parallel 
applications using KINSOL? As a side question, how should one reinit the 
vector in the reinit_vector lambda: as the solution vector or as a 
distributed vector?

Attached is the source code for the modified step-77.

Cheers,
Jose

On Thursday, June 9, 2022 at 2:53:56 PM UTC+2 bruno.t...@gmail.com wrote:

> Jose,
>
> What's the error that you get? Is there a message? Is it a segfault?
>
> Best,
>
> Bruno
>
> On Wednesday, June 8, 2022 at 1:03:06 PM UTC-4 jose.a...@gmail.com wrote:
>
>> Dear dealii community,
>>
>> I am solving a nonlinear problem that has a regularized sign function. 
>> The classic Newton-Raphson method converges too slowly near the limit of 
>> the regularization parameter. Before writing a line search algorithm, I 
>> wanted to try the KINSOL solver out. 
>>
>> I modified step-77 using the parallel scheme I have always used, but I 
>> get an error when running in parallel, apparently at line 386 of 
>> kinsol.cc, due to how I reinit my solution vector. I follow the reinit 
>> commands used in step-32:
>>
>> solution.reinit(locally_relevant_dofs,
>>                 MPI_COMM_WORLD);
>>
>> distributed_vector.reinit(locally_owned_dofs,
>>                           locally_relevant_dofs,
>>                           MPI_COMM_WORLD,
>>                           true);
>>
>> It is not clear to me which reinit I should use in this case. I tried 
>> the second reinit method with the flag set to false for the solution 
>> vector, as seen in another step, but I then get an access error during 
>> the WorkStream::run call.
>>
>> Attached is the aforementioned modification of step-77.
>>
>> Cheers,
>> Jose
>>
>

// (Include list reconstructed from the rest of the file; the original
// header names were lost when the archive stripped the angle brackets.)
#include <deal.II/base/function.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/timer.h>
#include <deal.II/base/work_stream.h>

#include <deal.II/distributed/grid_refinement.h>
#include <deal.II/distributed/tria.h>

#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_renumbering.h>
#include <deal.II/dofs/dof_tools.h>

#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/fe/mapping_q.h>

#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_refinement.h>
#include <deal.II/grid/tria.h>

#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/generic_linear_algebra.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/sparsity_tools.h>
#include <deal.II/lac/trilinos_precondition.h>
#include <deal.II/lac/trilinos_solver.h>
#include <deal.II/lac/vector.h>

#include <deal.II/numerics/data_out.h>
#include <deal.II/numerics/error_estimator.h>
#include <deal.II/numerics/matrix_tools.h>
#include <deal.II/numerics/vector_tools.h>

#include <deal.II/sundials/kinsol.h>

#include <fstream>
#include <iostream>
#include <memory>



namespace step77
{



struct CopyBase
{
  CopyBase(const unsigned int dofs_per_cell);

  unsigned int  dofs_per_cell;

  std::vector<dealii::types::global_dof_index> local_dof_indices;
};