Hi Marc and Bruno,
I was able to reproduce this abort with an even simpler test; please see the
attached file.

Initial grid:
/*
 * -----------
 * |  0 |  0 |
 * -----------
 * |  1 |  1 |    0 - FE_Q, 1 - FE_Nothing
 * -----------
 */

/* Set refine flags:
 * -----------
 * |  R |  R |    FE_Q
 * -----------
 * |    |    |    FE_Nothing
 * -----------
 */

Then I prepare the solution transfer and refine the mesh; the program aborts
during execute_coarsening_and_refinement().
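
In short, the failing sequence, condensed from the attached file, is:

    triangulation.prepare_coarsening_and_refinement();
    solution_trans.prepare_for_coarsening_and_refinement(previous_locally_relevant_solution);
    triangulation.execute_coarsening_and_refinement(); // <-- aborts here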

Here is a stack trace:
--------------------------------------------------------
An error occurred in line <1167> of file
</build/deal.ii-vFp8uU/deal.ii-9.2.0/include/deal.II/lac/vector.h> in
function
    Number&
dealii::Vector<Number>::operator()(dealii::Vector<Number>::size_type) [with
Number = double; dealii::Vector<Number>::size_type = unsigned int]
The violated condition was:
    static_cast<typename ::dealii::internal::argument_type<void( typename
std::common_type<decltype(i), decltype(size())>::type)>:: type>(i) <
static_cast<typename ::dealii::internal::argument_type<void( typename
std::common_type<decltype(i), decltype(size())>::type)>:: type>(size())
Additional information:
    Index 0 is not in the half-open range [0,0). In the current case, this
half-open range is in fact empty, suggesting that you are accessing an
element of an empty collection such as a vector that has not been set to
the correct size.

Stacktrace:
-----------
#0  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
dealii::Vector<double>::operator()(unsigned int)
#1  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
#2  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
dealii::parallel::distributed::SolutionTransfer<2,
dealii::PETScWrappers::MPI::Vector, dealii::hp::DoFHandler<2, 2>
>::pack_callback(dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&,
dealii::Triangulation<2, 2>::CellStatus)
#3  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
dealii::parallel::distributed::SolutionTransfer<2,
dealii::PETScWrappers::MPI::Vector, dealii::hp::DoFHandler<2, 2>
>::register_data_attach()::{lambda(dealii::TriaIterator<dealii::CellAccessor<2,
2> > const&, dealii::Triangulation<2,
2>::CellStatus)#1}::operator()(dealii::TriaIterator<dealii::CellAccessor<2,
2> > const&, dealii::Triangulation<2, 2>::CellStatus) const
#4  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
std::_Function_handler<std::vector<char, std::allocator<char> >
(dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&,
dealii::Triangulation<2, 2>::CellStatus),
dealii::parallel::distributed::SolutionTransfer<2,
dealii::PETScWrappers::MPI::Vector, dealii::hp::DoFHandler<2, 2>
>::register_data_attach()::{lambda(dealii::TriaIterator<dealii::CellAccessor<2,
2> > const&, dealii::Triangulation<2,
2>::CellStatus)#1}>::_M_invoke(std::_Any_data const&,
dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&,
dealii::Triangulation<2, 2>::CellStatus&&)
#5  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
std::function<std::vector<char, std::allocator<char> >
(dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&,
dealii::Triangulation<2,
2>::CellStatus)>::operator()(dealii::TriaIterator<dealii::CellAccessor<2,
2> > const&, dealii::Triangulation<2, 2>::CellStatus) const
#6  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
std::_Function_handler<std::vector<char, std::allocator<char> >
(dealii::TriaIterator<dealii::CellAccessor<2, 2> >,
dealii::Triangulation<2, 2>::CellStatus), std::function<std::vector<char,
std::allocator<char> > (dealii::TriaIterator<dealii::CellAccessor<2, 2> >
const&, dealii::Triangulation<2, 2>::CellStatus)>
>::_M_invoke(std::_Any_data const&,
dealii::TriaIterator<dealii::CellAccessor<2, 2> >&&,
dealii::Triangulation<2, 2>::CellStatus&&)
#7  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
std::function<std::vector<char, std::allocator<char> >
(dealii::TriaIterator<dealii::CellAccessor<2, 2> >,
dealii::Triangulation<2,
2>::CellStatus)>::operator()(dealii::TriaIterator<dealii::CellAccessor<2,
2> >, dealii::Triangulation<2, 2>::CellStatus) const
#8  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
dealii::parallel::distributed::Triangulation<2,
2>::DataTransfer::pack_data(std::vector<std::tuple<p4est_quadrant*,
dealii::Triangulation<2, 2>::CellStatus,
dealii::TriaIterator<dealii::CellAccessor<2, 2> > >,
std::allocator<std::tuple<p4est_quadrant*, dealii::Triangulation<2,
2>::CellStatus, dealii::TriaIterator<dealii::CellAccessor<2, 2> > > > >
const&, std::vector<std::function<std::vector<char, std::allocator<char> >
(dealii::TriaIterator<dealii::CellAccessor<2, 2> >,
dealii::Triangulation<2, 2>::CellStatus)>,
std::allocator<std::function<std::vector<char, std::allocator<char> >
(dealii::TriaIterator<dealii::CellAccessor<2, 2> >,
dealii::Triangulation<2, 2>::CellStatus)> > > const&,
std::vector<std::function<std::vector<char, std::allocator<char> >
(dealii::TriaIterator<dealii::CellAccessor<2, 2> >,
dealii::Triangulation<2, 2>::CellStatus)>,
std::allocator<std::function<std::vector<char, std::allocator<char> >
(dealii::TriaIterator<dealii::CellAccessor<2, 2> >,
dealii::Triangulation<2, 2>::CellStatus)> > > const&)
#9  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
dealii::parallel::distributed::Triangulation<2,
2>::execute_coarsening_and_refinement()
#10  /home/kdas/FE_NothingTest/build/test_distributed: main
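
The abort comes from dealii::Vector::operator() called inside
p::d::SolutionTransfer::pack_callback (frame #2). For reference, the
assertion is just the generic bounds check of dealii::Vector; a minimal
snippet that triggers the same message in a debug build (my own
illustration, not part of the attached test) is:

    dealii::Vector<double> values;  // default-constructed: size() == 0
    values(0) = 1.0;                // "Index 0 is not in the half-open range [0,0)"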

Thank you very much,
Kaushik

On Tue, Dec 8, 2020 at 6:25 PM Marc Fehling <mafehl...@gmail.com> wrote:

> Hi Kaushik,
>
> the `p::d::SolutionTransfer` class should be able to deal with `FE_Nothing`
> elements in your example. The tricky cases are when you're coarsening an
> `FE_Nothing` element together with others, as Bruno already pointed out
> (h-coarsening), or if you change an `FE_Nothing` element to a different
> element in the process (p-adaptation). But with your recent modification,
> you avoid these cases.
>
> I suspect that something else causes the error in your program. Could you
> run a debugger on this and give us the backtrace for the exception? This
> will give us more clues to figure out what goes wrong!
>
> Best,
> Marc
> On Tuesday, December 8, 2020 at 7:13:29 AM UTC-7 k.d...@gmail.com wrote:
>
>> Hi Bruno:
>> Thanks for pointing that out.
>> I tried not to refine FE_Nothing cells by modifying the refine loop:
>> (The modified test is attached).
>>
>> for (auto &cell : dgq_dof_handler.active_cell_iterators())
>>       if (cell->is_locally_owned() && cell->active_fe_index () != 0)
>>         {
>>           if (counter > ((dim == 2) ? 4 : 8))
>>             cell->set_coarsen_flag();
>>           else
>>             cell->set_refine_flag();
>>         }
>>
>> But this still aborts.
>> Kaushik
>>
>> On Tue, Dec 8, 2020 at 8:36 AM Bruno Turcksin <bruno.t...@gmail.com>
>> wrote:
>>
>>> Hi,
>>>
>>> Are you sure that your test makes sense? You randomly assign FE indices
>>> to cells, then you refine and coarsen cells. But what does it mean to
>>> coarsen 4 cells together when one of them is FE_Nothing? What would you
>>> expect to happen?
>>>
>>> Best,
>>>
>>> Bruno
>>>
>>> On Monday, December 7, 2020 at 5:54:10 PM UTC-5 k.d...@gmail.com wrote:
>>>
>>>> Hi all:
>>>>
>>>> I modified the test tests/mpi/solution_transfer_05.cc to add an
>>>> FE_Nothing element to the FECollection. I also changed the other elements
>>>> to FE_Q.
>>>>
>>>> When I run the test, it aborts during solution transfer.
>>>> Are there any limitations on using FE_Nothing with parallel solution
>>>> transfer?
>>>> The modified test is attached.
>>>> Thank you very much.
>>>> Kaushik
>>>>
>>>> ----
>>>> An error occurred in line <1167> of file
>>>> </build/deal.ii-vFp8uU/deal.ii-9.2.0/include/deal.II/lac/vector.h> in
>>>> function
>>>>     Number&
>>>> dealii::Vector<Number>::operator()(dealii::Vector<Number>::size_type) [with
>>>> Number = double; dealii::Vector<Number>::size_type = unsigned int]
>>>> The violated condition was:
>>>>     static_cast<typename ::dealii::internal::argument_type<void(
>>>> typename std::common_type<decltype(i), decltype(size())>::type)>:: type>(i)
>>>> < static_cast<typename ::dealii::internal::argument_type<void( typename
>>>> std::common_type<decltype(i), decltype(size())>::type)>:: type>(size())
>>>> Additional information:
>>>>     Index 0 is not in the half-open range [0,0). In the current case,
>>>> this half-open range is in fact empty, suggesting that you are accessing an
>>>> element of an empty collection such as a vector that has not been set to
>>>> the correct size.
>>>>
>>>>

// Test case based on the one written by K. Bzowski
#include <deal.II/base/utilities.h>
#include <deal.II/base/logstream.h>

#include <deal.II/grid/grid_generator.h>
#include <deal.II/hp/dof_handler.h>
#include <deal.II/grid/tria_iterator.h>

#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_nothing.h>
#include <deal.II/dofs/dof_tools.h>

#include <deal.II/hp/fe_collection.h>

#include <deal.II/numerics/data_out.h>
#include <iostream>
#include <fstream>

#include <deal.II/distributed/tria.h>
#include <deal.II/distributed/grid_refinement.h>
#include <deal.II/distributed/solution_transfer.h>

#include <deal.II/lac/generic_linear_algebra.h>

// Uncomment the following #define if you have both PETSc and Trilinos
// installed and you prefer using Trilinos in this example:
//
// #define FORCE_USE_OF_TRILINOS

// This will either import PETSc or TrilinosWrappers into the namespace
// LA. Note that we are defining the macro USE_PETSC_LA so that we can detect
// if we are using PETSc (see solve() for an example where this is necessary)

// Note: this LA namespace must come after including
// <deal.II/lac/generic_linear_algebra.h>.

namespace LA
{
#if defined(DEAL_II_WITH_PETSC) && !defined(DEAL_II_PETSC_WITH_COMPLEX) && \
        !(defined(DEAL_II_WITH_TRILINOS) && defined(FORCE_USE_OF_TRILINOS))
        using namespace dealii::LinearAlgebraPETSc;
#define USE_PETSC_LA
#elif defined(DEAL_II_WITH_TRILINOS)
        using namespace dealii::LinearAlgebraTrilinos;
#else
#error DEAL_II_WITH_PETSC or DEAL_II_WITH_TRILINOS required
#endif
} // namespace LA

#include <deal.II/lac/vector.h>

using namespace dealii;

int main(int argc, char *argv[])
{
        Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

        std::ofstream logfile("output");
        deallog.attach(logfile);
        deallog.depth_console(0);

        MPI_Comm mpi_communicator(MPI_COMM_WORLD);

        parallel::distributed::Triangulation<2, 2> triangulation(
                mpi_communicator,
                typename Triangulation<2>::MeshSmoothing(Triangulation<2>::none));

        GridGenerator::hyper_cube(triangulation);
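        // one global refinement yields the 2x2 grid sketched below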
        triangulation.refine_global(1);

        hp::FECollection<2> fe_collection;
        fe_collection.push_back(FE_Q<2>(1));
        fe_collection.push_back(FE_Nothing<2>());
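        // Note: FE_Nothing has zero DoFs per cell, so cells with
        // active_fe_index 1 carry no solution values.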
        //fe_collection.push_back(FE_Q<2>(1));

        hp::DoFHandler<2> dof_handler(triangulation);

        // Assign FE
        /*
         * -----------
         * |  0 |  0 |
         * -----------
         * |  1 |  1 |          0 - FE_Q, 1 - FE_Nothing
         * -----------
         */

        for (auto &cell : dof_handler.active_cell_iterators())
        {
                if (cell->is_locally_owned())
                {
                        auto center = cell->center();
                        if (center(1) < 0.5)
                        {
                                cell->set_active_fe_index(1);
                        }
                        else
                        {
                                cell->set_active_fe_index(0);
                        }
                }
        }

        dof_handler.distribute_dofs(fe_collection);

        IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs();
        IndexSet locally_relevant_dofs;
        DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);

        LA::MPI::Vector completely_distributed_solution;
        LA::MPI::Vector locally_relevant_solution;

        completely_distributed_solution.reinit(locally_owned_dofs, mpi_communicator);
        locally_relevant_solution.reinit(locally_owned_dofs, locally_relevant_dofs, mpi_communicator);

        // Fill the owned entries with a constant so the transferred values are
        // easy to check after refinement.
        completely_distributed_solution = 1.0;

        locally_relevant_solution = completely_distributed_solution;

        Vector<double> FE_Type(triangulation.n_active_cells());
        Vector<float> subdomain(triangulation.n_active_cells());
        int i = 0;
        for (auto &cell : dof_handler.active_cell_iterators())
        {
                if (cell->is_locally_owned())
                {
                        FE_Type(i) = cell->active_fe_index();
                        subdomain(i) = triangulation.locally_owned_subdomain();
                }
                else
                {
                        FE_Type(i) = -1;
                        subdomain(i) = -1;
                }
                i++;
        }

        // Save output
        {
                DataOut<2, hp::DoFHandler<2>> data_out;
                data_out.attach_dof_handler(dof_handler);
                data_out.add_data_vector(locally_relevant_solution, "Solution");
                data_out.add_data_vector(FE_Type, "FE_Type");
                data_out.add_data_vector(subdomain, "subdomain");
                data_out.build_patches();

                data_out.write_vtu_with_pvtu_record(
                        "./", "solution", 1, mpi_communicator, 2);
        }

        /* Set refine flags:
         * -----------
         * |  R |  R |  FE_Q
         * -----------
         * |    |    |  FE_Nothing
         * -----------
         */

        for (auto &cell : dof_handler.active_cell_iterators())
        {
                if (cell->is_locally_owned())
                {
                        auto center = cell->center();
                        if (center(1) > 0.5)
                        {
                                cell->set_refine_flag();
                        }
                }
        }

        LA::MPI::Vector previous_locally_relevant_solution;
        previous_locally_relevant_solution = locally_relevant_solution;

        parallel::distributed::SolutionTransfer<2, LA::MPI::Vector, hp::DoFHandler<2>>
                solution_trans(dof_handler);
        
        triangulation.prepare_coarsening_and_refinement();

        solution_trans.prepare_for_coarsening_and_refinement(previous_locally_relevant_solution);

        // The data-attach callbacks registered above (including the one from
        // SolutionTransfer) run inside this call; per the backtrace, this is
        // where the abort occurs.
        triangulation.execute_coarsening_and_refinement();

        // Re-assign FE indices on the refined mesh by the same geometric rule
        // as before, so children of the refined FE_Q cells stay FE_Q.
        for (auto &cell : dof_handler.active_cell_iterators())
        {
                if (cell->is_locally_owned())
                {
                        auto center = cell->center();
                        if (center(1) < 0.5)
                        {
                                cell->set_active_fe_index(1);
                        }
                        else
                        {
                                cell->set_active_fe_index(0);
                        }
                }
        }

        dof_handler.distribute_dofs(fe_collection);

        locally_owned_dofs = dof_handler.locally_owned_dofs();
        locally_relevant_dofs.clear();
        DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);
        completely_distributed_solution.reinit(locally_owned_dofs, mpi_communicator);
        locally_relevant_solution.reinit(locally_owned_dofs, locally_relevant_dofs, mpi_communicator);

        // Retrieve the transferred solution on the new mesh into the fully
        // distributed vector.
        solution_trans.interpolate(completely_distributed_solution);
        locally_relevant_solution = completely_distributed_solution;

        FE_Type.reinit(triangulation.n_active_cells());
        subdomain.reinit(triangulation.n_active_cells());
        i = 0;
        for (auto &cell : dof_handler.active_cell_iterators())
        {
                if (cell->is_locally_owned())
                {
                        FE_Type(i) = cell->active_fe_index();
                        subdomain(i) = triangulation.locally_owned_subdomain();
                }
                else
                {
                        FE_Type(i) = -1;
                        subdomain(i) = -1;
                }
                i++;
        }

        // Save output
        {
                DataOut<2, hp::DoFHandler<2>> data_out;
                data_out.attach_dof_handler(dof_handler);
                data_out.add_data_vector(locally_relevant_solution, "Solution");
                data_out.add_data_vector(FE_Type, "FE_Type");
                data_out.add_data_vector(subdomain, "subdomain");
                data_out.build_patches();

                data_out.write_vtu_with_pvtu_record(
                        "./", "solution", 2, mpi_communicator, 2);
        }

}
