Hi Marc,
I tried using cell data transfer as you suggested, but I am having
trouble calling compress() after writing the transferred data into a
PETSc vector. My test code is attached; my confusion is mainly about
when to call compress() after the cell data transfer:

        m_completely_distributed_solution = std::numeric_limits<double>::max();

        std::vector<std::vector<double>> transferred_data(m_triangulation.n_active_cells());
        m_cell_data_trans.unpack(transferred_data);

        i = 0;
        for (auto &cell : m_dof_handler.active_cell_iterators())
        {
            std::vector<double> cell_data = transferred_data[i];
            if (cell_data.size() != 0)
            {
                Vector<double> local_data(GeometryInfo<2>::vertices_per_cell);
                for (unsigned int j = 0; j < GeometryInfo<2>::vertices_per_cell; j++)
                {
                    local_data[j] = cell_data[j];
                }
                cell->set_dof_values_by_interpolation(local_data, m_completely_distributed_solution);
            }
            i++;
        }

        // m_completely_distributed_solution.compress(VectorOperation::min); // calling compress aborts here

        // only locally owned elements of a non-ghosted PETSc vector may be accessed
        for (const auto idx : m_locally_owned_dofs)
            if (m_completely_distributed_solution(idx) == std::numeric_limits<double>::max())
                m_completely_distributed_solution(idx) = 2; // 2 is the initial condition I want to apply to newly created dofs

        // m_completely_distributed_solution.compress(VectorOperation::min);
        // aborts here too: Missing compress() or calling with wrong VectorOperation argument.
        // exchange_cell_data_to_ghosts ??
        m_locally_relevant_solution = m_completely_distributed_solution;


Calling compress() from either of those two places results in an exception:
An error occurred in line <387> of file
</home/kdas/dealii/source/lac/petsc_vector_base.cc> in function
    void dealii::PETScWrappers::VectorBase::compress(dealii::VectorOperation::values)
The violated condition was:
    last_action == ::dealii::VectorOperation::unknown || last_action == operation
Additional information:
    Missing compress() or calling with wrong VectorOperation argument.
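
For context, my understanding (an assumption on my part, not something I
have verified in the sources) is that the PETSc wrappers remember the kind
of the last element-wise operation, and compress() aborts when it is called
with a different kind while writes are still pending. A minimal sketch of
that behavior, with illustrative names:

    // assumption: element writes such as v(idx) = ... register as an
    // "insert" operation inside PETScWrappers::VectorBase
    LA::MPI::Vector v(locally_owned_dofs, mpi_communicator);
    for (const auto idx : locally_owned_dofs)
        v(idx) = 0.0;                      // pending operation: insert
    v.compress(VectorOperation::insert);   // must match the pending kind
    // only after the pending insertions are flushed may a different
    // VectorOperation be started without tripping the assertion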



Thank you very much for your help.
Happy New Year,
Kaushik



On Sat, Jan 2, 2021 at 9:55 AM Kaushik Das <k.da...@gmail.com> wrote:

> Thank you.
> I will try the CellDataTransfer method as you suggested.
> The test code that I attached earlier has a mistake:
> cell->set_active_fe_index(0) should be guarded by an
> if (cell->is_locally_owned()) check. I have attached a corrected test here.
> Thanks,
> Kaushik
>
> On Thu, Dec 31, 2020 at 8:05 PM Marc Fehling <mafehl...@gmail.com> wrote:
>
>> Kaushik,
>>
>> In addition to what I just wrote, your example from above has revealed a
>> bug in the `p::d::SolutionTransfer` class that Wolfgang and I were
>> discussing in the course of this chatlog. Thank you very much for this! We
>> are working on a solution for this issue.
>>
>> I would encourage you to use the `p::d::CellDataTransfer` class for your
>> use case as described in the last message.
>>
>> Marc
>>
>> On Thursday, December 31, 2020 at 6:02:00 PM UTC-7 Marc Fehling wrote:
>>
>>> Hi Kaushik,
>>>
>>> Yes, this is possible by changing a cell from FE_Nothing to FE_Q using
>>> p-refinement.
>>>
>>> You can do this with the method described in #11132
>>> <https://github.com/dealii/dealii/pull/11132>: Imitate what
>>> p::d::SolutionTransfer is doing with the more versatile tool
>>> p::d::CellDataTransfer and consider the following:
>>>
>>>    - Prepare a data container like `Vector<Vector<double>>` where the
>>>      outer layer represents each cell in the current mesh, and the inner
>>>      layer corresponds to the dof values inside each cell.
>>>    - Prepare data for the updated grid on the old grid:
>>>       - On already activated cells, store dof values with
>>>         `cell->get_interpolated_dof_values()`.
>>>       - On all other cells, store an empty container.
>>>    - Register your data container for, and execute, coarsening and
>>>      refinement.
>>>    - Interpolate the old solution on the new mesh:
>>>       - Initialize your new solution vector with invalid values
>>>         `std::numeric_limits<double>::infinity()`.
>>>       - On previously activated cells, write the stored data back with
>>>         `cell->set_dof_values_by_interpolation()`.
>>>       - Skip all other cells, which only have an empty container.
>>>    - On non-ghosted solution vectors, call
>>>      `compress(VectorOperation::min)` to get correct values on ghost
>>>      cells. (A compressed sketch of these steps follows right after
>>>      this list.)
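>>>
>>> A compressed sketch of these steps (a sketch only; variable names are
>>> illustrative, dim is your space dimension, and I assume the container
>>> type from above):
>>>
>>>     parallel::distributed::CellDataTransfer<dim, dim,
>>>         std::vector<std::vector<double>>>
>>>         cell_data_trans(triangulation, /*transfer_variable_size_data=*/true);
>>>
>>>     // one entry per active cell; left empty on FE_Nothing cells
>>>     std::vector<std::vector<double>> data_to_transfer(triangulation.n_active_cells());
>>>     // ... fill via cell->get_interpolated_dof_values() on FE_Q cells ...
>>>
>>>     triangulation.prepare_coarsening_and_refinement();
>>>     cell_data_trans.prepare_for_coarsening_and_refinement(data_to_transfer);
>>>     triangulation.execute_coarsening_and_refinement();
>>>     dof_handler.distribute_dofs(fe_collection);
>>>
>>>     // unpack on the new mesh and write back per cell
>>>     std::vector<std::vector<double>> transferred_data(triangulation.n_active_cells());
>>>     cell_data_trans.unpack(transferred_data);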
>>>
>>> This leaves you with a correctly interpolated solution on the new mesh,
>>> where all newly activated dofs have the value `infinity`.
>>>
>>> You can now (and not earlier!!!) assign the values you have in mind for
>>> the newly activated dofs. You may want to exchange data on ghost cells once
>>> more with `GridTools::exchange_cell_data_to_ghosts()`.
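>>>
>>> A minimal sketch of what such an exchange might look like (the DataType
>>> double, the map, and the lambdas are assumptions for illustration; needs
>>> <deal.II/grid/grid_tools.h>):
>>>
>>>     std::map<CellId, double> cell_values; // filled on locally owned cells
>>>     GridTools::exchange_cell_data_to_ghosts<double, DoFHandler<dim>>(
>>>         dof_handler,
>>>         // pack: called on locally owned cells; return the value to send
>>>         [&](const typename DoFHandler<dim>::active_cell_iterator &cell)
>>>             -> std_cxx17::optional<double> {
>>>             return cell_values[cell->id()];
>>>         },
>>>         // unpack: called on ghost cells with the value from the owner
>>>         [&](const typename DoFHandler<dim>::active_cell_iterator &cell,
>>>             const double &value) {
>>>             cell_values[cell->id()] = value;
>>>         });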
>>>
>>> This is a fairly new feature in the library for a very specific use
>>> case, so there is no dedicated class for transferring solutions across
>>> finite element activation yet. If you successfully manage to make this
>>> work, would you be willing to turn this into a class for the deal.II
>>> library?
>>>
>>> Marc
>>> On Wednesday, December 30, 2020 at 8:22:59 AM UTC-7 k.d...@gmail.com
>>> wrote:
>>>
>>>> Hi all,
>>>> Thank you for your reply.
>>>> Let me explain what I am trying to do and why.
>>>> I want to solve a transient heat transfer problem for the additive
>>>> manufacturing (AM) process. In AM processes, metal powder is deposited
>>>> in layers, and then a laser source scans each layer and melts and bonds
>>>> the powder to the layer underneath it. To simulate this layer-by-layer
>>>> process, I want to start with a grid that covers all the layers, but
>>>> initially only the bottom-most layer is active and all other layers are
>>>> inactive; then, as time progresses, I want to activate one layer at a
>>>> time. I read on this mailing list that cell "birth" or "activation" can
>>>> be done by changing a cell from FE_Nothing to FE_Q using p-refinement.
>>>> I was trying to initially set all cells of the grid to FE_Nothing
>>>> except the bottom-most layer, and then convert one layer at a time to
>>>> FE_Q. My questions are:
>>>> 1. Does this make sense?
>>>> 2. I have to do two other things for this to work. (a) When a cell
>>>> becomes FE_Q from FE_Nothing, I need to apply a non-zero initial value
>>>> to the dofs that are activated for the first time. This is to simulate
>>>> the metal powder being deposited at a specified temperature, e.g. room
>>>> temperature. (b) The dofs that were shared between FE_Q and FE_Nothing
>>>> cells before the p-refinement, and are still shared between FE_Q and
>>>> FE_Nothing cells after the refinement, should retain their values from
>>>> before the refinement. (A sketch of the activation loop follows below.)
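>>>>
>>>> A minimal sketch of what I mean by activating one layer at a time
>>>> (current_layer_height is an illustrative variable, not from my
>>>> attached test):
>>>>
>>>>     for (const auto &cell : dof_handler.active_cell_iterators())
>>>>         if (cell->is_locally_owned() &&
>>>>             cell->center()(1) < current_layer_height)
>>>>             cell->set_future_fe_index(0); // 0 = FE_Q in my collection
>>>>     triangulation.prepare_coarsening_and_refinement();
>>>>     triangulation.execute_coarsening_and_refinement();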
>>>>
>>>> Another way to simulate this process would be to use a cell "awakening"
>>>> process instead of cell "birth": I keep all cells FE_Q but apply a very
>>>> low diffusivity to the cells of the layers that are not yet "awake".
>>>> But that way, I have to solve a larger system at every time step. I was
>>>> hoping to save some computation time by only forming a system
>>>> consisting of the cells in the "active" layers.
>>>>
>>>> Please let me know if this makes sense. Is there any other method in
>>>> deal.II that can simulate such a process?
>>>> Thank you very much, and happy holidays.
>>>> Kaushik
>>>>
>>>>
>>>> On Tue, Dec 29, 2020 at 12:26 PM Wolfgang Bangerth <
>>>> bang...@colostate.edu> wrote:
>>>>
>>>>> On 12/28/20 5:11 PM, Marc Fehling wrote:
>>>>> >
>>>>> > In case a FE_Nothing has been configured to dominate, the solution
>>>>> should be
>>>>> > continuous on the interface if I understood correctly, i.e., zero on
>>>>> the face.
>>>>> > I will write a few tests to see if this is actually automatically
>>>>> the case in
>>>>> > user applications. If so, this check for domination will help other
>>>>> users to
>>>>> > avoid this pitfall :)
>>>>> >
>>>>>
>>>>> More tests = more better :-)
>>>>> Cheers
>>>>>   W.
>>>>>
>>>>> --
>>>>> ------------------------------------------------------------------------
>>>>> Wolfgang Bangerth          email: bang...@colostate.edu
>>>>>                            www:   http://www.math.colostate.edu/~bangerth/
>>>>>

// Test case based on the one written by K. Bzowski
#include <deal.II/base/utilities.h>
#include <deal.II/base/logstream.h>
#include <deal.II/base/geometry_info.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/grid/tria_iterator.h>

#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_nothing.h>
#include <deal.II/dofs/dof_tools.h>

#include <deal.II/hp/fe_collection.h>

#include <deal.II/numerics/data_out.h>
#include <iostream>
#include <fstream>
#include <limits>
#include <vector>

#include <deal.II/lac/vector.h>
#include <deal.II/lac/generic_linear_algebra.h>

#include <deal.II/distributed/tria.h>
#include <deal.II/distributed/grid_refinement.h>
#include <deal.II/distributed/solution_transfer.h>
//#include <deal.II/distributed/cell_data_transfer.h>
#include <deal.II/distributed/cell_data_transfer.templates.h>

// uncomment the following #define if you have PETSc and Trilinos installed
// and you prefer using Trilinos in this example:
// @code
// #define FORCE_USE_OF_TRILINOS
// @endcode

// This will either import PETSc or TrilinosWrappers into the namespace
// LA. Note that we are defining the macro USE_PETSC_LA so that we can detect
// if we are using PETSc (see solve() for an example where this is necessary)

// This LA namespace must be after including
// <deal.II/lac/generic_linear_algebra.h>.

namespace LA
{
#if defined(DEAL_II_WITH_PETSC) && !defined(DEAL_II_PETSC_WITH_COMPLEX) && \
    !(defined(DEAL_II_WITH_TRILINOS) && defined(FORCE_USE_OF_TRILINOS))
    using namespace dealii::LinearAlgebraPETSc;
#define USE_PETSC_LA
#elif defined(DEAL_II_WITH_TRILINOS)
    using namespace dealii::LinearAlgebraTrilinos;
#else
#error DEAL_II_WITH_PETSC or DEAL_II_WITH_TRILINOS required
#endif
} // namespace LA

using namespace dealii;

class FE_nothing_test
{
private:
    MPI_Comm mpi_communicator;
    parallel::distributed::Triangulation<2, 2> m_triangulation;
    DoFHandler<2> m_dof_handler;
    parallel::distributed::SolutionTransfer<2, LA::MPI::Vector> m_solution_trans;
    parallel::distributed::CellDataTransfer<2, 2, std::vector<std::vector<double>>> m_cell_data_trans;
    LA::MPI::Vector m_completely_distributed_solution;
    LA::MPI::Vector m_locally_relevant_solution;
    IndexSet m_locally_owned_dofs;
    IndexSet m_locally_relevant_dofs;
    hp::FECollection<2> m_fe_collection;
    LA::MPI::Vector m_previous_locally_relevant_solution;

public:
    FE_nothing_test() : mpi_communicator(MPI_COMM_WORLD),
                        m_triangulation(
                            mpi_communicator,
                            typename Triangulation<2>::MeshSmoothing(Triangulation<2>::none)),
                        m_dof_handler(m_triangulation),
                        m_solution_trans(m_dof_handler),
                        m_cell_data_trans(m_triangulation,
                                          /*transfer_variable_size_data=*/true)
    {
    }

    void run()
    {
        std::ofstream logfile("output");
        deallog.attach(logfile);
        deallog.depth_console(0);

        GridGenerator::hyper_cube(m_triangulation);
        m_triangulation.refine_global(1);

        m_fe_collection.push_back(FE_Q<2>(1));
        m_fe_collection.push_back(FE_Nothing<2>()); // or: m_fe_collection.push_back(FE_Nothing<2>(1, true));

        int step = 1;

        // Assign FE indices:
        /*
        * -----------
        * |  1 |  1 |
        * -----------
        * |  0 |  0 |           0 = FE_Q, 1 = FE_Nothing
        * -----------
        */

        for (auto &cell : m_dof_handler.active_cell_iterators())
        {
            if (cell->is_locally_owned())
            {
                auto center = cell->center();
                if (center(1) < 0.5)
                {
                    cell->set_active_fe_index(0);
                }
                else
                {
                    cell->set_active_fe_index(1);
                }
            }
        }

        setUpSolutions();

        m_completely_distributed_solution = 1.0;
        m_locally_relevant_solution = m_completely_distributed_solution;

        output(step);

        //
        // h-refinement
        //
        step = 2;

        /* Set refine flags:
        * -----------
        * | R  |  R |
        * |---------|
        * | R  |  R |
        * -----------
        */

        for (auto &cell : m_dof_handler.active_cell_iterators())
        {
            if (cell->is_locally_owned())
            {
                cell->set_refine_flag();
            }
        }

        coarsening_and_refinement();
        setUpSolutions();
        solution_transfer();
        output(step);

        //
        // p-refinement
        //
        step = 3;
        //
        // activate one more row of cells in the top half (previously FE_Nothing)
        //

        for (auto &cell : m_dof_handler.active_cell_iterators())
        {
            if (cell->is_locally_owned())
            {
                auto center = cell->center();
                if (center(1) < 0.75)
                {
                    cell->set_future_fe_index(0);
                }
                else
                {
                    cell->set_future_fe_index(1);
                }
            }
        }

        p_coarsening_and_refinement_solution_transfer();

        output(step);
    }

    void setUpSolutions()
    {
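        // Redistribute dofs for the current FE index assignment and rebuild
        // the non-ghosted (owned) and ghosted (relevant) solution vectors.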
        m_dof_handler.distribute_dofs(m_fe_collection);
        m_locally_owned_dofs = m_dof_handler.locally_owned_dofs();
        m_locally_relevant_dofs.clear();
        DoFTools::extract_locally_relevant_dofs(m_dof_handler, m_locally_relevant_dofs);
        m_completely_distributed_solution.reinit(m_locally_owned_dofs, mpi_communicator);
        m_locally_relevant_solution.reinit(m_locally_owned_dofs, m_locally_relevant_dofs, mpi_communicator);
    }

    void coarsening_and_refinement()
    {
        m_previous_locally_relevant_solution = m_locally_relevant_solution;
        m_triangulation.prepare_coarsening_and_refinement();
        m_solution_trans.prepare_for_coarsening_and_refinement(m_previous_locally_relevant_solution);
        m_triangulation.execute_coarsening_and_refinement();
    }

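    // This function imitates p::d::SolutionTransfer with the more general
    // p::d::CellDataTransfer, following the recipe from Marc's message:
    // pack the dof values of FE_Q cells into a per-cell container, transfer
    // it across the p-refinement, and unpack it into the new solution vector.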
    void p_coarsening_and_refinement_solution_transfer()
    {
        std::vector<std::vector<double>> data_to_transfer(m_triangulation.n_active_cells());
        int i = 0;
        for (auto &cell : m_dof_handler.active_cell_iterators())
        {
            std::vector<double> cell_data;
            if (cell->is_locally_owned())
            {
                if (cell->active_fe_index() == 0) // 0 = FE_Q
                {
                    Vector<double> local_data(GeometryInfo<2>::vertices_per_cell);
                    cell->get_interpolated_dof_values(m_completely_distributed_solution, local_data);
                    for (unsigned int j = 0; j < GeometryInfo<2>::vertices_per_cell; j++)
                    {
                        cell_data.push_back(local_data[j]);
                    }
                }
            }
            data_to_transfer[i] = cell_data;
            i++;
        }

        m_triangulation.prepare_coarsening_and_refinement();
        m_cell_data_trans.prepare_for_coarsening_and_refinement(data_to_transfer);
        m_triangulation.execute_coarsening_and_refinement();

        setUpSolutions();

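        // Initialize every entry with a sentinel value so that dofs which
        // receive no transferred data below can be recognized afterwards.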
        m_completely_distributed_solution = std::numeric_limits<double>::max();

        std::vector<std::vector<double>> transferred_data(m_triangulation.n_active_cells());
        m_cell_data_trans.unpack(transferred_data);

        i = 0;
        for (auto &cell : m_dof_handler.active_cell_iterators())
        {
            std::vector<double> cell_data = transferred_data[i];
            if (cell_data.size() != 0)
            {
                Vector<double> local_data(GeometryInfo<2>::vertices_per_cell);
                for (unsigned int j = 0; j < GeometryInfo<2>::vertices_per_cell; j++)
                {
                    local_data[j] = cell_data[j]; // copy entry-wise (assigning the scalar to the whole vector was a bug)
                }
                cell->set_dof_values_by_interpolation(local_data, m_completely_distributed_solution);
            }
            i++;
        }

        // m_completely_distributed_solution.compress(VectorOperation::min); // calling compress aborts here

        // Only locally owned elements of a non-ghosted PETSc vector may be
        // read or written, so loop over the owned index set rather than the
        // global size.
        for (const auto idx : m_locally_owned_dofs)
            if (m_completely_distributed_solution(idx) == std::numeric_limits<double>::max())
                m_completely_distributed_solution(idx) = 2; // 2 is the initial condition I want to apply to newly created dofs

        // m_completely_distributed_solution.compress(VectorOperation::min);
        // aborts here too: Missing compress() or calling with wrong VectorOperation argument.
        // exchange_cell_data_to_ghosts ??
        m_locally_relevant_solution = m_completely_distributed_solution;
    }

    void solution_transfer()
    {
        m_solution_trans.interpolate(m_completely_distributed_solution);
        m_locally_relevant_solution = m_completely_distributed_solution;
    }

    void output(int step)
    {
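        // Per-cell output of the active FE index and the owning subdomain;
        // -1 marks cells that are not locally owned.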
        Vector<double> FE_Type(m_triangulation.n_active_cells());
        Vector<float> subdomain(m_triangulation.n_active_cells());
        int i = 0;
        for (auto &cell : m_dof_handler.active_cell_iterators())
        {
            if (cell->is_locally_owned())
            {
                FE_Type(i) = cell->active_fe_index();
                subdomain(i) = m_triangulation.locally_owned_subdomain();
            }
            else
            {
                FE_Type(i) = -1;
                subdomain(i) = -1;
            }
            i++;
        }

        const unsigned int n_subdivisions = 0; //3;
        DataOut<2> data_out;
        data_out.attach_dof_handler(m_dof_handler);
        data_out.add_data_vector(m_locally_relevant_solution, "Solution");
        data_out.add_data_vector(FE_Type, "FE_Type");
        data_out.add_data_vector(subdomain, "subdomain");
        data_out.build_patches(n_subdivisions);

        data_out.write_vtu_with_pvtu_record(
            "./", "solution", step, mpi_communicator, 2);
    }
};

int main(int argc, char *argv[])
{
    Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

    FE_nothing_test test;
    test.run();
}
