I haven't tried Wolfgang's suggestion yet.

I tried the L2 projection approach using VectorTools::project 
(https://dealii.org/current/doxygen/deal.II/namespaceVectorTools.html#a51a13e948e1296dbfdfc297167e4fd5a).
 
The code I added is below. I initialize a CellDataStorage object by evaluating the 
DG solution at the quadrature points using FEValues, and then project that 
quadrature-point data onto the continuous space. The AffineConstraints 
object might contain some actual constraints in the real program. This 
seems to work fine, and the result is identical when run in parallel. Am I missing 
something? It is more expensive than the simple interpolation, but not 
significantly so compared to the DG solver, as long as it is not done too often. 
And I only really need it when writing output, which is quite expensive 
anyway. It is a bit more code, but more numerically sound.
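
In the real program, the AffineConstraints setup in the snippet below would 
presumably look more like this (just a sketch of what I mean by "actual 
constraints"; the hanging-node constraints and `relevant_dofs_cont` are made up here):
```
    // Sketch only: fill the constraint object with real constraints
    // (e.g. hanging nodes) instead of leaving it empty before close().
    const auto relevant_dofs_cont =
        d2::DoFTools::extract_locally_relevant_dofs(dof_cont);
    auto ac = d2::AffineConstraints<double>{relevant_dofs_cont};
    d2::DoFTools::make_hanging_node_constraints(dof_cont, ac);
    ac.close();
```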

This goes in the original code in place of the call to 
FETools::interpolate:
```
    // Non-ghosted result vector for the continuous solution.
    auto v = Vector{local_owned_dofs_cont, MPI_COMM_WORLD};

    // Per-quadrature-point storage for the DG solution values.
    struct MyData
    {
        double u;
    };
    auto gauss = d2::QGauss<dim>(fe_cont.tensor_degree() + 1);
    auto quadrature_data =
        d2::CellDataStorage<decltype(dof_cont)::active_cell_iterator, MyData>{};

    // Evaluate the DG solution at the quadrature points of every locally
    // owned cell and store the values in the CellDataStorage.
    auto fe_values = d2::FEValues<dim>(mapping, fe_dg, gauss, d2::update_values);
    for (auto&& cell : dof_dg.active_cell_iterators())
    {
        if (!cell->is_locally_owned())
        {
            continue;
        }
        fe_values.reinit(cell);
        quadrature_data.initialize(cell, fe_values.n_quadrature_points);
        auto data = quadrature_data.get_data(cell);
        auto local_u = std::vector<double>(fe_values.n_quadrature_points);
        fe_values.get_function_values(u, local_u);
        for (auto i : fe_values.quadrature_point_indices())
        {
            *data[i] = MyData{local_u[i]};
        }
    }

    // L2-project the stored quadrature-point data onto the continuous space.
    auto ac = d2::AffineConstraints<double>{local_owned_dofs_cont};
    ac.close();
    d2::VectorTools::project(
        mapping,
        dof_cont,
        ac,
        gauss,
        [&quadrature_data](const decltype(dof_cont)::active_cell_iterator& cell,
                           unsigned int qidx) -> double
        { return quadrature_data.get_data(cell)[qidx]->u; },
        v);
```
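
For completeness, this is roughly how I then use the result when writing output: 
copy it into a ghosted vector and hand that to DataOut. A sketch, assuming the 
`Vector` alias above is something like `LinearAlgebra::distributed::Vector<double>` 
(so that `update_ghost_values()` exists), and with made-up field/file names:
```
    // Sketch: ghosted copy of the projected solution for output.
    auto v_ghosted = Vector{local_owned_dofs_cont,
                            d2::DoFTools::extract_locally_relevant_dofs(dof_cont),
                            MPI_COMM_WORLD};
    v_ghosted = v;
    v_ghosted.update_ghost_values(); // probably redundant after operator=, but harmless

    d2::DataOut<dim> data_out;
    data_out.add_data_vector(dof_cont, v_ghosted, "u_continuous");
    data_out.build_patches(mapping);
    data_out.write_vtu_with_pvtu_record("./", "u_continuous", 0, MPI_COMM_WORLD, 2);
```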

On Wednesday, August 27, 2025 at 7:33:27 AM UTC+2 [email protected] wrote:

>
> An alternative that could maybe be better posed would be to assemble an L2 
> projection with the right-hand side being the DG values at the quadrature 
> point. This would give you the "optimal" fit of the DG results with your CG 
> scheme, which may not always coincide with the averaging of the nodal 
> values as is done when using fe_tools_interpolate :)
>
> On Wednesday, August 27, 2025 at 1:35:30 a.m. UTC+2 Wolfgang Bangerth 
> wrote:
>
>>
>> Daniel: 
>>
>> > I would like to generate a continuous solution of the final state. I 
>> > tried to do this by interpolating from `FE_DGQ` (any degree) to `FE_Q{1}` 
>> > using `FETools::interpolate`. This works as long as there is only one 
>> > MPI process. With more than one process, I get significant errors on the 
>> > borders between processes. 
>> > 
>> > I attached a minimal example of my code. I create one dofhandler each 
>> > for DGQ and Q elements. I create some made-up discontinuous solution 
>> > using the DGQ dofhandler for illustration; normally this would be output 
>> > from my solver. Then I interpolate to Q{1} by first creating a ghosted 
>> > vector from the discontinuous solution and a non-ghosted result vector 
>> > for the continuous solution and calling `FETools::interpolate` with 
>> > them. I write the discontinuous and continuous solutions to VTK files. 
>> > 
>> > I read somewhere (can't find it now, but it makes sense anyway) that the 
>> > input vector of `interpolate` needs to be ghosted and the output vector 
>> > non-ghosted. But the ghosts don't seem to make a difference. Normally I 
>> > would use the ghost indices from 
>> > `DoFTools::extract_locally_relevant_dofs`. I tried adding ALL indices 
>> > (`add_range(0, n_dofs)`) as ghosts, or no ghosts at all (clear index 
>> > set), and got the same numerical errors. I also tried calling 
>> > `Vector::update_ghost_values` manually, but I think Vector::operator= 
>> > does that implicitly anyway? 
>> > 
>> > I plotted the continuous and discontinuous solutions in Paraview (using 
>> > the WarpByScalar filter). With one process, it's a perfect straight line 
>> > through the cell centers. With two or more processes (pictured below: 
>> > two processes) there is a kink in the middle where the processes meet. 
>> > interpolation_n2.png 
>> > 
>> > Am I doing something wrong? Is there a different/better way to achieve 
>> > what I need? 
>>
>> I haven't explored this in detail, but I think what you see is 
>> ultimately a result of the fact that what you are doing is not a 
>> well-defined problem (though I admit that it could be done better). 
>>
>> At its core, you are interpolating onto a continuous finite element 
>> space that has its nodes at exactly those locations where the function 
>> you are trying to interpolate is discontinuous. The algorithm behind 
>> FETools::interpolate() has to somehow choose which of the possible 
>> values of the input function it wants to assign each output DoF. The way 
>> it does this is that it loops over all locally owned cells 
>>
>> https://github.com/dealii/dealii/blob/master/include/deal.II/fe/fe_tools_interpolate.templates.h#L146-L148
>>  
>> and for each DoF on these cells, if the DoF is locally owned 
>>
>> https://github.com/dealii/dealii/blob/master/include/deal.II/fe/fe_tools_interpolate.templates.h#L202-L206
>>  
>> it adds the value of the input function to the output vector (also 
>> keeping track of how often it added something to the output vector): 
>>
>> https://github.com/dealii/dealii/blob/master/include/deal.II/fe/fe_tools_interpolate.templates.h#L208-L212
>>  
>>
>> In your case, this means that for every DoF in the interior of a 
>> processor's subdomain, we write into the output vector more than once 
>> and then we take the average of all of these values: 
>>
>> https://github.com/dealii/dealii/blob/master/include/deal.II/fe/fe_tools_interpolate.templates.h#L234-L249
>>  
>> This makes sure that the output vector has *averages* of the inputs at 
>> node positions. But for DoFs that lie at the interfaces between 
>> processes, we only get contributions from the MPI process that owns the 
>> DoF, and so you're not averaging over the contributions from all 
>> adjacent cells but only those adjacent cells owned by that process (see 
>> line 206). 
>>
>> One could consider this undesirable: we should really be adding up the 
>> contributions from all adjacent cells. You could try what happens if you remove the 
>> if-condition in line 206. In the sequential case, this should make no 
>> difference. In the parallel case, I think it should achieve what you are 
>> trying to do. Would you want to try that out, and if it works as 
>> suggested make that into a patch to deal.II? 
>>
>> Best 
>> W. 
>>
>
