Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2021-01-09 Thread Wolfgang Bangerth



Thanks to all of you. Following Marc's suggestion, I was able to write a small
test that seems to do what I want. But I have a few questions (the test code is
attached):


  *   cell->set_dof_values(local_data, m_completely_distributed_solution); 

Does set_dof_values set entries in m_completely_distributed_solution that are 
not owned by the current rank?


No. You determine up front, upon construction of the vector, which elements are
locally owned, which are ghosted, and which are stored elsewhere. Writing into
entries does not change this decision.
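
For concreteness, a minimal sketch of how such a pair of vectors is typically
set up (not an excerpt from the attached test; dof_handler and mpi_communicator
stand for the corresponding objects there):

#include <deal.II/base/index_set.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/lac/petsc_vector.h>

// Ownership and ghost layout are fixed when the vectors are created,
// not when entries are written later.
const dealii::IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs();
dealii::IndexSet       locally_relevant_dofs;
dealii::DoFTools::extract_locally_relevant_dofs(dof_handler,
                                                locally_relevant_dofs);

// Non-ghosted vector: each rank stores only its locally owned entries.
dealii::PETScWrappers::MPI::Vector completely_distributed_solution(
  locally_owned_dofs, mpi_communicator);

// Ghosted vector: the locally owned entries plus read-only ghost entries.
dealii::PETScWrappers::MPI::Vector locally_relevant_solution(
  locally_owned_dofs, locally_relevant_dofs, mpi_communicator);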




  * m_completely_distributed_solution.compress(VectorOperation::insert);

After set_dof_values, if I call compress(VectorOperation::insert) on that
PETSc vector, does that mean that I now have entries in that distributed
vector that are not owned by the current rank?


Same here.



  * m_locally_relevant_solution = m_completely_distributed_solution;

Will this assignment of a distributed vector to a locally relevant vector 
correctly set the ghost values?


Yes. That's the point :-)
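
For concreteness, the three steps above chained together, using the names from
the attached test (a sketch, not a quote of the test):

// Write the interpolated cell values into the non-ghosted vector.
cell->set_dof_values(local_data, m_completely_distributed_solution);

// Finish the write phase; the VectorOperation has to match the kind of
// writes done above (plain insertion here, not accumulation).
m_completely_distributed_solution.compress(VectorOperation::insert);

// Copying the non-ghosted vector into the ghosted one imports the ghost
// entries, which can afterwards be read on ghost cells.
m_locally_relevant_solution = m_completely_distributed_solution;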


If this test code is correct, it will be great to add this as a test to 
deal.ii. My current work will be dependent on this part working correctly, and 
it will be nice to have a test that can alert us if something changes in the 
future.


We're always happy to add more tests to the test suite!

Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2021-01-09 Thread Kaushik Das
Hello Prof. Bangerth and others,

Thanks to all of you. Following Marc's suggestion, I was able to write a small
test that seems to do what I want. But I have a few questions (the test code
is attached):

   -  cell->set_dof_values(local_data, m_completely_distributed_solution);

Does set_dof_values set entries in m_completely_distributed_solution that
are not owned by the current rank?

   - m_completely_distributed_solution.compress(VectorOperation::insert);

After set_dof_values, if I call compress(VectorOperation::insert) on that
PETSc vector, does that mean that I now have entries in that
distributed vector that are not owned by the current rank?

   - m_locally_relevant_solution = m_completely_distributed_solution;

Will this assignment of a distributed vector to a locally relevant vector
correctly set the ghost values?

If this test code is correct, it will be great to add this as a test to
deal.ii. My current work will be dependent on this part working correctly,
and it will be nice to have a test that can alert us if something changes
in the future.

Thank you very much,
Kaushik


On Mon, Jan 4, 2021 at 4:54 PM Wolfgang Bangerth 
wrote:

>
> Kaushik
> Marc and others have already answered the technical details, so just one
> overall comment:
>
> > Let me explain what I am trying to do and why.
> > I want to solve a transient heat transfer problem of the additive
> > manufacturing (AM) process. In AM processes, metal powder is deposited
> in
> > layers, and then a laser source scans each layer and melts and bonds the
> > powder to the layer underneath it. To simulate this layer by layer
> process, I
> > want to start with a grid that covers all the layers, but initially,
> only the
> > bottom-most layer is active and all other layers are inactive, and then
> as
> > time progresses, I want to activate one layer at a time. I read on this
> > mailing list that cell "birth" or "activation" can be done by changing a
> cell
> > from FE_Nothing to FE_Q using p-refinement. I was trying to keep all
> cells of
> > the grid initially to FE_Nothing except the bottom-most layer. And then
> > convert one layer at a time to FE_Q. My questions are:
> > 1. Does this make sense?
>
> Yes, this is how I would do things as well. It is quite possible that
> nobody
> has ever tried this, and that some of the steps don't work without further
> modification -- but whatever doesn't work should be treated as either a
> missing feature, or a bug. The general approach is sound.
>
> When I try to build a code with a work flow that is not quite standard, I
> (like you) frequently run into things that don't quite work. My usual
> approach
> is then to write a small and self-contained test case that illustrates the
> issue without the overhead of the actual application. Most of the time,
> one
> can show the issue with <100 lines of code. This then goes into a github
> issue
> so that one can write a fix for this particular problem without having to
> understand the overall code architecture, the application, etc. I would
> encourage you to follow this kind of work-flow!
>
> Best
>   W.
>
> --
> 
> Wolfgang Bangerth  email: bange...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
> --
> The deal.II project is located at http://www.dealii.org/
> For mailing list/forum options, see
> https://groups.google.com/d/forum/dealii?hl=en
> ---
> You received this message because you are subscribed to a topic in the
> Google Groups "deal.II User Group" group.
> To unsubscribe from this topic, visit
> https://groups.google.com/d/topic/dealii/ssEva6C2PU8/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> dealii+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/dealii/5ebefd8a-5dde-603e-76d1-6967c7783504%40colostate.edu
> .
>

#include 
#include 
#include 
#include 
#include 
#include 
#include 

#include 
#include 
#include 

#include 

#include 
#include 
#include 
#include 
#include 
#include 

#include 
#include 
#include 

#include 
#include 
#include 
#include 

//
//  Including cell_data_transfer.h did not work. Had to include
//  cell_data_transfer.templates.h. Why?
//
//#include <deal.II/distributed/cell_data_transfer.h>
#include <deal.II/distributed/cell_data_transfer.templates.h>

namespace LA
{
#if defined(DEAL_II_WITH_PETSC) && !defined(DEAL_II_PETSC_WITH_COMPLEX) && \

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2021-01-04 Thread Wolfgang Bangerth



Kaushik
Marc and others have already answered the technical details, so just one 
overall comment:



Let me explain what I am trying to do and why.
I want to solve a transient heat transfer problem of the additive 
manufacturing (AM) process. In AM processes, metal powder is deposited in 
layers, and then a laser source scans each layer and melts and bonds the 
powder to the layer underneath it. To simulate this layer by layer process, I 
want to start with a grid that covers all the layers, but initially, only the 
bottom-most layer is active and all other layers are inactive, and then as 
time progresses, I want to activate one layer at a time. I read on this 
mailing list that cell "birth" or "activation" can be done by changing a cell 
from FE_Nothing to FE_Q using p-refinement. I was trying to keep all cells of 
the grid initially to FE_Nothing except the bottom-most layer. And then 
convert one layer at a time to FE_Q. My questions are:

1. Does this make sense?


Yes, this is how I would do things as well. It is quite possible that nobody 
has ever tried this, and that some of the steps don't work without further 
modification -- but whatever doesn't work should be treated as either a 
missing feature, or a bug. The general approach is sound.


When I try to build a code with a work flow that is not quite standard, I 
(like you) frequently run into things that don't quite work. My usual approach 
is then to write a small and self-contained test case that illustrates the 
issue without the overhead of the actual application. Most of the time, one 
can show the issue with <100 lines of code. This then goes into a github issue 
so that one can write a fix for this particular problem without having to 
understand the overall code architecture, the application, etc. I would 
encourage you to follow this kind of work-flow!


Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2021-01-04 Thread Bruno Turcksin
Kaushik,

Oh wow, this is a small world :D Unfortunately, the PETSc solvers require a
PETSc vector, but I think it should be straightforward to add
compress(min) to the PETSc vector. So that's a possibility if copying
the solution takes too much time.

Best,

Bruno
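
If the copying route is taken instead, here is a minimal sketch of moving the
locally owned entries between the two vector types; all names are placeholders,
and both vectors are assumed to be non-ghosted and built on the same locally
owned IndexSet:

#include <deal.II/lac/la_parallel_vector.h>
#include <deal.II/lac/petsc_vector.h>

// deal.II vector used for the transfer / compress(min) step ...
dealii::LinearAlgebra::distributed::Vector<double> la_solution(
  locally_owned_dofs, mpi_communicator);
// ... and the PETSc vector handed to the solver.
dealii::PETScWrappers::MPI::Vector petsc_solution(locally_owned_dofs,
                                                  mpi_communicator);

// Copy the owned entries into the PETSc vector before the solve ...
for (const auto i : locally_owned_dofs)
  petsc_solution(i) = la_solution(i);
petsc_solution.compress(dealii::VectorOperation::insert);

// ... and copy the result back afterwards.
for (const auto i : locally_owned_dofs)
  la_solution(i) = petsc_solution(i);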


On Sun, Jan 3, 2021 at 21:42, Kaushik Das wrote:
>
> Hi Bruno,
> Thanks for the help. I just saw in your online profile that you were at Texas
> A&M; I was a post-doc at Aerospace TAMU 2009-12, small world!
> Can I use a dealii::LinearAlgebra::distributed::Vector with the PETSc solver?
> If not, then I think I have to copy the solution from a PETSc vector to a
> deal.II vector after the solve in every time step.
> thanks,
> Kaushik
>
> On Sun, Jan 3, 2021 at 8:35 PM Bruno Turcksin  
> wrote:
>>
>> Kaushik,
>>
>> I am working on the exact same problem for the same application :-)
>> PETSc vectors do not support compress(min). You need to use a
>> dealii::LinearAlgebra::distributed::Vector instead.
>>
>> Best,
>>
>> Bruno
>>
>> On Sat, Jan 2, 2021 at 18:38, Kaushik Das wrote:
>> >
>> > Hi Marc,
>> > I tried using cell data transfer as you suggested.
>> > But I am having trouble in calling compress after getting the transferred 
>> > data to a PETSc Vector:
>> > My test code is attached. My confusion is mainly when to call to compress 
>> > after the cell data transfer.
>> >
>> > m_completely_distributed_solution = 
>> > std::numeric_limits::max();
>> >
>> > std::vector> 
>> > transferred_data(m_triangulation.n_active_cells());
>> > m_cell_data_trans.unpack(transferred_data);
>> >
>> > i = 0;
>> > for (auto  : m_dof_handler.active_cell_iterators())
>> > {
>> > std::vector cell_data = transferred_data[i];
>> > if (cell_data.size() != 0)
>> > {
>> > Vector 
>> > local_data(GeometryInfo<2>::vertices_per_cell);
>> > for (unsigned int j = 0; j < 
>> > GeometryInfo<2>::vertices_per_cell; j++)
>> > {
>> > local_data = cell_data[j];
>> > }
>> > cell->set_dof_values_by_interpolation(local_data, 
>> > m_completely_distributed_solution);
>> > }
>> > i++;
>> > }
>> >
>> > 
>> > //m_completely_distributed_solution.compress(VectorOperation::min); // 
>> > calling compress aborts here
>> >
>> > for (unsigned int i = 0; i < 
>> > m_completely_distributed_solution.size(); ++i)
>> > if (m_completely_distributed_solution(i) == 
>> > std::numeric_limits::max())
>> > m_completely_distributed_solution(i) = 2; // 2 is the 
>> > initial condition I want apply to newly created dofs
>> >
>> >// 
>> > m_completely_distributed_solution.compress(VectorOperation::min); // 
>> > aborts here too Missing compress() or calling with wrong VectorOperation 
>> > argument.
>> > // exchange_cell_data_to_ghosts ??
>> > m_locally_relevant_solution = m_completely_distributed_solution;
>> >
>> >
>> > Calling compress from either of those two places results in exceptions:
>> > An error occurred in line <387> of file 
>> >  in function
>> > void 
>> > dealii::PETScWrappers::VectorBase::compress(dealii::VectorOperation::values)
>> > The violated condition was:
>> > last_action == ::dealii::VectorOperation::unknown || last_action == 
>> > operation
>> > Additional information:
>> > Missing compress() or calling with wrong VectorOperation argument.
>> >
>> >
>> >
>> > Thank you very much for your help.
>> > Happy New year,
>> > Kaushik
>> >
>> >
>> >
>> > On Sat, Jan 2, 2021 at 9:55 AM Kaushik Das  wrote:
>> >>
>> >> Thank you.
>> >> I will try CellDataTransfer method as you suggested.
>> >> The test code that I attached earlier has a mistake. 
>> >> cell->set_active_fe_index(0) should be protected by a if 
>> >> (cell->is_locally_owned()). I have attached a corrected test here.
>> >> Thanks,
>> >> Kaushik
>> >>
>> >> On Thu, Dec 31, 2020 at 8:05 PM Marc Fehling  wrote:
>> >>>
>> >>> Kaushik,
>> >>>
>> >>> in addition to what I just wrote, your example from above has revealed a 
>> >>> bug in the `p::d::SolutionTransfer` class that Wolfgang and I were 
>> >>> discussing in the course of this chatlog. Thank you very much for this! 
>> >>> We are working on a solution for this issue.
>> >>>
>> >>> I would encourage you to use the `p::d::CellDataTransfer` class for your 
>> >>> use case as described in the last message.
>> >>>
>> >>> Marc
>> >>>
>> >>> On Thursday, December 31, 2020 at 6:02:00 PM UTC-7 Marc Fehling wrote:
>> 
>>  Hi Kaushik,
>> 
>>  Yes, this is possible by changing a cell from FE_Nothing to FE_Q using 
>>  p-refinement.
>> 
>>  You can do this with the method described in #11132: Imitate what 
>>  p::d::SolutionTransfer is doing with the more versatile tool 
>>  p::d::CellDataTransfer and consider the following:
>> 
>>  Prepare a data 

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2021-01-03 Thread Kaushik Das
Hi Bruno,
Thanks for the help. I just saw in your online profile that you were at
Texas A&M; I was a post-doc at Aerospace TAMU 2009-12, small world!
Can I use a dealii::LinearAlgebra::distributed::Vector with the PETSc
solver? If not, then I think I have to copy the solution from a PETSc
vector to a deal.II vector after the solve in every time step.
thanks,
Kaushik

On Sun, Jan 3, 2021 at 8:35 PM Bruno Turcksin 
wrote:

> Kaushik,
>
> I am working on the exact same problem for the same application :-)
> PETSc vectors do not support compress(min). You need to use a
> dealii::LinearAlgebra::distributed::Vector instead.
>
> Best,
>
> Bruno
>
> On Sat, Jan 2, 2021 at 18:38, Kaushik Das wrote:
> >
> > Hi Marc,
> > I tried using cell data transfer as you suggested.
> > But I am having trouble in calling compress after getting the
> transferred data to a PETSc Vector:
> > My test code is attached. My confusion is mainly when to call to
> compress after the cell data transfer.
> >
> > m_completely_distributed_solution =
> std::numeric_limits::max();
> >
> > std::vector>
> transferred_data(m_triangulation.n_active_cells());
> > m_cell_data_trans.unpack(transferred_data);
> >
> > i = 0;
> > for (auto  : m_dof_handler.active_cell_iterators())
> > {
> > std::vector cell_data = transferred_data[i];
> > if (cell_data.size() != 0)
> > {
> > Vector
> local_data(GeometryInfo<2>::vertices_per_cell);
> > for (unsigned int j = 0; j <
> GeometryInfo<2>::vertices_per_cell; j++)
> > {
> > local_data = cell_data[j];
> > }
> > cell->set_dof_values_by_interpolation(local_data,
> m_completely_distributed_solution);
> > }
> > i++;
> > }
> >
> >
>  //m_completely_distributed_solution.compress(VectorOperation::min); //
> calling compress aborts here
> >
> > for (unsigned int i = 0; i <
> m_completely_distributed_solution.size(); ++i)
> > if (m_completely_distributed_solution(i) ==
> std::numeric_limits::max())
> > m_completely_distributed_solution(i) = 2; // 2 is the
> initial condition I want apply to newly created dofs
> >
> >//
> m_completely_distributed_solution.compress(VectorOperation::min); // aborts
> here too Missing compress() or calling with wrong VectorOperation argument.
> > // exchange_cell_data_to_ghosts ??
> > m_locally_relevant_solution = m_completely_distributed_solution;
> >
> >
> > Calling compress from either of those two places results in exceptions:
> > An error occurred in line <387> of file
>  in function
> > void
> dealii::PETScWrappers::VectorBase::compress(dealii::VectorOperation::values)
> > The violated condition was:
> > last_action == ::dealii::VectorOperation::unknown || last_action ==
> operation
> > Additional information:
> > Missing compress() or calling with wrong VectorOperation argument.
> >
> >
> >
> > Thank you very much for your help.
> > Happy New year,
> > Kaushik
> >
> >
> >
> > On Sat, Jan 2, 2021 at 9:55 AM Kaushik Das  wrote:
> >>
> >> Thank you.
> >> I will try CellDataTransfer method as you suggested.
> >> The test code that I attached earlier has a mistake.
> cell->set_active_fe_index(0) should be protected by a if
> (cell->is_locally_owned()). I have attached a corrected test here.
> >> Thanks,
> >> Kaushik
> >>
> >> On Thu, Dec 31, 2020 at 8:05 PM Marc Fehling 
> wrote:
> >>>
> >>> Kaushik,
> >>>
> >>> in addition to what I just wrote, your example from above has revealed
> a bug in the `p::d::SolutionTransfer` class that Wolfgang and I were
> discussing in the course of this chatlog. Thank you very much for this! We
> are working on a solution for this issue.
> >>>
> >>> I would encourage you to use the `p::d::CellDataTransfer` class for
> your use case as described in the last message.
> >>>
> >>> Marc
> >>>
> >>> On Thursday, December 31, 2020 at 6:02:00 PM UTC-7 Marc Fehling wrote:
> 
>  Hi Kaushik,
> 
>  Yes, this is possible by changing a cell from FE_Nothing to FE_Q
> using p-refinement.
> 
>  You can do this with the method described in #11132: Imitate what
> p::d::SolutionTransfer is doing with the more versatile tool
> p::d::CellDataTransfer and consider the following:
> 
>  Prepare a data container like `Vector>` where the
> outer layer represents each cell in the current mesh, and the inner layer
> corresponds to the dof values inside each cell.
>  Prepare data for the updated grid on the old grid.
> 
>  On already activated cells, store dof values with
> `cell->get_interpolated_dof_values()`.
>  On all other cells, store an empty container.
> 
>  Register your data container for and execute coarsening and
> refinement.
>  Interpolate the old solution on the new mesh.
> 
>  Initialize your new solution vector with 

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2021-01-03 Thread Bruno Turcksin
Kaushik,

I am working on the exact same problem for the same application :-)
PETSc vectors do not support compress(min). You need to use a
dealii::LinearAlgebra::distributed::Vector instead.

Best,

Bruno
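
For reference, a minimal sketch of the compress(min) step with such a vector,
using the sentinel initialization from Marc's recipe; the index sets and
communicator are placeholders, and the per-cell writes are only indicated:

#include <deal.II/lac/la_parallel_vector.h>
#include <limits>

// Vector with writable ghost entries, initialized with a sentinel so that
// dofs nobody writes to can be recognized afterwards.
dealii::LinearAlgebra::distributed::Vector<double> solution(
  locally_owned_dofs, locally_relevant_dofs, mpi_communicator);
solution = std::numeric_limits<double>::max();

// ... per-cell writes, e.g. cell->set_dof_values_by_interpolation(...), ...

// Taking the minimum across ranks keeps the written values and drops the
// sentinel wherever at least one rank contributed a value.
solution.compress(dealii::VectorOperation::min);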

On Sat, Jan 2, 2021 at 18:38, Kaushik Das wrote:
>
> Hi Marc,
> I tried using cell data transfer as you suggested.
> But I am having trouble in calling compress after getting the transferred 
> data to a PETSc Vector:
> My test code is attached. My confusion is mainly when to call to compress 
> after the cell data transfer.
>
> m_completely_distributed_solution = 
> std::numeric_limits::max();
>
> std::vector> 
> transferred_data(m_triangulation.n_active_cells());
> m_cell_data_trans.unpack(transferred_data);
>
> i = 0;
> for (auto  : m_dof_handler.active_cell_iterators())
> {
> std::vector cell_data = transferred_data[i];
> if (cell_data.size() != 0)
> {
> Vector local_data(GeometryInfo<2>::vertices_per_cell);
> for (unsigned int j = 0; j < 
> GeometryInfo<2>::vertices_per_cell; j++)
> {
> local_data = cell_data[j];
> }
> cell->set_dof_values_by_interpolation(local_data, 
> m_completely_distributed_solution);
> }
> i++;
> }
>
> //m_completely_distributed_solution.compress(VectorOperation::min); 
> // calling compress aborts here
>
> for (unsigned int i = 0; i < 
> m_completely_distributed_solution.size(); ++i)
> if (m_completely_distributed_solution(i) == 
> std::numeric_limits::max())
> m_completely_distributed_solution(i) = 2; // 2 is the initial 
> condition I want apply to newly created dofs
>
>// m_completely_distributed_solution.compress(VectorOperation::min); 
> // aborts here too Missing compress() or calling with wrong VectorOperation 
> argument.
> // exchange_cell_data_to_ghosts ??
> m_locally_relevant_solution = m_completely_distributed_solution;
>
>
> Calling compress from either of those two places results in exceptions:
> An error occurred in line <387> of file 
>  in function
> void 
> dealii::PETScWrappers::VectorBase::compress(dealii::VectorOperation::values)
> The violated condition was:
> last_action == ::dealii::VectorOperation::unknown || last_action == 
> operation
> Additional information:
> Missing compress() or calling with wrong VectorOperation argument.
>
>
>
> Thank you very much for your help.
> Happy New year,
> Kaushik
>
>
>
> On Sat, Jan 2, 2021 at 9:55 AM Kaushik Das  wrote:
>>
>> Thank you.
>> I will try CellDataTransfer method as you suggested.
>> The test code that I attached earlier has a mistake. 
>> cell->set_active_fe_index(0) should be protected by a if 
>> (cell->is_locally_owned()). I have attached a corrected test here.
>> Thanks,
>> Kaushik
>>
>> On Thu, Dec 31, 2020 at 8:05 PM Marc Fehling  wrote:
>>>
>>> Kaushik,
>>>
>>> in addition to what I just wrote, your example from above has revealed a 
>>> bug in the `p::d::SolutionTransfer` class that Wolfgang and I were 
>>> discussing in the course of this chatlog. Thank you very much for this! We 
>>> are working on a solution for this issue.
>>>
>>> I would encourage you to use the `p::d::CellDataTransfer` class for your 
>>> use case as described in the last message.
>>>
>>> Marc
>>>
>>> On Thursday, December 31, 2020 at 6:02:00 PM UTC-7 Marc Fehling wrote:

 Hi Kaushik,

 Yes, this is possible by changing a cell from FE_Nothing to FE_Q using 
 p-refinement.

 You can do this with the method described in #11132: Imitate what 
 p::d::SolutionTransfer is doing with the more versatile tool 
 p::d::CellDataTransfer and consider the following:

 Prepare a data container like `Vector>` where the outer 
 layer represents each cell in the current mesh, and the inner layer 
 corresponds to the dof values inside each cell.
 Prepare data for the updated grid on the old grid.

 On already activated cells, store dof values with 
 `cell->get_interpolated_dof_values()`.
 On all other cells, store an empty container.

 Register your data container for and execute coarsening and refinement.
 Interpolate the old solution on the new mesh.

 Initialize your new solution vector with invalid values 
 `std::numeric_limits::infinity`.
 On previously activated cells, extract the stored data with 
 `cell->set_dof_values_by_interpolation()`.
 Skip all other cells which only have an empty container.

 On non-ghosted solution vectors, call `compress(VectorOperation::min)` to 
 get correct values on ghost cells.

 This leaves you with a correctly interpolated solution on the new mesh, 
 where all newly activated dofs have the value `infinity`.

 You can now (and not earlier!!!) assign the values you have 

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2021-01-02 Thread Kaushik Das
Hi Marc,
I tried using cell data transfer as you suggested.
But I am having trouble calling compress after getting the
transferred data into a PETSc vector.
My test code is attached. My confusion is mainly about when to call
compress after the cell data transfer.

m_completely_distributed_solution =
  std::numeric_limits<double>::max();

std::vector<std::vector<double>>
  transferred_data(m_triangulation.n_active_cells());
m_cell_data_trans.unpack(transferred_data);

i = 0;
for (auto &cell : m_dof_handler.active_cell_iterators())
{
    std::vector<double> cell_data = transferred_data[i];
    if (cell_data.size() != 0)
    {
        Vector<double> local_data(GeometryInfo<2>::vertices_per_cell);
        for (unsigned int j = 0; j < GeometryInfo<2>::vertices_per_cell; j++)
        {
            local_data[j] = cell_data[j];
        }
        cell->set_dof_values_by_interpolation(local_data,
                                              m_completely_distributed_solution);
    }
    i++;
}

// m_completely_distributed_solution.compress(VectorOperation::min);
// calling compress aborts here

for (unsigned int i = 0; i < m_completely_distributed_solution.size(); ++i)
    if (m_completely_distributed_solution(i) ==
        std::numeric_limits<double>::max())
        m_completely_distributed_solution(i) = 2; // 2 is the initial condition
                                                  // I want to apply to newly
                                                  // created dofs

// m_completely_distributed_solution.compress(VectorOperation::min);
// aborts here too: Missing compress() or calling with wrong VectorOperation
// argument.
// exchange_cell_data_to_ghosts ??
m_locally_relevant_solution = m_completely_distributed_solution;


Calling compress from either of those two places results in exceptions:
An error occurred in line <387> of file
 in function
void
dealii::PETScWrappers::VectorBase::compress(dealii::VectorOperation::values)
The violated condition was:
last_action == ::dealii::VectorOperation::unknown || last_action ==
operation
Additional information:
Missing compress() or calling with wrong VectorOperation argument.



Thank you very much for your help.
Happy New year,
Kaushik



On Sat, Jan 2, 2021 at 9:55 AM Kaushik Das  wrote:

> Thank you.
> I will try CellDataTransfer method as you suggested.
> The test code that I attached earlier has a
> mistake. cell->set_active_fe_index(0) should be protected by a if
> (cell->is_locally_owned()). I have attached a corrected test here.
> Thanks,
> Kaushik
>
> On Thu, Dec 31, 2020 at 8:05 PM Marc Fehling  wrote:
>
>> Kaushik,
>>
>> in addition to what I just wrote, your example from above has revealed a
>> bug in the `p::d::SolutionTransfer` class that Wolfgang and I were
>> discussing in the course of this chatlog. Thank you very much for this! We
>> are working on a solution for this issue.
>>
>> I would encourage you to use the `p::d::CellDataTransfer` class for your
>> use case as described in the last message.
>>
>> Marc
>>
>> On Thursday, December 31, 2020 at 6:02:00 PM UTC-7 Marc Fehling wrote:
>>
>>> Hi Kaushik,
>>>
>>> Yes, this is possible by changing a cell from FE_Nothing to FE_Q using
>>> p-refinement.
>>>
>>> You can do this with the method described in #11132
>>> : Imitate what
>>> p::d::SolutionTransfer is doing with the more versatile tool
>>> p::d::CellDataTransfer and consider the following:
>>>
>>>- Prepare a data container like `Vector>` where the
>>>outer layer represents each cell in the current mesh, and the inner layer
>>>corresponds to the dof values inside each cell.
>>>- Prepare data for the updated grid on the old grid.
>>>   - On already activated cells, store dof values with
>>>   `cell->get_interpolated_dof_values()`.
>>>   - On all other cells, store an empty container.
>>>   - Register your data container for and execute coarsening and
>>>refinement.
>>>- Interpolate the old solution on the new mesh.
>>>- Initialize your new solution vector with invalid values
>>>   `std::numeric_limits::infinity`.
>>>   - On previously activated cells, extract the stored data with
>>>   `cell->set_dof_values_by_interpolation()`.
>>>   - Skip all other cells which only have an empty container.
>>>- On non-ghosted solution vectors, call
>>>`compress(VectorOperation::min)` to get correct values on ghost cells.
>>>
>>> This leaves you with a correctly interpolated solution on the new mesh,
>>> where all newly activated dofs have the value `infinity`.
>>>
>>> You can now (and not earlier!!!) assign the values you have in mind for
>>> the newly activated dofs. You may want to exchange data on ghost cells once
>>> more with `GridTools::exchange_cell_data_to_ghosts()`.
>>>
>>> This is a fairly new feature in the library for a very specific use
>>> case, so there is no dedicated class for transferring solutions across
>>> finite element activation yet. If you 

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2021-01-02 Thread Kaushik Das
Thank you.
I will try the CellDataTransfer method as you suggested.
The test code that I attached earlier has a
mistake: cell->set_active_fe_index(0) should be protected by an if
(cell->is_locally_owned()). I have attached a corrected test here.
Thanks,
Kaushik

On Thu, Dec 31, 2020 at 8:05 PM Marc Fehling  wrote:

> Kaushik,
>
> in addition to what I just wrote, your example from above has revealed a
> bug in the `p::d::SolutionTransfer` class that Wolfgang and I were
> discussing in the course of this chatlog. Thank you very much for this! We
> are working on a solution for this issue.
>
> I would encourage you to use the `p::d::CellDataTransfer` class for your
> use case as described in the last message.
>
> Marc
>
> On Thursday, December 31, 2020 at 6:02:00 PM UTC-7 Marc Fehling wrote:
>
>> Hi Kaushik,
>>
>> Yes, this is possible by changing a cell from FE_Nothing to FE_Q using
>> p-refinement.
>>
>> You can do this with the method described in #11132
>> : Imitate what
>> p::d::SolutionTransfer is doing with the more versatile tool
>> p::d::CellDataTransfer and consider the following:
>>
>>- Prepare a data container like `Vector>` where the
>>outer layer represents each cell in the current mesh, and the inner layer
>>corresponds to the dof values inside each cell.
>>- Prepare data for the updated grid on the old grid.
>>   - On already activated cells, store dof values with
>>   `cell->get_interpolated_dof_values()`.
>>   - On all other cells, store an empty container.
>>   - Register your data container for and execute coarsening and
>>refinement.
>>- Interpolate the old solution on the new mesh.
>>- Initialize your new solution vector with invalid values
>>   `std::numeric_limits::infinity`.
>>   - On previously activated cells, extract the stored data with
>>   `cell->set_dof_values_by_interpolation()`.
>>   - Skip all other cells which only have an empty container.
>>- On non-ghosted solution vectors, call
>>`compress(VectorOperation::min)` to get correct values on ghost cells.
>>
>> This leaves you with a correctly interpolated solution on the new mesh,
>> where all newly activated dofs have the value `infinity`.
>>
>> You can now (and not earlier!!!) assign the values you have in mind for
>> the newly activated dofs. You may want to exchange data on ghost cells once
>> more with `GridTools::exchange_cell_data_to_ghosts()`.
>>
>> This is a fairly new feature in the library for a very specific use case,
>> so there is no dedicated class for transferring solutions across finite
>> element activation yet. If you successfully manage to make this work, would
>> you be willing to turn this into a class for the deal.II library?
>>
>> Marc
>> On Wednesday, December 30, 2020 at 8:22:59 AM UTC-7 k.d...@gmail.com
>> wrote:
>>
>>> Hi all,
>>> Thank you for your reply.
>>> Let me explain what I am trying to do and why.
>>> I want to solve a transient heat transfer problem of the additive
>>> manufacturing (AM) process. In AM processes, metal powder is deposited in
>>> layers, and then a laser source scans each layer and melts and bonds the
>>> powder to the layer underneath it. To simulate this layer by layer process,
>>> I want to start with a grid that covers all the layers, but initially, only
>>> the bottom-most layer is active and all other layers are inactive, and then
>>> as time progresses, I want to activate one layer at a time. I read on this
>>> mailing list that cell "birth" or "activation" can be done by changing a
>>> cell from FE_Nothing to FE_Q using p-refinement. I was trying to keep all
>>> cells of the grid initially to FE_Nothing except the bottom-most layer. And
>>> then convert one layer at a time to FE_Q. My questions are:
>>> 1. Does this make sense?
>>> 2. I have to do two other things for this to work. (a) When a cell
>>> becomes FE_Q from FE_Nothing, dofs that are activating for the 1st time, I
>>> need to apply a non-zero initial value to those dofs. This is to simulation
>>> the metal powder deposited at a specified temperature,. e.g. room
>>> temperature. (b) the dofs, that were shared between a FE_Q and FE_Nothing
>>> cells before the p-refinement and now shared between FE_Q and FE_Nothing
>>> cells after refinement, should retrain their values from before the
>>> refinement.
>>>
>>> Another way to simulation this process would be to use a cell "awaking"
>>> process, instead of the cell "birth". I keep call cells FE_Q but apply a
>>> very low diffusivity to the cells of the layers that are not yet "awake".
>>> But this way, I have to solve a larger system in all time steps. I was
>>> hoping to save some computation time, by only forming a system consists of
>>> cells that are in the "active" layers only.
>>>
>>> Please let me if this makes sense? Is there any other method in deal.ii
>>> that can simulation such a process?
>>> Thank you very much and happy holidays.
>>> 

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-31 Thread Marc Fehling
Kaushik,

in addition to what I just wrote, your example from above has revealed a 
bug in the `p::d::SolutionTransfer` class that Wolfgang and I were 
discussing in the course of this chatlog. Thank you very much for this! We 
are working on a solution for this issue.

I would encourage you to use the `p::d::CellDataTransfer` class for your 
use case as described in the last message.

Marc

On Thursday, December 31, 2020 at 6:02:00 PM UTC-7 Marc Fehling wrote:

> Hi Kaushik,
>
> Yes, this is possible by changing a cell from FE_Nothing to FE_Q using 
> p-refinement.
>
> You can do this with the method described in #11132 
> : Imitate what 
> p::d::SolutionTransfer is doing with the more versatile tool 
> p::d::CellDataTransfer and consider the following:
>
>- Prepare a data container like `Vector>` where the 
>outer layer represents each cell in the current mesh, and the inner layer 
>corresponds to the dof values inside each cell.
>- Prepare data for the updated grid on the old grid. 
>   - On already activated cells, store dof values with 
>   `cell->get_interpolated_dof_values()`.
>   - On all other cells, store an empty container.
>   - Register your data container for and execute coarsening and 
>refinement.
>- Interpolate the old solution on the new mesh.
>- Initialize your new solution vector with invalid values 
>   `std::numeric_limits::infinity`.
>   - On previously activated cells, extract the stored data with 
>   `cell->set_dof_values_by_interpolation()`.
>   - Skip all other cells which only have an empty container.
>- On non-ghosted solution vectors, call 
>`compress(VectorOperation::min)` to get correct values on ghost cells.
>
> This leaves you with a correctly interpolated solution on the new mesh, 
> where all newly activated dofs have the value `infinity`.
>
> You can now (and not earlier!!!) assign the values you have in mind for 
> the newly activated dofs. You may want to exchange data on ghost cells once 
> more with `GridTools::exchange_cell_data_to_ghosts()`.
>
> This is a fairly new feature in the library for a very specific use case, 
> so there is no dedicated class for transferring solutions across finite 
> element activation yet. If you successfully manage to make this work, would 
> you be willing to turn this into a class for the deal.II library?
>
> Marc
> On Wednesday, December 30, 2020 at 8:22:59 AM UTC-7 k.d...@gmail.com 
> wrote:
>
>> Hi all,
>> Thank you for your reply. 
>> Let me explain what I am trying to do and why. 
>> I want to solve a transient heat transfer problem of the additive 
>> manufacturing (AM) process. In AM processes, metal powder is deposited in 
>> layers, and then a laser source scans each layer and melts and bonds the 
>> powder to the layer underneath it. To simulate this layer by layer process, 
>> I want to start with a grid that covers all the layers, but initially, only 
>> the bottom-most layer is active and all other layers are inactive, and then 
>> as time progresses, I want to activate one layer at a time. I read on this 
>> mailing list that cell "birth" or "activation" can be done by changing a 
>> cell from FE_Nothing to FE_Q using p-refinement. I was trying to keep all 
>> cells of the grid initially to FE_Nothing except the bottom-most layer. And 
>> then convert one layer at a time to FE_Q. My questions are:
>> 1. Does this make sense? 
>> 2. I have to do two other things for this to work. (a) When a cell 
>> becomes FE_Q from FE_Nothing, dofs that are activating for the 1st time, I 
>> need to apply a non-zero initial value to those dofs. This is to simulation 
>> the metal powder deposited at a specified temperature,. e.g. room 
>> temperature. (b) the dofs, that were shared between a FE_Q and FE_Nothing 
>> cells before the p-refinement and now shared between FE_Q and FE_Nothing 
>> cells after refinement, should retrain their values from before the 
>> refinement. 
>>
>> Another way to simulation this process would be to use a cell "awaking" 
>> process, instead of the cell "birth". I keep call cells FE_Q but apply a 
>> very low diffusivity to the cells of the layers that are not yet "awake". 
>> But this way, I have to solve a larger system in all time steps. I was 
>> hoping to save some computation time, by only forming a system consists of 
>> cells that are in the "active" layers only. 
>>
>> Please let me if this makes sense? Is there any other method in deal.ii 
>> that can simulation such a process? 
>> Thank you very much and happy holidays.
>> Kaushik 
>>
>>
>> On Tue, Dec 29, 2020 at 12:26 PM Wolfgang Bangerth  
>> wrote:
>>
>>> On 12/28/20 5:11 PM, Marc Fehling wrote:
>>> > 
>>> > In case a FE_Nothing has been configured to dominate, the solution 
>>> should be 
>>> > continuous on the interface if I understood correctly, i.e., zero on 
>>> the face. 
>>> > I will write a few tests to see if this is 

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-31 Thread Marc Fehling


Hi Kaushik,

Yes, this is possible by changing a cell from FE_Nothing to FE_Q using 
p-refinement.

You can do this with the method described in #11132 
: Imitate what 
p::d::SolutionTransfer is doing with the more versatile tool 
p::d::CellDataTransfer and consider the following:

   - Prepare a data container like `Vector<Vector<double>>` where the outer
     layer represents each cell in the current mesh, and the inner layer
     corresponds to the dof values inside each cell.
   - Prepare data for the updated grid on the old grid.
      - On already activated cells, store dof values with
        `cell->get_interpolated_dof_values()`.
      - On all other cells, store an empty container.
      - Register your data container for and execute coarsening and
        refinement.
   - Interpolate the old solution on the new mesh.
   - Initialize your new solution vector with invalid values
     `std::numeric_limits<double>::infinity`.
      - On previously activated cells, extract the stored data with
        `cell->set_dof_values_by_interpolation()`.
      - Skip all other cells which only have an empty container.
   - On non-ghosted solution vectors, call `compress(VectorOperation::min)`
     to get correct values on ghost cells.

This leaves you with a correctly interpolated solution on the new mesh, 
where all newly activated dofs have the value `infinity`.

You can now (and not earlier!!!) assign the values you have in mind for the 
newly activated dofs. You may want to exchange data on ghost cells once 
more with `GridTools::exchange_cell_data_to_ghosts()`.
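
A rough sketch of the whole recipe in code follows. It is assembled from the
steps above and the discussion in #11132, not taken from a deal.II class or
test; all object names (triangulation, dof_handler, fe_collection,
old_solution, locally_relevant_dofs, mpi_communicator, dim) are placeholders,
and the CellDataTransfer constructor may need additional arguments (e.g. for
variable-size per-cell data) depending on the library version:

#include <deal.II/distributed/cell_data_transfer.templates.h>
#include <deal.II/lac/la_parallel_vector.h>
#include <deal.II/lac/vector.h>
#include <limits>
#include <vector>

// 1) Pack: dof values on already activated (FE_Q) cells, empty data
//    elsewhere. old_solution is a ghosted vector holding the current solution.
std::vector<std::vector<double>> data_to_transfer(
  triangulation.n_active_cells());
for (const auto &cell : dof_handler.active_cell_iterators())
  if (cell->is_locally_owned() && cell->get_fe().dofs_per_cell > 0)
    {
      dealii::Vector<double> cell_values(cell->get_fe().dofs_per_cell);
      cell->get_interpolated_dof_values(old_solution, cell_values);
      data_to_transfer[cell->active_cell_index()] =
        std::vector<double>(cell_values.begin(), cell_values.end());
    }

// 2) Register the data, then adapt the mesh. Refinement flags and future FE
//    indices for the cells to be activated are set before executing this.
dealii::parallel::distributed::
  CellDataTransfer<dim, dim, std::vector<std::vector<double>>>
    cell_data_transfer(triangulation);
cell_data_transfer.prepare_for_coarsening_and_refinement(data_to_transfer);
triangulation.execute_coarsening_and_refinement();
dof_handler.distribute_dofs(fe_collection);

// 3) Unpack into a sentinel-initialized vector (locally_relevant_dofs is
//    recomputed after distribute_dofs()).
dealii::LinearAlgebra::distributed::Vector<double> new_solution(
  dof_handler.locally_owned_dofs(), locally_relevant_dofs, mpi_communicator);
new_solution = std::numeric_limits<double>::max();

std::vector<std::vector<double>> transferred_data(
  triangulation.n_active_cells());
cell_data_transfer.unpack(transferred_data);
for (const auto &cell : dof_handler.active_cell_iterators())
  if (cell->is_locally_owned())
    {
      const std::vector<double> &data =
        transferred_data[cell->active_cell_index()];
      if (!data.empty())
        {
          dealii::Vector<double> cell_values(data.begin(), data.end());
          cell->set_dof_values_by_interpolation(cell_values, new_solution);
        }
    }

// 4) Resolve the values on ghost interfaces; the sentinel survives only on
//    dofs of newly activated cells, which can now be given their initial
//    value (e.g. the powder temperature).
new_solution.compress(dealii::VectorOperation::min);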

This is a fairly new feature in the library for a very specific use case, 
so there is no dedicated class for transferring solutions across finite 
element activation yet. If you successfully manage to make this work, would 
you be willing to turn this into a class for the deal.II library?

Marc
On Wednesday, December 30, 2020 at 8:22:59 AM UTC-7 k.d...@gmail.com wrote:

> Hi all,
> Thank you for your reply. 
> Let me explain what I am trying to do and why. 
> I want to solve a transient heat transfer problem of the additive 
> manufacturing (AM) process. In AM processes, metal powder is deposited in 
> layers, and then a laser source scans each layer and melts and bonds the 
> powder to the layer underneath it. To simulate this layer by layer process, 
> I want to start with a grid that covers all the layers, but initially, only 
> the bottom-most layer is active and all other layers are inactive, and then 
> as time progresses, I want to activate one layer at a time. I read on this 
> mailing list that cell "birth" or "activation" can be done by changing a 
> cell from FE_Nothing to FE_Q using p-refinement. I was trying to keep all 
> cells of the grid initially to FE_Nothing except the bottom-most layer. And 
> then convert one layer at a time to FE_Q. My questions are:
> 1. Does this make sense? 
> 2. I have to do two other things for this to work. (a) When a cell becomes 
> FE_Q from FE_Nothing, dofs that are activating for the 1st time, I need to 
> apply a non-zero initial value to those dofs. This is to simulation the 
> metal powder deposited at a specified temperature,. e.g. room temperature. 
> (b) the dofs, that were shared between a FE_Q and FE_Nothing cells before 
> the p-refinement and now shared between FE_Q and FE_Nothing cells after 
> refinement, should retrain their values from before the refinement. 
>
> Another way to simulation this process would be to use a cell "awaking" 
> process, instead of the cell "birth". I keep call cells FE_Q but apply a 
> very low diffusivity to the cells of the layers that are not yet "awake". 
> But this way, I have to solve a larger system in all time steps. I was 
> hoping to save some computation time, by only forming a system consists of 
> cells that are in the "active" layers only. 
>
> Please let me if this makes sense? Is there any other method in deal.ii 
> that can simulation such a process? 
> Thank you very much and happy holidays.
> Kaushik 
>
>
> On Tue, Dec 29, 2020 at 12:26 PM Wolfgang Bangerth  
> wrote:
>
>> On 12/28/20 5:11 PM, Marc Fehling wrote:
>> > 
>> > In case a FE_Nothing has been configured to dominate, the solution 
>> should be 
>> > continuous on the interface if I understood correctly, i.e., zero on 
>> the face. 
>> > I will write a few tests to see if this is actually automatically the 
>> case in 
>> > user applications. If so, this check for domination will help other 
>> users to 
>> > avoid this pitfall :)
>> > 
>>
>> More tests = more better :-)
>> Cheers
>>   W.
>>
>> -- 
>> 
>> Wolfgang Bangerth  email: bang...@colostate.edu
>> www: http://www.math.colostate.edu/~bangerth/
>>
>> -- 
>> The deal.II project is located at http://www.dealii.org/
>> For mailing list/forum options, see 
>> 

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-30 Thread Kaushik Das
Hi all,
Thank you for your reply.
Let me explain what I am trying to do and why.
I want to solve a transient heat transfer problem of the additive
manufacturing (AM) process. In AM processes, metal powder is deposited in
layers, and then a laser source scans each layer and melts and bonds the
powder to the layer underneath it. To simulate this layer by layer process,
I want to start with a grid that covers all the layers, but initially, only
the bottom-most layer is active and all other layers are inactive, and then
as time progresses, I want to activate one layer at a time. I read on this
mailing list that cell "birth" or "activation" can be done by changing a
cell from FE_Nothing to FE_Q using p-refinement. I was trying to keep all
cells of the grid initially to FE_Nothing except the bottom-most layer. And
then convert one layer at a time to FE_Q. My questions are:
1. Does this make sense?
2. I have to do two other things for this to work. (a) When a cell becomes
FE_Q from FE_Nothing, I need to apply a non-zero initial value to the dofs
that are activated for the first time. This is to simulate the metal powder
being deposited at a specified temperature, e.g. room temperature.
(b) The dofs that were shared between FE_Q and FE_Nothing cells before the
p-refinement and are shared between FE_Q and FE_Nothing cells after
refinement should retain their values from before the refinement.

Another way to simulate this process would be to use a cell "awakening"
process instead of the cell "birth": I keep all cells FE_Q but apply a
very low diffusivity to the cells of the layers that are not yet "awake".
But that way I have to solve a larger system in every time step. I was
hoping to save some computation time by only forming a system consisting of
the cells that are in the "active" layers.

Please let me know if this makes sense. Is there any other method in deal.II
that can simulate such a process?
Thank you very much and happy holidays.
Kaushik


On Tue, Dec 29, 2020 at 12:26 PM Wolfgang Bangerth 
wrote:

> On 12/28/20 5:11 PM, Marc Fehling wrote:
> >
> > In case a FE_Nothing has been configured to dominate, the solution
> should be
> > continuous on the interface if I understood correctly, i.e., zero on the
> face.
> > I will write a few tests to see if this is actually automatically the
> case in
> > user applications. If so, this check for domination will help other
> users to
> > avoid this pitfall :)
> >
>
> More tests = more better :-)
> Cheers
>   W.
>
> --
> 
> Wolfgang Bangerth  email: bange...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
> --
> The deal.II project is located at http://www.dealii.org/
> For mailing list/forum options, see
> https://groups.google.com/d/forum/dealii?hl=en
> ---
> You received this message because you are subscribed to a topic in the
> Google Groups "deal.II User Group" group.
> To unsubscribe from this topic, visit
> https://groups.google.com/d/topic/dealii/ssEva6C2PU8/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> dealii+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/dealii/0398a440-3ee2-9642-9c27-c519eb2f12e5%40colostate.edu
> .
>



Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-29 Thread Wolfgang Bangerth

On 12/28/20 5:11 PM, Marc Fehling wrote:


In case a FE_Nothing has been configured to dominate, the solution should be 
continuous on the interface if I understood correctly, i.e., zero on the face. 
I will write a few tests to see if this is actually automatically the case in 
user applications. If so, this check for domination will help other users to 
avoid this pitfall :)




More tests = more better :-)
Cheers
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-28 Thread Marc Fehling
The FiniteElementDomination logic in the codim=0 case would indeed make up 
a cheap a priori check in this context.

In case a FE_Nothing has been configured to dominate, the solution should 
be continuous on the interface if I understood correctly, i.e., zero on the 
face. I will write a few tests to see if this is actually automatically the 
case in user applications. If so, this check for domination will help other 
users to avoid this pitfall :)
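
For readers who have not used this flag, a minimal sketch of the two
configurations being compared; the polynomial degree and the number of
components are arbitrary here:

#include <deal.II/fe/fe_nothing.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/hp/fe_collection.h>

// Default FE_Nothing: imposes no continuity requirements across faces shared
// with FE_Q neighbors.
dealii::hp::FECollection<2> fe_collection_default;
fe_collection_default.push_back(dealii::FE_Q<2>(1));
fe_collection_default.push_back(dealii::FE_Nothing<2>());

// FE_Nothing configured to dominate: the solution is then forced to be
// continuous, i.e. zero, on faces shared with FE_Q neighbors.
dealii::hp::FECollection<2> fe_collection_dominating;
fe_collection_dominating.push_back(dealii::FE_Q<2>(1));
fe_collection_dominating.push_back(
  dealii::FE_Nothing<2>(/*n_components=*/1, /*dominate=*/true));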

Marc

On Monday, December 28, 2020 at 4:13:40 PM UTC-7 Wolfgang Bangerth wrote:

>
> > The problem here is that the solution is not continuous across the face 
> of a 
> > FE_Q and a FE_Nothing element. If a FE_Nothing is turned into a FE_Q 
> element, 
> > the solution is suddenly expected to be continuous, and we have no rule 
> in 
> > deal.II yet how to continue in the situation. In my opinion, we should 
> throw 
> > an assertion in this case.
>
> Denis actually thought of that already a while back. It wasn't well 
> documented, but see here now:
> https://github.com/dealii/dealii/pull/11430
>
> Does that also address what you wanted to do in your patch?
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>



Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-28 Thread Wolfgang Bangerth



The problem here is that the solution is not continuous across the face of a 
FE_Q and a FE_Nothing element. If a FE_Nothing is turned into a FE_Q element, 
the solution is suddenly expected to be continuous, and we have no rule in 
deal.II yet how to continue in the situation. In my opinion, we should throw 
an assertion in this case.


Denis actually thought of that already a while back. It wasn't well 
documented, but see here now:

https://github.com/dealii/dealii/pull/11430

Does that also address what you wanted to do in your patch?

Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-28 Thread Marc Fehling
Hi Wolfgang,

your explanation indeed makes more sense in the context of piecewise
polynomials :)

The problem here is that the solution is not continuous across the face of 
a FE_Q and a FE_Nothing element. If a FE_Nothing is turned into a FE_Q 
element, the solution is suddenly expected to be continuous, and we have no 
rule in deal.II yet how to continue in the situation. In my opinion, we 
should throw an assertion in this case.

I have a patch for the p::d case in mind that will warn users about this:
we should reinit the solution vector with NaNs and then only overwrite the
entries once.

I don't know if we have such a test for the general SolutionTransfer class. 
I will check that.

Marc

On Monday, December 28, 2020 at 1:39:33 PM UTC-7 Wolfgang Bangerth wrote:

> On 12/27/20 8:48 PM, Marc Fehling wrote:
> > 
> > 2) I did not know you were trying to interpolate a FENothing element 
> into a 
> > FEQ element. This should not be possible, as you can not interpolate 
> > information from simply 'nothing', and some assertion should be 
> triggered 
> > while trying to do so. The other way round should be possible, i.e., 
> > interpolation from FEQ to FENothing, since you will simply 'forget' what 
> has 
> > been on the old cell.
>
> In hindsight, FE_Nothing was maybe a poorly named class. It should really 
> have 
> been FE_Zero: A finite element space that contains only a single function 
> -- 
> the zero function. Because it only contains one function, it requires no 
> degrees of freedom.
>
> So interpolation from FE_Nothing to FE_Q is well defined if you take this 
> point of view, and projection from any finite element space to FE_Nothing 
> is also.
>
> Best
> Wolfgang
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>



Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-28 Thread Wolfgang Bangerth

On 12/27/20 8:48 PM, Marc Fehling wrote:


2) I did not know you were trying to interpolate a FENothing element into a 
FEQ element. This should not be possible, as you can not interpolate 
information from simply 'nothing', and some assertion should be triggered 
while trying to do so. The other way round should be possible, i.e., 
interpolation from FEQ to FENothing, since you will simply 'forget' what has 
been on the old cell.


In hindsight, FE_Nothing was maybe a poorly named class. It should really have 
been FE_Zero: A finite element space that contains only a single function -- 
the zero function. Because it only contains one function, it requires no 
degrees of freedom.


So interpolation from FE_Nothing to FE_Q is well defined if you take this 
point of view, and projection from any finite element space to FE_Nothing is also.
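
A tiny illustration of that point, using nothing but the two element classes:

#include <deal.II/fe/fe_nothing.h>
#include <deal.II/fe/fe_q.h>
#include <iostream>

int main()
{
  // FE_Nothing spans only the zero function and therefore carries no
  // degrees of freedom; FE_Q(1) in 2d has one dof per vertex.
  const dealii::FE_Nothing<2> fe_nothing;
  const dealii::FE_Q<2>       fe_q(1);
  std::cout << fe_nothing.dofs_per_cell << std::endl; // prints 0
  std::cout << fe_q.dofs_per_cell << std::endl;       // prints 4
}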


Best
 Wolfgang

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-27 Thread Marc Fehling
Hi Kaushik,

1) Yes, this is possible, but tricky: `SolutionTransfer` is not capable of 
this feature, and you need to do it manually with the more versatile class 
`CellDataTransfer`. A way to do it has been discussed in #11132.

2) I did not know you were trying to interpolate a FENothing element into a 
FEQ element. This should not be possible, as you can not interpolate 
information from simply 'nothing', and some assertion should be triggered 
while trying to do so. The other way round should be possible, i.e., 
interpolation from FEQ to FENothing, since you will simply 'forget' what 
has been on the old cell.

Did you run your program in debug or release mode? If you ran it in debug 
mode without an assertion being triggered, please tell me. The user should 
be warned that they are not allowed to interpolate from a FENothing object 
to a FEQ element (or any element with nodes).

I think what is happening here is that we initialize some container with 
zeros by default, and accidentally overwrite the corresponding dof values. 
I suspect that if you assign both topmost cells to FEQ and the lower ones 
to FENothing, your solution would look okay due to the way we iterate over 
cells (Z-order). Or in short, we first unpack the lower cells, and then 
unpack the upper ones, which may overwrite dof values. This is undefined 
behavior, and we should warn the user about that.

Best,
Marc

On Sunday, December 27, 2020 at 2:28:50 PM UTC-7 k.d...@gmail.com wrote:

> Hello Marc,
> Thank you very much. 
> I have modified my test code as you suggested and it is working fine now. 
> That the code is attached. I have a few more questions that I added to the 
> attached PNG file below along with the results from the test code. 
> 1. Is it possible to specify an initial value on dofs that are activated 
> when a FE_Nothing cell becomes a FE_Q cell?
> 2. What happens to the dofs that were shared between a FE_Nothing cell and 
> a FE_Q cell before the p-refinement, and are shared between two FE_Q 
> elements after p-refinement? Are those always set to zeros after the 
> refinement? 
>
> [image: image.png]
> Thank you,
> Kaushik 
>
> On Wed, Dec 23, 2020 at 5:35 PM Marc Fehling  wrote:
>
>> Hi Kaushik,
>>
>> Be careful on what you are doing here: You prepare your solution to be 
>> transferred on refinement, but at the point where you wish to interpolate 
>> your solution the mesh differs from the one your SolutionTransfer object 
>> expects to encounter. That is because you changed the assigned finite 
>> element between executing the refinement and interpolating the old solution 
>> to the new mesh.
>>
>> You are basically doing two steps here at once: You perform h-refinement 
>> as your first step, and then alter your function space by assigning 
>> different finite elements (p-adaptation). I would suggest you split your 
>> intentions into two separate steps.
>>
>> First, only perform h-refinement on that one cell and interpolate the 
>> entire solution on the new grid. Next, tell your DoFHandler object that you 
>> intend to change the finite element on some of your cells by assigning a 
>> corresponding future finite element index to them (see here), prepare your 
>> solution for refinement using a SolutionTransfer object once more, and 
>> finally execute refinement again. This should accomplish what you wish to 
>> achieve.
>>
>> In addition, there have been issues using p::d::SolutionTransfer objects 
>> with FENothing elements which have been fixed in #10592. Please incorporate 
>> these upstream fixes into your deal.II library by building it on the 
>> current master branch.
>>
>> Hope this helps!
>>
>> Best,
>> Marc
>>
>> On Wednesday, December 23, 2020 at 2:33:09 PM UTC-7 k.d...@gmail.com 
>> wrote:
>>
>>> Hi Marc:
>>> Thank you again for your help.
>>> I have another problem. 
>>> A small test code is attached. 
>>>
>>> I have one cell of FEQ element. I refine that into four cells and then 
>>> assign FE_q to two of them and FE_nothing to the other two child cells. 
>>> Then when I try to transfer the solution, the code aborts. 
>>>
>>> Is this a limitation? 
>>>
>>> Thank you very much,
>>> Kaushik 
>>>
>>>
>>> On Wed, Dec 9, 2020 at 7:16 PM Kaushik Das  wrote:
>>>
 Thank you, Mark. I just built dealii from the source 
 (deal.II-9.3.0-pre). And my little test is passing now. 
 Thanks for the help.
 -Kaushik 

 On Wed, Dec 9, 2020 at 4:36 PM Kaushik Das  wrote:

> Thank you Mark. 
> I am using the dealii lib that I got from apt-get from  
> deal.ii-9.2.0-backports.
> I used PETSc and the abort was on even 1 cpus. I tried 2, 3, 6 cpus 
> and all aborted similarly. 
>
> I will get the latest master branch and build that. 
>
>
> 

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-23 Thread Marc Fehling
Hi Kaushik,

Be careful on what you are doing here: You prepare your solution to be 
transferred on refinement, but at the point where you wish to interpolate 
your solution the mesh differs from the one your SolutionTransfer object 
expects to encounter. That is because you changed the assigned finite 
element between executing the refinement and interpolating the old solution 
to the new mesh.

You are basically doing two steps here at once: You perform h-refinement as 
your first step, and then alter your function space by assigning different 
finite elements (p-adaptation). I would suggest you split your intentions 
into two separate steps.

First, only perform h-refinement on that one cell and interpolate the 
entire solution on the new grid. Next, tell your DoFHandler object that you 
intend to change the finite element on some of your cells by assigning a 
corresponding future finite element index to them (see here), prepare your 
solution for refinement using a SolutionTransfer object once more, and 
finally execute refinement again. This should accomplish what you wish to 
achieve.
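
A rough sketch of these two separate adaptation cycles (added for
illustration, not from the original message; function and variable names are
made up, and the re-initialization of the PETSc vectors after each
distribute_dofs() call is only indicated in comments):

  template <int dim>
  void
  refine_then_change_elements(
    parallel::distributed::Triangulation<dim> &tria,
    hp::DoFHandler<dim>                       &dof_handler,
    const hp::FECollection<dim>               &fe_collection,
    PETScWrappers::MPI::Vector                &ghosted_solution)
  {
    using Transfer = parallel::distributed::
      SolutionTransfer<dim, PETScWrappers::MPI::Vector, hp::DoFHandler<dim>>;

    // Cycle 1: h-refinement only, keeping the assigned elements as they are.
    for (const auto &cell : dof_handler.active_cell_iterators())
      if (cell->is_locally_owned() && cell->center()[1] > 0.5) // your criterion
        cell->set_refine_flag();

    {
      Transfer soltrans(dof_handler);
      soltrans.prepare_for_coarsening_and_refinement(ghosted_solution);
      tria.execute_coarsening_and_refinement();
      dof_handler.distribute_dofs(fe_collection);
      // ... reinit a fully distributed vector for the new dof layout, call
      //     soltrans.interpolate() on it, and copy it back into the ghosted
      //     vector ...
    }

    // Cycle 2: p-adaptation only, requested through future FE indices.
    for (const auto &cell : dof_handler.active_cell_iterators())
      if (cell->is_locally_owned() && cell->center()[1] < 0.5) // your criterion
        cell->set_future_fe_index(0); // e.g. switch this cell to FE_Nothing

    {
      Transfer soltrans(dof_handler);
      soltrans.prepare_for_coarsening_and_refinement(ghosted_solution);
      tria.execute_coarsening_and_refinement();
      dof_handler.distribute_dofs(fe_collection);
      // ... reinit and interpolate again as above ...
    }
  }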

In addition, there have been issues using p::d::SolutionTransfer objects 
with FENothing elements which have been fixed in #10592. Please incorporate 
these upstream fixes into your deal.II library by building it on the current 
master branch.

Hope this helps!

Best,
Marc

On Wednesday, December 23, 2020 at 2:33:09 PM UTC-7 k.d...@gmail.com wrote:

> Hi Marc:
> Thank you again for your help.
> I have another problem. 
> A small test code is attached. 
>
> I have one cell of FEQ element. I refine that into four cells and then 
> assign FE_q to two of them and FE_nothing to the other two child cells. 
> Then when I try to transfer the solution, the code aborts. 
>
> Is this a limitation? 
>
> Thank you very much,
> Kaushik 
>
>
> On Wed, Dec 9, 2020 at 7:16 PM Kaushik Das  wrote:
>
>> Thank you, Mark. I just built dealii from the source (deal.II-9.3.0-pre). 
>> And my little test is passing now. 
>> Thanks for the help.
>> -Kaushik 
>>
>> On Wed, Dec 9, 2020 at 4:36 PM Kaushik Das  wrote:
>>
>>> Thank you Mark. 
>>> I am using the dealii lib that I got from apt-get from  
>>> deal.ii-9.2.0-backports.
>>> I used PETSc and the abort was on even 1 cpus. I tried 2, 3, 6 cpus and 
>>> all aborted similarly. 
>>>
>>> I will get the latest master branch and build that. 
>>>
>>>
>>> Thanks,
>>> Kaushik
>>>
>>> On Wed, Dec 9, 2020 at 4:23 PM Marc Fehling  wrote:
>>>
 From your stacktrace I can see you are using PETSc and deal.II 9.2.0 
 which already incorporates the specified patch. Would you try to build the 
 actual master branch anyways?
 On Wednesday, December 9, 2020 at 2:11:59 PM UTC-7 Marc Fehling wrote:

> Hi Kaushik,
>
> I am unable to reproduce your problem with the code you provided on 
> the latest build of deal.II and Trilinos.
>
>- On how many processes did you run your program?
>- Did you use PETSc or Trilinos?
>- Could you try to build deal.II on the latest master branch? 
>There is a chance that your issue has been solved upstream. Chances 
> are 
>high that fix #8860  
>and the changes made to `get_interpolated_dof_values()` will solve 
> your 
>problem.
>
> Marc
> On Wednesday, December 9, 2020 at 7:14:57 AM UTC-7 k.d...@gmail.com 
> wrote:
>
>> Hi Marc and Bruno,
>> I was able to reproduce this abort on an even simpler test. Please 
>> see the attached file. 
>>
>> Initial grid:
>>  /*
>> * -----------
>> * |  0 |  0 |
>> * -----------
>> * |  1 |  1 |    0 - FEQ, 1 - FE_Nothing
>> * -----------
>> */
>>
>> /* Set refine flags:
>> * -----------
>> * |  R |  R |    FEQ
>> * -----------
>> * |    |    |    FE_Nothing
>> * -----------
>> */
>>
>> Then refine and solution trans. During the  
>> execute_coarsening_and_refinement, it aborts. 
>>
>> Here is a stack trace:
>> 
>>
>> An error occurred in line <1167> of file 
>>  in 
>> function
>> Number& 
>> dealii::Vector::operator()(dealii::Vector::size_type) 
>> [with 
>> Number = double; dealii::Vector::size_type = unsigned int]
>> The violated condition was: 
>> static_cast> typename std::common_type::type)>:: 
>> type>(i) 
>> < static_cast> std::common_type::type)>:: type>(size())
>> Additional information: 
>> Index 0 is not in the half-open range [0,0). In the current case, 
>> this half-open range is in fact empty, suggesting that you are accessing 
>> an 
>> element of an empty 

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-23 Thread Kaushik Das
Hi Marc:
Thank you again for your help.
I have another problem.
A small test code is attached.

I have one cell with an FE_Q element. I refine it into four cells and then
assign FE_Q to two of them and FE_Nothing to the other two child cells.
Then, when I try to transfer the solution, the code aborts.

Is this a limitation?

Thank you very much,
Kaushik


On Wed, Dec 9, 2020 at 7:16 PM Kaushik Das  wrote:

> Thank you, Mark. I just built dealii from the source (deal.II-9.3.0-pre).
> And my little test is passing now.
> Thanks for the help.
> -Kaushik
>
> On Wed, Dec 9, 2020 at 4:36 PM Kaushik Das  wrote:
>
>> Thank you Mark.
>> I am using the dealii lib that I got from apt-get from
>> deal.ii-9.2.0-backports.
>> I used PETSc and the abort was on even 1 cpus. I tried 2, 3, 6 cpus and
>> all aborted similarly.
>>
>> I will get the latest master branch and build that.
>>
>>
>> Thanks,
>> Kaushik
>>
>> On Wed, Dec 9, 2020 at 4:23 PM Marc Fehling  wrote:
>>
>>> From your stacktrace I can see you are using PETSc and deal.II 9.2.0
>>> which already incorporates the specified patch. Would you try to build the
>>> actual master branch anyways?
>>> On Wednesday, December 9, 2020 at 2:11:59 PM UTC-7 Marc Fehling wrote:
>>>
 Hi Kaushik,

 I am unable to reproduce your problem with the code you provided on the
 latest build of deal.II and Trilinos.

- On how many processes did you run your program?
- Did you use PETSc or Trilinos?
- Could you try to build deal.II on the latest master branch? There
is a chance that your issue has been solved upstream. Chances are high 
 that
fix #8860  and the
changes made to `get_interpolated_dof_values()` will solve your problem.

 Marc
 On Wednesday, December 9, 2020 at 7:14:57 AM UTC-7 k.d...@gmail.com
 wrote:

> Hi Marc and Bruno,
> I was able to reproduce this abort on an even simpler test. Please see
> the attached file.
>
> Initial grid:
>  /*
> * ---
> * |  0 |  0 |
> * ---
> * |  1 |  1 | 0 - FEQ, 1 - FE_Nothing
> * ---
> */
>
> /* Set refine flags:
> * ---
> * |  R |  R |FEQ
> * ---
> * |  |  |FE_Nothing
> * ---
> */
>
> Then refine and solution trans. During the
> execute_coarsening_and_refinement, it aborts.
>
> Here is a stack trace:
> 
>
> An error occurred in line <1167> of file
>  in
> function
> Number&
> dealii::Vector::operator()(dealii::Vector::size_type) 
> [with
> Number = double; dealii::Vector::size_type = unsigned int]
> The violated condition was:
> static_cast typename std::common_type::type)>:: 
> type>(i)
> < static_cast std::common_type::type)>:: type>(size())
> Additional information:
> Index 0 is not in the half-open range [0,0). In the current case,
> this half-open range is in fact empty, suggesting that you are accessing 
> an
> element of an empty collection such as a vector that has not been set to
> the correct size.
>
> Stacktrace:
> ---
> #0  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
> dealii::Vector::operator()(unsigned int)
> #1  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
> #2  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
> dealii::parallel::distributed::SolutionTransfer<2,
> dealii::PETScWrappers::MPI::Vector, dealii::hp::DoFHandler<2, 2>
> >::pack_callback(dealii::TriaIterator > const&,
> dealii::Triangulation<2, 2>::CellStatus)
> #3  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
> dealii::parallel::distributed::SolutionTransfer<2,
> dealii::PETScWrappers::MPI::Vector, dealii::hp::DoFHandler<2, 2>
> >::register_data_attach()::{lambda(dealii::TriaIterator 2> > const&, dealii::Triangulation<2,
> 2>::CellStatus)#1}::operator()(dealii::TriaIterator 2> > const&, dealii::Triangulation<2, 2>::CellStatus) const
> #4  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
> std::_Function_handler >
> (dealii::TriaIterator > const&,
> dealii::Triangulation<2, 2>::CellStatus),
> dealii::parallel::distributed::SolutionTransfer<2,
> dealii::PETScWrappers::MPI::Vector, dealii::hp::DoFHandler<2, 2>
> >::register_data_attach()::{lambda(dealii::TriaIterator 2> > const&, dealii::Triangulation<2,
> 2>::CellStatus)#1}>::_M_invoke(std::_Any_data const&,
> dealii::TriaIterator > const&,
> dealii::Triangulation<2, 2>::CellStatus&&)
> #5  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
> std::function >
> (dealii::TriaIterator > const&,
> dealii::Triangulation<2,
> 2>::CellStatus)>::operator()(dealii::TriaIterator 2> > const&, dealii::Triangulation<2, 2>::CellStatus) const
> #6  

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-09 Thread Kaushik Das
Thank you, Mark. I just built dealii from the source (deal.II-9.3.0-pre).
And my little test is passing now.
Thanks for the help.
-Kaushik

On Wed, Dec 9, 2020 at 4:36 PM Kaushik Das  wrote:

> Thank you Mark.
> I am using the dealii lib that I got from apt-get from
> deal.ii-9.2.0-backports.
> I used PETSc and the abort was on even 1 cpus. I tried 2, 3, 6 cpus and
> all aborted similarly.
>
> I will get the latest master branch and build that.
>
>
> Thanks,
> Kaushik
>
> On Wed, Dec 9, 2020 at 4:23 PM Marc Fehling  wrote:
>
>> From your stacktrace I can see you are using PETSc and deal.II 9.2.0
>> which already incorporates the specified patch. Would you try to build the
>> actual master branch anyways?
>> On Wednesday, December 9, 2020 at 2:11:59 PM UTC-7 Marc Fehling wrote:
>>
>>> Hi Kaushik,
>>>
>>> I am unable to reproduce your problem with the code you provided on the
>>> latest build of deal.II and Trilinos.
>>>
>>>- On how many processes did you run your program?
>>>- Did you use PETSc or Trilinos?
>>>- Could you try to build deal.II on the latest master branch? There
>>>is a chance that your issue has been solved upstream. Chances are high 
>>> that
>>>fix #8860  and the
>>>changes made to `get_interpolated_dof_values()` will solve your problem.
>>>
>>> Marc
>>> On Wednesday, December 9, 2020 at 7:14:57 AM UTC-7 k.d...@gmail.com
>>> wrote:
>>>
 Hi Marc and Bruno,
 I was able to reproduce this abort on an even simpler test. Please see
 the attached file.

 Initial grid:
  /*
 * ---
 * |  0 |  0 |
 * ---
 * |  1 |  1 | 0 - FEQ, 1 - FE_Nothing
 * ---
 */

 /* Set refine flags:
 * ---
 * |  R |  R |FEQ
 * ---
 * |  |  |FE_Nothing
 * ---
 */

 Then refine and solution trans. During the
 execute_coarsening_and_refinement, it aborts.

 Here is a stack trace:
 

 An error occurred in line <1167> of file
  in
 function
 Number&
 dealii::Vector::operator()(dealii::Vector::size_type) [with
 Number = double; dealii::Vector::size_type = unsigned int]
 The violated condition was:
 static_cast>>> typename std::common_type::type)>:: type>(i)
 < static_cast>>> std::common_type::type)>:: type>(size())
 Additional information:
 Index 0 is not in the half-open range [0,0). In the current case,
 this half-open range is in fact empty, suggesting that you are accessing an
 element of an empty collection such as a vector that has not been set to
 the correct size.

 Stacktrace:
 ---
 #0  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
 dealii::Vector::operator()(unsigned int)
 #1  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
 #2  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
 dealii::parallel::distributed::SolutionTransfer<2,
 dealii::PETScWrappers::MPI::Vector, dealii::hp::DoFHandler<2, 2>
 >::pack_callback(dealii::TriaIterator > const&,
 dealii::Triangulation<2, 2>::CellStatus)
 #3  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
 dealii::parallel::distributed::SolutionTransfer<2,
 dealii::PETScWrappers::MPI::Vector, dealii::hp::DoFHandler<2, 2>
 >::register_data_attach()::{lambda(dealii::TriaIterator>>> 2> > const&, dealii::Triangulation<2,
 2>::CellStatus)#1}::operator()(dealii::TriaIterator>>> 2> > const&, dealii::Triangulation<2, 2>::CellStatus) const
 #4  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
 std::_Function_handler >
 (dealii::TriaIterator > const&,
 dealii::Triangulation<2, 2>::CellStatus),
 dealii::parallel::distributed::SolutionTransfer<2,
 dealii::PETScWrappers::MPI::Vector, dealii::hp::DoFHandler<2, 2>
 >::register_data_attach()::{lambda(dealii::TriaIterator>>> 2> > const&, dealii::Triangulation<2,
 2>::CellStatus)#1}>::_M_invoke(std::_Any_data const&,
 dealii::TriaIterator > const&,
 dealii::Triangulation<2, 2>::CellStatus&&)
 #5  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
 std::function >
 (dealii::TriaIterator > const&,
 dealii::Triangulation<2,
 2>::CellStatus)>::operator()(dealii::TriaIterator>>> 2> > const&, dealii::Triangulation<2, 2>::CellStatus) const
 #6  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
 std::_Function_handler >
 (dealii::TriaIterator >,
 dealii::Triangulation<2, 2>::CellStatus), std::function>>> std::allocator > (dealii::TriaIterator >
 const&, dealii::Triangulation<2, 2>::CellStatus)>
 >::_M_invoke(std::_Any_data const&,
 dealii::TriaIterator >&&,
 dealii::Triangulation<2, 2>::CellStatus&&)
 #7  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
 std::function >
 (dealii::TriaIterator >,
 dealii::Triangulation<2,
 

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-09 Thread Kaushik Das
Thank you Mark.
I am using the dealii lib that I got from apt-get from
deal.ii-9.2.0-backports.
I used PETSc and the abort was on even 1 cpus. I tried 2, 3, 6 cpus and all
aborted similarly.

I will get the latest master branch and build that.


Thanks,
Kaushik

On Wed, Dec 9, 2020 at 4:23 PM Marc Fehling  wrote:

> From your stacktrace I can see you are using PETSc and deal.II 9.2.0 which
> already incorporates the specified patch. Would you try to build the actual
> master branch anyways?
> On Wednesday, December 9, 2020 at 2:11:59 PM UTC-7 Marc Fehling wrote:
>
>> Hi Kaushik,
>>
>> I am unable to reproduce your problem with the code you provided on the
>> latest build of deal.II and Trilinos.
>>
>>- On how many processes did you run your program?
>>- Did you use PETSc or Trilinos?
>>- Could you try to build deal.II on the latest master branch? There
>>is a chance that your issue has been solved upstream. Chances are high 
>> that
>>fix #8860  and the
>>changes made to `get_interpolated_dof_values()` will solve your problem.
>>
>> Marc
>> On Wednesday, December 9, 2020 at 7:14:57 AM UTC-7 k.d...@gmail.com
>> wrote:
>>
>>> Hi Marc and Bruno,
>>> I was able to reproduce this abort on an even simpler test. Please see
>>> the attached file.
>>>
>>> Initial grid:
>>>  /*
>>> * ---
>>> * |  0 |  0 |
>>> * ---
>>> * |  1 |  1 | 0 - FEQ, 1 - FE_Nothing
>>> * ---
>>> */
>>>
>>> /* Set refine flags:
>>> * ---
>>> * |  R |  R |FEQ
>>> * ---
>>> * |  |  |FE_Nothing
>>> * ---
>>> */
>>>
>>> Then refine and solution trans. During the
>>> execute_coarsening_and_refinement, it aborts.
>>>
>>> Here is a stack trace:
>>> 
>>>
>>> An error occurred in line <1167> of file
>>>  in
>>> function
>>> Number&
>>> dealii::Vector::operator()(dealii::Vector::size_type) [with
>>> Number = double; dealii::Vector::size_type = unsigned int]
>>> The violated condition was:
>>> static_cast>> typename std::common_type::type)>:: type>(i)
>>> < static_cast>> std::common_type::type)>:: type>(size())
>>> Additional information:
>>> Index 0 is not in the half-open range [0,0). In the current case,
>>> this half-open range is in fact empty, suggesting that you are accessing an
>>> element of an empty collection such as a vector that has not been set to
>>> the correct size.
>>>
>>> Stacktrace:
>>> ---
>>> #0  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
>>> dealii::Vector::operator()(unsigned int)
>>> #1  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
>>> #2  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
>>> dealii::parallel::distributed::SolutionTransfer<2,
>>> dealii::PETScWrappers::MPI::Vector, dealii::hp::DoFHandler<2, 2>
>>> >::pack_callback(dealii::TriaIterator > const&,
>>> dealii::Triangulation<2, 2>::CellStatus)
>>> #3  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
>>> dealii::parallel::distributed::SolutionTransfer<2,
>>> dealii::PETScWrappers::MPI::Vector, dealii::hp::DoFHandler<2, 2>
>>> >::register_data_attach()::{lambda(dealii::TriaIterator>> 2> > const&, dealii::Triangulation<2,
>>> 2>::CellStatus)#1}::operator()(dealii::TriaIterator>> 2> > const&, dealii::Triangulation<2, 2>::CellStatus) const
>>> #4  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
>>> std::_Function_handler >
>>> (dealii::TriaIterator > const&,
>>> dealii::Triangulation<2, 2>::CellStatus),
>>> dealii::parallel::distributed::SolutionTransfer<2,
>>> dealii::PETScWrappers::MPI::Vector, dealii::hp::DoFHandler<2, 2>
>>> >::register_data_attach()::{lambda(dealii::TriaIterator>> 2> > const&, dealii::Triangulation<2,
>>> 2>::CellStatus)#1}>::_M_invoke(std::_Any_data const&,
>>> dealii::TriaIterator > const&,
>>> dealii::Triangulation<2, 2>::CellStatus&&)
>>> #5  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
>>> std::function >
>>> (dealii::TriaIterator > const&,
>>> dealii::Triangulation<2,
>>> 2>::CellStatus)>::operator()(dealii::TriaIterator>> 2> > const&, dealii::Triangulation<2, 2>::CellStatus) const
>>> #6  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
>>> std::_Function_handler >
>>> (dealii::TriaIterator >,
>>> dealii::Triangulation<2, 2>::CellStatus), std::function>> std::allocator > (dealii::TriaIterator >
>>> const&, dealii::Triangulation<2, 2>::CellStatus)>
>>> >::_M_invoke(std::_Any_data const&,
>>> dealii::TriaIterator >&&,
>>> dealii::Triangulation<2, 2>::CellStatus&&)
>>> #7  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
>>> std::function >
>>> (dealii::TriaIterator >,
>>> dealii::Triangulation<2,
>>> 2>::CellStatus)>::operator()(dealii::TriaIterator>> 2> >, dealii::Triangulation<2, 2>::CellStatus) const
>>> #8  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
>>> dealii::parallel::distributed::Triangulation<2,
>>> 2>::DataTransfer::pack_data(std::vector>> dealii::Triangulation<2, 2>::CellStatus,
>>> dealii::TriaIterator > >,
>>> std::allocator>> 2>::CellStatus, 

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-09 Thread Marc Fehling
From your stacktrace I can see you are using PETSc and deal.II 9.2.0 which 
already incorporates the specified patch. Would you try to build the actual 
master branch anyways?
On Wednesday, December 9, 2020 at 2:11:59 PM UTC-7 Marc Fehling wrote:

> Hi Kaushik,
>
> I am unable to reproduce your problem with the code you provided on the 
> latest build of deal.II and Trilinos.
>
>- On how many processes did you run your program?
>- Did you use PETSc or Trilinos?
>- Could you try to build deal.II on the latest master branch? There is 
>a chance that your issue has been solved upstream. Chances are high that 
>fix #8860  and the changes 
>made to `get_interpolated_dof_values()` will solve your problem.
>
> Marc
> On Wednesday, December 9, 2020 at 7:14:57 AM UTC-7 k.d...@gmail.com wrote:
>
>> Hi Marc and Bruno,
>> I was able to reproduce this abort on an even simpler test. Please see 
>> the attached file. 
>>
>> Initial grid:
>>  /*
>> * ---
>> * |  0 |  0 |
>> * ---
>> * |  1 |  1 | 0 - FEQ, 1 - FE_Nothing
>> * ---
>> */
>>
>> /* Set refine flags:
>> * ---
>> * |  R |  R |FEQ
>> * ---
>> * |  |  |FE_Nothing
>> * ---
>> */
>>
>> Then refine and solution trans. During the  
>> execute_coarsening_and_refinement, it aborts. 
>>
>> Here is a stack trace:
>> 
>>
>> An error occurred in line <1167> of file 
>>  in 
>> function
>> Number& 
>> dealii::Vector::operator()(dealii::Vector::size_type) [with 
>> Number = double; dealii::Vector::size_type = unsigned int]
>> The violated condition was: 
>> static_cast> std::common_type::type)>:: type>(i) < 
>> static_cast> std::common_type::type)>:: type>(size())
>> Additional information: 
>> Index 0 is not in the half-open range [0,0). In the current case, 
>> this half-open range is in fact empty, suggesting that you are accessing an 
>> element of an empty collection such as a vector that has not been set to 
>> the correct size.
>>
>> Stacktrace:
>> ---
>> #0  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0: 
>> dealii::Vector::operator()(unsigned int)
>> #1  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0: 
>> #2  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0: 
>> dealii::parallel::distributed::SolutionTransfer<2, 
>> dealii::PETScWrappers::MPI::Vector, dealii::hp::DoFHandler<2, 2> 
>> >::pack_callback(dealii::TriaIterator > const&, 
>> dealii::Triangulation<2, 2>::CellStatus)
>> #3  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0: 
>> dealii::parallel::distributed::SolutionTransfer<2, 
>> dealii::PETScWrappers::MPI::Vector, dealii::hp::DoFHandler<2, 2> 
>> >::register_data_attach()::{lambda(dealii::TriaIterator> > 
>> 2> > const&, dealii::Triangulation<2, 
>> 2>::CellStatus)#1}::operator()(dealii::TriaIterator> 2> > const&, dealii::Triangulation<2, 2>::CellStatus) const
>> #4  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0: 
>> std::_Function_handler > 
>> (dealii::TriaIterator > const&, 
>> dealii::Triangulation<2, 2>::CellStatus), 
>> dealii::parallel::distributed::SolutionTransfer<2, 
>> dealii::PETScWrappers::MPI::Vector, dealii::hp::DoFHandler<2, 2> 
>> >::register_data_attach()::{lambda(dealii::TriaIterator> > 
>> 2> > const&, dealii::Triangulation<2, 
>> 2>::CellStatus)#1}>::_M_invoke(std::_Any_data const&, 
>> dealii::TriaIterator > const&, 
>> dealii::Triangulation<2, 2>::CellStatus&&)
>> #5  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0: 
>> std::function > 
>> (dealii::TriaIterator > const&, 
>> dealii::Triangulation<2, 
>> 2>::CellStatus)>::operator()(dealii::TriaIterator> 2> > const&, dealii::Triangulation<2, 2>::CellStatus) const
>> #6  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0: 
>> std::_Function_handler > 
>> (dealii::TriaIterator >, 
>> dealii::Triangulation<2, 2>::CellStatus), std::function> std::allocator > (dealii::TriaIterator > 
>> const&, dealii::Triangulation<2, 2>::CellStatus)> 
>> >::_M_invoke(std::_Any_data const&, 
>> dealii::TriaIterator >&&, 
>> dealii::Triangulation<2, 2>::CellStatus&&)
>> #7  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0: 
>> std::function > 
>> (dealii::TriaIterator >, 
>> dealii::Triangulation<2, 
>> 2>::CellStatus)>::operator()(dealii::TriaIterator> 2> >, dealii::Triangulation<2, 2>::CellStatus) const
>> #8  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0: 
>> dealii::parallel::distributed::Triangulation<2, 
>> 2>::DataTransfer::pack_data(std::vector> dealii::Triangulation<2, 2>::CellStatus, 
>> dealii::TriaIterator > >, 
>> std::allocator> 2>::CellStatus, dealii::TriaIterator > > > > 
>> const&, std::vector > 
>> (dealii::TriaIterator >, 
>> dealii::Triangulation<2, 2>::CellStatus)>, 
>> std::allocator > 
>> (dealii::TriaIterator >, 
>> dealii::Triangulation<2, 2>::CellStatus)> > > const&, 
>> std::vector > 
>> (dealii::TriaIterator >, 
>> dealii::Triangulation<2, 2>::CellStatus)>, 
>> std::allocator > 
>> (dealii::TriaIterator 

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-09 Thread Marc Fehling
Hi Kaushik,

I am unable to reproduce your problem with the code you provided on the 
latest build of deal.II and Trilinos.

   - On how many processes did you run your program?
   - Did you use PETSc or Trilinos?
   - Could you try to build deal.II on the latest master branch? There is a 
   chance that your issue has been solved upstream. Chances are high that fix 
   #8860  and the changes made 
   to `get_interpolated_dof_values()` will solve your problem.
   
Marc
On Wednesday, December 9, 2020 at 7:14:57 AM UTC-7 k.d...@gmail.com wrote:

> Hi Marc and Bruno,
> I was able to reproduce this abort on an even simpler test. Please see the 
> attached file. 
>
> Initial grid:
>  /*
> * -----------
> * |  0 |  0 |
> * -----------
> * |  1 |  1 |    0 - FEQ, 1 - FE_Nothing
> * -----------
> */
>
> /* Set refine flags:
> * -----------
> * |  R |  R |    FEQ
> * -----------
> * |    |    |    FE_Nothing
> * -----------
> */
>
> Then refine and solution trans. During the  
> execute_coarsening_and_refinement, it aborts. 
>
> Here is a stack trace:
> 
>
> An error occurred in line <1167> of file 
>  in 
> function
> Number& 
> dealii::Vector::operator()(dealii::Vector::size_type) [with 
> Number = double; dealii::Vector::size_type = unsigned int]
> The violated condition was: 
> static_cast std::common_type::type)>:: type>(i) < 
> static_cast std::common_type::type)>:: type>(size())
> Additional information: 
> Index 0 is not in the half-open range [0,0). In the current case, this 
> half-open range is in fact empty, suggesting that you are accessing an 
> element of an empty collection such as a vector that has not been set to 
> the correct size.
>
> Stacktrace:
> ---
> #0  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0: 
> dealii::Vector::operator()(unsigned int)
> #1  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0: 
> #2  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0: 
> dealii::parallel::distributed::SolutionTransfer<2, 
> dealii::PETScWrappers::MPI::Vector, dealii::hp::DoFHandler<2, 2> 
> >::pack_callback(dealii::TriaIterator > const&, 
> dealii::Triangulation<2, 2>::CellStatus)
> #3  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0: 
> dealii::parallel::distributed::SolutionTransfer<2, 
> dealii::PETScWrappers::MPI::Vector, dealii::hp::DoFHandler<2, 2> 
> >::register_data_attach()::{lambda(dealii::TriaIterator > 
> 2> > const&, dealii::Triangulation<2, 
> 2>::CellStatus)#1}::operator()(dealii::TriaIterator 2> > const&, dealii::Triangulation<2, 2>::CellStatus) const
> #4  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0: 
> std::_Function_handler > 
> (dealii::TriaIterator > const&, 
> dealii::Triangulation<2, 2>::CellStatus), 
> dealii::parallel::distributed::SolutionTransfer<2, 
> dealii::PETScWrappers::MPI::Vector, dealii::hp::DoFHandler<2, 2> 
> >::register_data_attach()::{lambda(dealii::TriaIterator > 
> 2> > const&, dealii::Triangulation<2, 
> 2>::CellStatus)#1}>::_M_invoke(std::_Any_data const&, 
> dealii::TriaIterator > const&, 
> dealii::Triangulation<2, 2>::CellStatus&&)
> #5  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0: 
> std::function > 
> (dealii::TriaIterator > const&, 
> dealii::Triangulation<2, 
> 2>::CellStatus)>::operator()(dealii::TriaIterator 2> > const&, dealii::Triangulation<2, 2>::CellStatus) const
> #6  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0: 
> std::_Function_handler > 
> (dealii::TriaIterator >, 
> dealii::Triangulation<2, 2>::CellStatus), std::function std::allocator > (dealii::TriaIterator > 
> const&, dealii::Triangulation<2, 2>::CellStatus)> 
> >::_M_invoke(std::_Any_data const&, 
> dealii::TriaIterator >&&, 
> dealii::Triangulation<2, 2>::CellStatus&&)
> #7  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0: 
> std::function > 
> (dealii::TriaIterator >, 
> dealii::Triangulation<2, 
> 2>::CellStatus)>::operator()(dealii::TriaIterator 2> >, dealii::Triangulation<2, 2>::CellStatus) const
> #8  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0: 
> dealii::parallel::distributed::Triangulation<2, 
> 2>::DataTransfer::pack_data(std::vector dealii::Triangulation<2, 2>::CellStatus, 
> dealii::TriaIterator > >, 
> std::allocator 2>::CellStatus, dealii::TriaIterator > > > > 
> const&, std::vector > 
> (dealii::TriaIterator >, 
> dealii::Triangulation<2, 2>::CellStatus)>, 
> std::allocator > 
> (dealii::TriaIterator >, 
> dealii::Triangulation<2, 2>::CellStatus)> > > const&, 
> std::vector > 
> (dealii::TriaIterator >, 
> dealii::Triangulation<2, 2>::CellStatus)>, 
> std::allocator > 
> (dealii::TriaIterator >, 
> dealii::Triangulation<2, 2>::CellStatus)> > > const&)
> #9  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0: 
> dealii::parallel::distributed::Triangulation<2, 
> 2>::execute_coarsening_and_refinement()
> #10  /home/kdas/FE_NothingTest/build/test_distributed: main
>
> Thank you very much,
> Kaushik 
>
> On Tue, Dec 8, 2020 at 6:25 PM Marc Fehling  wrote:
>
>> Hi Kaushik,
>>
>> the 

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-09 Thread Kaushik Das
Hi Marc and Bruno,
I was able to reproduce this abort on an even simpler test. Please see the
attached file.

Initial grid:
 /*
* -----------
* |  0 |  0 |
* -----------
* |  1 |  1 |    0 - FEQ, 1 - FE_Nothing
* -----------
*/

/* Set refine flags:
* -----------
* |  R |  R |    FEQ
* -----------
* |    |    |    FE_Nothing
* -----------
*/

Then refine and transfer the solution. During
execute_coarsening_and_refinement, it aborts.

Here is a stack trace:

An error occurred in line <1167> of file <...> in function
    Number& dealii::Vector<Number>::operator()(dealii::Vector<Number>::size_type)
    [with Number = double; dealii::Vector<Number>::size_type = unsigned int]
The violated condition was:
    static_cast<typename ::dealii::internal::argument_type<void(
      typename std::common_type<decltype(i), decltype(size())>::type)>::type>(i)
    < static_cast<typename ::dealii::internal::argument_type<void(
      typename std::common_type<decltype(i), decltype(size())>::type)>::type>(size())
Additional information:
    Index 0 is not in the half-open range [0,0). In the current case, this
    half-open range is in fact empty, suggesting that you are accessing an
    element of an empty collection such as a vector that has not been set to
    the correct size.

Stacktrace:
-----------
#0  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
    dealii::Vector<double>::operator()(unsigned int)
#1  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
#2  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
    dealii::parallel::distributed::SolutionTransfer<2,
      dealii::PETScWrappers::MPI::Vector, dealii::hp::DoFHandler<2, 2>
      >::pack_callback(dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&,
                       dealii::Triangulation<2, 2>::CellStatus)
#3  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
    dealii::parallel::distributed::SolutionTransfer<2,
      dealii::PETScWrappers::MPI::Vector, dealii::hp::DoFHandler<2, 2>
      >::register_data_attach()::{lambda(
        dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&,
        dealii::Triangulation<2, 2>::CellStatus)#1}::operator()(
        dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&,
        dealii::Triangulation<2, 2>::CellStatus) const
#4  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
    std::_Function_handler<std::vector<char, std::allocator<char> >
        (dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&,
         dealii::Triangulation<2, 2>::CellStatus),
      dealii::parallel::distributed::SolutionTransfer<2,
        dealii::PETScWrappers::MPI::Vector, dealii::hp::DoFHandler<2, 2>
        >::register_data_attach()::{lambda(
          dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&,
          dealii::Triangulation<2, 2>::CellStatus)#1}
      >::_M_invoke(std::_Any_data const&,
                   dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&,
                   dealii::Triangulation<2, 2>::CellStatus&&)
#5  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
    std::function<std::vector<char, std::allocator<char> >
        (dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&,
         dealii::Triangulation<2, 2>::CellStatus)
      >::operator()(dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&,
                    dealii::Triangulation<2, 2>::CellStatus) const
#6  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
    std::_Function_handler<std::vector<char, std::allocator<char> >
        (dealii::TriaIterator<dealii::CellAccessor<2, 2> >,
         dealii::Triangulation<2, 2>::CellStatus),
      std::function<std::vector<char, std::allocator<char> >
        (dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&,
         dealii::Triangulation<2, 2>::CellStatus)>
      >::_M_invoke(std::_Any_data const&,
                   dealii::TriaIterator<dealii::CellAccessor<2, 2> >&&,
                   dealii::Triangulation<2, 2>::CellStatus&&)
#7  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
    std::function<std::vector<char, std::allocator<char> >
        (dealii::TriaIterator<dealii::CellAccessor<2, 2> >,
         dealii::Triangulation<2, 2>::CellStatus)
      >::operator()(dealii::TriaIterator<dealii::CellAccessor<2, 2> >,
                    dealii::Triangulation<2, 2>::CellStatus) const
#8  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
    dealii::parallel::distributed::Triangulation<2, 2>::DataTransfer::pack_data(
      std::vector<std::pair<dealii::Triangulation<2, 2>::CellStatus,
                            dealii::TriaIterator<dealii::CellAccessor<2, 2> > >,
                  std::allocator<std::pair<dealii::Triangulation<2, 2>::CellStatus,
                                           dealii::TriaIterator<dealii::CellAccessor<2, 2> > > > > const&,
      std::vector<std::function<std::vector<char, std::allocator<char> >
                    (dealii::TriaIterator<dealii::CellAccessor<2, 2> >,
                     dealii::Triangulation<2, 2>::CellStatus)>,
                  std::allocator<std::function<std::vector<char, std::allocator<char> >
                    (dealii::TriaIterator<dealii::CellAccessor<2, 2> >,
                     dealii::Triangulation<2, 2>::CellStatus)> > > const&,
      std::vector<std::function<std::vector<char, std::allocator<char> >
                    (dealii::TriaIterator<dealii::CellAccessor<2, 2> >,
                     dealii::Triangulation<2, 2>::CellStatus)>,
                  std::allocator<std::function<std::vector<char, std::allocator<char> >
                    (dealii::TriaIterator<dealii::CellAccessor<2, 2> >,
                     dealii::Triangulation<2, 2>::CellStatus)> > > const&)
#9  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0:
    dealii::parallel::distributed::Triangulation<2,
      2>::execute_coarsening_and_refinement()
#10  /home/kdas/FE_NothingTest/build/test_distributed: main

Thank you very much,
Kaushik

On Tue, Dec 8, 2020 at 6:25 PM Marc Fehling  wrote:

> Hi Kaushik,
>
> the `p::d::SolutionTransfer` class should be able to deal with `FENothing`
> elements in your example. The tricky cases are when you're coarsening a
> `FENothing` element with others as Bruno already pointed out
> (h-coarsening), or if you change a `FENothing` element to a different
> element in the process (p-adaptation). But with your recent modification,
> you avoid these cases.
>
> I suspect that something else causes the error in your program. Could you
> run a debugger on this and give us the backtrace for the exception? This
> will give us more clues to figure out what goes wrong!
>
> Best,
> Marc
> On Tuesday, December 8, 2020 at 7:13:29 AM UTC-7 k.d...@gmail.com wrote:
>
>> Hi Bruno:
>> Thanks for pointing that out.
>> I tried to not refine FE_nothing cells by modifying the refine loop:
>> (The modified test is attached).
>>
>> for (auto &cell : dgq_dof_handler.active_cell_iterators())
>>   if (cell->is_locally_owned() && cell->active_fe_index() != 0)
>> {
>>

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-08 Thread Marc Fehling
Hi Kaushik,

the `p::d::SolutionTransfer` class should be able to deal with `FENothing` 
elements in your example. The tricky cases are when you're coarsening a 
`FENothing` element with others as Bruno already pointed out 
(h-coarsening), or if you change a `FENothing` element to a different 
element in the process (p-adaptation). But with your recent modification, 
you avoid these cases.

I suspect that something else causes the error in your program. Could you 
run a debugger on this and give us the backtrace for the exception? This 
will give us more clues to figure out what goes wrong!

Best,
Marc
On Tuesday, December 8, 2020 at 7:13:29 AM UTC-7 k.d...@gmail.com wrote:

> Hi Bruno:
> Thanks for pointing that out. 
> I tried to not refine FE_nothing cells by modifying the refine loop:
> (The modified test is attached).
>
> for (auto &cell : dgq_dof_handler.active_cell_iterators())
>   if (cell->is_locally_owned() && cell->active_fe_index() != 0)
>     {
>       if (counter > ((dim == 2) ? 4 : 8))
>         cell->set_coarsen_flag();
>       else
>         cell->set_refine_flag();
>     }
>
> But this still aborts. 
> Kaushik 
>
> On Tue, Dec 8, 2020 at 8:36 AM Bruno Turcksin  
> wrote:
>
>> Hi,
>>
>> Are you sure that your test makes sense? You randomly assign FE indices 
>> to cells then you refine and coarsen cells. But what does it mean to 
>> coarsen 4 cells together when one of them is FE_Nothing? What would you 
>> expect to happen?
>>
>> Best,
>>
>> Bruno
>>
>> On Monday, December 7, 2020 at 5:54:10 PM UTC-5 k.d...@gmail.com wrote:
>>
>>> Hi all:
>>>
>>> I modified the test  tests/mpi/solution_transfer_05.cc to add a 
>>> FE_Nohting element to the FECollection. I also modified the other elements 
>>> to FE_Q. 
>>>
>>> When I run the test, it's aborting in solution transfer. 
>>> Is there any limitations in using FE_Nothing with parallel solution 
>>> transfer? 
>>> The modified test is attached.
>>> Thank you very much.
>>> Kaushik 
>>>
>>>  
>>> An error occurred in line <1167> of file 
>>>  in 
>>> function
>>> Number& 
>>> dealii::Vector::operator()(dealii::Vector::size_type) [with 
>>> Number = double; dealii::Vector::size_type = unsigned int]
>>> The violated condition was:
>>> static_cast>> typename std::common_type::type)>:: type>(i) 
>>> < static_cast>> std::common_type::type)>:: type>(size())
>>> Additional information:
>>> Index 0 is not in the half-open range [0,0). In the current case, 
>>> this half-open range is in fact empty, suggesting that you are accessing an 
>>> element of an empty collection such as a vector that has not been set to 
>>> the correct size.
>>>
>>>
>



Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-08 Thread Kaushik Das
Hi Bruno:
Thanks for pointing that out.
I tried to not refine FE_nothing cells by modifying the refine loop:
(The modified test is attached).

for (auto &cell : dgq_dof_handler.active_cell_iterators())
  if (cell->is_locally_owned() && cell->active_fe_index() != 0)
    {
      if (counter > ((dim == 2) ? 4 : 8))
        cell->set_coarsen_flag();
      else
        cell->set_refine_flag();
    }

But this still aborts.
Kaushik

On Tue, Dec 8, 2020 at 8:36 AM Bruno Turcksin 
wrote:

> Hi,
>
> Are you sure that your test makes sense? You randomly assign FE indices to
> cells then you refine and coarsen cells. But what does it mean to coarsen 4
> cells together when one of them is FE_Nothing? What would you expect to
> happen?
>
> Best,
>
> Bruno
>
> On Monday, December 7, 2020 at 5:54:10 PM UTC-5 k.d...@gmail.com wrote:
>
>> Hi all:
>>
>> I modified the test tests/mpi/solution_transfer_05.cc to add a
>> FE_Nothing element to the FECollection. I also modified the other elements
>> to FE_Q.
>>
>> When I run the test, it aborts in the solution transfer.
>> Are there any limitations in using FE_Nothing with parallel solution
>> transfer?
>> The modified test is attached.
>> Thank you very much.
>> Kaushik
>>
>> 
>> An error occurred in line <1167> of file <...> in function
>> Number& dealii::Vector<Number>::operator()(dealii::Vector<Number>::size_type)
>> [with Number = double; dealii::Vector<Number>::size_type = unsigned int]
>> The violated condition was:
>> static_cast<typename ::dealii::internal::argument_type<void(
>>   typename std::common_type<decltype(i), decltype(size())>::type)>::type>(i) <
>> static_cast<typename ::dealii::internal::argument_type<void(
>>   typename std::common_type<decltype(i), decltype(size())>::type)>::type>(size())
>> Additional information:
>> Index 0 is not in the half-open range [0,0). In the current case,
>> this half-open range is in fact empty, suggesting that you are accessing an
>> element of an empty collection such as a vector that has not been set to
>> the correct size.
>>
>>
>

// -
//
// Copyright (C) 2019 - 2020 by the deal.II authors
//
// This file is part of the deal.II library.
//
// The deal.II library is free software; you can use it, redistribute
// it, and/or modify it under the terms of the GNU Lesser General
// Public License as published by the Free Software Foundation; either
// version 2.1 of the License, or (at your option) any later version.
// The full text of the license can be found in the file LICENSE.md at
// the top level directory of deal.II.
//
// -



// This test crashed at some point: We have set and sent active_fe_indices based
// on the refinement flags on the p::d::Triangulation object. However, p4est has
// the last word on deciding which cells will be refined -- and p4est makes use
// of it in the specific scenario provided as a test. A fix has been introduced
// along with this test.


// NOTE: the original include file names were stripped by the mailing-list
// archive; the headers listed here are the ones the test below needs.
#include <deal.II/base/index_set.h>

#include <deal.II/distributed/solution_transfer.h>
#include <deal.II/distributed/tria.h>

#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_nothing.h>

#include <deal.II/fe/fe_q.h>
#include <deal.II/grid/grid_generator.h>

#include <deal.II/hp/dof_handler.h>

#include <deal.II/hp/fe_collection.h>
#include <deal.II/lac/petsc_vector.h>

#include <deal.II/numerics/vector_tools.h>

#include "tests.h"


template <int dim>
void
test()
{
  // setup
  parallel::distributed::Triangulation<dim> tria(MPI_COMM_WORLD);
  GridGenerator::hyper_cube(tria);
  tria.refine_global(2);

  const unsigned int    max_degree = 6 - dim;
  hp::FECollection<dim> fe_dgq;

  fe_dgq.push_back(FE_Nothing<dim>());
  for (unsigned int deg = 1; deg <= max_degree - 1; ++deg)
    fe_dgq.push_back(FE_Q<dim>(deg));

  hp::DoFHandler<dim> dgq_dof_handler(tria);

  // randomly assign fes
  for (const auto &cell : dgq_dof_handler.active_cell_iterators())
    if (cell->is_locally_owned())
      cell->set_active_fe_index(Testing::rand() % max_degree);
  dgq_dof_handler.distribute_dofs(fe_dgq);

  // prepare index sets
  IndexSet