Re: [deal.II] Re: Transfer vector of solutions

2021-01-29 Thread Marc Fehling
Hello Karthik,

it is perfectly reasonable to treat refinement for the initial mesh 
separately.

I noticed that both your refine and coarsen fractions always add up to 
100%. This is not a requirement! You can adjust both fractions 
independently until you are happy with the results.
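
As a sketch (assuming the `triangulation` and error vector from your code; 
the numbers are only placeholders), the two fractions are simply two 
independent arguments:

  GridRefinement::refine_and_coarsen_fixed_fraction(triangulation,
                                                    sum_estimated_error_per_cell,
                                                    0.30,  // refine fraction
                                                    0.05); // coarsen fraction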

Marc

On Friday, January 29, 2021 at 6:28:15 AM UTC-7 Karthi wrote:

> If I use a smaller fraction, then it wouldn't adapt the 
> initial conditions properly. 
>
> So I sort of fixed the issue by using an if statement as follows:
>
>   if (time.get_step_number() == 0)
>     GridRefinement::refine_and_coarsen_fixed_fraction(triangulation,
>                                                       sum_estimated_error_per_cell,
>                                                       0.80,
>                                                       0.20);
>   else
>     GridRefinement::refine_and_coarsen_fixed_fraction(triangulation,
>                                                       sum_estimated_error_per_cell,
>                                                       0.30,
>                                                       0.70);
>
>
> I have attached the current mesh refinement plots, so that someone else 
> might find this post useful.
>
>
> Best,
>
> Karthi.
>
> On Fri, Jan 29, 2021 at 12:51 AM Wolfgang Bangerth  
> wrote:
>
>> On 1/28/21 3:51 PM, Karthikeyan Chockalingam wrote:
>> > The mesh continues to refine; is this an acceptable behaviour?
>>
>> Yes, the mesh looks reasonable to me. I might try to refine a smaller 
>> fraction 
>> of the cells (it doesn't seem necessary to have this many cells refined) 
>> but 
>> the mesh seems adequate for the solution.
>>
>> Best
>>   W.
>>
>>
>> -- 
>> 
>> Wolfgang Bangerth  email: bang...@colostate.edu
>> www: http://www.math.colostate.edu/~bangerth/
>>



Re: [deal.II] Re: Transfer vector of solutions

2021-01-27 Thread Marc Fehling
Hi Karthik,

Glad we could help :-)

To Question 1:
So you estimate the error for the second component of each of your 
solutions, and then you add up the estimates on each cell. What is the 
reasoning behind summing up the errors of multiple solutions?
Assume that the error estimate of one of your solutions is larger than all 
others by orders of magnitude; then it would be the dominant part of your 
estimate sum, and the other solutions would have no influence at all.
Maybe it would be reasonable to only pick one of your estimates as a 
measure for grid refinement.
First, you should find a detailed answer to the question of why you want to 
refine your grid, and then decide how you want to do it.
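
If you do want a single combined indicator that is robust against one 
dominant solution, a hedged alternative (using the container names from 
your code below; note this swaps the sum for a cell-wise maximum) would be:

  for (unsigned int i = 0; i < num_index; ++i)
    for (unsigned int c = 0; c < triangulation.n_active_cells(); ++c)
      sum_estimated_error_per_cell[c] =
        std::max(sum_estimated_error_per_cell[c],
                 estimated_error_per_cell[i][c]);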

To Question 2:
`GridRefinement::refine_and_coarsen_fixed_number` always refines and 
coarsens the specified fraction of ALL cells. With this you can control 
the growth of the mesh size, rather than the reduction of the error. For 
the latter, you can use `GridRefinement::refine_and_coarsen_fixed_fraction`. 
Otherwise, I would suggest revising the fractions you are using for grid 
refinement.
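
For illustration, a sketch of the two variants (same argument list, 
different semantics; the numbers are placeholders):

  // Flags the 30% of cells with the largest indicators for refinement and
  // the 5% with the smallest for coarsening (controls mesh growth):
  GridRefinement::refine_and_coarsen_fixed_number(triangulation,
                                                  sum_estimated_error_per_cell,
                                                  0.30, 0.05);

  // Flags as few cells as needed to account for 30% of the total error
  // (controls error reduction):
  GridRefinement::refine_and_coarsen_fixed_fraction(triangulation,
                                                    sum_estimated_error_per_cell,
                                                    0.30, 0.05);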

Marc

On Wednesday, January 27, 2021 at 12:40:37 PM UTC-7 Karthi wrote:

> Hi Marc,
>
>
> Sorry for my delayed response, I was away for a couple of days. Thank you, 
> your suggestions were very helpful and it worked.
>
> I have a couple of follow-up questions regarding mesh refinement.
>
>
> Question (1)
>
>
> I employ FESystem<dim>(FE_Q<dim>(degree), 2) for all my sub-problems (and 
> the same dof_handler).
>
>
> Hence, I have a std::vector<Vector<double>> of solutions. I created a 
> container estimated_error_per_cell 
> for each solution and summed it all up in sum_estimated_error_per_cell, 
> which is then used in the Kelly error estimator. I used fe.component_mask to 
> account for only the second component of my solution while calculating the 
> error estimate. 
>
>
> My objective is to calculate a gradient error estimator as follows, where N 
> represents solution.size() (i.e. the number of sub-problems):
>
>
> \sum_{i=1}^{N} |\nabla \alpha_i|
>
>
>
> I don't know if the code below achieves this:
>
>
> std::vector<Vector<float>> estimated_error_per_cell(
>   num_index, Vector<float>(triangulation.n_active_cells()));
>
> Vector<float> sum_estimated_error_per_cell(triangulation.n_active_cells());
>
> FEValuesExtractors::Scalar alpha(1);
>
> for (unsigned int i = 0; i < num_index; i++)
>   KellyErrorEstimator<dim>::estimate(dof_handler,
>                                      QGauss<dim - 1>(fe.degree + 1),
>                                      {},
>                                      solution[i],
>                                      estimated_error_per_cell[i],
>                                      fe.component_mask(alpha));
>
> for (unsigned int i = 0; i < num_index; i++)
>   sum_estimated_error_per_cell += estimated_error_per_cell[i];
>
> GridRefinement::refine_and_coarsen_fixed_number(triangulation,
>                                                 sum_estimated_error_per_cell,
>                                                 0.60,
>                                                 0.40);
>
>
> Question (2)
>
>
> Please see the attachment of mesh refinement plots of the transient 
> solution. As you can see, the initial mesh refinement at time zero makes 
> sense but as the solution decays I was hoping the mesh would coarsen (but 
> it refines further). I am clearly doing something wrong. I need some help 
> in fixing this issue.
>
>
> Thank you!
>
>
> Karthi.
>
> On Mon, Jan 25, 2021 at 12:19 AM Marc Fehling  wrote:
>
>> Hi Karthi,
>>
>> if you work on the same DoFHandler, one SolutionTransfer object is 
>> sufficient.
>>
>> There are already member functions that take a container of solutions 
>> like yours as a parameter. Have a look at 
>> SolutionTransfer::prepare_for_coarsening_and_refinement 
>> <https://www.dealii.org/developer/doxygen/deal.II/classSolutionTransfer.html#ae6dc5e5a74b166b0dea35f5a64694e69>
>>  
>> and SolutionTransfer::interpolate 
>> <https://www.dealii.org/developer/doxygen/deal.II/classSolutionTransfer.html#ae067f9b520ed50c86a9ff4c7776d16cb>
>> .
>>
>> Best,
>> Marc
>>
>> On Saturday, January 23, 2021 at 6:30:34 AM UTC-7 Karthi wrote:
>>
>>> Dear All,
>>>
>>> I have a fourth order parabolic equation, which is split into two second 
>>> order equations. Hence I solve using components by 
>>> declaring FESystem<dim> fe(FE_Q<dim>(degree), 2). I have in total three such 
>>> sub-problems, which are coupled to each other in a semi-implicit manner. 
>>> Therefore I have a std::vector of solutions for the entire system:
>>>
>>> std::vector<Vector<double>> solution;
>>>
>>> In addition, I am employing mesh adaptivity.

[deal.II] Re: deal.ii installation on NERSC Cori

2021-01-26 Thread Marc Fehling
Hello,

I am not familiar with the details of the NERSC Cori machine. In its 
documentation, I found the following manual. I hope this helps.

I can only speak from my experience on HPC machines, where we had dedicated 
architecture environments in SLURM meant to compile for either Haswell or 
KNL machines. Programs compiled for one won't run on the other type of 
hardware.

Marc
On Tuesday, January 26, 2021 at 2:18:17 PM UTC-7 yanj...@umich.edu wrote:

> Hello,  
>
> I am trying to install the latest version of deal.II on NERSC Cori in 
> order to be able to run the PRISMS-PF framework, which is deal.II-based. 
> This framework only needs deal.II to be configured with the p4est and MPI 
> options. I would like to be able to run on KNL nodes.
>
> The main questions I have are:
>
> 1) I am not sure what the recommended environment variables and 
> modules are that I need to load before building deal.II on Cori.
>
> 2) I do not know whether I should do the installation on a KNL compute 
> node, rather than on a login node (which is Haswell).
>
> Below is the set of steps which we used for a previous installation. Are 
> these correct? If not, could you point out the problems? (We are observing 
> poor scalability beyond 1 node; any insight into this is appreciated.) 
> Thank you!
>
> *1. Building and configuring deal.II within a KNL node requires an 
> interactive job*
>
> $ salloc -N 1 -n 68 --account= -C knl -q interactive -t 4:00:00 
>
> *2. Load/unload modules*
>
> module unload cray-libsci/19.06.1
> module load cmake/3.14.4
> module swap craype-haswell craype-mic-knl
>
> Currently Loaded Modulefiles:
>
>   1) modules/3.2.11.4
>
>   2) altd/2.0
>
>   3) darshan/3.1.7
>
>   4) craype-network-aries
>
>   5) intel/19.0.3.199
>
>   6) craype/2.6.2
>
>   7) udreg/2.3.2-7.0.1.1_3.41__g8175d3d.ari
>
>   8) ugni/6.0.14.0-7.0.1.1_7.43__ge78e5b0.ari
>
>   9) pmi/5.0.14
>
>  10) dmapp/7.1.1-7.0.1.1_4.56__g38cf134.ari
>
>  11) gni-headers/5.0.12.0-7.0.1.1_6.33__g3b1768f.ari
>
>  12) xpmem/2.2.20-7.0.1.1_4.16__g0475745.ari
>
>  13) job/2.2.4-7.0.1.1_3.43__g36b56f4.ari
>
>  14) dvs/2.12_2.2.164-7.0.1.1_13.3__g354a5276
>
>  15) alps/6.6.58-7.0.1.1_6.13__g437d88db.ari
>
>  16) rca/2.2.20-7.0.1.1_4.56__g8e3fb5b.ari
>
>  17) atp/2.1.3
>
>  18) PrgEnv-intel/6.0.5
>
>  19) craype-mic-knl
>
>  20) cray-mpich/7.7.10
>
>  21) craype-hugepages2M
>
>  22) nano/2.6.3
>
>  23) cmake/3.14.4
>
>  24) Base-opts/2.4.139-7.0.1.1_4.78__gbb799dd.ari
>
> *3. Set environment variables*
>
> export XTPE_LINK_TYPE=dynamic
> export CRAYPE_LINK_TYPE=dynamic
>
> *4. Before installing dealii, we need to install the p4est dependency. To 
> install p4est do the following:*
>
> $ cd $HOME
>
> $ mkdir p4est_files
>
> $ mkdir p4est_install
>
> $ cd p4est_files
>
> *5. Download p4est tarball and setup script*
>
> $ wget http://p4est.github.io/release/p4est-2.2.tar.gz
>
> $ wget https://www.dealii.org/9.2.0/external-libs/p4est-setup.sh 
>
> *6. Install p4est using the setup script*
>
> $ chmod u+x p4est-setup.sh
>
> $ ./p4est-setup.sh p4est-2.2.tar.gz $HOME/p4est_install 
>
> *7. Once installed without errors, set the environment variable pointing 
> to the p4est installation directory.*
>
> $ export P4EST_DIR=$HOME/p4est_install 
>
> *8. Now ready to compile and install deal.ii with the following commands:*
>
> $ cd $HOME
>
> $ wget https://dealii.43-1.org/downloads/dealii-9.2.0.tar.gz
>
> $ tar xvzf dealii-9.2.0.tar.gz
>
> $ mkdir dealii_install
>
> $ mkdir build 
>
> $ cd build
>
> *9. Type the following command*
>
> $ cmake -DDEAL_II_WITH_MPI=ON -DDEAL_II_WITH_P4EST=ON 
> -DCMAKE_INSTALL_PREFIX=$HOME/dealii_install 
> $HOME/dealii-9.2.0 
>
> *10. Install and test deal.II*
>
> $ make install
>
> $ make test
>
> *11. Once this has completed successfully, deal.II has been installed. 
> Ensure that the directories where p4est and deal.II are installed are set 
> as environment variables. To do this, add the following lines to the file 
> .bashrc (or .bash_profile)*
>
> $ export P4EST_DIR=$HOME/p4est_install
>
> $ export DEAL_II_DIR=$HOME/dealii_install 
>



[deal.II] Re: Transfer vector of solutions

2021-01-24 Thread Marc Fehling
Hi Karthi,

if you work on the same DoFHandler, one SolutionTransfer object is 
sufficient.

There are already member functions that take a container of solutions like 
yours as a parameter. Have a look at 
SolutionTransfer::prepare_for_coarsening_and_refinement 

 
and SolutionTransfer::interpolate 

.
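
A minimal sketch of how these overloads could be used (assuming the names 
`dof_handler`, `triangulation`, `solution`, and `constraints` from your 
post, and that `setup_system()` resizes the vectors in `solution` to the 
new mesh):

  SolutionTransfer<dim> solution_trans(dof_handler);

  const std::vector<Vector<double>> previous_solution = solution;

  triangulation.prepare_coarsening_and_refinement();
  solution_trans.prepare_for_coarsening_and_refinement(previous_solution);
  triangulation.execute_coarsening_and_refinement();

  setup_system();

  solution_trans.interpolate(previous_solution, solution);
  for (auto &s : solution)
    constraints.distribute(s);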

Best,
Marc

On Saturday, January 23, 2021 at 6:30:34 AM UTC-7 Karthi wrote:

> Dear All,
>
> I have a fourth order parabolic equation, which is split into two second 
> order equations. Hence I solve using components by 
> declaring FESystem<dim> fe(FE_Q<dim>(degree), 2). I have in total three such 
> sub-problems, which are coupled to each other in a semi-implicit manner. 
> Therefore I have a std::vector of solutions for the entire system:
>
> std::vector<Vector<double>> solution;
>
> In addition, I am employing mesh adaptivity. After estimating the error 
> using Kelly, I would like to perform a solution transfer from old to new 
> mesh. Do I need to create a std::vector of SolutionTransfer objects; one 
> for each solution?
>
> The below code copied from step-26 seems to work. Is this the correct 
> approach?
>
> std::vector<SolutionTransfer<dim>> solution_trans(3, dof_handler);
>
> std::vector<Vector<double>> previous_solution(num_index);
>
> for (unsigned int i = 0; i < num_index; i++)
>   previous_solution[i] = solution[i];
>
> triangulation.prepare_coarsening_and_refinement();
>
> for (unsigned int i = 0; i < num_index; i++)
>   solution_trans[i].prepare_for_coarsening_and_refinement(previous_solution[i]);
>
> triangulation.execute_coarsening_and_refinement();
>
> setup_system();
>
> for (unsigned int i = 0; i < num_index; i++)
>   {
>     solution_trans[i].interpolate(previous_solution[i], solution[i]);
>     constraints.distribute(solution[i]);
>   }
>
> I look forward to your response. 
>
> Best regards,
>
> Karthi.
>



Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-31 Thread Marc Fehling
Kaushik,

in addition to what I just wrote, your example from above has revealed a 
bug in the `p::d::SolutionTransfer` class that Wolfgang and I were 
discussing in the course of this chatlog. Thank you very much for this! We 
are working on a solution for this issue.

I would encourage you to use the `p::d::CellDataTransfer` class for your 
use case as described in the last message.

Marc

On Thursday, December 31, 2020 at 6:02:00 PM UTC-7 Marc Fehling wrote:

> Hi Kaushik,
>
> Yes, this is possible by changing a cell from FE_Nothing to FE_Q using 
> p-refinement.
>
> You can do this with the method described in #11132 
> <https://github.com/dealii/dealii/pull/11132>: Imitate what 
> p::d::SolutionTransfer is doing with the more versatile tool 
> p::d::CellDataTransfer and consider the following:
>
>- Prepare a data container like `std::vector<Vector<double>>` where the 
>outer layer represents each cell in the current mesh, and the inner layer 
>corresponds to the dof values inside each cell.
>- Prepare data for the updated grid on the old grid. 
>   - On already activated cells, store dof values with 
>   `cell->get_interpolated_dof_values()`.
>   - On all other cells, store an empty container.
>- Register your data container for, and execute, coarsening and 
>refinement.
>- Interpolate the old solution on the new mesh.
>- Initialize your new solution vector with invalid values 
>   `std::numeric_limits<double>::infinity()`.
>   - On previously activated cells, extract the stored data with 
>   `cell->set_dof_values_by_interpolation()`.
>   - Skip all other cells which only have an empty container.
>- On non-ghosted solution vectors, call 
>`compress(VectorOperation::min)` to get correct values on ghost cells.
>
> This leaves you with a correctly interpolated solution on the new mesh, 
> where all newly activated dofs have the value `infinity`.
>
> You can now (and not earlier!!!) assign the values you have in mind for 
> the newly activated dofs. You may want to exchange data on ghost cells once 
> more with `GridTools::exchange_cell_data_to_ghosts()`.
>
> This is a fairly new feature in the library for a very specific use case, 
> so there is no dedicated class for transferring solutions across finite 
> element activation yet. If you successfully manage to make this work, would 
> you be willing to turn this into a class for the deal.II library?
>
> Marc
> On Wednesday, December 30, 2020 at 8:22:59 AM UTC-7 k.d...@gmail.com 
> wrote:
>
>> Hi all,
>> Thank you for your reply. 
>> Let me explain what I am trying to do and why. 
>> I want to solve a transient heat transfer problem of the additive 
>> manufacturing (AM) process. In AM processes, metal powder is deposited in 
>> layers, and then a laser source scans each layer and melts and bonds the 
>> powder to the layer underneath it. To simulate this layer by layer process, 
>> I want to start with a grid that covers all the layers, but initially, only 
>> the bottom-most layer is active and all other layers are inactive, and then 
>> as time progresses, I want to activate one layer at a time. I read on this 
>> mailing list that cell "birth" or "activation" can be done by changing a 
>> cell from FE_Nothing to FE_Q using p-refinement. I was trying to keep all 
>> cells of the grid initially to FE_Nothing except the bottom-most layer. And 
>> then convert one layer at a time to FE_Q. My questions are:
>> 1. Does this make sense? 
>> 2. I have to do two other things for this to work. (a) When a cell 
>> becomes FE_Q from FE_Nothing, I need to apply a non-zero initial value to 
>> the dofs that are activated for the first time. This is to simulate the 
>> metal powder being deposited at a specified temperature, e.g. room 
>> temperature. (b) The dofs that were shared between FE_Q and FE_Nothing 
>> cells before the p-refinement and are now shared between FE_Q and FE_Nothing 
>> cells after refinement should retain their values from before the 
>> refinement. 
>>
>> Another way to simulate this process would be to use a cell "awaking" 
>> process instead of cell "birth": I keep all cells FE_Q but apply a 
>> very low diffusivity to the cells of the layers that are not yet "awake". 
>> But this way, I have to solve a larger system in all time steps. I was 
>> hoping to save some computation time by only forming a system consisting of 
>> cells that are in the "active" layers. 
>>
>> Please let me know if this makes sense. Is there any other method in 
>> deal.II that can simulate such a process? 

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-31 Thread Marc Fehling


Hi Kaushik,

Yes, this is possible by changing a cell from FE_Nothing to FE_Q using 
p-refinement.

You can do this with the method described in #11132 
<https://github.com/dealii/dealii/pull/11132>: Imitate what 
p::d::SolutionTransfer is doing with the more versatile tool 
p::d::CellDataTransfer and consider the following:

   - Prepare a data container like `std::vector<Vector<double>>` where the outer 
   layer represents each cell in the current mesh, and the inner layer 
   corresponds to the dof values inside each cell.
   - Prepare data for the updated grid on the old grid. 
  - On already activated cells, store dof values with 
  `cell->get_interpolated_dof_values()`.
  - On all other cells, store an empty container.
   - Register your data container for, and execute, coarsening and 
   refinement.
   - Interpolate the old solution on the new mesh.
   - Initialize your new solution vector with invalid values 
   `std::numeric_limits<double>::infinity()`.
  - On previously activated cells, extract the stored data with 
  `cell->set_dof_values_by_interpolation()`.
  - Skip all other cells which only have an empty container.
   - On non-ghosted solution vectors, call `compress(VectorOperation::min)` 
   to get correct values on ghost cells.

This leaves you with a correctly interpolated solution on the new mesh, 
where all newly activated dofs have the value `infinity`.

You can now (and not earlier!!!) assign the values you have in mind for the 
newly activated dofs. You may want to exchange data on ghost cells once 
more with `GridTools::exchange_cell_data_to_ghosts()`.
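
A rough sketch of the pack and unpack steps around the CellDataTransfer 
calls (a sketch only: `old_solution`, `new_solution`, and `transferred_data` 
are assumed names, and the transfer object itself is set up as in #11132):

  // Pack on the old mesh: one container per cell, empty on FE_Nothing cells.
  std::vector<Vector<double>> data_to_transfer(triangulation.n_active_cells());
  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->is_locally_owned() && cell->get_fe().n_dofs_per_cell() > 0)
      {
        data_to_transfer[cell->active_cell_index()].reinit(
          cell->get_fe().n_dofs_per_cell());
        cell->get_interpolated_dof_values(
          old_solution, data_to_transfer[cell->active_cell_index()]);
      }

  // ... register the container, execute coarsening and refinement, and
  // redistribute dofs; then unpack on the new mesh:
  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->is_locally_owned() &&
        transferred_data[cell->active_cell_index()].size() > 0)
      cell->set_dof_values_by_interpolation(
        transferred_data[cell->active_cell_index()], new_solution);

  new_solution.compress(VectorOperation::min);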

This is a fairly new feature in the library for a very specific use case, 
so there is no dedicated class for transferring solutions across finite 
element activation yet. If you successfully manage to make this work, would 
you be willing to turn this into a class for the deal.II library?

Marc
On Wednesday, December 30, 2020 at 8:22:59 AM UTC-7 k.d...@gmail.com wrote:

> Hi all,
> Thank you for your reply. 
> Let me explain what I am trying to do and why. 
> I want to solve a transient heat transfer problem of the additive 
> manufacturing (AM) process. In AM processes, metal powder is deposited in 
> layers, and then a laser source scans each layer and melts and bonds the 
> powder to the layer underneath it. To simulate this layer by layer process, 
> I want to start with a grid that covers all the layers, but initially, only 
> the bottom-most layer is active and all other layers are inactive, and then 
> as time progresses, I want to activate one layer at a time. I read on this 
> mailing list that cell "birth" or "activation" can be done by changing a 
> cell from FE_Nothing to FE_Q using p-refinement. I was trying to keep all 
> cells of the grid initially to FE_Nothing except the bottom-most layer. And 
> then convert one layer at a time to FE_Q. My questions are:
> 1. Does this make sense? 
> 2. I have to do two other things for this to work. (a) When a cell becomes 
> FE_Q from FE_Nothing, I need to apply a non-zero initial value to the dofs 
> that are activated for the first time. This is to simulate the 
> metal powder being deposited at a specified temperature, e.g. room temperature. 
> (b) The dofs that were shared between FE_Q and FE_Nothing cells before 
> the p-refinement and are now shared between FE_Q and FE_Nothing cells after 
> refinement should retain their values from before the refinement. 
>
> Another way to simulate this process would be to use a cell "awaking" 
> process instead of cell "birth": I keep all cells FE_Q but apply a 
> very low diffusivity to the cells of the layers that are not yet "awake". 
> But this way, I have to solve a larger system in all time steps. I was 
> hoping to save some computation time by only forming a system consisting of 
> cells that are in the "active" layers. 
>
> Please let me know if this makes sense. Is there any other method in deal.II 
> that can simulate such a process? 
> Thank you very much and happy holidays.
> Kaushik 
>
>
> On Tue, Dec 29, 2020 at 12:26 PM Wolfgang Bangerth  
> wrote:
>
>> On 12/28/20 5:11 PM, Marc Fehling wrote:
>> > 
>> > In case a FE_Nothing has been configured to dominate, the solution 
>> should be 
>> > continuous on the interface if I understood correctly, i.e., zero on 
>> the face. 
>> > I will write a few tests to see if this is actually automatically the 
>> case in 
>> > user applications. If so, this check for domination will help other 
>> users to 
>> > avoid this pitfall :)
>> > 
>>
>> More tests = more better :-)
>> Cheers
>>   W.
>>
>> -- 
>> --

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-28 Thread Marc Fehling
The FiniteElementDomination logic in the codim=0 case would indeed provide 
a cheap a priori check in this context.

In case a FE_Nothing has been configured to dominate, the solution should 
be continuous on the interface if I understood correctly, i.e., zero on the 
face. I will write a few tests to see if this is actually automatically the 
case in user applications. If so, this check for domination will help other 
users to avoid this pitfall :)

Marc

On Monday, December 28, 2020 at 4:13:40 PM UTC-7 Wolfgang Bangerth wrote:

>
> > The problem here is that the solution is not continuous across the face 
> of a 
> > FE_Q and a FE_Nothing element. If a FE_Nothing is turned into a FE_Q 
> element, 
> > the solution is suddenly expected to be continuous, and we have no rule 
> in 
> > deal.II yet how to continue in the situation. In my opinion, we should 
> throw 
> > an assertion in this case.
>
> Denis actually thought of that already a while back. It wasn't well 
> documented, but see here now:
> https://github.com/dealii/dealii/pull/11430
>
> Does that also address what you wanted to do in your patch?
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>



Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-28 Thread Marc Fehling
Hi Wolfgang,

your explanation indeed makes more sense in the context of piecewise 
polynomials :)

The problem here is that the solution is not continuous across the face of 
a FE_Q and a FE_Nothing element. If a FE_Nothing is turned into a FE_Q 
element, the solution is suddenly expected to be continuous, and we have no 
rule in deal.II yet for how to continue in this situation. In my opinion, we 
should throw an assertion in this case.

I have a patch for the p::d case in mind that will warn users about this: 
we should reinit the solution vector with NaNs and then only overwrite the 
entries once.
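
As a sketch of that idea for a serial vector (assuming a `Vector<double> 
solution` and that <algorithm> and <limits> are included; std::fill is used 
here instead of scalar assignment, which checks for finite values):

  solution.reinit(dof_handler.n_dofs());
  std::fill(solution.begin(), solution.end(),
            std::numeric_limits<double>::quiet_NaN());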

I don't know if we have such a test for the general SolutionTransfer class. 
I will check that.

Marc

On Monday, December 28, 2020 at 1:39:33 PM UTC-7 Wolfgang Bangerth wrote:

> On 12/27/20 8:48 PM, Marc Fehling wrote:
> > 
> > 2) I did not know you were trying to interpolate a FENothing element 
> into a 
> > FEQ element. This should not be possible, as you can not interpolate 
> > information from simply 'nothing', and some assertion should be 
> triggered 
> > while trying to do so. The other way round should be possible, i.e., 
> > interpolation from FEQ to FENothing, since you will simply 'forget' what 
> has 
> > been on the old cell.
>
> In hindsight, FE_Nothing was maybe a poorly named class. It should really 
> have 
> been FE_Zero: A finite element space that contains only a single function 
> -- 
> the zero function. Because it only contains one function, it requires no 
> degrees of freedom.
>
> So interpolation from FE_Nothing to FE_Q is well defined if you take this 
> point of view, and projection from any finite element space to FE_Nothing 
> is also.
>
> Best
> Wolfgang
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>



Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-27 Thread Marc Fehling
Hi Kaushik,

1) Yes, this is possible, but tricky: `SolutionTransfer` is not capable of 
this feature, and you need to do it manually with the more versatile class 
`CellDataTransfer`. A way to do it has been discussed in #11132 
<https://github.com/dealii/dealii/pull/11132>.

2) I did not know you were trying to interpolate a FENothing element into a 
FEQ element. This should not be possible, as you can not interpolate 
information from simply 'nothing', and some assertion should be triggered 
while trying to do so. The other way round should be possible, i.e., 
interpolation from FEQ to FENothing, since you will simply 'forget' what 
has been on the old cell.

Did you run your program in debug or release mode? If you ran it in debug 
mode without an assertion being triggered, please tell me. The user should 
be warned that they are not allowed to interpolate from a FENothing object 
to a FEQ element (or any element with nodes).

I think what is happening here is that we initialize some container with 
zeros by default, and accidentally overwrite the corresponding dof values. 
I suspect that if you assign both topmost cells to FEQ and the lower ones 
to FENothing, your solution would look okay due to the way we iterate over 
cells (Z-order). Or in short, we first unpack the lower cells, and then 
unpack the upper ones, which may overwrite dof values. This is undefined 
behavior, and we should warn the user about that.

Best,
Marc

On Sunday, December 27, 2020 at 2:28:50 PM UTC-7 k.d...@gmail.com wrote:

> Hello Marc,
> Thank you very much. 
> I have modified my test code as you suggested and it is working fine now. 
> That the code is attached. I have a few more questions that I added to the 
> attached PNG file below along with the results from the test code. 
> 1. Is it possible to specify an initial value on dofs that are activated 
> when a FE_Nothing cell becomes a FE_Q cell?
> 2. What happens to the dofs that were shared between a FE_Nothing cell and 
> a FE_Q cell before the p-refinement, and are shared between two FE_Q 
> elements after p-refinement? Are those always set to zeros after the 
> refinement? 
>
> [image: image.png]
> Thank you,
> Kaushik 
>
> On Wed, Dec 23, 2020 at 5:35 PM Marc Fehling  wrote:
>
>> Hi Kaushik,
>>
>> Be careful on what you are doing here: You prepare your solution to be 
>> transferred on refinement, but at the point where you wish to interpolate 
>> your solution the mesh differs from the one your SolutionTransfer object 
>> expects to encounter. That is because you changed the assigned finite 
>> element between executing the refinement and interpolating the old solution 
>> to the new mesh.
>>
>> You are basically doing two steps here at once: You perform h-refinement 
>> as your first step, and then alter your function space by assigning 
>> different finite elements (p-adaptation). I would suggest you split your 
>> intentions into two separate steps.
>>
>> First, only perform h-refinement on that one cell and interpolate the 
>> entire solution on the new grid. Next, tell your DoFHandler object that you 
>> intend to change the finite element on some of your cells by assigning a 
>> corresponding future finite element index to them (see here 
>> <https://www.dealii.org/developer/doxygen/deal.II/classDoFCellAccessor.html#ae4d4d8562cb47b70b797369b8872b04d>),
>>  
>> prepare your solution for refinement using a SolutionTransfer object once 
>> more, and finally execute refinement again. This should accomplish what you 
>> wish to achieve.
>>
>> In addition, there have been issues using p::d::SolutionTransfer objects 
>> with FENothing elements which have been fixed in #10592 
>> <https://github.com/dealii/dealii/pull/10592>. Please incorporate these 
>> upstream fixes into your deal.II library by building it on the current 
>> master branch.
>>
>> Hope this helps!
>>
>> Best,
>> Marc
>>
>> On Wednesday, December 23, 2020 at 2:33:09 PM UTC-7 k.d...@gmail.com 
>> wrote:
>>
>>> Hi Marc:
>>> Thank you again for your help.
>>> I have another problem. 
>>> A small test code is attached. 
>>>
>>> I have one cell with an FE_Q element. I refine it into four cells and then 
>>> assign FE_Q to two of them and FE_Nothing to the other two child cells. 
>>> Then when I try to transfer the solution, the code aborts. 
>>>
>>> Is this a limitation? 
>>>
>>> Thank you very much,
>>> Kaushik 
>>>
>>>
>>> On Wed, Dec 9, 2020 at 7:16 PM Kaushik Das  wrote:
>>>
>>>> Thank you, Mark. I just built dealii from the source (deal.II-9.3.0-pre). 

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-23 Thread Marc Fehling
Hi Kaushik,

Be careful on what you are doing here: You prepare your solution to be 
transferred on refinement, but at the point where you wish to interpolate 
your solution the mesh differs from the one your SolutionTransfer object 
expects to encounter. That is because you changed the assigned finite 
element between executing the refinement and interpolating the old solution 
to the new mesh.

You are basically doing two steps here at once: You perform h-refinement as 
your first step, and then alter your function space by assigning different 
finite elements (p-adaptation). I would suggest you split your intentions 
into two separate steps.

First, only perform h-refinement on that one cell and interpolate the 
entire solution on the new grid. Next, tell your DoFHandler object that you 
intend to change the finite element on some of your cells by assigning a 
corresponding future finite element index to them (see here 
<https://www.dealii.org/developer/doxygen/deal.II/classDoFCellAccessor.html#ae4d4d8562cb47b70b797369b8872b04d>),
 
prepare your solution for refinement using a SolutionTransfer object once 
more, and finally execute refinement again. This should accomplish what you 
wish to achieve.
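
A hedged sketch of that second step (`fe_q_index` standing in for the index 
of the FE_Q element in your hp::FECollection, and `cell_should_be_activated` 
for whatever criterion you use; both names are placeholders):

  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->is_locally_owned() && cell_should_be_activated(cell))
      cell->set_future_fe_index(fe_q_index);

  solution_transfer.prepare_for_coarsening_and_refinement(old_solution);
  triangulation.execute_coarsening_and_refinement();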

In addition, there have been issues using p::d::SolutionTransfer objects 
with FENothing elements which have been fixed in #10592 
<https://github.com/dealii/dealii/pull/10592>. Please incorporate these 
upstream fixes into your deal.II library by building it on the current 
master branch.

Hope this helps!

Best,
Marc

On Wednesday, December 23, 2020 at 2:33:09 PM UTC-7 k.d...@gmail.com wrote:

> Hi Marc:
> Thank you again for your help.
> I have another problem. 
> A small test code is attached. 
>
> I have one cell with an FE_Q element. I refine it into four cells and then 
> assign FE_Q to two of them and FE_Nothing to the other two child cells. 
> Then when I try to transfer the solution, the code aborts. 
>
> Is this a limitation? 
>
> Thank you very much,
> Kaushik 
>
>
> On Wed, Dec 9, 2020 at 7:16 PM Kaushik Das  wrote:
>
>> Thank you, Mark. I just built dealii from the source (deal.II-9.3.0-pre). 
>> And my little test is passing now. 
>> Thanks for the help.
>> -Kaushik 
>>
>> On Wed, Dec 9, 2020 at 4:36 PM Kaushik Das  wrote:
>>
>>> Thank you Mark. 
>>> I am using the dealii lib that I got via apt-get from 
>>> deal.ii-9.2.0-backports.
>>> I used PETSc, and the abort occurred even on 1 cpu. I tried 2, 3, and 6 
>>> cpus and all aborted similarly. 
>>>
>>> I will get the latest master branch and build that. 
>>>
>>>
>>> Thanks,
>>> Kaushik
>>>
>>> On Wed, Dec 9, 2020 at 4:23 PM Marc Fehling  wrote:
>>>
>>>> From your stacktrace I can see you are using PETSc and deal.II 9.2.0 
>>>> which already incorporates the specified patch. Would you try to build the 
>>>> actual master branch anyways?
>>>> On Wednesday, December 9, 2020 at 2:11:59 PM UTC-7 Marc Fehling wrote:
>>>>
>>>>> Hi Kaushik,
>>>>>
>>>>> I am unable to reproduce your problem with the code you provided on 
>>>>> the latest build of deal.II and Trilinos.
>>>>>
>>>>>- On how many processes did you run your program?
>>>>>- Did you use PETSc or Trilinos?
>>>>>- Could you try to build deal.II on the latest master branch? 
>>>>>There is a chance that your issue has been solved upstream. Chances 
>>>>> are 
>>>>>high that fix #8860 <https://github.com/dealii/dealii/pull/8860> 
>>>>>and the changes made to `get_interpolated_dof_values()` will solve 
>>>>> your 
>>>>>problem.
>>>>>
>>>>> Marc
>>>>> On Wednesday, December 9, 2020 at 7:14:57 AM UTC-7 k.d...@gmail.com 
>>>>> wrote:
>>>>>
>>>>>> Hi Marc and Bruno,
>>>>>> I was able to reproduce this abort on an even simpler test. Please 
>>>>>> see the attached file. 
>>>>>>
>>>>>> Initial grid:
>>>>>>  /*
>>>>>>  * -----------
>>>>>>  * |  0 |  0 |
>>>>>>  * -----------
>>>>>>  * |  1 |  1 |   0 - FE_Q, 1 - FE_Nothing
>>>>>>  * -----------
>>>>>>  */
>>>>>>
>>>>>> /* Set refine flags:
>>>>>>  * -----------
>>>>>>  * |  R |  R |   FE_Q
>>>>>>  * -----------
>>>>>>  * |    |    |   FE_Nothing
>>>>>>  * -----------
>>>>>>  */

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-09 Thread Marc Fehling
From your stacktrace I can see you are using PETSc and deal.II 9.2.0, which 
already incorporates the specified patch. Would you try to build the current 
master branch anyway?
On Wednesday, December 9, 2020 at 2:11:59 PM UTC-7 Marc Fehling wrote:

> Hi Kaushik,
>
> I am unable to reproduce your problem with the code you provided on the 
> latest build of deal.II and Trilinos.
>
>- On how many processes did you run your program?
>- Did you use PETSc or Trilinos?
>- Could you try to build deal.II on the latest master branch? There is 
>a chance that your issue has been solved upstream. Chances are high that 
>fix #8860 <https://github.com/dealii/dealii/pull/8860> and the changes 
>made to `get_interpolated_dof_values()` will solve your problem.
>
> Marc
> On Wednesday, December 9, 2020 at 7:14:57 AM UTC-7 k.d...@gmail.com wrote:
>
>> Hi Marc and Bruno,
>> I was able to reproduce this abort on an even simpler test. Please see 
>> the attached file. 
>>
>> Initial grid:
>>  /*
>>  * -----------
>>  * |  0 |  0 |
>>  * -----------
>>  * |  1 |  1 |   0 - FE_Q, 1 - FE_Nothing
>>  * -----------
>>  */
>>
>> /* Set refine flags:
>>  * -----------
>>  * |  R |  R |   FE_Q
>>  * -----------
>>  * |    |    |   FE_Nothing
>>  * -----------
>>  */
>>
>> Then refine and transfer the solution. During 
>> execute_coarsening_and_refinement, it aborts. 
>>
>> Here is a stack trace:
>> 
>>
>> An error occurred in line <1167> of file <...> in function
>> Number& dealii::Vector<Number>::operator()(dealii::Vector<Number>::size_type) 
>> [with Number = double; dealii::Vector<Number>::size_type = unsigned int]
>> The violated condition was: 
>> static_cast<std::common_type<decltype(i), decltype(size())>::type>(i) < 
>> static_cast<std::common_type<decltype(i), decltype(size())>::type>(size())
>> Additional information: 
>> Index 0 is not in the half-open range [0,0). In the current case, 
>> this half-open range is in fact empty, suggesting that you are accessing an 
>> element of an empty collection such as a vector that has not been set to 
>> the correct size.
>>
>> Stacktrace:
>> ---
>> #0  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0: 
>> dealii::Vector<double>::operator()(unsigned int)
>> #1  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0: 
>> #2  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0: 
>> dealii::parallel::distributed::SolutionTransfer<2, 
>> dealii::PETScWrappers::MPI::Vector, dealii::hp::DoFHandler<2, 2> 
>> >::pack_callback(dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&, 
>> dealii::Triangulation<2, 2>::CellStatus)
>> [frames #3-#7, garbled in the archive: std::function wrappers around the 
>> register_data_attach() lambda of the same SolutionTransfer class]

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-09 Thread Marc Fehling
Hi Kaushik,

I am unable to reproduce your problem with the code you provided on the 
latest build of deal.II and Trilinos.

   - On how many processes did you run your program?
   - Did you use PETSc or Trilinos?
   - Could you try to build deal.II on the latest master branch? There is a 
   chance that your issue has been solved upstream. Chances are high that fix 
   #8860 <https://github.com/dealii/dealii/pull/8860> and the changes made 
   to `get_interpolated_dof_values()` will solve your problem.
   
Marc
On Wednesday, December 9, 2020 at 7:14:57 AM UTC-7 k.d...@gmail.com wrote:

> Hi Marc and Bruno,
> I was able to reproduce this abort on an even simpler test. Please see the 
> attached file. 
>
> Initial grid:
>  /*
>  * -----------
>  * |  0 |  0 |
>  * -----------
>  * |  1 |  1 |   0 - FE_Q, 1 - FE_Nothing
>  * -----------
>  */
>
> /* Set refine flags:
>  * -----------
>  * |  R |  R |   FE_Q
>  * -----------
>  * |    |    |   FE_Nothing
>  * -----------
>  */
>
> Then refine and transfer the solution. During 
> execute_coarsening_and_refinement, it aborts. 
>
> Here is a stack trace:
> 
>
> An error occurred in line <1167> of file <...> in function
> Number& dealii::Vector<Number>::operator()(dealii::Vector<Number>::size_type) 
> [with Number = double; dealii::Vector<Number>::size_type = unsigned int]
> The violated condition was: 
> static_cast<std::common_type<decltype(i), decltype(size())>::type>(i) < 
> static_cast<std::common_type<decltype(i), decltype(size())>::type>(size())
> Additional information: 
> Index 0 is not in the half-open range [0,0). In the current case, this 
> half-open range is in fact empty, suggesting that you are accessing an 
> element of an empty collection such as a vector that has not been set to 
> the correct size.
>
> Stacktrace:
> ---
> #0  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0: 
> dealii::Vector<double>::operator()(unsigned int)
> #1  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0: 
> #2  /lib/x86_64-linux-gnu/libdeal.ii.g.so.9.2.0: 
> dealii::parallel::distributed::SolutionTransfer<2, 
> dealii::PETScWrappers::MPI::Vector, dealii::hp::DoFHandler<2, 2> 
> >::pack_callback(dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&, 
> dealii::Triangulation<2, 2>::CellStatus)
> [frames #3-#8, garbled in the archive: std::function wrappers around the 
> register_data_attach() lambda, down to 
> dealii::parallel::distributed::Triangulation<2, 2>::DataTransfer::pack_data()]

[deal.II] Re: Periodic boundary conditions : Error using GridTools::collect_periodic_facepairs

2020-12-08 Thread Marc Fehling
Hi Aaditya,

on first look your implementation looks good to me. Does the same error 
occur when you are using a standard `Triangulation` object instead of a 
`parallel::distributed::Triangulation`?

As far as I know, the direction parameter does not matter for scalar fields 
(see also step-45).
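
For reference, a minimal sketch of the usual pattern from step-45 (the 
boundary ids 0 and 1 for one periodic pair are assumptions; they have to 
match the ids set on your coarse mesh):

  std::vector<GridTools::PeriodicFacePair<
    typename parallel::distributed::Triangulation<dim>::cell_iterator>>
    matched_pairs;

  GridTools::collect_periodic_faces(triangulation,
                                    /*b_id1=*/0,
                                    /*b_id2=*/1,
                                    /*direction=*/0,
                                    matched_pairs);

  triangulation.add_periodicity(matched_pairs);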

Would you mind sharing your source code?

Best,
Marc

On Saturday, December 5, 2020 at 10:12:00 PM UTC-7 aadit...@gmail.com wrote:

> Hi,
> I am trying to simulate a reaction-diffusion system containing two 
> species on a square domain (structured mesh) with periodic boundary 
> conditions enforced on the concentration fields on opposite edges. After 
> testing my implementation on a single processor, I obtain the following 
> error message: 
>
> *---------------------------------------------------------
> TimerOutput objects finalize timed values printed to the screen by 
> communicating over MPI in their destructors. Since an exception is 
> currently uncaught, this synchronization (and subsequent output) will 
> be skipped to avoid a possible deadlock.
> ---------------------------------------------------------
>
> Exception on processing: 
>
> An error occurred in line <2107> of file <...> in function 
> void dealii::GridTools::match_periodic_face_pairs(
>   std::set<std::pair<CellIterator, unsigned int> >&, 
>   std::set<std::pair<typename dealii::identity<CellIterator>::type, 
>                      unsigned int> >&, 
>   int, 
>   std::vector<dealii::GridTools::PeriodicFacePair<CellIterator> >&, 
>   const dealii::Tensor<1, typename FaceIterator::AccessorType::space_dimension>&, 
>   const dealii::FullMatrix<double>&) 
> [with CellIterator = dealii::TriaIterator<dealii::CellAccessor<2, 2> >; 
>  typename dealii::identity<CellIterator>::type = 
>    dealii::TriaIterator<dealii::CellAccessor<2, 2> >; 
>  typename FaceIterator::AccessorType = dealii::CellAccessor<2, 2>]
> The violated condition was: 
>   n_matches == pairs1.size() && pairs2.size() == 0 
> Additional information: 
>   Unmatched faces on periodic boundaries
>
> Aborting! *
>
>
> *Additional Details*: The error message is associated with the 
> *create_mesh()* method of the problem class, whose implementation I have 
> included below. The part highlighted in red is the cause of the error 
> message:
>
> template <int dim>
> void Schnakenberg<dim>::create_mesh()
> {
>   TimerOutput::Scope t(computing_timer, "setup");
>
>   

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-08 Thread Marc Fehling
Hi Kaushik,

the `p::d::SolutionTransfer` class should be able to deal with `FENothing` 
elements in your example. The tricky cases are when you're coarsening a 
`FENothing` element with others as Bruno already pointed out 
(h-coarsening), or if you change a `FENothing` element to a different 
element in the process (p-adaptation). But with your recent modification, 
you avoid these cases.

I suspect that something else causes the error in your program. Could you 
run a debugger on this and give us the backtrace for the exception? This 
will give us more clues to figure out what goes wrong!

Best,
Marc
On Tuesday, December 8, 2020 at 7:13:29 AM UTC-7 k.d...@gmail.com wrote:

> Hi Bruno:
> Thanks for pointing that out. 
> I tried to not refine FE_Nothing cells by modifying the refine loop 
> (the modified test is attached):
>
> for (auto &cell : dgq_dof_handler.active_cell_iterators())
>   if (cell->is_locally_owned() && cell->active_fe_index() != 0)
>     {
>       if (counter > ((dim == 2) ? 4 : 8))
>         cell->set_coarsen_flag();
>       else
>         cell->set_refine_flag();
>     }
>
> But this still aborts. 
> Kaushik 
>
> On Tue, Dec 8, 2020 at 8:36 AM Bruno Turcksin  
> wrote:
>
>> Hi,
>>
>> Are you sure that your test makes sense? You randomly assign FE indices 
>> to cells then you refine and coarsen cells. But what does it mean to 
>> coarsen 4 cells together when one of them is FE_Nothing? What would you 
>> expect to happen?
>>
>> Best,
>>
>> Bruno
>>
>> On Monday, December 7, 2020 at 5:54:10 PM UTC-5 k.d...@gmail.com wrote:
>>
>>> Hi all:
>>>
>>> I modified the test tests/mpi/solution_transfer_05.cc to add a 
>>> FE_Nothing element to the FECollection. I also modified the other elements 
>>> to FE_Q. 
>>>
>>> When I run the test, it's aborting in solution transfer. 
>>> Are there any limitations in using FE_Nothing with parallel solution 
>>> transfer? 
>>> The modified test is attached.
>>> Thank you very much.
>>> Kaushik 
>>>
>>>  
>>> An error occurred in line <1167> of file <...> in function
>>> Number& dealii::Vector<Number>::operator()(dealii::Vector<Number>::size_type) 
>>> [with Number = double; dealii::Vector<Number>::size_type = unsigned int]
>>> The violated condition was:
>>> static_cast<std::common_type<decltype(i), decltype(size())>::type>(i) < 
>>> static_cast<std::common_type<decltype(i), decltype(size())>::type>(size())
>>> Additional information:
>>> Index 0 is not in the half-open range [0,0). In the current case, 
>>> this half-open range is in fact empty, suggesting that you are accessing an 
>>> element of an empty collection such as a vector that has not been set to 
>>> the correct size.
>>>
>>>



Re: [deal.II] Re: DEAL.II INSTALLATION ERROR

2020-11-26 Thread Marc Fehling


Pushkar,

would you check the `detailed.log` file in the build folder in which you 
configured deal.II with cmake and find the configuration details for PETSc.

Does the PETSc folder specified in the log file contain the requested 
header files? The header files should be located in an include folder, 
i.e., /path/to/petsc/include

Best,
Marc

On Thursday, November 26, 2020 at 7:47:50 PM UTC-7 pushkar...@gmail.com 
wrote:

> Yes, I did follow the above instructions, but I am still facing the same 
> issue. 
>
> On Fri, Nov 27, 2020 at 3:53 AM Marc Fehling  wrote:
>
>> Hi Pushkar!
>>
>> It appears that PETSc has been found during the configuration of deal.II 
>> with cmake, but the header files of the PETSc libraries cannot be found 
>> during compilation.
>>
>> Did you follow all instructions on how to interface deal.II to PETSc on 
>> this particular guide 
>> <https://www.dealii.org/9.0.0/external-libs/petsc.html>?
>>
>> Marc
>> On Thursday, November 26, 2020 at 5:24:58 AM UTC-7 pushkar...@gmail.com 
>> wrote:
>>
>>> Dear deal.II community,
>>>
>>> I am installing deal.II 9.0.0 as I wish to run PRISMS-PF on it, wherein 
>>> during the process I ran into certain issues:
>>>  In file included from 
>>> /home/pushkar/dealii-9.0.0/include/deal.II/lac/petsc_parallel_vector.h:28,
>>>  from 
>>> /home/pushkar/dealii-9.0.0/source/dofs/dof_accessor_get.cc:21:
>>> /home/pushkar/dealii-9.0.0/include/deal.II/lac/petsc_vector_base.h:32:12: 
>>> fatal error: petscvec.h: No such file or directory
>>>32 | #  include <petscvec.h>
>>>   |            ^~~~~~~~~~~~
>>> compilation terminated.
>>> make[2]: *** [source/dofs/CMakeFiles/obj_dofs_debug.dir/build.make:160: 
>>> source/dofs/CMakeFiles/obj_dofs_debug.dir/dof_accessor_get.cc.o] Error 1
>>> make[1]: *** [CMakeFiles/Makefile2:3588: 
>>> source/dofs/CMakeFiles/obj_dofs_debug.dir/all] Error 2
>>> make[1]: *** Waiting for unfinished jobs
>>> [ 48%] Building CXX object 
>>> source/base/CMakeFiles/obj_base_debug.dir/multithread_info.cc.o
>>> In file included from 
>>> /home/pushkar/dealii-9.0.0/include/deal.II/lac/petsc_parallel_vector.h:28,
>>>  from 
>>> /home/pushkar/dealii-9.0.0/source/algorithms/operator.cc:30:
>>> /home/pushkar/dealii-9.0.0/include/deal.II/lac/petsc_vector_base.h:32:12: 
>>> fatal error: petscvec.h: No such file or directory
>>>32 | #  include <petscvec.h>
>>>   |            ^~~~~~~~~~~~
>>> compilation terminated.
>>> make[2]: *** 
>>> [source/algorithms/CMakeFiles/obj_algorithms_debug.dir/build.make:82: 
>>> source/algorithms/CMakeFiles/obj_algorithms_debug.dir/operator.cc.o] Error 1
>>> make[1]: *** [CMakeFiles/Makefile2:4209: 
>>> source/algorithms/CMakeFiles/obj_algorithms_debug.dir/all] Error 2
>>> [ 48%] Building CXX object 
>>> source/base/CMakeFiles/obj_base_debug.dir/named_selection.cc.o
>>> In file included from 
>>> /home/pushkar/dealii-9.0.0/include/deal.II/lac/petsc_parallel_vector.h:28,
>>>  from 
>>> /home/pushkar/dealii-9.0.0/include/deal.II/lac/petsc_parallel_block_vector.h:24,
>>>  from 
>>> /home/pushkar/dealii-9.0.0/source/multigrid/mg_base.cc:21:
>>> /home/pushkar/dealii-9.0.0/include/deal.II/lac/petsc_vector_base.h:32:12: 
>>> fatal error: petscvec.h: No such file or directory
>>>32 | #  include <petscvec.h>
>>>   |            ^~~~~~~~~~~~
>>> compilation terminated.
>>> make[2]: *** 
>>> [source/multigrid/CMakeFiles/obj_multigrid_debug.dir/build.make:82: 
>>> source/multigrid/CMakeFiles/obj_multigrid_debug.dir/mg_base.cc.o] Error 1
>>> make[1]: *** [CMakeFiles/Makefile2:4047: 
>>> source/multigrid/CMakeFiles/obj_multigrid_debug.dir/all] Error 2
>>> [ 48%] Building CXX object 
>>> source/distributed/CMakeFiles/obj_distributed_debug.dir/solution_transfer.cc.o
>>> /home/pushkar/dealii-9.0.0/source/base/mpi.cc:38:12: fatal error: 
>>> petscsys.h: No such file or directory
>>>38 | #  include <petscsys.h>
>>>   |            ^~~~~~~~~~~~
>>> compilation terminated.
>>> make[2]: *** [source/base/CMakeFiles/obj_base_debug.dir/build.make:368: 
>>> source/base/CMakeFiles/obj_base_debug.dir/mpi.cc.o] Error 1
>>> make[2]: *** Waiting for unfinished jobs
>>> [ 48%] Building CXX object 
>>> source/distributed/CMakeFiles/obj_distributed_debug.dir/tria.cc.o
>>> In file included from 
>>> /home/pushka

[deal.II] Re: DEAL.II INSTALLATION ERROR

2020-11-26 Thread Marc Fehling
Hi Pushkar!

It appears that PETSc has been found during the configuration of deal.II 
with cmake, but the header files of the PETSc libraries cannot be found 
during compilation.

Did you follow all instructions on how to interface deal.II to PETSc on 
this particular guide?

Marc
On Thursday, November 26, 2020 at 5:24:58 AM UTC-7 pushkar...@gmail.com 
wrote:

> Dear deal.II community,
>
> I am installing deal.II 9.0.0 as I wish to run PRISMS-PF on it, wherein 
> during the process I ran into certain issues:
>  In file included from 
> /home/pushkar/dealii-9.0.0/include/deal.II/lac/petsc_parallel_vector.h:28,
>  from 
> /home/pushkar/dealii-9.0.0/source/dofs/dof_accessor_get.cc:21:
> /home/pushkar/dealii-9.0.0/include/deal.II/lac/petsc_vector_base.h:32:12: 
> fatal error: petscvec.h: No such file or directory
>32 | #  include <petscvec.h>
>   |            ^~~~~~~~~~~~
> compilation terminated.
> make[2]: *** [source/dofs/CMakeFiles/obj_dofs_debug.dir/build.make:160: 
> source/dofs/CMakeFiles/obj_dofs_debug.dir/dof_accessor_get.cc.o] Error 1
> make[1]: *** [CMakeFiles/Makefile2:3588: 
> source/dofs/CMakeFiles/obj_dofs_debug.dir/all] Error 2
> make[1]: *** Waiting for unfinished jobs
> [ 48%] Building CXX object 
> source/base/CMakeFiles/obj_base_debug.dir/multithread_info.cc.o
> In file included from 
> /home/pushkar/dealii-9.0.0/include/deal.II/lac/petsc_parallel_vector.h:28,
>  from 
> /home/pushkar/dealii-9.0.0/source/algorithms/operator.cc:30:
> /home/pushkar/dealii-9.0.0/include/deal.II/lac/petsc_vector_base.h:32:12: 
> fatal error: petscvec.h: No such file or directory
>32 | #  include <petscvec.h>
>   |^~~~
> compilation terminated.
> make[2]: *** 
> [source/algorithms/CMakeFiles/obj_algorithms_debug.dir/build.make:82: 
> source/algorithms/CMakeFiles/obj_algorithms_debug.dir/operator.cc.o] Error 1
> make[1]: *** [CMakeFiles/Makefile2:4209: 
> source/algorithms/CMakeFiles/obj_algorithms_debug.dir/all] Error 2
> [ 48%] Building CXX object 
> source/base/CMakeFiles/obj_base_debug.dir/named_selection.cc.o
> In file included from 
> /home/pushkar/dealii-9.0.0/include/deal.II/lac/petsc_parallel_vector.h:28,
>  from 
> /home/pushkar/dealii-9.0.0/include/deal.II/lac/petsc_parallel_block_vector.h:24,
>  from 
> /home/pushkar/dealii-9.0.0/source/multigrid/mg_base.cc:21:
> /home/pushkar/dealii-9.0.0/include/deal.II/lac/petsc_vector_base.h:32:12: 
> fatal error: petscvec.h: No such file or directory
>32 | #  include <petscvec.h>
>   |^~~~
> compilation terminated.
> make[2]: *** 
> [source/multigrid/CMakeFiles/obj_multigrid_debug.dir/build.make:82: 
> source/multigrid/CMakeFiles/obj_multigrid_debug.dir/mg_base.cc.o] Error 1
> make[1]: *** [CMakeFiles/Makefile2:4047: 
> source/multigrid/CMakeFiles/obj_multigrid_debug.dir/all] Error 2
> [ 48%] Building CXX object 
> source/distributed/CMakeFiles/obj_distributed_debug.dir/solution_transfer.cc.o
> /home/pushkar/dealii-9.0.0/source/base/mpi.cc:38:12: fatal error: 
> petscsys.h: No such file or directory
>38 | #  include <petscsys.h>
>   |^~~~
> compilation terminated.
> make[2]: *** [source/base/CMakeFiles/obj_base_debug.dir/build.make:368: 
> source/base/CMakeFiles/obj_base_debug.dir/mpi.cc.o] Error 1
> make[2]: *** Waiting for unfinished jobs
> [ 48%] Building CXX object 
> source/distributed/CMakeFiles/obj_distributed_debug.dir/tria.cc.o
> In file included from 
> /home/pushkar/dealii-9.0.0/include/deal.II/lac/petsc_parallel_vector.h:28,
>  from 
> /home/pushkar/dealii-9.0.0/include/deal.II/lac/petsc_parallel_block_vector.h:24,
>  from 
> /home/pushkar/dealii-9.0.0/source/lac/block_matrix_array.cc:23:
> /home/pushkar/dealii-9.0.0/include/deal.II/lac/petsc_vector_base.h:32:12: 
> fatal error: petscvec.h: No such file or directory
>32 | #  include <petscvec.h>
>   |^~~~
> compilation terminated.
> make[2]: *** [source/lac/CMakeFiles/obj_lac_debug.dir/build.make:82: 
> source/lac/CMakeFiles/obj_lac_debug.dir/block_matrix_array.cc.o] Error 1
> make[1]: *** [CMakeFiles/Makefile2:3615: 
> source/lac/CMakeFiles/obj_lac_debug.dir/all] Error 2
> [ 48%] Building CXX object 
> source/distributed/CMakeFiles/obj_distributed_debug.dir/tria_base.cc.o
> In file included from 
> /home/pushkar/dealii-9.0.0/include/deal.II/hp/fe_values.h:24,
>  from 
> /home/pushkar/dealii-9.0.0/include/deal.II/numerics/data_out_dof_data.h:30,
>  from 
> /home/pushkar/dealii-9.0.0/include/deal.II/numerics/data_out.h:22,
>  from 
> /home/pushkar/dealii-9.0.0/source/numerics/data_out.cc:17:
> /home/pushkar/dealii-9.0.0/include/deal.II/fe/fe_values.h:48:12: fatal 
> error: petsc.h: No such file or directory
>48 | #  include <petsc.h>
>   |^
> compilation terminated.
> make[2]: *** 
> 

[deal.II] Re: Refinement on a parallel distributed triangulation with periodic BC

2020-11-16 Thread Marc Fehling
Hi Maurice!

On Monday, November 16, 2020 at 8:09:14 AM UTC-7 maurice@googlemail.com 
wrote:

> Looking at the doc of `collect_periodic_faces` (This function will collect 
> periodic face pairs on the coarsest mesh level of the given mesh (a 
> Triangulation or DoFHandler) and add them to the vector matched_pairs, 
> leaving the original contents intact.), the resulting vector will only 
> contain parent cells, on which the call `is_artificial()` is not possible. 
>

It appears to me that `GridTools::get_active_child_cells()` is the function 
you are looking for (see the documentation).
 
In your case, you would get all active children of the coarsest mesh cells 
if applied on the results of `GridTools::collect_periodic_faces()`. You can 
iterate over those and identify which of them are located at the 
boundary.

However, I am not sure whether `GridTools::get_active_child_cells()` works 
with `parallel::distributed::Triangulations`. If it does though, the 
results may contain locally owned, ghost, and/or artificial cells. As the 
results will be all active cells, you are able to check for these 
attributes, i.e., check which ones are locally relevant.
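
As a rough, untested sketch of what I have in mind (assuming the face pairs 
come from `GridTools::collect_periodic_faces()` on your triangulation, and 
that `get_active_child_cells()` accepts these cells):

std::vector<GridTools::PeriodicFacePair<
  typename Triangulation<dim>::cell_iterator>> matched_pairs;
GridTools::collect_periodic_faces(triangulation, 0, 1, 0, matched_pairs);

for (const auto &pair : matched_pairs)
  for (unsigned int i = 0; i < 2; ++i)
    {
      // all active descendants of the coarse cell of this face pair
      const auto children =
        GridTools::get_active_child_cells<Triangulation<dim>>(pair.cell[i]);
      for (const auto &child : children)
        if (!child->is_artificial())
          {
            // child is locally relevant, i.e., locally owned or ghost
          }
    }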

Hope this helps!

Marc

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/ea516d6f-1eeb-411d-9ca6-c7bf483efe7fn%40googlegroups.com.


[deal.II] Re: Error with boost after installing dealii with spack

2020-11-11 Thread Marc Fehling
Hi Christian!

Right now I have two things in mind that you could try out:

   - Configure your own project with cmake from scratch, if you haven't 
   already done so.
   - Build deal.II with the bundled version of boost and see if the problem 
   persists (see the sketch below).
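
For the second item, reconfiguring along these lines should work (assuming 
`DEAL_II_FORCE_BUNDLED_BOOST` is the flag you need):

cmake -DDEAL_II_FORCE_BUNDLED_BOOST=ON ..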

You can also try to build deal.II with the upstream master branch. 
Recently, a check for matching boost versions has been introduced (see 
#11024).

Marc

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/128c7d58-ca2b-43f5-b4bc-b47be4c0cd11o%40googlegroups.com.


Re: [deal.II] outer product of two vectors

2020-10-07 Thread Marc Fehling
Please have a look at this particular test which showcases how an outer 
product can be achieved with deal.II!
https://github.com/dealii/dealii/blob/master/tests/full_matrix/full_matrix_57.cc
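
In short: if your vectors are small and of fixed size, you can copy them 
into rank-1 tensors and use the built-in `outer_product()`. A minimal 
sketch, assuming dim = 3 and that `vec1` and `vec2` are your 
`dealii::Vector<double>` objects:

Tensor<1, 3> t1, t2;
for (unsigned int i = 0; i < 3; ++i)
  {
    t1[i] = vec1[i];
    t2[i] = vec2[i];
  }
// op[i][j] = t1[i] * t2[j]
const Tensor<2, 3> op = outer_product(t1, t2);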

Hope this helps!
Marc
Wolfgang Bangerth schrieb am Dienstag, 6. Oktober 2020 um 18:31:47 UTC-6:

> On 10/6/20 5:50 PM, Nikki Holtzer wrote:
> > 
> > I am trying to form a cross product / outer product of two vectors of 
> > type dealii::Vector. I have attempted to use some of the built-in 
> > functions 
> > for the outer product from the Tensor Class but have had no luck. I 
> can't seem 
> > to get anything other than
> > 
> > error: no matching function for call to 'outer_product(vec1, vec2);'
> > 
> > I have tried recasting my vec1/vec2 as Tensors but have run into a 
> similar 
> > error message.
> > 
> > Is there a built in vector cross product? Alternatively, how could I 
> recast my 
> > vectors and then use the built in functions from the Tensor Class and 
> finally 
> > recast them back into vectors?
>
> The easy way is to do
>
> const unsigned int n = vec.size();
> FullMatrix<double> o_p (n,n);
> for (unsigned int i=0; i<n; ++i)
>   for (unsigned int j=0; j<n; ++j)
>     o_p(i,j) = vec[i] * vec[j];
>
> But the issue is that generally you end up with a full matrix this way. Is 
> that what you want? How large are your vectors?
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/0e98c110-f9fa-4e11-97a7-0c99ea4cc405n%40googlegroups.com.


[deal.II] Re: hp fem error assigning Fourier

2020-06-08 Thread Marc Fehling
Hi Ishan!

You are correct: We opted for a more versatile approach in transforming 
solutions into Fourier or Legendre series with deal.II 9.2. Glad you 
figured it out!
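
For future readers, the new interface looks roughly like this (a sketch 
from memory for deal.II 9.2; `fe_collection`, `dof_handler`, and `solution` 
stand for your hp objects):

FESeries::Fourier<dim> fourier =
  SmoothnessEstimator::Fourier::default_fe_series(fe_collection);

Vector<float> smoothness_indicators(triangulation.n_active_cells());
SmoothnessEstimator::Fourier::coefficient_decay(fourier,
                                                dof_handler,
                                                solution,
                                                smoothness_indicators);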

Marc

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/4e2f4bba-e2a6-4269-acb1-21bf03b2126co%40googlegroups.com.


Re: [deal.II] Re: Deal.ii installation

2020-04-28 Thread Marc Fehling
Hi Prasad!

My guess now is the following: You have PETSc and Trilinos installed on 
your device, which deal.II finds, but it complains that PETSc has been 
installed with a different MPI configuration than the one available to 
deal.II.

-- Include 
/home/prasad/Downloads/dealii/cmake/configure/configure_3_petsc.cmake
-- Found PETSC_LIBRARY
-- Found PETSC_INCLUDE_DIR_ARCH
-- Found PETSC_INCLUDE_DIR_COMMON
-- PETSC_PETSCVARIABLES not found! Call:
-- FIND_FILE(PETSC_PETSCVARIABLES NAMES petscvariables HINTS 
/usr/lib/petscdir/3.6.2//x86_64-linux-gnu-real /usr/lib/petscdir/3.6.2/ 
PATH_SUFFIXES conf lib/petsc/conf)
--   PETSC_VERSION: 3.7.7.0
--   PETSC_LIBRARIES: /usr/lib/x86_64-linux-gnu/libpetsc.so
--   PETSC_INCLUDE_DIRS: /usr/include/petsc;/usr/include/petsc
--   PETSC_USER_INCLUDE_DIRS: /usr/include/petsc;/usr/include/petsc
-- Found PETSC
-- Could not find a sufficient PETSc installation: PETSc has to be 
configured with the same MPI configuration as deal.II.
-- DEAL_II_WITH_PETSC has unmet external dependencies.

Later, we find:

-- Include 
/home/prasad/Downloads/dealii/cmake/configure/configure_p4est.cmake
-- DEAL_II_WITH_P4EST has unmet configuration requirements: 
DEAL_II_WITH_MPI has to be set to "ON".
-- 
-- Include 
/home/prasad/Downloads/dealii/cmake/configure/configure_scalapack.cmake
-- DEAL_II_WITH_SCALAPACK has unmet configuration requirements: 
DEAL_II_WITH_MPI has to be set to "ON".
-- 
-- Include 
/home/prasad/Downloads/dealii/cmake/configure/configure_slepc.cmake
-- DEAL_II_WITH_SLEPC has unmet configuration requirements: 
DEAL_II_WITH_PETSC has to be set to "ON".

Is your intention to use parallel features of deal.II? Otherwise I'd 
suggest to explicitly disable all parallel features for now with:
cmake -DDEAL_II_WITH_MPI=OFF -DDEAL_II_WITH_TRILINOS=OFF 
-DDEAL_II_WITH_PETSC=OFF ..

Hope this helps for now!

Best,
Marc

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/837362ce-d2f5-4727-bc72-c400a90e280e%40googlegroups.com.


Re: [deal.II] Re: Deal.ii installation

2020-04-28 Thread Marc Fehling
Prasad!

If you look closely into your CMakeError.log file, you'll find that there 
are multiple tests failing. This is not a bad thing: deal.II figures out 
your system configuration this way and enables/disables certain features. 
However in your case, it seems that there is a mandatory test failing, and 
I can not figure out which one by these two log files alone. Could you 
provide your cmake console output?

Could you redirect your console output into a file e.g. via `cmake .. > 
console.log` and forward it?

Best,
Marc

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/c17bb327-8528-424f-a33c-96685f217e4c%40googlegroups.com.


Re: [deal.II] Re: Deal.ii installation

2020-04-28 Thread Marc Fehling
Hi Prasad!

On Tuesday, April 28, 2020 at 10:54:54 AM UTC+2, Prasad Adhav wrote:
>
> I am using cmake version 3.10.2
>

With version 3.10.2, it is less likely that my suggestion will fix your 
problem.

Did cmake actually finish the configuration, as Wolfgang pointed out? 
Going through the error log, you may not have all dependencies of deal.II 
fulfilled, and I'm not entirely sure which tests are mandatory to pass. 
What does cmake print in your console when you run `cmake ..`? The end of 
the console output should give you a hint on what's missing.

Best,
Marc

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/5a76f482-de29-4c66-873a-7e5b6dca7503%40googlegroups.com.


[deal.II] Re: Deal.ii installation

2020-04-28 Thread Marc Fehling
Hi Prasad!

Thank you for providing the logs! It seems like this is the cause of it:

/usr/bin/ld: cannot find -lpthreads


It seems like this is again related to the `-lpthread` problem. Just out of 
curiosity: Which version of cmake and which OS are you using? They may have 
updated CMake in their repository which could be the reason why we see this 
issue more frequently recently...

This may have been fixed upstream in pull request #9117. This issue only 
occurs with CMake version >= 3.16. Just compile from the master branch and 
you should be good!
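
In case it helps, building from the master branch boils down to (the 
install prefix is a placeholder):

git clone https://github.com/dealii/dealii.git
cd dealii && mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX=/path/to/install ..
make install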

Best,
Marc

On Monday, April 27, 2020 at 11:27:31 AM UTC+2, Prasad Adhav wrote:
>
> Hello,
>
> I am trying to install deal.ii for the first time.
> I followed the instructions on the readme page(
> https://www.dealii.org/current/readme.html)
>
> I use the following for cmake and it worked fine:
>
> prasad@XDEM-laptop:~/Downloads/dealii-9.1.1/build$ cmake 
> -DCMAKE_INSTALL_PREFIX=/home/
> prasad/Downloads/dealii-9.1.1/dealii_install/ ..
>
> Then I tried to do `make`, `make info` and `make install`, I get a similar 
> error as follows:
> prasad@XDEM-laptop:~/Downloads/dealii-9.1.1/build$ make info 
> make: *** No rule to make target 'info'.  Stop.
>
> prasad@XDEM-laptop:~/Downloads/dealii-9.1.1/build$ make install 
> make: *** No rule to make target 'install'.  Stop.
>
> I apologize if this was already posted, in my search I did not find any 
> questions similar to this.
> Can anyone help with this?
> Thank you.
>
>
>
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/f43cf896-d852-4f1b-b349-e8c87878b416%40googlegroups.com.


[deal.II] Re: "libpthreads"?

2020-04-24 Thread Marc Fehling
Hi Victor!

This has been fixed upstream with pull request #9117. This issue only 
occurs with CMake version >= 3.16. Just compile from the master branch and 
you're good!

Best,
Marc

On Wednesday, April 22, 2020 at 3:16:40 PM UTC+2, Victor Eijkhout wrote:
>
> It looks like cmake bombs on a request to "-lpthreads". I have no idea why 
> it is wanting that. Log files attached.
>
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/489b8ab0-fe35-4329-9050-038890b59f62%40googlegroups.com.


Re: [deal.II] Mesh refinement and the ability to transfer the data to the quadrature points of the new mesh on parallel::shared::triangulation.

2020-04-22 Thread Marc Fehling
Hi Alberto!

If I understood you correctly, you transfer quadrature point data with the 
`SolutionTransfer` class, which is meant to transfer finite element 
approximations.

A different class dedicated to the transfer of quadrature point data 
already exists: It is called `TransferableQuadraturePointData`. Examples on 
how to use that feature can be found in 
`tests/base/quadrature_point_data_{02|03|04}`.
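
A rough outline of how these pieces fit together (a sketch, not a drop-in 
implementation; `MyQPData` and the two quadrature objects are placeholders 
for your own types):

// MyQPData has to derive from TransferableQuadraturePointData and
// implement number_of_values(), pack_values(), and unpack_values().
CellDataStorage<typename Triangulation<dim>::cell_iterator, MyQPData>
  data_storage;

parallel::distributed::ContinuousQuadratureDataTransfer<dim, MyQPData>
  data_transfer(fe, mass_quadrature, data_quadrature);

data_transfer.prepare_for_coarsening_and_refinement(triangulation,
                                                    data_storage);
triangulation.execute_coarsening_and_refinement();
data_transfer.interpolate();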

You could also use the `CellDataTransfer` class to transfer cell related 
data, i.e. stored as `Vector<Vector<double>>` in your case if I interpreted 
your code correctly. However, this particular feature is only available in 
the current `master` branch of the library and has not been released yet.

Hope this gives you some more options to find a solution!

Best,
Marc

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/71caa803-60fa-42cc-b609-3c2e1dd57674%40googlegroups.com.


[deal.II] Re: Error in make_hanging_node_constraints() while using parallel::distributed::Triangulation with hp::DoFHandler

2020-02-20 Thread Marc Fehling
Hi Chaitanya,

we turned your minimal working example into a test case for the deal.II 
library (see #9555).

Thank you for providing your code! Would you mind giving it a look, since 
we reduced and changed a few parts of it?

Best,
Marc

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/63ee12be-9e6a-49e5-991e-f05e51b9a883%40googlegroups.com.


[deal.II] Re: Error in make_hanging_node_constraints() while using parallel::distributed::Triangulation with hp::DoFHandler

2020-02-11 Thread Marc Fehling
Hi Chaitanya,

This should've been fixed in #8365, which is not included in deal.II-9.1.1.


Compiling the most recent version of deal.II from the master branch should 
do the trick.


Marc

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/890e2de7-4fce-45a7-ac3f-9534ef35270a%40googlegroups.com.


[deal.II] Re: Error installing with Candi

2019-10-14 Thread Marc Fehling
Hi David!

On Saturday, October 12, 2019 at 4:34:22 AM UTC+2, David Ryan wrote:
>
> I'm trying to get deal.ii installed using candi on my Mac running macOS 
> Mojave.
>
> Everything seems to work up till the deal.ii compiling where it tells me 
> that it can't find the lapack libraries.
> I've tried installing lapack through brew and from the website, but the 
> installations can't seem to find the lapack libraries.
>
 
Can you verify where brew installed your lapack library? I'm not familiar 
with macOS, but at least you can query which files have been installed by a 
package via linux repository managers. I assume you can do something 
similar with brew as well.

--   LAPACK_LIBRARIES: *** Required variable "_lapack_libraries" empty ***
>

I assume the "_lapack_libraries" variable should have been set by candi at 
some stage?
 

>   Could not find the lapack library!
>
>
>   Please ensure that a suitable lapack library is installed on your 
> computer.
>
>
>   If the library is not at a default location, either provide some hints 
> for
>
>   autodetection,
>
>
>   $ LAPACK_DIR="..." cmake <...>
>
>   $ cmake -DLAPACK_DIR="..." <...>
>
>
>   or set the relevant variables by hand in ccmake.
>
 
I'm not using candi either, but maybe there is a way to tell candi where 
lapack is installed. Probably in a similar way as Wolfgang proposed.

Hope this helps...

Marc

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/793089aa-1c81-468e-a724-a4500b51e4dc%40googlegroups.com.


Re: [deal.II] installation fails with intel/19.0.5

2019-09-30 Thread Marc Fehling


On Monday, September 30, 2019 at 11:01:45 PM UTC+2, Victor Eijkhout wrote:
>
>
>
> On Sep 30, 2019, at 3:23 PM, Marc Fehling wrote:
>
> Victor, have you tried disabling C++17 support? Maybe that'll do the 
> trick...
>
>
> cmake option please?
>
> (is there a list of all cmake options for your build process?)
>
> My best guess was:
>
> -DDEAL_II_WITH_CXX17=OFF
>

Your guess was indeed correct :) You'll find a list of cmake attributes for 
deal.II in the documentation here 
<https://www.dealii.org/current/users/cmake.html#configurefeature>.
 

>
> and that got me to:
>
> 
> Run Build Command:"/usr/bin/gmake" "cmTC_8b681/fast"
> /usr/bin/gmake -f CMakeFiles/cmTC_8b681.dir/build.make 
> CMakeFiles/cmTC_8b681.dir/build
> gmake[1]: Entering directory `/tmp/dealii-build/CMakeFiles/CMakeTmp'
> Building CXX object CMakeFiles/cmTC_8b681.dir/src.cxx.o
> /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/icpc   
>  -DDEAL_II_HAVE_FLAG_Wimplicit_fallthrough=0   -Wimplicit-fallthrough=0 -o 
> CMakeFiles/cmTC_8b681.dir/src.cxx.o -c 
> /tmp/dealii-build/CMakeFiles/CMakeTmp/src.cxx
> icpc: command line warning #10148: option '-W=implicit-fallthrough=0' not 
> supported
> Linking CXX executable cmTC_8b681
> /opt/apps/cmake/3.13.4/bin/cmake -E cmake_link_script 
> CMakeFiles/cmTC_8b681.dir/link.txt --verbose=1
> /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/icpc   
> -DDEAL_II_HAVE_FLAG_Wimplicit_fallthrough=0-rdynamic 
> CMakeFiles/cmTC_8b681.dir/src.cxx.o  -o cmTC_8b681
> gmake[1]: Leaving directory `/tmp/dealii-build/CMakeFiles/CMakeTmp'
>
> Source file was:
> int main() { return 0; }
> Performing C++ SOURCE FILE Test DEAL_II_HAVE_FLAG_Wno_nested_anon_types 
> failed with the following output:
> Change Dir: /tmp/dealii-build/CMakeFiles/CMakeTmp
>
> Run Build Command:"/usr/bin/gmake" "cmTC_b3b8f/fast"
> /usr/bin/gmake -f CMakeFiles/cmTC_b3b8f.dir/build.make 
> CMakeFiles/cmTC_b3b8f.dir/build
>
> %%%
>
> which does not look fatal to me. It’s only a warning.
>

Yes, these are only warnings.

I am confused about Intel19's state on the fallthrough flag: They say that 
it is implemented on their website 
<https://software.intel.com/en-us/articles/c17-features-supported-by-intel-c-compiler>, 
but it seems that they just prepared it according to this statement 
<http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0188r1.pdf>? Or 
is it just the 'implicit fallthrough' feature that Intel19 is not capable 
of?

Would you mind trying to enable C++17 support while disabling the implicit 
fallthrough feature by providing "-DDEAL_II_WITH_CXX17=ON 
-DDEAL_II_HAVE_FLAG_Wimplicit_fallthrough=0" to cmake? If this fails, I 
think it would be best to keep C++17 disabled for now...

Best,
Marc

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/f5636462-4a29-4dd9-9190-9db2a56fbb8c%40googlegroups.com.


Re: [deal.II] installation fails with intel/19.0.5

2019-09-30 Thread Marc Fehling


On Friday, September 27, 2019 at 11:24:12 PM UTC+2, Wolfgang Bangerth wrote:

>
> Didn't we recently merge a patch where ICC reported that it understands 
> C++17, but doesn't in fact support this attribute? Does that ring a bell 
> for anyone? 
>

Intel published a list of all C++17 features that their Intel19 compiler 
supports.

We had an issue with Intel19 not understanding class template argument 
deduction (CTAD), which has been fixed via this patch.

Victor, have you tried disabling C++17 support? Maybe that'll do the 
trick...

Best,
Marc

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/aeca51de-923c-4248-a52b-fdb534d8%40googlegroups.com.


Re: [deal.II] Parallel distributed hp solution transfer

2019-08-27 Thread Marc Fehling
Hi Doug!

On Tuesday, August 27, 2019 at 3:41:11 AM UTC+2, Doug wrote:
>
> Thank you very much for the quick fix! Looking forward to pull this once 
> it goes through all the checks.
>

The patch has been merged. Let me know if this fix does the trick for you.

We introduced a test named `tests/mpi/solution_transfer_05.cc` that may be 
in a name conflict with the test you were preparing. I'm sorry about that 
if this is the case. Please adjust the filenames of your test accordingly.

Marc

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/695dd024-1069-4ac3-8d4b-eeab9daebb83%40googlegroups.com.


Re: [deal.II] Parallel distributed hp solution transfer

2019-08-25 Thread Marc Fehling
Hi Doug! Hi Wolfgang!

On Sunday, August 25, 2019 at 3:25:06 AM UTC+2, Wolfgang Bangerth wrote:
>
> On 8/23/19 6:32 PM, Marc Fehling wrote: 
> > 
> > Your scenario indeed revealed a bug: Currently, we set and send 
> > `active_fe_indices` based on the refinement flags on the Triangulation 
> object. 
> > However, p4est has the last word on deciding which cells will be refined 
>
> That's ultimately true, but we try to match what p4est expects in the 
> Triangulation::prepare_coarsening_and_refinement() function. Are you 
> calling 
> this function after you decide which cells *you* want to refine/coarsen 
> and 
> before you execute the refinement/coarsening? 
>

We've set all refinement and coarsening flags on the p::d::Triangulation, 
called prepare_coarsening_and_refinement(), and then executed refinement. 
The transfer of active_fe_indices will be prepared during the 
pre_distributed_refinement signal, but is executed later after p4est 
performed refinement on its forest. We utilize the CellStatus flags for 
transfer.

In the scenario provided by Doug, there are neighboring cells that are 
either flagged for coarsening or for refinement. I would guess that there 
are differences in enforcing 2:1 hanging node conditions between p4est and 
deal.II.

I came up with a fix for this issue in the following PR #8637 
<https://github.com/dealii/dealii/pull/8637> that uses the CellStatus flags 
to determine active_fe_indices while coarsening.

Marc

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/02c3cb9e-fc13-47d7-bd16-1505afc7eac8%40googlegroups.com.


Re: [deal.II] Parallel distributed hp solution transfer

2019-08-23 Thread Marc Fehling
Hi Doug!

Your scenario indeed revealed a bug: Currently, we set and send 
`active_fe_indices` based on the refinement flags on the Triangulation 
object. However, p4est has the last word on deciding which cells will be 
refined -- and in your specific scenario p4est makes use of it. I came up 
with a fix that should resolve this issue. Thank you for providing us with 
this example!

I'll open a pull request once my local testsuite passes.

Marc

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/89662064-f19f-4747-a851-b751abc782e1%40googlegroups.com.


Re: [deal.II] Parallel distributed hp solution transfer

2019-08-21 Thread Marc Fehling
Hi Doug!

On Wednesday, August 21, 2019 at 4:00:49 AM UTC+2, Doug wrote:
>
> 8134: void dealii::parallel::distributed::SolutionTransfer<dim, 
> VectorType, DoFHandlerType>::unpack_callback(const typename 
> dealii::parallel::distributed::Triangulation<dim, 
> space_dimension>::cell_iterator&, typename 
> dealii::parallel::distributed::Triangulation<dim, 
> space_dimension>::CellStatus, const 
> boost::iterator_range<__gnu_cxx::__normal_iterator<const char*, 
> std::vector<char> > >&, std::vector<VectorType*>&) 
> [with int dim = 3; VectorType = 
> dealii::LinearAlgebra::distributed::Vector<double>; DoFHandlerType = 
> dealii::hp::DoFHandler<3, 3>; typename 
> dealii::parallel::distributed::Triangulation<dim, 
> space_dimension>::cell_iterator = 
> dealii::TriaIterator<dealii::CellAccessor<3, 3> >; typename 
> dealii::parallel::distributed::Triangulation<dim, 
> space_dimension>::CellStatus = dealii::Triangulation<3, 3>::CellStatus]
>

It seems that the assertion fails only in the 3D case, which is quite 
interesting. Are you using the deal.II-9.1 release version, or do you work 
with the current master branch? I'll try to reproduce the error once I'm at 
my desk.

It looks like either p::d::SolutionTransfer picks the wrong finite element 
while packing the solution on cells to be coarsened, or hp::DoFHandler 
chooses the wrong finite element for the parent cell in case of coarsening. 
I'll investigate.

Marc

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/294c934e-5582-4c4a-a9e9-f895fa8639bb%40googlegroups.com.


Re: [deal.II] Parallel distributed hp solution transfer

2019-08-18 Thread Marc Fehling
Hi Doug,

when dealing with distributed meshes, ownership of cells changes and we may 
not know which finite element lives on cells that the process got recently 
assigned to. Thus, we need to transfer each cell's `active_fe_index`, which 
we do automatically during coarsening and refinement. However, you set 
`active_fe_indices` after refinement happened, which works in the serial 
case, but no longer in the parallel one. Before executing refinement, you 
need to set `future_fe_indices` that describe which finite element each 
cell will be assigned to, and you need to do so before refinement 
happens! This should resolve both issues.
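
In code, that would look roughly like this (a sketch; `next_fe_index` 
stands for whatever finite element you want the cell to have after 
refinement):

for (const auto &cell : dof_handler.active_cell_iterators())
  if (cell->is_locally_owned())
    // instead of set_active_fe_index() after refinement:
    cell->set_future_fe_index(next_fe_index);

triangulation.execute_coarsening_and_refinement();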

Further, you initialize `LinearAlgebra::distributed::Vector` objects 
without any parallel distribution by using the sequential constructor. 
Try using a different one.
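
For example, the constructor that takes the parallel partitioning 
explicitly (a sketch, assuming the usual locally owned/relevant index 
sets):

IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs();
IndexSet locally_relevant_dofs;
DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);

LinearAlgebra::distributed::Vector<double> solution(locally_owned_dofs,
                                                    locally_relevant_dofs,
                                                    mpi_communicator);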

Please see `tests/mpi/solution_transfer_04.cc` and 
`tests/mpi/p_coarsening_and_refinement.cc` for working examples (I guess we 
should provide one using an actual `SolutionTransfer` object as well), which 
should hopefully be applicable to your problem. This is a recently added 
feature: If you have any suggestions for improvement or encounter more 
problems, feel free to message us!

Marc

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/f78c5eae-4b73-4188-935f-1dd9a5f1009d%40googlegroups.com.


Re: [deal.II] Re: Heat equation (step-26): Negative values with small time step

2018-01-06 Thread Marc Fehling
I extended the step-26 documentation and provided a pull request on github.

You can create an iterator to the elements of a matrix row. Would that do 
> what 
> you need?
>

Yes, that's exactly what I was looking for. I just somehow missed the 
information that diagonal elements are stored as the first entry of each 
row in quadratic matrices. But with that, one is able to distinguish 
between those entries easily.
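
For future readers, the access pattern looks like this (a sketch for a 
quadratic SparseMatrix<double> named `matrix`):

for (unsigned int row = 0; row < matrix.m(); ++row)
  {
    auto entry = matrix.begin(row);
    // for quadratic sparsity patterns, the diagonal entry comes first
    const double diagonal = entry->value();

    for (++entry; entry != matrix.end(row); ++entry)
      if (entry->value() > 0.)
        {
          // a positive off-diagonal entry of the system matrix:
          // positivity of the solution is no longer guaranteed
        }
  }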

Best,
Marc

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [deal.II] Re: Heat equation (step-26): Negative values with small time step

2018-01-03 Thread Marc Fehling
Hi Wolfgang,

On Monday, December 18, 2017 at 1:45:15 AM UTC+1, Wolfgang Bangerth wrote:
>
> I think your observation of negative values is an interesting one (and 
> surprising one, for many). Would you be interested in writing a couple of 
> paragraphs about time step choice for the introduction of this program? 
>

Yes, I will do that. I could either enhance the introduction, or write a 
new paragraph in the 'Possibilities for extensions' section. I guess the 
latter option would be the better one.

I could provide a code snippet on how to check for positivity preservation, 
but for that I need a way to access the non-diagonal entries of a 
SparseMatrix. Do you have an idea on how to do it in a fast way?

Best,
Marc

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[deal.II] Re: Heat equation (step-26): Negative values with small time step

2017-12-11 Thread Marc Fehling


Hi Bruno,

I only heard about applying flux limiters to advection/convection problems, 
but not to diffusion-related ones. This conforms with what I recently found 
in the literature, but I may have skipped something crucial.

The equation of interest is the heat equation with a source term,

    du/dt - k * Laplace(u) = f(x, t).

Do you think that flux limiters or TVDs work for a combination like this?


I also suspect that I missed some lower bound on a stability constraint for 
the factor (k * timestep size), but from von Neumann analysis I only get an 
upper one.

I hope that one of you has some experience with this to share :)

Best,
Marc

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[deal.II] Heat equation (step-26): Negative values with small time step

2017-12-07 Thread Marc Fehling
Dear deal.ii community!

I stumbled over some interesting behavior of the heat equation from 
step-26. If I reduce the time step to a smaller value, let's say to 1e-6, I 
observe negative values for the solution near the sources (where gradients 
are large), which I would not expect. I guess it is related to the 
sharpness of the used right hand side function, since I could not observe 
this behavior with a smooth Gaussian shaped one. So my idea was then that 
DG methods may suppress this behavior. What are your thoughts on that?

I stumbled over this issue while working on buoyancy-driven flows. It 
causes negative temperature differences in my setup, yielding downward 
buoyancy forces which ruin the whole dynamics of the fluid in the end.

I would be grateful for any comment on that!

Best,
Marc

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[deal.II] Convergence rate of solution scheme for incompressible Navier-Stokes equations

2016-10-07 Thread Marc Fehling
Dear community,

as the question title suggests, I'm having trouble verifying the 
convergence order of a solution scheme for the incompressible Navier-Stokes 
equations and I'm addressing this question particularly to those who are 
familiar with the verification of such a scheme.

We're currently working on a numerical solution scheme for the 
incompressible Navier-Stokes equations using Chorin's projection. 
Currently, we're using continuous Taylor-Hood elements. As mentioned in 
tutorial step-12, advection-like problems are not stable with CG methods, 
thus we're applying external stabilization, i.e. Taylor-Galerkin and 
grad-div-stabilization. The time marching scheme is chosen to be a 
semi-implicit scheme, based on the implicit Euler scheme. We call it 
semi-implicit because we linearize the advection term, replacing the (u^* * 
nabla) u^* by (u^n * nabla) u^*.

Now, I want to verify the solution scheme with a convergence analysis for 
the flow velocity. I take the L2 error using the 
"integrate_difference(...)" function and compare the different values 
depending on the size of the time-step and the global refinement level. To 
get an error indicator, I take the root mean square over the L2 error at 
every timestep. The used function for verification is the non-trivial, 
two-dimensional solution to the incompressible Navier-Stokes (INS) 
equations made by McDermott (source) 
<https://sites.google.com/site/randymcdermott/NS_exact_soln.pdf>. Since 
this function solves the INS equations intrinsically, no modification of 
the right hand side of the equations is made. Periodic boundaries are used 
to avoid the external imposition of boundary conditions and thus another 
error source.
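
For reference, the per-timestep error is computed along these lines (a 
sketch; `ExactSolution` stands for McDermott's solution evaluated at the 
current time):

Vector<float> difference_per_cell(triangulation.n_active_cells());
VectorTools::integrate_difference(dof_handler,
                                  velocity_solution,
                                  ExactSolution<dim>(current_time),
                                  difference_per_cell,
                                  QGauss<dim>(fe.degree + 2),
                                  VectorTools::L2_norm);
const double L2_error = difference_per_cell.l2_norm();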

If I run the simulation with different timesteps at a specified global 
level, I can reproduce the expected order in time, which is one, resulting 
from the implicit Euler scheme. But if I run the simulation at different 
refinement levels with a fixed timestep, I get convergence rates which are 
not consistent. Using second order elements for the velocity space, I would 
expect a convergence order of 3 for the L2 error. But the convergence rates 
I get are jumping wildly and seem to depend on the size of the chosen 
timestep. As an exmaple using small timesteps (dt=1e-4, dx<0.2, cfl<1e-3) I 
get convergence rates of roughly 3+/-0.6 between two global levels. With 
smaller timesteps (dt=1e-3), I get rates around 1.6+/-0.2. If I take a look 
at the convergence rates at different timepoints (not the root mean square 
one) for the smallest timestep (dt=1e-4), I see that convergence rates in 
the beginning are indeed the expected ones of 3, but are changing over time.

Why is the convergence rate in space inconsistent? Am I missing some 
crucial point?

Best regards,
Marc Fehling

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.