Re: [deal.II] Re: different results when compute integration with adaptive mesh

2019-02-11 Thread chucui1016
Dear Prof. Bangerth,

Thanks very much for your quick answer!

> Constraints are funny and sometimes require deep thought about what exactly 
> they mean. What happens if you don't apply constraints to the 
> cell_volume_matrix and cell_gradient_matrix -- i.e., you copy the elements 1:1 
> into the matrix, without the 'constraints' object? (Like in step-4.) 
I rewrote my code as you suggested, and the results are:
Cycle 0:
   Number of active cells:   20
   Number of degrees of freedom: 89
 u square 0: 0.0305621  grad u square 0: 0.256542
Cycle 1:
   Number of active cells:   44
   Number of degrees of freedom: 209
 u square 1: 0.144303  grad u square 1: 0.49794
Cycle 2:
   Number of active cells:   92
   Number of degrees of freedom: 449
 u square 2: 0.101366  grad u square 2: 0.477137
Cycle 3:
   Number of active cells:   188
   Number of degrees of freedom: 881
 u square 3: 0.13305  grad u square 3: 0.529805
Cycle 4:
   Number of active cells:   368
   Number of degrees of freedom: 1737
 u square 4: 0.145489  grad u square 4: 0.565953
Cycle 5:
   Number of active cells:   737
   Number of degrees of freedom: 3409
 u square 5: 0.141771  grad u square 5: 0.574863
Cycle 6:
   Number of active cells:   1436
   Number of degrees of freedom: 6705
 u square 6: 0.171807  grad u square 6: 0.642486
Cycle 7:
   Number of active cells:   2729
   Number of degrees of freedom: 12521
 u square 7: 0.160819  grad u square 7: 0.625337

Attached is the code, which uses neither 
"constraints.distribute_local_to_global(...)" nor 
"DoFTools::make_hanging_node_constraints (dof_handler, constraints);".
If "DoFTools::make_hanging_node_constraints (dof_handler, constraints);" 
remains, the results are:
Cycle 0:
   Number of active cells:   20
   Number of degrees of freedom: 89
 u square 0: 0.0305621  grad u square 0: 0.256542
Cycle 1:
   Number of active cells:   44
   Number of degrees of freedom: 209
 u square 1: 0.0822159  grad u square 1: 2.97512
Cycle 2:
   Number of active cells:   92
   Number of degrees of freedom: 449
 u square 2: 0.000404454  grad u square 2: 0.0534184
Cycle 3:
   Number of active cells:   188
   Number of degrees of freedom: 937
 u square 3: 9.40228e-05  grad u square 3: 0.0378517
Cycle 4:
   Number of active cells:   368
   Number of degrees of freedom: 1793
 u square 4: 7.66868e-05  grad u square 4: 0.0403114
Cycle 5:
   Number of active cells:   704
   Number of degrees of freedom: 3321
 u square 5: 5.9193e-05  grad u square 5: 0.0366146
Cycle 6:
   Number of active cells:   1412
   Number of degrees of freedom: 6657
 u square 6: 6.47763e-05  grad u square 6: 0.0455655
Cycle 7:
   Number of active cells:   2696
   Number of degrees of freedom: 12477
 u square 7: 2.50787e-05  grad u square 7: 0.0320763



So I wonder whether these results approximate the "grad u_exact square" and 
"u_exact square" correctly?
Because in my numerical experiments, the "grad u_h square" and "u_h square" of 
the numerical solution u_h are what we care about.

So if the computations of "grad u_h square" and "u_h square" are separated 
from other work such as visualization and error estimation, are these results 
correct when the use of constraints is changed in this way?
And without "constraints.distribute_local_to_global(...)", the boundary 
information may not be used correctly (for example, my boundary condition is 
a periodic boundary condition).
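
For reference, replacing "constraints.distribute_local_to_global(...)" by a 
plain 1:1 copy (as in step-4) amounts to something like the following sketch 
inside the assembly loop; the variable names are the ones from my code, so 
this is only meant to illustrate the change, not to be a complete program:

  cell->get_dof_indices (local_dof_indices);
  for (unsigned int i = 0; i < dofs_per_cell; ++i)
    for (unsigned int j = 0; j < dofs_per_cell; ++j)
      {
        // copy the local matrices entry by entry, without any
        // constraint handling
        volume_matrix.add (local_dof_indices[i],
                           local_dof_indices[j],
                           cell_volume_matrix(i,j));
        gradient_matrix.add (local_dof_indices[i],
                             local_dof_indices[j],
                             cell_gradient_matrix(i,j));
      }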

Thank you very much!

Best,
Chucui

On Tuesday, February 12, 2019 at 1:49:15 PM UTC+8, Wolfgang Bangerth wrote:
>
> On 2/11/19 12:20 AM, chucu...@gmail.com  wrote: 
> > Dear Prof. Bangerth: 
> > 
> > Thank you very much for your quick reply! 
> > 
> > When I apply hanging node constraints to my matrix, as the step-6 
> > says: https://www.dealii.org/developer/doxygen/deal.II/step_6.html 
> > I make 4 steps: 
> >  1.Create a Constraints Class: 
> > | 
> > ConstraintMatrix constraints; 
> > | 
> > 
> >  2.Fill this object using the function 
> > DoFTools::make_hanging_node_constraints() to ensure continuity of the 
> elements 
> > of the finite element space. 
> > | 
> > DoFTools::make_hanging_node_constraints (dof_handler, 
> > constraints); 
> > | 
> > 
> >  3.Copy the local contributions to the matrix and right hand side 
> into the 
> > global objects: 
> > | 
> >constraints.distribute_local_to_global (cell_matrix, 
> >cell_rhs, 
> >local_dof_indices, 
> >system_matrix, 
> >system_rhs); 
> >constraints.distribute_local_to_global (cell_volume_matrix, 
> >local_dof_indices, 
> >volume_matrix); 
> >constraints.distribute_local_to_global (cell_gradient_matrix, 
> >local_dof_indices, 
> >gradient_matrix); 

Re: [deal.II] Re: different results when compute integration with adaptive mesh

2019-02-11 Thread Wolfgang Bangerth
On 2/11/19 12:20 AM, chucui1...@gmail.com wrote:
> Dear Prof. Bangerth:
> 
> Thank you very much for your quick reply!
> 
> When I apply hanging node constraints to my matrix, as the step-6 
> says: https://www.dealii.org/developer/doxygen/deal.II/step_6.html
> I make 4 steps:
>      1.Create a Constraints Class:
> |
> ConstraintMatrix    constraints;
> |
> 
>      2.Fill this object using the function 
> DoFTools::make_hanging_node_constraints() to ensure continuity of the 
> elements 
> of the finite element space.
> |
> DoFTools::make_hanging_node_constraints (dof_handler,
>                                             constraints);
> |
> 
>      3.Copy the local contributions to the matrix and right hand side into 
> the 
> global objects:
> |
>        constraints.distribute_local_to_global (cell_matrix,
>                                                cell_rhs,
>                                                local_dof_indices,
>                                                system_matrix,
>                                                system_rhs);
>        constraints.distribute_local_to_global (cell_volume_matrix,
>                                                local_dof_indices,
>                                                volume_matrix);
>        constraints.distribute_local_to_global (cell_gradient_matrix,
>                                                local_dof_indices,
>                                                gradient_matrix);
> |
> 
> 
>      4.Make sure that the degrees of "freedom" located on hanging nodes get 
> their correct (constrained) value:
> |
>   constraints.distribute (solution);
> |

Constraints are funny and sometimes require deep thought about what exactly 
they mean. What happens if you don't apply constraints to the 
cell_volume_matrix and cell_gradient_matrix -- i.e., you copy the elements 1:1 
into the matrix, without the 'constraints' object? (Like in step-4.)

Alternatively, you could do as you show above and after calling 
'constraint.distribute(solution)' you manually set all constrained degrees of 
freedom in the solution vector to zero. That's not what you want to use for 
visualization, error estimation, ... but it might work for the operation 
you're trying here.
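
In code, that second option is something along these lines (a sketch only, 
using the 'constraints', 'solution', 'volume_matrix' and 'gradient_matrix' 
names from your program):

  constraints.distribute (solution);

  // copy the solution and zero out all constrained (hanging-node) entries
  // before evaluating the quadratic forms
  Vector<double> zeroed_solution = solution;
  for (types::global_dof_index i = 0; i < zeroed_solution.size(); ++i)
    if (constraints.is_constrained (i))
      zeroed_solution(i) = 0;

  const double u_square      = volume_matrix.matrix_norm_square (zeroed_solution);
  const double grad_u_square = gradient_matrix.matrix_norm_square (zeroed_solution);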

I'm pretty sure that both of these solutions should work individually (but not 
in combination). But explaining why this is so would take a page or two, I'm 
afraid.

Best
  W.


-- 

Wolfgang Bangerth  email: bange...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Automatic Differentiation of the Finite Element Jacobian

2019-02-11 Thread Wolfgang Bangerth


Doug,
let me add another thought I had: While the vertex locations of the mesh are 
hard-coded as doubles (and you won't be able to change that), there is a class 
MappingQEulerian that allows you to provide a displacement field. This field is 
parameterized by a vector of some sort, and it is possible that you can make 
it work if that field is a vector of some AD type for which you can thread the 
remainder of the computations through AD evaluations.

I don't know whether that's going to work out any easier, but I did want to 
point it out as an option worth reading through.
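
To sketch what that would look like in the usual (double-valued) setting -- 
the names below are illustrative, and whether the vector type can in practice 
be an AD-number vector is exactly the open question:

  #include <deal.II/fe/mapping_q_eulerian.h>

  // a dim-component displacement field with its own DoFHandler; the mapping
  // adds this field to the reference vertex positions
  FESystem<dim>   displacement_fe (FE_Q<dim>(1), dim);
  DoFHandler<dim> displacement_dof_handler (triangulation);
  displacement_dof_handler.distribute_dofs (displacement_fe);

  Vector<double> displacement (displacement_dof_handler.n_dofs()); // zero = undeformed mesh

  // (the argument order of this constructor has changed between deal.II versions)
  MappingQEulerian<dim> eulerian_mapping (/*degree*/ 1,
                                          displacement_dof_handler,
                                          displacement);

  // use the mapping wherever FEValues objects are built
  FEValues<dim> fe_values (eulerian_mapping, fe, quadrature_formula,
                           update_values | update_gradients | update_JxW_values);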

Best
  W.

-- 

Wolfgang Bangerth  email: bange...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] FEValues pre-computed values

2019-02-11 Thread Wolfgang Bangerth
On 2/11/19 1:52 PM, Doug wrote:
> 
> Just to clarify, the Jacobian, its transpose inverse and its determinant will 
> be re-computed every time reinit() is called? Even if the mesh has not 
> changed 
> from the previous time step?

Yes. We don't store this information for all cells -- in fact, FEValues 
doesn't actually know anything about the mesh as a whole, all it knows is 
about the current cell. So there is no place to look this information up: if 
the current cell is different from the previous cell, then this information 
needs to be recomputed by FEValues. Of course, that doesn't prevent you from 
making a run through all cells, storing this information for each cell, and in 
all following loops over all cells use what you cached rather than utilizing 
FEValues. That's the classic memory-vs-compute trade-off.

(I will note that FEValues has optimizations that make some of this cheaper. 
For example, if the cell for which you call reinit() is simply a translation 
of the previous cell, then it is not necessary to recompute the Jacobian, 
transpose, inverse, and determinant because it will be the same as for the 
previous cell. There is some trickery involved in this when using multiple 
threads that might defeat the optimization, but it can be enabled.)
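
A minimal sketch of that cache-once, reuse-later pattern (all names here are 
illustrative, not taken from any particular program): one initial sweep stores 
the JxW values per cell, and later loops read the cache instead of calling 
reinit() again; the same idea works for mapped shape gradients.

  std::vector<std::vector<double>> cached_JxW (triangulation.n_active_cells());

  FEValues<dim> fe_values (fe, quadrature, update_JxW_values);
  for (const auto &cell : dof_handler.active_cell_iterators())
    {
      fe_values.reinit (cell);

      std::vector<double> &JxW = cached_JxW[cell->active_cell_index()];
      JxW.resize (quadrature.size());
      for (unsigned int q = 0; q < quadrature.size(); ++q)
        JxW[q] = fe_values.JxW(q);
    }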


> If so, I might end up pre-computing and storing the Jacobian transpose 
> inverse 
> and the Jacobian determinant to avoid re-computation. However, its effect on 
> the total time should be relatively small compared to flux computations. 
> "Premature optimization is the root of all evil".

Well said. Implement the simplest version of the code first and optimize once 
you have hard evidence that something is an actual problem.

Best
  W.

-- 

Wolfgang Bangerth  email: bange...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] FEValues pre-computed values

2019-02-11 Thread Doug
Prof. Bangerth,

Just to clarify, the Jacobian, its transpose inverse and its determinant 
will be re-computed every time reinit() is called? Even if the mesh has not 
changed from the previous time step?

If so, I might end up pre-computing and storing the Jacobian transpose 
inverse and the Jacobian determinant to avoid re-computation. However, its 
effect on the total time should be relatively small compared to flux 
computations. "Premature optimization is the root of all evil".

Doug

On Sunday, February 10, 2019 at 2:15:04 PM UTC-5, Wolfgang Bangerth wrote:
>
>
> Doug, 
>
> > I am currently looking into the implementation FEValues. Here is my 
> understanding 
> > 
> >  1. FEValues takes some Mapping, FiniteElement, and Quadrature to 
> evaluate and 
> > store the values with the required UpdateFlags. 
> >  2. FEValues::initialize through the FiniteElement::get_data will then 
> > pre-compute all the required data that are possibly computable on 
> the 
> > reference cell. 
> >  3. FEValues::reinit will then compute/fetch the data on a physical 
> cell, 
> > using the Jacobian information by calling the various 
> fill_fe_values() 
> > functions. 
> > 
> > I see that most of the examples use implicit methods to solve the 
> various 
> > problems. Therefore, the cost of initialize() and reinit() on each cell 
> is 
> > quite low compared to solving the actual system at every step. However, 
> for 
> > explicit methods such as RK, we wouldn't want to re-compute the Jacobian 
> or 
> > reference gradients (\nabla_{physical} \phi = J^{-T} \nabla _{reference} 
> \phi) 
> > at every time step. Therefore, the FEValues declaration would be outside 
> the 
> > "assemble_system()" seen in most tutorials, and passed as an argument as 
> > "assemble_system(FEValues fe_v)". Unfortunately, I am getting lost in 
> the code 
> > where things values are pre-computed versus fetching them. 
>
> I think that's not necessary. The creation of an FEValues object is 
> expensive, 
> but on meshes with a non-trivial number of cells, the creation of the 
> FEValues 
> object is still cheap compared to what you're going to do with the object 
> on 
> the cells. (I.e., it is expensive compared to the operations on one cell; 
> but 
> the creation is cheap compared to the operations on *all* cells.) 
>
>
> > My question is: What is being re-computed every time I call 
> > FEValues::reinit(cell)? 
>
> initialize() computes everything that can be computed on the reference 
> cell. 
> This includes the values and gradients of the shape functions with regard 
> to 
> the reference coordinates. 
>
> reinit() then only computes everything that depends on the actual cell. 
> This 
> means for example computing the Jacobian of the mapping on a given cell, 
> and 
> multiplying it by the (precomputed) gradients of the shape functions on 
> the 
> reference cell to obtain the gradient on the real cell. 
>
>
> > I am mainly worried about it re-computing Jacobians (its determinant and 
> its 
> > inverse) and evaluating its product with the reference gradients 
> > (\nabla_{reference} phi) every time reinit(cell) is called. Otherwise, 
> the 
> > solution will be to pre-compute and store those outside in some other 
> > container and avoid using FEValues. 
>
> Yes. This (the Jacobian of the mapping and its determinant) is a quantity 
> that 
> changes from each cell to the next, and so it is computed in reinit() 
> because 
> it simply can't be precomputed without knowledge of the actual cell. 
>
> In explicit methods, you will likely visit the same mesh over and over, 
> and so 
> it does make sense to compute things such as the gradient of the shape 
> functions on each cell (rather than the Jacobian or its determinant, which 
> goes into the computation) once at the beginning of the program and then 
> re-use it where necessary. You can of course do this pre-computation using 
> FEValues :-) 
>
> An alternative is the matrix-free framework that can do some of these 
> things 
> as well. You may want to look at the corresponding example programs. 
>
> Best 
>   W. 
>
>
> -- 
>  
> Wolfgang Bangerth  email: bang...@colostate.edu 
>  
> www: http://www.math.colostate.edu/~bangerth/ 
>
>



Re: [deal.II] Re: different results when compute integration with adaptive mesh

2019-02-11 Thread chucui1016
Dear Prof Arndt,

Thanks for your quick answer!

> in the end your program is very similar to step-6. Do you observe the same 
> behavior there as well?
Yes, my code adds the several lines below to step-6.cc directly:
...
  SparseMatrix<double> volume_matrix;
  SparseMatrix<double> gradient_matrix;
...
  volume_matrix.reinit (sparsity_pattern);
  gradient_matrix.reinit (sparsity_pattern);
...

  cell_volume_matrix(i,j) += (fe_values.shape_value(i,q_index) *
                              fe_values.shape_value(j,q_index) *
                              fe_values.JxW(q_index));
  cell_gradient_matrix(i,j) += (fe_values.shape_grad(i,q_index) *
                                fe_values.shape_grad(j,q_index) *
                                fe_values.JxW(q_index));
...
  constraints.distribute_local_to_global (cell_volume_matrix,
  local_dof_indices,
  volume_matrix);
  constraints.distribute_local_to_global (cell_gradient_matrix,
  local_dof_indices,
  gradient_matrix); 
...
  double int_test_volume   = volume_matrix.matrix_norm_square(solution);
  double int_test_gradient = gradient_matrix.matrix_norm_square(solution);
  std::cout << " u square " << cycle << ": " << int_test_volume
            << "  grad u square " << cycle << ": " << int_test_gradient
            << std::endl;
and the results:
Cycle 0:
   Number of active cells:   20
   Number of degrees of freedom: 89
 u square 0: 0.0305621  grad u square 0: 0.256542
Cycle 1:
   Number of active cells:   44
   Number of degrees of freedom: 209
 u square 1: 0.0438936  grad u square 1: 1.31255
Cycle 2:
   Number of active cells:   92
   Number of degrees of freedom: 449
 u square 2: 0.0505902  grad u square 2: 3.584
Cycle 3:
   Number of active cells:   200
   Number of degrees of freedom: 921
 u square 3: 0.0536183  grad u square 3: 6.60495
Cycle 4:
   Number of active cells:   440
   Number of degrees of freedom: 2017
 u square 4: 0.0555872  grad u square 4: 19.7488
Cycle 5:
   Number of active cells:   956
   Number of degrees of freedom: 4425
 u square 5: 0.0562045  grad u square 5: 48.0924
Cycle 6:
   Number of active cells:   1916
   Number of degrees of freedom: 8993
 u square 6: 0.0572988  grad u square 6: 117.562
Cycle 7:
   Number of active cells:   3860
   Number of degrees of freedom: 18353
 u square 7: 0.0577537  grad u square 7: 258.867
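
For reference, the identity the two matrix_norm_square calls above are meant 
to evaluate is the following (written out here as a sketch, assuming the two 
matrices are assembled exactly as shown, without any constraint elimination):

  \int_\Omega u_h^2 \, dx = U^\top M U,
  \qquad
  \int_\Omega |\nabla u_h|^2 \, dx = U^\top K U,

where $u_h = \sum_i U_i \varphi_i$, $M_{ij} = \int_\Omega \varphi_i \varphi_j \, dx$
(volume_matrix) and $K_{ij} = \int_\Omega \nabla\varphi_i \cdot \nabla\varphi_j \, dx$
(gradient_matrix); SparseMatrix::matrix_norm_square(v) returns exactly
$v^\top A v$ for the matrix $A$ it is called on.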


Thank you very much! Attached is the code of step-6.cc with the lines above 
added.

Best,
Chucui


On Monday, February 11, 2019 at 5:52:56 PM UTC+8, Daniel Arndt wrote:
>
> Chucui,
>
> in the end your program is very similar to step-6. Do you observe the same 
> behavior there as well?
> If not, try to find the changes that are responsible for the quickly 
> increasing norm.
> Do the results of your computation look reasonable?
>
> And the version of deal.II I use is 8.5.1, which doesn't have the updated 
>> class AffineConstraints, but has the ConstraintMatrix class. I don't 
>> know if this is the key point. Do I need to update my deal.II?
>>
> No, that should not be related at all.
>
> Best,
> Daniel
>
>

/* -
 *
 * Copyright (C) 2000 - 2016 by the deal.II authors
 *
 * This file is part of the deal.II library.
 *
 * The deal.II library is free software; you can use it, redistribute
 * it, and/or modify it under the terms of the GNU Lesser General
 * Public License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 * The full text of the license can be found in the file LICENSE at
 * the top level of the deal.II distribution.
 *
 * -

 *
 * Author: Wolfgang Bangerth, University of Heidelberg, 2000
 */


// @sect3{Include files}

// The first few files have already been covered in previous examples and will
// thus not be further commented on.
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/function.h>
#include <deal.II/base/logstream.h>
#include <deal.II/lac/vector.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/precondition.h>
#include <deal.II/grid/tria.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/tria_accessor.h>
#include <deal.II/grid/tria_iterator.h>
#include <deal.II/dofs/dof_accessor.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/numerics/vector_tools.h>
#include <deal.II/numerics/matrix_tools.h>
#include <deal.II/numerics/data_out.h>
#include <deal.II/grid/grid_out.h>

#include <fstream>
#include <iostream>

// From the following include file we will import the declaration of
// H1-conforming finite element shape functions. This family of finite

Re: [deal.II] Automatic Differentiation of the Finite Element Jacobian

2019-02-11 Thread Daniel Arndt


> > I see that in a previous thread 
> > (
> https://groups.google.com/forum/#!searchin/dealii/tpetra%7Csort:date/dealii/I8aLPu5anrM/gBzaaqivDAAJ),
>  
>
> > you plan on switching over to Tpetra, which will also be useful for us 
> since 
> > we might instantiate those vectors with AD types. Do you have an idea of 
> the 
> > time horizon on which this will be developped? 
>
> There is a patch here: 
> https://github.com/dealii/dealii/pull/7180 
> I suspect that it will be merged in the next few weeks or months, but the 
> question is what one can do with these vectors. I'll let Daniel Arndt 
> answer 
> that question, but suspect that additional work is necessary to use them 
> with 
> linear solvers, multiply with matrices, etc. 
>
The pull request mentioned above just introduces a wrapper class for Tpetra 
vectors
and is a first step towards Tpetra support. This vector class can be used 
with TrilinosWrappers::SparseMatrix if
the Scalar template parameter is double. The next step is to implement a 
Tpetra::CrsMatrix wrapper
that is templated with respect to the scalar type as well. Using these 
wrappers with deal.II's iterative solvers should then work out of the box.
For preconditioners, we likely want to use wrappers for Trilinos classes as 
well, though. This will likely require some more work.
Of course, every contribution in that direction is welcome!

Best,
Daniel



Re: [deal.II] Automatic Differentiation of the Finite Element Jacobian

2019-02-11 Thread Jean-Paul Pelteret
> The derivatives with respect to the 
> mesh points will be needed for optimization purposes.

I’m not familiar with the algorithms used in shape optimisation, but one 
*possible* work-around if you need derivatives wrt. the mesh points on the 
finite element cell level would be to do the following:
- Add an extra component to an FESystem, specifically an FE_Q(1)^dim. You would 
interpret this field as the current position of the mesh vertices, because the 
support points for this FE coincide with the grid vertices.
- You can initialise this field using VectorTools::interpolate. Specifically 
you would simply populate the relevant entries of the result returned by this 
Function with the coordinates given by the input Point.
- Now that you have these “extra DoFs”, you can compute the sensitivities 
on any cell with respect to them (a rough sketch follows below).

This, of course, would not really help if you compute some objective function 
based on the entire solution and you in fact need your entire program to be 
differentiable.
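
To make the first two bullet points a bit more concrete, here is a rough 
sketch (all names are made up for illustration, and dim as well as the choice 
of FE_Q(2) for the primary field are assumptions):

  #include <deal.II/base/function.h>
  #include <deal.II/dofs/dof_handler.h>
  #include <deal.II/fe/fe_q.h>
  #include <deal.II/fe/fe_system.h>
  #include <deal.II/lac/vector.h>
  #include <deal.II/numerics/vector_tools.h>

  // a (1+dim)-component function whose last dim components return the
  // coordinates of the evaluation point, so that interpolating it fills the
  // FE_Q(1)^dim block with the current vertex positions
  template <int dim>
  class VertexPosition : public Function<dim>
  {
  public:
    VertexPosition () : Function<dim>(1 + dim) {}

    virtual double value (const Point<dim> &p,
                          const unsigned int component) const
    {
      // component 0 is the primary solution field (set to zero here; in
      // practice one would only copy the position components over)
      return (component == 0) ? 0.0 : p[component - 1];
    }
  };

  FESystem<dim> fe (FE_Q<dim>(2), 1,     // the primary solution field
                    FE_Q<dim>(1), dim);  // the "position" components
  DoFHandler<dim> dof_handler (triangulation);
  dof_handler.distribute_dofs (fe);

  Vector<double> position_field (dof_handler.n_dofs());
  VectorTools::interpolate (dof_handler, VertexPosition<dim>(), position_field);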

Best,
Jean-Paul
