Re: [deal.II] Stabilization for Q1Q0 Stokes by Bochev [2006]

2019-09-09 Thread Wolfgang Bangerth
On 9/9/19 1:57 AM, Richard Schussnig wrote:
> 
> FINALLY, MY QUESTIONS:
> 
> Using the Q1Q1, I would in the end (FSI) need to come up with a space made
> from Q1 elements with a discontinuity at the interface - which shall be
> realized using different material_id()s. How may I do that other than
> using an FE_DGQ space for the pressure and enforcing continuity 'manually'
> through a giant ConstraintMatrix?

That's inefficient, of course :-) I assume that your interface is in the
interior of the domain? In that case, take a look at step-46, where solution
variables only live on certain cells, and are discontinuous at the interface
between the two parts of the domain.
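
In a nutshell, that looks something like the following (a minimal, untested
sketch in the spirit of step-46's hp/FE_Nothing machinery, with two
subdomains distinguished by material_id; the element combination -- two
pressure components, each switched off on one side -- is one way to get a
pressure that is discontinuous exactly at the interface):

  #include <deal.II/fe/fe_nothing.h>
  #include <deal.II/fe/fe_q.h>
  #include <deal.II/fe/fe_system.h>
  #include <deal.II/hp/dof_handler.h>
  #include <deal.II/hp/fe_collection.h>

  // two element combinations: the pressure of subdomain 0 lives in the
  // first FE_Q, that of subdomain 1 in the second; FE_Nothing switches
  // the respective other one off.
  hp::FECollection<dim> fe_collection;
  fe_collection.push_back(FESystem<dim>(FE_Q<dim>(1), dim,      // velocity
                                        FE_Q<dim>(1), 1,        // pressure, side 0
                                        FE_Nothing<dim>(), 1)); // no side-1 pressure
  fe_collection.push_back(FESystem<dim>(FE_Q<dim>(1), dim,
                                        FE_Nothing<dim>(), 1,
                                        FE_Q<dim>(1), 1));      // pressure, side 1

  hp::DoFHandler<dim> dof_handler(triangulation);
  for (const auto &cell : dof_handler.active_cell_iterators())
    cell->set_active_fe_index(cell->material_id() == 0 ? 0 : 1);
  dof_handler.distribute_dofs(fe_collection);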


> Using the Q1Q0, the main problem is data transfer and 'node searching' in
> the parallel case - example: the stabilization matrix from cell 16 has
> pressure dof 45 and shares edges or maybe only a single vertex (!) with
> cells with pressure dofs 1 2 3 4 5. The cell matrix for the projection from
> Q0 (discontinuous) to Q1 (continuous) is an area-weighted sum of the
> pressures on the cells touching the vertex of the support of the matching
> bilinear function, therefore we get a 6x6 local matrix and entries into all
> 'touching' cells.

Yes, you'd have to create a map that for each vertex gives you a list of all 
adjacent cells. I think I recall that there is a function in GridTools for 
this, though.
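
If I remember right, the function is GridTools::vertex_to_cell_map(); a
minimal sketch (variable names are mine):

  #include <deal.II/grid/grid_tools.h>

  // for each vertex index, the set of active cells touching that vertex:
  const std::vector<std::set<typename Triangulation<dim>::active_cell_iterator>>
    vertex_to_cells = GridTools::vertex_to_cell_map(triangulation);

  // ... then, e.g. inside the assembly loop, all cells that share
  // vertex 'v' of the current cell:
  for (const auto &adjacent_cell : vertex_to_cells[cell->vertex_index(v)])
    {
      // accumulate |K| and |K| * p_K from 'adjacent_cell' here
    }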


> Since these cells are not only the direct neighbors of the current cell,
> things may get complicated quite fast if we consider the 3d case with
> hanging nodes; on the other hand, looping over all elements again(!)
> inside the element loop to check the vertex_index() is extremely slow.

Yes, you'd reverse this approach: loop over all vertices first and then, 
within that loop, over all adjacent cells.


> Do you know of any better-fitting stabilizations for the Q1Q0 pair? Or do
> you think there are better options around?

Q1-Q1 is a pretty good method, and not very difficult to implement. I'll note 
that Q1-Q0 *sounds* like a good idea, but has a very low convergence rate and 
so will not yield particularly good accuracy if that's what you actually care 
about. Of course, Q2-Q1 is the standard for good reasons.

Best
  W.




Re: [deal.II] Re: Error with Petsc and multithreading?

2019-09-09 Thread Wolfgang Bangerth
On 9/9/19 3:10 PM, Konrad wrote:
> 
> Thanks a lot, Wolfgang! The error message is actually quite clear. What 
> I found out in addition is that it is advantageous and helpful to read 
> the documentation ... ;-)

Yes, no doubt :-) We're spending a lot of time writing it ;-)

Cheers
  W.



Re: [deal.II] Re: Error with Petsc and multithreading?

2019-09-09 Thread Konrad

>
> Yes, this is indeed the case, and is what the error message was trying to 
> suggest. If you think that the error message could have been clearer, feel 
> free to propose a better wording and we'll include it for the next 
> release! 
>

Thanks a lot, Wolfgang! The error message is actually quite clear. What I 
found out in addition is that it is advantageous and helpful to read the 
documentation ... ;-)

Best ,
Konrad



Re: [deal.II] Re: Extracting volume data to the boundary

2019-09-09 Thread Wolfgang Bangerth


> In solving a Laplace-Beltrami problem on an advecting surface one should 
> identify the Gauss points on the manifold dim-1 cell and retrieve at 
> such locations relevant information from the solution of the advection 
> problem, using the dim dof_handler of the volume. I wonder how to 
> connect a manifold dim-1 cell to the volumetric cell it was extracted 
> from. The GridGenerator::extract_boundary_mesh method seems to provide 
> some information, since "it returns a map that for each cell of the 
> surface mesh (key) returns an iterator to the corresponding face of a 
> cell of the volume mesh (value)". I am not sure how I can use this 
> map to link a manifold cell to the corresponding volume cell.

Alberto,
the way this is supposed to be used is as follows:

When you are assembling the linear system for the surface problem, you 
will have an FEValues object for the surface shape functions. 
You initialize it with a Quadrature<dim-1> object.

But then you also need to evaluate the volume solution at the same 
quadrature points. You do this by creating an FEFaceValues object, 
which also requires you to provide a Quadrature<dim-1> object -- which 
you want to choose the same as above.

Now, if you need the volume solution when assembling something for the 
surface problem, you are sitting on a cell of the surface mesh; use the 
map returned by the extract_boundary_mesh() function to obtain what the 
corresponding face of the volume mesh is and reinit() the FEFaceValues 
for that cell. You can then use the FEFaceValues object to evaluate the 
volume solution at the same quadrature points as you use for the 
FEValues object of the surface mesh. The only thing you may have to pay 
attention to is that the surface cell may be inverted compared to the 
volume cell's face -- in which case the quadrature points of the two 
objects match, but are in a different order. You'll have to translate 
between these permutations then.
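
In code, this looks roughly as follows (an untested sketch;
surface_to_volume_map is the map returned by extract_boundary_mesh(), and
find_cell_and_face_no() is a hypothetical helper that recovers the adjacent
volume DoFHandler cell and face number from a face iterator, e.g. from a
lookup table built once during setup):

  QGauss<dim - 1> quadrature(2); // one quadrature, shared by both objects

  FEValues<dim - 1, dim> surface_fe_values(surface_fe, quadrature,
                                           update_values | update_JxW_values);
  FEFaceValues<dim>      volume_fe_face_values(volume_fe, quadrature,
                                               update_values);

  std::vector<double> volume_values(quadrature.size());

  for (const auto &surface_cell : surface_dof_handler.active_cell_iterators())
    {
      surface_fe_values.reinit(surface_cell);

      // face of the volume mesh this surface cell was extracted from
      // (the map is keyed by *triangulation* cell iterators, so converting
      // from the DoFHandler cell may need a small helper or a CellId-based
      // map):
      const auto volume_face = surface_to_volume_map.at(surface_cell);

      // FEFaceValues::reinit() wants the adjacent cell plus the face number:
      const auto cell_and_face = find_cell_and_face_no(volume_face);
      volume_fe_face_values.reinit(cell_and_face.first, cell_and_face.second);

      volume_fe_face_values.get_function_values(volume_solution, volume_values);

      // volume_values[q] now belongs to the same physical point as
      // quadrature point q of surface_fe_values -- up to the possible
      // reordering mentioned above.
    }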

Best
  W.



Re: [deal.II] Instantiation of Vector< Sacado::Fad::DFad >

2019-09-09 Thread Wolfgang Bangerth


> I am trying to instantiate a Vector with an AD type such as Vector< 
> Sacado::Fad::DFad > by changing
> 
> for (SCALAR : REAL_AND_COMPLEX_SCALARS) to for (SCALAR : ALL_SCALAR_TYPES)
> 
> in the instantiation file
> 
> dealii/source/lac/la_parallel_vector.inst.in

This might work, but the scheme is actually supposed to work differently: in 
your user code, just #include <deal.II/lac/la_parallel_vector.templates.h> 
and everything you need will be instantiated there.
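
That is, something like the following (assuming the usual .templates.h
naming convention of the library):

  // the .templates.h header contains the *definitions*, so the compiler
  // can instantiate the vector for a scalar type the library does not
  // pre-instantiate:
  #include <deal.II/lac/la_parallel_vector.templates.h>
  #include <Sacado.hpp>

  LinearAlgebra::distributed::Vector<Sacado::Fad::DFad<double>> v;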


> However, it seems 3 issues come up.
> 
> 1. I can fix this one, so skip ahead if you want.
> 
> A bunch of static/dynamic casts of ADvar(unsigned int) arise 
> through real_type(partitioner->local_size()), but Trilinos doesn't provide 
> the *unsigned* int overload. One simply needs to cast the unsigned int value 
> to a long before casting, for example real_type((long)partitioner->local_size()).

It seems like we are abusing real_type for something it wasn't intended 
to do. Can you open a GitHub issue with the error message you get with the 
unmodified code?


> 2. I can also fix.
> 
> dealii/include/deal.II/base/exceptions.h
> 
> Requires the definition of
>     dealii::numbers::is_finite(number)
> 
> where I can provide a definition in dealii/include/deal.II/base/numbers.h for 
> the AD types.

I suspect that that would be useful in its own right. Can you open an issue or 
pull request for this as well?



> 3. I don't know how to fix:
> 
> std::complex(number) needs to be defined.
> 
> Now, this translates to
> std::complex<number>::complex(Sacado::Rad::ADvar<...> &),
> which of course doesn't exist.
> 
> I understand why it's casting into a complex to ensure that we can use the 
> exception for all scalar arguments. But this is limiting the behaviour such 
> that I can't use AD types.

Where exactly is this conversion necessary?

Best
  W.



Re: [deal.II] Re: Error with Petsc and multithreading?

2019-09-09 Thread Wolfgang Bangerth
On 9/9/19 2:47 AM, Konrad wrote:
> OK, just to answer the question: if you are running with PETSc you should not 
> call
> 
> dealii::Utilities::MPI::MPI_InitFinalize
> mpi_initialization(argc, argv, dealii::numbers::invalid_unsigned_int);
> 
> since this will invoke threading if you start a number of MPI processes that 
> is not equal to the number of cores you use. Instead one should pass (when 
> using generic linear algebra)
> 
> #ifdef USE_PETSC_LA
> dealii::Utilities::MPI::MPI_InitFinalize
> mpi_initialization(argc, argv, /* disable threading for petsc */ 1);
> #else
> dealii::Utilities::MPI::MPI_InitFinalize
> mpi_initialization(argc, argv, dealii::numbers::invalid_unsigned_int);
> #endif
> 
> I guess that was the bug.

Yes, this is indeed the case, and is what the error message was trying to 
suggest. If you think that the error message could have been clearer, feel 
free to propose a better wording and we'll include it for the next release!

Best
  W.



[deal.II] Re: Extracting volume data to the boundary

2019-09-09 Thread Alberto Salvadori
Dear community

in case it is of use, I attempted to write a small piece of code that seems 
to solve the issue of linking manifold cells to volume cells.

First, two maps are defined:

  // maps
  //
  // surface_to_volumefaces_mapping contains the outcome of the method
  // extract_boundary_mesh() and connects panels of the manifold
  // triangulation to the faces on the boundary of the volume
  // triangulation.
  //
  // manifoldcells_to_volumecells_mapping pairs surface panels of the
  // manifold triangulation with the cells of the volume triangulation.
  // It is filled in the method pair_the_triangulations().
  //
  std::map<
    typename parallel::shared::Triangulation<manifold_dim, dim>::cell_iterator,
    typename parallel::shared::Triangulation<dim>::face_iterator>
    surface_to_volumefaces_mapping;

  std::map<
    typename parallel::shared::Triangulation<manifold_dim, dim>::cell_iterator,
    typename parallel::shared::Triangulation<dim>::cell_iterator>
    manifoldcells_to_volumecells_mapping;




then the manifold grid is generated:

  //
  // Surface triangulation definition
  //
  this->pcout << "\n   Generation of surface discretization ... " << std::flush;

  surface_to_volumefaces_mapping =
    GridGenerator::extract_boundary_mesh(this->triangulation,
                                         this->manifold_triangulation);
  this->manifold_triangulation.set_all_manifold_ids(0);

  this->manifold_dof_handler.distribute_dofs(this->manifold_fe);
  this->manifold_accumulated_solution.reinit(
    this->manifold_dof_handler.n_dofs());
  this->manifold_former_step_solution.reinit(
    this->manifold_dof_handler.n_dofs());
  this->manifold_initial_guess.reinit(this->manifold_dof_handler.n_dofs());

  this->pcout << " done \n" << std::flush;

  //
  // Surface triangulation and volume triangulation pairing
  //
  pair_the_triangulations();


In the method pair_the_triangulations(), a manifold cell is linked to the 
corresponding volume cell for parallel::shared::Triangulations as follows:

  pair_the_triangulations(bool write_map_report)
  //
  // this function associates each surface panel with the volume cell
  // from which it emanates; the pairs are stored in the map
  // manifoldcells_to_volumecells_mapping.
  //
  {
    this->pcout << "pairing the two triangulations ... " << std::flush;

    // loop through the volumetric triangulation
    // (we do not restrict ourselves to locally owned cells here)
    typename parallel::shared::Triangulation<dim>::active_cell_iterator
      cell = this->triangulation.begin_active(),
      endc = this->triangulation.end();

    for (; cell != endc; ++cell)
      {
        // select a face at the boundary
        for (unsigned int face_number = 0;
             face_number < GeometryInfo<dim>::faces_per_cell;
             ++face_number)
          {
            const typename parallel::shared::Triangulation<dim>::face_iterator
              face = cell->face(face_number);

            if (face->at_boundary())
              {
                // scan surface_to_volumefaces_mapping and associate
                // manifold panels with volume faces
                typename std::map<
                  typename parallel::shared::Triangulation<manifold_dim, dim>::
                    cell_iterator,
                  typename parallel::shared::Triangulation<dim>::face_iterator>::
                  iterator map_it  = surface_to_volumefaces_mapping.begin(),
                           map_end = surface_to_volumefaces_mapping.end();

                for (; map_it != map_end; ++map_it)
                  if (face == map_it->second)
                    manifoldcells_to_volumecells_mapping[map_it->first] = cell;
              }
          }
      }

    // map report
    if (write_map_report)
      {
        typename std::map<
          typename parallel::shared::Triangulation<manifold_dim, dim>::
            cell_iterator,
          typename parallel::shared::Triangulation<dim>::cell_iterator>::
          iterator it  = manifoldcells_to_volumecells_mapping.begin(),
                   end = manifoldcells_to_volumecells_mapping.end();

        this->pcout << "\n map report: " << std::flush;
        for (; it != end; ++it)
          this->pcout << "[ " << it->first->id() << ", " << it->second->id()
                      << " ], " << std::flush;


[deal.II] Re: Error with Petsc and multithreading?

2019-09-09 Thread Konrad
OK, just to answer the question: if you are running with PETSc you should 
not call

dealii::Utilities::MPI::MPI_InitFinalize
mpi_initialization(argc, argv, dealii::numbers::invalid_unsigned_int);

since this will invoke threading if you start a number of MPI processes 
that is not equal to the number of cores you use. Instead one should pass 
(when using generic linear algebra)

#ifdef USE_PETSC_LA
dealii::Utilities::MPI::MPI_InitFinalize
mpi_initialization(argc, argv, /* disable threading for petsc */ 1);
#else
dealii::Utilities::MPI::MPI_InitFinalize
mpi_initialization(argc, argv, dealii::numbers::invalid_unsigned_int);
#endif

I guess that was the bug.

Best,
Konrad



[deal.II] Stabilization for Q1Q0 Stokes by Bochev [2006]

2019-09-09 Thread Richard Schussnig
Hi everyone!

I am trying to implement the stabilizations presented in a paper by Bochev 
et al. [2006], 
which you may find here:

https://pdfs.semanticscholar.org/47be/4e317d4dcbbf1b70c781394e49c1dbf7e538.pdf

This one is parameter free, and they present local projections for both 
Q1Q1 and Q1Q0 element pairs (both of course not fulfilling the LBB condition).

The Q1Q1 is easily implemented, since we only need to adapt the cell 
matrices; that would also be possible with the PSPG stabilization, for 
example, but here we do not need a stabilization parameter.
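
For reference, my reading of the Q1Q1 cell term in the paper is
(p - Pi_0 p, q - Pi_0 q)_K, with Pi_0 the cellwise L2 projection onto
constants; an untested assembly sketch (names are mine; sign and scaling as
given in the paper):

  const FEValuesExtractors::Scalar pressure(dim);

  // cellwise mean of each pressure shape function:
  //   Pi_0 phi_i = (1/|K|) int_K phi_i dx
  std::vector<double> phi_mean(dofs_per_cell, 0.);
  double cell_volume = 0.;
  for (unsigned int q = 0; q < n_q_points; ++q)
    {
      cell_volume += fe_values.JxW(q);
      for (unsigned int i = 0; i < dofs_per_cell; ++i)
        phi_mean[i] += fe_values[pressure].value(i, q) * fe_values.JxW(q);
    }
  for (unsigned int i = 0; i < dofs_per_cell; ++i)
    phi_mean[i] /= cell_volume;

  // stabilization block: int_K (phi_i - Pi_0 phi_i)(phi_j - Pi_0 phi_j) dx;
  // velocity dofs drop out, since their pressure component is zero.
  for (unsigned int q = 0; q < n_q_points; ++q)
    for (unsigned int i = 0; i < dofs_per_cell; ++i)
      for (unsigned int j = 0; j < dofs_per_cell; ++j)
        local_matrix(i, j) -=
          (fe_values[pressure].value(i, q) - phi_mean[i]) *
          (fe_values[pressure].value(j, q) - phi_mean[j]) *
          fe_values.JxW(q);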

More interestingly, the Q1Q0 pair allows the use of a discontinuous 
pressure, which is of interest in my case, since I ultimately want to apply 
this to an ALE-FSI approach, where the pressure is discontinuous at the 
interface. Also, for PSPG I need the derivatives of the pressure functions, 
which are exactly zero for continuous functions mapped by affine mappings 
(which, for quads/hexas, is of course most likely not the case).

FINALLY, MY QUESTIONS:

Using the Q1Q1, I would in the end (FSI) need to come up with a space made 
from Q1 elements with a discontinuity at the interface - which shall be 
realized using different material_id()s. How may I do that other than using 
an FE_DGQ space for the pressure and enforcing continuity 'manually' through 
a giant ConstraintMatrix?

Using the Q1Q0, the main problem is data transfer and 'node searching' in 
the parallel case. Example: the stabilization matrix from cell 16 has 
pressure dof 45 and shares edges or maybe only a single vertex (!) with 
cells with pressure dofs 1 2 3 4 5. The cell matrix for the projection from 
Q0 (discontinuous) to Q1 (continuous) is an area-weighted sum of the 
pressures on the cells touching the vertex of the support of the matching 
bilinear function; therefore we get a 6x6 local matrix and entries into all 
'touching' cells. Since these cells are not only the direct neighbors of the 
current cell, things may get complicated quite fast if we consider the 3d 
case with hanging nodes; on the other hand, looping over all elements 
again(!) inside the element loop to check the vertex_index() is extremely 
slow.
- Is there a fast and efficient way to get the 'touching' cells in a 
triangulation for one vertex? (Of course I do that only once during setup 
and save a std::map from active_cell_index to pressure dof and cell area.)
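
In formulas, the projection I have in mind assigns to each vertex v the
area-weighted average of the Q0 pressures p_K of the cells K touching v,

  (\Pi p)(x_v) = \frac{\sum_{K \ni v} |K| \, p_K}{\sum_{K \ni v} |K|},

which is exactly why each vertex couples all of its adjacent cells.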

Do you know of any better-fitting stabilizations for the Q1Q0 pair? Or do 
you think there are better options around?

Any help would be greatly appreciated!
Thanks in advance!

Kind regards,
Richard




Re: [deal.II] MUMPS with PETScWrappers::MPI::BlockSparseMatrix

2019-09-09 Thread Richard Schussnig
Hi Konrad!

You can use the parallel direct solver in the Schur complement; for 
orientation, take a look at step-57 (which should be Navier-Stokes with a 
direct solver for the A-block, if I'm not mistaken). However, my limited 
C++ knowledge did not allow me to do the factorization in the constructor 
of the 'preconditioner', but you can initialize it before creating the 
preconditioner and pass it in as an argument! This is of course considerably 
faster than doing the factorization inside the preconditioner, since you 
would otherwise redo it in every iteration. Nevertheless, the latter is 
perfectly fine for testing, to see if the approach is really 
doing the job!
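
Schematically, following step-57 (BlockSchurPreconditioner stands in for a
tutorial-style preconditioner class; the other names are mine, and for the
parallel case you would substitute e.g. PETScWrappers::SparseDirectMUMPS in
the same spirit):

  // factorize the A block once, *outside* the preconditioner ...
  SparseDirectUMFPACK A_inverse;
  A_inverse.initialize(system_matrix.block(0, 0)); // the expensive step

  // ... and hand the ready-made factorization to the preconditioner:
  BlockSchurPreconditioner preconditioner(system_matrix, A_inverse);

  SolverControl solver_control(1000, 1e-10);
  SolverFGMRES<BlockVector<double>> solver(solver_control);
  solver.solve(system_matrix, solution, system_rhs, preconditioner);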

Kind regards, Richard

On Sunday, September 8, 2019 at 15:20:20 UTC+2, Konrad wrote:
>
> Thank you, David, so I will see if I can maybe still use it in the Schur 
> complement somehow.
>
> Best,
> Konrad
>
> On Friday, September 6, 2019 at 7:46:23 PM UTC+2, David Wells wrote:
>>
>> Hi Konrad,
>>
>> I don't think that it is possible to use MUMPS with a block matrix for 
>> exactly this reason. I think that if you want to use MUMPS you will need to 
>> copy the block matrix into a non-block matrix.
>>
>> Thanks,
>> David
>>
>> On Fri, Sep 6, 2019 at 12:25 PM Konrad  wrote:
>>
>>> Dear deal.ii community,
>>>
>>> is it possible to use MUMPS with 
>>> a PETScWrappers::MPI::BlockSparseMatrix? Don't find anything but I see 
>>> that PETScWrappers::MPI::BlockSparseMatrix does not inherit from 
>>> PETScWrappers::MatrixBase.
>>>
>>> Best,
>>> Konrad
