Re: [deal.II] mmult memory leak with petsc

2020-08-18 Thread Richard Schussnig
Hi again,

I took a look at the modified MWE you sent -- immediately noticing that you 
read through the whole thing, 
which is amazing, considering that this is a Google group! 
So, thanks again for your time and effort, it is really appreciated!

I have been busy doing the following with our little toy problem:
(i)  Upgraded to deal.II v9.2.0 with candi, together with PETSc 3.13.1 (and 
Trilinos 12.18.1).
(ii) Further stripped down the MWE, deleted unnecessary parts and moved 
all but the mmult() out of the 'main loop'.

So, I still observe the following:
using Trilinos, all is fine;
using PETSc, the call
leftMatrix.mmult(result, rightMatrix)
seems fine, whereas calling
leftMatrix.mmult(result, rightMatrix, vector)
still results in a memory leak for me.
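
For reference, a minimal sketch of the loop being tested, assuming leftMatrix, 
rightMatrix, result and vector are LA::MPI objects that were set up once, 
outside the loop, as in the attached MWE (the names here are illustrative):

for (unsigned int step = 0; step < 1000; ++step)
  {
    // Two-argument variant: no memory growth observed.
    leftMatrix.mmult(result, rightMatrix);

    // Three-argument variant, result = leftMatrix * diag(vector) * rightMatrix:
    // with PETSc, this is the call that keeps accumulating memory.
    // leftMatrix.mmult(result, rightMatrix, vector);
  }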

Could you verify that again for me, please?
If this works fine for you, I will just scale the rows of rightMatrix by the 
vector entries 'by hand' 
before using the call without the vector.

Thanks in advance & kind regards,
Richard

/*
  This code is licensed under the "GNU GPL version 2 or later". See
  license.txt or https://www.gnu.org/licenses/gpl-2.0.html
*/

// Include files
#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/index_set.h>
#include <deal.II/base/mpi.h>
#include <deal.II/base/utilities.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_renumbering.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_system.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/tria.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/generic_linear_algebra.h>

namespace LA
{
  using namespace dealii::LinearAlgebraPETSc;
//  using namespace dealii::LinearAlgebraTrilinos;
}

// deal.II
using namespace dealii;

// Define flow problem.
template <int dim>
class flow_problem
{
public:
  flow_problem(const FESystem<dim> &fe);
  ~flow_problem();
  void
  run();

private:
  // Create system matrix, rhs and distribute degrees of freedom.
  void
  setup_system();

  const FESystem<dim> &fe;

  MPI_Comm                                  mpi_communicator;
  parallel::distributed::Triangulation<dim> triangulation;
  DoFHandler<dim>                           dof_handler;
  ConditionalOStream                        pcout;

  // Distributed vectors.
  LA::MPI::BlockSparseMatrix system_matrix;
  LA::MPI::SparseMatrix  B_inv_diag_D_C;

  AffineConstraints<double> constraints;

  std::vector<IndexSet> own_partitioning, rel_partitioning;
};

// We are going to use the finite element discretization:
// Q_(k+1)^c for solid and fluid displacements and velocities
// and Q_k^dc for the pressure.
template <int dim>
flow_problem<dim>::flow_problem(const FESystem<dim> &fe)
  : fe(fe)
  , mpi_communicator(MPI_COMM_WORLD)
  , triangulation(mpi_communicator,
                  typename Triangulation<dim>::MeshSmoothing(
                    Triangulation<dim>::smoothing_on_refinement |
                    Triangulation<dim>::smoothing_on_coarsening))
  , dof_handler(triangulation)
  , pcout(std::cout, (Utilities::MPI::this_mpi_process(mpi_communicator) == 0))
{}

// Destructor.
template <int dim>
flow_problem<dim>::~flow_problem()
{}

// System setup
template <int dim>
void
flow_problem<dim>::setup_system()
{
  // Get some mesh.
  GridGenerator::hyper_cube(triangulation, -0.5, 0.5, false);
  triangulation.refine_global(4);

  // Clear existing matrices.
  system_matrix.clear();
  B_inv_diag_D_C.clear();

  // Distribute DoFs.
  dof_handler.distribute_dofs(fe);

  // Cuthill_McKee renumbering.
  DoFRenumbering::Cuthill_McKee(dof_handler);

  // Component numbers.
  std::vector<unsigned int> block_component(dim + 1, 0); // velocity
  block_component[dim] = 1;  // pressure

  // Count DoFs per block.
  std::vector<types::global_dof_index> dofs_per_block(2);
  DoFTools::count_dofs_per_block(dof_handler, dofs_per_block, block_component);
  const unsigned int n_v = dofs_per_block[0], n_p = dofs_per_block[1];

  // Renumber by block components.
  DoFRenumbering::component_wise(dof_handler, block_component);

  // Indexsets.
  IndexSet own_dofs, rel_dofs;
  own_dofs = dof_handler.locally_owned_dofs();
  DoFTools::extract_locally_relevant_dofs(dof_handler, rel_dofs);

  // Block system; velocities and pressure in separate blocks.
  own_partitioning.resize(2);
  own_partitioning[0] = own_dofs.get_view(0, n_v);
  own_partitioning[1] = own_dofs.get_view(n_v, n_v + n_p);

  rel_partitioning.resize(2);
  rel_partitioning[0] = rel_dofs.get_view(0, n_v);
  rel_partitioning[1] = rel_dofs.get_view(n_v, n_v + n_p);

  // Get zero Dirichlet constraints on vector space.
  {
constraints.clear();
constraints.reinit(rel_dofs);
DoFTools::make_hanging_node_constraints(dof_handler, constraints);

    std::vector<bool> component_mask(dim + 1, true);
component_mask[dim] = false;

unsigned int dirichlet_boundary_id = 0;

Re: [deal.II] mmult memory leak with petsc

2020-08-18 Thread Richard Schussnig
Hi Wolfgang,

thanks a ton for taking a look at my silly example!
- It seems like I am now finally really at a point where an update is 
inevitable! ; )
... maybe just rewriting my codes to fit a newer version would have saved 
me loads of time in the long run,
but I am happy that remains a mystery.

So, I will update and comment here afterwards; 
also, I will work on the MWE as you suggested, in case the upgrade does not 
get rid of the problem.

Regards,
Richard

Am Dienstag, 18. August 2020 01:51:41 UTC+2 schrieb Wolfgang Bangerth:
>
>
> Richard, 
>
> > I am working on incompressible flow problems and stumbled upon an issue 
> when 
> > calling PETScWrappers::SparseMatrix::mmult(), 
> > but before I describe the problem in more detail, let me comment on the 
> basic 
> > building blocks of the MWE: 
> > 
> > (i) parallel::distributed::Triangulation & either PETSc or Trilinos 
> linear 
> > algebra packages 
> > (ii) dim-dimensional vector-valued FE space for velocity components & 
> > scalar-valued FE space for pressure, 
> > simply constructed via: 
> > FESystem<dim> fe (FE_Q<dim>(vel_degree), dim, 
> >   FE_Q<dim>(press_degree), 1); 
> > 
> > So, after integrating the weak form -- or just filling the matrices with 
> some 
> > entries -- we end up with a block system 
> > A u + B p = f 
> > C u + D p = g. 
> > To construct some preconditioners, we have to perform some matrix-matrix 
> products: 
> > either for the Schur complement 
> > (a) S = D - C inv(diag(A)) B 
> > or some A_gamma 
> > (b) A_gamma = A + gamma * B inv(diag(Mp)) C. 
> > 
> > Completely ignoring now why that might be necessary or not 
> > (I know that there is the possibility of assembling a grad-div term and 
> using a 
> > Laplacian on the pressure space to account for the reaction term, 
> > but that is not really an option in the stabilized case), 
> > we need those matrix products, and here comes the problem: 
> > 
> > using either PETSc or Trilinos I get identical matrix-products when 
> calling 
> > mmult(), BUT 
> > when using PETSc, the RAM is slowly but steadily filled (up to 500GB on 
> our 
> > local cluster) 
> > 
> > I came up with the MWE attached, which does nothing else than 
> initializing the 
> > system 
> > and then constructs the matrix product 1000 times in a row. 
>
> Nice! Can I ask you to play with this some more? I think you can make that 
> code even more minimal: 
> * Remove all of the commented out stuff -- for the purposes of reproducing 
> the 
> problem it shouldn't matter. 
> * Move the matrix initialization code out of the main loop. You want to 
> show 
> that it's the mmult that is the problem, but you're having a lot of other 
> code 
> in there as well that could in principle be the issue. If you move the 
> initialization of the individual factors out of the loop and only leave 
> whatever is absolutely necessary for the mmult in the loop, then you've 
> reduced the number of places where one needs to look. 
> * I bet you could trim down the list of #includes by a good bit :-) 
>
> You seem to be using a pretty old version of deal.II. There are a number 
> of 
> header files that no longer exist, and some code that doesn't compile for 
> me. 
> For your reference, attached is a version that compiles on the current 
> master 
> branch (though with a number of warnings). That said, it seems like the 
> memory 
> doesn't explode for me -- which raises the question of which version of 
> deal.II and PETSc you use. For me, this is deal.II dev and PETSc 3.7.5. 
>
>
> > Am I doing anything wrong or is this supposed to be used differently? 
> > I am using dealii v.9.0.1 installed via candi, so maybe the old version 
> is the 
> > reason. 
>
> Possible -- no need to chase an old bug that has already been fixed if you 
> can 
> simply upgrade. 
>
>
> > Bonus question: 
> > Is there a similar way to hand the sparsity patterns over to the mmult 
> > function? 
> > (For the dealii::SparseMatrix there is, which is why I am asking) 
>
> DynamicSparsityPattern has compute_mmult_pattern, which should give you a 
> sparsity pattern you can then use to initialize the resulting PETSc 
> matrix. 
>
> Best 
>   W. 
>
> -- 
>  
> Wolfgang Bangerth  email: bang...@colostate.edu 
>  
> www: http://www.math.colostate.edu/~bangerth/ 
>
>
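
For reference, a minimal sketch of the compute_mmult_pattern route mentioned 
above, assuming the factor matrices were built from the sparsity patterns 
left_dsp and right_dsp, and that own_partitioning and mpi_communicator are set 
up as in the attached MWE (the names here are illustrative):

// Build the sparsity pattern of the product, then initialize the PETSc
// matrix with it before calling mmult().
DynamicSparsityPattern product_dsp;
product_dsp.compute_mmult_pattern(left_dsp, right_dsp);

LA::MPI::SparseMatrix product;
product.reinit(own_partitioning[0],   // locally owned rows of the product
               own_partitioning[1],   // locally owned columns of the product
               product_dsp,
               mpi_communicator);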



[deal.II] mmult memory leak with petsc

2020-08-17 Thread Richard Schussnig
Hi everyone,

I am working on incompressible flow problems and stumbled upon an issue 
when calling PETScWrappers::SparseMatrix::mmult(),
but before I describe the problem in more detail, let me comment on the 
basic building blocks of the MWE:

(i) parallel::distributed::Triangulation & either PETSc or Trilinos linear 
algebra packages
(ii) dim-dimensional vector-valued FE space for velocity components & 
scalar-valued FE space for pressure,
simply constructed via:
FESystem<dim> fe (FE_Q<dim>(vel_degree), dim,
                  FE_Q<dim>(press_degree), 1);

So, after integrating the weak form -- or just filling the matrices with 
some entries -- we end up with a block system
A u + B p = f
C u + D p = g.
To construct some preconditioners, we have to perform some matrix-matrix 
products:
either for the Schur complement 
(a) S = D - C inv(diag(A)) B
or some A_gamma
(b) A_gamma = A + gamma * B inv(diag(Mp)) C.
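
A minimal sketch of how the product in (a) can be formed with the deal.II 
wrappers, assuming inv_diag_A is an LA::MPI::Vector already filled with 
1/diag(A) and C, B, temp are LA::MPI::SparseMatrix objects (names here are 
illustrative, not the ones from the attached MWE):

// temp = C * diag(inv_diag_A) * B, using the three-argument mmult()
C.mmult(temp, B, inv_diag_A);
// S = D - temp is then formed with the linear-algebra backend of your choice.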

Completely ignoring for now why that might be necessary or not
(I know that there is the possibility of assembling a grad-div term and 
using a
Laplacian on the pressure space to account for the reaction term, 
but that is not really an option in the stabilized case),
we need those matrix products, and here comes the problem:

Using either PETSc or Trilinos I get identical matrix products when calling 
mmult(), BUT
when using PETSc, the RAM is slowly but steadily filled (up to 500 GB on our 
local cluster).

I came up with the attached MWE, which does nothing other than initialize 
the system 
and then construct the matrix product 1000 times in a row.

Am I doing anything wrong or is this supposed to be used differently?
I am using dealii v.9.0.1 installed via candi, so maybe the old version is 
the reason.

Any suggestions? Any help would be greatly appreciated!

Bonus question:
Is there a similar way to hand the sparsity patterns over to the mmult 
function? 
(For the dealii::SparseMatrix there is, which is why I am asking)

Kind regards,
Richard



##
#  CMake script for blockLDU-fsi_(...) program:
##

# Set the name of the project and target:
SET(TARGET "mmult_mwe")

# Declare all source files the target consists of:
SET(TARGET_SRC
  ${TARGET}.cc
  # You can specify additional files here!
  )

# Usually, you will not need to modify anything beyond this point...

CMAKE_MINIMUM_REQUIRED(VERSION 2.8.8)

FIND_PACKAGE(deal.II 9.0.1 QUIET
  HINTS ${deal.II_DIR} ${DEAL_II_DIR} ../ ../../ $ENV{DEAL_II_DIR}
  )
IF(NOT ${deal.II_FOUND})
  MESSAGE(FATAL_ERROR "\n"
"*** Could not locate deal.II. ***\n\n"
"You may want to either pass a flag -DDEAL_II_DIR=/path/to/deal.II to 
cmake\n"
"or set an environment variable \"DEAL_II_DIR\" that contains this path."
)
ENDIF()

DEAL_II_INITIALIZE_CACHED_VARIABLES()
PROJECT(${TARGET})
DEAL_II_INVOKE_AUTOPILOT()
/*
  This code is licensed under the "GNU GPL version 2 or later". See
  license.txt or https://www.gnu.org/licenses/gpl-2.0.html

  Copyright 2019: Richard Schussnig
*/

// Include files
#include 
#include 
#include 
#include 
#include   
#include 
#include 

#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 

#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 

#include 
#include 
#include 
#include 

#include 
#include 
#include 
#include 
#include 
#include 

#include 
#include 
#include 
#include 
// #include 

#include 
#include 
#include 

// (Block-)LinearOperator and operations among those for the sub-solvers.
#include 
#include 
#include 

// Parameter handler for input-file processing at runtime.
#include 

// Linear algebra packages - switch between PETSc & Trilinos
//#define FORCE_USE_OF_TRILINOS // ###
namespace LA
{
	#if defined(DEAL_II_WITH_PETSC) && !(defined(DEAL_II_WITH_TRILINOS) && defined(FORCE_USE_OF_TRILINOS))
		using namespace dealii::LinearAlgebraPETSc;
		#define USE_PETSC_LA
	#elif defined(DEAL_II_WITH_TRILINOS)
		using namespace dealii::LinearAlgebraTrilinos;
	#else
		#error DEAL_II_WITH_PETSC or DEAL_II_WITH_TRILINOS required
	#endif
}

// C++
#include 
#include 
#include 

//deal.II
using namespace dealii;

// Define flow problem.
template <int dim>
class flow_problem
{
public:

  flow_problem (const FESystem<dim> &fe);
  ~flow_problem ();
  void run ();

Re: [deal.II] Re: Memory loss in system solver

2020-08-16 Thread Richard Schussnig

Hi again,

it seems my last comment got lost, but I wanted to post it here for others 
researching this issue:
I constructed a MWE and found out that actually solving the system via 
GMRES, CG or BiCGstab, 
using any of the implementations provided by PETSc, Trilinos or the deal.II 
versions thereof, did not result in memory loss!
So, from my side this was a false alarm!
- Good luck finding that bug, Alberto!
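
A minimal sketch of the kind of repeated-solve check the MWE does, assuming 
system_matrix, solution, system_rhs and preconditioner are PETSc objects 
that were already set up elsewhere (names are illustrative):

SolverControl solver_control(1000, 1e-12);
PETScWrappers::SolverGMRES solver(solver_control, mpi_communicator);

for (unsigned int repeat = 0; repeat < 1000; ++repeat)
  {
    solution = 0;
    solver.solve(system_matrix, solution, system_rhs, preconditioner);
    // Watch the resident memory here, e.g. via Utilities::System::get_memory_stats().
  }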

PS: I will make a new thread for the bug in my code if necessary!

Kind regards,
Richard



Re: [deal.II] Re: Memory loss in system solver

2020-07-29 Thread Richard Schussnig
Hi again,

Great to hear that you were able to construct a minimal working example & 
pinpoint the error location, 
that is already of great help, but please do share the MWE you have 
constructed! 
I can also confirm that the behaviour described in the previous posts 
does not(!) occur when running step-18 
(I am running v9.0.1, which is why I did not want to bother the mailing 
list).

If we have step-18 and your MWE, we can simply compare and see where the error 
might come from, right?
- Unfortunately though, I am not overly skilled when it comes to C++, 
so maybe someone else might be kind enough to help us out on that one?

Regards,
Richard

Am Mittwoch, 29. Juli 2020 17:09:01 UTC+2 schrieb Alberto Salvadori:
>
> Hi Daniel,
>
> here is the report of valgrind. Can you maybe help devising potential 
> issues? I am afraid to be not sufficiently skilled, yet.
> Thank you, I appreciate.
>
> Alberto
>
>
>
> albertosalvadori@ubuntu:~/Codes/m4_code/Release$ valgrind --leak-check=yes 
> mpirun -np 4 ./m4_code_9.1.1 ../input/viscosity_test -mechanics
> ==15940== Memcheck, a memory error detector
> ==15940== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
> ==15940== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright 
> info
> ==15940== Command: mpirun -np 4 ./m4_code_9.1.1 ../input/viscosity_test 
> -mechanics
> ==15940==
> ==15940== Warning: noted but unhandled ioctl 0x5441 with no size/direction 
> hints.
> ==15940==This could cause spurious value errors to appear.
> ==15940==See README_MISSING_SYSCALL_OR_IOCTL for guidance on writing a 
> proper wrapper.
> ==15944== Warning: invalid file descriptor 1024 in syscall close()
> ==15944== Warning: invalid file descriptor 1025 in syscall close()
> ==15944== Warning: invalid file descriptor 1026 in syscall close()
> ==15944== Warning: invalid file descriptor 1027 in syscall close()
> ==15944==Use --log-fd= to select an alternative log fd.
> ==15944== Warning: invalid file descriptor 1028 in syscall close()
> ==15944== Warning: invalid file descriptor 1029 in syscall close()
> ==15945== Warning: invalid file descriptor 1024 in syscall close()
> ==15945== Warning: invalid file descriptor 1025 in syscall close()
> ==15945== Warning: invalid file descriptor 1026 in syscall close()
> ==15945== Warning: invalid file descriptor 1027 in syscall close()
> ==15945==Use --log-fd= to select an alternative log fd.
> ==15945== Warning: invalid file descriptor 1028 in syscall close()
> ==15945== Warning: invalid file descriptor 1029 in syscall close()
> ==15945== Warning: invalid file descriptor 1030 in syscall close()
> ==15946== Warning: invalid file descriptor 1024 in syscall close()
> ==15946== Warning: invalid file descriptor 1025 in syscall close()
> ==15946== Warning: invalid file descriptor 1026 in syscall close()
> ==15946== Warning: invalid file descriptor 1027 in syscall close()
> ==15946==Use --log-fd= to select an alternative log fd.
> ==15946== Warning: invalid file descriptor 1028 in syscall close()
> ==15946== Warning: invalid file descriptor 1029 in syscall close()
> ==15947== Warning: invalid file descriptor 1024 in syscall close()
> ==15947== Warning: invalid file descriptor 1025 in syscall close()
> ==15947== Warning: invalid file descriptor 1026 in syscall close()
> ==15947== Warning: invalid file descriptor 1027 in syscall close()
> ==15947==Use --log-fd= to select an alternative log fd.
> ==15947== Warning: invalid file descriptor 1028 in syscall close()
> ==15947== Warning: invalid file descriptor 1029 in syscall close()
>
>
>   Welcome to m4_code, a multiphysics solver for several class of problems 
> developed at the m4lab @ UNIBS.
>   The problem that is going to be solved is: Mechanics for the data set 
> ../input/viscosity_test
>
>   Problem LargeStrainMechanicalProblem_OneField defined  
>
>Reading material parameters from file ../input/viscosity_test.materials 
> ...  done
>Reading time discretization parameters from file 
> ../input/viscosity_test.time_discretization ...  done
>
>   Time = 0., step =0
>Initialization
>Reading discretization from file ../input/viscosity_test.msh ...  done
>Number of active cells:   24 (by partition: 6+6+6+6)
>Number of degrees of freedom: 153 (by partition: 63+30+33+27)
>Dirichlet faces: 36, Neumann faces (with non-zero tractions): 0, 
> contact faces: 0
> NR it. 0, Assembling..., convergence achieved.
> Writing output..., 0.00 s. Elapsed time 0.00 s.
>
>   Time = 0.0100, step =1
>   Cycle 0:
>Number of active cells:   24 (by partition: 6+6+6+6)
>Number of degrees of freedom: 153 (by partition: 63+30+33+27)
>Dirichlet faces: 36, symmetry faces: 0
>Dirichlet faces: 36, Neumann faces (with non-zero tractions): 0, 
> contact faces: 0
> NR it. 0, Assembling..., 0.00 s, norm of residual is 
> 85544.434331779601052
>  Jacobi - Bicgstab , solver converged in 8 

[deal.II] Memory loss in system solver

2020-07-25 Thread Richard Schussnig
Hi Alberto,
I might be having a similar or even the same problem with PETSc! In my case, 
the memory accumulated is proportional to the number of iterations done in the 
SolverFGMRES solver. Also, when using Trilinos (for switching between PETSc and 
Trilinos, see step-40, I believe), this does not(!) happen!
Please do report back if you find anything - I did not look into it for now, 
but will do in a week or so.
Regards & good luck,
Richard



Re: [deal.II] [deal.ii]mesh import problem

2020-07-18 Thread Richard Schussnig
Hi Chen,
I successfully imported gmsh-generated meshes using the .msh file format once:
try to set the physical ids in gmsh to get the right boundary_id and 
material_id in deal.II, then export as version 2 ASCII .msh and untick the box 
[export all elements]. This was working for deal.II v9.0.1 and might still 
work!
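
A minimal sketch of reading such a file (the file name is a placeholder and 
dim = 2 is just for illustration):

#include <deal.II/grid/grid_in.h>
#include <deal.II/grid/tria.h>

#include <fstream>

int main()
{
  dealii::Triangulation<2> tria;
  dealii::GridIn<2>        grid_in;
  grid_in.attach_triangulation(tria);

  // "mesh.msh" is a placeholder; export it from gmsh as version 2 ASCII
  // with physical ids set, so boundary_id and material_id come out right.
  std::ifstream input("mesh.msh");
  grid_in.read_msh(input);
}
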
Good luck & kind regards,
Richard



[deal.II] Re: Solver for nonlinear solid dynamics

2020-03-26 Thread Richard Schussnig
Hi David,

Great to hear that, I am a fan of open-source software development in our 
field and appreciate the work of the preCICE project!
The tutorial steps you might want to look at are 8,18,24,44,46 & 62; the 
code gallery also offers quite some interesting codes, among which 
https://dealii.org/developer/doxygen/deal.II/code_gallery_goal_oriented_elastoplasticity.html
and
https://dealii.org/developer/doxygen/deal.II/code_gallery_Quasi_static_Finite_strain_Quasi_incompressible_ViscoElasticity.html
might be of most relevance to you.
The code by T. Wick is available here
http://www.thomaswick.org/gallery_engl.html

You might simplify one of the codes from the code gallery and plug in the 
element integration routine from the last code to get to 'geometrically 
nonlinear elasticity'.
I have done some similar things, thus have a bit of code in that direction. 
If you are interested in working together, just mail me.

Good luck & Kind regards,
Richard 


Am Mittwoch, 25. März 2020 18:08:24 UTC+1 schrieb David:
>
> Hello everyone,
>
> we (the preCICE team) are targeting to support a new deal.II solver for 
> preCICE. We want to simulate the probably well known Turek and Hron FSI3 
> benchmark. 
> Therefore, we would like to use a solver for nonlinear solid dynamics with 
> deal.II, since the deformations are rather large.
>
> This post is intended to collect some ideas before starting: Has anyone 
> already developed such a solver or knows a good starting point?
>
> We have currently a dynamic linear-elastic solid solver and there are of 
> course several tutorial programs dealing with nonlinear static solid 
> mechanics. 
>
> Any idea is appreciated!
>
> Best regards,
> David
>



Re: [deal.II] Re: solving stabilized Stokes

2020-03-12 Thread Richard Schussnig
Dear Wolfgang, dear Bruno,

Thank you very much again for your time & effort!

I think one should leave grad-div stabilization out of the discussion, 
since it is only used as a counter-measure. It has nothing to do with 
inf-sup stability,
but is rather used to additionally penalize violations of mass continuity 
in the case of stabilized formulations, through a penalty term in the 
balance of linear momentum.

I also checked the literature in V. John's book from 2016 (FEM for 
incompressible flow problems),
and what I found is that Example 4.100 for stabilized Stokes shows 
nicely that the velocity converges quadratically 
& that the order of the pressure convergence depends on the viscosity and is in 
[1,2] for Q1Q1 elements (nu = 1e-10 giving ~2).

So summing up, I was simply expecting too much it seems!

Best wishes & kind regards from Austria,
Richard



[deal.II] Re: solving stabilized Stokes

2020-03-10 Thread Richard Schussnig
Hi everyone,
I am back with good & bad news!
- Good news first: I implemented a parallel direct solver (MUMPS via PETSc), 
and going from a 2x2 blocked system to 
a regular one (actually a 1x1, still blocked, system)
one just needs to adapt the setup phase, not reorder per component and not 
use "get_view", 
but rather use the same vector and matrix (types) everywhere, with 1 block 
instead of 2. Once you realize that, it takes ~15 mins for a 
non-experienced deal.II user like me.

- Bad news is that I still do not obtain the optimal convergence rate in 
the pressure, i.e., 2 for Q1Q1 PSPG-stabilized Stokes.

The obtained rates in the Poiseuille benchmark (quadratic velocity profile & 
linear pressure) using the direct solver are:

 L2-errors in v & p: Q1Q1 + PSPG
 lvl  0: | e_v = 1           | eoc_v = 0       | e_p = 1           | eoc_p = 0       |
 lvl  1: | e_v = 0.531463    | eoc_v = 0.91196 | e_p = 0.479585    | eoc_p = 1.06014 |
 lvl  2: | e_v = 0.218196    | eoc_v = 1.28434 | e_p = 0.208925    | eoc_p = 1.1988  |
 lvl  3: | e_v = 0.0661014   | eoc_v = 1.72287 | e_p = 0.0674415   | eoc_p = 1.63128 |
 lvl  4: | e_v = 0.0176683   | eoc_v = 1.90352 | e_p = 0.0194532   | eoc_p = 1.79363 |
 lvl  5: | e_v = 0.00452588  | eoc_v = 1.96489 | e_p = 0.00549756  | eoc_p = 1.82315 |
 lvl  6: | e_v = 0.00114238  | eoc_v = 1.98616 | e_p = 0.00158448  | eoc_p = 1.79478 |
 lvl  7: | e_v = 0.000286765 | eoc_v = 1.9941  | e_p = 0.000475072 | eoc_p = 1.73779 |
 lvl  8: | e_v = 7.1824e-05  | eoc_v = 1.99733 | e_p = 0.000149148 | eoc_p = 1.67141 |
 lvl  9: | e_v = 1.79717e-05 | eoc_v = 1.99874 | e_p = 4.88044e-05 | eoc_p = 1.61166 |
 lvl 10: | e_v = 4.49483e-06 | eoc_v = 1.99939 | e_p = 1.64706e-05 | eoc_p = 1.56712 |
where e_v & e_p are relative errors, i.e., int(p-ph)dx/int(p)dx

The obtained rates using stable Taylor-Hood Q2Q1 are:
 L2-errors in v & p: Q2Q1
 lvl  0: | e_v = 2.20205e-16 | eoc_v = 0         | e_p = 3.58011e-16 | eoc_p = 0        |
 lvl  1: | e_v = 2.1471e-16  | eoc_v = 0.0364598 | e_p = 1.02455e-15 | eoc_p = -1.51692 |
 lvl  2: | e_v = 4.23849e-16 | eoc_v = -0.98116  | e_p = 4.1472e-16  | eoc_p = 1.30479  |
 lvl  3: | e_v = 6.9843e-16  | eoc_v = -0.720565 | e_p = 4.49311e-15 | eoc_p = -3.4375  |
 lvl  4: | e_v = 1.53993e-15 | eoc_v = -1.14068  | e_p = 9.82242e-15 | eoc_p = -1.12837 |
 lvl  5: | e_v = 7.60182e-15 | eoc_v = -2.30348  | e_p = 5.44953e-14 | eoc_p = -2.47198 |
 lvl  6: | e_v = 1.60118e-14 | eoc_v = -1.07472  | e_p = 1.86328e-13 | eoc_p = -1.77364 |
... which shows that I get the exact solution, as expected.

Using Q2Q2 + PSPG I get:
 L2-errors in v & p: Q2Q2 + PSPG
 lvl  0: | e_v = 3.5608e-16  | eoc_v = 0 | e_p =

[deal.II] Re: solving stabilized Stokes

2020-02-21 Thread Richard Schussnig
Dear Bruno,
thanks again for taking the trouble!
I will try to do that, but it is a bit involved, since I was setting up 
block matrices, which I need to change!

For pure Dirichlet, enclosed flow problems I did the same as in step-55, 
which is basically nothing 
(the iterative solver handles the pressure scaling), but after solving I 
rescale the pressure to a mean value of zero, 
which is working like a charm for Q2Q1, for example!

I will report back when I get some more insight using a direct solver!

Kind regards,
Richard



Re: [deal.II] solving stabilized Stokes

2020-02-19 Thread Richard Schussnig
Dear Wolfgang,
Thanks also to you for taking the time, I really appreciate it!

I already tested the integration: increasing the quadrature order by +1, +2, 
+3 ... gave identical results.
Now it seems that I should introduce a switch to a direct solver! 
This might give me a hint whether I need to rescale some rows!
I will report back if that helps!

Kind regards,
Richard


Am Mittwoch, 19. Februar 2020 03:03:57 UTC+1 schrieb Wolfgang Bangerth:
>
> On 2/14/20 5:34 AM, Richard Schussnig wrote: 
> > 
> > Could the observed behaviour be caused by applying an iterative solver*? 
> > (using ReductionControl and a reduction factor of 1e-13 which is really 
> low) 
> > * Block-triangular Schur-complement-based approach, similar as in 
> > step-55 suggested in the further possibilities for extension, basically 
> > using S~-(1/viscosity)*Mass_pressure giving 20-40 iterations up to 3mio 
> > dofs tested. 
> > 
> > I have been checking the code for a week now, and i really doubt, that 
> > the integration of just tau*grad(p)*grad(p) is wrong (again, the laplace 
> > drops out for a rectangular grid). 
> > Just to check, i used a manufactured solution from Poisuille flow and 
> > added the laplacian from PSPG to the rhs, giving me the exact (linear) 
> > solution in the pressure and quadratic convergence in the velocity, thus 
> > I assume, that the grad(p)grad(q) term added is correctly implemeted. 
> > (exact solution meaning, that i get an L2 error of err<1e-9 for any 
> > number of elements used) 
> > 
> > Anyone has ever experienced this or has anyone some tipps for further 
> > debugging? 
>
> We have all spent much time trying to figure out these sorts of issues 
> :-) My usual approach is to switch to a direct solver to eliminate the 
> possibility that the problem lies with the solver. Just speaking about, 
> say, step-22, there is also the issue that the iterative solver actually 
> just ignores the pressure-pressure block of the matrix (I think that we 
> put a mass matrix in there as that is used in the preconditioner -- or 
> at least that's what we used to do). So it's useful to just remove 
> everything that could be problematic, and the iterative solvers are 
> definitely a point in case. 
>
> The other thing I usually check is what happens if I increase the order 
> of quadrature. 
>
> Best 
>   W. 
>
> -- 
>  
> Wolfgang Bangerth  email: bang...@colostate.edu 
>  
> www: http://www.math.colostate.edu/~bangerth/ 
>



[deal.II] Re: solving stabilized Stokes

2020-02-19 Thread Richard Schussnig
Hi Bruno,
Thank you very much for taking the time to read my post & giving me some 
ideas!
I am already familiar with lethe, that is where I compared my implementation 
of the residual to, but for Stokes 
we actually do not introduce a nonlinearity, since we do not have the 
(nonlinear) convective term as in Navier-Stokes, so it should actually be 
fine, correct? 
As far as I know, the Laplacian(u) in the residual does not lead to 
nonlinearities, since the residual in PSPG for Stokes(!) is only
r := -nu*laplace(u) + grad(p) - f,
and we simply add on the element level + int( r * grad(q) )dx.
Nonetheless, I solve it using Newton OR Picard+Aitken, giving me identical 
results.

That it takes longer to reach the asymptotic convergence I can confirm 
(using my MATLAB code), 
but using deal.II and an iterative solver it just does not reach that point 
(I tested up to 3 million cells in 2d on just a rectangular grid with Poiseuille 
flow ... it didn't help).
Thanks also for that hint!

Kind regards,
Richard

Am Mittwoch, 19. Februar 2020 02:49:26 UTC+1 schrieb Bruno Blais:
>
> Dear Richard,
> We implement a quasi identical scheme in lethe (PSPG for continuity and 
> SUPG for momentum). You can find the code on the github, it might help you :
>
> https://github.com/lethe-cfd/lethe/blob/master/source/solvers/gls_navier_stokes.cc
>
> I can confirm that you should be getting order p+1 for velocity. To be 
> honest I have never really taken the time to look at pressure convergence, 
> but I recall that it can take very fine mesh to reach the asymptotic 
> convergence.
>
> Please keep in mind that the introduction of the PSPG term introduces a 
> non-linearity in the solver except in the case where your mesh is made of 
> perfect rectangles that are aligned with the axis of the system.
> Are you solving it using Newton's method?
>
> Best
> Bruno
>
>
> On Friday, 14 February 2020 07:34:09 UTC-5, Richard Schussnig wrote:
>>
>> Hi everyone!
>> I am currently implementing some stationary Stokes solvers based on 
>> step-55.
>> Therein, Taylor-Hood elements are being used. One can check the optimal 
>> order of 
>> convergence easily by comparing to the Kovasznay or Poiseuille flow solution, 
>> the first one being already implemented in this step.
>> I went ahead and added PSPG and Bochev-Dohrmann stabilizations to check 
>> the errors using those.
>>
>> For the latter, i get the expected suboptimal (!) convergence rates 
>> (vel:2 and p~1.5-2).
>> Using PSPG, one adds 
>> tau * residual momentum dot gradient(q)
>> (q being the pressure test function)
>> per element like:
>>
>> local_matrix (i,j) += fe_values.JxW(q) * tau_sups * phi_grads_p [j] * 
>> phi_grads_p [i];
>>
>> if (ADD_THE_LAPLACIAN)
>> {
>> local_matrix (i,j) += fe_values.JxW(q) * tau * (
>>   - viscosity * phi_laplacians_v [j]
>>   ) * phi_grads_p [i];
>> }
>>
>> Using the laplacian of the velocity u defined as below:
>>
>> if (get_laplace_residual)
>> {
>> phi_hessians_v[k] = fe_hessians[velocities].hessian(k,q);
>> for (int d = 0; d < dim; ++d)
>> { phi_laplacians_v[k][d] = trace(phi_hessians_v[k][d]);
>> }
>>
>> // Check laplacian, both give zero for rectangular grid and identical 
>> values for distorted grids.
>> unsigned int comp_k = fe.system_to_component_index(k).first;
>>
>> Tensor<1,dim> vector_laplace_v;
>> vector_laplace_v = 0;
>> if (comp_k < dim)
>> {
>> Tensor<2,dim> nonzero_hessian_k = fe_hessians.shape_hessian_component(k,q,comp_k);
>> vector_laplace_v [comp_k] += trace(nonzero_hessian_k);
>> }
>> Tensor<1,dim> diff;
>> diff = phi_laplacians_v[k] - vector_laplace_v;
>>
>> if (diff.norm() > 1e-6 || vector_laplace_v.norm() > 1e-16)
>> std::cout << "|divgrad_v| = " << std::setw(15) << vector_laplace_v.norm() 
>> << " , diff = " << diff.norm() << std::endl;
>>
>> }
>>
>> Which also includes a second option to compute the wanted laplacian.
>>
>> Using a perfectly rectangular grid, the laplacian is actually zero for 
>> all elements, thus everything reduces to just adding a pressure stiffness 
>> in the pressure-pressure block.
>> Nonetheless, i observe convergence rates of vel:2 and p:1.5-1.8 which is 
>> suboptimal and not(!) expected.
>> I implemented the same methods in a matlab inhouse code & observed 
>> perfect convergence rates of 2&2 for v using a direct solver.
>>
>> The stabilization parameter is in both cases (matlab or deal.ii) chosen 
>> as 
>> tau = 0.1*h^2/viscosity 
>> w

[deal.II] solving stabilized Stokes

2020-02-14 Thread Richard Schussnig
Hi everyone!
I am currently implementing some stationary Stokes solvers based on step-55.
Therein, Taylor-Hood elements are being used. One can check the optimal 
order of 
convergence easily by comparing to the Kovasznay or Poiseuille flow solution, 
the first one being already implemented in this step.
I went ahead and added PSPG and Bochev-Dohrmann stabilizations to check the 
errors using those.

For the latter, I get the expected suboptimal (!) convergence rates (vel: 2 
and p ~ 1.5-2).
Using PSPG, one adds 
tau * (momentum residual) dot gradient(q)
(q being the pressure test function)
per element like:

local_matrix (i,j) += fe_values.JxW(q) * tau_sups * phi_grads_p [j] * 
phi_grads_p [i];

if (ADD_THE_LAPLACIAN)
{
local_matrix (i,j) += fe_values.JxW(q) * tau * (
  - viscosity * phi_laplacians_v [j]
  ) * phi_grads_p [i];
}

Using the laplacian of the velocity u defined as below:

if (get_laplace_residual)
{
phi_hessians_v[k] = fe_hessians[velocities].hessian(k,q);
for (int d = 0; d < dim; ++d)
{ phi_laplacians_v[k][d] = trace(phi_hessians_v[k][d]);
}

// Check laplacian, both give zero for rectangular grid and identical 
values for distorted grids.
unsigned int comp_k = fe.system_to_component_index(k).first;

Tensor<1,dim> vector_laplace_v;
vector_laplace_v = 0;
if (comp_k < dim)
{
Tensor<2,dim> nonzero_hessian_k = fe_hessians.shape_hessian_component(k,q,comp_k);
vector_laplace_v [comp_k] += trace(nonzero_hessian_k);
}
Tensor<1,dim> diff;
diff = phi_laplacians_v[k] - vector_laplace_v;

if (diff.norm() > 1e-6 || vector_laplace_v.norm() > 1e-16)
std::cout << "|divgrad_v| = " << std::setw(15) << vector_laplace_v.norm() 
<< " , diff = " << diff.norm() << std::endl;

}

Which also includes a second option to compute the wanted laplacian.

Using a perfectly rectangular grid, the laplacian is actually zero for all 
elements, thus everything reduces to just adding a pressure stiffness in 
the pressure-pressure block.
Nonetheless, I observe convergence rates of vel: 2 and p: 1.5-1.8, which is 
suboptimal and not(!) expected.
I implemented the same methods in a MATLAB in-house code & observed perfect 
convergence rates of 2 & 2 for v & p using a direct solver.

The stabilization parameter is in both cases (MATLAB or deal.II) chosen as 
tau = 0.1*h^2/viscosity 
with 
double h = std::pow(cell->measure(), 1.0 / static_cast<double>(dim)) 
or h = cell->diameter(), giving similar results.

Could the observed behaviour be caused by applying an iterative solver*? 
(using ReductionControl and a reduction factor of 1e-13 which is really low)
* Block-triangular Schur-complement-based approach, similar to the one 
suggested in step-55's 'possibilities for extensions', basically using 
S ~ -(1/viscosity)*Mass_pressure, giving 20-40 iterations for up to 3 million 
dofs tested.

I have been checking the code for a week now, and I really doubt that the 
integration of just tau*grad(p)*grad(q) is wrong (again, the laplacian drops 
out for a rectangular grid).
Just to check, I used a manufactured solution from Poiseuille flow and added 
the laplacian from PSPG to the rhs, giving me the exact (linear) solution 
in the pressure and quadratic convergence in the velocity; thus I assume 
that the grad(p)grad(q) term added is correctly implemented. (Exact solution 
meaning that I get an L2 error of err < 1e-9 for any number of elements used.)

Has anyone ever experienced this, or does anyone have some tips for further 
debugging?
Thanks for taking the time to read the post, any help or comment would be 
greatly appreciated!

Greetings from Austria,
Richard



Re: [deal.II] Stabilization for Q1Q0 Stokes by Bochev [2006]

2019-09-11 Thread Richard Schussnig


Am Dienstag, 10. September 2019 13:26:56 UTC+2 schrieb Bruno Blais:
>
> I second Wolfgang comment on the fact that Q1Q1 is not difficult to 
> implement. You can also scale it to arbitrary Qn-Qn elements if you are 
> interested in higher order.
> We have implemented such an approach in our code based on dealii (
> https://github.com/lethe-cfd/lethe)
> Q1Q1 is already very diffusive (which is why it is so robust I guess), so 
> I am not sure that going with Q1-Q0 to save a few pressure degree of 
> freedom is actually worth it.
> Best!
> Bruno
>

Thanks Bruno,
also very appreciated! I have been looking for deal.II implementations of 
Navier-Stokes, 
since especially the stabilizations (not only inf-sup, but SUPG, GLS or 
similar)
are always a bit tricky ... or at least what is considered / may be 
ignored!

I totally agree that saving a few pressure dofs does not pay off, 
especially since I am having 
vector-valued velocities and displacements and only the single 
pressure ...
The motivation was solely coming from the fact that I did not think of 
using FE_Nothing,
otherwise I would not have started trying to use Q1Q0.

Kind regards,
Richard 





Re: [deal.II] Stabilization for Q1Q0 Stokes by Bochev [2006]

2019-09-11 Thread Richard Schussnig
Dear Wolfgang,
Thank you very much for taking the time to answer my question, 
I really appreciate it - such things do not happen too often on the 
internet! ; )
Please find below my response per answer, I hope it is not too unstructured 
that way!

Am Dienstag, 10. September 2019 06:03:27 UTC+2 schrieb Wolfgang Bangerth:
>
> On 9/9/19 1:57 AM, Richard Schussnig wrote: 
> > 
> > FINALLY, MY QUESTIONS: 
> > 
> > Using the Q1Q1, I would in the end (FSI) need to come up with a space 
> made 
> >  from Q1 elements with a discontinuity at the interface - which shall be 
> > realized using different material_id(). - how may I do that other than 
> > using a FE_DGQ space for the pressure and enforce continuity 'manually' 
> > through a giant ConstraintMatrix? 
>
> That's inefficient, of course :-) I assume that your interface is in the 
> interior of the domain? In that case, take a look at step-46, where 
> solution 
> variables only live on certain cells, and are discontinuous at the 
> interface 
> between the two parts of the domain. 
>
>
The interface is indeed in the interior of the domain, resolved of course, 
since I only use
the material_id() to distinguish between solid and fluid.
And the hint is perfect, thanks a lot, I have not looked into step-46 for a 
few months and almost forgot about it.
- I will look into having 2 separate pressure spaces using FE_Nothing, that 
may do the trick (for the Q1Q1 stabilized case).


> > Using the Q1Q0, the main problem is data transfer and 'node searching' 
> in 
> > the parallel case - example: the stabilization matrix from cell 16 has 
> > pressure dof 45 and shares edges or maybe only a single vertex (!) with 
> > cells with pressure dofs 1 2 3 4 5. The cell matrix for the projection 
> from 
> > Q0(dc) to Q1(c) is an area-weighted sum of the pressures on the cells 
> > touching the vertex of the support of the matching bilinear function, 
> > therefore we get a 6x6 local matrix and entries into all 'touching' 
> cells. 
>
> Yes, you'd have to create a map that for each vertex gives you a list of 
> all 
> adjacent cells. I think I recall that there is a function in GridTools for 
> this, though. 


>
> > Since these cells are not only the direct neighbors of the current cell, 
> > things may get complicated quite fast, if we consider the 3d case with 
> > hanging nodes, but on the other hand side, looping in the element loop 
> over 
> > all elements again(!) to check the vertex_index() is extremely slow. 
>
> Yes, you'd reverse this approach by looping over all vertices first, and 
> then 
> in this loop over all adjacent cells. 
>
>
Thanks a ton, I coded that up myself, but there are
GridTools::find_cells_adjacent_to_vertex
&
GridTools::vertex_to_cell_map
that come in handy at this point and are probably written in a more 
efficient way.
Somehow I did not find the functions, sorry for bothering you
(added for people coming after me).
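
A minimal sketch of the vertex-to-adjacent-cells lookup via 
GridTools::vertex_to_cell_map (the mesh here is just a refined unit square 
for illustration):

#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/grid/tria.h>

int main()
{
  using namespace dealii;

  Triangulation<2> triangulation;
  GridGenerator::hyper_cube(triangulation);
  triangulation.refine_global(2);

  // One entry per vertex: the set of active cells touching that vertex,
  // including cells that share only the vertex itself.
  const auto vertex_to_cells = GridTools::vertex_to_cell_map(triangulation);

  const unsigned int v = 0; // some vertex index
  for (const auto &cell : vertex_to_cells[v])
    {
      // e.g. accumulate area-weighted pressure contributions per cell
      (void)cell;
    }
}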
 

>
> > Do you know of any better-fitting stabilizations for the Q1Q0 pair? Or 
> do 
> > you think there are better options around? 
>
> Q1-Q1 is a pretty good method, and not very difficult to implement. I'll 
> note 
> that Q1-Q0 *sounds* like a good idea, but has a very low convergence rate 
> and 
> so will not yield particularly good accuracy if that's what you actually 
> care 
> about. Of course, Q2-Q1 is the standard for good reasons. 
>
> Best 
>   W. 
>
>
>
I was under the misconception that I should have the orders matching, since 
I am interested in
stresses in the fluid & on the interface, but apparently, looking at the 
parameters in the interesting 
case (~4*1e-6 kinematic viscosity for blood [Bazilevs et al. 2010, Küttler et 
al. 2009, Deparis et al. 2016]),
it might be worth a lot to have a good pressure approximation!
So the conclusion is for me now: 
test Q1Q0dc and perhaps also use Q1Q1 with FE_Nothing!

Thanks once again for your thoughts on the topic!

Kind regards,
Richard



[deal.II] Stabilization for Q1Q0 Stokes by Bochev [2006]

2019-09-09 Thread Richard Schussnig
Hi everyone!

I am trying to implement the stabilizations presented in a paper by Bochev 
et al. [2006], 
which you may find here:

https://pdfs.semanticscholar.org/47be/4e317d4dcbbf1b70c781394e49c1dbf7e538.pdf

This one is parameter-free, and they present local projections for both 
Q1Q1 and Q1Q0 element pairs (both of course not fulfilling the LBB condition).

The Q1Q1 is easily implemented, since we only need to adapt the cell 
matrices, 
but this would also be possible with the PSPG stabilization, for example; 
nevertheless, here we do not need a parameter.

More interestingly, the Q1Q0 pair allows the use of discontinuous pressure, 
which is in my case of interest, 
since I want to finally apply this to an ALE-FSI approach, where at the 
interface the pressure is discontinuous.
- And for the PSPG I need the derivatives of the pressure functions, which 
are exactly zero for continuous functions
mapped by affine mappings (which is for quads/hexas of course most likely 
not the case).

FINALLY, MY QUESTIONS:

Using the Q1Q1, I would in the end (FSI) need to come up with a space made 
from Q1 elements 
with a discontinuity at the interface - which shall be realized using 
different material_id(). 
- how may I do that other than using a FE_DGQ space for the pressure and 
enforce continuity 
'manually' through a giant ConstraintMatrix?

Using the Q1Q0, the main problem is data transfer and 'node searching' in 
the parallel case
- example:
the stabilization matrix from cell 16 has pressure dof 45 and shares edges 
or maybe only a single vertex (!)
with cells with pressure dofs 1 2 3 4 5.
The cell matrix for the projection from Q0(dc) to Q1(c) is an area-weighted 
sum of the pressures on 
the cells touching the vertex of the support of the matching bilinear 
function, therefore we get a
6x6 local matrix and entries into all 'touching' cells.
Since these cells are not only the direct neighbors of the current cell, 
things may get complicated quite fast, 
if we consider the 3d case with hanging nodes, but on the other hand, 
looping in the element loop over all elements again(!) to check the 
vertex_index() is extremely slow.
- Is there a fast and efficient way to get the 'touching' cells in a 
triangulation for one vertex?
(of course i do that only once during setup and save a std::map from 
active_cell_index to pressure dof and cell area)

Do you know of any better-fitting stabilizations for the Q1Q0 pair? Or do 
you think there are better options around?

Any help would be greatly appreciated!
Thanks in advance!

Kind regards,
Richard




Re: [deal.II] MUMPS with PETScWrappers::MPI::BlockSparseMatrix

2019-09-09 Thread Richard Schussnig
Hi Konrad!

You can use the parallel direct solver in the Schur complement; for 
orientation, take a look at step-57 (should be Navier-Stokes with a direct 
solver for the A-block, if I'm not mistaken).
However, my inferior C++ knowledge did not allow me to do the factorization 
in the constructor of the 'preconditioner', but you can initialize it 
before creating the preconditioner and pass it as an argument!
This is considerably faster than doing the factorization inside the 
preconditioner, of course, since you would redo it every iteration. 
Nevertheless, this is perfectly fine for testing, to see if it is really 
doing the job!
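
A rough sketch of that idea (class and member names are made up, and the 
exact SparseDirectMUMPS interface may differ between deal.II versions): the 
factorization is set up outside, and the 'preconditioner' only forwards to 
solve() in vmult():

class ABlockInverse
{
public:
  ABlockInverse(PETScWrappers::SparseDirectMUMPS       &solver,
                const PETScWrappers::MPI::SparseMatrix &A)
    : solver(solver)
    , A(A)
  {}

  void vmult(PETScWrappers::MPI::Vector &dst,
             const PETScWrappers::MPI::Vector &src) const
  {
    // The solver object was initialized (and factorized) before this wrapper
    // was created, so -- as described above -- this call should not redo the
    // expensive factorization on every application.
    solver.solve(A, dst, src);
  }

private:
  PETScWrappers::SparseDirectMUMPS       &solver;
  const PETScWrappers::MPI::SparseMatrix &A;
};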

Kind regards, Richard

Am Sonntag, 8. September 2019 15:20:20 UTC+2 schrieb Konrad:
>
> Thank you, David, so I will see if I can maybe still use it in the Schur 
> complement somehow.
>
> Best,
> Konrad
>
> On Friday, September 6, 2019 at 7:46:23 PM UTC+2, David Wells wrote:
>>
>> Hi Konrad,
>>
>> I don't think that it is possible to use MUMPS with a block matrix for 
>> exactly this reason. I think that if you want to use MUMPS you will need to 
>> copy the block matrix into a non-block matrix.
>>
>> Thanks,
>> David
>>
>> On Fri, Sep 6, 2019 at 12:25 PM Konrad  wrote:
>>
>>> Dear deal.ii community,
>>>
>>> is it possible to use MUMPS with 
>>> a PETScWrappers::MPI::BlockSparseMatrix? Don't find anything but I see 
>>> that PETScWrappers::MPI::BlockSparseMatrix does not inherit from 
>>> PETScWrappers::MatrixBase.
>>>
>>> Best,
>>> Konrad
>>>
>>> -- 
>>> The deal.II project is located at http://www.dealii.org/
>>> For mailing list/forum options, see 
>>> https://groups.google.com/d/forum/dealii?hl=en
>>> --- 
>>> You received this message because you are subscribed to the Google 
>>> Groups "deal.II User Group" group.
>>> To unsubscribe from this group and stop receiving emails from it, send 
>>> an email to dea...@googlegroups.com.
>>> To view this discussion on the web visit 
>>> https://groups.google.com/d/msgid/dealii/290c3820-4c09-4515-a2f9-847691627427%40googlegroups.com
>>>  
>>> 
>>> .
>>>
>>



Re: [deal.II] How to set material id with MPI

2019-09-04 Thread Richard Schussnig
Hi Pham!
From your description I do not really get why you are specifically doing 
this, so maybe consider the following:
I assume you are flagging cells' material ids on one locally owned part due 
to some custom condition - let's say some stress or function 
you cannot formulate in the global coordinate system, so that you cannot 
decide on the material id simply using cell->center() or similar strategies.

The workaround I came up with is as follows:
Every cell is only flagged by the owning processor and needs to be 
communicated to the other parts of the 
p::d::Triangulation afterwards, where those elements are ghosts.
-> As a consequence, ghost cells have different material ids depending on 
the side from which they are viewed, 
which is what we want to get rid of.

So firstly, create - in my case I already had such a space in use - the 
simplest possible discontinuous (!) function space,
e.g. with FE_DGQ(degree = 0). 

Then, you loop over the locally owned cells and simply assemble the 
material id - similar to 
assembling e.g. the right-hand side. (Here the discontinuity of the 
function space in use comes into play, since you do not 
want to mix contributions from different elements; or you consider the 
internal nodes for C0, but at least 2nd-order, functions.)
 
After that, you of course compress and convert to a ghosted vector - 
meaning that you now have access to the entries of the vector 
on the ghost cells as well - which now contains your material ids.

So finally, you loop over locally owned AND ghost cells - and set the 
material id on both from the vector you got, 
which is now accessible from the cells you need.
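
A hedged sketch of those steps (inside the usual template over dim; 
triangulation, mpi_communicator and the per-cell material decision are assumed 
to exist already, and the PETSc scalar type is assumed to be real):

FE_DGQ<dim>     fe_mat(0);                 // one constant value per cell
DoFHandler<dim> dof_mat(triangulation);
dof_mat.distribute_dofs(fe_mat);

const IndexSet owned = dof_mat.locally_owned_dofs();
IndexSet       relevant;
DoFTools::extract_locally_relevant_dofs(dof_mat, relevant);

LA::MPI::Vector material_owned;
material_owned.reinit(owned, mpi_communicator);
std::vector<types::global_dof_index> dof_indices(fe_mat.dofs_per_cell); // = 1

for (const auto &cell : dof_mat.active_cell_iterators())
  if (cell->is_locally_owned())
    {
      cell->get_dof_indices(dof_indices);
      material_owned[dof_indices[0]] = cell->material_id();
    }
material_owned.compress(VectorOperation::insert);

LA::MPI::Vector material_ghosted;
material_ghosted.reinit(owned, relevant, mpi_communicator);
material_ghosted = material_owned;   // imports the ghost entries

// Now ghost cells see the same material id as on the owning process.
for (const auto &cell : dof_mat.active_cell_iterators())
  if (cell->is_locally_owned() || cell->is_ghost())
    {
      cell->get_dof_indices(dof_indices);
      const double mat = material_ghosted[dof_indices[0]];
      cell->set_material_id(static_cast<types::material_id>(mat));
    }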

The above approach might not be the fastest one, but if you can reuse some 
space it might not be too bad.
If someone reading this sees any flaw I am currently not aware of, please 
let me know! - I am new to both C++ and dealii ; )

Kind regards & good luck coding that up!
Richard


Am Dienstag, 3. September 2019 04:49:03 UTC+2 schrieb Phạm Ngọc Kiên:
>
> Dear Prof. Wolfgang Bangerth,
> As the file is too large, I send it again in a compressed file in the 
> attachment.
> I am sorry for my mistake.
> Best regards,
> Kien
>
> On Tue, Sep 3, 2019 at 10:50 Phạm Ngọc Kiên <
> ngockie...@gmail.com > wrote:
>
>> Dear Prof. Wolfgang Bangerth,
>> The attachment is my codes and the mesh for loading grid.
>> I think that when I run the codes on a single computer, it might take 
>> longer time than run the codes on cluster.
>>
>> I would like to thank you very much for your great guidance.
>> Best regards,
>> Kien
>>
>> On Fri, Aug 30, 2019 at 12:35 Wolfgang Bangerth <
>> bang...@colostate.edu > wrote:
>>
>>> On 8/29/19 6:31 PM, Phạm Ngọc Kiên wrote:
>>> > 
>>> > When I run the codes in my computer, it takes  a lot of time for p4est 
>>> to load 
>>> > the grid.
>>> > The loading grid  step is more time consuming than solving the system 
>>> of 
>>> > equations with a mesh containing about 100,000 cells.
>>>
>>> It *shouldn't* take that long, but who knows what exactly is happening 
>>> with 
>>> such big meshes. Do you think you can create a small testcase that 
>>> demonstrates this? It should really only contain of the code to read the 
>>> mesh, 
>>> and the file with the mesh itself.
>>>
>>> Best
>>>   W.
>>>
>>> -- 
>>> 
>>> Wolfgang Bangerth  email: bang...@colostate.edu 
>>> 
>>> www: 
>>> http://www.math.colostate.edu/~bangerth/
>>>
>>> -- 
>>> The deal.II project is located at http://www.dealii.org/
>>> For mailing list/forum options, see 
>>> https://groups.google.com/d/forum/dealii?hl=en
>>> --- 
>>> You received this message because you are subscribed to the Google 
>>> Groups "deal.II User Group" group.
>>> To unsubscribe from this group and stop receiving emails from it, send 
>>> an email to dea...@googlegroups.com .
>>> To view this discussion on the web visit 
>>> https://groups.google.com/d/msgid/dealii/246cfed4-41cb-d051-022e-6b9c7f1d8e91%40colostate.edu
>>> .
>>>
>>
