Hi Wolfgang,
 

> Precisely what do you see? 
>>
>
> Chaotic distribution of values in the solution, several orders of 
> magnitude higher than what I expect.
>  
>
>> I haven't heard of this, but that doesn't mean anything. How do you 
>> actually derive the compatibility condition in the mixed case? 
>>
>> Let me call the primary variable 'p'. Assume you have Neumann boundary 
>> conditions 
>>    dp/dn = g 
>> on the entire boundary Gamma. In the non-mixed, standard case, after 
>> multiplying the equation and integrating by parts, you have 
>>
>>    (nabla q, nabla p)_Omega = (q,f)_Omega + (q,g)_Gamma 
>>
>> for all test functions q. You get the compatibility condition by testing 
>> with q=1. 
>>
>> But in the mixed formulation, your weak formulation is 
>>
>>    (v,u)_Omega - (div v, p)_Omega - (q, div u)_Omega = -(q,f)_Omega 
>>
>> As you correctly mention, 'g' doesn't appear in the weak formulation. 
>> I'm pretty sure the compatibility condition is still true (of course!) 
>> but I'm not sure how to derive it in the mixed case off the top of my 
>> head. Maybe going through the steps would help point out where the issue 
>> lies. 
>>
>  
The compatibility condition is derived by integrating the divergence of 
the velocity (to use Darcy terminology) over the domain and applying the 
divergence theorem; then you see \int_{del Omega} u * n dS = \int_Omega f dx.
 
But I guess the weird result pops up because I simply forgot to take care 
of the kernel of the operator, which contains the constants in p. :-/ I am 
trying to fix this with a mean value constraint on p (called u in my code), 
but it seems I am doing something wrong. I am following step-11, but in my 
case I have a BlockDynamicSparsityPattern, and something in my code is off 
(probably something I missed in the documentation). My program compiles but 
crashes. More details:

I set up the constraints in one function like this (in 2D, for an 
RT_0-DGQ_0 pairing; what I call u is, in Darcy terms, the pressure):

<code>
// First clear any old content.
constraints_v.at(n_basis).clear();

// Hanging node constraints.
DoFTools::make_hanging_node_constraints(dof_handler,
                                        constraints_v.at(n_basis));

// Class to evaluate a shape function of any finite element on any cell
// at any point, derived from the Function<dim> class.
ShapeFun::ShapeFunctionVector<2> std_shape_function(fe.base_element(0),
                                                    global_cell,
                                                    /*verbose=*/false);
std_shape_function.set_shape_fun_index(n_basis);

for (unsigned int i = 0; i < 4; ++i)
  {
    // Since we are in H(div) we need to project the BCs div-conforming.
    VectorTools::project_boundary_values_div_conforming(
      dof_handler,
      /*first vector component*/ 0,
      std_shape_function,
      /*boundary id*/ i,
      constraints_v.at(n_basis));
  }

// Now create the mean value constraint.
FEValuesExtractors::Scalar concentration(2);
ComponentMask concentration_mask = fe.component_mask(concentration);

std::vector<bool> concentration_dofs(dof_handler.n_dofs(), false);
DoFTools::extract_dofs(dof_handler,
                       concentration_mask,
                       concentration_dofs);

const unsigned int first_dof_u
  = std::distance(concentration_dofs.begin(),
                  std::find(concentration_dofs.begin(),
                            concentration_dofs.end(),
                            true));
constraints_v.at(n_basis).add_line(first_dof_u);
for (unsigned int i = first_dof_u + 1; i < dof_handler.n_dofs(); ++i)
  if (concentration_dofs[i] == true)
    constraints_v.at(n_basis).add_entry(first_dof_u, i, -1);

constraints_v.at(n_basis).close();

// Then take care of the sparsity pattern.
std::vector<types::global_dof_index> dofs_per_component(3);
DoFTools::count_dofs_per_component(dof_handler, dofs_per_component);
const unsigned int n_sigma = dofs_per_component[0],
                   n_u     = dofs_per_component[2];

{
  BlockDynamicSparsityPattern dsp(2, 2);
  dsp.block(0, 0).reinit(n_sigma, n_sigma);
  dsp.block(1, 0).reinit(n_u, n_sigma);
  dsp.block(0, 1).reinit(n_sigma, n_u);
  dsp.block(1, 1).reinit(n_u, n_u);
  dsp.collect_sizes();

  DoFTools::make_sparsity_pattern(dof_handler,
                                  dsp,
                                  constraints_v.at(n_basis),
                                  /*keep_constrained_dofs=*/false);
  constraints_v.at(n_basis).condense(dsp);

  sparsity_pattern_v.at(n_basis).copy_from(dsp);
} // Now the memory for dsp is released.

....

</code>

The exception pops up in the assembly, when distributing the element 
matrix and element rhs entries together with the constraints:

<code>
cell->get_dof_indices(local_dof_indices_v.at(n_basis));
constraints_v.at(n_basis).distribute_local_to_global(
  local_matrix,                 // <<<----- Here the exception is thrown
  local_rhs_v.at(n_basis),
  local_dof_indices_v.at(n_basis),
  system_matrix_v.at(n_basis),
  system_rhs_v.at(n_basis),
  /*use inhomogeneities for rhs*/ true);
</code>

The exception says:

The violated condition was:  
   matrix_values->column() == column 
Additional information:  
   You are trying to access the matrix entry with index <2,3>, but this 
entry does not exist in the sparsity pattern of this matrix. 

The most common cause for this problem is that you used a method to build 
the sparsity pattern that did not (completely) take into account all of the 
entries you will later try to write into. An example would be building a 
sparsity pattern that does not include the entries you will write into due 
to constraints on degrees of freedom such as hanging nodes or periodic 
boundary conditions. In such cases, building the sparsity pattern will 
succeed, but you will get errors such as the current one at one point or 
other when trying to write into the entries of the matrix.

It seems I am not building the constraints correctly, but I can't figure 
out where it goes wrong. Sorry, that was a lot of (hopefully sufficiently 
informative) code; probably it is just a minor thing.

Best,
Konrad

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.