Dear Thomas,
I found the problem. Your version works exactly as described in your
paper, and even faster on my Intel i7, namely around 1400 s. The
time-consuming part is the computation of my B-operator.
The B-operator computation is implemented inside assemble_system(bool
residual_only):
for (; cell != endc; ++cell)
  if (cell->is_locally_owned())
    {
      fe_values.reinit(cell);
      local_matrix = 0;
      local_rhs    = 0;

      const unsigned int cell_index = cell->active_cell_index();
      ...

      // COMPUTE: B-operator
      for (unsigned int q = 0; q < n_q_points; ++q) // loop over quadrature points (GPs)
        {
          b_operators[cell_index][q] =
            Tensors::get_b_operator(fe_values, dofs_per_cell, q);
        }
      B = b_operators;

      // Old Newton iteration values
      fe_values.get_function_values(rel_solution, old_solution_values);
      fe_values.get_function_gradients(rel_solution, old_solution_grads);
      ...
    }
The variable B is a history variable I created to store the B-operator
history; it is initialized in the main class as:
std::vector<std::vector<FullMatrix<double>>> B; // temporary cell storage (inefficient!)
The function get_b_operator(), which is called within the quadrature-point
loop, is implemented in the Tensors namespace:
template <int dim>
FullMatrix<double>
get_b_operator(const FEValues<dim> &fe_values,
               const unsigned int   dofs_per_cell,
               const unsigned int   q)
{
  FullMatrix<double> tmp(dim, GeometryInfo<dim>::vertices_per_cell);

  // Remark: For vector-valued problems each column corresponds to one
  // node of the solution variable, hence here 3 DoFs per node
  // (dofs_per_cell = 12)!
  for (unsigned int i = 0; i < dofs_per_cell; i += 3)
    {
      const unsigned int index = i / 3;

      // COMPUTE: B-operator (Remark: this version has to be extended for 3D!)
      tmp[0][index] = fe_values.shape_grad_component(i, q, 0)[0];
      tmp[1][index] = fe_values.shape_grad_component(i, q, 0)[1];
    }
  return tmp;
}
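To illustrate the 3D extension mentioned in the comment, here is a minimal, library-free sketch of the same column-packing logic: with 3 DoFs per node and 8 vertices per hexahedral cell, dofs_per_cell = 24 and the matrix gains a third row for the z-derivative. The names Matrix, grads, and get_b_operator_3d are stand-ins for illustration only; the real code would use deal.II's FullMatrix and fe_values.shape_grad_component(i, q, 0)[d] in place of grads[i][d].

```cpp
#include <array>
#include <vector>

// Stand-in for FullMatrix<double>: a dense (n_rows x n_cols) matrix.
using Matrix = std::vector<std::vector<double>>;

// Same packing as get_b_operator(), extended to 3D: every third DoF i
// maps to column i/3, and each row holds one spatial derivative.
Matrix get_b_operator_3d(const std::vector<std::array<double, 3>> &grads,
                         const unsigned int dofs_per_cell)
{
  const unsigned int n_cols = dofs_per_cell / 3; // one column per node
  Matrix tmp(3, std::vector<double>(n_cols, 0.0));

  for (unsigned int i = 0; i < dofs_per_cell; i += 3)
    {
      const unsigned int index = i / 3;
      tmp[0][index] = grads[i][0]; // d/dx
      tmp[1][index] = grads[i][1]; // d/dy
      tmp[2][index] = grads[i][2]; // d/dz  <-- the 3D extension
    }
  return tmp;
}
```

With dofs_per_cell = 24 this packs the 24 shape-function gradients into a 3 x 8 matrix, one column per vertex.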
I assume (as I already noted in the comments in the code above) that this
kind of per-cell matrix storage is quite slow and memory-inefficient. If I
remember correctly, Jean-Paul once suggested using the new CellDataStorage
class. Isn't there a nice and elegant way to store data at each Gauss
point of each cell without losing computational time? Or can I pass the
B-operator to another function somehow after I have solved the system,
similar to your predictor-corrector scheme? It seems you pass data
efficiently from one function to another even after solving the system.
Do you also pass cell data from a function called before solving to a
function called after solving?
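For what it's worth, here is a minimal, library-free sketch of the storage pattern I understand CellDataStorage to provide: allocate one history object per quadrature point of each cell once, then hand out shared references during both assembly and post-processing instead of copying whole matrices every iteration. QPointHistory and QuadratureStorage are hypothetical names for illustration, not the deal.II API.

```cpp
#include <memory>
#include <vector>

// Hypothetical per-quadrature-point history object (stand-in name).
struct QPointHistory
{
  std::vector<std::vector<double>> b_operator; // dim x n_cols
};

// Hypothetical stand-in for the pattern behind deal.II's CellDataStorage:
// one shared_ptr<QPointHistory> per quadrature point of each cell,
// allocated once up front.
class QuadratureStorage
{
public:
  void initialize(const unsigned int n_cells, const unsigned int n_q_points)
  {
    data.resize(n_cells);
    for (auto &cell_data : data)
      {
        cell_data.clear();
        for (unsigned int q = 0; q < n_q_points; ++q)
          cell_data.push_back(std::make_shared<QPointHistory>());
      }
  }

  // Returns the per-point objects of one cell, so the assembly loop and
  // the post-solve code both see (and modify) the same data in place.
  std::vector<std::shared_ptr<QPointHistory>> &
  get_data(const unsigned int cell_index)
  {
    return data[cell_index];
  }

private:
  std::vector<std::vector<std::shared_ptr<QPointHistory>>> data;
};
```

With such a layout the B-operator would be written once per Newton step during assembly and read back after the solve via get_data(cell_index), so nothing has to be copied between the pre-solve and post-solve functions.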
Best,
Seyed Ali
--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see
https://groups.google.com/d/forum/dealii?hl=en
---
You received this message because you are subscribed to the Google Groups
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
For more options, visit https://groups.google.com/d/optout.