Dear Daniel,

Another possibility would be to use FEValues::get_function_values instead of fetching the previous coefficients and computing the values yourself. This approach also works in parallel, as long as the vector you pass in has ghost entries (i.e., a locally relevant vector as in step-40).
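A minimal sketch of what that looks like inside the assembly loop (the names locally_relevant_solution, prev_values, and n_q_points are assumptions modeled on the code in your message, not required names):

```cpp
// Sketch: evaluate the previous Newton iterate at the quadrature points
// of each cell, instead of extracting the coefficients by hand.
std::vector<double> prev_values (n_q_points);

DoFHandler<3>::active_cell_iterator
  cell = dof_handler.begin_active(),
  endc = dof_handler.end();
for (; cell != endc; ++cell)
  if (cell->is_locally_owned())   // in parallel, skip cells owned elsewhere
    {
      fe_values.reinit (cell);

      // Internally this reads solution(local_dof_indices[k]) and sums
      // coefficient * shape_value at each quadrature point, so it does
      // exactly what your hand-written loop does -- but the vector's
      // own element access is used, which is what makes it work for
      // ghosted parallel vectors as well as for Vector<double>.
      fe_values.get_function_values (locally_relevant_solution,
                                     prev_values);

      // ... use prev_values[q] when assembling the Newton
      //     residual and Jacobian on this cell ...
    }
```

The key point for a step-40-style code is that locally_relevant_solution must be initialized with the locally relevant (ghosted) index set, so that get_function_values never has to touch an entry the current processor does not store.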

Best Regards,
Markus



On 13.06.2012 21:30, Daniel Brauss wrote:
Hi all,

I am programming a problem using the techniques of step 40.
In my serial deal.II code I run a Newton iteration to converge
to a steady-state solution, using the previous solution values
corresponding to the basis functions of each cell inside the
assembly's loop over cells.  To obtain the previous solution
values (i.e., the previous coefficients), I use the solution
computed in the previous iteration (zero on the first iteration),
stored in the public vector

    Vector<double> solution

and the local-to-global indices

    std::vector<unsigned int> local_dof_indices (dofs_per_cell);

in the cell for loop

  DoFHandler<3>::active_cell_iterator
    cell = dof_handler.begin_active(),
    endc = dof_handler.end();
  for (; cell!=endc; ++cell)
  {
    fe_values.reinit (cell);

    cell->get_dof_indices (local_dof_indices);

    for (unsigned int k=0; k < dofs_per_cell; k++)
      prev_coeffs[k] = solution(local_dof_indices[k]);

    ...

I then form Newton's method.

Would a similar call work for the parallel code with the
distributed mesh as in step-40, using the per-processor vector

PETScWrappers::MPI::Vector   locally_relevant_solution;

and the local-to-global indices

std::vector<unsigned int> local_dof_indices (dofs_per_cell);

in the same manner?  The indexing would be different,
since no processor holds the entire solution.
I was wondering how this might work.


Thanks,
Dan


_______________________________________________
dealii mailing list http://poisson.dealii.org/mailman/listinfo/dealii