I tried to reduce it as much as possible while still keeping it runnable; the 
result is attached. It still has ~600 lines, but I am not sure whether I 
should remove things like the output function. Furthermore, the NOX logic 
takes up quite a lot of lines, and I would like to keep the program as usable 
as possible while removing unnecessary lines. Is the example usable in this 
form?

On Friday, April 26, 2019 at 05:12:39 UTC+2, Wolfgang Bangerth wrote:
>
> On 4/25/19 10:13 AM, 'Maxi Miller' via deal.II User Group wrote: 
> > After running some tests, I ended up reducing the problem to the transfer 
> > to and from the Epetra_Vectors. I have to write an interface to the model 
> > (according to the instructions), as shown in the code above. There I have 
> > Epetra_Vectors created by my interface, with a size of 
> > locally_relevant_dofs.size(), and TrilinosWrappers::MPI::Vectors. 
> > Transfer from the Epetra_Vectors to the MPI::Vectors works fine, but the 
> > way back does not (the MPI::Vectors are larger than the Epetra_Vectors). 
> > Are there any hints on how I could still fit the data from the MPI::Vector 
> > into the Epetra_Vector? Or should I rather ask this on the Trilinos 
> > mailing list? 
>
> Good question. I think it would probably be very useful to have a small 
> testcase others could look at. The program you have attached is still 
> 1,300 lines, which is far too much for anyone to understand. But since you 
> have an idea of what the problem is, do you think you could come up with a 
> small testcase that illustrates exactly the issue you have? It doesn't 
> have to do anything useful at all -- in your case, I think all that's 
> necessary is to create a vector, convert it as you describe above, and 
> then output the sizes to show that they are not the same. Run this on two 
> processors, and you should have something that could be done in 50 or 100 
> lines, and I think that might be small enough for someone who doesn't know 
> your code to understand what is necessary. 
>
> I've built these sorts of testcases in the past either from scratch, or by 
> repeatedly removing things from a program: first (i) everything that runs 
> after the point where the problem happens, and then (ii) everything else 
> that is not necessary. 
>
> Best 
>   W. 
>
> -- 
> ------------------------------------------------------------------------ 
> Wolfgang Bangerth          email:                 [email protected] 
>                             www: http://www.math.colostate.edu/~bangerth/ 
>
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
//Nox include files
#include <NOX.H>
#include <Teuchos_StandardCatchMacros.hpp>

#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/function.h>
#include <deal.II/base/logstream.h>
#include <deal.II/base/convergence_table.h>
#include <deal.II/base/utilities.h>
#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/index_set.h>
#include <deal.II/base/timer.h>
#include <deal.II/base/work_stream.h>

#include <deal.II/lac/vector.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/sparse_direct.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/precondition.h>
#include <deal.II/lac/solver_gmres.h>
#include <deal.II/lac/trilinos_vector.h>
#include <deal.II/lac/sparsity_tools.h>
#include <deal.II/lac/generic_linear_algebra.h>

#include <deal.II/grid/tria.h>

#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/tria_accessor.h>
#include <deal.II/grid/tria_iterator.h>

#include <deal.II/dofs/dof_accessor.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/dofs/dof_handler.h>

#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_values.h>

#include <deal.II/numerics/vector_tools.h>
#include <deal.II/numerics/matrix_tools.h>
#include <deal.II/numerics/data_out.h>

#include <deal.II/meshworker/mesh_loop.h>

#include <deal.II/distributed/tria.h>
#include <deal.II/distributed/grid_refinement.h>

#include <deal.II/meshworker/scratch_data.h>
#include <deal.II/meshworker/copy_data.h>

// NOX Objects
#include "NOX_Epetra.H"

// Trilinos Objects
#ifdef HAVE_MPI
#include "Epetra_MpiComm.h"
#else
#include "Epetra_SerialComm.h"
#endif
#include "Epetra_Vector.h"
#include "Epetra_Operator.h"
#include "Epetra_RowMatrix.h"
#include "NOX_Epetra_Interface_Required.H" // base class
#include "NOX_Epetra_Interface_Jacobian.H" // base class
#include "NOX_Epetra_Interface_Preconditioner.H" // base class
#include "Epetra_CrsMatrix.h"
#include "Epetra_Map.h"
#include "Epetra_LinearProblem.h"
#include "AztecOO.h"

// User's application specific files (their contents are inlined below
// so that this testcase is a single file)
//#include "problem_interface.h" // Interface file to NOX
//#include "finite_element_problem.h"

// Required for reading and writing parameter lists from xml format
// Configure Trilinos with --enable-teuchos-extended
#ifdef HAVE_TEUCHOS_EXTENDED
#include "Teuchos_XMLParameterListHelpers.hpp"
#include <sstream>
#endif

// This is C++ ...
#include <fstream>
#include <iostream>
#include <stdexcept>

using namespace dealii;

template <int dim>
class BoundaryValues : public dealii::Function<dim>
{
public:
    BoundaryValues(const size_t n_components)
        : dealii::Function<dim>(n_components), n_components(n_components)
    {}

    virtual double value (const dealii::Point<dim>   &p,
                          const unsigned int  component = 0) const override;

    virtual void vector_value(const dealii::Point<dim> &p, dealii::Vector<double> &value) const override;
private:
    const size_t n_components;
};


template <int dim>
double BoundaryValues<dim>::value (const dealii::Point<dim> &p,
                                   const unsigned int) const
{
    const double x = p[0];
    const double y = p[1];
    return sin(M_PI * x) * sin(M_PI * y);
}

template <int dim>
void BoundaryValues<dim>::vector_value(const dealii::Point<dim> &p, dealii::Vector<double> &value) const
{
    for(size_t i = 0; i < value.size(); ++i)
        value[i] = BoundaryValues<dim>::value(p, i);
}

template <int dim>
class ProblemInterface : public NOX::Epetra::Interface::Required,
        public NOX::Epetra::Interface::Preconditioner,
        public NOX::Epetra::Interface::Jacobian
{
public:
    ProblemInterface(std::function<void(const TrilinosWrappers::MPI::Vector&, TrilinosWrappers::MPI::Vector&)> residual_function,
                     const MPI_Comm &mpi_communicator,
                     const IndexSet &locally_owned_dofs,
                     const IndexSet &locally_relevant_dofs) :
        residual_function(residual_function),
        mpi_communicator(mpi_communicator),
        locally_owned_dofs(locally_owned_dofs),
        locally_relevant_dofs(locally_relevant_dofs)
    {}

    ~ProblemInterface() {}

    bool computeF(const Epetra_Vector &x, Epetra_Vector &FVec, NOX::Epetra::Interface::Required::FillType)
    {
        IndexSet local_index_set(FVec.Map());

        // Note: reinit()ing with locally_relevant_dofs partitions the vectors
        // over the owned *plus* ghost entries, so their size already differs
        // here from the purely owned-size Epetra_Vectors.
        f_vec.reinit(locally_relevant_dofs, mpi_communicator);
        residual_vec.reinit(locally_relevant_dofs, mpi_communicator);

        std::cout << "F_vec: " << f_vec.size() << '\t' << x.MyLength() << '\n';

        Epetra_Vector f_epetra_vec = Epetra_Vector(View, f_vec.trilinos_vector(), 0);

        residual_vec = 0.;

        std::cout << "f_epetra_vec: " << f_epetra_vec.MyLength() << '\n';

        f_epetra_vec = x;

        std::cout << "residual_vec: " << residual_vec.size() << '\n';

        residual_function(f_vec, residual_vec);

        std::cout << "\n\nFVec: " << FVec.MyLength() << "\n\n";

        for(size_t i = 0; i < local_index_set.n_elements(); ++i)
            std::cout << local_index_set.is_element(i) << '\t';
        std::cout << "\n\nLocal_index_set: " << local_index_set.size() << '\t' << locally_owned_dofs.n_elements() << '\t' << '\n';
        std::cout << "\n\n\n\n";
        for(auto index : residual_vec.locally_owned_elements())
            FVec[index] = residual_vec[index];
        for(int i = 0; i < FVec.MyLength(); ++i)
            std::cout << FVec[i] << '\t';
        std::cout << "\n\n";
        std::cout << "\n\n";
        return true;
    }

    bool computeJacobian(const Epetra_Vector &x, Epetra_Operator &Jac)
    {
        Epetra_CrsMatrix* Jacobian = dynamic_cast<Epetra_CrsMatrix*>(&Jac);
        if (Jacobian == nullptr) {
            std::cout << "ERROR: Problem_Interface::computeJacobian() - The supplied "
                      << "Epetra_Operator is NOT an Epetra_CrsMatrix!" << std::endl;
            throw std::runtime_error("computeJacobian: unexpected operator type");
        }
        (void) x;
        return false;
    }

    bool computePrecMatrix(const Epetra_Vector &x, Epetra_Operator &M)
    {
        Epetra_RowMatrix* precMatrix = dynamic_cast<Epetra_RowMatrix*>(&M);
        if (precMatrix == nullptr) {
            std::cout << "ERROR: Problem_Interface::computePrecMatrix() - The supplied "
                      << "Epetra_Operator is NOT an Epetra_RowMatrix!" << std::endl;
            throw std::runtime_error("computePrecMatrix: unexpected operator type");
        }
        (void) x;
        return false;
    }

    bool computePreconditioner(const Epetra_Vector& x, Epetra_Operator& M,
                               Teuchos::ParameterList* precParams = 0)
    {
        (void) x;
        (void) M;
        (void) precParams;
        std::cout << "ERROR: Problem_Interface::computePreconditioner() - Use an explicit Jacobian only for this test problem!" << std::endl;
        throw std::runtime_error("computePreconditioner: not implemented");
    }

private:
    std::function<void(const TrilinosWrappers::MPI::Vector&, TrilinosWrappers::MPI::Vector&)> residual_function;
    Epetra_Vector *rhs;

    MPI_Comm mpi_communicator;
    IndexSet locally_owned_dofs, locally_relevant_dofs;

    TrilinosWrappers::MPI::Vector f_vec, residual_vec;
};

template <int dim>
class Step5
{
public:
    Step5();
    ~Step5();
    void run();

private:
    void setup_system();
    void assemble_system();
    void calculate_residual(const TrilinosWrappers::MPI::Vector &u, TrilinosWrappers::MPI::Vector &residual);
    void solve_with_NOX();
    void process_solution();
    void output_results();

    void assemble_rhs(double &return_value,
                      const double factor,
                      const double &val_u,
                      const Tensor<1, dim> &grad_u,
                      const Point<dim> p,
                      const double &fe_values_value,
                      const Tensor<1, dim> &fe_gradient_value,
                      const double JxW_value);

    MPI_Comm mpi_communicator;

    parallel::distributed::Triangulation<dim> triangulation;
    DoFHandler<dim>    dof_handler;
    FE_Q<dim>          fe;

    IndexSet locally_owned_dofs;
    IndexSet locally_relevant_dofs;

    SparsityPattern      sparsity_pattern;
    TrilinosWrappers::SparseMatrix system_matrix;
    TrilinosWrappers::MPI::Vector locally_relevant_solution;
    TrilinosWrappers::MPI::Vector system_rhs;

    TrilinosWrappers::MPI::Vector evaluation_point;

    AffineConstraints<double> hanging_node_constraints;

    ConvergenceTable convergence_table;

    ConditionalOStream pcout;
    TimerOutput computing_timer;
};

template <int dim>
double coefficient(const Point<dim> &p)
{
    return -2 * M_PI * M_PI * sin(M_PI * p[0]) * sin(M_PI * p[1]);
}

template <int dim>
Step5<dim>::Step5()
    : mpi_communicator(MPI_COMM_WORLD),
      triangulation(mpi_communicator,
                    typename Triangulation<dim>::MeshSmoothing(Triangulation<dim>::smoothing_on_refinement | Triangulation<dim>::smoothing_on_coarsening)),
      dof_handler(triangulation),
      fe(2),
      pcout(std::cout, (Utilities::MPI::this_mpi_process(mpi_communicator) == 0)),
      computing_timer(mpi_communicator, pcout, TimerOutput::summary, TimerOutput::wall_times)
{}

template <int dim>
Step5<dim>::~Step5<dim>()
{
    dof_handler.clear();
}

template <int dim>
void Step5<dim>::setup_system()
{
    TimerOutput::Scope t(computing_timer, "Setup system");
    dof_handler.distribute_dofs(fe);

    locally_owned_dofs = dof_handler.locally_owned_dofs();
    DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);

    locally_relevant_solution.reinit(locally_owned_dofs, locally_relevant_dofs, mpi_communicator);
    evaluation_point.reinit(locally_owned_dofs, locally_relevant_dofs, mpi_communicator);

    system_rhs.reinit(locally_owned_dofs, mpi_communicator);

    hanging_node_constraints.clear();
    hanging_node_constraints.reinit(locally_relevant_dofs);

    VectorTools::interpolate_boundary_values(dof_handler,
                                             0,
                                             BoundaryValues<dim>(1),
                                             hanging_node_constraints);

    hanging_node_constraints.close();

    pcout << "   Number of degrees of freedom: " << dof_handler.n_dofs()
          << std::endl;

    DynamicSparsityPattern dsp(locally_relevant_dofs);
    DoFTools::make_sparsity_pattern(dof_handler, dsp, hanging_node_constraints, false);
    SparsityTools::distribute_sparsity_pattern(dsp,
                                               dof_handler.n_locally_owned_dofs_per_processor(),
                                               mpi_communicator,
                                               locally_relevant_dofs);
    system_matrix.reinit(locally_owned_dofs,
                         locally_owned_dofs,
                         dsp,
                         mpi_communicator);

}

template <int dim>
void Step5<dim>::assemble_rhs(double &return_value,
                              const double factor,
                              const double &val_u,
                              const Tensor<1, dim> &grad_u,
                              const Point<dim> p,
                              const double &fe_values_value,
                              const Tensor<1, dim> &fe_gradient_value,
                              const double JxW_value)
{
    (void) val_u;
    return_value += factor
            * (coefficient(p)
               * fe_values_value
               + grad_u
               * fe_gradient_value
               )
            * JxW_value;
}

template <int dim>
void Step5<dim>::calculate_residual(const TrilinosWrappers::MPI::Vector &u, TrilinosWrappers::MPI::Vector &residual)
{
    QGauss<dim> quadrature_formula(fe.degree + 1);
    QGauss<dim - 1> face_quad(fe.degree + 1);

    UpdateFlags cell_flags = update_values | update_gradients | update_quadrature_points | update_JxW_values;

    FEValues<dim> fe_values(fe, quadrature_formula, cell_flags);

    const unsigned int dofs_per_cell = fe.dofs_per_cell;

    residual.reinit(locally_owned_dofs, mpi_communicator);

    std::vector<Tensor<1, dim>> grad_u(quadrature_formula.size());
    std::vector<double> val_u(quadrature_formula.size());

    TrilinosWrappers::MPI::Vector local_evaluation_point = u;

    Vector<double> cell_rhs(dofs_per_cell);
    std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);

    const unsigned int n_q_points    = quadrature_formula.size();
    for (const auto &cell : dof_handler.active_cell_iterators())
    {
        if(cell->is_locally_owned())
        {
            cell_rhs    = 0.;
            fe_values.reinit(cell);

            fe_values.get_function_values(local_evaluation_point, val_u);
            fe_values.get_function_gradients(local_evaluation_point, grad_u);

            for (unsigned int q = 0; q < n_q_points; ++q)
            {
                for (unsigned int i = 0; i < dofs_per_cell; ++i)
                {
                    assemble_rhs(cell_rhs(i),
                                 -1,
                                 val_u[q],
                                 grad_u[q],
                                 fe_values.quadrature_point(q),
                                 fe_values.shape_value(i, q),
                                 fe_values.shape_grad(i, q),
                                 fe_values.JxW(q));
                }
            }

            cell->get_dof_indices(local_dof_indices);
            hanging_node_constraints.distribute_local_to_global(cell_rhs,
                                                                local_dof_indices,
                                                                residual);
        }
    }
    residual.compress(VectorOperation::add);
}

template <int dim>
void Step5<dim>::solve_with_NOX()
{
    TimerOutput::Scope t(computing_timer, "Solve NOX system");

    const double offset = 1;

    locally_relevant_solution = offset;

    TrilinosWrappers::MPI::Vector offset_vector = locally_relevant_solution;
    //return;
    Epetra_Vector local_solution(Copy, locally_relevant_solution.trilinos_vector(), 0);

    Teuchos::RCP<Teuchos::ParameterList> nlParamsPtr =
            Teuchos::rcp(new Teuchos::ParameterList);
    Teuchos::ParameterList& nlParams = *nlParamsPtr.get();

    // Set the nonlinear solver method
    nlParams.set("Nonlinear Solver", "Line Search Based");
    //nlParams.set("Nonlinear Solver", "Trust Region Based");

    // Set the printing parameters in the "Printing" sublist
    Teuchos::ParameterList& printParams = nlParams.sublist("Printing");
    printParams.set("Output Precision", 3);
    printParams.set("Output Processor", 0);
    printParams.set("Output Information",
                    //NOX::Utils::OuterIteration +
                    //                        NOX::Utils::OuterIterationStatusTest +
                    //NOX::Utils::InnerIteration +
                    //                        NOX::Utils::Parameters +
                    //                        NOX::Utils::Details +
                    NOX::Utils::Warning);

    // Create printing utilities
    NOX::Utils utils(printParams);

    // Sublist for line search
    Teuchos::ParameterList& searchParams = nlParams.sublist("Line Search");
    searchParams.set("Method", "Full Step");
    //searchParams.set("Method", "Interval Halving");
    //searchParams.set("Method", "Polynomial");
    //searchParams.set("Method", "NonlinearCG");
    //searchParams.set("Method", "Quadratic");
    //searchParams.set("Method", "More'-Thuente");

    // Sublist for direction
    Teuchos::ParameterList& dirParams = nlParams.sublist("Direction");
    //  dirParams.set("Method", "Modified-Newton");
    //  Teuchos::ParameterList& newtonParams = dirParams.sublist("Modified-Newton");
    //    newtonParams.set("Max Age of Jacobian", 2);
    dirParams.set("Method", "Newton");
    Teuchos::ParameterList& newtonParams = dirParams.sublist("Newton");
    newtonParams.set("Forcing Term Method", "Constant");
    //newtonParams.set("Forcing Term Method", "Type 1");
    //newtonParams.set("Forcing Term Method", "Type 2");
    //newtonParams.set("Forcing Term Minimum Tolerance", 1.0e-4);
    //newtonParams.set("Forcing Term Maximum Tolerance", 0.1);
    //dirParams.set("Method", "Steepest Descent");
    //Teuchos::ParameterList& sdParams = dirParams.sublist("Steepest Descent");
    //sdParams.set("Scaling Type", "None");
    //sdParams.set("Scaling Type", "2-Norm");
    //sdParams.set("Scaling Type", "Quadratic Model Min");
    //dirParams.set("Method", "NonlinearCG");
    //Teuchos::ParameterList& nlcgParams = dirParams.sublist("Nonlinear CG");
    //nlcgParams.set("Restart Frequency", 2000);
    //nlcgParams.set("Precondition", "On");
    //nlcgParams.set("Orthogonalize", "Polak-Ribiere");
    //nlcgParams.set("Orthogonalize", "Fletcher-Reeves");

    // Sublist for linear solver for the Newton method
    Teuchos::ParameterList& lsParams = newtonParams.sublist("Linear Solver");
    lsParams.set("Aztec Solver", "GMRES");
    lsParams.set("Max Iterations", 800);
    lsParams.set("Tolerance", 1e-4);
    lsParams.set("Preconditioner", "None");
    //lsParams.set("Preconditioner", "Ifpack");
    lsParams.set("Max Age Of Prec", 5);

    computing_timer.enter_subsection("Creating interface for NOX");
    Teuchos::RCP<ProblemInterface<dim>> interface = Teuchos::rcp(new ProblemInterface<dim>(std::bind(&Step5::calculate_residual,
                                                                                                     this,
                                                                                                     std::placeholders::_1,
                                                                                                     std::placeholders::_2),
                                                                                           mpi_communicator,
                                                                                           locally_owned_dofs,
                                                                                           locally_relevant_dofs));
    computing_timer.leave_subsection();
    computing_timer.enter_subsection("Create linear system for NOX");

    Teuchos::RCP<NOX::Epetra::MatrixFree> MF =
            Teuchos::rcp(new NOX::Epetra::MatrixFree(printParams, interface, local_solution));
    //    Teuchos::RCP<NOX::Epetra::FiniteDifference> FD =
    //            Teuchos::rcp(new NOX::Epetra::FiniteDifference(printParams, interface,
    //                                                           local_solution));

    computing_timer.leave_subsection();
    computing_timer.enter_subsection("Create linear system for NOX, part II");
    // Create the linear system
    Teuchos::RCP<NOX::Epetra::Interface::Required> iReq = interface;
    Teuchos::RCP<NOX::Epetra::Interface::Jacobian> iJac = MF;
    Teuchos::RCP<NOX::Epetra::Interface::Preconditioner> iPrec =
            interface;
    Teuchos::RCP<NOX::Epetra::LinearSystemAztecOO> linSys =
            Teuchos::rcp(new NOX::Epetra::LinearSystemAztecOO(printParams, lsParams,
                                                              iReq, iJac, MF,// iPrec, FD,
                                                              local_solution));
    computing_timer.leave_subsection();

    // Create the Group
    Teuchos::RCP<NOX::Epetra::Group> grp =
            Teuchos::rcp(new NOX::Epetra::Group(printParams, iReq, local_solution,
                                                linSys));

    computing_timer.enter_subsection("Create NOX convergence tests");
    // Create the convergence tests
    Teuchos::RCP<NOX::StatusTest::NormF> absresid =
            Teuchos::rcp(new NOX::StatusTest::NormF(1.0e-8));
    Teuchos::RCP<NOX::StatusTest::NormF> relresid =
            Teuchos::rcp(new NOX::StatusTest::NormF(*grp.get(), 1.0e-2));
    Teuchos::RCP<NOX::StatusTest::NormUpdate> update =
            Teuchos::rcp(new NOX::StatusTest::NormUpdate(1.0e-5));
    Teuchos::RCP<NOX::StatusTest::NormWRMS> wrms =
            Teuchos::rcp(new NOX::StatusTest::NormWRMS(1.0e-2, 1.0e-8));
    Teuchos::RCP<NOX::StatusTest::Combo> converged =
            Teuchos::rcp(new NOX::StatusTest::Combo(NOX::StatusTest::Combo::AND));
    converged->addStatusTest(absresid);
    converged->addStatusTest(relresid);
    converged->addStatusTest(wrms);
    converged->addStatusTest(update);
    Teuchos::RCP<NOX::StatusTest::MaxIters> maxiters =
            Teuchos::rcp(new NOX::StatusTest::MaxIters(20));
    Teuchos::RCP<NOX::StatusTest::FiniteValue> fv =
            Teuchos::rcp(new NOX::StatusTest::FiniteValue);
    Teuchos::RCP<NOX::StatusTest::Combo> combo =
            Teuchos::rcp(new NOX::StatusTest::Combo(NOX::StatusTest::Combo::OR));
    combo->addStatusTest(fv);
    combo->addStatusTest(converged);
    combo->addStatusTest(maxiters);

    Teuchos::RCP<Teuchos::ParameterList> finalParamsPtr = nlParamsPtr;

    // Create the method
    computing_timer.leave_subsection();
    Teuchos::RCP<NOX::Solver::Generic> solver =
            NOX::Solver::buildSolver(grp, combo, finalParamsPtr);
    NOX::StatusTest::StatusType status;
    {
        TimerOutput::Scope t(computing_timer, "NOX solver");
        status = solver->solve();
    }

    if (status == NOX::StatusTest::Converged)
        utils.out() << "Test Passed!" << std::endl;
    else {
        utils.out() << "Nonlinear solver failed to converge!" << std::endl;
    }

    // Get the Epetra_Vector with the final solution from the solver
    const NOX::Epetra::Group& finalGroup = dynamic_cast<const NOX::Epetra::Group&>(solver->getSolutionGroup());
    const Epetra_Vector& finalSolution = (dynamic_cast<const NOX::Epetra::Vector&>(finalGroup.getX())).getEpetraVector();

    // End Nonlinear Solver **************************************

    // Output the parameter list
    if (utils.isPrintType(NOX::Utils::Parameters)) {
        utils.out() << std::endl << "Final Parameters" << std::endl
                    << "****************" << std::endl;
        solver->getList().print(utils.out());
        utils.out() << std::endl;
    }

    locally_relevant_solution = 0;

    Epetra_Vector epetra_F(View, locally_relevant_solution.trilinos_vector(), 0);
    epetra_F = finalSolution;
    locally_relevant_solution -= offset_vector;
}

template <int dim>
void Step5<dim>::output_results()
{
    TimerOutput::Scope t(computing_timer, "Print system");
    DataOut<dim> data_out;

    data_out.attach_dof_handler(dof_handler);

    data_out.add_data_vector(locally_relevant_solution, "Calculated_solution");

    Vector<float> subdomain(triangulation.n_active_cells());
    for (size_t i = 0; i < subdomain.size(); ++i)
        subdomain(i) = triangulation.locally_owned_subdomain();
    data_out.add_data_vector(subdomain, "subdomain");

    data_out.build_patches();

    std::ofstream output("solution."
                         + Utilities::int_to_string(triangulation.locally_owned_subdomain(), 4)
                         + ".vtu");
    data_out.write_vtu(output);

    if (Utilities::MPI::this_mpi_process(mpi_communicator) == 0)
    {
        std::vector<std::string> filenames;
        for (unsigned int i = 0;
             i < Utilities::MPI::n_mpi_processes(mpi_communicator);
             ++i)
            filenames.push_back("solution."
                                + Utilities::int_to_string(i, 4)
                                + ".vtu");
        std::ofstream master_output("solution.pvtu");
        data_out.write_pvtu_record(master_output, filenames);
    }
}

template <int dim>
void Step5<dim>::run()
{
    GridGenerator::hyper_cube(triangulation, -1, 1);
    triangulation.refine_global(2);
    setup_system();

    solve_with_NOX();
    output_results();
}

int main(int argc, char *argv[])
{
    Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, numbers::invalid_unsigned_int);
    Step5<2> laplace_problem_2d;
    laplace_problem_2d.run();
    return 0;
}
