Re: [deal.II] Compilation error during the use of the CellDataStorage data structure

2017-10-27 Thread Jean-Paul Pelteret
Hi Phani,

Hmm… interesting. I think that you may be using the class in a way that we did 
not expect: we intended it to store a struct/class rather than a primitive 
data type. This should be an easy fix, namely that this assert (and perhaps 
others) needs to be extended to read something like
(std::is_base_of<DataType, T>::value || std::is_same<DataType, T>::value)
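
Concretely, the assertion in quadrature_point_data.h would then become something 
like this (a sketch, not tested):

  static_assert(std::is_base_of<DataType, T>::value ||
                  std::is_same<DataType, T>::value,
                "User's T class should be derived from user's DataType class");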

Would you like to try to see if that small change fixes the problem? Otherwise, 
would you mind providing us with a full minimal test case that reproduces the 
problem? In the meantime you could also simply wrap your single double in a 
struct, which should circumvent the issue.
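
For instance, something like the following (a minimal sketch; the struct name is 
made up for illustration):

  struct RhoData
  {
    double value;
  };

  CellDataStorage<typename DoFHandler<3>::active_cell_iterator, RhoData> rhoQuadData;

This passes the existing assertion because std::is_base_of<RhoData, RhoData>::value 
is true for class types (a class counts as its own base), while it is false for a 
fundamental type like double, which is exactly why the plain-double case trips it.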

Best regards,
Jean-Paul

> On 28 Oct 2017, at 02:31, Phani Motamarri  wrote:
> 
> Hi,
> 
> I am trying to create quadrature point data in the form of CellDataStorage. 
> My quadrature point data are of type "double" and hence I do the following
> 
>   CellDataStorage<typename DoFHandler<3>::active_cell_iterator, double> 
> rhoQuadData;
>typename DoFHandler<3>::active_cell_iterator cell_start = 
> dofHandler.begin_active(), cell_end = dofHandler.end(); // dofHandler is 
> previously created and is associated with a finite-element mesh
> 
>rhoQuadData.initialize(cell_start,
>  cell_end,
>  n_q_points);
> 
>   I get a compilation error which says
> 
>error: static assertion failed with "User's T class should be derived from 
> user's DataType class"
> static_assert(std::is_base_of<DataType, T>::value,
> ^
>   detected during:
> instantiation of "void dealii::CellDataStorage<CellIteratorType, 
> DataType>::initialize(const CellIteratorType &, unsigned int) [with 
> CellIteratorType=dealii::TriaActiveIterator<dealii::DoFCellAccessor<dealii::DoFHandler<3, 3>, false>>, DataType=double, T=double]" at line 532
> instantiation of "void dealii::CellDataStorage<CellIteratorType, 
> DataType>::initialize(const CellIteratorType &, const CellIteratorType &, 
> unsigned int) [with 
> CellIteratorType=dealii::TriaActiveIterator<dealii::DoFCellAccessor<dealii::DoFHandler<3, 3>, false>>, DataType=double, T=double]" at line 120 of 
> "/home/phanim/KohnShamCodes/DEALIICode/dft-fe/src/./dft/project.cc"
> 
> 
> This is very strange to me as DataType and T are both "double", but I still 
> get this static assertion error. Could you please let me know if I am missing 
> anything here?
> 
> Thanks
> Phani
> 



[deal.II] Compilation error during the use of the CellDataStorage data structure

2017-10-27 Thread Phani Motamarri
Hi,

I am trying to create quadrature point data in the form of 
CellDataStorage. My quadrature point data are of type "double" and hence 
I do the following

  CellDataStorage<typename DoFHandler<3>::active_cell_iterator, double> 
rhoQuadData;
   typename DoFHandler<3>::active_cell_iterator cell_start = 
dofHandler.begin_active(), cell_end = dofHandler.end(); // dofHandler is 
previously created and is associated with a finite-element mesh

   rhoQuadData.initialize(cell_start,
 cell_end,
 n_q_points);

  I get a compilation error which says

error: static assertion failed with "User's T class should be derived from 
user's DataType class"
  static_assert(std::is_base_of<DataType, T>::value,
  ^
  detected during:
instantiation of "void dealii::CellDataStorage<CellIteratorType, 
DataType>::initialize(const CellIteratorType &, unsigned int) [with 
CellIteratorType=dealii::TriaActiveIterator<dealii::DoFCellAccessor<dealii::DoFHandler<3, 3>, false>>, DataType=double, T=double]" at line 532
instantiation of "void dealii::CellDataStorage<CellIteratorType, 
DataType>::initialize(const CellIteratorType &, const CellIteratorType &, 
unsigned int) [with 
CellIteratorType=dealii::TriaActiveIterator<dealii::DoFCellAccessor<dealii::DoFHandler<3, 3>, false>>, DataType=double, T=double]" at line 120 of 
"/home/phanim/KohnShamCodes/DEALIICode/dft-fe/src/./dft/project.cc"

This is very strange to me as DataType and T are both "double", but I still 
get this static assertion error. Could you please let me know if I am 
missing anything here?

Thanks
Phani



Re: [deal.II] Test 4 failed, maybe a different installation of MPI? Then how to solve this?

2017-10-27 Thread Michael
Thanks for your reply. I did install mpich_3.2-7_amd64.deb before. When I 
tried your simple MPI example, I cannot even compile it; the error 
is:

hello.c: In function ‘main’:
hello.c:3:5: warning: implicit declaration of function ‘MPI_Init’ 
[-Wimplicit-function-declaration]
 MPI_Init(NULL, NULL);
 ^~~~
hello.c:3:14: error: ‘NULL’ undeclared (first use in this function)
 MPI_Init(NULL, NULL);
  ^~~~
hello.c:3:14: note: each undeclared identifier is reported only once for 
each function it appears in
hello.c:7:5: warning: implicit declaration of function ‘MPI_Comm_size’ 
[-Wimplicit-function-declaration]
 MPI_Comm_size(MPI_COMM_WORLD, &world_size);
 ^
hello.c:7:19: error: ‘MPI_COMM_WORLD’ undeclared (first use in this 
function)
 MPI_Comm_size(MPI_COMM_WORLD, &world_size);
   ^~
hello.c:11:5: warning: implicit declaration of function ‘MPI_Comm_rank’ 
[-Wimplicit-function-declaration]
 MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
 ^
hello.c:14:25: error: ‘MPI_MAX_PROCESSOR_NAME’ undeclared (first use in 
this function)
 char processor_name[MPI_MAX_PROCESSOR_NAME];
 ^~
hello.c:16:5: warning: implicit declaration of function 
‘MPI_Get_processor_name’ [-Wimplicit-function-declaration]
 MPI_Get_processor_name(processor_name, &name_len);
 ^~
hello.c:19:5: warning: implicit declaration of function ‘printf’ 
[-Wimplicit-function-declaration]
 printf("Hello world from processor %s, rank %d"
 ^~
hello.c:19:5: warning: incompatible implicit declaration of built-in 
function ‘printf’
hello.c:19:5: note: include ‘<stdio.h>’ or provide a declaration of ‘printf’
hello.c:24:5: warning: implicit declaration of function ‘MPI_Finalize’ 
[-Wimplicit-function-declaration]
 MPI_Finalize();
 ^~~~
<builtin>: recipe for target 'hello.o' failed
make: *** [hello.o] Error 1

Not sure if this is because I didn't install the MPI package correctly; it 
was installed via the Ubuntu software manager.
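
For what it's worth, the errors all say that the compiler never saw declarations 
of the MPI functions (or of printf), i.e. hello.c is missing its include lines. A 
minimal version of the program, following the mpitutorial.com example linked 
below (reconstructed here as a sketch, since the original hello.c is not shown), 
would be:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    /* Initialize the MPI environment */
    MPI_Init(NULL, NULL);

    /* Number of processes and rank of this process */
    int world_size, world_rank;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Name of the host this rank runs on */
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(processor_name, &name_len);

    printf("Hello world from processor %s, rank %d out of %d processors\n",
           processor_name, world_rank, world_size);

    MPI_Finalize();
    return 0;
}

It needs to be compiled with the MPI compiler wrapper (mpicc hello.c -o hello) 
rather than plain gcc, so that mpi.h is found.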


On Friday, October 27, 2017 at 12:57:55 PM UTC-5, Timo Heister wrote:
>
> Well, do you have more than one MPI library installed? If yes, you 
> need to address that by removing one of them. 
>
> You can check if a simple hello world MPI program compiles and runs 
> correctly. See for example 
>
> http://mpitutorial.com/tutorials/mpi-hello-world/ 
>
> On Thu, Oct 26, 2017 at 6:38 PM, Michael  
> wrote: 
> > Hi, 
> > 
> > Test 4 failed after make test. I did my research and found the following 
> > answer: 
> > 
> > 
> > https://groups.google.com/d/msg/dealii/mV-juFpnybA/DSBP4DArCQAJ 
> > 
> > I guess it may be caused by a different version of MPI. And I get the 
> > following return from the suggested commands: 
> > 
> > mpirun -n 2 ./mpi.debug 
> >  Hi from 0/1 
> >  Hi from 0/1 
> > ERROR: process does not see nproc=2! 
> > ERROR: process does not see nproc=2! 
> > --- 
> > Primary job  terminated normally, but 1 process returned 
> > a non-zero exit code.. Per user-direction, the job has been aborted. 
> > --- 
> > -- 
> > mpirun detected that one or more processes exited with non-zero status, thus 
> > causing the job to be terminated. The first process to do so was: 
> > 
> >   Process name: [[18180,1],0] 
> >   Exit code:255 
> > 
> > mpiexec -n 2 ./mpi.debug 
> >  Hi from 0/1 
> >  Hi from 0/1 
> > ERROR: process does not see nproc=2! 
> > ERROR: process does not see nproc=2! 
> > --- 
> > Primary job  terminated normally, but 1 process returned 
> > a non-zero exit code.. Per user-direction, the job has been aborted. 
> > --- 
> > -- 
> > mpiexec detected that one or more processes exited with non-zero status, 
> > thus causing the job to be terminated. The first process to do so was: 
> > 
> >   Process name: [[18211,1],0] 
> >   Exit code:255 
> > 
> 

Re: [deal.II] Possible bug in the Jacobi preconditioner for MatrixFreeOperators::LaplaceOperator for vector fields

2017-10-27 Thread Martin Kronbichler

Dear Stephen,

You are absolutely right, the value of dofs_per_cell is simply wrong in 
the vector-valued case. I have been hesitant to fix it because there are 
some downstream projects using it (mostly mine, though), but I guess it 
is better to switch to the correct notation now rather than causing more 
trouble. You can use your fix right now - I will open an issue and fix 
it (probably next week).


Best,
Martin


On 27.10.2017 21:58, Stephen DeWitt wrote:

Hello,
I've been trying to refactor my code to use the new 
MatrixFreeOperators, but I've run into a problem trying to use the 
Jacobi preconditioner with the MatrixFreeOperators::LaplaceOperator 
with a vector-valued variable.


In short, I wrote a short code to solve a simple 2D Poisson problem. 
For a scalar field, it works well both when using the identity matrix 
preconditioner and the Jacobi preconditioner. However, for a vector 
field, it works when using the identity matrix preconditioner but 
won't converge with the Jacobi preconditioner.


Aside from all of the standard book-keeping (creating an FESystem with 
multiple components, applying vector-valued ICs and BCs, etc.), my 
only change between the scalar case and the vector case is the 
template parameter for the MatrixFreeOperators::LaplaceOperator object.


Digging into it, I found that approximately every other element of the 
diagonal vector was zero, excluding a few that I believe are from the 
BCs (that is, in "compute_diagonal()", half of the elements in 
"inverse_diagonal_vector" are zero after the cell loop but before the 
reciprocal is taken). In the scalar case, all of the diagonal elements 
are non-zero, as one would expect. The zero elements of the diagonal 
seem to come from the "local_diagonal_cell" method, where 
"phi.dofs_per_cell" and "phi.tensor_dofs_per_cell" don't change 
between the scalar and vector case (4 in this case, with linear 
elements), when I'd expect the dofs per cell in the vector case to be 
the number of components times the dofs per cell in the scalar case.


If I multiply  "phi.dofs_per_cell" and "phi.tensor_dofs_per_cell" by 
the number of components everywhere in the "local_diagonal_cell" 
method, the preconditioner works (see the highlighted edits at the 
bottom of the post).


So, does anyone have an idea of what's going on here? Is this a bug, 
or is there an extra step I'm missing to use MatrixFreeOperators with 
a vector that I'm compensating for?


Thanks,
Steve

template <int dim, int fe_degree, int n_q_points_1d, int n_components, typename VectorType>
void
CustomLaplaceOperator<dim, fe_degree, n_q_points_1d, n_components, VectorType>::
local_diagonal_cell (const MatrixFree<dim, typename VectorType::value_type> &data,
                     VectorType &dst,
                     const unsigned int &,
                     const std::pair<unsigned int, unsigned int> &cell_range) const
{
  typedef typename dealii::MatrixFreeOperators::Base<dim, VectorType>::value_type 
  Number;
  FEEvaluation<dim, fe_degree, n_q_points_1d, n_components, Number> phi 
  (data, this->selected_rows[0]);

  for (unsigned int cell = cell_range.first; cell < cell_range.second; ++cell)




Re: [deal.II] refine_mesh get different number of active cells on different processes with parallel::shared::Triangulation

2017-10-27 Thread Yiyang Zhang
Hello Prof. Bangerth,

Yes, I think I am setting them in a way that is exactly the same across all 
processes: I check n_active_cells() before the refinement, as well as the 
refine_flags and coarsen_flags on each process, and they are the same 
everywhere. But after the refinement, the number of active cells suddenly 
becomes different.

I attached the refine_mesh() code.

Best,
Yiyang

Vector<float> estimated_error_per_cell(this->tria.n_active_cells());
KellyErrorEstimator<dim>::estimate(this->dof_handler,
  QGauss<dim-1>(Q_POINTS),
  typename FunctionMap<dim>::type(),
  locally_relevant_sln,
  estimated_error_per_cell,
  ComponentMask(),
  0,
  MultithreadInfo::n_threads(),
  Utilities::MPI::this_mpi_process(this->mpi_communicator));

IndexSet local_active_cells(this->tria.n_active_cells());
local_active_cells.clear();
for(auto cell = this->tria.begin_active(); cell != this->tria.end(); 
++cell){
if(cell->is_locally_owned()){
local_active_cells.add_index(cell->active_cell_index());
}
}
TrilinosWrappers::MPI::Vector 
distributed_error_per_cell(local_active_cells, this->mpi_communicator);
for(auto cell = this->tria.begin_active(); cell != this->tria.end(); 
++cell){
if(cell->is_locally_owned()){
distributed_error_per_cell(cell->active_cell_index()) = 
estimated_error_per_cell(cell->active_cell_index());
}
}
distributed_error_per_cell.compress(VectorOperation::insert);
estimated_error_per_cell = distributed_error_per_cell;

GridRefinement::refine_and_coarsen_fixed_number(this->tria,
  estimated_error_per_cell,
  param[0],
  param[1],
  max_cells);

//restrict grid levels if these two parameters are given.
if(max_grid_level > 0){
if (this->tria.n_levels() > max_grid_level) {
for (typename Triangulation<dim>::active_cell_iterator
cell = this->tria.begin_active(max_grid_level);
cell != this->tria.end(max_grid_level); ++cell) {
cell->clear_refine_flag();
}
}
}
if(min_grid_level > 0){
for (typename Triangulation<dim>::active_cell_iterator
cell = this->tria.begin_active(min_grid_level);
cell != this->tria.end_active(min_grid_level); ++cell) {
cell->clear_coarsen_flag();
}
}
int refine_count = 0;
int coarsen_count = 0;
for(auto cell = this->tria.begin_active(); cell != this->tria.end(); 
++cell){
if(cell->refine_flag_set()) refine_count++;
if(cell->coarsen_flag_set()) coarsen_count++;
}
std::cout<<"Process No. 
"<mpi_communicator) <<". Refine 
Count = "<mpi_communicator) <<". Coarsen 
Count = "<dof_handler);

this->tria.prepare_coarsening_and_refinement();

sol_trans.prepare_for_coarsening_and_refinement(locally_relevant_sln);
std::cout << "Process No. " << Utilities::MPI::this_mpi_process(this->mpi_communicator)
          << ". Pre_refine, n_active_cells = " << this->tria.n_active_cells() << std::endl;
this->run_mesh_refinement();
std::cout << "Process No. " << Utilities::MPI::this_mpi_process(this->mpi_communicator)
          << ". Post_refine, n_active_cells = " << this->tria.n_active_cells() << std::endl;


[deal.II] Possible bug in the Jacobi preconditioner for MatrixFreeOperators::LaplaceOperator for vector fields

2017-10-27 Thread Stephen DeWitt
Hello,
I've been trying to refactor my code to use the new MatrixFreeOperators, 
but I've run into a problem trying to use the Jacobi preconditioner with 
the MatrixFreeOperators::LaplaceOperator with a vector-valued variable. 

In short, I wrote a short code to solve a simple 2D Poisson problem. For a 
scalar field, it works well both when using the identity matrix 
preconditioner and the Jacobi preconditioner. However, for a vector field, 
it works when using the identity matrix preconditioner but won't converge 
with the Jacobi preconditioner.

Aside from all of the standard book-keeping (creating an FESystem with 
multiple components, applying vector-valued ICs and BCs, etc.), my only 
change between the scalar case and the vector case is the template 
parameter for the MatrixFreeOperators::LaplaceOperator object.

Digging into it, I found that approximately every other element of the 
diagonal vector was zero, excluding a few that I believe are from the BCs 
(that is, in "compute_diagonal()", half of the elements in 
"inverse_diagonal_vector" are zero after the cell loop but before the 
reciprocal is taken). In the scalar case, all of the diagonal elements are 
non-zero, as one would expect. The zero elements of the diagonal seem to 
come from the "local_diagonal_cell" method, where "phi.dofs_per_cell" and 
"phi.tensor_dofs_per_cell" don't change between the scalar and vector case 
(4 in this case, with linear elements), when I'd expect the dofs per cell 
in the vector case to be the number of components times the dofs per cell 
in the scalar case.

If I multiply  "phi.dofs_per_cell" and "phi.tensor_dofs_per_cell" by the 
number of components everywhere in the "local_diagonal_cell" method, the 
preconditioner works (see the highlighted edits at the bottom of the post).

So, does anyone have an idea of what's going on here? Is this a bug, or is 
there an extra step I'm missing to use MatrixFreeOperators with a vector 
that I'm compensating for?

Thanks,
Steve

template <int dim, int fe_degree, int n_q_points_1d, int n_components, typename VectorType>
  void
  CustomLaplaceOperator<dim, fe_degree, n_q_points_1d, n_components, VectorType>::
  local_diagonal_cell (const MatrixFree<dim, typename VectorType::value_type> &data,
                       VectorType &dst,
                       const unsigned int &,
                       const std::pair<unsigned int, unsigned int> &cell_range) const
  {
    typedef typename 
dealii::MatrixFreeOperators::Base<dim, VectorType>::value_type 
Number;
    FEEvaluation<dim, fe_degree, n_q_points_1d, n_components, Number> phi (data, 
this->selected_rows[0]);

    for (unsigned int cell = cell_range.first; cell < cell_range.second; ++cell)
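
The loop body is cut off above; modelled on the MatrixFreeOperators::LaplaceOperator 
implementation that CustomLaplaceOperator mirrors, and with the workaround applied, 
the remainder would look roughly as follows. The do_operation_on_cell() helper and 
the exact FEEvaluation calls are assumptions, not verbatim code from the post:

      {
        phi.reinit (cell);

        // The workaround: scale the per-cell dof count by the number of vector
        // components, since phi.dofs_per_cell only reported the scalar count here.
        const unsigned int n_dofs = phi.dofs_per_cell * n_components;

        AlignedVector<VectorizedArray<Number> > local_diagonal_vector (n_dofs);
        for (unsigned int i = 0; i < n_dofs; ++i)
          {
            // Apply the operator to the i-th unit vector and keep the i-th entry.
            for (unsigned int j = 0; j < n_dofs; ++j)
              phi.submit_dof_value (VectorizedArray<Number>(), j);
            phi.submit_dof_value (make_vectorized_array<Number> (1.), i);
            do_operation_on_cell (phi, cell);
            local_diagonal_vector[i] = phi.get_dof_value (i);
          }
        for (unsigned int i = 0; i < n_dofs; ++i)
          phi.submit_dof_value (local_diagonal_vector[i], i);
        phi.distribute_local_to_global (dst);
      }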

[deal.II] refine_mesh get different number of active cells on different processes with parallel::shared::Triangulation

2017-10-27 Thread Yiyang Zhang
Hello, 

I am using parallel::shared::Triangulation. Sometimes, but not always, I 
get the following very strange behaviour when doing refine_mesh. 

This is the code.

int refine_count = 0;
int coarsen_count = 0;

for(auto cell = this->tria.begin_active(); cell != this->tria.end(); 
++cell){
if(cell->refine_flag_set()) refine_count++;
if(cell->coarsen_flag_set()) coarsen_count++;
}

std::cout<<"Process No. 
"<mpi_communicator) <<". Refine 
Count = "<mpi_communicator) <<". Coarsen 
Count = "<dof_handler);

this->tria.prepare_coarsening_and_refinement();
sol_trans.prepare_for_coarsening_and_refinement(locally_relevant_sln);

std::cout<<"Process No. 
"<mpi_communicator) <<". 
Pre_refine, n_active_cells = "run_mesh_refinement();
std::cout<<"Process No. 
"<mpi_communicator) <<". 
Post_refine, n_active_cells = "

Re: [deal.II] Test 4 failed, maybe a different installation of MPI? Then how to solve this?

2017-10-27 Thread Timo Heister
Well, do you have more than one MPI library installed? If yes, you
need to address that by removing one of them.

You can check if a simple hello world MPI program compiles and runs
correctly. See for example
http://mpitutorial.com/tutorials/mpi-hello-world/

On Thu, Oct 26, 2017 at 6:38 PM, Michael  wrote:
> Hi,
>
> Test 4 failed after make test. I did my research and found the following
> answer:
>
> https://groups.google.com/d/msg/dealii/mV-juFpnybA/DSBP4DArCQAJ
>
> I guess it may be caused by different version of MPI. And I get the
> following return of suggested command:
>
> 
> mpirun -n 2 ./mpi.debug
>  Hi from 0/1
>  Hi from 0/1
> ERROR: process does not see nproc=2!
> ERROR: process does not see nproc=2!
> ---
> Primary job  terminated normally, but 1 process returned
> a non-zero exit code.. Per user-direction, the job has been aborted.
> ---
> --
> mpirun detected that one or more processes exited with non-zero status, thus
> causing
> the job to be terminated. The first process to do so was:
>
>   Process name: [[18180,1],0]
>   Exit code:255
> 
>  mpiexec -n 2 ./mpi.debug
>  Hi from 0/1
>  Hi from 0/1
> ERROR: process does not see nproc=2!
> ERROR: process does not see nproc=2!
> ---
> Primary job  terminated normally, but 1 process returned
> a non-zero exit code.. Per user-direction, the job has been aborted.
> ---
> --
> mpiexec detected that one or more processes exited with non-zero status,
> thus causing
> the job to be terminated. The first process to do so was:
>
>   Process name: [[18211,1],0]
>   Exit code:255
> 
> which mpirun
> /usr/bin/mpirun
> 
> which mpiexec
> /usr/bin/mpiexec
> 
>
> Please tell me how to solve this (how to choose the correct MPI compiler)? Do
> I need to reinstall deal.II and Trilinos? Thanks!
>



-- 
Timo Heister
http://www.math.clemson.edu/~heister/



[deal.II] Test 4 failed after installation, no clue

2017-10-27 Thread Michael
Hi,

Test 4 failed after make test. I did my research and found the following 
answer:

https://groups.google.com/d/msg/dealii/mV-juFpnybA/DSBP4DArCQAJ

And I ran the suggested commands, with the returns below:


mpirun -n 2 ./mpi.debug 

 Hi from 0/1
 Hi from 0/1
ERROR: process does not see nproc=2!
ERROR: process does not see nproc=2!
---
Primary job  terminated normally, but 1 process returned
a non-zero exit code.. Per user-direction, the job has been aborted.
---
--
mpirun detected that one or more processes exited with non-zero status, 
thus causing
the job to be terminated. The first process to do so was:

  Process name: [[18180,1],0]
  Exit code:255

 mpiexec -n 2 ./mpi.debug 

 Hi from 0/1
 Hi from 0/1
ERROR: process does not see nproc=2!
ERROR: process does not see nproc=2!
---
Primary job  terminated normally, but 1 process returned
a non-zero exit code.. Per user-direction, the job has been aborted.
---
--
mpiexec detected that one or more processes exited with non-zero status, 
thus causing
the job to be terminated. The first process to do so was:

  Process name: [[18211,1],0]
  Exit code:255

which mpirun

/usr/bin/mpirun

which mpiexec

/usr/bin/mpiexec

ldd /home/superman/usr/dealii/lib/libdeal_II.g.so.8.5.1 | grep mpi

libmpich.so.12 => /usr/lib/x86_64-linux-gnu/libmpich.so.12 
(0x7f884cb3)

ldd /home/superman/usr/trilinos/lib/libepetra.so.12.12.1 | grep mpi

libmpich.so.12 => /usr/lib/x86_64-linux-gnu/libmpich.so.12 
(0x7fd5327f1000)


I have no clue about this, please let me know if you have any suggestion. 
Thanks a lot!



Re: [deal.II] deal.II on Mac OS Sierra

2017-10-27 Thread Timo Heister
Great to hear that this works. I am in the process of creating a second
installer for the newer Xcode+SDK.

On Fri, Oct 27, 2017 at 9:37 AM, Pawan Takhar  wrote:
> Alberto,
>
> Thanks for your reply. I installed 10.12 SDK as suggested in previous
> discussion. I also set the project to use 10.12. Now when I build from
> within Xcode, the project is compiling fine, but I get the following warning
>
> DEAL_II_DISABLE_EXTRA_DIAGNOSTICS "Unknown warning group
> '-Wunused-but-set-variable', ignored".
>
> The steps I am using are:
>
> 1) cmake . -G "Xcode"
> 2) open the project in Xcode
> 3) Change Base SDK to macOS 10.12
> 4) Build the project.
>
> Thanks,
> Pawan
>
>
>
>



-- 
Timo Heister
http://www.math.clemson.edu/~heister/



Re: [deal.II] CellDataStorage with mesh refinement

2017-10-27 Thread Jean-Paul Pelteret
Hi Frederik,

I’m being wildly presumptuous here (so feel free to say if I’ve presumed 
incorrectly), but unless you have a special set of data that needs to be 
considered, the ContinuousQuadratureDataTransfer class may still be relevant 
to your use case. There is no requirement that your global FE space be 
continuous to use this class. What’s relevant is this snippet from the 
documentation:

"the data located at data_quadrature of each cell is L2-projected to the 
continuous space defined by a single FiniteElement projection_fe “

What happens is that, for each cell, the specified QP data is projected to the 
finite element space specified by the input projection_fe, and then 
interpolated onto the new cell (or child cells in the case of refinement) of 
the refined triangulation. So I think that you may have misinterpreted what 
exactly the “Continuous” part of the class name refers to: It is assumed that 
for each individual cell the data is continuous within the cell (i.e. can be 
directly interpolated using the input FE), while the data between cells may 
not be. An example case where data is truly discontinuous within a cell (and 
therefore where this class is not applicable) would be the internal variables 
for yield or damage in plasticity. Would you care to elaborate more on what 
you’re using this for?
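
For orientation, the intended usage pattern is roughly the following (a sketch 
based on the class documentation; MyData, the FE_Q<dim>(1) projection space and 
the two quadrature objects are illustrative choices, and MyData must be derived 
from TransferableQuadraturePointData):

  parallel::distributed::ContinuousQuadratureDataTransfer<dim, MyData>
    data_transfer (FE_Q<dim>(1), mass_quadrature, data_quadrature);

  data_transfer.prepare_for_coarsening_and_refinement (triangulation, data_storage);
  triangulation.execute_coarsening_and_refinement ();
  // ... re-initialize data_storage on the new active cells, then:
  data_transfer.interpolate ();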

Best regards,
Jean-Paul

P.S. You really want to use the GeometryInfo class to get this value:
for (unsigned int child=0; child<8; child++){
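That is, the hard-coded 8 (which is only correct in 3d) would become, sketched:
for (unsigned int child=0; child<GeometryInfo<dim>::max_children_per_cell; ++child){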

> On 26 Oct 2017, at 18:50, 'Frederik S.' via deal.II User Group 
>  wrote:
> 
> Hello!
> 
> I've got a question on the usage of CellDataStorage for refined cells for a 
> parallelized simulation. Below you'll find a sample of the code I use for 
> refinement. 
> 
> I'm transferring the solution as stated in 
> https://www.dealii.org/8.5.0/doxygen/deal.II/classparallel_1_1distributed_1_1SolutionTransfer.html
> and I sadly can't use ContinuousQuadratureDataTransfer, because my CellData 
> is discontinuous. Therefore I'm iterating through all cells, looking at each 
> cell's parent and copying the CellData of the parent to the child cells (the blue 
> part below; in reality the copying process is much longer, but then the code 
> would be unnecessarily complex for this example). If I run the code with just 
> one CPU core, everything works perfectly fine: in this example every child 
> cell has its parent's CellData. But for simulations with more than one core, 
> I just copy zeros into the child cells' CellData (though I do not get a 
> runtime error).
> 
> Where did I make a mistake? Or is it just not possible to access a parent 
> cell's CellData for parallel meshes?
> 
> Thanks in advance!
> 
> 
> 
> template <int dim>
> void TEST<dim>::refine_grid (){
>   const unsigned int n_q_points = quadrature_formula.size();
>   
>   std::vector<const PETScWrappers::MPI::Vector *> solution_vectors (2);
>   solution_vectors[0] = &solution;
>   solution_vectors[1] = &old_solution;
>   
>   parallel::distributed::SolutionTransfer<dim, PETScWrappers::MPI::Vector> soltrans(dof_handler);
>   
>   typename 
> parallel::distributed::Triangulation<dim>::active_cell_iterator cell = 
> triangulation.begin_active(), endc = triangulation.end();
>   for (; cell!=endc; ++cell){
>   if (cell->subdomain_id() == this_mpi_process){
>   cell->set_refine_flag();
>   }
>   }
>   
>   triangulation.prepare_coarsening_and_refinement();
>   soltrans.prepare_for_coarsening_and_refinement(solution_vectors);   
>   triangulation.execute_coarsening_and_refinement();
>   
>   setup_system (0);
>   
>   typename 
> parallel::distributed::Triangulation<dim>::active_cell_iterator cell2 = 
> triangulation.begin_active(), endc2 = triangulation.end(); 
>   for (; cell2!=endc2; ++cell2){
>   point_history_accessor.initialize(cell2, n_q_points);
>   }
>   
>   int highest_level_after_refinement = triangulation.n_global_levels()-1;
>   
>   PETScWrappers::MPI::Vector interpolated_solution(system_rhs);
>   PETScWrappers::MPI::Vector interpolated_old_solution(system_rhs);
>   std::vector<PETScWrappers::MPI::Vector *> tmp (2);
>   tmp[0] = &(interpolated_solution);
>   tmp[1] = &(interpolated_old_solution);
>   soltrans.interpolate(tmp);
>   
>   cell2 = triangulation.begin_active(), endc2 = triangulation.end();  
>   for (; cell2!=endc2; ++cell2)
>   if(cell2->level()==highest_level_after_refinement){
>   typename 
> parallel::distributed::Triangulation<dim>::cell_iterator 
> cell_p = cell2->parent();   
> 

Re: [deal.II] deal.II on Mac OS Sierra

2017-10-27 Thread Pawan Takhar
Alberto,

Thanks for your reply. I installed the 10.12 SDK as suggested in the previous 
discussion. I also set the project to use 10.12. Now when I build from 
within Xcode, the project compiles fine, but I get the following 
warning 

DEAL_II_DISABLE_EXTRA_DIAGNOSTICS "Unknown warning group 
'-Wunused-but-set-variable', ignored". 

The steps I am using are:

1) cmake . -G "Xcode" 
2) open the project in Xcode
3) Change Base SDK to macOS 10.12
4) Build the project.

Thanks,
Pawan






Re: [deal.II] Non-homogeneous boundary conditions with matrix-free not working

2017-10-27 Thread Wolfgang Bangerth

On 10/27/2017 06:44 AM, Bruno Turcksin wrote:


Could you add this explanation to the documentation and/or to a tutorial? I 
think mentioning it in step-37 would be great.


Yes, I was actually thinking the same. Maybe in "Possibilities for extensions" 
in the results section of step-37, like we have for other programs?


Cheers
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Non-homogeneous boundary conditions with matrix-free not working

2017-10-27 Thread Martin Kronbichler
Hi Bruno,

I definitely agree - we should mention this in step-37. I will open an
issue since I will not get to do it today.

Best,
Martin


On 27.10.2017 14:44, Bruno Turcksin wrote:
> Martin,
>
> Could you add this explanation to the documentation and/or to a
> tutorial? I think mentioning it in step-37 would be great.
>
> Best,
>
> Bruno
>
> On Friday, October 27, 2017 at 3:57:09 AM UTC-4, Martin Kronbichler
> wrote:
>
> Dear Michal,
>
> This is expected: the matrix-free operator evaluation cannot apply
> non-homogeneous boundary conditions while solving (at least I have
> never figured out how to do that). To solve such a problem, you
> need to bring the non-homogeneous part on the right hand side
> first. I often solve this by first creating a vector that is
> filled with the Dirichlet values and apply the operator on that
> (without setting the zero constraints) and then solve for an
> increment that has zero boundary conditions. I attach an example
> which is an extension of step-37 (and uses coefficients described
> here, if you want some more background:
> http://www.it.uu.se/research/publications/reports/2017-006/2017-006-nc.pdf
>
> In the program, you will find that LaplaceOperator not only
> contains a vmult() function, but also a compute_residual()
> function that reads the data in from an FEEvaluation object
> without Dirichlet conditions, does the operation, and assembles the
> right hand side into an FEEvaluation object with Dirichlet
> conditions (-> local_compute_residual()).
>
> Please let me know in case you have questions.
>
> Best,
> Martin
>
> On 26.10.2017 22:07, Michał Wichrowski wrote:
>> Dear deal.II developers,
>> I think I found a problem with non-homogeneous boundary conditions.  
>> By changing:
>> VectorTools::interpolate_boundary_values (dof_handler,
>> 0,
>> Functions::ZeroFunction(),
>> constraints);
>> to
>> VectorTools::interpolate_boundary_values (dof_handler,
>> 0,
>> Functions::ConstantFunction(5),
>> constraints);
>> The results shown in the included picture were output. I've tried
>> to figure out what is going on, and it looks like the matrix-free
>> framework is solving the problem with homogeneous constraints and then
>> the Dirichlet nodes are overwritten.  
>



Re: [deal.II] Non-homogeneous boundary conditions with matrix-free not working

2017-10-27 Thread Bruno Turcksin
Martin,

Could you add this explanation to the documentation and/or to a tutorial? I 
think mentioning it in step-37 would be great.

Best,

Bruno

On Friday, October 27, 2017 at 3:57:09 AM UTC-4, Martin Kronbichler wrote:
>
> Dear Michal,
>
> This is expected: the matrix-free operator evaluation cannot apply 
> non-homogeneous boundary conditions while solving (at least I have never 
> figured out how to do that). To solve such a problem, you need to bring the 
> non-homogeneous part on the right hand side first. I often solve this by 
> first creating a vector that is filled with the Dirichlet values and apply 
> the operator on that (without setting the zero constraints) and then solve 
> for an increment that has zero boundary conditions. I attach an example 
> which is an extension of step-37 (and uses coefficients described here, if 
> you want some more background: 
> http://www.it.uu.se/research/publications/reports/2017-006/2017-006-nc.pdf
>
> In the program, you will find that LaplaceOperator not only contains a 
> vmult() function, but also a compute_residual() function that reads the 
> data in from an FEEvaluation object without Dirichlet conditions, does the 
> operation, and assembles the right hand side into an FEEvaluation object 
> with Dirichlet conditions (-> local_compute_residual()).
>
> Please let me know in case you have questions.
>
> Best,
> Martin
> On 26.10.2017 22:07, Michał Wichrowski wrote:
>
> Dear deal.II developers, 
> I think I found a problem with non-homogeneous boundary conditions.  
> By changing:
> VectorTools::interpolate_boundary_values (dof_handler,
> 0,
> Functions::ZeroFunction(),
> constraints);
> to
> VectorTools::interpolate_boundary_values (dof_handler,
> 0,
> Functions::ConstantFunction(5),
> constraints);
>
>
> The results shown in the included picture were output. I've tried to figure 
> out what is going on, and it looks like the matrix-free framework is solving 
> the problem with homogeneous constraints and then the Dirichlet nodes are 
> overwritten.  



Re: [deal.II] Non-homogeneous boundary conditions with matrix-free not working

2017-10-27 Thread Martin Kronbichler
Dear Michal,

This is expected: the matrix-free operator evaluation cannot apply
non-homogeneous boundary conditions while solving (at least I have never
figured out how to do that). To solve such a problem, you need to bring
the non-homogeneous part on the right hand side first. I often solve
this by first creating a vector that is filled with the Dirichlet values
and apply the operator on that (without setting the zero constraints)
and then solve for an increment that has zero boundary conditions. I
attach an example which is an extension of step-37 (and uses
coefficients described here, if you want some more background:
http://www.it.uu.se/research/publications/reports/2017-006/2017-006-nc.pdf

In the program, you will find that LaplaceOperator not only contains a
vmult() function, but also a compute_residual() function that reads the
data in from an FEEvaluation object without Dirichlet conditions, does
the operation, and assembles the right hand side into an FEEvaluation
object with Dirichlet conditions (-> local_compute_residual()).
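
Sketched in code, the strategy reads roughly as follows (illustrative names, not 
the attached program; in particular the two operator objects stand for the 
operator applied without and with the zero constraints):

  // 1) Put the inhomogeneous Dirichlet values into the solution vector:
  std::map<types::global_dof_index, double> boundary_values;
  VectorTools::interpolate_boundary_values (dof_handler, 0,
                                            Functions::ConstantFunction<dim>(5),
                                            boundary_values);
  for (const auto &bv : boundary_values)
    solution(bv.first) = bv.second;

  // 2) Move the inhomogeneity to the right hand side:
  //    rhs <- rhs - A*solution, applying A without zeroing constrained rows
  //    (this is what compute_residual() does in the attached program).
  laplace_operator_unconstrained.vmult (tmp, solution);
  system_rhs -= tmp;

  // 3) Solve A*increment = rhs with homogeneous (zero) boundary constraints:
  solver.solve (laplace_operator, increment, system_rhs, preconditioner);

  // 4) Recover the full solution:
  solution += increment;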

Please let me know in case you have questions.

Best,
Martin

On 26.10.2017 22:07, Michał Wichrowski wrote:
> Dear deal.II developers,
> I think I found a problem with non-homogeneous boundary conditions.  
> By changing:
> VectorTools::interpolate_boundary_values (dof_handler,
> 0,
> Functions::ZeroFunction(),
> constraints);
> to
> VectorTools::interpolate_boundary_values (dof_handler,
> 0,
> Functions::ConstantFunction(5),
> constraints);
>
>
> The results shown in the included picture were output. I've tried to
> figure out what is going on, and it looks like the matrix-free framework
> is solving the problem with homogeneous constraints and then the Dirichlet
> nodes are overwritten.  

/* ---------------------------------------------------------------------
 *
 * Copyright (C) 2009 - 2017 by the deal.II authors
 *
 * This file is part of the deal.II library.
 *
 * The deal.II library is free software; you can use it, redistribute
 * it, and/or modify it under the terms of the GNU Lesser General
 * Public License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 * The full text of the license can be found in the file LICENSE at
 * the top level of the deal.II distribution.
 *
 * ---------------------------------------------------------------------
 *
 * Authors: Katharina Kormann, Martin Kronbichler, Uppsala University,
 * 2009-2012, updated to MPI version with parallel vectors in 2016
 */


// NOTE: the header names were stripped from the archived attachment; the
// include list below is reconstructed from step-37, which this program
// extends, and is therefore an approximation.
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/function.h>
#include <deal.II/base/logstream.h>
#include <deal.II/base/timer.h>
#include <deal.II/base/vectorization.h>

#include <deal.II/lac/la_parallel_vector.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/precondition.h>

#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_values.h>

#include <deal.II/grid/tria.h>
#include <deal.II/grid/grid_generator.h>

#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_accessor.h>
#include <deal.II/dofs/dof_tools.h>

#include <deal.II/multigrid/multigrid.h>
#include <deal.II/multigrid/mg_transfer_matrix_free.h>
#include <deal.II/multigrid/mg_tools.h>
#include <deal.II/multigrid/mg_coarse.h>
#include <deal.II/multigrid/mg_smoother.h>
#include <deal.II/multigrid/mg_matrix.h>

#include <deal.II/matrix_free/matrix_free.h>
#include <deal.II/matrix_free/fe_evaluation.h>
#include <deal.II/matrix_free/operators.h>

#include <deal.II/numerics/vector_tools.h>
#include <deal.II/numerics/data_out.h>

#include <deal.II/distributed/tria.h>

#include <iostream>
#include <fstream>
#include <sstream>


namespace Step37
{
  using namespace dealii;


  const unsigned int degree_finite_element = 3;
  const unsigned int dimension = 3;

  typedef double number;
  typedef float level_number;


  // ======================================================
  // Functions
  // ======================================================

  template <int dim>
  class Solution : public Function<dim>
  {
  public:
    Solution () : Function<dim>() {}

    virtual double value (const Point<dim>   &p,
                          const unsigned int  component = 0) const;

    virtual Tensor<1,dim> gradient (const Point<dim>   &p,
                                    const unsigned int  component = 0) const;

    virtual Tensor<1,dim,VectorizedArray<number> >
    gradient (const Point<dim,VectorizedArray<number> > &p,
              const unsigned int component = 0) const;

    virtual double laplacian (const Point<dim>   &p,
                              const unsigned int  component = 0) const;

    virtual VectorizedArray<number>
    laplacian (const Point<dim,VectorizedArray<number> > &p,
               const unsigned int component = 0)