Re: [deal.II] Complex-valued distributed matrices in dealii

2020-07-26 Thread Pascal Kraft
The documentation states that Tpetra supports 
- MPI
- Shared-memory parallelization (OpenMP, CUDA, POSIX threads)

and: 
Scalar: A Scalar is the type of values in the sparse matrix or dense 
vector. This is the type most likely to be changed by many users. The most 
common use cases 
are float, double, std::complex<float>, and std::complex<double>.

and it contains:

   - Parallel distributions: Tpetra::Map 
     <https://docs.trilinos.org/dev/packages/tpetra/doc/html/classTpetra_1_1Map.html> 
     - Contains information used to distribute vectors, matrices and other 
     objects. This class is analogous to Epetra's Epetra_Map class.

   - Distributed dense vectors: Tpetra::MultiVector 
     <https://docs.trilinos.org/dev/packages/tpetra/doc/html/classTpetra_1_1MultiVector.html>, 
     Tpetra::Vector 
     <https://docs.trilinos.org/dev/packages/tpetra/doc/html/classTpetra_1_1Vector.html> 
     - Provides vector services such as scaling, norms, and dot products.

   - Distributed sparse matrices: Tpetra::RowMatrix 
     <https://docs.trilinos.org/dev/packages/tpetra/doc/html/classTpetra_1_1RowMatrix.html>, 
     Tpetra::CrsMatrix 
     <https://docs.trilinos.org/dev/packages/tpetra/doc/html/classTpetra_1_1CrsMatrix.html> 
     - Tpetra::RowMatrix is an abstract interface for row-distributed sparse 
     matrices. Tpetra::CrsMatrix is a specific implementation of 
     Tpetra::RowMatrix, utilizing compressed row storage format. Both of these 
     classes derive from Tpetra::Operator 
     <https://docs.trilinos.org/dev/packages/tpetra/doc/html/classTpetra_1_1Operator.html>, 
     the base class for linear operators.

see
https://docs.trilinos.org/dev/packages/tpetra/doc/html/index.html  
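
So a distributed, complex-valued sparse matrix exists on the Tpetra side. As a toy 
sketch of what that looks like in plain Trilinos (assuming the std::complex<double> 
instantiations were enabled at configure time; Tpetra::ScopeGuard needs a reasonably 
recent Trilinos, and some accessor names differ between versions):

#include <Tpetra_Core.hpp>
#include <Tpetra_Map.hpp>
#include <Tpetra_CrsMatrix.hpp>
#include <Teuchos_Tuple.hpp>
#include <complex>

int main(int argc, char **argv)
{
  Tpetra::ScopeGuard tpetra_scope(&argc, &argv); // initializes MPI (and Kokkos)
  {
    using Scalar = std::complex<double>;
    using Map    = Tpetra::Map<>;                // default ordinal/node types
    using Matrix = Tpetra::CrsMatrix<Scalar>;
    using GO     = Map::global_ordinal_type;

    auto comm = Tpetra::getDefaultComm();

    // 100 rows, distributed contiguously over all MPI ranks.
    const Tpetra::global_size_t n_rows = 100;
    auto map = Teuchos::rcp(new Map(n_rows, 0, comm));

    // Complex-valued diagonal matrix, one nonzero per row
    // (assumes every rank owns at least one row).
    Matrix A(map, 1);
    for (GO row = map->getMinGlobalIndex(); row <= map->getMaxGlobalIndex(); ++row)
      A.insertGlobalValues(row,
                           Teuchos::tuple<GO>(row),
                           Teuchos::tuple<Scalar>(Scalar(1.0, 2.0)));
    A.fillComplete();
  }
  return 0;
}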

Pascal Kraft wrote on Sunday, July 26, 2020 at 10:57:36 UTC+2:

> Hi Wolfgang,
>
> here is what I found out about the topic: 
> Originally, I only knew Trilinos because I used the distributed matrices 
> and vectors in my application. I also knew that there is a configuration of 
> trilinos to make complex numbers available in all packages that support it. 
> However, from what I can tell, that only affects Tpetra data types, not 
> Epetra. From what I have seen in the deal.II wrappers, they only use Epetra. An 
> interesting detail about this is the Komplex-Package, which is described as 
> an Epetra based solver for complex systems, which wraps Epetra matrices and 
> stores the real and imaginary parts as blocks. (see here:  
> https://docs.trilinos.org/dev/packages/komplex/doc/html/index.html )
> At GitHub I can see that project 4 deals with adding Tpetra support, which 
> would make complex numbers in Tpetra usable in deal if the interface is 
> built to support them.
>
> About GMRES: I will be using PETSc GMRES to solve my system, but if 
> possible I will try to also solve it with dealii::SolverGMRES and let you 
> know what happens.
>
> Kind regards,
> Pascal
>
> Wolfgang Bangerth wrote on Sunday, July 26, 2020 at 01:43:44 UTC+2:
>
>> On 7/23/20 10:42 AM, Pascal Kraft wrote:
>> > 
>> > I have Trilinos compiled with support for complex numbers and also 
>> searched 
>> > through the LinearAlgebra documentation.
>>
>> I don't think I knew that one can compile Trilinos with complex numbers. 
>> How 
>> do you do that?
>>
>> It does not greatly surprise me that we use TrilinosScalar and double 
>> interchangeably. If Trilinos can indeed be compiled with complex numbers, 
>> then 
>> we ought to find a way to (i) make TrilinosScalar dependent on whatever 
>> Trilinos was compiled for, (ii) ensure that all of the places that 
>> currently 
>> don't compile because we use double in place of TrilinosScalar are fixed.
>>
>> Patches are, as always, very welcome!
>>
>>
>> > I require GMRES as a solver (which should be possible, because the 
>> GMRES 
>> > Versions all use a templated Vector which can take complex components) 
>> and MPI 
>> > distribution of a sparse system. I have so far only seen FullMatrix to 
>> accept 
>> > complex numbers.
>>
>> I believe that GMRES could indeed be made to work for complex-valued 
>> problems, 
>> but I'm not sure any of us have ever tried. When writing step-58, I toyed 
>> with the idea of looking up in the literature what one would need for a 
>> complex GMRES, but in the end decided to just make SparseDirectUMFPACK 
>> work instead.

Re: [deal.II] Complex-valued distributed matrices in dealii

2020-07-26 Thread Pascal Kraft
Hi Wolfgang,

here is what I found out about the topic: 
Originally, I only knew Trilinos because I used the distributed matrices 
and vectors in my application. I also knew that there is a configuration of 
trilinos to make complex numbers available in all packages that support it. 
However, from what I can tell, that only affects Tpetra data types, not 
Epetra. From what I have seen in the deal.II wrappers, they only use Epetra. An 
interesting detail about this is the Komplex-Package, which is described as 
an Epetra based solver for complex systems, which wraps Epetra matrices and 
stores the real and imaginary parts as blocks. (see here:  
https://docs.trilinos.org/dev/packages/komplex/doc/html/index.html )
At GitHub I can see that project 4 deals with adding Tpetra support, which 
would make complex numbers in Tpetra usable in deal if the interface is 
built to support them.
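
The Trilinos configuration I mean looks roughly like this (a sketch; the exact 
option names can differ between Trilinos versions, and the paths are placeholders):

cmake \
  -DTPL_ENABLE_MPI=ON \
  -DTrilinos_ENABLE_Tpetra=ON \
  -DTrilinos_ENABLE_COMPLEX_DOUBLE=ON \
  -DTpetra_INST_COMPLEX_DOUBLE=ON \
  -DTeuchos_ENABLE_COMPLEX=ON \
  -DCMAKE_INSTALL_PREFIX=/path/to/trilinos-install \
  /path/to/Trilinos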

About GMRES: I will be using PETSc GMRES to solve my system, but if 
possible I will try to also solve it with dealii::SolverGMRES and let you 
know what happens.

Kind regards,
Pascal

Wolfgang Bangerth wrote on Sunday, July 26, 2020 at 01:43:44 UTC+2:

> On 7/23/20 10:42 AM, Pascal Kraft wrote:
> > 
> > I have Trilinos compiled with support for complex numbers and also 
> searched 
> > through the LinearAlgebra documentation.
>
> I don't think I knew that one can compile Trilinos with complex numbers. 
> How 
> do you do that?
>
> It does not greatly surprise me that we use TrilinosScalar and double 
> interchangeably. If Trilinos can indeed be compiled with complex numbers, 
> then 
> we ought to find a way to (i) make TrilinosScalar dependent on whatever 
> Trilinos was compiled for, (ii) ensure that all of the places that 
> currently 
> don't compile because we use double in place of TrilinosScalar are fixed.
>
> Patches are, as always, very welcome!
>
>
> > I require GMRES as a solver (which should be possible, because the GMRES 
> > Versions all use a templated Vector which can take complex components) 
> and MPI 
> > distribution of a sparse system. I have so far only seen FullMatrix to 
> accept 
> > complex numbers.
>
> I believe that GMRES could indeed be made to work for complex-valued 
> problems, 
> but I'm not sure any of us have ever tried. When writing step-58, I toyed 
> with the idea of looking up in the literature what one would need for a 
> complex GMRES, but in the end decided to just make SparseDirectUMFPACK 
> work 
> instead. The issue is that for every matrix-vector and vector-vector 
> operation 
> that happens inside GMRES, you have to think about whether one or the 
> other 
> operand needs to be complex-conjugated. I'm certain that is possible, but 
> would require an audit of a few hundred lines. It would probably be 
> simpler to 
> just use PETSc's (or Trilinos') GMRES implementation.
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>
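
PS: For context, the operations Wolfgang refers to involve the Hermitian inner 
product: in the complex case every dot product inside GMRES has to conjugate one 
of its arguments. A minimal illustration in plain C++ (not deal.II code):

#include <complex>
#include <vector>

// (u, v) = sum_i conj(u_i) * v_i  -- this convention is what makes the
// Arnoldi/Gram-Schmidt steps and the norm ||u|| = sqrt((u, u)) well defined.
std::complex<double> inner_product(const std::vector<std::complex<double>> &u,
                                   const std::vector<std::complex<double>> &v)
{
  std::complex<double> s(0.0, 0.0);
  for (std::size_t i = 0; i < u.size(); ++i)
    s += std::conj(u[i]) * v[i];
  return s;
}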



Re: [deal.II] Complex-valued distributed matrices in dealii

2020-07-23 Thread Pascal Kraft
Hi Daniel,

oh, I'm really sorry for asking if that works. I had seen that neither 
PETSc nor Trilinos Sparse Matrices are templated and assumed that if the 
more modern version (Trilinos) doesn't work with complex numbers, trying 
PETSc wouldn't be very promising. But you are right, I will try that and 
report back. If that works, I will see if it is possible to update the 
documentation somewhere.
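
For reference, the PETSc route would look roughly like this (a sketch; the paths 
are placeholders, and -DDEAL_II_WITH_COMPLEX_VALUES only exists in recent deal.II 
releases, so treat that flag as something to verify for your version):

# configure PETSc with a complex scalar type
./configure --with-scalar-type=complex --with-mpi=1 \
            --prefix=/path/to/petsc-install

# configure deal.II against it
cmake -DDEAL_II_WITH_MPI=ON \
      -DDEAL_II_WITH_PETSC=ON \
      -DPETSC_DIR=/path/to/petsc-install \
      -DDEAL_II_WITH_COMPLEX_VALUES=ON \
      /path/to/dealii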

Kind regards and thanks for the super fast response, 
Pascal

d.arnd...@gmail.com wrote on Thursday, July 23, 2020 at 19:32:49 UTC+2:

> Pascal,
>
> The wrapped Trilinos matrices are based on Epetra, which only supports 
> double AFAICT. That's why you can't replace TrilinosScalar easily.
> On the other hand, you should be able to compile PETSc with complex scalar 
> type and use that with MPI.
>
> Best,
> Daniel
>
> On Thu, July 23, 2020 at 12:42, Pascal Kraft <
> kraft@gmail.com>:
>
>> Dear Deal.II devs and users,
>>
>> In the latest release a lot of (great) work has been done to make complex 
>> numbers more of a first-class citizen in deal, which has made my code a lot 
>> more readable. Currently, I am stuck with one problem, though. Are there 
>> any distributed datatypes for matrices that accept complex numbers?
>>
>> The dealii sparse matrix implementation is a template and allows complex 
>> numbers - however that implementation has no MPI functionality, which I 
>> need.
>>
>> The PETSc SparseMatrix and Trilinos SparseMatrix classes are not templates. In 
>> the types header I found the declaration of TrilinosScalar as double but 
>> changing it and recompiling dealii with the changed header threw an error. 
>>
>> I have Trilinos compiled with support for complex numbers and also 
>> searched through the LinearAlgebra documentation.
>>
>> I require GMRES as a solver (which should be possible, because the GMRES 
>> Versions all use a templated Vector which can take complex components) and 
>> MPI distribution of a sparse system. I have so far only seen FullMatrix to 
>> accept complex numbers.
>>
>> Can anyone give me a pointer on what is possible?
>>
>> Kind regards,
>> Pascal Kraft
>>
>



[deal.II] Re: Complex-valued distributed matrices in dealii

2020-07-23 Thread Pascal Kraft
Some additional information: If I try to compile deal with TrilinosScalar = 
std::complex<double>, I get many errors like this:

[  5%] Building CXX object 
source/numerics/CMakeFiles/obj_numerics_release.dir/data_postprocessor.cc.o
[  5%] Building CXX object 
source/numerics/CMakeFiles/obj_numerics_release.dir/dof_output_operator.cc.o
In file included from 
install_dir/dealii/dealii-source/include/deal.II/lac/trilinos_parallel_block_vector.h:27,
 from 
install_dir/dealii/dealii-source/source/numerics/dof_output_operator.cc:26:
install_dir/dealii/dealii-source/include/deal.II/lac/trilinos_vector.h: In 
member function ‘dealii::TrilinosWrappers::MPI::Vector::value_type* 
dealii::TrilinosWrappers::MPI::Vector::begin()’:
install_dir/dealii/dealii-source/include/deal.II/lac/trilinos_vector.h:1525:25: 
error: cannot convert ‘double*’ to 
‘dealii::TrilinosWrappers::MPI::Vector::iterator’ {aka 
‘std::complex<double>*’} in return
 1525 |   return (*vector)[0];

suggesting a hard dependency on double somewhere else.
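
The failing conversion boils down to the following (a compilable illustration, not 
the actual deal.II code): Epetra stores and hands out raw double, independent of 
what TrilinosScalar is set to, so changing the typedef alone cannot work:

#include <complex>
#include <type_traits>

// what Epetra_Vector exposes through operator[] / Values():
using epetra_value_type = double;
// what TrilinosScalar was changed to:
using changed_scalar    = std::complex<double>;

static_assert(!std::is_convertible<epetra_value_type *, changed_scalar *>::value,
              "a double* cannot be treated as a std::complex<double>*");

int main() { return 0; }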

Pascal Kraft wrote on Thursday, July 23, 2020 at 18:42:47 UTC+2:

> Dear Deal.II devs and users,
>
> In the latest release a lot of (great) work has been done to make complex 
> numbers more of a first-class citizen in deal, which has made my code a lot 
> more readable. Currently, I am stuck with one problem, though. Are there 
> any distributed datatypes for matrices that accept complex numbers?
>
> The dealii sparse matrix implementation is a template and allows complex 
> numbers - however that implementation has no MPI functionality, which I 
> need.
>
> The PETSc SparseMatrix and Trilinos SparseMatrix classes are not templates. In 
> the types header I found the declaration of TrilinosScalar as double but 
> changing it and recompiling dealii with the changed header threw an error. 
>
> I have Trilinos compiled with support for complex numbers and also 
> searched through the LinearAlgebra documentation.
>
> I require GMRES as a solver (which should be possible, because the GMRES 
> Versions all use a templated Vector which can take complex components) and 
> MPI distribution of a sparse system. I have so far only seen FullMatrix to 
> accept complex numbers.
>
> Can anyone give me a pointer on what is possible?
>
> Kind regards,
> Pascal Kraft
>



[deal.II] Complex-valued distributed matrices in dealii

2020-07-23 Thread Pascal Kraft
Dear Deal.II devs and users,

In the latest release a lot of (great) work has been done to make complex 
numbers more of a first-class citizen in deal, which has made my code a lot 
more readable. Currently, I am stuck with one problem, though. Are there 
any distributed datatypes for matrices that accept complex numbers?

The dealii sparse matrix implementation is a template and allows complex 
numbers - however that implementation has no MPI functionality, which I 
need.

The PETSc SparseMatrix and Trilinos SparseMatrix classes are not templates. In the 
types header I found the declaration of TrilinosScalar as double but 
changing it and recompiling dealii with the changed header threw an error. 

I have Trilinos compiled with support for complex numbers and also 
searched through the LinearAlgebra documentation.

I require GMRES as a solver (which should be possible, because the GMRES 
Versions all use a templated Vector which can take complex components) and 
MPI distribution of a sparse system. I have so far only seen FullMatrix to 
accept complex numbers.

Can anyone give me a pointer on what is possible?

Kind regards,
Pascal Kraft



Re: [deal.II] Several questions about mesh generation and distributed triangulations

2019-01-15 Thread Pascal Kraft
Dear Wolfgang,

thank you for your reply! I only noticed it now, a while later, since I had 
thought the topic was dead back when I posted it. Thank you for your time 
and effort!
Remarks on your response below:

On Tuesday, October 30, 2018 at 17:26:21 UTC+1, Wolfgang Bangerth wrote:
>
>
> Pascal, 
> I don't think anyone responded to your email here: 
>
> > I will try to be as short as possible 
>
> That was only moderately successful ;-)) 
>
>
> > - if more details are required 
> > feel free to ask. Also I offer to submit all mesh generation code I 
> > create in the future, since others might have similar needs at some 
> point. 
>
> We would of course love to include this! 
>
I will, once I am satisfied with my solution, offer some examples and make 
them available. 

>
>
> > I work on a 3d mesh with purely axis-parallel edges. The mesh is a 
> > 2d-mesh (say in x and y direction) extruded to 3d (z-direction). Due to 
> > the scheme I use it is required that the distributed version of this 
> > mesh be distributed as intervals along the z-axis (process 0 has all 
> > cells with cell->center()[2] in [0,1], process 1 has all cells with 
> > cell->center()[2] in (1,2] and so on.) 
> > What I did originally was simply generating a mesh consisting of 
> > n_processes cells, let that mesh be auto partitioned, then turning 
> > partitioning off 
>
> Out of curiosity, how do you do this? 
>
I start with a p::d::Triangulation with 
parallel::distributed::Triangulation<3>::Settings::no_automatic_repartitioning 
and hand it to GridGenerator::subdivided_parallelepiped<3, 3>. In this call 
I use the version that also takes a vector of subdivisions per dimension 
and hand it the vector [1, 1, n_processes]. I then manually call 
tria->repartition(), which gives me an ordered triangulation where the cells 
are partitioned according to their z-coordinates. I can then perform 
refinement and the basic structure remains the same because automatic 
repartitioning is turned off. The only real downside to this (apart from 
feeling really weird and hacky) is that I am now stuck with a p::d::Triangulation, 
which doesn't allow anisotropic refinement (which would be nice to have), and I 
cannot use a starting mesh other than the one with one cell per processor, 
because then the partitioning done by p4est is no longer easy to predict.
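
In code, that workaround looks roughly like the following sketch (I use 
GridGenerator::subdivided_hyper_rectangle here instead of subdivided_parallelepiped, 
which for an axis-parallel box gives the same mesh with a shorter call; the extents 
and the number of global refinements are placeholders):

#include <deal.II/base/mpi.h>
#include <deal.II/base/point.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/grid/grid_generator.h>

void make_layered_mesh(const MPI_Comm mpi_comm)
{
  using namespace dealii;

  const unsigned int n_processes = Utilities::MPI::n_mpi_processes(mpi_comm);

  parallel::distributed::Triangulation<3> tria(
    mpi_comm,
    Triangulation<3>::none,
    parallel::distributed::Triangulation<3>::no_automatic_repartitioning);

  // One coarse cell per process, stacked along the z-axis.
  const std::vector<unsigned int> repetitions = {1U, 1U, n_processes};
  GridGenerator::subdivided_hyper_rectangle(
    tria,
    repetitions,
    Point<3>(0, 0, 0),
    Point<3>(1, 1, static_cast<double>(n_processes)));

  // With one coarse cell per rank, p4est's space-filling-curve partition puts
  // exactly one z-layer on each rank; no_automatic_repartitioning then freezes
  // that assignment under refinement.
  tria.repartition();

  tria.refine_global(3);
}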

>
>
> > So my questions would be: 
> > 1. given a 2D-mesh, what is the easiest way to get a distributed 
> > 3d-extrusion with the partitioning described above?  (auto-partitioning 
> > generates balls in this mesh, not cuboids with z-orthogonal 
> > process-to-process interfaces) One suggestion for a function here would 
> > be a function to transform a shared to a distributed mesh because on a 
> > shared mesh I could set the subdomain Ids and then just call that 
> > function when I'm done 
>
> You can't do this for a parallel::distributed::Triangulation. That's 
> because the partitioning is done in p4est and p4est orders cells in the 
> depth-first tree ordering along a (space filling) curve and then just 
> subdivides it by however many processors you have. 
>
> The only control you have with p4est is how much weight each cell has in 
> this partition, but you can't say that cutting between cells A and B 
> should be considered substantially worse than cutting between cells C 
> and D -- which is what you are saying: Cutting cells into partitions is 
> totally ok if the partition boundary runs between copies of the base 2d 
> mesh, but should be prohibited if the cut is between cells within a copy. 
>
> Other partitioning algorithms allow this sort of thing. The partition 
> graphs (where each node is a cell, and an edge the connection between 
> cells). Their goal is to find partitions of roughly equal weight (where 
> the weight of a partition is the sum of its node weights) while trying 
> to minimize the cost (where the cost is the sum of the edge weights over 
> all cut edges). If you were to assign very small weights to all edges 
> between cells of different copies, and large weights to edges within a 
> copy, then you would get what you want. 
>
> It should not be terribly difficult to do this with 
> parallel::shared::Triangulation since there we use regular graph 
> partitioning algorithms. But I don't see how it is possible for 
> parallel::distributed::Triangulation. 
>
OK, that is what I feared. p::s::T would work but I will keep looking for 
a solution with p::d::T since this application should scale to huge meshes 
eventually. 

>
> > 2. say I have a layer of faces in that mesh (in the interior) and I need 
> > nedelec elements and nodal elements on these faces (codimension 1) to 
> > evaluate their shape function and gradients in quadrature points of a 
> > 2D-quadrature on the faces. What is the best way to do this? (if it was 
> > a boundary I could give a boundary id and call 
> > GridGenerator::extract_boundary_mesh, but it's not always on the 
> > boundary.) 
>
> I don't think it 

[deal.II] Re: Several questions about mesh generation and distributed triangulations

2018-10-18 Thread Pascal Kraft
I am by the way aware of the parallel::distributed::Triangulation 
constructor which accepts another Triangulation as an argument - however 
this calls repartitioning and is therefore no solution to the problem.
Also weights could be passed to influence the partitioning, but determining 
weights such that the partitioning is exactly the one I want would be hard 
and it would depend massively on the algorithm used to compute the 
partitioning, which should be kept as a black box. This would also be a very 
error-prone workaround.

On Thursday, October 18, 2018 at 13:54:13 UTC+2, Pascal Kraft wrote:
>
> I will try to be as short as possible - if more details are required feel 
> free to ask. Also I offer to submit all mesh generation code I create in 
> the future, since others might have similar needs at some point.
>
> I work on a 3d mesh with purely axis-parallel edges. The mesh is a 2d-mesh 
> (say in x and y direction) extruded to 3d (z-direction). Due to the scheme 
> I use it is required, that the distributed version of this mesh be 
> distributed as intervals along the z-axis ( process 0 has all cells with 
> cell->center()[2] in [0,1], process 1 has all cells with cell->center()[2] 
> in (1,2] and so on.)
> What I did originally was simply generating a mesh consisting of 
> n_processes cells, let that mesh be auto partitioned, then turning 
> partitioning off and then using global refinement of marked cells to 
> generate the right structure inside these cells for each processor. This 
> however feels like a very elaborate workaround and the lack of anisotropic 
> refinement for distributed meshes is a heavy restriction here. However, 
> this seemed to be a feasible workaround for the time being.
> Recently a new problem has arisen however: For the construction of a 
> blockwise, parallel preconditioner for a sweeping method I now need 
> codimension 1 meshes of the process-to-process mesh-interfaces (again a 
> simple copy of the theoretical 2D-mesh, which was extruded to generate the 
> 3d mesh, if that were possible) because I need nodal and Nedelec-elements 
> on these 2D-interfaces for the computation of some integrals.
> So my questions would be:
> 1. given a 2D-mesh, what is the easiest way to get a distributed 
> 3d-extrusion with the partitioning described above?  (auto-partitioning 
> generates balls in this mesh, not cuboids with z-orthogonal 
> process-to-process interfaces) One suggestion for a function here would be 
> a function to transform a shared to a distributed mesh because on a shared 
> mesh I could set the subdomain Ids and then just call that function when 
> I'm done
> 2. say I have a layer of faces in that mesh (in the interior) and I need 
> nedelec elements and nodal elements on these faces (codimension 1) to 
> evaluate their shape function and gradients in quadrature points of a 
> 2D-quadrature on the faces. What is the best way to do this? (if it was a 
> boundary I could give a boundary id and call 
> GridGenerator::extract_boundary_mesh, but it's not always on the boundary.)
> 3. There is a description on how to manually generate a mesh which seems 
> easy enough in my case. How does this work for a distributed mesh? Is the 
> only option to generate a mesh and then auto-partition it, or can I somehow 
> define the partitioning in the generation phase similar to the way I could 
> set subdomain Ids in a shared parallel mesh?
> 4. What can I do to extend the existing functionality? Since the memory 
> consumption of my code (and most likely most codes) is low during mesh 
> generation, first generating a shared mesh and then distributing it would 
> not be a problem (compared to keeping the shared mesh during computation 
> when the matrices and vectors also take up a lot of the memory). Do you 
> consider such a function "easy" to implement?
>
> Thank you for your time!
> Pascal
>
>
>



[deal.II] Several questions about mesh generation and distributed triangulations

2018-10-18 Thread Pascal Kraft
I will try to be as short as possible - if more details are required feel 
free to ask. Also I offer to submit all mesh generation code I create in 
the future, since others might have similar needs at some point.

I work on a 3d mesh with purely axis-parallel edges. The mesh is a 2d-mesh 
(say in x and y direction) extruded to 3d (z-direction). Due to the scheme 
I use it is required that the distributed version of this mesh be 
distributed as intervals along the z-axis (process 0 has all cells with 
cell->center()[2] in [0,1], process 1 has all cells with cell->center()[2] 
in (1,2] and so on.)
What I did originally was simply generating a mesh consisting of 
n_processes cells, let that mesh be auto partitioned, then turning 
partitioning off and then using global refinement of marked cells to 
generate the right structure inside these cells for each processor. This 
however feels like a very elaborate workaround and the lack of anisotropic 
refinement for distributed meshes is a heavy restriction here. However, 
this seemed to be a feasible workaround for the time being.
Recently a new problem has arisen however: For the construction of a 
blockwise, parallel preconditioner for a sweeping method I now need 
codimension 1 meshes of the process-to-process mesh-interfaces (again a 
simple copy of the theoretical 2D-mesh, which was extruded to generate the 
3d mesh, if that were possible) because I need nodal and Nedelec-elements 
on these 2D-interfaces for the computation of some integrals.
So my questions would be:
1. given a 2D-mesh, what is the easiest way to get a distributed 
3d-extrusion with the partitioning described above?  (auto-partitioning 
generates balls in this mesh, not cuboids with z-orthogonal 
process-to-process interfaces) One suggestion for a function here would be 
a function to transform a shared to a distributed mesh because on a shared 
mesh I could set the subdomain Ids and then just call that function when 
I'm done
2. say I have a layer of faces in that mesh (in the interior) and I need 
nedelec elements and nodal elements on these faces (codimension 1) to 
evaluate their shape function and gradients in quadrature points of a 
2D-quadrature on the faces. What is the best way to do this? (if it was a 
boundary I could give a boundary id and call 
GridGenerator::extract_boundary_mesh, but it's not always on the boundary.)
3. There is a description on how to manually generate a mesh which seems 
easy enough in my case. How does this work for a distributed mesh? Is the 
only option to generate a mesh and then auto-partition it, or can I somehow 
define the partitioning in the generation phase similar to the way I could 
set subdomain Ids in a shared parallel mesh?
4. What can I do to extend the existing functionality? Since the memory 
consumption of my code (and most likely most codes) is low during mesh 
generation, first generating a shared mesh and then distributing it would 
not be a problem (compared to keeping the shared mesh during computation 
when the matrices and vectors also take up a lot of the memory). Do you 
consider such a function "easy" to implement?

Thank you for your time!
Pascal
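
PS: For what it's worth, the ownership rule described above (process k owns the 
cells with center z in (k, k+1]) is trivial to express on a shared mesh by setting 
subdomain ids by hand; a minimal sketch, assuming unit-thickness layers:

#include <deal.II/grid/tria.h>
#include <cmath>

template <int dim>
void assign_z_layers(dealii::Triangulation<dim> &tria)
{
  for (const auto &cell : tria.active_cell_iterators())
    {
      // layer index = integer part of the z-coordinate of the cell center
      const unsigned int layer =
        static_cast<unsigned int>(std::floor(cell->center()[dim - 1]));
      cell->set_subdomain_id(layer);
    }
}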




[deal.II] Re: Error during configuration since 9.0.0

2018-05-16 Thread Pascal Kraft
I think the problem is the following: In the Ubuntu 18.04 package sources, 
the libscalapack package is built against OpenMPI 2.0 (libscalapack-openmpi2.0), 
while my system has OpenMPI 2.1.1 installed. I guess that causes the problems. 
I also checked that the problems only occur if MPI=ON and SCALAPACK=ON; 
otherwise everything is fine.
So I guess this problem will appear for everyone who uses the packages 
"openmpi-bin" and "libscalapack-openmpi2.0". 
Thank you all for your time :)

On Tuesday, May 15, 2018 at 19:38:03 UTC+2, Pascal Kraft wrote:
>
> Dear Deal.ii devs,
>
> first off: Thanks for your great work and the many new features in 9.0.0!
> I have compiled 9.0.0 successfully on a cluster however there seems to be 
> some kind of bug on my desktop (Ubuntu 18.04) where the configuration of 
> 8.5.1 works completely fine.
> I run my cmake command and all dependencies are found. Then the test suite 
> for compiler flags crashes stating:
>
> ...
>
>> -- Include /home/kraft/Downloads/dealii-9.0.0/cmake/setup_finalize.cmake
>> CMake Error at cmake/setup_finalize.cmake:95 (MESSAGE):
>>   
>> Configuration error: Cannot compile a test program with the final set 
>> of
>> compiler and linker flags:
>>   CXX flags (DEBUG): -pedantic -fPIC -Wall -Wextra -Wpointer-arith 
>> -Wwrite-strings -Wsynth -Wsign-compare -Wswitch -Woverloaded-virtual 
>> -Wno-placement-new -Wno-deprecated-declarations -Wno-literal-suffix 
>> -fopenmp-simd -std=c++17 -pthread -Wno-unused-local-typedefs -Og -ggdb 
>> -Wa,--compress-debug-sections
>>   LD flags  (DEBUG): -Wl,--as-needed -rdynamic -pthread -pthread -ggdb
>>   LIBRARIES (DEBUG): 
>> /usr/lib/x86_64-linux-gnu/libtbb.so;/usr/lib/x86_64-linux-gnu/libz.so;/usr/lib/x86_64-linux-gnu/libboost_iostreams.so;/usr/lib/x86_64-linux-gnu/libboost_serialization.so;/usr/lib/x86_64-linux-gnu/libboost_system.so;/usr/lib/x86_64-linux-gnu/libboost_thread.so;/usr/lib/x86_64-linux-gnu/libboost_regex.so;/usr/lib/x86_64-linux-gnu/libboost_chrono.so;/usr/lib/x86_64-linux-gnu/libboost_date_time.so;/usr/lib/x86_64-linux-gnu/libboost_atomic.so;pthread;/home/kraft/trilinos/lib/libmuelu-adapters.so;/home/kraft/trilinos/lib/libmuelu-interface.so;/home/kraft/trilinos/lib/libmuelu.so;/home/kraft/trilinos/lib/libteko.so;/home/kraft/trilinos/lib/libstratimikos.so;/home/kraft/trilinos/lib/libstratimikosbelos.so;/home/kraft/trilinos/lib/libstratimikosaztecoo.so;/home/kraft/trilinos/lib/libstratimikosamesos.so;/home/kraft/trilinos/lib/libstratimikosml.so;/home/kraft/trilinos/lib/libstratimikosifpack.so;/home/kraft/trilinos/lib/libifpack2-adapters.so;/home/kraft/trilinos/lib/libifpack2.so;/home/kraft/trilinos/lib/libzoltan2.so;/home/kraft/trilinos/lib/libanasazitpetra.so;/home/kraft/trilinos/lib/libModeLaplace.so;/home/kraft/trilinos/lib/libanasaziepetra.so;/home/kraft/trilinos/lib/libanasazi.so;/home/kraft/trilinos/lib/libbelostpetra.so;/home/kraft/trilinos/lib/libbelosepetra.so;/home/kraft/trilinos/lib/libbelos.so;/home/kraft/trilinos/lib/libml.so;/home/kraft/trilinos/lib/libifpack.so;/home/kraft/trilinos/lib/libpamgen_extras.so;/home/kraft/trilinos/lib/libpamgen.so;/home/kraft/trilinos/lib/libamesos2.so;/home/kraft/trilinos/lib/libamesos.so;/home/kraft/trilinos/lib/libgaleri-xpetra.so;/home/kraft/trilinos/lib/libgaleri.so;/home/kraft/trilinos/lib/libaztecoo.so;/home/kraft/trilinos/lib/libisorropia.so;/home/kraft/trilinos/lib/libxpetra-sup.so;/home/kraft/trilinos/lib/libxpetra.so;/home/kraft/trilinos/lib/libthyratpetra.so;/home/kraft/trilinos/lib/libthyraepetraext.so;/home/kraft/trilinos/lib/libthyraepetra.so;/home/kraft/trilinos/lib/libthyracore.so;/home/kraft/trilinos/lib/libepetraext.so;/home/kraft/trilinos/lib/libtpetraext.so;/home/kraft/trilinos/lib/libtpetrainout.so;/home/kraft/trilinos/lib/libtpetra.so;/home/kraft/trilinos/lib/libkokkostsqr.so;/home/kraft/trilinos/lib/libtpetrakernels.so;/home/kraft/trilinos/lib/libtpetraclassiclinalg.so;/home/kraft/trilinos/lib/libtpetraclassicnodeapi.so;/home/kraft/trilinos/lib/libtpetraclassic.so;/home/kraft/trilinos/lib/libtriutils.so;/home/kraft/trilinos/lib/libzoltan.so;/home/kraft/trilinos/lib/libepetra.so;/home/kraft/trilinos/lib/libsacado.so;/home/kraft/trilinos/lib/librtop.so;/home/kraft/trilinos/lib/libteuchoskokkoscomm.so;/home/kraft/trilinos/lib/libteuchoskokkoscompat.so;/home/kraft/trilinos/lib/libteuchosremainder.so;/home/kraft/trilinos/lib/libteuchosnumerics.so;/home/kraft/trilinos/lib/libteuchoscomm.so;/home/kraft/trilinos/lib/libteuchosparameterlist.so;/home/kraft/trilinos/lib/libteuchoscore.so;/home/kraft/trilinos/lib/libkokkosalgorithms.so;/home/kraft/trilinos/lib/libkokkoscontainers.so;/home/kraft/trilinos/lib/libkokkoscore.so;/home/kraft/trilinos/lib/libgtest.so;/usr/lib/x86_64-linux-gnu/liblapack.so;/usr/lib/x86_64-linux-gnu/libblas.so;/usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi_cxx.so;/usr/lib/x86_64-linux-gnu/libumfpa

[deal.II] Re: Error during configuration since 9.0.0

2018-05-16 Thread Pascal Kraft
Configuration now works if I explicitly switch ScaLAPACK off 
(-DDEAL_II_WITH_SCALAPACK=OFF)... I will try to find out why.
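
For anyone searching for this later, a sketch of a configure call with that 
workaround (paths as they appear in the logs below; any other flags stay as they 
are):

cd /home/kraft/Downloads/dealbuild
cmake -DDEAL_II_WITH_MPI=ON \
      -DDEAL_II_WITH_TRILINOS=ON -DTRILINOS_DIR=/home/kraft/trilinos \
      -DDEAL_II_WITH_SCALAPACK=OFF \
      ../dealii-9.0.0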

On Tuesday, May 15, 2018 at 19:38:03 UTC+2, Pascal Kraft wrote:
>
> Dear Deal.ii devs,
>
> first off: Thanks for your great work and the many new features in 9.0.0!
> I have compiled 9.0.0 successfully on a cluster however there seems to be 
> some kind of bug on my desktop (Ubuntu 18.04) where the configuration of 
> 8.5.1 works completely fine.
> I run my cmake command and all dependencies are found. Then the test suite 
> for compiler flags crashes stating:
>
> ...
>
>> -- Include /home/kraft/Downloads/dealii-9.0.0/cmake/setup_finalize.cmake
>> CMake Error at cmake/setup_finalize.cmake:95 (MESSAGE):
>>   
>> Configuration error: Cannot compile a test program with the final set 
>> of
>> compiler and linker flags:
>>   CXX flags (DEBUG): -pedantic -fPIC -Wall -Wextra -Wpointer-arith 
>> -Wwrite-strings -Wsynth -Wsign-compare -Wswitch -Woverloaded-virtual 
>> -Wno-placement-new -Wno-deprecated-declarations -Wno-literal-suffix 
>> -fopenmp-simd -std=c++17 -pthread -Wno-unused-local-typedefs -Og -ggdb 
>> -Wa,--compress-debug-sections
>>   LD flags  (DEBUG): -Wl,--as-needed -rdynamic -pthread -pthread -ggdb
>>   LIBRARIES (DEBUG): 
>> /usr/lib/x86_64-linux-gnu/libtbb.so;/usr/lib/x86_64-linux-gnu/libz.so;/usr/lib/x86_64-linux-gnu/libboost_iostreams.so;/usr/lib/x86_64-linux-gnu/libboost_serialization.so;/usr/lib/x86_64-linux-gnu/libboost_system.so;/usr/lib/x86_64-linux-gnu/libboost_thread.so;/usr/lib/x86_64-linux-gnu/libboost_regex.so;/usr/lib/x86_64-linux-gnu/libboost_chrono.so;/usr/lib/x86_64-linux-gnu/libboost_date_time.so;/usr/lib/x86_64-linux-gnu/libboost_atomic.so;pthread;/home/kraft/trilinos/lib/libmuelu-adapters.so;/home/kraft/trilinos/lib/libmuelu-interface.so;/home/kraft/trilinos/lib/libmuelu.so;/home/kraft/trilinos/lib/libteko.so;/home/kraft/trilinos/lib/libstratimikos.so;/home/kraft/trilinos/lib/libstratimikosbelos.so;/home/kraft/trilinos/lib/libstratimikosaztecoo.so;/home/kraft/trilinos/lib/libstratimikosamesos.so;/home/kraft/trilinos/lib/libstratimikosml.so;/home/kraft/trilinos/lib/libstratimikosifpack.so;/home/kraft/trilinos/lib/libifpack2-adapters.so;/home/kraft/trilinos/lib/libifpack2.so;/home/kraft/trilinos/lib/libzoltan2.so;/home/kraft/trilinos/lib/libanasazitpetra.so;/home/kraft/trilinos/lib/libModeLaplace.so;/home/kraft/trilinos/lib/libanasaziepetra.so;/home/kraft/trilinos/lib/libanasazi.so;/home/kraft/trilinos/lib/libbelostpetra.so;/home/kraft/trilinos/lib/libbelosepetra.so;/home/kraft/trilinos/lib/libbelos.so;/home/kraft/trilinos/lib/libml.so;/home/kraft/trilinos/lib/libifpack.so;/home/kraft/trilinos/lib/libpamgen_extras.so;/home/kraft/trilinos/lib/libpamgen.so;/home/kraft/trilinos/lib/libamesos2.so;/home/kraft/trilinos/lib/libamesos.so;/home/kraft/trilinos/lib/libgaleri-xpetra.so;/home/kraft/trilinos/lib/libgaleri.so;/home/kraft/trilinos/lib/libaztecoo.so;/home/kraft/trilinos/lib/libisorropia.so;/home/kraft/trilinos/lib/libxpetra-sup.so;/home/kraft/trilinos/lib/libxpetra.so;/home/kraft/trilinos/lib/libthyratpetra.so;/home/kraft/trilinos/lib/libthyraepetraext.so;/home/kraft/trilinos/lib/libthyraepetra.so;/home/kraft/trilinos/lib/libthyracore.so;/home/kraft/trilinos/lib/libepetraext.so;/home/kraft/trilinos/lib/libtpetraext.so;/home/kraft/trilinos/lib/libtpetrainout.so;/home/kraft/trilinos/lib/libtpetra.so;/home/kraft/trilinos/lib/libkokkostsqr.so;/home/kraft/trilinos/lib/libtpetrakernels.so;/home/kraft/trilinos/lib/libtpetraclassiclinalg.so;/home/kraft/trilinos/lib/libtpetraclassicnodeapi.so;/home/kraft/trilinos/lib/libtpetraclassic.so;/home/kraft/trilinos/lib/libtriutils.so;/home/kraft/trilinos/lib/libzoltan.so;/home/kraft/trilinos/lib/libepetra.so;/home/kraft/trilinos/lib/libsacado.so;/home/kraft/trilinos/lib/librtop.so;/home/kraft/trilinos/lib/libteuchoskokkoscomm.so;/home/kraft/trilinos/lib/libteuchoskokkoscompat.so;/home/kraft/trilinos/lib/libteuchosremainder.so;/home/kraft/trilinos/lib/libteuchosnumerics.so;/home/kraft/trilinos/lib/libteuchoscomm.so;/home/kraft/trilinos/lib/libteuchosparameterlist.so;/home/kraft/trilinos/lib/libteuchoscore.so;/home/kraft/trilinos/lib/libkokkosalgorithms.so;/home/kraft/trilinos/lib/libkokkoscontainers.so;/home/kraft/trilinos/lib/libkokkoscore.so;/home/kraft/trilinos/lib/libgtest.so;/usr/lib/x86_64-linux-gnu/liblapack.so;/usr/lib/x86_64-linux-gnu/libblas.so;/usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi_cxx.so;/usr/lib/x86_64-linux-gnu/libumfpack.so;/usr/lib/x86_64-linux-gnu/libcholmod.so;/usr/lib/x86_64-linux-gnu/libccolamd.so;/usr/lib/x86_64-linux-gnu/libcolamd.so;/usr/lib/x86_64-linux-gnu/libcamd.so;/usr/lib/x86_64-linux-gnu/libsuitesparseconfig.so;/usr/lib/x86_64-linux-gnu/libamd.so;/usr/lib/x86_64-linux-gnu/libmetis.so;rt;/usr/lib/x86_64-linux-gnu/hdf5/openm
pi/lib/libhdf5_hl.so;/usr/lib

[deal.II] Re: Error during configuration since 9.0.0

2018-05-16 Thread Pascal Kraft
I have now tried compiling the code from the last error. First I simply 
used mpicc, which worked fine. The command 
/usr/bin/c++ -DMPI_WORKING_COMPILER -pedantic -fPIC -Wall -Wextra 
-Wpointer-arith -Wwrite-strings -Wsynth -Wsign-compare -Wswitch 
-Woverloaded-virtual -Wno-placement-new -Wno-deprecated-declarations 
-Wno-literal-suffix -fopenmp-simd -std=c++17  -pthread   -o ./b.o -c ./b.cpp
also worked but 
/usr/bin/c++   -DMPI_WORKING_COMPILER -pedantic -fPIC -Wall -Wextra 
-Wpointer-arith -Wwrite-strings -Wsynth -Wsign-compare -Wswitch 
-Woverloaded-virtual -Wno-placement-new -Wno-deprecated-declarations 
-Wno-literal-suffix -fopenmp-simd -std=c++17  -pthread-rdynamic ./b.o  
-o b -Wl,-rpath,/usr/lib/x86_64-linux-gnu/openmpi/lib -rdynamic 
-fuse-ld=gold  -pthread -lm 
/usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi_cxx.so 
/usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi_usempif08.so 
/usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi_usempi_ignore_tkr.so 
/usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi_mpifh.so 
/usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi.so
failed with the same error messages as in the error log, aka
/usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi_cxx.so: error: undefined 
reference to 'opal_list_item_t_class'
/usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi_cxx.so: error: undefined 
reference to 'opal_class_initialize'
/usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi_cxx.so: error: undefined 
reference to 'opal_uses_threads'
I found this dealii issue referencing the problem: 
https://github.com/dealii/dealii/issues/2820 but there seems to be no 
proposed solution and I find that weird since, as mentioned before, dealii 
8.5.1 runs completely fine.
I have then removed the gold linker. I did this by commenting out the line 
"ADD_FLAGS(DEAL_II_LINKER_FLAGS "-fuse-ld=gold")" in 
cmake/checks/check_01_compiler_features.cmake
The error seems to be resolved now. However, all the others remain from what 
I can see. The last error is now this:

Performing C++ SOURCE FILE Test DEAL_II_HAVE_USABLE_FLAGS_DEBUG failed with 
the following output:
Change Dir: /home/kraft/Downloads/dealbuild/CMakeFiles/CMakeTmp

Run Build Command:"/usr/bin/make" "cmTC_1c675/fast"
/usr/bin/make -f CMakeFiles/cmTC_1c675.dir/build.make 
CMakeFiles/cmTC_1c675.dir/build
make[1]: Entering directory 
'/home/kraft/Downloads/dealbuild/CMakeFiles/CMakeTmp'
Building CXX object CMakeFiles/cmTC_1c675.dir/src.cxx.o
/usr/bin/c++ -DDEAL_II_HAVE_USABLE_FLAGS_DEBUG -pedantic -fPIC -Wall 
-Wextra -Wpointer-arith -Wwrite-strings -Wsynth -Wsign-compare -Wswitch 
-Woverloaded-virtual -Wno-placement-new -Wno-deprecated-declarations 
-Wno-literal-suffix -fopenmp-simd -std=c++17 -pthread 
-Wno-unused-local-typedefs -Og -ggdb -Wa,--compress-debug-sections   -o 
CMakeFiles/cmTC_1c675.dir/src.cxx.o -c 
/home/kraft/Downloads/dealbuild/CMakeFiles/CMakeTmp/src.cxx
make[1]: *** No rule to make target '/usr/lib/libscalapack.so', needed by 
'cmTC_1c675'.  Stop.
make[1]: Leaving directory 
'/home/kraft/Downloads/dealbuild/CMakeFiles/CMakeTmp'
Makefile:126: recipe for target 'cmTC_1c675/fast' failed
make: *** [cmTC_1c675/fast] Error 2

Source file was:
int main(){ return 0; }

However a simple call of 
/usr/bin/c++ -pedantic -fPIC -Wall -Wextra -Wpointer-arith -Wwrite-strings 
-Wsynth -Wsign-compare -Wswitch -Woverloaded-virtual -Wno-placement-new 
-Wno-deprecated-declarations -Wno-literal-suffix -fopenmp-simd -std=c++17 
-pthread -Wno-unused-local-typedefs -Og -ggdb 
-Wa,--compress-debug-sections   -o /home/kraft/temp/test/b_b.o -c 
/home/kraft/temp/test/b.cpp
yields no errors (code in b.cpp is int main(){ return 0; }).
So am I right in suspecting that there might be an error in the 
-DDEAL_II_HAVE_USABLE_FLAGS_DEBUG test?
Do you have any suggestions on what I could try next?

On Tuesday, May 15, 2018 at 19:38:03 UTC+2, Pascal Kraft wrote:
>
> Dear Deal.ii devs,
>
> first off: Thanks for your great work and the many new features in 9.0.0!
> I have compiled 9.0.0 successfully on a cluster however there seems to be 
> some kind of bug on my desktop (Ubuntu 18.04) where the configuration of 
> 8.5.1 works completely fine.
> I run my cmake command and all dependencies are found. Then the test suite 
> for compiler flags crashes stating:
>
> ...
>
>> -- Include /home/kraft/Downloads/dealii-9.0.0/cmake/setup_finalize.cmake
>> CMake Error at cmake/setup_finalize.cmake:95 (MESSAGE):
>>   
>> Configuration error: Cannot compile a test program with the final set 
>> of
>> compiler and linker flags:
>>   CXX flags (DEBUG): -pedantic -fPIC -Wall -Wextra -Wpointer-arith 
>> -Wwrite-strings -Wsynth -Wsign-compare -Wswitch -Woverloaded-virtual 
>> -Wno-placement-new -Wno-deprecated-declarations -Wno-literal-suffix 
>> -fopenmp-simd -std=c++17 -pthread -Wno-unused-local-typedefs -Og -ggdb 
>> -Wa,--compress-debug-sections
>>

Re: [deal.II] Error during configuration since 9.0.0

2018-05-15 Thread Pascal Kraft
Dear Timo,

I can check that tomorrow when I'm on that machine again. However, to make 
certain that it wasn't a mistake with my setup, I downloaded dealii 8.5.1 
after the problem occurred and configured and built it without errors using 
the exact same cmake command. 
In the past (I will also be able to check this tomorrow) I also ran 
mpi-enabled codes of mine on that machine. Could there still be a problem 
with my openmpi-install?

With kind regards,
Pascal Kraft
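
PS: For reference, the kind of minimal MPI test Timo asks about below would be 
something like this (compile with mpicxx, or with the c++ + libmpi link line from 
the log):

#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);

  int rank = 0, size = 1;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  std::printf("Hello from rank %d of %d\n", rank, size);

  MPI_Finalize();
  return 0;
}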

On Tuesday, May 15, 2018 at 21:30:12 UTC+2, Timo Heister wrote:
>
> You need to look at the last error in CMakeError.log: 
>
> > Linking CXX executable cmTC_7254e 
> > /usr/bin/cmake -E cmake_link_script CMakeFiles/cmTC_7254e.dir/link.txt 
> --verbose=1 
> > /usr/bin/c++   -DMPI_WORKING_COMPILER -pedantic -fPIC -Wall -Wextra 
> -Wpointer-arith -Wwrite-strings -Wsynth -Wsign-compare -Wswitch 
> -Woverloaded-virtual -Wno-placement-new -Wno-deprecated-declarations 
> -Wno-literal-suffix -fopenmp-simd -std=c++17  -pthread-rdynamic 
> CMakeFiles/cmTC_7254e.dir/src.cxx.o  -o cmTC_7254e 
> -Wl,-rpath,/usr/lib/x86_64-linux-gnu/openmpi/lib -rdynamic -fuse-ld=gold 
>  -pthread -lm /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi_cxx.so 
> /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi_usempif08.so 
> /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi_usempi_ignore_tkr.so 
> /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi_mpifh.so 
> /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi.so 
> > /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi_cxx.so: error: undefined 
> reference to 'opal_list_item_t_class' 
> > /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi_cxx.so: error: undefined 
> reference to 'opal_class_initialize' 
> > /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi_cxx.so: error: undefined 
> reference to 'opal_uses_threads' 
>
> It looks like a problem with the MPI-installation/detection. Can you 
> link a simple hello-mpi example? 
>
>
>
>
>
> On Tue, May 15, 2018 at 7:38 PM, Pascal Kraft <kraft@gmail.com 
> > wrote: 
> > Dear Deal.ii devs, 
> > 
> > first off: Thanks for your great work and the many new features in 
> 9.0.0! 
> > I have compiled 9.0.0 successfully on a cluster however there seems to 
> be 
> > some kind of bug on my desktop (Ubuntu 18.04) where the configuration of 
> > 8.5.1 works completely fine. 
> > I run my cmake command and all dependencies are found. Then the test 
> > suite 
> > for compiler flags crashes stating: 
> > 
> > ... 
> >> 
> >> -- Include 
> /home/kraft/Downloads/dealii-9.0.0/cmake/setup_finalize.cmake 
> >> CMake Error at cmake/setup_finalize.cmake:95 (MESSAGE): 
> >> 
> >> Configuration error: Cannot compile a test program with the final 
> set 
> >> of 
> >> compiler and linker flags: 
> >>   CXX flags (DEBUG): -pedantic -fPIC -Wall -Wextra -Wpointer-arith 
> >> -Wwrite-strings -Wsynth -Wsign-compare -Wswitch -Woverloaded-virtual 
> >> -Wno-placement-new -Wno-deprecated-declarations -Wno-literal-suffix 
> >> -fopenmp-simd -std=c++17 -pthread -Wno-unused-local-typedefs -Og -ggdb 
> >> -Wa,--compress-debug-sections 
> >>   LD flags  (DEBUG): -Wl,--as-needed -rdynamic -pthread -pthread 
> -ggdb 
> >>   LIBRARIES (DEBUG): 
> >> 
> /usr/lib/x86_64-linux-gnu/libtbb.so;/usr/lib/x86_64-linux-gnu/libz.so;/usr/lib/x86_64-linux-gnu/libboost_iostreams.so;/usr/lib/x86_64-linux-gnu/libboost_serialization.so;/usr/lib/x86_64-linux-gnu/libboost_system.so;/usr/lib/x86_64-linux-gnu/libboost_thread.so;/usr/lib/x86_64-linux-gnu/libboost_regex.so;/usr/lib/x86_64-linux-gnu/libboost_chrono.so;/usr/lib/x86_64-linux-gnu/libboost_date_time.so;/usr/lib/x86_64-linux-gnu/libboost_atomic.so;pthread;/home/kraft/trilinos/lib/libmuelu-adapters.so;/home/kraft/trilinos/lib/libmuelu-interface.so;/home/kraft/trilinos/lib/libmuelu.so;/home/kraft/trilinos/lib/libteko.so;/home/kraft/trilinos/lib/libstratimikos.so;/home/kraft/trilinos/lib/libstratimikosbelos.so;/home/kraft/trilinos/lib/libstratimikosaztecoo.so;/home/kraft/trilinos/lib/libstratimikosamesos.so;/home/kraft/trilinos/lib/libstratimikosml.so;/home/kraft/trilinos/lib/libstratimikosifpack.so;/home/kraft/trilinos/lib/libifpack2-adapters.so;/home/kraft/trilinos/lib/libifpack2.so;/home/kraft/trilinos/lib/libzoltan2.so;/home/kraft/trilinos/lib/libanasazitpetra.so;/home/kraft/trilinos/lib/libModeLaplace.so;/home/kraft/trilinos/lib/libanasaziepetra.so;/home/kraft/trilinos/lib/libanasazi.so;/home/kraft/trilinos/lib/libbelostpetra.so;/home/kraft/trilinos/lib/libbelosepetra.so;/home/kraft/trilinos/lib/libbelos.so;/home/kraft/trilinos/lib/libml.so;/home/kraft/trilinos/lib/libifpack.so;/home/kraft/trilinos/lib/libpamgen_extras.so;/home/kraft/trilinos/lib/libpamgen.so;/home

Re: [deal.II] Nedelec Elements and non-tangential Dirichlet data

2018-01-18 Thread Pascal Kraft
Ignore my question about projection. Somehow I thought I remembered that 
the projection functions don't support Nedelec elements in deal - my bad.
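
For the record, the projection I was thinking of would look roughly like this (a 
sketch; the constraints class is ConstraintMatrix in older releases and 
AffineConstraints in newer ones, the quadrature degree is an arbitrary choice, and 
the function must have as many components as the finite element):

#include <deal.II/base/function.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/vector.h>
#include <deal.II/numerics/vector_tools.h>

// best approximation in the L2 sense of an analytic field onto the
// (Nedelec-based) finite element space attached to dof_handler
template <int dim>
void project_exact_solution(const dealii::DoFHandler<dim>           &dof_handler,
                            const dealii::AffineConstraints<double> &constraints,
                            const dealii::Function<dim>             &exact_solution,
                            dealii::Vector<double>                  &projection)
{
  dealii::VectorTools::project(dof_handler,
                               constraints,
                               dealii::QGauss<dim>(4),
                               exact_solution,
                               projection);
}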

On Thursday, January 18, 2018 at 18:57:06 UTC+1, Pascal Kraft wrote:
>
> Thanks for your fast reply! 
> About your first point: Yes, I currently use a FeSystem composed of two 3D 
> Nedelec fes.
> On the second point I agree with you. Setting the additional values should 
> not be a problem concerning solvability of the system, since I know the 
> analytical solution of the problem. However, you are right, it should also 
> not be necessary to set these values in the first place, so I can just 
> skip this part.
> As a kind of extension to my first question: I have a function which 
> computes the analytical solution for a given position. Is there a function 
> to compute the best approximation for a given element type (like Nedelec)? 
> Again, thank you for your time,
>
> Kind regards,
> Pascal
>
> On Tuesday, January 16, 2018 at 18:45:15 UTC+1, Wolfgang Bangerth wrote:
>>
>> On 01/16/2018 01:41 AM, Pascal Kraft wrote: 
>> > I am currently using a FeSystem composed of two 3D Fields (real and 
>> > imaginary E-field) and I want to impose Dirichlet conditions of the 
>> sort 
>> > E(x,y,z) = E_{in}(x,y,z) on the input interface (in an xy-plane). 
>> > Earlier the z-component had been 0 so I did not run into real problems 
>> > and could use the project_boundary_values_curl_conforming_l2 function 
>> > (for the x and y components) and my own code for the z-components 
>> > (setting them 0 in the first cell layer). Now however I would like to 
>> > impose different values on the z-component. Since my cells have 
>> variable 
>> > thickness in z-direction (i.e. hanging node constraints appear) I would 
>> > like to use some library function for the interpolation. 
>> > Are there any feasible solutions to this? 
>> > Ideally I would like to keep using 
>> > project_boundary_values_curl_conforming_l2 for the tangential 
>> components 
>> > and just add some code for the z-components. 
>>
>> The question doesn't quite have enough information: 
>> * Are you using Nedelec elements for the two fields? 
>> * If the input face is in the xy-plane, and you are using a finite 
>> element that is only curl-conforming (such as the Nedelec element), 
>> isn't it correct that you can only prescribe *tangential* components for 
>> boundary values? In other words, you cannot prescribe z-components 
>> anyway. 
>>
>> Best 
>>   W. 
>>
>> -- 
>>  
>> Wolfgang Bangerth  email: bang...@colostate.edu 
>> www: http://www.math.colostate.edu/~bangerth/ 
>>
>



Re: [deal.II] Nedelec Elements and non-tangential Dirichlet data

2018-01-18 Thread Pascal Kraft
Thanks for your fast reply! 
About your first point: Yes, I currently use a FeSystem composed of two 3D 
Nedelec fes.
On the second point I agree with you. Setting the additional values should 
not be a problem concerning solvability of the system, since I know the 
analytical solution of the problem. However, you are right, it should also 
not be necessary to set these values in the first place, so I can just skip 
this part.
As a kind of extension to my first question: I have a function which 
computes the analytical solution for a given position. Is there a function 
to compute the best approximation for a given element type (like Nedelec)? 
Again, thank you for your time,

Kind regards,
Pascal

On Tuesday, January 16, 2018 at 18:45:15 UTC+1, Wolfgang Bangerth wrote:
>
> On 01/16/2018 01:41 AM, Pascal Kraft wrote: 
> > I am currently using a FeSystem composed of two 3D Fields (real and 
> > imaginary E-field) and I want to impose Dirichlet conditions of the sort 
> > E(x,y,z) = E_{in}(x,y,z) on the input interface (in an xy-plane). 
> > Earlier the z-component had been 0 so I did not run into real problems 
> > and could use the project_boundary_values_curl_conforming_l2 function 
> > (for the x and y components) and my own code for the z-components 
> > (setting them 0 in the first cell layer). Now however I would like to 
> > impose different values on the z-component. Since my cells have variable 
> > thickness in z-direction (i.e. hanging node constraints appear) I would 
> > like to use some library function for the interpolation. 
> > Are there any feasible solutions to this? 
> > Ideally I would like to keep using 
> > project_boundary_values_curl_conforming_l2 for the tangential components 
> > and just add some code for the z-components. 
>
> The question doesn't quite have enough information: 
> * Are you using Nedelec elements for the two fields? 
> * If the input face is in the xy-plane, and you are using a finite 
> element that is only curl-conforming (such as the Nedelec element), 
> isn't it correct that you can only prescribe *tangential* components for 
> boundary values? In other words, you cannot prescribe z-components anyway. 
>
> Best 
>   W. 
>
> -- 
>  
> Wolfgang Bangerth  email: bang...@colostate.edu 
>  
> www: http://www.math.colostate.edu/~bangerth/ 
>



[deal.II] Nedelec Elements and non-tangential Dirichlet data

2018-01-16 Thread Pascal Kraft
I am currently using a FeSystem composed of two 3D Fields (real and 
imaginary E-field) and I want to impose Dirichlet conditions of the sort 
E(x,y,z) = E_{in}(x,y,z) on the input interface (in an xy-plane). Earlier 
the z-component had been 0 so I did not run into real problems and could 
use the project_boundary_values_curl_conforming_l2 function (for the x and 
y components) and my own code for the z-components (setting them 0 in the 
first cell layer). Now however I would like to impose different values on 
the z-component. Since my cells have variable thickness in z-direction 
(i.e. hanging node constraints appear) I would like to use some library 
function for the interpolation.
Are there any feasible solutions to this? 
Ideally I would like to keep using 
project_boundary_values_curl_conforming_l2 for the tangential components 
and just add some code for the z-components.
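
For reference, the call I use for the tangential components looks roughly like this 
(a sketch; in my FESystem the real field starts at component 0 and the imaginary 
field at component dim, boundary id 1 is a placeholder, the boundary function has 
to have as many components as the finite element, and the constraints class may be 
ConstraintMatrix or AffineConstraints depending on the deal.II version):

#include <deal.II/base/function.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/mapping_q1.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/numerics/vector_tools.h>

template <int dim>
void constrain_input_interface(const dealii::DoFHandler<dim>     &dof_handler,
                               const dealii::Function<dim>       &incoming_field,
                               dealii::AffineConstraints<double> &constraints)
{
  const dealii::MappingQ1<dim> mapping;

  // tangential trace of the real part (components 0 .. dim-1)
  dealii::VectorTools::project_boundary_values_curl_conforming_l2(
    dof_handler, /*first_vector_component=*/0,
    incoming_field, /*boundary_id=*/1, constraints, mapping);

  // tangential trace of the imaginary part (components dim .. 2*dim-1)
  dealii::VectorTools::project_boundary_values_curl_conforming_l2(
    dof_handler, /*first_vector_component=*/dim,
    incoming_field, /*boundary_id=*/1, constraints, mapping);
}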



Re: [deal.II] Re: Internal instability of the GMRES Solver / Trilinos

2017-03-16 Thread Pascal Kraft
Hi Martin,

I have tried a version with GrowingVectorMemory::release_unused_memory() 
at the end of each step and removed my change to trilinos_vector.cc l.247 
(back to the version from the dealii source) and it seems to work fine. I 
have not tried the other solution you proposed, should I? Would the result 
help you?

Thank you very much for your support! This had been driving me crazy :)

Best,
Pascal
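
For reference, the cleanup discussed here (and suggested in the quoted 
message below) might look roughly like the following sketch, assuming the 
TrilinosWrappers::MPI vector types and header names of recent deal.II 
releases; the template argument has to match the vector type that is 
actually handed to the solver:

#include <deal.II/lac/trilinos_parallel_block_vector.h>
#include <deal.II/lac/trilinos_vector.h>
#include <deal.II/lac/vector_memory.h>

// Drop the temporary vectors that GrowingVectorMemory keeps cached between
// solves, so that no vector built on an old Epetra map (and its
// communicator) survives into the next optimization step.
void release_cached_solver_vectors()
{
  dealii::GrowingVectorMemory<dealii::TrilinosWrappers::MPI::Vector>::
    release_unused_memory();
  dealii::GrowingVectorMemory<dealii::TrilinosWrappers::MPI::BlockVector>::
    release_unused_memory();
}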

On Thursday, March 16, 2017 at 08:58:53 UTC+1, Martin Kronbichler wrote:
>
> Dear Pascal,
>
> You are right, in your case one needs to call 
> GrowingVectorMemory::release_unused_memory() for the BlockVector type 
> rather than only for the Vector. Can you try that as well?
>
> The problem appears to be that the call to SameAs returns different 
> results for different processors, which it should not, and which is why I 
> suspect that there might be some stale communicator object around. Another 
> indication for that assumption is that you get stuck in the initialization 
> of the temporary vectors of the GMRES solver, which is exactly this kind of 
> situation.
>
> As to the particular patch I referred to: It does release some memory that 
> might have stale information, but it also changes some of the call 
> structures slightly. Could you try to change the following:
>
> if (vector->Map().SameAs(v.vector->Map()) == false)
>
> to 
>
> if (v.vector->Map().SameAs(vector->Map()) == false)
>
> Best, Martin 
> On 16.03.2017 01:28, Pascal Kraft wrote: 
>
> Hi Martin,
> that didn't solve my problem. What I have done in the meantime is to replace 
> the check in line 247 of trilinos_vector.cc with true. I don't know if this 
> causes memory leaks or anything, but my code seems to be working fine with 
> that change. 
> To your suggestion: Would I also have had to call the templated version 
> for BlockVectors, or only for Vectors? I only tried the latter. Would I also 
> have had to apply some patch to my dealii library for it to work, or is the 
> patch you talked about simply that you included the functionality of the 
> call GrowingVectorMemory::release_unused_memory() in some places?
> I have also been meaning to try MPICH instead of OpenMPI because of a post 
> about an internal error in OpenMPI where one of the functions appearing in 
> the call stacks sometimes does not block properly.
> Thank you for your time and your fast responses - the whole library and 
> the people developing it and making it available are simply awesome ;)
> Pascal
> On Wednesday, March 15, 2017 at 17:26:23 UTC+1, Martin Kronbichler wrote:
>>
>> Dear Pascal,
>>
>> This problem seems related to a problem we recently worked around in 
>> https://github.com/dealii/dealii/pull/4043
>>
>> Can you check what happens if you call 
>> GrowingVectorMemory::release_unused_memory()
>>
>> between your optimization steps? If a communicator gets stuck in those 
>> places, it is likely a stale object somewhere that we fail to work around 
>> for some reason.
>>
>> Best, Martin 
>> On 15.03.2017 14:10, Pascal Kraft wrote: 
>>
>> Dear Timo, 
>> I have done some more digging and found out the following. The problems 
>> seem to happen in trilinos_vector.cc between lines 240 and 270.
>> What I see in the call stacks is that one process reaches line 261 
>> ( ierr = vector->GlobalAssemble (last_action); ) and then waits inside this 
>> call at an MPI_Barrier with the following stack:
>> 20  7fffd4d18f56 
>> 19 opal_progress()  7fffdc56dfca 
>> 18 ompi_request_default_wait_all()  7fffddd54b15 
>> 17 ompi_coll_tuned_barrier_intra_recursivedoubling()  7fffcf9abb5d 
>> 16 PMPI_Barrier()  7fffddd68a9c 
>> 15 Epetra_MpiDistributor::DoPosts()  7fffe4088b4f 
>> 14 Epetra_MpiDistributor::Do()  7fffe4089773 
>> 13 Epetra_DistObject::DoTransfer()  7fffe400a96a 
>> 12 Epetra_DistObject::Export()  7fffe400b7b7 
>> 11 int Epetra_FEVector::GlobalAssemble()  7fffe4023d7f 
>> 10 Epetra_FEVector::GlobalAssemble()  7fffe40228e3 
>> The other (in my case three) processes are stuck in the condition of the 
>> if/else statement leading up to this point, namely in the line 
>> if (vector->Map().SameAs(v.vector->Map()) == false) 
>> inside the call to SameAs(...) with stacks like
>> 15 opal_progress() 7fffdc56dfbc 
>> 14 ompi_request_default_wait_all() 7fffddd54b15 
>> 13 ompi_coll_tuned_allreduce_intra_recursivedoubling() 7f

Re: [deal.II] Re: Internal instability of the GMRES Solver / Trilinos

2017-03-16 Thread Pascal Kraft
Dear Martin,

my local machine is tied up with a Valgrind run at the moment, but as soon as 
that has finished one step I will put these changes in right away and post 
the results here (<6 hrs).
From what I make of the call stacks, one process somehow gets out of the 
SameAs() call without being MPI-blocked, and the others are then forced to 
wait during the MPI_Allreduce call. How or where that happens I will try to 
figure out later today. SDM is now working well in my Eclipse setup and I 
hope to be able to track down the problem.

Best,
Pascal


[deal.II] Re: Internal instability of the GMRES Solver / Trilinos

2017-03-15 Thread Pascal Kraft
Dear Timo,

I have done some more digging and found out the following. The problems 
seem to happen in trilinos_vector.cc between lines 240 and 270.
What I see in the call stacks is that one process reaches line 261 ( ierr 
= vector->GlobalAssemble (last_action); ) and then waits inside this call 
at an MPI_Barrier with the following stack:
20  7fffd4d18f56 
19 opal_progress()  7fffdc56dfca 
18 ompi_request_default_wait_all()  7fffddd54b15 
17 ompi_coll_tuned_barrier_intra_recursivedoubling()  7fffcf9abb5d 
16 PMPI_Barrier()  7fffddd68a9c 
15 Epetra_MpiDistributor::DoPosts()  7fffe4088b4f 
14 Epetra_MpiDistributor::Do()  7fffe4089773 
13 Epetra_DistObject::DoTransfer()  7fffe400a96a 
12 Epetra_DistObject::Export()  7fffe400b7b7 
11 int Epetra_FEVector::GlobalAssemble()  7fffe4023d7f 
10 Epetra_FEVector::GlobalAssemble()  7fffe40228e3 

The other (in my case three) processes are stuck in the condition of the 
if/else statement leading up to this point, namely in the line 
if (vector->Map().SameAs(v.vector->Map()) == false)
inside the call to SameAs(...) with stacks like

15 opal_progress() 7fffdc56dfbc 
14 ompi_request_default_wait_all() 7fffddd54b15 
13 ompi_coll_tuned_allreduce_intra_recursivedoubling() 7fffcf9a4913 
12 PMPI_Allreduce() 7fffddd6587f 
11 Epetra_MpiComm::MinAll() 7fffe408739e 
10 Epetra_BlockMap::SameAs() 7fffe3fb9d74 
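
Read together, the two stacks point to mismatched collectives. As a 
schematic illustration of that pattern (this is not the actual deal.II 
source, only the shape of the hazard, with made-up surrounding code):

#include <Epetra_FEVector.h>

// Both the condition and the else-branch hide collective MPI operations:
// Epetra_BlockMap::SameAs() ends in an Allreduce (MinAll), and
// Epetra_FEVector::GlobalAssemble() posts a barrier inside its Export. If
// not every rank executes the same sequence of collectives, e.g. because
// one rank evaluates the condition against a stale map or communicator,
// some ranks wait in the Allreduce while others wait in the barrier, and
// nobody ever returns.
void reinit_like(Epetra_FEVector &vector, const Epetra_FEVector &other)
{
  if (vector.Map().SameAs(other.Map()) == false) // Allreduce inside SameAs()
    {
      vector = Epetra_FEVector(other.Map());     // rebuild with the new layout
    }
  else
    {
      vector.GlobalAssemble();                   // barrier inside the Export
    }
}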

Maybe this helps. Producing a smaller example will likely not be possible 
in the coming two weeks, but if there is no solution by then I can try.

Greetings,
Pascal



[deal.II] Internal instability of the GMRES Solver / Trilinos

2017-03-14 Thread Pascal Kraft
Dear list members,

I am facing a really weird problem that I have been struggling with for a 
while now. I have written a problem class which, based on other objects, 
generates a system matrix, a right-hand side and a solution vector. The 
data structures are distributed Trilinos block types. When I do this for the 
first time, it all works perfectly. However, the class is part of an 
optimization scheme, and usually the second time the object is used 
(randomly also later, but this has only happened once or twice) the solver 
does not start. I am checking with MPI barriers to see whether all processes 
arrive at the GMRES::solve call, and they do, but somehow not even my own 
preconditioner's vmult method gets called anymore. The objects (the two 
vectors and the system matrix) are exactly the same as they were in the 
previous step (only slightly different numbers, but the same vectors of 
IndexSets for the partitioning among processors).
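
Schematically, the check described above amounts to something like the 
following sketch (hypothetical names; PreconditionIdentity merely stands in 
for the user-defined preconditioner whose vmult() never gets called in the 
failing runs):

#include <deal.II/lac/precondition.h>
#include <deal.II/lac/solver_control.h>
#include <deal.II/lac/solver_gmres.h>
#include <deal.II/lac/trilinos_block_sparse_matrix.h>
#include <deal.II/lac/trilinos_parallel_block_vector.h>

#include <mpi.h>
#include <iostream>

using namespace dealii;

// Hypothetical sketch: confirm that every rank reaches the solve call,
// then hand the distributed Trilinos block system to deal.II's GMRES.
void check_and_solve(const TrilinosWrappers::BlockSparseMatrix &system_matrix,
                     TrilinosWrappers::MPI::BlockVector        &solution,
                     const TrilinosWrappers::MPI::BlockVector  &rhs,
                     MPI_Comm                                   mpi_communicator)
{
  // In the failing runs, every rank passes this barrier ...
  MPI_Barrier(mpi_communicator);
  std::cout << "rank reached GMRES::solve" << std::endl;

  // ... but the solver then hangs while setting up its temporary vectors.
  SolverControl                                   control(1000, 1e-8 * rhs.l2_norm());
  SolverGMRES<TrilinosWrappers::MPI::BlockVector> gmres(control);
  gmres.solve(system_matrix, solution, rhs, PreconditionIdentity());
}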

I have debugged this code segment with Eclipse and the parallel debugger, 
but I don't know what to do with the call stack:
18 ompi_request_default_wait_all()  7fffddd54b15 
17 ompi_coll_tuned_barrier_intra_recursivedoubling()  7fffcf9abb5d 
16 PMPI_Barrier()  7fffddd68a9c 
15 Epetra_MpiDistributor::DoPosts()  7fffe4088b4f 
14 Epetra_MpiDistributor::Do()  7fffe4089773 
13 Epetra_DistObject::DoTransfer()  7fffe400a96a 
12 Epetra_DistObject::Export()  7fffe400b7b7 
11 int Epetra_FEVector::GlobalAssemble()  7fffe4023d7f 
10 Epetra_FEVector::GlobalAssemble()  7fffe40228e3 
9 dealii::TrilinosWrappers::MPI::Vector::reinit() trilinos_vector.cc:261 752c937e 
8 dealii::TrilinosWrappers::MPI::BlockVector::reinit() trilinos_block_vector.cc:191 74e43bd9 
7 dealii::internal::SolverGMRES::TmpVectors::operator() solver_gmres.h:535 4a847d 
6 dealii::SolverGMRES::solve<dealii::TrilinosWrappers::BlockSparseMatrix, PreconditionerSweeping>() solver_gmres.h:813 4d654a 
5 Waveguide::solve() Waveguide.cpp:1279 48f150 

The last line (5) here is a function I wrote which calls 
SolverGMRES::solve with my 
preconditioner (which works perfectly fine during the previous run). I found 
some information online about MPI_Barrier sometimes being unstable, but I 
don't know enough about the inner workings of Trilinos (Epetra) and deal.II 
to make a judgment call here. If no one can help, I will try to provide a 
code fragment, but I doubt that will be possible: if it really is a race 
condition and I strip away the rather large amount of code surrounding this 
segment, it is unlikely to be reproducible.

Originally I had used two MPI communicators that differed only in the 
numbering of the processes (one for the primal, one for the dual problem) 
and created two independent objects of my problem class which only used 
their respective communicator. In that case, the solver only worked 
whenever the numbering of processes was either equal to that of 
MPI_COMM_WORLD or exactly the opposite, but not for, say, 1-2-3-4 -> 1-3-2-4, 
and it got stuck in the exact same way. I had thought it might be some 
internal use of MPI_COMM_WORLD that was blocking somehow, but it also 
happens now that I only use one communicator (MPI_COMM_WORLD).
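
For illustration, a second communicator that contains the same processes as 
MPI_COMM_WORLD but numbers them differently can be set up roughly like this 
(a hypothetical sketch of the configuration described; the concrete 
reordering is made up):

#include <mpi.h>

// Build a communicator over the same four processes as MPI_COMM_WORLD but
// with ranks 1 and 2 swapped (the 1-2-3-4 -> 1-3-2-4 ordering mentioned
// above). Process ranks[i] of MPI_COMM_WORLD becomes rank i here.
MPI_Comm make_reordered_comm()
{
  MPI_Group world_group, reordered_group;
  MPI_Comm_group(MPI_COMM_WORLD, &world_group);

  int ranks[4] = {0, 2, 1, 3};
  MPI_Group_incl(world_group, 4, ranks, &reordered_group);

  MPI_Comm reordered_comm;
  MPI_Comm_create(MPI_COMM_WORLD, reordered_group, &reordered_comm);

  MPI_Group_free(&reordered_group);
  MPI_Group_free(&world_group);
  return reordered_comm;
}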

Thank you in advance for your time,
Pascal Kraft

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.