On Mon, Sep 14, 2015 at 3:46 AM, Stephan Herb <
inf74...@stud.uni-stuttgart.de> wrote:

> Hello,
>
> I wrote a program to compute displacements and drilling moments for flat
> shell elements with FEM. It imports a mesh file (xda or msh), assembles
> the system matrix and RHS, solves the system and exports the results as
> vtk files.
> When I run my program with one processor it all works fine and the
> displacements are correct. The problem occurs with more than one processor.
> Simple example: Plane mesh (e.g. 8x8 quadrilaterals). Left edge is
> clamped, the force is applied at the right edge and only at this edge.
> If I run this example with e.g. 2 processors, the partitioning cut splits
> the mesh into a left and a right part. The result looks as follows: the
> left part has no displacement at all, while the right part has a huge
> displacement.
>

So you implemented a new type of shell finite element in libmesh?  It would
be great if this is something you could share with us in a branch on
GitHub...


> I studied your example codes and couldn't see any special code fragments
> that are needed for a parallelized version of the code, so I thought:
> 'Hey, libmesh seems to handle the parallelization internally - that's
> great!'. But when I look at my results, in my opinion every processor
> solves its own partial system with its own boundary conditions, mesh
> nodes and RHS, and produces its own solution - so no communication or
> exchange between the processes at all...
>

Are you using PETSc?  Please send me (or share on google docs) your
config.log file so I can take a look at your system's configuration.
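One quick sanity check on your end, in the meantime (a minimal sketch;
LIBMESH_HAVE_PETSC is the configure-time define recorded in
libmesh_config.h):

    #include "libmesh/libmesh_config.h"
    #include <iostream>

    int main ()
    {
    #ifdef LIBMESH_HAVE_PETSC
      std::cout << "libmesh was configured with PETSc support" << std::endl;
    #else
      std::cout << "no PETSc support in this build" << std::endl;
    #endif
      return 0;
    }

If libmesh was configured without a parallel solver package like PETSc,
the fallback linear solvers are serial, which could produce exactly the
per-processor behavior you describe.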


> Now to my questions: Do I need additional configurations in my code to
> make it work (at the LinearImplicitSystem, the mesh, the system matrix,
> etc.)? Additional MPI code to exchange values/vectors/matrices on my
> own? Libmesh seems to use MPI communication extensively internally and
> it's difficult for me right now to see where the problem is. Perhaps you
> can give me a hint to get back on track soon.
>

You shouldn't need to do anything too special.  One big thing is to make
sure you use "active_local" element iterators instead of "active" ones in
your assembly routine if you are writing non-FEMSystem style libmesh code.
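For reference, here is the iterator idiom from the standard libmesh
examples (a minimal sketch; the element-level assembly is elided, and
"mesh", "system", "dof_map", "Ke", "Fe", and "dof_indices" are the usual
example names, not necessarily yours):

    MeshBase::const_element_iterator       el     =
      mesh.active_local_elements_begin();
    const MeshBase::const_element_iterator end_el =
      mesh.active_local_elements_end();

    for ( ; el != end_el; ++el)
      {
        const Elem * elem = *el;

        // Compute the element matrix Ke and vector Fe for this
        // locally-owned element, then add them into the global
        // (distributed) system:
        //
        //   dof_map.constrain_element_matrix_and_vector (Ke, Fe, dof_indices);
        //   system.matrix->add_matrix (Ke, dof_indices);
        //   system.rhs->add_vector (Fe, dof_indices);
      }

With "active" iterators every processor loops over every element in the
mesh; with "active_local" iterators each processor only assembles the
elements it owns, and the distributed matrix/vector classes handle the
cross-processor communication.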