Hi,

my code is the main part of my Master's thesis, so I need to talk to my
professor about sharing it with the libmesh community.
I uploaded my config.log here (http://pastebin.com/X3whQhLs) so you can
check it. As far as I can tell, I don't have PETSc installed/configured
yet, but it should be no problem to set it up if that solves my
problem.
I also uploaded a shortened version of my code so that you can see
which libmesh functions I'm using. Here is the link:
http://pastebin.com/L9QrYkLQ
Everything I removed from the code is the computation of the local
matrix entries, the copying of those entries into 'Ke' and 'Fe', and
the accesses to the element's node pointers (for their coordinates);
so nothing that could crash the program or interfere with the system
solving step...
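
For context, the removed computations sit in the usual libmesh
per-element assembly block, roughly like this (just a rough sketch,
not my actual code; 'system', 'mesh' and the other names stand in for
my real objects):

    DenseMatrix<Number> Ke;
    DenseVector<Number> Fe;
    std::vector<dof_id_type> dof_indices;
    const DofMap & dof_map = system.get_dof_map();

    // inside the loop over elements, for each 'elem':
    dof_map.dof_indices(elem, dof_indices);
    const unsigned int n_dofs = dof_indices.size();
    Ke.resize(n_dofs, n_dofs);
    Fe.resize(n_dofs);

    // ... element-local stiffness/load computations (the part I
    // removed), including reading the node pointers for coordinates ...

    dof_map.constrain_element_matrix_and_vector(Ke, Fe, dof_indices);
    system.matrix->add_matrix(Ke, dof_indices);
    system.rhs->add_vector(Fe, dof_indices);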

Bye,
Stephan Herb

On 14.09.2015 at 17:34, John Peterson wrote:
>
>
> On Mon, Sep 14, 2015 at 3:46 AM, Stephan Herb
> <inf74...@stud.uni-stuttgart.de> wrote:
>
>     Hello,
>
>     I wrote code to compute displacements and drilling moments for
>     flat shell elements with FEM. It imports a mesh file (xda or
>     msh), assembles the system matrix and RHS, solves the system and
>     exports the results as vtk files.
>     When I run my program with one processor, everything works fine
>     and the displacements are correct. The problem occurs with more
>     than one processor.
>     A simple example: a plane mesh (e.g. 8x8 quadrilaterals). The
>     left edge is clamped, and the force is applied at the right edge
>     and only at this edge. If I run this example with e.g. 2
>     processors, the partitioning cut splits the mesh into a left and
>     a right half. The result looks as follows: the left half has no
>     displacement at all, while the right half has a huge displacement.
>
>
> So you implemented a new type of shell finite element in libmesh?  It
> would be great if this is something you could share with us in a
> branch on GitHub...
>  
>
>     I studied your example codes and couldn't see any special code
>     fragments that are needed for a parallelized version of the code,
>     so I thought: 'Hey, libmesh seems to handle the parallelization
>     internally - that's great!'. But when I look at my results, it
>     seems to me that every processor solves its own partial system
>     with its own boundary conditions, mesh nodes and RHS and produces
>     its own solution; so no communication or exchange between the
>     processes at all...
>
>
> Are you using PETSc?  Please send me (or share on google docs) your
> config.log file so I can take a look at your system's configuration.
>  
>
>     Now to my questions: Do I need additional configuration in my
>     code to make it work (of the LinearImplicitSystem, the mesh, the
>     system matrix, etc.)? Additional MPI code to exchange
>     values/vectors/matrices on my own? Libmesh seems to make
>     extensive use of MPI communication internally, and it's difficult
>     for me right now to see where the problem is. Perhaps you can
>     give me a hint so I can get back on track soon.
>
>
> You shouldn't need to do anything too special.  One big thing is to
> make sure you use "active_local" element iterators instead of "active"
> ones in your assembly routine if you are writing non-FEMSystem style
> libmesh code.
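>
> For illustration, a non-FEMSystem assembly loop would look roughly
> like this (untested sketch, using the iterator idiom from the
> examples):
>
>     MeshBase::const_element_iterator el =
>       mesh.active_local_elements_begin();
>     const MeshBase::const_element_iterator end_el =
>       mesh.active_local_elements_end();
>
>     for ( ; el != end_el; ++el)
>       {
>         const Elem * elem = *el;
>         // Compute Ke/Fe for this element and add them to the global
>         // matrix/RHS; each processor only visits the elements it
>         // owns, and libmesh handles the cross-processor
>         // communication during assembly and solve.
>       }
>
> With plain "active" iterators, every processor would loop over the
> entire mesh and add contributions for elements it doesn't own, so
> entries end up duplicated when the system is assembled in parallel.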
>
