Thanks, Julian, for addressing this problem. Although I changed the code to
the correct version, nothing changed (well, now only the displacements for
the local nodes get printed instead of those for all nodes^^).

I came across another issue: if you run example code 4 of "Systems of
Equations" (Linear Elastic Cantilever) with the command "mpirun -np 2
./example-opt", you get "-nan" as the solution. When you export the results,
for example as vtu files, one part cannot be loaded in a visualization
program like ParaView and the other part can be loaded but contains only
zeros...
So the problem might not be in my code after all, but in my libmesh
configuration, or I need additional start parameters, or... I don't know.
Any suggestions? Has anyone tested the examples with multiple processes and
gotten correct results?

regards,
Stephan

On 15.09.2015 at 09:54, Julian Andrej wrote:
> Hi,
>
> maybe there is a problem with your iterator, as you are using
>
> MeshBase::const_node_iterator no = mesh.nodes_begin();
>
> which tries to get all nodes (from every processor). You could try
>
> MeshBase::const_node_iterator no = mesh.local_nodes_begin();
>
> to fetch just the local nodes. Do you know whether the problem is the
> solution itself (in the sense of solving) or the gathering of the solution
> onto one machine?
>
> regards
> Julian
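
For reference, after the change the loop looks roughly like this. This is
only a minimal sketch with illustrative names; "u" stands for one of the
displacement variables and "sys" for my LinearImplicitSystem:

MeshBase::const_node_iterator no           = mesh.local_nodes_begin();
const MeshBase::const_node_iterator end_no = mesh.local_nodes_end();

const unsigned int u_var = sys.variable_number("u");

for (; no != end_no; ++no)
  {
    const Node * node = *no;

    // global dof index of variable "u" (component 0) at this node
    const dof_id_type dof = node->dof_number(sys.number(), u_var, 0);

    libMesh::out << "node " << node->id()
                 << ": u = " << sys.current_solution(dof) << std::endl;
  }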
>
>
> On Tue, Sep 15, 2015 at 9:28 AM, Stephan Herb
> <inf74...@stud.uni-stuttgart.de> wrote:
>> Hi,
>>
>> my code is the main part of my Master's thesis, so I need to talk with my
>> professor about sharing it with the libmesh community.
>> I uploaded my config.log here (http://pastebin.com/X3whQhLs) so you can
>> check it. As far as I know I don't have PETSc installed/configured yet,
>> but it should be no problem to add it if that turns out to solve my
>> problem.
>> I also uploaded a shortened version of my code such that you can see
>> what libmesh functions I'm using. Here is the link:
>> http://pastebin.com/L9QrYkLQ
>> Everything I removed from the code is the computation of the local matrix
>> entries, the copying of those entries into 'Ke' and 'Fe', and the accessing
>> of the pointers to the element's nodes (for their coordinates); so nothing
>> that could crash the program or interfere with the system solving step...
>>
>> Bye,
>> Stephan Herb
>>
>> On 14.09.2015 at 17:34, John Peterson wrote:
>>>
>>> On Mon, Sep 14, 2015 at 3:46 AM, Stephan Herb
>>> <inf74...@stud.uni-stuttgart.de> wrote:
>>>
>>>     Hello,
>>>
>>>     I wrote a code to compute displacements and drilling moments for flat
>>>     shell elements with FEM. It imports a mesh file (xda or msh), assembles
>>>     the system matrix and RHS, solves the system and exports the results as
>>>     vtk files.
>>>     When I run my program with one processor everything works fine and the
>>>     displacements are correct. The problem occurs with more than one
>>>     processor.
>>>     Simple example: a plane mesh (e.g. 8x8 quadrilaterals). The left edge is
>>>     clamped, the force is applied at the right edge and only at this edge.
>>>     If I run this example with e.g. 2 processors, the partitioning cut
>>>     halves the mesh into a left and a right part. The result looks as
>>>     follows: the left part has no displacement at all, while the right part
>>>     has a huge displacement.
>>>
>>>
>>> So you implemented a new type of shell finite element in libmesh?  It
>>> would be great if this is something you could share with us in a
>>> branch on GitHub...
>>>
>>>
>>>     I studied your example codes and couldn't see any special code fragments
>>>     that are needed for a parallelized version of the code, so I thought:
>>>     'Hey, libmesh seems to handle the parallelization internally - that's
>>>     great!'. But when I look at my results, it seems to me that every
>>>     processor solves its own partial system with its own boundary conditions,
>>>     mesh nodes and RHS and produces its own solution - so no communication or
>>>     exchange between the processes at all...
>>>
>>>
>>> Are you using PETSc?  Please send me (or share on google docs) your
>>> config.log file so I can take a look at your system's configuration.
>>>
>>>
>>>     Now to my questions: Do I need additional configuration in my code to
>>>     make it work (of the LinearImplicitSystem, the mesh, the system matrix,
>>>     etc.)? Additional MPI code to exchange values/vectors/matrices on my
>>>     own? Libmesh seems to use MPI communication extensively internally and
>>>     it's difficult for me right now to see where the problem is. Perhaps you
>>>     can give me a hint so I can get back on track soon.
>>>
>>>
>>> You shouldn't need to do anything too special.  One big thing is to
>>> make sure you use "active_local" element iterators instead of "active"
>>> ones in your assembly routine if you are writing non-FEMSystem style
>>> libmesh code.
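
For reference, a minimal sketch of an assembly loop over the "active_local"
elements, in the style of the libmesh introduction examples (Ke, Fe,
dof_indices and dof_map are the usual per-element objects from those
examples; the actual element computations are omitted):

MeshBase::const_element_iterator el           = mesh.active_local_elements_begin();
const MeshBase::const_element_iterator end_el = mesh.active_local_elements_end();

for (; el != end_el; ++el)
  {
    const Elem * elem = *el;

    // dof indices for this element only
    dof_map.dof_indices(elem, dof_indices);

    // ... resize and fill Ke and Fe for this element ...

    // apply constraints and add the local contributions to the global system
    dof_map.constrain_element_matrix_and_vector(Ke, Fe, dof_indices);
    system.matrix->add_matrix(Ke, dof_indices);
    system.rhs->add_vector(Fe, dof_indices);
  }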
>>>