On Wed, Aug 16, 2017 at 12:13 PM, Roy Stogner <[email protected]>
wrote:
>
> On Wed, 16 Aug 2017, Renato Poli wrote:
>
>> if ( mesh.processor_id() == 0) {
>> MeshSerializer serialize(mesh, true, true);
>>
>
> You actually need a serializer to run on every processor, not just
> proc 0, even in the case where you're only serializing onto proc 0.
> Even if proc 0 is the only thing receiving mesh information, the other
> processors need to know to send it.
>
> This probably isn't a problem for you right now, though - the
> default libMesh Mesh is a ReplicatedMesh, serializers are do-nothing
> code on already-serialized meshes, and you should stick with
> always-serialized ReplicatedMesh until your code is working in that
> simpler case.
>
Ok. I took the serialization outside the "if". Could it be a synchronization
problem? Does the master process need to wait for the other processes to
finish, or something like that?
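Just so we are looking at the same code, the rearrangement is roughly this
(a sketch only, reusing the same objects and the same MeshSerializer call
from my snippet above; it needs "libmesh/mesh_serializer.h"):

    // Every rank participates in the gather; with the second flag set,
    // the fully serialized mesh only has to end up on rank 0 (as in my
    // original call).
    libMesh::MeshSerializer serialize(mesh, true, true);

    // Only rank 0 actually builds the GUI.
    if (mesh.processor_id() == 0)
      {
        FEMainWindow *win = new FEMainWindow(mesh, sys);
        // ...
      }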
>
>> FEMainWindow *win = new FEMainWindow(mesh, sys);
>>
>
> Pardon the rude question, but just to make sure we're not in the
> middle of an XY problem:
>
> You're sure you need your own from-scratch GUI here? You definitely
> can't make do with Paraview, VisIt, or some such?
>
Well ... it is possible, of course. I had this simple GUI previously, and
being able to interact with the mesh is quite useful for debugging. After
these simple runs I will move on to adaptive meshing, and (I believe) having
an interface to interact with the mesh and play around with geometric
algorithms will save a lot of time.
This is also forcing me to dig into libMesh's access functions and
understand its internals, which makes for a somewhat steeper learning curve.
It is working nicely in serial (and even in parallel when I iterate over the
local elements only).
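For reference, the parallel-safe loop I mean is along these lines (just a
sketch; _mesh and the loop body are placeholders from my own code, and it
needs "libmesh/mesh_base.h" and "libmesh/elem.h"):

    // Iterate only over active elements owned by this processor, so no
    // remote elements are touched when the mesh is distributed.
    libMesh::MeshBase::const_element_iterator el =
      _mesh.active_local_elements_begin();
    const libMesh::MeshBase::const_element_iterator end_el =
      _mesh.active_local_elements_end();

    for ( ; el != end_el; ++el)
      {
        const libMesh::Elem *elem = *el;
        // ... draw / query elem here ...
      }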
>
> ...
>> libMesh::MeshBase::const_element_iterator el =
>> _mesh.active_elements_begin();
>> const libMesh::MeshBase::const_element_iterator end_el =
>> _mesh.active_elements_end();
>> for ( ; el != end_el ; ++el)
>> { ... sys.point_value(0,pt,elem) ...}
>>
>
> Here I don't see anything obviously wrong. What's the failure?
>
Listed below. Any hints on how to get better error logging?
EquationSystems
n_systems()=1
System #0, "Poisson"
Type "LinearImplicit"
Variables="u"
Finite Element Types="LAGRANGE"
Approximation Orders="FIRST"
n_dofs()=6474
n_local_dofs()=1658
n_constrained_dofs()=0
n_local_constrained_dofs()=0
n_vectors()=1
n_matrices()=1
DofMap Sparsity
Average On-Processor Bandwidth <= 6.8806
Average Off-Processor Bandwidth <= 0.107816
Maximum On-Processor Bandwidth <= 11
Maximum Off-Processor Bandwidth <= 5
DofMap Constraints
Number of DoF Constraints = 0
[0]PETSC ERROR:
------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
probably memory access out of range
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see
http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X
to find memory corruption errors
[0]PETSC ERROR: likely location of problem given in stack below
[0]PETSC ERROR: --------------------- Stack Frames
------------------------------------
[0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
[0]PETSC ERROR: INSTEAD the line number of the start of the function
[0]PETSC ERROR: is given.
[0]PETSC ERROR: --------------------- Error Message
--------------------------------------------------------------
[0]PETSC ERROR: Signal received
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.7.6, Apr, 24, 2017
[0]PETSC ERROR: obj/bin/Delaunay on a linux-gnu-opt named dev-vb by dev Wed
Aug 16 17:32:49 2017
[0]PETSC ERROR: Configure options --with-mpi-dir=/usr/lib/mpich
--with-shared-libraries=1 --with-debugging=yes
[0]PETSC ERROR: #1 User provided function() line 0 in unknown file
application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0
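Following the suggestions in the message itself, I suppose the next step on
my side is something like the following (binary path and process count are
just from my setup):

    # run under valgrind to locate the out-of-range access
    mpirun -np 2 valgrind ./obj/bin/Delaunay

    # or let PETSc attach a debugger on the rank that gets the signal
    mpirun -np 2 ./obj/bin/Delaunay -start_in_debugger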
>
>> I also inspected the "write_nodal_data" in MeshOutput, and got into
>> EquationSystems::build_parallel_solution_vector. I couldn't completely
>> understand what is going on there.
>>
>
> Yeah, there's a lot of complication (multiple systems, multiple
> variables per system, vector-valued variables, subdomain-restricted
> variables...) in that method which won't be relevant for you right
> away.
>
>> It seems that each process is adding its local nodes to the
>> parallel_soln_ptr, using a "_communicator". Is that so?
>>
>
> Adding nodes on its local elements, then averaging by the number of
> elements around each node, but yes.
>
>> What do you recommend for my case? Should I use
>> build_parallel_solution_vector directly?
>>
>
> If you're happy with the compromises it makes (e.g. ignoring
> non-vertex/higher-order solution bases, averaging away jumps
> in discontinuous solutions) then it's a good way to start.
>
> Even if you're not happy with those compromises, it might be good for
> getting proof-of-concept code working.
>
>> Will sys.point_value work after that?
>>
>
> If you use that, then you won't need sys.point_value at all; you can
> just index the solution vector it gives you by the node number you're
> interested in.
>
Let me try to get sys.point_value working first (since it already works in
serial, the problem must be related to the parallelization).
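For when I do get to that point, my reading of the suggestion is roughly
the following (only a sketch: "es" and "mesh" are my EquationSystems and
mesh objects, and indexing by node id assumes a single system with a single
variable, which is what I have here):

    // Build the node-wise solution vector and localize it so any node can
    // be read on this rank.  Needs "libmesh/equation_systems.h",
    // "libmesh/numeric_vector.h" and "libmesh/node.h".
    auto parallel_soln = es.build_parallel_solution_vector();
    std::vector<libMesh::Number> soln;
    parallel_soln->localize(soln);

    libMesh::MeshBase::const_node_iterator nd = mesh.nodes_begin();
    const libMesh::MeshBase::const_node_iterator end_nd = mesh.nodes_end();
    for ( ; nd != end_nd; ++nd)
      {
        const libMesh::Node *node = *nd;
        const libMesh::Number u = soln[node->id()];
        // ... hand (node, u) to the GUI ...
      }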
> ---
> Roy
>