On Wed, 16 Aug 2017, Renato Poli wrote:

      You actually need a serializer to run on every processor, not just
      proc 0, even in the case where you're only serializing onto proc 0.

Ok. I took the serialization outside the "if". Could it be a
synchronization problem? Does the master process need to wait for the
other ones to finish, or something like that?

Huh, actually that is theoretically possible.  In dbg and devel modes
we sometimes begin a parallel method by doing a quick sync between all
processors to ensure that they've all entered the method together...
and it looks like the MeshSerializer constructor is one of those
methods.
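
If that's the failure mode, the fix is just to make sure the
serializer is constructed (and destructed) collectively.  A minimal
sketch of what I mean - `mesh` is a placeholder for your mesh object,
and the constructor arguments are from my memory of the API, so
double-check mesh_serializer.h:

```cpp
// Construct the MeshSerializer on EVERY rank, outside any
// rank-0-only branch; its constructor and destructor are
// collective operations that all processors must reach together.
{
  libMesh::MeshSerializer serializer(mesh, /*need_serial=*/true);

  if (mesh.processor_id() == 0)
    {
      // Only proc 0 does the GUI work here, but every rank entered
      // this scope, so the internal sync in the constructor matched.
    }
} // leaving scope re-distributes the mesh, again on all ranks
```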
  
      You're sure you need your own from-scratch GUI here?  You definitely
      can't make do with Paraview, VisIt, or some such?

Well ... it is possible, of course. I had this simple GUI previously

Okay; if it's not "from-scratch" then that does change the calculation
a bit.

and being able to interact with the mesh is quite useful for
debugging.

I can't argue with that.  High on my "wishlist that I will never ever
get to myself" is support for live Paraview+simulation integration -
I think their API for that is called Catalyst or Insitu?

      libMesh::MeshBase::const_element_iterator el =
        _mesh.active_elements_begin();
      const libMesh::MeshBase::const_element_iterator end_el =
        _mesh.active_elements_end();
      for ( ; el != end_el ; ++el)
        { ... sys.point_value(0,pt,elem) ... }

      Here I don't see anything obviously wrong.  What's the failure?

Oh, of course, now I see it.  sys.point_value(foo,bar,elem) only works
in parallel if elem is what I call "algebraically ghosted" on the processor
calling point_value: if all the coefficients for degrees of freedom
supported on elem are also present in system.current_local_solution.

Listed below. Any hints on obtaining better error logging?

Build and run your application against libMesh in devel (or when
you're this early in development, dbg) mode.  Usually an internal
segfault in opt mode turns into a useful assertion in devel mode, and
occasionally into a more informative assertion in dbg mode.  The
internal testing we turn on in dbg mode is crazy slow, so you don't
want to use it with real problems, but it's great for initial
development.
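
For reference, that typically looks something like the following -
treat this as a hedged sketch, since the install prefix and the way
your application's Makefile selects a METHOD are up to you:

```shell
# Configure libMesh with the extra methods compiled in
# (assumption: you're building from the libMesh source tree).
./configure --prefix=$HOME/libmesh-install METHODS="opt devel dbg"
make -j4 && make install

# Then build your app against the devel (or dbg) flavor; with
# Makefiles modeled on the libMesh examples this is selected via
# the METHOD variable:
make METHOD=devel
```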

In this case, I suspect if you rebuild in dbg or devel mode then
you'll fail an assertion that will tell you a PetscVector index is
neither local nor ghosted.
...
No, actually, looking at the code you'll probably see a failure of
  libmesh_assert (dof_map.is_evaluable(e, var));
in that System::point_value() overload first.

Let me try to get sys.point_value working first (as it already works
in serial, the problem must be related to parallelization).

Yeah, I forgot that the sys.point_value(0,pt,elem) overload is really
only intended for use on local elements.  If all you care about is
nodal values for now, then I'd use sys.solution->localize_to_one to
make a serial copy of your solution vector on proc 0 too, then inside
your loop query that copy (using node->dof_number(0,0,0) to get the
dof index for the first system's first variable) for the value.
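
Sketched out, that nodal-value approach looks something like this -
hedged, since I'm assuming a single scalar variable in the first
system and writing the dof indexing from memory:

```cpp
// Gather a serial copy of the solution vector onto processor 0.
std::vector<libMesh::Number> serial_solution;
sys.solution->localize_to_one(serial_solution, 0);

if (mesh.processor_id() == 0)
  {
    libMesh::MeshBase::const_node_iterator       nd     = mesh.nodes_begin();
    const libMesh::MeshBase::const_node_iterator end_nd = mesh.nodes_end();
    for ( ; nd != end_nd ; ++nd)
      {
        const libMesh::Node * node = *nd;

        // dof_number(system #, variable #, component #):
        // first system's first variable, component 0.
        const libMesh::dof_id_type dof = node->dof_number(0, 0, 0);
        const libMesh::Number value = serial_solution[dof];

        // ... hand (node, value) to the GUI ...
      }
  }
```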
---
Roy