I got it.
It was a flag-initialization issue, nothing related to libMesh.
Sorry to bother you.

Renato

On Sat, May 4, 2019 at 5:17 PM Renato Poli <rebp...@gmail.com> wrote:

> I'm just reading in from an Exodus file.
> That would be a replicated mesh, right?
>
> ==== CODE
>   libMesh::Mesh lm_mesh(init.comm());
>   lm_mesh.read( exofn );
>
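> For reference, libMesh::Mesh is only a typedef (ReplicatedMesh by default,
> DistributedMesh if libMesh was configured for distributed meshes), so the
> answer depends on the build. A minimal sketch, assuming the same "init" and
> "exofn" as above, that requests a replicated mesh explicitly rather than
> relying on the typedef:
>
> === CODE
>   #include "libmesh/replicated_mesh.h"
>
>   // Explicitly replicated: every processor keeps a full copy of the mesh.
>   libMesh::ReplicatedMesh lm_mesh(init.comm());
>   lm_mesh.read( exofn );
>
>   // If the installed libMesh provides MeshBase::is_replicated(), the
>   // behaviour can also be checked at run time in dbg/devel modes:
>   libmesh_assert(lm_mesh.is_replicated());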
>
> On Sat, May 4, 2019 at 5:10 PM Alexander Lindsay <alexlindsay...@gmail.com> wrote:
>
>> Are you using a replicated or a distributed mesh? A distributed mesh will
>> not have the same active elements on every processor.
>>
>> > On May 4, 2019, at 1:52 PM, Renato Poli <rebp...@gmail.com> wrote:
>> >
>> > Hi Roy
>> >
>> > I found what is breaking the flow.
>> > Please consider the code below.
>> >
>> > I thought the "active_elements" range was the same across processors, but
>> > it seems that is not the case?
>> > Then the point_locator (which is a collective operation, right?) breaks
>> > the sync across processors.
>> >
>> > My code is meant to map values from one mesh onto another.
>> > What is the best construct for that?
>> > Should I use the "elements_begin/end" iterators instead?
>> >
>> > === CODE
>> > MBcit el     = mesh.active_elements_begin();
>> > MBcit end_el = mesh.active_elements_end();
>> > for ( ; el != end_el; ++el) {
>> >      ...
>> >      UniquePtr<PointLocatorBase> plocator = mesh.sub_point_locator();
>> >      elem = (*plocator)( pt );
>> >     ...
>> > }
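>> >
>> > A minimal sketch of one possible restructuring, assuming the same "mesh",
>> > "MBcit" typedef and per-element target point "pt" as above: as this thread
>> > suggests, sub_point_locator() triggers collective communication when the
>> > locator is first built, so it is constructed once, before the loop, and
>> > the loop here runs over local elements only. The centroid() call is just a
>> > stand-in for however "pt" is actually computed. Whether the loop should
>> > cover local or all active elements depends on what the mapping needs; the
>> > key point is hoisting the collective construction out of any
>> > rank-dependent loop.
>> >
>> > === CODE
>> > // Built once, collectively, so every processor reaches this call.
>> > UniquePtr<PointLocatorBase> plocator = mesh.sub_point_locator();
>> > plocator->enable_out_of_mesh_mode();  // return nullptr instead of erroring
>> >                                       // for points not found locally
>> >
>> > MBcit el     = mesh.active_local_elements_begin();
>> > MBcit end_el = mesh.active_local_elements_end();
>> > for ( ; el != end_el; ++el) {
>> >      const Point pt = (*el)->centroid();   // stand-in target point
>> >      const Elem * src_elem = (*plocator)( pt );
>> >      if (src_elem) {
>> >          // ... map the value from src_elem onto *el ...
>> >      }
>> > }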
>> >
>> >
>> >> On Fri, May 3, 2019 at 9:30 PM Renato Poli <rebp...@gmail.com> wrote:
>> >>
>> >> Thanks.
>> >>
>> >> Should I call "parallel_object_only()" throughout the code to check
>> >> where it loses sync?
>> >> Is there a smarter way to do that?
>> >> What can GDB do for me?
>> >> Parallel debugging is really new to me...
>> >>
>> >> On Fri, May 3, 2019 at 7:34 PM Stogner, Roy H <royst...@ices.utexas.edu> wrote:
>> >>
>> >>>
>> >>>> On Fri, 3 May 2019, Renato Poli wrote:
>> >>>>
>> >>>> I see a number of error messages, shown below.
>> >>>> I am struggling to understand what they mean and how to move forward.
>> >>>> They appear to be related to manually setting a system solution and
>> >>>> closing the "solution" vector afterwards.
>> >>>> Any ideas?
>> >>>>
>> >>>> Assertion `(this->comm()).verify(std::string("./include/libmesh/petsc_vector.h").size())' failed.
>> >>>> Assertion `(this->comm()).verify(std::string("src/mesh/mesh_base.C").size())' failed.
>> >>>> [Assertion `(this->comm()).verify(std::string("./include/libmesh/petsc_vector.h").size())' failed.
>> >>>> [2] Assertion `(this->comm()).verify(std::string("./include/libmesh/petsc_vector.h").size())' failed.
>> >>>> [0] ./include/libmesh/petsc_vector.h, line 812, compiled Feb 22 2019 at 17:56:59
>> >>>> [1] src/mesh/mesh_base.C, line 511, compiled Feb 22 2019 at 17:55:09
>> >>>> ./include/libmesh/petsc_vector.h, line 812, compiled Feb 22 2019 at 17:56:59
>> >>>> application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
>> >>>> application called MPI_Abort(MPI_COMM_WORLD, 1) - process 1
>> >>>> application called MPI_Abort(MPI_COMM_WORLD, 1) - process 2
>> >>>>
>> >>>> Thanks,
>> >>>> Renato
>> >>>
>> >>> You're running in parallel, but your different processors have gotten
>> >>> out of sync.  At least 1 is at mesh_base.C line 511, and at least 2 or
>> >>> 3 are at petsc_vector.h 812.  Are you not calling PetscVector::close()
>> >>> on every processor, perhaps?  Then the missing processor would
>> >>> continue to whatever the next parallel-only operation is and the
>> >>> dbg/devel mode check for synchronization would fail.
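>> >>>
>> >>> A minimal sketch of that pattern, assuming "system" is the libMesh
>> >>> System whose solution is being set; the per-rank condition, dof index
>> >>> and value are hypothetical, only the placement of close() matters:
>> >>>
>> >>> === CODE
>> >>> // Problematic: close() is collective, but only some ranks reach it.
>> >>> if (this_rank_changed_something)          // hypothetical condition
>> >>>   {
>> >>>     system.solution->set(dof_index, value);
>> >>>     system.solution->close();             // other ranks never get here
>> >>>   }
>> >>>
>> >>> // Safer: guard only the local modifications, then close() on every rank.
>> >>> if (this_rank_changed_something)
>> >>>   system.solution->set(dof_index, value);
>> >>> system.solution->close();                 // reached by all processors
>> >>> system.update();                          // refresh current_local_solution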
>> >>> ---
>> >>> Roy
>> >>>
>> >>
>> >
>>
>

_______________________________________________
Libmesh-users mailing list
Libmesh-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/libmesh-users
