Whoops - sorry - I mixed up two ideas. It isn't the mesh that's serialized to every processor for Nemesis... it's the solution vector (which may be quite large).
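To make the memory difference concrete, here's a minimal standalone MPI sketch (not libMesh code; the vector sizes and names are made up) contrasting "gather the global vector onto one rank" with "every rank holds the global vector":

#include <mpi.h>
#include <vector>
#include <cstddef>

int main (int argc, char ** argv)
{
  MPI_Init(&argc, &argv);

  int rank = 0, nprocs = 1;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

  const int n_local = 100000;                       // dofs owned by this rank (made up)
  std::vector<double> local(n_local, double(rank)); // stand-in for the local solution

  const std::size_t n_global = std::size_t(n_local) * nprocs;

  // Pattern 1: gather the whole solution onto a single rank.  Only that
  // rank pays the O(n_global) memory cost; this is all a serial output
  // format needs.
  std::vector<double> global;
  if (rank == 0)
    global.resize(n_global);
  MPI_Gather(local.data(), n_local, MPI_DOUBLE,
             global.data(), n_local, MPI_DOUBLE,
             0, MPI_COMM_WORLD);

  // Pattern 2: additionally broadcast the serialized vector, so EVERY
  // rank ends up holding all of it.  Memory per rank jumps from
  // O(n_local) to O(n_global).
  global.resize(n_global);
  MPI_Bcast(global.data(), int(n_global), MPI_DOUBLE, 0, MPI_COMM_WORLD);

  MPI_Finalize();
  return 0;
}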
Look at lines 49-59 here: https://github.com/libMesh/libmesh/blob/master/src/mesh/mesh_output.C

If it's a parallel mesh format we _broadcast_ the solution vector to all processors!  This may be an exceedingly bad idea!

So... overall, depending on how many processes per node you're running and how many degrees of freedom per node you're using... you may use less memory by using Exodus output with ParallelMesh.

All of this is stuff we should fix... we just don't have the time for it at the moment...

Derek

On Sat, May 10, 2014 at 10:44 AM, Manav Bhatia <bhatiama...@gmail.com> wrote:

> Thanks for the heads up, Derek.
>
> It was about a year ago that I started using NemesisIO for my work, since
> I thought that ExodusII needed the mesh to be serialized for output and
> Nemesis did not. But you are saying that it is actually the opposite!
>
> I am a bit confused about this, since the constructor for NemesisIO passes
> true for _is_parallel_format to MeshOutput, while ExodusII defaults to the
> false argument. Given this, wouldn't MeshOutput::write_equation_systems()
> then serialize the whole mesh for ExodusII and not for Nemesis?
>
> Or am I missing something?
>
> Manav
>
>
> On May 10, 2014, at 12:11 PM, Derek Gaston <fried...@gmail.com> wrote:
>
> Just a heads up: Nemesis output is actually MUCH less efficient than
> Exodus output currently. During Nemesis output the mesh is actually
> serialized (copied) to ALL processors! That is almost exactly the
> opposite of what you probably want to happen! This is a long-standing
> issue that I took steps toward fixing a while ago, but that work hasn't
> been finished off.
>
> Exodus output, on the other hand, now does a really efficient parallel
> solution reduction where the total solution vector only ends up on
> processor 1. See some of the discussion here:
> https://github.com/libMesh/libmesh/pull/190
>
> Long story short: you currently don't gain anything by using Nemesis
> output with libMesh...
>
> Now - I know that doesn't solve your problem, I just thought you might
> want to know that info.
>
> As for your actual issue... hopefully Roy or John will weigh in with some
> troubleshooting tips...
>
> Derek
>
>
> On Sat, May 10, 2014 at 9:44 AM, Manav Bhatia <bhatiama...@gmail.com> wrote:
>
>> I use Exodus to read in the mesh and Nemesis to write the output data.
>>
>> I have long been using ParallelMesh with AMR without problems, but lately
>> my ParallelMesh with Tet4 elements, refined through a couple of AMR steps,
>> has been throwing exceptions. But that is a matter for separate discussion.
>>
>> For now, I have replaced the ParallelMesh with a SerialMesh and am trying
>> to do the same thing: read in using ExodusII_IO, run AMR, and write the
>> output using Nemesis. All of this is on two processors.
>>
>> Now, in the last step, I am getting the error that I described in my
>> previous email, which I am trying to decipher.
>>
>> Manav
>>
>>
>> On May 10, 2014, at 11:00 AM, Derek Gaston <fried...@gmail.com> wrote:
>>
>> I'm not sure how (or why?) you're using SerialMesh with Nemesis...
>> Nemesis is for reading parallel meshes...
>>
>> What exactly are you trying to do here?
>>
>> Derek
>>
>>
>> On Sat, May 10, 2014 at 8:52 AM, Manav Bhatia <bhatiama...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I am curious whether SerialMesh with AMR uses the RemoteElem. In the
>>> following function (from elem.C), I have marked the lines of interest
>>> with "****".
>>>
>>> This method gets called from line 2180 of nemesis_io_helper.C, which
>>> wants to find active elements that share a common side but also live on
>>> the local processor. However, the method in elem.C does not seem to
>>> distinguish elements that have a different pid assigned but are not
>>> remote elems: Elem::is_remote() is false by default.
>>>
>>> Eventually, the error on line 2211 of nemesis_io_helper.C is thrown.
>>>
>>> Maybe it would make sense to add a check for (processor_id !=
>>> this->processor_id) in Elem::is_remote()? (A sketch of what I mean
>>> follows the function below.) Any thoughts?
>>>
>>> Thanks,
>>> Manav
>>>
>>>
>>> void Elem::active_family_tree_by_side (std::vector<const Elem*>& family,
>>>                                        const unsigned int s,
>>>                                        const bool reset) const
>>> {
>>>   // The "family tree" doesn't include subactive elements
>>>   libmesh_assert(!this->subactive());
>>>
>>>   // Clear the vector if the flag reset tells us to.
>>>   if (reset)
>>>     family.clear();
>>>
>>>   libmesh_assert_less (s, this->n_sides());
>>>
>>>   // Add an active element to the family tree.
>>>   if (this->active())
>>>     family.push_back(this);
>>>
>>>   // Or recurse into an ancestor element's children.
>>>   // Do not clear the vector any more.
>>>   else
>>>     for (unsigned int c=0; c<this->n_children(); c++)
>>>       **** if (!this->child(c)->is_remote() && this->is_child_on_side(c, s)) ****
>>>         this->child(c)->active_family_tree_by_side (family, s, false);
>>> }
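>>>
>>> Concretely, the kind of check I have in mind would look something like
>>> the following (an untested sketch, not actual libMesh code; the helper
>>> name and the way the local rank gets passed in are made up):
>>>
>>> #include "libmesh/elem.h" // Elem, processor_id_type
>>>
>>> using namespace libMesh;
>>>
>>> // Hypothetical helper: treat an element as "remote" for the purposes
>>> // of the side search if it is either a true RemoteElem placeholder or
>>> // is assigned to a different processor than the one we are running on.
>>> inline bool is_remote_or_off_processor (const Elem & elem,
>>>                                         const processor_id_type local_pid)
>>> {
>>>   return elem.is_remote() || (elem.processor_id() != local_pid);
>>> }
>>>
>>> The loop in active_family_tree_by_side() would then call this helper
>>> in place of the bare is_remote() check on the marked line.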