On Fri, May 31, 2013 at 9:08 AM, Manav Bhatia <[email protected]> wrote:

> On Thu, May 30, 2013 at 5:50 PM, Kirk, Benjamin (JSC-EG311) <
> [email protected]> wrote:
>
> > On May 30, 2013, at 4:44 PM, "Manav Bhatia" <[email protected]>
> wrote:
> >
> > > At this stage, should I attempt to read the .xdr file into my code with
> > a ParallelMesh data structure? Would the mesh from this .xdr file be read
> > in parallel, thereby reducing the memory footprint?
> >
> > That is the theory. The Xdr I/O code will read the mesh in chunks.
> > Also, if the partitioning was written to the file, each processor
> > should only accept the pieces it owns.
> >
> > That is the theory. If it doesn't work right I'd love to get your
> > mesh and try to make it work right.
> >
> > -Ben
> >
> >
> > I was looking through the Nemesis_IO class, and the PDF document for the
> same in contrib/exodus. It seems like Nemesis is able to operate under
> two paradigms: first, a single Exodus file with a companion partitioning
> file, and second, a set of Exodus files that can be read in
> parallel by individual processors.
>
> Does Nemesis_IO handle both of these?
>

I'm not sure whether we handle the first case; I don't think we do. Doing
some sort of intelligent parallel I/O read would be nice.


>
> It seems like the second scheme is tied to the number of processors, since
> the number of Exodus files needs to match the processor count. If this is
> true, then the first one appears to be more flexible for handling a
> changing number of processors across different runs.
>

Yes, you understand this part correctly: you have to partition the mesh
ahead of time on a large-memory machine for each number of processors you
plan to use.


>
> Also, if I intend to use ParallelMesh with one of these two schemes, each
> processor would store its own portion of the mesh even while reading.
>

Again, in the typical working case that we use (the second), you do in fact
store only the part of the mesh that you need on each processor.


> Correct? Following the initial read, if I intend to re-partition the
> (ParallelMesh) mesh object with Parmetis and redistribute the elements,
> would the data-structure support it?
>

I believe so, but I'll let Ben or John chime in on this question.
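For reference, the whole read-then-repartition workflow looks roughly like
the sketch below. This is untested and the exact constructor and method
signatures may differ between libMesh versions, so treat it as an outline
rather than working code:

```cpp
#include "libmesh/libmesh.h"
#include "libmesh/parallel_mesh.h"
#include "libmesh/nemesis_io.h"
#include "libmesh/parmetis_partitioner.h"

using namespace libMesh;

int main (int argc, char** argv)
{
  LibMeshInit init (argc, argv);

  // ParallelMesh keeps only each processor's piece of the mesh in memory.
  ParallelMesh mesh (init.comm());

  // Nemesis_IO reads the per-processor "spread" files for this run's
  // processor count, e.g. mesh.e.16.00 ... mesh.e.16.15 on 16 processors.
  Nemesis_IO nemesis_io (mesh);
  nemesis_io.read ("mesh.e");

  mesh.prepare_for_use ();

  // Repartition with ParMETIS and redistribute the elements.
  ParmetisPartitioner partitioner;
  partitioner.partition (mesh, mesh.n_processors());

  return 0;
}
```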

Now to answer the other question you might have, "How do I split the mesh
to begin with?"
We download the Seacas tools suite: http://sourceforge.net/projects/seacas/.
 There are several useful Exodus utilities in that suite.  One of them is
called "loadbal" which will split a source mesh into the desired number of
chunks.  We typically create several different splits so we can run on
several different numbers of processors.  You can even keep all of these
splits in one folder since the naming scheme will keep them from mixing
together.  The Nemesis_IO class will use the correct "spread" for the run
you are performing and write output files in a similar manner.  The nice
thing is that you can return to your big memory machine when you are
through to perform visualization.  Paraview and other viz packages will
automatically read all of the output files at once and piece the final
result into one for you to view.
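To make the naming scheme concrete: the spread files follow (as far as I
know; check the files loadbal actually produces) the Seacas convention of
base.<nprocs>.<rank>, with the rank zero-padded to the width of the
processor count. A small demo of why differently-sized splits can share
one folder:

```python
def spread_file_names(base, n_procs):
    """Generate the per-processor Nemesis file names for one split.

    Assumes the common Seacas convention: the rank suffix is zero-padded
    to the number of digits in n_procs, so splits for different processor
    counts can coexist in one folder without colliding.
    """
    width = len(str(n_procs))
    return ["%s.%d.%0*d" % (base, n_procs, width, rank)
            for rank in range(n_procs)]

# A 4-way and a 16-way split of the same mesh in one folder:
print(spread_file_names("mesh.e", 4))   # mesh.e.4.0 ... mesh.e.4.3
print(spread_file_names("mesh.e", 16))  # mesh.e.16.00 ... mesh.e.16.15
```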

Cody


>
> Thanks,
> Manav
>
> ------------------------------------------------------------------------------
> Get 100% visibility into Java/.NET code with AppDynamics Lite
> It's a free troubleshooting tool designed for production
> Get down to code-level detail for bottlenecks, with <2% overhead.
> Download for free and get started troubleshooting in minutes.
> http://p.sf.net/sfu/appdyn_d2d_ap2
> _______________________________________________
> Libmesh-users mailing list
> [email protected]
> https://lists.sourceforge.net/lists/listinfo/libmesh-users
>
