On Fri, Sep 27, 2013 at 12:10 PM, Roy Stogner <[email protected]> wrote:
>
> On Fri, 27 Sep 2013, permcody . wrote:
>
> I propose raising the file format version number and ALWAYS writing
>> the new field under this new version number whether libMesh is
>> configured with --enable-unique-id or not. We'll simply ignore the
>> value if it's not configured.
>>
>
> Don't you guys use ExodusII more than XDR for restart files now? Now
> that ExodusII-on-HDF5 is an option, I'm kind of curious as to whether
> it'd be possible to just pack all our currently-XDR-specific data into
> extensions to that.
>
Yeah, we use ExodusII most of the time because we can visualize it.
Putting all of the rest of the meta-data in that format does sound like a
good idea moving forward, but that's not what we have today. Our users
make use of XDA when they want to do restart. That's really what we are
aiming for here. Either way, am I on base here with this proposal?
>
> One thing I'm going to look for is the impact in memory usage with
>> this extra id turned on. Right now our parallel_elems typically
>> occupy 40 bytes in the packed data structure (header_size=10 *
>> largest_id_type=4). With the new id turned on and defaulting to 8
>> bytes this will jump to 88 bytes (header_size=11 *
>> largest_id_type=8). When the data is in memory it certainly won't
>> double the size of the structure. It should add no more than 15 bytes
>> in a worst case scenario and most likely just 8 (data alignment
>> issues). I suppose there are several ways we could try to reduce
>> the impact of this larger number in the packed structure. We could
>> potentially split the representation of the 8 byte number into two 4
>> byte numbers to keep things lean. Also we could experiment with
>> defaulting the unique id to 4 bytes. I'll look at this if it's
>> necessary.
>>
>
> If we're worried about memory usage, we can change the packed_range
> algorithms to communicate the largest ranges in chunks. Making MPI
> messages *too* small kills performance, sure, but IIRC even on modern
> computers MPI stacks hit their peak bandwidth by the time your buffer
> sizes are O(10^5) bytes; that's nothing.
>
> If we're worried about bandwidth usage, that's something we could
> improve upon, though. I'd love to rewrite the range packing stuff to
> eliminate the always-0 bytes; but at the time I first wrote it I was
> trying to get ParallelMesh redistribution working and I figured having
> simple code was much preferable to having efficient code for the first
> draft of that.
Fine with me!
>
>
> I plan to run some large simulations today with adaptivity to see if
>> I can break the 4 byte limit.
>>
>
> I'd expect transient AMR to blow through 4 billion unique ids pretty
> quickly.
Probably. I'll share my results with the list when I get there.
>
>
> Finally, back to the XDR issue. If I'm going to be making a change
>> to the format, I'd like to go ahead and get the
>> "block/sideset/nodeset" names written to the file as well. This
>> will lead to less version specific logic.
>>
>
> Agreed.
> ---
> Roy
_______________________________________________
Libmesh-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/libmesh-devel