Hi Ben,

On 11/13/2012 09:53 AM, Kirk, Benjamin (JSC-EG311) wrote:
> Thanks for the info.  Is this something that I could test, or could you send 
> me a patch for one of the examples to crank them up to a big problem size?

I'll send you a patch for one of the examples to make a large RB IO file 
like the one I mentioned.
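
In the meantime, the change I have in mind is basically just cranking up the
mesh resolution and the number of basis functions in one of the reduced_basis
examples' input files, roughly along these lines (parameter names quoted from
memory, so the actual .in file may differ slightly):

    # reduced_basis_ex1.in -- approximate, untested
    n_elem = 400    # finer mesh => many more dofs per basis vector
    Nmax   = 200    # more basis functions => more vectors to serialize

That should make the write counterpart of RBEvaluation::read_in_vectors()
produce a few hundred MB of .xdr data, comparable to the 301MB case quoted
below.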


> Also, what type of computer / filesystem are you testing this on?  Straight 
> local disk or what?

This was on my laptop with Ubuntu 12.04, straight local disk...

> 301MB/19 = 15.8 MB/sec, so I expect there is still some room for improvement.
>
> But there is a lot of overhead too - indexing dof objects, assigning values, 
> copies so as not to overload processor 0, etc..
>
> I'll benchmark the raw underlying XDR I/O performance - how quickly Xdr can 
> write say 500MB - so we know how much higher that number could reasonably go.

OK, that'll be interesting.
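
For what it's worth, the raw-throughput test I'd picture is something like the
sketch below -- completely untested, it just streams ~500MB of doubles through
a single Xdr object in ENCODE mode and times the write:

    // xdr_write_bench.C -- rough, untested sketch
    #include "libmesh/libmesh.h"
    #include "libmesh/xdr_cxx.h"
    #include "libmesh/enum_xdr_mode.h"

    #include <sys/time.h>
    #include <cstddef>
    #include <iostream>
    #include <vector>

    int main (int argc, char ** argv)
    {
      libMesh::LibMeshInit init (argc, argv);

      // ~500 MB worth of doubles
      const std::size_t n = 500*1024*1024/sizeof(double);
      std::vector<double> buf (n, 3.14159);

      timeval t0, t1;
      gettimeofday (&t0, NULL);
      {
        libMesh::Xdr io ("xdr_bench.xdr", libMesh::ENCODE);
        io.data (buf, "benchmark buffer");
      } // Xdr destructor flushes and closes the file
      gettimeofday (&t1, NULL);

      const double secs = (t1.tv_sec - t0.tv_sec)
                        + 1.e-6*(t1.tv_usec - t0.tv_usec);
      const double mb   = n*sizeof(double)/1.e6;
      std::cout << "Wrote " << mb << " MB in " << secs
                << " sec (" << mb/secs << " MB/sec)" << std::endl;

      return 0;
    }

Comparing that number against the ~15.8 MB/sec we're seeing through
read_in_vectors() should tell us how much of the gap is XDR/disk itself versus
the dof indexing and copying overhead you mention.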

David

On Nov 12, 2012, at 8:32 PM, David Knezevic <dkneze...@seas.harvard.edu> 
wrote:
>> On 11/12/2012 09:23 PM, David Knezevic wrote:
>>> On 11/06/2012 05:24 PM, Kirk, Benjamin (JSC-EG311) wrote:
>>>> Dave, my ultimate fix for you will be extending this to write
>>>>
>>>> for node…
>>>>     for vec…
>>>>       for var…
>>>>         for comp…
>>>>
>>>> or some permutation thereof, which will necessitate writing all the 
>>>> reduced basis vectors to a single file, in order to get maximum 
>>>> performance.  I assume that's OK with you?
>>>>
>>> Hi Ben,
>>>
>>> I've tried out the new RB IO on a problem with a "merged" .xdr file that
>>> is 301MB --- previously I read this data in as many separate .xdr files
>>> each of size 130kB.
>>>
>>> In serial, it took about 22 seconds to do the read in the merged case
>>> (i.e. RBEvaluation::read_in_vectors() took 22 seconds) vs. about 42
>>> seconds in the "separated" case.
>>>
>>> So it gives a 2x speedup, which is very helpful indeed. Though I recall
>>> that a 30x speedup didn't seem out of the question --- are there
>>> possibly some parameters (block sizes, etc, like before?) that I can try
>>> tweaking to see if we can speed up the merged case further?
>> P.S. It occurred to me that it's worth benchmarking
>> RBEvaluation::read_in_vectors in a bit more detail: it turns out that 19
>> of the 22 seconds is spent in System::read_serialized_vectors(). (The
>> last 3 seconds is mostly due to the vector allocation.)
>>
>> David
>>
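
P.P.S. Just so we're picturing the same thing: my reading of the
node/vec/var/comp ordering you proposed is the toy loop below. This is only an
illustration of the data layout (simplified node-major dof numbering, no
parallel communication), not the actual libMesh implementation:

    // Toy illustration, not libMesh code: pack several "basis vectors"
    // into one flat buffer with node outermost, so each node's data for
    // *all* vectors is contiguous and the file is written in one pass.
    #include <cstddef>
    #include <vector>

    int main ()
    {
      const std::size_t n_nodes = 1000, n_vecs = 50, n_vars = 2, n_comps = 1;
      const std::size_t dofs_per_vec = n_nodes*n_vars*n_comps;

      // Each basis vector stored separately in memory, as in RBEvaluation.
      std::vector<std::vector<double> > vecs
        (n_vecs, std::vector<double>(dofs_per_vec, 1.));

      std::vector<double> file_image;
      file_image.reserve (n_vecs*dofs_per_vec);

      for (std::size_t node=0; node<n_nodes; ++node)
        for (std::size_t vec=0; vec<n_vecs; ++vec)
          for (std::size_t var=0; var<n_vars; ++var)
            for (std::size_t comp=0; comp<n_comps; ++comp)
              file_image.push_back
                (vecs[vec][node*n_vars*n_comps + var*n_comps + comp]);

      // file_image would then go to a single Xdr stream in one shot.
      return 0;
    }

If that's the right picture, then the merged 301MB file should eventually move
at something close to raw XDR speed once the indexing overhead is amortized.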


