Hi all,

I wanted to pick up on this thread again. I gather that a change was 
made to the block size in read_serialized_data that led to a significant 
speedup for the use case that Jens was looking at.

But this is still a bottleneck for a lot of the reduced basis code, so 
I'd like to improve it further. Ben wrote:


On 08/17/2012 02:13 PM, Kirk, Benjamin (JSC-EG311) wrote:
> But beyond that there is likely a lot of libMesh overhead too - string 
> comparisons, memory allocation, etc... I'm wondering if you need a 
> specialized I/O implementation where all the vectors are strung 
> together consecutively and then streamed to/from disk as one big 
> super-vector? That will be much, much faster...


I agree, I think we do need a specialized I/O implementation that 
concatenates vectors. The key parts of the RB code related to this are 
at lines 873 and 940 of rb_evaluation.C. We currently write (or read) a 
bunch of vectors one after the other, but it'd obviously be much better 
to string these all together and stream them to one big file.

Any thoughts on an API?

System::write_list_of_vectors(Xdr& io, std::vector<NumericVector<Number>*> list_of_vectors)

and

System::read_list_of_vectors(Xdr& io, std::vector<NumericVector<Number>*> list_of_vectors)

?

Also, any thoughts on how to get started on the implementation would be 
very helpful. I expect a lot of the logic would carry over from 
write_serialized_vector/read_serialized_vector.

Thanks,
David

_______________________________________________
Libmesh-devel mailing list
Libmesh-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/libmesh-devel
