Hi,

I hope this is the correct list, and the question hasn't been asked before.

I've read through most of the material on the wiki and the website, and I
am currently in the process of building a proof-of-concept Lustre cluster.
One thing isn't clear to me about the aggregated throughput figures in the
FAQ: http://www.clusterfs.com/faq.html

The stated throughput for a 64-bit Linux OSS in the FAQ is:
 Dual-NIC gig-e on a 64-bit OSS: 220 MB/s

We're hoping to use these OSSs to provide access to a large collection
of rather small files.
Most of them are rendered images in various formats; the typical file
size ranges from 1 KB to 100 KB.

The total volume is expected to grow well beyond 10 TB over time; we're
currently at 4 TB.
We would like to achieve the fastest throughput possible.

If I understood everything correctly, parts of a file can / will be
stored on multiple OSSs/OSTs?
Because of this, the aggregated throughput for a single file can be
higher than the maximum throughput per OSS?
What is the smallest element of a file that can be spread over multiple
OSSs/OSTs?
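
To make my current mental model concrete, here is a rough Python sketch of
how I imagine round-robin striping maps byte ranges of a file to OSTs. The
1 MB stripe size and the stripe count of 4 are purely assumed placeholder
values, not figures taken from the FAQ or documentation, so please correct
me if the picture is wrong:

    # Illustrative sketch of my understanding of striping.
    # STRIPE_SIZE and STRIPE_COUNT are assumed placeholder values,
    # not taken from the Lustre FAQ or docs.

    STRIPE_SIZE = 1 * 1024 * 1024   # assumed stripe size: 1 MB
    STRIPE_COUNT = 4                # assumed number of OSTs per file

    def ost_for_offset(offset: int) -> int:
        """Index of the OST (within the file's stripe set) that would
        hold the byte at this offset, assuming simple round-robin
        striping in units of STRIPE_SIZE."""
        stripe_index = offset // STRIPE_SIZE
        return stripe_index % STRIPE_COUNT

    # A 100 KB file would fall entirely inside the first stripe,
    # i.e. on a single OST under these assumptions:
    print(ost_for_offset(0))             # -> 0
    print(ost_for_offset(100 * 1024))    # -> 0

    # A 5 MB file would be spread over multiple OSTs:
    print(ost_for_offset(3 * 1024 * 1024 + 1))  # -> 3

If that picture is roughly right, each of our 1 KB - 100 KB files would fit
inside a single stripe and therefore sit on a single OST, which is the main
reason I'm asking about the smallest unit that can be spread out.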

Best Regards,

Ramon van Alteren
