Ramon van Alteren wrote:
Hi,

I hope this is the correct list, and the question hasn't been asked before.

I've read through most of the material on the wiki and the website, and I am
currently in the process of building a proof-of-concept Lustre cluster.
One thing that isn't clear to me is the aggregated throughput figures in the
FAQ: http://www.clusterfs.com/faq.html

The stated throughput for a 64-bit Linux OSS in the FAQ is:
 Dual-NIC gig-e on a 64-bit OSS: 220 MB/s

We're hoping to use these OSSes to provide access to a large collection
of rather small files.
Most of them are rendered images in various formats; the typical file size
ranges from 1 KB to 100 KB.

The total volume is currently around 4 TB and is expected to grow well
beyond 10 TB over time.
We would like to achieve the highest throughput possible.

If I understood everything correctly, parts of a file can/will be
stored on multiple OSSes/OSTs?
Because of this, the aggregated throughput for a single file can be higher
than the maximum throughput of a single OSS?
What is the smallest element of a file that can be spread over multiple
OSSes/OSTs?

You can stripe, but with files that small you'll see no benefit, and you really don't want to stripe unless you have to.

One thing to watch with that many small files is your metadata inodes. 2 TB of disk can store 2 billion inodes (that works out to roughly 1 KB per inode), which means a maximum of 2 billion files. Since 8 TB is the largest disk size supported by most distros, you're limited to about 8 billion files in one Lustre filesystem; after that you have to start another filesystem.
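For what it's worth, striping is controlled per directory (inherited by new files) or per file with the lfs utility on a client. A rough sketch follows; the exact option spelling differs between Lustre releases, so check lfs help setstripe on your version, and the mount point and paths below are just made-up examples:

  # keep each small image on a single OST (stripe count 1)
  lfs setstripe -c 1 /mnt/lustre/images

  # see which OST(s) a given file actually landed on
  lfs getstripe /mnt/lustre/images/frame_0001.png

To answer the earlier question: the smallest piece of a file that can land on a different OST is one stripe. If memory serves, the stripe size must be a multiple of 64 KB and defaults to 1 MB, so a 1-100 KB file will sit on a single OST no matter what the stripe count is.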

Daniel

Best Regards,

Ramon van Alteren

_______________________________________________
Lustre-discuss mailing list
[email protected]
https://mail.clusterfs.com/mailman/listinfo/lustre-discuss

