Ah, my previous message described RAM usage during the operation itself, i.e.,
random-reading the entire filesets; sorry about that.
Once the files are stored, cmds uses only a tiny amount, ~100MB.
For cosd, each cosd uses 150-200MB during the operation (for both writes and reads).

Still, where performance is concerned, the previous message stands:
performance decreases if that minimum amount of RAM isn't available.

cheers

On Sun, Jun 12, 2011 at 20:41, djlee064 <[email protected]> wrote:
> Based on empirical measurements, starting from filesets with a (real)
> number of files and focusing mainly on small files, I found a rough
> relationship. All filesets have the same distribution, i.e., ~80% of
> files are <4MB, mostly small, all at 1x replication.
> For MDS RAM:
>
> You need about:
> 0.6GB RAM to store 0.03 million files (fileset volume 1.2TB)
> 1.2GB to store 0.065 million files (fileset volume 2.4TB)
> 1.8GB to store 0.13 million files (fileset volume 4.8TB)
>
> Fortunately, the ratio of files per GB increases (it is ~0.07 million
> per GB at the 0.13 million mark); hopefully each GB will support
> 0.1 million files as more files are stored.
> At that ratio, 18GB = 1.8 million files,
> 180GB = 18 million,
> 1800GB = 180 million (this is about 6.64PB of data),
>
> So to support that amount, we need 100 MDS nodes with 18GB each purely
> for cmds; including memory for the OS, etc., maybe 20-24GB per node.
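
The rough scaling in the quoted message can be sketched as a quick back-of-the-envelope calculation. This is only an illustration of the arithmetic above, not Ceph code; the `FILES_PER_GB` ratio is the hoped-for 0.1 million files per GB of cmds RAM, and the average file size is derived from the 4.8TB / 0.13 million files measurement:

```python
import math

# Assumed figures from the measurements quoted above.
FILES_PER_GB = 0.1e6                # hoped-for ratio at larger scales
AVG_FILE_MB = 4.8e6 / 0.13e6        # ~36.9MB/file for this distribution

def mds_ram_gb(n_files):
    """Total cmds RAM (GB) needed for n_files, per the rough ratio."""
    return n_files / FILES_PER_GB

def mds_nodes(n_files, ram_per_node_gb=18):
    """Number of MDS nodes, given a cmds RAM budget per node."""
    return math.ceil(mds_ram_gb(n_files) / ram_per_node_gb)

print(mds_ram_gb(180e6))  # 1800.0 GB of cmds RAM for 180 million files
print(mds_nodes(180e6))   # 100 nodes at 18GB of cmds RAM each
```

For the 50-80 million files asked about below, the same arithmetic gives roughly 500-800GB of total cmds RAM.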
>
> Cheers
>
> On Sat, Jun 11, 2011 at 06:47, Fyodor Ustinov <[email protected]> wrote:
>>
>> Hi!
>>
>> Which configuration would you recommended for cluster with 50-80 million 
>> files?
>>
>> WBR,
>>    Fyodor.
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>> the body of a message to [email protected]
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>