>>>>> "Poul-Henning" == Poul-Henning Kamp <[EMAIL PROTECTED]> writes:

Poul-Henning> In message <[EMAIL PROTECTED]>, Petri Helenius
Poul-Henning> writes:
>> The fsck problem should be gone with fewer inodes and fewer blocks:
>> if I read the code correctly, memory is consumed according to the
>> number of used inodes and blocks, so having something like 20000
>> inodes and 64k blocks should allow you to build a 5-20T filesystem
>> and actually fsck it.
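The memory claim above can be sketched back-of-the-envelope. This is a rough model, not fsck_ffs's actual bookkeeping: it assumes roughly one byte of in-core state per inode and one bitmap bit per block, which are guessed constants.

```python
# Rough model of fsck memory use: ~1 bitmap bit per block plus
# ~1 byte of state per inode.  The constants are assumptions for
# illustration, not fsck_ffs's real data structures.
def fsck_mem_bytes(fs_bytes, block_size, inodes):
    blocks = fs_bytes // block_size
    blockmap = blocks // 8       # one bit per block
    inode_state = inodes         # ~one byte of state per inode
    return blockmap + inode_state

# 10 TB filesystem, 64 KiB blocks, 20000 inodes (figures from the mail)
mem = fsck_mem_bytes(10 * 2**40, 64 * 2**10, 20_000)
print(mem // 2**20, "MiB")      # ~20 MiB under this model
```

Under those assumptions, even a 10T filesystem stays at tens of megabytes of fsck memory once the inode and block counts are kept down, which is the point being made.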

Poul-Henning> I am not sure I would advocate 64k blocks yet.

Poul-Henning> I tend to stick with 32k block, 4k fragment myself.

Poul-Henning> This is a problem which is in the cross-hairs for 6.x

That reminds me... has anyone thought of designing the system to allow
more than 8 frags per block?  For large-file performance, we're pushing
the block size up dramatically, on the assumption that large disks will
contain large files.

... but I haven't seen that, myself.  Large arrays that we run tend to
hold multiple system images (for diskless or semi-diskless operation)
and many thousands of users ... all with their usual complement of
small files.

It strikes me that driving the block size up (as far as 1M) while
allowing 256 or so fragments per block might become appropriate.
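The arithmetic behind that suggestion can be sketched quickly. A small file's tail occupies one fragment, so average slack per file is about half a fragment (assuming uniformly distributed tail sizes); keeping the fragment count at 8 while the block grows to 1M makes that slack explode, while 256 fragments keeps it where it is today:

```python
# Why more fragments per block matter as the block size grows.
# Slack estimate assumes uniformly distributed file tail sizes,
# so ~half a fragment wasted per small file on average.
def frag_size(block_size, frags_per_block):
    return block_size // frags_per_block

for bs, fpb in [(32 * 2**10, 8),    # today's 32k/4k layout
                (2**20, 8),         # 1M blocks, 8 frags (128 KiB frags!)
                (2**20, 256)]:      # 1M blocks, 256 frags (4 KiB frags)
    fs = frag_size(bs, fpb)
    print(f"block {bs >> 10} KiB, {fpb:3d} frags -> "
          f"frag {fs >> 10} KiB, ~{fs // 2} bytes avg slack/file")
```

With 1M blocks and only 8 fragments, the fragment is 128 KiB and the average small file wastes ~64 KiB; at 256 fragments the fragment is back to 4 KiB, the same as the current 32k/4k layout.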

We probably also need to address disks with larger sector sizes soon,
but that's another issue altogether.


|David Gilbert, Independent Contractor.       | Two things can only be     |
|Mail:       [EMAIL PROTECTED]                    |  equal if and only if they |
|http://daveg.ca                              |   are precisely opposite.  |