[Warning: semi-useless information ahead]
On Wed, Sep 03, 2003 at 11:06:15AM +, Geoff Buckingham wrote:
> However I just read the newfs man page and am intrigued to know what
> effect the -g and -h options have.
Somewhere in -STABLE between 4.8-RELEASE and a month or so ago I recreated a
Poul-Henning Kamp writes:
> In message [EMAIL PROTECTED], Petri Helenius writes:
> > fsck problem should be gone with fewer inodes and fewer blocks since,
> > if I read the code correctly, memory is consumed according to used
> > inodes and blocks so
In message [EMAIL PROTECTED], David Gilbert writes:
> That reminds me... has anyone thought of designing the system to have
> more than 8 frags per block?  Increasingly, for large-file performance,
> we're pushing up the block size dramatically.  This is with the
> assumption that large disks will
David Gilbert wrote:
> Poul-Henning Kamp writes:
> > I am not sure I would advocate 64k blocks yet.
> > I tend to stick with 32k block, 4k fragment myself.
> That reminds me... has anyone thought of designing the system to have
> more than 8
Poul-Henning Kamp writes:
> I am not sure I would advocate 64k blocks yet.
> I tend to stick with 32k block, 4k fragment myself.
At what file system size do you recommend bumping the block size?
I've got a 226GB RAID array and right now it is using the default
newfs
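For reference, a minimal sketch of the non-default tuning Poul-Henning describes; the device name is a placeholder assumption, and newfs destroys whatever is on the target device, so this is shown as a command fragment rather than something to run as-is:

```shell
# 32k blocks / 4k fragments, the combination mentioned above.
# -b sets the block size and -f the fragment size, both in bytes.
# /dev/da0s1d is a hypothetical device name.
newfs -b 32768 -f 4096 /dev/da0s1d
```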
On Wed, Sep 03, 2003 at 11:06:15AM +, Geoff Buckingham wrote:
> However I just read the newfs man page and am intrigued to know what
> effect the -g and -h options have.
From newfs(8):
-g avgfilesize
        The expected average file size for the file system.
-h avgfpdir
        The expected average number of files per directory on the file
        system.
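As a sketch of using those two options together (device name and numbers are illustrative assumptions, not from the thread), hinting newfs that files will average ~500 MB with ~64 files per directory:

```shell
# -g: expected average file size in bytes; -h: expected average files
# per directory.  Both only tune the allocation/layout policy.
# /dev/da0s1d is a hypothetical device name; newfs is destructive.
newfs -g 524288000 -h 64 /dev/da0s1d
```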
Geoff Buckingham wrote:
> This is a big problem (no pun intended), my smallest requirement is
> still 5TB... what would you recommend?  The smallest file on the
> storage will be 500MB.
If your files are all going to be this large, I imagine you should look
carefully at what you do with inodes, block and
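The point about inodes can be made concrete with back-of-the-envelope arithmetic: newfs's -i option sets how many bytes of data space get one inode, so raising it sharply cuts the inode count fsck must track. The densities below are illustrative assumptions, not recommendations:

```shell
# Inodes allocated on a 5 TB filesystem at various -i densities
# (bytes of data space per inode).  All numbers are illustrative.
fs_bytes=$((5 * 1024 * 1024 * 1024 * 1024))
for density in 8192 65536 1048576; do
    printf -- '-i %d => %d inodes\n' "$density" $((fs_bytes / density))
done
```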
In message [EMAIL PROTECTED], Petri Helenius writes:
> fsck problem should be gone with fewer inodes and fewer blocks since,
> if I read the code correctly, memory is consumed according to used
> inodes and blocks, so having like 2 inodes and 64k blocks should
> allow you to build a 5-20T filesystem and
Poul-Henning Kamp wrote:
> I am not sure I would advocate 64k blocks yet.
Good to know. I have stuck with 16k so far due to the fact that our
database has a pagesize of 16k, and I found little benefit tuning that
(but it's a completely different application).
> I tend to stick with 32k block, 4k
In message [EMAIL PROTECTED], Petri Helenius writes:
> You have any insight into the fsck memory consumption?  I remember
> getting myself saved quite a long time ago by reducing the number of
> inodes.
I have not studied it.  I always try to avoid having more than an
order of magnitude more inodes
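Applying that rule of thumb to the 5TB / 500MB-minimum case earlier in the thread (a rough sketch; every number here is an assumption from those figures):

```shell
# At 500 MB per file, 5 TB holds at most ~10485 files, so even an
# order of magnitude of headroom needs only ~100k inodes.
fs_bytes=$((5 * 1024 * 1024 * 1024 * 1024))
file_bytes=$((500 * 1024 * 1024))
max_files=$((fs_bytes / file_bytes))
echo "at most $max_files files; $((max_files * 10)) inodes give 10x headroom"
```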
On Tue, Sep 02, 2003 at 03:53:53PM -0700, Max Clark wrote:
Depends on whether you plan on crashing or not :)  According to
http://lists.freebsd.org/pipermail/freebsd-fs/2003-July/000181.html,
you may not want to create filesystems over 3TB if you want fsck to
succeed.  I don't know if that's using the default newfs settings
(which would create an insane
In the last episode (Sep 02), Max Clark said:
[ quoting format manually recovered ]
Dan Nelson wrote:
> Depends on whether you plan on crashing or not :)  According to
> http://lists.freebsd.org/pipermail/freebsd-fs/2003-July/000181.html,
> you may not want to create filesystems over 3TB if you