Geoff Buckingham wrote:
On Thu, Sep 04, 2003 at 01:12:45AM -0700, Terry Lambert wrote:
Yes. Limit the number of CG bitmaps you examine simultaneously,
and make the operation multiple passes over the disk. This is not
that hard a modification to fsck, and it can be done fairly
quickly by
David Schultz ([EMAIL PROTECTED]) wrote:
From my brief research on the subject, the FreeBSD community
has been highly resistant to supporting third party filesystems
precisely because nobody with such needs as yours has ever
contributed the code necessary to make third party filesystem
[Warning: semi-useless information ahead]
On Wed, Sep 03, 2003 at 11:06:15AM +, Geoff Buckingham wrote:
However I just read the newfs man page and am intrigued to know what effect
the -g and -h options have
Somewhere in -STABLE between 4.8-RELEASE and a month or so ago I recreated
a
On Fri, Sep 05, 2003, David Gilbert wrote:
Poul-Henning == Poul-Henning Kamp [EMAIL PROTECTED] writes:
Poul-Henning In message [EMAIL PROTECTED], Petri Helenius
Poul-Henning writes:
The fsck problem should be gone with fewer inodes and fewer blocks since,
if I read the code correctly, memory is consumed according to used
inodes and blocks so
In message [EMAIL PROTECTED], David Gilbert writes:
That reminds me... has anyone thought of designing the system to have
more than 8 frags per block? Increasingly, for large file
performance, we're pushing up the block size dramatically. This is
with the assumption that large disks will
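Since UFS traditionally allows at most 8 fragments per block, the fragment size (the minimum allocation unit for small files) scales up with the block size. A minimal arithmetic sketch of that trade-off, assuming the traditional 8-frags-per-block limit discussed above:

```python
# Illustrative arithmetic only: with UFS's traditional cap of 8
# fragments per block, raising the block size also raises the
# smallest possible fragment, wasting space on small files.
FRAGS_PER_BLOCK = 8  # the traditional UFS maximum under discussion

def min_frag_size(block_size):
    """Smallest fragment size achievable for a given block size."""
    return block_size // FRAGS_PER_BLOCK

for bs in (16 * 1024, 32 * 1024, 64 * 1024):
    print(f"block {bs // 1024}k -> smallest fragment {min_frag_size(bs) // 1024}k")
```

This is why pushing block sizes up for large-file performance makes the filesystem progressively worse at storing small files unless the frags-per-block ratio is allowed to grow.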
David Gilbert wrote:
Poul-Henning == Poul-Henning Kamp [EMAIL PROTECTED] writes:
Poul-Henning I am not sure I would advocate 64k blocks yet.
Poul-Henning I tend to stick with 32k block, 4k fragment myself.
That reminds me... has anyone thought of designing the system to have
more than 8
Max Clark wrote:
Ohh, that's an interesting snag. I was under the impression that 5.x w/ PAE
could address more than 4GB of Ram.
The kernel being able to address the RAM does not mean that
the KVA+UVA space is larger than 4G. At best, you could take
the uiomove/copyin/copyout performance hit,
On Thu, Sep 04, 2003 at 01:12:45AM -0700, Terry Lambert wrote:
Yes. Limit the number of CG bitmaps you examine simultaneously,
and make the operation multiple passes over the disk. This is not
that hard a modification to fsck, and it can be done fairly
quickly by anyone who understands the
On Wed, 3 Sep 2003, Tim Kientzle wrote:
Max Clark wrote:
Ohh, that's an interesting snag. I was under the impression that 5.x w/ PAE
could address more than 4GB of Ram.
That's 4G of memory in the system. 32-bit processors
are still limited to 4G processor address space, which means
On 4 Sep 2003, at 11:53, Julian Elischer wrote:
On Wed, 3 Sep 2003, Tim Kientzle wrote:
Max Clark wrote:
Ohh, that's an interesting snag. I was under the impression that
5.x w/ PAE could address more than 4GB of Ram.
That's 4G of memory in the system. 32-bit processors
are
On Thu, 4 Sep 2003, Andrew Kinney wrote:
Our experience has been that with 4GB of RAM (or more) you
really must increase your KVA to 2GB, leaving only 2GB of UVA.
So, I would concur with what Julian said.
ducks his head to avoid the rotten tomatoes that are sure to be
thrown ;-)
be? Will there be PCI cards that I would not be able to use in either of
these systems?
Thanks,
-Max
-Original Message-
From: Petri Helenius [mailto:[EMAIL PROTECTED]
Sent: Wednesday, September 03, 2003 10:13 AM
To: Max Clark
Cc: Dan Nelson; [EMAIL PROTECTED]
Subject: Re: 20TB Storage System
Max Clark
Max Clark wrote:
Ohh, that's an interesting snag. I was under the impression that 5.x w/ PAE
could address more than 4GB of Ram.
- The PAE support allows FreeBSD machines to make use of more than 4
gigabytes of RAM. This functionality was originally written by Jake
Burkholder under contract with
[Please, please, please fix your mailer to quote properly. It's very
difficult to read your messages.]
On Wed, Sep 03, 2003 at 11:08:28AM -0700, Max Clark wrote:
Ohh, that's an interesting snag. I was under the impression that 5.x w/ PAE
could address more than 4GB of Ram.
- The PAE support
Max Clark wrote:
Ohh, that's an interesting snag. I was under the impression that 5.x w/ PAE
could address more than 4GB of Ram.
It does. However, as long as a pointer is 32 bits, your address space
for a process is maxed out at 4G, which translates to about 2.5G user
after kernel and other
Max Clark wrote:
Ohh, that's an interesting snag. I was under the impression that 5.x w/ PAE
could address more than 4GB of Ram.
It can. PAE lets the hardware address more than 4GB of RAM, but that doesn't
change how much memory you can give to any one process: a 32-bit process still
has a
Max Clark wrote:
Ohh, that's an interesting snag. I was under the impression that 5.x w/ PAE
could address more than 4GB of Ram.
That's 4G of memory in the system. 32-bit processors
are still limited to 4G processor address space, which means
3G per process (allowing some memory for kernel
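The split quoted above can be sanity-checked with back-of-envelope arithmetic: PAE adds physical memory, but each 32-bit process still sees at most 4G of virtual addresses, divided between kernel (KVA) and user (UVA) space. A sketch, assuming the 1G and 2G KVA sizes mentioned in the thread (the exact default depends on the kernel configuration):

```python
# Back-of-envelope arithmetic for a 32-bit virtual address space:
# whatever is reserved for the kernel (KVA) comes straight out of
# what any single process can use (UVA).
GIB = 1 << 30
VA_TOTAL = 4 * GIB  # 32-bit virtual address space

def user_va(kva_bytes):
    """User address space left after reserving KVA out of the 4G total."""
    return VA_TOTAL - kva_bytes

for kva_gb in (1, 2):  # 1G-ish default vs. the 2G tuning discussed above
    print(f"KVA {kva_gb}G -> UVA {user_va(kva_gb * GIB) // GIB}G per process")
```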
On Wed, Sep 03, 2003 at 11:06:15AM +, Geoff Buckingham wrote:
However I just read the newfs man page and am intrigued to know what effect
the -g and -h options have
-g avgfilesize
The expected average file size for the file system.
-h avgfpdir
Geoff Buckingham wrote:
- This is a big problem (no pun intended), my smallest requirement is still
5TB... what would you recommend? The smallest file on the storage will be
500MB.
If your files are all going to be this large, I imagine you should look
carefully at what you do with inodes, blocks and
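The point about inodes can be made concrete with a sizing sketch for the 5TB / 500MB-minimum-file case quoted above. The 1-inode-per-8KB density used below is an assumed dense default for comparison, not a value taken from this thread:

```python
# Hypothetical sizing sketch: compare the inodes a 5TB filesystem of
# 500MB files actually needs against what a dense bytes-per-inode
# default would create.
TB = 10 ** 12
MB = 10 ** 6

fs_size = 5 * TB
avg_file = 500 * MB

inodes_needed = fs_size // avg_file   # roughly one inode per file
dense_default = fs_size // 8192       # assumed 1-inode-per-8KB density

print(f"inodes actually needed: ~{inodes_needed:,}")
print(f"inodes at 1-per-8KB density: ~{dense_default:,}")
```

Four orders of magnitude of unneeded inodes is exactly the kind of thing fsck later has to wade through, which motivates tuning the inode density at newfs time.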
In message [EMAIL PROTECTED], Petri Helenius writes:
The fsck problem should be gone with fewer inodes and fewer blocks since,
if I read the code correctly, memory is consumed according to used inodes
and blocks, so having like 2 inodes and 64k blocks should allow
you to build a 5-20T filesystem and
Poul-Henning Kamp wrote:
I am not sure I would advocate 64k blocks yet.
Good to know, I have stuck with 16k so far due to the fact that our
database has pagesize of 16k and I found little benefit tuning that.
(but it's a completely different application)
I tend to stick with 32k block, 4k
In message [EMAIL PROTECTED], Petri Helenius writes:
You have any insight into the fsck memory consumption? I remember getting
myself saved quite a long time ago by reducing the number of inodes.
I have not studied it. I always try to avoid having more than an
order of magnitude more inodes
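The claim that fsck memory scales with used inodes and blocks can be sketched with a rough model. The per-inode and per-block constants below are assumptions for illustration, not taken from the fsck source:

```python
# Rough model (constants are assumed, not read out of fsck): fsck
# keeps some per-inode state plus a bitmap-style map of blocks, so
# memory scales with inode count and block count.
def fsck_mem_estimate(inodes, blocks, bytes_per_inode=4, bits_per_block=1):
    return inodes * bytes_per_inode + (blocks * bits_per_block) // 8

TB = 10 ** 12
fs_bytes = 5 * TB
block = 32 * 1024                 # 32k blocks, as discussed in the thread
blocks = fs_bytes // block

few_inodes = 10_000               # about one per 500MB file
many_inodes = fs_bytes // 8192    # an assumed dense default-style density

for n in (few_inodes, many_inodes):
    mb = fsck_mem_estimate(n, blocks) / 1e6
    print(f"{n:,} inodes -> roughly {mb:.0f} MB of fsck state")
```

Under these assumptions the block map costs the same either way; it is the inode count that swings the total from tens of megabytes to gigabytes, which matches the advice to keep inode counts down on multi-terabyte filesystems.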
On Tue, Sep 02, 2003 at 03:53:53PM -0700, Max Clark wrote:
Depends on whether you plan on crashing or not :) According to
http://lists.freebsd.org/pipermail/freebsd-fs/2003-July/000181.html,
you may not want to create filesystems over 3TB if you want fsck to
succeed. I don't know if that's
Sorry for the cross post.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Max Clark
Sent: Tuesday, September 02, 2003 11:00 AM
To: [EMAIL PROTECTED]
Subject: 20TB Storage System
Hi all,
I need to attach 20TB of storage to a network (as low cost
In message [EMAIL PROTECTED], Max Clark writes:
Given the above:
1) What would my expected IO be using vinum to stripe the storage enclosures
detailed above?
That depends a lot on the application's I/O pattern, and I doubt a
precise prediction is possible.
In particular the FibreChannel is hard
[This isn't really a performance issue so I trimmed it.]
On Tue, Sep 02, 2003 at 12:48:29PM -0700, Max Clark wrote:
I need to attach 20TB of storage to a network (as low cost as possible), I
need to sustain 250Mbit/s or 30MByte/s of sustained IO from the storage to
the disk.
I have found
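A quick sanity check on the requirement stated above, converting the 250 Mbit/s target to bytes per second (decimal units assumed):

```python
# Unit conversion for the stated throughput requirement.
mbit_per_s = 250
mbyte_per_s = mbit_per_s / 8  # 8 bits per byte
print(f"{mbit_per_s} Mbit/s is {mbyte_per_s:.2f} MByte/s")
```

That is 31.25 MByte/s, consistent with the "30MByte/s" figure in the message once rounded down.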
-Original Message-
From: Poul-Henning Kamp [mailto:[EMAIL PROTECTED]
Sent: Tuesday, September 02, 2003 1:02 PM
To: Max Clark
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED];
[EMAIL PROTECTED]
Subject: Re: FW: 20TB Storage System
In message [EMAIL PROTECTED], Max Clark writes:
Given the above:
1) What would
In message [EMAIL PROTECTED], Max Clark writes:
I know adding ccd/vinum to the equation will lower my IO throughput, but the
question is... if I have an external hardware shelf with 3.5TB (16 250GB
drives w/ Raid 5 from hardware) and I put a Raid 0 stripe across 3 of these
shelves what would my
Poul-Henning Kamp wrote:
2) What is the maximum size of a filesystem that I can present to the host
OS using vinum/ccd? Am I limited anywhere that I am not aware of?
Good question, I'm not sure we currently know the exact barrier.
Just make sure you run UFS2, which is the default on -CURRENT, because
UFS1 has a 1TB limit.
- What's the limit with UFS2?
Are there major requirements to run FreeBSD 5.x or can I still run stable
with this?
Thanks,
Max
In the last episode (Sep 02), Max Clark said:
2) What is the maximum size of a filesystem that I can present to the
host OS using vinum/ccd? Am I limited anywhere that I am not aware
of?
Depends on whether you plan on crashing or not :) According to
http://lists.freebsd.org/pipermail/freebsd-fs/2003-July/000181.html,
you may not want to create filesystems over 3TB if you want fsck to
succeed. I don't know if that's using the default newfs settings
(which would create an insane
In the last episode (Sep 02), Max Clark said:
[ quoting format manually recovered ]
Dan Nelson wrote
Depends on whether you plan on crashing or not :) According to
http://lists.freebsd.org/pipermail/freebsd-fs/2003-July/000181.html,
you may not want to create filesystems over 3TB if you