On Tue, 1 Feb 2005, Nick Pavlica wrote:
I was wondering if any progress has been made in determining the cause
of the poor disk I/O performance illustrated by the testing in this
thread? Now that 5.3 is labeled as the production stable version, and
4.x is labeled as legacy, improving the
All,
I was wondering if any progress has been made in determining the
cause of the poor disk I/O performance illustrated by the testing in
this thread? Now that 5.3 is labeled as the production stable
version, and 4.x is labeled as legacy, improving the performance of
the 5.4+ distributions is
On Thu, 27 Jan 2005, Mike Tancsa wrote:
I/O (reads, writes at fairly large multiples of the sector size -- 512k is
a good number) and small I/O size (512 bytes is good). This will help
identify the source along two dimensions: are we looking at a basic
storage I/O problem that's present
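The two I/O regimes Robert suggests probing can be sketched with plain dd; the thread ran against real devices and filesystems, but a scratch file is used here so the sketch is safe to run, and the path and sizes are assumptions:

```shell
# Hedged sketch of the large-I/O vs small-I/O comparison using dd on a
# scratch file (file, path, and sizes are assumptions, not the thread's
# actual setup). dd reports throughput on its final status line.
SCRATCH=$(mktemp)

# Large sequential I/O: 512k transfers, 64 MB total
dd if=/dev/zero of="$SCRATCH" bs=512k count=128 2>&1 | tail -1

# Small I/O: 512-byte transfers over the same 64 MB
dd if=/dev/zero of="$SCRATCH" bs=512 count=131072 2>&1 | tail -1

rm -f "$SCRATCH"
```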
The move to an MPSAFE VFS will help with that a lot, I should think.
Do you know if this will find its way to 5.x in the near future?
Also, while on face value this may seem odd, could you try the following
additional variables:
- Layer the test UFS partition directly over ad0 instead
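Robert's suggestion to take the RAID volume out of the picture might look like the sketch below; the device and slice names are assumptions, and newfs destroys whatever is on its target, so none of this should be run as-is on a live system:

```shell
# Hypothetical sketch of building the test UFS filesystem directly on
# the raw ad0 disk, bypassing the controller's RAID layer. ad0s1e is an
# assumed spare slice; newfs DESTROYS its contents.
#
#   newfs -U /dev/ad0s1e     # -U enables soft updates, the default-install setting
#   mount /dev/ad0s1e /mnt
#   ...rerun the same benchmark against /mnt...
#   umount /mnt
```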
On Thu, 27 Jan 2005, Nick Pavlica wrote:
The move to an MPSAFE VFS will help with that a lot, I should think.
Do you know if this will find its way to 5.x in the near future?
Hopefully not too quickly; it's fairly experimental. I know there's
interest in getting it into 5.x however.
At 08:14 PM 27/01/2005, Robert Watson wrote:
My tests use the exact same disk layout, and hardware. However, I have
had consistent results on all 4 boxes that I have tested on.
I am redoing mine so that I boot from a different drive and just test on
one large RAID5 partition so that the
All,
With the recent release of 4.11 I thought that I would give it a
spin and compare my results with my previous testing. I was blown
away by the performance difference between 4.11 and 5.3. iostat
showed over 30 MB/s of difference between the two. In
fact, it kept up or out
At 01:47 PM 26/01/2005, Nick Pavlica wrote:
All,
With the recent release of 4.11 I thought that I would give it a
Yes, I found the same thing basically. My test box is a P4 3GHz with 2G
of RAM on a 3ware 8605 controller with 4 drives in RAID5. Virtually every
test I did with iozone* showed
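Mike's iozone runs might have looked something like the sketch below; the exact flags he used aren't in the excerpt, so the file and record sizes are assumptions (the file should exceed the box's 2G of RAM so the cache can't absorb it):

```shell
# Hypothetical iozone invocation: 4 GB file (2x RAM), write (-i 0) and
# read (-i 1) tests at a small and a large record size (-r takes KB).
#   iozone -s 4g -r 64 -r 512 -i 0 -i 1 -f /mnt/iozone.tmp
```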
On Wed, 26 Jan 2005, Mike Tancsa wrote:
At 01:47 PM 26/01/2005, Nick Pavlica wrote:
All,
With the recent release of 4.11 I thought that I would give it a
Yes, I found the same thing basically. My test box is a P4 3GHz with 2G
of RAM on a 3ware 8605 controller with 4 drives in RAID5.
On Thu, 27 Jan 2005, Robert Watson wrote:
While it's not for the faint of heart, it might be interesting to see
how results compare on 6-CURRENT with debugging of various sorts
(including malloc) turned off and debug.mpsafevfs turned on. One possible issue
with the twe/twa drivers is that
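A sketch of the configuration Robert describes; the option and knob names below are era-appropriate assumptions, not taken from the thread, and should be checked against the source tree's NOTES:

```shell
# Hypothetical 6-CURRENT test setup.
#
# 1. Remove the debugging options -CURRENT enables by default from the
#    kernel config before rebuilding:
#      options  WITNESS
#      options  INVARIANTS
#      options  INVARIANT_SUPPORT
#
# 2. Disable userland malloc(3) debugging (lowercase flags turn options off):
#      ln -s aj /etc/malloc.conf
#
# 3. Enable the MPSAFE VFS at boot via /boot/loader.conf:
#      debug.mpsafevfs="1"
```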
Quoting Nick Pavlica ([EMAIL PROTECTED]):
I would like to start addressing some of the feedback that I have
been given. I started this discussion because I felt that it was
important to share the information I discovered in my testing. I also
want to reiterate my earlier statement that
Petri Helenius wrote:
Are you sure you aren't comparing filesystems with different mount
options? Async comes to mind first.
a) ext3 and xfs are logging filesystems, so the problem with
asynchronous metadata updates possibly corrupting the filesystem on a
crash doesn't arise.
b) asynchronous
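Before comparing numbers, it's worth confirming what options each OS actually mounted the test filesystem with, since an async Linux mount against a default soft-updates FreeBSD mount changes what is being measured. A hedged sketch (device names are assumptions):

```shell
# Check the effective mount options on each system.
#
# FreeBSD shows flags in parentheses, e.g. "(ufs, local, soft-updates)":
#   mount | grep ad0
# Linux keeps the options in the fourth field of /proc/mounts:
#   awk '$2 == "/" { print $4 }' /proc/mounts
#
# Listing all mounts with their flags works on either system:
mount
```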
Matthias Buelow wrote:
Petri Helenius wrote:
Are you sure you aren't comparing filesystems with different mount
options? Async comes to mind first.
a) ext3 and xfs are logging filesystems, so the problem with
asynchronous metadata updates possibly corrupting the filesystem on a
crash doesn't
All,
I would like to start addressing some of the feedback that I have
been given. I started this discussion because I felt that it was
important to share the information I discovered in my testing. I also
want to reiterate my earlier statement that this is not an X vs. X
discussion, but an
Are you sure you aren't comparing filesystems with different mount
options? Async comes to mind first.
Pete
Nick Pavlica wrote:
All,
I would like to start addressing some of the feedback that I have
been given. I started this discussion because I felt that it was
important to share the
PH Date: Tue, 25 Jan 2005 00:08:52 +0200
PH From: Petri Helenius
PH To: Nick Pavlica
PH Are you sure you aren't comparing filesystems with different mount
PH options? Async comes to mind first.
speculation
He _did_ say as many default options as possible... does Linux still
mount async by
I didn't change any of the default mount options on either OS.
FreeBSD:
# cat /etc/fstab
# Device        Mountpoint      FStype  Options Dump    Pass#
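The excerpt cuts off before the actual entries; purely as an illustration of where the Options column sits (devices and options assumed, not Nick's real ones), a default FreeBSD 5.x line would look like:

```shell
# Hypothetical fstab fragment -- illustrative only.
# Device        Mountpoint      FStype  Options Dump    Pass#
# /dev/ad0s1a   /               ufs     rw      1       1
# /dev/ad0s1b   none            swap    sw      0       0
```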