On Tue, 1 Feb 2005, Nick Pavlica wrote:
I was wondering if any progress has been made in determining the cause
of the poor disk I/O performance illustrated by the testing in this
thread? Now that 5.3 is labeled as the production stable version, and
4.x is labeled as legacy, improving the
All,
I was wondering if any progress has been made in determining the
cause of the poor disk I/O performance illustrated by the testing in
this thread? Now that 5.3 is labeled as the production stable
version, and 4.x is labeled as legacy, improving the performance of
the 5.4+ distributions is
On Thu, 27 Jan 2005, Mike Tancsa wrote:
I/O (reads, writes at fairly large multiples of the sector size -- 512k is
a good number) and small I/O size (512 bytes is good). This will help
identify the source along two dimensions: are we looking at a basic
storage I/O problem that's present
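The comparison being suggested can be sketched with plain dd: write the same amount of data once at a large block size and once at the sector size, and time each. The file paths here are illustrative; on a real run you would target the filesystem or device under test.

```shell
# Same total amount of data (32 MiB) written two ways.  Comparing the two
# runs separates raw throughput problems from per-transaction overhead
# (syscalls, interrupts), since the small-block run issues 1024x as many ops.
dd if=/dev/zero of=/tmp/io-large.dat bs=512k count=64    2>/dev/null
dd if=/dev/zero of=/tmp/io-small.dat bs=512  count=65536 2>/dev/null
wc -c /tmp/io-large.dat /tmp/io-small.dat
```

Prefix each dd with time(1) (or read dd's own transfer-rate summary) to get the numbers to compare.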
The move to an MPSAFE VFS will help with that a lot, I should think.
Do you know if this will find its way to 5.x in the near future?
Also, while on face value this may seem odd, could you try the following
additional variables:
- Layer the test UFS partition directly over ad0 instead
On Thu, 27 Jan 2005, Nick Pavlica wrote:
The move to an MPSAFE VFS will help with that a lot, I should think.
Do you know if this will find its way to 5.x in the near future?
Hopefully not too quickly; it's fairly experimental. I know there's
interest in getting it into 5.x, however.
At 08:14 PM 27/01/2005, Robert Watson wrote:
My tests use the exact same disk layout, and hardware. However, I have
had consistent results on all 4 boxes that I have tested on.
I am redoing mine so that I boot from a different drive and just test on
one large RAID5 partition so that the
All,
With the recent release of 4.11 I thought that I would give it a
spin and compare my results with my previous testing. I was blown
away by the performance difference between 4.11 and 5.3. iostat
showed a difference of over 30 MB/s between the two. In
fact, it kept up or out
At 01:47 PM 26/01/2005, Nick Pavlica wrote:
All,
With the recent release of 4.11 I thought that I would give it a
Yes, I found the same thing basically. My test box is a P4 3 GHz with 2 GB
of RAM on a 3ware 8605 controller with 4 drives in RAID5. Virtually every
test I did with iozone* showed
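For anyone reproducing this kind of run, an iozone invocation along the following lines exercises sequential write and read at a fixed record size. The file path and sizes are illustrative, not necessarily the exact command used above:

```
# -i 0 / -i 1: write and read tests; -s: file size; -r: record size;
# -e: include fsync in the timings so the buffer cache doesn't hide
#     the on-disk difference between the two systems.
iozone -e -s 4g -r 512k -i 0 -i 1 -f /mnt/test/iozone.tmp
```

Using a file size well above RAM (4 GB against 2 GB here) keeps the test measuring the disk rather than the cache.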
On Wed, 26 Jan 2005, Mike Tancsa wrote:
At 01:47 PM 26/01/2005, Nick Pavlica wrote:
All,
With the recent release of 4.11 I thought that I would give it a
Yes, I found the same thing basically. My test box is a P4 3 GHz with 2 GB
of RAM on a 3ware 8605 controller with 4 drives in RAID5.
On Thu, 27 Jan 2005, Robert Watson wrote:
While it's not for the faint of heart, it might be interesting to see
how results compare in 6-CURRENT with debugging of various sorts (including
malloc debugging) turned off, and debug.mpsafevfs turned on. One possible issue
with the twe/twa drivers is that
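For reference, the combination described above would look roughly like this on a 6-CURRENT box of that era. This is a sketch: the kernel options shown are the stock debugging options, and debug.mpsafevfs is the sysctl named above.

```
# Kernel config: comment out the debugging options -CURRENT enables by default,
# then rebuild and install the kernel.
#options        WITNESS
#options        INVARIANTS
#options        INVARIANT_SUPPORT

# /etc/sysctl.conf: turn on the experimental MPSAFE VFS code paths.
debug.mpsafevfs=1
```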
Quoting Nick Pavlica ([EMAIL PROTECTED]):
I would like to start addressing some of the feedback that I have
been given. I started this discussion because I felt that it was
important to share the information I discovered in my testing. I also
want to reiterate my earlier statement that
Petri Helenius wrote:
Are you sure you aren't comparing filesystems with different mount
options? Async comes to mind first.
a) ext3 and xfs are logging filesystems, so the problem with
asynchronous metadata updates possibly corrupting the filesystem on a
crash doesn't arise.
b) asynchronous
Matthias Buelow wrote:
Petri Helenius wrote:
Are you sure you aren't comparing filesystems with different mount
options? Async comes to mind first.
a) ext3 and xfs are logging filesystems, so the problem with
asynchronous metadata updates possibly corrupting the filesystem on a
crash doesn't
Am Montag, 24. Januar 2005 06:17 schrieb Oliver Fuchs:
In addition, was one OS running a window manager and the other not? Was
one running ssh and the other not, was FBSD running Linux emu? ... Was
one running (insert program) and the other not...
In addition to this:
- how often did you
On Mon, 24 Jan 2005, Emanuel Strobl wrote:
Am Montag, 24. Januar 2005 06:17 schrieb Oliver Fuchs:
In addition, was one OS running a window manager and the other not? Was
one running ssh and the other not, was FBSD running Linux emu? ... Was
one running (insert program) and the other
Chris wrote:
In addition, was one OS running a window manager and the other not? Was
I seriously doubt that raw disk performance of such a test is noticeably
affected by the existence of a window manager, or sshd...
mkb.
Oliver Fuchs wrote:
Maybe there is a performance problem with FreeBSD - but again that was not
his question.
I don't know why people are so obsessed with performance... after all,
you can't really load stock Unix systems properly anyway (like, say, an
IBM mainframe, which you can keep at 90+%
All,
I would like to start addressing some of the feedback that I have
been given. I started this discussion because I felt that it was
important to share the information I discovered in my testing. I also
want to reiterate my earlier statement that this is not an X vs. X
discussion, but an
Are you sure you aren't comparing filesystems with different mount
options? Async comes to mind first.
Pete
Nick Pavlica wrote:
All,
I would like to start addressing some of the feedback that I have
been given. I started this discussion because I felt that it was
important to share the
PH Date: Tue, 25 Jan 2005 00:08:52 +0200
PH From: Petri Helenius
PH To: Nick Pavlica
PH Are you sure you aren't comparing filesystems with different mount
PH options? Async comes to mind first.
speculation
He _did_ say as many default options as possible... does Linux still
mount async by
I didn't change any of the default mount options on either OS.
FreeBSD:
# cat /etc/fstab
# Device        Mountpoint      FStype  Options Dump    Pass#
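Since the fstab listing is truncated, the options each filesystem actually ended up mounted with can also be read back at run time. A minimal check on the Linux side (FreeBSD's mount(8) prints the same information in parentheses after each filesystem):

```shell
# Print each mounted filesystem with its active mount options.
# On Linux the options are the 4th field of /proc/mounts; an "async"
# mount would show up there.
while read dev mnt fstype opts rest; do
    printf '%s (%s): %s\n' "$mnt" "$fstype" "$opts"
done < /proc/mounts
```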
On Sat, 22 Jan 2005, Nick Pavlica wrote:
All,
This post is not about BSD vs. Linux and should not be taken that
way. I think that Flame Wars/Engineer Wars are a waste of time and
energy. I was surprised by my test results and didn't want to take
FBSD out of the loop just yet. There may
Oliver Fuchs wrote:
On Sat, 22 Jan 2005, Nick Pavlica wrote:
All,
This post is not about BSD vs. Linux and should not be taken that
way. I think that Flame Wars/Engineer Wars are a waste of time and
energy. I was surprised by my test results and didn't want to take
FBSD out of the loop just
On Sun, 23 Jan 2005, Chris wrote:
Oliver Fuchs wrote:
On Sat, 22 Jan 2005, Nick Pavlica wrote:
All,
This post is not about BSD vs. Linux and should not be taken that
way. I think that Flame Wars/Engineer Wars are a waste of time and
energy. I was surprised by my test results and
All,
This post is not about BSD vs. Linux and should not be taken that
way. I think that Flame Wars/Engineer Wars are a waste of time and
energy. I was surprised by my test results and didn't want to take
FBSD out of the loop just yet. There may be flaws in my testing that
have led me to
I apologize if this has been posted twice.
All,
This post is not about BSD vs. Linux and should not be taken that
way. I think that Flame Wars/Engineer Wars are a waste of time and
energy. I was surprised by my test results and didn't want to take
FBSD out of the loop just yet. There may be
it was said:
All,
This post is not about BSD vs. Linux and should not be taken that
way. I think that Flame Wars/Engineer Wars are a waste of time and
energy. I was surprised by my test results and didn't want to take
FBSD out of the loop just yet. There may be flaws in my testing that
have
All,
I have been evaluating operating systems/filesystems for an upcoming
web application service. Like most web applications, it will rely
heavily on the database and disk I/O. We have decided to use
Postgresql for our database needs, but haven't finalized our OS
choice. I have been testing
On Fri, Jan 21, 2005 at 03:20:58PM -0700, Nick Pavlica wrote:
To be sure that I was using up to date versions of each OS I performed
a cvsup and rebuilt the kernel (GENERIC) during the FBSD setup, and a
yum update on the Linux install.
Most likely unrelated to your performance question, but
Quoting Nick Pavlica ([EMAIL PROTECTED]):
[Performance tests]
Are there any good reasons for such a difference. Your thoughts are
appreciated.
There is so little information, so anything we throw your way will be
guesses. So I'll try to mention things one should be aware of when
measuring
it was said:
snip
However, after performing a number of I/O and Postgresql tests on
different equipment, the performance proved to be considerably faster
when using Fedora. Fedora with XFS was the clear performance winner
in every test, followed by Fedora with EXT3, then FreeBSD. I was