Hi,

I have a new compute server here, which is not yet active. I will
attach two types of RAID systems from EMC: an AX-100 (S-ATA with 4
disks) and some disk space from our main RAID (a CLARiiON FC4700).
Mainly I'd like to check an effect reported by a colleague from
another institute, who noticed a dramatic performance decrease after
1-2 weeks of using a file system (he tested ext3 and XFS).


It would be great if someone could send me a script or some hints on
how to set up the system so that I can extract comparable results.
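
Something along these lines is what I had in mind; just a rough
sketch, where the mount point, device and sizes are placeholders, and
the remount is only there to flush the page cache between the write
pass and the read pass:

#!/usr/bin/env python
# Rough benchmark sketch: time a large sequential write, remount to
# flush the page cache, then time the read back. Paths and sizes are
# placeholders; run the same script before and after the file system
# has aged to get comparable numbers.
import os, subprocess, time

MOUNT_POINT = "/mnt/test"              # hypothetical test mount
DEVICE = "/dev/sdb1"                   # hypothetical test partition
TEST_FILE = os.path.join(MOUNT_POINT, "bench.dat")
SIZE_MB = 16384                        # 2x RAM (8 GB) to defeat caching
BUF = b"\0" * (1024 * 1024)            # 1 MB write buffer

def timed_write():
    t0 = time.time()
    f = open(TEST_FILE, "wb")
    for _ in range(SIZE_MB):
        f.write(BUF)
    f.flush()
    os.fsync(f.fileno())               # make sure data really hit disk
    f.close()
    return SIZE_MB / (time.time() - t0)

def timed_read():
    # Remount so the read pass comes from disk, not from cache.
    subprocess.call(["umount", MOUNT_POINT])
    subprocess.call(["mount", DEVICE, MOUNT_POINT])
    t0 = time.time()
    f = open(TEST_FILE, "rb")
    while f.read(1024 * 1024):
        pass
    f.close()
    return SIZE_MB / (time.time() - t0)

if __name__ == "__main__":
    print("write: %.1f MB/s" % timed_write())
    print("read:  %.1f MB/s" % timed_read())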

The machine is a Sun Fire V40z with 4 Opterons, 8 GB RAM and a QLogic
FC card. But we only have a 1Gb SAN, so roughly 100 MB/s is the
ceiling anyway ;) .

Otherwise I'll try to copy the settings from the Linux Magazin article.

Bye, Peer

Sonny Rao wrote:
On Wed, Dec 08, 2004 at 10:39:29PM +0100, Michael Müller wrote:

On Mon, Dec 06, 2004 at 05:53:33PM -0500, Sonny Rao <[EMAIL PROTECTED]> wrote:

On Sun, Dec 05, 2004 at 11:40:21AM +0100, Michael Müller wrote:

Hi all,

I read an article in the German 'Linux Magazin' 11/04 comparing
different file systems. They tested Ext2, Ext3, JFS, XFS, ReiserFS,
Reiser4 and Veritas. Detailed results can be found at
http://www.linux-magazin.de/Service/Listings/2004/11/fs_bench.

The link only contains the test results, no German text.


True, there were a few articles for November here:
http://www.linux-magazin.de/Artikel/ausgabe/2004/11


My guess is that they didn't set the readahead high enough for
whatever type of device they were testing on 2.6 (it looks like a
RAID array, since on 2.4 it gets about 100MB/sec, which I don't think
very many single disks can do).  The readahead implementation on 2.6
is certainly different from the one on 2.4.  IO performance on 2.6 is
much, much better across the board.

My German isn't great, so I'm not going to try and read the article,
but I'd also like to know what kind of array they are using for this
test.  Before we can make any conclusions, we should know what the
hardware is capable of doing.

The hardware:

Pentium 4, 2.8GHz, 512MB RAM, 12 SATA HDs in a RAID array; overall
capacity 2TB, test partition 200GB.

For the 2.4 tests they used SuSE Linux Enterprise Server 8 with
kernel 2.4.21-138-smp; for the 2.6 tests, SuSE Linux 9.1 with
2.6.7-mm4 plus patches for Reiser4.


Ok, how did they set the readahead size in the tests, just the
defaults?  For a 12-disk array, the default 128k readahead on 2.6
isn't going to cut it.
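
For reference, on 2.6 the readahead window can be checked and raised
through sysfs (blockdev --setra does the same from the shell). A
rough sketch; the device name and the per-disk chunk size are just
assumptions for illustration:

#!/usr/bin/env python
# Check and raise the readahead for a RAID device on a 2.6 kernel.
# Device name, disk count and chunk size are assumed for illustration.
DEVICE = "sda"        # hypothetical block device backing the array
NUM_DISKS = 12
CHUNK_KB = 64         # assumed per-disk stripe chunk

path = "/sys/block/%s/queue/read_ahead_kb" % DEVICE

with open(path) as f:
    current_kb = int(f.read())
print("current readahead: %d KB" % current_kb)

# To keep all spindles busy, readahead should cover at least one full
# stripe across the array: 12 disks * 64 KB = 768 KB, well above the
# 128 KB default.
wanted_kb = NUM_DISKS * CHUNK_KB
if current_kb < wanted_kb:
    with open(path, "w") as f:
        f.write(str(wanted_kb))
    print("raised readahead to %d KB" % wanted_kb)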

Was it a hardware or software RAID? RAID-0, RAID-5, RAID-10?
If it was hardware, what type of adapter was it, a PCI-X adapter or
just regular PCI?


Given all of that, what is the expected/advertised hardware
throughput?  Something like aio-stress would be a good test, since a
filesystem isn't required and we can isolate problems in the block
layer/drivers.
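
If aio-stress isn't handy, even a plain O_DIRECT sequential read from
the raw device gives a filesystem-independent baseline. A minimal
sketch, with the device and sizes as placeholders (read-only, but run
it against a device you don't mind touching):

#!/usr/bin/env python
# Sequential O_DIRECT read from a raw block device, bypassing the page
# cache and any filesystem, to baseline the block layer and driver.
# Device and sizes are placeholders; needs root.
import mmap, os, time

DEVICE = "/dev/sdb"          # hypothetical device under test
BUF_SIZE = 1024 * 1024       # 1 MB per read
TOTAL_MB = 1024              # read 1 GB in total

# O_DIRECT requires an aligned buffer; an anonymous mmap is page-aligned.
buf = mmap.mmap(-1, BUF_SIZE)
fd = os.open(DEVICE, os.O_RDONLY | os.O_DIRECT)
f = os.fdopen(fd, "rb", buffering=0)

done = 0
t0 = time.time()
for _ in range(TOTAL_MB):
    if f.readinto(buf) != BUF_SIZE:
        break                # hit the end of the device
    done += 1
elapsed = time.time() - t0
f.close()

print("raw sequential read: %.1f MB/s" % (done / elapsed))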


One needs most of these details to make any kind of reasonable
conclusion from the results given.

Sonny

_______________________________________________
Jfs-discussion mailing list
[EMAIL PROTECTED]
http://www-124.ibm.com/developerworks/oss/mailman/listinfo/jfs-discussion




--
Kind regards
    Peer-Joachim Koch
_________________________________________________________
Max-Planck-Institut fuer Biogeochemie
Dr. Peer-Joachim Koch
Hans-Knöll-Str. 10           Phone: +49 3641 57-6705
D-07745 Jena                 Fax:   +49 3641 57-7705
