Greetings to everybody...

        I've been setting up Linux RAID (mainly 0, 1 or 5) for a while,
always on servers with UW SCSI disks and kernel 2.0.36, and I'm willing to
discuss performance issues, exchange opinions and such.

        I have read both the old and the new Software-RAID HOWTOs, and
everything I can find on the web, but sometimes the info is contradictory...

        I'm setting up a 0+1 RAID, with a 2.2.5 kernel, raidtools
19990309-0.90, kernel raid patch raid0145-19990309 (compiled as SMP). 

        I merged the 2.2.3 kernel patch into a 2.2.5 kernel, and so far
nothing has exploded :) But one of the odd things I've observed may be
due to this...
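
        In case it matters for the discussion, all I did was apply the
patch by hand from the top of the 2.2.5 tree, roughly like this (the exact
patch file name is from memory, so take it with a grain of salt):

 cd /usr/src/linux
 patch -p1 < ../raid0145-19990309    # RAID patch made against 2.2.3, applied to 2.2.5

There were a couple of fuzzy hunks but no rejects, which is why I say
nothing has exploded yet.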

        The machine is a dual PPro 200 (Asus P65UP5) with 128 MB RAM; the
system disk is a SCSI-II Quantum hanging off an ncr53c875.

        I've bought four 10 GB IDE disks and two 4.5 GB UW disks, intending
to set up a file server with two main areas:

1.- A big archive: redundancy is the priority, and performance is as important
as space, so I intend to set up a RAID 0+1 with the four IDE disks.

2.- A not-so-big archive: performance is a must and redundancy is desired,
although daily backups are done. So I intend to set up a RAID 1 with the two
4.5 GB UW disks (see the raidtab sketch below).
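
        In case it helps the discussion, here's roughly what I have in mind
for /etc/raidtab for the IDE part. Partition names and chunk sizes are just
placeholders for illustration, not values I've settled on:

 raiddev /dev/md0                      # first RAID-0 stripe
     raid-level              0
     nr-raid-disks           2
     persistent-superblock   1
     chunk-size              32        # KB, just a starting guess
     device                  /dev/hda1
     raid-disk               0
     device                  /dev/hdb1
     raid-disk               1

 raiddev /dev/md1                      # second RAID-0 stripe
     raid-level              0
     nr-raid-disks           2
     persistent-superblock   1
     chunk-size              32
     device                  /dev/hdc1
     raid-disk               0
     device                  /dev/hdd1
     raid-disk               1

 raiddev /dev/md2                      # RAID-1 mirror over the two stripes = 0+1
     raid-level              1
     nr-raid-disks           2
     persistent-superblock   1
     chunk-size              32        # not meaningful for RAID-1, but raidtools seems to want it
     device                  /dev/md0
     raid-disk               0
     device                  /dev/md1
     raid-disk               1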

        What do you think about those decisions? Suggestions/corrections
to my assumptions are welcome :)

        I've always fiddled with the chunk size and the mke2fs stride option,
but that was with SCSI disks. To my (maybe wrong) understanding, once one
chooses a chunk size and filesystem block size based on the expected average
size of the files that will live there, a reasonable stride value is the
chunk size divided by the filesystem block size.
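
        For example, just to make sure I've got the arithmetic right: with a
32 KB chunk (again, only an example value) and 4 KB ext2 blocks, the stride
would be 32/4 = 8, i.e. something like:

 mke2fs -b 4096 -R stride=8 /dev/md2   # 32 KB chunk / 4 KB block = stride of 8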

Now, trying to get maximum performance out of cheap IDE disks, I'm also
tweaking with hdparm. And that's where I need advice: an example of the
desirable correlation between a chosen chunk/filesystem block size and
the hdparm parameters (see the sketch after this list):

 - sector count for filesystem read-ahead       
 - sector count for multiple sector I/O
 - maximum sector count for the drive's internal prefetch mechanism
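
To be concrete, this is the kind of thing I've been trying per drive. The
numbers are more or less pulled out of a hat, which is exactly what I'd
like advice on:

 # all values below are guesses, not recommendations
 hdparm -d1 -u1 /dev/hda    # enable DMA and IRQ unmasking
 hdparm -m8 /dev/hda        # multiple sector I/O count (MaxMultSect=16 on these drives)
 hdparm -a8 /dev/hda        # filesystem read-ahead, in sectors
 hdparm -P8 /dev/hda        # drive's internal prefetch sector count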

Here's what hdparm -vi shows for the IDE drives:

 Model=ST310240A, FwRev=3.41, SerialNo=GD344172
 Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs RotSpdTol>.5% }
 RawCHS=19846/16/63, TrkSize=0, SectSize=0, ECCbytes=0
 BuffType=0(?), BuffSize=128kB, MaxMultSect=16, MultSect=8
 DblWordIO=no, maxPIO=2(fast), DMA=yes, maxDMA=2(fast)
 CurCHS=19846/16/63, CurSects=20005650, LBA=yes, LBAsects=20005650
 tDMA={min:120,rec:120}, DMA modes: mword0 mword1 *mword2
 IORDY=on/off, tPIO={min:240,w/IORDY:120}, PIO modes: mode3 mode4 

        md0 (RAID 0) has hda,hdb and md1 has hdc,hdd; md2 is the 0+1 RAID
built on top of them. What do you think? Would it be better to pair them as
hda,hdc / hdb,hdd?
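
        For reference, the alternative I'm asking about would just pair the
drives across the two IDE channels; only the device lines of the sketch
above change (partition names again just placeholders), the rest of each
raiddev stanza stays the same:

 raiddev /dev/md0
     device          /dev/hda1          # master on the first channel
     raid-disk       0
     device          /dev/hdc1          # master on the second channel
     raid-disk       1

 raiddev /dev/md1
     device          /dev/hdb1          # slave on the first channel
     raid-disk       0
     device          /dev/hdd1          # slave on the second channel
     raid-disk       1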

        I noticed while benchmarking that the first IDE port (md0) behaves
substantially better (about 15%) than the second. I assume this is normal,
as it was with crappy 486/586 motherboards, although I didn't expect it on
this (once upon a time) expensive mobo...

Let's discuss this setup; I think it will be very illustrative...

        I'm very excited about the new RAID version (I have always used
raidtools 0.50 and the stock 2.0.36 kernel); it gives the feeling that
things have evolved a lot!

        Greetings,
        
*****---(*)---**********************************************---------->
Francisco J. Montilla              Systems & Network administrator
[EMAIL PROTECTED]      irc: pukka        Seville            Spain   
INSFLUG (LiNUX) Coordinator. www.insflug.org   -   ftp.insflug.org
