Re: [gentoo-user] RAID 1+0 question

2006-03-02 Thread Richard Fish
On 3/2/06, Marton Gabor [EMAIL PROTECTED] wrote:
 Hi!

 Thank you all for the fast replies, you helped me a lot. Unfortunately
 we cannot afford a HW RAID card, so I have to make it with software RAID.
 Now I have the idea to use RAID5, and if I get the picture right I need
 let's say a ~100MB /boot in RAID1, 512MB swap not in RAID on every disk,
 and I can build a RAID5 from the rest of the storage space, and will be
 able to use 750GB-(/boot*4)-(swap*4), and the 4th HD will store the
 so-called parity information.

FYI, RAID5 will spread the parity information across all disks.
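
(For the record, a minimal sketch of creating such an array with
mdadm; the device names below are only examples:

  mdadm --create /dev/md1 --level=5 --raid-devices=4 \
      /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3

md's default left-symmetric layout rotates the parity block across
all four members, so there is no dedicated "parity disk".)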

You should also consider what kind of IO throughput you require from
this system.  RAID5 will require an IO to every drive for each write
operation.  Additionally, a given read can only be satisfied by a
single drive.  This means your write performance will max out at
around 33MB/s (a quarter of the 132MB/s PCI bus, since every write
touches all four drives), and reads will max out at the speed of a
single disk (70MB/s is typical today).

However, writes to a RAID 0+1 array only require writing to 2 disks,
so your maximum bandwidth should be around 66MB/s when writing.
Reads really benefit here, however, since they can be satisfied by
either RAID1 set, so you should easily be able to saturate the bus
bandwidth at 132MB/s.
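
(Linux md can build a 1+0 layout either as a raid0 over two raid1
pairs or with its dedicated raid10 personality; here is a sketch of
the first form, device names again only examples:

  # two mirrored pairs...
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc3 /dev/sdd3
  # ...striped together into the final array
  mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/md1 /dev/md2
)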

Of course, if you really need IO bandwidth, hardware RAID is best...

-Richard

-- 
gentoo-user@gentoo.org mailing list



Re: [gentoo-user] RAID 1+0 question

2006-03-02 Thread Jarry
Richard Fish wrote:

 On 3/2/06, Marton Gabor [EMAIL PROTECTED] wrote:
  let's say a ~100MB /boot in RAID1, 512MB swap not in RAID on every disk,

Actually, if you make a 512MB non-raid swap on each disk with equal
priority, it is like having swap on raid0 (pages will be striped over
the swap partitions on all disks). The disadvantage is that if swap
is in use and one of your disks fails, your system will probably
crash and have to be restarted. If stability is your concern, you
could think about swap on raid1 instead. In that case you would
survive a disk failure even if swap was already in use (because the
swap is mirrored too).
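
(In /etc/fstab the equal-priority setup looks something like this -
partition names are only examples; with identical pri= values the
kernel round-robins swap pages across all four partitions:

  /dev/sda2   none   swap   sw,pri=1   0 0
  /dev/sdb2   none   swap   sw,pri=1   0 0
  /dev/sdc2   none   swap   sw,pri=1   0 0
  /dev/sdd2   none   swap   sw,pri=1   0 0

For the raid1 variant you would instead mkswap the md device and list
just that single entry.)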

 You should also consider what kind of IO throughput you require from
 this system.  RAID5 will require an IO to every drive for each write
 operation.  Additionally, a given read can only be satisfied by a
 single drive.  This means your write performance will max out at
 around 33MB/s (a quarter of the 132MB/s PCI bus, since every write
 touches all four drives), and reads will max out at the speed of a
 single disk (70MB/s is typical today).

Frankly, I don't understand this. Why should the write speed be so
degraded? If you have 4 disks in RAID5 and you want to write 1.5GB
of data, you actually write 500MB to disk1, 500MB to disk2, 500MB to
disk3 and 500MB to disk4 (1.5GB of data + 0.5GB of parity). And
because they are SATA disks, they do not share an I/O channel the way
2 PATA disks on one cable do. In other words, the write operations
are parallel. There is of course some overhead from parity
calculation and synchronisation, but with today's CPUs that is not a
problem. I'm sure that with 4 equal modern SATA disks the read/write
speed of a RAID5 array would be much higher...

 either RAID1 set, so you should easily be able to saturate the bus
 bandwidth at 132MB/s.

Nope. Today's disk controllers are not attached to the southbridge
through PCI, but rather through a few PCI Express lanes - 2, 4, or
even more, depending on the motherboard configuration. For example,
nForce4 has 20 flexible PCI Express lanes, which means board makers
can use them as they like, but most cheap boards have 2 PCI Express
lanes assigned to the SATA disk controller.

FYI, peak transfer rates:
PCI Express x1 = ~500MB/s unencoded data rate (250MB/s in each direction)
33MHz PCI = 133MB/s

Moreover, unlike PCI, PCI Express is bidirectional: data can be read
and written at the same time...

But the other (and rather sad) thing is the state of PCI Express and
SATA-II/NCQ support in Linux... :-(

Jarry
-- 
gentoo-user@gentoo.org mailing list



Re: [gentoo-user] RAID 1+0 question

2006-03-02 Thread Richard Fish
On 3/2/06, Jarry [EMAIL PROTECTED] wrote:
 Frankly, I don't understand this. Why should the write speed be so
 degraded? If you have 4 disks in RAID5 and you want to write 1.5GB
 of data, you actually write 500MB to disk1, 500MB to disk2, 500MB to
 disk3 and 500MB to disk4 (1.5GB of data + 0.5GB of parity).

If you are writing raw data to an array, you are correct.  But if you
are using a filesystem, it is the filesystem that determines where
each file block is located, while it is the RAID layer that determines
which parity block goes with which data block.  So the 500MB of data
written to disk1 is almost certainly going to land on a different set
of blocks than the 500MB written to disk2, which will differ from
disk3, etc.  To calculate the parity, the RAID layer needs to know the
data on the other 3 disks, so for any blocks not in the buffer cache
(those that were just written should be in the cache), it has to
issue a read for those blocks.
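
(For a small write, the md RAID5 code can also update parity without
reading the whole stripe, using the XOR identity

  new parity = old parity XOR old data XOR new data

but that still costs two reads (old data, old parity) and two writes
(new data, new parity) per data block, which is the 2-reads-plus-
2-writes case described below.)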

I may have underestimated things by using the pathological case where
every write requires 2 reads (for the data not in cache) and 2 writes
(for the updated data and the parity), but it isn't hard to imagine
the pathological case becoming the norm for a moderately fragmented
filesystem.  The _worst_ (and best) case for RAID 0+1, by contrast,
is two writes.

 And because they are SATA disks, they do not share an I/O channel
 the way 2 PATA disks on one cable do. In other words, the write
 operations are parallel. There is of course some overhead from
 parity calculation and synchronisation, but with today's CPUs that
 is not a problem. I'm sure that with 4 equal modern SATA disks the
 read/write speed of a RAID5 array would be much higher...

It is PCI (or PCIe) bandwidth that will be the limiting factor...

  either RAID1 set, so you should easily be able to saturate the bus
  bandwidth at 132MB/s.

 Nope. Today's disk controllers are not attached to the southbridge
 through PCI, but rather through a few PCI Express lanes - 2, 4, or
 even more, depending on the motherboard configuration. For example,
 nForce4 has 20 flexible PCI Express lanes, which means board makers
 can use them as they like, but most cheap boards have 2 PCI Express
 lanes assigned to the SATA disk controller.

My 4-month-old nForce4 motherboard with an onboard SATA chipset and a
RAID0 array maxes out at 128MB/s throughput.  I am quite certain that
only a single PCIe lane is being used for the SATA controller there,
as each disk runs at ~80MB/s.  I've seen some reports of 160MB/s to
SATA disks, but that was using a hardware RAID controller, in RAID0,
with 1rpm drives.
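
(For anyone wanting to reproduce this kind of measurement, hdparm
gives a quick sequential-read figure; /dev/md0 here stands for
whatever your array device is:

  hdparm -t /dev/md0     # buffered sequential read from the array
  hdparm -t /dev/sda     # single-disk baseline for comparison
)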

What motherboard are you using that lets you get 500MB/s throughput to your disks?

-Richard

-- 
gentoo-user@gentoo.org mailing list



Re: [gentoo-user] RAID 1+0 question

2006-03-01 Thread jarry
Marton Gabor [EMAIL PROTECTED] wrote:

 - could someone give me a good howto?

http://www.gentoo.org/doc/en/gentoo-x86-tipsntricks.xml#software-raid

 - do I need to make a /boot partition which is not part of any
 arrays or will grub boot from raid1+0?

You can put /boot on RAID too, but only RAID1 (each RAID1 member is a
complete copy of the filesystem, so grub can read it like a plain
partition).  Don't forget to compile RAID1 support into the kernel,
not as a module.  You can make /dev/md0 for /boot out of 4 small
partitions (sda1, sdb1, sdc1, sdd1).  All other partitions can be on
RAID0+1 or any other combination.
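
(A sketch of the mdadm command for that, assuming the four partitions
already exist; 0.90 metadata keeps the RAID superblock at the end of
the partition, which is what lets grub treat a member as a plain
filesystem:

  mdadm --create /dev/md0 --level=1 --raid-devices=4 \
      --metadata=0.90 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
)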

My own opinion: if I had 4 good 250GB SATA drives, I'd probably
do RAID1 for /boot, and RAID5 for the rest...

Jarry

-- 
gentoo-user@gentoo.org mailing list



Re: [gentoo-user] RAID 1+0 question

2006-03-01 Thread Matt Randolph

[EMAIL PROTECTED] wrote:

 Marton Gabor [EMAIL PROTECTED] wrote:

   - could someone give me a good howto?

 http://www.gentoo.org/doc/en/gentoo-x86-tipsntricks.xml#software-raid

   - do I need to make a /boot partition which is not part of any
   arrays or will grub boot from raid1+0?

 You can put /boot on RAID too, but only RAID1 (each RAID1 member is a
 complete copy of the filesystem, so grub can read it like a plain
 partition).  Don't forget to compile RAID1 support into the kernel,
 not as a module.  You can make /dev/md0 for /boot out of 4 small
 partitions (sda1, sdb1, sdc1, sdd1).  All other partitions can be on
 RAID0+1 or any other combination.

 My own opinion: if I had 4 good 250GB SATA drives, I'd probably
 do RAID1 for /boot, and RAID5 for the rest...

 Jarry

...or RAID 6 if you're paranoid.
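
(A RAID6 sketch with mdadm, if you go that route - again only example
device names; RAID6 needs at least 4 members and survives any two
disk failures:

  mdadm --create /dev/md1 --level=6 --raid-devices=4 \
      /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
)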

You might want to have a look at:
http://gentoo-wiki.com/HOWTO_Gentoo_Install_on_Software_RAID_mirror_and_LVM2_on_top_of_RAID

Matt

--
gentoo-user@gentoo.org mailing list