Check mdadm-Raid 5

2008-01-30 Thread Michael Mott
Hi there,

You're my last hope, I think.

I've posted this question on many websites over the last week, but nobody has
been able to help me.

A few days ago I created a software RAID 5 (mdadm) under Ubuntu 7.10 32-bit
Alternate with 4 x 233 GiB SATA II HDDs attached via USB 2.0.

After that I wanted to create a TrueCrypt volume on this md device, which
should not have been a problem.

But the process hung after running for a few minutes, and then the whole system
hung. So I pressed the power button and the system wrote something to the
system HDD - but then nothing further happened. So I reset the system.

After the next boot I assembled the RAID 5 again, which worked without any
problem. The system didn't even want to resync anything.

After that I built the TrueCrypt volume on the whole md device without any
problem.

Now my question: is there any method to test the integrity of the md device at
a low level? Can I trust the device?

Best regards,

Michael


Re: In this partition scheme, grub does not find md information?

2008-01-30 Thread David Greaves
Peter Rabbitson wrote:
 I guess I will sit down tonight and craft some patches to the existing
 md* man pages. Some things are indeed left unsaid.
If you want to be more verbose than a man page allows then there's always the
wiki/FAQ...

http://linux-raid.osdl.org/

Keld Jørn Simonsen wrote:
 Is there an official web page for mdadm?
 And maybe the raid faq could be updated?

That *is* the linux-raid FAQ brought up to date (with the consent of the
original authors)

Of course being a wiki means it is now a shared, community responsibility - and
to all present and future readers: that means you too ;)

David



Re: In this partition scheme, grub does not find md information?

2008-01-30 Thread David Greaves
On 26 Oct 2007, Neil Brown wrote:
On Thursday October 25, [EMAIL PROTECTED] wrote:
 I also suspect that a *lot* of people will assume that the highest superblock
 version is the best and should be used for new installs etc.

 Grumble... why can't people expect what I want them to expect?


Moshe Yudkowsky wrote:
 I expect it's because I used 1.2 superblocks (why
 not use the latest, I said, foolishly...) and therefore the RAID10 --

Aha - an 'in the wild' example of why we should deprecate '0.9, 1.0, 1.1, 1.2'
and rename the superblocks to data-version + on-disk-location :)


David







Re: Check mdadm-Raid 5

2008-01-30 Thread Rui Santos

Michael Mott wrote:

Hi there,
  

Hi,

You're my last hope, I think.

I've posted this question on many websites over the last week, but nobody has
been able to help me.

A few days ago I created a software RAID 5 (mdadm) under Ubuntu 7.10 32-bit
Alternate with 4 x 233 GiB SATA II HDDs attached via USB 2.0.

After that I wanted to create a TrueCrypt volume on this md device, which
should not have been a problem.

But the process hung after running for a few minutes, and then the whole system
hung. So I pressed the power button and the system wrote something to the
system HDD - but then nothing further happened. So I reset the system.

After the next boot I assembled the RAID 5 again, which worked without any
problem. The system didn't even want to resync anything.

After that I built the TrueCrypt volume on the whole md device without any
problem.

Now my question: is there any method to test the integrity of the md device at
a low level? Can I trust the device?
  


I use this script on a regular basis to check for RAID integrity. I hope 
this is what you mean.


#!/bin/bash

# md arrays to check
RAIDS_TO_CHECK="md0 md1 md2"

# Check all md's
for i in $RAIDS_TO_CHECK; do
    # kick off a background check of the array
    echo check > /sys/block/$i/md/sync_action
    sleep 5
    # wait for the check to finish
    while [ "$(mdadm --misc --detail /dev/$i | grep -c 'Rebuild Status')" -gt 0 ]; do
        sleep 10
    done
    # if any mismatches were found, run a repair pass and wait for it too
    if [ "$(cat /sys/block/$i/md/mismatch_cnt)" -gt 0 ]; then
        echo repair > /sys/block/$i/md/sync_action
        sleep 2
        while [ "$(mdadm --misc --detail /dev/$i | grep -c 'Rebuild Status')" -gt 0 ]; do
            sleep 10
        done
        echo "Warning: A repair of RAID $i was needed..."
    fi
done
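
If you want this to run unattended, a crontab entry along these lines would do
(the script path is just an example):

# Hypothetical crontab entry: run the check script every Sunday at 04:00
0 4 * * 0 /usr/local/sbin/raid-check.sh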


Best regards,

Michael
  

Rui


--

Regards,

Rui Santos
Testing Dept.

GrupoPIE Portugal, S.A.
Tel:  +351 252 290 600
Fax:  +351 252 290 601

Email: [EMAIL PROTECTED]
Web:   http://www.grupopie.com/

WinREST EVERYWHERE



Re: In this partition scheme, grub does not find md information?

2008-01-30 Thread Moshe Yudkowsky

David Greaves wrote:


Moshe Yudkowsky wrote:

I expect it's because I used 1.2 superblocks (why
not use the latest, I said, foolishly...) and therefore the RAID10 --


Aha - an 'in the wild' example of why we should deprecate '0.9, 1.0, 1.1, 1.2'
and rename the superblocks to data-version + on-disk-location :)


Even if renamed, I'd still need a Clue as to why to prefer one scheme 
over the other. For example, I've now learned that if I want to set up a 
RAID1 /boot, it must actually be 1.2 or grub won't be able to read it. 
(I would therefore argue that if the new version ever becomes default, 
then the default sub-version ought to be 1.2.)


As to the wiki: I am not certain I found the Wiki you're referring to; I
did find others, and none had the ringing clarity of Peter's definitive
"RAID10 won't work for /boot".


The process I'm going through -- cloning an old amd-k7 server into a new 
amd64 server -- is something I will document, and this particular grub 
issue is one of the things I intend to mention. So, where is this Wiki 
of which you speak?


--
Moshe Yudkowsky * [EMAIL PROTECTED] * www.pobox.com/~moshe
 A kind word will go a long way, but a kind word and
  a gun will go even further.
-- Al Capone


Help, big error, dd first GB of a raid:-/

2008-01-30 Thread Lars Schimmer

Hi!

Due to a very bad idea/error, I zeroed my first GB of /dev/md0.
Now fdisk doesn't find any disk on /dev/md0.
Any idea on how to recover?

Regards,
Lars Schimmer
--
TU Graz, Institut für ComputerGraphik & WissensVisualisierung
Tel: +43 316 873-5405   E-Mail: [EMAIL PROTECTED]
Fax: +43 316 873-5402   PGP-Key-ID: 0x4A9B1723


WRONG INFO (was Re: In this partition scheme, grub does not find md information?)

2008-01-30 Thread Peter Rabbitson

Moshe Yudkowsky wrote:
over the other. For example, I've now learned that if I want to set up a 
RAID1 /boot, it must actually be 1.2 or grub won't be able to read it. 
(I would therefore argue that if the new version ever becomes default, 
then the default sub-version ought to be 1.2.)


In the discussion yesterday I myself made a serious typo, that should not 
spread. The only superblock version that will work with current GRUB is 1.0 
_not_ 1.2.



Re: In this partition scheme, grub does not find md information?

2008-01-30 Thread Peter Rabbitson

Peter Rabbitson wrote:

Moshe Yudkowsky wrote:
Here's a baseline question: if I create a RAID10 array using default 
settings, what do I get? I thought I was getting RAID1+0; am I really?


Maybe you are, depending on your settings, but that is beside the point.
No matter what flavour of 1+0 you have (linux, classic, or otherwise) you
cannot boot from it, as there is no way to see the underlying filesystem
without the RAID layer.


With the current state of affairs (available mainstream bootloaders) the 
rule is:

Block devices containing the kernel/initrd image _must_ be either:
* a regular block device (/sda1, /hda, /fd0, etc.)
* or a linux RAID 1 with the superblock at the end of the device 
(0.9 or 1.2)





If any poor soul finds this in the mailing list archives, the above should read:

...
	* or a linux RAID 1 with the superblock at the end of the device (either 
version 0.9 or _1.0_)
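
For example, a /boot mirror that GRUB can read could be created along these
lines (device names are purely illustrative):

	mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 /dev/sdb1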




Re: Help, big error, dd first GB of a raid:-/

2008-01-30 Thread Peter Rabbitson

Lars Schimmer wrote:


Hi!

Due to a very bad idea/error, I zeroed my first GB of /dev/md0.
Now fdisk doesn't find any disk on /dev/md0.
Any idea on how to recover?



It largely depends on what is /dev/md0, and what was on /dev/md0. Provide very 
detailed info:


* Was the MD device partitioned?
* What filesystem(s) were residing on the array, what sizes, what order
* What was each filesystem used for (mounted as what)

Someone might be able to help at that point. However, if you do not have a
backup, you are in very, very deep trouble already.
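
A quick way to gather that information (just a sketch - adjust the device
names to your setup):

mdadm --detail /dev/md0          # array level, layout and member devices
mdadm --examine /dev/sd[abcd]1   # superblocks still present on the members (example names)
file -s /dev/md0                 # what, if anything, is still recognizable at the start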



Re: linux raid faq

2008-01-30 Thread David Greaves
Keld Jørn Simonsen wrote:
 Hmm, I read the Linux raid faq on
 http://www.faqs.org/contrib/linux-raid/x37.html
 
 It looks pretty outdated, referring to how to patch 2.2 kernels and
 not mentioning new mdadm, nor raid10. It was not dated. 
 It seemed to be related to the linux-raid list, telling where to find
 archives of the list.
 
 Maybe time for an update? or is this not the right place to write stuff?

http://linux-raid.osdl.org/index.php/Main_Page

I have written to faqs.org but got no reply. I'll try again...

 
 If I searched on google for raid faq, the first say 5-7 items did not
 mention raid10.

Until people link to and use the new wiki, Google won't find it.


 Maybe wikipedia is the way to go? I did contribute myself a little
 there.
 
 The software raid howto is dated v. 1.1 3rd of June 2004,
 http://unthought.net/Software-RAID.HOWTO/Software-RAID.HOWTO.html
 also pretty old.

FYI
http://linux-raid.osdl.org/index.php/Credits

David



Re: Help, big error, dd first GB of a raid:-/

2008-01-30 Thread Lars Schimmer

Peter Rabbitson wrote:
 Lars Schimmer wrote:

 Hi!

 Due to a very bad idea/error, I zeroed my first GB of /dev/md0.
 Now fdisk doesn't find any disk on /dev/md0.
 Any idea on how to recover?

 
 It largely depends on what is /dev/md0, and what was on /dev/md0.
 Provide very detailed info:
 
 * Was the MD device partitioned?
 * What filesystem(s) were residing on the array, what sizes, what order
 * What was each filesystem used for (mounted as what)

One large partition on /dev/md0 with ext3.

 Someone might be able to help at this point, however if you do not have
 a backup - you are in very very deep trouble already.

I'm activating the backup right now - it was OpenAFS with some RW volumes -
fairly easy to back up, but...
If it's hard to recover the raid data, I'll just recreate the raid and forget
the old data on it.

Regards,
Lars Schimmer
--
TU Graz, Institut für ComputerGraphik & WissensVisualisierung
Tel: +43 316 873-5405   E-Mail: [EMAIL PROTECTED]
Fax: +43 316 873-5402   PGP-Key-ID: 0x4A9B1723


Re: Help, big error, dd first GB of a raid:-/

2008-01-30 Thread Peter Rabbitson

Lars Schimmer wrote:


I activate the backup right now - was OpenAFS with some RW volumes -
fairly easy to backup, but...
If it's hard to recover raid data, I recreate the raid and forget the
old data on it.


It is not that hard to recover the raid itself; however, the ext3 on top of it
is most likely FUBAR (especially after 1 GB was overwritten). Since it seems
the data is not that important to you, just roll back to a backup and move on.



Re: WRONG INFO (was Re: In this partition scheme, grub does not find md information?)

2008-01-30 Thread David Greaves
Peter Rabbitson wrote:
 Moshe Yudkowsky wrote:
 over the other. For example, I've now learned that if I want to set up
 a RAID1 /boot, it must actually be 1.2 or grub won't be able to read
 it. (I would therefore argue that if the new version ever becomes
 default, then the default sub-version ought to be 1.2.)
 
 In the discussion yesterday I myself made a serious typo, that should
 not spread. The only superblock version that will work with current GRUB
 is 1.0 _not_ 1.2.

Ah, the joys of consolidated and yet editable documentation - like a wiki

David


Re: [PATCH] Use new sb type

2008-01-30 Thread David Greaves
Bill Davidsen wrote:
 David Greaves wrote:
 Jan Engelhardt wrote:
  
 This makes 1.0 the default sb type for new arrays.

 

 IIRC there was a discussion a while back on renaming mdadm options
 (google "Time to deprecate old RAID formats?") and the superblocks to
 emphasise the location and data structure. Would it be good to introduce
 the new names at the same time as changing the default
 format/on-disk-location?
   
 
 Yes, I suggested some layout names, as did a few other people, and a few
 changes to separate metadata type and position were discussed. BUT,
 changing the default layout, no matter how much better it seems, is trumped
 by "breaks existing setups and user practice". For all of the reasons
 something else is preferable, 1.0 *works*.

It wasn't my intention to change anything other than the naming.

If the default layout was being updated to 1.0 then I thought it would be a good
time to introduce 1-start, 1-4k and 1-end names and actually announce a default
of 1-end and not 1.0.

Although I still prefer a full separation:
  mdadm --create /dev/md0 --metadata 1 --meta-location start

David



Re: In this partition scheme, grub does not find md information?

2008-01-30 Thread Michael Tokarev
Moshe Yudkowsky wrote:
[]
 Mr. Tokarev wrote:
 
 By the way, on all our systems I use small (256Mb for small-software systems,
 sometimes 512M, but 1G should be sufficient) partition for a root filesystem
 (/etc, /bin, /sbin, /lib, and /boot), and put it on a raid1 on all...
 ... doing [it]
 this way, you always have all the tools necessary to repair a damaged system
 even in case your raid didn't start, or you forgot where your root disk is
 etc etc.
 
 An excellent idea. I was going to put just /boot on the RAID 1, but
 there's no reason why I can't add a bit more room and put them all
 there. (Because I was having so much fun on the install, I'm using 4GB
 that I was going to use for swap space to mount a base install, and I'm
 working from there to build the RAID. Same idea.)
 
 Hmmm... I wonder if this more expansive /bin, /sbin, and /lib causes
 hits on the RAID1 drive which ultimately degrade overall performance?
 /lib is hit only at boot time to load the kernel, I'll guess, but /bin
 includes such common tools as bash and grep.

You don't care about the speed of your root filesystem.  Note there are
two speeds - write and read.

You only write to root (including /bin and /lib and so on) during
software (re)installs and during some configuration work (writing
/etc/passwd and the like).  The first is very infrequent, and both
need only a few writes -- so write speed isn't important.

Read speed is also not that important, because the most commonly used
stuff from there will be cached anyway (like libc.so, bash and
grep), and again, for reading such tiny stuff it doesn't matter
whether it's a fast raid or a slow one.

What you do care about is the speed of the devices where your large,
commonly accessed/modified files - such as video files, especially
when you want streaming video - reside.  And even here, unless you
have special requirements for speed, you will not notice any
difference between slow and fast raid levels.

For typical filesystem usage, raid5 works well for both reads
and (cached, delayed) writes.  It's workloads like databases
where raid5 performs badly.

What you do care about is your data integrity.  It's not really
interesting to reinstall a system or lose your data in case
something goes wrong, and it's best to have recovery tools as
easily available as possible.  Plus the amount of space you need.

 Also, placing /dev on a tmpfs helps a lot to minimize the number of writes
 necessary for the root fs.
 
 Another interesting idea. I'm not familiar with using tmpfs (no need,
 until now); but I wonder how you create the devices you need when you're
 doing a rescue.

When you start udev, your /dev will be on tmpfs.

/mjt


Re: WRONG INFO (was Re: In this partition scheme, grub does not find md information?)

2008-01-30 Thread Michael Tokarev
Peter Rabbitson wrote:
 Moshe Yudkowsky wrote:
 over the other. For example, I've now learned that if I want to set up
 a RAID1 /boot, it must actually be 1.2 or grub won't be able to read
 it. (I would therefore argue that if the new version ever becomes
 default, then the default sub-version ought to be 1.2.)
 
 In the discussion yesterday I myself made a serious typo, that should
 not spread. The only superblock version that will work with current GRUB
 is 1.0 _not_ 1.2.

Ghrrm.  1.0, or 0.9.  0.9 is still the default with mdadm.

/mjt


Re: In this partition scheme, grub does not find md information?

2008-01-30 Thread Michael Tokarev
Keld Jørn Simonsen wrote:
[]
 Ugh.  2-drive raid10 is effectively just a raid1.  I.e, mirroring
 without any striping. (Or, backwards, striping without mirroring).
 
 uhm, well, I did not understand: (Or, backwards, striping without
 mirroring).  I don't think a 2 drive vanilla raid10 will do striping.
 Please explain.

I was referring to raid0+1 here - a mirror of stripes.  Which makes
no sense on its own, but when we create such a thing on only 2 drives,
it becomes just raid0...  Backwards as in raid1+0 vs raid0+1.

This is just to show that various raid levels, in corner cases,
tend to transform into one another.

 Pretty much like with raid5 of 2 disks - it's the same as raid1.
 
 I think in raid5 of 2 disks, half of the chunks are parity chunks which
 are evenly distributed over the two disks, and the parity chunk is the
 XOR of the data chunk. But maybe I am wrong. Also the behaviour of such
 a raid5 is different from a raid1, as the parity chunk is not used as
 data.

With N-disk raid5, parity in a row is calculated by XORing together
the data from all the rest of the disks (N-1), ie, P = D1 ^ ... ^ D(N-1).

In the case of a 2-disk raid5 (also a corner case), the above formula
becomes just P = D1.  So the parity block in each row contains exactly
the same data as the data block, effectively turning the whole thing into
a raid1 of two disks.  Sure, in raid5 the parity blocks are called just
that - parity - but in reality that parity is THE SAME as the data (again,
in the case of only a 2-disk raid5).

 I am not sure what properties vanilla linux raid10 (near=2, far=1)
 has. I think it can run with only 1 disk, but I think the number of
 copies should be <= the number of disks, so no.
 
 I have a clear understanding that in a vanilla linux raid10 (near=2, far=1)
 you can run with one failing disk, that is with only one working disk.
 Am I wrong?

In fact, with raid10 (of all sorts), it's not only the number of drives
that can fail that matters, but also WHICH drives fail.  In classic
raid10:

DiskA  DiskB  DiskC  DiskD
  0      0      1      1
  2      2      3      3
  4      4      5      5
  ...

(where numbers are the data blocks), you can have only 2 working
disks (ie, 2 failed), but only from different pairs.  You can't
have A and B failed and C and D working for example - you'll lose
half the data and thus the filesystem.  You can have A and C failed
however, or A and D, or BC, or BD.

You see - in the above example, all numbers (data blocks) should be
present at least once (after you pull a drive or two or more).  If
at least some numbers don't appear at all, your raid array's dead.

Now write out the layout you want to use like the above, and try
removing some drives, and see if you still have all numbers.

For example, with 3-disk linux raid10:

  A  B  C
  0  0  1
  1  2  2
  3  3  4
  4  5  5
  

We can't pull 2 drives anymore here.  Eg, pulling AB removes
0 and 3. Pulling BC removes 2 and 5.  AC = 1 and 4.

With 5-drive linux raid10:

   A  B  C  D  E
   0  0  1  1  2
   2  3  3  4  4
   5  5  6  6  7
   7  8  8  9  9
  10 10 11 11 12
   ...

AB can't be removed - 0, 5.  AC CAN be removed, as
are AD.  But not AE - losing 2 and 7.  And so on.

6-disk raid10 with 3 copies of each (near=3 with linux):

   A B C D E F
   0 0 0 1 1 1
   2 2 2 3 3 3

It can run as long as, from each triple (ABC and DEF), at
least one disk is present.  Ie, you can lose up to 4 drives,
as long as that condition holds.  But if you lose the wrong
3 - all of ABC or all of DEF - it can't work anymore.

The same goes for raid5 and raid6, but they're symmetric --
any single (raid5) or double (raid6) disk failure is Ok.
The principle is this:

  raid5: P = D1^D2^D3^...^D(N-1)
so, you either have all Di (nothing to reconstruct), or
you have all but one Di AND P - in this case, missing Dm
can be recalculated as
  Dm = P^D1^...^D(m-1)^D(m+1)^...^D(N-1)
(ie, a XOR of all the remaining blocks including parity).
(exactly the same applies to raid4, because each row in
raid4 is identical to that of raid5, the difference is
that parity disk is different in each row in raid5, while
in raid4 it stays the same).

I won't write out the formula for raid6 as it's somewhat more
complicated, but the effect is the same - any data block
can be reconstructed from any N-2 drives.
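
As a toy illustration of that XOR relation (made-up byte values, plain shell
arithmetic):

D1=0x3A; D2=0xC5; D3=0x0F
P=$(( D1 ^ D2 ^ D3 ))           # parity = XOR of all data blocks
D2_again=$(( P ^ D1 ^ D3 ))     # rebuild a "lost" block from parity + the rest
printf 'original D2=0x%02X  rebuilt D2=0x%02X\n' $D2 $D2_again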

/mjt


Re: linux raid faq

2008-01-30 Thread Janek Kozicki
David Greaves said: (by the date of Wed, 30 Jan 2008 12:46:52 +)
 
 http://linux-raid.osdl.org/index.php/Main_Page

great idea! I believe that wikis are the best way to go.
 
 I have written to faqs.org but got no reply. I'll try again...
  If I searched on google for raid faq, the first say 5-7 items did not
  mention raid10.
 
 Until people link to and use the new wiki, Google won't find it.


Everyone who has a website - link to that wiki RIGHT NOW! Then we
will have a central place for all linux-raid documentation, findable
with Google.

There should be a link to it from vger.kernel.org. Or even the
kernel.org itself. Mailing list admins - can you do it?

best regards.
-- 
Janek Kozicki |


Re: In this partition scheme, grub does not find md information?

2008-01-30 Thread Moshe Yudkowsky

Michael Tokarev wrote:


You only write to root (including /bin and /lib and so on) during
software (re)installs and during some configuration work (writing
/etc/passwd and the like).  The first is very infrequent, and both
need only a few writes -- so write speed isn't important.


Thanks, but I didn't make myself clear. The performance problem I'm
concerned about is having different md arrays accessing different
partitions of the same disks.


For example, I can partition the drives as follows:

/dev/sd[abcd]1 -- RAID1, /boot

/dev/sd[abcd]2 -- RAID5, the rest of the file system

I originally asked, way back when, whether having different md arrays on
different partitions of the *same* disk was a problem for performance --
or if, for some reason (e.g., threading), it was actually smarter to do
it that way. The answer I received was from Iustin Pop, who said:


Iustin Pop wrote:

md code works better if it's only one array per physical drive,
because it keeps statistics per array (like last accessed sector,
etc.) and if you combine two arrays on the same drive these
statistics are not exactly true anymore


So if I put /boot on its own md drive and it's only accessed at startup,
/boot will only be accessed that one time and afterwards won't cause
problems for the drive statistics. However, if I put /boot, /bin,
and /sbin on this RAID1 drive, it will be accessed all the time and might
create a performance issue.


To return to that performance question: since I have to create at least 2
md drives using different partitions, I wonder if it's smarter to create
multiple md drives for better performance.


/dev/sd[abcd]1 -- RAID1, the /boot, /dev, /bin/, /sbin

/dev/sd[abcd]2 -- RAID5, most of the rest of the file system

/dev/sd[abcd]3 -- RAID10 o2, a drive that does a lot of downloading (writes)
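
For concreteness, a sketch of how those three arrays might be created - the
device names and the o2 layout are taken from the list above, purely as an
illustration:

mdadm --create /dev/md0 --level=1  --raid-devices=4 /dev/sd[abcd]1
mdadm --create /dev/md1 --level=5  --raid-devices=4 /dev/sd[abcd]2
mdadm --create /dev/md2 --level=10 --layout=o2 --raid-devices=4 /dev/sd[abcd]3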


For typical filesystem usage, raid5 works well for both reads
and (cached, delayed) writes.  It's workloads like databases
where raid5 performs badly.


Ah, very interesting. Is this true even for (dare I say it?) bittorrent 
downloads?



What you do care about is your data integrity.  It's not really
interesting to reinstall a system or lose your data in case
something goes wrong, and it's best to have recovery tools as
easily available as possible.  Plus the amount of space you need.


Sure, I understand. And backing up in case someone steals your server. 
But did you have something specific in mind when you wrote this? Don't 
all these configurations (RAID5 vs. RAID10) have equal recovery tools?


Or were you referring to the file system? Reiserfs and XFS both seem to 
have decent recovery tools. LVM is a little tempting because it allows 
for snapshots, but on the other hand I wonder if I'd find it useful.




Also, placing /dev on a tmpfs helps a lot to minimize the number of writes
necessary for the root fs.

Another interesting idea. I'm not familiar with using tmpfs (no need,
until now); but I wonder how you create the devices you need when you're
doing a rescue.


When you start udev, your /dev will be on tmpfs.


Sure, that's what mount shows me right now -- using a standard Debian 
install. What did you suggest I change?



--
Moshe Yudkowsky * [EMAIL PROTECTED] * www.pobox.com/~moshe
Many that live deserve death. And some that die deserve life. Can you 
give it to

them? Then do not be too eager to deal out death in judgement. For even the
wise cannot see all ends.
-- Gandalf (J.R.R. Tolkien)


Re: In this partition scheme, grub does not find md information?

2008-01-30 Thread Peter Rabbitson

Michael Tokarev wrote:


With 5-drive linux raid10:

   A  B  C  D  E
   0  0  1  1  2
   2  3  3  4  4
   5  5  6  6  7
   7  8  8  9  9
  10 10 11 11 12
   ...

AB can't be removed - 0, 5.  AC CAN be removed, as
are AD.  But not AE - losing 2 and 7.  And so on.


I stand corrected by Michael, this is indeed the case with the current state
of md raid 10. Either my observations were incorrect when I made them a year
and a half ago, or some fixes have gone into the kernel since then.


In any case - linux md raid10 does behave exactly like a classic raid 1+0 when
created with -n D -p nS, where D and S are both even and D = 2S.



Re: In this partition scheme, grub does not find md information?

2008-01-30 Thread Keld Jørn Simonsen
On Wed, Jan 30, 2008 at 03:47:30PM +0100, Peter Rabbitson wrote:
 Michael Tokarev wrote:
 
 With 5-drive linux raid10:
 
A  B  C  D  E
0  0  1  1  2
2  3  3  4  4
5  5  6  6  7
7  8  8  9  9
   10 10 11 11 12
...
 
 AB can't be removed - 0, 5.  AC CAN be removed, as
 are AD.  But not AE - losing 2 and 7.  And so on.

I see. Does the kernel code allow this? And mdadm?

And can B+E be removed safely, and C+E and B+D? 

best regards
keld


Re: Loop devices to RAID? (was Re: In this partition scheme, grub does not find md information?)

2008-01-30 Thread Tim Southerwood

Moshe Yudkowsky wrote:
My mind boggles. I know how to mount an ISO as a loop device onto the 
file system, but if you'd be so kind, can you give a super-brief 
description on how to get a loop device to look like an actual partition 
that can be made into a RAID array? I can see this software-only 
solution as being quite interesting for testing in general.




I tried this a while back, IIRC the procedure was:

1) Make some empty files of the required length each.

2) Use losetup to attach each one to a loop device (loop0-3, say).

3) Use /dev/loop[0-3] as component devices to mdadm as you would use any 
other device or partition. It is not necessary to partition the loop 
devices, use them whole.
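
A minimal sketch of those steps (sizes, paths and the md device name are just
examples; run as root):

# 1) create four 512 MB sparse backing files
for n in 0 1 2 3; do
    dd if=/dev/zero of=/tmp/raidtest$n bs=1M count=0 seek=512
done

# 2) attach each file to a loop device
for n in 0 1 2 3; do
    losetup /dev/loop$n /tmp/raidtest$n
done

# 3) build a test array from the whole loop devices
mdadm --create /dev/md9 --level=10 --raid-devices=4 /dev/loop[0-3]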


HTH

Tim


Re: In this partition scheme, grub does not find md information?

2008-01-30 Thread Peter Rabbitson

Keld Jørn Simonsen wrote:

On Wed, Jan 30, 2008 at 03:47:30PM +0100, Peter Rabbitson wrote:

Michael Tokarev wrote:


With 5-drive linux raid10:

  A  B  C  D  E
  0  0  1  1  2
  2  3  3  4  4
  5  5  6  6  7
  7  8  8  9  9
 10 10 11 11 12
  ...

AB can't be removed - 0, 5.  AC CAN be removed, as
are AD.  But not AE - losing 2 and 7.  And so on.


I see. Does the kernel code allow this? And mdadm?

And can B+E be removed safely, and C+E and B+D? 



It seems like it. I just created the above raid configuration with 5 loop 
devices. Everything behaved just like Michael described. When the wrong drives 
disappeared - I started getting IO errors.
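
For anyone who wants to repeat the experiment, the failure test itself might
look roughly like this (loop0..loop4 assumed to map to drives A..E of Michael's
table, and /dev/md9 is just an example array name):

# fail two drives from different "pairs" (A and C) - the array survives, degraded
mdadm /dev/md9 --fail /dev/loop0
mdadm /dev/md9 --fail /dev/loop2
cat /proc/mdstat

# failing two drives that hold the same copies (A and B) is the case where
# reads through /dev/md9 start returning I/O errors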



Loop devices to RAID? (was Re: In this partition scheme, grub does not find md information?)

2008-01-30 Thread Moshe Yudkowsky

Peter Rabbitson wrote:
It seems like it. I just created the above raid configuration with 5 
loop devices. Everything behaved just like Michael described. When the 
wrong drives disappeared - I started getting IO errors.


My mind boggles. I know how to mount an ISO as a loop device onto the 
file system, but if you'd be so kind, can you give a super-brief 
description on how to get a loop device to look like an actual partition 
that can be made into a RAID array? I can see this software-only 
solution as being quite interesting for testing in general.


--
Moshe Yudkowsky * [EMAIL PROTECTED] * www.pobox.com/~moshe
 I'm very well aquainted/with the seven deadly sins/
  I keep a busy schedule/ to try to fit them in.
-- Warren Zevon, Mr. Bad Example


which raid level gives maximum overall speed? (raid-10,f2 vs. raid-0)

2008-01-30 Thread Janek Kozicki
Hello,

Yes, I know that some levels give faster reading and slower writing, etc.

I want to talk here about typical workstation usage: compiling
stuff (like the kernel), editing openoffice docs, browsing the web, reading
email (email: I have a webdir format, and in the boost mailing list
directory I have 14000 files (posts); opening this directory takes
circa 10 seconds in sylpheed). Moreover, opening .pdf files, more
compiling of C++ stuff, etc...

I have a remote backup system configured (with rsnapshot), which does
backups two times a day. So I'm not afraid to lose all my data due to
disc failure. I want absolute speed.

Currently I have Raid-0, because I was thinking that this one is
fastest. But I also don't need twice the capacity. I could use Raid-1
as well, if it was faster.

Due to the recent discussion about Raid-10,f2 I'm getting worried that
Raid-0 is not the fastest solution, and that Raid-10,f2 is instead
faster.

So how really is it, which level gives maximum overall speed?


I would like to make a benchmark, but currently, technically, I'm not
able to. I'll be able to do it next month, and then - as a result of
this discussion - I will switch to other level and post here
benchmark results.

How does overall performance change with the number of available drives?

Perhaps Raid-0 is best for 2 drives, while Raid-10 is best for 3, 4
and more drives?


best regards
-- 
Janek Kozicki |


Re: which raid level gives maximum overall speed? (raid-10,f2 vs. raid-0)

2008-01-30 Thread Keld Jørn Simonsen
On Wed, Jan 30, 2008 at 07:21:33PM +0100, Janek Kozicki wrote:
 Hello,
 
 Yes, I know that some levels give faster reading and slower writing, etc.
 
 I want to talk here about a typical workstation usage: compiling
 stuff (like kernel), editing openoffice docs, browsing web, reading
 email (email: I have a webdir format, and in boost mailing list
 directory I have 14000 files (posts), opening this directory takes
 circa 10 seconds in sylpheed). Moreover, opening .pdf files, more
 compiling of C++ stuff, etc...
 
 I have a remote backup system configured (with rsnapshot), which does
 backups two times a day. So I'm not afraid to lose all my data due to
 disc failure. I want absolute speed.
 
 Currently I have Raid-0, because I was thinking that this one is
 fastest. But I also don't need twice the capacity. I could use Raid-1
 as well, if it was faster.
 
 Due to recent discussion about Raid-10,f2 I'm getting worried that
 Raid-0 is not the fastest solution, but instead a Raid-10,f2 is
 faster.
 
 So how really is it, which level gives maximum overall speed?
 
 
 I would like to make a benchmark, but currently, technically, I'm not
 able to. I'll be able to do it next month, and then - as a result of
 this discussion - I will switch to other level and post here
 benchmark results.
 
 How does overall performance change with the number of available drives?
 
 Perhaps Raid-0 is best for 2 drives, while Raid-10 is best for 3, 4
 and more drives?

Theoretically, raid0 and raid10,f2 should be the same for reading, given the
same size of the md partition, etc. For writing, raid10,f2 should be half the
speed of raid0. This should hold for both sequential and random reads/writes.
But I would like to have real test numbers.

best regards
keld


Re: In this partition scheme, grub does not find md information?

2008-01-30 Thread Bill Davidsen

Moshe Yudkowsky wrote:

Bill Davidsen wrote:

According to man md(4), the o2 is likely to offer the best 
combination of read and write performance. Why would you consider f2 
instead?


f2 is faster for read, most systems spend more time reading than 
writing.


According to md(4), offset should give similar read characteristics 
to 'far' if a suitably large chunk size is used, but without as much 
seeking for writes.


Is the man page not correct, conditionally true, or simply not 
understood by me (most likely case)?


I wonder what "suitably large" is...

My personal experience is that as the chunk size gets larger, random write gets
slower and sequential gets faster. I don't have the numbers any more, but
20-30% is sort of the limit of what I saw for any chunk size I consider
reasonable. f2 is faster for sequential reading; tune your system to
annoy you least. ;-)


--
Bill Davidsen [EMAIL PROTECTED]
 Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over... Otto von Bismark 





Re: In this partition scheme, grub does not find md information?

2008-01-30 Thread Bill Davidsen

Peter Rabbitson wrote:

Keld Jørn Simonsen wrote:

On Tue, Jan 29, 2008 at 06:44:20PM -0500, Bill Davidsen wrote:

Depending on near/far choices, raid10 should be faster than raid5;
with 'far', reads should be quite a bit faster. You can't boot off
raid10, and if you put your swap on it many recovery CDs won't use
it. But for general use and swap on a normally booted system it is
quite fast.


Hmm, why would you put swap on a raid10? I would in a production
environment always put it on separate swap partitions, possibly a 
number,

given that a number of drives are available.



Because you want some redundancy for the swap as well. A swap
partition/file becoming inaccessible is equivalent to yanking a
stick of memory out of your motherboard.


I can't say it better. Losing a swap area will make the system fail in
one way or the other; on my systems this is typically expressed as a crash
of varying severity. I use raid10 because it is the fastest reliable level
I've found.


--
Bill Davidsen [EMAIL PROTECTED]
 Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over... Otto von Bismark 






Re: which raid level gives maximum overall speed? (raid-10,f2 vs. raid-0)

2008-01-30 Thread Janek Kozicki
Keld Jørn Simonsen said: (by the date of Wed, 30 Jan 2008 23:00:07 +0100)

 Teoretically, raid0 and raid10,f2 should be the same for reading, given the
 same size of the md partition, etc. For writing, raid10,f2 should be half the 
 speed of
 raid0. This should go both for sequential and random read/writes.
 But I would like to have real test numbers. 

Me too. Thanks. Are there any other raid levels that may count here?
Raid-10 with some other options?

-- 
Janek Kozicki |


Re: Documentation? failure to update-initramfs causes Infinite md loop on boot

2008-01-30 Thread maximilian attems
On Wed, Jan 30, 2008 at 04:32:46PM -0600, Moshe Yudkowsky wrote:
 I reformatted the disks in preparation to my move to a RAID1/RAID5 
 combination. I couldn't --stop the array (that should have told me 
 something), so I removed ARRAY from mdadm.conf and restarted. I ran 
 fdisk to create the proper partitions, and then I removed the /dev/md* 
 and /dev/md/* entries in anticipation of creating the new ones. I then 
 rebooted to pick up the new partitions I'd created.

Pretty simple: when you change mdadm.conf, put it into the initramfs as well:
update-initramfs -u -k all
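
On Debian/Ubuntu the whole sequence might look like this (the --scan line is
just one way of regenerating the ARRAY entries):

mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # refresh ARRAY lines after changing arrays
update-initramfs -u -k all                       # copy the new config into every initramfs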


Documentation? failure to update-initramfs causes Infinite md loop on boot

2008-01-30 Thread Moshe Yudkowsky
I reformatted the disks in preparation to my move to a RAID1/RAID5 
combination. I couldn't --stop the array (that should have told me 
something), so I removed ARRAY from mdadm.conf and restarted. I ran 
fdisk to create the proper partitions, and then I removed the /dev/md* 
and /dev/md/* entries in anticipation of creating the new ones. I then 
rebooted to pick up the new partitions I'd created.


Now I can no longer boot, with this series of messages:

md: md_import_device returned: -22
md: mdadm failed to add /dev/sdb2 to /dev/md/all: invalid argument
mdadm: failed to RUN_ARRAY /dev/md/all: invalid argument
md: sdc2 has invalid sb, not importing!

Thousands of these go past, and there's no escape. That's quite a severe 
error. I'm going to boot on a rescue disk to fix this -- there's no 
other way I can think of to get out of this mess -- but I wonder if 
there ought to be documentation on the interaction between mdadm and 
update-initramfs.


--
Moshe Yudkowsky * [EMAIL PROTECTED] * www.pobox.com/~moshe
Becoming the biggest banana republic in the world -- and without the 
bananas, at that -- is an unenviable prospect.

-- Sergei Stepashin, Prime Minister of Russia


problem with spare, active device, clean degraded, reshape RAID5, anybody can help?

2008-01-30 Thread Andreas-Sokov
Hello linux-raid.

I have Debian.

raid01:/# mdadm -V
mdadm - v2.6.4 - 19th October 2007

raid01:/# mdadm -D /dev/md1
/dev/md1:
Version : 00.91.03
  Creation Time : Tue Nov 13 18:42:36 2007
 Raid Level : raid5
 Array Size : 1465159488 (1397.29 GiB 1500.32 GB)
  Used Dev Size : 488386496 (465.76 GiB 500.11 GB)
   Raid Devices : 5
  Total Devices : 5
Preferred Minor : 1
Persistence : Superblock is persistent

Update Time : Sun Jan 27 00:24:44 2008
  State : clean, degraded
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1

 Layout : left-symmetric
 Chunk Size : 64K

  Delta Devices : 1, (4-5)

   UUID : 4fbdc8df:07b952cf:7cc6faa0:04676ba5
 Events : 0.683478

Number   Major   Minor   RaidDevice State
   0       8       32        0      active sync   /dev/sdc
   1       8       48        1      active sync   /dev/sdd
   2       8       64        2      active sync   /dev/sde
   3       8       80        3      active sync   /dev/sdf
   4       0        0        4      removed

   5       8       16        -      spare   /dev/sdb


Does anybody know what I need to do for /dev/sdb to become an ACTIVE device?




**
@raid01:/# mdadm -E /dev/sdb
/dev/sdb:
  Magic : a92b4efc
Version : 00.91.00
   UUID : 4fbdc8df:07b952cf:7cc6faa0:04676ba5
  Creation Time : Tue Nov 13 18:42:36 2007
 Raid Level : raid5
  Used Dev Size : 488386496 (465.76 GiB 500.11 GB)
 Array Size : 1953545984 (1863.05 GiB 2000.43 GB)
   Raid Devices : 5
  Total Devices : 5
Preferred Minor : 1

  Reshape pos'n : 194537472 (185.53 GiB 199.21 GB)
  Delta Devices : 1 (4-5)

Update Time : Tue Jan 29 02:05:52 2008
  State : clean
 Active Devices : 4
Working Devices : 5
 Failed Devices : 1
  Spare Devices : 1
   Checksum : 450cd41b - correct
 Events : 0.683482

 Layout : left-symmetric
 Chunk Size : 64K

  Number   Major   Minor   RaidDevice State
this   5       8       16        5      spare   /dev/sdb

   0     0       8       32        0      active sync   /dev/sdc
   1     1       8       48        1      active sync   /dev/sdd
   2     2       8       64        2      active sync   /dev/sde
   3     3       8       80        3      active sync   /dev/sdf
   4     4       0        0        4      faulty removed
   5     5       8       16        5      spare   /dev/sdb


   

-- 
Best regards
 Andreas
 mailto:[EMAIL PROTECTED]



Re: RAID 1 and grub

2008-01-30 Thread David Rees
On Jan 30, 2008 2:06 PM, Richard Scobie [EMAIL PROTECTED] wrote:
 hda has failed, and after spending some time with a rescue disk, mounting
 hdc's /boot partition (hdc1) and changing the grub.conf device
 parameters, I have had no success in booting off it.

 I then set them back to the original (hd0,0) and moved hdc into hda's
 position.

 Booting from there brings up the message "GRUB hard disk error".

Have you tried re-running grub-install after booting from a rescue disk?
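
In case it helps, from a rescue environment that usually amounts to something
like the following (device names and mount points are only an example):

mount /dev/hdc3 /mnt                 # root filesystem of the installed system
mount /dev/hdc1 /mnt/boot            # its /boot partition
mount --bind /dev /mnt/dev
mount -t proc proc /mnt/proc
chroot /mnt grub-install /dev/hdc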

-Dave


Re: Documentation? failure to update-initramfs causes Infinite md loop on boot

2008-01-30 Thread Moshe Yudkowsky

maximilian attems wrote:


pretty simple, you change mdadm.conf put it also on initramfs:
update-initramfs -u -k all


Sure, that's what I did after booting the rescue disk, chroot, etc. However, I
wonder if the *documentation* -- the wiki, or even the man page discussion
of booting -- should mention that changes to mdadm.conf have to be
propagated to /boot?


In fact, that's an interesting question: which changes have to propagate 
to /boot? I'd think any change that affects md devices with /etc/fstab 
entries set to auto would have to be followed by update-initramfs. 
That's quite some bit of hidden knowledge if true.


--
Moshe Yudkowsky * [EMAIL PROTECTED] * www.pobox.com/~moshe
  All friends have real and imaginary components.
 -- Moshe Yudkowsky


Re: RAID 1 and grub

2008-01-30 Thread Richard Scobie

David Rees wrote:


Have you tried re-running grub-install after booting from a rescue disk?

-Dave


Hi David,

I have, but although I can get a bit further, it seems that the BIOS is
doing some strange things as well, switching the drive ordering around.


With a new hda installed and partitioned, ready to be rebuilt, the good
drive hdc installed, and grub.conf modified to address (hd2,0) - I also have
an hdb installed, and grub installed on hdc - booting with the BIOS set to
start on hdc hangs with the message "grub stage2", then drops to a grub
prompt.


I then enter "kernel (hd0,0)/vmlinuz" and it finds the kernel. I would
have expected this to be on (hd2,0).


Next, "boot root=/dev/md2", "boot root=/dev/hdc3" or "boot
root=/dev/hda3" all result in the kernel booting, then panicking with a
"cannot open root device" error.


I suspect you are correct that the Fedora installer, having built and 
installed to RAID1, does not finish the job by installing grub on the 
second drive.


While it is not a problem with this particular box to do a reinstall, it 
does not inspire confidence for a number of others that I have.


This is the first time I have lost the primary member of a RAID1, having 
replaced secondary members a number of times without issue.


Regards,

Richard


Re: which raid level gives maximum overall speed? (raid-10,f2 vs. raid-0)

2008-01-30 Thread Keld Jørn Simonsen
On Wed, Jan 30, 2008 at 11:36:39PM +0100, Janek Kozicki wrote:
 Keld Jørn Simonsen said: (by the date of Wed, 30 Jan 2008 23:00:07 +0100)
 
  Theoretically, raid0 and raid10,f2 should be the same for reading, given the
  same size of the md partition, etc. For writing, raid10,f2 should be half the
  speed of raid0. This should hold for both sequential and random reads/writes.
  But I would like to have real test numbers.
 
 Me too. Thanks. Are there any other raid levels that may count here?
 Raid-10 with some other options?

Given that you want maximum throughput for both reading and writing, I
think there is only one way to go, and that is raid0.

All the raid10's will take double the time for writing, and raid5 and raid6
will also have double or triple writing times, given that you can do
striped writes on the raid0.

For random and sequential writing in the normal case (no faulty disks) I would
guess that all of the raid10's, raid1 and raid5 are about equally fast, given
the same amount of hardware (raid5 and raid6 a little slower because of the
inactive parity chunks).

For random reading, raid0, raid1 and raid10 should be equally fast, with
raid5 a little slower, due to one of the disks being virtually out of
operation, as it is used for the XOR parity chunks. raid6 should be
somewhat slower due to 2 non-operational disks. raid10,f2 may have a
slight edge due to virtually only using half of each disk, giving better
average seek times, and using the faster outer disk halves.

For sequential reading, raid0 and raid10,f2 should be equally fast.
Possibly raid10,o2 comes quite close. My guess is that raid5 then is
next, achieving striping rates, but with the loss of one parity drive,
and then raid1 and raid10,n2 with equal performance.

In degraded mode, I guess that for random reads/writes the difference is not
big between any of the raid1, raid5 and raid10 layouts, while sequential
reads will be especially bad for raid10,f2, approaching the random read
rate; the others will run at the normal speed of the filesystem on top
(ext3, reiserfs, xfs etc).

Theory, theory theory. Show me some real figures.
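
A very rough way to produce such figures (a sketch only - plain dd streaming
numbers, not a real benchmark; the mount point and size are examples, and the
test file should be larger than RAM):

# sequential write, ~1 GB, flushed to disk before dd reports the rate
dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=1024 conv=fdatasync

# drop the page cache, then time a sequential read of the same file
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/test/bigfile of=/dev/null bs=1M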

Best regards
Keld


Re: RAID 1 and grub

2008-01-30 Thread Richard Scobie

A followup for the archives:

I found this document very useful:

http://lists.us.dell.com/pipermail/linux-poweredge/2003-July/008898.html

After modifying my grub.conf to refer to (hd0,0), reinstalling grub on 
hdc with:


grub> device (hd0) /dev/hdc

grub> root (hd0,0)

grub> (hd0)

and rebooting with the bios set to boot off hdc, everything burst back 
into life.


I shall now be checking all my Fedora/Centos RAID1 installs for grub 
installed on both drives.


Regards,

Richard


Re: RAID 1 and grub

2008-01-30 Thread Richard Scobie

David Rees wrote:


FWIW, this step is clearly marked in the Software-RAID HOWTO under
Booting on RAID:
http://tldp.org/HOWTO/Software-RAID-HOWTO-7.html#ss7.3


The one place I didn't look...



BTW, I suspect you are missing the command setup from your 3rd
command above, it should be:

# grub
grub> device (hd0) /dev/hdc
grub> root (hd0,0)
grub> setup (hd0)


That is correct.

Regards,

Richard


Re: RAID 1 and grub

2008-01-30 Thread David Rees
On Jan 30, 2008 6:33 PM, Richard Scobie [EMAIL PROTECTED] wrote:
 I found this document very useful:
 http://lists.us.dell.com/pipermail/linux-poweredge/2003-July/008898.html

 After modifying my grub.conf to refer to (hd0,0), reinstalling grub on
 hdc with:

 grub> device (hd0) /dev/hdc
 grub> root (hd0,0)
 grub> (hd0)

 and rebooting with the bios set to boot off hdc, everything burst back
 into life.

FWIW, this step is clearly marked in the Software-RAID HOWTO under
Booting on RAID:
http://tldp.org/HOWTO/Software-RAID-HOWTO-7.html#ss7.3

If it appears that Fedora isn't doing this when installing on a
Software RAID 1 boot device, I suggest you open a bug.

BTW, I suspect you are missing the command setup from your 3rd
command above, it should be:

# grub
grub> device (hd0) /dev/hdc
grub> root (hd0,0)
grub> setup (hd0)

 I shall now be checking all my Fedora/Centos RAID1 installs for grub
 installed on both drives.

Good idea. Whenever setting up a RAID1 device to boot from, I perform
the above 3 steps. I also suggest using labels to identify partitions,
and testing both failure modes - that you are able to boot with
either drive disconnected.

-Dave