Re: RAID 1 and grub

2008-02-03 Thread Peter Rabbitson

Bill Davidsen wrote:

Richard Scobie wrote:

A followup for the archives:

I found this document very useful:

http://lists.us.dell.com/pipermail/linux-poweredge/2003-July/008898.html

After modifying my grub.conf to refer to (hd0,0), reinstalling grub on 
hdc with:


grub> device (hd0) /dev/hdc

grub> root (hd0,0)

grub> setup (hd0)

and rebooting with the bios set to boot off hdc, everything burst back 
into life.


I shall now be checking all my Fedora/Centos RAID1 installs for grub 
installed on both drives.


Have you actually tested this by removing the first hd and booting? 
Depending on the BIOS I believe that the fallback drive will be called 
hdc by the BIOS but will be hdd in the system. That was with RHEL3, but 
worth testing.




The line:

grub> device (hd0) /dev/hdc

simply means: treat /dev/hdc as the first _bios_ hard disk in the system. 
This way, when grub writes to the MBR of hd0, it will in fact be writing to 
/dev/hdc. The reason the drive must be referenced as hd0 (and not hd2) is 
that grub enumerates drives according to the bios, and therefore the drive 
from which the bios is currently booting is _always_ hd0.




Re: raid problem: after every reboot /dev/sdb1 is removed?

2008-02-03 Thread Bill Davidsen

Berni wrote:

Hi

I created the raid arrays during install with the text-installer-cd. 
So first the raid array was created and then the system was installed on it. 



I don't have an extra /boot partition; it is on the root (/) partition, and the root 
is md0 in the raid. Every partition for Ubuntu (also swap) is in the raid.

What exactly does rerunning grub mean? (Installing grub into the MBR of both drives?) 
I can't find mkinitrd on Ubuntu. I ran update-initramfs but it didn't help.


  
I think you need an Ubuntu guru to help. I always create a small raid1 
for /boot and then use other arrays for whatever the system is doing. I 
don't know whether Ubuntu uses mkinitrd or something else, but it clearly 
didn't get it right without a little help from you.


How about some input, Ubuntu users (or Debian users -- isn't Ubuntu really Debian?)?
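
A rough sketch of what "rerunning grub" and rebuilding the initramfs might look 
like on Ubuntu -- the device names /dev/sda and /dev/sdb are hypothetical and must 
match your actual array members:

sudo update-initramfs -u -k all   # rebuild the initramfs for all installed kernels, pulling in the md/raid bits
sudo grub-install /dev/sda        # reinstall grub into the MBR of the first member disk
sudo grub-install /dev/sdb        # ...and the second, so either disk can boot on its own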


On Sat, 02 Feb 2008 14:47:50 -0500
Bill Davidsen [EMAIL PROTECTED] wrote:

  

Berni wrote:


Hi!

I have the following problem with my softraid (raid 1). I'm running
Ubuntu 7.10 64bit with kernel 2.6.22-14-generic.

After every reboot my first boot partition, md0, is out of sync. One
of the disks (sdb1) is marked removed. 
After a resync every partition is in sync again, but after a reboot the
state is removed once more. 

The disks are new, both Seagate 250 GB, with exactly the same partition table. 

  
  
Did you create the raid arrays and then install on them? Or add them 
after the fact? I have seen this type of problem when the initrd doesn't 
start the array before pivotroot, usually because the raid capabilities 
aren't in the boot image. In that case rerunning grub and mkinitrd may help.


I run raid on Redhat distributions, and some Slackware, so I can't speak 
for Ubuntu from great experience, but that's what it sounds like. When 
you boot, is the /boot mounted on a degraded array or on the raw partition?




--
Bill Davidsen [EMAIL PROTECTED]
 Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over... Otto von Bismark 





Re: RAID 1 and grub

2008-02-03 Thread Bill Davidsen

Richard Scobie wrote:

A followup for the archives:

I found this document very useful:

http://lists.us.dell.com/pipermail/linux-poweredge/2003-July/008898.html

After modifying my grub.conf to refer to (hd0,0), reinstalling grub on 
hdc with:


grub> device (hd0) /dev/hdc

grub> root (hd0,0)

grub> setup (hd0)

and rebooting with the bios set to boot off hdc, everything burst back 
into life.


I shall now be checking all my Fedora/Centos RAID1 installs for grub 
installed on both drives.


Have you actually tested this by removing the first hd and booting? 
Depending on the BIOS I believe that the fallback drive will be called 
hdc by the BIOS but will be hdd in the system. That was with RHEL3, but 
worth testing.


--
Bill Davidsen [EMAIL PROTECTED]
 Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over... Otto von Bismark 





Re: draft howto on making raids for surviving a disk crash

2008-02-03 Thread Keld Jørn Simonsen
On Sun, Feb 03, 2008 at 10:53:51AM -0500, Bill Davidsen wrote:
 Keld Jørn Simonsen wrote:
 This is intended for the linux raid howto. Please give comments.
 It is not fully ready /keld
 
 Howto prepare for a failing disk
 
 6. /etc/mdadm.conf
 
 Something here on /etc/mdadm.conf. What would be safe, allowing
 a system to boot even if a disk has crashed?
   
 
 Recommend PARTITIONS be used.
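
(A minimal sketch of an /etc/mdadm.conf along those lines; the ARRAY line and its 
UUID are placeholders, not taken from this thread:)

# scan every partition listed in /proc/partitions for md superblocks
DEVICE partitions
# identify the array by UUID rather than by device names, which can change after a failure
ARRAY /dev/md0 UUID=<uuid-of-your-array>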

Thanks Bill for your suggestions, which I have incorporated in the text.

However, I do not understand what to do with the remark above.
Please explain.

Best regards
keld


Re: raid1 and raid 10 always writes all data to all disks?

2008-02-03 Thread Keld Jørn Simonsen
On Sun, Feb 03, 2008 at 10:56:01AM -0500, Bill Davidsen wrote:
 Keld Jørn Simonsen wrote:
  I found a sentence in the HOWTO:
 
 raid1 and raid 10 always writes all data to all disks
 
 I think this is wrong for raid10.
 
 eg
 
 a raid10,f2 of 4 disks only writes to two of the disks -
 not all 4 disks. Is that true?
   
 
 I suspect that really should have read all mirror copies, in the 
 raid10 case.

OK, I changed the text to:

raid1 always writes all data to all disks.

raid10 always writes all data to as many disks as the number of copies the array holds.
For example, on a raid10,f2 or raid10,o2 array of 6 disks, the data will only
be written 2 times.
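
For concreteness, a hedged sketch of creating such a 6-disk far-copies array; the 
device names are illustrative only:

# every block ends up on exactly 2 of the 6 disks (the "2" in f2), not on all of them
mdadm --create /dev/md0 --level=raid10 --layout=f2 --raid-devices=6 \
  /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1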

Best regards
Keld


Re: RAID 1 and grub

2008-02-03 Thread Richard Scobie

Bill Davidsen wrote:

Have you actually tested this by removing the first hd and booting? 
Depending on the BIOS I believe that the fallback drive will be called 
hdc by the BIOS but will be hdd in the system. That was with RHEL3, but 
worth testing.




Hi Bill,

I did not try this particular combination. I shut the box down, removed 
the failed drive (hda), installed its replacement and proceeded from 
there.


Once I had discovered that hdc had no grub installed, after running:

grub> device (hd0) /dev/hdc

grub> root (hd0,0)

grub> setup (hd0)

I set the BIOS to boot from hdc and it all worked.

Regards,

Richard


RAID needs more to survive a power hit, different /boot layout for example (was Re: draft howto on making raids for surviving a disk crash)

2008-02-03 Thread Moshe Yudkowsky
I've been reading the draft and checking it against my experience. 
Because of local power fluctuations, I've just accidentally checked my 
system:  My system does *not* survive a power hit. This has happened 
twice already today.


I've got /boot and a few other pieces in a 4-disk RAID 1 (three running, 
one spare). This partition is on /dev/sd[abcd]1.


I've used grub to install grub on all three running disks:

grub --no-floppy <<EOF
root (hd0,1)
setup (hd0)
root (hd1,1)
setup (hd1)
root (hd2,1)
setup (hd2)
EOF

(To those reading this thread to find out how to recover: According to 
grub's map option, /dev/sda1 maps to hd0,1.)



After the power hit, I get:

 Error 16
 Inconsistent filesystem mounted

I then tried to boot up on hda1,1, hdd2,1 -- none of them worked.

The culprit, in my opinion, is the reiserfs file system. During the 
power hit, the reiserfs file system of /boot was left in an inconsistent 
state; this meant I had up to three bad copies of /boot.


Recommendations:

1. I'm going to try adding a data=journal option to the reiserfs file 
systems, including the /boot. If this does not work, then /boot must be 
ext3 in order to survive a power hit.


2. We discussed what should be on the RAID1 bootable portion of the 
filesystem. True, it's nice to have the ability to boot from just the 
RAID1 portion. But if that RAID1 portion can't survive a power hit, 
there's little point. It might make a lot more sense to put /boot on its 
own tiny partition.


The Fix:

The way to fix this problem with booting is to get the reiserfs file 
system back into sync. I did this by booting to my emergency single-disk 
partition ((hd0,0) if you must know) and then mounting the /dev/md/root 
that contains /boot. This forced a reiserfs consistency check and 
journal replay, and let me reboot without problems.
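
(Roughly, that recovery amounts to the following; this is a sketch only -- the member 
devices listed and the explicit reiserfsck step are assumptions, not commands quoted 
from the thread:)

# from the emergency system, assemble the degraded array if it is not already running
mdadm --assemble /dev/md/root /dev/sda1 /dev/sdb1 /dev/sdc1

# an explicit check also replays the journal; plain mounting does so as well
reiserfsck --check /dev/md/root
mount -t reiserfs /dev/md/root /mnt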




--
Moshe Yudkowsky * [EMAIL PROTECTED] * www.pobox.com/~moshe
A gun is, in many people's minds, like a magic wand. If you point it at 
people,

they are supposed to do your bidding.
-- Edwin E. Moise, _Tonkin Gulf_


Re: RAID needs more to survive a power hit, different /boot layout for example (was Re: draft howto on making raids for surviving a disk crash)

2008-02-03 Thread Robin Hill
On Sun Feb 03, 2008 at 01:15:10PM -0600, Moshe Yudkowsky wrote:

 I've been reading the draft and checking it against my experience. Because 
 of local power fluctuations, I've just accidentally checked my system:  My 
 system does *not* survive a power hit. This has happened twice already 
 today.

 I've got /boot and a few other pieces in a 4-disk RAID 1 (three running, 
 one spare). This partition is on /dev/sd[abcd]1.

 I've used grub to install grub on all three running disks:

 grub --no-floppy <<EOF
 root (hd0,1)
 setup (hd0)
 root (hd1,1)
 setup (hd1)
 root (hd2,1)
 setup (hd2)
 EOF

 (To those reading this thread to find out how to recover: According to 
 grub's map option, /dev/sda1 maps to hd0,1.)

This is wrong - the disk you boot from will always be hd0 (no matter
what the map file says - that's only used after the system's booted).
You need to remap the hd0 device for each disk:

grub --no-floppy <<EOF
root (hd0,1)
setup (hd0)
device (hd0) /dev/sdb
root (hd0,1)
setup (hd0)
device (hd0) /dev/sdc
root (hd0,1)
setup (hd0)
device (hd0) /dev/sdd
root (hd0,1)
setup (hd0)
EOF


 After the power hit, I get:

  Error 16
  Inconsistent filesystem mounted

 I then tried to boot up on hda1,1, hdd2,1 -- none of them worked.

 The culprit, in my opinion, is the reiserfs file system. During the power 
 hit, the reiserfs file system of /boot was left in an inconsistent state; 
 this meant I had up to three bad copies of /boot.

Could well be - I always use ext2 for the /boot filesystem and don't
have it automounted.  I only mount the partition to install a new
kernel, then unmount it again.
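
A hedged sketch of what that looks like in /etc/fstab, assuming /boot lives on 
/dev/md0 (the device name is illustrative):

# /boot is not mounted automatically; mount it by hand only to install a kernel
/dev/md0   /boot   ext2   noauto   0   0

Then it is just "mount /boot" before installing a kernel and "umount /boot" afterwards.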

Cheers,
Robin
-- 
 ___
( ' } |   Robin Hill[EMAIL PROTECTED] |
   / / )  | Little Jim says |
  // !!   |  He fallen in de water !! |




Re: RAID needs more to survive a power hit, different /boot layout for example (was Re: draft howto on making raids for surviving a disk crash)

2008-02-03 Thread Michael Tokarev
Moshe Yudkowsky wrote:
 I've been reading the draft and checking it against my experience.
 Because of local power fluctuations, I've just accidentally checked my
 system:  My system does *not* survive a power hit. This has happened
 twice already today.
 
 I've got /boot and a few other pieces in a 4-disk RAID 1 (three running,
 one spare). This partition is on /dev/sd[abcd]1.
 
 I've used grub to install grub on all three running disks:
 
 grub --no-floppy <<EOF
 root (hd0,1)
 setup (hd0)
 root (hd1,1)
 setup (hd1)
 root (hd2,1)
 setup (hd2)
 EOF
 
 (To those reading this thread to find out how to recover: According to
 grub's map option, /dev/sda1 maps to hd0,1.)

I usually install all the drives identically in this regard -
to be treated as the first bios disk (disk 0x80).  As already
pointed out in this thread, not all BIOSes are able to boot
off a second or third disk, so if your first disk (sda) fails,
your only option is to put your sdb into the place of sda and boot
from it - this way, grub needs to think it is the first boot drive
too.

By the way, lilo works here more easily and more reliably.
You just install a standard MBR (lilo ships one too) which simply
boots the active partition, and install lilo onto the
raid array, telling it NOT to do anything fancy with raid
at all (raid-extra-boot none).  But for this to work, you
have to have identical partitions with identical offsets -
at least for the boot partitions.
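
A hedged sketch of that lilo setup, assuming the root/boot array is /dev/md0 built 
from /dev/sda1 and /dev/sdb1, both flagged bootable; names and layout are assumptions:

# /etc/lilo.conf (fragment)
boot=/dev/md0
raid-extra-boot=none

# write the boot sector into the raid array, then a plain generic MBR onto each disk
lilo
lilo -M /dev/sda
lilo -M /dev/sdb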

 After the power hit, I get:
 
 Error 16
 Inconsistent filesystem mounted

But did it actually mount it?

 I then tried to boot up on hda1,1, hdd2,1 -- none of them worked.

Which is in fact expected after the above.  You have 3 identical
copies (thanks to raid) of your boot filesystem, all 3 equally
broken.  When it boots, it assembles your /boot raid array - the
same one regardless of whether you boot off hda, hdb or hdc.

 The culprit, in my opinion, is the reiserfs file system. During the
 power hit, the reiserfs file system of /boot was left in an inconsistent
 state; this meant I had up to three bad copies of /boot.

I've never seen any problem with ext[23] wrt unexpected power loss, so
far, running several hundred different systems, some since 1998, some
since 2000.  Sure, there were several inconsistencies, and sometimes (maybe
once or twice) some minor data loss (only a few newly created files were
lost), but the most serious was finding a few items in lost+found after an
fsck - and that was ext2; I've never seen that with ext3.

What's more, I tried hard to force a power failure at an unexpected time, by
doing massive write operations and cutting power in the middle of them - I was
never able to trigger any problem this way, at all.

In any case, even if ext[23] is somewhat damaged, it can still be
mounted - access to some files may return I/O errors (in the parts
where it's really damaged), but the rest will work.

On the other hand, I had several immediate issues with reiserfs.  It
was a long time ago, when the filesystem was first included in the
mainline kernel, so that doesn't reflect the current situation.  Yet even
at that stage, reiserfs was declared stable by its authors.  Issues
were trivially triggerable by cutting the power at an unexpected
time, and fsck didn't help on several occasions.

So I tend to avoid reiserfs - due to my own experience, and due to
numerous problems elsewhere.

 Recommendations:
 
 1. I'm going to try adding a data=journal option to the reiserfs file
 systems, including the /boot. If this does not work, then /boot must be
 ext3 in order to survive a power hit.

By the way, if your /boot is a separate filesystem (i.e. there's nothing
more on it), I see absolutely zero reason for it to crash.  /boot
is modified VERY rarely (only when installing a kernel), and only while
it's being modified is there a chance for it to be damaged somehow.  During
the rest of the time, it's constant, and any power cut should not hurt
it at all.  If reiserfs shows such behaviour even for an unmodified
filesystem (

 2. We discussed what should be on the RAID1 bootable portion of the
 filesystem. True, it's nice to have the ability to boot from just the
 RAID1 portion. But if that RAID1 portion can't survive a power hit,
 there's little sense. It might make a lot more sense to put /boot on its
 own tiny partition.

Hehe.

/boot doesn't matter, really.  A separate /boot has been used for 3 purposes:

1) to work around the bios 1024th-cylinder issue (long gone with LBA)
2) to be able to put the rest of the system onto a filesystem/raid/lvm/etc.
 that the bootloader does not support.  For example, lilo didn't support
 reiserfs (and still doesn't with tail packing enabled), so if you
 want to use reiserfs for your root fs, you put /boot onto a separate
 ext2 fs.  The same is true for raid - you can put the rest of the
 system onto a raid5 array (unsupported by grub/lilo) and, in order
 to boot, create a small raid1 (or any other supported level) /boot.
3) to keep it as non-volatile as possible - i.e. an area of the
 disk which never changes (except in a few very rare cases).  For

Re: RAID needs more to survive a power hit, different /boot layout for example (was Re: draft howto on making raids for surviving a disk crash)

2008-02-03 Thread Moshe Yudkowsky

Robin Hill wrote:


This is wrong - the disk you boot from will always be hd0 (no matter
what the map file says - that's only used after the system's booted).
You need to remap the hd0 device for each disk:

grub --no-floppy <<EOF
root (hd0,1)
setup (hd0)
device (hd0) /dev/sdb
root (hd0,1)
setup (hd0)
device (hd0) /dev/sdc
root (hd0,1)
setup (hd0)
device (hd0) /dev/sdd
root (hd0,1)
setup (hd0)
EOF


For my enlightenment: if the file system is mounted, then hd2,1 is a 
sensible grub operation, isn't it? For the record, given my original 
script when I boot I am able to edit the grub boot options to read


root (hd2,1)

and proceed to boot.


--
Moshe Yudkowsky * [EMAIL PROTECTED] * www.pobox.com/~moshe
 I love deadlines... especially the whooshing sound they
  make as they fly past.
-- Dermot Dobson


Re: RAID needs more to survive a power hit, different /boot layout for example (was Re: draft howto on making raids for surviving a disk crash)

2008-02-03 Thread Moshe Yudkowsky

Michael Tokarev wrote:


Speaking of repairs.  As I already mentioned, I always use small
(256M..1G) raid1 array for my root partition, including /boot,
/bin, /etc, /sbin, /lib and so on (/usr, /home, /var are on
their own filesystems).  And I had the following scenarios
happened already:


But that's *exactly* what I have -- well, 5 GB -- and it failed. I've 
modified /etc/fstab to use data=journal (even on root, which I 
thought wasn't supposed to work without a grub option!) and I can now 
power-cycle the system and bring it up reliably afterwards.
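
Roughly what that fstab change looks like; the device name and the rest of the line 
are illustrative, not copied from the poster's system:

# journal data as well as metadata, at the cost of some write throughput
/dev/md0   /   reiserfs   defaults,data=journal   0   1

Changing the journaling mode of an already-mounted root normally also wants the option 
on the kernel command line (e.g. rootflags=data=journal), which is presumably the grub 
option referred to above.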


So I'm a little suspicious of this theory that /etc and others can be on 
the same partition as /boot in a non-ext3 file system.


--
Moshe Yudkowsky * [EMAIL PROTECTED] * www.pobox.com/~moshe
 Thanks to radio, TV, and the press we can now develop absurd
  misconceptions about peoples and governments we once hardly knew
  existed.-- Charles Fair, _From the Jaws of Victory_


Re: RAID needs more to survive a power hit, different /boot layout for example (was Re: draft howto on making raids for surviving a disk crash)

2008-02-03 Thread Michael Tokarev
Moshe Yudkowsky wrote:
 Michael Tokarev wrote:
 
 Speaking of repairs.  As I already mentioned, I always use small
 (256M..1G) raid1 array for my root partition, including /boot,
 /bin, /etc, /sbin, /lib and so on (/usr, /home, /var are on
 their own filesystems).  And I had the following scenarios
 happened already:
 
 But that's *exactly* what I have -- well, 5GB -- and which failed. I've
 modified /etc/fstab system to use data=journal (even on root, which I
 thought wasn't supposed to work without a grub option!) and I can
 power-cycle the system and bring it up reliably afterwards.
 
 So I'm a little suspicious of this theory that /etc and others can be on
 the same partition as /boot in a non-ext3 file system.

If even your separate /boot failed (which should NEVER fail), what can
be said about the rest?

I mean, if you save your /boot, what help will it be to you if
your root fs is damaged?

That's why I said /boot is mostly irrelevant.

Well.  You can have some recovery tools in your initrd/initramfs - that's
for sure (and for that to work, you can make your /boot more reliable by
giving it a separate filesystem).  But if you go that route, it's
better to boot off some recovery CD instead of attempting recovery with the
very limited toolset available in your initramfs.

/mjt


Re: RAID needs more to survive a power hit, different /boot layout for example (was Re: draft howto on making raids for surviving a disk crash)

2008-02-03 Thread Robin Hill
On Sun Feb 03, 2008 at 02:46:54PM -0600, Moshe Yudkowsky wrote:

 Robin Hill wrote:

 This is wrong - the disk you boot from will always be hd0 (no matter
 what the map file says - that's only used after the system's booted).
 You need to remap the hd0 device for each disk:
 grub --no-floppy <<EOF
 root (hd0,1)
 setup (hd0)
 device (hd0) /dev/sdb
 root (hd0,1)
 setup (hd0)
 device (hd0) /dev/sdc
 root (hd0,1)
 setup (hd0)
 device (hd0) /dev/sdd
 root (hd0,1)
 setup (hd0)
 EOF

 For my enlightenment: if the file system is mounted, then hd2,1 is a 
 sensible grub operation, isn't it? For the record, given my original script 
 when I boot I am able to edit the grub boot options to read

 root (hd2,1)

 and proceed to boot.

Once the file system is mounted then hdX,Y maps according to the
device.map file (which may actually bear no resemblance to the drive
order at boot - I've had issues with this before).  At boot time it maps
to the BIOS boot order though, and (in my experience anyway) hd0 will
always map to the drive the BIOS is booting from.
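
For reference, a typical /boot/grub/device.map looks something like this (entries are 
illustrative):

(hd0)   /dev/sda
(hd1)   /dev/sdb
(hd2)   /dev/sdc
(hd3)   /dev/sdd

grub consults this file when run from the mounted system, which is why the hdX 
numbering there can differ from what the BIOS presents at boot time.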

So initially you may have:
SATA-1: hd0
SATA-2: hd1
SATA-3: hd2

Now, if the SATA-1 drive dies totally you will have:
SATA-1: -
SATA-2: hd0
SATA-3: hd1

or if SATA-2 dies:
SATA-1: hd0
SATA-2: -
SATA-3: hd1

Note that in the case where the drive is still detected but fails to
boot, the behaviour seems to be very BIOS dependent - some will
continue on to the next drive as above, whereas others will just sit and complain.

So to answer the second part of your question, yes - at boot time
currently you can do root (hd2,1) or root (hd3,1).  If a disk dies,
however (whichever disk it is), then root (hd3,1) will fail to work.

Note also that the above is only my experience - if you're depending on
certain behaviour under these circumstances then you really need to test
it out on your hardware by disconnecting drives, substituting
non-bootable drives, etc.

HTH,
Robin
-- 
 ___
( ' } |   Robin Hill[EMAIL PROTECTED] |
   / / )  | Little Jim says |
  // !!   |  He fallen in de water !! |




Re: raid10 on three discs - few questions.

2008-02-03 Thread Neil Brown
On Sunday February 3, [EMAIL PROTECTED] wrote:
 Hi,
 
 Maybe I'll buy three HDDs to put a raid10 on them. And get the total
 capacity of 1.5 of a disc. 'man 4 md' indicates that this is possible
 and should work.
 
 I'm wondering - how a single disc failure is handled in such configuration?
 
 1. does the array continue to work in a degraded state?

Yes.

 
 2. after the failure I can disconnect faulty drive, connect a new one,
start the computer, add disc to array and it will sync automatically?
 

Yes.

 
 The question seems a bit obvious, but the configuration is, at least for
 me, a bit unusual. This is why I'm asking. Has anybody here tested such a
 configuration and gained some experience with it?
 
 
 3. Another thing - would raid10,far=2 work when three drives are used?
Would it increase the read performance?

Yes.

 
 4. Would it be possible to later '--grow' the array to use 4 discs in
raid10 ? Even with far=2 ?
 

No.

Well, if by later you mean in five years, then maybe.  But the
code doesn't currently exist.

NeilBrown


Re: raid10 on three discs - few questions.

2008-02-03 Thread Janek Kozicki
Neil Brown said: (by the date of Mon, 4 Feb 2008 10:11:27 +1100)

wow, thanks for quick reply :)

  3. Another thing - would raid10,far=2 work when three drives are used?
 Would it increase the read performance?
 
 Yes.

is far=2 the most I could do to squeeze every possible MB/sec
performance in raid10 on three discs ?

-- 
Janek Kozicki |


Re: raid10 on three discs - few questions.

2008-02-03 Thread Jon Nelson
On Feb 3, 2008 5:29 PM, Janek Kozicki [EMAIL PROTECTED] wrote:
 Neil Brown said: (by the date of Mon, 4 Feb 2008 10:11:27 +1100)

 wow, thanks for quick reply :)

   3. Another thing - would raid10,far=2 work when three drives are used?
  Would it increase the read performance?
 
  Yes.

 is far=2 the most I could do to squeeze every possible MB/sec
 performance in raid10 on three discs ?

In my opinion, yes. It has sequential read characteristics that place it
at, or better than, raid0. Writing is slower, about the speed of a
single disk, give or take.  The other two raid10 layouts (near and
offset) are very close in performance to each other - nearly identical
for reading/writing.

-- 
Jon




Re: problem with spare, active device, clean degraded, reshape RAID5, anybody can help?

2008-02-03 Thread Neil Brown
On Thursday January 31, [EMAIL PROTECTED] wrote:
 Hello linux-raid.
 
 i have DEBIAN.
 
 raid01:/# mdadm -V
 mdadm - v2.6.4 - 19th October 2007
 
 raid01:/# mdadm -D /dev/md1
 /dev/md1:
 Version : 00.91.03
   Creation Time : Tue Nov 13 18:42:36 2007
  Raid Level : raid5

   Delta Devices : 1, (4-5)

So the array is in the middle of a reshape.

It should automatically complete...  Presumably it isn't doing that?

What does
   cat /proc/mdstat
say?

What kernel log messages do you get when you assemble the array?


The spare device will not be added to the array until the reshape has
finished.

Hopefully you aren't using a 2.6.23 kernel?
That kernel had a bug which corrupted data when reshaping a degraded
raid5 array.

NeilBrown


Re[2]: problem with spare, active device, clean degraded, reshape RAID5, anybody can help?

2008-02-03 Thread Andreas-Sokov
Hello, Neil.

You wrote on 4 February 2008 at 03:44:21:

 On Thursday January 31, [EMAIL PROTECTED] wrote:
 Hello linux-raid.
 
 i have DEBIAN.
 
 raid01:/# mdadm -V
 mdadm - v2.6.4 - 19th October 2007
 
 raid01:/# mdadm -D /dev/md1
 /dev/md1:
 Version : 00.91.03
   Creation Time : Tue Nov 13 18:42:36 2007
  Raid Level : raid5
 
   Delta Devices : 1, (4-5)

 So the array is in the middle of a reshape.

 It should automatically complete...  Presumably it isn't doing that?

 What does
cat /proc/mdstat
 say?

raid01:/etc# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
[multipath] [faulty]
md1 : active(auto-read-only) raid5 sdc[0] sdb[5](S) sdf[3] sde[2] sdd[1]
  1465159488 blocks super 0.91 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]

unused devices: <none>
raid01:/etc#


 What kernel log messages do you get when you assemble the array?
They go into /var/log/messages (the OS is Debian).
The last messages after mdadm -A were:

== /var/log/messages ==
Feb  4 02:39:53 raid01 kernel: RAID5 conf printout:
Feb  4 02:39:53 raid01 kernel:  --- rd:5 wd:4
Feb  4 02:39:53 raid01 kernel:  disk 0, o:1, dev:sdc
Feb  4 02:39:53 raid01 kernel:  disk 1, o:1, dev:sdd
Feb  4 02:39:53 raid01 kernel:  disk 2, o:1, dev:sde
Feb  4 02:39:53 raid01 kernel:  disk 3, o:1, dev:sdf
Feb  4 02:39:53 raid01 kernel: ...ok start reshape thread


 The spare device will not be added to the array until the reshape has
 finished.
OK, I thought that this was how it had to be (excuse me for my bad English).
But since I did not see anything about the reshape (reshaping) in mdadm -D /dev/md1,
it confused me.

 Hopefully you aren't using a 2.6.23 kernel?
Yes, I remember you alerted me about the 2.6.23 kernel.
raid01:/etc# uname -a
Linux raid01 2.6.22.16-6 #7 SMP Thu Jan 24 21:58:32 MSK 2008 i686 GNU/Linux
raid01:/etc#


 That kernel had a bug which corrupted data when reshaping a degraded
 raid5 array.

 NeilBrown



-- 
Best regards,
 Andreas-Sokov  mailto:[EMAIL PROTECTED]



Re: /dev/sdb has different metadata to chosen array /dev/md1 0.91 0.90.

2008-02-03 Thread Neil Brown
On Saturday February 2, [EMAIL PROTECTED] wrote:
 Hello, linux-raid.
 
 Please help - how can I deal with THIS:
 
 [EMAIL PROTECTED]:~# mdadm -I /dev/sdb
 mdadm: /dev/sdb has different metadata to chosen array /dev/md1 0.91 0.90.
 

Apparently mdadm -I doesn't work with arrays that are in the middle
of a reshape.  I'll try to fix that for the next release.
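
In the meantime a full assemble presumably still works for such an array; a sketch, 
with hypothetical member devices:

mdadm --assemble /dev/md1 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf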

Thanks for the report.

NeilBrown


Re[2]: problem with spare, active device, clean degraded, reshape RAID5, anybody can help?

2008-02-03 Thread Andreas-Sokov
Hi, Neil.

On 4 February 2008 at 03:44:21, you wrote:

 On Thursday January 31, [EMAIL PROTECTED] wrote:
 Hello linux-raid.
 
 i have DEBIAN.
 
 raid01:/# mdadm -V
 mdadm - v2.6.4 - 19th October 2007
 
 raid01:/# mdadm -D /dev/md1
 /dev/md1:
 Version : 00.91.03
   Creation Time : Tue Nov 13 18:42:36 2007
  Raid Level : raid5
 
   Delta Devices : 1, (4-5)

 So the array is in the middle of a reshape.

ALSO:
I am now using this mdadm version:
raid01:/etc# mdadm -V
mdadm - v2.6.4 - 19th October 2007

In previous versions, this information appeared in the mdadm -D output DURING
reshaping. Why does it not appear now? Could you bring it back, together with an
indication of the percentage of the reshape completed?

What do you think about this?

-- 
 Andreas-Sokov



Re: Re[2]: problem with spare, active device, clean degraded, reshape RAID5, anybody can help?

2008-02-03 Thread Neil Brown
On Monday February 4, [EMAIL PROTECTED] wrote:
 
 raid01:/etc# cat /proc/mdstat
 Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
 [multipath] [faulty]
 md1 : active(auto-read-only) raid5 sdc[0] sdb[5](S) sdf[3] sde[2] sdd[1]
  ^^^
   1465159488 blocks super 0.91 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]
 
 unused devices: <none>

That explains it.  The array is still 'read-only' and won't write
anything until you allow it to.
The easiest way is
  mdadm -w /dev/md1

That should restart the reshape.
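
Once the array is writable again, the reshape progress shows up in /proc/mdstat; a 
simple way to keep an eye on it, as a sketch:

watch -n 10 cat /proc/mdstat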

NeilBrown


mdadm 2.6.4: How can I check the current status of reshaping?

2008-02-03 Thread Andreas-Sokov
Hi linux-raid.

on DEBIAN :

[EMAIL PROTECTED]:/# mdadm -D /dev/md1
/dev/md1:
Version : 00.91.03
  Creation Time : Tue Nov 13 18:42:36 2007
 Raid Level : raid5
 Array Size : 1465159488 (1397.29 GiB 1500.32 GB)
  Used Dev Size : 488386496 (465.76 GiB 500.11 GB)
   Raid Devices : 5
  Total Devices : 5
Preferred Minor : 1
Persistence : Superblock is persistent

Update Time : Mon Feb  4 06:51:47 2008
  State : clean, degraded
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1

 Layout : left-symmetric
 Chunk Size : 64K

  Delta Devices : 1, (4-5)
^
  UUID : 4fbdc8df:07b952cf:7cc6faa0:04676ba5
 Events : 0.683598

    Number   Major   Minor   RaidDevice  State
       0       8       32        0       active sync   /dev/sdc
       1       8       48        1       active sync   /dev/sdd
       2       8       64        2       active sync   /dev/sde
       3       8       80        3       active sync   /dev/sdf
       4       0        0        4       removed

       5       8       16        -       spare   /dev/sdb



[EMAIL PROTECTED]:/# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
[multipath] [faulty]
md1 : active raid5 sdc[0] sdb[5](S) sdf[3] sde[2] sdd[1]
  1465159488 blocks super 0.91 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]

unused devices: <none>

##
But how can I see the status of the reshaping?
Is it really reshaping? Or has it hung? Or is mdadm perhaps not doing anything
at all?
How long should I wait for the reshaping to finish?
##




-- 
Best regards,
Andreas-Sokov



raid1 or raid10 for /boot

2008-02-03 Thread Keld Jørn Simonsen
I understand that lilo and grub can only boot partitions that look like
a normal single-drive partition. I also understand that a plain
raid10 has a layout which is equivalent to raid1. Can such a raid10
partition be used with grub or lilo for booting?
And would there be any advantage in this, for example better disk
utilization in the raid10 driver compared with raid1?

best regards
keld