unsuscribe

1999-10-19 Thread ricard


-Mensaje original-
De: Michael Franzino [EMAIL PROTECTED]
Para: [EMAIL PROTECTED] [EMAIL PROTECTED]
Fecha: dilluns, 18 / octubre / 1999 21:25
Asunto: how to re-introduce a spare?


I'm new to Linux and this discussion group so
please be patient.

I'm able to make a Software RAID1 system with a
mirrored pair plus one spare using standard
Redhat 6.0 (kernel 2.2.5).  The hardware is just a
Pentium Motherboard with one IDE drive and a 
string of three SCSI Hard Drives. Each SCSI drive
has only one partition. It is the three 
SCSI drives that make up the mirrored pair 
plus one spare. I want to use cron to bring-down
one of the drives in the mirrored pair every hour.
The reconstruction will then bring in the spare as 
the new half of the mirrored pair. The idea (likely
bad) is that this makes sure I always have a good 
filesystem no more than one hour old on the drive that
has been brought down? To make this work I need help 
with following questions:
 
1) Once a spare has become part of the mirrored
   pair (after reconstruction), how do I re-introduce
   the third drive back as the new spare? Are there 
   console commands for this?
 
2) Other than powering off a drive in the mirrored 
   pair (I tried this), how can I bring it down so
   that reconstruction starts on the spare?


=

__
Do You Yahoo!?
Bid and sell for free at http://auctions.yahoo.com




Re: raidreconf utility

1999-10-19 Thread Glenn McGrath

Resizing raids would be really cool...
Andrew Clausen [EMAIL PROTECTED] is writting a program called
parted
that can do similar things to what partition magic does.
Parted is suppsed to be a frontend to the fat resizer (also done by him) and
extresize which has passed v1 now.
It would be great to see raid support for a program such as this.

Ive been thinking i would like to try and hack grub or lilo to recognise
raid partitions, im not sure how dificult it would be, but its a shame that
the kernel supports boot from striped raid but no bootloaders do.

my 2c

Subject: raidreconf utility



 Hi people !

 I started hacking together a utility to allow resizing and eventually
 reconfiguration of RAID sets.

 The utility is called ``raidreconf'', and I intend to make it read two
raidtab
 files (eg. raidtab and raidtab.new), and then (if possible) convert the
given
 MD device from the layout in the old raidtab file to the layout in the new
one.

 I just got it working, at least a little, so now I can resize a RAID0
array
 successfully.

 Now, the code is *not* production quality, in fact, it's not even alpha
 quality, or any quality for that matter.  I know there is a large number
of
 bugs in the code, and I'll throw in another day or so to get the code
prettier,
 less unstable, and more functional.

 I tested it on a three-disk RAID0, and successfully expanded it to a
four-disk
 RAID0.  But that's it.   The code can currently _only_ expand a RAID0 set.
 And mind you, that's not even stable in all cases, just the case where you
have
 equally sized disks in a simple ordering.

 The reason I'm writing about it now is, that I'd like to hear comments on
the
 idea.  Is it reasonable to use two raidtab files and a raid-device on the
 command-line, then try the conversion ?  And what features would people
like to
 see ?

 I guess resizing is the most urgently needed feature.  But conversion from
 raid0 to raid5 etc. might be something worth hacking on too ?

 Anyway, if you have too much free time on your hands, you can take a look
at
 the code at  http://ostenfeld.dk/~jakob/Software-RAID.HOWTO/

 Cheers,

 
 : [EMAIL PROTECTED]  : And I see the elder races, :
 :.: putrid forms of man:
 :   Jakob Østergaard  : See him rise and claim the earth,  :
 :OZ9ABN   : his downfall is at hand.   :
 :.:{Konkhra}...:






Re: raidreconf utility

1999-10-19 Thread James Manning

[ Tuesday, October 19, 1999 ] Glenn McGrath wrote:
 Ive been thinking i would like to try and hack grub or lilo to recognise
 raid partitions, im not sure how dificult it would be, but its a shame that
 the kernel supports boot from striped raid but no bootloaders do.

lilo.raid1 patch from the RH 6.1 .src.rpm (which I've mailed multiple
times to this list) allows booting from s/w raid1

James
-- 
Miscellaneous Engineer --- IBM Netfinity Performance Development



Re: raidreconf utility

1999-10-19 Thread Glenn McGrath

Yea, i know you can boot raid1 because each partition is still recognisable
as normal ext2 partition...

I was thinking of raid0 and i guess raid 4/5

Subject: Re: raidreconf utility


 [ Tuesday, October 19, 1999 ] Glenn McGrath wrote:
  Ive been thinking i would like to try and hack grub or lilo to recognise
  raid partitions, im not sure how dificult it would be, but its a shame
that
  the kernel supports boot from striped raid but no bootloaders do.

 lilo.raid1 patch from the RH 6.1 .src.rpm (which I've mailed multiple
 times to this list) allows booting from s/w raid1

 James
 --
 Miscellaneous Engineer --- IBM Netfinity Performance Development





RE: stripes of raid5s - crash

1999-10-19 Thread Christopher E. Brown

On Thu, 14 Oct 1999, Tom Livingston wrote:

 Florian Lohoff wrote:
  I did a bit further - Hung the machine - Couldnt log in (All Terms
  hang immediatly) - Tried to reboot and when it hung at
  "Unmounting file..."
  i got a term SysRq- Tand saw many processes stuck in the D state.
 
  Seems something produces a deadlock (ll_rw_blk ?)  and all processes
  trying to access disk get stuck.
 
 Can you duplicate this using only one of the raid5 sets? I tried to cause
 the same behvior with a single raid5 set and it worked fine... but I did not
 layer raid on raid, perhaps this is where the issue is?


When working with a 5 x 18G RAID5 (0 spare) using 2.2.12SMP +
raid 2.2.11 (compiled in, not modules) I would get a endless stream
about buffers when trying to mount the device, mke2fs and e2fsck
worked fine.  Seemed to happen when the array was in the beginning of
the reconstruct.


With 2.2.13pre15SMP + raid 2.2.11 I managed to get this a
couple times, but only if I mount it right after reconstruct starts on
a just mkraided array.  If I wait till the reconstruct hits 2 - 3 % it
mounts just fine.  I have not seen this on arrays smaller than 50G
(but this is not hard data, it could just be the faster reconstruct).

---
As folks might have suspected, not much survives except roaches, 
and they don't carry large enough packets fast enough...
--About the Internet and nuclear war.




Bad rawio/raid performance

1999-10-19 Thread David Teigland

Has anyone else tried raw-io with md devices?  It works for me but the
performance is quite bad.  

Using raid0 across 4 disks I get about 40 MB/sec from /dev/md0.  After a
"raw /dev/raw1 /dev/md0" I only get 2 to 3 MB/sec from /dev/raw1.  Going
back to md0 after this, throughput is wrecked and will only be 2 - 3
MB/sec from md0.

I recompiled with read-ahead set to 0, but it didn't help.  Any ideas?
I'm using the latest raid and rawio patches.

Thanks,
Dave Teigland



RE: Bad rawio/raid performance

1999-10-19 Thread Tom Livingston

David Teigland wrote:
 Has anyone else tried raw-io with md devices?  It works for me but the
 performance is quite bad.

This is a recently reported issue on the linux-kernel mailing list.  The
jist of it is that rawio is using a 512 byte blocksize, where raid assumes a
1024. This was only first reported a couple of days ago (10/16)

The person who reported it included a "hack" patch to get things back up to
speed.  It's not in patch form, you just need to add the extra lines to
raw.c.  However, this likely won't be how it ends up working, use at your
own risk, etc etc.

Without further ado, Davide Rossetti's post:


 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED]]On Behalf Of Davide Rossetti
 Sent: Saturday, October 16, 1999 4:56 AM
 To: Stephen C. Tweedie
 Cc: linux kernel mailing list; Sandro Tassa
 Subject: RAW and soft RAID interaction


 hi Stephen,

 By chance we found a (bad?) interaction between the raw device when
 attached to md RAID0 devices. I easily tracked it down to
 raw.c:raw_open() setting blocksize to 512. With this setting performange
 goes from 30 MB/s down to 200KB/s on a 4 disk / 4 SCSI controller setup.
 The easy fix:

   sector_size = 512;
   if (lookup_vfsmnt(bdev) != NULL) {
 if (blksize_size[MAJOR(bdev)]) {
   printk(KERN_INFO "raw(%d,%d): mounted blksize_size=%d\n",
  MAJOR(bdev), MINOR(bdev),
  blksize_size[MAJOR(bdev)][MINOR(bdev)]);
   sector_size = blksize_size[MAJOR(bdev)][MINOR(bdev)];
 }
   } else {
 if (hardsect_size[MAJOR(bdev)]) {
   printk(KERN_INFO "raw(%d,%d): unmounted hardsect_size=%d\n",
  MAJOR(bdev), MINOR(bdev),
  hardsect_size[MAJOR(bdev)][MINOR(bdev)]);
   sector_size = hardsect_size[MAJOR(bdev)][MINOR(bdev)];
} else if (MAJOR(bdev) == MD_MAJOR) {
  printk(KERN_INFO "raw(%d,%d): setting sector_size = 1024\n",
 MAJOR(bdev), MINOR(bdev));
  sector_size = 1024;
 }
   }

 while not being a production fix, at least allowed me to regain full
 performance.
 1024 seems to be the default MD blocksize.

 Otherwise, it seems MD devices should define set their hardsect_size
 array slot with 1024 or whatever.

 PS: how much is the sector_size=512 default setting in RAW important ? I
 mean, may someone safely change it to gain performance (seems to be
 verified on our FibreChannel controller driver) ?
 are 1024 or 2048 meaningful values ?

 thanks for your great work and for RAW.

 regards.

 --
 +--+
 |Rossetti Davide   INFN - Sezione Roma I - gruppo V, prog. APEmille|
 |  web: http://apemaia.roma1.infn.it/~rossetti |
 |" E-mail : [EMAIL PROTECTED]  |
 ||o o| phone  : (+39)-06-49914412  |
 |--o00O-O00o-- fax: (+39)-06-49914423   (+39)-06-4957697   |
 |  address: Dipartimento di Fisica (V.E.)  |
 |   Universita' di Roma "La Sapienza"  |
 |   P.le Aldo Moro,5 I - 00185 Roma - Italy|
 | pgp pub. key: finger [EMAIL PROTECTED]  |
 |  |
 |"Most people think about twice a year.  I got famous by thinking  |
 | once a week." - George B. Shaw   |
 +--+

 -
 To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
 the body of a message to [EMAIL PROTECTED]
 Please read the FAQ at http://www.tux.org/lkml/




RE: raidreconf utility

1999-10-19 Thread Tom Livingston

Jakob Østergaard wrote:
 I started hacking together a utility to allow resizing and eventually
 reconfiguration of RAID sets.

Kick ass.  I had been thinking of doing the same thing, as I could have used
such a thing in the past.  I gave it a shot on a mini-test raid0 setup i
made for it, and it seemed to work just fine... no compile issues, added the
partition no problem.

 The reason I'm writing about it now is, that I'd like to hear
 comments on the idea.  Is it reasonable to use two raidtab files
 and a raid-device on the command-line, then try the conversion ?
 And what features would people like to see ?

The two raidtab way is what I would do as well. Handling command line
arguments to describe changes to the array might be simpler for some
operations (like /dev/md0 + /dev/sdc1) it would get horrible complex for
more detailed changes, as well as having to handle grammar for failed disks,
etc.

A fair thing to note is that this is a "riskier" endeavor than anything else
in the raidtools package.  That being so, it makes sense to have a good
amount of checks along the way.  I see you do check the raidtab against the
superblocks, which is most of it.  I can't remember, though, if you do ext2
checking  mounted file system checking like mkraid does

Also, some thought needs to go into how this could handle a power fail in
the middle.  Certainly you don't want the old raid set to auto start when
the machine is rebooted, doing so could cause all sorts of problems.
Instead of requiring users to disable the raid by removing the fd partition
labels, maybe the reconf utility should erase the superblocks on the md
device it's working on, placing instead a marker that shows it's in the
middle of being reworked.  If status information about the reconstruction
was kept in the superblocks (or just one) the reconf utility could use this
data to pick up where it left off...

Also, when allowing a user to reduce the raidset size ( can't remember if
you already allow this... I read the code sunday and already I've forgotten
everything ;), you probably want to do a sanity check to see if the ext2
partition on that device has already been sized down.

Might also benefit in considering what happens if there's a hardware error
on one of the old or new disks during the process...  perhaps an area of bad
sectors on one of the new disks?  I think all of the information is still
there at this point to do an about face, and start un-reconf'ing the
drive... walking backwards to put it back in it's original state.

I believe in put-up or shut-up, so I'm happy to lend my time to the process.
I'm in between consulting gigs right now and could probably add something.

Great work, thanks much.

Tom




Re: raidreconf utility

1999-10-19 Thread Jakob Østergaard

On Tue, Oct 19, 1999 at 09:47:13PM -0700, Tom Livingston wrote:
[snip]
 
 A fair thing to note is that this is a "riskier" endeavor than anything else
 in the raidtools package.  That being so, it makes sense to have a good
 amount of checks along the way.  I see you do check the raidtab against the
 superblocks, which is most of it.  I can't remember, though, if you do ext2
 checking  mounted file system checking like mkraid does

A little checking.  But I'll start by putting in algorithms that don't destroy
your data if you use differently sized disks   :)

 Also, some thought needs to go into how this could handle a power fail in
 the middle.  Certainly you don't want the old raid set to auto start when

Yes, checkpointing would be good.  It should be fairly simple, if one could
accept to still loose one (or N = # of disks) chunks.

 the machine is rebooted, doing so could cause all sorts of problems.
 Instead of requiring users to disable the raid by removing the fd partition
 labels, maybe the reconf utility should erase the superblocks on the md
 device it's working on, placing instead a marker that shows it's in the
 middle of being reworked.  If status information about the reconstruction

Good point:)

 was kept in the superblocks (or just one) the reconf utility could use this
 data to pick up where it left off...
 
 Also, when allowing a user to reduce the raidset size ( can't remember if
 you already allow this... I read the code sunday and already I've forgotten
 everything ;), you probably want to do a sanity check to see if the ext2
 partition on that device has already been sized down.

I'm trying to use the existing code that mkraid uses (from raid_io.c) to do
the superblock updating, so some of the clever checks are only maintained
in one place.  But the raidreconf utility will need some checks added to
it, in time.

I'll do basic features first, checks later.

 Might also benefit in considering what happens if there's a hardware error
 on one of the old or new disks during the process...  perhaps an area of bad
 sectors on one of the new disks?  I think all of the information is still
 there at this point to do an about face, and start un-reconf'ing the
 drive... walking backwards to put it back in it's original state.

This quickly gets hairy.   I think, once it handles a number of basic features
(like shrink+grow of raid[01]) it'll be easier to see what can be done.

One could do a write test on the new disks and a read test on the old ones
before actually moving data.  I guess that would be a pretty good start  (?)

 I believe in put-up or shut-up, so I'm happy to lend my time to the process.
 I'm in between consulting gigs right now and could probably add something.

I'll write back to the list as soon as I've re-done the basic algorithm.  The
code available for download now is _really_ basic, and wrong.  I've already
changed quite a lot of it. Wait a day or so for an update   :)

Cheers,


: [EMAIL PROTECTED]  : And I see the elder races, :
:.: putrid forms of man:
:   Jakob Østergaard  : See him rise and claim the earth,  :
:OZ9ABN   : his downfall is at hand.   :
:.:{Konkhra}...: