Hello...
Thanks to Ingo for replying and pointing me in the right direction.
This was the setup:
raiddev /dev/md0
persistent-superblock 1
raid-level 0
nr-raid-disks 3
nr-spare-disks 0
chunk-size 32
device /dev/sdb1
...
I used mkraid <force> /dev/md0. The array was created, but the change
would not stick: on reboot, autodetect still wanted the old four-disk
setup.
-------
>Ingo:
>
>have you used mkraid -R /dev/md0 to remake the array? Do you have the
>persistent-superblock = 1 option too?
I tried mkraid <force> -R /dev/md0, with the same results...
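(A side note for anyone reproducing this: <force> above stands for
whatever force option mkraid's own warning message tells you to use;
as far as I know the long form of Ingo's -R is --really-force, so the
full command would be roughly

    mkraid --really-force /dev/md0

but treat the exact flag name as an assumption and check mkraid's
output.)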
I changed the raidtab from the above to:
raiddev /dev/md0
raid-level 0
nr-raid-disks 3
nr-spare-disks 0
chunk-size 32
persistent-superblock 1
device /dev/sdb1
...
(I moved the persistent-superblock line down five lines.)
And it worked!
Since I will have to rebuild it in a couple of days with a new drive
anyway, I then changed it to a raid 5, and that worked too... :)
Note: my spelling is horrible, so the original problem could have been
me misspelling 'persistent'.
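For reference, the raid5 raidtab looked roughly like this (I am
reconstructing it from memory, so treat the parity-algorithm line as
an assumption rather than a copy of my actual file):

raiddev /dev/md0
raid-level 5
nr-raid-disks 3
nr-spare-disks 0
chunk-size 32
parity-algorithm left-symmetric
persistent-superblock 1
device /dev/sdb1
...

Note that persistent-superblock again sits after the other options and
just before the device lines, since that ordering is what worked for
me.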
The raid 5 worked great. On reboot it was detected fine, and the first
thing it did was start resyncing.
cat /proc/mdstat produced something like
...resync=5% finish=104.0m
with the array still empty.
Then I started innd :)
Then I copied over the local.* spool backup I had made (a mail2news
archive of vger, redhat, and several other lists; about 600 MB and
~200,000 files).
Then I started rc5des. :)
I thought about removing a drive and putting it back in five minutes
later, but I didn't.
Several hours later, cat /proc/mdstat had grown to
...resync=7% finish=1330.0m
That's when the copy finished and I throttled innd. The resync then
finished in about an hour or so.
So, all in all, I stressed it about as much as I could, and it kept on
going.
What if this array had been a production database? Would I prefer fast
access but a slow rebuild, or slower access but a faster rebuild? An
option to run raid5syncd at different priorities might be cool.
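In the meantime, a rough workaround sketch: since raid5syncd is just a
kernel thread, you could try renicing it (whether the md code actually
honors the nice value for resync throughput is an assumption on my
part, not something I have tested):

    ps ax | grep raid5syncd    # find the resync thread's PID
    renice +19 -p <PID>        # deprioritize the resync; a negative
                               # value would favor it instead

<PID> here is just a placeholder for whatever ps reports.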
kudos and beers all around for the raid maintainers.
Christopher McCrory wrote:
>
> Hello...
>
> I had a raid0 set where one of the drives started going bad. The raid
> is for a usenet feed. I took the raid down, took out the bad drive and
> remade it with the 3 remaining drives. It works, but the autodetect still
> wants 4 drives.
>
> kernel 2.2.3
> latest raid0145 patches
> latest raidtools
>
> ...
> md: device name has changed from sde1 to sdb1 since last import!
> md0: former device sdd1 is unavailable, removing from array!
> ...
> md: md0, array needs 4 disks, has 3, aborting.
> ...
>
> How do I reset the raid superblock information? Or is there something
> else?
>
> TIA
>
> --
>
> Christopher McCrory
> Lead Bithead, Netus Inc.
> [EMAIL PROTECTED]
> [EMAIL PROTECTED]
>
> "Linux: Because rebooting is for adding new hardware"
> "Linux: Because Dilbert's mom uses it"
--
Christopher McCrory
Lead Bithead, Netus Inc.
[EMAIL PROTECTED]
[EMAIL PROTECTED]
"Linux: Because rebooting is for adding new hardware"
"Linux: Because Dilbert's mom uses it"