>
> On Sun, 14 Mar 1999, Andrew Doane wrote:
>
> > The first, I believe is a bug. I had a RAID5 partition set up and
> > then decided to re-allocate the disks for a raid0 configuration.
> > I successfully created the RAID0 device, created a file system, and
> > copied some files over. I then shut down the md device, and rebooted.
> > Upon reboot, the auto-start mechanism in the kernel still found raid5
> > superblocks and attempted to bring it up, but failed. [...]
>
> did you have a
>
> persistent-superblock 1
>
> line in your RAID0 raidtab configuration section?
No - but I do now, and that fixed it. Sorry about that; a pretty stupid
mistake on my part. At least my diagnosis was right :-)
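For anyone else who hits this: a minimal RAID0 raidtab section with the
line in place looks something like the sketch below (two disks and the
device names are illustrative, not my actual 14-disk setup):

    # illustrative two-disk RAID0 section; adjust devices and counts
    raiddev /dev/md0
            raid-level              0
            nr-raid-disks           2
            persistent-superblock   1
            chunk-size              64
            device                  /dev/sda1
            raid-disk               0
            device                  /dev/sdb1
            raid-disk               1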
>
> > [...] It appears
> > to me that under a raid0 configuration the raid superblocks are not
> > being saved during a raidstop. For kicks I created a raid1 device using
>
> the thing is, the 'default' value for persistent-superblock is 0 for RAID0
> and 1 for RAID1,4,5. This is to lessen the chance of messing up old RAID0
> arrays. I'll probably enforce the persistent-superblock line in the next
> release so that your problem will not happen again.
At this point I would agree the default should be on.
> > The second problem is with system hangs while using raid5. I did tweak
> > raidtools and the kernel source to allow up to 14 disks (going into the
>
> sorry, only 12 disks are possible currently :( the RAID superblock is 4K.
> It's easy to extend it but it has to be done carefully. (a superblock size
> field solves the problem) Is it important to have 14 disks?
>
> > reserved data space - hopefully I won't get in trouble later on), but
> > I exhibited the same problems with non-modified source/tools. The RAID5
>
> but you cannot create a 14-disk array with the unmodified raidtools, can
> you? it should not be possible.
With unmodified source it would not let me go past 12. I changed the
following to allow 14:
In raidtools:
md-int.h:#define MD_SB_DISKS_WORDS 448
In the kernel source:
md_p.h:#define MD_SB_DISKS_WORDS 448
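The arithmetic behind those numbers, as far as I can tell (the
32-words-per-descriptor figure is my reading of the superblock layout,
so treat it as an assumption):

    4096 bytes / 4 bytes per word = 1024 words in the superblock
    12 disks * 32 words/disk      =  384 words  (stock MD_SB_DISKS_WORDS)
    14 disks * 32 words/disk      =  448 words  (the value above)

The extra 64 words come out of the reserved space at the end of the
superblock, which is what I meant by going into the reserved data area.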
The change appears to have worked:
Personalities : [linear] [raid0] [raid1] [raid5]
read_ahead 1024 sectors
md0 : active raid0 sdn1[13] sdg1[12] sdm1[11] sdf1[10] sdl1[9] sde1[8]
      sdk1[7] sdd1[6] sdj1[5] sdc1[4] sdi1[3] sdb1[2] sdh1[1] sda1[0]
      124373760 blocks 64k chunks
unused devices: <none>
and df shows:
/dev/md0 123360228 52 122116440 0% /a1
This is over two ultra-wide differential buses (7 disks per bus). It's
fast as all hell; it created the above file system in under a minute.
-Andrew