Hello again Martin
At 15:04 22/04/01, you wrote:
>1) get a kernel including raid onto the machine; the dmesg still shows
>it to be trying to load a raid module.
>
> > running: <hdg1><hde1><hdc1>
> > now!
> > hdg1's event counter: 00000082
> > hde1's event counter: 00000080
> > hdc1's event counter: 00000080
I followed your instructions &, to be sure, loaded all raid options into the
kernel. Everything seemed to be fine & the raid array was accessible. I
rebooted twice more, then mounted the degraded array (read-only) & ran
du to check the files. There were a number of I/O errors. I then rebooted
again, & now the two disks are out of sync & once again I have no raid array.
I cannot help feeling I have made some serious errors. Am I still able to
recover the situation?
Dmesg output:
linear personality registered
raid0 personality registered
raid1 personality registered
raid5 personality registered
raid5: measuring checksumming speed
8regs : 1208.800 MB/sec
32regs : 731.200 MB/sec
pII_mmx : 1868.800 MB/sec
p5_mmx : 2386.400 MB/sec
raid5: using function: p5_mmx (2386.400 MB/sec)
md driver 0.90.0 MAX_MD_DEVS=256, MD_SB_DISKS=27
md.c: sizeof(mdp_super_t) = 4096
autodetecting RAID arrays
(read) hdc1's sb offset: 20089152 [events: 00000085]
(read) hde1's sb offset: 20089152 [events: 00000083]
autorun ...
considering hde1 ...
adding hde1 ...
adding hdc1 ...
created md0
bind<hdc1,1>
bind<hde1,2>
running: <hde1><hdc1>
now!
hde1's event counter: 00000083
hdc1's event counter: 00000085
md: superblock update time inconsistency -- using the most recent one
freshest: hdc1
md: kicking non-fresh hde1 from array!
unbind<hde1,1>
export_rdev(hde1)
md0: removing former faulty hde1!
md0: max total readahead window set to 496k
md0: 2 data-disks, max readahead per data-disk: 248k
raid5: device hdc1 operational as raid disk 0
raid5: not enough operational devices for md0 (2/3 failed)
RAID5 conf printout:
--- rd:3 wd:1 fd:2
disk 0, s:0, o:1, n:0 rd:0 us:1 dev:hdc1
disk 1, s:0, o:0, n:1 rd:1 us:1 dev:[dev 00:00]
disk 2, s:0, o:0, n:2 rd:2 us:1 dev:[dev 00:00]
raid5: failed to run raid set md0
pers->run() failed ...
do_md_run() returned -22
md0 stopped.
unbind<hdc1,0>
export_rdev(hdc1)
... autorun DONE.
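For anyone following this thread: the mismatch that gets hde1 kicked is visible directly in the log above ("kicking non-fresh hde1 from array!"). A small sketch, assuming you have saved the boot messages to a file (the path and the two sample lines below are taken from this log; adjust to your own capture), that pulls out the per-member event counters md compares and flags the stale disk:

```shell
#!/bin/sh
# Illustrative only: /tmp/md_dmesg.txt stands in for your saved dmesg output.
# The two lines are copied from the log in this mail.
cat > /tmp/md_dmesg.txt <<'EOF'
hde1's event counter: 00000083
hdc1's event counter: 00000085
EOF

# Print "counter member" pairs, sort numerically, and take the lowest:
# the member with the smaller event counter is the one md considers
# non-fresh and kicks from the array at autorun time.
stale=$(awk '/event counter/ { print $NF, $1 }' /tmp/md_dmesg.txt \
        | sort -n | head -n 1)
echo "stale member: $stale"
```

The counter is bumped on superblock updates, so a disk that missed writes (here, hde1 at 00000083 vs hdc1 at 00000085) falls behind and is excluded rather than risk serving stale data.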
Thank you for your patience,
Mike Parsons
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to [EMAIL PROTECTED]