Hi all,

First of all... sorry for my English ;)
I had a strange problem at boot time. It disappeared after a reboot, but I'm
still interested in what happened, especially with this superblock.
[RAID1 system on 2.2.14, hda and hdc; the array was set up by the RH installer]
md: superblock update time inconsistency -- using the most recent one
freshest: hda1
request_module[md-personality-3]: Root fs not mounted
do_md_run() returned -22
unbind <hdc1,1>
export_rdev(hdc1)
unbind <hda1,0>
md stop
...autorun done
Bad md_map in ll_rw_block
EXT2-fs: unable to read superblock
Bad md_map in ll_rw_block
isofs_read_super: bread failed, dev=09:02, iso_blknr=16, block=32
Kernel panic: VFS: Unable to mount root fs on 09:02
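For what it's worth, the -22 from do_md_run() is -EINVAL: the kernel refused to autorun the array because the two superblocks disagreed, so it unbound both mirrors and the root fs could not be mounted. After a boot like this it may be worth checking the array state by hand. A rough sketch using the raidtools-era commands (the device names /dev/md0, /dev/hdc1 and the presence of an /etc/raidtab are assumptions about your setup):

```shell
# Inspect which mirrors the kernel actually assembled
cat /proc/mdstat

# If md0 did not come up, try starting it from /etc/raidtab
raidstart /dev/md0

# If one mirror (e.g. hdc1) was kicked out of the array,
# re-add it and let the kernel resync it from the freshest
# disk (hda1, according to the log above)
raidhotadd /dev/md0 /dev/hdc1
```

Watching /proc/mdstat afterwards shows the resync progress.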

Another problem... perhaps not really related, but maybe someone can help.

RAID1 system on a 2.2.14 kernel. The RAID array was created during
installation by the RH distribution. There are two IDE disks in the array
(hda and hdc).

xxx kernel: hda: dma_intr: status=0x51 { DriveReady SeekComplete Error }
xxx kernel: hda: dma_intr: error=0x84 { DriveStatusError BadCRC }
...this error message repeats a few times, then:
xxx kernel: hda: DMA disabled
xxx kernel: ide0: reset: success

OK... the kernel detected problems with DMA mode (which could probably cause
data corruption, etc.) and so disabled it. The disks are both Maxtor
92041U4s and the chipset is an Intel 440BX, so there should be no problem
with DMA mode as such. The IDE cables are also not very long (what length do
you recommend?). The only way hda differs from hdc is that it sits in a
removable frame. Could that cause the problems? Or some other hdparm
setting?
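If the UDMA transfer itself is marginal (BadCRC usually points at the signal path, and a removable frame adds one more connector to it), one thing to try is stepping the drive down to a slower transfer mode with hdparm instead of losing DMA entirely. A rough sketch; the -X values follow hdparm's standard encoding (64+n for UDMA mode n, 32+n for multiword DMA mode n), but double-check against your hdparm man page before using -X, since a wrong mode can hang the bus:

```shell
# Show the transfer modes the drive claims to support
hdparm -i /dev/hda

# Step down from UDMA2 (the 440BX maximum) to UDMA1, or to
# multiword DMA mode 2, which is much less cable-sensitive
hdparm -d1 -X65 /dev/hda    # UDMA mode 1  (64 + 1)
hdparm -d1 -X34 /dev/hda    # multiword DMA mode 2  (32 + 2)

# Last resort: turn DMA off yourself rather than letting
# the kernel do it after errors
hdparm -d0 /dev/hda
```

(On cable length: the ATA spec limits the cable to about 18 inches / 45 cm, so any normal-length cable should be fine; the extra connector in the removable frame is the more likely suspect.)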

Thanks in advance

