Sorry if this is out of place on the linux-raid list (it appears to be
mostly 'software raid' discussions); I am just trying to find anyone who
has run into a similar problem when formatting their drives as described
below, and I suspect someone on this list may have seen something like this.
-thanks
After several days (through several www sites, through Red Hat's site,
through DPT's lackluster www site), DPT tech support finally said I
should contact the driver maintainers and mailing lists to see if anyone
had a suggestion prior to RMA'ing the board.
The specs:
DPT PM2144W, firmware 7M1 (M1 is recommended by support for 100 MHz bus
motherboards), with a 16 MB SIMM
PII 400 MHz, Tyan Tiger II 100 (this is a dual-CPU motherboard, 100 MHz
bus, with just one 400 MHz chip in there)
256 MB RAM, generic Trident video, and a 3Com 3905b 100Base-T NIC
Hard drives
-----------
#1 - Quantum XP31070W, revision L912 [boot / OS drive]
#2 through #5 are the same:
Seagate ST39173W, revision 5764, 9 GB [data drives, for RAID]
I set the four Seagates up as RAID 0 with a 128 KB stripe (performance
without redundancy) and configured all of this via the DOS boot disk and
DPTMGR. I configured the RAID 0 array for Linux as well.
When I boot into Linux (Red Hat 5.1; I also tried my own custom kernel
to ensure everything was fine), I can fdisk /dev/sdb as one big 36 GB
partition, but when I run
/sbin/mke2fs -c /dev/sdb1
after about 3 hours, it comes back with:
------
Checking for bad blocks (read-only test): done
Block 49 in primary superblock/group descriptor area bad.
Blocks 1 through 138 must be good in order to build a filesystem.
aborting....
------
If I use
/sbin/mke2fs /dev/sdb1 (without bad block checking), it goes fine, except
I know that if I start using the drive and 'hit' that block, I'll surely
crash hard. So I know something is wrong.
DPT tech support has had me do the following, without success:
--------------------------------------------------------------
#1 - Low-level format all drives (no change)
#2 - Swap PCI slots with the DPT card (no change)
#3 - Use the "dptmgr /fw0" hidden command from DOS to use 'firmware RAID'
     (no change)
#4 - Remove 2 of the 4 Seagates to try to locate which drive
     was causing it; I could not pinpoint it, since they all do it
     no matter which 2 I use for RAID 0.
#5 - Disable RAID 0, boot up, and manually partition each
     drive, which works just fine with
        mke2fs -c /dev/sdb1
        mke2fs -c /dev/sdc1
        mke2fs -c /dev/sdd1
        mke2fs -c /dev/sde1
So DPT and I agree: it's not the hard drives, but the controller,
or something interacting with the controller (driver/OS?).
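One way to see why removing drives (test #4 above) never isolates the failure: under RAID 0, a low-numbered filesystem block always falls in the first stripe chunk, on the same array member, no matter how many members there are. A rough sketch, assuming 1 KB ext2 blocks and the 128 KB stripe configured above (the helper name and layout arithmetic are my own illustration, not DPT's actual firmware logic):

```python
# Hedged sketch: which RAID 0 member drive holds a given ext2 block,
# assuming 1 KB filesystem blocks, a 128 KB stripe chunk, and simple
# round-robin chunk rotation. Illustrative only.

def raid0_member(block, block_size=1024, stripe=128 * 1024, ndrives=4):
    chunk = (block * block_size) // stripe   # which stripe chunk the block is in
    return chunk % ndrives                   # chunks rotate round-robin over drives

# Block 49 (the one mke2fs reports bad) sits at byte 49 * 1024 = 50176,
# well inside the first 128 KB chunk, so it maps to the first member
# whether the array has 4 drives or 2:
print(raid0_member(49, ndrives=4))   # 0
print(raid0_member(49, ndrives=2))   # 0
```

In other words, a failure pinned to block 49 would travel with whatever drive is first in the array, which is consistent with an addressing problem in the controller or driver rather than a bad spot on one particular disk.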
When I use just 2 of the 4 Seagates, the error message changes to:
------
Checking for bad blocks (read-only test): done
Block 49 in primary superblock/group descriptor area bad.
Blocks 1 through 70 must be good in order to build a filesystem.
aborting....
------
Notice the only difference is "blocks 1 through 70" instead of "138".
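For what it's worth, that "must be good" region is the superblock plus the ext2 group descriptor table, which grows with filesystem size, so halving the array should roughly halve the limit. A back-of-the-envelope sketch, assuming mke2fs defaults of the era (1024-byte blocks, 8192 blocks per group, 32-byte group descriptors); the function is my own estimate, not code from e2fsprogs:

```python
# Hedged estimate of the "blocks 1 through N must be good" region:
# one superblock plus the group descriptor table, under assumed ext2
# defaults (1024-byte blocks, 8192 blocks per group, 32-byte descriptors).

def required_good_blocks(fs_bytes, block_size=1024):
    total_blocks = fs_bytes // block_size
    blocks_per_group = 8 * block_size               # 8192 for 1 KB blocks
    groups = -(-total_blocks // blocks_per_group)   # ceiling division
    descs_per_block = block_size // 32
    gdt_blocks = -(-groups // descs_per_block)
    return 1 + gdt_blocks                           # superblock + descriptor table

# Four 9 GB members vs two: the estimate lands near the reported
# 138 and 70, and the ratio is about 2, matching the observation.
print(required_good_blocks(4 * 9 * 10**9))
print(required_good_blocks(2 * 9 * 10**9))
```

So the changing number is expected and not itself a clue; the real anomaly is block 49 being reported bad in both configurations.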
I appreciate any info someone else might have come across on this, thanks!
Adam Wills Global 2000 Communications
Director of Networking Systems 1840 Western Ave.
[EMAIL PROTECTED] Albany, NY, 12203
http://www.global2000.net (518) 452-1465