On Saturday 21 April 2001 23:35, you wrote:
> [root@charybdis /home]# mke2fs /dev/md0
> mke2fs 1.19, 13-Jul-2000 for EXT2 FS 0.5b, 95/08/09
> Filesystem label=
> OS type: Linux
> Block size=4096 (log=2)
> Fragment size=4096 (log=2)
> 1667904 inodes, 3333968 blocks
> 166698 blocks (5.00%) reserved for the super user
> First data block=0
> 102 block groups
> 32768 blocks per group, 32768 fragments per group
> 16352 inodes per group
> Superblock backups stored on blocks:
>         32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
> 2654208
>
> Writing inode tables: Segmentation fault (core dumped)
>
>
> mkreiserfs hangs as well.
>
> e2fsprogs-1.19-4mdk
> kernel 2.4.3-20mdk
> raidtools-0.90-9mdk
>
>
> Any idea what's going on?

Very likely you have chosen a RAID configuration that the filesystems cannot 
cope with.  On a Mylex DAC960 with three 34G IBM drives, I noticed that RAID5 
worked flawlessly, but RAID0 was very particular about the logical arrangement 
of the drives.  In particular, three logical drives the same size as the 
physical ones was a loser: the install completed, then the machine got no 
further than "LI" from LILO at boot.  The same physical setup with one 102G 
logical drive worked just dandily, and one server built that way is now on the 
web, running the 2.2.19 secure kernel with reiserfs notail partitions.

Of course, on the basis of the information you provided, no speculation is 
likely to be fruitful.  What is your RAID config?  Were you using it 
successfully anywhere else, under any other software?  What does lspcidrake 
say?  How about a dmesg?
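
If it helps, something like the following would gather all of that in one 
pass (assuming a stock Mandrake install; adjust to taste):

  cat /proc/mdstat      # current state of the md arrays
  cat /etc/raidtab      # the raidtools configuration
  dmesg | tail -40      # recent kernel messages, including any md errors
  lspcidrake            # the PCI devices as the installer saw them

Posting that output with your reply would narrow things down considerably.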

2.4 is still a relatively new kernel, and I am certain that earlier 2.4 
releases shipped by others had even more problems.  Since some people waited 
until now to test, we are not in the best position to judge what might be 
wrong.  You may have an untested RAID configuration, untested hardware, or 
both.
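
For comparison, if you are using raidtools, a plain three-disk RAID0 entry in 
/etc/raidtab looks roughly like this (the device names here are only 
placeholders for whatever partitions you actually used):

  raiddev /dev/md0
          raid-level              0
          nr-raid-disks           3
          persistent-superblock   1
          chunk-size              32
          device                  /dev/sda1
          raid-disk               0
          device                  /dev/sdb1
          raid-disk               1
          device                  /dev/sdc1
          raid-disk               2

The array is then initialized with mkraid /dev/md0, which destroys whatever 
was on those partitions, so compare your config before re-running anything.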

Civileme
