Re: Software RAID with kernel 2.2.14

2000-03-24 Thread [EMAIL PROTECTED]

 someone out there correct me if i am wrong, but from looking at my
boxes:
[big snip]
 allan

Ok, now it's clear.
I need to patch my kernel. =)
I found a patch here: http://people.redhat.com/mingo/raid-patch

But, another question: is there any problem running a raid across two
partitions on the same device (say /dev/hda1 and /dev/hda2)?
Only for testing purposes.
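
Something like this, I mean - a sketch only, with raid0 and the chunk-size
just being what I would try:

  raiddev /dev/md0
      raid-level              0
      nr-raid-disks           2
      persistent-superblock   1
      chunk-size              32
      device                  /dev/hda1
      raid-disk               0
      device                  /dev/hda2
      raid-disk               1

Both partitions sit on one spindle, so it would be slow, but for a test that
should not matter.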

Paolo





Can't get RAID to work

2000-03-24 Thread mark

Hi there,

Can anyone help?

I have 2 new 9 GB SCSI drives that I have installed in my Linux machine.
I did an fdisk on both (one full-size partition each),
then did a mkfs on both.

So now I can mount the two drives with no problem:

mount /dev/sdb1 /mail2
mount /dev/sdc1 /mail3

I wanted to create a raid0 setup for those 2 disks.
I followed the Software-RAID HOWTO and made the /etc/raidtab to match.
I then ran mkraid /dev/md0, which worked.
I then formatted the array and mounted it.
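
The raidtab follows the HOWTO's raid0 example, roughly like this (retyped as
a sketch - the exact chunk-size I used may differ):

  raiddev /dev/md0
      raid-level              0
      nr-raid-disks           2
      persistent-superblock   1
      chunk-size              32
      device                  /dev/sdb1
      raid-disk               0
      device                  /dev/sdc1
      raid-disk               1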

When I rebooted I got errors, so I moved /etc/raidtab to /etc/raidtab.old,
and then I could get back in.

After moving /etc/raidtab back into place, I tried the mkraid again and it
now says:

handling MD device /dev/md0
analyzing super-block
disk 0: /dev/sdb1, 8891376kB, raid superblock at 8891264kB
/dev/sdb1 appears to be already part of a raid array -- use -f to
force the destruction of the old superblock
mkraid: aborted, see the syslog and /proc/mdstat for potential clues.

/proc/mdstat now looks like this:
Personalities : [raid0]
read_ahead 1024 sectors
unused devices: <none>

I also tried the -f flag, but it just gives me a warning.

What must I do, and how do I set it up so that it starts automatically?

Thanks for the help
Mark






ICP VORTEX

2000-03-24 Thread Santiago Campos Barrera


Any experiences with ICP VORTEX?



Reboot and RAID0

2000-03-24 Thread mark

Hi all,

I got the RAID working now.
But how do I get it to start after rebooting?

I do the following:

raidstart /dev/md0
mount /dev/md0 /storage

Do I need to change the init scripts for this?
Is it safe to reboot while this is running?

I have read the Software-RAID HOWTO. It mentions changing the partition types
to fd, but I'm not totally sure how to do this.
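
As far as I can tell it would go roughly like this in fdisk, once per member
disk (a sketch - I haven't tried it yet, and the device and partition number
are assumed):

  fdisk /dev/sdb
    t        change a partition's system id
    1        the partition number
    fd       Linux raid autodetect
    w        write the table and quit

and then, I guess, with the members typed fd and a persistent superblock the
kernel would assemble /dev/md0 at boot by itself, leaving just an ordinary
fstab line like:

  /dev/md0   /storage   ext2   defaults   1 2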

Thanks
Mark



raid5 and the 2.4 kernel

2000-03-24 Thread Brynn Rogers

I tried to upgrade to the 2.4[pre] kernel, but my system hangs when trying to
mount the raid5 array.

After perusing this list a bit I discovered that raid5 doesn't yet exist for
2.4.  Grr.

What can I do to boot 2.4, with or without raid5?
I tried commenting out the /dev/md line in my /etc/fstab, but the booting
kernel still tries to load a raid5 module which doesn't exist.
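
One thing I have not tried yet: if the array is being started by
partition-type autodetection rather than by fstab, the md driver is supposed
to take a boot parameter to switch that off. A sketch for LILO - and I am
assuming here that the 2.4-pre md driver already honours raid=noautodetect:

  # fragment of /etc/lilo.conf
  image=/boot/vmlinuz-2.4-pre
      label=linux24
      append="raid=noautodetect"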

Obviously I could pull all my disks out (SCA, so it is easy). I had a funny
thought:
what would the 5 disks (1 GB each) that make up my raid5 array do if I put
them back in the wrong order?

I am beginning to think that I should shitcan the 1 GB drives and just get a
9 GB or 18 GB drive instead and not run RAID.

Brynn
-- 
http://triplets.tonkaland.com/  to see my triplets !



Kernel panic: B_FREE inserted into queues

2000-03-24 Thread Mike Bilow

I am extremely inexperienced with software RAID, so please don't flame me
if this message comes across as evidence of outright idiocy.  I prefer to
think the line between idiocy and adventurousness is just very thin.

I built a custom kernel using the 2.2.14 source and applying the most
recent patch, raid0145-19990824-2.2.11.  Although that patch was targeted
officially at 2.2.11, it worked fairly well, rejecting only some
architecture-specific files related to SPARC and PPC, and the raid0.c
module.  Since I am not even compiling the raid0.c module and am running
on Intel x86, this seemed like a reasonable result.  The kernel builds and
runs fairly well in most respects that I can see.  Besides, it was the
most recent source patch at the places mentioned in the HOWTO; is there
anything better available?

I then used this custom kernel as a replacement on the installation disk
for the Debian "Potato" distribution.  I put the userland raidtools onto a
floppy disk, which I can mount manually after booting.  I also wrote a
short raidtab onto the floppy which almost exactly follows the example in
the HOWTO for a RAID-1 set, using type 0xFD partitions (/dev/hda1 and
/dev/hdc1) which had been created manually with fdisk.  Both /dev/hda and
/dev/hdc are identical 30 GB Western Digital drives, partitioned
identically.  I used mkraid successfully and I can use raidstart and
raidstop with no problems.  The contents of /proc/mdstat look good.
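
For concreteness, the raidtab is essentially the HOWTO's RAID-1 example,
which I reproduce here as a sketch (the chunk-size is the HOWTO's, not
something I tuned):

  raiddev /dev/md0
      raid-level              1
      nr-raid-disks           2
      nr-spare-disks          0
      persistent-superblock   1
      chunk-size              4
      device                  /dev/hda1
      raid-disk               0
      device                  /dev/hdc1
      raid-disk               1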

If I allow the initial mirroring to complete, the /dev/md0 device works
normally as far as I can tell.  It can be mounted, files read from and
written to it, and so on.  During the remirroring process, everything short
of mounting also works: I can run mke2fs and e2fsck successfully, and e2fsck
reports the filesystem as clean when that would be expected.  However, any
attempt to mount the partition during the remirroring process fails out with
the unhelpful:

Kernel PANIC: B_FREE inserted into queues.

This is particularly irritating, since this causes a complete lockup and
the remirror starts again from scratch on the reboot.

The ext2 fs code seems to be solid.  The md0 device and its ext2 fs work
fine as long as the remirror is not in progress, or at least I think so. 
I can mount an ext2 fs floppy disk (which I created as a test) during the
remirror of md0 with no trouble.

I really have too little experience with software RAID to know what I am
looking at here.  The kernel panic is not helpful, and it generates
nothing by way of stack trace or other information besides the one line
message.  I am at a complete loss even to guess which subsystem it is
coming from, since the queue manager is touched by everything.  I have
even been wondering if the problem could be hardware, triggered by
something like writing to both IDE channels in quick succession or DMA
mismanagement.  The system in question is a Pentium 166 MHz with 64 MB RAM
using an Asus P55TP4N motherboard (upgraded to the latest BIOS), which has
the Intel Triton chipset (82371 and 82437) handling the IDE channels.

Although the system runs when the mirror is synchronized, this instability
during remirror greatly troubles me, and it essentially defeats the whole
purpose of using RAID.  I also have doubts about whether the system is
truly stable in this configuration even with the mirror synchronized, or
whether it is working by coincidence and some set of circumstances or
sequence of events will cause a kernel panic during normal operation.

Can you give me any insight on this?   I am very new to software RAID.

-- Mike





hi

2000-03-24 Thread Chris Bondy



I've run the old md-utils since they came out, and they worked fine for me,
but since I have this new Promise ATA66 controller I needed to move to a
newer kernel.  I then noticed the old md-utils didn't work, and spent a lot
of time looking around for new tools (most I found are dated 97/98; the
newest ones I found were titled -dangerous :)

Anyway, I've finally gotten it to work, sort of.  I can mount /dev/md0 now.
The problem I have is: if I mount the Maxtor 40 GB ATA66 drives by themselves
(I have two on the Promise ATA66 add-on card) I get almost 40 GB per drive,
all fine.  But when I raid0 them, even one at a time, I get:
more /proc/mdstat
Personalities : [linear] [raid0] 
read_ahead 1024 sectors
md0 : active raid0 hde1[0]
  6990448 blocks 4k chunks
  
unused devices: <none>

If I add the second drive, I get 13 GB in total, instead of 70-80.
Would anyone be able to tell me if there is a way to fix this?



Linux shell2 2.3.48 #4 SMP Fri Mar 24 00:30:02 EST 2000 i686 unknown





Re: RAID5 array not coming up after repaired disk

2000-03-24 Thread Danilo Godec

On Fri, 24 Mar 2000, Douglas Egan wrote:

 When this happened to me I had to "raidhotadd" to get it back in the
 list.  What does your /proc/mdstat indicate?
 
 Try:
 raidhotadd /dev/md0 /dev/sde7
 

I *think* you should 'raidhotremove' the failed disk partition first, then
you can 'raidhotadd' it back.
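
So something like this - untested from here, so treat it as a sketch:

  raidhotremove /dev/md0 /dev/sde7    (take the failed member out of the set)
  raidhotadd /dev/md0 /dev/sde7       (add it back; the resync should start)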

   D.




superblock or the partition table is corrupt?

2000-03-24 Thread Jason Lin

I am trying to set up RAID 1 on RedHat 6.1.

"fsck /dev/md0" gives the following:

Parallelizing fsck version 1.15 (18-Jul-1999)
e2fsck 1.15, 18-Jul-1999 for EXT2 FS 0.5b, 95/08/09
The filesystem size (according to the superblock) is
757055 blocks
The physical size of the device is 757024 blocks
Either the superblock or the partition table is likely
to be corrupt!
Abort<y>?
-
But the constituent disks have 757055 blocks each.
Should I do "mkraid --really-force /dev/md0"  ?






Re: RAID5 array not coming up after repaired disk

2000-03-24 Thread James Manning

[Marc Haber]
 |autorun ...
 |considering sde7 ...
 |adding sde7 ...
 |adding sdd7 ...
 |adding sdc7 ...
 |adding sdb7 ...
 |adding sda7 ...
 |created md0

Ok, maybe I'm on crack and need to lay off the pipe a little while, but
it appears that sdf7 doesn't have a partition type of "fd" and as such
isn't getting considered for inclusion in md0.  

sde7 failure + lack of available sdf7 == 2 "failed" disks == dead raid5
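
An easy way to check my guess (a sketch - adjust the device name as needed):

  fdisk -l /dev/sdf

The sdf7 line should show system id fd (Linux raid autodetect); if it shows
something else, e.g. 83, autorun will skip it, which would match the log
above.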

James, waiting for the inevitable smack of being wrong



Re: superblock or the partition table is corrupt?

2000-03-24 Thread David Cooley

If /dev/md0 is running, then the RAID is running...
You need to run mke2fs on it first to format the array before fsck will work.



At 06:45 PM 3/24/00, Jason Lin wrote:
I am trying to set up RAID 1 on RedHat 6.1.

"fsck /dev/md0" gives the following:

Parallelizing fsck version 1.15 (18-Jul-1999)
e2fsck 1.15, 18-Jul-1999 for EXT2 FS 0.5b, 95/08/09
The filesystem size (according to the superblock) is
757055 blocks
The physical size of the device is 757024 blocks
Either the superblock or the partition table is likely
to be corrupt!
Abort<y>?
-
But the constituent disks have 757055 blocks each.
Should I do "mkraid --really-force /dev/md0"  ?




===
David Cooley N5XMT Internet: [EMAIL PROTECTED]
Packet: N5XMT@KQ4LO.#INT.NC.USA.NA T.A.P.R. Member #7068
We are Borg... Prepare to be assimilated!
===




Re: a question or two

2000-03-24 Thread Jakob Østergaard

On Fri, 24 Mar 2000, Herbert Goodfellow wrote:

 I got your message about the "really force" flag when trying to use the -f
 option with the /sbin/mkraid Linux utility.  I am running Slackware and have
 gotten confused by the man pages and the errors I am getting.  What is the
 "really force" command line option?

Below you refer to the old-style RAID in the stock kernels, which is
considered unstable, or at least outdated, by many.  Quite a few (most?)
people on this list use the new-style RAID, referred to as 0.90 RAID, and
that's where the ``really force'' stuff comes into the picture.

See  http://ostenfeld.dk/~jakob/Software-RAID.HOWTO/  for an explanation.

 
 Also, simple steps to creating a raid1 cfg on an IDE bus:
 
 1) identify the devices [hdc, hdd]

Yep

 2) create file systems on them?

No, why?  They're about to be combined into a new block device; they're
not in their final form yet.

 3) run mdadd -ar after you have modified the /etc/mdtab?
  this gives me an "I/O error on hdd" everytime I do this. . .

Nope.  mdadd is old-style RAID.

 4) configure the /etc/raidtab

Yes!  With 0.90 RAID there is one configuration file, /etc/raidtab, and even
this is unneeded once the array is up and running, because the new-style RAID
keeps the configuration in superblocks on the devices participating in the
array.  (But you might want to keep the config file anyway - just in case  :)

 5) run "/sbin/mkraid -f -c /etc/raidtab /dev/md0" and get the "if you force
 it you might lose data" message.  I know there's no data on the drives, but
 I am missing the "really force it" option.

The option will be printed for you once you install both a kernel with new-style
RAID, and the proper raidtools for that kernel.  Again, see the HOWTO.
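
With a matching kernel and raidtools the invocation then becomes the
following - and note that it really does destroy the old superblocks:

  mkraid --really-force /dev/md0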

 I know I am confused.  Could you provide me with any feedback and/or a
 pointer in the right direction?

Hope this helps you more than it confuses you.

-- 

: [EMAIL PROTECTED]  : And I see the elder races, :
:.: putrid forms of man:
:   Jakob Østergaard  : See him rise and claim the earth,  :
:OZ9ABN   : his downfall is at hand.   :
:.:{Konkhra}...:



Re: superblock or the partition table is corrupt?

2000-03-24 Thread m . allan noah

looks like you ran mke2fs on your partitions, then you did mkraid on them.

guess what? raid code puts a little chunk of info about each disk and the raid
array it is part of onto the end of the partition, and then reports the size
of the device as being a little smaller than the number of blocks on the disk.

raid is a block device, e2fs is a file system. you have to make the raid
first, then the fs on top of that.

so, you need to run mke2fs on /dev/md0 rather than on the individual
partitions, and then you should be fine.
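
in other words, the order of operations is (a sketch, using your md device):

  mkraid /dev/md0      (build the array first; writes the raid superblocks)
  mke2fs /dev/md0      (then make the filesystem on the array device)
  fsck /dev/md0        (superblock and device sizes will now agree)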

allan

Jason Lin [EMAIL PROTECTED] said:

 I am trying to set up RAID 1 on RedHat 6.1.
 
 "fsck /dev/md0" gives the following:
 
 Parallelizing fsck version 1.15 (18-Jul-1999)
 e2fsck 1.15, 18-Jul-1999 for EXT2 FS 0.5b, 95/08/09
 The filesystem size (according to the superblock) is
 757055 blocks
 The physical size of the device is 757024 blocks
 Either the superblock or the partition table is likely
 to be corrupt!
 Abort<y>?
 -
 But the constituent disks have 757055 blocks each.
 Should I do "mkraid --really-force /dev/md0"  ?
 
 
 