Raid problems

2006-10-30 Thread Andrea Ganduglia

Hi. This morning, after a fatal ATA Abnormal Status, the machine froze, and
after reboot it said in syslog:

Oct 30 12:35:15 prometeo kernel: DMA write timed out
Oct 30 12:35:16 prometeo kernel: parport0: BUSY timeout (1) in
compat_write_block_pio

What does this error mean?

--
Openclose.it - Idee per il software libero
http://www.openclose.it


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED] 
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]




Re: Software RAID problems (bad filesystem type)

2004-02-18 Thread Timo Railo
[ Either I or my mail program is going nutty... I could have sworn
that I replied to the list, not to you, Justin. Sorry. ]

After much, much headache (and almost buying a new serial ATA raid hw
setup etc.), I got it to work (almost). Solution: compiling my own
kernel. You should have warned me how easy it would be. It was really a
no-brainer; I was afraid of it for no reason. (Thanks Martin, those
German instructions were most helpful; stuff from
http://www.projektfarm.com/en/support/howto/debian_kernel_compile.html
also helped.)

Things learned along the way (for a Debian [and mostly linux] newbie):
- apt-get, differences between distros
- lilo principles
- grub principles
- fstab principles
- two different sets of raid tools (grin)
- kernel modules
- inittab
- compiling my own kernel
- patience
- that the debian community is superb!
I suppose I could package this experience and sell it as a hands-on  
linux course. ;-)

For anyone starting with Linux software raid:
1. install stable (or anything else you prefer)
2. get the 2.4.24 kernel source (apt-get install kernel-source-2.4.24), or newer
3. follow the instructions from:
http://www.projektfarm.com/en/support/howto/debian_kernel_compile.html
	- remember to include raid support (built in, not as a module)
4. follow the instructions from:
http://www.cs.montana.edu/faq/faqw.admin.py?query=1.22&querytype=simple&casefold=yes&req=search
NOTE:
Lilo didn't work for me; it just froze at startup. Therefore I had to go
the grub way with a rescue cd.
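For reference, the raid support mentioned in step 3 corresponds, in a 2.4-series kernel .config, to roughly the following (a sketch; option names are from the 2.4 tree and worth double-checking against your kernel version):

```
# Software RAID (md) support, built in rather than modular
CONFIG_MD=y
CONFIG_BLK_DEV_MD=y
CONFIG_MD_RAID1=y
```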

ONE final question on this issue:
All is well, except there is something funny about the grub setup (or
the BIOS, but I suspect grub). If I disable either one of the hard
disks, it won't start up at all (it just can't boot). Any pointers as
to what might be causing this would again be greatly appreciated.

Thanks for great help so far!

Timo


On Monday 16 February 2004 13:34, you wrote:
Hi,

(Justin, sorry to harass you off-the list)

I'm really losing my hair with this. So far:

I got it all to work (sw RAID); it was mostly fstab that was not set up
right. But since then, I've really managed to mess things up. I thought
everything was fine, but didn't realize that Lilo was still reading
its conf from the old disk. So after repartitioning the original disk, it
would not boot anymore. (That's my guess anyway.)

That is probably correct.

After much trouble, I thought I was better off re-installing the whole
thing. Wrong again.
Well, at least I found out what was causing it:
mount: wrong fs type, bad option, bad superblock on /dev/md0,
        or too many mounted file systems
        (could this be the IDE device where you in fact use
        ide-scsi so that sr0 or sda or so is needed?)
It's giving this because raid was not properly loaded on startup (hope
this small tidbit will help someone).
Did you mean to copy this to the list?  Or did you, and I just missed  
it?

PROBLEM:
After doing raidstart /dev/md0, my raid disk mounts just fine, and also
shows up nicely in /proc/mdstat. But I haven't figured out how I could
have it start up automatically on boot. I've understood that kernels
from 2.4 up should have raid support built in, not requiring any
init scripts. I've also tried kernel 2.6 (both kernels from ready-made
Debian packages via apt-get). I know it's possible without compiling my
own kernel; I've got it working before (I just don't know how).
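For what it's worth, one common way to get mdadm-built arrays assembled at boot on Debian is the mdadm init script reading /etc/mdadm/mdadm.conf. A minimal sketch, with the member partitions (/dev/hda1, /dev/hdc1) assumed here for illustration:

```
# /etc/mdadm/mdadm.conf -- read by the mdadm boot script
DEVICE /dev/hda1 /dev/hdc1
ARRAY /dev/md0 devices=/dev/hda1,/dev/hdc1
```

Running mdadm --detail --scan should print ARRAY lines suitable for appending to this file.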

Any input appreciated (hope someone got this far...)!

Cheers,

Timo
I recall reading somewhere that the raid has to be started upon boot, and
that it wasn't enough to simply make sure the kernel has the proper drivers
loaded.  That doesn't jibe with my experience, as I've got a RAID box
working, though I did have to compile the drivers into the kernel.

Given that, let's go over what I've got installed.  I'm using  
raidtools2.
I've got /etc/raidtab as a symlink that points to /etc/raid/raidtab.
My /etc/fstab uses a RAID array as / and /boot.  Based upon your  
message,
it looks like you've done all the above, too, right?

Now, the difference is down to the kernels.  As I said, I compiled my  
own.
I couldn't get the stock kernel to work, because it had to load the  
modules
from the raid in order to see the raid.  I don't have a link handy,  
but I
believe there's info on the Debian website that will give some details
about building a new initial ram disk.  That's what you'll need to do.
Make sure you put all modules that the kernel needs to view the raid  
into
the initrd.  Let me know if you need some pointers on that.  I think  
I'm
going to try it myself, just to see if I can get my raid box to boot a
stock kernel.
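One hedged sketch of what Justin describes, assuming the 2.4-era Debian initrd-tools setup (file locations may differ on your system): list the raid drivers in /etc/mkinitrd/modules, then regenerate the initrd.

```
# /etc/mkinitrd/modules -- modules copied into the initial ram disk
md
raid1
# then regenerate the image, e.g.:
#   mkinitrd -o /boot/initrd.img-2.4.24 2.4.24
```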

Let us know how it goes.

Justin




Re: Software RAID problems (bad filesystem type)

2004-02-18 Thread Timo Railo
Something is wacky. Any idea what could be causing such poor
performance? Quite new hardware, normal IDE, disks on separate busses.

tmoby:~# hdparm -T /dev/md0

hdparm -Tt /dev/hda /dev/hdc /dev/md0

/dev/hda:
 Timing buffer-cache reads:   1828 MB in  2.00 seconds = 914.00 MB/sec
 Timing buffered disk reads:     8 MB in  3.00 seconds =   2.67 MB/sec
/dev/hdc:
 Timing buffer-cache reads:   1864 MB in  2.00 seconds = 932.00 MB/sec
 Timing buffered disk reads:   10 MB in  3.69 seconds =   2.71 MB/sec
/dev/md0:
 Timing buffer-cache reads:   1864 MB in  2.00 seconds = 932.00 MB/sec
 Timing buffered disk reads:   10 MB in  3.63 seconds =   2.75 MB/sec
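(For reference, hdparm's rate column is simply megabytes transferred divided by elapsed seconds; a quick sanity check of the figures above:)

```python
# hdparm reports throughput as data read divided by elapsed time
mb, seconds = 8, 3.00          # the /dev/hda buffered-disk line
rate = mb / seconds
print(round(rate, 2))          # 2.67, matching the output above
```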
Cheers,
Timo Railo

Re: Software RAID problems (bad filesystem type)

2004-02-18 Thread Benedict Verheyen
Timo Railo wrote:
 Something is wacky. Any idea what could be causing such a poor
 performance. Quite new hardware, normal IDE, disks in separate BUSses.

 tmoby:~# hdparm -T /dev/md0

 hdparm -Tt /dev/hda /dev/hdc /dev/md0

 /dev/hda:
   Timing buffer-cache reads:   1828 MB in  2.00 seconds = 914.00 MB/sec
   Timing buffered disk reads:     8 MB in  3.00 seconds =   2.67 MB/sec

 /dev/hdc:
   Timing buffer-cache reads:   1864 MB in  2.00 seconds = 932.00 MB/sec
   Timing buffered disk reads:    10 MB in  3.69 seconds =   2.71 MB/sec

 /dev/md0:
   Timing buffer-cache reads:   1864 MB in  2.00 seconds = 932.00 MB/sec
   Timing buffered disk reads:    10 MB in  3.63 seconds =   2.75 MB/sec


 Cheers,
 Timo Railo

Check your hd settings via hdparm -i /dev/hda to see if dma is enabled.
It's normally disabled so you'll have to enable it.
hdparm -d1 /dev/hda to set it.
Then adjust the hdparm script in /etc/init.d to enable this on startup,
or adjust your kernel to always set this automatically.
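If your version of Debian's hdparm package supports a config file (an assumption here; the init.d script approach above works either way), the persistent form of those settings is a per-device block along these lines:

```
# /etc/hdparm.conf -- applied at boot by the hdparm init script
/dev/hda {
    dma = on
    io32_support = 1
}
```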

Regards,
Benedict




-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED] 
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Software RAID problems (bad filesystem type)

2004-02-18 Thread Timo Railo
Thanks Benedict,

this is what happened (also tried just -d1).

hdparm -d 1 -c 1 /dev/hda

/dev/hda:
 setting 32-bit IO_support flag to 1
 setting using_dma to 1 (on)
 HDIO_SET_DMA failed: Operation not permitted
 IO_support   =  1 (32-bit)
 using_dma=  0 (off)
Man this is tough...

Timo







Re: Software RAID problems (bad filesystem type)

2004-02-18 Thread Justin Guerin
Hi Timo.

On Wednesday 18 February 2004 08:19, Timo Railo wrote:
 Thanks Benedict,

 this is what happened (also tried just -d1).

 hdparm -d 1 -c 1 /dev/hda

 /dev/hda:
   setting 32-bit IO_support flag to 1
   setting using_dma to 1 (on)
   HDIO_SET_DMA failed: Operation not permitted
   IO_support   =  1 (32-bit)
   using_dma=  0 (off)

 Man this is tough...

 Timo

Looking at the man page for hdparm, it's possible your drive doesn't support 
that operation (-d).  You might google with the name of your drive, to see 
if it's a known issue.  Have you also tried hdparm -d 1 -X34 /dev/hda?  I 
wonder if that would help.  

For what it's worth, my RAID-1 performance was quite different from yours:
# hdparm -Tt /dev/hda /dev/hdc /dev/md1

/dev/hda:
 Timing buffer-cache reads:   128 MB in  1.02 seconds =125.49 MB/sec
 Timing buffered disk reads:  64 MB in  3.01 seconds = 21.26 MB/sec

/dev/hdc:
 Timing buffer-cache reads:   128 MB in  1.11 seconds =115.32 MB/sec
 Timing buffered disk reads:  64 MB in  2.93 seconds = 21.84 MB/sec

/dev/md1:
 Timing buffer-cache reads:   128 MB in  1.09 seconds =117.43 MB/sec
 Timing buffered disk reads:  64 MB in  3.01 seconds = 21.26 MB/sec

My disks are operating in udma2 mode.  
Justin





Re: Software RAID problems (bad filesystem type)

2004-02-18 Thread Benedict Verheyen
Timo Railo wrote:
 Thanks Benedict,
 
 this is what happened (also tried just -d1).
 
 hdparm -d 1 -c 1 /dev/hda
 
 /dev/hda:
   setting 32-bit IO_support flag to 1
   setting using_dma to 1 (on)
   HDIO_SET_DMA failed: Operation not permitted
   IO_support   =  1 (32-bit)
   using_dma=  0 (off)
 
 Man this is tough...
 
 Timo

I had this happen once too. The reason was that I didn't compile in
support for the correct motherboard chipset. Because of that, I also
couldn't do a hdparm -d 1.
After I recompiled my kernel with the correct chipset support,
everything worked OK.
Use lspci to find out about your motherboard and the chipset it
uses, then compile it in (not as a module).
You'll also need to set:
  CONFIG_BLK_DEV_IDEDMA_PCI=y
  CONFIG_BLK_DEV_IDEDMA=y
And, in my case, I needed
  CONFIG_BLK_DEV_PIIX
but that last option will differ if your board doesn't use an Intel
PIIX-family IDE controller.

Regards,
Benedict






Re: Software RAID problems (bad filesystem type)

2004-02-12 Thread Justin Guerin
On Thursday 12 February 2004 07:05, you wrote:
[snip]
 
  You're going to have problems with that setup.  You can't have a raid
  using
  part of a disk (hdc2) and the entire disk (hdc).  You should be using
  two
  partitions, like this:
  # cat /etc/raid/raidtab
  raiddev /dev/md0
  raid-level 1...

 Starting from scratch with a different approach (this time following
 instructions from http://karaolides.com/computing/HOWTO/lvmraid). Now I
 have raid disk that comes nicely up on boot and I've managed to copy my
 data there. However, I can't get it to boot from raid. Here is my
 raidtab:

 # cat /etc/raid/raidtab
 raiddev /dev/md0
 raid-level 1
 nr-raid-disks 2
 nr-spare-disks 0
 chunk-size 32
 persistent-superblock 1
  device /dev/hda1
  failed-disk 0
  device /dev/hdc1
  raid-disk 1

OK, this is fine, since you're following the instructions from the link you 
posted.

 And here is lilo.conf

 lba32
 boot=/dev/hdc1

This line is the source of your problems.  You're telling the bootloader to 
look on /dev/hdc1 for your boot sector.  You should be telling it to look 
on /dev/md0.  After all, isn't that the device that you want /boot on?

 root=/dev/md0
 install=/boot/boot-menu.b
 map=/boot/map
 vga=normal
 default=Linux

 image=/vmlinuz
  label=Linux
  read-only
 image=/vmlinuz.old
  label=LinuxOLD
  read-only
  optional

 When I do chroot /mnt/md0 /sbin/lilo, I get the following warnings:

I can't explain all the warnings you're getting from Lilo, because I can't 
see your current /etc/fstab, but I have some guesses.

 Warning: '/proc/partitions' does not exist, disk scan bypassed
When you chroot to /mnt/md0, you can no longer access any directory not 
under /mnt/md0.  If you want to access proc, make a /mnt/md0/proc 
directory, and mount proc there before you chroot.

 Warning: BIOS drive 0x81 may not be accessible
 Warning: /dev/hdc1 is not on the first disk
 Warning: BIOS drive 0x81 may not be accessible
 Warning: Partition 1 on /dev/hdc is not marked Active.

When you partitioned this drive, did you mark the raid partition as 
bootable?  Both raid partitions (/dev/hdc1 and /dev/hda1) should be marked 
as bootable.

 Warning: partition type 0xFD on device 0x1601 is a dangerous place for
  a boot sector.

 Then it says (after saying yes)

 Warning: BIOS drive 0x81 may not be accessible
 Warning: BIOS drive 0x81 may not be accessible
 Added Linux *
 Skipping /vmlinuz.old

The rest of these errors are likely due to your trying to access /dev/hdc1 
directly, instead of using the raid drivers and going through /dev/md0.

 After reboot, the system comes up, but it boots to hda1 instead of md0.

 Any pointers as to how to get it boot properly appreciated.

First, change your /etc/lilo.conf to boot from /dev/md0.  Mount /proc onto a 
directory you can view from a chrooted /dev/md0.  Then chroot and run lilo.
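Put together, the fix Justin describes would make the top of lilo.conf look something like this (a sketch based on the file quoted above; raid-extra-boot is an assumption on my part, available only in reasonably recent lilo versions, not something from the thread):

```
# /etc/lilo.conf -- boot from the array itself, not a member partition
lba32
boot=/dev/md0
root=/dev/md0
# optionally write boot records to both member disks so either can boot:
# raid-extra-boot=/dev/hda,/dev/hdc
install=/boot/boot-menu.b
map=/boot/map
```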

Note that if you're using the raid drivers as modules, you've got to put 
them in the initial ram disk for them to be loaded, and start the raid 
array with the proper userspace utilities.  I do not know how to do this.  
Instead, I ended up compiling my own kernel with raid drivers built in, and 
booting without an initial ram disk.  Are you booting your own kernel?

Let us know if these changes don't solve your problems.  Remember to finish 
the install by properly partitioning /dev/hdc, adding it to the array, and 
waiting for it to update properly.

  Also note that you'll get much better performance if you can separate
  your
  disks onto individual IDE busses.  For my machine, I moved one disk
  from
  the slave on the first bus (hdb) to the master on the second bus
  (hdc), and
  now I've got a quicker setup.  But you can still make RAID work on one
  disk / IDE bus, if you need to / have to.

 They are actually on two different busses. What worries me a bit is
 that they have a different cable setup at the moment (one with a
 standard IDE cable, the other with a yellow IDE cable).

Someone else may have to help you out here.  My gut feeling is that there's 
no difference, but I have no data to back that up.

  Also, provide the output of cat /proc/mdstat.  You should see
  something
  like this:
  Personalities : [raid1]
  read_ahead 1024 sectors
  md0 : active raid1 hdc1[1] hda2[0]
15936 blocks [2/2] [UU]
 
  Personalities : [raid1]
  read_ahead not set
  unused devices: none
 
  This means that your drives aren't properly recognized.  You'll have
  to
  fix your raidtab and then start the raid.  I didn't use mdadm, so I'm
  not
  sure of the syntax.  But since you've actually got a /proc/mdstat, that
  means your drivers are loaded.

 Now this is ok as well:

 Personalities : [raid1]
 read_ahead 1024 sectors
 md0 : active raid1 hdc1[0]
34179648 blocks [2/1] [U_]

 unused devices: none

This is better.  You should see both drives listed once you successfully 
boot from /dev/md0 and then unmark /dev/hda1 as a failed disk.

Re: Software RAID problems (bad filesystem type)

2004-02-09 Thread David Clymer
On Sun, 2004-02-08 at 16:38, Timo Railo wrote:
 Hi!
 
 I'm having problems getting software raid to work with my IDE drives. I  
 had Redhat9 installed previously on the same machine (with working  
 software raid setup), but I'm now moving to Debian.
 
 My kernel is 2.4.18-bf2.4 and has support for RAID1, which I'm trying  
 to create. I'm following these instructions (thank you Lucas for  
 excellent instructions!):
 
 http://www.cs.montana.edu/faq/faqw.admin.py?query=1.22&querytype=simple&casefold=yes&req=search
 
 All goes well up to point 8 (I'm able to create raid, put ext3  
 filesystem on it and even mount it). But that's only the first mount.  
 After reboot, when trying to mount, I get this error message:
 
 mount: wrong fs type, bad option, bad superblock on /dev/md0,
 or too many mounted file systems
 (could this be the IDE device where you in fact use
 ide-scsi so that sr0 or sda or so is needed?)
 
 I've tried zeroing the superblock, with no help. Also tried recreating  
 the partitions (times x) and tried recreating filesystem.
 

How are you mounting it? If you are mounting using an explicit mount -t
ext3 /dev/md0 /mnt, but have not modified your /etc/fstab correctly
(that's the file that is used at boot time to decide where and with what
options filesystems are to be mounted), then this behavior would make
perfect sense.

-davidc





Re: Software RAID problems (bad filesystem type)

2004-02-09 Thread Justin Guerin
Timo Railo wrote:

 Hi!
 
 I'm having problems getting software raid to work with my IDE drives. I
 had Redhat9 installed previously on the same machine (with working
 software raid setup), but I'm now moving to Debian.
 
 My kernel is 2.4.18-bf2.4 and has support for RAID1, which I'm trying
 to create. I'm following these instructions (thank you Lucas for
 excellent instructions!):
 
 http://www.cs.montana.edu/faq/faqw.admin.py?query=1.22&querytype=simple&casefold=yes&req=search
 
 All goes well up to point 8 (I'm able to create raid, put ext3
 filesystem on it and even mount it). But that's only the first mount.
 After reboot, when trying to mount, I get this error message:
 
 mount: wrong fs type, bad option, bad superblock on /dev/md0,
 or too many mounted file systems
 (could this be the IDE device where you in fact use
 ide-scsi so that sr0 or sda or so is needed?)
 
 I've tried zeroing the superblock, with no help. Also tried recreating
 the partitions (times x) and tried recreating filesystem.
 
 Please help, I'm really running out of ideas.
 
 Thank you very much,
 
 Timo Railo

I recall seeing a similar (or the same) error message when I tried to do the
same thing.  I don't know much about building an initial ram disk, but I
gather that you've somehow got to get the raid drivers into the initial ram
disk for this to work.  What ended up working for me was compiling my own
kernel, with all RAID functionality built in, instead of in modules.  I was
never able to get the stock (or bf2.4) kernel to work.

I know that's not exactly the solution to your problem, but I hope you find
it useful, nonetheless.

Justin Guerin






Re: Software RAID problems (bad filesystem type)

2004-02-09 Thread Timo Railo
Hi David,

thank you for your reply!

I've tried putting it in /etc/fstab, but I get the error at boot 
time. And since it's a remote computer, it's a little inconvenient because 
it won't continue the bootup process without keyboard input. Here is my 
fstab setup:

/dev/hda1   /           ext3      errors=remount-ro   0   1
/dev/hda2   none        swap      sw                  0   0
/dev/hda3   /mnt/hda3   ext3      defaults            0   2
/dev/md0    /mnt/md0    ext3      defaults            0   0   (this commented out for boot)
proc        /proc       proc      defaults            0   0
/dev/fd0    /floppy     auto      user,noauto         0   0
/dev/cdrom  /cdrom      iso9660   ro,user,noauto      0   0

When I build the raid with mdadm --create, do mkfs.ext3 and then do 
mount (with or without -t, doesn't matter) it mounts beautifully. When 
I reboot, uncomment the device in fstab and do mount -a or just try 
mounting it via mount -t ext3 /dev/md0 /mnt/md0 it gives the error.

I've also tried doing dd if=/dev/zero of=/dev/hdc to make it clean. 
When doing e2fsck /dev/hdc1 I get this error:

e2fsck 1.35-WIP (07-Dec-2003)
e2fsck: Invalid argument while trying to open /dev/hdc1
The superblock could not be read or does not describe a correct ext2
filesystem.  If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate 
superblock:
e2fsck -b 8193 device

Many thanks for any input!!

Timo








Re: Software RAID problems (bad filesystem type)

2004-02-09 Thread Justin Guerin
On Monday 09 February 2004 10:16, Timo Railo wrote:
 Hi David,

 thank you for your reply!

 I've tried putting it to /etc/fstab, but getting the error on boot
 time. And since it's remote computer, it's a little inconvenient cause
 it won't continue the bootup process without keyboard input. Here is my
 fstab setup:

You should use the option noauto instead of defaults while you're 
debugging it.  That way, it won't try to mount on boot up.  It will only 
try to mount when you ask specifically.
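I.e., while debugging, the md0 line would look something like this (mount point as in your fstab):

```
/dev/md0    /mnt/md0    ext3    noauto    0   0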

 /dev/hda1   /           ext3      errors=remount-ro   0   1
 /dev/hda2   none        swap      sw                  0   0
 /dev/hda3   /mnt/hda3   ext3      defaults            0   2
 /dev/md0    /mnt/md0    ext3      defaults            0   0   (this commented out for boot)
 proc        /proc       proc      defaults            0   0
 /dev/fd0    /floppy     auto      user,noauto         0   0
 /dev/cdrom  /cdrom      iso9660   ro,user,noauto      0   0

 When I build the raid with mdadm --create, do mkfs.ext3 and then do
 mount (with or without -t, doesn't matter) it mounts beautifully. When
 I reboot, uncomment the device in fstab and do mount -a or just try
 mounting it via mount -t ext3 /dev/md0 /mnt/md0 it gives the error.

 I've also tried doing dd if=/dev/zero of=/dev/hdc to make it clean.
 When doing e2fsck /dev/hdc1 I get this error:

 e2fsck 1.35-WIP (07-Dec-2003)
 e2fsck: Invalid argument while trying to open /dev/hdc1

 The superblock could not be read or does not describe a correct ext2
 filesystem.  If the device is valid and it really contains an ext2
 filesystem (and not swap or ufs or something else), then the superblock
 is corrupt, and you might try running e2fsck with an alternate
 superblock:
      e2fsck -b 8193 device

 Many thanks for any input!!

 Timo

I see from your fstab that you're not trying to boot off the raid device.  
My previous advice was for that situation.  Sorry if it caused confusion.

Your fstab file for your raid device is correct.  What does your /etc/
raidtab and /etc/raid/raidtab look like?  /etc/raidtab should be a link 
to /etc/raid/raidtab.  Post the contents of /etc/raid/raidtab, and we may 
be more able to pinpoint your problem.

Also, provide the output of cat /proc/mdstat.  You should see something 
like this:
Personalities : [raid1]
read_ahead 1024 sectors
md0 : active raid1 hdc1[1] hda2[0]
  15936 blocks [2/2] [UU]

If you don't, it means your raid drivers haven't been properly initialized 
and pointed to your raid disks.

Justin Guerin





Re: Software RAID problems (bad filesystem type)

2004-02-09 Thread Timo Railo
I've tried putting it to /etc/fstab, but getting the error on boot
time. And since it's remote computer, it's a little inconvenient cause
it won't continue the bootup process without keyboard input. Here is 
my
fstab setup:

You should use the option noauto instead of defaults while you're
debugging it.  That way, it won't try to mount on boot up.  It will 
only
try to mount when you ask specifically.
Thanks, obvious but overlooked... like so many great things in life.


/dev/hda1   /           ext3      errors=remount-ro   0   1
/dev/hda2   none        swap      sw                  0   0
/dev/hda3   /mnt/hda3   ext3      defaults            0   2
/dev/md0    /mnt/md0    ext3      defaults            0   0   (this commented out for boot)
proc        /proc       proc      defaults            0   0
/dev/fd0    /floppy     auto      user,noauto         0   0
/dev/cdrom  /cdrom      iso9660   ro,user,noauto      0   0

When I build the raid with mdadm --create, do mkfs.ext3 and then do
mount (with or without -t, doesn't matter) it mounts beautifully. When
I reboot, uncomment the device in fstab and do mount -a or just try
mounting it via mount -t ext3 /dev/md0 /mnt/md0 it gives the error.
I've also tried doing dd if=/dev/zero of=/dev/hdc to make it clean.
When doing e2fsck /dev/hdc1 I get this error:
e2fsck 1.35-WIP (07-Dec-2003)
e2fsck: Invalid argument while trying to open /dev/hdc1
The superblock could not be read or does not describe a correct ext2
filesystem.  If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the 
superblock
is corrupt, and you might try running e2fsck with an alternate
superblock:
     e2fsck -b 8193 device

Many thanks for any input!!

Timo
I see from your fstab that you're not trying to boot off the raid device.
My previous advice was for that situation.  Sorry if it caused confusion.

Your fstab file for your raid device is correct.  What does your /etc/
raidtab and /etc/raid/raidtab look like?  /etc/raidtab should be a link
to /etc/raid/raidtab.  Post the contents of /etc/raid/raidtab, and we may
be more able to pinpoint your problem.
Actually, I think that didn't exist at all (I think mdadm doesn't 
really care that much for raidtab at all). I've created one manually, 
while trying another route, and it looks like this (no 
/etc/raid/raidtab at all):

raiddev /dev/md0
raid-level  linear
nr-raid-disks   2
chunk-size  32
persistent-superblock   1
device  /dev/hdc2
raid-disk   0
device  /dev/hdc
raid-disk   1

Also, provide the output of cat /proc/mdstat.  You should see something
like this:
Personalities : [raid1]
read_ahead 1024 sectors
md0 : active raid1 hdc1[1] hda2[0]
  15936 blocks [2/2] [UU]
Personalities : [raid1]
read_ahead not set
unused devices: <none>
Thanks Justin!!!

Timo

--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]


Re: Software RAID problems (bad filesystem type)

2004-02-09 Thread Justin Guerin
On Monday 09 February 2004 11:05, Timo Railo wrote:
[snip]
  Your fstab file for your raid device is correct.  What does your /etc/
  raidtab and /etc/raid/raidtab look like?  /etc/raidtab should be a link
  to /etc/raid/raidtab.  Post the contents of /etc/raid/raidtab, and we
  may be more able to pinpoint your problem.

 Actually, I think that didn't exist at all (I think mdadm doesn't
 really care that much for raidtab at all). I've created one manually,
 while trying another route, and it looks like this (no
 /etc/raid/raidtab at all):

 raiddev /dev/md0
  raid-level  linear
  nr-raid-disks   2
  chunk-size  32
  persistent-superblock   1
  device  /dev/hdc2
  raid-disk   0
  device  /dev/hdc
  raid-disk   1

You're going to have problems with that setup.  You can't have a raid using 
part of a disk (hdc2) and the entire disk (hdc).  You should be using two 
partitions, like this:
# cat /etc/raid/raidtab
raiddev /dev/md0
raid-level 1
nr-raid-disks 2
nr-spare-disks 0
chunk-size 4
persistent-superblock 1
device /dev/hda2
raid-disk 0
device /dev/hdc1
raid-disk 1

Also note that you'll get much better performance if you can separate your 
disks onto individual IDE busses.  For my machine, I moved one disk from 
the slave on the first bus (hdb) to the master on the second bus (hdc), and 
now I've got a quicker setup.  But you can still make RAID work on one 
disk / IDE bus, if you need to / have to.

  Also, provide the output of cat /proc/mdstat.  You should see something
  like this:
  Personalities : [raid1]
  read_ahead 1024 sectors
  md0 : active raid1 hdc1[1] hda2[0]
15936 blocks [2/2] [UU]

 Personalities : [raid1]
 read_ahead not set
 unused devices: <none>

This means that your drives aren't properly recognized.  You'll have to
fix your raidtab and then start the raid.  I didn't use mdadm, so I'm not
sure of the syntax.  But since you've actually got a /proc/mdstat, that
means your drivers are loaded.

 Thanks Justin!!!

 Timo
You're welcome.

Justin Guerin





Software RAID problems (bad filesystem type)

2004-02-08 Thread Timo Railo
Hi!

I'm having problems getting software raid to work with my IDE drives. I  
had Redhat9 installed previously on the same machine (with working  
software raid setup), but I'm now moving to Debian.

My kernel is 2.4.18-bf2.4 and has support for RAID1, which I'm trying  
to create. I'm following these instructions (thank you Lucas for  
excellent instructions!):

http://www.cs.montana.edu/faq/faqw.admin.py?query=1.22&querytype=simple&casefold=yes&req=search

All goes well up to point 8 (I'm able to create raid, put ext3  
filesystem on it and even mount it). But that's only the first mount.  
After reboot, when trying to mount, I get this error message:

mount: wrong fs type, bad option, bad superblock on /dev/md0,
   or too many mounted file systems
   (could this be the IDE device where you in fact use
   ide-scsi so that sr0 or sda or so is needed?)
I've tried zeroing the superblock, with no help. Also tried recreating
the partitions (several times) and recreating the filesystem.

Please help, I'm really running out of ideas.

Thank you very much,

Timo Railo




raid problems

2001-01-26 Thread Knud Sørensen
I have a raid1 device for which fsck returns an error.
It says that the physical size of the device is 26000k but
that the superblock says that the size is 26066k.

if I run mkraid --upgrade it tells me that
the physical size is 26066k and that the superblock
starts at 26000k.

What should i do to make fsck successful?

  



Knud



Re: raid problems

2001-01-26 Thread Knud Sørensen
Solved the problem myself.

And found an alternative way to set up raid.

I have two identical disks, hda and hdc.

1)
Install debian on hda.

hda1    /boot
hda2    /
hda3    swap

2) 
configure raidtab with hdc as failed-disk.
md0 for boot
md1 for root

3)
config fstab and lilo.conf
md0 for boot
md1 for root

4)
clone the disk   
dd if=/dev/hda of=/dev/hdc 
This takes some time.

5) start raid
raidstart /dev/md0
raidstart /dev/md1

6) write new superblock
mke2fs -S /dev/md0
mke2fs -S /dev/md1

7) change /boot to md0
umount /boot
mount /dev/md0 /boot

8) run lilo
lilo

9) reboot and test
reboot
(when up again)
df
this should give
/dev/md1    /
/dev/md0    /boot

10) hot-add the hdc partitions
remove failed-disk from  /etc/raidtab 
then
raidhotadd /dev/md0 /dev/hdc1
raidhotadd /dev/md1 /dev/hdc2



This works for me.
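For readers using mdadm rather than the old raidtools, steps 5 and 10 above map roughly onto the following (a sketch, not tested here; same device names as in the recipe above):

```shell
# step 5: assemble the degraded arrays from the hda halves
mdadm --assemble /dev/md0 /dev/hda1
mdadm --assemble /dev/md1 /dev/hda2

# step 10: hot-add the hdc partitions so the mirrors resync
mdadm /dev/md0 --add /dev/hdc1
mdadm /dev/md1 --add /dev/hdc2
```

mdadm updates the array state itself, so there is no raidtab to edit afterwards.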

Knud

Knud Sørensen wrote:
> I have a raid1 device for which fsck returns an error.
> It says that the physical size of the device is 26000k but
> that the superblock says that the size is 26066k.
>
> if I run mkraid --upgrade it tells me that
> the physical size is 26066k and that the superblock
> starts at 26000k.
>
> What should i do to make fsck successful?
>
> Knud



R: raid-problems

1999-07-10 Thread Fabio Massimo Di Nitto
hello

> after a reboot (caused by a power fail) my raid was checked with ckraid and
> brought back into sync, but e2fsck says that the md-device partition has
> zero length??

you have to restart the md device in /etc/init.d using mdutils

just a tip: use ckraid --fix configfile

> the problem is that my /usr, /home and /var reside on the md-device
>
> any hints?

see above

--
until next mail B-)

Peter
--
   :~~  [EMAIL PROTECTED]  ~~:
   :  student of technical computer science   :
   : university of applied sciences krefeld (germany) :
~~
   FD314F21   C7 AE 2F 28 C1 33 71 77  0D 77 CD 6E 58 E9 06 6B


--
Unsubscribe?  mail -s unsubscribe [EMAIL PROTECTED] > /dev/null





raid-problems

1999-07-09 Thread Peter Bartosch
hello

after a reboot (caused by a power fail) my raid was checked with ckraid and
brought back into sync, but e2fsck says that the md-device partition has
zero length??

the problem is that my /usr, /home and /var reside on the md-device

any hints?

-- 
until next mail B-)

Peter
-- 
   :~~  [EMAIL PROTECTED]  ~~:
   :  student of technical computer science   :
   : university of applied sciences krefeld (germany) :
~~
   FD314F21   C7 AE 2F 28 C1 33 71 77  0D 77 CD 6E 58 E9 06 6B