RE: First RAID Setup

2005-12-15 Thread Callahan, Tom
You should have a designated spare for RAID-5.

Not sure why you have 3 disks for each RAID1. RAID1 is a mirror, and unless
the third drive is a spare, it is not needed.

Thanks,
Tom Callahan

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of Andargor The Wise
Sent: Thursday, December 15, 2005 2:10 PM
To: linux-raid@vger.kernel.org
Subject: First RAID Setup


I admit it. I'm a RAID virgin.

However, after a disastrous failure of the sole drive
I wasn't backing up, I decided to go RAID-5 under
Slack 10.2 (first time ever with RAID-5).

The config:

Asus P5GL-MX (ICH6) mobo w/1 GB RAM, 4 x SATA ports
P4 3.0G/1M
3 x WD2000JS 200.0 GB SATA drives

First, a question: the BIOS on this machine seems to
list the SATA ports as third/fourth IDE
master/slave. Further, the documentation seems to say
that SATA 1/2 are master and SATA 3/4 are slave
(black and red connectors, respectively).

My understanding is that SATA drives are each on
separate buses. Is the master/slave labeling just
because the BIOS offers a P-ATA emulation mode for
SATA, and showing them that way is easier for
novices to understand?

I ask because people have said that it is not a good
idea to have both IDE masters and slaves on the same
bus as part of a RAID-5 array. I know SATA is
different, but will using three of the SATA ports on
this mobo be OK?

Second, after reading the excellent advice in this
list, I decided that booting from RAID-5 might not be
a good idea. So this is what I've been thinking:

Each disk partitioned alike:
1   30MB 
2   8GB (to allow for memory upgrades later)
5   rest_of_disk

mds:
md0 raid1 sda1 sdb1 sdc1
md1 raid1 sda2 sdb2 sdc2
md2 raid5 sda5 sdb5 sdc5

md0 /boot
md1 swap
md2 /
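
(For concreteness, a minimal mdadm sketch of creating this layout --
the 64K chunk below is just mdadm's default, not a tuned choice:)

  mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
  mdadm --create /dev/md1 --level=1 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2
  mdadm --create /dev/md2 --level=5 --raid-devices=3 --chunk=64 \
      /dev/sda5 /dev/sdb5 /dev/sdc5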

Does this look OK? What should the stripe and chunk
sizes be, considering I'll be going with reiserfs?
Typical usage: development machine, some DB apps with
medium load, read-only mostly, not many writes. Very
few large files (such as multimedia).

Or should I set up separate RAID-5's for /usr and /var
as well?

Lastly, can I install directly to this configuration,
or should I install on a separate disk and move things
into the array?

Andargor




RE: First RAID Setup

2005-12-15 Thread Callahan, Tom
I understand the reason for the RAID1 devices. I was asking why you have
3 devices in the RAID1 setup? RAID1 is a mirrored configuration, requiring
only 2 disks for operation.
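
(If the intent is 2 active mirrors plus a hot spare, a sketch of what
that looks like with mdadm -- device names as in the original post:)

  # two active RAID1 members, third partition held as a hot spare
  mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 \
      /dev/sda1 /dev/sdb1 /dev/sdc1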

It is always wise to build in a spare, however; that goes for all RAID
levels. In your configuration, if a disk fails in your RAID5, the array
keeps running in degraded mode, but with no redundancy left, a second
failure will take it down. RAID5 is usually 3+ disks, with parity
distributed across them. So you should have 3 disks at minimum, and then a
4th as a spare.

The MD modules/subsystem will then automagically bring in that spare disk if
any of the existing 3 disks in your RAID5 setup fails.
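
(A sketch of that with mdadm; the fourth disk sdd here is hypothetical:)

  # three active RAID5 members plus a hot spare; md rebuilds onto
  # the spare automatically when an active member fails
  mdadm --create /dev/md2 --level=5 --raid-devices=3 --spare-devices=1 \
      /dev/sda5 /dev/sdb5 /dev/sdc5 /dev/sdd5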

It is wise to think through your layout prior to building, and I commend you
for that. You may also want to review and experiment with the MD subsystem.
For instance, there is a neat --grow mode, not mentioned in many vendor man
pages, that can let you grow an MD device as needed.
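
(For example -- a sketch, assuming the members have been enlarged and
the filesystem on top is reiserfs as planned in the original post:)

  # grow the array to the maximum size its members allow,
  # then grow the filesystem to match
  mdadm --grow /dev/md2 --size=max
  resize_reiserfs /dev/md2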

Another gotcha: it's usually better to use entire disks in an MD array, if
you can afford to. This alleviates the growing pains of having to manually
repartition when you want to grow an existing filesystem. This may not make
much sense now, but once you have to do it, you'll smack your forehead in
grief.

Thanks,
Tom Callahan

-----Original Message-----
From: Andargor The Wise [mailto:[EMAIL PROTECTED]]
Sent: Thursday, December 15, 2005 2:45 PM
To: Callahan, Tom; linux-raid@vger.kernel.org
Subject: RE: First RAID Setup


The RAID1 partitions are to make sure:

1) The machine is able to boot even if a disk is lost
(/boot).
2) The machine isn't brought down if a disk is lost
(swap)

I thought about a spare drive, but I don't need high
availability. I'm satisfied with being able to recover
my data.

Andargor




RE: resize2fs failing--how to resize my fs?

2005-12-14 Thread Callahan, Tom
Was this resize done while the FS was mounted?

Thanks,
Tom Callahan

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of Michael Stumpf
Sent: Wednesday, December 14, 2005 3:33 PM
To: linux-raid@vger.kernel.org
Subject: Re: resize2fs failing--how to resize my fs?


Michael Stumpf wrote:

 I get this from the latest stable resize2fs:

 [EMAIL PROTECTED] parted]# resize2fs /dev/my_vol_grp/my_log_vol
 resize2fs 1.38 (30-Jun-2005)
 Resizing the filesystem on /dev/my_vol_grp/my_log_vol to 488390656 
 (4k) blocks.
 Killed

 Parted (again, latest stable) tells me the following:
 Using /dev/mapper/my_vol_grp-my_log_vol
 (parted) resize 1 0 100%
 No Implementation: This ext2 file system has a rather strange layout!
 Parted can't resize this (yet).
 (parted)
 Similar results from ext2resize/ext2online. This is an ordinary ext3
 fs, living inside an LVM2 volume that has already been extended to
 accommodate it (used all free extents). I've resized it down and up
 before, though it is possible I am resizing it larger than it has ever
 been (1.8TB).
 Not sure what's up. Any advice is welcome; my research into this has me
 getting a bit nervous about lvm2 bugs causing loss of data. While I
 want a single resilient (via raid 5) volume, I may be willing to ditch
 a whole layer of software (lvm2) to get some security.


Surprised no one has hit this before. It turns out that somehow my swap
space disappeared in a system migration. This became more obvious when I
explicitly tried to extend the fs to a lower limit (438390656 blocks), where
resize2fs worked for a while, then informed me that it couldn't allocate
some memory.

Add 512MB of swap and problem solved... it never used more than ~100MB of
swap (256MB main memory).
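
(For anyone hitting the same thing, a minimal sketch of adding that
swap -- the swap file at /swapfile is just an example; a partition
works the same way:)

  dd if=/dev/zero of=/swapfile bs=1M count=512   # 512MB of zeros
  chmod 600 /swapfile                            # not world-readable
  mkswap /swapfile                               # write swap signature
  swapon /swapfile                               # enable it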

Hope this helps someone.





RE: s/w raid and bios renumbering HDs

2005-10-31 Thread Callahan, Tom
You are testing failover with reboots, so when Linux probes the disks, it is
putting hdc where hda used to be. This seems a bit strange, as hda/hdb
should theoretically be on IDE1 and hdc/hdd on IDE2.

As far as your grub setup goes, it looks perfectly fine. You should have two
entries as you have, because if disk 1 fails, you cannot boot from (hd0,0),
and vice-versa.
One gotcha: make sure grub is installed in the MBR of BOTH drives, not just
on the MD device.

Thanks,
Tom Callahan
TESSCO Technologies Inc.
410-229-1361


-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of Hari Bhaskaran
Sent: Monday, October 31, 2005 10:57 AM
To: linux-raid@vger.kernel.org
Subject: s/w raid and bios renumbering HDs


Hi,

I am trying to set up RAID-1 for the boot/root partition. I got
the setup working, except that what I see in some of my tests leaves me
less than convinced it is actually working. My system is Debian 3.1,
and I am not using the raid-setup options in the debian-installer;
I am trying to add RAID-1 to an existing system (followed
http://www.tldp.org/HOWTO/Software-RAID-HOWTO.html -- 7.4, method 2).

I have /dev/hda (master on primary) and /dev/hdc (master on secondary)
set up as mirrors. I also have a cdrom on /dev/hdd. Now if I disconnect
hda and reboot, everything seems to work -- except what used to be
/dev/hdc comes up as /dev/hda. I know this because the BIOS complains
that primary disk 0 is missing, and I would have expected a missing
hda, not a missing hdc. Anyway, the software seems to recognize the
failed disk fine when I connect the real hda back. Is this the way it
is supposed to work? Can I rely on it? Also, what happens when I move
on to fancier setups like RAID-5? My box is a Dell 400SC with some
Phoenix BIOS (it doesn't have many options either). I get different
(still unexpected) results with the cdrom connected and not.
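
(One thing that helps here, as a sketch: md identifies arrays by the
UUID in each member's superblock, so you can pin assembly to UUIDs
instead of device names -- the config path below is Debian's:)

  # append ARRAY lines keyed by UUID; assembly then survives the
  # BIOS/kernel renumbering the underlying disks
  mdadm --detail --scan >> /etc/mdadm/mdadm.conf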

Question #2 (probably related to my problem)

My grub menu.lst is as follows (/dev/md0 is made of /dev/hda1 and /dev/hdc1).
For testing, I made two entries, one for (hd0,0) and another for (hd1,0).
The howto I was reading wasn't clear to me: should I be making just one
entry pointing to /dev/md0?
Also, trying the labels for hda and hdc after connecting the faulty
drive back gave me different results (in one case I was looking at
older data and in the other case I wasn't).

(ignore the vs2.1.xxx; it is a linux-vserver patch -- shouldn't matter here)

title   Debian GNU/Linux, kernel 2.6.13.3-vs2.1.0-rc4-RAID-hda
root    (hd0,0)
kernel  /boot/vmlinuz-2.6.13.3-vs2.1.0-rc4 root=/dev/md0 ro
initrd  /boot/initrd.img-2.6.13.3-vs2.1.0-rc4.md0
savedefault
boot

title   Debian GNU/Linux, kernel 2.6.13.3-vs2.1.0-rc4-RAID-hdc
root    (hd1,0)
kernel  /boot/vmlinuz-2.6.13.3-vs2.1.0-rc4 root=/dev/md0 ro
initrd  /boot/initrd.img-2.6.13.3-vs2.1.0-rc4.md0
savedefault
boot

Any help is appreciated. If there is a better or more current HOWTO, please
let me know. The ones I have seen so far refer to now-deprecated tools
(raidtools or raidtools2), and I have had a hard time finding the
equivalent mdadm syntax.
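
(For reference, a rough raidtools-to-mdadm mapping -- a sketch with
example device names, not taken from any one HOWTO:)

  mkraid /dev/md0                ->  mdadm --create /dev/md0 --level=1 \
                                       --raid-devices=2 /dev/hda1 /dev/hdc1
  raidstart /dev/md0             ->  mdadm --assemble /dev/md0 /dev/hda1 /dev/hdc1
  raidstop /dev/md0              ->  mdadm --stop /dev/md0
  raidhotadd /dev/md0 /dev/hda1  ->  mdadm --add /dev/md0 /dev/hda1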

--
Hari