Jordi Molina posted
<[EMAIL PROTECTED]>, excerpted
below,  on Tue, 24 Jan 2006 00:04:06 +0100:

> I installed gentoo from another livecd and then compiled the kernel
> and the initrd image to support sata_nv.
> 
> It boots fine for me. Forget about using the nvraid, is not hardware,
> so if you need it, go sw raid or buy a decent RAID card.

As someone else stated, my info doesn't quite fit your scenario, but it
can add to the list.  I agree with the above, go sw (preferably
kernel-built-in) RAID.

I'm running an older (I believe SATA-1) Silicon Image 3114, on a dual
Opteron Tyan s2885.  Attached to it, I have four Seagate SATA-2 300 gig
drives, in mixed-RAID configuration, all using the kernel's software RAID.
The fact that I'm using kernel RAID means I don't have to worry about
hardware-RAID compatibility or the like when this system dies or I simply
decide to upgrade.  Simply building a new kernel with the standard SATA
chipset drivers and installing it to /boot, before unplugging my drives
and plugging them into the new system, should be all I need to do to port
to a new SATA chipset.

As I mentioned, four drives, mixed RAID, arranged as follows.  A small
RAID-1 to boot off of.  Since RAID-1 is direct mirrored, I can install
GRUB to the MBR of all four drives, and can boot to GRUB from any of the  
four, by just switching the BIOS to the one I want to boot.  GRUB doesn't
do RAID, but it sees each of the mirrors individually, which is all that
it needs to see the kernel mirrored on each one, so it can boot it.
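A rough sketch of installing GRUB (the 2006-era GRUB legacy) to each
drive's MBR -- the device names are illustrative, not my actual layout.
Remapping each drive to (hd0) before setup is the usual trick, since
whichever drive the BIOS boots from will appear as (hd0):

```shell
# Install GRUB legacy to the MBR of all four drives, so the BIOS can
# boot from any one of them.  Each drive is temporarily mapped to (hd0)
# because the BIOS presents the boot drive as the first disk.
grub --batch <<'EOF'
device (hd0) /dev/sda
root (hd0,0)
setup (hd0)
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
device (hd0) /dev/sdc
root (hd0,0)
setup (hd0)
device (hd0) /dev/sdd
root (hd0,0)
setup (hd0)
EOF
```

Here (hd0,0) is assumed to be the small RAID-1 /boot partition on each
drive; adjust to taste.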

My main system is RAID-6 over the four drives, which means any one of the
four can die and I'll remain up with little speed degradation; a second
one can die and I'll still have my data, though at significantly reduced
speed, until I recover to at least three drives.  I was originally
thinking about RAID-5 with a hot-spare, but decided RAID-6 without a
hot-spare is effectively the same thing, only with more protection: the
second drive can die before the hot-spare could have been brought online,
and I'll still be fine.
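For anyone wanting to try it, a minimal mdadm sketch of such an array --
device names, partition numbers, and the md number are assumptions for
illustration, not my actual layout:

```shell
# Create a four-drive RAID-6: usable capacity of two drives, and any
# two members may fail before data is lost.
mdadm --create /dev/md1 --level=6 --raid-devices=4 \
      /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3

# Watch the initial sync and check array health afterward.
cat /proc/mdstat
mdadm --detail /dev/md1
```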

Stuff like /tmp, /var/tmp, and the portage tree and distdir, are on
RAID-0, to maximize speed and space usage, because that's either
non-critical data or stuff that can be redownloaded off the net rather
quickly in any case.  I have swap distributed across the four drives as
well, all set at the same priority, which lets the kernel stripe across
them much as it would a RAID-0.  If a drive or two dies, therefore, I'll
go down, but can come right back up by simply reconfiguring the swap and
RAID-0 for two or three drives instead of four, remaking the RAID-0, and
running that way if necessary until I can procure another 300 gig drive
or two to get back to normal operation.
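The equal-priority swap striping is just an fstab setting; a sketch
assuming one swap partition per drive (partition numbers illustrative):

```shell
# /etc/fstab fragment: four swap partitions at the same priority.
# The kernel round-robins pages across equal-priority swap areas,
# giving RAID-0-like striping with no md device involved.
/dev/sda2   none   swap   sw,pri=1   0 0
/dev/sdb2   none   swap   sw,pri=1   0 0
/dev/sdc2   none   swap   sw,pri=1   0 0
/dev/sdd2   none   swap   sw,pri=1   0 0
```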

I don't have to use an initrd at all.  With a couple of kernel
parameters, the kernel can find and reassemble the RAID-6 upon which my
root filesystem is based, without an initrd.  I did choose to use
partitioned RAID-6 (partitioned RAID is possible on 2.6, with an
additional kernel command-line append telling it which RAIDs to load as
partitioned) for my root and root-backup-image filesystems, rather than
LVM, thus avoiding the complication of an initrd/initramfs, which LVM
would require.  However, the rest of my RAID-6 data is on another RAID-6
partition, which is split with LVM2, in order to be more dynamically
manageable, into my other logical volumes (home and home-backup-image,
media and media-backup-image, log, which I decided didn't need a backup
image, mail and mail-backup-image, etc).
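For the curious, the command-line append in question is the kernel's md=
boot parameter (documented in the kernel's Documentation/md.txt); a
GRUB-legacy kernel line might look something like the following, with
the array number, member devices, and root partition all being
assumptions for illustration:

```shell
# md=d0,...  assembles a *partitionable* array (md_d0) from the listed
# members at boot, no initrd needed; root= then names a partition on it.
kernel /bzImage md=d0,/dev/sda5,/dev/sdb5,/dev/sdc5,/dev/sdd5 root=/dev/md_d0p1
```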

The backup images are there to prevent the one thing RAID redundancy does
NOT protect against -- fat-fingered admins!  Of course, the root-backup
also protects against the occasional issue with a bad update making my
working root unbootable, or without a working gcc or portage or whatever,
giving me an emergency root backup boot option, whether the main root boot
failure is due to my own fat-fingering, or the occasional bad upgrade one
might have with ~amd64 plus pulling in stuff like modular-X and gcc-4
before it's even stable enough for ~arch!

I've been VERY impressed with the speed improvement of the system over
bog-standard single-disk PATA.  Now that I know how much more responsive
the system is with 2-4-way striped RAID (a four-disk RAID-6 is
effectively 2-way striped; the RAID-0 is of course 4-way striped, as is
swap), I wish I had done it earlier!

As I mentioned at the top, I recommend kernel RAID, for two reasons.
One, it massively decreases porting or upgrade worries, as it's not
dependent on specific hardware, only on standard SATA hardware.  Two,
the mixed-RAID implementation I've set up as described above isn't
possible, to my knowledge, on hardware RAID.  Those two combined, plus
the fact that I've already got a dual-processor system, so the rather
small CPU hit of software RAID matters even less, PLUS the fact that I
could direct-boot it, something I had thought was only possible with
hardware RAID, made this by FAR the best choice possible for me.

Some of that may apply to your current RAID-1 situation, some not.  If you
are only going all RAID-1 because you didn't realize you could do
mixed-RAID, depending on your usage, you may wish to reconsider doing
mixed-RAID, now that you know it's an option.  With a two-physical-drive
solution, you can at least implement RAID-0 for /tmp and the like,
/provided/ that it's not absolutely critical to keep it from going down
period.  Or you can throw another drive in and make it RAID-5, with a
small RAID-1 for /boot and possibly a RAID-0 for non-critical data.

**  Something that *WILL* apply to your situation, even (especially) if
you are sticking with RAID-1 only -- for installation, you can do a
conventional single-drive installation if necessary, with no RAID
drivers needed on the LiveCD.  When you build your kernel, just ensure
that it includes software RAID built-in, along with the regular SATA
chipset drivers.  Then, after you are up and running on the single drive,
create a "degraded" RAID-1 on the second drive, activate it, partition
it if you have it set up as partitionable RAID, create your filesystems
on it, mount them, and copy your system over from the single drive to
the degraded-but-operational RAID-1.  Once that's done and GRUB is
installed to the MBR of the degraded RAID-1, reboot onto the degraded
RAID-1, and then activate what WAS your single drive as the second
RAID-1 drive.
It'll take some time to mirror everything over, doing its recovery cycle,
destroying the single-drive installation in the process, but when it's
done, you'll have a fully active non-degraded RAID-1 going, all without
requiring the RAID drivers on the LiveCD, only the standard SATA drivers. 
The process of installing a RAID system in this manner, by installing to a
single drive then activating the RAID in degraded mode to copy everything
over, before bringing in the single drive as the missing one and
recovering, is covered in more detail in the various RAID HOWTOs and
Gentoo documentation.
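The degraded-then-recover dance above can be sketched with mdadm --
here assuming the blank second drive is sdb and the running single-drive
install is on sda, with partition numbers purely illustrative:

```shell
# 1. Create a RAID-1 with one member deliberately "missing" (degraded).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing

# 2. Make a filesystem, mount it, and copy the running system over;
#    then install GRUB, fix fstab, and reboot onto /dev/md0.
mke2fs -j /dev/md0
mount /dev/md0 /mnt/newroot

# 3. After rebooting onto the degraded array, add the old drive as the
#    second member; the kernel mirrors everything in a recovery pass,
#    overwriting the old single-drive installation.
mdadm --add /dev/md0 /dev/sda1
cat /proc/mdstat    # watch the rebuild progress
```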

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman in
http://www.linuxdevcenter.com/pub/a/linux/2004/12/22/rms_interview.html

