On Mon, 29 Oct 2001, Rich C wrote:
> I just wanted to beat this horse until it worked.

  Ah.  I am familiar with that.  :-)

>> A hardware RAID controller should not expose any of its internal
>> housekeeping data to the OS (or other software).  As far as the OS is
>> concerned, you just have a single, normal drive.  All the mirroring
>> happens at a layer lower than that.
>
> But didn't you say earlier that it's not true hardware RAID?

  Er... now we are getting into semantics.  What I meant was, this device is
doing the RAID in the device driver for the controller.  The device driver
shows a single device -- the virtual RAID volume -- to the OS core.  So the
OS never knows you have two disks.

#ifdef GORY_DETAILS

  There are at least four different ways to do RAID that I am aware of:

  There is "pure software" RAID, which is done entirely in the OS.  In this
case, your boot loader (and any other pre-OS software) is generally not
redundant, since you need the OS before your RAID volume becomes available.
Regular host adapters (and drivers thereof) are used.

[Disk]----[HA]----[Driver]
                          >---[OS RAID]----> Rest of System
[Disk]----[HA]----[Driver]
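
  As a concrete illustration of the pure-software case: on a Linux box of
this vintage, a two-disk mirror under the md driver would be described in
/etc/raidtab, roughly like this.  (A sketch only -- the device names and
chunk size here are assumptions, not anyone's actual setup.)

```
# /etc/raidtab -- hypothetical RAID-1 (mirror) set for the Linux
# software RAID (md) driver.  mkraid reads this to initialize /dev/md0.
raiddev /dev/md0
    raid-level              1
    nr-raid-disks           2
    persistent-superblock   1
    chunk-size              32
    device                  /dev/hda1
    raid-disk               0
    device                  /dev/hdc1
    raid-disk               1
```

Note that both member devices sit behind ordinary IDE drivers; only the
md layer knows they form one volume.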

  There is "pure hardware" RAID, which is done entirely in the hardware.
You connect disks to a magic box, and you connect that magic box to your
standard host adapter.  The host adapter (and OS) see a single device, and
have no knowledge of the RAID at all.  No special drivers are needed.

[Disk]
      >---[HW RAID]----[HA]----[Driver]----> Rest of System
[Disk]

  There are "RAID host adapters".  To the human eye, these look something
like a regular host adapter, but implement RAID.  The OS sees this as a
different kind of disk controller, and needs different drivers to support
it.  There are two kinds of these.

  There are RAID HAs with onboard firmware and processing power.  All the
calculations and RAID intelligence are self-contained.  The host CPU and
memory are not used for the RAID process.  On IBM-PCs, these controllers
provide a BIOS INT13 interface so things like LILO can access the RAID
volume.

  [Disk]
        >---[HA/RAID]----[Driver]----> Rest of System
  [Disk]

  Then there are RAID HAs that implement the RAID in their drivers.
These are basically standard HAs with custom firmware.  The INT13 interface
implements the RAID intelligence for boot.  Once the OS loads, the driver
for the HA has to do the RAID work.  Host CPU and memory are used in this
case.

  [Disk]
        >---[HA]----[Driver/RAID]---->
  [Disk]

#endif /* GORY_DETAILS */

> Maybe. You don't really even NEED initrd unless you need some special
> drivers loaded, or am I wrong?

  You are correct, but "special drivers" means anything not standard IDE,
including your Promise controller.

#ifdef GORY_DETAILS

  Red Hat uses a modular kernel.  The only disk drivers included in their
kernels are for IDE.  If you have any non-IDE devices (SCSI, RAID, old
proprietary CD-ROM controllers, PCMCIA, etc.), a kernel module has to be
loaded before they will work.  This lets Red Hat distribute a single kernel
to everyone, and just load modules as needed.
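
  For instance, Red Hat's installer records which modules the hardware
needs in /etc/modules.conf, and tools like mkinitrd consult it later.  A
sketch (these particular aliases are assumptions for illustration):

```
# /etc/modules.conf -- hypothetical entries; the installer writes the
# real ones based on the hardware it detects.
alias scsi_hostadapter aic7xxx   # module to load for the SCSI disks
alias eth0 3c59x                 # driver for the first Ethernet card
```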

  The catch is, you have a chicken-and-egg problem if your root filesystem
is on a non-IDE device.  You need the root FS to load a module, but you need
to load a module before you can get to the root FS.

  initrd to the rescue.  During install, the system builds an "initial
ramdisk" image which contains the needed modules (and code to load them).
This initrd is loaded by LILO along with the kernel.  The first thing the
kernel does is mount the initrd image as root, and run the code that loads
the modules.  Now that the needed modules are loaded, the kernel can free
the initrd, mount the *real* root, and continue on.
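
  The moving parts look roughly like this -- a sketch, with the kernel
version, paths, and root device assumed rather than taken from an actual
install:

```
# Build the initial ramdisk for the installed kernel (mkinitrd decides
# which modules to include, e.g. from the modules.conf aliases):
/sbin/mkinitrd /boot/initrd-2.4.9-13.img 2.4.9-13

# Matching /etc/lilo.conf stanza, so LILO loads the image alongside
# the kernel:
#   image=/boot/vmlinuz-2.4.9-13
#       label=linux
#       initrd=/boot/initrd-2.4.9-13.img
#       root=/dev/sda1

# Re-run /sbin/lilo afterward so the boot map picks up the change.
```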

#endif /* GORY_DETAILS */

> That's primarily why I built this machine, instead of shelling out the bucks
> for a 1.7GHz P4.

  Especially since the 1.7 GHz Pentium 4 doesn't appear to deliver the
promise you would expect from a 1.7 GHz Pentium III.  :-)

>> Probably not so important to you, but it can be a very significant
>> difference if you are tuning for a particular application.  :-)
>
> It is important to me. Once I get this system ironed out and I'm
> comfortable with it, I plan to build a real time kernel using RTLinux
> and play with that in an SMP environment.

  Cool.  (Hey, you could give a presentation to GNHLUG about that...)

  It can work the other way, too.  If you have a single-threaded application
that cannot easily be made parallel, you buy the fastest single processor
you can get, since multiple CPUs will not help you.  (Well, I'm ignoring OS
tasks -- it may make sense to have a dual-proc system.)

-- 
Ben Scott <[EMAIL PROTECTED]>
| The opinions expressed in this message are those of the author and do not |
| necessarily represent the views or policy of any other person, entity or  |
| organization.  All information is provided without warranty of any kind.  |

