> As for RAID on a firewall, uh...no, all things considered, I'd rather
> AVOID that, actually.  Between added complexity,
What complexity?
>  added boot time, and
> disks that can't be used without the RAID controller,
Why would you want to use your disk WITHOUT the RAID controller?
>  it is a major
> loser when it comes to total up-time if you do things right.  Put a
> second disk in the machine, and regularly dump the primary to the
> secondary.  Blow the primary drive, you simply remove it, and boot off
> the secondary (and yes, you test test test this to make sure you did it
> right!). 
Now you're talking crazy. Let's consider the two setups:
No-RAID setup:
  - two separately controlled disks; you are in charge of syncing
between them
  - if one dies, the machine goes down, and you have to walk over to it
and manually boot from the backup disk
  - IF you had important data on the dead disk not yet backed up, you
are screwed.
You could almost look at this as poor man's manual pretend RAID.
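The manual sync step in that setup could be scripted with rsync; a minimal sketch, assuming the secondary disk is already partitioned, formatted, and mounted somewhere (the function name and paths are mine, not from the thread):

```shell
# sync_to_standby SRC DST
# Mirror SRC onto DST so the standby disk stays bootable. On a real
# firewall SRC would be / and DST the secondary disk's mount point
# (both are assumptions here).
sync_to_standby() {
    # -a        preserve permissions, owners, timestamps, symlinks
    # -H        preserve hard links
    # -x        stay on one filesystem (skips /proc, other mounts)
    # --delete  remove files on DST that are gone from SRC, so the
    #           standby is a true mirror
    rsync -aHx --delete "$1/" "$2/"
}

# Example: sync_to_standby / /mnt/standby   (then test test test that
# the standby actually boots!)
```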

RAID setup:
  - two disks, constantly synced; if one dies, the machine does NOT go down
  - if a disk fails, just go and plug a new one in _at your
convenience*_ and it will automatically rebuild, a task any person could
perform with proper direction. Not a second's downtime.

* this is _very_ important if your machine is hosted where you don't
have easy physical access to it. Machines at a colo center would be a
very common scenario.
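With Linux software RAID (md), the kernel reports array health in /proc/mdstat, so spotting a degraded mirror can be automated; a hedged sketch (the parsing heuristic and function name are mine, and hardware RAID controllers have their own tools instead):

```shell
# degraded_arrays [MDSTAT] -- list md arrays running with a missing
# member, i.e. whose status line shows an underscore like [2/1] [_U].
# Defaults to reading the live /proc/mdstat.
degraded_arrays() {
    awk '/^md/ { name = $1 }      # remember which array we are in
         /\[[0-9]+\/[0-9]+\]/ && /_/ { print name }  # degraded status
        ' "${1:-/proc/mdstat}"
}

# Example: degraded_arrays   # prints nothing while all mirrors are whole
```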
>  RAID is great when you have constantly changing data and you
> don't want to lose ANYTHING EVER (i.e., mail server).  When you have a
> mostly-static system like a firewall, there are simpler and better ways.
>   
RAID is great for any server. So are SCSI drives. If you are a company
that loses more money in a few hours (or even minutes) of downtime than
it costs to invest in proper servers with proper hardware RAID + SCSI
disks, then you are ill-advised _not_ to RAID all your mission-critical
servers. And have backup machines, too!  Preferably load-balanced.
> A couple months ago, our Celeron 600 firewall seemed to be having
> "problems", which we thought may have been due to processor load.  We
> were able to pull the disk out of it, put it in a much faster machine,
> adjust a few files, and we were back up and running quickly...and found
> that the problem was actually due to a router misconfig and a run-away
> nmap session.  Would not have been able to do that with a RAID card.
>   
Next time, you may want to check what the machine is actually doing
before you start blaming your hardware.
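Checking for a CPU hog (like that runaway nmap) takes one command; a sketch assuming a procps-style ps (the function name is mine):

```shell
# top_cpu [N] -- show the N busiest processes by CPU (default 5),
# header included. On the firewall above this would have fingered
# the runaway nmap before anyone swapped hardware.
top_cpu() {
    ps -eo pcpu,pid,comm --sort=-pcpu | head -n "$(( ${1:-5} + 1 ))"
}
```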
I personally would not trust the OS setup on one machine to run smoothly
on any machine that isn't more or less identical hardware-wise.
Especially not for a production unit.
But if you really wanted to, you could move the entire RAID array over
to a different machine, if that makes you happy.

Alec
