I actually normally configure them for RAID5...typically 4 drives of data
plus 1 drive's worth of parity (RAID5 distributes the parity across all
five drives, but you give up one drive of capacity).  However, I've also
set up 3 data drives plus 1 of parity.  A little slower than straight
RAID0 striping, but LOTS more reliable.
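
A quick back-of-the-envelope on the capacity trade-off (Python; the 36 GB
drive size is just an example):

    # An n-drive RAID5 set gives (n-1) drives of usable capacity:
    # parity is distributed, but it costs one drive's worth of space.
    def raid5_usable_gb(drives, drive_gb):
        assert drives >= 3, "RAID5 needs at least 3 drives"
        return (drives - 1) * drive_gb

    print(raid5_usable_gb(5, 36.0))   # 5-drive set: 144.0 GB usable
    print(raid5_usable_gb(4, 36.0))   # 4-drive set: 108.0 GB usable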

Now having said that, I've also gone nutso once and done a mirrored RAID5
across two JBODs...but this was a financial system that was logging fund
transfers....so I figured we should be paranoid about it.  Can't remember
the exact name, but a mirror of two RAID5 sets is usually called RAID 51;
some folks lump it in with RAID10, which is properly a stripe across
mirrors.
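
Rough math on what that layout buys you (Python; same example drive size,
and "RAID 51" here is the assumed name for a mirror of two RAID5 sets):

    # Mirror of two 5-drive RAID5 sets: 10 drives, usable space of one set.
    # It survives losing an entire JBOD, plus one more drive on the survivor.
    def raid51_usable_gb(drives_per_set, drive_gb):
        return (drives_per_set - 1) * drive_gb   # the mirror adds no capacity

    print(raid51_usable_gb(5, 36.0))   # 144.0 GB usable out of 360 GB raw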

/brian chee

University of Hawaii ICS Dept
Advanced Network Computing Lab
1680 East West Road, POST rm 311
Honolulu, HI  96822
808-956-5797 voice, 808-956-5175 fax

----- Original Message -----
From: "Charles Lockhart" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, June 05, 2002 10:29 AM
Subject: Re: [luau] SANs


>
> Thanks Brian,
>
> Question: as I understand it, these arrays are typically set to RAID-0 for
> striping across the drives, thereby increasing throughput.  Assuming this
> is correct, do you know how many drives are required, or typically
> required, for this?
>
> -Charles
>
> On Tue, 4 Jun 2002, Brian Chee wrote:
>
> > Hmmmm.....let's see how well this can be done in a nutshell....
> >
> > SANs come in several flavors, but the idea is that from a 64-bit card
> > (yup...if you ain't got 64-bit PCI slots, you're not going to get
> > anything better than Ultra160) you travel over a fiber optic path to
> > either a switch or a hub.  Just like data networks, a switch is a good
> > idea if you have a multipath environment with multiple
> > destinations....if you have but one server with one set of
> > drives...then a switch is not necessary....a single HBA (Fibre Channel
> > comes in 1 Gb/sec or 2 Gb/sec) to an el cheapo hub to the JBOD (just a
> > bunch of disks) is pretty inexpensive and gives you the terrific
> > throughput that folks like about SANs.
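> >
> > A rough sanity check on the bus math (Python; these are nominal spec
> > rates, real-world throughput runs lower):
> >
> >     # Nominal bandwidths in MB/sec (spec maxima, not measured):
> >     PCI_32_33 = 133   # 32-bit/33 MHz PCI, shared by every card on the bus
> >     PCI_64_66 = 533   # 64-bit/66 MHz PCI
> >     ULTRA160  = 160   # Ultra160 SCSI
> >     FC_1G     = 100   # 1 Gb/sec Fibre Channel, ~100 MB/sec of payload
> >     FC_2G     = 200   # 2 Gb/sec Fibre Channel
> >
> >     # 32-bit PCI can't even feed Ultra160 at full rate, let alone 2-gig FC:
> >     for name, mbps in [("Ultra160", ULTRA160), ("1G FC", FC_1G), ("2G FC", FC_2G)]:
> >         verdict = "fits in" if mbps <= PCI_32_33 else "saturates"
> >         print(f"{name}: {mbps} MB/sec, {verdict} a 32-bit/33 MHz slot")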
> >
> > Now iSCSI and such....same thing with a different name, different
> > protocol with different amounts of overhead, etc....keep in mind that
> > if you wanna play your storage over IP...you're paying for the IP
> > overhead....1 Gb/sec Fibre Channel is faster than storage over Gigabit
> > Ethernet due to that overhead.
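> >
> > A crude way to see the overhead (Python; assumes standard 1500-byte
> > frames and, to keep it simple, one iSCSI header per frame):
> >
> >     # Fraction of each Gigabit Ethernet frame that is actual data:
> >     MTU  = 1500               # standard Ethernet payload, bytes
> >     HDRS = 20 + 20 + 48       # IPv4 + TCP + iSCSI basic header segment
> >     WIRE = MTU + 38           # framing, preamble, and inter-frame gap
> >     print((MTU - HDRS) / WIRE)   # ~0.92; a Fibre Channel frame runs ~0.98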
> >
> > Linux is supported well by QLogic, and the Compaq card is a relabeled
> > QLogic (may have changed...heard rumblings about Emulex
> > too).....Interphase had great cards (combined gig Ethernet + Fibre
> > Channel) but I'm not sure they're around anymore????
> >
> > I've run QLogic cards in Solaris, Linux, and NT...they all work
> > well....but a switch is necessary only if you're mixing several
> > systems and have to carve up your array into several pieces.....if
> > it's only one system and one set of drives (can be multiple JBODs),
> > then a hub is fine.  Emulex makes fine cards and hubs....
> >
> > Oh yeah....most Fibre Channel doodads are LC fiber connectors over
> > multimode fiber....those suckers are VERY expensive cables....you can
> > also do SANs over copper, which is LOTS cheaper, just change the
> > GBIC....oh yeah, a Gigabit Ethernet SX fiber GBIC is exactly the same
> > at layer 1 as 1-gig Fibre Channel.....they are interchangeable....
> >
> > /brian chee
> >
> > University of Hawaii ICS Dept
> > Advanced Network Computing Lab
> > 1680 East West Road, POST rm 311
> > Honolulu, HI  96822
> > 808-956-5797 voice, 808-956-5175 fax
> >
>
> _______________________________________________
> LUAU mailing list
> [EMAIL PROTECTED]
> http://videl.ics.hawaii.edu/mailman/listinfo/luau
