On Tue, 2003-07-29 at 14:00, scott.marlowe wrote:
> On 29 Jul 2003, Ron Johnson wrote:
> 
> > On Tue, 2003-07-29 at 11:18, scott.marlowe wrote:
> > > On 29 Jul 2003, Ron Johnson wrote:
> > > 
> > > > On Tue, 2003-07-29 at 10:14, Vivek Khera wrote:
> > > > > >>>>> "GS" == Greg Stark <[EMAIL PROTECTED]> writes:
> > > > > 
> > > > > GS> "scott.marlowe" <[EMAIL PROTECTED]> writes:
> > > > > 
> > > > > GS> But you have to actually test your setup in practice to see if it
> > > > > GS> hurts. A big data warehousing system will be faster under RAID5
> > > > > GS> than under RAID1+0 because of the extra disks in the
> > > > > GS> stripeset. The more disks in the stripeset the more bandwidth you
> > > > > GS> get.
> > > > > 
> > > > > Anyone have ideas on 14 spindles?  I just ordered a disk subsystem
> > > > > with 14 high speed (U320 15kRPM) SCSI disks to hook up with a dell
> > > > > PERC3/DC controller (only 128MB cache, though).
> > > > 
> > > > 14 drives on one SCSI card, eh?  I'd be worried about saturating
> > > > the bus.
> > > 
> > > I'm pretty sure those PERCs are based on the megaraid cards, which can 
> > > handle 3 or 4 channels each...
> > 
> > Each with 14 devices?  If so, isn't that a concentrated point of
> > failure, even if the channels are 1/2 full?
> 
> Yep.  I've built one once before when BIG hard drives were 9 gigs.  :-)
> 
> And it is a point of concentrated failure, which brings me to my favorite 
> part about the LSI megaraid cards (which is what most / all PERC3s 
> apparently are).
> 
> If you build a RAID1+0 or 0+1, you can separate it out so each sub part is 
> on its own card, and the other cards keep acting like one big card.  
> Assuming the bad card isn't killing your PCI bus or draining the 12V rail 
> or something.

Sounds like my kinda card!

Is the cache battery-backed?

How much cache can you stuff in them?



> > > > Maybe it's an old rule of thumb, but I wouldn't fill a SCSI chain
> > > > more than half full.
> > > 
> > > It's an old rule of thumb, but it still applies; it just takes more drives 
> > > to saturate the channel.  Figure ~30 to 50 MBytes a second per drive; on 
> > > a U320 port it would take 10 drives to saturate it.  And considering random 
> > > accesses will be much slower than the max ~30 megs a second off-the-
> > > platter rate, it might take more than the max 14 drives to saturate U320.
> > 
> > Ok.  You'd still saturate the 133MB/s PCI bus at 133/30 = 4.4 drives.
> 
> But that's seq scan.  For many database applications, random access 
> performance is much more important.  Imagine 200 people entering 
> reservations of 8k or less each into a transaction processing engine.  
> Each transaction's chance to hit an unoccupied spindle is what really 
> counts.  If there's 30 spindles, each doing a stripe's worth of access all 
> the time, it's likely to never flood the channel.  
> 
> If random access is 1/4th the speed of seq scan, then you need to multiply 
> it by 4 to get the number of drives that'd saturate the PCI bus.
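
A quick sketch of the arithmetic above, in case anyone wants to plug in
their own drive numbers.  The per-drive rates and the 4x random-access
penalty are the thread's rough estimates, not measured figures:

```python
# Back-of-the-envelope bus saturation math from the discussion above.
# All per-drive figures are the thread's rough assumptions, not benchmarks.

U320_MBPS = 320      # Ultra320 SCSI channel bandwidth, MB/s
PCI_MBPS = 133       # classic 32-bit/33 MHz PCI bus bandwidth, MB/s
SEQ_PER_DRIVE = 30   # assumed sequential MB/s per drive (low end of 30-50)
RANDOM_SLOWDOWN = 4  # assumption: random I/O is ~1/4 of sequential

def drives_to_saturate(bus_mbps, per_drive_mbps):
    """How many drives at a given per-drive rate fill a bus."""
    return bus_mbps / per_drive_mbps

# Sequential scans: the buses fill quickly.
print(drives_to_saturate(U320_MBPS, SEQ_PER_DRIVE))  # ~10.7 drives per U320 channel
print(drives_to_saturate(PCI_MBPS, SEQ_PER_DRIVE))   # ~4.4 drives flood the PCI bus

# Random access: each drive delivers ~1/4 the throughput, so roughly
# 4x as many spindles fit before PCI becomes the bottleneck.
print(drives_to_saturate(PCI_MBPS, SEQ_PER_DRIVE / RANDOM_SLOWDOWN))  # ~17.7
```

So under these assumptions a 14-drive shelf doing mostly random I/O
never quite floods even plain PCI, which matches Scott's point.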

Maybe it's just me, but I've never seen a purely TP system.

Even if you roll off the daily updates to a "reporting database" each
night, some yahoo manager with enough juice to have his way still
wants up-to-the-minute reports...

Better yet, the Access Jockey, who thinks s/he's an SQL whiz but
couldn't JOIN himself out of a paper bag...

-- 
+-----------------------------------------------------------------+
| Ron Johnson, Jr.        Home: [EMAIL PROTECTED]             |
| Jefferson, LA  USA                                              |
|                                                                 |
| "I'm not a vegetarian because I love animals, I'm a vegetarian  |
|  because I hate vegetables!"                                    |
|    unknown                                                      |
+-----------------------------------------------------------------+
