On Dec 5, 2007 1:50 PM, Greg Freemyer <[EMAIL PROTECTED]> wrote:
>
> On Dec 4, 2007 1:49 PM, Aaron Kulkis <[EMAIL PROTECTED]> wrote:
> > Chris Worley wrote:
> > > On Dec 4, 2007 10:22 AM, Jc Polanycia <[EMAIL PROTECTED]> wrote:
> > >>> Off topic, as I seldom partition anything (unpartitioned drives
> > >>> perform best), but, you're setting yourself up for disaster using LVM
> > >>> (any corruption to the LVM layer is not recoverable... you'll lose
> > >>> everything... been there done that), and the performance is poor, and
> > >>> MD RAID5/6 devices can be grown (add more disks).
> > >>>
> > >>> Chris
> > >>>
> > >> Fair enough.  I appreciate the input because I haven't run across any
> > >> real-world stories about LVM corruption.  I have personally encountered
> > >> corruption problems with RAID5/6 as well as problems with decreased
> > >> performance as a RAID5 structure gets more members added to it.
> > >
> > > I saw some RAID6 issues last year, so I use RAID5... but recent tests
> > > have shown MD RAID6 as solid.
> > >
> > > "Decreased performance as more members get added to it"?  Bull!!!  I'm
> > > guessing you have another bottleneck that has led you to this
> > > conclusion.
> > >
> > > While the performance increase doesn't scale linearly as disks are
> > > added (some CPU overhead is added with each additional drive), the more
> > > disks, the better the performance.  I'm sure there is some Amdahl's
> > > law limit to the increased performance scalability, but I run RAIDs up
> > > to 12 drives, and see performance gains with each new member.
> > >
> >
> > You're hallucinating.  That defies basic information theory.
> >
> > Your assertion is akin to suggesting that you power your
> > computers with a perpetual motion machine (despite the
> > fact that such would violate the 1st, 2nd, and 3rd laws
> > of thermodynamics).
>
> Single threaded access to a raid array may not be helped by adding
> drives.  Drive access can end up being sequential and you're not really
> buying anything.
>
> Multi-threaded storage performance is definitely positively affected
> by adding disks to an array.
>
> For multi-threaded access, effectively each disk can do N IOPS (I/Os per second).
>
> So if you have M drives, you can do M*N IOPS.
>
> The trouble with RAID5 is that it typically requires 4 I/Os to update
> a single sector, i.e.:
>
> read the old checksum (really the parity),
> read the original sector (so you can subtract it from the checksum),
> write the updated sector,
> write the new checksum.
>
> So it ends up being M*N / 4  IOPS.
Greg,

Doesn't that assume a sector/block mismatch?  If your sectors and
blocks are aligned (sectors are some multiple of blocks), then no
read-modify-write is necessary.

Even if there is a misalignment, if the amount of data being written
is large, the read-modify-write is only needed at the head and
tail ends of the entire operation.

Also, the writes are all issued in parallel.  The above makes it sound
like the writes of the updated stripes and the write of the checksum
are serial... they should all be posted nearly simultaneously (with
some serialization introduced by the CPU).
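To make the arithmetic in this thread concrete, here's a rough Python sketch of the write-IOPS model being discussed.  The per-disk IOPS figure and the full-stripe case are my own illustrative assumptions, not measurements from anyone's array:

```python
def raid5_write_iops(disks, iops_per_disk, full_stripe=False):
    """Aggregate random-write IOPS for a RAID5 array of `disks` members.

    A small (sub-stripe) write costs 4 I/Os: read old data, read old
    parity, write new data, write new parity.  A full-stripe write
    needs no reads: parity is computed in memory from the data being
    written, so a stripe across M disks costs M writes for M-1 data
    blocks.
    """
    if full_stripe:
        # M-1 data blocks delivered per M physical writes.
        return disks * iops_per_disk * (disks - 1) / disks
    # Read-modify-write: 4 physical I/Os per logical write.
    return disks * iops_per_disk / 4

# 8 disks at an assumed 100 IOPS each:
print(raid5_write_iops(8, 100))                    # small writes: 200.0
print(raid5_write_iops(8, 100, full_stripe=True))  # full stripe:  700.0
```

Note the 4-drive break-even Greg mentions falls out directly: `raid5_write_iops(4, 100)` is 100, the same as a single disk, while aligned full-stripe writes dodge the penalty entirely.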

>
> So from a performance perspective on _writes_, you need at least a
> 4-drive array just to be as fast as a single disk.
>
> Reads OTOH just need to read the sector they want (unless you have a
> failed drive).
>
> So _read_ performance is M*N.  Or always faster than a single drive.
>

On a RAID5 read you only need M-1 (or M-2 for RAID6) of the parallel
operations to complete... you can discard the slowest disk's result,
as its data can be recreated from the others.
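A toy illustration of that point: with XOR parity, any one member's block can be rebuilt from the rest, so a stripe read can complete after M-1 of M disks respond.  The block contents here are arbitrary made-up data:

```python
def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]  # data blocks on 3 disks
parity = xor_blocks(data)           # parity block on a 4th disk

# Suppose the disk holding data[1] is slowest; rebuild its block
# from the three members that have already answered:
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```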

Chris
> Greg
> --
> Greg Freemyer
> Litigation Triage Solutions Specialist
> http://www.linkedin.com/in/gregfreemyer
> First 99 Days Litigation White Paper -
> http://www.norcrossgroup.com/forms/whitepapers/99%20Days%20whitepaper.pdf
>
> The Norcross Group
> The Intersection of Evidence & Technology
> http://www.norcrossgroup.com
> --
>
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
>
>