Is your DB read-bound or write-bound?

If reading's the problem, then adding more RAM is probably the
cheapest way to go.  Go to 32 GB, or even higher if your hardware can
support it.  If your entire DB fits into RAM, then disk read
performance is kind of irrelevant.
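On the MySQL side, the knob that matters here is the InnoDB buffer pool (assuming your big tables are InnoDB).  A sketch, not a recipe -- the 24G figure is purely illustrative for a 32 GB box, and the service name is a guess:

```shell
# Sketch only: grow the InnoDB buffer pool on a 32 GB machine.
# 24G is an illustration, not a recommendation -- leave headroom
# for the OS, connections, and the ZFS ARC (which caches the same
# blocks unless you cap it with zfs_arc_max).
cat >> /etc/my.cnf <<'EOF'
[mysqld]
innodb_buffer_pool_size = 24G
EOF
svcadm restart mysql   # SMF service name varies by install; an assumption
```

If the DB lives on ZFS, remember the buffer pool and the ARC compete for the same RAM, so size them together.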

If writing's the problem, then the first thing to do is to ditch
raidz1.  An N-drive raidz vdev has roughly the random I/O
performance of a single drive.  Move to mirrors (with your four
drives, two striped two-way mirrors, i.e. RAID 10) and you'll see
*much* better performance.
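For the record, a striped-mirror layout on the four internal drives would look something like this (pool and device names are placeholders, and creating a new pool destroys existing data, so treat it as a sketch):

```shell
# Sketch only -- "dbpool" and the cNtNdN names are placeholders
# for your four 70G SAS drives.  Two striped two-way mirrors
# (RAID 10 equivalent): each mirror vdev adds random I/O capacity,
# unlike a single raidz vdev.
zpool create dbpool \
    mirror c0t0d0 c0t1d0 \
    mirror c0t2d0 c0t3d0
```

You give up capacity: roughly 140G usable from four 70G drives, versus ~210G under raidz1.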

Personally, I'd avoid spending tons of cash on high-performance drives
this year if I could put it off for a while.  Odds are SSDs will start
making a big dent in the 15k drive market soon, and things will be a
lot clearer by this time next year.


Scott

On Tue, Jul 1, 2008 at 5:20 PM, Justin Vassallo
<[EMAIL PROTECTED]> wrote:
> **Use:
> Mission-critical 33GB MySQL db with 15 large tables. 6GB of bin
> (transaction) logs daily (we keep 5 days unzipped).
>
> **Current hardware setup:
> SunFire X4200 M2 with 16GB memory, 4 internal 70G 15krpm 2.5" SAS drives,
> with raidz1 across the 4 disks. The db feels a little slow and is getting
> slower quickly.
>
> **Estimated use:
> 200GB db in Jun 2009, with 30GB of transaction logs daily, and 500GB in Jun
> 2010.
>
> **Objective:
> Hardware that will take me to 2010 in terms of performance, reliability and
> capacity. Would like to provide a highly available storage, including during
> array firmware upgrades.
>
> I don't want to be the first on the moon. I prefer a proven and stable
> solution.
>
> Less power consumption is a benefit but not a priority.
>
> NB - I need to set aside 200G for a fileserver
>
> **Hardware options being considered:
> A) upgrade of SunFire's memory to 32G, and direct connect to the array to
> avoid cost of FC switch
> B) 1*6140 w 16*146GB 3.5" 15krpm 4 Gb/sec FC disks, dual controllers, 2*2G
> cache; dual dual-port PCIe cards (2 spare ports on sunfire). Max 550W
> C) 2*2540 w 12*300GB 3.5" 15krpm 3 Gb/sec SAS disks, dual controllers,
> 2*512M cache; dual dual-port PCIe cards (no spare). Max 515W/array=1030W
>
> **Consideration factors:
> 1) 2.5" disks produce less vibration and are less sensitive to it, so seek
> times are better and more consistent. Also, less heat so less energy consumed.
> Any other array I should consider?
> 2) Separate arrays typically provide better hardware redundancy. But isn't
> the 6140 completely redundant up till each disk's connection? Which setup
> should give better redundancy?
> 3) Which setup will perform better? I've seen posts saying the 2540 does
> about half what the 6140 does (zfs-discuss: some trends from the test center
> of SUN/LSI: 2540 / ca. 100 KIOPs, ca. 600 MB/s; 6140 / ca. 200 KIOPs, ca.
> 1000 MB/s)
> 4) HW Raid5 vs HW mirroring vs raidz with a lun/disk. I'm quite lost on this
> one.
> 5) Is layering zfs on top of the 6140 of any benefit, given the
> multi-pathing? (I'm also posting this separately to zfs-discuss)
> 6) Time is against me. Is either setup simpler to set up?
> 7) Anything I'm missing?
>
> justin
>
> _______________________________________________
> storage-discuss mailing list
> [email protected]
> http://mail.opensolaris.org/mailman/listinfo/storage-discuss
>
>
