On Aug 1, 2006, at 22:23, Luke Lonergan wrote:

> Torrey,
>
> On 8/1/06 10:30 AM, "Torrey McMahon" <[EMAIL PROTECTED]> wrote:
>
>> http://www.sun.com/storagetek/disk_systems/workgroup/3510/index.xml
>>
>> Look at the specs page.
>
> I did.
>
> This is 8 trays, each with 14 disks and two active Fibre Channel
> attachments.
>
> That means that 14 disks, each with a platter rate of 80MB/s, will be
> driven over a 400MB/s pair of Fibre Channel connections, a slowdown of
> almost 3 to 1.
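
(just to put rough numbers on that point, using the per-spindle rate quoted above - a quick back-of-the-envelope in Python:)

  # per-tray arithmetic, using the figures from Luke's note above
  disks_per_tray = 14
  platter_mb_s   = 80.0    # assumed sustained platter rate per drive
  fc_pair_mb_s   = 400.0   # the pair of FC attachments per tray
  aggregate = disks_per_tray * platter_mb_s    # 1120 MB/s of raw platter rate
  print(aggregate / fc_pair_mb_s)              # ~2.8, i.e. almost 3:1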

> This is probably the most expensive, least efficient way to get disk
> bandwidth available to customers.
>
> WRT the discussion about "blow the doors", etc., how about we see some
> bonnie++ numbers to back it up.


actually .. there are SPC-2 vdbench numbers out at:
http://www.storageperformance.org/results

see the full disclosure report here:
http://www.storageperformance.org/results/b00005_Sun_SPC2_full-disclosure_r1.pdf

of course that's a 36GB 15K RPM FC drive system with 2 expansion trays, 4 HBAs and 3 years of maintenance in the quote, spec'd at $72K list (or $56/GB) .. (I'll use list numbers for comparison since they're the easiest)
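
(sanity check on that $/GB figure - the configured capacity it implies:)

  # capacity implied by the quote in the full disclosure report
  list_price = 72000.0    # USD, as quoted
  per_gb     = 56.0       # USD/GB, as quoted
  print(list_price / per_gb)    # ~1286 GB, on whatever capacity basis the quote used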

if you've got a copy of the vdbench tool you might want to try the profiles in the appendix on a thumper - I believe the bonnie/bonnie++ numbers tend to be skewed by single-threaded, small-blocksize memory transfer effects.

now to bring the thread full circle to the original question of price/performance, and increasing the scope to include the X4500 .. for single-attached, low-cost systems, thumper is *very* compelling, particularly when you factor in the density .. for example, using list prices from http://store.sun.com/

X4500 (thumper) w/ 48 x 250GB SATA drives = $32995 = $2.68/GB
X4500 (thumper) w/ 48 x 500GB SATA drives = $69995 = $2.84/GB
SE3511 (dual controller) w/ 12 x 500GB SATA drives = $36995 = $6.17/GB
SE3510 (dual controller) w/ 12 x 300GB FC drives = $48995 = $13.61/GB

So a thumper configured with 250GB SATA drives (server-attached, with 16GB of cache .. err .. RAM) is roughly 5x lower in cost/GB than a 3510 configured with 300GB FC drives (dual controllers w/ 2 x 1GB of typically mirrored cache). And a thumper configured with 500GB SATA drives (again server-attached) is about 2.2x lower in cost/GB than a 3511 configured with 500GB SATA drives (again dual controllers w/ 2 x 1GB of typically mirrored cache).
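
(the same arithmetic in reusable form, if you want to plug in your own quotes - note this divides by raw capacity, drive count x drive size, so the per-GB results can land slightly off the rounded figures above:)

  # cost/GB from list price and raw capacity (drive count x drive size);
  # the capacity basis may differ slightly from the figures in the table
  def cost_per_gb(list_price, drives, gb_per_drive):
      return float(list_price) / (drives * gb_per_drive)

  x4500_250 = cost_per_gb(32995, 48, 250)   # X4500 w/ 250GB SATA
  x4500_500 = cost_per_gb(69995, 48, 500)   # X4500 w/ 500GB SATA
  se3511    = cost_per_gb(36995, 12, 500)   # SE3511 w/ 500GB SATA
  se3510    = cost_per_gb(48995, 12, 300)   # SE3510 w/ 300GB FC

  print(se3510 / x4500_250)   # ~5x in favor of the 250GB thumper
  print(se3511 / x4500_500)   # ~2x in favor of the 500GB thumper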

For a single-attached system - you're right - 400MB/s is your effective throttle (controller speed, really) on the 3510, and realistic throughput on the 3511 is probably going to be less than half that number once we factor in the back pressure we'll get on the cache against the back-end loop .. your bonnie++ block transfer numbers on a 36-drive thumper were showing about 424MB/s on 100% write and about 1435MB/s on 100% read .. it'd be good to see the vdbench numbers as well (but I've had a hard time getting my hands on a thumper since most appear to be out at customer sites)
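
(for scale, putting those measured numbers against the 3510's controller ceiling:)

  # measured thumper bonnie++ block-transfer numbers quoted above,
  # compared with the 400MB/s dual-controller ceiling on the 3510
  fc_ceiling    = 400.0    # MB/s
  thumper_write = 424.0    # MB/s, 36-drive thumper, 100% write
  thumper_read  = 1435.0   # MB/s, 36-drive thumper, 100% read
  print(thumper_write / fc_ceiling)   # ~1.06x the 3510 ceiling
  print(thumper_read / fc_ceiling)    # ~3.6x the 3510 ceiling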

Now with thumper you are SPoF'd on the motherboard and operating system - so you're not really getting the availability aspect of dual controllers .. but given the value, you could easily buy two and still come out ahead .. you'd have to work out some sort of timely replication of transactions between the two units (a rough sketch of one way to do that is below) and deal with failure cases with something like a cluster framework.

Then for multi-initiator cross-system access, we're back to either some sort of NFS or CIFS layer, or we could always explore target mode drivers and virtualization .. so once again, there could be a compelling argument coming in that arena as well.

Now, if you already have a big shared FC infrastructure, throwing dense servers into the middle of it all may not make the most sense yet - but on the flip side, we could be seeing a shrinking market for single-attach, low-cost arrays.
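
(a minimal sketch of that replication idea, assuming ZFS on both boxes with ssh between them - the dataset name and standby hostname are made up, it presumes an initial full send has already seeded the standby, and a real deployment would want error handling plus a cluster framework driving failover:)

  #!/usr/bin/env python
  # crude periodic replication from a primary thumper to a standby one,
  # using zfs snapshot + incremental zfs send piped over ssh
  import subprocess, time

  DATASET = "tank/db"     # hypothetical dataset to protect
  STANDBY = "thumper2"    # hypothetical standby host

  def replicate(prev, cur):
      # snapshot, then ship the delta since the previous snapshot
      subprocess.check_call(["zfs", "snapshot", "%s@%s" % (DATASET, cur)])
      send = subprocess.Popen(
          ["zfs", "send", "-i", "@" + prev, "%s@%s" % (DATASET, cur)],
          stdout=subprocess.PIPE)
      subprocess.check_call(["ssh", STANDBY, "zfs", "recv", "-F", DATASET],
                            stdin=send.stdout)
      send.stdout.close()
      if send.wait() != 0:
          raise RuntimeError("zfs send failed for %s@%s" % (DATASET, cur))

  # assumes DATASET@repl-0 has already been sent to STANDBY in full once
  prev, n = "repl-0", 1
  while True:
      cur = "repl-%d" % n
      replicate(prev, cur)
      prev, n = cur, n + 1
      time.sleep(300)    # every 5 minutes; tune to taste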

Lastly (for this discussion anyhow) there are the reliability and quality issues with SATA vs FC drives (bearings, platter materials, tolerances, head skew, etc.) .. couple that with the fact that dense systems aren't so great when they fail .. so I guess we're right back to choosing the right systems for the right purposes (ZFS does some great things around failure detection and workaround) .. but I think we've beaten that point to death ..

---
.je