> Please tell us how many storage arrays are required to meet a theoretical I/O bandwidth of 244 GBytes/s?

Just considering disks, you need approximately 6,663 of them, all streaming 50 MB/s, with RAID-5 3+1 (for example). That assumes sustained large-block sequential I/O. If instead you have 8 KB random I/O, you need somewhere between 284,281 disks (at 150 IOPS each) and 426,421 disks (at 100 IOPS each).
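
For reference, here's a quick back-of-the-envelope sketch of that arithmetic (my own, not from the original post; it assumes the 244 GBytes/s is really GiB/s, the 50 MB/s is MiB/s, and that RAID-5 3+1 costs one parity disk per three data disks):

GIB = 2**30
target_bw = 244 * GIB          # bytes/sec of user data
raid_overhead = 4 / 3          # RAID-5 3+1: 4 physical disks per 3 data disks

# Sequential case: each disk streams 50 MiB/s.
seq_disks = target_bw / (50 * 2**20) * raid_overhead
print(f"sequential: {seq_disks:,.0f} disks")          # ~6,663

# Random case: 8 KiB I/Os at 100-150 IOPS per disk.
iops_needed = target_bw / 8192
for disk_iops in (150, 100):
    disks = iops_needed / disk_iops * raid_overhead
    print(f"random @ {disk_iops} IOPS/disk: {disks:,.0f} disks")  # ~284,281 / ~426,421

Those three printed figures round to the 6,663, 284,281, and 426,421 quoted above.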

Dave

Richard Elling wrote:
> Anton B. Rang wrote:
>> Thumper seems to be designed as a file server (but curiously, not for high availability).

> hmmm... Often people think that because a system is not clustered, it is not designed to be highly available. Any system which provides a single view of data (e.g., a persistent storage device) must have at least one single point of failure. The four components in a system which break most often are fans, power supplies, disks, and DIMMs. You will find that most servers, including thumper, have redundancy to cover these failure modes. We've done extensive modelling and measuring of these systems and think that we have hit a pretty good balance of availability and cost.
> A thumper is not an STK9990V, nor does it cost nearly as much.

> Incidentally, thumper field reliability is better than we expected. This is causing me to do extra work, because I have to explain why.

>> It's got plenty of I/O bandwidth. Mid-range and high-end servers, though, are starved of I/O bandwidth relative to their CPU & memory. This is particularly true for Sun's hardware.

> Please tell us how many storage arrays are required to meet a theoretical I/O bandwidth of 244 GBytes/s? Note: I have to say theoretical bandwidth here because no such system has ever been built for testing, and such a system would be very, very expensive.
>  -- richard
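
Richard's point about putting redundancy on the most failure-prone parts can be illustrated with a toy availability calculation (my sketch, with made-up per-component numbers, not Sun's actual model):

def parallel(a, n=2):
    # Availability of n redundant units, each independently available a.
    return 1 - (1 - a) ** n

# Hypothetical availabilities for the four usual suspects.
components = {"fan": 0.9995, "power supply": 0.9995, "disk": 0.999, "DIMM": 0.9999}

serial = 1.0       # every component a single point of failure
redundant = 1.0    # each component duplicated
for a in components.values():
    serial *= a
    redundant *= parallel(a)

print(f"no redundancy:   {serial:.6f}")      # ~0.997901
print(f"with redundancy: {redundant:.6f}")   # ~0.999998

Duplicating just those four parts takes this example system from roughly 99.79% to roughly 99.9998% available, which is why that is where the redundancy budget goes.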

--
Dave Fisk, ORtera Inc.
http://www.ORtera.com

