John Stoffel wrote:

> In this case, I know that Netapp is solid (mostly, don't ever use
> their X.0 or even X.1 releases if you can help it now, they're just
> not as solid as they used to be...) and works well performance-wise,
> but they're expensive.  And they have this limit of 16 TB for a
> continuous chunk of data.
Yup, rock solid, but fairly expensive.  IMO, they're worth it, and they
certainly aren't the most expensive storage on the block.

> Brad> We don't normally get such good deals from Sun on storage, so in
> Brad> this space there's not that much price advantage for Sun versus
> Brad> Netapp.
>
> Have you looked at the new 7x00 series servers from Sun lately?  The
> prices really are quite compelling, as are the features of ZFS.

I know what we've recently paid for some of the storage systems we've
bought from Netapp, and I've done the pricing scenarios on the Sun site.
There's not much cost difference, perhaps in part because both systems
are mostly spec'ing out 1 TB SATA-II drives.  But no, I haven't gone the
next step to find out what our official discounted prices would be.

> Yup, lots of questions.  And I do have questions about their
> replication, esp. across the WAN.  I need to pull down and fire up
> some simulators on each coast to test this out.  They allow you to
> make a 30 GB test area for simulation with the shipping code, which
> is more than enough data to test out replication.

That'll be enough to test whether it works at all, but not enough to
test how well it works.  There are all these weird edge cases that you
just don't get a chance to see until you start throwing around the
terabytes.

> Brad> For doing compute clusters, note that Ranger uses Thumpers
> Brad> running Lustre.  In the kind of situation you describe, I'd
> Brad> definitely take a look at something like Lustre.
>
> Who/what is Ranger?

Ranger is the largest/highest-performance cluster in the world that is
"publicly" available, i.e., not used for classified work, and where a
certain percentage is reserved for the use of any university student in
the country who puts together a suitable proposal.  You can get
thousands of CPU hours for free, as an undergrad.  Of course, it's on
the UT Austin J.J. Pickle north campus.  ;-)

And Ranger is not the only cluster we've got that is "publicly"
available, just the largest.  There are larger clusters out there, but
they're for the dedicated use of DOE or other isolated and/or
classified environments.  We're still in the Top Five overall.

> Also, we're not talking *large* clusters, or high-IO clusters.  We're
> doing EDA designs.  Lots of data in some ways, but mostly lots of
> simulations of chips.

I know some semiconductor companies here in the Austin area that have
looked very closely at the previous-generation Thumpers, and that have
an extensive Netapp infrastructure.  So you're certainly not the only
ones.

> But doing DR across a WAN using T3s and 16 TB of data still sucks,
> esp. when they can create/delete 2 TB in a single day.  It's just
> amazing how quickly you fall behind in your replication when TCP over
> a fat-wide pipe just sucks performance-wise.

Oh, gack.  I wouldn't call T3s a "fat-wide pipe".  10G, yes.  T3s, no.

But the latency-bandwidth product nature of TCP really does cause huge
problems across the WAN when you're trying to move really large chunks
of data, even if you do have the right TCP stacks with the right option
settings on both ends and everywhere else in between.
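
A rough back-of-the-envelope sketch (Python) shows why a T3 can't keep
up with that kind of churn.  Only the T3 line rate here is standard; the
70 ms coast-to-coast round-trip time and the TCP window sizes are
assumptions for illustration, not measurements from anyone's network:

    # Back-of-the-envelope numbers for the WAN replication problem above.
    # The T3 rate is standard (~45 Mbit/s); the RTT and the TCP window
    # sizes below are assumptions for illustration only.

    T3_BITS_PER_SEC = 45_000_000     # DS3/T3 line rate, roughly
    RTT_SEC = 0.070                  # assumed coast-to-coast round trip
    SECONDS_PER_DAY = 86_400

    # 1. How fast do you have to move data to keep up with 2 TB/day of churn?
    churn_bytes_per_day = 2 * 10**12
    required_bits_per_sec = churn_bytes_per_day * 8 / SECONDS_PER_DAY
    print(f"Required: {required_bits_per_sec / 1e6:.0f} Mbit/s "
          f"vs. a T3 at {T3_BITS_PER_SEC / 1e6:.0f} Mbit/s")

    # 2. Single-stream TCP throughput ceiling is window size / RTT.
    for window_bytes in (64 * 1024, 256 * 1024, 1024 * 1024):
        ceiling = window_bytes * 8 / RTT_SEC
        print(f"{window_bytes // 1024:4d} KB window -> "
              f"{ceiling / 1e6:6.1f} Mbit/s max per stream")

    # 3. Window needed just to fill the T3 (the bandwidth-delay product).
    bdp_bytes = T3_BITS_PER_SEC * RTT_SEC / 8
    print(f"Bandwidth-delay product for the T3: about {bdp_bytes / 1024:.0f} KB")

Under those assumptions, 2 TB/day of churn needs roughly 185 Mbit/s of
sustained transfer (about four T3s' worth), and a single TCP stream with
a stock 64 KB window tops out around 7-8 Mbit/s at that RTT, so you need
large windows or many parallel streams just to fill the one T3 you have.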
-- 
Brad Knowles <[email protected]>