on 2/17/09 3:28 PM, John Stoffel said:

> Has anyone done a migration from a mostly NFS based Netapp setup to
> one with the new Sun ZFS solution on the 7x00 series arrays? We're
> thinking about doing this at $WORK due to the *large* cost savings.
> We're looking at around 200Tb of disk spread across four to five
> sites, with reliability and performance the two main drivers.
You say that reliability and performance are your main drivers, but the main reason you give for looking at ZFS is cost. That sounds to me like cost is really the main driver.

In our case, we get really, really good prices from Dell on their storage, in part because Michael Dell studied at UT Austin, and that apparently really means something to their salespeople. Because of that, we also get really, really good prices from NetApp; otherwise there wouldn't be any NetApp equipment on campus. We don't normally get such good deals from Sun on storage, so in this space there's not much price advantage for Sun versus NetApp.

Ironically, when we price out Sun servers at standard retail cost, they still come out cheaper than the discounted pricing we get from Dell. Sun also usually runs a matching grant program once a year for participating educational institutions, so you can get twice as much equipment for the same price. And these Sun servers usually have more disk and more RAM than the Dell equivalents, in half the rack space (1U instead of 2U, 2U instead of 4U, etc.). You can also manage Sun servers remotely using ALOM or ILOM from any web browser that can run Java, whereas the Dell DRAC cards can only be managed remotely from IE on Windows, because they use ActiveX. Can you tell that I've spent a lot of time on the respective websites lately, trying every which way to maximize the amount of equipment we can get for our budget?

For my part, reliability and performance are two of the three biggest questions I have with regard to the Sun 7000 series. The third question has to do with functionality, and how we get the equivalent of features like SnapMirror, SnapClone, de-duplication, MetroCluster, etc. HSM is nice, and ZFS can do snapshots, but what about the rest? How does their replication compare to SnapMirror? How does their clustering compare? How does their compression compare to NetApp de-dupe?
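For what it's worth, the raw ZFS answer to the replication question is snapshot-based send/receive; how the 7000 series appliance firmware packages and schedules that, I don't know. A minimal sketch of the underlying commands (pool, dataset, and host names here are all made up for illustration):

```shell
# Take an initial snapshot and seed the remote side.
zfs snapshot tank/home@monday
zfs send tank/home@monday | ssh dr-site zfs receive backup/home

# Later, send only the delta between two snapshots --
# roughly analogous to a SnapMirror incremental update.
zfs snapshot tank/home@tuesday
zfs send -i tank/home@monday tank/home@tuesday | \
    ssh dr-site zfs receive backup/home
```

Note that this is a one-shot transfer you have to script and schedule yourself, versus SnapMirror's managed, resumable relationships, so it's not an apples-to-apples comparison.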
Traditionally, one problem with mixing NFS, CIFS, and iSCSI on the same platform is that you can only make storage available via one of these protocols at a time; you can't easily share something via both NFS and CIFS, because the NTFS ACLs interfere. I'm told there is a way to set up NetApp devices so that they actually work in these situations; that type of configuration is rather rigid and brittle, but at least it's do-able. Can Sun do the same? And where does iSCSI fit in? Is it a bastard step-child, or is everything based on iSCSI internally?

> We do have some iSCSI and some CIFS volumes, but not large numbers,
> mostly we're NFS for compute clusters, home dirs, etc. We generally
> just export one *large* NFS mount point at each site for all data. It
> makes life simpler so we don't have to shuffle data/volumes around.

For compute clusters, note that Ranger uses Thumpers running Lustre. In the kind of situation you describe, I'd definitely take a look at something like Lustre.

> - Unknown performance of volume replication across WAN (NetApp
> SnapVault sucks across WAN, known Con :-)

Have you tried WAN accelerators here? There are some that are specifically designed to optimize storage traffic across the WAN, and I'd check into whether or not they can help with SnapVault and SnapMirror performance.

-- 
Brad Knowles <[email protected]>
LinkedIn Profile: <http://tinyurl.com/y8kpxu>

If you like Jazz/R&B guitar, check out my friend bigsbytracks on YouTube at http://preview.tinyurl.com/bigsbytracks

_______________________________________________
Tech mailing list
[email protected]
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
http://lopsa.org/
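[Addendum, on the protocol-sharing question above: on the ZFS side, exports are controlled by per-dataset properties, so a single filesystem can at least be enabled for NFS and CIFS simultaneously; whether the ACL mapping then behaves sanely is exactly the open question. A sketch using OpenSolaris-era commands, with hypothetical dataset names:]

```shell
# Share one dataset over both NFS and CIFS at once
# (dataset names are hypothetical).
zfs set sharenfs=on tank/projects
zfs set sharesmb=on tank/projects

# iSCSI is not layered on the file protocols: a zvol is a
# separate block device, exported on its own.
zfs create -V 100G tank/lun0
zfs set shareiscsi=on tank/lun0
```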
