On Mon, 13 Jun 2011, Ski Kacoroski wrote:

> I am in the market for a new NAS system (30TB usable for end user use -
> not virtual machines) and our 2 finalists (EMC and NetApp) have taken
> very different approaches to solving the problem. I am wondering which
> solution you would be most comfortable with.
Hoo, boy! Great topic. Full disclosure: a few years ago, I worked for NetApp for a year. I no longer work for a vendor -- I didn't care for the general atmosphere of having to believe that everything my company sold was the best at everything, for everyone. That said, there is no question in my mind that I would choose NetApp NAS over EMC Celerra NAS in nearly every case.

> $Vendor1 (more expensive): Use 256GB flash cache and 72 600GB 15K SAS
> disks (second option would have 96 450GB SAS disks if we felt we needed
> more IOPs). Minimal dependence on algorithms moving data between
> storage mediums.

While it's true that more spindles will get you more IOPS, you probably won't see significantly better performance from 96 450s vs. 72 600s. You'll certainly get some small gain, but 15K disks are already so fast that there's no appreciable difference unless you have very specific workloads.

> $Vendor2 (much less expensive): Use 300GB flash cache, 8 - 100GB SSDs,
> 16 - 600GB 15K SAS, and 16 - 2TB 7.2K SATA disks. This depends then a
> lot on their technology for moving hot blocks to faster storage mediums.

Ooh, sounds really complicated. Clearly it's EMC! :)

> My environment does have a lot of data (e.g. student portfolio data)
> that is rarely touched so $Vendor2 may be a good fit. My concern is
> that a user will be working in a directory for 2 - 3 weeks, get used to
> a certain level of response and performance, then go to a directory that
> is on the slow disks and see a huge slowdown to the extent they think
> the system is broken.
>
> Has anyone had experiences with NAS systems that rely a lot on different
> storage mediums and migrating data between them to get performance?
> Appreciate your thoughts and ideas on this.

OK, yes and no. I have a lot of experience with SAN disk arrays that automatically migrate data blocks to the "appropriate" disk tier. On the SAN side, this works amazingly well, and most of your data would end up on the 2TB SATA drives.
The sad part is that there's no real reason why NetApp couldn't do this too; they just haven't implemented it yet. With those SAN arrays, even when untouched data ends up on the SATA drives, it moves back up to faster disk pretty quickly once users start accessing it, and if you have enough SATA spindles, performance really isn't much of a problem.

Consider, also, the type of data: databases or HPC or OLTP workloads really *need* immediate access to fast disks, but what is it about the nature of the user portfolio data that means users absolutely cannot wait? Humans are usually more tolerant of delays than computers, since we process computerized information more slowly than the CPUs do. :)

Specific to your options, keep in mind that EMC is not actually using FAST to do the data movement here, and first-generation FAST sucks anyway. The Celerra uses a Clariion backend, and the Clariion does not support FAST2. They may tell you it does, but then they told us a lot of things about the gear we recently bought that turned out not to be the case. FAST-VP (the new name for FAST2 on the VMAX platform) works well -- almost as well as Compellent's "fluid data" -- and it doesn't require any administration or management once the policy is configured.

Since the Clariion doesn't support the fancy FAST-VP stuff, EMC will instead suggest their "FMA", or File Management Appliance, which is a RAINFinity box. To be frank, every RAINFinity user I've met has been disgusted with the product, and in some cases it has actually taken down networks, though I'm afraid I don't have specific details (it happened at the company where I was stationed as a NetApp employee, before they abandoned their EMC Celerras and switched to NetApp NAS).

Note also that even though the Clariion supports original FAST, the Celerra does not, so you can't use it with the Celerra -- and it will be even longer before the Celerra supports FAST2.
I'm not sure why it doesn't -- the Celerra is just using LUNs from the underlying Clariion -- but there you go.

On the flip side, the NetApp is certainly going to be more expensive, especially with no SATA drives. If you have any idea how to separate your data into high-performance vs. general storage requirements, consider getting some SATA drives to offset the cost. The NetApp is significantly simpler to manage than the Celerra, and the snapshot functionality and snapshot-based features are also significantly better than what EMC has to offer, so if self-service recovery is important to you, this should be a consideration.

You should also note that when you compare NetApp against your entire infrastructure, not just storage, it becomes much more cost-effective. For example, if you use a SATA-based NetApp as a NearStore-type appliance with SnapVault for data recovery instead of a traditional tape backup infrastructure, you may come out about even on cost and will certainly have an easier and faster time-to-recover.

Of course, EMC recently bought Isilon, which has some amazing performance technology but still doesn't support the kind of snapshot magic that NetApp does. I'm curious to see what's going to happen with the Celerra now that they have Isilon. When they bought Data Domain, they had *just* sold us a DL3D for backup storage after telling us how much DD sucked. They came in literally one week after the purchase to tell us that they were wrong, the DD was better, and they were going to swap out the DL3D for a DD. :)

The *idea* of vendor2's solution is great, but sadly the execution is far too lacking to make it worth considering from my perspective (on the NAS side -- as I said, the SAN data migration stuff is *much* better all around).

You should consider putting together a list of features that you're interested in, as well as other features touted by the vendors.
Assign weights to them according to how important they are to you, and then rate the vendors on each of those features. See who comes out on top technologically, and whether the cost difference is then worthwhile. There should also be a minimal set (say, 3-5) of *must have* features, such that regardless of how cheap a vendor is, you won't buy from them if they don't do what you need.

I hope that helps. Please feel free to ask any followup questions. We've been going through these evaluations for a while (I'm on the storage architecture team at my current employer). We have EMC Clariion, all-in-one/Celerra, and VMAX/VPLEX, as well as IBM and Compellent SAN storage, IBM SVC (a VPLEX competitor), and NetApp and Isilon equipment, so we've dealt with pretty much everything.

-Adam

_______________________________________________
Tech mailing list
[email protected]
https://lists.lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
http://lopsa.org/
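[Editor's note] The weighted feature-matrix evaluation Adam describes (weight each feature, rate each vendor, disqualify anyone missing a must-have, compare totals) can be sketched in a few lines of Python. All feature names, weights, and scores below are made-up placeholders for illustration, not real ratings of NetApp or EMC:

```python
# Hypothetical weighted decision matrix for a vendor bake-off.
# "weight" is importance (1-5); "must_have" disqualifies any vendor
# that scores 0 on that feature, no matter how cheap they are.
features = {
    "snapshots":    {"weight": 5, "must_have": True},
    "auto_tiering": {"weight": 3, "must_have": False},
    "ease_of_mgmt": {"weight": 4, "must_have": False},
    "dedup":        {"weight": 2, "must_have": False},
}

# Per-feature vendor ratings, 0-10 (0 = feature absent). Placeholder values.
vendors = {
    "vendor1": {"snapshots": 9, "auto_tiering": 2, "ease_of_mgmt": 8, "dedup": 7},
    "vendor2": {"snapshots": 4, "auto_tiering": 8, "ease_of_mgmt": 5, "dedup": 3},
}

def evaluate(vendors, features):
    """Return weighted totals per vendor; None means disqualified."""
    results = {}
    for name, scores in vendors.items():
        # Disqualify vendors missing any must-have feature.
        if any(spec["must_have"] and scores.get(feat, 0) == 0
               for feat, spec in features.items()):
            results[name] = None
            continue
        results[name] = sum(spec["weight"] * scores.get(feat, 0)
                            for feat, spec in features.items())
    return results

if __name__ == "__main__":
    for vendor, total in sorted(evaluate(vendors, features).items(),
                                key=lambda kv: -(kv[1] or 0)):
        print(vendor, "disqualified" if total is None else total)
```

With these invented numbers, vendor1 totals 97 and vendor2 totals 70; the point is the ranking discipline, not the values -- once the totals are close, the cost difference becomes the deciding factor, exactly as the post suggests.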
