On Mon, Jun 13, 2011 at 7:38 PM, Ski Kacoroski <[email protected]> wrote:
> Hi,
>
> I am in the market for a new NAS system (30TB usable for end-user use,
> not virtual machines) and our 2 finalists (EMC and NetApp) have taken
> very different approaches to solving the problem. I am wondering which
> solution you would be most comfortable with.
>
> $Vendor1 (more expensive): Use 256GB flash cache and 72 x 600GB 15K SAS
> disks (the second option would be 96 x 450GB SAS disks if we felt we
> needed more IOPS). Minimal dependence on algorithms moving data between
> storage mediums.
>
> $Vendor2 (much less expensive): Use 300GB flash cache, 8 x 100GB SSDs,
> 16 x 600GB 15K SAS disks, and 16 x 2TB 7.2K SATA disks. This then
> depends a lot on their technology for moving hot blocks to faster
> storage mediums.
>
> My environment does have a lot of data (e.g. student portfolio data)
> that is rarely touched, so $Vendor2 may be a good fit. My concern is
> that a user will be working in a directory for 2-3 weeks, get used to a
> certain level of response and performance, then go to a directory that
> is on the slow disks and see a huge slowdown, to the extent that they
> think the system is broken.
>
> With our current NAS this is a big problem, especially when the Mac
> clients open up a directory that has 20000 folders in it, as Macs need
> all the metadata on the 20000 folders before they will display anything
> in the GUI other than a beach ball.
>
> Has anyone had experience with NAS systems that rely a lot on different
> storage mediums and migrating data between them to get performance?
> Appreciate your thoughts and ideas on this.

Vendor1 - A single storage tier with, hopefully, enough cache memory to
speed up common operations. Management should be easier, but you have to
trust that your workload isn't too unpredictable, and power cycles
usually mean you lose the cache. Until the cache is warm again you won't
see the performance you were used to, and all I/O will be served by the
disks. Is the disk performance alone acceptable? Or does your application
require the performance the flash cache is giving it, such that things
will break without it?

Vendor2 - A clear example of tiered storage. They are proposing that you
segment your data based on how critical it is. If this is EMC, they will
try to sell you their FAST software to move data between the layers in a
semi-intelligent way. The good thing is that you'll have some assurance
that applications that need extreme performance will be stored on fast
media if you want.

It's the difference between cache memory and permanent storage. Some
folks plan their applications based on how well the cache performs but
forget that things break, and when they do, data has to be read from
permanent storage.

Oracle ZFS appliances are a good example of the $Vendor1 approach. I
wrote a bit about them after we suffered some outages and the business
wasn't totally sure how they worked, so they freaked out
(http://bit.ly/fFqzty).

IMHO, it all depends on your application requirements. Some people would
prefer to quit IT altogether rather than deal with storage tiering;
others find that they cannot rely exclusively on their caches and prefer
to have everything on a type of media that will offer the performance
they need (seeing cache as a cool feature, not the whole answer).
Personally, I've worked at a company that was fine with cache + slow
disks, and when the requirements on reliability/predictability got
higher, they started to put up with the management hassle of storage
tiering for that extra confidence.

--
Giovanni Tirloni
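P.S. To make the "moving hot blocks to faster storage" idea concrete,
here is a minimal sketch of an access-count based promote/demote policy.
This is purely illustrative: the class, thresholds, and tier names below
are invented for the example and are not how EMC FAST (or any particular
array) actually implements it.

# Hypothetical two-tier hot-block policy (illustration only, not any
# real product's algorithm; names and thresholds are made up).
class TieringSketch:
    """Count block accesses and move blocks between SSD and SATA tiers."""

    def __init__(self, promote_threshold=100, demote_threshold=5):
        self.hits = {}   # block_id -> I/Os seen in the current window
        self.tier = {}   # block_id -> "ssd" or "sata"
        self.promote_threshold = promote_threshold
        self.demote_threshold = demote_threshold

    def record_io(self, block_id):
        # Called on every read/write; a real array would sample rather
        # than count every single I/O.
        self.tier.setdefault(block_id, "sata")  # new blocks land on SATA
        self.hits[block_id] = self.hits.get(block_id, 0) + 1

    def rebalance(self):
        # Periodic background task: promote hot blocks, demote cold ones.
        moves = []
        for block_id, current in self.tier.items():
            count = self.hits.get(block_id, 0)  # untouched blocks look cold
            if count >= self.promote_threshold and current != "ssd":
                self.tier[block_id] = "ssd"
                moves.append((block_id, "ssd"))
            elif count <= self.demote_threshold and current != "sata":
                self.tier[block_id] = "sata"
                moves.append((block_id, "sata"))
        self.hits.clear()  # start a new observation window
        return moves

# Example: one hot block gets promoted, a barely-touched block stays put.
policy = TieringSketch()
for _ in range(500):
    policy.record_io("blk-42")
policy.record_io("blk-7")
print(policy.rebalance())   # [('blk-42', 'ssd')]

The reason I spell it out is that it maps directly onto Ski's worry: a
directory nobody touches for a few weeks accumulates zero hits, falls
below the demote threshold, and gets moved down to SATA, so the first
accesses after the idle period run at SATA speed until a later rebalance
notices the blocks are hot again.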
