Hi,

I am in the market for a new NAS system (30 TB usable, for end-user data, 
not virtual machines) and our two finalists (EMC and NetApp) have taken 
very different approaches to solving the problem.  I am wondering which 
solution you would be most comfortable with.

$Vendor1 (more expensive): 256 GB of flash cache and 72 x 600 GB 15K SAS 
disks (the second option would be 96 x 450 GB 15K SAS disks if we felt we 
needed more IOPS).  Minimal dependence on algorithms moving data between 
storage tiers.

$Vendor2 (much less expensive): 300 GB of flash cache, 8 x 100 GB SSDs, 
16 x 600 GB 15K SAS disks, and 16 x 2 TB 7.2K SATA disks.  This approach 
depends heavily on their technology for moving hot blocks to faster 
storage tiers.
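
For reference, here is some rough back-of-envelope math on raw capacity 
and spindle IOPS for the two configurations.  The per-spindle IOPS figures 
are generic planning assumptions (roughly 175 for 15K SAS, 75 for 7.2K 
SATA, a few thousand for an SSD), not vendor numbers, and RAID overhead 
and cache effects are ignored:

#!/usr/bin/env python3
# Back-of-envelope raw capacity and spindle IOPS for each proposed config.
# Per-spindle IOPS are rough planning assumptions, not vendor figures;
# RAID/filesystem overhead and cache behavior are ignored.

def tier(count, size_gb, iops_each):
    return {"raw_tb": count * size_gb / 1000.0, "iops": count * iops_each}

configs = {
    "vendor1":     [tier(72, 600, 175)],                 # 72 x 600 GB 15K SAS
    "vendor1 alt": [tier(96, 450, 175)],                 # 96 x 450 GB 15K SAS
    "vendor2":     [tier(8, 100, 5000),                  # 8 x 100 GB SSD
                    tier(16, 600, 175),                  # 16 x 600 GB 15K SAS
                    tier(16, 2000, 75)],                 # 16 x 2 TB 7.2K SATA
}

for name, tiers in configs.items():
    raw = sum(t["raw_tb"] for t in tiers)
    iops = sum(t["iops"] for t in tiers)
    print("%-12s raw ~%.1f TB, spindle/SSD IOPS ~%d" % (name, raw, iops))

Note that $Vendor2's headline IOPS number only materializes if the hot 
blocks are actually sitting on the SSDs when users need them, which is 
exactly the part I am unsure about.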

My environment does have a lot of data (e.g. student portfolio data) 
that is rarely touched, so $Vendor2 may be a good fit.  My concern is 
that a user will work in a directory for two or three weeks, get used to 
a certain level of response and performance, then go to a directory that 
has been migrated to the slow disks and see such a huge slowdown that 
they think the system is broken.

With our current NAS this is already a big problem, especially when the 
Mac clients open a directory that contains 20,000 folders: Macs want the 
metadata for all 20,000 folders before they will display anything in the 
GUI other than a beach ball.
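
One rough way to quantify that behavior on a demo unit (just a sketch; 
the mount path below is a placeholder) is to time a full stat of every 
entry in a big directory, first on a hot directory and then on one that 
has aged onto the slow disks, and compare:

#!/usr/bin/env python3
# Time how long it takes to stat every entry in a directory -- roughly
# what the Mac Finder forces the NAS to do before it draws anything.
# The default path is only a placeholder; point it at a hot directory
# and then at a cold, rarely-touched one and compare the numbers.
import os
import sys
import time

path = sys.argv[1] if len(sys.argv) > 1 else "/mnt/nas/big_directory"

start = time.time()
count = 0
with os.scandir(path) as entries:
    for entry in entries:
        entry.stat(follow_symlinks=False)   # one metadata fetch per entry
        count += 1
elapsed = max(time.time() - start, 1e-6)

print("stat'd %d entries in %.1f seconds (%.0f entries/sec)"
      % (count, elapsed, count / elapsed))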

Has anyone had experience with NAS systems that rely heavily on different 
storage tiers and on migrating data between them to get performance? 
I'd appreciate your thoughts and ideas on this.

cheers,

ski

-- 
"When we try to pick out anything by itself, we find it
  connected to the entire universe"            John Muir

Chris "Ski" Kacoroski, [email protected], 206-501-9803
or ski98033 on most IM services
