Just to throw out my 2 cents... We currently have a NetApp 3070? It was purchased quickly when we were having problems with our NFS cluster (a pair of V240s, Solaris 9, VxVM+UFS off a 9980). We eventually resolved the issue: there was a >1TB UFS filesystem that had started life as a 40MB filesystem and slowly grown over time, and the users never delete anything on it, so every now and then things would just stop while it searched the entire filesystem from beginning to end for some free inodes. Even though somebody else came in with a better, lower bid, they pushed for NetApp and that's what we got.
Never got NFSv4 to work, and therefore no ACLs... so filesystems that had ACLs had to stay put. And FIS ran really slowly when we changed its NFS store to the NetApp (operations that took <2 seconds before took 30-60 seconds). So we are looking at replacing the NetApp, and not just because the 3-year support they quoted us costs more than buying a new NetApp would, even though under it they won't do anything except send us the occasional replacement disk (it needs a firmware upgrade, but they won't do that for us, and it would be a disruptive upgrade). We went with only 1 more year of support, and plan to be migrated off and turning it off by November.

We talked to various companies and resellers: IBM (3), HP (2), Oracle (3), EMC, and a few others. One of the IBM resellers did offer another NetApp. At one time we were strongly considering SONAS, and they offered to take the NetApp in trade (we later heard they offered to sell that NetApp to another department on campus; wonder if they would repopulate it with all new drives, since the drives now don't get to leave the datacenter intact). We had one make-or-break requirement for whatever NAS we got: the Finance group must not be able to tell that we had moved their NFS store to the new NAS.

We are currently doing a try-and-buy with Oracle on a 7420 ZFS Storage Appliance. At first it wasn't going well, but when the admin reconfigured the storage to follow Oracle's recommendations... it worked. Funny how that works. Then there's the part that the FIS app is Oracle EBiz Suite. Now they are working through all the other things we want it to do, though there's some discussion on when the trial period ends. The lawyers slowed it down a bit, because it was an Oracle software contract with a hardware paragraph tacked on. For try/buy we had to deal directly with Oracle; but the state contract directly with Oracle is only for software... the Sun hardware state contract was with a reseller.

There was a question about ZFS in production?
We've been running ZFS in production for several years. Virtually all our production Solaris 10 systems use ZFS, and many of them boot from ZFS (and I'm slowly converting more of them as part of the upgrade from update 3 to current, itself part of our push to Oracle 11g -- Oracle gave us a huge bill for continuing 9i/10g support, so we got one more year of that and the DBAs need to get things to 11g by then).

Back when we ran our own email, we were running 2+TB ZFS filesystems for mail spools (shared out over NFS to 3 MDAs and 8 IMAP/POP servers). The only hiccup was that we had to turn the ZIL off, because ZFS insisting that our 9980 flush its cache, and blocking everything until it did, wasn't working too well. (We also ran into an issue with IPF and NFS: it was a T2000, and IPF would pin one CPU and cap our throughput... we knew from testing we could do better, but put 40,000 users on it and couldn't get there.) First there were 2 of them, a pair of T2000's doing NFS; later we went 4 wide with another pair of T2000's. A couple years before, we had talked about removing the NFS bottleneck from the equation... but ran out of time. And then came a moratorium on upgrades while they went from bragging that they only spent $30,000 to provide email last year to paying millions to outsource it. Otherwise I was really tempted to put 'additional' MDA's and MUA's on the NFS servers. Each server NFS-mounted the other's filesystem, and the app that provisioned email accounts ran on the NFS server but was never taught how to deal with half (and later a quarter) of the alphabet living in different places. Though other monitoring tools did get upgraded (only because I could). We still have an archive of our mail spool (at first it was in case something didn't make the imapsync to Zimbra, later it was about the Prince case)... it became a single 4.5TB ZFS filesystem (the 9980 was retired last year).
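For anyone curious, the knobs for that in the Solaris 10 era were /etc/system tunables. A sketch from memory of the usual tuning-guide advice, not necessarily exactly what we set (and zfs_nocacheflush is generally the safer choice when the array cache is battery-backed, as on a 9980):

```
* /etc/system fragment (Solaris 10 era; a reboot is required).
* Comment lines in /etc/system start with an asterisk.
*
* Safer option: keep the ZIL but stop ZFS from asking the array
* to flush its battery-backed cache on every synchronous write:
set zfs:zfs_nocacheflush = 1
*
* Bigger hammer: disable the ZIL entirely (you lose synchronous
* write guarantees across a crash or power failure):
set zfs:zil_disable = 1
```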
Reportedly Solaris 11 does fix one problem we have with ZFS: right now it doesn't tell anybody, in logs or by email, that a pool is DEGRADED. We get emails from VxVM when it loses a disk. I ended up making our cfengine check zpool status; the first time it ran, I discovered 5 degraded zpools. 2 were dismissed as firmware bugs, so I had to patch the drives. A couple were where a former admin had taken a disk out to do an upgrade and never put it back. And one was an actual bad disk (and Oracle surprised me by promptly agreeing and sending me a new disk). For the 2 that were firmware (I inherited the tickets), it took 6 months to eventually get the downtime on those production servers to patch the disks (at one point I was worried I had bricked the drives on one of the servers, but it turned out I had just lost the network).

Uh-oh, I think I heard my name....

_______________________________________________
Tech mailing list
[email protected]
https://lists.lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
http://lopsa.org/
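PS: a minimal sketch of the kind of zpool check mentioned above -- my own illustration, not the actual cfengine rule. It leans on `zpool status -x`, which prints exactly "all pools are healthy" when every pool is fine, and pool details otherwise; the alerting hookup is left as an assumption.

```shell
#!/bin/sh
# Periodic ZFS health check, roughly what a cfengine rule could run.
# Assumption: `zpool status -x` prints "all pools are healthy" when
# nothing is wrong, and per-pool detail (DEGRADED/FAULTED) otherwise.
zpool_ok() {
    # $1 is the captured output of `zpool status -x`
    [ "$1" = "all pools are healthy" ]
}

STATUS=$(zpool status -x 2>&1 || echo "zpool command failed")
if ! zpool_ok "$STATUS"; then
    # Wire this into mailx / cfengine alerting as appropriate.
    echo "zpool problem on $(hostname): $STATUS"
fi
```

Run it from cron or let cfengine compare the output; the point is simply that something has to watch for DEGRADED, because (pre-Solaris 11) ZFS won't tell you itself.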
