I just say '~60-65% usable after disk rightsizing + RAID + WAFL + snaps for
NFS atop NetApp' when bosses ask me.  I don't deep-dive into the math.  :)
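
For the curious, the compounding works out roughly like this -- a
back-of-the-envelope Python sketch with illustrative factors only (Adam
walks through where each one comes from below):

    raw = 1.0
    usable = raw
    usable *= 136.0 / 144   # rightsizing: a "144GB" FC disk shows up as 136GB
    usable *= 14.0 / 16     # RAID-DP at the default 14 data + 2 parity
    usable *= 0.90          # ~10% WAFL reserve
    usable *= 0.80          # 20% volume snap reserve, if you use snapshots
    print("usable fraction: %.0f%%" % (usable * 100))   # ~60%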

On Mon, Jun 7, 2010 at 10:21 AM, Adam Levin <[email protected]> wrote:
>
> On Mon, 7 Jun 2010, Jefferson Cowart wrote:
>> I've thought about Oracle on NFS, but not Oracle. (Our Oracle install is
>> reasonably small and we don't really manage it ourselves. We have two
>> applications that run on top of Oracle. We know enough to keep it
> >> running/backed up/etc., but we rely on the application vendors for most
> >> things Oracle-related. As a result we'd probably leave Oracle on block since
>> that's the way they deal with it.)
>
> Ok (I assume you meant you're thinking about VMWare on NFS but not
> Oracle).  Keep an open mind, even (and perhaps especially) for small
> Oracle instances.  The snapshot/flexclone functionality on the filer
> works extremely well over NFS for testing database changes, running
> warehouse-type queries against the database, and giving copies to
> developers for a "playground".  You can do it with LUNs, but not as easily
> or elegantly.
>
>> Right now our server access layer is via 1G (X6748-GE-TX blades in a Cisco
>> 6500), but that will likely get upgraded in the 12-18 month time frame. At
>> that point we'll likely move to the Nexus 5000 line. As a result I'm making
>> sure we have 10G (iSCSI + NFS) and FCoE upgrade options available. (With
>> NetApp this unfortunately pushes us out of the FAS2000 line which otherwise
>> does everything we'd like but has no upgrade options.)
>
> Sounds about right.  I wouldn't worry too much about FCoE on the storage
> end -- it seems more useful at the host / client end.  The storage can
> still use regular ethernet and FC (with the Cisco gear, you're going to
> need MDS for your FC SAN for the foreseeable future anyway).
>
>> Our plan is to start out at ~10TB usable. That would be about 6-7TB CIFS/NFS
> >> to end-users, ~2TB of VMware (including some MS SQL and Exchange that's
> >> on top of VMware), and some other small misc bits.  The rest would be
> >> available for
>> future expansion.
>
> At those numbers, the NetApp should definitely be near the top of your
> list, especially if you move the 2TB of VMWare to NFS instead of block.
>
>> We are in the midst of an Exchange 2010 upgrade/domain consolidation. (I'm
>> amused by MS telling everyone to use DAS for Exchange 2010 since that doesn't
> >> work very well in a virtual environment.)
>
> Heh, yeah, we're in the same boat, although in our case MS said that
> because of the new I/O profile, you *can* use DAS.  This somehow
> translated to our Exchange team as "you *should* use DAS".  Oy!  :)
>
>> Sounds like a nice idea, but I don't think we'd be able to do that. (Plus the
>> NetApp V series stuff doesn't appear to support Compellent on the backend.)
>
> Really?  That's disappointing.  I'll have to look into that further -- we
> have a pretty good relationship with both Compellent and NetApp.
>
>> I think the biggest question I've got is real-world utilization rates. (i.e.
>> how much raw storage does 10TB of data actually take after accounting for all
>> overhead?) I currently have proposals from both NetApp and Compellent (and
>> expect to have an EMC one shortly) and I'm trying to get as much of an
>> apples-to-apples comparison as I can. I've seen various things online (mostly
>> vendor blogs) and heard statements from resellers, but I'd appreciate hearing
>> from customers what they are seeing in actual installs.
>
> I've found that first of all you should require the vendors to give you
> quotes for usable storage.  Give them as much information as you can about
> what you want, and tell them you want X TB of usable for that data.  That
> way, you have recourse if they screw up.  (We had a case at a previous
> job where we asked NetApp for 7TB usable and they sold us 7TB raw.  We
> didn't realize it until the shelf arrived and we didn't have what we
> needed; they ended up giving us an extra shelf because we had documented
> that we asked for 7TB usable.)
>
> So, the things to know for raw vs. usable (there's a quick script pulling
> these together after the list):
>
> 1) The size of a disk is not the size of a disk.  Disks are "rightsized"
> down so that drives from different manufacturers all look like the same
> disk to the storage system.  In NetApp's case, for example,
> a 144GB FC disk starts out as 136GB to the system.  All vendors do this.
> SATA drives lose more space than FC drives because of sector size: the
> filer stores block checksums in 520-byte sectors, which FC drives support
> natively, while SATA drives are fixed at 512 bytes, so the checksums have
> to consume extra sectors.
>
> 2) Once the disks are rightsized, there's RAID.  You usually have options
> for how to lay out RAID spindles (for NetApp / EMC) or RAID protection at
> the block level (Compellent).  This affects performance and capacity.
> NetApp lets you select how many disks per raid group.  EMC has fixed
> choices.  I believe the default for NetApp is still 14+2 -- 14 data disks
> with 2 parity, so 2 out of every 16 drives will be non-data.
>
> 3) After that, there's the filesystem.  WAFL takes up space -- about a
> 10% reserve.  Any filesystem will take up space, of course, and unix
> filesystems generally keep you from using 100% of the space anyway, so
> there's overhead there too.  On an EMC LUN, though, that overhead doesn't
> appear until a host formats the LUN.  This is where the apples-to-apples
> comparison becomes crucial.
>
> 4) After the filesystem, there's the snapshot space.  NetApp snapshots are
> golden -- that is, they take precedence over anything else *if you're
> using them*.  So, you need to know your rate of change to calculate how
> much snapshot space you need.  The default used to be 20%, but I'm not
> sure if it still is.
>
> 5) For a NetApp LUN, after all that is calculated, you'll then present the
> LUN to a host and the host will format it, which means you'll lose more
> space.  So, for comparison purposes, a NetApp LUN is less efficient and
> uses more space than an EMC or Compellent LUN.  That's why it's important
> to classify what will be NAS vs. SAN.  If you minimize your block
> requirements, NetApp makes sense, but if it's mostly block, it'll cost
> more (er, even more :) ) than it otherwise would have.
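>
> To make those five steps concrete, here's a quick Python sketch chaining
> them together (the factors are the rule-of-thumb defaults from above, not
> gospel -- plug in your own):
>
>     def usable_tb(raw_tb, rightsize=136.0/144, data_disks=14, parity=2,
>                   wafl_reserve=0.10, snap_reserve=0.20, lun_fs_loss=0.10,
>                   is_lun=False):
>         # Chain overheads from steps 1-5 above; every default is a rule of thumb.
>         u = raw_tb * rightsize                          # 1) rightsizing
>         u *= float(data_disks) / (data_disks + parity)  # 2) RAID-DP layout
>         u *= 1 - wafl_reserve                           # 3) WAFL reserve
>         u *= 1 - snap_reserve                           # 4) volume snap reserve
>         if is_lun:
>             u *= 1 - lun_fs_loss                        # 5) host fs on a LUN
>         return u
>
>     print(usable_tb(16.0))               # NAS: ~9.5TB usable from 16TB raw
>     print(usable_tb(16.0, is_lun=True))  # LUN: ~8.6TB usable from 16TB raw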
>
> EMC and NetApp have had an ongoing battle about raw vs. usable.  EMC used
> to claim that their LUNs were 80% usable while NetApp's were 55% (or
> sometimes even less) usable.  They were sort of right, but they were not
> comparing equivalent configurations.
>
> Here's an example from our FAS3020 with one shelf of 14 144GB FC drives:
> Aggregate 'aggr1'
>    Total space   WAFL reserve   Snap reserve   Usable space   BSR NVLOG      A-SIS
>    974701056KB     97470104KB            0KB    877230952KB         0KB  3305148KB
>
> Aggregate                       Allocated            Used           Avail
> Total space                   747156988KB     317262708KB     126724900KB
> Snap reserve                          0KB             0KB             0KB
> WAFL reserve                   97470104KB       8614364KB      88855740KB
>
> Aggregate 'aggr0'
>    Total space   WAFL reserve   Snap reserve   Usable space   BSR NVLOG      A-SIS
>    139243008KB     13924300KB            0KB    125318708KB         0KB        0KB
>
> Aggregate                       Allocated            Used           Avail
> Total space                   119595480KB       3448220KB       5714504KB
> Snap reserve                          0KB             0KB             0KB
> WAFL reserve                   13924300KB       1407224KB      12517076KB
>
> The raw space would be 14*144GB = 2,113,929,216 KB.
> Rightsized space:      14*136GB = 1,996,488,704 KB.
> (Binary GB throughout: 1GB = 1,048,576 KB.)
>
> We use RAID DP, and there are two aggregates, one 3 disk and one 9 disk
> (our root vol is on its own aggregate, which is a bit of wasted space but
> in a larger filer is better for management).
>
> So, for the 9 disk aggregate:
> Rightsized: 9*136GB = 1,283,457,024 KB (raw: 9*144GB = 1,358,954,496 KB)
> Minus the 2 RAID-DP parity disks: 7*136GB = 998,244,352 KB.
>
> The filer itself is reporting 974,701,056 KB.  That's probably a
> difference in 1000 vs. 1024 or something similar.
>
> The filer then takes 10% for WAFL reserve and there's an aggregate
> snapshot reserve that's not required but defaults to 5% -- you should make
> that 0% unless you're using a very specific data protection scheme (you'll
> see it in the documentation somewhere -- almost nobody uses it :) ).
>
> So we're told the usable space (before volume snap reserve) is 877,230,952
> KB out of 1,358,954,496 KB raw (worst case calculation) for an overhead of
> 35%.
>
> If you add a standard 20% volume snap reserve to that, usable drops to
> about 52% of raw -- and you can see where that "55% (or less) usable"
> claim comes from.
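>
> In script form (the KB figures are straight from the filer output above;
> only the 20% snap reserve at the end is hypothetical):
>
>     raw_kb    = 9 * 144 * 1024**2   # 1,358,954,496 KB, worst-case raw
>     usable_kb = 877230952           # what the filer reports for aggr1
>     print("overhead: %.0f%%" % (100 * (1 - float(usable_kb) / raw_kb)))  # ~35%
>     with_snap = usable_kb * 0.80    # hypothetical 20% volume snap reserve
>     print("usable:   %.0f%%" % (100 * with_snap / raw_kb))               # ~52%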
>
> Now, we have an EMC CX4-120 with 1TB SATA drives configured as a RAID6,
> which is roughly equivalent to NetApp's RAID-DP (I don't have any 1TB
> SATAs on the NetApp, unfortunately).
>
> The 1TB drives are rightsized to 917.18GB on the CX4.  There are 16 of the
> drives configured as RAID6, and the system tells me I have 12840.088GB
> total space available there.
>
> 16*1TB = 17,179,869,184 KB
> 12840GB = 13,463,715,840 KB
>
> So after rightsizing and RAID we've lost about 22%, before formatting a
> filesystem.  I would expect that after formatting we'd lose at least 10%
> to filesystem and overhead, just like WAFL, which puts us about on par
> with the NetApp.
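>
> And the same check for the CX4 (numbers as reported above):
>
>     raw_kb    = 16 * 1024**3    # 16 x 1TB = 17,179,869,184 KB
>     usable_kb = 12840 * 1024**2 # reported 12,840GB = 13,463,715,840 KB
>     print("lost to rightsizing+RAID6: %.0f%%"
>           % (100 * (1 - float(usable_kb) / raw_kb)))       # ~22%
>     print("after a ~10%% filesystem hit: %.0f%% usable"
>           % (100 * 0.90 * usable_kb / raw_kb))             # ~71%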
>
> So, the only real difference is when you take the NetApp, create a LUN on
> WAFL, and then format it with another filesystem like UFS.  You'll lose
> another 10%.
>
> Anyone care to check my math?  :)
>
> -Adam
>

_______________________________________________
Tech mailing list
[email protected]
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/
