Thanks for the responses.  I read the docs that Cindy suggested and they
were educational, but I still don't understand where the missing disk space
is.  I used the zfs list command and added up all of the space used.  If I'm
reading it right, I have <250GB of snapshots.  zpool list shows that the
pool (localpool) is 1.81TB in size, of which 1.68TB shows as allocated.  The
filesystem that I am concerned about is localhome, and a du -sk shows that it
is ~650GB in size.  This corresponds to the output from df -lk, and it is
also in the neighborhood of what I see in the REFER column of zfs list.  So
my question remains:

I have 1.68TB of space allocated.  Of that, ~650GB is actual filesystem
data and <250GB is snapshots.  That leaves almost 800GB of space
unaccounted for.  I would like to understand whether my logic or method is
flawed.  If not, how can I go about determining what happened to the 800GB?
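
If the flaw is in my method, I suspect it is the snapshot arithmetic: as far
as I can tell from the zfs man page, a snapshot's USED value only counts the
space unique to that snapshot, so summing that column may undercount what the
snapshots hold in total.  Unless someone tells me otherwise, my next step is
to query the usedby* properties directly, something like this (untested, so
please correct me if the syntax is off):

  # zfs get usedbysnapshots,usedbydataset,usedbychildren,usedbyrefreservation localpool/localhome

If usedbysnapshots comes back much larger than the ~250GB I summed by hand,
that would explain the gap.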

I am including the output from the zfs list and zpool list commands.

Zpool list:
NAME        SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
localpool  1.81T  1.68T   137G    92%  ONLINE  -

Zfs list:
NAME                                                    USED  AVAIL  REFER  MOUNTPOINT
localpool                                              1.68T   109G    24K  /localpool
localpool/backup                                         21K   109G    21K  /localpool/backup
localpool/localhome                                    1.68T   109G   624G  /localpool/localhome
localpool/localhome@weekly00310-date2011-11-06-hour00  10.1G      -   754G  -
localpool/localhome@weekly00338-date2011-12-04-hour00  4.63G      -   847G  -
localpool/localhome@weekly001-date2012-01-01-hour00    5.13G      -   938G  -
localpool/localhome@weekly0036-date2012-02-05-hour00   10.3G      -  1.06T  -
localpool/localhome@weekly0064-date2012-03-04-hour00   84.1G      -  1.22T  -
localpool/localhome@weekly0092-date2012-04-01-hour00   8.43G      -   709G  -
localpool/localhome@weekly00127-date2012-05-06-hour00  11.1G      -   722G  -
localpool/localhome@weekly00155-date2012-06-03-hour00  20.5G      -   737G  -
localpool/localhome@weekly00183-date2012-07-01-hour00  10.9G      -   672G  -
localpool/localhome@weekly00190-date2012-07-08-hour00  11.0G      -   696G  -
localpool/localhome@weekly00197-date2012-07-15-hour00  7.92G      -   662G  -
localpool/localhome@weekly00204-date2012-07-22-hour00  13.5G      -   691G  -
localpool/localhome@weekly00211-date2012-07-29-hour00  7.88G      -   697G  -
localpool/localhome@12217-date2012-08-04-hour12         248M      -   620G  -
localpool/localhome@13217-date2012-08-04-hour13         201M      -   620G  -
localpool/localhome@14217-date2012-08-04-hour14         151M      -   620G  -
localpool/localhome@15217-date2012-08-04-hour15         143M      -   620G  -
localpool/localhome@16217-date2012-08-04-hour16         166M      -   621G  -
localpool/localhome@17217-date2012-08-04-hour17         157M      -   620G  -
localpool/localhome@18217-date2012-08-04-hour18         136M      -   620G  -
localpool/localhome@19217-date2012-08-04-hour19         178M      -   620G  -
localpool/localhome@20217-date2012-08-04-hour20         152M      -   620G  -
localpool/localhome@21217-date2012-08-04-hour21         117M      -   620G  -
localpool/localhome@22217-date2012-08-04-hour22         108M      -   620G  -
localpool/localhome@23217-date2012-08-04-hour23         156M      -   620G  -
localpool/localhome@weekly00218-date2012-08-05-hour00  34.7M      -   620G  -
localpool/localhome@00218-date2012-08-05-hour00        35.3M      -   620G  -
localpool/localhome@01218-date2012-08-05-hour01         153M      -   620G  -
localpool/localhome@02218-date2012-08-05-hour02         126M      -   620G  -
localpool/localhome@03218-date2012-08-05-hour03        98.0M      -   620G  -
localpool/localhome@04218-date2012-08-05-hour04         318M      -   620G  -
localpool/localhome@05218-date2012-08-05-hour05        4.31G      -   624G  -
localpool/localhome@06218-date2012-08-05-hour06         587M      -   621G  -
localpool/localhome@07218-date2012-08-05-hour07         200M      -   621G  -
localpool/localhome@08218-date2012-08-05-hour08         119M      -   621G  -
localpool/localhome@09218-date2012-08-05-hour09         141M      -   621G  -
localpool/localhome@10218-date2012-08-05-hour10         189M      -   621G  -
localpool/localhome@11218-date2012-08-05-hour11         243M      -   621G  -
localpool/localhome@12218-date2012-08-05-hour12         256M      -   621G  -
localpool/localhome@13218-date2012-08-05-hour13         221M      -   621G  -
localpool/localhome@14218-date2012-08-05-hour14         168M      -   621G  -
localpool/localhome@15218-date2012-08-05-hour15         156M      -   621G  -
localpool/localhome@16218-date2012-08-05-hour16         147M      -   621G  -
localpool/localhome@17218-date2012-08-05-hour17         118M      -   621G  -
localpool/localhome@18218-date2012-08-05-hour18         151M      -   621G  -
localpool/localhome@19218-date2012-08-05-hour19         252M      -   621G  -
localpool/localhome@20218-date2012-08-05-hour20         244M      -   621G  -
localpool/localhome@21218-date2012-08-05-hour21         201M      -   621G  -
localpool/localhome@22218-date2012-08-05-hour22         198M      -   621G  -
localpool/localhome@23218-date2012-08-05-hour23         164M      -   621G  -
localpool/localhome@00219-date2012-08-06-hour00         116M      -   621G  -
localpool/localhome@01219-date2012-08-06-hour01         113M      -   621G  -
localpool/localhome@02219-date2012-08-06-hour02         127M      -   621G  -
localpool/localhome@03219-date2012-08-06-hour03         130M      -   621G  -
localpool/localhome@04219-date2012-08-06-hour04         212M      -   621G  -
localpool/localhome@05219-date2012-08-06-hour05        4.29G      -   628G  -
localpool/localhome@06219-date2012-08-06-hour06         282M      -   624G  -
localpool/localhome@07219-date2012-08-06-hour07         220M      -   624G  -
localpool/localhome@08219-date2012-08-06-hour08         186M      -   624G  -
localpool/localhome@09219-date2012-08-06-hour09         265M      -   624G  -
localpool/localhome@10219-date2012-08-06-hour10         233M      -   624G  -
localpool/localhome@11219-date2012-08-06-hour11         218M      -   624G  -
localpool/tmp                                            28K   109G    28K  /localpool/tmp

thanks,

Burt Hailey

-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of
zfs-discuss-requ...@opensolaris.org
Sent: Saturday, August 04, 2012 6:34 AM
To: zfs-discuss@opensolaris.org
Subject: zfs-discuss Digest, Vol 82, Issue 11

Today's Topics:

   1. Re: Missing disk space (Cindy Swearingen)
   2. Re: what have you been buying for slog and l2arc?
      (Bob Friesenhahn)
   3. Re: what have you been buying for slog and l2arc?
      (Hung-Sheng Tsao (LaoTsao) Ph.D)
   4. Re: what have you been buying for slog and l2arc? (Neil Perrin)
   5. Re: what have you been buying for slog and l2arc? (Eugen Leitl)
   6. Re: what have you been buying for slog and l2arc?
      (Hung-Sheng Tsao (LaoTsao) Ph.D)


----------------------------------------------------------------------

Message: 1
Date: Fri, 03 Aug 2012 17:03:51 -0600
From: Cindy Swearingen <cindy.swearin...@oracle.com>
To: Burt Hailey <bhai...@triunesystems.com>
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Missing disk space
Message-ID: <501c58d7.6050...@oracle.com>
Content-Type: text/plain; charset=windows-1252; format=flowed

You said you're new to ZFS, so you might consider using zpool list and zfs
list rather than df -k to reconcile your disk space (see the example commands
below).

In addition, your pool type (mirrored or RAIDZ) provides a different space
perspective in zpool list that is not always easy to understand.

http://docs.oracle.com/cd/E23824_01/html/E24456/filesystem-6.html#scrolltoc

See these sections:

Displaying ZFS File System Information
Resolving ZFS File System Space Reporting Issues
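
For example, something along these lines, substituting your pool name (the
-o space output breaks usage into USEDSNAP, USEDDS, USEDREFRESERV, and
USEDCHILD columns):

   # zpool list localpool
   # zfs list -r -o space localpool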

Let us know if this doesn't help.

Thanks,

Cindy

On 08/03/12 16:00, Burt Hailey wrote:
> I seem to be missing a large amount of disk space and am not sure how 
> to locate it. My pool has a total of 1.9TB of disk space. When I run 
> df -k I see that the pool is using ~650GB of space and has only ~120GB 
> available. Running zfs list shows that my pool (localpool) is using 
> 1.67T. When I total up the amount of snapshots I see that they are 
> using <250GB. Unless I'm missing something it appears that there is
> ~750GB of disk space that is unaccounted for. We do hourly snapshots.
> Two days ago I deleted 100GB of data and did not see a corresponding
> increase in snapshot sizes. I'm new to zfs and am reading the zfs
> admin handbook but I wanted to post this to get some suggestions on what
> to look at.
>
> Burt Hailey
>
>
>
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


------------------------------

Message: 2
Date: Fri, 3 Aug 2012 20:39:55 -0500 (CDT)
From: Bob Friesenhahn <bfrie...@simple.dallas.tx.us>
To: Karl Rossing <karl.ross...@barobinson.ca>
Cc: ZFS filesystem discussion list <zfs-discuss@opensolaris.org>
Subject: Re: [zfs-discuss] what have you been buying for slog and
        l2arc?
Message-ID:
        <alpine.gso.2.01.1208032035270.27...@freddy.simplesystems.org>
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed

On Fri, 3 Aug 2012, Karl Rossing wrote:

> I'm looking at
> http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-ssd.html
> wondering what I should get.
>
> Are people getting intel 330's for l2arc and 520's for slog?

For the slog, you should look for a SLC technology SSD which saves unwritten
data on power failure.  In Intel-speak, this is called "Enhanced Power Loss
Data Protection".  I am not running across any Intel SSDs which claim to
match these requirements.

Extreme write IOPS claims in consumer SSDs are normally based on large write
caches which can lose even more data if there is a power failure.

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/


------------------------------

Message: 3
Date: Fri, 3 Aug 2012 22:05:03 -0400
From: "Hung-Sheng Tsao (LaoTsao) Ph.D" <laot...@gmail.com>
To: Bob Friesenhahn <bfrie...@simple.dallas.tx.us>
Cc: Karl Rossing <karl.ross...@barobinson.ca>,  ZFS filesystem
        discussion list <zfs-discuss@opensolaris.org>
Subject: Re: [zfs-discuss] what have you been buying for slog and
        l2arc?
Message-ID: <650f7bf0-fc24-4619-a6d6-1d40855c9...@gmail.com>
Content-Type: text/plain; charset="us-ascii"

Intel 311 Series Larsen Creek 20GB 2.5" SATA II SLC Enterprise Solid State
Disk SSDSA2VP020G201


Sent from my iPad

On Aug 3, 2012, at 21:39, Bob Friesenhahn <bfrie...@simple.dallas.tx.us>
wrote:

> On Fri, 3 Aug 2012, Karl Rossing wrote:
> 
>> I'm looking at http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-ssd.html wondering what I should get.
>> 
>> Are people getting intel 330's for l2arc and 520's for slog?
> 
> For the slog, you should look for a SLC technology SSD which saves
> unwritten data on power failure.  In Intel-speak, this is called "Enhanced
> Power Loss Data Protection".  I am not running across any Intel SSDs which
> claim to match these requirements.
> 
> Extreme write IOPS claims in consumer SSDs are normally based on large
> write caches which can lose even more data if there is a power failure.
> 
> Bob
> -- 
> Bob Friesenhahn
> bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

------------------------------

Message: 4
Date: Fri, 03 Aug 2012 23:29:43 -0600
From: Neil Perrin <neil.per...@oracle.com>
To: Bob Friesenhahn <bfrie...@simple.dallas.tx.us>
Cc: Karl Rossing <karl.ross...@barobinson.ca>,  ZFS filesystem
        discussion list <zfs-discuss@opensolaris.org>
Subject: Re: [zfs-discuss] what have you been buying for slog and
        l2arc?
Message-ID: <501cb347.50...@oracle.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

On 08/03/12 19:39, Bob Friesenhahn wrote:
> On Fri, 3 Aug 2012, Karl Rossing wrote:
>
>> I'm looking at http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-ssd.html wondering what I should get.
>>
>> Are people getting intel 330's for l2arc and 520's for slog?
>
> For the slog, you should look for a SLC technology SSD which saves
> unwritten data on power failure.  In Intel-speak, this is called "Enhanced
> Power Loss Data Protection".  I am not running across any Intel SSDs which
> claim to match these requirements.

- That shouldn't be necessary. ZFS flushes the write cache for any device
written before returning from the synchronous request to ensure data
stability.

>
>
> Extreme write IOPS claims in consumer SSDs are normally based on large
> write caches which can lose even more data if there is a power failure.
>
> Bob



------------------------------

Message: 5
Date: Sat, 4 Aug 2012 11:50:18 +0200
From: Eugen Leitl <eu...@leitl.org>
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] what have you been buying for slog and
        l2arc?
Message-ID: <20120804095018.go12...@leitl.org>
Content-Type: text/plain; charset=us-ascii

On Fri, Aug 03, 2012 at 08:39:55PM -0500, Bob Friesenhahn wrote:

> For the slog, you should look for a SLC technology SSD which saves  
> unwritten data on power failure.  In Intel-speak, this is called  
> "Enhanced Power Loss Data Protection".  I am not running across any  
> Intel SSDs which claim to match these requirements.

The http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-710-series.html
seems to qualify:

"Enhanced power-loss data protection. Saves all cached data in the process
of being written before the Intel SSD 710 Series shuts down, which helps
minimize potential data loss in the event of an unexpected system power
loss."

> Extreme write IOPS claims in consumer SSDs are normally based on large  
> write caches which can lose even more data if there is a power failure.

Intel 311 with a good UPS would seem to be a reasonable tradeoff.


------------------------------

Message: 6
Date: Sat, 4 Aug 2012 07:32:37 -0400
From: "Hung-Sheng Tsao (LaoTsao) Ph.D" <laot...@gmail.com>
To: "Hung-Sheng Tsao (LaoTsao) Ph.D" <laot...@gmail.com>
Cc: Karl Rossing <karl.ross...@barobinson.ca>,  ZFS filesystem
        discussion list <zfs-discuss@opensolaris.org>
Subject: Re: [zfs-discuss] what have you been buying for slog and
        l2arc?
Message-ID: <5ba00fd9-6ab7-4cb6-910d-154c674f6...@gmail.com>
Content-Type: text/plain; charset="us-ascii"

hi

maybe check out STEC SSDs,
or check out the service manual for the Sun ZFS storage appliance
to see which read and write SSDs are used in that system.
regards


Sent from my iPad

On Aug 3, 2012, at 22:05, "Hung-Sheng Tsao (LaoTsao) Ph.D"
<laot...@gmail.com> wrote:

> Intel 311 Series Larsen Creek 20GB 2.5" SATA II SLC Enterprise Solid State
> Disk SSDSA2VP020G201
> 
> 
> Sent from my iPad
> 
> On Aug 3, 2012, at 21:39, Bob Friesenhahn <bfrie...@simple.dallas.tx.us>
> wrote:
> 
>> On Fri, 3 Aug 2012, Karl Rossing wrote:
>> 
>>> I'm looking at http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-ssd.html wondering what I should get.
>>> 
>>> Are people getting intel 330's for l2arc and 520's for slog?
>> 
>> For the slog, you should look for a SLC technology SSD which saves
>> unwritten data on power failure.  In Intel-speak, this is called "Enhanced
>> Power Loss Data Protection".  I am not running across any Intel SSDs which
>> claim to match these requirements.
>> 
>> Extreme write IOPS claims in consumer SSDs are normally based on large
>> write caches which can lose even more data if there is a power failure.
>> 
>> Bob
>> -- 
>> Bob Friesenhahn
>> bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
>> GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
>> _______________________________________________
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

------------------------------

_______________________________________________
zfs-discuss mailing list
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


End of zfs-discuss Digest, Vol 82, Issue 11
*******************************************

