Thanks for the input Sun-gurus!!

I understand that there can be small differences between the output of
df, du, and zfs list, but not as much as 700GB on a 1.5TB file system...
My rsync copy of this file system is 300GB, which is what I expect to find
on this file system.

In which cases would you expect to see a large difference between the output of B and C?

> B: "du -sh dir" list the space used by the Current Directory Contents.
> C: "zfs list" lists the space used by the Current Filesystem & Pool Statistics

I found info in the OpenSolaris ZFS FAQ, but this is still unclear to me:

http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq/#HWhydoesdu1reportdifferentfilesizesforZFSandUFSWhydoesntthespaceconsumptionthatisreportedbythedfcommandandthezfslistcommandmatch
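
For what it's worth, one classic cause of a df/du gap (besides snapshots) is space still held by files that were deleted while a process had them open: df and zfs list keep counting the blocks, but du no longer sees a name to sum. A minimal sketch of the effect that runs on any POSIX system (the temp directory and file names are made up for the example):

```shell
# Demonstrate a deleted-but-still-open file: 'du' stops counting it as
# soon as the name is unlinked, even though the filesystem holds the
# blocks until the last open descriptor is closed.
dir=$(mktemp -d)
dd if=/dev/zero of="$dir/big" bs=1024 count=10240 2>/dev/null  # ~10MB file
exec 3<"$dir/big"                      # keep the file open on fd 3
rm "$dir/big"                          # unlink the name
du_kb=$(du -sk "$dir" | awk '{print $1}')
echo "du after unlink: ${du_kb} KB"    # small; blocks are still allocated
exec 3<&-                              # close fd 3; now the space is freed
rm -rf "$dir"
```

On Solaris you could look for this on /ndc with `fuser -c /ndc` (processes using the mounted file system). Since the offending processes have reportedly been stopped, snapshots would be the other thing to rule out, e.g. with `zfs list -t snapshot`.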

More info:

iscsi-roskva# zfs list
NAME                         USED  AVAIL  REFER  MOUNTPOINT
ospool                      9.06G  57.9G    94K  /ospool
ospool/ROOT                 6.06G  57.9G    18K  legacy
ospool/ROOT/s10s_u6wos_07b  6.06G  57.9G  6.06G  /
ospool/dump                 1.00G  57.9G  1.00G  -
ospool/export                 38K  57.9G    20K  /export
ospool/export/home            18K  57.9G    18K  /export/home
ospool/swap                    2G  59.9G    16K  -
storagepool                 1.69T  6.34T    18K  /storagepool
storagepool/ndc             1008G   528G  1008G  /ndc
storagepool/seismo           580G   444G   580G  /seismo
storagepool/seisproj         145G   879G   145G  /seisproj

iscsi-roskva# zpool history storagepool
History for 'storagepool':
2009-07-30.15:14:11 zpool create storagepool c3t2d0 c3t4d0
2009-07-30.15:14:29 zfs set atime=off storagepool
2009-07-30.15:14:29 zfs create -o sharenfs=rw,root=hugin -o mountpoint=/ndc storagepool/ndc
2009-07-30.15:14:29 zfs create -o sharenfs=rw,root=hugin -o mountpoint=/seismo storagepool/seismo
2009-07-30.15:14:30 zfs create -o sharenfs=rw,root=hugin -o mountpoint=/seisproj storagepool/seisproj
2009-07-30.15:14:30 zfs set quota=1T storagepool/ndc
2009-07-30.15:14:30 zfs set quota=1T storagepool/seismo
2009-07-30.15:14:30 zfs set quota=1T storagepool/seisproj
2009-08-11.00:00:02 zpool scrub storagepool
2009-08-18.22:00:01 zpool scrub storagepool
2009-08-25.22:00:01 zpool scrub storagepool
2009-09-01.22:00:01 zpool scrub storagepool
2009-09-04.22:00:01 zpool scrub storagepool
2009-09-11.22:00:01 zpool scrub storagepool
2009-09-18.22:00:06 zpool scrub storagepool
2009-09-19.09:13:32 zpool scrub -s storagepool
2009-09-19.09:14:24 zpool scrub -s storagepool
2010-01-04.08:41:45 zfs set quota=1.5TB storagepool/ndc
2010-01-05.10:42:35 zfs set quota=2TB storagepool/ndc
2010-01-05.20:53:13 zfs set quota=1.5TB storagepool/ndc

Best regards,

Nils

-----Original Message-----
From: Joel.Buckley
Sent: Wednesday, January 06, 2010 4:59 PM
To: Nils K. Schoeyen
Cc: storage-discuss@opensolaris.org
Subject: Re: [storage-discuss] ZFS filesystem size mismatch

Hi Nils,

You are asking three DIFFERENT questions (ls -l, du -sh, zfs list)
and getting three VALID answers.

A: "ls -l dir" lists the size of the Directory Inode
B: "du -sh dir" list the space used by the Current Directory Contents.
C: "zfs list" lists the space used by the Current Filesystem & Pool 
Statistics.

i.e. in general: A usage <= B usage <= C usage
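
The A <= B part of this ordering can be seen even without ZFS, since ls -l and du differ on any POSIX filesystem; a small sketch (the temp paths are made up for the example):

```shell
# A: 'ls -ld dir' reports the size of the directory inode itself.
# B: 'du -sk dir' reports the space consumed by the directory's contents.
dir=$(mktemp -d)
dd if=/dev/zero of="$dir/file" bs=1024 count=100 2>/dev/null   # 100KB of data
a_bytes=$(ls -ld "$dir" | awk '{print $5}')   # directory inode size (a few KB)
b_kb=$(du -sk "$dir" | awk '{print $1}')      # contents: at least 100KB here
echo "A=${a_bytes} bytes  B=${b_kb} KB"
rm -rf "$dir"
```

C ("zfs list") then sits on top of B, adding filesystem metadata, snapshots, and any reservations, which is why it is normally the largest of the three.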

Cheers,
Joel.


Nils K. Schøyen wrote:
> A ZFS file system reports 1007GB being used (df -h / zfs list). When doing a 
> 'du -sh' on the filesystem root, I only get approx. 300GB, which is the correct 
> size.
>
> The file system became full during Christmas and I increased the quota from 1 
> to 1.5 to 2TB, then decreased it to 1.5TB. No reservations. The files and 
> processes that filled up the file system have been removed/stopped.
>
> Server: Sun Fire V240 with Solaris 10 10/08 Sparc. iSCSI-connected storage.
>
> Sizes of the other ZFS file systems on this server are reported correctly.
>
> See output below.
>
> Any ideas??
>
> iscsi-roskva# ls -la /ndc
> total 57
> drwxr-xr-x  11 root     root          11 Jan  5 13:33 .
> drwxr-xr-x  30 root     root          38 Jan  6 07:17 ..
> drwxr-xr-x   2 root     other         12 Nov 29 22:59 TT_DB
> drwxr-xr-x  66 1050     11000        112 Jan  5 21:20 cssdata
> drwxr-xr-x   4 206      26             4 Mar 24  2009 dbsave
> drwxr-xr-x  33 11017    11000         60 Oct  8 13:18 infomap
> drwxr-xr-x   8 root     other          8 Mar  6  2008 operations
> drwxr-xr-x  48 11000    11000         90 Jan  5 16:11 programs
> drwxr-xr-x  12 root     other         13 Aug 10 10:48 projects
> drwxr-xr-x  21 11001    11000         51 Sep 15 12:00 request
> drwxr-xr-x  32 11005    11000         45 Jan  5 11:02 stations
>
> iscsi-roskva# du -sh /ndc/*
>   24K   TT_DB
>  6.5G   cssdata
>  4.4G   dbsave
>  535M   infomap
>   71G   operations
>   46G   programs
>   79G   projects
>  6.7G   request
>   70G   stations
>
> iscsi-roskva# df -h /ndc
> Filesystem             size   used  avail capacity  Mounted on
> storagepool/ndc        1.5T  1007G   529G    66%    /ndc
>
> iscsi-roskva# zfs get all storagepool/ndc
> NAME             PROPERTY         VALUE                  SOURCE
> storagepool/ndc  type             filesystem             -
> storagepool/ndc  creation         Thu Jul 30 15:14 2009  -
> storagepool/ndc  used             1007G                  -
> storagepool/ndc  available        529G                   -
> storagepool/ndc  referenced       1007G                  -
> storagepool/ndc  compressratio    1.00x                  -
> storagepool/ndc  mounted          yes                    -
> storagepool/ndc  quota            1.50T                  local
> storagepool/ndc  reservation      none                   default
> storagepool/ndc  recordsize       128K                   default
> storagepool/ndc  mountpoint       /ndc                   local
> storagepool/ndc  sharenfs         rw,root=hugin          local
> storagepool/ndc  checksum         on                     default
> storagepool/ndc  compression      off                    default
> storagepool/ndc  atime            off                    inherited from storagepool
> storagepool/ndc  devices          on                     default
> storagepool/ndc  exec             on                     default
> storagepool/ndc  setuid           on                     default
> storagepool/ndc  readonly         off                    default
> storagepool/ndc  zoned            off                    default
> storagepool/ndc  snapdir          hidden                 default
> storagepool/ndc  aclmode          groupmask              default
> storagepool/ndc  aclinherit       restricted             default
> storagepool/ndc  canmount         on                     default
> storagepool/ndc  shareiscsi       off                    default
> storagepool/ndc  xattr            on                     default
> storagepool/ndc  copies           1                      default
> storagepool/ndc  version          3                      -
> storagepool/ndc  utf8only         off                    -
> storagepool/ndc  normalization    none                   -
> storagepool/ndc  casesensitivity  sensitive              -
> storagepool/ndc  vscan            off                    default
> storagepool/ndc  nbmand           off                    default
> storagepool/ndc  sharesmb         off                    default
> storagepool/ndc  refquota         none                   default
> storagepool/ndc  refreservation   none                   default
>   

_______________________________________________
storage-discuss mailing list
storage-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/storage-discuss
