>Is there a way to get the total amount of data referenced by a snapshot that
>isn't referenced by a specified snapshot/filesystem?  I think this is what is
>really desired in order to locate snapshots with offending space usage.  The
>written and written@ attributes seem to only do the reverse.  I think you can
>back-calculate it from the snapshot and filesystem "referenced" sizes, and
>the "written@<snap>" property of the filesystem, but that isn't particularly
>convenient to do (looks like "zfs get -Hp ..." makes it possible to hack a
>script together for it, though).
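For what it's worth, the back-calculation described above can be sketched out.  This is a minimal, untested sketch: the `referenced` and `written@<snap>` properties and the tab-separated `zfs get -Hp` output format are real, but the sample values below are made up rather than captured from an actual pool:

```python
# Space a snapshot references that the live filesystem no longer does:
#   referenced(snap) - (referenced(fs) - written@snap)
# written@snap is the data in fs written after snap was taken, so
# referenced(fs) - written@snap is the data fs still shares with snap.
# sample_output mimics `zfs get -Hp` (tab-separated: name, property,
# value, source) with made-up numbers matching the 10G example below.

G = 1024 ** 3

sample_output = (
    "rootpool/export/home@snap.0\treferenced\t%d\t-\n"
    "rootpool/export/home\treferenced\t%d\t-\n"
    "rootpool/export/home\twritten@snap.0\t%d\t-\n"
) % (10 * G, 10 * G, 10 * G)

def parse_hp(output):
    """Parse `zfs get -Hp` lines into a {(dataset, property): bytes} map."""
    props = {}
    for line in output.splitlines():
        name, prop, value, _source = line.split("\t")
        props[(name, prop)] = int(value)
    return props

def snap_only_bytes(props, fs, snap):
    """Bytes referenced by the snapshot but not by the live filesystem."""
    shared = props[(fs, "referenced")] - props[(fs, "written@" + snap)]
    return props[(fs + "@" + snap, "referenced")] - shared

props = parse_hp(sample_output)
print(snap_only_bytes(props, "rootpool/export/home", "snap.0") // G, "GiB")
```

With these sample numbers all 10G written since snap.0 is new data, so nothing is shared and the snapshot holds the full 10G on its own, which is exactly the scenario in the example below.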

This is what I was hoping to get as well, but I am not sure it's really 
possible.  Even if you calculate the referenced space plus the displayed used 
space and compare that against the active filesystem, it doesn't really tell 
you much, because the data on the active filesystem might not be as static as 
you think.

For example:

If a snapshot references 10G and the active filesystem shows 10G used, you 
might expect that the snapshot isn't using any space.  However, the 10G the 
snapshot referenced might have been deleted, and the 10G in the active 
filesystem might be new data, which means your snapshot could be holding 10G.  
But if 9G of that was also on another snapshot, you would have something like 
this:

NAME                          USED
rootpool/export/home@snap.0     1G
rootpool/export/home@snap.1    27K
rootpool/export/home@snap.2      0
And the referenced would look something like:

NAME                         REFER
rootpool/export/home@snap.0    10G
rootpool/export/home@snap.1     9G
rootpool/export/home@snap.2    10G

And the current filesystem would be:

NAME                  AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rootpool/export/home    40G   20G       10G     10G              0          0
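The mismatch is easier to see with the arithmetic written out.  A small sketch using the made-up figures from this example (not real pool data):

```python
# Per-snapshot USED counts only blocks unique to that one snapshot, so
# summing the column badly understates what deleting all the snapshots
# together would free.  Numbers are the made-up ones from the example.
G = 1024 ** 3
K = 1024

used = {"snap.0": 1 * G, "snap.1": 27 * K, "snap.2": 0}  # per-snap USED
usedsnap = 10 * G  # USEDSNAP reported on the filesystem itself

unique_total = sum(used.values())             # ~1G: unique to one snapshot
shared_among_snaps = usedsnap - unique_total  # ~9G: held by 2+ snapshots

print(unique_total // K, "KiB unique,", shared_among_snaps // G, "GiB shared")
```

The ~9G held jointly by snap.0 and snap.1 shows up in USEDSNAP but in no individual snapshot's USED, which is why no per-snapshot calculation over that column can recover it.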

Then imagine that across more than three snapshots.  I can't wrap my head 
around logic that would work there.

I would love it if someone could figure out a good way, though...

- Chad

zfs-discuss mailing list
