> The problem: so far the send/recv appears to have copied 6.25TB of 5.34TB.
> That... doesn't look right. (Comparing zfs list -t snapshot and looking at
> the 5.34 ref for the snapshot vs zfs list on the new system and looking at
> space used.)
> 
> Is this a problem? Should I be panicking yet?

Well, the zfs send/receive finally finished, at a size of 9.56TB (apologies
for the HTML; it was the only way I could make the columns readable):

root@archive:/home/admin# zfs get all archive1/RichRAID
NAME                PROPERTY              VALUE                  SOURCE
archive1/RichRAID   type                  volume                 -
archive1/RichRAID   creation              Fri Sep 14  4:17 2012  -
archive1/RichRAID   used                  9.56T                  -
archive1/RichRAID   available             1.10T                  -
archive1/RichRAID   referenced            9.56T                  -
archive1/RichRAID   compressratio         1.00x                  -
archive1/RichRAID   reservation           none                   default
archive1/RichRAID   volsize               5.08T                  local
archive1/RichRAID   volblocksize          8K                     -
archive1/RichRAID   checksum              on                     default
archive1/RichRAID   compression           off                    default
archive1/RichRAID   readonly              off                    default
archive1/RichRAID   copies                1                      default
archive1/RichRAID   refreservation        none                   default
archive1/RichRAID   primarycache          all                    default
archive1/RichRAID   secondarycache        all                    default
archive1/RichRAID   usedbysnapshots       0                      -
archive1/RichRAID   usedbydataset         9.56T                  -
archive1/RichRAID   usedbychildren        0                      -
archive1/RichRAID   usedbyrefreservation  0                      -
archive1/RichRAID   logbias               latency                default
archive1/RichRAID   dedup                 off                    default
archive1/RichRAID   mlslabel              none                   default
archive1/RichRAID   sync                  standard               default
archive1/RichRAID   refcompressratio      1.00x                  -
archive1/RichRAID   written               9.56T                  -

So used is 9.56TB and volsize is 5.08TB (the logical size of the volume as
the Mac sees it, not the amount of data actually written). The Mac connected
to the FC target sees a 5.6TB volume with 5.1TB used, so those numbers make
sense-- but where did the other 4TB go?
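
If anyone wants to dig into this with me, this is what I'm planning to look
at next (a rough sketch; I'm assuming fairly standard zfs/zpool/zdb options,
so correct me if any of these don't exist on this build):

# One-line space breakdown for the zvol (same numbers as the zfs get above)
zfs list -o space archive1/RichRAID

# Pool-level capacity and per-vdev allocation, to compare against the 9.56T
zpool list archive1
zpool iostat -v archive1

# Sector alignment (ashift) of the vdevs, from the cached pool config
zdb -C archive1 | grep ashift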

(I'm about at the point where I'm just going to create and export another
volume on a second zpool and then let the Mac copy from one zvol to the
other-- this is starting to feel like voodoo here.)
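
For the record, that fallback plan would look something like this (just a
sketch-- archive2 and RichRAID2 are made-up names for the second pool and
zvol, and the sbdadm/stmfadm lines assume the same COMSTAR FC setup as the
existing volume):

# Exact size of the current zvol, in bytes (5.08T above is a rounded display)
zfs get -Hp -o value volsize archive1/RichRAID

# Create a matching zvol on the second pool and export it over FC
zfs create -V <exact-size-in-bytes> archive2/RichRAID2
sbdadm create-lu /dev/zvol/rdsk/archive2/RichRAID2
stmfadm add-view <GUID-reported-by-sbdadm>

Then the Mac just copies from the old LUN to the new one.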
-- 
Dave Pooser
Manager of Information Services
Alford Media  http://www.alfordmedia.com