Re: [zfs-discuss] Zvol vs zfs send/zfs receive

2012-09-15 Thread Bill Sommerfeld
On 09/14/12 22:39, Edward Ned Harvey 
(opensolarisisdeadlongliveopensolaris) wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Dave Pooser

Unfortunately I did not realize that zvols require disk space sufficient
to duplicate the zvol, and my zpool wasn't big enough. After a false start
(zpool add is dangerous when low on sleep) I added a 250GB mirror and a
pair of 3GB mirrors to miniraid and was able to successfully snapshot the
zvol: miniraid/RichRAID@exportable


This doesn't make any sense to me.  The snapshot should not take up any
(significant) space on the sending side.  It's only on the receiving side,
when receiving a snapshot, that space is required, because the receive won't
clobber the existing zvol on the receiving side until the complete new zvol
has been received.

But simply creating the snapshot on the sending side should be no problem.
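
For context, the operation under discussion looks roughly like this (a sketch
only, with the zvol name from the thread, a target host and pool borrowed from
later messages, and assuming the target dataset does not already exist):

  # on the sending side, taking the snapshot should be nearly free
  zfs snapshot miniraid/RichRAID@exportable

  # the space question only arises on the pool that receives the stream
  zfs send miniraid/RichRAID@exportable | ssh archive zfs receive archive1/RichRAID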


By default, zvols have reservations equal to their size (so that writes 
don't fail due to the pool being out of space).


Creating a snapshot in the presence of a reservation requires reserving 
enough space to overwrite every block on the device.


You can remove or shrink the reservation if you know that the entire 
device won't be overwritten.
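
For example, something along these lines (a sketch only, reusing the zvol
name from earlier in the thread; check what is actually set on your own
dataset first):

  # see which reservation is set on the zvol -- on recent bits it is usually
  # the refreservation that defaults to the volume size
  zfs get volsize,reservation,refreservation miniraid/RichRAID

  # if you know the entire device will never be overwritten while the
  # snapshot exists, shrink the reservation, or clear it entirely:
  zfs set refreservation=none miniraid/RichRAID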




Re: [zfs-discuss] Zvol vs zfs send/zfs receive

2012-09-15 Thread Dave Pooser
 The problem: so far the send/recv appears to have copied 6.25TB of 5.34TB.
 That... doesn't look right. (Comparing zfs list -t snapshot and looking at
 the 5.34 ref for the snapshot vs zfs list on the new system and looking at
 space used.)
 
 Is this a problem? Should I be panicking yet?

Well, the zfs send/receive finally finished, at a size of 9.56TB (apologies
for the HTML, it was the only way I could make the columns readable):

root@archive:/home/admin# zfs get all archive1/RichRAID
NAME               PROPERTY              VALUE                  SOURCE
archive1/RichRAID  type                  volume                 -
archive1/RichRAID  creation              Fri Sep 14  4:17 2012  -
archive1/RichRAID  used                  9.56T                  -
archive1/RichRAID  available             1.10T                  -
archive1/RichRAID  referenced            9.56T                  -
archive1/RichRAID  compressratio         1.00x                  -
archive1/RichRAID  reservation           none                   default
archive1/RichRAID  volsize               5.08T                  local
archive1/RichRAID  volblocksize          8K                     -
archive1/RichRAID  checksum              on                     default
archive1/RichRAID  compression           off                    default
archive1/RichRAID  readonly              off                    default
archive1/RichRAID  copies                1                      default
archive1/RichRAID  refreservation        none                   default
archive1/RichRAID  primarycache          all                    default
archive1/RichRAID  secondarycache        all                    default
archive1/RichRAID  usedbysnapshots       0                      -
archive1/RichRAID  usedbydataset         9.56T                  -
archive1/RichRAID  usedbychildren        0                      -
archive1/RichRAID  usedbyrefreservation  0                      -
archive1/RichRAID  logbias               latency                default
archive1/RichRAID  dedup                 off                    default
archive1/RichRAID  mlslabel              none                   default
archive1/RichRAID  sync                  standard               default
archive1/RichRAID  refcompressratio      1.00x                  -
archive1/RichRAID  written               9.56T                  -

So used is 9.56TB, volsize is 5.08TB (which is the amount of data used on
the volume). The Mac connected to the FC target sees a 5.6TB volume with
5.1TB used, so that makes sense-- but where did the other 4TB go?

(I'm about at the point where I'm just going to create and export another
volume on a second zpool and then let the Mac copy from one zvol to the
other-- this is starting to feel like voodoo here.)
-- 
Dave Pooser
Manager of Information Services
Alford Media  http://www.alfordmedia.com







Re: [zfs-discuss] Zvol vs zfs send/zfs receive

2012-09-15 Thread Bob Friesenhahn

On Sat, 15 Sep 2012, Dave Pooser wrote:


  The problem: so far the send/recv appears to have copied 6.25TB of 5.34TB.
That... doesn't look right. (Comparing zfs list -t snapshot and looking at
the 5.34 ref for the snapshot vs zfs list on the new system and looking at
space used.)

Is this a problem? Should I be panicking yet?


Does the old pool use 512 byte sectors while the new pool uses 4K 
sectors?  Is there any change to compression settings?


With a volblocksize of 8K on disks with 4K sectors, one might expect very
poor space utilization, because metadata chunks will use (and waste) a
minimum of 4K.  There might be more space consumed by metadata than by
the actual data.
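
Both questions can be checked from the shell, e.g. (a sketch; the pool and
dataset names are just the ones mentioned in this thread, and zdb output
details vary by release):

  # ashift=9 means 512-byte sectors, ashift=12 means 4K sectors
  zdb -C miniraid | grep ashift
  zdb -C archive1 | grep ashift

  # and compare compression settings on the source and destination
  zfs get compression miniraid/RichRAID archive1/RichRAID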


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/


Re: [zfs-discuss] Zvol vs zfs send/zfs receive

2012-09-15 Thread Matthew Ahrens
On Fri, Sep 14, 2012 at 11:07 PM, Bill Sommerfeld sommerf...@hamachi.org wrote:

 On 09/14/12 22:39, Edward Ned Harvey
 (opensolarisisdeadlongliveopensolaris) wrote:

 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Dave Pooser

 Unfortunately I did not realize that zvols require disk space sufficient
 to duplicate the zvol, and my zpool wasn't big enough. After a false
 start
 (zpool add is dangerous when low on sleep) I added a 250GB mirror and a
 pair of 3GB mirrors to miniraid and was able to successfully snapshot the
 zvol: miniraid/RichRAID@exportable


 This doesn't make any sense to me.  The snapshot should not take up any
 (significant) space on the sending side.  It's only on the receiving side,
 when receiving a snapshot, that space is required, because the receive won't
 clobber the existing zvol on the receiving side until the complete new zvol
 has been received.

 But simply creating the snapshot on the sending side should be no problem.


 By default, zvols have reservations equal to their size (so that writes
 don't fail due to the pool being out of space).

 Creating a snapshot in the presence of a reservation requires reserving
 enough space to overwrite every block on the device.

 You can remove or shrink the reservation if you know that the entire
 device won't be overwritten.


This is the right idea, but it's actually the refreservation (reservation
on referenced space) that has this behavior, and is set by default on
zvols.  The reservation (on used space) covers the space consumed by
snapshots, so taking a snapshot doesn't affect it (at first, but the
reservation will be consumed as you overwrite space and the snapshot
grows).
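
A quick way to see which property is in play on a given zvol (a sketch; the
dataset name is just borrowed from the thread):

  zfs get volsize,reservation,refreservation,usedbyrefreservation \
      miniraid/RichRAID

  # a freshly created (non-sparse) zvol typically shows reservation=none and
  # refreservation equal to volsize; usedbyrefreservation is, roughly, the
  # part of that refreservation not yet backed by written data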

--matt


Re: [zfs-discuss] Zvol vs zfs send/zfs receive

2012-09-15 Thread Matthew Ahrens
On Sat, Sep 15, 2012 at 2:07 PM, Dave Pooser dave@alfordmedia.com wrote:

 The problem: so far the send/recv appears to have copied 6.25TB of 5.34TB.
 That... doesn't look right. (Comparing zfs list -t snapshot and looking at
 the 5.34 ref for the snapshot vs zfs list on the new system and looking at
 space used.)

 Is this a problem? Should I be panicking yet?


 Well, the zfs send/receive finally finished, at a size of 9.56TB
 (apologies for the HTML, it was the only way I could make the columns
 readable):

 root@archive:/home/admin# zfs get all archive1/RichRAID
 NAME               PROPERTY              VALUE                  SOURCE
 archive1/RichRAID  type                  volume                 -
 archive1/RichRAID  creation              Fri Sep 14  4:17 2012  -
 archive1/RichRAID  used                  9.56T                  -
 archive1/RichRAID  available             1.10T                  -
 archive1/RichRAID  referenced            9.56T                  -
 archive1/RichRAID  compressratio         1.00x                  -
 archive1/RichRAID  reservation           none                   default
 archive1/RichRAID  volsize               5.08T                  local
 archive1/RichRAID  volblocksize          8K                     -
 archive1/RichRAID  checksum              on                     default
 archive1/RichRAID  compression           off                    default
 archive1/RichRAID  readonly              off                    default
 archive1/RichRAID  copies                1                      default
 archive1/RichRAID  refreservation        none                   default
 archive1/RichRAID  primarycache          all                    default
 archive1/RichRAID  secondarycache        all                    default
 archive1/RichRAID  usedbysnapshots       0                      -
 archive1/RichRAID  usedbydataset         9.56T                  -
 archive1/RichRAID  usedbychildren        0                      -
 archive1/RichRAID  usedbyrefreservation  0                      -
 archive1/RichRAID  logbias               latency                default
 archive1/RichRAID  dedup                 off                    default
 archive1/RichRAID  mlslabel              none                   default
 archive1/RichRAID  sync                  standard               default
 archive1/RichRAID  refcompressratio      1.00x                  -
 archive1/RichRAID  written               9.56T                  -

 So used is 9.56TB, volsize is 5.08TB (which is the amount of data used on
 the volume). The Mac connected to the FC target sees a 5.6TB volume with
 5.1TB used, so that makes sense-- but where did the other 4TB go?


I'm not sure.  The output of zdb -bbb archive1 might help diagnose it.

--matt


Re: [zfs-discuss] ZFS snapshot used space question

2012-09-15 Thread Matthew Ahrens
On Thu, Aug 30, 2012 at 1:11 PM, Timothy Coalson tsc...@mst.edu wrote:

 Is there a way to get the total amount of data referenced by a snapshot
 that isn't referenced by a specified snapshot/filesystem?  I think this is
 what is really desired in order to locate snapshots with offending space
 usage.


Try zfs destroy -nv pool/fs@snapA%snapC (on Illumos-based systems).  This
will tell you how much space would be reclaimed if you were to destroy a
range of snapshots.
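
For instance (hypothetical snapshot names; -n makes it a dry run and -v is
verbose, so nothing is actually destroyed):

  zfs destroy -nv tank/home@2012-08-01%2012-08-30

This should list the snapshots that fall inside that range along with the
total space that would be reclaimed by destroying them.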

--matt