Re: [zfs-discuss] zfs send receive problem/questions

2010-12-03 Thread Matthew Ahrens
On Wed, Dec 1, 2010 at 10:30 AM, Don Jackson don.jack...@gmail.com wrote:

 # zfs send -R naspool/openbsd@xfer-11292010 | zfs receive -Fv npool/openbsd
 receiving full stream of naspool/openbsd@xfer-11292010 into npool/openbsd@xfer-11292010
 received 23.5GB stream in 883 seconds (27.3MB/sec)
 cannot receive new filesystem stream: destination has snapshots (eg. npool/openbsd@xfer-11292010)
 must destroy them to overwrite it

 What am I doing wrong?  What is the proper way to accomplish my goal here?


Try using the -d option to zfs receive.  The ability to do zfs send
-R ... | zfs receive [without -d] was added relatively recently, and
you may be encountering a bug that is specific to receiving a send of
a whole pool.
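
For concreteness, a minimal sketch of that suggestion, reusing the dataset names from the original command (assuming the source filesystem is naspool/openbsd):

# Receive the recursive stream with -d: the source pool name is stripped, so
# naspool/openbsd lands at npool/openbsd/openbsd instead of replacing
# npool/openbsd itself.  -F forces rollback/overwrite, -v prints progress.
zfs send -R naspool/openbsd@xfer-11292010 | zfs receive -d -Fv npool/openbsd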


 And I have a follow up question:

 I had to snapshot the source zpool filesystems in order to zfs send them.

 Once they are received on the new zpool, I really don't need or want this
 snapshot on the receiving side.
 Is it OK to zfs destroy that snapshot?


Yes, that will work just fine.  If you delete the snapshot you will
not be able to receive any incremental streams starting from that
snapshot, but you may not care about that.
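
A minimal sketch of that cleanup, plus the trade-off Matt describes (@newsnap below is a hypothetical later snapshot, not one from this thread):

# Recursively destroy the transfer snapshot on the destination; -r removes the
# snapshot of the same name on every descendant filesystem.
zfs destroy -r npool/openbsd@xfer-11292010

# What is given up: an incremental update like the one below needs the
# xfer-11292010 snapshot to still exist on both sides.
zfs send -R -i xfer-11292010 naspool/openbsd@newsnap | zfs receive -d -F npool/openbsd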

--matt


Re: [zfs-discuss] zfs send receive problem/questions

2010-12-03 Thread Don Jackson
 Try using the -d option to zfs receive.  The ability to do zfs send
 -R ... | zfs receive [without -d] was added relatively recently, and
 you may be encountering a bug that is specific to receiving a send of
 a whole pool.

I just tried this; it didn't work, and I got a new error:

 # zfs send -R naspool/openbsd@xfer-11292010 | zfs recv -d npool/openbsd
 cannot receive new filesystem stream: out of space

The destination pool is much larger (by several TB) than the source pool, so I
don't see how it could be out of space:

# zfs list -r npool/openbsd
NAME                                                     USED  AVAIL  REFER  MOUNTPOINT
npool/openbsd                                           82.5G  7.18T  23.5G  /npool/openbsd
npool/openbsd@xfer-11292010                                 0      -  23.5G  -
npool/openbsd/openbsd                                   59.0G  7.18T  23.5G  /npool/openbsd/openbsd
npool/openbsd/openbsd@xfer-11292010                         0      -  23.5G  -
npool/openbsd/openbsd/4.5                               22.3G  7.18T  1.54G  /npool/openbsd/openbsd/4.5
npool/openbsd/openbsd/4.5@xfer-11292010                     0      -  1.54G  -
npool/openbsd/openbsd/4.5/packages                      18.7G  7.18T  18.7G  /npool/openbsd/openbsd/4.5/packages
npool/openbsd/openbsd/4.5/packages@xfer-11292010            0      -  18.7G  -
npool/openbsd/openbsd/4.5/packages-local                49.7K  7.18T  49.7K  /npool/openbsd/openbsd/4.5/packages-local
npool/openbsd/openbsd/4.5/packages-local@xfer-11292010      0      -  49.7K  -
npool/openbsd/openbsd/4.5/ports                          288M  7.18T   259M  /npool/openbsd/openbsd/4.5/ports
npool/openbsd/openbsd/4.5/ports@patch000                47.2K      -  49.7K  -
npool/openbsd/openbsd/4.5/ports@patch005                29.0M      -   261M  -
npool/openbsd/openbsd/4.5/ports@xfer-11292010               0      -   259M  -
npool/openbsd/openbsd/4.5/release                        462M  7.18T   462M  /npool/openbsd/openbsd/4.5/release
npool/openbsd/openbsd/4.5/release@xfer-11292010             0      -   462M  -
npool/openbsd/openbsd/4.5/src                            728M  7.18T   703M  /npool/openbsd/openbsd/4.5/src
npool/openbsd/openbsd/4.5/src@patch000                  47.2K      -  49.7K  -
npool/openbsd/openbsd/4.5/src@patch005                  25.1M      -   709M  -
npool/openbsd/openbsd/4.5/src@xfer-11292010                 0      -   703M  -
npool/openbsd/openbsd/4.5/xenocara                       572M  7.18T   565M  /npool/openbsd/openbsd/4.5/xenocara
npool/openbsd/openbsd/4.5/xenocara@patch000             47.2K      -  49.7K  -
npool/openbsd/openbsd/4.5/xenocara@patch005             6.52M      -   565M  -
npool/openbsd/openbsd/4.5/xenocara@xfer-11292010            0      -   565M  -
npool/openbsd/openbsd/4.8                               13.2G  7.18T   413M  /npool/openbsd/openbsd/4.8
npool/openbsd/openbsd/4.8@xfer-11292010                     0      -   413M  -
npool/openbsd/openbsd/4.8/packages                      11.9G  7.18T  11.9G  /npool/openbsd/openbsd/4.8/packages
npool/openbsd/openbsd/4.8/packages@xfer-11292010            0      -  11.9G  -
npool/openbsd/openbsd/4.8/packages-local                49.7K  7.18T  49.7K  /npool/openbsd/openbsd/4.8/packages-local
npool/openbsd/openbsd/4.8/packages-local@xfer-11292010      0      -  49.7K  -
npool/openbsd/openbsd/4.8/ports                          277M  7.18T   277M  /npool/openbsd/openbsd/4.8/ports
npool/openbsd/openbsd/4.8/ports@patch000                47.2K      -  49.7K  -
npool/openbsd/openbsd/4.8/ports@xfer-11292010               0      -   277M  -
npool/openbsd/openbsd/4.8/release                        577M  7.18T   577M  /npool/openbsd/openbsd/4.8/release
npool/openbsd/openbsd/4.8/release@xfer-11292010             0      -   577M  -
npool/openbsd/openbsd/4.8/src                           96.9K  7.18T  49.7K  /npool/openbsd/openbsd/4.8/src
npool/openbsd/openbsd/4.8/src@patch000                  47.2K      -  49.7K  -
npool/openbsd/openbsd/4.8/src@xfer-11292010                 0      -  49.7K  -
npool/openbsd/openbsd/4.8/xenocara                      96.9K  7.18T  49.7K  /npool/openbsd/openbsd/4.8/xenocara
npool/openbsd/openbsd/4.8/xenocara@patch000             47.2K      -  49.7K  -
npool/openbsd/openbsd/4.8/xenocara@xfer-11292010            0      -  49.7K  -
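
For reference, pool-level capacity can be cross-checked with something like the following (the "space" column shortcut assumes reasonably recent zfs bits):

# Overall pool size, allocated and free space:
zpool list npool
# Per-dataset breakdown, including space held by snapshots:
zfs list -r -o space npool/openbsd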
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] zfs send receive problem/questions

2010-12-03 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Don Jackson
 
  # zfs send -R naspool/openbsd@xfer-11292010 | zfs recv -d npool/openbsd
  cannot receive new filesystem stream: out of space
 
 The destination pool is much larger (by several TB) than the source pool, so I
 don't see how it could be out of space:

Oh.  Fortunately this is an easy one to answer.

Since zfs receive is an atomic operation (all or nothing), you can't overwrite a
filesystem unless there is enough disk space for *both* the old version of the
filesystem and the new one.  It essentially takes a snapshot of the present
filesystem, then creates the new received version, and only after successfully
receiving the new one does it delete the old one.

That's why, despite your failed receive, you have not lost any information in
your receiving filesystem.

If you know you want to do this, and you clearly don't have enough disk space 
to hold both the old and new filesystems at the same time, you'll have to 
destroy the old filesystem in order to overwrite it.  
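
If you do go that route, a rough sketch of the sequence using the names from this thread (destructive: it removes the partially received copy first):

# Remove the existing, partially received copy on the destination
# (everything under npool/openbsd is deleted).
zfs destroy -r npool/openbsd
# Recreate the target filesystem, then receive the recursive stream again.
zfs create npool/openbsd
zfs send -R naspool/openbsd@xfer-11292010 | zfs receive -d -Fv npool/openbsd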



[zfs-discuss] Problem with a failed replace.

2010-12-03 Thread Curtis Schiewek
I was in the middle of doing a replace on a bad drive and I lost power.  Now 
the replacement won't finish and I'm not sure what to do.  I've tried 
detaching, reattaching, moving drives around, and nothing works.  Here's my 
zpool status:


  pool: media
 state: DEGRADED
 scrub: scrub completed after 10h7m with 0 errors on Thu Dec  2 09:05:04 2010
config:

NAME           STATE     READ WRITE CKSUM
media          DEGRADED     0     0     0
  raidz1       ONLINE       0     0     0
    ad8        ONLINE       0     0     0
    ad10       ONLINE       0     0     0
    ad4        ONLINE       0     0     0
    ad6        ONLINE       0     0     0
  raidz1       DEGRADED     0     0     0
    ad22       ONLINE       0     0     0
    ad26       ONLINE       0     0     0
    replacing  UNAVAIL      0 66.4K     0  insufficient replicas
      ad24     FAULTED      0 75.1K     0  corrupted data
      ad18     FAULTED      0 75.2K     0  corrupted data
    ad24       ONLINE       0     0     0

I've actually moved the drive (which is fine) that was on ad20 to the ad24 port 
on my controller, which is why it's showing up twice.  

Any thoughts on how to cancel the replace and restart it?
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] Problem with a failed replace.

2010-12-03 Thread Mark J Musante

On Fri, 3 Dec 2010, Curtis Schiewek wrote:


NAME           STATE     READ WRITE CKSUM
media          DEGRADED     0     0     0
  raidz1       ONLINE       0     0     0
    ad8        ONLINE       0     0     0
    ad10       ONLINE       0     0     0
    ad4        ONLINE       0     0     0
    ad6        ONLINE       0     0     0
  raidz1       DEGRADED     0     0     0
    ad22       ONLINE       0     0     0
    ad26       ONLINE       0     0     0
    replacing  UNAVAIL      0 66.4K     0  insufficient replicas
      ad24     FAULTED      0 75.1K     0  corrupted data
      ad18     FAULTED      0 75.2K     0  corrupted data
    ad24       ONLINE       0     0     0


What happens if you try zpool detach media ad24?


Re: [zfs-discuss] Problem with a failed replace.

2010-12-03 Thread Curtis Schiewek
cannot detach ad24: no valid replicas

On Fri, Dec 3, 2010 at 1:38 PM, Mark J Musante mark.musa...@oracle.com wrote:

 On Fri, 3 Dec 2010, Curtis Schiewek wrote:

 NAME           STATE     READ WRITE CKSUM
 media          DEGRADED     0     0     0
   raidz1       ONLINE       0     0     0
     ad8        ONLINE       0     0     0
     ad10       ONLINE       0     0     0
     ad4        ONLINE       0     0     0
     ad6        ONLINE       0     0     0
   raidz1       DEGRADED     0     0     0
     ad22       ONLINE       0     0     0
     ad26       ONLINE       0     0     0
     replacing  UNAVAIL      0 66.4K     0  insufficient replicas
       ad24     FAULTED      0 75.1K     0  corrupted data
       ad18     FAULTED      0 75.2K     0  corrupted data
     ad24       ONLINE       0     0     0


 What happens if you try zpool detach media ad24?



Re: [zfs-discuss] Problem with a failed replace.

2010-12-03 Thread Mark J Musante



On Fri, 3 Dec 2010, Curtis Schiewek wrote:


cannot detach ad24: no valid replicas


I bet that's an instance of CR 6909724.  If you have another disk you can
spare, you can do a "zpool attach media ad24 newdisk", wait for it to
finish resilvering, and then zfs should automatically clean up ad24 and ad18
for you.
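
A rough sketch of that workaround ("newdisk" is a placeholder device name, not one from this thread):

# Attach a spare disk as suggested above and let it resilver.
zpool attach media ad24 newdisk
# Watch resilver progress; once it finishes, the stale ad24/ad18 entries
# should be cleaned up automatically.
zpool status -v media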




On Fri, Dec 3, 2010 at 1:38 PM, Mark J Musante mark.musa...@oracle.com wrote:


On Fri, 3 Dec 2010, Curtis Schiewek wrote:

NAME           STATE     READ WRITE CKSUM
media          DEGRADED     0     0     0
  raidz1       ONLINE       0     0     0
    ad8        ONLINE       0     0     0
    ad10       ONLINE       0     0     0
    ad4        ONLINE       0     0     0
    ad6        ONLINE       0     0     0
  raidz1       DEGRADED     0     0     0
    ad22       ONLINE       0     0     0
    ad26       ONLINE       0     0     0
    replacing  UNAVAIL      0 66.4K     0  insufficient replicas
      ad24     FAULTED      0 75.1K     0  corrupted data
      ad18     FAULTED      0 75.2K     0  corrupted data
    ad24       ONLINE       0     0     0



What happens if you try zpool detach media ad24?








Re: [zfs-discuss] Problem with a failed replace.

2010-12-03 Thread Günther
I had the same problem a few weeks ago
and destroyed/rebuilt my pool (NexentaCore).

This seems to be a ZFS bug; see:

* http://bugs.opensolaris.org/bugdatabase and look up bug_id=6782540
* fixed in zpool version 28

gea
napp-it.org
-- 
This message posted from opensolaris.org