Re: Zpool surgery

2013-01-30 Thread Ulrich Spörlein
On Tue, 2013-01-29 at 15:52:50 +0100, Fabian Keil wrote:
 Dan Nelson dnel...@allantgroup.com wrote:
 
  In the last episode (Jan 28), Fabian Keil said:
   Ulrich Spörlein u...@freebsd.org wrote:
On Mon, 2013-01-28 at 07:11:40 +1100, Peter Jeremy wrote:
 On 2013-Jan-27 14:31:56 -, Steven Hartland 
 kill...@multiplay.co.uk wrote:
 - Original Message - 
 From: Ulrich Spörlein u...@freebsd.org
  I want to transplant my old zpool tank from a 1TB drive to a new
  2TB drive, but *not* use dd(1) or any other cloning mechanism, as
  the pool was very full very often and is surely severely
  fragmented.
 
 Can't you just drop the disk in the original machine, set it as a
 mirror, then once the mirror process has completed break the mirror
 and remove the 1TB disk?
 
 That will replicate any fragmentation as well.  zfs send | zfs recv
 is the only (current) way to defragment a ZFS pool.
   
   It's not obvious to me why zpool replace (or doing it manually)
   would replicate the fragmentation.
  
  zpool replace essentially adds your new disk as a mirror to the parent
  vdev, then deletes the original disk when the resilver is done.  Since
  mirrors are block-identical copies of each other, the new disk will contain
  an exact copy of the original disk, followed by 1TB of freespace.
 
 Thanks for the explanation.
 
 I was under the impression that zfs mirrors worked at a higher
 level than traditional mirrors like gmirror but there seems to
 be indeed less magic than I expected.
 
 Fabian

To wrap this up, while the zpool replace worked for the disk, I played
around with it some more, and using snapshots instead *did* work the
second time. I'm not sure what I did wrong the first time ...

So basically this:
# zfs send -R oldtank@2013-01-22 | zfs recv -F -d newtank
(takes ages; then take a final snapshot before unmounting and send the
incremental)
# zfs send -R -i 2013-01-22 oldtank@2013-01-29 | zfs recv -F -d newtank

This allows me to send snapshots up to 2013-01-29 to the archive pool from
either oldtank or newtank. Yay!
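(As a sanity check: received snapshots keep the guid of the snapshot they
were created from, so you can confirm that oldtank, newtank and archive
really hold the same snapshot with something like the following; the
dataset name is just an example:

# zfs get -H -o name,value guid oldtank/src@2013-01-22 newtank/src@2013-01-22

If the values match, either pool can serve as the source for the next
incremental.)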

Cheers,
Uli

Re: Zpool surgery

2013-01-29 Thread Fabian Keil
Dan Nelson dnel...@allantgroup.com wrote:

 In the last episode (Jan 28), Fabian Keil said:
  Ulrich Spörlein u...@freebsd.org wrote:
   On Mon, 2013-01-28 at 07:11:40 +1100, Peter Jeremy wrote:
On 2013-Jan-27 14:31:56 -, Steven Hartland 
kill...@multiplay.co.uk wrote:
- Original Message - 
From: Ulrich Spörlein u...@freebsd.org
 I want to transplant my old zpool tank from a 1TB drive to a new
 2TB drive, but *not* use dd(1) or any other cloning mechanism, as
 the pool was very full very often and is surely severely
 fragmented.

Can't you just drop the disk in the original machine, set it as a
mirror, then once the mirror process has completed break the mirror
and remove the 1TB disk?

That will replicate any fragmentation as well.  zfs send | zfs recv
is the only (current) way to defragment a ZFS pool.
  
  It's not obvious to me why zpool replace (or doing it manually)
  would replicate the fragmentation.
 
 zpool replace essentially adds your new disk as a mirror to the parent
 vdev, then deletes the original disk when the resilver is done.  Since
 mirrors are block-identical copies of each other, the new disk will contain
 an exact copy of the original disk, followed by 1TB of freespace.

Thanks for the explanation.

I was under the impression that zfs mirrors worked at a higher
level than traditional mirrors like gmirror, but it seems there is
indeed less magic involved than I expected.

Fabian




Re: Zpool surgery

2013-01-28 Thread Ulrich Spörlein
On Mon, 2013-01-28 at 07:11:40 +1100, Peter Jeremy wrote:
 On 2013-Jan-27 14:31:56 -, Steven Hartland kill...@multiplay.co.uk 
 wrote:
 - Original Message - 
 From: Ulrich Spörlein u...@freebsd.org
  I want to transplant my old zpool tank from a 1TB drive to a new 2TB
  drive, but *not* use dd(1) or any other cloning mechanism, as the pool
  was very full very often and is surely severely fragmented.
 
 Can't you just drop the disk in the original machine, set it as a mirror,
 then once the mirror process has completed break the mirror and remove
 the 1TB disk?
 
 That will replicate any fragmentation as well.  zfs send | zfs recv
 is the only (current) way to defragment a ZFS pool.

But are you then also supposed to be able to send incremental snapshots to
a third pool from the pool that you just cloned?

I did the zpool replace overnight, and it has not removed the old
device yet, as it found cksum errors on the pool:

root@coyote:~# zpool status -v
  pool: tank
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: resilvered 873G in 11h33m with 24 errors on Mon Jan 28 09:45:32 2013
config:

NAME             STATE     READ WRITE CKSUM
tank             ONLINE       0     0    27
  replacing-0    ONLINE       0     0    61
    da0.eli      ONLINE       0     0    61
    ada1.eli     ONLINE       0     0    61

errors: Permanent errors have been detected in the following files:


tank/src@2013-01-17:/.svn/pristine/8e/8ed35772a38e0fec00bc1cbc2f05480f4fd4759b.svn-base

tank/src@2013-01-17:/.svn/pristine/4f/4febd82f50bd408f958d4412ceea50cef48fe8f7.svn-base
tank/src@2013-01-17:/sys/dev/mvs/mvs_soc.c
tank/src@2013-01-17:/secure/usr.bin/openssl/man/pkcs8.1

tank/src@2013-01-17:/.svn/pristine/ab/ab1efecf2c0a8f67162b2ed760772337017c5a64.svn-base

tank/src@2013-01-17:/.svn/pristine/90/907580a473b00f09b01815a52251fbdc3e34e8f6.svn-base
tank/src@2013-01-17:/sys/dev/agp/agpreg.h
tank/src@2013-01-17:/sys/dev/isci/scil/scic_sds_remote_node_context.h

tank/src@2013-01-17:/.svn/pristine/a8/a8dfc65edca368c5d2af3d655859f25150795bc5.svn-base
tank/src@2013-01-17:/contrib/llvm/utils/TableGen/DAGISelMatcher.cpp
tank/src@2013-01-17:/contrib/tcpdump/print-babel.c

tank/src@2013-01-17:/.svn/pristine/30/30ef0f53aa09a5185f55f4ecac842dbc13dab8fd.svn-base

tank/src@2013-01-17:/.svn/pristine/cb/cb32411a6873621a449b24d9127305b2ee6630e9.svn-base

tank/src@2013-01-17:/.svn/pristine/03/030d211b1e95f703f9a61201eed63efdbb8e41c0.svn-base

tank/src@2013-01-17:/.svn/pristine/27/27f1181d33434a72308de165c04202b6159d6ac2.svn-base
tank/src@2013-01-17:/lib/libpam/modules/pam_exec/pam_exec.c
tank/src@2013-01-17:/contrib/llvm/include/llvm/PassSupport.h

tank/src@2013-01-17:/.svn/pristine/90/90f818b5f897f26c7b301c1ac2d0ce0d3eaef28d.svn-base
tank/src@2013-01-17:/sys/vm/vm_pager.c

tank/src@2013-01-17:/.svn/pristine/5e/5e9331052e8c2e0fa5fd8c74c4edb04058e3b95f.svn-base

tank/src@2013-01-17:/.svn/pristine/1d/1d5d6e75cfb77e48e4711ddd10148986392c4fae.svn-base

tank/src@2013-01-17:/.svn/pristine/c5/c55e964c62ed759089c4bf5e49adf6e49eb59108.svn-base
tank/src@2013-01-17:/crypto/openssl/crypto/cms/cms_lcl.h
tank/ncvs@2013-01-17:/ports/textproc/uncrustify/distinfo,v

Interestingly, these only seem to affect the snapshot, and I'm now
wondering if that is why the backup pool did not accept the
next incremental snapshot from the new pool.

How does the receiving pool know that it has the correct snapshot to
store an incremental one anyway? Is there a toplevel checksum, like for
git commits? How can I display and compare that?

Cheers,
Uli

Re: Zpool surgery

2013-01-28 Thread Ulrich Spörlein
On Sun, 2013-01-27 at 20:13:24 +0100, Hans Petter Selasky wrote:
 On Sunday 27 January 2013 20:08:06 Ulrich Spörlein wrote:
  I dug out an old ATA-to-USB case and will use that to attach the old
  tank to the new machine and then have a try at this zpool replace thing.
 
 If you are using -current you might want this patch first:
 
 http://svnweb.freebsd.org/changeset/base/245995

Thanks, will do. Is it supposed to fix this?

root@coyote:~# geli attach da1
Segmentation fault
Exit 139
root@coyote:~# geli status
Name  Status  Components
gpt/swap.eli  ACTIVE  gpt/swap
 da0.eli  ACTIVE  da0
ada1.eli  ACTIVE  ada1


As you can see geli worked fine, but at some point it stopped working and
could no longer attach new volumes.

I'm also seeing interrupt storms for USB devices when I plug the
drives into xhci0 instead of xhci1 (but this needs more testing; first I
need to get that damn zpool moved).

xhci0: Intel Panther Point USB 3.0 controller mem 0xe072-0xe072 irq 
21 at device 20.0 on pci0
xhci0: 32 byte context size.
usbus0 on xhci0
xhci1: XHCI (generic) USB 3.0 controller mem 
0xe050-0xe050,0xe051-0xe0511fff irq 19 at device 0.0 on pci5
xhci1: 64 byte context size.
usbus2 on xhci1

xhci0@pci0:0:20:0:  class=0x0c0330 card=0x72708086 chip=0x1e318086 rev=0x04 
hdr=0x00
vendor = 'Intel Corporation'
device = '7 Series/C210 Series Chipset Family USB xHCI Host Controller'
class  = serial bus
subclass   = USB
xhci1@pci0:5:0:0:   class=0x0c0330 card=0x chip=0x8241104c rev=0x02 
hdr=0x00
vendor = 'Texas Instruments'
device = 'TUSB73x0 SuperSpeed USB 3.0 xHCI Host Controller'
class  = serial bus
subclass   = USB


What's with the 32 vs 64 byte context size? And do you know of any problems
with the Intel controller?

Cheers,
Uli

Re: Zpool surgery

2013-01-28 Thread Hans Petter Selasky
On Monday 28 January 2013 11:29:35 Ulrich Spörlein wrote:
 What's with the 32 vs 64 byte context size? And do you know of any problems
 with the Intel controller?

These are two different USB DMA descriptor layouts.

--HPS

Re: Zpool surgery

2013-01-28 Thread Hans Petter Selasky
On Monday 28 January 2013 11:29:35 Ulrich Spörlein wrote:
 Thanks, will do. Is it supposed to fix this?
 
 root@coyote:~# geli attach da1
 Segmentation fault
 Exit 139
 root@coyote:~# geli status
 Name  Status  Components
 gpt/swap.eli  ACTIVE  gpt/swap
  da0.eli  ACTIVE  da0
 ada1.eli  ACTIVE  ada1
 
 
 As you can see geli worked fine, but at some point it stops working and
 can no longer attach new volumes.

I don't know. If this doesn't happen on 9-stable, yes.

--HPS

Re: Zpool surgery

2013-01-28 Thread Fabian Keil
Ulrich Spörlein u...@freebsd.org wrote:

 On Mon, 2013-01-28 at 07:11:40 +1100, Peter Jeremy wrote:
  On 2013-Jan-27 14:31:56 -, Steven Hartland kill...@multiplay.co.uk 
  wrote:
  - Original Message - 
  From: Ulrich Spörlein u...@freebsd.org
   I want to transplant my old zpool tank from a 1TB drive to a new 2TB
   drive, but *not* use dd(1) or any other cloning mechanism, as the pool
   was very full very often and is surely severely fragmented.
  
  Can't you just drop the disk in the original machine, set it as a mirror,
  then once the mirror process has completed break the mirror and remove
  the 1TB disk?
  
  That will replicate any fragmentation as well.  zfs send | zfs recv
  is the only (current) way to defragment a ZFS pool.

It's not obvious to me why zpool replace (or doing it manually)
would replicate the fragmentation.

 But are you then also supposed to be able to send incremental snapshots to
 a third pool from the pool that you just cloned?

Yes.

 I did the zpool replace overnight, and it has not removed the old
 device yet, as it found cksum errors on the pool:
 
 root@coyote:~# zpool status -v
   pool: tank
  state: ONLINE
 status: One or more devices has experienced an error resulting in data
 corruption.  Applications may be affected.
 action: Restore the file in question if possible.  Otherwise restore the
 entire pool from backup.
see: http://illumos.org/msg/ZFS-8000-8A
   scan: resilvered 873G in 11h33m with 24 errors on Mon Jan 28 09:45:32 2013
 config:
 
 NAME             STATE     READ WRITE CKSUM
 tank             ONLINE       0     0    27
   replacing-0    ONLINE       0     0    61
     da0.eli      ONLINE       0     0    61
     ada1.eli     ONLINE       0     0    61
 
 errors: Permanent errors have been detected in the following files:
 
 
 tank/src@2013-01-17:/.svn/pristine/8e/8ed35772a38e0fec00bc1cbc2f05480f4fd4759b.svn-base
[...]
 tank/ncvs@2013-01-17:/ports/textproc/uncrustify/distinfo,v
 
 Interestingly, these only seem to affect the snapshot, and I'm now
 wondering if that is why the backup pool did not accept the
 next incremental snapshot from the new pool.

I doubt that. My expectation would be that it only prevents
the zfs send from finishing successfully.

BTW, you could try reading the files to be sure that the checksum
problems are permanent and not just temporary USB issues.

 How does the receiving pool know that it has the correct snapshot to
 store an incremental one anyway? Is there a toplevel checksum, like for
 git commits? How can I display and compare that?

Try zstreamdump:

fk@r500 ~ $sudo zfs send -i @2013-01-24_20:48 tank/etc@2013-01-26_21:14 | 
zstreamdump | head -11
BEGIN record
hdrtype = 1
features = 4
magic = 2f5bacbac
creation_time = 5104392a
type = 2
flags = 0x0
toguid = a1eb3cfe794e675c
fromguid = 77fb8881b19cb41f
toname = tank/etc@2013-01-26_21:14
END checksum = 1047a3f2dceb/67c999f5e40ecf9/442237514c1120ed/efd508ab5203c91c

fk@r500 ~ $sudo zfs send lexmark/backup/r500/tank/etc@2013-01-24_20:48 | 
zstreamdump | head -11
BEGIN record
hdrtype = 1
features = 4
magic = 2f5bacbac
creation_time = 51018ff4
type = 2
flags = 0x0
toguid = 77fb8881b19cb41f
fromguid = 0
toname = lexmark/backup/r500/tank/etc@2013-01-24_20:48
END checksum = 1c262b5ffe935/78d8a68e0eb0c8e7/eb1dde3bd923d153/9e0829103649ae22
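
As far as I can tell, the toguid/fromguid values correspond to the guid
property of the snapshots involved, so the same comparison works without
generating a stream (note that zfs get prints the value in decimal while
zstreamdump shows it in hex):

fk@r500 ~ $zfs get -H -o value guid tank/etc@2013-01-24_20:48
fk@r500 ~ $zfs get -H -o value guid lexmark/backup/r500/tank/etc@2013-01-24_20:48

If the guid of the most recent snapshot on the receiving side matches the
fromguid of the incremental stream, the receive should succeed.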

Fabian




Re: Zpool surgery

2013-01-28 Thread Dan Nelson
In the last episode (Jan 28), Fabian Keil said:
 Ulrich Spörlein u...@freebsd.org wrote:
  On Mon, 2013-01-28 at 07:11:40 +1100, Peter Jeremy wrote:
   On 2013-Jan-27 14:31:56 -, Steven Hartland kill...@multiplay.co.uk 
   wrote:
   - Original Message - 
   From: Ulrich Spörlein u...@freebsd.org
I want to transplant my old zpool tank from a 1TB drive to a new
2TB drive, but *not* use dd(1) or any other cloning mechanism, as
the pool was very full very often and is surely severely
fragmented.
   
   Can't you just drop the disk in the original machine, set it as a
   mirror, then once the mirror process has completed break the mirror
   and remove the 1TB disk?
   
   That will replicate any fragmentation as well.  zfs send | zfs recv
   is the only (current) way to defragment a ZFS pool.
 
 It's not obvious to me why zpool replace (or doing it manually)
 would replicate the fragmentation.

zpool replace essentially adds your new disk as a mirror to the parent
vdev, then deletes the original disk when the resilver is done.  Since
mirrors are block-identical copies of each other, the new disk will contain
an exact copy of the original disk, followed by 1TB of freespace.
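
The manual equivalent would be roughly this (device names taken from the
zpool status earlier in the thread, adjust as needed):

# zpool attach tank da0.eli ada1.eli
  (wait for the resilver to finish, check with zpool status)
# zpool detach tank da0.eli

zpool replace just automates the detach once the resilver has completed.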

-- 
Dan Nelson
dnel...@allantgroup.com

Zpool surgery

2013-01-27 Thread Ulrich Spörlein
Hey all,

I have a slight problem with transplanting a zpool, maybe this is not
possible the way I like to do it, maybe I need to fuzz some
identifiers...

I want to transplant my old zpool tank from a 1TB drive to a new 2TB
drive, but *not* use dd(1) or any other cloning mechanism, as the pool
was very full very often and is surely severely fragmented.

So, I have tank (the old one), the new one (let's call it tank'), and
then there's the archive pool where snapshots from tank are sent to;
these should now come from tank' in the future.

I have:
tank - sending snapshots to archive

I want:
tank' - sending snapshots to archive

Ideally I would want archive to not even know that tank and tank' are
different, so as to not have to send a full snapshot again, but
continue the incremental snapshots.

So I did zfs send -R tank | ssh otherhost zfs recv -d tank and that
worked well; this contained a snapshot A that was already on
archive. Then I made a final snapshot B on tank before turning down that
pool, and sent it to tank' as well.

Now I have snapshot A on tank, tank' and archive and they are virtually
identical. I have snapshot B on tank and tank' and would like to send
this from tank' to archive, but it complains:

cannot receive incremental stream: most recent snapshot of archive does
not match incremental source

Is there a way to tweak the identity of tank' to be *really* the same as
tank, so that archive can accept that incremental stream? Or should I
use dd(1) after all to transplant tank to tank'? My other option would
be to turn on dedup on archive and send another full stream of tank',
99.9% of which would hopefully be deduped and not consume precious space
on archive.

Any ideas?

Cheers,
Uli



Re: Zpool surgery

2013-01-27 Thread Fabian Keil
Ulrich Spörlein u...@freebsd.org wrote:

 I have a slight problem with transplanting a zpool, maybe this is not
 possible the way I like to do it, maybe I need to fuzz some
 identifiers...
 
 I want to transplant my old zpool tank from a 1TB drive to a new 2TB
 drive, but *not* use dd(1) or any other cloning mechanism, as the pool
 was very full very often and is surely severely fragmented.
 
 So, I have tank (the old one), the new one, let's call it tank' and
 then there's the archive pool where snapshots from tank are sent to, and
 these should now come from tank' in the future.
 
 I have:
 tank - sending snapshots to archive
 
 I want:
 tank' - sending snapshots to archive
 
 Ideally I would want archive to not even know that tank and tank' are
 different, so as to not have to send a full snapshot again, but
 continue the incremental snapshots.
 
 So I did zfs send -R tank | ssh otherhost zfs recv -d tank and that
 worked well, this contained a snapshot A that was also already on
 archive. Then I made a final snapshot B on tank, before turning down that
 pool and sent it to tank' as well.
 
 Now I have snapshot A on tank, tank' and archive and they are virtually
 identical. I have snapshot B on tank and tank' and would like to send
 this from tank' to archive, but it complains:
 
 cannot receive incremental stream: most recent snapshot of archive does
 not match incremental source

In general this should work, so I'd suggest that you double check
that you are indeed sending the correct incremental.
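
For example by comparing the most recent snapshots on both sides
(pool names are placeholders):

# zfs list -H -t snapshot -o name,creation -s creation -r archive | tail -3
# zfs list -H -t snapshot -o name,creation -s creation -r tank | tail -3

The snapshot passed to zfs send -i has to be the most recent snapshot that
archive already has, otherwise the receive is rejected with the error above.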

 Is there a way to tweak the identity of tank' to be *really* the same as
 tank, so that archive can accept that incremental stream? Or should I
 use dd(1) after all to transplant tank to tank'? My other option would
 be to turn on dedup on archive and send another full stream of tank',
 99.9% of which would hopefully be deduped and not consume precious space
 on archive.

The pools don't have to be the same.

I wouldn't consider dedup as you'll have to recreate the pool if
it turns out that the dedup performance is pathetic. On a system
that hasn't been created with dedup in mind that seems rather
likely.
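
If you want numbers before deciding, zdb should be able to simulate
deduplication for an existing pool by reading and checksumming all
blocks (which takes a while):

# zdb -S archive

The simulated DDT histogram at the end includes an estimated dedup ratio,
which gives a rough idea of whether the memory overhead would be worth it.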

 Any ideas?

Your whole procedure seems a bit complicated to me.

Why don't you use zpool replace?
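
Assuming the new disk ends up as ada1 and gets a geli layer like the old
one, that would be something like:

# zpool replace tank da0.eli ada1.eli

followed by waiting for the resilver to finish.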

Fabian




Re: Zpool surgery

2013-01-27 Thread Steven Hartland
- Original Message - 
From: Ulrich Spörlein u...@freebsd.org



I have a slight problem with transplanting a zpool, maybe this is not
possible the way I like to do it, maybe I need to fuzz some
identifiers...

I want to transplant my old zpool tank from a 1TB drive to a new 2TB
drive, but *not* use dd(1) or any other cloning mechanism, as the pool
was very full very often and is surely severely fragmented.



Can't you just drop the disk in the original machine, set it as a mirror,
then once the mirror process has completed break the mirror and remove
the 1TB disk?

If this is a boot disk don't forget to set the boot block as well.
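
For a GPT disk booting from ZFS that would be something like the following
(device and partition index are examples, adjust to the real layout):

# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1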

   Regards
   Steve 






Re: Zpool surgery

2013-01-27 Thread Chris Rees
On 27 Jan 2013 14:31, Steven Hartland kill...@multiplay.co.uk wrote:

 - Original Message - From: Ulrich Spörlein u...@freebsd.org


 I have a slight problem with transplanting a zpool, maybe this is not
 possible the way I like to do it, maybe I need to fuzz some
 identifiers...

 I want to transplant my old zpool tank from a 1TB drive to a new 2TB
 drive, but *not* use dd(1) or any other cloning mechanism, as the pool
 was very full very often and is surely severely fragmented.


 Can't you just drop the disk in the original machine, set it as a mirror,
 then once the mirror process has completed break the mirror and remove
 the 1TB disk?

 If this is a boot disk don't forget to set the boot block as well.

I managed to replace a drive this way without even rebooting.  I believe
it's the same as a zpool replace.

Chris


Re: Zpool surgery

2013-01-27 Thread Jiri Mikulas

On 2013/01/27 15:31, Steven Hartland wrote:

- Original Message - From: Ulrich Spörlein u...@freebsd.org


I have a slight problem with transplanting a zpool, maybe this is not
possible the way I like to do it, maybe I need to fuzz some
identifiers...

I want to transplant my old zpool tank from a 1TB drive to a new 2TB
drive, but *not* use dd(1) or any other cloning mechanism, as the pool
was very full very often and is surely severely fragmented.



Can't you just drop the disk in the original machine, set it as a mirror,
then once the mirror process has completed break the mirror and remove
the 1TB disk?

If this is a boot disk don't forget to set the boot block as well.

   Regards
   Steve


Hello
before you start rebuilding the mirror this way, don't forget:

zpool set autoexpand=on tank

After you drop the old 1TB disc from the config, the space will be expanded
to the new disc size.
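
If the mirror has already been rebuilt without autoexpand set, the extra
space can still be claimed afterwards, e.g. (device name is just an example):

zpool set autoexpand=on tank
zpool online -e tank ada1.eli

zpool online -e tells ZFS to expand the device to use all available space.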


Regards
Jiri


Re: Zpool surgery

2013-01-27 Thread Ulrich Spörlein
On Sun, 2013-01-27 at 14:56:01 +0100, Fabian Keil wrote:
 Ulrich Spörlein u...@freebsd.org wrote:
 
  I have a slight problem with transplanting a zpool, maybe this is not
  possible the way I like to do it, maybe I need to fuzz some
  identifiers...
  
  I want to transplant my old zpool tank from a 1TB drive to a new 2TB
  drive, but *not* use dd(1) or any other cloning mechanism, as the pool
  was very full very often and is surely severely fragmented.
  
  So, I have tank (the old one), the new one, let's call it tank' and
  then there's the archive pool where snapshots from tank are sent to, and
  these should now come from tank' in the future.
  
  I have:
  tank - sending snapshots to archive
  
  I want:
  tank' - sending snapshots to archive
  
  Ideally I would want archive to not even know that tank and tank' are
  different, so as to not have to send a full snapshot again, but
  continue the incremental snapshots.
  
  So I did zfs send -R tank | ssh otherhost zfs recv -d tank and that
  worked well, this contained a snapshot A that was also already on
  archive. Then I made a final snapshot B on tank, before turning down that
  pool and sent it to tank' as well.
  
  Now I have snapshot A on tank, tank' and archive and they are virtually
  identical. I have snapshot B on tank and tank' and would like to send
  this from tank' to archive, but it complains:
  
  cannot receive incremental stream: most recent snapshot of archive does
  not match incremental source
 
 In general this should work, so I'd suggest that you double check
 that you are indeed sending the correct incremental.
 
  Is there a way to tweak the identity of tank' to be *really* the same as
  tank, so that archive can accept that incremental stream? Or should I
  use dd(1) after all to transplant tank to tank'? My other option would
  be to turn on dedup on archive and send another full stream of tank',
  99.9% of which would hopefully be deduped and not consume precious space
  on archive.
 
 The pools don't have to be the same.
 
 I wouldn't consider dedup as you'll have to recreate the pool if
  it turns out that the dedup performance is pathetic. On a system
 that hasn't been created with dedup in mind that seems rather
 likely.
 
  Any ideas?
 
 Your whole procedure seems a bit complicated to me.
 
 Why don't you use zpool replace?


Ehhh, zpool replace, eh? I have to say I didn't know that option
was available, but since this is on a newer machine I needed
some way to do this over the network, so a direct zpool replace is not
that easy.

I dug out an old ATA-to-USB case and will use that to attach the old
tank to the new machine and then have a try at this zpool replace thing.

How will that affect the fragmentation level of the new pool? Will the
resilver do something sensible wrt. keeping files together for better
read-ahead performance?

Cheers,
Uli

Re: Zpool surgery

2013-01-27 Thread Hans Petter Selasky
On Sunday 27 January 2013 20:08:06 Ulrich Spörlein wrote:
 I dug out an old ATA-to-USB case and will use that to attach the old
 tank to the new machine and then have a try at this zpool replace thing.

If you are using -current you might want this patch first:

http://svnweb.freebsd.org/changeset/base/245995

--HPS

Re: Zpool surgery

2013-01-27 Thread Peter Jeremy
On 2013-Jan-27 14:31:56 -, Steven Hartland kill...@multiplay.co.uk wrote:
- Original Message - 
From: Ulrich Spörlein u...@freebsd.org
 I want to transplant my old zpool tank from a 1TB drive to a new 2TB
 drive, but *not* use dd(1) or any other cloning mechanism, as the pool
 was very full very often and is surely severely fragmented.

Can't you just drop the disk in the original machine, set it as a mirror,
then once the mirror process has completed break the mirror and remove
the 1TB disk?

That will replicate any fragmentation as well.  zfs send | zfs recv
is the only (current) way to defragment a ZFS pool.

-- 
Peter Jeremy

