Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-07 Thread Nikola M.
On 04/ 6/11 07:14 PM, Brandon High wrote:
 On Tue, Apr 5, 2011 at 12:38 PM, Joe Auty <j...@netmusician.org> wrote:

 How about getting a little more crazy... What if this entire
 server temporarily hosting this data was a VM guest running ZFS? I
 don't foresee this being a problem either, but with so


 The only thing to watch out for is to make sure that the receiving
 datasets aren't a higher version than the zfs version that you'll be
 using on the replacement server. Because you can't downgrade a
 dataset, using snv_151a and planning to send to Nexenta as a final
 step will trip you up unless you explicitly create them with a lower
 version.
Yes, that is exactly why anyone thinking about using something with a
more liberal license than the paid Solaris 11 license should first
install the latest OpenSolaris from snv_134 (or install 2009.06 and then
upgrade to /dev OpenSolaris b134). From there you can choose an upgrade
path to either OpenIndiana oi_148(b) or S11Ex on the same zpool.

That way, the zpool and zfs versions can stay at versions supported by
OI and Nexenta (and SchilliX, FreeBSD, ZFS-Fuse on Linux, and the
in-development native ZFS on Linux), and you can experiment with more
ZFS-capable systems instead of being locked into S11Ex only.

If you instead install from the closed S11Ex disc rather than from the
OpenSolaris snv_134 CD (the snv_134 .ISO at www.genunix.org) and then
upgrade to an OpenIndiana oi_xxx dev release and/or S11Ex, you might
lose the ability to use anything but Oracle's closed Solaris, so be
clever and use the upgrade path explained above.

Of course, you can have as many boot environments (BEs) on the same
zpool as you like, since they basically behave like separate OS installs
booting from the same zpool; that is the beauty of ZFS/(Open)Solaris
based distributions.
Just do NOT upgrade to the newest closed zpool/zfs version from S11Ex!
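For reference, you can check which on-disk versions a given install
supports without upgrading anything; these commands only print
information:

```shell
# List the zpool on-disk versions this system supports (read-only)
zpool upgrade -v

# Likewise for zfs dataset versions
zfs upgrade -v

# Show any pools running below the latest supported version (still read-only)
zpool upgrade
```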

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-07 Thread Joe Auty


  
  
Thanks for all of this info guys, I'm still digesting it...

My source computer is running Solaris 10 with ZFS version 15. Does this
mean that I'd be asking for trouble doing a zfs send back to this
machine from any other ZFS machine running a version > 15? I just want
to make sure I understand all of this info...

If that is the case, what are my strategies? Solaris 10 for my temporary
backup machine? Is it possible to run OpenIndiana or Nexenta or
something and somehow set up those machines with ZFS v15?


  

  

Joe Auty wrote on April 5, 2011 3:38 PM:

Hello,

I'm debating an OS change and also thinking about my options for data
migration to my next server, whether it is on new or the same hardware.

Migrating to a new machine I understand is a simple matter of ZFS
send/receive, but reformatting the existing drives to host my existing
data is an area I'd like to learn a little more about. In the past I've
asked about this and was told that it is possible to do a send/receive
to accommodate this, and IIRC this doesn't have to be to a ZFS server
with the same number of physical drives?

How about getting a little more crazy... What if this entire server
temporarily hosting this data was a VM guest running ZFS? I don't
foresee this being a problem either, but with so much at stake I thought
I would double check :) When I say temporary I mean simply using this
machine as a place to store the data long enough to wipe the original
server, install the new OS to the original server, and restore the data
using this VM as the data source.

Also, more generally, is ZFS send/receive mature enough that when you do
data migrations you don't stress about this? Piece of cake? The
difficulty of this whole undertaking will influence my decision and the
whole timing of all of this.

Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-07 Thread Brandon High
On Thu, Apr 7, 2011 at 4:01 PM, Joe Auty j...@netmusician.org wrote:
 My source computer is running Solaris 10 ZFS version 15. Does this mean that 
 I'd be asking for trouble doing a zfs send back to this machine from any 
 other ZFS machine running a version > 15? I just want to make sure I 
 understand all of this info...

There are two versions when it comes to ZFS - The zpool version and
the zfs version.

bhigh@basestar:~$ zpool list -o name,version
NAME   VERSION
rpool   31

bhigh@basestar:~$ zfs list -o name,version
NAME   VERSION
rpool5
rpool/ROOT   5
rpool/ROOT/snv_151   5
rpool/dump   -
rpool/rsrv   5
rpool/swap   -

I think that the version that matters (for your purposes) is the ZFS
version. It should be set when using 'send -R' and having 'zfs
receive' create the destination datasets. I recommend testing however.
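One cheap way to run that test before touching real data is with small
file-backed throwaway pools. The pool names, file paths, and version
numbers below are made up for illustration, and whether receive
preserves the older dataset version is exactly what the test would
confirm:

```shell
# Two throwaway file-backed pools (Solaris mkfile; paths are arbitrary)
mkfile 100m /var/tmp/src.img /var/tmp/dst.img
zpool create srcpool /var/tmp/src.img
zpool create dstpool /var/tmp/dst.img

# A dataset pinned at an older version, with a snapshot to send
zfs create -o version=4 srcpool/data
zfs snapshot srcpool/data@t1

# Replicate, then inspect the version on the receiving side
zfs send -R srcpool/data@t1 | zfs receive -d dstpool
zfs get version dstpool/data
```

Destroy both pools afterward and nothing real is at risk.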

 If this is the case, what are my strategies? Solaris 10 for my temporary 
 backup machine? Is it possible to run OpenIndiana or Nexenta or something and 
 somehow set up these machines with ZFS v15 or something?

You can set the zpool version when you create the pool, and you can
set the zfs version when you create the dataset. I'm not sure that
you'll need to set the pool version to anything lower if the dataset
version is correct though. You should test this, however.
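If you do need to pin versions, both can be given at creation time, as
described above; the device name and version numbers here are only
examples (v15 matching the Solaris 10 box in this thread):

```shell
# Pin the pool version when creating the pool
zpool create -o version=15 tank c0t1d0

# Pin the dataset version when creating a filesystem
zfs create -o version=3 tank/data

# Confirm what you ended up with
zpool get version tank
zfs get version tank/data
```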

-B

--
Brandon High : bh...@freaks.com


Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-06 Thread David Dyer-Bennet

On Tue, April 5, 2011 14:38, Joe Auty wrote:

 Migrating to a new machine I understand is a simple matter of ZFS
 send/receive, but reformatting the existing drives to host my existing
 data is an area I'd like to learn a little more about. In the past I've
 asked about this and was told that it is possible to do a send/receive
 to accommodate this, and IIRC this doesn't have to be to a ZFS server
 with the same number of physical drives?

The internal structure of the pool (how many vdevs, and what kind) is
irrelevant to zfs send / receive.  So I routinely send from a pool of 3
mirrored pairs of disks to a pool of one large drive, for example (it's
how I do my backups).   I've also gone the other way once :-( (It's good
to have backups).

I'm not 100.00% sure I understand what you're asking; does that answer it?

Mind you, this can be slow.  On my little server (under 1TB filled) the
full backup takes about 7 hours (largely because the single large external
drive is a USB drive; the bottleneck is the USB).  Luckily an incremental
backup is rather faster.
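As a sketch of that workflow (pool and snapshot names hypothetical):
the first run sends everything, later runs send only the blocks changed
since the previous snapshot:

```shell
# Day 1: full backup to the external pool (slow; USB-bound here)
zfs snapshot -r tank@mon
zfs send -R tank@mon | zfs receive -F -d backup

# Day 2: incremental from @mon to @tue (usually far faster)
zfs snapshot -r tank@tue
zfs send -R -i tank@mon tank@tue | zfs receive -F -d backup
```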

 How about getting a little more crazy... What if this entire server
 temporarily hosting this data was a VM guest running ZFS? I don't
 foresee this being a problem either, but with so much at stake I thought
 I would double check :) When I say temporary I mean simply using this
 machine as a place to store the data long enough to wipe the original
 server, install the new OS to the original server, and restore the data
 using this VM as the data source.

I haven't run ZFS extensively in VMs (mostly just short-lived small test
setups).  From my limited experience, and what I've heard on the list,
it's solid and reliable, though, which is what you need for that
application.

 Also, more generally, is ZFS send/receive mature enough that when you do
 data migrations you don't stress about this? Piece of cake? The
 difficulty of this whole undertaking will influence my decision and the
 whole timing of all of this.

A full send / receive has been reliable for a long time.  With a real
(large) data set, it's often a long run.  It's often done over a network,
and any network outage can break the run, and at that point you start
over, which can be annoying.  If the servers themselves can't stay up for
10 or 20 hours you presumably aren't ready to put them into production
anyway :-).

 I'm also thinking that a ZFS VM guest might be a nice way to maintain a
 remote backup of this data, if I can install the VM image on a
 drive/partition large enough to house my data. This seems like it would
 be a little less taxing than rsync cronjobs?

I'm a big fan of rsync, in cronjobs or wherever.  What it won't do is
properly preserve ZFS ACLs and ZFS snapshots, though.  I moved from using
rsync to using zfs send/receive for my backup scheme at home, and had
considerable trouble getting that all working (using incremental
send/receive when there are dozens of snapshots new since last time).  But
I did eventually get up to recent enough code that it's working reliably
now.

If you can provision big enough data stores for your VM to hold what you
need, that seems a reasonable approach to me, but I haven't tried anything
much like it, so my opinion is, if you're very lucky, maybe worth what you
paid for it.
-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info



Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-06 Thread David Magda
On Wed, April 6, 2011 10:51, David Dyer-Bennet wrote:

 I'm a big fan of rsync, in cronjobs or wherever.  What it won't do is
 properly preserve ZFS ACLs, and ZFS snapshots, though.  I moved from using
 rsync to using zfs send/receive for my backup scheme at home, and had
 considerable trouble getting that all working (using incremental
 send/receive when there are dozens of snapshots new since last time).  But
 I did eventually get up to recent enough code that it's working reliably
 now.

You may be interested in these scripts:

http://www.freshports.org/sysutils/zfs-replicate/
http://www.freshports.org/sysutils/zxfer/

Not sure how FreeBSD-specific these are, but one was originally written
for (Open)Solaris AFAICT.




Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-06 Thread Paul Kraus
On Wed, Apr 6, 2011 at 10:51 AM, David Dyer-Bennet d...@dd-b.net wrote:

 On Tue, April 5, 2011 14:38, Joe Auty wrote:

 Also, more generally, is ZFS send/receive mature enough that when you do
 data migrations you don't stress about this? Piece of cake? The
 difficulty of this whole undertaking will influence my decision and the
 whole timing of all of this.

 A full send / receive has been reliable for a long time.  With a real
 (large) data set, it's often a long run.  It's often done over a network,
 and any network outage can break the run, and at that point you start
 over, which can be annoying.  If the servers themselves can't stay up for
 10 or 20 hours you presumably aren't ready to put them into production
 anyway :-).

At my employer we have about 20TB of data in one city and a zfs
replicated copy of it in another city. The data is spread out over 15
pools and over 200 datasets. The initial full replication of the
larger datasets took days, and the largest (3 TB) took close to two
weeks. The incremental send/recv sessions are much
quicker, based on how much data has changed, but we run the
replication script every 4 hours and it usually completes before the
next scheduled run. Once we got past a few bugs in both my script and
the older zfs code (we are at zpool 22 and zfs 4 right now, we started
all this at zpool 10) the replications have been flawless.
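The core of such a replication script can be quite small; this is a
simplified sketch only (host, pool, and snapshot naming are
hypothetical, and a real script needs locking and error handling):

```shell
#!/bin/sh
# Minimal periodic replication sketch: take a recursive snapshot, then
# send everything between the previous replication snapshot and the new one.
POOL=tank
REMOTE=backuphost
PREV=$(zfs list -H -t snapshot -o name -s creation | grep "^$POOL@repl-" | tail -1)
NOW=$POOL@repl-$(date +%Y%m%d%H%M)
zfs snapshot -r "$NOW"
zfs send -R -I "$PREV" "$NOW" | ssh $REMOTE zfs receive -F -d backup
```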

 I'm also thinking that a ZFS VM guest might be a nice way to maintain a
 remote backup of this data, if I can install the VM image on a
 drive/partition large enough to house my data. This seems like it would
 be a little less taxing than rsync cronjobs?

 I'm a big fan of rsync, in cronjobs or wherever.  What it won't do is
 properly preserve ZFS ACLs, and ZFS snapshots, though.  I moved from using
 rsync to using zfs send/receive for my backup scheme at home, and had
 considerable trouble getting that all working (using incremental
 send/receive when there are dozens of snapshots new since last time).  But
 I did eventually get up to recent enough code that it's working reliably
 now.

We went with zfs send/recv over rsync for two big reasons, an
incremental zfs send is much, much faster than an rsync if you have
lots of files (our 20TB of data consists of 200 million files), and we
are leveraging zfs ACLs and need them preserved on the copy.

I have not tried zfs on a VM guest.

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
- Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
- Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
- Technical Advisor, RPI Players


Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-06 Thread Brandon High
On Tue, Apr 5, 2011 at 12:38 PM, Joe Auty j...@netmusician.org wrote:

 How about getting a little more crazy... What if this entire server
 temporarily hosting this data was a VM guest running ZFS? I don't foresee
 this being a problem either, but with so


The only thing to watch out for is to make sure that the receiving datasets
aren't a higher version than the zfs version that you'll be using on the
replacement server. Because you can't downgrade a dataset, using snv_151a
and planning to send to Nexenta as a final step will trip you up unless you
explicitly create them with a lower version.

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-06 Thread Paul Kraus
On Wed, Apr 6, 2011 at 1:14 PM, Brandon High bh...@freaks.com wrote:

 The only thing to watch out for is to make sure that the receiving datasets
 aren't a higher version than the zfs version that you'll be using on the
 replacement server. Because you can't downgrade a dataset, using snv_151a
 and planning to send to Nexenta as a final step will trip you up unless you
 explicitly create them with a lower version.

    I thought I saw that with zpool 10 (or was it 15) the zfs send
format had been committed and you *could* send/recv between different
versions of zpool/zfs. From the Solaris 10U9 (zpool 22) manpage for zfs:

The format of the stream is committed. You will be able
to receive your streams on future versions of ZFS.

-or- does this just mean upward compatibility ? In other words I can
send from pool 15 to pool 22 but not the other way around.

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
- Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
- Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
- Technical Advisor, RPI Players


Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-06 Thread Lori Alt

 On 04/ 6/11 11:42 AM, Paul Kraus wrote:

On Wed, Apr 6, 2011 at 1:14 PM, Brandon High <bh...@freaks.com> wrote:


The only thing to watch out for is to make sure that the receiving datasets
aren't a higher version than the zfs version that you'll be using on the
replacement server. Because you can't downgrade a dataset, using snv_151a
and planning to send to Nexenta as a final step will trip you up unless you
explicitly create them with a lower version.

 I thought I saw that with zpool 10 (or was it 15) the zfs send
format had been committed and you *could* send/recv between different
version of zpool/zfs. From Solaris 10U9 with zpool 22 manpage for zfs:

The format of the stream is committed. You will be able
to receive your streams on future versions of ZFS.

correct.


-or- does this just mean upward compatibility ? In other words I can
send from pool 15 to pool 22 but not the other way around.

It does mean upward compatibility only, but I believe that it's the 
dataset version that matters, not the pool version, and the dataset 
version has not changed as often as the pool version:


root@v40z-brm-02:/home/lalt/ztest# zfs get version rpool/export/home
NAME   PROPERTY  VALUESOURCE
rpool/export/home  version   5-
root@v40z-brm-02:/home/lalt/ztest# zpool get version rpool
NAME   PROPERTY  VALUESOURCE
rpool  version   32   default

(someone still on the zfs team please correct me if that's wrong.)

Lori







Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-06 Thread Brandon High
On Wed, Apr 6, 2011 at 10:42 AM, Paul Kraus pk1...@gmail.com wrote:
    I thought I saw that with zpool 10 (or was it 15) the zfs send
 format had been committed and you *could* send/recv between different
 version of zpool/zfs. From Solaris 10U9 with zpool 22 manpage for zfs:

There is still a problem if the dataset version is too high. I
*believe* that a 'zfs send -R' should send the zfs version, and that
zfs receive will create any new datasets using that version. (I have a
received dataset here that's zfs v 4, whereas everything else in the
pool is v5.) As long as you don't do a zfs upgrade after that point,
you should be fine.

It's probably a good idea to check that the received versions are the
same as the source before doing a destroy though. ;-)
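A quick way to do that check (pool names hypothetical):

```shell
# On the source machine: dataset versions, recursively
zfs get -r -o name,value version tank

# On the receiving machine, before destroying anything on the source
zfs get -r -o name,value version backup/tank
```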

One other thing that I forgot to mention in my last mail too: If
you're receiving into a VM, make sure that the VM can manage
redundancy on its zfs storage, and not just multiple vdsk on the same
host disk / lun. Either give it access to the raw devices, or use
iSCSI, or create your vdsk on different luns and raidz them, etc.
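For example, inside the guest, given raw devices (or separate LUNs)
passed through, letting ZFS handle the redundancy itself might look
like this (device names hypothetical):

```shell
# Build the pool from three devices the guest sees individually,
# so ZFS inside the VM can detect and repair errors on its own
zpool create tank raidz c1t1d0 c1t2d0 c1t3d0

# Verify layout and health
zpool status tank
```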

-B

-- 
Brandon High : bh...@freaks.com